Eliezer Yudkowsky is a research fellow of the Machine Intelligence Research Institute (formerly the Singularity Institute for Artificial Intelligence), which he co-founded in 2001. He is mainly concerned with the obstacles to, and the importance of, developing a Friendly AI, such as a reflective decision theory that would lay a foundation for describing fully recursive self-modifying agents that retain stable preferences while rewriting their source code. He also co-founded Less Wrong, writing the Sequences, long series of posts dealing with epistemology, AGI, metaethics, rationality, and so on.
He has published several articles, including "Cognitive Biases Potentially Affecting Judgment of Global Risks" (2008), "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (2008), "Creating Friendly AI" (2001), "Levels of Organization in General Intelligence" (2002), "Coherent Extrapolated Volition" (2004), "Timeless Decision Theory" (2010), and "Complex Value Systems are Required to Realize Valuable Futures" (2011).
User page and posts on Less Wrong