Rationality: From AI to Zombies is an ebook collecting six books' worth of essays on the science and philosophy of human rationality. It's one of the best places to start for people who want to better understand topics that crop up on Less Wrong, such as cognitive bias, the map-territory distinction, meta-ethics, and existential risk.
The six books are:

- Map and Territory
- How to Actually Change Your Mind
- The Machine in the Ghost
- Mere Reality
- Mere Goodness
- Becoming Stronger

The ebook can be downloaded on a "pay-what-you-want" basis from intelligence.org. Its six books in turn break down into twenty-six sections.
The most important method that Less Wrong can offer you is contained in the sequences below.

Major Sequences are long sequences that have been completed and organized into a guide:

- Mysterious Answers to Mysterious Questions — How to see through the many disguises of answers or beliefs or statements that don't answer or say or mean anything.
- A Human's Guide to Words — A series on the use and abuse of words; why you often can't define a word any way you like; how human brains seem to process definitions. First introduces the Mind Projection Fallacy and the concept of how an algorithm feels from inside.
- How to Actually Change Your Mind — A mega-sequence scattered over almost all of Less Wrong on the ultra-high-level penultimate technique of rationality: actually updating on the evidence. Organized into eight subsequences.
- Reductionism — The second core sequence of Less Wrong. How to take reality apart into pieces... and live in that universe, where we have always lived, without feeling disappointed about the fact that complicated things are made of simpler things.
- The Quantum Physics Sequence — A non-mysterious introduction to quantum mechanics, intended to be accessible to anyone who can grok algebra and complex numbers. Cleaning up the old confusion about QM is used to introduce basic issues in rationality (such as the technical version of Occam's Razor), epistemology, reductionism, naturalism, and philosophy of science. Not dispensable reading, even though the exact reasons for the digression are hard to explain in advance of reading.
- The Metaethics Sequence — What words like "right" and "should" mean; how to integrate moral concepts into a naturalistic universe. The dependencies on this sequence may not be fully organized, and the post list does not have summaries. Yudkowsky considers this one of his less successful attempts at explanation.
- The Fun Theory Sequence — A concrete theory of transhuman values. How much fun is there in the universe; will we ever run out of fun; are we having fun yet; could we be having more fun?
- The Craft and the Community — The final sequence of Eliezer Yudkowsky's two-year-long string of daily posts to Less Wrong, on improving the art of rationality and building communities of rationalists.

Minor Sequences are smaller collections of posts, usually parts of major sequences which depend on some-but-not-all of the points introduced:

- Map and Territory — A collection of introductory posts dealing with the fundamentals of rationality: the difference between the map and the territory, Bayes's Theorem and the nature of evidence, why anyone should care about truth, minds as reflective cognitive engines...
- Seeing with Fresh Eyes — Some notes on the incredibly difficult feat of actually getting your brain to think about something (a key step in actually changing your mind). Whenever someone exhorts you to "think outside the box", they usually, for your convenience, point out exactly where "outside the box" is located. Isn't it funny how nonconformists all dress the same... Subsequence of How to Actually Change Your Mind.
- Politics is the Mind-Killer — Some of the various ways that politics damages our sanity, including, of course, making it harder to change our minds on political issues. Subsequence of How to Actually Change Your Mind.
- Death Spirals and the Cult Attractor — Affective death spirals are positive feedback loops caused by the halo effect: positive characteristics perceptually correlate, so the more nice things we say about X, the more additional nice things we're likely to believe about X. Cultishness is an empirical attractor in human groups: roughly an affective death spiral, plus peer pressure and outcasting behavior, plus (often) defensiveness around something believed to be un-improvable. Yet another subsequence of How to Actually Change Your Mind.
- Joy in the Merely Real — If dragons were common, and you could look at one in the zoo, but zebras were a rare legendary creature that had finally been decided to be mythical, then there's a certain sort of person who would ignore dragons, who would never bother to look at dragons, and chase after rumors of zebras. The grass is always greener on the other side of reality. Which is rather setting ourselves up for eternal disappointment, eh? If we cannot take joy in the merely real, our lives shall be empty indeed. Subsequence of Reductionism.
- Zombies — On the putative "possibility" of beings who are just like us in every sense, but not conscious, that is, lacking inner subjective experience. Subsequence of Reductionism.
- The Simple Math of Evolution — Learning the very basic math of evolutionary biology costs relatively little if you understand algebra, but gives you a surprisingly different perspective from what you'll find in strictly nonmathematical texts.
- Challenging the Difficult — How to do things that are difficult or "impossible".
- Yudkowsky's Coming of Age — How Yudkowsky made epic errors of reasoning as a teenage "rationalist" and recovered from them starting at around age 23, the period that he refers to as his Bayesian Enlightenment.

The original sequences were written by Eliezer Yudkowsky with the goal of creating a book on rationality. MIRI has since collated and edited the sequences into Rationality: From AI to Zombies. If you are new to Less Wrong, this book is the best place to start.
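The "actually updating on the evidence" that these sequences keep returning to is Bayes's Theorem, the same theorem mentioned alongside "the nature of evidence" in the Map and Territory description. For readers who want the formula being alluded to (a standard statement, not text from the sequences themselves):

```latex
% Posterior belief in hypothesis H after observing evidence E:
% prior P(H), likelihood P(E | H), and marginal probability of the evidence P(E).
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Informally: how much seeing E should shift your belief in H depends on how much better H predicts E than the alternatives do.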
The following collections of essays come from the original sequences, an earlier version of much of the material from Rationality: From AI to Zombies:

- "Value Theory", discussing the apparent "arbitrariness" of human morality.
- A discussion of the complexity of human value, and what the universe might look like if everything were much, much better. Fun theory is the optimistic, far-future-oriented part of value theory, asking: how much fun is there in the universe; will we ever run out of fun; are we having fun yet; could we be having more fun?
- The implications of quantum physics for our concepts of personal identity and time.

Other collections from the same time period (2006-2009) include:

- A blog conversation between Eliezer Yudkowsky and Robin Hanson on the topic of intelligence explosion and how concerned we should be about superintelligent AI.

Yudkowsky has also written a more recent sequence, and sequences of essays have been written by Scott Alexander, Luke Muehlhauser, Anna Salamon, Alicorn, and Kaj Sotala.

Benito's Guide aims to systematically fill the reader in on the most important ideas discussed on LessWrong (not just in the sequences). It also begins with a series of videos, which are a friendly introduction, and useful if you enjoy talks and interviews.

Thinking and Deciding by Jonathan Baron and Good and Real by Gary Drescher have been mentioned as books that overlap significantly with the sequences. More about how the sequences fit in with work done by others is available on Less Wrong.