References & Resources for LessWrong

A list of references and resources for LW

Updated: 2011-05-24

  • F = Free
  • E = Easy (adequate for a low educational background)
  • M = Memetic Hazard (controversial ideas or works of fiction)


Do not flinch: most of LessWrong can be read and understood by people whose formal education ended before secondary school. (And Khan Academy followed by BetterExplained, plus the help of Google and Wikipedia, ought to be enough to let anyone read anything directed at the scientifically literate.) Most of these references aren't prerequisites, and only a small fraction are pertinent to any particular post on LessWrong. Do not be intimidated; if all this sounds too long, just go ahead and start reading the Sequences. They are much easier to understand than this list makes them look.

Nevertheless, as it says in the Twelve Virtues of Rationality, scholarship is a virtue, and in particular:

It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory.


This list is hosted on LessWrong, a community blog devoted to refining the art of human rationality - the art of thinking. If you follow the links below you'll learn more about this community. It is one of the most important resources you'll ever come across if your aim is to get what you want, to win. It shows you that there is more to most things than meets the eye, but more often than not much less than you think. It shows you that even smart people can be completely wrong, but that most people are not even wrong. It teaches you to be careful in what you emit and skeptical of what you receive. It doesn't tell you what is right; it teaches you how to think and how to become less wrong. And doing so is in your own self-interest, because it helps you attain your goals and achieve what you want.


Why read Less Wrong?

A few articles exemplifying in detail what you can expect from reading Less Wrong, why it is important, what you can learn, and how it can help you.

Artificial Intelligence

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. — I. J. Good, "Speculations Concerning the First Ultraintelligent Machine"


Friendly AI

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. — Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk

Machine Learning

Not essential, but a valuable addition for anyone who is more than superficially interested in AI and machine learning.

The Technological Singularity

The term “Singularity” had a much narrower meaning back when the Singularity Institute was founded. Since then the term has acquired all sorts of unsavory connotations. — Eliezer Yudkowsky

Heuristics and Biases

One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision. — Bertrand Russell

Ignorance more frequently begets confidence than does knowledge. — Charles Darwin

The heuristics and biases program in cognitive psychology tries to work backward from biases (experimentally reproducible human errors) to heuristics (the underlying mechanisms at work in the brain).


Learning Mathematics

Here's a phenomenon I was surprised to find: you'll go to talks, and hear various words, whose definitions you're not so sure about. At some point you'll be able to make a sentence using those words; you won't know what the words mean, but you'll know the sentence is correct. You'll also be able to ask a question using those words. You still won't know what the words mean, but you'll know the question is interesting, and you'll want to know the answer. Then later on, you'll learn what the words mean more precisely, and your sense of how they fit together will make that learning much easier. The reason for this phenomenon is that mathematics is so rich and infinite that it is impossible to learn it systematically, and if you wait to master one topic before moving on to the next, you'll never get anywhere. Instead, you'll have tendrils of knowledge extending far from your comfort zone. Then you can later backfill from these tendrils, and extend your comfort zone; this is much easier to do than learning "forwards". (Caution: this backfilling is necessary. There can be a temptation to learn lots of fancy words and to use them in fancy sentences without being able to say precisely what you mean. You should feel free to do that, but you should always feel a pang of guilt when you do.) — Ravi Vakil




Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind. — Eliezer Yudkowsky

Math is fundamental, not just for LessWrong. Bayes' Theorem in particular is essential to understanding the reasoning underlying most of the writings on LW.

Bayes' theorem
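The theorem itself is short enough to sketch in a few lines of code. Here is a minimal illustration in Python, worked on the classic medical-test example; the prevalence, sensitivity, and false-positive numbers below are illustrative assumptions, not figures taken from any of the linked resources:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
# All numbers below are illustrative assumptions only.

def posterior(prior, true_positive_rate, false_positive_rate):
    """P(hypothesis | positive test) via Bayes' theorem."""
    # P(E) = P(E|H)P(H) + P(E|~H)P(~H), by the law of total probability
    evidence = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return true_positive_rate * prior / evidence

# A disease with 1% prevalence; a test that catches 80% of real cases
# but also fires on 9.6% of healthy people:
print(round(posterior(0.01, 0.80, 0.096), 3))  # 0.078
```

Even with a positive test, the probability of disease stays below 8% — the base rate dominates, which is exactly the kind of counterintuitive result the heuristics-and-biases literature shows people getting wrong.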



All the limitative theorems of metamathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death: it guarantees that you can never represent yourself totally. Gödel’s Incompleteness Theorem, Church’s Undecidability Theorem, Turing’s Halting Theorem, Tarski’s Truth Theorem — all have the flavour of some ancient fairy tale which warns you that “To seek self-knowledge is to embark on a journey which … will always be incomplete, cannot be charted on any map, will never halt, cannot be described.” — Douglas Hofstadter 1979


Decision theory

It is precisely the notion that Nature does not care about our algorithm, which frees us up to pursue the winning Way - without attachment to any particular ritual of cognition, apart from our belief that it wins. Every rule is up for grabs, except the rule of winning. — Eliezer Yudkowsky

Remember that any heuristic is bound to certain circumstances. If you want X from agent Y, and the rule is that Y only gives you X if you are a devoted irrationalist, then the winning move is to be a devoted irrationalist. Under certain circumstances what is irrational may be rational, and what is rational may be irrational. Paul Feyerabend said: "All methodologies have their limitations and the only ‘rule’ that survives is ‘anything goes’."

Game Theory

Game theory is the study of the ways in which strategic interactions among economic agents produce outcomes with respect to the preferences (or utilities) of those agents, where the outcomes in question might have been intended by none of the agents. — Stanford Encyclopedia of Philosophy
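The phrase "outcomes intended by none of the agents" is captured by the Prisoner's Dilemma, the standard first example of game theory. Below is a minimal sketch in Python; the payoff numbers are the usual textbook values, chosen for illustration rather than taken from the quoted source:

```python
# Prisoner's Dilemma: defection ("D") is each player's dominant strategy,
# yet mutual defection is worse for both than mutual cooperation ("C") --
# an outcome intended by neither agent. Payoffs are standard textbook
# numbers (higher = better), used purely for illustration.

PAYOFF = {  # (row_move, col_move) -> (row_payoff, col_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    """The move maximizing the row player's payoff against a fixed opponent."""
    return max("CD", key=lambda m: PAYOFF[(m, opponent_move)][0])

# Defection is the best response to either move, so (D, D) is the unique
# Nash equilibrium, even though (C, C) pays both players more.
print(best_response("C"), best_response("D"))  # D D
```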


With Release 33-9117, the SEC is considering substitution of Python or another programming language for legal English as a basis for some of its regulations. — Will Wall Street require Python?

Programming knowledge is not mandatory for LessWrong, but you should be able to interpret the most basic pseudo code, as you will come across various snippets of code in discussions and top-level posts outside of the main sequences.


Python is a general-purpose high-level dynamic programming language.
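As a taste of the kind of snippet that turns up in LW discussions, here is a basic expected-utility calculation written in Python; the toy decision problem and all of its numbers are made up purely for illustration:

```python
# Expected utility of an action: sum over possible outcomes of
# P(outcome) * utility(outcome). The decision problem and all numbers
# below are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Should I take the umbrella? Assume a 30% chance of rain either way.
take_umbrella = [(0.3, 5), (0.7, 4)]    # dry either way, mild hassle
leave_it_home = [(0.3, -10), (0.7, 6)]  # soaked if it rains

print(round(expected_utility(take_umbrella), 2))   # 4.3
print(round(expected_utility(leave_it_home), 2))   # 1.2
```

If you can follow this, you can follow most of the pseudo code that appears in LW comment threads.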


Haskell is a standardized, general-purpose purely functional programming language, with non-strict semantics and strong static typing.


Computer science

The introduction of suitable abstractions is our only mental aid to organize and master complexity. — Edsger W. Dijkstra

One of the fundamental premises of LessWrong is that a universal computing device can simulate every physical process, and that we should therefore be able to reverse engineer the human brain, as it is fundamentally computable. That is, intelligence and consciousness are substrate-neutral.
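The "universal computing device" here is the Turing machine of the Church–Turing thesis. A minimal sketch of the idea in Python: one fixed interpreter loop can run any machine handed to it as data, which is the sense in which computation is independent of its substrate. The example machine (it merely flips bits) is an illustrative toy of my own, not something from the linked resources:

```python
# A minimal Turing machine interpreter. The interpreter itself is fixed;
# the machine it runs is just a transition table passed in as data.

def run(transitions, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape, indexed by position
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")  # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

flip = {  # toy machine: flip 0s and 1s until the first blank, then halt
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run(flip, "10110"))  # 01001
```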

(Algorithmic) Information Theory


A poet once said, "The whole universe is in a glass of wine." We will probably never know in what sense he meant that, for poets do not write to be understood. But it is true that if we look at a glass of wine closely enough we see the entire universe. — Richard Feynman


General relativity

You do not really understand something unless you can explain it to your grandmother. — Albert Einstein

Quantum physics

An electron is not a billiard ball, and it’s not a crest and trough moving through a pool of water. An electron is a mathematically different sort of entity, all the time and under all circumstances, and it has to be accepted on its own terms. The universe is not wavering between using particles and waves, unable to make up its mind. It’s only human intuitions about QM that swap back and forth. — Eliezer Yudkowsky

I am not going to tell you that quantum mechanics is weird, bizarre, confusing, or alien. QM is counterintuitive, but that is a problem with your intuitions, not a problem with quantum mechanics. Quantum mechanics has been around for billions of years before the Sun coalesced from interstellar hydrogen. Quantum mechanics was here before you were, and if you have a problem with that, you are the one who needs to change. QM sure won’t. There are no surprising facts, only models that are surprised by facts; and if a model is surprised by the facts, it is no credit to that model. — Eliezer Yudkowsky



(Evolution) is a general postulate to which all theories, all hypotheses, all systems must henceforward bow and which they must satisfy in order to be thinkable and true. Evolution is a light which illuminates all facts, a trajectory which all lines of thought must follow — this is what evolution is. — Pierre Teilhard de Chardin


There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination. — Daniel Dennett, Darwin's Dangerous Idea, 1995.
Philosophy is a battle against the bewitchment of our intelligence by means of language. — Wittgenstein


The Mind

Everything of beauty in the world has its ultimate origins in the human mind. Even a rainbow isn't beautiful in and of itself. — Eliezer Yudkowsky


Levels of epistemic accuracy.



General Education


Not essential, but a good preliminary to reading LessWrong and in some cases helpful for making valuable contributions in the comments. Many of the concepts in the following works are often mentioned on LessWrong or are the subject of frequent discussions.


Elaboration of miscellaneous terms, concepts and fields of knowledge you might come across in some of the more technical and advanced posts and comments on LessWrong. The following concepts are frequently discussed but not necessarily supported by the LessWrong community. Those concepts that are controversial are labeled M.


Relevant websites. News and otherwise. F

Fun & Fiction

The following are relevant works of fiction or playful treatments of fringe concepts. That is, do not take these works at face value.

Accompanying text: The Logical Fallacy of Generalization from Fictional Evidence

Memetic Hazard




A popular board game played and analysed by many people in the LessWrong and general AI crowd.


This list is a work in progress. I will keep updating and refining it.

If you've anything to add or correct (e.g. a broken link), please comment below and I'll update the list accordingly.

82 comments, sorted by magical algorithm

Would you mind if I edited this post in order to express more strongly that the vast majority of this reading is not required to keep up with the vast majority of LW posts?

I don't mind. You can edit it in any way you want.

The dependency graphs of Eliezer's posts are an often-overlooked resource. I don't see them linked anywhere on the wiki either.

Very nice. My only suggestions are to (1) add a biology section, for people who haven't quite grok'd how the brain is an organ, and (2) tweak the physics section so that it doesn't lead off with quantum physics, if necessary by making two physics sections. The notion that the world's fiddly bits behave according to mathematical laws is neither obvious nor self-explanatory, and starting to explain this notion by reference to casually observable phenomena (rocks, light, magnets, water) rather than deeply confusing and occasionally controversial phenomena (quarks, neutrinos) is a really good idea.

I added a neuroscience section now. Pretty empty so far. The first link is Neuroscience for Kids which I was given by a professional neuroscientist as a valuable and easy to understand resource. The second link is a list made by me on controversial ideas regarding the reverse engineering of the brain. It also includes a video by EY and Anders Sandberg.

I'll advance it with time and also add a category for biology in general.

Thanks, the physics section is now subdivided into 3 categories. I'll refine it according to your suggestion with time. I'll try to come up with a biology section too and ask a neuroscientist for some easy to understand resources on neuroscience.

Nice idea, thanks for taking the time to compile these resources!

A few thoughts:

  1. This would be easier to follow if the links in each section were ordered roughly from easiest to most challenging.

  2. The length of this list is going to intimidate some new readers. One could productively add to the LW conversation after understanding a small fraction of these references. You should make it clear that these aren't prerequisite.

  3. Some of the entries seem only tangentially related to LW (e.g. Haskell, Go).

  4. The "Key Concepts" section might be better near the beginning.

  5. The "Key Resources" do not seem to me to be key resources.

I'm in the process of trying to get another LW project started, but I've long thought that a "LessWrong Syllabus" (laid out in the style of a university degree planner) would be a good idea. This post seems to be a step in that direction.

It could list assumed prerequisites, recommended or core topics, advanced topics, plus suggested learning materials (books, online courses, etc.), and means of testing progress at each stage. [ETA: Links above are just examples]

Methods of testing might be controversial, but it would be straightforward to capture most of the topics, particularly at the beginner stages.

I should note that this is meant to help guide self-study of the theoretical mathy-sciencey aspects of LessWrongian rationality; I think that this format might be less well suited to the study of applied rationality.

  • "Key Resources" are now called Relevant Websites.
  • "Key Concepts" are now at the very top.
  • I added "Note: Don't be intimidated by the length of this list. Most of these references aren't prerequisites. Only a small fraction is necessary to follow most of the posts and discussions on LessWrong. Before giving up, just go ahead and read the Sequences; you'll see they're much easier to understand than this list makes them look. And if you have trouble understanding some concept, just ask in the comments or come back and see if you can find some explanation via this companion."

I know that some entries are only tangentially related to LW, but I wanted to compile a list that gives you as much background knowledge as possible to understand and participate on LW and integrate into the community. I believe that programming is an essential field of knowledge, and that Go is not just very popular with people interested in LW-related content but also one of the first hard AI problems people can learn about by simply playing a game.

About ordering it from easiest to most challenging: I can't do that. First of all it would likely mess up the categories, and it is not clear to me what is easy and what is not. This list is the culmination of feedback I received from asking, "What should I learn?" That is, although I'm reasonably sure that all of the items are of high quality, as they were recommended by highly educated people who have read them, I haven't actually read most of them myself yet.

A list capturing all background knowledge you might ever need for LW

No it isn't.

Want to note: I noticed the category "memetic hazard" and started immediately skimming the page to find everything labeled as such. Something is wrong with my reasoning here—

It wasn't the worst impulse to follow after all, since the category means something like controversial or fictional. Except... "memetic hazard" is a meaningful warning. I would prefer that it keep its value as a signal.

Excellent free lecture series on quantum mechanics from Oxford's undergraduate course. Consists of 24 one-hour lectures. Course material, solutions, and even the PDF of the textbook are free.

In the linguistics section, Trask's book should be marked as "easy." It's short and very readable, and assumes almost no background knowledge. (But despite that, provides an informative and well-balanced introduction to the field.)

Edit: Also, a draft of Jaynes's book is available for free online, but the list contains only an Amazon link.

Marks with similar-looking letters (F and E) in light colors look bad on a white background (hard to notice). Use contrasting darker colors (if at all) and more distinct text labels, maybe also bolded.

I changed the colors. I will think about some other form of labeling.

I suggest small graphical icons (~16x16) with distinctive colors and silhouettes. Maybe:

  • A gray X over a black dollar sign for Free
  • A green exclamation mark for Easy
  • A yellow yield sign with an exclamation mark for Memetic Hazard (like your lightbulb one, but probably a lightbulb shape would be too small to make out at reasonable icon size).

It would also be good to arrange the icons and entries in a table, so that a given icon always appears in the same column.

Absolutely the best resource for learning computer programming that I've come across. Highly recommended for beginners.

I only wish that this post had been in a more visible place, so I could have found it before now. This seems like it will be very useful. Thank you for compiling.

The Leading Go Software (PC, Mac, iPhone, iPad)

There is as yet no Mac version, just a placeholder page.

Thanks, fixed. I also added the Wiki on Go software, with lots of links to lists of free Go software of all kinds.

The first 3 chapters of Jaynes' "Probability Theory: The Logic of Science" are available at:

Also, here's a copy of his unpublished book (pdf link at bottom):

Are you sure Diaspora should be marked Easy?

I tried to get a fairly intelligent friend who's interested in science (generally, not necessarily any specific domains covered in the book) to read it, and she gave up within about half an hour.

I (a layman, but well-acquainted with the set of singularitarian memes that the book draws from) found that trying to visualize the physics made my head hurt, even with the accompanying illustrative java applets at the author's website.

It also might be valuable to link to those (there are probably some for Permutation City as well, but I haven't checked, since I haven't been able to track down a copy of it):

In particular, the book starts with a description of how new minds are formed in the polis which is very abstract and technical. I wouldn't be surprised if people who could enjoy the rest of the book bounce off the beginning.

You are right, the more technical aspects of the book are really hard. I took it as a whole, for how it portrays a society of uploads. I'm going to change it anyway, thanks.

ETA: I also added the links to Egan's site as accompanying text.

Not sure how much it fits here, but is a reasonable intro + reference collection on some mental blindspots

Permutation City (the infamous science fiction novel by Greg Egan)

Why "infamous"?

Whoops! Yes, that's clearly the wrong word. Thank you. In my defense, I never learned English formally but basically taught it to myself over time :-)

I dunno, Permutation City is pretty infamous in my books because it presents disquieting ideas I don't know how to disprove. (Kind of like Boltzmann Brains or the Eternal Return.)

Thank you, I was about to comment on this; you've given me a needed data point.

Permutation City is the only work of fiction I've enjoyed that I do not go around recommending, because I'm wary that to a reader without the requisite specialized background to separate the parts based on real science from the parts that are pure fiction, it might actually be something of a memetic hazard.

If you are going to recommend it, I would suggest accompanying the recommendation with a link to the antidote.

So your strategy is basically 'subjective anticipation is a useful but ultimately incoherent idea; Permutation City takes it to an absurdum'?

That's a good idea, but I don't think your antidote post is strong enough. Subjective anticipation is a deeply-held belief, after all.

I agree, I think the antidote post is better than nothing, but I recommend it in addition to, not instead of, the memetic hazard label.

I haven't added the antidote post as accompanying reading, as I have yet to read it, but rather the 'The Logical Fallacy of Generalization from Fictional Evidence' post by EY. Reload and see the fiction section. Not sure, maybe a bit drastic, but at least it is obvious now.

I don't think that Permutation City being fiction matters (if I understand your comment).

The nonfiction ideas stand on their own, though they were presented in (somewhat didactic) fiction: that computation can be sliced up arbitrarily in space and time, that it can be 'instantiated' on almost arbitrary arrangements of matter, and that this implies the computation of our consciousness can 'jump' from one correct random arrangement of matter (like space dust) to another, lasting forever, hooking in something like quantum suicide so that it's even likely...

If it were simply pointing out that the fiction presupposes all sorts of arbitrary and unlikely hidden mechanisms like Skynet wanting to exterminate humanity, Permutation City would not be a problem. But it shows its work, and we LWers frequently accept the premises.

However, the book could also mislead people into believing those arbitrary and unlikely elements if it is linked on a list of resources for LessWrong. That's why I think a drastic warning is appropriate. Science fiction can give you a lot of ideas but can also seduce you into believing things that might be dangerous, like that there is no risk from AI.

I introduced a new label M for Memetic Hazard and added a warning sign, including an accompanying text, to the fiction section.

And I see a number of other things that merited the memetic hazard label also now have it, good idea. I'd suggest that it also be added to the current links in the artificial intelligence section, and to the link on quantum suicide.

Maybe also add a link to Eliezer's Permutation City crossover story, now that we have the requisite memetic hazard label for such a link?

I thought quantum suicide is not controversial since MWI is obviously correct? And the AI section? Well, the list is supposed to reflect the opinions held in the LW community, especially by EY and the SIAI. I'm trying my best to do so, and by that standard, how controversial is AI going FOOM etc.?

Eliezer's Permutation City crossover story? It has been on the list for some time, if you are talking about 'The Finale of the Ultimate Meta Mega Crossover'.

I thought quantum suicide is not controversial since MWI is obviously correct?

I agree MWI is solid, I'm not suggesting that be flagged. But it does not in any way imply quantum suicide; the latter is somewhere between fringe and crackpot, and a proven memetic hazard with at least one recorded death to its credit.

And the AI section? Well, the list is supposed to reflect the opinions held in the LW community, especially by EY and the SIAI. I'm trying my best to do so, and by that standard, how controversial is AI going FOOM etc.?

Well, AI going FOOM etc. is again somewhere in the area between fringe and crackpot, as judged by people who actually know about the subject. If the list were specifically supposed to represent the opinions of the SIAI, then it would belong on the SIAI website, not on LW.

Eliezer's Permutation City crossover story? It has been on the list for some time, if you are talking about 'The Finale of the Ultimate Meta Mega Crossover'.

So it is, cool.

[quantum suicide is] a proven memetic hazard with at least one recorded death to its credit.

I hadn't heard of this -- can you give more details?

Not even the most optimistic interpretations of quantum immortality/quantum suicide think it can bring other people back from the dead. Does it count as a memetic hazard if only a very mistaken version of it is hazardous?

Why not? If you kill yourself in any branch that lacks the structure that is your father, then the only copies of you that will be alive are those that don't care or those that live in the unlikely universes where your father is alive (even if it means life extension breakthroughs or that he applied for cryonics.)

ETA: I guess you don't need life extension. After all, it is physically possible to grow 1000 years old, if unlikely. Have I misunderstood something here?

The way I understand quantum suicide, it's supposed to force your future survival into the relatively scarce branches where an event goes the way you want it by making it dependent on that event. Killing yourself after living in the branch where that event did not go the way you wanted at some time in the past is just ordinary suicide; although there's certainly room for a new category along the lines of "counterfactual quantum suicide," or something.

edit: Although, to the extent that counterfactual quantum suicide would only occur to someone who'd heard of traditional, orthodox quantum suicide, the latter would be a memetic hazard.

What difference does it make if you kill yourself before event X, event X kills you or if you commit suicide after event X? In all cases the branches in which event X does not take place are selected for. That is, if agent Y always commits suicide if event X or is killed by event X then the only branches to include Y are those in which X does not happen.

The difference, to me, is how you define the difference between quantum suicide and classical suicide. Everett's daughter killing herself in all universes where she outlived him only sounds like quantum suicide to me if her death was linked to his in a mechanical and immediate manner; otherwise, with her suffering in the non-preferred universe for a while, it just sounds like plain old suicide.

The difference between quantum and classical seems to be distinct from that between painless and painful.

Why not? If you kill yourself in any branch that lacks the structure that is your father, then the only copies of you that will be alive are those that don't care or those that live in the unlikely universes where your father is alive (even if it means life extension breakthroughs or that he applied for cryonics.)

No, that's not what would happen. Rather, being faithful to your commitment, you would go on a practically infinite suicide spree (*) searching for your father. A long and melancholic story with a surprise happy ending.

(*) I googled it and was sad to see that the phrase "suicide spree" is already taken for a different concept.

I'm not sure where you think we disagree? Personally, if I were going to take MWI and quantum suicide absolutely seriously, I'd still make the best out of every branch. All you do by quantum suicide is cancel out the copies you deem to have unworthy experiences. But why would I do that if it doesn't change anything about the positive branches?

My reply wasn't meant to be taken seriously, and I don't take the idea of quantum suicide seriously. But to answer your question, here is the disagreement, or really, me nitpicking for the sake of comedic effect:

In your scenario, most of the copies will NOT be in universes with your father. Most of them will be in the process of committing suicide. This is because -- at least the way I interpreted your wording -- your scenario differs from the classic quantum lottery scenario in that here it is you who evaluates whether you are in the right universe or not.

Yes, we agree. So how seriously do you take MWI? I'm not sure I understand how someone could take MWI seriously but not quantum suicide. I haven't read the sequence on it yet, though.

Easy - if you believe in MWI, but your utility function assigns value to the amount of measure you exist in, then you don't believe in quantum suicide. This is the most rational position, IMO.

I am absolutely uninterested in the amount of measure I exist in, per se. (*) I am interested in the emotional pain a quantum suicide would inflict on measure 0.9999999 of my friends and relatives.

(*) If God builds a perfect copy of the whole universe, this will not increase my utility the slightest.

I am absolutely uninterested in the amount of measure I exist in, per se. (*) I am interested in the emotional pain a quantum suicide would inflict on measure 0.9999999 of my friends and relatives.

This is a potentially coherent value system, but I note that it contains a distinct hint of arbitrariness. You could, technically, like life, dislike death, like happy relatives, and care about everything in the branches in which you live, but only care about everything except yourself in branches in which you die. But that seems likely to be just a patch job on the intuitions.

Are you sure about this? Isn't my preference simply a result of a value system that values the happiness of living beings in every branch? (Possibly weighted with how similar / emotionally close they are to me, but that's not really necessary.) If I kill myself in every branch except in those where I win the lottery, then there will be many branches with (N-1) sad relatives, and a few branches with 1 happy me and (N-1) neutral relatives. So I don't do that. Is there really anything arbitrary about this?

The part that surprises me is that you do care about all the branches (relatives, etc) yet in those branches you don't care if you die. You'll note that I assumed you preferred death to life? In those worlds you seem to have a preference for happy vs sad relatives but have somehow (and here is where I would say 'arbitrarily') decided you don't care whether you live or die.

Say, for example, that you have a moderate aversion to having one of your little toes broken. You set up a quantum lottery where the 'lose' branches have your little toe broken instead of you being killed. Does that seem better or worse to you? I mean, there is suffering of someone near and dear to you, so I assume that seems bad to you. Yet it seems to me that if you care about the branch at all then you would prefer 'sore toe' to 'death' when you lose!

You are right that my proposed value system does not incorporate survival instinct, and this makes it sound weird, as survival instinct is an important part of every actual human value system, including mine. Your broken toe example shows this nicely.

So why did I get rid of survival instinct? Because you argued that what I wrote "contains a distinct hint of arbitrariness". I think it doesn't. I care for everyone's preferences, and a dead body has no preferences. And to decide against quantum suicide, that is all that is needed. In place of survival instinct we basically have the disincentive of grieving relatives.

When we explicitly add survival instinct, the ingredient you rightfully miss, then yes, the result will indeed become somewhat messy. But the reason for this mess is the added ingredient itself, not the other, clean part, nor the interrelation with that other part. I just don't think survival instinct can be turned into a coherent, formalized value. So the bug is not in my proposed idealized value system, the bug is in my actual messy human value system.

This approach, by the way, affects my views on cryonics, too.

I think it doesn't. I care for everyone's preferences, and a dead body has no preferences. And to decide against quantum suicide, that is all that is needed. In place of survival instinct we basically have the disincentive of grieving relatives.

This is a handy way to rationalise against quantum suicide, until you consider quantum suicide on a global level. People who have been vaporised along with their entire planet have no preferences... Would you bite that bullet and commit quantum planetary suicide?

As I already wrote, the above is not my actual value system, but rather a streamlined version of it. My actual value system does incorporate survival instinct. You intend to show with quantum planetary suicide that the streamlined value system leads to nonsensical results. I don't really find the results nonsensical. In this sense, I would bite the bullet.

Actually, I wouldn't, but for a reason not directly related to our current discussion. I don't have too much faith in the literal truth of the MWI. I am quite confused about quantum mechanics, but I have a gut feeling that single-world is not totally out of the question, and not-every-world is quite likely. This is because as a compatibilist, I am willing to bite some bullets about free will most others will not bite. I believe that the full space-time continuum is very finely tuned in every direction (*), so it is totally plausible to me that some of those many worlds are simply locked from us by fine-tuning. There are already some crankish attempts in this direction under the name superdeterminism. I don't think these are successful so far, but I surely would not bet my whole planet against the possibility.

(*) This sentence might sound fuzzy or even pseudo-science. All I have is an analogy to make it more concrete: Our world is not a Gold Universe, but I am talking about the sort of fine-tuning found in a Gold Universe.

You intend to show with quantum planetary suicide that the streamlined value system leads to nonsensical results.

Not nonsensical, no. It would be not liking the idea of planetary suicide that would be nonsensical, given your other expressed preferences. I can even see a perverse logic behind your way of carving which parts of the universal wavefunction you care about, based on the kind of understanding you express of QM.

Just... if you are ever exposed to accessible quantum randomness then please stay away from anyone I care about. These values are, by my way of looking at things, exactly as insane as those of parents who kill their children and spouse before offing themselves as well. I'm not saying you are evil or anything. It's not like you are really going to act on any of this, so you fall under Mostly Harmless. But the step from merely killing yourself to evaluating it as preferable for other people to be dead too takes things from none of my business to a threat to human life.

Strange as it may seem, we are talking about the real world here!

wedrifid, please don't use me as a straw-man. I already told you that my actual value system does contain survival instinct, and I already told you why I omitted it here anyway. Here it is, spelled out even more clearly:

  1. You wanted a clean value system that decides against quantum suicide. (I use 'clean' as a synonym for nonarbitrary, low-complexity, aesthetically pleasing.) I proposed a clean value system that is already strong enough to decide against many forms of quantum suicide. You correctly point out that it is not immune against every form.

  2. Incorporating any version of survival instinct makes the value system immune to quantum suicide by definition. I claimed that any value system incorporating survival instinct is necessarily not clean, at least if it has to consistently deal with issues of quantum lottery, mind uploads and such. I don't have a problem with that, and I choose survival over cleanness. And don't worry about my children and spouse. I will spell it out very explicitly, just in case: I don't value the wishes of dead people, because they don't have any. I value the wishes of living people, most importantly their wish to stay alive.

You completely ignored the physics angle to concentrate on the ethics angle. I think the former is more interesting, and frankly, I am more interested in your clever insights there. I already mentioned that I don't have too much faith in MWI. Let me add some more detail to this. I believe that if you want to find out the real reason why quantum suicide is a bad idea, you will have to look at physics rather than values. My common sense tells me that if I put a (quantum or other) gun in my mouth right now, and pull the trigger many times, then the next thing I will feel is not that I am very lucky. Rather, I will not feel anything at all because I will be dead. I am quite sure about this instinct, so let us assume for a minute that it is indeed correct. This can mean two things. One possible conclusion is that MWI must be wrong. Another possible conclusion is that MWI is right but we make some error when we try to apply MWI to this situation. I give high probability to both of these possibilities, and I am very interested in any new insights.

Let me now summarize my position on quantum suicide: I endorse it

  • IF MWI is literally correct. (I don't believe so.)

  • IF the interface between MWI and consciousness works as our naive interpretation suggests. (I don't believe so.)

  • IF the quantum suicide is planetary, more exactly, if it affects a system that is value-wise isolated from the rest of the universe. (Very hard or impossible to achieve.)

  • IF survival instinct as a preference of others is taken into account, more concretely, if your mental image of me, the Mad Scientist with the Doomsday Machine, gets the consent of the whole population of the planet. (Very hard or impossible to achieve.)

wedrifid, please don't use me as a straw-man.

End of conversation. I did not read beyond that sentence.

I am sorry to hear this, and I don't really understand it.

Surely actually performing quantum suicide would be very stupid.

I get the impression that some people consider "take quantum suicide seriously" equivalent to "think doing it is a good idea". That makes not taking it seriously a good option.

I expected the link to be the Ultimate Meta Mega Crossover.

(disclaimer: haven't read)

Perhaps a link to lukeprog's The Best Textbooks on Every Subject.

Also, why no mention of the self-help aspects of LW?

So glad to see you are continuing to keep this updated!

Thanks, I'll try to invest more time in improving it. If you care, I can give you my password (if you are not already a mod) so that you can make additions or improvements?

A source for quotes (presumably checked for accuracy) which includes their contexts.

All the M labels could use explanations, but in particular, why is A Fire Upon the Deep controversial?

It is a work of fiction that contains bogus ideas which have been added to sidestep the problem of writing about a post-Singularity future (e.g. the Zones of Thought). Whereas a story like Three Worlds Collide does not deserve this labeling, since it explicitly mentions its deliberate shortcomings and explains that they have been introduced to enable the writer to highlight and dissolve certain issues. If you think that this labeling policy should be altered, please elaborate on your reasons.

All the M labels could use explanations...

That would be nice but might go beyond the scope of this list. Take for example CEV (Coherent Extrapolated Volition). It got labeled 'controversial' because there seem to be many people, even on Less Wrong, who take a critical look at it. The various objections may not necessarily be sound, but the idea is contested enough to warrant the label here. The label is there to indicate that the interested reader should search for a review of the idea if they are more than superficially interested.

Zed Shaw has come out with Learn Python the Hard Way, intended to teach Python to absolute beginners, which looks promising based on a quick browse.

He also wrote a blog post How To Write a Learn X the Hard Way, about the writing principles behind the book.

I appreciate the labels, which are new since the last time I saw a draft. I recommend adding a summary break.