The Most Important Thing You Learned

by Eliezer Yudkowsky · 1 min read · 27th Feb 2009 · 98 comments

13

Rationality A-Z (discussion and meta)
Personal Blog

My current plan does still call for me to write a rationality book - at some point, and despite all delays - which means I have to decide what goes in the book, and what doesn't.  Obviously the vast majority of my OB content can't go into the book, because there's so much of it.

So let me ask - what was the one thing you learned from my posts on Overcoming Bias, that stands out as most important in your mind?  If you like, you can also list your numbers 2 and 3, but it will be understood that any upvotes on the comment are just agreeing with the #1, not the others.  If it was striking enough that you remember the exact post where you "got it", include that information.  If you think the most important thing is for me to rewrite a post from Robin Hanson or another contributor, go ahead and say so.  To avoid recency effects, you might want to take a quick glance at this list of all my OB posts before naming anything from just the last month - on the other hand, if you can't remember it even after a year, then it's probably not the most important thing.

Please also distinguish this question from "What was the most frequently useful thing you learned, and how did you use it?" and "What one thing has to go into the book that would (actually) make you buy a copy of that book for someone else you know?"  I'll ask those on Saturday and Sunday.

PS:  Do please think of your answer before you read the others' comments, of course.


"The map is not the territory" has stuck in my mind as one of the over-arching principles of rationality. It reinforces the concept of self-doubt, implies one should work to make one's map conform more closely to the territory, and is invaluable when one believes one has hit a cognitive wall. There are no walls, just the ones drawn on your map.

The post "Mysterious Answers to Mysterious Questions" is my favorite post that dealt with this topic, though it has been reiterated (and rightly so) across a multitude of postings.

link: http://www.overcomingbias.com/2007/08/mysterious-answ.html

2 · crazypaki (12y): I second Tim's post. Mysterious Answers and the "map vs. territory" analogy have had a huge influence on my thinking.

"A rationalist should win". Very high-level meta-advice and almost impossible to directly apply, but it keeps me oriented.

1 · MichaelBishop (12y): I agree that, on average, improvements in rationality lead to more winning, but I'm not convinced that every improvement in rationality does. It seems possible that a non-trivial number make winning harder.

"Newcomb's Problem and Regret of Rationality" is one of my favorites. For all the excellent tools of rationality that stuck with me, this is the one that most globally encompassed Eliezer's general message: that rationality is about success, first and foremost, and if whatever you're doing isn't getting you the best outcome, then you're not being rational, even if you appear rational.

Your explanation / definition of intelligence as an optimization process. (Efficient Cross-Domain Optimization)

That was a major "aha" moment for me.

The most important thing I learned from Overcoming Bias was to stop viewing the human mind as a blank slate, ideally a blank slate, an approximation to a blank slate, or anything with properties even slightly resembling blankness or slateness. The rest is just commentary - admittedly very, very good commentary.

The posts I associate with this are everything on evolutionary psychology such as Godshatter (second most important thing I learned: study evolutionary psychology!), the free will series, the "ghost in the machine" and "ideal philosopher of perfect emptiness" series, and the Mind Projection Fallacy.

Taboo is very useful in discussion, I believe.

The most important thing I can recall is conservation of expectation. In particular, I'm thinking of Making Beliefs Pay Rent and Conservation of Expected Evidence. We need to see a greater commitment to deciding in advance which direction new evidence will shift our beliefs.
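Conservation of Expected Evidence is easy to verify with a few lines of arithmetic. A minimal sketch (the prior and likelihoods below are my own illustrative numbers, not from the post):

```python
# Conservation of Expected Evidence: before you look at the evidence,
# your probability-weighted average posterior must equal your prior.
prior = 0.3                # P(H)
p_e_given_h = 0.8          # P(E | H)
p_e_given_not_h = 0.4      # P(E | ~H)

p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h    # P(E)
posterior_if_e = prior * p_e_given_h / p_e                   # P(H | E)
posterior_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)   # P(H | ~E)

expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
print(round(expected_posterior, 10))  # equals the prior, 0.3
```

Whatever numbers you pick, the probability-weighted average of the two posteriors comes out equal to the prior, which is why you cannot decide in advance to be swayed in only one direction.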

Most frequently referenced concepts:

  1. Mind projection fallacy and "The map is not the territory."
  2. "The opposite of stupidity is not intelligence."

Engines of Cognition was the final thing I needed to assimilate the idea that nothing is free: intelligence does not magically allow you to do anything; it has costs and limitations, and obeys the second law of thermodynamics. Or rather, both obey the same underlying principle.

http://www.overcomingbias.com/2008/02/second-law.html

"Obviously the vast majority of my OB content can't go into the book, because there's so much of it."

I know this is not what you asked for, but I'd like to vote for a long book. I feel that the kind of people who will be interested by it (and readers of OB) probably won't be intimidated by the page count, and I know that I'd really like to have a polished paper copy of most of the OB material for future reference. The web just isn't quite the same.

In short: something of Gödel, Escher, Bach-like length probably wouldn't be a problem, though maybe there are good reasons other than "there is too much material" to keep it shorter.

A near-tie. Either:

(1) The Bottom Line, or

(2) Realizing there's actually something at stake that, like, having accurate conclusions really matters for (largely, Eliezer's article on heuristics and biases in global catastrophic risks, which I read shortly before finding OB), or

(3) Eliezer's re-definition of humility in "12 virtues", and the notion in general that I should aim to see how far my knowledge can take me, and to infer all I can, rather than just aiming to not be wrong (by erring on the side of underconfidence).

(1) wasn't a new tho... (read more)

The biggest "aha" post was probably the one linking thermodynamics to beliefs ( The Second Law of Thermodynamics, and Engines of Cognition, and the following one, Perpetual Motion Beliefs ), because it linked two subjects I knew about in a surprising and interesting way, deepening my understanding of both.

Apart from that, "Tsuyoku Naritai" was the one that got me hooked, though I didn't really "learn" anything by it - I like the attitude it portrays.

3 · SilasBarta (12y): I agree about Engines of Cognition. It got me really interested in the parallels between information theory and thermodynamics and led me to start reading a lot more about the former, including the classic Jaynes papers. I think it gave me a deeper understanding of why e.g. the Carnot limit holds, and led me to read about the interesting discovery that the thermodynamic availability (extractable work) of a system is equal to its Kullback-Leibler divergence (a generalization of informational entropy) from its environment. Second for me would have to be Artificial Addition, which helped me understand why attempts to "trick" a system into displaying intelligence are fundamentally misguided.

I'm going to have to choose "How to Convince Me That 2 + 2 = 3." It did quite a lot to illuminate the true nature of uncertainty.

http://www.overcomingbias.com/2007/09/how-to-convince.html

The ideas in it are certainly not the most important, but another really striking post for me is "Surprised by Brains." The lines "Skeptic: Yeah? Let's hear you assign a probability that a brain the size of a planet could produce a new complex design in a single day. / Believer: The size of a planet? (Thinks.) Um... ten percent." in part... (read more)

3 · Vladimir_Nesov (12y): I second this one, also as related to Making Beliefs Pay Rent [http://www.overcomingbias.com/2007/07/making-beliefs-.html]: what you think and what you present as argument needs to be valid, needs to actually have the strength as evidence that it claims to have. Failure to abide by this principle results in empty or actively stupid thoughts.

The most important thing for me, basically, was the morality sequence and in particular The Moral Void. I was worrying heavily about whether any of the morals I valued were justified in a universe that lacked Intrinsic Meaning. The Morality sequence (and Nietzsche, incidentally) helped me internalize that it's OK after all to value certain things— that it's not irrational to have a morality— that there's no Universal Judge condemning me for the crime of parochialism if I value myself, my friends, humanity, beauty, knowledge, etc— and that even my flight ... (read more)

Hard to pick a favourite, of course, but there's a warning against confirmation bias that cautions us against standing firm, to move with the evidence like grass in the wind, that has stuck with me.

On the general discussions of what sort of book I want, I want one no more than a couple of hundred pages long which I can press into the hands of as many of my friends as possible. One that speaks as straightforwardly as possible, without all the self-aggrandizing eastern-guru type language...


"Shut up and multiply."

The most important and useful thing I learned from your OB posts, Eliezer, is probably the mind-projection fallacy: the knowledge that the adjective "probable" and the adverb "probably" always make an implicit reference to an agent (usually the speaker).

Honorable mention: the fact that there is no learning without (inductive) bias.

The most important thing I learned may have been how to distinguish actual beliefs from meaningless sounds that come out of our mouths. Beliefs have to pay the rent. (http://www.overcomingbias.com/2007/07/making-beliefs-.html)

The Wrong Question sequence was amazing. One of the very unintuitive sequences that greatly improved my categorization methods. Especially with the 'Disguised Queries' post.

I'm going to go with "Knowing About Biases Can Hurt People", but only because I got the Mind Projection Fallacy straight from Jaynes.

I refuse to name just one thing. I can't rank a number of ideas by how important they were relative to each other, they were each important in their own right. So, to preserve the voting format, I'll just split my suggestions into several comments.

Some notes in general. The first year I used to partially misinterpret some of your essays, but after I got a better grasp of underlying ideas, I saw many of the essays as not contributing any new knowledge. This is not to say that the essays were unimportant: they act as exercises, exploring the relevant ideas i... (read more)

6 · [anonymous] (12y): I too would like to support more brevity in your writings - but maybe that just isn't your style.
3 · Vladimir_Nesov (12y): Overcoming Bias: Thou Art Godshatter [http://www.overcomingbias.com/2007/11/thou-art-godsha.html]: understanding how intricate human psychology is, and how one should avoid inventing simplistic Fake Utility Functions [http://www.overcomingbias.com/2007/12/fake-utility-fu.html] for human behavior. I used to make this mistake. Also relevant: Detached Lever Fallacy [http://www.overcomingbias.com/2008/07/detached-lever.html], how there's more to other mental operations than meets the eye.
2 · Vladimir_Nesov (12y): Prices or Bindings? [http://www.overcomingbias.com/2008/10/infinite-price.html] and to a lesser extent (although with a simpler formal statement) Newcomb's Problem [http://www.overcomingbias.com/2008/01/newcombs-proble.html] and The True Prisoner's Dilemma [http://www.overcomingbias.com/2008/09/true-pd.html]: these show just how insanely alien the rational thing can be, even if it's directed to your own cause. You may need to conscientiously avoid preventing the world's destruction, not take free money, and trade a billion human lives for one paperclip.
1 · Vladimir_Nesov (12y): The Simple Truth [http://yudkowsky.net/rational/the-simple-truth] followed by A Technical Explanation of Technical Explanation [http://yudkowsky.net/rational/technical], given some familiarity with probability theory, formed the basic understanding of the Bayesian perspective on probability as quantity of belief. The most confusing point of Technical Explanation, involving a tentacle, was amended in the post about antiprediction on OB [http://www.overcomingbias.com/2008/12/disjunctions-an.html]. It's very important to get this argument early on, as it forms the language for thinking about knowledge.
1 · AspiringRationalist (9y): When I first read "The Simple Truth," I didn't really get it. I realized just how much I didn't get it when I re-read it after reading some of the sequences. I think it would work best as a review-of-what-you-just-learned rather than as an introduction.
0 · Vladimir_Nesov (12y): Righting a Wrong Question [http://www.overcomingbias.com/2008/03/righting-a-wron.html]: how everything you observe calls for understanding, how even an utter confusion or a lie can communicate positive knowledge. There are always causes behind any apparent confusion, so if the situation doesn't make sense in the way it's supposed to be interpreted, you can always step back and see how it really works, even if you are not supposed to look at the situation this way. For example, don't trust your thought; instead, catch your own mind in the process of making a mistake.

If my priors are right, then genuinely new evidence is a random walk. Especially: when I see something complicated I think is new evidence and think the story behind it is obviously something confirming my beliefs in every particular, I need to be very suspicious.

http://www.overcomingbias.com/2007/08/conservation-of.html

http://www.overcomingbias.com/2007/09/conjunction-fal.html

http://www.overcomingbias.com/2007/09/rationalization.html

1 · [anonymous] (12y): I didn't get your point here; could you elaborate? (Re "evidence is a random walk.")

A while back, I posted on my blog two lists with the posts I considered the most useful on Overcoming Bias so far.

If I just had to pick one? That's tough, but perhaps burdensome details. The skill of both cutting away all the useless details from predictions, and seeing the burdensome details in the predictions of others.

An example: Even though I was pretty firmly an atheist before, arguments like "people have received messages from the other side, so there might be a god" wouldn't have appeared structurally in error. I would have questioned whet... (read more)

0 · RobinZ (11y): I would vote for "Burdensome Details" as well.

There are no genuine mysteries, only things that I am ignorant or confused about.

It's hard to answer this question, given how much of your philosophy I have incorporated wholesale into my own, but I think it's the fundamental idea that there are Iron Laws of evidence, that they constrain exactly what it is reasonable to believe, and that no mere silly human conceit such as "argument" or "faith" can change them even in the millionth decimal place.

Your debunking of philosophical zombieism really stuck with me. I don't think I've ever done a faster 180 on my stance on a philosophical argument.

The most important thing I learned was the high value of the outside perspective. It is something that I strive to deploy deliberately through getting into intentional friendships with other aspiring rationalists at Intentional Insights. We support each other’s ability to achieve goals in life through what we came to call a goal buddy system, providing an intentional outside perspective on each other’s thinking about life projects and priorities.

The most important thing for me, is the near-far bias - even though that's a relatively recent "discovery" here, it still resonates very well with why I argue with people about things, and why people who I respect argue with each other.

  1. The Blegg / Rube series, which I'll still list as separate from...
  2. The Map / Territory distinction
  3. An Alien God

All things that, if pushed with the right questions, I'd have come to on my own, but all three put very beautifully.

Every Cause Wants To Be A Cult, Science as Attire, The Simple Truth

That clear thinking can take you from obvious but wrong to non-obvious but right, and on issues of great importance. That we frequently incur great costs just because we're not really nailing things down.

Looking over the list of posts, I suggest the ones starting with 'Fake'.

I've been enjoying the majority of OB posts, but here's the list of ideas I consider the most important for me:

  1. Intelligence as a process steering the future into a constrained region.

  2. The map / territory distinction.

  3. The use of probability theory to quantify the degree of belief.

Is this to be a book that somebody could give to their grandmother and expect the first page to convince her that the second is worth reading?

The series of posts about "free will". I was always a determinist but somehow refused to think about "free will" in detail, holding a belief that determinism and free will are compatible for some mysterious reason. OB helped me to see things clearly (now it all seems pretty obvious).

I vote for "Conservation of Expected Evidence." The essential answer to supposed evidence from irrationalists.

Second place, either "Occam's Razor" or "Decoherence is Falsifiable and Testable" for the understandable explanation of technical definitions of Occam's Razor.

The intuitive breakthrough for me was realizing that given a proposition P and an argument A that supports or opposes P, then showing that A is invalid has no effect on the truth or falsehood of P, and showing that P is true has no effect on the validity of A. This is the core of the "knowing biases can hurt you" problem, and while it's obvious if put in formal terms, it's counterintuitive in practice. The best way to get that to sink in, I think, is to practice demolishing bad arguments that support a conclusion you agree with.
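The point above, that demolishing an argument A for P leaves P's truth untouched, holds deductively but not probabilistically: if valid arguments are more common for true propositions, then an invalid best-available argument is weak evidence against P. A minimal Bayes sketch (the numbers are my own illustrative assumptions):

```python
# P = the proposition; A = the best argument offered for it.
# Assumption (mine, for illustration): a best-available argument is
# more likely to be valid when the proposition is actually true.
p_true = 0.7                   # prior P(P)
p_valid_given_true = 0.9       # P(A valid | P true)
p_valid_given_false = 0.2      # P(A valid | P false)

p_invalid = (p_true * (1 - p_valid_given_true)
             + (1 - p_true) * (1 - p_valid_given_false))
posterior = p_true * (1 - p_valid_given_true) / p_invalid   # P(P | A invalid)
print(round(posterior, 3))  # ≈ 0.226: demolishing the argument lowered P(P)
```

With these assumptions, finding the best argument invalid moves P(P) from 0.7 down to about 0.23; how much it moves depends entirely on how strongly argument validity correlates with truth, which is the crux of the exchange below.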

3 · Wei_Dai (9y): That sort of makes sense if what you mean is "whatever we humans think about A has no effect on the truth or falsehood of P in a Platonic sense," but surely showing that A is invalid ought to change how likely you think it is that P is true? Similarly, if P is actually true, a random argument that concludes with "P is true" is more likely to be valid than a random argument that concludes with "P is false." So showing P is true ought to make you think that A is more or less likely to be valid depending on its conclusion. (Given that this comment was voted up to 3 and nobody gave a counterargument, I wonder if I'm missing something obvious.)
5 · jimrandomh (9y): I wrote that two years ago, and you're right that it's imprecise in a way that makes it not literally true. In particular, if a skilled arguer gives you what they think is the best argument for a proposition, and the argument is invalid, then the proposition is likely false. What I was getting at, I think, is that my intuition used to vastly overestimate the correlation between the validity of arguments encountered and the truth of the propositions they argue for, because people very often make bad arguments for true statements. This made me reject things I shouldn't have, and easily get sidetracked into dealing with arguments too many layers removed from the interesting conclusions.
1 · Wei_Dai (9y): Ok, that makes a lot more sense. Thanks for the clarification.
2 · Benquo (9y): 3 is still a small number. If it were 10+ then you should worry. I'm confused by this too. The nearest correct idea I can think of to what Jim actually said is that if you have a proposition P with an associated credence based on the available evidence, then finding an additional but invalid argument A shouldn't affect your credence in P. The related error is assuming that if you argue with someone and are able to demolish all their arguments, this means that you are correct, giving too little weight to the possibility that they are a bad arguer with a true opinion. Jim, is that close to what you meant? EDIT: Whoops, didn't see Jim's response. But it looks like I guessed right. I've also made the related error in the past, and this quote from Black Belt Bayesian [http://www.acceleratingfuture.com/steven/?p=155] was helpful in improving my truth-finding ability:

"You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions"

It wasn't much of an "aha!" moment. When I first read it, I thought something along the lines of "Of course higher standards are possible, but if no one can find flaws in your argument, you're doing pretty well." But the more I thought about it, the more I realized that EY made a good point. I later stumbled upon flaws in my long-standing arguments... (read more)

5 · billswift (12y): The big problem with relying on someone else to save you is "Why would they bother?" No one is likely to be as motivated to find mistakes in your beliefs as you are (or at least as you should be).
[-][anonymous]12y 3

I've been reading OB for a comparatively short time, so I haven't yet been through the vast majority of your posts. But "The Sheer Folly of Callow Youth" really puts in perspective the importance of truth-seeking and why it's necessary.

Quote: "Of this I learn the lesson: You cannot manipulate confusion. You cannot make clever plans to work around the holes in your understanding. You can't even make "best guesses" about things which fundamentally confuse you, and relate them to other confusing things. Well, you can, but you won... (read more)

How to make sense out of metaethics. I would particularly name The Meaning of Right.


For me this is a tough question, since I've been reading your stuff for nearly ten years now. Thinking only of OB, I'd have to say it was the quantum physics stuff, but only because I had already encountered essentially everything else in one form or another, so your writing was just refining the presentation of what I had already generally learned from you.

Definitely "materialism"... especially the idea that there are no ontologically basic mental entities.

1 · Paul Crowley (12y): That whole post [http://www.overcomingbias.com/2008/09/excluding-the-s.html] is good, but that idea is due to Richard Carrier.

Clearing up my meta-ethical confusion regarding utilitarianism. From The "Intuitions" Behind "Utilitarianism":

Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility.

Realizing that the expression of any set of values must inherently "sum to 1" was quite an abrupt and obviously-true-in-retrospect revelation.

This is really from times before OB, and might be all too obvious, but the most important thing I've learned from your writings (so far) is Bayesian probability. I had come into contact with the concept previously, but I didn't understand it fully or understand why it was very important until I read your early explanatory essays on the topic. When you write your book, I'm sure that you will not neglect to include really good explanations of these things, suitable for people who have never heard of them before, but since no one else has mentioned it in this thread so far, I thought I might.

[-][anonymous]12y 2

1) I learned to reconcile my postmodernist trends with physical reality. Sounds cryptic? Well let's say I learned to appreciate science a little more than I did.

2) I learned to think more "statistically" and probabilistically - though I didn't learn to multiply.

3) Winning is also a pretty good catch-word for an attitude of mind - and maybe a better title than less-wrong.

2 · [anonymous] (12y): 4) Oh! - And I stopped buying a lottery ticket. 5) The absence of evidence is not evidence of absence - the symmetry of confirming and disconfirming evidence.
2 · [anonymous] (12y): 6) Something I didn't learn was the long list of cognitive biases. It's OK to know about them - but I don't think they are very usable in practice. The one I like best is "overconfidence".
1 · [anonymous] (12y): 7) Something else I didn't learn or understand was your stance on ethics. I am going to rush and take the child off the rails as well - but all else is muddled mud to me.

"Thou art Godshatter" -- this was one of the first posts I read, and it made the entire heuristics and biases program feel more immediate / compelling than before

[-][anonymous]12y 2

Expecting Short Inferential Distances

The 'Shut up and do the impossible' sequence.

Newcomb's problem.

Godshatter.

Einstein's arrogance.

Joy in the merely real.

The cartoon Gödel's theorem.

Science isn't strict enough.

The bottom line.

Well, I'd say the most important thing I learned was to be less confident when taking a stand on controversial topics. So to that end, I'll nominate

  1. Twelve Virtues of Rationality
  2. Politics is the Mind-Killer

Thanks for the link to the list - I keep forgetting that exists. And thanks again to Andrew Hay for making it.

That said, I don't think I would say I learned anything from your OB posts, at least about rationality. I think I did learn about young Eliezer and possibly about aspiring rationalists in general. If that's a reasonable topic, then I'd have to suggest something in the "Young Eliezer" sequence, possibly My Wild and Reckless Youth.

There are several variations on the questions you're asking that I think I could find answers to:

"Which... (read more)

I liked philosophy before OB, so I knew you were supposed to question everything. OB revealed new things to question, and taught me to expect genuine answers.

1 · Technologos (12y): In fact, I'd say that OB reinforced in a more concrete way the belief I got from Wittgenstein that not all questions are meaningful (in particular, the ones for which there cannot be "genuine answers").

"I suspect that most existential angst is not really existential. I think that most of what is labeled 'existential angst' comes from trying to solve the wrong problem", from Existential Angst Factory.

I don't know about "most important", but the one post that really stuck in my mind was Archimedes's Chronophone. I spent a while thinking about that one.

Just did a quick search of this page and it didn't turn up... so, by far, the most memorable and referred-to post I've read on OB is Crisis of Faith.

3 · AnnaSalamon (12y): Did practicing the Crisis of Faith technique cause you to change your mind about anything?

I really can't think of any one single thing. Part of it is I think I hadn't yet "dehindsightbiased" myself, (still haven't, except now sometimes I can catch myself as it's happening and say "No! I didn't know that before, stop trying to pretend that I did.")

Another part is that lots of posts helped crystallize/sharpen notions I'd been a bit fuzzy on. Part of it is just, well, the total effect.

Stuff like the Evolution sequence and so on were useful to me too.

If I had to pick one thing that stands out in my mind though, I guess I'd have ... (read more)

The idea that the purpose of the law is to provide structure for optimization.

I'm not sure this is the most important thing I've learned yet, but it's the only really 'aha' moment I've had in the admittedly small sample I've been able to catch up on thus far.

I find I think about this most often as I contemplate the effect traffic laws and implements have in shaping my 20 minute optimization exercise in getting to work each morning.

I'm not sure I've "learned" anything. You've largely convinced me that we don't really "know" anything but rather have varying degrees of belief, but I believed that to some degree before reading this site and am not 100% convinced of it now.

The most important thing I can think of that I would have said is almost certainly wrong before and that I'd say is probably right now is that it is legitimate to multiply the utility of a possible outcome by its probability to get the utility of the possibility.
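That last idea is just expected-utility arithmetic. A toy comparison, with made-up utilities and a helper function of my own for illustration:

```python
# Expected utility of an uncertain outcome: sum of utility * probability
# over each possible result of the choice.
gamble = [(0.5, 100.0), (0.5, 0.0)]   # 50% chance of 100 utils, else nothing
certain = [(1.0, 40.0)]               # a sure 40 utils

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

print(expected_utility(gamble), expected_utility(certain))  # 50.0 40.0
```

Here the 50% shot at 100 utils edges out the certain 40; the contested step is precisely whether multiplying utility by probability like this is legitimate, which is what the commenter came to accept.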


Intelligence as a blind optimization process shaping the future -- esp. in comparison with evolution -- and how the effect of our built-in anthropomorphism makes us see intelligence as existing, when in fact, ALL intelligence is blind. Some intelligence processes are just a little less blind than others.

(Somewhat offtopic, but related: some studies show that the number of "good" ideas produced by any process is linearly proportional to the TOTAL number of ideas produced by that process... which suggests that even human intelligence searches blindly, once we go past the scope of our existing knowledge and heuristics.)

I'm going to echo CatDancer: for me the most valuable insight was that a little information goes a very long way. From the example of the simulated beings breaking out to the Bayescraft interludes to the few observations and lots of cogitations in Three Worlds Collide to GuySrinivasan's random-walk point, I've become more convinced that you can get a surprising amount of utility out of a little data; this changes other beliefs like my assessment of how possible AI rapid takeoff is.


[-][anonymous]8y 0

Generalising from one example.

[This comment is no longer endorsed by its author]
[-][anonymous]9y 0

Making Beliefs Pay Rent

The explanation of Bayes' Theorem and the pointer to E. T. Jaynes. It gave me an approach to statistics that is useful as well as rigorous, as opposed to the gratuitously arcane and not very useful frequentist stuff I was exposed to in grad school.

Second would be the quantum mechanics posts - finally an understandable explanation of the MW interpretation.

[-][anonymous]12y 0

#1: Teacher's Password http://www.overcomingbias.com/2007/08/guessing-the-te.html

I happened to have a young child about to enter elementary school when I read that, and it crystalized my concern about rote memorization. I forced many fellow parents to read the essay as well.

I realize you mostly care about #1, but just for more data: #2 I'd probably put the Quantum Physics sequence, although that is a large number of posts, and the effect is hard to summarize in a few pages. For #3 I liked (within evolution) that we are adaptation-executers, not fitness-maximizers.


Priors as Mathematical Objects: a prior is not something arbitrary, nor a state of lack-of-knowledge, nor can sufficient evidence turn an arbitrary prior into a precise belief. A prior is the whole algorithm of what to do with evidence, and a bad prior can easily turn evidence into stupidity.

P.S. I wonder if this post was downvoted exclusively because of Eliezer's administrative remark, and not because of its content.

1 · Eliezer Yudkowsky (12y): Vlad, if you're going to do this, at least do it as replies to your original comment!
1 · Vladimir_Nesov (12y): Right. I moved other comments under the original one.

I'm going to break with the crowd here.

I don't think that the Overcoming Bias posts, even cleaned up, are suitable for a book on how to be rational. They are something like a sequence of diffs of a codebase as it was developed. You can get a feel of the shape of the codebase by reading the diffs, particularly if you read them steadily, but it's not a great way to communicate the shape.

A book probably needs more procedures on how to behave rationally:

  • How to use likelihood ratios
  • How to use utility functions
  • Dutch Books: what they are and how to avoid them

The posts are amazing, well connected and very detailed. I think one of the best insights you had was to make concise these biases as the words of your Confessor:

"[human] rationalists learn to discuss an issue as thoroughly as possible before suggesting any solutions. For humans, solutions are sticky...We would not be able to search freely through the solution space, but would be helplessly attracted toward the 'current best' point, once we named it. Also, any endorsement whatever of a solution that has negative moral features, will cause a human to... (read more)