Less Wrong/2007 Articles/Summaries

Summaries of LessWrong Posts from 2007

Created by John_Maxwell at 4y

If you think that the apocalypse will be in 2020, while I think that it will be in 2030, how could we bet on this? One way would be for me to pay you X dollars every year until 2020. Then, if the apocalypse doesn't happen, you pay me 2X dollars every year until 2030. This idea could be used to set up a prediction market, which could give society information about when an apocalypse might happen. (Yudkowsky later realized that this wouldn't work: http://lesswrong.com/lw/ie/the_apocalypse_bet/7i2c.)
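
The cash flows of the proposed bet can be sketched numerically. This is a minimal illustration only; the start year and stake size below are arbitrary assumptions, not from the original post.

```python
# Cash flows of the proposed apocalypse bet (illustrative sketch only).
# The start year and the stake are assumed values, not from the post.

def bet_cashflow(start=2010, first_deadline=2020, second_deadline=2030, stake=100):
    """My net dollars if no apocalypse occurs by second_deadline.

    From `start` to `first_deadline` I pay you `stake` per year; if the
    world is still here, you then pay me `2 * stake` per year until
    `second_deadline`.
    """
    paid_out = stake * (first_deadline - start)                 # my yearly payments to you
    paid_back = 2 * stake * (second_deadline - first_deadline)  # your yearly payments to me
    return paid_back - paid_out

print(bet_cashflow())  # 2*100*10 - 100*10 = 1000
```

With these assumed numbers, a survivor of the doubled-payment period comes out ahead by the full amount originally staked, which is what makes the odds expressible as a market price.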

Admit it when the evidence goes against you, or else things can get a whole lot worse.

As a side effect of evolution, superstimuli exist, and, as a result of economics, they are getting and will likely continue to get worse.

Medical disclaimers without probabilities are hard to use, and if probabilities aren't there because some people can't handle having them there, maybe we ought to tax those people.

Variance decomposition does not imply majoritarian-ish results; this is an artifact of minimizing square error, and drops out using square-root error when bias is larger than variance; how and why to factor in evidence requires more assumptions, as per Aumann agreement.

Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian," this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" instead of "What statements do I believe?"

A hypothesis that forbids nothing permits everything, and thus fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

People think that fake explanations use words like "magic," while real explanations use scientific words like "heat conduction." But being a real explanation isn't a matter of literary genre. Scientific-sounding words aren't enough. Real explanations constrain anticipation. Ideally, you could explain only the observations that actually happened. Fake explanations could just as well "explain" the opposite of what you observed.

Words like "democracy" or "freedom" are applause lights - no one disapproves of them, so they can be used to signal conformity and hand-wave away difficult problems. If you hear people talking about the importance of "balancing risks and opportunities" or of solving problems "through a collaborative process" that aren't followed up by any specifics, then the words are applause lights, not real thoughts.

Eliezer explains that he is overcoming writer's block by writing one Less Wrong post a day.

Elementary probability theory tells us that the probability of one thing (we write P(A)) is necessarily greater than or equal to the conjunction of that thing and another thing (write P(A&B)). However, in the psychology lab, subjects' judgments do not conform to this rule. This is not an isolated artifact of a particular study design. Debiasing won't be as simple as practicing specific...

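
The rule stated above, P(A) ≥ P(A&B), can be checked mechanically on any finite sample space. The sketch below is an illustration only; the sample space (three fair coin flips) and the events A and B are arbitrary choices, not from the original post.

```python
# The conjunction rule P(A) >= P(A and B), checked by brute force over a
# small finite sample space: all sequences of three fair coin flips.
import itertools

outcomes = list(itertools.product("HT", repeat=3))  # all 8 flip sequences

def prob(event):
    """Probability of `event` (a predicate on outcomes) under the uniform measure."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

def A(o):
    return o[0] == "H"                  # event A: first flip is heads

def conj(o):
    return A(o) and o.count("H") >= 2   # event A AND B: first flip heads, at least two heads

assert prob(conj) <= prob(A)  # the conjunction is never more probable than A alone
print(prob(A), prob(conj))    # 0.5 0.375
```

Subjects in the lab violate exactly this inequality when the added detail B makes the conjunction sound more representative.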

Admit it when the evidence goes against you, or else things can get a whole lot worse.

Casey Serin owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy 8 different houses in different states. The sad part is that he hasn't given up - he hasn't declared bankruptcy, and has just attempted to purchase another house. While this behavior seems merely stupid, it brings to mind Merton and Scholes of Long-Term Capital Management, who made 40% profits for three years, and then lost it all when they overleveraged. Each profession has rules on how to be successful, which makes rationality seem unlikely to help greatly in life. Yet it seems that one of the greater skills is not being stupid, which rationality does help with.

In your discussions, beware: for clear evolutionary reasons, people have great difficulty being rational about current political issues.

People act funny when they talk about politics. In the ancestral environment, being on the wrong side might get you killed, and being on the correct side might get you sex, food, or let you kill your hated rival. If you must talk about politics (for the purpose of teaching rationality), use examples from the distant past. Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise, it's like stabbing your soldiers in the back - providing aid and comfort to the enemy. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it, but don't blame it explicitly on the whole Republican/Democratic/Liberal/Conservative/Nationalist Party.

Those who understand the map/territory distinction will integrate their knowledge, as they see the evidence that reality is a single unified process.

Written regarding the proverb "Outside the laboratory, scientists are no wiser than anyone else." The case is made that if this proverb is in fact true, that's quite worrisome because it implies that scientists are blindly following scientific rituals without understanding why. In particular, it is argued that if a scientist is religious, he probably doesn't understand the foundations of science very well.

(alternate summary:)

Certain repeated science experiments imply Bayesian priors so extreme that you should believe scientific consensus above evidence from your own eyes, when they conflict.

Eliezer remarks that he makes references to transhumanism on Overcoming Bias not for the purpose of proselytization, but because it is rather impossible for him to share lessons about rationality from his personal experiences otherwise, as he happens to be highly involved in the transhumanist community.

A way of breaking the conformity effect in some cases.

(alternate summary:)

Describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.

On noticing when you're still doing something that has become disconnected from its original purpose.

(alternate summary:)

Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.