Less Wrong/2007 Articles/Summaries

Summaries of LessWrong Posts from 2007

If you think that the apocalypse will be in 2020, while I think that it will be in 2030, how could we bet on this? One way would be for me to pay you X dollars every year until 2020. Then, if the apocalypse doesn't happen, you pay me 2X dollars every year until 2030. This idea could be used to set up a prediction market, which could give society information about when an apocalypse might happen. (Yudkowsky later realized that this wouldn't work.)
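The payoff structure of the proposed bet can be sketched numerically. The stake X and the start year are illustrative assumptions, not from the post:

```python
# Illustrative payoffs for the proposed apocalypse bet.
# You predict apocalypse in 2020; I predict 2030. X is an arbitrary stake.
X = 100
start_year = 2010  # hypothetical year the bet begins

# I pay you X dollars every year until your predicted date, 2020.
my_payments = X * (2020 - start_year)

# If 2020 passes without apocalypse, you pay me 2X every year until 2030.
your_payments = 2 * X * (2030 - 2020)

print(my_payments, your_payments)
```

Under these numbers I pay 1000 up front and receive 2000 back if no apocalypse occurs, so each side profits in the world it predicts; as noted above, Yudkowsky later concluded the scheme doesn't actually work.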

As a side effect of evolution, superstimuli exist, and, as a result of economics, they are getting worse and will likely continue to do so.

Medical disclaimers without probabilities are hard to use, and if probabilities aren't there because some people can't handle having them there, maybe we ought to tax those people.

Variance decomposition does not imply majoritarian-ish results; this is an artifact of minimizing square error, and drops out using square-root error when bias is larger than variance; how and why to factor in evidence requires more assumptions, as per Aumann agreement.
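The dependence on the choice of error measure can be illustrated: the point estimate that minimizes squared error is the mean, while the one that minimizes absolute error is the median, so the "majoritarian-looking" aggregate is an artifact of the loss function chosen. A toy sketch, with hypothetical individual judgments:

```python
# Toy illustration: the best single estimate depends on the error measure.
estimates = [1.0, 2.0, 2.5, 3.0, 10.0]  # hypothetical individual judgments

def sq_loss(c):
    # total squared error of guessing c
    return sum((x - c) ** 2 for x in estimates)

def abs_loss(c):
    # total absolute error of guessing c
    return sum(abs(x - c) for x in estimates)

# Search a fine grid of candidate estimates.
candidates = [i / 100 for i in range(0, 1101)]
best_sq = min(candidates, key=sq_loss)    # the mean, 3.7
best_abs = min(candidates, key=abs_loss)  # the median, 2.5

print(best_sq, best_abs)
```

The single outlier (10.0) drags the squared-error optimum well away from what most estimators said, while the absolute-error optimum stays at the median.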

Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience. For example, if I believe that "Gravity is 9.8 m/s^2" then I should be able to predict where I'll see the second hand on my watch at the time I hear the crash of a bowling ball dropped off a building. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian," this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" instead of "What statements do I believe?"

A hypothesis that forbids nothing permits everything, and thus fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

People think that fake explanations use words like "magic," while real explanations use scientific words like "heat conduction." But being a real explanation isn't a matter of literary genre. Scientific-sounding words aren't enough. Real explanations constrain anticipation. Ideally, you could explain only the observations that actually happened. Fake explanations could just as well "explain" the opposite of what you observed.

Words like "democracy" or "freedom" are applause lights - no one disapproves of them, so they can be used to signal conformity and hand-wave away difficult problems. If you hear people talking about the importance of "balancing risks and opportunities" or of solving problems "through a collaborative process" without following up with any specifics, then the words are applause lights, not real thoughts.

Eliezer explains that he is overcoming writer's block by writing one Less Wrong post a day.

Elementary probability theory tells us that the probability of one thing (we write P(A)) is necessarily greater than or equal to the probability of the conjunction of that thing and another thing (we write P(A&B)). However, in the psychology lab, subjects' judgments do not conform to this rule. This is not an isolated artifact of a particular study design. Debiasing won't be as simple as practicing specific questions; it requires certain general habits of thought.
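The conjunction rule can be checked by brute-force counting over a toy sample space (an illustrative sketch, not from the post):

```python
# The conjunction rule P(A & B) <= P(A), verified by counting outcomes
# in a small, equally likely sample space: three coin flips.
import itertools

outcomes = list(itertools.product("HT", repeat=3))  # 8 outcomes

def prob(event):
    # probability of an event = fraction of outcomes satisfying it
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

A = lambda o: o[0] == "H"        # first flip is heads
B = lambda o: o.count("H") == 3  # all three flips are heads

p_A = prob(A)
p_A_and_B = prob(lambda o: A(o) and B(o))

# The conjunction can never be more probable than either conjunct.
assert p_A_and_B <= p_A
print(p_A, p_A_and_B)
```

Here P(A) = 0.5 and P(A&B) = 0.125; whatever events you pick, the conjunction can only carve out a subset of the outcomes satisfying A.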

Traditional Rationality is phrased in terms of social rules, with violations interpretable as cheating - as defections from cooperative norms. But viewing rationality as a social obligation gives rise to some strange ideas. The laws of rationality are mathematics, and no social maneuvering can exempt you.

The facts that philosophers call "a priori" arrived in your brain by a physical process. Thoughts exist in the universe; they are identical to the operation of brains. The "a priori" belief generator in your brain works for a reason.

Proposing solutions prematurely is dangerous, because it introduces weak conclusions into the pool of facts you are considering. As a result, the data set you think about becomes weaker: overly tilted toward premature conclusions that are likely to be wrong, and less representative of the phenomenon you are trying to model than the initial facts you started from.

An Artificial Intelligence coded using Solomonoff Induction would be vulnerable to Pascal's Mugging. How should we, or an AI, handle situations in which it is very unlikely that a proposition is true, but if the proposition is true, it has more moral weight than anything else we can imagine?
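The structure of the problem is a plain expected-value calculation. All of the numbers below are illustrative assumptions, chosen only to show how a tiny probability is swamped by an enormous claimed payoff:

```python
# Pascal's mugging in expected-value terms (all numbers are illustrative).
p_mugger_truthful = 1e-20   # vanishingly small probability the claim is true
utility_at_stake = 1e30     # astronomically large claimed stakes
cost_of_complying = 5.0     # e.g. hand over five dollars

expected_gain = p_mugger_truthful * utility_at_stake - cost_of_complying

# A naive expected-utility maximizer still complies: 1e-20 * 1e30 = 1e10,
# which dwarfs the cost, no matter how absurd the claim sounds.
print(expected_gain > 0)
```

However small you make the probability, the mugger can simply name a bigger payoff, which is what makes the problem hard for unbounded expected-utility reasoning.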

Related to contamination and the illusion of transparency, we "anchor" on our own experience and under-adjust when trying to understand others.

In addition to the difficulties encountered in trying to explain something so that your audience understands it, there are other problems associated with learning whether or not you have explained something properly. If you read your intended meaning into whatever your listener says in response, you may think that they understand a concept when in fact they are simply rephrasing whatever it was you actually said.

An obsolete post in which Eliezer queried Overcoming Bias readers to find out if they would be interested in holding in-person meetings.

The day after Halloween, Eliezer made a joke related to Torture vs. Dust Specks, which he had posted just a few days earlier.

We should be suspicious of our tendency to justify our decisions with arguments that did not actually factor into making them. Whatever process you actually use to make your decisions is what determines your effectiveness as a rationalist.

Evolution is awesomely powerful, unbelievably stupid, incredibly slow, monomaniacally singleminded, irrevocably splintered in focus, blindly shortsighted, and itself a completely accidental process. If evolution were a god, it would not be Jehovah, but H. P. Lovecraft's Azathoth, the blind idiot god burbling chaotically at the center of everything.

The human brain, and every ability for thought and emotion in it, are all adaptations selected for by evolution. Humans have the ability to feel angry for the same reason that birds have wings: ancient humans and birds with those adaptations had more kids. But it is easy to forget that there is a distinction between the reason humans have the ability to feel anger and the reason why a particular person was angry at a particular thing. Human brains are adaptation executors, not fitness maximizers.

Proposes a formalism for discussing the relationship between terminal and instrumental values. Terminal values are world states that we assign some sort of positive or negative worth to. Instrumental values are links in a chain of events that lead to desired world states.

An obsolete meta post.

Admit when the evidence goes against you, or else things can get a whole lot worse.

Casey Serin owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy 8 different houses in different states. The sad part is that he hasn't given up - he hasn't declared bankruptcy, and has just attempted to purchase another house. While this behavior seems merely stupid, it brings to mind Merton and Scholes of Long-Term Capital Management, who made 40% profits for three years, and then lost it all when they overleveraged. Each profession has rules on how to be successful, which makes rationality seem unlikely to help greatly in life. Yet it seems that one of the greater skills is not being stupid, which rationality does help with.

In your discussions, beware: people have great difficulty being rational about current political issues. This is no surprise to someone familiar with evolutionary psychology.

People act funny when they talk about politics. In the ancestral environment, being on the wrong side might get you killed, and being on the correct side might get you sex, food, or let you kill your hated rival. If you must talk about politics (for the purpose of teaching rationality), use examples from the distant past. Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise, it's like stabbing your soldiers in the back - providing aid and comfort to the enemy. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it, but don't blame it explicitly on the whole Republican/Democratic/Liberal/Conservative/Nationalist Party.

Those who understand the map/territory distinction will integrate their knowledge, as they see the evidence that reality is a single unified process.
