Mysterious Answers To Mysterious Questions (Sequence)

Created by Eliezer Yudkowsky

How much of your knowledge could you regenerate, if it were deleted from your mind? If you don't have enough experience to regenerate beliefs when they are deleted, then do you have enough experience to connect that belief to anything at all?

Policy proposals need to come with specifics, not just virtuous-sounding words like "democracy" or "balance". These words can stand for specific proposals (democracy: resolving conflicts through voting) but they are often used in an uninformative way. To test whether a proposal actually carries new information, try reversing it.

The 2-4-6 experiment suggests that humans tend to look for positive evidence ("My theory predicts this, and it happens!") rather than negative evidence ("My theory predicts this won't happen, and it doesn't.") This is similar to, but separate from, confirmation bias. To spot an explanation that isn't helpful, it's not enough to think of what it does explain very well — you also have to search for results it couldn't explain; and this is the true strength of the theory.
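The positive-bias trap can be sketched in code. This is a minimal illustration of the 2-4-6 setup: the experimenter's true rule and a narrower hypothesis a subject might hold, with made-up test triples (none of these specifics come from the source text):

```python
# Illustrative sketch of the 2-4-6 task.
# True rule: any strictly ascending triple.
def true_rule(triple):
    a, b, c = triple
    return a < b < c

# A narrower hypothesis a subject might form: "each number goes up by 2".
def narrow_hypothesis(triple):
    a, b, c = triple
    return b == a + 2 and c == b + 2

# Positive tests: triples the narrow hypothesis predicts will pass.
positive_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
# A negative test: a triple the narrow hypothesis predicts will FAIL.
negative_test = (1, 2, 39)

# Every positive test "confirms" the narrow hypothesis...
assert all(true_rule(t) for t in positive_tests)
# ...but only the negative test exposes it as too narrow:
assert true_rule(negative_test) and not narrow_hypothesis(negative_test)
```

No number of positive tests can distinguish the two rules; only a triple the hypothesis forbids can.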

Sometimes it seems that people want their semantic stopsigns not to actually explain anything, because they like feeling that the universe is too grand and mysterious to understand. But nothing is mysterious in itself; if we can't understand the universe, it's not because the universe is grand but because we are ignorant, and there's nothing wonderful about that ignorance.

When seemingly unanswerable questions are answered with, say, "God did it," that answer doesn't resolve the question, but it does tell you to stop asking further questions; it functions as a semantic stopsign. But religion is by no means the only source of semantic stopsigns. If you're tempted to solve any problem or explain any event with a word like "government" or "big business" or "terrorism," and you fail to ask the obvious next question "How exactly does [government|business|terrorism] explain this thing or solve this problem?", then that word is a semantic stopsign for you.

If you are paid to explain the movements of financial markets after the fact, you might like theories that "explain" all possible outcomes equally well, without focusing uncertainty. But what if you don't know the outcome yet, and you need to have an explanation ready in 100 minutes? Then you want to spend most of your time on excuses for the outcomes that you anticipate most, so you still need a theory that focuses your uncertainty.

Mysterious Answers to Mysterious Questions is the first (and probably most important) core sequence on Less Wrong. Posts in the sequence were published from 28 Jul 07 to 11 Sep 07.

People think that fake explanations use words like "magic", while real explanations use scientific words like "heat conduction". But being a real explanation isn't a matter of literary genre. Scientific-sounding words aren't enough. The real goal is to constrain anticipation. Ideally, you can explain only actual observations. Fake explanations could "explain" the opposite of what you observed.

Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world. This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn't fit what we really would have expected. We need to make a conscious effort to be shocked enough.

Hindsight bias makes us overestimate how well our model could have predicted a known outcome. We underestimate the cost of avoiding a known bad outcome, because we forget that many other equally severe outcomes seemed as probable at the time. Hindsight bias distorts the testing of our models by observation, making us think that our models are better than they really are.

If you are about to make an observation, then the expected value of your posterior probability must equal your current prior probability. On average, you must expect to be exactly as confident as when you started out. If you are a true Bayesian, you cannot seek evidence to confirm your theory, because you do not expect any evidence to do that. You can only seek evidence to test your theory.
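This conservation of expected evidence can be checked numerically. The probabilities below are illustrative, not from the source; the point is that the prior-weighted average of the two possible posteriors always equals the prior:

```python
# Conservation of expected evidence, with made-up numbers.
p_h = 0.3            # prior P(H)
p_e_given_h = 0.8    # likelihood P(E|H)
p_e_given_not_h = 0.4

# Marginal probability of observing E.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior in each branch, by Bayes' theorem.
post_if_e = p_e_given_h * p_h / p_e                  # P(H|E)
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # P(H|~E)

# Expected posterior, averaged over what you might observe.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected_posterior - p_h) < 1e-12
```

Whatever likelihoods you plug in, the expectation collapses back to the prior: a shift upward in one branch must be balanced by a shift downward in the other.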

Absence of proof is not proof of absence. But absence of evidence is always evidence of absence. According to the probability calculus, if P(H|E) > P(H) (observing E would be evidence for hypothesis H), then P(H|~E) < P(H) (absence of E is evidence against H). The absence of an observation may be strong evidence or very weak evidence of absence, but it is always evidence.
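A small worked example, with made-up numbers, shows why the inequality must flip: if observing E would raise your probability for H, then failing to observe E must lower it.

```python
# Illustrative Bayes computation (all probabilities invented for the example).
p_h = 0.5
p_e_given_h = 0.9      # E is likely if H is true
p_e_given_not_h = 0.5  # E is less likely otherwise

# Marginal probability of E.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

p_h_given_e = p_e_given_h * p_h / p_e                    # posterior if E observed
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)    # posterior if E absent

assert p_h_given_e > p_h       # observing E is evidence for H
assert p_h_given_not_e < p_h   # absence of E is evidence against H
```

Here the absence of E is fairly strong evidence (the posterior drops from 0.5 to about 0.17); with a likelihood ratio closer to 1, the drop would be slight, but it can never be zero when E favors H.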