I'm not very experienced in such things, so this might be a naive question with an obvious answer. If so, sorry.

I understand that one of the foundations of LessWrong is Bayesian epistemology and reasoning. I've been looking into it, and it seems like the consequences of Bayes' Theorem and similar explorations into probability theory have pretty basic/intuitive implications on rational thought. It seems like it all boils down to "update your beliefs based on evidence." At the moment, I can't see many groundbreaking or especially helpful findings.

There are a couple that are useful, though. "Making beliefs pay rent in anticipated experiences" is useful for ensuring that evidence is available to refine beliefs, and the "conservation of expected evidence" highlights the consequences of conditionality in ways that weren't immediately obvious (e.g. supporting evidence for an already-strong hypothesis isn't very useful, but contradicting evidence is, and the reverse holds for weak hypotheses).
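That asymmetry can be made concrete with a toy Bayes calculation. All the numbers below are made up for illustration: a hypothesis we already believe strongly, and a moderately informative test.

```python
# Hypothetical numbers, chosen only to illustrate the asymmetry.
prior = 0.95            # P(H): an already-strong hypothesis
p_e_given_h = 0.80      # P(E | H)
p_e_given_not_h = 0.30  # P(E | not H)

# Total probability of seeing the supporting evidence E.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior after supporting evidence vs. after contradicting evidence.
post_if_e = p_e_given_h * prior / p_e
post_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)

print(round(post_if_e, 3))      # ~0.981: supporting evidence barely moves a strong prior
print(round(post_if_not_e, 3))  # ~0.844: contradicting evidence moves it much more

# Conservation of expected evidence: the expected posterior equals the prior.
expected_post = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(round(expected_post, 3))  # 0.95, the prior
```

The last line is the "conservation" part: before looking at the evidence, the probability-weighted average of the two possible posteriors is exactly the prior, so any update you expect in one direction must be balanced by a possible update the other way.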

What are some of the most valuable takeaways and implications from Bayesian epistemology? Why does it serve as the effective foundation of this website?



You could say that Bayesian epistemology was a foundation of LessWrong in its early days, but I don't think most posts from the last year are built on a Bayesian foundation.

Even on statistical issues, a recent post like https://www.lesswrong.com/posts/EAnLQLZeCreiFBHN8/how-do-the-ivermectin-meta-reviews-come-to-so-different doesn't have anyone deploying Bayesian tools.

I wouldn't be surprised if, over the last year, more people used gears models as a foundation for epistemology than Bayesian arguments.

For an answer from a time when Bayesianism was more central, https://www.lesswrong.com/posts/JBnaLpsrYXLXjFocu/what-bayesianism-taught-me gives a long response to your question.

Thanks for the quick reply! I'll check those posts out.

From what I've seen (which, again, isn't much), gears-level reasoning just involves comprehensively investigating a model. I'm sure I'm misunderstanding it, especially if it's now the basis for most of the posts around here. Could you enlighten me?

Gears reasoning means that you have a model of how the parts of a system interact, and you reason based on that model. Given that reality is usually very complicated, this means you are operating on a model that is a simplification of reality. In Tetlock's distinction, good Bayesian reasoning is foxy: it's about not committing to any single model, but having multiple models and weighting between them. On the other hand, if you are doing gears-style reasoning, you are usually acting as a hedgehog that treats one model as the source of reliable information.
Benjamin Hendricks · 2y
Hmm... I'm probably being thick, but it sounds like gears-based reasoning is just commitment to a detailed model. That wouldn't help you design the model, among other things. I may need to investigate this on my own; I don't want to tangle you in a comment thread explaining something over and over again.
From Tetlock's Superforecasting work we know that commitment to one detailed model makes you worse at the kind of Bayesian reasoning that superforecasting is about. I think one great talk about the difference is Peter Thiel: You Are Not a Lottery Ticket | Interactive 2013 | SXSW [https://www.youtube.com/watch?v=iZM_JmZdqCw]. In the Bayesian frame, everything is lottery tickets.

It's also not like we completely got rid of Bayesian epistemology. We still do a lot of things, like betting, that come from that frame, but generally LessWrong is open to reasoning in a lot of different ways. Then there's the textbook definition of rational thinking from Baron's Thinking and Deciding. I do think that's the current spirit of LessWrong, and there's a diversity of ways to think that get used within LessWrong.

My main takeaway from Bayes' Theorem is that "A implies B with probability P" and "B implies A with probability P" are not the same thing, but many people think they are.

For example, this explains how a scientific journal can publish hundreds of studies with "p < 0.05" and yet half of them fail to replicate. The probability 95% is for "we get this result, if the hypothesis is true" (what the journal requires), not "the hypothesis is true, if we got this result" (what we care about).
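A back-of-the-envelope Bayes calculation makes the gap concrete. The base rate, power, and threshold below are hypothetical, chosen only to illustrate the effect:

```python
# Hypothetical numbers: suppose 10% of tested hypotheses are actually true,
# studies have 80% power, and the significance threshold is alpha = 0.05.
base_rate = 0.10   # P(hypothesis is true)
power = 0.80       # P(significant result | hypothesis true)
alpha = 0.05       # P(significant result | hypothesis false)

# Total probability of a published "p < 0.05" result.
p_significant = power * base_rate + alpha * (1 - base_rate)

# Bayes: P(hypothesis true | significant result) -- what we actually care about.
p_true_given_sig = power * base_rate / p_significant
print(round(p_true_given_sig, 2))  # 0.64 -- far below the naive 0.95
```

With these (invented) numbers, over a third of published positive results are false, even though every single study individually met the p < 0.05 bar. The two conditional probabilities point in opposite directions.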

Another takeaway is that it is not enough to consider "how likely am I to see this if X is true" but also "how likely am I to see this if X is false". If both answers are the same, then the observation is actually not evidence for X.
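This falls straight out of Bayes' rule: only the ratio of the two likelihoods matters. A minimal sketch (the probabilities are arbitrary):

```python
def update(prior, p_obs_given_x, p_obs_given_not_x):
    """Posterior P(X | observation) by Bayes' rule."""
    numerator = p_obs_given_x * prior
    return numerator / (numerator + p_obs_given_not_x * (1 - prior))

# Equal likelihoods: the observation is not evidence, so belief doesn't move.
print(round(update(0.3, 0.9, 0.9), 3))   # 0.3, unchanged from the prior

# Unequal likelihoods: now the observation shifts belief toward X.
print(round(update(0.3, 0.9, 0.3), 4))   # 0.5625
```

Note that "how likely am I to see this if X is true" can be very high and the observation can still be worthless, so long as "how likely am I to see this if X is false" is just as high.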

It also gives an alternative to the "simplified Popperism" popular on the internet, which says that things "cannot be proved, only falsified". (First problem: what about negations? What would it mean for "X is Y" and "X is not Y" to be simultaneously unprovable but falsifiable? Doesn't falsifying one of them kinda automatically prove the other? Second problem: this is not how actual science works. Whenever someone yet again experimentally "falsifies" the theory of relativity, most actual scientists calmly wait until someone finds an error in the experiment. Third problem: if "wasn't falsified yet" is the highest compliment anyone could ever pay a theory, then any crackpot theory that was literally invented yesterday, and that therefore no one has had time to disprove yet, deserves the same respect as a theory that was invented decades ago and is supported by thousands of experiments.) Similarly, it helps answer the question of whether seeing a non-black object, and observing that it is not a raven, should be considered evidence for "all ravens are black".

With regard to journal results, it is even worse than that.

A published result with p < 0.05 means that: if the given hypothesis is false, but the underlying model and experimental design are otherwise correct, then there is less than a 5% chance of seeing results at least this extreme.

There are enough negations and qualifiers in there that even highly competent scientists get confused on occasion.

I think for me the main takeaway was that to have better beliefs about the world, I don't have to look for proofs but rather for evidence. So if I try to evaluate a hypothesis/belief H based on some observed reality R, I shouldn't ask "does R prove/disprove H?" or "can H explain R?", but rather "how complicated is H's explanation of R?", and then update my beliefs about H, move on, and look for further evidence.
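That update-and-move-on loop is easy to sketch in odds form, where each observation multiplies the odds of H by a likelihood ratio instead of proving or disproving H outright. The ratios below are invented for illustration:

```python
def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

belief = 0.5  # start undecided about H

# Hypothetical likelihood ratios P(R_i | H) / P(R_i | not H), one per
# observation: two pieces of evidence favor H, one cuts against it.
likelihood_ratios = [3.0, 0.5, 4.0]

for lr in likelihood_ratios:
    belief = prob(odds(belief) * lr)

print(round(belief, 3))  # ~0.857: belief shifts with each observation, never to 0 or 1
```

No single observation settles the question; the order of observations also doesn't matter, since the likelihood ratios just multiply together.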
