What are the qualitative lessons we can learn about logic and reasoning from Bayesian epistemology, that is, from taking Bayes' rule as a mathematical model for thought (even if it is a simplified formalism that we often can't implement)?

I've seen at least a few of these from @Eliezer Yudkowsky, but I think they're scattered across many essays.

Some things I consider to be examples of what I'm gesturing at here:

Thanks!

  • It's not enough for a hypothesis to be consistent with the evidence; to count in favor, the evidence must be more likely under the hypothesis than under its negation. How much more likely is how strong. (Likelihood ratios.)
  • Knowledge is probabilistic/uncertain (priors) and is updated based on the strength of the evidence. A lot of weak evidence can add up (or multiply, actually, unless you're using logarithms).
  • Your level of knowledge is usually not literally zero, even when uncertainty is very high, and you can start from there. (Upper/Lower bounds, Fermi estimates.) Don't say, "I don't know." You know a little.
  • A hypothesis can be made more ad-hoc to fit the evidence better, but this must lower its prior. (Occam's razor.)
    • The reverse of this also holds. Cutting out burdensome details makes the prior higher. Disjunctive claims get a higher prior, conjunctive claims lower.
    • Solomonoff's Lightsaber is the right way to think about this.
  • More direct evidence can "screen off" indirect evidence. If it's along the same causal chain, you're not allowed to count it twice.
  • Many so-called "logical fallacies" are correct Bayesian inferences.
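The likelihood-ratio and evidence-accumulation bullets above can be sketched numerically. A minimal illustration of the odds form of Bayes' rule, with made-up numbers:

```python
import math

def posterior_odds(prior_odds, likelihood_ratios):
    """Odds form of Bayes' rule: posterior odds = prior odds x product of LRs."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Prior odds 1:9 (P = 0.1), then three independent pieces of weak
# evidence, each twice as likely under H as under not-H.
odds = posterior_odds(1 / 9, [2, 2, 2])   # 8/9
prob = odds / (1 + odds)                  # 8/17, roughly 0.47

# In log-odds the multiplications become additions, which is why
# "a lot of weak evidence can add up" once you take logarithms.
log_odds = math.log(1 / 9) + sum(math.log(lr) for lr in (2, 2, 2))
assert abs(math.exp(log_odds) - odds) < 1e-12
```

Three weak 2:1 updates move an unlikely hypothesis (10%) almost to even odds, which is the "weak evidence multiplies" point in miniature.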
jmh:

> Many so-called "logical fallacies" are correct Bayesian inferences.

I find this a very interesting claim, and I wonder if anyone has applied it to some list of logical fallacies, such as one might find in an Intro to Logic textbook.

I'm assuming that one could get all that from reading through all the Sequences, but it seems to me a cheat-sheet-type document would be much more helpful.

Wikipedia has a list. Note that even the "informal" fallacies are often "so-called 'logical fallacies'".

Fallacies as weak Bayesian evidence had some good exposition on a few of them from a Bayesian perspective. There could be more under the fallacies tag.

There's also some discussion under Logical fallacy poster.

If you observe two pieces of evidence, you have to condition the second on having seen the first, to avoid double-counting evidence.
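A toy sketch of why this matters, with all probabilities invented for illustration: two correlated test results (say, from the same instrument), counted correctly versus naively.

```python
# Hypothetical numbers: each test is positive with p = 0.9 given H,
# p = 0.2 given not-H, but the two results share a common cause, so the
# second adds little once you've seen the first.
p_e1_given_h = 0.9
p_e1_given_not_h = 0.2

# Correct: condition the 2nd observation on the 1st.
p_e2_given_h_e1 = 0.95      # nearly determined by the 1st result
p_e2_given_not_h_e1 = 0.85  # same correlation holds under not-H

correct_lr = (p_e1_given_h * p_e2_given_h_e1) / (p_e1_given_not_h * p_e2_given_not_h_e1)

# Wrong: treat the two results as independent and count each at full strength.
naive_lr = (p_e1_given_h / p_e1_given_not_h) ** 2

print(correct_lr)  # about 5.0 -- barely above the single-test LR of 4.5
print(naive_lr)    # 20.25 -- the double-counted, overstated version
```

The correlated second result is nearly "screened off" by the first; pretending independence quadruples the strength of the evidence.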

Just throwing it out there: Bayes' rule on Arbital has some great content.

The basic definition of evidence is more important than you may think. You need to start by asking what different models predict. Related: it is often easier to show how improbable the evidence is according to the scientific model, than to get any numbers at all out of your alternative theory.
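For instance (a hypothetical coin-flip example): a model only yields a likelihood you can score once it makes a concrete prediction, which is why a vague "the coin is biased" produces no number until you commit to a specific bias.

```python
from math import comb

def binomial_likelihood(k, n, p):
    """P(k successes in n trials | success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Made-up data: a coin lands heads 62 times in 100 flips.
# "Fair coin" makes a sharp prediction we can score...
fair = binomial_likelihood(62, 100, 0.5)
# ...while "the coin is biased" only yields a number once we commit
# to a concrete bias, e.g. p = 0.6.
biased = binomial_likelihood(62, 100, 0.6)

print(biased / fair)  # likelihood ratio favoring the concrete biased model
```

Without committing to p = 0.6 (or some prior over biases), the "alternative theory" predicts nothing and so gets no credit, no matter how improbable the data looks under the fair-coin model.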

Absent hypotheses do not produce evidence. Often you need to already hold a hypothesis under which an observation counts as evidence in order to even notice the observation. (In some cases you can notice confusion instead, but that is hard to do until it's right up in your face.) This is a source of many misunderstandings (along with stupid priors, of course). If you forget that other people can be tired, in pain, or in a hurry, it is easy to interpret harshness as evidence for "they don't like me" (they could be in a hurry and also dislike you, but still...) and stop there. After several such instances you will be convinced enough that changing your mind becomes very difficult (confirmation-bias difficult), so the alternative hypotheses need to be present in your mind before you encounter the observation.

Vague hypotheses ("what if we are wrong?") and negative ones ("what if he did not do this?") are not good at producing evidence either. To be useful they have to be precise, concrete, and positive. (In some cases this is easy to check by visualisation: how hard is it to visualise, and is it possible at all?)

Cromwell's rule: your prior probability for anything can never be zero, otherwise it would stay zero in the face of any evidence.
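A two-line sketch of why: in Bayes' rule the prior multiplies the likelihood, so a zero prior annihilates any amount of evidence.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' rule."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# Even overwhelming evidence (a 999:1 likelihood ratio) cannot move a zero prior:
assert bayes_update(0.0, 0.999, 0.001) == 0.0

# A tiny but nonzero prior can still be rescued by strong evidence:
print(bayes_update(1e-6, 0.999, 0.001))  # roughly 0.001 -- about a thousandfold update
```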

The flip side is that some actually useful hypotheses are inaccessible on a fundamental level, so you can't ever be a True Bayesian. Sorry. This might map to epistemic humility.