This is the first post in the Arguing Well sequence. This post is influenced by Scott Alexander's write-up on Proving Too Much.

[edit: Reformatted the post as a Problem/Solution to clarify what I'm trying to claim]

The Problem

One of the purposes of arguing well is to figure out what is true. A very common type of bad argument claims something like this:

Because of reason X, I am 100% confident in belief Y

I don't know of any reason that leads to 100% truth all the time (and if you do, please let me know!), and it's usually hard to reason with the person until this faulty logic is dealt with first. Dealing with that faulty logic is the purpose of this post.

Assuming the context of all exercises is someone claiming 100% confidence in a belief because of that one reason, what's wrong with the following:

Ex. 1: I believe that Cthulhu exists because that’s just how I was raised.

How someone was raised doesn’t make something true or not. In fact, I could be raised to believe that Cthulhu doesn’t exist. We can’t both be right.

Ex. 2: I believe that a goddess is watching over me because it makes me feel better and helps me get through the day.

Just because believing it makes you feel better doesn’t make it true. Kids might feel better believing in Santa Claus, but that doesn’t make him actually exist.

Generalized Model

How would you generalize the common problem in the above arguments? You have 2 minutes.

The common theme that I see is that the same logic that proves the original claim also proves something false. It “Proves too much” because it also proves false things. I like to think of this logic as “Qualifications for 100% truth”, and whatever qualifications prove the original claim can also prove a false claim.

Truth Qualifications -> Claim

Same Truth Qualifications -> Absurd Claim

Important Note: the purpose of this frame isn't to win an argument or prove anything. It's to differentiate between heuristics that claim a 100% success rate and ones that claim a more accurate estimate. Imagine "I'm 100% confident I'll roll a 7 with my two dice because of my luck!" vs. "There's a 6/36 chance I'll roll a 7 because I'm assuming two fair dice"
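For the 6/36 figure, a quick sanity check is to enumerate all outcomes of two fair dice and count the ones that sum to 7. A minimal sketch in Python:

```python
from itertools import product

# All 36 equally likely outcomes of rolling two fair six-sided dice
outcomes = list(product(range(1, 7), repeat=2))

# Outcomes that sum to 7: (1,6), (2,5), (3,4), (4,3), (5,2), (6,1)
sevens = [roll for roll in outcomes if sum(roll) == 7]

print(f"{len(sevens)}/{len(outcomes)}")  # prints 6/36, i.e. about 0.17
```

The luck-based claim offers no comparable calculation to check; it just asserts certainty.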

Let’s work a couple more examples with this model.

Ex. 3: My startup is guaranteed to succeed because it uses quantum machine learning on a blockchain!

A startup using buzzwords doesn’t make it succeed. In fact, there are several startups that have used those terms and failed.

Ex. 4: Of course I believe in evolution! Stephen Hawking believes it, and he’s really smart.

A smart person believing something doesn’t make it true. In fact, smart people often disagree, and I bet there’s a person with a Mensa-level IQ who doesn’t believe in evolution.

Ex. 5: This paper’s result has to be true since it has p < 0.05!

A paper having a p-value less than 0.05 doesn’t mean its result is true. In fact, there are several papers that disagree with each other, each with p < 0.05. Also, homeopathy has been shown to have a p-value < 0.005!
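One way to see this concretely: even when two groups are drawn from exactly the same distribution (so the null hypothesis is true by construction), about 5% of comparisons will still come out "significant" at p < 0.05. A minimal simulation sketch, assuming numpy and scipy are available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Both groups come from the same distribution, so any "effect" is pure noise
    # and the null hypothesis is true by construction.
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

# Roughly 5% of these no-effect experiments still clear the p < 0.05 bar.
print(false_positives / n_experiments)
```

With thousands of published comparisons, some false positives are guaranteed, which is why p < 0.05 on its own can't establish a result as true.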

Ideal Algorithm

What algorithm were you running when you solved the above problems? Is there a more ideal/general algorithm? You have 3 minutes.

1. What does this person believe?

2. Why do they believe it?

3. Generalize that reasoning

4. What’s something crazy I can prove with this reasoning?

The algorithm I actually ran felt like a mix of steps 1, 2, and 3, and then 4, but without literally thinking those words in my head.

Now to practice that new, ideal algorithm you made.

Final Problem Sets

Ex. 6: I believe in my religion because of faith (defined as hope)

Hoping for something doesn’t make it true. I can hope to make a good grade on a test, but that doesn’t mean I will make a good grade. Studying would probably help more than hoping. (Here I provided a counter-example as required, plus an additional counter-reason.)

Ex. 7: I believe in my religion because of faith (defined as trust)

Trusting in something doesn’t make it true. I can trust that my dog won’t bite people, but then someone steps on her paw and she bites them. Trusting that my dog won’t bite people doesn’t make my dog not bite people.

Ex. 8: I believe in a soul because I have a really strong gut feeling.

Having a strong gut feeling doesn’t make it true. In juries, people can even have conflicting gut feelings about a crime. If a jury was trying to determine if I was guilty, I would want them to use the evidence available and not their gut feeling. (Again, I added an additional counter-reason)

Ex. 9: I believe in my religion because I had a really amazing, transformative experience.

There are several religions that claim contradictory beliefs, and each has people who have had really amazing, transformative experiences.

Ex. 10: I believe in my religion because there are several accounts of people seeing heaven when they died and came back.

There are several accounts of people seeing their own religion’s version of heaven or nirvana in death-to-life experiences. You would have to believe Christianity, Mormonism, Islam, Hinduism, … too!

Ex. 11: You get an email asking you to wire money, which you’ll be paid back handsomely for. The email concludes “I, prince Nubadola, assure you that this is my message, and it is legitimate. You can trust this email and any others that come from me.”

The email saying that it is legitimate doesn’t make it so. I could even write a new email saying “Prince Nubadola is a fraud, and I assure you that this is true”. (This is circular reasoning / begging the question.)

Conclusion

In order to argue well, it's important to identify and work through arguments that prove too much. In practice, this technique has the potential to lower someone's confidence in a belief, or to help clarify that "No, I don't think this leads to 100% true things all the time, just most of the time". Either way, communication is better and progress is made.

In the next post, I will generalize Proving Too Much. In the meantime, what’s wrong with this question:

If a tree falls in the woods, but no one is around to hear it, does it make a sound? (note: you shouldn’t be able to frame it as Proving too much)

[Feel free to comment if you got different answers/generalizations/algorithms than I did. Same if you feel like you hit on something interesting or that there's a concept I missed. Adding your own examples with the Spoiler tag >! is encouraged]

Comments

Your Proving Too Much disproves too much: if we only allow reasoning steps that always work, we never get real-world knowledge beyond "I think, therefore I am." Some of these reasons for belief make their belief more likely to be true, and qualitatively that's the best we can get.

"I think, therefore I am."

(This is also incorrect, because considering a thinking you in a counterfactual makes sense. Many UDTish examples demonstrate that this principle doesn't hold.)

Great critique! I've updated the post to show the purpose of this frame. We already have Bayesian updating for making beliefs more likely to be true. This framing of Proving Too Much is about differentiating between useful reasons and those that claim "This reason leads to truth 100% of the time".

There are lots of beliefs that people hold with 100% confidence for only 1 reason, and this is simply a way of jarring them out of that confidence. Encountering heuristics that account for less than 100% confidence will be covered in Finding Cruxes.

I did all the exercises above. Here's what I wrote down during the timed sections. (It's a stream-of-consciousness account, so it may not be very clear/understandable.)

How would you generalize the common problem in the above arguments? You have 2 minutes.

The structure of the reasoning does not necessarily correlate with one outcome more than others. You say A because X, but I can argue that B because X.

But I'm confused, because I can do this for any argument that’s not maximally well-specified though. Like, there’s always a gotcha. If I argue for the structure of genetics due to the pattern of children born with certain features, I could also use that evidence combined with an anti-inductive prior to argue the opposite. I’m not quite sure what the reason is that some things feel like they prove too much and some don’t. I suppose it’s just “in the context of my actual understanding of the situation, do I feel like this argument pins down a world-state positively correlated with the belief or not?” and if it doesn’t, then I can neatly express this by showing it can prove anything, because it’s not actually real evidence.

Oh huh, maybe that's wrong. It’s not that it isn’t evidence for anything, it’s that if it were evidence for this it would be evidence for many inconsistent things. (Though I think those two are the same.)

What algorithm were you running when you solved the above problems? Is there a more ideal/general algorithm? You have 3 minutes.

Hmm, I did like the thing that happened actually. Normally in such a disagreement with a person, I would explain the structure of my beliefs around the thing they called a ‘reason’. I’d do lots of interpretive work like that. “Let me explain the process by which smart people get their beliefs and when those processes are/aren't truth-tracking" or “Let me explain what heuristics help predict whether a startup is successful” or “Let me explain what p-hacking is”. But in all of them the new mental motion, producing a small impossibility proof, was much cleaner/cheaper.

I think I normally avoid such proofs because they’re non-constructive - they don’t tell you where the mistake was or how that part of the world works, and I’m often worried this will feel like a demotivating thing or conversation killer for the other person I’m talking with. But I think it’s worth thinking this way for myself more. I do want to practice it, certainly. I should be able to use all tools of proof and disproof, not just those that make conversations go smoothly.

Some general thoughts

  • I found doing the exercises very enjoyable.
  • I think that the answers here could’ve been more to-a-format. These aren't very open-ended questions, and I think that if I’d practiced matching a format that would’ve drilled a more specific tool better. But not clear that's appropriate.
  • I didn’t like how all the examples were of the “don’t believe a dumb low-status thing” variety. Like I think people often build epistemologies around making sure never to be religious, endorse a failed startup idea, or believe homeopathy, but I think that you should mostly build them around making sure you will make successful insights in physics, or build a successful startup, which is a different frame. I would’ve liked much more difficult examples in areas where it’s not clear what the right choice is purely based on pattern-matching to low-status beliefs.
  • The post tells people to sit by a clock. I think at the start I would've told people to find a timer by googling ‘timer’ (when you do that, one just appears on Google); otherwise I expect most folks to have bounced off and not done those exercises.
  • I really liked the ‘reflect on the general technique’ sections, they were excellent and well-placed.

Wow, this is exactly the type of feedback I wanted, thank you!

I’ve changed my view on this, and my current model is the frame “I can prove anything in the set A because of reason X”

Like I can prove a certain set of facts about Natural numbers using induction, but to claim that induction proves all things about Real numbers or morality or... is proving too much.

I would rewrite the post to focus on questions regarding that such as:

  1. What set of claims do you think reason X proves?
  2. How do you know that reason X proves those types of claims? (And of course figure out how to phrase these things more tactfully)

Also, I’ve enjoyed Thinking Physics and TurnTrout’s AU sequence type questions over my “pattern match to low-status belief” ones (I do like my generalization and algorithm questions though), so I think I understand your point there.


Ex. 8: I believe in a soul because I have a really strong gut feeling.

OTOH, thinking without any intuitions or assumptions at all remains an unsolved problem.

I've updated the post! The purpose of my framing isn't to deny useful heuristics; it's to deny heuristics that claim to be 100% correct all the time.

The argument about believing in Cthulhu because that was how you were raised proving too much, itself proves too much. If you go by prepriors, then everyone says: you had the misfortune of being born wrong, I'm lucky enough to be born right. If you were transported to an alternate reality where half the population thought 2+2=4, and half thought 2+2=7, would you become uncertain, or would you just think that the 2+2=7 population were wrong?

Regarding example 4: believing something because a really smart person believes it is not a bad heuristic, as long as you aren't cherry-picking the really smart person. If you have data about many smart people, taking the average is an even better heuristic, as is focusing on the smart people who are experts in some relevant field. The usefulness of this heuristic also depends on your definition of 'smart'. There are a few people with a high IQ, a powerful brain capable of thinking and remembering well, but who have very poor epistemology, and are creationists or Scientologists. Many definitions of 'smart' would rule out these people by requiring some rationality skills of some sort, which makes the smart-people heuristic even better.

Thanks for the specific examples of why this is wrong! I've updated the post to state the usefulness of this technique. So you don't have to search for it:

Important Note: the purpose of this frame isn't to win an argument or prove anything. It's to differentiate between heuristics that claim a 100% success rate and ones that claim a more accurate estimate. Imagine "I'm 100% confident I'll roll a 7 with my two dice because of my luck!" vs. "There's a 6/36 chance I'll roll a 7 because I'm assuming two fair dice"

So for example 4, appeal to authority may be a useful heuristic, but if that's the only reason they believe in evolution with 100% confidence, then showing it Proves Too Much is useful. Does this satisfy your critique?

Fair enough, I think that satisfies my critique.

A full consideration of proving too much requires that we have uncertainty both over what arguments are valid, and over the real world. The uncertainty about what arguments are valid, along with our inability to consider all possible arguments, makes this type of reasoning work. If you see a particular type of argument in favor of conclusion X, and you disagree with conclusion X, then that gives you evidence against that type of argument.

This is used in moral arguments too. Consider the argument that touching someone really gently isn't wrong. And if it isn't wrong to touch someone with force F, then it isn't wrong to touch them with force F+0.001 Newtons. Therefore, by induction, it isn't wrong to punch people as hard as you like.

Now consider the argument that 1 grain of sand isn't a heap. If you put a grain of sand down somewhere that there isn't already a heap of sand, you don't get a heap. Therefore by induction, no amount of sand is a heap.

If you were unsure about the morality of punching people, but knew that heaps of sand existed, then seeing the first argument would make you update towards "punching people is ok". When you then see the second argument, you update to "inductive arguments don't work in the real world." and reverse the previous update about punching people.

Seeing an argument for a conclusion that you don't believe can make you reduce your credence on other statements supported by similar arguments.

I really like this! I think my model is now:

If a heuristic claims a 100% success rate in a specific context, one can show it proves too much by producing a counterexample

Inspired by your induction example: induction is very useful for proofs about Natural numbers, but it breaks down in the context of moral reasoning (or even just the context of Real numbers).
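For reference, the principle being leaned on here is ordinary mathematical induction, which can be written as

$$\big(P(0) \,\land\, \forall n\,(P(n)\rightarrow P(n+1))\big) \;\rightarrow\; \forall n\, P(n)$$

It is sound over the natural numbers because every natural number is reachable from 0 by finitely many successor steps. The real numbers aren't generated that way, and for vague everyday predicates like "is a heap" or "isn't wrong", the step premise P(n) → P(n+1) isn't actually true at every n even though each individual instance sounds plausible; that mismatch is where the schema starts to prove too much.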

This is a better frame for Proving Too Much than the one I used in this post. I will need to either edit the post or make a new one and link it at the top of this article. Either way, thanks!

With that said, I don't think this captures your point of uncertainty over both valid arguments and possible worlds. Would you elaborate on how your point relates to the above, updated model?

"If a tree falls in the woods, but no one is around to hear it, does it make a sound?" doesn't sound like an argument, but a question. "Yes, because the presence of a person with ears doesn't affect the physical behavior of the air" or "No, because air waves shouldn't be considered sound until they interact with a mind" are arguments.

Or do you mean "argument" in the sense of a debate or discussion (as in "we're having an argument about X")?

You're right! I could construe it to mean "it generally leads to arguments", but I just edited it to "question" to avoid future confusion.