In the last post, we discussed a common problem in arguments that Prove Too Much. In this post, we’ll generalize that problem to help determine useful categories. But before we go on, what’s wrong with these arguments?
Ex. 1 [Stolen from slatestarcodex]
“A few months ago, a friend confessed that she had abused her boyfriend. I was shocked, because this friend is one of the kindest and gentlest people I know. I probed for details. She told me that sometimes she needed her boyfriend to do some favor for her, and he wouldn’t, so she would cry – not as an attempt to manipulate him, just because she was sad. She counted this as abuse, because her definition of “abuse” is “something that makes your partner feel bad about setting boundaries”. And when she cried, that made her boyfriend feel guilty about his boundary that he wasn’t going to do the favor.”
By this definition of “abuse”, a majority of people are “abusive”. It would be better to reserve that word for a smaller group of people who are intentionally manipulating people.
Ex. 2: I also had a friend who was "one of the nicest people everyone knows, wow she listens so well!". She admitted that she was actually "selfish and manipulative" because she did nice things for people so they'd like her.
I honestly wish everyone was as “selfish and manipulative” as this girl; however, it makes those two words nearly useless. It would be better to reserve those words for people who create win-lose situations (You give, I take) as opposed to win-win situations (Oh wow, you make me feel important. I want to be your friend).
What is the general frame of the problem in the two scenarios? You have 2 minutes.
The claim relies on a "bad definition" of a word. It's "bad" because my expectations weren't met. If you say you're selfish, then I expect you to create win-lose situations, but if you're actually a really nice person, then you misled me.
Put another way: if you meet qualifications a, b, c, then you are a member of that category. The problem arises when you have the "wrong" qualifications, meaning the person you were talking to was expecting different ones.
X meets qualifications a,b,c for [word]: -> X is [word]
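The schema above can be sketched as a simple membership check. This is my own illustration, not anything from the post: a hypothetical `word_applies` helper that tests whether X meets every qualification attached to a word, showing how two people can attach different qualification sets to the same word and get opposite answers.

```python
# A category label is modeled as a set of qualification predicates.
# All names and predicates here are illustrative, not canonical.

def word_applies(x, qualifications):
    """X is [word] iff X meets every qualification a, b, c for [word]."""
    return all(q(x) for q in qualifications)

# Two different qualification sets for the same word, "selfish":
selfish_strict = [lambda x: x["creates_win_lose"]]      # my expectation
selfish_loose = [lambda x: x["acts_for_own_benefit"]]   # the friend's definition

nice_friend = {"creates_win_lose": False, "acts_for_own_benefit": True}

print(word_applies(nice_friend, selfish_strict))  # False
print(word_applies(nice_friend, selfish_loose))   # True
```

Same person, same word, opposite verdicts; the disagreement is entirely in the qualifications.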
Now that you have a new frame to fit everything in, let's dive into a couple of curveballs.
Ex. 3: If God was all-powerful, could he make a rock so big that he couldn’t lift it?
This is about the qualifications of the word "all-powerful", and it's implying that one of those qualifications is "can create a situation that disqualifies itself as all-powerful". You could define all-powerful that way; however, I (and most other people) expect a definition that means "has a lot of power/abilities", like miracles, time travel, etc., and not something paradoxical.
Ex. 4: Is a hotdog a sandwich?
Considering only expectation preservation: if someone asked me to make them a sandwich, and I handed them either (A) a ham sandwich, (B) a tomato-with-mayo sandwich, or (C) a hotdog, which one would most surprise them?
Ex. 5: If a tree falls in the woods, and no one is around to hear it, would it make a sound?
What qualifies as a sound? If we agree that it's vibrational waves between 20 Hz and 20 kHz, then it made a sound. If we agree that there has to be someone to hear it, then it didn't make a sound. Since the purpose is communication/expectation preservation, we can just agree on a set of qualifications, solve the philosophical problem, and move on.
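To make the "pick your qualifications, then the answer falls out" point concrete, here is a toy sketch (my own illustration) where the two candidate definitions of "sound" give different answers for the same falling tree:

```python
# Two candidate qualification sets for "sound" (illustrative only).

def is_sound_physical(event):
    """Sound = vibrational waves roughly between 20 Hz and 20 kHz."""
    return 20 <= event["frequency_hz"] <= 20_000

def is_sound_perceptual(event):
    """Sound = vibrations that someone actually hears."""
    return is_sound_physical(event) and event["heard_by"] > 0

falling_tree = {"frequency_hz": 150, "heard_by": 0}

print(is_sound_physical(falling_tree))    # True  -> "it made a sound"
print(is_sound_perceptual(falling_tree))  # False -> "it didn't"
```

The "philosophical problem" is just the choice between the two functions; once both parties pick one, the question has a plain answer.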
What algorithm were you running? What’s an ideal algorithm for correcting these types of arguments? You have 3 minutes (The previous examples should fit in your algorithm)
1. What is the key word?
2. What are their qualifications for that word?
3. What are the desired qualifications given the context?
4. If (2) and (3) disagree, then argue about which set of qualifications provides clearer communication given the context.
Running this algorithm on the previous examples is easy, except for "the tree falling in the woods". The key word is "sound", but there is no "(3) desired qualification" for sound in this case. If there is no agreed qualification, then what's needed is to agree on one. Updating:
1. What is the key word?
2. What are their qualifications for that word?
3. What are the desired qualifications given the context?
4. If (3) doesn't exist, then agree on a set of qualifications.
5. Else if (2) and (3) disagree, then argue about which set of qualifications provides clearer communication given the context.
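The updated steps can also be written as one small decision procedure. This is a rough sketch of my own: the "agree" and "argue" steps are stubbed out as returned recommendations, and all names are mine rather than the post's.

```python
def resolve_key_word(their_quals, desired_quals):
    """Steps 2-5 of the updated algorithm: compare their qualifications
    for the key word against the qualifications the context calls for.
    `desired_quals` is None when no agreed qualification exists yet."""
    if desired_quals is None:
        # Step 4: no desired qualification (the tree falling in the woods).
        return "agree on a set of qualifications"
    if set(their_quals) != set(desired_quals):
        # Step 5: qualifications disagree (the "abuse" and "selfish" cases).
        return "argue about which set communicates more clearly here"
    return "no conflict: the word is doing its job"

print(resolve_key_word({"makes partner feel bad"}, None))
print(resolve_key_word({"makes partner feel bad"},
                       {"intentional manipulation"}))
```

Representing qualifications as plain sets is a simplification; the point is only that the branch structure mirrors steps 4 and 5.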
What are the differences/similarities/relationships between this and Proving Too Much? You have 3 minutes
My framing of Proving Too Much is a subset of this (I did kind of give that away in the intro). It's about the category of 100% truth/accurate predictions/map-territory correspondences, in the context of wanting to find actually true things that reflect reality. I expect a qualification to lead to only true claims; however, if it implies false or inconsistent claims, then my expectations are violated and that qualification is wrong.
Hard exercise: The category of 100% true reasons to believe things should have no members. How would you construct a category for 0-100% beliefs that is actually more useful?
With this algorithm down, let’s tackle a few more problems.
Final Problem Set
Ex. 6: That salad with cucumbers on it should be called a fruit salad, because cucumbers are botanically a fruit.
The key word is “fruit salad”. Most people who order a fruit salad at a restaurant and get cucumbers as their “fruit” would not be happy because their expectations were violated. As the saying goes, the customer is always right.
Ex. 7: Christianity is true to me like Islam is true to someone else.
This one is interesting because the qualifications for the word "true" are different than in Proving Too Much. Instead of being abrasive and claiming the word "true" as mine, insisting it can only mean one thing, I can instead just Taboo the word and replace it with its meaning:
"Oh, I don't mean true in that way, but I am talking about something close; let's call it 'reflects reality' instead of truth. If Christianity reflects reality, then if I built a time machine, I should be able to go back in time and see Jesus die, be buried, and rise again. Then the Bible reflects reality. If Islam reflects reality, then I should be able to go back in time and see Jesus ascend to heaven, but not die or resurrect. Then the Quran reflects reality. So the Bible and the Quran can't both reflect reality, since they predict I would see two different things. Does what I'm trying to say make sense?"
Ex. 8:
"Let's run further than we ran yesterday"
"You mean *farther"
Using the more grammatically correct word doesn't change our expectations. In either case, I'd expect to run a greater distance than yesterday. This is interesting because it generalizes to all grammar corrections that don't change expectations when the context is communication. If the context is instead signalling competence (like a resume), then the correction would matter.
Ex. 9: Is cereal soup?
If I told someone that I made them soup and then put a bowl of cereal in front of them, they might laugh or be disappointed. Either way, their expectations were violated.
Ex. 10: Is water wet?
If someone told me the pool water is wet, I'd think they were saying something trivially true to be silly. If they told me, in all seriousness, the water isn't wet, then I might think the water is fake/an illusion or that generally something is wrong with the water. So in the context of two normal people communicating, water is expected to be described as wet. (If the context is chemistry research papers, there may be a different answer)
One of the purposes of arguing well is clear communication. When talking to someone else (or yourself!), knowing what key words mean to each person aids in understanding each other and helps avoid confusion.
In the next post, we’ll be discussing false dilemmas, how they arise, and how to deal with them. What's wrong with the classic example:
"You're either with me, or against me!"
[Feel free to comment your answer to the hard question, and whether you got different answers/generalizations/algorithms than I did. Same if you feel like you hit on something interesting or that there's a concept I missed. Adding your own examples with the spoiler tag >! is encouraged]