The Most Common Bad Argument In These Parts

by J Bostock
11th Oct 2025
5 min read
9 comments, sorted by top scoring
Thane Ruthenis (2h, 16 points)

> Exhaustive Free Association is a step in a chain of reasoning where the logic goes "It's not A, it's not B, it's not C, it's not D, and I can't think of any more things it could be!"

Oh no, I wonder if I ever made that mistake.

> Security Mindset

Hmm, no, I think I understand that point pretty well...

> They listed out the main ways in which an AI could kill everyone (pandemic, nuclear war, chemical weapons) and decided none of those would be particularly likely to work

Definitely not it, I have a whole rant about it. (Come to think of it, that rant also covers the security-mindset thing.)

> They perform an EFA to decide which traits to look for, and then they perform an EFA over different "theories of consciousness" in order to try and calculate the relative welfare ranges of different animals.

I don't think I ever published any EFAs, so I should be in the clear here.

> The Fatima Sun Miracle

Oh, I'm not even religious.

Phew! I was pretty worried there for a moment, but no, looks like I know to avoid that fallacy.

Matt Goldenberg (1h, 4 points)

This does feel like a nearby fallacy of denying specific examples, which maybe should have its own post

Trevor Hill-Hand (1h, 3 points)

This comment feels like you want to say something different than what you wrote.

Thane Ruthenis (40m, 2 points)

Explanation

(The post describes a fallacy where you rule out a few specific members of a set using properties specific to those members, and proceed to conclude that you've ruled out that entire set, having failed to consider that it may have other members which don't share those properties. My comment takes specific examples of people falling into this fallacy that happened to be mentioned in the post, rules out that those specific examples apply to me, and proceeds to conclude that I'm invulnerable to this whole fallacy, thus committing this fallacy.

(Unless your comment was intended to communicate "I think your joke sucks", which, valid.))

Raemon (1h, 2 points)

yep that's correct

abstractapplic (2h, 4 points)

> here

Link goes to Ethan Muse again, and not to ACX.

Mateusz Bagiński (3h, 4 points)

Good post!

Why did you call it "exhaustive free association"? I would lean towards something more like "arguing from (falsely complete) exhaustion".

Re it being almost good reasoning, a main thing making it good reasoning rather than bad reasoning is having a good model of the domain so that you actually have good reasons to think that your hypothesis space is exhaustive.

plex (3h, 2 points)

Non-Exhaustive Free Association or Attempted Exhaustive Free Association seems like a more accurate term?

Edit: oh oops, @Mateusz Bagiński beat me to it. Convergence!

Vladimir_Nesov (3h, 2 points)

Some of this might be conflation between within-model predictions and overall predictions that account for model uncertainty and unknown unknowns. Within-model predictions are in any case very useful as exercises for developing/understanding models, and as anchors for overall predictions. So it's good actually (rather than a problem) when within-model predictions are being made (based on whatever legible considerations come to mind), including when they are prepared as part of the context before making an overall prediction, even for claims/predictions that are poorly understood and not properly captured by such models.

The issue is that when you run out of models and need to incorporate unknown unknowns, the last step that transitions from your collection of within-model prediction anchors to an overall prediction isn't going to be legible (otherwise it would just be following another model, and you'd still need to take that last step eventually). It's an error to give too much weight to within-model anchors (rather than some illegible prior) when the claim/prediction is overall poorly understood, but also sometimes the illegible overall assessment just happens to remain close to those anchors. And even base rates (reference classes) are just another model; they shouldn't claim to be the illegible prior at the end of this process, not when the claim/prediction remains poorly understood (and especially not when its understanding explicitly disagrees with the assumptions for base rate models).

So when you happen to disagree about the overall prediction, or about the extent to which the claim/prediction is well-understood, a prediction that happens to remain close to the legible anchors would look like it's committing the error described in the post, but it's not necessarily always (or often) the case. The only way to resolve such disagreements would be by figuring out how the last step was taken, but anything illegible takes a book to properly communicate. There's not going to be a good argument, for any issue that's genuinely poorly understood. The trick is usually to find related but different claims that can be understood better.
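
For concreteness, a minimal sketch of that last aggregation step (one possible formalization, with hypothetical weights standing in for the illegible part): give legible models \(M_1, \dots, M_n\) weights \(w_i\), and let the leftover weight go to an illegible prior \(q\) over unknown unknowns:

\[
P(X) \;=\; \sum_{i=1}^{n} w_i \, P(X \mid M_i) \;+\; \Big(1 - \sum_{i=1}^{n} w_i\Big)\, q
\]

In these terms, the post's fallacy is setting \(1 - \sum_i w_i\) to zero; the subtler error is choosing that split, and \(q\) itself, as if they were legible.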


I've noticed an antipattern. It's definitely on the dark Pareto frontier of "bad argument" and "I see it all the time amongst smart people". I'm confident it's the worst common argument I see amongst rationalists and EAs. I don't normally crosspost to the EA Forum, but I'm doing it now. I call it Exhaustive Free Association.

Exhaustive Free Association is a step in a chain of reasoning where the logic goes "It's not A, it's not B, it's not C, it's not D, and I can't think of any more things it could be!"[1] Once you spot it, you notice it all the damn time.

Since I've most commonly encountered this amongst rat/EA types, I'm going to have to talk about people in our community as examples of this.

Examples

Here are a few examples. These are mostly for illustrative purposes, and my case does not rely on me having found every single example of this error!

Security Mindset

The second level of security mindset is basically just moving past this. It's the main thing that separates security mindset from ordinary paranoia: ordinary paranoia performs an exhaustive free association as a load-bearing part of its safety case.

Superforecasters and AI Doom

A bunch of superforecasters were asked what their probability of an AI killing everyone was. They listed out the main ways in which an AI could kill everyone (pandemic, nuclear war, chemical weapons) and decided none of those would be particularly likely to kill everyone. They ended up giving some ridiculously low figure; I think it was less than one percent. Their exhaustive free association did not successfully find options like "An AI takes control of the entire supply chain, and kills us by heating the atmosphere to 150 °C as a by-product of massive industrial activity."

Clearly, they did something wrong. And these people are smart! I'll talk later about why this error is so pernicious. 
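
To see how large the gap can get, here's a toy calculation with made-up numbers (illustrative only, not the superforecasters' actual estimates). Suppose the three listed routes get 0.2%, 0.1%, and 0.05%, and the routes nobody free-associated would collectively work with 5% probability:

\[
P(\text{doom}) \;=\; \underbrace{0.002 + 0.001 + 0.0005}_{\text{listed routes}} \;+\; \underbrace{0.05}_{\text{unlisted routes}} \;\approx\; 0.054
\]

The term the free association dropped dominates the answer by more than an order of magnitude.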

With Apologies to Rethink Priorities

Yeah I'm back on these guys. But this error is all over the place here. They perform an EFA to decide which traits to look for, and then they perform an EFA over different "theories of consciousness" in order to try and calculate the relative welfare ranges of different animals.

The numbers they get out are essentially meaningless, to the point where I think it's worse to look at those numbers than just read the behavioural observations on different animals and go off of vibes.

The Fatima Sun Miracle

See an argument here. The author raises and knocks down an extremely long list of possible non-god explanations for a miracle, including hallucinating children, and demons.

I'm going to treat the actual fact-of-the-matter as having been resolved by Scott Alexander here. Turns out, there's a weird visual effect you get when you look at the sun, sometimes, which people have reported in loads of different scenarios.

Bad Reasoning is Almost Good Reasoning

Most examples of bad reasoning that are common amongst smart people are almost good reasoning. Listing out all the ways something could happen is good, if and only if you actually list out all the ways something could happen, or at least manage to grapple with most of the probability mass.
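
Formally, the missing piece is the residual term in the law of total probability. If the free association turned up mutually exclusive hypotheses \(H_1, \dots, H_n\) for how event \(X\) could happen, then:

\[
P(X) \;=\; \sum_{i=1}^{n} P(X \wedge H_i) \;+\; P\big(X \wedge \lnot(H_1 \vee \dots \vee H_n)\big)
\]

An exhaustive free association silently sets the second term to zero, so the whole argument is exactly as strong as that unstated assumption.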

So the most dangerous thing about this argument is how powerful it is. Internally and externally. Those superforecasters managed to fool themselves with an argument resting on an exhaustive free association. Ethan Muse has been described as "supernaturally persuasive".

If you can't see the anti-pattern, I can see why this would be persuasive! Someone deploying an exhaustive free association looks, at first glance, to be using a serious, knock-down argument.

And in fact, for normal, reference-class-based things, this usually works! Suppose you're planning a party. You've got a pretty good shot at listing out all the things you need, since you've been to plenty of parties before. So an exhaustive free association can look, at first glance, like an expert with loads of experience. This requires you to follow the normal rules of reference-class forecasting.[2]

Secondly, it's both locally valid and very powerful, if you can perform an exhaustive search. You have to do a real exhaustive search, like "We proved that this maths problem reduces to exactly 523 cases and showed the conjecture holds for all of them" (a minimal sketch of this kind of check follows below). But in this case, the hard part is the first step, where you reduce your conjecture from infinite cases to 523, which is a reduction by a factor of infinity.
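
Here's what the genuinely exhaustive version looks like: the case set is fixed and argued complete before any case is examined. A minimal sketch in Python, using Euler's polynomial \(n^2 + n + 41\) (a standard small example, standing in for the hypothetical 523-case proof):

```python
# A genuinely exhaustive check: the case set is fixed and complete *before*
# any case is examined. Contrast with a free association, where the "case
# set" is just whatever happened to come to mind.

def is_prime(m: int) -> bool:
    """Trial division; fine for the small numbers checked here."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Claim: n^2 + n + 41 is prime for every integer n in 0..39.
# (The pattern famously breaks at n = 40, where the value is 41^2.)
cases = range(40)
assert all(is_prime(n * n + n + 41) for n in cases)
print(f"Conjecture verified for all {len(cases)} cases.")
```

The hard step a free association skips is the analogue of writing down range(40): committing, up front, to a case set you can argue is complete.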

Thirdly, there are lots of people who haven't quite shaken themselves out of student mode. You'll come across lots of arguments, at school and at university, which closely resemble an exhaustive free association. And most of them are right!

Arguments as Soldiers

Lastly, an exhaustive free association is intimidating! It is a powerful formation of soldiers. I'm fully aware that, if I had a public debate with someone like Ethan Muse, the audience would judge him more persuasive. That man has spent an enormous amount of time thinking of arguments and counter-arguments. Making the point that "Actually there probably exists a totally naturalistic explanation which I cannot currently think of" sounds ridiculous!

An Exhaustive Free Association can also be laid as a trap: suppose I mull over a few possible causes, arguments, or objections. I bring them up, but my interlocutor has already prepared the counter-argument for each. In a debate, I am humbled and ridiculed. (Of course, if I am honestly reading something, I duly update as follows: "Hmm, the first three ideas I had were shot down instantly, what chance is there that my later arguments come up correct?") Of course, the only reason we're discussing this particular case is because my interlocutor's free association failed to turn anything up!

Sure, stay in scout mindset as much as you can. But if you notice someone massing forces into formations (especially one like this) perhaps you should worry more about their mindset than your own.

Conclusion

Noticing an exhaustive free association is only part of the battle. It's not even a big part. You have to have the courage to point it out, and then you have to decide whether or not it's worth your time. Do you spend hours picking over the evidence to find the point where their argument truly fails, or do you give up?

Hopefully, you now have a third option. "This argument appears to depend on an exhaustive free association," you mutter to yourself, or perhaps to some close friends. You will come back to deal with it later, if you have the time.

The Counter-Counter Spell

Of course, crying "Exhaustive Free Association!" is a rather general counter-argument. I would be remiss to post this without giving some hint at a defence to this defence. One method is reference classes. If you wish to dive into that mud-pit, so be it. Another method is simply to show that your listing is exhaustive, through one means or another.

But honestly? The best defence is to make your argument better. If you're relying on something which looks like an exhaustive free association, your first worry should be that it actually is! Invert the argument to a set of assumptions. Go up a level to a general principle. Go down a level to specific cases. 

 

  1. ^

    Apparently I'm writing in verse now. Well, in for a penny, in for a pound I suppose!

    It's not A, it's not B, it's not C, it's not D,
    And I can't think of any more things it could be!
    Cried the old furry fox-frog who lived in the swamp,
    As he heard far-off trees falling down with a "thwomp".

    For there's been no great storm which might break all these boughs,
    And I've not heard a great herd of elephant-cows,
    Which sometimes come this way with trumpety-moos,
    And knock down a few trees with their big grey-black hooves,

    And I've felt not a rumble or tremor at all,
    And the river's been running quite low since last fall!
    So the noises that keep me up, when I'm in bed,
    No they just can't be real, they must be in my head!

    So the old furry fox-frog picked up his old cane,
    And he walked to the house of the heron-crow-crane
    And they both puzzled over the noise in his brain,

    And of course, both were flattened, when the steamrollers came.

  2. ^

    That is, your new item must have only one obvious reference class, or at least one obvious best, most specific, reference class. "Car" is better than "Mode of transport" for predicting the properties of a Honda; "Herbivore" vs "Mode of transport" might produce different, conflicting predictions about the properties of a horse.