Motte and bailey is a concept introduced by Nicholas Shackel and popularised by Scott Alexander, who describes it as follows:

The original Shackel paper is intended as a critique of post-modernism. Post-modernists sometimes say things like “reality is socially constructed”, and there’s an uncontroversially correct meaning there. We don’t experience the world directly, but through the categories and prejudices implicit to our society; for example, I might view a certain shade of bluish-green as blue, and someone raised in a different culture might view it as green. Okay.

Then post-modernists go on to say that if someone in a different culture thinks that the sun is light glinting off the horns of the Sky Ox, that’s just as real as our own culture’s theory that the sun is a mass of incandescent gas. If you challenge them, they’ll say that you’re denying reality is socially constructed, which means you’re clearly very naive and think you have perfect objectivity and the senses perceive reality directly.

The writers of the paper compare this to a form of medieval castle, where there would be a field of desirable and economically productive land called a bailey, and a big ugly tower in the middle called the motte. If you were a medieval lord, you would do most of your economic activity in the bailey and get rich. If an enemy approached, you would retreat to the motte and rain down arrows on the enemy until they gave up and went away. Then you would go back to the bailey, which is the place you wanted to be all along.

So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you retreat to an obvious, uncontroversial statement, and say that was what you meant all along, so you’re clearly right and they’re silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.

Sometimes motte-and-bailey arguments are the result of bad faith, but I suspect that in many cases those making them have no idea that they are engaging in such a strategy. In fact, it seems highly likely that one or more motte-and-baileys are popular among rationalists. What are the most common such motte-and-baileys?

The very concept of a "rationalist" is an egregious one! What is a rationalist, really? The motte: "one who studies the methods of rationality, systematic methods of thought that result in true beliefs and goal achievement". The bailey: "a member of the social ingroup of Eliezer Yudkowsky and Scott Alexander fans, and their friends."

Yeah, this already bothered me some, but your way of putting it here makes it bother me more.

I think the motte/bailey often runs in the other direction, though, for modesty-ish reasons: there's a temptation to redefine 'rationalist' as a social concept, because it looks more humble to say 'I'm in social circle X' than to say 'I'm part of important project X' or 'I'm a specialist in X', when you aren't doing X-stuff as part of a mainstream authority like academia.

I think there are two concepts I tend to want labels for, which I sometimes use 'rationalist' to refer to (though I hope I'm not switching between these in a deceptive/manipulative way!):

  • 'One who successfully develops and/or applies the methods of getting systematically better at mapping reality (and, optionally, steering reality) from inside a human brain.'
  • 'One who is highly acquainted with the kinds of ideas in the Sequences, and with related ideas that have been a major topic on LW (e.g., the availability heuristic and reductionism and conservation of expected evidence, but also ems, tractability/importance/neglectedness, updateless decision theory, ugh fields, ideas from Alicorn's Twilight fanfic...).'

I think the latter... (read more)

Rob Bensinger (3y)
Thinking about it more: I can imagine a group that tries to become unusually good at learning true things in a pretty domain-general way, so they call themselves the Learnies, or the Discoverers, or the Trutheteers. If a group like that succeeds in advancing its art, then it should end up using that art to discover at least some truths about the world at large that aren't widely known. This should be a reliable consequence of 'getting good at discovering truths'. And in many such worlds, it won't be trivial for the group to then convince the entire rest of the world to believe the things they learned overnight. So five years might pass and the Learnies are still the main group that is aware of, say, the medical usefulness of some Penicillium molds (because they and/or the world they're in is dysfunctional in some way).

It seems natural to me that the Learnies' accumulated learnings get mentally associated with the group, even though penicillin doesn't have any special connection to the art of learning itself. So I don't think there's anything odd about rationalists2 being associated with 'knowledge we accumulated in the process of applying rationality techniques', or about my wanting a way to figure out to what extent someone at a party has acquired that knowledge.

I think this example does illustrate a few issues, though:

  • First, obviously, it's important to be able to distinguish 'stuff the Learnies learned that's part of the art of learning' from 'other stuff the Learnies learned'. Possibly it would help here to have more names for the clusters of nonstandard beliefs rationalists2 tend to have, so I'm somewhat less tempted to think 'is this person familiar with the rationalist2-ish content on ems?' vs. 'is this person familiar with the techno-reductionist content on ems?' or whatever.
  • Second, I've been describing the stuff above as "knowledge". But what if there's a dispute about whether the Learnies' worldly discoveries are true? In that case, maybe it's...

It doesn't help when Yudkowsky actively encourages this confusion! As he Tweeted today: "Anyways, Scott, this is just the usual division of labor in our caliphate: we're both always right, but you cater to the crowd that wants to hear it from somebody too modest to admit that, and I cater to the crowd that wants somebody out of that closet."

Just—the absolute gall of that motherfucker! I still need to finish my memoir about why I don't trust him the way I used to, but it's just so emotionally hard—like a lifelong devout Catholic denouncing the Pope. But wha... (read more)

This doesn't seem to be about the term rationalist at all. It seems to be about which rhetorical style different people prefer. Eliezer makes his points in a much more confident and more polarizing way than Scott.

TAG (3y)
In my experience, Scott has an epistemic style where he anticipates and seeks out contrary information, and Eliezer does not; he's more into early cognitive closure. It's not just tone, it's method.
tangren (3y)
No, not really? I generally ignore anything Scott writes which could be described as 'agreeing with Yud' -- it's his other work I find valuable, work I wouldn't expect Yud to write in any style.

I made a similar, but slightly different argument in Pseudo-Rationality:

"Pseudo-rationality is the social performance of rationality, as opposed to actual rationality."

I've been Rationalist-adjacent in my ideals for over 10 years now, but have never taken part in the community (until this post, hello!), precisely because I find this fallacy throughout a lot of Rationalist discourse and it has put me off.

The motte: "Here is some verifiable data that suggests my hypothesis. It is incomplete, and I may be wrong. I am but a humble thinker, calling out into the darkness, looking for a few pinpricks of truth's light."

The bailey: "The limitations in my data and argument are small enough that I can confidently make a complex conclusion at the end, to some confidence interval. Prove my studies wrong if you disagree. If you respond to my argument with any kind of detectable emotion I will take this as a sign of your own irrationality and personal failings."

In my reading, the bailey tends to come out in a few similar Rationalist argument styles. What they all have in common is that some lip service is usually paid to the limitations of the argument, but the poster still goes on as if their overall argument is probable and valid, rather than a fundamentally unsupported post-hoc rationalization built on sand. I tend to see:

  1. The poster makes an arbitrary decision to evaluate the core hypothesis by proxying it onto a set of related, but fundamentally different, metrics from the actual thesis, where the proxy metrics are easily testable and the actual thesis is very broad. The evaluation that follows using the chosen metrics is reasonable, but the initial choice to even use those metrics as a proxy for the thesis question is subjective, unjustified, and the conclusion would have gone another way had different and arguably just as justifiable proxy metrics been chosen instead. The proxy is never mentioned. Or if it is, it's hand-waved away as "of course there are other ways to evaluate this question..." But assuming that your toy metrics equate to a wider narrative is a fundamental error. Analysis is limited to the scope of what it's analyzing to stay accurate.
  2. The poster shows their work with some math (usually probabilities) to prove a real-world point, but the math is done on a completely hypothetical thought experiment. Can't argue with math! The entire meat of this hinges on the completely unjustified implication that the real world is enough like the thought experiment that the probabilities from one are relevant to both. But the thought experiment came from the poster's mind, and its similarity to reality is backed up by nothing. There is no more inherent reason why probabilities derived from a hypothetical example would apply to reality than random numbers thrown into the comment box would, but because there's some math work included it's taken as more accurate than the poster saying "I think the world is like X" outright.
  3. Using Bayesian reasoning and confidence intervals to construct a multi-point argument of mostly-unproven assertions that all rely on each other, so that the whole is much weaker than the sum of its parts. The argument is made as if the chance of error at each successive step is additive rather than compounding, and as if the X% confidence interval the author assigns to each unproven assertion is the actual real probability of it being true. But in reality, confidence intervals are a post-hoc label we give to our own subjective feelings when evaluating a statement that we believe but haven't proven. The second you label an unsupported statement with one of these, you've acknowledged that you've left the realm of what you can be sure of as objective reality. Each successive one in an argument makes the problem worse, because the error compounds (see the sketch after this list). It would be more honest and objective for the argument to stop at the very first doubtful point and leave it there with a CI for future discussion. But instead I see a lot of "of course, this can't be really known right now, but I think it's 65% likely given the inconclusive data I have so far, and if we assume that it's true for the sake of argument..." and then it continues further into the weeds for another few thousand words.
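
To make the compounding point concrete, here's a minimal numeric sketch in Python. The five 90% confidences are made-up illustrative numbers, and the sketch assumes the steps are independent and the stated confidences accurate, which is exactly what's usually in dispute:

```python
# A made-up chain of five assertions, each held at 90% confidence.
# Assumes independence and that the stated confidences are accurate.
confidences = [0.9, 0.9, 0.9, 0.9, 0.9]

joint = 1.0
for step, c in enumerate(confidences, start=1):
    joint *= c
    print(f"after step {step}: {joint:.0%} chance every claim so far holds")

# after step 5: 59% -- the "if we assume it's true" chain quietly
# turned five individually plausible claims into barely better than
# a coin flip.
```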

Obviously this comment is critical, but I do mean this with good humor and I hope it is taken as such. The pursuit of truth is an ideal I hold important.

(An aside: the characterization of post-modern argument in the OP is only accurate in the most extreme and easily parodied of post-modernist thinkers. Most post-modernists would argue that social constructs are subjective narratives told on top of an objective world, and that many more things are socially constructed than most people believe. That the hypothetical about the sun is used as an example of bad post-modernist thought, instead of any of the actual arguments post-modernists make in real life, is a bit of a tip-off that it's not engaging with a steel man.)


It would be more honest and objective for the argument to stop at the very first doubtful point and leave it there with a CI for future discussion.

This seems fine until you have to make actual decisions under uncertainty. Most decisions have multiple uncertain factors going into them, and I think it's genuinely useful to try to quantify your uncertainty in such cases (even if it's very rough, and you feel the need to re-run the analysis in several different ways to check how robust it is, etc.).
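
For instance, here's a minimal sketch of what "rough quantification plus robustness checks" can look like; the factor names, ranges, and numbers are all illustrative assumptions, not a prescribed method:

```python
import random

def estimate(rng, cost_range=(5_000, 20_000)):
    # Each uncertain factor gets a rough subjective range instead of
    # a point estimate; all names and numbers here are made up.
    reach = rng.uniform(1_000, 10_000)   # people affected
    benefit = rng.uniform(0.01, 0.2)     # benefit per person
    cost = rng.uniform(*cost_range)      # total cost
    return reach * benefit / cost        # benefit per dollar

rng = random.Random(0)
samples = sorted(estimate(rng) for _ in range(10_000))
print("median:", round(samples[5_000], 4))
print("10th-90th pct:", round(samples[1_000], 4), round(samples[9_000], 4))

# Crude robustness check: re-run under a pessimistic cost assumption
# and see whether the decision-relevant conclusion survives.
samples2 = sorted(estimate(rng, cost_range=(15_000, 40_000)) for _ in range(10_000))
print("pessimistic median:", round(samples2[5_000], 4))
```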

What would you propose doing in such cases? I'd be interested ... (read more)

ametipo (3y)
This is the closest to what I was trying to say, but I would scope my criticism even more narrowly. To try and put it bluntly and briefly: don't choose to suspend disbelief for multiple core hypotheses within your argument, while simultaneously holding that the final conclusion built off of them is objectively likely and has been supported throughout.

The motte with this argument style, that your conclusion is the best you can do given your limited data, is true and I agree. Because of that, this is a genuinely good technique for decision making in a limited space, as you mention. What I see as the bailey, though, that your conclusion is actually probable in a real and objective sense, and that you've proven it to be so with supporting logic and data, is what doesn't follow to me. Because you haven't falsified anything in an objective sense, there is no guaranteed probability or likelihood that you are correct, and you are more likely to be incorrect the more times in your argument you've chosen to deliberately suspend disbelief for one of your hypotheses to carry onward. Confidence intervals are a number you're applying to your own feelings, not actual odds of correctness, so they can't be objectively used to calculate your chance of being right overall.

Put another way, in science it is totally possible and reasonable for a researcher to have an informed hypothesis that multiple hypothetical mechanisms in the world all exist, and that they combine together to cause some broader behavior that so far has been unexplained. But if this researcher were to jump to asserting that the broader behavior is probably happening because of all these hypothetical mechanisms, without first actively validating all the individual hypotheses with falsifiable experiments, we'd label their proposed broad system of belief as a pseudoscience. The pseudoscience label would still be true even if their final conclusion turned out to be accurate, because the problem here is with the form (as...
TAG (3y)
I agree with what you are saying...but my brief version would be "don't confuse absolute plausibility with relative plausibility".

Yeah, it isn't really engaging with a steelman. But then again, the purpose of the passage is to explain a very common dynamic that occurs in post-modernism. And I guess it'd be similarly hard to explain a dynamic that sometimes makes government dysfunctional whilst also steelmanning it.

Although I don't think it's accurate to say that it's not representative of what post-modernists really argue. Maybe it doesn't accurately represent what philosophers argue, but it seems to fairly accurately represent what everyday people who ... (read more)

ametipo (3y)
The implied claim that I took from the passage (perhaps incorrectly) is that motte and bailey is a fallacy inherent to post-modernist thought in general, rather than a bad rhetorical technique that some post-modernist commenters engage in on the internet. From that, it should be easier, not harder, to cite real-world examples of it, since the rhetorical fallacy is actually widespread and representative of post-modern thought. The government example isn't analogous, as it would have at least been a real-world example, and the person in that hypothetical wouldn't be trying to argue that the dysfunctional dynamic is inherent to all government. But the quote chose to make up an absurd post-modernist claim about the sun being socially constructed to try and prove a claim that post-modernism is absurd.

I made my aside because I am a relatively everyday person who is a general fan of post-modernism, or at least the concept of social construction as I've described it, and I have a strong suspicion that whatever specific real-world examples the author is pattern-matching as denying objective reality probably have a stronger argument for being socially constructed than they're aware of. Or at least ones not as easily hand-waved away as absurd as their sun hypothetical.

This is all just an aside of an aside though, and I somewhat regret putting it in the body of my post and distracting from the rest. People generally do make terrible arguments on the internet, so in terms of sheer volume I do agree that bad arguments abound.

I think there's a tendency to assume the rationalist community has all the answers (e.g. The Correct Contrarian Cluster), which seems (a) wrong to me on the object-level, but also (b) at odds with a lot of other rationalist ideas.

If you point this out, you might hear someone say they're "only an aspiring rationalist", or "that's in the sequences", or "rationalists already believe that". Which can seem like a Motte and Bailey, if it doesn't actually dent their self-confidence at all.

(b) at odds with a lot of other rationalist ideas.

The great strength of Rationalism... yes, I'm saying something positive... is that its flaws can almost always be explained using concepts from its own toolkit.

I'm not sure what you mean by "has all the answers". I could imagine a rationalist thinking they're n standard deviations above the average college-educated human on some measure of 'has accurate beliefs about difficult topics', and you disagreeing and thinking they're average, or thinking their advantage is smaller. But that just seems like an ordinary disagreement to me, rather than a motte-and-bailey.

It seems at odds with rationalist ideas to assume you're unusually knowledgeable, but not to conclude you're unusually knowledgeable. 'I'm average' is just as... (read more)

tangren (3y)
Well, the motte is "I'm very epistemically humble", and the bailey is "that's why I'm always right".

If rationalists think they're right n% of the time and they're not, then that's condemnable in its own right, regardless of whether there's a motte-and-bailey involved.

If rationalists think they're right n% of the time and they are right n% of the time, but you aren't allowed to be honest about that kind of thing while also being humble, then so much the worse for humility. There are good forms of humility, but the form of 'humility' that's about lying or deceiving yourself about your competence level is straightforwardly bad.

Regardless, I don't think there's any inconsistency with being an 'aspiring rationalist'. Even if you're the most rational human alive, you probably still have enormous room to improve. Humans just aren't that good at reasoning and decision-making yet.

Dagon (3y)
The related motte and bailey I see is "I'm humble, and often wrong. But not on <whatever specific topic is at hand>."

Bayes!

The Bailey is that Bayes is just maths, and you therefore can't disagree with it.

When it is inevitably pointed out that self-described Bayesians don't use explicit maths that much, they fall back to the Motte: Bayes is actually just a bunch of heuristics for probabilistic reasoning.
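
For reference, here's a minimal sketch of what the "just maths" motte actually amounts to: Bayes' theorem applied to a standard screening-test example. All the numbers are illustrative assumptions, not data:

```python
# Bayes' theorem on a made-up diagnostic screening example.
prior = 0.01        # P(condition)
sensitivity = 0.80  # P(positive test | condition)
false_pos = 0.096   # P(positive test | no condition)

# P(positive) via the law of total probability
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Bayes' theorem: P(condition | positive test)
posterior = sensitivity * prior / p_positive
print(f"{posterior:.1%}")  # ~7.8% -- far lower than most people guess
```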

I think there's a common Motte and Bailey with religion:

Motte: Christianity and other religions in general are almost certainly untrue. Adherents to religions have killed many people worldwide. The modern world would be better if more religious followers learned rationality and became atheists.

Bailey: The development and continued existence of religion has on the whole been a massive net negative for humanity and we would be better off if the religions never existed and people were always atheists.

I don't even think the bailey is outright stated that often by smart rationalists, as much as it is sometimes implied, and only stated outright by zealous, less-smart atheists. The zealous atheists are likely succumbing to the affect heuristic, and automatically reject the assertion that religion may have been a net positive historically even if it is no longer worthwhile. But they most often defend the claim that religion was terrible for humanity by retreating to the Motte.

Bailey: "Religion is harmful and untrue."

Motte: "Christianity and Islam (and occasionally Orthodox Judaism) are harmful and untrue."

Sherrinford (3y)
Shouldn't it be the other way round?
lsusr (3y)
Yes. Fixed. Thanks.

I feel like both sides of the "White Fragility" debate have some of this going on.

I don't feel like I've exactly seen rationalists on these sides (in large part because the discussion generally hasn't been very prominent), but I've seen lots of related people on both sides, and I expect rationalists to have similar beliefs to those people. (Myself included) 

https://www.lesswrong.com/posts/pqa7s3m9CZ98FgmGT/i-read-white-fragility-so-you-don-t-have-to-but-maybe-you?commentId=wEuAmC2kYWsCg4Qsr