Motte and bailey is a technique by which one protects an interesting but hard-to-defend view by making it similar to a less interesting but more defensible position. Whenever the more interesting position (the bailey) is attacked, one retreats to the more defensible one (the motte); when the attackers are gone, one expands again into the bailey.

In that case, one and the same person switches between two interpretations of the original claim. Here I want instead to focus on situations where different people adopt different interpretations of the original claim. The originator of the claim adds a number of caveats and hedges, which makes it more defensible but less striking, and sometimes also less interesting.* When others refer to the same claim, however, the caveats and hedges gradually disappear, making it more and more bailey-like.

A salient example of this is that scientific claims (particularly in messy fields like psychology and economics) often come with a number of caveats and hedges, which tend to get lost when the claims are retold. This is especially so when the media write about these claims, but even other scientists often fail to properly transmit all the hedges and caveats that come with them.

Since this happens over and over again, people probably do expect their hedges to drift to some extent. Indeed, it would not surprise me if some people actually want hedge drift to occur. Such a strategy amounts to a more effective, because less observable, version of the motte-and-bailey strategy. Rather than switching back and forth between the motte and the bailey (something which is at least moderately observable, and which usually relies on an undesirable amount of vagueness), you let others spread the bailey version of your claim whilst you sit safe in the motte. This way, you get what you want, the spread of the bailey version, in a much safer way.

Even when people don't use this strategy intentionally, you could argue that they should expect hedge drift, and that failing to take action against it is, if not outright intellectually dishonest, then at least approaching that. This argument rests on the consequentialist notion that if you have strong reasons to believe that some negative event will occur, and you could prevent it from happening by fairly simple means, then you have an obligation to do so. I certainly think that scientists should do more to prevent their views from being garbled via hedge drift.

Another way of expressing all this: when including hedges or caveats, scientists often seem to seek plausible deniability ("I included these hedges; it's not my fault if they were misinterpreted") rather than actually trying to prevent their claims from being misunderstood.

What concrete steps could one then take to prevent hedge drift? Here are some suggestions; I am sure there are many more.

  1. Many authors use eye-catching, hedge-free titles and/or abstracts, and only include hedges in the paper itself. This is a recipe for hedge drift and should be avoided.
  2. Make abundantly clear, preferably in the abstract, just how dependent the conclusions are on key assumptions. Say this not in a way that merely enables you to claim plausible deniability in case someone misinterprets you, but in a way that actually reduces the risk of hedge drift as much as possible.
  3. Explicitly caution against hedge drift, using that term or a similar one, in the abstract of the paper.

* Edited 2/5 2016. By hedges and caveats I mean terms like "somewhat" ("x reduces y somewhat") and "slightly", as well as modelling assumptions without which the conclusions don't follow, and qualifications regarding domains in which the thesis doesn't hold.


Asking scientists to keep their paper titles hedge-drift-resistant means (1) asking each individual scientist to do something that will reduce the visibility of their work relative to others', for the sake of a global benefit -- a class of policy that for obvious reasons doesn't have a great track record -- and (2) asking them to give their papers titles that are boring and wordy.

I agree that the world might be a better place if scientists consistently did this. But it doesn't seem very likely to happen.

(Also, here's what might happen if they almost consistently did this: the better, more conscientious scientists all write carefully hedged articles with carefully hedged titles, and journalists ignore all of them because they all sound like "Correlational analysis of OCEAN traits weakly suggest slight association between conscientiousness and Y-chromosome haplogroup O3". A few less careful scientists write lower-quality papers that, among other things, have titles like "The Chinese work harder: correlational analysis of OCEAN traits and genotype", and those are the ones that the journalists pick up. These are also the ones without the careful hedging in the actual analysis, without serious attempts to correct for multiple correlations, etc. So we end up with worse stuff in the press.)

I don't think they have to be wordy as much as conservative in their claims.

Also, honestly, the state of science journalism is so utterly abysmal that it's a whole other discussion. I don't know how much point there is in worrying that "this will select for the more clickbait-y, inaccurate takes": we already have selection for those anyway, and it's so bad I doubt it can get significantly worse. Yours is not a hypothetical scenario; it's how things are now. What we'd need would be journals straight-up enforcing guidelines for titles that do not allow unclear or ambiguous claims.


The form of conservatism that the OP is about is (I think pretty much necessarily) one that makes for wordier and less eye-catching titles.

I agree that the state of science journalism is bad. I don't think I agree that it couldn't get significantly worse. I think having stronger norms saying that conscientious scientists should avoid "eye-catching, hedge-free titles and/or abstracts", etc., might end up making it either better or worse, and my money is on worse. More specificity about the mechanism: I conjecture that journalists will quite reliably ignore anything that isn't eye-catching and not-too-hedged; if it's fairly common for even good scientists to give their papers such titles, then some of what journalists pick up will be good science, albeit incautiously expressed; if all the good scientists are being too careful for that, then all of what journalists pick up will be bad science.

I think conscientious scientists already try doing roughly that today; that's why I'm saying this isn't hypothetical. Having a stronger norm might lead to more of them doing it; some defectors would always remain, of course, but maybe they would at least be regarded less well within their own community. Forget journalists: right now we have a problem even with academic journals being biased towards catchy positive results.

"if it's fairly common for even good scientists to give their papers such titles"

It depends also on what we're talking about. Honestly, my experience is that the most stand-out type of paper title isn't necessarily "sensational claim" so much as "tongue-in-cheek reference", which is a pretty neutral concession to visibility. It's a very field-dependent problem, though. But title-wise, I think the worst of the drift happens in press releases and then through journalists. You can make paper titles catchy without making them state any claim (an easy pattern: just state what you're looking at, like "Studying the relationship of X and Y" instead of "Positive correlation of Y with X" or whatever).

Good points. I agree that what you write within parentheses is a potential problem. Indeed, it is a problem for many kinds of far-reaching norms on altruistic behaviour, compliance with which is hard to observe: they might handicap conscientious people relative to less conscientious people to such an extent that the norms do more harm than good.

I also agree that individualistic solutions to collective problems have a chequered record. The point of 1)-3) was rather to indicate how you potentially could reduce hedge drift, given that you want to do that. To get scientists and others to want to reduce hedge drift is probably a harder problem.

In conversation, Ben Levinstein suggested that it is partly the editors' role to frame articles in a way such that hedge drift doesn't occur. There is something to that, though it is of course also true that editors often have incentives to encourage hedge drift as well.

Thanks for giving a name to this phenomenon.

"Indeed, it would not surprise me if some people actually want hedge drift to occur. They don't actually try to prevent their claims from being misunderstood."

It's much worse. In my experience as an academic, most departments simply pre-hedge-drift their press releases. Science journalists don't (and are often not qualified to) read and comment on the actual papers; all they have to work with is the press release.

Yes, a new paper confirms this.

The association between quality measures of medical university press releases and their corresponding news stories—Important information missing

This made me think of this cartoon:

Your analysis of hedge drift as a motte-and-bailey sounds plausible to me, but if true I think your recommendation that people should make the hedges stick better is incorrect. If the claim people really want to make is the hedgeless bailey claim, then they ought to make that to begin with, and if anything explain why the hedges aren't needed.


This is funny because I was going to write a piece titled "Delicate use of hedging", saying similar things. I will try to get my post written and published ASAP to match yours. It was more about encouraging people to use hedging clearly in discussions and communications, especially to represent being unsure about something you wish to communicate.

Also, a link explaining what a hedge is would help this post.

On the other hand, people can only be obligated to achieve what is possible, and hedge loss might be inevitable.

There might be a long haul project of lowering the status of sloppy science journalism, but this is very long term.

I assume authors control the titles for journal articles. They don't for journalism.


It's not only in the social sciences that this phenomenon is common. The most striking examples I've seen were in medicine. An article is published, for example "Supplement xyz slightly reduces a few of the side effects encountered during radiotherapy used in cancer treatment", which is then reported in the media and on social networks as "What the medical industry doesn't want you to know: supplement xyz instantly cures all forms of cancer!". Often there is a link to the original publication, but people still believe the distorted version and forward it. What's even sadder, many people then probably buy that supplement and don't seek medical help, believing that it alone will help.