Strengthening the foundations under the Overton Window without moving it

by Katja Grace (Meteuphoric) · 2 min read · 14th Mar 2018 · 7 comments


Disagreement · Social Reality · Public Discourse
Personal Blog

As I understand them, the social rules for interacting with people you disagree with are like this:

  • You should argue with people who are a bit wrong
  • You should refuse to argue with people who are very wrong, because it makes them seem more plausibly right to onlookers

I think this has some downsides.

Suppose there is some incredibly terrible view, V. It is not an obscure view: suppose it is one of those things that most people believed two hundred years ago, but that is now considered completely unacceptable.

New humans are born and grow up. They are never acquainted with any good arguments for rejecting V, because nobody ever explains in public why it is wrong. They just say that it is unacceptable, and you would have to be a complete loser who is also the Devil to not see that.

Since it took the whole of humanity thousands of years to reject V, even if these new humans are especially smart and moral, they probably do not each have the resources to personally out-reason the whole of civilization for thousands of years. So some of them reject V anyway, because they do whatever society around them says is good person behavior. But some of the ones who rely more on their own assessment of arguments do not.

This is bad, not just because it leads to an unnecessarily high rate of people believing V, but because the very people who usually help get us out of believing stupid things – the ones who think about issues, and interrogate the arguments, instead of adopting whatever views they are handed – are being deprived of the evidence that would let them believe even the good things we already know.

In short: we don’t want to give the new generation the best sincere arguments against V, because that would be admitting that a reasonable person might believe V. Which seems to get in the way of the claim that V is very, very bad. Which is not only a true claim, but an important thing to claim, because it discourages people from believing V.

But we actually know that a reasonable person might believe V, if they don’t have access to society’s best collective thoughts on it. Because we have a whole history of this happening almost all of the time. On the upside, this does not actually mean that V isn’t very, very bad. Just that your standard non-terrible humans can believe very, very bad things sometimes, as we have seen.

So this all sounds kind of like the error where you refuse to go to the gym because it would mean admitting that you are not already incredibly ripped.

But what is the alternative? Even if losing popular understanding of the reasons for rejecting V is a downside, doesn’t it avoid the worse fate of making V acceptable by engaging people who believe it?

Well, note that the social rules were kind of self-fulfilling. If the norm is that you only argue with people who are a bit wrong, then indeed, if you argue with a very wrong person, onlookers will infer that they are only a bit wrong. But if instead the norm were that you should argue with people who are very wrong, then arguing with someone who was very wrong would not make them look only a bit wrong.

I do think the second norm wouldn’t be that stable. Even if we started out like that, we would probably get pushed back to the equilibrium we are in, because for various reasons people are somewhat more likely to argue with people who are only a bit wrong, even before any signaling considerations come into play. That makes arguing some evidence that you don’t think the person is too wrong. And once it is some evidence, then arguing makes it look a bit more like you think the person might be right. And then the people who are loath to look that way drop out of the debate, and so arguing becomes stronger evidence. And so on.

Which is to say, engaging V-believers does not intrinsically make V more acceptable. But society currently interprets it as a message of support for V. There are some weak intrinsic reasons to take this as a signal of support, which get magnified into it being a strong signal.

My weak guess is that this signal could still be overwhelmed by e.g. constructing some stronger reason to doubt that the message is one of support.

For instance, if many people agreed that there were problems with avoiding all serious debate around V, and accepted that it was socially valuable to sometimes make genuine arguments against views that are terrible, then prefacing your engagement with a reference to this motive might go a long way. Because nobody who actually found V plausible would start with ‘Lovely to be here tonight. Please don’t take my engagement as a sign of support or validation—I am actually here because I think Bob’s ideas are some of the least worthy of support and validation in the world, and I try to do the occasional prophylactic ludicrous debate duty. How are we all this evening?’


Comments

Treating wrongness as a quantity is at best a poor proxy for refusing to engage with people arguing in bad faith.

It does seem right to refuse to engage with people who don’t seem like they’re trying to process your argument, since that can mislead onlookers into thinking there’s a serious deliberative process going on (so they might try to triangulate from participants' points of view rather than considering arguments). If someone’s trying to get the right answer, that’s not really a problem, since they should presumably move pretty quickly towards your point of view if you are in fact right (and vice versa).

To give a somewhat more concrete/colorful example, offering a “defense” at a show trial with a predetermined verdict creates the impression that there’s a court trying to figure out what the right answer is, when there isn’t. (Related: it’s quite important that three people were found not guilty at Nuremberg, since is implies can.)

People who are not trying to get the right answer are going to be wronger than average, so these things will be positively correlated.

(Cross-posted from Katja's blog)

There's another thing that is quite different from a process perspective, but can look superficially similar: people in power often try to arrange things so that the powerless can't be heard. Thus, "X is not worth engaging with" isn't in itself a good or bad sign about someone's epistemic virtue; one has to look into what's going on and assess which side (if any) is proceeding in good faith.

“Since it took the whole of humanity thousands of years to reject V, even if these new humans are especially smart and moral, they probably do not each have the resources to personally out-reason the whole of civilization for thousands of years.”

New humans, even those who don't know the arguments, have background knowledge that the past didn't have. There are many false beliefs V such that past naive humans could reasonably believe V and present naive humans can reasonably believe V, but also many false beliefs V such that past naive humans could reasonably believe V but present naive humans can't reasonably believe V. (Maybe a reasonable person without domain knowledge could doubt evolution before it was known that almost all biologists would end up believing in it, but not after. Maybe it takes much more unreasonableness to be a fascist after WW2 and the Holocaust than before. And so on.) I think a lot of disagreement about whether to argue with people who believe V is driven by disagreement about how obvious not-V is given general modern background knowledge instead of by disagreement about general policy toward people of different reasonableness levels.

user:tempus has been reposting his reply to this comment from many different accounts (without also reposting my reply). Meanwhile, I think the parent comment received multiple downvotes. I think the same may be true of user:gjm's comment below. If these downvotes are from legitimate users, then I apologize, but if I were hunting for further user:tempus sockpuppets, that's where I'd look.

I take the point but remain doubtful for two reasons.

Reason 1:

Engaging with Very Bad Ideas can give them credibility by two somewhat different mechanisms. The first is the one described here: there's a social convention that you're not supposed to dignify the very worst ideas with a response. The second is the fact that, whatever the social conventions, merely hearing some idea being aired makes you more inclined to believe it.

The second of these won't go away even if you manage to shift the social conventions, and I suspect it won't be helped much by saying "This is an incredibly bad idea and by debating it I don't mean to imply that it's any good".

Reason 2:

(This is related to Benquo's point about people arguing in bad faith.) If an idea is Very Bad, then its adherents are unusually likely to be stupid, crazy, evil, trolling, or in some other way unlikely to be persuaded by reasoned debate. This is less true in the scenario described in the OP where the Very Bad Idea is largely abandoned and many of its adherents are smart contrarians, but in general my guess is that creationists are more than averagely likely to be stupid, Nazis are more than averagely likely to be evil, believers in alien abductions are more than averagely likely to be crazy, etc. (Obviously "stupid", "crazy", etc., should be understood as abbreviations for more nuanced descriptions of severe cognitive weaknesses.)

I think we would be better served by keeping the general convention that Very Bad Ideas don't deserve a response, but finding some way to carve out occasional exceptions. (I don't know what that way might be. Probably a difficult problem.)

All I'm saying is that near-unanimous agreement about P in the relevant scientific field is pretty strong probabilistic evidence for P, and reasonable people are more likely than unreasonable people to take probabilistic evidence into account. So if all you know is that someone disagrees with P and hasn't heard the best arguments, such near-unanimous agreement constitutes probabilistic evidence against that person being reasonable.

It's a pretty strong authority, so it affects what inferences you should make about the reasonableness of people who hold the belief.

I'm not arguing that anyone who disagrees with a scientific majority is automatically unreasonable, or that the present is right on all points where the past and present disagree, if that's what you're worried about.