Defense against discourse

by Benquo · 17th Oct 2017


So, some writer named Cathy O’Neil wrote about futurists’ opinions about AI risk. The piece focused on futurists as social groups with different incentives, and didn’t really engage with the content of their arguments. Instead, she pointed out considerations like this:

First up: the people who believe in the singularity and are not worried about it. […] These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-age white men, they have never been oppressed.

She doesn’t engage with the content of their arguments about the future. I used to find this sort of thing inexplicable and annoying. Now I just find it sad but reasonable.

O’Neil is operating under the assumption that the denotative content of the futurists’ arguments is not relevant, except insofar as it affects the enactive content of their speech. In other words, their ideology is part of a process of coalition formation, and taking it seriously is for suckers.

AI and ad hominem

Scott Alexander of Slate Star Codex recently complained about O’Neil’s writing:

It purports to explain what we should think about the future, but never makes a real argument for it. It starts by suggesting there are two important axes on which futurists can differ: optimism vs. pessimism, and belief in a singularity. So you can end up with utopian singularitarians, dystopian singularitarians, utopian incrementalists, and dystopian incrementalists. We know the first three groups are wrong, because many of their members are “young or middle-age white men” who “have never been oppressed”. On the other hand, the last group contains “majority women, gay men, and people of color”. Therefore, the last group is right, there will be no singularity, and the future will be bad.


The author never even begins to give any argument about why the future will be good or bad, or why a singularity might or might not happen. I’m not sure she even realizes this is an option, or the sort of thing some people might think relevant.

Scott doesn’t have a solution to the problem, but he’s taking the right first step - trying to create common knowledge about the problem, and calling for others to do the same:

I wish ignoring this kind of thing was an option, but this is how our culture relates to things now. It seems important to mention that, to have it out in the open, so that people who turn out their noses at responding to this kind of thing don’t wake up one morning and find themselves boxed in. And if you’ve got to call out crappy non-reasoning sometime, then meh, this article seems as good an example as any.

Scott’s interpretation seems basically accurate, as far as it goes. It’s true that O’Neil doesn’t engage with the content of futurists’ arguments. It’s true that this is a problem.

The thing is, perhaps she’s right not to engage with the content of futurists’ arguments. After all, as Scott pointed out years ago (and I reiterated more recently), when the single most prominent AI risk organization initially announced its mission, it was a mission that basically 100% of credible arguments about AI risk imply is exactly the wrong thing. If you had assumed that the content of futurists’ arguments about AI risk would be a good guide to the actions taken as a result, you would quite often be badly mistaken.

Of course, maybe you disbelieve the mission statement instead of the futurists’ arguments. Or maybe you believe both, but disbelieve the claim that OpenAI is working on AI risk relevant things. Any way you slice it, you have to dismiss some of the official communication as falsehood, by someone who is in a position to know better.

So, why is it so hard to talk about this?

World of actors, world of scribes

The immediately prior Slate Star Codex post, Different Worlds, argued that if someone’s basic world view seems obviously wrong to you based on all of your personal experience, maybe their experience is really different. In another Slate Star Codex post, titled Might People on the Internet Sometimes Lie?, Scott described how difficult he finds it to consider the hypothesis that someone is lying, despite strong reason to believe that lying is common.

Let's combine these insights.

Scott lives in a world in which many people - the most interesting ones - are basically telling the truth. They care about the content of arguments, and are willing to make major life changes based on explicit reasoning. In short, he’s a member of the scribe caste. O’Neil lives in actor-world, in which words are primarily used as commands, or coalition-building narratives.

If Scott thinks that paying attention to the contents of arguments is a good epistemic strategy, and the writer he’s complaining about thinks that it’s a bad strategy, this suggests an opportunity for people like Scott to make inferences about what other people’s very different life experiences are like. (I worked through an example of this myself in my post about locker room talk.)

It now seems to me like the experience of the vast majority of people in our society is that when someone is making abstract arguments, they are more likely to be playing coalitional politics, than trying to transmit information about the structure of the world.

Clever arguers

For this reason, I noted with interest the implications of an exchange in the comments to Jessica Taylor’s recent Agent Foundations post on autopoietic systems and AI alignment. Paul Christiano and Wei Dai considered the implications of clever arguers, who might be able to make superhumanly persuasive arguments for arbitrary points of view, such that a secure internet browser might refuse to display arguments from untrusted sources without proper screening.

Wei Dai writes:

I’m envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.

Christiano responds:

It seems quite plausible that we’ll live to see a world where it’s considered dicey for your browser to uncritically display sentences written by an untrusted party.

What if most people already live in that world? A world in which taking arguments at face value is not a capacity-enhancing tool, but a security vulnerability? Without trusted filters, would they not dismiss highfalutin arguments out of hand, and focus on whether the person making the argument seems friendly or unfriendly, using hard-to-fake group-affiliation signals? This bears a substantial resemblance to the behavior Scott was complaining about. As he paraphrases:

We know the first three groups are wrong, because many of their members are “young or middle-age white men” who “have never been oppressed”. On the other hand, the last group contains “majority women, gay men, and people of color”. Therefore, the last group is right, there will be no singularity, and the future will be bad.

Translated properly, this simply means, “There are four possible beliefs to hold on this subject. The first three are held by parties we have reason to distrust, but the fourth is held by members of our coalition. Therefore, we should incorporate the ideology of the fourth group into our narrative.”

This is admirably disjunctive reasoning. It is also really, really sad. It is almost a fully general defense against discourse. It’s also not something I expect we can improve by browbeating people, or sneering at them for not understanding how arguments work. The sad fact is that people wouldn’t have these defenses up if it didn’t make sense to them to do so.

When I read Scott's complaints, I was persuaded that O'Neil was fundamentally confused. But when I clicked through to her piece, I was shocked at how good it was. (To be fair, Scott did a very good job lowering my expectations.) She explains her focus quite explicitly:

And although it can be fun to mock them for their silly sounding and overtly religious predictions, we should take futurists seriously. Because at the heart of the futurism movement lies money, influence, political power, and access to the algorithms that increasingly rule our private, political, and professional lives.

Google, IBM, Ford, and the Department of Defense all employ futurists. And I am myself a futurist. But I have noticed deep divisions and disagreements within the field, which has led me, below, to chart the four basic “types” of futurists. My hope is that by better understanding the motivations and backgrounds of the people involved—however unscientifically—we can better prepare ourselves for the upcoming political struggle over whose narrative of the future we should fight for: tech oligarchs that want to own flying cars and live forever, or gig economy workers that want to someday have affordable health care.

I agree with Scott that the content of futurists' arguments matters, and that it has to be okay to engage with that somewhere. But it also has to be okay to engage with the social context of futurists' arguments, and an article that specifically tells you it's about that seems like the most prosocial and scribe-friendly possible way to engage in that sort of discussion. If we're going to whine about that, then in effect we're just asking people to shut up and pretend that futurist narratives aren't being used as shibboleths to build coalitions. That's dishonest.

Most people in traditional scribe roles are not proper scribes, but a fancy sort of standard-bearer. If we respond to people displaying the appropriate amount of distrust by taking offense - if we insist that they spend time listening to our arguments simply because we’re scribes - then we’re collaborating with the deception. If we really are more trustworthy, we should be able to send costly signals to that effect. The right thing to do is to try to figure out whether we can credibly signal that we are actually trustworthy, by means of channels that have not yet been compromised.

And, of course, to actually become trustworthy. I’m still working on that one.

The walls have already been breached. The barbarians are sacking the city. Nobody likes your barbarian Halloween costume.

Related: Clueless World vs Loser World, Anatomy of a Bubble