The advice in this post is framed as hard-won domain expertise, backed by empiricism and experience. But I think it's actually closer to something more like folk wisdom from a milieu that is currently at a low point of cultural power and influence. Ostensibly-neutral (but in reality heavily blue-tribe-coded) "experts" who stay in their lane are not exactly popular or influential at the moment, and to my eye their track record of positive impact on various topics in the past has been mixed at best.
I agree that it's good to be aware of these dynamics and this dichotomy, but that doesn't mean you have to actively try to fit into them. There are other options besides choosing not to play and playing by the pre-existing rules.
Ostensibly-neutral (but in reality heavily blue-tribe-coded) "experts" who stay in their lane are not exactly popular or influential at the moment,
Aren't they at a low point due to not staying in their lane by smuggling in political views into their ostensibly-neutral pronouncements? E.g., doctors saying it's okay to break Covid restrictions on gathering if it's for social justice.
For that particular example I think the bigger issue is that some experts were saying things that were failing to model reality or transparently silly in ways that anyone could notice, e.g. saying or implying that the risk of spreading disease depends on the righteousness of your cause.
There's not really a way to hold actual-nonsense views (privately / smuggling or openly) and not come across as off-putting and nutty to lots of people. If a domain expert does hold merely unpopular or controversial views though, I think it's better to be very direct and up front about them, e.g. a doctor could say something like:
"Large gatherings of any kind pose some risk of spreading respiratory illnesses, but outdoor gatherings are generally less risky than indoor gatherings. [cite apolitical evidence]
My recommendation as both a domain expert and a concerned citizen is that local governments should use all legal means to cancel public gatherings by default, but should make exceptions on a case-by-case basis when the importance of a gathering outweighs the risk. I believe that protesting for [particular cause] is one such case where the righteousness of the cause outweighs the risk."
This might still be off-putting to a lot of people depending on the value of [cause], but at least it's not an assault on basic reasoning.
But leaving out the second paragraph entirely, if it's what you actually believe, and scrupulously sticking to statements like the first paragraph mostly doesn't work. A Professional is usually only asked to make statements like the first paragraph in a context where it's easy to figure out / guess / assume the worst about where they stand on the things in the second paragraph, e.g. you are saying it to give public officials cover / justification to implement their own preferred policies on gathering restrictions.
In general, people are very, very good at figuring this kind of thing out and connecting the dots, even if you follow all the advice in this post and carefully never directly reveal your own politics or connect them to your expertise. It's a very deep / old social instinct that modern media and PR training can't reliably beat, and one of the lessons of the last decade is that you can quickly shred your credibility by trying.
The advice in this post is framed as hard-won domain expertise, backed by empiricism and experience. But I think it's actually closer to something more like folk wisdom from a milieu that is currently at a low point of cultural power and influence.
I don't think this is true. ControlAI has done a lot to shift opinion on AI risk: over 100 UK politicians have signed their AI extinction risk statement. This isn't something I would have thought possible if I'd heard about it two years ago.
I agree with what you said about experts, but think there are two separate problems. The post is mostly about avoiding polarization; climate change and COVID have shown us how difficult polarization makes it to coordinate on problems. It would be bad for AI safety to become polarized that way.
The other problem, which I think is what you're getting at, is that experts are far too passive. They are unwilling to speak with conviction, so people disregard the things they say. Instead they listen to the influencer who tells them climate scientists are lying, because the influencer says it like it's the truth.
It's even worse in AI safety, because so many people talk about it like it's a joke and call themselves doomers. I don't think any of that is helping.
I think we can do both: we can speak confidently while not playing into the polarization that makes things worse.
Prior to my last year at ControlAI
I'm a bit confused by this opening line. Is it saying:
I'm still at ControlAI, where I do creator outreach and work on messaging around AI extinction risk and banning ASI :)
Thanks Max, this is a useful framing. I have a tendency to get into unnecessary arguments/debates and this article helped clarify some of the trade-offs rather crisply.
The rub is that influencers are the ones influencing elections, policy, and discourse right now, whereas "professionals" have reached an all-time high of distrust. Influencers vs professionals is indeed the problem of our time, but I think we've tried the solution you've proposed countless times and have already failed. One just has to look at how messaging about climate change, vaccines, economics, and more turned out. To me, you have described the way the world should work, but not the way that it is currently working.
The solution isn't going to be acting more professional or acting more influency. Science communication has tried every discrete value on that axis over the last few decades and still failed. The solution is going to be orthogonal to it.
I've been reading Damon Centola's book, "Change", which seems to be laying out the case that neither professionals nor influencers actually drive social changes, rather, social change is driven by redundant reinforcement in peer networks.
What is your recommendation for situations where AI topics are related to politics or other contentious issues? Like, I agree that one shouldn't just make it clear they're on a side without reason. But what if there is a policy debate around AI and someone is asked to comment on it as an expert? (I'm omitting specific examples in the spirit of your advice)
When it comes to areas where AI intersects with other issues, it's generally very credibility-boosting to demonstrate an understanding of the various positions on the issue and then explain where AI risk (or your particular subject) intersects or doesn't.
A common example of this in Europe is the question of whether banning ASI hurts the economy or relations with the US too much. When someone throws you this question, it can be good to give a high-level summary of the various positions people have on it, and then point out that regardless of the economic impact in the present, uncontrolled ASI is the worst possible outcome, so it's in everyone's benefit to ensure this doesn't happen at essentially any cost. Beyond that, the questions of how to regulate (or not) other AI issues like environmental impact, deepfakes, or misinformation have different tradeoffs to consider.
Related aside:
People, especially those in politics, will respect you for demonstrating a clear understanding of the policy landscape and for sticking to giving advice within your expertise, without getting sloppy with personal politics. An observation I've made at the top-level of politics is that everyone is for the most part aware that they're all playing The Game, and understand each others' positions well. Unlike at Thanksgiving, you don't need to avoid being a bit meta about the political landscape.
One of the common and probably incorrect objections to this strategy seems to be something like: "but I'm interested in displaying true/rational beliefs in order to attract competent people, and someone irrational enough to be pro-worm won't be moved by my logical arguments about AI anyway".
This is probably incorrect due to the "demonisation" of political enemies (many reasons for which were covered in the Sequences). A notable factor here is the pattern where people see some of the "loud" and perhaps genuinely irrational examples from the opposing end of the political spectrum and overgeneralise to the irrationality of all of them. Or people can simply be overconfident about the political question itself.
I strongly agree with the importance of professional-style messaging, and it's good to hear it applied to the communications side of safety. It's a good mindset to take into talking with the public. Thanks for posting!
I would add: opinionation is bad practice within the research community as well. Those who quickly express opinions on all manner of topics come across as noisy, and as kind of a waste of intellectual effort.
There is only so much time in the day, and so much time to consider and understand a topic. It is a huge amount of work to develop reasonable and interesting interpretations of just a tiny subfield. So, the more opinions a person expresses, the smaller a fraction of their time they must have spent on each of them, and the shallower they must each be.
So opinionated people increase the cost of listening to them. Our expected value of their words falls because we can't trust they'll stick to what they know. The expected value of all their ideas is diminished.
Is there a way out through rationalist epistemology? In principle, calibration is great, but in practice, it takes loads of work to understand how well one understands. So caveating opinions with estimated certainties is usually noise to me. They often seem to be numbers pulled out of a hat, and there's no standard between people to make it all line up. It takes knowing the person's own metric to know what their estimate means, and this requires a lot of discussion or reading to get a grip on.
Can we use established people as benchmarks instead? Well, we still have cases like Hinton and LeCun, who colour outside their lines a fair bit. To preserve their ideas as a benchmark for quality, we must restrict our evaluation of their wisdom to the very narrow fields which they actually have experience in.
This suggests we should only listen to what they have to say in those few domains, giving us a rule for evaluating our own knowledge: we measure our knowledge against people who have produced important work and are very experienced in the narrow subfields we are interested in.
Against this high standard for knowledge, it becomes much easier to draw the line on what I know well - vanishingly little, almost nothing. I have only a very narrow slice of confidently grounded and relevant knowledge. Then it is clear when to phrase it all as questions or "could-be"s, state my ignorance, and immediately ask the other if they know more.
This is not refusing to commit to positions, but knowing when my opinions don't meet a good standard, and communicating that with a focus on exploration and learning.
Within the theme of Hinton and LeCun colouring outside their lines, it bothers me how much the general public seem to look to them for opinions on ASI, which is a very different thing from engineering ML models, which, in my understanding, is what they are experts of.
Sure, understanding where ML fits in the process of developing ASI requires knowledge of ML, but it seems to me that Bostrom and Yampolskiy are clearly more experts in the domain of ASI than Hinton or LeCun.
More subtly, I think Hinton and LeCun are experts at getting ML models to perform well. That may or may not give them understanding of the limits of ML capabilities as the technology continues to develop.
It is as if Hinton and LeCun have been given authority to speak on the subject not because they understand it, but because they made something that is now profitable and popular. Please correct me if I am misunderstanding their expertise. I haven't researched their backgrounds extensively.
I think you're probably correct in general strategy, but what about this:
Since AI safety is currently non-partisan, if one safety professional picks a political team, the audience that turns away from them is still not turned away from AI safety; there are other safety professionals who will pick the other political team. If there are safety professionals on both teams that you see tweets from no matter which sports team bubble you're in, doesn't that still bode well for AI safety?
The point is understanding who you are in the equation. 90-99% of the people reading this aren't Influencers in any meaningful capacity to the general public (maybe within the EA/Rat bubble). You're likely to be the first informed person a member of the public ever hears talk about the subject, and the public's views on AI are roughly at the level of "AI good" or "AI bad" right now. A nuanced perspective where people are considering the views of many AI safety people is a bit naive.
What you can do is work to convince pre-existing Influencers to take up the AI safety cause, and we should be working with many Influencers of many political persuasions. It's much more valuable for you to operate as a non-partisan expert.
We should be trying to avoid the failure mode of other science communicators, who came across as partisan hacks rather than experts to be taken seriously. We don't want the argument to be over the political persuasions of the AI safety experts rather than the content of the warnings.
Let's think about the opposite strategy.
Would broadcasting beliefs that are controversial but extremely popular with a small minority, then also broadcasting opinions about AI safety move that minority towards agreement with you on AI safety? Would a concerted and coordinated influence campaign doing this, where you said things like 'I hate alpine worms, just like you, now please form a positive opinion on this AI safety professional who only talks about that issue' work? What if two people screamed at each other about their strong disagreement about alpine worms, then accused each other of 'not taking AI safety seriously' and persuaded their respective followers to take AI safety as a good thing that belongs to their side?
I really like the "mutual accusations of not taking AI safety seriously" idea... like those old "when you eat smarties do you eat the red ones last" advertisements, it draws focus away from the question of whether you [support AI safety / eat smarties], because it's assumed that you do, and towards the question of how you [support AI safety / eat smarties].
But I don't actually know how well that would work in practice.
Your hot takes are killing your credibility.
Prior to my last year at ControlAI, I was a physicist working on technical AI safety research. Like many of those warning about the dangers of AI, I don’t come from a background in public communications, but I’ve quickly learned some important rules. The #1 rule that I’ve seen far too many others in this field break is that You’re an AI Expert - Not an Influencer.
When communicating to an audience, your persona falls into one of two broad categories: Influencer or Professional.
So… let’s say you’re trying to be a public figure making a difference about AI risk. You’ve been on a podcast or two, maybe even on The News. You might work at an AI policy organization, or perhaps you’re an independent professor, researcher, or even a spokesperson for a protest group. Notably, you're not just a person shouting from the sidelines, you're someone building an actual platform as a spokesperson for this issue.
That makes you a Professional.
But of course, even though you’re not an Influencer, you’re a person with Opinions about many things.
For the sake of this piece, let’s say the latest topic of heated political debate is over a new species of Alpine Worm, let’s call this the Alpine Worm Scandal. Today you saw an Alpine Worm in your garden and it’s irritated you enough that You’re Gonna Tweet About It!
STOP – What Would Media Training Steve do?
Media training is often maligned as a dark art used by politicians to avoid answering questions, but it’s actually quite important to understand what you should and shouldn’t say based on your role for the sake of the audience, your credibility, and your message.
Your role as a Professional is to warn about AI risk, answer related questions, and present solutions. That’s it.
Media Training Steve knows that while he has strong beliefs about the Alpine Worm Scandal, so does his audience. In fact, because concern about AI is (currently) so non-partisan, he’s cultivated an audience roughly split between Pro-Wormers, Anti-Wormers, and people who intentionally avoid Worm politics altogether.
If Media Training Steve posts about his strong Anti-Worm views, this will likely:
If your audience member is on the other team, you’re done. For the majority of the general public, modern politics is essentially team sports, especially in the US. When you take a stance on a partisan issue, you signal your allegiance to a team. At best, the other team will just disregard you and click to the next post, but at worst they will associate AI risk concerns with your team and disregard the issue as a whole.
Rule of Thumb: “Don’t make arguments others can’t repeat” – figure out the best case anyone can make, and make it.
Posting off-topic political takes also provides dangerous ammunition to your opposition.
For example, imagine Steve did make his Anti-Worm post and later is trying to make inroads on AI risk with Pro-Worm political leaders. The AI industry no longer has to fight him on AI risk; instead, they can just reference his Anti-Worm beliefs as an out-group signal and a reason to ignore him.
Even worse, if Steve is a leading figure talking about AI extinction risk, this same redirection attack can be used on others in AI safety…“Oh you’re just all Anti-Wormers, look at Steve!”
Communicating about AI extinction risk is interesting in part because it’s a new topic for most people and not already correlated strongly with their pre-existing views. Preventing the end of humanity is also just intrinsically non-partisan, and we should endeavor to keep it that way.
A very bad timeline is one where AI risk prevention becomes partisan.
Finally, as an AI expert, Steve isn’t actually going to change the minds of any meaningful portion of his audience about the Alpine Worm Scandal.
He’d have mistaken himself for an Influencer, someone whose audience cares about his personal and political views, not just his specific area of expertise. Audiences find it awkward when you’re clearly stepping out of your lane, and they’re already getting their Worm takes from elsewhere anyway.
All of that downside, and for what?
Don’t Feed Your Enemies
Among friends and family, you can happily (or unhappily) discuss your many opinions with each other in the context of being private individuals with many beliefs who care about each other on a personal level. You aren’t tied to everything you’ve ever said, and you can change your mind.
When in the public arena, this is not the case.
Regular people often post their many opinions on public feeds. Most of the time nothing happens; sometimes they go viral, or they get cancelled. Posting in public is dangerous.
You’re a Professional, and you’re conveying a message that the most valuable industry in the world wants to suppress, which means…
You Have Enemies.
And they won’t play nice.
They can and will go through hours of your podcast transcripts to catch you when you slip, and they'll use it against you until the end of time. You can't take back a statement once you've made it; they'll simply ignore your apology and keep broadcasting your slip-up anyway.
If you were a person-who-posts-hot-takes before getting involved in AI risk prevention, it might be a good idea to go back and delete some posts (though be aware nothing is truly gone on the internet).
You also should expect the AI industry to play dirtier as the endgame draws nearer.
Like it or not, this is an adversarial game. You can choose to play it that way or lose.
You might be uncomfortable with the idea of having enemies. If so, you should give this excellent piece by Gabe Alfour a read.
If you haven’t sat with this reality before, I recommend you do so before entering the public arena. It’s okay to decide the risks are too much for you, and there are plenty of ways to help outside of the spotlight.
The Luxury of Not Being a Politician
Politicians are at the extreme end of Influencers, where people follow, support, and vote for you based on how they feel about your policy positions. Unlike most Influencers, Politicians’ beliefs directly impact the world, and so they are analyzed and criticized extensively.
One reason being a Politician is hell is that it’s a role where people expect you to have positions on everything, everywhere, all at once. Thus, Politicians are always lying, dodging, and giving half-answers precisely because they understand the tradeoffs of being vocal about too many beliefs.
People process the speech of Professionals differently than Influencers. This is because Professionals trade in credibility and have to earn their way to the stage via their expertise.
This is a huge luxury! Don’t squander it!
As a Professional, you can just say “That’s outside the realm of my expertise” when faced with questions not pertinent to your work, and people will respect you for it! Conversely, if you step outside your domain, audiences know you’re wasting their time, and your lack of Professionalism reflects poorly on you.
The funny thing is that politicians understand this very well, and expect you to stick to your expertise, even if you’re trying to agree with their policy positions.
For example, look at how Bernie Sanders reacts to Geoffrey Hinton getting political:
Bernie knows that Hinton’s value to the conversation is being the Nobel Laureate Godfather of AI, not a political commentator, and this comment is cutting into his credibility.
There are other digs Hinton makes throughout this interview, which Bernie consistently declines to engage with, guiding Hinton back to AI, despite Hinton voicing opinions Bernie likely shares.
Note: This also fails our rule of thumb from above, as many people wouldn’t reiterate this example!
So How Do You Deal With Politics?
You don’t - at least not publicly.
The cost of being a Professional is that you don’t talk in public about issues outside of the ones where you’ve built up intellectual capital – trust me, the exchange rate is terrible.
Of course, you can talk in private about these issues, or have private social media for your close friends and family.
You can also participate in civil democratic activities like attending a city council meeting in a personal capacity without leveraging your Professional profile. You should still be careful because depending on the situation this could be used against you.
It should also be said that if another issue is really calling you (perhaps there are too many Alpine Worms for you to think about anything else), then go work on that issue! Just please don’t try to do both. You’ve got finite resources and you’ll get more done overall focusing on one issue at a time.
Additionally, for topics that aren’t very contentious, it can be fine to signal other beliefs. For example, warning about superintelligent AI can often result in Luddite accusations. Responding to this by pointing out that you’re generally pro-innovation and think technology brings a lot of benefits is reasonable and unlikely to cause a stir.
Conclusion
Achieving an international ban on superintelligent AI will take considerable effort and discipline.
Many things need to happen for us to succeed: we need to scale rapidly, improve our messaging, and be resilient against attacks from our opposition.
Those of us operating in the public eye need to step up our game, start playing more seriously, and hold each other to a higher standard.
That means acting like Professionals.