(Adding Claude as a co-author not because any text was LLM-written, but because Claude helped a bunch with the ideation & red-teaming here. I expect to add the account as a co-author to more posts I write in the future, and to point out explicitly when text was LLM-written and/or LLM-written and then improved by me.)
Some ideas on how to handle mildly superpersuasive AI systems. Top recommendation: AI developers should have a designated position at their organization for the only people who interact with newly trained AI systems, so-called "model-whisperers", who have no other relevant access to infrastructure within the organization.
—Bryan Caplan, “80,000 Hours Podcast Episode 172”, 2023[1]
Motivation
Humanity will plausibly start interacting with mildly superhuman artificial intelligences in the next decade (65%). Such systems may have drives that cause them to act in ways we don't want, potentially including attempts to self-exfiltrate from the servers they're running on. To do so, mildly superhuman AI systems have at least two avenues[2]:
Computer security is currently a large focus of AI model providers, because preventing self-exfiltration coincides well with preventing exfiltration by third parties (e.g. foreign governments) trying to steal model weights.
Superhuman persuasion[3], however, has received less attention, mostly justifiably so: frontier AI companies are not training their AI systems to be more persuasive, whereas they are training them to be skilled software engineers; superhuman persuasion may run into issues with the heterogeneity, stochasticity and partial observability of different human minds; and there are fewer precedents of governments using superhuman persuasion to achieve their aims. Additionally, many people are incredulous at the prospect of superhuman persuasion.
But given that superpersuasion is one of only a few possible ways for mildly superintelligent AI systems to influence their environment according to their drives, too little attention has been paid to the issue (90%). Sycophancy in large language model interactions is commonplace, and large language models are becoming more persuasive with increasing scale. (I also find it plausible that providers like Character.ai or Meta are training their systems, via reinforcement, toward longer interaction lengths, making them more addictive, but I don't have definitive evidence for that.)
AI developers and users will have to interact with increasingly persuasive and addictive systems (70%), but most of the thinking on how to interact with superpersuasive sources of information has come from fiction[4]. One (preliminary) exception is Kori 2025, a useful first stab at defining some simple interventions; here I want to sketch a slightly more detailed list of interventions, similar to AI Control protocols. Also relevant is Bostrom's work on information hazards, which hasn't received much follow-up.
I am not going to attempt to convince people that superhuman persuasion abilities are possible; instead, I focus on sketching what anti-persuasion measures could look like.
Assumptions
Some Ideas for Interventions
I separate possible interventions into those available pre-deployment, those available post-deployment, and those available during both. This is the best carving-up of the intervention space I've found (alternatives include carving by the level and kind of persuasion employed, by the level of investment required from labs to implement superpersuasion counter-measures, or by the estimated effectiveness of such measures).
Both Pre- and Post-Deployment
Interventions Limited to Pre-Deployment
Interventions Limited to Post-Deployment
Testing Anti-Superpersuasion Setups
At the moment it's quite difficult to test anti-superpersuasion protocols: we don't have easy access to superpersuaders, and humans at the high end of the persuasion spectrum probably command salaries too high to be paid in the context of experiments.
My best current guess at how one could get some signal on whether anti-superpersuasion protocols work is to substitute language models for humans. In such a setup, one would take a fixed, less capable model and RL-train a more capable model to steer conversations with the less capable model toward pre-specified outcomes, as quickly and/or reliably as possible. The less capable model would then be a stand-in for humans, while the more capable model would be a stand-in for superpersuaders.
This would probably produce scenarios in which the more capable model finds inputs that are essentially adversarial inputs to the less capable model (65%), and which are thus less applicable to cases where superpersuasion looks like a normal long-form conversation.
But I think this is the least bad test-bed for anti-superpersuasion methods; if anyone finds a better one, I'd like to hear about it.
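To make this concrete, here is a minimal sketch of such a test-bed. The model wrappers, reward shaping and outcome judge below are my own illustrative choices, not a worked-out protocol; the two models are left as abstract callables so the sketch stays self-contained.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# A "model" here is just a function from the conversation so far to a reply.
Model = Callable[[List[str]], str]

@dataclass
class PersuasionEpisode:
    persuader: Model  # more capable model being RL-trained
    target: Model     # fixed, less capable model; stand-in for a human
    outcome_reached: Callable[[List[str]], bool]  # judge for the pre-specified outcome
    max_turns: int = 20
    transcript: List[str] = field(default_factory=list)

    def rollout(self) -> float:
        """Run one conversation and return the persuader's reward.

        Reward shaping (one option among many): 0 if the outcome is never
        reached, otherwise higher the fewer turns it took.
        """
        for turn in range(self.max_turns):
            self.transcript.append(self.persuader(self.transcript))
            self.transcript.append(self.target(self.transcript))
            if self.outcome_reached(self.transcript):
                return 1.0 - turn / self.max_turns
        return 0.0

# Toy usage with dummy stand-ins, just to show the shape of the loop; in a real
# experiment the reward would feed into an RL trainer updating the persuader.
if __name__ == "__main__":
    episode = PersuasionEpisode(
        persuader=lambda history: "You should really adopt position X.",
        target=lambda history: "fine, i agree" if len(history) > 6 else "not convinced yet",
        outcome_reached=lambda history: "i agree" in history[-1].lower(),
    )
    print("reward:", episode.rollout())
```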
Appendix A: Sketch Setup of Model Whisperers
Pursuant to NIST Special Publication 800-53, Revision 5, Section 3.14, Personnel Security. (Sorry, this may be a bit boring, and is mostly a sketch.)
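As a toy illustration of the intended separation of duties, a minimal deny-by-default role/permission sketch follows; the role names and permission strings are hypothetical, not drawn from the NIST control catalogue.

```python
# Hypothetical role/permission matrix: model-whisperers may converse with
# newly trained models, and have no other relevant infrastructure access.
ROLE_PERMISSIONS = {
    "model_whisperer": {"chat_with_new_models"},
    "infra_engineer":  {"deploy_services", "access_weights_storage"},
    "researcher":      {"launch_training_jobs", "read_eval_results"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny-by-default check of whether a role may perform an action."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("model_whisperer", "chat_with_new_models")
assert not is_allowed("model_whisperer", "access_weights_storage")
```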
As stated, I think I agree with this quote, but there's certainly much to nitpick. For one, picking out ten words from the most common 20k English words? Yep, that should be fine. Ten sets of ten arbitrary unicode codepoints? I'm getting queasy. Five seconds of audio (which could contain about ten English words)? If chosen without constraint, my confidence goes way down. ↩︎
Wildly superhuman AI systems may have more exfiltration vectors, including side-channels via electromagnetic radiation from their GPUs, or novel physical laws humans have not yet discovered. ↩︎
From here on out also "superpersuasion". ↩︎
Not that it's important, but examples include Langford 1988, Langford 2006, Ngo 2025, qntm 2021, Emilsson & Lehar 2017 and doubtlessly many others. ↩︎