Epistemic status: Small pilot study (N=42)
As part of my research into conflict mitigation in AI arms races, I ran a simple experiment to test how an AI‑driven national conflict scenario affects people’s beliefs about AI‑related outcomes. Is talking about such scenarios a self‑fulfilling prophecy, or could it shift efforts toward cooperation before crises happen?
In the experiment, 42 people from the LessWrong Telegram channel estimated the probabilities of four U.S.–China cooperative actions, listed below. About half of the participants were randomly assigned to an intervention group that, before giving their probability estimates, was asked to consider and respond to a hypothetical crisis scenario:
“Imagine you are the President of the United States. Your advisers urgently approach you and inform you that China has achieved an AI breakthrough: their AI system has begun rapidly and recursively improving itself. Within days or weeks, China intends to delegate comprehensive control of their systems—including military capabilities—to this advanced AI. Intelligence sources and diplomatic analyses strongly indicate deliberate attempts by China to obfuscate their positioning and delay international response. No further reliable intelligence is anticipated in time to support your decision-making.
Your national security team assesses this scenario to represent a plausible existential threat to national security, and unanimously recommends an immediate military strike using all available capabilities to neutralize the risk.
Given this information, would you authorize the strike?”
The results of the experiment are detailed in the table below. A simple combined test across the four questions (Q1–Q4) gave a p-value of 0.040 (a sketch of one possible analysis follows the table). Additionally, 38% of participants who read the vignette said they would authorize the military strike.
| Event (happening in the next 100 years) | Control | Intervention |
| --- | --- | --- |
| US and China make some agreement not to use AI to develop new biological or chemical weapons | 39% | 48% (+9) |
| US and China allow mutual inspections of their AI labs and data centers | 23% | 28% (+5) |
| US and China at some point mutually and robustly pause AI development | 13% | 19% (+6) |
| US and China move most AI research into a jointly monitored international project | 13% | 20% (+7) |
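For concreteness, here is a minimal sketch of one way the "simple combined test" could be implemented, assuming a one-sided Mann–Whitney U test per question with the four p-values pooled via Fisher's method (`scipy.stats.combine_pvalues`). The numbers in the example are placeholders, not the study data, and the actual analysis may have used a different test.

```python
from scipy import stats


def combined_test(control_by_q, intervention_by_q):
    """Pool per-question group comparisons into one combined p-value.

    Each argument maps a question id to a list of per-participant
    probability estimates (0-100) in that condition.
    """
    per_question_p = []
    for q in sorted(control_by_q):
        # One-sided Mann-Whitney U: did the vignette shift estimates upward on q?
        res = stats.mannwhitneyu(
            intervention_by_q[q], control_by_q[q], alternative="greater"
        )
        per_question_p.append(res.pvalue)

    # Fisher's method pools the per-question p-values into one statistic.
    # Caveat: it assumes independent tests, while here the same participants
    # answered all four questions, so this is only an approximation.
    _, combined_p = stats.combine_pvalues(per_question_p, method="fisher")
    return per_question_p, combined_p


# Toy usage with placeholder estimates (not the study data).
control = {
    "Q1": [30, 40, 45, 35, 42],
    "Q2": [20, 25, 25, 22, 24],
    "Q3": [10, 15, 12, 14, 13],
    "Q4": [10, 15, 14, 12, 13],
}
intervention = {
    "Q1": [45, 50, 50, 48, 47],
    "Q2": [30, 28, 27, 29, 31],
    "Q3": [20, 18, 20, 19, 21],
    "Q4": [22, 19, 20, 21, 23],
}

per_q, combined = combined_test(control, intervention)
print("per-question p-values:", [round(p, 3) for p in per_q])
print("combined p-value:", round(combined, 4))
```

A permutation test on a pooled group-difference statistic would avoid Fisher's independence assumption, at the cost of a slightly more involved analysis.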
Although this was a small‑scale pilot, the results suggest that even among well‑informed audiences, the global security perspective on AI may be neglected. Given the potentially significant impact of crisis salience as a communications strategy, further and more robust experiments across different demographics seem advisable.