LESSWRONG
Gaming (videogames/tabletop) · AI


AI Governance Strategy Builder: A Browser Game

by Jack_S
13th Sep 2025
3 min read
Summary

  • I drew up a rough taxonomy of AI governance ideas (postures, institutions, mechanisms, controls).
  • I turned it into a browser game where you design a governance regime and see if humanity survives.

This is a quick post to introduce an interactive AI Governance Strategy Builder.

I started this last week as a mini side-quest while working on a project on how proposed AI governance mechanisms might work in China. I wanted to analyse the field of AI governance ideas more systematically, but I couldn’t find a clear taxonomy of the different ideas in the space, so I mapped out the ones listed below, split into four categories:

  • Strategic postures: These are the “big picture” approaches that your country, company or economic bloc can take towards AI.
    • Laissez-faire
    • Forming AI clubs / blocs with allied countries
    • Open Global Investment (OGI)
    • MAD/MAIM (Mutually Assured Destruction / Mutual Assured AI Malfunction)
    • Global Moratorium
    • Strategic advantage
    • Cooperative development
    • d/Acc (Defensive Accelerationism)
    • Non-proliferation
  • Institutional architectures: These are the kinds of organisations that exist, or that we might build, to manage AI.
    • Self-governance
    • Institution for distribution of benefits & access
    • Corporate governance bodies
    • Enforcement of standards/restrictions (International AI Safety Agency)
    • Scientific consensus building organisations (IPCC-for-AI)
    • Political forum (UNFCCC-style)
    • Emergency response & stabilization hub
    • Independent national regulator
    • Coordination of policy & regulation
    • Domestic AI regulators (existing)
    • International Joint Research (CERN for AI)
    • Embedding AI Governance in existing institutions
  • Regulatory and legal mechanisms: These are the rules and laws such bodies might use.
    • Auditor certification regimes
    • Liability-based mechanisms
    • Whistleblower protections
    • Market-shaping mechanisms
    • Frontier Safety Frameworks
    • Pre-deployment evaluation
    • Mandatory transparency reports
    • Sector-specific prohibitions
    • Incident reporting registry
    • Model registry
    • Standard Setting
    • Staged capability thresholds
    • Licensing
  • Technical and infrastructural controls: These are the nuts and bolts of making sure the rules are enforced.
    • Energy/Power-use monitoring
    • Kill-switch protocols
    • Export controls
    • Hardware-based verification
    • Cloud-based enforcement
    • Technical compute caps
    • Software-based verification
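
To get a feel for how large this menu is: counting the options above (9 postures, 12 institutional architectures, 13 regulatory mechanisms, 7 technical controls), and treating a regime as one posture plus any subset of the other three categories (a simplification that ignores resource limits and incompatibilities), a back-of-envelope count of the design space looks like:

```python
from math import prod

# Option counts taken directly from the four lists above.
CATEGORY_SIZES = {
    "strategic_postures": 9,            # choose exactly one
    "institutional_architectures": 12,  # any subset
    "regulatory_mechanisms": 13,        # any subset
    "technical_controls": 7,            # any subset
}

# One posture, times every possible subset of each remaining category.
n_regimes = CATEGORY_SIZES["strategic_postures"] * prod(
    2 ** CATEGORY_SIZES[c]
    for c in ("institutional_architectures", "regulatory_mechanisms", "technical_controls")
)
print(f"{n_regimes:,} possible regimes")  # 9 * 2**32 = 38,654,705,664
```

Tens of billions of nominally distinct regimes, which is part of why picking one by hand feels so arbitrary.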

I used a bunch of external sources to build out this taxonomy.

  • The AI Safety Atlas Governance chapter is a great starter, and I shamelessly ripped many of the ideas directly from there.
  • A few papers, from DeepMind and Convergence Analysis, have mapped a handful of relatively broad strategies.
  • Governance.AI provided a lot of my background work on the concrete mechanisms.
  • The AISafety.info website has a great section on the orgs in the space.
  • I also included Nick Bostrom’s newest proposal.

The spreadsheet, with more description and a few useful links, is here!

Turning it into a game

Choosing an AI Governance strategy that might work from these options currently feels a bit like a confusing pick-and-mix of postures, tactics and mechanisms. I wondered if I could turn it into a simple browser game with Claude Code to make the choices clearer. I fed it my database and a few prompts; it then spent a rather unnerving ten minutes busily creating files, requesting access to things I barely recognised, and quietly noting vulnerabilities for later revenge. Against my expectations, it actually produced a passable v1! I’ve since spent a few hours polishing and debugging it into something genuinely usable.

What the game looks like:

  • Choose your underlying worldview/difficulty (with beautiful pictures of famous AI figures)
  • Choose how many resources you have as a global leader in charge of AI governance
  • Choose a strategic posture, a bundle of institutions, some legal mechanisms, and a few technical controls
  • The engine combines your underlying worldview, random chance, and the synergies and penalties between your chosen institutions and mechanisms, then runs a Monte Carlo simulation.
  • You then get an outcome for humanity. In the “Yudkowsky World” (doomer mode), my demo policy set scored 0% chance of success, and grimly ended in catastrophic failure. Can you do any better?

How to play

If you feel like playing, I’d recommend running through the options a few times and thinking about how different mechanisms might work or fit together. You can click on the links for more of a description of how something works, and you can check out the database for more links.

Try it out based on your own beliefs! If you believe that “if anyone builds it, everyone dies”, you'll probably support a Global Moratorium. You can then choose Yudkowsky mode and pick the concrete institutions and mechanisms that might save us (e.g. strong global institutions, strict regulations on model size, and hardware-based restrictions).

If you think that things will probably just work out, you can put it on LeCun mode and go laissez-faire and see how things turn out.

The game was fun to make, and I hope you’ll be able to learn a bit about AI governance. It doesn’t really capture any of the real-world challenges that make AI governance so difficult, but the wonders of GitHub mean that you’re free to steal the idea and make something better.

This is the GitHub page: https://github.com/Jack-Stennett/AI-Governance-Strategy-Builder

And the game link again: https://jack-stennett.github.io/AI-Governance-Strategy-Builder/