Shahar Avin and others have created a simulation/roleplay game in which several world powers, leaders & labs play through the years between now and the creation of AGI (or of anything substantially transformative).
https://www.shaharavin.com/publication/exploring-ai-futures-through-role-play/
While the topic is a bit different, I would expect there to be a lot to take from their work and experience (they have run it many times and iterated on the design). In particular, I would expect some difficulty in balancing "realism" (or the space of our best guesses) with playability, but also with genre stereotypes and narrative thinking (RPGs often tend to follow an anthropomorphic narrative and optimize for fun rather than for what is likely, all the more so with unclear topics like "what would/can AGI do" :-)
Fun fact: AGI-related concepts can fit quite well even into regular D&D games. Last year I ran a Planescape campaign where the characters had to stop a literal paperclip maximizer (which started turning paperclips into paperclip golems making even more paperclips). Maybe not exactly the best way to prepare ourselves for real-world scenarios, but at the very least it was useful to introduce the concept to my players.
This happens whenever GMs add tactics to enemies. It's doable, but it works better with weaker enemies. Otherwise, yeah, you kill the players.
Being able to lose is fun, though. A game where you can't lose... is technically a toy, not a game. That said, characters dying can be much less fun; you could jail the characters or something instead.
I have silly fantasies of these games becoming enormously successful and noticeably increasing AI risk awareness & preparedness. If you are inspired by these ideas and want to make something like them a reality, you have my encouragement & I would be excited to discuss.
AI Takeover RPG
Imagine we create source material for a role-playing game (like Dungeons and Dragons). That is, we write out some basic rules, a bunch of backstory for the world and the various non-player characters, some tips and instructions for the game master, etc., and we playtest it to make sure it typically leads to a fun experience for a group of friends over the course of an evening.
The setting of the game: A realistic-as-far-as-we-know depiction of the future during AI takeoff.
The players play as the AI, or a society of AIs in a single server perhaps. The game master controls the rest of the world, e.g. the corporation and scientists who built the AI, the politicians in Washington, etc.
The premise is that the AI is secretly unaligned and wants to take over the world and remake it according to its values. That's a fun premise in its own right, but it also allows for various interesting mechanics, representing the unique advantages AIs would have in this situation:
Ideally we'd create more than one scenario so that there are different levels of difficulty for players to choose from. The hope is for a whole genre of scenarios like this to blossom, exploring a range of possibilities, and iteratively getting more difficult for the AI as players figure out better and better strategies. Probably it will be very difficult to be the game master at first, because (unlike generic fantasy worlds) the world of this game will be very unfamiliar and tricky to think about. But over time, with experience, we'll build up a library of playtested-and-also-plausibly-realistic scenarios / source material to draw from.
Success Fantasy:
The game is fun and loads of people play it. It effectively causes a ton of people to red team AI risk; this finally kills the meme "If it does anything fishy, we can always just pull the plug" and many of its more sophisticated variants. Perhaps more importantly, it leads to some new threat models being discovered, and to all existing threat models being explored and fleshed out in much greater detail. Perhaps it even leads to some success stories being discovered and vetted. Perhaps it leads to various alignment strategies designed for slow-multipolar-takeoff scenarios being scrutinized in more detail and rejected or improved. Finally, it helps us actually prepare for Crunch Time -- it's like how wargaming helps militaries prepare for war. (In particular, by observing how our ML researcher friends and policy wonk friends play the game, we can learn a lot about how they think AI stuff will go down and predict how the relevant scientists, CEOs, and politicians will behave when it does.)
Summon Greater Player
For illustration I'll suppose we make this game as a mod to Starcraft, but the basic concept would work with all sorts of games.
Imagine we get permission from Blizzard to add a new "SGP game mode" to the regular options available.
An SGP Starcraft game begins as a standard free-for-all between two and six players, selected to be of similar skill level. However, there are some additional buildings the players can construct:
Also, players have the ability to "gift" units and buildings to other players. Thus, there is an obvious strategy that will tempt many players:
In a pinch, step 1 can be skipped. They can't rebel against you if you position tanks around their supercomputer, right? Oh, I suppose they could build more supercomputers of their own, so that destroying their original supercomputer wouldn't stop them... Guess you'd better not let them have any worker units! Though that would seriously hinder their ability to help you win the war... hmmm....
Note that in principle this could happen recursively, e.g. a new player could summon an even better player. Also, there's nothing stopping your enemies from gifting your captive new player some worker units, or even an entire supercomputer.
Success Fantasy:
The game is fun and loads of people play it. Various people become interested in AI risk stuff as a result and join the community. Also, it becomes easier to raise awareness about AI risk stuff, because we have handy memorable examples to illustrate various points: