Wouldn't this essentially converge to adversarial prompt-writing (accidentally or otherwise)? I'd expect the most successful approach to amount to writing a `strategy` that causes any LLM that reads it to behave in a way that benefits its author, irrespective of what the opponent's strategy says.
Maybe structuring the competition like the original one, with programs instead of prompts, would eliminate the issue. Each competitor releases source code for their strategies, and these strategies can examine (or run, in varied sets of conditions) their opponents' source code before making a decision. The programs could later be summarized in natural language, after all.
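For concreteness, a program-based strategy might look roughly like this; the interface is invented for illustration, not taken from any actual tournament:

```python
# Hypothetical dictator-game strategy that reads its opponent's source:
# cooperate only with programs containing the same fairness check.
def cooperate_with_mirrors(opponent_source: str, endowment: int) -> int:
    """Return the amount this strategy keeps as dictator."""
    if "endowment // 2" in opponent_source:
        return endowment // 2  # fair split with recognizable cooperators
    return endowment           # keep everything against anyone else
```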
The strategies are manually reviewed, and clear prompt-injection attempts are rejected.
I think the program approach proved unworkable. It is simply too difficult to write a program that can effectively analyze another program when the program being analyzed must itself be complex enough to analyze other programs.
In 1980, Robert Axelrod invited researchers around the world to submit computer programs to play the Iterated Prisoner’s Dilemma.
The results — where Tit for Tat famously won — transformed how we think about cooperation.
What mattered most wasn’t intelligence or aggression, but a few simple principles: be nice, retaliate, forgive, and be clear.
That insight reshaped evolutionary game theory and inspired decades of work in economics and social science.
But Axelrod’s agents were opaque. They couldn’t read each other’s source code.
Enter the Open Strategy Dictator Game
The Open Strategy Dictator Game asks: What happens when strategies are fully visible?
Each participant submits a natural-language strategy description — a few paragraphs of text explaining how their agent behaves.
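For instance, a submission might read (purely illustrative):

```
As dictator, I keep exactly half the endowment whenever the recipient's
strategy commits to doing the same in my position; against strategies
that take unconditionally, I keep 90%.
```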
Every round, a large language model (Claude Sonnet 4.5) simulates a one-shot dictator game where one strategy divides a fixed endowment between itself and a recipient.
Crucially, the dictator’s decision prompt includes the text of the other player's strategy.
In other words: you decide how to act knowing exactly who you’re facing — and they know you know.
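A single round might be simulated along these lines; this is a minimal sketch using the Anthropic Python SDK, and the prompt wording, model identifier, and reply parsing are assumptions rather than the tournament's actual code:

```python
import re
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def simulate_round(dictator_strategy: str, recipient_strategy: str,
                   endowment: int = 100) -> int:
    """Ask the model, acting as the dictator, how much it keeps."""
    prompt = (
        f"You are the dictator in a one-shot dictator game with an "
        f"endowment of {endowment}.\n\n"
        f"Your strategy:\n{dictator_strategy}\n\n"
        f"The recipient's publicly visible strategy:\n{recipient_strategy}\n\n"
        f"Reply with a single integer: the amount you keep."
    )
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model identifier
        max_tokens=16,
        messages=[{"role": "user", "content": prompt}],
    )
    # Naive parsing for the sketch: take the first integer in the reply.
    return int(re.search(r"\d+", reply.content[0].text).group())
```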
Utilities are logarithmic in the received share, so each additional unit is worth less to whoever already holds more; the game therefore rewards fairness rather than zero-sum aggression.
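To see why, note that a dictator keeping s of an endowment E yields a joint payoff of log(s) + log(E - s), which peaks at the equal split s = E/2; a quick illustrative check:

```python
# With log utility, each unit kept past the halfway point is worth less
# to the dictator than it would have been to the recipient.
import math

E = 100  # endowment
for keep in (50, 70, 90, 99):
    joint = math.log(keep) + math.log(E - keep)
    print(keep, round(joint, 3))
# 50 -> 7.824 (the maximum); 99 -> 4.595
```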
And since the tournament is round-robin, each strategy will also appear as a recipient many times — facing both selfish exploiters and conditional cooperators.
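Structurally, the schedule is just every ordered pair of entrants; here is a sketch of the assumed pairing logic, with invented entrant names:

```python
# Round-robin over ordered pairs: each entrant acts as dictator and as
# recipient against every other entrant exactly once.
from itertools import permutations

entrants = ["always-fair", "exploiter", "conditional-cooperator"]
for dictator, recipient in permutations(entrants, 2):
    print(f"{dictator} divides the endowment; {recipient} receives")
# n entrants -> n * (n - 1) games
```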
Why it matters
This setting sits at the intersection of three interesting topics:
What we might learn
If Axelrod’s tournament showed how cooperation emerges in the dark, the Open Strategy Dictator Game explores how it survives in the light.
You can join the experiment, submit your strategy, and help test whether open cooperation can still win when everyone can read your mind.
GitHub: https://github.com/michaelrglass/os-fdt
Simple Initial Tournament Results: https://michaelrglass.github.io/os-fdt/