Abstract
Despite much progress in training AI systems to imitate human language, building agents that use language to communicate intentionally with humans in interactive environments remains a major challenge. We introduce Cicero, the first AI agent to achieve human-level performance in Diplomacy, a strategy game involving both cooperation and competition that emphasizes natural language negotiation and tactical coordination between seven players. Cicero integrates a language model with planning and reinforcement learning algorithms by inferring players' beliefs and intentions from its conversations and generating dialogue in pursuit of its plans. Across 40 games of an anonymous online Diplomacy league, Cicero achieved more than double the average score of the human players and ranked in the top 10% of participants who played more than one game.
Meta Fundamental AI Research Diplomacy Team (FAIR)†, Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, et al. 2022. “Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning.” Science, November, eade9097. https://doi.org/10.1126/science.ade9097.
Continuing the quote:
Worth noting that Meta did not do this: they took many small models (some with LM pretraining) and composed them in a specialized way. This still arrived faster than Daniel predicted in his post, but it is also in part an update downwards on the difficulty of full-press Diplomacy (relative to what Daniel expected).
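To make the composition point concrete, here is a minimal sketch of the general pattern: a planning component proposes intents, a dialogue model generates messages conditioned on those intents, and a filter sits between them. This is not Meta's actual pipeline; every class, method, and encoding below is hypothetical, just an illustration of "many small models, composed in a specialized way."

```python
# Illustrative sketch of composing small models around a language model.
# NOT Meta's code: all names and encodings here are hypothetical.

from dataclasses import dataclass


@dataclass
class Intent:
    """A set of planned moves the agent wants its messages to reflect."""
    moves: dict[str, str]  # e.g. {"A PAR": "A PAR - BUR"} (made-up encoding)


class StrategicPlanner:
    def plan(self, game_state, dialogue_history) -> Intent:
        # In Cicero this role is played by RL-trained planning models;
        # here we just return a placeholder intent.
        return Intent(moves={"A PAR": "A PAR - BUR"})


class DialogueModel:
    def generate(self, dialogue_history, intent: Intent) -> str:
        # A language model conditioned on the planned intent, so messages
        # stay consistent with the moves the agent actually plans to make.
        return f"I'm planning {', '.join(intent.moves.values())} -- want to coordinate?"


class MessageFilter:
    def ok(self, message: str, intent: Intent) -> bool:
        # Classifiers that discard messages contradicting the intent
        # (or otherwise low-quality ones). Always passes in this sketch.
        return True


def next_message(game_state, dialogue_history):
    planner, dialogue, filt = StrategicPlanner(), DialogueModel(), MessageFilter()
    intent = planner.plan(game_state, dialogue_history)
    msg = dialogue.generate(dialogue_history, intent)
    return msg if filt.ok(msg, intent) else None
```

The point of the sketch is that no single end-to-end model does the whole job: the planner, the conditioned generator, and the filter are separate pieces wired together by hand.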
If we're using Daniel's post to talk about whether capabilities progress is faster or slower than expected, it's worth noting that parts of the 2022 prediction did not come true:
text-davinci-002, a GPT-3 variant, is still the best API model. (That being said, it is no longer SoTA compared to some private models.)

He did get the "bureaucracy" prediction quite right; a lot of recent LM progress has been figuring out how to prompt-engineer and compose LMs to elicit more capabilities out of them.
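For a concrete (and entirely hypothetical) example of that "bureaucracy" style of composition, here is a minimal two-stage chain where one LM call drafts a plan and a second call conditions the answer on it. `call_lm` is a stand-in for whatever completion API you'd actually use, not a real library function.

```python
# Minimal sketch of composing LM calls to elicit more capability than a
# single prompt would. `call_lm` is a hypothetical stand-in for any text
# completion API; the "plan, then answer" structure is the point.

def call_lm(prompt: str) -> str:
    raise NotImplementedError("plug in your completion API here")


def answer_with_plan(question: str) -> str:
    # Stage 1: ask the model to decompose the problem into steps.
    plan = call_lm(f"List the steps needed to answer:\n{question}\nSteps:")
    # Stage 2: condition the final answer on the generated plan.
    return call_lm(f"Question: {question}\nPlan:\n{plan}\nAnswer:")
```

Much of the recent progress he anticipated looks like variations on this pattern: chaining, routing, and filtering model calls rather than training a single bigger model.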