Civilization failures occur when the system gets stuck in a state where obvious improvements exist but can't be implemented.
This chapter from the book Inadequate Equilibria categorizes the causes of civilization failures into three buckets: coordination problems, decision-makers who lack skin in the game, and relevant information that never reaches the decision-makers. However, all of these problems stem from a single root cause: people don't share the exact same genes.
Clonal ants, which do share the same genes, have no problems with coordination, skin in the game, or getting relevant information to the decision-makers. The same goes for each of the roughly 30 trillion cells in our bodies, which collaborate on a massive scale to help us survive and replicate.
Evolution has shaped us so that our ultimate goal is to protect and replicate our genes. Because cells share 100% of their genes, their goals are aligned and cooperation is effortless. Humans share fewer genes with each other, so we had to overcome trust issues by evolving complex social behaviours and technologies: status hierarchies, communication, laws and contracts.
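This intuition is captured by Hamilton's rule from kin selection theory (not from the book, just the standard textbook condition): an altruistic act is favoured by selection when r·b > c, where r is the genetic relatedness between actor and recipient, b the fitness benefit to the recipient, and c the fitness cost to the actor. A minimal sketch:

```python
def cooperation_favoured(r: float, b: float, c: float) -> bool:
    """Hamilton's rule: altruism is selected for when r * b > c.

    r -- genetic relatedness between actor and recipient (0..1)
    b -- fitness benefit conferred on the recipient
    c -- fitness cost paid by the actor
    """
    return r * b > c

# Identical cells (r = 1.0): almost any benefit outweighs the cost.
print(cooperation_favoured(r=1.0, b=2.0, c=1.5))   # True
# Unrelated humans (r ~ 0.0): the same act is not favoured by kin selection alone,
# which is why we needed status hierarchies, laws and contracts instead.
print(cooperation_favoured(r=0.0, b=2.0, c=1.5))   # False
```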
I am doing Multi-Agent Reinforcement Learning (MARL) research in which agents with different genes each try to maximise their own ultimate goal. In this sandbox environment, civilization failures occur. What's interesting is that we can change both the environment and the agents themselves to find the minimum changes required to prevent particular civilization failures.
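To make the "agents with different genes" idea concrete, here is one hypothetical way genetic relatedness could enter a MARL objective: weight each agent's training reward by how related it is to every other agent. This is purely an illustration under assumed names and shapes; the project's actual reward function is described in the linked post below.

```python
import numpy as np

def relatedness_weighted_rewards(raw_rewards: np.ndarray,
                                 relatedness: np.ndarray) -> np.ndarray:
    """Hypothetical relatedness-weighted reward shaping.

    raw_rewards  -- shape (n_agents,), each agent's own fitness payoff this step
    relatedness  -- shape (n_agents, n_agents), relatedness[i, j] is the fraction
                    of genes agent i shares with agent j (diagonal = 1.0)

    Each agent's training reward is the relatedness-weighted sum of all payoffs,
    so helping close kin is rewarded while helping strangers is not.
    """
    return relatedness @ raw_rewards

# Two lineages of two clones each: agents 0-1 are clones, agents 2-3 are clones.
relatedness = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
])
raw = np.array([0.0, 2.0, 1.0, 0.0])
print(relatedness_weighted_rewards(raw, relatedness))  # [2. 2. 1. 1.]
```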
Some examples of questions that can be explored in this setting (which I've called kinship-aligned MARL):
Can you think of more interesting questions? I would love to hear them!
So far I have developed an environment where agents with divergent interests either learn to cooperate or see their lineages go extinct. The environment is implemented in C, which lets me train AI agents in it efficiently. I have also developed reward functions and training algorithms specific to this MARL setting.
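For readers curious how a C environment can sit under a Python training loop, here is a generic sketch of the pattern using ctypes. The library name and the function signatures (`env_reset`, `env_step`) are illustrative assumptions, not the actual API of this project's environment.

```python
# Hypothetical sketch: stepping a C environment from Python via ctypes.
import ctypes
import numpy as np

N_AGENTS = 8
OBS_DIM = 32

lib = ctypes.CDLL("./libenv.so")                      # hypothetical library name
lib.env_reset.argtypes = [ctypes.POINTER(ctypes.c_float)]
lib.env_step.argtypes = [
    ctypes.POINTER(ctypes.c_int),     # actions, one per agent
    ctypes.POINTER(ctypes.c_float),   # next observations (n_agents * obs_dim)
    ctypes.POINTER(ctypes.c_float),   # rewards, one per agent
]
lib.env_step.restype = ctypes.c_int   # assumed to return 1 when the episode ends

obs = np.zeros(N_AGENTS * OBS_DIM, dtype=np.float32)
rewards = np.zeros(N_AGENTS, dtype=np.float32)
lib.env_reset(obs.ctypes.data_as(ctypes.POINTER(ctypes.c_float)))

done = False
while not done:
    # A trained policy would choose actions from obs; random actions stand in here.
    actions = np.random.randint(0, 4, size=N_AGENTS).astype(np.int32)
    done = bool(lib.env_step(
        actions.ctypes.data_as(ctypes.POINTER(ctypes.c_int)),
        obs.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
        rewards.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
    ))
```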
You can read more about the environment here, and about the reward function and training algorithm here.