Bike locks are a good example. There are twenty bikes at the bike rack in the parking lot. Some riders decide to buy locks to protect their bikes from theft, but this just means that the thieves steal the other, unprotected bikes from the rack. The problem isn't solved, just moved around. The wary riders acted in self-interest in a zero-sum game: the net result was zero, but they came out ahead. As long as some bikes remain unprotected, the theft rate will stay the same; all of the riders would have to go to the effort of buying locks before any progress is made on the problem. This would be a Nash Equilibrium...

However, if every rider acts in self-interest and protects their own bike at the expense of the others, they will all end up buying locks and the thieves will have no bikes to steal. So in some systems, a Zero-Sum environment actually defeats the Nash Equilibrium.
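
To make the structure concrete, here is a minimal two-rider sketch in Python. The payoff numbers (a lock costs 1, a stolen bike costs 10) and the assumption that the thief gives up when every bike is locked are illustrative choices, not claims from the post; the point is only to show how to check whether a strategy profile is a pure-strategy Nash equilibrium.

```python
from itertools import product

# Two riders each choose "lock" or "no_lock". One thief steals one bike,
# preferring an unlocked bike if any exists. Payoff numbers are assumptions:
# a stolen bike costs -10, a lock costs -1.
LOCK_COST = -1
THEFT_LOSS = -10

def payoffs(a, b):
    """Return (rider_a, rider_b) payoffs for choices in {'lock', 'no_lock'}."""
    pa = LOCK_COST if a == "lock" else 0
    pb = LOCK_COST if b == "lock" else 0
    unlocked = [name for name, choice in (("a", a), ("b", b)) if choice == "no_lock"]
    if unlocked:
        share = THEFT_LOSS / len(unlocked)  # expected loss, split among unlocked bikes
        if "a" in unlocked:
            pa += share
        if "b" in unlocked:
            pb += share
    # If both bikes are locked, assume the thief gives up (the post's optimistic case).
    return pa, pb

def is_nash(a, b):
    """A profile is a pure Nash equilibrium if neither rider gains by deviating alone."""
    pa, pb = payoffs(a, b)
    flip = {"lock": "no_lock", "no_lock": "lock"}
    return payoffs(flip[a], b)[0] <= pa and payoffs(a, flip[b])[1] <= pb

for a, b in product(["lock", "no_lock"], repeat=2):
    print(f"a={a:8s} b={b:8s} payoffs={payoffs(a, b)} nash={is_nash(a, b)}")
```

With these particular numbers, (lock, lock) comes out as the only pure-strategy equilibrium, i.e. the "everyone locks" outcome; different payoff assumptions can change that conclusion, which is exactly the kind of check the comments below recommend.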

The same concept applies naturally to cybersecurity: hackers go after the easiest target, and if everyone tries not to be the easiest target, the overall level of cyber defense rises and hacking becomes less common.

So when a system looks Zero-Sum or looks like a Nash Equilibrium, adding the other component might solve the problem. Can this be applied to other problems?

  1. AI safety: Nash Equilibrium[1]. It's in everyone's selfish interest to ignore safe practices and make their AI as powerful as possible. But if everyone does this, it's likely that at least one AI will be both powerful enough and malicious enough to cause serious damage and make everyone lose. Is there a way to add a Zero-Sum component? A task force could punish the least-safe AI systems or reward the most-safe, based not on their absolute safety level but on their safety relative to other systems. The mechanism could be a fine or a grant, or simply a label: "World's safest AI system as voted by experts". This is a Zero-Sum component because, no matter how hard developers try, the same total amount of reward and punishment is doled out (a toy sketch of this scheme appears after the list).
  2. Environmental damage: Nash Equilibrium. Since important parts of the environment are shared between countries, and even across the whole world, the harm any one country does is spread out among many. This means a selfish country doesn't place enough negative weight on harming the environment, and it also means countries have too little incentive to improve the environment, since other countries reap much of the benefit. To make this a Zero-Sum system, countries with clean environments could develop tools that keep polluted environments from mixing with theirs. If this were accomplished, it would further incentivize those countries to use clean practices. Countries with dirty environments but clean practices could adopt the same tools so that their environments self-clean without any change to their other practices. And if the tools were deployed in enough places, the remaining countries would find that a larger share of the harm from their dirty practices fell on themselves[2], incentivizing them to clean up. By itself, building barriers to the spread of pollution is net-zero, since the total amount of pollution stays the same, yet it would break the Nash Equilibrium in countries' behavior and benefit everyone.
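
As a toy illustration of the relative-ranking idea in item 1, here is a sketch of a zero-sum transfer scheme. The developer names, safety scores, pool size, and linear rank weighting are all hypothetical choices made for the example; the only point is that the transfers sum to zero regardless of how safe the systems are in absolute terms.

```python
def zero_sum_transfers(safety_scores, pool=1_000_000):
    """Redistribute a fixed pool by safety rank: the safest systems are paid
    out of fines levied on the least safe, so transfers always sum to zero."""
    ranked = sorted(safety_scores, key=safety_scores.get, reverse=True)
    n = len(ranked)
    # Linear weights running from +1 (safest) to -1 (least safe); they sum to zero.
    weights = [1 - 2 * i / (n - 1) for i in range(n)]
    return {name: round(w * pool / n, 2) for name, w in zip(ranked, weights)}

# Hypothetical developers and safety scores (illustrative, not real data).
scores = {"LabA": 0.92, "LabB": 0.75, "LabC": 0.40, "LabD": 0.15}
transfers = zero_sum_transfers(scores)
print(transfers)
print("total:", round(sum(transfers.values()), 2))  # always 0.0
```

Raising every lab's absolute score changes nothing about the payouts; only relative position matters, which is what makes the component Zero-Sum.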

Some problems have already been solved or partially solved in this manner.

  1. War: Zero-Sum. The development of nuclear weapons is an example of introducing a Nash Equilibrium to a Zero-Sum system. In theory, countries were disincentivized from entering war because they feared nuclear annihilation, producing an era of peace (see the toy payoff comparison after this list).
  2. Economic Competition: Zero-Sum. If producers compete only by lowering prices, a poor Nash Equilibrium is reached: every producer is incentivized to undercut on price to attract customers, and all of them suffer collectively. But by innovating to create better products while consuming fewer resources, producers turn the Zero-Sum system into a productive Nash Equilibrium.
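
A rough way to see the deterrence claim in item 1 is to compare best responses with and without a retaliation penalty. The payoff numbers below are assumptions chosen only to show the shape of the argument, not a model of real geopolitics.

```python
# Symmetric one-shot game: each country chooses "attack" or "peace".
# Row-player payoffs under conventional war vs. mutually assured destruction.
# All numbers are illustrative assumptions.
conventional = {
    ("attack", "peace"): 5,    # win territory cheaply
    ("attack", "attack"): -5,  # costly conventional war
    ("peace", "peace"): 2,     # trade and stability
    ("peace", "attack"): -10,  # get invaded
}
nuclear = dict(conventional)
nuclear[("attack", "peace")] = -50   # retaliation makes even a "successful" attack ruinous
nuclear[("attack", "attack")] = -100

def best_response(payoff_table, opponent_move):
    """Return the move that maximizes the row player's payoff against opponent_move."""
    return max(["attack", "peace"], key=lambda move: payoff_table[(move, opponent_move)])

for name, table in [("conventional", conventional), ("nuclear", nuclear)]:
    print(name, {opp: best_response(table, opp) for opp in ["attack", "peace"]})
```

With the conventional numbers, attacking is the best response to everything; with the retaliation penalties added, peace is, which is the deterrence story in miniature.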

Some other thoughts:

Perhaps a Zero-Sum system should really be called a Negative-Sum system. In most cases, actions can be taken that harm all parties, yet there is no way to benefit all parties past a certain threshold.

Interestingly, even a beneficial Nash Equilibrium can feel frustrating from the inside. Technology companies have to work hard to stay on the bleeding edge and might think everyone would be better off if innovation simply stopped. But on the whole, both consumers and producers benefit as the value-to-price ratio increases.

Watch out for systems that receive complaints both about being Zero-Sum and being a Nash Equilibrium. In these systems, a solution might be easier than it seems.

[1] Maybe this isn't a Nash Equilibrium after all, since it sounds more like a volatile, unstable system than an equilibrium. But it's a system where everyone's selfish interests lead to the collective misfortune of everyone.

[2] Since most of the Earth's surface is covered by oceans, not countries, this part might not have as large an effect as it might seem.

2 comments:

It seems like you're not being clear about how you are thinking about the cases, or are misusing some of the terms. Nash Equilibria exist in zero-sum games, so those aren't different things. If you're familiar with how to do game theory, I think you should carefully set up what you claim the situation is in a payoff matrix, and then check whether, given the set of actions you posit people have in each case, the scenario is actually a Nash equilibrium in the cases you're calling Nash equilibrium.

reason for downvote: this doesn't make clear (and is probably wrong about) the tie to the game-theory terms "zero sum" and "nash equilibrium". I suspect they don't mean what you think they mean, but perhaps you're just focusing on other aspects of the decisions, where the game theory is less directly important.

In fact, neither bike protections nor crime is fixed-sum.  If everyone buys locks, thieves go to a bit more effort to defeat the locks, and there's probably LESS theft, but not zero.  The Nash equilibrium for effort-to-secure vs effort-to-steal will depend entirely on payoffs, and there's no reason to believe it's legible enough to find (or that it even contains) a zero-crime option.