In your explanation of the Chicken Dilemma, you say that "'everyone cooperates' is not a strong Nash equilibrium in strict game theory terms" (or words to that effect; I apologize if I have phrased it differently), and I disagree with that assertion. In a Stag Hunt, everyone-cooperates is a Nash equilibrium, and the payoff-dominant one at that. It is ultimately the ideal state of a social system, but it tends not to be stable in real societies.
The concept of Public Goods Games best illustrates how that works: in any given collaboration, actors find themselves with opportunities to engage in selfish or prosocial behavior. Healthy, mature adults in cooperative groups gravitate toward cooperation as long as their cooperation is mirrored by their peers. The problem is that public goods games have two equilibria, which makes them inherently unstable: both all-defect and all-cooperate are stable outcomes.
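A minimal sketch can make the two-equilibria point concrete. The payoff rule and the conditional-cooperation update below are my own illustrative assumptions (endowment of 1, a 1.6x pool multiplier, and players who cooperate only if all peers did last round), not something from the original discussion:

```python
# Sketch of a public goods game with conditional cooperators.
# All parameters here (endowment = 1, multiplier = 1.6) are
# illustrative assumptions.

def payoffs(contributions, multiplier=1.6):
    """Each player's payoff: the endowment they kept, plus an
    equal share of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [(1 - c) + share for c in contributions]

def step(contributions):
    """Conditional cooperation: each player contributes next round
    only if every player contributed this round."""
    all_in = all(c == 1 for c in contributions)
    return [1 if all_in else 0 for _ in contributions]

# Both uniform profiles are fixed points -- the two equilibria:
print(step([1, 1, 1, 1]))  # [1, 1, 1, 1] -- all-cooperate persists
print(step([0, 0, 0, 0]))  # [0, 0, 0, 0] -- all-defect persists
# A single defector tips the whole group into all-defect:
print(step([1, 1, 1, 0]))  # [0, 0, 0, 0]
```

The last line is the instability in miniature: all-cooperate is self-sustaining only until one player deviates, after which the group falls into the other equilibrium.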
Further, it takes only a small number of cheaters to destroy cooperation, so mechanism design has to include some form of punishment to keep systems stable. The lack of punishment in our existing slate of voting systems is the key problem. Given enough time and enough incentives to cheat, systems without negative sanctions ultimately destabilize, and they do so predictably, as in the case of Duverger's Law for first-past-the-post (FPTP) voting.
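To see why punishment changes the incentive structure, here is a hedged numerical sketch. The payoff functions and the specific numbers (group of 4, multiplier 1.6, fine of 1.0) are assumptions of mine for illustration only:

```python
# Sketch: how a punishment sanction removes the incentive to defect.
# Group size, multiplier, and fine are illustrative assumptions.

def defector_payoff(n_cooperators, group_size=4, multiplier=1.6, fine=0.0):
    """A lone defector keeps their endowment of 1, takes an equal
    share of the pool, and pays any fine levied by peers."""
    pool = n_cooperators * multiplier
    return 1 + pool / group_size - fine

def cooperator_payoff(n_cooperators, group_size=4, multiplier=1.6):
    """A cooperator gives up their endowment and gets a pool share."""
    pool = n_cooperators * multiplier
    return pool / group_size

# Without punishment, defecting against three cooperators beats
# joining them as a fourth cooperator:
print(defector_payoff(3) > cooperator_payoff(4))              # True
# With a sufficient fine, cheating no longer pays:
print(defector_payoff(3, fine=1.0) > cooperator_payoff(4))    # False
```

The comparison is the whole argument in two lines: absent sanctions, defection strictly dominates, which is why unsanctioned systems drift toward all-defect over time.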
It might be helpful to keep that in mind when exploring voting systems further, to understand how they behave in the real world over the long term.