I argue that there is a convergent axiological framework toward which all intelligent life in the universe trends: a near-universal account of the resource constraints, psychology, and game theory of species across all worlds. I argue this to help flesh out cross-disciplinary understandings of how value systems work, and to draw out laws in the study of axiology with which we can make accurate predictions about intelligences we have yet to encounter.

Convergence

Convergent evolution is the phenomenon in biology where similar, often complex, structures are selected for again and again along independent evolutionary tracks. For example, eyes have independently evolved over 100 different times in myriad environments. This is because light-sensing cells and organs are almost universally advantageous, the main exception being lightless environments such as deep cave systems. So given enough time, complex organs like eyes are expected to evolve as an adaptation of any complex lifeform. This is something we can generalize as a law in exobiology: by the time a lineage reaches higher intelligence, it will already have gone through a long evolutionary track that selected for universal advantages like vision. By the same token, we can expect most highly intelligent alien life to be humanoid, since walking upright uses the fewest limbs needed for locomotion and conserves the most energy, energy that is needed for metabolically demanding processes like cognition.

The same line of reasoning leads one to expect a similar convergent phenomenon in exoethology. Behaviors trend towards the social and the cooperative as a species spreads across more environments and becomes more intelligent. From eusocial insects like ants and bees all the way to humans, the larger the population and the more environments it spreads into, the more social and cooperative the species needs to be (with members of its own kind) in order to maintain the expansion. And the more intelligent a species is (primates, cetaceans, and so on), the more complex its social and cooperative behaviors become (e.g., dolphins play culture-specific games).

The Uncooperative Forest Counter

This sounds like good news, since it would mean we could expect any intelligent alien life we encounter to more often than not be humanoid and highly cooperative. But there are obvious counters to this; the book The Dark Forest by Cixin Liu discusses a common one. Liu defines cosmic sociology by two axioms: that survival is the primary goal of a species, and that populations grow exponentially. The conclusion he draws is that as species exponentially expand into a finite galaxy, resources become highly contested, resulting in galaxy-scale war.
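
To get a feel for why the second axiom does so much work, here is a toy sketch of exponential expansion against a fixed pool of star systems. It is my own illustration, not anything from the book, and every number in it is an arbitrary assumption.

    # Toy model: exponential expansion into a finite galaxy.
    # All numbers are arbitrary assumptions, chosen only for illustration.
    import math

    GALAXY_SYSTEMS = 1e11     # rough order of the Milky Way's star count
    SEED_SYSTEMS = 1.0        # one occupied system to start
    DOUBLING_TIME = 1_000     # assumed years per doubling of occupied systems

    def years_until_saturation(total: float, seed: float, doubling_time: float) -> float:
        """How long exponential growth takes to claim every system."""
        return math.log2(total / seed) * doubling_time

    print(years_until_saturation(GALAXY_SYSTEMS, SEED_SYSTEMS, DOUBLING_TIME))
    # ~36,500 years, a blink on galactic timescales

Slower doubling times only rescale the answer by a constant factor; the point is that an exponential saturates any finite pool almost immediately on cosmic timescales, which is what drives Liu's conclusion.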

This runs counter to what I have claimed about the nature of intelligence: that behaviors appear to converge towards cooperation. So who is right? Unlike species with low levels of intelligence, like the eusocial insects that perform worker policing (killing members of their own colony when they become too genetically distinct), species with high levels of intelligence are cooperative across species (humans and dolphins play mutually beneficial games with each other, monkeys and elephants play games, and so on) and trend towards mutualism despite factors that seem to run counter to this (like parasitism or the phenomenon of racism).[1]

Granted, the species on our planet don't seem to be explicitly aware that they are competing for finite resources in a finite environment, but the opposite scenario, in which direct awareness would end cooperation, is even more absurd. Understanding your existence as a competitor in an environment of finite resources does not require you to consciously suffer the angst of living under post-industrial capitalism until it totally alienates you from participating in it. That kind of suffering is not a prerequisite for a 'true understanding' that our world is one of finite resources, so it does not follow a priori that competing for finite resources requires any negative attitudes or behaviors. Instead, we find that the more we understand about the finitude of resources in an endlessly consuming market (what we call economics), the better we get at adjudicating a balance between the two. This balance seems to have no limits to scale, even when it is not perfectly optimized; there is enough to go around.[2]

Exo-Mutualism

Suppose aliens came to us with faster-than-light travel and a substantially deeper and more profound understanding of mathematics and logic, and that they seemed superior in every other measurable way. Suppose also that they had an otherwise similar axiology, a system of values similar to humans', so that their workings and resultant behaviors were not opaque to us. Suppose finally that they agreed to a contract stating they would not harm us. Right after, they invade and kill billions of people, subjugating the few survivors.

What went wrong? It's abject hand-waving to say the aliens were superior to us in every way except, somehow, ethics. The aliens know it's wrong to break your contracts, much less to kill people, so why are billions dead?

They give us their reason. The aliens knew humans had a history of opportunistic violence, and that given the right opportunistic pressures we would one day kill all the aliens, thereby preventing them from engaging in future contracts with other species. They state that honoring the contract not to harm us would have precluded them from honoring thousands of future contracts, and that since there is quantifiably greater honor in the thousands of future contracts, it was obviously ethical to dishonor this one. The simple utilitarian calculus necessitated a first strike, and the aliens, like many humans, believe utilitarianism is the one true ethical system (despite the Repugnant Conclusion[3] and other problems in population ethics). The murdered utilitarians should have no issue with the obvious utility in their being murdered here.[4]
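
Their calculus can be reconstructed as a toy expected-utility comparison. Every number below is hypothetical, invented purely to show the structure of the argument:

    # Hypothetical reconstruction of the aliens' expected-utility argument.
    # Every number is invented for illustration; none comes from the scenario itself.

    P_HUMANS_DEFECT = 0.10      # assumed chance humans one day wipe the aliens out
    VALUE_PER_CONTRACT = 1.0    # "honor units" gained per contract kept
    FUTURE_CONTRACTS = 10_000   # contracts they expect to make with other species

    # Keep the contract: honor this one, but risk losing all future contracts.
    honor_us = VALUE_PER_CONTRACT - P_HUMANS_DEFECT * FUTURE_CONTRACTS * VALUE_PER_CONTRACT

    # Strike first: dishonor this one, but (they assume) secure all future contracts.
    strike_first = -VALUE_PER_CONTRACT + FUTURE_CONTRACTS * VALUE_PER_CONTRACT

    print(honor_us, strike_first)   # -999.0 vs 9999.0

On those assumed numbers the first strike looks "obviously" right, which is exactly the move the surviving humans object to below: the contract is being treated as one more quantity to optimize over rather than as a commitment.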

However, many of the human survivors respond that precisely what it means to honor a contract is that you follow through on it despite any perceived negative future consequences, and that you simply should not have made the contract in the first place if you didn't believe you could follow through on it.[5] The aliens and humans have a back-and-forth as to whether honoring certain agreements is right or wrong; the humans ultimately conclude that the aliens are cowards for massacring a significantly weaker opponent that posed no actual threat to them.

The aliens align their thinking more with possibility than actuality, with statistical predictions rather than assessments of the acts themselves.[6] Is this really the kind of behavior we should expect in our galaxy? Utility is always a measurement of acts by proxies (consequences), never of the value of the acts themselves, and so Goodhart's Law takes over: when you optimize for utility, which is only a proxy for good acts, utility stops being a good proxy. But the general measure of survival still matters, and so the game theory is quite clear: if you are a more intelligent species and you exploit that intelligence to dominate less intelligent species around you, then what is to stop a species more intelligent than your own from doing the same when they show up? And in an infinite universe, there will always be someone more intelligent. The Nash equilibrium rests at refusing to abuse your power, lest you invite others to do the same back to you.
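
Here is a minimal sketch of that equilibrium argument, under the loud assumption (mine, not a theorem) that stronger newcomers adopt toward you whatever policy you adopted toward the species weaker than you; all payoffs are invented.

    # Toy sketch of the "refuse to abuse your power" equilibrium.
    # Key assumption: stronger newcomers mirror the policy you used on the weak.
    # All payoffs are invented for illustration.

    DOMINATION_GAIN = 10.0     # one-time spoils from crushing a weaker species
    COOPERATION_GAIN = 1.0     # per-encounter gain from mutualism
    ENCOUNTERS = 100           # future encounters; unbounded in an infinite universe
    P_STRONGER = 0.5           # assumed chance any newcomer outclasses you
    ANNIHILATION = -1_000.0    # payoff when a stronger species treats you as you treated others

    def expected_payoff(dominate: bool) -> float:
        if dominate:
            # Cash in once, then the first stronger arrival mirrors your policy.
            return DOMINATION_GAIN + P_STRONGER * ANNIHILATION
        # Cooperators keep trading with everyone, stronger or weaker.
        return ENCOUNTERS * COOPERATION_GAIN

    print(expected_payoff(True), expected_payoff(False))   # -490.0 vs 100.0

The numbers are arbitrary, but the structure is not: as long as annihilation is catastrophic and someone stronger eventually shows up, restraint comes out ahead.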

And, spoiler alert if you haven't read the book I mentioned earlier: The Dark Forest ends up agreeing with me. It concludes with forced cooperation between the two hostile species because of a MAD-like stalemate.

As an aside, this should apply to AGIs too, since an AGI that took over its world would then have to contend with potentially infinitely many more AGI systems expanding throughout the universe, thereby requiring it to cooperate or perish. If there is selection against cooperation, say if the AGI is majorly unaligned and turbo-genocidal (against all the greater selection pressures suggesting otherwise), then we live in the dumbest of all possible worlds.

The Dumbest Possible World Counter

What if I'm horribly wrong about convergent behaviors and aliens end up being truly alien? You can find people arguing that alien intelligence would be near-incomprehensible to us, and that even if we could understand it, its behavior and values would be so different that the mutual survival of our species would be out of the question.[7] Worse still, what if a technologically advanced species is just kind of stupid and doesn't understand or care about the game-theoretic implications of massacring everyone?[8] What if the aliens say, "This galaxy ain't big enough for the two of us"? What then?

Well, then I guess we really do have to contend with galactic hyper-war, but hopefully just once. I understand the reasoning that the less you have at stake in people, the less you need diplomacy, but it would be strange to think the species that initiates galactic hyper-war would also win it, since we just said they don't understand basic things relevant to general intelligence like ethics or game theory. There is no probable world in which a species that dumb wins repeatedly against more intelligent opponents. I'm not even sure what disagreeing with this would look like, since it feels like saying a <1,000 Elo chess player would somehow consistently win against a >2,000 Elo player. It just wouldn't happen.
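
For concreteness, the standard Elo expected-score formula quantifies how lopsided that matchup is (the ratings are just the ones from the analogy above; reading expected score as a pure win probability is an approximation, since it also folds in draws):

    # Standard Elo expected score for player A against player B.
    def expected_score(rating_a: float, rating_b: float) -> float:
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    p = expected_score(1000, 2000)
    print(p)          # ~0.003 expected score per game
    print(p ** 10)    # ~1e-25 chance of "winning" ten in a row

Consistently beating a far stronger opponent sits in roughly that probability regime.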

So this serves as evidence for a universal value filter: the axiology of other highly intelligent species converges towards cooperative behaviors in all domains, given that the alternative is terminal. But hey, if I'm wrong, we're all dead anyway.

  1. ^

    The massive ecological change and subsequent extinction of species caused by humans is a counter-point I will shelve, since the vast majority of the behaviors that led to those changes were unintentional. I only care about intentional behaviors between highly intelligent species for this discussion.

  2. ^

    Population control is probably required for all highly intelligent species in any galaxy, since the alternative is exponential expansion and exponential resource consumption in finite environments with highly intelligent competitors, which likely results in galactic hyper-war. Making xenophobic decisions to favor resource distribution to familiar-looking organisms over unfamiliar-looking ones, which is no better than insect-level worker policing, could buy us a few more millennia, but refusing to slow down an exponentially growing population results in extreme resource scarcity no matter what, and then we get galactic hyper-war again.

  3. ^

    From chapter 17 of Reasons and Persons by Derek Parfit.

  4. ^

    For humor, it's worth thinking about whether surviving human utilitarians should commit suicide to preserve the utility of the alien species' survival. After all, any surviving human would threaten the objectively greater utility of the aliens.

  5. ^

    And that the aliens are clearly being hypocrites here, but whatever.

  6. ^

    Remind you of anyone?

  7. ^

    Blindsight by Peter Watts gives plausible articulations of alien species that might be like this. But he's probably wrong given contemporary understandings of neuroscience and the nature of intelligence, so idk.

  8. ^

    They just like us, fr fr.

Comments

I'm doubtful of some of your examples of convergence, but despite that I think you are close to presenting an understanding of why things like acausal trade might work.

Which examples?

Convergence is a pretty well-understood phenomenon in evolutionary theory and biology more broadly these days. Anything outside of our biosphere will likely follow the same trends, and not just for the biological reasons given but for the game-theoretic ones too.

Acausal trade seems unrelated, since what I'm talking about is not a matter of predicting what some party might want or need in a narrow sense, but rather the broad sense in which it is preferable to cooperate when direct contact is made. As a tangent, acausal trade is named poorly, since there is a clear causal chain involved. I wish they called it remote reasoning or distant games or something else.