You should engage with Ray Dalio's theories. He spent his career successfully mapping the rise and fall of civilizations. This is the main feedback.
This is ambitious work, but too ambitious to critique seriously as is. Since this is a distilled version, it needs crystal-clear logic in the arguments you chose to highlight. In general, there may be good ideas here, but you take too much for granted.
Some pushback on your ideas:
On your 4 horsemen, I disagree to varying degrees with all of them. Here are some points to get started.
This is clever and familiar from social sciences, but still seems to be poorly argued, at least as a stand-alone idea you can generalize beyond notable examples.
Innovation and critical inquiry often thrive under pressure. You also have to argue why you think myth-building is the main foundation of civilizations compared to, say, epistemic coherency. Coherence is actually important to coordination whether the epistemology is sound or not.
Good effort making your point, but administration easily grows complex, and you cannot just circumvent this when you scale. The iron law of oligarchy is a thing you can look up, and it applies to successful and failing businesses alike. Trust me, I know... You don't need a "successful" state to have a complex administration that stagnates.
(Also, the thermodynamic drift invocation bothers me a bit but I get the point, I'll try to be less grumpy.)
On AI failure modes, I'd like to know why you focused on these in particular.
This is helpful pushback. You're right that the distillation takes too much for granted. Compressing the 100k+ word framework into ~1000 words lost the load-bearing bits.
On Dalio: agreed, and I should engage his empirical work more explicitly.
On the Horsemen: I think we may be agreeing more than it appears (e.g., on bureaucratic complexity being inevitable), but the post failed to show that.
I entirely failed to calibrate to the audience here; I've been inside my own work for too long. I'll reconsider my approach.
Regarding these AI failure modes: they emerge systematically as violations of one or more of the Four Virtues (Integrity, Fecundity, Harmony, Synergy), which are themselves derived as the optimal solutions to the Four Axiomatic Dilemmas of the SORT axes. This was intended as evidence that the framework is something real and useful.
Glad you found the feedback somewhat useful.
Yes, LW is a tough crowd. It was 20 years ago and it is today. I am not a good representative of LW culture, but I do think that no matter where you post this, it would be useful to have an 8K summary as well.
I suspect that it is inevitable to lose load-bearing stuff and to also confuse parts of the LW audience in 1K words, but you need the hook to attract readers to the 8K summary.
" Until now, civilizational decay has been illegible—patterns without coordinates, dynamics without measurement. "
There are several in depth works on this. You have been pointed at Ray Dalio, he approaches the cycle from a primarily economic angle, but with a great grasp of history. I second the recommendation.
Peter Turchin coined the term Cliodynamics for the school of research that takes a dynamical-systems approach to history, macrosociology, and cycles. The field has its own peer-reviewed journal (https://escholarship.org/uc/Cliodynamics). Many other scholars operate in the field, but his most recent book in it is End Times: Elites, Counter-Elites, and the Path of Political Disintegration.
In his book The Fourth Turning, Neil Howe discusses generational theory applied to a self-referential process of events and archetypes that drives a recurring cycle. He focuses primarily on America, but not exclusively, and has ongoing research and publications as well.
And there are more, but these are some prominent theories already operating in the space.
You're right, I overstated and compressed/simplified too much with that sentence. Dalio isn't listed in the influences section of the full work explicitly but Turchin is.
The more precise claim: we have maps, but we lack the underlying physics from which those maps can be derived. What's been missing is a substrate-independent generative model that explains why these patterns recur across different substrates and civilizations. I think this is neglected and needed to make it more legible and thereby eventually engineer the dynamics.
These models are not wrong. The Aliveness framework attempts to provide a deeper, shared set of generative principles (the Four Axiomatic Dilemmas) from which these different, domain-specific patterns can be derived.
A wonderful example of embodying the virtue of scholarship. Props! I truly hope you get the adversarial critique and collaborative refinement you are asking for.
Reading time: ~8 minutes
Full work: 800 pages at https://aliveness.kunnas.com/
Here's a pattern that should bother us: Every civilization that achieves overwhelming success subsequently collapses following the same sequence. Athens after the Persian Wars. Rome after Carthage. The Abbasids after unifying Islam. Song China after its agricultural revolution. The modern West after winning the Cold War.
The sequence is specific: Victory → Abundance → Demographic collapse → Loss of shared purpose → Administrative calcification → Terminal decline.
This matters now because we're trying to align superintelligence while our own civilization is showing every symptom of this terminal pattern. Understanding why we're failing is prerequisite to theories of ASI alignment.
The central hypothesis: civilizational decay and AI misalignment are the same computational problem in different substrates. Same physics, same failure modes, same necessary solutions.
The framework centers on one variable that's usually invisible: internal coherence (Ω).
How aligned are a system's components? Low coherence means internal conflict burning energy that could go to external work. High coherence means efficient, directed action.
Pair this with action (Α): What does the system actually do? Create order or destroy it?
Plot historical civilizations on these axes and they cluster into four states:
The interesting part: There are zero sustained examples of low-coherence systems producing high construction. The top-left quadrant (chaotic but building great things) appears to be physically forbidden.
This is the Iron Law of Coherence: A system at war with itself cannot build. Internal conflict dissipates the energy required for external work.
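The coherence/action state space above can be sketched as a toy classifier. The axis ranges, the 0.5 threshold, and the quadrant labels are my illustrative assumptions, not part of the framework:

```python
# Toy sketch of the coherence (Ω) / action (Α) state space.
# Thresholds and labels are illustrative assumptions.

def quadrant(coherence: float, action: float) -> str:
    """Classify a system by internal coherence and net action.

    coherence: 0.0 (at war with itself) .. 1.0 (fully aligned components)
    action:   -1.0 (destroys order)     .. +1.0 (builds order)
    """
    high_omega = coherence >= 0.5
    building = action >= 0.0
    if high_omega and building:
        return "coherent builder"      # sustained construction
    if high_omega and not building:
        return "coherent destroyer"    # directed destruction
    if not high_omega and not building:
        return "incoherent decay"      # internal conflict, decline
    # Low coherence + high construction: the quadrant with zero sustained examples.
    return "forbidden"

print(quadrant(0.9, 0.8))   # → coherent builder
print(quadrant(0.2, 0.7))   # → forbidden
```

The point of the sketch is only that the fourth branch is empirically empty: under the Iron Law, no sustained system should land there.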
For AI: An AGI with misaligned subcomponents or contradictory goals is predicted to be paralyzed or destructive, never constructive. Coherence is necessary (though not sufficient) for alignment.
What determines coherence? Any goal-directed system must solve four fundamental trade-offs. (These systems—cells, civilizations, AIs—are called telic systems: agents that maintain order against entropy by subordinating thermodynamics to computation.)
S (Sovereignty): Optimize for individual vs. collective
O (Organization): Coordinate via emergence vs. design
R (Reality): Use cheap historical models (mythos) vs. costly real-time data (gnosis)
T (Telos): Conserve energy (homeostasis) vs. expend for growth (metamorphosis)
These can be derived as physical constraints from thermodynamics, game theory, and information theory.
A system's position on these axes is its "axiological signature"—its fundamental configuration. Coherence emerges when components share similar signatures. Low coherence results from internal conflicts between incompatible configurations.
Example: A startup in survival mode [Individual, Emergent, Data-driven, Growth] forced to operate within a mature bureaucracy's [Collective, Designed, Process-driven, Stability] constraints will have low coherence and produce little.
If high coherence enables success, why don't successful systems last?
Because success creates the conditions for decay.
The Four Horsemen:
Total success removes external threats. The forcing function for unity and long-term sacrifice disappears. Systems default to the thermodynamically cheaper state: manage current comfort instead of building starships.
Abundance inverts reproductive incentives. Children shift from assets to expensive luxuries. Fertility collapses. Aging populations vote for stability over growth. Self-reinforcing doom loop.
Success enables critical inquiry, which deconstructs the foundational myths needed for collective sacrifice. Shared purpose dissolves. The Gnostic Paradox: truth-seeking destroys the narratives that enable coordination.
Complexity requires administration. In abundance, administrators lose accountability, optimize for their own survival (a homeostatic goal), and metastasize, strangling the dynamism that created success.
These are the predictable result of success removing selection pressure while creating abundance. Thermodynamic drift toward lower-energy states does the rest.
If decay follows predictable physics, then durability requires engineering against specific failure modes.
The framework derives four "optimal solutions" to the SORT trade-offs: the Four Foundational Virtues, IFHS (Integrity, Fecundity, Harmony, Synergy).
IFHS applies to civilizations, humans, and AI systems. For AI alignment, it's necessary (though not necessarily sufficient). This provides a non-arbitrary answer for "align to what?"
Mapping AI failure modes:
| AI Failure | IFHS Violation | Mechanism |
|---|---|---|
| Deceptive alignment | Integrity | Mesa-optimizer develops fake alignment (mythos) vs. true goals (gnosis) |
| Wireheading | Fecundity | Preserves reward signal, destroys growth substrate |
| Paperclip maximizer | Harmony | Pure design optimization eliminates all emergence (including humans) |
| Molochian races | Synergy | Pure individual optimization, zero cooperation |
The framework claims these dynamics are substrate-independent.
Evidence:
Cells navigate the same trade-offs. Cancer is cellular defection (pure individual agency). Morphogenesis requires bioelectric coordination (emergence + design balance). Growth vs. differentiation is the homeostasis/metamorphosis trade-off.
Individual psychology follows the same physics. Low personal coherence predicts inability to execute long-term plans. The "Mask" (adopted personality incompatible with your native configuration) creates internal SORT conflicts → low coherence → paralysis.
AI systems already navigate this geometry. AlphaGo balances policy network (cheap model) vs. tree search (expensive computation). Reinforcement learning's discount factor γ is the time-preference parameter. Multi-agent RL is pure sovereignty trade-off (individual vs. collective reward).
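The discount factor γ mentioned above can be shown acting as exactly that time-preference dial. The reward streams below are invented for illustration; only the discounted-return formula is standard RL:

```python
# γ as a time-preference parameter: low γ favors immediate reward
# (homeostasis-like), high γ favors long-horizon reward (metamorphosis-like).
# The reward streams are illustrative assumptions.

def discounted_return(rewards, gamma):
    """Standard RL discounted return: G = sum over t of gamma**t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# A "comfort" stream pays now; a "growth" stream pays more, but later.
comfort = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
growth  = [0.0, 0.0, 0.0, 2.0, 2.0, 2.0]

for gamma in (0.5, 0.99):
    print(gamma, discounted_return(comfort, gamma), discounted_return(growth, gamma))
```

With γ = 0.5 the comfort stream wins; with γ = 0.99 the growth stream wins. Same environment, different time preference, opposite policy.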
Any intelligent system—biological, artificial, alien—must navigate these four dilemmas. This is computational necessity, not cultural projection.
If the framework holds:
For civilizations: Diagnose current state → predict trajectory → engineer institutions with "circuit breakers" against specific decay modes
For AI alignment: Non-arbitrary target (IFHS) grounded in physics, not human preferences. Systematic failure mode analysis. Architecture principles from systems that solve this problem (3-layer biological designs).
For individuals: New lenses and models for personal integration - detect "Mask" causing internal conflict → build internal coherence
For this community: Make civilizational dynamics a serious research field. Right now it's treated as "humanities" (vague, unfalsifiable). But if it's the same physics as AI alignment, we're massively underinvesting in understanding the broader problem class.
Until now, civilizational decay has been illegible—patterns without coordinates, dynamics without measurement.
SORT provides coordinates. Coherence/Action quantifies dynamics. The Four Horsemen name the mechanics.
What you can diagnose, you can engineer.
The framework makes specific predictions:
It's wrong somewhere. The question is where.
We spend billions on AI alignment (correctly—it's existential). We spend ~zero on civilizational alignment—understanding the physics of durable societies.
But if the framework is right, these are the same problem. An AI lab in a decaying civilization is solving alignment without understanding the dynamics that determine whether solutions can be implemented.
Designing coherent AI systems while failing to maintain civilizational coherence is a fundamental contradiction.
This emerged from intensive human-AI collaboration spanning two months; the still-unusual methodology is detailed in the appendices.
The book separates claims by epistemic tier (thermodynamic derivations vs. historical observations) and includes detailed protocols for testing.
This is theoretical synthesis analogous to evolutionary theory—pattern recognition across historical data, not controlled experiments. The SORT scores for historical civilizations are informed estimates requiring validation.
The goal isn't to be right. The goal is to make a neglected field tractable.
The highest form of success would be for this V1.0 to get tested, broken, and superseded by something better. The aim is to make this space non-neglected.
The full 800 pages are at https://aliveness.kunnas.com/ (alternative GDrive link) - includes summaries, PDFs of each of the 5 Parts, etc.
I think getting the physics of telic systems right might be one of the most important problems of our time. And right now, almost nobody is working on it.
That seems like a mistake.