Indeed, you've powerfully articulated the classic misaligned singleton failure, which assumes a vertical Singularity.
The alternative is a horizontal Plurality: Not a global emperor, but an ecosystem of local guardians (kami), each bound to a specific community — a river, a city. There is no single "AI" to seize control. The river's kami has no mandate over the forest's. Power is federated and subsidiary by design.
Far from making humane relationships "insignificant," this architecture makes them the only thing that matters. Communitarian values are not trivial decorations; they are the very purpose for which each kami exists.
It would indeed be unfortunate if a singular intelligence rendered humane relationships insignificant. Let's free the future — toward a plurality of intelligences that exist to make those relationships possible.
You've precisely identified the central crux. This dynamic — whether bounded communities can be stable against unbounded optimization — was a main focus of my recent conversation with Plex: Symbiogenesis vs. Convergent Consequentialism.
Your intuition holds true for offense-dominant environments. The d/acc move is to actively shape the environment toward defense-dominance. In such a reality, bounded, cooperative strategies (symbiogenesis) become evolutionarily stable, and the most ambitious strategy is robust cooperation, rather than unilateral vampirism.
Yes! I did read the revised series with great interest, and I fondly recall my participation in the Foresight Institute 2021 seminar that contributed to the 2022 edition.
On the "speed-run subsidiarity" idea, I really resonate with your comment:
"This is the first non-destructive, non-coercive pivotal act that I've considered plausibly sufficient to save us from the current crisis."
To decentralize action rapidly without dissolving into chaos or resorting to localized coercion, we need a shared operational ethic that lets horizontal alignment scale quickly. This, I believe, is where the concept of "civic care" becomes an essential catalyst.
Unlike abstract, "thin" ideologies, civic care is rooted in normative relational ethics. It emphasizes context, interdependence, and the direct responsibility we have for the relationships we are embedded within. This accelerates the speed-run in several ways:
Reduced Coordination Overhead: When civic care becomes the default ethical posture for AI training and deployment, we minimize friction among cooperating agents by focusing them on the immediate, tangible needs of the community.
Inherent Locality: The ethic of care naturally aligns with subsidiarity. We are best equipped to care for the relationships and environments closest to us. It intrinsically motivates action at the most appropriate level, strengthening local capacity.
Rapid Trust Building: Subsidiarity fails without high trust. By adopting a shared commitment to care, decentralized groups can establish trust much faster. This high-trust environment is a prerequisite for navigating crises effectively.
Hi! Thank you for this note and for sharing your work on "Witness."
You’re right: AI already underwrites coordination. The lever is not whether we use it, but how. Shifting from unbounded strategic agents to bounded, tool‑like "kami of care" makes safety architectural; scope, latency, and budget are capped, so power‑seeking becomes an anti‑pattern.
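To make the architectural claim concrete, here is a minimal sketch (all names and numbers are hypothetical, not an existing implementation) of caps enforced by the harness rather than by the agent's goals:

```python
from dataclasses import dataclass
import time


@dataclass(frozen=True)
class KamiBounds:
    scope: frozenset       # resources this kami may touch, e.g. one river's sensors
    max_latency_s: float   # hard wall-clock cap per action
    budget: int            # total actions before the kami must stand down


class BoundedKami:
    def __init__(self, bounds: KamiBounds):
        self.bounds = bounds
        self.spent = 0

    def act(self, resource: str, action):
        # Out-of-scope requests fail closed: there is no path to widen the mandate.
        if resource not in self.bounds.scope:
            raise PermissionError(f"{resource} is outside this kami's mandate")
        if self.spent >= self.bounds.budget:
            raise RuntimeError("budget exhausted; escalate to the community, not the kami")
        start = time.monotonic()
        result = action(resource)
        if time.monotonic() - start > self.bounds.max_latency_s:
            raise TimeoutError("latency cap exceeded; result discarded")
        self.spent += 1
        return result


river_kami = BoundedKami(KamiBounds(scope=frozenset({"river/levels"}),
                                    max_latency_s=2.0, budget=100))
river_kami.act("river/levels", lambda r: f"read {r}")   # allowed
# river_kami.act("forest/soil", ...)  would raise PermissionError: no mandate
```

Because the caps live outside the agent, "acquire more resources" has nowhere to go: exceeding any cap fails closed and escalates back to the community.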
Witness cleanly instantiates Attentiveness: a push channel that turns lived experience into common knowledge with receipts, so alignment happens by process and at the speed of society. This is d/acc applied to epistemic security in practice, through ground‑up shared reality paired with stronger collective steering.
Great! If there are such areas, in the spirit of d/acc, I'd be happy to use a local language model to paraphrase them away and co-edit in an end-to-end-encrypted way to confirm before publishing.
On north star mapping: Do the CIP Global Dialogues and the GD Challenge look like something of that shape, or more like the AI Social Readiness Process?
On Raemon’s (very insightful!) piece w.r.t. curing cancer inevitably routing through consequentialism: Earlier this year I visited Bolinas, a birthplace of integrative cancer care, an approach that centers healing within communities and was catalyzed by people experiencing cancer themselves. This care ethic prioritizes virtues like attentiveness and responsiveness to relational health over outcome optimization.
Asking a superintelligence to 'solve cancer' in one fell swoop — regardless of collateral disruptions to human relationships, ecosystems, or agency — directly contravenes this, as it reduces care to a terminal goal rather than an ongoing, interdependent process.
In a d/acc future, one tends to the research ecosystem so progress emerges through horizontal collaboration — e.g., one kami for protein‑folding simulation, one for cross‑lab knowledge sharing; none has the unbounded objective “cure cancer.” We still pursue cures, but with each kami having a non-fungible purpose. The scope, budget, and latency caps inherent in this configuration mean that capability gains don’t translate into open‑ended optimization.
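As an illustration only (the kami names, scopes, and numbers below are invented for this sketch), the ecosystem's division of labour might be declared as plainly as:

```python
# Two kami with non-fungible purposes and separate caps; neither carries
# the open-ended objective "cure cancer", so capability gains in one
# cannot be redirected into unbounded optimization.
folding_kami = {
    "purpose": "protein-folding simulation for submitted structures",
    "scope": ["structure_queue", "compute_cluster/folding"],
    "budget_gpu_hours": 500,
    "max_latency_s": 3600,
}
knowledge_kami = {
    "purpose": "cross-lab sharing of negative and positive results",
    "scope": ["lab_wiki", "preprint_feed"],
    "budget_api_calls": 10_000,
    "max_latency_s": 60,
}
```

"Curing cancer" remains a property of the ecosystem these serve, not a goal either kami optimizes.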
I'd be happy to have an on-the-record conversation, co-edited and published under CC0 to SayIt next Monday 1pm if you agree.
Thank you for the encouragement, recommendations, and for flagging the need for more context on strong ASI models, including the default extremity of the transition!
You're spot on; my DeepMind talk emphasized horizontal alignment (defense against coordination failures) as a complement to work on vertical alignment perils, like those flowing from the orthogonality thesis and instrumental convergence.
I've pre-ordered the IABIED book and have now re-read several recommendations: "AGI Ruin" details the lethalities, and "A Central AI Alignment Problem" highlights the risk of the sharp left turn. I've just reviewed "Five Theses, Two Lemmas," which reinforces the intelligence explosion, the complexity and fragility of value, and indirect normativity as paths to safer goals.
These sharpen why 6pack.care prioritizes local kami (bounded, non-maximizing agents) to mitigate unbounded optimization and promote technodiversity over singleton risks.
Topics I’d love to discuss further:
Indeed, there are two classes of alignment problems. The first is vertical: making a single agent loyal. The second is horizontal: ensuring a society of loyal agents doesn't devolve into systemic conflict. 6pack.care is a framework for the second challenge as articulated by CAIF.
It posits that long-term alignment is not a static property but a dynamic capability: alignment-by-process. The core mechanism is a tight feedback loop of civic care, turning interactions into a form of coherent blended volition. This is why the flood bot example, though simple, describes this fractal process.
This process-based approach is also our primary defense against power-seeking. Rather than trying to police an agent's internal motives, we design the architecture for boundedness (kami) and federated trust (e.g. ROOST.tools), making unbounded optimization an anti-pattern. The system selects for pro-sociality.
This bridges two philosophical traditions: EA offers a powerful consequentialist framework. Civic Care provides the necessary process-based virtue ethic for a pluralistic world. The result is a more robust paradigm: EA/CC (Effective Altruism with Civic Care).
Hi! If I understand you correctly, the risk you identify is an AI gardener with its own immutable preferences, pruning humanity into compliance.
Here, the principle of subsidiarity entails BYOP: Bring Your Own Policy. A concrete example is our team at ROOST working with OpenAI to release its gpt-oss-safeguard model. Its core feature is not a set of rules, but the ability for any community to evolve its own code of conduct in plain language.
If a community decides its values have changed — that it wants to pivot away from its current governance to pursue a transformative future — the gardener is immediately responsive. In an ecosystem of communities each empowered with its own kami, we preserve the potential for any garden to grow wild, if its community so chooses.
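To ground BYOP, here is a minimal sketch, assuming a gpt-oss-safeguard-style model served locally behind an OpenAI-compatible endpoint; the URL, model identifier, and policy text are placeholders, not a confirmed setup:

```python
from openai import OpenAI

# A locally hosted, OpenAI-compatible endpoint (placeholder address).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

community_policy = """\
Our river community's code of conduct (v3, amended last month):
1. Posts coordinating cleanup volunteers are always allowed.
2. Personal attacks on named neighbours are not allowed.
3. Commercial promotion is allowed only in the weekly market thread.
"""

def review(post: str) -> str:
    # The policy travels with the request in plain language, so the model
    # applies whatever the community currently says its values are.
    response = client.chat.completions.create(
        model="gpt-oss-safeguard",  # placeholder model identifier
        messages=[
            {"role": "system", "content": community_policy},
            {"role": "user", "content": f"Does this post comply? Post:\n{post}"},
        ],
    )
    return response.choices[0].message.content
```

Because the code of conduct is just a prompt, amending it is an edit by the community rather than a retraining run — which is what makes the gardener immediately responsive when values change.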