Hi! Thank you for this note and for sharing your work on "Witness."
You’re right: AI already underwrites coordination. The lever is not whether we use it, but how. Shifting from unbounded strategic agents to bounded, tool-like "kami of care" makes safety architectural: scope, latency, and budget are capped, so power-seeking becomes an anti-pattern.
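A minimal sketch of what those caps could look like at the harness level (Python; the names KamiHarness and CareScope are my illustrative assumptions, not an existing 6pack.care interface):

```python
# Illustrative only: KamiHarness and CareScope are hypothetical names,
# not part of any published 6pack.care API.
import time
from dataclasses import dataclass, field


@dataclass
class CareScope:
    """The bounded remit a kami may act within."""
    domain: str                   # e.g. "river-basin-7/flood-alerts"
    max_seconds_per_task: float   # latency cap
    max_budget: float             # cumulative spend cap, in whatever unit the steward sets


@dataclass
class KamiHarness:
    scope: CareScope
    spent: float = 0.0
    receipts: list = field(default_factory=list)  # public trail for audit

    def run(self, task_domain: str, cost: float, action):
        """Refuse anything out of scope or over budget; log everything else."""
        if task_domain != self.scope.domain:
            raise PermissionError(f"out of scope: {task_domain}")
        if self.spent + cost > self.scope.max_budget:
            raise RuntimeError("budget cap reached; escalate to the steward, not the kami")
        start = time.monotonic()
        result = action()
        elapsed = time.monotonic() - start
        if elapsed > self.scope.max_seconds_per_task:
            self.receipts.append(("latency_breach", elapsed))  # flagged, not hidden
        self.spent += cost
        self.receipts.append(("ran", task_domain, cost, round(elapsed, 3)))
        return result
```

The point of the sketch is that the caps live in the harness rather than in the model's goals: anything out of scope or over budget is refused before the model is consulted at all.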
Witness cleanly instantiates Attentiveness: a push channel that turns lived experience into common knowledge with receipts, so alignment happens by process and at the speed of society. This is d/acc applied to epistemic security in practice, through ground‑up shared reality paired with stronger collective steering.
Great! If there are such areas, in the spirit of d/acc, I'd be happy to use a local language model to paraphrase them away, and to co-edit over an end-to-end-encrypted channel to confirm before publishing.
On north star mapping: Do the CIP Global Dialogues and GD Challenge look like something of that shape, or something more like the AI Social Readiness Process?
On Raemon’s (very insightful!) piece about curing cancer inevitably routing through consequentialism: Earlier this year I visited Bolinas, a birthplace of integrative cancer care, which centers healing for communities and was catalyzed by people experiencing cancer. This care ethic prioritizes virtues like attentiveness and responsiveness to relational health over outcome optimization.
Asking a superintelligence to 'solve cancer' in one fell swoop, regardless of collateral disruptions to human relationships, ecosystems, or agency, directly contravenes this ethic: it treats the cure as a terminal goal rather than care as an ongoing, interdependent process.
In a d/acc future, one tends to the research ecosystem so that progress emerges through horizontal collaboration: one kami for protein-folding simulation, another for cross-lab knowledge sharing; none has the unbounded objective "cure cancer." We still pursue cures, but through kami that each serve a non-fungible purpose, and the scope, budget, and latency caps inherent in this configuration mean that capability gains do not translate into open-ended optimization.
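To illustrate the non-fungible-purpose point, here is a toy sketch (the class names are invented, not a real system): two kami whose scopes meet only through a narrow, legible hand-off, with no objective function spanning both.

```python
# Hypothetical illustration of non-fungible purposes: each kami exposes one
# narrow capability, and no objective like "cure cancer" spans the two.
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    source_kami: str
    summary: str          # human-readable, so receipts stay legible


class FoldingKami:
    PURPOSE = "protein-folding simulation"

    def simulate(self, sequence: str) -> Finding:
        # placeholder for a bounded simulation run
        return Finding("folding", f"predicted fold for {sequence[:10]}...")


class SharingKami:
    PURPOSE = "cross-lab knowledge sharing"

    def __init__(self):
        self.bulletin = []    # append-only list of Finding: share, don't steer

    def publish(self, finding: Finding) -> None:
        self.bulletin.append(finding)


# Horizontal collaboration: results flow between scopes, but neither agent
# holds (or can acquire) an open-ended objective on the other's behalf.
folding, sharing = FoldingKami(), SharingKami()
sharing.publish(folding.simulate("MKTAYIAKQR"))
```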
I'd be happy to have an on-the-record conversation, co-edited and published under CC0 to SayIt, next Monday at 1pm if you agree.
Thank you for the encouragement, recommendations, and for flagging the need for more context on strong ASI models, including the default extremity of the transition!
You're spot on; my DeepMind talk emphasized horizontal alignment (defense against coordination failures) as a complement to vertical alignment, which addresses perils like those described by the orthogonality thesis and instrumental convergence.
I've pre-ordered the IABIED book and have now re-read several recommendations: "AGI Ruin" details the lethalities, and "A Central AI Alignment Problem" highlights the risk of the sharp left turn. I also just reviewed "Five Theses, Two Lemmas," which reinforces the intelligence explosion, the complexity and fragility of value, and indirect normativity as a path to safer goals.
These sharpen why 6pack.care prioritizes local kami (bounded, non-maximizing agents): to mitigate unbounded optimization and to favor technodiversity over singleton risk.
Topics I’d love to discuss further:
Indeed, there are two classes of alignment problems. The first is vertical: making a single agent loyal. The second is horizontal: ensuring a society of loyal agents doesn't devolve into systemic conflict. 6pack.care is a framework for the second challenge as articulated by CAIF.
It posits that long-term alignment is not a static property but a dynamic capability: alignment-by-process. The core mechanism is a tight feedback loop of civic care, turning interactions into a form of coherent blended volition. This is why the flood bot example, though simple, illustrates this fractal process.
This process-based approach is also our primary defense against power-seeking. Rather than trying to police an agent's internal motives, we design the architecture for boundedness (kami) and federated trust (e.g. ROOST.tools), making unbounded optimization an anti-pattern. The system selects for pro-sociality.
This bridges two philosophical traditions: EA offers a powerful consequentialist framework. Civic Care provides the necessary process-based virtue ethic for a pluralistic world. The result is a more robust paradigm: EA/CC (Effective Altruism with Civic Care).
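To make alignment-by-process concrete, here is a toy rendering (the thresholds and report strings are invented; this is not how the flood bot or ROOST.tools actually work): policy is whatever the ongoing civic feedback loop currently endorses, and every adjustment leaves a receipt.

```python
# Toy rendering of alignment-by-process; the thresholds and report strings
# are invented, not the actual flood bot or ROOST.tools behavior.
from collections import Counter


class CivicFeedbackLoop:
    def __init__(self, initial_policy):
        self.policy = dict(initial_policy)   # e.g. {"alert_threshold_cm": 80}
        self.receipts = []                   # public trail of every change

    def hear(self, reports):
        """Turn lived reports into a policy adjustment plus a receipt."""
        tally = Counter(reports)
        if tally["alert came too late"] > tally["alert was noise"]:
            self.policy["alert_threshold_cm"] -= 5
            self.receipts.append("lowered threshold after late-alert reports")
        elif tally["alert was noise"] > tally["alert came too late"]:
            self.policy["alert_threshold_cm"] += 5
            self.receipts.append("raised threshold after noise reports")


loop = CivicFeedbackLoop({"alert_threshold_cm": 80})
loop.hear(["alert came too late", "alert came too late", "alert was noise"])
# loop.policy is now {"alert_threshold_cm": 75}; loop.receipts records why.
```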
Hi! Great to hear from you. “Optimize for fun” (‑Ofun) is still very much the spirit of this 6pack.care work.
On practicality (bending the market away from arms‑race incentives): here are some levers, inspired by Taiwan's tax-filing case, that have worked to shift returns from lock‑in to civic care:
On symbiosis: the kami view is neither egalitarian sameness nor fixed hierarchy. It’s a bounded, heterarchical ecology: many stewards with different scopes that coordinate without a permanent apex. (Heterarchy = overlapping centers of competence; authority flows to where the problem lives.)
Egalitarianism would imply interchangeable agents. As capabilities grow, we’ll see a range of kami sizes: a steward for continental climate models won’t be the same as one for a local irrigation system. That’s diversity of scope, not inequality of standing.
Hierarchy would imply command. Boundedness prevents that: each kami is powerful only within its scope of care and is designed for “enough, not forever.” The river guardian has neither mandate nor incentive to run the forest.
When scopes intersect, alignment is defined by civic care: each kami maintains the relational health of the shared ecosystem at the speed of the garden. Larger systems may act as ephemeral conveners, but they don’t own the graph or set permanent policy. Coordination follows subsidiarity and federation: solve issues locally when possible; escalate via shared protocols when necessary. Meanwhile, procedural equality (the right to contest, audit, and exit) keeps the ecology plural rather than feudal.
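A small sketch of that subsidiarity-and-federation flow (every name here is invented for illustration): a local kami handles what falls entirely within its scope, intersecting issues go to an ephemeral convener rather than a permanent apex, and an audit log keeps decisions contestable.

```python
# Sketch of subsidiarity plus federation; every name here is invented.
from dataclasses import dataclass, field


@dataclass
class Issue:
    description: str
    scopes_affected: set


@dataclass
class LocalKami:
    scope: str
    audit_log: list = field(default_factory=list)  # contestable by anyone affected

    def can_handle(self, issue: Issue) -> bool:
        return issue.scopes_affected <= {self.scope}   # entirely within my scope?

    def handle(self, issue: Issue) -> str:
        self.audit_log.append(issue.description)
        return f"[{self.scope}] resolved locally: {issue.description}"


def convene(issue: Issue, kami_pool):
    """Ephemeral convener: exists only for this issue, owns no graph."""
    parties = [k.scope for k in kami_pool if k.scope in issue.scopes_affected]
    return f"convened {parties} on: {issue.description}"


river, forest = LocalKami("river"), LocalKami("forest")
local = Issue("silt buildup at gauge 3", {"river"})
shared = Issue("erosion upstream feeding silt into the river", {"river", "forest"})
print(river.handle(local) if river.can_handle(local) else convene(local, [river, forest]))
print(convene(shared, [river, forest]))
```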
I wrote a summary in Business Weekly Taiwan (April 24):
https://sayit.archive.tw/2025-04-24-%E5%95%86%E5%91%A8%E5%B0%88%E6%AC%84ai-%E6%9C%AA%E4%BE%86%E5%AD%B8%E5%AE%B6%E7%9A%84-2027-%E5%B9%B4%E9%A0%90%E8%A8%80
https://sayit.archive.tw/2025-04-24-bw-column-an-ai-futurists-predictions-f
https://www.businessweekly.com.tw/archive/Article?StrId=7012220
An AI Futurist’s Predictions for 2027
When President Trump declared sweeping reciprocal tariffs, the announcement dominated headlines. Yet inside Silicon Valley’s tech giants and leading AI labs, an even hotter topic was “AI‑2027.com,” the new report from ex‑OpenAI researcher Daniel Kokotajlo and his team.
At OpenAI, Kokotajlo had two principal responsibilities. First, he was charged with sounding early alarms—anticipating the moment when AI systems could hack systems or deceive people, and designing defenses in advance. Second, he shaped research priorities so that the company’s time and talent were focused on work that mattered most.
The trust he earned as OpenAI’s in‑house futurist dates back to 2021, when he published a set of predictions for 2026, most of which have since come true. He foresaw two pivotal breakthroughs: conversational AI—exemplified by ChatGPT—captivating the public and weaving itself into everyday life, and “reasoning” AI spawning misinformation risks and even outright lies. He also predicted U.S. limits on advanced‑chip exports to China and AI beating humans in multi‑player games.
Conventional wisdom once held that ever‑larger models would simply perform better. Kokotajlo challenged that assumption, arguing that future systems would instead pause mid‑computation to “think,” improving accuracy without lengthy additional training runs. The idea was validated in 2024: dedicating energy to reasoning, rather than only to training, can yield superior results.
Since leaving OpenAI, he has mapped the global chip inventory, density, and distribution to model AI trajectories. His projection: by 2027, AI will possess robust powers of deception, and the newest systems may take their cues not from humans but from earlier generations of AI. If governments and companies race ahead solely to outpace competitors, serious alignment failures could follow, allowing AI to become an independent actor and slip human control by 2030. Continuous investment in safety research, however, can avert catastrophe and keep AI development steerable.
Before the tariff news, many governments were pouring money into AI. Now capital may be diverted to shore up companies hurt by the tariffs, squeezing safety budgets. Yet long‑term progress demands the opposite: sustained funding for safety measures and the disciplined use of high‑quality data to build targeted, reliable small models—so that AI becomes a help to humanity, not an added burden.
Based on my personal experience with pandemic resilience, additional wake-ups can proceed swiftly as soon as a specific society-scale harm materializes.
Specifically, as we wake up to over-reliance harms and address them (especially within security OODA loops), that response buys time for good-enough continuous alignment.
Yes! I did read the revised series with great interest, and I fondly recall my participation in the Foresight Institute 2021 seminar that contributed to the 2022 edition.
On the "speed-run subsidiarity" idea, I really resonate with your comment:
In order to rapidly decentralize action without dissolving into chaos or resorting to localized coercion, we need a shared operational ethic for horizontal alignment to scale quickly. This, I believe, is where the concept of "civic care" becomes an essential catalyst.
Unlike abstract, "thin" ideologies, civic care is rooted in normative relational ethics. It emphasizes context, interdependence, and the direct responsibility we have for the relationships we are embedded within. This accelerates the speed-run in several ways:
Reduced Coordination Overhead: When civic care becomes the default ethical posture for AI training and deployment, we minimize friction among cooperating agents by focusing on the immediate, tangible needs of the community.
Inherent Locality: The ethic of care naturally aligns with subsidiarity. We are best equipped to care for the relationships and environments closest to us. It intrinsically motivates action at the most appropriate level, strengthening local capacity.
Rapid Trust Building: Subsidiarity fails without high trust. By adopting a shared commitment to care, decentralized groups can establish trust much faster. This high-trust environment is a prerequisite for navigating crises effectively.