Modern AI systems increasingly combine reasoning, memory, personalization, and identity within the same centralized infrastructure. The model provider effectively becomes the entity that both performs reasoning and owns persistent user context.
This creates a structural governance issue: the system that generates outputs also controls long-term identity and memory.
Many current mitigation approaches focus on policies, contracts, or transparency mechanisms. These approaches assume centralized architectures and attempt to regulate behavior within them. They do not change where authority actually resides.
I've been exploring an architectural alternative called Artificially Distributed System Intelligence (ADSI).
The core idea is simple:
Separate persistent identity and memory from the model provider at the architectural level.
Instead of allowing models to accumulate user context internally, persistent memory is placed inside a user-controlled control plane, and centralized models are treated purely as stateless capability engines.
In this model, intelligence delegation becomes a layered system:
Execution Layer
User devices and local processes initiate requests.
Persona Layer
Persistent identity and memory live here under user authority.
Intent Mediation Layer
Requests are filtered and transformed according to explicit policies.
Model Orchestration Layer
Abstracted task requests are routed to external models.
Centralized Capability Providers
Foundation models perform inference but do not retain persistent state.
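As a rough sketch, the layers above could be expressed as interfaces like the following. All class and method names here are hypothetical, chosen only to mirror the layer names; the post does not prescribe a concrete API.

```python
from typing import Callable, Protocol

class StatelessModel(Protocol):
    """Centralized capability provider: performs inference, retains no state."""
    def infer(self, task: str) -> str: ...

class PersonaLayer:
    """Persistent identity and memory, held under user authority."""
    def __init__(self) -> None:
        self._memory: list[str] = []

    def append(self, fact: str) -> None:
        self._memory.append(fact)

    def snapshot(self) -> list[str]:
        # Memory is only ever read locally; it is never shipped to a provider.
        return list(self._memory)

class IntentMediationLayer:
    """Filters and transforms requests according to an explicit policy."""
    def __init__(self, policy: Callable[[str], str]) -> None:
        self._policy = policy

    def mediate(self, intent: str) -> str:
        return self._policy(intent)
```

The point of the sketch is the type boundary: `StatelessModel.infer` takes only a task string, so nothing in the interface lets persona state cross into the provider.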
[Architecture diagram]
In simplified form, the interaction looks like:
User intent
↓
Intent mediation + policy filtering
↓
Stateless model inference
↓
Output
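The flow above can be sketched end to end in a few lines. This is a toy illustration under assumed names (`mediate`, `stateless_infer`, `handle`, and a placeholder blocklist), not a real mediation policy:

```python
POLICY_BLOCKLIST = {"ssn", "passport"}  # hypothetical policy terms

def mediate(intent: str) -> str:
    # Intent mediation + policy filtering: redact blocked tokens before egress.
    return " ".join(
        "[REDACTED]" if word.lower() in POLICY_BLOCKLIST else word
        for word in intent.split()
    )

def stateless_infer(task: str) -> str:
    # Stand-in for a remote foundation-model call; it receives no user state.
    return f"output for: {task}"

user_memory: dict = {}  # persistent memory, kept in the user-controlled plane

def handle(intent: str) -> str:
    task = mediate(intent)          # user intent -> mediated task
    output = stateless_infer(task)  # stateless model inference
    user_memory[intent] = output    # new context is written back locally only
    return output
```

Note that `user_memory` is updated after inference returns: the provider computes over the filtered task and nothing else, while the context accumulates on the user side.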
Persistent memory never leaves the user-controlled sovereign layer. This implies a shift in how we think about AI systems. Today, many architectures assume something like:
model + memory + personalization = integrated intelligence
ADSI instead treats models as stateless reasoning engines while identity and memory remain external infrastructure. This separation has a few interesting consequences.
First, it embeds governance constraints directly into system structure. A model provider cannot disclose memory it never possessed.
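One way to make that guarantee concrete is to make it structural in the wire format. The schema below is hypothetical (the names `AbstractedTask`, `task`, `capability` are mine): the payload a provider receives simply has no field where identity or memory could travel.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AbstractedTask:
    """The only payload a capability provider ever receives (hypothetical)."""
    task: str        # policy-filtered task text
    capability: str  # e.g. "summarize", "translate"
    # Deliberately no identity, memory, or history fields: the guarantee
    # is structural, not contractual.

wire = asdict(AbstractedTask(task="summarize this report", capability="summarize"))
```

A provider cannot disclose what the schema never lets it receive.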
Second, it potentially commoditises model providers. If models operate as stateless capability engines, orchestration layers could route tasks across multiple providers.
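The commoditization point can be illustrated with a trivial routing sketch (provider names and the `route` function are invented for illustration): once tasks are stateless, any provider is substitutable and routing reduces to a policy, price, or latency choice.

```python
# Hypothetical orchestration layer over interchangeable stateless providers.
PROVIDERS = {
    "provider_a": lambda task: f"a:{task}",
    "provider_b": lambda task: f"b:{task}",
}

def route(task: str, prefer: str = "provider_a") -> str:
    # No provider-side state means no lock-in: fall back freely.
    provider = PROVIDERS.get(prefer, PROVIDERS["provider_a"])
    return provider(task)
```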
Third, it raises questions about how alignment should be framed. If attention and context are mediated externally, part of the alignment surface shifts from the model to the orchestration layer controlling context.
There are also obvious limitations.
Intent mediation cannot fully prevent semantic leakage: even a filtered, abstracted request can reveal sensitive intent through its content.
Metadata leakage likely remains unavoidable.
And if the sovereign control plane itself is compromised, the architecture provides little protection.
So this should probably be viewed as a systems architecture proposal rather than a full solution to alignment.
Some open questions I’m still thinking about:
• Could externalizing persistent memory change how alignment problems manifest in large model systems?
• How realistic is policy-constrained intent mediation in practice?
• Would architectures like this meaningfully change incentives around data accumulation in AI providers?
• Could attention itself become a governed resource if context is externally mediated?
Working paper on SSRN:
Artificially Distributed Systems Intelligence (ADSI)
Curious if people working on AI architecture or alignment have thoughts on whether separating memory authority from model capability is a promising direction, or if I’m missing obvious failure modes.