I've been obsessing over a problem that sits right at the intersection of AI alignment and economics: the attribution problem in zero-marginal-cost environments. To my mind, this problem only becomes fully apparent (perhaps unbearably so) at this intersection.
If generative AI pushes the cost of reproduction to zero, our current "Store of Value" model (owning assets) seems to break (not that it wasn't already breaking before). It leads to artificial scarcity and rent-seeking. As a philosopher, I look at this and see a category error: we act as if digital objects were "things", but a digital object has no value in isolation.
My intuition, based on the axiom *Esse est Operari* ("to be is to execute"), is that we need to switch to a "Flow of Value" model: value is causal enablement.
Together with an AI partner (as a philosopher, I lack the technical knowledge to turn the theory into executable form on my own), I tried to translate this philosophical intuition into a hard, thermodynamic protocol ("The Ontological Protocol"). I want to run the core mechanism by you because I suspect I might be walking into a trap (Goodhart's Law, perhaps?) that I can't see from my philosophical viewpoint.
The Core Mechanism: Radical Endogeneity
We tried to build a system without "magic numbers" or central governance: just physics and information theory.
1. Price as "Proof of Sacrifice"
Instead of setting a price, the system looks at the network’s burn rate. It takes the 5th percentile of valid transactions as a baseline (Bprod). The idea is: You have to burn something (energy/fiat) to write to the graph. This isn't just spam protection; it anchors the system to physical reality (a proxy for the Landauer Limit).
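A minimal sketch of how such an endogenous floor could be computed (my own illustration, not the protocol's reference implementation; the function name and the exact percentile method are assumptions):

```python
import statistics

def production_baseline(burn_amounts: list[float]) -> float:
    """B_prod: the 5th percentile of burn amounts across valid transactions.

    statistics.quantiles(..., n=20) returns the 19 cut points at
    5%, 10%, ..., 95%; the first one is the 5th percentile.
    """
    if len(burn_amounts) < 2:
        raise ValueError("need at least two burns to estimate a percentile")
    return statistics.quantiles(burn_amounts, n=20)[0]

# A skewed burn distribution: many small burns, a few whales.
burns = [1.0] * 50 + [2.0] * 30 + [5.0] * 15 + [100.0] * 5
b_prod = production_baseline(burns)  # floor sits near the cheap end
```

Because the baseline is a low percentile of observed burns, whales cannot drag the write-price up, but the floor still moves with the network's real cost of writing.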
2. Truth as "Structure" (The LZMA Proxy)
This is the part where I most need feedback. I have tried to falsify my own attempts multiple times, alone and with the help of AI, but that's not enough: without outside scrutiny, I can't trust the logic and methods. The basic need, to my mind, is a way to measure "value" without subjective voting.
With the help of AI, I decided to use Algorithmic Information Theory: we use LZMA compression ratios as a proxy for structure.
- If data is highly structured (code, DNA, axioms), it compresses well (alpha ≈ 1).
- If data is random noise, it doesn't compress (alpha ≈ 0).

We then link this to time, in what we call the Law of Structural Persistence. Basically: lambda = alpha.
If something is "true" (highly structured), the protocol assumes it shouldn't decay: a teacher's input or a piece of core infrastructure keeps generating royalties indefinitely (the Lindy effect), while noise decays instantly.
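To make the proxy concrete, here is a minimal sketch (my own illustration; the clamping to [0, 1] and the LZMA preset are assumptions, not the protocol spec):

```python
import lzma
import os

def alpha(data: bytes) -> float:
    """Structure proxy: 1 - (compressed size / original size), clamped to [0, 1].

    Highly structured data compresses well, so alpha lands near 1.
    Incompressible noise (plus LZMA container overhead) is clamped to 0.
    """
    compressed = lzma.compress(data, preset=9)
    return max(0.0, 1.0 - len(compressed) / len(data))

structured = b"def double(x): return x * 2\n" * 500  # repetitive "code"
noise = os.urandom(len(structured))                   # random bytes

# Law of Structural Persistence: persistence lambda equals alpha,
# so the structured input endures while the noise decays.
lam_structured = alpha(structured)  # close to 1
lam_noise = alpha(noise)            # close to 0
```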
The Safety Valve: The Metabolic Loop
My biggest worry is the "Paperclip Maximizer" scenario in economics (e.g., maximizing addiction/time-on-site).
So we introduced a feedback loop. We define consumption as Metabolic Input.
If a user consumes content (say, a game) and their subsequent output (work, code, art) drops to zero (because they are addicted), the causal chain breaks. The game developer’s revenue stream, which depends on the user’s future ripples, dries up.
The system mathematically forces the developer to care about the user’s agency, not just their attention.
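As a toy model of that dependence (entirely my own sketch; the protocol's actual royalty formula is not specified here), the creator's revenue can be tied to the downstream output their content causally enables rather than to raw attention:

```python
def developer_royalty(downstream_outputs: list[float], share: float = 0.1) -> float:
    """Toy metabolic loop: the creator earns a share of the measurable
    output (work, code, art) their content enables in its consumers.
    If the user's output collapses, the revenue stream dries up with it."""
    return share * sum(downstream_outputs)

# User stays productive after playing the game:
healthy = developer_royalty([5.0, 6.0, 7.0])
# User becomes addicted and stops producing:
addicted = developer_royalty([5.0, 1.0, 0.0])
# healthy > addicted: the developer is paid for agency, not attention.
```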
My Question to the LessWrong Community
I feel like the LZMA proxy (alpha) is elegant, but is it robust? Can you construct an adversarial input that is highly compressible (high alpha) but entropically destructive to the network? And does tying persistence to structure create a rigid, conservative bias?
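As a starting point for red-teamers, note that the trivial degenerate case already looks worrying: a megabyte of a single repeated byte is worthless, yet it scores near the maximum on the obvious implementation of the proxy (my own sketch of alpha below, not the protocol's):

```python
import lzma

def alpha(data: bytes) -> float:
    """LZMA structure proxy: 1 - (compressed/original size), clamped to [0, 1]."""
    return max(0.0, 1.0 - len(lzma.compress(data, preset=9)) / len(data))

spam = b"A" * 1_000_000  # valueless, but trivially "structured"
assert alpha(spam) > 0.99  # near-maximal score for pure repetition
```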
The full formalization (v3.3) is on GitHub. I’d appreciate any red-teaming.
Besides this concrete concern and question, I would also be really glad for any further feedback. If this approach resonates but has to be reconstructed entirely, that would be more than fine: it would be proof that, whatever one human does, an entire community is needed to make it work, and therefore proof that at least the core of the idea isn't obsolete...
https://github.com/SkopiaOutis/ontologial-protocol
Thank you for any reply!

Yours,
Skopia Outis