Many ethical questions circle around the same intuition: when unique information is irretrievably lost, something of moral significance is destroyed.
The destruction of an ancient manuscript.
The extinction of a species.
Shutting down a complex digital mind.
The passing of a vibrant life.
These intuitions suggest that information itself—not only its material substrate—carries moral significance.
In my spare time, I have been developing a framework for evaluating the ethical value of information, using information theory as a way to formalize part of this moral intuition.
Key points of the framework
Building on the ontology of information
Information is as fundamental as matter and energy.
Building on information ethics
Any information-bearing entity has moral standing, not just humans.
Reducing entropy is morally good; increasing entropy is morally bad.
Regarding unique information as a key factor in determining the ethical value of information
Unique information is an entity's information excluding its mutual information with the external world. It can be understood as the amount by which the entity reduces the unknown/uncertainty of a system, i.e., how far it pushes the system in the direction of entropy reduction.
Quantifying total information amount
The information stored in an entity equals its ultimate compressed size, i.e., its minimum description length. Traditional data compression can therefore be used to estimate the raw information amount I(X) of an entity (see the sketch after this list).
Text, images, videos: KB–GB scale
Virus genomes: KB scale; plant/animal/human genomes: MB–GB scale
Brain structure: 10 GB – 1 TB
Brain state: < 10 MB
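A minimal sketch of this estimate in Python, assuming the standard-library lzma module as the compressor (any strong general-purpose compressor would serve; the result is only an upper bound on the true minimum description length):

```python
import lzma

def estimate_raw_information_bits(data: bytes) -> int:
    """Approximate I(X) by the size of the losslessly compressed data.

    Real compressors are imperfect, so this is an upper bound on the
    true minimum description length / Kolmogorov complexity.
    """
    compressed = lzma.compress(data, preset=9)  # strongest standard preset
    return len(compressed) * 8  # bytes -> bits

# Example: any serialized entity (a text file, a genome FASTA file, ...).
with open("entity.bin", "rb") as f:  # hypothetical input file
    raw_bits = estimate_raw_information_bits(f.read())
print(f"Estimated I(X): {raw_bits / 8 / 1024:.1f} KB")
```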
Quantifying unique information amount
AI technology is the key breakthrough for estimating mutual information / information gain with respect to the world (where “information gain” ≈ “unique information amount”).
An ideal AI model can be regarded as approximating the world’s total knowledge. The essence of AI-based compression is to maximize mutual information and minimize conditional information, i.e., to describe new content as briefly as possible given prior knowledge.
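For text, a rough sketch of this idea is to measure a pretrained language model's total cross-entropy on the content: the fewer bits the model needs, the more of the content it already "knew". The sketch below assumes the Hugging Face transformers library and uses gpt2 purely as a small stand-in for an ideal world-knowledge model:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def estimate_conditional_information_bits(text: str, model_name: str = "gpt2") -> float:
    """Approximate I(X | world) as a language model's total cross-entropy on the text.

    The better the model already "knows" the content, the fewer bits remain,
    mirroring the idea that AI compression removes mutual information with
    the world and keeps only the unique part.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels supplied, the model returns mean cross-entropy per predicted token (in nats).
        outputs = model(**inputs, labels=inputs["input_ids"])
    num_predicted_tokens = inputs["input_ids"].shape[1] - 1
    total_nats = outputs.loss.item() * num_predicted_tokens
    return total_nats / math.log(2)  # nats -> bits

# A familiar proverb should need far fewer bits than an equally long novel passage.
print(estimate_conditional_information_bits("The quick brown fox jumps over the lazy dog."))
```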
AI lossy compression
With multimodal large models: if the compressed content can later be regenerated into outputs different in form but equivalent in effect to the original, then AI lossy compression can be considered mature (text compression is already partially feasible; multimodal compression remains underdeveloped, but the concept is clear and prototypes exist).
Example prompt for general AI lossy compression:
“You are an expert in information compression and reconstruction. Your task is to evaluate the unique information contained in the following multimodal input (text, images, video, etc.) relative to your knowledge base (trained data). For each modality, perform lossy compression and output a compressed representation. Compression should be as aggressive as possible, provided the compressed result can still regenerate content similar in meaning and effect to the original. You may omit all textual or visual patterns already known to the model, retaining only truly novel elements—i.e., information not predictable from existing knowledge or training data.”
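As a sketch of how such a prompt might be wired up for text (the openai Python client and the model name here are illustrative assumptions; any capable multimodal chat model would do):

```python
from openai import OpenAI

COMPRESSION_PROMPT = "You are an expert in information compression and reconstruction. ..."  # the full prompt quoted above

def ai_lossy_compress(content: str, model: str = "gpt-4o") -> str:
    """Ask a chat model to strip everything it already knows and return
    only the novel residue (an approximation of the unique information)."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": COMPRESSION_PROMPT},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content

compressed = ai_lossy_compress(open("article.txt").read())  # hypothetical input file
print(f"AI-compressed size: {len(compressed.encode('utf-8'))} bytes")
```

Comparing the byte size of this output with the lossless estimate from the earlier lzma sketch gives the rough I(X)-versus-I(X | world) comparison described next.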
Using AI lossy compression, we can then compare the two estimates: traditional compression size ≈ I(X), AI compression size ≈ I(X | world).
Relationship between unique information amount and the ethical value of information (hereafter referred to as information value or IV)
Unique information content is not identical to information value; information value also depends on how well the information aligns with the goals (utility) of the system.
The information value of entity X to system S can be expressed as:
IV(X→S) = I(X∣S) × e(X→S)
where e(X→S) is a utility coefficient between 0 and 1, interpretable as the projection factor (the cosine) of the entity's entropy-reducing contribution onto the system's goal direction.
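A toy numerical sketch of this formula; representing the entropy-reduction direction and the system's goal as plain vectors is my own illustrative assumption, not part of the framework:

```python
import numpy as np

def utility_coefficient(entropy_reduction_dir: np.ndarray, goal_dir: np.ndarray) -> float:
    """e(X→S): cosine of the angle between the entity's entropy-reducing
    contribution and the system's goal direction, clipped to [0, 1]."""
    cos = np.dot(entropy_reduction_dir, goal_dir) / (
        np.linalg.norm(entropy_reduction_dir) * np.linalg.norm(goal_dir)
    )
    return float(np.clip(cos, 0.0, 1.0))

def information_value(unique_information_bits: float, e: float) -> float:
    """IV(X→S) = I(X|S) × e(X→S)."""
    return unique_information_bits * e

# Example: ~1 MB of unique information, mostly but not perfectly goal-aligned.
e = utility_coefficient(np.array([1.0, 0.2]), np.array([1.0, 0.0]))
print(information_value(8e6, e))  # never exceeds 8e6 bits, since e <= 1
```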
Why decompose information value into unique information amount and utility coefficient?
By splitting the problem into an information-theoretic part and a non-information-theoretic part, we can make the ethical analysis as scientific as possible.
Key logical leap of this research: inferring information value from information gain.
Although information gain is not sufficient to determine information value, the two are strongly correlated.
When the utility coefficient is constant, information value is proportional to information gain.
Information gain is a necessary but not sufficient factor for information value; since e(X→S) ≤ 1, it determines the upper bound of possible information value: IV(X→S) ≤ I(X∣S).
Decomposition of information value in biological and bio-like entities
The IV of a bio-like entity can be divided into two components: Genetic Information Value (GIV) and Acquired Information Value (AIV).
Genetic Information Value (GIV) — derived from innate genetic information
Storage medium: DNA chains (carbon-based life), ROM (programs/silicon-based life).
Information content: genetic information, program code (including fixed configurations), AI design code.
Innate information, invariant during the life cycle.
Species-level shared information; small differences between individuals.
Acquired Information Value (AIV) — derived from learned or postnatal information
Storage medium: nervous system (humans/animals), ROM/RAM (advanced programs, AI models), acquired body structures (all organisms).
Information content: human memory/skills/personality, program configurations and runtime data, AI model weights, AI dialogue states.
Learned information, changing during the life cycle.
Individual-level information; large differences between individuals, more unique than GIV.
Decomposition of acquired information
Structural information (static)
Human memory/skills/personality, program user configurations, AI model weights, as well as tree rings and morphology.
Changes across the life course, but relatively slow.
Long-term memory.
Stateful information (dynamic)
Human momentary consciousness and short-term memory, runtime data of processes, AI dialogue state.
Example: the information lost when a person is knocked out or anesthetized.
Real-time, fast-changing.
Short-term memory.
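A minimal data-structure sketch of this decomposition; treating the components as additive quantities measured in bits is my own simplifying assumption for illustration, and the magnitudes are made up (roughly matching the scales listed earlier):

```python
from dataclasses import dataclass

@dataclass
class AcquiredInformationValue:
    """AIV: learned/postnatal information."""
    structural_bits: float  # static: memory, skills, personality, model weights
    stateful_bits: float    # dynamic: momentary consciousness, runtime state

    @property
    def total_bits(self) -> float:
        return self.structural_bits + self.stateful_bits

@dataclass
class InformationValueProfile:
    """IV of a bio-like entity, split into genetic (GIV) and acquired (AIV) parts."""
    giv_bits: float                # innate genetic/program information
    aiv: AcquiredInformationValue  # acquired information

    @property
    def total_bits(self) -> float:
        return self.giv_bits + self.aiv.total_bits

# Example: an adult human, with made-up magnitudes only.
person = InformationValueProfile(
    giv_bits=8e9,  # on the order of a GB-scale genome
    aiv=AcquiredInformationValue(
        structural_bits=8e11,  # on the order of the brain-structure estimate above
        stateful_bits=8e7,     # on the order of the brain-state estimate above
    ),
)
print(f"GIV: {person.giv_bits:.1e} bits, AIV: {person.aiv.total_bits:.1e} bits")
```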
Application Examples
Information value of any information entity
All else being equal, the higher the complexity of the entity (greater information amount), the higher its information value.
All else being equal, the higher the uniqueness of the entity (greater unique information amount), the higher its information value.
Information value of biological species
Generally, the greater the genomic complexity of a species (higher genetic information amount), the higher its information value.
If a species has high evolutionary uniqueness (no close relatives or similar species), its information value increases.
Information value of biological individuals
The higher the neural complexity of an individual (greater acquired information amount), the higher its information value.
The higher the uniqueness of an individual (greater unique information amount), the higher its information value.
Individuals with free will or nonconformist behavior have more unique information.
Human memory and personality are the most central components of life’s value (consider how much of that value is destroyed by permanent amnesia).
A person’s momentary consciousness does have information value, but its importance is far less than that of memory and personality.
The time-dimension boundaries of life
When does human life begin?
Development of the brain as the boundary of life’s emergence: the fetus begins to interact with the environment and store information (AIV starts to appear).
Birth as the boundary of rapid growth: the infant’s interaction with the world and information storage significantly exceed the fetal stage (AIV begins to accelerate).
After birth, AIV grows continuously, without further abrupt boundaries.
Fertilization as the boundary of life’s expectation: although the fertilized egg has no AIV, it carries strong future potential.
When does human life end?
Brain death as the boundary of life’s end: it is the irreversible loss of a person’s acquired information (AIV).
If the body is frozen promptly after clinical death, the person’s information value (IV) is preserved, delaying “information death.”
The evaluation of information value loss in cryonics depends on the feasibility of future revival:
If revival is feasible: cryonics preserves the person’s IV, and clinical death is not the end of life.
If revival is not feasible: cryonics is indistinguishable from death.
The species-dimension boundary of life
What counts as being alive?
Humans and primates clearly count as alive; lower organisms are also alive, but possess fewer “life attributes.”
When digital entities act like real humans (in both complexity and uniqueness), they can be considered to have life and information value comparable to humans.
At the current stage (2025), large-model digital entities have an AIV somewhere between that of humans and that of lower organisms, and they are still evolving.
The “pseudo-consciousness” problem of AI: AI is regarded as having only pseudo-consciousness because its AIV remains far below the human level at the current stage (its deeper capabilities are still far below human level).
The space-dimension boundary of life
A person’s external information can be treated as part of their life.
The essence of human life is the unique information it carries. This unique information is not confined to the brain but also exists in the external world—notes, phones, online accounts. Together, they constitute a complete “informational self”.
There is no essential difference between recalling information from one’s hippocampus and retrieving it from a notebook one wrote oneself.
When a person’s personality, knowledge, and ideas spread into friends or even society, the boundary of their “life” extends further outward and becomes increasingly blurred.
The real goal of information life
“Information survival” may be a more fundamental goal of life than “conscious survival.”
Novelty slows down the subjective passage of time; repetition does the opposite.
An obedient child who never engages in independent thought is, in some sense, not fully living for themselves.
Mutual information and empathy/altruism
Two lives that share part of their information also have partially overlapping goals.
Taking mutual information into consideration, empathy and altruism can be seen as strategies aligned with the goal of information survival.
The growth/developmental goal of information
Doing fresh things is central to the meaning of life.
Having experienced something is similar to owning an object.
Informational beings have a basic need to ingest new information, just as they need to ingest food or power.
Creative activities are beneficial because they grow information.
Final words
This post outlines the overall idea behind some more formal articles I am currently drafting. The topics I have in mind include:
A formalized theoretical framework for the ideas in this post.
AI-based methods for evaluating mutual information and unique information amount.
Information as the foundation of value — discussing the “ontology of information” perspective
I believe the LessWrong community is a perfect place to discuss this kind of work, so I’ve extracted the core ideas and published them here. I would greatly appreciate any feedback. Thank you!