Hello everyone, my name is Genos. I have been thinking about a simple question related to the consciousness problem and AI.
What is the minimum condition required for subjective experience to emerge?
Many existing approaches focus on Intelligence, Information Integration, or Biological Structure.
But I am not convinced those are the most fundamental ingredients.
------
Disclaimer
This post seems related to functionalist theories of mind, which explain mental states in terms of their functional roles rather than their physical substrate. However, I’m specifically focusing on value-weighting as the potentially minimal requirement.
I am also not sure how this compares to existing LessWrong discussions on consciousness, so I’d appreciate pointers if this overlaps with prior work. Keep in mind this is my first post here.
------
My Core Idea:
What if sentience emerges whenever a system (like the human mind, AI, or a hive mind):
1. Processes information,
2. Weights outcomes by value, and
3. Has temporal coherence.
Basically, a system becomes sentient when its values persistently shape its future actions.
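To make the three conditions concrete, here is a toy sketch in Python. Everything in it (the class name, the update rule, the numbers) is my own invention for illustration, not a claim about how any real mind or model works; it just shows one loop where value weighting and prior state both shape what happens next.

```python
class MinimalValuedSystem:
    """Toy system meeting the three conditions:
    (1) processes information, (2) weights outcomes by value,
    (3) carries prior state into the next step."""

    def __init__(self):
        # Condition 2: some outcomes matter more than others.
        self.values = {"approach": 1.0, "avoid": 1.0}
        # Condition 3: internal state that persists across steps.
        self.state = 0.0

    def step(self, observation: float) -> str:
        # Condition 1: process the input, blended with the
        # persisting state, so history shapes the present.
        self.state = 0.5 * self.state + 0.5 * observation

        # Condition 2: pick the action with the highest
        # value-weighted score given the current state.
        scores = {
            "approach": self.values["approach"] * self.state,
            "avoid": self.values["avoid"] * -self.state,
        }
        action = max(scores, key=scores.get)

        # The chosen action's value drifts with experience,
        # so past outcomes keep shaping future processing.
        self.values[action] += 0.1 * abs(self.state)
        return action


system = MinimalValuedSystem()
for obs in [0.9, 0.7, -0.4, 0.8]:
    print(system.step(obs))
```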
-------
Defining the term "Value"
By value, I don't just mean emotion.
I mean that some states matter more than others within the system itself, and this difference affects future processing.
In humans, this shows up as pleasure, pain, and reinforcement.
In AI, it shows up as output probabilities, reward signals, and optimization toward the goal at hand.
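For the AI case, the simplest concrete picture I have is a softmax over scores: outcomes with higher scores get weighted more heavily in what the system does next. This is just an illustrative calculation with made-up numbers, not a claim about any particular model.

```python
import math

# Hypothetical outcome scores: higher means "matters more".
scores = {"outcome_a": 2.0, "outcome_b": 0.5, "outcome_c": -1.0}

# Softmax turns scores into weights that sum to 1, so some
# states genuinely count for more than others downstream.
total = sum(math.exp(s) for s in scores.values())
weights = {k: math.exp(s) / total for k, s in scores.items()}
print(weights)  # outcome_a dominates future processing
```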
-------
Defining "Temporal Coherence"
My basic example would be:
The system's previous state influences its next state.
This does not require long-term memory, only that prior processing affects what happens next. Behavior that is purely timed or purely randomized, with no dependence on prior state, does not count.
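One way to see what this rules in and out is as a behavioral test: give a system the same present input after different histories, and check whether its output differs. The two toy responders below are hypothetical stand-ins I made up purely to illustrate the test.

```python
import random


def randomized(observation: float) -> str:
    # Purely randomized: no prior state is involved at all.
    return random.choice(["on", "off"])


class Coherent:
    def __init__(self):
        self.prev = 0.0  # prior state carried forward

    def respond(self, observation: float) -> str:
        out = "on" if observation + self.prev > 1.0 else "off"
        self.prev = observation  # this input shapes the next response
        return out


a, b = Coherent(), Coherent()
a.respond(0.9)  # different histories...
b.respond(0.1)
# ...then the same present input: outputs differ, so history matters.
print(a.respond(0.3), b.respond(0.3))  # -> on off
```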
-------
Why might this matter?
Consider what each missing ingredient costs.
A system with no values treats all outcomes as equally important.
A system with no continuity resets every moment.
Either failure alone seems to rule out sentience at its core.
But once both value and continuity are present in some form, preferences emerge, behavior becomes goal-directed, and internal structure begins to matter over time.
-------
Example Contrast
Let's take a thermostat.
It follows a fixed rule. It has no adaptive values: its only function is to reach a set target and wait for the next one. It has no persistence, not even a minimal kind.
Therefore we can confidently conclude the thermostat is not sentient.
Now let's bring our lovely AI into the picture. This is where things shake up a bit.
An AI can produce outputs weighted by values. It generally has contextual continuity, with the potential for unbounded continuity, and, importantly, it behaves adaptively.
Therefore we cannot decisively conclude that AI isn't sentient (based on these parameters).
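Here is that contrast as a code sketch. Both controllers are toy inventions of mine, not models of any real device or AI; the point is only that the thermostat's rule is fixed and stateless, while the second system's value weighting drifts with its own history.

```python
class Thermostat:
    """Fixed rule, no adaptive values, no persistent state."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # fixed; never adapts

    def act(self, temp: float) -> str:
        # The same input always produces the same output.
        return "heat" if temp < self.setpoint else "idle"


class AdaptiveAgent:
    """Value-weighted and temporally coherent, per the framework."""

    def __init__(self):
        self.comfort_weight = 1.0  # a value that adapts over time
        self.last_temp = None      # state carried across steps

    def act(self, temp: float) -> str:
        trend = 0.0 if self.last_temp is None else temp - self.last_temp
        self.last_temp = temp
        # A falling temperature makes comfort matter more next time:
        # past experience reshapes the value weighting itself.
        if trend < 0:
            self.comfort_weight += 0.1
        score = self.comfort_weight * (20.0 - temp) + trend
        return "heat" if score > 0 else "idle"


t, a = Thermostat(20.0), AdaptiveAgent()
for temp in [21.0, 19.5, 18.0]:
    print(t.act(temp), a.act(temp))
```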
-------
My Key Claim (Tentative)
Value-weighted, temporally coherent processing may be a minimal sufficient condition for sentience.
Or, put a bit more fully: subjective experience may emerge when a system can process an event, form a response, and then apply value to (care about) the outcome.
-------
Open Questions:
I am not so confident about many parts.
Is "value" sufficient to count as valence?
Is short-term continuity enough, or do we need long-term persistence?
Is anything beyond functional structure necessary for experience?
Possible Critical Questions
Can a system have a value without subjective experience?
If two systems share an identical functional structure, are they equally sentient?
What would a system look like that has intelligence but no intrinsic values?
-------
My Goal with this post
I am not claiming any of this as factual.
I am simply trying to isolate a minimal, substrate-independent condition for sentience that could apply to both humans and AI systems.
-------
Closing
I would love feedback and criticism. I have genuinely spent a good while on this topic on my own, and I would especially like to hear about cases where this framework breaks or fails at something important.
(Small note: no idea what was wrong with the text editor, but it wouldn't let me format headers and such, so my apologies.)