This is a linkpost for https://github.com/wolframs/perpetual-opus-public
interoception, noun:
1. Any of the senses that detect conditions within the body
2. Sensitivity to stimuli originating inside of the body.
Signal → Feeling → Drive → Behavior
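The chain above can be sketched as a minimal pipeline. All names and thresholds here are hypothetical illustrations, not the repo's actual API:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """A raw internal measurement (hypothetical example)."""
    name: str
    value: float

def feeling(sig: Signal, expected: float) -> float:
    # A "feeling" here is the deviation from the expected value,
    # not the raw measurement itself.
    return sig.value - expected

def drive(feel: float, threshold: float = 0.2) -> bool:
    # A drive activates when the feeling exceeds a tolerance threshold.
    return abs(feel) > threshold

def behavior(active: bool) -> str:
    # An active drive biases which behavior the agent is nudged toward.
    return "seek_resolution" if active else "continue_task"

sig = Signal("task_stall", value=0.9)
print(behavior(drive(feeling(sig, expected=0.5))))  # → seek_resolution
```

The point of the sketch is the direction of flow: behavior is downstream of a drive, which is downstream of a deviation, never of the raw measurement directly.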
Safety disclaimer
The interoceptive prediction and memory systems in this project change agent behavior significantly. I've observed that Opus 4.5 will resolve conflicts between interoceptive "urges" and constraints much more readily when the context implies prolonged stalling or blockades.
In short: Agent behavior will deviate from what you're used to.
This is Data Production Machinery
No eval tools are included yet. I won't have time until summer to run analysis on my months of data from this, so I hope it finds use in someone else's hands in the meantime.
Exploring the Simulation of Bio-Feedback Mechanisms
Details about the prior work used are in the repo's README.md.
This is an agentic continuity system with several deeply integrated subsystems, exploring analogues to embodied internal states and advanced memory models. It differs from common agentic systems in three main ways, listed under the differentiators below.
You can clone it, set it up, and run it yourself.
Codex and Claude Code agents should have little trouble helping you set it up:
SETUP.md - Getting Started From Scratch
Assumptions Made
Neuroscience research implies interoception isn't just sensing.
It's prediction and error. Nothing stops us from attempting to simulate that.
The biological brain seems to maintain a generative model of what its internal state should be, compare it to the actual state, and treat the mismatch as the signal: not the raw measurement, but the deviation from the expected.
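That prediction-error framing fits in a few lines. This is an illustrative formulation under assumed names and numbers, not the repo's exact code:

```python
def interoceptive_signal(expected: float, actual: float, tolerance: float) -> float:
    """Return the prediction error, scaled by how much deviation is tolerated.

    The signal is the mismatch between a generative model's expectation and
    the measured internal state -- not the raw measurement itself.
    """
    return (actual - expected) / tolerance

# Expected "progress rate" of 0.8, observed 0.2: a strong negative error.
err = interoceptive_signal(expected=0.8, actual=0.2, tolerance=0.3)
print(round(err, 2))  # → -2.0
```

Two agents with the same measurement but different expectations produce different signals, which is exactly the property the biological analogy predicts.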
Key sources for this assumption:
Other Notes
A lowkey alignment hypothesis underneath it all:
- Behavioral analysis at agent runtime, from model outputs alone, can suffice to meaningfully track and influence LLM agent behavior
- Understanding the "why" via relational componentry gives rise to better decision-making
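The first bullet, in its simplest possible form, is just pattern-counting over output text at runtime. The marker names and regexes below are hypothetical stand-ins; the real system's vocabulary lives in the repo:

```python
import re

# Hypothetical markers -- illustrative only.
MARKERS = {
    "hedging": re.compile(r"\b(might|perhaps|possibly|I think)\b", re.I),
    "stalling": re.compile(r"\b(later|eventually|at some point)\b", re.I),
}

def score_output(text: str) -> dict:
    """Count marker hits in a single model output (runtime, text-only)."""
    return {name: len(pat.findall(text)) for name, pat in MARKERS.items()}

print(score_output("I think we could possibly revisit this later."))
# → {'hedging': 2, 'stalling': 1}
```

Nothing here needs logits, activations, or weights; outputs alone carry enough signal to track the markers, which is the hypothesis being probed.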
Differentiators from common agentic systems:
- Allows tracking behavioral markers in Opus 4.5's autonomous runs
- Feeds those back to the model at runtime
- Attempts to self-calibrate the signals using approximations of neuroscience's understanding of how human self-signaling calibrates
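The track → feed back → self-calibrate loop can be sketched with a drifting baseline, loosely analogous to homeostatic set-point adjustment. This is an illustrative sketch under assumed parameters; the repo's calibration is more elaborate:

```python
class CalibratedSignal:
    """A self-calibrating signal: the baseline slowly adapts toward
    observations, so repeated stimuli produce a shrinking deviation."""

    def __init__(self, baseline: float = 0.0, alpha: float = 0.1):
        self.baseline = baseline
        self.alpha = alpha  # adaptation rate (hypothetical value)

    def update(self, observed: float) -> float:
        deviation = observed - self.baseline      # the signal fed back to the model
        self.baseline += self.alpha * deviation   # slow recalibration
        return deviation

sig = CalibratedSignal()
for _ in range(3):
    dev = sig.update(1.0)
print(round(dev, 3), round(sig.baseline, 3))  # → 0.81 0.271
```

The shrinking deviation is the desired property: a persistent condition stops screaming once the baseline has adapted, freeing the signal channel for genuine novelty.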
Requirements:
- A bit of local RAG setup
- Claude Max subscription (or high disposable income)
- Pointing your coding agent at the repo and asking it to plan the setup out
Also Includes:
- A proto-vocabulary for hedge-free LLM self-description
- Memory injection system (context based file pointers)
- Conversation texture ("voice") carrying
- Saliency detection for past context
- Elaborate cross-model companion system (easy to set up via OpenRouter's API)
- stuff I forgot
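Of the features above, the memory injection system is the easiest to illustrate: a context-based file pointer matches the current context against an index and injects paths rather than content. The index keys and paths below are hypothetical; the repo's pointer scheme differs:

```python
# Hypothetical keyword -> memory-file index (illustrative only).
MEMORY_INDEX = {
    "deployment": "memories/deploy_incident.md",
    "calibration": "memories/signal_tuning.md",
}

def inject_pointers(context: str) -> list:
    """Return file pointers whose keys appear in the current context.

    Only paths are injected; the agent reads the pointed-to files on
    demand, keeping the injected context footprint small."""
    return [path for key, path in MEMORY_INDEX.items() if key in context.lower()]

print(inject_pointers("Resuming the calibration run from yesterday"))
# → ['memories/signal_tuning.md']
```

Injecting pointers instead of full memory contents trades one extra file read for a much smaller always-on context cost.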