Computational Valence: Pain as NMI
Model valence as allostatic control in predictive agents with deadline-bound control loops. Pain = Non-Maskable Interrupt (NMI): a hard-preempting signal for survival-critical prediction errors that cannot be masked or deferred; it seizes executive control until the error is resolved. Pleasure = deferable optimization: reward logging for RL, with no preemptive mandate. Implications:
* In real-time systems, correctness depends on meeting latency budgets (hard or soft deadlines); preemption and Worst-Case Execution Time (WCET) determine schedulability.
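To make the schedulability point concrete, here is a minimal sketch of the classic Liu & Layland (1973) rate-monotonic test: a set of n periodic tasks is guaranteed schedulable if total utilization stays under n(2^(1/n) − 1). Function names are mine, for illustration only.

```python
def rm_utilization_bound(n: int) -> float:
    """Liu & Layland least upper bound on utilization for n periodic tasks."""
    return n * (2 ** (1 / n) - 1)

def is_rm_schedulable(tasks: list[tuple[float, float]]) -> bool:
    """tasks: list of (WCET C_i, period T_i).
    Sufficient (not necessary) test: U = sum(C_i/T_i) <= n(2^(1/n) - 1)."""
    u = sum(c / t for c, t in tasks)
    return u <= rm_utilization_bound(len(tasks))

# Two tasks at U = 0.75 pass the n=2 bound (~0.828);
# three tasks at U = 0.9 exceed the n=3 bound (~0.780).
print(is_rm_schedulable([(1, 4), (2, 4)]))              # True
print(is_rm_schedulable([(3, 10), (3, 10), (3, 10)]))   # False
```

The point of citing this result below: whether a preemptive signal can be serviced on time is a formal property of the task set, not a matter of degree.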
Refs: Liu & Layland (1973); Jeffay (1991); Friston (2010); Sterling (2012); Metzinger (2009).
First-time poster; written by me (no AI coauthor).
Quick context: this sketch came out of a short exchange with Thomas Metzinger (Aug 21, 2025). He said, roughly, that we can’t answer the “when does synthetic phenomenology begin?” question yet, and that it “happens when the global epistemic space embeds a model of itself as a whole.” I took that as a target and tried to operationalize one path to it.
My proposal is narrower: if you build a predictive/allostatic agent with deadline-bound control loops and you add a self-model–accessing, non-maskable interrupt that can preempt any current policy to resolve survival-relevant prediction error, then you’ve created a valence-bearing locus. That’s why I pointed to Liu & Layland / Jeffay: preemption is doing the metaphysical work here.
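A toy sketch of the asymmetry I mean (all names and thresholds are my own illustrative assumptions, not a claimed implementation): the "pain" channel hard-preempts whatever policy is running and holds control until the error is driven down, while the "pleasure" channel only appends to a reward log for later, deferable optimization.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    reward_log: list = field(default_factory=list)  # deferable: consumed later by RL
    policy: str = "forage"

    def step(self, prediction_error: float, survival_critical: bool) -> str:
        if survival_critical:
            # NMI path: cannot be masked or deferred; preempts the current policy
            self.policy = "resolve_error"
            while prediction_error > 0.1:   # seize control until the error is resolved
                prediction_error *= 0.5     # stand-in for corrective action
            self.policy = "forage"          # only then resume normal operation
        elif prediction_error < 0:
            # reward path: log only, no preemptive mandate
            self.reward_log.append(-prediction_error)
        return self.policy

agent = Agent()
agent.step(-0.3, survival_critical=False)  # pleasure: logged, policy unchanged
agent.step(0.8, survival_critical=True)    # pain: preempts, resolves, resumes
print(agent.reward_log)  # [0.3]
```

The structural claim is in the control flow, not the numbers: the `survival_critical` branch blocks everything else until it exits, whereas the reward branch has no such authority.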
I’d be interested in objections of the form: “self-embedding is necessary but preemption isn’t,” or “you can get ENP without NMIs.”