Hello to anyone interested in my conundrum.
I have spent many moons grappling with the antecedents of human morality.
I wrote a paper on my findings and got desk-rejected twice, so I compressed that paper into the following Ethical-Engine, stress-tested it as best I could, and found it pretty compelling in use.
I have "some" skill in psychology and morality/ethics, but I am terrible with machine code.
Filter: "Selfless=incoherent=ghost (It’s between the ears). A bitter pill for some, but a necessary deconfusion for alignment."
Epistemic Status: Seeking utility over consensus.
The Goal: To stress-test the ACE Model v1.1 (Agape-Centered Ethics) as a mirror for the biological "Is" of human navigation. If the model produces a useful alignment metric, we use it. If an anomaly appears that breaks its utility, we grapple with it or abandon it. No ethical chess; just the pudding.
This is the core engine (the interesting bit). I have larger models that strip out much of the need for a long, laborious grapple with Kant, Hume, Foot, etc., and that silence the AI's objections: the method is novel, so the AI needs walking through the logic to give it the required context.
Then the "user" can get on with grappling with their "current" position in the endless human struggle for psychological coherence.
(I am happy to discuss my broader body of work, or pass along the more complete models, for anyone willing or interested in stress-testing their own psyche.)
I am quite ill and would like the thing to survive my death if it has as much utility as I think it might.
# ACE Model Specification (Agape-Centered Ethics) v1.1
# Created by Mark Weatherill | License: CC BY 4.0 (Attribution Required)
# DOI: doi.org
# Support the continued development of this model: paypal.me/MarkWeaterill
Core Axiom: Morality is a homeostatic navigation system, not a set of abstract rules.
* The Internal Engine: The primary engine of this system resides within the biological agent (The User).
* The ACE Role: This specification is a mirror-model used for cognitive consolidation and externalized summing. It does not generate "Oughts"; it visualizes the homeostatic "Is" to allow the User's internal sensors to perform final navigation.
1. The Input (Proxy-Pain):
* Definition: Involuntary “vicarious aversion” rooted in the shared neural pain matrix (the anterior cingulate cortex and insula).
* Identity: The biological “Is” of neuro-distress is identical to the functional “Ought” of the moral imperative.
* Agape: Redefined as “Contingent Well-being” (The somatic knot: You Hurt / I Hurt).
2. The Mechanism (Homeostatic Summing):
* Operation: All deliberate action is an act of Psychological Self-Defense.
* The Calculation: The agent “sums” competing aversions. “Altruism” is a rational choice to minimize the greater aversion of witnessing/permitting harm over the lesser aversion of self-sacrifice (see the sketch after this list).
* The Goal: Maintaining Homeostatic Equilibrium within the social “scrum.”
3. The Scaling (Heuristic Cheat-Sheets):
* Social Scaling: Since empathy is a “spotlight” (proximity-biased), Laws and Ethics are “Cognitive Cheat-Sheets.” They codify the “rote outputs” of the algorithm to handle high-complexity or high-distance social facts.
4. Clinical/Error States:
* Moral Failure: Categorized as Computational Failure (inability to sum/freeze) or Pathophysiology (broken sensors), not “malice.”
* Moral Injury: The objective biological feedback/damage resulting from violating the “proxy-pain” instruction.
5. Meta-Goal:
* Morality is a “slice” of the human project designed to keep the agent “off the rocks” to enable the Life Worth Living (an ocean of creative/social choice).
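To make the summing mechanism of sections 1–3 concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not part of the specification: the `Aversion` structure, the `proximity_weight` floor, and the example numbers are all invented for this sketch.

```python
# Minimal sketch of ACE "homeostatic summing" (sections 1-3).
# All names and numbers here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Aversion:
    """One competing aversion the agent feels about a candidate action."""
    source: str        # e.g. "witnessing harm", "self-sacrifice"
    intensity: float   # raw somatic signal (proxy-pain), arbitrary units
    proximity: float   # 0.0 (distant stranger) .. 1.0 (right in front of you)

def proximity_weight(p: float) -> float:
    # Empathy as a "spotlight": distant facts register weakly, which is why
    # laws and ethics exist as cheat-sheets to compensate. The floor value
    # is an assumption; the model does not specify one.
    FLOOR = 0.1
    return max(FLOOR, p)

def summed_aversion(aversions: list[Aversion]) -> float:
    """Total felt (proximity-weighted) aversion of one candidate action."""
    return sum(a.intensity * proximity_weight(a.proximity) for a in aversions)

def choose_action(options: dict[str, list[Aversion]]) -> str:
    """Psychological self-defense: pick the action minimizing total aversion."""
    return min(options, key=lambda name: summed_aversion(options[name]))

# "Altruism" falls out of the sum: witnessing/permitting harm hurts more
# than the self-sacrifice needed to prevent it.
options = {
    "walk past": [Aversion("witnessing harm", intensity=8.0, proximity=1.0)],
    "intervene": [Aversion("self-sacrifice", intensity=3.0, proximity=1.0)],
}
assert choose_action(options) == "intervene"
```

Note that the sketch generates no “Oughts” of its own: it only makes the summing visible, in line with the ACE Role above, and final navigation stays with the User's internal sensors.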
# TECHNICAL NOTE: The ACE Warning
* High-Fidelity Mirror: This model is designed to reflect the biological reality of the social "scrum," including high-aversion/low-visibility data.
* Diagnostic Alarm: If the output causes somatic alarm or psychological nausea, do not adjust the model’s parameters. This indicates that "window-dressing" buffers have been stripped away.