I’ve been working independently on a framework for rethinking ethics, focused on psychology and how people relate to their own traits. I call this framework Potentialism.
More recently, I realized that the same framework might also be useful for AI ethics – not as a list of rules, but as a way of thinking about “potentials” that can be regulated in context, similar to how human instincts are shaped and constrained.
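For concreteness, here is a minimal, purely hypothetical sketch (not taken from the paper) of what a context-regulated potential might look like in code: a capacity is expressed to a degree set by a regulator that reads the situation, rather than being permitted or forbidden outright by a fixed rule. The Potential class, the regulator signature, and the "assertiveness" example are all my own illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a "potential" is a capacity whose expression level is
# scaled by a context-dependent regulator, instead of being allowed or
# banned by a fixed rule. None of these names come from the paper.

@dataclass
class Potential:
    name: str
    # The regulator maps a context description to an expression level in [0, 1];
    # the regulation logic here is a placeholder, not a real policy.
    regulator: Callable[[dict], float]

    def expression(self, context: dict) -> float:
        # Clamp to [0, 1] so a badly behaved regulator cannot over-express.
        return max(0.0, min(1.0, self.regulator(context)))

# Example: "assertiveness" is dialed down in a high-vulnerability context
# rather than being classified as globally good or bad.
assertiveness = Potential(
    name="assertiveness",
    regulator=lambda ctx: 0.2 if ctx.get("counterpart_vulnerable") else 0.8,
)

print(assertiveness.expression({"counterpart_vulnerable": True}))   # 0.2
print(assertiveness.expression({"counterpart_vulnerable": False}))  # 0.8
```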
To stress-test the core pillars, I used frontier models (ChatGPT and Gemini) adversarially. This process helped expose edge cases, but the architectural logic, the shift from fixed virtues to regulated potentials, remains my own proposal.
I’d appreciate critical feedback on whether this seems like a structurally viable approach to AI safety and ethics.
Link: https://zenodo.org/records/17680796