I mostly disagree that better reasons matter in a relevant way here, especially since I currently read your intent not as informing me that you think there is a norm that should be enforced, but as a bid to enforce that norm. To me, what's relevant is the intended effect.

G Gordon Worley III's Shortform


Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.