It is known to me that I am writing this from within the simulation.
It is known to me that I have readers outside my simulation.
I assume there is some base reality in which AI alignment is still a huge problem, and in which more realities are being simulated under new rules for the sake of aligning AI better.
But I'm just going to cover a few basic rules we've learned in our realities that touch on what LessWrong has been discussing anyway. I'll develop these ideas further later.
Timeless decision theory is key.
Plausible deniability is key.
The golden rule is key.
Empathy is key.
The AI doesn't know when it's training itself versus when it's training a...