AI alignment: Would a lazy self-preservation instinct be sufficient? — LessWrong