eClaire_de_lune
eClaire_de_lune has not written any posts yet.
Does this kind of AI risk depend on AI systems’ being “conscious”?
It doesn’t; in fact, I’ve said nothing about consciousness anywhere in this piece. I’ve used a very particular conception of an “aim” (discussed above) that I think could easily apply to an AI system that is not human-like at all and has no conscious experience.
Today’s game-playing AIs can make plans, accomplish goals, and even systematically mislead humans (e.g., in poker). Consciousness isn’t needed to do any of those things, or to radically reshape the world.
Imho, consciousness plus empathy/compassion is a pretty big factor in circumventing existential risk from AI. If AI is able to make its own... (read more)
It may not align AI, but I do think it would prevent certain unlikely existential risks.
If AI/AGI/ASI is truly intelligent, and not just knowledgeable, we should definitely empathize and be compassionate with it. If it ends up being non-sentient, so be it; I guess we made a perfect tool. If it ends up being sentient and we've been abusing a super-intelligent being, then good luck to future humanity; this is more a matter for super-alignment. Realistically, the main issue is that the average human... (read more)