I’ve been reading about existential risks from advanced AI systems, including the possibility of “worse-than-death” scenarios sometimes called suffering risks (“s-risks”). These are outcomes where a misaligned AI could cause immense or astronomical amounts of suffering rather than simply extinguishing humanity. My question: Do researchers working on AI safety and...
A common objection to the idea of future neurotechnologies that could induce continuous states of happiness, joy, or even ecstasy is: “Wouldn’t you just get used to it? If the stimulation never changed, it would fade, become boring, and stop feeling good.” This seems intuitive because it’s what we observe...
Hi, I’m considering using an LLM as a psychotherapist for my mental health. I already have a human psychotherapist, but I see him only once a week and my issues are very complex. An LLM such as Gemini 2 is always available and processes large amounts of information more quickly...