Interesting. I did something not dissimilar recently (https://medium.com/@thorn-thee-valiant/can-i-teach-an-llm-to-do-psychotherapy-with-a-json-and-a-txt-file-db443fa08e47), but from a more clinical perspective. Thinking about it, there are probably two different risks here. One is the "AI psychosis" concept, which I'm still a bit sceptical of - the idea that LLM interactions cause or exacerbate psychosis - and this is where the concerns about collusion seem to fit in. The other is the simpler risk question: does the LLM adopt the user's frame to e...
I know this is quite an old post - prescient, let's call it - but it had a lot of resonance for me... I've been worrying about AI ethics on a much more micro scale: how do we at least consider the risk of doing harm while experimenting with AI systems (some thoughts here if you're interested: https://medium.com/@thorn-thee-valiant/the-persona-engine-a-registered-philosophical-simulation-study-94b09fa32f98)?
Your concern about suffering echoes that of Thomas Metzinger, and the nod towards animal rights and slavery seems relevant. I had always thought that ...
Delusions don't have to be false - they just have to be based on inadequate grounds and held firmly/unshakeably.