This post tried making some quick estimates:
One billion people use chatbots on a weekly basis. That’s 1 in every 8 people on Earth.
How many people have mental health issues that cause them to develop religious delusions of grandeur? We don’t have much to go on here, so let’s do a very very rough guess with very flimsy data. This study says “approximately 25%-39% of patients with schizophrenia and 15%-22% of those with mania / bipolar have religious delusions.” 40 million people have bipolar disorder and 24 million have schizophrenia, so anywhere from 12-18 million people are especially susceptible to religious delusions. There are probably other disorders that cause religious delusions I’m missing, so I’ll stick to 18 million people. 8 billion people divided by 18 million equals 444, so 1 in every 444 people are highly prone to religious delusions. [...]
If one billion people are using chatbots weekly, and 1 in every 444 of them are prone to religious delusions, 2.25 million people prone to religious delusions are also using chatbots weekly. That’s about the same population as Paris.
I’ll assume 10,000 people believe chatbots are God based on the first article I shared. Obviously no one actually has good numbers on this, but this is what’s been reported on as a problem. [...]
Of the people who use chatbots weekly, 1 in every 100,000 develops the belief that the chatbot is God. 1 in every 444 weekly users were already especially prone to religious delusions. These numbers just don’t seem surprising or worth writing articles about. When a technology is used weekly by 1 in 8 people on Earth, millions of its users will have bad mental health, and for thousands that will manifest in the ways they use it.
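(The arithmetic in the quote checks out, for what it's worth. Here's a minimal Python sketch of the same back-of-the-envelope calculation, using only the figures the quoted post assumes rather than anything independently verified:)

```python
# Back-of-the-envelope reproduction of the quoted estimates.
# Every number below is the quoted post's rough assumption, not a verified figure.

world_population = 8e9
weekly_chatbot_users = 1e9

# The post takes the high end of its 12-18 million range for people
# especially prone to religious delusions:
prone_to_religious_delusions = 18e6

base_rate = prone_to_religious_delusions / world_population  # ~1 in 444
prone_weekly_users = weekly_chatbot_users * base_rate        # ~2.25 million

believers = 10_000  # the post's guess for "believe the chatbot is God"

print(f"1 in {world_population / prone_to_religious_delusions:.0f} people prone to religious delusions")
print(f"{prone_weekly_users / 1e6:.2f} million such people among weekly users")
print(f"1 in {weekly_chatbot_users / believers:,.0f} weekly users believe the chatbot is God")
```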
That sounds right; I think I've heard from some people who had those kinds of experiences. And apparently there was some bug at one point where memory features would get applied even if you turned them off? Or so some anecdotes I heard claimed. That must've been pretty destabilizing for someone already trying to deal with psychosis. :/
(I have memory features mostly turned off in ChatGPT and predominantly use Claude anyway.)
About the bit where developers thought they were more productive but were actually less so: I've heard people say things like "overall, using AI tools didn't save me any time, but doing it this way cost me less mental energy than doing it all by myself". I've also sometimes felt similarly. I wonder if people might be using something like "how good do I feel at the end of the day" as a proxy for "how productive was I today".
Yeah, it's gotten aggressive; sometimes it feels like a relief to turn it off and not have to look at yellow lines everywhere.
Yeah, if you literally only want a spell check, then the one that's built into your browser should be fine. Some people seem to use "spell check" in a broader sense that also includes things like "grammar check", though.
Rather, we already have [weak] evidence that ChatGPT seemingly tries to induce psychosis under some specific conditions.
We have seen that there are conditions where it acts in ways that induce psychosis. But it intentionally trying to induce psychosis seems unlikely to me, especially since things like "it tries to match the user's vibe and say things the user might want to hear, and sometimes the user wants to hear things that end up inducing psychosis" and "it tries to roleplay a persona that's underdefined and sometimes goes into strange places" already seem like a sufficient explanation.
I feel like the summary in the introduction is somewhat at odds with the content? You say
Unfortunately the evidence is very strongly on the side of “dangerous”. Retrospective studies of long term users show cognitive deficits not found in other drug users, while animal studies show brain damage and inconsistent cognitive deficits.
But then you also say that
I do agree that this is reason to be concerned and that you might want to avoid MDMA because of this, but this sounds to me like "suggestive but inconsistent and often low-quality evidence" rather than "very strong evidence".
If one needs a spell or grammar check, a tool like Grammarly is a safer bet. They've lately started incorporating more LLM features and seem to be heavily advertising "AI" on their front page, but at least so far I've been able to just ignore those features.
The core functionality is just a straightforward spell and style check that will do stuff like pointing out redundant words and awkward sentence structures, without imposing too much of its own style. (Though of course any editing help always changes the style a bit, its changes don't jump out the way LLM changes do.)
It also helps to be on the free version where you are only shown a limited number of "premium suggestions" that seem to change your style more.
Ah okay, that makes more sense to me. I assumed that you would be talking about AIs similar to current-day systems since you said that you'd updated from the behavior of current-day systems.
I'm guessing it's easiest to get them to say that Islam is true if you genuinely believe in Islam yourself or can put yourself in the mindset of someone who does. I'd also expect it to be possible to get them to endorse its truth otherwise, but I'm not knowledgeable enough about Islam to think that I could personally pull it off without a significant amount of effort and research.