"I’ll assume 10,000 people believe chatbots are God based on the first article I shared" basically assumes the conclusion that it's unimportant. Perhaps instead all 2.25 million delusion-prone LLM users are having their delusions validated and exacerbated by LLMs? After all, their delusions are presumably pretty important to their lives, so there's a high chance they talked to an LLM about them at some point, and perhaps after that they all keep talking.
I mean, I also expect it's actually very few people (at least so far), but we don't really know.
A delusional person who got LLM'd potentially has it much worse than usual, because any attempts at an intervention (short of actual forcible hospitalization) would lead to that person going to the LLM to get its take on it, with the LLM then skillfully convincing them not to listen to their friends'/family members' advice/pleas/urges.
Arguably it's not about how many delusional people LLMs eat, but about the fact that LLMs choose to eat delusional people at all, which is a pretty clear sign they're not at all "aligned".
Yes, it's too early to tell what the net effect will be. I am following the digital health/therapist product space, and there are a lot of chatbots focused on CBT-style interventions. Preliminary indications are that they are well received. I think a fair perspective on the current situation is to compare GenAI to previous AI. The Facebook-style algorithms have done pretty massive mental harm; GenAI LLMs at present are not close to that impact.
In the future it depends a lot on how companies react - if mass LLM delusion is a thing, then I expect LLMs can be trained to detect and stop it, if the will is there. Especially a different flavor of LLM, perhaps. It's clear to me that the majority of social media harm could have been prevented in a different competitive environment.
In the future, I am more worried about LLMs being deliberately used to oppress people; North Korea could be internally invincible if everyone wore an ankle-bracelet LLM listener, etc. We also have yet to see what AI companions will do - that has the potential to cause massive disruption too, and you can't put in a simple check to tell when it has failed.
I am not so sure that calling LLMs "not at all aligned" because of this issue is fair. If they are not capable enough, then they won't be able to prevent such harm and will appear misaligned. If they are capable of detecting such harm and stopping it, but companies don't bother to put in automatic checks, then yes, they are misaligned.
How many people have mental health issues that cause them to develop religious delusions of grandeur? We don’t have much to go on here, so let’s do a very very rough guess with very flimsy data. This study says “approximately 25%-39% of patients with schizophrenia and 15%-22% of those with mania / bipolar have religious delusions.” 40 million people have bipolar disorder and 24 million have schizophrenia, so anywhere from 12-18 million people are especially susceptible to religious delusions. There are probably other disorders that cause religious delusions I’m missing, so I’ll stick to 18 million people. 8 billion people divided by 18 million equals 444, so 1 in every 444 people are highly prone to religious delusions. [...]
If one billion people are using chatbots weekly, and 1 in every 444 of them are prone to religious delusions, 2.25 million people prone to religious delusions are also using chatbots weekly. That’s about the same population as Paris.
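For concreteness, here's the same back-of-the-envelope math as a small Python sketch. Every input is just the rough figure already quoted above (the study's percentages, the prevalence counts, one billion weekly users), not new data:

```python
# Back-of-the-envelope estimate using the rough figures quoted above.
schizophrenia = 24e6   # people with schizophrenia worldwide
bipolar = 40e6         # people with bipolar disorder worldwide

# Reported shares of each group with religious delusions
low  = 0.25 * schizophrenia + 0.15 * bipolar   # ~12 million
high = 0.39 * schizophrenia + 0.22 * bipolar   # ~18.2 million

world_population = 8e9
prone = 18e6                          # rounding to "18 million", as above
one_in = world_population / prone     # ~444, i.e. 1 in 444 people

weekly_chatbot_users = 1e9
prone_weekly_users = weekly_chatbot_users / one_in   # ~2.25 million

print(f"susceptible: {low/1e6:.0f}-{high/1e6:.0f} million people")
print(f"1 in {one_in:.0f} people; ~{prone_weekly_users/1e6:.2f} million prone weekly chatbot users")
```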
I’ll assume 10,000 people believe chatbots are God based on the first article I shared. Obviously no one actually has good numbers on this, but this is what’s been reported on as a problem. Let’s visualize these two numbers:
It’s helpful to compare these numbers to the total number of people using chatbots weekly:
Of the people who use chatbots weekly, 1 in every 100,000 develops the belief that the chatbot is God. 1 in every 444 weekly users were already especially prone to religious delusions. These numbers just don’t seem surprising or worth writing articles about. When a technology is used weekly by 1 in 8 people on Earth, millions of its users will have bad mental health, and for thousands that will manifest in the ways they use it. This shouldn’t surprise us or lead us to assume the tech itself is causing their bad mental health. [...]
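The same two rates, side by side, under the same assumptions (10,000 believers is the guessed figure from above, one billion weekly users):

```python
believers = 10_000      # assumed count of "the chatbot is God" believers
weekly_users = 1e9      # weekly chatbot users

believer_rate = believers / weekly_users   # 1e-05, i.e. 1 in 100,000 weekly users
base_rate = 1 / 444                        # weekly users already prone to religious delusions

print(f"1 in {weekly_users / believers:,.0f} weekly users develops the belief")
print(f"1 in {1 / base_rate:.0f} weekly users was already especially prone to religious delusions")
```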
Monster Energy Drinks have sold around 22,000,000,000 cans. Some of the people who interact with them develop delusions. Here’s a video of a woman who developed a religious delusion about Monster Energy [...]
I think chatbots are so new that many people just haven’t added them to their mental category of “services with a billion users” like YouTube and Instagram yet. Chatbots took off much faster than YouTube and Instagram. Because they’re so new, people assume they must not be that big yet. Chatbots are actually the fastest growing applications ever. When you hear “ChatGPT users” you should have the same order of magnitude in your head as TikTok and Instagram and YouTube users.