Counterarguments:
Yes, it's too early to tell what the net effect will be. I am following the digital health/therapist product space, and there are a lot of chatbots focused on CBT-style interventions. Preliminary indications are that they are well received. I think a fair perspective on the current situation is to compare GenAI to previous AI: the Facebook-style recommendation algorithms have done pretty massive mental harm, and GenAI LLMs at present are nowhere near that level of impact.
In the future, it depends a lot on how companies react - if mass LLM-induced delusion turns out to be a real phenomenon, then I expect LLMs can be trained to detect and stop it, if the will is there. Perhaps a different flavor of LLM entirely. It's clear to me that the majority of social media harm could have been prevented in a different competitive environment.
Further out, I am more worried about LLMs being deliberately used to oppress people - North Korea could become internally invincible if everyone wore ankle-bracelet LLM listeners, etc. We also have yet to see what AI companions will do. That has the potential to cause massive disruption too, and there is no simple check you can put in place to flag when it has gone wrong.
I am not so sure it's fair to call LLMs "not at all aligned" because of this issue. If they are not capable enough, then they won't be able to prevent such harm and will merely appear misaligned. If they are capable of detecting such harm and stopping it, but companies don't bother to put in automatic checks, then yes, they are misaligned.