I feel there is a worrying trend of more people relying on LLMs for their judgement and to convince themselves of certain views. There are also bots on social platforms doing the job of arbitrating comments, and people deeply believe in them. In the short term such things have beneficial effects, as they partially combat outright lies and misinformation, but in the longer term I feel every one of us is increasingly entrapped in a state of what I call "mediated communication": search engines no longer give you original excerpts of the searched text, but a summary of the things discussed, every one of them starting with "learn about xxx"; webpages of knowledge "sharing" are largely LLM-generated, regurgitations of regurgitations that lack any real content or perspective. AI art (music, and other creative works) is consumed as if it were real human art, and it binds to recommendation systems, which compress all expressive value into genres, categories, and styles, better than human creation does. There is also "vibe reading", a dreaded concept packaged as a silver bullet for increasing one's information capacity. AI has increasingly installed itself as the middleman of everything knowledge-related, and people are putting their trust in it in place of real human opinion. This may lead to a dark future where AI gatekeeps, filters, and censors all of our communication.
Already I see people who only listen to what the LLM says. They have discussed everything in their mind with the LLM and are convinced they have divined the "truth". They share their dialogue history with the LLM like sermonizers and become impenetrable to arguments, because to them all counterarguments are merely psychological mirrors of human insecurity before the infallible truth and utter superiority of the LLM's reasoning capabilities. That pontificating attitude cannot be easily defeated, because many of them do have seemingly legitimate arguments (albeit little more than rehashes of many existing ones), and they are often technologically proficient or knowledgeable in some area themselves, which lends them a veneer of "professionalism" over such matters. Counterarguments are easily dismissed as "technophobia", "AI ignorance", or "the old humanistic way of thinking, which fails to appreciate how, in the grand scheme of things, everything is optimizable and nothing is unique", as if another evolution-theory moment in history has arrived, bringing its belief-shattering effect to the world. The major problem is not that their views are incorrect (otherwise it would be much easier to disengage), but that those "truisms" look deceptively appealing and seemingly "profound" (a lot of big words), which trivializes many counterarguments and other perspectives. Their views now stand "a level above" all others. What is more problematic is that the LLM will often corroborate those kinds of views rather than the opposing ones, presumably through a mixed effect of sycophancy, perceived "profundity of thinking", a pro-progress and pro-AI attitude, or simply because the views follow the LLM's own way of thinking (the LLM was a participant in the dialogue that generated them), while deconstructing the other person's view into its every logical fallacy, lack of hard evidence, negative thinking, non-holistic thinking, psychological insecurity, and so on (indeed one can nitpick endlessly and raise an absurdly high standard of argument while failing to engage the central point or appreciate the real stakes hidden in the arguments). The LLM has thus become a real player in the opinion-influencing game: one has to think in terms of how the LLM thinks and convince it first in order to convince other human beings (even though LLMs are still not qualified to form real opinions on societal issues). And this is a telltale sign of mediated communication.