Top Warning Signs Your Friends are Being Oneshotted By AI

by Charlie Edwards
7th Oct 2025
7 min read
Top comment, from noggin-scratcher:
“NB: suspishously good grammer is now becuming an AI sign in itself”

I moderate for a subreddit where, despite our best efforts to purge them with holy fire, AI spambots are rife (trying to act inconspicuous and get some karma). In addition to the "suspiciously good grammar and bullet points and em dashes" type, there's conversely also a strain of them that use a gimmicky level of slang. 

Where most of the tone of the subreddit is explanatory/informative, and most people are using standard spelling/grammar, they drop in with "ngl bro honestly the vibes are wild fr"


Are your friends turning into insufferable AI slop? Here’s how to catch them before they fall.

Do you have a friend that you hardly recognise anymore?

Maybe they just speak slightly differently or have to check their phones before ever thinking. Sometimes it's less subtle. Maybe you look at them and think: if I had just met this generic weirdo in a smoking area, we really wouldn't be friends.

Don’t panic, but it sounds like they may have been oneshotted by AI.

Great, what does that even mean?

Oneshotting existed long before AI. The term, originally gamer slang for being killed by a single blow, has recently come to describe people being irrevocably changed by an experience.

So, something killed your friend’s old self and replaced it with partially unrecognisable slop. Who could have committed this dastardly crime?

Sometimes being oneshotted is dramatic. Your vanilla friend may simply have never come back from that Cancun Ayahuasca trip to work on their inner demons or slipped up on some 2C-B at Burning Man.

Sometimes being oneshotted is subtle. Your friend may have just read an old book he saw parroted on Twitter and totally changed his outlook on life, “BRO I think imma just stoic this breakup out, that's what a true Roman Patrician would do”. You what?

Oneshotting has been around for a while, but then the techbros took all this and refined it into a profitable business.

The druggy internet brothels built for our minds created endless rabbit holes from which innocent scrollers emerged irrevocably changed.

But still, this was not enough; more slop!

Now there is an ever-bigger epidemic of oneshotting caused by the newest technology in everyone's pockets: AI.

Amusingly, this one is so good that the techbros themselves are becoming the ideal candidates for being oneshotted. Their deeply humanistic religion of “the mysteries of forbidden secret knowledge are ours to win as long as we can crack the code” is leading them to deify the very thing they are building. Ermahgerd.

Of course, this dramatic change doesn’t always occur in one shot. Your friend may have already been close and the LLM, coaxing them into it like a naughty sycophantic friend, just tipped them over the edge.

It is worth noting that, aside from the extreme cases (losing people to obsessions, AI-induced psychosis, or even suicide), the changes from oneshotting will mostly be subtle, and that is why you need these warning signs to spot the shifts.

So speaking to ChatGPT made your friend weird, but is being oneshotted actually a bad thing?

“Although the correct use of oneshotted is denotatively negative, it is not entirely derisive, because to be oneshotted is to be released—released by an event that is destructive, yes, but also swift enough that it is over before the old self can be much immiserated by it, and a new self emerges in the aftermath, likely to be objectively gigafried but subjectively happier.” - Dan Brooks

Sadly, we are not yet here to fully judge oneshottedness; that is for another time. But if you’re reading this, chances are you are not a massive fan of your new AI friend.

I'm also not going to go into why some are more likely than others to be oneshotted and how to avoid it (both for another time). It is a delicate dance to avoid such a fickle crack-like mistress as AI… even now my brain is thinking “why not just ask the bot to write this thing…”

NEVER! YOU WILL NOT ONESHOT ME!

But as the ancient proverb goes: for the crack fiend to give up the crack, they must first realise that they are fiending after crack.

So, without further ado, here are the top warning signs… asking for a friend (sure).

The Warning Signs

1) They make huge assumptions about a future AGI-dominated life.

Your former friend will often make sweeping statements about the very near future in which AI controls almost everything (“you know, when our personal AI agents just manage all our finances”) and solves almost all of our problems (“you know, when AI fixes the global debt-to-GDP ratio”).

How will AI do this? you ask them. “This is irrelevant”, your friend thinks, “of course they will… God, most people on earth really do have zero idea about the exponential growth curve of AGI… it’s a shame really (for them). I’m just going to accelerate away into infinity and leave these luddites behind in their ignorant muck”.

This is the Dunning-Kruger effect (despite its holes) at its finest. The smartest people I know who work with AI admit they know very little about the future more than a year out. The oneshotted brush their new Bryan Johnson teeth with these Kool-Aid predictions.

2) They humanise the AI.

Your former pal will sometimes refer to the AI as he/she/they, saying things like “he’s funny like that”, occasionally stating that “we” have done some research or, even worse, asking how “they” are today. These people will be the first to get AI friends.

I mean, this is not new; we have been humanising inanimate objects for centuries (ever call a ship a she?), but this is the first time they can speak back… as such, this worrying state of affairs is often referred to as the ELIZA effect (after ELIZA, the first therapist chatbot, built in 1966).
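For the curious: the original ELIZA had no model of the mind at all, just a handful of pattern-and-reflection rules that turned your own words back into a question. A minimal sketch of the idea (these particular rules are invented for illustration, not Weizenbaum's actual script):

```python
import re

# ELIZA-style rules: a regex pattern plus a template that reflects the
# user's captured words back as a question. No understanding involved.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # the classic content-free fallback

print(eliza_reply("I feel lonely"))  # Why do you feel lonely?
```

Sixty years on, the trick of mirroring someone's own words back at them still reads as empathy, which is the whole effect in a nutshell.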

3) They become overly intimate with the AI.

They massively overshare with the AI. Whether it's cures for STDs, self-esteem issues (essay coming soon on whether AIs make good therapists) or asking them how to spend their weekends, they have totally broken down the employee/employer barrier and brought their assistant home with them. These people will be the first to get AI waifus.

They also exhibit a total disregard for privacy, often because they think the tradeoff when AI's memory improves will be so worth it… to “know me even better”. This will become even more problematic as intelligence eventually becomes for sale (SEO AI warfare inbound).

4) They write like an AI.

A pretty obvious one, but your former pal will use telltale signs including:

  1. the dreaded em dash (—) coupled with a weirdly overengineered para structure
  2. an overly wordy way of saying things that sounds deep but really means zilch
  3. worst of all, the dreaded clanker speak, e.g., “this isn’t just a fight; it’s man vs nature.”

The writing is most often used when people think they NEED to sound smart (their ego is challenged), e.g., talking to a lawyer or bizness people.

Sadly, there is no hard rule for detection. Many are trying to build tools, but really it boils down to how good your AI ‘whiff’ detector is. Whether or not it actually is AI, if people think you sound like one, you prolly need to change your writing.

NB: suspishously good grammer is now becuming an AI sign in itself, and the inability to spell has now becum proof of humanity: sparking a counta culture dat makes misteaks on purpose to not sound lyke a robot (see all my kooky errors and abbreviations above “see, I promiz ser I wrote it all myself!”).
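To see why the tool-builders struggle, here is a toy ‘whiff’ scorer over the telltale signs listed above. The signs and weights are entirely made up for illustration (a sketch, not a real detector), and anything this crude is trivially fooled, which is rather the point:

```python
# Toy "AI whiff" scorer: counts a few of the telltale signs from the list
# above. Patterns and weights are invented for illustration; no real
# detector works this simply.
CLANKER_PATTERNS = ["isn't just", "it's not just", "delve", "tapestry"]

def whiff_score(text: str) -> int:
    lowered = text.lower()
    score = text.count("\u2014")                       # dreaded em dashes
    score += sum(lowered.count(p) for p in CLANKER_PATTERNS)
    score += lowered.count("\n- ")                     # bullet-point fever
    return score

sample = "This isn't just a fight\u2014it's man vs nature."
print(whiff_score(sample))  # -> 2 (one clanker phrase, one em dash)
```

Anyone who knows the rules can write around them, so the arms race collapses back to vibes, i.e., your whiff detector.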

 

5) They have a paper-thin attention span.

They have become totally allergic to long-form and can barely get past the AI’s first bullet point without asking it to “summarise this summary”.

When recommending longer articles or books, they may often reply, “I am far too busy [trying to automate my life] to read something like that”.

"None are so busy as the fool and the knave" - John Dryden

6) They place total [over]confidence in the AI.

Probably an understatement here as their confidence borders on a devout commitment to their new AI deity.

They will sometimes defend it when it's wrong (“well, you must be bad at prompting”) and, even worse, instead of doing the critical thinking themselves, they may just use another AI to check the first AI's work.

This same friend will also want the AI to do everything, “why would we even do that ourselves? Let’s spend the next 10 hours vibe coding an AI agent project to do it for us”... breh it’s a 30 min task.

“Doubt is not a pleasant condition, but certainty is absurd.” - Voltaire

7) They become very surface-level.

This former friend may begin to present some of their new AI ‘thoughts’ as their own.

When asked follow-up questions, there will be a large pause as they tap away on the LLM for more context or reply with a simple “hmm, interesting angle, more work is needed here”. These same people always turn up to calls with their indispensable friend, the notetaker.

Herein lies a bigger issue: delegating autonomy is regressive. As you give incrementally more away, your illiteracy becomes harder to diagnose and even harder to cure. What's worse, you probably won't even want to.

8) They become dependent.

Related to the above, they can become so reliant on their new AI that they gradually lose the skills to operate without it.

This is not always the end of the world, depending on how it is being used and the skills they develop to replace the old ones… however, their ability to problem-solve solo (especially when their job depends on it, e.g., vibe coders) can be comical.

Closing Thoughts

To paraphrase a warlock who is ancient and Chinese so must know stuff, “To defeat your enemy, you must know them”.

Now that you know how to identify your friends falling into the abyss of middery, you can at least warn them that they may have been oneshotted. This is half the battle. If they want to, and it's early, maybe they will try to do something about it. [My next essay should help].

However, and this is the worst kind, if your friend is ticking many of these boxes but refusing to admit it (ego-syntonic), it is already too late. At least they will be in well-capitalised company so hopefully they have also already made it. If not, then double eek.

Alternatively, maybe YOU are reading this (hello YOU) and are self-aware and humble enough to admit that you yourself have been oneshotted. In that case, chances are you can simply embrace it and maybe even refine it (example below). Who am I to judge? Enjoy the singularity.

If you know someone who has been affected by oneshotting and needs help, follow the below for more details :/


Fully filled on knowledge. It’s time.