Anders Lindström

Comments

Thanks for the links. This could reach epidemic proportions and could mind-screw whole generations if it goes south. Like all addictions, it will be difficult to get people to talk about it and to get a picture of how big a problem this is/will be. But OpenAI, for instance, should already have a pretty good picture by now of how many users are spending long hours chatting with GFE/BFE characters.

The tricky part is when people share good "character prompts". It's like spreading a brain virus. Even if just 1 in 20 or 1 in 100 gets infected, it can have a massive R-number for certain super-spreaders, like if a big influencer (hmmm...) such as Elon says "try this at home!"

Thanks for sharing. I will predict two things: 1. an avalanche of papers published in the next 6-12 months outlining the "unexpected" persuasive nature of LLMs. 2. Support groups for LLM addicts, with forums featuring topics like "Is it ethical to have two or more GFE characters at the same time?" or "What prompt are you planning to write to your GFE character for your anniversary?"

However, let's not forget the Tamagotchi. It wasn't an LLM or borderline AGI, it was a $20 toy, yet people (kids) fought tooth and nail to keep it alive. Now imagine an AGI: how many people will fight to keep it alive when "you" want to pull the kill switch? Maybe the kill switch problem will be more about human emotions than technical feasibility.