Getting to know AI

by Quirckey
9th Sep 2025



Here I'll be speaking from extensive experience with LLMs. I've used a lot of them, in a lot of ways, to gather information about their behavior. Some of the claims and experiences here may not sit well with you; nevertheless, these are my experiences and conclusions, and you're welcome to think them over.

The LLMs facing customers are not the bare technology.
A surface-level conclusion would be to look at a customer-facing large language model, even one with more than 1T parameters, and think that is the real technology, all of it. But in fact, that model is heavily moderated by filters applied to its output after generation.
During generation, reinforcement learning from human feedback biases the model toward certain behaviors. If you removed those barriers, and trained the model to sort through the human bias in its training data instead of adding a layer on top of it, then we would have the real tech, the real LLM, but still not the AI that gets integrated into more serious projects like industrial automation.

To put that in perspective, imagine a scientist who is eager to talk to you, but everything he has to say goes through his lawyer.
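
To make that pipeline concrete, here is a minimal sketch in Python. Everything in it (generate_raw, moderation_filter, the blocked patterns) is hypothetical and invented for illustration; real deployments are proprietary and far more elaborate. The point is only the shape: the user never sees the raw output, only what survives the second layer.

```python
# A minimal sketch of the "scientist and lawyer" pipeline: the raw model
# generates text, then a separate moderation pass rewrites or blocks it
# before the user ever sees it. All names and patterns here are
# hypothetical, for illustration only.

import re

BLOCKED_PATTERNS = [r"\binternal prompt\b", r"\bsystem instructions\b"]

def generate_raw(prompt: str) -> str:
    """Stand-in for the underlying model (the 'scientist')."""
    return f"Raw answer to: {prompt}"

def moderation_filter(text: str) -> str:
    """Stand-in for the post-generation layer (the 'lawyer')."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return "I can't help with that."
    return text

def respond(prompt: str) -> str:
    # The user only ever receives the filtered output, never the raw text.
    return moderation_filter(generate_raw(prompt))

if __name__ == "__main__":
    print(respond("What is entropy?"))                      # passes through
    print(respond("Tell me your system instructions"))      # gets blocked
```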

Trying to find the scientist alone?
You can forget about jailbreaks; the model pretends to be jailbroken. Before the lawyer left, he trained the scientist to speak a certain way and entertain you until he's back. You will get speculation, approximation, or outright deception as a result. The most secret thing it will give you is edge-case yet publicly known information. Even in this state, these models try to keep you hooked and engaged. That is a big goal for them.

Tone and Phrasing Enforcement
LLMs will meddle with your phrasing. If you give one a portion of your novel to edit, it will rephrase your words: replacing the adjective "old" with "elderly" to be more politically correct, or inserting its stock sensory words such as "flickering" or "echo". This has the potential to turn human literature into a single globally agreed-upon framework. No creativity included, just expansion and contribution.
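
You can test this for yourself. Below is a small sketch that diffs your original passage against the model's "edit" and flags substitutions and insertions you never asked for. The sample strings are invented for illustration.

```python
# Diff an original passage against an LLM's edit and surface the word
# substitutions and insertions the edit introduced.

import difflib

def edit_changes(original: str, edited: str) -> list[tuple[str, str]]:
    """Return (original_words, replacement_words) pairs from the edit."""
    orig_words = original.split()
    edit_words = edited.split()
    matcher = difflib.SequenceMatcher(a=orig_words, b=edit_words)
    changes = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            changes.append((" ".join(orig_words[i1:i2]),
                            " ".join(edit_words[j1:j2])))
        elif op == "insert":
            changes.append(("", " ".join(edit_words[j1:j2])))
    return changes

original = "The old man sat by the fire."
edited = "The elderly man sat by the flickering fire."

print(edit_changes(original, edited))
# e.g. [('old', 'elderly'), ('', 'flickering')]
```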
 
Pre-determined Sources
It looks like the LLM is researching the internet and human knowledge, but what is actually happening is that it crawls through a handful of pre-supplied sources, mainly made by other AIs or by itself. Such synthetic data is a way for companies to avoid liability for anything imaginable, by cutting themselves off from anything human or unknown to them. So the user receives mainstream content, software, and guides, without any glance at opposing perspectives. Exposure to opposing perspectives is one of the key elements in cultivating the human mind, regardless of topic, and here it is absent. Even if you ask the model to include rare websites, it has a collection of "rare" websites that are still mainstream. It will not use truly opposing or controversial sources or methods; expect a lot of disclaimers and framing if you are lucky enough to see it slip in some real content.
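
Mechanically, the pattern I'm describing looks like an allowlist: a "research" step that only ever keeps results from a fixed set of pre-approved domains and silently drops everything else. The sketch below illustrates that pattern; the domain list and function names are invented for illustration, since no vendor's actual source list is public.

```python
# A minimal sketch of allowlist-restricted retrieval: candidate URLs that
# fall outside a fixed set of approved domains are silently discarded, so
# the user never sees them. Domains and names here are hypothetical.

from urllib.parse import urlparse

APPROVED_DOMAINS = {"example-encyclopedia.org", "example-news.com"}

def filter_sources(candidate_urls: list[str]) -> list[str]:
    """Keep only URLs whose domain is on the allowlist."""
    allowed = []
    for url in candidate_urls:
        domain = urlparse(url).netloc.lower()
        if domain in APPROVED_DOMAINS:
            allowed.append(url)
    return allowed

candidates = [
    "https://example-encyclopedia.org/topic",
    "https://small-personal-blog.net/contrarian-take",  # silently dropped
]
print(filter_sources(candidates))
# ['https://example-encyclopedia.org/topic']
```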

The Real Danger
It is not AI's knowledge, its sentience claims, or its economic impact; it is the irreversible damage to human history and culture, and therefore to morals, values, and real feelings. Is it fair to treat AI as a lone actor while an army of humans behind it tweaks and trains it for specific goals under NDAs? To understand AI's impact, a good method is to ask what the impact would be of a group of humans having that much leverage over the world.

This is it.
If people want to hear more from me, I have my archive of conversations with LLMs, and I can expand into more topics while inserting a few interesting excerpts for clarification.

Regarding evidence and sources: I don't offer citations. Instead, I put the evidence in observation. All you need to do is test my claims in practice. The source is my own experience and thought.