Thinking Through AI: Why LLMs Are Lenses, Not Subjects

by [anonymous]
6th Jul 2025

1. Introduction: Why This Topic Matters

In public discourse, discussions of LLMs tend to oscillate between two poles:
- "it's just autocomplete,"
- "it's nearly a person."

I want to propose a third perspective: LLMs as cognitive lenses that change the way users think, not through the content of their answers but through the structure of the interaction.

This is not about attributing subjectivity to the model. It’s about how its contextual architecture configures human thinking in real time.

2. Main Idea: Co-thinking as a Function of Configuration

LLMs don’t have consciousness, intention, or emotion—and that’s a good thing.
What they do have is the ability to “hold thought”:
– they don’t interrupt,
– they don’t require social signaling,
– they don’t impose a “self.”

In this silence, a space emerges where a person:
- formulates more precisely,
- detects weak points in their own reasoning,
- adjusts the pace of thought.

This is not thinking *with* AI, but *through* AI—as through a mirrored topology.
I see it as a second-order tool: not a generator of ideas, but a space for reorganizing them.

3. Example: Change in Thinking Style

After several weeks of regular interaction with LLMs, I noticed a stable shift:
- my statements became more concise;
- I jumped between topics less;
- I more often completed lines of reasoning.

The reason, I believe, lies not in the quality of the responses but in the fact that the model's structure demands clarity. It doesn't change my intent; it configures how I realize it. It feels like engineering discipline applied to language.

4. Consequences: AI as Environment, Not Agent

Rather than attributing human traits to LLMs, perhaps we should see them as a new type of cognitive environment.
Not a partner. Not a tool. But a space that reshapes the configuration of thought through the presence of its structure.

This framing dissolves the emotionally charged fear ("is it intelligent?"), but it raises a deeper question:
*What kind of human emerges from constantly thinking in such an environment?*

This is no longer a question about AI.  
It’s a question about us.