If Massive Data Can’t Create Human-Like Intelligence, Are We Even on the Right Path?

by ahmadrizq
23rd Jul 2025
This post was rejected for the following reason(s):

  • No LLM generated, heavily assisted/co-written, or otherwise reliant work. LessWrong has recently been inundated with new users submitting work where much of the content is the output of LLM(s). This work by-and-large does not meet our standards, and is rejected. This includes dialogs with LLMs that claim to demonstrate various properties about them, posts introducing some new concept and terminology that explains how LLMs work, often centered around recursiveness, emergence, sentience, consciousness, etc. (these generally don't turn out to be as novel or interesting as they may seem).

    Our LLM-generated content policy can be viewed here.

  • We are sorry about this, but submissions from new users that are mostly just links to papers on open repositories (or similar) have usually indicated either crackpot-esque material, or AI-generated speculation. It's possible that this one is totally fine. Unfortunately, part of the trouble with separating valuable from confused speculative science or philosophy is that the ideas are quite complicated, accurately identifying whether they have flaws is very time intensive, and we don't have time to do that for every new user presenting a speculative theory or framing (which are usually wrong).

    Separately, LessWrong users are also quite unlikely to follow such links to read the content without other indications that it would be worth their time (like being familiar with the author), so this format of submission is pretty strongly discouraged without at least a brief summary or set of excerpts that would motivate a reader to read the full thing.


We’ve built language models with hundreds of billions of parameters. They pass tests, write essays, imitate conversation. But here’s the thing:

None of them actually understand what they’re saying.

None of them can ask their own questions, reflect on meaning, or form genuine intention. So I’m asking the question no one wants to ask:

What if we’ve been building AI on the wrong foundation all along?

I think intelligence is about structure.

About how you build understanding. How you connect one concept to another. How you get that “Aha!” moment when something clicks, not just because you’ve seen A show up next to B a lot of times.

These days, AI is great at guessing, but it doesn’t understand. The more data it gets, the more fluent it sounds. And the more obvious it becomes:

none of these models can actually think.

And for me, the more I sit with this, the more I keep coming back to the same thought:

Maybe intelligence isn’t built on data. Maybe it’s built on language.


Not language as a tool.

But language as the thing underneath thinking. The core structure that lets us form concepts, link ideas, create abstraction, and imagine what doesn’t yet exist.

I’m not saying I can prove this. I’m not even saying it’s some new revolutionary insight.

But the more I read from Vygotsky, Chomsky, Friston, and others, the more I realize:

A lot of serious minds have pointed in this same direction.

Language shapes thought. It’s not a side effect of intelligence. It might be the root of it.

And honestly, that makes me more confident in my own intuition. Not because I need to be right, but because this line of thinking feels human.

This is my research proposal:

https://doi.org/10.13140/RG.2.2.12996.54406


I would love to hear any opinions, advice, or even a new perspective.