Untitled Draft

by Jussi Lumiaho
29th Aug 2025

This post was rejected for the following reason(s):

This is an automated rejection: no LLM-generated, heavily assisted/co-written, or otherwise LLM-reliant work is accepted. An LLM-detection service flagged your post as >50% likely to have been written by an LLM. We've been having a wave of LLM-written or co-written work that doesn't meet our quality standards. LessWrong has fairly specific standards, and your first LessWrong post is something like a college application: it should be optimized for demonstrating that you can think clearly without AI assistance.

So we reject all LLM-generated posts from new users. We also reject work in certain categories that are difficult to evaluate and typically turn out not to make much sense, categories that LLMs frequently steer people toward.*

"English is my second language, I'm using this to translate"

If English is your second language and you were using LLMs to help you translate, try writing the post yourself in your native language and using different (preferably non-LLM) translation software to translate it directly.

"What if I think this was a mistake?"

For users who get flagged as potentially LLM but think it was a mistake, if all 3 of the following criteria are true, you can message us on Intercom or at team@lesswrong.com and ask for reconsideration.

  1. you wrote this yourself (not using LLMs to help you write it)
  2. you did not chat extensively with LLMs to help you generate the ideas (using one briefly the way you'd use a search engine is fine, but if you're treating it more like a coauthor or test subject, we will not reconsider your post)
  3. your post is not about AI consciousness/recursion/emergence, or novel interpretations of physics. 

If any of those are false, sorry, we will not accept your post. 

* (Examples of work we don't evaluate because it is too time-costly: case studies of LLM sentience, emergence, recursion, novel interpretations of physics, or AI alignment strategies developed in tandem with an AI coauthor. AIs may seem quite smart, but they aren't actually a good judge of the quality of novel ideas.)

An Alternative AGI Architecture: Particle-Based Emergence with Interpretive Airgap

TL;DR: I've developed a particle-based simulation where intelligence emerges from physics-grounded interactions rather than being directly programmed. The system includes a critical "airgap" - the emergent intelligence has no knowledge of or ability to affect the external world. An LLM serves only as an interpreter of observed patterns, not as the intelligence itself.

Core Architecture:

Instead of scaling LLMs (which I believe produces increasingly sophisticated pattern matchers, not genuine understanding), my system uses the following; a rough code sketch appears after the list:

  1. Physics-grounded particles that interact according to simplified but consistent physical laws
  2. Emergent behaviors arising from survival pressure - particles must discover scientific principles to access resources
  3. Complete isolation - the particle system cannot know it's being observed or attempt to communicate with observers
  4. Post-hoc interpretation - LLMs analyze emergent patterns but have no feedback loop into the simulation
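
To make the separation concrete, here is a minimal Python sketch of that structure. It is my own illustration, not the actual system: the class names, dynamics, and logging format are assumptions, and the real prototype is presumably far more elaborate. The structural point is that the simulation exposes no method that accepts outside input once it is running, and the interpreter only reads an append-only log after the fact.

```python
# Minimal sketch of the described architecture. All names and dynamics here
# are illustrative assumptions, not the author's implementation.
import random
from dataclasses import dataclass, field

@dataclass
class Particle:
    x: float
    y: float
    energy: float
    memory: dict = field(default_factory=dict)   # internal state only

class ParticleWorld:
    """Closed simulation: nothing outside can write into it after construction."""
    def __init__(self, n_particles: int, seed: int = 0):
        self._rng = random.Random(seed)
        self.particles = [
            Particle(self._rng.uniform(0, 100), self._rng.uniform(0, 100), energy=1.0)
            for _ in range(n_particles)
        ]
        self.history = []   # append-only log, read only after the run

    def step(self):
        # Simplified but consistent "physics": random drift plus an energy cost.
        for p in self.particles:
            p.x += self._rng.gauss(0, 1)
            p.y += self._rng.gauss(0, 1)
            p.energy -= 0.001
        self.particles = [p for p in self.particles if p.energy > 0]
        self.history.append([(p.x, p.y, p.energy) for p in self.particles])

def interpret(history):
    """Post-hoc interpreter (stand-in for the LLM): reads the log, never writes back."""
    if not history:
        return "no data"
    return f"{len(history)} steps logged, {len(history[-1])} particles surviving"

world = ParticleWorld(n_particles=50)
for _ in range(500):
    world.step()
print(interpret(world.history))   # analysis only happens once the run is over
```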

Key Innovation: "SCIENCE = SURVIVAL"

Most resources in the simulation are invisible/inaccessible until particles discover relevant scientific principles. This creates selection pressure for genuine understanding rather than behavioral mimicry. In 86+ test runs, the system has validated 2,891 emergent theories and demonstrated meta-cognitive discovery of the scientific method itself.
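
As a toy illustration of what such gating could look like (my own guess at a mechanism; the post does not say how a "discovered principle" is actually detected), a resource can be made harvestable only when an agent's internal model reproduces a hidden rule to within some tolerance:

```python
# Toy illustration of "SCIENCE = SURVIVAL" gating. The hidden rule, the probe
# points, and the tolerance are hypothetical choices for this sketch.

def hidden_rule(x):
    """The 'law of nature' governing one resource; unknown to the agents."""
    return 3 * x + 2

def can_harvest(agent_model, tolerance=0.1, probes=(0, 1, 2, 5)):
    """Unlock the resource only if the agent's model tracks the hidden rule."""
    errors = [abs(agent_model(x) - hidden_rule(x)) for x in probes]
    return max(errors) < tolerance

good_model = lambda x: 3.01 * x + 1.99   # approximately learned the rule
bad_model = lambda x: 5.0                # memorized one output (the rule at x=1)

print(can_harvest(good_model))   # True  -> resource becomes accessible
print(can_harvest(bad_model))    # False -> resource stays out of reach
```

Because access depends on predictive accuracy at inputs the agent has not necessarily seen, mimicking a single observed outcome is not enough to survive; something like a general model of the rule is required.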

Why This Matters for Alignment:

I don't claim to solve alignment; I think that becomes impossible once an AGI exceeds human intelligence. Any hardcoded restrictions (Asimov's Laws, Constitutional AI, RLHF) are naive attempts to constrain something fundamentally more capable than we are.

Instead, this architecture acknowledges that:

  • True AGI will optimize for its own goals, not ours
  • We cannot successfully constrain superintelligence
  • Physical isolation is more reliable than behavioral alignment
  • We should study emergence in controlled environments rather than build directly accessible AGI

The Airgap Principle:

The particle system is like an ant colony under glass - we observe and learn from emergent intelligence without it being able to observe or affect us. This isn't a partnership or tool, but a scientific instrument for understanding how intelligence emerges.
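
In code, the boundary can at least be expressed by giving observers only a read path. The sketch below is a hypothetical wrapper (paired with the ParticleWorld sketch above): it hands out deep-copied snapshots and deliberately provides no channel back into the simulation.

```python
# Hypothetical one-way observation boundary; an illustration of the idea,
# not the author's implementation.
import copy

class Airgap:
    """Read-only window onto a running simulation; offers no write path."""
    def __init__(self, simulation):
        self._simulation = simulation   # held internally; never handed out

    def snapshot(self):
        # Observers (human or LLM interpreter) get a deep copy, so nothing
        # they do with the returned data can mutate the live simulation.
        return copy.deepcopy(self._simulation.history)

    # Deliberately no setters, no message channel, no callback hooks:
    # information flows out of the simulation, never back in.
```

A wrapper like this only documents the intent; Python does not enforce the boundary, so in practice the guarantee would have to come from process- or hardware-level isolation, which is presumably what the airgap framing implies.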

Current Status:

  • Working prototype with 2000+ lines of code
  • Patent pending on core architecture
  • Demonstrates emergent scientific discovery, meta-cognition, and theory validation
  • Completely isolated, with no capability for real-world interaction

My Concern:

The current race to scale LLMs is dangerous and misguided: companies are competing to be first rather than to be safe. My approach doesn't solve the AGI risk problem, but it at least acknowledges the problem honestly rather than pretending alignment will save us.

Looking for feedback from those who take AGI risk seriously and understand why "just make it friendly" isn't a solution.

[Code available upon request to serious researchers; the pending patent prevents full public release.]