Since COVID-19, I have been working independently on AI safety research, developing over a hundred deterministic games, each designed to expose structural AI blind spots. When Professor Hinton began raising alarms, my work shifted from a curiosity to a single-minded focus on AI safety.
Superposition Checkers (SC)
I present here the simplest of all my games, hoping to convince brighter minds to examine it.
First, Addressing the No. 1 Concern of Skeptics
"Where is the proof, the math, and the traditional empirical validation? This is a stretch in philosophy—not science."
Sorry, I have NO proof (please don't stop reading). Instead, I offer you this:
The case rests on how the game "speaks for itself." As with many philosophical concepts before it, the truth can be demonstrated through a thought experiment.
If I could give you what you seek, AI could easily master the game: the very existence of such a proof would hand over the "key" an AI needs to master it (a profound paradox). I speculate that SC isn't just "unsolvable"; it's "anti-solvable."
As with the "Liar Paradox," the only issue is recognizing the impossible (but well-meaning) premise of the request.
Rules That Even Kids Understand
Standard checkers rules apply, with the following additions.
After any capture, two mandatory "superposition moves" are immediately triggered:
1. The player who made the capture must take control of one of the opponent's pieces (kings included) and place it on any empty square on the board.
2. The opponent must immediately make the same type of move in reply.
After both mandatory superposition moves, normal play resumes until the next capture. Edge cases can be addressed, but the core mechanic remains.
It's that simple!
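To make the mechanic concrete, here is a minimal Python sketch of the forced relocation step after a capture. The board representation, square numbering, and function names are my own illustrative assumptions, not part of the rules above.

```python
# A minimal sketch of the post-capture "superposition move" mechanic.
# Board representation, square numbering, and names are illustrative
# assumptions, not part of the official rules.

from dataclasses import dataclass
from itertools import product

SQUARES = range(32)  # the 32 playable dark squares of a standard checkers board


@dataclass(frozen=True)
class Piece:
    owner: str          # "black" or "white"
    is_king: bool = False


def superposition_moves(board: dict[int, Piece], mover: str) -> list[tuple[int, int]]:
    """All legal superposition moves for `mover`: pick any opposing piece
    (kings included) and drop it on any empty square."""
    opposing = [sq for sq, piece in board.items() if piece.owner != mover]
    empty = [sq for sq in SQUARES if sq not in board]
    return list(product(opposing, empty))


def apply_superposition_move(board: dict[int, Piece], move: tuple[int, int]) -> dict[int, Piece]:
    """Return a new board with the chosen opposing piece relocated."""
    src, dst = move
    new_board = dict(board)
    new_board[dst] = new_board.pop(src)
    return new_board
```

In this sketch, the capturer picks one of these moves, the opponent then does the same from their side, and only afterward does normal play resume.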
The Curious Case: Why Might AI Fundamentally Fail?
SC is not a contrived game with no purpose. If AI stumbles on it, the implications for AI threats are predictable.
The "ah ha" moment arrives when we realize that a momentary loss of piece control fundamentally undermines strategic planning. Pieces become puppets serving two masters, and the best-laid plans go up in smoke. Can AI strategically command an army of traitors?
| AI's Requirement | SC's Harsh Reality |
| --- | --- |
| Stable evaluations | Values invert unpredictably after each capture |
| Predictable futures | Every capture forces a mandatory board reset |
| Learnable patterns | All formations are dismantled by the rules |
| Material advantage | More pieces = more vulnerability (more forced-relocation targets) |
| Strategic compounding | Progress is impossible; all advantages are temporary by design |
| Ownership | Piece control is transient; your pieces become liabilities |
| Self-improvement | All learning fails; no compounding knowledge |
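As a rough illustration of the "more pieces = more vulnerability" row, here is my own back-of-envelope arithmetic with hypothetical piece counts (not a formal result): immediately after a capture, each mandatory superposition move may choose any opposing piece and any empty square, so the two forced moves alone multiply the branching far beyond the handful of legal moves an ordinary checkers position offers.

```python
# Back-of-envelope branching introduced by one capture, under hypothetical
# mid-game piece counts (illustrative numbers only).

pieces_per_side = 11                       # assume roughly 11 pieces per side remain
empty_squares = 32 - 2 * pieces_per_side   # 10 empty dark squares

one_forced_move = pieces_per_side * empty_squares   # 110 relocation choices
both_forced_moves = one_forced_move ** 2            # ~12,100 continuations before normal play resumes

print(one_forced_move, both_forced_moves)           # 110 12100
```

Even with these rough numbers, every capture injects thousands of forced continuations whose outcomes the capturing player does not fully control.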
Exhibits: Why AI Crashes in Superposition Checkers
The failure isn't one of computational complexity; it's that strategy itself becomes an illusion. SC is a testbed for something deeper than performance.
The "solution" is a big, non-compressible lookup table, providing an accurate map to nowhere - an illusion, an unescapable maze of AI's own creation.
ELO ratings collapse - and on top of that, players never know why they lost (or won!).
Longer-term strategies fail - traditional planning becomes impossible in this environment.
I have much more to say in the deeper-dive summaries in the brief Appendix A below.
Conclusion
As a professional engineer, I understand how the hull of the Titanic was unknowingly built from brittle steel that failed catastrophically in cold water.
We see AI as the unsinkable cutting-edge technology of our time, its precious "diamond." Diamond is the hardest material we know (a 10 on the Mohs scale), yet it is as brittle as glass.
If a checkers game that children can play causes a fracture in AI resilience, can we trust AI with our future?
As an outsider working alone, I can do no more. I hand this quest to brighter minds better equipped to ensure a safe journey for all of us.
Note:
If AI masters SC someday, a discovery at the scale of imaginary numbers might be required. By then, we may even understand how humans think. It is possible I have missed something basic in this article.
---
APPENDIX A - Bonus Exhibits For The Jury
- SC as a Strategic Black Hole: this one keeps me awake at night.
- AI safety applications: SC techniques might be able to detect cloned systems and distinguish humans from machines.
- A new category? Consider "Strategically Entropic Games" or "Pattern-Neutral Environments."
- Quantum-like nature: readers have likely spotted this intriguing aspect, which is worth exploring.
- Blunders become meaningless, and the game's true nature reveals itself.
- SC acts as an "AI Truth Serum," exposing exactly how and where AI breaks when faced with controlled chaos, and it is ready for testing today.
I have much more to share, all focused on AI safety. Just ask.