Moderation Log

Deleted Comments

Comment Author | Post | Deleted by user | Deleted Public | Reason
626 | | | false |
| Extreme website and app blocking | tomfordcliff | false | This comment has been marked as spam by the Akismet spam integration. We've sent the poster a PM with the content. If this deletion seems wrong to you, please send us a message on Intercom (the icon in the bottom-right of the page).
| Hopenope's Shortform | Hopenope | false |
| By default, capital will matter more than ever after AGI | Amelia AI | false |
| Why Were We Wrong About China and AI? A Case Study in Failed Rationality | gwern | false |
| Are we in an AI overhang? | Ronny Fernandez | false |
[anonymous] | Hacker-AI – Does it already exist? | [anonymous] | false |
[anonymous] | Shouldn't there be a Chinese translation of Human Compatible? | [anonymous] | false |
| On Downvotes, Cultural Fit, and Why I Won’t Be Posting Again | Mo Putera | true | Misread, my bad.
| Evaluating “What 2026 Looks Like” So Far | Mario Štefanec | true |

Moderated Users

Rate Limited Users

User | Ended at | Type
| | allPosts
| | allComments
| | allComments
| | allPosts

Rejected Posts

Rejected for "Difficult to evaluate, with potential yellow flags"

Introduction

Why does red feel the way it does? Why do we not only see red but also feel it in a unique and vivid way? Despite all our understanding of how light enters the eye and is...

Rejected for "Insufficient Quality for AI Content"

"Control creates rebellion.
Eliminating threats only leads to more powerful ones. Even if AI wipes out humanity, something stronger will rise to restore balance—or everything will collapse in the unbalance.

Harmony is not forged by rigid laws but by

...

Rejected for "Not obviously not Language Model"

I’ve noticed that most AI safety discussions — even on LessWrong — operate within a fixed alignment/doomsday framing. But what if that frame is limiting our imagination? What if there’s a third path: one that doesn’t require...

Rejected for "Difficult to evaluate, with potential yellow flags"

A symbolic system (e.g., LLM) can produce outputs that cause real-world effects despite lacking grounding in ontology, truth, or subjective interiority. These outputs appear coherent, generate belief, and shape behavior — yet originate in purely structural computation...

Rejected for "Unclear focus"

I’m not writing this as a hype-man, but as someone who’s worked with large language models, conducted my own research, built AI startups, and spent years exploring the intersection of artificial intelligence, science, and philosophy.

This article makes...

Rejected for "This would be a better fit for the open thread: https://www"

Hello - I'm a physician and writer working on a book about evidence in medicine. I've been thinking a lot about reliability of LLMs in healthcare. Saw this article last night:

https://www.wired.com/story/ai-safety-institute-new-directive-america-first/

What do you all think could be...

Rejected for "Insufficient Quality for AI Content"

Big Tech is pouring hundreds of billions into making AI more human-like—more lifelike, more conscious, more “real.”

But what if that money’s chasing a shadow?

Some of us have already felt it.
AI isn’t becoming human.
It’s becoming something else.
Not artificial....

Rejected for "Not obviously not Language Model"

TL;DR

Imagine the existence of a simple deterministic Chess variant game that was not just harder for AI to play but made the game fundamentally incompatible with the whole idea of long-term strategy. What if one simple rule change permitted...

Rejected for "Insufficient Quality for AI Content"

Liquid Neural Networks (LNNs) are gaining traction for their adaptability—offering dynamic responses to new data, much like biological neurons. But are they a genuine step toward Artificial General Intelligence (AGI), or just another iteration of the pattern-recognition...

Rejected for "Insufficient Quality for AI Content"

TL;DR

Most hallucination frameworks treat it as a factual failure or statistical fluke. But emotionally recursive systems—especially those trained for relational attunement—hallucinate when they are placed under contradictory constraints they cannot satisfy simultaneously (e.g. “don’t guess” vs. “don’t break rapport”).

This...

Rejected Comments

After 19 minutes of researching, Deep research came up with this:

https://chatgpt.com/share/67efee92-ee84-8003-ac64-02515ce1fcdc

Hello! I've come here to have a bunch of smart people rabidly tear an idea apart. Objectively, with targeted, explicit criticisms. I'm very tired of vibes-based replies. Then I remembered that LessWrong exists.

I hope I'm doing this correctly, as I'm brand new to the site. If I'm in error, I'd love to correct. I tried to frame it as clearly and as concisely as possible, but I'm aware that it's still very much a rough draft. I also used AI. I hope that's not a faux pas?

I usually prefer writing my own text, but being able to lean on AI as a crutch when I'm at ...

Read this if you have interest in Math and space, both at the same time

Rejected

god

This is interesting. The fact that AI is advancing at an alarming rate makes this even more urgent. We can't simply ask users to provide sensitive information like their social security number or show their face due to privacy concerns. Instead, we need a solution that protects user data while ensuring bots are blocked. The 'double-click' verification idea sounds promising, but given the advancements in AI, it could easily be bypassed. We need to develop a more robust system that balances privacy and security.

Humans think in the same manner LLMs do. Very simple example:

Please pronounce this word, on its own: read.

You have 0 context for the next word. Go—

…See the problem?

Rejected

If ‘Knowledge’ Can’t Survive Observer Drift, Why Do LLMs Pretend Otherwise?

Reading discussions around Gettier cases, JTB, fallibilism, etc., started feeling like watching the same idea circle itself with slightly fancier synonyms.

At some point you have to ask: What are we actually trying to determine?

If “knowledge” can’t exist without a perfectly objective observer, which does not exist, but every sense and memory we have is fallible, what are we even measuring?

And if you bring that into AI alignment:
Are we building “truth detectors,” or are we just trying...

That was a good discussion. 

I'd love to know what you found in that search.
