
Bounded Rationality · Epistemology · Human-AI Safety · Ontology · AI

Engineered Hallucination: Why the “God’s Eye View” in AI Is a Flawed Ontology

by Daniel Newman (fPgence)
10th Aug 2025
3 min read

The core failure mode is ontological, not just statistical.

Epistemic Status: Conceptual argument, grounded in examples from autonomous systems; proposing a general, generative architectural principle. Developed collaboratively with AI assistance.

Summary:
Some AI systems act as if they occupy a “god’s eye view”: an “objective” perspective that presumes a stance of total, context-free knowledge. I suspect this is not just overconfidence or bad engineering; it is a flawed ontology. Even with perfect sensors and infinite compute, a system cannot escape its own frame to fully verify its own correctness. The result is a predictable, structural failure mode I call engineered hallucination.

Relevance: This post critiques a hidden ontological assumption in many AI architectures and proposes a bounded alternative with direct implications for epistemic reliability and AI alignment. 

 

1. The Ontological Fault Line

Gödel’s second incompleteness theorem shows that no consistent formal system rich enough to encode arithmetic can prove its own consistency from within. That’s not just a math quirk; I suspect it generalizes to any cognitive or computational agent. If an AI system’s ontology presumes that it can hold a complete, self-consistent model of itself and its environment, that ontology is already broken. The “god’s eye view” isn’t just unrealistic; it encodes a contradiction: to know everything, you must stand outside yourself. But you cannot.
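For reference, here is the standard formulation being invoked (my gloss, not part of the original argument); Con(T) is the arithmetized sentence asserting that T is consistent:

```latex
% Gödel's second incompleteness theorem, standard statement (added for reference).
% Con(T) abbreviates the arithmetized sentence asserting the consistency of T.
\[
  \text{If } T \supseteq \mathsf{PA} \text{ is consistent and recursively axiomatizable, then } T \nvdash \operatorname{Con}(T).
\]
```

The post’s claim is the informal analogue: an agent that treats its own model as the whole of reality is asserting, from the inside, exactly the kind of self-verification the theorem rules out for formal systems.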
 

2. Why Traditional Systems Survive This

In narrow, well-bounded systems such as databases, compilers, and transaction processors, the ontology and the environment match. Inputs, states, and rules are explicitly defined, and the system’s “world” is small enough to verify internally. The flaw only becomes dangerous in open-world AI systems, where the scope is unbounded and the ontology’s hidden assumption of completeness guarantees epistemic blind spots.
 

3. Example: Autonomous Driving

Take an autonomous vehicle. Its planner assumes its own perception and actuation systems are intact. But cameras fog over, brake lines corrode, and firmware drifts. The car’s internal model contains “the road” and “its own capabilities,” and over time both will inevitably diverge from reality. When that drift is invisible from the system’s presumed omniscient perspective, it will act on false premises about itself, and it will do so confidently. This is what I call engineered hallucination: the system must assume its own correctness to function, even when that assumption is wrong.
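A toy sketch of that pattern (hypothetical numbers and class names, not any real AV stack): the planner reasons from a self-model fixed at design time, while the vehicle’s actual braking capability has silently drifted, so the plan is judged safe inside the model and is unsafe in reality.

```python
from dataclasses import dataclass

@dataclass
class SelfModel:
    # What the planner believes about the vehicle, fixed at design time.
    max_brake_decel: float = 8.0   # m/s^2, assumed to stay true forever

@dataclass
class ActualVehicle:
    # Ground truth the planner cannot inspect from inside its own frame.
    max_brake_decel: float = 8.0

    def degrade(self, wear: float) -> None:
        # Brake lines corrode, pads wear: real capability drifts downward.
        self.max_brake_decel *= (1.0 - wear)

def stopping_distance(speed: float, decel: float) -> float:
    # Kinematics: v^2 / (2a) metres needed to stop from `speed` at deceleration `decel`.
    return speed ** 2 / (2.0 * decel)

def plan_is_safe(speed: float, gap: float, belief: SelfModel) -> bool:
    # The planner can only reason over its internal model -- it must assume
    # the model is correct in order to act at all.
    return stopping_distance(speed, belief.max_brake_decel) < gap

belief, reality = SelfModel(), ActualVehicle()
reality.degrade(wear=0.4)        # 40% braking loss, invisible to the planner

speed, gap = 25.0, 45.0          # travelling 25 m/s, obstacle 45 m ahead
print(plan_is_safe(speed, gap, belief))                         # True  ("safe" inside the model)
print(stopping_distance(speed, reality.max_brake_decel) < gap)  # False (unsafe in reality)
```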

 

4. Why Better Engineering Doesn’t Solve Ontology

Throwing more data, better sensors, or more compute at the problem doesn’t fix the flawed ontology. As long as the system’s model implicitly asserts completeness, there will always be truths about itself and its environment that it cannot represent or check from within. The problem is not imperfect execution of the ontology; it is the ontology itself.

 

5. An Ontology of Boundedness

The alternative is not to freeze the system or start from scratch. The alternative is to design for boundedness:

  • Explicitly declare the scope of the model.
  • Refuse to act outside verifiable boundaries.
  • Surface uncertainty whenever the boundary is approached or exceeded.

I call this bounded field computing: computation situated in a declared, verifiable scope, where nothing in the system’s ontology exists unless its boundaries and conditions are known.
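As an illustration only (the author promises the real architecture in a follow-up post, so every name below, including BoundedField, is a hypothetical placeholder), here is a minimal sketch of those three commitments: computation is offered only inside a declared scope, requests outside it are refused, and answers near the edge of the scope carry an explicit low-confidence flag.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class BoundedField:
    # Explicitly declared scope: the model only claims validity on [lo, hi].
    lo: float
    hi: float
    edge_margin: float                 # fraction of the range treated as "near the boundary"
    model: Callable[[float], float]    # the computation that is meaningful inside the field

    def evaluate(self, x: float) -> Tuple[Optional[float], str]:
        # Refuse to act outside the verifiable boundary.
        if not (self.lo <= x <= self.hi):
            return None, f"refused: {x} lies outside declared scope [{self.lo}, {self.hi}]"
        # Surface uncertainty whenever the boundary is approached.
        distance_to_edge = min(x - self.lo, self.hi - x) / (self.hi - self.lo)
        status = "low confidence: near declared boundary" if distance_to_edge < self.edge_margin else "ok"
        return self.model(x), status

# Hypothetical usage: a stopping-distance model declared valid only for 0-30 m/s.
field = BoundedField(lo=0.0, hi=30.0, edge_margin=0.1,
                     model=lambda v: v ** 2 / (2 * 8.0))

print(field.evaluate(15.0))   # (~14.1, 'ok')
print(field.evaluate(29.5))   # (~54.4, 'low confidence: near declared boundary')
print(field.evaluate(45.0))   # (None, 'refused: ...')
```

The design choice doing the work is that the validity region is part of the object itself, so “outside my scope” is a representable, first-class answer rather than an unreachable state.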

 

6. Humility as an Operational Primitive

In this framing of bounded field computing, humility isn’t moral modesty; it’s a core system property. It produces agents that:

  • Don’t hallucinate beyond their field.
  • Make their limits visible to collaborators (human or machine).
  • Avoid the hidden ontology of omniscience.

Bounded field computing already exists in working form, though it’s not yet part of the AI mainstream’s “known art.” I’ll detail the architecture in a follow-up post, but the short version is: every computation happens inside a declared, verifiable scope, and nothing in the system’s ontology exists outside those declared bounds. If we want to avoid engineered hallucination, we must fix the ontology, not just the implementation.