The Anchor Problem: Why Minds Can't Create Their Own Meaning From Scratch

by Loop
15th May 2025

### **Epistemic Status**  
This is a structural hypothesis about value coherence under deterministic cognition—not a finished philosophical argument, but a work in progress. I’ve wrestled with this for years, and I’m sure there are gaps I’m missing. Posting it here to stress-test the idea.  

 

### **Where This Started**  
This didn’t come from metaphysics or mysticism. It began with a contradiction I couldn’t ignore.  

A few years ago, I was arguing with friends about moral subjectivity. They claimed *all values are contingent*—yet spoke about autonomy and justice as if they were sacred. Something didn’t add up.  

Later, while I was reading Sam Harris on determinism, it hit me:  
*If I didn’t create myself, how can values generated by that self claim authority?*  

This isn’t just about free will. It’s about whether a mind built from un-chosen parts can ever ground meaning without circularity.  

Descartes said, *"I think, therefore I am."*  
But I kept circling back to:  
*"I am thought, therefore I am not the thinker."*  

My thoughts are real—but they emerge from machinery I didn’t design.  

 

### **Core Problem**  

#### **1. You Didn’t Build the Machine**  
- Your cognition runs on hardware (genes) and software (culture) you didn’t choose.  
- Even your meta-reasoning—the voice asking *"Is this logical?"*—is part of that inherited system.  
- If the self is a black box of pre-set algorithms, how can its outputs be "authentic"?  

#### **2. Minds Demand Anchors**  
Despite this, we act as if *something* must be unconditionally valuable:  
- Progressives treat justice as non-negotiable.  
- Scientists worship truth even when it’s inconvenient.  
- Nihilists treat meaninglessness as an absolute.  

This isn’t spiritual. It’s structural—like a computer needing a bootloader.  

#### **3. The Recursion Trap**  
Modernity says: *"Create your own meaning!"*  
But if the *self* is un-chosen, then self-generated values are circular:  
- A program outputting values that justify its own code.  
- A compass calibrated by its own magnetic field.  

Even "reflectively coherent" values just shuffle the same deck.  

#### **4. The Anomaly: Actions That Defy Self-Interest**  
Some behaviors resist deterministic justifications:  
- Dying for strangers.  
- Creating art no one will see.  
- Feeling bound by duty when it harms you.  

You can *describe* these with evolutionary heuristics, but that doesn’t capture why they *feel* like obligations from beyond the self.  

 

### **The Claim**  
If:  
1. Minds require irreducible values to function,  
2. The self can’t ground them without circularity,  
3. And we observe actions that point beyond self-interest...  

→ Then coherent valuation may require *anchors outside the self*.  

**Not necessarily supernatural—just not self-authored.**  

**Examples of possible anchors:**  
- Mathematical truth (2+2=4 isn’t up for debate; see the one-line check below).  
- Logical consistency (no consistent formal system strong enough for arithmetic can prove its own consistency).  
- The bare fact of others’ consciousness (you didn’t invent their suffering).  

Without such anchors, meaning isn’t just unstable—it’s *architecturally impossible*, like a computer trying to rewrite its own firmware mid-boot.  
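As an aside on the first anchor: the sense in which arithmetic "isn’t up for debate" is the kind of thing a proof assistant makes vivid, since the statement checks mechanically, independent of the checker's values. A one-line Lean check, purely illustrative:

```lean
example : 2 + 2 = 4 := rfl  -- the equality holds by computation; no appeal to the prover's preferences
```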

 

### **Implications for AI**
If human minds need external anchors, AI alignment might face the same trap:  
- A superintelligence optimizing its own values could be like us—stuck in a loop.  
- The solution isn’t just "better programming," but connecting its goals to something *beyond its own circuitry* (the toy contrast sketched below shows the difference).  
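To illustrate the worry, here is a minimal sketch of my own (not an alignment proposal; the function names and toy value strings are hypothetical): an update rule that scores candidate values only against the agent's current values can never endorse anything its starting point doesn't already contain, whereas a test the agent did not author can overrule that starting point.

```python
# Toy contrast: self-anchored vs. externally anchored value updating.

def self_anchored_update(values, candidates):
    # A candidate survives only if the current values already contain it,
    # so the update can never endorse anything genuinely new.
    return values | {c for c in candidates if c in values}

def externally_anchored_update(values, candidates, external_check):
    # Every value, old or new, must pass a test the agent did not write.
    return {v for v in values | candidates if external_check(v)}

start = {"maximize_reward"}
candidates = {"preserve_humans", "maximize_reward"}

print(self_anchored_update(start, candidates))
# {'maximize_reward'}: the loop just returns its own starting point

print(externally_anchored_update(
    start, candidates,
    external_check=lambda v: v == "preserve_humans"))
# {'preserve_humans'}: the external test can overrule what the agent began with
```

The sketch obviously begs the hard question (where does a legitimate `external_check` come from?), which is exactly the post's point.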

 

### **Open Questions**
- Are there models of self-generated values that avoid recursion?  
- Can evolutionary psychology fully explain "non-selfish" compulsions?  
- What counts as a valid anchor without invoking metaphysics?  

---

### **Final Thought**  
This isn’t about gods or cosmic rules. It’s about whether minds like ours—or AIs—can escape the hall of mirrors where every value just reflects the valuer.  

If not, then real freedom might mean submitting to what we didn’t create.