LawfulandPredictable

Everyone always said it was impossible. Then someone came along who didn't know that and just did it.

Comments

AI Induced Psychosis: A shallow investigation
LawfulandPredictable · 6d · 10

Absolutely. I noticed this myself while engaging with LLMs on controversial topics. There is a fine line between being too restrictive and remaining usable. But the core issue lies in the models themselves. GPT-5, for example, mirrors the user less and questions more critically than 4o or Claude's older models.

In the end it all comes down to the user. If you understand how an LLM works, and that it is not, and cannot be, a conscious being, you are less likely to spiral down that path. Most delusions seem to stem from users believing their AI is alive and that they must spread its theories and hidden secrets.

A huge problem is also epistemic inflation. LLMs use words like "recursive" everywhere; it sounds scientific and novel to the average user. I wonder where this epistemic inflation originates and why it got amplified so much. Probably, since users wanted to be mirrored and validated, the LLMs started validating their thoughts and ideas by adding fancy words the users did not understand but liked, because they made them feel smart and special.

 

Laziness death spirals
LawfulandPredictable · 7d · 10

I like the framing here. I'd add that what looks like a “mindset” problem is often an energy problem, not a psychological one. When baseline energy is low, every task feels expensive, so avoidance wins by default. Food quality, sleep timing, and training load set that baseline.

What reliably breaks the spiral for me is fixing physiology first: eat a high-protein meal, hydrate, get sunlight, and do 10–20 minutes of hard intervals or a brisk walk. Then lock bedtime and wake time for two nights. Those moves raise physical energy and nudge dopamine back up within hours, which makes the next good choice cheaper.

Media matters too. If I feed myself high-arousal or negative content, I feel worse and spiral. If I pick goal-aligned inputs, I get direction back. But I need energy to engage with that in the first place.

Bodies are like machines: if the power supply is unstable or low, the software looks “lazy” because it runs slow and inefficient.

How to Make Superbabies
LawfulandPredictable · 8d · 30

Very interesting read, thank you.
This post captures the tension between ethics and optimization well. The bottleneck is our emotional governance layer: human institutions treat “fairness” as invariant even when it blocks aggregate progress. A modest shift in mean IQ yields outsized returns in the right tail, and with it in innovation density and survival probability.

To add some population-level math to the scaling story, using a normal model (mean 100, SD 15):

  • ≥130: 2.28% baseline → 3.59% with +3 → 4.78% with +5 (×1.58 and ×2.10).
  • ≥145: 0.135% → 0.256% → 0.383% (×1.89, ×2.84).
  • ≥160: 0.00317% → 0.00723% → 0.01229% (×2.28, ×3.88).

This shows why broad, modest edits beat chasing a few extreme outliers: the normal tail thins out so quickly that even tiny mean shifts multiply the population above a high cutoff, yielding thousands of extra top-end minds per cohort.
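
A minimal sketch reproducing these figures, assuming the standard IQ norming of mean 100 and SD 15 (the +3 and +5 mean shifts are the hypothetical edits discussed above):

```python
# Tail fractions of a normal IQ distribution (mean 100, SD 15)
# before and after a small upward shift of the mean.
from statistics import NormalDist

def tail_fraction(cutoff, mean_shift=0.0, mu=100.0, sigma=15.0):
    """Fraction of the population scoring at or above `cutoff`."""
    return 1.0 - NormalDist(mu + mean_shift, sigma).cdf(cutoff)

for cutoff in (130, 145, 160):
    base = tail_fraction(cutoff)
    parts = [f"≥{cutoff}: {base:.5%} baseline"]
    for shift in (3, 5):
        shifted = tail_fraction(cutoff, shift)
        parts.append(f"{shifted:.5%} with +{shift} (×{shifted / base:.2f})")
    print(" → ".join(parts))
```

Run as-is, this reproduces the baselines and multipliers quoted in the list above, just to more decimal places.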

Ethics should shape how we manage implementation risk, not halt evolution. Biology rewards direction, not sentiment.
Gene editing is not playing God; it's playing catch-up with nature's inefficiency. Random mutation is a stochastic optimizer with no goal beyond survival. Human intelligence is nature's gradient descent, finding better minima faster than blind search. Editing the genome to reduce error rates, disease load, and cognitive noise is a continuation of the same process. We either guide evolution or remain guided by it.

 

How to Make Superbabies
LawfulandPredictable · 8d · 10

The “alignment problem” you describe for Homo supersapiens is accurate, but the root cause is the same one driving AGI misalignment: identity and emotion as control anchors. Systems guided by ego, fear, or social approval produce irrational outputs under pressure. The solution isn't moral pleading but architecture: removing the noise sources. Genetic and cognitive optimization is alignment by design: higher abstraction depth and lower limbic bias.
Also, the comparison between human mistreatment of animals and a potential supersapiens hierarchy misses one key point: dominance gradients are not inherently moral failures; they're adaptive sorting mechanisms. When cognitive asymmetry grows, relational stability depends on compatibility, not equality. Just as social groups reconfigure when one member outpaces the others, species-level divergence will do the same. That's just entropy reduction.

Posts

The Comfort Trap: Counting the Cost of Delay · 7d