My views on Lesswrong

by samuelshadrach
14th Oct 2025
Linkpost from samuelshadrach.com
5 min read

Disclaimer

  • Quick Note

Appreciation for LW

  • I think Eliezer Yudkowsky is one of the most brilliant philosophers to ever live.
  • I think Lesswrong is one of the few places on Earth that is actually focussed on important questions. Attention is the scarcest resource humans have, and LW directs it to the correct places.
  • I think there are a few core insights you can take away from LW that are basically correct and have unimaginably large implications.
    • Extended Church-Turing thesis. The entire universe is deterministic due to the laws of physics, and so are human brains. Penrose quantum bullshit doesn't work. Any insight you have about any of the following topics - free will, consciousness, moral values, moral patienthood, intelligence - has to reconcile with the fact that the ultimate instantiation of this thing is a Turing machine whose code can be written down on a piece of paper. (See the toy sketch after this list.)
    • The Hamming question is important and can be applied to all of life philosophy, not just specific branches of the tech tree. "What is the most important question in your scientific field, and why aren't you working on it?" -> "What is the most important question in life, and why aren't you working on it?" You can recursively apply the Hamming question to your actions in life until you realise most technology does not matter much, most politics does not matter much, and a few things matter a lot.
    • Intelligence is what separates humans from apes, and it allows us to invent both technology and politics at scale. Intelligence-enhancing tech (artificial superintelligence, human genetic engineering, whole brain emulation) deserves more attention than most other technologies.
    • Many of the experiences we consider innate to human nature - like death, sex, parenting etc - can be radically transformed with sufficient application of technology. This is core to what enabled Yudkowsky to build a religion that simultaneously fears and worships the Machine God. I too both fear and worship the Machine God, despite not following Yudkowsky's religion.
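
To make the "code on a piece of paper" point concrete, here is a minimal sketch of a deterministic Turing machine in Python. The transition table and the run_turing_machine helper are hypothetical examples of mine, not anything from LW; the point is only that a complete deterministic "program" fits in a few lines.

```python
# Minimal sketch of a deterministic Turing machine (hypothetical example).
# The whole "program" is the transitions dict: small enough to write on paper.

def run_turing_machine(tape, transitions, state="start", halt="halt"):
    """Run a deterministic Turing machine until it reaches the halt state.

    transitions maps (state, symbol) -> (next_state, written_symbol, move),
    where move is -1 (left) or +1 (right). Blank cells read as "_".
    """
    cells = dict(enumerate(tape))  # sparse tape
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")
        state, written, move = transitions[(state, symbol)]
        cells[head] = written
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Toy machine: flip every bit left to right, halt on the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}

print(run_turing_machine("1011", flip_bits))  # -> "0100_"
```

Nothing here claims human brains are this simple; it only illustrates that a deterministic system's behaviour is, in principle, the output of some such table.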

Why I don't affiliate too much with LW today

Differences in actions from LW consensus

  • LW has a lot of low-agency Thinkers (not Doers) who want to stay in a comfortable tech job or AI capabilities job, and will retroactively invent justifications for why this is the best thing to do.
    • Get their feedback, sure, but don't make the mistake of thinking you should necessarily become like them.
  • Many people on LW are not trying to create a political movement to pause AI.
    • I'm not sure why this is; some of my guesses are listed in the next section.
    • I don't want to waste too much of my time on these people, because I have limited time to build this movement.

Differences in opinions from LW consensus

  • I support a worldwide ban on deployment of superintelligent AI for at least the next 10 years. (Strong opinion)
    • I think loss of control of a rogue ASI is possible, and could lead to human extinction.
    • Solving technical AI alignment will not change my position, as I also think ASI could lead to a small number of people overthrowing the US and Chinese governments and establishing a world dictatorship stable for centuries at minimum. They could attain immortality via mind uploading. They could enforce their control via hyperpersuasion, an automated military, or some other way. A good analogy for hyperpersuasion would be religion, but persuasive enough that it actually gets 100% of humanity converted to it. (Strong opinion)
  • I support a worldwide ban on deployment of human genetic engineering for at least the next 10 years. (Weak opinion)
    • I am not yet convinced commoditisation of the tech will occur quickly. By commoditisation I mean that the entire supply chain is replicated in multiple nuclear-armed states, and per-unit cost is low. If commoditisation is not possible, then a small number of genetically engineered superhumans could take over the world (by being good at business and politics) and prevent the rest of humanity from getting access. Efforts to open source the tech can help here (instead of a ban), but I would like guarantees that commoditisation will be possible before we deploy this.
    • I definitely also think people (dictators, parents, religious and community leaders) will attempt to genetically engineer neurocircuitry in their populations based on what is competitive, rather than what is ideal in some philosophical sense. In the worst case, I expect collusion between dictators and various leaders in society to enforce this via violence. I do not have a good solution to this yet, only some ideas and partial solutions.
    • Examples of neurocircuitry that might become possible to genetically edit in both positive and negative directions: respect for authority, empathy, sex drive, ability to experience love or trust, fear and disgust responses. And of course fluid intelligence (IQ) and social intelligence. I expect this would make genetically edited humans a superior species to non-edited humans.
  • I do not subscribe to utilitarianism, longtermism, or universe colonisation as my primary life philosophy.
    • I would like to add value to society at scale, and I prefer solving bigger problems over smaller ones. I like to think long-term on the scale of centuries, not billions of years. I have yet to fully figure out my values.
  • I think attention is worth more than capital.
    • I think acquiring capital rather than attention might be a common mistake elites in SF are making. You can pay or threaten someone into doing a thing, but you can't pay them to actually care.
    • Religious leaders (actually persuade someone to change their values) > Politicians (use violence as incentive) > Billionaires (use money as incentive)
    • Yudkowsky has made a successful attempt to start a religion for atheists. I don't subscribe to it, but I think more religion for atheists is good.
    • I think the average person on LW does not understand politics well. My goals are explicitly political, hence it again makes less sense for me to engage.

Recent comment on LW

Comment posted 2025-10-13. Got downvoted with no replies.

Has anyone on lesswrong written actual posts on why they are against a pause AI political movement?

I default to assuming uncharitable explanations like:

  • LW has low-agency people who want to keep their tech job and their friend circles
  • Status dynamics in Silicon Valley, everyone gets "bad vibes" from politics despite understanding nothing about how politics works or how much power it has or how to do it with integrity. Yudkowsky has changed his mind about engaging with politics but his followers haven't.
  • The few high-agency people here are still attempting the weak sauce strategy of persuading US policymakers and natsec circles instead of applying force against them.

If I got more information I could be more charitable to people here.

Opinion on LW mods

  • I have not had a lot of personal interactions with them, but I generally think Oliver Habryka, Ben Pace and Raymond Arnold are acting in good faith.
  • I don't directly blame them for all the groupthink and cowardice on LW.
  • I do get the sense their life philosophy is very pro-longtermist utilitarianism and universe colonisation and immortality and stuff, and I am less motivated by these drives than they might be.