samuelshadrach

2025-09-09

Additional info relevant to LW

  • I support a worldwide ban on deployment of superintelligent AI for at least the next 10 years.
    • Solving technical AI alignment will not change my position, as I also believe in the possibility of ASI-enabled dictatorships (enforced via hyperpersuasion or violence) that are stable for >100 years.
  • I support a worldwide ban on deployment of human genetic engineering for at least the next 10 years. (Weak opinion)
    • I am not yet convinced commoditisation will occur fast enough to prevent dictatorships (enforced via hyperpersuasion or violence).
    • I definitely also think people (parents, community leaders, dictators) will attempt to engineer (precursors to) moral values in their kids based on what is competitive, rather than what is ideal in some philosophical sense.
  • I do not subscribe to utilitarianism, longtermism or universe colonisation as my primary life philosophy.
    • I would like to add value to society at scale, and prefer solving bigger problems over smaller ones. I like to think long-term on the scale of centuries, not billions of years. I have yet to fully figure out my values.
  • I think attention is worth more than capital.
    • I think acquiring capital rather than attention might be a common mistake elites in SF are making. You can pay or threaten someone into doing a thing, but you can't pay them to actually care.
    • Religious leaders (actually persuade someone to change their values) > Politicians (use violence as incentive) > Billionaires (use money as incentive)
    • Yudkowsky has made a successful attempt to start a religion for atheists. I don't subscribe to it, but I think more religion for atheists is good.

The bio below is copy-pasted and may not be up to date; visit samuelshadrach.com for the latest version.

Experts on extinction risk in 60 seconds

Experts' AI xrisk short


2025-09-08

About me

[Image: Samuel, casual headshot]

Connect with me

  • Contact me, Contact me (secure)
  • Donate to me, Support the movement, Support me, Trust me
  • Discussion forums

History

  • CV: DOB 2001-01-03, male. Completed schooling in Delhi in 2018; graduated from IIT Delhi with a BTech+MTech in biochemical engineering in 2023. Managed risk for $1B AUM at cryptocurrency startup Rari Capital. Completed ML Safety Scholars under Dan Hendrycks, UC Berkeley. Intermediate-level skills in software, finance, biotech, and probably some other topics too.
  • Notable influences: the cypherpunks mailing list and its successors such as Tor/Signal/blockchain, and the extropians mailing list and its successors such as EA/LessWrong.

Immediate goals

  • Lead a one-week livestreamed hunger strike in September 2025, in protest of those building superintelligent AI.
  • Publish a guide for whistleblowers at companies building superintelligent AI.
    • For whistleblowers: Whistleblower Database, Whistleblower Guide
    • For funders: Funding request, theory of change, redteaming
  • Create educational content on this cause (for journalists, YouTubers, software developers, and the general public).
    • Software: Try AI models from 2019 to 2025
    • Videos: YouTube: @samuel.da.shadrach, Twitter: @samuel_sha13010, Raw videos
    • Research: Prolific.com survey on AI pause, AI timelines talk
    • See also (not by me): Experts sign Dan Hendrycks' letter, Eliezer Yudkowsky's interview on x-risk, Eliezer Yudkowsky's TIME article, Game: Universal Paperclips

Other previous work by me

  • Software: Books Search for Researchers, Open source search
  • Research (on privacy/transparency): SecureDrop review, Open source weaponry, Long-term view on information, tech and society
  • Research (on misc topics): Unimportant

Posts (sorted by new)

  • 2 · xpostah's Shortform · 8mo · 158

Wikitag Contributions

No wikitag contributions to display.

Comments (sorted by newest)

[Anthropic] A hacker used Claude Code to automate ransomware
samuelshadrach · 17d* · 0-19

I support more advancements in cyberhacking capabilities so that companies and governments are incapable of keeping secrets. Secrecy enables them to act against the wishes of the majority to an extent they otherwise couldn’t.

xpostah's Shortform
samuelshadrach · 26d · 30

Is the possibility of hyperpersuasion the crux between me and a majority of LW users?

By far my strongest argument against deploying both human genetic engineering and superintelligence aligned to its creators is the possibility of a small group of people taking over the world via hyperpersuasion.

xpostah's Shortform
samuelshadrach · 1mo · 13

If you already have lots of people’s attention (for instance, because you have a social media following or high-status credentials) and you’re a US/UK citizen, your best available plan might be to run a political campaign with AI pause as the agenda.

You’re unlikely to win the election, but it’ll likely shift the Overton window and give people hope that change is possible.

For most people, having a next step after “ok I read the blogposts and I’m convinced, now what?” is important. Voting or campaigning for you could be that next step.

Obligated to Respond
samuelshadrach · 3d · 1-2

I love this comment. I think persuading people towards atheism is good, because then there’s more demand for a new atheist religion. (I consider e/acc or even Yudkowsky-style longtermism a religion)

Support the movement against extinction risk due to AI
samuelshadrach · 11d · 10

Anyone wanna set up prediction markets on any of the claims here? I wanna be able to claim some fake internet points if I later get proven right.

Support the movement against extinction risk due to AI
[+] samuelshadrach · 12d · -70
Support the movement against extinction risk due to AI
samuelshadrach · 12d · 10

This comment has almost zero information. Do you actually want a discussion or is this a "boo, go away" comment?

Support the movement against extinction risk due to AI
samuelshadrach · 13d · 32

They’re a force you cannot align. Like the sorcerer’s apprentice animating a broomstick to help carry water, once called up, you cannot tell them to stop.

I agree the public is not going to take orders from you on what they should think or do. You need to use persuasion like everyone else. I agree people will also do lots of counterproductive and unproductive things in response to info about AI risk.

I think we haven't found the crux yet.

Support the movement against extinction risk due to AI
samuelshadrach · 13d · 10

(Waiting for the downvotes! And the impending rate limit!)

[Anthropic] A hacker used Claude Code to automate ransomware
samuelshadrach · 13d · 10

@Shankar, reacting to your emote: this claim feels trivially obvious to me. If you have a counterargument, you can bring it up.

Of course law is decided by (leaders representing) a large group of people who try to encode their morality and their conflict-resolution processes into something more formal.

Yes, you can nitpick the details, but that is the broad overview.

A country with a very different morality will have very different laws (for example, an Islamic state will have different laws from a Western one).

More posts

  • 18 · Advice for tech nerds in India in their 20s · 5d · 1
  • -26 · Support the movement against extinction risk due to AI · 13d · 8
  • 9 · Prolific.com survey on AI pause · 1mo · 3
  • 1 · 1 week fast on livestream for AI xrisk · 2mo · 2
  • 21 · Open Source Search (Summary) · 3mo · 1
  • 2 · Real-time voice translation · 3mo · 0
  • -3 · US Govt Whistleblower guide (incomplete draft) · 4mo · 15
  • 3 · Open-source weaponry · 4mo · 0
  • 2 · Intelligence explosion · 5mo · 0
  • 2 · SecureDrop review · 5mo · 0