Support the movement against extinction risk due to AI

by samuelshadrach
1st Sep 2025
2 min read

This is a linkpost for https://samuelshadrach.com/raw/text_english_html/connect_with_me/support_me.html
8 comments, sorted by top scoring
the gears to ascension · 4d

Correct me if I misread, but if I understand correctly, these are incredibly bad ideas that would backfire spectacularly, aren't they?

samuelshadrach · 3d

This comment has almost zero information. Do you actually want a discussion or is this a "boo, go away" comment?

the gears to ascension · 3d

Sponsoring cyberattacks will lead to blowback that more than defeats the purpose.
Starting a religion will lead to blowback that more than defeats the purpose.

If you're at the level where you think these are great ideas to suggest, then you need to be at the level where it's obvious to you why both are dead ends. It's the same reason in both cases: there are enormous groups you're attacking by either approach. Generally speaking, groups don't like being attacked, and will attack you back, which usually undoes the benefit of attacking.

In particular, starting a war over AI seems likely to simply lead to AI being pushed into military use much more rapidly.

You said:

thereby destroying privacy of every person on Earth with no exceptions. I am supportive of this

It might be possible that there are aliens out there somewhere with no privacy between any of them, who live in a utopia of some sort; also possibly some aliens somewhere else with no privacy who live in the ultimate dystopia. Here on Earth, there are a lot of hominids who really don't like some other hominids. A sudden loss of privacy would likely result in mass death of some sort, when some sort of war gets started as a response; if not, then a loss of freedom is still plausible.

A sudden loss of privacy would be catastrophic and would cause so much chaos and damage that it's effectively impossible to have even a vague sense of whether it would end up stabilizing on a good world after the easily predictable hell it would create at the beginning. There are certainly many bad things people do that it would make public, but if everyone learns about the terrible stuff at once, is that even negative for the people who do bad things? And what about how malicious people would use it to attack pacifist targets?

samuelshadrach · 3d (comment collapsed)
samuelshadrach · 3d

Anyone wanna set up prediction markets on any of the claims here? I wanna be able to claim some fake internet points if I later get proven right.

samuelshadrach · 4d

(Waiting for the downvotes! And the impending rate limit!)

Richard_Kennaway · 4d

Happy to get in the first one!

I think rousing the hoi polloi would be counterproductive. They’re a force you cannot align. Like the sorcerer’s apprentice animating a broomstick to help carry water, once called up, you cannot tell them to stop.

samuelshadrach · 4d

They’re a force you cannot align. Like the sorcerer’s apprentice animating a broomstick to help carry water, once called up, you cannot tell them to stop.

I agree the public is not going to take orders from you on what they should think or do. You need to use persuasion like everyone else. I also agree people will do lots of counterproductive and unproductive things in response to info about AI risk.

I think we haven't found the crux yet.


The document below may not be updated to the latest version. Click the link above for the latest version.

  • Low effort
    • Like, share, and subscribe to my content, or to people publishing similar content on AI extinction risk. You can share it with your friends, people in media or politics, people working at AI labs or on x-risk, anyone really.
  • High effort
    • Organise a protest in your city around AI extinction risk.
    • Start a social media channel to persuade people at scale about AI extinction risk. Even one video is better than zero, as it motivates other people to also come forward.
  • Most impactful
    • If you have a large social media following or high-status credentials (UK or US citizens only): Run for election with an AI pause as an agenda.
      • (Maybe) Consider supporting UBI as an agenda, as one of the largest groups of single-issue voters in the US is concerned only with losing their own job/income/equity. Example: Andrew Yang (signed the FLI pause letter).
    • Invent a new ideology or religion that can unite humanity around a common position on superintelligent AI, human genetic engineering, and whole brain emulation.
      • IMO superintelligent AI and human genetic engineering are both potentially less than 5 years away, unless people take political action to prevent this. Whole brain emulation is seeing slow and steady progress, so it is maybe 30 years away.
    • If you have >$100k in funds: Sponsor bounties for potential whistleblowers at top AI labs and their supporting govts.
    • If you have >$10M in funds: Sponsor cyberattacks / social engineering from foreign soil against top AI labs and their supporting govts, and publish leaked info publicly.
      • At minimum, publish info relevant to AI risk, such as values, decisions and capabilities of key decision-makers.
      • At maximum, publish all the data that Big Tech has collected on everyone to the public, thereby destroying the privacy of every person on Earth with no exceptions. I am supportive of this, but I'm aware it is a radical stance. Even if you don't agree with me, please at least publish AI-risk-related info.
      • Sell or lease the latest AI capabilities (code, model weights) to other top AI labs worldwide if the profitability of your operation is a significant concern.
      • I'm trying to figure out a better incentive mechanism than donations, but until then, donations will help.

Support me

  • Donate to me
    • Looking for people funding "outside game" strategies for fixing AI extinction risk (like mass protests, social media channels, whistleblowers), not "inside game" strategies (like alignment research at top AI labs, or lobbying US policymakers on behalf of top AI labs). Examples: Pierre Omidyar funding The Intercept, Brian Acton funding Signal, etc.
  • Work with me
    • Provide feedback or do fact-checking for my whistleblower guide. I'm especially interested in people with expertise in US or international law.