LESSWRONG
samuelshadrach

DMs open

My website

My views on Lesswrong

Donate Monero

Comments
[Anthropic] A hacker used Claude Code to automate ransomware
samuelshadrach · 2mo

I support further advances in cyberhacking capabilities so that companies and governments become incapable of keeping secrets. Secrecy lets them act against the wishes of the majority to an extent that wouldn't otherwise be possible.

xpostah's Shortform
samuelshadrach · 3mo

If you already have lots of people's attention (for instance, because you have a social media following or high-status credentials) and you're a US/UK citizen, your best available plan might be to run a political campaign with an AI pause as the agenda.

You’re unlikely to win the election, but it’ll likely shift the Overton window and give people hope that change is possible.

For most people, having a next step after “ok I read the blogposts and I’m convinced, now what?” is important. Voting or campaigning for you could be that next step.

xpostah's Shortform
samuelshadrach · 5d

Is anyone making psychological profiles of the heads of the AI labs? I know profiling is partly pseudoscience, but I also think there's some useful work to be done here.

xpostah's Shortform
samuelshadrach · 6d

I don't know if this is true, but let's say it is. In my mind, this is still an improvement over the current situation, in which the public stays unaware while AI lab heads and US intelligence heads race to build the Machine God, succeed, and then either cause human extinction or establish an immortal dictatorship.

What am I missing?

leogao's Shortform
samuelshadrach · 7d

> it's perfectly reasonable for people to be selfish and care about superintelligence happening during their lifetime

Yes, people are selfish; that is why you should sometimes be ready to fight against them. Point (a) is not a disagreement with Ben.

> then for some reason the entire world modern order collapses

This is low-probability on a timescale of decades, but it's an argument people can use to justify their self-serving desire for immortality as somehow altruistic.

xpostah's Shortform
samuelshadrach · 8d

Maybe an unpopular opinion: I think if Moskovitz knows he can't become a skilled politician himself, he should spend a few years grooming a successor who can, and then transfer his entire net worth to that person with no conditions attached. Typically such a successor is one's child, but in this case there's no time for that (due to short AI timelines).

xpostah's Shortform
samuelshadrach · 8d

I think Moskovitz's funding of malaria nets and his funding of AI policymakers both stem from the same mistake: thinking like a billionaire instead of thinking like a politician.

The actual bottleneck to fixing both African politics and US-China geopolitics around AI is not money but political leaders who are skilled in public persuasion, coalition building, finding successors, inventing ideology, and so on.

xpostah's Shortform
samuelshadrach · 9d

> Appealing to outgroup fear just gets you a bunch of paranoid groups who never talk to one another.

Why is this a problem? Also, remember that ideally they all share common outgroups.

Posts

Samuel x Bhishma - Superintelligence by 2030? · 7d
My views on Lesswrong · 14d
Letter to Heads of AI labs · 18d
Why am I not currently starting a religion around AI or similar topics? · 19d
Day #14 Hunger Strike, on livestream, In protest of Superintelligent AI · 1mo
Samuel Shadrach Interviewed · 1mo
Day #8 Hunger Strike, Protest Against Superintelligent AI · 1mo
Day 16 Hunger Strike - Guido Reichstader Interviewed · 1mo
Day #1 Fasting, Livestream, In protest of Superintelligent AI · 1mo
Advice for tech nerds in India in their 20s · 2mo