samuelshadrach

Comments
[Anthropic] A hacker used Claude Code to automate ransomware
samuelshadrach · 1mo

I support more advancement in cyberhacking capabilities so that companies and governments are incapable of keeping secrets. Secrecy enables them to act against the wishes of the majority to an extent that wouldn't otherwise be possible.

xpostah's Shortform
samuelshadrach · 2mo

Is the possibility of hyperpersuasion the crux between me and a majority of LW users?

By far my strongest argument against deploying both human genetic engineering and superintelligence aligned to its creators is the possibility of a small group of people taking over the world via hyperpersuasion.

xpostah's Shortform
samuelshadrach · 2mo

If you already have lots of people's attention (for instance because you have a social media following or high-status credentials) and you're a US/UK citizen, your best available plan might be to run a political campaign with AI pause as the agenda.

You’re unlikely to win the election, but it’ll likely shift the Overton window and give people hope that change is possible.

For most people, having a next step after "OK, I read the blog posts and I'm convinced, now what?" is important. Voting or campaigning for you could be that next step.

Daniel Kokotajlo's Shortform
samuelshadrach · 8h

I am glad you at least recognise the benefits of open source.

My preference order is:

  • For capabilities of AI labs: Ban AI > Open source AI > Closed source AI
  • For values and decision-making processes of people running AI labs: Open source (i.e. publicly publish) everything

As you say, I think open source today will at least help build the proof required to convince everyone to ban AI tomorrow.

I think we should go further: instead of hoping for a benevolent leader to livestream the lab by choice, we should incentivise whistleblowers and cyberattackers to get the data out by any means necessary.

See also: Whistleblower database, Whistleblower guide

Messy on Purpose: Part 2 of A Conservative Vision for the Future
samuelshadrach · 2d

Makes sense!

Messy on Purpose: Part 2 of A Conservative Vision for the Future
samuelshadrach · 2d

I like that you're discussing the question of purpose in a world where intelligences way smarter than you are doing all the useful knowledge work, and you are useless to your civilisation as a result. The frontier intelligences might have purpose (or they might not even care whether they do), but you might not.

  • I was disappointed by Bostrom's answer of "let's just create more happy beings".
    • Also, what makes those beings happy in the first place? Having the purpose of making even more beings? I don't actually care that much about populating the universe with beings whose only purpose is self-replication.
    • His entire book "Deep Utopia" seems not to provide a good answer.
  • I was also disappointed by Yudkowsky's answer of "let's ask the AI to run a gazillion mind simulations and have it figure it out on our behalf".
    • This might be a good answer, but it is too meta: it does not tell me what the final output of such a process will be, in a way that I, as a human from 2025, can relate to.
  • I like Holden Karnofsky's take on why describing utopia goes badly. I was not very satisfied with his answer to what utopia looks like either.
    • What is the use of social interactions if there's no useful project people can do together? Art and knowledge will both be produced better by frontier intelligence; anything these people produce will be useless to society.
    • For that matter, why will people bother with social interactions with other biological humans anyway? The AI will be better at that too.
    • (Mandatory disclaimer: I think Karnofsky is accelerating us towards human extinction; my endorsing an idea of his does not mean I endorse his actions.)
Jan_Kulveit's Shortform
samuelshadrach · 3d

The post spends most of its time arguing that ASI is inevitable and only one final paragraph arguing that ASI is good. If you actually believed ASI was good, you would probably spend most of the post arguing that. Arguing that ASI is inevitable seems exactly like the sort of cope you would reach for if you thought ASI was bad, were doing a bad thing by building it, and had to justify it to yourself.

LLMs one-box when in a "hostile telepath" version of Newcomb's Paradox, except for the one that beat the predictor
samuelshadrach · 3d

Try the gpt-5-pro API in the playground; gpt-5 is worse. Use the API, not the consumer frontend.
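
If you want to reproduce this outside the playground, here is a minimal sketch of what calling it over the API might look like, assuming the OpenAI Python SDK's Responses interface and that your key has access to the gpt-5-pro model named above; the prompt is only an illustrative placeholder.

```python
# Minimal sketch: query gpt-5-pro over the API rather than the consumer frontend.
# Assumes OPENAI_API_KEY is set in the environment and the account has access
# to the "gpt-5-pro" model; swap in whatever Newcomb-style prompt you are testing.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5-pro",
    input="You face a hostile-telepath version of Newcomb's Paradox. Do you one-box or two-box? Explain.",
)

print(response.output_text)
```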

abramdemski's Shortform
samuelshadrach · 4d

Trajectory 3 is the obvious natural conclusion. He who controls the memes controls the world. AI-invented religions and political ideologies are coming soon. There are already billions of dollars invested in propaganda; that money will now get invested here.

I support a ban on AI research to prevent this outcome.

xpostah's Shortform
samuelshadrach · 4d

A world with very cheap materials and energy, but not cheap intelligence, will still have conflict.

People will still have (a) differences in aesthetics and (b) differences in their best-guess answers to moral and philosophical questions. They will almost certainly still try to accumulate all available resources in service of their ideology. No finite amount of resources will satisfy people. Even risking catastrophic outcomes (like nuclear war) could still be on the table.

Cheap intelligence is what allows you to start resolving the questions that lead to conflict in the first place, for instance by running gazillions of world simulations on gazillions of mind configurations.

Posts

Why am I not currently starting a religion around AI or similar topics? · 1d
Day #14 Hunger Strike, on livestream, In protest of Superintelligent AI · 14d
Samuel Shadrach Interviewed · 17d
Day #8 Hunger Strike, Protest Against Superintelligent AI · 20d
Day 16 Hunger Strike - Guido Reichstader Interviewed · 21d
Day #1 Fasting, Livestream, In protest of Superintelligent AI · 1mo
Advice for tech nerds in India in their 20s · 1mo
Support the movement against extinction risk due to AI · 1mo
Prolific.com survey on AI pause · 2mo
1 week fast on livestream for AI xrisk · 3mo