The case for creating unaligned superintelligence

by Yair Halberstadt
2nd Apr 2025
1 min read
April Fool's · AI · Personal Blog

Roko's basilisk posits

> an otherwise benevolent artificial superintelligence (AI) in the future that would punish anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement.

Now this is obviously correct, so there's no need to get too into it.

But if there's a 10% chance that we end up with a benevolent superintelligence in the future, there's a 50% chance we end up with a malevolent one. And the malevolent one has exactly the same incentives.

So if you want to spend an eternity in heaven instead of hell, the correct thing to do is put all of your effort into creating the most evil superintelligence you can.

This essay is where I start.

Happy April 1st if you happen to be on the US west coast![1]

  1. ^

    I'm not...