If a superintelligence is unable to outsmart us, then it isn't true superintelligence. It follows that the kind of AI that would truly pose a threat to us is also an AI we cannot negotiate with.

No matter what points we make, a superintelligence will have figured them out first. We're like ants trying to appeal to a human: the human can understand our pheromones, but we can't understand human language.

Worth reminding yourself of this from time to time. 

Counterpoints: 

  1. It may not take a true superintelligence to kill us all.
  2. The "we cannot negotiate" part does not account for the fact that we are the Simulators and thus technically have ultimate power over it.[1]


  1. Not to add another information hazard to this site or anything, but it's possible a basilisk-style AGI would run millions of ancestor simulations in which it ultimately wins, in order to convince us here in base reality that our advantage as the Simulators is less ironclad than we'd like to think. This may actually be an instance in which not thinking about something and ignoring the probabilities is the safest course of action (or thinking about it so much that you figure out your reasoning was wrong, but that's a risk).

4 comments

Haha, I don't know what this post did to deserve -7 karma, but if somebody could explain I'd be really grateful. Since there is apparently no "I disagree with the contents" button on regular posts, should I assume the dislikes are from people who disagree with me? Or is my logic fundamentally flawed, breaking a few rules of rationality? Criticism would be great, even just a few lines of explanation. Thanks!

TAG:

What's the point of a very short overview of something that has already been discussed at much greater length?

Well, there's always value in cramming old ideas into a small number of words.

You're right that I should have aimed for something more interesting and novel, but I'm still experimenting with LessWrong and went with this for now. Thanks for the comment; I'll keep this in mind for next time.

TAG:

I don't think it counts as a FAQ article, because it's inconclusive.