Comments

I don't doubt that slow take-off is risky. I rather meant that FOOM is not guaranteed, and that the risk from a not-immediately-omnipotent AI may be more like a catastrophic, painful war.

Isn't there a certain amount of disagreement about whether FOOM is necessarily what will happen?

I think the comparison to cancer etc. is helpful, thanks.

The suicide option is a somewhat strange but maybe helpful perspective, as it simplifies the original question by splitting it:

  1. Do you consider a life worth living if it ends in a situation in which suicide is the best option?
  2. How likely is this to be the case for most people in the relatively near future (including because of AI)?

Of course everyone can apply their own criteria, but:

  1. I think it is a bit weird to downvote a question unless the question is extremely stupid. I also would not know a better place to ask it, except maybe the EA Forum.
  2. This is a question about the effects of unaligned AGI, and about what kind of world to expect from it. For me, that is at least relevant to the question of how I should try to make the world better.
  3. What do you mean by "AI tag"?

Thank you for your comment. It is very helpful. But may I ask what your personal expectations are regarding the world in 2040?

Oh, LessWrong people, please explain to me why asking this question got a negative score.

Thanks, but the "helping" part would only help if the kids grow old enough and are talented and willing to do so, right? Also, if I were born to become cannon fodder, I would be quite angry, I guess.

Interesting perspective. So you think the lives were unpleasant on average, but still good enough?

Thanks, but I am not convinced that the first AI that turns against humans and wins has to be one that is extremely powerful in all dimensions. Skynet may be cartoonish, but why shouldn't the first AI that moves against humankind be one that controls a large part of the US nukes while being unable to manipulate germs?