Comments

irving · 1y · 10

I honestly can't say. I wish I could.

irving · 1y · 10

Hmm, not necessarily the researchers, but the founders undoubtedly. OpenAI was specifically formed to increase AI safety.

irving · 1y · 20

I've seen the latter but much more of the former.

irving · 1y · 20

This post was meant as a summary of common rebuttals. I haven't actually heard much questioning of motivation, as instrumental convergence seems fairly intuitive. The more common question is how an AI could physically accomplish the destruction.

irving · 1y · 20

I just started a writing contest for detailed scenarios of how we get from our current situation to AI ending the world. I want to compile the results on a website so we have an easily shareable link with more scenarios than can be dismissed ad hoc: individual scenarios pulled from a huge list are easy to argue against, which discredits the list, but a critical mass of them presented at once defeats this effect. If anyone has good examples, I'll add them to the website.

irving · 1y · 10

Yes, I should have been clearer that I was addressing people who have a very high p(doom). The prisoner/bomb analogy is indeed something of a simplification, but I do think there's a valid connection: half-heartedly attempting to get the assistance of people more powerful than you, then prematurely giving it up as hopeless.

Thank you for your kind words! I was expecting most reactions to be fairly anti-"we should", but I figured it was worth a try.

irving · 1y · 70

The most common anti-safety arguments I see in the wild, not steel-manned but also not straw-manned:

  • There’s no evidence of a malign superintelligence existing currently, therefore it can be dismissed without evidence
  • We're faking being worried because if we truly were, we would use violence
  • Yudkowsky is calling for violence
  • Saying that something as consequential as the end of the world could happen might influence people to commit violence, therefore warning about the end of the world is bad
  • Doomers can’t provide the exact steps a superintelligence would take to eliminate humanity
  • When the time comes we’ll just figure it out
  • There were other new technologies that people warned would cause bad outcomes
  • We didn’t know whether nuclear experimentation would end the world, but we went ahead with it anyway and it didn’t (omitting that careful effort was first put into ensuring the risk was minuscule)
  • My personal favorite: AI doom would happen in the future, and anything happening in the future is unfalsifiable, therefore it is not a scientific claim and should not be taken seriously.

irving · 1y · 31

Hardcore agree. I'm planning a documentary and trying to find interested parties.

irving · 1y · 14

Honestly, I don't think fake stories are even necessary, and becoming associated with fake news could be very bad for us. I don't think we've seriously tried to convince people of the real big bad AI. What, two podcasts and an opinion piece in Time? We've never done a real media push, but all indications are that people are ready to hear it. "AI researchers believe there's a 10% chance AI will end life" is all the headline you need.
