Kevin Van Horn
Karma: 4030

Comments

What Evidence Is AlphaGo Zero Re AGI Complexity?
Kevin Van Horn · 8y · 40

If there are more than a few independent short-term extinction scenarios (from any cause), each with a probability higher than 1%, then we are in trouble: their combined probability adds up to a significant chance of doom.
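
A minimal sketch of how such risks combine, assuming independence; the 1% figure and the scenario counts below are purely illustrative numbers, not claims from the comment:

```python
# Probability that at least one of several independent extinction
# scenarios occurs: P(at least one) = 1 - P(none), assuming independence.
def p_any(per_scenario_prob: float, n_scenarios: int) -> float:
    return 1.0 - (1.0 - per_scenario_prob) ** n_scenarios

print(p_any(0.01, 5))   # ~0.049: five 1% risks give roughly a 5% chance
print(p_any(0.01, 10))  # ~0.096: ten 1% risks give nearly a 10% chance
```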

As far as resources go, even if we threw 100 times the current budget of MIRI at the problem, that would be $175 million, which is

- 0.005% of the U.S. federal budget,

- 54 cents per person living in the U.S., or

- 2 cents per human being.
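
A quick sanity check of those figures; the MIRI budget, federal budget, and population numbers below are my rough assumptions for circa 2017, not values taken from the comment:

```python
# Check the $175 million figure against the three comparisons above.
# All inputs are rough, assumed circa-2017 values.
total = 100 * 1.75e6          # 100x an assumed ~$1.75M MIRI annual budget
federal_budget = 3.5e12       # assumed U.S. federal budget (~$3.5 trillion)
us_population = 325e6         # assumed U.S. population
world_population = 7.5e9      # assumed world population

print(f"{total / federal_budget:.3%} of the federal budget")  # ~0.005%
print(f"${total / us_population:.2f} per U.S. resident")      # ~$0.54
print(f"${total / world_population:.3f} per human being")     # ~$0.023
```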

What Evidence Is AlphaGo Zero Re AGI Complexity?
Kevin Van Horn · 8y · 20

Arguing about the most likely outcome is missing the point: when the stakes are as high as the survival of the human race, even a 1% probability of an adverse outcome is very worrisome. So my question to Robin Hanson is this: are you 99% certain that the FOOM scenario is wrong?

There's No Fire Alarm for Artificial General Intelligence
Kevin Van Horn · 8y · 10

The relevant question is not "How long until superhuman AI?" but "Can we solve the value alignment problem before that time?" The value alignment problem looks very difficult; it probably requires figuring out how to create bug-free software... so I don't expect a satisfactory solution within the next 50 years. Even if we knew for certain that superhuman AI wouldn't arrive for another 100 years, it would make sense to put serious effort into solving the value alignment problem now.
