Eliezer makes strong claims about AGI, and I find them interesting. For the sake of healthy epistemics, I am curious about preregistering some predictions.

My question is: by what year will we know if Eliezer is right or wrong? In particular, by what year, if the Singularity hasn’t happened yet, will we conclusively know that the imminent doom predictions were wrong?

I’m particularly interested in Eliezer’s answers to these questions.

Thank you! — a purveyor of good epistemics


We won't know if he is right, because "no fire alarm..." etc. But a "wrong" scenario might be something like "look at this ML marvel writing all the code ever needed, being a perfect companion, proving every theorem, fixing aging, inventing interstellar propulsion, and yet indifferent to world domination, just chilling".

Yes, that would qualify. Even if it turns out that the AI was actually treacherous and kills us all later, that would still fall under a class of world-models he disclaimed.

Likewise, there was a disagreement about the speed of world GDP growth before transformative AI.

We might know that Eliezer is right, for a very short time before we all die.

As far as I understand his model, there might be a few AGIs that work like this, but if you have nine AGIs like this and then someone creates an AGI that optimizes for world domination, that AGI is likely to take over.

A few good AGIs that don't make pivotal moves don't end the risk.

I don't know that I've seen the original bet anywhere, but Eliezer's specific claim is that the world will end by January 1, 2030. Here's what I could find quickly on the topic:

Yes, he did make a bet at approximately even odds for that date.

The problem is that even if you took it at epistemic face value as a prediction of probability greater than 50%, survival past 2030 doesn't mean that Eliezer was wrong, just that we aren't in his worst-case half of scenarios. He does have crisper predictions, but not about dates.

If we can go a decade from the first AGI without a major catastrophe or existential risk, that will be the biggest sign that we were wrong about how hard alignment is, and it will mean Eliezer Yudkowsky's predictions were significantly wrong.