LESSWRONG

AI · Frontpage
At what point will we know if Eliezer’s predictions are right or wrong?

by anonymous123456
18th Jul 2022
6 comments, sorted by top scoring
Shmi · 3y

We won't know if he is right, because "no fire alarm..." etc. But a "wrong" scenario might be something like "look at this ML marvel writing all the code ever needed, being a perfect companion, proving every theorem, fixing aging, inventing interstellar propulsion, and yet indifferent to world domination, just chilling".

JBlack · 3y

Yes, that would qualify. Even if it turns out that the AI was actually treacherous and kills us all later, that would still fall under a class of world-models he disclaimed.

Likewise, there was a disagreement about the speed of world GDP growth before transformative AI.

We might know that Eliezer is right, for a very short time before we all die.

ChristianKl · 3y

As far as I understand his model, there might be a few AGIs that work like this, but if you have nine AGIs like this and then someone creates an AGI that optimizes for world domination, that AGI is likely going to take over.

A few good AGIs that don't make pivotal moves don't end the risk.

kithpendragon · 3y

I don't know that I've seen the original bet anywhere, but Eliezer's specific claim is that the world will end by January 1, 2030. Here's what I could find quickly on the topic:

  • https://www.lesswrong.com/posts/ZEgQGAjQm5rTAnGuM/beware-boasting-about-non-existent-forecasting-track-records#CianX64pyMcrQKmtF
  • https://www.lesswrong.com/posts/X3p8mxE5dHYDZNxCm/a-concrete-bet-offer-to-those-with-short-ai-timelines#2p7FqFsi8e9ePYs3q
JBlack · 3y

Yes, he did make a bet at approximately even odds for that date.

The problem is that even if you took it at epistemic face value as a prediction of probability greater than 50%, survival past 2030 doesn't mean that Eliezer was wrong, just that we aren't in his worst-case half of scenarios. He does have crisper predictions, but not about dates.

Noosphere89 · 3y

If we can go a decade past the first AGI without a major catastrophe or existential risk, that will be the strongest sign that we were wrong about how hard alignment is, and it would mean Eliezer Yudkowsky's predictions are significantly wrong.


Eliezer makes strong claims about AGI. I think they are interesting claims. For the sake of healthy epistemics, I am curious about preregistering some predictions.

My question is: by what year will we know if Eliezer is right or wrong? In particular, by what year, if the Singularity hasn’t happened yet, will we conclusively know that the imminent doom predictions were wrong?

I’m particularly interested in Eliezer’s answers to these questions.

Thank you! — a purveyor of good epistemics