Review

Philosophy Bear writes:

One way to get a sense of when new technologies might arrive is to ask people working in the area. With high-stakes but hard-to-predict technologies like AI, such surveys serve a special role: they are often used to give existential risk researchers and policymakers a sense of how quickly things may happen. That function is especially important right now, as high-level policy discussions about AI, and even existential risk, are happening around the world.

The last survey of machine learning experts on the future of AI ran from June to August 2022. On average, researchers predicted the arrival of artificial general intelligence (which the survey called HLMI, for human-level machine intelligence) in 2059.

I think that, since then, there has been a bit of a sea change, and there are very strong reasons to commission another survey:

  1. PaLM 540B, ChatGPT, and GPT-4 have all come out since the survey.

  2. There has seemingly been a shift in public views. Plausibly this has affected experts as well, not least because a good chunk of the new information (e.g. GPT-4) might have been new to experts too. I don’t think we would have got these results from the public back in ’22.

  3. There has been an apparent sudden rise in expert concern over AI risk, in many cases due to concern over existential risk from AGI. See, for example, this statement on existential risk from AI signed by numerous prominent researchers in the area. It is very plausible that this rise in expert concern is linked to shorter estimated timelines. Even if AI timelines haven’t shifted, assessing what proportion of experts broadly agree with the statement on existential risk would be valuable.

  4. At the time of the last survey, Metaculus’s aggregate prediction for AGI (a question open to everyone) had just fallen from about 2054 to 2040, roughly a month before the survey, most likely as an update on the launch of PaLM 540B, which was, at the time, a tremendous step forward. It’s plausible experts had not fully internalized this information by the time of the survey, unlike prediction markets, which internalize new information quickly (sometimes too quickly). Since the survey finished, Metaculus’s aggregate prediction has fallen another eight years, from 2040 to 2032, cutting away almost half of the ‘remaining’ time on this model (from roughly 18 years out to roughly 10), most likely due to the launch of ChatGPT and GPT-4. In total, since the start of 2022, the aggregated forecast on Metaculus has fallen from about 30 years away to about 10.

  5. The old estimates may, at this point, not only be worse than they could be, but may actively mislead policymakers into relative complacency. 2059 is a long time away; for one thing, it’s outside the life expectancy of many policymakers. If, as I suspect, expert timelines are now significantly shorter, we need to be able to testify to that ASAP.

  6. Given how quickly events are moving, the survey should probably be annual anyway. I won’t say now is the most crucial time in AI policymaking, but it combines high stakes with a situation that is still relatively malleable.

  7. Especially relative to the benefits, the costs are not that high. It’s just a survey, after all.

I’m currently reading through some materials prepared for policymakers. Updated aggregate expert estimates seem very likely to move the needle on policy discussions around this.

Comments

Is there a group currently working on such a survey? If not, it seems like it wouldn’t be very hard to kickstart one.

Link to mentioned survey: LINK

Maybe someone from AI Impacts could share relevant thoughts (are they planning to re-run the survey soon, would they encourage or discourage another group from doing a similar survey, do they think now is a good time, do they have the right resources for it now, etc.).

Thank you, David Mears, for writing this linkpost, and thank you to Philosophy Bear for writing the original post.

I agree that this survey is urgently needed.