[RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm.
[EDIT 4/10/2022: This post was rash and ill-conceived: it did not have clearly defined goals, nor did it meet the vaguely defined ones. I apologize to everyone on here; you should probably update accordingly about my opinions in the future. In retrospect, I was trying to express an emotion of exasperation related to the recent news I mention below, which I do think has shortened timelines broadly across the ML world. While I stand by my claims on roughly-human AGI probability, I no longer stand by my statement that "we should pull the fire-alarm": that is unlikely to lead to the calculated, concerted effort we need to maximize our odds of successful coordination. Nor is it at all clear, given the timeline mechanism I described here, that AGI built in this way would be able to quickly FOOM, the primary point of concern for such a fire alarm. I've left the rest of the post here as a record.]

Based on the past week's worth of papers, it seems very possible (>30%) that we are now in the crunch-time section of a short-timelines world, and that we have 3-7 years until Moore's law and organizational prioritization put these systems at extremely dangerous levels of capability.[1]

The papers I'm thinking about:

* https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html
* https://twitter.com/andyzengtweets/status/1512089759497269251
* https://openai.com/dall-e-2/
* https://twitter.com/hausman_k/status/1511152160695730181

It seems altogether possible that it would not take long, given these advances and a moderate amount of iteration, to create an agentic system capable of long-term decision-making.

If you want to think of this as the public miscalibrated Bayesian-updating of one person, you should feel free to do that. If this was a conclusion you reached independently, though, I want to make sure we coordinate.
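To make the Bayesian-updating framing concrete, here is a toy sketch of the kind of update I mean. The prior and likelihood ratio below are hypothetical placeholders, not estimates from this post; they are chosen only to illustrate how a modest prior can clear the >30% figure above after one week of evidence like this.

```python
# Toy Bayesian update on "we are in a short-timelines world".
# All numbers are hypothetical placeholders, not estimates from this post.

prior = 0.15  # assumed prior P(short-timelines world)

# Assumed likelihood ratio: how much more likely a week of results like
# PaLM / Socratic Models / DALL-E 2 is under short timelines than under
# long timelines.
likelihood_ratio = 3.0

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"P(short timelines | this week's papers) ~ {posterior:.2f}")  # ~0.35
```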
For those who haven't grappled with what actual advanced AI would mean, especially if many different organizations can achieve it:

* Whether lucky or intentional, exploit embargoes artificially weight the balance towards the defender; we should create a strong norm of providing defender access first in AI.