People routinely ask, “If AI labs believed AGI was imminent, why are they doing X?” Sometimes this skepticism is valid. But consider OpenAI launching Sora 2 as a TikTok-style feed of AI-generated videos[1]. On the surface, it seems like a waste of developer time, and critics argue it makes the world a worse place to live[13]. Yet the underlying video generation requires modeling physics, space, and causality, capabilities that are essential for AGI.
This dynamic is even clearer in the realm of “AI for Science”, the focus of this article. Several AI science companies have been in the news lately (Periodic Labs[2], Lila[3], Medra[4], etc.), and a common reaction I’ve seen is skepticism: “These are long, hard projects. This isn’t what you’d do if you believed artificial general intelligence (AGI) was imminent.” I believe this view is fundamentally mistaken. These ventures aren’t a hedge against imminent AGI; they represent the most strategic and direct preparation for its arrival.
The key is to understand three points. First, science is so broad that any AI capable of tackling it would essentially be AGI. Second, mastering real-world science is one of the final capabilities needed for AGI. Third, once AGI exists, real-world labs configured for its use will become immensely valuable.