If we achieve AGI-level performance using an LLM-like approach, the training hardware will be capable of running on the order of millions of concurrent instances of the model.
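To make the arithmetic behind this estimate concrete, here is a rough back-of-envelope sketch in Python. All of the numbers (parameter count, training tokens, run length, tokens per second per instance) are illustrative assumptions of mine, not figures from the post; the point is only that dividing the training cluster's sustained FLOP/s by a single instance's inference FLOP/s lands around 10^6.

```python
# Back-of-envelope sketch (illustrative numbers, not from the post).
# Standard approximations: training cost ~ 6 * N * D FLOP (N = params, D = tokens),
# generating one token costs ~ 2 * N FLOP.

params = 1e12                  # assumed model size: 1 trillion parameters
tokens = 2e13                  # assumed training data: 20 trillion tokens
train_flop = 6 * params * tokens          # ~1.2e26 FLOP total

train_days = 100                          # assumed length of the training run
cluster_flops = train_flop / (train_days * 86_400)   # sustained FLOP/s of the cluster

tokens_per_sec = 10                       # assumed generation speed per instance
instance_flops = 2 * params * tokens_per_sec         # FLOP/s for one running instance

concurrent_instances = cluster_flops / instance_flops
print(f"{concurrent_instances:,.0f} concurrent instances")
# prints roughly 700,000 with these assumptions, i.e. on the order of a million
```

Different assumptions shift the figure by a small factor either way, but the ratio of training compute to per-instance inference compute keeps it in the hundreds of thousands to millions.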
Definitions
Although there is some debate about the definition of compute overhang, I believe that the AI Impacts definition matches the original use, and I prefer it: "enough computing hardware to run many powerful AI systems already exists by the time the software to run such systems is developed". A large compute overhang leads to additional risk due to faster takeoff.
I use the types of superintelligence defined in Bostrom's Superintelligence book (summary here).
I use the definition of AGI in this Metaculus question. The adversarial Turing test portion of the definition is...
This scenario now seems less likely given the OpenAI o-series. It seems we might reach AGI with high inference compute costs at first, which would mean a much smaller overhang.