I wonder if there's a palatable middle ground where, instead of banning all AI research, we might get people to agree in advance to ban only dangerous types of ASI.
My current personal beliefs:
- ASI existential risk is very much worth worrying about
- Dangerous ASI is likely the #1 threat to humanity
- In the next few decades, the odds of ASI killing/disempowering us are tiny
- I feel good accelerating capabilities at OpenAI to build technology that helps more people
- I would not support a ban or pause on AI/AGI (because it deprives people of AI benefits, breaks promises, and also accumulates a compute overhang for whenever the ban is later lifted)
- I would happily support a preemptive ban on dangerous ASI
We'll take a serious look at offering this option. We're generally happy to give people more control and transparency; the main worry that might hold us back is death by a thousand papercuts: each extra API option is small on its own, but when dozens of them pile up, the overall experience gets more confusing for developers. But in this case, it might really be worth it. Stay tuned. :)
What would be the best solution for you? A new API parameter that lets you override the date?
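To make this concrete, here's a minimal sketch of the two approaches, assuming the standard Python client: pinning the date in the system message (possible today) versus a dedicated request field (the `override_date` name and the model name are purely hypothetical, just to illustrate the idea).

```python
# Illustrative sketch only: the system-message approach works today; the
# "override_date" field is hypothetical and does not exist in the API.
from openai import OpenAI

client = OpenAI()

# Workaround available today: state the date explicitly in the system message.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Today's date is 2025-01-15."},
        {"role": "user", "content": "What's today's date?"},
    ],
)
print(response.choices[0].message.content)

# Hypothetical alternative under discussion, sketched as a plain request body
# because no such parameter exists today.
hypothetical_body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What's today's date?"}],
    "override_date": "2025-01-15",  # hypothetical new parameter
}
```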
Hi Rose and Tom. One small error I noticed:
>AI chips double in FLOP/$ every ~2 years.
If you look at the top black dots from 2008 onward, the slope is more like doubling every 4-5 years.
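For reference, the arithmetic behind reading a doubling time off a log plot like this (the example numbers below are purely illustrative, not taken from the chart): if FLOP/$ improves by a factor $r$ over $\Delta t$ years, the implied doubling time is

$$T_2 = \frac{\Delta t}{\log_2 r} = \frac{\Delta t \,\ln 2}{\ln r}.$$

For example, an ~8x improvement over 15 years gives $T_2 = 15/3 = 5$ years, whereas doubling every ~2 years over that span would require about $2^{7.5} \approx 180\times$.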
Can you explain what a point on this graph means? Like, if I see Gemini 2.5 Pro Experimental at 110 minutes on GPQA, what does that mean? Does it mean that it takes an average bio+chem+physics PhD 110 minutes to get a score as high as Gemini 2.5 Pro Experimental's?
One potential angle: automating software won't be worth very much if multiple players can do it and profits are competed to zero. Look at compilers - almost no one is writing assembly or their own compilers, and yet the compiler writers haven't earned billions or trillions of dollars. With many technologies, the vast majority of value is often consumer surplus never captured by producers.
In general I agree with your point. If you had evidence that transformative AI was close, you'd strategically delay fundraising as late as possible. However, if you're uncertain about your ability to deliver, about investors' ability to recognize transformative potential, or about the competition, you might hedge and raise sooner than you need to. Raising too early never kills a business. But raising too late always does.
Terrific!
We can already see what people do with their free time when basic needs are met. A number of technologies have enabled new hacks to set up 'fake' status games that are more positive-sum than ever before in history:
Feels likely to me that advancing digital technology will continue to make it easier for us to spend time in constructed digital worlds that make us feel like valued winners. On the one hand, it would be sad if people retreated into fake digital silos; on the other hand, it would be nice if people got to feel like winners more often.
Management consulting firms have lots of great ideas on slide design: https://www.theanalystacademy.com/consulting-presentations/
Some things they do well:
Yes, mostly.
I expect existentially dangerous ASI to take longer than ASI, which will take longer than AGI, which will take longer than powerful AI. Killing everyone on Earth is very hard to do, few are motivated to do it, and many will be motivated to prevent it as ASI’s properties become apparent. So I think the odds are low. And I’ll emphasize that these are my odds including humanity’s responses, not odds of a counterfactual world where we sleepwalk into oblivion without any response.