Yep, the 'gradual boost' section is the one for this. Also my historical work on the compute-centric model (see link in post) models gradual automation in detail.
So if you've fully ignored the fact that pre-ASARA systems have sped things up, then accounting for that will make takeoff less fast, because by the time ASARA comes around you'll have already plucked much of the low-hanging fruit of software progress.
But I didn't fully ignore that, even outside of the gradual boost section. I somewhat adjusted my estimates of r and of "distance to effective limits" to account for intermediate software progress. Then, in the gradual boost section, I got rid of these adjustments as they weren't needed. It turned out that takeoff was then faster. My interpretation (as I say in the gradual boost section): dropping those adjustments had a bigger effect than changing the modelling.
To put it another way: if you run the gradual boost section but literally leave all the parameters unchanged, you'll get a slower takeoff.
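For intuition, here's a crude toy simulation (my own sketch, not the post's actual model, and all parameter values are made up) of why dropping those downward adjustments speeds up modelled takeoff: a higher r and more distance to effective limits both keep the returns to software R&D high for longer.

```python
# Crude toy sketch (NOT the post's actual model): software research speed
# scales with the current software level, and the effective returns r_eff
# decay linearly as progress approaches the effective limit.
def time_for_doublings(r0, limit_doublings, target=5, dt=1e-4):
    """Time for software to double `target` times, given initial returns r0
    and `limit_doublings` of total headroom before effective limits."""
    d, t = 0.0, 0.0  # doublings achieved so far; elapsed time (arbitrary units)
    while d < target:
        r_eff = r0 * (1 - d / limit_doublings)  # returns fall near the limit
        d += (2 ** d) * r_eff * dt              # research speed scales with software level
        t += dt
    return t

# Unadjusted parameters (higher r, more headroom) vs adjusted-down ones:
unadjusted = time_for_doublings(r0=1.2, limit_doublings=10)
adjusted = time_for_doublings(r0=0.8, limit_doublings=7)
# The unadjusted run reaches the same 5 doublings sooner, i.e. a faster takeoff.
```

The only point of the sketch is the direction of the effect: at every step the unadjusted trajectory has strictly higher effective returns, so it finishes first.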
Forethought is hiring!
You can see our research here.
You can read about what it’s like to work with us here.
We’re currently hiring researchers, and I’d love LW readers to apply.
If you like writing and reading LessWrong, I think you might also enjoy working at Forethought.
I joined Forethought a year ago, and it’s been pretty transformative for my research. I get lots of feedback on my research and great collaboration opportunities.
The median views of our staff are often different from the median views of LW. E.g. we probably have a lower probability on AI takeover (though I'm still >10% on that). That's part of the reason I'm excited for LW readers to apply. I think a great way to make intellectual progress is via debate, so we want to hire people who strongly disagree with us and have their own perspectives on what's going on in AI.
We’ve also got a referral bounty of £10,000 for counterfactual recommendations for successful Senior Research Fellow hires, and £5,000 for Research Fellows.
The deadline for applications is Sunday 2nd November. Happy to answer questions!
I also work at Forethought!
I agree with a lot of this post, but wanted to flag that I would be very excited for people doing blue-skies research to apply, and I want Forethought to be a place that's good for that. We want to work on high-impact research, and we understand that sometimes means doing things where it's unclear up front whether they will bear fruit.
(Fyi the previous comment from "Tom" was not actually from me. I think it was Rose. But this one is from me!)
Worth noting that the "classic" AI risk story also relies on human labour no longer being needed. For AI to seize power, it must be able to do so without human help (hence human labour isn't needed), and for it to kill everyone, human labour must not be needed to make new chips/robots.
Thanks, I like this!
Haven't fully wrapped my head around it yet, but will think more.
One quick minor reaction is that I don't think you need IC stuff for coups. To give a not very plausible but clear example: a company has a giant intelligence explosion and then can make its own nanobots to take over the world. That doesn't require broad automation, changes to governments' incentives to serve their people, etc.
Do you think the same for a company within the US? That with a 6-month lead (or even just a 3-month lead, going off recent trends) it would find a way to sabotage other companies?
I think it's plausible, but:
(I think a re-emerging crux here might be the power of AI's persuasion + strategy skills)
Thanks for this!
Yep, I agree that scenario gets >5%!
Agree ASML is one of the trickiest cases for OpenBrain. Though I imagine there are many parts of the semiconductor supply chain with TSMC-level monopolies (less extreme than ASML's). And I don't think hacking will work. These companies already protect themselves from this, knowing that it's a threat today: data is stored on local physical machines that aren't internet-connected, and in people's heads.
And I think you'll take many months of delay if you go to ASML's suppliers rather than ASML. ASML built their factories using many years' worth of output from suppliers. Even with massive efficiency gains over ASML (despite not knowing their trade secrets), it will take you months to replicate.
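To make the "months" claim concrete with made-up numbers (mine, not figures from the thread): if ASML's installed base embodies several years of supplier output, even a large efficiency multiple still leaves months of replication time.

```python
# Back-of-envelope with hypothetical numbers (not real figures for ASML):
def months_to_replicate(years_of_supplier_output, efficiency_multiple):
    """Months to rebuild a factory base that embodies `years_of_supplier_output`
    of inputs, if you can use those inputs `efficiency_multiple`x more efficiently."""
    return years_of_supplier_output * 12 / efficiency_multiple

# E.g. 5 years of embodied supplier output and a 10x efficiency gain
# still implies roughly 6 months of replication time.
delay = months_to_replicate(years_of_supplier_output=5, efficiency_multiple=10)
```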
I agree that the more OpenBrain have super-strategy and super-persuasion stuff, the more likely they can capture all the gains from trade. (And military pressure can help here too, like with colonialism.)
Also, if OpenBrain can prevent anyone else from developing ASI for years, e.g. by sabotage, I think they have a massive advantage. Then ASML loses the option of just waiting a few months and trading with someone else. I think this is your strongest argument, tbh.
Biggest cruxes imo:
I'm curious how confident you are that a company with a 6-month lead could outgrow the rest of the world by itself?
As discussed on the other thread, I'm imagining other countries have access to powerful AI as well. Or even equally good AI, if they are sold access or if another actor catches up after the SIE fizzles (as it must).
Great - I agree that if you can get to >50% physical capital (as valued by how useful it is for the new supercharged economy) within 6 months of the SIE fizzling out, then you can outgrow the world in the scenario you describe. Sounds like you're more bullish on a very fast industrial explosion than I am -- I think it's hard to know whether we'll get physical capital doubling times of <3 months within a few months of SIE. On longer timelines to an SIE, this seems more plausible as there's more time for robotics to improve in the meantime.
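As a sanity check on that 50%-in-6-months threshold, here's a toy share calculation with hypothetical numbers (all figures are mine, chosen only for illustration):

```python
# Toy model with made-up numbers: my physical capital doubles every
# `my_doubling_months`, the rest of the world's grows at a slow monthly rate,
# and I track my share of total capital (as valued for the new supercharged
# economy) after `months` have passed.
def capital_share(initial_share, my_doubling_months, world_monthly_growth, months):
    mine = initial_share * 2 ** (months / my_doubling_months)
    rest = (1 - initial_share) * (1 + world_monthly_growth) ** months
    return mine / (mine + rest)

# Starting from a 20% share, with the rest of the world growing 1% per month:
# a 3-month doubling time falls just short of 50% after 6 months,
# while a 2-month doubling time clears it comfortably.
just_short = capital_share(0.20, 3, 0.01, 6)  # ~0.49
clears = capital_share(0.20, 2, 0.01, 6)      # ~0.65
```

This is why the doubling-time question matters so much: around these (made-up) starting shares, the difference between 2-month and 3-month capital doubling is the difference between clearing the >50% threshold within 6 months and missing it.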
Another blocker -- cooperation from pre-existing companies
I want to mention another blocker to someone outgrowing the world via a very fast industrial explosion. You'll be able to go faster if you draw heavily on the knowledge, skills and machines of existing companies. So you'll need their cooperation. But they might not cooperate unless you give them a big fraction of the economic value. Which would prevent you from outgrowing the world.
Of course, you could "go it alone" and leapfrog the companies that won't do business with you. But then it's even harder to do a very fast industrial explosion. E.g. take ASML and TSMC. I predict it will be much more efficient for the leader in AI to work with those companies (e.g. selling them API access to ASI, or acquiring them) than to leapfrog. If they try to leapfrog, the laggard can team up with them and recover some of the lost lead.
Concretely, let's say OpenBrain gets to AGI first. Then its AIs are combined with ASML's existing knowledge to create EUV++ machines. Are the profits from these machines going to ASML or to OpenBrain? Seems unclear to me. ASML's bargaining position might be strong -- they have a strong monopoly and can wait 6 months and deal with the laggard instead.
So maybe other companies have a lot of economic leverage post-AGI, and they convert that to economic value. And, as discussed, maybe other governments tax the activities happening in their domains.
Will $ be helpful?
You suggest these "mere $" might not be very useful if ASI has colonised space and doesn't care. But $ can be used to buy AI and buy crazy new physical tech. So if other actors have lots of $, they can convert that into the ability to colonise the stars themselves.
Q about your Israel scenario?
Are you basically assuming a merge between the lab and govt here? And then a nation-wide effort to utilise Israel's existing human and physical capital to kick-start the industrial explosion within its own borders?
Asking because, absent a merge, I'm still not seeing a reason for all actors outside Israel to be left in the dust. The AI company would trade with some Israeli companies and (many more) non-Israeli ones. In which case, Israel per se would only outgrow the world if the AI company outgrows the world by itself.
Yeah, I think one of the biggest weaknesses of this model, and honestly of most thinking on the intelligence explosion, is not carefully thinking through the data.
During an SIE, AIs will need to generate data themselves, by doing the things that human researchers currently do to generate data. That includes finding new untapped data sources, creating virtual envs, creating SFT data themselves by doing tasks with scaffolds, etc.
On one hand, it seems unlikely they'll have anything as easy to work with as the internet. OTOH, internet data is actually very poorly targeted at teaching AIs how to do crucial real-world tasks, so perhaps with abundant cognitive labour you can do much better and make curricula that directly target the skills that most need improving.