Yeah, I think one of the biggest weaknesses of this model, and honestly of most thinking on the intelligence explosion, is not carefully thinking through the data.
During an SIE (software intelligence explosion), AIs will need to generate data themselves, by doing the things that human researchers currently do to generate data. That includes finding new untapped data sources, creating virtual environments, creating SFT data themselves by doing tasks with scaffolds, etc.
On the one hand, it seems unlikely they'll have anything as easy to work with as the internet. OTOH, internet data is actually very poorly targeted at teaching AIs how to do crucial real-world tasks, so perhaps with abundant cognitive labour you can do much better and make curricula that directly target the skills that most need improving.
Yep, the 'gradual boost' section is the one for this. Also my historical work on the compute-centric model (see link in post) models gradual automation in detail.
So if you've fully ignored the fact that pre-ASARA systems have sped things up, then accounting for that will make takeoff less fast, because by the time ASARA comes around you'll have already plucked much of the low-hanging fruit of software progress.
But I didn't fully ignore that, even outside of the gradual boost section. I somewhat adjusted my estimates of r and of "distance to effective limits" to account for intermediate software progress. Then, in the gradual boost section, I got rid of these adjustments as they weren't needed. It turned out that takeoff was then faster. My interpretation (as I say in the gradual boost section): dropping those adjustments had a bigger effect than changing the modelling.
To put it another way: if you run the gradual boost section but literally leave all the parameters unchanged, you'll get a slower takeoff.
Forethought is hiring!
You can see our research here.
You can read about what it’s like to work with us here.
We’re currently hiring researchers, and I’d love LW readers to apply.
If you like writing and reading LessWrong, I think you might also enjoy working at Forethought.
I joined Forethought a year ago, and it’s been pretty transformative for my research. I get lots of feedback on my research and great collaboration opportunities.
The median views of our staff often differ from the median views of LW. E.g. we probably put a lower probability on AI takeover (though I'm still >10% on that). That's part of the reason I'm excited for LW readers to apply. I think a great way to make intellectual progress is via debate, so we want to hire people who strongly disagree with us and have their own perspectives on what's going on in AI.
We’ve also got a referral bounty of £10,000 for counterfactual recommendations for successful Senior Research Fellow hires, and £5,000 for Research Fellows.
The deadline for applications is Sunday 2nd November. Happy to answer questions!
I also work at Forethought!
I agree with a lot of this post, but wanted to flag that I would be very excited for people doing blue-skies research to apply, and I want Forethought to be a place that's good for that. We want to work on high-impact research, and understand that this sometimes means doing things where it's unclear up front whether they will bear fruit.
(Fyi the previous comment from "Tom" was not actually from me. I think it was Rose. But this one is from me!)
Worth noting that the "classic" AI risk story also relies on human labour no longer being needed. For AI to seize power, it must be able to do so without human help (hence human labour isn't needed), and for it to kill everyone, human labour must not be needed to make new chips/robots.
Thanks, I like this!
Haven't fully wrapped my head around it yet, but will think more.
One quick minor reaction: I don't think you need IC stuff for coups. To give a not-very-plausible but clear example: a company has a giant intelligence explosion and then can make its own nanobots to take over the world. That doesn't require broad automation, changes to governments' incentives to serve their people, etc.
Do you think the same for a company within the US? That with a 6-month lead (or even just a 3-month lead, going off recent trends) it would find a way to sabotage other companies?
I think it's plausible, but:
(I think a crux re-emerging here might be the power of AI's persuasion+strategy skills.)
Thanks for this!
Yep, I agree that scenario gets >5%!
Agree that ASML is one of the trickiest cases for OpenBrain, though I imagine there are many parts of the semiconductor supply chain with TSMC-level monopolies (a weaker monopoly than ASML's). And I don't think hacking will work: these companies already protect themselves from it, knowing that it's a threat today, with data stored on local physical machines that aren't internet-connected, and in people's heads.
And I think you'll take many months of delay if you go to ASML's suppliers rather than ASML. ASML built their factories using many years' worth of output from those suppliers. Even with massive efficiency gains over ASML (despite not knowing their trade secrets), it will take you months to replicate.
I agree that the more OpenBrain have super-strategy and super-persuasion stuff, the more likely they can capture all the gains from trade. (And military pressure can help here too, like with colonialism.)
Also, if OpenBrain can prevent anyone else developing ASI for years, e.g. by sabotage, I think they have a massive advantage. Then ASML loses the option of just waiting a few months and trading with someone else. I think this is your strongest argument tbh.
Biggest cruxes imo:
I'm curious how confident you are that a company with a 6-month lead could outgrow the rest of the world by itself?
As discussed on the other thread, I'm imagining other countries have access to powerful AI as well, or even to equally good AI if they are sold access, or if another actor catches up after the SIE fizzles (as it must).
Thanks! (Quickly written reply!)
I believe I was thinking here about how society has, at least in the past few hundred years, spent only a minority of GDP on obtaining new raw materials, which suggests that access to such materials wasn't a significant bottleneck on expansion.
So it's a stronger claim than "hard cap". I think a hard cap would, theoretically, result in all GDP being used to unblock the bottleneck, as there's no other way to increase GDP. I think you could quantify the strength of the bottleneck as the marginal elasticity of GDP to additional raw materials. In a task-based model, I think the % of GDP spent on each task is proportional to this elasticity?
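To illustrate the elasticity point with a toy sketch (my own hypothetical example, with made-up functional form and parameters, not from the comment): in a Cobb-Douglas production function Y = A · M^α · L^(1−α), the marginal elasticity of GDP to raw materials M is exactly the exponent α, which under competitive pricing is also the share of spending going to raw materials.

```python
import math

def output(M, L, A=1.0, alpha=0.05):
    # Toy Cobb-Douglas production: Y = A * M^alpha * L^(1-alpha)
    # (alpha = 0.05 is a made-up illustrative raw-materials share)
    return A * M**alpha * L**(1 - alpha)

def elasticity(M, L, eps=1e-6):
    # Marginal elasticity d ln Y / d ln M, via a log finite difference
    y0 = output(M, L)
    y1 = output(M * (1 + eps), L)
    return (math.log(y1) - math.log(y0)) / math.log(1 + eps)

# For Cobb-Douglas the elasticity is the exponent alpha, at any (M, L):
print(round(elasticity(M=10.0, L=100.0), 3))  # 0.05
```

On this (assumed) functional form, a small raw-materials cost share implies a small elasticity, matching the argument that materials weren't a strong bottleneck.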
Yeah, I think maybe it is? I do feel like, given the very long history of sustained growth, it's on the sceptic to explain why their proposed bottleneck will kick in with explosive growth but not before. So you could state my argument as: raw materials never bottlenecked growth before; there's no particular reason they would just because growth is faster, because that faster growth is driven by having more labour+capital, which can be used for gathering more resources; so we shouldn't expect raw materials to bottleneck growth in the future.
TBC, this is all compatible with "if we had way more raw materials then this would boost output". E.g. in Cobb-Douglas, doubling an input notably increases output, but there still aren't bottlenecks.
(And I actually agree that it's more like CES with rho<0, i.e. raw materials are a stronger bottleneck, but I just think we'll be able to spend output to get more raw materials.)
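A quick numerical sketch of the Cobb-Douglas vs CES contrast (my own illustrative example with made-up parameters, not from the comment):

```python
def cobb_douglas(M, L, alpha=0.2):
    # Y = M^alpha * L^(1-alpha): no input imposes a hard ceiling
    return M**alpha * L**(1 - alpha)

def ces(M, L, alpha=0.2, rho=-1.0):
    # CES: Y = (alpha*M^rho + (1-alpha)*L^rho)^(1/rho)
    # With rho < 0 the inputs are complements, so a fixed input
    # (here raw materials M) acts as a bottleneck.
    return (alpha * M**rho + (1 - alpha) * L**rho) ** (1 / rho)

# Hold raw materials fixed (M = 1) and scale up labour+capital L:
for L in [1.0, 10.0, 100.0, 1000.0]:
    print(L, round(cobb_douglas(1.0, L), 2), round(ces(1.0, L), 2))
```

With rho = −1 and M fixed at 1, CES output can never exceed M/alpha = 5 however large L gets, which is the "stronger bottleneck" sense above; Cobb-Douglas output keeps growing as L^0.8. Letting M grow too (by spending output on more raw materials) removes the CES ceiling, which is the point of the caveat.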
(Also, to clarify: this is all about the feasibility of explosive growth. I'm not claiming it would be good to do any of this!)