Thanks, those links are interesting. Though, improving algorithms and compute don't seem like sufficient conditions - I was also thinking about where the training data comes from (and the time needed to gather it and do the required training). Going from being able to do AI development work to being able to do most economically useful tasks seems like a big step - it seems speculative* as I don't think there are demonstrations of something like this working yet. It would be interesting to know whether there is a path here to getting suitable data (enough input-output pairs, or tasks with well-specified, automatically verifiable goals) that doesn't require any speculative leaps in efficiency with respect to the amount of training data required. Or is there a concrete method that has been thought of that wouldn't require this data?

The takeoff forecast page seems to use a framework of saying the AI could speed up any task a human could do - but I'm not aware of it having been shown that this could happen when generalising to new tasks. Even setting aside the takeover scenario, it would be interesting to see analysis of how automation of job tasks that make up a sizeable fraction of human labour time could be done, with reasonable projections of the resources required given current trends, just to get a better idea of how job automation would proceed.
Similarly, it's not clear to me where the data would come from to train the agents for long-range strategic planning (particularly about novel scenarios that humans haven't navigated before) or for producing efficient robot factories (which seems like it would take quite a bit of trial and error in the physical world, unless some high-quality simulated environment is being presumed).
*By speculative, I don't mean to say it makes it unlikely, but just that it doesn't seem to be constrained by good evidence that we have in the present, and so different reasonable people may come to very different conclusions. It seems helpful to me to identify where there could potentially be very wide differences in people's estimates.
Thanks for a very interesting post and for going a lot further than usual to set out reasoning for such a scenario.
There were a few steps where it seemed like a large, unexplained jump in the abilities of the agents occurred, and I was wondering whether there are particular reasons to think these jumps would occur, or whether they are just speculative? In particular:
My understanding at present is that we have methods for training AI to do tasks where there are many examples of pairs of inputs and desired outputs, or where a task has a clearly defined goal and there is opportunity to try it many times and apply RL. But this kind of training doesn't seem to feature in the scenario for tasks like the above, so I wondered whether there is a clear pathway to achieving these using methods we can presently say have a good chance of working. (I am not an AI researcher, so may just not know the extent of demonstrated abilities. I am also not saying I don't believe these are possible - I just wonder which steps have a clear pathway and which are primarily speculative.)
I'm also not sure about the resources. The scenario has over $800 billion/year being spent on datacentres by July 2027 (vs ~$300 billion/year in the present, as stated). The present total profit of the "Magnificent 7" American tech companies is around $500 billion/year (from https://companiesmarketcap.com/tech/most-profitable-tech-companies/), so the estimated figure is roughly the total profit of these companies plus around 60%. This gap is a lot more than the profits of Chinese big tech. There doesn't appear to have been widespread economic use of the AIs before this point (given this is where a "tipping point" is said to occur), so this seems like a lot of spending to be happening without widespread acceptance of the economic case. Is there more detail on where these sums of money would plausibly come from in that time? And how would it affect the timescale if the resources grew, say, half as fast?
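To make the arithmetic behind that comparison explicit (a rough back-of-envelope using only the figures quoted above, all of which are approximate):

```python
# Rough back-of-envelope using the figures quoted above (all in $/year, approximate).
scenario_spend = 800e9   # datacentre spending by July 2027 in the scenario
current_spend = 300e9    # present datacentre spending, as stated in the post
mag7_profit = 500e9      # approximate total profit of the "Magnificent 7"

extra_over_profit = scenario_spend - mag7_profit
print(extra_over_profit / mag7_profit)   # ~0.6, i.e. total Mag-7 profit plus ~60%
print(scenario_spend / current_spend)    # ~2.7x the stated present spending level
```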
I have been spending around 10%-20% of my time over the past 5 years working as a fund manager on the Long Term Future Fund. As a result, Lightcone has never applied to the LTFF, or the EA Infrastructure Fund, as my involvement with EA Funds would pose too tricky of a COI in evaluating our application. But I am confident that both the LTFF and the EAIF would evaluate an application by Lightcone quite favorably.
Would it make sense then for Lightcone people to not have positions at funding orgs like these?
Coming here from your email. It's great to see so much thought and effort go into trying to make an evidence-based case for asking for public donations, which I think is rare to see.
A question that occurs to me: the donations chart seems to show that <$1M/year of donations was needed up to 2020 (I can't read off precise figures) - let's say that would be ~$1M/yr today after adding inflation. The site metrics seem to have been on solid upward trends by then, so it seems like it was at least possible to run LessWrong well on that budget. Would it be fair to say that LessWrong could still be run with that much donated income? And Lighthaven too (since you seem to think Lighthaven will pay for itself)? You say that 2025 spending will be ~$3M and expenses will be ~$2M/yr after that. I couldn't follow what the extra spending above 2020 levels is getting.
PS It was fiddly to go back and forth between numbers/charts in the post to try to make sense of them, cross-check them, and check I was using the correct numbers here - I kept misremembering where I'd seen a piece of info and having to scroll around. Perhaps making it easy to add a contents section with clickable links to section headings would be valuable for easing navigation of long posts!
Why's that? They seem to be going for AGI, can afford to invest billions if Zuckerberg chooses, their effort is led by one of the top AI researchers and they have produced some systems that seem impressive (at least to me). If you wanted to cover your bases, wouldn't it make sense to include them? Though 3-5% may be a bit much (but I also think it's a bit much for the listed companies besides MS and Google). Or can a strong argument be made for why, if AGI were attained in the near term, they wouldn't be the ones to profit from it?
- Invest like 3-5% of my portfolio into each of Nvidia, TSMC, Microsoft, Google, ASML and Amazon
Should Meta be in the list? Are the big Chinese tech companies considered out of the race?
Do you mean you'd be adding the probability distribution with that covariance matrix on top of the mean prediction from f, to make it a probabilistic prediction? I was talking about deterministic predictions before, though my text doesn't make that clear. For probabilistic models, yes, adding an uncertainty distribution may result in non-zero likelihoods. But if we know the true dynamics are deterministic (pretend there are no quantum effects, which are largely irrelevant for our prediction errors for systems in the classical physics domain), then we still know the model is not true, and so it seems difficult to interpret p if we were to do Bayesian updating.
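As a minimal sketch of the kind of setup I have in mind (hypothetical names and numbers; assuming a Gaussian error distribution with the stated covariance placed on top of a deterministic prediction):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical deterministic model: f maps an initial state x to a predicted state.
def f(x):
    return 0.9 * x  # stand-in for whatever the deterministic dynamics model returns

x = np.array([1.0, 2.0])       # estimated initial condition
y_obs = np.array([0.7, 2.1])   # observed outcome
Sigma = 0.1 * np.eye(2)        # assumed error covariance added on top of f(x)

# Wrapping the point prediction in a Gaussian does give a non-zero likelihood...
likelihood = multivariate_normal.pdf(y_obs, mean=f(x), cov=Sigma)
print(likelihood)

# ...but if the true dynamics are deterministic, Sigma is describing our model's
# error rather than genuine randomness in the system, so it's unclear what a
# posterior built from these likelihoods is supposed to mean.
```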
Likelihoods also don't obviously seem (to me) like very good measures of model quality for chaotic systems - in these cases we know that even if we had the true model, its predictions would diverge from reality due to errors in the initial condition estimates, but it would trace out the correct attractor - and it's the attractor geometry (conditional on boundary conditions) that we'd really want to assess. Perhaps the true model would still have a higher likelihood than every other model, but it's not obvious to me, and it's not obvious that there isn't a better metric for leading to good inferences when we don't have the true model.
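To illustrate the chaotic-system point with a standard toy example (Lorenz-63, nothing specific to the models under discussion): even the true equations, started from a slightly wrong initial condition, produce a trajectory whose pointwise errors grow to the size of the attractor, even though attractor-level statistics are much less sensitive.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz-63: treat these as the "true" dynamics.
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 50, 5000)
truth = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0], t_eval=t_eval).y
# Same (true) model, but with a tiny error in the initial condition estimate.
model = solve_ivp(lorenz, (0, 50), [1.0 + 1e-6, 1.0, 1.0], t_eval=t_eval).y

# Pointwise errors grow to the scale of the attractor itself, so a likelihood
# built from pointwise residuals heavily penalises even the true model...
print(np.abs(truth - model).max(axis=1))

# ...while attractor-level statistics (e.g. long-run component means) remain
# far less sensitive to the initial-condition error.
print(truth.mean(axis=1), model.mean(axis=1))
```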
Basically the logic that says to use Bayes for deducing the truth does not seem to carry over in an obvious way (to me) to the case when we want to predict but can't use the true model.
Yes, I'd selected that because I thought it might get it to work. And now that I've unselected it, it seems to be working. It's possible this was a glitch somewhere, or me just being dumb before, I guess.
I'm still not very sure how to interpret downvotes in the absence of disagree votes... Do people really mean it has negative value to raise discussion about having tighter quality control on publishing high-profile AI takeover scenarios, or is it just disagreement with the claim that the lack of robust quality control is a problem?
To expand a little more, in my field of climate science research, people have been wrestling with how to produce useful scenarios about the future for many years. There are a couple of instances where respectable groups have produced disaster scenarios, but they have generally had a lot more review than the work here, as I understand it. There would be a lot of concern about people publishing such work without external checking by people in the wider community.
It wouldn't necessarily have to be review like that for a journal article - having different experts provide commentary that is published alongside a report could also be very helpful for giving less-informed readers a better idea of where the crux disagreements are, etc.