And I can’t bet on “what they supposedly imply” because if my beliefs about AGI are right, then I expect we’ll both be much too dead for a bet to pay out in my favor.
I could pay you in advance as I did with Greg Colbourn, although this would only work for relatively short timelines. Otherwise, I had better keep my money invested.
Hi Stephen.
To create the estimates in the tables below I used the following sources (ordered from most to least reliable):
- Web pages listing all the researchers working at an organization.
- Asking people who work at the organization.
- Scraping publications and posts from sites including The Alignment Forum, DeepMind, and OpenAI, and analyzing the data [2].
- LinkedIn insights to estimate the number of employees in an organization.
I find it interesting that you ranked LinkedIn last. Is this because many people working at the target organisations do not add their roles or organisations to their LinkedIn profiles?
I hope that is a roughly correct rendition of your argument.
Thanks for the great summary, Kave!
So this argument doesn't bite for, say, shrimp welfare interventions, which could be arbitrarily more impactful than global health, or R&D developments.
Nitpick. SWP received 1.82 M 2023-$ (= 1.47*10^6*1.24) during the year ended on 31 March 2024, which is 1.72*10^-8 (= 1.82*10^6/(106*10^12)) of the gross world product (GWP) in 2023, and OP estimated R&D has a benefit-to-cost ratio of 45. So I estimate SWP can only be up to 1.29 M (= 1/(1.72*10^-8)/45) times as cost-effective as R&D due to this increasing SWP’s funding.
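For transparency, here is a minimal sketch in Python reproducing the arithmetic above; the only inputs are the figures already quoted (SWP's 1.47 M $ of receipts, the 1.24 inflation factor to 2023-$, a GWP of 106 T$ in 2023, and OP's benefit-to-cost ratio of 45 for R&D).

```python
# Minimal sketch reproducing the arithmetic above.
swp_receipts = 1.47e6          # $ received by SWP in the year ended 31 March 2024
inflation_factor = 1.24        # conversion to 2023-$
swp_receipts_2023_usd = swp_receipts * inflation_factor   # 1.82 M 2023-$

gwp_2023 = 106e12              # gross world product in 2023 ($)
swp_share_of_gwp = swp_receipts_2023_usd / gwp_2023       # 1.72*10^-8

rd_benefit_to_cost = 45        # OP's estimate of the benefit-to-cost ratio of R&D

# Upper bound on how many times as cost-effective as R&D SWP can be.
max_ratio = 1 / swp_share_of_gwp / rd_benefit_to_cost     # 1.29 M
print(f"{swp_receipts_2023_usd:.3g} $, {swp_share_of_gwp:.3g} of GWP, {max_ratio:.3g} times")
```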
Here are my even-assuming-outside-view criticisms:
- Even the Davidson model allows that the distribution for interventions that increase the rate/effectiveness of R&D (rather than just purchasing some at the same rate) could be much more effective. I think superresearchers (or even just a large increase in the number of top researchers) are such an intervention.
- To the extent we're allowing cause-hopping to enable large multipliers (which we must to think that there are potentially much more impactful opportunities than superbabies), I care about superbabies because of the cause of x-risk reduction! Which I think has much higher cost-effectiveness than growth-based welfare interventions.
Fair points, although I do not see how they would be sufficiently strong to overcome the large baseline difference between SWP and general R&D. I do not think reducing the nearterm risk of human extinction is astronomically cost-effective, and I am sceptical of longterm effects.
Thanks for the post! I think genetic engineering for increasing IQ can indeed be super valuable, and is quite neglected in society. However, I would be very surprised if it were among the areas where additional investment generates the most welfare per $:
Thanks for the post, Dan and Elliot. I have not read the comments, but I do not think preferential gaps make sense in principle. If one were exactly indifferent between 2 outcomes, I believe any improvement/worsening of one of them must make one prefer one of the outcomes over the other. At the same time, if one is only roughly indifferent between 2 outcomes, a sufficiently small improvement/worsening of one of them will still leave one practically indifferent between them. For example, although I think i) 1 $ plus a probability of 10^-100 of an additional 1 $ is clearly better than ii) 1 $, I am practically indifferent between i) and ii), because the expected value of that extra chance, 10^-100 $, is negligible.
Thanks, JBlack. As I say in the post, "We can agree on another [later] resolution date such that the bet is good for you". Metaculus' changing the resolution criteria does not obviously benefit one side or the other. In any case, I am open to updating the terms of the bet such that, if the resolution criteria do change, the bet is cancelled unless both sides agree on maintaining it given the new criteria.
Thanks, Dagon. Below is how superintelligent AI is defined in the question from Metaculus related to my bet proposal. I think it very much points towards full automation.
"Superintelligent Artificial Intelligence" (SAI) is defined for the purposes of this question as an AI which can perform any task humans can perform in 2021, as well or superior to the best humans in their domain. The SAI may be able to perform these tasks themselves, or be capable of designing sub-agents with these capabilities (for instance the SAI may design robots capable of beating professional football players which are not successful brain surgeons, and design top brain surgeons which are not football players). Tasks include (but are not limited to): performing in top ranks among professional e-sports leagues, performing in top ranks among physical sports, preparing and serving food, providing emotional and psychotherapeutic support, discovering scientific insights which could win 2021 Nobel prizes, creating original art and entertainment, and having professional-level software design and AI design capabilities.
As an AI improves in capacity, it may not be clear at which point the SAI has become able to perform any task as well as top humans. It will be defined that the AI is superintelligent if, in less than 7 days in a non-externally-constrained environment, the AI already has or can learn/invent the capacity to do any given task. A "non-externally-constrained environment" here means, for instance, access to the internet and compute and resources similar to contemporaneous AIs.
Fair! I have now added a 3rd bullet, and clarified the sentence before the bullets:
I think the bet would not change the impact of your donations (which is what matters if you also plan to donate the profits) if:
- Your median date of superintelligent AI as defined by Metaculus was the end of 2028. If you believe the median date is later, the bet will be worse for you.
- The probability of me paying you if you win was the same as the probability of you paying me if I win. The former will be lower than the latter if you believe the transfer is less likely given superintelligent AI, in which case the bet will be worse for you.
- The cost-effectiveness of your best donation opportunities in the month the transfer is made is the same whether you win or lose the bet. If you believe it is lower if you win the bet, this will be worse for you.
We can agree on another resolution date such that the bet is good for you accounting for the above.
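To make the three conditions above concrete, here is a hypothetical sketch of how the expected change in the impact of your donations depends on them, assuming equal stakes on both sides; the stake, probabilities, and cost-effectiveness values are illustrative, not terms of the actual bet.

```python
# Hypothetical illustration of the three conditions above; all numbers are
# made up, and equal stakes on both sides are assumed.
def expected_impact_change(stake, p_you_win, p_i_pay, p_you_pay,
                           ce_if_you_win, ce_if_you_lose):
    # p_you_win: your probability that the Metaculus question resolves
    #   positively by the resolution date (0.5 if your median date equals it).
    # p_i_pay / p_you_pay: probability the loser actually transfers the money.
    # ce_if_you_win / ce_if_you_lose: cost-effectiveness of your best donation
    #   opportunities in the month the transfer is made, in each branch.
    gain = p_you_win * p_i_pay * stake * ce_if_you_win
    loss = (1 - p_you_win) * p_you_pay * stake * ce_if_you_lose
    return gain - loss

# All three conditions hold: the bet is neutral for you in expectation.
print(expected_impact_change(10_000, 0.5, 0.9, 0.9, 1.0, 1.0))  # 0.0
# You think a transfer (or effective donating) is less likely given
# superintelligent AI: the bet becomes worse for you.
print(expected_impact_change(10_000, 0.5, 0.6, 0.9, 1.0, 1.0))  # -1500.0
```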
I agree the bet is not worth it if superintelligent AI as defined by Metaculus immediately implies donations can no longer do any good, but this seems like an extreme view. Even if AIs outperform humans in all tasks for the same cost, humans could still donate to AIs.
I think the Cuban Missile Crisis is a better analogy than mutually assured destruction for the period right after Metaculus' question resolves non-ambiguously. During the former, there were still good opportunities to decrease the expected damage of nuclear war; under the latter, the damage would already have been done.
Thanks for the great questions!
Here is the talk. I liked it!