Vasco Grilo

I hope that is a roughly correct rendition of your argument.

Thanks for the great summary, Kave!

So this argument doesn't bite for, say, shrimp welfare interventions, which could be arbitrarily more impactful than global health, or R&D developments.

Nitpick: SWP received 1.82 M 2023-$ (= 1.47*10^6*1.24) during the year ended on 31 March 2024, which is 1.72*10^-8 (= 1.82*10^6/(106*10^12)) of the gross world product (GWP) in 2023, and OP estimated R&D has a benefit-to-cost ratio of 45. So I estimate SWP can be at most 1.29 M (= 1/(1.72*10^-8)/45) times as cost-effective as R&D due to this consideration increasing SWP’s funding.
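The arithmetic behind this ceiling can be checked with a short script (all figures are taken from the estimate above; nothing new is assumed):

```python
# Upper bound on SWP's cost-effectiveness relative to R&D, per the reasoning above.
swp_funding = 1.47e6 * 1.24  # SWP funding in 2023-$ (1.47 M converted at 1.24)
gwp = 106e12                 # gross world product in 2023, $

funding_share = swp_funding / gwp  # SWP funding as a fraction of GWP
rd_benefit_cost = 45               # OP's benefit-to-cost ratio for R&D

max_ratio = 1 / funding_share / rd_benefit_cost

print(f"{funding_share:.2e}")  # 1.72e-08
print(f"{max_ratio:.2e}")      # 1.29e+06
```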

Here are my even-assuming-outside-view criticisms:

  1. Even the Davidson model allows that the distribution for interventions that increase the rate/effectiveness of R&D (rather than just purchasing some at the same rate) could be much more effective. I think superresearchers (or even just a large increase in the number of top researchers) are such an intervention.
  2. To the extent we're allowing cause-hopping to enable large multipliers (which we must to think that there are potentially much more impactful opportunities than superbabies), I care about superbabies because of the cause of x-risk reduction! Which I think has much higher cost-effectiveness than growth-based welfare interventions.

Fair points, although I do not see how they would be sufficiently strong to overcome the large baseline difference between SWP and general R&D. I do not think reducing the nearterm risk of human extinction is astronomically cost-effective, and I am sceptical of longterm effects.

Thanks for the post! I think genetic engineering for increasing IQ can indeed be super valuable, and is quite neglected in society. However, I would be very surprised if it was among the areas where additional investment generates the most welfare per $:

  • Open Philanthropy (OP) estimated that funding R&D (research and development) is 45 % as cost-effective as giving cash to people living on 500 $/year.
  • People in extreme poverty live on around 500 $/year, and unconditional cash transfers to them are like 1/3 as cost-effective as GiveWell's (GW's) top charities. GW used to consider such transfers around 10 % as cost-effective as their top charities, but now thinks they are 3 to 4 times as cost-effective as previously estimated.
  • So I think R&D is like 15 % (= 0.45/3) as cost-effective as GW's top charities.
  • I estimate the Shrimp Welfare Project (SWP) is 64.3 k times as cost-effective as GW's top charities.
  • So I believe SWP is like 429 k (= 64.3*10^3/0.15) times as cost-effective as R&D (neglecting the beneficial or harmful effects of R&D on animals; it is unclear to me whether saving human lives is beneficial or harmful).
  • Trusting these numbers, genetic engineering would have to be 429 k times as cost-effective as typical R&D for it to be as cost-effective as SWP. I can see it being 10 times as cost-effective as R&D, but this is not anything close to enough to make it competitive with SWP.
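The chain of multipliers in the bullets above can be reproduced as follows (all numbers are as stated in the bullets):

```python
# Cost-effectiveness ratios from the bullet points above.
rd_vs_cash = 0.45    # R&D vs cash transfers (OP's estimate)
cash_vs_gw = 1 / 3   # cash transfers vs GiveWell's top charities

rd_vs_gw = rd_vs_cash * cash_vs_gw  # R&D vs GW's top charities, = 0.15

swp_vs_gw = 64.3e3                  # SWP vs GW's top charities
swp_vs_rd = swp_vs_gw / rd_vs_gw    # SWP vs R&D, about 429 k

print(round(rd_vs_gw, 2), round(swp_vs_rd / 1e3), "k")
```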

Thanks for the post, Dan and Elliot. I have not read the comments, but I do not think preferential gaps make sense in principle. If one were exactly indifferent between 2 outcomes, I believe any improvement/worsening of one of them must make one prefer one of the outcomes over the other. At the same time, if one is roughly indifferent between 2 outcomes, a sufficiently small improvement/worsening of one of them will still leave one practically indifferent between them. For example, although I think i) 1 $ plus a chance of 10^-100 of 1 $ is clearly better than ii) 1 $, I am practically indifferent between i) and ii), because the value of 10^-100 $ is negligible.

Thanks, JBlack. As I say in the post, "We can agree on another [later] resolution date such that the bet is good for you". Metaculus' changing the resolution criteria does not obviously benefit one side or the other. In any case, I am open to updating the terms of the bet such that, if the resolution criteria do change, the bet is cancelled unless both sides agree on maintaining it given the new criteria.

Thanks, Dagon. Below is how superintelligent AI is defined in the question from Metaculus related to my bet proposal. I think it very much points towards full automation.

"Superintelligent Artificial Intelligence" (SAI) is defined for the purposes of this question as an AI which can perform any task humans can perform in 2021, as well or superior to the best humans in their domain.  The SAI may be able to perform these tasks themselves, or be capable of designing sub-agents with these capabilities (for instance the SAI may design robots capable of beating professional football players which are not successful brain surgeons, and design top brain surgeons which are not football players).  Tasks include (but are not limited to): performing in top ranks among professional e-sports leagues, performing in top ranks among physical sports, preparing and serving food, providing emotional and psychotherapeutic support, discovering scientific insights which could win 2021 Nobel prizes, creating original art and entertainment, and having professional-level software design and AI design capabilities.

As an AI improves in capacity, it may not be clear at which point the SAI has become able to perform any task as well as top humans.  It will be defined that the AI is superintelligent if, in less than 7 days in a non-externally-constrained environment, the AI already has or can learn/invent the capacity to do any given task.  A "non-externally-constrained environment" here means, for instance, access to the internet and compute and resources similar to contemporaneous AIs.

Fair! I have now added a 3rd bullet, and clarified the sentence before the bullets:

I think the bet would not change the impact of your donations, which is what matters if you also plan to donate the profits, if:

  • Your median date of superintelligent AI as defined by Metaculus was the end of 2028. If you believe the median date is later, the bet will be worse for you.
  • The probability of me paying you if you win was the same as the probability of you paying me if I win. The former will be lower than the latter if you believe the transfer is less likely given superintelligent AI, in which case the bet will be worse for you.
  • The cost-effectiveness of your best donation opportunities in the month the transfer is made is the same whether you win or lose the bet. If you believe it is lower if you win the bet, this will be worse for you.

We can agree on another resolution date such that the bet is good for you accounting for the above.

I agree the bet is not worth it if superintelligent AI as defined by Metaculus immediately implies donations can no longer do any good, but this seems like an extreme view. Even if AIs outperform humans in all tasks for the same cost, humans could still donate to AIs.

I think the Cuban Missile Crisis is a better analogy for the period right after Metaculus' question resolves non-ambiguously than mutually assured destruction. In the former, there were still good opportunities to decrease the expected damage of nuclear war. In the latter, the damage would already have been done.

Thanks, Daniel. My bullet points are supposed to be conditions for the bet to be neutral "in terms of purchasing power, which is what matters if you also plan to donate the profits", not personal welfare. I agree a given amount of purchasing power will buy the winner less personal welfare given superintelligent AI, because then they will tend to have higher real consumption in the future. Or are you saying that a given amount of purchasing power given superintelligent AI will buy not only less personal welfare, but also less impartial welfare via donations? If so, why? The cost-effectiveness of donations should ideally be constant across spending categories, including across worlds where there is or is not superintelligent AI by a given date. Funding should be moved from the least to the most cost-effective categories until their marginal cost-effectiveness is equalised. I understand the altruistic market is not efficient. However, for my bet not to be worth taking, I think one would have to argue about which concrete decisions major funders like Open Philanthropy are making badly, and why they imply spending more money in worlds where there is no superintelligent AI relative to what is being done at the margin.

Thanks, Richard! I have updated the bet to account for that.

If, until the end of 2028, Metaculus' question about superintelligent AI:

  • Resolves non-ambiguously, I transfer to you 10 k January-2025-$ in the month after that in which the question resolved.
  • Does not resolve, you transfer to me 10 k January-2025-$ in January 2029. As before, I plan to donate my profits to animal welfare organisations.

The nominal amount of the transfer in $ is 10 k times the ratio between the consumer price index for all urban consumers (all items) in the United States, as reported by the Federal Reserve Economic Data, in the month in which the bet resolved and that in January 2025.

Great discussion! I am open to the following bet.

If, until the end of 2028, Metaculus' question about superintelligent AI:

  • Resolves non-ambiguously, I transfer to you 10 k January-2025-$ in the month after that in which the question resolved.
  • Does not resolve, you transfer to me 10 k January-2025-$ in January 2029. As before, I plan to donate my profits to animal welfare organisations.

The nominal amount of the transfer in $ is 10 k times the ratio between the consumer price index for all urban consumers (all items) in the United States, as reported by the Federal Reserve Economic Data, in the month in which the bet resolved and that in January 2025.

I think the bet would not change the impact of your donations, which is what matters if you also plan to donate the profits, if:

  • Your median date of superintelligent AI as defined by Metaculus was the end of 2028. If you believe the median date is later, the bet will be worse for you.
  • The probability of me paying you if you win was the same as the probability of you paying me if I win. The former will be lower than the latter if you believe the transfer is less likely given superintelligent AI, in which case the bet will be worse for you.
  • The cost-effectiveness of your best donation opportunities in the month the transfer is made is the same whether you win or lose the bet. If you believe it is lower if you win the bet, this will be worse for you.

We can agree on another resolution date such that the bet is good for you accounting for the above.

Sorry for the lack of clarity! "today-$" refers to January 2025. For example, assuming prices increased by 10 % from this month until December 2028, the winner would receive 11 k$ (= 10*10^3*(1 + 0.1)).
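The inflation adjustment can be sketched as below. The function name and CPI values are hypothetical, chosen only to reproduce the 10 % example above:

```python
def nominal_transfer(cpi_resolution_month: float, cpi_jan_2025: float,
                     stake_jan_2025_usd: float = 10e3) -> float:
    """Nominal amount owed: the January-2025 stake scaled by CPI growth
    between January 2025 and the month in which the bet resolved."""
    return stake_jan_2025_usd * cpi_resolution_month / cpi_jan_2025

# Prices up 10 % from January 2025 to the resolution month:
print(nominal_transfer(cpi_resolution_month=110.0, cpi_jan_2025=100.0))  # 11000.0
```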