General Case

Let's say I currently have 1M USD, and I can either:

1. Donate it to orgs working on AI alignment.
2. Invest the money and get returns significantly greater than the stock market's.

How large must the expected return on my investment be, and over what time frame, for it to be better than donating right away?
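
One way to make this concrete (my own framing and notation, not part of the original question): let $d$ be the annual rate at which the value of a marginal alignment dollar declines as timelines shorten, $T$ the investment horizon in years, and $M$ the multiple the investment is expected to return over that horizon. Then investing first and donating later beats donating now roughly when

$$M > (1+d)^T$$

i.e. the investment must outgrow the rate at which late money becomes less useful than early money.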

My Specific Case

I recently made around 1M USD pre-tax and am thinking about how best to use it to decrease AI x-risk. I am considering these two options:

  1. Donate 45% to other orgs. Use the other 45% as a funding source for a for-profit AI alignment company that would do a mix of money-making AI endeavors (which I will try to ensure don't improve general capabilities) and direct AI alignment work, then over time dedicate more and more resources to direct AI alignment work and donations to other orgs.
  2. Donate only 10%, and use the other 80% as funding for the for-profit AI alignment company, where the bulk of the additional 35% would at first go mostly into money-making endeavors.

In both cases, the remaining 10% is fun money / covers living expenses.

Let's say I hypothetically expect to make 10x returns after 5 years on the additional investment into the for-profit in scenario 2. Would that be a worthwhile investment?

I realize that 10x in 5 years might seem unrealistic, but for the sake of discussion I think it is a useful figure, however wrong it may be.

My Own Estimate

Donating 1 USD today corresponds roughly to donating (adjusted for inflation):

- 1.18 USD in one year
- 1.39 USD in two years
- 2.39 USD in five years

This is based on a timeline of AGI arriving in 15 years; I simply played around with the numbers and these seemed reasonable. I know "playing with numbers" is not a reliable approach, but I haven't come up with anything better, hence this post.
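
As a sanity check on these figures (my own sketch, not from the post), the following Python snippet backs out the annual discount rate each figure implies and compares the hypothetical 10x-in-5-years scenario against the five-year break-even:

```python
# Each entry: years -> USD that must be donated later to match 1 USD donated today.
estimates = {1: 1.18, 2: 1.39, 5: 2.39}

for years, multiple in estimates.items():
    implied_annual_rate = multiple ** (1 / years) - 1
    print(f"{years} year(s): implied annual discount rate ~ {implied_annual_rate:.1%}")

# Hypothetical scenario 2: a 10x return after 5 years vs. the 5-year break-even of 2.39x.
investment_multiple, horizon = 10, 5
print(f"10x in {horizon} years beats the {estimates[horizon]}x break-even: "
      f"{investment_multiple > estimates[horizon]}")
```

Under these figures the implied annual discount rate is roughly 18-19%, so a 10x return over 5 years would clear the 2.39x break-even comfortably, if it actually materializes.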


Answers

Max H · Jun 24, 2023 · 108 points

I expect that for most people, starting a new for-profit (or non-profit) AI alignment organization is likely to be net-negative for AI x-risk, even if you have the best of intentions. It's really easy to end up doing capabilities work and contributing to general hype by accident.

I think the benefit of donating to other organizations (now or later) depends pretty heavily on which organizations you're talking about. There are lots of organizations doing some potentially great work (some overviews here and here), some of which are definitely looking for funding. But evaluating each project or organization's quality, likelihood of making a positive impact, and need for funding can be pretty challenging, especially if you're doing it on your own.

You might be interested in donating to or becoming a grant-maker for the Survival and Flourishing Fund or Lightspeed grants, or even funding Lightcone directly. MIRI and ARC seem like some of the safest choices if you want to ensure that your donations are not net-negative, though I don't think either of them are particularly funding-constrained at the moment.

 

> I expect that for most people, starting a new for-profit (or non-profit) AI alignment organization is likely to be net-negative for AI x-risk

While there are some examples of this, such as OpenAI, I still find this claim to be rather bold. If no one were starting AI alignment orgs we would still have roughly the same capabilities today, but only a fraction of the alignment research. Right now, over a hundred times more money is spent on advancing AI than on reducing risks, so even a company spending half their resources advancing capabilities, and half on...

Max H · 10mo · 3 points
Yeah, my own view is that a lot of "alignment" work is mostly capabilities work in disguise. I don't claim my view is the norm or consensus, but I don't think it's totally unique or extreme either (e.g. see Should we publish mechanistic interpretability research? and If interpretability research goes well, it may get dangerous).

I think Conjecture and Anthropic are examples of (mostly?) for-profit companies that are some % concerned with safety and x-risk. These organizations are probably net-positive for AI x-risk compared to the counterfactual where all their researchers are working on AI capabilities at some less safety-focused org instead (e.g. Meta). I'm less sure they're net-positive if the counterfactual is that all of their researchers are working at hedge funds or doing physics PhDs or whatever. But I haven't thought about this question on the object level very closely; I'm more just pointing out that differential impact on capabilities vs. alignment is an important term to consider in a cost-benefit calculation.

Depending on your own model of remaining timelines and takeoff speeds, "half on capabilities, half on alignment" might end up being neutral to net-negative. It also depends on the quality and kind of your alignment output vs. capabilities output.

OTOH, I think there's also an argument for a high-variance strategy at this stage of the game, so if you have some ideas for alignment that are unique or high-impact, even if they have a low probability of success, that might make your impact very net-positive in expectation. In general though, I think it's very easy to deceive yourself on this kind of question.
AlignmentOptimizer · 10mo · 1 point
I do think you make valid and reasonable points, and I appreciate and commend you for that.

Let's use 80,000 Hours' conservative estimate that only around 5B USD is spent on capabilities each year, and 50M on AI alignment. That seems worse than 6B USD spent on capabilities and 1.05B spent on AI alignment. A half-and-half approach in this case would 20x the alignment research, but only increase capabilities by 20%.

This I agree with; I have some ideas but will consult with experts in the field before pursuing any of them.
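
For what it's worth, the ratios in this comment check out under its own assumptions. A quick sketch (the baseline figures are the ones cited above; reading the 6B/1.05B scenario as adding roughly 1B per year to each side is my interpretation):

```python
# Baseline rough estimates cited in the comment, in USD per year.
baseline_capabilities = 5e9   # ~5B on advancing AI capabilities
baseline_alignment = 50e6     # ~50M on AI alignment

# Hypothetical half/half spending at the scale the comment implies: ~1B added to each side.
added = 1e9
new_capabilities = baseline_capabilities + added  # -> 6B
new_alignment = baseline_alignment + added        # -> 1.05B

print(f"Alignment spending multiplier: {new_alignment / baseline_alignment:.0f}x")            # ~21x
print(f"Capabilities spending increase: {new_capabilities / baseline_capabilities - 1:.0%}")  # 20%
```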