1. FTX, a big source of EA funding, has imploded.
  2. There's mounting evidence that FTX was engaged in theft/fraud, which would be straightforwardly unethical.
  3. There's been a big drop in the funding that EA organisations expect to receive over the next few years. This includes many AI Safety organisations.
  4. Because these organisations were acting under false information, they would've made (ex-post) wrong decisions, which they will now need to revise.

Which revisions are most pressing?

EA is constrained by the following formula:

Number of Donors x Average Donation = Number of Grants x Average Grant

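The identity above is just a budget balance: money in must equal money out. A minimal sketch, with purely hypothetical numbers, shows how losing a large donor forces the right-hand side to adjust:

```python
# Hypothetical figures only, to illustrate the funding identity:
# Number of Donors x Average Donation = Number of Grants x Average Grant

def total_funding(num_donors: int, avg_donation: float) -> float:
    """Money in: number of donors x average donation."""
    return num_donors * avg_donation

def avg_grant(funding: float, num_grants: int) -> float:
    """Money out must balance money in: funding = num_grants x avg_grant."""
    return funding / num_grants

# Before: 1000 donors, but one whale inflates the average donation.
before = total_funding(num_donors=1000, avg_donation=100_000)
# After: the whale leaves; one fewer donor, and the average halves.
after = total_funding(num_donors=999, avg_donation=50_000)

# Holding the number of grants fixed, the average grant must shrink
# in proportion to the lost funding.
grants = 500
print(avg_grant(before, grants))  # 200000.0
print(avg_grant(after, grants))   # 99900.0
```

The four options in the list that follows are just the four variables in this identity: raise either factor on the donation side, or cut either factor on the grant side.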
If we lose a big donor, there are four things EA can do:

  1. Increase the number of donors:
    1. Outreach. Community growth. Might be difficult right now for reputation reasons, though fortunately, EA was very quick to denounce SBF.
    2. Maybe lobby the government for cash?
    3. Maybe lobby OpenAI, DeepMind, etc for cash?
  2. Increase average donation:
    1. Get another billionaire donor. Presumably, this is hard because otherwise EA would've done it already, but there might be factors that are hidden from me.
    2. 80K could begin pushing earning-to-give again. They shifted their recommendations a few years ago to promoting direct-impact careers. This made sense when EA was less funding-constrained.
    3. Get existing donors to ramp up their donations. In the good ol' days, EA used to be a club for people donating 60% of their income to anti-malaria bednets. Maybe EA will return to that frugal ascetic lifestyle.
  3. Reduce the number of grants:
    1. FTX was funding a number of projects. Some of these were higher priorities than others. Hopefully the high-priority projects retain their funding, whereas low-priority projects are paused.
    2. EA has been engaged in a "hit-or-miss" approach to grant-making. This makes sense when you have more cash than sure-thing ideas. But now that we have less cash, we should focus on sure-thing ideas.
    3. The problem with the "sure-thing" approach to grant-making is that it biases funding toward certain causes (e.g. global health & development) and away from others (e.g. x-risk). I think that would be a mistake. Someone needs to think about how to calibrate for this bias.

      Here's a tentative idea: EA needs more prizes and other forms of retrodictive funding. This will shift risk from the grant-maker to the researcher, which might be good because the researcher is more informed about the likelihood of success than the grant-maker.
  4. Reduce average grant:
    1. Maybe EA needs to focus on cheaper projects.
    2. For example, in AI safety there has been a recent shift away from theoretical work (like MIRI's decision theory) towards experimental work. This experimental work is very expensive because it involves (say) training large language models. This shift should be at least somewhat reversed.
    3. Academics are very cheap! And they often already have funding. EA (especially AI safety) needs to do more outreach to established academics, such as top philosophers, mathematicians, economists, computer scientists, etc.

(Cross-post from EA forum)

I'm curious about this too. I actually have the sense that overall funding for AI alignment was already larger than the supply of shovel-ready projects before FTX was involved. This is normal and expected in a field where many people are working on an important problem, but where most of the work is research, and where hardly anyone has promising scalable uses for money.

I think this led to a lot of prizes being announced. A prize is a good way to fund work if you don't see enough shovel-ready projects to exhaust your funding: you offer prizes to anyone who can formulate and execute new projects, enticing people who weren't previously working on the problem to start working on it. This is a pretty good approach IMO.

With the collapse of FTX, I guess a bunch of prizes will go away.

What else? I'm interested.

There's mounting evidence that FTX was engaged in theft/fraud, which would be straightforwardly unethical.

I think it's way too early to decide anything remotely like that. As far as I understand, we have a single leaked balance sheet from Alameda and a handful of tweets from CZ (CEO of Binance) who presumably got to look at some aspect of FTX internals when deciding whether to acquire. Do we have any other real information?

Having spent the better part of the last three days looking into this, I disagree.

FTX lent $10 billion out of $16 billion in customer assets to a hedge fund in which its CEO owned a 50% stake.

It accepted at least $4 billion in collateral of its own token, FTT. The total circulating supply at the time was less than that.

Exchanges are NEVER supposed to lend out customer funds without their consent, and it's clear FTX did that. What's more, they should not accept their own "stock" as collateral. Accepting your own stock as collateral is like opening the book ...

Are you saying that it's too early to claim "SBF committed fraud", or "SBF did something unethical", or "if SBF committed fraud, then he did something unethical"?

I think we have enough evidence to assert all three.

Alex Flint:
The direct information I'm aware of is (1) CZ's tweets about not acquiring, (2) SBF's own tweets yesterday, (3) the leaked P&L doc from Alameda. I don't think any of these are sufficient to decide "SBF committed fraud" or "SBF did something unethical". Perhaps there is additional information that I haven't seen, though. (I do think that if SBF committed fraud, then he did something unethical.)
You have to be confident that no such information is available to say that it's too early for others to have made up their minds. It sounds like it's too early for you, but you don't know how much time others have spent following the situation. Obviously nothing is slam-dunk certain while the situation is still developing, but it's often the case that you can draw fairly strong conclusions from a few unusual data points. You can't assess whether that's the case here if you're not aware of all the data points that are out there.
