One of the largest cryptocurrency exchanges, FTX, recently imploded after apparently transferring customer funds to cover losses at their affiliated hedge fund. Matt Levine has good coverage, especially his recent post on their balance sheet. Normally a crypto exchange going bust isn't something I'd pay that much attention to, aside from sympathy for its customers, but its Future Fund was one of the largest funders in effective altruism (EA).

One reaction I've seen in several places, mostly outside EA, is something like, "this was obviously a fraud from the start, look at all the red flags, how could EAs have been so credulous?" I think this is mostly wrong: the red flags they cite (size of FTX's claimed profits, located in the Bahamas, involved in crypto, relatively young founders, etc.) are not actually strong indicators here. Cause for scrutiny, sure, but short of anything obviously wrong.

The opposite reaction, which I've also seen in several places, mostly within EA, is more like, "how could we have caught this when serious institutional investors with hundreds of millions of dollars on the line missed it?" FTX had raised about $2B in external funding, including ~$200M from Sequoia, ~$100M from SoftBank, and ~$100M from the Ontario Teachers' Pension Plan. I think this argument does have some truth in it: this is part of why I'm ok dismissing the "obvious fraud" view of the previous paragraph. But I also think this lets EA off too easily.

The issue is, we had a lot more on the line than their investors did. Their worst case was that their investments would go to zero and they would have mild public embarrassment at having funded something that turned out so poorly. A strategy of making a lot of risky bets can do well, especially if spending more time investigating each opportunity trades off against making more investments or means that they sometimes lose the best opportunities to competitor funds. Half of their investments could fail and they could still come out ahead if the other half did well enough. Sequoia wrote afterward, "We are in the business of taking risk. Some investments will surprise to the upside, and some will surprise to the downside."

This was not our situation:

  • The money FTX planned to donate represented a far greater portion of the EA "portfolio" than FTX did for these institutional investors. The FTX Future Fund was probably the biggest source of EA funding after Open Philanthropy, and was ramping up very quickly.

  • This bankruptcy means that many organizations now suddenly have much less money than they expected: the FTX Future Fund's committed grants won't be paid out, and the moral and legal status of past grants is unclear. [1] Institutional investors were not relying on the continued healthy operation of FTX or any other single company they invested in, and were thinking of the venture capital segment of their portfolios as a long-term investment.

  • FTX and their affiliated hedge fund, Alameda Research, were founded and run by people from the effective altruism community with the explicit goal of earning money to donate. Their founder, Sam Bankman-Fried, was profiled by 80,000 Hours and listed on their homepage as an example of earning to give, back when he was a first-year trader at Jane Street, and he was later on the board of the Centre for Effective Altruism's US branch. FTX, and Bankman-Fried in particular, represented in part an investment of reputation, and unlike typical financial investments reputational investments can go negative.

These other investors did have much more experience evaluating large startups than most EAs, but we have people in the community who do this kind of evaluation professionally, and it would also have been possible to hire an outside group. I suspect the main reason this didn't happen is that EA isn't a unified whole, it's a collection of individuals and organizations with similar goals and ways of thinking about the world. There are likely many things that would be worth it for "EA" to do that don't happen because it's not clear who would do them or even whether someone is already quietly doing the work. I hope building a better process for identifying and coordinating on this sort of work is one of the things that can come out of this collapse.

While at this stage it's still not clear to me whether more vetting would have prevented this abuse of customer funds (perhaps by leading to better governance at FTX or more robust separation between FTX and Alameda) or led EAs to be more cautious with FTX funding, I don't think it's enough to say that since Sequoia etc. missed it we most likely would have as well.

[1] Disclosure: my work may have been funded in part by FTX. I've asked for my pay to be put on hold if it would be coming from an FTX grant.

Comment via: facebook, mastodon



A lot of professional and amateur investors did see it.  They're all the ones who didn't put their money anywhere near FTX.  This was mostly not specifically FTX, but distrust (or worse) of the general topic of crypto-based financial services.  There was never a business plan that made any sense - no actual customers happily paying for something that would exist in good times and lean, at least not in the amounts they were pulling out of it for personal/philanthropic/gambling reasons.  And the lack of oversight is going to allow a significant portion of such companies to steal or lie rather than cleanly closing up shop when things get tough.

This bankruptcy means that many organizations now suddenly have much less money than they expected

This is far bigger than just FTX and EA.  It's the story of EVERY major economic downturn, especially after an insane boom.  Nobody is as rich as they thought they were.  Future revenue expectations and asset values (which are an abstraction over future revenues) have plunged, while prices have gone up significantly.  

There was never a business plan that made any sense - no actual customers happily paying for something that would exist in good times and lean, at least not in the amounts they were pulling out of it for personal/philanthropic/gambling reasons.

Why isn't "people put money into the exchange and we let them trade it back and forth while taking a cut on each transaction" a plausible business model?

It's a plausible business model, for a much smaller business than SBF was counting on.  Honestly, if he'd run it for 10 years, getting through at least one bust cycle, before starting to use the capital for EA and personal investments, it might have worked.  Or it might have lost out to a flashier, scammier version that customers liked more in the boom times.

It's a plausible business model, but I assumed it wasn't FTX's business model (even before the event), because if it were, FTX would have tried to acquire a BitLicense from New York, like Coinbase did. The same applies to Binance and other exchanges uninterested in a BitLicense.

I see many people saying things like "centralized cryptocurrency exchange is obviously dangerous, I told you so", but I disagree. Yes, cryptocurrency allows you to self-custody, and the option to self-custody is its main innovation, but it is an option: you don't need to take it, and it does have a significant convenience cost. I think the right lesson is something like "unregulated centralized cryptocurrency exchange has been historically dangerous". It has been a while since the BitLicense was introduced in 2015, and I think it has proved to be a good regulatory framework so far.

I don't know anything about this, but it looks like FTX US was applying for a "trust charter"? And this is what Coinbase has.

Nobody is as rich as they thought they were.

If it was just that (which is the situation Open Philanthropy and other funders are in) I'd agree this was expected with a downturn. Some things that would have made sense to fund no longer do, some projects get wound down.

But that's not what happened with the FTX Future Fund's grants. Committed grants aren't going to be paid out, and people are generally trying to avoid spending any additional money from grants that were already paid out. Organizations that thought they had a year of funding confirmed now have no money, and are scrambling to find other funding.

It was a bigger run-up boom than usual, and a bigger drop when it happened.  A lot of EA charities were hit extra-hard because SBF concentrated the boom into a single point of failure, which failed spectacularly.  SBF made it a much sharper bust by failing to stop the grants much earlier, when it started to look like the risk was coming (he instead used customer funds to make more, riskier bets, and perhaps to prop up his other interests).  But there was no path where that funding actually occurred.

If you think this is different from other businesses and personal finance, I'd have you take it up with the tens of thousands laid off in the last few weeks, and the millions who are worried about it and are seeing much worse prospects in the coming years.

I think it's worth being clear about what exactly "this" is.

My mainline story right now (admitting that I'm not fully caught up) is that prior to 2022:

  • There was a lack of capital controls, which would have made fraud and large mistakes easier;
  • There was plenty of reason to doubt SBF's ethics;
  • But there was no actual fraud.

Professional investors and EA would both have cared about the first point. But it's not clear how investors would have felt about it; I could believe anything from "this is a dealbreaker" to "this is positive on net". (Is Sam committing fraud bad in expectation for his investors? He might not get caught; and if he does, they'll lose money but probably won't take most of the flak.) Professional investors probably wouldn't have cared about the second point much, though I could see it being a mild negative or mild positive.

So, "should EA have caught the fraud"? I think that might be asking too much.

"Should EA have noticed the lack of controls and reacted to that?" Or, "should EA have noticed Sam's lack of ethics and reacted to that?" I currently think those would have been possible, and "but professional investors didn't" isn't much of a defense.

Or, "should EA have noticed Sam's lack of ethics and reacted to that?"

What would such a reaction have looked like?

Noting that that's a separate question, possible answers that come to mind (which I'm not necessarily endorsing) include:

  • Not holding up Sam as an exemplar of EA, as I gather kind of happened
  • Declining to take more than $X from Sam, on the grounds that "a large amount of EA funding being dependent on someone with bad ethics seems bad"
  • Noticing that the combination "bad ethics and bad capital controls" makes fraud both easy and likely, and explicitly warning people about that. (And taking the lack-of-ethics as a reason to look into capital controls, if they didn't know about that.)

I do think "EA knows about SBF's ethics and acts exactly as they did anyway" is not a story that's flattering about EA.

EA is not an entity that knows or doesn't know. Individual players in the field know or don't know and make decisions based on what they know.  If you want to critique, it makes sense to think about which players should have made different decisions.

This feels like an isolated demand for a thing that I'm not trying to do.

Yes, obviously if I have concrete suggestions that would be great, and likely those would involve looking inside EA at the people and organizations within it and identifying specific points of intervention that could have avoided this problem, or something.

But I'm not trying to identify a solution, I'm trying to identify a problem. A thing where I think EA could have done better. I think it's ridiculous to suggest either that I can't do that without also suggesting improvements, or that I can't do it without looking inside EA.

Maybe you're not intending to suggest anything like that? But it feels to me like you are, and I find it annoying.

So like, these do seem related, but... I think I feel like you think they're more closely related than I think they are? Like the kind of thing they're using as a branching-off point is different from the kind of thing my comment was.

So I'd summarize those posts as saying: "if you're going to say 'let's _', it would be nice if you went into more detail about how to _ and what exactly _ looks like".

But I'm not saying "let's _". I'm saying "we might think we can't _ because [...], but that doesn't hold because [...]. I currently think _ is possible." And now I'm similarly being asked to go into detail about how to _ and what exactly _ looks like, and...

Yeah, there's an implied "let's _" in my comment, and it's a perfectly fine question in general, but...

It feels like it's missing the point of what I said; and in this context, and the way it's been asked, it feels kind of aggressive and offputting to me.

(I would much less have this reaction, if my second comment in this thread had been my first one. The kind of thing my second comment is, feels much more the kind of thing those posts are reacting to. But I only made my second comment after being asked, and I explicitly said that it was a different question and I didn't necessarily endorse my answers.)

It's a bit hard to reconcile the EA community's stance going from "solving malaria with nets + help really sick/dying children" (good!) to the now arbitrary, anything-holds "AI, longtermism, whatever our benevolent leaders and their rich techbro friends get their hands on" (bad? and seemingly dis-humble? if that's a word).

The "future fund" and more of the recent EA stuff screams techbro "visionary" 'we are the chosen saviors of the world and for the future of the world' sort of cultish EA movement? [0] I really don't think this is the right group to even embark on this TBH - it's extremely disingenuous and speed running to rid whatever humility this group had to begin with.

I almost got bought into this whole EA thing by reading your blog (and deeply admired the previously more humble "mosquito net/donate most of our income" EA posts) - but I am deeply concerned about how smart people have become unwitting PR for this seemingly techbro centered cult that has so many of these FTX-like frauds and "rich techbros" sucked into in their orbit :( Good luck and sorry for the rant - not aimed at you in particular but at this whole EA thing.


The basic idea that humanity is worth keeping and we should put serious effort into not wiping ourselves out seems pretty clearly right and important to me? If it doesn't to you, Toby Ord's The Precipice makes a very strong case. This doesn't mean that everything people do with the goal of trying to reduce this risk is better than bednet distribution, or even positive, but I do think there are a lot of important things that would seriously reduce risk here and aren't being done. For example, I think Kevin Esvelt's Delay, Detect, Defend biosecurity agenda is very valuable.

Overall I think the problem with the FTX implosion was primarily "they were reckless and fraudulent", and this shouldn't affect our views one way or the other on the importance of the causes they publicly endorsed.

That does not sound like a good goal. It's too ambiguous and lofty. I realize there are a lot of smart folks associated with the concept of EA but this goal (good or not) seems more aligned with the White Savior complex, with a great deal of humility lacking. We should at least be aware of our limitations: we lack the basic knowledge and tools to even predict, let alone change, the "world" in the next two years, let alone fool ourselves into thinking that "we" (and I mean the self-elected saviors in the EA group) are the right people to do it.

It was kinda cool to see the mosquito net era; but this new era of MacAskill et al is frankly intellectually dishonest and disgusting, almost like the imperial / colonial style. The ftx fuckery is an unrelated problem but it beautifully drills a giant hole into this whole longtermism 80,000 Hours AI savior stuff. The private texts between him and Elon really explain how deep the connections are, to basically twist the meaning of EA to push whatever power play/PR is needed.

Thanks for the pointers, I enjoyed reading them even though I disagree with the premise. One important criticism is that "donating to experts and people on the ground" seems probably better than "let's embed and try to be experts in things we have no business being in".

To close my rant here as a nobody: Mosquito nets and your wife's inspiring work led me to think that "the previous generation EA" was worthwhile "world changing" for the impact it had right now right here. That's amazing work and I certainly believe that lost part was a great goal: thanks for that inspiration.

Why do you think the goal is ambiguous? I'm not sure if this quote is why you think it is ambiguous:

we lack the basic knowledge and tools to even predict let alone change the “world” in the next two years

But this is questioning our capacity to predict or change the world, rather than whether "putting serious effort into not wiping ourselves out" is a good goal to have.

There is a very specific and concrete definition of "wipe ourselves out": the complete extinction of the human species. There are a few other specific cases too, but this is the main one.
