There is a narrative about the FTX collapse that I have noticed emerging[1] as a commonly-held belief, despite little concrete evidence for or against it. The belief goes something like this:

  • Sam Bankman-Fried did what he did primarily for the sake of "Effective Altruism," as he understood it. Even though from a purely utilitarian perspective his actions were negative in expectation, he justified the fraud to himself because it was "for the greater good." As such, poor messaging on our part[2] may be partially at fault for his downfall.

This take may be more or less plausible, but it is also unsubstantiated. As Astrid Wilde noted on Twitter, there is a distinct possibility that the causality of the situation may have run the other way, with SBF as a conman taking advantage of the EA community's high-trust environment to boost himself.[3] Alternatively (or additionally), it also seems quite plausible to me that the downfall of FTX had something to do with the social dynamics of the company, much as Enron's downfall can be traced back to [insert your favorite theory for why Enron collapsed here]. We do not, and to some degree cannot, know what SBF's internal monologue was, and if we are to update our actions responsibly in order to avoid future mistakes of this magnitude (which we absolutely should do), we must deal with the facts as they most likely are, not as we would like or fear them to be.

All of this said, I strongly suspect[4] that ten years from now, conventional wisdom will hold the above belief as basically canon, regardless of further evidence in either direction. This is because it presents an intrinsically interesting, almost Hollywood-villain-esque narrative, one that will surely evoke endless "hot takes" which journalists, bloggers, etc. will have a hard time passing over. Expect this to become the default understanding of what happened (from outsiders at least), and prepare accordingly. At the same time, be cautious when updating your internal beliefs so as not to automatically assume that this story must be the truth of the matter. We need to carefully examine where our focus in self-improvement should lie moving forward, and it may not be the case that a revamping of our internal messaging is necessary (though it may very well be in the end; I certainly do not feel qualified to make that final call, only to point out what I recognize from experience as a temptingly powerful story beat which may influence it).

  1. ^

    Primarily on the Effective Altruism forum, but also on Twitter.

  2. ^

    See, e.g., "pro-fanaticism" messaging from some community factions, though it should be noted that this has always been a minority position.

  3. ^

    EDIT: Some in the comments have pointed out that since SBF has been involved with EA since pretty much forever, it's unlikely that he was sociopathically taking advantage of the community, and therefore we should not morally absolve ourselves. To this I have two primary responses: A) This may be the case, but do not mistake this objection as defeating the main point, which is that EA ideology was not necessarily the cause of this aspect of his life. We should definitely be introspective in considering how to prevent this in the future, but we should also not beat ourselves up unnecessarily if doing so would be counterproductive. B) It is unclear how deeply he actually believed in EA ideals, and how much of his public persona has been an act—anecdotes (and memes like this one, though I am unsure how much weight to put on that as evidence; probably fairly little) suggest the latter, though as someone who's never met him personally it's hard to say.

  4. ^

    With roughly 80% confidence, conditional on (1) no obviously true alternative story coming out about FTX that totally accounts for all their misdeeds somehow, and (2) this post (or one containing the same observation) not becoming widely cited (since feedback loops can get complex and I don't want to bother accounting for that).

52 comments

I don't buy this argument for a few reasons:

  • SBF met Will MacAskill in 2013 and it was following that discussion that SBF decided to earn to give
    • EA wasn't a powerful or influential movement back in 2013, but quite a fringe cause.
  • SBF had been in EA since his college days, long before his career in quantitative finance and later in crypto

 

SBF didn't latch onto EA after he acquired some measure of power or when EA was a force to be reckoned with, but pretty early on. He was in a sense "homegrown" within EA.

 

The "SBF was a sociopath using EA to launder his reputation" is just motivated credulity IMO. There is little evidence in favour of it. It's just something that sounds good to be true and absolves us of responsibility.

 

Astrid's hypothesis is not very credible when you consider that she doesn't seem to be aware of SBF's history within EA. Like, what's the angle here? There's nothing suggesting SBF planned to enter finance as a college student before MacAskill sold him on earning to give.

From what I've heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn't submit to him. This isn't very "EA" by the usual lights.

SBF seems to have successfully come across as a much friendlier and more trustworthy player than he actually is, in large part thanks to EA and to a propensity to be thankful when another large funder shows up.

[-]Ansel

From what I've heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn't submit to him. This isn't very "EA" by the usual lights.

 

It's not immediately clear to me that this isn't a No True Scotsman fallacy.

You may draw what conclusions you like! It's not my intention to defend EA here.

Here's an attempt to clarify my outlook, though my words might not succeed:

To the extent EA builds up idealized molds to shove people into to extract value from them, this is fucked up. To the extent that EA then pretends people like Sam or others in power fit the same mold, this is extra fucked up. Both these things look to me to be rampant in EA. I don't like it.

That does clarify where you're coming from. I made my comment because it seems to me that it would be a shame for people to fall into one of the more obvious attractors for reasoning within EA about the SBF situation, e.g., an attractor labelled something like "SBF's actions were not part of EA because EA doesn't do those Bad Things".

Which is basically on the greatest-hits list for how groups of humans (not necessarily centrally unified ones) have defended themselves from losing cohesion over the actions of a subset, anytime in recorded history. Some portion of the reasoning on SBF in the past week looks motivated in service of the above.

The following isn't really pointed at you, just my thoughts on the situation.

I think there's nearly unavoidable tension in trying to float arguments that deal with the optics of SBF's connection to EA from within EA, which is a thing that is explicitly happening in this thread. Standards of epistemic honesty are in conflict with the group's need to hold together. While the truth of the matter is and may remain uncertain, if SBF's fraud was motivated wholly or in part by EA principles, that connection should be taken seriously.

 

My personal opinion is that, the more I think about it, the more obvious it seems that several cultural features of LW adjacent EA are really ideal for generating extremist behavior. People are forming consensus thought groups around moral calculations that explicitly marginalize the value of all living people, to say nothing of the extreme side of negative consequentialism. This is all in an overall environment of iconoclasm and disregarding established norms in favor of taking new ideas to their logical conclusion.
 
These are being held in an equilibrium by stabilizing norms. At the risk of stating the obvious, insofar as the group in question is a group at all, it is heterogeneous; the cultural features I'm talking about are also some of the unique positive values of EA. But these memes have sharp edges.

From what I've heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn't submit to him.

Whoa, I did not hear about this despite trying nontrivially hard to figure out what happened when I was considering whether to take a job there in mid-to-late 2019 (and I also did not hear about it afterwards). I think I would've made pretty different decisions both then and afterwards if I'd had the correct impression.

Specifically, I knew about the management team leaving in early 2018 (and I guess the "fucked over" framing was within my distribution, but I didn't know the details). I did not in any way know about the investors being fucked over.

From what I've heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn't submit to him.

Link?

I'm drawing on multiple (edit: 3-5) accounts from people I know who were involved at the time, and chose to leave. I don't think much is written up yet, and I hope that changes soon.

If true, definitely makes him seem like an unpleasant character on the inside. 

In any case, the folks over in EA leadership really should have done some more due diligence before getting enmeshed. The management team leaving in 2018 should already have been a really strong signal, and ignoring it is the sign of amateurs.

[-]lc

The default hypothesis should be that, while his EA ambitions may have been real, SBF's impetus to steal from his users had little to nothing to do with EA and everything to do with him and his close associates retaining their status as rich successful startup founders. Sam & crew were clearly enjoying immense prestige derived from their fame and fortune, even if none of them owned a yacht. When people in that position prop Alameda up with billions of dollars of user funds rather than give up those privileges, I think the reasonable assumption is that they're doing it to protect that status, not save the lightcone. I find it highly odd that no one has mentioned this as a plausible explanation.

I'm not sure why that should be the default hypothesis. Do you have specific information about them in particular, or is that based on general psychology? "Power corrupts" is a common saying, but how strong is the effect really? I'd like to see more evidence of that.

When someone in a position where they stand to lose a lot commits fraud to stop that happening, the default assumption is they did it to save their own skin, not for any higher motives. Or never ascribe to ideals what can be ascribed to selfishness.

It depends on when the "stealing" began. I haven't followed the thing closely enough to know. Banks reinvest funds too; it's just more regulated.

Sam has engaged with EA ideas early on and shown a deep understanding and even obsession with them long before it would have given him massive benefits to associate with EA. So, I think your point is almost certainly false, but it could've been true in a similar situation, and that's really important to be aware of. 

I don't think this changes anything. It's still possible for someone with EA motivations to have dark triad traits, so I wouldn't say "he was motivated by EA principles" implies that the same thing could've happened to almost anyone with EA principles. (What probably could've happened to more EAs is being complicit in the inner circle as lieutenants.)

"Feeling good about being a hero" is a motivation that people with dark triad traits can have just like anyone else. (The same goes for being deeply interested and obsessed with certain intellectual pursuits, like moral philosophy or applying utilitarianism to your life.) Let's assume someone has a dark triad personality. I model people like that as the same as a more neurotypical person except that they: 

  • Feel about 99.9-100% of people the way I feel about people I find annoying and unsympathetic.
  • Don't have any system-1 fear of bad consequences. Don't have any worries related to things like guilt or shame (or maybe do have issues around shame, but it expresses itself more in externalizing negative emotions like jealousy and spite).
  • Find it uncannily easy to move on from close relationships or to switch empathy on and off at will as circumstances change regarding what's advantageous for them (if they ever form closer connections in the first place).

There are more factors that are different, but with some of the factors you wonder if they're just consequences of the above. For instance, being power-hungry: if you can't find meaning in close relationships, what else is there to do? Or habitual lying: if you find nearly everyone unsympathetic and annoying and you don't experience the emotion of guilt, you probably find it easier (and more pleasant) to lie.

In short, I think people with dark triad traits lack a bunch of prosocial system-1 stuff, but they can totally aim to pursue system-2 goals like "wanting to be a hero" like anyone else. 

(Maybe this is obvious, but sometimes I hear people say "I can't imagine that he isn't serious about EA" as though it makes other things about someone's character impossible, which is not true.) 

SBF had sociopathic personality traits and was clearly motivated by EA principles. If you look at people who commit heinous acts in the name of just about any ideology, they will likely have sociopathic personality traits, but some ideologies can make it easier to justify taking sociopathic actions (and to acquire the resources/followers to do so).

[-]lc

Who are you replying to?

Double-posted as an afterthought and kept the comments separate because they say separate things (so people can vote separately).

The type of view that the second comment ("I don't think this changes anything") is proactively replying to is this one:

(Maybe this is obvious, but sometimes I hear people say "I can't imagine that he isn't serious about EA" as though it makes other things about someone's character impossible, which is not true.) 

I'd like to submit SBF being vegan as strong Bayesian Evidence that this narrative is, in fact, entirely correct. (Source: Wikipedia.)

For me, having listened to the guy talk is even stronger evidence since I think I'd notice it if he was lying, but that's obviously not verifiable.

[-]Yitz

For me, having listened to the guy talk is even stronger evidence since I think I'd notice it if he was lying, but that's obviously not verifiable.

Going to quote from Astrid Wilde here (original source linked in post):

i felt this way about someone once too. in 2015 that person kidnapped me, trafficked me, and blackmailed me out of my life savings at the time of ~$45,000. i spent the next 3 years homeless.

sociopathic charisma is something i never would have believed in if i hadn't experienced it first hand. but there really are people out there who spend their entire lives honing their social intelligence to gain wealth, power, and status.

most of them just don't have enough smart but naive people around them to fake competency and reputation launder at scale. EA was the perfect political philosophy and community for this to scale....

I would really very strongly recommend not updating on an intuitive feeling of "I can trust this guy," considering that in the counterfactual case (where you could not, in fact, trust the guy), you would be equally likely to have that exact feeling!
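To put the same point in likelihood-ratio terms (a rough sketch, with notation of my own rather than anything from the thread): let $T$ be "he is actually trustworthy" and $F$ be "he gives off a strong 'I can trust this guy' impression." If a skilled conman produces $F$ at least as reliably as an honest person does, then

$$\frac{P(F \mid T)}{P(F \mid \neg T)} \lesssim 1,$$

so by Bayes' rule the posterior odds $P(T \mid F) : P(\neg T \mid F)$ are no better than the prior odds, and the gut feeling carries essentially no evidential weight.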

As for SBF being vegan as evidence, see my reply to you on the EA forum.

I would really very strongly recommend not updating on an intuitive feeling of "I can trust this guy," considering that in the counterfactual case (where you could not, in fact, trust the guy), you would be equally likely to have that exact feeling!

Fictional support:

Romana: You mean you didn't believe his story?

The Doctor: No.

Romana: But he had such an honest face.

The Doctor: Romana, you can't be a successful crook with a dishonest face, can you?

Doctor Who

[-]lc

How do you know he is vegan? A sociopath would have no problem eating vegan in public and privately eating meat in order to keep up a narrative.

Early EA was not a productive environment for sociopaths or conmen. I don't buy that story. Faking veganism, for example, would be hard to sustain over such a long time for low expected reward. I think a more plausible story is that he changed. Many people change over time, especially if their peer group changes or if they acquire power.

Possible, but adds additional complexity to the competing explanation.

I don't think "He was pretending to be vegan" adds any more complexity to the "He was a conman" explanation than "He was genuinely a vegan" adds to the "He was a naive/cartoon-villain utilitarian" explanation?

Huh, didn't expect the different intuitions here (yay disagreement voting!). I do think pretending to be vegan adds substantial complexity; making such a big lifestyle adjustment for questionable benefit is implausible in my model. But I may just not have a good theory of mind for "sociopaths," as lc puts it.

I do agree that it adds complexity. But so does "He was actually a vegan". Of course the "He was actually a vegan" complexity is paid for in evidence of him endorsing veganism and never being seen eating meat. But this evidence also pays for the complexity of adding "He was pretending to be a vegan" to the "He was thoroughly a conman" hypothesis.

But so does "He was actually a vegan"

But not a lot, since highly idealistic people tend to be vegan, I think.

But didn't he project a highly idealistic image in general? Committing to donating to charity, giving off a luxury-avoiding vibe, etc. This gives evidence to narrow the conman hypothesis down from common conmen to conmen who pretend to be highly idealistic. And I'm not sure P(vegan|highly idealistic) exceeds P(claims to be vegan|conman who pretends to be highly idealistic).
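Spelling that comparison out (a rough sketch, in notation of my own): the relevant quantity for the observation $V$ = "publicly presents as vegan" is the likelihood ratio

$$\frac{P(V \mid \text{sincere idealist})}{P(V \mid \text{conman posing as an idealist})}.$$

If those two conditional probabilities are of comparable size, observing $V$ barely shifts the odds between the two hypotheses, even though $P(V)$ is low among people in general.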

I once saw a picture on Twitter claiming to disprove his veganism by showing him standing in front of his fridge, with eggs visible in the background. The veganism might be a lie.

Edit: here it is: https://twitter.com/SilverBulletBTC/status/1591403692246589444/photo/1

He lives in an apartment with multiple roommates. Pretty obvious explanation when there are multiple different egg cartons and JUST egg in there.

oh that makes sense lol

In the 80,000 Hours interview, Wiblin asks him about the vegan leafleting at university. That's more commitment to veganism than the average vegan has.

I'm pretty sure the tweet I saw was something similar to this. Would be happy to have this disproven as a hoax or something of course...

Thank you for writing this much-needed piece. EA can be quick to self-flagellation under the best of circumstances. And this is not the best of circumstances.

It seems like we're all getting distracted from the main point here. It doesn't even matter whether SBF did it, let alone why. What matters is what this says about the kind of world we have been living in for the last 20 years, and now for the last 7 days:

I strongly suspect[4] that ten years from now, conventional wisdom will hold the above belief as basically canon, regardless of further evidence in either direction. This is because it presents an intrinsically interesting, almost Hollywood-villain-esque narrative, one that will surely evoke endless "hot takes" which journalists, bloggers, etc. will have a hard time passing over. Expect this to become the default understanding of what happened (from outsiders at least), and prepare accordingly.

The fact that LessWrong is vulnerable to this, let alone EA, is deeply disturbing. Smart people are supposed to automatically coordinate around this sort of thing, because that's what agents do, and that's not what's happening right now. This is basically a Quirrell moment in real life; a massive proportion of people on LW are deferring their entire worldview to obvious supervillains.

This is basically a Quirrell moment in real life; a massive proportion of people on LW are deferring their entire worldview to obvious supervillains.

Who are the obvious supervillains that they're deferring their entire worldview to? And who's deferring to them?

This comment had negative karma when I looked at it. I don't think we as a community should be punishing asking honest questions, so I strong-upvoted this comment.

[-]lc

He's not saying LessWrong is vulnerable to it; he's saying it's just what people outside of LessWrong are going to believe. He's explicitly mentioning it so that people don't automatically take it at face value.

You are correct that I was not explicitly saying that LessWrong is vulnerable to this (except for the fact that this assumption hasn't really been pushed back on until nowish), but to be honest I do expect some percentage of LessWrong folks to end up believing this regardless of evidence. That's not really a critique of the community as a whole though, because in any group, no matter how forward-thinking, you'll find people who don't adjust much based on evidence contrary to their beliefs.

Sam Bankman-Fried did what he did primarily for the sake of "Effective Altruism," as he understood it. Even though from a purely utilitarian perspective his actions were negative in expectation, he justified the fraud to himself because it was "for the greater good." As such, poor messaging on our part[2] may be partially at fault for his downfall.

Without knowing his calculation, it's hard to know whether his actions were negative or positive in expectation given his values.

If you believe that each future person is as valuable as each present person and there will be 10^100 people in the future lightcone, the number of people hurt by FTX blowing up is a rounding error.

In his 80,000 Hours interview, Sam Bankman-Fried talks about how he thinks taking a high-risk, high-upside approach is very valuable. Alameda investing billions of dollars of FTX customers' money is a high-upside bet.

Being certain at this point that his actions were negative in expectation looks to me like highly motivated reasoning by people who don't like to look at the ethics underlying effective altruism. They are neither willing to say that maybe Sam Bankman-Fried did things right, nor willing to criticize the underlying ethical assumptions.

His 80,000 Hours interview suggests that he thought the chance of FTX blowing up was somewhere between 1% and 10%. There he gives 50% odds of making more than 50 billion dollars that could be donated to EA causes.

If someone is saying that his action was negative in expectation, do they mean that Sam Bankman-Fried lied about his expectations? Do they mean that a 10% chance of this happening should have been enough to tilt the expectation to be negative under the ethical assumptions of longtermism, which place most of the utility in the far future? Are you saying something else?
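To make the question concrete, here is a back-of-the-envelope sketch using only the figures above; $V$ (the value of donating more than 50 billion dollars) and $H$ (the total harm of a blow-up) are placeholders, not estimates:

$$E[\text{value}] \approx 0.5\,V - p\,H, \qquad 0.01 \le p \le 0.10.$$

On these numbers the expectation is negative only if $H > 0.5\,V/p$, i.e. only if the harm of a collapse is somewhere between 5 and 50 times the value of the donations, which is exactly where the disagreement about indirect and long-term effects comes in.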

His 80,000 Hours interview suggests that he thought the chance of FTX blowing up was somewhere between 1% and 10%. There he gives 50% odds of making more than 50 billion dollars that could be donated to EA causes.

If someone is saying that his action was negative in expectation, do they mean that Sam Bankman-Fried lied about his expectations? Do they mean that a 10% chance of this happening should have been enough to tilt the expectation to be negative under the ethical assumptions of longtermism, which place most of the utility in the far future? Are you saying something else?

I wish I had any sort of trustworthy stats about the success rate of things in the reference class of "steal from one pool of money to cover up losses in another pool of money, in the hope of making (and winning) big bets with the second pool to eventually make the first pool whole." I would expect the success rate to be very low (I would be extremely surprised if it were as high as 10%, somewhat surprised if it were as high as 1%), but it's also the sort of thing where, if you do it successfully, probably nobody finds out.

Do Ponzi schemes ever become solvent again? What about insolvent businesses that are hiding their insolvency?

Zombie banks would be one type of organization in that reference class. 

If you believe that each future person is as valuable as each present person and there will be 10^100 people in the future lightcone, the number of people hurt by FTX blowing up is a rounding error.

 

But you have to count the effect of the indirect harms on the future lightcone too. There's a longtermist argument that SBF's (alleged and currently very likely) crimes plausibly did more harm than all the wars and pandemics in history if...

  • Governments are now 10% less likely to cooperate with EAs on AI safety
  • The next 2 EA mega-donors decide to pass on EA
  • (Had he not been caught:) The EA movement drifted towards fraud and corruption
  • etc.

You are however only counting one side here. SBF appearing successful was a motivating example for others to start projects that would have made them mega-donors.

Governments are now 10% less likely to cooperate with EAs on AI safety

I don't think that's likely to be the case. 

The next 2 EA mega-donors decide to pass on EA

There's some unclearness here about what "pass on EA" means. Zvi wrote about the Survival and Flourishing Fund not being an EA fund.

How to model all the related factors is complicated. Saying that you easily know the right answer to whether the effects are negative or positive in expectation without running any numbers seems to me unjustified. 

You are however only counting one side here

 

In that comment I was only offering plausible counter-arguments to "the number of people hurt by FTX blowing up is a rounding error."

How to model all the related factors is complicated. Saying that you easily know the right answer to whether the effects are negative or positive in expectation without running any numbers seems to me unjustified. 

I think we basically agree here.

I'm in favour of more complicated models that include more indirect effects, not fewer.

Maybe the difference is: I think in the long run (over decades, including the actions of many EAs as influential as SBF) an EA movement that has strong norms against lying, corruption and fraud actually ends up more likely to save the world, even if it gets less funding in the short term. 

The fact that I can't predict and quantify ahead of time all the possible harms that result from fraud doesn't convince me that those concerns are unjustified.

We might be living in a world where SBF stealing money and giving $50B to longtermist causes very quickly really is our best shot at preventing AI disaster, but I doubt it. 

Apart from anything else I don't think money is necessarily the most important bottleneck.

We already have an EA movement where the leading organization has no problem editing out elements of a picture it publishes on its website because of possible PR risks. While you can argue that this is not literally lying, it comes very close, and it suggests the kind of environment that does not have the strong norms that would be desirable.

I don't think FTX/Alameda doing this in secret strongly damaged general norms against lying, corruption, and fraud.

Them blowing up like this actually is a chance to move toward those norms. It's a chance to look at ethics in a different way and make it clearer that being honest and transparent is good.

Saying "poor messaging on our part" which resulted in "actions were negative in expectation in a purely utilitarian perspective" is a way to avoid having the actual conversation about the ethical norms that might produce change toward stronger norms for truth. 

I am still super confused about why there was apparently no due diligence by the EA leadership, assuming there is such a thing. At least Enron and Mt. Gox had no one to oversee them. Are they just that gullible? (Also see my question about most places not having a single person responsible for risk assessment and mitigation.) I would assume that rationality spinoffs would pay attention to Bayes and probabilities.

I think approximately no one audits people's books before accepting money from them. It's one thing to refuse to accept money from a known criminal (or other type of undesirable), but if you insist that the people giving you money prove that they obtained it honestly, then they'll simply give that money to someone else instead.

I am still super confused about why there was apparently no due diligence by the EA leadership, assuming there is such a thing. At least Enron and Mt. Gox had no one to oversee them.

Enron was overseen and audited by Arthur Andersen.

EA leadership did not have a good way to audit FTX and find out that they had loaned user funds to Alameda.

Arthur Andersen, a team of professional analysts who actually had access to the books, seems to me a lot more guilty of failing at oversight than people at CEA or other EA orgs.

One would think—unfortunately, we humans are really bad at judging our own ability to judge the trustworthiness of other people, even when we know about said bias. When hiring a friend or trusted community leader for a high-stakes job, many people won't even bother with an NDA, let alone do any formal investigation into their honesty! Hopefully this will serve as a lesson that won't have to be repeated...

It is a problem with the algorithms that implemented the attention. It's not the messaging but the interaction patterns that embed the mistake, the patterns that both encouraged trusting him and encouraged him to see EA as a good place in which to be trusted. He did actually donate a bunch of money to altruistic causes while fucking up the EV calculation; he may have been fooling himself, but it is usually the case (correlation) that the behaviors one sees in an environment are the behaviors the environment causes, even if you're wrong about which part of the environment is doing the causing. Because correlation isn't inherently causation, this heuristic does sometimes fail; still, it's more reliable than most correlation-as-causation reasoning, because environments do have a lot of influence over what is possible. If the true path was that he manipulated EAs, then that's an error EA needs to repair and publicly communicate, by virtue of its being introspectable by other human beings; if instead EA actually encouraged this de novo rather than merely being infectable by it, then that is slightly worse, but it still has a solution that looks like figuring out how to build immunity, so such misbehavior can reliably be trusted not to happen again. Building error-behavior immunity is a difficult task, especially because it can produce erroneous immune matches if people blame the wrong part of the misbehavior.

The alignment problem was always about inter-agent behavior.