Murder is just a word. ... SBF bites all the bullets, all the time, as we see throughout. Murder is bad because look at all the investments and productivity that would be lost, and the distress particular people might feel
You are saying this as if you disagreed with it. In this case, I'd like to vehemently disagree with your disagreeing with Sam.
Murder really is bad because of all the bad things that follow from it, not because there is some moral category of "murder", which is always bad. This isn't just "Sam biting all the bullets", this is basic utilitarianism 101, something that I wouldn't even call a bullet. The elegance of this argument and arguments like it is the reason people like utilitarianism, myself included.
Believing this has, in my opinion, morally good consequences. It explains why murdering a random person is bad, but very importantly does not explain why murdering a tyrant is bad, or why abortion is bad. Deontology very easily fails those tests, unless you're including a lot of moral "epicycles".
Excessive bullet biting in the pursuit of elegance is a road to moral ruin. Human value is complex. To be a consistent agent in Deontology, Virtue Ethics, or Utilitarianism, you necessarily have to (at minimum) toss out the other two. But morally, we actually DO value aspects of all 3 - we really DO think it's bad to murder someone apart from the consequences of doing so, and trying to justify that moral intuition with consequentialist reasons feels like adding epicycles, when there is indeed a deontological core to some of our moral intuitions. Of course, there's also a core of utilitarianism and virtue ethics that would also suggest not murdering - but throwing out things you actually value in terms of your moral intuitions in the name of elegance is bad, actually.
But when you think that suffering is the thing that matters, you confuse the map for the territory, the measure for the man, the math with reality.
I like this quote a lot—I feel like it captures a lot of why I don't like suffering-focused ethics. It also seems very related to beliefs about the moral value of animals: my guess is that a wide variety of non-human animals can experience suffering, but very few can live a meaningful and fulfilling life. If you primarily care about suffering, then animal welfare is a huge priority, but if you instead care about meaning, fulfillment, love, etc., then it's much less clearly important.
I also like the quote. I consider meaning and fulfillment of life goals morally important, so I'm against one-dimensional approaches to ethics.
However, I think it's a bit unfair that just because the quote talks about suffering (and not pleasure/positive experience), you then go on to talk exclusively about suffering-focused ethics.
Firstly, "suffering-focused ethics" is an umbrella term that encompasses several moral views, including very much pluralistic ones (see the start of the Wikipedia article or the start of this initial post).
Second, even if (as I do from here on) we assume that you're talking about "exclusively suffering-focused views/axiologies," which I concede make up a somewhat common minority of views in EA at large and among suffering-focused views in particular, I'd like to point out that the same criticism (of "map-and-territory confusion") applies just as much, if not more strongly, against classical hedonistic utilitarian views. I would also argue that classical hedonistic utilitarianism has had, at least historically, more influence among EAs and that it describes better where SBF himself was coming from (not that we should give much weight to this last bit).
To ...
Obviously I agree with this. I find it strange that you would take me to be disagreeing with this and defending some sort of pure pleasure version of utilitarianism. What I said was that I care about "meaning, fulfillment, love"—not just suffering, and not just pleasure either.
That seems like a misunderstanding – I didn't mean to be saying anything about your particular views!
I only brought up classical hedonistic utilitarianism because it's a view that many EAs still place a lot of credence on (it seems more popular than negative utilitarianism?). Your comment seemed to me to be unfairly singling out (strongly/exclusively) suffering-focused ethics. I wanted to point out that there are other EA-held views (not yours) to which the same criticism applies just as much or (arguably) even more strongly.
What thought process went into your guess that very few non-human animals can live a meaningful and fulfilling life? My guess is that many mammals and birds can live a meaningful and fulfilling life, though the phrase “meaningful and fulfilling” strikes me as hard to specify. I’m mostly thinking that having emotionally significant social bonds with other individuals is sufficient for a life to be meaningful and fulfilling, and that many mammals and birds can form emotionally significant social bonds.
Was there a reckoning, a post-mortem, an update, for those who need one? Somewhat. Not anything like enough.
I feel like you aren't giving enough credit here (and possibly just underestimating the strength of the effect?) IMO the EA community has had a reckoning, a post-mortem, an update, etc. far more than most social or political movements would (and do) in response to similar misbehavior from a prominent member. And for sufficiently large groups of people, there is no reckoning at all, because there is safety in numbers -- if a normal person commits a crime, other normal people who haven't committed crimes yet don't feel any pressure to be less normal.
I'm curious to operationalize forecasting questions based on this. Maybe something like "will there be another instance of a prominent EA committing fraud?"
a post-mortem
What is this f***ing post-mortem? What was the root-cause analysis? Where is the list of changes that have been made to prevent an impulsive and immoral man like Sam taking tons of resources, talent and prestige from the Effective Altruism ecosystem and performing crimes of a magnitude for which a typical human lifetime is not long enough to make right? Was it due to the rapid growth beyond the movement's ability to vet people? Was it due to people in leadership being afraid to investigate accusations of misbehavior? What was the cause here that has been fixed?
Please do not claim that things have been fixed without saying concretely what you believe has been fixed. I have seen far too many people continue roughly business as usual. It sickens me.
Do you agree with my comparative claim? EA vs. Democrats or Republicans, for example, or EA vs. Social Justice, or EA vs. idk pick some other analogous movement.
I could make a bigass list of EA forum and LW posts arguing about how to interpret what happened and lashing out with various bits of blame here and there. Pretty much all of the lessons/criticisms Zvi makes in this post have been made multiple times before. Including by e.g. Habryka, whom I respect greatly and admire for doing so. But I don't feel motivated to make this list and link it here because I'm pretty sure you've read it all too; our disagreement is not about the list but whether the list is enough.
Notice, also, that I didn't actually say "The problem is fixed." I instead expressed doubt in the "not anything like enough" claim. I mused that it would be good to make some forecastable predictions. This was because I myself am unsure about what to think here. I have appreciated the discussions and attempted lesson-taking from the SBF disaster and I'm glad it's happening & I support doing more of it.
[I feel like this conversation is getting somewhat heated btw; if you like I'd be happy to have a phone or vid...
I think the following can be and are both true at once:
I certainly agree this is possible. Insofar as you think that's not only possible but actual, then thanks, that's a helpful clarification of your position. Had you said something like this above I probably wouldn't have objected, at least not as strongly, and instead would have just asked for predictions.
IIRC Will MacAskill admitted to major fault...
I read this as an admission of guilt and responsibility. What do you wish he had said?
Does it matter what he said? What has he done? As far as I'm aware he is mostly getting along with being a prominent figurehead of EA and a public intellectual.
Also this is hardly an admission of guilt. It primarily says "This seems bad and I will reflect on it." He didn't say
"This theft of many thousands of people's life savings will forever be part of the legacy of Effective Altruism, and I must ensure that this movement is not responsible for something even worse in the future. I take responsibility for endorsing and supporting this awful person and for playing a key role in building an ecosystem in which he thrived. I have failed in my leadership position and I will work to make sure this great injustice cannot happen again and that the causes are rectified, and if I cannot accomplish that with confidence within 12 months then I will no longer publicly support the Effective Altruism movement."
I read this as an admission of guilt and responsibility. What do you wish he had said?
I think it's a decent opening and it clearly calls for reflection, but you might notice that indeed no further reflection has been published, and Will has not published anything that talks much about what lessons he has taken away from it.
To be clear, as I understand the situation Will did indeed write up a bunch of reflections, but then the EV board asked him not to because that posed too much legal and PR risk. I agree this is some evidence about Will showing some remorse, but also evidence that the overall leadership does not care very much about people learning from what happened (at least compared to increased PR and legal risk).
I think this is a potentially large cost of the fiscal sponsorship umbrella. Will can't take on the risk personally or even for just his org, it's automatically shared with a ton of other orgs.
That's what I told Will to do. He felt like that would be uncollaborative with broader EA leadership.
I wish he had said (perhaps after some time to ponder) "I now realize that SBF used FTX to steal customer funds. SBF and FTX had a lot of goodwill, that I contributed to, and I let those people and the entire community down.
As a community, we need to recognize that this happened in part because of us. And I recognize that this happened partly because of me, in particular. Yes, we want to make the world better, and yes, we should be ambitious in the pursuit of that. But we have been doing so in a way that we can now see can set people on extremely dark and destructive paths.
No promise to do good justifies fraud, or the encouragement of fraud. We have to find a philosophy that does not drive people towards fraud.
We must not see or treat ourselves as above common-sense ethical norms, and must engage criticism with humility. We must fundamentally rethink how to embody utilitarianism where it is useful, within such a framework, recognizing that saying 'but don't lie or do fraud' at the end often does not work.
I know others have worried that our formulation of EA ideas could lead people to do harm. I used to think this was unlikely. I now realize it was not, and that this was part of a predictable pattern that we must end, so that we can be a force for good once more.
I was wrong. I will continue to reflect in the coming months."
And then, ya know, reflect, and do some things.
The statement he actually made I interpret as a plea for time to process while affirming the bare minimum. Where was his follow-up?
I think Germany is an extreme outlier here fwiw; (e.g.) Japan did far worse things and after WW2 cared more about covering up wrongdoing than about admitting fault; further, Germany's governmental and cultural "reformation" was very much strongarmed by the US and other allies, whereas the US actively assisted Japan in covering up war crimes.
EDIT: See shortform elaboration: https://www.lesswrong.com/posts/s58hDHX2GkFDbpGKD/linch-s-shortform?commentId=ywf8R3CobzdkbTx3d
Virtually no one in EA would have approved of the manner by which Sam sought to make FTX more valuable.
I talked to many people about Sam doing shady things before FTX collapsed. Many people definitely endorsed those things. I don't think they endorsed stealing customer deposits, though honestly, my guess is a good chunk of people would have endorsed it if it hadn't resulted in everything exploding (and if it had just been a temporary dip into customer deposits).
I don't understand the second paragraph. Yes, Sam tricked people into depositing money onto his exchange, which he then used to fund a bunch of schemes, mostly motivated via EA and with the leadership team being substantially populated by EA people. Of course the customers didn't want to help EA, that's what made it a fraud. My guess is I am misunderstanding something you are trying to communicate.
Using the Hare Psychopathy Checklist, SBF seems to be a psychopath. The Checklist consists of 20 items. Each item is scored on a three-point scale, with a rating of 0 if it does not apply at all, 1 if there is a partial match or mixed information, and 2 if there is a reasonably good match. Here are my ratings. Most of my ratings follow quite directly from Zvi's account. I tried to err on the conservative side.
Disclaimer: I am a layperson, not a psychiatrist, and have no relevant training in this area. The Wikipedia page warns against laypersons applying the Checklist.
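The scoring scheme described above (20 items, each rated 0 for no match, 1 for a partial match, 2 for a good match) can be sketched as follows. The example ratings are placeholders for illustration, not the actual ratings from the comment:

```python
# Sketch of the Hare Psychopathy Checklist (PCL-R) scoring scheme described
# above: 20 items, each rated 0 (no match), 1 (partial/mixed information),
# or 2 (reasonably good match), for a maximum total of 40.

def pcl_total(ratings):
    """Sum the 20 item ratings, validating the 0-2 scale."""
    if len(ratings) != 20:
        raise ValueError("the Checklist has exactly 20 items")
    if any(r not in (0, 1, 2) for r in ratings):
        raise ValueError("each item is rated 0, 1, or 2")
    return sum(ratings)

# Hypothetical example: ten items rated 2, five rated 1, five rated 0.
example = [2] * 10 + [1] * 5 + [0] * 5
print(pcl_total(example))  # 25 out of a maximum of 40
```

Note that clinical use of the Checklist also involves a diagnostic cutoff applied by a trained rater; the sum alone, as the disclaimer above says, is not something a layperson should treat as a diagnosis.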
Thanks for taking the time to do this. I'm not really a fan of the way you approach writing up your thoughts here. The post seems high on snark, rhetoric and bare assertion, and low on clarity, reasoning transparency, and quality of reasoning. The piece feels like you are leaning on your reputation to make something like a political speech, which will get you credit among certain groups, rather than a reasoned argument designed to persuade anyone who doesn't already like you. For example, you say:
...But at least the crazy kids are trying. At all. They get to
This seems to be misunderstanding several points I was attempting to make so I'll clear those up here. Apologies if I gave the wrong idea.
Promoted to curated: I am generally quite hesitant to curate posts like this, since I like the focus of LessWrong to be on content that is generally timeless and is about deeper patterns in the world and the art of reasoning. However, I do think that despite this post being in some sense about recent events, it has a lot of content that is broadly relevant to a lot of people, and also, as recent events go, the whole FTX and SBF thing is quite high up there in terms of probably still being relevant in 10 years or so.
This post feels l...
it died away quickly,
Citation needed? In my experience it seems like the change has been permanent.
Well, we probably aren't talking to the same people. But (a) the people I know haven't regressed to thinking SBF was good actually. They still think that was a huge disaster, we were wrong to trust him and work for him, we were gullible, too naively consequentialist, etc. And (b) they still seem to have made a generic update towards common-sense morality and a bigger generic update towards integrity/honesty being important. As a result, I claim (c) that if something like SBF started to happen again, many people would, like antibodies, speak up and crush it before it got nearly so big. In fact now that I mention it I think there are even two or three examples of this happening already (immune system reactions).
To be clear I'm not confident in any of this and I'm still worried, especially about the naive consequentialism.
My original comment was "citation needed," could you at least say more about why you think the change died away quickly? Maybe give some examples of bad behavior (possibly anonymized if you like) that happened pre-SBF, stopped happening after SBF, and then started up again?
a bigger generic update towards integrity/honesty being important.
This seems backwards to me. For example, Open Phil as a grantmaker substantially increased the degree to which they are concerned about PR and how much they should overall obfuscate or distort, with the reasoning that FTX’s collapse substantially increased the risk that various people would take opportunities to attack EA-affiliated things, and also clearly demonstrated that PR risks are real and have really bad consequences.
In general I've mostly seen people update that EA should now try harder to not look bad and to be more concerned about our reputation, in a way I think quite straightforwardly trades off against honesty and integrity.
To be clear, this is not universal, and some people I know have updated in ways that put more emphasis on integrity, but I think most of the update is backwards.
that the misalignment issues involved would have almost certainly destroyed us all, or all that we care about.
How?
Great review and summary!
I followed the aftermath of FTX and the trial quite closely and I agree with your takes.
Also +1 to mentioning the suspiciousness around Alameda's dealings with tether. It's weird that this doesn't get talked about much, so far.
On the parts of your post that contain criticism of EA:
...We are taking many of the brightest young people. We are telling them to orient themselves as utility maximizers with scope sensitivity, willing to deploy instrumental convergence. Taught by modern overprotective society to look for rule
Since everything these days is about AI, consider SBF as a misaligned AGI (or NGI?).
Cute analogy. I love it.
...We are taking many of the brightest young people. We are telling them to orient themselves as utility maximizers with scope sensitivity, willing to deploy instrumental convergence. Taught by modern overprotective society to look for rules they can follow so that they can be blameless good people, they are offered a set of rules that tells them to plan their whole lives around sacrifices on an altar, with no limit to the demand for such sacrifices. And then, in addition to telling them to in turn recruit more people to and raise more money for the cause, we
This post was enjoyable as heck to read. Thanks for taking the time to write it.
I guess I'm of two minds about the effective altruism of it all.
One one hand: It kinda just seems like a bunch of self-identified effective altruists, who were well-meaning but perhaps naive, got blinded by money and suckered into servitude by a smart and charismatic leader who was successful at scamming a lot of people. Maybe there isn't a big lesson about EA philosophy or the EA subculture. Maybe this is just like any other cult leader or con artist or corrupt CEO manip...
A shame Sam didn't read this:
But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself—this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe.
Great review. Brilliant excerpts, excellent analysis. My only quibble would be:
What Michael Lewis is not is for sale.
What leads you to this conclusion? I don't know much about Lewis, but based on his prior books I would've said one thing he is not is stupid, or bad at understanding people. I feel you have to be inconceivably ignorant to stand by SBF and suggest he probably didn't intentionally commit fraud, particularly in light of all the stories presented in the book.
Bizarre statements like "There’s still an SBF-shaped hole in the world that needs filling" have me speechless with no good explanation other than Lewis was on the take.
My expectation is that in the unlikely scenario that this attempted takeoff had fully succeeded, and SBF had gained sufficient affordances and capabilities thereby, that the misalignment issues involved would have almost certainly destroyed us all, or all that we care about. Luckily, that did not come to pass.
Yep. (Well, not "almost certainly" but I'd say "probably")
Just because he didn’t feel the emotion didn’t mean he couldn’t convey it. He’d started with his facial expressions. He practiced forcing his mouth and eyes to move in ways they didn’t naturally.
Unsure if it's for the same reasons, but Adolf Hitler also did this; he would deliberately practice making faces in front of a mirror to help finetune his speeches and interpersonal interactions.
There was a rush to deontology that died away quickly, mostly retreating back into its special enclave of veganism.
Can you explain what you mean by the second half of that sentence?
Vegans believe that they should follow a deontological rule, to never eat meat, rather than weighing the costs and benefits of individual food choices. They don't consume meat even when it is expensive (in various senses) to not do so. And they advocate for others to commit to doing likewise.
Whereas EA thinking in other areas instead says to do the math.
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
If Ray eventually found that the money was "still there", doesn't this make Sam right that "the money was really all there, or close to it" and "if he hadn’t declared bankruptcy it would all have worked out"?
Ray kept searching, Ray kept finding.
That would raise the amount collected to $9.3 billion—even before anyone asked CZ for the $2.275 billion he’d taken out of FTX. Ray was inching toward an answer to the question I’d been asking from the day of the collapse: Where did all that money go? The answer was: nowhere. It was still there.
I wasn't positive what I'd think of Going Infinite going in -- Lewis is obviously a great writer, but I've disliked more of his books than I've liked. I ended up reading it twice. It was interesting and somewhat fresh.
I think this review responds to Lewis's surprising take on SBF by treating Lewis as even more sympathetic than he actually was. The big criticism is fair (Lewis thinks SBF is more dumb than conniving), but a lot of the reporting on SBF's perspective doesn't seem fair to take as an endorsement of that perspective, and I think Lewis's take, even when likely wrong, is often far more interesting than pointing out that SBF is indeed a thoughtless asshole, etc.
That is not what profits mean. Your expenses count. Your payroll counts. This is absurd.
This seems like a normal use of profits for trading profits. It's, as you point out, not the firm's profit.
We need to get better at policing narcissism and psychopathy; arguing about anything else is a distraction.
Previously: Sadly, FTX
I doubted whether it would be a good use of time to read Michael Lewis’s new book Going Infinite about Sam Bankman-Fried (hereafter SBF or Sam). What would I learn that I did not already know? Was Michael Lewis so far in the tank of SBF that the book was filled with nonsense and not to be trusted?
I set up a prediction market, which somehow attracted over a hundred traders. Opinions were mixed. That, combined with Matt Levine clearly reporting having fun, felt good enough to give the book a try.
I need not have worried.
Going Infinite is awesome. I would have been happy with my decision on the basis of any one of the following:
The details I learned or clarified about the psychology of SBF in particular.
The details I learned or clarified about the psychology of Effective Altruism.
The details about all the crimes and other things that happened.
The sheer joy of reading, because man can Michael Lewis write.
I also get to write this post, an attempt to quickly share what I’ve extracted, including some of the sheer joy. We need more joy, now more than ever.
There are three problems with Going Infinite.
Michael Lewis fails to put two and two together regarding: Who is this guy?
Michael Lewis fails to figure out that obviously this man was constantly lying and also did all of the crimes.
Michael Lewis omits or fails to notice key facts and considerations.
I do think all of these are genuine mistakes. He (still) is in the tank because character is fate and we are who we choose to be. Michael Lewis roots for the wicked smart, impossibly hard working, deeply obsessed protagonist taking on the system saying that everyone else is an idiot, that has unique insight into and will change the world. It all makes too much sense, far too much for him to check.
What Michael Lewis is not is for sale. Or at least, not for cheap. I do not think anyone paid him. Like all worthy protagonists, including those he looks to cover, Michael Lewis has a code. In this case, the code did him wrong. It happens.
Then at the trial it turns out, among many other things, your hero selected from seven balance sheet variations, and he gave their hedge fund the faster trade execution he kept swearing he didn’t give them, and the insurance fund he kept talking about was, in its entirety, a literal call to a random number generator.
Let’s have some fun, rant a bunch, and also explain it all. I’d like to solve the puzzle.
While also pointing out the puzzles that remain unsolved.
[Note: Unattributed quotes here are from the book. The number refers to the Kindle location of the quote. The ability to easily do this is why I read such books on Kindle.]
Who Was This Guy?
That’s the central mystery of the book. It’s not the money. It’s SBF. Who was this guy?
The book solves this mystery, despite Lewis not noticing he has done so.
This is a very raw-G ‘smart’ person, who manufactured an entirely artificial superficial charm, has grandiose self-worth, pathologically lies, is endlessly manipulative, lacks remorse or guilt, has extreme emotional shallowness, fails to accept responsibility for anything ever, needs stimulation constantly to the point of constantly fidgeting, never sleeping and playing video games during television appearances, is constantly impulsive and irritable and irresponsible, has goals like going infinite and mostly does things without any plan or vision at all, did all the crimes and the first opportunity he got had his bail revoked, although Lewis seems to be in denial about the crimes and the bail got revoked after the book’s events.
There are two other things on the list I’m drawing from there, but I think we get the point? This should not be a hard type to recognize once we have everything in front of us. It also is presumably not an especially new personality type to the author of The Big Short, Flash Boys and Liar’s Poker. I mean, come on.
Nor is this a type of person who we could consider might not be committing fraud if you put him in charge of a crypto exchange. There would not even be a distinction in their head between ‘fraud’ and ‘not fraud,’ between ‘I tell truth’ and ‘I tell lie’ or between ‘customer money’ and ‘money.’
To them, there are only actions and (some of their) consequences. If the customer asks for their money and you don’t have it, or people find out you don’t have the money, or that you said you had the money and you didn’t (or that you took the money), people might get mad. They might demand their money back. Don’t let that happen. That would be bad. But also don’t worry about it.
Is that ‘fraud sort of happened?’ Is it ‘super ultra fraud from day one?’ Yes.
What about the Effective Altruism and the Benthamite Utilitarianism? Was that for real? Yes, in an abstract intellectual way. Number go up. There needs to be a McGuffin. A utility function. A justification for everything. This provided one.
If SBF was not so impulsive and impatient, we would not be able to tell. This is the Orthogonality Thesis and Instrumental Convergence. A proper SBF, with an actual linear utility function and expected impact curve, with any sane discount rate, would not be in the business of shoveling money out the window well before he’d maxed out his ability to provide himself with operating capital and guard against downside risks.
Instead, he would have done the amount necessary to convince most people he was sincere, as this would serve his purposes. Optimal fully fake SBF would be vegan and drive a Toyota Corolla. This was different. Next level. Could he fool me this way?
Sure, if he wanted to. But I believe him exactly because there would be no value in investing this much in order to fool me. We see him shoveling tons of money out the door, well in advance of any reasonable pace for doing so and in deeply irresponsible fashion, often in ways that plausibly make things worse while also putting him at risk.
That does not mean his positions on any of this were coherent, or optimized, or made any sense, or were good, or anything like that. It does not mean that if he had made his utilitarian Number Go Up, I or you would have liked that world, or that his play was +EV even by his own metrics. It also does not mean that his motivations would have survived as he gained further wealth and power. It does mean that I buy that he wanted to make the utilitarian Number Go Up as he saw it, up until the end.
In particular, this caught my eye:
At this time, they were borrowing at 50% (!) interest in order to trade, were very much in danger of going broke, very much liquidity constrained, and the claim is they donated much or all of their yearly profits. This is a completely crazy thing to do, in a way that could not possibly have had sufficient signaling value to compensate for it, especially since it also sends other highly negative signals.
So I am inclined to believe him.
Also, that is not how any of this works. That is not what profits mean. Your expenses count. Your payroll counts. This is absurd.
Oh, and by the way: This is their 2018 deck, the same year, which claims >100% consistent annualized returns (and ‘no risk’).
So, yes. A fraud from the start.
What about the Vox interview with Kelsey Piper? Didn’t SBF admit to having no ethics, to it all being a lie? Well, kind of. He admitted that he plays lots of stupid signaling games and pretends to care about various issues including woke ones, and that he treats all that with contempt. He showed he cares not one bit for ethics, that reputation is only instrumental to him.
But all of that is totally compatible with being a true believer in EA and Benthamite Utilitarianism. In a post-book interview, Lewis calls the interview an aberration. But it wasn’t an aberration. It was peak Sam all the way.
How did he get to be that way? We’ll return to that at the end of the story.
Where Was This Guy?
This proved a bigger question than one might expect, well before SBF had any reason to be on the run. According to Lewis, SBF tasked a woman named Natalie, with no relevant prior experience in such matters, to manage all of his logistics and scheduling and also PR. And then SBF systematically ignored her advice, caused constant shitshows, and failed to tell her even where he was going to be or whether he intended to keep any of his prior commitments.
Luckily, she was a quick study.
Why? Because Sam did not care about keeping his word or commitments. At all. Not unless he could point to concrete negative consequences of not doing so, which mostly did not much bother him. He cared about what he felt like doing, what seemed worth doing.
[Gamer’s note: Sam did love bughouse, which is 4-player chess played on two boards. In theory it is indeed solvable, in practice there are enough variables you have to wing it. But this emphasizes that what matters to Sam is almost certainly that something is probabilistic in practice, not in theory.]
This was less ‘calculated’ than ‘some math,’ by which we mean something between a Fermi estimate, a motivated five-second approximation, and an ass pull. You can throw together numbers that justify whatever, if that is what you are inclined to do. Somehow Lewis thinks that ‘done some math in his head’ does not represent ‘a whim’ or ‘thoughtlessness.’ Yes, Sam thought he had something better to do with his time; he didn’t feel like doing what he said he would do. Call that exactly what it always is.
It gets easier if you give exactly zero consideration to the costs you impose on others, or how they might react, or the mess others will need to clean up, or any ethical considerations, or any second-order or other considerations he didn’t notice when giving this a few seconds of thought. Sam would likely object this is not quite right, he does consider the cost to the person inconvenienced, but doesn’t care more about the person he’s inconveniencing than he would to people halfway across the world, so the size is trivial and who cares really? Think of all the good Sam can do by leaving you alone at lunch.
It also gets easier if you decide to treat your utility of money as linear, despite this being completely Obvious Nonsense on so many levels, such as whether not having enough money might suddenly be a real problem that meant the music would stop, and many clear acknowledgements that he had zero idea how to efficiently deploy even the money he already had.
Who Was This Guy as a Kid?
Books like this ask such questions. People think it matters. So here are some quotes?
Yes. It actually is many people’s idea of fun. What interests me here is Barbara’s reaction. Why is she busted? The correct answer is ‘No, I’m here for you, and I’d be happy if you were having fun. A lot of kids find this kind of thing fun and I thought you might as well, but it is clear you aren’t.’
The idea that you as a parent have to not only take them on but also enjoy their kid activities is so toxic. Kid activities, like Trix, are for kids.
Then his mother realized that SBF was instead interested in talking about real things.
This must have been so amazing. I can’t wait for this to happen to me with my kids.
And yet, she says everything changed, but it does not sound like she updated enough:
School drove SBF away from books by forcing him to engage in stupid ways with stupid books, which SBF would later justify with arguments about information density.
Seriously, what the actual f*** was SBF doing in a high school? (Also, why would he want to read the Harry Potter books a second time, one of the deeper unsolved mysteries remaining?) He was doing philosophy better than philosophers, off the cuff. He was bored out of his mind.
At their semi-famous deeply philosophical family dinners, SBF would hold his own against various guests. He wanted to talk and think about real things.
His parents decided to… send him to a more competitive high school?
They quite obviously should have instead sent him to Stanford.
Not that college ultimately went so well for Sam.
If he had been four years younger, perhaps it would have gone better.
Maybe they should have made him a gamer instead?
Or let him found a business or write or otherwise do something real.
Instead they did none of that. You give a child endless bullshit that can’t keep him engaged? He’s going to call you on it, whether it is an amusement park ride or pretentiousness.
Or… Shakespeare? Here’s the now-famous quote.
This is classic SBF thinking. Choose some considerations, ignore others’ opinions entirely and treat them as dumb, answer a different question than everyone else is asking, completely disregard aesthetics and history and really all context of any kind.
Here is early SBF doing philosophy to explain why the math says that actually murder is usually bad.
Murder is just a word.
So is fault?
Indeed, why would any of those other aspects matter? Why stay stuck in the past?
SBF bites all the bullets, all the time, as we see throughout. Murder is bad because look at all the investments and productivity that would be lost, and the distress particular people might feel. I hope there isn’t too much on the other side of that ledger today. Luckily this one stayed theoretical as far as we know, but it sure sounds like SBF has no ‘on principle’ objection to killing an innocent person if they were to be standing in his way and it was especially important to not miss his next meeting. Or they kept saying inconvenient things.
To be fair, although I think the quote is more insightful without it, I did delete an important sentence in between those two quotes, which was:
Why Was That Guy So Misaligned?
Since everything these days is about AI, consider SBF as a misaligned AGI (or NGI?).
Was Sam born a criminal? Of all Sam’s characteristics in the list I open with, one of the few conspicuously missing involves committing juvenile crimes.
Why would he? There was no point. Crime looked boring. Until it didn’t.
Having spent his entire childhood bored out of his mind, with no goal or utility function beyond not being bored, and having been sufficiently disabused of the virtues of academia, Sam did not know what to do. What would be a worthy goal or activity? Here we had this super smart person, lacking the motivations and interests that drive ordinary people, at a loss. What to do?
In stepped Will MacAskill, who suggested the goal of Number Go Up.
The utilitarian part of this equation, the part where people don’t matter as individuals including himself, was already there.
Of course from your perspective you must in important senses care about yourself more than other people. You must care about those around you, close to you, in a different way than others. Without this both your life and also society fall apart, the engine of creation stops, the defectors extract everything, and so on. The consequences, the utilitarian calculus, is self-refuting.
Even more than that, if you take such abstractions too seriously, if you follow the math wherever it goes without pausing to check whether wrong conclusions are wrong? If you turn yourself into a system that optimizes for a maximalist goal like ‘save the most lives’ or ‘do the most good’ along a simple metric? What do you get?
You get misaligned, divorced from human values, aiming for a proxy metric that, thanks to missing considerations, will often break even on the margin, and will break rather severely at scale if you gain too many affordances and push on it too hard, which is (in part, from one perspective) the SBF story.
Yet SBF did not take such concerns seriously. As many (very far from all!) others I know in EA do not take such concerns seriously. The math is treated as real, the metric as the map and the map as the territory, and so on.
MacAskill set SBF on a maximalist goal using an abstracted ungrounded simplified metric, hoping to extract a maximal amount of SBF’s resources for MacAskill’s (on their face altruistic) goals.
Did MacAskill understand the inevitable result? Would he have approved of the actions taken, or the consequences of them? No.
Nor do I expect the people who set other people and systems on such paths to appreciate, most of the time, what they are doing or what the consequences will be, either.
That does not change what MacAskill did: He took young SBF, a powerful agent, a proto-utilitarian in want of a functioning definition of utility and a willingness to bite all of the bullets and ignore all of the unprincipled reasons not to be a terrible person doing terrible things, and gave him a maximalist utility function to save the most lives possible (or do the most good possible, as defined by things that can be quantified and measured, and then added up linearly, with no risk aversion).
Then he pointed out and argued explicitly that the way to do that was via instrumental convergence. Rather than doing good or saving lives directly, you could maximize money, and then spend the money to do the good or save the lives. Which meant that SBF’s behaviors should look no different from anyone looking to make money, except when you give the money away afterwards. That was the intended path.
And then that led SBF directly into contact with finance and trading, and their zero-sum-style competitions, and to move from chess and Magic to trading as his puzzle of choice.
What happened with SBF will happen with an AI given a similar target, in terms of having misalignments that start out tolerable but steadily grow worse as capabilities increase and you face situations outside of the distribution, and things start to spiral to places very far from anything you ever would have intended.
Imagine a world in which SBF’s motivations had even fewer anchors to human intuition, in which he had a much larger capabilities advantage over others (say he was orders of magnitude faster, and could make instantiations of himself?), and in which he had acted such that the house of cards had not come crashing down. Instead of taking the risks and trying to score object-level wins prematurely, he mostly steadily accumulates more money and power, until no one can stop him and his inclination to risk all of humanity every time some math calculation said he had a tiny edge gets to play out.
Which for a while was kind of fine, because he’d landed at Jane Street (which we’ll cover soon), where they had strong alignment and good supervision of their agents, and where the way to succeed and make money and climb the incentive gradient was to be socially responsible and honest and manage risk and make money straight up and pretend to be a normal person with normal tones of voice and facial expressions. So he did his best on all such fronts, and for a while it was fine, or fine-ish.
Then he left Jane Street for crypto, where fraud was par for the course, because that was also where he could make the most money, which was what MacAskill told him to do. A world full of fraud, and a world vulnerable to fraud, where everyone was constantly breaking the rules and laws.
Then, as they always do, cheating, lying and fraud fed upon themselves. Get away with a little, feel the rush, get reinforced, update that you can get away with a little more. Grow contempt for the rules. Rinse. Repeat. Once the doom loop starts, it rarely stops until the inevitable blow-up.
The rest is the last few chapters of the book.
Also notice how little anyone did to try and stop him, despite all the giant fire alarms, other than those he directly attacked before he was ready to do so.
Will All of This Happen Again?
We are still doing this.
We are taking many of the brightest young people. We are telling them to orient themselves as utility maximizers with scope sensitivity, willing to deploy instrumental convergence. Taught by modern overprotective society to look for rules they can follow so that they can be blameless good people, they are offered a set of rules that tells them to plan their whole lives around sacrifices on an altar, with no limit to the demand for such sacrifices. And then, in addition to telling them to in turn recruit more people to and raise more money for the cause, we point them into the places they can earn the best ‘career capital’ or money or ‘do the most good,’ which more often than not have structures that systematically destroy these people’s souls.
SBF was a special case. He among other things, and in his own words, did not have a soul to begin with. But various versions of this sort of thing are going to keep happening, if we do not learn to ground ourselves in real (virtue?!) ethics, in love of the world and its people.
All of this has happened before. If we are not careful, all of this will happen again.
Was there a reckoning, a post-mortem, an update, for those who need one? Somewhat. Not anything like enough. There was a rush to deontology that died away quickly, mostly retreating back into its special enclave of veganism. There were general recriminations. There were lots of explicit statements that no, of course we did not mean that and of course we do not endorse any of that, no one should be doing any of that. And yes, I think everyone means it. But it’s based on, essentially, unprincipled hacks on top of the system, rather than fixing the root problem, and the smartest kids in the world are going to keep noticing this. We need to instead dig into the root causes, to design systems and find ways of being that do not need such hacks, while still preserving what makes such real efforts to seek truth and change the world for the better special in the first place.
Then we are going to do the same thing with Artificial General Intelligence. Make it an agent, give it a maximalist goal that ignores key second-order and ethical considerations, becomes misaligned outside the intended distribution, and is inherently incompatible with the necessary safeguards, and unleash it upon the world. It will not end well for us. And that could even be thought of as the good scenario, where we are able to point the thing towards anything at all.
So yes. Remember and beware the tale of Sam Bankman-Fried. Not to blame or to label, but to learn from it. Do not let history repeat itself.
Behold the Power of Yup
Sam’s great persona transformation, the book says, was when Sam realized that he should stop trying to make his words have meaning or map in any way to reality, and instead focus purely on agreeing with everything anyone said and telling people what they want to hear.
The claim is that this completely brazen strategy flat out works, including when dealing with the rich, famous and powerful. Could it be this easy?
This is a great reverse tell, because normally extending your ‘yup’ means that you are aware of the gravity of the situation.
The best part of a strategy where your entire plan is to agree with everything is you do not need to listen to what people say or have the slightest interest in it.
I’ve come around to the video game playing being genius. By trying to also play games that will not wait for you, like Storybook Brawl and League of Legends, SBF had a constant look that he was engaged and paying close attention. That is a hard thing to fake. Better to make it real. Also you get to play the video games.
What Sam also did frequently was to talk completely unguarded. Most famously on Odd Lots there was The Box (link goes to episode, if you don’t know I won’t ruin it for you) but such statements were common.
This also went for philosophy, as when he told Tyler Cowen he would take a 51% coinflip to double or destroy the Earth, and then keep taking the flips until everyone was dead. Reminds me of Trump, who will lie right to your face but will sometimes be honest about lying right to your face, which some people find endearing.
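The arithmetic of why repeated flips are catastrophic is worth making concrete: each flip multiplies the survival probability by 0.51, so taking them repeatedly guarantees ruin almost immediately. A minimal illustration:

```python
# Probability Earth survives n successive 51% double-or-destroy coinflips.
# Each flip is survived independently with probability 0.51.
def survival_probability(n: int, p_win: float = 0.51) -> float:
    return p_win ** n

# Survival drops below 1% within seven flips and is effectively
# zero long before the doubling could ever be cashed in.
for n in [1, 7, 10, 50, 100]:
    print(n, survival_probability(n))
```

The ‘expected value’ of each flip is positive under linear utility, which is exactly why linear utility is the wrong lens here.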
Thus, Sam in some ways had a reputation for honesty.
Yes, Sam was often very (over)confident, and said so. He also constantly lied to everyone’s face and told them what they want to hear.
I wonder if this was the trick to getting an actual Sam opinion. If he does not give you a probability, look out, he wasn’t even listening to you, sorry man. If he does give you a probability, then sure you have to recalibrate it but probabilities might be a sort of sacred trust, and also it is much harder to know what you want to hear.
It also helps to be graded on the crypto curve. Do you have any idea how little you could trust the word of anyone in crypto in 2018? A simple ‘you follow professional norms and honor the word done in a trading chat’ backed with halfway decent execution went a long way.
The book claims that the PR campaign really was a pure ‘let Sam be Sam’ where being Sam meant this superficially agreeable persona who took meetings while playing video games, combined with a willingness to talk remarkably frankly about technical details. They say that Natalie, the woman charged with PR and Sam’s calendar despite having zero relevant experience, tried to call in professional help, but the professionals did nothing.
Patrick McKenzie is in awe of this paragraph.
I am going to go ahead and agree with Patrick McKenzie that we saw, especially in the aftermath of FTX’s collapse, what he described as ‘the dark matter of a PR firm.’ I do not buy the story that there were no public relations professionals involved, that Sam went out and said ‘yep’ a lot while playing video games, driving a Corolla and having a net worth of $20 billion, while one person with no experience did all the arrangements and scrambling, and everyone loved him and every press source treated him with kid gloves and all that.
The system is hackable. It is not that hackable. The parts of Sam’s public relations operations I did have interaction with were very much conscious of exactly how public relations works via their well-compensated expert consultants. It would have been completely insane to do things any other way. Even by SBF standards.
This is one of many places where I am confident the events the book describes happened, and I am also confident that there is quite a lot of ‘dark matter’ that is being left out, at least a lot of which Michael Lewis never found. Some of it I get a chance to mention here. Definitely not all, not even of the parts I know about.
Now for the chronological story of how this all played out.
First stop, Jane Street Capital.
Jane Street Capital
This is the part of the story I am best able to fact check. I too worked at Jane Street Capital, and directly witnessed a lot of this part of the story.
I also am deeply thankful to Jane Street Capital. It was not a fit for me in the end, but it was a pretty great place to work and they treated me right. I am not about to spill their secrets in ways they would not want such secrets spilled. I can safely say that this chapter is not entirely accurate, but the inaccuracies do not bear strongly on the SBF story, so I will decline to elaborate further.
SBF does not have that kind of ethical code. He was happy, on top of all SBF’s actions at the time, to share a bunch of details with Michael Lewis.
I will say that the description of the interview process was spot on, I very much enjoyed my shot at it, and that I can totally believe SBF got the high score.
As I have said before, I recommend going through the Jane Street interview process, even if you do not think you have much chance of being hired. It is great.
Matt Levine has extensively discussed the Asher Incident.
What matters to the broader story is that this Asher guy was another intern who offered SBF a bet without thinking through the implications, giving SBF a chance both to make about $33 in expected value and to humiliate Asher. And oh boy did SBF take full advantage, continuing well past the point where he had secured his profit and was doing anything other than rubbing it in Asher’s face.
This is such a strange response. There was quickly nothing left to prove once Sam pointed out Asher’s mistake. Sam did not prioritize proving his point over having others feel better. He prioritized making Asher feel worse. That is not cute nerd indifference to social cues. Michael Lewis pretends not to notice the difference.
The actual bet was on the maximum loss by any Jane Street intern that day from gambling. Interns were encouraged to bet and make markets against each other, with a maximum loss of $100 per day so no one got seriously hurt. SBF bought a contract for $65 that paid out equal to the maximum loss, then (of course) paid $1 to get another intern to flip a coin for, oh, about $99. Then, for no good reason, he did it two more times.
Matt Levine and his readers point out that there was a better version of the trade, which was to pay $1 to get two interns to flip against each other. What he forgets is that SBF is not one to shy from variance. What was not pointed out was the fun problem that if SBF had lost the initial coin flip, then Asher could have claimed that since SBF won his bet with Asher, he hadn’t lost the full $100. The bet is now self-referential. And SBF couldn’t have clarified this before accepting the bet without tipping Asher off to why SBF would always win.
What is the result? Presumably you need the result where the contract resolves correctly: Sam has to lose what the contract pays out, so it is the halfway point, resolving at $82.50 and netting $17.50 over the $65 cost? And by flipping the coin himself, Sam gave up a quarter of his expected profit?
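If I spell out that (speculative) arithmetic, treating the contract itself as part of Sam’s gambling loss and ignoring the $1 fee for simplicity:

```python
# Speculative reconstruction of the self-referential resolution.
# Contract: costs $65, pays out the day's maximum intern gambling loss.
# If Sam loses the ~$100 flip, his own loss L includes the contract:
# L = 100 + 65 - P, and if Sam's loss IS the maximum, the payout P = L.
P = 165 / 2                       # fixed point: P = 165 - P  =>  82.50
profit_if_sam_loses = P - 65      # 17.50 net of the contract cost
profit_if_sam_wins = 100 - 65     # other intern's $100 loss pays out: 35
ev_self_flip = 0.5 * profit_if_sam_wins + 0.5 * profit_if_sam_loses  # 26.25
ev_two_other_interns = 100 - 65   # someone else always loses $100: flat 35
# 26.25 / 35 = 0.75, so flipping himself gave up a quarter of the EV.
```

The fixed point only exists because the contract payout feeds back into the very loss it is measuring; with two other interns flipping against each other, the maximum loss is simply $100 and the profit is a flat $35.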
The bosses were not happy, and thought Sam needed to learn to read the room.
It had not occurred to the bosses, presumably, that Sam could read others fine. The problem was that Sam did not care. Either way, Sam was right that his lack of ordinary facial expressions was a problem.
So Operation Ordinary Facial Expressions was born.
Here’s one anecdote I can confirm, and it was as crazy as it sounds:
The book then describes a refinement or extension of this trade, which I would not choose to confirm or deny regardless of whether it happened. What if SBF could simply predict the outcome of the 2016 election an hour ahead of everyone else?
There then follows a section on Sam’s version of what happened with the 2016 presidential election, where he and Lewis both draw all the wrong conclusions (even if you were to believe the exact story presented). Sam became convinced that he could be the best like no one ever was and make all the money, and that Jane Street was not maximizing enough and was thus holding him back.
Then a bit later, SBF decided to leave Jane Street, because he discovered the Japan and South Korea Bitcoin arbitrage trade, and he wanted the trade all to himself.
This is one place I will introduce myself into the story a tiny bit. When Sam decided to quit, the two of us went for a walk in the park. He said he was leaving to run or at least help run CEA, the Center for Effective Altruism.
Which was not a crazy fit. Sam was clearly deeply into EA and the thesis that he could be a major upgrade there seemed plausible, as did the possibility that from his perspective this could be high leverage. I was confused by his decision, Jane Street seemed like a better fit for him, but we strategized a bit about how good could be done and I wished him the best of luck.
As we all know now, he was, as with everyone else, lying right to my face.
He admitted it. He was not leaving to join CEA. He was leaving to pursue the Japan trade. And he had decided that I was not someone he wanted to bring in on that.
I’d also note Sam wrote this:
I am not saying I made the most robust effort to build a close friendship with Sam, but I was right there, happy to talk to him before he was him, culturally rather adjacent sharing many of his interests, and (I like to think) very clearly able to keep a secret. We had him over for dinner once, and by my wife’s recollection he didn’t say two words to her, nor did he eat any of the food (yes we’re not vegans but we do make efforts to accommodate), while trying to get me to disassociate with Ben Hoffman because he was making concrete criticisms of EA and such criticisms hurt the cause. So, yeah.
On reflection, it was vastly overdetermined that Sam had no reason to tell me what was going on. Why take that risk? I clearly was not about to work 18 hour days in Berkeley, California. It was unlikely I would have let him own 100% of the firm. I was too old and busted, a ‘grown-up’ he had no use for, and ultimately a rationalist is not an EA who will trust Sam completely, so in many ways I was the opposite of EA. Why take the risk that I might not keep his confidence?
Looking back now, of course, I am for my own sake deeply happy that he did not attempt to take me with him. How might things have been different if I had somehow ended up going with him? Would I have been able to steer things to turn out differently? Would I have been only another person who left with the management team, or another witness on the stand, or could I have helped steer the ship? Would I have perhaps managed to block the Anthropic investment? We will never know.
Without loss of generality, and without confirming any of the other bits (there are too many different things to get into it all), I’d also like to dispute this (very gentle?) slander:
That is quite a rich thing to say. The employees I knew in no way felt stuck trading in order to support their lavish lifestyles. They traded because they very much enjoyed it, were good at it, liked the team they were on and so on. Many indeed were and are largely altruists, and wanted to do good as effectively as possible. Not being ostentatious was not an act.
It was the flood of effective altruists out of the firm that was worrisome. It was the effective altruists who were the greedy ones, who were convinced they could make more money outside the firm, and that they had a moral obligation to do so. You know, for the common good. They proved themselves neither honest nor loyal. Neither was ‘part of their utility function.’
All right. On to Alameda.
Soiling the Good Name of Alameda County
All time great chapter opening. Again, the man can write.
Caroline had many very good instincts throughout. If only she had followed them.
Oh yes, Sam, famously worried about recruiting from Jane Street.
Why did he recruit EAs? Partly because he thought EAs would work infinite hours for almost no pay and still be worthy of and provide limitless trust. Exploit the recruits for cheap labor, without even the pretense of a non-profit.
That would explain how Alameda could lose money trading crypto in large parts of 2018, despite it being extremely difficult to lose money trading crypto in 2018 if you know how trading works. It could also explain why Sam wanted to rely on his bot program. No one knew how to trade!
Alameda started out with the arbitrage trade with South Korea and Japan. It is not clear the extent to which they managed to take advantage of it - the book describes them as getting only secondary, much less profitable versions of it, Sam debating various wild schemes to do better but not pulling the trigger on them because they were too absurd even for him, and ultimately the opportunity vanishing.
Which brings us to the bot.
The bot in question was called Modelbot. I would have simply called it Arbbot.
The idea was simple, and also very much something I would have tried. There were a lot of different exchanges trading a lot of cryptos at lots of different prices. Sometimes the prices were different. When that happened, you could do various forms of arbitrage (and statistical arbitrage). Sam, being a trader and also being Sam, was a big fan of taking the free money.
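The core idea is simple enough to state in a few lines. This is a toy sketch of cross-exchange arbitrage detection, with made-up numbers and venue names; all of the real difficulty lives in fees, transfer times, exchange risk, and inventory management:

```python
# Toy cross-exchange arbitrage check: if one venue's best bid exceeds
# another venue's best ask by more than round-trip fees, buy at the ask
# and sell at the bid. Venue names and prices are invented for illustration.
def find_arb(quotes, fee=0.002):
    # quotes: {venue: (best_bid, best_ask)}
    best_bid_venue = max(quotes, key=lambda v: quotes[v][0])
    best_ask_venue = min(quotes, key=lambda v: quotes[v][1])
    bid = quotes[best_bid_venue][0]
    ask = quotes[best_ask_venue][1]
    edge = bid * (1 - fee) - ask * (1 + fee)  # profit per unit after fees
    if edge > 0:
        return (best_ask_venue, best_bid_venue, edge)
    return None

quotes = {"exchange_a": (9980, 9985), "exchange_b": (10100, 10110)}
print(find_arb(quotes))  # buy on exchange_a at 9985, sell on exchange_b at 10100
```

When prices line up (as they eventually did once the easy gaps closed), the function returns nothing, which is roughly what happened to the business model.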
What made it extra attractive was that, while Sam had declined (or failed) to hire actual traders, he did manage to hire a world class programmer, and he did manage to raise capital while exploiting the country arbitrage trade.
So Sam built Modelbot to do exactly that, and he would have gotten away with it too except for those meddling (Effective Altruist) kids.
Not ‘170 million dollars of our investors money.’ Not ‘our only opportunity to trade, obviously we wouldn’t get another.’ Not even ‘and if we lost it like that I can’t help but wonder if we’d have some legal or other problems to deal with.’
No, this was 170 million dollars ‘that might otherwise go to effective altruism.’
Something is deeply, deeply wrong with that picture. Although not as wrong as Sam’s part of the picture.
I mean, good on the management team for fully updating on trusting Sam, although not fully updating on ‘this person needs to be removed immediately if not sooner,’ assuming Tara’s account is accurate, and the book does not say that Sam disputes it, nor does it seem remotely inconsistent with other Sam things. Turning on the bot after promising not to is bad enough, but turning on a new bot and then falling asleep with no one else watching it? Yeah, that is another planet of not okay.
That is not the weird part of the story. The weird part of the story is, why was it non-trivial to test whether or not the bot worked?
Any bot you would ever dare turn on has various risk limits. You can turn the bot on, with very low limits on how much it is allowed to trade. Do the trades small. See if you end up with more money than you started with, in the same places it started. If you can do that, you can start slowly ramping the numbers up. Standard procedure. If you can’t do that, you haven’t finished programming your bot, so get on that. Have multiple people watching at all times, analyzing the trades, seeing if things make sense, refining your algorithms as you go.
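That standard procedure can be sketched generically. Everything below is hypothetical, a stand-in for the pattern just described, not anything from Alameda’s or anyone else’s actual systems:

```python
# Hypothetical sketch of the standard ramp-up procedure for a new trading
# bot: trade tiny, check that the money and inventory reconcile, and only
# then scale the risk limits up. The Bot class simulates P&L for the demo.
class Bot:
    def __init__(self, edge_per_trade):
        self.edge = edge_per_trade
        self.limit = 0
        self.pnl = 0.0

    def set_max_notional(self, limit):
        self.limit = limit  # hard cap on how much it may trade

    def run_supervised_session(self):
        # In reality: run live with humans watching, then reconcile every
        # fill against exchange statements. Here: simulated proportional P&L.
        self.pnl += self.edge * self.limit

def ramp_up(bot, start_limit=100, max_limit=100_000, factor=10):
    """Raise risk limits only after the smaller version demonstrably works."""
    limit = start_limit
    while limit <= max_limit:
        bot.set_max_notional(limit)
        pnl_before = bot.pnl
        bot.run_supervised_session()
        if bot.pnl <= pnl_before:   # lost money even at small size: stop here
            return limit
        limit *= factor             # only scale up after the small version works
    return limit // factor          # largest limit that passed

bot = Bot(edge_per_trade=0.001)     # a bot with genuine positive edge
print(ramp_up(bot))                 # → 100000, ramps all the way up
```

The point of the sketch is the shape of the loop: there is never a moment where ‘turn it on big, unsupervised, and fall asleep’ is a step.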
Instead, the claim is that Sam turned the program on with no one watching, without any reason not to wait, then went indefinitely with no way to test whether the program would actually work if one turned it on. I notice I am confused.
Alameda, without a profitable bot and without the arbitrage trade, started bleeding money as per the book’s own report, and they were paying very high interest rates to borrow money. Things did not look so good and were escalating quickly.
Then comes the story of the missing Ripple. They were supposed to have $4 million worth of Ripple. Then they lost it. No one knew where it was. What to do?
Sam’s attitude was that the Ripple would probably turn up, so no duty to the investors to say anything, no need to worry, carry on your day. Others were, understandably, rather more concerned?
Remind you of anything that’s going to happen in the second half of the book? Yes, it turns out that if you tell people everything’s fine but you have reason to know it very well might not be fine, often that would constitute fraud. You cannot, in general, simply not mention or account for things that you’d rather not mention or account for.
So between Sam asking everyone to work 18 hour days all the time, and being a generally irritable and horrible person to work for, and being completely untrustworthy and risking all the money for no reason, and misplacing $4 million in Ripple and then proposing to act like that hadn’t happened, and also for the company bleeding money and a bunch of other stuff, for some strange reason, Sam’s entire management team decided they had enough and wanted Sam out.
The management team ran into a problem. Thanks to them taking ‘I promise I’ll get to that later, we need to move fast’ as an explanation, Sam owned the entire company. Somehow everyone had allowed this.
The book’s account also claims the offer had an absurd clause that was completely unsignable, aiming to bankrupt Sam outright. Not typically how one gets to yes.
What? Liable for all taxes on any future Alameda profits? As fine print they hoped Sam wouldn’t notice, perhaps? That is the most absurd ask I have ever seen. Sam obviously would rather light the entire enterprise on fire than agree to that. I have no knowledge here one way or the other, but I have to assume this is not a complete and accurate description?
The book confirms that the whole thing seemed pretty nuts the way it is described.
That is not typically how any of this works. Quitters do not typically get severance. Quitters definitely do not typically get more than 100% of the value of the entire company. If someone demands you buy them out for more than the company is worth, and they accurately describe how much the company is worth, presumably you say ‘wait, that is more than the company is worth, why would I ever pay that?’
To his credit, Nishad noticed that he was deeply confused.
Ah, yes, the funders demanding 50% interest. I can take this one. Loaning money to even a relatively responsible crypto firm is highly risky and, typically, deeply stupid. This is not 2022-hindsight, back in 2018 I was trading for a crypto firm and we had borrowed money and I remarked that I had no idea why anyone had voluntarily loaned us any.
Invest in a crypto trading firm? Sure, maybe. Could work. Big upside. But why would you instead loan money to a crypto firm, where if Number Go Up you get a modest interest payment and if Number Go Down your number goes down to zero?
The ideal answer is that you don’t. If you must, earn enough interest that it is worth it. A rate of 50% seems if anything a bit low.
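To see why 50% is ‘if anything a bit low,’ it helps to run the lender’s expected value. A minimal sketch, with a default probability and recovery rate that are purely illustrative assumptions (not figures from the book):

```python
# Hedged sketch: why 50% interest on a loan to a crypto trading firm
# can still be a bad deal for the lender. The default probability and
# recovery rate below are illustrative assumptions.

def loan_expected_value(principal, rate, p_default, recovery=0.0):
    """Expected value of a one-period loan.

    With probability (1 - p_default) you get principal plus interest;
    with probability p_default you recover only `recovery` of principal.
    """
    repaid = principal * (1 + rate)
    defaulted = principal * recovery
    return (1 - p_default) * repaid + p_default * defaulted

# Assume a 1-in-3 chance the firm blows up and you recover nothing
# (these loans were typically unsecured). At 50% interest you merely
# break even in expectation.
ev = loan_expected_value(1_000_000, rate=0.50, p_default=1 / 3)
print(ev)  # 1000000.0 - zero expected profit, all for capped upside
```

Equity in the same firm carries the same blowup risk but uncapped upside, which is the whole ‘invest, maybe; lend, why?’ point.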
The whole idea of the EAs who left ‘trashing Sam’s reputation’ is treated as a big deal here, and as the reason the funders cut back a lot in size. But I never heard the complaints until FTX was blowing up? Most people I know didn’t hear them? Given how big FTX was in EA spaces, does it seem a bit weird that this massive reputation-trashing operation went so unnoticed? They certainly had plenty of good material to work with. If they’d presented the facts as laid out in the book, that seems like enough?
Why do we get to flash forward to this, after it all fell apart:
Why was it that EA leadership didn’t get the message to Eliezer?
And as for the funders that cut back in size, maybe the reason for that was that they gave Sam big size so Sam could do the one big trade, and now that it was over it made sense to scale back down?
Instead, the book says that seven figures in severance was indeed paid, Sam went on with Alameda, and then everyone kind of forgets, and by the later parts of the book Sam is seen as having an unblemished reputation.
So, you’ve had your entire management team walk out. What will you do next?
I notice I am still confused. Modelbot instantly made a lot of money? Why didn’t they turn it on for small before? Was there no experiment to run? What happened the previous time that Sam turned it on and fell asleep? None of this makes sense.
(What happened to the Ripple was that it was improperly labeled when sent to an exchange, so it piled up there while the exchange had no idea whose it was until they finally traced the thing and figured it out, at which point the exchange yelled at them for being complete idiots but did hand over the Ripple. Very nice of them, and also pretty insane that they let things drag out that long before figuring it out.)
Those who stayed behind did not make the correct updates.
Sam proved he could design a profitable arbitrage bot. And that he got lucky with his carelessness. That this time the risks paid off. Also that he was a terrible manager and team builder whose chosen management team all hated him so much after a short period that they walked out on him while actively trying to take him down.
Those who remained concluded… other things.
Sam also concluded that since EAs would not play ball, he shouldn’t hire so many EAs.
As much as I criticize EAs here and elsewhere, they do tend to notice when you are completely untrustworthy and your statements are not at all truth tracking. And they tend to then care about it. They often don’t actually want to work constant 18 hour days. Also, newly hired EAs were not going to be personally loyal in the way that SBF wanted.
How fraudulent was the operation? Once again: This is their 2018 deck, which claims >100% consistent annualized returns (and ‘no risk’). There is no ambiguity here.
How Any of This Worked
The crypto world was a hive of scum and villainy long before SBF got involved. There were also plenty of idealists and well-meaning, honest people, as there usually are. Those were mostly not the people getting rich, or the ones running the exchanges.
The exchanges were licenses to print money proportional to their user bases, with users who were asking all the wrong questions and no regulators or consumer watchdogs keeping them in check, so it got ugly out there.
The main thing customers demanded of crypto exchanges was, and I can confirm this, the ability to take wildly irresponsible gambles with their crypto. As in customers highly valued 100:1 leverage, using $100 of Bitcoin to buy $10,000 of Bitcoin, willing to lose it all if the price dipped by 1% for a microsecond.
People think using this leverage is a good idea. They are always wrong. Michael Lewis seems confused about this as well.
This is mostly a no good, very bad trade, because the exchange liquidates your account while it still has positive value, and by ‘liquidate’ the exchange meant ‘confiscate all of it and write you down to zero.’ Maybe they would then actually liquidate what was there. Maybe they wouldn’t. That was up to them.
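The arithmetic of the 100:1 trade is worth making concrete. A minimal sketch, where the entry price and the exchange’s liquidation threshold are illustrative assumptions:

```python
# Hedged sketch of the 100:1 leverage trade described above. The entry
# price and liquidation threshold are illustrative assumptions.

def account_equity(margin, notional, entry_price, price):
    """Equity of a leveraged long: margin plus mark-to-market P&L."""
    btc = notional / entry_price
    return margin + btc * (price - entry_price)

margin = 100       # your $100 of Bitcoin
notional = 10_000  # buys $10,000 of Bitcoin at 100:1
entry = 10_000     # assumed entry price, $/BTC

# A 1% dip wipes out the margin entirely.
print(account_equity(margin, notional, entry, 9_900))  # 0.0

# The exchange liquidates earlier, here at a 0.8% adverse move, while
# the account is still worth $20 - and 'liquidate' means it keeps the $20.
print(account_equity(margin, notional, entry, 9_920))  # 20.0
```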
Are there versions of this trade that are good for you and bad for the exchange? Assuming, of course, that we completely ignore that it would be safe to presume all of this is market manipulation and multi-accounting and very much not legal, because lol legal, lol compliance department, this is crypto what are you even talking about.
Yes. If there were effectively no law but code, I can think of three.
You have reason to expect (or cause) Bitcoin to be even more volatile than usual, and for there to be a jump in price that gets your wrong-way trade liquidated at a negative account value. For example, if you are allowed to do this trade right before the decision on whether to allow a Bitcoin ETF, then the trade seems good.
You can size big enough to force the exchange to make big trades that impact the entire Bitcoin market. As in, Bitcoin goes up 75bps (0.75%) and they liquidate your short position (which was still worth $250k) to zero. But by doing that, they drive up the price of Bitcoin a lot more; after their buying has its impact, you sell your own Bitcoins, and you end up ahead. Not impossible back in the day if they let you scale up big enough.
You can provide liquidity directly into the liquidation, and the liquidation mechanism is dumb. So when your short account is liquidated, the exchange issues market buy orders bigger than the market can bear rather than doing something less stupid; your other account has various sell orders at higher prices, you fill a lot of the liquidation order at stupid prices, quickly sell off the remainder before prices restabilize, and you laugh.
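The second scenario can be sketched numerically using the figures from the text ($250k short, 75bps move); the entry price, the size of your side position in Bitcoin, and the liquidation’s price impact are all illustrative assumptions:

```python
# Hedged numeric sketch of the 'size big enough to move the market'
# trade. The $250k short equity and 75bps move come from the text;
# the entry price, hedge size, and impact are assumptions.

entry = 10_000                 # assumed BTC price when you short
short_equity_at_liq = 250_000  # short account value when liquidated
hedge_btc = 500                # assumed BTC held on the side

# Price ticks up 75bps; the exchange confiscates the short to zero.
liq_price = entry * 1.0075
loss = short_equity_at_liq

# Assumption: the exchange's forced buying pushes the price up
# another 5% before you can react.
post_liq_price = liq_price * 1.05

# You sell the BTC you were holding into the spike.
gain = hedge_btc * (post_liq_price - entry)
print(gain - loss)  # positive if the impact exceeds the confiscation
```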
When I was trading crypto, I insisted on caring about things like not doing market manipulation, not spoofing orders and not trading when I had material non-public information (aka insider trading). This mostly made everyone else rather annoyed at me for being such a stickler for the laws of some alien world. They put up with it because, as the smoking man put it, they needed my expertise.
Wash trading was common.
I discovered this because I was trading on Binance, attempting to purchase Stellar for someone who wanted to purchase a bunch of Stellar when it was the #8 coin in the world (it is #23 now), and continuously failing to purchase any Stellar. There would be trading, I would issue a buy order where it was trading, and the entire market would mysteriously shift up. I would withdraw the order, things came back down. The market moved if you breathed on it, there was no way to get any size. It didn’t make sense until I realized that most of the trading was not real. The whole thing was a house of cards. I reported this back and the person said, yes, everyone knows there’s a lot of wash trading, I still want to buy Stellar. There’s only so much you can do.
Lewis offers a strange claim here:
I mean, no, they didn’t? I was at least kind of there, unrelatedly trading crypto. Throughout 2018 there were plenty of ways to trade rather large amounts for far less than a percent. Yes, spreads did tighten, and I am confident Alameda contributed to that tightening, but this was not an order of magnitude change.
Nor was it the final change. As time went on, spreads would tighten further. Alameda would be in a more and more competitive business. This was likely a prime motivation behind creating a crypto exchange, FTX, before Alameda lost its edge.
During this period, SBF relocated to Hong Kong, because he found that being in the room with other crypto people was very good for business. For example:
It also gave Sam access to a new labor pool, one eager to get into the game and do whatever it took without asking questions, and got everyone out of the United States.
Sam’s approach to hiring was to ensure no one ever knew what they were doing, so this new pool of talent worked out great.
Spoken like an employer who does not know how to attract the best talent, and also someone who even Michael Lewis knows is bullshitting this time.
Another potential factor in this story that is not mentioned by Lewis, and also that did not come up in my previous post, is Tether. Patrick McKenzie has the theory that Alameda’s true main business was knowing what to say to American banks to allow Tether to move capital. That they were centrally engaged in fraud on this entire additional level.
This related thread of Patrick McKenzie’s is also fun. As is this one. Or at least, you can learn more about what I (and I assume Patrick) find fun.
The next step was building a crypto exchange.
Building FTX
Sam had a secret weapon in building FTX, which is that he had a programmer that could single-handedly (so the book says) program the whole thing better than most programming teams. Knowing what I know about exchanges and engineers, this claim is a lot less wild than it sounds. At core an exchange is about getting a small number of important things right, which FTX mostly did get right except where Sam chose to intentionally get them wrong. I totally believe that a two person team, one to know what to do and the other to do it, could have pulled that off.
Before going down other funding routes, Sam tried to get CZ of Binance to pay.
I do not really know what Sam was expecting. CZ presumably took a few weeks to think it over in order to keep optionality and get a head start, if he wasn’t already building such an exchange anyway as seems rather plausible. Luckily for Sam, he still managed to out-execute CZ.
The book describes CZ as a strangely conventional and unimaginative person, who created Binance and made it the dominant exchange on the planet, becoming one of the richest people on Earth, without any exceptional qualities or skills of note. Lewis makes it sound like CZ was one of many who started exchanges, and he was at the right place at the right time and things broke his way. I don’t know anything about CZ that isn’t common knowledge, but I do not buy this at all. Random people do not luck into that kind of situation. But that would be the story of a different book.
So now Sam needed money to build FTX. He had a killer programmer, but there is a lot more to an exchange than that. So it was time to fundraise.
The book talks about two ways they raised money: Selling FTT tokens, which are a cryptocurrency Sam created representing claims on a portion of FTX’s future revenue and thus effectively a form of preferred stock in FTX, and traditional VC fundraising.
The FTT story is told as a story of quick success. He starts out charging early people $0.10, then quickly that goes up quite a lot, some people get rich out of the gate, Sam is sad at what he gave away. VCs in this spot, and crypto people too, tell you not to be upset about that. You need big gains and a story to drive excitement, you still have most of the company and a ton of the tokens. You have what you need. Why fret it? Instead, Sam says later in the book he regretted creating the tokens and selling them so cheap, rather than regretting the tokens because he used them later in such crazy fashion that he blew up his whole empire.
The stock story is where SBF learns the basics of how VC works. In traditional Sam fashion, he noticed things were kind of arbitrary and dumb, then did not stop to think that they might not be as arbitrary and dumb as all that and there might be method to the madness even if it wasn’t fully optimal.
No, this does not mean the valuation is arbitrary. That is especially true when, as was the case in FTX and most crypto companies, you politely decline to let anyone do proper due diligence, and you’re not even a traditional VC. What is going on is that Jump is quite reasonably deciding that at a fair rate they would be willing to invest, but that they are not in position to evaluate what is fair. So they outsource that to others, including to the lead whoever that might be. If VCs are willing to costly signal, via their own investment, that a $20 billion valuation is reasonable, then Jump can be in as well.
Well, yeah, that one is largely right. They care a ton about a good story.
Then we have Sam being peak Sam.
Sam is very much the one who gets fired in the first week here. No, you are not obligated to flip coins every time you think you have a tiny edge, especially billion dollar ones with uncapped potential losses subject to potential rampant manipulation and huge adverse selection. Nor has Sam paused to consider the cost of capital. VCs demand edges well in excess of 33% before they are willing to invest.
It is crazy, completely insane, to think that a VC willing to invest in a start-up at $15 billion would want to be short for size at $20 billion, with no market or way to cover.
Another part of the puzzle is that Sam used Alameda’s resources to create FTX, and the first VC that Sam talked to figured this and a number of other things out.
Yet presumably Sam said this because he not only thought he was right, he thought he was so obviously right it made sense to say so over the phone. That tells you a lot about Sam’s attitude towards capital, sizing, risk and other related matters, and also in believing that he knows all and everyone else is an idiot, which is more, more, I’m still not satisfied.
The Sam Show
What about physically building FTX, as in their new headquarters in the Bahamas that was never finished?
There’s a bunch of great stories in the book about the architects who got brought in to make this very expensive building, who were given no guidance and desperately tried to figure out what their client wanted. All they got were three quick notes - the shape of an F, a side that looked like Sam’s hair, and a display area for a Tungsten Cube - notes that Sam hadn’t even bothered writing himself; someone else tried to imagine what Sam might ask for. It was all the Sam show, all about Sam all the time, very cult of personality or at least hair.
Another way he kept it all about him was not to give anyone else a title that represented what they were actually doing.
If while reading this book you are not playing the game of noticing world-class levels of lacking self-awareness, you were missing out.
Nishad Singh failed to imagine the way things actually went south, but he did imagine a different highly plausible one that likely happened in a bunch of other Everett branches.
People don’t do things. None of the people in the world thought to kidnap Sam, despite zero attempts to prevent this, so despite being perhaps the most juicy kidnap target the world has ever known, he remained un-kidnapped. The man had actual zero security, posed zero physical threat, had billions in crypto that was accounted for by literal no one including himself, and was a pure act utilitarian and effectively a causal decision theorist. That person pays all the ransom, and then shrugs it off and gets back to work.
Which is good, given they had no other decision making process whatsoever.
There was no CFO. Why have a CFO? What would they do, keep track of how much money we have?
You know what? I do indeed think you did not know how much money you had.
A Few Good Trades
Sam did a lot of trades. Some of them were good trades. Some of them were not.
That means sometimes you look dumb, and sometimes you look like a genius.
When the good ones can pay off by orders of magnitude, every VC and everyone in crypto knows that is a nice place to be.
For example, that Solana trade? Sweet.
Makes up for a lot of other trades gone bad, provided you then sell some rather than double down. Yeah, I know. This is Sam we are talking about.
With a lot of effective control over Solana, Sam then was properly motivated to drive more hype and adoption. He even got to create a spin off, a ‘Sam Coin’ called Serum, which was meant to be a claim on a portion of the fees for financial transactions on the Solana blockchain.
This was, presumably, a way to expropriate other holders of Solana. Instead of returning the fees to Solana holders, they would go to Serum holders, so suddenly there was another coin to distribute and manipulate and hype. Fun.
The only problem was that it worked too well.
Lewis is so close to getting it. He understands that Sam will betray everyone around him whenever he can. He is altering the deal, pray that he does not alter it any further. Only from Sam’s perspective, there is no deal, there is only reality, which is what you can get away with.
Sam also had the advantage of being Sam and controlling Alameda and FTX.
He also had the bonus of not being so inspired to turn his paper gains into actual dollars (or stable coins, or liquid cryptos like BTC or ETH). Why liquidate what you can borrow against? That way Number Go Up.
Another trade he did was to take advantage of all the wash trading. The wash trading was so ingrained into how business was done, and done so poorly, that when SBF intercepted some of it, Binance’s employees failed to explain to their boss CZ what was even happening. Or CZ credibly pretended not to understand.
What happened was that Binance was doing its market manipulation via predictable market orders, so SBF would step in front of those orders, which made a bunch of money that came out of Binance’s pocket. Which Binance did not like.
Sam would occasionally consult others on what to do? I guess? Even Lewis realizes Sam does not actually care what anyone else thinks.
Putting the $500 million into Anthropic was arguably the most important decision Sam ever made. I do not know if investing in Anthropic was a good or bad move for the chances of everyone not dying, but chances are this was either a massively good or massively bad investment. It dwarfs in impact the rest of his EA activities combined.
Another good trade Sam noticed was that rich people dramatically underinvest in politics, whatever you think of Sam’s execution during what one might generously label his learning phase.
We should not forget the original arbitrage trade with South Korea and Japan.
The good arbitrage trade that still doesn’t fully make sense was ModelBot. I see no reason for it not to have worked, but I also see no reason Sam could not have safely proved that it worked by starting small and then scaling up. Why all the drama? Then it stopped working as competition improved.
Even excluding the arbitrage trades, that track record is really good. Sam took a lot of shots, but I think not thousands of such shots. If you can make trades like Solana at $0.25 and early Anthropic, the rest of your trades can lose and you could still have very good alpha - provided you are responsible with your sizing and other risk management, and cut your losses when trades fail and properly consider liquidity issues. There would be no need to lie, or to do all the fraud and crime.
The problem was that Sam was the opposite of responsible with the sizing and risk management. He did not cut his losses when trades failed. He did not consider liquidity issues.
There is also the highly related issue of all the lying and fraud and crime.
The Plan
Behind every great fortune, they say, is a great crime. Certainly that was true for this one. Then, as The Godfather tells us, one needs to appear to go legit.
Sam’s plan was to present FTX as the responsible adults in the room.
It did help that the room was crypto, and filled with crypto exchanges. Many of which were indeed doing all the crimes. From the perspective of the United States, even the ones not doing all the crimes were still doing crimes anyway, the SEC has yet to explain to anyone what it would take to do crypto without doing crimes.
The biggest fish in the pond was CZ and Binance. Oh boy were they doing crimes. Their headquarters is intentionally nowhere. Their internal messages explicitly affirm that they are running an unlicensed security exchange in America. And so on.
Step one was to get CZ and Binance off the cap table, so no one evaluating FTX for its legitimacy would see them on the cap table doing all the crime. So Sam bought him out.
If SBF was going to pretend FTX was worth that much, why shouldn’t CZ get paid accordingly? However, SBF made a big mistake, and left CZ with $500 million in FTT tokens rather than fully paying out in cash. It really should not have been that hard to not let that happen, given all the money available for spewing elsewhere. Ideally you sell a little of the equity you bought back, and use the proceeds from that.
The next step in reputation washing was a bunch of advertising.
It really is that easy. Once the Miami Heat opened the door, no one else asked any questions. Everyone wanted the money, and that was that, FTX on the umpires. Sure, why not?
Given how restrictive FTX US was, this helps explain why SBF was so eager to sponsor all the things. He was after a different goal.
A common theme of FTX’s sponsorships, like much of what FTX did, is that SBF would spew money in spectacular fashion, most of which was wasted, but he’d also have big wins. In this case, the win was Tom Brady.
No one has ever Not Done the Research more than Sam, who is confused why Tom Brady impacted people more than Brett Favre. I am not confused at all. Tom Brady is the quarterback everyone was already always talking about, the one everyone hated or perhaps loved, the cheater, the one with the rings, the GOAT, the one who got a girl in trouble and left her, and all that. I say quarterback, you say Brady.
Next up was getting into politics. As I noted in the good trades section, Sam noticed this was remarkably cheap. So since he had no time to waste but he definitely had money to waste, he got cracking. Gabe Bankman-Fried, Sam’s brother, got put in charge of the political operation.
Attention is not always people’s strong suit.
Sam’s most famous political bet was on Carrick Flynn. The decision to back Flynn comes off in the book, if anything, massively stupider than it looked in real time.
I don’t blame Flynn, who was trying to do what he thought was the right thing and wasn’t legally allowed to coordinate with SBF’s efforts at all. But it seems so utterly obvious every step of the section on him that this man was never going to be in Congress. Yet they threw tons of money at him anyway, even after that money became the central campaign issue, and all the other candidates ganged up on Flynn over being a crypto stooge and a carpetbagger, and everyone in the district was complaining how their mailboxes were overflowing from campaign ads and they couldn’t take one more of Flynn’s spots on the television.
To be fair, there was real uncertainty the night before, no one knew for sure that it hadn’t worked. And yes, a champion is super valuable?
What did Sam learn?
Well, yes. Also, you learned that when you stick your neck out like that you and those associated with you (read: EA) pay a lasting reputational cost. Sam did not seem to notice this.
No time to lose. Sam was off to meet Mitch McConnell, with everyone scrambling to get SBF into a presentable suit (he had been convinced to technically bring a suit, but had given no thought to its presentability, he let others handle such things) and he worked to ensure not to call Mitch, who insisted on being called Leader, ‘dear leader’ instead. Which I admit sounds hard.
Also, check out the claim at the end here.
A $10 million donation to a McConnell dark money group One Nation has been confirmed.
I once again remind everyone that, while the price has likely gone up, the offer is probably still on the table if someone is bold enough to take it. Sure, now that he’s got the nomination in his sights it probably costs you $10 billion, but given the way people talk about what a Trump victory would look like, surely that is a small price to pay?
Meanwhile, Sam claimed to have infiltrated Trump’s team, and I love what they did with the place.
Trump actually did it, and it is plausible this switched which Eric won. Good show.
The Tragedy of FTT
The whole FTT situation still blows my mind.
I mean, I know it happened, I accept it. Still. Blows my mind every time.
The tragedy is there was absolutely no need for any of it. There was no need to keep flipping coins double-or-nothing for all the money on the assumption that the odds were in Sam’s favor.
Which they weren’t. Yet he kept flipping.
So here’s the basics, for those who don’t know.
Alameda owned a lot of FTT, which is effectively stock in FTX.
This FTT was highly illiquid. Trying to sell even a fraction of it would have collapsed the price, as everyone involved knew. Collapsing the price of FTT would then, as again everyone involved knew, cause a collapse of confidence in FTX, causing a run on the bank that was FTX. Which those involved had information to know would be quite a serious problem, were it to happen.
Every trader knows that you do not borrow heavily against your own illiquid stock, with loans recallable at any time and likely to be recalled when times get tough for your industry, to buy other illiquid and highly speculative things highly correlated to your stock and your industry.
Especially if you know you could not survive the resulting bank run because you’ve appropriated billions in customer funds to cover your other losses or even to keep making more illiquid investments, or to spend on random stuff. All while your exchange was a highly valued money printing machine that could easily raise equity capital.
And if you were still for some reason going to do that, they would at least know not to also give your biggest rival a huge chunk of that same illiquid token sufficient to crash the market, then actively try to drive him out of his home and bring regulators down on his head, in ways he can see you doing right there.
I mean, come on, that’s completely insane.
Except that is, by the book’s admission, exactly what happened.
Did other crypto firms accept this collateral, knowing or even worse somehow not knowing exactly what this implied? Why yes. Yes they did.
This created a highly volatile situation. A downward spiral waiting to happen.
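The downward spiral dynamic can be sketched in a few lines. This is a toy model, not a reconstruction of Alameda’s actual book: the loan-to-value limit, price impact per forced sale, sale fraction, and starting loan size are all illustrative assumptions (the $22 FTT price appears later in the text):

```python
# Toy death-spiral model: borrowing against your own illiquid token.
# All parameters (LTV limit, impact per sale, sale fraction, loan
# size) are illustrative assumptions.

def spiral(price, tokens, loan, ltv=0.5, sell_frac=0.1, impact=0.3):
    """Force-sell collateral while the loan exceeds the allowed LTV.

    Because the token is illiquid, each forced sale crashes the price,
    shrinking the collateral faster than the proceeds shrink the loan.
    """
    rounds = 0
    while loan > ltv * price * tokens and rounds < 50:
        price *= (1 - impact)   # the sale itself moves the price down
        sold = tokens * sell_frac
        loan -= sold * price    # proceeds go to repaying the loan
        tokens -= sold
        rounds += 1
    return price, loan, rounds

# Start: token at $22, 100M tokens posted, $1.5B borrowed against them.
price, loan, rounds = spiral(22.0, 100_000_000, 1_500_000_000)
print(round(price, 6), round(loan / 1e9, 2), rounds)
# The token goes to ~zero while over $1B of debt remains outstanding.
```

The point of the toy model: once forced selling starts, collateral value falls geometrically while the debt barely moves, which is why ‘every trader knows’ not to set this up in the first place.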
Then crypto crashed, everyone including Alameda lost a lot of money, and it happened.
To try and prevent it from happening, Alameda had to actually repay its loans, or else the FTT it used as collateral was going to get liquidated. Then it had to bail out firms like Voyager. This was all on top of all the money Alameda and FTX had already spent and lost.
Sam still did not seem to notice that funding might be an urgent issue.
So Sam kept poking the bear. Hence, The Vanishing.
The Vanishing
That’s what Michael Lewis calls the collapse of FTX.
The proximate cause was that Sam pissed off CZ, while very much not being in a position to call BS on anyone. As in doing things like this:
CZ was understandably upset, leaked a supposed balance sheet from Alameda that looked bad but not as bad as the full reality, and announced his intention to dump his FTT.
Caroline Ellison decided to respond by offering to buy all the FTT at $22, thinking this was a show of strength, except for once crypto investors understood what that meant and acted accordingly.
Then the run on FTX began in earnest. Which would not have been a problem…
…except that FTX did not have the money to pay their customers, because Alameda had taken it and did not have the ability to give it all back.
Or have much idea how much they even had.
Didn’t see it coming, I suppose. Did decide to use $8 billion in customer funds as if it was Alameda operating capital. Did not anticipate that the customers might ask for that money back all at once when they found out what was going on. Whoops.
Lewis rightly points out, near the end, that while many people did realize FTX was obviously up to no good, no one actually managed to figure out the exact no good they were up to until rather late in the game.
They also couldn’t imagine that things could have been as chaotic and unaccounted for, or as blatant, as they were. It wasn’t necessary for the no good to be that no good. The borrowing against FTT tokens was bad enough on its own.
A lot of people, as FTX started to collapse, did the same calculation I did. It was quickly clear, as Sam went on Twitter to put on his best dog-drinking-coffee face and say ‘assets are fine,’ that there were only two possible worlds.
Either things really were fine, because FTX was obviously a money machine, things not being fine would have meant a completely crazy level of recklessness and incompetence, and SBF had gone way over the ‘this is fraud if things are not fine’ line and was very all-in. No one would be so stupid as to.
Or this was pure fraud, through and through, and FTX and SBF did all the crime.
That’s why me and so many others turned around on a dime - once we could rule out scenario #1, we knew we were in scenario #2.
One by one, people who wanted it to be one way got the piece of evidence that convinced them it was the other way.
Here it happens to Constance, on seeing the ‘balance sheet.’
They may have made five hundred million, but even if they hadn’t stolen everyone’s money, that was not about to pay the expenses.
This is the kind of thing that still blows my mind. You have stock in FTX, you have $25 million in liquid assets, the world is in front of you. And you chase FTX’s interest payments, and trust FTX so much, that you keep all your money on the exchange. What? That is completely crazy behavior. And yet, most employees tell exactly that story. It seems likely SBF/FTX insisted upon it, and Lewis either missed this or declined to mention it.
Because at $25 million while working at a crypto company, I’d hope I’d be doing things like millions in gold in a secret vault. At minimum I’d have $5 million in an offshore bank account.
But even after that, Constance didn’t turn on Sam yet. She only turned on Sam when she realized that, compared to those around her, she’d been given an order of magnitude or two less stock than she should have gotten.
Only then did she decide to spend the last chapter of the book helping Sam with logistics so she could try to get Sam to confess.
The Reckoning
As future prisoners, having been caught doing all the crime, the principals of FTX faced the prisoner’s dilemma.
The game theory of SBF: You have to commit to the bit.
The game theory of everyone else? Not so much.
Caroline held this meeting on November 9 to explain the situation to her employees. As Patrick McKenzie says, ‘what a document.’
Being Causal Decision Theory agents, and being somewhat more grounded in reality, the rest of the EAs all turned state’s witness.
Sam also had many other bizarre ideas about how any of this worked.
Sam was convinced to declare bankruptcy in America lest he instead have it declared for him by less friendly other parties, then tried to undo it which you cannot do, then went around insisting that if he hadn’t declared bankruptcy it would all have worked out.
Sam kept trying to explain how the money was all there, really, or close to it, and how all of this was merely a series of unfortunate ‘fuck ups’ and misunderstandings. Sam thought he had plenty of money, didn’t keep track of things properly, everything seemed safe at the time, he was as surprised as anyone.
So far, so public domain. The weird thing is that Michael Lewis seems to buy it.
This story does not actually make any sense, and of course is directly and blatantly contradicted by testimony at the trial. And yet Michael Lewis is intrigued:
The case has been made to me that this accounting is not as naive and stupid as it looks. I continue to mostly disagree with that. Lewis continues to double down.
This is perhaps the most ‘naive guy’ thing in the entire book. Smart people can’t think they are making good trades when they are making bad ones and losing tons of money, right? And they wouldn’t lie to Michael Lewis about profitability, right?
At their peak, Alameda was on (some form of electronic) paper worth $100 billion or so. We know that Alameda’s edge in algorithmic trades had likely been going away as they faced stiffer competition, that source of profit was likely gone, yet they continued to borrow. What was the profitable trading Alameda was doing with all that capital?
They were getting long. Alameda was borrowing a bunch of capital from various lenders, and using it to get long and then get longer. That is where the money was going.
Then Number Went Down. Money gone.
Does Michael Lewis think the people at Three Arrows Capital or Voyager were stupid? The people who created and ran Luna? Enron? Lehman Brothers? Does this man not remember his own books?
He said it himself. Sam’s entire empire was a leveraged - Lewis’s word - bet on the success of crypto and the empire itself more generally. When you have no ethics, only a quest for Number Go Up (you know, for the common good), and therefore don’t care that, sure, technically that was customer deposits right there, that leverages your bet all the more. As Number Went Down, rather than hedge, they doubled down, including via providing bailouts.
Leverage plus Number Go Down equals Broke Fi Broke, overwhelming other sources of profits.
Yes, a lot of their hoard of stuff they bought for pennies. But they then used that as collateral to borrow money and put more things into the hoard. All of which was correlated, and all of which was down. A lot.
Also, SBF was shoving money out the door in any number of other ways that the above numbers are missing, money was constantly being misplaced or stolen, a fire sale is not a cheap thing to partake in, and so on. So I do not think there is any mystery here.
We then get a fascinating story. Sam says combined losses from things like this were only $1 billion, but honestly, how would he even know, given everything.
That is not a hack. He did not steal the money. You gave it to him.
I know that people call such things hacks, like the ‘hack’ about going both ways using leverage earlier. Instead, Sam is right here. This is people playing the game. If your risk engine is stupid enough to let me use my MOBL at $54 to borrow and withdraw a bunch of actual BTC, treating the value of MOBL as real, then that is on the risk engine, whether or not there was also market manipulation involved. I felt the same way about Avi and the Mango trade - yes sure it is illegal and no one is crying for him when he gets arrested nor should they, but also suck it up and write better code, everyone, as is the crypto way.
FTX’s risk engine was by all accounts excellent, when dealing with coins that were liquid relative to the position sizes involved, and when the risk engine was set to on. FTX’s risk engine was sometimes turned off, and the risk engine clearly did not make reasonable adjustments for illiquid or obviously bubble-shaped coins.
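To make the failure mode concrete, here is a toy sketch (entirely hypothetical names and numbers, in no way FTX’s actual code) of why a margin check that counts an illiquid token’s full spot price as collateral lets a borrower walk off with real assets, and how a liquidity haircut changes that:

```python
# Toy illustration (hypothetical): a risk engine that values illiquid
# collateral at full spot price versus one that applies a haircut.

def borrowing_power(position_size: float, spot_price: float,
                    haircut: float = 0.0) -> float:
    """Collateral value the engine will lend against.

    haircut=0.0 reproduces the naive behavior: full spot value counted,
    even for a token whose price is a product of manipulation.
    A prudent engine discounts illiquid coins heavily (haircut near 1.0).
    """
    return position_size * spot_price * (1.0 - haircut)

# A hypothetical illiquid token (MOBL-style) pumped to $54.
naive = borrowing_power(1_000_000, 54.0)                  # full $54M counted
prudent = borrowing_power(1_000_000, 54.0, haircut=0.9)   # only $5.4M counted

print(naive, prudent)
```

With the naive valuation, the manipulator can withdraw tens of millions in actual BTC against paper value; with a 90% haircut the trade stops being worth doing.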
This also was rather a big deal - FTX lost, by Sam’s own account, a full year’s profits. And that’s the official Sam story. The real story is inevitably much worse. I do not for a second buy that they only lost $1 billion total in hacks.
John Ray, the Hero We Need
John Ray is pretty great. He’s the guy who cleaned up the Enron mess, the guy you call when you have a world-class mess, and he’s the one they called in for FTX.
Suddenly there is a no-nonsense adult in the room who is having none of it, even when there is some of it worth having.
Michael Lewis tries his best to throw shade at him, but Lewis is too honest - too much a naive guy - for any of it to stick even a little.
That’s a great skill if you are consistently correct. Based on the evidence presented, John Ray is almost never wrong about what type of guy he is dealing with.
Nishad being a naive guy seems right to me, based on the rest of the book. He had more than enough information to know what was happening, but the Arc Words of the whole book are that people don’t see what they don’t look for, so there you go.
John Ray is here to let you know that you are suffering, and pitying, too many fools.
First he got things under some semblance of order. He then moved on to looking for all the money.
Lewis portrays Ray in all this as an archeologist, sifting through the ruins for cash and clues. Michael Lewis makes a point of all the money Ray and his team were going to bill FTX for the work they did. I look at what they had to deal with and how much money they ultimately rounded up, and I say they earned every penny. Part of earning that is that when you are Ray, you cannot rely upon or trust anyone who made the mess in the first place. Those are hostile information sources. If you want it done right, and you do, you have to figure it all out for yourself.
The best thing about Ray is his reaction to Lewis, as Lewis keeps trying to explain all the things he thinks he knows, and Ray keeps ignoring him, and it’s going to make for some of the straight-up funniest scenes in the movie.
I demand that Ray be played by John Goodman, it would be so perfect.
Yes. Very clearly.
We have a fun clip of Ray spelling out for us, to Lewis’s face, the uselessness of Lewis.
The thing about Ray is that, in order to be so good at his job, he needs to have zero tolerance for pretty much anything. So when something actually is real, he can miss it.
Lewis would say this theory is ridiculous, and on its face it definitely is, everyone wanted to be Sam’s friend, but also how much got invested into OpenAI and Anthropic in the name of access? As in, friendship?
What Ray cannot see is that Anthropic was obviously a very good financial investment, because he does not know anything about AI. He certainly does not want to hear anything about existential risk, or whether Anthropic is helping or not helping with that concern.
A key question was, what crypto was worth anything, and what wasn’t? For some reason Ray locked onto Serum, the offshoot of Solana.
In case you didn’t know, well, not so much, here’s Serum.
Ray kept searching, Ray kept finding.
Caroline Ellison
Sam’s on-again, off-again, very-bad-idea relationship with Caroline Ellison is a key part of the story, because Caroline ended up effectively in charge of Alameda when the worst of the fraud went down. It does not seem like a coincidence that Caroline ended up in charge of Alameda, despite her not seeming like someone who should be given that kind of responsibility, as per (among other signs) her repeated observations that she was not up to the job.
Also she did not have an ideal attitude with respect to willingness to do various crimes, where the whole thing made her deeply uncomfortable but she still did the crimes anyway - you want someone who does not do crimes, or contains their crimes to contextually ‘ordinary decent crimes’ rather than outright frauds like stealing customer funds. Or if you have decided that your plan is to do a lot of crimes, a plan I recommend strongly against, you want someone who is fine with doing lots of crime.
In any case, Caroline, it seems, exchanged long emails spelling out the arguments for exactly how obviously she and Sam should not have been dating, with Sam offering points like this:
While I admit those are actually pretty strong arguments in favor, and in other circumstances would be very good reasons to date someone, the arguments against seem rather conclusive.
This is exactly what one would expect from the rest of Sam’s behavior in other contexts. Story checks out.
That I guess brings us to the psychiatrist? Who according to other reports had the entire firm including Sam hopped up on various pills in ways the book declines to mention?
I know of at least two psychiatrists who were and are better experts on this than George Lerner. For example, have you met… Scott Alexander? Anyway.
You can’t do that. I mean, obviously you literally can, but professionally no, you really, really can’t treat Gabe and his brother Sam and his girlfriend and employee Caroline and everyone else in their entire social network. This is Dr. Nick territory. Then again, given what everyone involved wanted, maybe you can? It’s not like they wanted anything from him except drugs and practical advice on understanding other people. Maybe there is no conflict of interest here after all.
Also, who cares, given George didn’t even have a license in the Bahamas in the first place?
Perhaps he was taking inspiration from the (nominal) psychiatrist in Billions?
New and Old EA Cause Areas
In addition to his EA causes, SBF did have his own cause area, which was physical beauty.
He was against it.
He also investigated but dismissed as a child the cause area of hell.
According to the company psychiatrist, the EAs really did only care about suffering.
This attitude drives me bonkers. Yes, suffering is bad. It is the way we indicate to ourselves that things are bad. It sucks. Preventing it is a good idea. But when you think that suffering is the thing that matters, you confuse the map for the territory, the measure for the man, the math with reality. Combine that with all the other EA beliefs, set this as a maximalist goal, and you get… well, among other things, you get FTX. Also you get people worried about wild animal or electron suffering and who need hacks put in to not actively want to wipe out humanity.
If you do not love life, and you do not love people, or anything or anyone within the world, and instead wholly rely on a proxy metric? If you do not have Something to Protect? Oh no.
I mean, listen to yourselves, as George is describing you:
That is saying, my own child’s only value would be if they too become an effective altruist, or if they increase my altruistic productivity. This is not an attitude compatible with life. If this is you, please halt, catch fire and seek help immediately.
This last point does not ring true, EAs totally complain about lack of dating opportunities, although I can totally buy that everyone else thought the EAs thought they were smarter than everyone else - and in context, that they were technically right.
As much as I criticize EAs, I do it because they are worthy of criticism. They aspire to do better. Otherwise I wouldn’t waste my time. And when Lewis goes too far and misses the mark, there’s big ‘no one picks on my brother but me’ energy.
We rationalists have long had a name for the ‘emotional charge’ that drives ordinary philanthropy. We call it ‘cute puppies with rare diseases.’ There is a reason most philanthropy accomplishes nothing except fueling that emotional charge, which is that most decisions in most philanthropy are driven by fueling that emotional charge. The entire point, the founding principle, of EA, the core of what is good about EA, is to care about actually accomplishing the mission and cutting the enemy.
Can this be taken too far in various ways to the point where it loses its connection to reality? Does relying too much on the math and not enough on common sense and error checks lead to not noticing wrong conclusions are wrong? Oh yes, this absolutely happens in practice, the SBF group was not so extreme an outlier here.
But at least the crazy kids are trying. At all. They get to be wrong, where most others are not even wrong.
Also, future children in another galaxy? Try our own children, here and now. People get fooled into thinking that ‘long term’ means some distant future. And yes, in some important senses, most of the potential value of humanity lies in its distant future.
But the dangers we aim to prevent, the benefits we hope to accrue? They are not some distant dream of a million years from now. They are for people alive today. You, yes you, and your loved ones and friends and if you have them children, are at risk of dying from AI or from a pandemic. Nor are these risks so improbable that one needs to cite future generations for them to be worthy causes.
I fight the possibility of AI killing everyone, not (only or even primarily) because of a long, long time from now in a galaxy far, far away. I fight so I and everyone else will have grandchildren, and so that those grandchildren will live. Here and now.
If some other EAs made this change because the numbers (overwhelmingly, and in this case I believe correctly) said so, and would have done so even if the case was less overwhelmingly correct? So be it. We need some people like that. Others need to help with global poverty, and so they do. And they make a lot of mistakes there too, they take the math too seriously, they don’t consider second and third order effects properly, and so on. I could go on rants. But you know what? They try, damn it.
As opposed to ordinary philanthropy, where the EAs are right: It’s mostly kinda dumb.
This was a really good idea, in the world in which FTX had properly secured the money in order to give it away, and in which they had the proper infrastructure to do this responsibly. Even without either of those things, it was still a reasonable idea.
There were problems. People were unprepared to hand out a million dollars. A lot of decisions involving a lot of money got made, if not Brewster’s Millions style, in ways that were quite warping on the places the money got spread around. From what I heard, essentially any 19-year-old could get a $50,000 grant to move to Berkeley and think about AI safety, and there was a general failure to differentiate good and real and worthwhile efforts from others. The dynamics this created were an invitation to fake work, to predators and entryism and sociopaths, to hype and networks and corruption. If things had continued, that effect could have gotten worse.
As always, Sam was not considering second-order effects, and also not considering that efforts might backfire rather than be wasted. Nor did he pay enough attention to one of the most important questions traders always must ask on every trade they do, which is: What is the correct sizing?
Doing this trade with only a select few would have been great. Doing it with everyone who had an EA identity and a pulse was plausibly net negative.
Won’t Get Fooled Again
In the wake of publication, many people pointed out that Michael Lewis had been fooled, including this book report from David Roth. Michael Lewis did not take kindly to this, while confirming he had been fooled.
He really thinks he included all the nasty details. The trial has made it clear this was not the case. Even my old post on FTX, among many other source options, also made it clear this was not the case.
As does the book. The book, despite conspicuously leaving out all the most blatant details, repeatedly shows SBF doing fraud.
Even more than that, the book describes a person who is so obviously doing all the fraud. It would not make any sense for the Sam portrayed here to not be doing all the fraud. That much is clear by the end of chapter one. Did Lewis read his own book?
The idea that the arrest was a ‘rush to judgment’ is laughable. Sure, they partly moved quickly because it was important to send a message - which it was - but also because no one else has ever more obviously been doing all the crime. There are so many distinct frauds right out in the open.
Also, seriously, what the hell, you want to poison the jury pool or even the actual jury? The interviewer points out how our system works, and Lewis says no, it shouldn’t work that way, people should read my book.
At one point, Lewis confirms that FTX violated the Foreign Corrupt Practices Act, as was revealed during the trial, and also perhaps as tellingly that it put a billion dollars into an account at an exchange whose accounts regularly get frozen by the local police for no reason.
And Lewis is wondering how the money could be missing.
Next he admits that SBF committed bank fraud.
He seems to not understand that this does not make it not a federal crime? That ‘we probably would not have otherwise gotten caught on this one’ is not a valid answer?
Similarly, Lewis clearly thinks ‘the money was still there and eventually people got paid back’ should be some sort of defense for fraud. It isn’t, and it shouldn’t be.
And then there’s this:
All right, that is a purer Naive Guy statement. They just weren’t lying, no sir.
Nor was Sam a liar, in Lewis’s eyes. Michael Lewis continued to claim, on the Judging Sam podcast, that he could trust Sam completely. That Sam would never lie to him. True, Lewis said, Sam would not volunteer information and he would use exact words. But Sam’s exact words to Lewis, unlike the words he saw Sam constantly spewing to everyone else, could be trusted.
It’s so weird. How can the same person write a book, and yet not have read it?
Even then, on October 1, Lewis was claiming he did not know if Sam was guilty. Not only that, he was claiming that many of the prosecutors did not know if Sam was guilty. And Lewis keeps saying that Sam himself really actually believes he is innocent, and for weeks after it was so over Sam really believed he’d be able to raise funds and turn it all around.
Lewis really did believe, or claimed to believe on his podcast, even in early October that, absent one little mistake where $8 billion ended up in the wrong place, the rest of what happened was fine. That the rest of the story was not filled to the brim with all the crime.
Yet I totally believe that Lewis believed all of it. The man seems so totally sincere.
Then on October 9 Lewis said nothing that came out so far at the trial surprised him, other than the claim by one of Sam’s oldest friends that Alameda’s special code not only let them steal all the money, it also let them trade faster than their competitors, implemented on Sam’s orders. Everyone was constantly asking point blank about that, and Sam constantly said that wasn’t true. Even so, Lewis still repeated that in his model Sam doesn’t outright lie, he simply doesn’t tell you the answer that you needed to hear. He was still holding onto that even then.
When it is revealed that the FTX insurance fund to cover trading losses, that Sam often talks about, was purely fake, literally the product of a random number generator written into the code to display to people to make them think there was an insurance fund? Because to Sam money is fungible, so why would there be an insurance fund? Still no change.
I still can’t process all that. Not really. Chewbacca is a Wookiee. It does not make sense.
He has Matt Levine on the podcast, and Matt Levine points out that the book made it that much clearer that Sam’s fraud unfolded exactly the way frauds always unfold, that there was nothing confusing here. Yeah, on some level Sam fooled himself that it would all work out (or, given it was Sam, that it had odds, and the words ‘safe’ and ‘risk’ were meaningless, so who cares?).
In later podcasts, Lewis did admit that a lot of the trial testimony was rather damning, and that he is confident that Sam will be convicted. But there is no sign he has figured out that Sam was doing all the lying and all the crime.
Conclusion
I mostly feel good closing the book on the events of SBF, Alameda and FTX. It all makes sense. We know what happened.
There are still a few mysteries, mostly centered on early Alameda. The story there, as outlined, continues not to make sense. Why was it so difficult to evaluate ModelBot? What was going on with the demands of those exiting? How did SBF get away with so little reputation damage? I still do want to know. Mostly, though, I am content.
My previous post on FTX holds up remarkably well, and could be used as a companion piece to this one. I was missing pieces of the puzzle, and definitely made mistakes, including the failure to buy up FTX debt for pennies on the dollar. But the rough outline there of what happened holds up, as does the discussion of implications for Effective Altruism.
I do not think that any of what happened was an accident. SBF was fortunate to get as far as he did before it all blew up. A blow up was the almost inevitable result. While SBF went off the rails, he went off the rails in ways that should have been largely predicted, and which make sense given who he was and then the forces and philosophical ideas that acted upon him.
This was not so unusual a case of fraud.
Nor was it an unusual case of what happens when a maximalist goal is given to a highly capable consequentialist system.
My expectation is that, in the unlikely scenario where this attempted takeoff had fully succeeded and SBF had gained sufficient affordances and capabilities thereby, the misalignment issues involved would almost certainly have destroyed us all, or all that we care about. Luckily, that did not come to pass.
Other attempts are coming.
All of this has happened before.
All of this will happen again.