“Empire of AI” by Karen Hao was a nice read that I would recommend. It’s half hit piece on how OpenAI’s corporate culture has evolved (with a focus on Sam Altman and his two-faced politicking), and half illustration of how frontier AI labs are “empires” that extract resources from the Global South (such as potable water for data center cooling and cheap labor for data labeling).
Below I collect some quotes from the book that illustrate how Sam Altman is manipulative and power-seeking, and accordingly why I find it frightening that he wields so much power over OpenAI.
There is some irony in the fact that I’ve put together a quote compilation focused on Sam Altman, when one of the main themes of the book is that the AI industry ignores the voices of powerless people, such as those in the Global South. Sorry about that.
Regarding Sam Altman’s early years running Loopt (early 2010s):
In [storytelling] Altman is a natural. Even knowing as you watch him that his company would ultimately fail, you can’t help but be compelled by what he’s saying. He speaks with a casual ease about the singular positioning of his company. His startup is part of the grand, unstoppable trajectory of technology. Consumers and advertisers are clamoring for the service. Don’t bet against him—his success is inevitable. (pg. 33)
“Sam remembers all these details about you. He’s so attentive. But then part of it is he uses that to figure out how to influence you in different ways,” says one person who worked several years with him. “He’s so good at adjusting to what you say, and you really feel like you’re making progress with him. And then you realize over time that you’re actually just running in place.” (pg. 34-35)
[Altman] sometimes lied about details so insignificant that it was hard to say why the dishonesty mattered at all. But over time, those tiny “paper cuts,” as one person called them, led to an atmosphere of pervasive distrust and chaos at the company. (pg. 35)
Regarding Sam Altman’s time running YC (mid 2010s):
A few years in [to running YC], he had refined his appearance and ironed out the edges. He’d traded in T-shirts and cargo shorts for fitted Henleys and jeans. He’d built eighteen pounds of muscle in a single year to flesh out his small frame. He learned to talk less, ask more questions, and project a thoughtful modesty with a furrowed brow. In private settings and with close friends, he still showed flashes of anger and frustration. In public ones and with acquaintances, he embodied the nice guy. [...] He avoided expressing negative emotions, avoided confrontation, avoided saying no to people. (pg. 42)
Ilya Sutskever to Sam Altman (2017):
“We don’t understand why the CEO title is so important to you [...] Your stated reasons have changed, and it’s hard to really understand what’s driving it. Is AGI *truly* your primary motivation? How does it connect to your political goals? How has your thought process changed over time?” (pg. 62)
Sam Altman’s shift away from YC to OpenAI in 2019:
The media widely reported Altman’s move as a well-choreographed step in his career and his new role as YC chairman. Except that he didn’t actually hold the title. He had proposed the idea to YC’s partnership but then publicized it as if it were a foregone conclusion, without their agreement [...] (pg. 69)
Sam Altman’s early dealings with Microsoft in 2019:
[AI safety researchers at OpenAI] were stunned to discover the extent of the promises that Altman had made to Microsoft for which technologies it would get access to in return for its investment. The terms of the deal didn’t align with what they had understood from Altman. (pg. 145)
Again in 2020:
Altman had made each of OpenAI’s decisions about the Microsoft deal and GPT-3’s deployment a foregone conclusion, but he had maneuvered and manipulated dissenters into believing they had a real say until it was too late to change course. (pg. 156)
Prior to the release of DALL-E 2 in 2022:
In private conversations with Safety, Altman expressed sympathy for their perspective, agreeing that the company was not on track with its AI safety research and needed to invest more. In private conversations with Applied, he pressed them to keep going. (pg. 240)
Sam Altman in 2019 on Conversations with Tyler:
“The way the world was introduced to nuclear power is an image that no one will ever forget, of a mushroom cloud over Japan [...] I’ve thought a lot about why the world turned against science, and one answer of many that I am willing to believe is that image, and that we learned that maybe some technology is too powerful for people to have. People are more convinced by imagery than facts.” (pg. 317)
Not consistently candid part 1 (in 2022):
Altman had highlighted the strong safety and testing protocols that OpenAI had put in place with the Deployment Safety Board to evaluate GPT-4’s deployment. After the meeting, one of the independent directors was catching up with an employee when the employee noted that a breach of the DSB protocols had already happened. Microsoft had done a limited rollout of GPT-4 to users in India, without the DSB’s approval. Despite spending a full day holed up in a room with the board for the on-site, Altman had not once notified them of the violation. (pg. 323-4)
Not consistently candid part 2 (in 2023):
Recently, [Altman] had told Murati he thought that OpenAI’s legal team had cleared GPT-4 Turbo for skipping DSB review. But when Murati checked in with Jason Kwon, who oversaw the legal team, Kwon had no idea how Altman had gotten that impression. (pg. 346)
In 2023, leading up to Altman being fired as CEO from OpenAI:
Murati had attempted to give Altman detailed feedback on the accelerating issues, hoping it would prompt self-reflection and change. Instead, he had iced her out [...] She had seen him do something similar with other executives: If they disagreed with or challenged him, he could quickly cut them out of key decision-making processes or begin to undermine their credibility. (pg. 347)
Murati on Musk vs. Altman:
Musk would make a decision and be able to articulate why he’d made it. With Altman, she was often left guessing whether he was truly being transparent with her and whether the whiplash he caused was based on sound reasoning or some hidden calculus. (pg. 362)
Not consistently candid part 3 (in 2023):
On the second day of the five-day board crisis, the directors confronted him during a mediated discussion about the many instances he had lied to them, which had led to their collapse of trust. Among the examples, they raised how he had lied to Sutskever about McCauley saying Toner should step off the board.
Altman momentarily lost his composure, clearly caught red-handed. “Well, I thought you could have said that. I don’t know,” he mumbled. (pg. 364)
In 2024:
In an office hours, [safety researchers] confronted Altman [regarding his plans to create an AI chip company]. Altman was uncharacteristically dismissive. “How much would you be willing to delay a cure for cancer to avoid risks?” he asked. He then quickly walked it back, as if he’d suddenly remembered his audience. “Maybe if it’s extinction risk, it should be infinitely long,” he said. (pg. 377-8)
In 2024, regarding Jan Leike’s departure:
“Of all the things Jan was worried about, Jan had no worries about the level of compute commit or the prioritization of Superalignment work, as I understand it,” Altman said. (pg. 387)
[Meanwhile Leike, two days later:] “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.” (pg. 388)
Altman in 2024 (this one seems worse than the goalpost shifting Anthropic has been doing with their RSP, yet I hear comparatively little discussion of it):
“When we originally set up the Microsoft deal, we came up with this thing called the sufficient AGI clause,” a clause that determined the moment when OpenAI would stop sharing its IP with Microsoft. “We all think differently now,” he added. There would no longer be a clean cutoff point for when OpenAI reached AGI. “We think it’s going to be a continual thing.” (pg. 402)