I guess that if one wishes to advocate for AI safety or an AI pause, and has no other political commitments, one can try to understand and engage with the hosts' economic and national security views. You can see the interplay of these factors in their thinking around 1:17:00, where they are discussing not AI but industrial policy in general, and whether it is ever appropriate for government to intervene in an industry.
I ask myself: if I were an American anti-AGI/ASI activist, what would I need to understand? I think I would need to understand the Republican position on things; the Democratic position on things (though the Democrats don't seem to have a coherent worldview at this point, and strike me as radically in need of assembling one; hence, I suppose, Ezra Klein's "abundance agenda"); the strategic thinking of the companies actually in contention to create superintelligence (e.g. the thinking of Altman, Musk, Amodei, Hassabis); and finally, what's going on in China and in any other nations that may become contenders in the race.
I will never get the Python love they also express here, or the hate for OOP. I really wish we weren't so foolish as to build the AI future on Python, but here we are.
I'm confused. How can they "hate OOP" (which I assume stands for "Object-Oriented Programming") and also love Python, which is the definitive everything-is-an-object, everything-happens-at-runtime language? If they were ML professionals I'd think it was a rant about JAX vs PyTorch, but they aren't, right?
EDIT: it was pointed out to me that e.g. Java is more OOP than Python, and objects do not count as Objects if you don't have to deal with classes & co yourself but just use them in a simple fashion. Still, love Python & hate OOP? In this context they are probably referring to PyTorch, which I would call OOP for sure; it does a lot of object-oriented shenanigans.
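For anyone puzzled by the "everything is an object" claim, here is a minimal sketch in plain Python (nothing below assumes anything beyond standard CPython semantics):

```python
# In Python, every value is an object: ints, functions, even classes.
x = 42
print(isinstance(x, object))    # True: ints are objects

def f():
    pass

print(isinstance(f, object))    # True: functions are objects

print(isinstance(int, object))  # True: classes are objects too
print(type(int))                # <class 'type'>: classes are instances of type

# And "everything happens at runtime": you can bolt methods onto a class
# after it is defined, which is the kind of thing PyTorch-style code leans on.
class Dog:
    pass

Dog.speak = lambda self: "woof"
print(Dog().speak())            # woof
```

Whether that counts as "being OOP" or merely "having objects available" is, I suppose, the crux of the disagreement.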
I have been forced recently to cover many statements by US AI Czar David Sacks.
Here I will do so again, for the third time in a month. I would much prefer to avoid this. In general, when people go on a binge of repeatedly making such inaccurate inflammatory statements, in such a combative way, I ignore them.
Alas, under the circumstances of his attacks on Anthropic, I felt an obligation to engage once more. The All-In Podcast did indeed go almost all-in (they left at least one chip behind) against anyone worried about AI killing everyone or otherwise opposed to the administration's AI strategies, in ways that are often Obvious Nonsense.
To their credit, they also repeatedly agreed AI existential risk is real, which makes this an opportunity to extend an olive branch. And some of the disagreements clearly stem from genuine confusion, especially around them not feeling the AGI or superintelligence, and thinking all of this really is about jobs and market share.
If anyone involved wants to look for ways to work together, or simply wants to become less confused, I’m here. If not, I hope to be elsewhere.
Table of Contents
Our Continuing Coverage
I first covered many of his claims in Fighting Obvious Nonsense About AI Diffusion. Then I did my best to do a fully balanced look at the UAE-KSA chips deal, in America Makes AI Chip Diffusion Deal with UAE and KSA. As I said then, depending on details of the deal and other things we do not publicly know, it is possible that from the perspective of someone whose focus in AI is great power competition, this deal advanced American interests. The fact that many of Sacks’s arguments in favor of the deal were Obvious Nonsense, and many seemed to be in clearly bad faith, had to be addressed but did not mean the deal itself had to be an error.
This third post became necessary because of recent additional statements by Sacks on the All-In Podcast. Mostly they are not anything he has not said before, and are things he is likely to say many times again in the future, and they are largely once again Obvious Nonsense, so why cover them? Doesn’t Sacks rant his hallucinations about the supposed ‘AI Existential Risk Industrial Complex’ all the time?
Yes. Yes, he does. Mostly he falsely rants, and he repeats himself, and I ignore it.
What was different this time was the context.
Important Recent Context
The Trump Administration is attempting to pass what they call the ‘Big Beautiful Bill.’
Primarily this bill is a federal budget, almost none of which has to do with AI.
It also contains a provision that would impose a 10-year moratorium on state and local civil enforcement of almost any law related to AI.
Many people, including myself and Anthropic CEO Dario Amodei, are not afraid to say that this is a bonkers crazy thing to do, and that perhaps we might want to take some modest actions on AI prior to it transforming the world rather than after.
Putting this provision in the BBB is also almost certainly a violation of the Byrd rule, but Congress chose to put it in anyway, likely as a form of ‘reconnaissance in force.’
It is not entirely clear that the administration even wants this moratorium in this form. Maybe yes, maybe no. But they very much do care about the BBB.
Thus, someone leaked to Semafor, and we got this article with the title ‘Anthropic emerges as an adversary to Trump’s big bill,’ claiming that Anthropic is lobbying against the BBB due to the AI provision, and this and other Anthropic actions are making Trump world very angry.
The other main trigger, Semafor reports, was Anthropic's hiring of two Biden AI staffers, Elizabeth Kelly and Tarun Chhabra, and Biden AI advisor Ben Buchanan, although Semafor notes that Anthropic also employs Republican-aligned policy staff, like Benjamin Merkel and Mary Croghan. Buchanan, the architect of the Biden Diffusion rules, has (as one would expect) personally opposed the UAE-KSA deal and other reversals of Biden administration rules.
Bizarrely, the Trump administration also expressed annoyance at Anthropic CEO Dario Amodei warning about imminent loss of up to half of white collar jobs. I think that projection was too aggressive, but I am confident he believes it.
Semafor bizarrely frames these lobbying tactics as potentially savvy business moves?
Yeah, look, no, obviously not. If you agree with Reed (and I do) that Anthropic can't have a substantial impact on the BBB proceedings, then this was clearly a misstep given the reaction. Why would anyone think "antagonize the Trump administration" was good business for Anthropic? To help a bit with recruiting, because they would look slightly more "principled," at the risk of facing a hostile White House?
Anthropic and the White House being enemies would help only OpenAI and China.
Anthropic’s lobbying of course is partly motivated by what they believe is good for America and humanity, and partly by what is good for Anthropic.
Anthropic has, up until recently, seemingly been pursuing a very deliberate insider strategy. They were careful not to antagonize anyone. They continue to downplay public statements about AI existential and catastrophic risks. They have offered only very careful and measured support for any AI regulations. Dario has very much publicly gotten behind and emphasized the "need to beat China" framework. Not only does Anthropic not call for AI to "slow down" or "pause," they call upon American AI to accelerate. On SB 1047, Anthropic called for and got major softening of the bill and then still refused to endorse it.
This has been extremely frustrating for those who are worried about AI killing everyone, many of whom think Anthropic should speak up far louder and make the case for what is actually necessary. They see Anthropic as having largely sold out on this and often other fronts. Because such an approach is very obviously good for Anthropic’s narrow business interests.
What was said on the All-In Podcast recently, and is being reiterated even more than usual on Sacks's Twitter, is a frankly rather unhinged attack against anyone and everyone Sacks dislikes in the AI space, in an attempt to tie all of it together into a supposed grand, diabolical, conspiratorial "AI Existential Risk Industrial Complex" that, quite frankly, does not exist.
What is different this time is primarily the targeting of Anthropic.
Presumably the message is, loud and clear: Back the hell off. Or else.
The Point of This Post
This post has five primary objectives.
For various reasons, I am, shall we say, writing this with the maximum amount of charity and politeness that I can bring myself to muster.
You should proceed to the rest of this post if and only if this post is relevant to you.
Summary of the Podcast
I used the YouTube transcript. This was four podcasts in one.
If those involved believe what they are saying in part one and what David Sacks often says on Twitter on related topics, then they are deeply, deeply misinformed and confused about many things. That would mean this is a great opportunity for us all to talk, learn and work together. We actually agree on quite a lot, and that ‘we’ extends also to many of the others they are attacking here.
I would be happy to talk to any combination of the All-In hosts, in public, in private, or on the podcast, to help clear all this up along with anything else they are curious about. We would all benefit from that. I would love to do all this cooperatively. However differently we go about it, we all want all the good things, and there are some signs of underlying appreciation here for the problems ahead.
However it ended up in the podcast (again, this could all be a big misunderstanding), there was a lot of Obvious Nonsense here, including a lot of zombie lies, clearly weaponized. They say quite a lot of things that are not so, frame things in ways that instill implications that are not true, equate things that should not be equated, and so on. I can't pretend otherwise.
There’s also a profound failure to ‘feel the AGI’ and definitely a failure to feel the ASI (artificial superintelligence), or even to feel that others might truly feel it, which seems to be driving a lot of the disagreement.
There's a conflation, that I believe is largely genuine, of any and all skepticism of technology under the umbrella term "Doomer." Someone worries about job loss? Doomer. Someone worries about existential risk (by which perhaps you mean the effect on jobs?)? Doomer. Someone worries about AI ethics? Doomer. Someone worries about climate change? Doesn't come up, but also doomer, presumably.
But guys, seriously, if you actually believe all this, call me, let’s clear this up. I don’t know how you got this confused but we can fix it, even if we continue to disagree about important things too.
If you don’t believe it, of course, then stop saying it. And whether or not you intend to stop, you can call me anyway, let’s talk off the record and see if there’s anything to be done about all this.
The transcript mostly doesn’t make clear who is saying what, but also there don’t seem to be any real disagreements between the participants, so I’m going to use ‘they’ throughout.
I put a few of these notes into logical order rather than transcript order where that made more sense, but mostly this is chronological. I considered moving a few jobs-related things into the jobs section but decided not to.
As per my podcast standard, I will organize this as a series of bullet points. Anything in a main bullet point is my description of what was importantly said. Anything in the secondary bullets is me responding to what was said.
Part 1 (The Part With the Unhinged Attacks on Anthropic and also other targets):
Other Related Obvious Nonsense
David Sacks in particular continues to repeat a wide variety of highly unhinged claims about Effective Altruism. Here he includes Barack Obama in this grand conspiracy, then links to several even worse posts that are in transparently obvious bad faith.
Claims about job loss (what I call They Took Our Jobs) are a mundane problem, calling for mundane solutions, and have nothing whatsoever to do with existential risk or "effective altruism"; what are you even talking about? Is this because the article quotes Dario Amodei's claims about job losses, and therefore it is part of some grand "existential risk industrial complex"?
Seriously, do you understand how fully unhinged you sound to anyone with any knowledge of the situation?
David Sacks does not even disagree that we will face large-scale job loss from AI, only about the speed and net impact. This same All-In Podcast talks in Part 2 about the possibility of large job losses, not dissimilar in size to what Dario describes. Everyone who talks about this on the podcast seems to agree that massive job losses via AI automation are indeed coming, except they say This Is Good, Actually, because technology will always also create more jobs to replace them. The disagreement here is highly reasonable and is mainly a matter of talking price, and that price is almost entirely about whether new jobs will replace the old ones.
Indeed, they talk about a ‘tough job market for new grads’ and warn that if you don’t embrace the AI tools, you’ll be left behind and won’t find work. That’s basically the same claim as Kevin Roose is making.
What did Barack Obama do and say? The post I saw was him retweeting a New York Times article by Kevin Roose that talks about job losses and illustrates some signs of it, including reporting the newsworthy statement from Dario Amodei, after which Obama made this statement:
Do you disagree with Obama’s statement here, Sacks? Do you think it insufficiently expresses the need to provide miniature American flags for others and be twirling, always twirling towards freedom? Obama’s statement is essentially content-free.
EDIT: I then realized, after I hit post, that yes, Obama did also retweet the Axios article that quoted Dario, saying this:
That is at least a non-trivial statement, although his follow-up Call to Action is the ultimate trivial statement. This very clearly is not part of some conspiracy to make us ‘have public discussions about how to maximize the benefits and limit the harms of this powerful technology.’
How do these people continue to claim that this all-powerful "Effective Altruism" was somehow the astroturfing lobbyist group and they are the rogue resistance, when the AI industry has more lobbyists in Washington and Brussels than the fossil fuel industry and the tobacco industry combined? When almost all of that industry lobbying, including from OpenAI, Google, Meta and a16z, is exactly what you would expect: opposition to regulations and attempts to get their bag of subsidies?
What is most frustrating is that David Sacks very clearly understands that AGI presents an existential risk. AI existential risk is even explicitly affirmed multiple times during this podcast!
He has been very clear on this in the past, as in, for example:
Despite this, Sacks seems to have decided that reiterating these bizarre conspiracy theories and unhinged attacks is a good strategy for whatever his goals might be.
Here is another recent absurdity that got forcibly put in front of me via Tyler Cowen:
David Sacks has to know exactly what he is doing here. This is in obvious bad faith. At best, this is the tactic of ‘take a large group of people, and treat the entire group as saying anything that its most extreme member once said, and state it in the most negative way possible.’
To state the obvious, going point by point, how false all of this is:
I sincerely wish that David Sacks would stop. I do not expect him to stop. Given that I do not expect him to stop, I sincerely wish that I can go back to avoiding responding when he continues.
Part 2 – We Do Mean the Effect on Jobs
The discussion of the future of jobs and employment in Part 2 was much better.
There seemed to be a problem with scale throughout Part 2.
This all seems to take place in a tech and startup bubble where everyone can be founding a new startup or deeply steeping themselves in AI tools to get one of those cool new AI jobs.
This is great advice for podcast listeners in terms of career development, but it simply doesn't scale the way they want it to, nor does it broaden out as fast or as far in terms of jobs as they pitch it as doing.
There’s ‘what can a bright young listener to this podcast who is into tech and startups and is situationally aware do’ and ‘what is going to happen to a typical person.’ You cannot, in fact, successfully tell most people to ‘learn to code’ by adding in the word vibe.
Part 3 – The Big Beautiful Bill
It seems worth a few notes while I am here. I will divide the ‘BBB’ into two things.
I am not an expert on Congressional budget procedure or different types of appropriations, but no one here seemed to be one either, and the resulting discussion would have benefited from someone who understands how any of this works.
They are very keen to blame anything and everything they can on Biden, the rest on Congress, and nothing on Trump.
They seem very excited by making the DOGE cuts permanent for reasons that are not explained.
I notice that there is a prediction that this administration will balance the Federal budget. Are we taking wagers on that? There’s a lot of talk of the need to get the deficit down, and they blame the bill not doing this on Congress, essentially.
It seems this expectation is based on creating lots of economic growth, largely via AI. Very large gains from AI do seem to me to be the only sane way we might balance the budget any time soon. I agree that there should be lots of emphasis on GDP growth. They are very confident, it seems, that lower taxes will pay for themselves and spur lots of growth, and they think the CBO is dumb and simplistic.
There’s a concrete prediction for a very hot Q2 GDP print, 3%-4%. I hope it happens. It seems they generally think the economy will do better than predicted, largely due to AI but also I think due to Trump Is Magic Economy Catnip?
They talk about the need for more energy production, and some details are discussed on timing and sizing. I agree, and would be doing vastly more to move projects forward, but from what I have seen of the BBB it does not seem to be net positive on this front. They are right to emphasize this; it is just not cashing out in terms of much action to create new energy production.
I don’t have anything to say about Part 4, especially given it is out of my scope here.
Where Does This Leave Us
I hope that Anthropic understands the reaction that they seem to be causing, and chooses wisely how to navigate given this. Given how often Sacks makes similar claims and how much we all have learned to tune those claims out most of the time, it would be easy to miss that something important has changed there.
I presume that David Sacks will continue to double down on this rhetoric, as will many others who have chosen to go down similar rhetorical paths. I expect them to continue employing these Obvious Nonsense vibe-based strategies and accusations of grand conspiracies indefinitely, without regard to whether they map onto reality.
I expect it to be part of a deliberate strategy to brand anyone opposing them, in the style of a certain kind of politics, as long as such styles are ascendant. Notice when someone makes or amplifies such claims. Update on that person accordingly.
I would love to be wrong about that. I do see signs that, underneath it all, something better might indeed be possible. But assuming I’m not wrong, it is what it is.
My realistic aspiration is to not have to keep having that conversation this way, and in particular not to have to parse claims from such arguments as if they were attempting to be words that have meaning, that are truthful, or that map onto physical reality. It is not fun for anyone, and there are so many other important things to do.
If they want to have a different kind of conversation, I would welcome that.