Psychology professor at University of New Mexico. BA Columbia, PhD Stanford. Works on evolutionary psychology, Effective Altruism, AI alignment, X risk. Worked on neural networks, genetic algorithms, evolutionary robotics, & autonomous agents back in the 90s.
Thanks to Mikhail Samin for writing one of the most persuasive and important articles I've read on LessWrong.
I think a lot of the dubious, skeptical, or hostile comments on this post reflect some profound cognitive dissonance.
Rationalists and EAs generally were very supportive of OpenAI at first, and 80,000 Hours encouraged people to work there; then OpenAI betrayed our trust and violated most of the safety commitments that they made. So, we were fooled once.
Then, Rationalists and EAs were generally very supportive of Anthropic, and 80,000 Hours encouraged well-meaning people to work there; then Anthropic betrayed our trust and violated most of the safety commitments they made. So, we were fooled twice. Which is embarrassing, and we find ways to cope with our embarrassment, gullibility, and naivete.
What's the lesson from OpenAI and Anthropic betraying our trust so massively and recklessly?
The lesson is simply about human nature. People are willing to sell their souls. A mid-level hit man is willing to kill someone for about $50,000. A cyber-scammer is willing to defraud thousands of elderly people for a few million dollars. Sam Altman was willing to betray AI Safety to achieve a current net worth of (allegedly) about $2.1 billion. Dario Amodei was willing to betray AI Safety to achieve his current net worth of (allegedly) about $3.7 billion. If the AI bubble doesn't burst soon, they'll each probably be worth over $10 billion within a couple of years.
So, we should have expected that almost anyone, no matter how well-meaning and principled, would eventually succumb to the greed, hubris, and thrills of trying to build Artificial Superintelligence. We like to think that we'd never sell our souls or compromise our principles for $10 billion. But millions of humans compromise their principles, every day, for much, much less than that.
Why exactly did we think Sam Altman or Dario Amodei would be any different? Because they were 'friendlies'? Allies to the Rationalist cause? EA-adjacent? Long-termists who cared about the future?
None of that matters to ordinary humans when they're facing the prospect of winning billions of dollars -- and all they have to do is a bit of rationalization and self-deception, get some social validation from naive worshippers/employees, and tap into that inner streak of sociopathy that is latent in most of us.
In other words, Anthropic's utter betrayal of Rationalists, and EAs, and humanity, should have been one of the least surprising developments in the entire tech industry. Instead, here we are, trading various copes and excuses for this company's rapid descent from 'probably well-intentioned' to 'shamelessly evil'.
Matrice -- maybe, if it were possible for people to boycott Google/Deepmind, or Microsoft/OpenAI.
But as a practical matter, we can't expect hundreds of millions of people to suddenly switch from gmail to some email alternative, or to switch from Windows to Linux.
It's virtually impossible to organize a successful boycott of all the Big Tech companies that have oligarchic control of people's digital lives, and that are involved in AGI/ASI development.
I still think the key point of leverage is specific, personalized, grassroots social stigmatization of AGI/ASI developers and people closely involved in what they're doing.
(But I could be convinced that Big Tech boycotts might be a useful auxiliary strategy).
PS: the full video of my 15-minute talk was just posted today on the NatCon YouTube channel; here's the link
Matrice -- for more on the stigmatization strategy, see my EA Forum post from a couple years ago, here
IMHO, a grassroots moral stigmatization campaign by everyone who knows AGI devs would be much more effective than just current users of a company's products boycotting that company.
Oliver -- that's all very reasonable, and I largely agree.
I've got no problem with people developing narrow, domain-specific AI such as self-driving cars, or smarter matchmaking apps, or suchlike.
I wish there were better terms that could split the AI industry into 'those focused on safe, narrow, non-agentic AI' versus 'those trying to build a Sand God'. It's only the latter who need to be highly stigmatized.
Peace out :)
Seth - thanks for sharing that link; I hadn't seen it, and I'll read it.
I agree that we should avoid making AI safety either liberal-coded or conservative-coded.
But, we should not hesitate to use different messaging, emphasis, talking points, and verbal styles when addressing liberal or conservative audiences. That's just good persuasion strategy, and it can be done with epistemic and ethical integrity.
Russell -- I take your point that in most alternative timelines, we would already be dead, decades ago, due to nuclear war. I often make that point in discussing AI risk, to hammer home that humanity does not have any magical 'character armor' that will protect us from extinction, and that nobody is coming to save us if we're dumb enough to develop AGI/ASI.
However, I disagree with the claim that 'our current situation is not one to preserve'. I know people in the military/intelligence communities who work full time on nuclear safety, nuclear non-proliferation, counter-terrorism, etc. There are tens of thousands of smart people across dozens of agencies across many countries who spend their entire lives reducing the risks of nuclear war. They're not just activists making noise from outside the centers of power. They're inside the government, with high security clearances, respected expertise, and real influence. I'm not saying the risk of nuclear war has gone to zero, but it is taken very seriously by all the major world governments.
By contrast, AI safety remains something of a fringe issue, with virtually no representation inside governments, corporations, media, academia, or any other power centers. That's the thing that needs to change.
We don't need a 'hail Mary' where we develop AGI/ASI and then hope that it can reduce nuclear risk more than it increases all other risks.
I didn't say that all Rationalists are evil. I do consider myself a Rationalist in many ways, and I've been an active member of LessWrong and EA for years, and have taught several college courses on EA that include Rationalist readings.
What I did say, in relation to my claim that 'they’ve created a trendy millenarian cult that expects ASIs will fill all their material, social, and spiritual needs', is that 'This is the common denominator among millions of tech bros, AI devs, VCs, Rationalists, and effective accelerationists'.
The 'common denominator' language implies overlap, not total agreement.
And I think there is substantial overlap among these communities -- socially, financially, ethically, geographically.
Many Rationalists have been absolutely central to analyzing AI risks, advocating for AI safety, and fighting the good fight. But many others have gone to work for AI companies, often in 'AI safety' roles that do not actually slow down AI capabilities development. And many have become e/accs or transhumanists who see humanity as a disposable stepping-stone to something better.
Yes. And way too much 'AI safety work' boils down to 'getting paid huge amounts by AI companies to do safety-washing & public relations, to kinda sorta help save humanity, but without upsetting my Bay Area roommates & friends & lovers who work on AI capabilities development'.
TsviBT -- I can't actually follow what you're saying here. Could you please rephrase a little more directly and clearly? I'd like to understand your point. Thanks!