My talk on AI risks at the National Conservatism conference last week

by geoffreymiller, 11th Sep 2025

Comments

Kaj_Sotala:

I expected to strong-upvote this, because "appealing to conservatives in the kind of language and values that they appreciate, and working to join them on issues of AI safety" feels like a very laudable goal. However, much of this talk seemed to be not so much "trying to find common ground and things we can all agree on with the conservatives" as "demonizing anyone associated with building AI, including much of the AI safety community itself".

I'm confused how you can simultaneously suggest that this talk is about finding allies and building a coalition together with the conservatives, while also explicitly naming "rationalists" in your list of groups that are trying to destroy religion, among other things. I would expect that the net result of this talk is to make anyone sympathetic to it discount the opinions of many of the people who've put the most work into understanding e.g. technical AI safety or AI governance.

geoffreymiller:

Kaj: well, as I argued here, regulation and treaties aren't enough to stop reckless AI development. We need to morally stigmatize anyone associated with building AGI/ASI.  That's the main lever of social power that we have. 

I see zero prospect for 'technical AI safety work' solving the problem of slowing down reckless AI development. And it's often little more than safety-washing by AI companies to make it look like they take safety seriously - while they continue to push AGI capabilities development as hard as ever.

I think a large proportion of the Rationalist/EA/LessWrong community is very naive about this, and that we're being played by bad actors in the AI industry.  

-------------------------------------------

[original post below]

Lately I’ve been trying to raise awareness of AI risks among American conservatives.  Stopping the reckless development of advanced AI agents (including Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)) should be a human issue, not a partisan issue. Yet most people working in AI safety advocacy lean to the Left politically, and we seem to be ignoring many potential allies on the Right. 

This neglect of conservative allies seems suboptimal, given that the Republican Party currently controls all three branches of the U.S. government (the Presidency, a majority on the Supreme Court, and majorities in both the House and the Senate). Granted, the pro-AI lobbyists, Big Tech accelerationists, and tech VCs like Andreessen Horowitz have some influence with the current White House (e.g. on AI Czar David Sacks), but many conservative political leaders in the executive and legislative branches have expressed serious public concerns about AI risks.

In the interests of coalition-building, I attended a political conference last week in Washington DC called the National Conservatism conference (NatCon 5). There I gave a talk titled ‘Artificial Superintelligence would ruin everything’, in a session on ‘AI and the American Soul’. 

NatCon has been a big deal as the intellectual vanguard of American conservatism since the first conference in 2019. Vice President JD Vance rose to prominence partly because he gave inspiring speeches at previous NatCons, even before he was elected a Senator in 2022. Other speakers at NatCon 5 last week included Tulsi Gabbard (Director of National Intelligence), Tom Homan (Border Czar), Sebastian Gorka (NSC Senior Director for Counterterrorism), Jay Bhattacharya (Director of the NIH), Russell Vought (Director of the OMB), Harmeet Dhillon (Assistant Attorney General), Ambassador Jamieson Greer (U.S. Trade Representative), and U.S. Senators Jim Banks, Josh Hawley, and Eric Schmitt. There were about 1,200 registered attendees at NatCon 5, including many staffers working in the current administration, plus leading figures from conservative universities, news channels, magazines, and podcasts. The YouTube playlist of NatCon 5 talk videos released so far is here.

NatCon was also an opportunity to reach out to religious leaders. The conference included benedictions by Catholic priests, Protestant ministers, and Jewish rabbis, and the Judeo-Christian tradition was a prominent theme. Most of the attendees would have identified as Christian. The main conference organizer, Yoram Hazony, is a Jewish political theorist who lives in Israel; his books on nationalism and conservatism would be excellent introductions for those genuinely interested in understanding modern national conservatism. (In a future post, I may explore some strategies for raising AI safety awareness among the 2.5 billion Christians in the world.)

Why this post? Well, a few days ago, Remmelt was kind enough to post some excerpts (here on EA Forum and here on LessWrong) from an article by The Verge reporting on the controversies about AI at the conference, including several excerpts from my talk and my questions to other speakers. This sparked little interest on EA Forum so far, but a fair amount of controversy on LessWrong, including some debate between Oliver Habryka and me. The AI theme at NatCon was also covered in this article in the Financial Times. Other NatCon speakers who addressed various AI risks included Senator Josh Hawley on AI unemployment (talk here), Mike Benz on AI censorship (talk here), and Rachel Bovard on AI transhumanism (talk here), plus the other speakers in our AI panel: Wynton Hall, Spencer Klavan, and Jeffrey Tucker (videos not yet released).

The video of my NatCon talk on AI risk hasn't been released yet. But I thought it might be useful to include the full text of my talk below, so people on this forum can see how I tried to reach out to conservative 'thought leaders' given the beliefs, values, and civilizational goals that they already have. I tried to meet them where they are. In this regard, I think my talk was pretty successful – I got a lot of good questions in the Q&A afterwards, and a lot of follow-up from conservative media. For example, here's a short interview I did with Joe Allen from Bannon's War Room -- who also recently interviewed Nate Soares about AI X-risk. I also had many good discussions about AI over the 3 days of the conference, which were very helpful to me in understanding which kinds of AI safety arguments do or do not work with conservatives, nationalists, and Christians.

I’d welcome any (reasonable) comments, reactions, and suggestions about how AI safety advocates can reach out more effectively to conservatives – especially to those who are currently in power.

-------------------------------------------

[talk itself is below]

Artificial Superintelligence would ruin everything

It’s an honor and a thrill to be here, at my first NatCon, among people who respect our ancestors, cherish our descendants, and love our nation.

In my day job, I’m a psychology professor who teaches courses on relationships, emotions, and altruism. But I’m also a conservative, a patriot, and a parent. There are still four or five of us around in academia. 

I’ve been following AI closely for 35 years, ever since I worked on neural networks and autonomous robots as a grad student at Stanford and then a post-doc at the University of Sussex. In the last ten years, I’ve talked a lot about the dangers of AI in articles, podcasts, and social media.

The AI industry’s explicit goal is to go far beyond current LLMs like ChatGPT: to develop Artificial General Intelligence, or AGI, that can do any cognitive or behavioral task that smart humans can do, and then to develop Artificial Superintelligence, or ASI, which would be vastly smarter than all the humans that have ever lived.

These are not distant goals – many AI leaders expect AGI within 10 years, then ASI shortly after that. I’ve got two toddlers, and we could face dangerous ASIs by the time they graduate high school. 

In this talk, I aim to persuade you that ASI is a false god, and if we build it, it would ruin everything we know and love. Specifically, it would ruin five things that national conservatives care about: survival, education, work, marriage, and religion. 

First, ASI would ruin the survival of our species 

The most severe risk is that ASIs that we can’t understand, predict, or control end up exterminating all of humanity. This is called ASI extinction risk, and it’s been a major topic of research for 20 years. 

Remember, we’re not coding AIs like traditional software that can be analyzed and debugged. The best current AI systems are black box neural networks trained on over 40 trillion language tokens, yielding over a trillion connection weights, like synapses in a brain. Reading all of these connection weights aloud would take a human about 130,000 years. We have no idea how these LLMs really work, and no idea how to make them aligned with human interests.
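
As a rough sanity check on that reading-time figure (a back-of-the-envelope sketch; the four-seconds-per-weight reading pace is my assumption, not a measured number):

$$\frac{10^{12}\ \text{weights} \times 4\ \text{s/weight}}{3.15 \times 10^{7}\ \text{s/year}} \approx 1.3 \times 10^{5}\ \text{years} \approx 130{,}000\ \text{years}$$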

So what? Well, about a quarter of the general public already think that ASI could cause human extinction within this century. Hundreds of leading AI researchers agree, including Yoshua Bengio and Geoff Hinton – the two most-cited living scientists. 

And every single CEO of a major AI company has warned that developing ASIs would impose serious extinction risks on humanity. This includes Sam Altman of OpenAI, Dario Amodei of Anthropic, Demis Hassabis of DeepMind, and Elon Musk of xAI.

Generally, the more you know about AI, the higher your p(doom), or estimated probability that ASI would doom humanity to imminent extinction. For most AI safety experts, who have studied these issues for years, p(doom) is at least 20%. For many, including me, it’s well over 50%.

Long story short, almost everyone in AI understands that developing ASI would be playing Russian roulette with our entire species. In the six chambers of this existential revolver, we’re just arguing about whether there would be one round, or five rounds. 

Problem is, when I talk about ASI extinction risk, many people seem weirdly unconcerned. They think the risk is too speculative or too distant.

So, in this talk, I’m going to focus on what happens even if we avoid ASI extinction – if we survive to enjoy the AI industry’s best-case scenario. 

ASI would ruin education

Actually, AI is already ruining higher education. Millions of college students are using AI to cheat, every day, in every class. Most college professors are in a blind panic about this, and we have no idea how to preserve academic integrity in our classes, or how our students will ever learn anything, or whether universities have any future.

We can’t run online quizzes or exams, because students will use AI to answer them. We can’t assign term papers, because LLMs can already write better than almost any student. So, in my classes, I’ve had to ‘go medieval’, using only in-person paper-and-pencil tests. 

The main result of AI in higher education so far is that students use AI to avoid having to learn any knowledge or skills.

So, what is the AI industry’s plan for education? They seem inspired by Neal Stephenson’s 1995 novel ‘The Diamond Age’, in which a nanotech engineer invents a superintelligent interactive book that serves as a customized tutor for his bright and curious daughter. ASIs could make great personal tutors for kids. They would know everything about everything, and be able to explain it with the best possible combination of words, videos, and games.  

The question is, what values would the AI companies train into these ASI tutors, so that they can shape the next generation of AI users? Will they nudge the kids towards national conservatism, family values, and the Christian faith? Or will they teach Bay Area leftist, secular, globalist, transhumanist values? 

You know the answer.  ASI tutors would give the AI industry total educational and ideological control over future generations.

ASI would ruin work

National conservatives believe in borders. We oppose immigration by millions of people who want our jobs or our welfare, but who do not share our traditions or values. 

Many conservatives in the current Trump administration, quite rightly, want stronger geographical borders against alien people. But they seem oblivious to the need to protect our digital borders against invasion by alien intelligences. Indeed, they seem giddy with delight about AI companies growing ASIs inside our data centers – without understanding that a few ASIs can easily become hundreds of ASIs, then millions, then billions. If you worry about immigrants out-breeding our native populations, wait until you see how quickly ASIs can self-replicate.

These ASIs won’t be American in any meaningful sense. They won’t be human. They won’t assimilate. They won’t have marriages or families. They won’t be Christian or Jewish. They won’t be national conservatives. But they will take our jobs. 

Economists, bless their hearts, will often say ‘AI, like every technology before it, may eliminate some traditional jobs, but it will produce such prosperity that many new jobs will be created.’ This copium reveals a total misunderstanding of AI. 

Remember, Artificial General Intelligence is defined as an AI that can do any cognitive or behavioral task at least as well as a smart human can, at an economically competitive level. This includes being able to learn how to control a human-shaped body to do any physical labor that a human can learn to do. An even stronger ASI, plus anthropoid robots, could replace any human worker doing any existing job – from bricklaying to brain surgery, from running hedge funds to doing further AI research.

OK, so we’d lose all existing jobs to ASI. But won’t the AI-fueled economic growth create billions of new jobs? Yes, it will – for other ASIs, which will be able to learn any new job faster than any human can learn it. We can’t re-train to do new jobs faster than the ASIs can train themselves.

ASI would, within a few decades, impose permanent unemployment on every human, now and forever. Our kids won’t have jobs, and won’t be able to pay for food, housing, or medical care. And every CEO of every AI company knows this -- which is why they all say that the only long-term solution to AI-induced unemployment is a massively expanded welfare state.  Elon Musk calls this Universal Generous Income; others have called it the Fully Automated Luxury Communist Utopia.

This is their plan: ASI will automate all human labor, no human workers will earn any income, the AI companies will earn all income, then pay most of their revenue to the government, and the government will distribute generous welfare payments to everyone. 

The AI executives promise that they will happily pay enough taxes to support this universal welfare state. Maybe they’ll pay the $20 trillion a year that it would cost. But for how long? For generations?  Forever?
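
For scale, here is a rough check on that $20 trillion figure (the $60,000-per-person payment level is my illustrative assumption, not a number from the AI companies):

$$330 \times 10^{6}\ \text{people} \times \$60{,}000\ \text{per year} \approx \$2 \times 10^{13}\ \text{per year} = \$20\ \text{trillion per year}$$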

After ASI, the dignity of human work would die. Husbands would no longer be breadwinners. Every mother would become a welfare queen. Every child would, economically, become a ward of the state. The family as an economic unit would end. The bonds of mutual loyalty that sustain our economic interdependence would become irrelevant. 

ASI would ruin marriage 

Look, I’m not against all AI applied to human relationships. I’m chief science advisor to a start-up company called Keeper, which has developed a really good AI matchmaking app to help men and women find compatible partners – ones who actually want marriage and children. Whereas Tinder and Hinge offer short-term casual hookups, Keeper is all about traditional family values. But we’re using narrow, domain-specific AI to do the matchmaking, with no ambitions to develop ASI.

By contrast, the big AI companies want their users to develop their most significant, intimate relationships with their AIs, not with other humans. This is clear in the recent push to develop AI girlfriends and boyfriends, and to make them ever more attractive, charming, interactive, and addictive.

The AI transhumanists are eager for a future in which everyone has their own customized AI companions that anticipate all their desires. 

This would be the logical outcome of combining chatbots, sexbots, deepfakes, goon caves, AR, VR, and reproductive tech. To misquote Xi Jinping, it would be Romanticism, with Bay Area Characteristics, for a New Era. Actual reproduction and parenting would be outsourced to artificial wombs and child care robots.

Granted, an ASI partner would be tempting in many ways. It would know everything about everyone and everything. It would chat insightfully about all our favorite books, movies, and games. It would show empathy, curiosity, arousal, and every human emotion that can be simulated. And, no need for monogamy with AIs. If you can afford it, why not lease a whole AI harem?

But, after enough young people get a taste for an ASI boyfriend or girlfriend, no mere human would seem worthy of romantic attraction, much less marriage or a family.

ASI would ruin religion 

Our new Pope, Leo XIV, said earlier this year that AI poses ‘new challenges for the defense of human dignity, justice, and labor’, and he views AI as the defining challenge of our world today. 

But most American AI developers are liberal Bay Area atheists. They may have had ‘spiritual experiences’ at Burning Man, enjoying LSD, EDM, and tantric sex. But they view traditional religion with bemused contempt. They have a god-shaped hole in their souls, which they fill with a techno-utopian faith in the coming ASI Singularity. In place of the Judeo-Christian tradition, they’ve created a trendy millenarian cult that expects ASIs to fill all their material, social, and spiritual needs.

This is the common denominator among millions of tech bros, AI devs, VCs, Rationalists, and effective accelerationists. ASI, to them, will be the new prophet, savior, and god. Indeed, they speak of summoning the ‘sand-god’: sand makes silicon chips, silicon chips enable superintelligence, and superintelligence means omniscience, omnipotence, and omnipresence. But godlike ASIs won’t offer real love, mercy, holiness, or salvation.

Summoning the ASI sand-god would be the ultimate hubris. It won’t have true divinity, and it won’t save any souls. But it may say unto us, ‘Thou shalt have no other gods before me’. 

So, what to do about ASI? 

My humble suggestion is that we shouldn’t let ASIs ruin our education, work, marriage, religion, nation, civilization, and species. We should shut down all ASI development, globally, with extreme prejudice, right now. 

To do that, we need strict AI regulations and treaties, and the will to enforce them aggressively. But we also need the moral courage to label ASI as Evil. Not just risky, not just suicidal, but world-historically evil. We need a global campaign to stigmatize and ostracize anyone trying to build ASI. We need to treat ASI developers as betrayers of our species, traitors to our nation, apostates to our faith, and threats to our kids. 

Many people will say: but if we don’t develop ASI, China will, and wouldn’t that be worse? 

This is where a little knowledge of game theory is a dangerous thing. We’re not in a geopolitical arms race to build a new tool, or a new weapon. We’re not racing up a mountain to reach global hegemony. Instead, we’re racing off a cliff. An American-made ASI would ruin everything we value in America. Maybe Xi Jinping could be persuaded that a Chinese-made ASI would ruin everything the Han Chinese people love, and would mean that the CCP would give up all power to their new ASI-emperor. 

When two players realize they’re racing off the same cliff, they can easily coordinate on the Pareto-optimal equilibrium where they both simply stop racing. There would be no temptation to defect from a global ban on ASI development, once the major players realize that secretly building any ASI would be civilizational suicide.
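
To make that coordination logic concrete, here is a minimal game-theory sketch in Python. The payoffs are illustrative numbers of my own choosing, encoding the premise above: any ASI that gets built costs both players everything (-100), and racing also burns resources (an extra -1). Under that premise, 'stop' strictly dominates 'race', and mutual stopping is the unique Nash equilibrium – there is no temptation to defect.

```python
# Toy two-player "racing off a cliff" game with illustrative payoffs.
# Premise (from the talk): ANY ASI that gets built ruins things for both
# players (-100), and racing also burns resources (an extra -1).
# These numbers are hypothetical, chosen only to encode that premise.

MOVES = ("stop", "race")

PAYOFFS = {  # (player 0's move, player 1's move): (payoff to 0, payoff to 1)
    ("stop", "stop"): (0, 0),
    ("stop", "race"): (-100, -101),
    ("race", "stop"): (-101, -100),
    ("race", "race"): (-101, -101),
}

def payoff(player, own_move, other_move):
    """Payoff to `player` when they play own_move against other_move."""
    profile = (own_move, other_move) if player == 0 else (other_move, own_move)
    return PAYOFFS[profile][player]

def best_responses(player, other_move):
    """Moves that maximize `player`'s payoff, given the opponent's move."""
    best = max(payoff(player, m, other_move) for m in MOVES)
    return {m for m in MOVES if payoff(player, m, other_move) == best}

# A profile is a pure-strategy Nash equilibrium if each player's move
# is a best response to the other player's move.
equilibria = [
    (a, b)
    for a in MOVES
    for b in MOVES
    if a in best_responses(0, b) and b in best_responses(1, a)
]

print(equilibria)  # [('stop', 'stop')] -- mutual stopping, no temptation to defect
```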

So, we’re not in a two-player AI arms race with ASI as the payoff – rather, we’re in a game that could add a third player, the ASI, which would then win any future games we try to play. The only winner of an AI arms race would be the ASI itself – not America, not China. Only the ASI, stamping on a human face, forever.

Let’s get real. The AI industry is radically revolutionary and aggressively globalist. It despises all traditions, borders, family values, and civic virtues. It aims to create and worship a new sand-god. It aims to ruin everything national conservatives know and love. 

We, in turn, must ruin the AI industry’s influence here in Washington, right now. Their lobbyists are spending hundreds of millions of dollars to seduce this administration into allowing our political enemies to summon the most dangerous demons the world has ever seen. 

[end]

[Note: this was cross-posted to EA Forum today here]