The Case for An AI Safety Political Party in the US

by Oliver Kuperman
10th Sep 2025
24 min read
Comments:

Seth Herd:

I don't have time to read the whole thing right now, but I have long thought that politics is one arena where we could and should advance the idea of AI safety.

I think this becomes particularly possible and powerful if we see substantial concerns about job loss to AI before we have takeover-capable AGI. And I now think we probably will.

I got hung up on your introductory point 3, "reasonable chance for success". If I were you I'd reword or re-think that to qualify "success". We're not likely to get anyone into the US Senate on a third-party ticket, before doomsday, so saying up front what your realistic expectations are would be helpful for drawing readers in. You might edit that statement since you do deal with that concern in the post.

Oliver Kuperman:

Thanks for the response! I agree that “Reasonable chance of success” is kind of a vague claim that people might take as an attempt to guarantee immediate electoral success. However, a major point of this essay is that a third party doesn’t need to actually win any elections to have a substantial impact (although I think the ability to get a sitting senator to run under your ticket is not the best predictor of electoral success). I agree that if one narrowly focuses on long-term electoral success, this project loses a ton of its value, but I reject that framing.

Ross Perot never won, and I think few people would deny his campaign had major impacts on US policy, or at least public discourse. RFK Jr. also did not win, but he got to take over DHHS by leveraging his political support. The US Green Party never won, but I think their influence has still been substantial, given the Democratic Party’s embrace of environmental policies.

I think maybe reposting this on a weekend is a better idea, as people will have more time to read it, but I agree this post might be too long. After you read more of it, could you tell me which parts you think I should cut? 

Also, do you see any other reasons this post might be so downvoted? I might understand why it would not receive upvotes due to its length, but the subject seems somewhat novel and topical, and the post covers a wide range of possible objections. Are there any blatant writing flaws you see?

Seth Herd:

I bounced off it exactly at that point, so editing just that in its current form is worth it. I'm not sure what view people take of just reposting repeatedly; it isn't usually done, so it might be considered a bit gauche.

Ummm, how did the Green Party fare in terms of getting their agenda forwarded, as of right now?

My primary concern with political involvement is getting your issue polarized like environmental concerns were. If one party raises your banner the other will want to tear it down by fair means or foul, and suddenly even an obvious concern is debated and gridlocked.

Oliver Kuperman:

I mean, the Green New Deal was widely influential? Sure, it did not pass, but I think it’s pretty easy to argue the Green Party had an effect on US environmental policy. Did you read the section on that, which included a study demonstrating that Green Party candidates entering competitive elections had an effect on Democratic Party platforms? It’s not an airtight case, but little is in politics, and it’s not a stretch to think the Democratic Party being very supportive of environmental policies is at least partially due to the influence of the US Green Party.

On the issue of politicization, that is one of the reasons I propose a third party with generally moderate positions. However, even if the issue does get politicized before the party is able to influence policy, I don’t see how that is a bad thing compared to the status quo. I’d rather have one party support AI safety and another party oppose it than both parties ignoring the issue.

On reposting, I am not sure what I am supposed to do when nobody leaves any comments? The point of this essay was not so much to convince people as to figure out why people have not vigorously pursued this option before. This is the biggest platform for serious discussion on AI safety, and this essay is written for an audience that already takes AI somewhat seriously, so I do not know what else to do with this.

Anyways thanks for the feedback. 


Introduction:

Artificial Intelligence is advancing rapidly, raising significant concerns about its safe development and deployment. Despite widespread public concern about AI, there is a notable absence of a sustained political movement dedicated to addressing these issues. While certain organizations and individuals are engaged in shaping AI policy, these efforts have generally avoided the domain of electoral politics.

This post advances the argument for the establishment of a political party centered on promoting AI safety. The argument rests on three main claims:

  1. Comprehensive U.S. government support for AI safety would substantially increase the likelihood that advanced AI systems are developed and deployed responsibly.
  2. Current government action on AI safety remains limited, leaving considerable scope for additional policy initiatives.
  3. An AI safety political party could represent a comparatively cost-effective mechanism for advancing these goals and holds a reasonable probability of success.

The Importance of Increased Governmental Support for AI Safety:

Governments are needed to promote AI safety because the dynamics of AI development make voluntary caution difficult, and because AI carries unprecedented risk and transformative potential. Furthermore, the US government can make a huge difference for a relatively insignificant slice of its budget.

The Highly Competitive Nature of AI and a Potential Race to the Bottom:

There’s potentially a massive first-mover advantage in AI. The first group to develop transformative AI could theoretically secure overwhelming economic power by using that AI to kick off a chain of recursive self-improvement, in which human AI researchers first gain dramatic productivity boosts from AI tools, and then AI systems improve themselves directly. Even without recursive improvement, however, being a first mover in transformative AI could still have dramatic benefits.

Incentives are distorted accordingly. Major labs are pressured to move fast and cut corners—or risk being outpaced. Slowing down for safety often feels like unilateral disarmament. Even well-intentioned actors are trapped in a race-to-the-bottom dynamic, as all your efforts to ensure your model is safe count for little if an AI system developed by another, less scrupulous company becomes more advanced than your safer models. Anthropic puts it best when they write "Our hypothesis is that being at the frontier of AI development is the most effective way to steer its trajectory towards positive societal outcomes."

The actions of other top AI companies also reflect this dynamic, with Meta having spent hundreds of millions if not billions of dollars to poach talented individuals from other firms, and many AI firms barely meeting basic safety standards.

This is exactly the kind of environment where governance is most essential. Beyond my own analysis, here is what notable advocates of AI safety have said on the necessity of government action and the insufficiency of corporate self-regulation: 

“‘My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely,’ he said.  ‘The only thing that can force those big companies to do more research on safety is government regulation.’”

  • Geoffrey Hinton, Nobel Prize Winner for contributions to AI, in an interview with the Guardian in 2024

 

“I don't think we've done what it takes yet in terms of mitigating risk. There's been a lot of global conversation, a lot of legislative proposals, the UN is starting to think about international treaties — but we need to go much further. [...] There's a conflict of interest between those who are building these machines, expecting to make tons of money and competing against each other with the public. We need to manage that conflict, just like we've done for tobacco, like we haven't managed to do with fossil fuels. We can't just let the forces of the market be the only force driving forward how we develop AI.”

  • Yoshua Bengio, recipient of the Turing Award, in an interview with Live Science in 2024.
     

“Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going. This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem."

  • Eliezer Yudkowsky, AI safety advocate and founder of the Machine Intelligence Research Institute, writing in Time Magazine in 2023.

 

The Magnitude of AI Risks:

Beyond the argument from competition, there is also the question of who gets to make key decisions about what types of risks should be taken in the development of AI. If AI has the power to permanently transform society or even destroy it, it makes sense to leave critical decisions about safety to pluralistic institutions rather than unaccountable tech tycoons. Without transparency, accountability, and clear safety guidelines, the risk of AI catastrophe seems much higher.

To illustrate this point, imagine if a family member of a leader of a major AI company (or the leader themselves) developed late-stage cancer or another serious medical condition that is difficult to treat with current technology. It is conceivable that the leader would attempt to develop AI faster in order to increase their own or their family member's chance of survival, whereas it would be in the best interest of society to delay development for safety reasons. While it is possible that workers in these AI companies would speak out against the leader's decisions, it is unclear what could be done if the leader in this example decided against their employees' advice.

This scenario is not the most likely one, but there are many like it, and I think it illustrates that the risk appetites, character, and other unique attributes of the leaders and decision-makers of these AI companies can materially affect the level of AI safety applied in AI development. Government is not completely insulated from this phenomenon, especially in short-timeline scenarios. Ideally, though, an AI safety party would be able to facilitate the creation of institutions that draw on the viewpoints of many diverse AI researchers, business leaders, and community stakeholders, producing an AI-governance framework that does not give any one (potentially biased) individual the power to unilaterally make decisions on issues of great importance regarding AI safety (such as when and how to deploy or develop highly advanced AI systems).

The Vast Scope and Influence of Government:

Finally, I think the massive resources of government are an independent reason to support government action on AI safety. Even if you think corporations can somewhat effectively self-regulate on AI and you are opposed to a general pause on AI development, there is no reason the US government shouldn't and can't spend $100 billion a year on AI safety research. This number would be roughly 27 times OpenAI's total estimated revenue in 2024 ($3.7 billion), but less than 15% of US defense spending. Ultimately, the US government has more flexibility to support AI safety than corporations, owing simply to its massive size.
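
To make the scale concrete, here is the arithmetic behind those comparisons, as a quick sketch (the dollar figures are the ones cited in this post):

```python
proposed_budget = 100e9        # hypothetical $100B/year on AI safety research
openai_revenue_2024 = 3.7e9    # OpenAI's estimated 2024 revenue, as cited above
dod_fy2023 = 820.3e9           # estimated FY2023 DoD spending, cited later in this post

print(f"{proposed_budget / openai_revenue_2024:.1f}x OpenAI's 2024 revenue")  # ~27.0x
print(f"{proposed_budget / dod_fy2023:.1%} of FY2023 defense spending")       # ~12.2%
```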

The Insufficiency of Current US Action on AI Safety:

Despite the many compelling reasons for the US government to act on AI safety, it has never taken significant action on the issue, and the current administration has actually gone backwards in many respects. Despite claims to the contrary, the recent AI Action Plan is a profound step away from AI safety, and I would encourage anyone to read it. The first "pillar" of the plan is literally "Accelerate AI Innovation", and the first prong of that pillar is to "Remove Red Tape and Onerous Regulation", citing the Biden administration's executive action on AI (referred to as the "Biden administration's dangerous actions") as an example, despite the fact that the executive order did not actually do much and was mainly trying to lay the groundwork for future regulations on AI.

The AI Action Plan also proposes government investment to advance AI capabilities, suggesting to "Prioritize investment into theoretical computational and experimental research to preserve America's leadership in discovering new and transformative paradigms that advance the capabilities of AI". And while the plan does acknowledge the importance of "interpretability, control, and robustness breakthroughs", that topic receives only about two paragraphs in a 28-page report (25 pages if you exclude those with fewer than 50 words).

However, as disappointing as the current administration's stance on AI safety may be, the previous administration was not an ideal model either. According to this post, NSF spending on AI safety was only $20 million between 2023 and 2024, and this was ostensibly the main source of direct government support for AI safety. To put that number into perspective, the US Department of Defense spent an estimated $820.3 billion in FY 2023, meaning this collective AI safety spending represented only approximately 0.00244% of DoD spending in FY 2023.
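
That percentage is easy to verify from the two figures just cited (a quick sketch):

```python
nsf_ai_safety_2023_2024 = 20e6   # ~$20M in NSF AI safety spending, per the post above
dod_fy2023 = 820.3e9             # ~$820.3B estimated FY2023 DoD spending

print(f"{nsf_ai_safety_2023_2024 / dod_fy2023:.5%}")  # prints 0.00244%
```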

Many people seem to believe that governments will inevitably pivot to promoting an AI safety agenda at some point, but we shouldn't just stand around waiting for that to happen while lobbyists funded by big AI companies are actively trying to shape the government's AI agenda. 

The reasons to believe an AI political party could work:

In contrast to lobbying, which we have good reason to believe will fall short (see objection 8 below), there are several good reasons to think that a political party centered around AI safety could work. While it is tough to draw firm conclusions about the effectiveness of particular political strategies, there are many third parties from the US and around the world to learn from.

General Political Science Research:

Green Parties:

Green parties have obtained many electoral successes worldwide, a recent example being the Green Party in Germany, which entered the German federal government as part of a coalition in 2021. Even the US Green Party, which has not been the most electorally successful, has had a significant influence on climate policy in the US. In 2012, Jill Stein proposed a "Green New Deal", and I think it's not a stretch to argue that this had some influence on the later Green New Deal proposed by AOC in 2019.

Additional Reading on Green Parties' Impacts on Politics:

1.) There is a significant inverse relationship between Green Party presence in government and greenhouse gas emissions, while other left-wing parties have a less clear impact.

2.) In the United States specifically, the entry of Green Party candidates into competitive elections has been associated with a pro-environmental platform shift in the Democratic Party.

Ross Perot/The Reform Party: 

Ross Perot's campaigning influenced American politics in many ways, especially by increasing the prominence of his core issues in political discussion. While his was by no means the first organization to talk about these issues, Perot's campaigning played a key role in moving deficit reduction, protectionism, and anti-interventionism into the mainstream of American politics. While it is difficult to ascertain cause and effect, it should be noted that Donald Trump first ran for president as a Reform Party candidate, and many echoes of the Reform Party's ideology can be found in the thinking and policies of the current presidential administration.

Additional Reading on the Impact of Perot/The Reform Party:

In the book Three's a Crowd: The Dynamics of Third Parties, Ross Perot, and Republican Resurgence, authors Ronald B. Rapoport and Walter J. Stone conclude that Ross Perot's Reform Party has had a lasting effect on the two-party system, driven by strategic platform shifts in the Republican Party and the migration of former Perot voters and activists into it. The authors also wrote a shorter piece on the same subject.

RFK Jr.:

RFK Jr. has dramatically shifted US health policy. Under his leadership, DHHS has canceled hundreds of millions of dollars of funding for mRNA vaccines and stopped US funding of Gavi, the Vaccine Alliance, an international vaccination organization. While the impact of RFK Jr.'s decisions is arguably very negative, what is inarguable is that his campaign demonstrates the potential power of savvy third-party candidacies.

Despite only averaging around 5% in the polls and not having a substantial electoral track record, RFK Jr. was able to cut a deal with Donald Trump which yielded him control of DHHS in exchange for dropping out and endorsing Trump. One could argue that Trump would have done similar things in a world where Kennedy didn't run, but while Trump had a mixed record on health before his second term, there is little to suggest that, without Kennedy, DHHS would have made such dramatic changes from pre-existing policy.

While future major-party candidates might be less willing to offer third parties a deal similar to the one Kennedy got from Trump, this represents a separate avenue by which a hypothetical AI safety party could achieve some of its goals.

Political Support for AI Safety:

Along with prior examples of third parties successfully influencing politics, the available evidence suggests the American public could support the agenda of an AI safety party.

According to polling by the Pew Research Center (which I linked to at the beginning of this post), 58% of Americans polled were concerned that the government "will not go far enough in its regulation of AI".

According to a Reuters/Ipsos poll which concluded last Monday, 47% of those polled agreed with the statement that "AI is bad for humanity", as opposed to just 31% who agreed that "AI is good for humanity", and 58% agreed with the statement that "AI could risk the future of humankind". Additionally, 71% were "concerned that too many people will lose jobs" when asked about their "concerns about artificial intelligence".

According to a poll of 1,481 voters by the AI Policy Institute, 80% of those surveyed supported a policy approach to AI regulation which emphasized government oversight of the release of new AI models to ensure that they are safe, compared with 24% who supported an approach with essentially no AI regulation, 46% who supported a ban on frontier AI development, and 50% who supported regulation based primarily on self-reporting by AI companies.

While it could be argued that Americans do not feel very confident in their beliefs on AI and these polling results do not really mean much for that reason, it is remarkable that, no matter what polling you examine, the general American public seems somewhat wary of AI. A 2025 report by Brookings concluded that "Overall, the U.S. and U.K. publics tend to be more concerned than optimistic about AI’s impacts, though many hold mixed or even inconsistent views".

Thus, an AI safety political party would not have to start from square one in convincing the public of the risks of AI and of the need for government action on AI safety, because many Americans already feel anxious about AI to varying degrees.

Sub-Conclusion:

All in all, while the evidence is mixed and draws from differing political systems, there seems to be good reason to think that a party focused primarily on AI safety and related causes could be effective at influencing American government and society to take AI safety more seriously.

The Costs and Benefits of an AI Safety Political Party:

If it is clear that governments must act to ensure that AI is developed safely, and many well-resourced, intelligent people clearly think that future AI systems have a significant chance of killing every human currently alive, then I find it odd that there isn’t already a political party built around this issue.

While it may be unlikely that a candidate from an AI safety party wins a major national election anytime soon or cuts a deal with a major party candidate, that doesn’t mean such a party wouldn’t matter. Electoral success isn’t the only—or even primary—way political movements shape the future. A focused party could still have significant impact by any of the following:

  1. Shifting the Overton window on AI regulation and safety.
  2. Educating elite institutions and policymakers on the importance of AI safety.
  3. Pressuring mainstream parties to adopt AI safety platforms.
  4. Supporting and legitimizing pro-safety viewpoints.
  5. Making AI safety a higher-salience issue.
  6. Increasing interest, support, and opportunities for AI safety activism.

These effects don’t require sweeping victories, and aiming for them is far more promising than passively waiting for a political establishment—much of which still doesn’t grasp the existential risks posed by AI—to change on its own. And it’s likely more scalable than relying solely on individual advocates and underfunded safety orgs operating without any electoral leverage.

But just because an AI political party could, in theory, lead to a positive change in governmental attitudes towards AI safety, does not mean it is the best cause area. How much time and money might it take to start a successful AI safety party?

Estimating the costs of a third-party campaign in America:

Estimating the likelihood of electoral success is inherently difficult, as campaigns are shaped by a wide array of factors, including candidate quality, strategic execution, and message resonance. Nevertheless, by examining the costs and outcomes of previous political campaigns, several conclusions can be drawn. Many campaigns led by candidates with limited personal appeal or unremarkable platforms have nonetheless managed to achieve electoral visibility—and, in some cases, success—with relatively modest expenditures. 

This exercise not only helps us estimate costs; I argue it also reinforces the notion that a political party organized around a timely and substantive issue such as AI safety, supported by a clear and distinctive message, could plausibly exert substantial influence on government, since many candidates without a very compelling message, developed organizational structure, or personal charisma still managed to run quite efficient campaigns.

Using data provided by Open Secrets and the FEC, I compiled the spending of all presidential campaigns tracked by Open Secrets between 2004 and 2024. There are many omissions of minor candidates, but I think this can at least serve as a starting point for analysis. 

| Candidate | Year | Campaign Spending | Outside Group Spending[1] | Total Spent | Votes Received | % of Total Vote | Approx. $ per Vote | Approx. $ per % of Vote Share |
|---|---|---|---|---|---|---|---|---|
| Chase Oliver | 2024 | $463,348 | $0 | $463,348 | 650,126 | 0.42% | $0.71 | $1,106,391.02 |
| Jill Stein | 2024 | $2,233,635 | $0 | $2,233,635 | 862,049 | 0.56% | $2.59 | $4,022,343.33 |
| Cornel West | 2024 | $1,275,968 | $0 | $1,275,968 | 82,644 | 0.05% | $15.44 | $23,967,753.95 |
| Kamala Harris | 2024 | $1,154,978,762 | $839,559,258 | $1,994,538,020 | 75,017,613 | 48.32% | $26.59 | $41,274,133.25 |
| Donald Trump | 2024 | $448,966,052 | $1,021,577,998 | $1,470,544,050 | 77,302,580 | 49.80% | $19.02 | $29,531,325.00 |
| Joe Biden | 2020 | $1,042,748,801 | $572,094,939 | $1,614,843,740 | 81,268,924 | 51.31% | $19.87 | $31,471,371.13 |
| Jo Jorgensen | 2020 | $2,757,703 | N/A | $2,757,703 | 1,865,724 | 1.18% | $1.48 | $2,341,045.01 |
| Donald Trump | 2020 | $778,379,130 | $312,254,786 | $1,090,633,916 | 74,216,154 | 46.86% | $14.70 | $23,275,028.65 |
| Hillary Clinton | 2016 | $563,433,611 | $205,144,296 | $768,577,907 | 65,853,514 | 48.18% | $11.67 | $15,950,703.27 |
| Donald Trump | 2016 | $325,515,461 | $97,105,012 | $422,620,473 | 62,984,828 | 46.09% | $6.71 | $9,170,340.84 |
| Gary Johnson | 2016 | $11,956,242 | $1,314,095 | $13,270,337 | 4,489,341 | 3.28% | $2.96 | $4,039,896.61 |
| Jill Stein | 2016 | $3,587,105 | N/A | $3,587,105 | 1,457,218 | 1.07% | $2.46 | $3,364,267.00 |
| Evan McMullin | 2016 | $1,632,885 | N/A | $1,632,885 | 731,991 | 0.54% | $2.23 | $3,048,742.55 |
| Virgil Goode | 2012 | $93,794 | N/A | $93,794 | 122,389 | 0.09% | $0.77 | $989,258.59 |
| Gary Johnson | 2012 | $2,507,763 | $551,386 | $3,059,149 | 1,275,971 | 0.99% | $2.40 | $3,094,831.33 |
| Barack Obama | 2012 | $721,397,677 | $75,145,374 | $796,543,051 | 65,915,795 | 51.06% | $12.08 | $15,599,005.72 |
| Mitt Romney | 2012 | $449,507,659 | $145,132,478 | $594,640,137 | 60,933,504 | 47.20% | $9.76 | $12,597,234.83 |
| Jill Stein | 2012 | $893,636 | N/A | $893,636 | 469,627 | 0.36% | $1.90 | $2,456,318.94 |
| Randall Terry | 2012 | $270,405 | N/A | $270,405 | 13,107 | 0.01% | $20.63 | $26,631,067.59 |
| Chuck Baldwin | 2008 | $208,229 | N/A | $208,229 | 199,750 | 0.15% | $1.04 | $1,368,878.37 |
| Bob Barr | 2008 | $1,393,262 | N/A | $1,393,262 | 523,715 | 0.40% | $2.66 | $3,493,399.19 |
| John McCain | 2008 | $333,375,676 | N/A | $333,375,676 | 59,948,323 | 45.65% | $5.56 | $7,302,428.38 |
| Cynthia McKinney | 2008 | $145,020 | N/A | $145,020 | 161,797 | 0.12% | $0.90 | $1,176,976.72 |
| Ralph Nader | 2008 | $3,996,305 | N/A | $3,996,305 | 739,034 | 0.56% | $5.41 | $7,100,756.87 |
| Barack Obama | 2008 | $729,519,581 | N/A | $729,519,581 | 69,498,516 | 52.93% | $10.50 | $13,783,891.87 |
| George W. Bush | 2004 | $345,259,170 | N/A | $345,259,170 | 62,040,610 | 50.73% | $5.57 | $6,805,798.54 |
| John Kerry | 2004 | $309,708,100 | N/A | $309,708,100 | 59,028,444 | 48.27% | $5.25 | $6,416,543.68 |
| Ralph Nader | 2004 | $4,549,143 | N/A | $4,549,143 | 465,650 | 0.38% | $9.77 | $11,947,578.92 |
| Michael Badnarik | 2004 | $1,073,945 | N/A | $1,073,945 | 397,265 | 0.32% | $2.70 | $3,306,067.09 |
| Michael Peroutka | 2004 | $708,227 | N/A | $708,227 | 143,630 | 0.12% | $4.93 | $6,030,276.77 |
| David Cobb | 2004 | $385,712 | N/A | $385,712 | 119,859 | 0.10% | $3.22 | $3,935,522.75 |

 

To begin the analysis, let's start with raw averages of how many dollars these campaigns spent per percentage point of vote share.[2] The average is $10,535,457.35 unadjusted for inflation, and $18,017,249.48 using a conservative estimate of inflation.[3] However, most winning campaigns spend far more than this per percent of vote share. Why?

One potential reason is simple: diminishing returns. As previously theorized, money in politics has its limits. After a certain amount of time and money spent campaigning, people will already know about your political party and will have formed somewhat firm opinions about whether to support you. Many smaller-party candidates, however, are not known by a majority of Americans, and so see greater returns from campaigning. The R^2 value between the total money spent by campaigns and the dollars they spend per percentage point of vote share is 0.6875.
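
For readers who want to check the arithmetic, here is a minimal sketch that reproduces the raw average and the R^2 figure from the table above. The data tuples are the "Total Spent" and "$ per % of Vote Share" columns, transcribed by hand, so small rounding differences are possible:

```python
from statistics import mean

# (total spent by campaign + outside groups, dollars spent per percent of vote share)
campaigns = [
    (463_348, 1_106_391.02), (2_233_635, 4_022_343.33), (1_275_968, 23_967_753.95),
    (1_994_538_020, 41_274_133.25), (1_470_544_050, 29_531_325.00), (1_614_843_740, 31_471_371.13),
    (2_757_703, 2_341_045.01), (1_090_633_916, 23_275_028.65), (768_577_907, 15_950_703.27),
    (422_620_473, 9_170_340.84), (13_270_337, 4_039_896.61), (3_587_105, 3_364_267.00),
    (1_632_885, 3_048_742.55), (93_794, 989_258.59), (3_059_149, 3_094_831.33),
    (796_543_051, 15_599_005.72), (594_640_137, 12_597_234.83), (893_636, 2_456_318.94),
    (270_405, 26_631_067.59), (208_229, 1_368_878.37), (1_393_262, 3_493_399.19),
    (333_375_676, 7_302_428.38), (145_020, 1_176_976.72), (3_996_305, 7_100_756.87),
    (729_519_581, 13_783_891.87), (345_259_170, 6_805_798.54), (309_708_100, 6_416_543.68),
    (4_549_143, 11_947_578.92), (1_073_945, 3_306_067.09), (708_227, 6_030_276.77),
    (385_712, 3_935_522.75),
]

total_spent = [t for t, _ in campaigns]
dollars_per_pct = [d for _, d in campaigns]

print(f"${mean(dollars_per_pct):,.2f}")  # -> $10,535,457.35, the raw average above

# R^2 of a simple linear fit of spending efficiency on total spending
mx, my = mean(total_spent), mean(dollars_per_pct)
ss_xy = sum((x - mx) * (y - my) for x, y in zip(total_spent, dollars_per_pct))
ss_xx = sum((x - mx) ** 2 for x in total_spent)
ss_yy = sum((y - my) ** 2 for y in dollars_per_pct)
print(f"{ss_xy**2 / (ss_xx * ss_yy):.4f}")  # the post reports 0.6875
```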

Another possible reason is strategic. A few thousand voters in swing states for a Democrat or Republican presidential candidate are far more influential than an additional million voters in a state like California. For this reason, large political parties concentrate their resources into the swing states, naturally reducing efficiency, as more resources are spent on a smaller number of voters. 

Whatever the reason, the data (and my verdict) is clear: third parties (which, it is important to note, are almost by definition smaller than their major-party counterparts) are more efficient at converting dollars into votes than larger ones. But how much more efficient? To start, the average amount spent per percentage point of vote share won across all third parties is $5,969,545.93 unadjusted for inflation, and $10,412,796.29 adjusted for inflation using a conservative estimate.

Even if we exclude the long-established Green and Libertarian parties—which benefit from name recognition and longstanding party infrastructure—the efficiency gap remains. The dollars-spent-to-percent-vote-share ratio for the remaining third-party candidates is still only $9,885,294.69 unadjusted for inflation ($17,243,113.82 adjusted), and this average is skewed by several outliers.

For instance, some of the candidates did not even get on the ballot in a majority of states. Cornel West, who had a dollars-spent-to-percent-vote-share ratio of $23,967,753.95 (unadjusted for inflation) in 2024, did not appear directly on the ballot in the majority of states, including very populous ones such as California and New Jersey. While there is little available information on Randall Terry's campaign, it appears he also ran in the Democratic primary, inflating his expenses while not directly contributing to his general-election vote total (Terry netted around 20,000 votes in the Oklahoma primary, more than his entire general-election total), and he likewise did not appear directly on the ballot in the majority of states.

While there are many factors affecting campaign finance, an AI safety third party, assuming it is somewhat competently run and gets ballot access in a majority of states, should expect to spend somewhere between $1,158,877.12 per percentage point of vote share (the lowest inflation-adjusted figure in 2025 dollars achieved by any candidate, namely Chase Oliver's 2024 campaign) and $18,017,249.48 (the inflation-adjusted average across all campaigns), with something on the lower end being far more likely, as third-party candidates typically spend far less per percent of vote share than larger parties.

Sub-conclusion: 

So what does this imply for a hypothetical AI safety political party? Even in the worst-case scenario—the party is far less efficient than the average third-party campaign and about as efficient as the average campaign—it would cost about $90 million to secure 5% of the vote in 2028. That is a substantial amount, but far from impossible, and the potential payoff is enormous. Even a 10% chance of brokering policy concessions with a major-party nominee could be worth billions in redirected funding for AI safety.

And again, $90 million represents something close to the worst case. A campaign operating at the same efficiency as Jill Stein’s 2024 campaign ($4,213,159.35 per percent of vote share) would need only about $21 million to reach 5%. And if an AI safety party matched the efficiency of Chase Oliver's campaign, $12 million could buy not 5% but 10% of the national vote—a remarkably high return on investment.
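
Here is a minimal sketch of these cost scenarios, using the inflation-adjusted efficiency figures quoted in this section:

```python
# Cost to reach a target share of the national vote under three efficiency
# scenarios (inflation-adjusted dollars spent per percentage point of vote share).
scenarios = {
    "all-campaign average (near worst case)": 18_017_249.48,
    "Jill Stein 2024 efficiency": 4_213_159.35,
    "Chase Oliver 2024 efficiency": 1_158_877.12,
}

for name, cost_per_point in scenarios.items():
    for target_share in (5, 10):
        cost = cost_per_point * target_share
        print(f"{name}: {target_share}% of the vote ~ ${cost:,.0f}")
```

At the worst-case rate, 5% of the vote costs about $90 million; at Chase Oliver's rate, 10% costs under $12 million.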

Finally, an AI safety party wouldn’t need to run at full scale immediately. Early milestones, like gathering enough signatures for ballot access, could serve as low-cost tests of viability. If the project falters at that stage, it can be re-evaluated before major spending begins.

Responses to Possible Objections and Concerns:

1. Voters won't care.

Yes, AI risk is abstract and future-oriented, but that’s also true of climate change, and people built green parties around that issue. Furthermore, AI is already beginning to impact jobs, culture, and geopolitics in tangible ways. Job losses from AI-enabled automation, deepfakes, and AI in warfare aren’t theoretical issues anymore. While existential AI risk probably cannot be the sole focus of a successful AI safety party, with the right framing, people can care.

2. An AI safety party can’t win.

As already mentioned, winning isn't required for impact. A party could elevate AI risk in national debates, attract additional media attention, and legitimize pro-safety viewpoints in existing parties.

3. It’s too late (for an AI party to make a difference).

This may be true. But even if it is more likely true than not, that's not a reason to do nothing. AI timelines are still uncertain and will potentially be bottlenecked by hardware, funding, energy, and/or algorithmic progress. Even small shifts in how long it takes for AGI or ASI to be developed, or in the amount and quality of safety measures in place when it arrives, could matter a lot.

Additionally, even if we can't convince governments to actively regulate AI development or increase funding for AI safety research, we can at least try to ensure governments aren’t actively making it worse (e.g., by subsidizing unsafe training runs or defunding AI safety).

4. My country is not important for AI:

I did mostly write this with the US in mind. After all, the most likely scenario in my view is that a company centered in the US develops the first AGI. However, the majority of people on this forum live in the Anglosphere, and almost all of these countries can influence the trajectory of AI. Ideas from Canada, Australia, the UK, or even New Zealand can easily spill over into U.S. discourse. More broadly, there’s also no reason only the U.S. can fund AI safety research. In an interconnected research ecosystem, investments in AI safety anywhere can help globally, whether by advancing AI safety research, creating public awareness, or slowing down capabilities races via international coordination. Even if your country isn’t training frontier models, the actions of its government can still matter a lot. While the cost-benefit calculus is definitely trickier, it is at least worth pondering.

5. What about other issues?:

Ideally, an AI safety party should not neglect other issues. While maintaining AI as its primary focus (and making that clear to the general public), it should also endorse a common-sense platform generally based around evidence-based solutions, classical liberal ideals such as respect for personal liberty and democracy, and political moderation. This will help the party appeal to as many people as possible and avoid painting AI safety as an explicitly "left-wing" or "right-wing" issue, broadening its appeal.

It should also emphasize that AI has the potential to lead to mass unemployment, and that there should be a method of ensuring unemployed workers are able to survive (maybe an automation dividend funded by taxes on AI and robotics companies, maybe a UBI, maybe a federal job guarantee; whichever is most popular should do). This ties the issue of AI safety to broader, more relatable anxieties about how AI will transform the economy.

6. What about spoilers?

In first-past-the-post systems, where there is only one round of voting and the candidate with the most votes wins, there is a very prominent spoiler effect. In the US, which has a first-past-the-post electoral system, I think the traditional pro-business, anti-regulation slant of the GOP will be an obstacle to Republicans supporting an ambitious AI safety agenda. Thus, I understand if some would be nervous that a potential AI third party would strengthen the GOP by siphoning voters away from the Democratic Party.

However, it remains to be seen what the ultimate approach of the Democratic party will be towards AI regulation, and it makes sense to try to pressure Democrats into adopting more comprehensive AI-safety political positions until the issue is more clearly politicized. Many Democrats in Congress do not understand the massive threat that artificial intelligence poses to humanity. 

I will concede that in a hypothetical 2028 election between a Democrat who is somewhat friendly to AI safety policies and a Republican who is hostile to them, it might make sense for a hypothetical AI party to endorse the Democrat in swing states, or even to drop out of the race entirely and endorse the Democrat. However, this does not mean a third party would be pointless, as it could still extract concessions from the larger party.

Thus, if an AI safety political party is constructed in a thoughtful manner, its leadership would be able to leverage spoiler effects to its advantage, turning a potential weakness into a strength without inadvertently promoting anti-safety politicians.

7. What about China?

This topic deserves a post of its own, but to put it succinctly: there are no winners in an AI race. If a misaligned, uncontrollable superintelligence is created, it won’t matter whether it originates in the U.S. or China—the likely outcome is human extinction or some other catastrophic loss.

While you may not share the values of the Chinese government, those values are at least recognizably human—shaped by the same kinds of goals, drives, and emotional states we all share. The same cannot be said for future AI systems, which may form entirely alien objectives and surpass human intelligence to the point where our survival becomes irrelevant to them.

In this light, a “safe” AI superintelligence aligned with Chinese interests would almost certainly be better for Americans—and for the world—than a misaligned AI superintelligence developed in the U.S. Moreover, the gap between a safe AI aligned with China and a safe AI aligned with the U.S. is tiny compared to the chasm between either of those and an unaligned AI.

This reasoning naturally leads to an argument for unilateral disarmament:

  1. U.S. firms are currently positioned to develop transformative AI in roughly X years.
  2. China is projected to reach the same point in X + Y years.
  3. If the U.S. unilaterally stops advancing in AI development, the world gains an additional Y years to work on AI safety.
  4. That reduction in existential risk outweighs the downsides of China getting there first.

But there’s a better alternative: cooperation. Instead of racing or unilaterally slowing down, the U.S. and China could jointly commit to rules governing how AI should be developed, used, and safeguarded.

This cooperative approach is plausible for several reasons:

Firstly, the U.S. currently leads in both AI hardware and software, so China has much to gain and little to lose from agreeing to a binding safety framework. Secondly, the Chinese government has consistently signaled that it cares about AI safety[4]—arguably more so than the U.S. government ever has. Finally, international precedents exist: despite immense challenges and distrust between superpowers, the world has successfully negotiated enforceable treaties on biological weapons and nuclear arms.

Cooperation is therefore the best path forward: it reduces existential risk, preserves geopolitical stability, and avoids the trap of an all-out race. With credible U.S. commitment, there is little reason to think China would reject such an agreement, and history shows that global safety agreements are possible.

8. What about lobbying?:

One major way in which people who want to increase governmental support for AI safety policies have sought to influence government has been through lobbying organizations and other forms of activism. However, there is reason to doubt they will be able to cause lasting change. First of all, there is significant evidence that lobbying has a status quo bias: lobbying is most effective when it aims at preventing change, and when there are two groups of lobbyists on an issue, the lobbyists working to prevent change win out, all else being equal. In fact, according to a study by Dr. Amy McKay, "it takes 3.5 lobbyists working for a new proposal to counteract just one lobbyist working against it".

Even if this effect did not exist, however, it is very unlikely that AI safety groups will be able to compete with anti-AI-safety lobbyists. Naturally, the rise of large, transnational organizations built to profit from AI has also led to a powerful pro-AI lobbying operation. This indicates we can't simply rely on the current strategy of funding AI safety advocacy organizations, as they will be eclipsed by better-funded pro-AI-business voices.

9. What about activism?:

I feel like a political party would help raise the profile of activism and vice versa, and I feel like political parties are more effective than activism, but I don't really know enough about activism to say much here, so I am happy to simply facilitate discussion on this issue. Do you feel like an effective activist campaign would be better able to sway government decision-makers and the major parties than a political campaign? If so, I would like to hear why in the comments.

10. What about working directly through the major parties?:

This could potentially be viable, but I have a couple of concerns:

1.) There is a greater risk of "politicizing" AI safety. Right now, I would say neither major party really has an established position, but once one of them starts arguing for substantial AI safety measures, tribalism might kick in and stifle bipartisan support for AI safety.

2.) I also feel like a third-party campaign would inherently have more flexibility than a movement trying to change a major party directly from within, as a third party would likely face less direct hostility from party elites and less pressure to directly compete against major-party candidates.

11. What about just running a centrist third party?:

Honestly, this could work too, especially if one puts more emphasis on the RFK Jr. route. As long as the leadership of the party cares about AI safety, the means of gathering support to bargain with might not matter that much. After all, many of the people who supported RFK Jr. likely did not do so just because they were skeptical of vaccines, yet changes to US vaccine policy ended up being the most tangible impact of his campaign.

Conclusion:

If you care about the risks AI poses to society, you should consider that electoral politics, while slow and flawed, may still be one of the highest-leverage tools available to bolster AI safety in the long run.

Lobbying, advocacy, and research are all important, but they operate in a political environment where governments still hold the decisive levers of power: regulation, funding, enforcement, and international negotiation.

An AI safety political party doesn’t need to win the presidency or even significant congressional representation to matter. It could plausibly reshape political agendas, shift the Overton window, and pressure larger parties to adopt its priorities. By doing so, it can achieve outsized influence compared to its vote share or funding base.

Even modest success could bring enormous benefits: greater public awareness of AI risks, expanded government research budgets, and the creation of institutions capable of managing transformative technologies safely. And while the costs of launching such a party are real, they pale in comparison to the potential benefit of ensuring significant US government commitment to AI safety.

Acknowledgements:

Written with assistance from ChatGPT and Claude. Also, a shout-out to the Transformer Substack, which has been a good general source of AI news.

 

  1. ^

    Open Secrets indicates that many candidates did not have outside groups raising and spending money on their behalf. I suspect that some of these candidates did have marginal support from outside groups, but as it was not significant enough to be reported by Open Secrets, for the sake of my calculations I treat a campaign as having received no outside spending whenever Open Secrets reports $0 for affiliated outside groups or (before 2012) omits the figure entirely.

  2. ^

    While dollars per vote paints a similar picture, I believe average dollars spent per percent of vote share is the better metric, as it is not skewed by voter turnout. A candidate who got 10,000,000 votes in an election where 100,000,000 people voted did better, from an efficiency perspective, than a candidate who got 19,000,000 votes in an election where 200,000,000 people voted, assuming their campaign spending was equal.

  3. ^

    I did not want to manually adjust every value for inflation individually, so I just took the average and set the starting date on the "US Inflation Calculator" to January 2004, which yields an overestimate, as most candidates analyzed ran in later election years. It probably wouldn't be too hard to write code to do this properly, but I would need the inflation rates for each year, and the final result would still be a bit inaccurate. Similarly, for the inflation-adjusted estimates of Chase Oliver's and Jill Stein's dollars spent per percent of vote share, I just used January 2024 as the baseline.
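
    A sketch of what that per-year adjustment could look like. The CPI-U values below are approximate annual averages assumed for illustration; official BLS figures should be substituted before relying on the output:

```python
# Adjust each campaign's nominal figure to ~2025 dollars using its own
# election year, instead of a single January-2004 baseline.
CPI_U = {2004: 188.9, 2008: 215.3, 2012: 229.6,   # approximate annual averages
         2016: 240.0, 2020: 258.8, 2024: 313.7}
CPI_2025 = 320.0  # assumed placeholder for the 2025 average

def to_2025_dollars(amount: float, election_year: int) -> float:
    """Scale a nominal dollar amount from its election year to 2025."""
    return amount * CPI_2025 / CPI_U[election_year]

# e.g., Jill Stein's 2024 dollars-per-percent figure:
print(f"${to_2025_dollars(4_022_343.33, 2024):,.2f}")
```

    The output will differ somewhat from the single-baseline estimates used in this post, which is exactly the inaccuracy this footnote acknowledges.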

  4. ^

    Additional Sources about China's positive view of AI safety:

    https://www.cnbc.com/2023/11/01/china-backs-global-ai-consensus-even-as-it-clashes-with-us-over-tech.html

    https://www.mfa.gov.cn/eng/xw/zyxw/202405/t20240530_11332823.html