It's the new year, and the 2024 primaries are approaching, starting with the Iowa Republican caucus on January 15. For a lot of people here on LessWrong, the issue of AI risk will likely be an important factor in deciding whom to support. AI hasn't been mentioned much in any of the candidates' campaigns, but I'm attempting to analyze what information there is and determine which candidate is most likely to bring about a good outcome.

A few background facts about my own position - if these statements do not apply to you, you won't necessarily want to take my recommendation:

  • I believe that, barring some sort of action to prevent this, the default result of creating artificial superintelligence is human extinction.
  • I believe that our planet is very far behind in alignment research compared to capabilities, and that this means we will likely need extensive international legislation to slow/pause/stop the advance of AI systems in order to survive.
  • I believe that preventing ASI from killing humanity is so much more important than any[1] other issue in American politics that I intend to vote solely on the basis of AI risk, even if this requires voting for candidates I would otherwise not have wanted to vote for.[2]
  • I believe that no mainstream politicians are currently suggesting any plans that would be sufficient for survival, nor do they even realize the problem exists. Most mainstream discourse on AI safety is focused on comparatively harmless risks, like misinformation and bias. The question I am asking is "which of these candidates seems most likely to end up promoting a somewhat helpful AI policy" rather than "which of these candidates has already noticed the problem and proposed the ideal solution," since the answer to the second question is none of them.

(Justification for these beliefs is not the subject of this particular post.)

And a few other background facts about the election, just in case you haven't been following American politics:

  • As the incumbent president, Joe Biden is essentially guaranteed to be the Democratic nominee, unless he dies or is otherwise incapacitated.
  • Donald Trump is leading in the polls for Republican nominee by very wide margins, followed by Nikki Haley, Ron DeSantis, Vivek Ramaswamy, and Chris Christie. Manifold[3] currently gives him an 88% chance of winning the nomination.
  • However, Trump is facing criminal charges over the January 6, 2021 attack on the Capitol, and the Colorado Supreme Court and Maine's Secretary of State have attempted to disqualify him from the ballot.
  • As usual, candidates from outside the Democratic and Republican parties are not getting much support, although Robert F. Kennedy Jr. is polling unusually well for an independent candidate.

 

Joe Biden

Biden's most notable action regarding AI was Executive Order 14110[4]. The executive order was intended to limit various risks from AI... none of which were at all related to human extinction, except maybe bioweapons. The order covers risks from misinformation, cybersecurity, algorithmic discrimination, and job loss, while also aiming to reap the potential benefits of AI.

But the measures contained in the order, while limited in scope, seem to be a step in the right direction. Most importantly, anyone training a model using 10^26 or more floating-point operations must report their actions and safety precautions to the government. That's a necessary piece of any future regulation of such large models.
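For a sense of scale, here's a minimal sketch of what that threshold means, assuming the common ~6 × parameters × training tokens rule of thumb for total training compute (the heuristic and the example model size are my own illustrative assumptions, not anything specified in the order):

```python
# Rough illustration of the EO 14110 reporting threshold. The 10^26 figure
# comes from the order; the 6 * parameters * tokens estimate of training
# compute is a common rule of thumb, and the model below is hypothetical.

EO_THRESHOLD_FLOPS = 1e26

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the ~6*N*D heuristic."""
    return 6.0 * n_params * n_tokens

# Hypothetical run: a 1-trillion-parameter model trained on 20 trillion tokens.
flops = estimated_training_flops(n_params=1e12, n_tokens=20e12)
print(f"Estimated compute: {flops:.1e} FLOPs")
print(f"Reporting required: {flops >= EO_THRESHOLD_FLOPS}")
# -> Estimated compute: 1.2e+26 FLOPs; reporting required: True
```

By this rough estimate, the reporting requirement kicks in somewhere beyond the largest publicly known training runs to date.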

Biden has spoken with the UN about international cooperation on AI, and frequently describes AI and other new technologies as sources of "enormous potential and enormous peril." "We need to be sure they're used as tools of opportunity, not as weapons of oppression," he said. "Together with leaders around the world, the United States is working to strengthen rules and policies so AI technologies are safe before they're released to the public, to make sure we govern this technology, not the other way around, having it govern us."[5]

Biden seems to be taking seriously the possibility of existential risk from ASI. That said, his recent fears of superintelligence seem to have been inspired by the latest Mission: Impossible movie[6], so I'm not confident that he's reasoning clearly here. But regardless of where he got the idea, he's paying more attention to the actually important issue than anyone else. Biden appears to be significantly better than nothing.

 

Donald Trump

Trump signed a couple of executive orders on AI during his time as president: Executive Order 13859[7] and Executive Order 13960[8]. Both were focused on being "pro-innovation" and increasing the scope of AI development, with the latter concerning expanded use of AI within the federal government. Trump has not mentioned existential risk at any point.

In recent years, Trump has said little about AI, except regarding its use in campaign ads. At present, it's difficult to determine what views he now holds on the subject, but if his policies as president are any indication, he likely won't be any help when it comes to slowing down AI development. Still, it's not implausible that he'll change his mind in the coming years. Unclear leaning negative.

 

Nikki Haley

Haley's mentions of AI are even rarer than Trump's. What she has said is mostly about China, and how the US and its military need to use AI to gain an advantage over China.[9] In general, much of her campaign has focused on fighting China and its allies, so it doesn't seem likely that she'll support an international agreement with China to ban AI. Unclear leaning negative.

 

Ron DeSantis

DeSantis has described much of AI regulation as primarily a tool to enforce wokeness, saying that it only limits some companies while protecting those with woke agendas.[10] He also believes said regulations would only help China gain an advantage in AI development. However, he does support some forms of AI regulation.[11]

"China is trying to do it for its military," he said. "We're going to have to compete on the military side with AI, but I don't want a society in which computers overtake humanity. And I don’t know what the appropriate guardrail is federally because a lot is going to change in a year and a half with this because it’s going so rapidly."

"But at the same time, we want any technology to benefit our citizens and benefit humanity. We don’t want it to displace our citizens or displace humanity. So as this happens, and there’s rapid development every month, every two months, we’re going to be looking to see, OK, you know, what is it that we need to do. And if there are guard rails that need to be put in place, you know, I would be willing to do that. I think it’s important for society."[12]

So despite the possibility that DeSantis would refuse to sign onto AI regulation out of fear of wokeness, he's able to draw a distinction between different types of regulation, and he is explicitly concerned about AI overtaking humanity. Also significantly better than nothing.

 

Vivek Ramaswamy

Ramaswamy has said that the most serious risk AI poses is that once humans begin to treat it as an authority, they will be swayed by the beliefs that it suggests.[13] He doesn't support explicit regulation, but argues that companies must be held liable for the results of any AI system they create.[14] "Just like you can't dump your chemicals, if you're a chemical company, in somebody else's river, well if you're developing an AI algorithm today that has a negative impact on other people, you bear the liability for it."

He's not focusing on the right problems, but it's possible he'll be willing to take action against AI companies. Unclear leaning neutral.

 

Chris Christie

Christie has only spoken of AI as an "opportunity to expand productivity," and says that "we can't be afraid of innovation." He states that "what [he] will do is to make sure that every innovator in this country gets the government the hell off its back and out of its pocket so that it can innovate and bring great new inventions to our country that will make everybody's lives better." In the case of AI, this would, of course, actively make things worse. Very bad.

 

Robert F. Kennedy Jr.

Kennedy has spoken on the Lex Fridman podcast about AI risk, and was familiar with the possibility of human extinction. "It could kill us all," he said. "I mean, Elon said, first it's gonna steal our jobs, then it's gonna kill us, right? And it's, it's probably not hyperbole, it's actually, you know, if it follows the laws of biological evolution, which are just the laws of mathematics, that's probably a good endpoint for it... it's gonna happen, but we need to make sure it's regulated, it's regulated properly for safety, in every country. And, and that includes Russia, and China, and Iran. Right now, we, we should be putting all the weapons of war aside, and sitting down with those guys and saying... how are we gonna do this?"[15]

Well, that contradicts my expectations above quite a bit. Kennedy is completely aware of the actual problem and of what is necessary to solve it. By far the best candidate.

 

Conclusion

Kennedy is ideal, Biden and DeSantis might be okay, Christie is definitely bad, and as for the others, it's not clear.

As for a final recommendation... well, we don't really know yet who to vote for in the general election, since we don't yet know who the Republican nominee will be, and we won't have accurate data on Kennedy's electability until closer to November. But the one clear recommendation I do have at the moment is to vote for DeSantis in the Republican primary.

If anyone else has relevant information on any of these candidates' views on AI - particularly Trump, Haley, and Ramaswamy - please link it in the comments.

  1. ^

    Nuclear war would be an exception here; as a catastrophe, it's within an order of magnitude of unaligned ASI. But I believe that ASI is significantly more likely than nuclear war, and so a more important priority.

  2. ^

    Oh... crap. oh crap. I didn't think it would be this much of a "someone I didn't want to vote for" situation. (For the record, apart from AI, my ordering would have been Christie > Haley > Ramaswamy > Biden > DeSantis > Trump > Kennedy, which is... not literally exactly the opposite of what I concluded here, but pretty damn close.)

  3. ^
  4. ^
  5. ^
  6. ^
  7. ^
  8. ^
  9. ^
  10. ^
  11. ^
  12. ^
  13. ^
  14. ^
  15. ^
Comments

Trump said he would cancel the executive order on Safe, Secure, and Trustworthy AI on day 1 if reelected. That seems negative, considering it creates more uncertainty about how consistent any AI regulation will be, and he has proposed no alternative.

I also expect that, if implemented, the plans in things like Project 2025 would impair the government's ability to hire qualified civil servants, and would probably degrade the US government's ability to handle complicated new things of any sort across the board.

He has also broadly indicated that he would be hostile to the nonpartisan federal bureaucracy, e.g. by designating many more civil servants as presidential appointees, allowing him personally to fire and replace them. I think creating new offices that are effectively set up to regulate AI looks much more challenging in a Trump (and to some extent DeSantis) presidency than under the other candidates.

[-]trevor

Important caveat that should have been mentioned: 

I know that civics class taught us about how the president and his advisors use the threat of vetoing legislation to exert influence over it, but the vast majority of both policymaking capability and policymaking knowledge is held by domain-specific people outside the White House, e.g. throughout the executive branch: lobbyists, natsec folk, leaders of regulatory bureaucracies, etc.

Although the president and his advisors might plausibly be one of the best policy levers, their time is limited, and their attention is heavily contested by the many elites in the US who spend tons of money trying to persuade the president and his advisors to care about their issue (both "persuasion" as a euphemism for bribery/lobbying, but notably literal persuasion as well).

This has been done for hundreds of years, and by now the established elites are so much better positioned than anyone adjacent to AI safety (including Altman himself) that this dynamic basically explains why presidents and their advisors aren't interested in AI risk: they assume that if a policy or issue has made it all the way to their desk, odds are high that the immense optimization pressure that got it there was funded by people anticipating a probability of a return on their investment.

[-]Zane

The president might not hold enough power to singlehandedly change everything, but they still probably have more power than pretty much any other individual. And lobbying them hasn't been all that ineffective in the past; the AI safety crowd seems to have been involved in the original executive order. I'd expect there to be more progress if we can get a president who's sympathetic to the cause.

[-]trevor

The president might not hold enough power to singlehandedly change everything, but they still probably have more power than pretty much any other individual.

None of us have solid models on how much power the president has. 

The president and his advisors probably don't actually control the nuclear arsenal; that's probably a lie. The military probably doesn't hand over control of the nuclear arsenal to a rando and his election campaign team every 4-8 years.

Some parts of the constitution are de facto more respected than others; if the president and his advisors had substantial real power over the military, then both the US natsec community and foreign intelligence agencies would be very, very heavily involved in the presidential primary process (what we've seen from foreign intelligence agencies so far seems more like targeting public opinion than altering the results of the election).

The president's and his advisors' influence over the federal legislative process is less opaque, but the details of how that process works are worth massive amounts of money, because they allow people to navigate the space (and the information becomes worthless if everyone knows it).

Plus, most presidents are probably far more nihilistic and self-interested in person than in front of the cameras, and probably became hardcore conflict theorists due to being so deeply immersed in an environment where words are used as weapons (in the legislative process too, not just public opinion).

So getting a powerful president to support good AI policy would be nice, but it's probably not worth the effort; there are other people in the executive branch with a better ratio of cost of access to unambiguous policy influence.

And lobbying them hasn't been all that ineffective in the past; the AI safety crowd seems to have been involved in the original executive order.

We don't know this either; it's too early to tell. These institutions are extremely, extremely sophisticated at finding clever ways to make elites feel involved, when in reality the niche has already been filled by the elites who arrived first.

For example, your text makes its way into the final bill which gets passed, but the bureaucracy ignores it because it didn't have the keywords that signal that your text is actually supposed to be enforced. 

Policy influence is measured in real-world results, not in effortlessly-generated stuff that looks like policy influence. And the AI safety crowd has barely even gotten its text into the bill (which primarily prioritized accelerating American AI as effectively as possible).

[-]Zane

I'm not denying that the military and government are secretive. But there's a difference between keeping things from the American people, and keeping them from the president. When it comes to whether the president controls the military and nuclear arsenal, that's the sort of thing that the military can't lie about without substantial risk to the country.

Let's say the military tries to keep the keys to the nukes out of the president's hands - by, say, giving them fake launch codes. Then they're not just taking away the power of the president, they're also obfuscating under which conditions the US will fire nukes. The primary purpose of nuclear weapons is to pose a clear threat to other countries, to be able to say "if these specific conditions happen (i.e. you shoot nukes at us), our government will attack you." And the only thing that keeps someone from getting confused about those conditions and firing off a nuke at the wrong time is that other countries have a clear picture of what those conditions are, and know what to avoid.

Everyone has to be on the same page for the system to function. If the US president believes different things about when the nukes will be fired than the actual truth known to the military leaders, then you're muddying the picture of how the nuclear deterrent works. What happens if the president threatens to nuke Russia, and the military secretly isn't going to follow through? What happens if the president actually does give the order, and someone countermands it? Most importantly, what happens if different countries come to different conclusions about what the rules are - say, North Korea thinks the president really does have the power to launch nukes, but Russia goes through the same reasoning steps as you did, and realizes they don't? If different people have different pictures of what's going on, then you risk nuclear war.

And if your theory is that everyone in the upper levels of every nation's government does know these things, even the US president, and they just don't tell the public - well, that's not a stable situation either. It doesn't take long for someone to spill the truth. Suppose Trump gets told he's not allowed to launch the nukes, and gets upset and decides to tell everyone on Truth Social. Suppose Kim learns the US president's not allowed to launch the nukes, and decides to tell the world about that in order to discredit the US government. It's not possible to keep a secret like that; it requires the cooperation of too many people who can't be trusted.

A similar argument applies to a lot of the other things that one could theorize the president is secretly not allowed to do. The president's greatest powers don't come from having a button they can press to make something happen, they come from the public believing that they can make things happen. Let's say the president signs a treaty to halt advanced AI development, and some other government entity wants to say, "Actually, no, we're ignoring that and letting everyone keep developing whatever AI systems they want." Well, how are they supposed to go about doing that? They can't publicly say that they're overriding the president's order, and if they try to secretly tell major American AI labs to keep going with their research, then it doesn't take long for a whistleblower to come forward. The moment the president signs something, then the American people believe it's the law, and in most cases, that actually makes it become the true law.

I'd definitely want to hear suggestions as to who else in the government you think would have a lot of influence regarding this sort of thing. But the president has more influence than anyone else in the court of public opinion, and there's very little that anyone else in the government can do to stop that.

[-]Caleb W

Michael Kratsios strikes me as the person most worth keeping an eye on in terms of Trump's potential AI policy.

Looks like a good summary of their current positions, but what about their willingness to update those positions and to act decisively based on actual evidence/data? DeSantis's history of anti-mask/anti-vaccine stances has to be taken into account, perhaps? Same for Kennedy?

If someone is currently on board with AGI worry, flexibility is arguably less important (Kennedy), but for people who don't seem to have strong stances so far (Haley, DeSantis), I think it's reasonable to argue that general sanity is more important than the noises they've made on the topic so far. (Afaik Biden hadn't said much about the topic before the executive order.) Then again, you could also argue that DeSantis' comment does qualify as a reasonably strong stance.

Just being "on board with AGI worry" is so far from sufficient for taking useful actions to reduce the risk that I think epistemics and judgment are more important, especially since we're likely to get lots of evidence (one way or another) about the timelines and risks posed by AI during the next president's term.

[-]Ericf

There is an actual 0% chance that anyone other than the Democratic or Republican nominee (or their replacement in the event of death etc.) becomes president. Voting for/supporting any other candidate has, historically, done nothing to support that candidate's platform in the short or long term. If you find both options without merit, you should vote for your preferred enemy:

  1. Who will be most receptive to your message, either in a compromise or an argument? And/or
  2. So sorry about your number 1 issue, neither party cares. What's your number 2 issue? Maybe there is a difference there.
[-]Zane

I wouldn't entirely dismiss Kennedy just yet; he's polling better than any independent or third party candidate since Ross Perot. That being said, I do agree that his chances are quite low, and I expect I'll end up having to vote for one of the main two candidates.

[-]Ericf

Mr. Perot got fewer votes than either major-party candidate. Not a ringing endorsement. And I didn't say the chances were quite low; I said they were zero*. That's at least 5 orders of magnitude away from "quite low," so I don't think we agree about his chances.

*technically odds can't be zero, but I consider anything less likely than "we are in a simulation that is subject to intervention from outside" to be zero for all decision making purposes.

[-]Zane

Maybe the chance that Kennedy wins, given a typical election between a Republican and a Democrat, is too low to be worth tracking. But this election seems unusually likely to have off-model surprises - Biden dies, Trump dies, Trump gets arrested, Trump gets kicked off the ballot, Trump runs independently, controversy over voter fraud, etc. If something crazy happens at the last minute, people could end up voting for Kennedy.

If you think the odds are so low, I'll bet my 10 euros against your 10,000 that Kennedy wins. (Normally I'd use US dollars, but the value of a US dollar in 2024 could change based on who wins the election.)
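(At those stakes, the bet is positive expected value for me whenever Kennedy's chance of winning exceeds 10 / (10 + 10,000) ≈ 0.1%.)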

[-]Ericf

I can't tie up cash in any sort of escrow, but I'd take that bet on a handshake.

[-]dr_s

I would. It's possible that an election in which a third party candidate has a serious chance might exist, but it wouldn't look like this one at this point. The only way the boat could even be rocked is if the charges go through and Trump is out of the race by force majeure, at which point there's quite a bit of chaos.

I am not sure that this is the best way to evaluate which candidate is best in this regard. Your goal is to get action taken, so surely looking for whoever is most persuadable and most rational would be a better metric. A politician who says, "AI is an existential threat to humanity and action needs to be taken," may not be serious about the issue - they might just be saying things that they think will sound cool/interesting to their audience.

In any case, regardless of my particular ideas of how to evaluate this, I think that you need better metrics.

A quick nitpick:

You say:

The executive order was intended to limit various risks from AI... none of which were at all related to human extinction, except maybe bioweapons.

But note this from the EO:

     (k)  The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

...

          (iii)  permitting the evasion of human control or oversight through means of deception or obfuscation.

[-]Zane

Ah. I don't think the writers meant that in terms of ASI killing everyone, but yeah, it's kind of related.

What about Congress?

[-]Zane

Unfortunately, I don't have the time to research more than a thousand candidates across the country, and there are probably only about 1 or 2 LessWrongers in most congressional districts. But I encourage everyone to research the candidates' views on AI for whichever congressional elections you're personally able to vote in.