Having a risk-focused administration after 2028 could very well swing the future. That president is likely to be in a position to control the creation of ASI. We should be putting a lot of effort into making that happen.
This currently links to "https://armiercansformoskovitz.com/", which does not resolve
Can you give more reasoning for why, out of all eligible people, it is Moskovitz? I think that right now the most famous politician who has publicly sided with AI Safety is Bernie.
Bernie Sanders is probably too old to run, but I would prefer Sanders over a lot of other candidates if he were 10 years younger. For most other big-name politicians, their stances on AI safety are muddled at best, and the best pro-AI-safety politicians, like Scott Wiener, just don't have the executive experience or gravitas that Dustin Moskovitz has.
There is also Alexandria Ocasio-Cortez, who, while more friendly to AI Safety, has repeatedly demonstrated that she does not believe in the dramatic potential of AI to reshape the world, claiming as recently as last year that we could be in a massive AI bubble.
To nitpick, those two beliefs are compatible: one can both believe that we're in a massive AI bubble about to pop in, IDK, 5 years, and that AI can dramatically reshape the world (not claiming that this is AOC's stance).
Separately, I think there's a decent chance that she gets doom-pilled through her social proximity to Bernie.
I'd much rather see Moskovitz than Trump or other MAGAoids, but that's admittedly a very low bar. Seems like a high variance bet, and the sign of the net effect seems also very uncertain (compared to, IDK, Newsom?), given his ties to Anthropic.
Dustin Moskovitz is likely the largest individual funder of AI safety causes ever. I think that should be a stronger signal of his commitment to AI safety than his foundation's stake in Anthropic, which does not personally affect his wealth: https://www.forbesindia.com/amp/article/global-game/cross-border/change-agents-cari-tuna-and-dustin-moskovitzs-ai-safety-bet/2991379/1.
The big advantage we have is that members of the elite and their families are just as vulnerable to AI extinction risk as the rest of us. This is in sharp contrast to most policy deliberations, in which the deliberators are reduced to putting hope in the altruistic impulses of an elite who are mostly impervious to the consequences of the policy decision. As soon as any significant fraction of the power elite forms an accurate understanding of the danger their families are in, we can expect very drastic moves to curtail the risk. The big disadvantage we have is time: look how long it took for the political will to address global warming to form in Western Europe, and AI extinction risk is harder to come to an accurate understanding of than global warming is.
The United States government is not on track to implement significant AI safety policies before the development of AGI.[1] Several expert forecasts and prediction platforms suggest that AGI is 50% likely to be developed by the early to mid-2030s, which means it's not unlikely that we have only 1-2 more presidential terms before AGI is developed. This gives the next presidential election outsized importance for AI safety: whoever wins will not only have a huge influence over the US government, but will also have a decent chance of winning again in 2032. Unfortunately, the likely frontrunners for the 2028 elections have not demonstrated that they will champion AI Safety.
Among the frontrunners for the Democratic nomination, there is Gavin Newsom, who vetoed SB 1047, a proposed California AI safety law, on the grounds that it would overregulate AI companies and stifle AI innovation. There is also Alexandria Ocasio-Cortez, who, while more friendly to AI Safety, has repeatedly demonstrated that she does not believe in the dramatic potential of AI to reshape the world, claiming as recently as last year that we could be in a massive AI bubble. In general, AI Safety and comprehensive AI regulation remain rarely discussed issues among Democratic politicians, especially through the lens of x-risk.
On the Republican side of the ledger, you have J.D. Vance, who famously stated that "The AI future is not going to be won by hand-wringing about safety", and Marco Rubio, who has been largely silent on AI Safety while the administration he works for has consistently placed AI Safety on the back burner.
To reduce existential risk from artificial intelligence, America needs leadership that understands the profound risks that AI poses and has the vision and competence to shepherd America through these turbulent times. Of all the potential candidates who could provide this leadership, only one has the qualifications, conviction, and resources to have a shot at making a real difference to American, and ultimately global, AI policy: Dustin Moskovitz.
Dustin Moskovitz is a co-founder of Facebook and Asana (a company that sells productivity software) and also co-founded Coefficient Giving (formerly known as Open Philanthropy), one of the largest effective altruist organizations in the world. As a leading advocate and funder within the AI safety community, he possesses both a deep commitment to mitigating existential risks and the professional background to appeal to conventional measures of success. His entrepreneurial record and demonstrated capacity for large-scale organization lend him a kind of legitimacy that bridges the gap between the technical world of AI safety and the public expectations of political leadership.
If you would like to encourage Dustin Moskovitz to run for president, please sign this petition. By organizing a political draft effort, we can do more than just try to convince Dustin Moskovitz to run: we can also test the feasibility of a Moskovitz campaign without expending many resources, and provide a compelling origin story for a future Moskovitz campaign if one ever comes to exist.
For more information on why Dustin Moskovitz should run for president, visit americansformoskovitz.com or read a more detailed essay here.
Written with Grammarly spell check. Note that the fifth paragraph is duplicated from a prior essay of mine on this forum.
More precisely, based on available evidence, it is not clear that the US federal government will ever pass large-scale AI Safety legislation that would be deemed adequate by worldviews on which AI poses a significant threat to humanity's existence.