(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil's resources to improve the governance of AI with a focus on avoiding catastrophic outcomes. Formerly co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond, co-president of Harvard EA, Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment, and occasional AI governance researcher.
Not to be confused with the user formerly known as trevor1.
One other thought on Green in rationality: you mention the yin of scout mindset in the Deep Atheism post, and scout mindset -- and indeed correct Bayesianism -- involves a Green passivity and maybe the "respect for the Other" described here. While Blue is agnostic, in theory, between yin and yang -- whichever gives me more knowledge! -- Blue as evoked in Duncan's post and as I commonly think of it tends to lean yang: "truth-seeking," "diving down research rabbit holes," "running experiments," etc. A common failure mode of Blue-according-to-Blue is a yang that projects the observer into the observations: seeing new evidence as tools, arguments as soldiers. Green reminds Blue to chill: see the Other as it is, recite the litanies of Gendlin and Tarski, combine the seeking of truth with receptivity to what you find.
I think this post aims at an important and true thing and misses in a subtle and interesting but important way.
Namely: I don't think the important thing is that one faction gets a veto. I think it's that you just need limitations on what the government can do that ensure that it isn't too exploitative/extractive. One way of creating these kinds of limitations is creating lots of veto points and coming up with various ways to make sure that different factions hold the different veto points. But, as other commenters have noted, the UK government does not have structural checks and balances. In my understanding, what they have instead is a bizarrely, miraculously strong respect for precedent and consensus about what "is constitutional" despite (or maybe because of?) the lack of a written constitution. For the UK, and maybe other, less-established democracies (i.e. all of them), I'm tempted to attribute this to the "repeated game" nature of politics: when your democracy has been around long enough, you come to expect that you and the other faction will share power (roughly at 50-50 for median voter theorem reasons), so voices within your own faction start saying "well, hold on, we actually do want to keep the norms around."
Also, re: the electoral college, can you say more about how this creates de facto vetoes? The electoral college does not create checks and balances; you can win in the electoral college without appealing to all the big factions (indeed, see Abraham Lincoln's 1860 win), and the electoral college puts no restraints on the behavior of the president afterward. It just noisily empowers states that happen to have factional mixes close to the national average, and it can indeed create paths to victory that route through doubling down on support within your own faction while alienating those outside it (e.g. Trump's 2016 and 2020 coalitions).
(An extra-heavy “personal capacity” disclaimer for the following opinions.) Yeah, I hear you that OP doesn’t have as much public writing about our thinking here as would be ideal for this purpose, though I think the increasingly adversarial environment we’re finding ourselves in limits how transparent we can be without undermining our partners’ work (as we’ve written about previously).
The set of comms/advocacy efforts that I’m personally excited about is definitely larger than the set of comms/advocacy efforts that I think OP should fund, since 1) that’s a higher bar, and 2) sometimes OP isn’t the right funder for a specific project. That being said:
Some broader thoughts about what kinds of advocacy would be useful or not useful:
Just being "on board with AGI worry" is so far from sufficient for taking useful actions to reduce the risk that I think epistemics and judgment are more important, especially since we're likely to get lots of evidence (one way or another) about the timelines and risks posed by AI during the term of the next president.
He has also broadly indicated that he would be hostile to the nonpartisan federal bureaucracy, e.g. by designating many more civil servants as presidential appointees, allowing him personally to fire and replace them. I think creating new offices that are effectively set up to regulate AI looks much more challenging in a Trump (and to some extent DeSantis) presidency than under the other candidates.
Thanks for these thoughts! I agree that advocacy and communications are an important part of the story here, and I'm glad you've added some detail on that with your comment. I’m also sympathetic to the claim that serious thought about “ambitious comms/advocacy” is especially neglected within the community, though I think it’s far from clear that the effort that went into the policy research that identified these solutions, or the work on the ground in Brussels, should have been shifted at the margin to the kinds of public communications you mention.
I also think Open Phil’s strategy is pretty bullish on supporting comms and advocacy work, but it has taken us a while to acquire the staff capacity to gain context on those opportunities and begin funding them, and perhaps there are specific opportunities that you're more excited about than we are.
For what it’s worth, I didn’t seek significant outside input while writing this post and think that's fine (given the alternative of writing it quickly, posting it here, disclaiming my non-expertise, and getting additional perspectives and context from commenters like yourself). However, I have spoken with about a dozen people working on AI policy in Europe over the last couple months (including one of the people whose public comms efforts are linked in your comment) and would love to chat with more people with experience doing policy/politics/comms work in the EU.
We could definitely use more help thinking about this stuff, and I encourage readers who are interested in contributing to OP’s thinking on advocacy and comms to do any of the following:
Thank you! Classic American mistake on my part to round these institutions to their closest US analogies.
I broadly share your prioritization of public policy over lab policy, but the more I've learned about liability, the more it seems like one or a few labs having solid RSPs/evals commitments/infosec practices/etc. would significantly shift how courts judge how much of this kind of work a "reasonable person" would do to mitigate the foreseeable risks. Legal and policy teams in labs will anticipate this and thus push hard for compliance with whatever the perceived industry best practice is. (Getting good liability rulings or legislation would multiply this effect.)
"We should be devoting almost all of global production..." and "we must help them increase" are only the case if:
(And, you know, total utilitarianism and such.)
The "highly concentrated elite" issue seems to make it more, rather than less, surprising and noteworthy that a lack of structural checks and balances has resulted in a highly stable and (relatively) individual-rights-respecting set of policy outcomes. That is, there would seem to be an especially strong case for giving various non-elite groups explicit veto power.