I would be excited about someone doing a blog on what the companies are doing RE AI policy (including comms that are relevant to policy or directed at policymakers.)
I suspect good posts from such a blog would be shared reasonably frequently among tech policy staffers in DC.
(Not necessarily saying this needs to be you).
First, when I talk to security staff at AI companies about computer security, they often seem to fail to anticipate what insider threat from AIs will be like.
Why do you think this? Is it that they are not thinking about large numbers of automated agents running around doing a bunch of research?
Or is it that they are thinking about these kinds of scenarios, and yet they still don't apply the insider threat frame for some reason?
My understanding is that AGI policy is pretty wide open under Trump. I don't think he or most of his close advisors have entrenched views on the topic.
If AGI is developed in this Admin (or we approach it in this Admin), I suspect there is a lot of EV on the table for folks who are able to explain core concepts/threat models/arguments to Trump administration officials.
There are some promising signs of this so far. Publicly, Vance has engaged with AI2027. Non-publicly, I think there is a lot more engagement/curiosity than many readers might expect.
This isn't to say "everything is great and the USG is super on track to figure out AGI policy." It's more to say "I think people should keep an open mind. Even people who disagree with the Trump Admin on mainstream topics should remember that AGI policy is a weird/niche/new topic where lots of people do not have strong/entrenched/static positions (and even those who do have a position may change their mind as new events unfold)."
There are definitely still benefits to doing alignment research, but this only justifies the idea that doing alignment research is better than doing nothing.
IMO the thing that matters (for an individual making decisions about what to do with their career) is something more like "on the margin, would it be better to have one additional person do AI governance or alignment/control?"
I happen to think that given the current allocation of talent, on-the-margin it's generally better for people to choose AI policy. (Particularly efforts to contribute technical expertise or technical understanding/awareness to governments, think-tanks interfacing with governments, etc.) There is a lot of demand in the policy community for these skills/perspectives and few people who can provide them. In contrast, technical expertise is much more common at the major AI companies (though perhaps some specific technical skills or perspectives on alignment are neglected.)
In other words, my stance is something like "by default, anon technical person would have more expected impact in AI policy unless they seem like an unusually good fit for alignment or an unusually bad fit for policy."
There's a video version of AI2027 that is quite engaging/accessible. Over 1.5M views so far.
Seems great. My main critique is that the "good ending" seems to assume alignment is rather easy to figure out, though admittedly that might be more of a critique of AI2027 itself rather than the way the video portrays it.
This is fantastic work. There's also something about this post that feels deeply empathic and humble, in ways that are hard to articulate but seem important for (some forms of) effective policymaker engagement.
A few questions:
I think we should be careful not to overestimate the success of AI2027. "Vance has engaged with your work" is an impressive feat, but it's still relatively far away from something like "Vance and others in the Admin have taken your work seriously enough to start to meaningfully change their actions or priorities based on it." (That bar is very high, but my impression is that the AI2027 folks would be like "yea, that's what would need to happen in order to steer toward meaningfully better futures.")
My impression is that AI2027 will have (even) more success if it is accompanied by an ambitious policymaker outreach effort (e.g., lots of 1-1 meetings with relevant policymakers and staffers, writing specific pieces of legislation or EOs and forming a coalition around those ideas, publishing short FAQ memos that address misconceptions or objections they are hearing in their meetings with policymakers, etc.)
This isn't to say that research is unnecessary-- much of the success of AI2027 comes from Daniel (and others on the team) having dedicated much of their lives to research and deep understanding. There are plenty of Government Relations people who are decent at "general policy engagement" but will fail to provide useful answers when staffers ask things like "But why won't we just code in the goals we want?", or "But don't you think the real thing here is about how quickly we diffuse the technology?", or "Why don't you think existing laws will work to prevent this?" or a whole host of other questions.
But on the margin, I would probably have Daniel/AI2027 spend more time on policymaker outreach and less time on additional research (especially now that AI2027 is done). There is some degree of influence one can have with the "write something that is thoroughly researched and hope it spreads organically" effort, and I think AI2027 has essentially saturated that. For additional influence, I expect it will be useful for Daniel (or other competent communicators on his team) to advance to "get really good at having meetings with the ~100-1000 most important people, understanding their worldviews, going back and forth with them, understanding their ideological or political constraints, and finding solutions/ideas/arguments that are tailored to these particular individuals." This is still a very intellectual task in some ways, but it involves a lot more "having meetings" and "forming models of social/political reality" than the classic "sit in your room with a whiteboard and understand technical reality" stuff that we typically associate with research.
Note that IFP (a DC-based think tank) recently had someone deliver 535 copies of their new book, one to every US Congressional office.
Note also that my impression is that DC people (even staffers) are much less "online" than tech audiences. Whether or not you copy IFP, I would suggest thinking about in-person distribution opportunities for DC.
I think there are organizations that themselves would be more likely to be robustly trustworthy and would be more fine to link to.
I would be curious to hear your thoughts on which organizations you feel are robustly trustworthy.
Bonus points for a list that is kind of a weighted sum of "robustly trustworthy" and "having a meaningful impact RE improving public/policymaker understanding". (Adding this in because I suspect that it's easier to maintain "robustly trustworthy" status if one simply chooses not to do a lot of externally-focused comms, so it's particularly impressive to have the combination of "doing lots of useful comms/policy work" and "managing to stay precise/accurate/trustworthy").
Suppose you magically gained a moderate amount of Political Will points and could spend them on 1-2 things that would make the bill stronger (or on introducing a separate bill; no need to anchor too much on the current RAISE vibe).
What do you think are the 1-2 things you'd change about RAISE or the 1-2 extra things you'd push for?