Before I start criticizing, I want to make clear that I'm grateful for your work and could not have done better myself; I certainly did try. In fact, I was one of the first in DC in 2018, but I could not do well because I was one of the many "socially inept" people who are, in fact, a serious problem in DC. (For the record: if you want to do AI policy, do not move to DC; first visit and have the people there judge your personality/charisma. The standards might be far higher or far lower than you expect, and they are now much better at testing people for fit than when I started six years ago.)
I'm also grateful to see you put your work out there for review on LessWrong, rather than staying quiet. I think the decision to be open vs. closed about AI policy work is vastly more complicated than most people in AI policy believe.
Your post is fantastic, especially the reflections.
Lots of people asked me if I had draft legislation. Apparently, if you have regulatory ideas, people want to see that you have a (short) version of it written up like a bill.
They want you to propose solutions, they get annoyed when people come to them with a new issue they know nothing about and expect them to be the one to think of solutions. They want you to do the work writing up the final product and then hand it to them. If they have any issue with it, they'll rewrite parts of it or throw it in the recycle bin.
In terms of my effect: I think I mostly just got them to think about it more and raised its position on their internal "AI policy priorities" list. I think people forget that staffers have like 100 things on their priority list, so merely exposing and re-exposing them to these ideas can be helpful.
I've heard this characterized as "goldfish memory". It's important to note that many of the other 100 things on their priority list also have people trying to "expose and re-expose" them to ideas, and many staffers are hired for skill at pretending that they're listening. I think you were correct to evaluate your work building relationships as more useful than this.
My experience in DC made me think that the Overton Window is extremely wide. Congress does not have cached takes on AI policy, and it seems like a lot of people genuinely want to learn. It's unclear how long this will last (e.g., maybe AI risk ends up getting polarized), but we seem to be in a period of unusually high open-mindedness & curiosity.
I disagree that the Overton window in DC, or even in Congress, is as wide as your impression suggests. This is both for the reasons stated above and because it seems very likely (>95%) that military-adjacent people in both the US and China are actively pursuing AI for things like economic growth/stabilization, military applications like electronic warfare and nuclear-armed cruise missiles, or the data processing required for modern information warfare. I agree that we seem to be in a period of unusually high open-mindedness and curiosity.
With that said, I think coordination would be easier if people ended up being more explicit about what they believe, more explicit about specific policy goals they are hoping to achieve, and more explicit about their legible wins (and losses). In the absence of this, we run the risk of giving too much power and too many resources to people who "play the game", develop influence, but don't end up using their influence to achieve meaningful change.
I think that DC is a very Moloch-infested place, resulting in an intense and pervasive culture of nihilism: a near-universal belief that Moloch is inevitable. Prolonged exposure to that environment (several years), where everyone around you thinks this way and will permanently mark you as low-social-status if you ever reveal that you are one of those people with hope for the world, likely (>90%) has intense psychological effects on the AI safety people in DC.
Likewise, the best people know the risks of having important conversations near smartphones in a world where people use AI for data science, but they don't know you well enough to know whether you yourself will proceed to have important conversations about them near smartphones. They can't have a heart-to-heart with you about the problem, because that would turn the conversation into an important one, and it would be near a smartphone.
I think I would've written up a doc that explained my reasoning, documented the people I consulted with, documented the upside and downside risks I was aware of, and sent it out to some EAs.
internally screaming
I would've come with a printed-out 1-pager that explained what CAIS is & summarized the regulatory ideas in the NTIA response. I ended up doing this halfway through, and I would've done this sooner.
If you ever decide to write a doc properly explaining the situation with AI safety to the policymakers who read it, Scott Alexander's Superintelligence FAQ is held in high esteem; you could read it, think about how and why it gives laymen a fair chance to understand the situation, and then write a much shorter 1-pager of your own, optimized for your particular audience. I convinced both of my ~60-year-old parents to take AI safety seriously by asking them to read the AI chapter in Toby Ord's The Precipice, so you might consider that instead.
Thanks for all of this! Here's a response to your point about committees.
I agree that the committee process is extremely important. It's especially important if you're trying to push forward specific legislation.
For people who aren't familiar with committees or why they're important, here's a quick summary of my current understanding (there may be a few mistakes):
(If any readers are familiar with the committee process, please feel free to add more info or correct me if I've said anything inaccurate.)
> I think I would've written up a doc that explained my reasoning, documented the people I consulted with, documented the upside and downside risks I was aware of, and sent it out to some EAs.
internally screaming
Can you please explain what this means?
I started asking other folks in AI Governance. The vast majority had not talked to congressional staffers (at all).
??? WTF do people "in AI governance" do?
WTF do people "in AI governance" do?
Quick answer:
the community as a whole is still probably overinvested in research and underinvested in policymaker engagement/outreach.
My prediction is that the AI safety community will overestimate the difficulty of policymaker engagement/outreach.
I think that the AI safety community has quickly and accurately taken social awkwardness and nerdiness into account, and factored that out of the equation. However, they will still overestimate the difficulty of policymaker outreach, on the basis that policymaker outreach requires substantially above-average sociability and personal charisma.
Even among the many non-nerd extroverts in the AI safety community, who have above-average or well-above-average social skills (e.g. ~80th or 90th percentile), doing well in policy requires an extreme combination of traits that produce intense charismatic competence, such as the traits required for a sense of humor near the level of a successful professional comedian's (e.g. ~99th or 99.9th percentile). This is because the policy environment, like the world of corporate executives, selects for charismatic extremity.
Because people who are introspective or think about science at all are very rarely far above the 90th percentile for charisma, even if only the obvious natural extroverts are taken into account, the AI safety community will overestimate the difficulty of policymaker outreach.
I don't think they will underestimate the value of policymaker outreach (in fact, I predict they are overestimating the value, due to American interests in using AI for information warfare pushing AI decisionmaking toward inaccessible and inflexible parts of natsec agencies). But I do anticipate them underestimating the feasibility of policymaker outreach.
I'm not sure I understand the direction of reasoning here. Overestimating the difficulty would mean that it will actually be easier than they think, which would be true if they expected a requirement of high charisma but the requirement were actually absent, or would be true if the people who ended up doing it were of higher charisma than the ones making the estimate. Or did you mean underestimating the difficulty?
I should have made it more clear at the beginning.
Among the people who do outreach/policymaker engagement, my impression is that there has been more focus on the executive branch (and less on Congress/congressional staffers).
That makes sense and sounds sensible, at least pre-ChatGPT.
Modern congressional staffers are the product of Goodhart's law: ~50-100 years ago, they were the ones who ran Congress de facto, so all the businessmen and voters wanted to talk to them, and so policymaking ended up moving elsewhere, just like what happened with congressmen themselves ~100-150 years ago. Congressional staffers today primarily take constituent calls from voters and make interest groups think they're being listened to. Akash's accomplishments came from wading through that bullshit, meeting people through people until he managed to find some gems.
Most policymaking today is called in from outside, with lobbyists having the domain expertise needed to write the bills, and senior congressional staffers (like the legislative directors and legislative assistants here) overseeing the process, usually without getting very picky about the details.
It's not that congressmembers have no power, but they're just one part of what's called an "iron triangle": congressional lawmakers, the executive branch bureaucracies (e.g. FDA, CDC, DoD, NSA), and private-sector companies (e.g. Walmart, Lockheed, Microsoft, Comcast), with lobbyists circulating among the three, negotiating and cutting deals between them. It's incredibly corrupt and always has been, though not all-crushingly corrupt in the way some African governments are. It's like the military-industrial complex, except that's actually a bad example, because Congress is increasingly out of the loop de facto on foreign policy (most structures are idiosyncratic, because the fundamental building block is people thinking of ways to negotiate backdoor deals).
People in the executive branch/bureaucracies like the DoD have more power over interesting things like foreign policy; Congress is more powerful on things that have been entrenched for decades, like farming policy. Think tank people have no power, but they're much less stupid, have domain expertise, and are often called on to help write bills instead of lobbyists.
I don't know how AI policy is made in Congress; I jumped ship from domestic AI policy to foreign AI policy 3.5 years ago to focus more on the incentives from the US-China angle. Akash is the one to ask about where AI policymaking happens in Congress, as he was the one actually there, deep in the maze (maybe via DM, since he didn't describe it in this post).
I strongly recommend people talk to John Wentworth about AI policy, even if he doesn't know much at first; after looking at Wentworth's OpenAI dialogue, he's currently my top predicted candidate for "person who starts spending 2 hours a week thinking about AI policy instead of technical alignment, and thinks up galaxy-brained solutions that break the stalemates that have vexed the AI policy people for years".
Most don't do policy at all. Many do research. Since you're incredulous, here are some examples of great AI governance research (which don't synergize much with talking to policymakers):
I mean, those are all decent projects, but I would call zero of them "great". Like, the whole appeal of governance as an approach to AI safety is that it's (supposed to be) bottlenecked mainly on execution, not on research. None of the projects you list sound like they're addressing an actual rate-limiting step to useful AI governance.
Like, the whole appeal of governance as an approach to AI safety is that it's (supposed to be) bottlenecked mainly on execution, not on research.
(I disagree. Indeed, until recently governance people had very few policy asks for government.)
(Also note that lots of "governance" research is ultimately aimed at helping labs improve their own safety. Central example: Structured access.)
(I disagree. Indeed, until recently governance people had very few policy asks for government.)
Did that change because people finally finished doing enough basic strategy research to know what policies to ask for?
It didn't seem like that to me. Instead, my impression was that it was largely triggered by ChatGPT and GPT4 making the topic more salient, and AI safety feeling more inside the Overton window. So there were suddenly a bunch of government people asking for concrete policy suggestions.
(I disagree. Indeed, until recently governance people had very few policy asks for government.)
Did that change because people finally finished doing enough basic strategy research to know what policies to ask for?
Yeah, that's Luke Muehlhauser's claim; see the first paragraph of the linked piece.
I mostly agree with him. I wasn't doing AI governance years ago but my impression is they didn't have many/good policy asks. I'd be interested in counterevidence — like pre-2022 (collections of) good policy asks.
Anecdotally, I think I know one AI safety person who was doing influence-seeking in government and was on a good track, but quit (to do research) because they weren't able to leverage their influence: the AI governance community didn't really have asks for the US federal government.
My own model differs a bit from Zach's. It seems to me like most of the publicly-available policy proposals have not gotten much more concrete. It feels a lot more like people were motivated to share existing thoughts, as opposed to people having new thoughts or having more concrete thoughts.
Luke's list, for example, is more of a "list of high-level ideas" than a "list of concrete policy proposals." It has things like "licensing" and "information security requirements"; it's not an actual bill or set of requirements. (And to be clear, I still like Luke's post, and it's clear that he wasn't trying to be super concrete.)
I'd be excited for people to take policy ideas and concretize them further.
Aside: When I say "concrete" in this context, I don't quite mean "people on LW would think this is specific." I mean "this is closer to bill text, text of a section of an executive order, text of an amendment to a bill, text of an international treaty, etc."
I think there are a lot of reasons why we haven't seen much "concrete policy stuff". Here are a few:
For people interested in developing the kinds of proposals I'm talking about, I'd be happy to chat. I'm aware of a couple of groups doing the kind of policy thinking that I would consider "concrete", and it's quite plausible that we'll see more groups shift toward this over time.
Curated. I liked that this post had a lot of object-level detail about a process that is usually opaque to outsiders, and that the "Lessons Learned" section was also grounded enough that someone reading this post might actually be able to skip "learning from experience", at least for a few possible issues that might come up if one tried to do this sort of thing.
Read books. I found Master of the Senate and Act of Congress to be especially helpful. I'm currently reading The Devil's Chessboard to better understand the CIA & intelligence agencies, and I'm finding it informative so far.
Would you recommend "The Devil's Chessboard"? It seems intriguing, yet it makes substantial claims with scant evidence.
In my opinion, intelligence information often leads to exaggerated stories unless it is anchored in public information, leaked documents, and numerous high-quality sources.
One final thing is that I typically didn't emphasize loss of control/superintelligence/recursive self-improvement. I didn't hide it, but I included it in a longer list of threat models.
I'd be very interested to see that longer threat model list!
If memory serves me well, I was informed by Hendrycks' overview of catastrophic risks. I don't think it's a perfect categorization, but I think it does a good job laying out some risks that feel "less speculative" (e.g., malicious use, race dynamics as a risk factor that could cause all sorts of threats) while including those that have been painted as "more speculative" (e.g., rogue AIs).
I've updated toward the importance of explaining & emphasizing risks from sudden improvements in AI capabilities, AIs that can automate AI research, and intelligence explosions. I also think there's more appetite for that now than there used to be.
There are a lot of antibodies and subtle cultural pressures that can prevent me from thinking about certain ideas and can atrophy my ability to take directed action in the world.
This hit me like a breath of fresh air. "Antibodies", yes. It makes me feel less alone in my world-space.
In May and June of 2023, I (Akash) had about 50-70 meetings about AI risks with congressional staffers. I had been meaning to write a post reflecting on the experience and some of my takeaways, and I figured it could be a good topic for a LessWrong dialogue. I saw that hath had offered to do LW dialogues with folks, and I reached out.
In this dialogue, we discuss how I decided to chat with staffers, my initial observations in DC, some context about how Congressional offices work, what my meetings looked like, lessons I learned, and some miscellaneous takes about my experience.
Context
Arriving in DC & initial observations
Hierarchy of a Congressional office
Outreach to offices
A typical meeting
Staffer attitudes toward AI risk
Lessons Learned
Final Takes