Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

One piece of advice I gave to EAs of various stripes in early 2021 was: do everything you can to make the government sane around biorisk, in the wake of the COVID pandemic, because this is a practice-run for AI.

I said things like: if you can't get the world to coordinate on banning gain-of-function research, in the wake of a trillions-of-dollars tens-of-millions-of-lives pandemic "warning shot", then you're not going to get coordination in the much harder case of AI research.

Biolabs are often publicly funded (rather than industry-funded). The economic forces arrayed behind this recklessly foolish and impotent research consist of “half-a-dozen researchers thinking it’s cool and might be helpful”. (While the work that would actually be helpful—such as removing needless bureaucracy around vaccines and investing in vaccine infrastructure—languishes.) Compared to the problem of AI—where the economic forces arrayed in favor of “ignore safety and rush ahead” are enormous and the argument for expecting catastrophe much murkier and more abstract—the problem of getting a sane civilizational response to pandemics (in the wake of a literal pandemic!) is ridiculously easier.

And—despite valiant effort!—we've been able to do approximately nothing.

We're not anywhere near global bans on gain-of-function research (or equivalent but better feats of coordination that the people who actually know what they're talking about when it comes to biorisk would tell you are better targets than gain-of-function research).

The government continues to fund research that is actively making things worse, while failing to put any serious funding towards the stuff that might actually help.

I think this sort of evidence has updated a variety of people towards my position. I think that a variety of others have not updated. As I understand the counter-arguments (from a few different conversations), there are two main reasons that people see this evidence and continue to hold out hope for sane government response:

 

1. Perhaps the sorts of government interventions needed to make AI go well are not all that large, and not that precise.

I confess I don't really understand this view. Perhaps the idea is that AI is likely to go well by default, and all the government needs to do is, like, not use anti-trust law to break up some corporation that's doing a really good job at AI alignment just before they succeed? Or perhaps the idea is that AI is likely to go well so long as it's not produced first by an authoritarian regime, and working against authoritarian regimes is something governments are in fact good at?

I'm not sure. I doubt I can pass the ideological Turing test of someone who believes this.

 

2. Perhaps the ability to cause governance to be sane on some issue is tied very directly to the seniority of the government officials advising sanity.

EAs only started trying to affect pandemic policy a few years ago, and aren't very old or recognized among the cacophony of advisors. But if another pandemic hit in 20 years, the sane EA-ish advisors would be much more senior, and a lot more would get done. Similarly, if AI hits in 20 years, sane EA-ish advisors will be much more senior by then. The observation that the government has not responded sanely to pandemic near-misses is potentially screened off by the inexperience of EAs advising governance.

I have some sympathy for the second view, although I'm skeptical that sane advisors have significant real impact. I'd love a way to test it as decisively as we've tested the "government (in its current form) responds appropriately to warning shots" hypothesis.

On my own models, the "don't worry, people will wake up as the cliff-edge comes more clearly into view" hypothesis has quite a lot of work to do. In particular, I don't think it's a very defensible position in isolation anymore. The claim "we never needed government support anyway" is defensible; but if you want to argue that we do need government support but (fortunately) governments will start behaving more reasonably after a warning shot, it seems to me like these days you have to pair that with an argument about why you expect the voices of reason to be so much louder and more effectual in 2041 than they were in 2021.

(Which is then subject to a bunch of the usual skepticism that applies to arguments of the form "surely my political party will become popular, claim power, and implement policies I like".)

 

See also: the law of continued failure, and Rob Bensinger's thoughts on the topic.


The only viable counterargument I've heard to this is that the government can be competent at X while being incompetent at Y, even if X is objectively harder than Y. The government is weird like that. It's big and diverse and crazy. Thus, the conclusion goes, we should still have some hope (10%?) that we can get the government to behave sanely on the topic of AGI risk, especially with warning shots, despite the evidence of it behaving incompetently on the topic of bio risk despite warning shots.

Or, to put it more succinctly: The COVID situation is just one example; it's not overwhelmingly strong evidence.

(This counterargument is a lot more convincing to the extent that people can point to examples of governments behaving sanely on topics that seem harder than COVID. Maybe Y2K? Maybe banning bioweapons? Idk, I'd be interested to see research on this: what are the top three examples we can find, as measured by a combination of similarity-to-AGI-risk and competence-of-government-response.)

[-]LawrenceC

I can't seem to figure out the right keywords to Google, but off the top of my head, some other candidates: banning CFCs (maybe easier? don't know enough), the taboo against chemical weapons (easier), and nuclear non-proliferation (probably easier?).

[-]Kaj_Sotala

I think Anders Sandberg did research on this at one point, and I recall him summarizing his findings as "things are easy to ban as long as nobody really wants to have them". IIRC, things that went into that category were chemical weapons (they are actually not very effective in modern warfare), CFCs (they were relatively straightforward to replace with equally effective alternatives), and human cloning.

This is my impression as well, but it's very possible that we're looking at the wrong reference class (IE it's plausible that many "sane" things large governments have done are not salient). Maybe some of the big social welfare/early environmental protection programs?

On welfare: Bismarck is famous as a social welfare reformer, but these efforts were famously made to undermine socialism and appease the working class, a result any newly formed, volatile state would enjoy. I expect that social welfare programs in most countries are useful to governments on a similar basis.

On environmentalism today, we see significant European advances in green energy right now, but this is accompanied by large price hikes in natural energy resources, providing quite an incentive. Early large-scale state-driven environmentalism (e.g. Danish wind energy R&D and usage) was driven by the 70s oil crises in the same fashion. And then there are of course the democratic incentives, i.e. if enough of the population supports environmentalism, then we'll do it (though 3.5% population-wide active participation seems to work as well).

And that's just describing state-side shifts. Even revolutions have been driven by non-ideological incentives. E.g. the American revolution started as a staged "throwing tea in the ocean" act by tea smugglers because London reduced the tax on tea for the East India Company, reducing their profits (see myths and the article about smugglers' incentives). Perpetuating the revolution also became personally profitable for Washington.

[-]Mau

I'd guess the very slow rate of nuclear proliferation has been much harder to achieve than banning gain-of-function research would be, since, in the absence of intervention, incentives to get nukes would have been much bigger than incentives to do gain-of-function research.

Also, on top of the taboo against chemical weapons, there was the verified destruction of most chemical weapons globally.

I agree that nuclear non-proliferation is probably harder than a ban on gain-of-function. But in this case, the US + USSR both had a strong incentive to discourage nuclear proliferation, and had enough leverage to coerce smaller states to not work on nuclear weapon development (e.g. one or the other was the security provider for the governments of said states). 

Ditto with chemical weapons, which seem to have lost battlefield relevance to conflicts between major powers (ie they did not actually break the trench warfare stalemate in WWI even when deployed on a massive scale, and are mainly useful as a weapon of terror against weaker opponents).  At this point, the moral arguments + downside risk of chemical attacks vs their own citizens shifted the calculus for major powers. Then the major powers were able to enforce the ban somewhat successfully on smaller countries. 

I do think that banning GoF (especially on pathogens that already have caused, or are likely to cause, a human pandemic) should be ~ as hard as the chemical weapons case---there's not much benefit to doing it, and the downside risk is massive. My guess is a generally sane response to COVID is harder, since it required getting many things right, though I think the median country's response seems much worse than the difficulty of the problem would lead you to believe. 

Unfortunately, I think that AGI-relevant research has way more utility than many of the military technologies that we've failed to ban. Plus, it's super financially profitable, instead of being expensive to maintain. So the problem for AGI is harder than the problems we've really seen solved via international coordination?

[-]Mau

I agree with a lot of that. Still, if

nuclear non-proliferation [to the extent that it has been achieved] is probably harder than a ban on gain-of-function

that's sufficient to prove Daniel's original criticism of the OP--that governments can [probably] fail at something yet succeed at some harder thing.

(And on a tangent, I'd guess a salient warning shot--which the OP was conditioning on--would give the US + China strong incentives to discourage risky AI stuff.)

I think it's possible the competence of government in a given domain is fairly arbitrary/contingent (with the domain's difficulty being one factor among many). If true, instead of looking at domains similar to AGI-risk as a reference class, it'd be better to analyse which factors tend to make government more/less competent in general, and use that to inform policy development/advocacy addressing AGI risk.

[-]LawrenceC

I broadly agree with this general take, though I'd like to add some additional reasons for hope:

1. EAs are spending way more effort and money on AI policy. I don't have exact numbers on this, but I do have a lot of evidence in this direction: at every single EAG, there are far more people interested in AI x-risk policy than in biorisk policy, and even those focusing on biorisk are not really focusing on preventing gain-of-function research (as opposed to, say, engineered pandemics or general robustness). I think this is the biggest reason to expect that AI might be different.

I also think there's some degree of specialization here, and having the EA policy people all swap to biorisk would be quite costly in the future. So I do sympathize with the majority of AI x-risk focused EAs doing AI x-risk stuff, as opposed to biorisk stuff. (Though I also do think that getting a "trial run" in would be a great learning experience.)

2. Some of the big interventions that people want are things governments might do anyway. To put it another way, governments have a lot of inertia. Often when I talk to AI policy people, the main reason for hope is that they want the government to do something that already has a standard template, or is something that governments already know how to do. For example, consider the authoritarian regimes example you gave, especially if the approach is to dump an absolute crapton of money on compute to race harder or to use sanctions to slow down other countries. Another example people talk about is having governments break up or nationalize large tech companies, so as to slow down AI research. Or maybe the action needed is to enforce some "alignment norms" that are easy to codify into law, and that the policy teams of industry groups are relatively bought into. 

The US government already dumps a lot of money onto compute and AI research, is leveling sanctions vs China, and has many Senators that are on board for breaking up large tech companies. The EU already exports its internet regulations to the rest of the world, and it's very likely that it'd export its AI regulations as well. So it might be easier to push these interventions through, than it is to convince the government not to give $600k to a researcher to do gain-of-function, which is what they have been doing for a long time. 

(This seems like how I'd phrase your first point. Admittedly, there's a good chance I'm also failing the ideological Turing test on this one.)
 
3. AI is taken more seriously than COVID. I think it's reasonable to believe that the US government takes AI issues more seriously than COVID---for example, it's seen as more of a national security issue (esp wrt China), and it's less politicized. And AI (x-risk) is an existential threat to nations, which generally tends to be taken way more seriously than COVID is. So one reason for hope is that policymakers don't really care about preventing a pandemic, but they might actually care about AI, enough that they will listen to the relevant experts and actually try. To put it another way, while there is a general factor of sanity that governments can have, there's also tremendous variance in how competent any particular government is on various tasks. (EDIT: Daniel makes a similar point above.)

4. EAs will get better at influencing the government over time. This is similar to your second point. EAs haven't spent a lot of time trying to influence politics. This isn't just about putting people into positions of power---it's also about learning how to interface with the government in ways that are productive, or how to spend money to achieve political results, or how to convince senior policymakers. It's likely we'll get better at influence over time as we learn what to do and what not to do, and will leverage our efforts more effectively. 

For example, the California YIMBYs were a lot worse at interfacing effectively with the state government or the media when they first started ~10 years ago. But recently they've had many big wins in terms of legalizing housing!

(That being said, it seems plausible to me that EAs should try to get gain-of-function research banned as a trial run, both because we'd probably learn a lot doing it, and because it's good to have clear wins.)

and has many Senators that are on board for breaking up large tech companies. 

That's exactly the opposite of what we need, if you listen to AI safety policy folks, because it strengthens race dynamics. If all the tech companies were merged together, they would likely be the first to develop AGI and would thus have to worry less about other researchers getting there first, which would let them invest more resources into safety.

Idk, I've spoken to AI safety policy people who think it's a terrible idea, and some who think it'll still be necessary. On one hand, you have the race dynamics; on the other, you have returns to scale and higher profits from horizontal/vertical integration. 

I think it’s reasonable to believe that the US government takes AI issues more seriously than COVID—for example, it’s seen as more of a national security issue (esp wrt China), and it’s less politicized.

I'm not sure that's helpful from a safety perspective. Is it really helpful if the US unleashes the unfriendly self-improving monster first, in an effort to "beat" China?

From my reading and listening on the topic, the US government does not take AI safety seriously, when "safety" is defined in the way that we define it here on LessWrong. Their concerns around AI safety have more to do with things like ensuring that datasets aren't biased so that the AI doesn't produce accidentally racist outcomes. But thinking about AI safety to ensure that a recursively self-improving optimizer doesn't annihilate humanity on its way to some inscrutable goal? I don't think that's a big focus of the US government. If anything, that outcome is seen as an acceptable risk for the US remaining ahead of China in some kind of imagined AI arms race.

Are any of these cruxes for anyone?

My impression is that 2 and 4 are relatively cruxy for some people? Especially 2. 

IE I've heard from some academics that the "natural" thing to do is to join with the AI ethics crowd/Social Justice crowd and try to get draconian anti-tech/anti-AI regulations passed. My guess is their inside-view beliefs are some combination of:

A. Current tech companies are uniquely good at AI research relative to their replacements. IE, even if the US government destroys $10b of current industry R&D spending, and then spends $15b on AI research, this is way less effective at pushing AGI capabilities. 

B. Investment in AI research happens in large part due to expectation of outsized profits. Destroying the expectation of outsized profits via draconian anti-innovation/anti-market regulation, or just by tacking on massive regulatory burdens (which the US/UK/EU governments are very capable of doing), is enough to curb research interest in this area significantly. 

C. There's no real pressure from Chinese AI efforts. IE, delaying current AGI progress in the US/UK by 3 years just actually delays AGI by 3 years. More generally, there aren't other relevant players besides big, well known US/UK labs.

(I don't find 2 super plausible myself, so I don't have a great inside view of this. I am trying to understand this view better by talking to said academics. In particular, even if C is true (IE China not an AI threat), the US federal government certainly doesn't believe this and is very hawkish vs China + very invested in throwing money at, or at least not hindering, tech research it believes is necessary for competition.)


As for 4, this is a view I hear a lot from EA policy people? e.g. we used to make stupid mistakes, we're definitely not making them now; we used to just all be junior, now we have X and Y high-ranking positions; and we did a bunch of experimentation and figured out which messaging works better. I think 4 would be a crux for me, personally - if our current efforts to influence government are as good as we can get, I think this route of influence is basically unviable. But I do believe that 4 is probably true to a large extent. 

[-]evhub

Though I'm unsure whether warning shots will ever even occur, my primary hope with warning shots has always just been that they would change the behavior of big AI labs that are already sensitive to AI risk (e.g. OpenAI and DeepMind), not that they would help us achieve concrete governmental policy wins.

[-]Rohin Shah

This post seems to make an implicit assumption that the purpose of a warning shot is to get governments to do something. I usually think of a warning shot as making it clear that the risk is real, leading to additional work on alignment and making it easier for alignment advocates to have AGI companies implement specific alignment techniques. I agree that a warning shot is not likely to substitute for a technical approach to alignment.

(EDIT: Whoops, I see Evan made basically this comment already)

Disclaimer: writing quickly. 

Consider the following path: 

A. There is an AI warning shot. 

B. Civilization allocates more resources for alignment and is more conservative about pushing capabilities.  

C. This reallocation is sufficient to solve and deploy aligned AGI before the world is destroyed. 

I think that a warning shot is unlikely (P(A) < 10%), but won't get into that here. 

I am guessing that P(B | A) is the biggest crux. The OP primarily considers the ability of governments to implement policy that moves our civilization further from AGI ruin, but I think that the ML community is both more important and probably significantly easier to shift than government.  I basically agree with this post as it pertains to government updates based on warning shots. 

I anticipate that a warning shot would get most capabilities researchers to a) independently think about alignment failures, including the failures their own models could cause, and b) take the EA/LessWrong/MIRI/Alignment sphere's worries a lot more seriously. My impression is that OpenAI seems to be much more worried about misuse risk than accident risk: if alignment is easy, then the composition of the lightcone is primarily determined by the values of the AGI designers. Right now, there are ~100 capabilities researchers vs ~30 alignment researchers at OpenAI. I think a warning shot would dramatically update them towards worry about accident risk, and therefore I anticipate that OpenAI would drastically shift most of their resources to alignment research. I would guess P(B|A) ~= 80%. 

P(C | A, B) primarily depends on alignment difficulty, of which I am pretty uncertain, and also on how large the reallocation in B is, which I anticipate to be pretty large. The bar for destroying the world gets lower and lower every year, but this would give us a lot more time; I think we get several years of AGI capability before we deploy it. I'm estimating P(C | A, B) ~= 70%, but this is very low resilience. 
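For concreteness, a rough sketch of what these estimates imply if simply chained together (my own back-of-the-envelope arithmetic, taking P(A) at its 10% upper bound and treating the path as a simple product of the stated conditionals):

```python
# Rough sketch: chaining the estimates above (P(A) taken at its 10% upper bound).
p_a = 0.10           # A: an AI warning shot occurs (stated as < 10%)
p_b_given_a = 0.80   # B given A: civilization reallocates toward alignment
p_c_given_ab = 0.70  # C given A, B: the reallocation suffices to deploy aligned AGI in time

p_path = p_a * p_b_given_a * p_c_given_ab
print(f"P(A, B, C) <= {p_path:.3f}")  # 0.056, i.e. at most ~5-6% for this path
```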

Right now, there are ~100 capabilities researchers vs ~30 alignment researchers at OpenAI.

I don't want to derail this thread, but I do really want to express my disbelief at this number before people keep quoting it. I definitely don't know 30 people at OpenAI who are working on making AI not kill everyone, and it seems kind of crazy to assert that there are (and I think assertions that there are are the result of some pretty adversarial dynamics I am sad about).

I think a warning shot would dramatically update them towards worry about accident risk, and therefore I anticipate that OpenAI would drastically shift most of their resources to alignment research. I would guess P(B|A) ~= 80%.

I would like to take bets here, though we are likely to run into doomsday-market problems; there are ways around that, though.

Written and forecasted quickly, numbers are very rough. Thomas requested I make a forecast before anchoring on his comment (and I also haven't read others).

I’ll make a forecast for the question:  What’s the chance a set of >=1 warning shots counterfactually tips the scales between doom and a flourishing future, conditional on a default of doom without warning shots?

We can roughly break this down into:

  1. Chance >=1 warning shot happens
  2. Chance alignment community / EA have a plan to react to warning shot well
  3. Chance alignment community / EA have enough influence to get the plan executed
  4. Chance the plan implemented tips the scales between doom and flourishing future

I’ll now give rough probabilities:

  1. Chance >=1 warning shot happens: 75%
    1. My current view on takeoff is closer to Daniel Kokotajlo-esque fast-ish takeoff than Paul-esque slow takeoff. But I’d guess even in the DK world we should expect some significant warning shots; we just have less time to react to them.
    2. I’ve also updated recently toward thinking the “warning shot” doesn’t necessarily need to be that accurate of a representation of what we care about to be leveraged. As long as we have a plan ready to react to something related to making people scared of AI, it might not matter much that the warning shot accurately represented the alignment community’s biggest fears.
  2. Chance alignment community / EA have a plan to react to warning shot well: 50%
    1. Scenario planning is hard, and I doubt we currently have very good plans. But I think there are a bunch of talented people working on this, and I’m planning on helping :)
  3. Chance alignment community / EA have enough influence to get the plan executed: 35%
    1. I’m relatively optimistic about having some level of influence, seems to me like we’re getting more influence over time and right now we’re more bottlenecked on plans than influence. That being said, depending on how drastic the plan is we may need much more or less influence. And the best plans could potentially be quite drastic.
  4. Chance the plan implemented tips the scales between doom and flourishing future, conditional on doom being default without warning shots: 5%
    1. This is obviously just a quick gut-level guess; I generally think AI risk is pretty intractable and hard to tip the scales on even though it’s super important, but I guess warning shots may open the window for pretty drastic actions conditional on (1)-(3).
       

Multiplying these all together gives me 0.66%, which might sound low but seems pretty high in my book as far as making a difference on AI risk is concerned.
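A minimal sketch of that multiplication (treating the four estimates above as a simple chain of rough conditional probabilities):

```python
# Chain of rough conditional estimates from the breakdown above.
p_warning_shot = 0.75      # 1. >=1 warning shot happens
p_good_plan = 0.50         # 2. a good plan is ready to react to it
p_enough_influence = 0.35  # 3. enough influence to get the plan executed
p_tips_scales = 0.05       # 4. the plan tips the scales from doom to flourishing

p_total = p_warning_shot * p_good_plan * p_enough_influence * p_tips_scales
print(f"{p_total:.2%}")  # ~0.66%, matching the figure above
```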

I totally sympathize with and share the despair that many people feel about our governments' inadequacy to make the right decisions on AI, or even far easier issues like covid-19.

What I don't understand is why this isn't paired with a greater enthusiasm for supporting governance innovation/experimentation, in the hopes of finding better institutional structures that COULD have a fighting chance to make good decisions about AI.

Obviously "fix governance" is a long-term project and AI might be a near-term problem. But I still think the idea of improving institutional decision-making could be a big help in scenarios where AI takes longer than expected or government reform happens quicker than expected. In EA, "improving institutional decisionmaking" has come to mean incremental attempts to influence existing institutions by, eg, passing weaksauce "future generations" climate bills. What I think EA should be doing much more is supporting experiments with radical Dath-Ilan-style institutions (charter cities, liquid democracy, futarchy, etc) in a decentralized hits-based way, and hoping that the successful experiments spread and help improve governance (ie, getting many countries to adopt prediction markets and then futarchy) in time to be helpful for AI.

I've written much more about this in my prize-winning entry to the Future of Life Institute's "AI worldbuilding competition" (which prominently features a "warning shot" that helps catalyze action, in a near-future where governance has already been improved by partial adoption of Dath-Ilan-style institutions), and I'd be happy to talk about this more with interested folks: https://www.lesswrong.com/posts/qo2hqf2ha7rfgCdjY/a-bridge-to-dath-ilan-improved-governance-on-the-critical

Metaculus was created by EAs. Manifold Market was also partly funded by EA money.

What EA money goes currently into "passing weaksauce "future generations" climate bills"?

There is some ambient support for Phil-Tetlock-style forecasting stuff like Metaculus, and some ambient support for prediction markets, definitely. But the vision here tends to be limited, mostly focused on "let's get better forecasting done on EA relevant questions/topics", not "scale up prediction markets until they are the primary way that society answers important questions in many fields".

There isn't huge effort going into future generations bills from within EA (the most notable post is complaining about them, not advocating them! https://forum.effectivealtruism.org/posts/TSZHvG7eGdmXCGhgS/concerns-with-the-wellbeing-of-future-generations-bill-1 ), although a lot of lefty- and climate-oriented EAs like them. But what I meant by that comment is just that EA has interpreted "improving institutional decisionmaking" to mean seeking influence within existing institutions, while I think there should be a second pillar of the cause area devoted to piloting totally new ideas in governance.

As an example of another idea that I think should get more EA attention and funding, Charter Cities have sometimes received an unduly chilly reception on the Forum (https://forum.effectivealtruism.org/posts/EpaSZWQkAy9apupoD/intervention-report-charter-cities), miscategorized as merely a neartermist economic-growth-boosting intervention, whereas charter city advocates are often most excited about their potential for experimental improvements in governance and for leading to more "governance competition" among nations.

It was heartening to see the list of focus areas of the FTX Future Fund -- they seem more interested in institution design and progress-studies-esque ideas than the rest of the EA ecosystem, which I think is great.

I said things like: if you can't get the world to coordinate on banning gain-of-function research, in the wake of a trillions-of-dollars tens-of-millions-of-lives pandemic "warning shot", then you're not going to get coordination in the much harder case of AI research.

To be clear, I largely agree with you, but I don't think you've really steel-manned (or at least accurately modeled) the government's decision-making process.

We do have an example of a past scenario where:

  • a new technology of enormous, potentially world-ending impact was first publicized/predicted in science fiction
  • a scientist actually realized the technology was near-future feasible, and convinced others
  • western governments actually listened to said scientists
  • instead of coordinating on a global ban of the technology, they fast-tracked its development

The tech, of course, was nuclear weapons; the sci-fi was "The World Set Free" by H.G. Wells; the first advocate scientist was Szilard, but nobody listened until he recruited Einstein.

So if we apply that historical lesson to AI risk ... the failure (so far) seems twofold:

  • failure on the "convince a majority of the super-high-status experts" front
  • and perhaps that's good! Because the predictable reaction is tech acceleration, not coordination on deceleration

AGI is coup-complete.

The problem with banning risky gain-of-function research seems to be that, surprisingly, most people don't understand that it's risky, and although there's a general feeling that COVID-19 may have originated in the lab at Wuhan, we don't have definitive evidence.

So the people who are trusted on this are incentivized to say that it's safe, and there also seems to be something of a coverup going on.

I suspect that if there was some breakthrough that allowed us to find strong technical evidence of a lab origin, there would be significant action on this.

But, alas, we're trapped in a situation where lack of evidence stalls any effort to seriously look for evidence.

(Which is then subject to a bunch of the usual skepticism that applies to arguments of the form "surely my political party will become popular, claim power, and implement policies I like".)

In my experience these types of arguments are actually anti-signals. 

Incidentally, has anyone considered the possibility of raising funds for an independent organization dedicated to discovering the true origin of covid-19? I mean a serious org with dozens of qualified people for up to a decade.

It seems like an odd thing to do but the expected value of proving a lab origin for covid-19 is extremely high, perhaps half a percentage point x-risk reduction or something.

The expected value of that is infinitesimal, both in general and for x-risk reduction in particular. People who prefer political reasoning (so, the supermajority) will not trust it, people who don't think COVID was an important thing except in how people reacted to it (like me) won't care, and most people who both find COVID important (or a sign of anything important) and actually prefer logical reasoning have already given it a lot of thought and found out that the bottleneck is data that China will not release anytime soon.

[-]Mau
  1. Perhaps the sorts of government interventions needed to make AI go well are not all that large, and not that precise.

I confess I don't really understand this view.

Specifically for the sub-claim that "literal global cooperation" is unnecessary, I think a common element of people's views is that: the semiconductor supply chain has chokepoints in a few countries, so action from just these few governments can shape what is done with AI everywhere (in a certain range of time).

Banning gain-of-function research would be a mistake. What would be recklessly foolish is incentivising governments to decide what avenues of research are recklessly foolish. The fact that governments haven't prohibited it in a bout of panic (not even China, which otherwise did a lot of panicky things) is a testament to their abilities, not to an inability to react to warning shots.

The reaction seems consistent if people (in government) believe no warning shot was fired. AFAIK the official reading is that we experienced a zoonosis, so banning gain of function research would go against that narrative. It seems true to me that this should be seen as a warning shot, but smallpox and ebola could have prompted this discussion as well and also failed to be seen as a warning shot. 

My guess is in the case of AI warning shots there will also be some other alternative explanations like "Oh, the problem was just that this company's CEO was evil, nothing more general about AI systems".

I agree; that seems to be a significant risk. If we are lucky enough to get AI warning shots, it seems prudent to think about how to ensure they are recognized for what they are. This is a problem I haven't given much thought to before.
 

But I find it encouraging to think that we can use warning shots in other fields to understand the dynamics of how such events are interpreted. As of now, I don't think AI warning shots would change much, but I would add this potential for learning as a counter-argument. I think this is analogous to the argument "EAs will get better at influencing the government over time" from another comment.

The reaction seems consistent if people (in government) believe no warning shot was fired. AFAIK the official reading is that we experienced a zoonosis, so banning gain of function research would go against that narrative.

Governments are also largely neglecting vaccine tech/pipeline investments, which protect against zoonotic viruses, not just engineered ones.

But also, the conceptual gap between 'a virus that was maybe a lab leak, maybe not' and 'a virus that was a lab leak' is much smaller than the gap between the sort of AI systems we're likely to get a 'warning shot' from (if the warning shot is early enough to matter) and misaligned superintelligent squiggle maximizers. So if the government can't make the conceptual leap in the easy case, it's even less likely to make it in the hard case.

It seems true to me that this should be seen as a warning shot, but smallpox and ebola could have prompted this discussion as well and also failed to be seen as a warning shot. 

If there were other warning shots in addition to this one, that's even worse! We're already playing in Easy Mode here.

If there were other warning shots in addition to this one, that's even worse! We're already playing in Easy Mode here.


Playing devil's advocate: if the government isn't aware that the game is on, it doesn't matter that it's on easy mode - its performance is likely to be poor regardless of the game's difficulty. 

I agree with the post's sentiment that warning shots would currently not do much good. But I am, as of now, still somewhat hopeful that the bottleneck is getting the government to see and target a problem, not the government's ability to act on an identified issue.  

I think this is probably true; I would assign something like a 20% chance of some kind of government action in response to AI aimed at reducing x-risk, and maybe a 5-10% chance that it is effective enough to meaningfully reduce risk.  That being said, 5-10% is a lot, particularly if you are extremely doomy.  As such, I think it is still a major part of the strategic landscape even if it is unlikely.

[-]mic

Has EA invested much into banning gain-of-function research? I've heard about Alvea and 1DaySooner, but not about any EA projects aimed at gain-of-function. Perhaps the relevant efforts aren't publicly known, but I wouldn't be shocked if more person-hours have been invested in EA community building in the past two years (for example) than in banning gain-of-function research.

Has EA invested much into banning gain-of-function research?

If it hasn't, shouldn't that negatively update us on how EA policy investment for AI will go?

[In the sense that this seems like a slam dunk policy to me from where I sit, and if the policy landscape is such that it and things like it are not worth trying, then probably policy can't deliver the wins we need in the much harder AI space.]

[-]Mau

An earlier comment seems to make a good case that there's already more community investment in AI policy, and another earlier thread points out that the content in brackets doesn't seem to involve a good model of policy tractability.

There was already a moratorium on funding GoF research in 2014 after an uproar in 2011, which was not renewed when it expired. There was a Senate bill in 2021 to make the moratorium permanent (and, I think, more far-reaching, in that institutions that did any such research were ineligible for federal funding, i.e. much more like a ban on doing it at all than simply deciding not to fund those projects) that, as far as I can tell, stalled out. I don't think this policy ask was anywhere near as crazy as the AI policy asks that we would need to make the AGI transition survivable!

It sounds like you're arguing "look, if your sense of easy and hard is miscalibrated, you can't reason by saying 'if they can't do easy things, then they can't do hard things'," which seems like a reasonable criticism on logical grounds but not probabilistic ones. Surely not being able to do things that seem easy is evidence that one's not able to do things that seem hard?

[-]Mau

I agree it's some evidence, but that's a much weaker claim than "probably policy can't deliver the wins we need."

Obviously, governments don't believe in autonomous AI risk, only in the risk that AI can be used to invent more powerful weapons. 

In the government's case, that doubt may come from their experience that vastly expensive complex systems are always maximally dysfunctional, and require massive teams of human experts to accomplish a well-defined but difficult task.