Ben Pace and I (Richard Ngo) recently did a public double crux at the Berkeley REACH on how valuable it is for people to go into AI policy and strategy work: I was optimistic and Ben was pessimistic. During the actual event, we didn't come anywhere near to finding a double crux on that issue. But after a lot of subsequent discussion, we've come up with some more general cruxes about where impact comes from.

I found Ben's model of how to have impact very interesting, and so in this post I've tried to explain it, along with my disagreements. Ben liked the goal of writing up a rough summary of our positions and having further discussion in the comments, so while he edited it somewhat, he by no means thinks that it’s a perfect argument, and it’s not what he’d write if he spent 10 hours on it. He endorsed the wording of the cruxes as broadly accurate.

(During the double crux, we also discussed how the heavy-tailed worldview applies to community building, but decided to focus this post on the object level of what impact looks like.)

Note from Ben: “I am not an expert in policy, and have not put more than about 20-30 hours of thought into it total as a career path. But, as I recently heard Robin Hanson say, there’s a common situation that looks like this: some people have a shiny idea that they think about a great deal and work through the details of, while folks in other areas are skeptical of it given their particular models of how the world works. Even though the skeptics have less detail, it can be useful for them to publicly say precisely why they’re skeptical.

In this case I’m often skeptical when folks tell me they’re working to reduce x-risk by focusing on policy. Folks doing policy work in AI might be right, and I might be wrong, but it seemed like a good use of time to start a discussion with Richard about how I was thinking about it and what would change my mind. If the following discussion causes me to change my mind on this question, I’ll be really super happy with it.”

Ben's model: Life in a heavy-tailed world

A heavy-tailed distribution is one where the probability of extreme outcomes doesn’t drop very rapidly, so outliers dominate the expectation of the distribution. Owen Cotton-Barratt has written a brief explanation of the idea here. Examples of heavy-tailed distributions include the Pareto distribution and the log-normal distribution; other phrases people use to point at this concept include ‘power laws’ (see Zero to One) and ‘black swans’ (see the recent SSC book review). Wealth follows a heavy-tailed distribution: many people are clustered relatively near the median, but the wealthiest are millions of times further away. Human height, weight and running speed are not heavy-tailed; there is no man as tall as 100 people.
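As a rough illustration (a minimal sketch of my own, with made-up parameters, not from the original post): draw samples from a thin-tailed, height-like distribution and from a heavy-tailed, wealth-like one, and compare how much of the total comes from the top 1% of draws.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Thin-tailed: height-like, roughly normal. Heavy-tailed: wealth-like, log-normal.
heights = rng.normal(170, 10, n)                     # cm
wealth = rng.lognormal(mean=10, sigma=2.5, size=n)   # arbitrary units

for name, xs in [("height", heights), ("wealth", wealth)]:
    top_share = np.sort(xs)[-n // 100:].sum() / xs.sum()  # share of the total held by the top 1%
    print(f"{name}: top 1% of draws account for {top_share:.0%} of the total")
```

With these made-up parameters, the top 1% of heights account for barely more than 1% of total height, while the top 1% of wealth-like draws account for roughly half of the total; that concentration is what "outliers dominate the expectation" means in practice.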

There are three key claims that make up Ben's view.

The first claim is that, since the industrial revolution, we live in a world where the impact that small groups can have is much more heavy-tailed than in the past.

  • People can affect incredibly large numbers of other people worldwide. The Internet is an example of a revolutionary development which allows this to happen very quickly.
  • Startups are becoming unicorns unprecedentedly quickly, and their valuations are very heavily skewed.
  • The impact of global health interventions is heavy-tailed. So is funding raised by Effective Altruism - two donors have contributed more money than everyone else combined.
  • Google and Wikipedia qualitatively changed how people access knowledge; people don't need to argue about verifiable facts any more.
  • Facebook qualitatively changed how people interact with each other (e.g. FB events is a crucial tool for most local EA groups), and can swing elections.
  • It's not just that we got more extreme versions of the same things, but rather that we can get unforeseen types of outcomes.
  • The books HPMOR and Superintelligence, each the effort of an individual or small group, led many people to change their plans towards more effective ends.

The second claim is that you should put significant effort into re-orienting yourself to use high-variance strategies.

  • Ben thinks that recommending strategies which are safe and low-risk is insane when you're drawing from a heavy-tailed distribution. You want everyone to be taking high-variance strategies.
    • This is only true if the tails are long to the right and not to the left, which seems true to Ben. Most projects end up not pulling any useful levers and achieving nothing, but a few pull crucial levers and solve open problems or increase capacity for coordination.
  • Your intuitions were built for the ancestral environment, where you didn’t need to think about coordinating humans on the scale of millions or billions, yet you still rely heavily on those intuitions when navigating the modern environment.
  • Scope insensitivity, framing effects, taboo tradeoffs, and risk aversion are the key things here. You need to train your S1 to understand the math.
    • By default, you’re not going to spend enough effort finding or executing high-variance strategies.
  • We're still only 20 years into the internet era. Things keep changing qualitatively, but Ben feels like everyone keeps adjusting to each new technology as if it had always been this way.
  • Ben: “My straw model of the vast majority of people’s attitudes is: I guess Facebook and Twitter are just things now. I won’t spend time thinking about whether I could build a platform as successful as those two but optimised better for e.g. intellectual progress or social coordination - basically not just money.”
  • Ben: “I do note that never in history has change been happening so quickly, so it makes sense that people’s intuitions are off.”
  • While many institutions have been redesigned to fit the internet, Ben feels like almost nobody is trying to improve institutions like science on a large scale, and that this is clear low-hanging altruistic fruit.
  • The Open Philanthropy Project has gone through this process of updating away from safe, low-risk bets with GiveWell, toward hits-based giving, which is an example of this kind of move.

The third claim is that AI policy is not a good place either to get big wins or to learn the relevant mindset.

  • Ben: “On a first glance, governments, politics and policy looks like the sort of place where I would not expect to find highly exploitable strategies, nor a place that will teach me the sorts of thinking that will help me find them in future.”
  • People in policy spend a lot of time thinking about how to influence governments. But governments are generally too conventional and slow to reap the benefits of weird actions with extreme outcomes.
  • Working in policy doesn't cultivate the right type of thinking. You're usually in a conventional governmental (or academic) environment, stuck inside the system, getting seduced by local incentive gradients and prestige hierarchies. You often need to spend a long time working your way to positions of actual importance in the government, which leaves you prone to value drift or over-specialisation in the wrong thing.
    • At the very least, you have to operate on the local incentives as well as someone who actually cares about them, which can be damaging to one’s ability to think clearly.
  • Political landscapes are not the sort of environment where people can easily ignore the local social incentives to focus on long-term, global goals. Short term thinking (election cycles, media coverage, etc) is not the sort of thinking that lets you build a new institution over 10 years or more.
    • Ben: “When I’ve talked to senior political people, I’ve often heard things of the sort ‘We were working on a big strategy to improve infrastructure / international aid / tech policy etc, but then suddenly public approval changed and then we couldn’t make headway / our party wasn’t in power / etc.’ which makes me think long term planning is strongly disincentivised.”
  • One lesson of a heavy-tailed world is that signals that you’re taking safe bets are anti-signals of value. Someone following a standard academic track and saying “Yeah, I’m gonna get a master's in public policy” sounds fine, sensible, and safe, and therefore cannot be an active sign that they will do something a million times more impactful than the median.

The above is not a full, gears-level analysis of how to find and exploit a heavy tail, because almost all of the work here lies in identifying the particular strategy. Nevertheless, because of the considerations above, Ben thinks that talented, agenty and rational people should be able in many cases to identify places to win, and then execute those plans, and that this is much less the case in policy.

Richard's model: Business (mostly) as usual

I disagree with Ben on all three points above, to varying degrees.

On the first point, I agree that the distribution of success has become much more heavy-tailed since the industrial revolution. However, I think the distribution of success is often very different from the distribution of impact, because of replacement effects. If Facebook hadn't become the leading social network, then MySpace would have. If not Google, then Yahoo. If not Newton, then Leibniz (and if Newton, then Leibniz anyway). Probably the alternatives would have been somewhat worse, but not significantly so (and if they were, different competitors would have come along). The distinguishing trait of modernity is that even a small difference in quality can lead to a huge difference in earnings, via network effects and global markets. But that isn't particularly interesting from an x-risk perspective, because money isn't anywhere near being our main bottleneck.

You might think that since Facebook has billions of users, their executives are a small group with a huge amount of power, but I claim that they're much more constrained by competitive pressures than they seem. Their success depends on the loyalty of their users, but the bigger they are, the easier it is for them to seem untrustworthy. They also need to be particularly careful since antitrust cases have busted the dominance of several massive tech companies before. (While they could swing a few elections before being heavily punished, I don’t think this is unique to the internet age - a small cabal of newspaper owners could probably have done the same centuries ago). Similarly, I think the founders of Wikipedia actually had fairly little counterfactual impact, and currently have fairly little power, because they're reliant on editors who are committed to impartiality.

What we should be more interested in is cases where small groups didn't just ride a trend, but actually created or significantly boosted it. Even in those cases, though, there's a big difference between success and impact. Lots of people have become very rich from shuffling around financial products or ad space in novel ways. But if we look at the last fifty years overall, they're far from dominated by extreme transformative events - in fact, Western societies have changed very little in most ways. Apart from IT, our technology remains roughly the same, our physical surroundings are pretty similar, and our standards of living have stayed flat or even dropped slightly. (This is a version of Tyler Cowen and Peter Thiel's views; for a better articulation, I recommend The Great Stagnation or The Complacent Class). Well, isn't IT enough to make up for that? I think it will be eventually, as AI develops, but right now most of the time spent on the internet is wasted. I don't think current IT has had much of an effect by standard metrics of labour productivity, for example.

Should you pivot?

Ben might claim that this is because few people have been optimising hard for positive impact using high-variance strategies. While I agree to some extent, I also think that there are pretty strong incentives to have impact regardless. We're in the sort of startup economy where scale comes first and monetisation comes second, and so entrepreneurs already strive to create products which influence millions of people even when there’s no clear way to profit from them. And entrepreneurs are definitely no strangers to high-variance strategies, so I expect most approaches to large-scale influence to already have been tried.

On the other hand, I do think that reducing existential risk is an area where a small group of people are managing to have a large influence, a claim which seems to contrast with the assertion above. I’m not entirely sure how to resolve this tension, but I’ve been thinking lately about an analogy from finance. Here's Tyler Cowen:

I see a lot of money managers, so there’s Ray Dalio at Bridgewater. He saw one basic point about real interest rates, made billions off of that over a great run. Now it’s not obvious he and his team knew any better than anyone else.
Peter Lynch, he had fantastic insights into consumer products. Use stuff, see how you like it, buy that stock. He believed that in an age when consumer product stocks were taking off.
Warren Buffett, a certain kind of value investing. Worked great for a while, no big success, a lot of big failures in recent times.

The analogy isn’t perfect, but the idea I want to extract is something like: once you’ve identified a winning strategy or idea, you can achieve great things by exploiting it - but this shouldn’t be taken as strong evidence that you can do exceptional things in general. For example, having a certain type of personality and being a fan of science fiction is very useful in identifying x-risk as a priority, but not very useful in founding a successful startup. Similarly, being a philosopher is very useful in identifying that helping the global poor is morally important, but not very useful in figuring out how to solve systemic poverty.

From this mindset, instead of looking for big wins like “improving intellectual coordination”, we should be looking for things which are easy conditional on existential risk actually being important, and conditional on the particular skillsets of x-risk reduction advocates. Another way of thinking about this is as a distinction between high-impact goals and high-variance strategies: once you’ve identified a high-impact goal, you can pursue it without using high-variance strategies. Startup X may have a crazy new business idea, but they probably shouldn't execute it in crazy new ways. Actually, their best bet is likely to be joining Y Combinator, getting a bunch of VC funding, and following Paul Graham's standard advice. Similarly, reducing x-risk is a crazy new idea for how to improve the world, but it's pretty plausible that we should pursue it in ways similar to those which other successful movements used. Here are some standard things that have historically been very helpful for changing the world:

  • dedicated activists
  • good research
  • money
  • public support
  • political influence

My prior says that all of these things matter, and that most big wins will be due to direct effects on these things. The last two are the ones which we’re disproportionately lacking; I’m more optimistic about the latter for a variety of reasons.

AI policy is a particularly good place to have a large impact.

Here's a general argument: governments are very big levers, because of their scale and ability to apply coercion. A new law can be a black swan all by itself. When I think of really massive wins over the past half-century, I think about the eradication of smallpox and polio, the development of space technology, and the development of the internet. All of these relied on and were driven by governments. Then, of course, there are the massive declines in poverty across Asia in particular. It's difficult to assign credit for this, since it's so tied up with globalisation, but to the extent that any small group was responsible, it was Asian governments and the policies of Deng Xiaoping, Lee Kuan Yew, Rajiv Gandhi, etc.

You might agree that governments do important things, but think that influencing them is very difficult. Firstly, that's true for most black swans, so I don't think that should make policy work much less promising even from Ben's perspective. But secondly, from the outside view, our chances are pretty good. We're a movement comprising many very competent, clever and committed people. We've got the sort of backing that makes policymakers take people seriously: we're affiliated with leading universities, tech companies, and public figures. It's likely that a number of EAs at the best universities already have friends who will end up in top government positions. We have enough money to do extensive lobbying, if that's judged a good idea. Also, we're correct, which usually helps. The main advantage we're missing is widespread popular support, but I don't model this as being crucial for issues where what's needed is targeted interventions which "pull the rope sideways". (We're also missing knowledge about what those interventions should be, but that makes policy research even more valuable).

Here's a more specific route to impact: in a few decades (assuming long timelines and slow takeoff) AIs that are less generally intelligent than humans will be causing political and economic shockwaves, whether that's via mass unemployment, enabling large-scale security breaches, designing more destructive weapons, psychological manipulation, or something even less predictable. At this point, governments will panic and AI policy advisors will have real influence. If competent and aligned people were the obvious choice for those positions, that'd be fantastic. If those people had spent several decades researching what interventions would be most valuable, that'd be even better.

This perspective is inspired by Milton Friedman, who argued that the way to create large-scale change is by nurturing ideas which will be seized upon in a crisis.

Only a crisis - actual or perceived - produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the possible.

The major influence of the Institute of Economic Affairs on Thatcher’s policies is an example of this strategy’s success. An advantage of this approach is that it can be implemented by clusters of like-minded people collaborating with each other; for that reason, I'm not so worried about policy work cultivating the wrong mindset (I'd be more worried on this front if policy researchers were very widely spread out).

Another fairly specific route to impact: several major AI research labs would likely act on suggestions for coordinating to make AI safer, if we had any. Right now I don’t think we do, and so research into that could have a big multiplier. If a government ends up running a major AI lab (which seems pretty likely conditional on long timelines) then they may also end up following this advice, via the effect described in the paragraph above.

Underlying generators of this disagreement

More generally, Ben and I disagree on where the bottleneck to AI safety is. I think that finding a technical solution is probable, but that most solutions would still require careful oversight, which may or may not happen (maybe 50-50). Ben thinks that finding a technical solution is improbable, but that if it's found it'll probably be implemented well. I also have more credence on long timelines and slow takeoffs than he does. I think that these disagreements affect our views on the importance of influencing governments in particular.

We also have differing views on what the x-risk reduction community should look like. I favour a broader, more diverse community; Ben favours a narrower, more committed community. I don't want to discuss this extensively here, but I will point out that there are many people who are much better at working within a system than outside it - people who would do well in AI safety PhDs, but couldn't just teach themselves to do good research from scratch like Nate Soares did; brilliant yet absent-minded mathematicians; people who could run an excellent policy research group but not an excellent startup. I think it's valuable for such people (amongst which I include myself), to have a "default" path to impact, even at the cost of reducing the pressure to be entrepreneurial or agenty. I think this is pretty undeniable when it comes to technical research, and cross-applies straightforwardly to policy research and advocacy.

Ben and I agree that going into policy is much more valuable if you're thinking very strategically and out of the "out of the box" box than if you're not. Given this mindset, there will probably turn out to be valuable non-standard things which you can do.

Do note that this essay is intrinsically skewed since I haven't portrayed Ben's arguments in full fidelity and have spent many more words arguing my side. Also note that, despite being skeptical about some of Ben's points, I think his overall view is important and interesting and more people should be thinking along similar lines.

Thanks to Anjali Gopal for comments on drafts.

Comments

Worth looking at the effectiveness of lobbying to estimate how hard it is to influence policy. From Wikipedia (emphasis mine):

An estimate from 2007 reported that more than 15,000 federal lobbyists were based in Washington, DC; another estimate from 2011 suggested that the count of registered lobbyists who have actually lobbied was closer to 12,000. While numbers like these suggest that lobbying is a widespread activity, most accounts suggest that the Washington lobbying industry is an exclusive one run by a few well-connected firms and players, with serious barriers to entry for firms wanting to get into the lobbying business, since it requires them to have been "roaming the halls of Congress for years and years."

The general consensus view is that lobbying generally works overall in achieving sought-after results for clients, particularly since it has become so prevalent with substantial and growing budgets, although there are dissenting views. A study by the investment-research firm Strategas which was cited in The Economist and the Washington Post compared the 50 firms that spent the most on lobbying relative to their assets, and compared their financial performance against that of the S&P 500 in the stock market; the study concluded that spending on lobbying was a "spectacular investment" yielding "blistering" returns comparable to a high-flying hedge fund, even despite the financial downturn of the past few years. A 2009 study by University of Kansas professor Raquel Meyer Alexander suggested that lobbying brought a substantial return on investment. A 2011 meta-analysis of previous research findings found a positive correlation between corporate political activity and firm performance. There are numerous reports that the National Rifle Association or NRA successfully influenced 45 senators to block a proposed rule to regulate assault weapons, despite strong public support for gun control. The NRA spends heavily to influence gun policy; it gives $3 million annually to the re-election campaigns of congresspersons directly, and gives additional money to PACs and others to influence legislation indirectly, according to the BBC in 2016.

There is widespread agreement that a key ingredient in effective lobbying is money. This view is shared by players in the lobbying industry.

From this page:

Lobbying is widespread throughout the U.S. political system; previous research puts lobbying expenditures at the federal level at approximately five times those of political action committee (PAC) campaign contributions. For instance, in 2012, organized interest groups spent $3.5 billion annually lobbying the federal government, compared to approximately $1.55 billion in campaign contributions from PACs and other organizations over the two-year 2011-2012 election cycle. Corporations and trade associations comprise the vast majority of lobbying expenditures by interest groups — more than 84% at the federal level — compared with issue-ideology membership groups, which makes up only 2% of these expenditures. While lobbying is presumed to be influential, the actual rate of firms engaging in lobbying is relatively low — approximately 10% of all firms.

Overall this suggests that (a) in aggregate, lobbying has a large impact on US policy, and (b) sponsoring 1% of the lobbying activity in the US would take about 120 average lobbyists and $35M/year (i.e. 1% of the ~12,000 registered lobbyists and of the ~$3.5 billion spent annually). Since AI is one issue among many, 1% is much more lobbying activity than AI policy advocates could currently make use of, but this might change when AI becomes more important in the economy. Obviously, spending this much on lobbying before having a much clearer picture of what policies would actually help would be a mistake.

A study by the investment-research firm Strategas which was cited in The Economist and the Washington Post compared the 50 firms that spent the most on lobbying relative to their assets, and compared their financial performance against that of the S&P 500 in the stock market; the study concluded that spending on lobbying was a "spectacular investment" yielding "blistering" returns comparable to a high-flying hedge fund, even despite the financial downturn of the past few years.

I think I read this research while I was a Strategas client; if I'm remembering it correctly it was extremely poorly done. Short backtest (just a few years), garden of forking paths, etc. Most sell-side research is not epistemically rigorous and Strategas is not one of the better firms. I would not put much weight on this research.

There is widespread agreement that a key ingredient in effective lobbying is money. This view is shared by players in the lobbying industry.

Well of course lobbyists would say they're worth the money!

Worth noting that lobbying isn't just bribery - it's also about being able to connect lawmakers to experts (or, if you're less ethical, "experts"). Yes, you need to do the "real work" of having policy positions and reading proposed legislation etc. But you also need to invest money and effort into communication, networking, and generally becoming a Schelling point - experts need to know who you are so they can become part of your talent pool, and lawmakers need to know that you have expertise on some set of topics. This is probably the best excuse for all those fancy dinners we associate with lobbying firms - it's not bribery, it's advertising. Meanwhile the lobbying firm needs to know who is receptive to them, and try to work with those people.

However, I think the distribution of success is often very different from the distribution of impact, because of replacement effects. If Facebook hadn't become the leading social network, then MySpace would have. If not Google, then Yahoo. If not Newton, then Leibniz (and if Newton, then Leibniz anyway).

I think this is less true for startups than for scientific discoveries, because of bad Nash equilibria stemming from founder effects. The objective which Google is maximising might not be concave. It might have many peaks, and which one you reach might be determined quite arbitrarily. Yet the peaks might have very different consequences when you have a billion users.

For lack of a concrete example... suppose a webapp W uses feature x, and this influences which audience uses the app. Then, once W has scaled and depends on that audience for substantial profit, it can't easily change x. (It might be that changing x to y wouldn't decrease profit, but just not increase it.) Yet, had it initially used y instead of x, it could have grown just as big, but with a different audience. Moreover, because of network effects and returns to scale, it might not be possible for a rival company to build their own webapp which is basically the same thing but with y instead.

I'm not sure how much to believe in this without concrete examples (the ones which come to mind are mostly pretty trivial, like Yahoo having a cluttered homepage and Google having a minimalist one, or MacOS being based on Unix).

Maybe Twitter is an example? I can easily picture it having a very different format. Still, I'm not particularly swayed by that.

Strong upvote for addressing what I feel is a neglected subject.

It feels like it would be helpful to state explicitly that working towards AI alignment and working against the development of misaligned AIs are not necessarily the same. In casual discussions on the subject we usually point to a military or a multinational corporation as the kind of actor that would build an AI lab, drive towards AGI, and then unleash the poorly-aligned result. The policy/strategy question goes directly to their behavior.

It seems like the time we have available to get this right is heavily influenced by how these other actors make decisions, and currently there is no particular pressure on them to make good ones. I'd like to toss out a few other potential benefits of a strategy/policy echelon:

1. It would serve as a contact surface for people who are already in strategy and policy to engage with AI safety. Currently they have to use the same personal-interest method as the rest of us.

2. Aside from the institutional examples Richard provided, I point to Jean Monnet and the formation of the precursors to the European Union. Individual people are in a position to have very large influence if they have a framework ready when the opportunity presents itself.

3. Consider the risk of being unprepared if AI risk should come to the forefront of public consciousness and the government decides to act. The converse of Ben's example, where politicians abandon projects when the public loses interest, is that public outrage can drive the government into hasty action. For example, if the Russians/Chinese deploy a next-generation narrow AI in their weapons systems, or an American military AI test goes badly wrong, or there are casualties from a commercial implementation of narrow AI, the government may move to regulate AI research and funding, and there is no reason to expect that such a law would be any better than the computer crime laws we have currently. I would go as far as to say that AI is the best candidate for a new Sputnik Moment, which seems like it would drive the incentives heavily in a direction we do not want.

Promoted to curated: I am strongly in favor of more people doing public Double-Crux like things, and am particularly excited about people producing writeups like these afterwards. I think this writeup is quite clear, well-written and does a pretty good job at trying to bridge two different world-views in a way that doesn't feel strawmanny.

Curating this a bit late because Ben had hoped he could find the time to write a proper response/reaction, but time turned out to be a bit too scarce, so I will curate it without Ben's response for now.

Suppose your goal is not to maximise an objective, but just to cross some threshold. This is plausibly the situation with existential risk (e.g. "maximise probability of okay outcome"). Then, if you're above the threshold, you want to minimise variance, whereas if you're below it, you want to maximise variance. (See this for a simple example of this strategy applied to a game.) If Richard believes we are currently above the x-risk threshold and Ben believes we are below it, this might be a simple crux.

There's something to this, but it's not the whole story, because increasing probability of survival is good no matter what the current level is. Perhaps if you model decreasing existential risk as becoming exponentially more difficult (e.g. going from 32% risk to 16% risk is as difficult as going from 16% to 8%) and with the possibility of accidental increases (e.g. you're trying to go from 32 to 16 but there's some probability you go to 64 instead) then the current expectation for the level of risk will affect whether you take high-variance actions or not.
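A minimal sketch of the threshold point made above (my own illustration, with hypothetical payoffs, not from either commenter): a risky strategy with lower expected value can still give a higher chance of crossing the threshold when you start well below it, while the safe strategy wins when you start just below it.

```python
import random

def p_success(start, threshold, strategy, trials=100_000):
    """Estimate the probability of ending at or above the threshold."""
    wins = 0
    for _ in range(trials):
        if strategy == "safe":
            outcome = start + 1  # guaranteed small gain (higher expected value)
        else:
            # "risky": big gain with small probability, small loss otherwise
            # (lower expected value, higher variance)
            outcome = start + (10 if random.random() < 0.05 else -1)
        wins += outcome >= threshold
    return wins / trials

for start in (2, 9):  # far below vs. just below a threshold of 10
    print(f"start={start}: "
          f"safe={p_success(start, 10, 'safe'):.2f} "
          f"risky={p_success(start, 10, 'risky'):.2f}")
```

Starting at 2, only the risky strategy has any chance of reaching 10; starting at 9, the safe strategy reaches it with certainty while the risky one usually falls short.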

Another fairly specific route to impact: several major AI research labs would likely act on suggestions for coordinating to make AI safer, if we had any. Right now I don’t think we do, and so research into that could have a big multiplier.

Strongly agreed. I think that how major AI actors (primarily firms) govern their AI projects and interact with each other is a difficult problem, and providing advice to such actors is the sort of thing that I'd expect to be a positive black swan.

From Ben:

This is only true if the tails are long to the right and not to the left, which seems true to Ben. Most projects tend to end up not pulling any useful levers whatever and just do nothing, but a few pull crucial levers and solve open problems or increase capacity for coordination.

For what it's worth, I disagree with this; I think we have lots of examples of small groups of concerned passionate people changing the world for the worse (generally through unforeseen consequences, but sometimes through consequences that were foreseeable at the time) and lots of sleeping dragons that should not be awoken.

[Existential risk is sort of an example of a 'heavy tail to the left,' but this requires a bit of abuse of notation.]

From Richard:

Then, of course, there are the massive declines in poverty across Asia in particular. It's difficult to assign credit for this, since it's so tied up with globalisation, but to the extent that any small group was responsible, it was Asian governments and the policies of Deng Xiaoping, Lee Kuan Yew, Rajiv Gandhi, etc.

Tying in with the last point, I don't think it's the case that those specific people made good policy so much as unmade bad policy, and communism seems to me like an example of a left tail policy.

I generally agree with the idea of there being long tails to the left. Revolutions are a classic example - and, more generally, any small group of ideologically polarised people taking extreme actions. Environmentalist groups blocking genetically engineered crops might be one example; global warming skepticism another; perhaps also OpenAI.

I'm not sure about the "sleeping dragons", though, since I can't think of many cases where small groups created technologies that counterfactually wouldn't have happened (or even would have happened in safer ways).

I'm not sure about the "sleeping dragons", though, since I can't think of many cases where small groups created technologies that counterfactually wouldn't have happened (or even would have happened in safer ways).

For technology this is possible; here we get into arguments about replacability and inventions that are "after their time" (that is, could feasibly have been built much earlier, but no one thought of them). Most such examples that I'm aware of involve particular disasters, where no one had really cared to solve problem X until problem X manifested in a way that hurt some inventor.

For policy / direct action, I think this is clearer; plausibly WWI wouldn't have happened (or would have taken a different form) if the Black Hand hadn't existed. There must have been many declarations of adversarial intent that turned out quite poorly for the speaker, since it put them on some enemy's radar before they were ready.