Neglectedness is an unusually legible metric, and it can massively increase marginal impact. So acute awareness of neglectedness when allocating effort should solve most failures to address every possible point of intervention. Assessing tractability/goodness/importance poses different puzzles for every hypothetical intervention, and studying those puzzles can itself be a project. Neglectedness is more straightforward: it's a lower-hanging strategic fruit, and a reason not to skip assessing tractability/goodness/importance for things nobody is working on, to not dismiss them out of hand for that reason alone.
Events are already set for catastrophe, they must be steered along some course they would not naturally go. [...]
Are you confident in the success of this plan? No, that is the wrong question, we are not limited to a single plan. Are you certain that this plan will be enough, that we need essay no others? Asked in such fashion, the question answers itself. The path leading to disaster must be averted along every possible point of intervention.
— Professor Quirrell (competent, despite other issues), HPMOR chapter 92
This post is a quickly-written service-post, an attempt to lay out a basic point of strategy regarding decreasing existential risk from AGI.
By default, AGI will kill everyone. The group of people trying to stop that from happening should seriously attend to all plausible points of intervention.
In this context, a point of intervention is some element of the world—such as an event, a research group, or an ideology—which could substantively contribute to leading humanity to extinction through AGI. A point of intervention isn't an action; it doesn't say what to do. It just says: Here's some place along the path leading to disaster, where there might be useful levers we could pull to stop the flow towards disaster.
Before going on, I'll briefly say: Don't do bad unethical things.
Just because we should attend to every point of intervention does not mean we should carry out every act of intervention! E.g. don't be an ad-hominem dick to people, whether in private or in public. In general, if you're about to do that thing, and you know perfectly well that if you thought about it for three minutes you'd see that almost everyone would tell you it's a really, really bad thing to do, then you should probably not do that thing. And if you still want to do it, then you should probably first try talking to several people who you trust (and who you haven't strongly pre-selected to be people egging you on to do that thing).
Someone was telling me about their somewhat-solitary efforts to get the government apparatus of France to notice AGI x-risk and maybe do something about it, and to not be too swayed by influences telling them to ignore those concerns. They expressed being unsure whether these efforts would matter much; people in the policy space tend to think of the US and China as the two players that really matter.
I argued to them that actually those efforts are pretty high-value. Leaving aside tractability (IDK) and neglectedness (yes) and goodness (probably, though there's always the worry of stimulating R&D investment), I wanted to argue for importance.
In basketball, there's a defensive mode called "full-court press". That's where you pressure the team with offensive possession of the ball everywhere on the court, trying to regain possession of the ball before the offensive team gets close enough to the basket to score. This contrasts with half-court press, where you basically let the opposing team take the ball to the half of court with your basket, and concentrate your defenses there.
Full-court press has the disadvantage of allocating some defensive resources away from the home side of the court. Thus, you can be more vulnerable if the opposing team gets near your basket. Also, full-court press is simply more expensive—the defending team has to run around much more, and gives up the advantage of clustering where they know the opposing team has to go (near the scoring basket).
But, full-court press is a good way to spend more resources to get better outcomes. You make them pass the ball more, giving them more chances to mess up, often producing turnovers. You make them run around more, which tires them out.
Likewise, intervening at every point along the path leading to AGI disaster is a broad strategy that demands higher costs and risks pulling some resources away from the most important points; but it may also provide more, less-correlated opportunities to block the flow towards disaster.
Suppose that there are 5 events that might occur, and if all of them occur, something really bad happens; on the other hand, if one of the events does not occur, then the really bad thing does not happen. Suppose each event will occur with probability 0.9.
First of all, how likely is the really bad thing to happen? One answer would be 0.9^5 ≈ 0.59, i.e. there's a roughly 60% chance of it happening. However, this answer is falling prey to the three multi-stage fallacies. You can't conclude that the bad result is only medium-likely, just because you made a list of events that all have to happen.
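For concreteness, the naive arithmetic can be checked in a couple of lines (a sketch, using the five independent events with probability 0.9 from the setup above):

```python
# Five independent prerequisite events, each with probability 0.9;
# the really bad thing happens only if all five occur.
p_event = 0.9
n_events = 5
p_bad = p_event ** n_events
print(round(p_bad, 2))  # 0.59, i.e. roughly a 60% chance
```

Of course, the point of the multi-stage fallacies is precisely that this kind of multiplication can be misleading, which the next sections unpack.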
But here's a different question: How unlikely can you make the really bad event?
Of course, the answer depends a lot on the specific structure of these events. But here's one kind of structure:
Suppose each of the five prerequisite events A_i is itself a disjunction. In other words, A_i happens if any one of several subevents B_{i,1}, ..., B_{i,k} happens. I think this is often the case in the real world. E.g., several different funders might fund some research group; several different research groups might succeed at some goal; several different technologies might provide workable components that enable some subsequent technology; etc. Furthermore, it's often the case that it's easy to intervene on some of the B_{i,j} but not on others. In this case, it's easy to decrease the probability of A_i somewhat, but not easy to decrease it a lot. You prevent some of the B_{i,j} that are easy to prevent, and then you call it a day.
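To see why a disjunctive A_i is hard to suppress, here's a quick sketch (the three subevents with probability 0.6 each are hypothetical numbers, just for illustration):

```python
def p_disjunction(qs):
    """Probability that at least one of several independent subevents occurs."""
    p_none = 1.0
    for q in qs:
        p_none *= 1.0 - q  # all subevents must fail for A_i to fail
    return 1.0 - p_none

# A_i backed by three independent subevents B_{i,j}, each with probability 0.6:
print(round(p_disjunction([0.6, 0.6, 0.6]), 3))  # 0.936: A_i is very likely
print(round(p_disjunction([0.6, 0.6]), 3))       # 0.84: preventing one B_{i,j} helps only somewhat
print(round(p_disjunction([0.6]), 3))            # 0.6: even preventing two leaves A_i likely
```

Redundant backing makes each prevented subevent buy only a modest decrease in P(A_i).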
Does it help to somewhat decrease the probability of each A_i, without greatly decreasing any of them? Yep! As long as the probability of the conjunction is fairly high, the marginal value of decreasing the absolute probability of each of the A_i is roughly the same.
Anyway, basically the point of this subsection is that it helps to intervene along many channels / at many points, if there are multiple conjunctive prerequisites to disaster.
Note that [multiple conjunctive prerequisites to disaster] is logically equivalent to [multiple disjunctive stoppers of disaster]. For example, it's plausible to me that either an international ban on AGI research, or a strong social norm in academia against AGI research, would very substantially slow down AGI research.
One of the three multi-stage fallacies is forgetting to use conditional probabilities for the prerequisites to disaster. For example, conditional on [we can't convince major nations to ban AGI research], it's probably much less likely that [we can convince AGI researchers to stop doing that].
The outlook of "every point of intervention" says to consider this correlation as a pointer to some deeper element of the world. In this example, the source of correlation might be [the same funder is paying both groups to continue AGI research], or [AGI risk doesn't feel real to people], or [people are secretly nihilistic and don't actually have hope in a deeply satisfying shared human future], or many other possibilities. (These are therefore not necessarily temporal points of intervention—events in a sequence—but generally, elements that could be intervened on.)
Focus on the places where you feel shocked everyone's dropping the ball.
This perspective doesn't help much with prioritization. But, generally, it says we should competently do a diverse portfolio of strategies. On the margin, I think competent newcomers should be directed towards the possibility of starting a new / neglected effort, rather than joining an existing one (though of course many existing efforts have important talent gaps).
There's lots of meaning everywhere. There may or may not be any good plans to decrease x-risk, but there are many things to try that are pretty worth-it and quite neglected.
If someone is deferring to you about strategy, consider helping them keep in mind that there are many approaches.
This doesn't mean "do random stuff and hope it decreases x-risk".
Each actor (person, research group, funder) has to specialize in one or two points of intervention.
Non-top-priority interventions are neglected.
(These are phrased as actions, but points of intervention can be backed out of them.)
- International treaties to stop AGI research
- Convincing elements of the AI researcher pipeline (e.g. student programs for AI / ML research) to stop
- General social milieu / norms
  - Illustration: A professor doing cutting-edge domain-nonspecific AI research should read in the paper that this is very bad; students should stop signing up for their classes and research; there should be student protests; colleagues should shun them; administration should pressure them to switch areas; and then their government funding should get cut. It should feel like what happens if you announce "Hey everyone! I'm going to go work in advertising for a bunch of money, convincing teenagers to get addicted to cigarettes!", but more so.
- Making more very smart people, especially via reprogenetics
- Healing society; decreasing the pressure / incentive to do AGI research
- Legibilizing AGI x-risk
- Group rationality, e.g. better debates