by [anonymous]
26th June 2012


What's the best way to allocate resources towards decreasing existential risk?

Let's say we have n different risks. Define the functions r1(s1), r2(s2), ... , rn(sn), one for each risk. Each takes the amount (in USD) spent mitigating that particular risk as input and outputs the probability that the corresponding catastrophic scenario occurs. The probability of survival ("win probability") as a function of the spending levels is given by

w(s1, s2, ... , sn) = (1 - r1(s1)) * (1 - r2(s2)) * ... * (1 - rn(sn))

In other words, we win if none of the catastrophic scenarios occur.
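As a quick illustration (the helper below is my own sketch, not code from the post), the win probability is just the product of the per-risk survival probabilities:

```python
def win_probability(risk_probs):
    """Probability that none of the catastrophic scenarios occur.

    `risk_probs` holds r1(s1), ..., rn(sn) evaluated at the chosen
    spending levels.
    """
    w = 1.0
    for r in risk_probs:
        w *= 1.0 - r
    return w

# Example: risks with a 20%, 10%, and 10% chance of occurring.
print(f"{win_probability([0.2, 0.1, 0.1]):.3f}")  # 0.648
```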

What are the characteristics of r1(s1), r2(s2), ... , rn(sn)?

Well, obviously their outputs have to be between 0 and 1, since they return probabilities. And, assuming that spending money on a risk actually helps, their outputs will go down as their inputs go up.

Furthermore: for any given existential risk, we could imagine a number of different interventions. Each intervention decreases the risk by a certain amount, and each has an associated price tag. The "efficient" interventions are those with an especially high ratio of risk reduction to cost. Assuming the people doing the existential risk reduction spending are good at picking efficient interventions, the most efficient interventions get funded first, so each additional dollar buys less risk reduction than the one before; we should expect diminishing returns as spending grows.

And: no risk's probability can go below zero. So if we assume the value of a risk function decreases monotonically, it must level off toward some limit at or above zero, which again suggests that heavily diminishing returns will be reached at some point.
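To make the shape of such a curve concrete, here is an illustrative functional form I'm assuming for this sketch (not one the post specifies): exponential decay from a baseline risk toward an irreducible floor, which bakes in diminishing returns.

```python
import math

def risk(spend, baseline=0.2, floor=0.01, scale=1e9):
    """Illustrative risk curve (an assumption, not something the post specifies):
    the probability of catastrophe decays exponentially from `baseline`
    toward an irreducible `floor` as spending (in USD) increases."""
    return floor + (baseline - floor) * math.exp(-spend / scale)

# Each extra billion dollars buys less risk reduction than the last.
for s in [0, 1e9, 2e9, 3e9]:
    print(f"${s:,.0f} spent -> risk {risk(s):.3f}")
```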

What are the characteristics of w(s1, s2, ... , sn)?

It's pretty easy to prove that if you can decrease any one risk by a constant amount, you are best off applying the decrease to the catastrophe that has the highest probability of occurring: subtracting a fixed amount d from ri multiplies w by a factor of 1 + d/(1 - ri), and that factor is largest when ri is largest.

A quick example calculation: say you have three risks which you are 80%, 90%, and 90% likely to avert, for a baseline win probability of 64.8%. Reducing the last risk by 10 percentage points (eliminating it entirely) yields a win probability of 72%. Reducing the first risk by 10 percentage points instead yields 72.9%, almost a full percentage point better.
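For concreteness, here is that arithmetic checked in code (a throwaway sketch; the variable names are mine):

```python
from math import prod

# Occurrence probabilities: 20%, 10%, 10% (i.e. 80%, 90%, 90% likely to avert).
base          = [0.2, 0.1, 0.1]
improve_last  = [0.2, 0.1, 0.0]   # eliminate one of the 10% risks entirely
improve_first = [0.1, 0.1, 0.1]   # shave 10 points off the 20% risk

def win(risks):
    return prod(1 - r for r in risks)

print(f"{win(base):.3f}")            # 0.648
print(f"{win(improve_last):.3f}")    # 0.720
print(f"{win(improve_first):.3f}")   # 0.729
```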

Of course, cutting the probability of a risk that's 20% likely to occur in half is more realistic than completely eliminating a risk that's 10% likely to occur, so this consideration favors working on the 20% risk even more strongly.

[Diagram: a two-risk model, plotting the surface z = (1 - x)(1 - y). At the yellow point, the green risk is more or less under control, so a step of constant size is best spent reducing the blue risk.]

Uncertainty

All this analysis has assumed that risk probabilities can be estimated perfectly. But clearly this isn't the case.

If our prior is the same for every risk, and the available evidence doesn't give us much reason to move away from that prior, then this analysis suggests that a large pool of existential risk reduction funds should be divided roughly evenly between the different risks.
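A hedged sketch of why (the identical exponential-decay curves, budget, and step size below are all my own assumptions): if every risk responds to spending in the same way, greedily directing each marginal chunk of money wherever it raises the win probability most ends up splitting a large budget roughly evenly.

```python
import math

def risk(spend, baseline=0.2, scale=1e9):
    """Assumed identical response curve for every risk."""
    return baseline * math.exp(-spend / scale)

def win_probability(spending):
    w = 1.0
    for s in spending:
        w *= 1.0 - risk(s)
    return w

n_risks, budget, step = 3, 6e9, 1e8
spending = [0.0] * n_risks

# Greedy allocation: each chunk of money goes wherever it helps w the most.
for _ in range(int(budget / step)):
    candidates = []
    for i in range(n_risks):
        trial = list(spending)
        trial[i] += step
        candidates.append((win_probability(trial), i))
    _, best = max(candidates)
    spending[best] += step

print([f"{s:.2e}" for s in spending])                 # roughly even, about $2e9 each
print(f"win probability: {win_probability(spending):.3f}")
```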

AI

In my mind, the best argument that existential risks are significant is the Great Filter. In a nutshell: our galaxy has tons of planets and it's been around for a long time, but Earth was uncolonized when we evolved. So clearly there are one or more very substantial sticking points between "lifeless rock" and "resource-gobbling spacefaring civilization". Maybe these sticking points are mostly behind us, maybe they are mostly ahead of us.

Katja Grace argues that the Great Filter is ahead of us, and that AI is unlikely to be responsible for it.

On the other hand, Nick Bostrom has argued (not sure where) that a superintelligent FAI, if created, could trivially deal with existential risks. It may be that shooting for FAI is a better strategy than combating the Great Filter directly, especially if it would count as an unconventional strategy compared to what previous civilizations have tried.

Potential problems with my model

My model ignores interventions that could decrease the probability of multiple existential risks at once, e.g. funding a popular book about the Great Filter or a TV show that raises the sanity waterline.

I also implicitly assumed that interventions operate independently of one another at the single-risk level.

In general, it may be more useful to think about existential risk on the level of possible interventions than on the level of potential catastrophes.

Comments

In Probabilistic Graphical Modeling, the win probability you describe is called a Noisy OR.