Probabilistic Reasoning, World Modeling
It's dangerous to calculate p(doom) alone! Take this.

by WillPetillo
27th Jul 2025
6 min read
Comments (4, sorted by top scoring)
Vladimir_Nesov:

Estimates of doom should clarify their stance on permanent disempowerment where originally-humans are never allowed to advance to match the level of development (and influence) of originally-AI superintelligences (or given the relevant resources to get there). Is it doom or is it non-doom? It could be the bulk of the probability, so that's an important ambiguity.

(Defining the meaning of a thing we are talking about should happen prior to considering whether some fact about it is true, or what credence to give some event that involves it. If we decide that something is true, but still don't know what it is that's true, what exactly are we doing?)

anaguma:

I think this depends a lot on what the state and scope of disempowerment looks like. E.g. if humans get the solar system and the AI gets the rest of the lightcone that seems like a good outcome to me.

Vladimir_Nesov:

This illustrates my point: one star out of 4 billion galaxies solidly falls under Bostrom's definition of existential risk:

Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

And yet many people consider it a non-doom outcome. So many worlds that fall to x-risk but have survivors are considered non-doom worlds, which should make it clear that "doom" and x-risk shouldn't be treated as interchangeable. The optimists claiming 10-20% of P(doom) might be at the same time implicitly expecting 90-98% P(x-risk) according to such definitions.

WillPetillo:

This is a useful application of a probability map!  If an important term has multiple competing definitions, create nodes for all of them, link the ones you consider important to a central p(doom) node (assuming you are interested in that concept), and let other people disagree with your assessment, but with a clearer sense of what they specifically disagree about.


(Link to calculator described in post: https://will9371.itch.io/probability-calculator)

On the Correct Usage of p(doom)

On its face, p(doom) is a bit of a weird concept. On the one hand, it speaks to global trends regarding future technology, involves a lot of unknowns, and thus necessarily draws heavily from intuition. On the other hand, it is a percentage, and as such carries at least two significant digits. Intuition is not this precise; it is more accurately reflected with the sort of phrases used for evidentiary thresholds in the US legal system: "beyond a reasonable doubt," "clear and convincing," "more likely than not," "some evidence," and so on. Using percentages to express such sentiments thus involves pretending to have more precise knowledge than one actually has.

That said, there are some contexts where using numbers allows one to express things that simply wouldn't be possible with more intuitive expressions of subjective probability. For example, suppose I believe that AI will bring about the end of human existence IF AGI is achieved AND alignment is not solved AND rogue AIs cannot be kept under control, and I believe AGI will be achieved IF scaling continues OR there are new breakthroughs OR an entirely new paradigm emerges and surpasses deep learning, and so on. In such a logical structure, the top-level claims depend on the lower-level claims (plus other factors I haven't thought to consider), and the lower-level claims eventually bottom out in predictions that are, if not precise, at least far more precise than p(doom). If those low-level claims are expressed numerically, I can then use math to relate all of my claims together.
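To make the arithmetic concrete, here is a worked example in Python. All of the probabilities are invented purely for illustration, and the claims are assumed to be independent events, which is a simplification:

```python
# All numbers below are invented purely for illustration.
p_scaling = 0.5       # scaling continues
p_breakthrough = 0.3  # new breakthroughs occur
p_paradigm = 0.2      # a new paradigm surpasses deep learning

# OR of (assumed) independent events: at least one route to AGI succeeds.
p_agi = 1 - (1 - p_scaling) * (1 - p_breakthrough) * (1 - p_paradigm)

p_unaligned = 0.4     # alignment is not solved
p_uncontrolled = 0.5  # rogue AIs cannot be kept under control

# AND of (assumed) independent events: all three conditions hold.
p_doom = p_agi * p_unaligned * p_uncontrolled

print(round(p_agi, 3), round(p_doom, 3))  # → 0.72 0.144
```

Changing any leaf value and re-running immediately shows the effect on the top-level number, which is the point of externalizing the structure.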

Externalizing this web of relationships into a software application, I could then play with the model to see the full implications of updating any given belief in response to new evidence. I could also compare my mental model to that of someone I disagree with, allowing us to zero in on the exact point of disagreement immediately, rather than having to engage in extensive rhetorical sparring to (hopefully) find a crux.

Probability Calculator – How it Works

My Probability Calculator aims to be just such an application. The user can create a set of nodes, connecting them in the style of a mind-map. Nodes have a label, description, and a probability. For leaf nodes—those with no inputs—the user sets probabilities directly via a simple slider. Branch nodes calculate probabilities based on their inputs. The set of inputs to a node can be combined via AND, OR, and NOT operations, and combining arbitrary numbers of nodes together allows for the full expressiveness of Boolean logic as applied to probability.
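As a sketch of this node model, here is a minimal reconstruction in Python. This is my own rendering, not the calculator's actual code, and it assumes a branch node's inputs are independent events (the post does not spell out the app's exact math):

```python
from math import prod

class Node:
    """A leaf holds a slider-set probability; a branch combines its inputs.

    The combination rules below assume independent inputs, which is an
    assumption of this sketch, not something the post specifies.
    """
    def __init__(self, p=None, op=None, inputs=()):
        self.p_leaf = p          # used only by leaf nodes
        self.op = op             # "AND", "OR", or "NOT" for branch nodes
        self.inputs = list(inputs)

    @property
    def p(self):
        if self.op is None:                      # leaf: slider value
            return self.p_leaf
        ps = [n.p for n in self.inputs]
        if self.op == "AND":                     # all inputs hold
            return prod(ps)
        if self.op == "OR":                      # at least one input holds
            return 1 - prod(1 - q for q in ps)
        if self.op == "NOT":                     # single-input negation
            return 1 - ps[0]
        raise ValueError(f"unknown op: {self.op}")

# Hypothetical mini-map: AGI if scaling OR a breakthrough; doom if AGI AND
# alignment fails.
scaling, breakthrough, unaligned = Node(p=0.5), Node(p=0.3), Node(p=0.4)
agi = Node(op="OR", inputs=[scaling, breakthrough])
doom = Node(op="AND", inputs=[agi, unaligned])
```

Because branch probabilities are computed lazily from their inputs, moving any leaf slider propagates through the whole graph, which is the behavior the mind-map UI exposes.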

Often, one does not know the full range of inputs for a node. For example, if I were to break down the inputs for "scaling continues", I would probably leave out many significant considerations, including requirements (the existence of which makes the proposition less likely) and alternative paths (the existence of which makes the proposition more likely). If I were to calculate the probability of this node simply as the combination of the factors I can think of, and set the probabilities of those nodes independently of their downstream consequences (as I should, since the alternative would be the very definition of motivated reasoning!), then I could very easily end up with a probability I don't agree with in my own model—not because of an unexpected implication, but simply because of an inability to fully articulate my beliefs.

To enable expression of the unknown unknowns in one's mental model, I have added "skepticism" and "leniency" bias sliders to calculated nodes. Skepticism bias acts like an additional input node that combines with the other inputs by a hidden AND condition. By default, skepticism bias is set to 1, which causes it to have no impact—it's like adding an additional requirement, but one that is certain to happen—and lowering it decreases the node's probability. Leniency bias acts like an additional OR condition. By default, it is set to 0, which causes it to have no impact, and increasing it increases the node's probability.
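Rendered as code, the two sliders might behave as follows. This is again a sketch; the exact order in which the app applies the two biases is my assumption:

```python
def apply_biases(p_calculated, skepticism=1.0, leniency=0.0):
    """Adjust a branch node's calculated probability for unknown unknowns.

    Skepticism acts like a hidden AND input with probability `skepticism`
    (default 1.0 = no effect; lowering it lowers the result). Leniency acts
    like a hidden OR input with probability `leniency` (default 0.0 = no
    effect; raising it raises the result). Independence is assumed, as in
    the rest of this sketch.
    """
    p = p_calculated * skepticism          # hidden AND condition
    return 1 - (1 - p) * (1 - leniency)    # hidden OR condition

# Defaults leave the calculated value unchanged:
assert abs(apply_biases(0.6) - 0.6) < 1e-12
```

A skepticism of 0.5 halves a node's probability, while a leniency of 0.5 moves it halfway toward certainty, matching the AND/OR semantics of ordinary inputs.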

Probability Calculation is a Tool for Expression, Not Evidence

One might object here that the ability to arbitrarily apply biases is just "fudging the numbers." Actually, it's worse than that, since one can set the leaf node probabilities directly to whatever one wants and the branch nodes are calculated entirely from these leaves plus freely-set biases—it's just guessing all the way down!

Such an objection, however, misses the point. The purpose of this application is not to cloak one's beliefs in the authority of math, but to spell out the process of one's reasoning (and its implications) for all to see. As a tool for expression, it must be arbitrary—capable of asserting any conclusion, no matter how absurd—since any guidance on what can be said would be a form of instilling my own biases, making it about my beliefs rather than those of the user. That said, evidence is certainly important and has its place in this application. Each node contains a description field where one can explain the reasoning behind one's connections, biases, and base probabilities and anticipate the obvious objections.

Directions for Ongoing Development (Gamification)

I am a game developer by trade, and I would like to leverage that skillset here. A gamified version of this application might look as follows:

  • Each "level" is a detailed model where the player has read-only access to all of the nodes (they can look at the model but not change it).
  • The player has a set of policies they can choose, with explanations for each. Some policies will be "for/against" decisions, others will be allocations of resources.
  • Each decision maps to applying bias to a calculated node or modifying the probability of a leaf node.
  • The player makes all of their decisions, then clicks "Calculate", which applies all of the modifications and the model recalculates.
  • The player has a "target" p(doom). The player wins if the resulting p(doom) from their choices is less than the target; they lose if it is over. The impact the player has from their policy decisions is specified by whoever creates the model, and this impact is hinted at by the policy descriptions (including links to external sources).
  • Target p(doom) is set such that a minority of choices lead to victory so that the player must learn the author's mental model. The optimism/pessimism of the author is captured in the % target value (e.g. default 95% & target 50% would be very pessimistic; default 1% & target 0.01% would be relatively optimistic).
  • For some levels, it might make sense to require beating several threshold values, not just an all-inclusive p(doom).
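The core loop of such a level might look something like the following sketch. It is entirely hypothetical: the names are invented, and each policy's effect is reduced to a multiplicative factor on p(doom) as a stand-in for the real bias and leaf-node modifications:

```python
def evaluate_level(default_pdoom, target_pdoom, chosen_effects):
    """Apply the player's policy effects, then check the win condition.

    `chosen_effects` stands in for the bias/leaf modifications each policy
    decision maps to; here each one simply scales p(doom).
    """
    p = default_pdoom
    for factor in chosen_effects:
        p *= factor
    return p, p < target_pdoom  # win iff final p(doom) is under the target

# A pessimistic level: default 95%, target 50%.
final_p, won = evaluate_level(0.95, 0.50, [0.8, 0.7, 0.9])
```

In the actual design, the recalculation would run through the full node graph rather than a flat product, so interactions between policies fall out of the model's structure.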

API Extensions

I have already built an API that allows one to interact with save files without going through the UI. This makes it possible to build out a map in my application and then connect it to other tools or applications outside of the Unity Game Engine. The code for this is not public, so direct message me if you are interested in connecting Probability Calculator to something else.

Deliberately Excluded Features

One cannot simply apply a positive or negative offset to a calculated node. The idea here is that every node of the graph should represent a belief that comes from somewhere: calculated beliefs come from dependent beliefs, biases come from unknown unknowns that one is unable to articulate as nodes, and leaf nodes are at best grounded in reality and at worst a clear admission of a guess, all of which can be explained in node descriptions. To apply a direct offset would be to say that one simply doesn't like the implications of one's other beliefs, without giving a clear hint as to why. Further, when a node calculates an outcome one doesn't like, that is an opportunity to figure out how the rest of one's model needs to change to match one's conclusions...or perhaps to rethink those conclusions. A limiting consequence is that Probability Calculator doesn't have a way of expressing correlations: X can be caused by Y, Y can be caused by X, X and Y can be caused by some unknown, unnamed variable, and so on, but every connection must come with a causal explanation.

There are other features that I might add later, but have chosen to deprioritize, such as:

  • Probability distributions to express a range of outcomes.
  • Aggregations of expert opinions.
  • Artwork and better UI.

I have held off on these because I want to be sure the application is actually useful to people as-is before investing time in extending it further. So if you want to see these or other developments, build out some maps and share them!  If you come up with something good, let me know and I'll include it in the presets.

Unexpected Challenges

I have had a hard time building out useful maps. This is not because of the UI or the system's expressive capability, but because expressing complex ideas through a causal map is a deeply non-natural—even anti-memetic—way of thinking. I want to fudge numbers, express correlations without bothering with a causal story, and hide the sources of my opinions behind compelling narratives. Probability Calculator doesn't let me take any of these shortcuts. But that seems like a good thing, so perhaps you are more up for this sort of challenge than I am.