So many things in your life
That you're bound to regret
Why didn't I do that?
Why didn't I do this?
So many chances you lost
That you'll never forget

         —Meat Loaf

 

(En español 🇲🇽: https://tinyurl.com/decisiones-monumentales)

I'm going to help you make difficult choices in the face of strong emotions and uncertainty. Decisions like whether or not you should vaccinate your child against Covid-19. I'm not going to make the choice for you, but what I'll teach you will even come in handy when you're deciding where to live or whether to buy that used car. I call these decisions monumental because (a) undoing them is either impossible or very costly, and (b) you don't make them often. The potential for regret is enormous; they are decisions you don't want to get wrong. A look at the divorce rate, however, shows that we are often not great at making monumental decisions. Today, you are advised to vaccinate your child against Covid-19 (or get vaccinated yourself) and you think, this is a monumental decision, it has a lot of potential for regret! And then you make your first mistake.

Daniel Kahneman, who won a Nobel prize, studied decision-making for many, many years in controlled environments and noticed that everyone has two very different ways of making choices: a very fast, gut-feeling, autonomous way that is very good at, say, helping you drive a scooter through Mumbai; and a slow, reason-based way that is very good at helping you solve "computationally difficult" problems, like math.[1]

Here's a really cool experiment to see the latter in action: take the person nearest to you and ask them to look you directly in the eyes. Then, tell them to multiply 213 by 4 without looking away. Their pupils will immediately dilate. This is a sign that they have engaged the decision-making mechanism that is best suited for complex problem solving. Yes, it's kinda weird that it is physically observable.

Both ways of making decisions, which Kahneman calls "system 1" and "system 2", are essential for us. If an airplane loses control during takeoff and the pilots just sit there discussing whether the cause was a mechanical problem, a software bug, or a goose sucked into the engine (because knowing would help them tackle the problem best), it could cost lives. They need to react instantly, which is why they spend years training their "system 1"—the one that reacts immediately, intuitively—so it doesn't make the wrong automatic choice.

Conversely, if a judge makes an instant decision just by looking at a person, without evaluating any arguments, an innocent person could lose their freedom for years[2]. Judges need to not react instantly, but rather make an assessment based on complex interconnections of information, which is why they [should] spend years training their "system 2"—the one that can do calculations—so it doesn't have biases.

So, which of your two decision-making mechanisms should you use when deciding to get yourself or your child vaccinated against Covid-19? Which one will lead you to the fewest regrets? Definitely not the one that jumps to conclusions fast.

The problem you face is that there is a lot of information out there: everyone has had two years to make up their mind and share it. The situation has changed multiple times: new vaccines have been released regularly, recovered people have become immune, the virus has mutated and undone everything, policies have been reversed, over and over. Worse, there is contradictory information: vaccines kill people, vaccines save lives, Covid-19 is like the flu, Covid-19 is not at all like the flu, Bill Gates wants to reduce global population, Bill Gates has saved 122 million lives. It's a decision that requires a slow, analytical approach, but if our "system 2"—the rational one—is overwhelmed, we will fall back to our "system 1", which is, by nature, very ill-suited to this problem.

I'm going to show you a way to avoid having your "system 2" overwhelmed and apply it to the subject of vaccinating children, but you can use it for any other complex decision; I've used it to short-list the schools my children should attend, for example. It's called a "decision tree". I have used both pen and paper and spreadsheet software depending on the complexity of my decision trees.

Parallel worlds

A decision tree starts at the end of the decision-making process: the outcome. You start like this:

To reach your conclusion, you have to evaluate two parallel worlds: one in which you vaccinated your child, and one in which you didn't, so you extend your tree like this:

Now you need to think about all the things that could happen in each parallel world, but only if they matter to your decision. You probably won't base your choice on the price of onions, but you almost certainly care about whether a vaccine can cause complications, or whether your child can avoid catching Covid-19. Every time you find something that matters, you will split that world-line into more parallel worlds where different things happen. Your tree will start to look like this:

Oh yeah, in any world where you are still alive, you can still either catch or elude Covid-19:
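If you find code easier to read than diagrams, the tree described so far can be sketched as a nested structure. This is only a sketch of the tree's shape; the branch names mirror the text, and likelihoods and costs come later:

```python
# Each key is a parallel world; an empty dict marks a leaf, i.e. a world
# we haven't split any further yet.
tree = {
    "vaccinate": {
        "develop complications": {
            "die": {},
            "do not die": {"catch Covid": {}, "elude Covid": {}},
        },
        "no complications": {"catch Covid": {}, "elude Covid": {}},
    },
    "do not vaccinate": {"catch Covid": {}, "elude Covid": {}},
}
```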

This part will be pretty hard: your mind will come up with a lot of different things that could happen, you'll put them in the tree, then read them again and realize they are poorly worded or not really relevant, and you'll have to shift a lot of branches around. Even so, or precisely because it's been hard, this model is already helping you structure your thoughts, allowing your "system 2" to take the lead.

We haven't actually made any decisions, however, so before adding more complexity to the model, let's make some assumptions and see how the model can help you go from "I have no idea where to start" to "I know exactly why I made this decision".

Collapsing all the things!

Now that you have a tree, we start moving from the tips back to the conclusion. Let's start with the longest branch: "Vaccinate -> Develop complications -> Do not die"; we had split the world into two possibilities there: "catch Covid" and "elude Covid". Time to make those assumptions I mentioned earlier; don't worry, you can refine them and make the model more complex later on.

Our assumptions are going to be about likelihood and cost. For this example, let's assume that if you are vaccinated, develop complications, and don't die, your chances of catching Covid-19 are still 100%. You can make different assumptions, or go and do research and stop calling them assumptions, if you prefer. I'm not giving sources, so I'm calling my numbers what they are: assumptions. I also assume the cost of catching Covid-19 afterwards is 600 shillings. You will probably use dollars or euros. This number is hard to figure out, so, again, you can make reasonable assumptions and question them later.

Of course, if the likelihood of catching Covid-19 is 100%, the likelihood of not catching it is 0.

Now that you know that, you can "roll up" the costs of both parallel worlds—the one in which your child catches Covid-19 and the one where she doesn't—to the parent world to figure out how much it costs to not die from vaccine complications.

Roll up? Parent world? I hear you. Remember that our goal is to reach a conclusion, but right now the tree represents a lot of worlds in which a lot of things happen differently, so we are going to "collapse" all of them, branch by branch, and the process to collapse parallel worlds is to multiply the likelihood of each happening by the cost if it happens. This is an old trick used by insurance companies and all the banks in the world to compare the cost of events that are expensive but almost never happen with those that are cheap but very likely to happen, like an insurer comparing the chances their customers will die by meteorite, which means disbursing a lot of money but doesn't happen very often, against the chances they will slip and require medical attention, which happens more often than you think. To collapse the leaves we just evaluated, we multiply 100% x 600 shillings and add that to 0% x 0 shillings, which is just 600 shillings in the end. We say that the cost of not dying from vaccine complications is 600 shillings (because, under my assumptions, your child will catch the virus at some point and it's going to cost you something, like maybe a week of earnings while you take care of her).
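The collapsing rule is nothing more than an expected-value calculation, so it fits in a couple of lines of code. A minimal sketch using the assumptions above (100% likelihood, 600 shillings):

```python
def collapse(worlds):
    """Collapse parallel worlds into a single expected cost:
    multiply each world's likelihood by its cost, then sum."""
    return sum(likelihood * cost for likelihood, cost in worlds)

# "Do not die from complications" splits into catching Covid-19
# (100% likely, 600 shillings) or eluding it (0% likely, 0 shillings).
cost_of_not_dying = collapse([(1.00, 600), (0.00, 0)])
print(cost_of_not_dying)  # 600.0
```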

Right, so our tree now looks like this:

Time to figure out likelihoods and costs for the worlds "Vaccinate -> Develop complications". I figure dying from vaccine complications in my child's case is very unlikely; without citing sources, I'm going to say it's 0.001%. That means the likelihood of not dying is 99.999%. If my child were a young woman taking AstraZeneca, those figures would look different. You can get your numbers from whatever source makes sense to you; that's the beauty of this process.

What about the cost of your child dying, however? Money can't pay for a life, after all. Nevertheless, our model requires us to compare on the basis of something, and money is the best way we have to represent costs, so I'm going to have to put an amount to the death of my child. I want to put "infinite", of course, but that won't help me make a decision. Instead, I can look at how much it would cost to cover funeral expenses and months of grieving. I can also look at the amounts people have received as compensation for the death of a child in legal procedures. It's a hard topic, but I'm sure you'll manage to put a number. I'll go for 2,000,000 shillings.

The tree now looks like this:

We collapse that pair of alternate worlds again:

 

This basically says "the tiny chance of something horrible happening is dwarfed by the almost certain likelihood that nothing horrible will happen".
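In numbers, collapsing "die" (0.001% likely, 2,000,000 shillings) against "do not die" (99.999% likely, the 600 shillings rolled up earlier) looks like this — all three figures are the assumptions from the text:

```python
# Collapse "Vaccinate -> Develop complications" into one expected cost.
p_die = 0.00001            # 0.001%, assumption from the text
cost_die = 2_000_000       # shillings, assumption from the text
cost_survive = 600         # rolled up from the Covid branch earlier

cost_complications = p_die * cost_die + (1 - p_die) * cost_survive
print(round(cost_complications, 3))  # 619.994
```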

You can now do the same for some of the other branches; I'm going to assume the cost and likelihood of catching Covid-19 is the same whether your child develops complications from the vaccine or not. In fact, I'm going to assume the chances of catching Covid-19 are the same no matter what. I will assume, however, that the costs of catching it while unvaccinated are somewhat higher. My tree now looks like this:

We're almost there! I've added my own assumptions about how likely it is for the child to develop complications from the vaccine (4 in 1,000,000); yours should be different. Let's collapse the last remaining branch so we are left only with two worlds, one in which you vaccinate your child, and one in which you don't:

We are finally there! The very last options have no likelihood associated with them because you will either vaccinate your child or you won't. There's no chance involved. What you have, however, are the collapsed costs summarizing everything that your model has considered as you compared parallel worlds in which something happened or didn't. All that is left is to compare the costs. In my example, the cost of not vaccinating the child is higher than the cost of vaccinating the child, so my conclusion is to vaccinate the child. I can defend that conclusion.
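Here is the whole rollup in one place, as a sketch. All figures are the assumptions from the text, except the cost of catching Covid-19 unvaccinated, where I've invented 1,200 shillings as a stand-in for "somewhat higher"; swap in your own numbers:

```python
# Rolling up the whole tree, leaves first. Every figure is an assumption.
COST_COVID_VACCINATED = 600      # shillings, from the text
COST_COVID_UNVACCINATED = 1_200  # invented stand-in for "somewhat higher"

# In every surviving world, catching Covid-19 is assumed 100% likely.
cost_after_surviving = 1.00 * COST_COVID_VACCINATED + 0.00 * 0

# "Develop complications" collapses die (0.001%) vs. survive (99.999%).
cost_complications = 0.00001 * 2_000_000 + 0.99999 * cost_after_surviving

# "Vaccinate" collapses complications (4 in 1,000,000) vs. none.
p_complications = 4 / 1_000_000
cost_vaccinate = (p_complications * cost_complications
                  + (1 - p_complications) * cost_after_surviving)

# "Do not vaccinate": catching Covid-19 is still assumed 100% likely.
cost_do_not_vaccinate = 1.00 * COST_COVID_UNVACCINATED

print(f"vaccinate:        {cost_vaccinate:.5f} shillings")
print(f"do not vaccinate: {cost_do_not_vaccinate:.5f} shillings")
```

Under these assumptions the "vaccinate" branch collapses to just over 600 shillings, well below the "do not vaccinate" branch; that comparison is what the conclusion rests on.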

That's it. The nice thing about a decision tree is that you can plug in different figures for costs and likelihood depending on your particular medical history, which sources of information you trust, where you live, and how the environment changes. Maybe you live in a country where the unvaccinated need to pay for tests before going anywhere, so that's going to increase costs in all the worlds where you are not vaccinated. Maybe you have a reliable source telling you there are cheap treatments for Covid-19 and your costs if your child gets infected are 0. Go wild. Here's a more elaborate example model:

Complications

As you build your own tree, you may find it hard to put in extra information. For example, if you have to pay for tests if you are not vaccinated, where does that extra cost go? Once you figure out how much you will pay for tests over the period you are considering, you can park the cost in the first "world" in which you will encounter it and add it to the costs that roll up to it:
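In code terms, parking a cost means adding a constant on top of whatever rolls up from that world's children. The 50-shilling test budget and 1,200-shilling Covid cost below are invented purely for illustration:

```python
# A world's total cost = its own fixed costs (e.g. mandatory tests in the
# unvaccinated worlds) plus the collapsed expected cost of its children.
TEST_COSTS = 50  # invented: total test costs over the period considered

children_cost = 1.00 * 1_200 + 0.00 * 0  # catch vs. elude Covid-19
cost_unvaccinated_world = TEST_COSTS + children_cost
print(cost_unvaccinated_world)  # 1250.0
```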

All my branches consist of two parallel worlds: one in which something happened, and one in which it didn't. It's sometimes hard to split things nicely like that, but I encourage you to make an effort because it will make all the calculations easier. If, however, you find yourself with more than two possibilities, just make sure that the probabilities still add up to 100%:
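When a world splits into more than two branches, a quick sanity check keeps you honest: the likelihoods must still add up to 100%. A sketch, with a made-up three-way split:

```python
def check_split(branches, tolerance=1e-9):
    """Verify that the likelihoods of a split add up to 100%."""
    total = sum(likelihood for likelihood, _cost in branches)
    if abs(total - 1.0) > tolerance:
        raise ValueError(f"likelihoods add up to {total}, not 100%")
    return True

# Made-up three-way split: mild case, severe case, no infection.
check_split([(0.70, 300), (0.05, 5_000), (0.25, 0)])
```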

See? But I encourage you to find a way to keep your tree flat even if it gets longer, so you make fewer mistakes:

What happens if instead of a cost you have a benefit? In that case, you can write it as a negative cost. Yes, if I give you 20 shillings you can say you had to pay me negative 20 shillings. Math doesn't care.

What if the model is giving you an answer you don't like? Well, it means your "system 1" and "system 2" are in conflict! The very first question I asked was which way of making decisions you want to use in this case, and the answer was "system 2", so you should go back and verify all your assumptions; maybe you forgot to consider some events. Talk to other people, especially those with very different opinions than yours. Check your sources for costs and likelihoods again. Convince yourself, in essence, that you have done everything you could to let your "system 2" make a decision. Then, trust yourself. Yes, it's paradoxical to trust yourself by not believing yourself, but, really, what you are saying is "I trust my detailed reasoning and thinking more than my instinctive gut reaction which is not good at making decisions that require me to reason and think." That makes sense, doesn't it?

Okay, one last thing. What if the model ends up giving you the wrong answer anyway? Well, it's a bit like saying that if you roll two dice, a 7 is the most likely outcome (because there are more ways for the dice to add up to 7 than to any other number) but you can still get two 6s, which is one of the most unlikely outcomes. If you had perfect information about everything ahead of time, you could predict what is going to happen and make a perfect choice every time, but that is not the case. This model will help you avoid gambling with an important matter, but it cannot guarantee the "right" result if you don't have the right information. Its sole purpose is to lead you to a decision you can justify to yourself and everyone else, to reduce the regret. Remember, in a casino, the house always makes more money than the gamblers (and poker players who consistently win do not trust their intuition but use a system).

Empower yourself even more

There are many tools to help your "system 2" take the lead; this is just one of them. I found inspiration in the absolutely fantastic online course on Model Thinking offered by the University of Michigan through Coursera. Take it (for free!) and it will blow your mind.

Disseminate, don't hoard

I own the copyright for this post, but I will license it to you under the Creative Commons Attribution 4.0 International License, for free! That means you can make modifications and post it elsewhere as long as you attribute this post somewhere in it. Translations to other languages are especially encouraged! If you think this introduction to decision trees has helped you make better choices, share your knowledge with those you care about, or with everyone.

To help you out, the diagrams above were written in PlantUML; it's pretty simple to understand and I'm sure you'll get the hang of it in minutes. You can find the complete diagram source code here, which lends itself to translation.

  1. He wrote a book about it: Thinking, Fast and Slow

  2. It's happened before: black people in white countries have historically received worse penalties and been incorrectly found guilty more often than their white peers.

Comments

How do you assign the probability that a child will develop complications from the vaccine that are permanent but not lethal? E.g. the child will be sterile.

I read studies (seriously, it's super time-consuming!) and consider the best evidence available. Because of the lag introduced by studies, sometimes hearsay is enough to put in a guesstimate. It's interesting to play around with the values, however, and see what magnitude change in a single one would lead to a change in the ultimate decision. Sometimes you find out it doesn't make a difference, and more precise information would be irrelevant, so you can move on.

Since there are so many possible applications for this model, it is a great help when trying to explain one's decisions to others as well. As in: how did I arrive at my conclusions, how much cost/benefit did I attribute to every step along the way, and why.

One small note on the final step of "vaccinate" and "not vaccinate": the total cost of the "vaccinate" branch has to be ≥ 600 shillings; one decimal point must have gotten lost somewhere in the calculation.

It is great for holding discussions when there are more people involved in a decision. 

As for the calculation, can you help me spot the mistake? I can't find it!

You have to replace the right term  by . This makes the sum equal to , or  if you keep only two digits after the decimal point. Still pretty close to 600 though :)

Thanks! I've updated the calculation and diagram.

Thank you for this post. I will re-use and reference it - I have big plans for this topic. I want to explore/expand the decision tree on the effect of multiple vaccinations (boosters). I feel multiple vaccinations will work like this: a single vaccination should work, and it's low risk. Two vaccinations: the probability of vaccine effectiveness decreases because they are no longer independent, but they are not entirely dependent on each other either. I have the equation in mind, but I am too far from medical professionals. More importantly, I think that while the probability of vaccine effectiveness decreases with each booster, the probabilities of the risk factors sum up; while the initial risks are small, the risks accumulate, and by the second booster they are not that small. Any thoughts? I am open to ideas and can be convinced one way or another. Small note: some diagrams are not visible on iPad.

I finally got around to fixing the diagrams. It wasn't an iPad-specific problem, just the way diagrams are "pasted" into the editor when copied directly from PlantUML... apparently, it's not the image that gets pasted, but the URL to a diagram rendered server-side which has a limited lifetime.

The whole point of using a model is to explain and predict without the sometimes prohibitive costs of not modelling, but it comes at the price of losing "resolution of reality". That loss is what leads to uncertainty. Understanding enough about the immune system to know how current vaccines operate in the body and how risks add up differently in different bodies (ecosystems, really) could take several generations of dedicated research... we've collectively been at it since before Pasteur, keep making amazing discoveries, and still can't provide really good answers. So I feel you will only get half-baked guesses in this forum and slightly better ones if you ask COVID experts.

I think in order to quantify risk-vs-number-of-vaccinations, we need to understand the type of risk itself and how the vaccine might have unintended effects. If we assume all of the unintended effects are a longer-than-expected presence of the mRNA (or other vector) and its derivatives, then the risk of noticeable adverse consequences doesn't really sum up, because any accumulation effects will be negligible. I.e. the amount of substance is low - 1 ng/kg body mass is the same as 2 ng/kg body mass. Relatively it's a lot, but it's not a lot if you consider the body a "resilient/tolerant" system.

Thank you for this formal (and fun!) method to guide (and illustrate/document) decision-making. I think writing it out would help me illustrate leaps or assumptions and come to better decisions. For instance, I often (in my head) zero out possibilities with extremely low probabilities out of hand, unless the costs are of a similar order of magnitude.

What if the model is giving you an answer you don't like? Well, it means your "system 1" and "system 2" are in conflict! The very first question I asked was which way of making decisions you want to use in this case, and the answer was "system 2", so you should go back and verify all your assumptions; maybe you forgot to consider some events.

How do you avoid rationalizing (introducing bias in) your decision if you're heavily scrutinizing the tree only when your system 1 and system 2 disagree?

It's hard, and I find that I need to update my models every now and then. Practice makes better.