Utilitarianism is the view that a social planner should choose options which maximise the social utility of the resulting social outcome. The central object in utilitarianism is the social utility function $U : S \to \mathbb{R}$, which assigns a real value $U(s)$ to each social outcome $s \in S$. This function typically involves variables such as the well-being, preferences, and mental states of individuals, distributional factors like inequality, and other relevant factors such as justice, social cohesion, and freedoms. Utilitarianism is a broad class of social choice principles, one corresponding to each function $U$.
In my previous article, I introduced aggregative principles, which state that a social planner should make decisions as if they will face the aggregated personal outcomes of every individual in the population. The central object in aggregativism is the function $\zeta : S \to P$, represented with the Greek letter zeta, which assigns a personal outcome $\zeta(s)$ to each social outcome $s \in S$. This function typically aggregates the collection of personal outcomes facing the entire population into a single personal outcome. Aggregativism is a broad class of social choice principles, one corresponding to each function $\zeta$.
We examined three well-known aggregative principles:
I'm interested in aggregative principles because they avoid many theoretical pitfalls of utilitarian principles. Unlike utilitarianism, aggregativism doesn't require specifying a social welfare function, which is notoriously intractable. Moreover, it seems less prone to counterintuitive conclusions such as the repugnant conclusion or the violation of moral side constraints.[1] In this article, I will show that, under natural conditions of human rationality, aggregative principles approximate utilitarian principles. Therefore, even though aggregativism avoids these theoretical pitfalls, we should nonetheless expect aggregativism to generate roughly-utilitarian recommendations in practical social contexts, and thereby retain the most appealing insights from utilitarianism.
The rest of the article is organized as follows. Section 2 formalises social choice principles as functions of type $(X \to S) \to \mathcal{P}(X)$. Section 3 demonstrates the structural similarity between two strategies for specifying such principles, namely the aggregative and utilitarian strategies. Section 4 proves that, under natural conditions about human rationality, the aggregative and utilitarian principles are mathematically equivalent. This theorem is the key contribution of the article. Sections 5, 6, and 7 apply the theorem to LELO, HL, and ROI respectively.
Suppose you are a social planner choosing from a set of options $X$. The set $X$ might be the set of available tax rates, environmental policies, military actions, political strategies, neural network parameters, or whatever else is being chosen by the social planner. Now, your choice will presumably depend on the social consequences of the options, even if you also consider non-consequentialist factors. We can model the social consequences with a function $c : X \to S$, where $S$ is the set of social outcomes. In particular, if you choose an option $x \in X$, then the resulting social outcome would be $c(x)$.
We call $c$ the "social context". As a concrete example, suppose the options are different tax rates (say 10%, 20%, and 30%), and the social outcomes are characterized by variables like total tax revenue, income inequality, and unemployment rate. Then the social context is the function $c$ which maps each tax rate to the resulting values of these social outcome variables.
A social choice principle should say, for each social context, which options are acceptable. Formally, a social choice principle is characterised by some function $\pi : (X \to S) \to \mathcal{P}(X)$, which takes a social context as input and returns a subset of the options as output. Specifically, $\pi(c)$ consists of exactly those options which satisfy the principle in the social context $c$.
Note that $X \to S$ denotes the set of all functions from $X$ to $S$, so $\pi$ is a higher-order function, meaning it receives another function as input. Additionally, $\mathcal{P}(X)$ denotes the powerset of $X$, i.e. the set of subsets of $X$. We use the powerset to allow for the fact that multiple options may satisfy a principle: if a principle permits only options $x_1$ and $x_2$ in a context $c$, then $\pi(c) = \{x_1, x_2\}$. Finally, the powerset includes the empty set $\emptyset$, which allows for the case $\pi(c) = \emptyset$. Informally, $\pi(c) = \emptyset$ means that the social planner, following principle $\pi$ and faced with context $c$, has no acceptable options, which allows for principles that aren't universally satisfiable.
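To make the type-signature concrete, here's a minimal Python sketch (not from the article): a social context becomes a dictionary from options to outcomes, and a principle becomes a higher-order function returning the set of acceptable options. The 'do_nothing' option and the status-quo principle are hypothetical examples.

```python
# A minimal sketch (not from the article): a social context is a dict from
# options to social outcomes, and a social choice principle is a
# higher-order function from contexts to sets of acceptable options.

def status_quo_principle(context):
    """Hypothetical principle: permit exactly the options whose outcome
    matches the outcome of the (hypothetical) 'do_nothing' option."""
    baseline = context["do_nothing"]
    return {x for x, outcome in context.items() if outcome == baseline}

context = {"do_nothing": "s0", "tax_10": "s0", "tax_20": "s1"}
print(sorted(status_quo_principle(context)))  # ['do_nothing', 'tax_10']
```

Returning a set, rather than a single option, mirrors the use of the powerset above: several options (or none) may be acceptable.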
Here are some examples of social choice principles:
These examples illustrate the diversity of conceivable social choice principles. The key point is that they can all be represented by functions $\pi : (X \to S) \to \mathcal{P}(X)$. I've found this a productive way to think about principles of decision-making, and agency more generally.[3] Finding compelling social choice principles is the central problem in social ethics, and different normative frameworks will propose different principles.
Utilitarianism and aggregativism are two strategies for specifying a social choice principle $\pi$. The utilitarian strategy specifies a social choice principle using two components:
Given the social utility function $U : S \to \mathbb{R}$ and the operator $\mathrm{argmax} : (X \to \mathbb{R}) \to \mathcal{P}(X)$, the utilitarian principle is defined by $\pi_U(c) = \mathrm{argmax}(U \circ c)$. Note that if $c : X \to S$ is the social context, then the composition $U \circ c$ calculates the social utility resulting from each option, thereby providing a real-valued function $U \circ c : X \to \mathbb{R}$. The utilitarian principle says that the social planner should choose an option that maximizes this function.
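As a sketch of the utilitarian construction, the following Python snippet (with made-up utility numbers) composes a social utility function with a context and applies argmax, returning the set of maximising options:

```python
def argmax_principle(f):
    """argmax as a real choice principle: the set of options maximising f,
    where f is a dict from options to real values."""
    best = max(f.values())
    return {x for x, v in f.items() if v == best}

def utilitarian_principle(U, context):
    """pi_U(c) = argmax(U . c): compose social utility with the context."""
    return argmax_principle({x: U(s) for x, s in context.items()})

# hypothetical social utilities for three social outcomes
U = {"s0": 100.0, "s1": 120.0, "s2": 120.0}.get
context = {"tax_10": "s0", "tax_20": "s1", "tax_30": "s2"}
print(sorted(utilitarian_principle(U, context)))  # ['tax_20', 'tax_30']
```

Note that two options tie for the maximum here, so the principle permits both, which is exactly why the codomain is a powerset.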
As a simplistic example, consider a social utility function that measures the gross world product of a social outcome. The resulting utilitarian principle would oblige maximizing gross world product. In practice, utilitarians typically endorse more nuanced utility functions that account for factors like individual well-being, fairness, and existential risk.
Aggregativism offers an alternative strategy for specifying a social choice principle. Like utilitarianism, it defines the principle using two components:
The function $h : (X \to P) \to \mathcal{P}(X)$ should model a self-interested human in the following sense: for each personal context $d : X \to P$, the subset $h(d) \subseteq X$ should contain the options that the hypothetical human might choose in that context. A personal context is an assignment of a personal outcome to each of the options, analogous to a social context. For example, if $d$ maps some options to finding a dollar and the remaining options to drowning in a swamp, then presumably $h(d)$ contains only the former options.
Given the social zeta function $\zeta : S \to P$ and a model of a self-interested human $h : (X \to P) \to \mathcal{P}(X)$, the aggregative principle is defined by $\pi_{\zeta,h}(c) = h(\zeta \circ c)$. Note that if $c : X \to S$ is the social context, then the composition $\zeta \circ c$ calculates the hypothetical personal outcome resulting from each option, thereby providing a personal context $\zeta \circ c : X \to P$. The aggregative principle says that the social planner should choose an option that a self-interested human might choose in this personal context.
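The aggregative construction can be sketched the same way; here `h` is a toy stand-in for the model of a self-interested human, and the outcomes and zeta function are hypothetical:

```python
def aggregative_principle(h, zeta, context):
    """pi_{zeta,h}(c) = h(zeta . c): build the personal context from the
    social context, then hand it to the human model h."""
    personal_context = {x: zeta(s) for x, s in context.items()}
    return h(personal_context)

def h(personal_context):
    """Toy stand-in for a self-interested human: refuse any option whose
    personal outcome is drowning."""
    return {x for x, p in personal_context.items() if p != "drowning"}

zeta = {"s0": "find_dollar", "s1": "drowning"}.get  # hypothetical zeta
context = {"a": "s0", "b": "s1"}
print(aggregative_principle(h, zeta, context))  # {'a'}
```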
For example, consider a social zeta function that maps each social outcome to the personal outcome of living every individual's life in sequence, starting with the earliest-born humans. The resulting aggregative principle obliges affecting society such that living the concatenated lives is personally desirable.
This comparison reveals the structural similarity between utilitarianism and aggregativism. Both strategies specify the principle using two components:
Both $h$, the model of a self-interested human, and the operator $\mathrm{argmax}$ are choice principles: $h$ is a personal choice principle, which 'chooses' one of the options based on their associated personal outcomes, and $\mathrm{argmax}$ is a real choice principle, which 'chooses' one of the options based on their associated real values. (Of course, $\mathrm{argmax}$ doesn't literally choose anything; it's simply a mathematical operator. But so too is $h$.)
In general, for any space $A$, let's say an $A$-context is any function with type-signature $X \to A$, and an $A$-choice principle is any function with type-signature $(X \to A) \to \mathcal{P}(X)$. That is, an $A$-choice principle $\rho$, when provided with an $A$-context $d$, identifies some subset $\rho(d) \subseteq X$ of the options which are 'acceptable'.
How might one use an $A$-choice principle $\rho$ to specify a social choice principle $\pi$? Well, what's needed is some function $g : S \to A$ from social outcomes to elements of $A$. This function extends any social context $c : X \to S$ to an $A$-context $g \circ c : X \to A$, which can then be provided to the $A$-choice principle to identify the acceptable options. Formally, $\pi(c) = \rho(g \circ c)$. This is how utilitarianism and aggregativism succeed in defining social choice principles. The key difference is that utilitarianism uses real numbers ($A = \mathbb{R}$, $g = U$, $\rho = \mathrm{argmax}$) while aggregativism uses personal outcomes ($A = P$, $g = \zeta$, $\rho = h$).
Despite their differences, there are natural conditions under which the utilitarian and aggregative principles are equivalent, in the sense that a social planner is permitted to choose an option under the utilitarian principle if and only if they are permitted to choose the same option under the aggregative principle.
Formally, let $\pi_U$ denote the utilitarian principle $c \mapsto \mathrm{argmax}(U \circ c)$, and let $\pi_{\zeta,h}$ denote the aggregative principle $c \mapsto h(\zeta \circ c)$; under what conditions does $\pi_U(c) = \pi_{\zeta,h}(c)$ for all social contexts $c$?
In the previous article, we showed that LELO, HL, and ROI each employ social zeta functions which aggregate the personal outcomes across all individuals in the population. Formally, $\zeta(s) = \alpha\big(T(w_s)(L)\big)$, where $I$ is a fixed set of individuals; $w : S \times I \to P$ is a fixed function mapping a social outcome $s$ and an individual $i$ to the personal outcome $w_s(i) = w(s, i)$ that $i$ faces when $s$ obtains; $T$ is the monad capturing a notion of 'collection'; $L \in T(I)$ is a fixed collection of individuals impartially representing the population; and $\alpha : T(P) \to P$ is a $T$-algebra specifying how to aggregate collections of personal outcomes into a single personal outcome.
Supposing $\zeta$ has the general form above, and the three conditions below are satisfied, the utilitarian principle and the aggregative principle are mathematically equivalent:
The aggregative principle (when our model of a self-interested human is a rational personal utility maximiser) is equivalent to the utilitarian principle (when social utility is the impartial aggregation of personal utility over each individual). The full proof is elementary and uninsightful.[4]
Now, these three conditions are only approximately true, and they fail in systematic ways. However, the theorem will help elucidate exactly the extent to which the aggregative principle approximates the corresponding utilitarian principle. Namely, the aggregative principle will approximate the utilitarian principle to the degree that these conditions hold.
Because RPU and SUAPU depend on the specific monad $T$ under discussion, I will spell out the details for three paradigm examples: the list monad $\mathbb{L}$ (representing finite sequences), the distribution monad $\mathbb{D}$ (representing probability distributions), and the nonempty finite powerset monad $\mathbb{P}^{+}$ (representing nonempty finite sets).
The previous section proved an equivalence, under certain conditions, between aggregative principles and utilitarian principles. This section will apply that theorem to the list monad $T = \mathbb{L}$, which is used to formalise Live Every Life Once (LELO). We will see that LELO is equivalent to longtermist total utilitarianism.
The real numbers admit a concatenation operator in the obvious way, i.e., there exists a function $\Sigma : \mathbb{L}(\mathbb{R}) \to \mathbb{R}$ defined by $\Sigma[r_1, \dots, r_n] = r_1 + \dots + r_n$. This is simply the well-known summation operator, which sends a list of real values to their sum.
Let's unpack RPU, which formally states that $u \circ \alpha = \Sigma \circ \mathbb{L}(u)$. In other words, for any list of personal outcomes $[p_1, \dots, p_n]$, we have equality between $u(\alpha[p_1, \dots, p_n])$ and $u(p_1) + \dots + u(p_n)$. Informally, the personal utility of a concatenated outcome equals the sum of the personal utilities of the outcomes being concatenated. This 'monoidal' rationality condition constrains how humans must value the concatenation of different personal outcomes.
In the previous article, we saw that the concatenation operator can be equivalently presented by a binary operator $\oplus : P \times P \to P$ and a constant $e \in P$, with the intended interpretation $\alpha[x, y] = x \oplus y$ and $\alpha[\,] = e$. We can restate the RPU condition in terms of $\oplus$ and $e$ with two equations: $u(x \oplus y) = u(x) + u(y)$ and $u(e) = 0$ for all $x, y \in P$.
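Here's a small Python illustration of the two monoidal-rationality equations, under the assumption that a personal outcome is a sequence of moments and personal utility is additive over moments (the moment scores are made up):

```python
# Personal outcomes are tuples of moments; concatenation (x ⊕ y) is tuple
# concatenation, and the empty tuple plays the role of the constant e.
moment_value = {"ecstasy": 5.0, "contentment": 1.0}  # made-up scores

def u(outcome):
    """A monoidally rational personal utility: additive over moments."""
    return sum(moment_value[m] for m in outcome)

x, y, e = ("ecstasy",), ("contentment", "contentment"), ()
assert u(x + y) == u(x) + u(y)  # u(x ⊕ y) = u(x) + u(y)
assert u(e) == 0                # u(e) = 0
```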
How realistic is the RPU condition? That is, supposing humans do maximise a personal utility function $u$, how monoidally rational is it? I think this condition is approximately true, but unrealistic in several ways. I'll assume here that $x \oplus y$ is interpreted as facing $x$ and then facing $y$ in sequence, rather than some exotic notion of concatenation.
Firstly, RPU rules out permutation-dependent values. It precludes a personal utility function $u$ such that $u(x \oplus y) \neq u(y \oplus x)$. Informally, RPU assumes human values must be invariant to the ordering of experiences: they cannot value saving the best till last, nor saving the worst till last. In particular, RPU assumes that human values are time-symmetric, which seems unrealistic, as illustrated by the following examples. Compare the process of learning, i.e. ending with better beliefs than one started with, with the process of unlearning, i.e. ending with worse beliefs than one started with. Humans seem to value learning above unlearning, but such time-asymmetric values are precluded by RPU. Similarly, humans seem to value a history of improvement over a history of degradation, even if both histories are different permutations of the same list of moments, but such values are precluded by RPU.
Secondly, RPU rules out time-discounted values. Under exponential time-discounting, a common assumption in economics, the personal utility function obeys the equation $u(x \oplus y) = u(x) + e^{-\lambda \cdot d(x)} \cdot u(y)$. Here $d(x)$ gives the duration of each outcome $x$ and $\lambda > 0$ is the discount rate. This discounting formula weights the first outcome $x$ more than the second outcome $y$, with the difference growing exponentially with the duration of $x$. For instance, let $x$ and $y$ be equally valuable experiences lasting different durations, like a minute of ecstasy and a week of contentment respectively. Time-discounting implies that $u(x \oplus y)$ depends more on $x$ than $u(y \oplus x)$ does. However, RPU precludes this possibility, as it requires that $u(x \oplus y) = u(x) + u(y)$, i.e. that humans are equally concerned with all life stages, not discounting future rewards relative to present ones.
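The following sketch shows numerically how exponential time-discounting violates monoidal rationality: with discounting, reordering the two experiences changes total utility, which RPU forbids. The durations, base values, and discount rate are all hypothetical.

```python
import math

# hypothetical durations (days), base values, and discount rate
duration = {"minute_of_ecstasy": 1 / 1440, "week_of_contentment": 7.0}
base = {"minute_of_ecstasy": 10.0, "week_of_contentment": 10.0}
rate = 0.1

def u_discounted(seq):
    """u(x ⊕ y) = u(x) + exp(-rate * d(x)) * u(y), iterated over a list."""
    total, elapsed = 0.0, 0.0
    for outcome in seq:
        total += math.exp(-rate * elapsed) * base[outcome]
        elapsed += duration[outcome]
    return total

a = u_discounted(["minute_of_ecstasy", "week_of_contentment"])
b = u_discounted(["week_of_contentment", "minute_of_ecstasy"])
print(a > b)  # True: reordering changes the total, violating RPU
```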
Thirdly, RPU rules out path-dependent values. Informally, whether I value a future $y$ more than a future $y'$ must be independent of my past experiences. But this is an unrealistic assumption about human values, as illustrated in the following examples. If $x$ denotes reading Moby Dick and $y$ denotes reading Oliver Twist, then humans seem to value $x \oplus x$ less than $x \oplus y$, but value $y \oplus x$ more than $y \oplus y$. This is because humans value reading a book more highly if they haven't already read it, due to an inherent value for novelty in reading material. Alternatively, if $x$ and $y$ denote being married to two different people, then humans seem to value $x \oplus x$ more than $x \oplus y$, but value $y \oplus x$ less than $y \oplus y$. This is because humans value being married to someone for a decade more highly if they've already been married to them, due to an inherent value for consistency in relationships.[5] But RPU precludes such path-dependent values.
Now let's unpack SUAPU, which formally states that $U(s) = \Sigma\big(\mathbb{L}(u \circ w_s)(L)\big)$. In other words, the social utility function is the sum of personal utilities over the individuals in the distinguished list $L$ representing the population. That is, if $L = [i_1, \dots, i_n]$ is a list of individuals representing the entire population impartially, then for any social outcome $s$, its social utility is given by $U(s) = u(w(s, i_1)) + \dots + u(w(s, i_n))$.
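A direct transcription of SUAPU for the list monad, with a made-up three-person population:

```python
def social_utility(s, L, w, u):
    """SUAPU for the list monad: U(s) = sum over i in L of u(w(s, i))."""
    return sum(u(w(s, i)) for i in L)

# hypothetical three-person population
L = ["alice", "bob", "carol"]
w = lambda s, i: s[i]                 # s assigns each person an outcome
u = {"good": 2.0, "bad": -1.0}.get    # made-up personal utilities
s = {"alice": "good", "bob": "good", "carol": "bad"}
print(social_utility(s, L, w, u))  # 3.0
```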
How realistic is the SUAPU condition? The answer depends on one's axiological theory. Indeed, SUAPU is a statement of longtermist total utilitarianism. This is a strong assumption that precludes a social utility function from exhibiting certain properties, analogous to how RPU constrains the personal utility function. Specifically, SUAPU precludes social utility functions with the following features:
Nonetheless, I think that HMPU, RPU, and SUAPU are useful approximations, even if they aren't perfectly true. To the extent that these assumptions do hold, Live Every Life Once (LELO) and longtermist total utilitarianism will be roughly equivalent. This explains why MacAskill appeals to LELO to argue for longtermist utilitarianism in his book "What We Owe The Future" (2022). Indeed, MacAskill's implicit argument can be summarized as follows:
We've seen how to apply the general equivalence, under certain conditions, between aggregative principles and utilitarian principles, e.g. between LELO and longtermist total utilitarianism. This section will apply that theorem to the distribution monad $T = \mathbb{D}$, which is used to formalise Harsanyi's Lottery (HL). We will see that HL is equivalent to average utilitarianism.
The real numbers admit an interpolation operator in the obvious way, i.e., there exists a function $E : \mathbb{D}(\mathbb{R}) \to \mathbb{R}$ defined by $E(\rho) = \sum_r \rho(r) \cdot r$. This is simply the well-known mean-value operator, which sends a distribution of real values to their weighted average.
Let's unpack RPU, which formally states that $u \circ \alpha = E \circ \mathbb{D}(u)$. In other words, for any distribution of personal outcomes $\rho \in \mathbb{D}(P)$, we have equality between $u(\alpha(\rho))$ and $\mathbb{E}_{p \sim \rho}[u(p)]$. Informally, the personal utility of an interpolated outcome is the average of the personal utilities of the outcomes being interpolated. This 'convex' rationality condition constrains how humans must value the interpolation of different personal outcomes.
In the previous article, we saw that the interpolation operator can be equivalently presented by a family of binary operators $+_p : P \times P \to P$ for $p \in [0, 1]$, with the intended interpretation that $x +_p y$ interpolates between $x$ (with weight $p$) and $y$ (with weight $1 - p$). We can restate the RPU condition in terms of $+_p$ with the family of equations $u(x +_p y) = p \cdot u(x) + (1 - p) \cdot u(y)$.
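The convex-rationality equation is easy to transcribe in Python; the utilities below are hypothetical:

```python
def lottery_utility(u, p, x, y):
    """Convex rationality: u(x +_p y) = p*u(x) + (1-p)*u(y)."""
    return p * u[x] + (1 - p) * u[y]

u = {"find_dollar": 1.0, "drown": -100.0}  # hypothetical utilities
assert lottery_utility(u, 0.5, "find_dollar", "drown") == -49.5
assert lottery_utility(u, 1.0, "find_dollar", "drown") == u["find_dollar"]
```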
How realistic is the RPU condition? That is, supposing humans do maximise a personal utility function $u$, how convexly rational is it? I think this condition is approximately true, but unrealistic in several ways. I'll assume here that $x +_p y$ is interpreted as a lottery between $x$ with likelihood $p$ and $y$ with likelihood $1 - p$.
Firstly, RPU rules out valuing determinacy. Informally, a lottery can't be valued below each determinate outcome. But perhaps this is an unrealistic assumption, as illustrated in the following example. If $x$ denotes dying on Monday and $y$ denotes dying on Tuesday, then humans might value both determinate outcomes over the lottery between them, e.g. $u(x) = u(y)$ but $u(x +_{1/2} y) < u(x)$. This is because humans may inherently value determinacy about the day of their death. But RPU precludes valuing determinacy.
Secondly, RPU rules out valuing randomness. Informally, a lottery can't be valued above each determinate outcome. But perhaps this is an unrealistic assumption, as illustrated in the following example. If $x$ and $y$ denote marrying two different people, then humans might value both determinate outcomes less than the lottery between them, e.g. $u(x) = u(y)$ but $u(x +_{1/2} y) > u(x)$. This is because humans may inherently value randomness about whom they marry. Again, RPU precludes valuing randomness.
Thirdly, RPU rules out values discontinuous in the underlying likelihoods. Formally, if $p_n \to p$ is a convergent sequence in $[0, 1]$, then RPU implies $u(x +_{p_n} y) \to u(x +_p y)$. Moreover, if $p_n \to 1$ then $u(x +_{p_n} y) \to u(x)$, and if $p_n \to 0$ then $u(x +_{p_n} y) \to u(y)$. But perhaps this is an unrealistic assumption, as illustrated in the following examples. If $x$ denotes an okay outcome and $y$ denotes a catastrophic outcome, then humans might value the lottery $x +_{1-\epsilon} y$ substantially less than $x$ for all $\epsilon > 0$, i.e. $\lim_{\epsilon \to 0} u(x +_{1-\epsilon} y) < u(x)$. Informally, the human values the zeroness of the catastrophe's likelihood. Analogously, if $x$ denotes a terrible defeat and $y$ denotes a great victory, then humans might value the lottery $x +_{1-\epsilon} y$ substantially more than $x$ for all $\epsilon > 0$, i.e. $\lim_{\epsilon \to 0} u(x +_{1-\epsilon} y) > u(x)$. Informally, the human values the nonzeroness of the victory's likelihood. But RPU precludes valuing either zeroness or nonzeroness of likelihoods, because such values would be discontinuous in the underlying likelihoods.
That being said, I think human values approximate convex rationality far better than they approximate monoidal rationality. In fact, while mainstream economics does not assume monoidal rationality (e.g. it permits time-discounting), it does assume convex rationality. Convex rationality is a straightforward application of von Neumann-Morgenstern (VNM) expected utility theory. Hence, I accept convex rationality of human values, at least when interpolation $x +_p y$ is interpreted as a lottery between $x$ and $y$.[6]
Now let's unpack SUAPU, which formally states that $U(s) = E\big(\mathbb{D}(u \circ w_s)(L)\big)$. In other words, the social utility function is the weighted average of personal utility over the individuals in the distinguished distribution $L \in \mathbb{D}(I)$ representing the population. That is, if $L$ is a distribution over individuals representing the entire population impartially, then for any social outcome $s$, its social utility is given by $U(s) = \sum_{i \in I} L(i) \cdot u(w(s, i))$.
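The same transcription for the distribution monad replaces the sum with a weighted average; the population and utilities are again made up:

```python
def social_utility_avg(s, L, w, u):
    """SUAPU for the distribution monad: U(s) = sum_i L(i) * u(w(s, i))."""
    return sum(weight * u(w(s, i)) for i, weight in L.items())

# impartial distribution over a hypothetical three-person population
L = {"alice": 1 / 3, "bob": 1 / 3, "carol": 1 / 3}
w = lambda s, i: s[i]
u = {"good": 2.0, "bad": -1.0}.get
s = {"alice": "good", "bob": "good", "carol": "bad"}
print(social_utility_avg(s, L, w, u))  # ≈ 1.0, the mean of 2, 2, -1
```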
How realistic is the SUAPU condition? The answer depends on one's axiological theory. Indeed, SUAPU is a statement of average utilitarianism. This is a strong assumption that precludes a social utility function from exhibiting certain properties, analogous to how RPU constrains the personal utility function. Specifically, SUAPU precludes social utility functions with the following features:
Nonetheless, I think that HMPU, RPU, and SUAPU are reasonable approximations, even if not perfectly true. To the extent that these assumptions hold, Harsanyi's Lottery (HL) and average utilitarianism will be roughly equivalent. This explains why Harsanyi appeals to HL to argue for average utilitarianism in his paper 'Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility' (1955).
Harsanyi's implicit argument can be summarized as follows:
We've seen how to apply the general equivalence, under certain conditions, between aggregative principles and utilitarian principles, e.g. between LELO and longtermist total utilitarianism, or between HL and average utilitarianism. This section will apply that theorem to the nonempty finite powerset monad $T = \mathbb{P}^{+}$, which is used to formalise Rawls' Original Position (ROI). We will see that ROI is equivalent to the difference principle.
The real numbers admit a fusion operator in the obvious way, i.e., there exists a function $\min : \mathbb{P}^{+}(\mathbb{R}) \to \mathbb{R}$, where $\min(A)$ is the largest $r \in \mathbb{R}$ satisfying $r \leq a$ for each $a \in A$. This is simply the well-known minimisation operator, which sends a nonempty finite set of real values to its minimum.
Let's unpack RPU, which formally states that $u \circ \alpha = \min \circ \mathbb{P}^{+}(u)$. In other words, for any nonempty finite set of personal outcomes $A \subseteq P$, we have equality between $u(\alpha(A))$ and $\min\{u(p) : p \in A\}$. Informally, the personal utility of a fused outcome is the minimum of the personal utilities of the outcomes being fused. This 'semilatticial' rationality condition constrains how humans must value the fusion of different personal outcomes.
In the previous article, we saw that the fusion operator can be equivalently presented by a single binary operator $\wedge : P \times P \to P$, with the intended interpretation $\alpha(\{x, y\}) = x \wedge y$. We can restate the RPU condition in terms of $\wedge$ with the single equation $u(x \wedge y) = \min(u(x), u(y))$.
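A one-line transcription of semilatticial rationality, with hypothetical utilities:

```python
def u_fused(u, outcomes):
    """Semilatticial rationality: u(α(A)) = min of u over the set A."""
    return min(u[o] for o in outcomes)

u = {"find_dollar": 1.0, "drown": -100.0, "read_book": 0.5}  # made up
# binary case: u(x ∧ y) = min(u(x), u(y))
assert u_fused(u, {"find_dollar", "read_book"}) == 0.5
assert u_fused(u, {"find_dollar", "drown"}) == min(u["find_dollar"], u["drown"])
```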
How realistic is the RPU condition? That is, supposing humans do maximise a personal utility function $u$, how semilatticially rational is it? I think this condition is approximately true, but unrealistic in several ways. I'll assume here that $x \wedge y$ is interpreted as Knightian uncertainty between facing $x$ and facing $y$. Then semilatticial rationality requires that humans are pessimistic, i.e. they value the disjunction of different outcomes no greater than the worst, as if the alternative will be selected by an adversary.
Firstly, RPU rules out valuing determinacy or indeterminacy. Informally, a disjunction can't be valued below each determinate outcome, nor above. But perhaps this is an unrealistic assumption: humans might value the disjunction $x \wedge y$ lower than either $x$ or $y$ because they inherently value determinacy in this case, or they might value the disjunction $x \wedge y$ higher than either $x$ or $y$ because they inherently value indeterminacy in this case. But RPU precludes such values.
Secondly, RPU rules out non-pessimistic considerations. Informally, adding additional possibilities can never increase the value of a disjunction. But this is an unrealistic assumption about human values, as illustrated in the following example. If $x$ is a typical comfortable life and $y$ is a life of horrific torture, then humans may value the outcome $x \wedge y$ higher than the outcome $y$, and value the outcome $x$ higher than the outcome $x \wedge y$. In particular, the outcome $x \wedge y$ gives a possibility of being fine, while $y$ is certain to result in torture. However, RPU precludes such values, since it requires $u(x \wedge y) = \min(u(x), u(y)) = u(y)$.
Overall, I think human values approximate semilatticial rationality. Indeed, suppose you face genuine Knightian uncertainty between a set of possibilities $\{p_1, \dots, p_n\}$, with personal utilities $u(p_1), \dots, u(p_n)$ respectively. What's the ex-ante value? There's not much to be done to construct the ex-ante personal value other than minimising over the possibilities, i.e. $u(\alpha\{p_1, \dots, p_n\})$ equals $\min(u(p_1), \dots, u(p_n))$. The only alternative is to deny that the ex-ante value of the disjunction depends solely on the ex-post values of the constituent possibilities, or else employ a different semilattice on $\mathbb{R}$.[7] Moreover, Wald's maximin model, and robust optimisation more generally, are popular principles of decision-making. These principles involve maximising a semilatticially rational utility function. Hence, I accept semilatticial rationality of human values, at least when the fusion $x \wedge y$ is interpreted as Knightian uncertainty between $x$ and $y$, a mode of ignorance which is rarely encountered.[8]
Now let's unpack SUAPU, which formally states that $U(s) = \min\big(\mathbb{P}^{+}(u \circ w_s)(L)\big)$. In other words, the social utility function is the minimum of personal utilities over the individuals in the distinguished nonempty subset $L \subseteq I$ representing the population. That is, if $L$ is a nonempty subset of individuals representing the entire population impartially, then for any social outcome $s$, the social utility is given by $U(s) = \min\{u(w(s, i)) : i \in L\}$.
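SUAPU for the powerset monad is the maximin rule, sketched below with a made-up population:

```python
def social_utility_maximin(s, L, w, u):
    """SUAPU for the powerset monad: U(s) = min over i in L of u(w(s, i))."""
    return min(u(w(s, i)) for i in L)

L = {"alice", "bob", "carol"}        # hypothetical population
w = lambda s, i: s[i]
u = {"good": 2.0, "bad": -1.0}.get
s = {"alice": "good", "bob": "good", "carol": "bad"}
print(social_utility_maximin(s, L, w, u))  # -1.0, the worst-off person
```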
How realistic is the SUAPU condition? The answer depends on one's axiological theory. Indeed, SUAPU is a statement of Rawls' difference principle. This is a strong assumption that precludes a social utility function from exhibiting certain properties, analogous to how RPU constrains the personal utility function. Specifically, SUAPU precludes social utility functions with the following features:
Nonetheless, I think that HMPU, RPU, and SUAPU are somewhat reasonable approximations, though probably less plausible in the ROI context than in the LELO or HL contexts. To the extent that these assumptions hold, Rawls' Original Position (ROI) and his difference principle will be roughly equivalent. This explains why Rawls appeals to ROI to argue for the difference principle in his book "A Theory of Justice" (1971).
Rawls' implicit argument can be summarized as follows:
To summarise, I first formalised social choice principles using functions of type-signature $(X \to S) \to \mathcal{P}(X)$. This allowed me to define the utilitarian principle corresponding to a given social utility function, and the aggregative principle corresponding to a given social zeta function. As discussed in my previous article, this social zeta function maps a social outcome to the aggregated personal outcomes of each individual. Using the formalism, I proved that, under three natural conditions, the aggregative principle is mathematically equivalent to a corresponding utilitarian principle. Because these conditions are approximately true, aggregativism approximates utilitarianism. Hence, even though aggregativism avoids the theoretical pitfalls of utilitarianism, we should nonetheless expect it to generate roughly-utilitarian recommendations in practical social contexts, and thereby retain the most appealing insights from utilitarianism. Moreover, this explains why MacAskill, Harsanyi, and Rawls each appeal to aggregative principles to defend their respective utilitarian principles.
In the next article, I will enumerate the theoretical pitfalls that face utilitarianism, and explain how aggregativism overcomes them.
See Appraising aggregativism and utilitarianism for a thorough defence.
In fact, the function mapping each option $x \in X$ to the constant principle $c \mapsto \{x\}$ is a canonical embedding of the space of options into the space of social choice principles.
The aggregative principle is $\pi_{\zeta,h}(c) = h(\zeta \circ c)$, where $c : X \to S$ is a social context, $h : (X \to P) \to \mathcal{P}(X)$ is the human model, and $\zeta : S \to P$ is the social zeta function. This means a social planner should choose an option if a self-interested human would choose the associated personal outcome. By HMPU, $h$ has the form $h(d) = \mathrm{argmax}(u \circ d)$, where $u : P \to \mathbb{R}$ is the personal utility function. This means a self-interested human will choose an option that maximizes personal utility. Hence, aggregativism is the principle $\pi_{\zeta,h}(c) = \mathrm{argmax}(u \circ \zeta \circ c)$. Intuitively, this means a social planner should choose an option which maximizes the personal utility of the associated personal outcome.
The social zeta function is defined by $\zeta(s) = \alpha\big(T(w_s)(L)\big)$, where $\alpha : T(P) \to P$ is the aggregation function for personal outcomes, $w_s = w(s, -) : I \to P$ assigns personal outcomes to individuals, and $L \in T(I)$ is the distinguished collection representing the population. Intuitively, this means the personal outcome associated to a social outcome is the aggregate of the personal outcomes across all individuals in society.
Now, RPU asserts that $u \circ \alpha = a \circ T(u)$, where $a : T(\mathbb{R}) \to \mathbb{R}$ is the aggregation function for real numbers, i.e. the personal utility of the aggregate of personal outcomes is the aggregate of the personal utilities of each outcome. Given $\zeta(s) = \alpha\big(T(w_s)(L)\big)$, we obtain $u(\zeta(s)) = a\big(T(u)(T(w_s)(L))\big)$. Intuitively, this means the personal utility of the personal outcome associated to a social outcome is the aggregate of the personal utilities of the personal outcomes faced by each individual in society.
Now, SUAPU asserts that $U(s) = a\big(T(u \circ w_s)(L)\big)$, where $U : S \to \mathbb{R}$ is the social utility function, $a : T(\mathbb{R}) \to \mathbb{R}$ is the aggregation function for real numbers, $u : P \to \mathbb{R}$ is the personal utility function, $w_s : I \to P$ assigns personal outcomes to individuals, and $L \in T(I)$ is the distinguished collection representing the population. Intuitively, this means the social utility of a social outcome is the aggregate of the personal utilities of the personal outcomes faced by each individual in society.
This entails that $U = u \circ \zeta$. To see this, note that the right-hand sides of the equations $u(\zeta(s)) = a\big(T(u)(T(w_s)(L))\big)$ and $U(s) = a\big(T(u \circ w_s)(L)\big)$ are identical: $T(u) \circ T(w_s) = T(u \circ w_s)$. Indeed, this follows from the functoriality of the lifting operator $T$. Therefore, $U(s) = u(\zeta(s))$ for all $s \in S$. Intuitively, this means the social utility of a social outcome is the personal utility of its associated personal outcome.
Hence, the aggregative principle is $\pi_{\zeta,h}(c) = \mathrm{argmax}(U \circ c)$. To see this, note that $\mathrm{argmax}(u \circ \zeta \circ c) = \mathrm{argmax}(U \circ c)$ because $u \circ \zeta = U$. Intuitively, this means a social planner following the aggregative principle should choose an option which maximizes the social utility of the resulting social outcome. The utilitarian principle is exactly $\pi_U(c) = \mathrm{argmax}(U \circ c)$. Hence the aggregative principle is equivalent to the utilitarian principle, conditional on HMPU, RPU, and SUAPU.
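To sanity-check the derivation, here is a small numeric experiment for the list monad (all utilities hypothetical): assuming HMPU, RPU, and SUAPU, the aggregative recommendation argmax(u ∘ ζ ∘ c) coincides with the utilitarian recommendation argmax(U ∘ c).

```python
def argmax(f):
    best = max(f.values())
    return {x for x, v in f.items() if v == best}

L = ["alice", "bob"]                      # distinguished list of individuals
w = lambda s, i: s[i]                     # personal outcome of i under s
u = lambda outcome: sum(outcome)          # additive personal utility (RPU)
zeta = lambda s: tuple(m for i in L for m in s[i])   # concatenate lives
U = lambda s: sum(u(s[i]) for i in L)     # SUAPU: total of personal utilities

# two candidate social outcomes; personal outcomes are tuples of welfare scores
s0 = {"alice": (3.0,), "bob": (1.0, 1.0)}
s1 = {"alice": (2.0,), "bob": (2.0,)}
context = {"A": s0, "B": s1}

aggregative = argmax({x: u(zeta(s)) for x, s in context.items()})
utilitarian = argmax({x: U(s) for x, s in context.items()})
assert aggregative == utilitarian == {"A"}  # both recommend option A
```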
Of course, whether these particular cases violate RPU depends on which function $h$ models the self-interested human, and which personal utility function $u$ is used to characterise $h$. Nonetheless, I think that any reasonable $h$ or $u$ will exhibit both novelty values and consistency values.
We might also ask: are human values convexly rational with respect to other convex algebras on personal outcomes? Recall that, in my previous article, we examined a novel interpretation of $x +_p y$ as the direct interpolation in some high-dimensional vector space $V$. To obtain semantically meaningful vector representations of personal outcomes, we might leverage the activation space of a large language model like GPT-3. The interpolation of two vector representations is simply $v +_p w = p \cdot v + (1 - p) \cdot w$. Under this interpretation of $+_p$, the RPU condition says that personal utility is a linear probe. Formally, RPU requires the personal utility function $u : V \to \mathbb{R}$ to satisfy the equation $u(p \cdot v + (1 - p) \cdot w) = p \cdot u(v) + (1 - p) \cdot u(w)$ for all vectors $v, w \in V$ and interpolation weights $p \in [0, 1]$. Whether RPU holds in this setting depends on the specific vector representation of outcomes.
The real numbers admit another fusion operator, $\max : \mathbb{P}^{+}(\mathbb{R}) \to \mathbb{R}$, which we could consider. But the semilattice $(\mathbb{R}, \max)$ will generate a condition of semilatticial rationality which is even less plausible than that generated by the semilattice $(\mathbb{R}, \min)$. Namely, it requires $u(x \wedge y) = \max(u(x), u(y))$, e.g. humans would value Knightian uncertainty between horrific torture and a comfortable life no lower than certainty in a comfortable life.
In my previous article, we examined a conjunctive interpretation of the fusion of personal outcomes, in contrast to Rawls' disjunctive interpretation. In particular, if $x$ and $y$ are personal outcomes then $x \wedge y$ is the personal outcome of facing $x$ and $y$ simultaneously. How should we understand semilatticial rationality, which formally states that for any nonempty finite subset of personal outcomes $A \subseteq P$, we have equality between $u(\alpha(A))$ and $\min\{u(p) : p \in A\}$? Under this fusion operator, semilatticial rationality requires that humans are "glass half-empty". Informally, the value of facing outcomes simultaneously is no greater than the value of the worst constituent outcome. That is, $u(x \wedge y) = \min(u(x), u(y))$.
Here's how this rationality condition might arise naturally: Imagine a set of "catastrophes", such as being bored, being cold, being dead. Each catastrophe $k$ is represented with a personal outcome $x_k$ and a value $v_k$; for example, the values might rank being dead below being cold, and being cold below being bored. Moreover, the utility of a complex personal outcome, such as being bored and cold simultaneously, is determined by the worst catastrophe. That is, $u(x_{k_1} \wedge \dots \wedge x_{k_n}) = \min(v_{k_1}, \dots, v_{k_n})$. This implies that facing multiple catastrophes which are equally disastrous is no worse than facing only one such catastrophe, i.e. if $v_j = v_k$ then $u(x_j \wedge x_k) = u(x_j) = u(x_k)$.