Moral uncertainty: What kind of 'should' is involved?

by MichaelA · 12 min read · 13th Jan 2020 · 11 comments


Metaethics · Ethics & Morality · Moral Uncertainty · Value Learning · Decision Theory

This post follows on from my prior post; consider reading that post first.

We are often forced to make decisions under conditions of uncertainty. This may be empirical uncertainty (e.g., what is the likelihood that nuclear war would cause human extinction?), or it may be moral uncertainty (e.g., does the wellbeing of future generations matter morally?).

In my prior post, I discussed overlaps with and distinctions between moral uncertainty and related concepts. In this post, I continue my attempt to clarify what moral uncertainty actually is (rather than how to make decisions when morally uncertain, which is covered later in the sequence). Specifically, here I’ll discuss:

  1. Is what we “ought to do” (or “should do”) under moral uncertainty an objective or subjective (i.e., belief-relative) matter?
  2. Is what we “ought to do” (or “should do”) under moral uncertainty a matter of rationality or morality?

An important aim will be simply clarifying the questions and terms themselves. That said, to foreshadow, the tentative “answers” I’ll arrive at are:

  1. It seems both more intuitive and more action-guiding to say that the “ought” is subjective.
  2. Whether the “ought” is a rational or a moral one may be a “merely verbal” dispute with no practical significance. But I’m very confident that interpreting the “ought” as a matter of rationality works in any case (i.e., whether or not interpreting it as a matter of morality does, and whether or not the distinction really matters).

This post doesn’t explicitly address what types of moral uncertainty would be meaningful for moral antirealists and/or subjectivists; I discuss that topic in a separate post.[1]

Epistemic status: The concepts covered here are broad, fuzzy, and overlap in various ways, making definitions and distinctions between them almost inevitably debatable. Additionally, I’m not an expert in these topics (though I have now spent a couple weeks mostly reading about them). I’ve tried to mostly collect, summarise, and synthesise existing ideas (from academic philosophy and the LessWrong and EA communities). I’d appreciate feedback or comments in relation to any mistakes, unclear phrasings, etc. (and just in general!).

Objective or subjective?

(Note: What I discuss here is not the same as the objectivism vs subjectivism debate in metaethics.)

As I noted in a prior post:

Subjective normativity relates to what one should do based on what one believes, whereas objective normativity relates to what one “actually” should do (i.e., based on the true state of affairs).

Hilary Greaves & Owen Cotton-Barratt give an example of this distinction in the context of empirical uncertainty:

Suppose Alice packs the waterproofs but, as the day turns out, it does not rain. Does it follow that Alice made the wrong decision? In one (objective) sense of “wrong”, yes: thanks to that decision, she experienced the mild but unnecessary inconvenience of carrying bulky raingear around all day. But in a second (more subjective) sense, clearly it need not follow that the decision was wrong: if the probability of rain was sufficiently high and Alice sufficiently dislikes getting wet, her decision could easily be the appropriate one to make given her state of ignorance about how the weather would in fact turn out. Normative theories of decision-making under uncertainty aim to capture this second, more subjective, type of evaluation; the standard such account is expected utility theory.

Greaves & Cotton-Barratt then make the analogous distinction for moral uncertainty:

How should one choose, when facing relevant moral uncertainty? In one (objective) sense, of course, what one should do is simply what the true moral hypothesis says one should do. But it seems there is also a second sense of “should”, analogous to the subjective “should” for empirical uncertainty, capturing the sense in which it is appropriate for the agent facing moral uncertainty to be guided by her moral credences [i.e., beliefs], whatever the moral facts may be. (emphasis added)

(This objective vs subjective distinction seems to me somewhat similar - though not identical - to the distinction between ex post and ex ante thinking. We might say that Alice made the right decision ex ante - i.e., based on what she knew when she made her decision - even if it turned out - ex post - that the other decision would’ve worked out better.)

MacAskill notes that, in both the empirical and moral contexts, “The principal argument for thinking that there must be a subjective sense of ‘ought’ is because the objective sense of ‘ought’ is not sufficiently action-guiding.” He illustrates this in the case of moral uncertainty with the following example:

Susan is a doctor, who faces three sick individuals, Greg, Harold and Harry. Greg is a human patient, whereas Harold and Harry are chimpanzees. They all suffer from the same condition. She has a vial of a drug, D. If she administers all of drug D to Greg, he will be completely cured, and if she administers all of drug D to the chimpanzees, they will both be completely cured (health 100%). If she splits the drug between the three, then Greg will be almost completely cured (health 99%), and Harold and Harry will be partially cured (health 50%). She is unsure about the value of the welfare of non-human animals: she thinks it is equally likely that chimpanzees’ welfare has no moral value and that chimpanzees’ welfare has the same moral value as human welfare. And, let us suppose, there is no way that she can improve her epistemic state with respect to the relative value of humans and chimpanzees.

[...]

Her three options are as follows:

A: Give all of the drug to Greg

B: Split the drug

C: Give all of the drug to Harold and Harry

Her decision can be represented in the following table, using numbers to represent how good each outcome would be.

Finally, suppose that, according to the true moral theory, chimpanzee welfare is of the same moral value as human welfare and that therefore, she should give all of the drug to Harold and Harry. What should she do?

Clearly, the best outcome would occur if Susan does C. But she doesn’t know that that would cause the best outcome, because she doesn’t know what the “true moral theory” is. She thus has no way to act on the advice “Just do what is objectively morally right.” Meanwhile, as MacAskill notes, “it seems it would be morally reckless for Susan not to choose option B: given what she knows, she would be risking severe wrongdoing by choosing either option A or option C” (emphasis added).
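MacAskill’s point about option B can be made concrete with a quick expected-value sketch. The valuation below (the goodness of an outcome is the sum of morally weighted health levels, with an untreated patient contributing 0, and each moral hypothesis getting credence 0.5) is my own illustrative assumption, not MacAskill’s actual table:

```python
# Sketch of Susan's decision under illustrative assumptions:
# - goodness of an outcome = sum of morally weighted health levels
# - an untreated patient's health contributes 0
# These numbers illustrate the structure; they are not MacAskill's.

# Health outcomes (Greg, Harold, Harry) for each option
options = {
    "A: all to Greg":        (100, 0, 0),
    "B: split the drug":     (99, 50, 50),
    "C: all to chimpanzees": (0, 100, 100),
}

# Two moral hypotheses, each with credence 0.5:
# chimpanzee welfare has weight 0, or weight 1 (equal to human welfare)
hypotheses = {0.0: 0.5, 1.0: 0.5}

def goodness(outcome, chimp_weight):
    greg, harold, harry = outcome
    return greg + chimp_weight * (harold + harry)

expected = {
    name: sum(credence * goodness(outcome, weight)
              for weight, credence in hypotheses.items())
    for name, outcome in options.items()
}

for name, value in expected.items():
    print(f"{name}: expected value {value}")
# Option B maximizes expected value here, matching the intuition that
# choosing A or C would be "morally reckless".
```

On these assumptions, A and C each have expected value 100 while B has 149: B risks only a 1% shortfall for Greg under one hypothesis, whereas A and C each risk a severe wrong under one of the two hypotheses.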

To capture the intuition that Susan should choose option B, and to provide actually followable guidance for action, we need to accept that there is a subjective sense of “should” (or of “ought”) - a sense of “should” that depends in part on what one believes. (This could also be called a “belief-relative” or “credence-relative” sense of “should”.)[2]

An additional argument in favour of accepting that there’s a subjective “should” in relation to moral uncertainty is consistency with how we treat empirical uncertainty, where most people accept that there’s a subjective “should”.[3] This argument is made regularly, including by MacAskill and by Greaves & Cotton-Barratt, and it seems particularly compelling when one considers that it’s often difficult to draw clear lines between empirical and moral uncertainty (see my prior post). That is, if it’s often hard to say whether an uncertainty is empirical or moral, it seems strange to say we should accept a subjective “should” under empirical uncertainty but not under moral uncertainty.

Ultimately, most of what I’ve read on moral uncertainty is premised on there being a subjective sense of “should”, and much of this sequence will rest on that premise also.[4] As far as I can tell, this seems necessary if we are to come up with any meaningful, action-guiding approaches for decision-making under moral uncertainty (“metanormative theories”).

But I should note that some writers do appear to argue that there’s only an objective sense of “should” (one example, I think, is Weatherson, though he uses different language and I’ve only skimmed his paper). Furthermore, while I can’t see how this could lead to action-guiding principles for making decisions under uncertainty, it does seem to me that it’d still allow for resolving one’s uncertainty. In other words, if we do recognise only objective “oughts”:

  • We may be stuck with fairly useless principles for decision-making, such as “Just do what’s actually right, even when you don’t know what’s actually right”
  • But (as far as I can tell) we could still be guided to clarify and reduce our uncertainties, and thereby bring our beliefs more in line with what’s actually right.

Rational or moral?

There is also debate about precisely what kind of “should” is involved [in cases of moral uncertainty]: rational, moral, or something else again. (Greaves & Cotton-Barratt)

For example, in the above example of Susan the doctor, are we wondering what she rationally ought to do, given her uncertainty about the moral status of chimpanzees, or what she morally ought to do?

It may not matter either way

Unfortunately, even after having read up on this, it’s not actually clear to me what the distinction is meant to be. In particular, I haven’t come across a clear explanation of what it would mean for the “should” or “ought” to be moral. I suspect that what that would mean would be partly a matter of interpretation, and that some definitions of a “moral” should could be effectively the same as those for a “rational” should. (But I should note that I didn’t look exhaustively for such explanations and definitions.)

Additionally, both Greaves & Cotton-Barratt and MacAskill explicitly avoid the question of whether what one “ought to do” under moral uncertainty is a matter of rationality or morality.[5] This does not seem to at all hold them back from making valuable contributions to the literature on moral uncertainty (and, more specifically, on how to make decisions when morally uncertain).

Together, the above points make me inclined to believe (though with low confidence) that this may be a “merely verbal” debate with no real, practical implications (at least while the words involved remain as fuzzy as they are).

However, I still did come to two less-dismissive conclusions:

  1. I’m very confident that the project of working out meaningful, action-guiding principles for decision-making under moral uncertainty makes sense if we see the relevant “should” as a rational one. (Note: This doesn’t mean that I think the “should” has to be seen as a rational one.)
  2. I’m less sure whether that project would make sense if we see the relevant “should” as a moral one. (Note: This doesn’t mean I have any particular reason to believe it wouldn’t make sense if we see the “should” as a moral one.)

I provide my reasoning behind these conclusions below, though, given my sense that this debate may lack practical significance, some readers may wish to just skip to the next section.

A rational “should” likely works

Bykvist writes:

An alternative way to understand the ought relevant to moral uncertainty is in terms of rationality (MacAskill et al., forthcoming; Sepielli, 2013). Rationality, in one important sense at least, has to do with what one should do or intend, given one’s beliefs and preferences. This is the kind of rationality that decision theory often is seen as invoking. It can be spelled out in different ways. One is to see it as a matter of coherence: It is rational to do or intend what coheres with one’s beliefs and preferences (Broome, 2013; for a critic, see Arpaly, 2000). Another way to spell it out is to understand it as a matter of rational processes: it is rational to do or intend what would be the output of a rational process, which starts with one’s beliefs and preferences (Kolodny, 2007).

To apply the general idea to moral uncertainty, we do not need to take a stand on which version is correct. We only need to assume that when a conscientious moral agent faces moral uncertainty, she cares about doing right and avoiding doing wrong but is uncertain about the moral status of her actions. She prefers doing right to doing wrong and is indifferent between different right doings (at least when the right doings have the same moral value, that is, none is morally supererogatory). She also cares more about serious wrongdoings than minor wrongdoings. The idea is then to apply traditional decision theoretical principles, according to which rational choice is some function of the agent’s preferences (utilities) and beliefs (credences). Of course, different decision‐theories provide different principles (and require different kinds of utility information). But the plausible ones at least agree on cases where one option dominates another.

Suppose that you are considering only two theories (which is to simplify considerably, but we only need a logically possible case): “business as usual,” according to which it is permissible to eat factory‐farmed meat and permissible to eat vegetables, and “vegetarianism,” according to which it is impermissible to eat factory‐farmed meat and permissible to eat vegetables. Suppose further that you have slightly more confidence in “business as usual.” The option of eating vegetables will dominate the option of eating meat in terms of your own preferences: No matter which moral theory is true, by eating vegetables, you will ensure an outcome that you weakly [prefer] to the alternative outcome: if “vegetarianism” is true, you prefer the outcome; if “business as usual” is true, you are indifferent between the outcomes. The rational thing for you to do is thus to eat vegetables, given your beliefs and preferences. (line breaks added)
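Bykvist’s dominance reasoning can be sketched as a simple check. The encoding below (1 for acting permissibly, 0 for acting impermissibly, with indifference among permissible acts) is an illustrative assumption of mine, not Bykvist’s own formalism:

```python
# Weak-dominance check for Bykvist's vegetarianism example.
# Encoding (illustrative assumption): the agent assigns 1 to acting
# permissibly and 0 to acting impermissibly, and is indifferent
# among permissible acts.

# permissibility[theory][act] -> 1 if permissible under that theory, else 0
permissibility = {
    "business as usual": {"eat meat": 1, "eat vegetables": 1},
    "vegetarianism":     {"eat meat": 0, "eat vegetables": 1},
}

def weakly_dominates(act_a, act_b):
    """act_a weakly dominates act_b: at least as good under every
    theory, and strictly better under at least one theory."""
    at_least_as_good = all(
        permissibility[theory][act_a] >= permissibility[theory][act_b]
        for theory in permissibility
    )
    strictly_better = any(
        permissibility[theory][act_a] > permissibility[theory][act_b]
        for theory in permissibility
    )
    return at_least_as_good and strictly_better

print(weakly_dominates("eat vegetables", "eat meat"))  # True
```

Because one option weakly dominates the other, the conclusion holds for any positive credence in each theory - which is why Bykvist’s example doesn’t depend on exactly how much more confident you are in “business as usual.”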

It seems to me that that reasoning makes perfect sense, and that we can have valid, meaningful, action-guiding principles about what one rationally (and subjectively) should do given one’s moral uncertainty. This seems further supported by the approach Christian Tarsney takes, which seems to be useful and to also treat the relevant “should” as a rational one.

Furthermore, MacAskill seems to suggest that there’s a correlation between (a) writers fully engaging with the project of working out action-guiding principles for decision-making under moral uncertainty and (b) writers considering the relevant “should” to be rational (rather than moral):

(Lockhart 2000, 24, 26), (Sepielli 2009, 10) and (Ross 2006) all take metanormative norms to be norms of rationality. (Weatherson 2014) and (Harman 2014) both understand metanormative norms as moral norms. So there is an odd situation in the literature where the defenders of metanormativism (Lockhart, Ross, and Sepielli) and the critics of the view (Weatherson and Harman) seem to be talking past one another.

A moral “should” may or may not work

I haven’t seen any writer (a) explicitly state that they understand the relevant “should” to be a moral one, and then (b) go on to fully engage with the project of working out meaningful, action-guiding principles for decision-making under moral uncertainty. Thus, I have an absence of evidence that one can engage in that project while seeing the “should” as moral, and I take this as (very weak) evidence that one can’t engage in that project while seeing the “should” that way.

Additionally, as noted above, MacAskill writes that Weatherson and Harman (who seem fairly dismissive of that project) see the relevant “should” as a moral one. Arguably, this is evidence that that project of finding such action-guiding principles won’t make sense if we see the “should” as moral (rather than rational). However, I consider this to also be very weak evidence, because:

  • It’s only two data points.
  • It’s just a correlation anyway.
  • I haven’t closely investigated the “correlation” myself. That is, I haven’t checked whether or not Weatherson and Harman’s reasons for dismissiveness seem highly related to them seeing the “should” as moral rather than rational.

Closing remarks

In this post, I’ve aimed to:

  • Clarify what is meant by the question “Is what we “ought to do” under moral uncertainty an objective or subjective matter?”
  • Clarify what is meant by the question “Is that ‘ought’ a matter of rationality or of morality?”
  • Argue that it seems both more intuitive and more action-guiding to say that the “ought” is subjective.
  • Argue that whether the “ought” is a rational or a moral one may be a “merely verbal” dispute with no practical significance (but that interpreting the “ought” as a matter of rationality works in any case).

I hope this has helped give readers more clarity on the seemingly neglected matter of what we actually mean by moral uncertainty. (And as always, I’d welcome any feedback or comments!)

My next posts will continue in a similar vein, but this time building to the question of whether, when we’re talking about moral uncertainty, we’re actually talking about moral risk rather than about moral (Knightian) uncertainty - and whether such a distinction is truly meaningful. (To do so, I'll first discuss the risk-uncertainty distinction in general, and the related matter of unknown unknowns, before applying these ideas in the context of moral risk/uncertainty in particular.)


  1. But the current post is still relevant for many types of moral antirealist. As noted in my last post, this sequence will sometimes use language that may appear to endorse or presume moral realism, but this is essentially just for convenience. ↩︎

  2. We could further divide subjective normativity up into, roughly, “what one should do based on what one actually believes” and “what one should do based on what it would be reasonable for one to believe”. The following quote, while not directly addressing that exact distinction, seems relevant:

    Before moving on, we should distinguish subjective credences, that is, degrees of belief, from epistemic credences, that is, the degree of belief that one is epistemically justified in having, given one’s evidence. When I use the term ‘credence’ I refer to epistemic credences (though much of my discussion could be applied to a parallel discussion involving subjective credences); when I want to refer to subjective credences I use the term ‘degrees of belief’.

    The reason for this is that appropriateness seems to have some sort of normative force: if it is most appropriate for someone to do something, it seems that, other things being equal, they ought, in the relevant sense of ‘ought’, to do it. But people can have crazy beliefs: a psychopath might think that a killing spree is the most moral thing to do. But there’s no sense in which the psychopath ought to go on a killing spree: rather, he ought to revise his beliefs. We can only capture that idea if we talk about epistemic credences, rather than degrees of belief.

    (I found that quote in this comment, where it’s attributed to MacAskill’s BPhil thesis. Unfortunately, I can’t seem to access that thesis, including via Wayback Machine.) ↩︎

  3. Though note that Greaves and Cotton-Barratt write:

    Not everyone does recognise a subjective reading of the moral ‘ought’, even in the case of empirical uncertainty. One can distinguish between objectivist, (rational-)credence-relative and pluralist views on this matter. According to objectivists (Moore, 1903; Moore, 1912; Ross, 1930, p.32; Thomson, 1986, esp. pp. 177-9; Graham, 2010; Bykvist and Olson, 2011) (respectively, credence-relativists (Prichard, 1933; Ross, 1939; Howard-Snyder, 2005; Zimmerman, 2006; Zimmerman, 2009; Mason, 2013)), the “ought” of morality is uniquely an objective (respectively, a credence-relative) one. According to pluralists, “ought” is ambiguous between these two readings (Russell, 1966; Gibbard, 2005; Parfit, 2011; Portmore, 2011; Dorsey, 2012; Olsen, 2017), or varies between the two readings according to context (Kolodny and MacFarlane, 2010).

    ↩︎
  4. In the following quote, Bykvist provides what seems to me (if I’m interpreting it correctly) to be a different way of explaining something similar to the objective vs subjective distinction.

    One possible explanation of why so few philosophers have engaged with moral uncertainty might be serious doubt about whether it makes much sense to ask about what one ought to do when one is uncertain about what one ought to do. The obvious answer to this question might be thought to be: “you ought to do what you ought to do, no matter whether or not you are certain about it” (Weatherson, 2002, 2014). However, this assumes the same sense of “ought” throughout.

    A better option is to assume that there are different kinds of moral ought. We are asking what we morally ought to do, in one sense of ought, when we are not certain about what we morally ought to do, in another sense of ought. One way to make this idea more precise is to think about the different senses as different levels of moral ought. When we face a moral problem, we are asking what we morally ought to do, at the first level. Standard moral theories, such as utilitarianism, Kantianism, and virtue ethics, provide answers to this question. In a case of moral uncertainty, we are moving up one level and asking about what we ought to do, at the second level, when we are not sure what we ought to do at the first level. At this second level, we take into account our credence in various hypotheses about what we ought to do at the first level and what these hypotheses say about the moral value of each action (MacAskill et al., forthcoming). This second level ought provides a way to cope with the moral uncertainty at the first level. It gives us a verdict of how to best manage the risk of doing first order moral wrongs. That there is such a second‐level moral ought of coping with first‐order moral risks seems to be supported by the fact that agents are morally criticizable when they, knowing all the relevant empirical facts, do what they think is very likely to be a first‐order moral wrong when there is another option that is known not to pose any risk of such wrongdoing.

    Yet another (and I think similar) way of framing this sort of distinction could make use of the following two terms: “A criterion of rightness tells us what it takes for an action to be right (if it’s actions we’re looking at). A decision procedure is something that we use when we’re thinking about what to do” (Askell).

    Specifically, we might say that the true first-order moral theory provides objective “criteria of rightness”, but that we don’t have direct access to what these are. As such, we can use a second-order “decision procedure” that attempts to lead us to take actions that are as close as possible to the best actions (according to the unknown criteria of rightness). To do so, this decision procedure must make use of our credences (beliefs) in various moral theories, and is thus subjective. ↩︎

  5. Greaves & Cotton-Barratt write: “For the purpose of this article, we will [...] not take a stand on what kind of “should” [is involved in cases of moral uncertainty]. Our question is how the “should” in question behaves in purely extensional terms. Say that an answer to that question is a metanormative theory.”

    MacAskill writes: “I introduce the technical term ‘appropriateness’ in order to remain neutral on the issue of whether metanormative norms are rational norms, or some other sort of norms (though noting that they can’t be first-order norms provided by first-order normative theories, on pain of inconsistency).” ↩︎
