Moral uncertainty: What kind of 'should' is involved?

by MichaelA · 12 min read · 13th Jan 2020 · 11 comments


Metaethics, Ethics & Morality, Moral Uncertainty, Value Learning, Decision Theory

This post follows on from my prior post; consider reading that post first.

We are often forced to make decisions under conditions of uncertainty. This may be empirical uncertainty (e.g., what is the likelihood that nuclear war would cause human extinction?), or it may be moral uncertainty (e.g., does the wellbeing of future generations matter morally?).

In my prior post, I discussed overlaps with and distinctions between moral uncertainty and related concepts. In this post, I continue my attempt to clarify what moral uncertainty actually is (rather than how to make decisions when morally uncertain, which is covered later in the sequence). Specifically, here I’ll discuss:

  1. Is what we “ought to do” (or “should do”) under moral uncertainty an objective or subjective (i.e., belief-relative) matter?
  2. Is what we “ought to do” (or “should do”) under moral uncertainty a matter of rationality or morality?

An important aim will be simply clarifying the questions and terms themselves. That said, to foreshadow, the tentative “answers” I’ll arrive at are:

  1. It seems both more intuitive and more action-guiding to say that the “ought” is subjective.
  2. Whether the “ought” is a rational or a moral one may be a “merely verbal” dispute with no practical significance. But I’m very confident that interpreting the “ought” as a matter of rationality works in any case (i.e., whether or not interpreting it as a matter of morality does, and whether or not the distinction really matters).

This post doesn’t explicitly address what types of moral uncertainty would be meaningful for moral antirealists and/or subjectivists; I discuss that topic in a separate post.[1]

Epistemic status: The concepts covered here are broad, fuzzy, and overlap in various ways, making definitions and distinctions between them almost inevitably debatable. Additionally, I’m not an expert in these topics (though I have now spent a couple weeks mostly reading about them). I’ve tried to mostly collect, summarise, and synthesise existing ideas (from academic philosophy and the LessWrong and EA communities). I’d appreciate feedback or comments in relation to any mistakes, unclear phrasings, etc. (and just in general!).

Objective or subjective?

(Note: What I discuss here is not the same as the objectivism vs subjectivism debate in metaethics.)

As I noted in a prior post:

Subjective normativity relates to what one should do based on what one believes, whereas objective normativity relates to what one “actually” should do (i.e., based on the true state of affairs).

Hilary Greaves & Owen Cotton-Barratt give an example of this distinction in the context of empirical uncertainty:

Suppose Alice packs the waterproofs but, as the day turns out, it does not rain. Does it follow that Alice made the wrong decision? In one (objective) sense of “wrong”, yes: thanks to that decision, she experienced the mild but unnecessary inconvenience of carrying bulky raingear around all day. But in a second (more subjective) sense, clearly it need not follow that the decision was wrong: if the probability of rain was sufficiently high and Alice sufficiently dislikes getting wet, her decision could easily be the appropriate one to make given her state of ignorance about how the weather would in fact turn out. Normative theories of decision-making under uncertainty aim to capture this second, more subjective, type of evaluation; the standard such account is expected utility theory.

Greaves & Cotton-Barratt then make the analogous distinction for moral uncertainty:

How should one choose, when facing relevant moral uncertainty? In one (objective) sense, of course, what one should do is simply what the true moral hypothesis says one should do. But it seems there is also a second sense of “should”, analogous to the subjective “should” for empirical uncertainty, capturing the sense in which it is appropriate for the agent facing moral uncertainty to be guided by her moral credences [i.e., beliefs], whatever the moral facts may be. (emphasis added)

(This objective vs subjective distinction seems to me somewhat similar - though not identical - to the distinction between ex post and ex ante thinking. We might say that Alice made the right decision ex ante - i.e., based on what she knew when she made her decision - even if it turned out - ex post - that the other decision would’ve worked out better.)

MacAskill notes that, in both the empirical and moral contexts, “The principal argument for thinking that there must be a subjective sense of ‘ought’ is because the objective sense of ‘ought’ is not sufficiently action-guiding.” He illustrates this in the case of moral uncertainty with the following example:

Susan is a doctor, who faces three sick individuals, Greg, Harold and Harry. Greg is a human patient, whereas Harold and Harry are chimpanzees. They all suffer from the same condition. She has a vial of a drug, D. If she administers all of drug D to Greg, he will be completely cured, and if she administers all of the drug to the chimpanzees, they will both be completely cured (health 100%). If she splits the drug between the three, then Greg will be almost completely cured (health 99%), and Harold and Harry will be partially cured (health 50%). She is unsure about the value of the welfare of non-human animals: she thinks it is equally likely that chimpanzees’ welfare has no moral value and that chimpanzees’ welfare has the same moral value as human welfare. And, let us suppose, there is no way that she can improve her epistemic state with respect to the relative value of humans and chimpanzees.

[...]

Her three options are as follows:

A: Give all of the drug to Greg

B: Split the drug

C: Give all of the drug to Harold and Harry

Her decision can be represented in the following table, using numbers to represent how good each outcome would be.

Finally, suppose that, according to the true moral theory, chimpanzee welfare is of the same moral value as human welfare and that therefore, she should give all of the drug to Harold and Harry. What should she do?

Clearly, the best outcome would occur if Susan does C. But she doesn’t know that that would cause the best outcome, because she doesn’t know what the “true moral theory” is. She thus has no way to act on the advice “Just do what is objectively morally right.” Meanwhile, as MacAskill notes, “it seems it would be morally reckless for Susan not to choose option B: given what she knows, she would be risking severe wrongdoing by choosing either option A or option C” (emphasis added).
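This belief-relative reasoning can be sketched as a quick expected-value calculation. Note that the utility numbers below are illustrative, derived from the health percentages in the quote rather than taken from MacAskill's own table: chimpanzee welfare is weighted at zero under one theory and equally with human welfare under the other.

```python
# Illustrative sketch of Susan's decision under moral uncertainty.
# value[option][theory] = total morally weighted health under that theory.
credences = {"chimps_no_value": 0.5, "chimps_equal_value": 0.5}

value = {
    "A": {"chimps_no_value": 100, "chimps_equal_value": 100},           # Greg 100%
    "B": {"chimps_no_value": 99,  "chimps_equal_value": 99 + 50 + 50},  # Greg 99%, chimps 50% each
    "C": {"chimps_no_value": 0,   "chimps_equal_value": 100 + 100},     # chimps 100% each
}

def expected_moral_value(option):
    """Credence-weighted value of an option across the rival moral theories."""
    return sum(credences[t] * value[option][t] for t in credences)

for option in "ABC":
    print(option, expected_moral_value(option))
# B comes out highest (149.0, vs 100.0 for both A and C)
```

Maximising expected moral value is only one proposed metanormative approach (approaches are discussed later in the sequence), but the sketch shows how a belief-relative “should” can single out option B even though B is not the objectively best option under either theory considered on its own.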

To capture the intuition that Susan should choose option B, and to provide actually followable guidance for action, we need to accept that there is a subjective sense of “should” (or of “ought”) - a sense of “should” that depends in part on what one believes. (This could also be called a “belief-relative” or “credence-relative” sense of “should”.)[2]

An additional argument in favour of accepting that there’s a subjective “should” in relation to moral uncertainty is consistency with how we treat empirical uncertainty, where most people accept that there’s a subjective “should”.[3] This argument is made regularly, including by MacAskill and by Greaves & Cotton-Barratt, and it seems particularly compelling when one considers that it’s often difficult to draw clear lines between empirical and moral uncertainty (see my prior post). That is, if it’s often hard to say whether an uncertainty is empirical or moral, it seems strange to say we should accept a subjective “should” under empirical uncertainty but not under moral uncertainty.

Ultimately, most of what I’ve read on moral uncertainty is premised on there being a subjective sense of “should”, and much of this sequence will rest on that premise also.[4] As far as I can tell, this seems necessary if we are to come up with any meaningful, action-guiding approaches for decision-making under moral uncertainty (“metanormative theories”).

But I should note that some writers do appear to argue that there’s only an objective sense of “should” (one example, I think, is Weatherson, though he uses different language and I’ve only skimmed his paper). Furthermore, while I can’t see how this could lead to action-guiding principles for making decisions under uncertainty, it does seem to me that it’d still allow for resolving one’s uncertainty. In other words, if we do recognise only objective “oughts”:

  • We may be stuck with fairly useless principles for decision-making, such as “Just do what’s actually right, even when you don’t know what’s actually right”
  • But (as far as I can tell) we could still be guided to clarify and reduce our uncertainties, and thereby bring our beliefs more in line with what’s actually right.

Rational or moral?

There is also debate about precisely what kind of “should” is involved [in cases of moral uncertainty]: rational, moral, or something else again. (Greaves & Cotton-Barratt)

For example, in the above example of Susan the doctor, are we wondering what she rationally ought to do, given her uncertainty about the moral status of chimpanzees, or what she morally ought to do?

It may not matter either way

Unfortunately, even after having read up on this, it’s not actually clear to me what the distinction is meant to be. In particular, I haven’t come across a clear explanation of what it would mean for the “should” or “ought” to be moral. I suspect that what that would mean would be partly a matter of interpretation, and that some definitions of a “moral” should could be effectively the same as those for a “rational” should. (But I should note that I didn’t look exhaustively for such explanations and definitions.)

Additionally, both Greaves & Cotton-Barratt and MacAskill explicitly avoid the question of whether what one “ought to do” under moral uncertainty is a matter of rationality or morality.[5] This does not seem to at all hold them back from making valuable contributions to the literature on moral uncertainty (and, more specifically, on how to make decisions when morally uncertain).

Together, the above points make me inclined to believe (though with low confidence) that this may be a “merely verbal” debate with no real, practical implications (at least while the words involved remain as fuzzy as they are).

However, I still did come to two less-dismissive conclusions:

  1. I’m very confident that the project of working out meaningful, action-guiding principles for decision-making under moral uncertainty makes sense if we see the relevant “should” as a rational one. (Note: This doesn’t mean that I think the “should” has to be seen as a rational one.)
  2. I’m less sure whether that project would make sense if we see the relevant “should” as a moral one. (Note: This doesn’t mean I have any particular reason to believe it wouldn’t make sense if we see the “should” as a moral one.)

I provide my reasoning behind these conclusions below, though, given my sense that this debate may lack practical significance, some readers may wish to just skip to the next section.

A rational “should” likely works

Bykvist writes:

An alternative way to understand the ought relevant to moral uncertainty is in terms of rationality (MacAskill et al., forthcoming; Sepielli, 2013). Rationality, in one important sense at least, has to do with what one should do or intend, given one’s beliefs and preferences. This is the kind of rationality that decision theory often is seen as invoking. It can be spelled out in different ways. One is to see it as a matter of coherence: It is rational to do or intend what coheres with one’s beliefs and preferences (Broome, 2013; for a critic, see Arpaly, 2000). Another way to spell it out is to understand it as matter of rational processes: it is rational to do or intend what would be the output of a rational process, which starts with one’s beliefs and preferences (Kolodny, 2007).

To apply the general idea to moral uncertainty, we do not need to take a stand on which version is correct. We only need to assume that when a conscientious moral agent faces moral uncertainty, she cares about doing right and avoiding doing wrong but is uncertain about the moral status of her actions. She prefers doing right to doing wrong and is indifferent between different right doings (at least when the right doings have the same moral value, that is, none is morally supererogatory). She also cares more about serious wrongdoings than minor wrongdoings. The idea is then to apply traditional decision-theoretic principles, according to which rational choice is some function of the agent’s preferences (utilities) and beliefs (credences). Of course, different decision theories provide different principles (and require different kinds of utility information). But the plausible ones at least agree on cases where one option dominates another.

Suppose that you are considering only two theories (which is to simplify considerably, but we only need a logically possible case): “business as usual,” according to which it is permissible to eat factory‐farmed meat and permissible to eat vegetables, and “vegetarianism,” according to which it is impermissible to eat factory‐farmed meat and permissible to eat vegetables. Suppose further that you have slightly more confidence in “business as usual.” The option of eating vegetables will dominate the option of eating meat in terms of your own preferences: No matter which moral theory is true, by eating vegetables, you will ensure an outcome that you weakly [prefer] to the alternative outcome: if “vegetarianism” is true, you prefer the outcome; if “business as usual” is true, you are indifferent between the outcomes. The rational thing for you to do is thus to eat vegetables, given your beliefs and preferences. (line breaks added)

It seems to me that that reasoning makes perfect sense, and that we can have valid, meaningful, action-guiding principles about what one rationally (and subjectively) should do given one’s moral uncertainty. This seems further supported by the approach Christian Tarsney takes, which seems to be useful and to also treat the relevant “should” as a rational one.
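Bykvist's dominance argument can be sketched with a simple check. The ordinal utilities below are hypothetical stand-ins for the conscientious agent's preferences (1 = permissible, 0 = wrongdoing); note that no credences appear in the check, since dominance holds whatever the agent's degrees of belief are.

```python
# A minimal sketch of Bykvist's dominance reasoning, with hypothetical
# ordinal utilities: utilities[option][theory], higher = more preferred
# by an agent who prefers right-doing to wrongdoing.
utilities = {
    "eat_meat":       {"business_as_usual": 1, "vegetarianism": 0},  # wrong if vegetarianism is true
    "eat_vegetables": {"business_as_usual": 1, "vegetarianism": 1},  # permissible on both theories
}

def weakly_dominates(a, b, theories):
    """True if a is at least as good as b under every theory, and strictly better under some."""
    at_least_as_good = all(utilities[a][t] >= utilities[b][t] for t in theories)
    strictly_better = any(utilities[a][t] > utilities[b][t] for t in theories)
    return at_least_as_good and strictly_better

theories = ["business_as_usual", "vegetarianism"]
print(weakly_dominates("eat_vegetables", "eat_meat", theories))  # True
```

Because the dominated option can be ruled out without settling which theory is true (or even fixing precise credences), this is the kind of case on which, as Bykvist notes, all plausible decision theories agree.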

Furthermore, MacAskill seems to suggest that there’s a correlation between (a) writers fully engaging with the project of working out action-guiding principles for decision-making under moral uncertainty and (b) writers considering the relevant “should” to be rational (rather than moral):

(Lockhart 2000, 24, 26), (Sepielli 2009, 10) and (Ross 2006) all take metanormative norms to be norms of rationality. (Weatherson 2014) and (Harman 2014) both understand metanormative norms as moral norms. So there is an odd situation in the literature where the defenders of metanormativism (Lockhart, Ross, and Sepielli) and the critics of the view (Weatherson and Harman) seem to be talking past one another.

A moral “should” may or may not work

I haven’t seen any writer (a) explicitly state that they understand the relevant “should” to be a moral one, and then (b) go on to fully engage with the project of working out meaningful, action-guiding principles for decision-making under moral uncertainty. Thus, I have an absence of evidence that one can engage in that project while seeing the “should” as moral, and I take this as (very weak) evidence that one can’t engage in that project while seeing the “should” that way.

Additionally, as noted above, MacAskill writes that Weatherson and Harman (who seem fairly dismissive of that project) see the relevant “should” as a moral one. Arguably, this is evidence that that project of finding such action-guiding principles won’t make sense if we see the “should” as moral (rather than rational). However, I consider this to also be very weak evidence, because:

  • It’s only two data points.
  • It’s just a correlation anyway.
  • I haven’t closely investigated the “correlation” myself. That is, I haven’t checked whether or not Weatherson and Harman’s reasons for dismissiveness seem highly related to them seeing the “should” as moral rather than rational.

Closing remarks

In this post, I’ve aimed to:

  • Clarify what is meant by the question “Is what we ‘ought to do’ under moral uncertainty an objective or subjective matter?”
  • Clarify what is meant by the question “Is that ‘ought’ a matter of rationality or of morality?”
  • Argue that it seems both more intuitive and more action-guiding to say that the “ought” is subjective.
  • Argue that whether the “ought” is a rational or a moral one may be a “merely verbal” dispute with no practical significance (but that interpreting the “ought” as a matter of rationality works in any case).

I hope this has helped give readers more clarity on the seemingly neglected matter of what we actually mean by moral uncertainty. (And as always, I’d welcome any feedback or comments!)

My next posts will continue in a similar vein, but this time building to the question of whether, when we’re talking about moral uncertainty, we’re actually talking about moral risk rather than about moral (Knightian) uncertainty - and whether such a distinction is truly meaningful. (To do so, I'll first discuss the risk-uncertainty distinction in general, and the related matter of unknown unknowns, before applying these ideas in the context of moral risk/uncertainty in particular.)


  1. But the current post is still relevant for many types of moral antirealist. As noted in my last post, this sequence will sometimes use language that may appear to endorse or presume moral realism, but this is essentially just for convenience. ↩︎

  2. We could further divide subjective normativity up into, roughly, “what one should do based on what one actually believes” and “what one should do based on what it would be reasonable for one to believe”. The following quote, while not directly addressing that exact distinction, seems relevant:

    Before moving on, we should distinguish subjective credences, that is, degrees of belief, from epistemic credences, that is, the degree of belief that one is epistemically justified in having, given one’s evidence. When I use the term ‘credence’ I refer to epistemic credences (though much of my discussion could be applied to a parallel discussion involving subjective credences); when I want to refer to subjective credences I use the term ‘degrees of belief’.

    The reason for this is that appropriateness seems to have some sort of normative force: if it is most appropriate for someone to do something, it seems that, other things being equal, they ought, in the relevant sense of ‘ought’, to do it. But people can have crazy beliefs: a psychopath might think that a killing spree is the most moral thing to do. But there’s no sense in which the psychopath ought to go on a killing spree: rather, he ought to revise his beliefs. We can only capture that idea if we talk about epistemic credences, rather than degrees of belief.

    (I found that quote in this comment, where it’s attributed to MacAskill’s BPhil thesis. Unfortunately, I can’t seem to access that thesis, including via Wayback Machine.) ↩︎

  3. Though note that Greaves and Cotton-Barratt write:

    Not everyone does recognise a subjective reading of the moral ‘ought’, even in the case of empirical uncertainty. One can distinguish between objectivist, (rational-)credence-relative and pluralist views on this matter. According to objectivists (Moore, 1903; Moore, 1912; Ross, 1930, p.32; Thomson, 1986, esp. pp. 177-9; Graham, 2010; Bykvist and Olson, 2011) (respectively, credence-relativists (Prichard, 1933; Ross, 1939; Howard-Snyder, 2005; Zimmermann, 2006; Zimmerman, 2009; Mason, 2013)), the “ought” of morality is uniquely an objective (respectively, a credence-relative) one. According to pluralists, “ought” is ambiguous between these two readings (Russell, 1966; Gibbard, 2005; Parfit, 2011; Portmore, 2011; Dorsey, 2012; Olsen, 2017), or varies between the two readings according to context (Kolodny and Macfarlane, 2010).

    ↩︎
  4. In the following quote, Bykvist provides what seems to me (if I’m interpreting it correctly) to be a different way of explaining something similar to the objective vs subjective distinction.

    One possible explanation of why so few philosophers have engaged with moral uncertainty might be serious doubt about whether it makes much sense to ask about what one ought to do when one is uncertain about what one ought to do. The obvious answer to this question might be thought to be: “you ought to do what you ought to do, no matter whether or not you are certain about it” (Weatherson, 2002, 2014). However, this assumes the same sense of “ought” throughout.

    A better option is to assume that there are different kinds of moral ought. We are asking what we morally ought to do, in one sense of ought, when we are not certain about what we morally ought to do, in another sense of ought. One way to make this idea more precise is to think about the different senses as different levels of moral ought. When we face a moral problem, we are asking what we morally ought to do, at the first level. Standard moral theories, such as utilitarianism, Kantianism, and virtue ethics, provide answers to this question. In a case of moral uncertainty, we are moving up one level and asking about what we ought to do, at the second level, when we are not sure what we ought to do at the first level. At this second level, we take into account our credence in various hypotheses about what we ought to do at the first level and what these hypotheses say about the moral value of each action (MacAskill et al., forthcoming). This second level ought provides a way to cope with the moral uncertainty at the first level. It gives us a verdict of how to best manage the risk of doing first order moral wrongs. That there is such a second‐level moral ought of coping with first‐order moral risks seems to be supported by the fact that agents are morally criticizable when they, knowing all the relevant empirical facts, do what they think is very likely to be a first‐order moral wrong when there is another option that is known not to pose any risk of such wrongdoing.

    Yet another (and I think similar) way of framing this sort of distinction could make use of the following two terms: “A criterion of rightness tells us what it takes for an action to be right (if it’s actions we’re looking at). A decision procedure is something that we use when we’re thinking about what to do” (Askell).

    Specifically, we might say that the true first-order moral theory provides objective “criteria of rightness”, but that we don’t have direct access to what these are. As such, we can use a second-order “decision procedure” that attempts to lead us to take actions that are as close as possible to the best actions (according to the unknown criteria of rightness). To do so, this decision procedure must make use of our credences (beliefs) in various moral theories, and is thus subjective. ↩︎

  5. Greaves & Cotton-Barratt write: “For the purpose of this article, we will [...] not take a stand on what kind of “should” [is involved in cases of moral uncertainty]. Our question is how the “should” in question behaves in purely extensional terms. Say that an answer to that question is a metanormative theory.”

    MacAskill writes: “I introduce the technical term ‘appropriateness’ in order to remain neutral on the issue of whether metanormative norms are rational norms, or some other sort of norms (though noting that they can’t be first-order norms provided by first-order normative theories, on pain of inconsistency).” ↩︎


Comments
What do we mean by “moral uncertainty”?

I was looking for a sentence like "We define moral uncertainty as ..." and nothing came up. Did I miss something?

I believe such a sentence is indeed lacking. One reason is that, as far as I can tell, there isn't really a crisp definition of moral uncertainty in terms of a small set of necessary and sufficient criteria. Instead, it's basically "Moral uncertainty is uncertainty about moral matters", which then has to be accompanied with a range of examples and counterexamples of the sort of thing we mean by that.

That's part of why I'm writing a series of posts on the various aspects of what we mean by moral uncertainty, rather than just putting a quick definition at the start of one post and then moving on to how to make decisions when morally uncertain. (Which is what I originally did for the earlier version of this other post, before receiving a comment there making a similar point to your one here! I think with such fuzzy terms it's somewhat hard to avoid such issues, though I do appreciate the feedback pushing me to keep trying harder :) )

Another reason such a sentence is lacking is that this post is intended to follow on from my prior one, where I open with a quote listing examples of moral uncertainties, and then write:

I consider the above quote a great starting point for understanding what moral uncertainty is; it gives clear examples of moral uncertainties, and contrasts these with related empirical uncertainties. From what I’ve seen, a lot of academic work on moral uncertainty essentially opens with something like the above, then notes that the rational approach to decision-making under empirical uncertainty is typically considered to be expected utility theory, then discusses various approaches for decision-making under moral uncertainty.
That’s fair enough, as no one article can cover everything, but it also leaves open some major questions about what moral uncertainty actually is.

So this post is meant to follow one in which many examples of moral uncertainty are given, and moral uncertainty is contrasted against various related concepts. Those together provide a better starting point than a "Moral uncertainty is defined as..." sentence can, given how fuzzy the concept of "moral uncertainty" is and how its definition would rely on other terms that do a lot of work (like what "moral matters" are).

But it's true that many people may read this post without having read that one, and without having a background familiarity with the term. So it may well be good to add near the start even just a sentence like "Moral uncertainty is uncertainty about moral matters", and perhaps an explicit note that I partly intend the meaning to become increasingly clear through the provision of various examples. I plan to touch up these posts once I'm done with the sequence of them, and I've made a note to maybe add something like that then.

It's also possible changing the title could help with that, but I didn't manage to think of anything that wasn't overly long or obscure and that better captured the content. (I did explicitly decide to avoid "What is moral uncertainty?", as that felt like even more of an oversell - one reasonably sized post can only tackle part of that question, not all of it.)

And if anyone has any particularly good ideas for snappy definitions or fitting titles, I'd be happy to hear them :)

Update: I'm now considering changing the title to "What kind of 'should' is involved in moral uncertainty?" It seems to me that's a bit of a weird title and it's less immediately apparent what it'd mean, but it might more accurately capture what's in this post. Open to people's thoughts on that.

I've just changed the title along those lines.

Just to give context for people reading the comments later, the original title was "What do we mean by "moral uncertainty"?", which I now realise poorly captured the contents of the post.

Instead, it's basically "Moral uncertainty is uncertainty about moral matters", which then has to be accompanied with a range of examples and counterexamples of the sort of thing we mean by that.

What need is there for a definition of "moral uncertainty"? Empirical uncertainty is uncertainty about empirical matters. Logical uncertainty is uncertainty about logical matters. Moral uncertainty is uncertainty about moral matters. These phrases mean these things in the same way that "red car" means a car that is red, and does not need a definition.

If one does not believe there are objective moral truths, then "Moral uncertainty is uncertainty about moral matters" might feel problematic. The problem lies not in "uncertainty" but in "moral matters". But that is an issue you have postponed.

In my experience, stating things outright and giving examples helps with communication. You might not need a definition, but the relevant question is: would it improve the text for other readers?

I agree to an extent. I do think, in practice, "It's like empirical uncertainty, but for moral stuff" really is sufficient for many purposes, for most non-philosophers. But, as commenters on a prior post of mine said, there are some issues not explained by that, which are potentially worth unpacking and which some people would like unpacked. For example...

You note the ambiguity with the term "moral matters", but there's also the ambiguity in the term "uncertainty" (e.g., the risk-uncertainty distinction people sometimes make, or different types of probabilities that might feed into uncertainties), which will be the subject of my next post. And when we talk about moral uncertainty, we very likely want to know what we "should" do given uncertainty, so what we mean by "should" there is also important and relevant, and, as covered in this post, is debated in multiple ways. And then, as you say, there's also the question of what moral uncertainty can mean for antirealists.

And as I covered in an earlier post, there are many other concepts which are somewhat similar to moral uncertainty, so it seems worth pulling those concepts apart (or showing where the lines really are just unclear/arbitrary). E.g., some philosophers seem fairly adamant that moral uncertainty must be treated totally differently to empirical uncertainty (e.g., arguing we basically just have to "Do what's actually right", even if we have no idea what that is, and can't meaningfully take into account our current best guesses as to moral matters). I'd argue (as would people like MacAskill and Tarsney) that realising how hard it is to separate moral and empirical uncertainty helps highlight why that view is flawed.

Do we even need the concept "moral uncertainty"? Would the more complete phrase "uncertainty of moral importance" be better, to distinguish it from "uncertainty of effects of an action", which is just plain old rational uncertainty?

Not sure I understand what you mean there. The term "moral uncertainty" is (I believe) meant to be analogous to the term "empirical uncertainty", which was already established, and I think it covers what you mean by "uncertainty of moral importance", so I'm not sure why we'd come up with another, different-sounding, longer term.

Also, "uncertainty of moral importance" might make it sound like we want to just separately consider how morally important each given act may be. But it could be far more efficient to think that we're "morally uncertain" about things like the moral status of animals or whether to believe utilitarianism or virtue ethics, and then have our judgement of the "moral importance" of many different actions informed by that more general moral uncertainty. So I think "moral uncertainty" is also clearer/less misleading.

This is again analogous to empirical uncertainty, I believe. We don't want to just track our uncertainty about the effects of each given action. It's more natural and efficient to also track our uncertainty about certain states of the world (e.g., how many people are working on AGI and how many are working on AI safety), and have that feed into our uncertainty about the effects of specific actions (e.g. funding a certain AI safety project).

I also don't believe I've come across the term "rational uncertainty" before. It seems to me that we'd have empirical and moral uncertainty (as well as perhaps some other types of uncertainty, like meta-ethical uncertainty), and then put that together with a decision theory (which we may also have some uncertainty about), and get out what we rationally should do. See my two prior posts. I guess being uncertain about rationality might be like being uncertain about what decision theory to use to translate preferences and probability distributions into actions, but then we should call that decision-theoretic uncertainty. Or perhaps you mean "cases in which it is rational to be uncertain", in which case it seems that would be a subset of all other types of uncertainty.

Let me know if I'm misunderstanding you, though.

30 seconds of googling gave me this link, which might not be anything exceptional but at least it offers a couple of relevant definitions:

what should I do, given that I don’t know what I should do?

and

what should I do when I don’t know what I should do?

and later a more focused question

what am I (or we) permitted to do, given that I (or we) don’t know what I (or we) are permitted to do

At least they define what they are working on...

Those questions all help point to the concept at hand, but they're actually all about decision-making under moral uncertainty, rather than moral uncertainty itself. In the same way, empirical uncertainty is uncertainty about things like whether a stock will increase in price tomorrow, which can then be blended with other things (like decision theory and your preferences) to answer questions like "What should I do, given that I don't know whether this stock will increase in price tomorrow?"

I did start with a post on decision-making under moral uncertainty, but then got the feedback (which I've now realised was very much on point) that it would be worth stepping back quite a bit to discuss what moral uncertainty itself actually is.

Additionally, I'd say that none of those quoted questions at all disentangle moral from empirical uncertainty. For example, I could be 100% certain in some moral theory where infringing people's rights is bad but everything else is fine, but still not know what I should do, because I don't know which of a set of actions is least likely to end up infringing rights (an empirical uncertainty). So it'd be necessary to modify those questions to something like "What should I do, given that I don’t know what's morally right, despite knowing the relevant empirical facts?" ...which now involves two other terms worth defining/distinguishing, and so here we're getting into the complexities I mentioned :) (And back into the sort of stuff that my post prior to this one unpacked.)

But all that said, I think it probably is a good idea to open this post with something to point at the concept at hand, for those readers who didn't read the prior post and are relatively unfamiliar with the term "moral uncertainty". So I've added two short sentences at the start to accomplish that objective.

(For anyone who's for some reason interested, the original version of this post is here.)