Moral uncertainty
Written by joaolkf, steven0461, Ruby, Kaj_Sotala, et al. Last updated 4th Oct 2021

Moral uncertainty (or normative uncertainty) is uncertainty about what we ought, morally, to do given the diversity of moral doctrines.[1] For example, suppose that we knew for certain that a new technology would enable more humans to live on another planet with slightly less well-being than on Earth. An average utilitarian would consider these consequences bad, while a total utilitarian would endorse such technology. If we are uncertain about which of these two theories is right, what should we do?

Moral uncertainty involves a level of uncertainty above the more usual empirical uncertainty of what to do given incomplete information, since it also deals with uncertainty about which moral theory is right. Even with complete information about the world, this kind of uncertainty would still remain. At the first level, one can have doubts about how to act because not all the relevant empirical information is available, for example when choosing whether or not to implement a new technology (e.g. AGI, biological cognitive enhancement, mind uploading) without fully knowing its consequences and nature. But even if we ideally came to know each and every consequence of a new technology, we would still need to know which ethical perspective is the right one for evaluating those consequences.

One approach is to follow only the most probable theory. This has its own problems. For example, what if the most probable theory points only weakly in one direction, while other theories point strongly the other way? A better approach is to “perform the action with the highest expected moral value. We get the expected moral value of an action by multiplying the subjective probability that some theory is true by the value of that action if it is true, doing the same for all of the other theories, and adding up the results.”[1] However, we would still need a method of comparing value across theories, since a utilon in one theory may not be the same as a utilon in another theory. Outside consequentialism, many ethical theories don't use utilons or even any quantifiable values. This is still an open problem.[2]
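
To make the arithmetic concrete, here is a minimal Python sketch of "maximize expected moral value", using the planet example above. It assumes the hard part is already solved: the numbers pretend that values under total and average utilitarianism sit on a shared scale, which is precisely the open problem just described. All names and figures are illustrative.

```python
# Illustrative credences over two moral theories (made-up numbers).
credences = {"total_utilitarianism": 0.6, "average_utilitarianism": 0.4}

# value_table[theory][action]: moral value of the action *if* that theory is
# true, expressed on a shared intertheoretic scale (the assumed-away problem).
value_table = {
    "total_utilitarianism":   {"colonize_planet": 10.0, "stay_on_earth": 0.0},
    "average_utilitarianism": {"colonize_planet": -20.0, "stay_on_earth": 0.0},
}

def expected_moral_value(action):
    """Sum over theories of P(theory) * value of the action under that theory."""
    return sum(p * value_table[theory][action] for theory, p in credences.items())

actions = ["colonize_planet", "stay_on_earth"]
for a in actions:
    print(a, expected_moral_value(a))  # colonize_planet -2.0, stay_on_earth 0.0
print("chosen:", max(actions, key=expected_moral_value))  # stay_on_earth
```

Note that the more probable theory (total utilitarianism, credence 0.6) favors colonization, yet the expected-value rule picks the other action, because the less probable theory objects strongly. This is exactly the case that "follow the most probable theory" gets wrong.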

Nick Bostrom and Toby Ord have proposed a parliamentary model. In this model, each theory sends a number of delegates to a parliament in proportion to its probability. The theories then bargain for support as if the probability of each action were proportional to its votes. However, the actual output is always the action with the most votes. Bostrom and Ord's proposal lets probable theories determine most actions, but still gives less probable theories influence on issues they consider unusually important.
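
The voting stage of the model is easy to sketch; the bargaining stage is left informal in Bostrom and Ord's proposal, so the toy Python below omits it and shows only delegate allocation plus the plurality decision rule. Theory names, credences, and preferences are invented for illustration.

```python
from collections import Counter

PARLIAMENT_SIZE = 100  # arbitrary; only the proportions matter

# Illustrative credences in three theories (they sum to 1).
credences = {"theory_A": 0.55, "theory_B": 0.30, "theory_C": 0.15}

# Each theory's preferred action on one particular issue.
preferred_action = {"theory_A": "act_1", "theory_B": "act_2", "theory_C": "act_2"}

# Delegates are allocated in proportion to each theory's probability.
delegates = {t: round(p * PARLIAMENT_SIZE) for t, p in credences.items()}

# Without bargaining, every delegate simply votes its own theory's preference.
votes = Counter()
for theory, n in delegates.items():
    votes[preferred_action[theory]] += n

winner, _ = votes.most_common(1)[0]
print(delegates)      # {'theory_A': 55, 'theory_B': 30, 'theory_C': 15}
print(dict(votes))    # {'act_1': 55, 'act_2': 45}
print("parliament chooses:", winner)  # act_1
```

Without bargaining this collapses into majority rule on every issue; the bargaining stage, in which delegates trade support across issues, is what gives low-probability theories leverage on the questions they weight most heavily, and formalizing it remains an open problem.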

Even with a high degree of moral uncertainty and a wide range of possible moral theories, there are still certain actions that seem highly valuable under any theory. Bostrom argues that existential risk reduction is among them, showing that it is not only the most important task given most versions of consequentialism but is also highly recommended by many other widely accepted moral theories.[3]
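
One way to see why such actions exist: if some action attains the maximum value under every theory you assign credence to, it maximizes expected moral value no matter how that credence is distributed. Here is a small Python sketch of this dominance check, with made-up theories and values that merely mimic the shape of the argument; note that it only compares values within each theory, so it needs no intertheoretic scale.

```python
# value_table[theory][action]: illustrative values only, not Bostrom's figures.
value_table = {
    "consequentialism": {"reduce_x_risk": 9, "status_quo": 2},
    "deontology":       {"reduce_x_risk": 5, "status_quo": 4},
    "virtue_ethics":    {"reduce_x_risk": 6, "status_quo": 3},
}

def dominant_actions(table):
    """Return the actions that attain the maximum value under every theory."""
    actions = next(iter(table.values())).keys()
    return [a for a in actions
            if all(vals[a] == max(vals.values()) for vals in table.values())]

print(dominant_actions(value_table))  # ['reduce_x_risk']
```

A dominant action wins under any credence distribution over these theories, which is what makes it robust to moral uncertainty; even without full dominance, an action that is merely near-best under every theory is hard to beat in expectation.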

External links

  • Moral uncertainty — towards a solution?

Sequences

  • Moral uncertainty

See also

  • Value learning
  • Metaethics

References

  1. Crouch, William (2010). "Moral Uncertainty and Intertheoretic Comparisons of Value". BPhil Thesis, p. 6. Available at: http://oxford.academia.edu/WilliamCrouch/Papers/873903/Moral_Uncertainty_and_Intertheoretic_Comparisons_of_Value
  2. Sepielli, Andrew (2008). "Moral Uncertainty and the Principle of Equity among Moral Theories". ISUS-X, Tenth Conference of the International Society for Utilitarian Studies, Kadish Center for Morality, Law and Public Affairs, UC Berkeley. Available at: http://escholarship.org/uc/item/7h5852rr.pdf
  3. Bostrom, Nick (2012). "Existential Risk Reduction as the Most Important Task for Humanity". Global Policy, forthcoming, p. 22. Available at: http://www.existential-risk.org/concept.pdf