## LessWrong

Okay, this seems like a crux of our disagreement. This statement seems pretty much equivalent to my statement #1 in almost all practical contexts. Can you point out how you think they differ?

This stuff is definitely a bit tricky to talk about, since people can use the word "should" in different ways. I think that sometimes when people say "You should do X if you want Y" they do basically just mean to say "If you do X you will receive Y." But it doesn't seem to me like this is always the case.

A couple examples:

1. "Bayesian updating has a certain asymptotic convergence property, in the limit of infinite experience and infinite compute. So if you want to understand the world, you should be a Bayesian."

If the first and second sentence were meant to communicate the same thing, then the second would be totally vacuous given the first. Anyone who accepted the first sentence could not intelligibly disagree with or even really consider disagreeing with the second. But I don't think that people who say things like this typically mean for the second sentence to be vacuous or typically regard disagreement as unintelligible.

Suppose, for example, that I responded to this claim by saying something like: "I disagree. Since we only have finite lives, asymptotic convergence properties don't have direct relevance. I think we should instead use a different 'risk averse' updating rule that, for agents with finite lives, more strongly reduces the likelihood of ending up with especially inaccurate beliefs about key features of the world."

The speaker might think I'm wrong. But if the speaker thinks that what I'm saying constitutes intelligible disagreement with their claim, then it seems like this means their claim is in fact a distinct normative one.

2. (To someone with no CS background) "If you want to understand the world, you should be a Bayesian."

If this sentence were meant to communicate the same thing as the claim about asymptotic convergence, then the speaker shouldn't expect the listener to understand what they're saying (even if the speaker has already explained what it means to be a Bayesian). Most people don't naturally understand or care at all about asymptotic convergence properties.


Format warning: This post has somehow ended up consisting primarily of substantive endnotes. It should be fine to read just the (short) main body without looking at any of the endnotes, though. The endnotes elaborate on various claims and distinctions and also include a much longer discussion of decision theory.

Thank you to Pablo Stafforini, Phil Trammell, Johannes Treutlein, and Max Daniel for comments on an initial draft. I have also slightly edited the post since I first published it, to try to make a few points clearer.

When discussing normative questions, it is not uncommon for members of the rationalist community to identify as anti-realists. But normative anti-realism seems to me to be in tension with some of the community's core interests, positions, and research activities. In this post I suggest that the cost of rejecting realism may be larger than is sometimes recognized. [1]

1. Realism and Anti-Realism

Everyone is, at least sometimes, inclined to ask: “What should I do?”

We ask this question when we're making a decision and it seems like there are different considerations to be weighed up. You might be considering taking a new job in a new city, for example, and find yourself wondering how to balance your preferences with those of your significant other. You might also find yourself thinking about whether you have any obligation to do impactful work, about whether it’s better to play it safe or take risks, about whether it's better to be happy in the moment or to be able to look back with satisfaction, and so on. It’s almost inevitable that in a situation like this you will find yourself asking “What should I do?” and reasoning about it as though the question has an answer you can approach through a certain kind of directed thought.[2]

But it’s also conceivable that this sort of question doesn’t actually have an answer. Very roughly, at least to certain philosophers, realism is a name for the view that there are some things that we should do or think. Anti-realism is a name for the view that there are not.[3][4][5][6]

2. Anti-Realism and the Rationality Community

In discussions of normative issues, it seems not uncommon for members of the rationalist community to identify as “anti-realists.” Since people in different communities can obviously use the same words to mean different things, I don't know what fraction of rationalists have the same thing in mind when they use the term "anti-realism."

To the extent people do have the same thing in mind, though, I find anti-realism hard to square with a lot of other views and lines of research that are popular within the community. A few main points of tension stand out to me.

2.1 Normative Uncertainty

One first point of tension is the community’s relatively strong interest in the subject of normative uncertainty. At least as it's normally discussed in the philosophy literature, normative uncertainty is uncertainty about normative facts that bear on what we should do. If we assume that anti-realism is true, though, then we are assuming that there are no such facts. It seems to me like a committed anti-realist could not be in a state of normative uncertainty.

It may still be the case, as Sepielli (2012) suggests, that a committed anti-realist can experience psychological states that are interestingly structurally analogous to states of normative uncertainty. However, Bykvist and Olson (2012) disagree, in my view fairly forcefully, and Sepielli is in any case clear that: “Strictly speaking, there cannot be such a thing as normative uncertainty if non-cognitivism [the dominant form of anti-realism] is true.”[7]

2.2 Strongly Endorsed Normative Views

A second point of tension is the existence of a key set of normative claims that a large portion of the community seems to treat as true.

One of these normative claims is the Bayesian claim that we ought to have degrees of belief in propositions that are consistent with the Kolmogorov probability axioms and that are updated in accordance with Bayes’ rule. It seems to me like very large portions of the community self-identify as Bayesians and regard other ways of assigning and updating degrees of belief in propositions as not just different but incorrect.
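For concreteness, the updating norm in question is just Bayes' rule applied to degrees of belief. A minimal sketch (the prior and likelihoods below are made-up numbers, purely for illustration):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | ~H) * (1 - P(H)).

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Return the posterior probability of hypothesis H after observing evidence E."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Illustrative numbers: prior P(H) = 0.5, P(E | H) = 0.8, P(E | ~H) = 0.2.
posterior = bayes_update(0.5, 0.8, 0.2)
print(posterior)  # 0.8
```

The normative claim at issue is not that this computation exists, but that credences *ought* to be revised this way rather than by some alternative rule.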

Another of these normative claims is the subjectivist claim that we should do whatever would best fulfill some version of our current preferences. To learn what we should do, on this view, the main thing is to introspect about our own preferences.[8] Whether or not a given person should commit a violent crime, for instance, depends purely on whether they want to commit the crime (or perhaps on whether they would want to commit it if they went through some particular process of reflection).

A further elaboration on this claim is that, when we are uncertain about the outcomes of our actions, we should more specifically act to maximize the expected fulfillment of our desires. We should consider the different possible outcomes of each action, assign them probabilities, assign them desirability ratings, and then use the expected value formula to rate the overall goodness of the action. Whichever action has the best overall rating is the one we should take.
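The procedure described above can be sketched as follows (the action names, probabilities, and desirability ratings are hypothetical, chosen only to illustrate the expected value formula):

```python
def expected_value(outcomes):
    """outcomes: list of (probability, desirability) pairs for one action."""
    return sum(p * d for p, d in outcomes)

def best_action(actions):
    """actions: dict mapping an action name to its list of (probability, desirability) pairs."""
    return max(actions, key=lambda name: expected_value(actions[name]))

# Hypothetical numbers for the job-change decision discussed earlier:
actions = {
    "take_new_job": [(0.6, 10), (0.4, -5)],  # EV = 0.6*10 + 0.4*(-5) = 4.0
    "stay_put":     [(1.0, 3)],              # EV = 3.0
}
print(best_action(actions))  # take_new_job
```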

One possible way of squaring an endorsement of anti-realism with an apparent endorsement of these normative claims is to argue that people don’t actually have normative claims in mind when they write and talk about these issues. Non-cognitivists -- a particular variety of anti-realists -- argue that many utterances that seem at first glance like claims about normative facts are in fact nothing more than expressions of attitudes. For instance, an emotivist -- a further sub-variety of non-cognitivist -- might argue that the sentence “You should maximize the expected fulfillment of your current desires!” is simply a way of expressing a sense of fondness toward this course of action. The sentence might be cashed out as being essentially equivalent in content to the sentence, “Hurrah, maximizing the expected fulfillment of your current desires!”

Although a sizeable portion of philosophers are non-cognitivists, I generally don’t find it very plausible as a theory of what people are trying to do when they seem to make normative claims.[9] In this case it doesn’t feel to me like most members of the rationalist community are just trying to describe one particular way of thinking and acting, which they happen to prefer to others. It seems to me, rather, that people often talk about updating your credences in accordance with Bayes' rule and maximizing the expected fulfillment of your current desires as the correct things to do.

One more thing that stands out to me is that arguments for anti-realism often seem to be presented as though they implied (rather than negated) the truth of some of these normative claims. For example, the popular "Replacing Guilt" sequence on Minding Our Way seems to me to repeatedly attack normative realism. It rejects the idea of "shoulds" and points out that there aren't "any oughtthorities to ordain what is right and what is wrong." But then it seems to draw normative implications out of these attacks: among other implications, you should "just do what you want." At least taken at face value, this line of reasoning wouldn't be valid. It makes no more sense than reasoning that, if there are no facts about what we should do, then we should "just maximize total hedonistic well-being” or "just do the opposite of what we want” or "just open up souvenir shops.” Of course, though, there's a good chance that I'm misunderstanding something here.

2.3 Decision Theory Research

A third point of tension is the community's engagement with normative decision theory research. Different normative decision theories pick out different necessary conditions for an action to be the one that a given person should take, with a focus on how one should respond to uncertainty (rather than on what ends one should pursue).[10][11]

A typical version of CDT says that the action you should take at a particular point in time is the one that would cause the largest expected increase in value (under some particular framework for evaluating causation). A typical version of EDT says that the action you should take at a particular point in time is the one that would, once you take it, allow you to rationally expect the most value. There are also alternative versions of these theories -- for instance, versions using risk-weighted expected value maximization or the criterion of stochastic dominance -- that break from the use of pure expected value.
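The divergence between the two theories can be made concrete with the standard Newcomb payoffs. A minimal sketch (the predictor's accuracy is an assumed parameter, not something fixed by the thought experiment):

```python
# Standard Newcomb payoffs: the opaque box holds $1,000,000 iff the predictor
# foresaw one-boxing; the transparent box always holds $1,000.
ACCURACY = 0.99  # assumed predictor accuracy

# EDT: condition the opaque box's contents on the action actually taken.
edt_one_box = ACCURACY * 1_000_000
edt_two_box = ACCURACY * 1_000 + (1 - ACCURACY) * (1_000_000 + 1_000)

# CDT: the box's contents are causally fixed before the choice. For any prior
# probability p_full that the opaque box is full, two-boxing adds $1,000.
def cdt_values(p_full):
    one_box = p_full * 1_000_000
    two_box = p_full * 1_000_000 + 1_000
    return one_box, two_box

print(edt_one_box > edt_two_box)  # True: EDT recommends one-boxing
one, two = cdt_values(0.5)
print(two > one)                  # True: CDT recommends two-boxing
```

The disagreement, note, is not about the arithmetic; it is about which conditional expectation is the one an agent *should* maximize.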

I've pretty frequently seen it argued within the community (e.g. in the papers “Cheating Death in Damascus” and “Functional Decision Theory”) that CDT and EDT are not “correct" and that some other new theory such as functional decision theory is. But if anti-realism is true, then no decision theory is correct.

Eliezer Yudkowsky's influential early writing on decision theory seems to me to take an anti-realist stance. It suggests that we can only ask meaningful questions about the effects and correlates of decisions. For example, in the context of the Newcomb thought experiment, we can ask whether one-boxing is correlated with winning more money. But, it suggests, we cannot take a step further and ask what these effects and correlations imply about what it is "reasonable" for an agent to do (i.e. what they should do). This question -- the one that normative decision theory research, as I understand it, is generally about -- is seemingly dismissed as vacuous.

If this apparently anti-realist stance is widely held, then I don't understand why the community engages so heavily with normative decision theory research or why it takes part in discussions about which decision theory is "correct." It strikes me as a bit like an atheist enthusiastically following theological debates about which god is the true god. But I'm mostly just confused here.[12][13]

3. Sympathy for Realism

I wouldn’t necessarily describe myself as a realist. I get that realism is a weird position. It’s both metaphysically and epistemologically suspicious. What is this mysterious property of “should-ness” that certain actions are meant to possess -- and why would our intuitions about which actions possess it be reliable?[14][15]

But I am also very sympathetic to realism and, in practice, tend to reason about normative questions as though I were a full-throated realist. My sympathy for realism and tendency to think as a realist largely stems from my perception that if we reject realism and internalize this rejection then there’s really not much to be said or thought about anything. We can still express attitudes at one another, for example suggesting that we like certain actions or credences in propositions better than others. We can present claims about the world, without any associated explicit or implicit belief that others should agree with them or respond to them in any particular way. And that seems to be about it.

Furthermore, if anti-realism is true, then it can’t also be true that we should believe that anti-realism is true. Belief in anti-realism seems to undermine itself. Perhaps belief in realism is self-undermining in a similar way -- if seemingly correct reasoning leads us to account for all the ways in which realism is a suspect position -- but the negative feedback loop in this case at least seems to me to be less strong.[16]

I think that realism warrants more respect than it has historically received in the rationality community, at least relative to the level of respect it gets from philosophers.[17] I suspect that some of this lack of respect might come from a relatively weaker awareness of the cost of rejecting realism or of the way in which belief in anti-realism appears to undermine itself.

1. I'm basing the views I express in this post primarily on Derek Parfit’s writing, specifically his book On What Matters. For this reason, it seems pretty plausible to me that there are some important points I've missed by reading too narrowly. In addition, it also seems likely that some of the ways in which I talk about particular issues around normativity will sound a bit foreign or just generally “off” to people who are highly familiar with some of these issues. One unfortunate reason for this is that the study of normative questions and of the nature of normativity seems to me to be spread out pretty awkwardly across the field of philosophy, with philosophers in different sub-disciplines often discussing apparently interconnected questions in significant isolation from one another while using fairly different terminology. This means that (e.g.) meta-ethics and decision theory are seldom talked about at the same time and are often talked about in ways that make it difficult to see how they fit together. A major reason I am leaning on Parfit’s work is that he is -- to my knowledge -- one of relatively few philosophers to have tried to approach questions around normativity through a single unified framework. ↩︎

2. This is a point that is also discussed at length in David Enoch’s book Taking Morality Seriously (pgs. 70-73):

Perhaps...we are essentially deliberative creatures. Perhaps, in other words, we cannot avoid asking ourselves what to do, what to believe, how to reason, what to care about. We can, of course, stop deliberating about one thing or another, and it’s not as if all of us have to be practical philosophers (well, if you’re reading this book, you probably are, but you know what I mean). It’s opting out of the deliberative project as a whole that may not be an option for us….

[Suppose] law school turned out not to be all you thought it would be, and you no longer find the prospects of a career in law as exciting as you once did. For some reason you don’t seem to be able to shake off that old romantic dream of studying philosophy. It seems now is the time to make a decision. And so, alone, or in the company of some others you find helpful in such circumstances, you deliberate. You try to decide whether to join a law firm, apply to graduate school in philosophy, or perhaps do neither.

The decision is of some consequence, and so you resolve to put some thought into it. You ask yourself such questions as: Will I be happy practicing law? Will I be happier doing philosophy? What are my chances of becoming a good lawyer? A good philosopher? How much money does a reasonably successful lawyer make, and how much less does a reasonably successful philosopher make? Am I, so to speak, more of a philosopher or more of a lawyer? As a lawyer, will I be able to make a significant political difference? How important is the political difference I can reasonably expect to make? How important is it to try and make any political difference? Should I give any weight to my father’s expectations, and to the disappointment he will feel if I fail to become a lawyer? How strongly do I really want to do philosophy? And so on. Even with answers to most – even all – of these questions, there remains the ultimate question. “All things considered”, you ask yourself, “what makes best sense for me to do? When all is said and done, what should I do? What shall I do?”

When engaging in this deliberation, when asking yourself these questions, you assume, so it seems to me, that they have answers. These answers may be very vague, allow for some indeterminacy, and so on. But at the very least you assume that some possible answers to these questions are better than others. You try to find out what the (better) answers to these questions are, and how they interact so as to answer the arch-question, the one about what it makes most sense for you to do. You are not trying to create these answers. Of course, in an obvious sense what you will end up doing is up to you (or so, at least, both you and I are supposing here). And in another, less obvious sense, perhaps the answer to some of these questions is also up to you. Perhaps, for instance, how happy practicing law will make you is at least partly up to you. But, when trying to make up your mind, it doesn’t feel like just trying to make an arbitrary choice. This is just not what it is like to deliberate. Rather, it feels like trying to make the right choice. It feels like trying to find the best solution, or at least a good solution, or at the very least one of the better solutions, to a problem you’re presented with. What you’re trying to do, it seems to me, is to make the decision it makes most sense for you to make. Making the decision is up to you. But which decision is the one it makes most sense for you to make is not. This is something you are trying to discover, not create. Or so, at the very least, it feels like when deliberating.

↩︎
3. Specifically, the two relevant views can be described as realism and anti-realism with regard to “normativity.” We can divide the domain of “normativity” up into the domains of “practical rationality,” which describes what actions people should take, and “epistemic rationality,” which describes which beliefs or degrees of belief people should hold. The study of ethics, decision-making under uncertainty, and so on can then all be understood as sub-components of the study of practical rationality. For example, one view on the study of ethics is that it is the study of how factors other than one’s own preferences might play roles in determining what actions one should take. It should be noted that terminology varies very widely though. For example, different authors seem to use the word "ethics" more or less inclusively. The term "moral realism" also sometimes means roughly the same thing as "normative realism," as I've defined it here, and sometimes picks out a more specific position. ↩︎

4. As an edit to the initial post, I think it's probably worth saying more about the concept of "moral realism" in relation to "normative realism." Depending on the context, "moral realism" might be taken to refer to: (a) normative realism, (b) realism about practical rationality (not just epistemic rationality), (c) realism about practical rationality combined with the object-level belief that people should do more than just try to satisfy their own personal preferences, or (d) something else in this direction.

One possible reason the term lacks a consensus definition is that, perhaps surprisingly, many contemporary "moral realists" aren't actually very preoccupied with the concept of "morality." Popular books like Taking Morality Seriously, On What Matters, and The Normative Web spend most of their energy defending normative realism, more broadly, and my impression is that their critics spend most of their energy attacking normative realism more broadly. One reason for this shift in focus toward normative realism is the realization that, on almost any conception of "moral realism," nearly all of the standard metaphysical and epistemological objections to "moral realism" also apply just as well to normative realism in general. Another reason is that any possible distinction between moral and normative-but-not-moral facts doesn't seem like it could have much practical relevance: If we know that we should take some action, then we know that we should take it; we have no obvious additional need to know or care whether this normative fact warrants the label "moral fact" or not. Here, for example, is David Enoch, in Taking Morality Seriously, on the concept of morality (pg. 86):

What more...does it take for a normative truth (or falsehood) to qualify as moral? Morality is a particular instance of normativity, and so we are now in effect asking about its distinctive characteristics, the ones that serve to distinguish between the moral and the rest of the normative. I do not have a view on these special characteristics of the moral. In fact, I think that for most purposes this is not a line worth worrying about. The distinction within the normative between the moral and the non-moral seems to me to be shallow compared to the distinction between the normative and the non-normative - both philosophically, and, as I am about to argue, practically. (Once you know you have a reason to X and what this reason is, does it really matter for your deliberation whether it qualifies as a moral reason?)

↩︎
5. There are two major strands of anti-realism. Error theory (sometimes equated with “nihilism”) asserts that all claims that people should do particular things or refrain from doing particular things are false. Non-cognitivism asserts that utterances of the form “A should do X” typically cannot even really be understood as claims; they're not the sort of thing that could be true or false. ↩︎

6. In this post, for simplicity, I’m talking about normativity using binary language. Either it’s the case that you “should” take an action or it’s not the case that you “should” take it. But we might also talk in less binary terms. For example, there may be some actions that you merely have “more reason” to take than others. ↩︎

7. In Sepielli’s account, for example, the experience of feeling extremely in favor of blaming someone a little bit for taking an action X is analogous to the experience of being extremely confident that it is a little bit wrong to take action X. This account is open to at least a few objections, such as the objection that degrees of favorability don’t -- at least at first glance -- seem to obey the standard axioms of probability theory. Even if we do accept the account, though, I still feel unclear about the proper method and justification for converting debates around normative uncertainty into debates around these other kinds of psychological states. ↩︎

8. If my memory is correct, one example of a context in which I have encountered this subjectivist viewpoint is in a CFAR workshop. One lesson instructs attendees that if it seems like they “should” do something, but then upon reflection they realize they don’t want to do it, then it’s not actually true that they should do it. ↩︎

9. The PhilPapers survey suggests that about a quarter of both normative ethicists and applied ethicists also self-identify as anti-realists, with the majority of them presumably leaning toward non-cognitivism over error theory. It’s still an active matter of debate whether non-cognitivists have sensible stories about what people are trying to do when they seem to be discussing normative claims. For example, naive emotivist theories stumble in trying to explain sentences like: “It's not true that either you should do X or you should do Y.” ↩︎

10. There is also non-normative research that falls under the label “decision theory,” which focuses on exploring the ways in which people do in practice make decisions or neutrally exploring the implications of different assumptions about decision-making processes. ↩︎

11. Arguably, even in academic literature, decision theories are often discussed under the implicit assumption that some form of subjectivism is true. However, it is also very easy to modify the theories to be compatible with theories that tell you to take into account things beyond your current desires. Value might be equated with one’s future welfare, for example, or with the total future welfare of all conscious beings. ↩︎

12. However, in keeping with the above endnote, community work on decision theory only sometimes seems to be pitched (as it is in the abstract of this paper) as an exploration of normative principles. It is also sometimes pitched as an exploration of how different “algorithms” “perform” across relevant scenarios. This exploration doesn't seem to me to have any direct link to the core academic decision theory literature and, given a sufficiently specific performance metric, does not seem to be inherently normative. I'm actually more optimistic, then, about this line of research having implications for AI development. Nonetheless, for reasons similar to the ones described in the post “Decision Theory Anti-Realism,” I'm still not very optimistic. In the cases that are being considered, the answer to the question “Which algorithm performs best?” will depend on subtle variations in the set of counterfactuals we consider when judging performance; different algorithms come out on top for different sets of counterfactuals. For example, in a prisoner’s dilemma, the best-performing algorithm will vary depending on whether we are imagining a counterfactual world where just one agent was born with a different algorithm or a counterfactual world where both agents were born with different algorithms. It seems unclear to me where we go from here except perhaps to list several different sets of imaginary counterfactuals and note which algorithms perform best relative to them.

13. Wolfgang Schwarz and Will MacAskill also make similar points, regarding the sensitivity of comparisons of algorithmic performance, in their essays on FDT. Schwarz writes:

Yudkowsky and Soares constantly talk about how FDT "outperforms" CDT, how FDT agents "achieve more utility", how they "win", etc. As we saw above, it is not at all obvious that this is true. It depends, in part, on how performance is measured. At one place, Yudkowsky and Soares are more specific. Here they say that "in all dilemmas where the agent's beliefs are accurate [??] and the outcome depends only on the agent's actual and counterfactual behavior in the dilemma at hand -- reasonable constraints on what we should consider "fair" dilemmas -- FDT performs at least as well as CDT and EDT (and often better)". OK. But how should we understand "depends on ... the dilemma at hand"? First, are we talking about subjunctive or evidential dependence? If we're talking about evidential dependence, EDT will often outperform FDT. And EDTers will say that's the right standard. CDTers will agree with FDTers that subjunctive dependence is relevant, but they'll insist that the standard Newcomb Problem isn't "fair" because here the outcome (of both one-boxing and two-boxing) depends not only on the agent's behavior in the present dilemma, but also on what's in the opaque box, which is entirely outside her control. Similarly for all the other cases where FDT supposedly outperforms CDT. Now, I can vaguely see a reading of "depends on ... the dilemma at hand" on which FDT agents really do achieve higher long-run utility than CDT/EDT agents in many "fair" problems (although not in all). But this is a very special and peculiar reading, tailored to FDT. We don't have any independent, non-question-begging criterion by which FDT always "outperforms" EDT and CDT across "fair" decision problems.

[A]rguing that FDT does best in a class of ‘fair’ problems, without being able to define what that class is or why it’s interesting, is a pretty weak argument. And, even if we could define such a class of cases, claiming that FDT ‘appears to be superior’ to EDT and CDT in the classic cases in the literature is simply begging the question: CDT adherents claims that two-boxing is the right action (which gets you more expected utility!) in Newcomb’s problem; EDT adherents claims that smoking is the right action (which gets you more expected utility!) in the smoking lesion. The question is which of these accounts is the right way to understand ‘expected utility’; they’ll therefore all differ on which of them do better in terms of getting expected utility in these classic cases.

↩︎
14. In my view, the epistemological issues are the most severe ones. I think Sharon Street’s paper A Darwinian Dilemma for Realist Theories of Value, for example, presents an especially hard-to-counter attack on the realist position on epistemological grounds. She argues that, in the light of the view that our brains evolved via natural selection, and natural selection did not and could not have directly selected for the accuracy of our normative intuitions, it is extremely difficult to construct a compelling explanation for why our normative intuitions should be correlated in any way with normative facts. This technically leaves open the possibility of there being non-trivial normative facts, without us having any way of perceiving or intuiting them, but this state of affairs would strike most people as absurd. Although some realists, including Parfit, have attempted to counter Street’s argument, I’m not aware of anyone who I feel has truly succeeded. Street's argument pretty much just seems to work to me. ↩︎

15. These metaphysical and epistemological issues become less concerning if we accept some version of “naturalist realism,” which asserts that all normative claims can be reduced to claims about the natural world (i.e. claims about physical and psychological properties) and therefore tested in roughly the same way we might test any other claim about the natural world. However, this view seems wrong to me.

The bluntest objection to naturalist realism is what's sometimes called the "just-too-different" objection. This is the objection that, to many and perhaps most people, normative claims are just obviously a different sort of claim. No one has ever felt any inclination to evoke an "is/is-made-of-wood divide" or an "is/is-illegal-in-Massachusetts divide," because the property of being made of wood and the property of being illegal in Massachusetts are obviously properties of the standard (natural) kind. But references to the "is/ought divide" -- or, equivalently, the distinction between the "positive" and the "normative" -- are commonplace and don't typically provoke blank stares. Normative discussions are, seemingly, about something above-and-beyond and distinct from discussions of the physical and psychological aspects of a situation. When people debate whether or not it's "wrong" to support the death penalty or "wrong" for women to abort unwanted pregnancies, for example, it seems obvious that physical and psychological facts are typically not the core (or at least only) thing in dispute.

G.E. Moore’s “Open Question Argument" elaborates on this objection. The argument also raises the point that, in many cases where we are inclined to ask “What should I do?”, it seems like what we are inclined to ask goes above-and-beyond any individual question we might ask about the natural world. Consider again the case where we are considering a career change and wondering what we should do. It seems like we could know all of the natural facts -- facts like how happy we will be on average while pursuing each career, how satisfied we will feel looking back on each career, how many lives we could improve by donating money made in each career, what labor practices each company has, how disappointed our parents will be if we pursue each career, how our personal values will change if we pursue each career, what we would end up deciding at the end of one hypothetical deliberative process or another, etc. -- and still retain the inclination to ask, “Given all this, what should I do?” This means that -- insofar as we're taking the realist stance that this question actually has a meaningful answer, rather than rejecting the question as vacuous -- the claim that we "should" do one thing or another cannot easily be understood as a claim about the natural world. A set of claims about the natural world may support the claim that we should make a certain decision, but, in cases such as this one, it seems like no set of claims about the natural world is equivalent to the claim that we should make a certain decision.

A last objection to mention is Parfit’s “Triviality Objection” (On What Matters, Section 95). The basic intuition behind Parfit’s objection is that pretty much any attempt to define the word “should” in terms of natural properties would turn many normative claims into puzzling assertions of either obvious tautologies or obvious falsehoods. For example, consider a man who is offered -- at the end of his life, I guess by the devil or something -- the option of undergoing a year of certain torture for a one-in-a-trillion chance of receiving a big prize: a trillion years of an equivalently powerful positive experience, plus a single lollipop. He is purely interested in experiencing pleasure and avoiding pain and would like to know whether he should take the offer. A decision theorist who endorses expected desire-fulfillment maximization says that he “should,” since the lollipop tips the offer over into having slightly positive expected value. A decision theorist who endorses risk aversion says he “should not,” since the man is nearly certain to be horribly tortured without receiving any sort of compensation. In this context, it’s hard to understand how we could redefine the claim “He should take action X” in terms of natural properties and have this disagreement make any sense. We could define the phrase as meaning “Action X maximizes expected fulfillment of desire,” but now the first decision theorist is expressing an obvious tautology and the second decision theorist is expressing an obvious falsehood. We could also try, in keeping with a suggestion by Eliezer Yudkowsky, to define the phrase as meaning “Action X is the one that someone acting in a winning way would take.” But this is obviously too vague to imply a particular action; taking the gamble is associated with some chance of winning and some chance of losing.
We could make the definition more specific -- for instance, saying “Action X is the one that someone acting in a way that maximizes expected winning would take” -- but now of course we’re back in tautology mode. The apparent upshot, here, is that many normative claims simply can’t be interpreted as non-trivially true or non-trivially false claims about natural properties. The associated disagreements only become sensible if we interpret them as being about something above-and-beyond these properties.
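To make the arithmetic behind “the lollipop tips the offer over” explicit, here is a minimal sketch with hypothetical units: let the disvalue of a year of torture have magnitude $T$, let the trillion years of equivalently powerful pleasure be worth $10^{12}\,T$, let the lollipop be worth some tiny $\epsilon > 0$, and let the chance of the prize be $10^{-12}$. Then:

$$\mathrm{EV}(\text{accept}) = -T + 10^{-12}\left(10^{12}\,T + \epsilon\right) = -T + T + 10^{-12}\epsilon = 10^{-12}\epsilon > 0.$$

The certain torture and the expected pleasure exactly cancel, so the offer’s expected value is positive only because of the lollipop -- which is why the expected-value maximizer says “should” while the risk-averse theorist, noting that the modal outcome is a year of uncompensated torture, says “should not.”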

Of course, it is surely true that some of the claims people make using the word “should” can be understood as claims about the natural world. Words can, after all, be used in many different ways. But it’s the claims that can’t easily be understood in this way that non-naturalist realists such as Parfit, Enoch, and Moore have in mind. In general, I agree with the view that the key division in metaethics is between self-identified non-naturalist realists on the one hand and self-identified anti-realists and naturalist realists on the other hand, since “naturalist realists” are in fact anti-realists with regard to the distinctively normative properties of decisions that non-naturalist realists are talking about. If we rule out non-naturalist realism as a position then it seems the main remaining question is a somewhat boring one about semantics: When someone makes a statement of form “A should do X,” are they most commonly expressing some sort of attitude (non-cognitivism), making a claim about the natural world (naturalist realism), or making a claim about some made-up property that no actions actually possess (error theory)?

Here, for example, is how Michael Huemer (a non-naturalist realist) expresses this point in his book Ethical Intuitionism (pg. 8):

[Non-naturalist realists] differ fundamentally from everyone else in their view of the world. [Naturalist realists], non-cognitivists, and nihilists all agree in their basic view of the world, for they have no significant disagreements about what the non-evaluative facts are, and they all agree that there are no further facts over and above those. They agree, for example, on the non-evaluative properties of the act of stealing, and they agree, contra the [non-naturalist realists], that there is no further, distinctively evaluative property of the act. Then what sort of dispute do the [three] monistic theories have? I believe that, though this is not generally recognized, their disputes with each other are merely semantic. Once the nature of the world 'out there' has been agreed upon, semantic disputes are all that is left.

I think this attitude is in line with the viewpoint that Luke Muehlhauser expresses in his classic LessWrong blog post on what he calls “pluralistic moral reductionism.” PMR seems to me to be the view that: (a) non-naturalist realism is false, (b) all remaining meta-normative disputes are purely semantic, and (c) purely semantic disputes aren't terribly substantive and often reflect a failure to accept that the same phrase can be used in different ways. If we define the view this way, then, conditional on non-naturalist realism being false, I believe that PMR is the correct view. I believe that many non-naturalist realists would agree on this point as well. ↩︎

16. This point is made by Parfit in On What Matters. He writes: “We could not have decisive reasons to believe that there are no such normative truths, since the fact that we had these reasons would itself have to be one such truth. This point may not refute this kind of skepticism, since some skeptical arguments might succeed even if they undermined themselves. But this point shows how deep such skepticism goes, and how blank this skeptical state of mind would be” (On What Matters, Section 86). ↩︎

17. The PhilPapers survey suggests that philosophers who favor realism outweigh philosophers who favor anti-realism by about a 2:1 ratio. ↩︎