(Cross-posted from Hands and Cities. Content warning: descriptions of cruelty)

Lots of people I know think that what you should do depends, ultimately, on your contingent patterns of care and concern (suitably idealized), as opposed to some sort of objective “normative reality.” And I find various views in this vicinity pretty plausible, too.

To others, though, such views seem like they’re missing something central. Here I want to examine one thing in particular that it’s not clear they can capture: namely, the sense in which morality has, or appears to have, authority.

I. Paperclips and flourishing

Here’s a toy model of the type of view I have in mind.

Consider the paperclip maximizer: an AI system with the sole goal of maximizing the number of paperclips in the world. Let’s say that this AI system’s psychology is sufficiently rich that it’s appropriate to say that it “cares” about making paperclips, “values” paperclips, that its model of the world represents paperclips as “to-be-made” or as “calling to it,” etc. And let’s say it would remain a paperclip maximizer even after exhaustive reflective equilibrium, and even upon ideal understanding of all non-normative facts. Call this system “Clippy.”

Unlike Clippy, let’s say, you care about things like flourishing, joy, love, beauty, community, consciousness, understanding, creativity, etc — for yourself, for your loved ones, for all beings, perhaps to different degrees, perhaps impartially, perhaps in complex relationships and subject to complex constraints and conditions. And you would continue to do so, in a more coherent way, after exhaustive reflective equilibrium, and upon ideal understanding of all non-normative facts.

And let’s say, on the view I’m considering, that there isn’t a “value reality,” independent of both you and Clippy, that makes one of you right about what’s really valuable, and one of you wrong. You’ve got your (idealized) values, Clippy has Clippy’s (idealized) values; they’re not the same; and that’s, ultimately, about all there is to it. (Or at least, almost all. I haven’t yet said much about how you and Clippy are disposed to treat agents with different object-level values in various game-theoretic situations — dispositions that may, I think, be crucially important to the issues discussed below.)

Of course, we can dance around some of the edges of this picture, to try to avoid endorsing claims that might seem counterintuitive. If we want, for example, we can try to get more specific about how the semantics of words like “right” and “ought” and “valuable” work in this sort of situation, when uttered by a particular party and evaluated from a particular perspective. We can say, for example, that in your mouth, or evaluated from your perspective, the word “good” rigidly refers to e.g. love, joy, beauty etc, such that it will be true, in your mouth, that “joy is good,” and that “joy would still be good even if I valued clipping,” and/or false, evaluated from your perspective, when Clippy says “clipping is good.” But everything we can say about you can be said, symmetrically, about Clippy. Ultimately, if we haven’t already “taken a side” — if we abstract away from both your perspective and Clippy’s perspective (and, also, from our own contingent values) — there’s nothing about reality itself that pulls us back towards one side or another.

Let’s call views of this broad type “subjectivist.” I’m playing fast and loose here (indeed, I expect that I would class a very wide variety of meta-ethical views, including, for example, various types of “naturalist realism,” as subjectivist in this sense), but hopefully the underlying picture is at least somewhat clear.

To illustrate, suppose that you and Clippy both come across an empty planet. You want to turn it into Utopia, Clippy wants to turn it into paperclips, and neither of you wants to split it down the middle, or in any fraction, or to try to co-exist, trade, compromise, etc (this equal refusal to cooperate is important, I think). So, you fight. From an impartial perspective, on the view I’m considering, this is no different from a fight between Clippy and Staply — a different sort of AI system that maximizes staples rather than paperclips. You are both righteous, perhaps, in your own eyes. Both of you choose, equally, to try to create the world you want; and because your choices conflict, both of you choose, it seems, to try to impose your will on the other. But the universe isn’t “rooting” for either of you.
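To make the symmetry vivid, here is a deliberately crude toy calculation, in Python. All of the numbers are invented for illustration, and nothing in the argument depends on them; the point is just what the payoff structure does and doesn’t contain.

```python
# A toy model of the empty-planet standoff. All numbers are invented;
# nothing in the argument depends on them.

PLANET_VALUE = 10.0  # value of the whole planet, in each agent's own units
WIN_PROB = 0.5       # the fight is symmetric: each side wins half the time
FIGHT_COST = 0.3     # fraction of the planet's value destroyed by conflict

def expected_value(strategy: str) -> float:
    """Expected value of a strategy for one agent, in its own units."""
    if strategy == "fight":
        # The winner takes whatever survives the destruction of conflict.
        return WIN_PROB * PLANET_VALUE * (1 - FIGHT_COST)
    if strategy == "split":
        return PLANET_VALUE / 2
    raise ValueError(f"unknown strategy: {strategy}")

print(f"fight: {expected_value('fight'):.2f}")  # 3.50
print(f"split: {expected_value('split'):.2f}")  # 5.00
```

The table reads the same whether we label the players “you and Clippy” or “Clippy and Staply”: there is no slot in the calculation for whose values are really right. And both sides do better splitting than fighting, which is why the stipulated equal refusal to cooperate is doing real work in the example.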

II. What’s missing from subjectivism?

Various people, including myself, feel intuitively that this sort of picture is missing something — maybe many things — central to moral life.

One way of trying to cash this out is in terms of things like “disagreement” and “objectivity.” Thus, for example, we might wonder whether various semantic proposals of the type I gestured at above will be able to capture our sense of what’s going on when people disagree about what’s good, to be done, right, etc (see, e.g., moral twin earth). Or, relatedly, we might wonder whether this sort of view can capture the sense in which we think of moral norms as objective — e.g., as in some sense independent of any particular perspective.

Here I want to examine a different angle, though — one closely related to disagreement and objectivity, I think, but which has a more directly normative flavor. My question is whether this view can capture our sense of morality as possessing “authority.”

I expect some readers are already feeling themselves unconcerned. Authority, one might think, is some kind of wooly, hard-to-define human construct — the type of thing some people, by temperament, just don’t feel very torn up about having to jettison or substantially revise. And maybe that’s ultimately the right response. But as I discuss below, I also think “authority” is closely tied to the notion of “obligation,” which various readers might be more attached to. And regardless, let’s at least look at what we’d be revising, or throwing out.

III. Responding to cruelty

Imagine a group of bored men, who decide, one night, that it would be fun to look for a bit of trouble. One of them mentions a homeless man he finds ugly and annoying, who often sleeps in an alleyway nearby. Wouldn’t it be fun to rough him up? The rest are game. They grab a few beers and head to the alley, where they find the homeless man lying on some cardboard. He’s gaunt, and shivering, and once he sees the group of men approaching him, clearly scared. He gets up, and asks them what they want. They push him down. One nudges him with a boot, and starts taunting him. Others join in, laughing. He scrambles backwards and tries to run, but they push him down again. Someone kicks him in the ribs, and feels something break. He cries out in pain. Another kicks him in the spine. Another picks up a nearby plastic crate, and starts beating him with it.

(I’m describing this in some detail, in the hopes of prompting a fairly visceral reaction — a reaction the flavor of which I’m not sure subjectivism can capture. And of course, in real life, these cruel men, and their victim, would not be thin caricatures of callousness and helplessness, but real humans, with their own lives and complexities.)

Now suppose that I come across these men in this alleyway; that I see what they’re doing, and am in a position to confront them (if it simplifies the case, imagine that I can do so without any risk to myself). I imagine feeling a lot of things in this situation — anger, horror, fear, uncertainty. But I also imagine addressing them, or wanting to, in a very specific way — a way ill-suited both to pursuing my own personal preferences about the world (however idealized), and to giving other people advice about how best to pursue theirs.

In particular, I imagine feeling like I’m in a position to look them in the eye and say something like: “Stop.” That is, to speak with a kind of finality; to stand on a certain type of bedrock. I’m not just saying or signaling: “I don’t want you to do this,” or “he doesn’t want you to do this,” or “I and others are ready to fight/blame/punish you if you keep doing this,” or “you wouldn’t want to do this, if you understood better,” or any combination of these. Indeed, I am not just “informing them” of anything.

But nor, centrally, am I coercing them, or incentivizing them. I may try to stop them, and I may succeed, but I am not merely “imposing” my will on them, as they imposed theirs on the homeless man. My will feels like it is not merely mine, and not even merely mine and his (though his is centrally important). It’s rooted in something deeper, beyond both of us; something that seems, to me at least, ancient, and at the core of things; something in the face of which they should tremble; something whose force should hurl them back against the alleyway walls.

(I think the particular phenomenology I’m describing here may be somewhat contingent. But I think that many common forms of indignation, righteous anger, demands for explanation (“what the f*** do you think you’re doing?”), and so forth are in a similar ballpark.)

IV. The second person standpoint

What is this “something,” this ground it feels like you’re standing on, in cases like these? I’m going to leave it unspecified for now. But whatever it is, it gives, or seems to give, the homeless man, and those who stand with him against these cruel men, a certain type of “authority.” Not “empirical authority,” in the sense that implies that someone’s volition in a situation — a monarch, a sergeant, a CEO — will in fact be recognized and obeyed. The cruel men may very well not listen; they may laugh in the face of those who oppose them, or react with further violence. Rather, it’s a type of “normative authority.”

What is “normative authority”? Again, I’m not sure. Intuitively, it’s related to a type of external-ness, a kind of “not-up-to-you”-ness, but also to a type of “binding-ness.” The number of socks in your sock drawer isn’t “up to you” either, but that number does not “bind” you in a practical sense. You are “free,” as it were, to pretend the number is otherwise, and to accept the consequences. But you are not, we think, “free” to beat up homeless people, and to accept the consequences. We do not say, to these men, “you can try to beat up this homeless man if you want, but we will try to stop you, and you’d regret the choice if you understood better.” We say: “Stop. This is wrong.”

I’m not saying I have a clear characterization of this. Indeed, as I discuss a bit below, I think characterizing it is surprisingly difficult regardless of your meta-ethics. And what’s more, I think, the notion of an authoritative, binding “must,” or “obligation” can, in some contexts and for some psychologies, be quite harmful, leading to a kind of reluctant and fearful relationship towards an alien, externally-imposed standard, rather than to an effort to understand and take responsibility for one’s actions and their consequences. And of course, there are lots of dangers involved in treating one’s own values and moral beliefs as “authoritative”; perhaps, indeed, humans generally err on the side of too much of such moralizing.

Still, it seems plausible to me that something like this notion of “authority” is core to our basic picture of morality. This is something that Stephen Darwall has written a lot about; he calls the perspective we take up, in addressing these men with statements like “Stop,” the “second-person standpoint” — a standpoint he thinks is essentially structured by relationships of authority, in which agents have standing to address claims (understood as something like moral commands) to one another.

V. Command and counsel

I’m not going to try to lay out or evaluate Darwall’s full picture here, but I do want to emphasize one thing he puts a lot of weight on: namely, the difference between his notion of “second-personal authority” and something like “giving advice” or “providing moral information.”

In attempts to find some sort of rational fault with people acting wrongly, subjectivists often appeal to various idealization procedures, which would show, it is hoped, that the actions of wrongdoers are mistakes by their own lights. Here I think back to a conversation I had with a subjectivist many years ago, in which she attempted to capture the sense in which what the Nazis did was wrong by appealing to the fact that suitably informed and reflectively equilibrated versions of their values would not have endorsed their actions (on her view, as I recall, what the Nazis really wanted was a certain type of physical and emotional security, which they were pursuing in a very confused way).

One problem with this is that even if true (does our condemnation really depend on this assumption?), it seems very contingent. People sometimes illustrate this by appealing to an “ideally coherent Caligula” (see Street (2009)), who wishes solely and coherently to maximize the suffering of others, and who, it is supposed, would continue to wish this on ideal reflection. To the extent one wishes to condemn Caligula’s actions as wrong, one might think, one needs some standard other than incoherence or factual mistake. (Clippy is another such example, but Clippy may seem too alien for typical judgments of right and wrong to apply.)

More importantly in this context, though, even if we grant that e.g. Nazis, or the cruel men above, are making some mistake by their own lights, addressing them with “Stop” seems very different from informing them of this mistake, or from offering them “moral information.” Darwall, as I recall, talks about this in the context of a distinction drawn by Hobbes between “command” and “counsel.” Thus, you might inform someone that they are stepping on your toe, and that the world would be better, by their lights, if they got off of it — this would be counsel. Or you might tell them to get off your toe. This would be command — command that, in most contexts, you have the authority to issue.

What’s more, we might think, someone’s making a practical mistake — even a very high-stakes one — does not necessarily warrant occupying a standpoint of “command” in relation to them. If Bob would really do much better by his own lights by getting that cancer treatment, or saving for retirement, or whatnot, but it looks like he’s choosing not to, this does not, intuitively, give you authority to address him with commands like: “Bob, get the chemo,” or “Bob, open that 401k” — even if you know he’s messing up. Bob is his own person; he gets to make his own choices, however misguided, at least in certain spheres. But beating up homeless people isn’t one of them.

That said, I think the question of when it’s appropriate to occupy some standpoint of practical “authority” in relation to someone’s possible mistakes is quite a complicated one; perhaps it is more appropriate to tell Bob to get the chemo than it might naively seem (especially if you’ll thereby help him a lot). But regardless, it still seems like there’s a conceptual distinction between what you’re doing with Bob, in telling him to get the chemo, and what you’re doing with the cruel men, in telling them to stop.

(I don’t think that using grammatically imperative language is necessary for taking up a standpoint of authority in this sense. For example, I think that saying to these men “what the f*** do you think you are doing?” occupies a similar stance; as do, for example, various forms of unspoken anger. Nor, conversely, does all imperative language indicate such an authoritative stance.)

VI. Is this really about subjectivism?

One issue with subjectivism, one might think, is that it can’t capture an adequate notion of moral authority in this sense. On subjectivism, it might seem, agents can make mistakes in the sense of failing to act in line with their idealized values. And when agents with different idealized values get into conflict, they can trade/cooperate/compromise etc, or they can fight. But it’s not clear what would give any one type of subjectivist agent moral authority in the sense above. Ultimately, it might seem, subjectivist agents are just trying to impose their (idealized) wills on the world, whatever those wills happen to be; in this sense, they are engaged in what can seem, at bottom, like a brute struggle for power — albeit, perhaps, a very game-theoretically-sophisticated one, and one that affords many opportunities (indeed, incentives) towards conventionally nice and mutually-beneficial behavior between agents whose values differ (especially given that agents can value various forms of niceness and inclusiveness intrinsically). Impartially considered, though, no participant in that struggle has any more “authority” than another — or at least, so it might seem. They’re all just out there, trying to cause the stuff they want to happen.

On reflection, though, it’s not actually clear that positing some set of True Objective Values solves this problem directly (though it does provide more resources for doing so). To see this, consider a simple case, in which the One True Set of Objective Values is the set posited by totalist, hedonistic utilitarianism. In this case, if you are a totalist hedonistic utilitarian, and Clippy is a paperclip maximizer, this does, in some sense, privilege you in the eyes of the universe. You are fighting for the “true values,” and Clippy, poor thing, for the false: well done you (though really, you just got lucky; Clippy couldn’t help the utility function it was programmed with). But does this give you the authority to demand that Clippy maximize pleasure? Maybe some versions of Clippy (especially the ones trying to maximize the One True Values de dicto, which isn’t the normal thought experiment) are making a kind of practical mistake in failing to maximize pleasure — but as we saw above, not all practical mistakes give rise to the type of authority in question here. Thus, on this view, one might inform the men beating the homeless person that they are failing to maximize pleasure, which, they should know, is the thing to be maximized — the universe, after all, says so. But this seems comparable to the sense in which, on subjectivism, you might inform them that their idealized values would not endorse their cruel behavior. Perhaps you would be right in both cases: but the sense in which it is appropriate to tell them to stop has not been elucidated.

Of course, this won’t bother the utilitarian, who never put much intrinsic weight on notions of “authority” anyway. But I do think it at least suggests that being “objectively right” does not, in itself, conceptually entail moral authority in the sense above (though it does seem helpful).

Here’s another way of putting this. On a naive, economics-flavored subjectivism, the basic picture is one of agents with different utility functions and beliefs, and that’s all. The naive realism just discussed adds one more element to this picture: namely, the “True” utility function. Great. But the notion of “authority” — and with it, the notion of “obligation,” or of being “bound” by a standard external to yourself — doesn’t actually yet have an obvious place in either picture. We’ll have an easier time talking about being incorrect or mistaken about the “true values,” if there are such things. But we haven’t yet said why the true values bind — or what “binding” or “authority” or “obligation” even is.

This is related, I think, to the sense in which consequentialists have always had a bit of trouble with the notion of “obligation.” Consequentialists are happiest, in my experience, ranking worlds, and talking about “the good” (e.g., the “True Ranking”), and about which actions are “better.” Their view adds that you are in some sense “required” or “obligated” to maximize the good; but sometimes, it adds this almost as an afterthought. And indeed, if the central normative notion is “betterness,” it’s not yet clear what “obligation” consists in. For example, obligation is plausibly tied, conceptually, to notions of blame, and what is “worthy” of blame. But what sorts of reasons does this “worthiness” encode? Consequentialist reasons for blaming behavior? But these will depend on circumstance, and it won’t always be consequentially-best to blame people for not maximizing the good (as consequentialists widely acknowledge). But the consequentialist isn’t particularly excited about positing non-consequentialist reasons for blame, either, and certainly not about actually giving them non-instrumental weight. So consequentialism, in my experience, tends to incline strongly towards instrumentalist accounts of obligation, centered on the consequentialist pros and cons of different practices of moral blame and praise. But if we go that route, the sense in which the consequentialist thinks maximizing the good obligatory remains quite obscure — despite the fact that this is traditionally treated as definitional to consequentialism. The view seems more at home, I think, in a normative universe that gives the idea of “obligation” little pride of place, and which sticks with “betterness” instead.

But if even very “objective values”-flavored forms of consequentialism don’t provide any ready notion of authority or obligation, this suggests that what’s missing from the picture of subjectivism above isn’t necessarily “objective values” per se, but rather something else. And in that light, it becomes less clear, I think, that subjectivism can’t have this thing, too.

VII. What binds?

I am not, here, going to attempt an analysis of what moral obligation or authority consists in, or what a particularly subjectivist account of it might look like. I’ll note, though, that I’m interested in views on which at least part of what gives rise to the moral authority at stake in cases like the beating described above is something about norms that a very wide variety of suitably cooperative and sophisticated agents have strong reason to commit (perhaps from some idealized, veil-of-ignorance type perspective) to abiding by and enforcing. We might imagine such norms as operative, for example, in some vision of a world in which agents who care about very different things — as different, even, as paperclips and pleasure — manage to avoid the needless destruction of conflict, to act in mutually beneficial ways, and to make adequate space for each to pursue their own ends, conditional on respecting the rights of others to do so as well (see e.g. here). By telling the men to stop, you are, implicitly, including them in this vision, and the community of agents committed to it, and holding them to the expectations this commitment involves. And in this sense, your address to them is a form of respect; they are not yet, in your eyes, beyond the pale, to be treated as mere forces of nature, like tigers and avalanches (albeit, cognitively sophisticated ones), rather than members — however norm-violating — of something collective, some sort of “we.”

I’m not sure how far this picture of authority stemming from cooperative norms will really go, but I think it goes some of the way, at least, to ameliorating the sense in which it can seem, on subjectivism, that one is merely imposing one’s contingent will upon the world, or on agents like the cruel men. On this view, in moral contexts like the one above, you are not merely acting on behalf of what you care about, and/or what the homeless man cares about; you are also acting on behalf of norms that agents who care about things in general (or at least, some substantial subset of such agents) have reason to commit to upholding; norms that make it possible, in this strange universe, amidst so much vulnerability and uncertainty, for agents to pursue their values at all. Perhaps the universe itself does not care about those norms; but agents, in general, have reason to care.
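As a gesture at what such “strong reason to commit” might look like, here is another invented-numbers sketch, standing in for the informal argument above rather than establishing anything. Behind the veil, an agent doesn’t yet know whether it will care about joy or about paperclips, or which side of any given conflict it will land on; it just compares policies.

```python
# A toy veil-of-ignorance sketch. All numbers are invented; this stands in
# for the informal argument above rather than establishing anything.

RESOURCE = 10.0     # value of a contested resource, in an agent's own units
FIGHT_COST = 0.3    # fraction of value destroyed by open conflict
ENFORCE_COST = 0.5  # each committed agent's share of upholding the norms
SANCTION = 6.0      # expected cost a norm-violator faces from enforcers

def ev(policy: str, others_committed: bool) -> float:
    """Expected value per conflict for an agent with the given policy."""
    if policy == "commit":
        # Contested resources get split; everyone chips in for enforcement.
        return RESOURCE / 2 - ENFORCE_COST
    if policy == "defect" and others_committed:
        # Grab everything, but face the committed community's sanctions.
        return RESOURCE - SANCTION
    if policy == "defect":
        # No norms anywhere: a symmetric, value-destroying fight.
        return 0.5 * RESOURCE * (1 - FIGHT_COST)
    raise ValueError(f"unknown policy: {policy}")

print(f"commit (norms upheld):   {ev('commit', True):.2f}")   # 4.50
print(f"defect among committers: {ev('defect', True):.2f}")   # 4.00
print(f"defect, no norms:        {ev('defect', False):.2f}")  # 3.50
```

On these made-up numbers, commitment beats defection only because the committed community actually enforces its norms. That is one way of reading the thought that saying “Stop” holds the men to the expectations of a community they are being counted part of, rather than merely reporting the speaker’s contingent preferences.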

I’ll admit, though, that even as I write this, it doesn’t feel like it really captures my intuitive reaction to the cruelty I described above. Really, what it feels like is just: when they kick that homeless man, God roars in anger. What they are doing is wrong, wrong, wrong. It needs to stop, they should be stopped, that’s all, stop them.

Sometimes I imagine learning, against what currently appear to me the strong odds, that some form of non-naturalist moral realism is true. I imagine someone far wiser than I telling me this, and asking: did you really think that it was just some subjective thing, or some sort of complex game-theory decision-theory thing? Didn’t a part of you know, really, the whole time? I imagine thinking, then, about cruelty like this.

But if non-naturalist moral realism is false, I’m not interested in pretending that it’s true, either: better to live in the real world. And subjectivism, I think, may actually have resources for capturing certain types of moral authority — though perhaps, ultimately, not all. Regardless, my overall view on meta-ethics, as on many other topics in philosophy, is that our main priority should be making sure human civilization reaches a much wiser state — a state where we can really figure this stuff out, along with so many other things, and act, then, on our understanding.

Comments (4)

I was kind of hoping this post would be more about moral authority as it actually exists in our morally-neutral universe. For having subjectivism in the title, it was actually all about objectivism.

I'm reminded of that aphorism about the guy writing a book on magic, and he'd get asked if it was about "real magic." And he'd have to say no, stage magic, because real magic, to the questioner, means something not real, while the sort of magic that can really be done is not real magic.

How does someone whose moral judgment you trust actually get that trust, in the real world? It's okay if this looks more like "stage magic" than "real magic."

As I understand this, Clippy might be able to issue an authoritative moral command, "Stop!", to the humans, provided it's "caused" by human values, as conveyed through its correct understanding of them. The humans obey, provided they authenticate the command as channeling human values. It's not advice, as the point of intervention is different: it's not affecting a moral argument (decision making) within the humans, instead it's affecting their actions more directly, with the moral argument having been computed by Clippy.

I can make sense of authority from a subjectivist viewpoint. People might be suggestible, there might be some quirks of their psychology that make them behave in certain ways, ropes you can pull that get you specific results. That is, a command can be a hack attempt to exploit the other. Assuming that the other is completely reflectively consistent might grant them exploit-freeness. But most real systems do have hack vulnerabilities.

This might not be that popular a viewpoint because it can get very anti-cooperative. If you argue someone into a position that they would transition away from upon reflection, it is not a stable reflection of their deeper principles.

If you truly oppose Clippy, you might be morally fine trying to confuse them and get them to act against their values to the extent that you can. But in polite-company ethical discussion, regressing a participant can get heavily frowned upon.

In hacking terms, if you have root access you are free to do as you please, but that position might still be ill-gotten. You might not actually have any business wielding admin powers. The system doesn't condition its compliance on intents and purposes, but on whether the command is given in the correct form and through the right channels. In this sense "authorization" doesn't actually have to do with authority.

The "stop" can also seen as a suggestion given in the hopes that it finds purchase. I think being too knowledgeable about the perpetrators evil would make you not try or you would know that you effective before hand. Only when you don't know by which mechanism the prompt would land would you give it a blind shot. It is like spitting out a conjecture in hopes they prove it to themselfs. If you knew of a proof you would state it, if you didn't think they are intelligent enough to consider the matter you would stay silent.

In Milgram's experiment it is not required that the experimenter and test subject agree on ethics to a great degree. But the effect of the white coats suppressing the zealousness of the test subjects is a real thing. And I would think that the setup could be made to take the opposite "moral authority" stance, like having Amnesty fliers on the walls or engaging in ethical discussion amid the "training" questions, etc.

[anonymous]

The problem with all this is that this feeling you get is probably coming from a chunk of brain tissue that nature has hacked in for the obvious benefits it provides. (Notice how it has to be a homeless person in your country.) It has no objective value in excess of Clippy's very simple heuristic. (One where a war is only wrong if it permanently destroys the material for making more paperclips.)