I recently wrote about my vegetarianism to casually document it for friends, and it proved to be more popular than anything else I’ve written lately. My guess is that it was popular because it was about something highly relatable that many people think about already — food choices — but maybe it was engaging because it was ultimately about morality. Since I don’t have much more to say about food, I’ll say some more things on morality to see if it proves similarly engaging to folks.

On my moral thinking I said this:

As you may have noticed, I became a vegetarian via preference utilitarianism, but stay a vegetarian to signal virtue. That would be pretty confused moral reasoning, except that I don't think morality is properly a category of thing that exists in the world; rather, it's an illusion created by seeing the world through a frame that does not include system relationships. But I do recognize preferences, and my own preferences include a preference for the maximization of the preferences of others, all else equal. As a result I think in a way that generally aligns with the moral theory of preference utilitarianism, but if I had different preferences about the preferences of others I could just as easily be a deontologist or virtue theorist in terms of morality, so I see no problem in the contradictions that result from flattening my thinking into the terms of morality.

Echoing this sentiment, a few days later this pithy line from Eli showed up in my Facebook feed:

The rules say we have to use consequentialism, but good people are deontologists, and virtue ethics is what actually works.

Depending on your thinking on morality, it may sound like I'm being evasive and Eli is being too clever, but I think both of us are trying to convey that something more complex is going on, something that ends up doing weird things if you squeeze it into the abstraction of morality. Eli wrote extensively on this topic back in 2008, but you may not like his writing, be unwilling to sift through it all, or simply find it unconvincing, so I'll try to cover the core of the issue here in my own way.

Morality concerns systems for reasoning about what is good and bad. It has two primary components we can examine: the systems themselves and the value judgements of right and wrong these systems operate on. Some moral theories, such as natural law theory and contractualism, tie these two aspects tightly together, but we still need a way to choose, and thus judge, which moral system to use. That choice is itself a value judgement, so if we want the system to be complete, covering even the judgement that justifies using the system, some part of value judgement must come from outside it. But doing this introduces a free variable that is not bound by the system's logic and so can contradict anything else in it. That forces a choice between consistency and completeness, and since morality specifically concerns "systems", it must choose consistency over completeness, in much the same way a mathematical formalism must settle for being consistent rather than complete.

This is no more fatal to morality than it is to mathematics, but it does mean we're going to see some weird stuff where we have to leave part of reality out to get a consistent remainder. Just as mathematics has numbers we can never compute and questions we can never answer within it, morality has repugnant conclusions (that we should maximize the creation of minimally good lives) and trolley problems (that acting to cause less harm still means causing harm). These scenarios show us that even if moral theories usually align with our value judgements, they don't always do so, and so we may benefit from having a way of thinking about right and wrong that aims for completeness instead of consistency.

Rather than beginning in logic and reasoning, as morality does, let's begin in value judgement. Value judgements are preferences: liking one thing more than another and, all else equal, being more likely to act on it. Preferences are complete, in that between any two things we can say which one is preferred, but they may be inconsistent, as when someone prefers apples to bananas and bananas to cucumbers yet prefers cucumbers to apples. This feature of preferences is annoying if you want to study rational actors, and is often worked around as needed, but it is great if you want to understand how real people actually make decisions.
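
To make the completeness/consistency distinction concrete, here is a minimal Python sketch (the fruit preferences are just the toy cycle from above, not anyone's real preferences) that checks a pairwise preference relation for both properties.

```python
from itertools import combinations, permutations

# A toy pairwise preference relation: (a, b) means "a is preferred to b".
# These are just the illustrative cycle from the paragraph above.
prefers = {("apples", "bananas"), ("bananas", "cucumbers"), ("cucumbers", "apples")}
items = {"apples", "bananas", "cucumbers"}

# Complete: for any two distinct items, one of them is preferred to the other.
complete = all((a, b) in prefers or (b, a) in prefers
               for a, b in combinations(items, 2))

# Consistent (transitive): preferring a to b and b to c should imply preferring a to c.
transitive = all((a, c) in prefers
                 for a, b, c in permutations(items, 3)
                 if (a, b) in prefers and (b, c) in prefers)

print(complete)    # True: every pair gets an answer
print(transitive)  # False: the cycle breaks transitivity
```

Swap the last pair to ("apples", "cucumbers") and both checks pass; the point is only that having an answer for every pair does not force those answers to hang together.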

Thinking about value judgements as preferences, you notice that value judgements in particular and preferences in general are not only about one's own behavior; they include judgements and preferences about the behavior of others. These exist even in the absence of a universalizing moral system: even the most libertarian of people, whose preference is precisely to have no specific preferences about what others do, is still expressing a preference that bears on the behavior of others. And since most people make some significant value judgements about the behavior of others, we quickly find ourselves in an interacting network of value judgements that collectively prescribe some behaviors and proscribe others. That is, from value judgements alone we get a collection of heuristics and rules about what is good and bad, even if those judgements are not made systematically, and those heuristics and rules are shared among a community. Rather than a moral system we might call this a moral standard: it is inconsistent, hence not a system, but it does provide standard answers to as many moral questions as possible.
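
As a borrowed illustration, not something from the literature discussed here, the classic Condorcet-style voting cycle shows how a shared standard built out of individually consistent judgements can still come out inconsistent. The sketch below uses three hypothetical community members and three hypothetical behaviors; every individual ranking is transitive, yet the majority view is cyclic.

```python
from itertools import combinations

# Three hypothetical community members, each ranking three behaviors
# from most to least acceptable. Every individual ranking is transitive.
rankings = [
    ["honesty", "charity", "loyalty"],
    ["charity", "loyalty", "honesty"],
    ["loyalty", "honesty", "charity"],
]

def majority_prefers(a, b):
    """True if most members rank behavior a above behavior b."""
    votes = sum(1 for r in rankings if r.index(a) < r.index(b))
    return votes > len(rankings) / 2

# The collective "standard": pairwise majority judgements.
for a, b in combinations(["honesty", "charity", "loyalty"], 2):
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(f"{winner} is held above {loser}")
# Prints: honesty above charity, loyalty above honesty, charity above loyalty.
# Each member is internally consistent, but the shared standard contains a cycle.
```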

One rule people generally include in moral standards is integrity, i.e. a kind of self-consistency in moral choices. And the best way to have integrity is to have a systematic way of reasoning about the value judgements of your actions, which creates a desire for a moral system. Unfortunately, our moral standard can by construction never be fully consistent, so we're stuck with the dual of our original problem. We can neither find a satisfying answer to every moral question using a moral system, nor achieve integrity while satisfying all our value judgements.

This is what I mean when I say our thinking about what is good and bad becomes confused when squeezed into moral theory. The moment you manage to cover all situations you introduce contradictions, and the moment you eliminate all the contradictions you give up on satisfying every value judgement. Morally, we are forever caught between a rock and a hard place.

Of course, this all makes a lot more sense if we think of morality as a kind of illusion or mirage we create by trying to impose our understanding of good and bad onto the world. If morality exists not in the territory, but in our maps, and if we rely on our maps so much that our thoughts become a part of our perceptions, then morality will seem to exist “out there” when it’s mostly “in here”. We will mistake the subjective and intersubjective for the objective, and find we can never quite pin down reality under the thumb of morals.

So what are we to do if morality cannot always tell us how to act, we cannot satisfy all our preferences with integrity, and morality exists in the space between reality and our understanding of it? Should we give up and accept that nothing is true and everything is permitted? Probably not, since most people wouldn't like to live in such a world, and throughout history we've shown a collective revealed preference for combining integrity and values as best we can. Instead, we must find ways of addressing moral nebulosity, and thankfully we know that we can, because we have in fact been doing it all along, only without realizing it.

The De of My Decisions

I’ll leave it to you, dear reader, to figure out how you want to live in our messy, complex world. But, so that I don’t leave you with the null advice of “do what’s right for you”, I’ll at least give you a picture of how I’ve chosen to deal with the nebulosity of good and evil.

As preface, I’m something of a native virtue ethicist. Throughout my life I’ve had some vague sense of virtue and acted to fulfill it on the belief that it would lead me to living the sort of life I’d like to live. This is probably due to my upbringing, where my parents simply taught us to be good people in the absence of any strong religion or formal moral philosophy. If put in terms of Albion’s Seed, I grew up in the broader Quaker culture of America, even though none of my ancestors were specifically Quakers (that I know of).

This is probably why other moral stances didn’t appeal to me. I grew up in Florida, surrounded by a mix of Cavalier and imported Puritan culture. Most people I knew during my school years took a deontological view informed by their religion. To me this way of thinking felt confining and unhelpful, even as it seemed to work for them, and I rejected being yoked by specific moral rules.

As an adult, between university and a whole host of new friends I connected with first online and then in person, I met a lot of consequentialists. My guess at the cultural origin of this, if I had to give one, lies in a combination of Jewish religious scholarship and adopted Enlightenment rationalism. The consequentialist approach felt better than deontology, but it also seemed too complicated to work out robust solutions to moral quandaries. I was never sure I had reasoned far enough: more than once I had to change my mind after realizing I had failed to give a full account of a moral question. While rationalism and consequentialism reasonably demand that you be able to change your mind, I wanted less fragile answers to moral questions, answers where a wrong determination was less likely to result in accidentally causing major harm. Cf. the historical examples of scientific racism, the Great Leap Forward, and most of the history of psychopathology.

So I stuck to my virtue, ill defined as it was, because it was both flexible and robust enough to work for me. I still wished I had a firmer grasp on which exact behaviors were virtuous, but having gotten by my whole life with only a rough idea of what it meant to be a good person, I was in no danger of being unable to figure out what to do.

Then about a year ago, after a failed attempt to write a version of my key life advice into a book, I fell into nihilism. Not so much because I failed at the book project, but because in trying to write it I was forced to confront the fundamental lack of meaning that exists external to us in the universe. This was not really a crisis: I knew I was going to come out the other side eventually because developmental psychology gave me a roadmap for where I was going, but I wasn’t sure how I was going to get there. As it happens, it was around this time I finally got around to reading the Daoist classics.

You might say Daoist philosophy has two core parts: dao, or way, and de, or virtue. Dao tends to get a lot of the focus, and for good reason (besides literally being the name of the philosophy): it's about a way of perceiving reality as a holon, and it mirrors much of my own thinking on understanding the world. In comparison, de seems boring. It translates to virtue in English and even has a parallel etymology. At first there seem to be no new insights in de, and it can be read as a collection of stuff Daoists just so happen to think is good. But upon deeper exploration, de turns out to be as complex as dao, and extremely useful if you want to live in the world instead of just contemplating it.

Our first clue to this is that there is no list of virtues to be found in either the Laozi or the Zhuangzi. Instead we get a lot of examples and templates of virtuous behavior, from which we are asked to discover for ourselves the organizing principles of wuwei, usually translated as non-action, and ziran, which translates as both naturalness and spontaneity. These may seem at odds, but wuwei should not be taken to literally mean doing nothing; rather, it means doing nothing that is not ziran. And ziran is not about giving in to impulse, but about acting in accord with dao, which is to say acting in a way that goes with the world instead of against it. It's not so different, then, from trying to do the best thing, but from a perspective that treats the world as so complex that discretion is the better part of valor and no action is often better than unwise action.

These ideas of wuwei and ziran, being imbued with centuries of use by folks who lived the sorts of lives I would like to live, gave me a scaffolding to build my way out of nihilism. They helped me internalize what I already knew and have argued above — that morality is not part of the territory but a particular map for understanding individual and collective value judgements. They were concepts I needed to continue constructing a complete stance for myself. And they allowed me to understand that even if no behavior is forbidden by some hidden moral structure in the universe, there are still ways to figure out what is good.

So morality may be a mirage, but it’s a useful mirage that helps us find life-giving meaning in what would otherwise be a desert of pure perception. I found de to be a helpful bridge towards holonic integration, but you might prefer Sharia law, act utilitarianism, or any number of moral or ethical ideas. Whatever your choice, in this way morality serves as an oasis that will sustain you on your journey to find meaning, especially when all meaning seems lost to the harsh winds of an uncaring world.
