A Simple Ethics Model

by Ideopunk · 3 min read · 22nd Jan 2021 · 7 comments


Ethics & Morality · World Optimization

[Cross-posted from my soon-to-be-defunct blog]

Disclaimer #1: I am probably reinventing the wheel. However, reinventing the wheel is good. 

Disclaimer #2: This post probably fundamentally misunderstands non-consequentialists. 

I’ve been thinking about normative ethics lately. There are three major schools: consequentialism, deontology (“rule-based morality”), and virtue ethics. These are typically said to be in opposition, but I think it’s simple for consequentialism to subsume the other two. Let me know if you find this framework helpful, or if it’s missing something important.

Consequences

Consequentialism at its simplest looks like this: perform the action that achieves the best consequences. 

Unfortunately, predicting consequences is really hard. It’s even harder to predict nth-order consequences. We often default to only considering immediate consequences because they’re easier to calculate and predict. However, long-term consequences are typically much more important. Thus, approaching decisions as though we can calculate the best consequences will often lead to worse decisions. 

Not only that, we have complicated values we attempt to satisfy. While calculating is great when values are relatively simple (QALYs), it’s often infeasible within our common constraints (especially time: “Do you want this job or not?”).

So even though we’re consequentialists, it’s hard to function consequentially. This isn’t a blow to consequentialism, though; it’s a sign that our model of human decision-making is missing something.

Rules

The alternative to brute calculation is the use of rules or heuristics. Not eating meat is simpler (meaning: easier, faster, and cheaper) than calculating the negative utility of each meal and finding the equilibrium between personal pleasure and animal suffering. 

Rules also have the benefit of counteracting biases and poor calculation capability. For example, people make worse decisions while drunk. Having a rule like “I never drive drunk” routes around the risk of calculating utility badly. 

When time and processing power permit, calculations allow for superior decisions. When they don’t permit, rules allow for superior decisions. 

Virtues

Where do virtue ethics come in? Consider virtues like courage, humility, and temperance. These are habits of action. They’re important for three reasons:

  1. The decision-making machine in your head, the one that calculates or checks rules, only delivers knowledge of your preferred action. Whether it’s delivered quickly or slowly, it’s still only knowledge. Execution remains. Execution can be hampered by outside factors like fatigue, resentment, or fear. Virtues are those habits that allow us to execute ethical decision-making in spite of those factors.
  2. We think about rules when we don’t have minutes to calculate. But sometimes we don’t even have seconds to think about rules. Our habits kick in before we have time to access our decision-making. A virtue is whatever habit lends itself to better responses: this includes things like reacting instantly to cruelty (before timidity can set in) or reacting humbly to praise (before scheming can set in).
  3. Even thinking to apply rules or to calculate is a habit. It takes wisdom to think.

Usage

The ethical process is two-fold:

  1. Develop toward superior values
  2. Act toward achieving those values

I think that using this framework helps me with #2 in a few ways. 

Firstly, by categorizing different approaches, it helps me choose between rules and calculations. Sometimes I apply rules when I could be calculating (to better ends). Keeping in mind that calculations are sometimes an option will improve my decision-making. I think this is a far more common failure mode than the alternative, choosing to calculate when heuristics would be superior. 

Secondly, it helps me figure out the point of virtue. Why should those of us who want to do the most good we can do think about virtue? Because there will be moments in our lives when we can do significant good and it will come down to habits instead of careful decisions. 

Thirdly, in the past I’ve gotten hung up on how to achieve my values. Should I just focus on being a good person? Should I follow good heuristics? Should I always do what seems to have the best consequences? In practice, I, like everybody, use all three methods, and have worried that this means I’m being inconsistent (which I’m sure I am in other ways!). It doesn’t though. They’re just different components of the same ethical machine. 


Comments

Specifically with regard to deontology, it also makes the problem of consequentialism easier for other people. If I am trying to form a course of action, it is easier for me to plan a high expected value action if I know that everyone will act within the limits of some deontological framework. Yes, a deontological framework reduces the actions I can take in a plan, but it’s consequentially better for me to act deontologically so that others can come up with higher expected value plans.

This is interesting. Am I wrong in summarizing it as "deontology helps with coordination"? 

No, that's a great summary.

I usually think about ethics on utilitarian and deontological grounds. It is useful to be reminded there is a virtue ethics dimension to the space too.

I agree that ethical discussion in the West tends to fall under deontology, utilitarianism or virtue ethics. There is another ethical framework which doesn't have a standard name in English since you don't see it much in Western philosophy. I like the name wuwei (無為) which roughly translates into "effortless action". The idea is to act naturally.

Wuwei doesn't constitute utilitarianism because it is focused on the present instant instead of some future result. It doesn't qualify as deontology because all rules can be broken in the right context. To classify wuwei as utilitarian or deontological is to broaden the definition of "utilitarian" or "deontological" to meaninglessness.

Wuwei could be considered a quirky form of virtue ethics, except virtue ethics implies dualism (right and wrong) whereas wuwei is non-dualist (without right and wrong).

It does seem like there's a Western strain of wuwei in the form of the Western Pragmatists, but they tend to be left out of the discussion.

In statistical mechanics, one calculates the number (or hypervolume) Ω of possible states of a system, and defines the system's entropy as S = k_B ln Ω, where k_B is Boltzmann's constant. It's interesting to note the similarity between maximizing one's possibility space and maximizing entropy, though the equivalence between statmech entropy and information-theoretic entropy relies on physical principles that don't have any 'obvious' parallels in moral reasoning.
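The relation this comment leans on is Boltzmann's S = k_B ln Ω. As a minimal numerical sketch (the function name and the state counts are mine; k_B is the exact SI value), note that doubling the possibility space adds a fixed increment k_B ln 2, which is the sense in which "more possible states" and "more entropy" move together:

```python
import math

# Boltzmann's constant in J/K (exact by SI definition since 2019)
K_B = 1.380649e-23

def boltzmann_entropy(num_states: float) -> float:
    """Boltzmann entropy S = k_B * ln(Omega) for Omega accessible microstates."""
    return K_B * math.log(num_states)

# Doubling the number of accessible states adds exactly k_B * ln 2 of entropy,
# independent of how many states there were to begin with.
delta = boltzmann_entropy(2e6) - boltzmann_entropy(1e6)
```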

I'd like to posit a slightly more general approach: deontology and virtue ethics are specific cases of a more general framework by which irrational agents with fixed cognitive substrates running lower-level utility functions (in humans: food, sex, social status, etc.) may nevertheless modify their heuristics so as to optimize for their top-level utility function in a more rational manner. 

For instance, an agent with a horrifyingly large time preference, which would (irrationally) choose one utilon right now over two in a couple hours, would do well to add heuristics counteracting those preferences. An agent who is aware that their lower-level utility function completely changes every so often would do well to not just find out and prevent whatever is causing the switch (or, failing that, learn how it switches), but prevent themselves from taking especially harmful actions when under the effect of a switch. 
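The one-utilon-now-versus-two-later example can be made concrete with a toy model (everything here is invented for illustration: the exponential discount, the halving-per-hour rate, and the dominance rule are all assumptions, not anything from the comment). The raw calculation grabs the smaller immediate reward; a self-imposed rule counteracts the bias:

```python
def discounted_value(utilons, delay_hours, rate):
    """Present value under simple exponential discounting: u * rate**delay (toy model)."""
    return utilons * rate ** delay_hours

# Options are (utilons, delay_hours) pairs: 1 utilon now vs. 2 in a couple hours.
options = [(1, 0), (2, 2)]

# A 'horrifyingly large' time preference: value halves every hour, so the
# raw calculation irrationally prefers 1 now (worth 1.0) to 2 later (worth 0.5).
myopic_rate = 0.5
raw_choice = max(options, key=lambda o: discounted_value(*o, myopic_rate))

# A heuristic counteracting the bias: forbid any option strictly dominated
# in utilons by another option arriving within a few hours.
def dominated(opt, opts):
    return any(o[0] > opt[0] and o[1] <= 4 for o in opts)

ruled_choice = max((o for o in options if not dominated(o, options)),
                   key=lambda o: discounted_value(*o, myopic_rate))
```

With the rule in place the agent takes the two delayed utilons even though its discounted calculation says otherwise, which is the sense in which inviolable rules patch a flawed utility function rather than compete with it.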

Hence, such a flawed consequentialist would strive to cultivate in themselves both heuristics and inviolable rules, so as to ensure reliable future behavior under a variety of unpredictable modifications to their various utility functions. 

(As noted in your post, the agents don't even have to be irrational: limited computational power is enough for an agent to want to craft intelligent heuristics to follow when they need to act faster than they can think! So clearly there's a more general way to look at this, but I can't see it yet.)