Disclaimer: I am not a philosopher, so this post will likely seem amateurish to subject-matter experts.

LW is big on consequentialism, utilitarianism, and other quantifiable ethics that one could potentially program into a computer to make it provably friendly. However, I posit that most of us intuitively use virtue ethics, not deontology or consequentialism. In other words, when judging a person's actions we intuitively weigh their motivations over the rules they followed or the consequences of those actions. We may reevaluate our judgment later, based on laws and/or actual or expected usefulness, but the initial impulse remains, even if overridden. To quote Casimir de Montrond, "Mistrust first impulses; they are nearly always good" (the quote is usually misattributed to Talleyrand).

Some examples:

  • In a Facebook post, Eliezer linked the article When Doing Good Means You’re Bad, which points out that people who take a commission to raise a lot of money for charity are commonly considered less moral than those who raise much less but are not paid at all ("tainted altruism").
  • This was brought up at a meetup: a pregnant woman in a dire financial situation who decides to have an abortion because she does not want the burden of raising a child is judged more harshly than a woman in a similar situation whose motivation is to spare the prospective child a harsh life.
  • In real-life trolley problems, even committed utilitarians (like commanders in wartime) are likely to hesitate before sacrificing some lives to save more.

I am not sure how to classify religious fanaticism (or other bigotry), but it seems to require a heavy dose of virtue ethics (feeling righteous), in addition to following the (deontological) tenets of the belief in question, with some consequentialism (for the greater good) mixed in.

When I try to introspect on my own moral decisions (like whether to tell the truth, cheat on a test, or drive over the speed limit), I can usually find a grain of virtue ethics inside. It might be followed or overridden, sometimes habitually, but it is always there. Can you?

Comments (91)

Virtue ethics might be reframed for the LW audience as "habit ethics": it's the notion of ethics appropriate for a mind that precomputes its behavior in most situations based on its own past behavior. (Deontology might be reframable as "Schelling point ethics" or something.)

I've had the same kind of insight. If you compute the consequences of following certain habits, the best plan looks an awful lot like virtue ethics. You're not just someone who eats ice cream, you're someone who has an ice cream eating habit.

Similarly, if you compute the consequences of setting and following rules, you get back a lot of deontology. A doctor can't just cut up one person for their organs to save a dozen without risking the destruction of valuable societal trust in certain expectations being honored (like not being killed for your organs when you go to the doctor).

torekp:

Yes - and to extend the point, I am an "all three" ethicist at heart, and I think most people are. We need to assess outcomes, we need to assess habits, and we need to assess fairness. Of course, this leaves wide open the possibility that one or two of {utility, virtue, justice} could be more fundamental and explain the other(s), but at the day-to-day level we need them all.

punctual:
This is pretty accurate for me. Most of the time, virtue ethics gives the right answer.

Utilitarians do not judge people based on the consequences of their actions. They judge people based on the consequences of judging them.

There are times when my instincts take over. This is probably a good thing, but it would happen even if I didn't know this. Nonetheless, when I make a decision, normally what I am concerned about is about what will happen.

When you think about the source code of the players you are being a 'virtue ethicist.' When you optimize outcomes, you are being a 'consequentialist.' You can do both at once.

shminux:
You can, but it's pretty clear that in the examples given there is a tension between these approaches.
IlyaShpitser:
I am confused about what you are trying to say. People at MIRI suspect smart decision theories will look at the source code of players, and so aren't purely consequentialist in that sense. The "folk decision theory" that people use in their day-to-day lives echoes the "smart decision theories" above, because we think about "the kind of person" someone is. It seems sensible to do so. Could you clarify what you are getting at? Do you think we should be purely consequentialist? It's probably a mistake to ignore certain steelmen of virtue ethics if you care about doing "the right thing."
shminux:
First, I appreciate your original analogy between virtue ethics and source code. I would like to understand it better, since it looks to me like any normative ethics requires "looking into the source code", though pure consequentialism can also be analyzed as a black box. I assume that what you mean is that virtue ethics requires deeper analysis than deontology (because the rules are simple and easy to check against?) and than consequentialism (because one can avoid opening the black box altogether?). Or am I misinterpreting what you said?

Well, no, I am not prescriptive in the OP, only descriptive. And I agree (and have mentioned here several times) that virtue ethics and deontological rules are in essence precomputed patterns which provide approximate shortcuts to full unbounded consequentialism in a wide variety of situations. Of course, they are often a case of lost purposes when people elevate them from the level of shortcuts to the complete description of shouldness.
IlyaShpitser:
I don't know what "the ultimate decision theory" is, but I suspect this decision theory will contain both consequentialist and virtue-ethical elements. It will be "consequentialist" in the trivial sense of picking the best alternative. It will be "virtue ethical" in the sense that it will in general do different things depending on what it can infer about other players based on their source code. In this sense I don't think virtue ethics is a hack approximation to consequentialism; I think it is an orthogonal idea.

That said, I am still confused by what you are trying to say!
shminux:
I think we are using different definitions. I think of ethics first as applied to choosing one's own decisions, and only second as a tool to analyze and predict the decisions of others.

If I were to program a decision bot, I would certainly employ a mixture of algorithms. Some of them would have a model of what it means to, say, be fair (a virtue) and generate possible actions based on that. Others would restrict possible actions based on a deontological rule such as "thou shalt not kill" (cf. Asimov's laws). Yet others would search the space of outcomes of possible actions and pick the ones most in line with the optimization goal, if there is one. Different routines are likely to output different sets of actions, so they are not orthogonal.
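A minimal sketch of such a mixed decision bot, in Python. Everything concrete here is an invented illustration, not part of the original comment: the action set, the fairness scores standing in for a virtue model, the forbidden-action set, and the utility numbers.

```python
# Toy "decision bot" mixing the three routines described above.
# All actions, scores, and rules are illustrative placeholders.

ACTIONS = ["share", "hoard", "deceive", "kill"]

def virtue_routine(actions):
    """Generate candidate actions that fit a 'fair' character (a virtue model)."""
    fairness = {"share": 0.9, "hoard": 0.6, "deceive": 0.1, "kill": 0.0}
    return [a for a in actions if fairness[a] >= 0.5]

def deontic_routine(actions):
    """Restrict candidates by rules such as 'thou shalt not kill'."""
    forbidden = {"kill"}
    return [a for a in actions if a not in forbidden]

def consequentialist_routine(actions):
    """Pick the surviving action with the highest (assumed) expected utility."""
    expected_utility = {"share": 2.0, "hoard": 1.0, "deceive": 1.5, "kill": 3.0}
    return max(actions, key=lambda a: expected_utility[a])

candidates = deontic_routine(virtue_routine(ACTIONS))
print(consequentialist_routine(candidates))  # -> share
print(consequentialist_routine(ACTIONS))     # -> kill (unconstrained maximizer)
```

The two final lines make the non-orthogonality concrete: the unconstrained maximizer and the mixed bot pick different actions from the same action set.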
IlyaShpitser:
"Orthogonal" means you can be a virtue ethicist without a commitment on consequentialism, and vice versa. A virtue ethicist who is not a consequentialist is the old school Greek ethicist. A consequentialist who is not a virtue ethicist is one of the modern variety of maximizers. A virtue ethicist + consequentialist is someone who tries to work out ethics in multiplayer situations where some players are dicks and others are not. So defection/cooperation ought to depend on the 'ethical character' of the opponent, in some sense.

Beware: the philosopher's virtue ethics is very different from the habit-ethics version being discussed here (which isn't really a normative ethical theory at all, but rather a descriptive one). The philosopher's virtue ethics is tied to the concept of teleology (purpose) and the objective ends of human beings, and makes no sense under the reductionist framework usually held here.

I am not sure how to classify religious fanaticism

I always thought of that as less a moral difference and more a matter of actually taking beliefs seriously, combined with a failure to be equally serious about checking whether those beliefs are true.

I'm guessing more than a few rationalists who grew up in religious contexts once upon a time took those religious beliefs much more seriously than their peers did, and consequently might have shown signs of "fanaticism".

I'm pretty sure that both my dogs and I are virtue ethicists at heart. I don't think natural selection had any way to code in any other kind of morality, nor does it seem likely that natural selection would have had anything to gain by even trying to code in a different kind of morality.

Yes, I presume (or really post-sume, having read a lot of random stuff) that the bulk of my moral sentiments 1) are inborn, 2) started being put in us long before we were human, and 3) are indeed sentiments. I think moral sentiments are the human words for what make…

shminux:
I'm so retweeting your first sentence. Do you have a twitter?
mwengler:
I'm mwengler on twitter. I'll tweet it with hashtag #puppyethics and you can find it and retweet it.

Sure, I agree that my instinctive judgments of right and wrong are more about judging people (including myself) than they are about the consequences I expect from various actions. This is especially true when the people involved are in categories I am motivated to judge in particular ways.

What judgments I endorse when I have time and resources to make a more considered decision is an entirely different question.

Which of those reflects my "at heart" ethical philosophy is a difficult question to answer... I suspect because it's not well defined.

I agree, we tend to instinctively rely on virtue ethics. And this means that we are not psychopaths.

Our apparent reliance on virtue ethics is a result of the classical conditioning of 'good' and 'bad' that has been drilled into us since birth. "Bad Timmy! Stealing candy from the store is WRONG!" is very effective punishment of a behavior.

If we could truly abandon our trained value system for pure consequentialism, then we would all be really good at running companies. But most people are not psychopaths, and more importantly most people d…

[anonymous]:

An ideally moral agent would be a consequentialist (though I won't say "utilitarian" for fear of endorsing the Mere Addition Population Ethic). However, actual humans have very limited powers of imagination, very limited knowledge of ourselves, and very little power of prediction. We can't be perfect consequentialists, because we're horrible at imagining and predicting the consequences of our actions -- or even how we will actually feel about things when they happen.

We thus employ any number of other things as admissible heuristics. Virtue eth…

Indeed, it makes perfect sense for us to be virtue ethicists in the sense that we care about forming the right habits. But in order for virtue ethics not to be vacuous or circular, we need some independent measure of which habits are good and which are bad. This is where consequentialism comes in for many LessWrongers. (When I read professional philosophy, the impression I formed was that people who talked about "virtue ethics" generally didn't realise this and ended up with something incomprehensible or vacuous.)

asr:
Yes. However, it might be that the ends towards which virtue is a means aren't ethical ends. Somebody might care about consequences, but reserve their moral judgement for the process by which people try to achieve those consequences. It might be that people are good or bad, but states of the world are just desirable or undesirable.

For example, let's suppose it's desirable to be wealthy. This can happen in several ways. One individual, A, got wealthy through hard work, thrift, and the proper amount of risk-taking. Another, B, got lucky and won the lottery. Both A and B wind up with the same amount of money, but A got there by exhibiting virtue, and B didn't. A virtue ethicist can say "A is a better person than B", even though the consequence was the same.

I suppose you could say "A and B's choices have the same consequence for their bank balance, but different consequences for their own personal identity, and we have ethical preferences about that". But at this point, you're doing virtue ethics and wrapping it in a consequentialist interface. I suspect all these different strands of ethical thought are really disagreeing about what to emphasize and talk about, but can be made formally equivalent.
Randy_M:
B didn't choose to win the lottery; B chose to play the lottery. Surely when considering whether an action would be good to take, one has to consider all the attempts that didn't lead to success?
asr:
Yes. Presumably a consequentialist should consider the probabilities of various outcomes. This is potentially problematic, since probability is in the eye of the beholder, and it's not clear who the right beholder is. Is it B? Is it an ideal rational agent with the information available to B? An ideal but computationally limited agent?

My sense is that in the real world, it's hard to second-guess any particular decision. There's no good way to account for the difference between what the actor knew, what they could or should have known, and what the evaluator knows.
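A minimal sketch of the "whose probabilities?" problem, reusing the lottery example from upthread. All the numbers (ticket price, jackpot, both belief distributions) are invented for illustration:

```python
# The same action (buying a lottery ticket) evaluated under two belief
# states: the actor B's, and an ideal evaluator's using the true odds.
# All numbers are illustrative assumptions.

def expected_value(beliefs):
    """Expected payoff given beliefs: a list of (probability, payoff) pairs."""
    return sum(p * v for p, v in beliefs)

TICKET_PRICE = 1.0
JACKPOT = 1_000_000.0

b_beliefs = [(1e-3, JACKPOT), (1 - 1e-3, 0.0)]  # B overestimates the odds
true_odds = [(1e-8, JACKPOT), (1 - 1e-8, 0.0)]  # the evaluator's numbers

print(expected_value(b_beliefs) - TICKET_PRICE)  # +999.0: B's choice looks shrewd
print(expected_value(true_odds) - TICKET_PRICE)  # -0.99:  the evaluator calls it a waste
```

Which of these two verdicts should bear on B's moral standing is exactly the question raised above.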
Creutzer:
Nothing you say is false, and yet it strikes me as somewhat confused. For one thing, I'm not aware that anybody has ever said that only the actual consequences of an action matter for its moral status. That is not what consequentialism means.

The thing is that consequentialism and deontology are fundamentally about the moral status of actions, whereas virtue ethics is about the moral status of persons. They're not formally equivalent. If you have a system for actions, then you can derive the status of the person (by looking at the actions they are, by habit, predisposed to perform). Maybe an action that is not particularly good, viewed in isolation, is excusable because it was done out of a habit that is generally good. But you have to start with the actions, because if you want to start with the person and try to derive the value of actions from that - how do you do that? You have no way of assessing the moral status of a person independently of their actions. This is why you can and should build virtue ethics on top of an ethical system for actions, and why it's meaningless in isolation.
asr:
I think they can be made formally equivalent, in the sense that you can write a {consequentialist, virtue ethics, deontological} statement that corresponds to any given ethical statement in some other formalism.* For a given virtue-ethics view, you can say "act as a virtuous person would act", or "act in a way that achieves the same consequences as a virtuous person." For example, law is full of deontological rules, but we often have to interpret those rules by asking "how a reasonable person would have judged the situation", which is essentially using an imaginary virtuous person as a guide.

I agree that different ethical theories talk in different languages and that these distinctions are relevant in practice. However, I would ignore the form of the sentences and focus on which parts of the ethical theory do the real work. You have to assess people based on their actions, but it might be that we assess actions in a way that isn't particularly consequentialist or even formalized; we can use non-ethically-relevant actions to judge people's characters. For example, if somebody seems impulsive and thoughtless, I will judge them for that, even if I don't observe their impulsivity causing them to take actions with likely bad consequences.

There's a big chunk of my brain that's optimized for evaluating how I feel about other people. When I use that part of my brain, I don't look at individual actions people take and ask about the probable consequences; rather, my overall experiences with the person, and what I hear about the person, get tabulated together. I use that part of my brain when I form ethical judgements, and I think of philosophical ethics as a tool for training that part to work better.

* I suspect there may be some edge cases where this works badly; I am only concerned with the sort of ethical statements that tend to come up in practice.
Creutzer:
This is precisely the circularity that I was talking about. Where do you get the substance from? How do you know which person is virtuous? Unlike "act so as to maximise average expected utility" (some form of consequentialism) or "don't do X" (primitive deontology), "act as a virtuous person would" is an empty statement.

Nobody says you look only at actual actions. You're concerned with actions that the person is predisposed to. The non-ethically-relevant action that you observe is still evidence that the person has a temperament that disposes them to ethically relevant and unfavourable actions.
asr:
It's an empty statement until you tack on a concrete description of virtue. But that's not hard to do. Aristotle, for instance, gives a long discussion of bravery and prudence and wisdom and justice and so forth in the Ethics -- and he does it without having a full account of what makes an action good or bad. I suspect that what you are viewing as vacuous is really an implicit appeal to widely shared and widely understood norms that determine what makes people admirable or blameworthy.

Yes. But why is it easier to talk about good and bad actions than about good and bad temperaments? I agree there has to be a substantive account somewhere. But I don't actually know how to define utility in a moral sense, and it seems like a very hard problem. It's not pleasure or the emotion of happiness. When consequentialists start talking about "human flourishing," I feel like a virtue ethics is being smuggled in through the back door.
Creutzer:
I'm not saying one is easier than the other; I'm saying one is more fundamental than the other. Bravery is nothing but a disposition to actions, and prudence and wisdom, to the extent that they are not dispositions to actions, are not morally relevant. They're intellectual virtues, not moral virtues.

That's a completely different issue. No, the other way around: when virtue ethicists talk about "human flourishing", consequentialism is being smuggled in through the back door.
[anonymous]:
I happen to agree with you here, but I think you're confusing an epistemological point with an ontological one. It may be that actions are epistemically more fundamental than character, insofar as they're our basis of evidence for saying things about people's characters, but it doesn't follow from this that actions are more fundamental full stop. Virtue ethics is, at least a lot of the time, the thesis that dispositions of character are ethically fundamental. Not actions, even if actions are our only epistemic ground for talking about character. As I said, I agree with you that actions are ethically fundamental, but this isn't a critique of virtue ethics, it's just a denial of it.
Creutzer:
My point is intended to be neither epistemic nor ontological, but conceptual. Dispositions of character cannot be conceptually prior to actions because they are defined in terms of the actions they are dispositions towards. Admittedly, you can have some back-and-forth - the same action could be virtuous or not, depending on whether it was done out of habit, or just accidentally, or through effort and force of will. But you still have to start with actions in order to determine which habits it is that have the power to confer moral value onto the actions they give rise to.

From Wikipedia:

a consequentialist may argue that lying is wrong because of the negative consequences produced by lying—though a consequentialist may allow that certain foreseeable consequences might make lying acceptable. A deontologist might argue that lying is always wrong, regardless of any potential "good" that might come from lying. A virtue ethicist, however, would focus less on lying in any particular instance and instead consider what a decision to tell a lie or not tell a lie said about one's character and moral behavior

Under this sch…

Are you a virtue ethicist at heart?

No, but I'm a deontologist at heart. Only in death does duty end.

shminux:
So... you value following duty as a character trait?
Apprentice:
I guess you could spin it that way - but let me take an example. For the last couple of weeks, my wife and I have been involved in some drama in our extended family. When we discuss it in private and try to decide how we should act, I've noticed my wife keeps starting off with "If we were to do X, what would happen?". She likes to try to predict different outcomes, and she wants to pick the action that leads to the best one. So maybe she is a consequentialist through and through.

I tend to see the whole sorry business as too complicated for us to predict, especially since I don't want to neglect consequences 10 or 20 years down the line. So I fall back on trying to apply rules that would be generally applicable: "What is our duty to family member X? What is our duty to family member Y?" It's not that I would ever say "We should do X, even though it leads to worse outcomes." But I do want to consider the long run, and I'd prefer not to destroy useful Schelling points for short-term gain.

OK, so you use virtue ethics (doing one's duty is virtuous) and deontology as shortcuts for consequentialism, given that you lack resources and data to reliably apply the latter. This makes perfect sense. Your wife applies bounded consequentialism, which also makes sense. Presumably your shortcuts will keep her schemes in check, and her schemes will enlarge the list of options you can apply your rules to.

Apprentice:
I like that formulation, thank you!

I upvoted this post, and I want to qualify that upvote. I upvoted this post because I believe it raises a substantial point, but I feel like it doesn't have enough, for lack of a better term, punch to it. Part of my lack of conviction is based in how I'm not very well educated in matters of moral psychology, or philosophy either, and I suspect this would be cleared up if I were to study them more. Shminux, you might not recognize me, but I'm Evan from the meetup. Anyway, I remember at the last meetup we both attended a couple of weeks ago when we discu…

shminux:
First, I do not think there is anything wrong with virtue ethics, as long as we recognize that it is one of several robust computational shortcuts, and not the one true normative ethics. It is quite rational to use all the tools at your disposal. It is irrational for a human to proclaim oneself a consequentialist, because no one is.

A form of consequentialism is essential for FAI, since virtue- or rule-based shortcuts are bound to fail on the edge cases, and an AI is very good at finding edge cases. Humans, on the other hand, extremely rarely run into these edge cases, such as the trolley problem or specks vs torture. More common are paradigm shifts, such as universal suffrage, gay rights, abortion, euthanasia, and the ethical treatment of animals, where some deontological rules have to be carefully recalculated, then followed again. Some day it might be sims, uploads, cloning, designer babies, and so on.

I would estimate this to be much likelier than them being honest-to-goodness consequentialists. If someone says "I don't just follow my intuition but also attempt to calculate utilities the best I can before making a decision", then that is worthy of respect. If someone says "I base my actions solely on their evaluated consequences", I would lower my opinion of them because of this self-delusion.
eggman:
Thanks for replying. You made more points which dovetail with my own observations. I'd qualify (again) my previous comment as not an endorsement of virtue ethics generally, but an acknowledgement that it can be valuable. I might consider a form of consequentialism to be better than any other system we have right now for an ideal rational agent, but I don't believe that humans in their current state will reach the best results they could achieve by pretending to be consequentialists. I don't know how humans will fare in their ethical behavior in a future where our mind-brains are modified.
Eugine_Nier:
I don't believe anything resembling careful recalculation occurred with any of these shifts.

Spoiler alert

An example from fiction: in The Dark Knight, Batman refuses to kill the Joker. From a consequentialist point of view, it would save many more lives if Batman just killed the damn Joker. He refuses to do this because it would make him a Killer, and he doesn't want that. Yet, intuitively, we view Batman as virtuous for not killing him.

One could also give this a deontological interpretation: Batman strictly follows "Thou shalt not kill". I think, in general, that deontology and virtue ethics have a lot in common: if you follow deontology,…

Yet, intuitively, we view Batman as virtuous for not killing him.

I don't.

I'm frequently annoyed with supposed "good guys" letting the psychopathic super-baddy live, taking their boot off his throat, only to lose many more lives and have to stop the bad guy again and again. I don't view them as virtuous, I view them as holding the idiot ball to keep the narrative going. It's like a bad guy stroking a white cat who sends James Bond off to die some elaborate ceremonial death, instead of clubbing him unconscious, putting a few rounds in his head, and having him rolled up in a carpet and thrown out.

Note that the storyline often allows the hero to have his "virtue" and his execution too, as the bad guy will often overpower the idiot security forces holding him to pull a gun and shoot at the hero, allowing the hero to return fire in self-defense. How transparent and tiresome. Generally, "moral dilemmas" in movies are just this kind of dishonest exercise in having your cake and eating it too. How I long for a starship to explode when the Captain ignores the engineer and says "crank it to 11", or to see some bozo snuffed out the moment he says "never tell me the odds".

Bond actually refused to play that game in GoldenEye.

[Bond is holding Trevelyan by his foot on top of the satellite antenna.]
Trevelyan: For England, James?
Bond: No. For me. [lets Trevelyan fall to his death]

[anonymous]:
Boy, are you ever on the right website. As far as I can tell, this place is basically a conspiracy full of Dangerously Genre Savvy people trying to get good things done in real life through the use of our Dangerous Genre Savvy. Now if you'll excuse me, I need to go find a white dog to pet. I'm allergic to cats.
Luke_A_Somers:
That Genre being 'things that actually happen', which would be a very niche genre in fiction?
[anonymous]:
Pretty much, yes. The whole difference between Genre and Genre Savvy is that a Genre Savvy viewer recognizes what would actually happen in real life, whereas fictional characters not only don't recognize that, their whole universe functions in a different, less logical way. In fiction, refusing to shoot Osama bin Laden means he ends up serving time in jail, and justice is served. In real life, refusing to shoot Osama bin Laden means he tells his followers he has enjoyed a Glorious Victory Against the Western Kuffar Cowards (don't laugh: this is what fascist movements actually believe), which spurs them to a new wave of violence.
TheOtherDave:
Depends on the genre. Sometimes it means he waits until your back is turned and tries to kill you, thereby allowing you to kill him to defend yourself. Sometimes it means he goes free and mocks you and then dies of a heart attack. Sometimes it means he goes free and his mocking laughter is heard over the credits.
Eugine_Nier:
More importantly, he gets to return for the sequel.
buybuydandavis:
I think the genre I'm railing against is Dishonest Moral Propaganda. That's what irks me - they're using lies to make a case for some nitwit ideology or behavior.
Luke_A_Somers:
You didn't even mention 'genre'. I was just trying to figure out how eli was characterizing us here.
9eB1:

Becoming sympathetic to consequentialist thought has definitely ruined most (almost all?) pop culture artifacts involving morality for me. I just sit there thinking, "Wow, they should definitely put a bullet in that guy's head ASAP," interleaved with, "Wait, what's the big deal, I don't see anyone getting hurt here," depending on the genre. Watch Star Trek TNG episodes with this in mind and you will quickly think that they are simultaneously completely incompetent and morally monstrous (the Prime Directive is one of the most evil rules imaginable).

buybuydandavis:
Try being sympathetic to egoist thought and watching the movies. While I enjoy "It's a Wonderful Life" and "The Philadelphia Story", I consider the morality monstrous.

Yes, and in the magic fictional universe, not blowing him away when you had the chance miraculously turns out for the good, instead of getting everyone killed.
shminux:
I found the Prime Directive to be one of the hardest lessons in consequentialism. If it existed in the real world, we would not have many of the current problems in the developing world, where people slaughter each other using modern weapons instead of spears and bows. And they coordinate the slaughter using modern tech, too. And the radicalization of Islam has been caused in part by Western ideas of decompartmentalization. Exploiting poorer nations and depleting their natural resources doesn't help much, either. The so-called foreign aid does more harm than good, too. If only Europeans had had enough sense to refrain from saving the savages until they were ready.
9eB1:
As I said in another comment in this thread, we know that the real-world reason the Prime Directive exists is that Gene Roddenberry hated historical European imperialism. I grant that the Prime Directive may be a handy rule of thumb given imperfect knowledge and the in-universe history of interference. My main problem with it is that it is a zero-tolerance policy where the outcome of following it is, rather than someone being expelled for bringing Tylenol to school, the extinction of a species with billions of lives. It would be as if Europeans knew Africa was going to sink into the ocean in one year and weren't even willing to tell the Africans it was going to happen (and then patted themselves on the back for being so enlightened). And this becomes the core founding principle of the Federation.
shminux:
I don't think you interpret the Prime Directive the way Gene Roddenberry did. The directive says that you don't meddle in the affairs of other cultures just because they act in a way that seems wrong to you (incidentally, that's why I am unimpressed with the reactions of all 3 species in Three Worlds Collide: all 3 are overly Yudkowskian in their interpretation of morality as objective). It does not say that you should not attempt to save them from certain extinction or disaster, and there are several episodes where our brave heroes do just that, all the while trying to minimize their influence on said cultures otherwise, admittedly with mixed results.
9eB1:
See the episode Pen Pals. The population is going to be destroyed by a geological collapse, and Picard decides that the Prime Directive requires that they let everyone there die. Of course, by sheer luck they hear a girl calling out to Data for help while they are debating the issue, which Picard determines is a "plea for help" and so doesn't violate the Prime Directive if they respond. But without that plea, they were going to let everyone die (even though they had the technological capability to save the world without anyone knowing they intervened). I believe this episode has the most protracted discussion of the Prime Directive that we have seen in-fiction. In Homeward, Picard considers it a grave violation of the Prime Directive that Worf's brother has attempted to save a population when everyone on their planet was going to die in 38 hours.
shminux:
OK, you have a point; sometimes it does not mean what I thought it did. If you look at the general description of it, however, there are 8 items there, and only one of them ("Helping a society escape a natural disaster known to the society, even if inaction would result in a society's extinction.") is of the questionable type you describe. The original statement, "no identification of self or mission; no interference with the social development of said planet; no references to space, other worlds, or advanced civilizations," also makes perfect sense.
Viliam_Bur:
I am still not convinced that life would be better in the parallel reality. Why exactly is being killed by a gun worse than being killed by a spear?
[anonymous]:
In many cases, those civilizations were knocked back to "savage level" by dehumanizing levels of exploitation, colonization, and sheer deliberate destruction by Europeans in the first place. This doesn't excuse the behavior of post-colonialist Third World countries, except in the sense that one who creates a power vacuum may bear some responsibility for whoever fills it.
shminux:
Maybe I was unclear. It seems that you and I agree that the Prime Directive would be a good default deontological rule when dealing with less advanced societies.
Eugine_Nier:
Consider the comparable real-life situation. LessWrong has a policy against listing real-life examples, so I won't, but you should be able to think of some. While we're at it, think about the reason LW has this policy.

You mean you don't see anyone getting immediately hurt. The kind of civilization-affecting decisions that occur on Star Trek frequently have indirect effects that are orders of magnitude larger than their direct effects.
9eB1:
The problem is that fiction often removes the most compelling reasons that this sort of thinking doesn't work in the real world (uncertainty regarding facts, uncertainty regarding moral reasoning), but tries to retain the moral ambiguity. I think I would be much happier if police were perfect virtue ethicists or deontological reasoners than is currently the case, but if Blofeld reveals his dastardly plans to Bond, I want as many bullets in his head as can be arranged in short order.
Eugine_Nier:
To a certain extent this is true due to narrative requirements; however, to a certain extent it's a realistic portrayal of what certain states of knowledge can feel like from the inside.

Edit: Also, this helps reduce the number of memetic hazards in fiction.
pragmatist:
I haven't watched Star Trek, so I looked up the Prime Directive on Wikipedia. Interestingly, there's a quote from Jean-Luc Picard suggesting that the justification for the directive is actually broadly consequentialist: history has supposedly proven that whenever a more advanced civilization interferes with a less developed one, however well-intentioned the interference, the results are invariably disastrous. Pretty sure that the results haven't invariably been disastrous, but it does seem true to me that the results have been disastrous (or close to it) often enough for us to think very carefully about how (or whether) any such intervention should proceed. I do agree that making non-intervention an inviolable diktat, especially in an extremely populated universe, is horribly misguided.
buybuydandavis:
Compared to what? Was life all bright and shiny before civilization interjected itself into the existing barbarism? I've generally considered the Prime Directive moral cowardice dressed up in self righteousness.
Eugine_Nier:
In the Star Trek universe, frequently yes. Granted, this is completely unrealistic sociology, but then again warp drive and transporters are completely unrealistic physics.
pragmatist:
Compared to how things were before the intervention. And no, things usually weren't "bright and shiny" before either, but it is possible for shitty situations to get even shittier.
9eB1:
I believe there was an article on Overcoming Bias about how people frequently use consequentialist logic to support their beliefs when their underlying reasoning is anything but a dispassionate analysis, and I think that logic applies to Picard's quote. The justification for the Prime Directive that has appeared in multiple episodes I've watched (I have been watching all of the episodes, starting with the original series, and am now several seasons through TNG) is that we need to see if these societies are able to successfully "develop" past the stages of evil and become enlightened societies. I don't ascribe moral valence to societies, but to individuals, which is why I think this sort of social Darwinism is nothing short of barbaric.

We already know from real life that there has been no significant biological evolution since humans developed mature civilizations, and yet we are to believe that the right moral choice is to let these species "evolve naturally" to see if they are worthy (they are allowed to know of Starfleet once they have achieved warp drive technology). If these people are biologically capable of advanced moral thought, that capability exists whether they are currently exercising it or not. The basic question is whether you think the world would have turned out better or worse if you could go back several hundred years and tell humans, "Hey, this slavery thing is not so hot, it really doesn't work out well," and other moral truths that we take for granted.

This is aside from the situations where they are directed not to intervene even when, for example, a star's collapse is going to destroy a civilization made up of billions of individuals that have moral valence, through no particular fault of that society and having no bearing on whether they will achieve Starfleet's preferred standard of morality. I find the idea that it is universally negative from a cost-benefit perspective to "interfere" with a culture's development, such that this becomes the firs…
Eugine_Nier:
Reread that sentence. Notice how the second half seems to contradict the first.

We do? This is not at all obvious. Consider the genetic changes in domestic animals, for example.

Note that the above sentence implicitly uses deontological reasoning.
9eB1:
Perhaps you could explain? Social Darwinism in Earth terms seems to be the idea of "survival of the fittest" individuals within a society, but here I'm referring to a Star Trek variant of social Darwinism that occurs at the level of the society (similar to some definitions of social Darwinism described on the Wikipedia page, e.g. under the first Nazism header). The reason I call it social Darwinism rather than merely evolution is because in-fiction it occurs due to advances in societal values rather than because of biological changes, but perhaps this isn't the clearest choice of terms.

Societies which are able to develop advanced technology are given moral weight by the powers that be, but those who have not yet developed such technology are given no moral weight. This moral weight is demonstrated by the willingness to avert extinction when such actions carry apparently trivial cost for the Enterprise crew, which seems to leave little room for doubt. I am proposing that sentencing the individuals within these societies to death because of insufficient societal "advancement" (collective action) is the evil part here.

I will grant that humans are still evolving, because obviously you can't turn it off in the broader sense. But I haven't found any suggestions that people are evolving in any ways that would change the moral weight we should assign individuals. Perhaps this is a weakness in our knowledge, and in the Star Trek universe it's clear that biological evolution continues to proceed in a way that is morally relevant (even though they never say anything like that), but it seems unlikely based on what we currently know that a smarter humanity is in the cards through evolutionary (vs. technological) means.

I don't think so, but I'm not exactly sure why you say that. From a consequentialist perspective, if people have the cognitive ability to understand moral thought, then the outcome of trying to convince them that they should use it in a particular way can…
Eugine_Nier:
You claim not to ascribe moral valence to societies, and then promptly proceed to declare a social system "barbaric".

I didn't say anything about moral weight, largely because I've never heard a good explanation of how it is supposed to be assigned. I'm talking about their cognitive abilities, in particular their ability to act sufficiently morally.

That's deontological reasoning (there is a chance these people can be saved, thus it is our duty to try). Consequentialist reasoning would focus on how likely the attempt is to succeed and what the consequences of failure would be, not just on whether they can be saved.
9eB1:
Fair enough. One difficulty of consequentialism is that unpacking it into English can be either difficult or excessively verbose. The reason Star Trek-style social Darwinism is barbaric is because of its consequences (the death of billions), not because it violates a moral rule that I have regarding social Darwinism. If it worked, then that would be fine. The reason I said it "can be a net benefit" is specifically because I was trying to imply that one should weigh those consequences and act accordingly, not take action based merely on the fact that it is possible. The Prime Directive is a bright-line rule that precludes such weighing of consequences.
[anonymous]:
Nitpicking time. I'm not so sure that that's the reason. He's also playing a long game in which Batman is supposed to be a symbol of what is possible. This reasoning produces actions that have short-term potential problems but causes many others to do better over a long time period.
Eugine_Nier:
There is a (meta-consequentialist) reason for this. Imagine what would happen if police were encouraged to act in a consequentialist manner.
ThrustVectoring:
The police themselves can imagine what would happen if they followed the popular conception of what consequentialism is. That imagined outcome is an expected consequence of police action, so if it's worse than what they're doing now, they won't choose to act that way (under a sufficiently savvy model of consequentialism).
Said Achmiz:
There is a term for this: rule consequentialism.
Brillyant:
This is the best response, I think. "Thou shalt not kill" is actually nothing more than a consequentialist heuristic posing as deontological/virtue ethics.

If Batman kills the Joker in lieu of a trial, he is a de facto "good guy" authority setting a precedent for such eye-for-an-eye behavior throughout all of Gotham. That is a potentially powerful meme given Batman's status, and it could reasonably lead to a norm of ruthless, draconian law enforcement methods for decades to come. There are meta-consequentialist considerations at play. Killing the Joker means, in some sense, Batman had to agree that the Joker's ethics -- killing your enemy to advance your ends -- work.

Of course there are times when killing, stealing, and lying are consequentially a net positive, but it is very useful to have deontological norms prohibiting those actions and to ascribe virtue to the people who follow the rules. It is, in fact, the best consequentialist policy over time.
[anonymous]:
At least in The Dark Knight, the Joker was an outright nihilist. His primary goal was simply to prove that everyone is as crazy as him underneath. Mind, the whole supposed Moral Dilemma about Society on the Brink of Collapse should anyone ever See Through the Noble Lie and realize that the Joker Was Right and there really is just Nothing... well, it kinda goes away once you confront the abyss yourself and realize that, given a blank canvas, you'd prefer to paint a pretty picture rather than burn the building down. (Or in other words, the Joker presumed to prove that people must be Nihilists like him underneath, without considering whether the result might not be a heavily-armed batch of Existentialists.)

I found myself thinking along similar lines about a year ago when I was faced with a legitimate moral dilemma. Situations which I can view in the abstract, or which I'm distanced from, I can generally apply dispassionate cost-benefit analysis to; but if I actually find myself in a position where I have to make decisions with moral consequences, I'll find myself agonising over what kind of person it makes me.

There's an extra frustrating element to this, because some decisions only have moral consequences as far as "what kind of person they make me"…

However, I posit that most of us intuitively use virtue ethics, not deontology or consequentialism. In other words, when judging a person's actions we intuitively weigh their motivations over the rules they followed or the consequences of those actions.

Consequentialism has nothing to do with how to judge someone else's actions. If I am trying to poison my friend, but by some miracle the poison doesn't kill him and instead manages to cure his arthritis, then I am still a bad person. Virtue ethics seems like a rational framework to judge other people by, perhaps tautologically.

shminux:
I don't think that is quite what consequentialism means, except in the ideal case of unlimited computing power and perfect prediction capabilities. What matters in bounded consequentialism is the expected consequences of someone's actions, calculated to the best of one's ability, which may differ from the actual consequences. Of course, in the real world people usually get judged on a mix of intended and actual consequences.

In real-life trolley problems, even committed utilitarians (like commanders in wartime) are likely to hesitate before sacrificing some lives to save more.

This, at least, seems to me to be entirely appropriate for a utilitarian. If you don't hesitate before sacrificing lives, you're likely to miss opportunities to accomplish the same goal without sacrificing lives.

If you have one option which is clearly superior to your known alternatives, but that option still leads to outcomes you would seriously want to avoid, then you should probably make full use of whatever time you have to look for other possible options which would be superior.

[anonymous]:

A pretty common trope in moral philosophy is the idea that since we've all met plenty of (and have many historical examples of) decent, good, and sometimes extraordinarily good people, it just can't be the case that the pre-theoretical intuitions of such people are just plain wrong. The direction of fit in a moral theory is theory->world: if our theory doesn't capture the way (decent or good) people actually do think about moral problems, it's probably wrong. If that's right, the fact that we are all virtue ethicists at heart (or whatever we are) would be pretty good evidence for virtue ethics as the correct theory.

What do you think of this?

shminux:
I have a physicist's view on this. Every model is an approximation, including ethical ones. I think that virtue ethics is a decent approximation in many realistic situations. To me it often encodes precommitment to symmetric decisions, e.g. I will cooperate (be honest, generous...) as long as the other person does, because it's a virtuous thing to do. It does not stumble on PD or Parfit's hitchhiker as long as everyone values the same set of virtues. However, like any other normative ethics, it goes awry in many edge cases or when the symmetry breaks down. Thus I don't much care about the notion of eudaimonia or any attempts to pronounce VE "correct" or "incorrect". Again, it's one approximation which often works well, nothing more.

I posit that most of us intuitively use virtue ethics, not deontology or consequentialism.

I suspect that this is true, and that such differences in intuition account for the existence of these differing theories in the first place, e.g., Kant was intuitively deontological while Aristotle was intuitively a virtue ethicist.

Also, there may already be research into moral psychology that explores whether people's disagreements over ethical frameworks correlate with different personality traits. If so, this would speak to your claim.

Eugine_Nier:
I would argue that deontology and consequentialism are both (ultimately unfriendly, I would claim) attempts at recursively extrapolating our ethical intuitions into something coherent.

If only a small minority of people are consequentialists by default, then coldly calculated actions that have good consequences would more likely be a sign of a callous character than of a finely tuned moral compass, which in turn could lead to bad consequences in other situations. People might not be as irrational in judging these example situations as it seems.

I'm a virtue ethicist and a consequentialist, as the two are orthogonal. As I see it, the claim "being virtuous makes you happy, and that's why you should be virtuous" falls within both virtue ethics and consequentialism.

Is it possible that your definitions of consequentialist and virtue ethicist overlap? Consequentialism tells you to take the actions that will result in the greatest expected good, but it does not necessarily follow that the greatest expected good is obtained by doling out punishments and rewards to other people based on the immediate consequences of their actions.

Examples:

  • People's abilities seem to have a great deal of natural variance which is beyond their conscious control, but rewarding or punishing them for things they can't control doesn't actuall…
asr:

What sort of moral system to use should depend on what you're using it for. I find virtue ethics the most useful way to view the world, generally.

My sense is that we mostly can't evaluate things from a consequentialist perspective. We're not very good at predicting consequences, and we're even worse at evaluating whether somebody else is behaving in a proper consequentialist way, given the information at their disposal.

Moreover, consequentialism requires us to pin down what we mean by "consequence" and "cause", and those are hard. If a…