[ epistemic status: My modeling of this rings true for me, but I don't know how universal it is. ]
Interesting discussion, and I'm somewhat disappointed but also somewhat relieved that you didn't discover any actual disagreement or crux, just explored some details and noted that there's far more similarity in practice than difference. I find discussion of moral theory kind of dissatisfying when it doesn't lead to different actions or address conflicts.
My underlying belief is that it's a lot like software development methodology: it's important to HAVE a theory and some consistency of method, but it doesn't matter very much WHICH methodology you follow.
In the vast majority of humans, legible morality is downstream of decision-making. We usually make up stories to justify our actions. There is a bit of far-mode moral discussion that has some influence over near-mode decisions, making those stories easier to tell (and actually true, for loose definitions of "true").
Thus, any moral system implemented in humans has a fair number of loopholes and many exceptions. These show up as uncertainty and inconsistent modeling in consequentialist stories, or as ambiguity and weighting in deontological or virtue stories.
This makes these systems roughly equivalent in terms of actual human behavior, except that they're very different in how they make their adherents feel, which in turn makes them behave differently. The mechanism is not legible or part of the moral system; it's an underlying psychological change in how one interacts with one's own thinking and in how humans communicate and interact.
> Interesting discussion, and I'm somewhat disappointed but also somewhat relieved that you didn't discover any actual disagreement or crux, just explored some details and noted that there's far more similarity in practice than difference.
I feel very similarly, actually. At first, when I heard that Gordon is a big practitioner of virtue ethics, it seemed likely that we'd (easily?) find some cruxes, which is something I had been wanting to do for some time.
But then when we realized that non-naive versions of these different approaches seem to mostly converge on one another, I dunno, that's kinda nice too. It kinda simplifies discussions, and makes it easier for people to work together.
> In the vast majority of humans, legible morality is downstream of decision-making. We usually make up stories to justify our actions. There is a bit of far-mode moral discussion that has some influence over near-mode decisions, making those stories easier to tell (and actually true, for loose definitions of "true").
I agree. There's a sort of confusion that happens for many folks where they think their idea of how they make decisions is how they actually make decisions, and they may try to use System 2 thinking to explicitly make that so. But in reality most decisions are a System 1 affair, and any theory is an after-the-fact explanation that makes legible, to ourselves and others, why we do the things we do.
That said, System 2 thinking has an important place as part of a feedback mechanism to direct what System 1 should do. For example, if you keep murdering kittens, having something in System 2 that suggests that murdering kittens is bad is a good way to eventually get you to stop murdering kittens, and over time rework System 1 so that it no longer produces in you the desire for kitten murder.
What matters most, as I think you suggest at the end of your comment, is that you have some theory that can be part of this feedback mechanism, so you don't just do what you want in the moment to the exclusion of what would have been good to do long term because it's prosocial, has good secondary effects, etc.
Aren't there situations (at least in some virtue-ethics systems) where it's fundamentally impossible to reduce (or reconcile) virtue ethics to consequentialism, because actions tending towards the same consequence are called both virtuous and unvirtuous depending on who does them? (Or, conversely, where virtuous conduct calls for people to do things whose consequences are in direct opposition.)
For example, the Iliad portrays both Achilles (Greek) and Hector (Trojan) as embodying the virtues of bravery/loyalty/etc. for fighting for their respective sides, even though Achilles's consequentialist goal is for Troy to fall, and Hector's is for that not to happen. Is this an accurate characterization of how virtue ethics works? Is it possible to explain this in a consequentialist frame?
> Aren't there situations (at least in some virtue-ethics systems) where it's fundamentally impossible to reduce (or reconcile) virtue ethics to consequentialism, because actions tending towards the same consequence are called both virtuous and unvirtuous depending on who does them? (Or, conversely, where virtuous conduct calls for people to do things whose consequences are in direct opposition.)
This is most likely to happen if an ethical system is particularly naive, in the sense that it's excessively top-down, trying to function as a simple, consistent system rather than trying to account for the nuanced complexity of real-world situations. But yes, I think sometimes virtue ethicists and consequentialists may reasonably come to different conclusions about what's best to do. For example, maybe I would reject something a consequentialist thinks should be done because I'd say doing so would be undignified. Maybe this would be an error on my part, or maybe it would be an error on the consequentialist's part from failing to consider second- and third-order effects. Hard to say without a specific scenario.
> For example, the Iliad portrays both Achilles (Greek) and Hector (Trojan) as embodying the virtues of bravery/loyalty/etc. for fighting for their respective sides, even though Achilles's consequentialist goal is for Troy to fall, and Hector's is for that not to happen. Is this an accurate characterization of how virtue ethics works? Is it possible to explain this in a consequentialist frame?
I think this is not a great example because the virtues being extolled here are orthogonal to the outcome. And consequentialists can choose to value their own side more than the other side, or to be indifferent between sides, so I'm not sure what the conflict between virtue ethics and consequentialism would be here.
> I think this is not a great example because the virtues being extolled here are orthogonal to the outcome.
Would it still be possible to explain these virtues in a consequentialist way, or is it only some virtues that can be explained in this way?
> And consequentialists can choose to value their own side more than the other side, or to be indifferent between sides, so I'm not sure what the conflict between virtue ethics and consequentialism would be here.
The special difficulty here is that the two sides are following the same virtue-ethics framework, and come into conflict precisely because of that. So, whatever this framework is, it cannot be cashed out into a single corresponding consequentialist framework that gives the same prescriptions.