Adam Zerner

Hey Gordon. Thanks for being willing to chat with me about this idea I have of Reflective Consequentialism. I've been wanting to do so for a while now.

To start, let me provide some context regarding what led me to think about it.

Consequentialism makes a lot of sense to me. I vibe pretty strongly with Scott Alexander's Consequentialism FAQ. And sometimes I read things that make me think that people in the rationality community all basically agree with it as well.

But then I read a bunch of things praising virtue ethics that make me feel confused. The idea with virtue ethics, from what I understand, is that yes, consequentialism is what makes the most sense. But, if humans try to actually make decisions based on which action they think will lead to the best consequences, they'll kinda do a bad job at it.

On the other hand, if they pick some virtues (for example, honesty and humility) and make decisions according to what these virtues would advise, it will lead to better consequences. So then, acting on virtue ethics leads to better consequences than acting on consequentialism directly, which means that humans should use virtue ethics when making decisions.

Ok. Now here is where I'm confused. This virtue stuff kinda just feels like a bunch of heuristics to me. But why limit your decision making to those particular heuristics? What about other heuristics? And what about, as opposed to heuristics, more "bottom-up", "inside view" calculus about what you expect the consequences to be? I feel like there is a place for all of those things.

But I also feel like virtue ethics is saying "no, forget about all of that other stuff; your decisions should be based on those heuristics-virtues alone". Which sounds very unreasonable to me. Which makes me surprised at how popular it appears to be on LessWrong.

(This is a little rambly and open-ended. Sorry. Feel free to take this where you want to go, or to ask me to propose a more targeted path forward.)

Gordon Seidoh Worley

I'm excited to talk about this because I'm pretty dedicated to virtue ethics at this point as my personal way of reasoning about how to act in my own life. To give context to readers who might not have it, I received lay ordination (jukai) within a Soto Zen lineage, and that required taking the 16 Bodhisattva Precepts. These precepts are explicit virtues I'm expected to uphold, and are, at least within my tradition, explicitly not rules but rather vows that we do our best to uphold. They include things like "support all life", "speak of others with openness and possibility", and "cultivate a clear mind".

Speaking of the precepts I took, let's look at one of them, which we phrase as "I take up the way of supporting life". What does it mean to support life? Well, that's ultimately up to the precept holder, but the way I think of it is that I have to try my best to do whatever will be best for all life everywhere and at all times while also holding the other precepts. A natural way for me to think about what's best for all life everywhere is to use consequentialist reasoning, but then temper whatever that reasoning concludes by balancing it against other precepts.

Let's take an example. I'm not in favor of wireheading, but why? If I just do an expected value calculation, wireheading seems like a great idea, yet it's not something I would actually endorse. The reason is that I also value, and have now explicitly vowed to uphold, another virtue: to cultivate a mind that sees the world clearly. Wireheading is directly at odds with seeing clearly, as are things like experience machines and tiling the universe with hedonium.

So as I see it, virtue ethics is not so much a bunch of heuristics but a set of guidelines or principles that work as a general framework for reasoning about norms, with that framework then serving as the scaffolding for doing more complex reasoning. I'm sure there are naive virtue ethicists, just as there are naive deontologists and consequentialists, who really do reason based on heuristics alone, but as I think of it, these three ways of approaching normative reasoning are just starting points, and all three converge towards a similar place when carefully thought through.

Adam Zerner

Oh, cool! You sound like a great person to talk with about this!

That last paragraph you wrote sounds like it probably hits the nail on the head, but I think I am unclear enough about what virtue ethicists and deontologists actually propose that I'm not too confident in that assessment.

You sound like you know more about this than I do, so let me ask: can you help me understand how a naive consequentialist, naive virtue ethicist, and naive deontologist would all approach a given moral question? And then from there, maybe we can talk about how less naive versions of each would refine the naive approaches.

Gordon Seidoh Worley

Sure! Let's take them each in turn. To keep this discussion from getting too abstract, I'll consider each stance's approach to answering a relatively concrete ethical question. Lacking a better idea, I'll go with "is it morally acceptable to have children".

Let's start with the naive consequentialist. They need to think through the consequences of having kids. One way they might reason is that most people are glad they exist, so on net you can create a lot of good in the world by having kids. Perhaps, they think, we should try to maximize the number of kids people have, up to the limit of starvation, because even a marginally good life is better than no life, and so they will be drawn towards biting the bullet on the repugnant conclusion.

A naive virtue ethicist will say something like having kids is good because life/kids are a blessing. Basically, having kids has worked out well for most people in the past, so it's probably a good idea to have kids. A choice not to have kids could be justified, but would have to rely on overwhelming evidence that having kids would impact one's own or one's kids' ability to live up to other valued virtues.

A naive deontologist should follow some rule here. If they are religious, perhaps this is a specific commandment, like "have as many kids as you can". If they are a Kantian, perhaps they reason that they only exist because others thought it was a good idea to have kids, so they had better have kids as a matter of consistency.

Each of these positions can be refined, though. Perhaps a naive consequentialist could be talked out of the repugnant conclusion by an argument that lives are only good if they have some minimum threshold of good experiences in them that prevents us from pushing up close to the survival limit. A naive virtue ethicist might be persuaded that yes creating life is good all else equal, but maybe they in particular would pass on a debilitating congenital disease that makes it likely their offspring would not have good lives. And a naive deontologist might be similarly swayed out of having kids if they knew that their kids would have bad lives, especially if birthing a kid who would ultimately have a bad life would violate another moral imperative.

Adam Zerner

I see. Thank you. In particular, I appreciate the push towards a concrete example.

I am wondering -- perhaps even suspecting -- whether, as you get further and further from naive virtue ethics and naive deontology, it all "bottoms out" in consequentialism. I'd like to explore this line of thought.

Let's use the "is it morally acceptable to have children" question as our running example. And to keep things simpler, let's forget about deontology for now and explore how a non-naive virtue ethicist would approach it.

It sounds like you're saying that non-naive virtue ethicists will carefully consider the different virtues at play, and use their judgement about how much weight to give to each of them. In this example, the non-naive virtue ethicist is weighing the virtue of "life is a blessing" against the virtue of "promoting overly distressed lives should be avoided".

If so, I think that raises the question of how you weigh virtues against one another. And it sounds like the answer -- either immediately or down the road -- is "because following this virtue seems like it will lead to better consequences".

What do you think?

Gordon Seidoh Worley

So in a sense it is about what results in better consequences, but indirectly. I think virtue ethics is probably best thought of as a system of normative adaptations that are executed rather than a system for logically reasoning about what's going to maximize virtue. There is some calculation being done by a virtue ethicist to balance between virtues, but it's going to be the result of executing some meta-virtue like "take things in moderation" or "strive to live virtuously and avoid unvirtuous behavior" rather than making a really explicit, water-tight argument about how much energy to put into maxing each virtue.

The way I see consequentialism coming into virtue ethics is roughly the opposite of how Reflective Consequentialism creates something that looks like virtue ethics. I use consequentialist reasoning to help me figure things out on the margins. So I'd say something like "life is a blessing, we should create more life" but then the question is "well, how much and what kind of life?", and being able to do expected value calculations is quite useful for answering such questions.

Let me put this another way using a different example. I take quite seriously the argument that AI poses an existential risk to life within our Hubble volume, and that even small probabilities of x-risk should be taken seriously because the potential astronomical catastrophe from loss of future life moments outweighs other concerns. That said, I only worry about x-risks on the margin. Mostly I just try to live a good life, and do what I can to work on AI safety as I see opportunities where I might have an impact, but I'm not going to rearrange my life around a mission. It should be noted that there is virtue in carrying out a mission, and those who are wealthy enough to focus on a mission to the exclusion of much else have a responsibility to do so, so there is still a way for a virtue ethicist to go all in on a mission like AI safety. I'm not in such a position, needing more material and social comfort than some others to feel secure enough to work on something like AI safety full time, but I let my beliefs impact my work insofar as they can without ruining my life.

Adam Zerner

Hm. I'm feeling confused here. I'm not understanding how the non-naive virtue ethicist's approach differs from the non-naive consequentialist's.

 

There is some calculation being done by a virtue ethicist to balance between virtues, but it's going to be the result of executing some meta-virtue like "take things in moderation" or "strive to live virtuously and avoid unvirtuous behavior" rather than making a really explicit, water-tight argument about how much energy to put into maxing each virtue.

This seems like something that a consequentialist (let's say from here on out that "non-naive" is implied) would spend a lot of time doing in real life. Ie. instead of a more bottom-up, inside view approach to calculating what the consequences will be, I think a consequentialist would spend a lot of time using heuristics. Which seem to be the same thing as what you're describing with meta-virtues.

 

The way I see consequentialism coming into virtue ethics is roughly the opposite of how Reflective Consequentialism creates something that looks like virtue ethics. I use consequentialist reasoning to help me figure things out on the margins. So I'd say something like "life is a blessing, we should create more life" but then the question is "well, how much and what kind of life?", and being able to do expected value calculations is quite useful for answering such questions.

For similar reasons, this too doesn't seem to me like a virtue ethics vs. (reflective) consequentialism thing. The way I see it, a consequentialist could very plausibly lean on heuristics to get started and then utilize bottom-up calculus as a way to sort of make adjustments and refinements.

I have a feeling that we're going to have to be a little more thorough and concrete here in outlining how exactly a consequentialist would approach a given situation, how a virtue ethicist would, and where they differ. But I don't want to dive into that without kinda checking in with you and giving you a chance to respond to anything I said above.

Gordon Seidoh Worley

I mean, as we get less naive, no matter what normative stance you take it's going to start looking like the others because I believe they're forced to converge to use similar strategies to deal with reality as we find it.

But to try to address what I see as a difference, you say:

This seems like something that a consequentialist (let's say from here on out that "non-naive" is implied) would spend a lot of time doing in real life. Ie. instead of a more bottom-up, inside view approach to calculating what the consequences will be, I think a consequentialist would spend a lot of time using heuristics. Which seem to be the same thing as what you're describing with meta-virtues.

The difference I see is subtle. The consequentialist is going to use heuristics as a means of efficiently achieving the objective of optimizing for what they think is valuable. But for a virtue ethicist there is nothing to optimize for, except instrumentally. Virtues are lived and expressed. You don't live a virtuous life by taking the actions that are likely to be the most virtuous: you instead make yourself into the type of person who would do virtuous things, and the virtue maxing falls out of having made this earlier choice to shape the type of person you are.

This perhaps got lost with my first example about whether or not to have children, because I probably let my writing be framed a bit too much by consequentialist discourse norms. That's why I say virtue ethicists are best thought of as something like virtue executors rather than virtue optimizers, because although there is some thinking about how to behave virtuously in specific scenarios and that creates a feedback loop to help a person learn to be more virtuous, at the end of the day virtue is expected to be an expression of one's being rather than something to be measured and maximized.

Does that clarify things?

Adam Zerner

Hm, something feels contradictory to me. Or maybe just incomplete.

On the one hand, I think I'm hearing that virtue ethicists do not seek to act according to virtues because they think doing so will lead to the best consequences. Instead, they seek to act according to virtues because doing so is an end in itself.

But on the other hand, I thought I heard you saying previously that it's often unclear which virtues to apply, and how strongly to apply each of them, and that a non-naive virtue ethicist would use their judgement about what seems like it'd lead to the best consequences. (You also mentioned using meta-virtues to resolve such situations, but I think that just raises the question of what to do when there are conflicts after applying such meta-virtues.)

Gordon Seidoh Worley

Sorry for the confusion. I'm figuring out how to express some of these ideas as we go.

I think it's right to say that virtue is its own end for the virtue ethicist. Virtue is not something to be cultivated because it generates good outcomes, but rather because virtuosity is valued in and of itself. That said, virtuous behavior is virtuous in part because it leads to beneficial outcomes, so it's hard to make a clean separation between virtue for its own sake and virtue for the sake of good consequences.

I see this as analogous to the situation for deontologists, where following rules is inherently valuable, but the rules should have been picked such that they reliably lead to good outcomes, otherwise they're not a very good set of rules.

As we get less naive, though, we have to start thinking through consequences more. For example, perhaps it's virtuous to have kids, but if those kids would have terrible birth defects, better not to have them. I see an analogous situation for consequentialists, but it's more that they have to start thinking about the second- and third-order effects of establishing a pattern of behaving in particular ways, which effectively recreates rules/virtues out of those patterns (or heuristics, if the patterns are being applied to short-circuit reasoning from first principles).

Does that help make some sense of it? To be clear, I think as we get less and less naive the differences get increasingly muddled because every stance towards ethical behavior has to converge towards handling the same complexity, and reality only permits so many ways to do that and still reliably produce desired outcomes.

Adam Zerner

I think I am understanding. Let me try to paraphrase.

You're saying that for a virtue ethicist:

  1. Virtuosity is valued as an end in and of itself.
  2. The things that are considered virtuous are chosen due to the expectation that they will lead to good consequences.

And similarly, you're saying that for a deontologist:

  1. Following rules is valued as an end in and of itself.
  2. The rules that are valued are chosen due to the expectation that they will lead to good consequences.

Does that sound correct?

Gordon Seidoh Worley

Yes, I think that's basically right, although with the caveat that many virtue ethicists and deontologists are naive in the sense that they're not reflective enough to have really thought through where the virtues/rules come from, since they've usually been derived via a process of cultural evolution, though with notable exceptions like Kant. For them virtuosity/rule observance is indistinguishable from doing good.

Adam Zerner

Gotcha, thanks for clarifying. I think I have officially resolved one point of confusion. I originally wrote:

Ok. Now here is where I'm confused. This virtue stuff kinda just feels like a bunch of heuristics to me. But why limit your decision making to those particular heuristics? What about other heuristics? And what about, as opposed to heuristics, more "bottom-up", "inside view" calculus about what you expect the consequences to be? I feel like there is a place for all of those things.

But now I see that virtue ethics is in fact not just ~"consequentialism + heuristics". It is treating virtuosity as an end in and of itself.

I want to ask some follow up questions here, but for the sake of not deviating too far from the original topic of this dialogue, I'd like to first circle back to Reflective Consequentialism a bit.

I think a lot of people misunderstand virtue ethics and interpret it as something like "act virtuously as a means of achieving the end of the best consequences". And I think they advocate for this as a wise life strategy.

I disagree with that. I see virtuosity as a "tool in your toolbox" -- a great tool -- but not the only tool. If you are a consequentialist whose goal is to achieve the best consequences, I don't see why limiting yourself to that one tool would make sense.

I'm not the best at the math-y stuff, but I feel like you could frame it in a Bayesian way. The goal of figuring out which action leads to the best consequences is a sort of epistemic one. Bayesianism says to update in response to all Bayesian evidence, even if that only means a small shift in your beliefs. To only consider the heuristic of virtuosity feels to me like it necessitates treating other forms of Bayesian evidence as "inadmissible".
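To gesture at what I mean -- with completely made-up numbers, a naive independence assumption, and names I'm inventing just for this sketch -- the picture in my head is something like this:

```python
# Toy Bayesian sketch: judging "is this action good?" by updating prior odds with a
# likelihood ratio from each admissible piece of evidence. All numbers are invented.

def posterior_prob(prior_odds, likelihood_ratios):
    """Multiply prior odds by each likelihood ratio (treated as independent), return a probability."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior_odds = 1.0        # 50/50 before considering anything
virtue_heuristic = 3.0  # the relevant virtue says this kind of action usually goes well
inside_view_ev = 0.25   # my own rough consequence estimate is unfavorable
trusted_advice = 2.0    # people I trust think it's a good idea

print(posterior_prob(prior_odds, [virtue_heuristic]))                                  # 0.75
print(posterior_prob(prior_odds, [virtue_heuristic, inside_view_ev, trusted_advice]))  # 0.6
```

The specific numbers don't matter. The point is just that treating only the virtue heuristic as admissible gives a different answer than updating on all the evidence you have.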

I could see this approach being what leads to the best consequences for some people. Ie. for some people, I could see attempting to go beyond the "virtues as heuristics" sort of approach leading to more harm than good. But I feel like there are a lot of people who are strong and wise enough to "graduate" past this.

What do you think?

Gordon Seidoh Worley

I disagree with that. I see virtuosity as a "tool in your toolbox" -- a great tool -- but not the only tool. If you are a consequentialist whose goal is to achieve the best consequences, I don't see why limiting yourself to that one tool would make sense.

Yes, I think as ethical views get less naive, no matter where you start from, you have to start treating the other two stances as tools you can pull out when you need them. As a consequentialist this looks like drawing on virtue "heuristics" to make calculating consequences tractable and deontological "patterns" to avoid known antipatterns that might seem attractive but have the sign flipped (a common consequentialist failure mode is to get the magnitude right but the direction wrong, and do bad things for good reasons because you failed to account for second-order effects that flip the sign and would have been obvious if you had just avoided well-known antipatterns like "murdering people is bad", no matter how much the math says murdering some people will result in something good happening).

As a virtue ethicist, I start from the point of view that the goal is to live virtuously, but then to really behave in a virtuous way I need tools, like consequentialist reasoning, to make sure I'm actually living up to my avowed virtues, like "supporting all life". Similarly, when I'm tempted to make exemptions, thinking about the veil of ignorance and the categorical imperative help keep me on track.

So ultimately I think it comes down to a question of what you, individually, care about most. Do you care about good outcomes most? Start from consequentialism. Do you care about living a virtuous life most? Start from virtue ethics. Do you care about using the best decision theory most? Start from deontology.

I could see this approach being what leads to the best consequences for some people. Ie. for some people, I could see attempting to go beyond the "virtues as heuristics" sort of approach leading to more harm than good. But I feel like there are a lot of people who are strong and wise enough to "graduate" past this.

I think there's something to this. One of the challenges for systems for defining and enforcing normative behavior is that they have to work for a wide variety of people with different levels of smarts, conscientiousness, etc. So we need "naive", first-approximation systems of normative reasoning that will still work for the 10th percentile of folks along relevant dimensions.

Deontology probably does the best here as it puts up strong guardrails that prevent people from doing obviously bad things, at the cost of missing out on recommending some really good but unusual things. Consequentialism does the worst, as naive consequentialists are probably the most likely to talk themselves into doing bad things by getting the math or evidence wrong, even if it does sometimes find really good weird things to recommend. Virtue ethics falls somewhere in between, with the main failure mode that it's relatively easy to rationalize that bad behavior is actually good.

But we do in fact see folks "graduating" past the naive view, as you say. Kant is a great example of someone who worked to make a really mature version of deontology, and maybe we could throw folks like MIRI researchers who worked on decision theory in this bucket, too. Consequentialists can recognize that their actions have second and third and fourth order effects that change the calculation. And I certainly am an example of a virtue ethicist who has "graduated" to using consequentialist reasoning on the margin.

The one thing that's tricky about this "graduated" framing is that it's easy to deceive yourself. The more complex you let your reasoning become, the more degrees of freedom you give yourself to rationalize whatever you want. So I think it's better to think of it as performing the most sophisticated normative reasoning you're capable of, in order to approximate as closely as you can what you'd judge to be ideal behavior upon extended reflection.

Adam Zerner

Yes, I think as ethical views get less naive, no matter where you start from, you have to start treating the other two stances as tools you can pull out when you need them.

Gotcha.

 

One of the challenges for systems for defining and enforcing normative behavior is that they have to work for a wide variety of people with different levels of smarts, conscientiousness, etc. So we need "naive", first-approximation systems of normative reasoning that will still work for the 10th percentile of folks along relevant dimensions.

That doesn't seem true to me. Why can't you do something like:

"Alice and Bob are both consequentialists. Alice is not so strong and mature, so in order for her to take the actions that lead to the best consequences, she is pretty strict about following deontological rules. Bob on the other hand is strong enough to have 'graduated' past that and thus for him, using more judgement about what 'tools in the toolbox' to choose and how he tries to take actions that lead to the best consequences."

 

The one thing that's tricky about this "graduated" framing is that it's easy to deceive yourself. The more complex you let your reasoning become, the more degrees of freedom you give yourself to rationalize whatever you want. So I think it's better to think of it as performing the most sophisticated normative reasoning you're capable of, in order to approximate as closely as you can what you'd judge to be ideal behavior upon extended reflection.

Hey, it sounds like you're thinking along similar lines as I was in Reflective Consequentialism in noting the importance of "extended reflection" :)

Gordon Seidoh Worley

One of the challenges for systems for defining and enforcing normative behavior is that they have to work for a wide variety of people with different levels of smarts, conscientiousness, etc. So we need "naive", first-approximation systems of normative reasoning that will still work for the 10th percentile of folks along relevant dimensions.

That doesn't seem true to me. Why can't you do something like:

"Alice and Bob are both consequentialists. Alice is not so strong and mature, so in order for her to take the actions that lead to the best consequences, she is pretty strict about following deontological rules. Bob on the other hand is strong enough to have 'graduated' past that and thus for him, using more judgement about what 'tools in the toolbox' to choose and how he tries to take actions that lead to the best consequences."

This doesn't sound substantially different from what I'm intending to say.

It sounds like Alice will probably regret her actions if she doesn't rely on rules to cover what would otherwise be gaps in her reasoning. We need not say that Alice is stupid, of course; maybe she's just really busy or not very practiced at consequentialist reasoning, and so needs "training wheels". Maybe she will be able to take the training wheels off one day, maybe she won't, but for now she needs them.

Bob sounds like he has more experience, so he can be less strict about following rules and can place more trust in his own reasoning when it suggests that some standard rule is wrong about what to do in a particular situation. So for him rules are more like patterns he can follow, but he is more open to questioning whether they will really produce the desired outcome.

I think the real challenge is knowing whether you're more like Alice or more like Bob, and I'd posit that more harm is caused by folks who incorrectly think they are a Bob when they are really an Alice than by folks who think they are Alice when really they are Bob (and thus miss opportunities to violate rules to achieve more good outcomes than they did by following rules strictly).

To take a recent example, SBF, if we accept his statements about his reasoning at face value, thought he was a Bob, but was actually an Alice.

Adam Zerner

I mostly agree with what you're saying. However, there's something that I'm unclear on.

Assuming for the sake of this part of the discussion that we ultimately care about producing the best consequences, I am of the opinion that we should look at it on the individual level. What works best for Alice might not work best for Bob (and might not work best for Alice five years from now). So then, I think it makes sense to ask the question of what works best for this person, at this point in time (in this context, etc etc).

On the other hand, it sounded to me (~30% confident) like you might be saying that we should look at what works best for society as a whole, and then say that each individual should follow that approach. In which case, if we assume that "leaning heavily into virtues-as-heuristics" is best for society as a whole, then even for someone like Bob, who is "strong" enough to lean less heavily on wide-ranging heuristics, the moral thing to do would still be to apply the "leaning heavily into virtues-as-heuristics" approach. Are you arguing for something like this?

Gordon Seidoh Worley

On the other hand, it sounded to me (~30% confident) like you might be saying that we should look at what works best for society as a whole, and then say that each individual should follow that approach. In which case, if we assume that "leaning heavily into virtues-as-heuristics" is best for society as a whole, then even for someone like Bob, who is "strong" enough to lean less heavily on wide-ranging heuristics, the moral thing to do would still be to apply the "leaning heavily into virtues-as-heuristics" approach. Are you arguing for something like this?

Sort of. I think my stance is more akin to something like "trust the outside view over the inside view when it comes to ethics".

Inside views are really useful, and in some cases you can't get the right answer without really understanding how things work. But when it comes to our behavior, our view of our own minds and reasoning is tainted, and it's really easy for us to unknowingly deceive ourselves into doing what we want rather than what we actually think is best.

The very real risk for Bob is that he's going to make a mistake he'll regret. There are things he can do to help mitigate that, like talking to others about what he's planning to do and changing his plans based on their feedback, but a really simple mechanism is the one where we have virtues or rules to stand in for others checking our reasoning. So Bob can do his consequentialist reasoning, but then check it against the accumulated wisdom stored within highly respected virtues and rules, and seriously reconsider his actions if these plans disagree with what other ethical systems would recommend.

For example, maybe Bob thinks the world would be better if we killed everyone with red hair. He's done the math, and is convinced that their murder will make humanity better off overall and long term, stretching out to astronomical amounts of QALYs gained between now and the heat death of the universe. But if he stops to think about it, everyone, Bob included, has a strong moral intuition that killing people is bad in general, and is especially bad if they are killed because of a superficial trait like hair color. And so Bob has to ask himself, what is more likely: that his calculations are right and the standard wisdom is wrong this time, or that he's made a mistake and everyone will horribly regret it if he kills all the redheads.

Another way I might phrase this is, if consequentialist reasoning returns a result that is counter to what living up to respected virtues or following common ethical rules would recommend, one needs to be really sure that one's reasoning is right, because there's a very strong prior that it's wrong, and probably wrong in some boring way, like failing to account for second-order effects or allowing personal biases to influence one's reasoning.
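To put a toy number on that prior -- the figures here are invented purely for illustration, not anything I'd actually defend:

```python
# Toy Bayes calculation: how likely is it that a consequentialist calculation whose
# conclusion contradicts near-universal moral intuition is actually right? Numbers invented.

prior_right = 0.001           # base rate that such a shocking conclusion is actually correct
p_shocking_given_right = 1.0  # a correct conclusion of this kind would of course contradict consensus
p_shocking_given_wrong = 0.9  # but mistaken EV calculations also readily produce shocking conclusions

# Bayes' rule: P(right | conclusion contradicts consensus)
posterior_right = (p_shocking_given_right * prior_right) / (
    p_shocking_given_right * prior_right + p_shocking_given_wrong * (1 - prior_right)
)
print(round(posterior_right, 4))  # ~0.0011
```

The point is just that when mistaken reasoning produces shocking conclusions almost as readily as correct reasoning does, the fact that your math contradicts the consensus is very weak evidence that you've found a genuine exception.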

Adam Zerner

I see. I agree with all of that.

However, I'm still not clear on your position on the question of whether it makes sense to apply moral strategies at the individual level or at a societal level. Or, alternatively, at some sort of group level (this group does this, that group does that). (If those questions aren't clear let me know and I can clarify.)

Gordon Seidoh Worley

Oh, right, forgot to address that part.

So in some sense ethical reasoning always happens at the level of the individual who has to decide what actions to take. But there are some nice things that can happen if you get a group of people to follow the same norms. One of the benefits of living in pre-modern societies was that everyone was expected to conform to the same set of shared norms, and when you can trust that other people will rarely defect from the norms, you can efficiently cooperate in ways societies with less trust cannot.

But realistically we live in a cosmopolitan society where we have limited ability to impose norms on others, and in some ways a bedrock norm of modern societies is that you don't get to impose much on other people. Yes, for some basic, commonly shared norms you can impose them, but as soon as there is a sizable minority that objects to the norm (let's say it happens at around 5% of the population not wanting to accept the norm), you get fights over the norm until you settle into a new equilibrium where some people are permitted to opt out of a norm the majority would like to impose.

My guess, though, is that you have something more specific in mind you'd like to discuss and my comments above are probably not hitting the mark, so perhaps you can clarify where we should take the discussion next.

Adam Zerner

I agree with what you're saying about the benefits of sharing norms, but yeah, I have something a little different in mind.

Earlier you had said:

One of the challenges for systems for defining and enforcing normative behavior is that they have to work for a wide variety of people with different levels of smarts, conscientiousness, etc. So we need "naive", first-approximation systems of normative reasoning that will still work for the 10th percentile of folks along relevant dimensions.

However, I don't see why we would need a system to work on a wide variety of people. Why not have one system for one group of people and a different system for a different group of people?

One possible reason is, like you were saying in your most recent message, that there are various benefits to everyone following the same set of norms. However, I don't get the sense that those benefits are large enough to justify having a wide-ranging system. Do you?

Gordon Seidoh Worley

Ah! So in my comment there, what I'm thinking about is the way that, for example, in a religion, you need your ethical norms to be usable by a broad set of folks, some of whom are smarter, more scrupulous, or more conscientious, and others less so.

For example, I think many people correctly intuit that naive positive act utilitarians are some of the most dangerous people on the planet, because they can easily convince themselves to commit atrocities in the name of the greater good if they make a mistake or leave out something important. We'd probably be in pretty bad shape if we went around advocating that the average person adopt positive act utilitarianism, because they'd almost certainly screw it up.

We also see this problem with deontologists. We can all think of religious zealots who overcommitted on the letter of the law to the great harm of others. Most people get that ethical rules often leave out edge cases for the sake of reliable transmission, but the excessively scrupulous do not, and they weaponize their scrupulosity against the rest of us.

I'm not really saying we can't have different systems for ethical reasoning for different people, but realistically we can't easily control who hears about what ethical systems or tries to adopt them, and so when we communicate about systems of normative behavior, we need to think about how they might be misinterpreted by those who are least likely to understand them, because in the modern world you can basically be guaranteed that someone, somewhere will hear about an idea and try to apply the most degenerate version of it.

This is what's great about an ethical system like the Golden Rule: it may leave a lot of good doing on the table, but at least even the worst reasoners among us can apply it to do good if they're honest with themselves.

Adam Zerner

Ah, I see. That all makes sense to me and I think that they are good points. I think we are in agreement.

Any thoughts on where you'd like to take the conversation next? I think the only remaining thing I wanted to hit on is just to chat casually about the following:

To give context to readers who might not have it, I received lay ordination (jukai) within a Soto Zen lineage, and that required taking the 16 Bodhisattva Precepts. These precepts are explicit virtues I'm expected to uphold, and are, at least within my tradition, explicitly not rules but rather vows that we do our best to uphold. They include things like "support all life", "speak of others with openness and possibility", and "cultivate a clear mind".

Gordon Seidoh Worley

No, I'm pretty happy with what we've sorted out so far in this dialogue.

Happy to chat more about my experiences with Zen.

Adam Zerner

No, I'm pretty happy with what we've sorted out so far in this dialogue.

Cool.

 

Happy to chat more about my experiences with Zen.

So yeah, tell me about this! I don't really know anything about it. It looks like some sort of Buddhist thing? Also, what's the backstory with how you got interested in it?

Gordon Seidoh Worley

Yes, Zen is one of many Buddhist traditions that trace their teachings back 2500 years to the teachings of Siddhartha Gautama. The backstory on how I got into it is either long or short, depending on where you want to start. The long story is probably not that useful to recount, so I'll give you the short version.

In 2015, I was really fed up with the failure of rationalists to actually practice rationality as systematized winning. I followed a few different threads, and found I kept running into Buddhism. After a while spent trying to avoid it, I eventually gave in and decided to seek out formal practice. After about a year of experimentation, I found I was roughly Zen shaped, so it was a good home for learning more of the skills of how to be a fully realized person.

We could go a lot of ways with the discussion, but maybe for this dialogue it's most useful to keep it grounded in talking about ethics. The primary system of ethics in Zen Buddhism is the taking of the Bodhisattva Precepts, which for us are 16 vows that we promise to uphold when we receive them in a ceremony called jukai.

Different lineages phrase them differently. Here's how we phrase them within mine, which is part of the Ordinary Mind Zen School:

  1. The Refuges
    1. I take refuge in the Buddha.
    2. I take refuge in the Dharma.
    3. I take refuge in the Sangha.
  2. The Pure Precepts
    1. I vow to refrain from all action that creates attachments.
    2. I vow to make every effort to live awake and in Truth.
    3. I vow to live to benefit all being.
  3. The Grave Precepts
    1. I take up the way of speaking truthfully.
    2. I take up the way of speaking of others with openness and possibility.
    3. I take up the way of meeting others on equal ground.
    4. I take up the way of cultivating a clear mind.
    5. I take up the way of taking only what is freely given and giving freely of all that I can.
    6. I take up the way of sharing the wisdom of the teachings.
    7. I take up the way of engaging in sexual intimacy respectfully and with an open heart.
    8. I take up the way of letting go of anger.
    9. I take up the way of respecting life.
    10. I take up the way of acknowledging the truth, the teaching, and those who practice the teaching.

Other Buddhist traditions have other lists of vows that practitioners may take, but they are generally similar in nature. There are separate rules for monastic living, but at least in Zen they only apply to those actually living in a monastery, whereas other traditions apply them more broadly.

Adam Zerner

Oh wow, this all is very cool. That makes sense about keeping it focused on ethics for this dialogue.

Something that I personally would worry about in making those sorts of commitments -- or really, any sort of commitment -- is the thought of "what if I want to change my mind?". Like, what if something in the future causes you to update your beliefs such that you no longer think that eg. vow 2a is a good idea. What is your thinking surrounding that?

Gordon Seidoh Worley

The purpose of vows is to help you stay committed so that you can't easily drift off. If you start to drift from the vows, you will hopefully notice or have it pointed out to you that you're breaking them, and then you'll take corrective action to stay committed. That said, people can and do renounce vows all the time. The point, as I see it, is to make sure the threshold for formally breaking vows is high so that you only do it if you're really sure.

An apt comparison would be marriage vows, and in fact marriage vows in Zen use the Bodhisattva Precepts as their basis. You can later decide you want to get divorced, but that's a big deal, so you should only do it if you're sure.

In rationalist/EA terms, we might say that vows are a mechanism to help prevent value drift, but they don't put up so big a barrier that you can't ever change your mind.

Adam Zerner

Ah, I see. That makes sense.

It reminds me of something I believe in with respect to marriage. At least in theory, when you marry someone, you are committing to being with them forever ("till death do us part"). I don't like that because what if being together isn't what is in the best interest of the couple? Marriage (with actual commitment) doesn't allow the couple to pivot and do what's in their best interest.

That said, when things get difficult, there is a risk of impulsively breaking up at a time when it isn't actually best to do so. So a relationship with no amount of commitment doesn't seem good either. What makes sense to me, and what I have in my relationship, is to have some sort of buffer period where you're not allowed to break up without giving, in my case, two months' notice.

Your explanation of how the vows you took work is making me believe even more strongly in what we've been saying about how, when you get less naive, beliefs and approaches often end up converging with one another.

Gordon Seidoh Worley

Oh, the 2 month thing is a cool idea! I'm not sure how, if at all, you two have thought about implementation, but it reminds me of performance improvement plans (PIPs), a way to signal to an employee that unless some specific things change in a set amount of time then management will "break up" with you. To my eye as a manager, it's a really interesting idea to have a 2 month period where you are still in the relationship and committed to it but also making it clear that what was happening before isn't working and things need to change for the relationship to continue.

Anything else we should discuss? This feels to me like a natural stopping point.

Adam Zerner

Ah yeah, I think PIPs are a great comparison! I think it's very similar. "I'm unhappy with where things are at. I want to work together on improving A, B and C over the next two months, but if we haven't had that improvement we should move on."

Nah, nothing else I have in mind. I agree that this feels like a natural stopping point. It was great talking with you!

Comments
Dagon

[ epistemic status: My modeling of this rings true for me, but I don't know how universal it is. ]

Interesting discussion, and I'm somewhat disappointed but also somewhat relieved that you didn't discover any actual disagreement or crux, just explored some details and noted that there's far more similarity in practice than differences.  I find discussion of moral theory kind of dissatisfying when it doesn't lead to different actions or address conflicts.  

My underlying belief is that it's a lot like software development methodology: it's important to HAVE a theory and some consistency of method, but it doesn't matter very much WHICH methodology you follow.

In the vast majority of humans, legible morality is downstream of decision-making. We usually make up stories to justify our actions. There is a bit of far-mode moral discussion that has some influence over near-mode decisions, making those stories easier to tell (and actually true, for loose definitions of "true").

Thus, any moral system implemented in humans has a fair number of loopholes and many exceptions. This shows up as uncertainty and inconsistent modeling in consequentialist stories, or as ambiguity and weighting in deontological or virtue stories.

Which makes these systems roughly equivalent in terms of actual human behavior. Except they're very different in how they make their adherents feel, which in turn makes them behave differently. The mechanism is not legible or part of the moral system; it's an underlying psychological change in how one interacts with one's thinking part and how humans communicate and interact.

Interesting discussion, and I'm somewhat disappointed but also somewhat relieved that you didn't discover any actual disagreement or crux, just explored some details and noted that there's far more similarity in practice than differences.

I feel very similarly, actually. At first, when I heard that Gordon is a big practitioner of virtue ethics, it seemed likely that we'd (easily?) find some cruxes, which is something I had been wanting to do for some time.

But then when we realized how non-naive versions of these different approaches seem to mostly converge on one another, I dunno, that's kinda nice too. It kinda simplifies discussions. And makes it easier for people to work together.

In the vast majority of humans, legible morality is downstream of decision-making. We usually make up stories to justify our actions. There is a bit of far-mode moral discussion that has some influence over near-mode decisions, making those stories easier to tell (and actually true, for loose definitions of "true").

I agree. There's a sort of confusion that happens for many folks where they think their idea of how they make decisions is how they actually make decisions, and they may try to use System 2 thinking to explicitly make that so, but in reality most decisions are a System 1 affair and any theory is an after-the-fact explanation to make legible to self and others why we do the things we do.

That said, the System 2 thinking has an important place as part of a feedback mechanism to direct what System 1 should do. For example, if you keep murdering kittens, having something in System 2 that suggests that murdering kittens is bad is a good way to eventually get you to stop murdering kittens, and over time rework System 1 so that it no longer produces in you the desire for kitten murder.

What matters most, as I think you suggest at the end of your comment, is that you have some theory that can be part of this feedback mechanism so you don't just do what you want in the moment to the exclusion of what would be good to have done long term because it is prosocial, has good secondary effects, etc.

jchan

Aren't there situations (at least in some virtue-ethics systems) where it's fundamentally impossible to reduce (or reconcile) virtue-ethics to consequentialism because actions tending towards the same consequence are called both virtuous and unvirtuous depending on who does them? (Or, conversely, where virtuous conduct calls for people to do things whose consequences are in direct opposition.)

For example, the Iliad portrays both Achilles (Greek) and Hector (Trojan) as embodying the virtues of bravery/loyalty/etc. for fighting for their respective sides, even though Achilles's consequentialist goal is for Troy to fall, and Hector's is for that not to happen. Is this an accurate characterization of how virtue-ethics works? Is it possible to explain this in a consequentialist frame?

Aren't there situations (at least in some virtue-ethics systems) where it's fundamentally impossible to reduce (or reconcile) virtue-ethics to consequentialism because actions tending towards the same consequence are called both virtuous and unvirtuous depending on who does them? (Or, conversely, where virtuous conduct calls for people to do things whose consequences are in direct opposition.)

This is most likely to happen if an ethical system is particularly naive, in the sense that it's excessively top down, trying to function as a simple, consistent system, rather than trying to account for the nuanced complexity of real world situations. But, yes, I think sometimes virtue ethicists and consequentialists may reasonably come to different conclusions about what's best to do. For example, maybe I would reject something a consequentialist thinks should be done because I'd say doing so would be undignified. Maybe this would be an error on my part, or maybe this would be an error on the consequentialist's part from failing to consider second- and third-order effects. Hard to say without a specific scenario.

For example, the Iliad portrays both Achilles (Greek) and Hector (Trojan) as embodying the virtues of bravery/loyalty/etc. for fighting for their respective sides, even though Achilles's consequentialist goal is for Troy to fall, and Hector's is for that not to happen. Is this an accurate characterization of how virtue-ethics works? Is it possible to explain this in a consequentialist frame?

I think this is not a great example because the virtues being extolled here are orthogonal to the outcome. And consequentialists can choose to value their own side more than the other side, or to be indifferent between sides, so I'm not sure what the conflict between virtue ethics and consequentialism would be here.

jchan

I think this is not a great example because the virtues being extolled here are orthogonal to the outcome.

Would it still be possible to explain these virtues in a consequentialist way, or is it only some virtues that can be explained in this way?

And consequentialists can choose to value their own side more than the other side, or to be indifferent between sides, so I'm not sure what the conflict between virtue ethics and consequentialism would be here.

The special difficulty here is that the two sides are following the same virtue-ethics framework, and come into conflict precisely because of that. So, whatever this framework is, it cannot be cashed out into a single corresponding consequentialist framework that gives the same prescriptions.