Subtitle: Costly virtue signaling is an irreplaceable source of empirical information about character.

The following is cross-posted from my blog, which is written for a more general audience, but I think the topic is most important to discuss here on LW. 


We all hate virtue signaling, right? Even “virtue” itself has taken on a negative connotation. When we’re too preoccupied with how we appear to others, or even too preoccupied with being virtuous, it makes us inflexible and puts us out of touch with our real values and goals.

But I believe the pendulum has swung too far against virtue signaling. A quality virtue signal shows that a person follows through on their best understanding of the right thing to do, and it remains one of the only insights we have into others’ characters and our own. I don’t care to defend empty “cheap talk” signals, but the best virtue signals offer some proof of their claim by being difficult to fake. Maybe, like being vegan, they take a great deal of forethought and awareness, and require regular social sacrifices. Being vegan proves dedication to a cause like animal rights or environmentalism in proportion to the level of sacrifice required. The sacrifice isn’t what makes veganism good for the animals or the environment, but it is a costly signal of the character traits associated with the ability to make such a sacrifice. So the virtue signal of veganism doesn’t mean you are necessarily having a positive impact, or that veganism is the best choice, but it does show that you are committed, conscientious, gentle, or deeply bought into the cause, such that the sacrifice becomes easier for you than it would be for other people. It shows character and the acting out of your values.

Out of your commitment to doing the most good possible, you may notice that you start to think veganism isn’t actually the best way to help animals for a lot of people.[1] I believe this represents a step forward for helping animals, but one problem is that now it’s much easier to hide a lack of virtuous character traits from measurement.[2] It’s harder to know where the lines are or how to track the character of the people you may one day have to decide whether to trust; it’s harder to support virtuous norms that make it easier for the community to act out its values; and it’s harder to be accountable to yourself.

Many will think that it is good when a person stops virtue signaling, or that ostentatiously refusing to virtue signal is a greater sign of virtue. But is it really better when we stop offering others proof of positive qualities that are otherwise hard to directly assess? Is it better to give others no reason to trust us? Virtue signals are a proxy for what actually matters— what we are likely to do and the goals that are likely to guide our behavior in the future. There is much fear about goodharting (when you take the proxy measure as an end in itself, rather than the thing it was imperfectly measuring) and losing track of what really matters, but we cannot throw out the baby with the bathwater. All measures are proxy measures, and using proxies is the only way to ask empirical questions. Goodharting is always a risk when you measure things, but that doesn't mean we shouldn't try to measure character.

The cost of virtue signals can be high, and sometimes not worth it, but I submit that most people undervalue quality virtue signals. Imagine if Nabisco took the stance that it didn’t have anything to prove about the safety and quality of its food, and that food safety testing is just a virtue signal that wastes a bunch of product. They could be sincere, and somehow keep product quality and safety acceptably high, but they are taking away your way of knowing that. Quality control is a huge part of what it is to sell food, and monitoring your adherence to your values should be a huge part of your process of having positive impact on the world.

Virtue signaling is bad when signaling virtue is confused with possessing virtue, or when possessing virtue is confused with having the desired effect upon the world. It is at its worst when all your energy goes to signaling virtue at the expense of improving the world. But signals of virtue, especially costly signals that are difficult to fake, are very useful tools. Even if I don’t agree with someone else’s principles, I trust them more when I see they are committed to living by the principles they believe in, and I trust them even more if they pay an ongoing tithe in time or effort or money that forces them to be very clear about their values. I also think a person should trust themselves more if they have a track record of good virtue signals. Trust, but verify.

The most common objections to the first version of this post were not actually objections to virtue signals per se, I claim, but disagreements about which signals are virtuous. My support of virtue signals requires some Theory of Mind: a quality virtue signal demonstrates character given that person’s beliefs about what is good. Say a person virtue signals mainly as a signal of group membership; I may still judge that to show positive character traits if they believe that taking cues from the group and repping the group are good. If someone uses “virtue signals” cynically to manipulate others, I do not think they have virtuous character. Might an unvirtuous person be able to fool me with fake virtue signals? Sure, but faking a signal is a lot harder than emitting a genuine one. Signals don’t have to be 100% reliable to be useful evidence.
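To make that last point concrete, here is a toy Bayesian sketch. The numbers are invented purely for illustration, not empirical claims:

```python
# Toy Bayesian update: how much should an imperfect, costly signal
# shift our belief that someone has the trait it advertises?
# All probabilities below are illustrative assumptions.

def posterior(prior: float, p_signal_given_trait: float,
              p_signal_given_no_trait: float) -> float:
    """P(trait | signal) via Bayes' rule."""
    p_signal = (p_signal_given_trait * prior
                + p_signal_given_no_trait * (1 - prior))
    return p_signal_given_trait * prior / p_signal

# Suppose 20% of people are deeply committed (prior = 0.2), a committed
# person sustains a costly signal like veganism with probability 0.6,
# and a faker manages to keep up the same act with probability 0.05.
belief = posterior(0.2, 0.6, 0.05)
print(belief)  # ~0.75: far from certain, but a large update from the 0.2 prior
```

Even a signal that committed people only emit 60% of the time lifts the probability from 0.2 to about 0.75, because it is so much harder for a non-committed person to fake.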

Why care about virtue signals? Why not just look at what people do? Because we need to make educated guesses about cooperating with people in the future, especially ourselves. “Virtue” or “character” are names we give to our models of other people, and those models give us predictions about how they will act across a range of anticipated and unanticipated situations. (In our own case, watching our virtue metrics can not only be a way to assess if we are falling into motivated reasoning or untrustworthiness, but also the metric we use to help us improve and become more aligned with our values.) Sometimes you can just look at results instead of evaluating the character of the people involved, but sometimes a person’s virtue is all we have to go on.

Take the lofty business of saving the world. It’s important to be sure that you are really trying to help the world and, for example, not just doing what makes you feel good about yourself or allows you to see the world in a way you like. Sometimes, we can track the impact of our actions and interventions well, and so it doesn’t matter if the people who implement them are virtuous or not as long as the job is getting done. But for the biggest scores, like steering the course of the longterm future, we’re operating in the dark. Someone can sketch out their logic for how a longtermist intervention should work, but there are thousands of judgment calls they will have to make, a million empirical unknowns as to how the plan will unfold over the years, and, if any of us somehow live long enough to see the result, it will be far too late to do anything about it. Beyond evaluating the idea itself, the only insight most of us realistically have into the likelihood of this plan’s success is the virtue of the person executing it. Indeed, if the person executing the plan doesn’t have any more insight into his own murky depths than the untested stories he tells, he probably just has blind confidence.

Quality virtue signals are better than nothing. We should not allow ourselves to be lulled into the false safety of dwelling in a place of moral ambiguity that doesn’t permit real measurement. It’s not good to goodhart, but we also can’t be afraid of approximation when that’s the best we have. Judging virtue and character feeds approximations into the complex proprietary models of other human beings that we have developed over millennia of evolution, and we need to avail ourselves of that information where little else is available.

I urge you to do the prosocial thing and develop and adopt more legible and meaningful virtue signals— for others and especially for yourself.

(This post was edited after publication, which is a common practice for me. See standard disclaimer.)

[1] I’m not taking a position here. In fact, I think a mixed strategy, with at least some people pushing the no-animals-as-food norm and others reducing animal consumption in various ways, is best for the animals. At the time of writing I am in a moral trade that involves me eating dairy, i.e. no longer being vegan, and the loss of the clean virtue signal was one of the things that prompted me to write this post.

[2] I discussed this example in depth with Jacob Peacock; that discussion partly inspired the post.


Huh. I had a weird reaction to this post. My instincts keep violently disagreeing with the main idea, but then when I zoom in on any particular step in the chain of logic it looks sound.

> Even if I don’t agree with someone else’s principles, I trust them more when I see they are committed to living by the principles they believe in, and I trust them even more if they pay an ongoing tithe in time or effort or money that forces them to be very clear about their values.

Diving deep on this particular quote... my brain generates the example of a Buddhist monk. Do I trust them more because I see they are committed to living by the principles they believe in? Yup, definitely. Is the monk's ongoing tithe of time/effort a load-bearing element of that trust? Ehhhh, kinda, but not super central. When I think about why I trust a Buddhist monk, it's mainly that... <introspect> ... I expect that they understand their own values unusually well?

That seems right. When I think of other people who understand their own values unusually well (including people who are not good people but have understood and accepted that)... that seems like a surprisingly perfect match for the sort of people who feel instinctively trustworthy to me. Like, the Buddhist monk, or the ex-criminal who found God, or the main character of Breaking Bad toward the end of the series... these are the sorts of people who code as "trustworthy" to me. They are people who are very self-aware, very reflectively stable; people who are not going to have their values overwritten by peer pressure the moment I stop looking.

... and that makes it more clear why my intuition kept disagreeing with the main idea of the post. The thing my trustworthiness-meter is looking for is not signals of virtue, but signals of self-awareness combined with conformity-resistance. Signals that someone's values will not be overwritten by the values of whoever's around them. Most of the standard examples of "virtue signals" signal the exact opposite of that, precisely because they're standard examples. They're signalling exactly the kinds of "virtue" which are broadly recognized as "virtuous", and therefore they weakly signal someone who is more likely to allow their values to be overwritten by whoever's around them.

> Most of the standard examples of "virtue signals" signal the exact opposite of that, precisely because they're standard examples. They're signalling exactly the kinds of "virtue" which are broadly recognized as "virtuous", and therefore they weakly signal someone who is more likely to allow their values to be overwritten by whoever's around them.

I might put it like this: The standard examples are ambiguous signals, showing either virtue or conformity.  (And, in some contexts, conformity is an anti-virtue.)

Perhaps unambiguously signaling virtue is an anti-inductive problem.

I kept “virtue” and “character” intentionally broad, and one kind of virtue was what you highlighted here— faithfulness to what one believes is right. But some signals of genuine virtue are more specific, as in “I think animals shouldn’t be harmed and therefore I don’t eat them”, which shows both adherence to one’s own values and specifically valuing animals.

I think your trust-o-meter is looking for people who have an unusually low level of self-deception. The energy is "Great if you share my axioms or moral judgments, but for Pete's sake, at least be consistent with your own."

What suggests this to me is the Breaking Bad example, because Walter White really does move on a slow gradient from more to less self-deceived throughout the show, in my read of his character - it just so happens that the less self-deceived he is, the more at home he becomes with perpetrating monstrous acts, as a result of the previous history of monstrosity he is dealing with. It's real "the falcon cannot hear the falconer" energy.

This is probably a good trust-o-meter to keep equipped when it comes to dealing with nonhuman intelligences. Most people, in most ordinary lives, have good evolutionary reasons to maintain most of their self-deceptions - it probably makes them more effective in social contexts. Intelligences evolved outside of the long evolutionary history of a heavily social species may not have the same motive, or limitations, that we do.

Reflective stability matters if you need to trust that someone's values remain constant (and aligned with yours) over the long term. Maybe less so if you only need to maximise the odds that someone is aligned with you short-term (because you're planning a short-term act together). Other signals will be relevant here.

Keen on your thoughts.

> Huh. I had a weird reaction to this post. My instincts keep violently disagreeing with the main idea, but then when I zoom in on any particular step in the chain of logic it looks sound.

Like the Penrose staircase.

> I don’t care to defend empty “cheap talk” signals, but the best virtue signals offer some proof of their claim by being difficult to fake.

I think this is the crux: the common usage of the phrase "virtue signalling" and the economics notion of "signalling" have diverged; people use the phrase primarily, maybe even exclusively, to refer to cheap talk, and to emphasize this cheapness. E.g., the Cambridge Dictionary defines virtue signalling as "an attempt to show other people that you are a good person, for example by expressing opinions that will be acceptable to them, especially on social media".

This doesn't take away from the value of genuine signals of good character, but it does mean we need a different term for them.

Yep. Another objection is that 'virtue' is used kinda ironically in this context; the intended meaning is something like "tribal loyalty signaling". Okay, from the perspective of the tribe, loyalty is a virtue, but from the perspective of the person who accuses others of "virtue signaling" it is more like "you may believe that you are signaling that you are a good person, but actually you are signaling that you are a sheep/fanatic".

I think it's too early to say the true meaning of virtue signal is now tribal signal. I wish to reclaim the word before that happens. At the very least I want people to trip on the phrase a little when they reach for it lazily, because the idea of signaling genuine virtue is not so absurd that it could only be meant ironically. 

Suggestions for new terms and strategies for preventing them being co-opted too?

I understand "virtue signalling" as a novel term that is only loosely related to the concepts of "virtue" or "signalling".

It's a little annoying to have to mentally translate it into "that thing that people mean when they say 'virtue signalling'" (or sometimes a little amusing, when there's an interesting contrast with the literal meaning that a person's actions have signalled their virtues).

Sometimes, people generically hate on the general principle of virtue-signalling as an indirect way of signalling which virtues they disagree with OR which virtues are policed - e.g. polarizing virtues (while maintaining plausible deniability about which specific virtue signals they disagree with). Sometimes, this generic hatred of "virtue signalling" is also a generic hatred/dislike of "lawful good" [or "those more successful than them"], or of those they perceive as having "higher pain tolerance than them". (A generalized hatred of everything associated with virtue signalling led to the backlash culminating in the 2016 election, as Trump epitomized the intense hatred of BOTH Republican and Democrat forms of virtue-signalling - "signals" the working class was unusually receptive to b/c there seems to be a difference between the "privileged" and the "less privileged" in the capacity to bear the costs of virtue-signalling.) Trump once said "I LOVE the uneducated".

[Moloch-worship is another form of virtue-signalling, esp among those who always did better in the system than everyone else - e.g. seen in those who consistently "defend the administration" - e.g. BG at Caltech lol]. When I was surrounded with the noise of middle school/high school, I used to bond with Moloch-worshippers b/c I thought they were the ONLY people who had any intelligence or straightforwardness in them...... (that was way way way before I finally discovered the "chaotic"/edgy technologists in elite universities/the SF Bay Area, who have since then become the base of my friend group until I discovered TKS in Toronto...[1])

The "felt costliness" of "virtue-signalling" is higher in the working class than it is in richer elites.

[it should be said that the biggest explicit haters of virtue-signalling tend not to be working-class voters, among whom the term "virtue-signalling" isn't popular [though they definitely feel a NOTION of the thing that Trump felt]. Explicitly stated intense hatred of "virtue signalling" is SUPER-COMMON among those who obsess over Peter Thiel.]

(hatred of virtue signalling also can come from outsiders, as "virtue-signalling" is often a mesa-optimizing strategy to gain "acceptance within the group", especially from its more authoritarian members).

I also think the choice of "vegetarianism/veganism" as the OP's example of "virtue-signalling" is interesting, because vegetarian/vegan virtue-signalling is the LEAST Moloch-aligned form of virtue-signalling, and thus different from ALL other forms of virtue-signalling. It's also the form of "virtue signalling" I'm most guilty of (whereas I was historically a Thieltard in other dimensions of virtue-signalling).

=====

[1] Lol, a lot of intense visceral strong emotion/anger/trauma/edginess in this history has been removed from this post... which is like... a LONG FASCINATING STORY... it also hasn't quite ended, because a new force came out (Canada/The Knowledge Society) that has a small but emotionally salient enough chance to completely obliterate the hierarchy of how I emotionally factor ALL of Moloch/techno-progressivism/edginess/elite universities/drugs/ADHD/"narcissism"/Bryan Caplan/Thiel Fellowship [and the person who turned out to become my ALL TIME FAVORITE THIEL FELLOW]/Hillbilly Elegy/insecurities/intelligence/Stanford Duck Syndrome/Michael Faraday/Michael O Church/CP Snow's "two cultures"/cancel culture/"narcissism"/trauma/Sarah Constantin/Stephen Hsu/two of my friends who dated each other for several years and then had the most epic breakup ever..... => ALL OF WHICH HAVE ELEMENTS RELATED TO HOW PEOPLE EMOTIONALLY REACT TO VIRTUE-SIGNALLING

[but much of my intense historical trauma came from me TRYING TO mesa-optimize/virtue-signal, and failing at it because I came from a shitty school. Then I FINALLY found and sought out the edgelords at elite universities + finally turned someone into a Thiel Fellow, and NOTHING EVER WAS THE SAME EVER AGAIN]

(so much emotional intensity was factored out of this post, but someone has told me I have a moral duty to write out the story at last).

Hmm, I'm not sure that you're saying anything that the other side (people annoyed by current "virtue signaling") would disagree with: good virtue signals are useful if they're done well, and bad virtue signals are bad. They'd certainly agree that McDonald's should virtue signal about its beef quality, though they might not realize that this is in fact virtue signaling, because of how the phrase's usage has evolved. A vegan friend mentioning he's vegan off-handedly while apologizing for the hassle before coming to your house for a BBQ is completely fine, but veering the conversation at every opportunity toward the immense and cosmically important suffering of chickens starts to become either bragging or a weird power move that people reject instinctively. Being moral yields prestige in the tribe, and a too-obvious virtue signal can be read as an attempt to elevate your power. In fact, I think everything you've written also applies to bragging about your skills: sometimes it's very necessary, and others really should be informed of how competent you are, but there's still a whole host of social norms about the way to do that appropriately, and everyone who hates bragging really only hates "bad bragging".

I intentionally didn’t engage with motives for virtue signaling or the way the signal is reported but focused on their information value. The virtue signal with the character information here is actually “not eating animals”, not talking about it.

> The virtue signal with the character information here is actually “not eating animals”, not talking about it.

Umm, I think there's still a disconnect between your definitions and general usage of the term. The VIRTUE is "caring about animals", or maybe "not eating animals". It's arguable how important that virtue is, but most people don't object.

The virtue SIGNALING is talking about it, proselytizing for it, and the implied judgement of lack of virtue in people who aren't visibly into it.

One of the big problems with virtue signaling is that it suggests a one-dimensional scale, when we care about multiple different aspects of other people's character.

We care about people willing to sacrifice for the good of others. We care about people being honest to themselves. We care about people being honest to others. We care about people willing to change their mind when presented with compelling evidence.

If we discuss the example of the Buddhist monk: one of the core Buddhist beliefs is that life is suffering and that it's valuable to end that suffering. In the longtermist context, that means a Buddhist who sincerely follows those ideals might decide to let AGI end all life on earth. In this context, the veganism of the Buddhist monk is a signal that they are willing to act according to their beliefs, which makes the person more dangerous, not less dangerous.

Historically, Hitler also fits the pattern of a vegetarian who was very committed to living out ideological commitments. Hitler was the kind of person willing to make huge sacrifices for his ideological commitments.

It's valuable to remove animal suffering. You can argue on that basis that veganism is virtuous, but that doesn't allow you to make the important predictions about those people's future actions.

If you decide to hire someone for your organization, you have to decide whether or not you want to trust them. To make that decision well, you have to think about the characteristics that are actually important for your hiring decision. If they are generally skilled, I would argue that in the EA context the most important characteristics are those that prevent maziness, rather than a general willingness to make personal sacrifices for the greater good along lines like animal welfare, the environment, or personal donations.

Maybe I should add something clarifying that virtue is not made of one thing. Virtue signals demonstrate particular qualities. You have to be rational and keep track of what signal is evidence of what and think clearly about how that may interact with other beliefs and qualities to lead to different outcomes, like you’re doing here.

Do you have an idea of a virtue signal for non-maziness?

Openness, honesty, and transparency are signals for non-maziness. 

Historically, there are practices EA organizations use to signal those qualities. GiveWell recently decided to stop publishing the audio of their board meetings. That stops sending a virtue signal for non-maziness.

On the CEA side, there are a few bad signals. After promising Guzey confidentiality for his criticism of William MacAskill, a CEA community manager sent the criticism document to MacAskill in violation of the promise. Afterward, CEA's position seemed to be that saying "sorry, we won't break confidentiality promises again" deep in the comments of a thread is enough: no need to mention violating their confidentiality promises in Our Mistakes, no personal consequences for violating the promises, and no sense that they incurred a debt toward Guzey for which they have to do something to make it right.

CEA published images on their website from an EA Global where there was a Leverage Research table, and edited the images to remove the name of Leverage Research. Image editing like that is a signal of dishonesty.

Given CEA's status in the EA ecosystem, openly speaking about either of those incidents has the potential to be socially costly. For anyone who cares about their standing in EA circles, talking about those things would be a signal of non-maziness.

Generally, signals for non-maziness often involve the willingness to create social tension with other people who are in the ingroup. That's qualitatively different than requiring people to engage in costly signals like veganism or taking the giving pledge as EAs.

If CEA's leadership engages in a bunch of costly prosocial signals like being vegan, that's not enough when you decide whether or not to trust them to keep confidentiality promises in the future, given the value they put on past promises.

In general, I don’t fully agree with rationalist culture about what is demanded by honesty. Like that Leverage example doesn’t sound obviously bad to me— maybe they just don’t want to promote Leverage or confuse anyone about their position on Leverage instead of creating a historical record, as you seem to take to be the only legitimate goal? (Unless you mean the most recent EA Global in which case that would seem more like a cover-up.)

The advantage of pre-commitment virtue signals is that you don’t have to interpret them through the lens of your values to know whether the person fulfilled them or not. Most virtue signals depend on whether you agree the thing is a virtue, though, and when you have a very specific flavor of a virtue like honesty then that becomes ingroup v neargroup-defining.

Honesty isn't just a virtue. When it comes to trusting people, signals of honesty mean that you can take what someone is saying at face value. They allow you to trust people not to mislead you. This is why focusing on whether signals are virtuous can be misleading when you want to make decisions about trust.

Editing pictures that you publish on your own website to remove uncomfortable information is worse than just not speaking about certain information. It would be possible to simply not publish the photo. Deciding to edit it to remove information is a conscious choice, and that's a signal.

> Editing pictures that you publish on your own website to remove uncomfortable information is worse than just not speaking about certain information. It would be possible to simply not publish the photo. Deciding to edit it to remove information is a conscious choice, and that's a signal.

I don't know this full situation or what I would conclude about it but I don't think your interpretation is QED on its face. Like I said, I feel like it is potentially more dishonest or misleading to seem to endorse Leverage. Idk why they didn't just not post the pictures at all, which seems the least potentially confusing or deceptive, but the fact that they didn't doesn't lead me to conclude dishonesty without knowing more.

I actually think LWers tend toward the bad kind of virtue signaling with honesty, and they tend to define honesty as not doing themselves any favors with communication. (Makes sense considering Hanson's foundational influence.)

> Generally, signals for non-maziness often involve the willingness to create social tension with other people who are in the ingroup. That's qualitatively different than requiring people to engage in costly signals like veganism or taking the giving pledge as EAs.

I disagree— I would call social tension a cost. Willingness to risk social tension is not as legible of a signal, though, because it’s harder to track that someone is living up to a pre-commitment.

Whether or not social tension is a cost is beside the point. Costly signals nearly always come with costs.

If you have an environment where status is gained by costly signals that are only valued within that group, it drives status competition in a way where the people on top will likely choose status over other ends.

That means organizations are not honest about the impact they are having, but present themselves as creating more impact than they actually produce. It means that when high-status organizations inflate their impact, people avoid talking about it when it would cost them status.

If people optimize to gain status by donating and being vegan, you can't trust people who donate and are vegan to make moves that cost them status but that would result in other positive ends.

> If people optimize to gain status by donating and being vegan, you can't trust people who donate and are vegan to do moves that cost them status but that would result in other positive ends.

How are people supposed to know their moves are socially positive? 

Also I'm not saying to make those things the only markers of status. You seem to want to optimize for costly signals of "honesty", which I worry is being goodharted in this conversation.

> Imagine if Nabisco took the stance that it didn’t have anything to prove about the safety and quality of its food, and that food safety testing is just a virtue signal that wastes a bunch of product. They could be sincere, and somehow keep product quality and safety acceptably high, but they are taking away your way of knowing that.

Food safety testing is an interesting topic. In Germany, where I live, the EU decides on limits for the amount of pesticides that can be in food that's sold. The German government then sets stricter limits than the EU rules. The supermarket chains then set even stricter limits for the food they sell. That's true even for the discounter chains.

This process results in pesticide limits that don't have a good medical basis.

On the other hand, there's microplastic in our food. There are no limits for microplastics in food; nobody measures them or does anything about them. From a health perspective, stronger limits on microplastics and weaker ones on pesticides would make our food safer.

German culture values companies virtue-signalling their adherence to standards, in a way that likely makes our food safer than US food, but it's still imperfect.

A discourse that focused more on the empirical issues involved in food safety would be better than one that centers on avoiding pesticides as a virtue signal.

You’re saying they should clarify what really matters and find a good measurement for that. Maybe this wasn’t as clear as I thought, but that’s what I’m saying, too!

I'm afraid this doesn't engage with the real concerns/objections to what is commonly referred to as virtue signaling, because you're using a pretty narrow, overly generous definition (to the point that it feels motte-and-bailey, but I'd prefer to believe that your experiences differ enough from mine that we just have a terminology disagreement).

The virtue signaling that I object to is the cheap, non-rationally-justifiable, and often hypocritical expression of support or hatred for in-group or out-group rallying topics. The pendulum CANNOT swing too far against this, IMO.

I think your definition of virtue signal is too narrow. You’re defining away signals of actual virtue! I know many people say “virtue signal” to mean an insincere, in-group membership token, but that’s not the plain meaning. I think your definition is dangerous because it implies that there can be no measurements of genuine virtue—using that standard leaves evidence or the opportunity for evidence on the table.

I think a mixed strategy with at least some people pushing the no-animals-as-food norm and others reducing animal consumption in various ways is best for the animals.

As far as norms go, this heavily violates the norm of honesty. What you are suggesting implies that even if all those people believe exactly the same thing, some should tell you not to eat animals and some should tell you to cut down. But implicit in what they're telling you is "my moral values state that this course of action is the most moral", and it's not possible for both of those courses of action to be moral. You can justify one, and you can justify the other, but you can't justify them both at the same time--either the person saying "no eating animals" or the person saying "it's okay to just cut down" must be a liar.

(Separately, even if they didn't all believe the exact same thing, I would still be very skeptical. If you really think eating animals is mass murder, telling me to eat fewer animals without going completely vegan is equivalent to telling me "If you do what I say, I will no longer think you are murdering 50 people, I will only think you're murdering 25 people". But the moral condemnation curve is pretty flat; being a murderer of 25 people is not condemned substantially less than being a murderer of 50 people.)

Why do you think it is dishonest for different people to have different levels of commitment and willingness to eliminate animal products, or different levels of belief that this is an effective strategy for them? There needn't be any contradiction between observing that a mixed strategy might be most successful (a fact) and different people being moved to different levels of diet change as one front of their animal activism.

Seems like you’re either DxE or you haven’t been vegetarian in this world. The truth is that not enough people agree that it is wrong to raise animals in horrid conditions for food. If not enough people agreed that murdering a human outgroup was wrong, a strategy of purist outrage wouldn’t work, and you’d probably end up switching to a harm reduction strategy like animal EAs have. 25 lives saved is good. You’re objecting because you think objecting will work and you can save more. You think I would object if I really cared about animals, but I think you just don’t understand the reality of the situation. IMHO harm reduction is how we create the change to one day be able to protect animals with outrage alone.

Why do you think it is dishonest for different people to have different levels

The argument is that you should do this "as a mixed strategy", which would mean that even a group of people with identical beliefs would act differently based on a random factor. Furthermore, I qualified it with:

even if all those people believe exactly the same thing

so they don't have different levels.

And even in the different levels case, it's easy for people to pretend they have greater differences than they really do, and their actual differences may not be enough to make the statements truthful.

IMHO harm reduction is how we create the change to one day be able to protect animals with outrage alone.

It may be the case that you can save more animals if some percentage of you lie than if you all tell the truth. Whether it's okay to lie in a "harmless" way for your ideology is a subject that's been debated quite a lot in a number of contexts. (I do think it's okay to lie to prevent the murder of a human outgroup, although even then, you need to be very careful, particularly with widespread lies.)

I don't appreciate your hostility and assumption of bad faith here. Like, I could answer your objections and point out that your hypothetical is misleading (because you're stipulating that people aren't different, when differences in preferences and motivations are exactly what explain why a mixed strategy works), but it seems like that's not really your issue. 

The concept of "least convenient possible world" comes in here. There may be situations in which a mixed strategy is possible without lying, but your idea applies both to those situations and to less convenient situations where it does require lying.

Direct Action Everywhere— it’s the most deontological animal welfare group adjacent to EA. They break into farms and liberate animals and stuff like that. I don’t think it’s on the whole the most effective strategy, although I think there’s a place for it.

I think your flat condemnation curve logic raises some weird problems. If the condemnation curve going from the murder of 25 people to 50 is relatively flat, then what do you say to someone who has already killed 25 and plans to kill 25 more? It seems like you would say "We have already decided you are maximally evil, so if you turn back from this course of action, or follow through, it won't make any difference to our assessment."  That logic seems incorrect to me. To me, when the mass murderer stops killing people (or starts to kill significantly fewer), that behavioural change seems significant.

Okay, let me clarify a little. If you ask some people what they think of someone who's killed 25 people, and you ask a similar group what they think of someone who's killed 50 people, you're going to get responses that are not meaningfully different. Nobody's going to advocate a more severe punishment for the 50-person killer, or say that he should be ostracized twice as much (because they've already decided the 25-person killer gets max and you can't go above max), or that they would be happy with dating the 25-person killer but not the 50-person killer, or that only half the police should be used to try to catch the 25-person killer.

It seems like you would say “We have already decided you are maximally evil, so if you turn back from this course of action, or follow through, it won’t make any difference to our assessment.”

Nobody will say that. But they'll behave that way.

when the mass murderer stops killing people

Which is the equivalent of completely avoiding meat, not of eating less meat.

(And to the extent that vegetarians don't behave with meat-eaters like they would with human killers, I'd say they don't alieve that meat-eaters are like human killers.)

I agree completely that your focus groups are going to give similar responses. I especially enjoy your dating example. "Oh, George, you haven't! When you were only a 25-fold murderer I could look the other way, but I don't think I can marry a man who has killed 26 people."

I agree with you in terms of virtue signaling being a proper and good thing with a real function. However, I think many people's objection against it is related to the higher-order effects of goodharting and Moloch-worship (escalating virtue signaling in a game that always ends in a race to the bottom towards things like public self-flagellation and self-immolation). I was looking for it in the article but I didn't find it, so I figured I'd mention it here.

I mentioned goodharting, of which Moloch-worship is a more specific case. I don’t share the skepticism in these comments about whether good virtue signaling is possible and whether people can keep the spirit of the law in their hearts. I also reject the implicit solution of just not legibly measuring our characters at all. I think that is itself interpreted as a signal of virtue among LWers, and it shouldn’t be.

Just to make sure we're on the same page, we seem to be both agreeing that

  1. Good virtue signaling is possible and should be attempted
  2. Moloch worship is possible, should be avoided, and may be the reason why many people hate / avoid / devalue virtue signaling.

And you seem to be saying, yes Moloch is real, yes things can go very bad. But we should still be trying to build good norms and standards to accurately signal virtues.

Does my summary feel fair and correct?

I wrote a messy post back in January spelling out why I disagree with this.

In short:

  • Attention to (virtue) signaling can help you make sense of systems you're not part of.
  • Attention to them in systems you are part of creates anti-helpful incentives.
  • It works way, way better, for both individuals and groups, to…
    • …completely ignore signaling…
    • …work to embody the traits you'd want to be perceived as having, and…
    • …strive to symmetrically create transparency and clarity about whatever is true, even if it's damning.

This automatically produces hard-to-fake virtue signals without adding Goodhart drift. It also mostly eliminates signaling (vs. signal verification) arms races.

Let's look at your vegan example.

If someone is vegan in order to send a social signal, then I have to sort out why they're intent on sending that signal. Do they want to feel righteous? Do they want acclaim for their self-sacrifice for a greater cause? Opportunity to gloat about their moral superiority? Or is it purely as it seems, a sincere effort to do good in the world based on their best understanding? Because they're doing it explicitly to manipulate how they come across to others, it's not a natural extension of how they want to be in the world. Maybe it would be if they weren't using all this signaling thinking to get in their way! But I sure can't tell that. And frankly neither can they.

By way of contrast, if someone does the inner work to give zero fucks about what "signals" they're sending people, and they practice honesty & transparency, and from that place decide (probably quietly) to become vegan, then the clarity of that decision is way way way more impactful. If someone challenges them with "Oh come on, what are you doing to help animals?" they can just matter-of-fact point out "Well, think what you want. I'm not worried about your opinion of me. But in case it actually helps elucidate what I'm saying, I have in fact been following a vegan lifestyle these last three years."

It's more impactful because it's true. There's no attempt to manipulate others' perception. There's just simple clarity. It's unassailable precisely because it's wedded to truth.

An anthropologist outside of EA could describe all this in terms of virtue signaling within EA. But someone within EA who attempts to use this "virtue signaling" thinking to decide what diet they should follow absolutely cannot embody this power. It's a lie before it's even uttered.

But is it really better when we stop offering others proof of positive qualities that are otherwise hard to directly assess? Is it better to give others no reason to trust us?

False dichotomy.

It's not "Virtue signals or no info about trust."

Another option is to just in fact strive to be trustworthy while also striving to let the truth of your trustworthiness be clearly visible, whatever that truth might be.

If I'm selling you a car, there's all kinds of BS virtue signals I could give to help put you at ease. Precisely because they're BS, you'd have to dig under them to detect possible maliciousness or dangerous neglectfulness on my part.

But if I instead just in fact mean you well, and mean the deal honestly, then it's no trouble at all for me to say something like "Hey, I know you're unsure how much to trust me or this car here. I mean you well here, and I know you don't have an easy way of telling that. Just let me know what you need to make a decision here. I'll help however I can."

Clean. Simple.

No need for intentional virtue signaling… because the actual presence of virtues causes reality to reflect the costly signals for free.

So I guess this comes down to an adamant disagreement with your call to action here:

I urge you to do the prosocial thing and develop and adopt more legible and meaningful virtue signals— for others and especially for yourself.

I dearly wish the opposite.

The prosocial thing is to dig inside oneself for one's dire need for others to see one in a particular way, and uproot that.

And also, to in fact truly cultivate virtues that you sincerely believe in.

And finally, to encourage clarity and transparency about what you are in fact like and capable of, with absolutely zero attachment to the outcome of what gets communicated in that clarity & transparency.

This is vastly more prosocial than is cultivating skill with making people think you're virtuous.

It's also a hell of a lot easier in the long run.

I think you should consider the legibility of the signals you send, but that should flow from a desire to monitor yourself so you can improve and be consistent with your higher goals. I feel like you’re assuming virtue signal means manipulative signal, and I suppose that’s my fault for taking a word whose meaning seems to have been too tainted and not being explicit about trying to reclaim it more straightforwardly as “emissions of a state of real virtue”.

Maybe in your framework it would be more accurate to say to LWers: “Don’t fall into the bad virtue signal of not doing anything legibly virtuous or with the intent of being virtuous. Doing so can make it easy to deceive yourself and unnecessarily hard to cooperate with others.”

It seems like the unacknowledged virtue signals among rationalists are 1) painful honesty, including erring on the side of the personally painful course of action when it’s not clear which is most honest and dogpiling on anyone who seems to use PR, and 2) unhesitant updating (goodharting “shut up and multiply”) that doesn’t indulge qualms of the intuition. If they could just stop doing these then I think they might be more inclined to use the legible virtue signals I’m advocating as a tool, or at the very least they would focus on developing other aspects of character.

I also think if thinking about signaling is too much of a mindfuck (and it has obviously been a serious mindfuck for the community) that not thinking about it and focusing on being good, as you’re suggesting, can be a great solution.