I feel pretty confused about whether I, as an effective altruist, should be vegetarian/vegan (henceforth abbreviated veg*n). I don’t think I’ve seen anyone explicitly talk about the arguments which feel most compelling to me, so I thought I’d do that here, in a low-effort way.

I think that factory farming is one of the worst ongoing moral atrocities. But most of the arguments I’ve heard for veg*nism, which I found compelling a few years ago, hinge on the effects that my personal consumption would have on decreasing factory farming (and sometimes on climate change). I now don’t find this line of thinking persuasive - my personal consumption decisions just have such a tiny effect compared to my career/donation decisions that it feels like I shouldn’t pay much attention to their direct consequences (beyond possibly donating to offset them).

But there are three other arguments which seem more compelling. First is a deontological argument: if you think something is a moral atrocity, you shouldn’t participate in it, even if you offset the effects of your contribution. In general, my utilitarian intuitions are much stronger than my deontological ones, but I do think that following deontological principles is often a very good heuristic for behaving morally. The underlying reason is that humans by default think more naturally in terms of black-and-white categories than shades of grey. As Yudkowsky writes:

Any rule that's not labeled "absolute, no exceptions" lacks weight in people's minds. So you have to perform that the "Don't kill" commandment is absolute and exceptionless (even though it totally isn't), because that's what it takes to get people to even hesitate. To stay their hands at least until the weight of duty is crushing them down. A rule that isn't even absolute? People just disregard that whenever.

Without strong rules in place it’s easy to reason our way into all sorts of behaviour. In particular, it’s easy to underestimate the actual level of harm that certain actions cause - e.g. thinking of the direct effects of eating meat but ignoring the effects of normalising eating meat, or normalising “not making personal sacrifices on the basis of moral arguments”, or things like that. And so implementing rules like “never participate in moral atrocities” sends a much more compelling signal than “only participate in moral atrocities when you think that’s net-positive”. That signal helps set an example for people around you - which seems particularly important if you spend time with people who are or will become influential. But it also strengthens your own self-identity as someone who prioritises the world going well.

Then there’s a community-level argument about what we want EA to look like. Norms about veg*nism within the community help build a high-trust environment (since veg*nism is a costly signal), and increase internal cohesion, especially between different cause areas. At the very least, these arguments justify not serving animal products at EA conferences.

Lastly, there’s an argument about how I (and the EA community) are seen by wider society. Will MacAskill sometimes uses the phrase “moral entrepreneurs”, which I think gestures in the right direction: we want to be ahead of the curve, identifying and building on important trends in advance. I expect that veg*nism will become much more mainstream than it currently is; insofar as EA is a disproportionately veg*n community, this will likely bolster our moral authority.

I think there are a few arguments cutting the other way, though. I think one key concern is that these arguments are kinda post-hoc. It’s not necessarily that they’re wrong, it’s more like: I originally privileged the hypothesis that veg*nism is a good idea based on arguments about personal impact which I now don’t buy. And so, now that I’m thinking more about it, I’ve found a bunch of arguments which support it - but I suspect I could construct similarly compelling arguments for the beneficial consequences of a dozen other personal life choices (related to climate change, social justice, capitalism, having children, prison reform, migration reform, drug reform, etc). In other words: maybe the world is large enough that we have to set a high threshold for deontological arguments, in order to avoid being swamped by moral commitments.

Secondly, on a community level, EA is the one group that is most focused on doing really large amounts of good. And so actually doing cost-benefit analyses to figure out that most personal consumption decisions aren’t worth worrying about seems like the type of thing we want to reinforce in our community. Perhaps what’s most important to protect is this laser-focus on doing the most good without trying to optimise too hard for the approval of the rest of society - because that's how we can keep our edge, and avoid dissolving into mainstream thinking.

Thirdly, the question of whether going veg*n strengthens your altruistic motivations is an empirical one which I feel pretty uncertain about. There may well be a moral licensing effect where veg*ns feel (disproportionately) like they’ve done their fair share of altruistic action; or maybe parts of you will become resentful about these constraints. This probably varies a lot for different people.

Fourthly, I am kinda worried about health effects, especially on short-to-medium-term energy levels. I think it’s the type of thing which could probably be sorted out after a bit of experimentation - but again, from my current perspective, the choice to dedicate that experimentation to maintaining my health instead of, say, becoming more productive feels like a decision I’d only make if I were privileging the intervention of veg*nism over other things I could spend my time and effort on.

I don’t really have any particular conclusion to this post; I wrote it mainly to cover a range of arguments that people might not have seen before, and also to try and give a demonstration of the type of reasoning I want to encourage in EA. (A quick search also turns up a post by Jess Whittlestone covering similar considerations.) If I had to give a recommendation, I think probably the dominant factor is how your motivational structure works, in particular whether you’ll interpret the additional moral constraint more as a positive reinforcement of your identity as an altruist, or more as something which drains or stresses you. (Note though that, since people systematically overestimate how altruistic they are, I expect that most people will underrate the value of the former. On the other hand, effective altruists are one of the populations most strongly selected for underrating the importance of avoiding the latter.)

25 comments

I like how you explore all the angles available, including bringing in your own skepticism about your own process. It's nice to see this all laid out this way.

One oddball thought — I hope it's helpful, though it might instead be annoying:

There's a common and beloved line of thinking in this area that I've come to think of as broken. It goes something like:

  1. "Social signal X is important to send."
  2. "I can create social signal X by making myself behave in fashion Y."
  3. "Therefore, I should do Y."

…where that "should" there is often an injection of motivation juice. It frequently acts like writing the conclusion of what behavior the person in question needs to make themselves adopt one way or another. Differences between how they act now and Y can be a painful source of motivation to change, whether by force of will or through some kind of behavioral or psychological hacking.

But I think the problem is actually at step 1.

If you want to think of it in TDT terms: A culture that navigates social signals with this way of thinking engenders unending complexity in signaling arms races. Everything touched by this way of thinking becomes impossible to sort out in any simple way — not just because of computational complexity, but also (and I think primarily) because the focused attention will engender Goodhart drift. In practice this almost totally borks the ability to cooperate due to injecting a tragedy of the commons type dynamic.

Social signal analysis seems like a fine thing to do for systems you're not a part of, but when you're in them I think it's pretty important to let the signals take care of themselves. Otherwise you feed powerfully coercive and rivalrous dynamics.

If you were to remove all the thinking about how your decisions impact others' perception of you, what's left? And what does that thinking conclude?

If you end up with a different answer than the one you get from adding in the epicycles of endless recursive signaling thinking, then that highlights exactly the degree to which signaling has decoupled from reality, which you encourage by choosing an option based on sending said decoupled signals.

So you can simplify your thinking and help replenish the epistemic commons by dropping all explicit thinking about what signals you are or aren't sending in your decision-making.

If veg*nism is really a moral frontier (to pick one of the angles you mentioned) and EA is really honestly into moral pioneering, and these two things match, it'll unfold that way on its own. And it'll unfold for the same reasons you, personally, for non-signaling reasons, choose to be veg*n. The signals will take care of themselves by virtue of the unstoppable pairing of truth and transparency.

Or, you can go the usual route I'm familiar with in these circles and focus on how to seem like a moral pioneer. This goes over about as well as working really hard to seem natural and at ease on a date. You might pull it off! But you'd be pulling off a deception — which ironically might in fact match reality, making the deception pointless. Or it doesn't, in which case you're trying to cause people to respect you and EA in ways that don't match your/their actual moral embodiment.

So, what I see after filtering out signaling thinking is:

  • You don't buy the relevance of your direct impact from going veg*n.
  • You're wondering whether you'd be a more moral person by adopting a deontological angle, which would imply becoming veg*n anyway.
  • You're also concerned that you're privileging the hypothesis of veg*nism and that this is warping your thinking.
  • You want to strengthen your altruistic motivations — but you aren't sure whether going veg*n actually will do that.
  • You're worried about possible negative health effects on yourself from going veg*n.

These considerations seem straightforward to me. No trickery or muddying the signaling commons. Just honest reflection for yourself about whether veg*nism makes sense for you.

For what that's worth! I know this kind of recursive reasoning can be fun and often seems super important and key — and maybe even downright insane to overlook. I don't mean to tell anyone that they can't do that. But I do mean to say that I see real costs to it, that I do in fact think it's basically never worth it, and I want to offer a snapshot of what I see in case it helps inspire some new and valuable insight for others.

I like this comment! But I think that there's less "unending complexity" than you expect - you can do things like advertise your products, or build a specific culture within a group, without spiralling into epistemic hell. More specifically:

you'd be pulling off a deception — which ironically might in fact match reality, making the deception pointless

I don't think it's an ironic match, I think the whole point of doing the signalling is because we think it's based on truth.

Maybe a better way of phrasing it is: if we're currently 80% of the way towards the level of "moral pioneeringness" that makes us vegan, what should our attitude be towards filling in the motivational gap with arguments about signalling? I don't think that it's being dishonest to be partly motivated by signalling, because everyone is always partly motivated by signalling. It's just that in some cases those motivations are illegible, whereas in cases of coordinating community norms those motivations need to be made more legible.

So, what I see after filtering out signaling thinking is:

I notice that you've also dropped the idea of building a high-trust community via internal signals. But I think this is, again, the type of thing that is valuable to some extent, and just shouldn't be taken too far. Shared cultural references, shared experiences, shared values, shared hardships, etc, are all ways in which people feel closer to each other, as long as there's some way of communicating them. Maybe I should have just not used the word "signal" in explaining how that information gets communicated, but I do think that's fundamentally what's going on in a lot of high-trust groups.

You want to strengthen your altruistic motivations

My model of how motivations get strengthened involves a big component of "self-signalling". Not sure how close this is to external signalling, but just wanted to mention that as another reason these things are less separable than you argue.

I think that there's less "unending complexity" than you expect - you can do things like advertise your products […]

This example surprises me a little. It seems like a great display of exactly what I'm talking about.

Most ads these days focus on "grabbing". They aren't a simple informative statement like "Hey, just want to let you know about this product in case it's useful to you." Instead they use things like sex and coloration and "For a limited time!" and anchoring hacks ("10 for $10!") to manipulate people into buying their products. And because of a race-to-the-bottom dynamic, it's difficult for honest ads to compete with the grabby ones.

Although it's not exactly advertising, I think this example is illustrative: I'm in a tourist part of Mexico right now. When I walk down the main street where there are lots of shops, the shopkeepers will often be standing outside and will call out things to me like "Hey my long-haired friend! A question for you: Where are you from?" It's pretty obvious — and confirmed from my direct experience from being here a few months — that they aren't doing this just to be friendly. Rather, they're using a signal of friendliness to try to trigger a social routine in me that might result in them "welcoming" me into their shop to buy something.

The result is that I tend to stonewall people who are friendly to me out of nowhere here. Which gums up the ability for people to simply be friendly to each other.

And on top of that, if I actually want to buy something, I have to be careful of subtle things my brain will do when some shopkeeper seems friendly.

The "unending complexity" was from a TDT thought experiment in which everyone in a culture is targeting signals, and that's rarely the case in practice. But I think advertising is actually a pretty good example of a situation where something much like this has happened. Just try to find an actually environmentally friendly product for instance: greenwashing makes this insanely difficult!

And that doesn't even get into the effect of things like Facebook and Google on culture as a direct result of the incentive landscape around hacking people's minds via advertising.

I think what you mean to point at is that it's possible to advertise your product without destroying the ability for people to find out true things about your product. I agree. But I think the signaling-focused culture around doing this does in fact make it way, way harder. That's the thing I'm pointing at.

 

I don't think it's an ironic match, I think the whole point of doing the signalling is because we think it's based on truth.

Here and elsewhere in your reply, I can't tell whether you're saying (a) "Signaling is how this happens" or (b) "Consciously trying to focus on signals seems fine here." I wonder if you're mixing those two claims together…?

There's never a need to "do" signaling. Signals just happen. I signal being a male by a whole bunch of phenotype things about my body I have basically no control over, like the shape of my jaw and the range of my voice.

Although that isn't to say that one cannot "do" signaling. I also signal male by how I dress for instance, and I could certainly amplify that effect if I chose to.

But my thesis is that "doing" signaling consciously is basically always a mistake on net. It applies Goodhart drift to those very signals, which is exactly what leads to a signal-sending/deception-detecting arms race.

If you think something is true, why not encourage clarity about the truth in a symmetric way? Let reality do the signaling for you?

If you think you can send a stronger signal of the truth than reality can via symmetric means, then I think it's worth asking why. I've found that every time I have this opinion, it's because I actually need a certain conclusion to be true, or because I need the other person to believe it. Like needing the person I'm on a date with to believe we're a good match. Completely letting that go tends to let signals of the truth emerge, which I find is always better on net — including when it makes a date go not so well.

 

I don't think that it's being dishonest to be partly motivated by signalling, because everyone is always partly motivated by signalling.

This is another case where I experience you as possibly mixing "signaling happens" and "thinking about signaling is fine" together as the same claim.

I get the sense that "motivated by signaling" means something different in the two parts of your sentence here. In the first instance it seems to mean something like "driven by conscious reflection on signaling dynamics", and in the second case it conveys "influenced by an often subconscious drive to send certain signals".

I do, in fact, think that consciously trying to send certain signals reliably leads to dishonesty. And that this happens even when one is trying to signal what one honestly believes to be true.

And in practice, I find that I become clearer and more reliable and "authentic" (and also a lot more at ease) the more I can incline even my subconscious efforts to signal to prefer truth symmetry. Which is to say, preferring transparency over conveying some predetermined conclusion to others.

But I agree, a lot of what people do comes down to tracking their signals. I don't mean to deny that or say it's dishonest or even ineffective.

 

It's just that in some cases those motivations are illegible, whereas in cases of coordinating community norms those motivations need to be made more legible.

I wonder if this is the center of our disagreement.

The way I see it, if symmetrically making the truth transparent whatever it might be doesn't do the job, then asymmetrically adding signals actually makes the situation worse. It injects Goodhart drift on top of the truth being in fact illegible.

Imagine, for instance, that EA collectively dropped all conscious thinking about signaling, and instead nearly all EAs became veg*n because each one noticed it made personal sense for them. (Not saying it does! I'm just supposing for a thought experiment.) And maybe EA puts in public the results of surveys, kind of like what LW occasionally does, where the focus is on simply being transparent about how people who identify as EAs actually make choices about moral and political topics. This would have the net effect of sending the moral signal of veg*nism more strongly than if EA advertised how nearly everyone involved is veg*n — and without applying political pressure.

And furthermore, the rest of the signals would be coherent. How EAs relate to recycling, and adoption, and matters of social justice, and a bazillion other things would all paint a coherent picture.

If on the other hand EA is 80% aligned with being moral pioneers, then trying to paint a picture of in fact being moral pioneers is actually deceptive. Why is it 80% instead of 100%? This says something real about EA. Why hide that? Why force signals based on how some people in EA think EA wants to be? What truth is that 20% hinting at?

To make this even more blatant: Suppose that EA would not want to become veg*n, but that it does want to embody moral leadership, and it thinks that people the world over would view EA being "ahead of the curve" on veg*nism as evidence of being good moral leaders. In this case EA would be actively deceiving the world in order to convince the world of a particular image that EA wants to be seen as.

It's highly, highly unlikely that this willingness to deceive would be isolated. And that would show in countless signals that EA can't fully control.

Which is to say, EA's signals wouldn't be coherent. They wouldn't be integrated — or in other words, EA would lack integrity.

But maybe EA really is the frontier of morality, and the world just has a hard time seeing that. Being transparent and a pioneer might actually result in rejection due to the Overton window. Yes?

Particularly in the case of morality, I think it's important to be willing to risk that. Let others come to their mistaken conclusions given the true facts.

That's part of what allows the world to notice that its logic is broken.

 

I notice that you've also dropped the idea of building a high-trust community via internal signals. But I think this is, again, the type of thing that is valuable to some extent, and just shouldn't be taken too far. Shared cultural references, shared experiences, shared values, shared hardships, etc, are all ways in which people feel closer to each other, as long as there's some way of communicating them.

Again, here I read you as conflating "Signals happen all the time" with "Consciously orchestrating signals is fine."

I think you're right, this is a lot of how groups work. That kind of analysis seems to say something true when you view the group from the outside.

I also think that something super important breaks when you try to use this thinking in a social engineering way from inside the group.

Most sitcoms are based on this kind of thing for instance. Al worries about what Jane will think about Barry having Al's old record set, so he tells Jane some lie about the records, but Samantha drops by with a surprise that proves Al's story is false, leading to hilarity as Barry tries to cover for Al…

…but all of the complexity vanishes if anyone in this system is willing to say "Here's the simple truth, and I'll just deal with the consequences of people flipping their shit over it." Because it means people are flipping their shit over the truth. How are they supposed to become capable of handling the truth if everyone around them is babying each other away from direct exposure to it?

Alas, adults taking responsibility for their own reactions to reality and simply being honest with each other doesn't make for very exciting high school drama type dynamics.

In the rationality community I've seen (and sometimes participated in) dozens of attempts to create high-trust environments via this kind of internal signal-hacking. It just doesn't work. It lumbers along for a little while as people pretend it's supposed to work. But there's always this awkward fakeness to it, and unspoken resentments bubble up, and the whole thing splinters.

I think it doesn't work because those internal signals don't emerge organically from a deeper truth. This is why team-building exercises in corporations are reliably so lame, but war buddies who fought across enemy lines side by side used to stay together for a lifetime. (Not as true over the last century though because modern wars are mostly fake and way, way beyond human scale.)

I don't think you can take a group and effectively say "This will be a high trust group." There might be some exceptions along the lines of severe cult-level brainwashing or army bootcamp type scenarios, but even then that's because you're imposing a force from the outside. Remove the cult leader or command structure, and the "tight-knit" group splinters.

The only kind of high trust that I think is worth relying on emerges organically from truth. And that's not something we can really control.

So, yeah. Either EA will organically do this, or it won't become a high trust community. And internal signal-hacking would actually get in the way of the organic process thanks to Goodhart drift.

 

Maybe I should have just not used the word "signal" in explaining how that information gets communicated…

A minor aside: For me it's not about the word. It's the structure. The thing I'm pointing at is about the attempt to communicate a conclusion instead of working to symmetrically reveal the truth and let reality speak for itself. The fact that someone thinks the aimed-for conclusion is true doesn't change the fact that asymmetric signal-manipulation gums up collective clarity. It harms the epistemic commons. In the long run I think that reliably does more damage than it's worth.

 

My model of how motivations get strengthened involves a big component of "self-signalling". Not sure how close this is to external signalling, but just wanted to mention that as another reason these things are less separable than you argue.

Again, here those two things ("Signaling happens all the time" vs. "Consciously trying to do signaling is fine") appear to me to be mixed together in a way they don't have to be. Certainly I can analyze how motivation-strengthening works by donning a self-signaling lens — but that's super duper different from using the self-signaling as part of my conscious strategy for making my motivations stronger.

As I mentioned before, for me the difference is whether I'm using a signaling model to study something from a detached outside point of view, or as a strategy to engineer something I'm in fact part of.

And yes, I do think you run into the problems I'm outlining if you consciously use self-signaling to change your motivations. On the inside I think people experience this as a kind of fragmentation. "Akrasia" comes to mind.

But just like with advertising, it's possible to seem to get away with a little bit anyway.

Our aversion to making changes to lifestyle and habits is based on the initial, not long-term, difficulty. I went vegan about a decade ago after learning about factory farming, but it quickly stopped feeling like a sacrifice at all. My food bill is much lower, and even the most basic vegan cooking becomes delicious with the right amount of monosodium glutamate, which, contrary to debunked myths, isn't bad for you at all. It's just a sodium atom and a glutamate molecule, both of which are essential nutrients. 

monosodium glutamate, which, contrary to debunked myths, isn’t bad for you at all. It’s just a sodium atom and a glutamate molecule, both of which are essential nutrients.

So my understanding is also that MSG isn't bad for you, no disagreement on that.

But there's a weakly implied argument here that if you take two essential nutrients and put them together you can't get something which is bad for you, and I don't think that's true.

As far as going veg*n strengthening altruistic motivations, I can at least say for myself and several other people I know who have been veg*n a long time, that I am no longer able to look at a piece of meat and disconnect emotionally from the fact that it is someone's dead body. This can make staying veg*n highly self-reinforcing to the extent that it would cost me a huge amount of effort to stop. Furthermore, every time I see meat, I instantly feel a strong emotional reminder of just what it is we are all fighting for. Eating meat normalizes itself. If you stop long enough, it'll stop feeling normal. 

I was thinking yesterday that I'm surprised more EAs don't hunt or eat lots of mail-ordered hunted meat, like e.g. this. Regardless of whether you think nature should exist in the long term, as it stands the average deer, for example, has a pretty harsh life and death. Studies like this on American white-tailed deer enumerate the alternative modes of death, which I find universally unappealing. You've got predation (which, surprisingly to me, is the number one cause of death for fawns), car accidents, disease, and starvation. These all seem orders of magnitude worse than being killed by a hunter with a good shot.

I'd assume human hunting basically trades off against predation and starvation, so the overall quantity of deer and deer consciousness isn't affected much by hunting.  The more humans kill, the fewer coyotes kill.

Edit: So it seems to me that buying hunted meat/encouraging hunting might have a better animal welfare profile than veganism, while also satisfying Richard's concerns about nutrition and satisfying meat cravings. That being said, it is not really scalable in the way veg*nism is.

In my experience, the hardest part about not eating meat is eating outside the house, either at restaurants or at social events. In restaurants it can be hard to find good options that don't include meat (especially if you're also avoiding animal products), and at social events the host has to go out of their way to accommodate you unless they are already planning for it. Eating hunted meat doesn't help with either of those situations. Theoretically there could be a hunted meat restaurant or a social event where the host serves hunted meat, but both of those would be difficult from a verifiability standpoint ("is this really hunted meat, or are they just saying that?").

I do definitely agree that hunted meat could be a good option for people who still want to have meat but are okay with not having it all of the time and are willing to deal with the hassle. Some people buy meat from farms that raise their animals ethically. That has basically all of the same benefits and drawbacks for animal welfare concerns, but it doesn't help with climate emissions, which I think hunted meat would help with.

I agree with your description about the hassle of eating veg when away from home.  The point I was trying to make is that buying hunted meat seems possibly ethically preferable to veganism on animal welfare grounds, would address Richard's nutritional concerns, and also satisfies meat cravings. 

Of course, this only works if you condition on the brutality of nature as the counterfactual. But for the time being, that won't change.

An additional thought:

Having become veg*n before encountering EA is a very good indicator of your ability to independently come to moral conclusions, and act on them, often against outside pressure.

I don't insist on EAs being veg*n, but I trust them more if they decided to become veg*n at some point in their life.

[ I am neither an effective altruist nor a veg*n.  I comment not as a member of those groups, but just as an observer wondering about consistency and decision-making. ]

whether I, as an effective altruist, should be vegetarian/vegan

I think this is the wrong framing - you should instead ask whether you, as someone who cares about animal suffering and global warming more than your own personal convenience, should be vegetarian/vegan.  This would keep the discussion on the topics you want - whether it'll have any impact, and whether it'll have any costs.  Unless you donate to offset them, in which case it's the WRONG impact - your behavioral choices are now interfering with your donation budget (or you're not really offsetting - you'd have donated anyway).

my personal consumption decisions just have such a tiny effect compared to my career/donation decisions

Why are you making this comparison?  They seem like completely orthogonal points - your personal consumption activities don't reduce or increase your career/donation choices at all.  

The additional effects on group visibility, and any sort of example you're setting for others (even unknowingly), are real, but the fundamental truth is that if you think your marginal harm to farmed animals is a net bad, then you're making a utilitarian mistake by participating in it.

The fact that this is obvious, and yet you're still publicly debating it is an indicator to me that there are factors you'd rather not enumerate.  For me, I like the convenience, taste, and experience of eating meat.  I don't know if you feel similarly or not, but there have to be SOME reasons you even think "not being vegan" is an option worth considering.

There are clear and obvious costs to being a veg*n, including time and monetary costs, which you'd expect would hurt career/donation impact. As a simple example, if you get coffee from Starbucks every day, switching from regular milk to plant-based milk could cost $0.50 per day -- at 365 coffees a year, that's roughly an extra $180 you could have saved and donated instead.
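For what it's worth, the back-of-the-envelope arithmetic behind that figure is just (the $0.50/day premium and a daily coffee habit are hypothetical assumptions, not real prices):

```python
# Back-of-the-envelope: annual cost of a hypothetical $0.50/day
# plant-milk premium on a daily (365 days/year) coffee habit.
daily_premium = 0.50   # assumed extra cost per coffee, in dollars
days_per_year = 365

annual_premium = daily_premium * days_per_year
print(f"${annual_premium:.2f} per year")  # -> $182.50 per year
```

So the potential donation offset is closer to $180 than $100 per year, under these assumed numbers.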

(But as OP says, probably the impacts on motivation, altruism, etc are more important.)

I suspect the tension between drinking cow milk or plant milk is less important than going to Starbucks at all. My point remains that your idiosyncratic personal desires (non-utilitarian in most conceptions) are causing you to optimize on easy dimensions rather than making more holistic decisions.

Unless you are buying processed food or off-season foods, plant-based meals will always be cheaper.

I often see people say things like it is cheaper to follow a vegan diet than an omnivorous one.

I think that this is trivially false (but probably not very interesting): the set of omnivorous meals includes the set of vegan meals, and even if vegan meals are often cheaper than non-vegan ones, in my personal experience I am regularly in situations where it would be cheaper to consume a meal that contains meat or dairy (e.g. at restaurants where most meals are not vegan, or when looking around the reduced section of the supermarket).

The common response I get to this is 'well, if you are optimising for the cheapest possible meal (and not just the cheapest meal at, say, a restaurant), this will probably be something like rice and beans, which is vegan'. I somewhat agree here, but I think the more useful question is: for a given level of satisfaction, how expensive is the cheapest possible meal, and is it vegan? Once we move to things a little more expensive than rice and beans, it becomes much less clear whether vegan diets are usually cheaper.

Also, if vegan diets were cheaper for similar levels of satisfaction, I'd expect vegan food to be much more popular amongst people who are not sympathetic to animal-ethics/environmental arguments, just because I expect consumer preferences to be pretty sensitive to differences in the cost of similar-utility goods.

On the general point, as a recently-turned vegan (~1 yr), my spending is roughly the same. Money saved on not buying meat/milk/cheese was basically directly replaced by splurging on expensive stuff like avocados, cashews, faux-cheeses, and fancy salads. All of those are non-essential, but budget wasn't ever my primary motive in choosing foods.

The following thoughts are mostly in response to your last claim around market dynamics and the foods people choose.

A big part of the observed frequency of meat eating is explained by cultural inertia, esp. the historical signaling function of meat-eating. For a long, long time (and still in rural/poor places), owning animals was a primary store of wealth, and killing them to eat was a very costly display of your fitness. That kind of signal can be culturally baked into various food traditions. Fancy restaurants still play this game, with most of the fanciest and most expensive foods being unusual preparations of meat that is hard to acquire or raise.

Another enormous factor here is subsidies: something like $40b annually in US subsidies goes to meat & dairy. Meat is sometimes cheaper than or comparable in price to replacement vegan foods, but that's not a market outcome. Without those subsidies you'd see a bigger price differential.

It's also noteworthy that, proportionally, many meals with meat have mostly vegan ingredients. Things like steak are outliers; many meals that contain meat aren't mostly meat.

"Always"? I had to replace milk with oat-based substitutes due to lactose intolerance, and now pay 2x for essentially the same product. (There are cheaper plant-based substitutes, but they imo don't taste anything like milk. For instance, I find most of them unbearably sweet.)


There’s also timeless decision theory to consider. A rational agent should take other rational agents into consideration when choosing actions. If I choose to go vegan, it stands to reason that similarly acting moral agents would also choose that course. If many (but importantly not all) people want to be vegan, then demand for vegan foods goes up. If demand for vegan food goes up, then suppliers make more vegan food and have an incentive to make it cheaper and tastier. If vegan food is cheaper and tastier, then more people who were on the fence about veganism can make the switch. It’s a virtuous cycle. Just in the four years since I went vegan, I’ve noticed that packaged vegan food is much easier to find in the grocery store I’ve been using for 5 years. My demand contributed to that change.

I’m not sure whether there’s a moral case against animal suffering anymore, but I still think plant farming is net better than animal farming for other reasons. Mass antibiotic use risks super-bugs, energy use is much higher for non-chicken farming than for plants, and the meat-processing industry has more amputation in its worker base than I like. I would like to incentivize readily available plant based food.

Our entire way of life is full of negative externalities that cause massive amounts of direct or indirect suffering and harm. Nearly all forms of production and transportation require large amounts of energy, which is often generated in a way that at minimum harms the climate. Your smartphone was probably assembled in a Chinese factory full of third world workers with a barely tolerable existence. It might have passed through an Amazon warehouse where workers have long mind-numbing shifts that also cause physical problems, contributing to the opioid epidemic. Plant-based diets also require lots of intense agriculture, which destroys local ecosystems and inadvertently kills millions of mice, other small vertebrates and insects. 

It doesn't seem correct to disregard all of these as "massively outweighed by career/donation choices". Nor does it seem wise to pick one of these areas (animal suffering in meat production) and treat it as the one and only lifestyle choice a True Rational ought to make. 

The problems mentioned above are gigantic. How do we provide everybody with comfortable, affordable existences without destroying the climate or exploiting workers anywhere? How do we produce food for billions of humans without destroying ecosystems and harming other living beings, even indirectly and inadvertently? We'll get there eventually, and we're improving every year. But converting to vegetarianism for purposes of virtue signalling doesn't seem to be very helpful there.

Of course, if you truly reduce or stop your meat consumption for legitimate reasons of harm reduction, that's awesome and I'll applaud anybody for that. Just as reducing your energy consumption and looking for products produced by companies that treat their employees responsibly are probably also virtuous actions. But they shouldn't become litmus tests.

As others have mentioned, it's basically a virtue argument: are you the sort of person who, given a nearly equal choice, would avoid consuming slaughtered animals (vegetarian) or all products of animal farming (vegan)? The exact utility is not easy to calculate, and deontology is virtue ethics or utilitarianism codified into easy-to-digest (hah) rules, anyway.

Then there’s a community-level argument about what we want EA to look like. Norms about veg*nism within the community help build a high-trust environment (since veg*nism is a costly signal), and increase internal cohesion, especially between different cause areas. At the very least, these arguments justify not serving animal products at EA conferences.
... 
I expect that veg*nism will become much more mainstream than it currently is; insofar as EA is a disproportionately veg*n community, this will likely bolster our moral authority.

To the contrary, to me this violates cause neutrality, and risks EA becoming increasingly politicized. And politicization is really bad! For instance, in 2020, a local EA chapter (EA Munich) made the utterly inane choice of disinviting Robin Hanson.

Secondly, on a community level, EA is the one group that is most focused on doing really large amounts of good. And so actually doing cost-benefit analyses to figure out that most personal consumption decisions aren’t worth worrying about seems like the type of thing we want to reinforce in our community. Perhaps what’s most important to protect is this laser-focus on doing the most good without trying to optimise too hard for the approval of the rest of society - because that's how we can keep our edge, and avoid dissolving into mainstream thinking.

Yes! Why would we even want a community norm of emphasizing costly signals, or treating personal veganism as a litmus test, in a movement based on cause effectiveness? That would be like saying "I don't care what's effective, what matters is what my (EA) peers think".

[-]Fai

Hi! I just wrote a full post in reply to this on the EA forum (because it's long, and it's been a while since this post). I probably won't post the full post on this forum, so here's a link to the EA forum post.
