Abstract: If you value the welfare of nonhuman animals from a consequentialist perspective, there is a lot of potential for reducing suffering by funding the persuasion of people to go vegetarian through either online ads or pamphlets.  In this essay, I develop a calculator for people to come up with their own estimates, and I personally arrive at a cost-effectiveness estimate of $0.02 to $65.92 needed to avert a year of suffering in a factory farm.  I then discuss the methodological criticisms that merit skepticism of this estimate and conclude by suggesting (1) a guarded approach of putting in just enough money to help the organizations learn and (2) the need for more studies that explore advertising vegetarianism in a wide variety of media in a wide variety of ways and that include decent control groups.

-

Introduction

I start with the claim that it's good for people to eat less meat, whether they become vegetarian -- or, better yet, vegan -- because this means fewer nonhuman animals are painfully factory farmed.  I've defended this claim previously in my essay "Why Eat Less Meat?".  I recognize that some people, even those who consider themselves effective altruists, do not value the well-being of nonhuman animals.  For them, I hope this essay is interesting, but I admit it will be a lot less relevant.

The second idea is that it shouldn't matter who is eating less meat.  As long as less meat is being eaten, fewer animals will be farmed, and this is a good thing.  Therefore, we should try to get other people to eat less meat too.

The third idea is that it also doesn't matter who is doing the convincing.  Therefore, instead of convincing our own friends and family, we can pay other people to convince people to eat less meat.  And this is exactly what organizations like Vegan Outreach and The Humane League are doing.  With a certain amount of money, one can hire someone to distribute pamphlets to other people or put advertisements on the internet, and some percentage of people who receive the pamphlets or see the ads will go on to eat less meat.  This idea and the previous one should be uncontroversial for consequentialists.

But the fourth idea is the complication.  I want my philanthropic dollars to go as far as possible, so as to help as much as possible.  Therefore, it becomes very important to figure out how much money it takes to get people to eat less meat, so I can compare this to other estimates and see what gets me the best "bang for my buck".


Other Estimates

I have seen other estimates floating around the internet that try to estimate the cost of distributing pamphlets, how many conversions each pamphlet produces, and how much less meat is eaten via each conversion.  Brian Tomasik calculates $0.02 to $3.65 [PDF] per year of nonhuman animal suffering prevented, later $2.97 per year, and later still $0.55 to $3.65 per year.

Jess Whittlestone provides statistics that reveal an estimate of less than a penny per year[1]. 

Effective Animal Activism, a non-profit evaluator of animal welfare charities, came up with an estimate [Excel Document] of $0.04 to $16.60 per year of suffering averted that also takes into account a variety of additional variables, like product elasticity.

Jeff Kaufman uses a different line of reasoning: estimating how many vegetarians there are and guessing how many of them converted via pamphlets, he calculates it would take $4.29 to $536 to make someone vegetarian for one year.  Extrapolating from that using a rate of 255 animals saved per year and a weighted average of 329.6 days lived per animal (see below for justification of both assumptions) would give $0.02 to $1.90 per year of suffering averted[2].

A third line of reasoning, also from Jeff Kaufman, was to measure the number of comments on the pro-vegetarian websites advertised in these campaigns; he found that 2-22% of them mentioned an intended behavior change (eating less meat, going vegetarian, or going vegan), depending on the website.  I don't think we can draw any conclusions from this, but it's interesting.

To make my calculations, I built a calculator.  Unfortunately, I can't embed it here, so you'll have to open it in a new tab as a companion piece.

I'm going to start by using the following formula: Years of Suffering Averted per Dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Days lived / animal)

Now, to get estimations for these variables.


Pamphlets Per Dollar

How much does it cost to place the advertisement, whether it be the paper pamphlet or a Facebook advertisement?  Nick Cooney, head of the Humane League, says the cost-per-click of Facebook ads is 20 cents.

But what about the cost per pamphlet?  This is more of a guess, but I'm going to go with Vegan Outreach's suggested donation of $0.13 per "Compassionate Choices" booklet.

However, it's important to note that this cost must also include opportunity cost -- leafleters forgo the ability to use that time to work a job.  This means I must add an opportunity cost of, say, $8/hr on top of that, making the actual cost about $0.27 per pamphlet (assuming one pamphlet is handed out each minute of volunteer time), meaning roughly 3.7 people are reached per dollar from pamphlets.  For Facebook advertisements, the opportunity cost is trivial.
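To make that arithmetic explicit, here is a quick sanity check (a sketch; the $8/hr wage and one-pamphlet-per-minute rate are the assumptions stated above):

```python
# Cost per pamphlet including the leafleter's opportunity cost.
printing_cost = 0.13            # dollars per booklet (suggested donation)
opportunity_cost = 8 / 60       # $8/hr spread over 60 pamphlets per hour

cost_per_pamphlet = printing_cost + opportunity_cost
print(round(cost_per_pamphlet, 2))       # ~0.26; the essay rounds to $0.27
print(round(1 / cost_per_pamphlet, 1))   # ~3.8; the essay rounds to 3.7 per dollar
```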


Conversions Per Pamphlet

This is the estimate with the biggest target on its head, so to speak.  How many people actually change their behavior because of a simple pamphlet or Facebook advertisement?  Right now, we have three lines of evidence:

Facebook Study

The Humane League ran a $5,000 Facebook advertisement campaign, buying ads that sent people to websites (like this one or this one) with auto-playing videos showing the horrors of factory farming.

Afterward, another advertisement was run targeting people who "liked" the video page, offering a 1-in-10 chance of winning a free movie ticket for taking a survey.  Everyone who emailed in asking for a free vegetarian starter kit was also emailed a survey.  104 people took the survey; 32 reported being vegetarian[3], and 45 reported, for example, that their chicken consumption decreased "slightly" or "significantly".

7% of visitors liked the page and 1.5% of visitors ordered a starter kit.  Assuming everyone else came away from the video with their consumption unchanged, this survey would (very tenuously) suggest that about 2.6% of people who see the video become vegetarian[4].

(Here are the results of the survey in PDF.)

Pamphlet Study

A second study discussed in "The Powerful Impact of College Leafleting (Part 1)" and "The Powerful Impact of College Leafleting: Additional Findings and Details (Part 2)" looked specifically at pamphlets.

Here, Humane League staff visited two large East Coast state schools and distributed leaflets.  They returned two months later and surveyed people walking by, counting those who remembered receiving a leaflet earlier.  They found that about 2% of those who received a pamphlet went vegetarian.

Vegetarian Years Per Conversion

But once a pamphlet or Facebook advertisement captures someone, how long will they stay vegetarian?  One survey showed vegetarians refrain from eating meat for an average of 6 years or more.  Another study I found says 93% of vegetarians stay vegetarian for at least three years.

 

Animals Saved Per Vegetarian Year

And once you have a vegetarian, how many animals do they save per year?  CountingAnimals says 406 animals saved per year.

The Humane League suggests 28 chickens, 2 egg industry hens, 1/8 beef cow, 1/2 pig, 1 turkey, and 1/30 dairy cow per year (total = 31.66 animals), and does not provide statistics on fish.  This agrees with CountingAnimals on non-fish totals.

Days Lived Per Animal

One problem, however, is that saving a cow that could suffer for years is different from saving a chicken that suffers for only about a month.  Using data from Farm Sanctuary plus World Society for the Protection of Animals data on fish [PDF], I get this table:

Animal          Number   Days Alive
Chicken (Meat)  28       42
Chicken (Egg)   2        365
Cow (Beef)      0.125    365
Cow (Milk)      0.033    1460
Fish            225      365

This makes the weighted average 329.6 days[5].
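For anyone who wants to check that figure, here is the computation behind footnote 5 (a sketch using the table above):

```python
# (number of animals, days alive) per animal type, from the table above.
animals = {
    "chicken_meat": (28, 42),
    "chicken_egg":  (2, 365),
    "cow_beef":     (0.125, 365),
    "cow_milk":     (0.033, 1460),
    "fish":         (225, 365),
}

total_animals = sum(n for n, _ in animals.values())          # ~255.16
total_days = sum(n * days for n, days in animals.values())

print(round(total_days / total_animals, 1))  # ~329.7; the essay reports 329.6
```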

 

Accounting For Biases

As I said before, our formula was Years of Suffering Averted per Dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Days lived / animal).

Let's plug these values in... Years of Suffering Averted per Dollar = 5 * 0.02 * 3 * 255.16 * 329.6/365 = 69.12.

Or, assuming all this is right (and that's a big assumption), it would cost less than 2 cents to prevent a year of suffering on a factory farm by buying vegetarians.
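For concreteness, the whole simple calculation fits in a few lines (a sketch using the values plugged in above):

```python
# Simple calculator: years of factory-farm suffering averted per dollar.
pamphlets_per_dollar = 5
conversions_per_pamphlet = 0.02
veg_years_per_conversion = 3
animals_saved_per_veg_year = 255.16
years_lived_per_animal = 329.6 / 365     # weighted-average days, in years

years_averted_per_dollar = (pamphlets_per_dollar
                            * conversions_per_pamphlet
                            * veg_years_per_conversion
                            * animals_saved_per_veg_year
                            * years_lived_per_animal)

print(round(years_averted_per_dollar, 2))      # ~69.12 years per dollar
print(round(1 / years_averted_per_dollar, 3))  # ~$0.014 per year averted
```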

I don't want to make it sound like I'm beholden to this cost estimate or that this estimate is the "end all, be all" of vegan outreach.  Indeed, I share many of the skepticisms that others have expressed.  The simple calculation is... well... simple, and it needs some "beefing up", no pun intended.  Therefore, I also built a "complex calculator" that works on a much more complex formula[6], which is hopefully correct[7] and will provide a more accurate estimate.

 

The big concern with these surveys is bias.  The most frequently mentioned is social desirability bias: people saying they reduced their meat consumption just because they want to please the surveyor or look like a good person, which happens a lot more on surveys than we'd like.

To account for this, we'll have to figure out how inflated answers are because of this bias and then scale them down by that amount.  Nick Cooney says he has been reading studies suggesting that only about 25% to 50% of people who say they are vegetarian actually are, though I don't yet have the citations.  Thus, if we find that an advertisement creates two meat reducers, we'd scale that down to one reducer if we're expecting a 50% desirability bias.

 

The second bias that will be a problem for us is non-response bias: those who didn't change their diet are less likely to take the survey and therefore less likely to be counted.  This is especially true in the Facebook study, which only measures people who "liked" the page or requested a starter kit, both signs of pro-vegetarian affiliation.

We can balance this out by assuming everyone who didn't take the survey went on to have no behavior change whatsoever.  Nick Cooney's Facebook ad survey covers the 7% of people who liked the page (and then responded to the survey), and obviously those who liked the page are more likely to reduce their consumption.  I chose an optimistic value of 90%, which treats the survey as completely representative of the 7% who liked the page, plus a bit more for those who reduced their consumption but did not like the page.  My pessimistic value was 95%, assuming everyone who did not like the page went unchanged and assuming a small response bias among those who liked the page but chose not to take the survey.

For the pamphlets, however, there should be no response bias, since the surveyors sampled the entire population of passing college students at random and no one was reported to have refused the survey.

 

Additional People Are Being Reached

In the Facebook survey, those who said they reduced their meat consumption were also asked whether they influenced any friends or family to reduce their meat eating as well; on average, each reducer reported producing 0.86 additional reducers.

This figure seems very high, but I do strongly expect the figure to be positive -- people who reduce eating meat will talk about it sometimes, essentially becoming free advertisements.  I'd be very surprised if they ended up being a net negative.

 

Accounting for Product Elasticity

Another way to sharpen the estimate is to be more accurate about what happens when someone stops eating meat.  The change comes not from the refusal to eat itself, but from the reduced demand for meat, which leads to reduced supply.  Following the laws of economics, however, this reduction won't necessarily be one-for-one; it depends on the elasticity of demand and supply for the product.  With this number, we can find out how much meat production actually falls for every unit of meat not demanded.
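As a rough sketch of the underlying economics (my illustration, not the calculator's exact internals): under a standard linear partial-equilibrium approximation, the fraction of each unit of forgone demand that disappears from production is εs / (εs + |εd|), where εs is the supply elasticity and εd the demand elasticity. The numbers below are placeholders, not the sourced estimates linked next.

```python
def production_fall_per_unit_not_demanded(supply_elasticity, demand_elasticity):
    """Fraction of a one-unit demand reduction that becomes a production
    reduction, under a standard linear partial-equilibrium approximation."""
    return supply_elasticity / (supply_elasticity + abs(demand_elasticity))

# Placeholder elasticities for illustration only; the calculator uses the
# sourced estimates listed below.
print(production_fall_per_unit_not_demanded(2.0, -0.7))  # ~0.74
```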

My guesses in the calculator come from the following sources, some of which are PDFs: Beef #1, Beef #2, Dairy #1, Dairy #2, Pork #1, Pork #2, Egg #1, Egg #2, Poultry, Salmon, and all fish.

 

Putting It All Together

Implementing the formula on the calculator, we end up with an estimate of $0.03 to $36.52 to reduce one year of suffering on a factory farm based on the Facebook ad data and an estimate of $0.02 to $65.92 based on the pamphlet data.

Of course, many people are skeptical of these figures.  Perhaps surprisingly, so am I.  I'm trying to strike a balance between advocating vegan outreach as a very promising path for making the world a better place and not losing sight of the methodological hurdles that have not yet been cleared, while staying open to the possibility that I'm wrong about this.

The big methodological elephant in the room is that my entire cost estimate depends on having a plausible guess for how likely someone is to change their behavior based on seeing an advertisement.

I feel slightly reassured because:

  1. There are two surveys for two different media, and they both provide estimates of impact that agree with each other.
  2. These estimates also match anecdotes from leafleters about approximately how many people come back and say they went vegetarian because of a pamphlet.
  3. Even if we were to take the simple calculator and drop the "2% chance of getting four years of vegetarianism" assumption down to, say, a pessimistic "0.1% chance of getting one year" conversion rate, the estimate is still not too bad -- $0.91 to avert a year of suffering.
  4. More studies are on the way.  Nick Cooney is going to do a bunch more to study leaflets, and Xio Kikauka and Joey Savoie have publicly published some survey methodology [Google Docs].

That said, the possibility of desirability bias in the surveys remains a large concern as long as they continue to be run by overt animal welfare groups and to state clearly that they're looking for reductions in meat consumption.

Also, so long as surveys are only given to people who remember the leaflet or advertisement, there will be a strong possibility of response bias, as those who remember the ad are more likely to be the ones who changed their behavior.  We can attempt to compensate for these things, but we can only do so much.

Furthermore, and more worrying, there's a concern that the surveys are just measuring normal drift in vegetarianism, without any changes being attributable to the ads themselves.  For example, imagine that every year, 2% of people become vegetarians and 2% quit.  Surveying these people at random and not capturing those who quit will end up finding a 2% conversion rate.

How can we address these?  I think all three problems can be solved with a decent control group, whether it's a group of people that receives a leaflet not about vegetarianism or no leaflet at all.  Luckily, Kikauka and Savoie's survey intends to do just that.

Jeff Kaufman has a good proposal for a survey design I'd like to see implemented in this area.

 

Market Saturation and Diminishing Marginal Returns?

Another concern is that there are diminishing marginal returns to these ads.  As the critique goes, there are only so many people that will be easily swayed by the advertisement, and once all of them are quickly reached by Facebook ads and pamphlets, things will dry up.

Unlike the others, I don't think this criticism works well.  After all, even if it were true, it still would be worthwhile to take the market as far as it will go, and we can keep monitoring for saturation and find the point where it's no longer cost-effective.

However, I don't think the market has been tapped out yet at all.  According to Nick Cooney [PDF], there are still many opportunities in foreign markets and outside the young, college-kid demographic.

 

The Conjunction Fallacy?

The conjunction fallacy reminds us that, no matter what, the probability of event A alone can never be smaller than the probability of event A happening together with event B.  For example, the probability that Linda is a bank teller will always be larger than (or equal to) the probability that Linda is a bank teller and a feminist.

What does this mean for vegetarian outreach?  Well, for the simple calculator, we're estimating five factors.  In the complex calculator, we're estimating 50 factors.  Even if each factor is 99% likely to be correct, the chance that all five are right is 95%, and the chance that all 50 are right is only 60%.  If each factor is only 90% likely to be correct, the complex calculator will be right with a probability of only 0.5%!
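The arithmetic behind those percentages (assuming the factor errors are independent):

```python
print(round(0.99 ** 5, 3))    # 0.951 -- five factors at 99% each
print(round(0.99 ** 50, 3))   # 0.605 -- fifty factors at 99% each
print(round(0.90 ** 50, 4))   # 0.0052 -- fifty factors at 90% each, ~0.5%
```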

This is a cause for concern, but I don't think there's any way around it; it's just an inherent problem with estimation.  Hopefully it is mitigated by (1) using bounds rather than point estimates and (2) underestimates and overestimates canceling each other out.

 

Conversion and The 100 Yard Line

Something we should take into account that helps the case for this outreach rather than hurts it is the idea that conversions aren't binary -- someone can be pushed by the ad to be more likely to reduce their meat intake as opposed to fully converted.  As Brian Tomasik puts it:

Yes, some of the people we convince were already on the border, but there might be lots of other people who get pushed further along and don’t get all the way to vegism by our influence. If we picture the path to vegism as a 100-yard line, then maybe we push everyone along by 20 yards. 1/5 of people cross the line, and this is what we see, but the other 4/5 get pushed closer too. (Obviously an overly simplistic model, but it illustrates the idea.)

This would be either very difficult or outright impossible to capture in a survey, but is something to take into account.

 

Three Places I Might Donate Before Donating to Vegan Outreach

When all is said and done, I like the case for funding this outreach.  However, I think there are three other possibilities along these lines that I find more promising:

Funding the research of vegan outreach: There need to be more, higher-quality studies before one can feel confident in the cost-effectiveness of this outreach.  However, initial results are very promising, and the value of information from more studies is therefore very high.  Studies can also find ways to advertise more effectively, increasing the impact of each dollar spent.  Right now it looks like all ongoing studies are fully funded, but if there were opportunities to fund more, I would jump on them.

Funding Effective Animal Activism: EAA is an organization pushing for more cost-effectiveness in the domain of nonhuman animal welfare and working to evaluate which opportunities are best, GiveWell-style.  Giving them more money can potentially attract a lot more attention to this outreach, and get it more scrutiny, research, and money down the line.

Funding the Centre for Effective Altruism: Overall, it might just be better to get more people involved in the idea of giving effectively, and then get them interested in vegan outreach, among other things.

 

Conclusion

Vegan outreach is a promising, though not yet fully studied, method of outreach that deserves both excitement and skepticism.  Should one put money into it?  Overall, I'd take a guarded approach of putting in just enough money to help the organizations learn, develop better cost-effectiveness measurements and transparency, and become more effective.  It shouldn't be too long before this area is studied well enough to give us good confidence in how things are doing.

More studies should be developed that explore advertising vegetarianism in a wide variety of media in a wide variety of ways, with decent control groups.

I look forward to seeing how this develops.  Don't forget to play around with my calculator.

-

 

Footnotes

[1]: Cost effectiveness in years of suffering prevented per dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Years lived / animal).

Plugging in 80K's values... Cost effectiveness = (Pamphlets / dollar) * 0.01 to 0.03 * 25 * 100 * (Years lived / animal)

Filling in the gaps with my best guesses... Cost effectiveness = 5 * 0.01 to 0.03 * 25 * 100 * 0.90 = 112.5 to 337.5 years of suffering averted per dollar
I personally think 25 veg-years per conversion on average is possible but too high; my own guess ranges from 4 to 7.
[2]: I feel like there's an error in this calculation or that Kaufman might disagree with my assumptions of number of animals or days per animal, because I've been told before that these estimates with this method are supposed to be about an order of magnitude higher than other estimates.  However, I emailed Kaufman and he seemed to not find any fault with the calculation, though he does think the methodology is bad and the calculation should not be taken at face value.
[3]: I calculated the number of vegetarians by eyeballing about how many people said they no longer eat fish, which I'd guess only a vegetarian would be willing to give up.
[4]: 32 vegetarians / 104 people = 30.7%.  That population is 8.5% (7% for likes + 1.5% for the starter kit) of the overall population, leading to 2.61% (30.7% * 8.5%).
[5]: Formula is [(Number Meat Chickens)(Days Alive) + (Number Egg Chickens)(Days Alive) + (Number Beef Cows)(Days Alive) + (Number Milk Cows)(Days Alive) + (Number Fish)(Days Alive)] / (Total Number of Animals).  Plugging things in: [(28)(42) + (2)(365) + (0.125)(365) + (0.033)(1460) + (225)(365)] / 255.16 = 329.6 days.

[6]: Cost effectiveness in days of suffering prevented per dollar = (People Reached / Dollar + (People Reached / Dollar * Additional People Reached / Direct Reach * Response Bias * Desirability Bias)) * Years Spent Reducing * [the sum, over each product, of ((Percent Increasing * Increase Value) + (Percent Staying Same * Staying Same Value) + (Percent Decreasing Slightly * Decrease Slightly Value) + (Percent Decreasing Significantly * Decrease Significantly Value) + (Percent Eliminating * Elimination Value) + (Percent Never Ate * Never Ate Value)) * Normal Consumption * Elasticity * (Average Lifespan + Days of Suffering from Slaughter)] * Response Bias * Desirability Bias.

The sum runs over Beef, Dairy, Pig, Broiler Chicken, Egg, Turkey, Farmed Fish, and Sea Fish, each with its own percentages, consumption figure, elasticity, lifespan, and slaughter-suffering figure.  Sea Fish is the one exception: its term has no Average Lifespan component, only Days of Suffering from Slaughter.
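Because the per-product terms share an identical structure, the whole formula collapses into a loop over products. Here is a minimal sketch of that structure (my restructuring; every name and value is illustrative rather than taken from the calculator):

```python
# Sketch of the complex-calculator structure in footnote 6.
# All field names and numbers are illustrative, not the calculator's defaults.

def product_term(shares, values, consumption, elasticity, suffering_days):
    """One product's contribution: survey response shares weighted by the
    consumption-change value assigned to each response category."""
    weighted_change = sum(shares[k] * values[k] for k in values)
    return weighted_change * consumption * elasticity * suffering_days

# Consumption-change values per survey response category (illustrative).
VALUES = {"increase": -0.25, "same": 0.0, "decrease_slight": 0.25,
          "decrease_significant": 0.5, "eliminate": 1.0, "never_ate": 0.0}

def days_averted_per_dollar(products, reach_per_dollar, extra_reach_ratio,
                            years_reducing, response_bias, desirability_bias):
    # Direct reach plus word-of-mouth reach, the latter discounted for bias.
    reach = reach_per_dollar * (1 + extra_reach_ratio * response_bias * desirability_bias)
    # Sum the per-product terms; for sea fish, pass only slaughter suffering
    # (its term has no lifespan component).
    total = sum(product_term(p["shares"], VALUES, p["consumption"],
                             p["elasticity"], p["suffering_days"])
                for p in products)
    return reach * years_reducing * total * response_bias * desirability_bias
```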
[7]: Feel free to check the formula for accuracy and also check to make sure the calculator implements the formula correctly.  I worry that the added accuracy from the complex calculator is outweighed by the risk that the formula is wrong.

-

Edited 18 June to correct two typos and update footnote #2.

Also cross-posted on my blog.

-

Comments

Nick Cooney says he has been reading studies suggesting that only about 25% to 50% of people who say they are vegetarian actually are, though I don't yet have the citations. Thus, if we find that an advertisement creates two meat reducers, we'd scale that down to one reducer if we're expecting a 50% desirability bias.

This doesn't follow. The intervention is increasing the desirability bias, so the portion of purported vegetarians who are actually vegetarian is likely to change, in the direction of a lower proportion of true vegetarianism. It's plausible that 90%+ of the marginal purported vegetarians are bogus. Consider ethics and philosophy professors, who are significantly more likely to profess that eating meat is wrong:

There is no statistically detectable difference between the ethicists and either group of non-ethicists. (The difference between non-ethicists philosophers and the comparison professors was significant to marginal, depending on the test.)

Conclusion? Ethicists condemn meat-eating more than the other groups, but actually eat meat at about the same rate. Perhaps also, they're more likely to misrepresent their meat-eating practices (on the meals-per-week question and

...
Peter Wildeford
This is actually a really good point that makes me less confident in the effectiveness of vegetarianism advocacy.
[anonymous]
An additional point: Cattle have a bit less than 1/3rd the brain mass of humans, chickens about 1/40th, and fish are down more than an order of magnitude (more so by cortex). If you weight expected value by neurons, which is made plausible by thinking about things like split-brain patients and local computations in nervous systems, that will drastically change the picture. My quick back-of-the-envelope (which didn't take into account the small average size of the feed fish involved, and thus their reduced neural tissue) is that making this adjustment would cut the cost-effectiveness metric by a factor of at least 400 times, and plausibly 1000+ times. This reflects the fact that fish make up most of the life-days in the calculation, and also have comparatively tiny and simple nervous systems. Personally, I would pay more to ensure a painless death for a cow than for a small feed fish with orders of magnitude less neural capacity.
Qiaochu_Yuan
Ah, but now I can turn myself into a utility monster by artificially enlarging my brain! Game over.
Paul Crowley
We're trying to work out how to make progress on moral questions today, not trying to lay down a rule for all eternity that future agents can't game.
Qiaochu_Yuan
It was a joke.
Paul Crowley
Oops, sorry!
CarlShulman
Or by having kids. Or copying your uploaded self. Or re-engineering your nervous system in other ways...
CarlShulman
The bit about desirability bias, or the fact that the optimistic estimates involve claiming that vegetarian ads are vastly more effective than other kinds of moralized behavior-change ads with more accurate measurements of effect?
Peter Wildeford
Both points. The question "why should vegetarianism advocacy be so much more effective than get out the vote advocacy?" is a good point. Since the study quality for get out the vote advocacy is so much higher, we should expect vegetarianism advocacy to end up about the same. On the other hand, I do think vegetarianism advocacy is a lot more psychologically salient (pictures of suffering) than any case that can be made for voting. I've personally distributed some pro-voting pamphlets, and they're not very compelling at all.
Brian_Tomasik
Good points, Carl! Jonah Sinick actually made the GOTV argument to me on a prior occasion, citing your essay on the topic. One additional consideration is that nearly everyone knows about voting, but many people don't know about the cruelty of factory farms. This goes along with the low-hanging-fruit point. I would not be surprised if, after tempering the figures by this outside-view prior, it takes a few hundred dollars to create a new veg year. Even if so, that's at most 1-2 orders of magnitude different from the naive conservative estimate.
Peter Wildeford
This is something I've considered a lot, though chickens also dominate the calculations along with fish. I'm not currently sure if I value welfare in proportion to neuron count, though I might. I'd have to sort that out first. A question at this point I might ask is how good does the final estimate have to be? If AMF can add about 30 years of healthy human life for $2000 by averting malaria and a human is worth 40x that of a chicken, then we'd need to pay less than $1.67 to avert a year of suffering for a chicken (assuming averting a year of suffering is the same as adding a year of healthy life, which is a messy assumption).

I think some weighting for the sophistication of a brain is appropriate, but I think the weighting should be sub-linear w.r.t. the number of neurones; I expect that in simpler organisms, a larger share of the brain will be dedicated to processing sensory data and generating experiences. I would love someone to look into this to check if I'm right.

CarlShulman
I agree on that effect, I left out various complications. A flip side to that would be the number of cortex neurons (and equivalents). These decrease rapidly in simpler nervous systems. We don't object nearly as much to our own pains that we are not conscious of and don't notice or know about, so weighting by consciousness of pain, rather than pain/nociception itself, is a possibility (I think Brian Tomasik is into this).

A question at this point I might ask is how good does the final estimate have to be?

First, there are multiple applications of accurate estimates.

The unreasonably low estimates would suggest things like "I'm net reducing factory-farming suffering if I eat meat and donate a few bucks, so I should eat meat if it makes me happier or healthier sufficiently to earn and donate an extra indulgence of $5."

There are some people going around making the claim, based on the extreme low-ball cost estimates, that these veg ads would save human lives more cheaply than AMF by reducing food prices. With saner estimates, not so, I think.

Second, there's the question of flow-through effects, which presumably dominate in a total utilitarian calculation anyway, if that's what you're into. The animal experiences probably don't have much effect there, but people being vegetarian might have some, as could effects on human health, pollution, food prices, social movements, etc.

To address the total utilitarian question would require a different sort of evidence, at least in the realistic ranges.

Louie
Correct. I make this claim. If vegetarianism is that cheap, it's reasonable to bin it with other wastefully low-value virtues like recycling paper, taking shorter showers, turning off lights, voting, "staying informed", volunteering at food banks, and commenting on less wrong.
KatieHartman
This might be a minor point, but I don't think it's necessarily a given that one year of healthy, average-quality life offsets one year of factory farm-style confinement. If we were only discussing humans, I don't think anyone would consider a year under those conditions to be offset by a healthy year.

You could also reduce meat consumption by advertising good vegetarian meal recipes.

(Generally, the idea is that you can reduce eating meat even without explicitly promoting not eating meat.)

Peter Wildeford
Are you suggesting that one simply advertise the existence of good vegetarian recipes without mentioning surrounding reasons for reducing meat? This is already a strong component in existing advocacy, though none of it mentions recipes alone. Leading pamphlets like "Compassionate Choices" and "Even if You Like Meat" have recipe sections at the end of the book. Peter Singer's book Animal Liberation has recipes. Vegan Outreach has a starter guide section with lots of recipes. As far as I know, the videos used on the internet don't directly mention recipes, but do point to ChooseVeg.com which has tons of recipes and essentially advertises vegetarianism via a recipe-based argument. Another recent campaign, The Seven Day Vegan Challenge also advertises based on a lot of recipes.

Are you suggesting that one simply advertise the existence of good vegetarian recipes without mentioning surrounding reasons for reducing meat?

I agree with Viliam_Bur that this may be effective, and here's why.

I bake as a hobby (desserts — cakes, pies, etc.). I am not a vegetarian; I find moral arguments for vegetarianism utterly unconvincing and am not interested in reducing the suffering of animals and so forth.

However, I often like to try new recipes, to expand my repertoire, hone my baking skills, try new things, etc. Sometimes I try out vegan dessert recipes, for the novelty and the challenge of making something that is delicious without containing eggs or dairy or white sugar or any of the usual things that go into making desserts taste good.[1]

More, and more readily available, high-quality vegan dessert recipes would mean that I substitute more vegan dessert dishes for non-vegan ones. This effect would be quite negated if the recipes came bundled with admonitions to become vegan, pro-vegan propaganda, comments about how many animals this recipe saves, etc.; I don't want to be preached to, which I think is a common attitude.

[1] My other (less salient) motivation for learning to make vegan baked goods is to be prepared if I ever have vegan/vegetarian friends who can't eat my usual stuff (hasn't ever been the case so far, but it could happen).

Viliam_Bur
Thanks, this is what I tried to say. Reducing suffering is far, eating well is near. Also, if a book or a website comes with vegetarian/vegan propaganda, I would assume those people are likely to lie or exaggerate. No propaganda -- no suspicion. This may be just about vegetarians around me, but often people who are into vegetarianism are also into other forms of food limitations, so I often find their food unappealing. They act like an anti-advertisement to vegetarian food. (Perhaps there is an unconscious status motive here: the fewer people join them, the more noble they are. Which is not how an effective altruist should think.) On the other hand, when I go to some Indian or similar ethnic restaurant, I love the food. It tastes good, it has different components and good spice. I mean, what's wrong with using spice? If your goal is to reduce animal suffering, nothing. But if your goal is to have a weirdest diet possible (no meat, no cooking, no taste, everything compatible with the latest popular book or your horoscope), spice is usually on the list of forbidden components. In short, vegetarianism is often not about not eating animals. So if you focus on the "good meal (without meat)" part, and ignore the vegetarianism, you may win people like me. Even if I don't promise to give up meat completely, I can reduce its consumption simply because tasty meals without meat outcompete tasty meals with meat on my table.
amcknight
I think I've noticed this a bit since switching to a vegan(ish) diet 4 months ago. My guess is that once a person starts making diet restrictions, it becomes much easier to make diet restrictions, and once a person starts learning where their food comes from, it becomes easier to find reasons to make diet restrictions (even dumb reasons).
GordonAitchJay
What were the moral arguments for vegetarianism that you found utterly unconvincing? Where did you hear or read these? Are you interested in reducing the suffering of humans? If so, why?
Said Achmiz
The ones that say we should care about what happens to animals and what animals experience, including arguments from suffering. I've heard them in lots of places; the OP has himself posted an example — his own essay "Why Eat Less Meat?" Yeah. I think if you unpacked this aspect of my values, you'd find something like "sapient / self-aware beings matter" or "conscious minds that are able to think and reason matter". That's more or less how I think about it, though converting that into something rigorous is nontrivial. "Matter" here is used in a broad sense; I care about sapient beings, think that their suffering is wrong, and also consider such beings the appropriate reference class for "veil of ignorance" type arguments, which I find relevant and at least partly convincing. My caring about reducing human suffering has limits (in more than one dimension). It is not necessarily my highest value, and interacts with my other values in various ways, although I mostly use consequentialism in my moral reasoning and so those interactions are reasonably straightforward for the most part.
freeze
Do you think that animals can suffer? Or, what evolutionary difference do you think gives a difference in the ability to experience consciousness at all between humans and other animals with largely similar central nervous systems/brains?
Swimmer963 (Miranda Dixon-Luinenburg)
White sugar has animal products in it?
Said Achmiz
Not as such, no, but animal products are used in its manufacture: bone char is used in the sugar refining process (by some manufacturers, though not all), making it not ok for vegans.
Swimmer963 (Miranda Dixon-Luinenburg)
Wow. I learned something that I did not know before :)
A1987dM
I had heard that plenty of times, but I had never bothered to check whether or not that was just an urban legend.
Douglas_Knight
Have you experimented with baking with lard?
Said Achmiz
I have not. Christopher Kimball, in The Dessert Bible, comments that unless you can get leaf lard (the highest grade of lard, which comes from the fat around the pig's kidneys), using lard in dessert recipes is undesirable (results in the dough having a bacon-y taste). I don't think I can get leaf lard here in NYC, and even if I could it would probably be very expensive.
Douglas_Knight
NYC? of course you can. Or mail-order. But I would start with regular lard in the right recipes. On a different note, I usually substitute brown sugar for white for the taste.
Said Achmiz
Oh? Do you know any good places to get it in NYC? (Preferably Brooklyn, Manhattan also fine.) Yes, brown for white sugar is a good substitution sometimes. However it can partially mute the taste of other ingredients, like fresh fruit, so it's not always ideal. Also, brown sugar is definitely more expensive.
novalis
I would be shocked if Ottomanelli's on Bleecker didn't have leaf lard.
Said Achmiz
The internet tells me they don't carry it, but can special-order it. Mail-order, by the way, looks to come out to $10/lb. at least, if you can get it; very few places seem to carry it.
novalis
You might have to call them; they will special-order just about anything. The only thing I have failed to find there was rabbit ears (without buying the whole rabbit).
NoSignalNoNoise
Many non-vegetarians are suspicious of organizations that try to convince them to be vegetarian. It might be more effective to promote vegetarian recipes separately from "don't eat meat" efforts. Incidentally, I would love to know of more (not too difficult) ways to cook tofu.

I like to take the firmest tofu I can find (this is usually vacuum-packed, not water-packed) and cut it into slices or little cubes, and then pan-fry it in olive oil with a splash of lemon juice added halfway through till it's golden-brown and chewy. Then I put it in pasta (cubes) or on sandwiches (slices) - the sandwich kind is especially nice with spinach sauteed with cheese and hummus.

Raemon
I think that simply promoting good vegetarian meals would potentially reduce meat consumption among certain groups of people that would be less receptive to accompanying pro-vegetarian arguments. I think it should be part of a vegan-advocacy arsenal (i.e. you do a bunch of different sorts of flyers/ads/promotions, some of which is just recipe spreading without any further context) However, if one of your goals is to increase human compassion for nonhumans, then recipe spreading is dramatically less useful in the long term. One of the biggest arguments (among LW folk anyway) for animal advocacy is that not only are factory farms (and the wilderness) pretty awful, but that it'll hopefully translate into more humanely managed eco-systems, once we go off terraforming or creating virtual worlds. (It may turn out to be effective to get people to try out vegan recipes [without accompanying pro-vegan context] and then later on promote actual vegan ideals to the same people, after they've already taken small steps that indirectly bias themselves towards identifying with veganism)
freeze
Perhaps, but consider the radical flank effect: https://en.wikipedia.org/wiki/Radical_flank_effect Encouraging the desired end goal, the total cessation of meat consumption, may be more effective than just encouraging reduction even in the short to moderate run (certainly the long run) by moving the middle.

I'm really curious why all of the major animal welfare/rights organizations seem to be putting more emphasis on vegan outreach than on in-vitro meat/genetic modification research. I have a hard time imagining a scenario where any arbitrary (but large) contribution toward vegan outreach leads to greater suffering reduction than the same amount put toward hastening a more efficient and cruelty-free system for producing meat.

There seems to be, based just on my non-rigorous observations, significant overlap between the Vegan/Vegetarian communities and the "Genetically Modified Foods and big Pharma will turn your babies into money-forging cancer" theorists. Obviously not all Vegans are "chemicals=bad because nature" conspiracy theorists, and not all such conspiracy theorists are vegan, but the overlap seems significant. That vocal overlap group strikes me as likely to oppose lab-grown meat because it's unnatural, and then the conspiracy theories will begin. And the animal rights groups probably don't want to divide up their base any further.

(This comment felt harsh to me as I was writing it, even after I cut out other bits. The feeling I'm getting is very similar to political indignation. If this looks mind-killed to anyone else, please please correct me.)

KatieHartman
That seems plausible, though PETA already has a million-dollar prize for anyone who can mass-market an in-vitro meat product. Given their annual revenues (~$30 million) and the cost associated with that kind of project, it seems like they're going about it the wrong way. From a utilitarian perspective, wireheading livestock might be an even better option - though that probably would be perceived by most animal activists (and people in general) as vaguely dystopian.
[anonymous]
Does the technology to reliably and cheaply wirehead farmed animals now exist at all? Without claiming expertise, I find that unlikely.
johnlawrenceaspden
Opium in the feed? Cut their nerves? Some sort of computerised gamma-ray brain surgery? I'm certain that if there were a tiny financial incentive for agribusiness to do it then a way would swiftly be found. It's not so hard to turn humans into living vegetables. Some sorts of head trauma seem to do it. How hard can it be to make that reliable (or at least reasonably reliable) for cows? Least convenient world and all that: If we could prevent animal suffering by skilfully whacking calves over the head with a claw hammer, would that be a goal to which the rational vegan would aspire? It would be just as good as killing them, plus pleasure for the meat eaters. Also it would probably be possible to find people who'd enjoy doing it, so that's another plus.
Nornagest
Probably not that hard. Doing it without ruining the meat or at least reducing yields sounds harder to me, though -- muscles atrophy if they don't get used, and they don't get used if nothing's giving them commands. I'd also expect force-feeding a braindead animal to be more expensive and probably more conducive to health problems than letting it feed itself.
gwern
To continue the 'living vegetables' approach, one could point out that to keep a human in a coma alive and (somewhat) well will cost you somewhere from $500-$3k+. Per day. Even assuming that animals are much cheaper by taking the bottom of the range and then cutting it by an entire order of magnitude, the 1.5-3 year aging of standard cattle being butchered means $50 * 1.5 * 365 ≈ $27.4k extra expenses. That's some expensive meat.
Jabberslythe
So just kill all the farm animals painlessly now? Sure that sounds good. But if there will still be farm animal being raised then it seems there still is a problem. Or if you are just talking about ways of making slaughter painless for continuing to factory farm, that sounds better than nothing.
ialdabaoth
I find this interesting, because it seems to imply that people have an intuitive sense that eudaimonia applies to animals. I'll have to think about the consequences of this.
freeze
Do you know of any sources for this? In my also non-rigorous experience this is a totally unfounded misperception of veg*nism that people seem to have, founded on nothing but a few quack websites/anti-science blogs. Consider for instance /r/vegan over at reddit, which is in fact overwhelmingly pro-GMO and ethics rather than health focused. Of course, it is certainly true that the demographics of reddit or that subreddit are much different from that of veg*ns as a whole (or people as a whole). Lesswrong is an even more extreme case of such a limited demographic.
Peter Wildeford
A lot of animal welfare/rights organizations provide funding for in-vitro meat / fake meat, though they don't do much to advertise it. The idea is that these meat substitutes won't take off unless they create some demand for them. Vegan Outreach is one of the biggest funders of Beyond Meat and New Harvest.

I like Beyond Meat, but I think the praise for it has been overblown. For example, the Effective Animal Activism link you've provided says:

[Beyond Meat] mimics chicken to such a degree that renowned New York Times food journalist and author Mark Bittman claimed that it "fooled me badly in a blind tasting".

But reading Bittman's piece, the reader will quickly realize that the quote above is taken out of context:

It doesn’t taste much like chicken, but since most white meat chicken doesn’t taste like much anyway, that’s hardly a problem; both are about texture, chew and the ingredients you put on them or combine with them. When you take Brown’s product, cut it up and combine it with, say, chopped tomato and lettuce and mayonnaise with some seasoning in it, and wrap it in a burrito, you won’t know the difference between that and chicken.

I like soy meat alternatives just fine, but vegans and vegetarians are the market. People who enjoy the taste of meat and don't see the ethical problems with it don't want a relatively expensive alternative with a flavor they have to mask. There's demand for in-vitro meat because there's demand for meat. If you can make a product that t...

wedrifid
It seems overwhelmingly unlikely that the optimal method of meat production is to have it walking around eating plant matter and going 'Moo!'.

Especially for sheep. The training costs would be prohibitive.

A1987dM
I dunno -- look at all the brouhaha about genetically modified food.
TheOtherDave
That there's a population brouhahaing over GM food doesn't preclude the existence of a population eager to buy cheap tasty-enough meat. Indeed, I expect the populations overlap significantly.
Osiris
I predict a big drop in price soon after vat meat becomes sufficiently popular, due to money saved on dealing with useless organs and suffering, as well as a great big leap in profit for any farm that sells "natural cow meat." One is inherently efficient due to its simplifying of farming. The other is pretty, however ugly it is for the animals. I do worry about the numbers New Harvest gives, but in the long run, there is hope for this regardless of what the price is initially -- the potential for success in feeding humanity cheaply and well is just too great, in my opinion. Seems like I will be pushing "meat in a bucket" whenever possible, and I am not even that into making animals happy.
Jabberslythe
Well, if vegan/vegetarian outreach is particularly effective, then it may do more to develop lab meat than donating to lab meat causes directly (because there would be more people interested in this and similar technologies). Additionally, making people vegan/vegetarian may have a stronger effect in promoting anti-speciesism in general, which seems like it will be of larger overall benefit than just ending factory farming. This seems like it would happen because thoughts follow actions.
hylleddin
I've wondered about this as well. We can try to estimate New Harvest's effectiveness using the same methodology attempted for SENS research in the comment by David Barry here. I can't find New Harvest's 990 revenue reports, but its donations are routed through the Network for Good, which has a total annual revenue of 150 million dollars, providing an upper bound. An annual revenue of less than 1000 dollars is very unlikely, so we can use the geometric mean of $400 000 per year as an estimated annual revenue. There are about 500 000 minutes in a year, so right now $1 brings development just over a minute closer.* There are currently 24 billion chickens, 1 billion cattle, and 1 billion pigs. Assuming the current factory farm suffering rates as an estimate for suffering rates when artificial/substitute meat becomes available, and assuming (as the OP does) that animals suffer roughly equally, then bringing faux meat one minute closer prevents about (25 billion animals)/(500 000 minutes per year) = 50 animal years of suffering. If we assume that New Harvest has a 10% chance of success, $1 there prevents an expected 5 animal years of suffering, or, expressed as in the OP, preventing 1 expected animal year of suffering costs about 20 cents. So, these (very rough) estimates show about similar levels of effectiveness. *Assuming some set amount of money is necessary and the bottleneck, and you aren't donating enough for diminishing marginal returns.
freeze
There are already meat alternatives (seitan, tempeh, tofu, soy, etc.) which provide a meat-like flavor and texture. It's not immediately obvious that in-vitro meat is necessarily more effective than just promoting or refining existing alternatives. I suppose for long-run impact this kind of research may be orders of magnitude more useful though.

Something we should take into account that helps the case for this outreach rather than hurts it is the idea that conversions aren't binary -- someone can be pushed by the ad to be more likely to reduce their meat intake as opposed to fully converted.

Eh, don't forget that humans often hate other humans. Exposing an anti-vegetarian to vegetarian advertisements might induce them to increase their meat intake, and an annoying advocate may move someone from neutral to anti-vegetarian. This effect is very unlikely to be captured by surveys -- and so while it's reasonable to expect the net effect to be positive, it seems reasonable to lower estimates by a bit.

(Most 'political' moves have polarizing effects; you should expect supporters to like you more, and detractors to like you less, afterwards, which seems like a better model than everyone slowly moving towards vegetarianism.)

Eh, don't forget that humans often hate other humans. Exposing an anti-vegetarian to vegetarian advertisements might induce them to increase their meat intake, and an annoying advocate may move someone from neutral to anti-vegetarian.

If you take a non-vegetarian and make them more non-vegetarian, I don't think much is lost, because you never would have captured them anyway. I suppose they might eat more meat or try and persuade other people to become anti-vegetarian, but my intuition is that this effect would be really small.

But you're right that it would need to be considered.

I agree. In addition, I think people who claim that they will eat more meat after seeing a pamphlet or some other promotion for vegetarianism just feel some anger in the moment, but they'll likely forget about it within an hour or so. I can't see someone several weeks later saying to eirself, "I'd better eat extra meat today because of that pamphlet I read three weeks ago."

A1987dM
BTW, how come certain omnivores dislike vegetarians so much? All other things being equal, one fewer person eating meat will reduce its price, about which a meat-eater should be glad. (Similarly, why do certain straight men dislike gay men that much?)

If someone says that they are vegetarian for moral reasons, then it's an implicit (often explicit) claim that non-vegetarians are less moral, and therefore a status grab. If an omnivore doesn't want to become vegetarian nor to lose status, they need to aggressively deny the claim of vegetarianism being more moral.

Vaniver
Vegetarianism generally includes moral claims as well as preference claims, and responding negatively to conflicting morals is fairly common. Even responding negatively to conflicting preference claims is common. This seems to happen for both tribal reasons (different tastes in music) and possibly practical reasons (drinkers disliking non-drinkers at a party, possibly because of the asymmetric lowering of boundaries). Simple tribalism is one explanation. It also seems likely to me that homophobia is a fitness advantage for men in the presence of bisexual / homosexual men. There's also some evidence that, of men who claim to be straight, increased stated distaste for homosexuals is associated with increased sexual arousal by men, which fits neatly with the previous statement- someone at higher risk of pursuing infertile / socially costly relationships should be expected to spend more effort in avoiding them.
A1987dM
(Indeed, I was going to mention religion, but I forgot to. OTOH, I think I've met at least one otherwise quite contrarian person who was homophobic.) How so? By encouraging other men to pursue heterosexual relationships, I would increase the demand of straight women and the supply of straight men, which (so long as I'm a straight man myself and the supply of straight women isn't much larger than that of straight men) doesn't sound (from a selfish point of view) like a good thing. [The first time I wrote this paragraph it pattern-matched sexism because it talked about women as a commodity, so I've edited it so that it talks about both women and men as commodity, so if anything it now pattern-matches extreme cynicism; and I'm OK with that.] I've heard that cliché, but I had assumed that it was (at least in part) something someone made up to take the piss out of homophobes. Any links?
2Vaniver
I mean in the "revulsion to same sex attraction" sense, not the "opposed to gay rights" sense. If a man is receptive to the sexual interest of other men, that makes him less likely to have a relationship with a woman, and thus less likely to have children, and thus is a fitness penalty, and so a revulsion that protects against that seems like a fitness advantage. Here's one.
0A1987dM
I was thinking about straight men who dislike gay men whether or not they have been hit on by them. Thanks for the link. (Anyway... Is someone downvoting this entire subthread?)
2TheOtherDave
Are you asking more broadly why people in unmarked cases dislike being treated as though they were a marked case? Or have I overgeneralized, here?
0A1987dM
I'm asking more broadly why people dislike it when market demand for something they like decreases. (After reading the other replies, I guess that's at least partly because liking stuff with low market demand is considered low-status.)
4elharo
In at least some cases, network effects come into play. For example, if I prefer a non-mainstream operating system or computer hardware, there will be less support for my platform of choice. For instance, I may like Windows Phone but I can't get the apps for it that I can for the iPhone or Android. Furthermore, my employer may give me a choice of iPhone or Android but not Windows. Thus someone who prefers Windows Phone would want demand for Windows Phone to increase. Furthermore, supply is not always fixed. For products for which manufacturers can increase output to match demand, increasing demand may increase availability because more retailers will make them available. If economies of scale come into play, increasing demand may also decrease price.
0A1987dM
Good point, though in this particular example, I guess meat eaters aren't anywhere near few enough for these effects to be relevant.
2TheOtherDave
OK. I observe that both of the examples you provide (vegetarians and homosexuals) have a moral subtext in my culture that many other market-demand scenarios (say, a fondness for peanuts) lack. That might be relevant.
0A1987dM
(None of the vegetarians I've met seemed to be particularly bothered when other people ate meat, but as far as I can remember none of them was from the US¹, and from reading other comments in this thread I'm assuming it's different for certain American vegetarians.) ---------------------------------------- 1. Though I did meet a few from an English-speaking country (namely Australia), and there are a few Canadians I met for whom I can't remember off the top of my head whether they ate meat.
2TheOtherDave
Fair enough. If there isn't a moral subtext to vegetarianism in your culture, but omnivores there still dislike vegetarians, that's evidence against my suggestion.
2A1987dM
I have seen plenty of ‘jokes’ insulting vegetarians in Italian on Facebook; but then again, I've seen at least one about the metric system too, so maybe there are people who translate stuff from English no matter how little sense it makes in the target cultural context.
1Eugine_Nier
What army1987 said is not the same thing. Most of the vegetarians I know also don't seem particularly bothered when other people eat meat, but they will nonetheless give moral reasons if asked why they don't eat meat.
0TheOtherDave
In isolation, I completely agree. In context, though... well, I said that vegetarians have a moral subtext in my culture, and army1987 replied that the vegetarians they've met weren't bothered by others eating meat. I interpreted that as a counterexample... that is, as suggesting vegetarians don't have a moral subtext. If I misinterpreted, I of course apologize, but I can't come up with another interpretation that doesn't turn their comment into a complete non sequitur, which seems an uncharitable assumption. If you have a third option in mind for what they might have meant, I'd appreciate you elaborating it.
-5Eugine_Nier
-1Eugine_Nier
See also economies of scale.
1Eugine_Nier
This has to do with the way gay sex interacts with status.

Since all of my work output goes to effective altruism, I can't afford any optimization of my meals that isn't about health x productivity. This does sometimes make me feel worried about what happens if the ethical hidden variables turn out unfavorably. Assuming I go on eating one meat meal per day, how much vegetarian advocacy would I have to buy in order to offset all of my annual meat consumption? If it's on the order of $20, I'd pay $30 just to be able to say I'm 50% more ethical than an actual vegetarian.
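Eliezer's question can be made concrete with the essay's own figures. Below is a minimal sketch, reusing the estimates of 255 animals spared per vegetarian-year, 329.6 days lived per animal, and $0.02 to $65.92 per year of suffering averted; the assumption that one meat meal per day amounts to a third of a full meat diet is mine and purely illustrative.

```python
# Sketch of the offset arithmetic, reusing the essay's figures.
# The one-third fraction for "one meat meal per day" is an assumption.

ANIMALS_PER_VEG_YEAR = 255      # animals spared per vegetarian-year (essay's figure)
DAYS_LIVED_PER_ANIMAL = 329.6   # weighted average days lived per animal (essay's figure)
MEAL_FRACTION = 1 / 3           # assumed: one meat meal a day ~ a third of full consumption

# Suffering-years caused by a year of one-meat-meal-a-day eating
suffering_years = ANIMALS_PER_VEG_YEAR * DAYS_LIVED_PER_ANIMAL / 365 * MEAL_FRACTION

for dollars in (0.02, 65.92):   # the essay's optimistic and pessimistic cost estimates
    print(f"${dollars:.2f}/suffering-year -> offset costs ${suffering_years * dollars:,.2f}/year")
```

On these numbers the offset is pocket change (about $1.54 a year) at the optimistic end of the essay's range but runs to roughly $5,060 a year at the pessimistic end, so whether "$20" suffices depends entirely on which cost-effectiveness estimate you believe.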

Eliezer, is that the right way to do the maths? If a high-status opinion-former publicly signals that he's quitting meat because it's ethically indefensible, then others are more likely to follow suit, and the chain reaction continues. For sure, studies purportedly showing longer lifespans, higher IQs etc. of vegetarians aren't very impressive, because there are too many possible confounding variables. But what such studies surely do illustrate is that any health benefits of meat-eating vs vegetarianism, if they exist, must be exceedingly subtle. Either way, practising friendliness towards cognitively humble lifeforms might not strike AI researchers as an urgent challenge now. But isn't the task of ensuring that precisely such an outcome ensues from a hypothetical Intelligence Explosion right at the heart of MIRI's mission, as I understand it at any rate?

I think David is right. It is important that people who may have a big influence on the values of the future lead the way by publicly declaring and demonstrating that suffering (and pleasure) are important wherever they occur, whether in humans or mice.

-2Said Achmiz
I have to disagree on two points: 1. I don't think that we should take this thesis ("suffering (and pleasure) are important wherever they occur, whether in humans or mice") to be well-established and uncontroversial, even among the transhumanist/singularitarian/lesswrongian crowd. 2. More importantly, I don't think Eliezer or people like him have any obligation to "lead the way", set examples, or be a role model, except insofar as it's necessary for him to display certain positive character traits in order for people to e.g. donate to MIRI, work for MIRI, etc. (For the record, I think Eliezer already does this; he seems, as near as I can tell, to be a pretty decent and honest guy.) It's really not necessary for him to make any public declarations or demonstrations; let's not encourage signaling for signaling's sake.

Needless to say, I think 1 is settled. As for the second point: Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. that it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.

-2Said Achmiz
That has very little to do with whether Eliezer should make public declarations of things. Are you of the opinion that Eliezer does not share your view on this matter? (I don't know whether he does, personally.) If so, you should be attempting to convince him, I guess. If you think that he already agrees with you, your work is done. Public declarations would only be signaling, having little to do with maximizing good outcomes. As for the other thing — I should think the fact that we're having some disagreement in the comments on this very post, about whether animal suffering is important, would be evidence that it's not quite as uncontroversial as you imply. I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one. Perhaps you should write one? I'd be interested in reading it.
130Pablo

I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one.

I think we should be wary of reasoning that takes the form: "There is no good argument for x on Less Wrong, therefore there are likely no good arguments for x."

1Said Achmiz
Certainly we should, but that was not my reasoning. What I said was: I object to treating an issue as settled and uncontroversial when it's not. And the implication was that if this issue is not settled here, then it's likely to be even less settled elsewhere; after all, we do have a greater proportion of vegetarians here at Less Wrong than in the general population. "I will act as if this is a settled issue" in such a case is an attempt to take an epistemic shortcut. You're skipping the whole part where you actually, you know, argue for your viewpoint, present reasoning and evidence to support it, etc. I would like to think that we don't resort to such tricks here. If caring about animal suffering is such a straightforward thing, then please, write a post or two outlining the reasons why. Posters on Less Wrong have convinced us of far weirder things; it's not as if this isn't a receptive audience. (Or, if there are such posts and I've just missed them, link please. Or! If you think there are very good, LW-quality arguments elsewhere, why not write a Main post with a few links, with maybe brief summaries of each?)
5davidpearce
SaidAchmiz, you're right. The issue isn't settled: I wish it were so. The Transhumanist Declaration (1998, 2009) of the World Transhumanist Association / Humanity Plus does express a non-anthropocentric commitment to the well-being of all sentience. ["We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise" : http://humanityplus.org/philosophy/transhumanist-declaration/] But I wonder what percentage of lesswrongers would support such a far-reaching statement?
-3Said Achmiz
I certainly wouldn't, and here's why. Mentioning "non-human animals" in the same sentence and context along with humans and AIs, and "other intelligences" (implying that non-human animals may be usefully referred to as "intelligences", i.e. that they are similar to humans along the relevant dimensions here, such as intelligence, reasoning capability, etc.) reads like an attempt to smuggle in a claim by means of that implication. Now, I don't impute ignoble intent to the writers of that declaration; they may well consider the question settled, and so do not consider themselves to be making any unsupported claims. But there's clearly a claim hidden in that statement, and I'd like to see it made quite explicit, at least, even if you think it's not worth arguing for. That is, of course, apart from my belief that animals do not have intrinsic moral value. (To be truthful, I often find myself more annoyed with bad arguments than wrong beliefs or bad deeds.)
2Pablo
Those who have thought most about this issue, namely professional moral philosophers, generally agree (1) that suffering is bad for creatures of any species and (2) that it's wrong for people to consume meat and perhaps other animal products (the two claims that seem to be the primary subjects of dispute in this thread). As an anecdote, Jeff McMahan, a leading ethicist and political philosopher, mentioned at a recent conference that the moral case for vegetarianism was one of the easiest cases to make in all philosophy (a discipline where peer disagreement is pervasive). I mention this, not as evidence that the issue is completely settled, but as a reply to your speculation that there is even more disagreement in the relevant community outside Less Wrong. Frankly, I'm baffled by your insistence that the relevant arguments must be found in the Less Wrong archives. There's plenty of good material out there, which I'm happy to recommend if you are interested in reading what others who have thought about these issues much more than either of us have written on the subject.
1Said Achmiz
Citation needed. :) It's interesting that you use Jeff McMahan as an example. In his essay "The Meat Eaters", McMahan makes some excellent arguments; his replies to the "playing God" and "against Nature" objections, for instance, are excellent examples of clear reasoning and argument, as is his commentary on the sacredness of species. (As an aside, when McMahan started talking about the hypothetical modification or extinction of carnivorous species, I immediately thought of Stanislaw Lem's Return From the Stars, where the human civilization of a century hence has chemically modified all carnivores, including humans, to be nonviolent, evidently having found some way to solve the ecological issues.) But one thing he doesn't do is make any argument for why we should care about the suffering of animals. The moral case, as such, goes entirely unmade; McMahan only alludes to its obviousness once or twice. If he thinks it's an easy case to make, perhaps he should go ahead and make it! (Maybe he does elsewhere? If so, a quick googling does not turn it up. Links, as always, would be appreciated.) He just takes "animal suffering is bad" as an axiom. Well, fair enough, but if I don't share that axiom, you wouldn't expect me to be convinced by his arguments, yes? I don't think the relevant community outside Less Wrong is professional moral philosophers. I meant something more like... "intellectuals/educated people/technophiles/etc. in general", and then even more broadly than that, "people in general". However, this is a peripheral issue, so I'm OK with dropping it. In case it wasn't clear (sorry!), yes, I am interested in reading good material elsewhere (preferably in the form of blog posts or articles rather than entire books or long papers, at least as summaries); if you have some to recommend, I'd appreciate it. I just think that if such very convincing material exists, you (or someone) should post it (links or, even better, a topic summary/survey) on Less Wrong, such that...
5Pablo
(FWIW, I'm not the one downvoting your comments, and I think it's a shame that the debate has become so "politicized".) Here are a couple of relevant survey articles:
* Jeff McMahan, "Animals", in The Blackwell Companion to Applied Ethics, Oxford: Blackwell, 2002, pp. 525-536.
* Stuart Rachels, "Vegetarianism", in The Oxford Handbook of Animal Ethics, Oxford: Oxford University Press, 2012, pp. 877–905.
On the seriousness of suffering, see perhaps:
* Thomas Nagel, "Pleasure and Pain", in The View from Nowhere, Oxford: Oxford University Press, 1986, pp. 156-162.
Here are some quotes about pain from contemporary moral philosophers which I believe are fairly representative. (I don't have any empirical studies to back this up, other than my impression from interacting with this community for several years, and my inability to find even a single quote that supports the contrary position.)
* Guy Kahane, "The Sovereignty of Suffering: Reflections on Pain's Badness", 2004, p. 2.
* Jamie Mayerfeld, Suffering and Moral Responsibility, Oxford, 2002, p. 111.
* John Broome, "More Pain or Less?", Analysis, vol. 56, no. 2 (April 1996), p. 117.
* Michael Huemer, Ethical Intuitionism, Basingstoke, Hampshire, 2005, p. 250.
* James Rachels, "Animals and Ethics", in Edward Craig (ed.), Routledge Encyclopedia of Philosophy, London, 1998, sect. 3.
2Said Achmiz
Thank you! This is an impressive array of references, and I will read at least some of them as soon as I have time. I very much appreciate you taking the time to collect and post them. Thank you. The downvotes don't worry me too much, at least partly because I continue to be unsure about what down/upvotes even mean on this site. (It seems to be an emotivist sort of yay/boo thing? Not that there's necessarily anything terribly wrong with that, it just doesn't translate to very useful data, especially in small quantities.) To anyone who is downvoting my comments: I'd be curious to hear your reasons, if you're willing to explain them publicly. Though I do understand if you want to remain anonymous.
0Said Achmiz
So, I've just finished reading this one. To say that I found it unconvincing would be quite the understatement. For one, Rachels seems entirely unwilling to even take seriously any objections to his moral premises or argument (he, again, takes the idea that we should care about animal suffering as given). He dismisses the strongest and most interesting objections outright; he selects the weakest objections to rebut, and condescendingly adds that "Resistance to [such] arguments usually stems from emotion, not reason. ... Moreover, they [opponents of his argument] want to justify their next hamburger." Rachels then launches into a laundry list of other arguments against eating factory farmed animals, not based on a moral concern for animals. It seems that factory farming is bad in literally every way! It's bad for animals, it's bad for people, it causes diseases, eating meat is bad for our health, and more, and more. (I'm always wary of such claims. When someone tells you thing A has bad effect X, you listen with concern; when they add that oh yeah, it also has bad effect Y! And Z! And W! ... and then you discover that their political/ideological alignment is "opponent of thing A"... suspicion creeps in. Can eating meat really just be universally bad, bad in every way, irredeemably bad so as to be completely unmotivated? Well, there's no law of nature that says that can't be the case (e.g. eating uranium probably has no upside), but I'm inclined to treat such claims with skepticism, and, in any case, I'd prefer each aspect of meat-eating to be argued against separately, such that I can evaluate them individually, not be faced with a shotgun barrage of everything at once.) Incidentally, I find the "factory farming is detrimental to local human populations" argument much more convincing than any of the others, certainly far more so than the animal-suffering argument. If the provided facts are accurate, then that's the most salient case for stopping the practice...
5wedrifid
"Partially hydrogenated vegetable oils prevent heart disease and improve lipid profile". To the extent that it is true that it is trivial to find someone claiming the opposite of every nutritional claim it is trivial to find people who are clearly just plain wrong. (The position you are taking is far too strong to be tenable.)
0Said Achmiz
The opposite claim of "Food X causes problem Y" is not necessarily "Food X reduces problem Y". "It is not the case that (or "there is no evidence that") Food X causes problem Y" also counts as "opposite". That's how I meant it: every time someone says "X causes Y", there's some other study that concludes that eh, actually, it's not clear that X causes Y, and in fact probably doesn't.
4davidpearce
SaidAchmiz, one difference between factory farming and the Holocaust is that the Nazis believed in the existence of an international conspiracy of the Jews to destroy the Aryan people. Humanity's only justification for exploiting and killing nonhuman animals is that we enjoy the taste of their flesh. No one believes that factory-farmed nonhuman animals have done "us" any harm. Perhaps the parallel with the (human) Holocaust fails for another reason. Pigs, for example, are at least as intelligent as prelinguistic toddlers; but are they less sentient? The same genes, neural processes, anatomical pathways and behavioural responses to noxious stimuli are found in pigs and toddlers alike. So I think the burden of proof here lies on meat-eating critics who deny any equivalence. A third possible reason for denying the parallel with the Holocaust is the issue of potential. Pigs (etc.) lack the variant of the FOXP2 gene implicated in generative syntax. In consequence, pigs will never match the cognitive capacities of many but not all adult humans. The problem with this argument is that we don't regard, say, humans with infantile Tay-Sachs who lack the potential to become cognitively mature adults as any less worthy of love, care and respect than healthy toddlers. Indeed the Nazi treatment of congenitally handicapped humans (the "euthanasia" program) is often confused with the Holocaust, for which it provided many of the technical personnel. A fourth reason to deny the parallel with the human Holocaust is that it's offensive to Jewish people. Yet this uncomfortable parallel has been drawn by some Jewish writers; the comparison to "an eternal Treblinka", for example, was made by Isaac Bashevis Singer, the Jewish-American Nobel laureate. Apt comparison or otherwise, creating nonhuman-animal-friendly intelligence is going to be an immense challenge.
1Said Achmiz
It seems to me like a far more relevant justification for exploiting and killing nonhuman animals is "and why shouldn't we do this...?", which is the same justification we use for exploiting and killing ore-bearing rocks. Which is to say, there's no moral problem with doing this, so it needs no "justification". I make it clear in this post that I don't deny the equivalence, and don't think that very young children have the moral worth of cognitively developed humans. (The optimal legality of Doing Bad Things to them is a slightly more complicated matter.) Well, I certainly do. Eh...? Expand on this, please; I'm quite unsure what you mean here.
3davidpearce
SaidAchmiz, to treat exploiting and killing nonhuman animals as ethically no different from "exploiting and killing ore-bearing rocks" does not suggest a cognitively ambitious level of empathetic understanding of other subjects of experience. Isn't there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I'd have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.
8Watercressed
No. Someone who cares about human-level beings but not animals will care about the plight of humans in the face of an AI, but there's no reason they must care about the plight of animals in the face of humans, because they didn't care about animals to begin with. It may be that the best construction for a friendly AI is some kind of complex perspective taking that lends itself to caring about animals, but this is a fact about the world; it falls on the is side of the is-ought divide.
3Said Achmiz
What the heck does this mean? (And why should I be interested in having it?) Wikipedia says: If that's how you're using "sentience", then: 1) It's not clear to me that (most) nonhuman animals have this quality; 2) This quality doesn't seem central to moral worth. So I see no irony. If you use "sentience" to mean something else, then by all means clarify. There are some other problems with your formulation, such as: 1) I don't "belong to" MIRI (which is the organization you refer to, yes?). I have donated to them, which I suppose counts? 2) Your description of their mission, specifically the implied comparison of an FAI with humans, is inaccurate. You use a lot of terms ("cognitively ambitious", "cognitively humble", "empathetic understanding", "Godlike capacity for perspective-taking" (and "the computational equivalent" thereof)) that I'm not sure how to respond to, because it seems like either these phrases are exceedingly odd ways of referring to familiar concepts, or else they are incoherent and have no referents. I'm not sure which interpretation is dictated by the principle of charity here; I don't want to just assume that I know what you're talking about. So, if you please, do clarify what you mean by... any of what you just said.
-1A1987dM
Huh, no, you don't normally go out of your way to do stuff unless there's something in it for you or someone else.
3Said Achmiz
Well, first of all, this is just false. People do things for the barest, most trivial of reasons all the time. You're walking along the street and you kick a bottle that happens to turn up in your path. What's in it for you? In the most trivial sense you could say that "I felt like it" is what's in it for you, but then the concept rather loses its meaning. In any case, that's a tangent, because you mistook my meaning: I wasn't talking about the motivation for doing something. I (and davidpearce, as I read him) was talking about the moral justification for eating meat. His comment, under my interpretation, was something like: "Exploiting and killing nonhuman animals carries great negative moral value. What moral justification do we have for doing this? (i.e. what positive moral value counterbalances it?) None but that we enjoy the taste of their flesh." (Implied corollary: and that is inadequate moral justification!) To which my response was, essentially, that morally neutral acts do not require such justification. (And by implication, I was contradicting davidpearce by claiming that killing and eating animals is a morally neutral act.) If I smash a rock, I don't need to justify that (unless the rock was someone's property, I suppose, which is not the issue we're discussing). I might have any number of motivations for performing a morally neutral act, but they're none of anyone's business, and certainly not an issue for moral philosophers. (Did you really not get all of this intended meaning from my comment...? If that's how you interpreted what I said, shouldn't you be objecting that smashing ore-bearing rocks is not, in fact, unmotivated, as I would seem to be implying, under your interpretation?)
4RobertWiblin
"Public declarations would only be signaling, having little to do with maximizing good outcomes." On the contrary, trying to influence other people in the AI community to share Eliezer's (apparent) concern for the suffering of animals is very important, for the reason given by David. "I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one." a) Less Wrong doesn't contain the best content on this topic. b) Most of the posts disputing whether animal suffering matter are written by un-empathetic non-realists, so we would have to discuss meta-ethics and how to deal with meta-ethical uncertainty to convince them. c) The reason has been given by Pablo Stafforini - when I directly experience the badness of suffering, I don't only perceive that suffering is bad for me (or bad for someone with blonde hair, etc), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering). d) Even if there is some uncertainty about whether animal suffering is important, that would still require that it be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or signal through my actions to potentially influential people that doing so is OK.
0[anonymous]
This is an interesting argument, but it seems a bit truncated. Could you go into more detail?
0Said Achmiz
Where is the best content on this topic, in your opinion? Eh? Unpack this, please.

If it's on the order of $20, I'd pay $30 just to be able to say I'm 50% more ethical than an actual vegetarian.

That's not exactly true, since advocating vegetarianism has more effects than simply reducing the consumption of meat. For one thing, it alters how people think about and live their lives. If that $30 of spending produces a certain amount of human suffering (say, from self-induced guilt over eating meat), then your ethicalness isn't as high as calculated.

9Peter Wildeford
Allegedly, vegetarian diets are healthier, but I don't know if that's true. I also don't know how much of a productivity drain, if any, a vegetarian diet would be; I've personally noticed no difference. ~ It depends on what the cost-effectiveness ends up looking like, but $30 sounds fine to me. Additionally or alternatively, you could eat larger animals instead of smaller animals (i.e. more beef and less chicken) so as to do less harm with each meal.
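Peter's larger-animals point can be quantified, at least roughly. Here is a minimal sketch; the lifespans, meat yields, and serving size below are illustrative assumptions of mine, not figures from the essay or the comment.

```python
# Rough days-of-farmed-life per meal, chicken vs. beef.
# All numbers are illustrative assumptions.

SERVING_KG = 0.2  # assumed serving size

animals = {
    # name:    (days lived, edible kg yielded) -- both assumed
    "chicken": (42,    1.0),
    "beef":    (450, 200.0),
}

for name, (days_lived, edible_kg) in animals.items():
    meals_per_animal = edible_kg / SERVING_KG
    print(f"{name}: ~{days_lived / meals_per_animal:.2f} farmed days per meal")
```

On these assumptions a chicken-based meal accounts for roughly twenty times as many days of farmed-animal life as a beef-based one (8.4 versus 0.45), which is the direction of the suggestion, before any adjustment for differences in living conditions or probability of sentience.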
3Mestroyer
If the ethical hidden variables turn out unfavorably, you have more to make up for than that. HPJEV thinking animals are not sentient has probably lost the world more than one vegetarian-lifetime.
1Eliezer Yudkowsky
This seems unlikely to be a significant fraction of my impact upon the summum bonum, for good or ill.
4Raemon
I'm actually fairly concerned about the possibility of you influencing the beliefs of AI researchers, in particular. I'm not sure if it ends up mattering for FAI, if executed as currently outlined. My understanding is that the point is that it'll be able to predict the collective moral values of humanity-over-time (or safely fail to do so), and your particular guesses about ethical-hidden-variables shouldn't matter. But I can imagine plausible scenarios where various ethical-blind-spots on the part of the FAI team, or people influenced by it, end up mattering a great deal in a pretty terrifying way. (Maybe people in that cluster decide they have a better plan, and leave and do their own thing, where ethical-blind-spots/hidden-variables matter more). This concern extends beyond vegetarianism and doesn't have a particular recommended course of action beyond "please be careful about your moral reasoning and public discussion thereof", which presumably you're doing already, or trying to.
9Eliezer Yudkowsky
FAI builders do not need to be saints. No sane strategy would be set up that way. They need to endorse principles of non-jerkness enough to endorse indirect normativity (e.g. CEV). And that's it. Morality is not sneezed into AIs by contact with the builders.
8Mestroyer
Haven't you considered extrapolating the volition of a single person if CEV for many people looks like it won't work out, or will take significantly longer? Three out of three non-vegetarian LessWrongers (my best model for MIRI employees, present and future, aside from you) I have discussed it with say they care about something besides sentience, like sapience. Because they have believed that that's what they care about for a while, I think it has become their true value, and CEV based on them alone would not act on concern for sentience without sapience. These are people who take MWI and cryonics seriously, probably because you and Robin Hanson do and have argued in favor of them. And you could probably change the opinion of these people, or at least people on the road to becoming like them, with a few blog posts. Because in HPMOR you used the word "sentience", which is typically used in sci fi to mean sapience (instead of using something like "having consciousness"), I am worried you are sending people down that path by letting them think HPJEV draws the moral-importance line at sapience, besides my concern that you are showing others that a professional rationalist thinks animals aren't sentient.
2Raemon
I did finally read the 2004 CEV paper recently, and it was fairly reassuring in a number of ways. (The "Jews vs Palestinians cancel each other but Martin Luther King and Gandhi add together" thing sounded... plausible, but a little too cutely elegant for me to trust at first glance.) I guess the question I have is (this is less relevant to the current discussion but I'm pretty curious): in the event that CEV fails to produce a useful outcome (i.e. values diverge too much), is there a backup plan that doesn't hinge on someone's judgment? (Is there a backup plan, period?)
0[anonymous]
Indirect Normativity is more a matter of basic sanity than non-jerky altruism. I could be a total jerk and still realize that I wanted the AI to do moral philosophy for me. Of course, even if I did this, the world would turn out better than anyone could imagine, for everyone. So yeah, I think it really has more to do with being A) sane enough to choose Indirect Normativity, and B) mostly human. Also, I would regard it as a straight-up mistake for a jerk to extrapolate anything but their own values. (Or a non-jerk for that matter). If they are truly altruistic, the extrapolation should reflect this. If they are not, building altruism or egalitarianism in at a basic level is just dumb (for them, nice for me). (Of course then there are arguments for being honest and building in altruism at a basic level like your supporters wanted you to. Which then suggests the strategy of building in altruism towards only your supporters, which seems highly prudent if there is any doubt about who we should be extrapolating. And then there is the meta-uncertain argument that you shouldn't do too much clever reasoning outside of adult supervision. And then of course there is the argument that these details have low VOI compared to making the damn thing work at all. At which point I will shut up.)
2Decius
Wouldn't that $30 come from your work output that is currently going to effective altruism?
3Eliezer Yudkowsky
Arguably worth it for $30 of reduced guilt, bragging rights and twisted, warped enjoyment of ethical weirdness.
-2Decius
Using the worst estimate, that would mean that it's arguable that a 1 in 50 chance of killing a child under 5 is worth that much reduced guilt, bragging rights, and twisted, warped enjoyment of ethical weirdness. I'd call you a monster, but I'd totally take actions which fail to prevent the death of an entire kid I'd never meet anyway if I could do so without suffering any risk of being blamed and could get a warped enjoyment of ethical weirdness. We monsters.

Several people have been attempting to reductio my pro-human point of view, so I'll do the same back to the pro-animal people here: how simple is the simplest animal you're willing to assign moral worth to? Are you taking into account meta-uncertainty about the moral worth of even very simple animals? (What about living organisms outside of the animal kingdom, like bacteria? Viruses?) If you don't care about organisms simple enough that they don't suffer, does it seem "arbitrary" to you to single out a particular mental behavior as being the mental behavior that signifies moral worth? Does it seem "mindist" to you to single out having a particular kind of mind as being the thing that signifies moral worth?

If you calculated that assigning even very small moral worth to a simple but sufficiently numerous organism leads to the conclusion that the moral worth of non-human organisms on Earth strongly outweighs, in aggregate, the moral worth of humans, would you act on it (e.g. by making the world a substantially better place for some bacterium by infecting many other animals, such as humans, with it)?

If you were the only human left on Earth and you couldn't find enough non-meat to survive on, would you kill yourself to avoid having to hunt to survive?

How do you resolve conflicts among organisms (e.g. predatorial or parasitic relationships)?

how simple is the simplest animal you're willing to assign moral worth to?

I don't value animals per se, it is their suffering I care about and want to prevent. If it turns out that even the tiniest animals can suffer, I will take this into consideration. I'm already taking insects or nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.

If you don't care about organisms simple enough that they don't suffer, does it seem "arbitrary" to you to single out a particular mental behavior as being the mental behavior that signifies moral worth?

No, it seems completely non-arbitrary to me. Only sentient beings have a first-person point of view, only for them can states of the world be good or bad. A stone cannot be harmed in the same way a sentient being can be harmed. Introspectively, my suffering is bad because it is suffering, there is no other reason.

If you calculated that assigning even very small moral worth to a simple but sufficiently numerous organism leads to the conclusion that the moral worth of non-human organisms on Earth strongly outweighs, in aggregate, the moral worth of humans, would you act on it?

...

I'm already taking insects or nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.

A priori, it seems that the moral weight of insects would either be dominated by their massive numbers or by their tiny capacities. It's a narrow space where the two balance and you get a non-negligible but still-not-overwhelming weight for insects in a utility function. How did you decide that this was right?
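The structure of this objection is easy to see in a toy expected-value calculation. The sketch below uses made-up inputs throughout; nothing in it is anyone's actual estimate.

```python
# Toy aggregate-weight calculation: population x P(sentience) x intensity.
# All inputs are made up for illustration.

HUMANS = 7e9
INSECTS = 1e18  # order-of-magnitude guess at the global insect population

def aggregate_weight(population, p_sentient, intensity):
    return population * p_sentient * intensity

human_total = aggregate_weight(HUMANS, 1.0, 1.0)

for p_sentient, intensity in [(0.01, 1e-9), (0.1, 1e-6), (0.2, 1e-3)]:
    ratio = aggregate_weight(INSECTS, p_sentient, intensity) / human_total
    print(f"P(sentient)={p_sentient}, intensity={intensity:g}: insect/human ratio = {ratio:,.4f}")
```

Modest shifts in either input swing the ratio across roughly seven orders of magnitude here, from negligible (0.0014) to overwhelming (about 28,571), which is why landing in the "non-negligible but not dominant" zone looks suspiciously convenient.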

4Jabberslythe
I think there are good arguments for suffering not being weighted by number of neurons, and if you assign even a 10% chance to that being the case, you end up with insects (and maybe nematodes and zooplankton) dominating the utility function because of their overwhelming numbers. Having said that, ways of increasing the well-being of these may be quite a bit different from increasing it for larger animals. In particular, because so many of them die within the first few days of life, their averaged life quality seems like it would be terrible. So reducing the populations looks like the current best option. There may be good instrumental reasons for focusing on less controversial animals and hoping that they promote the kind of antispeciesism that spills over to concern about insects and does work for improving similar situations in the future.
9Pablo
For what it's worth, here are the results of a survey that Vallinder and I circulated recently. 85% of expert respondents, and 89% of LessWrong respondents, believe that there is at least a 1% chance that insects are sentient, and 77% of experts and 69% of LessWrongers believe there is at least a 20% chance that they are sentient.
4Jabberslythe
Very interesting. What were they experts in? And how many people responded?
5Pablo
They were experts in pain perception and related fields. We sent the survey to about 25 people, of whom 13 responded. Added (6 November, 2015): If there is interest, I can reconstruct the list of experts we contacted. Just let me know.
3Lukas_Gloor
Yes, my current estimate for that is less than 1%, but this is definitely something I should look into more closely. This has been on my to-do list for quite a while already. Another thing to consider is that insects are a diverse bunch. I'm virtually certain that some of them aren't conscious, see for instance this type of behavior. OTOH, cockroaches or bees seem to be much more likely to be sentient.
1Jabberslythe
Yes. Bees and cockroaches both have about a million neurons, compared with maybe 100,000 for most insects.
1TheOtherDave
Can you summarize the properties you look for when making these kinds of estimates of whether an insect is conscious/sentient/etc.? Or do you make these judgments based on more implicit/instinctive inspection?
1Jabberslythe
I mostly do it by thinking about what I would accept as evidence of pain in more complex animals and seeing if it is present in insects. Complex pain behavior and evolutionary and functional homology relating to pain are things to look for. There is quite a bit of research on complex pain behavior in crabs by Robert Elwood. I'd link his site but it doesn't seem to be up right now; you should be able to find the articles, though. Crabs have 100,000 neurons, which is around what many insects have. Here is a PDF of a paper that finds that a bunch of common human mind-altering drugs affect crawfish and fruit flies.
0TheOtherDave
Thanks.
0Lukas_Gloor
It is quite implicit/instinctive. The problem is that without having solved the problem of consciousness, there is also uncertainty about what you're even looking for. Nociception seems to be a necessary criterion, but it's not sufficient. In addition, I suspect that consciousness' adaptive role has to do with the weighting of different "possible" behaviors, so there has to be some learning behavior or variety in behavioral subroutines. I actually give some credence to extreme views like Dennett's (and also Eliezer's if I'm informed correctly), which state that sentience implies self-awareness, but my confidence for that is not higher than 20%. I read a couple of papers on invertebrate sentience and I adjusted the expert estimates downwards somewhat because I have a strong intuition that many biologists are too eager to attribute sentience to whatever they are studying (also, it is a bit confusing because opinions are all over the place). Brian Tomasik lists some interesting quotes and material here. And regarding the number of neurons thing, there I'm basically just going by intuition, which is unfortunate so I should think about this some more.
4davidpearce
Ice9, perhaps consider uncontrollable panic. Some of the most intense forms of sentience that humans undergo seem to be associated with a breakdown of meta-cognitive capacity. So let's hope that what it's like to be an asphyxiating fish, for example, doesn't remotely resemble what it feels like to be a waterboarded human. I worry that our intuitive dimmer-switch model of consciousness, i.e. more intelligent = more sentient, may turn out to be mistaken.
0TheOtherDave
OK, thanks for clarifying.
0Lukas_Gloor
Good point, there is reason to expect that I'm just assigning numbers in a way that makes the result come out convenient. Last time I did a very rough estimate, the expected suffering of insects and nematodes (given my subjective probabilities) came out around half the expected suffering of all decapods/amphibians-and-larger wild animals. And then wild animals outnumber farm animals by around 2-3 orders of magnitude in terms of expected suffering, and farm animals outnumber humans by a large margin too. So if I just cared about current suffering, or suffering on earth only, then "non-negligible" would indeed be an understatement for insect suffering. However, what worries me most is not the suffering that is happening on earth. If space colonization goes wrong or even non-optimally, the current amount of suffering could be multiplied by orders of magnitude. And this might happen even if our values improve. Consider the case of farmed animals: humans have probably never cared as much for the welfare of animals as they do now, but at the same time, we have never caused as much direct suffering to animals as we do now. If you primarily care about reducing the absolute amount of suffering, then whatever lets the amount of sentience skyrocket is a priori very dangerous.
3Qiaochu_Yuan
Is the blue-minimizing robot suffering if it sees a lot of blue? Would you want to help alleviate that suffering by recoloring blue things so that they are no longer blue?

I don't see the relevance of this question, but judging by the upvotes it received, it seems that I'm missing something.

I think suffering is suffering, no matter the substrate it is based on. Whether such a robot would be sentient is an empirical question (in my view anyway; it has recently come to my attention that some people disagree with this). Once we solve the problem of consciousness, it will turn out that such a robot either is conscious or it isn't. If it is conscious, I will try to reduce its suffering. If the only way to do that would involve doing "weird" things, I would do weird things.

2Qiaochu_Yuan
The relevance is that my moral intuitions suggest that the blue-minimizing robot is morally irrelevant. But if you're willing to bite the bullet here, then at least you're being consistent (although I'm no longer sure that consistency is such a great property of a moral system for humans).
120Raemon

1) I am okay with humanely raised farm meat (I found a local butcher shop that sources from farms I consider ethical)

2) If I didn't have access to civilization, I would probably end up hunting to survive, although I'd try to do so as rarely and humanely as was possible given my circumstances. (I'm only like 5% altruist; I just try to direct that altruism as effectively as possible, and if push comes to shove I'm a primal animal that needs to eat. I'm skeptical of people who claim otherwise.)

3) I'm currently okay with eating insects, mussels, and similar simplish animals, whose lack of sentience I can make pretty good guesses about. (If insects do turn out to have sentience, that's a pretty inconvenient world to have to live in, morally.)

4) I'm approximately average-preference-utilitarian. I value there being more creatures with more complex and interesting capacities for preference satisfaction (this is arbitrary and I'm fine with that). If I had to choose between humans and animals, I'd choose humans. But that's not the choice offered to humans RE vegetarianism: what's at stake is not humanity and complex relationships/art/intellectual endeavors; it's pretty straightforward...

5Swimmer963 (Miranda Dixon-Luinenburg)
This is pretty much the case for me. I was vegetarian for a while in high school, oddly enough less for reducing-suffering ethical reasons than for "it costs fewer resources to produce enough plants to feed the world population than to produce enough meat, as animals have to be fed plants and are a low-efficiency conversion of plant calories, so in order to better use the planet's resources, everyone should eat more plants and less meat." I consistently ended up with low iron and B12. It's possible to get enough iron, B12, and protein as a vegetarian, but you do have to plan your meals a bit more carefully (i.e. always have beans with rice so you get complete protein) and possibly eat foods that you don't like as much. Right now I cook about one dish with meat in it per week, and I haven't had any iron or B12 deficiency problems since graduating high school 4 years ago. In general, I optimize food for low cost as well as health value and ethics, but if in-vitro meat became available, I think this is valuable enough in the long run that I would be willing to "subsidize" its production and commercialization by paying higher prices.
-1maia
Oddly, this sentence is more or less exactly true for me as well. Only on LessWrong...
4wedrifid
That reasoning does not seem to be either unique to or particularly prevalent on lesswrong.
0maia
Fair enough. I've never encountered it elsewhere, myself.
2wedrifid
(Typically it is expressed as an additional excuse/justification for the political and personal position being taken for unrelated reasons.)
2Said Achmiz
Could you (very briefly) expand on this, or even just give a link with a reasonably accessible explanation? I am curious.
3MTGandP
From the American Dietetic Association: http://www.ncbi.nlm.nih.gov/pubmed/19562864
0Said Achmiz
Interesting, thank you.
2MugaSofer
Well, considering the existence of healthy vegetarians, it seems clear that we evolved to be at least capable of surviving in a low-meat environment. I don't have any sources or anything, and I'm pretty lazy, but I've been vegetarian since childhood, and never had any health problems as a result AFAICT.
5Said Achmiz
I am entirely willing to take your word on this, but you know what they say about "anecdote" and declensions thereof. In this case specifically, one of the few things that seem to be reliably true about nutrition is that "people are different, and what works for some may fail or be outright disastrous for others". In any case, Raemon seemed to be making a weaker claim than "vegetarianism has no serious health downsides". "Healthy portions of meat amount to far less than the 32 oz steak a day implied by some anti-vegetarian doomsayers" is something I'm completely willing to grant.
2MugaSofer
Fair enough.
2elharo
Considering the existence of healthy vegetarians, it seems clear that we evolved to be at least capable of surviving in a low-meat environment supported by modern agriculture that produces large quantities of concentrated non-meat protein in the form of tofu, eggs, whey protein, beans, and the like. This may be a happy accident. Are there any vegetarian hunter-gatherer societies?
5TheOtherDave
Wouldn't these be "gatherer societies" pretty much definitionally?
2wedrifid
(Unless there are Triffids!)
1TheOtherDave
Obligatory Far Side reference
0Nornagest
I've been having a hell of a time finding trustworthy cites on this, possibly because there are so many groups with identity stakes in the matter: obesity researchers and advocates, vegetarians, and paleo diet adherents all have somewhat conflicting interests in ancestral nutrition. That said, this survey paper describes relatively modern hunter-gatherer diets ranging from 1% vegetable (the Nunamiut of Alaska) to 74% vegetable (the Gwi of Africa), with a mean somewhere around one third; no entirely vegetarian hunter-gatherers are described. This one describes societies subsisting on up to 90% gathered food (I don't know whether or not this is synonymous with "vegetable"), but once again no exclusively vegetarian cultures, and a mean around 30%. I should mention by way of disclaimer that modern forager cultures tend to live in marginal environments, and these numbers might not reflect the true ancestral proportions. And, of course, that this has no bearing either way on the ethical dimensions of the subject.
2Raemon
I'm having trouble finding... any kind of dietary information that isn't obviously politicized (in any direction) right now. But basically, when people think of a "serving" of meat, they imagine a large hunk of steak, when in fact a serving is more like the size of a deck of cards. A healthy diet has enough things going on in it besides meat that removing meat shouldn't feel like it's gutting out your entire source of pleasure from food.
1Said Achmiz
Ah. Yeah, I don't eat meat in huge chunks or anything. But meat sure is delicious, and comes in a bunch of different formats. Obviously removing meat would not totally turn my diet into a bleak, gray desert of bland gruel; I don't think anyone would claim that. But it would make it meaningfully less enjoyable, on the whole.
2Qiaochu_Yuan
This all seems pretty reasonable (except that I don't think the validity of a human preference has much to do with how difficult it is for non-humans to have the same preference).
-3MugaSofer
This fact seems to outweigh the rest of your comment.
7Vaniver
Bugs, both true and not, are most definitely part of the animal kingdom.
0Qiaochu_Yuan
Whoops. Edited.
4Xodarap
It doesn't seem like you're really criticizing "pro-animal people" - you're just critiquing utilitarianism. (e.g. "Is it arbitrary to state that suffering is bad?" "What if you could help others only at great expense to yourself?") Supposing one does accept utilitarian principles, is there any reason why we shouldn't care about the suffering of non-humans?
-1Qiaochu_Yuan
This is half a criticism and half a reflection of arguments that have been used against my position that I think are problematic. To the extent that you think these arguments are problematic, I probably agree. Resources spent on alleviating the suffering of non-humans are resources that aren't spent on alleviating the suffering of humans, which I value a lot more.
1elharo
That's a false dichotomy. Resources that stop being spent on alleviating the suffering of non-humans do not automatically translate into resources that are spent on alleviating the suffering of humans. Nor is it the case that there are insufficient resources in the world today to eliminate most human suffering. The issue there is purely one of distribution of wealth, not gross wealth.
0Qiaochu_Yuan
Yes, but they're less available. Maybe I triggered the wrong intuition with the word "resources." I had in mind resources like the time and energy of intelligent people, not resources like money. I think it's plausible to guess that time and energy spent on one altruistic cause really does funge directly against time and energy spent on others, e.g. because of good-deed-for-the-day effects.
1Xodarap
Why? (Keeping in mind that we have agreed the basic tenets of utilitarianism are correct: pain is bad etc.)
2Qiaochu_Yuan
Oh. No. Human pain is bad. The pain of sufficiently intelligent animals might also be bad. Fish pain and under is irrelevant.
8Pablo
There is nothing inconsistent about valuing the pain of some animals, but not of others. That said, I find the view hard to believe. When I reflect on why I think pain is bad, it seems clear that my belief is grounded in the phenomenology of pain itself, rather than in any biological or cognitive property of the organism undergoing the painful experience. Pain is bad because it feels bad. That's why I think pain should be alleviated irrespective of the species in which it occurs.
0Qiaochu_Yuan
I don't share these intuitions. Pain is bad if it happens to something I care about. I don't care about fish.
4Pablo
I don't care about fish either. I care about pain. It just so happens that fish can experience pain.
-1Nornagest
Truthfully, I'm not even sure I believe pain is bad in the relevant sense. It's certainly something I'd prefer to avoid under most circumstances, but when I think about it in detail there always ends up being a "because" in there: because it monopolizes attention, because in sufficient quantity it can thoroughly screw up your motivational and emotional machinery, because it's often attached to particular actions in a way that limits my ability to do things. It doesn't feel like a root-level aversion to my reasoning self: when I've torn a ligament and can't flex my foot in a certain way without intense stabbing agony, I'm much more annoyed by the things it prevents me from doing than by the pain it gives me, and indeed I remember the former much better than the latter. I haven't thought this through rigorously, but if I had to take a stab at it right now I'd say that pain is bad in roughly the same way that pleasure is good: in other words, it works reasonably well as a rough experiential pointer to the things I actually want to avoid, and it does place certain constraints on the kind of life I'd want to live, but I'd expect trying to ground an entire moral system in it to give me some pretty insane results once I started looking at corner cases.
-3Xodarap
You probably don't want to draw the line at fish.
0Qiaochu_Yuan
What point are you trying to make with that link?
2Swimmer963 (Miranda Dixon-Luinenburg)
Probably that fish don't seem to be hugely different from amphibians/reptiles, birds, and mammals in terms of the six substitute-indicators-for-feeling-pain, and so it's hard to say whether their pain experience is different. I would agree that fish pain is less relevant than human pain (they have a central nervous system, yes, but less of one, and a huge part of what makes human pain bad is the psychological suffering associated with it).
2Qiaochu_Yuan
My claim was that I don't care about fish pain, not that fish pain is too different from human pain to matter. Rather, fish are too different from humans to matter.
1MugaSofer
Could you expand on this idea?
0Swimmer963 (Miranda Dixon-Luinenburg)
Fair enough. I think "too X to matter" is a complex concept, though.
-4Xodarap
How is the statement "fish and humans feel pain approximately equally" different from the statement "we should care about fish and human pain approximately equally?"
1Shmi
You and I feel pain approximately equally, but I care about mine a lot more than about yours.
1MugaSofer
Do you consider this part of morality? I mean, I personally experience selfish emotions, but I usually, y'know, try to override them?
6Nornagest
Most people probably wouldn't consider that moral as such (though they'd likely be okay with it on pragmatic grounds), but the more general idea of treating some people's pain as more significant than others' is certainly consistent with a lot of moral systems. Common privileged categories: friends, relatives, children, the weak or helpless, people not considered evil.
2Shmi
It's perfectly moral for me to be selfish to some degree, yes. I cannot care about others if I don't care about myself. You might work differently, but utter unselfishness seems like an anomaly.
2wedrifid
It also seems like a lie (to the self or to others).
0Xodarap
Fair enough. To restate but with different emphasis: "we should care about fish and human pain approximately equally?"
1Qiaochu_Yuan
"I care about X's pain" is mostly a statement about X, not a statement about pain. I don't care about fish and I care about humans. You may not share this moral preference, but are you claiming that you don't even understand it?
-2Xodarap
No, I have a lot of biases like this: the halo effect makes me think that humans' ability to do math makes our suffering more important, "what you see is all there is" allows me to believe that slaughterhouses which operate far away must be morally acceptable, and so forth. Anyway, fish suffering isn't a make-or-break decision. People very frequently have the opportunity to choose a bean burrito over a chicken one (or even a beef burrito over a chicken one), and from what Peter has presented here it seems like this is an extremely effective way to reduce suffering.
2Xodarap
I may be misunderstanding you, but I thought you were suggesting that there is a non-arbitrary set of physiological features that vertebrates share but fish don't. I was pointing out that this doesn't seem to be the case.
0Qiaochu_Yuan
No, I'm suggesting that I don't care about fish.
1MugaSofer
Can't speak for all vegetarians/pro-animal-rights types, but I personally discount based on complexity (or intelligence, or whatever). That's not the same as discounting simpler creatures altogether - at least not when we're discussing, say, pigs. (At what point do you draw the line to start valuing creatures, by the way? Chimpanzees? Children? Superintelligent gods? Just curious, this isn't a reductio.)
4Qiaochu_Yuan
Right, but what's the discount rate? What does your discount rate imply is the net moral worth of all mosquitoes on the planet? All bacteria? I'm not sure where my line is either. It's hovering around pigs and dolphins at the moment.
0MugaSofer
I'm not sure what the discount rate is, which is largely why I asked if you were sure about where the line was. I mostly go off intuition for determining how much various species are worth, so if you throw scope insensitivity into the mix...
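Neither comment pins down a concrete discount function, so here is a minimal sketch of what one could look like, assuming, purely for illustration, that moral weight scales linearly with neuron count; every species figure below is a rough placeholder rather than a claim from this thread.

```python
# Toy "complexity discount": moral weight proportional to neuron count.
# This is an illustrative assumption, not any commenter's endorsed view;
# all figures are rough order-of-magnitude placeholders.
HUMAN_NEURONS = 8.6e10  # commonly cited estimate for a human brain

NEURONS = {
    "human": 8.6e10,
    "pig": 2.2e9,
    "chicken": 2.2e8,
    "mosquito": 2.0e5,
}

def weight(species: str) -> float:
    """Moral weight relative to one human under the linear-in-neurons rule."""
    return NEURONS[species] / HUMAN_NEURONS

for species in NEURONS:
    print(f"{species:>8}: {weight(species):.2e} human-equivalents")

# Aggregates then hinge on population guesses: with, say, 1e16 mosquitoes
# (another placeholder), their summed weight is 1e16 * 2.3e-6, about 2e10
# human-equivalents -- exactly the sort of output that makes people doubt
# a purely linear rule, or look for a steeper discount.
```

Whether the right discount is linear, much steeper, or a hard threshold is precisely what this exchange leaves unresolved.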
-1Eugine_Nier
Would you apply said discount rate intraspecies in addition to interspecies? By the way, one question I always wanted to ask a pro-animal-rights type: would you support a program for the extinction/reduction of the populations of predatory animals on the grounds that they cause large amounts of unnecessary suffering to their prey?
6Lukas_Gloor
Yes. Assuming that prey populations are kept from skyrocketing (e.g. through the use of immunocontraception) since that too would result in large amounts of unnecessary suffering.
6davidpearce
Eugine, in answer to your question: yes. If we are committed to the well-being of all sentience in our forward light-cone, then we can't simultaneously conserve predators in their existing guise. (cf. http://www.abolitionist.com/reprogramming/index.html) Humans are not obligate carnivores; and the in vitro meat revolution may shortly make this debate redundant; but it's questionable whether posthuman superintelligence committed to the well-being of all sentience could conserve humans in their existing guise either.
2elharo
This is, sadly, not a hypothetical question. This is an issue wildlife managers face regularly. For example, do you control the population of Brown-headed Cowbirds in order to maintain or increase the population of Bell's Vireo or Kirtland's Warbler? The answer is not especially controversial. The only questions are which methods of predator control are most effective, and what unintended side effects might occur. However, these are practical, instrumental questions, not moral ones. Where this comes into play with the public is in the conflict between house cats and birds. In particular, the establishment of feral cat colonies causes conflicts between people who prefer non-native, vicious but furry and cute predators and people who prefer native, avian, non-pet species. Indeed, this is one of the problems I have with many animal rights groups such as the Humane Society. They're not pro-animal rights, just pro-pet-species rights. A true concern for animals needs to treat animals as animals, not as furry baby human substitutes. We need to value the species as a whole, not just the individual members; and we need to value their inherent nature as predators and prey. A Capuchin Monkey living in a zoo safe from the threat of Harpy Eagles leads a life as limited and restricted as a human living in Robert Nozick's Experience Machine. While zoos have their place, we should not seek to move all wild creatures into safe, sterile environments with no predators, pain, or danger any more than we would move all humans into isolated, AI-created virtual environments with no true interaction with reality.
3davidpearce
Elharo, I take your point, but surely we do want humans to enjoy healthy lives free from hunger and disease and safe from parasites and predators? Utopian technology promises similar blessings to nonhuman sentients too. Human and nonhuman animals alike typically flourish best when free-living but not "wild".
0elharo
I'm not quite sure what you're saying here. Could you elaborate or rephrase?
2KatieHartman
Why? Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn't we?
1elharo
We're treading close to terminal values here. I will express some aesthetic preference for nature qua nature. However I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible, and I see no justification for anthropocentric limits on such a preference. Absent strong reasons otherwise, "do no harm" and "careful, limited action" should be the default position. The best we can do for animals that don't have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat. Where we have destroyed it, attempt to restore it as best we can, or protect what remains. Focus on the species, not the individual. We have neither the knowledge nor the will to protect individual, non-pet animals. When you ask, "Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn't we?" it's not clear to me whether you're referring to why we shouldn't move humans into virtual boxes or why we shouldn't move animals into virtual boxes, or both. If you're talking about humans, the answer is because we don't get to make that choice for other humans. I for one have no desire to live my life in a Nozick box, and will oppose anyone who tries to put me in one while I'm still capable of living a normal life. If you're referring to animals, the argument is similar though more indirect. Ultimately humans should not take it upon themselves to decide how another species lives. The burden of proof rests on those who wish to tamper with nature, not those who wish to leave it alone.

"We're treading close to terminal values here. I will express some aesthetic preference for nature qua nature."

That strikes me as inconsistent, assuming that preventing suffering/minimizing disutility is also a terminal value. In those terms, nature is bad. Really, really bad.

"I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible."

It seems arbitrary to exclude the environment from the cluster of factors that go into living "the lives they choose." I choose to not live in a hostile environment where things much larger than me are trying to flay me alive, and I don't think it's too much of a stretch to assume that most other conscious beings would choose the same if they knew they had the option.

"Absent strong reasons otherwise, 'do no harm' and 'careful, limited action' should be the default position. The best we can do for animals that don't have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat."

Taken with this...


"That strikes me as inconsistent, assuming that preventing suffering/minimizing disutility is also a terminal value."

Two values being in conflict isn't necessarily inconsistent; it just means that you have to make trade-offs.

2elharo
An example of the importance of predators I happened across recently: "Safer Waters", Alisa Opar, Audubon, July-August 2013, p. 52. This is just one example of the importance of top-level predators for everything in the ecosystem. Nature is complex and interconnected. If you eliminate some species because you think they're mean, you're going to damage a lot more.
4nshepperd
This is an excellent example of how it's a bad idea to mess with ecosystems without really knowing what you're doing. Ideally, any intervention should be tested on some trustworthy (i.e. more-or-less complete and experimentally verified) ecological simulations to make sure it won't have any catastrophic effects down the chain. But of course it would be a mistake to conclude from this that keeping things as they are is inherently good.
4KatieHartman
I'd just like to point out that (a) "mean" is a very poor descriptor of predation (neither its severity nor its connotations re: motivation do justice to reality), and (b) this use of "damage" relies on the use of "healthy" to describe a population of beings routinely devoured alive well before the end of their natural lifespans. If we "damaged" a previously "healthy" system wherein the same sorts of things were happening to humans, we would almost certainly consider it a good thing.
1Richard_Kennaway
If "natural lifespans" means what they would have if they weren't eaten, it's a tautology. If not, what does it mean? The shark's "natural" lifespan requires that it eats other creatures. Their "natural" lifespan requires that it does not.
0KatieHartman
Yes, I'm using "natural lifespan" here as a placeholder for "the typical lifespan assuming nothing is actively trying to kill you." It's not great language, but I don't think it's obviously tautological. And yes, the shark's "natural" lifespan requires that it eat other creatures; my question is whether that's a system that works for us.
2Richard_Kennaway
We can say, "Evil sharks!" but I don't feel any need to either exterminate all predators from the world, nor to modify them to graze on kelp. Yes, there's a monumental amount of animal suffering in the ordinary course of things, even apart from humans. Maybe there wouldn't be in a system designed by far future humans from scratch. But radically changing the one we live in when we hardly know how it all works -- witness the quoted results of overfishing shark -- strikes me as quixotic folly.
0KatieHartman
It strikes me as folly, too. But "Let's go kill the sharks, then!" does not necessarily follow from "Predation is not anywhere close to optimal." Nowhere have I (or anyone else here, unless I'm mistaken) argued that we should play with massive ecosystems now. I'm very curious why you don't feel any need to exterminate or modify predators, assuming it's likely to be something we can do in the future with some degree of caution and precision.
2Richard_Kennaway
That sort of intervention is too far in the future for me to consider it worth thinking about. People of the future can take care of it then. That applies even if I'm one of those people of the far future (not that I expect to be). Future-me can deal with it, present-me doesn't care or need to care what future-me decides. In contrast, smallpox, tuberculosis, cholera, and the like are worth exterminating now, because (a) unlike the beautiful big fierce animals, they're no loss in themselves, (b) it doesn't appear that their loss will disrupt any ecosystems we want to keep, and (c) we actually can do it here and now.
0Said Achmiz
There's something about this sort of philosophy that I've wondered about for a while. Do you think that deriving utility from the suffering of others (or, less directly, from activities that necessarily involve the suffering of others) is a valid value? Or is it intrinsically invalid? That is, if we were in a position to reshape all of reality according to our whim, and decided to satisfy the values of all morally relevant beings, would we also want to satisfy the values of beings that derive pleasure/utility from the suffering of others, assuming we could do so without actually inflicting disutility/pain on any other beings? And more concretely: in a "we are now omnipotent gods" scenario where we could, if we wanted to, create for sharks an environment where they could eat fish to their hearts' content (and these would of course be artificial fish without any actual capacity for suffering, unbeknownst to the sharks) — would we do so? Or would we judge the sharks' pleasure from eating fish to be an invalid value, and simply modify them to not be predators? The shark question is perhaps a bit esoteric; but if we substitute "psychopaths" or "serial killers" for "sharks", it might well become relevant at some future date.
2KatieHartman
I'm not sure what you mean by "valid" here - could you clarify? I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn't inferior to a world where beings are deriving the same amount of utility from some other activity that doesn't affect other beings, all else held equal. However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don't do anything that actually causes suffering.
0Said Achmiz
Sure. By "valid" I mean something like "worth preserving", or "to be endorsed as a part of the complex set of values that make up human-values-in-general". In other words, in the scenario where we're effectively omnipotent (for this purpose, at least), and have decided that we're going to go ahead and satisfy the values of all morally relevant beings — are we going to exclude some values? Or exclude some beings on the basis of their values? For example: should we, in such a scenario, say: "we'll satisfy the values of all the humans, except the psychopaths/sharks/whoever; we don't find their values to be worth satisfying, so they're going to be excluded from this"? I would guess, for instance, that few people here would say: yeah, along with satisfying the values of all humans, let's also satisfy the values of all the paperclip maximizers. We don't find paperclip maximization to be a valid value, in that sense. So my question to you is where you stand on all of that. Are there invalid values? Would you, in fact, try to satisfy Clippy's values as well as those of humans? If not, how about sharks? Psychopaths? Etc.? Ok. Actually, I could take that as an answer to at least some of my above questions, but if you want to expand a bit on what I ask in this post, that would be cool. Well, sure. But let's keep this in the least convenient possible world, where such non-fundamental issues are somehow dealt with.
1elharo
There's a lot here, and I will try to address some specific points later. For now, I will say that personally I do not espouse utilitarianism for several reasons, so if you find me inconsistent with utilitarianism, no surprise there. Nor do I accept the complete elimination of all suffering and maximization of pleasure as a terminal value. I do not want to live, and don't think most other people want to live, in a matrix world where we're all drugged to our gills with maximal levels of dopamine and fed through tubes. Eliminating torture, starvation, deprivation, deadly disease, and extreme poverty is good; but that's not the same thing as saying we should never stub our toe, feel some hunger pangs before lunch, play a rough game of hockey, or take a risk climbing a mountain. The world of pure pleasure and no pain, struggle, or effort is a dystopia, not a utopia, at least in my view. I suspect that giving any one single principle exclusive value is likely a path to a boring world tiled in paperclips. It is precisely the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living in. There is no single principle, not even maximizing pleasure and minimizing pain, that does not lead to dystopia when it is taken to its logical extreme and all other competing principles are thrown out. We are complicated and contradictory beings, and we need to embrace that complexity; not attempt to smooth it out.
0davidpearce
Elharo, which is more interesting? Wireheading - or "the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living in"? Yes, I agree, the latter certainly sounds more exciting; but "from the inside", quite the reverse. Wireheading is always enthralling, whereas everyday life is often humdrum. Likewise with so-called utilitronium. To humans, utilitronium sounds unimaginably dull and monotonous, but "from the inside" it presumably feels sublime. However, we don't need to choose between aiming for a utilitronium shockwave and conserving the status quo. The point of recalibrating our hedonic treadmill is that life can be fabulously richer - in principle orders of magnitude richer - for everyone without being any less diverse, and without forcing us to give up our existing values and preference architectures. (cf. "The catechol-O-methyl transferase Val158Met polymorphism and experience of reward in the flow of daily life.": http://www.ncbi.nlm.nih.gov/pubmed/17687265) In principle, there is nothing to stop benign (super)intelligence from spreading such reward pathway enhancements across the phylogenetic tree.
2KatieHartman
I've heard this posed as a "gotcha" question for vegetarians/vegans. The socially acceptable answer is the one that caters to two widespread and largely unexamined assumptions: that extinction is just bad, always, and that nature is just generally good. If the person questioned responds in any other way, he or she can be written off right there. Who the hell thinks nature is a bad thing and genocide is a good thing? But once you get past the idea that nature is somehow inherently good and that ending any particular species is inherently bad, there's not really any way to justify allowing the natural world to exist the way it does if you can do something about it.
0Jiro
It's a "gotcha" question for vegetarians because vegetarians in the real world are seldom vegetarians in a vacuum; their vegetarianism is typically associated and based on a cloud of other ideas that include respect for nature. In other words, it's not a "gotcha" because you would write off the vegetarian who believes it, it's because believing it would undermine his own core, but illogical and unstated, motives.
0A1987dM
The former effect would generally be a heckuva lot smaller than the latter.
1Shmi
I'm parsing this as follows: I don't have a good intuition on whose suffering matters, and unbounded utilitarianism is vulnerable to the Repugnant Conclusion, so I will pick an obvious threshold (humans) and decide not to care about other animals until and unless a reason to care arises. EDIT: the Schelling point for the caring threshold seems to be shifting toward progressively less intelligent (but still cute and harmless) species as time passes.
5Qiaochu_Yuan
Have you read The Narrowing Circle?
5Shmi
I tried. But it's written in extreme Gwernian: well researched, but long, rambling and without a decent summary upfront. I skipped to the (also poorly written) conclusion, missing most of the arguments, and decided that it's not worth my time. The essay would be right at home as a chapter in some dissertation, though. Leaving aside the dynamics of the Schelling point, did the rest of my reply miss the mark?
3Qiaochu_Yuan
What I mostly got out of it is that there are two big ways in which the circle of things with moral worth has shrunk rather than grown throughout history: it shrunk to exclude gods, and it shrunk to exclude dead people. I'm not sure what your comment was intended to be, but if it was intended to be a summary of the point I was implicitly trying to make, then it's close enough.
1MugaSofer
... are you including chimpanzees there, by any chance?
0TheOtherDave
"Cute" I'll give you. "Harmless" I'm not sure about. That is, it's not in the least bit clear to me that I can reliably predict, from species S being harmful and cute, that the Schelling point you describe won't/hasn't shifted so as to include S on the cared-about side. For clarity: I make no moral claims here about any of this, and am uninterested in the associated moral claims, I'm just disagreeing with the bare empirical claim.
-2Eugine_Nier
I think it's simply a case of more animals moving into the harmless category as our technology improves.
0elharo
The value of a species is not merely the sum of the values of the individual members of the species. I feel a moral obligation to protect and not excessively harm the environment without necessarily feeling a moral obligation to prevent each gazelle from being eaten by a lion. There is value in nature that includes the predator-prey cycle. The moral obligation to animals comes from their worth as animals, not from a utilitarian calculation to maximize pleasure and minimize pain. Animals living as animals in the wild (which is very different from animals living on a farm or as pets) will experience pleasure and pain; but even the ones too low on the complexity scale to feel pleasure and pain have value and should have a place to exist. I don't know if an Orange Roughy feels pain or pleasure or not; but either way it doesn't change my belief that we should stop eating them to avoid the extinction of the species. The non-hypothetical, practical issue at hand is not whether we make the world a better place for some particular species, but whether we stop making it a worse one. Is it worth extinguishing a species so a few people can have a marginally tastier or more high-status dinner? (whales, sharks, Patagonian Toothfish, etc.) Is it worth destroying a few dozen acres of forest containing the last habitat of a microscopic species we've never noticed so a few humans can play golf a little more frequently? I answer No, it isn't. It is possible for the costs of an action to non-human species to outweigh the benefits gained by humans of taking that action.
2Qiaochu_Yuan
Why? What worth? Where does this belief come from?
-2[anonymous]

I asked this before but don't remember if I got any good answers: I am still not convinced that I should care about animal suffering. Human suffering seems orders of magnitude more important. Also, meat is delicious and contains protein. What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian? Alternatively, how much would you be willing to pay me to stop eating meat?

"What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian?"

Huh. I'm drawing a similar blank as if someone asked me to provide an argument for why the suffering of red-haired people should count equally to the suffering of black-haired people. Why would the suffering of one species be more important than the suffering of another? Yes, it is plausible that once your nervous system becomes simple enough, you no longer experience anything that we would classify as suffering, but then you said "human suffering is more important", not "there are some classes of animals that suffer less". I'm not sure I can offer a good argument against "human suffering is more important", because it strikes me as so completely arbitrary and unjustified that I'm not sure what the arguments for it would be.

4Qiaochu_Yuan
Because one of those species is mine? Historically, most humans have viewed a much smaller set of (living, mortal) organisms as being the set of (living, mortal) organisms whose suffering matters, e.g. human members of their own tribe. How would you classify these humans? Would you say that their morality is arbitrary and unjustified? If so, I wonder why they're so similar. If I were to imagine a collection of arbitrary moralities, I'd expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now? If so, have you read gwern's The Narrowing Circle (which is the reason for the living and mortal qualifiers above)? There is something in human nature that cares about things similar to itself. Even if we're currently infected with memes suggesting that this something should be rejected insofar as it distinguishes between different humans (and I think we should be honest with ourselves about the extent to which this is a contingent fact about current moral fashions rather than a deep moral truth), trying to reject it as much as we can is forgetting that we're rebelling within nature. I care about humans because I think that in principle I'm capable of having a meaningful interaction with any human: in principle, I could talk to them, laugh with them, cry with them, sing with them, dance with them... I can't do any of these things with, say, a fish. When I ask my brain in what category it places fish, it responds "natural resources." And natural resources should be conserved, of course (for the sake of future humans), but I don't assign them moral value.

"Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?"

Yes! We know stuff that our ancestors didn't know; we have capabilities that they didn't have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures, but that's not because nonhumans don't matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn't make it okay.

3Qiaochu_Yuan
I'm more than willing to agree that our ancestors were factually confused, but I think it's important to distinguish between moral and factual confusion. Consider C.S. Lewis's well-known point that we no longer execute witches not because our morals have improved but because we no longer believe there are such things: I think our ancestors were primarily factually, rather than morally, confused. I don't see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are). If I write a computer program with a variable called isSuffering that I set to true, is it suffering? Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.
8Zack_M_Davis
(I have no idea how consciousness works, so in general, I can't answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can't affect what the program is actually doing. That doesn't follow if it turns out that preventing animal suffering is sufficiently cheap.
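The point about variable names can be shown directly. A minimal sketch (mine, not any commenter's; both identifiers are hypothetical): two functions that differ only in a variable's name are observationally identical, so the label does no work on its own.

```python
# Two functions identical except for one variable's name. Renaming changes
# nothing about what the program actually does, so a label like "isSuffering"
# carries no weight by itself. (Illustrative sketch; names are hypothetical.)

def program_a() -> bool:
    isSuffering = True  # the label claims suffering...
    return isSuffering

def program_b() -> bool:
    flag_1234 = True    # ...but an arbitrary rename is behaviorally identical
    return flag_1234

# No external test can distinguish the two programs.
assert program_a() == program_b()
```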
3Rob Bensinger
I'm not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren't wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one's affective state.) If this motivational component isn't what you had in mind as the 'moral', nonfactual component of our judgments, then I don't know what you do have in mind. I don't think this is specifically relevant. I upvoted your 'blue robot' comment because this is an important issue to worry about, but 'that's a black box' can't be used as a universal bludgeon. (Particularly given that it defeats appeals to 'isHuman' even more thoroughly than it defeats appeals to 'isSuffering'.) I assume you're being tongue-in-cheek here, but be careful not to mislead spectators. 'Human life isn't perfect, ergo we are under no moral obligation to eschew torturing non-humans' obviously isn't sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans' welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn't exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
1Qiaochu_Yuan
I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth). Nope. I don't think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can't do.
2Rob Bensinger
Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can't do. I don't see a relevant disanalogy. (Other than the question-begging one 'fish aren't human'.)
4Qiaochu_Yuan
I guess that should've ended "...that fish can't do and that are important parts of how they interact with other white people." Black people are capable of participating in human society in a way that fish aren't. A "reversed stupidity is not intelligence" warning also seems appropriate here: I don't think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.
4Rob Bensinger
I don't think we should stop making distinctions altogether either; I'm just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take 'the expanding circle' as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that's improved far beyond contemporary society's hodgepodge of standards. I think the main lesson from 'expanding circle' events is that we should be relatively cautious about assuming that something isn't a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. 'Black people don't have moral standing because they're less intelligent than us' fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, 'fish can't participate in human society' fails, because extremely pathologically antisocial or socially inept people (of the sort that can't function in society at all) still shouldn't be tortured. (Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn't give either civilization the right to oppress the other.) On the other hand, 'rocks aren't conscious' does seem to draw on a good and principled necessary condition -- anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy.
-1Eugine_Nier
What about unconscious people? So what's your position on abortion?
1Rob Bensinger
I don't know why you got a down-vote; these are good questions. I'm not sure there are unconscious people. By 'unconscious' I meant 'not having any experiences'. There's also another sense of 'unconscious' in which people are obviously sometimes unconscious — whether they're awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for 'bare consciousness', but it's not necessary, since people can experience dreams while 'unconscious'. Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly 'switches off' — or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like 'Do we have a responsibility to make conscious beings come into existence?' and 'Do we have a responsibility to fulfill people's wishes after they die?'. I'd lean toward 'yes' on the former, 'no but it's generally useful to act as though we do' on the latter. Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It's conceivable that there's no true consciousness at all until after birth — analogously, it's possible all non-humans are zombies — but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing — perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.
-1Qiaochu_Yuan
The original statement of my heuristic for deciding moral worth contained the phrase "in principle" which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they'd still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren't so capable. I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don't fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI). I'm skeptical of the claim that any fish have societies in a meaningful sense. Citation? If they're intelligent enough we can still trade with them, and that's fine. I don't think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other. Yes: not capturing complexity of value. Again, morality doesn't behave like science. Looking for general laws is not obviously a good methodology, and in fact I'm pretty sure it's a bad methodology.
3Rob Bensinger
'Your theory isn't complex enough' isn't a reasonable objection, in itself, to a moral theory. Rather, 'value is complex' is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it's more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex. In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence -- a more detailed map can be wrong about the territory in more ways. Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. "Looking for general laws" is a good idea here for the same reason it's a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we're not complicating our theory in arbitrary or unnecessary ways. Knowing at the outset that storms are complex doesn't mean that we shouldn't try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.
2wedrifid
If I am sure that value is complex and I am given two theories, one of which is complex and the other simple, then I can be sure that the simple one is wrong. The other one is merely probably wrong (as most such theories are). "Too simple" is a valid objection if the premise "Not simple" is implied.
0Rob Bensinger
That's assuming the two theories are being treated as perfected Grand Unified Theories Of The Phenomenon. If that's the case, then yes, you can simply dismiss a purported Finished Product that is too simple, without even bothering to check on how accurate it is first. But we're talking about preliminary hypotheses and approximate models here. If your first guess adds arbitrary complications just to try to look more like you think the Final Theory will someday appear, you won't learn as much from the areas where your map fails. 'Value is complex' is compatible with the utility of starting with simple models, particularly since we don't yet know in what respects it is complex.
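The storms analogy can be made concrete with a toy model-selection experiment (my own sketch, not anything from the thread): a theory with more free parameters can match the same observations while predicting held-out data far worse.

```python
# Toy illustration of "a more detailed map can be wrong in more ways":
# fit a simple and a complex model to the same noisy observations and
# compare their errors on held-out points.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)
y = 2.0 * x + 0.1 * rng.standard_normal(12)  # data: simple trend plus noise

x_held_out = np.linspace(0.0, 1.0, 100)
y_held_out = 2.0 * x_held_out                # the true underlying pattern

for degree in (1, 9):
    coeffs = np.polyfit(x, y, degree)        # higher degree = more free parameters
    mse = np.mean((np.polyval(coeffs, x_held_out) - y_held_out) ** 2)
    print(f"degree {degree}: held-out mean squared error = {mse:.4f}")

# The degree-9 polynomial fits the 12 observations at least as well as the
# line does, but typically does much worse on the held-out points.
```

Starting simple and ruling the simple model out first is what shows that any complexity you later add is demanded by the data rather than arbitrary.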
0Qiaochu_Yuan
Obviously that's not what I'm suggesting. What I'm suggesting is that it's both more complicated and that this complication is justified from my perspective because it captures my moral intuitions better. What data?
2A1987dM
Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you're much bigger than an atom and much slower than light).
1Rob Bensinger
Isn't a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don't know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy -- and are likely to become far fuzzier as we take more control of our genetic future. We also know that what's normal for a certain species can vary wildly over historical time. 'In principle' we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species. It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or 'feels'?) distant, yet completely intolerable in contexts where this external technology is more 'near' on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here? I don't find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort. Actually, now that you bring it up, I'm surprised by how similar the two are. 'Heuristics' by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the 'only things that can intelligently socialize with humans matter' heuristic isn't that it gets things wrong occasionally; it's that it almost always yields the intuitively wrong answer.
5Qiaochu_Yuan
I don't think most fish have complicated enough minds for this to be true. (By contrast, I think dolphins might, and this might be a reason to care about dolphins.) You're still using a methodology that I think is suspect here. I don't think there's good reasons to expect "everything that feels pain has moral value, period" to be a better moral heuristic than "some complicated set of conditions singles out the things that have moral value" if, upon reflection, those conditions seem to be in agreement with what my System 1 is telling me I actually care about (namely, as far as I can tell, my System 1 cares about humans in comas but not fish). My System 2 can try to explain what my System 1 cares about, but if those explanations are bad because your System 2 can find implications they have which are bad, then oh well: at the end of the day, as far as I can tell, System 1 is where my moral intuitions come from, not System 2. Your intuition, not mine. System 1 doesn't know what a biological human is. I'm not using "human" to mean "biological human." I'm using "human" to mean "potential friend." Posthumans and sufficiently intelligent AI could also fall in this category, but I'm still pretty sure that fish don't. I actually only care about the second principle. While getting what I regard to be the wrong answers with respect to most animals. A huge difference between morality and science is that the results of properly done scientific experiments can be relatively clear: it can be clear to all observers that the experiment provides evidence for or against some theory. Morality lacks an analogous notion of moral experiment. (We wouldn't be having this conversation if there were such a thing as a moral experiment; I'd be happy to defer to the evidence in that case, the same as I would in any scientific field where I'm not a domain expert.)
6Rob Bensinger
Thanks for fleshing out your view more! It's likely that previously I was being a bit too finicky with how you were formulating your view; I wanted to hear you come out and express the intuition more generally so I could see exactly where you thought the discontinuity lay, and I think you've done a good job of that now. Any more precision would probably be misleading, since the intuition itself is a bit amorphous: A lot of people think of their pets as friends and companions in various ways, and it's likely that no simple well-defined list of traits would provide a crisp criterion for what 'friendship' or 'potential friendship' means to you. It's just a vague sense that morality is contingent on membership in a class of (rough) social equals, partners, etc. There is no room in morality for a hierarchy of interests — everything either deserves (roughly) all the rights, or none of them at all. The reliance on especially poorly-defined and essentializing categories bothers me, but I'll mostly set that aside. I think the deeper issue here is that our intuitions do allow for hierarchies, and for a more fine-grained distribution of rights based on the different faculties of organisms. It's not all-or-nothing. Allowing that it's not all-or-nothing lets us escape most of your view's problems with essentialism and ad-hoc groupings — we can allow that there is a continuum of different moral statuses across individual humans for the same reasons, and in the same ways, that there is a continuum across species. For instance, if it were an essential fact that our species divided into castes, one of which just couldn't be a 'friend' or socialize with the other — a caste with permanent infant-like minds, for instance — we wouldn't be forced into saying that this caste either has 100% of our moral standing, or 0%. Thinking in terms of a graded scale of moral responsibility gives us the flexibility needed to adapt to an unpredictable environment that frequently lacks sharp lines between categories.
2Qiaochu_Yuan
This is a good point. I'll have to think about this.
0[anonymous]
This is quite a good post, thanks for taking the time to write it. You've said before that you think vegetarianism is the morally superior option. While you've done a good job here of defending the coherence or possibility of the moral significance of animal suffering, would you be willing to go so far as to defend such moral significance simpliciter? I ask in part because I don't think the claim that we ought to err on the side of disjunctivity as I think you construe it (where this involves something like a proportional distribution of moral worth on the basis of a variety of different merits and relationships) is morally safer than operating as if there were a hard and flat moral floor. Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed. We've historically had both problems, and I don't know that one or the other is necessarily the more disastrous. Exclusion has led to some real moral abominations (the holocaust, I guess), but uneven distribution where even distribution is called for has led to some long-standing and terribly unjust political traditions (feudalism, say). EDIT: I should add, and not at all by way of criticism, that for all the pejorative aimed at Aristotelian thinking in this last exchange, your conclusion (excluding the safety bit) is strikingly Aristotelian.
1Rob Bensinger
Thanks, hen! My primary argument is indeed that if animals suffer, that is morally significant — not that this thesis is coherent or possible, but that it's true. My claim is that although humans are capable both of suffering and of socializing, and both of these have ethical import, the import of suffering is not completely dependent on the import of socializing, but has some valence in its own right. This allows us to generalize the undesirability of suffering both to sapient nonsocial sentient beings and to nonsapient nonsocial sentient beings, independent of whether they would be easy, hard, or impossible to modify into a social being. It's hard to talk about this in the abstract, so maybe you should say more about what you're worried about, and (ideally) about some alternative that avoids the problem. It sounds like you're suggesting that if we assert that humans have a richer set of rights than non-humans — if we allow value to admit of many degrees and multiple kinds — then we may end up saying that some groups of humans intrinsically deserve more rights than others, in a non-meritocratic way. Is that your worry?
0[anonymous]
Thanks for filling that out. Could I ask you to continue with a defense of this premise in particular? (You may have done this already, and I may have missed it. If so, I'd be happy to be pointed in the right direction). My worry is with both meritocratic and non-meritocratic unevenness. You said earlier that Qiaochu's motivation for excluding animals from moral consideration was based on a desire for simplicity. I think this is right, but could use a more positive formulation: I think on the whole people want this simplicity because they want to defend the extremely potent modern intuition that moral hierarchy is unqualifiedly wrong. At least part of this idea is to leave our moral view fully determined by our understanding of humanity: we owe to every human (or relevantly human-like thing) the moral consideration we take ourselves to be owed. Most vegetarians, I would think, deploy such a flat moral floor (at sentience) for defending the rights of animals. So one view Qiaochu was attacking (I think) by talking about the complexity of value is the view that something so basic as sentience could be the foundation for our moral floor. Your response was not to argue for sentience as such a basis, but to deny the moral floor in favor of a moral stairway, thereby eliminating the absurdity of regarding chickens as full-fledged people. The reason this might be worrying is that our understanding of what it is to be human, or what kinds of things are morally valuable, now fails to determine our ascription of moral worth. So we admit the possibility of distributing moral worth according to intelligence, strength, military power, wealth, health, beauty, etc. and thereby denying to many people who fall short in these ways the moral significance we generally think they're owed. It was a view very much along these lines that led Aristotle to posit that some human beings, incapable of serious moral achievement for social or biological reasons, were natural slaves.
0ialdabaoth
The term you are looking for here is 'person'. The debate you are currently having is about what creatures are persons. The following definitions aid clarity in this discussion:

* Animal - a particular form of life that has evolved on earth; most animals are mobile, multicellular, and respond to their environment (but this is not universally necessary or sufficient).
* Human - a member of the species Homo sapiens, a particular type of hairless ape.
* Person - A being which has recognized agency, and (in many moral systems) specific rights.

Note that separating 'person' from 'human' allows you to recognize the possibility that all humans are not necessarily persons in all moral systems (i.e.: apartheid regimes and ethnic cleansing schemas certainly treat many humans as non-persons; certain cultures treat certain genders as effectively non-persons, etc.). If this is uncomfortable for you, explore the edges of it until your morality restabilizes (example: brain-dead humans are still human, but are they persons?).
0Rob Bensinger
Just keep adding complexity until you get an intelligent socializer. If an AI can be built, and prosthetics can be built, then a prosthetic that confers intelligence upon another system can be built. At worst, the fish brain would just play an especially small or especially indirect causal role in the rest of the brain's functioning. You are deferring to evidence; I just haven't given you good evidence yet that you do indeed feel sympathy for non-human animals (e.g., I haven't bombarded you with videos of tormented non-humans; I can do so if you wish), nor that you're some sort of exotic fish-sociopath in this regard. If you thought evidence had no bearing on your current moral sentiments, then you wouldn't be asking me for arguments at all. However, because we're primarily trying to figure out our own psychological states, a lot of the initial evidence is introspective -- we're experimenting on our own judgments, testing out different frameworks and seeing how close they come to our actual values. (Cf. A Priori.)
0Qiaochu_Yuan
But in that case I would be tempted to ascribe moral value to the prosthetic, not the fish. Agreed, but this is why I think the analogy to science is inappropriate.
2Rob Bensinger
I doubt there will always be a fact of the matter about where an organism ends and its prosthesis begins. My original point here was that we can imagine a graded scale of increasingly human-socialization-capable organisms, and it seems unlikely that Nature will be so kind as to provide us with a sharp line between the Easy-To-Make-Social and the Hard-To-Make-Social. We can make that point by positing prosthetic enhancements of increasing complexity, or genetic modifications to fish brain development, or whatever you please. Fair enough! I don't have a settled view on how much moral evidence should be introspective v. intersubjective, as long as we agree that it's broadly empirical.
4TheOtherDave
With respect to this human-socialization-as-arbiter-of-moral-weight idea, are you endorsing the threshold which human socialization currently demonstrates as the important threshold, or the threshold which human socialization demonstrates at any given moment? For example, suppose species X is on the wrong side of that line (however fuzzy the line might be). If instead of altering Xes so they were better able to socialize with unaltered humans and thereby had, on this view, increased moral weight, I had the ability to increase my own ability to socialize with X, would that amount to the same thing?
0TheOtherDave
Thinking about this... while I sympathize with the temptation, it does seem to me that the same mindset that leads me in this direction also leads me to ascribe moral values to human societies, rather than to individual humans. I'm not yet sure what I want to do with that.
0[anonymous]
It might be worth distinguishing a genetic condition on X from a constituting condition on X. So human society is certainly necessary to bring about the sapience and social capacities of human beings, but if you remove the human from the society once they've been brought up in the relevant way, they're no less capable of social and sapient behavior. On the other hand, the fish-prosthetic is part of what constitutes the fish's capacity for social and sapient behavior. If the fish were removed from it, it would lose those capacities. I think you could plausibly say that the prosthetic should be considered part of the basis for the moral worth of the fish (at the expense of the fish on its own), but refuse to say this about human societies (at the expense of the individual human) in light of this distinction.
0TheOtherDave
Hm. Well, I agree with considering the prosthetic part of the basis of the worth of the prosthetically augmented fish, as you suggest. And while I think we underestimate the importance of a continuing social framework for humans to be what we are, even as adults, I will agree that there's some kind of meaningful threshold to be identified such that I can be removed from human society without immediately dropping below that threshold, and there's an important difference (if perhaps not strictly a qualitative one) between me and the fish in this respect. So, yeah, drawing this distinction allows me to ascribe moral value to individual adult humans (though not to very young children, I suppose), rather than entirely to their societies, even while embracing the general principle here. Fair enough.
2Said Achmiz
I've seen that C.S. Lewis quote before, and it seems to me quite mistaken. Lewis seems to suggest that executing a witch, per se, is what we consider bad. But that's wrong. What was bad about witch hunts was:

1. People were executed without anything resembling solid evidence of their guilt — which of course could not possibly have been obtained, seeing as how they were not guilty and the crimes they were accused of were imaginary; but my point is that the "trial" process was horrifically unjust and monstrously inhumane (torture to extract confessions, etc.). If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.

2. Punishments were terribly inhumane — burning alive? Come now. Even if we thought witches existed today, and even if we thought the death penalty was an appropriate punishment, we'd carry it out in a more humane manner, and certainly not as a form of public entertainment (again, one would hope; at least, our moral standards today dictate thus).

So differences of factual belief are not the main issue here. The fact that, when you apply rigorous standards of evidence and fair prosecution practices to the witch issue, witchcraft disappears as a crime, is instructive (i.e. it indicates that there's no such crime in the first place), but we shouldn't therefore conclude that not believing in witches is the relevant difference between us and the Inquisition.
0MugaSofer
Considering people seemed to think that this was the best way to find witches, 1 still seems like a factual confusion. 2 was based on a Bible quote, I think. The state hanged witches.
0Qiaochu_Yuan
We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury? If you think humanity as a whole has made substantial moral progress throughout history, what's driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don't have an analogous story about moral progress. How do you distinguish the current state of affairs from "moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral"?
3A1987dM
Who knows what kind of things a real witch could do to an executioner, for that matter?
2Said Achmiz
There is a difference between "we should take precautions to make sure the witch doesn't blanket the courtroom with fireballs or charm the jury and all officers of the court; but otherwise human rights apply as usual" and "let's just burn anyone that anyone has claimed to be a witch, without making any attempt to verify those claims, confirm guilt, etc." Regardless of what you think would happen in practice (fear makes people do all sorts of things), it's clear that our current moral standards dictate behavior much closer to the former end of that spectrum. At the absolute least, we would want to be sure that we are executing the actual witches (because every accused person could be innocent and the real witches could be escaping justice), and, for that matter, that we're not imagining the whole witchcraft thing to begin with! That sort of certainty requires proper investigative and trial procedures.

That's two questions ("what drives moral progress" and "how can you distinguish moral progress from a random walk"). They're both interesting, but the former is not particularly relevant to the current discussion. (It's an interesting question, however, and Yvain makes some convincing arguments at his blog [sorry, don't have link to specific posts atm] that it's technological advancement that drives what we think of as "moral progress".) As for how I can distinguish it from a random walk — that's harder. However, my objection was to Lewis's assessment of what constitutes the substantive difference between our moral standards and those of medieval witch hunters, which I think is totally mistaken. I do not need even to claim that we've made moral progress per se to make my objection.
2Said Achmiz
No they don't. Are you saying it's not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).
2Raemon
There's a difference between "it's possible to construct a mind" and "other particular minds are likely to be constructed a certain way." Our minds were built by the same forces that built other minds we know of. We should expect there to be similarities. (I also would define it, not in terms of "pain and suffering" but "preference satisfaction and dissatisfaction". I think I might consider "suffering" as dissatisfaction, by definition, although "pain" is more specific and might be valuable for some minds.)
0A1987dM
Such as human masochists.
0Said Achmiz
I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key followup question). I was objecting to the assertion of (logical?) necessity, especially since we don't even have so much as a strong certainty. I don't know that I'm comfortable with identifying "suffering" with "preference dissatisfaction" (btw, do you mean by this "failure to satisfy preferences" or "antisatisfaction of negative preferences"? i.e. if I like playing video games and I don't get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).
2TheOtherDave
I can't speak for Raemon, but I would certainly say that the condition described by "I like playing video games and am prohibited from playing video games" is a trivial but valid instance of the category /suffering/. Is the difficulty that there's a different word you'd prefer to use to refer to the category I'm nodding in the direction of, or that you think the category itself is meaningless, or that you don't understand what the category is (reasonably enough; I haven't provided nearly enough information to identify it if the word "suffering" doesn't reliably do so), or something else? I'm usually indifferent to semantics, so if you'd prefer a different word, I'm happy to use whatever word you like when discussing the category with you.
0Said Achmiz
That one. Also, what term we should use for what categories of things and whether I know what you're talking about is dependent on what claims are being made... I was objecting to Zack_M_Davis's claim, which I take to be something either like: "We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also." or... "We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way, then if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also." I don't think either of those claims are justified. Do you think they are? If you do, I guess we'll have to work out what you're referring to when you say "suffering", and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we're referring to.)
0TheOtherDave
There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don't. So let me back up a little. Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1. The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am. So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding them or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that's strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2). I don't actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C. As long as C is high -- that is, as long as we really are confident that the other brain has a "same or similar implementation", as you say, along salient dimensions (such as manifesting similar subjective experience) -- then I'm pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is "completely identical" to (S1,B1), I'm "certain" I prefer B2 not be in S2. But I'm not sure that's actually what you mean when you say "same or similar implementation." You might, for example, mean that they have anatomical points of correspondence, but you aren't confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
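(Read as a toy expected-value model -- my own sketch, not TheOtherDave's exact formalism -- the C-weighting above might look like this in Python:)

    # Toy reading of the comment above (an illustrative sketch, not
    # TheOtherDave's exact formalism): my dispreference toward brain B2
    # being in state S2 scales with my confidence C that (B2, S2) is
    # relevantly similar to a (B1, S1) pair I already disprefer.
    def dispreference_toward(base_dispreference: float, c: float) -> float:
        """base_dispreference: how strongly I prefer that B1 not be in S1;
        c: confidence in [0, 1] that S2 of B2 is relevantly similar to S1."""
        return c * base_dispreference

    print(dispreference_toward(1.0, 0.99))  # near-identical implementation
    print(dispreference_toward(1.0, 0.30))  # mere anatomical correspondence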
2Said Achmiz
Is brain B1 your brain in this scenario? Or just... some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings' brain states. Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as "pain" and "suffering" (which, for us, might usefully be operationalized as "brain states we prefer not to be in") are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing "pain" and "suffering" (i.e. are having the same subjective experiences), and b) that these beings, also, have antipreferences about those brain states... Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity. Or, he could have been making the claim that we can usefully describe the category of "pain" and/or "suffering" in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don't know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad. I don't think that conclusion is justified either... or rather, I don't think it's instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as "suffering" is by definition. And we all know that arguing "by definition"…
0TheOtherDave
My brain is certainly an example of a brain that I prefer not be in pain, though not the only example. My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological,or whatever) details depends a lot on those details. As does my confidence that B's mind antiprefers the experiential correlates of those details. I agree that there's no strict entailment here, though, "merely" evidence. That said, mere evidence can get us pretty far. I am not inclined to dismiss it.
1Lukas_Gloor
I'd do it that way. It doesn't strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of "pain". (Subjects report that they notice the sensation of pain, but they claim it doesn't bother them.) I'd define suffering as wanting to get out of the state you're in. If you're fine with the state you're in, it is not what I consider to be suffering.
0Said Achmiz
Ok, that seems workable to a first approximation. So, a question for anyone who both agrees with that formulation and thinks that "we should care about the suffering of animals" (or some similar view): Do you think that animals can "want to get out of the state they're in"?
1Raemon
Yes? This varies from animal to animal. There's a fair amount of research/examination into which animals appear to do so, some of which is linked to elsewhere in this discussion. (At least some examination was linked to in response to a statement about fish)
6Lukas_Gloor
On why the suffering of one species would be more important than the suffering of another: Does that also apply to race and gender? If not, why not? Assuming a line-up of ancestors, always mother and daughter, from Homo sapiens back to the common ancestor of humans and chickens and forward in time again to modern chickens, where would you draw the line? A common definition for species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother? Is that really the criterion you want to use for making your decisions? And does it at all bother you that racists or sexists can use an analogous line of defense?
1Qiaochu_Yuan
I feel psychologically similar to humans of different races and genders but I don't feel psychologically similar to members of most different species. Uh, no. System 1 doesn't know what a species is; that's just a word System 2 is using to approximately communicate an underlying feeling System 1 has. But System 1 knows what a friend is. Other humans can be my friends, at least in principle. Probably various kinds of posthumans and AIs can as well. As far as I can tell, a fish can't, not really. This general argument of "the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad" strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn't have this property? Also no. I think current moral fashion is prejudiced against prejudice. Racism and sexism are not crazy or evil points of view; historically, they were points of view held by many sane humans who would have been regarded by their peers as morally upstanding. Have you read What You Can't Say?
1TheOtherDave
I should add to this that even if I endorse what you call "prejudice against prejudice" here -- that is, even if I agree with current moral fashion that racism and sexism are not as good as their absence -- it doesn't follow that because racists or sexists can use a particular argument A as a line of defense, there's therefore something wrong with A. There are all sorts of positions which I endorse and which racists and sexists (and Babyeaters and Nazis and Sith Lords and...) might also endorse.
0Lukas_Gloor
Actually, I do. I try to rely on System 1 as little as possible when it comes to figuring out my terminal value(s). One reason for that, I guess, is that at some point I started out with the premise that I don't want to be the sort of person that would have been racist or sexist in previous centuries. If you don't share that premise, there is no way for me to show that you're being inconsistent -- I acknowledge that.
-3Qiaochu_Yuan
Wow! So you've solved friendly AI? Eliezer will be happy to hear that.
-2MugaSofer
I'm pretty sure Eliezer already knew our brains contained the basis of morality.
2Kaj_Sotala
I should probably clarify - when I said that valuing humans over animals strikes me as arbitrary, I'm saying that it's arbitrary within the context of my personal moral framework, which contains no axioms from which such a distinction could be derived. All morality is ultimately arbitrary and unjustified, so that's not really an argument for or against any moral system. Internal inconsistencies could be arguments, if you value consistency, but your system does seem internally consistent. My original comment was meant more of an explanation of my initial reaction to your question rather than anything that would be convincing on logical grounds, though I did also assign some probability to it possibly being convincing on non-logical grounds. (Our moral axioms are influenced by what other people think, and somebody expressing their disagreement with a moral position has some chance of weakening another person's belief in that position, regardless of whether that effect is "logical".)
1Qiaochu_Yuan
I've been meaning to write a post about how I think it's a really, really bad idea to think about morality in terms of axioms. This seems to be a surprisingly (to me) common habit among LW types, especially since I would have thought it was a habit the metaethics sequence would have stomped out. (You shouldn't regard it as a strength of your moral framework that it can't distinguish humans from non-human animals. That's evidence that it isn't capable of capturing complexity of value.)
7Kaj_Sotala
I agree that thinking about morality exclusively in terms of axioms in a classical logical system is likely to be a rather bad idea, since that makes one underestimate the complexity of morality and the strength of non-logical influences, and overestimate the extent to which it resembles a system of classical logic in general. But I'm not sure if it's that problematic as long as you keep in mind that "axioms" is really just shorthand for something like "moral subprograms" or "moral dynamics". I did always read the metaethics sequence as establishing the existence of something similar-enough-to-axioms-that-we-might-as-well-use-the-term-axioms-as-shorthand-for-them, with e.g. No Universally Compelling Arguments and Created Already In Motion arguing that you cannot convince a mind about the correctness of some action unless its mind contains a dynamic which reacts to your argument in the way you wish - in other words, unless your argument builds on things that the mind's decision-making system already cares about, and which could be described as axioms when composing a (static) summary of the mind's preferences.

I'm not really sure of what you mean here. For one, I didn't say that my moral framework can't distinguish humans and non-humans - I do e.g. take a much more negative stance on killing humans than animals, because killing humans would have a destabilizing effect on society and people's feelings of safety, which would contribute to the creation of much more suffering than killing animals would. Also, whether or not my personal moral framework can capture complexity of value seems irrelevant - CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on, not a description of what others want, nor something that I'd program into an AI.
1Vladimir_Nesov
Well, I don't think I should care what I care about. The important thing is what's right, and my emotions are only relevant to the extent that they communicate facts about what's right. What's right is too complex, both in definition and consequentialist implications, and neither my emotions nor my reasoned decisions are capable of accurately capturing it. Any consciously-held morals are only a vague map of morality, not morality itself, and so shouldn't hold too much import, on pain of moral wireheading/acceptance of a fake utility function. (Listening to moral intuitions, possibly distilled as moral principles, might give the best moral advice that's available in practice, but that doesn't mean that the advice is any good. Observing this advice might fail to give an adequate picture of the subject matter.)
3Kaj_Sotala
I must be misunderstanding this comment somehow? One still needs to decide what actions to take during every waking moment of their lives, and "in deciding what to do, don't pay attention to what you want" isn't very useful advice. (It also makes any kind of instrumental rationality impossible.)
2Vladimir_Nesov
What you want provides some information about what is right, so you do pay attention. When making decisions, you can further make use of moral principles not based on what you want at a particular moment. In both cases, making use of these signals doesn't mean that you expect them to be accurate, they are just the best you have available in practice. Estimate of the accuracy of the moral intuitions/principles translates into an estimate of value of information about morality. Overestimation of accuracy would lead to excessive exploitation, while an expectation of inaccuracy argues for valuing research about morality comparatively more than pursuit of moral-in-current-estimation actions.
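(To make the accuracy-to-value-of-information link concrete, here is a toy one-shot model in Python -- my own illustrative sketch with made-up payoffs, not Nesov's formalism:)

    # Toy value-of-information model (hypothetical numbers, my illustration).
    # You must choose between two acts, one worth +1 and one worth -1;
    # a moral intuition points to the better act with probability p.
    def ev_following_intuition(p: float) -> float:
        return p * 1 + (1 - p) * (-1)          # = 2p - 1

    def value_of_perfect_information(p: float) -> float:
        # Perfect moral information would let you always pick the +1 act.
        return 1 - ev_following_intuition(p)   # = 2 * (1 - p)

    for p in (0.55, 0.75, 0.95):
        print(p, round(value_of_perfect_information(p), 2))
    # Prints 0.9, 0.5, 0.1: the more accurate you take the intuition to
    # be, the less further moral research is worth.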
3Osiris
I'm not a very well educated person in this field, but if I may: I see my various squishy feelings (desires and what-is-right intuitions are in this list) as loyal pets. Sometimes, they must be disciplined and treated with suspicion, but for the most part, they are there to please you in their own dumb way. They're no more enemies than one's preference for foods. In my care for them, I train and reward them, not try to destroy or ignore them. Without them, I have no need to DO better among other people, because I would not be human--that is, some things are important only because I'm a barely intelligent ape-man, and they should STAY important as long as I remain a barely intelligent ape-man. Ignoring something going on in one's mind, even when one KNOWS it is wrong, can be a source of pain, I've found--hypocrisy and indecision are not my friends. Hope I didn't make a mess of things with this comment.
2Kaj_Sotala
I'm roughly in agreement, though I would caution that the exploration/exploitation model is a problematic one to use in this context, for two reasons:

1) It implies a relatively clear map/territory split: there are our real values, and our conscious model of them, and errors in our conscious model do not influence the actual values. But to some extent, our conscious models of our values do shape our unconscious values in that direction - if someone switches to an exploitation phase "too early", then over time, their values may actually shift over to what the person thought they were.

2) Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it. In other words, if we realize that our conscious values don't match our unconscious ones, we should revise our conscious values. And sometimes this does happen. But on other occasions, what happens is that our conscious model has become installed as a separate and contradictory set of values, and we need to choose which of the values to endorse (in which situations). This happening is a bad thing if you tend to primarily endorse your unconscious values or a lack of internal conflict, but arguably a good thing if you tend to primarily endorse your conscious values.

The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them, and we probably shouldn't use terminology like exploration/exploitation that implies that it would be just one of those.
2Vladimir_Nesov
This is value drift. At any given time, you should fix (i.e. notice, as a concept) the implicit idealized values at that time and pursue them even if your hardware later changes and starts implying different values (in the sense where your dog or your computer or an alien also should (normatively) pursue them forever, they are just (descriptively) unlikely to, but you should plot to make that more likely, all else equal).

As an analogy, if you are interested in solving different puzzles on different days, then the fact that you are no longer interested in solving yesterday's puzzle doesn't address the problem of solving yesterday's puzzle. And idealized values don't describe valuation of you, the abstract personal identity, of your actions and behavior and desires. They describe valuation of the whole world, including future you with value drift as a particular case that is not fundamentally special. The problem doesn't change, even if the tendency to be interested in a particular problem does. The problem doesn't get solved because you are no longer interested in it. Solving a new, different problem does not address the original problem.

The nature of idealized values is irrelevant to this point: whatever they are, they are that thing that they are, so that any "correction" discards the original problem statement and replaces it with a new one. What you can and should correct are intermediate conclusions. (Alternatively, we are arguing about definitions, and you read in my use of the term "values" what I would call intermediate conclusions, but then again I'm interested in you noticing the particular idea that I refer to with this term.)

I don't think "unconscious values" is a good proxy for abstract implicit valuation of the universe, consciously-inaccessible processes in the brain are at a vastly different level of abstraction compared to the idealization I'm talking about. This might be true in the sense that humans probably underdetermine the valuation of the…
0Kaj_Sotala
I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing "what we want" in order to have any way of ensuring that an AI will further the things we want. I do not understand why the concept would be relevant to our personal lives, however.
1Vladimir_Nesov
The question of what is normatively the right thing to do (given the resources available) is the same for a FAI and in our personal lives. My understanding is that "implicit idealized value" is the shape of the correct answer to it, not just a tool restricted to the context of FAI. It might be hard for a human to proceed from this concept to concrete decisions, but this is a practical difficulty, not a restriction on the scope of applicability of the idea. (And to see how much of a practical difficulty it is, it is necessary to actually attempt to resolve it.) If idealized value indicates the correct shape of normativity, the question should instead be, How are our personal lives relevant to idealized value? One way was discussed a couple of steps above in this conversation: exploitation/exploration tradeoff. In pursuit of idealized values, if in our personal lives we can't get much information about them, a salient action is to perform/support research into idealized values (or relevant subproblems, such as preventing/evading global catastrophes).
1Kaj_Sotala
What does this mean? It sounds like you're talking about some kind of objective morality?
3A1987dM
I've interacted with enough red-haired people and enough black-haired people that (assuming the anti-zombie principle) I'm somewhat confident that there's no big difference on average between the ways they suffer. I'm nowhere near as confident about fish.
8Kaj_Sotala
I already addressed that uncertainty in my comment: To elaborate: it's perfectly reasonable to discount the suffering of e.g. fish by some factor because one thinks that fish probably suffer less. But as I read it, someone who says "human suffering is more important" isn't saying that: they're saying that they wouldn't care about animal suffering even if it was certain that animals suffered just as much as humans, or even if it was certain that animals suffered more than humans. It's saying that no matter the intensity or nature of the suffering, only suffering that comes from humans counts.
0Shmi
Even less so about silverfish, despite its complex mating rituals.

Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?) But non-human animal suffering is likely to be orders of magnitude more common. Some non-human animals are probably capable of suffering, and we care a great deal about suffering in the case of humans (as, presumably, we would in the case of intelligent aliens). So it seems arbitrary to exclude non-human animal suffering from our concerns completely. Moreover, if you're uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption. Mistakenly killing thousands of suffering moral patients over your lifetime is plausibly a much bigger worry than mistakenly sparing thousands of unconscious zombies and missing out on some mouth-pleasures.

I'm not a vegetarian myself, but I do think vegetarianism is a morally superior option. I also think vegetarians should adopt a general policy of not paying people to become vegetarians (except perhaps as a short-term experiment, to incentivize trying out the lifestyle).

1Qiaochu_Yuan
I'm a human and I care about humans. Animals only matter insofar as they affect the lives of humans. Is this really such a difficult concept? I don't mean per organism, I mean in aggregate. In aggregate, I think the totality of animal suffering is orders of magnitude less important than the totality of human suffering. I'm not disagreeing that animals suffer. I'm telling you that I don't care whether they suffer.
130Pablo

I'm a human and I care about humans.

You are many things: a physical object, a living being, a mammal, a member of the species Homo sapiens, an East Asian (I believe), etc. What's so special about the particular category you picked?

-1Qiaochu_Yuan
The psychological unity of humankind. See also this comment.
130Pablo

Presumably mammals also exhibit more psychological similarity than non-mammals, and the same is probably true about East Asians relative to members of other races. What makes the psychological unity of mankind special?

Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.

3Qiaochu_Yuan
I'm willing to entertain this possibility. I've recently been convinced that I should consider caring about dolphins and other similarly intelligent animals, possibly including pigs (so I might be willing to give up pork). I still don't care about fish or chickens. I don't think I can have a meaningful relationship with a fish or a chicken even in principle.
1A1987dM
I suspect that if you plotted all living beings by psychological similarity with Qiaochu_Yuan, there would be a much bigger gap between the -- [reminds himself about small children, people with advanced-stage Alzheimer's, etc.] never mind.
2Pablo
:-)
1A1987dM
(I could steelman my yesterday self by noticing that even though small children aren't similar to QY they can easily become so in the future, and by replacing “gap” with “sparsely populated region”.)
1Nornagest
Doesn't follow. If we imagine a personhood metric for animals evaluated over some reasonably large number of features, it might end up separating (most) humans from all nonhuman animals even if for each particular feature there exist some nonhuman animals that beat humans on it. There's no law of ethics saying that the parameter space has to be small. It's not likely to be a clean separation, and there are almost certainly some exceptional specimens of H. sapiens that wouldn't stand up to such a metric, but -- although I can't speak for Qiaochu -- that's a bullet I'm willing to bite.
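(A tiny numerical illustration of the comment above, with made-up feature scores:)

    # Hypothetical scores on ten "personhood" features. Each animal beats
    # the human on one feature, yet the aggregate metric still cleanly
    # separates the human from every animal.
    human = [6] * 10
    animals = [[9 if i == j else 1 for i in range(10)] for j in range(10)]
    # On every single feature, some animal outscores the human...
    assert all(any(a[i] > human[i] for a in animals) for i in range(10))
    # ...but no animal's total comes close.
    print(sum(human), max(sum(a) for a in animals))   # 60 vs. 18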
0Said Achmiz
Does not follow, since an equally valid conclusion is that Qiaochu_Yuan should not-care about some humans (those that exhibit relevant traits less than some nonhuman animals). One person's modus ponens is etc.
8Rob Bensinger
Every human I know cares at least somewhat about animal suffering. We don't like seeing chickens endlessly and horrifically tortured -- and when we become vividly acquainted with such torture, our not-liking-it generally manifests as a desire for the torture to stop, not just as a desire to become ignorant that this is going on so it won't disturb our peace of mind. I'll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans. Are you certain you don't care? Are you certain that you won't end up viewing this dispassion as a bias on your part, analogous to people in history who genuinely didn't care at all about black people (but would regret and abandon this apathy if they knew all the facts)? If you feel there's any realistic chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don't care about is a much smaller cost than learning 20 years from now you're the Hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.
3Qiaochu_Yuan
I don't feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans. No, or else I wouldn't be asking for arguments. This is a good point.
3Rob Bensinger
I don't either, but unless I can come up with a sharp and universal criterion for distinguishing all chickens from all humans, chickens' psychological alienness to me will seem a difference of degree more than of kind. It's a lot easier to argue that chicken suffering matters less than human suffering (or to argue that chickens are zombies) than to argue that chicken suffering is completely morally irrelevant. Some chickens may very well have more psychologically in common with me than I have in common with certain human infants or with certain brain-damaged humans; but I still find myself able to feel that sentient infants and disabled sentient humans oughtn't be tortured. (And not just because I don't want their cries to disturb my own peace of mind. Nor just because they could potentially become highly intelligent, through development or medical intervention. Those might enhance the moral standing of any of these organisms, but they don't appear to exhaust it.)
-2Jiro
That's not a good point, that's a variety of Pascal's Mugging: you're suggesting that the fact that the possible consequence is large ("I tortured beings" is a really negative thing) means that even if the chance is small, you should act on that basis.
2BerryPick6
It's not a variant of Pascal's Mugging, because the chances aren't vanishingly small and the payoff isn't nearly infinite.
5Shmi
I don't believe you. If you see someone torturing a cat, a dolphin or a monkey, would you feel nothing? (Suppose that they are not likely to switch to torturing humans, to avoid "gateway torture" complications.)
2TheOtherDave
My problem with this question is that if I see video of someone torturing a cat when I am confident there was no actual cat-torturing involved in creating those images (e.g., I am confident it was all photoshopped), what I feel is pretty much indistinguishable from what I feel if I see video of someone torturing a cat when I am confident there was actual cat-torturing. So I'm reluctant to treat what I feel in either case as expressing much of an opinion about suffering, since I feel it roughly equally when I believe suffering is present and when I don't.
0Kawoomba
So if you can factor-out, so to speak, the actual animal suffering: If you had to choose between "watch that video, no animal was harmed" versus "watch that video, an animal was harmed, also you get a biscuit (not the food, the 100 squid (not the animals, the pounds (not the weight unit, the monetary unit)))", which would you choose? (Your feelings would be the same, as you say, your decision probably wouldn't be. Just checking.)
5Qiaochu_Yuan
What?
9Eliezer Yudkowsky
A biscuit provides the same number of calories as 100 SQUID, which stands for Superconducting Quantum Interference Device, which weigh a pound apiece, which masses 453.6 grams, which converts to 4 × 10^16 joules, which can be converted into 1.13 × 10^10 kilowatt-hours, which are worth 12 cents per kW-hr, so around 136 billion dollars or so.
2TheOtherDave
...plus a constant.
-1Kawoomba
Reminds me of ... Note the name of the website. She doesn't look happy! "I am altering the deal. Pray I don't alter it any further." Edit: Also, 1.13 * 10^10 kilowatt-hours at 12 cents each yields 1.36 billion dollars, not 136 billion dollars! An honest mistake (cents, not dollars per kWh), or a scam? And as soon as Dmitry is less active ...
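(A quick Python sanity check of the figures above, assuming 1 lb = 0.4536 kg and the quoted 12 cents per kWh; it bears out the $1.36 billion correction:)

    # Back-of-the-envelope check of the mass-energy arithmetic above
    # (assumes 1 lb = 0.4536 kg and the quoted $0.12 per kWh).
    mass_kg = 0.4536                 # one pound, in kilograms
    c = 2.998e8                      # speed of light, m/s
    joules = mass_kg * c**2         # E = mc^2  ->  ~4.08e16 J
    kwh = joules / 3.6e6            # 1 kWh = 3.6e6 J  ->  ~1.13e10 kWh
    dollars = kwh * 0.12            # ~1.36e9, i.e. $1.36 billion, not 136
    print(f"{joules:.2e} J, {kwh:.2e} kWh, ${dollars:.2e}")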
7Vaniver
"squid" is slang for a GBP, i.e. Pound Sterling, although I'm more used to hearing the similar "quid." One hundred of them can be referred to as a "biscuit," apparently because of casino chips, similar to how people in America will sometimes refer to a hundred dollars as a "benjamin." That is, what are TheOtherDave's preferences between watching an unsettling movie that does not correspond to reality and watching an unsettling movie that does correspond to reality, but they're paid some cash.
6Paul Crowley
"Quid" is slang, "squid" is a commonly used jokey soundalike. There's a joke that ends "here's that sick squid I owe you". EDIT: also, never heard "biscuit" = £100 before; that's a "ton".
0Vaniver
Does Cockney rhyming slang not count as slang?
0wedrifid
In this case it seems to. It's the first time I recall encountering it but I'm not British and my parsing of unfamiliar and 'rough' accents is such that if I happened to have heard someone say 'squid' I may have parsed it as 'quid', and discarded the 's' as noise from people saying a familiar term in a weird way rather than a different term.
0TheOtherDave
It amuses me that despite making neither head nor tail of the unpacking, I answered the right question. Well, to the extent that my noncommittal response can be considered an answer to any question at all.
0Qiaochu_Yuan
Well, I figured that much out from googling, but I was more reacting to what seems like a deliberate act of obfuscation on Kawoomba's part that serves no real purpose.
5Vaniver
Nested parentheses are their own reward, perhaps?
-4Kawoomba
In an interesting twist, in many social circles (not here) your use of the word "obfuscation" would be obfuscatin' in itself. To be very clear though: "Eschew obfuscation, espouse elucidation."
0Paul Crowley
So to be clear - you do some Googling and find two videos, one has realistic CGI animal harm, the other real animal harm; assume the CGI etc is so good that I wouldn't be able to tell which was which if you hadn't told me. You don't pay for the animal harm video, or in any way give anyone an incentive to harm an animal in fetching it; just pick up a pre-existing one. I have a choice between watching the fake-harm video (and knowing it's fake) or watching the real-harm video and receiving £100. If the reward is £100, I'll take the £100; if it's an actual biscuit, I prefer to watch the fake-harm video.
-1TheOtherDave
I'm genuinely unsure, not least because of your perplexing unpacking of "biscuit". Both examples are unpleasant; I don't have a reliable intuition as to which is more so if indeed either is. I have some vague notion that if I watch the real-harm video that might somehow be interpreted as endorsing real-harm more strongly than if I watch the fake-harm video, like through ratings or download monitoring or something, which inclines me to the fake-harm video. Though whether I'm motivated by the vague belief that such differential endorsement might cause more harm to animals, or by the vague belief that it might cause more harm to my status, I'm again genuinely unsure of. In the real world I usually assume that when I'm not sure it's the latter, but this is such a contrived scenario that I'm not confident of that either. If I assume the biscuit is a reward of some sort, then maybe that reward is enough to offset the differential endorsement above, and maybe it isn't.
0Qiaochu_Yuan
I don't want to see animals get tortured because that would be an unpleasant thing to see, but there are lots of things I think are unpleasant things to see that don't have moral valence (in another comment I gave the example of seeing corpses get raped). I might also be willing to assign dolphins and monkeys moral value (I haven't made up my mind about this), but not most animals.
0CoffeeStain
Do you have another example besides the assault of corpses? I can easily see real moral repugnance from the effect it has on the offenders, who are victims of their own actions. If you find it unpleasant only when you see it, would not they find it horrific when they perform it? Also in these situations, repugnance can leak due to uncertainty of other real moral outcomes, such as the (however small) likelihood of family members of the deceased learning of the activity, for whom these corpses have real moral value.
2A1987dM
Two Girls One Cup?
0Qiaochu_Yuan
Seeing humans perform certain kinds of body modifications would also be deeply unpleasant to me, but it's also not an act I assign moral valence to (I think people should be allowed to modify their bodies more or less arbitrarily).
-1Said Achmiz
I'll chime in to comment that QiaochuYuan's[1] views as expressed in this entire thread are quite similar to my own (with the caveat that for his "human" I would substitute something like "sapient, self-aware beings of approximately human-level intelligence and above" and possibly certain other qualifiers having to do with shared values, to account for Yoda/Spock/AIs/whatever; it seems like QiaochuYuan uses "approximately human" to mean roughly this). So, please reconsider your disbelief. [1] Sorry, the board software is doing weird things when I put in underscores...
2Shmi
So, presumably you don't keep a pet, and if you did, you would not care for its well-being?
-1Said Achmiz
Indeed, I have no pets. If I did have a pet, it is possible that I would not care for it (assuming animal cruelty laws did not exist), although it is more likely that I would develop an attachment to it, and would come to care about its well-being. That is how humans work, in my experience. I don't think this necessarily has any implications w.r.t. the moral status of nonhuman animals.
1KatieHartman
Do you consider young children and very low-intelligence people to be morally-relevant? (If - in the case of children - you consider potential for later development to be a key factor, we can instead discuss only children who have terminal illnesses.)
2Said Achmiz
Good question. Short answer: no. Long answer: When I read Peter Singer, what I took away was not, as many people here apparently did, that we should value animals; what I took away is that we should not value fetuses, newborns, and infants (to a certain age, somewhere between 0 and 2 years [1]). That is, I think the cutoff for moral relevance is somewhere above, say, cats, dogs, newborns... where exactly? I'm not sure. Humans who have a general intelligence so low that they are incapable of thinking about themselves as conscious individuals are also, in my view, not morally relevant. I don't know whether such humans exist (most people with Down syndrome don't quite seem to fit that criterion, for instance). There are many caveats and edge cases, for instance: what if the low-intelligence condition is temporary, and will repair itself with time? Then I think we should consider the wishes of the self that the person was before the impairment, and the rights of their future, non-impaired, selves. But what if the impairment can be repaired using medical technology? Same deal. What if it can't? Then I would consider this person morally irrelevant. What if the person was of extremely low intelligence, and had always been so, but we could apply some medical intervention to raise their intelligence to at least normal human level? I would consider that act morally equivalent to creating a new sapient being (whether that's good or bad is a separate question). So: it's complicated. But to answer practical questions: I don't consider infanticide the moral equivalent of murder (although it's reasonable to outlaw it anyway, as birth is a good Schelling point, but the penalty should surely be nowhere near as harsh as for killing an adult or older child). The rights of low-intelligence people is a harder issue partly because there are no obvious cutoffs or metrics. I hope that answers your question; if not, I'll be happy to elaborate further.
4Eliezer Yudkowsky
Ethical generalizations check: Do you care about Babyeaters? Would you eat Yoda?
4wedrifid
Would that allow absorbing some of his midichlorians? Black magic! Well, I might try (since he died of natural causes anyway). But Yoda dies without leaving a corpse. It would be difficult. The only viable strategy would seem to be to have Yoda anesthetize himself a minute before he ghosts ("becomes one with the force"). Then the flesh would remain corporeal for consumption. The real ethical test would be would I freeze Yoda's head in carbonite, acquire brain scanning technology and upload him into a robot body? Yoda may have religious objections to the practice so I may honour his preferences while being severely disappointed. I suspect I'd choose the Dark Side of the Force myself. The Sith philosophy seems much more compatible with life extension by whatever means necessary.
5CCC
It should be noted that Yoda has an observable afterlife. Obi-wan had already appeared after his body had died, apparently in full possession of his memories and his reasoning abilities; Yoda proposes to follow in Obi-wan's footsteps, and has good reason to believe that he will be able to do so.
1Kawoomba
Sith philosophy, for reference:

Peace is a lie, there is only passion.
Through passion, I gain strength.
Through strength, I gain power.
Through power, I gain victory.
Through victory, my chains are broken.
The Force shall free me.
8Eliezer Yudkowsky
Actual use of Sith techniques seems to turn people evil at ridiculously accelerated rates. At least in-universe it seems that sensible people would write off this attractive-sounding philosophy as window dressing on an extremely damaging set of psychic techniques.
0nshepperd
If you're lucky, it might grant intrinsic telepathy, as long as the corpse is relatively fresh.
4Qiaochu_Yuan
Nope (can't parse them as approximately human without revulsion). Nope (approximately human).
-2Jiro
I wouldn't eat flies or squids either. But I know that that's a cultural construct. Let's ask another question: would I care if someone else eats Yoda? Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable. If I could be assured that such was not the case (for instance, if there was another culture which ate the dead to honor them, that's why he ate Yoda, and Yoda's will granted permission for this), then no, I wouldn't care if someone else eats Yoda.
2wedrifid
In practice? In common Yoda-eating practice? Something about down to earth 'in practice' empirical observations about things that can not possibly have ever occurred strikes me as broken. Perhaps "would be, presumably, correlated with". In Yoda's case he could even have just asked for permission from Yoda's force ghost. Jedi add a whole new level of meaning to "Living Will".
-6Jiro
7Peter Wildeford
I am a moral anti-realist, so I don't think there's any argument I could give you to persuade you to change your values. To me, it feels very inconsistent to not value animals -- it sounds to me exactly like someone who wants to know an argument for why they ought to care about foreigners. Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction. Though maybe you wouldn't, or you would think the reaction irrational? I don't know. However, if you really do care about humans and humans alone, the environmental argument still has weight, though certainly less. ~ One can get both protein and deliciousness from non-meat sources. ~ I'm not sure. I don't think there's a way I could make that transaction work.

Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.

Some interesting things about this example:

  1. Distance seems to have a huge impact when it comes to the bystander effect, and it's not clear that it's irrational. If you are the person who is clearly best situated to save a puppy from torture, that seems different from the fact that dogs are routinely farmed for meat in other parts of the world, by armies of people you could not hope to personally defeat or control.

  2. Someone who is willing to be sadistic to animals might be sadistic towards humans as well, and so they may be a poor choice to associate with (and possibly a good choice to anti-associate with).

  3. Many first world countries have some sort of law against bestiality. (In the US, this varies by state.) However, any justification for these laws based on the rights of the animals would also rule out related behavior in agribusiness, which is generally legal. There seems to be a difference between what people are allowed to do for fun and what they're allowed to do for profit; this makes sense if we view the laws as being directed not at actions, but at kinds of people.

5Qiaochu_Yuan