A while ago, Yvain released the results of the 2012 survey (which I participated in), containing responses from over 1000 readers. So I loaded the data into Stata and fooled around looking for cool things.

Mainly, I've heard some people argue that the moral theory you hold has little to no impact on your actual day-to-day behavior. I want to use these survey results to see whether this is true -- are consequentialists more likely to donate money than deontologists? Are virtue ethicists more likely to be vegetarian? We'll see!

While I can't make any claims about the representativeness of this survey or the external validity of conclusions drawn from its results, at least among the people who took the survey, the ethical theories people endorse seem to have little impact on their actual self-reported behavior.

 

The Ethical Theories

First, a breakdown of the actual ethical theories people endorse. Respondents were asked to categorize themselves into "accept / lean towards consequentialism", "accept / lean towards deontology", "accept / lean towards virtue ethics", or "other / no answer" (N = 1055):

Consequentialism 63.41%
Deontology 3.98%
Virtue Ethics 14.41%
Other / No Answer 18.20%

(Note that "no answer" refers to people who specifically chose to mark that they had no answer. Other people skipped the question entirely, choosing no option at all; those people are not included in this analysis.)

 

Ethical Theories and Donation

So first up, I want to see whether your ethical theory predicts how much money you donate, if any. There is a famous connection between utilitarianism and arguments like Peter Singer's, which suggest you should donate your money until it really hurts (you give so much that you become nearly as poor as the people you're trying to help). Certainly consequentialism is not utilitarianism, but I would expect consequentialists to endorse donation more than deontologists or virtue ethicists, for whom donations aren't as mandatory.

So I took the charity data and dropped all the non-numerical answers. LessWrongers have donated an average of $445.15 (N = 879, SD = 1167.095, min = 0, max = 9000) to charity over the past year. As a better proxy for "effective charity", LessWrongers donate an average of $331.05 (N = 884, SD = 4087.303, min = 0, max = 110000) to SIAI and CFAR. (I don't know why the maximum is higher on the SIAI/CFAR question than on the all-inclusive charity question...)
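(For anyone who wants to replicate this step, here's a minimal sketch of the cleaning in R rather than Stata, reusing the "Charity" column name from gwern's snippet in the comments below; everything else about the file layout is an assumption.)

lw2012 <- read.csv("2012.csv")
# coerce to numeric; non-numerical answers become NA and are dropped,
# mirroring "dropped all the non-numerical answers" above
charity <- suppressWarnings(as.numeric(as.character(lw2012$Charity)))
charity <- charity[!is.na(charity)]
c(N = length(charity), mean = mean(charity), sd = sd(charity),
  min = min(charity), max = max(charity))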

Breaking down generic donations by ethical theory, we get this:

  Mean T-Test p
Consequentialism $479.41 0.266
Deontology $333.89 0.561
Virtue Ethics $433.25 0.887
Other / No Answer $358.54 0.323

Breaking down SIAI/CFAR donations by ethical theory, we get this:

  Mean T-Test p
Consequentialism $420.47 0.391
Deontology $0.00 0.629
Virtue Ethics $86.85 0.459
Other / No Answer $288.44 0.886

What we're seeing is that there are clear differences in the mean amount of money donated, with consequentialists giving the most. However, when we do t-tests (each group compared to everyone not in the group), we find that none of these differences are statistically significant, so we cannot rule out chance as the explanation.
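(The t-tests were run in Stata; an equivalent R sketch, again borrowing the column names and the consequentialism coding from gwern's snippet in the comments, would look like this.)

lw2012  <- read.csv("2012.csv")
charity <- suppressWarnings(as.numeric(as.character(lw2012$Charity)))
is.cons <- as.integer(lw2012$MoralViews) == 2   # level coding assumed, per gwern
keep    <- !is.na(charity) & !is.na(is.cons)
# Welch t-test: consequentialists vs. everyone not in that group
t.test(charity[keep & is.cons], charity[keep & !is.cons])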

This would lead us to suspect that ethical theory has no influence on the amount of money donated...

 

Ethical Theories and Percent of Income Donated

However, I have one more trick in the bag -- these donations don't take into account the income people earn. Many LessWrongers are students, and therefore can't donate much, even if they wanted to. What if we adjusted the donation totals by income, and instead looked at percent of income donated?

Overall, LessWrongers donate 1.75% of their income on average to generic charity (N = 523, SD = 5.70%, min = 0%, max = 88.24%) and 0.49% of their income to SIAI/CFAR (N = 523, SD = 3.11%, min = 0%, max = 52.38%).

(For those keeping score at home, the average income was $49,563.76; N = 602, SD = $59,358.34, min = $0, max = $700,000.)

Breaking down percent of income spent on generic donations by ethical theory, we get this:

  Mean T-Test p
Consequentialism 2.09% 0.069
Deontology 1.09% 0.568
Virtue Ethics 1.36% 0.490
Other / No Answer 0.91% 0.161

Breaking down percent of income spent on SIAI/CFAR donations by ethical theory, we get this:

  Mean T-Test p
Consequentialism 0.60% 0.261
Deontology 0.00% 0.454
Virtue Ethics 0.15% 0.289
Other / No Answer 0.49% 0.993

A bit more can be made of these results -- specifically, there's weak evidence that consequentialists donate more of their income (M = 2.09%) than non-consequentialists (M = 1.14%), with a p-value of 0.069: suggestive, but short of conventional significance. However, there are no differences across any other ethical theories or across SIAI/CFAR donations.

(Though Unnamed concludes that this might just be in-group bias.  Unnamed also does a more thorough analysis and finds more correlations between consequentialism and donation.)

 

Vegetarianism

However, Peter Singer isn't just famous for consequentialist arguments for charity... he's also famous for consequentialist arguments for animal rights, which he argues necessitate veganism, or at least vegetarianism. Are consequentialists more likely to be vegetarian?

  Not a Vegetarian Yes, Vegetarian
Consequentialism 84.39% 15.61%
Deontology 83.78% 16.22%
Virtue Ethics 88.97% 11.03%
Other / No Answer 90.00% 10.00%

Perhaps surprisingly, there is no statistically significant relationship (N = 958, chi2 = 4.73, p = 0.192); in this sample, choice of ethical theory shows no significant correlation with the choice to eat or not eat meat.

(Though Unnamed finds somewhat contrary results that are consistent with a connection -- consequentialists are more likely than non-consequentialists to be vegetarian [p=0.03], this effect holds up when looking at men [p<0.01] but not for women, and as a single analysis, sex and consequentialism both predict vegetarianism [p=0.007 and p=0.02 respectively].)
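(The chi-squared tests reported here and in the later sections are one-liners on the cross-tabulation; a sketch in R, where the "Vegetarian" column name is a guess and "MoralViews" comes from gwern's snippet in the comments:)

lw2012 <- read.csv("2012.csv")
tab <- table(lw2012$MoralViews, lw2012$Vegetarian)   # 4 theories x 2 answers
chisq.test(tab)   # reported above as chi2 = 4.73, p = 0.192, N = 958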

 

Dust Specks

We can also take it into a more abstract realm of theory. Eliezer Yudkowsky in "Torture vs. Dust Specks" outlines a thought experiment:

[H]ere's the moral dilemma. If neither event is going to happen to you personally, but you still had to choose one or the other:
Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 [an obnoxiously and unfathomably large number] people get dust specks in their eyes?
I think the answer is obvious. How about you?

According to Yudkowsky, consequentialists (at least of the utilitarian variety) should choose torture over dust specks, since less total harm occurs. Does this turn out to actually happen?

  Torture Dust Specks
Consequentialism 45.19% 54.81%
Deontology 11.11% 88.89%
Virtue Ethics 13.79% 86.21%
Other / No Answer 35.85% 64.15%

Here, we see another statistically significant relationship (N = 636, chi2 = 39.31, p < 0.001), and it goes exactly as expected. People's ethical theories seem to influence their choice in this scenario (or the other way around, or a third variable is at work).

 

Politics

Going back into the practical realm, let's look at politics.

  Communist Conservative Liberal Libertarian Socialist
Consequentialism 0.30% 1.82% 41.03% 29.64% 27.20%
Deontology 2.38% 16.67% 21.43% 26.19% 33.33%
Virtue Ethics 1.99% 5.30% 34.44% 28.48% 29.80%
Other / No Answer 0.00% 2.69% 31.18% 34.41% 31.72%

Here, there is a statistically significant relationship (N = 1037, chi2 = 50.91, p < 0.001) between ethical theories and political beliefs -- the plurality of consequentialists and virtue ethicists are liberal, the plurality of deontologists are socialist, and the plurality of the other / no answer group are libertarian.

 

Religion

Continuing along a similar path, next let's look at religion:

  Agnostic Atheist, not spiritual Atheist, spiritual Theist, committed Deist/Pantheist Theist, lukewarm
Consequentialism 5.71% 81.35% 8.72% 1.35% 1.35% 1.50%
Deontology 11.90% 54.76% 7.14% 11.90% 7.14% 7.14%
Virtue Ethics 12.50% 61.18% 13.16% 5.26% 3.95% 3.95%
Other / No Answer 10.00% 75.79% 6.84% 3.16% 1.05% 3.16%

Again, another statistically significant relationship (N = 1049, chi2 = 64.74, p < 0.001) between choice of ethical theory and religious beliefs -- consequentialists were more likely than deontologists and virtue ethicists to not be religious.

 

Sequences and Meetups

And for a bit of bonus material, here's another interesting finding -- there is also a statistically significant relationship (N = 1052, chi2 = 128.43, p < 0.001) between the ethical theory endorsed and how much of the Sequences people have read. Whether the Sequences convince people to adopt more consequentialist theories, whether people with consequentialist theories are more likely to enjoy and therefore keep reading the Sequences, or whether some hidden third variable is at work, I cannot determine from the current data.

  ~1/4 of Sequences Read 1/2 of Sequences 3/4 of Sequences Never looked Nearly all Never heard of 'em Some, but <1/4
Consequentialism 11.83% 14.82% 21.71% 0.60% 31.44% 4.49% 15.12%
Deontology 14.29% 7.14% 9.52% 4.76% 19.05% 11.90% 33.33%
Virtue Ethics 11.26% 12.58% 10.60% 1.99% 15.23% 20.53% 27.81%
Other / No Answer 14.66% 14.14% 10.99% 6.28% 16.75% 11.52% 25.65%

And this trend continues among those who have been to a LessWrong meetup (N = 1044, chi2 = 34.27, p < 0.001):

  Never Been to a Meetup Been to a Meetup
Consequentialism 66.52% 33.48%
Deontology 92.68% 7.32%
Virtue Ethics 84.00% 16.00%
Other / No Answer 79.14% 20.86%

 

Conclusion

In meta-ethics, there is a distinction between moral internalism, which is the theory that moral beliefs must be motivating, and moral externalism, which is the theory that you can have a moral belief and not be motivated to follow it. For instance, if moral externalism is true, you can legitimately think that eating meat is morally wrong but still eat meat.

Now, moral internalism and externalism are more about the semantics of moral statements -- what moral statements refer to -- and less about actual behavior. Even if people claim that eating meat is morally wrong while still eating meat, it's easy enough for the internalist to deny that the meat eater was telling the truth or making a coherent statement.

However, when it comes to the results of the LessWrong 2012 survey, the evidence that choice of ethical theory influences actual behavior is very mixed -- there's weak evidence that consequentialists are more likely to donate, and even those who do donate are mostly not giving the 10% that Giving What We Can advocates, let alone giving up everything and becoming as poor as those they're trying to help.

Furthermore, vegetarianism shows no significant dependence on ethical theory (though see Unnamed's contrary reanalysis), while ethical theory does predict people's choice of torture vs. dust specks in the expected direction, at least for slightly less than half of the consequentialist sample. Additionally, there are relationships between ethics and politics, and between ethics and religion.

This essay has no intention of making a normative point. Certainly there are all sorts of consequentialist theories that don't require you to donate all your money and never eat meat again, and I'm making no accusations of hypocrisy. However, this survey does seem to confirm what Chris Hallquist has previously noted -- moral beliefs don't seem to motivate much, at least for the average person.

(I also cross-posted this on my blog.)

Comments

According to Yudkowsky, consequentialists should choose torture over dust specks, since less total harm occurs. Does this turn out to actually happen?

[...]

Here, we see another statistically significant relationship (N = 636, chi2 = 39.31, p < 0.001), but it goes in the opposite direction! In a result so surprising I suspect something went wrong with the data somewhere, consequentialists are far more likely to choose torture over dust specks than deontologists or virtue ethicists.

Umm... what's the surprising thing about the results being as predicted? That they fit the prediction so well?

Umm... what's the surprising thing about the results being as predicted? That they fit the prediction so well?

It was apparently later than I thought. Heh. Fixed.

However, when we do t-tests (group compared to all not in the group), we find that none of these differences are statistically significant, which indicates the differences are probably due to chance.

It really is impossible for anyone to understand what a p-value is, isn't it?

Downvoted for snarking without following it up with an explanation.

Could you help me understand what a p-value is? I've heard multiple, different interpretations. I don't think the "probability due to chance" is that widely off the mark, though?

Probability due to chance it is, but probability of what under which assumptions, that's the question. See my reply to magfrump down the thread.

IIRC the p-value is the probability that this is a result from chance. So a p-value of .25 means it's 25% likely to be by chance, and a p-value of .05 means it is 5% likely to happen by chance.

Any p-value less than .5 means that the explanation tested is better than chance; the p-value being statistically significant means even if you measure several things it's STILL more likely not to be chance, instead of an outlier.

EDIT: the subcomments here explain things much better than I did, and I think better than I can, so I leave it to readers to look to them.

Any p-value less than .5 means that the explanation tested is better than chance;

A p-value less than .5 means that the actual experimental result or a more extreme one (what that means depends on one's choice of the null hypothesis and few other things) would happen with less than 0.5 chance if the null hypothesis is true. It does not follow that the explanation is better than (the explanation that the results were obtained by) chance.

Note that:

  1. The p-value depends on the null hypothesis H0 and the results but it does not depend on the tested explanation (in fact there is no explanation causally linked to the test except "the null hypothesis is true/false").
  2. The p-value is equal to P(result or more extreme | H0), which is neither equal to P(H0 | result) nor P(~H0 | result) (and of course not P(an explanation different from H0 | result)) nor related to any of them by a unique relation (even if we forget the "or more extreme" part). Another quantity, typically prior P(H0), is needed to calculate the posterior probability of H0 after observing the result.
  3. The sentences "the obtained result is 27% likely due to chance" and "the result is 27% likely to happen by chance" sound similar, but the former is more likely to be understood as "having obtained this very result we conclude that there is 27% probability that no mechanism distinct from chance has caused it", while the latter is likely to be understood "assuming no mechanism distinct from chance is at work, this result is likely to be obtained with probability 27%". Since humans often misunderstand analogous probabilistic statements, it's wise to be very careful with formulations in such a context, especially when explaining the matter.
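To make point 2 concrete, here is a toy R simulation (illustrative numbers only, not from the survey): the p-value is just the frequency with which the null hypothesis alone generates a result at least as extreme as the one observed.

set.seed(1)
observed  <- 0.62                                 # say, 62 heads in 100 flips
null.sims <- rbinom(10000, size = 100, prob = 0.5) / 100   # worlds where H0 is true
# two-sided: how often does chance alone stray this far from 0.5?
mean(abs(null.sims - 0.5) >= abs(observed - 0.5))
# agrees closely with the exact test:
binom.test(62, 100, p = 0.5)$p.value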

Thanks, this was very helpful. If you don't mind its reuse, I will edit it into the LW wiki so it can be referred to in the future.

I don't mind, of course.

iff the null hypothesis is true

Is this a typo? I think the 'iff' should be an 'if'. The 'only if' implication is false.

Thanks, corrected.

(I wanted to stress that the validity of null hypothesis is really an assumption one has to make here and through a lazy mental shortcut iff appeared suitable as a stronger version of if. Later I realised that it is not only false, but rather "not even false", given that "probability of A if B" is meant to represent p(A|B) - not sure what "probability of A only if B" would represent. But I was too tired to edit the post.)

The p-value is the probability that a result like that could have happened if only chance were at work. That this is not the same as the probability that the result is due to chance is easily seen from the fact that the p-value is inversely correlated with sample size. Surely sample size has no influence on whether there's a real effect to be measured; it only affects how likely we are to detect the effect.

There may be other reasons for thinking that chance is more or less likely; e.g. because there is an extremely plausible causal mechanism, or conversely because there are independent grounds to doubt any meaningful relationship could be present. If so, that can give you good reason for thinking the probability of the chance hypothesis remains lower or higher than the p-value, possibly much lower or much higher. If a study linked prayer and earthquakes with a .001 p-value (and fraud were ruled out as an explanation), it would still surely be most reasonable to think that chance produced an unlikely result (as of course it sometimes does).

The current analysis may include instances of the converse situation, where it seems very unlikely that there is no connection, so it may be more reasonable to think that a small, skewed sample has inflated the p-value, rather than thinking that only chance is at work. I suppose I tend to think it probably does include such cases; I can easily believe that some of the effects of ethical theory on behavior could be very small, small enough to require a very large sample to reliably detect, but zero effect seems a priori unlikely in many of the examples.
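The sample-size point is easy to demonstrate by simulation; in this R sketch (purely illustrative numbers) the underlying effect is fixed at 0.2 standard deviations, yet the typical p-value falls steadily as n grows.

set.seed(1)
p.at <- function(n) t.test(rnorm(n, mean = 0.2), rnorm(n, mean = 0))$p.value
# average p-value over 200 simulated studies at each sample size
sapply(c(50, 500, 5000), function(n) mean(replicate(200, p.at(n))))
# p shrinks as n grows even though the effect size never changes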

Connected to Stuff That Makes Stuff Happen. A null hypothesis could be one where obesity, exercise, and internet use are not connected, or alternatively that exercise (or lack thereof) causes obesity and internet use is unrelated to both. Then you can conduct an experiment and collect evidence for or against the null hypothesis. If p = P(data | null hypothesis is true) < 0.05, a winner is you.


I have taken enough mathematical statistics to know what a p-value is, but I still don't know what it means.

How is it misinterpreted in this case?

It isn't, but maybe people who actually understand them don't use them as often.

According to Yudkowsky, consequentialists should choose torture over dust specks, since less total harm occurs.

I hope Yudkowsky was a little more careful with the wording. Consequentialists (who are also sadistic) would choose specks, and consequentialists (who are also paperclippers) wouldn't care either way unless one option wasted resources that could be used for paperclip manufacture.

(Incidentally, we can tell this isn't Eliezer's wording since his declared usage of 'should' is such that everything 'should' prefer torture to dust specks, whether he, she, or it is a deontologist, consequentialist or UFAI that is indifferent to humanity.)

I hope Yudkowsky was a little more careful with the wording. Consequentialists (who are also sadistic) would choose specks, and consequentialists (who are also paperclippers) wouldn't care either way unless one option wasted resources that could be used for paperclip manufacture.

Yudkowsky is well aware of that. I'd assume he thinks it true of human values, or at least rational human values, or ...I'll just update the essay to clarify that a bit.

Yudkowsky is well aware of that. I'd assume he thinks it true of human values, or at least rational human values, or ...

I don't think he would admit rational as an attribute to values.

I meant rational insofar as avoiding scope insensitivity.

'Rational human values' still looks unyudkowskian, though.

Edit: unless understood as 'values of a rational human' rather 'rational values of a human'. I need to pay more attention to such double readings in English.

I agree Yudkowsky probably wouldn't word it that way. My apologies.

Certainly nothing to apologise for. I should apologise for my nitpickery.

Amounts given to charity have a highly skewed distribution, which makes it hard to find effects if you run analyses on the raw numbers. It can help to take logs, or to just look at categorical variables like giving anything vs. giving nothing.

I looked at the data using the second of those two approaches, seeing what percent of people donated any amount >0. I also combined moral views into two categories, consequentialists vs. everyone else (for simplicity & because of small sample sizes). The numbers:

"...have you donated to charity over the past year"
60.9% of consequentialists
60.7% of non-consequentialists
p = .95

"...have you donated to SIAI or CFAR in the past year"
15.1% of consequentialists
5.6% of non-consequentialists
p < .0001

"...have you donated to anti-aging related charities like SENS over the past year"
2.8% of consequentialists
0.6% of non-consequentialists
p = .02

So consequentialists are more likely than non-consequentialists to give to the weird ingroupy charities, but not more likely to give to charity overall. I played around with the data a bit in other ways (e.g., looking at log-giving, controlling for income) and this pattern seemed to hold up.
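(The p-values above come from comparing two proportions; a minimal R sketch with made-up round counts, since the survey data would supply the real ones:)

# ~15% of ~700 consequentialists vs. ~5.6% of ~450 non-consequentialists giving
prop.test(x = c(105, 25), n = c(700, 450))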

It's not clear if this relationship is causal; my guess is that it's (at least mostly) not. Consequentialism is associated with closer ties to the LW community on several measures, including sequence-reading, meetup attendance, and the other components of the composite LW exposure variable I've used elsewhere (karma, time in community, and LW use). Controlling statistically for LW exposure weakens the association between consequentialism & giving to SI/CFAR, and leaves it only marginally statistically significant (p = .053).

Consequentialism is also associated with various other elements of the local memeplex, including p(ManyWorlds), p(AntiAgathics), p(Simulation), and personal cryonics status/plans (but not statistically significantly with p(Cryonics)). Which suggests that it's not something about morality/donating in particular.

I realize that I didn't do a very thorough job of looking at total charitable giving, so I did some more analyses.

Summary: People tend to give more if they are older, richer, or more religious. Consequentialists tend to be younger, but after controlling for age they do tend to give more than non-consequentialists (p=.02). Consequentialists also tend to be less religious, and if you control statistically for that as well then the relationship between consequentialism and giving is even stronger (p=.006).

log(charity+1) seems like the best variable to look at for total charitable giving - it has roughly a normal distribution plus a big point mass off in the left tail at zero giving (which stays at zero after adding 1 & taking the log), n=952. There is a weak & nonsignificant trend for consequentialists to have higher log(charity+1), p=.30. Average log(charity+1) is 3.21 vs. 3.01 for the 2 groups, which means that the geometric means are e^3.21 = $25 vs. e^3.01 = $20, a 1.2x ratio.

There are other variables which have stronger, clearer relationships with total charitable giving: income, age, and religion. People with more money give more, older people give more, and more religious people give more. All three variables have independent effects - in a multiple linear regression, age, income, and religiosity are all significant predictors of charitable giving.

(Details on those effects: for income, I used log(income+1000). About a third of the data points are lost with analyses that include income because a lot of people left it blank, and that's a non-random subset as they have lower giving than those who reported their income. The religion effect shows up on p(God) and p(Religion), and on the religious views question treated as categories, and on the religious views question treated as a quantitative scale from 1 (atheist & not spiritual) to 6 (committed theist). I combined the 3 quantitative questions into a single religiosity scale by standardizing each & averaging them.)

What does this tell us about the relationship between consequentialism and giving? Well, consequentialism is negatively correlated with religiosity (consequentialists are less religious) and age (consequentialists are younger), which could hide a positive effect of consequentialism on giving. (It is uncorrelated with income.)

And in fact, in a multiple regression using all four variables as predictors (age, income, religiosity, and consequentialism), all four are statistically significant predictors of charitable giving. The effect of consequentialism is significant at p=.006, and the effect size suggests that consequentialists give 1.9x as much as nonconsequentialists (of the same age, income, and religiosity); $42 vs. $22 geometric means (these are higher than the means before, since it excludes the people who left income blank, who tended to give less). So consequentialism does predict higher giving, controlling for age, income, and religiosity.
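(A sketch of that four-predictor regression in R: "Age", "Income", and "religiosity" are hypothetical column names, with religiosity being the standardized composite described above; "Charity" and "MoralViews" follow gwern's snippet below.)

lw2012 <- read.csv("2012.csv")
lw2012$log.charity <- log1p(suppressWarnings(as.numeric(as.character(lw2012$Charity))))
lw2012$log.income  <- log(suppressWarnings(as.numeric(as.character(lw2012$Income))) + 1000)
lw2012$cons        <- as.integer(lw2012$MoralViews) == 2
# assumes Age and religiosity columns exist (names hypothetical)
summary(lm(log.charity ~ Age + log.income + religiosity + cons, data = lw2012))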

It's not clear if we should be controlling for religiosity in this way, though - if someone buys into the idea package which includes atheism & consequentialism then maybe they shouldn't get credit for being more generous than those who accept atheism but not consequentialism (especially when other idea packages, like theistic ones, are associated with higher generosity). But controlling only for age & income still leaves the effect of consequentialism statistically significant, p=.02, with consequentialists giving 1.7x as much as non-consequentialists of the same age & income ($40 vs. $23).

There is also some concern with controlling for income, especially because of the missing data (from a non-random third of the sample). There are also possible issues with endogeneity (e.g., people who choose a high income job so they can give more money to charity), although that suggests that controlling for income could lead to understating the effect of consequentialism rather than to overstating it. But it turns out to not be worth worrying about; running an analysis which controls only for age, consequentialism is still a significant predictor of giving, p=.02, 1.6x the giving ($27 vs. $17).

And there aren't similar concerns about statistically controlling for age, since age is an exogenous variable - the causal arrows can only go in one direction there. Another way to account for age is to just exclude everyone under the age of 25 (since many people in that age range aren't financially independent, so their giving rates aren't that informative). Of those aged 25+, consequentialists give more than non-consequentialists, p = .04, 1.7x the giving ($56 vs. $32), n = 526.

One more analysis, inspired in part by Gwern's comment here.

In my first analysis I broke people into two groups using a cutoff at the bottom of the distribution, giving anything vs. giving nothing. But if we care about how much money charities receive, then we should care more about the top of the distribution, because that's where most of the money is coming from (80-20 rule and all).

So, let's focus on the top of the distribution by putting a cutoff up there. $1000 is an especially convenient Schelling point, since it is the 80-20 point among donors: 21.8% of those who gave to charity gave $1000+, and they accounted for 81.4% of the donations.

1164 people answered the question about moral views, and 10.8% of those reported giving $1000 or more to charity (this counts those who left the charity question blank in the under-$1000 group). Breaking that down by consequentialism:

12.5% of consequentialists gave $1000+
7.7% of non-consequentialists gave $1000+
p = .01.

Earlier, I found that consequentialists and non-consequentialists are equally likely to give to charity (vs. giving nothing). But here, we see that consequentialists are more likely to be big-money donors ($1000+).

Age, income, and religiosity are also significantly predictive of giving $1000+, and consequentialism remains a significant predictor (p=.002) after controlling for them.
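(Since giving $1000+ is a binary outcome, the natural tool is a logistic regression; a sketch continuing from the hypothetical column names in the regression sketch above:)

charity <- suppressWarnings(as.numeric(as.character(lw2012$Charity)))
lw2012$big.donor <- !is.na(charity) & charity >= 1000   # blanks count as under $1000
summary(glm(big.donor ~ Age + log.income + religiosity + cons,
            family = binomial, data = lw2012))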

Logging the total charity donations:

R> lw2012 <- read.csv("2012.csv")
R> consequentialism <- lw2012[as.integer(lw2012$MoralViews) == 2,]
R> consequentialism <- log1p(as.integer(as.character(consequentialism$Charity)))
R> consequentialism <- consequentialism[!is.na(consequentialism) & consequentialism != 0]
R> others <- lw2012[as.integer(lw2012$MoralViews) != 2,]
R> others <- log1p(as.integer(as.character(others$Charity)))
R> others <- others[!is.na(others) & others != 0]
R> t.test(consequentialism, others)
    Welch Two Sample t-test

data:  consequentialism and others
t = 1.687, df = 411.6, p-value = 0.09238
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.04613  0.60421
sample estimates:
mean of x mean of y
    5.241     4.962

It looks like you threw out the people who gave 0 to charity when you took the log. I typically use ln(x+1) for these types of variables, which maps zero to zero.

In this case, your approach leads to stronger effects (or at least lower p values). Repeating your analysis with the full data set (n=575), it's actually statistically significant on its own (Welch t=2.03, p=.043), means of 5.25 vs. 4.92. Excluding non-givers also gives lower p-values in the analyses I described here; when controlling just for age the effect of consequentialism is significant at p=.001 (geometric means $204 vs. $121). Even though the trend is for consequentialists to be more likely to give than non-consequentialists, it looks like the added noise of having all those points at zero has a bigger effect.
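(The practical difference between the two transforms, in brief:)

log(0)     # -Inf: zero-givers must be thrown out, as in the snippet above
log1p(0)   # 0: zero-givers stay in the analysis
exp(mean(log1p(c(0, 10, 100, 1000)))) - 1   # the geometric-mean-style average both analyses report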

I threw out zero because it seems like a non-response to me in a lot of questions. Someone giving a number shows they have at least responded with a non-default and did donate something, while someone leaving a zero may be simply equivalent to people who left empty responses.

(While I'm commenting on my analysis: from a utilitarian perspective, comparing logs doesn't even make sense; personal utility may follow some sort of logarithm in money, but charities don't - the problems are just too big for any one person.)

Interestingly for the memeplex theory (which seems very plausible), consequentialists are also more likely to indicate understanding of Newcomb's Problem (by choosing one-box or two-box over not sure or no answer) [chi2 = 16.1393, p = 0.001], but not more likely to actually one-box over two-box (chi2 = 4.1755, p = 0.243).

Looking at the vegetarianism question more closely, there actually does seem to be some sign of a relationship between consequentialism and vegetarianism.

In the numbers reported in the OP, consequentialists do have a higher rate of vegetarianism than any of the other groups except for deontologists (who have a small sample size - it looks like there are 6 deontologist vegetarians).

Combining the three types of non-consequentialists (virtue ethicists, deontologists, and other / no answer) into a single group of non-consequentialists:
15.7% of consequentialists are vegetarian
10.9% of non-consequentialists are vegetarian

That is not such a small difference (consequentialists are 1.4x more likely to be vegetarian) and it is statistically significant (p=.03).

There is also a potentially confounding variable which will tend to hide the effects of consequentialism: sex. Women are more likely than men to be vegetarians (both in the general population and in this survey), and less likely to be consequentialists (in this survey).

Looking only at men (n=940),
15.2% of consequentialist men are vegetarian
9.0% of non-consequentialist men are vegetarian
a 1.7x ratio, statistically significant at p < .01

Looking only at women (n=106),
21.5% of consequentialist women are vegetarian
22.2% of non-consequentialist women are vegetarian
no difference, though the sample size is small

Combining both sexes into a single analysis, both sex and consequentialism are statistically significant in predicting vegetarianism, sex at p=.007, consequentialism at p=.02.

(The pattern of means suggests that there might be an interaction effect, with consequentialism only leading to higher vegetarianism among men, but the sample size of women is too small to find out if that effect holds up.)

Age is another potentially confounding variable, which will tend to make the effect of consequentialism look bigger than it is. On this survey, younger people are more likely to be consequentialist, and more likely to be vegetarian.

Predicting vegetarianism based on age, sex, and consequentialism, all three variables are statistically significant, age at p=.03, sex at p=.008, and consequentialism at p=.04.

Peter, why are you surprised that consequentialists are more likely to choose torture over dust specks than others? That seems to me like the expected observation.

Any statistical relationship between self-reported religious beliefs and self-reported charitable giving? (Investigations of the general population have tended to find that there is one, though I don't think it's easy to disentangle real differences from, e.g., some people falsely claiming to be more religious and more charitable than they really are because they think it looks better.)

Peter, why are you surprised that consequentialists are more likely to choose torture over dust specks than others? That seems to me like the expected observation.

It was apparently later than I thought and I eyeballed the relationship backwards. Heh. Fixed.

Any statistical relationship between self-reported religious beliefs and self-reported charitable giving?

Yes, sort of, and in the direction you expect.

I split religion up into theist (Lukewarm theist, committed theist, pantheist/deist/etc., N = 130) and nontheist (agnostic, atheist but spiritual, atheist and not spiritual, N = 745) and did a t-test:

Theists: $597.71 Atheists: $419.34 p-value: 0.109

Then, by percentage of income:

Theists: 2.52% Atheists: 1.62% p-value: 0.203

I'm certainly interested in these topics. Just at the level of philosophical gossip, it seems to be somewhat commonly believed among philosophers I've known that consequentialists and virtue theorists are generally OK, but you can't trust deontologists. I'd be very curious to see some results that are better than vague impressions from gossip. But I have to suspect that the very small number of deontologists in your survey is a serious problem (not your fault, of course). It definitely makes your sample look strange to me (my general impression, partly from other surveys I've seen, is that deontologists are most common, followed by consequentialists and then virtue theorists). That raises the worry that if your sample is atypical in one respect it may be atypical in other respects.

Just at the level of philosophical gossip, it seems to be somewhat commonly believed among philosophers I've known that consequentialists and virtue theorists are generally OK, but you can't trust deontologists. I'd be very curious to see some results that are better than vague impressions from gossip.

Results that show whether or not you can safely trust deontologists? What would you have in mind for testing that?

But I have to suspect that the very small number of deontologists in your survey is a serious problem (not your fault, of course). [...] That raises the worry that if your sample is atypical in one respect it may be atypical in other respects.

There's no denying that it's an atypical sample. Indeed, any sample of people with ethical theories would have to be... the typical person doesn't even know what "consequentialism" means, let alone if they are one. Really, what we're looking for is a good sample of philosophically informed people, and LessWrong isn't such a bad place for that.

Ideally, we would want to sample philosophers at large, which (as lukeprog points out) Schwitzgebel does, with similar findings: no statistically significant differences between ethicists and non-ethicists in paying registration fees, academic society membership, voting, staying in touch with one's mother, vegetarianism, organ and blood donation, responsiveness to student emails, charitable giving, honesty in responding to survey questionnaires, or courteousness at talks.

There have been a variety of studies of various forms of trustworthiness; I don't have a strong preference for any of those I've heard of (I tend to favor many different kinds of studies, generally). I'd previously seen one or two of Schwitzgebel's papers, but those I'd seen before hadn't made distinctions between different kinds of ethicists. Looking at his web page, I see that he does have at least one paper I hadn't seen before which tries to do that, though the samples are not large. Still, as you say, it detects nothing, which does suggest at least that if there is an effect it probably isn't large.

Checking p-values in so many places without applying a Bonferroni correction or other adjustment means you'll think things are more significant than they really are.

Good point. I provide the current statistics as is, and if anyone wants to apply corrections for false positives, go ahead.
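(A minimal sketch of such a correction in R, applied to the p-values from the first two donation tables in the post:)

p.raw <- c(0.266, 0.561, 0.887, 0.323, 0.391, 0.629, 0.459, 0.886)
p.adjust(p.raw, method = "bonferroni")   # Bonferroni: multiply by the number of tests, cap at 1
p.adjust(p.raw, method = "BH")           # Benjamini-Hochberg, a less conservative alternative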

... % of their income .... average income was ...

All discussions of percent of income donated to charity should specify net or gross. These can be very different!

The survey question itself did not specify.

See also Schwitzgebel's many papers on the subject, under the heading "The Relationship Between Moral Reflection and Moral Behavior" on his home page.

I'm surprised that all groups have the same average Big Five Openness -- I expected consequentialists to have higher openness, and deontologists to have lower openness.

Me too. However, I worry that the personality measures are not all that reliable, for reasons that the main survey article mentioned.

Much of that sounds more like systematic effects than statistical fluctuations, though, so unless the systematic effects affect consequentialists and deontologists differently, we should still see a difference in group averages if there was one; and the sample sizes are large enough that the statistical fluctuations should average away anyway.


Are virtue ethics more likely to be vegetarian?

Ehm.. okay what if one follows predator virtues?

[This comment is no longer endorsed by its author]