Although Less Wrong is (IMO) a philosophy blog, many Less Wrongers tend to disparage mainstream philosophy and emphasize the divergence between our beliefs and theirs. But how different are we really? My intention with this post is to quantify this difference.

The questions I will post as comments to this article are from the 2009 PhilPapers Survey. If you answer "other" on any of the questions, then please reply to that comment in order to elaborate your answer. Later, I'll post another article comparing the answers I obtain from Less Wrongers with those given by the professional philosophers. This should give us some indication about the differences in belief between Less Wrong and mainstream philosophy.

Glossary

analytic-synthetic distinction, A-theory and B-theory, atheism, compatibilism, consequentialism, contextualism, correspondence theory of truth, deontology, egalitarianism, empiricism, Humeanism, libertarianism, mental content externalism, moral realism, moral motivation internalism and externalism, naturalism, nominalism, Newcomb's problem, physicalism, Platonism, rationalism, relativism, scientific realism, trolley problem, theism, virtue ethics

Note

Thanks, pragmatist, for attaching short (mostly accurate) descriptions of the philosophical positions under the poll comments.

Post Script

The polls stopped rendering correctly after the migration to LW 2.0, but the raw data can be found in this repo.

[Poll] Less Wrong and Mainstream Philosophy: How Different are We?
629 comments

One respect in which Less Wrongers resemble mainstream philosophers is that many mainstream philosophers disparage mainstream philosophers and emphasize the divergence between their beliefs and those of rival mainstream philosophers. Indeed, that is something of a tradition in Western philosophy.

I've posted brief explanations for some of the questions as replies to those questions. I haven't posted explanations for those questions that I believe the vast majority of LW users will understand. If you don't understand a question, I'm fairly certain that if you scroll down far enough you'll find a comment from me with an attempt at explication.

[-]Shmi120

Thanks for the clarifications; without them the questions made little sense to me. (Well, even with them most polls appear to be poorly defined false dichotomies, but at least this unfortunate fact becomes clear.)

2komponisto
Unfortunately, I didn't on the libertarianism/egalitarianism one. (I had a plausible guess, but I wanted to be sure that guess was right.)
2Document
While that improves the situation, we're still trusting that the PhilPapers respondents' beliefs about the terms perfectly match the definitions you posted. Too bad we can't survey both groups ourselves (or something).

Beware that some words might mean different things to different communities. For example, if a philosopher calls himself/herself an "anti-reductive naturalist," there's a good chance they are a strict reductionist in the LW sense. It may help to read the "thoughts on specific questions" section of this page of the PhilPapers Survey site.

7Jayson_Virissimo
Excellent point. I'll add a glossary to the article sometime within the next 24 hours in order to diminish some of the confusion.

I've tried to do that already, adding comments below each question that I think might be confusing.

3Jayson_Virissimo
Thanks for that.
[-][anonymous]190

Stop saying these questions are false dichotomies! None of them are, because they all have an 'other' option!

It would be interesting to have a "how well do you think you understand the question?" poll parallel to each question. I'd imagine less consistency on questions where most participants had to look up the terms on Wikipedia prior to answering.

3Jayson_Virissimo
I won't object to people attaching polls to my poll comments, but I won't make a precommitment to making use of them in my analysis of the results.

Normative ethics: consequentialism, deontology or virtue ethics?

[pollid:86]

Consequentialism: The morality of actions depends only on their consequences.

Deontology: There are moral principles that forbid certain actions and encourage other actions purely based on the nature of the action itself, not on its consequences.

Virtue ethics: Ethical theory should not be in the business of evaluating actions. It should be in the business of evaluating character traits. The fundamental question of ethics is not "What makes an action right or wrong?" It is "What makes a person good or bad?"

All three in weighted combination, with consequentialism scaling such that it becomes dominant in high-stakes scenarios but is not dominant elsewhere. I believe that consequentialism, deontology and virtue ethics are mutually reducible and mutually justifying, but that flattening them into any one of the three is bad because it raises the error rate, by making some values much harder to describe and eliminating redundancy in values that would have protected them from corruption.

Thinking about this...

So, yes, in many cases I make decisions based on moral principles, because the alternatives are computationally intractable. And in a few cases I judge character traits as a proxy for doing either. And I endorse all of that, under the circumstances. Which sounds like what you're describing.

But if I discovered that one of my moral principles was causing me to act in ways that had consequences I anti-value, I would endorse discarding that principle. Which seems to me like I'm a consequentialist who sometimes uses moral principles as a processing shortcut.

Were I actually a deontologist, as described here, presumably I would shrug my shoulders, perhaps regret the negative consequences of my moral principle (perhaps not), and go on using it.

Admittedly, I'm not sure I have a crisp understanding of the distinction between moral principles (which consequentialism on this account ignores) and values (on which it depends).

0Matt_Simpson
I voted "other" for the same reason, though I'm less certain about virtue ethics being being equivalent to the other two.

I lean toward consequentialism, but I support something like deontology/virtue ethics for reasons of personal computational tractability.

8drnickbone
How should we vote for "rule consequentialism"? I went for "Lean toward consequentialism" though it is arguably a form of deontology. "Other" is not very precise.
2thomblake
Rule consequentialism is either consequentialism or deontology (or just inconsistent). What makes it the case that you should follow the rules? If it is that following the rules maximizes expected utility, then it's ultimately consequentialism. Otherwise, it's most likely deontology.
0drnickbone
A common formulation is that the "rules" are the ones which if generally adopted as a moral code would maximize expected utility: i.e. there is a form of "best" or "ideal" moral code. However, this can lead to cases where an act which would (by itself) maximize expected utility would also be in violation of the ideal moral code. So the act would be "right" from an act utilitarian point of view, but "wrong" from a rule utilitarian point of view. Relevant examples here could include torturing someone "for a greater good" (such as to stop the infamous ticking time bomb). The logic for torture in such cases seems very sound from an act utilitarian perspective; however, an ideal moral code would have a rule of the form "Don't torture anyone, ever, for any reason, no matter if it appears to lead to a greater good". This, incidentally, is one resolution to Torture vs Dust Specks.
0thomblake
Right, but if the moral code is really ideal on consequentialist grounds and following the rules really leads to better expected outcomes for humans than not doing so, even when it appears otherwise, then the act consequentialist should also agree that you should follow the rule even when it appears to be sub-optimal. On the other hand, if the claim is that an ideal reasoner with full knowledge should follow the rule even when it provably does not maximize expected utility, then that's a form of deontology and a consequentialist should disagree.
3drnickbone
There is a recognized distinction here between a moral decision procedure and the criterion for an action to be right or wrong. Pretty much all serious act utilitarians approve a rule-utilitarian decision procedure, i.e., they recommend moral agents follow the usual moral rules (or heuristics) even in those cases where the agent believes that departing from the rules would lead to better consequences. The justification for such a decision procedure is of course that humans are not ideal reasoners: we cannot predict and evaluate all consequences of our actions (including others imitating us), we do not have an ideal, impartial conception of the good, and we tend to get things horribly wrong when we depart from the rules with the best of intentions. Yet still, by an act utilitarian criterion for "right" and "wrong", a rule-violating action which maximizes expected utility is "right". This leads to some odd situations, whereby the act utilitarian would have to (privately) classify such a rule-violating action as right, but publicly condemn it, call it the "wrong choice", quite possibly punish it, and generally discourage people from following it!
0thomblake
Yes, I was (improperly) ignoring the typically backward-looking nature of act utilitarianism. I kept saying "maximize expected utility" rather than "maximize utility" which resulted in true statements that did not reflect what act utilitarians really say. I blame the principle of charity. EDIT: And if I were being really careful, I'd make sure to phrase "maximize expected utility" in such a way that it's clear that you're maximizing the utility according to your expectations, not maximizing your expectations of utility (wireheading).
7magfrump
I accept consequentialism but I also believe that "acting like I'm following virtue ethics" tends to have the best consequences.
5A1987dM
Voted for "lean toward consequentialism". As someone once put, I consider the “fundamental” rules to be consequentialist¹, but some of the approximations I use because the fundamental rules are infeasible to calculate from scratch every time resemble deontology or virtue ethics, much like QFT and GR are time-reversal symmetric but thermodynamics isn't. Also, ethical injunctions (i.e. fudge factors in my prior probability that certain behaviours will harm someone to compensate for cognitive biases) and TDT-like game-/decision-theoretical considerations make some of my choices resemble deontology, and a term in my utility function for how awesome I am make some of my choices resemble virtue ethics. 1. I assume that, despite the name, people here don't take consequentialism to imply strictly CDT. I still think that in the True Prisoner's Dilemma against a paperclip maximizer known to use the same decision algorithms as ourselves it's immoral to defect.
0A1987dM
Might the reason why so many philosophers don't vote for consequentialism be that they're thinking of purely CDT-based act consequentialism?
4komponisto
Depends again on the level of discourse. Ultimately consequentialism, but a whole lot of deontology and virtue ethics in "real life".
4pragmatist
Moral particularism
4lukeprog
For the record, I consider myself a consequentialist who is also a moral particularist.
1pragmatist
Fair enough. I should have been more specific. I'm a particularist who thinks consequentialist reasoning is appropriate in certain contexts, but deontological reasoning is appropriate in other contexts. So I'm pretty sure "Other" is the right pick for me.
0[anonymous]
Is that possible? Can you think both a) that one should in general act so as to maximise happiness/utility/whatever, and b) that there are no general moral rules? I think that's a contradiction.
4pragmatist
Consequentialism doesn't require a commitment to maximization of any particular variable. It's the claim that only the consequences of actions are relevant to moral evaluation of the actions. I think that's a weak enough claim that you can't really call it a general moral principle. So one could believe that only consequences are morally relevant, but the way in which one evaluates actions based on their consequences does not conform to any general principle. If Luke had said that he's a utilitarian who is also a particularist, that would have been a contradiction.
2[anonymous]
That's a good point. So I should take from Luke's claim that he does not believe one should (as a moral rule) maximise expected utility, or anything like that? And that he would say that it's possible (if perhaps unlikely) for an action to be good even if it minimizes expected utility?
0pragmatist
I probably shouldn't speak for Luke, but I'm guessing the answer to this is yes. If it isn't, then I don't understand how he's a particularist. I don't see why he should be committed to this claim.
0Manfred
I took it to mean that Luke is requiring an agent to be at least somewhat consequentialist before he even thinks of it in terms of a morality.

I don't know the definition of any of the "-ism"s. Should I not answer the questions? I imagine that others will be in the same position as I am.

EDIT: Thanks to pragmatist for the explanations!

1Jayson_Virissimo
If you can spare the time, then read pragmatist's excellent summaries or click on the link for any term you don't understand in my glossary. Otherwise, only answer the ones you understand.

Meta-ethics: moral realism or moral anti-realism?

[pollid:84]