Part of the sequence: Rationality and Philosophy

Philosophy is notable for the extent to which disagreements with respect to even those most basic questions persist among its most able practitioners, despite the fact that the arguments thought relevant to the disputed questions are typically well-known to all parties to the dispute.

Thomas Kelly

The goal of philosophy is to uncover certain truths... [But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.

Jason Brennan

 

After millennia of debate, philosophers remain heavily divided on many core issues. According to the largest-ever survey of philosophers (the PhilPapers survey), they're split 25%-24%-18% between deontology, consequentialism, and virtue ethics; 35%-27% between empiricism and rationalism; and 57%-27% between physicalism and non-physicalism.

Sometimes, they are even divided on psychological questions that psychologists have already answered: philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to the brain's reward system: for example, many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1

Why are physicists, biologists, and psychologists more prone to reach consensus than philosophers?2 One standard story is that "the method of science is to amass such an enormous mountain of evidence that... scientists cannot ignore it." Hence, religionists might still argue that the Earth is flat or that evolutionary theory and the Big Bang theory are "lies from the pit of hell," and philosophers might still be divided about whether somebody can make a moral judgment they aren't themselves motivated by, but scientists have reached consensus about such things.

In its dependence on masses of evidence and definitive experiments, science doesn't trust your rationality:

Science is built around the assumption that you're too stupid and self-deceiving to just use [probability theory]. After all, if it was that simple, we wouldn't need a social process of science... [Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

Sometimes, you can answer philosophical questions with mountains of evidence, as with the example of moral motivation given above. But for many philosophical problems, overwhelming evidence simply isn't available. Or maybe you can't afford to wait a decade for definitive experiments to be done. Thus, "if you would rather not waste ten years trying to prove the wrong theory," or if you'd like to get the right answer without overwhelming evidence, "you'll need to [tackle] the vastly more difficult problem: listening to evidence that doesn't shout in your ear."

This is why philosophers need rationality training even more desperately than scientists do. Philosophy asks you to get the right answer without evidence that shouts in your ear. The less evidence you have, or the harder it is to interpret, the more rationality you need to get the right answer. (As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)
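(To make the parenthetical concrete, here is a toy calculation with invented numbers, using posterior odds = prior odds x likelihood ratio.)

```python
# Toy Bayes update: with "shouting" evidence the prior barely matters;
# with a whisper, the prior decides the answer. Numbers are illustrative.
def posterior(prior, likelihood_ratio):
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

for lr in (1000, 2):
    for prior in (0.5, 0.1):
        print(f"LR={lr:>4}  prior={prior:.1f}  posterior={posterior(prior, lr):.3f}")
# LR=1000: priors of 0.5 and 0.1 both land near certainty (0.999 vs 0.991).
# LR=2:    the same priors land at 0.667 vs 0.182, so the prior dominates.
```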

Because it tackles so many questions that can't be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn't: we generally are as "stupid and self-deceiving" as science assumes we are. We're "predictably irrational" and all that.

But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one's rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?

Alas, it doesn't seem so. I don't see much Kahneman & Tversky in philosophy syllabi — just lightweight "critical thinking" classes and lists of informal fallacies. But even classes in human bias might not improve things much, due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don't like. So what's really needed is regular habit training: genuine curiosity, mitigation of motivated cognition, and so on.

(Imagine a world in which Frank Jackson's famous reversal on the knowledge argument wasn't news — because established philosophers changed their minds all the time. Imagine a world in which philosophers were fine-tuned enough to reach consensus on 10 bits of evidence rather than 1,000.)

We might also ask: How well do philosophers perform on standard tests of rationality, for example the Cognitive Reflection Test (CRT; Frederick 2005)? Livengood et al. (2010) found, via an internet survey, that subjects with graduate-level philosophy training had a mean CRT score of 1.32. (The best possible score is 3.)

A score of 1.32 isn't radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is, however, noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees of a LessWrong.com meetup group (2.69).

Moreover, several studies show that philosophers are just as prone to particular biases as laypeople (Schulz et al. 2011; Tobia et al. 2012), for example order effects in moral judgment (Schwitzgebel & Cushman 2012).

People are typically excited about the Center for Applied Rationality because it teaches thinking skills that can improve one's happiness and effectiveness. That excites me, too. But I hope that in the long run CFAR will also help produce better philosophers, because it looks to me like we need top-notch philosophical work to secure a desirable future for humanity.3

 

Next post: Train Philosophers with Pearl and Kahneman, not Plato and Kant

Previous post: Intuitions Aren't Shared That Way

 

 

Notes

1 Clearly, many philosophers have advanced versions of motivational internalism that are directly contradicted by these results from psychology. However, we don't know exactly which version of motivational internalism is defended by each survey participant who said they "accept" or "lean toward" motivational internalism. Perhaps many of them defend weakened versions of motivational internalism, such as those discussed in section 3.1 of May (forthcoming).

2 Mathematicians reach even stronger consensus than physicists, but they don't appeal to what is usually thought of as "mountains of evidence." What's going on there? Mathematicians and philosophers almost always agree about whether a proof or an argument is valid, given a particular formal system. The difference is that a mathematician's premises consist of axioms and theorems already strongly proven, whereas a philosopher's premises consist of substantive claims about the world for which the evidence given is often very weak (e.g., that philosopher's own intuitions).

3 Bostrom (2000); Yudkowsky (2008); Muehlhauser (2011).

Comments

A minor (but important) nitpick:

[Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

Science sets up experiments not just because it does not trust you, but because even if you were a perfect Bayesian, you could not determine cause-and-effect relationships just from using Bayes' theorem a lot.

Sure. A good clarification.

Right! Besides just Bayes's Theorem, you'd also need Occam's Razor as a simplicity prior over causal structures. And, to drive the probability of a causal structure high enough, you'd need confidence that you'd observed in sufficient detail to drive down the probability of extra confounding or intervening variables.

Since the latter part is sometimes difficult, though not theoretically impossible, to achieve in fields like medicine, a randomized experiment (in which you trust that your random numbers will probably satisfy the Markov condition relative to other background variables) can more quickly give you confidence about some directions on causal arrows when the combination of effect size and sample size is large enough. Naturally, all of this is a mere special case of Bayesian reasoning on possible causal structures, where (1) you start out very confident that some random numbers are conditionally independent of all their non-descendants in the graph, and (2) you start out very confident that your randomized experimental procedure causally connects to a single descendant node in that graph (the independent variable).

(a) You don't need to observe confounders to learn structure from data. In fact, sometimes you don't need any standard conditional independence at all. (Luke gave me the impression SI wasn't very interested in that point -- maybe it should be).

(b) Occam's razor / faithfulness gives you enough to learn the structure of statistical models, not causal ones. You need additional assumptions to equate the statistical models you learn with causal models. Bayesian networks are not causal models. Causality is not about conditional independence; it is about counterfactual invariance, that is, causality expresses what changes or stays the same after a hypothetical 'wiggle.'

There is no guarantee, even given that Occam's razor and faithfulness hold, that the graph you obtain is such that if I wiggle a parent, the child will change. To verify your causal assumptions, you have to run an experiment, or no scientist will believe your graph is causal. This is what real causal discovery papers do, for example:

http://www.sciencemag.org/content/308/5721/523.abstract

Here they learned a protein signaling network, but then implemented an experiment where they changed the protein level of a paren…
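(A minimal simulation of point (b) above; the linear-Gaussian numbers are invented for illustration, not taken from the thread. Two Markov-equivalent chains fit the same observational data yet disagree about what a "wiggle" of b does.)

```python
# Illustrative sketch: a -> b -> c and c -> b -> a can encode the same
# joint distribution, but make different predictions under do(b).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational data generated by the "true" chain a -> b -> c.
a = rng.normal(size=n)
b = a + rng.normal(size=n)
c = b + rng.normal(size=n)
print(f"observational corr(a, c): {np.corrcoef(a, c)[0, 1]:.2f}")  # ~0.58

# Intervene: do(b := 0) according to a -> b -> c.
# c still listens to b; a is untouched; the a-c dependence vanishes.
c_do = 0.0 + rng.normal(size=n)
print(f"corr(a, c) under do(b=0): {np.corrcoef(a, c_do)[0, 1]:.2f}")  # ~0.00

# The reversed chain c -> b -> a predicts the mirror image under do(b := 0):
# c's distribution is untouched and a gets rewritten. No amount of purely
# observational data distinguishes the two factorizations.
```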

This sounds like we're talking past each other somehow. Your point (a) is not clear to me - I was saying that to learn a sufficiently high-probability causal model from non-intervention data, you need to have observed the data in sufficient detail to rule out confounders (except at some low probability), via simplicity priors (which otherwise can't drive down the probability of an untestable invisible confounder all that far). This can certainly be done in principle, e.g. if you put the system under a microscope with a higher resolution than the system, and verified there were only X kinds of stuff in it and no others.

Your point (b) sounds just plain wrong to me. If you have a simplicity prior over causal models, and you can derive testable probable predictions from causal models, then you can do Bayesian updating and get a posterior over causal models. Substituting the word "flammable fizzbins" for "causal models" in the preceding sentence will produce another true sentence. I think you mean something different by "Bayesian" and "Occam's Razor" than I do.

By (a) I mean that you can sometimes get the true graph exactly even without having to observe confounders. Actually this was sort of known already (see the FCI algorithm, or even the IC* algorithm in Pearl's book), but we can do a lot better than that. For example, if we have the true graph:

a -> b -> c -> d, with a <- u1 -> c and a <- u2 -> d, where we do not observe u1, u2, and u1, u2 are very complicated, then we can figure out the true graph exactly by independence-type techniques without having to observe u1 and u2. Note: the marginal distribution p(a,b,c,d) that came from this graph has no conditional independences at all (checkable by d-separation on a, b, c, d), so typical techniques fail.
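(One can check the claim mechanically. A sketch, assuming a NetworkX version that provides nx.d_separated; newer releases rename it to nx.is_d_separator.)

```python
# Enumerate every candidate conditional independence among the observed
# nodes of the example graph above and test it by d-separation.
from itertools import combinations
import networkx as nx

G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "d"),
                ("u1", "a"), ("u1", "c"),   # latent confounder u1
                ("u2", "a"), ("u2", "d")])  # latent confounder u2
observed = ["a", "b", "c", "d"]

for x, y in combinations(observed, 2):
    rest = [v for v in observed if v not in (x, y)]
    for k in range(len(rest) + 1):
        for z in combinations(rest, k):
            if nx.d_separated(G, {x}, {y}, set(z)):
                print(f"{x} independent of {y} given {set(z)}")
# If the claim holds, this prints nothing: no two observed variables are
# d-separated by any subset of the others, so the marginal p(a,b,c,d)
# offers no conditional independences for constraint-based learning.
```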


(b) is I guess "a subtle issue" -- but my point is about careful language use and keeping causal and statistical issues clear and separate.

A "Bayesian network" (or "belief network" -- I don't like the word Bayesian here because it is confusing the issue, you can use frequentist techniques with belief networks if you wanted, in fact a lot of folks do) is a joint distribution that factorizes as a DAG. That's it. Nothing about causality. ... (read more)

Well, this is very rapidly getting us into complex territory that future decision-theory posts will hopefully explore, but a very brief answer would be that I am unwilling to define anything fundamental in terms of do() operations because our universe does not contain any do() operations, and counterfactuals are not allowed to be part of our fundamental ontology because nothing counterfactual actually exists and no counterfactual universes are ever observed. There are quarks and electrons, or rather amplitude distributions over joint quark and lepton fields; but there is no do() in physics.

Causality seems to exist, in the sense that the universe seems completely causally structured - there is causality in physics. On a microscopic level where no "experiments" ever take place and there are no uncertainties, the microfuture is still related to the micropast with a neighborhood-structure whose laws would yield a continuous analogue of D-separation if we became uncertain of any variables.

Counterfactuals are human hypothetical constructs built on top of high-level models of this actually-existing causality. Experiments do not perform actual interventions and access alternat…

As an additional data point, I also still do not have a very good understanding of your ideas about causality (although I did note earlier that it seems rather different from Pearl's (which are similar to Ilya's)). I also note that nobody else seems to have a good understanding of your ideas, at least not enough to try to build upon them either here on LW or on the decision theory mailing list or try to explain them to me when I asked.

Eliezer Yudkowsky:
Interesting. Sorry to bother you further, but can I ask you to quote a particular sentence or paragraph above that seems unclear? Or was the above clear, but it implies other questions that aren't clear, or the motivations aren't clear?

As a third data point, I used to be very confused about your ideas about causality, but your recent writing has helped a lot. To make embarrassingly clear how very wrong I've been able to be: some years ago, when you'd told us about TDT but not given details, I thought you had a fully worked-out and justified theory about how a decision agent could use causal graphs to model its uncertainty about the output of platonic computations, and use do() on its own output to compute the utility of different courses of action, and I got very frustrated when I simply couldn't figure out how to fill in the details of that...

...hmm. (I should probably clarify: when I say "use causal graphs to reason about", I don't mean in the 'trivial' sense you are actually using where the platonic computations cause other things but are themselves uncaused in the model; I mean some sort of system where different computations and/or logical facts about computations form a non-degenerate graph, and where do() severs one node somewhere in the middle of that graph from its parents.) "And", I was going to say, "when you finally did tell us more, I had a strong oh moment when you said that y…

IlyaShpitser:

Because causality is not about efficiently encoding anything. A causal process a -> b -> c is equally efficiently encoded via c -> b -> a.

This is not true, for lots of reasons, one of them having to do with "observational equivalence." A given causal graph has many different graphs with which it agrees on all observable constraints. All these other graphs are not causal. The 3-node chain above is one example.
Benya:

Sorry, I understand the technical point about causal graphs you are referring to, but I do not understand the argument you're trying to make with it in this context.

Suppose it's the year 2100, and we have figured out the true underlying laws of physics, and it turns out that we run on a cellular automaton, and we have some very large and energy-intensive instruments that allow us to set up experiments where we can precisely set up the states of individual primitive cells. Now we want to use probabilistic reasoning to examine the time evolution of a cluster of such cells if we have only probabilistic information about the boundary conditions. Since this is a completely ordinary cellular automaton, we can describe it using a causal model, where the state of a cell at time t+1 is caused by its own state and the state of its neighbours at time t. In this case, causality is really fundamentally there in the laws of physics (in a discrete analog of what we suspect for our actual laws of physics).

And though you can't reach in from the outside of the universe, it's possible to imagine scenarios where you could do the equivalent of do() on some of the cells in your experiment, though it wouldn't really be done by acausally changing what happens in the universe -- one way to imagine it is that your experiment runs only in a two-dimensional slice surrounded by a "vacuum" of cells in a "zero" state, and you can reach in through that vacuum to change one of the cells in the two-dimensional grid.

But when it comes to how to model this inside a computer, it seems that you can reach all the conclusions you need by "ordinary" probabilistic reasoning: For example, you could start with say a uniform joint probability distribution over the state of all cells in your experiment at all times; then you condition on the fact that they fulfill the laws of physics, i.e. the time evolution rule of the cellular automaton; then you condition again on what you know about the boundary conditi…
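(Benya's construction can be run directly on a toy automaton. In the sketch below, the update rule, grid size, and observed cell are invented stand-ins, not details from the comment.)

```python
# Uniform prior over every cell at every time; condition on the update
# rule holding ("the laws of physics"); condition on boundary knowledge.
from itertools import product

WIDTH, STEPS = 4, 3

def step(row):
    # Invented toy rule: each cell becomes the XOR of itself and its
    # left neighbour, wrapping around.
    return tuple(row[i] ^ row[i - 1] for i in range(WIDTH))

# Keep only the histories consistent with the laws of physics.
histories = [h for h in product(product((0, 1), repeat=WIDTH), repeat=STEPS)
             if all(h[t + 1] == step(h[t]) for t in range(STEPS - 1))]

# Condition on boundary knowledge: we observe cell 0 at time 0 to be 1.
consistent = [h for h in histories if h[0][0] == 1]

# Ordinary probabilistic reasoning now answers questions about other cells.
p = sum(h[-1][0] for h in consistent) / len(consistent)
print(len(histories), len(consistent), p)  # lawful, observed-consistent, P
```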
IlyaShpitser:
It depends on granularity. If you are talking about your game of life world on the level of the rules of the game, that is equivalent to talking about our Universe on the level of the universal wave function. In both cases there are no more agents with actuators and no more do(.), as a result. That is, it's not that your factorization will be causal, it's that there is no causality. But if you are taking a more granular view of your game of life world, similar to the macroscopic view of our Universe, where there are agents that can push and prod their environment, then suddenly talking about do(.) becomes useful for getting things done (just like it is useful to talk about addition or derivatives). On this macroscopic level, there is causality, but then your statement about all factorizations being causal is false (due to obvious examples involving reversing causal chains, for example).

On second thought, the main problem may not be lack of clarity, but that your ideas about causality are too speculative: people either lack confidence that your research program (trying to reduce Pearl's do()-based causality to lower-level "causality in physics") is the right one, or do not see how to proceed.

Both apply for me but the former is perhaps more relevant at this point. Basically I'm not sure that "do()-based causality" will actually end up playing a role in the ultimate "correct" decision theory (I guess if there is lack of clarity, it's why you think that it will), and in the mean time there are other problems that definitely need to be solved and also seem more approachable.

(To explain why I think "do()-based causality" may not end up playing a role, it seems plausible that in an AI or at least decision theory (I wanted to say theoretical decision theory but that seems redundant :), cognition about "high-level causality" just ends up being handled as a special case by a more general algorithm, similar to how an AI programmed to maximize expected utility wouldn't specifically need to be hand-coded with natural language processing if it was running on a sufficiently powerful computer.)

ETA: BTW, can you comment on whether my understanding in this comment was correct, and whether it still applies to Eliezer_2012?

Eliezer Yudkowsky:

You realize I'm arguing against do()-based causality? If not, I was very much unclearer than I thought. I have never tried to reduce causal arrows to similarity; Barbour does, I don't.

I take causality to be, or be the epistemic conjugate of, something physical and real which was involved in manufacturing this oddly-well-modeled-by-causality universe that we actually live in. They are presently primitive in my model; I have not yet reduced them, except in the obvious sense that they are also formal mathematical relations between points, i.e., causal relations are a special case of logical relations (and yet we still live in a causal universe rather than a merely logical one).

I do indeed reduce consciousness to computation and computation to causality, though there's a step here involving magical reality-fluid about which I am still confused - I have no idea why or what it means for a causal process to be more or less real, either as a result of having more or less Born measure, being instantiated in many places, or for any other reason.
Wei Dai:

Maybe it's just me not updating fast enough. My impression is that when you talked about causality prior to today, you usually mentioned Pearl and never said you disagreed with him on anything, so I assumed you wanted to keep his do()-based causality and just add a layer below it. Were you always against do()-based causality or did you change your mind at some point?

Hmm, re-reading Timeless Causality, I don't see how I could have learned that the idea belongs to Barbour and that you disagree with him. It sure sounds like it was your idea.

Why should we care about causality as decision theorists, if we have decision theories that can deal with logical universes in general, and causal relations are just a special case of logical relations?
Eliezer Yudkowsky:
This sounds like a high-priority problem, but actually I don't see any reference to reduction-to-similarity in Timeless Causality, although there's a lot in Barbour's book about it. What do you mean by "mind reduces to computation which reduces to causal arrows which reduces to some sort of similarity relationship between configurations"? Unless this is just in the sense that causal mechanisms are logical relations?
Wei Dai:

I interpreted this paragraph as suggesting that causality reduces to similarity, but given your latest clarifications, I guess what you actually had in mind was that causality tends to produce similarity and so we can infer causality from similarity.

Previously, I thought you considered causality to be a higher level concept rather than a primitive one, similar to "sound waves" or "speech" as opposed to say "particle movements". That sort of made sense except that I didn't know why you wanted to make causality an integral part of decision theory. Now you're saying that you consider causality to be primitive and a special kind of logical relation, which actually makes less sense to me, and still doesn't explain why you want to make causality an integral part of decision theory.

It makes less sense because if we consider the laws of physics as logical relations, they don't have a direction. As you said, "Time-symmetrical laws of physics didn't seem to leave room for asymmetrical causality." I don't see how you get around this problem if you take causality to be primitive. But the bigger problem is that (at the risk of repeating myself too many times) I don't understand your motivation for studying causality, because if I did I'd probably spend more time thinking about it myself and understand your ideas about it better.
Eliezer Yudkowsky:
I'm trying to think like reality. If causality isn't a special kind of logic, why is everything in the known universe made out of (a continuous analogue of) causality instead of logic in general? Why not Time-Turners or a zillion other possibilities?

If causality isn't a special kind of logic, why is everything in the known universe made out of (a continuous analogue of) causality instead of logic in general?

Wait, if causality is a special kind of logic, how does that help answer the question? Don't we still have to answer why the universe is made of this kind of logic instead of some other?

Why not Time-Turners or a zillion other possibilities?

I don't understand how lack of Time-Turners makes you think causality is a special kind of logic or why you want to incorporate causality into decision theory (which is still my bigger question). Similar questions could be asked about other features of the universe:

  • Why does the universe have 3 spatial dimensions instead of a zillion other possibilities?
  • Why don't the laws of physics allow information to be destroyed (i.e., never map 2 different states at time t to the same state at time t+1)?

But we're not concerned about these questions at the level of decision theory, since it seems possible to have a decision theory that works with an arbitrary number of dimensions, and with both kinds of laws of physics. Similarly, I don't see why we can't have a "causality-agnostic" decision theory that works in universes both with and without Time-Turners.

DaFranker:
I think the point was more about whether causality should be thought of as a fundamental part of the rules, like this, or whether it's more useful to think of causality as an abstraction that (ahem, excuse the term) "emerges" from the fundamentals when we try to identify patterns in said fundamentals. Somewhat akin to how "meaning" exists in a computer program despite none of the bits fundamentally meaning anything, I think. My thoughts are becoming more and more confused as I type, though, which makes me wish I had an environment suitable to better concentration.
IlyaShpitser:

Ok, I would like to state for the record that I no longer understand what you mean when you say "factor something as a causal graph" (which may well mean no one else on this site understands either). Basically everything you ever wrote on the subject of causality or causal graphs (other than exposition of standard material) is now a complete mystery to me. In particular, I don't understand what sorts of graphs are in your paper on Newcomb's problem, or why those graphs justify you in drawing any sorts of conclusions about Newcomb's problem.

Graph models are overloaded; there are lots of different models that all have the same graph. You have to explain what you mean if you use graphs.
IlyaShpitser:

I would be interested in reading about this. A few points:

(a) I agree that causality is a "useful fiction" (like real numbers or derivatives).

(b) If you are going to be writing posts about "causal diagrams" you need to be clear about what you mean. Usually by causal diagrams people mean Pearl's stuff, or closely related stuff (agnostic causal models, minimal causal models, etc.). All these models are defined via either do(.) or stronger notation. If you do not mean that by causal diagrams, that's fine! But please explain what you do mean to avoid confusing people. You have a paper on TDT that seems to use causal diagrams. Which ones did you mean in there?

edit: I should say that if your project has "defining actual cause" as a special case, it's probably a black hole from which no one returns (it's the analytic philosophy version of the P/NP problem).

edit 2: I think the derivation of "do(.)" ought to be not dissimilar to the derivation of "+", if you worry about induction problems. "+" is a mathematical fiction very useful for representing regularities involved with handling objects; "do(.)" is a mathematical fiction very useful for representing regularities involved with algorithms with actuators running around.
Eliezer Yudkowsky:

If causality is a useful fiction, it's conjugate to some useful nonfiction; I should like to know what the latter is.

I don't think Pearl's diagrams are defined via do(). I think I disagree with that statement even if you can find Pearl making it. Even if do() - as shorthand for describing experimental procedures involving switches on arrows - does happen to be a procedure you can perform on those diagrams, that's a consequence of the definition; it is not actually part of the representation of the actual causal model. You can write out causal models, and they give predictions - this suffices to define them as hypotheses.

More importantly: How can you possibly make the truth-condition be a correspondence to counterfactual universes that don't actually exist? That's the point of my whole epistemology sequence - truth-conditions get defined relative to some combination of physical reality that actually exists, and valid logical consequences pinned down by axioms. So yes, I would definitely derive do() rather than have it be primitive, and I wouldn't ever talk about the truth-condition of causal models relative to a do() out there in the environment - we talk about the truth-condition of causal models relative to quarks and electrons and quantum fields, to reality.

I'm a bit worried (from some of his comments about causal decision theory) that Pearl may actually believe in free will, or did when he wrote the first edition of Causality. In reality nothing is without parents, nothing is physically uncaused - that's the other problem with do().

I don't think Pearl's diagrams are defined via do(). I think I disagree with that statement even if you can find Pearl making it.

Well, the author is dead, they say.

There are actually two separate causal models in Pearl's book: "causal Bayesian networks" (chapter 1), and "functional models" aka "non-parametric structural equation models" (chapter 7). These models are not the same, in fact functional models are a lot stronger logically (that is they make many more assumptions).

The first is defined via do(.); you can check the definition. The second can be defined either via a set of functions, or via a set of axioms. The two definitions are, I believe, equivalent. The axiomatic approach is valuable in statistics, where we often cannot exhibit the functions that make up the model, and must resort to enumerating assumptions. If you want to take the axiomatic approach you need a language stronger than do(.). In particular you need to be able to express counterfactual statements of the form "I have a headache. Would I have a headache had I taken an aspirin one hour ago?" Pearl's model in chapter 7 actually makes assumptions about counte…

thomblake:
Reading this whole thread, I'm interested to know what your thoughts on causality are. Do you have existing posts on the subject that I should re-read? I was under the impression you pretty much agreed with Pearl, but now that seems not to be the case. By the way, Pearl certainly wasn't arguing from a "free will" perspective - rather, I think he'd agree with "there is no do() in physics" but disagree that "there is causality in physics".
Eliezer Yudkowsky:
Irrelevant question: Isn't (b || d) | a, c?

No, because b -> c <-> a <-> d is an open path if you condition on c and a.

Eliezer Yudkowsky:
Ah, right.
lukeprog:
How? I find myself very interested in this point, just not enough to schedule a lecture about it in the next month, since we have a lot of other things going on, and we're out of town, and so on.
IlyaShpitser:
Fair enough, retracted. Sorry!
pengvado:
On your account, how do you learn causal models from observing someone else perform an experiment? That doesn't involve any interventions or counterfactuals. You only see what actually happens, in a system that includes a scientist.
IlyaShpitser:

That depends what you mean by an "experiment." If you divide a set of patients into a control group and a test group, and then have the test group smoke a pack of cigarettes per day, that is an "experiment" to me, one that is represented by an intervention (because we are forcing the test group to smoke regardless of what they would naturally want to do). Observing that the test group is much more likely to develop cancer would lead me to conclude that the graph smoking -> cancer is a causal graph rather than merely a statistical graph.

If we do not perform the above experiment due to ethical reasons, but instead use observational data on smokers, we have to worry about confounders, like Fisher did. We also have to worry because we are implicitly linking that data with counterfactual situations (what would have happened if those guys we observed had been forced to smoke). This linking isn't "free"; there are assumptions operating in the background, assumptions expressed in a language that can talk about counterfactual situations.
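(A quick simulation of the contrast Ilya describes, with invented numbers: a hidden trait confounds the observational comparison, while randomization recovers the causal effect.)

```python
# Toy model: hidden trait u drives both the choice to smoke and cancer risk.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
u = rng.normal(size=n)  # unobserved confounder

def cancer_risk(smokes):
    # The true causal effect of smoking is +0.5; u adds confounded risk.
    return 0.5 * smokes + u + rng.normal(size=n)

# Observational regime: people choose to smoke, influenced by u.
smokes_obs = (u + rng.normal(size=n)) > 0
y_obs = cancer_risk(smokes_obs.astype(float))
print(y_obs[smokes_obs].mean() - y_obs[~smokes_obs].mean())  # ~1.6 (biased)

# Experimental regime: do(smoke), assignment forced by a coin flip.
smokes_rct = rng.random(n) < 0.5
y_rct = cancer_risk(smokes_rct.astype(float))
print(y_rct[smokes_rct].mean() - y_rct[~smokes_rct].mean())  # ~0.5 (causal)
```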

I'm so glad you post here.

Another extremely serious problem is that there is next to no particularly effective effort in philosophical academia to disregard confused questions, and to move away from naive linguistic realism. The number of philosophical questions of the form 'is x y' that can be resolved by 'depends on your definition of x and y' is deeply depressing. There does not seem to be a strong understanding of how important it is to remember that not all words correspond to natural, or even (in some cases) meaningful categories.

Please list as many examples of these questions as you can muster. (I mean questions, seriously discussed by philosophers, which you claim can be resolved in this way.)

Any discussion of what art is. Any discussion of whether or not the universe is real. Any conversation about whether machines can truly be intelligent. More specifically, the ship of Theseus thought experiment and the related sorites paradox are entirely definitional, as is Edmund Gettier's problem of knowledge. The (appallingly bad, by the way) swamp man argument by Donald Davidson hinges entirely on the belief that words actually refer to things. Shades of this pop up in Searle's Chinese room and other bad thought experiments.

I could go on, but that would require me to actually go out and start reading philosophy papers, and goodness knows I hate that.

Bugmaster:

Your examples include: (1) any discussion of what art is; (2) any discussion of whether or not the universe is real; (3) any conversation about whether machines can truly be intelligent.

I agree that the answers to these questions depend on definitions, but then, so does the answer to the question, "how long is this stick?". Depending on your definition, the answer may be "this many meters long", "depends on which reference frame you're using", "the concept of a fixed length makes no sense at this scale and temperature", or "it's not a stick, it's a cube". That doesn't mean that the question is inherently confused, only that you and your interlocutor have a communication problem.

That said, I believe that questions (1) and (3) are, in fact, questions about humans. They can be rephrased as "what causes humans to interpret an object or a performance as art", and "what kind of things do humans consider to be intelligent". The answers to these questions would be complex, involving multi-modal distributions with fuzzy boundaries, etc., but that still does not necessarily imply that the questions are confused.

Which is not to say that confused questions don't exist, or that modern philosophical academia isn't riddled with them; all I'm saying is that your examples are not convincing.

I agree that the answers to these questions depend on definitions

I think he meant that those questions depend ONLY on definitions.

As in, there's a lot of interesting real-world knowledge that goes into getting a submarine to propel itself; but now that we have that, people asking "can a submarine swim" is only interesting for deciding "should the English word 'swim' apply to the motion of a submarine, which is somewhat like the motion of swimming, but not entirely". That example sounds stupid, but people waste a lot of time on the similar case of "think" instead of "swim".

Bugmaster:
Ok, that's a good point; inserting the word "only" in there does make a huge difference. I also agree with BerryPick6 on this sub-thread.
BerryPick6:
"What causes humans to interpret an object or a performance as art" and "What is art?" may be seen as two entirely different questions to certain philosophers. I'm skeptical that people who frequent this site would make such a distinction, but we aren't talking about LWers here.
Peterdjones:

People who frequent this site already do make parallel distinctions about more LW-friendly topics. For instance, the point of the Art of Rationality is that there is a right way to do thinking and persuading, which is not to say that Reason "just is" whatever happens to persuade or convince people, since people can be persuaded by bad arguments. If that can be made to work, then "it's hanging in a gallery, but it isn't art" can be made to work.

ETA: Rationality is about humans, in a sense, too. The moral is that being "about humans" doesn't imply that the search for norms or real meanings, or genuine/pseudo distinctions, is fruitless.
Bugmaster:
Agreed, but my point was that questions about humans are questions about the Universe (since humans are part of it), and therefore they can be answerable and meaningful. Thus, you could indeed come up with an answer that sounds something like, "it's hanging in a gallery, but our model predicts that it's only 12.5% art". But I agree with BerryPick6 when he says that not all philosophers make that distinction.
nigerweiss:

There's a key distinction that I feel you may be glossing over here. In the case of the stick question, there is an extremely high probability that you and the person you're talking to, though you may not be using exactly the same definitions, are using definitions that are closely enough entangled with observable features of the world to be broadly isomorphic. In other words, there is a good chance that, without either of you adjusting your definitions, you and the neurotypical human you're talking to are likely to be able to come up with some answer that both of you will find satisfying, and that will allow you to meaningfully predict future experiences.

With the three examples I raised, this isn't the case. There are a host of different definitions, which are not closely entangled with simple, observable features of the world. As such, even if you and the person you're talking to have similar life experiences, there is no guarantee that you will come to the same conclusions, because your definitions are likely to be personal, and the outcome of the question depends heavily upon those definitions.

Furthermore, in the three cases I mentioned, unlike the stick, if you hold a given position, it's not at all clear what evidence could persuade you to change your mind, for many possible (and common!) positions. This is a telltale sign of a confused question.
Bugmaster:
I believe that at least two of those definitions could be something like, "what kinds of humans would consider this art ?", or "will machines ever pass the Turing test". These questions are about human actions which express human thoughts, and are indeed observable features of the world. I do agree that there are many other, more personal definitions that are of little use.
Rob Bensinger:
I think we need a clearer idea of what we mean by a 'bad' thought experiment. Sometimes thought experiments are good precisely because they make us recognize (sometimes deliberately) that one of the concepts we imported into the experiment is unworkable. Searle's Chinese room is a good example of this, since it (and a class of similar thought experiments) helps show that our intuitive conceptions of the mental are, on a physicalist account, defective in a variety of ways. The right response is to analyze and revise the problem concepts. The right response is not to simply pretend that the thought experiment was never proposed; the results of thought experiments are data, even if they're only data about our own imaginative faculties.
siodine:

My first thought was "every philosophical thought experiment ever", and to my surprise Wikipedia says there aren't that many thought experiments in philosophy (although they are huge topics of discussion). I think the violinist experiment is uniquely bad. The floating man experiment is another good example, but very old.
Rob Bensinger:
What's your objection to the violinist thought experiment? If you're a utilitarian, perhaps you don't think the waters here are very deep. It's certainly a useful way of deflating and short-circuiting certain other intuitions that block scientific and medicinal progress in much of the developed world, though.
siodine:

From the SEP: the thought experiment depends on your intuitions or your definition of moral obligations and wrongness, but the experiment doesn't make these distinctions. It just pretends that everyone has the same intuition and that the experiment should therefore remain analogous regardless (probably because Judith Jarvis Thomson didn't think anyone else could have different intuitions). And so then you have all these other philosophers and people arguing about this minutiae and adding on further qualifications and modifications, to the point where they may as well be talking about actual abortion.

The thought experiment functions as an informal reductio ad absurdum of the argument 'Fetuses are people. Therefore abortion is immoral.' or 'Fetuses are conscious. Therefore abortion is immoral.' That's all it's doing. If you didn't find the arguments compelling in the first place, then the reductio won't be relevant to you. Likewise, if you think the whole moral framework underlying these anti-abortion arguments is suspect, then you may want to fight things out at the fundaments rather than getting into nitty-gritty details like this. The significance of the violin thought experiment is that you don't need to question the anti-abortionist's premises in order to undermine the most common anti-abortion arguments; they yield consequences all on their own that most anti-abortionists would find unacceptable.

That is the dialectical significance of the above argument. It has nothing to do with assuming that everyone found the original anti-abortion argument plausible. An initially implausible argument that's sufficiently popular may still be worth analyzing and refuting.

Mitchell_Porter:

I am unimpressed by your examples. Can we first agree that some questions are not dissolved by observing that meanings are conventional? If I run up to you and say "My house is on fire, what should I do?", and you tell me "The answer depends, in part, on what you mean by 'house' and 'fire'...", that will not save my possessions from destruction.

If I take your preceding comment at face value, then you are telling me

* there is nothing to think about in pondering the nature of art, it's just a matter of definition
* there is nothing to think about regarding whether the universe exists, it's just a matter of definition
* there's no question of whether artificial intelligence is the same thing as natural intelligence, it's just a matter of definition

and that there's no "house-on-fire" real issue lurking anywhere behind these topics. Is that really what you think?
nigerweiss:

Well, I'm sorry. Please fill out a conversational complaint form and put it in the box, and an HR representative will mail you a more detailed survey in six to eight weeks.

I agree entirely that meaningful questions exist, and made no claim to the contrary. I do not believe, however, that as an institution, modern philosophy is particularly good at identifying those questions. In response to your questions:

* Yes, absolutely.
* Yes, mostly. There are different kinds of existence, but the answer you get out will depend entirely on your definitions.
* Yes, mostly. There are different kinds of possible artificial intelligence, but the question of whether machines can truly be intelligent depends exclusively upon your definition of intelligence.

As a general rule, if you can't imagine any piece of experimental evidence settling a question, it's probably a definitional one.
Mitchell_Porter:

The true nature of art, existence, and intelligence are all substantial topics - highly substantial! In each case, like the physical house-on-fire, there is an object of inquiry independent of the name we give it.

With respect to art - think of the analogous question concerning science. Would you be so quick to claim that whether something is science is purely a matter of definition?

With respect to existence - whether the universe is real - we can distinguish possibilities such as: there really is a universe containing billions of light-years of galaxies full of stars; there is a brain in a vat being fed illusory stimuli, with the real world actually being quite unlike the world described by known physics and astronomy; and even solipsistic metaphysical idealism - there is no matter at all, just a perceiving consciousness having experiences. If I ponder whether the universe is real, I am trying to choose between these and other options. Since I know that the universe appears to be there, I also know that any viable scenario must contain "apparent universe" as an entity. To insist that the reality of the universe is just a matter of definition, you must say that "apparent universe" in all its forms is potentially worthy of the name "actual universe". That's certainly not true to what I would mean by "real". If I ask whether the Andromeda galaxy is real, I mean whether there really is a vast tract of space populated with trillions of stars, etc. A data structure providing a small part of the cosmic backdrop in a simulated experience would not count.

With respect to intelligence - I think the root of the problem here is that you think you already know what intelligence in humans is - that it is fundamentally just computation - and that the boundary between smart computation and dumb computation is obviously arbitrary. It's like thinking of a cloud as "water vapor". Water vapor can congregate on a continuum of scales from invisibly small to kilometers in size, and…
John_Maxwell:
So what's the difference between philosophy and science then?
nigerweiss:
Err... science deals with questions you can settle with evidence? I'm not sure what you're getting at here.
John_Maxwell:
How does your use of the label "philosophical" fit in with your uses of the categories "definitional" and "can be settled by experimental evidence"?

I once met a philosophy professor who was at the time thinking about the problem "Are electrons real?" I asked her what her findings had shown thus far, and she said she thinks they're not real. I then asked her to give me examples of things that are real. She said she doesn't know any examples of such things.

Rob Bensinger:
Not only are pretty much all contemporary philosophers attentive to this fact, but there's an active philosophical literature about the naturalness of some terms as opposed to others, and about how one can reasonably distinguish natural kinds from non-natural ones. Particularly interesting is some of the recent work in metaphilosophy and in particular metametaphysics, which examines whether (or when) ontological disputes are substantive, what is the function of philosophical disputes, when one can be justified in believing a metaphysical doctrine, etc. (Note: This field is not merely awesome because it has a hilarious name.) Don't confuse disagreements about which natural kinds exist, and hence about which disputes are substantive, with disagreements about whether there's a distinction between substantive and non-substantive disputes at all.
bryjnar:

I strongly disagree. Almost every question in philosophy that I've ever studied has some camp of philosophers who reject the question as ill-posed, or want to dissolve it, or some such. Wittgensteinians sometimes take that attitude towards every question. Such philosophers are often not discussed as much as those who propose "big answers", but there's no question that they exist and that any philosopher working in the field is well aware of them. Also, there's a selection effect: people who think question X isn't a proper question tend not to spend their careers publishing on question X!
siodine:
I agree, but the problems remain and the arguments flourish.
nigerweiss:

Sure, there are absolutely philosophers who aren't talking about absolute nonsense. But as an industry, philosophy has a miserably bad signal-to-noise ratio.
bryjnar:
I'd mostly agree, but the particular criticism that you levelled isn't very well-founded. Questioning the way we use language and the way that philosophical questions are put is not the unheard of idea that you portray it as. In fact, it's pretty standard. It's just not necessarily the stuff that people choose to put into most "Intro to the Philosophy of X" textbooks, since there's usually more discussion to be had if the question is well-posed!
Peterdjones:
Please name some contemporary philosophers who are naive linguistic realists.

Your previous post was good, but this one seems to be eliding a few too many issues. If you took a poll of physicists asking them to explain what their fundamental model — quantum mechanics — actually tells us about the world (surely a simple enough question), there would be disagreement comparable to that regarding the philosophical questions you mentioned. The survey you cite is also obviously unhelpful, in that the questions on that survey were chosen because they're controversial. Most philosophical questions are not very controversial, but for that very reason you don't hear much about them. If we hand-picked all the foundational questions physicists disagreed about and conducted a popularity poll, would we be rightly surprised to find that the poll results were divided?

(It's also worth noting that some of the things being measured by the poll are attitudinal and linguistic variation between different philosophical schools and programs, not just doctrinal disagreements. Why should we expect ethicists and philosophers of mathematics to completely agree in methodology and terminology, when we do not expect the same from physicists and biologists?)

There are three reasons philosop…

If you took a poll of physicists asking them to explain what their fundamental model — quantum mechanics — is actually asserting about the world (surely a simple enough question), there would be disagreement comparable to that regarding the philosophical questions you mentioned.

A major problem with modern physics is that there are almost no phenomena known to work in a way that disagrees with how modern physics predicts they would work (in principle; there are lots of inferential/computational difficulties). What physics asserts about the world, to the best of anyone's knowledge, coincides with what's known about most of the world in all detail. The physicists have to build billion-dollar monstrosities like the LHC just to get their hands on something they don't already thoroughly understand. This doesn't resemble the situation with philosophy in the slightest.

Rob Bensinger:
You're speaking in very general terms, and you're not directly answering my question, which was 'what is quantum mechanics asserting about the world?' I take it that what you're asserting amounts to just "It all adds up to normality." But that doesn't answer questions concerning the correct interpretation of quantum mechanics. "x + y + z . . . = normality." That's a great sentiment, but I'm asking about what physics' "x" and "y" and "z" are, not questioning whether the equation itself holds.
Vladimir_Nesov:
I'm pointing out that in particular it's asserting all those things that we know about the world. That's a lot, and the fact that there is consensus and not much arguing about this shouldn't make this achievement a trivial detail. This seems like a significant distinction from philosophy that makes simple analogies between these disciplines extremely suspect. (I agree that I'm not engaging with the main points of your comment; I'm focusing only on this particular aside.)
Rob Bensinger:
So your response to my pointing out that physicists too disagree about basic things, is to point out that physicists don't disagree about everything. In particular, they agree that the world around us exists. Uh... good for them? Philosophers too have been known to harbor a strong suspicion that there is a world, and that it harbors things like chairs and egg timers and volcanoes. Physicists aren't special in that respect. (In particular, see the philosophical literature on Moorean facts.)
Vladimir_Nesov:
Physicists agree about almost everything. In particular, they agree about all specific details about how the world works relevant (in principle) to most things that have ever been observed (this is a lot more detail than "the world exists").
Rob Bensinger:
They agree about the most useful formalisms for modeling and predicting observations. But 'formalism' and 'observation' are not themselves concepts of physics; they are to be analyzed away in the endgame. My request is not for you to assert (or deny) that physicists have very detailed formalisms, or very useful ones; it is for you to consider how much agreement there is about the territory ultimately corresponding to these formalisms. A simple example is the disagreement about which many-worlds-style interpretation is best; and about whether many-worlds-style interpretations are the best interpretations at all; and about whether, if they are the best, whether they're best enough to dominate the probability space. Since the final truth-conditions and referents of all our macro- and micro-physical discourse depends on this interpretation, one cannot duck the question 'what are chairs?' or 'what are electrons?' simply by noting 'chairs are something or other that's real and fits our model.' It's true, but it's not the question under dispute. I said physicists disagree about many things; I never said that physicists fail to agree about anything, so changing the topic to the latter risks confusing the issue.
prase:
You are basically saying that physicists disagree about philosophical questions.
Rob Bensinger:
Is the truth of many-worlds theory, or of non-standard models, a purely 'philosophical' matter? If so, then sure. But that's just a matter of how we choose to use the word 'philosophy;' it doesn't change the fact that these are issues physicists, specifically, care and disagree about. To dismiss any foundational issue physicists disagree about as for that very reason 'philosophical' is merely to reaffirm my earlier point. Remember, my point was that we tend to befuddle ourselves by classifying issues as 'philosophical' because they seem intractable and general, then acting surprised when all the topics we've classified in this way are, well, intractable and general. It's fine if you think that humanity should collectively and universally give up on every topic that has ever seemed intractable. But you can make that point much more clearly in those simple words than by bringing in definitions of 'philosophy.'
Desrtopa:
It seems that the matters you're arguing that scientists disagree on are all ones where we cannot, at least by means anyone's come up with yet, discriminate between options by use of empiricism. The questions they disagree on may or may not be "philosophical," depending on how you define your terms, but they're questions that scientists are not currently able to resolve by doing science to them. The observation that scientists disagree on matters that they cannot resolve with science doesn't detract from the argument that the process of science is useful for building consensuses. If anything it supports it, since we can see that scientists do not tend to converge on consensuses on questions they aren't able to address with science.
Rob Bensinger:
Agreed. It's not that scientists universally distrust human rationality, while philosophers universally trust it. Both groups regularly subject their own reasoning faculties to tests and to distrust. (And both also need to rely at least somewhat on human reasoning, since one can only fairly conclude that a kind of reasoning is flawed by reasoning one's way toward that conclusion. Even purely 'empirical' or 'factual' questions require some amount of interpretive work.) The reason philosophers seem to disagree more than scientists is very simple, and it's the same reason physicists trying to expand the Standard Model disagree more than physicists working within the Standard Model: Because there's a lack of intersubjectively accessible data. Without such data for calibration, different theoretical physicists' inferences, intuitions, and pattern-matching faculties in general will get relatively diverse results, even if their methodologies are quite commendable.
0prase11y
I think you are reading too much into my comment. It totally wasn't about what humanity should collectively give up on, or even about what anybody should. And I agree that philosophy is effectively defined as a collection of problems which are not yet understood well enough to be investigated by standard scientific methods.

I was only pointing out (perhaps not very clearly, but I didn't have time for a lengthier comment) that the core of physics is formalisms and modelling and predictions (and perhaps engineering issues, since experimental apparatuses today are often more complex than the phenomena they are used to observe). That is, almost all the knowledge needed to be a physicist is ordinary "non-philosophical" knowledge that everybody agrees upon, and almost all talks at physics conferences are about formalism and observations, while the questions you label "foundational" are given a relatively small amount of attention.

It may seem that asking "what is the true nature of the electron?" is a question of physics, since it is about electrons, but actually most physicists would find the question uninteresting and/or confused, while the same question might sound truly interesting to a philosopher. (And that isn't due to lack of agreement on the correct answer; more likely it's because physicists like more specific / less vague questions than philosophers do.)

One can get a false impression about this because the most famous physicists tend to talk about philosophical questions significantly more than the average physicist does. But if Feynman speaks about the interpretation of quantum mechanics, that's not proof that the interpretation of quantum mechanics is an extremely important question of physics (on the grounds that a Nobel laureate wouldn't otherwise talk about it); it's rather proof that Feynman has really high status and can get away with giving a talk on a less-than-usually rigorous topic (and it is much easier to make an interesting lecture from philosophical stuff than from more technical stuff). Of course, my poin... (read more)
1Rob Bensinger11y
I don't think we disagree all that much; and I meant 'you' to be a hypothetical interlocutor, not prase. All I want to reiterate is that the line between physics and philosophy-of-physics can be quite fuzzy. The 'measurement problem' is perhaps the pre-eminent problem in 'philosophy of physics,' but it's not some neoscholastic mumbo-jumbo of the form "what is the true nature of the electron?". Rather, it's a straightforward physics problem that happens to have turned out to be especially intractable. Specifically, it is the problem that these three propositions form an inconsistent triad, given our Born-probabilistic observations:

* (1) Wave-function descriptions specify all the properties of physical systems.
* (2) The wave function evolves solely in accord with the Schrödinger equation.
* (3) Measurements have definite outcomes.

De-Broglie-style interpretations ('hidden variables') reject (1), von-Neumann-style interpretations ('objective collapse') reject (2), and Everett-style interpretations ('many worlds') reject (3). So far, there doesn't seem to be anything 'unphysical' or 'unphysicsy' about any of these views. What's made them 'philosophical' is simply that the problem is especially difficult, and the prospects for solving it to everyone's satisfaction, by ordinary physicsy methods, seem especially dim. So, if that makes it philosophy, OK. But problems of this sort divide philosophers because they're hard, not because philosophers 'trust their own rationality' more than physicists do.
0prase11y
I find it a bit tricky to formulate problems in propositions like your (1)–(3) and insist that at least one must be rejected because of mutual inconsistency. The problem is that the meaning of the propositions is not precise. What exactly does "all properties of physical systems" denote? Is it "the maximum information about the system that can be obtained in principle" (subproblem: what does "in principle" mean?), or is it "information sufficient to predict all events in which the system is involved, if there is no uncertainty external to the system", or is it something else?

We know that the conditions under which we prepare the system can be summarised in a wave function, and we know how to calculate the frequencies of measurement outcomes, given a specific wave function. We know that knowledge of the wave function doesn't let us predict the measurements with certainty. We even know, due to Bell's inequalities and the experimental results, that if there is some unknown property of the system which determines the measurement outcome prior to the actual measurement, then this property must be non-local. We know that the evolution of systems under observation isn't described by the Schrödinger equation alone. All this is pretty uncontroversial.

Now, the interpretations tend to use different words to describe the same amount of knowledge. Instead of saying that we can get unpredictably different outcomes from a measurement on a system with some given wave function, one may say that the outcome is always the same but our consciousness splits and each part is aligned only with a portion of the outcome, or one may say that the outcome is not "definite" (whatever that means). This verbal play is the unphysicsy thing about the given propositions.
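For concreteness, the Bell-type constraint mentioned above can be written in its standard CHSH form (a textbook statement in the usual notation, with E(a,b) the correlation between measurement settings a and b; none of this notation comes from the thread itself):

\[
\lvert E(a,b) + E(a,b') + E(a',b) - E(a',b') \rvert \le 2
\]

Any local-hidden-variable account must satisfy this bound, while quantum mechanics predicts, and experiment observes, violations up to \(2\sqrt{2}\).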
0Rob Bensinger11y
You seem to be trying to explain something rather clear with something less clear. The sentence in question is simply affirming that the wave function captures everything that is true of the system; thus (if you accept this view) there are no hidden variables determining the seemingly probabilistic outcomes of trying to measure non-observables. There's nothing mysterious about asserting that there's a hidden cause in this case, any more than science in general is going Mystical when it hypothesizes unobserved causes for patterns in our data. To say that the outcome is not "definite" is to say that it is false that a particular measurement outcome (like 'spin up'), and not an alternative outcome (like 'spin down'), obtains. "Definite" sounds vague here because the very idea of "many worlds" is extremely vague and hard to pin down. One way to think of it is that the statistical properties of quantum mechanics are an epiphenomenon of a vastly larger, unobserved reality (the wave function itself) that continues merrily on its way after the observation. Where's the 'verbal play'?
0prase11y
Say there are no hidden variables and the evolution is probabilistic. Does the wave function then capture everything that is true of the system? It seems to me that it doesn't: it is true that the system will be measured spin up in the next measurement, but the wave function is equally compatible with spin down. But you seem to assert that if I don't believe in hidden variables, then the wave function does capture everything that is true of the system. So I don't find it all that clear. Neither does "epiphenomenon of a vastly larger reality" seem clarifying to me, even a little bit.
2Rob Bensinger11y
At a given time, yes. But over time, the way a wave function changes may (a) be determined entirely by the Schrödinger equation, or (b) be determined by a mixture of the Schrödinger equation and intermittent 'collapses.' Given (a), the apparently probabilistic distribution of observations is somehow mistaken, and we get a many-worlds-type interpretation. Given (b), the probabilities are preserved but the universe suddenly operates by two completely different causal orders, and we get an 'objective collapse' interpretation. These are the two options if the wave function captures all the variables.
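To make the two options concrete (in standard textbook notation, which is an editorial gloss rather than the comment's own wording): under (a), the state always evolves unitarily according to the Schrödinger equation,

\[
i\hbar \,\frac{\partial}{\partial t}\,\psi(t) = \hat{H}\,\psi(t),
\]

while under (b), that evolution is interrupted at measurement by a discontinuous collapse step,

\[
\psi \;\to\; \frac{\hat{P}_k\,\psi}{\lVert \hat{P}_k\,\psi \rVert} \quad \text{with probability } \lVert \hat{P}_k\,\psi \rVert^2,
\]

where \(\hat{P}_k\) projects onto the k-th measurement outcome (the Born rule).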
2prase11y
I am now interested in a clarification of "everything that is true of the system". I have an electron whose spin I am going to measure five minutes from now. Does the proposition "the spin will be measured up" belong to "everything that is true about the electron"? Presume that the spin will indeed be measured up (or that I will perceive the world in which it was up, or whatever formulation suits you best). To me it appears to be a true proposition, but there may be philosophical arguments to the contrary (the problem of future contingents comes to mind).
1Rob Bensinger11y
Physics-inclined people tend to be 4-dimensionalists, so I don't think they'll object to describing wave functions in terms that account for them at all times. Even indeterminists (i.e., collapse theorists) can accept that we can talk about what will be true of electrons in the future, though we can't even in principle know some of those facts in advance.

de Broglie sez: "Yes, that belongs to everything that is true (about the electron's wave function). But at least one truth about the electron (its position at any given time) is not accounted for in the wave function. (This explains why the Schrödinger equation, although a complete description of how wave functions change, is not a complete description of how physical systems change.)"

von Neumann sez: "Yes. And the wave function encompasses all these truths. But there is no linear dynamical equation relating all the time-slices of the wave function. There are more free-floating brute facts within wave functions than we might have expected."

Everett sez: "Yes... well, sort of. The formalism for 'the spin will be measured up' is a component of a truth. But it would be more accurate and objective to say something like 'the spin will be measured up and down' (assuming it was in a prior superposition). Thus the wave function encompasses all the truths, and evolves linearly over time in accord with the Schrödinger equation. Win-win!"

Inasmuch as philosophical issues are settled, they stop getting talked about.

Why exactly? I mean, there is no controversy in mathematics about whether 2+2=4, and yet we continue teaching this knowledge in schools. Uncontroversial, yet necessary to be taught, because humans don't get it automatically, and because it is necessary for more complicated calculations.

Why exactly don't philosophers do an equivalent of this? Is it because once a topic has been settled at a philosophical conference, the next generations of humans are automatically born with this knowledge? Or at least the answer is published so widely that it becomes more widely known than the knowledge of 2+2=4? Or what?

Start tabooing the word 'philosophy.' See how it goes.

First approximation: Pretended ability to draw specific conclusions concerning ill-defined but high-status topics. :(

9Rob Bensinger11y
Yes, and we continue teaching modus ponens and proof by reductio in philosophy classrooms. (Not to mention historical facts about philosophy.) Here we're changing the subject from 'do issues keep getting talked about equally after they're settled?' to 'do useful facts get taught in class?' The philosopher certainly has plenty of simple equations to appeal to. But the mathematician also has foundational controversies, both settled and open. So if I pretend to be able to draw specific conclusions about capital in macroeconomics, I'm doing philosophy?
5Pablo11y
Really? Can you name a few philosophical questions whose answers are uncontroversial?

Although I'm a lawyer, I've developed my own pet meta-approach to philosophy. I call it the "Cognitive Biases Plus Semantic Ambiguity" approach (CB+SA). Both prongs (CB and SA) help explain the amazing lack of progress in philosophy.

First, cognitive biases - or (roughly speaking) cognitive illusions - are persistent by nature. It is no coincidence that cognitive illusions (like visual illusions) are persistent and that philosophy problems are persistent, too. Philosophy problems cluster around those that involve cognitive illusions (positive outcome bias, the just-world phenomenon, the Lake Wobegon effect, the fundamental attribution error, etc.). I see this in my favorite topic area (the free will problem), but I believe that it likely applies broadly across philosophy.

Second, semantic ambiguity creates persistent problems if not identified and fixed. The solutions to several of Hilbert's 23 problems are "no answer - the problem statement is not well defined." That approach is unsexy and emotionally dissatisfying (all of this work, yet we get no answer!). Perhaps for that reason, philosophers (but not mathematicians) seem completely incapabl... (read more)

7Peterdjones11y
And they never expend any effort in establishing clear meanings for such terms. Oh wait... they expend far too much effort arguing about definitions... no, too little... no, too much. OK: the problem with philosophers is that they are contradictory.
0khafra11y
If philosophers were strongly biased toward climbing the ladder of abstraction instead of descending it, they could expend a great deal of effort flailing uselessly about definitions.
-1Bruno_Coelho11y
What sort of people do you have in mind? The generalization apparently covers academic philosophers in their current state, but not past figures. Sure, someone without a strong science background will miss the point, focusing on the words. But arguing "by definitions" is not something done exclusively by philosophers.
0BerryPick611y
At least when it comes to the concepts "Good," "Morality" and "Free Will," I'm familiar with some fairly prominent suggestions that they are in dire need of redefinition and other attempts to narrow or eliminate discussions about such loose ideas altogether.
[-][anonymous]11y130

We might also ask: How well do philosophers perform on standard tests of rationality, for example Frederick (2005)'s CRT?...

Your presentation here seems misleading to me. You imply that philosophers are merely average scorers on the CRT relative to the rest of the (similarly educated) population.

This claim is misleading for several reasons: 1) The study from which you get the philosophers' score reports a mean score for people who have had some graduate-level philosophical training. This is a set that will overlap with many of the other groups you mention. While it will include all professional philosophers, I don't think a majority of the set will be professional philosophers. Graduate-level courses in logic, political philosophy, etc., are pretty standard in graduate educations across the board.

2) Frederick takes scores from a variety of different schools, trying to capture people, evidently, who are undergraduates, graduate students, or faculty. Frederick comes up with a mean score of 1.24 for respondents who are members of a university. In contrast, Livengood (from which you get the philosophers' mean score) gets mean scores of 0.65 and 0.82 for people with undergraduate or gradu... (read more)

I'm not sure that more rationality in philosophy would help enough as far as FAI is concerned. I expect that if philosophers became more rational, they would mainly just become more uncertain about various philosophical positions, rather than reach many useful (for building FAI) consensuses.

If you look at the most interesting recent advances in philosophy, it seems that most of them were made by non-philosophers. For example: Turing, Church, and others' work on understanding the nature of computation; von Neumann and Morgenstern's decision theory; Tegmark's Ultimate Ensemble; and algorithmic information theory / Solomonoff Induction. (Can anyone think of a similarly impressive advance made by professional philosophers, in this same time frame?) Based on this, I think appropriate background knowledge and raw intellectual firepower (most of the smartest humans probably go into math/science instead of philosophy) are perhaps more important than rationality for making philosophical progress.

(Can anyone think of a similarly impressive advance made by professional philosophers, in this same time frame?)

  • Quine's attack on apriority and analyticity.
  • Kuhn's and Popper's philosophy of science.
  • Rawls' and Nozick's political philosophy.
  • Kripke's new metaphysical necessity.

ETA:

  • Austin's speech act theory
  • Ryle's critique of Cartesianism
  • HOT theory (various)
  • Tarski's convention T
  • Gettier's counterexamples
  • Parfit on personal identity
  • Parfit on ethics
  • Wittgenstein's PLA

I'm only familiar with about a third of these (not counting Tarski, who I agreed with JoshuaZ is more of a mathematician than a philosopher), but the ones that I am familiar with do not seem as interesting/impressive/fruitful/useful as the advances I mentioned in the grandparent comment. If you could pick one or two on your list for me to study in more detail, which would you suggest?

2BerryPick611y
I know you aren't asking me, but my choices to answer this question would be Popper's philosophy of science; Rawls' and Nozick's political philosophy; and Quine.
1Peterdjones11y
Interesting to whom? Fruitful for what?
9Wei Dai11y
According to my own philosophical interests, which as it turned out (i.e., apparently by coincidence) also seems well aligned with what's useful for building FAI. I guess one thing that might be causing us to talk a bit past each other is that I read the opening post as talking about philosophy in the context of building FAI (since I know that's what the author is really interested in), but you may be seeing it as talking about philosophy in general (and looking at the post again I notice that it doesn't actually mention Friendly AI at all except by linking to a post about it). Anyway, if you think any of the examples you gave might be especially interesting to someone like me, please let me know. Or, if you want, tell me which is most interesting to you and why.
5TimS11y
Made me laugh for a second seeing those two on the same line, because Popper (falsifiability) and Kuhn (The Structure of Scientific Revolutions) are not particularly related.
2Peterdjones11y
Not at all. I should probably have put them on separate lines.
4JoshuaZ11y
Most of your examples seem valid, but this one (Tarski's convention T) is strongly questionable: the example doesn't work. Tarski was a professional mathematician. There was a lot of interplay at the time between math and philosophy, but it seems he was closer to the math end of things. He did at times apply for philosophy positions, but for the vast majority of his life he was doing work as a mathematician. He was a mathematician/logician when he was at the Institute for Advanced Study, and he spent most of his professional career as a professor at Berkeley in the math department.

Moreover, while he did publish some papers in philosophy proper, he was in general a very prolific writer, and the majority of his work (like his work with quantifier elimination in the real numbers, or the Banach-Tarski paradox) is unambiguously mathematical. Similarly, the people who studied under him are all thought of as mathematicians (like Julia Robinson) or mathematician-philosophers (Feferman), with most in the first category. Overall, Tarski was much closer to being a professional mathematician whose work sometimes touched on philosophy than a professional philosopher who sometimes did math.
4BerryPick611y
* Mackie's Argument from Queerness
* Hare's and Ayer's work on Expressivism
* Goodman's New Riddle of Induction
* Wittgenstein
* Frankfurt on Free Will
* The Quine-Putnam indispensability thesis
* The Causal Theory of Reference
4[anonymous]11y
I think the canonical example would be Thomas Metzinger's model of the first-person perspective.
2Wei Dai11y
Wouldn't there be at least one reference to his book in the SEP if that was true?
3gwern11y
http://plato.stanford.edu/search/searcher.py?query=metzinger ?
0Wei Dai11y
Yeah, I did the same search, but none of those results reference his main work, the book that paper-machine cited (or any other papers/books that, judging from the titles, are about his main ideas).
1gwern11y
They're still citations to his body of work, which is all on pretty much the same topic. SEP is good, but it is just an encyclopedia, after all, and Being No One is a very challenging book (I still haven't read it because it's too hard for me). A general citation search would be more useful; I see 647 citations to it in Google Scholar. (I don't know of a citation engine specializing in philosophy - Philpapers shows a fair bit of activity related to Metzinger but doesn't give me how many philosophy papers cite it, much less philosophy of mind.)
4Kawoomba11y
This lecture he gives about the very same topic is much more accessible.
3fubarobfusco11y
Thank you for posting this.
0NancyLebovitz11y
He suggests that the reason we don't have awareness that our sensory experiences are created by a detailed internal process is that it wasn't evolutionarily worthwhile. However, we're currently in an environment where at least our emotional experiences are more and more likely to be hacked by other people who aren't necessarily on our side, which means that self-awareness is becoming more valuable. At this point, the evolution is more likely to be memetic (parents teaching their children to notice what's going on in advertisements) than physiological, though it's also plausible that some people find it innately easier to track what is going on with their emotions than others. Has anyone read The Book of Not Knowing by Peter Ralston? I've only read about half of it, but it looks like it's heading into the same territory.
1Wei Dai11y
I didn't even try to read the book, but went through a bunch of review papers (which of course all try to summarize the main ideas of the book) and feel like I got a general understanding that way. I wanted to see how his ideas compare to his peers (so as to judge how much of an advance they are upon the state of the art), and that's when I found the SEP lacking any discussion of them (which still seems fairly damning to me).
0BerryPick611y
Apparently, his follow-up book "The Ego Tunnel" deals with mostly the same stuff and is not as impenetrable. Have you read it? I'd be interested in hearing your thoughts on it.
-1gwern11y
Ironically, my problem with that book was that it was too easy and simple.
2[anonymous]11y
No idea why this would be true. (For example, although S. S. Abhyankar is a reasonably well-known mathematician, there is only one reference to him in the MacTutor history of mathematicians.)
1BerryPick611y
Nick Bostrom?

I think Nick is actually an example of how rationality isn't that useful for making philosophical progress. I'm a bit reluctant to say this (for obvious social reasons, which I'm judging to be outweighed by the strategic importance of this issue) but his work (PhD thesis) on anthropic reasoning wasn't actually very good. I know that at least one SI Research Associate agrees with my assessment.

ETA: I should qualify this by saying that while his proposed solution wasn't very good (which you can also infer from the fact that nobody ever talks about or builds upon it around here, despite strong interest in the topic), he did come up with arguments/considerations/thought experiments, such as the Presumptuous Philosopher, that we still discuss.

5BerryPick611y
I'll freely admit that I haven't actually read any of his work, and I was mainly making the comment due to the generally fanboyish response he gets 'round these parts. I found your comment very interesting, and may investigate further.
1cousin_it11y
Just in case this refers to me: I agree with your assessment of Bostrom's thesis, but I'm no longer an SI research associate :-)
0Peterdjones11y
As an example of what?
3BerryPick611y
A straight-up philosopher who is useful to FAI (more X-risk, but it's probably still applicable). Obviously, your examples are the ones that immediately occurred to me, so I didn't want to repeat them.
0Peterdjones11y
Why does that count as phil? Or that? Or that? OK: that one resembles modal realism, which is definitely philosophy, although it is routinely condemned here as bad philosophy.
7IlyaShpitser11y
Look, everything counts as phil: (http://en.wikipedia.org/wiki/Natural_philosophy). Philosophy gets credit for launching science in the 19th century. Philosophers were the first to invent the AI effect, apparently (http://en.wikipedia.org/wiki/AI_effect). If you want to look at interesting advances in philosophy, read the stuff by the CMU causality gang (Spirtes/Scheines/Glymour, philosophy department, also Kelly). Of course you will probably say that is not really philosophy but theoretical statistics or something. Pearl's stuff can be considered philosophy too (certainly his stuff on actual cause is cited a lot in phil papers).
0Peterdjones11y
Science in general is quoted quite a lot. But there is a difference between phils. discussing phil. and phils. discussing non-phil as something that can be philosophised about, if only in tone and presentation.
2IlyaShpitser11y
Your quoting is confusing.
4Wei Dai11y
Perhaps a more relevant question, in the context of the OP, is whether those problems are representative of the types of foundational (as opposed to engineering, logistical, strategic, etc.) problems that need to be solved in order to build an FAI. But we could talk about "philosophy" as well, since, to be honest, I'm not sure why some topics count as "philosophy" and others don't. It seems to me that my list of advances does fall under Wikipedia's description of philosophy as "the study of general and fundamental problems, such as those connected with reality, existence, knowledge, values, reason, mind, and language." Do you disagree, or have an alternative definition?
7Richard_Kennaway11y
I agree. But there are also some systematic differences between what the people you cited did and what (other) philosophers do.

* The former didn't merely study fundamental problems, they solved them.
* They did stuff that now exists and can be studied independently of the original works. You don't have to read a single word of Turing to understand Turing machines and their importance. You need not study Solomonoff to understand Solomonoff induction.
* Their works are generally not shelved with philosophy in libraries. Are they studied in undergraduate courses on philosophy?
4novalis11y
Turing's work on AI (and Searle's response) was discussed in my undergrad intro phil course. But that is not quite the same thing.
1BerryPick611y
Not in my undergraduate program, at least.
5DaFranker11y
I think the criticism is indeed pointed towards the scientific "field" of Philosophy, AKA people working in Philosophy Departments or similar. I doubt many here are targeting the activity of philosophy, or the people who would identify as "philosophers"; the target is rather specifically Philosophy academics with a specialization in Philosophy, who work in a Philosophy Department and produce Philosophy papers to be published in a Journal of Philosophical Writings (and possibly give the occasional Philosophy class or seminar, depending on the local supply of TAs). IME, a large fraction of real, practicing philosophers are actively publishing papers on arXiv or equivalent.
3Peterdjones11y
Did you mean academic field? You mean professional phil bad, amateur phil good. Or not so much amateur phil as the sort of sciencey-philly cross-disciplinary stuff that EY and Robin and Bostrom and Tegmark do. Maybe. But actually some of it is quite bad, for reasons which are evident if you know phil.
2DaFranker11y
Yes, my bad. A good professional study of philosophy itself is to me indistinguishable from someone doing metaresearch, i.e. figuring out how to make the standards of the scientific method even better and the techniques of all scientists more efficient. IME, this is not what the majority of academics working in Philosophy Departments are doing.

OTOH, good applied philosophy, i.e. the sort of stuff you do once you've studied the results of the above metaresearch, is basically just doing science. In other words, doing research in any field that is not about how to do research.

So yes, in a sense, most academics categorized as "professional phil" are less good than most academics categorized as "amateur phil" who mainly work in other disciplines. The latter are also almost exclusively "sciencey-philly cross-disciplinary". I'm guessing we both agree that non-academic-nor-scientist amateur philosophers are less likely to produce meaningful research than any of the above, and yet that is pretty much the stereotype that most people (in the general North American population) assign to "philosophers". Then again, the exclusion of "scientists" from that category feels like begging the question.
0Peterdjones11y
Is the "so" meant to imply that that follows from the forefgoing? I don't see how it does.
0Peterdjones11y
I was responding to the sentence: "If you look at the most interesting recent advances in philosophy, it seems that most of them were made by non-philosophers." ...which does not mention "advances in philosophy useful to FAI". None of them have been much discussed by phils. (except possibly Bostrom, the Diana Hsieh of LessWrongism).
3Wei Dai11y
Theory of computation is obviously used by the computational theory of mind, as well as by philosophy of language and of mathematics and logic. Decision theorists are commonly employed by philosophy departments, and all current decision theories descend from vNM's. AIT actually doesn't seem to be much discussed by philosophers (a search found only a couple of references in the SEP, and even the entry on "simplicity" gives only a brief mention of it), which is a bit surprising. (Oh, there's a more substantial discussion in the entry for "information".)
0Peterdjones11y
Surely that is the other way round. Early computer theorists just wanted to solve mathematical problems mechanically. What is your point? His day job was physics.

Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex...

Huh?

Examples like that are the bread and butter of discussions about motivational internalism: prec... (read more)

Are some philosophical questions questions about reality? If so, what does it take for a question about reality to count as "philosophical" as opposed to "scientific"? Are these just empirical clusters?

And if it's not a fact about reality, what does it mean to get it right?

0ygert11y
I think the point is not to think of questions as philosophical or not, but rather to look at the people trying to solve these questions. This post is talking about how the people called "philosophers" are not effective at solving these problems, and how they should therefore change their approach. In fact, a large part of the Sequences is an attempt to solve questions which you might think of as "philosophical" and which have in the past been worked on by philosophers. But what this post says is that the correct way to approach these (or any other) problems is to look at them in a rational way (like EY did in writing the Sequences), and not in the way most people (specifically the class of people known as "philosophers") have tried to solve them in the past.

Luke quoted:

Science is built around the assumption that you're too stupid and self-deceiving to just use [probability theory]. After all, if it was that simple, we wouldn't need a social process of science... [Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

That's a pretty irritatingly-wrong quote. Of course the scientific method is social for reasons other than the stupidity and self-deceiving nature of sc... (read more)

A score of 1.32 isn't radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton Undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is also noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees to a LessWrong.com meetup group (2.69).

I found this by far the most interesting part of this (very good) post. I am surprised I had to learn it hidden inside a mostly unrelated essay. I would certainly like to hear more about this test.
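For context on what the test involves: the CRT (Frederick 2005) consists of three questions, each designed so that an intuitive-but-wrong answer springs to mind first, and the scores quoted above are counts of correct answers out of three. The best-known item is the bat-and-ball problem: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball; how much does the ball cost? The intuitive answer is 10 cents, but letting b be the ball's price,

\[
b + (b + 1.00) = 1.10 \quad\Rightarrow\quad 2b = 0.10 \quad\Rightarrow\quad b = 0.05,
\]

so the correct answer is 5 cents.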

What would evidence of deontology / consequentialism / virtue ethics, empiricism vs. rationalism, or physicalism vs. non-physicalism look like?

[But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.

Philosophy hasn't been very successful at finding the truth about the kinds of questions philosophy typically considers. What's better... at answering those kinds of questions? You can only condemn philosophy for having worse methods than science, based on results, if they are both applied to the same problems.

[-][anonymous]11y10

Sometimes, they are even divided on psychological questions that psychologists have already answered...

I think you've misunderstood the debate: philosophers are arguing in this case over whether or not moral judgements are intrinsically motivating. If they are, then the brain-damaged people you make reference to are (according to moral judgement internalists) not really making moral judgements. They're just mouthing the words.

This is just to say that psychology has answered a certain question, but not the question that philosophers debating this point are concerned about.

3Manfred11y
This pattern-matches an awful lot to "if a tree falls in a forest..."
0[anonymous]11y
Yeah, but at a sufficiently low resolution (such as my description), lots of stuff pattern-matches, so: http://plato.stanford.edu/entries/moral-motivation/#MorJudMot I'm not saying the philosophical debate is interesting or important (or that it's not), but the claim that psychologists have settled the question relies on an equivocation on 'moral judgement': in the psychological study, giving an answer to a moral question which comports with answers given by healthy people is a sufficient condition for moral judgement. For philosophers, it is neither necessary nor sufficient. Clearly, they are not talking about the same thing.
3Qiaochu_Yuan11y
How do I know whether anyone is making moral judgments as opposed to mouthing the words?
0[anonymous]11y
That sounds like an interesting question! If you'll forgive me answering your question with another, do you think that this is the kind of question psychology can answer, and if so, what kind of evidential result would help answer it?
4Qiaochu_Yuan11y
Well, I was hoping you would answer with at least a definition of what constitutes a moral judgment. A tentative definition might come from the following procedure: ask a wide selection of people to make what would colloquially be referred to as moral judgments and see what parts of their brains light up. If there's a common light-up pattern to basic moral judgments about things like murder, then we might call that neurological event a moral judgment. Part of this light-up pattern might be missing in the brain-damaged people.
2[anonymous]11y
But that's the philosophical debate! As to your definition, notice the following problem: suppose you get a healthy person answering a moral question. Region A and B of their brain lights up. Now you go to the brain damaged person, and in response to the same moral question only region A lights up. You also notice that the healthy person is motivated to act on the moral judgement, while the brain damaged person is not. So you conclude that B has something to do with motivation. So do you define a moral judgement as 'the lighting up of A and B' or just 'the lighting up of A'? Notice that nothing about the result you've observed seems to answer or even address that question. You can presuppose that it's A, or both A and B, but then you've assumed an answer to the philosophical debate. There's a big difference between assuming an answer, and answering.
5Qiaochu_Yuan11y
Neither. You taboo "moral judgment." From there, as far as I can tell, the question is dissolved.
2[anonymous]11y
Okay, good idea, let's taboo moral judgement. So your definition from the great-grandparent was (I'm paraphrasing) "the activity of the brain in response to what are colloquially referred to as moral judgements." What should we replace 'moral judgement' with in this definition? I assume it's clear that we can't replace it with 'the activity of the brain...' (ETA: For the record, if tabooing in this way is your strategy, I think you're with me in rejecting Luke's claim that psychology has settled the externalism vs. internalism question. At the very best, psychology has rejected the question, not solved it. But much more likely, since philosophers probably won't taboo 'moral judgement' the way you have (i.e. in terms of brain states), psychology is simply discussing a different topic.)
0Qiaochu_Yuan11y
"...in response to questions about whether it is right to kill people in various situations, or take things from people in various situations, or more generally to impose one's will on another person in a way that would have had significance in the ancestral environment." (This is based on my own intuition that people process judgments about ancestral-environment-type things like murder differently from the way people process judgments about non-ancestral-environment-type things like copyright law. I could be wrong about this.) How would a philosopher taboo "moral judgment"?
0[anonymous]11y
That's fine, but it doesn't address the problem I described in the great great grandparent of this reply. Either you mean the brain activity of a healthy person, or the brain activity common to healthy and brain-damaged people. Even if philosophers intend to be discussing brain processes (which, in almost every case, they do not) then you've assumed an answer, not given one. But in any case, this way of tabooing 'moral judgement' makes it very clear that the question the psychologist is discussing is not the question the philosopher is discussing.
3Qiaochu_Yuan11y
In that case I don't understand the question the philosopher is discussing. Can you explain it to me without using the phrase "moral judgment"?
1[anonymous]11y
Well, this isn't something I'm an expert in. Most of my knowledge of the topic comes from this SEP article, which I would in any case just be summarizing if I tried to explain the debate. The article is much clearer than I'm likely to be. So you're probably just better off reading that, especially the intro and section 3: http://plato.stanford.edu/entries/moral-motivation/ That article uses the phrase 'moral judgement' of course, but anyway I think tabooing the term (rather than explaining and then using it) is probably counterproductive. I'd of course be happy to discuss the article.

According to the largest-ever survey of philosophers, they're split 25-24-18 on deontology / consequentialism / virtue ethics,

???

I am confused. I lean towards value ethics, and I can certainly see the appeal of consequentialism; but as I understand it, deontology is simply "follow the rules", right?

I fail to see the appeal of that as a basis for ethics. (As a basis for avoiding confrontation, yes, but not as a basis for deciding what is right or wrong). It doesn't seem to stand up well on inspection (who makes the rules? Surely they can't be decided deontologically?)

So... what am I missing? Why is deontology more favoured than either of the other two options?

6Peterdjones11y
Deontology doesn't mean "follow any rules" or "follow given rules" or "be law-abiding". A deontologist can reject purported moral rules, just as a virtue theorist does not have to accept that copulating with as many women as possible is "manly virtue", and just as a value theorist does not have to value blind patriotism. Etc. ETA: Meta-ethical systems usually don't supply their own methodology. Deontologists usually work out rules based on some specific deontological meta-rule or "maxim", such as "act only on that rule which one would wish to be universal law". Deontologies may vary according to the selection of maxim.
4BerryPick611y
Further, many philosophers think that Meta-Ethics and Normative Ethics can have sort of a "hard barrier" between them, so that one's meta-ethical view may have no impact at all upon one's acceptance of Deontology or Deontological systems. EDIT: For the record, I think this is pretty ridiculous, but it's worth noting that people believe it.
0CCC11y
Ah, thank you. This was the point that I was missing; that the choice of maxim to follow may be via some non-deontological method. Now it makes sense. Many thanks.

(As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)

It seems to me that rationality is more about updating the correct amount, which is primarily calculating the likelihood ratio correctly. Most of the examples of philosophical errors you've discussed come from not calculating that ratio correctly, not from starting out with a bizarre prior.
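In odds form (a standard restatement of Bayes' theorem, not the commenter's own notation), a single update multiplies the prior odds by the likelihood ratio:

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)},
\]

so when the likelihood ratio is close to 1 (evidence that doesn't shout), the posterior is dominated by the prior, and a small error in estimating the ratio can swamp whatever the evidence actually contributes.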

For example, consider Yvain and the Case of the Visual Imagination:

Upon hearing this, my response was "How the stars was this actually a real debate? Of course we h

... (read more)

Just to point out: the links in your 3rd footnote all go to the same page. Enjoyed the post. Perhaps a case study of a big philosophy problem fully dissolved here?

0lukeprog11y
Fixed, thanks.

Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1

This isn't an area about which I know very much, but my understanding i... (read more)

Typo: But or many philosophical problems

they're split 25-24-18 on deontology / consequentialism / virtue ethics,

Does that mean they're all moral realists? Otherwise it's like being split on the "true" human skin color.

3BerryPick611y
There's a separate question for Moral Realism vs. Moral Anti-Realism. It's an often accepted position among philosophers that one can hold Normative Ethical positions totally removed from their Meta-Ethics, which may account for some of the confusion.

So, your account basically implies that philosophy is less reliable than astrology, but is not as useful? Then why even bother talking to the philosophical types, to begin with?

-2Peterdjones11y
Because no one has better approaches to those questions.

sophistication effect

The name of this bias is Bias blind spot.

4lukeprog11y
That's part of it. The sophistication effect specifically calls out the fact that due to the bias blind spot, sophisticated arguers have more ammunition with which to avoid noticing their own biases, and to see biases in others.