All of RichardChappell's Comments + Replies

It's not unusual to count "thwarted aims" as a positive bad of death (as I've argued for myself in my paper Value Receptacles), which at least counts against replacing people with only slightly happier people (though still leaves open that it may be worthwhile to replace people with much happier people, if the extra happiness is sufficient to outweigh the harm of the first person's thwarted ends).

In the case you describe, the "HSC content" is just that Jesus is magic. So there's no argument being offered at all. Now, if they offer an actual argument, from some other p to the conclusion that Jesus is magic, then we can assess this argument like any other. How the arguer came to believe the original premise p is not particularly relevant. What you call the "defeater critique", I call the genetic fallacy.

It's true that an interlocutor is never going to be particularly moved by an argument that starts from premises he doesn't ac... (read more)

Thinking of things in terms of informal fallacies like the genetic fallacy throws away information. From a Bayesian viewpoint, the source of one's belief is relevant to its likelihood of being true. (Edit 9/2/13: A good example of this is here.)

Right; I mostly complain about arguments made solely from intuited contents when the claims are given with far more confidence than can be justified by the demonstrated reliability of human intuitions in that domain.
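The Bayesian point about belief sources can be made concrete with a toy calculation. This is only an illustrative sketch: the reliability numbers are made up for the example, not drawn from any study of intuitions.

```python
# Toy Bayes' rule calculation: how much a source's report should move us
# depends on how reliable the source is. Numbers are purely illustrative.

def posterior(prior, p_report_if_true, p_report_if_false):
    """P(claim is true | source reported the claim), via Bayes' rule."""
    numerator = p_report_if_true * prior
    denominator = numerator + p_report_if_false * (1 - prior)
    return numerator / denominator

prior = 0.5  # start indifferent about the claim

# A highly reliable source (say, a checked proof) mostly reports truths:
reliable = posterior(prior, p_report_if_true=0.95, p_report_if_false=0.05)

# An unreliable source (say, an unvalidated intuition) barely discriminates
# between true and false claims:
unreliable = posterior(prior, p_report_if_true=0.55, p_report_if_false=0.45)

print(round(reliable, 2))    # 0.95 -- strong evidence
print(round(unreliable, 2))  # 0.55 -- barely moves the prior
```

On these made-up numbers, "I intuited p" leaves p at roughly a coin flip, which is the sense in which the source of a belief, and not just its content, matters to how confident we should be.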

Yes, that's the idea. I mean, (2) is plausibly true if the "because" is meant in a purely causal, rather than rationalizing, sense. But we don't take the fact that we stand in a certain psychological relation to this content (i.e., intuiting it) to play any essential justifying role.

Thanks for following up on this issue! I'm looking forward to hearing the rest of your thoughts.

In that case, I struggle to see why the "defeater critique" wouldn't seriously undermine practice (1) in most cases. Philosophers can't simply assume intuited contents p and then move from p to q. We want to know how likely p is to be true, and if our primary reason for thinking p is true is some unreliable cognitive algorithm (rather than, say, hard scientific data or a mathematical proof), then we are left without much reason to be confident that p is true. Suppose a theist says he knows by Holy Spirit Communication (HSC) that Jesus is magic. An atheist replies, "HSC is not a reliable method. See all this experimental data on people making judgments based on the deliverances of (what they claim is) HSC." The theist then says, "No, I'm not arguing from the HSC mental state to the conclusion that Jesus is magic. I'm arguing from the HSC contents (that is, from proposition p) to the conclusion that Jesus is magic." The atheist would be unimpressed, and correctly so.

I'm not sure what you have in mind here. We need to distinguish (i) the referent of a concept from (ii) its reference-fixing "sense" or functional role. The way I understood your view, the reference-fixing story for moral terms involves our (idealized) desires. But the referent is "rigid" in the sense that it's picking out the content of our desires: the thing that actually fills the functional role, rather than the role-property itself.

Since our desires typically aren't themselves about our desires, it will turn out, on this stor... (read more)

This all does sound good to me; but, is there a way to say the above while tabooing "reference" and avoiding talk of things "referring" to other things? Reference isn't ontologically basic, so what does it reduce to?

Basically, the main part that would worry me is a phrase like, "there's a story to be told about how our moral concepts came to pick out these particular worldly properties" which sounds on its face like, "There's a story to be told about how successorship came to pick out the natural numbers" whereas wh... (read more)

Correct. Eliezer has misunderstood rigid designation here.

0 · Eliezer Yudkowsky · 10y
So does that mean this: your real claim here, independent of any points about language use? If so, I think I would just straightforwardly modify my paragraph above to say that my statements are not trying to talk about language use or human brains / desires, albeit that desire is both an optimization target of, and a quotation of, morality.

Jonathan Ichikawa, 'Who Needs Intuitions'

Elizabeth Harman, 'Is it Reasonable to "Rely on Intuitions" in Ethics?'

Timothy Williamson, 'Evidence in Philosophy', ch. 7 of The Philosophy of Philosophy.

Thanks. I'm going to be extremely busy for the next few weeks but I will make sure to get back to you on this (and reply to your comment, so you get a notification) at a later time.

The debate over intuitions is one of the hottest in philosophy today

But it -- at least the "debate over intuitions" that I'm most familiar with -- isn't about whether intuitions are reliable, but rather over whether the critics have accurately specified the role they play in traditional philosophical methodology. That is, the standard response to experimentalist critics (at least, in my corner of philosophy) is not to argue that intuitions are "reliable evidence", but rather to deny that we are using them as evidence at all. On thi... (read more)

A quick review (for the benefit of others): Bugmaster asked: "Is there really any kind of a serious debate in modern philosophy circles regarding whether or not our personal intuitions can be generally trusted?" I replied: "Yes... the debate over intuitions is one of the hottest in philosophy today." But Richard is right to say that most of the philosophical debate about intuitions "isn't about whether intuitions are reliable, but rather over whether the critics have accurately specified the role they play in traditional philosophical methodology." So, I apologize for my sloppy wording.

Now, a few words on intuitionist methodology. When I read the defenders of intuitionist methodology, I'm reminded of something John Doris said in my interview with him (slight paraphrase for clarity and succinctness; see the exact quote at the bottom of the transcript): When experimentalists pointed out that our brains don't store concepts as necessary and sufficient conditions, many philosophers rushed to say that philosophers had never been assuming this in the first place. But clearly, many philosophers were making such false assumptions about how concepts worked, since the "classical" view of concepts — concepts as mental representations captured by necessary and sufficient conditions — held sway for quite some time, even after Wittgenstein (1953). (For a review, see Murphy 2004.) Or, given that experimentalists have raised worries about using intuitions as e
Can you cite a specific paper or book chapter that makes the kind of argument you're suggesting here?

And this responds to what I said... how?

I can build an agent that tracks how many sheep are in the pasture using an internal mental bucket, and keeps looking for sheep until they're all returned. From an outside standpoint, this agent's mental bucket is meaningful because there's a causal process that correlates it to the sheep, and this correlation is made use of to steer the world into futures where all sheep are retrieved. And then the mysterious sensation of about-ness is just what it feels like from the inside to be that agent, with a side order of explicitly modeling both yourself and th... (read more)
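The sheep-counting agent described above can be sketched in a few lines. This is a minimal illustration, with made-up names: the counter's state is "about" the sheep only because a causal process keeps it correlated with them, and the agent acts on that correlation.

```python
# Minimal sketch of the pebble-bucket agent: an internal counter kept
# causally correlated with the sheep, and used to steer behaviour.

class Shepherd:
    def __init__(self):
        self.bucket = 0  # pebbles = sheep currently out in the pasture

    def sheep_leaves(self):
        self.bucket += 1  # toss in a pebble as each sheep passes out

    def sheep_returns(self):
        self.bucket -= 1  # remove a pebble as each sheep comes back

    def keep_searching(self):
        # Steer the world: keep looking until the bucket says all are home.
        return self.bucket > 0

s = Shepherd()
for _ in range(3):
    s.sheep_leaves()
s.sheep_returns()
print(s.keep_searching())  # True: two pebbles remain, so keep searching
```

Nothing about the pebbles is intrinsically "sheep-ish"; the meaningfulness of `bucket` consists entirely in the causal correlation plus the use the agent makes of it.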

It's a nice parable and all, but it doesn't seem particularly responsive to my concerns. I agree that we can use any old external items as tokens to model other things, and that there doesn't have to be anything "special" about the items we make use of in this way, except that we intend to so use them. Such "derivative intentionality" is not particularly difficult to explain (nor is the weak form of "natural intentionality" in which smoke "means" fire, tree rings "signify" age, etc.). The big question is... (read more)

6 · Eliezer Yudkowsky · 11y
“I toss in a pebble whenever a sheep passes,” I point out. “When a sheep passes, you toss in a pebble?” Mark says. “What does that have to do with anything?” “It’s an interaction between the sheep and the pebbles,” I reply. “No, it’s an interaction between the pebbles and you,” Mark says. “The magic doesn’t come from the sheep, it comes from you. Mere sheep are obviously nonmagical. The magic has to come from somewhere, on the way to the bucket.” I point at a wooden mechanism perched on the gate. “Do you see that flap of cloth hanging down from that wooden contraption? We’re still fiddling with that – it doesn’t work reliably – but when sheep pass through, they disturb the cloth. When the cloth moves aside, a pebble drops out of a reservoir and falls into the bucket. That way, Autrey and I won’t have to toss in the pebbles ourselves.” Mark furrows his brow. “I don’t quite follow you… is the cloth magical?” I shrug. “I ordered it online from a company called Natural Selections. The fabric is called Sensory Modality.” I pause, seeing the incredulous expressions of Mark and Autrey. “I admit the names are a bit New Agey. The point is that a passing sheep triggers a chain of cause and effect that ends with a pebble in the bucket."

This is somewhat absurd

More than that, it's obviously incoherent. I assume your point is that the same should be said of zombies? Probably reaching diminishing returns in this discussion, so I'll just note that the general consensus of the experts in conceptual analysis (namely, philosophers) disagrees with you here. Even those who want to deny that zombies are metaphysically possible generally concede that the concept is logically coherent.

On reflection, I think that's right. I'm capable of imagining incoherent things. I guess I'm somewhat skeptical that anyone can be an expert in which non-existent things are more or less possible. How could you tell if someone was ever correct -- let alone an expert? Wouldn't there be a relentless treadmill of acceptance of increasingly absurd claims, because nobody wants to admit that their powers of conception are weak and they can't imagine something?

Well, you could talk about how she is covered with soft fur, but it's possible to imagine something fuzzy and not covered with fur, or something covered with fur but not fuzzy. Because it's possible to imagine these things, clearly fuzziness must be non-physical.

Erm, this is just poor reasoning. The conclusion that follows from your premises is that the properties of fuzziness and being-covered-in-fur are distinct, but that doesn't yet make fuzziness non-physical, since there are obviously other physical properties besides being-covered-in-fur that it ... (read more)

No, I'm saying that you could hold all of the physical facts fixed and my cat might still not be fuzzy. This is somewhat absurd, but I have a tremendously good imagination; if I can imagine zombies, I can imagine fuzz-zombies.

I'm not sure I follow you. Why would you need to analyse "thinking" in order to "get a start on building AI"? Presumably it's enough to systematize the various computational algorithms that lead to the behavioural/functional outputs associated with intelligent thought. Whether it's really thought, or mere computation, that occurs inside the black box is presumably not any concern of computer scientists!

Because thought is essential to intelligence. Why would you need to analyse intelligence to get a start on building artificial intelligence? Because you would have no idea what you were trying to do if you didn't. I fail to see how that is not just a long-winded way of saying "analysing thought".

I couldn't help one who lacked the concept. But assuming that you possess the concept, and just need some help in situating it in relation to your other concepts, perhaps the following might help...

Our thoughts (and, derivatively, our assertions) have subject-matters. They are about things. We might make claims about these things, e.g. claiming that certain properties go together (or not). When I write, "Grass is green", I mean that grass is green. I conjure in my mind's eye a mental image of blades of grass, and their colour, in the image, ... (read more)

5 · Eliezer Yudkowsky · 11y
(From "The Simple Truth", a parable about using pebbles in a bucket to keep count of the sheep in a pasture.) “My pebbles represent the sheep!” Autrey says triumphantly. “Your pebbles don’t have the representativeness property, so they won’t work. They are empty of meaning. Just look at them. There’s no aura of semantic content; they are merely pebbles. You need a bucket with special causal powers.” “Ah!” Mark says. “Special causal powers, instead of magic.” “Exactly,” says Autrey. “I’m not superstitious. Postulating magic, in this day and age, would be unacceptable to the international shepherding community. We have found that postulating magic simply doesn’t work as an explanation for shepherding phenomena. So when I see something I don’t understand, and I want to explain it using a model with no internal detail that makes no predictions even in retrospect, I postulate special causal powers. If that doesn’t work, I’ll move on to calling it an emergent phenomenon.” “What kind of special powers does the bucket have?” asks Mark. “Hm,” says Autrey. “Maybe this bucket is imbued with an about-ness relation to the pastures. That would explain why it worked – when the bucket is empty, it means the pastures are empty.” “Where did you find this bucket?” says Mark. “And how did you realize it had an about-ness relation to the pastures?” “It’s an ordinary bucket,” I say. “I used to climb trees with it… I don’t think this question needs to be difficult.” “I’m talking to Autrey,” says Mark. “You have to bind the bucket to the pastures, and the pebbles to the sheep, using a magical ritual – pardon me, an emergent process with special causal powers – that my master discovered,” Autrey explains. Autrey then attempts to describe the ritual, with Mark nodding along in sage comprehension. “And this ritual,” says Mark, “it binds the pebbles to the sheep by the magical laws of Sympathy and Contagion, like a voodoo doll.” Autrey winces and looks around. “Please! Don’t call i

You can probably give a functionalist analysis of computation. I doubt we can reductively analyse "thinking" (at least if you taboo away all related mentalistic terms), so this strikes me as a bedrock case (again, like "qualia") where tabooing away the term (and its cognates) simply leaves you unable to talk about the phenomenon in question.

If we can't even get a start on that, how did we get a start on building AI?
It sounds like "thinking" and "qualia" are getting the special privilege of being irreducible, even though there have been plenty of attempts to reduce them, and these attempts have had at least some success. Why can't I pick any concept and declare it a bedrock case? Is my cat fuzzy? Well, you could talk about how she is covered with soft fur, but it's possible to imagine something fuzzy and not covered with fur, or something covered with fur but not fuzzy. Because it's possible to imagine these things, clearly fuzziness must be non-physical. It's maybe harder to imagine a non-fuzzy cat than to imagine a non-thinking person, but that's just because fuzziness doesn't have the same aura of the mysterious that thinking and experiencing do.

But what are brains thinking, if not thoughts?

Right, according to epiphenomenalists, brains aren't thinking (they may be computing, but syntax is not semantics).

If it doesn't appear in the causal diagram, how could we tell that we're not living in a totally meaningless universe?

Our thoughts are (like qualia) what we are most directly acquainted with. If we didn't have them, there would be no "we" to "tell" anything. We only need causal connections to put us in contact with the world beyond our minds.

So if we taboo "thinking" and "computing", what is it that brains are not doing?

Meaning doesn't seem to be a thing in the way that atoms and qualia are, so I'm doubtful that the causal criterion properly applies to it (similarly for normative properties).

(Note that it would seem rather self-defeating to claim that 'meaning' is meaningless.)

2 · Ben Pace · 11y
What exactly do you mean by "mean"?
I'm trying to figure out what work "meaning" is doing. Eliezer says brains are "thinking" meaningless gibberish. You dispute this by saying, "But what are brains thinking, if not thoughts?" And then: "This implies that 'about'-ness and 'meaning' have roughly the same set of properties." But I don't understand why anyone believes anything about "meaning" (in this sense). If it doesn't appear in the causal diagram, how could we tell that we're not living in a totally meaningless universe? Let's play the Monday-Tuesday game: on Monday, our thoughts are meaningful; on Tuesday, they're not. What's different?

In my experience, most philosophers are actually pretty motivated to avoid the stigma of "epiphenomenalism", and try instead to lay claim to some more obscure-but-naturalist-friendly label for their view (like "non-reductive physicalism", "anomalous monism", etc.).

FWIW, my old post 'Zombie Rationality' explores what I think the epiphenomenalist should say about the worry that "the upper-tier brain must be thinking meaningless gibberish when the upper-tier lips [talk about consciousness]".

One point to flag is that from an epiphenomenalist's perspective, mere brains never really mean anything, any more than squiggles of ink do; any meaning we attribute to them is purely derivative from the meaning of appropriately-related thoughts (which, on this view, essentially involve qualia).

Another thing to flag is that... (read more)

Where in the causal diagram does "meaning" go?

Nope. Epiphenomenalism is motivated by the thought that you could (conceivably, in a world with different laws from ours) have the same bundles of neurons without any consciousness. You couldn't conceivably have the same bundles of trees not be a forest.

Good point, thanks.

Did this ever happen? (If so, updating the OP with links would be very helpful.)

Thanks, that's helpful. Two (related) possible replies for the afterlife believer:

(1) The Y-component is replaceable: brains play the Y role while we're alive, but we get some kind of replacement device in the afterlife (which qualifies as "us", rather than a "replica of us", due to persisting soul identity).

(2) The brain is only needed for physical expressions of mentality ("talking", etc.), and we revert to purely non-physical mental functioning in the afterlife.

These are silly views, of course, but I'm not yet convinced tha... (read more)

Are you agreeing, then, that X=mind and Y=brain chunks? That's surprising to me. I would have thought that X was all of the relevant behaviors -- walking, talking, breathing, playing games, writing on internet forums, ... I didn't think you would want an identity thesis between Mind and Some Class of Behaviors. Maybe I'm thinking about this wrong, but I thought for soul-ish theories, the mind just was the soul. And then you get a causal picture (for interactionists, anyway) that looks like Soul --> Brain --> Intelligent Behaviors.

It seems the considerations in gjm's comment actively tear (2) to shreds.

Did these fundamentally arbitrary assertions get more stupid? That's an angels-dancing-on-the-head-of-a-pin argument. Notice that those weren't the assertions before we learned about brains. The mind was a spirit/soul mysteriously trapped in a physical body. Then we started poking around in brains, and found that the mind didn't seem to work so well when bad things happened to the brain. So minds retreat to epiphenomenalism. Then they can retreat in time, and only actually do anything after you're no longer alive, and no one can see anything actually happening.

So, yes, the original theory of the soul got less likely after we learned about brains, but your new theory of the soul, specifically crafted to avoid contradiction with the new evidence, might not have. God, souls, immaterial minds, elan vital, essences, etc., are all a bunch of cockroaches, always scurrying back into the darkness, retreating from the expanding light of evidence. How many retreats do we need to see before we're convinced they will never be able to stand their ground?

I anticipated the "functional swap at death" argument, as it was the logical next rampart to retreat behind, but thought it was pointless to say anything about it. I think we've learned by now that the chase never ends. I could just as well say that your mind will continue only if you had duck within a week of your death, and the Spirit of the Duck was within you to transfer the function of your mind to live out the rest of eternity in a lamp post. That's a spiritual lamp post, of course. We can't actually see the lamp post, you silly goose. Or should I say, silly duck?

We really need a clear and concise statement for the rejection of the infinite class of arbitrary assertions consistent with all currently known data.

Did you miss the "N.B." at the end of my post?

I agree that the soul hypothesis is not generally worth taking seriously. What I'm denying is that the existence of brain damage is good evidence for this.

Well... the existence of brain damage, in and of itself, is not evidence for this, I agree.

That is, if I lived in a world where (for example) brain damage existed but cognitive impairment didn't follow from it, in much the same sense that skeletal damage does not result in cognitive impairment in the actual world, the mere existence of brain damage would not tell us much that's relevant to the soul hypothesis one way or the other. (And, relatedly, in the real world I don't think the existence of skeletal damage is good evidence for or against the soul hyp... (read more)

That's surely going to depend on the details of the non-naturalist view. Epiphenomenalism, for example, makes all the same empirical predictions as physicalism. (Though it might be harder to combine with a "soul" view -- it goes more naturally with property dualism than substance dualism.)

But even Cartesian Interactionists, who see the brain as an "intermediary" between soul and body, should presumably expect brain damage to cause the body to be less responsive to the soul (just as in the radio analogy).

Or are you thinking of "no... (read more)

The tooth fairy example gets a variety of responses

Seriously? I've never heard anyone insist that the tooth fairy really exists (in the form of their mother). It would seem most contrary to common usage (in my community, at least) to use 'Tooth Fairy' to denote "whoever replaced the tooth under my pillow with a coin". The magical element is (in my experience) treated as essential to the term and not a mere "connotation".

I've heard of the saying you mention, but I think you misunderstand people when you interpret it literally. My ... (read more)

What if metaethical reductionism is not meant (by some) to accommodate the pre-theoretic grasp of "morality" of most people, but just to accommodate the pre-theoretic grasp of "morality" of people like lessdazed? Could metaethical reductionism be considered a "respectable position" in that sense? And separately, suppose the main reason I'm interested in metaethics is that I am trying to answer a question like "Should I terminally value the lives of random strangers?" and I'm not sure what that question means exactly or how I should go about answering it. In this case, is there a reason for me to care much about the pre-theoretic grasp of most people, as opposed to, say, people I think are most likely to be right about morality?

This is a good example, as people saying this are in some ways doing the opposite of what you advocate, but in other ways they are doing the same thing. I think people are motivated to say "God is love" out of a desire to avoid being logically compelled to view certain others as wrong (there is a twist: some passive-aggressive non-believers say it to claim atheists are wrong when saying theists are wrong). The exact motivation isn't important, but its presence would provide a countervailing force against the sheer silliness of their confusion (which is worse than yours in an important way) and explain how they could possibly make it.

The mistake being made is to pretend that there are meanings inherently attached to a word, as if those words simply had that meaning regardless of the context of using words in general, and that word in particular, when that context is totally adequate to explain meaning. When it is clear that contemporary theists and all their ancestors were in error, the hippie says "God is love" to pretend there is no disagreement, and that "God" meant love - instead of what was obviously meant, or, often, weirdly in addition to the contradictory thing that they admit was meant.

You do the opposite in that, rather than seek ways to interpret others as agreeing or being right, even when there is obvious disagreement, you seek ways to interpret others as disagreeing or being wrong. You use the exact same mechanism of ignoring context and how language is used. You avoid the worst error of the hippies, which is claiming that others mean both "God is love" and "God is an agent, etc."

However, reductionists take the history of language and find words have many connotations; they take the meaning of "moral" and find it has a long history of meaning many different things, many of several things, and that the meaning people accept as associated with the term has to do with the quantity and quality of many aspects, none of which are intrinsic. You have apparently decide

No, you learned that the tooth fairy doesn't exist, and that your mother was instead responsible for the observable phenomena that you had previously attributed to the tooth fairy.

(It's a good analogy though. I do think that claiming that morality exists "as a computation" is a lot like claiming that the tooth fairy really exists "as one's mother".)

Yes. No. Usually, the first thing to do when guessing a random number from 1-100 is to split the possibilities in half by asking if it is more than 50 (or odd, etc.).

The tooth fairy example gets a variety of responses, from people insisting it is just objectively wrong to say "the tooth fairy doesn't exist" to those saying it is just objectively wrong to say the tooth fairy was really my mother. I happen to agree with you about what the best way is to describe what went on in this specific case. However, this is a standard blegg-rube situation that is unusual only in that it is not clear which way is best to describe the phenomenon to others.

There is a constellation of phenomena that correlate with each other - the fairy being female, being magic, having diaphanous wings, collecting things for money, those things being stored under pillows, those things being teeth. No one of these is more qualitatively essential to being a tooth fairy, to most people, than "having ten fingers" is essential to being human. If tomorrow we learn that magic is real, and a female sprite collects teeth from under pillows, and does so on the back of a termite (and has size-changing technology/magic, why not?), most people would naively say "the tooth fairy does not fly, but burrows on the back of a termite". That's OK, but not great if the true nature of the situation is not recognized, and they fall into error if they think "tooth fairy" has a meaning divorced from flight. Likewise, those who say "there was never a 'tooth fairy'; there is rather the 'burrowing tooth fairy'" are right that there was never a thing exactly like the classic description, but this group makes an error if they demand the first group stop calling the "burrowing tooth fairy" the "tooth fairy".

There is more to say: an individual who makes up explanations ad hoc is not communicating, and the relative confluence of idiolects is valid because of the Tinkerbell effect. That makes saying "No, you learned that the tooth fairy does

I'm not arguing for moral realism here. I'm arguing against metaethical reductionism, which leaves open either realism OR error theory.

For all I've said, people may well be mistaken when they attribute normative properties to things. That's fine. I'm just trying to clarify what it is that people are claiming when they make moral claims. This is conceptual analysis, not metaphysics. I'm pointing out that what you claim to be the meaning of 'morality' isn't what people mean to be talking about when they engage in moral discourse. I'm not presupposing t... (read more)

When I was young, I learned that the tooth fairy was really my mother all along. What do you think of that? (This isn't meant to be insulting or anything similar.)

Purported debates about the true meaning of "ought" reveal that everyone has their own balancing equation, and the average person thinks all others are morally obliged by objective morality to follow his or her equation.

You're confusing metaethics and first-order ethics. Ordinary moral debates aren't about the meaning of "ought". They're about the first-order question of which actions have the property of being what we ought to do. People disagree about which actions have this property. They posit different systematic theories (o... (read more)

I know that, which is why I said "Purported debates about the true meaning of 'ought'" rather than "ordinary debates, which are about the true meaning of 'ought'". Please be careful not to beg the question. People agree that there is such a property, but that is something about which they can be wrong.

Rather, they aren't trying to stipulatively define the meaning of 'ought', or else their claim that "You ought to follow the prescriptions of balancing equation Y" would be tautological. In fact, due to people's poor self-insight, time limits, and the sometimes over-coarse granularity of language, they do not stipulate their actual balancing equation. Had they perfect insight and the ability to represent their insights, it would be such a tautology. They would cease to speak like that had they the additional insight that, for it to do the work it is called upon to do, "ought" is a word that needs grounding in the context of the real reasons for action of beings. More generally, they are speaking an idiolect even regarding other definitions.

It's meant to be such a claim, but it is in error because the speaker is confused about morality, and in a sense is not even wrong. They are claiming some actions have an objective moral valuation binding upon all intelligent beings, but they may as well claim the action has the property of being a square circle - or better yet, a perfect circle for which pi is exactly 3, which is something I have witnessed a religious person claim is true.

~~~~~~~~~

I don't understand either why you believe as you do or what good justification you might have for it. I can see why one might want to make truth claims on which it falls out that the folk have the least amount of confusion to be embarrassed about and are least wrong, and if one begins with the assumption that there are "moral facts" in the strongest sense, that's a good start. However, that neither prevents one from having to say they are wrong about an enormous amount nor does it prevent one fro

That asserting there are moral facts is incompatible with the fact that people disagree about what they are?

No, I think there are moral facts and that people disagree about what they are. But such substantive disagreement is incompatible with Eliezer's reductive view on which the very meaning of 'morality' differs from person to person. It treats 'morality' like an indexical (e.g. "I", "here", "now"), which obviously doesn't allow for real disagreement.

Compare: "I am tall." "No, I am not tall!" Such ... (read more)

It's not plausible(RC, 7/1/2011 4:25 GMT), but it is plausible(LD, 7/1/2011 4:25 GMT). It's not impossible for people to be confused in exactly such a way.

That's begging the question. That intuition pump imagines intelligent people disagreeing, finds it plausible, notices that intelligent people disagreeing proves nothing, then replaces the label "intelligent" with "omniscient" (since that, if proven, would prove something) without showing the work that would make the replacement valid. If the work could be shown, the intuition pump wouldn't be very valuable, as one could just use the shown work for persuasion rather than the thought experiment with the disagreeing people. I strongly suspect that the reason the shown work is unavailable is that it does not exist.

Forget morality for one second. Doesn't the meaning of the word "hat" differ from person to person? It's only sensible to say if/because context forestalls equivocation (or tries to, anyway). Retroactively removing the context by coming into the conversation with a different meaning of "ought" (even if the first meaning of "ought" was "objective values, as I think they are, as I think I want them to be, that are universally binding on all possible minds, and that I would maintain under any coherent extrapolation of my values", where the first person is wrong about those facts, and the second meaning of "ought" is the first person's extrapolated volition) introduces equivocation.

It's really analogous to saying "No, I am not tall". Where the first person says "X would make me happy, I want to feel like doing X, and others will be better off according to balancing equation Y if I do X, and the word 'ought' encompasses when those things coincide according to objective English, so I ought to do X", and the second person says "X would make you happy, you want to feel like doing X, and others will not be better off according to balancing equation Z if you do X, and the word 'ought' encompasses when those things co

What would you say to someone who does not share your intuition that such "objective" morality likely exists?

I'd say: be an error theorist! If you don't think objective morality exists, then you don't think that morality exists. That's a perfectly respectable position. You can still agree with me about what it would take for morality to really exist. You just don't think that our world actually has what it takes.

Yes, that makes sense, except that my intuition that objective morality does not exist is not particularly strong either. I guess what I was really asking was, do you have any arguments to the effect that objective morality exists?

One related argument is the Open Question Argument: for any natural property F that an action might have, be it promotes my terminal values, or is the output of an Eliezerian computation that models my coherent extrapolated volition, or whatever the details might be, it's always coherent to ask: "I agree that this action is F, but is it good?"

But the intuitions that any metaethics worthy of the name must allow for fundamental disagreement and fallibility are perhaps more basic than this. I'd say they're just the criteria that we (at least, many ... (read more)

I'd say they're just the criteria that we (at least, many of us) have in mind when insisting that any morality worthy of the name must be "objective", in a certain sense.

What would you say to someone who does not share your intuition that such "objective" morality likely exists?

My main problem with objective morality is that while it's hard to deny that there seem to be mind-independent moral facts like "pain is morally bad", there doesn't seem to be enough such facts to build an ethical system out of them. What natural ph... (read more)

The part about computation doesn't change the fundamental structure of the theory. It's true that it creates more room for superficial disagreement and fallibility (of similar status to disagreements and fallibility regarding the effective means to some shared terminal values), but I see this as an improvement in degree and not in kind. It still doesn't allow for fundamental disagreement and fallibility, e.g. amongst logically omniscient agents.

(I take it to be a metaethical datum that even people with different terminal values, or different Eliezerian &... (read more)

It's not clear to me why there must be fundamental disagreement and fallibility, e.g. amongst logically omniscient agents. Can you refer me to an argument or intuition pump that explains why you think that?

malice implies poor motivations. Rather, the egalitarian instinct appears to be natural to most people.

Why the "rather"? How 'natural' an instinct is implies nothing about its moral quality.

It's not entirely clear what you're asking. Two possibilities, corresponding to my above distinction, are:

(1) What (perhaps more general) normatively significant feature is possessed by [saving lives for $500 each] that isn't possessed by [saving mosquitoes for $2000 each]? This would just be to ask for one's fully general normative theory: a utilitarian might point to the greater happiness that would result from the former option. Eventually we'll reach bedrock ("It's just a brute fact that happiness is good!"), at which point the only remain... (read more)

Yup. I'm asking question (2). Thanks again for your clarifying remarks.

People claim all sorts of justifications for 'ought' statements (aka normative statements).

You still seem to be conflating justification-giving properties with the property of being justified. Non-naturalists emphatically do not appeal to non-natural properties to justify our ought-claims. When explaining why you ought to give to charity, I'll point to various natural features -- that you can save a life for $500 by donating to VillageReach, etc. It's merely the fact that these natural features are justifying, or normatively important, which is non-natural.

Sure. So what is it that makes (a) [the fact that you can save a life by donating $500 to VillageReach] normatively justifying, whereas (b) [the fact that you can save a mosquito by donating $2000 to SaveTheMosquitos] is not normatively justifying? On my naturalist view, the fact that makes (a) but not (b) normatively justifying would be some fact about how the goal we're discussing at the moment is saving human lives, not saving mosquito lives. That's a natural fact. So are the facts about how the English language works and how two English speakers are using their terms.

Thanks, this is helpful. I'm interested in your use of the phrase "source of normativity" in:

The only source of normativity I think exists is the hypothetical imperative

This makes it sound like there's a new thing, normativity, that arises from some other thing (e.g. desires, or means/ends relationships). That's a very realist way of talking.

I take it that what you really want to say something more like, "The only kind of 'normativity'-talk that's naturalistically reducible and hence possibly true is hypothetical imperatives -- when th... (read more)

My thought process on sources of normativity looks something like this: People claim all sorts of justifications for 'ought' statements (aka normative statements). Some justify ought statements with respect to natural law or divine commands or non-natural normative properties or categorical imperatives. But those things don't exist. The only justification of normative language that fits in my model of the universe is when people use 'ought' language as some kind of hypothetical imperative, which can be translated into a claim about things reducible to physics. There are many varieties of this: many uses of 'ought' terms can be translated into claims about things reducible to physics. If somebody uses 'ought' terms to make claims about things not reducible to physics, then I am suspicious of the warrant for those claims. When I interrogate such warrants, I usually find that the only evidence on offer consists of folk wisdom, intuitions, and conventional linguistic practice.

Thanks for this reply. I share your sense that the word 'moral' is unhelpfully ambiguous, which is why I prefer to focus on the more general concept of the normative. I'm certainly not going to stipulate that motivational internalism is true of the normative, though it does seem plausible that there's something irrational about someone who acknowledges that they really ought (all things considered) to phi and yet fails to do so. (I don't doubt that it's possible for someone to form the judgment without any corresponding motivation though, as it's always p... (read more)

Right. Unfortunately, I don't think I'm clear about what you mean by normativity. The only source of normativity I think exists is the hypothetical imperative, which can be reduced to physics by straightforward methods such as the one I used in the original post. I'm not an error theorist about that kind of normativity.

This is a good question. Truly, I want to get away from moral vocabulary, and be careful around normative vocabulary. But people already think about these topics in moral and normative vocabulary, which is why I'm trying to solve or dissolve (in this post and its predecessor) some of the usual 'problems' in this space of questions. After that's done, I don't think it will be most helpful to use moral language. This is evident in the fact that in 15 episodes of my 'morality podcast' I've used almost no moral language at all.

Not much, really. I wasn't using the modus ponens to present an argument, but to unpack one interpretation of (some) 'should' discourse. Normative language, like many other kinds of language, is (when used correctly) merely a shortcut for saying something else. I can imagine a language that has no normative language at all. In that language we couldn't say things like "If you want to torture children, you should volunteer as a babysitter", but we could say things like "If you volunteer as a babysitter you will have more opportunities to torture children." The way I'm parsing 'should' in the first sentence, nothing is lost by this translation. Of course, people use 'should' in a variety of ways, some of which translate into claims about things reducible to physics, others of which translate into claims about things not reducible to physics, while still others don't seem to translate into cognitive statements at all.

That doesn't really answer my question. Let me try again. There are two things you might mean by "mind dependent".

(1) You might just mean "makes some reference to the mind". So, for example, the necessary truth that "Any experience of red is an experience of colour" would also count as "mind-dependent" in this sense. (This seems a very misleading usage though.)

(2) More naturally, "mind dependent" might be taken to mean that the truth of the claim depends upon certain states of mind actually existing. But "pain is bad for people" (like my example above) does not seem to be mind-dependent in this sense.

Which did you have in mind?

By saying that "facts about the well-being of conscious creatures are mind-dependent facts," I just mean that statements about the well-being of conscious creatures are made true or false by facts about mind states. A statement about my well-being is mind-dependent in the sense that a statement about my well-being (as I am using the term) is a statement about my brain states. A statement about the distance between my chair and my desk is not a statement about brain states, and would be true or false whether or not our Hubble volume still contained any minds.

As I argue elsewhere:

"Hypothetical imperatives thus reveal patterns of normative inheritance. But their highlighted 'means' can't inherit normative status unless the 'end' in question had prior normative worth. A view on which there are only hypothetical imperatives is effectively a form of normative nihilism -- no more productive than an irrigation system without any water to flow through it."

(Earlier in the post explains why hypothetical imperatives aren't reducible to mere empirical statements of a means-ends relationship.)

I tentatively favour... (read more)

Error theory

You know this, but for the benefit of others: roughly, error theory consists of two steps. As Finlay puts it:

Given my view of conceptual analysis, it shouldn't be surprising that I'm not confident of some error theorists' assertion of step 1. Is a presupposition of moral absolutism 'essential' to a judgment's status as a 'moral' judgment? Is a presupposition of motivational internalism 'essential' to a judgment's status as a 'moral' judgment? I don't know. Moral discourse (unlike carbon discourse) is so confused that I'm not too interested in asserting one fine boundary line around moral terms over another.

So if someone thinks a presupposition of supernaturalism is 'essential' to a judgment's status as a 'moral' judgment, then I will claim that supernaturalism is false. But this doesn't make me an error theorist, because I don't necessarily agree that a presupposition of supernaturalism is 'essential' to a judgment's status as a 'moral' judgment. I reject step 1 of error theory in this case.

Likewise, if someone thinks a presupposition of moral absolutism or motivational internalism is essential to a judgment's status as a 'moral' judgment, I'll be happy to deny both moral absolutism and motivational internalism, but I wouldn't call myself an error theorist, because I reject the claim that moral judgments (by definition, by conceptual analysis) necessarily presuppose moral absolutism or motivational internalism.

But hey, if you convince me that the presumption of motivational internalism in moral discourse is so widespread that talking about 'morality' without it would be like using the term 'phlogiston' to talk about oxygen, then I'll be happy to call myself an error theorist, though none of my anticipations will have changed.

Hypothetical imperatives

I'll r

I'm inclined not to write about moral non-naturalism because I'm writing this stuff for Less Wrong, where most people are physicalists

Physicalists could (like Mackie) accept the non-naturalist's account of what it would take for something to be genuinely normative, and then simply deny that there are any such properties in reality. I'm much more sympathetic to this hard-headed "error theory" than to the more weaselly forms of naturalism.

I think many of our normative concepts fail to refer, but that a class of normative concepts often called hypothetical imperatives do refer, thanks to a rather straightforward reduction as given above. Are hypothetical imperatives not 'genuinely normative' in your sense of the phrase? Do you use the term 'normative' when talking about things other than hypothetical imperatives, and do you think those other things successfully refer?

I was thinking of "fundamental" concepts as those that are most basic, and not reducible to (or built up out of) other, more basic, concepts. I do think that normative concepts are conceptually isolated, i.e. not reducible to non-normative concepts, and that's really the more relevant feature so far as the OQA is concerned. But by 'fundamental normative concept' I meant a normative concept that is not reducible to any other concepts at all. They are the most basic, or bedrock, of our normative concepts.

Given the extremely poor access human beings have to the structure of their own concepts, it's dubious that the methods of analytic philosophy can trace those structures. Moreover, concepts typically "cluster together similar things for purposes of inference" (Yudkowsky) and thus we can re-structure them in light of new discoveries. Concepts that are connected now might be improved by disconnecting them, or vice versa. It is not at all clear that normative concepts are not included in this (Neurath-style) boat.

Just to clarify: By 'pain' I mean the hurtful aspect of the sensation, not the base sensation that could remain in the absence of its hurting.

In your first paragraph you describe people who take pain to be instrumentally useful in some circumstances, to bring about some other end (e.g. healing) which is itself good. I take no stand on that empirical issue. I'm talking about the crazy normative view that pain is itself (i.e. non-instrumentally) good.

Yes, I was imagining someone who thought that unmitigated pain and suffering was good for everyone, themselves included. Such a person is nuts, but hardly inconceivable.

In the not-so-distant past, some surgeons opposed pain-killing medication for post-operative pain, believing that the pain was essential to the healing process. There are also reports from patients who have had morphine for pain relief that the pain is still there, but the morphine takes the hurting out of it.

It's not analytic that pain is bad. Imagine some crazy soul who thinks that pain is intrinsically good for you. This person is deeply confused, but their error is not linguistic (as if they asserted "bachelors are female"). They could be perfectly competent speakers of the English language, and even logically omniscient. The problem is that such a person is morally incompetent. They have bizarrely mistaken ideas about what things are good (desirable) for people, and this is a substantive (synthetic), not merely analytic, matter.

Perhaps the t... (read more)

The issue is more whether anyone could think pain is good for themselves. One could imagine a situation where pain receptors connect up to pleasure centers, but then it becomes a moot point whether that is actually pain.

If we taboo and reduce, then the question of "...but is it good?" is out of place. The reply is: "Yes it is, because I just told you that's what I mean to communicate when I use the word-tool 'good' for this discussion. I'm not here to debate definitions; I'm here to get something done."

I just wanted to flag that a non-reductionist moral realist (like myself) is also "not here to debate definitions". See my post on The Importance of Implications. This is compatible with thinking well of the Open Question Argument, if we t... (read more)

I'm inclined not to write about moral non-naturalism because I'm writing this stuff for Less Wrong, where most people are physicalists. What does it mean to you to say that something is a 'fundamental normative concept'? As in... non-reducible to 'is' statements (in the Humean sense)?


facts about the well-being of conscious creatures are mind-dependent facts

How so? (Note that a proposition may be in some sense about minds without its truth value being mind-dependent. E.g. "Any experience of red is an experience of colour" is true regardless of what minds exist. I would think the same is true of, e.g., "All else equal, pain is bad for the experiencer.")

I'm borrowing the concept 'well-being of conscious creatures' from Sam Harris, who seems to think of it in terms of mind-dependent facts, perhaps involving (e.g.) brain states we might call 'pain' or 'pleasure'.
I believe the "facts" in question were synthetic ones ("all else being equal, being set on fire is bad for the person set on fire,") not analytic ones ("all else equal, pain is bad for the experiencer.")

It's confusing that you use the word 'meta-ethics' when talking about plain first-order ethics.

...You're right, that was pretty sloppy. I felt vaguely justified in doing so since I often think about the meta-ethics implied by or represented in first-order ethics (not that the levels are easily distinguishable in practice in the first place) and thus sort of made a point of not distinguishing them carefully. In hindsight that was dumb, and especially dumb to fail to acknowledge.

Non-cognitivists, in contrast, think that moral discourse is not truth-apt.

Technically, that's not quite right (except for the early emotivists, etc.). Contemporary expressivists and quasi-realists insist that they can capture the truth-aptness of moral discourse (given the minimalist's understanding that to assert 'P is true' is equivalent to asserting just 'P'). So they will generally explain what's distinctive about their metaethics in some other way, e.g. by appeal to the idea that it's our moral attitudes rather than their contents that have a certain central explanatory role...

Fair enough. I adjusted the wording in the original post. Thanks.

Depending on what you mean by 'direct access', I suspect that you've probably misunderstood. But judging by the relatively low karma levels of my recent comments, going into further detail would not be of sufficient value to the LW community to be worth the time.

You're still getting voted up on net, despite not explaining how, as you've claimed, the psychological fact of p-zombie plausibility is evidence for it (at least beyond references to long descriptions of your general beliefs).

How do you know that "people think zombies are conceivable"? Perhaps you will respond that we can know our own beliefs through introspection, and the inferential chain must stop somewhere. My view is that the relevant chain is merely like so:

zombies are conceivable => physicalism is false

I claim that we may non-inferentially know some non-psychological facts, when our beliefs in said facts meet the conditions for knowledge (exactly what these are is of course controversial, and not something we can settle in this comment thread).

I know that people think zombies are conceivable because they say they think zombies are conceivable (including, in some cases, saying "zombies are conceivable").

To say that we may "non-inferentially know" something appears to violate the principle that beliefs require justification in order to be rational.

By removing "people think zombies are conceivable", you've made the argument weaker rather than stronger, because now the proposition "zombies are conceivable" has no support. In any case, you now seem as eminently vulnerable to Eliezer's original criticism as ever: you indeed appear to think that one can have some sort of "direct access" to the knowledge that zombies are conceivable that bypasses the cognitive processes in your brain. Or have I misunderstood?