Nick,

Eliezer's one-place function is infallible precisely because he defines "right" as its output.

I misunderstood some of Eliezer's notation. I now take his function to be an extrapolation of his volition rather than anyone else's. I don't think this weakens my point: if there were a rock somewhere with a lookup table for this function written on it, Eliezer should always follow the rock rather than his own insights (and according to Eliezer everyone else should too), and this remains true even if there is no such rock.

Furthermore, the morality function is based on extrapolated volition. Someone who has only considered one point of view on various moral questions will disagree with their extrapolated (completely knowledgeable, completely wise) volition in certain predictable ways. That's exactly what I mean by a "twist."
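To make the lookup-table picture concrete, here's a minimal sketch in code (my own illustration; the entries and names are assumptions, not anything Eliezer has actually specified): the rock is a fixed one-place mapping from outcomes to verdicts, while a person's present judgment, formed from a limited point of view, can diverge from it in the predictable ways just described.

```python
# Hypothetical sketch of the one-place "right" function and the lookup-table rock.
# The table entries and names are illustrative assumptions, not anything Eliezer has specified.

# The rock: a fixed, one-place mapping from outcomes to verdicts. Whatever it
# outputs *is* "right" by definition, so it cannot be mistaken.
RIGHT_ON_THE_ROCK = {
    "pull the child off the train tracks": True,
    "share the surplus harvest with distant strangers": True,
}

def current_judgment(outcome: str) -> bool:
    """One person's present verdict, formed from a limited point of view.
    It can diverge from the extrapolated verdict in predictable ways (the 'twist')."""
    my_limited_views = {
        "pull the child off the train tracks": True,
        "share the surplus harvest with distant strangers": False,  # a predictable divergence
    }
    return my_limited_views.get(outcome, False)

# The claim at issue: wherever the two disagree, defer to the rock, i.e. to the
# extrapolated (fully informed, fully wise) verdict rather than one's own insight.
for outcome, verdict in RIGHT_ON_THE_ROCK.items():
    if current_judgment(outcome) != verdict:
        print(f"Defer to the rock on: {outcome!r}")
```

The design point is simply that the one-place function takes no "speaker" argument, so there is no room for its verdicts to vary with who consults it.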

MTraven - They might have a common structural/functional role. It would be plenty interesting if computing a certain algorithm strictly entailed a certain phenomenal quality (or 'feel').

Dan - I assume that science is essentially limited to third-personal investigation of public, measurable phenomena. It follows that we can expect to learn more and more about the public, measurable aspects of neural functioning. But it would be a remarkable surprise if such inquiry sufficed to establish conclusions about first-personal phenomenology. (In this respect, the epistemic gap between 'physics' and 'phenomenology' mirrors the even more famous gap between 'is' and 'ought'.) Who knows, maybe we'll be surprised? Maybe our current thoughts rest upon severe conceptual errors? Maybe logic is an illusion, and I merely believe in the validity of modus ponens because a demon is messing with my head? We can play "maybe"s all day long, but it doesn't seem very helpful unless you can actually show that a mistake has been made.

Robin - I can't tell what you mean. Are you saying there's a logically possible world that's identical to ours with respect to the arrangement of fingers and palms, etc., but that does not contain any hands? I'm pretty sure that's false: fingers etc. entail hands. But if you can describe a world that serves as a counterexample to this claim, I'd be very curious to hear it.

Alternatively, perhaps you're saying that if we weren't thinking clearly, and didn't really understand the term 'hand', then we might be fooled into believing that hand-zombies were logically possible. (This would be most likely if our 'hand' concept did not explicitly invoke fingers, but rather brought them in implicitly, just as 'water' indirectly reduces to H2O, in virtue of being directly analyzable as 'whatever stuff actually fills the water role'.) I agree with all that, but am yet to be convinced that my judgments about p-zombies rest on any analogous error. [I examine the alleged analogy to conventional a posteriori identities, e.g. water = H2O, here.]

Caledonian - I couldn't care less what you consider me. I'd much rather see you consider my arguments. Maybe then you'd have something of substance to contribute to the conversation. (N.B. I'm well aware that p-zombies are physically - and hence behaviourally - identical to their conscious counterparts. The dispute is over what conclusions we can draw from this.)

Paul - that can't be right. If I could somehow learn (contrary to fact) that animals were p-zombies, i.e. they don't really feel pain despite giving every outward appearance of doing so, that would undermine most arguments for ethical vegetarianism, and instead support the most 'efficient' factory farming practices.

There seems to be an unexamined assumption here.

Why should the moral weight of applying a specified harm to someone be independent of who it is?
When making moral decisions, I tend to weight effects on my friends and family most heavily, then acquaintances, then fellow Americans, and so on. I value random strangers to some extent, but this is based more on arguments about the small size of the planet than true concern for their welfare.

I claim that moral obligations must be reciprocal in order to exist. Altruism is never mandatory.

None of Eliezer's 3^^^3 people will (under the given hypotheses) ever interact with anyone on Earth or any of their descendants. I think the sum of moral weights I would assign to these 3^^^3 people would be less than the sum of weights for (e.g.) all inhabitants of Earth from 2000 BC to the present. I would happily subject all of them to dust motes to prevent one American from being tortured for 50 years, and would think less of any fellow citizen who would not do the same.
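To spell out the arithmetic behind this (a sketch only; the geometric decay rate is assumed purely for illustration, not a weighting the comment commits to): suppose the weight assigned to the n-th causally disconnected stranger is $w_n = c \cdot 2^{-n}$. Then for any number of strangers $N$, even $N$ = 3^^^3,

\[
\sum_{n=1}^{N} w_n \;<\; \sum_{n=1}^{\infty} c \cdot 2^{-n} \;=\; c ,
\]

so $N$ dust specks, each with per-person disutility $d$, contribute at most $c \cdot d$ of weighted harm in total, which can remain below the weighted harm of fifty years of torture for one heavily weighted person no matter how large $N$ gets. A bounded total weight of this kind is what lets the ordering above come out consistently.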

(Let me just add that the first chapter of my thesis addresses Constant's concerns, and my previously linked post 'why do you think you're conscious?' speaks to Eliezer's worries about epiphenomenalism -- what is sometimes called 'the paradox of phenomenal judgment.' Some general advice: philosophers aren't idiots, so it's rarely warranted to attribute their disagreement to a mere "failure to realize" some obvious fact.)

g - No, by 'conceptually possible' I mean ideally conceptually possible, i.e. a priori coherent, or free of internal contradiction. (Feel free to substitute 'logical possibility' if you are more familiar with that term.) Contingent failures of imagination on our part don't count. So it's open to you to argue that zombies aren't conceptually possible after all, i.e. that further reflection would reveal a hidden contradiction in the concept. But there seems little reason, besides a dogmatic prior commitment to materialism, to think such a thing. Most (but admittedly not all) materialist philosophers grant the logical possibility of zombies, and instead dispute the inference to metaphysical possibility. This seems no less ad hoc. Anyway:

"I would like to know... why you think conceptual possibility has anything to do with actual possibility."

I actually wrote a whole thesis on this very question, so rather than further clogging the comments here, allow me to simply provide the link. If you're interested enough to read all that, and still have any objections to my view afterwards, I'd be very interested to hear them - my comments are open. For this page, though, I think I should bow out, unless Eliezer sees fit to address the concerns I raised about the original topic, and especially his treatment of the a priori.

Constant - Sure, there's something to be said for epistemic externalism. But I thought Eliezer had higher ambitions than merely distinguishing rationality and reliability? He seems to be attacking the very notion of the a priori, claiming that philosophers lazily treat it as a semantic stopsign or 'truce' (a curious claim, since many philosophers take themselves to be more or less exclusively concerned with the a priori domain, and yet have been known to disagree with one another on occasion), and dismissively joking that "it makes you wonder why a thirsty hunter-gatherer can't use the 'a priori truth factory' to locate drinkable water." (The answer isn't that hard to see if one honestly wonders about it for a moment or two.) But maybe you're right, and these cheap shots are just part of the local attire, not intended for cognitive consumption.

g - I already answered this. Change the extra-physical laws of nature as you will: it is not conceptually possible for a world physically identical to ours to lack flying airplanes. What else are we to call the Boeing-arranged atoms at 10,000 ft? The zombie (physically identical but non-conscious) world, by contrast, does seem conceptually possible. So there's no analogy here.

TGGP - Yes, I think that, thanks to the bridging laws, "the materially-sufficient is psycho-sufficient". This dualism is empirically indistinguishable from materialism. Anticipating experience may be a useful constraint for science, but that is not all there is to know. (See also my responses to James above.)