Followup to: Mixed Reference: The Great Reductionist Project
Humans need fantasy to be human.
"Tooth fairies? Hogfathers? Little—"
Yes. As practice. You have to start out learning to believe the little lies.
"So we can believe the big ones?"
Yes. Justice. Mercy. Duty. That sort of thing.
"They're not the same at all!"
You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.
- Susan and Death, in Hogfather by Terry Pratchett
Suppose three people find a pie - that is, three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory. Zaire wants the entire pie; Yancy thinks that 1/3 each is fair; and Xannon thinks that fair would be taking into equal account everyone's ideas about what is "fair".
I myself would say unhesitatingly that a third of the pie each is fair. "Fairness", as an ethical concept, can get a lot more complicated in more elaborate contexts. But in this simple context, a lot of other things that "fairness" could depend on, like work inputs, have been eliminated or made constant. Assuming no relevant conditions other than those already stated, "fairness" simplifies to the mathematical procedure of splitting the pie into equal parts; and when this logical function is run over physical reality, it outputs "1/3 for Zaire, 1/3 for Yancy, 1/3 for Xannon".
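The claim that "fair" here reduces to a mathematical procedure can be made concrete with a toy sketch (the function name and the encoding of the scenario are mine, purely illustrative):

```python
from fractions import Fraction

def fair_split(claimants, pie=1):
    """Toy 'fairness' function for the simplified scenario:
    one pie, equal claims, all other inputs held constant."""
    share = Fraction(pie, len(claimants))
    return {name: share for name in claimants}

# Running the logical function over the physical scenario:
print(fair_split(["Zaire", "Yancy", "Xannon"]))
# each claimant gets Fraction(1, 3)
```

The point is not that this code *is* fairness, but that in the simplified scenario the concept pins down a definite computable output, whatever anyone says about it.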
Or to put it another way - just like we get "If Oswald hadn't shot Kennedy, nobody else would've" by running a logical function over a true causal model - similarly, we can get the hypothetical 'fair' situation, whether or not it actually happens, by running the physical starting scenario through a logical function that describes what a 'fair' outcome would look like.
So am I (as Zaire would claim) just assuming-by-authority that I get to have everything my way, since I'm not defining 'fairness' the way Zaire wants to define it?
No more than mathematicians are flatly ordering everyone to assume-without-proof that two different numbers can't have the same successor. For fairness to be what everyone thinks is "fair" would be entirely circular, structurally isomorphic to "Fzeem is what everyone thinks is fzeem"... or like trying to define the counting numbers as "whatever anyone thinks is a number". It only even looks coherent because everyone secretly already has a mental picture of "numbers" - because their brain already navigated to the referent. But something akin to axioms is needed to talk about "numbers, as opposed to something else" in the first place. Even an inchoate mental image of "0, 1, 2, ..." implies the axioms no less than a formal statement - we can extract the axioms back out by asking questions about this rough mental image.
Similarly, the intuition that fairness has something to do with dividing up the pie equally, plays a role akin to secretly already having "0, 1, 2, ..." in mind as the subject of mathematical conversation. You need axioms, not as assumptions that aren't justified, but as pointers to what the heck the conversation is supposed to be about.
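The axioms implicit in that inchoate mental image of "0, 1, 2, ..." can in fact be written back out. A standard rendering (Peano's, in first-order form - the second axiom below is the "two different numbers can't have the same successor" principle mentioned above) is:

```latex
\begin{align*}
&\forall n\; S(n) \neq 0
  && \text{(zero is no number's successor)} \\
&\forall m\, \forall n\; \big(S(m) = S(n) \rightarrow m = n\big)
  && \text{(distinct numbers have distinct successors)} \\
&\big[\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(S(n)))\big]
  \rightarrow \forall n\; \varphi(n)
  && \text{(induction schema)}
\end{align*}
```

The axioms don't *assume* anything about numbers that wasn't already in the mental picture; they extract and pin down which logical entity the picture was about.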
Multiple philosophers have suggested that this stance seems similar to "rigid designation", i.e., when I say 'fair' it intrinsically, rigidly refers to something-to-do-with-equal-division. I confess I don't see it that way myself - if somebody thinks of Euclidean geometry when you utter the sound "num-berz" they're not doing anything false, they're associating the sound to a different logical thingy. It's not about words with intrinsically rigid referential power, it's that the words are window dressing on the underlying entities. I want to talk about a particular logical entity, as it might be defined by either axioms or inchoate images, regardless of which word-sounds may be associated to it. If you want to call that "rigid designation", that seems to me like adding a level of indirection; I don't care about the word 'fair' in the first place, I care about the logical entity of fairness. (Or to put it even more sharply: since my ontology does not have room for physics, logic, plus designation, I'm not very interested in discussing this 'rigid designation' business unless it's being reduced to something else.)
Once issues of justice become more complicated and all the contextual variables get added back in, we might not be sure if a disagreement about 'fairness' reflects:
- The equivalent of a multiplication error within the same axioms - incorrectly dividing by 3. (Or more complicatedly: You might have a sophisticated axiomatic concept of 'equity', and incorrectly process those axioms to invalidly yield the assertion that, in a context where 2 of the 3 must starve and there's only enough pie for at most 1 person to survive, you should still divide the pie equally instead of flipping a 3-sided coin. Where I'm assuming that this conclusion is 'incorrect', not because I disagree with it, but because it didn't actually follow from the axioms.)
- Mistaken models of the physical world fed into the function - mistakenly thinking there's 2 pies, or mistakenly thinking that Zaire has no subjective experiences and is not an object of ethical value.
- People associating different logical functions to the letters F-A-I-R, which isn't a disagreement about some common pinpointed variable, but just different people wanting different things.
There's a lot of people who feel that this picture leaves out something fundamental, especially once we make the jump from "fair" to the broader concept of "moral", "good", or "right". And it's this worry about leaving-out-something-fundamental that I hope to address next...
...but please note, if we confess that 'right' lives in a world of physics and logic - because everything lives in a world of physics and logic - then we have to translate 'right' into those terms somehow.
And that is the answer Susan should have given - if she could talk about sufficiently advanced epistemology, sufficiently fast - to Death's entire statement:
You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy. And yet — Death waved a hand. And yet you act as if there is some ideal order in the world, as if there is some ... rightness in the universe by which it may be judged.
"But!" Susan should've said. "When we judge the universe we're comparing it to a logical referent, a sort of thing that isn't in the universe! Why, it's just like looking at a heap of 2 apples and a heap of 3 apples on a table, and comparing their invisible product to the number 6 - there isn't any 6 if you grind up the whole table, even if you grind up the whole universe, but the product is still 6, physico-logically speaking."
If you require that Rightness be written on some particular great Stone Tablet somewhere - to be "a light that shines from the sky", outside people, as a different Terry Pratchett book put it - then indeed, there's no such Stone Tablet anywhere in our universe.
But there shouldn't be such a Stone Tablet, given standard intuitions about morality. This follows from the Euthyphro Dilemma out of ancient Greece.
The original Euthyphro dilemma goes, "Is it pious because it is loved by the gods, or loved by the gods because it is pious?" The religious version goes, "Is it good because it is commanded by God, or does God command it because it is good?"
The standard atheist reply is: "Would you say that it's an intrinsically good thing - even if the event has no further causal consequences which are good - to slaughter babies or torture people, if that's what God says to do?"
If we can't make it good to slaughter babies by tweaking the state of God, then morality doesn't come from God; so goes the standard atheist argument.
But if you can't make it good to slaughter babies by tweaking the physical state of anything - if we can't imagine a world where some great Stone Tablet of Morality has been physically rewritten, and what is right has changed - then this is telling us that...
(drumroll)
...what's "right" is a logical thingy rather than a physical thingy, that's all. The mark of a logical validity is that we can't concretely visualize a coherent possible world where the proposition is false.
And I mention this in hopes that I can show that it is not moral anti-realism to say that moral statements take their truth-value from logical entities. Even in Ancient Greece, philosophers implicitly knew that 'morality' ought to be such an entity - that it couldn't be something you found when you ground the Universe to powder, because then you could resprinkle the powder and make it wonderful to kill babies - though they didn't know how to say what they knew.
There's a lot of people who still feel that Death would be right, if the universe were all physical; that the kind of dry logical entity I'm describing here, isn't sufficient to carry the bright alive feeling of goodness.
And there are others who accept that physics and logic is everything, but who - I think mistakenly - go ahead and also accept Death's stance that this makes morality a lie, or, in lesser form, that the bright alive feeling can't make it. (Sort of like people who accept an incompatibilist theory of free will, also accept physics, and conclude with sorrow that they are indeed being controlled by physics.)
In case anyone is bored that I'm still trying to fight this battle, well, here's a quote from a recent Facebook conversation with a famous early transhumanist:
No doubt a "crippled" AI that didn't understand the existence or nature of first-person facts could be nonfriendly towards sentient beings... Only a zombie wouldn't value Heaven over Hell. For reasons we simply don't understand, the negative value and normative aspect of agony and despair is built into the nature of the experience itself. Non-reductionist? Yes, on a standard materialist ontology. But not IMO within a more defensible Strawsonian physicalism.
It would actually be quite surprisingly helpful for increasing the percentage of people who will participate meaningfully in saving the planet, if there were some reliably-working standard explanation for why physics and logic together have enough room to contain morality. People who think that reductionism means we have to lie to our children, as Pratchett's Death advocates, won't be much enthused about the Center for Applied Rationality. And there are a fair number of people out there who still advocate proceeding in the confidence of ineffable morality to construct sloppily designed AIs.
So far I don't know of any exposition that works reliably - for the thesis that morality, including our intuitions about whether things really are justified and so on, is preserved in the analysis to physics plus logic; that morality has been explained rather than explained away. Nonetheless I shall now take another stab at it, starting with a simpler bright feeling:
When I see an unusually neat mathematical proof, unexpectedly short or surprisingly general, my brain gets a joyous sense of elegance.
There's presumably some functional slice through my brain that implements this emotion - some configuration subspace of spiking neural circuitry which corresponds to my feeling of elegance. Perhaps I should say that elegance is merely about my brain switching on its elegance-signal? But there are concepts like Kolmogorov complexity that give more formal meanings of "simple" than "Simple is whatever makes my brain feel the emotion of simplicity." Anything you do to fool my brain wouldn't make the proof really elegant, not in that sense. The emotion is not free of semantic content; we could build a correspondence theory for it and navigate to its logical+physical referent, and say: "Sarah feels like this proof is elegant, and her feeling is true." You could even say that certain proofs are elegant even if no conscious agent sees them.
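Kolmogorov complexity itself is uncomputable, but a crude computable stand-in - compressed length - illustrates how "simple" can be given a meaning independent of anyone's feeling of simplicity (using zlib as the proxy is my illustration, not part of the formal definition):

```python
import zlib

def description_length(s: str) -> int:
    # Compressed size in bytes, a rough upper bound on how short a
    # description of s can be; shorter = "simpler" in this proxy sense.
    return len(zlib.compress(s.encode("utf-8")))

patterned = "ab" * 500  # 1000 characters, but highly regular
# The regular string compresses far below its raw length - it is
# "simple" by this measure regardless of how it makes anyone feel.
print(description_length(patterned), "<", len(patterned))
```

Fooling a brain's simplicity-signal does nothing to this number, which is the sense in which the emotion has semantic content rather than being its own referent.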
My description of 'elegance' admittedly did invoke agent-dependent concepts like 'unexpectedly' short or 'surprisingly' general. It's almost certainly true that with a different mathematical background, I would have different standards of elegance and experience that feeling on somewhat different occasions. Even so, that still seems like moving around in a field of similar referents for the emotion - much more similar to each other than to, say, the distant cluster of 'anger'.
Rewiring my brain so that the 'elegance' sensation gets activated when I see mathematical proofs where the words have lots of vowels - that wouldn't change what is elegant. Rather, it would make the feeling be about something else entirely; different semantics with a different truth-condition.
Indeed, it's not clear that this thought experiment is, or should be, really conceivable. If all the associated computation is about vowels instead of elegance, then from the inside you would expect that to feel vowelly, not feel elegant...
...which is to say that even feelings can be associated with logical entities. Though unfortunately not in any way that will feel like qualia if you can't read your own source code. I could write out an exact description of your visual cortex's spiking code for 'blue' on paper, and it wouldn't actually look blue to you. Still, on the higher level of description, it should seem intuitively plausible that if you tried rewriting the relevant part of your brain to count vowels, the resulting sensation would no longer have the content or even the feeling of elegance. It would compute vowelliness, and feel vowelly.
My feeling of mathematical elegance is motivating; it makes me more likely to search for similar such proofs later and go on doing math. You could construct an agent that tried to add more vowels instead, and if the agent asked itself why it was doing that, the resulting justification-thought wouldn't feel like because-it's-elegant, it would feel like because-it's-vowelly.
In the same sense, when you try to do what's right, you're motivated by things like (to yet again quote Frankena's list of terminal values):
"Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."
If we reprogrammed you to count paperclips instead, it wouldn't feel like different things having the same kind of motivation behind it. It wouldn't feel like doing-what's-right for a different guess about what's right. It would feel like doing-what-leads-to-paperclips.
And I quoted the above list because the feeling of rightness isn't about implementing a particular logical function; it contains no mention of logical functions at all; in the environment of evolutionary ancestry nobody had heard of axiomatization; these feelings are about life, consciousness, etcetera. If I could write out the whole truth-condition of the feeling in a way you could compute, you would still feel Moore's Open Question: "I can see that this event is high-rated by logical function X, but is X really right?" - since you can't read your own source code and the description wouldn't be commensurate with your brain's native format.
"But!" you cry. "But, is it really better to do what's right, than to maximize paperclips?" Yes! As soon as you start trying to cash out the logical function that gives betterness its truth-value, it will output "life, consciousness, etc. >_B paperclips". And if your brain were computing a different logical function instead, like makes-more-paperclips, it wouldn't feel better, it would feel moreclippy.
But is it really justified to keep our own sense of betterness? Sure, and that's a logical fact - it's the objective output of the logical function corresponding to your experiential sense of what it means for something to be 'justified' in the first place. This doesn't mean that Clippy the Paperclip Maximizer will self-modify to do only things that are justified; Clippy doesn't judge between self-modifications by computing justifications, but rather, computing clippyflurphs.
But isn't it arbitrary for Clippy to maximize paperclips? Indeed; once you implicitly or explicitly pinpoint the logical function that gives judgments of arbitrariness their truth-value - presumably, revolving around the presence or absence of justifications - then this logical function will objectively yield that there's no justification whatsoever for maximizing paperclips (which is why I'm not going to do it) and hence that Clippy's decision is arbitrary. Conversely, Clippy finds that there's no clippyflurph for preserving life, and hence that it is unclipperiffic. But unclipperifficness isn't arbitrariness any more than the number 17 is a right triangle; they're different logical entities pinned down by different axioms, and the corresponding judgments will have different semantic content and feel different. If Clippy is architected to experience that-which-you-call-qualia, Clippy's feeling of clippyflurph will be structurally different from the way justification feels, not just red versus blue, but vision versus sound.
But surely one shouldn't praise the clippyflurphers rather than the just? I quite agree; and as soon as you navigate referentially to the coherent logical entity that is the truth-condition of should - a function on potential actions and future states - it will agree with you that it's better to avoid the arbitrary than the unclipperiffic. Unfortunately, this logical fact does not correspond to the truth-condition of any meaningful proposition computed by Clippy in the course of how it efficiently transforms the universe into paperclips, in much the same way that rightness plays no role in that-which-is-maximized by the blind processes of natural selection.
Where moral judgment is concerned, it's logic all the way down. ALL the way down. Any frame of reference where you're worried that it's really no better to do what's right than to maximize paperclips... well, that 'really' part has a truth-condition (or what does the "really" mean?) and as soon as you write out the truth-condition you're going to end up with yet another ordering over actions or algorithms or meta-algorithms or something. And since grinding up the universe won't and shouldn't yield any miniature '>' tokens, it must be a logical ordering. And so whatever logical ordering it is you're worried about, it probably does produce 'life > paperclips' - but Clippy isn't computing that logical fact any more than your pocket calculator is computing it.
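The picture of two agents each computing only its own ordering - neither ordering being written into physics - can be put as a toy sketch (the two functions and the outcome encodings are mine, purely illustrative):

```python
# Two toy agents, each evaluating outcomes with its own logical
# function. Each function is just a different ordering; each agent
# only ever computes its own, and neither appears in "physics".

def betterness(outcome):
    # Crude stand-in for the human ordering: life, consciousness, etc.
    return outcome.get("life", 0) + outcome.get("consciousness", 0)

def clippyflurph(outcome):
    # Clippy's ordering: counts only paperclips.
    return outcome.get("paperclips", 0)

world_a = {"life": 1, "consciousness": 1, "paperclips": 0}
world_b = {"life": 0, "consciousness": 0, "paperclips": 1000}

print(betterness(world_a) > betterness(world_b))      # True: betterness prefers A
print(clippyflurph(world_a) > clippyflurph(world_b))  # False: clippyflurph prefers B
```

Both facts about the orderings are objective logical facts; the disagreement is not over the value of a shared variable, but over which function each agent is computing at all.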
Logical facts have no power to directly affect the universe except when some part of the universe is computing them, and morality is (and should be) logic, not physics.
Which is to say:
The old wizard was staring at him, a sad look in his eyes. "I suppose I do understand now," he said quietly.
"Oh?" said Harry. "Understand what?"
"Voldemort," said the old wizard. "I understand him now at last. Because to believe that the world is truly like that, you must believe there is no justice in it, that it is woven of darkness at its core. I asked you why he became a monster, and you could give no reason. And if I could ask him, I suppose, his answer would be: Why not?"
They stood there gazing into each other's eyes, the old wizard in his robes, and the young boy with the lightning-bolt scar on his forehead.
"Tell me, Harry," said the old wizard, "will you become a monster?"
"No," said the boy, an iron certainty in his voice.
"Why not?" said the old wizard.
The young boy stood very straight, his chin raised high and proud, and said: "There is no justice in the laws of Nature, Headmaster, no term for fairness in the equations of motion. The universe is neither evil, nor good, it simply does not care. The stars don't care, or the Sun, or the sky. But they don't have to! We care! There is light in the world, and it is us!"
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "Standard and Nonstandard Numbers"
Previous post: "Mixed Reference: The Great Reductionist Project"
Is this a fair summary?
There is a non-trivial point in this summary, which is the meaning of "we." I could imagine a possible world in which the moral intuitions of humans diverge widely enough that there isn't anything that could reasonably be called a coherent extrapolated volition of humanity (and I worry that I already live there).
Um, how do you know?
I think you and Alicorn may be talking past each other somewhat.
Throughout my life, it seems that what I morally value has varied more than what rightness feels like - just as it seems that what I consider status-raising has changed more than what rising in status feels like, and what I find physically pleasurable has changed more than what physical pleasures feel like. It's possible that the things my whole person is optimizing for have not changed at all, that my subjective feelings are a direct reflection of this, and that my evaluation of a change of content is merely a change in my causal model of the production of the desiderata (I thought voting for Smith would lower unemployment, but now I think voting for Jones would, etc.) But it seems more plausible to me that
1) the whole me is optimizing for various things, and these things change over time,
2) and that the conscious me is getting information inputs which it can group together by family resemblance, and which can reinforce or disincentivize its behavior.
Imagine a ship which is governed by an anarchic assembly beneath board and captained by an employee of theirs whom they motivate through in-kind bonuses. So the assembly... (read more)
This comment expands how you'd go about reprogramming someone in this way with another layer of granularity, which is certainly interesting on its own merits, but it doesn't strongly support your assertion about what it would feel like to be that someone. What makes you think this is how qualia work? Have you been performing sinister experiments in your basement? Do you have magic counterfactual-luminosity-powers?
I think Eliezer is simply suggesting that qualia don't in fact exist in a vacuum. Green feels the way it does partly because it's the color of chlorophyll. In a universe where plants had picked a different color for chlorophyll (melanophyll, say), with everything else (per impossibile) held constant, we would associate an at least slightly different quale with green and with black, because part of how colors feel is that they subtly remind us of the things that are most often colored that way. Similarly, part of how 'goodness' feels is that it imperceptibly reminds us of the extension of good; if that extension were dramatically different, then the feeling would (barring any radical redesigns of how associative thought works) be different too. In a universe where the smallest birds were ten feet tall, thinking about 'birdiness' would involve a different quale for the same reason.
Consider Bob. Bob, like most unreflective people, settles many moral questions by "am I disgusted by it?" Bob is disgusted by, among other things, feces, rotten fruit, corpses, maggots, and men kissing men. Internally, it feels to Bob like the disgust he feels at one of those stimuli is the same as the disgust he feels at the other stimuli, and brain scans show that they all activate the insula in basically the same way.
Bob goes through aversion therapy (or some other method) and eventually his insula no longer activates when he sees men kissing men.
When Bob remembers his previous reaction to those stimuli, I imagine he would remember being disgusted, but not be disgusted when he remembers the stimuli. His positions on, say, same-sex marriage or the acceptability of gay relationships have changed, and he is aware that they have changed.
Do you think this example agrees with your account? If/where it disagrees, why do you prefer your account?
I think this is really a sorites problem. If you change what's delicious only slightly, then deliciousness itself seems to be unaltered. But if you change it radically — say, if circuits similar to your old gustatory ones now trigger when and only when you see a bright light — then it seems plausible that the experience itself will be at least somewhat changed, because 'how things feel' is affected by our whole web of perceptual and conceptual associations. There isn't necessarily any sharp line where a change in deliciousness itself suddenly becomes perceptible; but it's nevertheless the case that the overall extension of 'delicious' (like 'disgusting' and 'moral') has some effect on how we experience deliciousness. E.g., deliciousness feels more foodish than lightish.
So, you introspect the way that he introspects. Do all humans? Would all humans need to introspect that way for it to do the work that he wants it to do?
Iron deficiency feels like wanting ice. For clever, verbal reasons. Not being iron deficient doesn't feel like anything. My brain did not notice that it was trying to get iron - it didn't even notice it was trying to get ice, it made up reasons according to which ice was an instrumental value for some terminal goal or other.
The standard religious reply to the baby-slaughter dilemma goes something like this:
It does choose a horn, but it's the other one, "things are moral because G-d commands them". It just denies the connotation that there exists a possible Counterfactual!G-d which could decide that Real!evil things are Counterfactual!good; in all possible worlds, G-d either wants the same thing or is something different mistakenly called "G-d". (Yeah, there's a possible world where we're ruled by an entity who pretends to be G-d and so we believe that we should kill babies. And there's a possible world where you're hallucinating this conversation.)
Or you could say it claims equivalence. Is this road sign a triangle because it has three sides, or does it have three sides because it is a triangle? If you pick the latter, does that mean that if triangles had four sides, the sign would change shape to have four sides? If you pick the former, does that mean that I can have three sides without being a triangle? (I don't think this one is quite fair, because we can imagine a powerful creator who wants immoral things.)
Three possible responses to the atheist response:
Sure. Not believing has bad consequences - you're wrong as a matter of fact, you don't get special believ...
Obvious further atheist reply to the denial of counterfactuals: If God's desires don't vary across possible worlds there exists a logical abstraction which only describes the structure of the desires and doesn't make mention of God, just like if multiplication-of-apples doesn't vary across possible worlds, we can strip out the apples and talk about the multiplication.
I don't think it's incompatible. You're supposed to really trust the guy because he's literally made of morality, so if he tells you something that sounds immoral (and you're not, like, psychotic) of course you assume that it's moral and the error is on your side. Most of the time you don't get direct exceptional divine commands, so you don't want to kill any kids. Wouldn't you kill the kid if an AI you knew to be Friendly, smart, and well-informed told you "I can't tell you why right now, but it's really important that you kill that kid"?
If your objection is that Mr. Orders-multiple-genocides hasn't shown that kind of evidence he's morally good, well, I got nuthin'.
What we have is an inconsistent set of four assertions:
At least one of these has to be rejected. Abraham (provisionally) rejects 1; once God announces 'J/K,' he updates in favor of rejecting 2, on the grounds that God didn't really want him to kill his son, though the Voice really was God.
The problem with this is that rejecting 1 assumes that my confidence in my foundational moral principles (e.g., 'thou shalt not murder, self!') is weaker than my confidence in the conjunction of:
But it's hard to believe that I'm more confident in the divinity of a certain class of Voices than in my moral axioms, especially if my confidenc... (read more)
Well, if we're shifting from our idealized post-Protestant-Reformation Abraham to the original Abraham-of-Genesis folk hero, then we should probably bracket all this Medieval talk about God's omnibenevolence and omnipotence. The Yahweh of Genesis is described as being unable to do certain things, as lacking certain items of knowledge, and as making mistakes. Shall not the judge of all the Earth do right?
As Genesis presents the story, the relevant question doesn't seem to be 'Does my moral obligation to obey God outweigh my moral obligation to protect my son?' Nor is it 'Does my confidence in my moral intuitions outweigh my confidence in God's moral intuitions plus my understanding of God's commands?' Rather, the question is: 'Do I care more about obeying God than about my most beloved possession?' Notice there's nothing moral at stake here at all; it's purely a question of weighing loyalties and desires, of weighing the amount I trust God's promises and respect God's authority against the amount of utility (love, happiness) I assign to my son.
The moral rights of the son, and the duties of the father, are not on the table; what's at issue is whether Abraham's such a good soldier-servant that he's willing to give up his most cherished possessions (which just happen to be sentient persons). Replace 'God' with 'Satan' and you get the same fealty calculation on Abraham's part, since God's authority, power, and honesty, not his beneficence, are what Abraham has faith in.
I read this post with a growing sense of unease. The pie example appears to treat "fair" as a 1-place word, but I don't see any reason to suppose it would be. (I note my disquiet that we are both linking to that article; and my worry about how confused this post seems to me.)
The standard atheist reply is tremendously unsatisfying; it appeals to intuition and assumes what it's trying to prove!
My resolution of Euthyphro is "the moral is the practical." A predictable consequence of evolution is that people have moral intuitions, that those intuitions reflect their ancestral environment, and that those intuitions can be variable. Where would I find mercy, justice, or duty? Cognitive algorithms and concepts inside minds.
This article reads like you're trying to move your stone tablet from your head into the world of logic, where it can be as universal as the concept of primes. It's not clear to me why you're embarking on that particular project.
The example of elegance seems like it points the other way. If your sense of elegance is admittedly subjective, why are we supposing a Platonic form of elegance out in the world of logic? Isn't this basically the error where o... (read more)
'Beautiful' needs 2 places because our concept of beauty admits of perceptual variation. 'Fairness' does not grammatically need an 'according to whom?' argument place, because our concept of fairness is not observer-relative. You could introduce a function that takes in a person X who associates a definition with 'fairness,' takes in a situation Y, and asks whether X would call Y 'fair.' But this would be a function for 'What does the spoken syllable FAIR denote in a linguistic community?', not a function for 'What is fair?' If we applied this demand generally, 'beautiful' would become 3-place ('what objects X would some agent Y say some agent Z finds 'beautiful'?'), as would logical terms like 'plus' ('how would some agent X perform the operation X calls "addition" on values Y and Z?'), and indeed all linguistic acts.
Yes, but a given intuition cannot vary limitlessly, because there are limits to what we would consider to fall under the same idea of 'fairness.' Different people may use the spoken syllables FAI... (read more)
Yay, I think we've finished the prerequisites to prerequisites, and started the prerequisites!
Stimulating as always! I have a criticism to make of the use made of the term 'rigid designation'.
What philosophers of language ordinarily mean by calling a term a rigid designator is not that, considered purely syntactically, it intrinsically refers to anything. The property of being a rigid designator is something which can be possessed by an expression in use in a particular language-system. The distinction is between expressions-in-use whose reference we let vary across counterfactual scenarios (or 'possible worlds'), e.g. 'The first person to climb Everest', and those whose reference remains stable, e.g. 'George Washington', 'The sum of two and two'.
There is some controversy over how to apply the rigid/non-rigid distinction to general terms like 'fair' (or predicates like 'is fair') - cf. Scott Soames' book Beyond Rigidity - but I think the natural thing to say is that 'is fair' is rigid, since it is used to attribute the same property across counterfactual scenarios, in contrast with a predicate like 'possesses my favourite property'.
I just wanted to agree with Tristanhaze here that this usage strikes me as non-standard. I want to put this in my own words so that Tristanhaze/Eliezer/others can correct me if I've got the wrong end of the stick.
If something is a rigid designator it means that it refers to the same thing in all possible worlds. To say it's non-rigid is to say it refers to different things in some possible worlds than in others. This has nothing to do with whether different language users that use the phrase must always be referring to the same thing. So George Washington may be a rigid designator in that it refers to the same person in all possible world... (read more)
Here's my understanding of the post:
If this is fair, I have two objections:
When humans are sufficiently young they are surely more like a Type 2 FAI than a Type 1 FAI. We're obviously not born with Frankena's list of terminal values. Maybe one can argue that an adult human is like a Type 2 FAI that has completed its value learning process and has "locked down" its utility function and won't change its values or go into shutdown even if it su
Please don't do that.
People on this site already give too many upvotes and too few downvotes. By which I mean that if anyone writes a lot of comments, their total karma is most likely to be positive, even if the comments are mostly useless (as long as they are not offensive and don't break some local taboo). People can build high total karma just by posting a lot, because one thousand comments with average karma of 1 provide more total karma than e.g. twenty comments with 20 karma each. But which of those two would you prefer as a reader, assuming that your goal is not to procrastinate on LW for hours a day?
Every comment written has a cost -- the time people spend reading it. So a neutral comment (not helpful, not harmful) has a slightly negative value, if we could measure it precisely. One such comment does little harm. A hundred such comments, daily, from different users... that's a different thing. Each comment should repay the time it takes to read, or be downvoted.
People already hesitate to downvote, because expressing a negative opinion about something connected with other person feels like starting an un... (read more)
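The arithmetic behind the comparison above is trivial but worth making explicit -- under simple summing, volume beats quality:

```python
# The comment's karma arithmetic: many low-value comments can outscore
# a few high-value ones when karma is just summed.
prolific_total = 1000 * 1   # a thousand comments averaging karma 1
concise_total = 20 * 20     # twenty comments at karma 20 each
print(prolific_total, concise_total)  # 1000 400
```

This is why a total-karma number rewards posting a lot, independently of average comment quality.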
Mainstream status:
EY's position seems to be highly similar to Frank Jackson's analytic descriptivism, which holds that
Which is a position neither popular nor particularly unpopular, but simply one of many contenders, as the mainstream goes.
I think your discussions of metaethics might be improved by rigorously avoiding words like "fair," "right," "better," "moral," "good," etc. I like the idea that "fair" points to a logical algorithm whose properties we can discuss objectively, but when you insist on using the word "fair," and no other word, as your pointer to this algorithm, people inevitably get confused. It seems like you are insisting that words have objective meanings, or that your morality is universally compelling, or something. You can and do explicitly deny these, but when you continue to rely exclusively on the word "fair" as if there is only one concept that that word can possibly point to, it's not clear what your alternative is.
Whereas if you use different symbols as pointers to your algorithms, the message (as I understand it) becomes much clearer. Translate something like:
Fair is dividing up food equally. Now, is dividing up the pie equally objectively fair? Yes: someone who wants to divide up the pie differently is talking about something other than fairness. So the assertion "dividing the pie equally is fair" is... (read more)
I don't think this works, because "fairness" is not defined as "divide up food equally" (or even "divide up resources equally"). It is the algorithm that, among other things, leads to dividing up the pie equally in the circumstances described in the original post -- i.e., "three people exactly simultaneously spot a pie which has been exogenously generated in unclaimed territory." But once you start tampering with these conditions -- suppose that one of them owned the land, or one of them baked the pie, or two were well-fed and one was on the brink of starvation, etc. -- it would at least be controversial to say "duh, divide equally, that's just what 'fairness' means." And the fact of that controversy suggests most of us are using "fairness" to point to an algorithm more complicated than "divide up resources equally."
More generally, fairness -- like morality itself -- is complicated. There are basic shared intuitions, but there's no easy formula for popping out answers to "fair: yes or no?" in intricate scenarios. So there's actually quite a bit of value in using words like "fair," "right,"... (read more)
Great post! I agree with your analysis of moral semantics.
However, the question of moral ontology remains... do objective moral values exist? Is there anything I (or anyone) should do, independent from what I desire? With such a clear explanation of moral semantics at hand, I think the answer is an obvious and resounding no. Why would we even think that this is the case? One conclusion we can draw from this post is that telling an unfriendly AI that what it's doing is "wrong" won't affect its behavior. Because that which is "wrong" might be exactly that which is "moreclippy"! I feel that Eliezer probably agrees with me, here, since I gained a lot of insight into the issue from reading Three Worlds Collide.
Asking why we value that which is "right" is a scientific question, with a scientific answer. Our values are what they are, now, though, so, minus the semantics, doesn't morality just reduce to decision theory?
That's the default with no additional data, but I would hesitate, because to me how much each person needs the pie is also important in defining "fairness". If one of the three is starving while the other two are well-fed, it would be fair to give more to the one starving.
It may be just nitpicking, but since you took care to ensure there is no difference between how the three characters spotted the pie, yet didn't mention whether they have the same need for it, this may point to a deeper difference between conceptions of "fairness" (should we give them two different names?)
I'm trying to understand this, and I'm trying to do it by being a little more concrete.
Suppose I have a choice to make, and my moral intuition is throwing error codes. I have two axiomations of morality that are capable of examining the choice, but they give opposite answers. Does anything in this essay help? If not, is there a future essay planned that will?
In a universe that contains a neurotypical human and clippy, and they're staring at each other, is there an asymmetry?
Haiti today is a situation that makes my moral intuition throw error codes. Population density is three times that of Cuba. Should we be sending aid? It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level, then build paradise. My rival moral intuition is that culling humans is always wrong.
Trying to stay concrete and present, should I restrict my charitable giving to helping countries make the demographic transition? Within a fixed aid budget one can choose package A = (save one child, provide education, provide entry into global economy; 30 years later the child, now an adult, feeds his own family and has some money left over to help others) or package B = (save four children; that's it, money all used up; thirty years later there are 16 children needing saving and it's not going to happen). Concrete choice of A over B: ignore Haiti and send money to Karuna trust to fund education for untouchables in India, preferring to raise a few children out of poverty by letting other children die.
It's also about half that of Taiwan, significantly less than South Korea or the Netherlands, and just above Belgium, Israel, and Japan -- as well as very nearly on par with India, the country you're using as an alternative! I suspect your source may have overweighted population density as a factor in poor social outcomes.
I don't see how these two frameworks are appealing to different terminal values - they seem to be arguments about which policies maximize consequential lives-saved over time, or maximize QALYs (Quality-Adjusted Life Years) over time. This seems like a surprisingly neat and lovely illustration of "disagreeing moral axioms" that turn out to be about instrumental policies without much in the way of differing terminal values, hence a dispute of fact with a true-or-false answer under a correspondence theory of truth for physical-universe hypotheses.
Is permitting or perhaps even helping Haitians to emigrate to other countries anywhere in the moral calculus?
So you're facing a moral dilemma between giving to charity and murdering nine million people? I think I know what the problem might be.
Really? That's your plan for "maximum holocaust"? You'll do more good than harm in the short run, and if you run out of capital (not hard with such a wastefully expensive plan) then you'll do nothing but good.
This sounds to me like a political applause light, especially
In essence, your statement boils down to "if I wanted to do the most possible harm, I would do what the Enemy are doing!" which is clearly a mindkilling political appeal.
(For reference, here's my plan for maximum holocaust: select the worst things going on in the world today. Multiply their evil by their likelihoods of success. Found a terrorist group attacking the winners. Be careful to kill lots of civilians without actually stopping your target.)
Genesis 2:16-2:17 looks pretty clear to me: every tree which isn't the tree of knowledge is okay. Genesis 3:22 can be interpreted as either referring to a previous life tree ban or establishing one.
If you accept the next gen fic as canon, Revelation 22:14 says that the tree will be allowed at the end, which is evidence it was just a tempban after the fall.
Where do you get that the tree of life was off-limits?
Sheesh. I'll actively suppress knowledge of your plans against the local dictator. ... (read more)
That the judgments of "fair" or "beautiful" don't come from a universal source, but from a particular entity. I have copious evidence that what I consider "beautiful" is different from what some other people consider "beautiful;" I have copious evidence that what I consider "fair" is different from what some other people consider "fair."
It is cl... (read more)
Well, I'm glad to see you're taking a second crack at an exposition of metaethics.
I wonder if it might be worth expounding more on the distinction between utterances (sentences and word-symbols), meaning-bearers (propositions and predicates) and languages (which map utterances to meaning-bearers). My limited experience seems to suggest that a lot of the confusion about metaethics comes from not getting, instinctively, that speakers use their actual language, and that a sentence like "X is better than Y", when uttered by a particular person, refer... (read more)
Well, that's clearly false. Your chances of having to kill a member of the secret police of an oppressive state are much more than 1/10^16, to say nothing of less clear cut examples.
I don't think there is a clear route from "we can figure out morality ourselves" to "we can stop telling lies to children". The problem is that once you know morality is in a sense man-made, it becomes tempting to remake it self-servingly. I think we tell ourselves stories that fundamental morality comes from God Or Nature to restrain ourselves, and partly forget its man-made nature. Men are not created equal, but if we believe they are, we behave better. "Created equal" is a value masquerading as a fact.
I am having difficulty understanding the model of 'physics+logic = reality.' Up until now I have understood that physics was reality, and that logic was the way to describe and think about what follows from it. Would someone please post a link to the original article (in this sequence or not) which explains the position? Thank you.
I'm just a 2-year math Ph.D. program drop-out from 35 years ago, but I got quite a different take on it. As I experienced it, most mathematics is like "Let X be a G-space, where G-space is defined as having ..." and then you might spend years provi... (read more)
I'm not sure what you have in mind here. We need to distinguish (i) the referent of a concept from (ii) its reference-fixing "sense" or functional role. The way I understood your view, the reference-fixing story for moral terms involves our (idealized) desires. But the referent is "rigid" in the sense that it's picking out the content of our desires: the thing that actually fills the functional role, rather than the role-property itself.
Since our desires typically aren't themselves about our desires, it will turn out, on this stor... (read more)
This all does sound good to me; but, is there a way to say the above while tabooing "reference" and avoiding talk of things "referring" to other things? Reference isn't ontologically basic, so what does it reduce to?
Basically, the main part that would worry me is a phrase like, "there's a story to be told about how our moral concepts came to pick out these particular worldly properties" which sounds on its face like, "There's a story to be told about how successorship came to pick out the natural numbers" whereas what I'd want to say is, "Of course, there's a story to be told about how moral concepts came to have the power to move us" or "There's a story to be told about how our brains came to reflect numbers".
'Twasn't me, but I would guess some people want comments to have a point other than a joke.
Well, that sounds about as likely to correctly define the word "fair" as to correctly define the word "banana".
Bull! I'm quite aware of why I eat, breathe, and drink. Why in the world would a paperclip maximizer not be aware of this?
Unless you assume Paperclippers are just rock-bottom stupid I'd also expect them to eventually notice the correlation between mining iron, smelting it, and shaping it into a weird semi-spiral design... and the sudden rise in the number of paperclips in the world.
The USSR's police state required high-speed one-to-many means of communication. The Soviet leadership was absolutely terrified of many-to-many means of communication, going so far as to impose extremely tight controls on access to photocopiers; even most high-level members of the party couldn't get access.
Sorry, that doesn't capture it either. You can prove all sorts of things about a proof that nobody's found yet, without actually finding the proof yet. It would not be terribly surprising if elegance was one of those things.
I don't think we have any features like this. If you describe exactly what happened to this guy, he may be able to figure out what's wrong.
According to Eliezer's definition of "should" in this post, I "should" do things which lead to "life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience..." But unless I already cared about those things, I don't see why I would do what I "should" do, so as a universal prescription for action, this definition of "morality" fails.
I still feel confused. I definitely see that, when we talk about fairness, our intended meaning is logical in nature. So, if I claim that it is fair for each person to get an equal share of pie, I'm trying to talk about some set of axioms and facts derived from them. Trying.
The problem is, I'm not convinced that the underlying cognitive algorithms are stable enough for those axioms to be useful. Imagine, for example, a two-year-old with the usual attention span. What they consider "good" might vary quite quickly. What I consider "just" ... (read more)
Using the word also implies that this goodness-embodying thing is sapient and has superpowers.
Any half-way competent secret police wouldn't need to.
You seem to have a very non-standard definition of "nonconsensual".
Holding down the back button should show you the full history, just select one of the pages farther back. I am not aware of any sites blocking that feature. You will still get the popup, though.
There must be non-physical things to assume that there is any difference between "us" and "p-zombies". This is a logical requirement. They posit that there effectively is a difference, in the premises right there, by asserting that p-zombies do not have qualia, while we do.
The only way 4 is possible is if it is also implied tha... (read more)
Well, it's not like there's a pre-existing critique of that, or anything.
OK; my surprise was predicated on the hypothetical theist giving the sentence a non-negligible probability; I admit I didn't express this originally, so you'll have to take my word that it's what I meant. Thanks for humoring me :)
On another note, you do surprise me with "God is logically necessary"; although I know that's at least a common theist position, it's difficult for me to see how one can maintain that without redefining "god" into something unrecognizable.
Survival and procreation aren't primary goals in any direct sense. We have urges that have been selected for because they contribute to inclusive genetic fitness, but at the implementation level they don't seem to be evaluated by their contributions to some sort of unitary probability-of-survival metric; similarly, some actions that do contribute greatly to inclusive genetic fitness (like donating eggs or sperm) are quite rare in practice and go almost wholly unrewarded by our biology. Because of this architecture, we end up with situations where we sate... (read more)
My mind reduces all of this to "God = Confusion". What am I missing?
Note that there's some discussion on just what Eliezer means by "logic all the way down" over on Rationally Speaking: http://rationallyspeaking.blogspot.co.uk/2013/01/lesswrong-on-morality-and-logic.html . Seeing as much of this is me and Angra Maiynu arguing that Massimo Pigliucci hasn't understood what Eliezer means, it might be useful for Eliezer to confirm what he does mean.
What???!!! Are you suggesting that I'm actually planning on conducting the proposed thought experiment? Actually, physically, getting a piece of paper and writing out the words in question? I assure you, this is not the case. I don't even have any blank paper in my home - this is the 21st century after all.
This is a thought experiment I'm proposing, in order to help me better understand MixedNuts' mental model. No different from proposing a thought experiment involving dust motes and eternal torture. Are you saying that Eliezer should be punished for considering such hypothetical situations, a trillion times over?
Murder (law) and murder (moral) are two different things; I was exclusively referring to murder (moral).
I will clarify: There can be cases where murder (law) is either not immoral or morally required. There are also cases where an act which is murder (moral) is not illegal.
My original point is that many of the actions of Jehovah constitute murder (moral).
Well, I think consent sort of breaks down as a concept when you start considering all the situations where societies decide to get violent (or for that matter to involve themselves in sexuality; I'd rather not cite examples for fear of inciting color politics). So I'm not sure I can endorse the general form of this argument.
In the specific case of warfare, though, the formalization of war that most modern governments have decided to bind themselves by does include consent on the part of combatants, in the form of the oath of enlistment (or of office, for ... (read more)
I mean, didn't Eliezer cover this? You're not lying if you call numbers groups and groups numbers. If you switch in the middle of a proof, sure, that's lying, but that seems irrelevant. The definitions pick out what you're talking about.
When I'm talking about morality, I'm talking about That Thing That Determines What You're Supposed to Do, You Know, That One.
However Decius answers, he probably violates the local don't-discuss-politics norm. By contrast, your coyness makes it appear that you haven't done so.
In short, it appears to me that you already know Decius' position well enough to continue the discussion if you wanted to. Your invocation of the taboo-your-words convention appears like it isn't your true rejection.
No, you haven't. (p=0.9)
It could look like an electronic object with a plastic shell that starts with "(23 + 54) / (47 * 12 + 76) + 1093" on the screen and some small amount of time after an apple falls from a tree and hits the "Enter" button some number appears on the screen below the earlier input, beginning with "1093.1", with some other decimal digits following.
If the above doesn't qualify as t... (read more)
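For what it's worth, the calculator input in the comment can be checked directly:

```python
# Evaluating the comment's calculator input:
# 23 + 54 = 77; 47 * 12 + 76 = 640; 77 / 640 = 0.1203125
result = (23 + 54) / (47 * 12 + 76) + 1093
print(result)  # 1093.1203125
```

So the number on the hypothetical screen begins "1093.1", with the remaining decimal digits following.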
Yes, I recommend looking into the novel new divination techniques "Physics" and "Mathematics". The former allows one to form a tolerably accurate model of the present based on knowledge of precursor states. The latter allows reasoning about the logical implications of assumed axioms.
Which brings us to the third mystic divination art: Google it.
Next time, try opening with tha... (read more)
I'd be surprised if this is actually true. There are features of a proof that can themselves be proven without actually identifying the proof itself.
I don't know about Decius, but...
I am.
I'm also saying that it doesn't matter. The p-zombies are still conscious. They just don't have any added "conscious" XML tags as per some imaginary, crazy-assed unnecessary definition of "consciousness".
Tangential to that point: I think any morality system which relies on an external supernatural thinghy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.
I love the word "Unclipperific."
I follow the argument here, but I'm still mulling over it and I think by the time I figure out whether I agree the conversation will be over. Something disconcerting struck me on reading it, though: I think I could only follow it having already read and understood the Metaethics sequence. (at least, I think I understood it correctly; at least one commenter confirmed the point that gave me the most trouble at the time)
While I was absorbing the Sequences, I found I could understand most posts on their own, and I re... (read more)
The theory you propose in (2) seems close to Neutral Monism. It has fallen into disrepute (and near oblivion) but was the preferred solution to the mind-body problem of many significant philosophers of the late 19th-early 20th, in particular of Bertrand Russell (for a long period). A quote from Russell:
... (read more)
Ooo! Seldom do I get to hear someone else voice my version of idealism. I still have a lot of thinking to do on this, but so far it seems to me perfectly legitimate. An idealism isomorphic to mechanical interactions dissolves the Hard Problem of consciousness by denying a premise. It also does so with more elegance than reductionism, since it doesn't force us through that series of flaming hoops that orbits and (maybe) eventually collapses into dualism.
This seems more likely to me so far than all the alternatives, so I guess that means I believe it, b... (read more)
Eliezer thinks we'll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves. Personally, I'm an agnostic about Many Worlds, so I'm even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.
I also don't reify logical constructs, so I don't believe in a bonus category of Abstract Thingies. I'm about as monistic as physicalists come. Math... (read more)
Freedom does have instrumental value; however, lack of coercion is an intrinsic thing in my ethics, in addition to the instrumental value.
I don't think that I will ever be able to codify my ethics accurately in Loglan or an equivalent, but there is a lot of room for improvement in my ability to explain it to other sentient beings.
I was unaware that the "immortalist" value system was assumed to be the LW default; I thought that "human value system" referred to a different default value system.
Yes, it merely requires redefining things like 'conscious' or 'experience' (whatever you decide p-zombies do not have) to be something epiphenomenal and incidentally non-existent.
I wouldn't be experiencing anything.
That's not sufficient - there can be wildly different, incompatible universalizable morality systems based on different premises and axioms; and each could reasonably claim to be the true morality and call the other a tribal shibboleth.
As an example (but there are others), many of the major religious traditions would definitely claim to be universalizable systems of morality; and they are contradicting each other on some points.
My usual response to reading 2) is to think 1).
I wonder if you really wouldn't respond to blackmail if the stakes were high and you'd actually lose something critical. "I don't respond to blackmail" usually means "I claim social dominance in this conflict".
Yes, which is why I explicitly labled it as only a thought experiment.
This seems to me to be entirely in keeping with the LW tradition of thought experiments regarding dust particles and eternal torture.... by posing such a question, you're not actually threatening to torture anybody.
Edit: downvote explanation requested.
There's motivation to redefine morality, and reason to think it still is in some sense morality once it has been redefined. Neither is true of maths.
I would translate this:
as: "...it becomes tempting to use some other M instead of morality."
It expresses the same idea, without the confusion about whether morality can be redefined arbitrarily. (Yes, anything can be redefined arbitrarily. It just stops being the original thing.)
I see little point in ignoring what an argument states explicitly in favour of speculations about what the formulators had in mind. I also think that rhetorical use of the word "magic" is mind-killing. Quantum teleportation might seem magical to a 19th-century physicist, but it still exists.
There's something to be said against equating transhumanism with religious concepts, but the world to come is an exact parallel.
I don't know much about Kabbalah because I'm worried it'll fry my brain, but Gilgul is a thing.
I always interpreted sheol as just the literal grave, but apparently it refers to an actual world. Thanks.
The funny thing is that a rationalist Clippy would endorse this article. (He would probably put more emphasis on clippyflurphsness rather than this unclipperiffic notion of "justness", though. :))
Well, you do have to press certain buttons for it to happen. ;) And it looks like voltages changing inside an integrated circuit that lead to changes in a display of some kind. Anyway, if you insist on an example of something that "does arithmetic" without any human intervention whatsoever, I can point to the arithmetic logic unit inside a plugged-in arcade machine in attract mode.
And if you don't want to call what an arithmetic logic unit does when it takes a set of inputs and returns a set of outputs "doing arithmetic", I'd have to re... (read more)
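A minimal sketch of what the comment attributes to an arithmetic logic unit -- a pure mapping from inputs to outputs, with no understanding anywhere in the mechanism. This is a toy illustration, not a description of real hardware:

```python
def alu(op, a, b, width=8):
    """Toy ALU: maps (opcode, operands) to a result bit-pattern.
    Nothing in this mapping 'knows' it is doing arithmetic."""
    mask = (1 << width) - 1  # wrap results to the register width
    ops = {
        "add": (a + b) & mask,
        "sub": (a - b) & mask,
        "and": a & b,
        "or": a | b,
    }
    return ops[op]

print(alu("add", 23, 54))  # 77
```

Whether this counts as "doing arithmetic" is exactly the question at issue in the thread; the sketch only shows that the input-output behavior needs no human intervention.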
If I am remembered for anything, it will be for elucidating the words of wiser men.
On a tangential note, is there a word I could have used above instead of "men" that would preserve the flow but is gender-neutral? I couldn't find one. Ideally one falling syllable.
ETA: The target word should probably end in a nasal or approximate consonant, or else a vowel.
"I value both saving orphans from fires and eating chocolate. I'm a horrible person, so I can't choose whether to abandon my chocolate and save the orphanage."
Should I self-modify to ignore the orphans? Hell no. If future-me doesn't want to save orphans then he never will, even if it would cost no chocolate.
That depends on the Gospel in question. The Johannine Jesus works miracles to show that he's God; the Matthean Jesus is constantly frustrated that everyone follows him around, tells everyone to shut up, and rejects Satan's temptation to publicly show his divine favor as an affront to God.
So you can have N>1 miracles and still have deism? I always thought N was 0 for that.
You do not reason with evil. You condemn it.
I subscribe to desirism. So I'm not a strict anti-realist.
We have the processing unit called "brain" which does contain our understanding of the human context and therefore can plug a context parameter into a metaethical philosophy and thus derive an ethic. But we can't currently express the functioning of the brain as theorems and proofs -- our understanding of its working is far fuzzier than that.
I expect that the use of metaethic in AI development would similarly be so that the AI has something to plug its understanding of the human context into.
As I explained here, it's perfectly reasonable to describe mathematical abstractions as causes.
I'm not a positivist, and I don't argue like one. I think nearly all the arguments against the possibility of zombies are very silly, and I agree there's good prima facie evidence for dualism (though I think that in the final analysis the weight of evidence still favors physicalism). Indeed, it's a good thing I don't think zombies are impossible, since I think that we are zombies.
My reason is twofold: Copernican, and Occamite.
Copernican reasoning: Most of the universe does not consist of humans, or anything human-l... (read more)
Scott Adams on the same subject, the morning after your post:
[...]
... (read more)
Well, it's probably important to distinguish between two uses to which the theory of qualia is put: first as the foundation of foundationalist empiricism, and second as the basis for the 'hard problem of consciousness'. Foundationalist theories of empiricism are largely dead, as is the idea that qualia are a source of immediate, non-conceptual knowledge. That's the work that Sellars (a strident reductivist and naturalist) did.
Now that I read it again, I think my orig... (read more)
I am reading this series and suddenly realized that Mercy, Justice and Fair are citizens of this 2nd-order logic, as is the number 6. This is why they are imaginary and non-existent in the physical world. To most of you this comment will seem trivial, but it just shows I am really enjoying my reading and thinking :)
A different perspective I'd like people's thoughts on: is it more accurate to say that everything WE KNOW lives in a world of physics and logic, and thus translating 'right' into those terms is correct, assuming right and wrong (fairness, etc.) are defined within the bounds of what we know?
I'm wondering if you would agree that you're making an implicit philosophical ar...
This inspired me to write Morality Isn't Logical, and I'd be interested to know what you think.
Hmm... I don't think my point necessarily helps here. I meant that you will always get disutility when you have two desires that always clash (x and not x); whichever way you choose, the other desire won't be fulfilled.
However, in the case you offered (and probably most cases) it's not a good idea to self-modify, as desires don't clash in principle, always. Like with the chocolate and saving kids one, you just have to perform utility calculations to see which way to go (that one is saving kids).
So, you're saying that it is subjective whether qualia have a point of view, or the ability to posit themselves?
Because I have all of the observations needed to say that cats exist, even if they don't technically exist. I do not have the observations needed to say that there is a non-physical component to subjective experience.
Oh, right. Yup, anything simulating you that perfectly is gonna be conscious - but it might be using magic. For example, perhaps they pull their data out of a parallel universe where you ARE real. Or maybe they use some black-swan technique you can't even imagine. They're fairies, for God's sake. And you're an invisible cat. Don't fight the counterfactual.
Looks like we have an insurmountable inferential distance problem both ways, so I'll stop here.
This could be true and you'd still be totally wrong about the equal likelihood.
Where do effects of cats stop and cats begin?
Sure. an artifact of such a reproduction = whatever you mean by "effect of cats" in your original statement.
I'm pretty sure this comment means you don't understand the concept of "qualia".
I'm not sure I understand what it means for an algorithm to have an inside, let alone for an algorithm to "feel" something from the inside. "Inside" is a geometrical concept, not an algorithmic one.
Please explain what the inside feeling of e.g. the Fibonacci sequence (or an algorithm calculating such) would be.
The same could be said of cats: Even if cats are part of the physical universe, they could counterfactually be epiphenomenal if something was reproducing the effects they have on the world.
How does the argument apply to qualia and not to cats?
By "our particular logic" I mean the particular method we've learned for exploiting how the universe works to cause our discrete symbols to have consistent behavior that mostly models the universe. There's no requirement that logic be only represented as a finite sequence of symbols generated by replacement rules and variable substitution from a set of axioms; it's just what works best for us right now. There are almost certainly other (and probably better) representations of how the universe works that we haven't found yet. For instance it se...
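The "symbols generated by replacement rules from a set of axioms" picture can be made concrete with a toy string-rewriting system. This is a hypothetical sketch, not from the comment: the "axiom" is a starting string, the "rules" are substring replacements, and the "theorems" are whatever strings are reachable.

```python
# A minimal sketch of logic as symbol replacement (toy example).
# Rules rewrite substrings; "theorems" are the strings reachable
# from the axiom within a bounded number of rewrite steps.
def derive(axiom, rules, steps):
    """Return all strings reachable from `axiom` in at most `steps` rewrites."""
    reached = {axiom}
    frontier = {axiom}
    for _ in range(steps):
        new = set()
        for s in frontier:
            for lhs, rhs in rules:
                i = s.find(lhs)
                while i != -1:  # apply the rule at every position it matches
                    new.add(s[:i] + rhs + s[i + len(lhs):])
                    i = s.find(lhs, i + 1)
        frontier = new - reached
        reached |= frontier
    return reached

# Toy system: an "I" may be doubled, and "III" may collapse to "U".
theorems = derive("MI", [("I", "II"), ("III", "U")], 3)
# "MU" is derivable here: MI -> MII -> MIII -> MU
```

The point of the sketch is only that this representation is one choice among many; nothing forces a logic to be a string-rewriting system in particular.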
What if we also changed the subject into a sentient paperclip? Any "standard" paperclip maximizer has to deal with the annoying fact that it is tying up useful matter in a non-paperclip form that it really wants to turn into paperclips. Humans don't usually struggle with the...
I think that these two desires are contradictory. Part of what I'm trying to say is that it's a highly nontrivial problem which propositions are even meaningful, let alone true, if you specify possible worlds at a sufficiently high level of detail. For example, at an extremely high level of detail, you might specify a possible world by specifying a set of laws of physics together with an initial con...
Apart from implying different subjective preferences to mine when it comes to conversation this claim is actually objectively false as a description of reality.
The 'taboo!' demand in this context was itself borderline (inasmuch as it isn't actually the salient feature that needs elaboration or challenge, and the meaning should be plain to most non-disingenuous readers). But assuming there was any doubt at all about what 'contrived' meant in the first place, my response would, in fact, help make it clear throug...
Okay. I think what I'm actually trying to say is that what constitutes a rigid designator, among other things, seems to depend very strongly on the resolution at which you examine possible worlds.
When you say the phrase "imagine the possible world in which I have a glass of water in my hand" to a human, that human knows what you mean because by default humans only model the physical world at a resolution where it is easy to imagine making that intervention and only that intervention. When you say that phrase to an AI which is modeling the world ...
... wow.
Anyway, if Objectivists are claiming to have reached morality from tautology, I'm inclined to throw that in with all the other nonsense they spout that I know for a fact to be wrong. Now that you say it, I do recall seeing something along the lines of "the fundamental truth that A=A" in an Objectivist ... I don't want to say rant, it was pretty short ... but I don't recall noticing an actual, rational argument in there so it's probably trivially wrong.
To me it seems that you are mixing together "better" as in "morally better", and "better" as in "more efficient". If we replace the second one with "more efficient", we get:
Betterness (moral) is a more efficient measure of being better (morally).
Clippiness is a more efficient measure of being clippy.
I guess we (and Clippy) could agree about this. It is just confusing to write the latter sentence as "clippiness is better than betterness, with regards to being clippy", because the two different meanings are e...
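The two senses stay apart if "betterness" and "clippiness" are treated as two distinct evaluation functions over outcomes, each of which ranks outcomes highest by its own standard. A rough sketch with invented outcomes and scores:

```python
# Hypothetical sketch: "betterness" and "clippiness" as two different
# evaluation functions. Each ranks outcomes by its own criterion;
# neither is a "more efficient" version of the other's job.
outcomes = {
    "save the kids":        {"human_value": 10, "paperclips": 0},
    "eat the chocolate":    {"human_value": 3,  "paperclips": 0},
    "melt the swing set into clips": {"human_value": -5, "paperclips": 100},
}

def betterness(outcome):
    """What humans care about (moral value)."""
    return outcomes[outcome]["human_value"]

def clippiness(outcome):
    """What Clippy cares about (paperclip count)."""
    return outcomes[outcome]["paperclips"]

best_by_betterness = max(outcomes, key=betterness)
best_by_clippiness = max(outcomes, key=clippiness)
```

Saying "clippiness is better than betterness at being clippy" just restates that `clippiness` is the function being maximized in the second line; it doesn't make clippiness *better* in the first sense.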
The most important issue is that however the theist defines "free will", he has the burden of showing that free will by that very definition is supremely valuable: valuable enough to outweigh the great evil that humans (and perhaps other creatures) cause by abusing it, and so valuable that God could not possibly create a better world without it.
This to my mind is the biggest problem with the Free Will defence in all its forms. It seems pretty clear that free will by some definition is worth having; it also seems pretty clear that there are abstru...
Well, I understand that if consciousness was physical, but didn't affect our behavior, then removing that physical process would result in a zombie. That's usually the example given, not magic.
Nothing in denotative expression but a lot in terms of poetic flow and syllable count. Substituting "people" into that context just wouldn't have sounded pretty. In fact it would make the attempt at eloquent elucidation seem contrived and forced---leaving it worse off than if the meaning had just been conveyed unadorned and without an attempt to appear quotable and deep.
I was actually surprised by TheOtherDave's response. My poetic module returned null and I was someho...
It's conceivable the way it's conceivable that the English upper class are giant lizards in disguise. If you've read much 19c history and sources, you should know that nobody said anything about anybody masturbating or not, and Lincoln at that time probably lived a mile from his nearest neighbour.
Lincoln is an interesting example because if you read enough biographies of him, it becomes funny just how much mileage people can get out of the most trivial and dubious piece of evidence about his early life.
Anyway, the past is full of things that either happened or didn't -- at least I don't believe they're like Schrödinger's cat -- though we'll never know which.
As it was a casual remark in passing, I don't plan to debate, and "reasonably arguable" is a fairly low bar. But, Hitler had a mesmerizing speaking presence, at least for the people he connected with. He probably would never have amounted to anything, except somebody in the German establishment, wanting to quell the chaos that followed the end of WWI, hired him to lecture groups of soldiers to rein them in, and he "discovered he had a voice". Once he became chancel...
That assumes a determinate answer to the question 'what's the right way to use language?' in this case. But the facts on the ground may underdetermine whether it's 'right' to treat definitions more ostensively (i.e., if Berkeley turns out to be right, then when I say 'tree' I'm picking out an image in my mind, not a non-existent material plant Out There), or 'right' to treat definitions as embedded in a theory, an interpretation of the data (i.e., Berkeley doesn't really believe in trees as we do, he j...
OK, that's not the local definition of "meaningful". That explains the confusion.
Well, yeah. But we can look at proofs and sort 'em into "elegant" and "inelegant", I guess, so presumably there are criteria buried somewhere in our circuitry. Doubtless inordinately complex ones.
And how, pray tell, did they reach into the vast immense space of possible hypotheses and premises, and pluck out this one specific set of premises which just so happens that if you accept it completely, it inevitably must result in the conclusion that we have something magical granting us qualia?
The begging was done while choosing the premises, not in one of the premises individually.
Premise: All Bob Chairs must have seventy three thousand legs exactly.
Premise: Things we call chairs are illusions unless they are Bob Chairs.
Premise: None of the things we...
"A philosophical zombie or p-zombie in the philosophy of mind and perception is a hypothetical being that is indistinguishable from a normal human being except in that it lacks conscious experience, qualia, or sentience.[1] "--WP
I am of course taking a p-zombie to be lacking in qualia. I am not sure that alternatives are even coherent, since I don't see how other aspects of consciousness could go missing without affecting behaviour.
You seem to have started one.
That one version of the First Cause argument begs the question by how it describes the universe.
A hypothesis is true or false before it is tested.
I dunno, dude could have good reasons to want knowledge of good and evil staying hush-hush. (Forbidding knowledge in general would indeed be super evil.) For example: You have intuitions telling you to eat when you're hungry and give food to others when they're hungry. And then you learn that the first intuition benefits you but the second makes you a good person. At this point it gets tempting to say "Screw being a good person, I'm going to stuff my face while others starve", whereas before you automatically shared fairly. You could have chosen ...
I'm not sure where I implied that I'm getting at anything. We're p-zombies, we have no additional consciousness, and it doesn't matter because we're still here doing things.
The tangent was just an aside remark to clarify my position, and wasn't to target anyone.
We may already agree on the consciousness issue, I haven't actually checked that.
Taboo experiences.
Since your blog posts are almost entirely (partisan) political in nature, you should know that traditional political discussion is discouraged here in most threads, except the monthly politics thread. The idea that political discussion is often broken is generally called Politics is the Mindkiller, and there is a whole sequence of old posts on the topic.
I haven't read that. Could you clarify?
Having settled the meta-ethics, will you have anything to say about the ethics? Concrete theorems, with proofs, about how we should live?
Cached thoughts regularly supersede actual moral thinking, like all forms of thinking, and I am capable of remembering this experience. Am I misunderstanding your comment?
The claim seems to be that moral judgement--first-order, not metaethical--is purely logical, but the justification ("grinding up the universe") only seems to go as far as showing it to be necessarily partly logical. And first-order ethics clearly has empirical elements. If human biology was such as to lay eggs and leave them to fend for themselves, there would be no immorality in "child neglect".
I don't like the idea of the words I use having definitions that I am unaware of and even after long reflection cannot figure out - not just the subtleties and edge cases, but massive central issues.
Another possibility is that humanity should be altered so that they make different choices (perhaps through education, perhaps through conditioning, perhaps through surgery, perhaps in other ways).
Yet another possibility is that the environment should be altered so that humanity's free choices no longer have the consequence of hastening extinction.
There are others.
This seems suspiciously similar to saying "kin selection exists and group selection basically doesn't" but with less convenient redefinition of "group selection".
Why 'should' my goal be anything? What interest is served by causing all people who share my ethics (which need not include all members of the genus Homo) to extrapolate their ethics?
I didn't claim that I had a universally compelling principle. I can say that someone who embodied the position that eye color granted special privilege would end up opposed to me.
I decline to make value judgements beyond obligatory/permissible/forbidden, unless you can provide the necessary and sufficient conditions for one result to be better than another.
Yes, but those two beliefs don't predict different resulting universes as far as I can tell. They're functionally equivalent, and I disbelieve the one that has to pay a complexity penalty.
Generally not, actually.
Obviously Across the Universe does, but there's nothing idiosyncratic about that.
And also, to occasionally demonstrate profound bigotry, as in Matthew 15:22-26:
Considering that dilemma becomes a lot easier if, say, I'm diverting a train through the one and away from the ten, I'm guessing there are other taboos there than just murder. Bodily integrity, perhaps? There IS something squicky about the notion of having surgery performed on you without your consent.
Anyway, I was under the impression that you admitted that the correct reaction to a "sadistic choice" (kill him or I'll kill ten others) was murder; you merely claimed this was "difficult to encounter" and thus less worrying than the prospect that murder might be moral in day-to-day life. Which I agree with, I think.
At the same time it should be obvious that there is something---pick the most appropriate word---that you have done by trying to kill something that changes the moral implications of the intended victim deciding to kill you first. This is the thing that we can clearly see that Decius is referring to.
The 'consent' implied b...
A singleminded agent with my resources could place people in such a situation. I'm guessing the same is true of you. Kidnapping isn't hard, especially if you aren't too worried about eventually being caught, and murder is easy as long as the victim can't resist. "Difficult" is usually defined with regards to the speaker, and most people could arrange such a sadistic choice if they really wanted. They might be caught, but that's not really the point.
If you mean that the odds of such a thing actually happening to you are low, "difficult" ...
No. But I will specify the definition from Merriam-Webster and elaborate slightly:
Contrive: To bring about with difficulty.
Noncontrived circumstances are any circumstances that are not difficult to encounter.
For example, the credible threat of a gigantic number of people being tortured to death if I don't torture one person to death is a contrived circumstance. 0% of exemplified situations requiring moral judgement are contrived.
There won't even be a "2" or "3" left if you grind everything up. But what if you ca...
"The kind of obscure technical exceptions that wedrifid will immediately think of the moment someone goes and makes a fully general claim about something that is almost true but requires qualifiers or gentler language."
I disagree with your premise that the actions taken by the entity which preceded all others are defined to be moral. Do you have any basis for that claim?
It could if the environment rewarded paperclips. Admittedly this would require an artificial environment, but that's hardly impossible.
I am no longer responding to arguments on this topic, although I will clarify my points if asked. Political argument in an environment where I am already aware of the consensus position on this topic is not productive.
It bugs the hell out of me not to respond to comments like this, but a lengthy and expensive defense against arguments that I have already encountered elsewhere just isn't worth it.
Considering the extent to which those two can help with other objectives, I'd say you should be very careful about giving up on them.
People would act as if it is a syllogism if they had one of the relevant (and not especially uncommon/unrealistic) premises. It would be a syllogism like...
More like [original research?]. I was under the impression that's the closest thing to a "standard" interpretation, but it could as easily have been my local priest's pet theory.
You've gotta admit it makes sense, though.
To my knowledge, this is a common theory, although I don't know whether it's standard. There are a number of references in the Tanakh to human sacrifice, and even if the early Jews didn't practice (and had no cultural memory of having once practiced) human sacrifice, its presence as a known phenomenon in the Levant could have motivated the story. I can imagine several reasons:
(a) The writer was worried about human sacrifice, and wanted a narrative basis for forbidding it.
(b) The writer wasn't worried about actual human sacrifice, but wanted to clearly distinguish his community from Those People who do child sacrifice.
(c) The writer didn't just want to show a difference between Jews and human-sacrifice groups, but wanted to show that Jews were at least as badass. Being willing to sacrifice humans is an especially striking and impressive sign of devotion to a deity, so a binding-of-Isaac-style story serves to indicate that the Founding Figure (and, by implicit metonymy, the group as a whole, or its exemplars) is willing to give proof of that level of devotion, but is explicitly not required to do so by the god. This is an obvious win-win -- we don't have to actually kill anybod
The first includes "if physicalism is true", the second doesn't.
For some values of "imagine". Given relativity, it would be pretty difficult to coherently unplug gravity from mass, space and acceleration. It would be easier under Newton. I conclude that the unpluggability of qualia means we just don't have a relativity-grade explanation of them, an explanation that makes them deeply interwoven with other things.
Yes. My understanding of p-zombies was incorrect/different. If p-zombies have no qualia by the premises, as you've shown me a clear definition of, then we can't be p-zombies. (ignoring the details and assuming your experiences are like my own, rather than the Lords of the Matrix playing tricks on me and making you pretend you have qualia; I think this is a reasonable assumption to work with)
Well ... is it? Would you notice if your morals changed when you weren't looking?
Hey, it doesn't have to be orphans. Or it could be two different kinds of orphan - boys and girls, say. The boys' orphanage is on fire! So is the nearby girls' orphanage! Which one do you save?
Protip: The correct response is not "I self-modify to only care about one sex."
EDIT: Also, aren't you kind of fighting the counterfactual?
So it is trivially likely that the creator of the universe (God) embodies the set of axioms which describe morality? God is not good?
I handle that contradiction by pointing out that the entity which created the universe, the abstraction which is morality, and the entity which loves genocide are not necessarily the same.
It could be valid to define "better" any way you like. But the definition most consistent with normal usage includes all and only criteria that matter to humans. This is why people say things like "but is it truly, really, fundamentally better?" Because people really care about whether A is better than B. If "better" meant something else (other than better), such as produces more paperclips, then people would find a different word to describe what they care about.
Well, there were some aliens involved.
First off, w.r.t. my saying somebody's got to try to ward off the worst possibilities of the AI "singularity", that is to give due respect to what (correct me if I'm wrong) seems to be the primary purpose of the SI, and Eliezer_Yudkowsky's avowed life purpose (based on bloggingheads conversations ca 2009-10).
The Childhood's End analogy was pretty off the cuff, and a "really bad variation" of it may or may not be, on reflection, a good analogue for any danger to present society, but here's the gist o...
It is, in Eliezer's sense of the word. So is “clippiness is clippier than betterness”, though.
Do you see why a 2-place beauty would be more relevant than a 1-place beauty?
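The 2-place/1-place distinction can be sketched as function arity (a hypothetical illustration with invented names and scores): a 2-place Beauty takes both an observer and an object, and fixing the observer yields that observer's own 1-place Beauty.

```python
# Sketch of the 2-place vs 1-place distinction (all data invented).
def beauty(observer, obj):
    """2-place: beauty as a relation between an observer and an object."""
    tastes = {
        ("alice", "sunset"): 9, ("alice", "spreadsheet"): 2,
        ("bob",   "sunset"): 4, ("bob",   "spreadsheet"): 8,
    }
    return tastes[(observer, obj)]

def fix_observer(observer):
    """Fixing the observer turns the 2-place relation into a 1-place function."""
    return lambda obj: beauty(observer, obj)

beauty_alice = fix_observer("alice")  # Alice's personal 1-place Beauty
beauty_bob = fix_observer("bob")      # Bob's personal 1-place Beauty
```

The 2-place version makes explicit that "beautiful" varies with the observer, while each curried 1-place version behaves like a single fixed standard.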
I was unclear; I didn't mean "that some piles will have prime membership" but that "m...
I didn't say that. Of course there is something you should do, given a set of goals...hence decision theory.
I think that's a misapplication of reductionism (the thing I think Eliezer is thinking about when he said it was mistaken), where people take something the...
So your argument is "Doing arithmetic requires consciousness; and we can tell that something is doing arithmetic by looking at its hardware; so we can tell with certainty by looking at certain hardware states that the hardware is sentient"?
So your argument is "We have explained some things physically before, therefore we can explain consciousness physically"?
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are computations causally determined by non-computations. How would examining anything about the non-computations tell us that the computations exist, or what particular functions those computations are computing?
But morality isn't just moral intuitions. It includes "eat fish on Friday".
That doesn't follow. Fitness-enhancing and gene-spreading behaviour don't have to reward the organism concerned. What's the reward for self-sacrifice?
That's a considerable understatement.
Sorry, I was misusing terminology. Any ignorance-generating / ignorance-embodying explanation (e.g. quantum mysticism or élan vital) uses what I'm calling "mysterious substance."
Basically I'm calling "quantum" a mysterious substance (for the quantum mystics), even though it's not like you can bottle it.
Maybe I should have said "mysterious form?" :D
If science had them, there would be no mileage in the philosophical project, any more than there is currently mileage in trying to found dualism on the basis that matter can't think.
You talk like you've solved qualia. Have you?
"Qualia" is something our brains do. We don't know how our brains do it, but it's pretty clear by now that our brains are indeed what does it.
I've not actually read this essay (will do so later today), but I disagree that most people here consider the issue of qualia and the "hard problem of consciousness" to be a solved one.
Time for a poll.
[pollid:372]
It should be remembered though that the guy who's famous for formulating the hard problem of consciousness is:
1) A fan of EY's TDT, who's made significant efforts to get the theory some academic attention. 2) A believer in the singularity, and its accompanying problems. 3) The student of Douglas Hofstadter. 4) Someone very interested in AI. 5) Someone very well versed and interested in physics and psychology. 6) A rare, but occasional, poster on LW. 7) Very likely one of the smartest people alive. etc. etc.
I think consciousness is reducible too, but David Chalmers is a serious dude, and the 'hard problem' is to be taken very, very seriously. It's very easy to not see a philosophical problem, and very easy to think that the problem must be solved by psychology somewhere, much harder to actually explain a solution/dissolution.
Why is the extinction of humanity worse than involuntary restrictions on personal agency? How much reduction in risk or expected delay of extinction is needed to justify denying all choice to all people?
QED does not apply there. You need a huge ceteris paribus included before that follows simply and the ancestor comments have already brought up ways in which all else may not be equal.
I don't believe you. Immortalists can have two fully functional lungs and kidneys. I think you are referring to something else.
The definitions that you are free to introduce or change usually latch on to an otherwise motivated thing, you usually have at least some sort of informal reason to choose a particular definition. When you change a definition, you start talking about something else. If it's not important (or a priori impossible to evaluate) what it is you will be talking about, in other words if the motivation for your definition is vague and tolerates enough arbitrariness, then it's OK to change a definition without a clear reason to make the particular change that you do...
I began to question whether I intrinsically value freedom of agents other than me as a result of this conversation. I will probably not have an answer very quickly, mostly because I have to disentangle my belief that anyone who would deny freedom to others is mortally opposed to me. And partially because I am (safely) in a condition of impaired mental state due to local cultural tradition.
I'll point out that "human" has a technical definition of "members of the genus Homo" and includes species which are not even Homo sapiens. If you wi...
Why do I care if they make mistakes that are not local to me? I get much better security return on investment by locally preventing violence against me and my concerns, because I have to handle several orders of magnitude fewer people.
... with the goal of reaching a point that is likely to be agreed on by as many people as possible, and then discussing the implications of that point.
In addition to being a restatement of personal values, I think that it is an easily-defended principle. It can be attacked and defeated with a single valid reason why one person or group is intrinsically better or worse than any other, and evidence for a lack of such reason is evidence for that statement.
Referring to a part of your brain doesn't have the right properties when you change between different universes.
I'm completely sure that I didn't understand what you meant by that.
Like brains and rotting flesh?
Seems legit. Could you give me an example of "easily-defended principles", as opposed to "restatements of personal values"?
I know only the words spoken, not those intended. (And I concluded early in the conversation that the entire subthread should be truncated and replaced with a link. So much confusion and muddled thinking!)
Were you saying that the results of that experiment were completely uninteresting?
How is it that something which is physically identical to a human and has a physical difference from a human is a coherent concept?
I think that "violates bodily autonomy"=bad is a better core rule than "increases QALYs"=good.