Morality Isn't Logical

by Wei_Dai · 26th Dec 2012 · 85 comments

32

Ethics & Morality
Personal Blog

What do I mean by "morality isn't logical"? I mean it in the same sense that mathematics is logical but literary criticism isn't: the "reasoning" we use to think about morality doesn't resemble logical reasoning. All systems of logic that I'm aware of have a concept of proof and a method of verifying, with a high degree of certainty, whether an argument constitutes a proof. As long as the logic is consistent (and we have good reason to think that many of them are), once we verify a proof we can accept its conclusion without worrying that there may be another proof that establishes the opposite conclusion. With morality though, we have no such method, and people routinely make moral arguments that can be reversed or called into question by other moral arguments. (Edit: For an example of this, see these posts.)

Without being a system of logic, moral philosophical reasoning likely (or at least plausibly) doesn't have any of the nice properties that a well-constructed system of logic would have, for example, consistency, validity, soundness, or even the more basic property that considering arguments in a different order, or in a different mood, won't cause a person to accept an entirely different set of conclusions. For all we know, somebody trying to reason about a moral concept like "fairness" may just be taking a random walk as they move from one conclusion to another based on moral arguments they encounter or think up.

In a recent post, Eliezer said "morality is logic", by which he seems to mean... well, I'm still not exactly sure what, but one interpretation is that a person's cognition about morality can be described as an algorithm, and that algorithm can be studied using logical reasoning. (Which of course is true, but in that sense both math and literary criticism as well as every other subject of human study would be logic.) In any case, I don't think Eliezer is explicitly claiming that an algorithm-for-thinking-about-morality constitutes an algorithm-for-doing-logic, but I worry that the characterization of "morality is logic" may cause some connotations of "logic" to be inappropriately sneaked into "morality". For example Eliezer seems to (at least at one point) assume that considering moral arguments in a different order won't cause a human to accept an entirely different set of conclusions, and maybe this is why. To fight this potential sneaking of connotations, I suggest that when you see the phrase "morality is logic", remind yourself that morality isn't logical.

 


85 comments

Taboo both "morality" and "logical" and you may find that you and Eliezer have no disagreement.

LessWrongers routinely disagree on what is meant by "morality". If you think "morality" is ambiguous, then stipulate a meaning ('morality₁ is...') and carry on. If you think people's disagreement about the content of "morality" makes it gibberish, then denying that there are moral truths, or that those truths are "logical," will equally be gibberish. Eliezer's general practice is to reason carefully but informally with something in the neighborhood of our colloquial meanings of terms, when it's clear that we could stipulate a precise definition that adequately approximates what most people mean. Words like 'dog' and 'country' and 'number' and 'curry' and 'fairness' are fuzzy (if not outright ambiguous) in natural language, but we can construct more rigorous definitions that aren't completely semantically alien.

Surprisingly, we seem to be even less clear about what is meant by "logic". A logic, simply put, is a set of explicit rules for generating lines in a proof. And "logic," as a human practice, is the use a... (read more)

Wei_Dai: If this is the case, then I think he has failed to show that morality is logic, unless he's using an extremely lax standard of "sufficiently careful". For example, I think that "sufficiently careful" reasoning must at a minimum use a method of reasoning that is not sensitive to the order in which one encounters arguments, and is not sensitive to the mood one is in when considering those arguments. Do you think Eliezer has shown this? Or alternatively, what standard of "sufficiently careful" do you think Eliezer is using when he says "morality is logic"?

I'd split up Eliezer's view into several distinct claims:

  1. A semantic thesis: Logically regimented versions of fairness, harm, obligation, etc. are reasonable semantic candidates for moral terms. They may not be what everyone actually means by 'fair' and 'virtuous' and so on, but they're modest improvements in the same way that a rigorous genome-based definition of Canis lupus familiaris would be a reasonable improvement upon our casual, everyday concept of 'dog,' or that a clear set of thermodynamic thresholds would be a reasonable regimentation of our everyday concept 'hot.'

  2. A metaphysical thesis: These regimentations of moral terms do not commit us to implausible magical objects like Divine Commands or Irreducible 'Oughtness' Properties In Our Fundamental Physics. All they commit us to are the ordinary objects of physics, logic, and mathematics, e.g., sets, functions, and causal relationships; and sets, functions, and causality are not metaphysically objectionable.

  3. A normative thesis: It is useful to adopt moralityspeak ourselves, provided we do so using a usefully regimented semantics. The reasons to refuse to talk in a moral idiom are, in part thanks to 1 and 2, not strong e

... (read more)
Eliezer Yudkowsky: I like this split-up! (From the great-grandparent.) I think I want to make a slightly stronger claim than this; i.e., that by logical discourse we're thinning down a universe of possible models using axioms.

One thing I didn't go into, in this epistemology sequence, is the notion of 'effectiveness' or 'formality', which is important but which I didn't go into as much because my take on it feels much more standard - I'm not sure I have anything more to say about what constitutes an 'effective' formula or axiom or computation or physical description than other workers in the field. This carries a lot of the load in practice in reductionism; e.g., the problem with irreducible fear is that you have to appeal to your own brain's native fear mechanisms to carry out predictions about it, and you can never write down what it looks like. But after we're done being effective, there's still the question of whether we're navigating to a part of the physical universe, or narrowing down mathematical models, and by 'logical' I mean to refer to the latter sort of thing rather than the former.

The load of talking about sufficiently careful reasoning is mostly carried by 'effective' as distinguished from empathy-based predictions, appeals to implicit knowledge, and so on. I also don't claim to have given morality an effective description - my actual moral arguments generally consist in appealing to implicit and hopefully shared reasons-for-action, not derivations from axioms - but the metaphysical and normative claim is that these reasons-for-action both have an effective description (descriptively speaking) and that any idealized or normative version of them would still have an effective description (normatively speaking).
Wei_Dai: Let me try a different tack in my questioning, as I suspect maybe your claim is along a different axis than the one I described in the sibling comment. So far you've introduced a bunch of "moving parts" for your metaethical theory:

* moral arguments
* implicit reasons-for-action
* effective descriptions of reasons-for-action
* utility function

But I don't understand how these are supposed to fit together, in an algorithmic sense. In decision theory we also have missing modules or black boxes, but at least we specify their types and how they interact with the other components, so we can have some confidence that everything might work once we fill in the blanks. Here, what are the types of each of your proposed metaethical objects? What's the "controlling algorithm" that takes moral arguments and implicit reasons-for-action, produces effective descriptions of reasons-for-action, and eventually produces the final utility function? As you argued in Unnatural Categories [http://lesswrong.com/lw/tc/unnatural_categories/] (which I keep citing recently), reasons-for-action can't be reduced the same way as natural categories. But it seems completely opaque to me how they are supposed to be reduced, beyond the fact that moral arguments are involved. Am I asking for too much? Perhaps you are just saying that these must be the relevant parts, and let's figure out both how they are supposed to work internally, and how they are supposed to fit together?
Wei_Dai: So would it be fair to say that your actual moral arguments do not consist of sufficiently careful reasoning? Is there a difference between this claim and the claim that our actual cognition about morality can be described as an algorithm? Or are you saying that these reasons-for-action constitute (currently unknown) axioms which together form a consistent logical system? Can you see why I might be confused? The former interpretation is too weak to distinguish morality from anything else, while the latter seems too strong given our current state of knowledge. But what else might you be saying? Similar question here: are you saying anything beyond the claim that any idealized or normative way of thinking about morality is still an algorithm?
Wei_Dai: If I grant 1, I currently can't think of any objections to 2 and 3 (which doesn't mean that I won't if I took 1 more seriously and therefore had more incentive to look for such objections). I think at a minimum, it's unusually difficult to do 1-style regimentation for morality (and Eliezer himself explained why in Unnatural Categories [http://lesswrong.com/lw/tc/unnatural_categories/]). I guess one point I'm trying to make is that whatever kind of reasoning we're using to attempt this kind of regimentation is not the same kind of reasoning that we use to think about some logical object after we have regimented it. Does that make sense?
lukeprog: RobbBB probably knows this, but I'd just like to mention that the three claims listed above, at least as stated there, are common to many metaethical approaches, not just Eliezer's. Desirism [http://commonsenseatheism.com/?p=16373] is one example. Other examples include the moral reductionisms of Richard Brandt, Peter Railton, and Frank Jackson.

By "morality" you seem to mean something like 'the set of judgments about mass wellbeing ordinary untrained humans arrive at when prompted.' This is about like denying the possibility of arithmetic because people systematically make errors in mathematical reasoning. When the Pythagoreans reasoned about numbers, they were not being 'sufficiently careful;' they did not rigorously define what it took for something to be a number or to have a solution, or stipulate exactly what operations are possible; and they did not have a clear notion of the abstract/concrete distinction, or of which of these two domains 'number' should belong to. Quite plausibly, Pythagoreans would arrive at different solutions in some cases based on their state of mind or the problems' framing; and certainly Pythagoreans ran into disagreements they could not resolve and fell into warring camps as a result, e.g., over whether there are irrational numbers.

But the unreasonableness of the disputants, no matter how extreme, cannot infect the subject matter and make that subject matter intrinsically impossible to carefully reason with. No matter how extreme we make the Pythagoreans' eccentricities, as long as... (read more)

Wei_Dai: I think I've been careful not to claim that morality is impossible to carefully reason with, but just that we don't know how to carefully reason with it yet, and that given our current state of knowledge, it may turn out to be impossible to carefully reason with. With decision theory, we're also in a "non-logical" state of reasoning, where we don't yet have a logical definition of what constitutes correct decision theory and therefore can't just apply logical reasoning. What's helpful in the case of decision theory is that it seems reasonable to assume that when we do come up with such a logical definition, it will be relatively simple. This helps tremendously in guiding our search, and partly compensates for the fact that we do not know how to reason carefully during this search. But with "morality", we don't have this crutch, since we think it may well be the case that "value is complex".
Rob Bensinger: I agree that it's going to take a lot of work to fully clarify our concepts. I might be able to assign a less remote probability to 'morality turns out to be impossible to carefully reason with' if you could give an example of a similarly complex human discourse that turned out in the past to be 'impossible to carefully reason with'. High-quality theology is an example of the opposite; we turned out to be able to reason very carefully (though admittedly most theology is subpar) with slightly regimented versions of concepts in natural religion. At least, there are some cases where the regimentation was not completely perverse, though the crazier examples may be more salient in our memories. But the biggest problem with theology was metaphysical, not semantic; there just weren't any things in the neighborhood of our categories for us to refer to. If you have no metaphysical objections to Eliezer's treatment of morality beyond your semantic objections, then you don't think a regimented morality would be problematic for the reasons a regimented theology would be. So what's a better example of a regimentation that would fail because we just can't be careful about the topic in question? What symptoms and causes would be diagnostic of such cases?

By comparison, perhaps. But it depends a whole lot on what we mean by 'morality'. For instance, do we mean:

* Morality is the hypothetical decision procedure that, if followed, tends to maximize the amount of positively valenced experience in the universe relative to negatively valenced experience, to a greater extent than any other decision procedure.
* Morality is the hypothetical decision procedure that, if followed, tends to maximize the occurrence of states of affairs that agents prefer relative to states they do not prefer (taking into account that agents generally prefer not to have their preferences radically altered).
* Morality is any decision procedure that anyone wants people in gener
Wei_Dai: What I mean by "morality" is the part of normativity [http://www.philosophyetc.net/2008/04/grasping-normativity.html] ("what you really ought, all things considered, to do") that has to do with values (as opposed to rationality). In general, I'm not sure how to show a negative like "it's impossible to reason carefully about subject X", so the best I can do is exhibit some subject that people don't know how to reason carefully about and that intuitively seems like it may be impossible to reason carefully about. Take the question, "Which sets really exist?" (Do large cardinals [http://en.wikipedia.org/wiki/Large_cardinal] exist, for example?) Is this a convincing example to you of another subject that may be impossible to reason carefully about?
TimS: Haven't we been in this position since before mathematics was a thing? The lack of progress towards consensus in that period of time seems disheartening.
Rob Bensinger: The natural number line is one of the simplest structures a human being is capable of conceiving. The idea of a human preference is one of the most complex structures a human being has yet encountered. And we have a lot more emotional investment and evolutionary baggage interfering with carefully axiomatizing our preferences than with carefully axiomatizing the numbers. Why should we be surprised that we've made more progress with regimenting number theory than with regimenting morality or decision theory in the last few thousand years?
TimS: In terms of moral theory, we appear to have made no progress at all. We don't even agree on definitions. Mathematics may or may not be an empirical discipline, but if you get your math wrong badly enough, you lose the ability to pay rent [http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/]. If morality paid rent in anticipated experience, I'd expect societies that had more correct morality to do better and societies with less correct morality to do worse. Morality is so important that I expect marginal differences to have major impact. And I just don't see the evidence that such an impact is or ever did happen. So, have I misread history? Or have I made a mistake in predicting that chance differences in morality should have major impacts on the prosperity of a society? (Or some other error?)
Rob Bensinger: But defining terms is the trivial part of any theory; if you concede that we haven't even gotten that far (and that term-defining is trivial), then you'll have a much harder time arguing that if we did agree on definitions we'd still have made no progress. You can't argue that, because if we all have differing term definitions, then that on its own predicts radical disagreement about almost anything; there is no need to posit a further explanation. Morality pays rent in anticipated experience in the same three basic ways that mathematics does:

1. Knowing about morality helps us predict the behavior of moralists, just as knowing about mathematics helps us predict the behavior of mathematicians (including their creations). If you know that people think murder is bad, you can help predict why murder is so rare; just as knowing mathematicians' beliefs about natural numbers helps us predict what funny squiggly lines will occur on calculators. This, of course, doesn't require any commitment to moral realism, just as it doesn't require a commitment to mathematical realism.

2. Inasmuch as the structure of moral reasoning mirrors the structure of physical systems, we can predict how physical systems will change based on what our moral axioms output. For instance, if our moral axioms are carefully tuned to parallel the distribution of suffering in the world, we can use them to predict what sorts of brain-states will be physically instantiated if we perform certain behaviors. Similarly, if our number axioms are carefully tuned to parallel the changes in physical objects (and heaps thereof) in the world, we can use them to predict how physical objects will change when we translate them in spacetime.

3. Inasmuch as our intuitions give rise to our convictions about mathematics and morality, we can use the aforementioned convictions to predict our own future intuitions. In particular, an e
Eliezer Yudkowsky: (Some common senses of "moral fortitude" definitely cause GDP, at minimum in the form of trust between businesspeople and less predatory bureaucrats. But this part is equally true of Babyeaters.)

There's a pseudo-theorem in math that is sometimes given to 1st-year graduate students (at least it was in my case, 35 years ago), which is that

All natural numbers are interesting.

Natural numbers consist of {1, 2, 3, ...} -- actually a recent hot topic of conversation on LW ("natural numbers" is sometimes defined to include 0, but everything that follows will work either way).

The "proof" used the principle of mathematical induction (one version of which is):

If P(n) is true for n=1, and the assertion "m is the smallest natural number such that !P(m)" leads to a contradiction, then P(n) is true for all natural numbers.

and also uses the fact (the well-ordering principle, which follows from the Peano construction of the natural numbers) that every non-empty subset of natural numbers has a smallest element.
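In symbols, this version of induction is just a transcription of the principle above (with P ranging over properties of natural numbers):

```latex
% Least-counterexample induction: P(1) holds, and supposing m is the
% smallest number with \neg P(m) yields a contradiction; hence, since
% every non-empty subset of N has a least element, P holds everywhere.
\[
\Bigl( P(1) \,\wedge\, \neg\exists m\,\bigl[\neg P(m) \wedge \forall k<m,\ P(k)\bigr] \Bigr)
\;\Longrightarrow\; \forall n,\ P(n)
\]
```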

PROOF:

1 is interesting.

Suppose the theorem is false. Then some number m is the smallest uninteresting number. But then wouldn't that be interesting?

Contradiction. QED.

This illustrates a pitfall of mixing (qualities that don't really belong in a mathematical statement) with (rigorous logic), and in general, if you take a quality that is not rigorously defined, and apply a sufficiently long train of logic to it, ... (read more)
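The instability behind this pseudo-proof can be made concrete with a toy simulation (the starting set, the cutoff, and the predicate are arbitrary assumptions for illustration, not a real definition of "interesting"): any fixed extensional notion of "interesting" is destabilized by the meta-rule "being the smallest uninteresting number is itself interesting".

```python
# Toy model: "interesting" as membership in a fixed, arbitrary set.
# The pseudo-proof's meta-rule ("the smallest uninteresting number is
# thereby interesting") forces the set to keep growing, so the informal
# predicate never pins down a stable extension.

def smallest_uninteresting(interesting, limit=20):
    """Return the least n in 1..limit not in the set, or None."""
    for n in range(1, limit + 1):
        if n not in interesting:
            return n
    return None

interesting = {1, 2, 3, 5, 8, 13}  # arbitrary starting set (an assumption)

for _ in range(5):
    m = smallest_uninteresting(interesting)
    # By the meta-rule, m is now "interesting", so the set must grow.
    interesting.add(m)

print(sorted(interesting))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 13]
```

With a rigorously defined predicate this regress cannot start, which is exactly the gap between "interesting" and a well-defined mathematical property.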

magfrump: Okay, but if I honestly believe that all natural numbers are interesting and thought of this proof as pretty validly matching my intuitions, what does that mean?
HalMorris: Unless you turn "interesting" into something rigorously defined and precisely communicated to others, what it means is that all natural numbers are {some quality that is not rigorously defined and can't be precisely communicated to others}.
magfrump: I guess I feel that even if I haven't defined "interesting" rigorously, I still have some intuitions for what "interesting" means, large parts of which will be shared by my intended audience. For example, I could make the empirical prediction that if someone names a number, I could talk about it for a bit and then they would agree it was interesting. (I mean this as a toy example; I'm not sure I could do this.) One could then take approximations of these conversations, or even the existence of these conversations, and define interesting* to be "I can say a unique few sentences about historic results surrounding this number and related mathematical factoids", which might then be a strong empirical predictor of people claiming something is interesting. So I feel like there's something beyond a useless logical fact being expressed by my intuitions here.
gwern: http://en.wikipedia.org/wiki/Interesting_number_paradox and http://en.wikipedia.org/wiki/Berry_paradox
HalMorris: I can't tell what this is. The first link might imply that gwern thinks I misstated the Interesting Number Paradox (I looked at the Wikipedia article before I wrote my post, but went with my memory, and there are multiple equivalent ways of saying it - but if you think I got it wrong...?). Or maybe it was offered as a handy reference. The Berry Paradox sounds like a very different kettle of fish... with more real complexity.
Viliam_Bur: I would bet on this one. More meta: perhaps your priors for "if someone replies to my comment, they disagree with me" are too high. ;-) Maybe not for the internet in general, but LW is not an average internet site.

For all we know, somebody trying to reason about a moral concept like "fairness" may just be taking a random walk as they move from one conclusion to another based on moral arguments they encounter or think up.

Well. Not a purely random walk. A weighted one.

Isn't this true of all beliefs? And isn't rationality just increasing the weight in the right direction?

The word "morality" needs to be made more specific for this discussion. One of the things you seem to be talking about is mental behavior that produces value judgments or their justifications. It's something human brains do, and we can in principle systematically study this human activity in detail, or abstractly describe humans as brain activity algorithms and study those algorithms. This characterization doesn't seem particularly interesting, as you might also describe mathematicians in this way, but this won't be anywhere close to an efficient... (read more)

People do all sorts of sloppy reasoning; everyday logic also arrives at both A and ~A; any sort of fuzziness leads to that. To actually be moral, it is necessary that you can't arrive at both A and ~A at will - otherwise your morality provides no constraint.

DanArmak: Different people can disagree about pretty much any moral question. Any one person's morality may be stable enough not to arrive at both A and ~A, but since the result still depends most of all on that person's upbringing and culturally endorsed beliefs, morality is not very useful as logic. (Of course it is useful as morality: our brains are built that way.)
crap: The difference in values is a little overstated, I think. Practically, there's little difference in what people say they'd do in the Milgram experiment, but a huge difference in what they actually do.
DanArmak: I'm not sure how to parse your grammar. Are you saying that different people all say they will do the same ('good') thing on Milgram, but in practice different people do different things on Milgram (some 'good', some 'bad')? Or are you saying that there is a large difference between what people say they would do on Milgram and what they actually do? (Because replications of Milgram are prohibited by modern ethics boards, the data is weaker than I'd like it to be.) You also say that I overstate the difference in values between people. But Milgram ran his experiment just once, on very homogeneous people, all from the same culture. If he'd compared it to widely differing cultures, I expect at least some of the time the compliance rates would differ significantly.

In a recent post, Eliezer said "morality is logic"

The actual quote is:

morality is (and should be) logic, not physics

[anonymous]:

Eliezer said "morality is logic", by which he seems to mean... well, I'm still not exactly sure what, but one interpretation is that a person's cognition about morality can be described as an algorithm, and that algorithm can be studied using logical reasoning. (Which of course is true, but in that sense both math and literary criticism as well as every other subject of human study would be logic.)

Thank you -- I knew I ADBOCed with Eliezer's meta-ethics, but I had trouble putting down in words the reason.

You are not using the same definition of logic EY does. For him logic is everything that is not physics in his physics+logic (or territory+maps, in the previously popular terms) picture of the world. Mathematical logic is a tiny sliver of what he calls "logic". For comparison, in an instrumentalist description there are experiences+models, and EY's logic is roughly equivalent to "models" (maps, in the map-territory dualism), of which mathematics is but one.

[anonymous]:

With morality though, we have no such method,

Every act of lying is morally prohibited / This act would be a lie // This act is morally prohibited.

So here I have a bit of moral reasoning, the conclusion of which follows from the premises. The argument is valid, so if the premises are true, the conclusion can be considered proven. So given that I can give you valid proofs for moral conclusions, in what way is morality not logical?
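The validity claim here can even be checked mechanically. A minimal sketch (a toy propositional encoding introduced for illustration, not anything from the thread): enumerate every truth assignment to "this act is a lie" and "this act is prohibited", and confirm that no assignment makes both premises true while the conclusion is false. The same check exposes an invalid form (affirming the consequent) for contrast.

```python
# Brute-force validity check for the syllogism:
#   P1: every lie is prohibited (lie -> prohibited)
#   P2: this act is a lie
#   C:  this act is prohibited
from itertools import product

assignments = list(product([True, False], repeat=2))

# Valid: in every assignment where both premises hold, the conclusion holds.
valid = all(
    prohibited
    for is_lie, prohibited in assignments
    if (not is_lie or prohibited) and is_lie
)

# Invalid form for contrast: "prohibited, therefore a lie"
# (affirming the consequent) fails on the assignment (False, True).
fallacy = all(
    is_lie
    for is_lie, prohibited in assignments
    if (not is_lie or prohibited) and prohibited
)

print(valid, fallacy)  # True False
```

This is the narrow sense in which individual moral arguments can be "logical": validity of form is checkable even when the truth of the premises is not.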

doesn't have any of the nice properties that a well-constructed system of logic would have, for example, consistency, vali

... (read more)
Wei_Dai: I should have given some examples of the kind of moral reasoning I'm referring to.

* http://lesswrong.com/lw/n3/circular_altruism/
* http://lesswrong.com/lw/1r9/shut_up_and_divide/
crap: The 1st link is ambiguity aversion. Morality is commonly taken to describe what one will actually do when trading off private gains vs other people's losses. See this [http://www.youtube.com/watch?v=QrKnhOJ-R80] as an example of moral judgement. Suppose Roberts is smarter. He will quickly see that he can donate 10% to charity, and it'll take longer for him to reason about the value of cash that was not given to him (reasoning that may stop him from pressing the button), so there will be a transient during which he pushes the button, unless he somehow suppresses actions during transients. It's an open-ended problem, 'unlike logic', because consequences are difficult to evaluate. edit: been in a hurry.
[anonymous]: Ah, thank you, that is helpful. In the case of 'circular altruism', I confess I'm quite at a loss. I've never really managed to pull an argument out of there. But if we're just talking about the practice of quantifying goods in moral judgements, then I agree with you there's no strongly complete ethical calculus that's going to render ethics a mathematical science. But at least in 'circular altruism' EY doesn't need quite so strong a view: so far as I can tell, he's just saying that our moral passions conflict with our reflective moral judgements. And even if we don't have a strongly complete moral system, we can make logically coherent reflective moral judgements. I'd go so far as to say we can make logically coherent reflective literary criticism judgements. Logic isn't picky. So while, on the one hand, I'm also (as yet) unconvinced about EY's ethics, I think it goes too far in the opposite direction to say that ethical reasoning is inherently fuzzy or illogical. Valid arguments are valid arguments, regardless.
ddxxdd: The problem is that when the conclusion is "proven wrong" (i.e. "my gut tells me that it's better to lie to an Al Qaeda prison guard than to tell him the launch codes for America's nuclear weapons"), then the premises that you started with are wrong. So if I'm understanding Wei_Dai's point, the name of the game is to find a premise that cannot and will not be contradicted by other moral premises via a bizarre hypothetical situation. I believe that Sam Harris has already mastered this thought experiment. Paraphrased from his debate with William Lane Craig: "There exists a hypothetical universe in which there is the absolute most amount of suffering possible. Actions that move us away from that universe are considered good; actions that move us towards that universe are considered bad."
palladias: This is why I find Harris frustrating. He's stating something pretty much everyone agrees with, but they all make different substitutions for the variable "suffering". And then Harris is vague about what he personally plugs in.
evand: At least as paraphrased here, the definition of "move towards" is very unclear. Is it a universe with more suffering? A universe with more suffering right now? A universe with more net present suffering, according to some discount rate? What if I move to a universe with more suffering both right now and for all possible future discount rates, assuming no further action, but for which future actions that greatly reduce suffering are made easier? (In other words, does this system get stuck in local optima?) I think there is much that this approach fails to solve, even if we all agree on how to measure suffering. (Included in "how to measure suffering" is a bit of complicated stuff like average vs total utilitarianism, how to handle existential risks, and how to do probability math on outcomes that produce a likelihood of suffering.)
[anonymous]: I hope so! It would be terribly awkward to find ourselves with true premises, valid reasoning, and a false conclusion. But unless by 'gut feeling' you mean a valid argument with true premises, gut feelings can't prove anything wrong. Perhaps, though that wouldn't speak to whether or not morality is logical. If Wei_Dai's point is that morality is, at best, axiomatic, then sure. But so is Peano arithmetic, and that's as logical as can be.
ddxxdd: I just stumbled into this discussion after reading an article about why mathematicians and scientists dislike traditional, Socratic philosophy [http://philosophynow.org/issues/46/Newtons_Flaming_Laser_Sword], and my mindset is fresh off that article. It was a fantastic read, but the underlying theme that I feel is relevant to this discussion is this:

* Socratic philosophy treats logical axioms as "self-evident truths" (e.g. "I think, therefore I am").
* Mathematics treats logical axioms as "propositions", and uses logic to see where those propositions lead (e.g. if you have a line and a point, the number of lines that you can draw through the point parallel to the original line determines what type of geometry you are working with: multidimensional, spherical, or flat-plane geometry).
* Science treats logical axioms as "hypotheses", and logical "conclusions" as testable statements that can determine whether those axioms are true or not (e.g. if this weird system known as "quantum mechanics" were true, then we would see an interference pattern when shooting electrons through a screen with 2 slits).

So I guess the point that we should be making is this: which philosophical approach towards logic should we take to study ethics? I believe Wei_Dai would say that the first approach, treating ethical axioms as "self-evident truths", is problematic due to the fact that a lot of hypothetical situations (like my example before) can create a lot of contradictions between various ethical axioms (e.g. choosing between telling a lie and letting terrorists blow up the planet).
2[anonymous]8yI read the article. It's interesting (I liked the thing about pegs and strings), but I don't think the guy (or you) has read a lot of actual Greek philosophy. I don't mean that as an attack (why would you want to, after all?), but it makes some of his, and your, claims a little strange. Socrates, in the Platonic dialogues, is unwilling to take the law of non-contradiction as an axiom. There just aren't any axioms in Socratic philosophy, just discussions. No proofs, just conversations. Plato (and certainly not Socrates) doesn't have doctrines, and Plato is totally and intentionally merciless with people who try to find Platonic doctrines. Also, Plato and Socrates predate, for most purposes, logic. Right, Aristotle largely invented (or discovered) that trick. Aristotle's logic is consistent and strongly complete (i.e. it's not axiomatic, and relies on no external logical concepts). Euclid picked up on it, and produced a complete and consistent mathematics. So (some) Greek philosophy certainly shares this idea with modern mathematics. I don't think scientists treat logical axioms as hypotheses. Logical axioms aren't empirical claims, and aren't really subject to testing. But Aristotle's work on biology, meteorology, etc. forwards plenty of empirical hypotheses, along with empirical evidence for them. Textual evidence suggests Aristotle performed lots of experiments, mostly in the form of vivisection of animals. He was wrong about pretty much everything, but his method was empirical. This is to say nothing of contemporary philosophy, which certainly doesn't take very much as 'self-evident truth'. I can assure you, no one gets anywhere with that phrase anymore, in any study. Not if those ethical axioms actually are self-evident truths. Then hypothetical situations (no matter how uncomfortable they make us) can't disrupt them. But we might, on the basis of these situations, conclude that we don't have any self-evident moral axioms.
But, as you neatly argue, we don't
0ddxxdd8yThanks for taking the time to read and respond to the article, and for the critique; you are correct in that I am not well-versed in Greek philosophy. With that being said, allow me to try to expand my framework to explain what I'm trying to get at: * Scientists, unlike mathematicians, don't always frame their arguments in terms of pure logic (i.e. If A and B, then C). However, I believe that the work that comes from them can be treated as logical statements. Example: "I think that heat is transferred between two objects via some sort of matter that I will call 'phlogiston'. If my hypothesis is true, then an object will lose mass as it cools down." 10 days later: "I have weighed an object when it was hot, and I weighed it when it was cold. The object did not lose any mass. Therefore, my hypothesis is wrong". In logical terms: Let's call the Theory of Phlogiston "A", and let's call the act of measuring a loss of mass with a loss of heat "C". 1. If A, then C. 2. Physical evidence is obtained 3. If Not C, then Not A. Essentially, the scientific method involves the creation of a hypothesis "A", and a logical consequence of that hypothesis, "If A then C". Then physical evidence is presented in favor of, or against "C". If C is disproven, then A is disproven. This is what I mean when I say that hypotheses are "axioms", and physical experiments are "conclusions". * In response to this statement: "No proofs, just conversations". In the framework that I'm working in, every single statement is either a premise or a conclusion. In addition, every single statement is either a "truth" (that we are to believe immediately), a "proposition" (that we are to entertain the logical implications of), or part of a "hypothesis/implication" pair (that we are supposed to believe with a level of skepticism until an experiment verifies it or disproves it). I believe that every single statement that has ever been made in any field of study fal
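The hypothesis/implication structure described in the comment above can be sketched as a small modus tollens check (an illustrative toy added by the editor, not part of the original comment; the function name is made up):

```python
# Toy sketch of the "hypothesis/implication pair" above:
# if hypothesis A implies observation C, and C is observed to be
# false, then A is refuted (modus tollens).
def hypothesis_survives(a_implies_c, c_observed):
    """Return True if hypothesis A survives the experiment."""
    if a_implies_c and not c_observed:
        return False  # observing not-C refutes A
    return True       # A survives, though it is not thereby proven

# Phlogiston: the theory predicts mass loss on cooling (A implies C),
# but no mass loss was measured (not C), so the theory is refuted.
print(hypothesis_survives(a_implies_c=True, c_observed=False))  # False
```

Note that a passed test only means the hypothesis survives, not that it is proven — the asymmetry the comment is pointing at.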
8[anonymous]8yI see. You're right that philosophers pretty much never do anything like that. Except experimental philosophers, but thus far most of that stuff is just terrible. "In the framework that I'm working in..." That's a good framework with which to approach any philosophical text, including and especially the Platonic dialogues. I just wanted to stress the fact that the dialogues aren't treatises presented in a funny way. You're supposed to argue with Socrates, against him, yell at his interlocutors, try to patch up the arguments with premises of your own. It's very different from, say, Aristotle or Kant or whatever, where it's a guy presenting a theory. Would you mind if I go on for a bit? I have thoughts on this, but I don't quite know how to present them briefly. Anyway: Students of Physics should go into a Physics classroom or book with an open mind. They should be ready to learn new things about the world, often surprising things (relative to their naive impressions) and should often try to check their prejudices at the door. None of us are born knowing physics. It's something we have to go out and learn. Philosophy isn't like that. The right attitude walking into a philosophy classroom is irritation. It is an inherently annoying subject, and its practitioners are even worse. You can't learn philosophy, and you can't become an expert at it. You can't even become good at it. Being a philosopher is no accomplishment whatsoever. You can just do philosophy, and anyone can do it. Intelligence is good, but it can be a hindrance too, same with education. Doing philosophy means asking questions about things to which you really ought to already know the answers, like the difference between right and wrong, whether or not you're in control of your actions, what change is, what existing is, etc. Philosophy is about asking questions to which we ought to have the answers, but don't. We do philosophy by talking to each other.
If that means running an experiment, good. If th
0BerryPick68yThe second part of your post is terrific. :)
0whowhowho8yBut there is a mini-premise, inference and mini-conclusion inside every "hypothesis-implication pair".
0BerryPick68yI'm curious as to why you referenced Rawls's work in this context. It's not apparent to me how Justice as Fairness is relevant here.
0ddxxdd8yI referenced him because I recall that he comes to a very strong conclusion- that a moral society should have agreed-upon laws based on the premise of the "original position". He was the first philosopher that came to mind when I was trying to think of examples of a hard statement that is neither a "proposition" to be explored, nor the conclusion from an observable fact.
0BerryPick68yI mean, I'm pretty sure his conclusion is a "proposition." It has premises, and I could construct it logically if you wanted. In fact, I don't understand his position to be "that a moral society should have agreed-upon laws" at all, but rather his use of the original position is an attempt to isolate and discover the principles of distributive justice, and that's really his bottom line.
1JonathanLivengood8yInteresting piece. I was a bit bemused by this, though: Problematically for the story, Plato died around 347 BCE, and Archimedes wasn't born until 287 BCE -- sixty years later.
0BerryPick68yThank you for an awesome read. :)
0whowhowho8yscience uses logical rules of inference. Does science take them as self-evident? Or does it test them? And can it test them without assuming them?
0jsalvatier8y(whisper: Wei Lai should be Wei Dai)
0buybuydandavis8yNope. Even if one grants objective meaning to a unique interpersonal aggregate of suffering (and I don't), it's just wrong. Sometimes you want people to suffer. For example, if one fellow caused all the suffering of the rest, moving him to less suffering than everyone else would be a move to a worse universe. EDIT: I didn't mean "you" to indicate everyone. Sometimes I want people to suffer, and think that in my hypothetical, the majority of mankind would feel the same, and choose the same, if it were in their power.
3ddxxdd8y...because doing so would create incentive to not cause suffering to others. In the long run, that would result in less universal suffering overall. Isn't this correct?
1buybuydandavis8yNo, that's not my motivation at all. That's not my because. It's just vengeance on my part. Even if one regarded the design of vengeance as an evolutionary adaptation, I don't think that vengeance minimizes suffering, it punishes infractions against values. At that level, it's not about minimizing suffering either, it's about evolutionary fitness.
1Solvent8yYeah, I'm pretty sure I (and most LWers) don't agree with you on that one, at least in the way you phrased it.
-2buybuydandavis8yYou think they'd prefer that the guy that caused everyone else in the universe to suffer didn't suffer himself?
4Solvent8yHere's an old Eliezer quote on this: It's pretty hard to argue about this if our moral intuitions disagree. But at least, you should know that most people on LW disagree with you on this intuition. EDIT: As ArisKatsaris points out, I don't actually have any source for the "most people on LW disagree with you" bit. I've always thought that not wanting harm to come to anyone as an instrumental value was a pretty obvious, standard part of utilitarianism, and 62% of LWers are consequentialist, according to the 2012 survey. The post "Policy Debates Should Not Appear One-Sided" is fairly highly regarded, and it espouses a related view, that people don't deserve harm for their stupidity. Also, what those people would prefer isn't necessarily what our moral system should prefer; humans are petty and short-sighted.
0Eugine_Nier8yWhat do you mean by "utilitarianism"? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses "total happiness" as a utility function. This sentence appears to be designed to confuse the two meanings. That is most definitely not the main point of that post.
0Solvent8yYeah, my mistake. I'd never run across any other versions of consequentialism apart from utilitarianism (except for Clippy, of course). I suppose caring only for yourself might count? But do you seriously think that the majority of those consequentialists aren't utilitarian?
0Eugine_Nier8yWell, even Eliezer's version of consequentialism [http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/] isn't simple utilitarianism for starters.
0Solvent8yIt's a kind of utilitarianism. I'm including act utilitarianism and desire utilitarianism and preference utilitarianism and whatever in utilitarianism.
0Eugine_Nier8yOk, what is your definition of "utilitarianism"?
0ArisKatsaris8y[citation needed]
0Solvent8yI edited my comment to include a tiny bit more evidence.
0buybuydandavis8yThank you, that's a good start. Yes, I had concluded that EY was anti-retribution. Hadn't concluded that he had carried the day on that point. I don't think vengeance and retribution are "ideas" that people had to come up with - they're central moral motivations. "A social preference for which we punish violators" gets at 80% of what morality is about. Some may disagree about the intuition, but I'd note that even EY had to "renounce" all hatred, which implies to me that he had the impulse for hatred (retribution, in this context) in the first place. This seems like it has the makings of an interesting poll question.
0Solvent8yI agree. Let's do that. You're a consequentialist, right? I'd phrase my opinion as "I have terminal value for people not suffering, including people who have done something wrong. I acknowledge that sometimes causing suffering might have instrumental value, such as imprisonment for crimes." How do you phrase yours? If I were to guess, it would be "I have a terminal value which says that people who have caused suffering should suffer themselves." I'll make a Discussion post about this after I get your refinement of the question?
2ArisKatsaris8yI'd suggest the following two phrasings: * I place terminal value on retribution (inflicting suffering on the causers of suffering), at least for some of the most egregious cases. * I do not place terminal value on retribution, not even for the most egregious cases (e.g. mass murderers). I acknowledge that sometimes it may have instrumental value. Perhaps also add a third choice: * I think I place terminal value on retribution, but I would prefer it if I could self-modify so that I wouldn't.
4Oscar_Cunningham8yI would, all else being equal. Suffering is bad.
0[anonymous]8yThat also applies to literary criticism: Wulky Wilkinsen shows colonial alienation / Authors who show colonial alienation are post-utopians // Wulky Wilkinsen is a post-utopian.

I like this post, and here is some evidence supporting your fear that some people may over-use the morality=logic metaphor, i.e., copy too many anticipations about how logical reasoning works over to their anticipations about how moral reasoning works... The comment is already downvoted to -2, suggesting the community realizes this (please don't downvote it further so as to over-punish the author), but the fact that someone made it is evidence that your point here is a valuable one.

http://lesswrong.com/lw/g0e/narrative_selfimage_and_selfcommunication/83ag

The practice of moral philosophy doesn't much resemble the practice of mathematics. Mainly because in moral philosophy we don't know exactly what we're talking about when we talk about morality. In mathematics, particularly since the 20th century, we can eventually precisely specify what we mean by a mathematical object, in terms of sets.

"Morality is logic" means that when we talk about morality we are talking about a mathematical object. The fact that the only place in our mind the reference to this object is stored is our intuition is what make... (read more)

0Wei_Dai8yHow does one go about defining this mathematical object, in principle? Suppose you were a superintelligence and could surmount any kind of technical difficulty, and you wanted to define a human's morality precisely as a mathematical object, how would you do it?
0nshepperd8yI don't really know the answer to that question. In principle, you start with a human brain, and extract from it somehow a description of what it means when it says "morality". Presumably involving some kind of analysis of what would make the human say "that's good!" or "that's bad!", and/or of what computational processes inside the brain are involved in deciding whether to say "good" or "bad". The output is, in theory, a function mapping things to how much they match "good" or "bad" in your human's language. The 'simple' solution, of just simulating what your human would say after being exposed to every possible moral argument, runs into trouble with what exactly constitutes an argument—if a UFAI can hack your brain into doing terrible things just by talking to you, clearly not all verbal engagement can be allowed—and also more mundane issues like our simulated human going insane from all this talking.
1Wei_Dai8ySuppose the "simple" solution doesn't have the problems you mention. Somehow we get our hands on a human that doesn't have security holes and can't go insane. I still don't think it works. Let's say you are trying to do some probabilistic reasoning about the mathematical object "foobar" and the definition of it you're given is "foobar is what X would say about 'foobar' after being exposed to every possible argument concerning 'foobar'", where X is an algorithmic description of yourself. Well, as soon as you realize that X is actually a simulation of you, you can conclude that you can say anything about 'foobar' and be right. So why bother doing any more probabilistic reasoning? Just say anything, or nothing. What kind of probabilistic reasoning can you do beyond that, even if you wanted to?
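Wei_Dai's worry here can be made concrete with a toy sketch (an editorial illustration, not from the thread; the names are made up): if 'foobar' is defined solely as whatever the simulated X says about 'foobar', then every possible answer verifies itself, and the definition constrains nothing.

```python
# Toy sketch: X realizes it is the very simulation referenced by the
# definition "foobar = whatever X says about 'foobar'", so it is
# free to output anything at all.
def X(chosen_answer):
    return chosen_answer  # X just says whatever it decides to say

def definition_satisfied(answer):
    # Check a candidate answer against the self-referential definition
    # "foobar = what X says about 'foobar'".
    return answer == X(answer)

# Every candidate answer is trivially self-verifying.
candidates = ["{1, 2, 3}", "{4, 5, 6}", "anything whatsoever"]
print(all(definition_satisfied(a) for a in candidates))  # True
```

The degenerate fixed point is the point: a definition that any answer satisfies gives a reasoner nothing to be probabilistically uncertain about.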
5nshepperd8yI think you're collapsing some levels here, but it's making my head hurt to think about it, having the definition-deriver and the subject be the same person. Making this concrete: let 'foobar' refer to the set {1, 2, 3} in a shared language used by us and our subject, Alice. Alice would agree that it is true that "foobar = what X would say about 'foobar' after being exposed to every possible argument concerning 'foobar'" where X is some algorithmic description of Alice. She would say something like "foobar = {1, 2, 3}, X would say {1, 2, 3}, {1, 2, 3} = {1, 2, 3} so this all checks out." Clearly then, any procedure that correctly determines what X would say about 'foobar' should result in the correct definition of foobar, namely {1, 2, 3}. This is what theoretically lets our "simple" solution work. However, Alice would not agree that "what X would say about 'foobar' after being exposed to every possible argument concerning 'foobar'" is a correct definition of 'foobar'. The issue is that this definition has the wrong properties when we consider counterfactuals concerning X. It is in fact the case that foobar is {1, 2, 3}, and further that 'foobar' means {1, 2, 3} in our current language, as stipulated at the beginning of this thought experiment. If-counterfactually X would say '{4, 5, 6}', foobar is still {1, 2, 3}, because what we mean by 'foobar' is {1, 2, 3} and {1, 2, 3} is {1, 2, 3} regardless of what X says. Having written that, I now think I can return to your question. The answer is that firstly, by replacing the true definition "foobar = {1, 2, 3}" with "foobar is what X would say about 'foobar' after being exposed to every possible argument concerning 'foobar'" in the subject's mind, you have just deleted the only reference to foobar that actually exists in the thought experiment. The subject has to reason about 'foobar' using their built in definition, since that is the only thing that actually points directly to the target object. 
Secondly, as descr
0Wei_Dai8yRight, so my point is that if your theory (that moral reasoning is probabilistic reasoning about some mathematical object) is to be correct, we need a definition of morality as a mathematical object which isn't "what X says after considering all possible moral arguments". So what could it be then? What definition Y can we give, such that it makes sense to say "when we reason about morality, we are really doing probabilistic reasoning about the mathematical object Y"? Secondly, until we have a candidate definition Y at hand, we can't show that moral reasoning really does correspond to probabilistic logical reasoning about Y. (And we'd also have to first understand what "probabilistic logical reasoning" is.) So, at this point, how can we be confident that moral reasoning does correspond to probabilistic logical reasoning about anything mathematical, and isn't just some sort of random walk or some sort of reasoning that's different from probabilistic logical reasoning?
2nshepperd8yUnfortunately I doubt [http://wiki.lesswrong.com/wiki/Complexity_of_value] I can give you a short direct definition of morality. However if such a mathematical object exists, "what X says after considering all possible moral arguments" should be enough to pin it down (disregarding the caveats to do with our subject going insane, etc). Well, I think it safe to assume I mean something by moral talk, otherwise I wouldn't care so much about whether things are right or wrong. I must be talking about something, because that something is wired into my decision system. And I presume this something is mathematical, because (assuming I mean something by "P is good") you can take the set of all good things, and this set is the same in all counterfactuals. Roughly speaking. It is, of course, possible that moral reasoning isn't actually any kind of valid reasoning, but does amount to a "random walk" of some kind, where considering an argument permanently changes your intuition in some nondeterministic way so that after hearing the argument you're not even talking about the same thing you were before hearing it. Which is worrying. Also it's possible that moral talk in particular is mostly signalling intended to disguise our true values which are very similar but more selfish. But that doesn't make a lot of difference since you can still cash out your values as a mathematical object of some sort.
0Wei_Dai8yYes, exactly. This seems to me pretty likely to be the case for humans. Even if it's actually not the case, nobody has done the work to rule it out yet (has anyone even written a post making any kind of argument that it's not the case?), so how do we know that it's not the case? Doesn't it seem to you that we might be doing some motivated cognition in order to jump to a comforting conclusion?
0HalMorris8yI know you're not arguing for this but I can't help noting the discrepancy between the simplicity of the phrase "all possible moral arguments", and what it would mean if it can be defined at all. But then many things are "easier said than done".

I think the term you are looking for is "formal" or "an algebra", not "logic".

You're mischaracterizing the quote that your post replies to. EY claims that he is attempting to comprehend morality as a logical, not a physical thing, and he's trying to convince readers to do the same. You're evidently thinking of morality as a physical thing, something essentially derived from the observation of brains. You're restating the position his post responds to, without strengthening it.

-2timtyler8yThe argument in that post [http://lesswrong.com/lw/fv3/by_which_it_may_be_judged/] seems incoherent to me. In the conventional natural sciences, what is moral is the subject matter of biology. This employs game theory and evolutionary theory (i.e. logic), but also considers the laws of physics and the local state of the universe to explain existing moral systems. For instance, consider the question of whether it is wrong to drive on the left-hand side of the road. That isn't logic, it depends on the local state of the universe. Two advanced superintelligences which had evolved independently could easily find themselves in disagreement over such issues. This is an example of spontaneous symmetry breaking [http://en.wikipedia.org/wiki/Spontaneous_symmetry_breaking]. It is one of the factors which explains how arbitrarily-advanced agents can still disagree on what the right thing to do is.