Some perceive morality as a fixed given, independent of our whims, about which we form changeable beliefs. This view's great advantage is that it accords with moral normality at the level of everyday conversation: it is the intuition underlying our everyday notions of "moral error", "moral progress", "moral argument", or "just because you want to murder someone doesn't make it right".
Others choose to describe morality as a preference—as a desire in some particular person; nowhere else is it written. This view's great advantage is that it has an easier time living with reductionism—fitting the notion of "morality" into a universe of mere physics. It has an easier time at the meta level, answering questions like "What is morality?" and "Where does morality come from?"
Both intuitions must contend with seemingly impossible questions. For example, Moore's Open Question: Even if you come up with some simple answer that fits on a T-shirt, like "Happiness is the sum total of goodness!", you would still need to argue for the identity. It isn't instantly obvious to everyone that goodness is happiness, which seems to indicate that happiness and rightness were different concepts to start with. What was that second concept, then, originally?
Or if "Morality is mere preference!" then why care about human preferences? How is it possible to establish any "ought" at all, in a universe seemingly of mere "is"?
So what we should want, ideally, is a metaethic that:
- Adds up to moral normality, including moral errors, moral progress, and things you should do whether you want to or not;
- Fits naturally into a non-mysterious universe, postulating no exception to reductionism;
- Does not oversimplify humanity's complicated moral arguments and many terminal values;
- Answers all the impossible questions.
I'll present that view tomorrow.
Today's post is devoted to setting up the question.
Consider "free will", already dealt with in these posts. On one level of organization, we have mere physics, particles that make no choices. On another level of organization, we have human minds that extrapolate possible futures and choose between them. How can we control anything, even our own choices, when the universe is deterministic?
To dissolve the puzzle of free will, you have to simultaneously imagine two levels of organization while keeping them conceptually distinct. To get it on a gut level, you have to see the level transition—the way in which free will is how the human decision algorithm feels from inside. (Being told flatly "one level emerges from the other" just relates them by a magical transition rule, "emergence".)
For free will, the key is to understand how your brain computes whether you "could" do something—the algorithm that labels reachable states. Once you understand this label, it does not appear particularly meaningless—"could" makes sense—and the label does not conflict with physics following a deterministic course. If you can see that, you can see that there is no conflict between your feeling of freedom, and deterministic physics. Indeed, I am perfectly willing to say that the feeling of freedom is correct, when the feeling is interpreted correctly.
In the case of morality, once again there are two levels of organization, seemingly quite difficult to fit together:
On one level, there are just particles without a shred of should-ness built into them—just like an electron has no notion of what it "could" do—or just like a flipping coin is not uncertain of its own result.
On another level is the ordinary morality of everyday life: moral errors, moral progress, and things you ought to do whether you want to do them or not.
And in between, the level transition question: What is this should-ness stuff?
Award yourself a point if you thought, "But wait, that problem isn't quite analogous to the one of free will. With free will it was just a question of factual investigation—look at human psychology, figure out how it does in fact generate the feeling of freedom. But here, it won't be enough to figure out how the mind generates its feelings of should-ness. Even after we know, we'll be left with a remaining question—is that how we should calculate should-ness? So it's not just a matter of sheer factual reductionism, it's a moral question."
And if you've been reading along this whole time, you know the answer isn't going to be, "Look at this fundamentally moral stuff!"
Nor even, "Sorry, morality is mere preference, and right-ness is just what serves you or your genes; all your moral intuitions otherwise are wrong, but I won't explain where they come from."
The Fake Utility Functions sequence was directed particularly at the problem of oversimplified moral answers.
The sequence on words also showed us how to play Rationalist's Taboo, and Replace the Symbol with the Substance. What is "right", if you can't say "good" or "desirable" or "better" or "preferable" or "moral" or "should"? What happens if you try to carry out the operation of replacing the symbol with what it stands for?
And the sequence on quantum physics, among other purposes, was there to teach the fine art of not running away from Scary and Confusing Problems, even if others have failed to solve them, even if great minds failed to solve them for generations. Heroes screw up, time moves on, and each succeeding era gets an entirely new chance.
If you're just joining us here (Belldandy help you) then you might want to think about reading all those posts before, oh, say, tomorrow.
If you've been reading this whole time, then you should think about trying to dissolve the question on your own, before tomorrow. It doesn't require many insights beyond those already provided.
Part of The Metaethics Sequence
Next post: "The Meaning of Right"
Previous post: "Changing Your Metaethics"