There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.
There is another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. In the PhilPapers survey, for example, 56.3% of philosophers accept or lean towards realism, while only 27.7% accept or lean towards anti-realism.
http://philpapers.org/surveys/results.pl
Given this, and given comments from people like me in the intersection of the philosophical and LW communities who can point out that it isn't a case of stupid philosophers supporting realism and all the really smart ones supporting anti-realism, there is no way that the LW community should have anything like the confidence that it does on this point.
Moreover, I should point out that most of the rea...
Among target faculty listing meta-ethics as their area of study, moral realism's lead is much smaller: 42.5% for moral realism and 38.2% against.
Looking further through the PhilPapers data, a big chunk of the belief in moral realism seems to be coupled with theism, while anti-realism is coupled with atheism and knowledge of science. The more a field is taught at Catholic or other religious colleges (medieval philosophy, bread-and-butter courses like epistemology and logic), the more moral realism; philosophers of science go the other way. Philosophers of religion are 87% moral realist, while philosophers of biology are 55% anti-realist.
In general, only 61% of respondents "accept" rather than merely lean towards atheism, and a quarter don't even lean towards atheism. Among meta-ethics specialists, 70% accept atheism, indicating that atheism and subject knowledge both predict moral anti-realism. If we restricted ourselves to the 70% of meta-ethics specialists who also accept atheism, I would bet at odds of at least 3:1 that moral anti-realism comes out on top.
Since the PhilPapers team will be publishing correlations between questions, such a bet should be susceptible to objective...
In general, those interquestion correlations should help pinpoint any correct contrarian cluster.
This is why I put more weight on Toby's personal position than on the majority expert position. As far as I know, Toby is in the same contrarian cluster as me, yet he seems to give much more weight to moral realism (and presumably not the Yudkowskian kind either) than I do. Like ciphergoth, I wish he would tell us which arguments in favor of realism, or against anti-realism, he finds persuasive.
Atheism doesn't get 80% support among philosophers, and most philosophers of religion reject it because of a selection effect where few wish to study what they believe to be non-subjects (just as normative and applied ethicists are more likely to reject anti-realism).
Many posts here strongly dismiss [moral realism and simplicity], effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. [...] For non-experts, I really can't see how one could even get to 50% confidence in anti-realism, much less the kind of 98% confidence that is typically expressed here.
One person's modus ponens is another's modus tollens. You say that professional philosophers' disagreement implies that antirealists shouldn't be so confident, but my confidence in antirealism is such that I am instead forced to downgrade my confidence in professional philosophers. I defer to experts in mathematics and science, where I can at least understand something of what it means for a mathematical or scientific claim to be true. But on my current understanding of the world, moral realism just comes out as nonsense. I know what it means for a computation to yield this-and-such a result, or for a moral claim to be true with respect to such-and-these moral premises that might be held by some agent. But what does it mean for a moral claim to be simply true, full ...
But what does it mean for a moral claim to be simply true, full stop?
Well, in my world, it means that the premises are built into saying "moral claim"; that the subject matter of "morality" is the implications of those premises, and that moral claims are true when they make true statements about these implications. If you wanted to talk about the implications of other premises, it wouldn't be the subject matter of what we name "morality". Most possible agents (e.g. under a complexity-based measure of mind design space) will not be interested in this subject matter - they won't care about what is just, fair, freedom-promoting, life-preserving, right, etc.
This doesn't contradict what you say, but it's a reason why someone who believes exactly everything you do might call themselves a moral realist.
In my view, people who look at this state of affairs and say "There is no morality" are advocating that the subject matter of morality is a sort of extradimensional ontologically basic agent-compelling-ness, and that, having discovered this hypothesized transcendental stuff to be nonexistent, we have discovered that there is no morality. In cont...
Yes, but I think that my way of talking about things (agents have preferences, some of which are of a type we call moral, but there is no objective morality) is more useful than your way of talking about things (defining "moral" as a predicate referring to a large set of preferences), because your formulation (deliberately?) makes it difficult to talk about humans with different moral preferences, a possibility you don't seem to take very seriously but which I think is very likely.
If the UFAI convinced you of anything that wasn't true during the process - outright lies about reality or math - or biased sampling of reality producing a biased mental image, like a story that only depicts one possibility where other possibilities are more probable - then we have a simple and direct critique.
If the UFAI never deceived you in the course of telling the story, but simple measures over the space of possible moral arguments you could hear and moralities you subsequently develop, produce a spread of extrapolated volitions "almost all" of whom think that the UFAI-inspired-you has turned into something alien and unvaluable - if it flew through a persuasive keyhole to produce a very noncentral future version of you who is disvalued by central clusters of you - then it's the sort of thing a Coherent Extrapolated Volition would try to stop.
See also #1 on the list of New Humane Rights: "You have the right not to have the spread in your volition optimized away by an external decision process acting on unshared moral premises."
You have the right not to have the spread in your volition optimized away by an external decision process acting on unshared moral premises.
You have the right to a system of moral dynamics complicated enough that you can only work it out by discussing it with other people who share most of it.
You have the right to be created by a creator acting under what that creator regards as a high purpose.
You have the right to exist predominantly in regions where you are having fun.
You have the right to be noticeably unique within a local world.
You have the right to an angel. If you do not know how to build an angel, one will be appointed for you.
You have the right to exist within a linearly unfolding time in which your subjective future coincides with your decision-theoretical future.
You have the right to remain cryptic.
-- Eliezer Yudkowsky
(originally posted sometime around 2005, probably earlier)
What about the least convenient world where human meta-moral computation doesn't have the coherence that you assume? If you found yourself living in such a world, would you give up and say no meta-ethics is possible, or would you keep looking for one? If it's the latter, and assuming you find it, perhaps it can be used in the "convenient" worlds as well?
To put it another way, it doesn't seem right to me that the validity of one's meta-ethics should depend on a contingent fact like that. Although perhaps instead of just complaining about it, I should try to think of some way to remove the dependency...
(We also disagree about the likelihood that the coherence assumption holds, but I think we went over that before, so I'm skipping it in the interest of avoiding repetition.)
In cases like that, I am perfectly willing to say that we have discovered that the subject matter of "fairies" is a coherent, well-formed concept that turns out to have an empty referent. The closet is there, we opened it up and looked, and there was nothing inside. I know what the world ought to look like if there were fairies, or alternatively no fairies, and the world looks like it has no fairies.
Toby, I spent a while looking into the meta-ethical debates about realism. When I thought moral realism was a likely option on the table, I meant:
Strong Moral Realism: All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory distinction, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.
But modern philosophers who call themselves "realists" don't mean anything nearly this strong. They mean that there are moral "facts". But what use is it if the paperclipper agrees that it is a "moral fact" that human rights ought to be respected, but then goes on to say that it has no desire to act according to the prescriptions of moral facts, and the moral facts can't somehow compel it?
The force of "scientific facts" is that they constrain the world. If an alien wants to get from Andromeda to here, it has to take at least 2.5 million years, the physical fact of the finite speed of light literally stops the alien from getting here sooner, whether it likes it...
I strongly agree with Roko that something like his strong version is the interesting version. What matters is what range of creatures will come to agree on outcomes; it matters much less what range of creatures think their desires are "right" in some absolute sense, if they don't think that will eventually be reflected in agreement.
I am a moral cognitivist. Statements like "ceteris paribus, happiness is a good thing" have truth-values. But to the vast majority of agents, even those which maximize coherent utility functions using Bayesian belief updating (that is, rational agents) or which are approximately rational, such moral statements are simply not compelling, or even interesting enough to compute the truth-value of.
AFAICT the closest official term for what I am is "analytic descriptivist", though I believe I can offer a better defense of analytic descriptivism than what I've read so far.
EDIT: Looking up moral naturalism shows that Frank Jackson's analytic descriptivism aka moral functionalism is listed as a form of moral naturalism: http://plato.stanford.edu/entries/naturalism-moral/#JacMorFun
Note similarity to "Joy in the Merely Good".
From your SEP link on Moral Realism: "It is worth noting that, while moral realists are united in their cognitivism and in their rejection of error theories, they disagree among themselves not only about which moral claims are actually true but about what it is about the world that makes those claims true."
I think this is good cause for breaking up that 56%. We should not take them as a block merely because (one component of) their conclusions match, if their justifications are conflicting or contradictory. It could still be the case that 90% of expert philosophers reject any given argument for moral realism. (This would be consistent with my view that those arguments are silly.)
I may have noticed this because the post on Logical Rudeness is fresh in my mind.
Disagreeing positions don't add up just because they share a feature. On the contrary, if people offer lots of different contradictory reasons for a conclusion (even if each individual has consistent beliefs), it is a sign that they are rationalizing their position.
If 2/3 of experts support proposition G (1/3 because of reason A while rejecting B, 1/3 because of reason B while rejecting A) and the remaining 1/3 reject both A and B, then a majority reject A and a majority reject B. G should not be treated as a reasonable majority view (a small tally sketch follows below).
This should be clear if A is the Koran and B is the Bible.
If we're going to add up expert views, we need to add up what experts consider important about a question, not features of their conclusions.
You shouldn't add up two experts if they would consider each other's arguments irrational. That's ignoring their expertise.
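As a concrete illustration of that arithmetic, here is a minimal sketch with a hypothetical three-expert population chosen only to match the 2/3 vs. 1/3 split above (G, A and B are just the placeholders from the earlier comment):

```python
# Toy tally: a majority accepts the conclusion G, yet each individual
# justification (A, B) is rejected by a majority. The population is made up
# purely for illustration.
experts = [
    {"G": True,  "A": True,  "B": False},  # accepts G because of A, rejects B
    {"G": True,  "A": False, "B": True},   # accepts G because of B, rejects A
    {"G": False, "A": False, "B": False},  # rejects G and both justifications
]

for claim in ("G", "A", "B"):
    accepts = sum(e[claim] for e in experts)
    print(f"{claim}: {accepts}/{len(experts)} accept")

# Prints:
#   G: 2/3 accept  (majority view)
#   A: 1/3 accept  (majority rejects this justification)
#   B: 1/3 accept  (majority rejects this justification)
```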
In metaethics, there are typically very good arguments against all known views, and only relatively weak arguments for each of them. For anything in philosophy, a good first stop is the Stanford Encyclopedia of Philosophy. Here are some articles on the topic at SEP:
I think the best book to read on metaethics is:
To head off a potential objection, this does assume that our values interact in an additive way.
...and this is an assumption of simplicity of value. That we can see individual "values" only reflects the vague way in which we can perceive our preference. Some "values" dictate the ways in which other "values" should play together, so there is no easy way out, no "additive" or "multiplicative" clean decomposition.
I really don't want us to go there, here; I think it will reduce the quality of the site significantly. At the moment I can follow Recent Comments and find quite a few little nuggets of gold. If we get into arguing with people like this, the good content will be harder to find.
Your sesquipedalian obscurantism may fool your usual audience but you won't find it very successful here.
A possible list of human values which are scalable:
Safety - we prefer that no sources of dangers exist anywhere in the universe
Self-replication - (at least some humans) prefer to have as many descendants as possible and would be happy to tile the universe with their own grandchildren.
Power - a human often wants to become a king or a god, so the entire universe must be under his control.
Life extension - some want immortality
Be the first - one must ensure that he is better than any other being in the universe
Exploration - obviously, scalable
Compassion to other beings.
You were dropping a lot of unfamiliar terminology, the end result of which was failing utterly to communicate what your point was. If you want us to understand your point, you're going to have to unpack most of your sentences.
(easy example: what does Christian NeoRationalist mean?)
"rather there's a tendency to assume that complexity of value must lead to complexity of outcome"
The main problem I see here is the other way around:
There's a tendency to assume that complexity of outcome must have been produced by complexity of value.
AFAICS, it is only members of this community who think this way. Nobody else seems to have a problem with the idea of goals that can be concisely expressed - like "trying to have as many offspring as possible" - leading to immense diversity and complexity.
This is a facet of an even mor...
Does any existing decision theory make an attempt to decide based on existing human values? How would one begin to put human values into rigorous mathematical form?
I've convinced a few friends that the most likely path to Strong AI (i.e. intelligence explosion) is a bunch of people sitting in a room doing math for 10 years. But that's a lot of math before anyone even begins to start plugging in the values.
I suppose it does make sense for us to talk in English about what all of these things mean, so that in 10+ years they can be more easily translated into...
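Not an answer to the decision-theory question, but as a bare-bones illustration of the mathematical form such a formalization usually starts from, here is a hypothetical sketch of "values" as a utility function over outcomes. The value names, weights, and additive combination below are invented for illustration and stand in for no actual proposal:

```python
# Hypothetical sketch: "values" as a weighted-sum utility function over outcomes.
# The value names and weights are invented; real proposals for extracting or
# extrapolating human values are far more involved than this toy form.
from typing import Dict


def utility(outcome_scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Score an outcome as a weighted sum of per-value scores (assumes additivity)."""
    return sum(w * outcome_scores.get(value, 0.0) for value, w in weights.items())


weights = {"survival": 0.4, "freedom": 0.3, "novelty": 0.2, "fairness": 0.1}
outcome = {"survival": 0.9, "freedom": 0.5, "novelty": 0.2, "fairness": 0.7}
print(utility(outcome, weights))  # 0.62
```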
I've struggled with the concept of how an orgasmium-optimizing AI could come about, or a paperclipper or a bucketmaker or any of the others, but this clarifies things. It's the programmer who passes the values on to the AI that is the cause; it's not necessarily going to be an emergent property.
That makes things easier, I believe, as it means the code for the seed AI needs to be screened for maximization functions.
lol, well I can see that you are no closer to AI than you were last year. Do you have a definition of value yet? Life? Complexity?
I thought not.
Respectfully, W
She, on the other hand, had no clue about what I was trying to express.
The commonality in these situations is you.
One more reason why I think a Faustian singleton is the most likely final outcome, even if FAI succeeds. Unlike material or social desires, curiosity can scale endlessly, even to the point where humans become willing to suspend their individuality for the sake of computational efficiency.
Re: "the future of the universe will turn out to be rather simple"
You do realise that filling the universe with orgasmium involves interstellar and intergalactic travel, stellar farming, molecular nanotechnology, coordinating stars to leap between galaxies, mastering nuclear fusion, conquering any other civilisations it might meet - and many other high-tech wonders?
How is any of that "simple"? Do you just mean "somewhat less complex than it could conceivably be"?
If it were the case that only a few of our values scale, then we can potentially obtain almost all that we desire by creating a superintelligence with just those values.
Can we really expect a superintelligence to stick with the values we give it? Our own values change over time, sometimes without any external stimulus, just from internal reflection. I don't see how we can bound a superintelligence without doing more computation than we expect it to do in its lifetime.
Complexity of value is the thesis that our preferences, the things we care about, don't compress down to one simple rule, or a few simple rules. To review why it's important (by quoting from the wiki):
I certainly agree with both of these points. But I worry that we (at Less Wrong) might have swung a bit too far in the other direction. No, I don't think that we overestimate the complexity of our values, but rather there's a tendency to assume that complexity of value must lead to complexity of outcome, that is, agents who faithfully inherit the full complexity of human values will necessarily create a future that reflects that complexity. I will argue that it is possible for complex values to lead to simple futures, and explain the relevance of this possibility to the project of Friendly AI.
The easiest way to make my argument is to start by considering a hypothetical alien with all of the values of a typical human being, but also an extra one. His fondest desire is to fill the universe with orgasmium, which he considers to have orders of magnitude more utility than realizing any of his other goals. As long as his dominant goal remains infeasible, he's largely indistinguishable from a normal human being. But if he happens to pass his values on to a superintelligent AI, the future of the universe will turn out to be rather simple, despite those values being no less complex than any human's.
The above possibility is easy to reason about, but perhaps does not appear very relevant to our actual situation. I think that it may be, and here's why. All of us have many different values that do not reduce to each other, but most of those values do not appear to scale very well with available resources. In other words, among our manifold desires, there may only be a few that are not easily satiated when we have access to the resources of an entire galaxy or universe. If so (and assuming we aren't wiped out by an existential risk or fall into a Malthusian scenario), the future of our universe will be shaped largely by those values that do scale. (I should point out that in this case the universe won't necessarily turn out to be mostly simple. Simple values do not necessarily lead to simple outcomes either.)
Now if we were rational agents who had perfect knowledge of our own preferences, then we would already know whether this is the case or not. And if it is, we ought to be able to visualize what the future of the universe would look like, if we had the power to shape it according to our desires. But I find myself uncertain on both questions. Still, I think this possibility is worth investigating further. If it were the case that only a few of our values scale, then we can potentially obtain almost all that we desire by creating a superintelligence with just those values. And perhaps this can be done manually, bypassing an automated preference extraction or extrapolation process with its associated difficulties and dangers. (To head off a potential objection, this does assume that our values interact in an additive way. If there are values that don't scale but interact nonlinearly (multiplicatively, for example) with values that do scale, then those would need to be included as well.)
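To make the role of that additivity assumption concrete, here is a toy formalization (the notation is introduced here purely for illustration): split the available resources R between a value that scales and one that saturates, and compare an additive combination with a multiplicative one.

```latex
% Toy formalization of the additivity assumption (notation introduced here for
% illustration only). R = total resources, r = resources devoted to the
% scalable value, R - r = resources devoted to the satiable one.

% Additive case: the satiable term is bounded by a constant C, so for large R
% the optimum is dominated by the scalable term; the satiable values can be
% met "on the side" and then safely ignored when shaping the far future.
\[
  U_{\mathrm{add}}(r) = V_{\mathrm{scale}}(r) + V_{\mathrm{sat}}(R - r),
  \qquad V_{\mathrm{sat}} \le C .
\]

% Multiplicative case: letting the satiable factor fall toward zero can wipe out
% the whole product, so a value that does not scale still has to be included.
\[
  U_{\mathrm{mult}}(r) = V_{\mathrm{scale}}(r) \cdot V_{\mathrm{sat}}(R - r).
\]
```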