There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental but derived from characteristics of humans, then it can inherit complexity from them without being penalized by Occam's Razor.

There is another pair of views that also go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively assigning them near-zero probability. I want to point out that this is a case of non-experts being very much at odds with ...


The right response to moral realism isn't to dispute its truth but simply to observe that you don't understand the concept.

I mean, imagine someone started going around insisting that some situations were Heret and others were Grovic, but when asked to explain what made a situation Heret or Grovic he simply shrugged and said they were primitive concepts. But you persist, and after observing his behavior for a period of time you work out a principle that perfectly predicts which category he will assign a given situation to, even counterfactually. But when you present ...

Stuart_Armstrong · 10y · 2 points

It depends on the expertise; for instance, if we're talking about systems of axioms, then mathematicians may be those with the most relevant opinions as to whether one system has preference over others. And the idea that a unique system of moral axioms would have preference over all others makes no mathematical sense. If philosophers were espousing the n-realism position ("there are systems of moral axioms that are more true than others, but there will probably be many such systems, most mutually incompatible"), then I would have a hard time arguing against this. But, put quite simply, I dismiss the moral realist position for the moment, as the arguments go like this:

  • 1) There are moral truths that have special status; but these are undefined, and it is even undefined what makes them have this status.
  • 2) These undefined moral truths make a consistent system.
  • 3) This system is unique, according to criteria that are also undefined.
  • 4) Were we to discover this system, we should follow it, for reasons that are also undefined.

There are too many 'undefined's in there. There is also very little philosophical literature I've encountered on 2), 3) and 4), which are at least as important as 1). A lot of the literature on 1) seems to be reducible to linguistic confusion, and (most importantly) different moral realists have different reasons for believing 1), reasons that are often contradictory. From an outsider's perspective, these seem powerful reasons to assume that philosophers are mired in confusion on this issue, and that their opinions are not determinative. My strong mathematical reasons for claiming that there is no "superiority total ordering" on any general collection of systems of axioms clinch the argument for me, pending further evidence.
taw · 10y · 0 points

I don't see in what meaningful sense these people are "experts".

Complexity of Value ≠ Complexity of Outcome

by Wei_Dai · 2 min read · 30th Jan 2010 · 232 comments


Complexity of value is the thesis that our preferences, the things we care about, don't compress down to one simple rule, or a few simple rules. To review why it's important (by quoting from the wiki):

  • Caricatures of rationalists often have them moved by artificially simplified values - for example, only caring about personal pleasure. This becomes a template for arguing against rationality: X is valuable, but rationality says to only care about Y, in which case we could not value X, therefore do not be rational.
  • Underestimating the complexity of value leads to underestimating the difficulty of Friendly AI; and there are notable cognitive biases and fallacies which lead people to underestimate this complexity.

I certainly agree with both of these points. But I worry that we (at Less Wrong) might have swung a bit too far in the other direction. No, I don't think that we overestimate the complexity of our values, but rather there's a tendency to assume that complexity of value must lead to complexity of outcome, that is, agents who faithfully inherit the full complexity of human values will necessarily create a future that reflects that complexity. I will argue that it is possible for complex values to lead to simple futures, and explain the relevance of this possibility to the project of Friendly AI.

The easiest way to make my argument is to start by considering a hypothetical alien with all of the values of a typical human being, but also an extra one. His fondest desire is to fill the universe with orgasmium, which he considers to have orders of magnitude more utility than realizing any of his other goals. As long as his dominant goal remains infeasible, he's largely indistinguishable from a normal human being. But if he happens to pass his values on to a superintelligent AI, the future of the universe will turn out to be rather simple, despite those values being no less complex than any human's.
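The alien's preference structure can be sketched as a toy utility function. This is only an illustration of the argument, not anything from the post itself; all the value names and the weight are hypothetical stand-ins:

```python
# Toy model: an agent whose values are as complex as anyone's, plus one
# dominant term weighted orders of magnitude above everything else.
# Worlds are represented as dicts mapping value names to how much of
# each value is realized. All names and numbers are hypothetical.

def human_utility(world):
    # Stand-in for the full complexity of ordinary human values.
    return sum(world.get(v, 0) for v in ("friendship", "novelty", "beauty"))

def alien_utility(world):
    # Same values, plus an orgasmium term that swamps the rest.
    return human_utility(world) + 1e9 * world.get("orgasmium", 0)

# While orgasmium is infeasible (always 0), the alien ranks feasible
# worlds exactly as a normal human would:
a = {"friendship": 3, "novelty": 1}
b = {"friendship": 1, "beauty": 2}
assert (human_utility(a) > human_utility(b)) == (alien_utility(a) > alien_utility(b))

# But once a superintelligence makes orgasmium feasible, the dominant
# term decides everything, and the optimal world is very simple:
c = {"orgasmium": 1}
assert alien_utility(c) > alien_utility(a)
```

The point of the sketch is that the *function* is no simpler than a human's (it contains all the same terms), yet the *argmax* over achievable worlds is simple once the dominant term becomes feasible.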

The above possibility is easy to reason about, but perhaps does not appear very relevant to our actual situation. I think that it may be, and here's why. All of us have many different values that do not reduce to each other, but most of those values do not appear to scale very well with available resources. In other words, among our manifold desires, there may only be a few that are not easily satiated when we have access to the resources of an entire galaxy or universe. If so, (and assuming we aren't wiped out by an existential risk or fall into a Malthusian scenario) the future of our universe will be shaped largely by those values that do scale. (I should point out that in this case the universe won't necessarily turn out to be mostly simple. Simple values do not necessarily lead to simple outcomes either.)
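The satiation idea above can be made concrete with a toy model, assuming (hypothetically) that a satiable value's utility is bounded in resources while a scaling value's utility keeps growing:

```python
import math

# Toy model: utility of each value as a function of resources r.
# The particular functional forms are illustrative assumptions only.

def satiable(r):
    # A bounded value (e.g. personal comfort): nearly maxed out
    # at modest resource levels, approaching 1 as r grows.
    return 1 - math.exp(-r)

def scaling(r):
    # A value that keeps growing with resources, if slowly.
    return math.log1p(r)

# At small scales both values contribute comparably; at galactic
# scales, almost all utility comes from the value that scales.
for r in (1.0, 1e6, 1e12):
    share = scaling(r) / (satiable(r) + scaling(r))
    print(f"r={r:.0e}: scaling value's share of total utility = {share:.3f}")
```

Under these assumptions the scaling value's share of total utility climbs toward 1 as resources grow, which is the sense in which the universe's large-scale future would be "shaped largely by those values that do scale."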

Now if we were rational agents who had perfect knowledge of our own preferences, then we would already know whether this is the case or not. And if it is, we ought to be able to visualize what the future of the universe will look like, if we had the power to shape it according to our desires. But I find myself uncertain on both questions. Still, I think this possibility is worth investigating further. If it were the case that only a few of our values scale, then we can potentially obtain almost all that we desire by creating a superintelligence with just those values. And perhaps this can be done manually, bypassing an automated preference extraction or extrapolation process with their associated difficulties and dangers. (To head off a potential objection, this does assume that our values interact in an additive way. If there are values that don't scale but interact nonlinearly (multiplicatively, for example) with values that do scale, then those would need to be included as well.)

Whether or not we actually should take this approach would depend on the outcome of such an investigation. Just how much of what we desire can feasibly be obtained this way? And how does the loss of value inherent in this approach compare with the expected loss of value due to potential errors in the extraction/extrapolation process? These are questions worth trying to answer before committing to any particular path, I think.
P.S., I hesitated a bit in posting this, because underestimating the complexity of human values is arguably a greater danger than overlooking the possibility that I point out here, and this post could conceivably be used by someone to rationalize sticking with their "One Great Moral Principle". But I guess those tempted to do so will tend not to be Less Wrong readers, and seeing how I already got myself sucked into this debate, I might as well clarify and expand on my position.