Value isn't just complicated; it's fragile.  There is more than one dimension of human value such that, if that one dimension alone is lost, the Future becomes null: a single blow, and all value shatters.  Not every blow will shatter all value - but more than one possible "single blow" would do so.

Reasoning with a representation of human utility that is a simple continuum from pain to pleasure, as torture vs dust specks does, is exactly such a shattering blow to the complexity of value.

Making moral decisions of such vast scope without understanding the full multidimensionality of human experience and utility is completely irresponsible. An AI using the kind of reasoning found in torture vs specks would probably just wirehead everyone into huge-integer pleasure for eternity.

I don't pretend to know the correct answer to torture vs specks, because I don't have a full understanding of human value, and because I don't understand how to do calculations with hypercomplex numbers.  A friendly AI *has* to take into account the full complexity of our value, not just a one-dimensional continuum, whenever it makes any moral decision.  So only a friendly AI that has correctly extrapolated our values can know with high confidence the best answer to torture vs specks.

 

(edit 1) re: Oscar Cunningham

Why does complexity of value genuinely apply here, rather than just serving as a curiosity stopper? Consequentialist problems come in different difficulty levels. Torture for 5 years vs torture for 50 years is easy: torture is bad, so less torture is less bad. You are comparing amounts of the same thing, and you don't have to understand the complexity of value to do that. Comparing the value of two very different things, like torture and specks, does require you to understand the complexity of value. You can't reduce experiences to integers, because complex value isn't simply an integer.

The intuition that torture must be outweighed by a large enough number of specks is just that: an intuition. You don't know the dynamics of a formal comparison based on a technical understanding of complex value.
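To make concrete what I'm objecting to, here is a deliberately crude sketch of the one-dimensional reasoning (the numbers are arbitrary placeholders, not claims about real disutilities): once everything lives on a single pain-pleasure scale, some finite number of specks always outweighs the torture.

```python
# A crude sketch of one-dimensional utility reasoning (arbitrary placeholder numbers):
speck_disutility = 1e-9     # disutility of one dust speck on a single pain-pleasure scale
torture_disutility = 1e6    # disutility of 50 years of torture on the same scale

# On a single scale, a large enough count of specks always outweighs the torture:
print(torture_disutility / speck_disutility)  # 1e+15, vastly smaller than 3^^^3
```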

[-][anonymous]12y120

On reflection, I should have thought about this a lot more before posting. I felt strongly that the idea of complexity of value says something about thought experiments like torture versus specks, and I kind of went off the rails from there.

What I should have said is simply "There exists an optimal, correct way of understanding and reasoning about human values and we don't know what it is. Our awareness of this lack of understanding should make us less confident about the validity of our consequentialist reasoning - we might be using fundamentally wrong assumptions."

> There exists an optimal, correct way of understanding and reasoning about human values and we don't know what it is.

In what sense "optimal"? And what makes you certain that this is the case?

Your post doesn't say what it is about "torture vs dust specks" in particular that makes complexity of value important. You say it's to do with the problem's "vast scope", but you don't actually do any work connecting this to complexity of value. So you've allowed "Complexity of value!" to become a fully general question-stopper.

[-][anonymous]12y20

You've misinterpreted what is meant by "complexity of value". It does not mean that human utility is multidimensional. It means that the human utility function is very complex.

Specific example:

[freedom, happiness, love, beauty, fun] (that's supposed to be a vector) is not a utility function and cannot be used for decision theory.

Something like (freedom + happiness + love + beauty + fun) or minimum-of(freedom, happiness, love, beauty, fun) is a (somewhat complex) utility function.
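A minimal sketch of that distinction (the value names and numbers here are placeholders I made up, not a real utility function): a bare vector of scores can't rank outcomes by itself; an aggregation rule has to collapse it into a single number first.

```python
# Placeholder scores for one outcome along several value dimensions:
outcome = {"freedom": 0.9, "happiness": 0.2, "love": 0.7, "beauty": 0.5, "fun": 0.6}

# The raw vector alone doesn't rank outcomes...
vector = list(outcome.values())

# ...but an aggregation rule turns it into a scalar that decision theory can use:
additive_utility = sum(outcome.values())     # freedom + happiness + ... + fun
bottleneck_utility = min(outcome.values())   # minimum-of(freedom, happiness, ..., fun)

print(additive_utility)    # 2.9
print(bottleneck_utility)  # 0.2
```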

Having a complex utility function doesn't make torture vs dust specks solve differently.

Don't use the word if the guy just got confused by the word :D

Value is (complex) complicated.

> I don't understand how to do calculations with hypercomplex numbers.

Um, that term doesn't mean very big numbers or anything similar. It has to do with extensions of the complex numbers.
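For reference, a quick sketch of the kind of thing the term does cover (quaternions are the classic example; this snippet is just an illustration of their non-commutative multiplication):

```python
def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(quat_mul(i, j))  # (0, 0, 0, 1)  -> i*j = k
print(quat_mul(j, i))  # (0, 0, 0, -1) -> j*i = -k, so multiplication isn't commutative
```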


Testing some link stuff in a relatively low-voted thread

hyperlinked)

naked: http://en.wikipedia.org/wiki/Ring_(computer_security)

hyperlinked)

You can test stuff privately by sending a message to mwengler. You can read it by clicking your envelope but others won't be able to see it.

To make a paren or other funky character work in an auto-link, use percent-encoding:

http://en.wikipedia.org/wiki/Ring_%28computer_security%29
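If you'd rather not encode it by hand, Python's standard library produces the same thing (a quick check, nothing LessWrong-specific):

```python
from urllib.parse import quote

title = "Ring_(computer_security)"
print("http://en.wikipedia.org/wiki/" + quote(title))
# http://en.wikipedia.org/wiki/Ring_%28computer_security%29
```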

To keep a closing paren from ending a Markdown link, precede it with a backslash (and add another closing paren to actually end the link):

[foo](http://en.wikipedia.org/wiki/Ring_(computer_security\))

becomes:

foo)

A way to make this argument would be to claim that human values about how to interpret human values are themselves complex. As an illustration, one could point out that the naive utilitarian position in torture-vs.-specks totally disregards the preferences of the 3^^^3 people about what answer the answerer should give to the dilemma --- we'll assume those 3^^^3 people mostly do not hold naive utilitarian ethics. Then the answerer's problem is much tougher, because to disregard those people's preferences he has to be confident that he understands morality better than they do, which, for a naive preference utilitarian, is a self-defeating position.

(And my knowledge of implicit utilitarian meta-ethics gets iffy here, but the naive utilitarian also has no sense in which he could say that choosing specks was wrong, because wrongness is only determined by preferences. He could only say he himself didn't prefer to do what his ethics told him to do -- but his ethics are his preferences, so his claim not to prefer specks would be mostly wrong, or else self-contradicting.)

I wrote a post about this, and also about non-obvious and important considerations for the trolley problem. Hopefully sound arguments in this vein will cause people to recognize moral uncertainty and especially meta-ethical uncertainty as a serious problem. The neglect of the subject increases the chance that an FAI team will see a meta-ethical consensus around them when there isn't one -- consider that Eliezer has (purposefully-exaggeratedly?) claimed that meta-ethics is a solved problem, even though folk like Wei Dai disagree.

Actually, re implicit utilitarian meta-ethics, I have some confusions. Assume preference utilitarianism. We'll say most people think utilitarianism is wrong. They'd prefer you used virtue ethics. They think morality is hella important, more so than their other preferences -- that's feasible. In such a world, would a preference utilitarian thus be obliged to forget utilitarianism and use virtue ethics? And is he obliged to think about ethics and meta-ethics in the ways preferred by the set of people whose preferences he's tracking? If so, isn't utilitarianism rather self-defeating in many possible worlds, including perhaps the world we inhabit?

(Meta-note: considerations like these are what make me think normative ethics without explicit complementary meta-ethics just aren't a contender for actual morality. Too under-specified, too many holes.)

[-][anonymous]12y30

Why should that be a problem? Consequentialists have no obligation to believe what is true, but only what maximizes utility.

If (some version of) utilitarianism is true, and it maximizes utility to not believe so, then you self-modify to in fact not believe that, and so maximize utility and win. So what?

Believing things is just another action with straightforward consequences and is treated like any other action.

This would only be an issue for utilitarianism if you believed that "X is true" is true iff ideal moral agents believe that X is true. Which would be a weird position, given that even ideal Bayesian agents will rationally believe false things in some worlds.

I guess Parfit's already said everything that should be said here --- we're almost following him line for line, no? Parfit doesn't like self-defeating theories is all. Mostly my hidden agenda is to point out that real utilitarianism would not look like choosing torture. It looks like saying "hey people, I'm your servant, tell me what you want me to be and I'll mold myself into it as best I can". But that's really suspect meta-ethically. That's not what morality is. And I think that becomes clearer when you show where utilitarianism ends up.

"Oh you don't know what love is --- you just do as you're told."

ETA: Basically, I'm with Richard Chappell. But, uh, as a theist -- where he says "rational agent upon infinite reflection" or whatever, I say "God", and that makes for some differences, e.g. moral disagreement works differently. (Also I try to push it up to super mega meta.)

[-][anonymous]12y20

Right, and if (some version of) utilitarianism is right, then that's a good thing. The agent isn't being exploited, it's becoming less evil. We definitely want evil agents to roll over and do the right thing instead.

All morality tells you to shut up and do what The Rules say. Preference utilitarianism just has agents inherently included in The Rules.

In fact, the preference utilitarian in your example was able to do the right thing (believe in virtue ethics) only because they were a preference utilitarian. If they had been a deontologist, say, they would have remained evil. How is that self-defeating? It's an argument in preference utilitarianism's favor that a sufficiently smart agent can figure out what to do from scratch, i.e. without starting out as a (correct) virtue ethicist.

(Or maybe you're thinking that believing that utilitarianism does sometimes involve letting others control your actions, makes people more prone to roll over in general. Though to the kind of preference utilitarianism you have in mind, that shouldn't be too problematic, I think.)

(Another Parfit-like point is that the categorical imperative can have basically the same effect, but in that case you're limited by this incredibly slippery notion of "similar situation" and so on which lets you make up a lot of bullshit, rather than by whatever population you decide is the one who gets to define morality. (That said I still can't believe Kant didn't deal with that gaping hole, so I suppose he must have, somewhere.))

I don't get it --- why are you assuming that virtue ethics or the rules of the people are right such that always converging to them is a good aspect of your morality? Why not assume people are mostly dumb and so utilitarianism takes away any hope you could possibly have of doing the right thing (say, deontology)?

> All morality tells you to shut up and do what The Rules say.

Yeah, but meta-ethics is supposed to tell us where The Rules come from, not normative ethics, so normative ethics that implicitly answer the question are, like, duplicitous and bothersome. Or like, maybe I'd be okay with it, but the implicit meta-ethics isn't at all convincing, and maybe that's the part that bothers me.

[-][anonymous]12y30

Nevermind, misunderstood your initial comment, I think.

I thought you were saying: if pref-util is right, pref-utilists may self-modify away from it, which refutes pref-util.

I now think you're saying: we don't know what is right, but if we assume pref-util, then we'll lose part of our ability to figure it out, so we shouldn't do that (yet).

Also, you're saying that most people don't understand morality better than us, so we shouldn't take their opinions more seriously than ours. (Agreed.) But pref-utilists do take those opinions seriously; they're letting their normative ethics influence their beliefs about their normative ethics. (Well, duh, consequentialism.)

In which case I'd (naively) say, let pref-util redistribute the probability mass you've assigned to pref-util any way it wants. If it wants to sacrifice it all for majority opinions, sure, but don't give it more than that.
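A toy version of what I mean by that, with made-up numbers (not a proposal for how to actually handle moral uncertainty): pref-util only gets to move the share you gave it.

```python
# Credences over normative theories (made-up numbers):
credences = {"pref_util": 0.3, "virtue_ethics": 0.5, "deontology": 0.2}

# Suppose pref-util, tracking a population that prefers virtue ethics, "votes" to
# hand itself over to virtue ethics. It can only reallocate its own 0.3:
credences["virtue_ethics"] += credences.pop("pref_util")

print(credences)  # {'virtue_ethics': 0.8, 'deontology': 0.2}
```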

> Mostly my hidden agenda is to point out that real utilitarianism would not look like choosing torture. It looks like saying "hey people, I'm your servant, tell me what you want me to be and I'll mold myself into it as best I can".

This can also lead to a situation where, if everyone decides to be a utilitarian, you wind up with a bunch of people asking each other what they want and answering "I want whatever the group wants".