One approach to constructing a Friendly artificial intelligence is to create a piece of software that looks at large amounts of evidence about humans, and attempts to infer their values.  I've been doing some thinking about this problem, and I'm going to talk about some approaches and problems that have occurred to me.

 

In a naive approach, we might define the problem like this: take some unknown utility function U, and plug it into a mathematically clean optimization process O (like AIXI).  Then, look at your data set, take the information about the inputs and outputs of humans, and find the simplest U that best explains human behavior.
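To make the setup concrete, here's a minimal toy sketch of what I mean (everything in it - the snack world, the candidate utility functions, the softmax choice model, the complexity penalty - is hypothetical scaffolding, not a real proposal): enumerate candidate Us, score each by how well a simple rational-choice model under that U reproduces the observed behavior, and penalize complexity.

```python
import math

# Toy "world": a human repeatedly picks one of three snacks.
OPTIONS = ["apple", "cake", "salad"]
observed_choices = ["cake", "cake", "apple", "cake", "salad", "cake"]

# Hypothetical candidate utility functions U: option -> utility.
candidate_Us = [
    {"apple": 1.0, "cake": 0.0, "salad": 2.0},  # "health is what matters"
    {"apple": 1.0, "cake": 3.0, "salad": 0.5},  # "taste is what matters"
    {"apple": 1.0, "cake": 1.0, "salad": 1.0},  # "indifferent"
]

def log_likelihood(U, choices, beta=1.0):
    """How well a softmax-rational chooser under U explains the observed choices."""
    total = 0.0
    for c in choices:
        z = sum(math.exp(beta * U[o]) for o in OPTIONS)
        total += beta * U[c] - math.log(z)
    return total

def complexity(U):
    """Crude stand-in for 'simplest': count distinct utility levels."""
    return len(set(U.values()))

# "Find the simplest U that best explains human behavior."
best_U = max(candidate_Us,
             key=lambda U: log_likelihood(U, observed_choices) - 0.1 * complexity(U))
print(best_U)  # on this toy data, the taste-based U wins
```

The real version would search an enormous hypothesis space rather than three hand-written dictionaries, but the shape of the problem is the same - and so is the failure mode below.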

Unfortunately, this won't work.  The best possible match for U is one that models not just those elements of human utility we're interested in, but also all the details of our broken, contradictory optimization process.  The U we derive through this process will optimize for confirmation bias, scope insensitivity, hindsight bias, the halo effect, our own limited intelligence and inefficient use of evidence, and just about everything else that's wrong with us.  Not what we're looking for.

Okay, so let's try putting a bandaid on it - let's go back to our original problem setup.  However, we'll take our original O, and use all of the science on cognitive biases at our disposal to handicap it.  We'll limit its search space, saddle it with a laundry list of cognitive biases, cripple its ability to use evidence, and in general make it as human-like as we possibly can.  We could even give it akrasia by implementing hyperbolic discounting of reward.  Then we'll repeat the original process to produce U'.
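Hyperbolic discounting is at least easy to write down.  Here's a minimal sketch, with made-up reward sizes and discount parameters, of the preference reversal that looks like akrasia: a hyperbolic discounter prefers the larger, later reward while both rewards are distant, then flips to the smaller, sooner reward as it gets close - something an exponential discounter never does.

```python
import math

def hyperbolic(value, delay, k=1.0):
    """Hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1.0 + k * delay)

def exponential(value, delay, r=0.1):
    """Exponential discounting, for comparison: value * e^(-r * delay)."""
    return value * math.exp(-r * delay)

# A small reward of 5 at delay d, versus a large reward of 10 five steps later.
for d in (1, 10):
    for name, discount in (("hyperbolic", hyperbolic), ("exponential", exponential)):
        small, large = discount(5.0, d), discount(10.0, d + 5)
        pick = "smaller-sooner" if small > large else "larger-later"
        print(f"{name:11s} delay={d:2d}: picks {pick}")
# The hyperbolic agent flips its choice as d shrinks; the exponential agent doesn't.
```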

If we plug U' into our AI, it will optimize like a human who has suddenly been stripped of all the kinds of stupidity we programmed into our modified O.  This is good!  Plugged into a solid CEV infrastructure, this might even be good enough to produce a future that's a nice place to live.  However, it's not quite ideal.  If we miss a cognitive bias, it'll be incorporated into the learned utility function, and we may never be rid of it.  It would be nice if we could get the AI to learn about cognitive biases exhaustively, and update in the future if it ever discovered a new one.

 

If we had enough time and money, we could do this the hard way: acquire a representative sample of the human population, pay them to perform tasks with simple goals under tremendous surveillance, and have the AI derive the human optimization process from the actions taken towards a known goal.  However, if we assume that the human optimization process can be defined as a function over the state of the human brain, we should not trust the completeness of any such model learned from less data than the entropy of the human brain, which is on the order of tens of petabytes of extremely high quality evidence.  If we want to be confident in the completeness of our model, we may need more experimental evidence than it is really practical to accumulate.  Which isn't to say that this approach is useless - if we can hit close enough to the mark, the AI may be able to run more exhaustive experimentation later and refine its own understanding of human brains closer to the ideal.
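For what it's worth, the "actions taken towards a known goal" part is at least easy to caricature.  A toy sketch under heavy assumptions (the single-bias model, the trial data, and the parameter grid are all invented for illustration): given trials where the goal and the options are known, fit the parameters of a deliberately handicapped chooser - here just a hyperbolic discount rate and a choice-noise temperature - by maximum likelihood.

```python
import math
import itertools

# Each trial: (small_reward, small_delay, large_reward, large_delay, chose_small)
trials = [
    (5, 1, 10, 6, True),
    (5, 2, 10, 7, True),
    (5, 3, 10, 8, True),
    (5, 8, 10, 13, False),
    (5, 10, 10, 15, False),
]

def hyper(value, delay, k):
    return value / (1.0 + k * delay)

def log_likelihood(k, beta):
    """Log-probability of the observed choices under a noisy hyperbolic discounter."""
    total = 0.0
    for s, ds, l, dl, chose_small in trials:
        v_small, v_large = hyper(s, ds, k), hyper(l, dl, k)
        p_small = 1.0 / (1.0 + math.exp(-beta * (v_small - v_large)))
        p = p_small if chose_small else 1.0 - p_small
        total += math.log(max(p, 1e-12))
    return total

# Crude grid search over discount rate k and noise temperature beta.
best_k, best_beta = max(
    itertools.product([0.1, 0.3, 1.0, 3.0], [0.5, 1.0, 2.0, 5.0]),
    key=lambda params: log_likelihood(*params),
)
print(best_k, best_beta)
```

The trouble is that the real "handicap model" of a human has vastly more than two parameters, which is where the petabytes come in.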

But it'd really be nice if our AI could do unsupervised learning to figure out the details of human optimization.  Then we could simply dump the internet into it, and let it grind away at the data and spit out a detailed, complete model of human decision-making, from which our utility function could be derived.  Unfortunately, this does not seem to be a tractable problem.  It's possible that some insight could be gleaned by examining outliers with normal intelligence but deviant utility functions (I am thinking specifically of sociopaths), though it's unclear how much insight these methods can produce.  If anyone has suggestions for a more efficient way of going about it, I'd love to hear them.  As it stands, it might be possible to get enough information from this to supplement a supervised learning approach - the closer we get to a perfectly accurate model, the higher the probability of Things Going Well.

Anyways, that's where I am right now.  I just thought I'd put up my thoughts and see if some fresh eyes see anything I've been missing.  

 

Cheers,

Niger 


AIXI does not take general utility functions.

AIXI can only optimize direct functions of sense data.

It cannot have utility functions over the state of worlds in which it is embedded.

This cannot be fixed without using something entirely different in place of AIXI's Solomonoff Induction.

I believe I saw a post a while back in which Anja discussed creating a variant on AIXI with a true utility function, though I may have misunderstood it. Some of the math this stuff involves I'm still not completely comfortable with, which is something I'm trying to fix.

In any case, what you'd actually want to do is to model your agents using whatever general AI architecture you're using in the first place - plus whatever set of handicaps you've calibrated into it - which presumably has a formal utility function and is an efficient optimizer.

I could be mistaken, but I think this is a case of (unfortunately) several people using the term "utility function" for functions over sensory information instead of a direct reward channel. Dewey has a paper on why such functions don't add up to utility functions over outcomes, IIRC.

That would make sense. I assume the problem is lotus eating - the system, given the choice between a large cost to optimize whatever you care about, or small cost to just optimize its own sense experiences, will prefer the latter.
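A toy illustration of that failure mode (the two-action world and the numbers are invented; only the ordering of the totals matters): if the objective is a function of sense data rather than of world states, the cheap sensor-tampering action wins.

```python
# Two actions: actually do the task, or tamper with the reward sensor.
actions = {
    "do_task":       {"world_value": 10.0, "sensed_reward": 8.0,  "effort_cost": 5.0},
    "tamper_sensor": {"world_value": 0.0,  "sensed_reward": 10.0, "effort_cost": 0.1},
}

def sense_score(a):    # an agent scoring its own sense experiences
    return actions[a]["sensed_reward"] - actions[a]["effort_cost"]

def outcome_score(a):  # an agent scoring world states (what we actually wanted)
    return actions[a]["world_value"] - actions[a]["effort_cost"]

print(max(actions, key=sense_score))    # -> tamper_sensor
print(max(actions, key=outcome_score))  # -> do_task
```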

I find this stuff extremely interesting. I mean, when we talk about value modelling, what we're really talking about is isolating some subset of the causal mechanics driving human behavior (our values) from those elements we don't consider valuable. And, since we don't know if that subset is a natural category (or how to define it if it is), we've got a choice of how much we want to remove. Asking people to make a list of their values would be an example of the extreme sparse end of the spectrum, where we almost certainly don't model as much as we want to, and we know the features we're missing are important. On the other extreme end, we're just naively modelling the behaviors of humans and letting the models vote. That definitely captures all of our values, but it also captures a bunch of extraneous stuff that we don't really want our system optimizing for. The target you're trying to hit is somewhere in the middle. It seems to me that it's probably best to err on the side of including too much rather than too little, since, if we get close enough, the optimizer will likely remove a certain amount of cruft on its own.

given the choice between a large cost to optimize whatever you care about, or small cost to just optimize its own sense experiences, will prefer the latter.

You built the machine to optimize its sense experiences. It is not constructed to optimize anything else. That is just what it does. Not just when it's cheaper, not just when doing otherwise is inconvenient, but at all times, universally.

I suspect having a good estimate of the "human utility function" (even stripped of biases etc.) is not the hardest part of the problem. A "perfect" human, given great power and the ability to self-modify, may still result in a disaster. Human morality is mostly calibrated for dealing with others of around the same power.

Well, human values are probably variant to some degree between humans, so a Friendly AI wouldn't so much be 'maximize generic human utility function' as 'take all the human utility functions you can find as of now, find those portions which are reflexively consistent, weight them by frequency, and take those actions that are best supported by the convergent portions of those utility functions.' At least, that was the gist of CEV circa 2004. Not sure what Eliezer and co are working on these days, but that sounds like a reasonable way to build a nice future to me. A fair one, at least.
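Very loosely - and waving away the genuinely hard parts like extrapolation and reflexive consistency - the aggregation step could be caricatured like this (every name and number below is made up for illustration): keep only the pairwise preferences that a large fraction of people share, weight them by how widely they're shared, and pick the action best supported by what's left.

```python
from collections import Counter
from itertools import combinations

ACTIONS = ["cure_disease", "build_monument", "pave_over_parks"]

# Hypothetical individual utility functions (action -> utility).
people = [
    {"cure_disease": 9, "build_monument": 3, "pave_over_parks": 1},
    {"cure_disease": 8, "build_monument": 5, "pave_over_parks": 2},
    {"cure_disease": 7, "build_monument": 2, "pave_over_parks": 6},
    {"cure_disease": 9, "build_monument": 1, "pave_over_parks": 0},
]

# "Convergent portions": pairwise orderings that at least 75% of people share.
convergent = []
for a, b in combinations(ACTIONS, 2):
    frac_prefer_a = sum(1 for u in people if u[a] > u[b]) / len(people)
    if frac_prefer_a >= 0.75:
        convergent.append((a, b, frac_prefer_a))      # a over b
    elif frac_prefer_a <= 0.25:
        convergent.append((b, a, 1 - frac_prefer_a))  # b over a

# Score actions only by the convergent orderings, weighted by how widely shared they are.
score = Counter()
for winner, loser, frac in convergent:
    score[winner] += frac
print(score.most_common(1)[0][0])  # -> "cure_disease" on this toy data
```

Obviously the real thing is supposed to operate over extrapolated preferences rather than raw ones; this is just the "weight by frequency and keep the convergent part" skeleton.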


I read U² as "U-squared". This doesn't appear to be what you meant. I suggest swapping the ² for a ', giving you U'.

[This comment is no longer endorsed by its author]

You're right, that is confusing. Fixed.