
This is a very loose idea.

In Stable Pointers to Value II, I pointed at a loose hierarchy of approaches in which you try to get rid of the wireheading problem by revising the feedback loop to remove the incentive to wirehead. Each revision seems to change the nature of the problem (perhaps to the point where we don't want to call it a wireheading problem, and would instead put it in a more general perverse-instantiation category), but not to eliminate the problems entirely.

Talking with Lawrence Chan today, I heard him describe a way of solving problems by "going meta" (a strategy he was mostly suspicious of, in the conversation). His example: you can't extract human values by specifying value extraction directly as a learning problem, because of severe identifiability problems. However, it is not entirely implausible that we can "learn to learn human values": have humans label examples of other humans trying to do things, indicating what values are being expressed in each scenario.

If this goes wrong, you can try to iterate the operation again: learning to learn to learn...

This struck me as similar to the hierarchy I had constructed in my older post.

My interpretation of what Lawrence meant by "going meta" here is this: machine learning research "eats" other research fields by using automated learning to solve problems which were previously solved by the process of science, i.e., by hand-crafting hypotheses and testing them. AI alignment research is full of cases where this doesn't seem like a very good approach. However, one attitude we can take in such cases is to apply the operation again: propose to learn how humans would solve this sticky problem.

This is not at all like other learning-to-learn approaches, which merely seek to speed up normal learning. The idea is that our object-level loss function is insufficient to pick out the behavior we really want. We want new normative feedback to come in at the meta level, telling us more about which ways of solving the object-level problem are desirable and which are undesirable.

The idea I'm about to describe seems fairly hopeless, but I'm interested in seeing how it would go regardless.

What is the fixed point of this particular "go meta" operation?

The intuition is this: any utility function we try to write down has perverse instantiations, so we don't really want to optimize it fully. Searching over a big space leads to Goodhart problems and optimization daemons. Unfortunately, search is more or less the only way we know of to produce intelligent behavior.

However, it seems like we can often improve on this situation by providing more human input to check what was really wanted. Furthermore, it seems like we generally get more by doing this at the meta level -- we don't just want to refine the estimated utility function; we want to refine our notion of safely searching for good options (avoiding searches which Goodhart on looks-good-to-humans by manipulating human psychology, for example), refine our notion of what learning the utility function even means, and so on.

Every stage of going meta introduces a need for yet another search, which brings back the problems all over again. But, maybe we can do something interesting by jumping up all the meta levels here, so that each search is itself governed by some feedback, except when we bottom out in extremely simple operations which we trust.

(This feels conceptually similar to some intuitions in HCH/IDA, but I don't see that it is exactly the same.)

Recursive Quantilization

"Recursive quantilization" is an attempt to make the idea a little more formal. I don't think it quite captures everything I would want from the "fixed point of the meta operation Lawrence Chan was suspicious of", but it has the advantage of being slightly more concrete.

Quantilizers are a way of optimizing a utility function when you suspect it isn't the "true" utility function you should be optimizing, but you do think the difference between the two is small on average when sampling from a known background distribution. Intuitively, you don't want to move too far from the background distribution, where your utility estimates are accurate, but you do want to optimize somewhat in the direction of high utility.
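
To make that concrete, here is a minimal sketch of an empirical q-quantilizer (my own illustration, not code from the quantilizer papers): draw candidate plans from the background distribution, keep the top q fraction by estimated utility, and pick uniformly among those rather than taking the argmax.

```python
import random

def quantilize(background, utility, q=0.1, n_samples=1000):
    """Minimal empirical q-quantilizer sketch.

    background: a function returning one draw from the trusted
        background distribution of plans.
    utility: the (possibly mis-specified) estimated utility function.
    q: fraction to optimize within -- q=1 is no optimization at all,
        while q -> 0 approaches pure argmax and loses the safety story.
    """
    candidates = [background() for _ in range(n_samples)]
    candidates.sort(key=utility, reverse=True)
    top = candidates[:max(1, int(q * n_samples))]
    # Choosing uniformly among the top q fraction keeps us close to the
    # background distribution, where utility estimates are trusted.
    return random.choice(top)
```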

What if we want to quantilize, and we expect that there is some background distribution which would make us have a decent amount of trust in the accuracy of the given utility function, but we don't know what that background distribution is?

We have to learn the "safe" background distribution.

Learning is going to require a search for hypotheses matching whatever feedback we get, which re-introduces Goodhart, etc. So, we quantilize that search. But we need a background distribution which we expect to be safe. And so on.

  • You start with very broad priors on what background distributions might be safe, so you barely optimize at all, but have some default (human-programmed) strategy of asking humans questions.
  • You engage in active learning, steering toward questions which resolve the most important ambiguities (to the extent that you're willing to steer).
  • Because we are taking "all the meta levels" here, we can do some amount of generalization across meta levels so that the stack of meta doesn't get out of hand. In other words, we're actually learning one safe background distribution for all the meta levels, which encodes something like a human concept of "non-fishy" ways of going about things.
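
Putting these pieces together, here is an extremely rough sketch of what the recursion might look like, purely as an illustration (all names are my own, the human-feedback score is a stand-in, and the sketch shows only two meta-levels' worth of types): each meta-level's search for a "safe" background distribution is itself quantilized, using the level above as its background, until we bottom out in a simple hand-coded prior that we trust.

```python
import random

def quantilize(background, score, q=0.1, n=1000):
    # As in the earlier sketch: keep the top q fraction of n draws
    # from `background` under `score`, then pick uniformly among them.
    cands = sorted((background() for _ in range(n)), key=score, reverse=True)
    return random.choice(cands[:max(1, int(q * n))])

def choose_background(level, max_level, trusted_prior, nonfishy_score, q=0.1):
    """Pick the background distribution used at meta-level `level`.

    trusted_prior: an extremely simple, hand-coded sampler over candidate
        background distributions (e.g. variations on "ask a human") that
        we trust without further oversight.
    nonfishy_score: hypothetical stand-in for human feedback about which
        candidates look like "non-fishy" ways of going about things; per
        the last bullet above, one such signal is shared across levels.
    """
    if level >= max_level:
        # Bottom out in the simple operation we trust.
        return trusted_prior
    # The search for this level's background is itself quantilized,
    # with the level above supplying *its* background distribution.
    outer = choose_background(level + 1, max_level, trusted_prior,
                              nonfishy_score, q)
    return quantilize(outer, nonfishy_score, q)

def act(utility, trusted_prior, nonfishy_score, meta_levels=2, q=0.1):
    # Object level: quantilize for high utility against the background
    # distribution chosen at meta-level 1.
    background = choose_background(1, meta_levels, trusted_prior,
                                   nonfishy_score, q)
    return quantilize(background, utility, q)
```

With more than two meta-levels, trusted_prior and nonfishy_score would need types appropriate to each level, which is exactly where the difficulty discussed under Issues below shows up.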

Issues

There are a lot of potential concerns here, but the one which is most salient to me is that humans will have a lot of trouble providing feedback about non-fishy ways of solving the problems at even slightly high meta levels.

Object level: Plans for achieving high utility.

Meta 1: Distributions containing only plans which the utility function evaluates correctly.

Meta 2: Distributions containing only distributions-on-plans which the first-meta-level learning algorithm can be expected to evaluate correctly.

Et cetera.
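
To see why feedback gets harder, it may help to write down (in my own notation, not anything from the original framing) the type of object a human is being asked to evaluate at each level, where A is the space of plans and Δ(X) is the space of distributions over X:

```latex
\begin{align*}
\text{Object level:} \quad & a \in A \\
\text{Meta 1:} \quad & \gamma_1 \in \Delta(A) \\
\text{Meta 2:} \quad & \gamma_2 \in \Delta(\Delta(A)) \\
\text{Meta } k\text{:} \quad & \gamma_k \in \Delta^k(A)
\end{align*}
```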

How do you analyze a distribution? Presumably you have to get a good picture of its shape in a highly multidimensional space -- look at examples of more and less typical members, and convince yourself that the examples you looked at were representative. It's also important to go into its code and check that it isn't intelligently optimizing for some misaligned goal.

It seems to me that a massive advance in transparency or informed oversight would be needed in order for humans to give helpful feedback at higher meta-levels.

Comments

Ultimately I think you'll encounter a difficulty here due to epistemic circularity: you'll eventually need to know about a distribution you can't go more meta on, because doing so would be functionally equivalent to solving the problem of the criterion, completely discovering the universal prior, grounding induction in general, etc. Not that we don't always have to deal with that, just that in particular I don't expect going meta to help much beyond reducing the number of free variables you have to consider. That being said, getting down the number of free variables you have to think about is helpful, but you'll still be left with them.

I'm not even sure there is good normative feedback on the meta level(s). There is feedback we can give on the meta level for any particular object-level instance, but it seems not at all obvious (to me) that this advice will generalize well to other object-level instances.

On the other hand, it does seem to me that the higher up you are in meta-levels, the smaller the space of concepts and the easier it is to learn. So maybe my overall take is that it seems like we can't depend on humans to give meta-level feedback well, but if we can figure out how to either give better feedback or learn from noisy feedback, it would be easier to learn and likely generalize better.

I share both of these intuitions.

That being said, I'm not convinced that the space of concepts is smaller as you get more meta. (Naively speaking, there are ~exponentially more distributions over distributions than distributions, though some strong simplicity biases can cut this down a lot.) I suspect that one reason it seems that the space of concepts is "smaller" is because we're worse at differentiating concepts at higher levels of meta-ness. For example, it seems that it's often easier to figure out what the consequences of concrete action X are than the consequences of adopting a particular ethical system, and a lot of philosophy on metaethics seems more confused than philosophy on ethics. I think this is related to the "it's more difficult to get feedback" intuition, where we have fewer distinct buckets because it's too hard to distinguish between similar theories at sufficiently high meta-levels.

Yeah, I think I agree with all of that. Perhaps the better way to state my position is, conditional on there being good normative feedback on the meta level, I would expect the space of concepts on the meta-level to be smaller than on the object-level.