Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Infrafunctions and Robust Optimization


Here are the most interesting things about these objects to me that I think this post does not capture.

Given a distribution over non-negative non-identically-zero infrafunctions, up to a positive scalar multiple, the pointwise geometric expectation exists, and is an infrafunction (up to a positive scalar multiple).

(I am not going to give all the math and be careful here, but hopefully this comment will provide enough of a pointer if someone wants to investigate this.)

This is a bit of a miracle. Compare this with arithmetic expectation of utility functions. This is not always well defined. For example, if you have a sequence of utility functions U_n, each with weight 2^{-n}, but which alternate in which of two outcomes they prefer, and each utility function gets an internal weighting to cancel out their small weight and then some, the expected utility will not exist. There will be a series of larger and larger utility monsters canceling each other out, and the limit will not exist. You could fix this by requiring that your utility functions be bounded, as is standard for dealing with utility monsters, but it is really interesting that in the case of infrafunctions and geometric expectation, you don't have to.

If you try to do a similar trick with infrafunctions, the geometric expectation will go off to infinity, but since you are only working up to a positive scalar multiple, you can renormalize everything to make things well defined.
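As a numerical illustration of both points (the weights, scalings, and two-outcome setup here are invented for the sketch, not taken from the comment), arithmetic expectation of such a sequence diverges while the geometric expectation stays finite:

```python
import numpy as np

# Hypothetical family of utility functions U_n over two outcomes A and B.
# Each U_n gets mixture weight 2^-n, but carries an internal scaling 4^n
# that alternates which outcome gets the huge value.
N = 30
n = np.arange(1, N + 1)
weights = 2.0 ** -n
weights /= weights.sum()

U_A = np.where(n % 2 == 0, 4.0 ** n, 1.0)  # U_n(A)
U_B = np.where(n % 2 == 0, 1.0, 4.0 ** n)  # U_n(B)

# Arithmetic expectation: the partial sums contain terms of size ~2^n
# (weight 2^-n times value 4^n) and blow up.
arith_partials = np.cumsum(weights * U_A)

# Geometric expectation: exp of the weighted average of logs. log U_n grows
# only linearly in n while the weights shrink geometrically, so it converges.
geo_A = np.exp(np.sum(weights * np.log(U_A)))
geo_B = np.exp(np.sum(weights * np.log(U_B)))

# Since everything is defined only up to a positive scalar multiple, the
# meaningful quantity is the ratio between outcomes.
ratio = geo_A / geo_B
```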

We needed the geometric expectation to only be working up to a scalar multiple, and you can't expect to get a utility function out if you take a geometric expectation of utility functions (but you do get an infrafunction!).

If you start with utility functions, and then merge them geometrically, the resulting infrafunction will be maximized at the Nash bargaining solution, but the entire infrafunction can be thought of as an extended preference over lotteries of the pair of utility functions, whereas Nash bargaining only told you the maximum. In this way geometric merging of infrafunctions starts with an input more general than the utility functions of Nash bargaining, and gives an output more structured than the output of Nash bargaining, and so can be thought of as a way of making Nash bargaining more compositional. (Since the input and output are now the same type, you can stack them on top of each other.)

For these two reasons (utility monster resistance and extending Nash bargaining), I am very interested in the mathematical object that is non-negative non-identically-zero infrafunctions defined only up to a positive scalar multiple, and more specifically, I am interested in the set of such functions as a *convex* set where mixing is interpreted as pointwise geometric expectation.

I have been thinking about this same mathematical object (although with a different orientation/motivation) as where I want to go with a weaker replacement for utility functions.

I get the impression that for Diffractor/Vanessa, the heart of a concave-value-function-on-lotteries is that it represents the worst case utility over some set of possible utility functions. For me, on the other hand, a concave value function represents the capacity for compromise -- if I get at least half the good if I get what I want with 50% probability, then I have the capacity to merge/compromise with others using tools like Nash bargaining.

This brings us to the same mathematical object, but it feels like I am using the definition of a convex set as one that contains the line segment connecting any two of its points, whereas Diffractor/Vanessa are using the definition of a convex set as an intersection of half-planes.

I think this pattern, where I am more interested in merging and Diffractor and Vanessa are more interested in guarantees but we end up looking at the same math, keeps recurring, and I think the dual definitions of a convex set in part explain (or at least rhyme with) it.

I forget if I already mentioned this to you, but another example where you can interpret randomization as worst-case reasoning is MaxEnt RL, see this paper. (I reviewed an earlier version of this paper here (review #3).)

Can I check that I follow how you recover quantilization?

Are you evaluating distributions over actions, and caring about the worst-case expectation of that distribution?

If so, proposing a particular action is evaluated badly? (Since there's a utility function in your set that spikes downward at that action.)

But proposing a range of actions to randomize amongst can be assessed to have decent worst-case expected utility, since particular downward spikes get smoothed over, and you can rely on your knowledge of "in-distribution" behaviour?
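The smoothing intuition in this comment can be sketched numerically (the specific numbers, spike depth, and "one downward spike per action" adversary set are invented for illustration, a crude stand-in for an L1 ball):

```python
import numpy as np

# Nominal utility U is 1 everywhere, but the adversary's set contains, for
# each action a, a corrupted function V_a that spikes downward at a alone.
n_actions = 100
U = np.ones(n_actions)

def worst_case_value(policy):
    # The adversary picks whichever V_a minimizes expected utility, i.e.
    # spikes downward wherever the policy concentrates its mass.
    values = []
    for a in range(n_actions):
        V = U.copy()
        V[a] = -10.0  # downward spike at action a only
        values.append(float(policy @ V))
    return min(values)

point_policy = np.zeros(n_actions)
point_policy[0] = 1.0
uniform_policy = np.full(n_actions, 1.0 / n_actions)

wc_point = worst_case_value(point_policy)      # eats the full spike
wc_uniform = worst_case_value(uniform_policy)  # spike diluted across actions
```

A deterministic action is assessed at the bottom of its spike, while the uniform policy only pays 1/100 of any single spike, so its worst case stays close to the nominal value.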

Edited to add: fwiw it seems awesome to see quantilization formalized as popping out of an adversarial robustness setup! I haven't seen something like this before, and didn't notice if the infrabayes tools were building to these kinds of results. I'm very much wanting to understand why this works in my own native-ontology-pieces.

If that's correct, here are some places this conflicts with my intuition about how things should be done:

I feel awkward about the randomness being treated as essential. I'd rather be able to do something other than randomness in order to get my mild optimization, and something feels unstable/non-compositional about needing randomness in place for your evaluations... (Not that I have an alternative that springs to mind!)

I also feel like "worst case" is perhaps problematic, since it's bringing maximization in, and you're then needing to rely on your convex set being some kind of smooth in order to get good outcomes. If I have a distribution over potential utility functions, and quantilize for the worst 10% of possibilities, does that do the same sort of work that "worst case" is doing for mild optimization?

For the "Crappy Optimizer Theorem", I don't understand why condition 4, that if , then , isn't just a tautology^{[1]}. Surely if , then no matter what is being used,

as , then letting , then , and so .

I guess if the 4 conditions are seen as conditions on a function (where they are written for ), then it no longer is automatic, and it is just when specifying that for some , that condition 4 becomes automatic?

______________

[start of section spitballing stuff based on the crappy optimizer theorem]

Spitball 1:

What if instead of saying , we had ? would we still get the results of the crappy optimizer theorem?

If we define things so that s(f) is now a distribution over X, then, I suppose, instead of writing Q(s)(f)=f(s(f)) we should write Q(s)(f)=s(f)(f), and, in this case, the first two conditions and the fourth seem just as reasonable. The third condition... seems like it should also be satisfied?

Spitball 2:

While I would expect that the 4 conditions might not be *exactly* satisfied by, e.g. gradient descent, I would kind of expect basically any reasonable deterministic optimization process to at least "almost" satisfy them? (like, maybe gradient-descent-in-practice would fail condition 1 due to floating point errors, but not too badly in reasonable cases).

Do you think that a modification of this theorem for functions Q(s) which only approximately satisfy conditions 1-3, would be reasonably achievable?

______________

^{^}I might be stretching the meaning of "tautology" here. I mean something provable in our usual background mathematics, which therefore, when added as an additional hypothesis to a theorem, doesn't let us show anything that we couldn't show without it being an explicit hypothesis.

I really like infrafunctions as a way of describing the goals of mild optimizers. But I don't think you've described the correct reasons why infrafunctions help with reflective stability. The main reason is that you've hidden most of the difficulty of reflective stability in the bound.

My core argument is that a normal quantilizer is reflectively stable^{[1]} if you have such a bound. In the single-action setting, where it chooses a policy once at the beginning and then follows that policy, it must be reflectively stable because if the chosen policy constructs another optimizer that leads to low true utility, then that policy must have very low base probability (or the bound can't have been true). In a multiple-action setting, we can sample each action conditional on the previous actions, according to the quantilizer distribution, and this will be reflectively stable in the same way (given the bound).

Adding in observations doesn't change anything here if we treat U and V as being expectations over environments.

The way you've described reflective stability in the dynamic consistency section is an incentive to keep the same utility infrafunction no matter what observations are made. I don't see how this is necessary or even strongly related to reflective stability. Can't we have a reflectively stable CDT agent?

**Two core difficulties of reflective stability**

I think the two core difficulties of reflective stability are 1) getting the bound (or similar) and 2) describing an algorithm that lazily does a ~minimal amount of computation for choosing the next few actions. I expect realistic agents need 2 for efficiency. I think utility infrafunctions do help with both of these, to some extent.

The key difficulty of getting a tight bound with normal quantilizers is that simple priors over policies don't clearly distinguish policies that create optimizers. So there's always a region at the top where "create an optimizer" makes up most of the mass. My best guess for a workaround for this is to draw simple conservative OOD boundaries in state-space and policy-space (the base distribution is usually just over policy space, and is predefined). When a boundary is crossed, it lowers the lower bound on the utility (gives Murphy more power). These boundaries need to be simple so that they can be learned from relatively few (mostly in-distribution) examples, or maybe from abstract descriptions. Being simple and conservative makes them more robust to adversarial pressure.

Your utility infrafunction is a nice way to represent lots of simple out-of-distribution boundaries in policy-space and state-space. This is much nicer than storing this information in the base distribution of a quantilizer, and it also allows us to modulate how much optimization pressure can be applied to different regions of state or policy-space.

With 2, an infrafunction allows on-the-fly calculation that the consequences of creating a particular optimizer are bad. It can do this as long as the infrafunction treats the agent's own actions and the actions of child-agents as similar, or if it mostly relies on OOD states as the signal that the infrafunction should be uncertain (have lots of low spikes), or some combination of these. Since the max-min calculation is the motivation for randomizing in the first place, an agent that uses this will create other agents that randomize in the same way. If the utility infrafunction is only defined over policies, then it doesn't really give us an efficiency advantage because we already had to calculate the consequences of most policies when we proved the bound.

One disadvantage, which I think can't be avoided, is that an infrafunction over histories is incentivized to stop humans from doing actions that lead to out-of-distribution worlds, whereas an infrafunction over policies is not (to the extent that stopping humans doesn't itself cross boundaries). This seems necessary because it needs to consider the consequences of the actions of optimizers it creates, and this generalizes easily to all consequences since it needs to be robust.

^{^}Where I'm defining reflective stability as: If you have an anti-Goodhart modification in your decision process (e.g. randomization), ~never follow a plan that indirectly avoids the anti-Goodhart modification (e.g. making a non-randomized optimizer).

The key difficulty here being that the default pathway for achieving a difficult task involves creating new optimization procedures, and by default these won't have the same anti-Goodhart properties as the original.

[This comment is no longer endorsed by its author]

I thought CDT was considered not reflectively-consistent because it fails Newcomb's problem?

(Well, not if you define reflective stability as meaning preservation of anti-Goodhart features, but, CDT doesn't have an anti-Goodhart feature (compared to some base thing) to preserve, so I assume you meant something a little broader?)

Like, isn't it true that a CDT agent who anticipates being in Newcomb-like scenarios would, given the opportunity to do so, modify itself to be not a CDT agent? (Well, assuming that the Newcomb-like scenarios are of the form "at some point in the future, you will be measured, and based on this measurement, your future response will be predicted, and based on this the boxes will be filled")

My understanding of reflective stability was "the agent would not want to modify its method of reasoning". (E.g., a person with an addiction is not reflectively stable, because they want the thing (and pursue the thing), but would rather not want (or pursue) the thing.)

The idea being that, any ideal way of reasoning, should be reflectively stable.

And, I thought that what was being described in the part of this article about recovering quantilizers was not saying "here's how you can use this framework to make quantilizers better", so much as "quantilizers fit within this framework, and can be described within it, where the infrafunction that produces quantilizer-behavior is this one: [the (convex) set of utility functions which differ (in absolute value) from the given one by, in expectation under the reference policy, at most epsilon]"

So, I think the idea is that, a quantilizer for a given utility function and reference distribution is, in effect, optimizing for an infrafunction that is/corresponds-to the set of utility functions satisfying the bound in question,

and, therefore, any quantilizer, in a sense, is as if it "has this bound" (or, "believes this bound")

And that therefore, any quantilizer should -

- wait.. that doesn't seem right..? I was going to say that any quantilizer should therefore be reflectively stable, but that seems like it must be wrong? What if the reference distribution includes always taking actions to modify oneself in a way that would result in not being a quantilizer? uhhhhhh

Ah, hm, it seems to me like the way I was imagining the distribution and the context in which you were considering it, are rather different. I was thinking of as being an accurate distribution of behaviors of some known-to-be-acceptably-safe agent, whereas it seems like you were considering it as having a much larger support, being much more spread out in what behaviors it has as comparably likely to other behaviors, with things being more ruled-out rather than ruled-in ?

Good point on CDT, I forgot about this. I was using a more specific version of reflective stability.

> - wait.. that doesn't seem right..?

Yeah this is also my reaction. Assuming that bound seems wrong.

I think there is a problem with thinking of as a known-to-be-acceptably-safe agent, because how can you get this information in the first place, without running that agent in the world? To construct a useful estimate of the expected value of the "safe" agent, you'd have to run it lots of times, necessarily sampling from its most dangerous behaviours.

Unless there is some other non-empirical way of knowing an agent is safe?

Yeah I was thinking of having large support of the base distribution. If you just rule-in behaviours, this seems like it'd restrict capabilities too much.

Well, I was kinda thinking of as being, say, a distribution of human behaviors in a certain context (as filtered through a particular user interface), though, I guess that way of doing it would only make sense within limited contexts, not general contexts where whether the agent is physically a human or something else, would matter. And in this sort of situation, well, the action of "modify yourself to no-longer be a quantilizer" would not be in the human distribution, because the actions to do that are not applicable to humans (as humans are, presumably, not quantilizers, and the types of self-modification actions that would be available are not the same). Though, "create a successor agent" could still be in the human distribution.

Of course, one doesn't have practical access to "the true probability distribution of human behaviors in context M", so I guess I was imagining a trained approximation to this distribution.

Hm, well, suppose that the distribution over human-like behaviors includes both making an agent which is a quantilizer and making one which isn't, both of equal probability. Hm. I don't see why a general quantilizer in this case would pick the quantilizer over the plain optimizer, as the utility...

Hm...

I get the idea that the "quantilizers correspond to optimizing an infra-function of form [...]" thing is maybe dealing with a distribution over a single act?

Or.. if we have a utility function over histories until the end of the episode, then, if one has a model of how the environment will be and how one is likely to act in all future steps, given each of one's potential actions in the current step, one gets an expected utility conditioned on each of the potential actions in the current step, and this works as a utility function over actions for the current step,

and if one acts as a quantilizer over that, each step.. does that give the same behavior as an agent optimizing an infra-function defined using the condition with the norm described in the post, in terms of the utility function over histories for an entire episode, and reference distributions for the whole episode?

argh, seems difficult...

Proofs are in this link

This will be a fairly important post. Not one of those obscure result-packed posts, but something a bit more fundamental that I hope to refer back to many times in the future. It's at least worth your time to read this first section up to its last paragraph.

There are quite a few places where randomization would help in designing an agent. Maybe we want to find an interpolation between an agent picking the best result, and an agent mimicking the distribution over what a human would do. Maybe we want the agent to do some random exploration in an environment. Maybe we want an agent to randomize amongst promising plans instead of committing fully to the plan it thinks is the best.

However, all of these run into the standard objection that any behavior like this, where a randomized action is the best thing to do, is unstable as the agent gets smarter and has the ability to rewrite itself. If an agent is randomizing to sometimes take actions that aren't optimal according to its utility function, then there will be an incentive for the agent to self-modify to eliminate its randomization into those suboptimal actions.

The formalization of this is the following proposition.

Proposition 1: Given some compact metric space of options X, if U:X→R is a bounded function, then {μ|∀ν∈ΔX:μ(U)≥ν(U)}=Δ{x|∀y∈X:U(x)≥U(y)}.

Intuitively, what this is saying is that the only possible way for a mixture of options to be an optimal move is if each component option is an optimal move. So, utility functions can only give you randomization behavior if the randomization is between optimal actions. The set of such will typically only contain a single point. And so, in general, for any utility function at all, an agent using it will experience a convergent pressure towards deterministic decision-making. Every single clever alignment trick involving an agent behaving randomly or sampling from a distribution is thereby guaranteed to fail, as it's not stable under reflection as the agent gets smarter, for anything worthy of being called an agent (in the sense that it has an implicit utility function and acts to achieve it).
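The intuition behind Proposition 1 can be checked numerically (the utility values below are invented for the sketch): a mixture over actions matches the optimum only if every action it puts mass on is itself optimal.

```python
import numpy as np

# Four actions; actions 1 and 2 are tied for optimal under U.
U = np.array([0.3, 0.9, 0.9, 0.1])
best = U.max()

opt_mix = np.array([0.0, 0.5, 0.5, 0.0])      # mixes only optimal actions
leaky_mix = np.array([0.1, 0.45, 0.45, 0.0])  # leaks mass onto action 0

opt_value = float(opt_mix @ U)      # matches the optimum exactly
leaky_value = float(leaky_mix @ U)  # strictly below the optimum
```

Any mass placed on a suboptimal action drags the expectation strictly below the maximum, which is why the optimizing pressure is toward determinism.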

The rest of this post will be about how the above sentence is false. There's a mathematically principled, reflectively stable, way an agent can be, where randomization behavior persists. No matter how smart it gets, it won't want to remove its randomization behavior. Reflectively stable quantilizers are back on the menu, as are reflectively stable human-imitators, reflectively stable Thompson samplers, and more.

What's an Infrafunction?

Intuitively, just as infradistributions are a generalization of probability distributions, infrafunctions are a generalization of functions/random variables in the same direction. The next paragraph will be informal (and somewhat wrong) to not clog it with caveats.

The Fundamental Theorem of Inframeasures says that there are two ways of viewing inframeasures. The first way is to view an inframeasure as a closed convex set of measures, where the worst-case measure is picked. You don't know what distribution in the set will be picked by reality, and so you model it as an adversarial process and plan for the worst-case. As for the second way to view an inframeasure, the thing you do with probability distributions is to take expectations of functions with them. For instance, the probability of an event is just the expected value of the function that's 1 if the event happens, and 0 otherwise. So an inframeasure may also be viewed as a functional that takes a function X→R as an input, and outputs the expectation value, and which must fulfill some weak additional properties like concavity. Measures fulfill the much stronger property of inducing a linear function (X→R)→R.

Moving away from that, it's important to note that the vector space of continuous functions X→R, and the vector space of (finite signed) measures on X (denoted M±(X)), are dual to each other. A function f and a measure m are combined to get an expectation value. f,m↦∫fdm (ie, taking the expectation) is the special function of type (X→R)×M±(X)→R. Every continuous linear function (X→R)→R corresponds to taking expectations with respect to some finite signed measure, and every continuous linear function M±(X)→R corresponds to taking expectations with respect to some continuous function X→R.

Since the situation is so symmetric, what happens if we just take all the mathematical machinery of the Fundamental Theorem of Inframeasures, but swap measures and functions around? Well... compare the next paragraph against the earlier paragraph about the Fundamental Theorem of Inframeasures.

The Fundamental Theorem of Infrafunctions says that there are two ways of viewing infrafunctions. The first way is to view an infrafunction as a closed convex set of functions X→R, where the worst-case function is picked. You don't know what function in the set will be picked by reality, and so you model it as an adversarial process and plan for the worst-case. As for the second way to view an infrafunction, a thing you do with functions to R is you combine them with a probability distribution to get an expectation value. So an infrafunction may also be viewed as a function that takes a probability distribution as an input, and outputs the expectation value, and which must fulfill some weak additional properties like concavity. Functions fulfill the much stronger property of inducing a linear function M±(X)→R.

For the following theorem, a set S of functions X→R is called upper-complete if, whenever f∈S and g≥f, g∈S as well. And a function g will be called minimal in S if f≤g and f∈S implies that f=g.

Theorem 1: Fundamental Theorem of Infrafunctions
If X is a compact metric space, there is a bijection between concave upper-semicontinuous functions of type ΔX→R∪{−∞}, and closed convex upper-complete sets of continuous functions X→R.

Conjecture 1: Continuity = Compactness
If X is a compact metric space, there is a bijection between concave continuous functions of type ΔX→R, and closed convex upper-complete sets of continuous functions X→R where the subset of minimal functions has compact closure.

Ok, so, infrafunctions can alternately be viewed as concave (hill-like) functions ΔX→R, or closed convex upwards-complete sets of continuous functions.

Effectively, this is saying that any concave scoring function on the space of probability distributions (like the negative KL-divergence) can equivalently be viewed as a worst-case process that adversarially selects functions. For any way of scoring distributions over "what to do" where randomization ends up being the highest-scoring option, the scoring function on ΔX is probably going to curve down (be concave), and so it'll implicitly be optimizing for the worst-case score amongst a set of functions X→R.

Now, going from a desired randomization behavior, to a concave scoring function with that desired behavior as the optimum, to the set of functions the scoring function is implicitly worst-casing over, takes nontrivial mathematical effort. There's a whole bunch of possible randomization behaviors for which I don't know what sort of worst-case beliefs induce that sort of randomization.

For instance, what sort of uncertainty around a utility function makes an agent softmax over it? I don't know. Although, to be fair, I haven't tried particularly hard.

Generalization of Quantilization

However, one example that can be worked out (and has already been worked out by Jessica Taylor and Vanessa Kosoy) is quantilization. If you haven't seen quantilization before, it's where you take a reference distribution over actions, ν∈ΔX, and instead of picking the best action to optimize the function U, you condition ν on the event "the action I picked is in the top 1 percent of actions sampled from ν, in terms of how much U likes it" and sample from that distribution. I mean, it doesn't have to be 1 percent specifically, but this general process of "take a reference distribution, plot out how good it is w.r.t. some function U, condition on the U score being beyond some threshold, and sample from that updated distribution" is referred to as quantilization.
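To make the procedure concrete, here's a minimal sketch of a quantilizer over a finite action set (the function names, the uniform reference distribution, and the toy scores are all invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantilize(actions, scores, nu_probs, q, n_samples, rng):
    """Condition the reference distribution nu on the top-q fraction of
    actions by score (under nu), then sample from that distribution."""
    order = np.argsort(-scores)            # actions sorted by descending score
    cum = np.cumsum(nu_probs[order])
    cutoff = np.searchsorted(cum, q) + 1   # include the action crossing mass q
    top = order[:cutoff]
    cond = np.zeros_like(nu_probs)
    cond[top] = nu_probs[top]              # zero out everything below threshold
    cond /= cond.sum()                     # renormalize the conditioned nu
    return rng.choice(actions, size=n_samples, p=cond)

actions = np.arange(10)
nu = np.full(10, 0.1)              # uniform reference distribution
scores = actions.astype(float)     # U increases with the action index
samples = quantilize(actions, scores, nu, q=0.2, n_samples=1000, rng=rng)
```

With q=0.2 over a uniform ν on ten actions, only the top two actions (8 and 9) survive the conditioning, and the quantilizer randomizes uniformly between them.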

Well, if that's the sort of optimization behavior we're looking for, we might ask "what sort of concave scoring function on probability distributions has quantilization as the optimum?", or "what sort of utility function uncertainty produces that scoring function?"

As it turns out, quantilization w.r.t. a reference distribution ν, where your utility function is U, corresponds to worst-casing amongst the following set of functions for some ϵ. {V|∫|U−V|dν≤ϵ} The epistemic state that leads to quantilization is therefore one where you think that your utility function is unfixably corrupted, but that its deviation from the true utility function is low relative to a given reference distribution ν. Specifically, if U is your utility function you see, and V is the true utility function which U is a corruption of, you believe that ∫|U−V|dν≤ϵ and have no other beliefs about the true utility function V.

You'd be very wary about going off-distribution because you'd think "the corruption can be arbitrarily bad off-distribution because all I believe is that ∫|U−V|dν is low, so in a region where ν is low-probability, it's possible that V is super-low there". You'd also be very wary about deterministically picking a single spot where U is high, because maybe the corruption is concentrated on those sparse few spots where U is the highest.

However, if you randomize uniformly amongst the top quantile of ν where U is high, this is actually the optimal response to this sort of utility function corruption. No matter how the corruption is set up (as long as it's small relative to ν), the quantilizing policy is unlikely to do poorly w.r.t. (the agent's irreducibly uncertain beliefs about) the true utility function.

These are some rather big claims. Exactly as stated above, they've already been proven a long time ago by people who aren't me. However, there's a further-reaching generalization that the above results are a special case of, which allows interpolating between quantilization and maximization, which is novel.

Introduction to Lp Spaces (skippable)

If you already know what Lp spaces are, you can skip straight to the general theorem, but if you don't, time to take a detour and explain them.

Given some nice space X and probability distribution ν∈ΔX, we can make the vector space of "measurable functions X→R which are equivalent w.r.t. ν". After all, you can add functions together, and multiply by constants, so it makes a vector space. However, the elements of this vector space aren't quite functions, they're equivalence classes of functions that are equivalent w.r.t. ν. Ie, if f and g are two different functions, but ν has zero probability of selecting a point x where f(x)≠g(x), then f and g will be the same point in the vector space we're constructing.

This vector space can be equipped with a norm. Actually, it can be equipped with a lot of norms. One for each real number p in [1,∞]. The Lp norm on the space of "functions that are equivalent w.r.t. ν" is: ||f||_p := (∫|f|^p dν)^(1/p). For the L2 norm, it'd be ||f||_2 := √(∫f^2 dν). Compare this to euclidean distance! ||x|| = √(∑_{i=1}^n x_i^2)

So, our set-of-functions which induces quantilization behavior, {V|∫|U−V|dν≤ϵ}, can also be expressed as "an ϵ-sized ball around U w.r.t. the L1 norm and ν", and this is the core of how to generalize further, for we may ask, what's so special about the L1 norm? What about the Lp norm for all the p∈[1,∞]? What do those do?

Theorem 2: Lp Ball Theorem
For any ϵ>0, p∈[1,∞], f:X→R, and ν∈ΔX, the infrafunction corresponding to {g|(∫|f−g|^p dν)^(1/p)≤ϵ} (Knightian uncertainty over the Lp ball of size ϵ centered at f, w.r.t. ν) is μ↦μ(f)−ϵ||dμ/dν||_q where 1/p+1/q=1. Further, given a function f, the optimal μ to pick is the distribution a⋅ν⋅max(f−b,0)^(p−1), for some constants a>0 and b≤max(f).

Some special cases of this are as follows. For p=1, you get quantilization. Worst-casing over little L1 balls means that your task is to pick the probability distribution μ which maximizes μ(f)−ϵ||dμ/dν||_∞, and this maximizing probability distribution is ν rescaled by the function that's 1 when f exceeds the threshold value and 0 otherwise (as this is the limit of max(f−b,0)^δ as δ→0). This can be restated as conditioning ν on the event that f exceeds a certain threshold, and so we get quantilizers.
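The Lp norm definition can be checked against familiar closed forms on a grid (the grid, the uniform reference distribution, and f(x)=x are invented for the sketch): with ν uniform on [-1,1] and f(x)=x, the L1 norm is E|x|=1/2 and the L2 norm is √(E[x²])=√(1/3).

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)
nu = np.full_like(x, 1.0 / x.size)   # uniform reference measure on the grid
f = x

def lp_norm(f, nu, p):
    # ||f||_p = (integral of |f|^p with respect to nu) ** (1/p)
    return float(np.sum(np.abs(f) ** p * nu) ** (1.0 / p))

l1 = lp_norm(f, nu, 1)   # should be close to 1/2
l2 = lp_norm(f, nu, 2)   # should be close to sqrt(1/3)
```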

For p=2, you get a more aggressive sort of optimization. Worst-casing over little L2 balls means that your task is to pick the probability distribution μ which maximizes μ(f)−ϵ||dμ/dν||_2, and this maximizing probability distribution is ν but rescaled linearly with how much f exceeds the threshold value b. So, for example, given two points x and y, if f(x)−b=3(f(y)−b), the probability density at x is enhanced by a factor of 3 over what it'd be at y.

For p=∞, you basically just get argmax over the support of the distribution ν. Worst-casing over little L∞ balls means that your task is to pick the probability distribution μ which maximizes μ(f)−ϵ||dμ/dν||_1, and this maximizing probability distribution is ν but rescaled according to max(f−b,0)^n for arbitrarily large n. Ie, incredibly large at the highest value of f, which dominates the other values. So pretty much, a dirac-delta distribution at the best point in the support of ν.
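The three regimes of the rescaling ν·max(f−b,0)^(p−1) can be compared side by side on a grid (the grid, f(x)=x, and the hand-picked threshold b are invented for the sketch; the p=1 case is read as the indicator of f≥b):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
f = x                                  # utility increases along [0, 1]
nu = np.full_like(x, 1.0 / x.size)     # uniform reference distribution
b = 0.795                              # hand-picked threshold for illustration

def tilt(exponent):
    # exponent = p - 1; exponent 0 is treated as the indicator of f >= b
    if exponent == 0:
        w = (f >= b).astype(float)
    else:
        w = np.maximum(f - b, 0.0) ** exponent
    mu = nu * w
    return mu / mu.sum()

mu_p1 = tilt(0)     # p=1: condition nu on f >= b (quantilization)
mu_p2 = tilt(1)     # p=2: density grows linearly in f - b above threshold
mu_big = tilt(100)  # large exponent, approximating p=inf: argmax over supp(nu)
```

p=1 spreads mass uniformly over the top quantile, p=2 tilts it linearly toward higher f, and a large exponent concentrates almost all the mass at the best point in the support.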

There's a massive amount of work to be done on how various sorts of randomization behavior from agents relate to various sorts of concave scoring rules for distributions, and how those relate with various sorts of (restricted) worst-case assumptions about how the utility function got corrupted.

Dynamic Consistency??

But what's this about agents with infrafunctions $U$ as their utility functions being stable under reflection? Well, we'd want an agent to be incentivized to keep its utility function the same (or at least not change it in unpredictable ways) no matter what it sees. Making this more precise, if an agent has a utility (infra)function $U$, then it should believe that optimizing for the infrafunction $U|h$ ($U$ but modified in a predictable way to account for having seen history $h$) after seeing history $h$ will produce equal or better results (according to $U$) than optimizing for any competitor (infra)function $V$ after seeing $h$. This is a necessary condition for the starting agent to have no incentive to alter the utility function of its future self in an unpredictable way (ie, alter it in a way that differs from $U|h$).

For example, if an agent with an infrafunction $U$ ever ends up thinking "If I was an optimizer for the utility function (not infrafunction!) $V$, I'd do better, hang on, lemme just rewrite myself to optimize that instead", that would be instability under reflection. That just should not ever happen. Infrafunctions shouldn't collapse into utility functions.

And, as it turns out, if you've got an agent with a utility infrafunction operating in an environment, there is a way to update the infrafunction over time which makes this happen. The agent won't want to change its infrafunction in any way other than by updating it. However, the update shares an undesirable property with the dynamically consistent way of updating infradistributions: the way to update a utility infrafunction (after you've seen a history) depends on what the agent's policy would do in other branches.

If you're wondering why the heck we need to update our utility infrafunction over time, and why updating would require knowing what happens in alternate timelines, here's why. The agent is optimizing worst-case expected value of the functions within its set-of-functions. Thus, the agent will tend to focus its marginal efforts on optimizing for the utility functions in its set which have the lowest expected value, in ways that don't destroy too much value for the utility functions which are already doing well in expectation. And so, for a given function $f \in \hat{U}$ (the set induced by the infrafunction $U$), it matters very much whether the agent is doing quite well according to $f$ in alternate branches (it's doing well in expectation, so it's safe to mostly ignore it in this branch), or whether the agent is scoring horribly according to $f$ in the alternate branches (which means that it needs to be optimized in this branch).

Time to introduce the notation to express how the update works. If $h$ is a finite history, $\pi_{\neg h}$ is a stochastic partial policy that tells the agent what to do in all situations except where the history has $h$ as a prefix, and $\pi^*$ is a stochastic partial policy that tells the agent what to do in all situations where the history has $h$ as a prefix, then $\pi^* \bullet \pi_{\neg h}$ is the overall policy made by gluing together those two partial policies.

Also, if $e$ is an environment and $\pi$ is a stochastic policy, $\pi \cdot e$ refers to the distribution over infinite histories produced by the policy interacting with the environment. By abuse of notation, $\pi_{\neg h} \cdot e$ can be interpreted as a probability distribution on $\{h' \in (A \times O)^\omega \mid h \not\sqsubseteq h'\} \cup \{h\}$. This is because, obeying $\pi_{\neg h}$, either everything works just fine and $\pi_{\neg h}$ keeps telling you what your next action is and you build some infinite history $h'$, or the partial history $h$ happens and $\pi_{\neg h}$ stops telling you what to do.

The notation $1_{\neg h}$ is the indicator function that's 1 when the full history lacks $h$ as a prefix, and 0 on the partial history $h$. $U \downarrow h$ is the function $U$, but with a restricted domain, so it's only defined on infinite histories with $h$ as a prefix.

With those notations out of the way, given an infrafunction $U$ (and using $U$ for the utility functions in the corresponding set $\hat{U}$), we can finally say how to define the updated form of $U$, where we're updating on some arbitrary history $h$, environment $e$, and off-history partial stochastic policy $\pi_{\neg h}$ which tells us how we act for all histories that lack $h$ as a prefix.

Definition 1: Infrafunction Update

For an infrafunction $U$ of type $\Delta(A \times O)^\omega \to \mathbb{R}$, history $h$, environment $e$, and partial stochastic policy $\pi_{\neg h}$ which specifies all aspects of how the agent behaves except after history $h$, $U|e,h,\pi_{\neg h}$, the update of the infrafunction, is the infrafunction corresponding to the set of functions

$$\{(\pi_{\neg h} \cdot e)(1_h) \cdot U \downarrow h + (\pi_{\neg h} \cdot e)(1_{\neg h} U) \mid U \in \hat{U}\}$$

Or, restated,

$$\{\mathbb{E}_{h' \sim \pi_{\neg h} \cdot e}[1_h] \cdot U \downarrow h + \mathbb{E}_{h' \sim \pi_{\neg h} \cdot e}[1_{\neg h} U] \mid U \in \hat{U}\}$$

So, basically, what this is doing is taking all the component functions, restricting them to just be about what happens beyond the partial history $h$, scaling them down, and taking the behavior of those functions off-$h$ to determine what constant to add to the new function. So, functions which do well off-$h$ get a larger constant added to them than functions which do poorly off-$h$.

And now we get to our theorem, that if you update infrafunctions in this way, it's always better (from the perspective of the start of time) to optimize for the updated infrafunction than to go off and rewrite yourself to optimize for something else.

Theorem 3: Dynamic Consistency

For any environment $e$, finite history $h$, off-history policy $\pi_{\neg h}$, and infrafunctions $U$ and $V$, we have that

$$U((\pi_{\neg h} \bullet \mathrm{argmax}_{\pi^*}(U|e,h,\pi_{\neg h})(\pi^* \cdot (e|h))) \cdot e) \ge U((\pi_{\neg h} \bullet \mathrm{argmax}_{\pi^*} V(\pi^* \cdot (e|h))) \cdot e)$$

Or, restating in words: selecting the after-$h$ policy by argmaxing for $U|e,h,\pi_{\neg h}$ makes an overall policy that outscores the policy you get by selecting the after-$h$ policy to argmax for $V$.

Ok, but what does this sort of update mean in practice? Well, intuitively, if you're optimizing according to an infrafunction, and some of the component functions you're worst-casing over are sufficiently well-satisfied in other branches, they kind of "drop out". We're optimizing the worst case, so the functions that are doing pretty well elsewhere can be ignored as long as you're not acting disastrously with respect to them. You're willing to take a hit according to those functions, in order to do well according to the functions that aren't being well-satisfied in other branches.
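Here is a minimal numerical sketch of that "drop out" effect (my own toy example, not from the post: one branch point, two component utility functions, and hypothetical payoff numbers). Optimizing the updated infrafunction at $h$ accounts for the off-branch values, while a "naive" update that forgets them can do worse by the original worst-case standard:

```python
import numpy as np

# Toy two-branch setup (hypothetical numbers): with probability 1-q the off-h
# branch happens and component utility function U_i collects v[i]; with
# probability q we reach history h and pick one of two actions with payoff u[i, a].
# Per Definition 1, the updated infrafunction's components are q*U_i|h + (1-q)*v_i.
q = 0.5
v = np.array([0.9, 0.0])    # off-branch expected values for U_0, U_1
u = np.array([[0.0, 0.6],   # rows: component functions, columns: actions at h
              [1.0, 0.5]])

def overall_value(a):
    # worst-case (min over components) value of the glued policy
    return min(q * u[i, a] + (1 - q) * v[i] for i in range(2))

# argmax for the *updated* infrafunction at h:
a_updated = max(range(2), key=overall_value)
# a "naive" update that forgets the off-branch values v:
a_naive = max(range(2), key=lambda a: min(u[i, a] for i in range(2)))

# U_0 is well-satisfied off-branch, so the updated agent lets it "drop out"
# and optimizes for U_1 instead; the naive agent does worse overall.
assert overall_value(a_updated) >= overall_value(a_naive)
```

In this instance the updated agent picks the action that's bad for the already-satisfied $U_0$ but good for the struggling $U_1$, exactly the behavior described above.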

Why Worst Case?

Worst case seems a bit sketchy. Aren't there more sane things to do, like having a probability distribution on utility functions and combining them according to geometric average? That's what Nash bargaining does to aggregate a bunch of utility functions into one! Scott Garrabrant wrote an entire sequence about that sort of stuff!

Well, guess what, it fits in the infrafunction framework. Geometric averaging of utility functions ends up being writeable as an infrafunction! (But I don't know what it corresponds to worst-casing over). First up, a handy little result.

Proposition 2: Lp Double Integral Inequality

If $f \ge 0$, let $\oint_p f d\mu$ be an abbreviation for $(\int f^p d\mu)^{1/p}$. Then for all $p \in [-\infty, 1]$, and $\mu: \Delta X$ and $\nu: \Delta Y$ and $f: X \times Y \to \mathbb{R}_{\ge 0}$, we have that

$$\int \oint_p f(x,y) d\nu \, d\mu \le \oint_p \int f(x,y) d\mu \, d\nu$$

Corollary 1: Lp-Averaging is Well-Defined

Given any distribution $\nu$ over a family of functions or infrafunctions $F_i \ge 0$, define the $L^p$-average of this family (for $p \in [-\infty, 1]$) as the function $\mu \mapsto \oint_p F_i(\mu) d\nu$. $L^p$-averaging always produces an infrafunction.

Corollary 2: Geometric Mean Makes Infrafunctions

The geometric mean of a distribution $\nu$ over utility functions is an infrafunction.

Proof: The geometric mean of a distribution over utility functions $U_i$ is the function $\mu \mapsto e^{\int \ln(U_i(\mu)) d\nu}$. However, the geometric mean is the same as the $L^0$ integral, so it's actually writeable as $\mu \mapsto \oint_0 U_i(\mu) d\nu$, and we can apply Corollary 1 to get that it's an infrafunction.

So, this $L^p$-mixing, for $p \in [-\infty, 1]$, is... well, for $1$, it's just usual mixing. For $0$, it's taking the geometric average of the functions. For $-\infty$, it's taking the minimum of all the functions. So, it provides a nice way to interpolate between minimization, geometric averaging, and arithmetic averaging, and all these ways of aggregating functions produce infrafunctions.
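A quick numerical sketch of that interpolation (my own illustration; it assumes strictly positive scores so the geometric mean is defined):

```python
import numpy as np

# L^p-average of positive scores with weights summing to 1:
# p = 1 is the arithmetic mean, p -> 0 the geometric mean, p -> -inf the minimum.
def lp_average(values, weights, p):
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if p == 0:  # the p -> 0 limit of (sum w * v^p)^(1/p) is the geometric mean
        return float(np.exp(np.sum(weights * np.log(values))))
    return float(np.sum(weights * values ** p) ** (1.0 / p))

vals, w = [4.0, 1.0], [0.5, 0.5]
assert np.isclose(lp_average(vals, w, 1), 2.5)               # arithmetic mean
assert np.isclose(lp_average(vals, w, 0), 2.0)               # geometric mean
assert np.isclose(lp_average(vals, w, -50), 1.0, atol=0.05)  # approaches the min
```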

Just don't ask me what utility functions are actually in the infrafunction corresponding to a geometric mixture.

The Crappy Optimizer Theorem

Technically, this theorem doesn't actually belong in this post, but it's close enough to the subject matter to throw it in anyways. It turns out that "every" (not really) vaguely optimizer-ish process can be reexpressed as some sort of ultradistribution. An ultradistribution is basically an infradistribution (a closed convex set of probability distributions), except it maximizes functions instead of minimizing them.

And so, "every" (not really) optimizer-y process can be thought of as just argmax operating over a more restricted set of probability distributions.

Try not to read too much into the Crappy Optimizer Theorem. I'd very strongly advise that you take your favorite non-argmax process and work out how it violates the assumptions of the theorem. Hopefully that'll stop you from thinking this theorem is the final word on optimization processes.

Anyways, let's discuss this. The type signature of argmax is $(X \to \mathbb{R}) \to X$. Let's say we're looking for some new sort of optimizer that isn't argmax. We want a function $s$ of the same type signature that "doesn't try as hard".

We don't actually know what $s$ is! It could be anything. However, there's a function $Q: ((X \to \mathbb{R}) \to X) \to ((X \to \mathbb{R}) \to \mathbb{R})$, which I'll call the "score-shift" function, defined as follows:

$$s \mapsto (f \mapsto f(s(f)))$$

Basically, given an optimizer and a function, you run the optimizer on the function to get a good input, and shove that input through the function to get a score. As a concrete example, $Q(\mathrm{argmax}) = \max$. If you have a function $f$, argmax over it, and plug the result of argmax back into $f$, that's the same as taking the function $f$ and producing a score of $\max(f)$.

So, instead of studying the magical optimizer black box $s$, we'll be studying $Q(s)$, characterizing the optimization process by what scores it attains on various functions. There are four properties in particular which, it seems, any good optimizer should fulfill.

1: c-additivity. For any constant function $c$ and function $f$, $Q(s)(f+c) = Q(s)(f) + c$.

2: Homogeneity. For any $a \ge 0$ and function $f$, $Q(s)(a \cdot f) = a \cdot Q(s)(f)$.

3: Subadditivity. For any functions $f, g$, $Q(s)(f+g) \le Q(s)(f) + Q(s)(g)$.

4: Zero bound. For any function $f \le 0$, $Q(s)(f) \le 0$.

Rephrasing this: though we don't know what the optimization-y process $s$ is, it's quite plausible that it'll fulfill the following four properties.

1: If you add a constant to the input function, you'll get that constant added to your score.

2: If you rescale the input function, it rescales the score the optimization process attains.

3: Optimizing the sum of two functions does worse than optimizing them separately and adding your best scores together.

4: Optimizing a function that's never positive can't produce a positive score.
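On a finite input space, these four properties are easy to check numerically for $s = \mathrm{argmax}$ (a toy sketch of mine, representing a function $X \to \mathbb{R}$ as a vector of its values):

```python
import numpy as np

# The score-shift map: Q(s)(f) = f(s(f)), with an "optimizer" s picking an index.
def Q(s):
    return lambda f: f[s(f)]

argmax = lambda f: int(np.argmax(f))
score = Q(argmax)  # Q(argmax) = max

rng = np.random.default_rng(0)
f, g = rng.normal(size=5), rng.normal(size=5)
c, a = 1.7, 2.5

assert np.isclose(score(f + c), score(f) + c)       # 1: c-additivity
assert np.isclose(score(a * f), a * score(f))       # 2: homogeneity
assert score(f + g) <= score(f) + score(g) + 1e-12  # 3: subadditivity
assert score(np.minimum(f, 0.0)) <= 0.0             # 4: zero bound
```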

Exercise: Which of these properties does softmax break? Which of these properties does gradient ascent with infinitesimal step-size break?

Also, notice that the first two properties combined are effectively saying "if you try to optimize a utility function, the optimization process will ignore scales and shifts in that utility function".

Theorem 4: Crappy Optimizer Theorem

For any selection process $s$ where $Q(s)$ fulfills the four properties above, $\forall f: Q(s)(f) = \max_{\mu \in \Psi} \mu(f)$ will hold for some closed convex set of probability distributions $\Psi$. Conversely, the function $f \mapsto \max_{\mu \in \Psi} \mu(f)$, for any closed convex set $\Psi$, will fulfill the four properties of an optimization process.

Informal Corollary 3: Any selection process $s$ where $Q(s)$ fulfills the four properties is effectively just argmax but using a restricted set of probability distributions.

Other Aspects of Infrafunctions

Infrafunctions are the analogue of random variables in inframeasure theory. Here are two useful properties of them.

First off, in the classical case of functions, we can average functions together. There's a distinguished averaging function $\Delta(X \to \mathbb{R}) \to (X \to \mathbb{R})$. Pretty obvious.

When we go to infrafunctions, this gets extended somewhat. There's a distinguished function $\Box FX \to FX$, where $\Box$ is the space of infradistributions, and $FX$ is the space of infrafunctions. If you know enough category theory, we can say that the space of infrafunctions is a $\Box$-algebra, where $\Box$ is the infradistribution monad. If you don't know enough category theory: basically, there's an infra-version of "averaging points together", and it makes all the diagrams commute really nicely.
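Here's a finite toy sketch of that infra-averaging operation (my own illustration, anticipating the flat map of Proposition 3 below: $\mathrm{flat}(\psi)(\mu) = \psi(F \mapsto F(\mu))$, with the infradistribution $\psi$ represented as worst-case over a small credal set of weightings):

```python
import numpy as np

# Two infrafunctions on distributions over a 2-point space, each given as a
# worst-case over expectations of utility vectors:
u1, u2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
Fs = [lambda mu: float(u1 @ mu),                       # an ordinary utility function
      lambda mu: min(float(u1 @ mu), float(u2 @ mu))]  # worst-case over u1, u2

# An "infradistribution over infrafunctions": worst-case over a credal set
# of weightings on the family Fs.
credal = [np.array([1.0, 0.0]), np.array([0.3, 0.7])]

def flat(mu):
    # flat(psi)(mu) = psi(F -> F(mu)): evaluate every F at mu, then take the
    # worst case over the credal set's mixtures of those scores.
    scores = np.array([F(mu) for F in Fs])
    return min(float(w @ scores) for w in credal)

mu = np.array([0.6, 0.4])
# F_0 scores 0.6, F_1 scores 0.4; the worst mixture gives 0.3*0.6 + 0.7*0.4 = 0.46
assert np.isclose(flat(mu), 0.46)
```

The resulting `flat(mu)` is again a minimum of mixtures of concave functions of $\mu$, i.e. an infrafunction on the same space.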

Proposition 3: The space of infrafunctions is a $\Box$-algebra, with the function $\mathrm{flat}_{FX}: \Box FX \to FX$ being defined as $\lambda \psi. \lambda \mu. \psi(\lambda F. F(\mu))$.

Also, in the classical case, if you've got a function $g: X \to Y$, that produces a function $(Y \to \mathbb{R}) \to (X \to \mathbb{R})$ in a pretty obvious way. Just precompose: $f \mapsto (x \mapsto f(g(x)))$.

A similar thing holds here. Except, in this case, instead of a function, we can generalize further to a continuous infrakernel $k: X \to \Box Y$. Again, to get a function $FY \to FX$, you just precompose: $G \mapsto (\mu \mapsto G(k^*(\mu)))$. Take the distribution $\mu$, shove it through the infrakernel $k$ to get an infradistribution on $Y$, and shove that through the infrafunction $G$.

Proposition 4: All continuous infrakernels $k: X \to \Box Y$ induce a function $FY \to FX$ via $G \mapsto (\mu \mapsto G(k^*(\mu)))$.

So, given an infrakernel (and functions are a special case of this) going one way, you can transfer infrafunctions backwards from one space to the other.
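For the special case where the infrakernel is just a function $g: X \to Y$ (so applying the kernel is pushing the distribution forward along $g$), here's a finite sketch of the pullback (my own toy example, with made-up utility vectors):

```python
import numpy as np

g = np.array([0, 0, 1])  # a map from X = {0, 1, 2} to Y = {0, 1}
utilities_Y = [np.array([1.0, 0.0]), np.array([0.2, 0.8])]

def G(mu_Y):
    # an infrafunction on Y: worst-case expectation over two utility vectors
    return min(float(u @ mu_Y) for u in utilities_Y)

def pullback(mu_X):
    # precompose G with the pushforward: mu_X |-> G(g_* mu_X)
    mu_Y = np.zeros(2)
    np.add.at(mu_Y, g, mu_X)
    return G(mu_Y)

mu_X = np.array([0.5, 0.25, 0.25])
# the pullback agrees with directly evaluating the precomposed utilities u∘g:
direct = min(float(u[g] @ mu_X) for u in utilities_Y)
assert np.isclose(pullback(mu_X), direct)
```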

The restriction to continuous infrakernels, though, actually does matter here. A function $f: X \to Y$ induces two infrakernels, one of type $X \to \Box Y$ (the image), and one of type $Y \to \Box X$ (the preimage). So, theoretically, we could get a function of type $(X \to Y) \to (FX \to FY)$ by routing through the preimage function. However, since this requires continuity of the function $Y \to \Box X$ to make things work out, you can only reverse the direction if the function mapping a point $y$ to its preimage is Hausdorff-continuous. So, for functions with Hausdorff-continuous inverses, you can flip the usual direction and go $(X \to Y) \to (FX \to FY)$. But this trick doesn't work in general; only $(X \to \Box Y) \to (FY \to FX)$ is valid in general.

There's a bunch of other things you can do, like intersection of infrafunctions making a new infrafunction, and union making a new infrafunction. Really, most of the same stuff that works with infradistributions.

The field is wide open. But it's a single framework that can accommodate Scott's generalized epistemic states, Scott's geometric averaging, Knightian uncertainty, intersecting and unioning that uncertainty, averaging, quantilizers, dynamic consistency, worst-case reasoning, and approximate maximization. So it seems quite promising for future use.