This is a supplemental post to Geometric Utilitarianism (And Why It Matters), in which I show that when all agents have positive weight ψi, the optimal geometric weighted average moves continuously across the Pareto frontier as we change those weights. I also show that we can extend this continuity result to all weights ψ, if we're willing to accept an arbitrarily good approximation of maximizing G(_,ψ). I think of this as a bonus result which makes the geometric average a bit more appealing as a way to aggregate utilities, and the main post goes into more detail about the problem and why it's interesting.

High Level Overview

How does changing ψ affect the optima of G(_,ψ)? Ideally, we'd like a small change in the weights assigned to each agent to cause a small change in the resulting joint utility p. In other words, we would like p(ψ) = argmax_{u∈F} G(u,ψ) to be continuous. It turns out this is true when ψi>0 for all agents, and there's at least one way to create a continuous function ψ→p that works for all ψ∈Δn and is an arbitrarily good approximation of G(_,ψ) maximization.

We've already solved the inverse problem: given a point p and Harsanyi weights ϕ which make p optimal according to H(_,ϕ), find geometric weights ψ that make p optimal according to G(_,ψ).

ψ = (p⊙ϕ)/(p⋅ϕ)

So we already have ψ(p,ϕ), which is a smooth function of its inputs where it's defined.^{[1]}
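To make the formula concrete, here's a small numerical sketch (the values of p and ϕ are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical joint utility p and Harsanyi weights ϕ for two agents.
p = np.array([2.0, 1.0])
phi = np.array([0.25, 0.75])

# ψ = (p ⊙ ϕ) / (p ⋅ ϕ): the Hadamard product of p and ϕ,
# scaled down by their dot product so the components sum to 1.
psi = (p * phi) / np.dot(p, phi)

print(psi)        # [0.4 0.6]
print(psi.sum())  # 1.0
```

Note that ψ lands on Δn automatically: dividing by p⋅ϕ is exactly the normalization that makes the components sum to 1.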

It turns out we can invert this function and recover (p,ϕ) from ψ when pi>0 and ϕi>0 for all agents. This is the interior of what we'll be calling the Harsanyi Bundle; the set of all of the pairs (p,ϕ) which are consistent with our Pareto frontier P. This post will show that there is a bijection between the interior of the Harsanyi Bundle and the interior of the set of all valid weights Δn.

So we have a smooth bijection (p,ϕ)→ψ between these two subsets of R2n and Rn respectively. And thankfully we're doing calculus, where this is sufficient to establish that the inverse function ψ→(p,ϕ) is at least continuous.^{[2]} And that is the main result this post sets out to prove: that individual utilities shift continuously as geometric weights shift.

Establishing this homeomorphism with the interior of Δn means that the interior of the Harsanyi Bundle is an n−1 dimensional manifold. The last part of this post involves extending our map ψ→(p,ϕ) so that it's continuous across all of Δn. My current favorite way to do this is to introduce a new continuous function β(ψ) which maps all weights ψ into the interior of Δn, and then use those positive weights to find the unique optimum of G(_,β(ψ)).

Parameterizing the Pareto Hypersurface

Just like the Harsanyi hyperplane H, we can think of the Pareto frontier P as a set of joint utilities, or as a function P:Rn−1→Rn which maps utilities for the first n−1 agents into that set. Pn(v) returns the highest feasible utility agent n can receive, given that the first n−1 agents receive utilities defined by v. (Or undefined if v is already infeasible.) And Pi(v)=vi for i<n.

We can think of P as a hypersurface that lies "above" the feasible utilities for the first n−1 agents.

Where P is differentiable, the Harsanyi hyperplane H for a point p lines up exactly with the tangent hyperplane at that point TpP.^{[3]} The Harsanyi weights ϕ are orthogonal to TpP, and so there is only one choice of ϕ which causes p to maximize H(_,ϕ).

But where P isn't differentiable, such as at corners, the tangent hyperplane isn't defined, and there can be multiple hyperplanes H which keep F all on one side. And so there can be many consistent values for ϕ at these points.

When the slope of P jumps discontinuously at a point p, these slopes and all of the slopes in between can be used to find all of the valid assignments for ϕ at p. When F looks like a jewel with multiple flat faces meeting at corners, we can identify ϕ(Fi) for each face Fi. The valid assignments for ϕ at a corner p are all of the convex combinations of ϕ(Fi) for all of the faces Fi that meet at that corner p.^{[4]} ϕ only acts like a function when there is only one valid assignment, and in general we can call the set of all valid assignments Φ(p).

Computing the Harsanyi Weights

So far we've been treating ϕ as coming from a black box, but now that we've parameterized P it's actually straightforward to compute ϕ at differentiable points p∈P.

What we have is a function P:Rn−1→Rn, and what we want to do is construct a new function ^P:Rn→R such that P is a level set of ^P. This causes the gradient ∇^P to be orthogonal to P, which is exactly the direction we want ϕ to point!

Starting from un=Pn(v), we can rearrange this to get 0=Pn(v)−un. Which is a level set of ^P(u)=Pn(v(u))−un.

∂^P/∂ui = ∂Pn/∂vi for i<n

∂^P/∂un = −1

And that's ∇^P! To get ϕ, all we have to do is normalize ∇^P so that its components sum to 1. If we use →1∈Rn to denote the vector whose components are all 1, then

ϕ = ∇^P / ∑_{i=1}^n ∇^Pi = ∇^P / (∇^P⋅→1).

For an arbitrary surface that might wiggle up and down, this procedure won't necessarily guarantee that ϕ∈Δn. But this is a Pareto frontier, where we know that ∂Pn/∂vi ≤ 0; increasing agent i's utility never increases the feasible utility for agent n. P might wiggle down, but it never wiggles up, and that keeps ϕ in its valid range wherever P is differentiable.
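As a sanity check, we can run this normalization numerically on a hypothetical curved frontier (a quarter circle for two agents, chosen only for illustration), estimating ∇^P with a finite difference:

```python
import numpy as np

# Hypothetical 2-agent Pareto frontier: u2 = Pn(u1) = sqrt(1 - u1^2).
def Pn(v):
    return np.sqrt(1.0 - v**2)

v = 0.6  # agent 1's utility; agent 2 then gets Pn(0.6) = 0.8

# ∇^P = (∂Pn/∂v, −1), with the slope estimated by a central difference.
h = 1e-6
slope = (Pn(v + h) - Pn(v - h)) / (2 * h)
grad = np.array([slope, -1.0])

# Normalize so the components sum to 1. Since ∂Pn/∂v ≤ 0 on a Pareto
# frontier, every component of ∇^P is ≤ 0, and dividing by their
# (negative) sum yields nonnegative weights ϕ in the simplex.
phi = grad / grad.sum()

print(phi)  # ≈ [0.4286 0.5714], i.e. (3/7, 4/7)
```

Here the slope at v=0.6 is −0.75, so grad = (−0.75, −1) normalizes to ϕ = (3/7, 4/7): both components positive and summing to 1, as the argument above predicts.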

We also know that increasing pi never decreases ϕi. So ∂ϕi/∂pi ≥ 0, which implies that ∂²Pn/∂vi² ≤ 0. We'll use this fact later when looking at curves which increase pi, and thus monotonically increase ϕi.

The Harsanyi Bundle

In order to claim that ψ(p,ϕ) changes continuously as we change p and ϕ, we need to be able to define what that even means in the context of P. If we take a curve γ:R→P that travels continuously along P, then ϕ(γ) will change discontinuously at corners no matter how we break ties.

All pairs (p,ϕ) come from P×Δn, but let's restrict our attention to a subset I'll denote (P,Φ) which contains all of the valid pairs (p,ϕ) which are consistent with P. So (P,Φ)={(p,ϕ)|p∈P,ϕ∈Φ(p)}. It turns out this forms a surface in 2n-dimensional space I'll call the Harsanyi Bundle, analogous to the tangent bundle in differential geometry.^{[5]}

Where P is differentiable, there is only one valid assignment for ϕ. So any continuous curve γ through these parts of P corresponds to a continuous curve χ through (P,Φ). At non-differentiable points p, χ can travel continuously to any point in (p,Φ(p)), including the endpoints which allow it to continue on to other parts of the Harsanyi bundle.

When projected onto P, χ looks like a continuous curve along P that sometimes hangs out at corners while ϕ rotates to line up with the next face along the path of χ.

The upshot is that any two points in (P,Φ) can be reached using continuous paths, so we can think of it as a single continuous surface embedded in 2n-dimensional space. And these correspond to continuous changes in geometric weight ψ.

The Harsanyi Shadow

We're trying to invert ψ:(P,Φ)→Δn, which has the formula

ψ = (p⊙ϕ)/(p⋅ϕ)

If we were writing a program to compute ψ(p,ϕ), our first step would be to compute p⊙ϕ. This is the element-wise product of p and ϕ, also called their Hadamard product. We can think of ⊙ as a linear map ⊙:Rn×Rn→Rn, which takes us from 2n-dimensional space back down to n-dimensional space. And it turns out that the image ⊙(P,Φ), consisting of all the points p⊙ϕ where (p,ϕ)∈(P,Φ), forms a hypersurface that lies "under" P.

I'm calling this hypersurface the Harsanyi Shadow of P, and I think of it as a projection of the Harsanyi Bundle back down into n-dimensional space. As always if there's a standard name or notation for something I generally prefer to switch to that. We'll also show that at least the interior of the Harsanyi Shadow is an n−1 dimensional manifold, since it's in smooth bijection with the interior of Δn.

In this example, the grey line segments on the Harsanyi Shadow correspond to black line segments ¯AB and ¯BC on the Pareto frontier. The blue line segments correspond to the points A, B, and C, and the values of ϕ which make them optimal according to H(_,ϕ).

In particular, points like A and C on the boundary of P, and thus on the boundary of (P,Φ), correspond to "wings" on the Harsanyi Shadow which lie on the same line from the origin. When these wings are normalized onto Δn in the next step of calculating ψ, these will all end up at the same point.

The Harsanyi Shadow of Convex Hulls

Any convex set like F can be thought of as the convex hull of a set of points X. When X is finite, which is generally the case when it represents something like a deterministic policy for each agent, P will be made up of flat surfaces that meet at corners. These correspond to two types of surface in (P,Φ):

"Horizontal" surfaces where ϕ is constant and p is changing

"Vertical" surfaces where p is constant and ϕ is changing.

When one element of a Hadamard product is constant, such as ϕ for "horizontal" surfaces, we can think of it as a linear map ⊙(_,ϕ):Rn→Rn. This corresponds to a diagonal matrix, which we can write in diagonal notation diag(ϕ) or in components as ⊙(_,ϕ)ij=δijϕi. This is invertible if and only if ϕj>0 for all agents. So flat surfaces on P map linearly to flat surfaces on (P,Φ), which map linearly to flat surfaces on ⊙(P,Φ). ⊙(_,ϕ(H)):H→Rn acts linearly when restricted to points on the same hyperplane H.

Since we have an explicit formula for ϕ, we can use that to write down an explicit formula for p⊙ϕ

p⊙ϕ = p⊙∇^P / (∇^P⋅→1) = (1/(∇^P⋅→1)) (p⊙∇^P)

In components, this looks like

(p⊙ϕ)i = (pi/(∇^P⋅→1)) ∂Pn/∂vi for i<n

(p⊙ϕ)n = −pn/(∇^P⋅→1)

The Harsanyi Shadow of a Hyperplane

We can approximate any Pareto frontier using pieces of hyperplanes, and the Harsanyi Shadow of this approximation will be made up of the pieces of H⊙ϕ(H). And it turns out that these pieces are all parallel to Δn! Which helps a lot in understanding the geometry of the Harsanyi Shadow, and why the interior pieces are in one-to-one correspondence with pieces of Δn.

I noticed this playing around with examples, and I recommend playing with this one. If you pick two points A and B, you can draw the line between them, and calculate ϕ for that line. ⊙(_,ϕ) is always a linear map, and when ϕi>0 for all agents, it's an invertible linear map that maps this line to another line on the Harsanyi Shadow. And it turns out this line on the Harsanyi Shadow will always have slope -1! Just like the standard 2-simplex Δ2 where ϕ and ψ live. And in general, H⊙ϕ(H) is a hyperplane with the same slope as Δn, as long as ϕi>0 for all agents.
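Here's that two-point experiment as a quick script (the endpoints A and B are hypothetical, picked so the segment between them slopes down like a Pareto frontier):

```python
import numpy as np

# Two hypothetical points spanning one flat face of a 2-agent Pareto frontier.
A = np.array([1.0, 3.0])
B = np.array([3.0, 2.0])

# ϕ for this face: normal to the segment AB, normalized to sum to 1.
d = B - A
normal = np.array([-d[1], d[0]])   # rotate the direction vector 90 degrees
phi = normal / normal.sum()

# Map both endpoints through ⊙(_, ϕ) onto the Harsanyi Shadow.
A_shadow = A * phi
B_shadow = B * phi

# The image segment has slope −1, parallel to the simplex Δ2.
slope = (B_shadow[1] - A_shadow[1]) / (B_shadow[0] - A_shadow[0])
print(slope)  # ≈ -1.0
```

With these numbers ϕ = (1/3, 2/3), the segment AB has slope −1/2, and yet its image under ⊙(_,ϕ) comes out with slope −1, as claimed.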

This is why the grey line segments in our example were parallel to the red line segment of valid weights; flat surfaces on the Pareto frontier map onto flat surfaces on the Harsanyi Shadow that are parallel to Δn.

To see why this happens for hyperplanes in general, we can use the fact that ϕ is orthogonal to H at p to write down the normal equation for H. It's all of the points u∈Rn which satisfy

ϕ⋅(u−p)=0

The image of H after going through the map ⊙(_,ϕ), which we can denote H⊙ϕ, is all of the points u⊙ϕ where u∈H. One such point is p⊙ϕ, and in general there will be a vector I'll suggestively call δ∈Rn which is orthogonal to H⊙ϕ. This normal vector δ satisfies the equation

δ⋅(u⊙ϕ − p⊙ϕ) = 0

Which means we can rewrite the normal equation for H⊙ϕ as

δ⋅(ϕ⊙(u−p))=0

Here I needed to go back to component notation to know how I could simplify that equation further.

∑_{i=1}^n δi(ϕi(ui−pi)) = 0

∑_{i=1}^n δiϕi(ui−pi) = 0

(δ⊙ϕ)⋅(u−p)=0

This is great, because we also know from the normal equation for H that

ϕ⋅(u−p)=0

And in fact, we know that any scalar multiple λϕ is also orthogonal to H

λϕ⋅(u−p)=0

And so one family of solutions for δ comes from solving

δ⊙ϕ=λϕ

Which has the solution δi=λ.

δ=λ→1

This line is orthogonal to H⊙ϕ, and it's also orthogonal to Δn! So H⊙ϕ and Δn are parallel in the sense that they're orthogonal to the same line. And when ϕi>0 for all agents, H⊙ϕ is a hyperplane with the same normal vector as Δn.

The Harsanyi Shadow of a Corner

At a corner p, the Harsanyi Shadow p⊙Φ(p) will be a subset of p⊙Δn. And when pi>0 for all agents, ⊙(p,_) is an invertible linear map that takes the standard simplex Δn to a simplex with a similarly-negative slope in all directions.

Since pi≥0, ⊙(p,_) can't flip the sign of the slope of this simplex. Together with the results from the previous sub-section about hyperplanes, we can conclude that the Harsanyi Shadow of (P,Φ) never has positive slope. (Just like P and Δn.) The main relevance for us is that in the interior, the Harsanyi Shadow never slopes up in a way that would allow two points to lie on the same line from the origin, which would violate injectivity when we normalize it onto Δn. (That line would need to have positive slope, which the Harsanyi Shadow never has.)

The Harsanyi Shadow of Curved Hypersurfaces

Where P curves differentiably, its points correspond to "diagonal" surfaces in (P,Φ) where p and ϕ are both changing.

Inverting the Hadamard Product

When can we invert ⊙ and recover (p,ϕ) from p⊙ϕ? When pi>0 and ϕi>0 for all agents! This leads to ψi>0 for all agents, and we saw here that this ensures that G(_,ψ) has a unique optimum among F. (And finding this optimum is exactly what we mean by recovering p.)

Why is this true? Here's where we use the fact that ∂ϕi/∂pi ≥ 0, that increasing pi never decreases ϕi. In order for ⊙ to cause a collision, there would need to be two points (p,ϕ),(q,τ)∈(P,Φ) such that p⊙ϕ=q⊙τ. This corresponds to n equations that look like

piϕi=qiτi

Holding piϕi constant, this becomes the equation of a hyperbola. And the claim is that if pi>0 and ϕi>0 for all agents, then these equations only have one solution among (P,Φ), and it's (p,ϕ).

For concreteness, suppose pi=2 and ϕi=3. Then piϕi=6, and we could try picking qi>pi, like qi=3. But this would require τi=2, which can't happen because ∂ϕi/∂pi ≥ 0; on the Pareto frontier, increasing the utility assigned to an agent can't decrease the Harsanyi weight they must have been assigned. The same problem happens if we try to assign qi<pi. The only solution is qi=pi.

So for the interior of the Harsanyi Bundle (P,Φ), projecting by ⊙ is injective and we can uniquely recover (p,ϕ) from p⊙ϕ.

Normalizing the Harsanyi Shadow

Once we have p⊙ϕ, we can calculate p⋅ϕ by simply adding up all of its elements! p⋅ϕ = ∑_{i=1}^n piϕi. This is a single number, which acts on p⊙ϕ by scaling it down to land on Δn, the hypersurface of weights that add up to 1. And that's ψ!

ψ = (p⊙ϕ)/(p⋅ϕ)

This normalization map Δ:⊙(P,Φ)→Δn follows the same pattern as ⊙: in the interior of the Harsanyi Shadow we can uniquely recover p⊙ϕ from (p⊙ϕ)/(p⋅ϕ). And for those points we know we can also uniquely recover (p,ϕ) from p⊙ϕ.

Normalizing the Interior of the Harsanyi Shadow is Injective

Is it possible for two different points on ⊙(P,Φ) to get scaled down to the same point on Δn? This would require p⊙ϕ = λ(q⊙τ) for some scalar λ∈R, where (p,ϕ),(q,τ)∈(P,Φ). Around the boundary of the Harsanyi Shadow this can happen, but it turns out it can't in the interior!

As we've seen, ∂ϕi/∂pi ≥ 0, so increasing pi never decreases ϕi. This means that increasing pi always increases (p⊙ϕ)i in the interior. (This is the step that fails on the boundary when pi=0 or ϕi=0 for any agent.) But increasing the utility for one agent never increases the feasible utility available for any other agent: ∂pj/∂pi ≤ 0 when i≠j.

This is why the interior of the Harsanyi Shadow doesn't contain any points on the same line from the origin. Moving from (p,ϕ) to (q,τ) would involve simultaneously increasing (or simultaneously decreasing) (p⊙ϕ)i for all agents. But increasing pi for one agent, and thus (p⊙ϕ)i, must monotonically decrease pj for all other agents, and thus monotonically decrease (p⊙ϕ)j. And similarly, decreasing pi never decreases (p⊙ϕ)j in the interior.

Normalizing the Harsanyi Shadow is Surjective

Given ψ∈Δn, can we always find (p,ϕ)∈(P,Φ) such that

ψ = (p⊙ϕ)/(p⋅ϕ)

This one is pretty straightforward: G(_,ψ) always has some optima, and at least one of them will be a point p∈P which is consistent with a ϕ∈Δn that satisfies this equation. Because that's the "find ψ to make p optimal according to G(_,ψ)" equation.

Geometrically, this tells us that the Harsanyi Shadow of P doesn't have any holes. If we draw a line from the origin through ψ, it will hit the Harsanyi Shadow somewhere.

Moving Continuously Across the Harsanyi Bundle

Putting it all together, in the first part of this sequence we started by computing the weights ψ which would make a chosen point p∈P optimal according to G(_,ψ). We formalized that here by choosing a point (p,ϕ)∈(P,Φ), computing their Hadamard product p⊙ϕ, then scaling this down to land on Δn.

ψ = (p⊙ϕ)/(p⋅ϕ)

P→(P,Φ)→⊙(P,Φ)→Δn

Going in the reverse direction, we can draw a line from the origin through ψ to find where that line intersects the image ⊙(P,Φ), which we've been calling the Harsanyi Shadow. If ψ is in the interior of Δn, where ψi>0 for all agents, this point p⊙ϕ on the interior of the Harsanyi Shadow is unique! And from there we can uniquely recover the point (p,ϕ) in the interior of the Harsanyi Bundle, such that p is optimal according to G(_,ψ) and H(_,ϕ).

In this last section, I want to describe the challenge of extending this continuity result to include weights ψ where at least one agent has 0 geometric weight, and some ways of addressing that challenge.

The Challenge of Continuity Around the Boundary

We've called the weight-derivation function ψ(p,ϕ), so let's call the inverse ψ⁻¹(ψ), which corresponds to maximizing G(_,ψ) and breaking ties around the boundary somehow. Ideally, we would like this function to be continuous, so that individual utilities shift continuously as geometric weights shift. In order for that to be true, ψ⁻¹ needs to be equal to its limit points everywhere.

ψ⁻¹(ψ) = lim_{σ→ψ} ψ⁻¹(σ)

We have a bijection between the interiors of Δn and (P,Φ), between weights ψ where ψi>0 for all agents and pairs (p,ϕ) where pi>0 and ϕi>0 for all agents. And so my first hope was that we could take the limit as we approach the boundary, to define the value at every point along the boundary. Unfortunately, this limit depends on how we approach corner points, in a way that makes me suspect that there's no way to inherit values for ψ⁻¹ around the boundary that simultaneously

Agrees with argmax_{u∈F} G(u,ψ) exactly on the interior

Leads to ψ⁻¹ being continuous when its domain includes the boundary

Motivating Example

When multiple agents have 0 weight according to ψ, there can be many Pareto optima that maximize G(_,ψ). For example, suppose that Alice is a G(_,ψ) maximizer deciding how much money she should receive, from $0-$100. And simultaneously, how to split another $100 between Bob and Carol. Conveniently, they each derive utility equal to the money they receive.

When trade-offs are linear, maximizing G(_,ψ) splits utility between agents proportionally to their weight. In particular, when an agent receives a vanishingly small, but positive, share of the geometric weight ψi, they receive a vanishingly small, but positive, share of the utility ui.

In this example, let's say that Alice assigns herself ψAlice=α. The claim I'll be making is that in this example, we can't satisfy those two desiderata from the previous section. We can't extend ψ⁻¹ exactly in a way that makes it continuous.

To see why, let's first consider what happens if Bob receives a vanishingly small weight ψBob=ϵ, and Carol receives the remaining weight ψCarol=1−α−ϵ. If we take the limit as ϵ approaches 0, uBob approaches $0 and uCarol approaches $100.

What if the roles are swapped, so that ψCarol=ϵ, and Bob gets the rest of the weight ψBob=1−α−ϵ? Then if we take the limit as ϵ approaches 0, uBob approaches $100 and uCarol approaches $0.

So along one boundary of Δn, if we inherit values according to the limit as we approach the boundary, uBob=$0 and uCarol=$100 because ψBob=0 and ψCarol>0. And along another boundary, uBob=$100 and uCarol=$0 because ψBob>0 and ψCarol=0.

What happens at the corner where these boundaries meet, where ψBob=0 and ψCarol=0? No matter what value we assign, ψ⁻¹ must jump discontinuously here. And in fact, we can reach any split between Bob and Carol by approaching on different paths that preserve different ratios between their vanishingly small weight. (Equal weight gets them an equal split, a 60:40 ratio of weight gets them a 60:40 split of the money, and so on.)
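A tiny script illustrates this path-dependence, using the proportional-split fact stated above (bob_payout is a hypothetical helper for this linear example, not the full G maximization):

```python
# With linear trade-offs, maximizing G(_, ψ) splits the $100 between Bob
# and Carol in proportion to their weights, so Bob's payout is
# 100 * ψ_Bob / (ψ_Bob + ψ_Carol).
def bob_payout(psi_bob, psi_carol):
    return 100.0 * psi_bob / (psi_bob + psi_carol)

eps = 1e-9

# Path 1: Bob's weight vanishes while Carol keeps positive weight.
print(bob_payout(eps, 0.5))              # ≈ 0

# Path 2: Carol's weight vanishes while Bob keeps positive weight.
print(bob_payout(0.5, eps))              # ≈ 100

# Approaching the corner with a fixed 60:40 ratio of vanishing weights:
# the limit is a 60:40 split, so the limit at the corner depends on the
# path taken, and no single value there can make ψ⁻¹ continuous.
print(bob_payout(0.6 * eps, 0.4 * eps))  # ≈ 60
```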

The issue is that maximizing a weighted average, whether a linear average like H(_,ϕ) or a geometric average like G(_,ψ), really does treat 0 weight differently from a small but positive weight ϵ. A positive weight makes the weighted average sensitive to increases in an agent's utility, even if only a little bit. But there can be an arbitrarily huge difference between "the payoff Bob receives if Alice assigns him 0 weight" and "the payoff Bob receives if Alice assigns him some tiny amount of weight."

How to Achieve Continuity Anyway

So it seems like in order to achieve continuity, we need to do one of the following:

Ensure that ψi>0 for all agents

Maximize something other than G(_,ψ)

One way to take that second approach would be to derive new weights β(ψ) and then maximize G(_,β(ψ)). If βi>0 for all agents, no matter what weights ψ we started with, then we can use all that work we did in this post to show that G(_,β(ψ)) has a unique optimum that travels continuously around P as we change β. If β(ψ) is continuous, then so is ψ⁻¹(ψ).

ψ⁻¹(ψ) = argmax_{u∈F} G(u, β(ψ))

Ideally, we'd like β to be very close to ψ, while assigning all agents positive weight. One way to do this is to pick a very small amount of weight ϵ, and scale ψ down so that ∑_{i=1}^n ψi = 1−ϵ. Then we can distribute that small weight ϵ equally among all n agents, giving them a minimum weight of βi = ϵ/n.

βi = ψi(1−ϵ) + ϵ/n

This ensures that the sum of weights ∑_{i=1}^n βi is still 1. We can pick ϵ to be arbitrarily small, making the difference β−ψ arbitrarily small. If all we want is a tie-breaker, and we don't care about continuity, we can take the limit as ϵ approaches 0.

ψ⁻¹(ψ) = lim_{ϵ→0} argmax_{u∈F} G(u, β(ψ,ϵ))
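A sketch of this padding map (the function name beta and the sample weights are illustrative):

```python
import numpy as np

# β(ψ, ϵ): scale ψ down to total weight 1−ϵ, then share ϵ equally,
# guaranteeing every agent at least ϵ/n weight.
def beta(psi, eps):
    psi = np.asarray(psi, dtype=float)
    return psi * (1.0 - eps) + eps / len(psi)

# A corner of the simplex: Alice takes all the weight, Bob and Carol none.
psi = np.array([1.0, 0.0, 0.0])
b = beta(psi, 0.03)

print(b)        # ≈ [0.98 0.01 0.01]
print(b.sum())  # ≈ 1.0
```

Every component of β is strictly positive and the components still sum to 1, so the continuity result for interior weights applies to β(ψ) even when ψ sits on the boundary.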

But for the applications I have in mind, I actually prefer continuity even if it means requiring a minimum positive weight for all agents. In terms of morality, that seems like a feature rather than a bug. It seems like the kind of feature that might get a superintelligent AI to leave us the Milky Way, even if it seizes the rest of the observable universe for its own misaligned ends.

Anything we deem an "agent" probably does deserve some positive weight in our utility aggregation function. This particular formula is inspired by moral reasoning along the lines of "reserve a small amount of weight ϵ and distribute it equally among all agents, regardless of any other considerations like their ability to reciprocally benefit the decision-maker."

So my current favorite approaches involve always assigning agents positive weight when it comes to G(_,ψ) maximization. Which might look like designing the weight-attribution function so that it's always positive, or it might look like padding ψ so that every agent has positive weight as far as G(_,ψ) is concerned.

^{[1]} This function isn't defined when p⋅ϕ=0, but in this case the entire feasible set F is a single point at the origin. And indeed in this case, any reconstruction function will look like p(ψ,{→0})=→0, which is continuous!

^{[3]} The tangent space for a manifold isn't always a hyperplane. For example, consider a circle embedded in 3-dimensional space; each tangent space is still just a line. This section is the reason we assumed F is n-dimensional (there aren't any redundant players with constant utility), so that P is an n−1 dimensional hypersurface with tangent hyperplanes.

^{[4]} All of the convex combinations within Δn anyway. Some faces can be oriented so that if you try calculating ϕ for them, you end up with ϕi<0 for some agent. These faces are Pareto dominated, and when calculating the valid values of ϕ at a corner we can ignore convex combinations that extend beyond Δn.

^{[5]} At least the interior of this surface is an n−1 dimensional manifold. I haven't proven or disproven whether the whole Harsanyi Bundle is a manifold, but I suspect it is. P is an n−1 dimensional manifold, and so is Δn, but in order for (P,Φ) to be a manifold, it needs to always stay n−1 dimensional and never fork or collapse down to a lower number of dimensions.

This is a supplemental post to Geometric Utilitarianism (And Why It Matters), in which I show that when all agents have positive weight ψi, the optimal geometric weighted average moves continuously across the Pareto frontier as we change those weights. I also show that we can extend this continuity result to all weights ψ, if we're willing to accept an arbitrarily good approximation of maximizing G(_,ψ). I think of this as a bonus result which makes the geometric average a bit more appealing as a way to aggregate utilities, and the main post goes into more detail about the problem and why it's interesting.

## High Level Overview

How does changing ψ affect the optima of G(_,ψ)? Ideally, we'd like a small change in the weights assigned to each agent to cause a small change in the resulting joint utility p. In other words, we would like p(ψ)=argmaxu∈FG(u,ψ) to be continuous. It turns out this is true when ψi>0 for all agents, and there's at least one way to create a continuous function ψ→p that works for all ψ∈Δn and is an arbitrarily good approximation of G(_,ψ) maximization.

We've already solved the inverse problem: given a point p and Harsanyi weights ϕ which make p optimal according to H(_,ϕ), find geometric weights ψ that make p optimal according to G(_,ψ).

ψ=p⊙ϕp⋅ϕ

So we already have ψ(p,ϕ), which is a smooth function of its inputs where it's defined.

^{[1]}It turns out we can invert this function and recover (p,ϕ) from ψ when pi>0 and ϕi>0 for all agents. This is the interior of what we'll be calling the Harsanyi Bundle; the set of all of the pairs (p,ϕ) which are consistent with our Pareto frontier P. This post will show that there is a bijection between the interior of the Harsanyi Bundle and the interior of the set of all valid weights Δn.

So we have a smooth bijection (p,ϕ)→ψ between these two subsets of R2n and Rn respectively. And thankfully we're doing calculus, where this is sufficient to establish that the inverse function ψ→(p,ϕ) is at least continuous.

^{[2]}And that is the main result this post sets out to prove: that individual utilities shift continuously as geometric weights shift.Establishing this homeomorphism with the interior of Δn means that the interior of the Harsanyi Bundle is an n−1 dimensional manifold. The last part of this post involves extending our map ψ→(p,ϕ) so that it's continuous across all of Δn. My current favorite way to do this is to introduce a new continuous function β(ψ) which maps all weights ψ into the interior of Δn, and then using those positive weights to find the unique optimum of G(_,β(ψ)).

## Parameterizing the Pareto Hypersurface

Just like the Harsanyi hyperplane H, we can think of the Pareto frontier P as a set of joint utilities, or as a function P:Rn−1→Rn which maps utilities for the first n−1 agents into that set. Pn(v) returns the highest feasible utility agent n can receive, given that the first n−1 agents receive utilities defined by v. (Or undefined if v is already infeasible.) And Pi(v)=vi for i<n.

We can think of P as a hypersurface that lies "above" the feasible utilities for the first n−1 agents.

Where P is differentiable, the Harsanyi hyperplane H for a point p lines up exactly with the tangent hyperplane at that point TpP.

^{[3]}The Harsanyi weights ϕ are orthogonal to TpP, and so there is only one choice of ϕ which causes p to maximize H(_,ϕ).But where P isn't differentiable, such as at corners, the tangent hyperplane isn't defined, and there can be multiple hyperplanes H which keep F all on one side. And so there can be many consistent values for ϕ at these points.

When the slope of P jumps discontinuously at a point p, these slopes and all of the slopes in between can be used to find all of the valid assignments for ϕ at p. When F looks like a jewel with multiple flat faces meeting at corners, we can identify ϕ(Fi) for each face Fi. The valid assignments for ϕ at a corner p are all of the convex combinations of ϕ(Fi) for all of the faces Fi that meet at that corner p.

^{[4]}ϕ only acts like a function when there is only one valid assignment, and in general we can call the set of all valid assignments Φ(p).## Computing the Harsanyi Weights

So far we've been treating ϕ as coming from a black box, but now that we've parameterized P it's actually straightforward to compute ϕ at differentiable points p∈P.

What we have is a function P:Rn−1→Rn, and what we want to do is construct a new function ^P:Rn→R such that P is a level set of ^P. This causes the gradient ∇^P to be orthogonal to P, which is exactly the direction we want ϕ to point!

Starting from un=Pn(v), we can rearrange this to get 0=Pn(v)−un. Which is a level set of ^P(u)=Pn(v(u))−un.

And that's ∇^P! To get ϕ, all we have to do is normalize ∇^P so that its components sum to 1. If we use →1∈Rn to denote the vector whose components are all 1, then

ϕ=∇^P∑ni=1∇^Pi=∇^P^P⋅→1.

For an arbitrary surface that might wiggle up and down, this procedure won't necessarily guarantee that ϕ∈Δn. But this is a Pareto frontier, where we know that ∂Pn∂vi≤0; increasing agent i's utility never increases the feasible utility for agent n. P might wiggle down, but it never wiggles up, and that keeps ϕ in its valid range wherever P is differentiable.

We also know that increasing pi never decreases ϕi. So ∂ϕi∂pi≥0, which implies that ∂2Pn∂v2i≤0. We'll use this fact later when looking at curves which increase pi, and thus monotonically increase ϕi.

## The Harsanyi Bundle

In order to claim that ψ(p,ϕ) changes continuously as we change p and ϕ, we need to be able to define what that even means in the context of P. If we take a curve γ:R→P that travels continuously along P, then ϕ(γ) will change discontinuously at corners no matter how we break ties.

All pairs (p,ϕ) come from P×Δn, but let's restrict our attention to a subset I'll denote (P,Φ) which contains all of the valid pairs (p,ϕ) which are consistent with P. So (P,Φ)={(p,ϕ)|p∈P,ϕ∈Φ(p)}. It turns out this forms a surface in 2n-dimensional space I'll call the Harsanyi Bundle, analogous to tangent bundle in differential geometry.

^{[5]}Where P is differentiable, there is only one valid assignment for ϕ. So any continuous curve γ through these parts of P corresponds to a continuous curve χ through (P,Φ). At non-differentiable points p, χ can travel continuously to any point in (p,Φ(p)), including the endpoints which allow it to continue on to other parts of the Harsanyi bundle.

When projected onto P, χ looks like a continuous curve along P that sometimes hangs out at corners while ϕ rotates to line up with the next face along the path of χ.

The upshot is that any two points in (P,Φ) can be reached using continuous paths, so we can think of it as a single continuous surface embedded in 2n-dimensional space. And these correspond to continuous changes in geometric weight ψ.

## The Harsanyi Shadow

We're trying to invert ψ:(P,Φ)→Δn, which has the formula

ψ=p⊙ϕp⋅ϕ

If we were writing a program to compute ψ(p,ϕ), our first step would be to compute p⊙ϕ. This is the element-wise product of p and ϕ, also called their Hadamard product. We can think of ⊙ as a linear map ⊙:Rn×Rn→Rn, which takes us from 2n-dimensional space back down to n-dimensional space. And it turns out that the image ⊙(P,Φ), consisting of all the points p⊙ϕ where (p,ϕ)∈(P,Φ), forms a hypersurface that lies "under" P.

I'm calling this hypersurface the Harsanyi Shadow of P, and I think of it as a projection of the Harsanyi Bundle back down into n-dimensional space. As always if there's a standard name or notation for something I generally prefer to switch to that. We'll also show that at least the interior of the Harsanyi Shadow is an n−1 dimensional manifold, since it's in smooth bijection with the interior of Δn.

In this example, the grey line segments on the Harsanyi Shadow correspond to black line segments ¯AB and ¯BC on the Pareto frontier. The blue line segments correspond to the points A, B, and C, and the values of ϕ which make them optimal according to H(_,ϕ).

In particular, points like A and C on the boundary of P, and thus on the boundary of (P,Φ), correspond to "wings" on the Harsanyi Shadow which lie on the same line from the origin. When these wings are normalized onto Δn in the next step of calculating ψ, these will all end up at the same point.

## The Harsanyi Shadow of Convex Hulls

Any convex set like F can be thought of as the convex hull of a set of points X. When X is finite, which is generally the case when it represents something like a deterministic policy for each agent, P will be made up of flat surfaces that meet at corners. These correspond to two types of surface in (P,Φ): "horizontal" surfaces, where p moves along a flat face while ϕ stays constant, and "vertical" surfaces, where p stays fixed at a corner while ϕ varies over Φ(p).

When one element of a Hadamard product is held constant, such as ϕ for "horizontal" surfaces, we can think of ⊙(_,ϕ):Rn→Rn as a linear map. It corresponds to a diagonal matrix, which we can write as diag(ϕ), or in components as ⊙(_,ϕ)ij=δijϕi. This map is invertible if and only if ϕj>0 for all agents. So flat surfaces on P map linearly to flat surfaces on (P,Φ), which map linearly to flat surfaces on ⊙(P,Φ). More generally, ⊙(_,ϕ(H)):H→Rn acts linearly when restricted to points on the same hyperplane H.

Since we have an explicit formula for ϕ, we can use it to write down an explicit formula for p⊙ϕ:

p⊙ϕ = p ⊙ (∇P̂/(∇P̂⋅→1)) = (1/(∇P̂⋅→1)) (p⊙∇P̂)

In components, this looks like

(p⊙ϕ)i = pi∇P̂i / ∑nj=1∇P̂j

## The Harsanyi Shadow of a Hyperplane

We can approximate any Pareto frontier using pieces of hyperplanes, and the Harsanyi Shadow of this approximation will be made up of the pieces of H⊙ϕ(H). And it turns out that these pieces are all parallel to Δn! Which helps a lot in understanding the geometry of the Harsanyi Shadow, and why the interior pieces are in one-to-one correspondence with pieces of Δn.

I noticed this while playing around with examples, and I recommend playing with this one. If you pick two points A and B, you can draw the line between them and calculate ϕ for that line. ⊙(_,ϕ) is always a linear map, and when ϕi>0 for all agents, it's an invertible linear map that takes this line to another line on the Harsanyi Shadow. And it turns out this line on the Harsanyi Shadow will always have slope −1! Just like the standard 2-simplex Δ2 where ϕ and ψ live. And in general, H⊙ϕ(H) is a hyperplane with the same slope as Δn, as long as ϕi>0 for all agents.

This is why the grey line segments in our example were parallel to the red line segment of valid weights; flat surfaces on the Pareto frontier map onto flat surfaces on the Harsanyi Shadow that are parallel to Δn.

To see why this happens for hyperplanes in general, we can use the fact that ϕ is orthogonal to H at p to write down the normal equation for H. It's all of the points u∈Rn which satisfy

ϕ⋅(u−p)=0

The image of H under the map ⊙(_,ϕ), which we can denote H⊙ϕ, is all of the points u⊙ϕ where u∈H. One such point is p⊙ϕ, and in general there will be a vector, which I'll suggestively call δ∈Rn, that is orthogonal to H⊙ϕ. This normal vector δ satisfies the equation

δ⋅(u⊙ϕ−p⊙ϕ)=0

Since the Hadamard product is distributive and commutative, we know that

u⊙ϕ−p⊙ϕ=ϕ⊙(u−p)

Which means we can rewrite the normal equation for H⊙ϕ as

δ⋅(ϕ⊙(u−p))=0

Here I needed to go back to component notation to know how I could simplify that equation further.

∑ni=1δi(ϕi(ui−pi))=0

∑ni=1δiϕi(ui−pi)=0

(δ⊙ϕ)⋅(u−p)=0

This is great, because we also know from the normal equation for H that

ϕ⋅(u−p)=0

And in fact, we know that any scalar multiple λϕ is also orthogonal to H

λϕ⋅(u−p)=0

And so one family of solutions for δ comes from solving

δ⊙ϕ=λϕ

Which has the solution δi=λ for every agent i. In vector form:

δ=λ→1

This line is orthogonal to H⊙ϕ, and it's also orthogonal to Δn! So H⊙ϕ and Δn are parallel in the sense that they're orthogonal to the same line. And when ϕi>0 for all agents, H⊙ϕ is a hyperplane with the same normal vector as Δn.
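The algebra above is easy to spot-check numerically: sample points u on a hyperplane H with normal ϕ, and confirm that u⊙ϕ−p⊙ϕ is orthogonal to →1, which is exactly the statement that H⊙ϕ is parallel to Δn. A minimal sketch (the specific ϕ and p are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.array([0.5, 0.3, 0.2])   # normal vector of the hyperplane H
p = np.array([1.0, 2.0, 3.0])     # a base point on H

for _ in range(5):
    # Sample an arbitrary point, then project it onto H so that phi·(u−p) = 0
    u = rng.uniform(-5, 5, size=3)
    u = u - ((phi @ (u - p)) / (phi @ phi)) * phi
    assert abs(phi @ (u - p)) < 1e-9          # u lies on H

    # The displacement within H⊙phi is orthogonal to the all-ones vector,
    # since (u⊙phi − p⊙phi)·→1 = phi·(u−p) = 0. So H⊙phi is parallel to Δn.
    diff = u * phi - p * phi
    assert abs(diff @ np.ones(3)) < 1e-9
```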

## The Harsanyi Shadow of a Corner

At a corner p, the Harsanyi Shadow p⊙Φ(p) will be a subset of p⊙Δn. And when pi>0 for all agents, ⊙(p,_) is an invertible linear map that takes the standard simplex Δn to a simplex with a similarly-negative slope in all directions.

Since pi≥0, ⊙(p,_) can't flip the sign of the slope of this simplex. Together with the results from the previous sub-section about hyperplanes, we can conclude that the Harsanyi Shadow of (P,Φ) never has positive slope. (Just like P and Δn.) The main relevance for us is that in the interior, the Harsanyi Shadow never slopes up in a way that would allow two points to lie on the same line from the origin, which would violate injectivity when we normalize it onto Δn. (That line would need to have positive slope, which the Harsanyi Shadow never has.)

## The Harsanyi Shadow of Curved Hypersurfaces

Where P curves differentiably, points of P correspond to "diagonal" surfaces in (P,Φ), where p and ϕ are both changing.

## Inverting the Hadamard Product

When can we invert ⊙ and recover (p,ϕ) from p⊙ϕ? When pi>0 and ϕi>0 for all agents! This leads to ψi>0 for all agents, and we saw here that this ensures that G(_,ψ) has a unique optimum in F. (And finding this optimum is exactly what we mean by recovering p.)

Why is this true? Here's where we use the fact that ∂ϕi/∂pi≥0: increasing pi never decreases ϕi. In order for ⊙ to cause a collision, there would need to be two points (p,ϕ),(q,τ)∈(P,Φ) such that p⊙ϕ=q⊙τ. This corresponds to n equations that look like

piϕi=qiτi

Holding piϕi constant, this becomes the equation of a hyperbola. And the claim is that if pi>0 and ϕi>0 for all agents, then these equations only have one solution among (P,Φ), and it's (p,ϕ).

For concreteness, suppose pi=2 and ϕi=3. Then piϕi=6, and we could try picking qi>pi, like qi=3. But this would require τi=2, which can't happen because ∂ϕi/∂pi≥0; on the Pareto frontier, increasing the utility assigned to an agent can't decrease the Harsanyi weight they must have been assigned. The same problem happens if we try to assign qi<pi. The only solution is qi=pi.

So for the interior of the Harsanyi Bundle (P,Φ), projecting by ⊙ is injective and we can uniquely recover (p,ϕ) from p⊙ϕ.
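This monotonicity argument can be watched at work on a concrete curved frontier. Take the quarter circle p(θ)=(cos θ, sin θ): there the outward normal at p is p itself, so ϕ(p)=p/(p⋅→1) and the shadow point is p⊙ϕ=(p1², p2²)/(p1+p2). A quick numeric check (the parametrization is my own example) that each shadow coordinate moves strictly monotonically, so ⊙ never causes a collision in the interior:

```python
import numpy as np

theta = np.linspace(0.01, np.pi / 2 - 0.01, 1000)
p = np.stack([np.cos(theta), np.sin(theta)])   # points on the quarter-circle frontier
phi = p / p.sum(axis=0)                        # normal at p is p itself, normalized
shadow = p * phi                               # p ⊙ phi: points on the Harsanyi Shadow

# As theta increases, p2 rises and p1 falls; the shadow coordinates follow suit,
# so no two interior frontier points share a shadow point.
assert np.all(np.diff(shadow[0]) < 0)   # (p⊙phi)_1 strictly decreasing
assert np.all(np.diff(shadow[1]) > 0)   # (p⊙phi)_2 strictly increasing
```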

## Normalizing the Harsanyi Shadow

Once we have p⊙ϕ, we can calculate p⋅ϕ by simply adding up all of its elements! p⋅ϕ=∑ni=1piϕi. This is a single number, which acts on p⊙ϕ by scaling it down to land on Δn, the hypersurface of weights that add up to 1. And that's ψ!

ψ = (p⊙ϕ)/(p⋅ϕ)

This normalization map Δ:⊙(P,Φ)→Δn follows the same pattern as ⊙; in the interior of the Harsanyi Shadow we can uniquely recover p⊙ϕ from (p⊙ϕ)/(p⋅ϕ). And for those points we know we can also uniquely recover (p,ϕ) from p⊙ϕ.

## Normalizing the Interior of the Harsanyi Shadow is Injective

Is it possible for two different points on ⊙(P,Φ) to get scaled down to the same point on Δn? This would require p⊙ϕ=λ(q⊙τ) for some scalar λ∈R, where (p,ϕ),(q,τ)∈(P,Φ). Around the boundary of the Harsanyi Shadow this can happen, but it turns out it can't in the interior!

As we've seen, ∂ϕi/∂pi≥0, so increasing pi never decreases ϕi. This means that increasing pi always increases (p⊙ϕ)i in the interior. (This is the step that fails on the boundary, when pi=0 or ϕi=0 for any agent.) But increasing the utility for one agent never increases the feasible utility available to any other agent: ∂pj/∂pi≤0 when i≠j.

This is why the interior of the Harsanyi Shadow doesn't contain any points on the same line from the origin. Moving from (p,ϕ) to (q,τ) would involve simultaneously increasing (or simultaneously decreasing) (p⊙ϕ)i for all agents. But increasing pi for one agent, and thus (p⊙ϕ)i, must monotonically decrease pj for all other agents, and thus monotonically decrease (p⊙ϕ)j. And similarly, decreasing pi never decreases (p⊙ϕ)j in the interior.

## Normalizing the Harsanyi Shadow is Surjective

Given ψ∈Δn, can we always find (p,ϕ)∈(P,Φ) such that

ψ = (p⊙ϕ)/(p⋅ϕ)?

This one is pretty straightforward: G(_,ψ) always has some optima, and at least one of them will be a point p∈P which is consistent with a ϕ∈Δn that satisfies this equation. Because that's the "find ψ to make p optimal according to G(_,ψ)" equation.

Geometrically, this tells us that the Harsanyi Shadow of P doesn't have any holes. If we draw a line from the origin through ψ, it will hit the Harsanyi Shadow somewhere.

## Moving Continuously Across the Harsanyi Bundle

Putting it all together, in the first part of this sequence we started by computing the weights ψ which would make a chosen point p∈P optimal according to G(_,ψ). We formalized that here by choosing a point (p,ϕ)∈(P,Φ), computing their Hadamard product p⊙ϕ, then scaling this down to land on Δn.

ψ = (p⊙ϕ)/(p⋅ϕ)

P→(P,Φ)→⊙(P,Φ)→Δn

Going in the reverse direction, we can draw a line from the origin through ψ to find where that line intersects the image ⊙(P,Φ), which we've been calling the Harsanyi Shadow. If ψ is in the interior of Δn, where ψi>0 for all agents, this point p⊙ϕ on the interior of the Harsanyi Shadow is unique! And from there we can uniquely recover the point (p,ϕ) in the interior of the Harsanyi Bundle, such that p is optimal according to G(_,ψ) and H(_,ϕ).
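For a linear frontier, the whole round trip can be checked in closed form. Suppose two agents split $100, so P is the segment from (100, 0) to (0, 100) with constant Harsanyi weights ϕ=(1/2, 1/2). Then ψ = p/100, and maximizing G(_,ψ) splits the $100 proportionally to ψ, which recovers p exactly. A sketch verifying this against a brute-force G maximization (the specific numbers and grid search are mine):

```python
import numpy as np

phi = np.array([0.5, 0.5])   # Harsanyi weights on the face p1 + p2 = 100
p = np.array([30.0, 70.0])   # a chosen point on the frontier

# Forward map: psi = (p⊙phi)/(p·phi), which here simplifies to p/100
psi = (p * phi) / (p @ phi)
assert np.allclose(psi, p / 100)

# Reverse direction: maximize G(u, psi) = u1^psi1 * u2^psi2 over the frontier
t = np.linspace(1, 99, 9801)          # grid over u1, with u2 = 100 - u1
u = np.stack([t, 100 - t])
G = u[0] ** psi[0] * u[1] ** psi[1]
recovered = u[:, np.argmax(G)]        # proportional split lands back on p
assert np.allclose(recovered, p, atol=0.05)
```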

In this last section, I want to describe the challenge of extending this continuity result to include weights ψ where at least one agent has 0 geometric weight, and some ways of addressing that challenge.

## The Challenge of Continuity Around the Boundary

We've called the weight-derivation function ψ(p,ϕ), so let's call the inverse ψ−1(ψ), which corresponds to maximizing G(_,ψ) and breaking ties around the boundary somehow. Ideally, we would like this function to be continuous, so that individual utilities shift continuously as geometric weights shift. In order for that to be true, ψ−1 needs to be equal to its limit points everywhere.

ψ−1(ψ) = lim_{σ→ψ} ψ−1(σ)

We have a bijection between the interiors of Δn and (P,Φ): between weights ψ where ψi>0 for all agents and pairs (p,ϕ) where pi>0 and ϕi>0 for all agents. And so my first hope was that we could take the limit as we approach the boundary, to define the value at every point along the boundary. Unfortunately, this limit depends on how we approach corner points, in a way that makes me suspect that there's no way to inherit values for ψ−1 around the boundary that simultaneously keeps ψ−1 continuous and keeps its output an exact maximizer of G(_,ψ).

## Motivating Example

When multiple agents have 0 weight according to ψ, there can be many Pareto optima that maximize G(_,ψ). For example, suppose that Alice is a G(_,ψ) maximizer deciding how much money she should receive, from $0-$100. And simultaneously, how to split another $100 between Bob and Carol. Conveniently, they each derive utility equal to the money they receive.

When trade-offs are linear, maximizing G(_,ψ) splits utility between agents proportionally to their weight. In particular, when an agent receives a vanishingly small, but positive, share of the geometric weight ψi, they receive a vanishingly small, but positive, share of the utility ui.

In this example, let's say that Alice assigns herself ψAlice=α. The claim I'll be making is that in this example, we can't satisfy those two desiderata from the previous section. We can't extend ψ−1 exactly in a way that makes it continuous.

To see why, let's first consider what happens if Bob receives a vanishingly small weight ψBob=ϵ, and Carol receives the remaining weight ψCarol=1−α−ϵ. If we take the limit as ϵ approaches 0, uBob approaches $0 and uCarol approaches $100.

What if the roles are swapped, so that ψCarol=ϵ, and Bob gets the rest of the weight ψBob=1−α−ϵ? Then if we take the limit as ϵ approaches 0, uBob approaches $100 and uCarol approaches $0.

So along one boundary of Δn, if we inherit values according to the limit as we approach the boundary, uBob=$0 and uCarol=$100 because ψBob=0 and ψCarol>0. And along another boundary, uBob=$100 and uCarol=$0 because ψBob>0 and ψCarol=0.

What happens at the corner where these boundaries meet, where ψBob=0 and ψCarol=0? No matter what value we assign, ψ−1 must jump discontinuously here. And in fact, we can reach any split between Bob and Carol by approaching on different paths that preserve different ratios between their vanishingly small weight. (Equal weight gets them an equal split, a 60:40 ratio of weight gets them a 60:40 split of the money, and so on.)
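Using the proportional-split fact above, the corner discontinuity is easy to see numerically. Holding Alice's weight fixed, Bob's share of the $100 is ψBob/(ψBob+ψCarol), and approaching the corner ψBob=ψCarol=0 along different fixed ratios gives different limits (a minimal sketch; the helper name is mine):

```python
def bob_share(psi_bob, psi_carol):
    # With linear trade-offs, G(_, psi) maximization splits the $100
    # between Bob and Carol proportionally to their weights.
    return 100 * psi_bob / (psi_bob + psi_carol)

# Shrink both weights toward 0 while preserving their ratio.
# The limit depends on the ratio, not just on the corner being approached:
for ratio in [(1, 1), (3, 2), (1, 99)]:
    eps = 1e-9
    share = bob_share(ratio[0] * eps, ratio[1] * eps)
    print(ratio, round(share, 2))   # (1, 1)→50.0, (3, 2)→60.0, (1, 99)→1.0
```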

The issue is that maximizing a weighted average, whether a linear average like H(_,ϕ) or a geometric average like G(_,ψ), really does treat 0 weight differently from a small but positive weight ϵ. A positive weight makes the weighted average sensitive to increases in an agent's utility, even if only a little bit. But there can be an arbitrarily huge difference between "the payoff Bob receives if Alice assigns him 0 weight" and "the payoff Bob receives if Alice assigns him some tiny amount of weight."

## How to Achieve Continuity Anyway

So it seems like in order to achieve continuity, we need to do one of the following:

- Give up on continuity around the boundary, and break ties among the optima of G(_,ψ) somehow.
- Give up on exactly maximizing G(_,ψ), and instead maximize an arbitrarily good approximation of it.

One way to take that second approach would be to derive new weights β(ψ) and then maximize G(_,β(ψ)). If βi>0 for all agents, no matter what weights ψ we started with, then we can use all that work we did in this post to show that G(_,β(ψ)) has a unique optimum that travels continuously around P as we change β. If β(ψ) is continuous, then so is ψ−1(ψ).

ψ−1(ψ) = argmax_{u∈F} G(u,β(ψ))

Ideally, we'd like β to be very close to ψ, while assigning all agents positive weight. One way to do this is to pick a very small amount of weight ϵ, and scale ψ down so that ∑ni=1ψi=1−ϵ. Then we can distribute that small weight ϵ equally among all n agents, giving them a minimum weight of ϵ/n:

βi = ψi(1−ϵ) + ϵ/n

This ensures that the sum of weights ∑ni=1βi is still 1. We can pick ϵ to be arbitrarily small, making the difference β−ψ arbitrarily small. If all we want is a tie-breaker, and we don't care about continuity, we can take the limit as ϵ approaches 0.

ψ−1(ψ) = lim_{ϵ→0} argmax_{u∈F} G(u,β(ψ,ϵ))
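The padding formula is direct to implement. A minimal sketch (the function name is mine):

```python
import numpy as np

def pad_weights(psi, eps=1e-6):
    """beta_i = psi_i * (1 - eps) + eps / n: scale psi down to total 1 - eps,
    then distribute the reserved eps equally, so beta_i >= eps/n > 0."""
    psi = np.asarray(psi, dtype=float)
    n = len(psi)
    return psi * (1 - eps) + eps / n

# A zero-weight agent gets the floor weight eps/n, and the total stays 1
beta = pad_weights([0.0, 0.3, 0.7], eps=0.01)
assert np.isclose(beta.sum(), 1.0)
assert np.all(beta > 0)
```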

But for the applications I have in mind, I actually prefer continuity even if it means requiring a minimum positive weight for all agents. In terms of morality, that seems like a feature rather than a bug. It seems like the kind of feature that might get a superintelligent AI to leave us the Milky Way, even if it seizes the rest of the observable universe for its own misaligned ends.

Anything we deem an "agent" probably does deserve some positive weight in our utility aggregation function. This particular formula is inspired by moral reasoning along the lines of "reserve a small amount of weight ϵ and distribute it equally among all agents, regardless of any other considerations like their ability to reciprocally benefit the decision-maker."

So my current favorite approaches involve always assigning agents positive weight when it comes to G(_,ψ) maximization. Which might look like designing the weight-attribution function so that it's always positive, or it might look like padding ψ so that every agent has positive weight as far as G(_,ψ) is concerned.

^{^}This function isn't defined when p⋅ϕ=0, but in this case the entire feasible set F is a single point at the origin. And indeed in this case, any reconstruction function will look like p(ψ,{→0})=→0, which is continuous!

^{^}For topological spaces in general, the inverse of a continuous bijection isn't necessarily continuous.

^{^}The tangent space for a manifold isn't always a hyperplane. For example, consider a circle embedded in 3-dimensional space; each tangent space is still just a line. This section is the reason we assumed F is n-dimensional (there aren't any redundant players with constant utility), so that P is an n−1 dimensional hypersurface with tangent hyperplanes.

^{^}All of the convex combinations within Δn anyway. Some faces can be oriented so that if you try calculating ϕ for them, you end up with ϕi<0 for some agent. These faces are Pareto dominated, and when calculating the valid values of ϕ at a corner we can ignore convex combinations that extend beyond Δn.

^{^}At least the interior of this surface is an n−1 dimensional manifold. I haven't proven or disproven whether the whole Harsanyi Bundle is a manifold, but I suspect it is. P is an n−1 dimensional manifold, and so is Δn, but in order for (P,Φ) to be a manifold, it needs to always stay n−1 dimensional and never fork or collapse down to a lower number of dimensions.