This is a supplemental post to Geometric Utilitarianism (And Why It Matters), in which I show that when all agents have positive weight $\phi_i > 0$, the optimum of the geometric weighted average $G_\phi$ moves continuously across the Pareto frontier as we change those weights. I also show that we can extend this continuity result to all weights $\phi \in \Delta^{n-1}$, if we're willing to accept an arbitrarily good approximation of maximizing $G_\phi$. I think of this as a bonus result which makes the geometric average a bit more appealing as a way to aggregate utilities, and the main post goes into more detail about the problem and why it's interesting.

High Level Overview

How does changing $\phi$ affect the optima of $G_\phi$? Ideally, we'd like a small change in the weights assigned to each agent to cause a small change in the resulting joint utility $u$. In other words, we would like the map from $\phi$ to the optimum of $G_\phi$ to be continuous. It turns out this is true when $\phi_i > 0$ for all agents, and there's at least one way to create a continuous function from weights to joint utilities that works for all $\phi \in \Delta^{n-1}$ and is an arbitrarily good approximation of $G_\phi$ maximization.

We've already solved the inverse problem: given a point $u \in P$ and Harsanyi weights $p$ which make $u$ optimal according to $H_p$, find geometric weights $\phi$ that make $u$ optimal according to $G_\phi$.

So we already have a function $\Phi(u, p) = \phi$, which is a smooth function of its inputs where it's defined.[1]

It turns out we can invert this function and recover $(u, p)$ from $\phi$ when $u_i > 0$ and $p_i > 0$ for all agents. This is the interior of what we'll be calling the Harsanyi Bundle: the set of all of the pairs $(u, p)$ which are consistent with our Pareto frontier $P$. This post will show that there is a bijection between the interior of the Harsanyi Bundle and the interior of the set of all valid weights $\Delta^{n-1}$.

So we have a smooth bijection $\Phi$ between these two subsets of $\mathbb{R}^{2n}$ and $\mathbb{R}^n$ respectively. And thankfully we're doing calculus, where this is sufficient to establish that the inverse function $\Phi^{-1}$ is at least continuous.[2] And that is the main result this post sets out to prove: that individual utilities shift continuously as geometric weights shift.

Establishing this homeomorphism with the interior of $\Delta^{n-1}$ means that the interior of the Harsanyi Bundle is an $(n-1)$-dimensional manifold. The last part of this post involves extending our map $\Phi^{-1}$ so that it's continuous across all of $\Delta^{n-1}$. My current favorite way to do this is to introduce a new continuous function which maps all weights $\phi \in \Delta^{n-1}$ into the interior of $\Delta^{n-1}$, and then use those positive weights to find the unique optimum of $G_\phi$.

Parameterizing the Pareto Hypersurface

Just like the Harsanyi hyperplane $H_p$, we can think of the Pareto frontier $P$ as a set of joint utilities, or as a function $f$ which maps utilities for the first $n-1$ agents into that set. $f(u_1, \ldots, u_{n-1})$ returns the highest feasible utility agent $n$ can receive, given that the first $n-1$ agents receive utilities defined by $(u_1, \ldots, u_{n-1})$. (Or is undefined if $(u_1, \ldots, u_{n-1})$ is already infeasible.) And $u_n = f(u_1, \ldots, u_{n-1})$ for $u \in P$.

We can think of $f$ as a hypersurface that lies "above" the feasible utilities for the first $n-1$ agents.
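
To make this concrete, here's a minimal sketch of such a parameterization (a two-agent toy example of my own, not from the main post, with the unit disk as the feasible set):

```python
import numpy as np

def f(u1):
    """Pareto frontier of a unit-disk feasible set for two agents:
    the highest feasible utility for agent 2, given agent 1 receives u1."""
    if not 0.0 <= u1 <= 1.0:
        return float("nan")  # u1 is already infeasible
    return np.sqrt(1.0 - u1 ** 2)
```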

A Pareto Frontier

Where $f$ is differentiable, the Harsanyi hyperplane $H_p$ for a point $u \in P$ lines up exactly with the tangent hyperplane at that point $u$.[3] The Harsanyi weights $p$ are orthogonal to $H_p$, and so there is only one choice of $p$ which causes $u$ to maximize $H_p$.

But where $f$ isn't differentiable, such as at corners, the tangent hyperplane isn't defined, and there can be multiple hyperplanes through $u$ which keep $F$ all on one side. And so there can be many consistent values for $p$ at these points.

The Harsanyi Hyperplane at a Corner
Interactive version here.

When the slope of $f$ jumps discontinuously at a point $u$, these slopes and all of the slopes in between can be used to find all of the valid assignments for $p$ at $u$. When $P$ looks like a jewel with multiple flat faces meeting at corners, we can identify $p_A$ for each face $A$. The valid assignments for $p$ at a corner $u$ are all of the convex combinations of $p_A$ for all of the faces $A$ that meet at that corner.[4] $p$ only acts like a function when there is only one valid assignment, and in general we can call the set of all valid assignments $p(u)$.

Computing the Harsanyi Weights

So far we've been treating $p$ as coming from a black box, but now that we've parameterized $P$ it's actually straightforward to compute $p$ at differentiable points $u \in P$.

What we have is a function $f : \mathbb{R}^{n-1} \to \mathbb{R}$, and what we want to do is construct a new function $g : \mathbb{R}^n \to \mathbb{R}$ such that $P$ is a level set of $g$. This causes the gradient $\nabla g$ to be orthogonal to $P$, which is exactly the direction we want $p$ to point!

Starting from $u_n = f(u_1, \ldots, u_{n-1})$, we can rearrange this to get $u_n - f(u_1, \ldots, u_{n-1}) = 0$. Which is a level set of the function

  • $g(u) = u_n - f(u_1, \ldots, u_{n-1})$ for $u \in \mathbb{R}^n$

And that's $g$! To get $p$, all we have to do is normalize the gradient $\nabla g = \left(-\frac{\partial f}{\partial u_1}, \ldots, -\frac{\partial f}{\partial u_{n-1}}, 1\right)$ so that its components sum to 1. If we use $\mathbf{1}$ to denote the vector whose components are all 1, then

$$p = \frac{\nabla g}{\mathbf{1} \cdot \nabla g}.$$

For an arbitrary surface that might wiggle up and down, this procedure won't necessarily guarantee that $p_i \geq 0$. But this is a Pareto frontier, where we know that $\frac{\partial f}{\partial u_i} \leq 0$; increasing agent $i$'s utility never increases the feasible utility for agent $n$. $f$ might wiggle down, but it never wiggles up, and that keeps $p$ in its valid range wherever $f$ is differentiable.

We also know that moving along the frontier to increase $u_i$ never decreases $p_i$. So $\frac{\partial p_i}{\partial u_i} \geq 0$, which implies that $\frac{\partial (u_i p_i)}{\partial u_i} = p_i + u_i \frac{\partial p_i}{\partial u_i} \geq 0$. We'll use this fact later when looking at curves which increase $u_i$, and thus monotonically increase $u_i p_i$.

The Harsanyi Bundle

In order to claim that $\phi$ changes continuously as we change $u$ and $p$, we need to be able to define what that even means in the context of $P$. If we take a curve $u(t)$ that travels continuously along $P$, then $p(t)$ will change discontinuously at corners no matter how we break ties.

All pairs $(u, p)$ come from $P \times \Delta^{n-1}$, but let's restrict our attention to a subset I'll denote $B$, which contains all of the valid pairs $(u, p)$ which are consistent with $P$. So $B = \{(u, p) : u \in P,\ p \in p(u)\}$. It turns out this forms a surface in $2n$-dimensional space I'll call the Harsanyi Bundle, analogous to the tangent bundle in differential geometry.[5]

Where $P$ is differentiable, there is only one valid assignment for $p$. So any continuous curve $u(t)$ through these parts of $P$ corresponds to a continuous curve $(u(t), p(t))$ through $B$. At non-differentiable points, $p(t)$ can travel continuously to any point in $p(u)$, including the endpoints which allow it to continue on to other parts of the Harsanyi Bundle.

When projected onto $P$, this looks like a continuous curve along $P$ that sometimes hangs out at corners while $p$ rotates to line up with the next face along the path of $u(t)$.

Changing p and phi Continuously Around a Corner
Check out an interactive version here!

The upshot is that any two points in $B$ can be reached using continuous paths, so we can think of it as a single continuous surface embedded in $2n$-dimensional space. And these paths correspond to continuous changes in the geometric weights $\phi$.

The Harsanyi Shadow

We're trying to invert $\Phi$, which has the formula

$$\Phi(u, p) = \frac{u \odot p}{\mathbf{1} \cdot (u \odot p)}$$

If we were writing a program to compute $\Phi$, our first step would be to compute $u \odot p$. This is the element-wise product of $u$ and $p$, also called their Hadamard product. We can think of $\odot$ as a bilinear map which takes us from $2n$-dimensional space back down to $n$-dimensional space. And it turns out that the image $S = \{u \odot p : (u, p) \in B\}$, consisting of all the points $u \odot p$ where $(u, p) \in B$, forms a hypersurface that lies "under" $P$.
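
Since we're framing $\Phi$ as a program anyway, here's a direct sketch of it (NumPy, my own code):

```python
import numpy as np

def Phi(u, p):
    """Geometric weights for the point u, given Harsanyi weights p:
    compute the Hadamard product u ⊙ p, then normalize it to sum to 1."""
    shadow = np.asarray(u, dtype=float) * np.asarray(p, dtype=float)
    return shadow / shadow.sum()
```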

The Harsanyi Shadow Under a Corner
Interactive version here

I'm calling this hypersurface the Harsanyi Shadow of $P$, and I think of it as a projection of the Harsanyi Bundle back down into $n$-dimensional space. As always, if there's a standard name or notation for something, I generally prefer to switch to that. We'll also show that at least the interior of the Harsanyi Shadow is an $(n-1)$-dimensional manifold, since it's in smooth bijection with the interior of $\Delta^{n-1}$.

In this example, the grey line segments on the Harsanyi Shadow correspond to the black line segments $\overline{AB}$ and $\overline{BC}$ on the Pareto frontier. The blue line segments correspond to the points $A$, $B$, and $C$, and the values of $p$ which make them optimal according to $H_p$.

In particular, points like $A$ and $C$ on the boundary of $P$, and thus on the boundary of $B$, correspond to "wings" on the Harsanyi Shadow which lie on the same line from the origin. When these wings are normalized onto $\Delta^{n-1}$ in the next step of calculating $\Phi$, these will all end up at the same point.

The Harsanyi Shadow of Convex Hulls

Any convex set like $F$ can be thought of as the convex hull of a set of points $V$. When $V$ is finite, which is generally the case when it represents something like a deterministic policy for each agent, $P$ will be made up of flat surfaces that meet at corners. These correspond to two types of surface in $B$:

  • "Horizontal" surfaces where  is constant and  is changing
  • "Vertical" surfaces where  is constant and  is changing.

When one element of a Hadamard product is constant, such as $p$ for "horizontal" surfaces, we can think of it as a linear map $D_p : \mathbb{R}^n \to \mathbb{R}^n$. This corresponds to a diagonal matrix, which we can write in diagonal notation $D_p = \mathrm{diag}(p)$ or in components as $(D_p)_{ij} = p_i \delta_{ij}$. This is invertible if and only if $p_i \neq 0$ for all agents. So flat surfaces on $P$ map linearly to flat surfaces on the Harsanyi Shadow, which map linearly to flat surfaces on $\Delta^{n-1}$; normalization acts linearly when restricted to points on the same hyperplane $\mathbf{1} \cdot x = c$.

Since we have an explicit formula for $D_p$, we can use that to write down an explicit formula for $D_p^{-1}$.

In components, this looks like

  • $(D_p^{-1} x)_i = x_i / p_i$ for $1 \leq i \leq n$
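
A tiny sketch of this "horizontal" case (my own code; the helper names `D` and `D_inv` are hypothetical):

```python
import numpy as np

def D(p, x):
    """Apply the diagonal map D_p = diag(p), i.e. x -> p ⊙ x."""
    return np.asarray(p) * np.asarray(x)

def D_inv(p, v):
    """Invert D_p component-wise; only valid when p_i != 0 for all agents."""
    p = np.asarray(p)
    assert np.all(p != 0), "D_p is only invertible when every p_i is nonzero"
    return np.asarray(v) / p

p = np.array([0.25, 0.75])
u = np.array([4.0, 2.0])
assert np.allclose(D_inv(p, D(p, u)), u)
```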

The Harsanyi Shadow of a Hyperplane

We can approximate any Pareto frontier using pieces of hyperplanes, and the Harsanyi Shadow of this approximation will be made up of the pieces of $D_p(H_p)$. And it turns out that these pieces are all parallel to $\Delta^{n-1}$! Which helps a lot in understanding the geometry of the Harsanyi Shadow, and why the interior pieces are in one-to-one correspondence with pieces of $\Delta^{n-1}$.

The Harsanyi Shadow of a Line
Interactive version here

I noticed this playing around with examples, and I recommend playing with this one. If you pick two points $A$ and $B$, you can draw the line between them, and calculate the Harsanyi Shadow of that line. $D_p$ is always a linear map, and when $p_i \neq 0$ for all agents, it's an invertible linear map that maps this line to another line on the Harsanyi Shadow. And it turns out this line on the Harsanyi Shadow will always have slope $-1$! Just like the standard simplex $\Delta^1$ where $p$ and $\phi$ live. And in general, $D_p(H_p)$ is a hyperplane with the same slope as $\Delta^{n-1}$, as long as $p_i \neq 0$ for all agents.

This is why the grey line segments in our example were parallel to the red line segment of valid weights; flat surfaces on the Pareto frontier map onto flat surfaces on the Harsanyi Shadow that are parallel to $\Delta^{n-1}$.

The Harsanyi Shadow Under a Corner
Interactive version here

To see why this happens for hyperplanes in general, we can use the fact that $p$ is orthogonal to $H_p$ at $u$ to write down the normal equation for $H_p$. It's all of the points $x$ which satisfy

$$p \cdot (x - u) = 0$$

The image of $H_p$ after going through the map $D_p$, which we can denote $D_p(H_p)$, is all of the points $x \odot p$ where $p \cdot (x - u) = 0$. One such point is $u \odot p$, and in general there will be a vector I'll suggestively call $q$ which is orthogonal to $D_p(H_p)$. This normal vector $q$ satisfies the equation

$$q \cdot (x \odot p - u \odot p) = 0$$

Since the Hadamard product is distributive and commutative, we know that

$$q \cdot (x \odot p - u \odot p) = q \cdot \big((x - u) \odot p\big) = (q \odot p) \cdot (x - u)$$

Which means we can rewrite the normal equation for $D_p(H_p)$ as

$$(q \odot p) \cdot (x - u) = 0$$

Here I needed to go back to component notation to know how I could simplify that equation further:

$$\sum_i q_i p_i (x_i - u_i) = 0$$

This is great, because we also know from the normal equation for $H_p$ that

$$\sum_i p_i (x_i - u_i) = 0$$

And in fact, we know that any scalar multiple $c p$ is also orthogonal to $H_p$:

$$\sum_i c\, p_i (x_i - u_i) = 0$$

And so one family of solutions for $q$ comes from solving

$$q_i p_i = c\, p_i \text{ for all } i$$

Which has the solution $q = c \mathbf{1}$.

This line of solutions is orthogonal to $D_p(H_p)$, and it's also orthogonal to $\Delta^{n-1}$! So $D_p(H_p)$ and $\Delta^{n-1}$ are parallel in the sense that they're orthogonal to the same line. And when $p_i \neq 0$ for all agents, $D_p(H_p)$ is a hyperplane with the same normal vector as $\Delta^{n-1}$.
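
There's also a quick way to check this numerically: for any $x$ on $H_p$, the coordinate-sum of its image is $\mathbf{1} \cdot (x \odot p) = p \cdot x$, which is constant across the hyperplane. A sketch (my own code):

```python
import numpy as np

# For random points x projected onto the hyperplane p · x = c, every image
# point x ⊙ p has the same coordinate-sum, i.e. D_p(H_p) has normal vector 1.
rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])
c = 1.0
for _ in range(5):
    x = rng.normal(size=3)
    x += (c - p @ x) / (p @ p) * p        # project x onto p · x = c
    assert np.isclose(np.sum(x * p), c)   # 1 · (x ⊙ p) = p · x = c
```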

The Harsanyi Shadow of a Corner

At a corner $u$, the Harsanyi Shadow $\{u \odot p : p \in p(u)\}$ will be a subset of $D_u(\Delta^{n-1})$. And when $u_i \neq 0$ for all agents, $D_u$ is an invertible linear map that takes the standard simplex $\Delta^{n-1}$ to a simplex with a similarly-negative slope in all directions.

The Harsanyi Shadow of a Corner
Interactive version here

Since the diagonal entries of $D_u$ are non-negative utilities, applying it can't flip the sign of the slope of this simplex. Together with the results from the previous sub-section about hyperplanes, we can conclude that the Harsanyi Shadow of $P$ never has positive slope. (Just like $P$ and $\Delta^{n-1}$.) The main relevance for us is that in the interior, the Harsanyi Shadow never slopes up in a way that would allow two points to lie on the same line from the origin, which would violate injectivity when we normalize it onto $\Delta^{n-1}$. (That line would need to have positive slope, which the Harsanyi Shadow never has.)

The Harsanyi Shadow of Curved Hypersurfaces

Where $P$ curves differentiably, the corresponding surfaces in $B$ are "diagonal": $u$ and $p$ are both changing.

The Hadamard-Harsanyi Manifold of a Circle
Interactive version here

Inverting the Hadamard Product

When can we invert $\odot$ and recover $(u, p)$ from $u \odot p$? When $u_i > 0$ and $p_i > 0$ for all agents! This leads to $\phi_i > 0$ for all agents, and we saw here that this ensures that $G_\phi$ has a unique optimum among the feasible joint utilities $F$. (And finding this optimum is exactly what we mean by recovering $u$.)

Why is this true? Here's where we use the fact that $\frac{\partial p_i}{\partial u_i} \geq 0$: that increasing $u_i$ never decreases $p_i$. In order for $\odot$ to cause a collision, there would need to be two points $(u, p) \neq (u', p')$ such that $u \odot p = u' \odot p'$. This corresponds to $n$ equations that look like

$$u_i p_i = u'_i p'_i$$

Holding the product $u_i p_i$ constant, this becomes the equation of a hyperbola. And the claim is that if $u_i > 0$ and $p_i > 0$ for all agents, then these equations only have one solution among the pairs in $B$, and it's $(u', p') = (u, p)$.

For concreteness, suppose we hold $u_1 p_1$ fixed and try picking a larger utility, like $u'_1 = 2 u_1$. But this would require $p'_1 = p_1 / 2 < p_1$, which can't happen because $\frac{\partial p_1}{\partial u_1} \geq 0$; on the Pareto frontier, increasing the utility assigned to an agent can't decrease the Harsanyi weight they must have been assigned. The same problem happens if we try to assign $u'_1 < u_1$. The only solution is $(u'_1, p'_1) = (u_1, p_1)$.

So for the interior of the Harsanyi Bundle, where $u_i > 0$ and $p_i > 0$ for all agents, projecting by $\odot$ is injective and we can uniquely recover $(u, p)$ from $u \odot p$.
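
Here's what that unique recovery looks like numerically on the quarter-circle toy frontier from earlier (my own code; there the gradient recipe gives $p(u) = (u_1, u_2)/(u_1 + u_2)$, so the shadow point is $(u_1^2, u_2^2)/(u_1 + u_2)$):

```python
import numpy as np
from scipy.optimize import brentq

def shadow_1(u1):
    """First coordinate of u ⊙ p along the quarter-circle frontier,
    where p(u) = (u1, u2)/(u1 + u2); monotonically increasing in u1."""
    u2 = np.sqrt(1.0 - u1 ** 2)
    return u1 ** 2 / (u1 + u2)

def recover_u(s1):
    """Uniquely recover u from the first shadow coordinate by bisection,
    which works because increasing u1 monotonically increases u1 * p1."""
    u1 = brentq(lambda x: shadow_1(x) - s1, 1e-9, 1.0 - 1e-9)
    return np.array([u1, np.sqrt(1.0 - u1 ** 2)])

print(recover_u(shadow_1(0.6)))  # ~[0.6, 0.8]
```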

Normalizing the Harsanyi Shadow

Once we have $u \odot p$, we can calculate $\mathbf{1} \cdot (u \odot p)$ by simply adding up all of its elements! This is a single number, which acts on $u \odot p$ by scaling it down to land on $\Delta^{n-1}$, the hypersurface of weights that add up to 1. And that's $\Phi$!

Recovering p and phi Along a Circle
Interactive version here

This normalization map follows the same pattern as $\odot$; in the interior of the Harsanyi Shadow we can uniquely recover $u \odot p$ from $\phi$. And for those points we know we can also uniquely recover $(u, p)$ from $u \odot p$.
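
On the quarter-circle toy frontier the whole round trip has a closed form, which makes the continuity easy to see: there $u_1^2 + u_2^2 = 1$, so $\phi = \Phi(u, p(u)) = (u_1^2, u_2^2)$, and recovering $u$ from $\phi$ is just taking square roots. A sketch (my own code):

```python
import numpy as np

def Phi(u, p):
    """phi = (u ⊙ p) / (1 · (u ⊙ p)): Hadamard product, then normalize."""
    s = np.asarray(u) * np.asarray(p)
    return s / s.sum()

phi = np.array([0.36, 0.64])
u = np.sqrt(phi)                     # recovered joint utility: (0.6, 0.8)
p = u / u.sum()                      # recovered Harsanyi weights: (3/7, 4/7)
assert np.allclose(Phi(u, p), phi)   # round trip back to the same phi
```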

Normalizing the Interior of the Harsanyi Shadow is Injective

Is it possible for two different points on the Harsanyi Shadow to get scaled down to the same point on $\Delta^{n-1}$? This would require $u' \odot p' = c \, (u \odot p)$ for some $c > 0$, where $(u', p') \neq (u, p)$. Around the boundary of the Harsanyi Shadow this can happen, but it turns out it can't in the interior!

As we've seen, $\frac{\partial p_i}{\partial u_i} \geq 0$, so increasing $u_i$ never decreases $p_i$. This means that increasing $u_i$ always increases $u_i p_i$ in the interior. (This is the step that fails on the boundary, when $u_i = 0$ or $p_i = 0$ for any agent.) But increasing the utility for one agent never increases the feasible utility available for any other agent: $\frac{\partial u_j}{\partial u_i} \leq 0$ when $j \neq i$.

This is why the interior of the Harsanyi Shadow doesn't contain any points on the same line from the origin. Moving from $u \odot p$ to $c \, (u \odot p)$ would involve simultaneously increasing (or simultaneously decreasing) $u_i p_i$ for all agents. But increasing $u_i$ for one agent, and thus $u_i p_i$, must monotonically decrease $u_j$ for all other agents, and thus monotonically decrease $u_j p_j$. And similarly, decreasing $u_i$ never decreases $u_j p_j$ for the other agents in the interior.

Normalizing the Harsanyi Shadow is Surjective

Given $\phi \in \Delta^{n-1}$, can we always find $(u, p)$ such that $\Phi(u, p) = \phi$?

This one is pretty straightforward: $G_\phi$ always has some optima, and at least one of them will be a point $u \in P$ which is consistent with a $p$ that satisfies this equation. Because that's the "find $\phi$ to make $u$ optimal according to $G_\phi$" equation.

Geometrically, this tells us that the Harsanyi Shadow of $P$ doesn't have any holes. If we draw a line from the origin through $\phi$, it will hit the Harsanyi Shadow somewhere.
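
And finding such an optimum is an ordinary optimization problem. A sketch on the quarter-circle toy frontier (my own code, maximizing $\log G_\phi$, which has the same optima as $G_\phi$):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def maximize_G(phi):
    """Point on the quarter-circle frontier maximizing
    log G_phi(u) = phi1 * log(u1) + phi2 * log(u2)."""
    neg_log_G = lambda u1: -(phi[0] * np.log(u1)
                             + phi[1] * np.log(np.sqrt(1.0 - u1 ** 2)))
    res = minimize_scalar(neg_log_G, bounds=(1e-9, 1.0 - 1e-9), method="bounded")
    return np.array([res.x, np.sqrt(1.0 - res.x ** 2)])

print(maximize_G(np.array([0.36, 0.64])))  # ~[0.6, 0.8], i.e. u = sqrt(phi)
```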

Moving Continuously Across the Harsanyi Bundle

Putting it all together, in the first part of this sequence we started by computing the weights $\phi$ which would make a chosen point $u \in P$ optimal according to $G_\phi$. We formalized that here by choosing a pair $(u, p) \in B$, computing their Hadamard product $u \odot p$, then scaling this down to land on $\Delta^{n-1}$.

Going in the reverse direction, we can draw a line from the origin through $\phi$ to find where that line intersects the image $S$, which we've been calling the Harsanyi Shadow. If $\phi$ is in the interior of $\Delta^{n-1}$, where $\phi_i > 0$ for all agents, this point $u \odot p$ on the interior of the Harsanyi Shadow is unique! And from there we can uniquely recover the pair $(u, p)$ in the interior of the Harsanyi Bundle, such that $u$ is optimal according to $G_\phi$ and $H_p$.

In this last section, I want to describe the challenge of extending this continuity result to include weights $\phi$ where at least one agent has 0 geometric weight, and some ways of addressing that challenge.

The Challenge of Continuity Around the Boundary

We've called the weight-derivation function $\Phi$, so let's call the inverse $\Phi^{-1}$, which corresponds to maximizing $G_\phi$ and breaking ties around the boundary somehow. Ideally, we would like this function to be continuous, so that individual utilities shift continuously as geometric weights shift. In order for that to be true, $\Phi^{-1}$ needs to agree with its limits everywhere.

We have a bijection between the interiors of $\Delta^{n-1}$ and $B$: between weights $\phi$ where $\phi_i > 0$ for all agents, and pairs $(u, p)$ where $u_i > 0$ and $p_i > 0$ for all agents. And so my first hope was that we could take the limit as we approach the boundary, to define the value at every point along the boundary. Unfortunately, this limit depends on how we approach corner points, in a way that makes me suspect that there's no way to inherit values for $\Phi^{-1}$ around the boundary in a way that simultaneously

  • Agrees with $\Phi^{-1}$ exactly on the interior
  • Leads to $\Phi^{-1}$ being continuous when its domain includes the boundary

Motivating Example

When multiple agents have 0 weight according to $\phi$, there can be many Pareto optima that maximize $G_\phi$. For example, suppose that Alice is a $G_\phi$ maximizer deciding how much money she should receive, from $0-$100. And simultaneously, how to split another $100 between Bob and Carol. Conveniently, they each derive utility equal to the money they receive.

When trade-offs are linear, maximizing $G_\phi$ splits utility between agents proportionally to their weight. In particular, when an agent receives a vanishingly small, but positive, share of the geometric weight $\phi$, they receive a vanishingly small, but positive, share of the utility.
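
To see where the proportional split comes from (a standard Lagrange-multiplier sketch, under the linear assumption that the agents split a fixed budget $C$): maximizing $\sum_i \phi_i \log u_i$ subject to $\sum_i u_i = C$ gives $\phi_i / u_i = \lambda$ for every agent, so $u_i = \phi_i / \lambda$, and the budget forces $\lambda = 1/C$. Each agent receives $u_i = \phi_i C$, proportional to their weight.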

In this example, let's say that Alice assigns herself all of the weight: $\phi_{\text{Alice}} = 1$, with $\phi_{\text{Bob}} = \phi_{\text{Carol}} = 0$. The claim I'll be making is that in this example, we can't satisfy those two desiderata from the previous section. We can't extend $\Phi^{-1}$ exactly in a way that makes it continuous.
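
Here's a numeric sketch of the trouble (my own code, using the proportional-split rule from above): two sequences of weights that both converge to $\phi = (1, 0, 0)$, whose optima converge to different joint utilities.

```python
import numpy as np

def optimum(phi, alice_max=100.0, pot=100.0):
    """Optimal joint utility under linear trade-offs: Alice (positive weight)
    takes her maximum, and the $100 pot splits between Bob and Carol
    proportionally to their relative weights."""
    bob, carol = phi[1], phi[2]
    return np.array([alice_max,
                     pot * bob / (bob + carol),
                     pot * carol / (bob + carol)])

for eps in [1e-1, 1e-3, 1e-6]:
    print(optimum([1 - eps, eps, 0.0]),  # -> (100, 100, 0) as eps -> 0
          optimum([1 - eps, 0.0, eps]))  # -> (100, 0, 100) as eps -> 0
```

Both weight sequences approach the same corner of $\Delta^{n-1}$, but the limiting optima disagree about who gets the $100 pot.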

To see why, let's first consider what happens if Bob receives a vanishingly small weight