This is a supplemental post to Geometric Utilitarianism (And Why It Matters), in which I show that when all agents have positive weight $w_i > 0$, the joint utility which is optimal according to the geometric weighted average $G_w$ moves continuously across the Pareto frontier as we change those weights. I also show that we can extend this continuity result to all weights $w$, if we're willing to accept an arbitrarily good approximation of maximizing $G_w$. I think of this as a bonus result which makes the geometric average a bit more appealing as a way to aggregate utilities, and the main post goes into more detail about the problem and why it's interesting.
How does changing $w$ affect the optima of the geometric weighted average $G_w(u) = \prod_i u_i^{w_i}$? Ideally, we'd like a small change in the weights assigned to each agent to cause a small change in the resulting joint utility $u^*$. In other words, we would like $u^*(w)$ to be continuous. It turns out this is true when $w_i > 0$ for all agents, and there's at least one way to create a continuous function that works for all $w$ and is an arbitrarily good approximation of $G_w$ maximization.
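To make this concrete, here's a minimal numeric sketch (my own toy example, not from the post): a two-agent feasible set bounded by a quarter circle, where we can watch the $G_w$-optimal point slide smoothly along the frontier as $w$ changes.

```python
import math

# Toy two-agent example (illustration only): feasible utilities bounded
# by the quarter circle u1^2 + u2^2 <= 1.
def frontier(u1):
    """Pareto frontier: the highest feasible u2 given u1."""
    return math.sqrt(max(0.0, 1.0 - u1 ** 2))

def G(w, u):
    """Geometric weighted average G_w(u) = prod_i u_i^(w_i)."""
    return math.prod(ui ** wi for wi, ui in zip(w, u))

def argmax_G(w1, steps=2000):
    """Grid search along the frontier for the point maximizing G_w."""
    w = (w1, 1.0 - w1)
    best_u, best_val = None, -1.0
    for i in range(1, steps):
        u = (i / steps, frontier(i / steps))
        val = G(w, u)
        if val > best_val:
            best_u, best_val = u, val
    return best_u

# A small change in weights produces a small change in the optimum.
u_a = argmax_G(0.50)
u_b = argmax_G(0.51)
```

On this particular frontier the optimum works out to $u_1^* = \sqrt{w_1}$, so nudging $w_1$ from 0.50 to 0.51 moves $u_1^*$ by less than a percent.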
We've already solved the inverse problem: given a point $u^*$ on the Pareto frontier and Harsanyi weights $h$ which make $u^*$ optimal according to the weighted sum $h \cdot u$, find geometric weights $w$ that make $u^*$ optimal according to $G_w$.
So we already have $w(h, u^*)$, which is a smooth function of its inputs where it's defined.[1]
It turns out we can invert this function and recover $(h, u^*)$ from $w(h, u^*)$ when $h_i > 0$ and $u^*_i > 0$ for all agents. This is the interior of what we'll be calling the Harsanyi Bundle: the set of all of the pairs $(h, u^*)$ which are consistent with our Pareto frontier $P$. This post will show that there is a bijection between the interior of the Harsanyi Bundle and the interior of the simplex $\Delta$ of all valid weights $w$.
So we have a smooth bijection between these two subsets of $\mathbb{R}^{2n}$ and $\mathbb{R}^n$ respectively. And thankfully we're doing calculus, where this is sufficient to establish that the inverse function $w \mapsto (h, u^*)$ is at least continuous.[2] And that is the main result this post sets out to prove: that individual utilities shift continuously as geometric weights shift.
Establishing this homeomorphism with the interior of $\Delta$ means that the interior of the Harsanyi Bundle is an $(n-1)$-dimensional manifold. The last part of this post involves extending our map so that it's continuous across all of $\Delta$. My current favorite way to do this is to introduce a new continuous function which maps all weights into the interior of $\Delta$, and then to use those positive weights to find the unique optimum of $G_w$.
Just like the Harsanyi hyperplane $H$, we can think of the Pareto frontier $P$ as a set of joint utilities, or as a function $f$ which maps utilities for the first $n-1$ agents into that set. $f(u_1, \dots, u_{n-1})$ returns the highest feasible utility agent $n$ can receive, given that the first $n-1$ agents receive utilities defined by $(u_1, \dots, u_{n-1})$. (Or undefined if $(u_1, \dots, u_{n-1})$ is already infeasible.) And $u_n = f(u_1, \dots, u_{n-1})$ for every point $u$ on $P$.
We can think of $f$ as a hypersurface that lies "above" the feasible utilities for the first $n-1$ agents.

Where $f$ is differentiable, the Harsanyi hyperplane for a point $u^* \in P$ lines up exactly with the tangent hyperplane at that point.[3] The Harsanyi weights $h$ are orthogonal to that hyperplane, and so there is only one choice of $h$ which causes $u^*$ to maximize $h \cdot u$.
But where $f$ isn't differentiable, such as at corners, the tangent hyperplane isn't defined, and there can be multiple hyperplanes which keep all of $P$ on one side. And so there can be many consistent values for $h$ at these points.

When the slope of $f$ jumps discontinuously at a point $u^*$, these slopes and all of the slopes in between can be used to find all of the valid assignments for $h$ at $u^*$. When $P$ looks like a jewel with multiple flat faces meeting at corners, we can identify the Harsanyi weights for each face. The valid assignments for $h$ at a corner are all of the convex combinations of the weights for the faces that meet at that corner.[4] $h(u^*)$ only acts like a function when there is only one valid assignment, and in general there is a whole set of valid assignments at each point.
So far we've been treating $h$ as coming from a black box, but now that we've parameterized $P$ using $f$, it's actually straightforward to compute $h$ at differentiable points $u^*$.
What we have is a function $u_n = f(u_1, \dots, u_{n-1})$, and what we want to do is construct a new function $F$ such that $P$ is a level set of $F$. This causes the gradient $\nabla F$ to be orthogonal to $P$, which is exactly the direction we want $h$ to point!
Starting from $u_n = f(u_1, \dots, u_{n-1})$, we can rearrange this to get $F(u) = u_n - f(u_1, \dots, u_{n-1}) = 0$. Which makes $P$ a level set of $F$.
Taking the gradient gives $\nabla F = \left(-\frac{\partial f}{\partial u_1}, \dots, -\frac{\partial f}{\partial u_{n-1}}, 1\right)$, and that's the direction we wanted! To get $h$, all we have to do is normalize $\nabla F$ so that its components sum to 1. If we use $\vec{1}$ to denote the vector whose components are all 1, then
$$h = \frac{\nabla F}{\vec{1} \cdot \nabla F}$$
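As a sanity check, here's a small sketch of this normalization step (the frontier and function names are my own toy illustration, not from the post), with the partial derivatives of $f$ taken numerically:

```python
def f(u1, u2):
    """Toy concave frontier for three agents: u3 = 1 - u1^2 - u2^2."""
    return 1.0 - u1 ** 2 - u2 ** 2

def harsanyi_weights(u1, u2, eps=1e-6):
    """h = grad(F) / (1 . grad(F)), where F(u) = u3 - f(u1, u2),
    so grad(F) = (-df/du1, -df/du2, 1)."""
    df_du1 = (f(u1 + eps, u2) - f(u1 - eps, u2)) / (2 * eps)
    df_du2 = (f(u1, u2 + eps) - f(u1, u2 - eps)) / (2 * eps)
    grad_F = (-df_du1, -df_du2, 1.0)
    total = sum(grad_F)            # the 1 . grad(F) normalizer
    return tuple(g / total for g in grad_F)

# At (0.3, 0.4): grad(F) = (0.6, 0.8, 1.0), so h = (0.25, 1/3, 5/12).
h = harsanyi_weights(0.3, 0.4)
```

Because $f$ is decreasing in each argument, every component of $\nabla F$ comes out nonnegative, and the components of $h$ sum to 1 by construction.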
For an arbitrary surface that might wiggle up and down, this procedure won't necessarily guarantee that $h_i \geq 0$. But this is a Pareto frontier, where we know that $\frac{\partial f}{\partial u_i} \leq 0$; increasing agent $i$'s utility never increases the feasible utility for agent $n$. $f$ might wiggle down, but it never wiggles up, and that keeps $h$ in its valid range wherever $f$ is differentiable.
We also know that increasing any $u_i$ never decreases $G_w(u)$. So $\frac{\partial G_w}{\partial u_i} \geq 0$, which implies that $G_w$ is maximized somewhere on the Pareto frontier. We'll use this fact later when looking at curves which increase each $u_i$, and thus monotonically increase $G_w$.
In order to claim that $h$ changes continuously as we change $w$ and $u^*$, we need to be able to define what that even means in the context of $P$. If we take a curve that travels continuously along $P$, then $h$ will change discontinuously at corners no matter how we break ties.
All pairs $(h, u^*)$ come from $\Delta \times P$, but let's restrict our attention to a subset I'll denote $B$, which contains all of the valid pairs: those which are consistent with $P$. So $B = \{(h, u^*) : u^* \text{ maximizes } h \cdot u \text{ over the feasible set}\}$. It turns out this forms a surface in $2n$-dimensional space I'll call the Harsanyi Bundle, analogous to the tangent bundle in differential geometry.[5]
Where $P$ is differentiable, there is only one valid assignment for $h$. So any continuous curve through these parts of the Harsanyi Bundle corresponds to a continuous curve through $P$. At non-differentiable points $u^*$, $h$ can travel continuously through the set of valid assignments at $u^*$, including the endpoints which allow it to continue on to other parts of the Harsanyi Bundle.
When projected onto $P$, a continuous curve through the Harsanyi Bundle looks like a continuous curve along $P$ that sometimes hangs out at corners while $h$ rotates to line up with the next face along the path.

The upshot is that any two points in the Harsanyi Bundle can be reached from each other using continuous paths, so we can think of it as a single continuous surface embedded in $2n$-dimensional space. And these continuous paths correspond to continuous changes in the geometric weights $w$.
We're trying to invert $w(h, u^*)$, which has the formula
$$w(h, u^*) = \frac{h \odot u^*}{\vec{1} \cdot (h \odot u^*)}$$
If we were writing a program to compute $w$, our first step would be to compute $h \odot u^*$. This is the element-wise product of $h$ and $u^*$, also called their Hadamard product. We can think of $\odot$ as a map which takes us from $2n$-dimensional space back down to $n$-dimensional space. And it turns out that the image of the Harsanyi Bundle, consisting of all the points $h \odot u^*$ where $(h, u^*)$ is a valid pair, forms a hypersurface that lies "under" $\Delta$.
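In code, both steps fit in a few lines (a sketch in my own notation; the function name is mine):

```python
def geometric_weights(h, u):
    """w(h, u): the elementwise (Hadamard) product h * u,
    normalized so its components sum to 1."""
    hadamard = [hi * ui for hi, ui in zip(h, u)]   # h (Hadamard) u
    total = sum(hadamard)                          # equals the dot product h . u
    return [x / total for x in hadamard]

# Example: agents with equal Harsanyi weight but unequal utility
# get geometric weights proportional to their utilities.
w = geometric_weights([0.5, 0.5], [0.2, 0.8])
```

Since the denominator is just $h \cdot u^*$, the output always sums to 1, and it inherits nonnegativity from $h$ and $u^*$.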

I'm calling this hypersurface the Harsanyi Shadow of $P$, and I think of it as a projection of the Harsanyi Bundle back down into $n$-dimensional space. As always, if there's a standard name or notation for this, I'd prefer to switch to that. We'll also show that at least the interior of the Harsanyi Shadow is an $(n-1)$-dimensional manifold, since it's in smooth bijection with the interior of $\Delta$.
In this example, the grey line segments on the Harsanyi Shadow correspond to the black line segments $\overline{AB}$ and $\overline{BC}$ on the Pareto frontier. The blue line segments correspond to the points $A$, $B$, and $C$, and the values of $h$ which make them optimal according to $h \cdot u$.
In particular, points like $A$ and $C$ on the boundary of $P$, and thus on the boundary of the Harsanyi Bundle, correspond to "wings" on the Harsanyi Shadow which lie on the same line from the origin. When these wings are normalized onto $\Delta$ in the next step of calculating $w$, they will all end up at the same point.
Any convex feasible set can be thought of as the convex hull of a set of points $V$. When $V$ is finite, which is generally the case when it represents something like a deterministic policy for each agent, $P$ will be made up of flat surfaces that meet at corners. These correspond to two types of surface in the Harsanyi Bundle: faces, where $h$ stays constant while $u^*$ varies, and corners, where $u^*$ stays constant while $h$ varies.
When one element of a Hadamard product is constant, such as $h$ along these "horizontal" surfaces, we can think of $u \mapsto h \odot u$ as a linear map. This corresponds to a diagonal matrix, which we can write as $\mathrm{diag}(h)$, or in components as $\mathrm{diag}(h)_{ii} = h_i$ with zeros off the diagonal. This is invertible if and only if $h_i \neq 0$ for all agents. So flat surfaces on $P$ map linearly to flat surfaces on the Harsanyi Shadow, which map linearly to flat surfaces on $\Delta$. $\odot$ acts linearly when restricted to points on the same hyperplane $H$.
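A quick sketch of this restricted map (helper names are my own): with $h$ held fixed, applying $\odot$ is just multiplying by $\mathrm{diag}(h)$, and inverting it divides component-wise, which only works when every $h_i$ is nonzero.

```python
def diag_apply(h, u):
    """Multiply by diag(h): the same as the elementwise product h * u."""
    return [hi * ui for hi, ui in zip(h, u)]

def diag_invert(h, x):
    """Apply diag(h)^(-1), i.e. divide componentwise; needs every h_i != 0."""
    if any(hi == 0 for hi in h):
        raise ValueError("diag(h) is not invertible when some h_i = 0")
    return [xi / hi for xi, hi in zip(x, h)]

# Round trip: a point on the shadow maps back to its frontier point.
u = [0.3, 0.4, 0.75]
h = [0.25, 0.375, 0.375]
roundtrip = diag_invert(h, diag_apply(h, u))
```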
Since we have an explicit formula for $h$ in terms of $f$, we can use that to write down an explicit formula for $w$ along the Pareto frontier:
$$w(u^*) = \frac{\nabla F \odot u^*}{\nabla F \cdot u^*}$$
In components, this looks like
$$w_i = \frac{-\frac{\partial f}{\partial u_i} u^*_i}{u^*_n - \sum_{j<n} \frac{\partial f}{\partial u_j} u^*_j} \text{ for } i < n, \qquad w_n = \frac{u^*_n}{u^*_n - \sum_{j<n} \frac{\partial f}{\partial u_j} u^*_j}$$
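As a check on the algebra, here's the component formula evaluated on a toy frontier $f(u_1, u_2) = 1 - u_1^2 - u_2^2$ (my own example, not from the post), where the partials are known exactly:

```python
def w_direct(u1, u2):
    """Geometric weights from the component formula, for the toy frontier
    f(u1, u2) = 1 - u1^2 - u2^2 (so df/du1 = -2*u1, df/du2 = -2*u2)."""
    df1, df2 = -2.0 * u1, -2.0 * u2
    u3 = 1.0 - u1 ** 2 - u2 ** 2
    denom = u3 - df1 * u1 - df2 * u2      # equals grad(F) . u
    return (-df1 * u1 / denom, -df2 * u2 / denom, u3 / denom)

w = w_direct(0.3, 0.4)    # ≈ (0.144, 0.256, 0.6)
```

The components sum to 1, as they should for a point in $\Delta$.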
We can approximate any Pareto frontier using pieces of hyperplanes, and the Harsanyi Shadow of this approximation will be made up of the corresponding pieces of $h \odot H$. And it turns out that these pieces are all parallel to $\Delta$! Which helps a lot in understanding the geometry of the Harsanyi Shadow, and why the interior pieces are in one-to-one correspondence with pieces of $\Delta$.

I noticed this playing around with examples, and I recommend playing with this one. If you pick two points $A$ and $B$, you can draw the line between them, and calculate $h$ for that line. $\mathrm{diag}(h)$ is always a linear map, and when $h_i \neq 0$ for all agents, it's an invertible linear map that maps this line to another line on the Harsanyi Shadow. And it turns out this line on the Harsanyi Shadow will always have slope $-1$! Just like the standard simplex $\Delta$ where $w$ and $h$ live. And in general, $h \odot H$ is a hyperplane with the same slope as $\Delta$, as long as $h_i \neq 0$ for all agents.
This is why the grey line segments in our example were parallel to the red line segment of valid weights; flat surfaces on the Pareto frontier map onto flat surfaces on the Harsanyi Shadow that are parallel to $\Delta$.

To see why this happens for hyperplanes in general, we can use the fact that $h$ is orthogonal to $H$ at $u^*$ to write down the normal equation for $H$. It's all of the points $u$ which satisfy
$$h \cdot (u - u^*) = 0$$
The image of $H$ after going through the map $u \mapsto h \odot u$, which we can denote $h \odot H$, is all of the points $h \odot u$ where $u \in H$. One such point is $h \odot u^*$, and in general there will be a vector I'll suggestively call $h'$ which is orthogonal to $h \odot H$. This normal vector satisfies the equation
$$h' \cdot (h \odot u - h \odot u^*) = 0$$
Since the Hadamard product is distributive and commutative, we know that
$$h' \cdot (h \odot u) = (h' \odot h) \cdot u$$
Which means we can rewrite the normal equation for $h \odot H$ as
$$(h' \odot h) \cdot (u - u^*) = 0$$
Here I needed to go back to component notation to see how to simplify that equation further:
$$\sum_i h'_i h_i (u_i - u^*_i) = 0$$
This is great, because we also know from the normal equation for $H$ that
$$\sum_i h_i (u_i - u^*_i) = 0$$
And in fact, we know that any scalar multiple $\lambda h$ is also orthogonal to $H$:
$$\sum_i \lambda h_i (u_i - u^*_i) = 0$$
And so one family of solutions for $h'$ comes from solving
$$h'_i h_i = \lambda h_i \text{ for all } i$$
Which has the solution $h' = \lambda \vec{1}$.
This line of scalar multiples of $\vec{1}$ is orthogonal to $h \odot H$, and it's also orthogonal to $\Delta$! So $h \odot H$ and $\Delta$ are parallel hyperplanes.
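This can also be checked numerically (my own sketch, with toy values): sample points on a hyperplane $h \cdot u = c$, push each through $u \mapsto h \odot u$, and confirm the images all share the same coordinate sum, which means they lie in a hyperplane parallel to $\Delta$.

```python
h = (0.2, 0.3, 0.5)      # fixed Harsanyi weights (toy values)
c = 1.0                  # the hyperplane h . u = c

def on_hyperplane(u1, u2):
    """A point u with h . u = c, parameterized by its first two coordinates."""
    u3 = (c - h[0] * u1 - h[1] * u2) / h[2]
    return (u1, u2, u3)

# Push several hyperplane points through u -> h * u (elementwise).
coordinate_sums = []
for u1, u2 in [(0.1, 0.2), (1.0, 0.5), (2.0, 0.1)]:
    u = on_hyperplane(u1, u2)
    image = tuple(hi * ui for hi, ui in zip(h, u))
    coordinate_sums.append(sum(image))
# Every coordinate sum equals c, so the image is parallel to the simplex.
```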