In the previous post, I presented the concept of a maximal lottery-lottery as follows:

We start with a finite set of candidates C and a distribution V on utility functions on C. Given lottery-lotteries A, B ∈ Δ(ΔC), we say that A dominates B if P_{x∼A, y∼B, U∼V}[U(x) > U(y)] ≥ P_{x∼A, y∼B, U∼V}[U(y) > U(x)], where U is extended to lotteries by taking expectations. A maximal lottery-lottery is an A such that for all other B, A dominates B.

**Conjecture:** For any C and V, there always exists a maximal lottery-lottery.

We discussed how we can construct a powerful and elegant new voting system, if only we can guarantee that a maximal lottery-lottery exists. However it is an open problem whether or not they exist. In this post, I will discuss a few different angles of attacking this conjecture.

# An Infinite Game

First, consider the following partial result:

**Theorem:** Let S be any nonempty finite subset of ΔC. Then, there exists an A ∈ ΔS such that for all other B ∈ ΔS, A dominates B.

**Proof:** Consider the following symmetric two-player zero-sum game between Alice and Bob: Alice chooses an x ∈ S, and Bob chooses a y ∈ S. Alice gets utility equal to P_{U∼V}[U(x) > U(y)] − P_{U∼V}[U(y) > U(x)], and Bob gets the negative of this utility. This game must have a Nash equilibrium, and in this Nash equilibrium, Alice must play a mixed strategy A ∈ ΔS. Since this is a Nash equilibrium in a symmetric zero-sum game, Alice must get at least 0 utility in expectation, no matter what Bob plays. Alice getting at least 0 utility in expectation against B is exactly saying that A dominates B.
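To make the construction concrete, here is a small sketch (my code, not from the post) for the simplest finite case, where S is just the set of pure candidates and three voters form a Condorcet cycle. Instead of an LP solver, it uses regret matching, a generic no-regret learner whose time-averaged strategy in zero-sum self-play approaches a Nash equilibrium; for a three-candidate cycle the maximal lottery is uniform.

```python
# Sketch: solving the symmetric zero-sum game from the proof when S is the
# set of pure candidates. Regret matching is a stand-in solver; the averaged
# strategies of two learners in self-play approach a Nash equilibrium.

def margin_matrix(rankings, n):
    """M[x][y] = P_voter[x preferred to y] - P_voter[y preferred to x]."""
    m = len(rankings)
    return [[sum((r.index(x) < r.index(y)) - (r.index(y) < r.index(x))
                 for r in rankings) / m
             for y in range(n)] for x in range(n)]

def solve_symmetric_game(M, iters=100000):
    """Time-averaged strategy of a regret-matching learner in self-play."""
    n = len(M)
    regrets = [[0.0] * n, [0.0] * n]
    regrets[1][0] = 1e-3  # tiny asymmetry so play leaves the symmetric start
    avg = [0.0] * n
    for _ in range(iters):
        strats = []
        for r in regrets:
            pos = [max(x, 0.0) for x in r]
            s = sum(pos)
            strats.append([x / s for x in pos] if s > 0 else [1.0 / n] * n)
        for i in range(n):
            avg[i] += strats[0][i]
        for p in (0, 1):
            opp = strats[1 - p]
            u = [sum(M[i][j] * opp[j] for j in range(n)) for i in range(n)]
            ev = sum(strats[p][i] * u[i] for i in range(n))
            for i in range(n):
                regrets[p][i] += u[i] - ev
    return [a / iters for a in avg]

# Three voters forming a Condorcet cycle over candidates {0, 1, 2}:
cycle = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
M = margin_matrix(cycle, 3)
maximal_lottery = solve_symmetric_game(M)  # approaches (1/3, 1/3, 1/3)
```

The symmetry-breaking initialization is needed because exact self-play from the uniform strategy is a degenerate fixed point of the deterministic dynamics for this game.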

This proof is valid (you can check the details), so why doesn't it generalize to infinite subsets S, and in particular to S = ΔC? The problem is that games with infinitely many options need not have Nash equilibria. For example, consider the game where Alice and Bob each choose a natural number, and whoever chooses the larger number wins.

Fortunately, often in cases like this, you can get past finiteness assumptions by instead using compactness assumptions, and ΔC is compact. Indeed, games with a compact option space and continuous utility functions always have mixed-strategy equilibria.

Unfortunately, the game we constructed does not have a continuous utility function! This is easy to see: if V assigns probability 1 to a single utility function U, and U(x) > U(y), then Alice wins and gets utility 1, but as we continuously move x around, we can discontinuously jump to the game being a tie, and then to Bob winning.

Fortunately, there are a bunch of papers analyzing situations like this. Discontinuities like this show up a bunch in auctions, which also have an infinite compact option space and discontinuous utilities, resulting from discontinuities in who wins the auction.

Unfortunately, Sam Eisenstat and I spent a while looking through a bunch of papers like this, and failed to find anything that we could apply. Several things looked like they might be able to work, but everything we tried ultimately turned out to be a dead end.

# A Limit of Finite Games

The above partial result is actually pretty useful in practice on its own. Even if maximal lottery-lotteries do not exist, you can always just take S to be the set of all lotteries that assign each candidate a probability that is a multiple of 1/k for some large natural number k. Because of this, practically speaking, we can use the maximal lottery-lotteries voting system today, without ever proving the conjecture!
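For concreteness, this finite grid of lotteries is easy to enumerate (the helper names below are mine); its size is the number of ways to write k as an ordered sum of n nonnegative integers, i.e. binom(k+n−1, n−1):

```python
# Sketch: the finite subset S of lotteries whose probabilities are all
# multiples of 1/k, used as the finite approximation in the theorem above.

def compositions(n, k):
    """All ways to write k as an ordered sum of n nonnegative integers."""
    if n == 1:
        return [(k,)]
    return [(i,) + rest for i in range(k + 1) for rest in compositions(n - 1, k - i)]

def grid_lotteries(n, k):
    """Lotteries on n candidates with probabilities that are multiples of 1/k."""
    return [tuple(c / k for c in comp) for comp in compositions(n, k)]

S = grid_lotteries(3, 10)
# For n = 3 there are (k+1)(k+2)/2 such lotteries: here 11 * 12 / 2 = 66.
```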

Further, we can consider what happens as we send k to infinity. If there is an odd number of voters that each have a utility function in general position, we will always get a unique distribution A_k on S_k that dominates all other distributions on S_k, and we can look at what happens to A_k as k goes to infinity. Hopefully this converges, but even if it doesn't, we can take any limit point, and ask whether that limit point is a maximal lottery-lottery. My guess is that the sequence does in fact converge, and that the (unique) limit point is a maximal lottery-lottery, but I have not had any luck yet proving either of these.

However, this enables us to get some empirical results! We can just compute some of these finite approximations. Even with three candidates, grid size k means solving a game with (k+1)(k+2)/2 different options, and I have not yet optimized beyond using an off-the-shelf linear programming solver in Haskell, running for a few minutes, so the largest k I reached was modest. (I am sure we can do much better.)

I took three candidates and three voters, and gave each voter a utility function in general position. Here is a picture of the resulting A_k. Blank space means that 0 (or near-0) probability mass is put there. The numbers show the negative of the natural logarithm of the probability that that point is chosen. (Remember, this is a distribution on distributions on a three-element set, or equivalently a distribution on the triangle.)

```
______________________4.4.5.5.5.3.4.4.5.5.5.5.4.5.5.__________
__________________________7.____________4.6.5.8.5.4.________
__________________4.5.5.______6.5.__5.__5.9.5.4.4.4.______
________________4.5.5.4.______5.____5.__5.__4.4.5.5.____
______________5.5.5.5.7.____5.4.5.7.5.108.6.5.4.4.4.5.
______________5.________________________5.4.5.____7.
____________7.5.4.5.____________6.5.6.__4.__5.6.__
______5.4.5.8.__5.8.____6.5.__6.__7.__6.4.6.4.5.
____6.7.5.5.5.5.__5.6.8.6.5.7.5.____6.5.5.__4.
6.6.5.4.5.4.4.5.6.__6.__6.5.__6.____5.6.7.4.
__5.6.5.________5.__5.6.________5.6.8.5.5.
4.5.5.__4.6.__5.6.7.5.6.6.____6.4.5.7.4.
5.4.5.5.5.__5.5.5.7.5.________5.6.4.4.
6.5.5.5.6.__5.__5.5.6.____7.8.4.7.7.
5.5.5.5.5.____5.5.5.5.__5.6.7.__4.
7.4.5.6.__6.5.5.4.6.____4.4.5.5.
4.__8.5.__5.7.__5.5.5.__6.____
5.6.6.6.5.6.5.______________
4.________________________
________________________
______________________
____________________
__________________
________________
______________
____________
__________
________
______
____
__
```

A_k for a smaller k with the same candidates and voters looks similar, but with less detail and some minor changes. This is a lot of data, but since we are going to use it to sample and then sample from the sample, for the purpose of maximal lottery-lotteries, we only really care about the probability each candidate is elected. For the above example, we get (0.262, 0.415, 0.323) for one value of k and (0.263, 0.417, 0.320) for the other, which seems like some slight empirical evidence towards A_k converging. Obviously there is a lot of room for a much more careful and larger experimental analysis.

# Fractals!?!

*I believe that maximal lottery-lotteries exist*, and are the limit of the above finite approximations, and (when they don't put all probability mass on the distribution that puts all probability mass on a single candidate) usually have some fractal structure. I think the complexity of the above picture is evidence of this, but I also have some mathematical intuition for why this makes sense (which feels hard to put into words):

Being a Nash equilibrium involves equalizing a bunch of constraints. These constraints are combined with the fact that the distribution is supported entirely within the triangle, and so where probability mass would naturally want to be placed outside the triangle, it gets cut off on the boundary. When it is cut off like this, something has to change inside the triangle to fix new constraints that become broken, and this causes the constraints to sort of reflect off the boundary of the triangle, somewhat like ripples. (Some of this intuition is coming from seeing similar things happen when studying nim.) Sorry that I don't have a more articulate description, but the point is, I think we are getting fractals.

Unfortunately, I think this is going to make it harder to show that maximal lottery-lotteries exist, since we will likely not have simple closed forms of what the maximal lottery-lotteries are.

# A Generalization of Colonel Blotto

Another way to look at maximal lottery-lotteries is as a generalization of the Colonel Blotto game.

In the continuous Colonel Blotto game, there are two opposing officers who each have some resources that they can distribute between some battlefields. We will assume that each has 1 total unit of resources. Whoever sends the most resources to any given battlefield wins that battlefield. (If they send the same amount, they each win half the battlefield.) Each officer gets utility equal to the number of battlefields that they win. This game and its variants have been analyzed quite a bit over the last century.

Finding maximal lottery-lotteries is like solving a variant of the Colonel Blotto game. In the original game, you choose how to distribute resources from the simplex where each battlefield gets non-negative resources and the total of all the resources is 1. In this variant, you instead have some other convex polytope representing the allowable ways to distribute resources.

To reduce finding maximal lottery-lotteries to a Colonel Blotto game, we consider each voter to be a battlefield. (We will only discuss the case where there is a finite set of equal weight voters here.) Each candidate can be thought of as assigning a utility to each voter, which we can also interpret as sending a number of resources equal to that utility. Thus, the polytope of allowable ways to send resources to battlefields is just the convex hull of these points that come from the candidates. Each point in this convex hull corresponds to (at least) one distribution on candidates, and a solution to this Colonel Blotto game corresponds to a maximal lottery-lottery.
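As a concrete sketch of this reduction (the numbers and helper names below are mine, purely for illustration): a lottery over candidates maps to a point in the convex hull of the candidates' utility vectors, i.e. an allocation of "resources" across voter-battlefields, and the Blotto payoff compares two allocations battlefield by battlefield.

```python
# Sketch: maximal lottery-lotteries as a generalized Colonel Blotto game.
# m voters = battlefields; candidate utility vectors are points in [0,1]^m.

def allocation(lottery, utilities):
    """Expected utility delivered to each voter: a point in the convex hull
    of the candidates' utility vectors."""
    m = len(utilities[0])
    return [sum(p * u[v] for p, u in zip(lottery, utilities)) for v in range(m)]

def blotto_payoff(x, y):
    """Battlefields won by allocation x against y, ties ignored (they
    contribute half to each side), normalized to [-1, 1]."""
    wins = sum(1 for a, b in zip(x, y) if a > b)
    losses = sum(1 for a, b in zip(x, y) if a < b)
    return (wins - losses) / len(x)

# 3 candidates, 3 voters (illustrative utilities):
utils = [[1.0, 0.0, 0.4],   # candidate A
         [0.0, 1.0, 0.5],   # candidate B
         [0.6, 0.3, 1.0]]   # candidate C
x = allocation([0.5, 0.5, 0.0], utils)   # a 50/50 mix of A and B
y = allocation([0.0, 0.0, 1.0], utils)   # pure C
# x wins voter 1, loses voters 0 and 2, so blotto_payoff(x, y) = -1/3
```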

This old RAND paper analyzing Colonel Blotto seems like some further evidence that we might be looking at fractals.

# A Call for Help

I think that proving the existence of maximal lottery-lotteries would actually be a substantial advance for voting theory. (Although it might take some work to get people interested in something that challenges so many of the assumptions of the field.) However, I am not intending to work on this anymore. If you want to take a shot at proving they exist, and have questions, get in touch with me.

Without a proof of existence, I am probably not going to bother writing this up as more than these blogposts, but if anyone manages to prove it, I would be happy to write a paper together. Also if anyone feels very excited about working on this and getting it published in spite of not having a proof, I could maybe be up for collaborating on something like that, especially if someone collects a bunch of experimental evidence and better fractal pictures and is willing to do a lot of the writing.

---

(I no longer think this strategy works, see the thread here; sorry for wasting your time Scott!)

Here's an approach which I *think* could work, though there are definitely some finicky details (including a potentially-fatal question about convergence at the end, which I couldn't deal with, and which I think may not be the weakest link given that the other steps are also a bit shaky). For lottery-lotteries A and B, write:

f(A,B) = P_{x∼A, y∼B, U∼V}[U(x) > U(y)] − P_{x∼A, y∼B, U∼V}[U(y) > U(x)]

We say that B strictly dominates A if f(A,B)<0.

**Lemma:** if B strictly dominates A, then there is a continuous B′ that also strictly dominates A, i.e. such that B′ has a density with respect to Lebesgue measure and f(A,B′) < 0. (ETA: this lemma is false, see comment from Scott below. I think it's fixed but am not sure.)

**Proof:** (I'll assume for convenience that there is a finite set of m voters, although I think this is easy to relax. We could assume that B is a point mass if we wanted, though obviously B′ won't be. I think this could be greatly simplified.) Suppose that f(A,B) = −ε. For each utility function u in the support of the electorate V we'll define X_u as the space of values for utility function u that occur with positive probability under A. Note that X_u is countable and hence that there is a finite set X′_u ⊂ X_u which has at least 1 − ε/(2m) of the probability mass. Let L_u be the space of lotteries that have a utility in X′_u. Let L be the union of L_u over all the utility functions u.

The idea is that L is just a finite union of hyperplanes, and so it divides the space of lotteries into a finite set of open regions. Within each of these regions, continuously shifting B's probability between nearby lotteries results in continuous changes in f(A,B), plus a discontinuous term whose magnitude is at most m · ε/(2m) = ε/2 (corresponding to the positive-probability utilities in X_u that aren't in X′_u).

If B assigns probability 0 to L then it is trivial to construct B′ by spreading out B continuously within each of these regions: there is guaranteed to be some spreading that changes f(A,B) by strictly less than ε (since we can make the continuous part of the change arbitrarily small and the discontinuous part is at most ε/2), and so we can find some way of spreading out mass that results in a continuous B′ with f(A,B′) < 0.

If B assigns positive probability to L we need one more step. The idea is that for each point in L we can find a "safe" direction to shift the probability from B so that it's no longer in L: if we pick a random direction, each voter prefers to move one way or the other, and so one direction must have majority approval (note that the argument would break at this point if f(A,B) depended on the probability that B was at least as good as A, rather than being defined symmetrically). Once we've picked such a direction for every point on L, we can move all of the probability from L into one of the adjacent open regions and proceed as before. (ETA: this step fails if you are on the edge of the simplex.)

(This lemma might be enough to prove the result when combined with standard fixed point theorems. I'll finish by using standard tools from CS but that may just be because it's what I'm familiar with. I also can't easily do the convergence analysis at the end so that might not work and/or be much simpler from a mathematical perspective.)

**Proving the theorem given the lemma:** We'll describe a sequence of distributions A_T → A_∞ such that f(A_T, B) < −ε implies that B has entropy −Ω(ε√T). We'd like the sequence to be convergent, A_T → A_∞, in the sense that for any continuous B, f(A_T, B) → f(A_∞, B). If this were the case then we'd infer that we can't have f(A_∞, B) < 0 for a continuous B, since that would give us an absolute ε such that f(A_T, B) < −ε for all sufficiently large T, implying that B has entropy −∞. By the main lemma, we'd conclude that no B (continuous or otherwise) can strictly dominate A_∞. For now I'll present the algorithm to generate A_T; in the next section I'll discuss convergence.

Our construction is follow-the-regularized-leader with a standard analysis for playing two-player games. It doesn't converge to a Nash equilibrium, but I think it will still be good enough for us.

Let H(A) be the entropy of a lottery-lottery (relative to Lebesgue measure on the simplex) and take A_T = argmax_A (H(A) + ∑_{t<T} α_t f(A, A_t)) for a sequence α_t = Θ(1/√t) (the exact decay rate of α_t isn't important, as long as it goes to zero but the sum diverges). Note that A_T has density A_T(L) ∝ exp(∑_{t<T} α_t f(L, A_t)), and that f ∈ [−1,1]. As a result, the distributions A_T change very slowly, and we have f(A_{T+1}, X) = f(A_T, X) + O(α_T) for any lottery-lottery X.
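Here is a discretized sketch of this construction (my own stand-in, not from the comment: a finite grid of lotteries replaces the continuous simplex, the voter utilities are made up, and the entropy-regularized argmax becomes exponential weights over grid points). By the usual no-regret analysis, the α-weighted average of the iterates should approximately dominate every grid lottery.

```python
# Sketch: follow-the-regularized-leader on a finite grid of lotteries.
# G[i][j] = f(L_i, L_j) is the antisymmetric sgn-payoff between grid points.
import math

def ftrl_average(G, steps=20000):
    n = len(G)
    score = [0.0] * n            # running sum of alpha_t * f(L, A_t)
    avg = [0.0] * n              # alpha-weighted average of the iterates A_t
    wsum = 0.0
    A = [1.0 / n] * n            # A_0 maximizes entropy alone
    for t in range(steps):
        alpha = 1.0 / math.sqrt(t + 1)
        for i in range(n):
            avg[i] += alpha * A[i]
        wsum += alpha
        for i in range(n):
            score[i] += alpha * sum(G[i][j] * A[j] for j in range(n))
        m = max(score)           # subtract max before exp for stability
        w = [math.exp(s - m) for s in score]
        z = sum(w)
        A = [x / z for x in w]
    return [a / wsum for a in avg]

def sgn(z):
    return (z > 1e-12) - (z < -1e-12)

def pairwise(grid, voters):
    def val(u, x):
        return sum(a * b for a, b in zip(u, x))
    return [[sum(sgn(val(u, x) - val(u, y)) for u in voters) / len(voters)
             for y in grid] for x in grid]

voters = [(1.0, 0.6, 0.0), (0.0, 1.0, 0.6), (0.6, 0.0, 1.0)]  # a 3-cycle
grid = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
        (.5, .5, 0), (.5, 0, .5), (0, .5, .5)]
G = pairwise(grid, voters)
A_avg = ftrl_average(G)
# worst-case payoff of the averaged play against any grid lottery:
worst = min(sum(A_avg[i] * G[i][j] for i in range(len(grid)))
            for j in range(len(grid)))
```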

Now suppose that lim f(B, A_t) > ε. Since ∑_t α_t diverges, we can find arbitrarily large T such that ∑_{t<T} α_t f(B, A_t) ≥ Ω(ε ∑_{t<T} α_t) = Ω(ε√T). Then:

(Note that we used the finite number of candidates at this step to get H(A_0) = O(1). Effectively we use finite candidates to show that there is only finitely much space, and use the continuity of B to argue that we only need to learn up to finite precision, and therefore this is no worse than a finite learning problem.)

**Convergence:** we want to argue that there exists a distribution A_∞ such that f(A_T, B) → f(A_∞, B) for any continuous B. I don't remember my topology very well, but I *think* we can do this as long as A_T(S) converges to a limit for every open set S. I can just take A_∞(S) to be that limit, and under compactness I get a probability distribution. (ETA: I think we can actually use *any* limit point and don't need convergence, see below. So I think this is fine but am not sure.)

So we need to rule out the situation where there is some open set S and real numbers ℓ < h such that A_T(S) < ℓ and A_T(S) > h each happen infinitely often. This seems like a really messed-up situation that shouldn't happen, but I don't have a clean proof tonight.

Some notes that make me think we should be fine:

I'll probably revisit this tomorrow, but overall non-convergence seems weird enough that I have a higher probability of something going wrong at some other step in the argument. I think that the main issues with convergence are really the ones in the first lemma that let us focus on continuous B.

Haven't read through everything yet, but I am skeptical here with respect to points on the boundary of the simplex.

I agree that's a bug in the proof (and the lemma obviously can't be true as written given that e.g. if 90% of voters are single-issue voters who hate candidate X, then no continuous lottery-lottery can dominate a lottery that puts 0 probability on X).

I haven't thought about how serious this is. It feels like we could hope to fix things by considering a game where you pick (i) which candidates to give probability 0, which is a discrete choice from a set of size 2^n − 1 and hence relatively unproblematic, (ii) what distribution to play within the corresponding simplex, where we can make a similar continuity argument to handle the infinitely many options.

I don't know if that plays nicely, I'll probably think about it tomorrow.

At face value it looks like it will be OK: we just define the same algorithm, but now we define a probability distribution over the 2^n − 1 simplices and define the entropy as the sum of the discrete entropy of the outer choice and the conditional continuous entropy over the chosen simplex. (But there are a bunch of subtleties.)

ETA: you can't just take the topology generated by the open sets of every face and then run my argument in exactly the same way, because that isn't compact and so we don't get a limit point A_∞. But we can define our algorithm over the disjoint union of all the faces, which is compact (note that a given lottery-lottery is no longer uniquely represented, but that's not a problem). The lemma still shows that if any B strictly dominates A_∞ then there must be a continuous B (in the disjoint union) that dominates A_∞. And we can still find arbitrarily large T for which f(B, ∑_{t<T} α_t A_t) > ε, which leads to a contradiction just as before. So overall I think that this fix works.

Ok, here are some questions to help me understand/poke holes in this proof. (Don't think too hard on these questions. If the answers are not obvious to you, then I am asking the wrong questions.)

#1: It doesn't. The previous version implied that there was a B′ for which the probability of ties was arbitrarily low, but the new version can have lots of voters who are indifferent. If B puts its mass in the interior of a face F, then we redistribute probability mass within the interior of F, but some voters assign the same utility to everything in F.

#4: The current lemma is:

I still haven't understood all of your argument, but have you missed the fact that some faces are entirely contained in L?

(Your arguments look similar to stuff we did when trying to apply this paper.)

I think this is OK (though still lots of room for subtleties). Spelling this aspect out in more detail:

This lemma is what we need, because we will run follow the leader over the space of pairs (F, A) where F is a face and A is a distribution over that face. So we conclude that the limit is not dominated by any pair (F, B') where F is a face and B' is a continuous distribution.

Ok, I believe this version of the Lemma, and am moving on to trying to get the rest of the argument.

Actually we could take A_∞ to be *any* limit point of A_{<T} = (∑_{t<T} α_t A_t)/(∑_{t<T} α_t) (in the sense that f(A_∞, B) is *a* limit point of f(A_{<T}, B) for any continuous B) and then get the same conclusion. I think compactness guarantees the existence of such a limit point (e.g. choose some countable basis for the topology and then restrict the sequence so that one open set after another has a limit), and so the convergence worries are resolved.

Hm, I'm now pretty skeptical about the limit step. In particular, if A_{<T}(S) converges to a limit for every open set S, we can't take A_∞(S) to be the limit of those probabilities (since it doesn't satisfy countable additivity, even though the simplex is compact). In general the space of probability distributions is not compact in the relevant topology, and so I think we can't possibly have f(A_T, B) → f(A_∞, B) based only on the fact that f(⋅, B) is the expectation of a bounded function.

This seems like it's plausibly the same topological problem that would break fixed-point theorems, and so I think it's the most likely candidate for where the whole thing breaks and why this strategy doesn't make any progress.

There are various ways to try to route around the problem, but it feels like it may just be the same thing you've been working on, so it seems probably easier to start with looking for an example where this breaks my algorithm and then confirming that it's the same problem the other approaches run into.

To really break the strategy, what we'd want is a sequence A_T and a lottery B such that f(A_T, B) → 0 but there is no way to take the limit A_∞ (e.g. in earth mover distance) for which f(A_∞, B) = 0. If we found that then I could still try to argue that such sequences are never produced by the algorithm, but it would at least show I need a different proof strategy and likely indicate that the strategy didn't make progress.

ETA: here is an easy counterexample:

Still would take work to turn it into a counterexample for the original algorithm. Would be curious to do that and see if I actually believe that game ought to have an equilibrium (or to understand why it can't be done).

And here's a significantly worse counterexample, showing that there need not be any B′ near B such that lim f(A_{<T}, B′) < f(A_∞, B):

Under these conditions:

In light of that example I think the basic proof strategy is probably no good. It may be tough to construct a concrete example where the algorithm fails, but if it works it would have to be due to some property *other* than the fact that no continuous B can beat infinitely many A_{<T}, which is all I really wanted to do. (I'll retract the original proposal.)

Epistemic status: rough sketch, there might be holes but I don't see any atm. I think you need to slightly weaken the definition of an MLL and then you'll have an existence theorem.

Let's start with general discontinuous games. We have a game between two players, with pure strategy spaces X_1 and X_2 that we assume to be compact Polish spaces, and (discontinuous) utility functions u_{1,2} : X_1 × X_2 → [0,1]. Then we can take the closure of the graphs of u_{1,2} and get upper hemicontinuous *multivalued* utility functions. I claim that for such multivalued games, Nash equilibria (in some sense) exist.

Let X be a compact Polish space and f ⊆ X × [0,1] a multivalued upper hemicontinuous function on it. What does it mean for x ∈ X to be a "maximum" of f? We can define it as follows: for any y ∈ X, the *maximal* possible value of f(x) is greater than or equal to the *lowest* possible value of f(y). It is not hard to see that maxima form a non-empty closed set. This leads to a corresponding notion of "best response" in multivalued games, and a corresponding notion of Nash equilibrium.

Notice that since we only care about the maximal and minimal possible values of f(x), we might as well require that f takes convex multivalues (i.e. closed intervals), since otherwise we can always take the pointwise convex hull. Expected values of intervals can be defined by separately taking the expected value of the upper and lower ends of the interval.
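A toy numerical illustration of this notion of "maximum" (my own construction: the closure of the graph is approximated on a grid by taking upper and lower envelopes over neighboring grid points):

```python
# Sketch: the "maximum" notion for a multivalued function, illustrated with
# the closure of g(x) = sgn(x - 1/2) on a grid over [0, 1]. At x = 1/2 the
# closed graph takes all values in {-1, 0, 1} (convex hull [-1, 1]).

def g(x):
    return (x > 0.5) - (x < 0.5)

xs = [i / 10 for i in range(11)]

def envelope(i):
    """Approximate multivalue at xs[i]: values of g at neighboring grid
    points stand in for limits of g near xs[i]."""
    nbrs = [g(xs[j]) for j in (i - 1, i, i + 1) if 0 <= j < len(xs)]
    return min(nbrs), max(nbrs)

lows = [envelope(i)[0] for i in range(len(xs))]
highs = [envelope(i)[1] for i in range(len(xs))]

# x is a maximum iff the maximal possible value of f(x) is >= the lowest
# possible value of f(y) for every y:
threshold = max(lows)
maxima = [x for x, hi in zip(xs, highs) if hi >= threshold]
# maxima is the closed set [0.5, 1.0] on this grid, nonempty as claimed
```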

I think that the existence of Nash equilibria follows from the Kakutani theorem in the usual way. The key observation is, for an upper hemicontinuous function, the maximal possible value is a (single-valued) upper semicontinuous function and the lowest possible value is a (single-valued) lower semicontinuous function. This implies that the best-response mapping is upper hemicontinuous.

Applying it to MLL, what I *think* we get is a notion of "weak" dominance where we only require that Pr[v(B) > v(A)] ≤ 1/2, and otherwise the definition is the same.

Yeah, I believe this works, and that it feels too weak.

For example, if there is a unanimous winner, you only have to pick them half the time, and can do whatever you want the other half of the time.

Yes, this is a good point. Maybe we can strengthen the "weak-MLL" criterion in other ways while preserving existence. For example, we can consider the "p-dominance" condition Pr[v(B) > v(A)] ≤ 1 − p and look for an LL that is "weak p-maximal" for the highest possible p. The function on the LHS is lower-semicontinuous, hence there exists a maximal p for which a weak p-maximal LL exists.

I have a reduction of this problem to a (hopefully) simpler problem. First, let me establish the notation used.

[n] refers to the set {1, …, n}. n is the number of candidates. Use C as an abbreviation for the space Δ[n]; it's the space of probability distributions over the candidates. View C as embedded in R^{n−1}, and set the origin at the center of C.

At this point, we can note that we can biject the following:

1: Functions of type [n]→[0,1]

2: Affine functions of type C→[0,1]

3: Functions of the form λx.⟨a,x⟩ + c, where x, a ∈ R^{n−1} and c ∈ R, and everything's suitably set so that these functions are bounded in [0,1] over C. (Basically, we extend our affine function to the entire space with the Hahn-Banach theorem, and use that every affine function can be written as a linear function plus a constant.) We can reexpress our distribution V over utility functions as a distribution over these normal vectors a.
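The bijection from 1 to 3 is easy to spot-check in code. Below is a sketch in one concrete coordinatization (mine, simpler than the centered embedding above: just drop the last barycentric coordinate, so a_i = u_i − u_n and c = u_n):

```python
# Sketch: a utility function u : [n] -> [0,1] as an affine function <a, x> + c
# on an embedding of the simplex that drops the last coordinate of x.
# The identity holds because the coordinates of a lottery sum to 1.

def to_affine(u):
    """Return (a, c) with  sum_i x_i u_i = <a, x[:-1]> + c  for lotteries x."""
    return [ui - u[-1] for ui in u[:-1]], u[-1]

def linear_value(u, x):
    return sum(ui * xi for ui, xi in zip(u, x))

def affine_value(a, c, x):
    return sum(ai * xi for ai, xi in zip(a, x[:-1])) + c

u = [0.9, 0.2, 0.5]
x = [0.3, 0.3, 0.4]
a, c = to_affine(u)
# linear_value(u, x) and affine_value(a, c, x) agree
```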

Now, we can reexpress the conjecture as follows. Is it the case that there exists a μ:ΔC s.t. for all ν:ΔC, we have

E_{x,y,a∼μ×ν×V}[sgn(⟨a, x−y⟩)] ≥ 0

where sgn is the function that's −1 if the quantity is negative, 0 if 0, and 1 if the quantity is positive. To see the equivalence to the original formulation, we can rewrite things as

E_{x,y,a∼μ×ν×V}[1_{⟨a,x⟩>⟨a,y⟩} − 1_{⟨a,y⟩>⟨a,x⟩}] ≥ 0

where the bold 1 is an indicator function. Splitting up the expectation and realizing that each piece is a probability, we get

P_{x,y,a∼μ×ν×V}[⟨a,x⟩ > ⟨a,y⟩] − P_{x,y,a∼μ×ν×V}[⟨a,y⟩ > ⟨a,x⟩] ≥ 0

And this then rephrases as

P_{x,y,U∼μ×ν×V}[U(x) > U(y)] ≥ P_{x,y,U∼μ×ν×V}[U(y) > U(x)]

which was the original formulation of the problem.
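The chain of rewrites above can be spot-checked numerically on a tiny finitely supported instance (all the data below is made up for illustration; with three cyclic voters, the uniform lottery fails to dominate a 50/50 mix of two candidates, and both formulations report the same margin of −1/3):

```python
# Sketch: checking E[sgn(U(x) - U(y))] against
# P[U(x) > U(y)] - P[U(y) > U(x)] on finitely supported mu, nu, V,
# each given as a list of (probability, point) pairs.

def lin(u, x):
    return sum(a * b for a, b in zip(u, x))

def sgn(z, eps=1e-12):
    return (z > eps) - (z < -eps)

def f(mu, nu, V):
    return sum(px * py * pu * sgn(lin(u, x) - lin(u, y))
               for px, x in mu for py, y in nu for pu, u in V)

def prob_margin(mu, nu, V):
    win = sum(px * py * pu for px, x in mu for py, y in nu for pu, u in V
              if lin(u, x) > lin(u, y) + 1e-12)
    loss = sum(px * py * pu for px, x in mu for py, y in nu for pu, u in V
               if lin(u, y) > lin(u, x) + 1e-12)
    return win - loss

third = 1 / 3
V = [(third, (1.0, 0.6, 0.0)), (third, (0.0, 1.0, 0.6)), (third, (0.6, 0.0, 1.0))]
mu = [(1.0, (third, third, third))]
nu = [(0.5, (1.0, 0.0, 0.0)), (0.5, (0.0, 1.0, 0.0))]
```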

Abbreviating the function E_{x,y,a∼μ×ν×V}[sgn(⟨a, x−y⟩)] as f(μ,ν), a *necessary* condition to have a μ : ΔC that dominates everything is that

sup_{μ∈ΔC} inf_{ν∈ΔC} f(μ,ν) ≥ 0

If you have this property, then you might not necessarily have an *optimal* μ that dominates everything, but there are μ that get a worst-case expectation arbitrarily close to 0. Namely, even if the worst possible ν is selected, then the violation of the defining domination inequality happens with arbitrarily small magnitude. There might not be an *optimal* lottery-lottery, but there are lottery-lotteries arbitrarily close to optimal where this closeness-to-optimality is uniform over every foe. Which seems good enough to me. So I'll be focused on proving this slightly easier statement and glossing over the subtle distinction between that and the existence of truly optimal lottery-lotteries. As it turns out, this slightly easier statement (that the sup inf is 0 or higher) can be outright proven assuming the following conjecture.

**Stably-Good-Response Conjecture:** For every ν : ΔC and ε > 0, there exists a μ : ΔC and a δ > 0 s.t.

inf_{ν′ : d(ν,ν′)<δ} f(μ,ν′) > −ε

Pretty much, for any desired level of suckage and any foe ν, there's a probability distribution μ you can pick which isn't just a *good response* (this always exists, just pick ν itself), but a *stably* good response, in the sense that there's some nonzero level of perturbation to the foe where μ *remains* a good response no matter how the foe is perturbed.

**Theorem 1:** Assuming the Stably-Good-Response Conjecture, sup_μ inf_ν f(μ,ν) ≥ 0.

I'll derive a contradiction from the assumption that 0 > sup_μ inf_ν f(μ,ν). Accordingly, assume the strict inequality.

In such a case, there is some ε s.t. 0 > −ε > sup_μ inf_ν f(μ,ν). Let the set A_μ := {ν | f(μ,ν) > −ε}. Now, every ν lies in the interior of A_μ for some μ, by the Stably-Good-Response Conjecture. Since ΔC is a compact set, we can isolate a finite subcover and get some finite set M of probability distributions μ s.t. ∀ν ∃μ∈M : f(μ,ν) > −ε.

Now, let the set B_ν := {μ ∈ c.h.(M) | f(μ,ν) < −ε}. Since −ε > sup_μ inf_ν f(μ,ν), this family of sets manages to cover all of c.h.(M) (the convex hull of our finite set). Further, for any fixed ν, f(μ,ν) is a continuous function c.h.(M) → R (a bit nonobvious, but true nonetheless because there are only finitely many vertices to worry about). Due to continuity, all the sets B_ν will be open. Since we have an open cover of c.h.(M), which is a finite simplex (and thus compact), we can isolate a finite subcover, to get a finite set N of ν s.t. ∀μ∈c.h.(M) ∃ν∈N : f(μ,ν) < −ε. And now we can go

−ε > max_{μ∈c.h.(M)} min_{ν∈N} f(μ,ν) ≥ max_{μ∈c.h.(M)} min_{ν∈c.h.(N)} f(μ,ν) = min_{ν∈c.h.(N)} max_{μ∈c.h.(M)} f(μ,ν) ≥ min_{ν∈c.h.(N)} max_{μ∈M} f(μ,ν) > −ε

The first strict inequality was from how every μ ∈ c.h.(M) had some ν ∈ N which made f(μ,ν) get a bad score. The ≥ was from expanding the set of options. The = was from how f is a continuous bilinear function when restricted to c.h.(M) × c.h.(N), both of which are compact convex sets, so the minimax theorem can be applied. Then the next ≥ was from restricting the set of options, and the final > was from how every ν ∈ ΔC had some μ ∈ M that'd make f(μ,ν) get a good score, by construction of M (and compactness to make the inequality a strict one).

But wait, we just showed −ε > −ε; that's a contradiction. Therefore, our original assumption must have been wrong. Said original assumption was that 0 > sup_μ inf_ν f(μ,ν), so negating it, we've proved that

sup_μ inf_ν f(μ,ν) ≥ 0

as desired.