Consider the following problem:

There are an unspecified but finite number of chips on a table. Each chip is one of three colors: white, black or red. Define a "move" to be the act of replacing two chips of different colors with one chip of the third color. Prove that if after some sequence of moves we end up with a single chip on the table, the color of this chip can't depend on the exact sequence of moves that we made. In other words, if there are two distinct sequences of moves we could make such that in both cases we end up with a single chip left on the table, these chips must be of the same color.

So this is a game in which we have a state that can be represented by an ordered triplet $(w, b, r)$ of natural numbers, where each coordinate corresponds to the number of chips of one of the three colors (white, black and red respectively) currently on the table. We're allowed to make any one of the following three moves, provided no coordinate becomes negative:

$$(w, b, r) \to (w - 1, b - 1, r + 1)$$
$$(w, b, r) \to (w - 1, b + 1, r - 1)$$
$$(w, b, r) \to (w + 1, b - 1, r - 1)$$
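
As a concrete sketch of this setup (the tuple order white, black, red and the helper names here are my own choices, not from the original statement):

```python
# A state is a triple (white, black, red) of chip counts.
# Each move removes one chip of two different colors and adds one of the third.
MOVES = [
    (-1, -1, +1),  # white + black -> red
    (-1, +1, -1),  # white + red   -> black
    (+1, -1, -1),  # black + red   -> white
]

def apply_move(state, move):
    """Apply a move if it is legal (no pile goes negative); otherwise return None."""
    new_state = tuple(s + m for s, m in zip(state, move))
    return new_state if min(new_state) >= 0 else None

print(apply_move((1, 1, 1), MOVES[0]))  # (0, 0, 2): a white and a black become a red
print(apply_move((2, 0, 0), MOVES[0]))  # None: there is no black chip to remove
```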

For the moment, let's assume that the claim we're being asked to prove is indeed correct. The first question we can ask is whether it's even always possible to end up with only one chip, and here the answer is obviously no: if we start with, say, $(2, 0, 0)$, then we have no legal moves and therefore no way to get down to only one chip.

Therefore, if we rephrase the claim, we can say that the claim asks us to show that there is a function

$$f : \mathbb{N}^3 \to \{\text{white}, \text{black}, \text{red}, \text{none}\}$$

which sends each state to a particular color if we can end up with a single chip of that color, and sends a state to "none" if there's no way to end up with a single chip from that initial state. This is the part of the claim which says that the color of the final chip should be independent of the sequence of moves we choose to make: it can only be a function of the current state of the game, not of what the players choose to do later on.

Furthermore, we can notice that we lose nothing by allowing moves to be inverted in this game (why?). This means $f$ actually has to be invariant under all legal moves.

What is a good guess for what this function could be? The simplest guess is that it's a linear function: we have some expression like

$$f(w, b, r) = a_w \cdot w + a_b \cdot b + a_r \cdot r$$

where the $a_i$ are some numbers, and these outputs correspond in some way to the four possible values of the function. If we impose the requirement that $f$ ought to be invariant under our three legal moves, we get a system of three equations for the numbers $a_i$:

$$a_w + a_b = a_r, \qquad a_b + a_r = a_w, \qquad a_r + a_w = a_b$$

However, when we try to solve this system, we're forced to face an unfortunate reality. Substituting the first equation into the second gives $a_b + (a_w + a_b) = a_w$, or $2a_b = 0$, so we deduce $a_b = 0$, and similarly we deduce $a_w = a_r = 0$ as well. Therefore the only solution to this system seems to be trivial: $f$ is simply zero on every possible state. This is indeed invariant under all the moves, but it doesn't give us any information about the states, so it's not what we are looking for.
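
To double-check this, here is a small sketch (using sympy; the symbol names are my own) confirming that, over ordinary numbers, the system only has the trivial solution:

```python
import sympy as sp

a_w, a_b, a_r = sp.symbols('a_w a_b a_r')

# Invariance of f under the three moves, over ordinary (real/rational) numbers:
equations = [
    sp.Eq(a_w + a_b, a_r),
    sp.Eq(a_b + a_r, a_w),
    sp.Eq(a_r + a_w, a_b),
]

print(sp.solve(equations, [a_w, a_b, a_r]))  # {a_w: 0, a_b: 0, a_r: 0}
```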

At this point, you might be tempted to give up and conclude that a linear invariant doesn't work after all. You could look for some $f$ having a different form or some entirely different approach to solve this problem.

This is a common situation in mathematics: you have some idea to solve a problem but it doesn't work, at least not the way you tried to execute it. However, that doesn't mean the idea itself doesn't have merit, and often it only takes some effort on the part of the mathematician to bring what seems like a dead idea back to life by putting the idea in a different context or looking at it from a different perspective.

Let's go back and examine what we did when we concluded no linear invariant could work. If the $a_i$ are indeed real numbers, our solution of the linear system is impeccable: the only solution is indeed trivial. However, what if the $a_i$ don't have to be real numbers?

We smuggled in an assumption which might not even seem like one if we don't pay attention: that $2x = 0$ implies $x = 0$. If this assumption holds and addition has all of its usual properties, it's indeed true that the only solution to this system will be trivial. However, this doesn't have to be the case. What if, in whatever structure we happen to be working in, $1 + 1 = 0$?

There is a notable example in which this is true: the integers modulo $2$, often denoted as $\mathbb{Z}/2\mathbb{Z}$. This means we declare that two integers are the same if they have the same remainder when divided by $2$. This definition turns out to be well-behaved and compatible with addition and multiplication of integers, so it's a legitimate structure (specifically, an abelian group) in which $1 + 1$ is equal to $0$.

We can notice this structure doesn't quite meet our needs: we need $f$ to take four possible values, but the integers modulo 2 only have two possible values: 0 and 1. It's not too far off, however, and we only need one last idea to come up with the right structure which will solve the problem. What if $f$ takes not one, but two values in $\mathbb{Z}/2\mathbb{Z}$? In other words, what if $f$ takes a pair of values in the integers modulo 2?

We notice this set has the right size: there are four such pairs and four values that we need in the codomain of $f$. It's also obvious what values the $a_i$ have to take: there are only three nonzero pairs, so we must have some permutation of

$$a_w = (1, 0), \qquad a_b = (0, 1), \qquad a_r = (1, 1).$$

If you check the three linear equations we wanted to be satisfied earlier, you'll find that these values of the $a_i$ satisfy all of them!
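
A quick sketch of that check (component-wise addition modulo 2; the names are mine):

```python
# The coefficients as pairs in (Z/2Z)^2.
a_w, a_b, a_r = (1, 0), (0, 1), (1, 1)

def add_mod2(x, y):
    """Component-wise addition of pairs, modulo 2."""
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

# The three invariance equations all hold in (Z/2Z)^2.
print(add_mod2(a_w, a_b) == a_r)  # True
print(add_mod2(a_b, a_r) == a_w)  # True
print(add_mod2(a_r, a_w) == a_b)  # True
```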

This means we've solved the problem, as will be clear shortly. $f$ is defined as follows:

$$f(w, b, r) = w \cdot (1, 0) + b \cdot (0, 1) + r \cdot (1, 1) = (w + r, b + r),$$

and we consider the right-hand side to be a pair of values, each living in the integers modulo 2. In other words, $f$ is a function $\mathbb{N}^3 \to (\mathbb{Z}/2\mathbb{Z})^2$.

Finally, we do the translation of these pairs to the four elements we defined earlier as follows:

$$(0, 0) \to \text{none}, \quad (1, 0) \to \text{white}, \quad (0, 1) \to \text{black}, \quad (1, 1) \to \text{red}.$$

This definition of $f$ is not quite what we wanted, since it will assign colors to configurations such as $(3, 0, 0)$ which really have no way of being reduced to a single-chip state. However, this turns out to be unimportant for solving the problem. If we start from a state $s$ and end up in a single-chip state $t$ after some legal moves, then since $f$ is invariant under the moves we have $f(s) = f(t)$, and since all three single-chip states have distinct values of $f$, this means there can only be one single-chip state reachable from any initial state $s$.
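
To make this concrete, here is a small brute-force sketch (the function and dictionary names are my own, and the chip order white, black, red matches the earlier sketch) that checks, for small starting states, both the invariance of $f$ and the uniqueness of the final color:

```python
from functools import lru_cache
from itertools import product

MOVES = [(-1, -1, +1), (-1, +1, -1), (+1, -1, -1)]

def f(state):
    """The invariant: a pair of values in Z/2Z."""
    w, b, r = state
    return ((w + r) % 2, (b + r) % 2)

COLOR = {(0, 0): "none", (1, 0): "white", (0, 1): "black", (1, 1): "red"}

@lru_cache(maxsize=None)
def reachable_single_chips(state):
    """All single-chip states reachable from `state` by legal moves."""
    if sum(state) == 1:
        return frozenset({state})
    found = frozenset()
    for move in MOVES:
        nxt = tuple(s + m for s, m in zip(state, move))
        if min(nxt) >= 0:
            found |= reachable_single_chips(nxt)
    return found

# From every small starting state, at most one single-chip state is reachable,
# and its color is exactly the one predicted by the invariant.
for state in product(range(6), repeat=3):
    endings = reachable_single_chips(state)
    assert len(endings) <= 1
    for end in endings:
        assert COLOR[f(end)] == COLOR[f(state)]
print("checked every starting state with at most 5 chips per color")
```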

This is a toy example intended to be accessible to a more general audience, but the general message is very powerful: many abstract concepts in mathematics arise from "trying to make ideas work" in this way. Often if an idea doesn't work, the problem is not with the idea itself but with your intellectual tools or the power of the concepts at your disposal. Noticing this can both provide an opportunity to expand these tools and to save some ideas from being prematurely consigned to the garbage bin.

7 comments

When I tried to solve this problem, I began by looking for some property invariant under moves, but after making no progress I discarded it when I noticed that:

Every move changes the parity of every pile. From there it was immediate that the single remaining chip at the end can only be in the pile that originally had a parity different from the other two, and that if they all have the same parity, it is not possible to reduce them to a single chip.

Yes, actually this turns out to be the same as the solution based on the invariant taking values in the Klein-four group: the kernel of the map $\mathbb{Z}^3 \to (\mathbb{Z}/2\mathbb{Z})^2$ given by $(w, b, r) \mapsto (w + r, b + r)$ is precisely the subspace of integer vectors with all entries having the same parity. It's just that the invariant solution makes it look natural, while doing it this way around leads to having to express these parity conditions in a more elaborate way.
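
A quick sketch of this correspondence (the map is the post's $f$, extended to all of $\mathbb{Z}^3$; the loop range is an arbitrary small window):

```python
from itertools import product

def f(v):
    """The map Z^3 -> (Z/2Z)^2 from the post: (w, b, r) -> (w + r, b + r) mod 2."""
    w, b, r = v
    return ((w + r) % 2, (b + r) % 2)

# f(v) = (0, 0) exactly when all three entries of v have the same parity.
for v in product(range(-3, 4), repeat=3):
    same_parity = len({x % 2 for x in v}) == 1
    assert (f(v) == (0, 0)) == same_parity
print("kernel = vectors whose entries all share a parity (checked on a small box)")
```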

I wouldn't call that 'no progress'.

This references the end of the post, and the solution.

This definition of $f$ is not quite what we wanted, since it will assign colors to configurations such as $(3, 0, 0)$ which really have no way of being reduced to a single-chip state.

If your solution doesn't work, make it work.

That color is proved correct, if you can:

  • play forwards and backwards
  • handle negative piles (with IOUs).

For the reverse play-through to a solvable state, search this page for "Other end states".

Negative approach:

(0, 3, 0)

(-1, 2, 1)

(0, 1, 0)

Alternatively, just add extra moves to the game that can be made as a composition of ordinary moves, even though the composition moves through a state with negative piles, as long as it takes positive states (at the start) to positive states (at the end), like:

For your move, remove 2 from a pile of your choice.

 

This is the part of the claim which says that the color of the final chip should be independent of the sequence of moves we choose to make: it can only be a function of the current state of the game, not of what the players choose to do later on.

So that part was wrong, because unless you allow stuff like inverted moves, the result is dependent on what players choose to do later on.

I was wondering if anyone would pick up on that. Kudos to you!

Here is my explanation of what's going on here: you're looking at the different cosets of the submodule of the free module $\mathbb{Z}^3$ spanned by the column vectors of the matrix

$$\begin{pmatrix} -1 & -1 & 1 \\ -1 & 1 & -1 \\ 1 & -1 & -1 \end{pmatrix}$$

You can figure out the structure of the quotient systematically by looking at the Smith normal form of this matrix, and you can work that out by taking the greatest common divisors of the determinants of its minors. In this case we get $1$, $2$ and $4$ (for the $1 \times 1$, $2 \times 2$ and $3 \times 3$ minors respectively), so we can deduce that the Smith normal form is $\operatorname{diag}(1, 2, 2)$ and the quotient is isomorphic to the Klein-four group $(\mathbb{Z}/2\mathbb{Z})^2$. This explains why a map to $(\mathbb{Z}/2\mathbb{Z})^2$ is the right invariant for this problem.
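
Here is a small sketch of that computation, working out the gcds of the minors directly with the standard library (the matrix layout and names are mine):

```python
from functools import reduce
from itertools import combinations
from math import gcd

# The columns of this matrix are the three move vectors.
M = [
    [-1, -1, +1],
    [-1, +1, -1],
    [+1, -1, -1],
]

def det(m):
    """Determinant by Laplace expansion along the first row (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def gcd_of_minors(m, k):
    """Greatest common divisor of the determinants of all k x k minors of m."""
    n = len(m)
    minors = [det([[m[i][j] for j in cols] for i in rows])
              for rows in combinations(range(n), k)
              for cols in combinations(range(n), k)]
    return reduce(gcd, (abs(d) for d in minors))

# gcds of the 1x1, 2x2 and 3x3 minors are 1, 2 and 4, so the Smith normal
# form is diag(1, 2, 2) and the quotient of Z^3 by the moves is Z/2 x Z/2.
for k in (1, 2, 3):
    print(k, gcd_of_minors(M, k))
```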

I paused after reading the statement of the problem so as to solve the problem myself.  Here's what I came up with.

From experience with other problems (I used to do USAMO), I looked for invariants and suspected that arithmetic modulo $2$ would be involved.  I found some relatively quickly.

Let r, w, and b be the number of red, white, and black chips, respectively, in any board state.

The moves we're allowed to make are of the form "r += 1; w -= 1; b -= 1", or "r -= 1; w += 1; b -= 1", or "r -= 1; w -= 1; b += 1".

If we look at the sums of two colors—for example, r+w—we find that the effects of each move on the sum are "r+w remains the same", "r+w remains the same", and "r+w decreases by 2", respectively.

It follows that, for each sum of two colors, the even-odd parity is preserved by all moves.  In other words, "r+w mod 2", "r+b mod 2", and "w+b mod 2" are the same before a move and after a move, and indeed their values in the starting state of the game and any possible ending state must be the same.

Therefore, if there is an end state in which there is one chip left... That chip has a color (for example, red).  Then the other two colors (in the example, b+w) sum to zero, so their sum is even.  If their sum is even in the end state, then that must have been true in the start state, and in any other possible ending state.  But, in _any_ state where there is only one chip left, if those two colors sum to an even number, then there must be zero chips of those two colors (since you can't have negative chips), and so the one chip must be of the first color, which was to be proven.
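
A small sketch of this check (the r, w, b ordering follows the comment; the helper name is mine):

```python
from itertools import product

MOVES = [(+1, -1, -1), (-1, +1, -1), (-1, -1, +1)]  # deltas applied to (r, w, b)

def pair_parities(state):
    """Parities of the three pairwise sums r+w, r+b, w+b."""
    r, w, b = state
    return ((r + w) % 2, (r + b) % 2, (w + b) % 2)

# Every legal move preserves the parity of each pairwise sum.
for state in product(range(4), repeat=3):
    for move in MOVES:
        nxt = tuple(s + m for s, m in zip(state, move))
        if min(nxt) >= 0:
            assert pair_parities(nxt) == pair_parities(state)
print("pairwise-sum parities are preserved by every legal move (small states checked)")
```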

Solving:

Tl;dr: search "What does the game look like?" on this page. (Leave the "" marks out; they just denote the beginning and end of the string.)

 

There are an unspecified but finite number of chips on a table. Each chip is one of three colors: white, black or red. Define a "move" to be the act of replacing two chips of different colors with one chip of the third color. Prove that if after some sequence of moves we end up with a single chip on the table, the color of this chip can't depend on the exact sequence of moves that we made. In other words, if there are two distinct sequences of moves we could make such that in both cases we end up with a single chip left on the table, these chips must be of the same color.

Prove that solvable games have one solution.

Prove that, playing in reverse, one cannot reach the same game (x, y, z) from more than one starting point. (Where starting points are a permutation of (0, 1, 0).)

 

Assign each game (x, y, z) an end.

Well if it ends in (0, 0, 1), then that means after a series of moves like

(+, -, -)

(-, +, -)

(-, -, +)

each happening some number of times (a, b, c) respectively, it ends in (0, 0, 1).


 

So

x+a-b-c = 0

y+b-a-c = 0

z+c-a-b = 1


 

That doesn’t seem like a lot to go off of.


 

But suppose (x, y, z) is a round game. That is:

a=b=c=n.

Then

x+n-2n=0

y+n-2n=0

z+n-2n=1

Then

x-n=0

y-n=0

z-n=1

 

More generally, any solvable round game which ends in 1, 2 or 3 (that is (1, 0, 0), (0, 1, 0), (0, 0, 1)), will be of the form

1+n, 2+n, or 3+n (that is (1+n, n, n), (n, 1+n, n), (n, n, 1+n)).

We have proved the theorem for solvable round games, as no number n exists such that, for example

(1+n, n, n) = (n, 1+n, n)

 

Further, a solvable round game has a cycle number of 0, or 3 (they're the same thing).

Any solvable game with a cycle number of 1 can be reduced to a solvable round game in one move.

Any solvable game with a cycle number of 2 can be reduced to a solvable round game in one move.

One move each (white, then black, then red, not necessarily in that order) will turn a solvable round game into a solvable round game.

Will any (legal) move in a solvable game with a cycle number of c make it a solvable game with a cycle number of (c-1)? Though of course, 0 goes to 2, 2 goes to 1, 1 goes to 0? [You could define it that way.* (The legal restriction might not even be necessary.)]

[The stuff in []s was added later. The * was added after that. Then this was added.]

 

By construction (playing backwards):

(0, 0, 1), (1, 1, 0), (2, 0, 1), (1, 1, 2), (2, 2, 1), (3, 1, 2), (2, 2, 3)

(nada), red, black, white, red, black, white, 

 

Starting there:

(2, 2, 3) and c = 0

white

(3, 1, 2) and c = 2

white

(4, 0, 1) and c = 1

black

(3, 1, 0) and c = 0

red

(2, 0, 1) and c = 2

black

(1, 1, 0) and c = 1

red

(0, 0, 1) and c = 0

 

And illegal play:

(2, 2, 3) and c = 0

white

(3, 1, 2) and c = 2

white (you can't legally illegally play white again after this)

(4, 0, 1) and c = 1

black

(3, 1, 0) and c = 0

black (you can't legally illegally play black again after this)

(2, 2, -1) and c = 1

red (you can only legally play red at this point, for the rest of the game)

(1, 1, 0) and c = 1

red

(0, 0, 1) and c = 0

.

.

What does the game look like?

On round one (the very end):

(0, 0, 1) or (0, 1, 0) or (1, 0, 0)

On round 2 (the second to last round):

(1, 1, 0) or (1, 0, 1) or (0, 1, 1)

On round 3 (the third to last round):

(There'll be a 2, a 1, and a 0. The 1 will be in the same place as in round one.)

 

Other end states:

Can you put the game in an end state other than a permutation of (0, 0, 1)? Yes, easy as (1, 2, 3), black, white, black. Though (1, 2, 3) is solvable: black, white, white, then there's only one path left: red, then black.
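
For what it's worth, a small brute-force sketch (the pile order is arbitrary and the helper name is mine) lists every stuck state reachable from (1, 2, 3):

```python
MOVES = [(+1, -1, -1), (-1, +1, -1), (-1, -1, +1)]

def stuck_states(state, found=None):
    """Collect every state reachable from `state` that has no legal move left."""
    if found is None:
        found = set()
    has_move = False
    for move in MOVES:
        nxt = tuple(s + m for s, m in zip(state, move))
        if min(nxt) >= 0:
            has_move = True
            stuck_states(nxt, found)
    if not has_move:
        found.add(state)
    return found

# From (1, 2, 3) the game can get stuck at a three-chip dead end, but the
# only reachable single-chip ending is the middle color.
print(sorted(stuck_states((1, 2, 3))))  # [(0, 1, 0), (0, 3, 0)]
```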

 

*Or maybe not, see Other end states, above.