Today’s post will be quite an extended one, about an interesting game theory problem I’ve been thinking about recently. The game is a lottery that works as follows. Each of N players chooses a positive integer, and the player who picks the lowest integer that no-one else has selected wins. There are two variants of the game discussed in the literature, one in which the integers must be between 1 and N, and one in which they have no limit. Except in the specific case of only 3 players, this restriction seems to make no difference to the solution […unless I’ve got my algebra wrong, which is by no means impossible!].

Note that throughout this post, I’ll assume a basic familiarity with probability and knowledge of the formulae for infinite sums (given here, for example), particularly the sum to infinity of a geometric progression:

$\sum_{n=1}^{\infty} A x^n = \frac{Ax}{1-x}$

**3-Player Case**

I start with the N=3 case to illustrate the principles of the problem. Because, going into the problem, we have exactly the same information as the other two players, there is no deterministic solution, and in order to maximise our chances of victory we need to adopt a ‘mixed strategy’, which assigns a probability to the selection of each integer. This is explained a bit further at this website, although unfortunately the solution that they present is incorrect!

In the case that players are limited to selecting 1-N (so 1, 2 or 3), the problem is analysed in a paper on arXiv (although there’s also a mistake in this paper – this time the N=4 solution presented is incorrect, as I’ll show later), and my solution will follow the author’s.

We are trying to select a set of probabilities such that when all players play this strategy, no player increases her chances of victory by adjusting her strategy (such a solution is called a Nash Equilibrium in game theory, as seen in A Beautiful Mind). Let’s assume player 2 and player 3 are both using the strategy $(p_1, p_2, p_3)$, and player 1 tries to increase her chance of victory by instead playing the strategy $(q_1, q_2, q_3)$.

If she chooses 1, she wins if the other players both play either 2 or 3; if she chooses 2 she wins only if the other players both play 1 or both play 3; and if she chooses 3 she wins only if both other players play 1 or both play 2. So, we can express her total probability of victory as the sum of these three terms

$P = q_1 (p_2 + p_3)^2 + q_2 (p_1^2 + p_3^2) + q_3 (p_1^2 + p_2^2)$

subject to the normalisation condition

$q_1 + q_2 + q_3 = 1$
Maximising the victory probability is a natural problem for the method of Lagrange Multipliers. The Lagrange equation

$\Lambda = P - \lambda \left( q_1 + q_2 + q_3 - 1 \right)$

with $\partial \Lambda / \partial q_i = 0$, and imposing symmetry (at equilibrium the $q_i$ equal the $p_i$), gives us the following three equations

$(p_2 + p_3)^2 = \lambda, \quad p_1^2 + p_3^2 = \lambda, \quad p_1^2 + p_2^2 = \lambda$

And as in the paper above, we can solve these simultaneously along with the normalisation condition to give the probabilities

$p_1 = 2\sqrt{3} - 3 \approx 0.4641, \quad p_2 = p_3 = 2 - \sqrt{3} \approx 0.2679$

Her chances of victory are now $28 - 16\sqrt{3} \approx 0.28719$, regardless of her strategy – the equations above say that as long as two players play this solution, the other player can’t change her probability of winning by choosing ANY strategy. For example, if she switches her strategy to ALWAYS choosing 1, she will now win whenever the other two players play either 2 or 3, the chance of which is given by $(p_2 + p_3)^2 = \left(4 - 2\sqrt{3}\right)^2$ – and this is still 0.28719.
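As a quick numerical check (a short Python sketch of my own, not from the paper), every pure strategy wins with the same probability against two opponents playing this mixed strategy:

```python
import math

# Equilibrium strategy for the 3-player game restricted to {1, 2, 3}
p1 = 2 * math.sqrt(3) - 3       # ≈ 0.4641
p2 = p3 = 2 - math.sqrt(3)      # ≈ 0.2679

# Win probability of each pure strategy against two opponents playing (p1, p2, p3)
v1 = (p2 + p3) ** 2             # choose 1: both opponents play 2 or 3
v2 = p1 ** 2 + p3 ** 2          # choose 2: both play 1, or both play 3
v3 = p1 ** 2 + p2 ** 2          # choose 3: both play 1, or both play 2

print(round(v1, 5), round(v2, 5), round(v3, 5))   # 0.28719 0.28719 0.28719
```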

Now what if we relax the restriction that bids be between 1 and 3?

We can see immediately that the solution presented in the first link above, $p_n = (1/2)^n$, is incorrect – in this case, if the third player always plays 1, she wins a quarter of the time (when both other players play more than 1), while if she always plays 1000, she wins almost a third of the time (whenever the other two players play the same number). But we said her victory probability should be independent of strategy for the Nash Equilibrium, so this can’t be correct. But it is along the right lines – the actual solution does decay rapidly as $n$ increases.

Playing $i$, she wins if both other players choose numbers greater than $i$, or if they both choose the same number lower than $i$. Writing $T_i = \sum_{j>i} p_j$ for the probability that a given opponent plays a number greater than $i$, her chance of victory is now

$P = \sum_{i=1}^{\infty} q_i \left( T_i^2 + \sum_{j<i} p_j^2 \right)$

And differentiating this with respect to each $q_i$ in turn gives an infinite set of coupled equations

$T_i^2 + \sum_{j<i} p_j^2 = \lambda \quad \text{for all } i$

This turns out to be solvable, but difficult – a sketch of the solution is given at the bottom of the post. The solution is $p_n = p_1 x^{n-1}$, where $p_1$ is given by the normalisation condition as $p_1 = 1 - x$

and we can use a trick to find the parameter $x$. We’ve said that the third player’s odds of victory are independent of her strategy. Consider two strategies, one in which she plays the same as the other players, and another in which she always plays the same, very large number. In the first case, chances of victory are equal for all three players, so they must be equal to 1/3 times one minus the chances of NO-ONE winning (which happens when they all choose the same number)

$P = \frac{1}{3}\left(1 - \sum_{n=1}^{\infty} p_n^3\right)$

And in the second case, she wins whenever the other two players play the same number

$P = \sum_{n=1}^{\infty} p_n^2$

We can combine these and substitute in the solution $p_n = p_1 x^{n-1}$ with $p_1 = 1 - x$, and solve for $x$:

$\sum_{n=1}^{\infty} p_n^2 = \frac{1}{3}\left(1 - \sum_{n=1}^{\infty} p_n^3\right)$

and using the summation formula

$\frac{(1-x)^2}{1-x^2} = \frac{1}{3}\left(1 - \frac{(1-x)^3}{1-x^3}\right)$

Expanding out and cancelling terms where possible, then taking out factors:

$3\left(1 - x^3\right) = 3x + 3x^2$

$x^3 + x^2 + x = 1 \qquad (1)$

So $x$ is the real root of the equation $x^3 + x^2 + x = 1$, which is given by

$x = \frac{1}{3}\left(\sqrt[3]{17 + 3\sqrt{33}} + \sqrt[3]{17 - 3\sqrt{33}} - 1\right) \approx 0.5437$

A quick wikipedia search reveals that this is the reciprocal of something called the tribonacci constant, vital in the construction of the Snub Cube. I don’t know if this has some deeper resonance, but I’d love to hear from anyone who thinks it does! So, the full symmetric Nash Equilibrium in the 3-player, no upper bid limit case is

$p_n = (1 - x)\, x^{n-1}$

with $x \approx 0.5437$ the real root of equation (1). Note that the (incorrect) form given in the linked-to website can also be expressed in the same form, but with $x = 1/2$.
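We can verify this equilibrium numerically (a sketch of my own, with the infinite sums truncated at 60 terms): solve equation (1) by bisection, then check that every pure strategy wins with the same probability against two opponents playing $p_n = (1-x)x^{n-1}$.

```python
# Solve x^3 + x^2 + x = 1 by bisection
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mid**3 + mid**2 + mid < 1:
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2               # ≈ 0.54369, reciprocal of the tribonacci constant

N = 60                          # truncate the infinite sums; p_n is tiny beyond this
p = [(1 - x) * x ** (n - 1) for n in range(1, N + 1)]

def win_prob(i):
    """Win probability of always playing i against two opponents playing p."""
    tail = sum(p[i:])                             # both opponents above i
    lower_pairs = sum(q * q for q in p[:i - 1])   # both opponents equal, below i
    return tail ** 2 + lower_pairs

probs = [win_prob(i) for i in range(1, 11)]
print(round(x, 5), round(1 / x, 5))               # 0.54369 1.83929
```

The printed value of $1/x$, 1.83929, is the tribonacci constant, and the elements of `probs` all equal $\lambda = x^2$.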

**4-Player Case**

In the case of 4 players and above, the maximisation equation is

$P = \sum_i q_i \left( T_i^3 + 3 T_i \sum_{j<i} p_j^2 + \sum_{j<i} p_j^3 \right), \quad T_i = \sum_{j>i} p_j$

where the new term in the middle of the expression comes from the victory scenario in which two players choose the same integer, lower than our player’s, and so are not unique, while the third player chooses a larger one. This is the term that is missing from the paper linked to above, which leads to the incorrect solution.
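As a sanity check on this expression, we can compare it against a brute-force enumeration of the three opponents’ choices, for an arbitrary (made-up, non-equilibrium) distribution over {1, 2, 3, 4}:

```python
from itertools import product

def win_prob_formula(i, p):
    # T_i^3 + 3*T_i*sum_{j<i} p_j^2 + sum_{j<i} p_j^3
    T = sum(p[i:])
    A = sum(q * q for q in p[:i - 1])
    B = sum(q ** 3 for q in p[:i - 1])
    return T ** 3 + 3 * T * A + B

def win_prob_brute(i, p):
    # Enumerate all triples of opponent choices and add up the probability
    # that i is the lowest number held by exactly one player.
    total = 0.0
    choices = range(1, len(p) + 1)
    for trio in product(choices, repeat=3):
        picks = (i,) + trio
        uniques = [n for n in picks if picks.count(n) == 1]
        if uniques and min(uniques) == i:
            total += p[trio[0] - 1] * p[trio[1] - 1] * p[trio[2] - 1]
    return total

dist = [0.4, 0.3, 0.2, 0.1]     # arbitrary test distribution, not the equilibrium
diffs = [abs(win_prob_formula(i, dist) - win_prob_brute(i, dist))
         for i in range(1, 5)]
```

The two calculations agree for every choice of $i$.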

In the case that UL = 4, we have four equations plus the normalisation condition to solve. The equations are cubic, so the cubic formula is required – it’s uncommon enough for me to have had to refer to wikipedia (the solution can be found here) – and we are saved some hassle because the relevant cubic equations don’t have a quadratic term.

Once again, a sketch of the solution in the unrestricted scenario is presented at the bottom of the post, but it seems to make very little difference whether the upper limit UL is set to 4 or infinity. In both cases, I find that

Why does the upper limit no longer make any difference? We only want to choose a higher number if there is a significant chance of the lower numbers being ‘crowded out’ by the other players. That is, we sometimes choose 4 because there is a small chance that the other three players will all select either 1, 2 or 3. Similarly, we would only choose 5 if there was a significant chance that 4 wasn’t high enough, because there was also a significant chance of all of the other three playing 4 – but from looking at the values, we see that $p_4$ is very small already, and the chance $p_4^3$ of all three playing it is truly tiny. Consequently, in the general case, $p_n$ for $n > N$ is going to be negligibly small.

**N-player Case**

The problem rapidly becomes hard to compute for large N, but there is a body of literature on the subject; in particular, this 2010 paper points out the mistake in the paper I linked to before, and presents a general solution for N players. Note from their graphs that the probability of choosing a number greater than N is essentially 0 for higher numbers of players, so whether or not we impose this restriction doesn’t change the equilibrium materially.

**Derivations**

I hope you’ve enjoyed my take on this problem. The sketches of my method of solution for the 3- and 4-player cases are given below, but can probably be skipped on a first reading.

In the three-player case, the Lagrange equations are

$T_i^2 + \sum_{j<i} p_j^2 = \lambda, \quad \text{where} \quad T_i = \sum_{j>i} p_j$

For i=1, and using normalisation, this gives

$\lambda = T_1^2 = (1 - p_1)^2$

Which means for i greater than 1 we have

$T_i^2 + \sum_{j<i} p_j^2 = (1 - p_1)^2$

and with a little re-arrangement (subtracting consecutive equations gives $T_i^2 - T_{i+1}^2 = p_i^2$, with $T_{i+1} = T_i - p_{i+1}$)

$p_{i+1}\left(2T_i - p_{i+1}\right) = p_i^2, \quad \text{i.e.} \quad p_{i+1} = T_i - \sqrt{T_i^2 - p_i^2}$

This gives a ‘step-up’ formula for the construction of later terms from the sum of earlier terms. An engineering-style solution is to write a script (or an excel spreadsheet) that calculates $p_2$ to $p_{UL}$ from an initial guess of $p_1$, and then varying the initial guess to satisfy the normalisation condition. Alternatively, we can show by induction that the expression above is satisfied by our solution $p_n = (1-x)x^{n-1}$. First of all, assuming the relationship $p_{j+1} = x p_j$ holds for all $j < i$, let’s see what it implies for $p_{i+1}$ (for notational simplicity, I’m going to write the step-up relation below as $p_{i+1}(2T_i - p_{i+1}) = p_i^2$)

and remembering from our normalisation condition that $p_1 = 1 - x$, so that $T_i = x^i$, the proposed $p_{i+1} = (1-x)x^i$ gives

$p_{i+1}\left(2T_i - p_{i+1}\right) = (1-x)x^i\left(2x^i - (1-x)x^i\right) = (1-x)(1+x)x^{2i} = (1-x)^2 x^{2i-2} = p_i^2$

where the last step uses $(1+x)x^2 = 1 - x$, a rearrangement of equation (1)

so we see that the ratio $p_{i+1}/p_i = x$ demonstrated above satisfies the step-up equation for every $i > 1$. The first term is a special case, so I show this separately: the $i = 1$ equation reads

$\lambda = T_1^2 = (1 - p_1)^2$

And now setting $p_1 = 1 - x$:

$\lambda = x^2, \quad \text{and indeed} \quad T_2^2 + p_1^2 = x^4 + (1-x)^2 = x^2$

(using $x^4 = 2x - 1$, which follows from equation (1))

so we see that the first term is satisfied by the expression.
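The engineering-style approach described above can be sketched in a few lines of Python (my own script, with variable names of my choosing): generate successive terms from the step-up relation $p_{i+1}(2T_i - p_{i+1}) = p_i^2$, and bisect on the initial guess $p_1$.

```python
import math

def run_step_up(p1, n_terms=30):
    """Generate p_2, p_3, ... from the step-up relation
    p_{i+1} * (2*T_i - p_{i+1}) = p_i^2, where T_i = 1 - (p_1 + ... + p_i).
    Returns the leftover tail T_n, or None if the recursion breaks down
    (which happens when the initial guess is too large)."""
    p, head = [p1], p1
    for _ in range(n_terms - 1):
        T = 1.0 - head
        disc = T * T - p[-1] ** 2
        if disc < 0:
            return None
        nxt = T - math.sqrt(disc)       # smaller root of the quadratic
        p.append(nxt)
        head += nxt
    return 1.0 - head

# Bisect on the initial guess p_1: too small leaves a positive tail,
# too large makes the recursion break down.
lo, hi = 0.1, 0.9
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if run_step_up(mid) is not None else (lo, mid)
p1 = (lo + hi) / 2
print(round(p1, 5))    # 0.45631, matching 1 - x
```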

A similar expression for 4 players is given by the solution to

$T_i^3 + 3 T_i \sum_{j<i} p_j^2 + \sum_{j<i} p_j^3 = \lambda, \quad T_i = \sum_{j>i} p_j$

which gives

$p_{i+1}\left(T_i^2 + T_i T_{i+1} + T_{i+1}^2 + 3\sum_{j<i} p_j^2\right) = p_i^2 \left(3 T_{i+1} + p_i\right)$

with

$\lambda = T_1^3 = (1 - p_1)^3$

I’ve not found an analytical solution to this for general $N$ yet, and had to proceed by the variational technique discussed above, giving the answers in the body of the post. If you can solve this exactly for me I’d be very interested to hear!
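For concreteness, here is a sketch (my own code, with a truncation at 10 terms as an assumption) of that variational technique for the 4-player recursion: each step solves the step-up relation, which is a cubic in $p_{i+1}$, by bisection, and an outer bisection tunes $p_1$. The consistency check is that the resulting distribution makes every pure strategy’s win probability essentially equal.

```python
def cubic_root(Ti, pi, A_next):
    # Step-up relation for 4 players, written as a cubic in s = p_{i+1}:
    #   s^3 - 3*Ti*s^2 + 3*(Ti^2 + A_next)*s - pi^2*(3*Ti + pi) = 0
    # Its derivative 3*((s - Ti)^2 + A_next) is positive, so the real root
    # is unique and lies in [0, pi]; plain bisection finds it.
    lo, hi = 0.0, pi
    for _ in range(200):
        s = (lo + hi) / 2
        f = s**3 - 3*Ti*s**2 + 3*(Ti**2 + A_next)*s - pi**2 * (3*Ti + pi)
        lo, hi = (s, hi) if f < 0 else (lo, s)
    return (lo + hi) / 2

def run(p1, n_terms=10):
    # Build p_2, p_3, ... from an initial guess p_1.  Returns
    # (leftover tail, probabilities); tail is None if the recursion
    # breaks down, i.e. the guess was too large.
    p, head = [p1], p1
    for _ in range(n_terms - 1):
        Ti = 1.0 - head
        s = cubic_root(Ti, p[-1], sum(q * q for q in p))
        if s > Ti:
            return None, p
        p.append(s)
        head += s
    return 1.0 - head, p

lo, hi = 0.1, 0.9       # bisect on p_1: too small leaves a tail, too large fails
for _ in range(100):
    mid = (lo + hi) / 2
    t, _ = run(mid)
    lo, hi = (mid, hi) if t is not None else (lo, mid)
tail, p = run(lo)

def win_prob(i):
    # Maximisation-equation term: T_i^3 + 3 T_i sum_{j<i} p_j^2 + sum_{j<i} p_j^3
    T = sum(p[i:])
    return T**3 + 3 * T * sum(q * q for q in p[:i-1]) + sum(q**3 for q in p[:i-1])

vs = [win_prob(i) for i in range(1, 5)]
```

The probabilities decay so quickly that 10 terms is ample, which is another way of seeing why the upper limit makes no practical difference.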

**Comments**

Shouldn’t the formula for the infinite sum above be from 0 to infinity, not from 1?

Mike – I think ‘unique, positive integer’ implies a choice that is 1 or greater. So I don’t think we should include 0 in these sums.

Should your very first sum be (A*x) / (1-x)?

It should be! Well spotted, thanks

Nice post!

A quick observation: I don’t think the solution presented in the link (The Grey Labyrinth) is incorrect – it seems the issue is that you are analysing slightly different games. The rules of the game in that link state that in the event of a draw (all three players select the same number), each player has his dollar returned. So in that game a draw is better than a straight loss, and is in fact 1/3 as good as a win.

In contrast, the game you analyse counts a draw as a loss for everyone. So it makes sense that the optimal strategies are slightly different.

A version of this game (where one can choose any integer, not just 1, 2, 3) exists in which thousands of people participate on a regular basis, attempting to win a large cash prize. What would you expect play in this large game to look like?

The solution at The Grey Labyrinth is not wrong. They are solving a different problem. In their case the players still receive 1/3 of the reward in case of a three way tie, which changes the expected payoffs for different strategies. The equation for x then becomes 2x^3 + x^2 + x = 1, which has a solution where x = 1/2.