OddThinking

A blog for odd things and odd thoughts.

Analysing the first move of Klondike

A few days ago, I commented on Usman Latif’s analysis of the very first move of Klondike. Latif used the Monte Carlo method to estimate the chance that there are exactly no moves possible from the initial deal.
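For comparison, the Monte Carlo approach really is only a handful of lines. Here is a sketch of the idea (mine, not Latif’s actual code, and the helper names playable, deal_has_no_move and estimate are my own). It assumes three-at-a-time draws, so that 8 stock cards get revealed, and that a first move exists if any face-up tableau card or revealed stock card is an Ace, or can go onto a face-up tableau card one rank higher and of the opposite colour:

    import random

    RANKS = range(1, 14)        # 1 = Ace ... 13 = King
    SUITS = ["S", "H", "D", "C"]
    RED_SUITS = {"H", "D"}

    def is_red(card):
        return card[1] in RED_SUITS

    def playable(card, tableau_up):
        # On the very first move, a card can be played only if it is an Ace,
        # or if it fits onto one of the seven face-up tableau cards:
        # one rank higher and the opposite colour.
        rank = card[0]
        if rank == 1:
            return True
        return any(t[0] == rank + 1 and is_red(t) != is_red(card)
                   for t in tableau_up)

    def deal_has_no_move(rng):
        deck = [(rank, suit) for rank in RANKS for suit in SUITS]
        rng.shuffle(deck)
        tableau_up = deck[:7]    # any 7 cards of a shuffled deck will do
        stock = deck[28:]        # the 24 stock cards
        revealed = stock[2::3]   # with 3-at-a-time draws, 8 cards are seen
        return not any(playable(c, tableau_up) for c in tableau_up + revealed)

    def estimate(trials=100_000, seed=1):
        rng = random.Random(seed)
        return sum(deal_has_no_move(rng) for _ in range(trials)) / trials

    if __name__ == "__main__":
        print(estimate())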

As I have mentioned before:

like backtracking (which I have dismissed before), the Monte Carlo method skips the niceties of logical analysis to use sheer brute force to solve a problem.

I didn’t think Latif had given analysis a fair go, so I gave it a try.

My very first attempt was a dismal failure after just a few minutes. However, I had another stab, and this time, I got a lot further.

The solution takes a few pages to work through.

Using the Principle of Mathematical Induction, it works out the chance that the nth card is unplayable, given that the first n-1 cards are all unplayable. To do that, it has to know how many of the previous cards are deuces, how many are kings, how many “matches” there are (same rank and colour), and how many of those are deuces and kings.

The neat trick is then to work out what the odds are that this unplayable card is itself a matching/non-matching deuce/middle/king card, and recurse with each of those options to find out the chance of success. Add the tail case (which represents the chance that the 8 revealed cards in the stock are all unplayable) and voilà – a six-way recursive function that determines the odds. (Ackermann, eat your heart out! Two-way recursion is for wusses!)
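In code, the shape of it is something like the sketch below. This is not my actual implementation – the state counters are the tallies described above, and the helpers p_unplayable, p_category and p_tail (the conditional probabilities and the tail case) are left as parameters, because they are the fiddly part:

    # The six categories an unplayable card can fall into: matching or
    # non-matching (same rank and colour as an earlier card), crossed with
    # deuce / middle card / king.
    CATEGORIES = [(match, kind)
                  for match in (False, True)
                  for kind in ("deuce", "middle", "king")]

    def p_all_unplayable(p_unplayable, p_category, p_tail,
                         n=0, deuces=0, kings=0,
                         matches=0, match_deuces=0, match_kings=0):
        # Chance that the 7 face-up tableau cards, and then the 8 revealed
        # stock cards, are all unplayable, given the tallies so far.
        state = (n, deuces, kings, matches, match_deuces, match_kings)
        if n == 7:
            return p_tail(state)     # the 8 stock cards, all unplayable
        total = 0.0
        for match, kind in CATEGORIES:
            # Pruning: only four deuces and four kings exist in the deck.
            if kind == "deuce" and deuces == 4:
                continue
            if kind == "king" and kings == 4:
                continue
            total += p_category(state, match, kind) * p_all_unplayable(
                p_unplayable, p_category, p_tail,
                n + 1,
                deuces + (kind == "deuce"),
                kings + (kind == "king"),
                matches + match,
                match_deuces + (match and kind == "deuce"),
                match_kings + (match and kind == "king"),
            )
        return p_unplayable(state) * total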

Of course, the problem with a six-way recursive function is how long it takes to evaluate, especially when you are trying for n = 7. I figured 6^7 = ~280,000 executions of the tail case alone. The tail case has a significant constant cost (computing combinations involves large factorials, around n = 45, and Python is slow) and there are only 86,400 seconds in the day. Finger-in-the-air estimate? About 6 to 18 hours of computing time.
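(That range corresponds to roughly 0.08 to 0.23 seconds per tail-case call: 280,000 calls at those speeds is about 22,000 to 65,000 seconds, i.e. 6 to 18 hours.)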

Off I went, and implemented it in Python. I optimised the tail case by caching the values of the combination function in a table. There was also significant pruning of the recursion tree (no extra effort – it was required anyway, to prevent, for example, five Kings from being played).
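The table itself is nothing clever. A reconstruction (not the original code, and assuming nothing larger than choose-from-52 is ever needed) looks something like this:

    from math import factorial

    # Pre-compute C(n, k) once, so the tail case does a table lookup instead
    # of grinding through factorials of ~45 every time it needs a combination.
    MAX_N = 52
    CHOOSE = [[factorial(n) // (factorial(k) * factorial(n - k))
               for k in range(n + 1)]
              for n in range(MAX_N + 1)]

    def choose(n, k):
        # Number of ways to pick k items from n, via the cached table.
        if k < 0 or k > n:
            return 0
        return CHOOSE[n][k]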

I was incredibly surprised when the whole thing completed with only 31,000 executions of the tail case, and in well under five seconds! So much for my estimates!

That was the good news.

The bad news was that, when I turned on the error-checking assert statements, they revealed I had made a horrible mistake in my analysis. I had wrongly calculated the odds that the next card is a middle card, given that it is unplayable. The correct way of calculating it is still beyond me.

I suspect what it needs is a 12-way recursive function (no need to recurse for Aces). My ever-so-reliable estimation methods only put that at about ten minutes of computing. However, it sounds suspiciously close to checking every possible input.

Checking every possible input to a problem is a deterministic cousin of the stochastic Monte Carlo method. It gives an exact answer, rather than a probabilistic one based on sampling. However, it is too close to brute force, and hence my interest is starting to wane.

Maybe I should spend some more time on it – cutting the problem down from 52! to 12^7 is a considerable optimisation and worth a try – but I think I will take a break for now. In the meantime, I have to give a nod to Latif; the analysis remains trickier than I thought.


Comment

  1. Never let it be said that oddthinking suffers from positive result bias.
