Computational Methods for Tabletop Games 1: Zombie Dice and Monte Carlo Simulation

Many board and tabletop games rely on the randomness of dice rolls (or card order) to create uncertainty in the game. That randomness leads many frustrated players to ask, “What are the odds of that happening?” Catan is a good example: a run of turns without any of your numbers coming up will quickly lead to frustration! Catan’s odds are easy to calculate, though. The die rolls are independent from one turn to the next, and the state space of outcomes never changes.

When games get more complex, the odds may be impossible to compute analytically, or just tedious to compute by hand. Zombie Dice is an example of just such a game. Zombie Dice has many stages within a turn, and while each stage is analytically calculable, the overall odds of getting some number of brains depend on the player’s strategy. Using a general method called Monte Carlo simulation, we can easily play thousands of turns and calculate the odds, all without needing more than a simple random number generator.

Zombie Dice

The game consists of 13 dice of three different kinds. All the dice have faces that are either brains, runners, or shotguns. Brains are good, runners are neutral, and shotguns are very, very bad for your zombie face. The three kinds of dice have different ratios of those faces: the red ones have the most shotguns, while the green ones have the fewest.

The rules of Zombie Dice are very simple, and you can read about them on the wiki page and in the official rules PDF. The basic idea is to keep rolling as long as you want, banking brains as points, until you collect 3 shotguns, at which point your turn is over and you score nothing for it.

The randomness happens in two stages. First, you don’t know which 3 dice you will pull out of the cup at the start of each turn. Then, you don’t know which face on each of those dice will show up. On the next roll of your turn, you do know that some dice are out of the cup (so if you rolled brains on a red die, you’re in luck!), so the probabilities change, conditional on the dice that were drawn before.

Monte Carlo Simulation

There’s plenty on the internet about probabilistic simulation that will be better than what I can write, so I’ll just do a brief overview. Monte Carlo simulation is a very simple concept. If you have a simulation that is driven by random inputs (such as counting the sum of dice rolls), then running that simulation many times and tracking the results is known as Monte Carlo (MC) simulation.

We can do this simulation very quickly in Python. Let’s say we want to know the distribution of the sum of 10 rolls of a 6-sided die. All we have to do is use a random number generator (RNG) to choose a number from 1 to 6 ten times, add them up, and store that value. We’ll do this 100,000 times.

import numpy as np

sums = []
for _ in range(100000):
    # roll ten 6-sided dice and record their total
    vals = np.random.choice(range(1, 7), size=10)
    sums.append(vals.sum())

The histogram of the sums looks like this:

[Figure: SumOf10Rolls — histogram of the sums of 10 rolls]
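If you want to reproduce a plot like that, here is a minimal matplotlib sketch (the binning and labels are my own choices, not necessarily the original plot’s):

import matplotlib.pyplot as plt

plt.hist(sums, bins=range(10, 61))  # possible sums run from 10 to 60
plt.xlabel('sum of 10 rolls')
plt.ylabel('count')
plt.show()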

The mean value makes sense: since the average value of a single roll is 3.5, the average value of 10 rolls should be 35. What we’ve just done is the essence of MC simulation. We’ve run a simulation (in this case, a summation) a whole bunch of times, and that simulation had random/uncertain inputs. Each time we run the simulation, we generate a new input based on the distribution of our uncertainty in the inputs.

Before we go on, let’s make a quick note about convergence. Convergence, in MC terms, is when the simulation has run long enough that the quantities of interest won’t change. If we were trying to measure the average sum of dice rolls, we would say that the simulation has converged when that average sum doesn’t change significantly with each new run.

One way to show this is to plot the running mean of our simulation’s result as a function of run count. We’ll also plot the standard deviation of that mean value. Remember that the variance of the mean is just the sample variance divided by the number of runs:

\sigma^2_{\bar{x}} =\frac{\sigma^2}{N}.
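As a concrete sketch, here is one way to compute the running mean and an approximate 95% band from the sums above (the normal-approximation band is my choice; other intervals work too):

import numpy as np

s = np.asarray(sums, dtype=float)
n = np.arange(1, len(s) + 1)

running_mean = np.cumsum(s) / n

# sample variance after each run, from running sums of s and s**2
sample_var = (np.cumsum(s**2) - n * running_mean**2) / np.maximum(n - 1, 1)

# variance of the mean = sample variance / N
se_mean = np.sqrt(sample_var / n)
ci95 = 1.96 * se_mean  # half-width of an approximate 95% interval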

Here is the plot:

[Figure: ConvergenceOf10Rolls — running mean of the 10-roll sum with 95% confidence bounds]

The result is pretty wiggly, but it starts to settle down after a while. It’s never very far from 35, and 35 is always within the 95% confidence bounds. Seeing the result settle down, along with a small 95% interval, tells us that our simulation has converged. These kinds of plots are the beginning of analyzing convergence. There are other ways to do it (such as Kolmogorov-Smirnov tests and autocorrelation), but our examples are simple enough that we don’t need to get too fancy.

The main thing to keep in mind is that the accuracy of the estimate improves (gets a smaller standard deviation) like \sigma/\sqrt{N}. More samples means more potential accuracy. You can’t control the numerator \sigma (it’s a property of the process being simulated), so it’s up to the run count N to reduce the error.
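To see that scaling in action, here is a throwaway check (the sample sizes are arbitrary picks):

for n in [100, 10000, 1000000]:
    rolls = np.random.choice(range(1, 7), size=n)
    # the error in the mean shrinks roughly like 1/sqrt(n)
    print(n, abs(rolls.mean() - 3.5))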

Zombies in the Casino

Now it’s time to do MC on the Zombie Dice game. Before we even get to running the simulation, we need to write some code that plays the game for us! I’ll write the code for readability rather than efficiency. We start by making a Dice class, as well as a function that draws 3 dice from our ‘cup’ and rolls them to get the faces that show.

import numpy as np

class Dice(object):
    def __init__(self, nb, ns, nr):
        # nb, ns, nr: number of brain, shotgun, and runner faces (out of 6)
        self.faces = ['brains', 'shotgun', 'runner']
        self.probs = np.array([nb, ns, nr]) / 6

def roll_outcome(cup):
    # draw 3 dice from the cup without replacement
    rolled = np.random.choice(cup, size=3, replace=False)
    # roll each drawn die using its own face weights
    outcome = [np.random.choice(die.faces, p=die.probs) for die in rolled]
    return rolled, outcome

The roll_outcome function is where our MC simulation gets its randomness. We use a random choice without replacement to pull 3 dice from the cup, and we use a weighted random choice to get the face-up symbols.

We create the initial cup with this code:

# 6 green dice (3 brains, 1 shotgun, 2 runners), 4 yellow (2, 2, 2),
# and 3 red (1 brain, 3 shotguns, 2 runners)
init_die = [Dice(3, 1, 2) for _ in range(6)] + \
    [Dice(2, 2, 2) for _ in range(4)] + \
    [Dice(1, 3, 2) for _ in range(3)]
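As a quick sanity check, a single draw-and-roll looks like this (the printed outcome will vary run to run):

rolled, outcome = roll_outcome(init_die)
print(outcome)  # e.g. ['brains', 'runner', 'shotgun']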

In the next post I’ll go over the code for playing a single roll, for playing a whole turn, and for trying out different player strategies. For now, let’s keep it simple with the MC idea and just look at how we find the probabilities of face results for the first roll in a turn.

The MC data that we want are the probabilities of each of the 10 possible outcomes (think of all the unordered ways we can combine brains, runners, and shotguns across 3 dice). Using 100,000 MC runs, we get the following probabilities (a sketch of the tallying loop follows the table):

Probability  Faces
    0.21954  ('brains', 'runner', 'shotgun')
    0.13523  ('brains', 'brains', 'runner')
    0.12604  ('brains', 'brains', 'shotgun')
    0.12495  ('brains', 'runner', 'runner')
    0.09887  ('runner', 'runner', 'shotgun')
    0.09718  ('brains', 'shotgun', 'shotgun')
    0.08665  ('runner', 'shotgun', 'shotgun')
    0.05024  ('brains', 'brains', 'brains')
    0.03717  ('runner', 'runner', 'runner')
    0.02413  ('shotgun', 'shotgun', 'shotgun')
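Here is a minimal sketch of that tallying loop, using collections.Counter (my bookkeeping choice; any counting scheme works):

from collections import Counter

n_runs = 100000
counts = Counter()
for _ in range(n_runs):
    _, outcome = roll_outcome(init_die)
    counts[tuple(sorted(outcome))] += 1  # ignore the order of the faces

for faces, count in counts.most_common():
    print('%.5f' % (count / n_runs), faces)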

The most common outcome is 1 of each face. The next most common is two brains and a runner (a pretty good roll!). The least common is a loss on the first roll. The game seems fairly well balanced to not discourage players right off the bat.

We can use the same runs to get the distribution of points for the first roll, along with the convergence on the expected value of points after the first roll. Here’s a histogram of first roll results, in terms of points:

[Figure: PointsAfterFirst — histogram of points (brains) from the first roll]

And here’s the convergence of the expected value (based on those points):

[Figure: ConvergenceOfFirstRoll — running mean of first-roll points with confidence bounds]
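The points tally itself is just the brain count in each outcome; a sketch of that loop, reusing roll_outcome from above:

points = []
for _ in range(100000):
    _, outcome = roll_outcome(init_die)
    points.append(outcome.count('brains'))  # each brain is worth 1 point

print(np.mean(points))  # settles a bit above 1 brain per first roll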

Keep Running, Humans

We’ve found out that our first roll in Zombie Dice is probably just going to give us 1 brain (on average). We also have a good idea of the distribution of more specific outcomes, all thanks to Monte Carlo simulation. The nice thing is that we never needed to solve for the exact values! All we had to do was generate random rolls and keep track of the results!

The Zombie Dice single-roll problem is not too bad to do analytically, though, so this hasn’t served as the best motivator for MC. In the next post, we’ll get into the problem of finding the expected value of player strategies, and this is an area where MC shines! In that problem, solving exactly would require a lot of work keeping track of the branching tree of outcome possibilities and their probabilities. With MC simulation, the only thing we have to code is our player’s logic, and the simulation will do the rest.

See you next post!
