How many Monte Carlo samples are enough? | Vose Software


See also: Monte Carlo simulation introduction

A question naturally arises when doing Monte Carlo simulation: how do we determine how many samples to run a Monte Carlo model for?

Tamara simulates so fast that for most project schedules, a risk analysis simulation of 10,000 samples will only take a matter of seconds, and 10,000 samples is quite sufficient to get stable results.

Simulation models built with ModelRisk, and the questions one is seeking to answer, are more varied, so the number of samples needed is more complex to determine. In general, you'll have two opposing pressures:

  •  Too few samples and you get inaccurate outputs, and graphs (particularly histogram plots) that look 'scruffy';

  •  Too many samples and it takes a long time to simulate, and it may take even longer to plot graphs, export and analyze data, etc. afterwards. Export the data into Excel and you may also run into its row limit and the limit on the number of points that can be plotted in a chart.

If the output of greatest interest is graphical, you will need a plot that would not change to any meaningful degree by running more samples (i.e. it is stable). As a rough rule:

  •  For a stable histogram plot, you will need 1,000 - 2,000 samples;

  •  For a stable cumulative plot, you will need 200 - 400 samples;

  •  For a stable scatter plot between two variables, you will need 3,000 - 4,000 samples;

  •  For a stable tornado plot, you will need 2,000 - 3,000 samples;

  •  For a stable trend plot, you will need 500 - 1,000 samples.
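One practical way to judge plot stability for yourself is to compare the empirical cumulative curve built from the first n samples with the one built from 2n samples: if doubling the sample count barely moves the curve, more samples will not change the picture. A minimal sketch of that check (the lognormal output here is just a stand-in for a real model output, and the grid size is arbitrary):

```python
import random
from bisect import bisect_right

def cdf_shift(samples_a, samples_b, grid):
    """Largest vertical gap between two empirical CDFs evaluated on a grid."""
    a, b = sorted(samples_a), sorted(samples_b)
    return max(abs(bisect_right(a, g) / len(a) - bisect_right(b, g) / len(b))
               for g in grid)

random.seed(1)
out = [random.lognormvariate(0.0, 0.5) for _ in range(8000)]  # stand-in output
lo, hi = min(out), max(out)
grid = [lo + (hi - lo) * i / 199 for i in range(200)]

# If doubling the sample count barely moves the curve, the plot is stable:
for n in (100, 400, 1600):
    print(f"n={n:5d}  max CDF shift vs 2n: {cdf_shift(out[:n], out[:2 * n], grid):.3f}")
```

The same comparison works for histograms, though histogram bars need far more samples to settle than the cumulative curve does, which is why the rough rules above differ by plot type.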


It's possible to give such guidance for plots because, after a certain number of samples, they stop changing much visually. However, if your output of greatest interest is a particular statistic, the number of samples depends mostly on the level of precision you need in the reported statistic: the higher the number of samples, the greater the precision, of course.

There will usually be one or more statistics that you are interested in from your model outputs, so it would be quite natural to wish to have sufficient samples to ensure a certain level of accuracy. Typically, that accuracy can be described in the following way:

'I need the statistic Z to be precise to within +/- d with confidence a.'

ModelRisk offers a feature called Precision Control that allows you to specify a set of outputs and the statistics of interest, together with the precision level and confidence levels for them. It will then continue to run a simulation until all precision levels have been achieved. The method is underpinned by some statistical techniques, examples of which are described below:
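Precision Control is a built-in ModelRisk feature, but the underlying stopping rule can be sketched generically: run a pilot batch, then keep extending the simulation until the estimated confidence-interval half-width for the statistic falls below the required d. A sketch of that idea for the mean (the function, warm-up size, and batch size are illustrative, not ModelRisk's internals):

```python
import random
from statistics import NormalDist, mean, stdev

def run_until_precise(model, d, a, warmup=50, batch=200, max_n=200_000):
    """Keep sampling `model` (a zero-argument function returning one
    simulated output) until its mean is known to within +/-d at confidence a."""
    z = NormalDist().inv_cdf((1 + a) / 2)      # two-sided Normal quantile
    xs = [model() for _ in range(warmup)]
    while len(xs) < max_n:
        s = stdev(xs)                          # running estimate of output st. dev.
        if z * s / len(xs) ** 0.5 <= d:        # confidence-interval half-width ok?
            break
        xs.extend(model() for _ in range(batch))
    return mean(xs), len(xs)

random.seed(7)
# Illustrative model: Normal(100, 20) output, mean wanted to +/-1 at 95%:
est, n_used = run_until_precise(lambda: random.gauss(100, 20), d=1.0, a=0.95)
print(f"mean = {est:.2f} after {n_used} samples")
```

Checking in batches, rather than after every sample, keeps the overhead of the stopping test negligible.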



Getting sufficient accuracy for the mean

Monte Carlo simulation estimates the true mean m of the output distribution by summing all of the generated values xi and dividing by the number of samples n:

$$\hat{m} = \frac{1}{n}\sum_{i=1}^{n} x_i$$
If Monte Carlo sampling is used, each xi is an independent sample from the same distribution. The Central Limit Theorem then says that the distribution of the estimate of the true mean is (asymptotically) given by:

$$\hat{m} \sim \text{Normal}\!\left(m, \frac{s}{\sqrt{n}}\right)$$
where s is the true standard deviation of the model's output.

Using a statistical principle called the pivotal method, we can rearrange this to give a distribution for m:

$$m \sim \text{Normal}\!\left(\hat{m}, \frac{s}{\sqrt{n}}\right) \qquad (1)$$
Figure 1 shows the cumulative form of the Normal distribution for Equation (1). Specifying the level of confidence we require for our mean estimate translates into a relationship between d, s, and n as you can see from Figure 1:

Figure 1

More formally, this relationship in Equation (1) is:

$$d = \Phi^{-1}\!\left(\frac{1+a}{2}\right)\frac{s}{\sqrt{n}} \qquad (2)$$

where Φ-1(•) is the inverse of the standard Normal cumulative distribution function (i.e. with mean 0 and standard deviation 1). Rearranging (2) and recognizing that we want to have at least this accuracy gives a minimum value for n:

$$n > \left(\frac{\Phi^{-1}\!\left(\frac{1+a}{2}\right)s}{d}\right)^{2} \qquad (3)$$
We have one problem left: we don't know the true output standard deviation s. It turns out that we can estimate this perfectly well for our purposes by taking the standard deviation of the first few (say 50) samples.
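The minimum-n rule for the mean is a one-line calculation once the pilot standard deviation is in hand. A sketch (the pilot value of 20 and the precision targets are illustrative):

```python
import math
from statistics import NormalDist

def min_samples_for_mean(s_est, d, a):
    """Smallest n so that the output mean is estimated to within +/-d at
    confidence a; s_est is the st. dev. taken from a short pilot run."""
    z = NormalDist().inv_cdf((1 + a) / 2)   # two-sided Normal quantile
    return math.ceil((z * s_est / d) ** 2)

# e.g. pilot st. dev. of 20, mean wanted to within +/-1 at 95% confidence:
print(min_samples_for_mean(20.0, 1.0, 0.95))
```

Note the quadratic cost of precision: halving d quadruples the required number of samples.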

Getting sufficient accuracy for the cumulative probability P(x) associated with a particular value x

Percentiles closer to the 50th percentile of an output distribution will reach a stable value far more quickly than percentiles towards the tails. On the other hand, we are often most interested in what is going on in the tails, because that is where the risks and opportunities lie. For example, Basel II and credit rating agencies often require that the 99.9th percentile or greater be accurately determined. The following technique shows how you can ensure that you have the required level of accuracy for the percentile associated with a particular value.

ModelRisk will estimate the cumulative percentile Px of the output distribution associated with a value x by determining what fraction of the samples fell at or below x. Imagine that x is actually the 80th percentile of the true output distribution. Then, for Monte Carlo simulation, the generated value in each sample independently has an 80% probability of falling below x: it is a binomial process with probability p = 80%. Thus, if so far we have had n samples and s have fallen at or below x, the distribution Beta(s+1, n-s+1) describes the uncertainty associated with the true cumulative percentile we should associate with x.
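The mean and standard deviation of that Beta(s+1, n-s+1) distribution can be written in closed form, which makes it easy to watch the uncertainty around a percentile shrink as samples accumulate. A small illustration (the sample counts are made up):

```python
import math

def percentile_uncertainty(s, n):
    """Mean and st. dev. of Beta(s+1, n-s+1): our uncertainty about the true
    cumulative probability at x when s of n samples fell at or below x."""
    a, b = s + 1, n - s + 1
    mu = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mu, sd

m1, sd1 = percentile_uncertainty(80, 100)       # 80 of 100 samples at or below x
m2, sd2 = percentile_uncertainty(8000, 10000)   # same fraction, 100x the samples
print(f"n=100:    {m1:.3f} +/- {sd1:.3f}")
print(f"n=10000:  {m2:.3f} +/- {sd2:.4f}")
```

One hundred times the samples shrinks the standard deviation roughly tenfold, as the 1/√n scaling below implies.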

When we are estimating a percentile close to the median of the distribution, or when we have performed a large number of samples, s and n will both be large, and we can use a Normal approximation to the Beta distribution:

$$P_x \sim \text{Normal}\!\left(\hat{p}, \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\right)$$

where

$$\hat{p} = \frac{s}{n}$$

is the best-guess estimate for Px. Thus we can produce a relationship, similar to that in Equation (2) for the output mean, for determining the number of samples needed to get the required precision:

$$d = \Phi^{-1}\!\left(\frac{1+a}{2}\right)\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \qquad (4)$$
Rearranging (4) and recognizing that we want to have at least this accuracy gives a minimum value for n:

$$n > \hat{p}(1-\hat{p})\left(\frac{\Phi^{-1}\!\left(\frac{1+a}{2}\right)}{d}\right)^{2} \qquad (5)$$

By monitoring s and n as the simulation progresses, we can determine whether we have reached the required level of accuracy, using Equation (3) for the mean or Equation (5) for a percentile.
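The percentile rule can also be computed directly; a sketch assuming the Normal approximation above (the 80th-percentile target and precision values are illustrative):

```python
import math
from statistics import NormalDist

def min_samples_for_percentile(p_hat, d, a):
    """Smallest n so that a cumulative probability near p_hat is pinned down
    to within +/-d at confidence a (Normal approximation to the Beta)."""
    z = NormalDist().inv_cdf((1 + a) / 2)   # two-sided Normal quantile
    return math.ceil(p_hat * (1 - p_hat) * (z / d) ** 2)

# e.g. an ~80th-percentile value, wanted to within +/-0.01 at 95% confidence:
print(min_samples_for_percentile(0.8, 0.01, 0.95))
```

For the extreme tail percentiles mentioned above (p_hat near 0.999), the Normal approximation is poor until n is very large, so the exact Beta distribution should be monitored instead.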