Bayesian analysis example: Tigers in the jungle | Vose Software

A game warden on an island covered in jungle would like to know how many tigers she has on her island. It is a big island with dense jungle and she has a limited budget, so she can't search every inch of the island methodically. Besides, she wants to disturb the tigers and the other fauna as little as possible. She arranges for a capture-release-recapture survey to be carried out as follows:


Hidden traps are laid at random points on the island. The traps are furnished with transmitters that signal a catch, and each captured tiger is retrieved immediately. When 20 tigers have been caught, the traps are removed. Each of these 20 tigers is carefully sedated and marked with an ear tag, then all are released together at the positions where they were originally caught. A short time later, hidden traps are laid again, but at different points on the island, until 30 tigers have been caught, and the number of tagged tigers among them is recorded. Captured tigers are held in captivity until the 30th tiger has been caught.


The experiment finds that 7 of the 30 tigers captured in the second set of traps are tagged. How many tigers are there on the island?

The warden has gone to some lengths to specify the experiment precisely. This is so that we will be able to assume within reasonable accuracy that the experiment is taking a hypergeometric sample from the tiger population. A hypergeometric sample assumes that an individual with the characteristic of interest (in this case, a tagged tiger) has the same probability of being sampled as any individual that does not have that characteristic (i.e. the untagged tigers). You might enjoy thinking through what assumptions are being made in this analysis and where the experimental design has attempted to minimise any deviation from a true hypergeometric sampling.

We will use the usual notation for a hypergeometric process:

n - the sample size, = 30,

D - the number of individuals in the population of interest (tagged tigers) = 20,

M - the population (the number of tigers in the jungle). In the Bayesian inference terminology, this is given the symbol q as it is the parameter we are attempting to estimate, and

s - the number of individuals in the sample that have the characteristic of interest = 7.

We could get a best guess for M by noting that the most likely scenario would be for us to see tagged tigers in the sample in the same proportion as they occur in the population. In other words:

s/n = D/M, i.e. M = nD/s = (30 × 20)/7 ≈ 85.7, which gives M ≈ 85 to 86
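This back-of-the-envelope estimate can be checked in a couple of lines (plain Python here, not part of the original spreadsheet model):

```python
# Best guess: the tagged fraction in the sample matches the tagged
# fraction in the population, s/n = D/M, so M = n*D/s.
n, D, s = 30, 20, 7   # sample size, tagged tigers released, tagged in sample
M_best = n * D / s
print(M_best)          # approximately 85.7, so M is around 85 to 86
```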

but this does not take account of the uncertainty that occurs due to the random sampling involved in the experiment. Let us imagine that before the experiment was started the warden and her staff believed that the number of tigers was equally likely to be any one value as any other. In other words, they knew absolutely nothing about the number of tigers in the jungle and their prior distribution is thus a discrete uniform distribution over all non-negative integers. This is rather unlikely, of course, but we will discuss better prior distributions elsewhere.

The likelihood function is given by the probability mass function of the hypergeometric distribution, i.e.:

l(X|q) = C(D, s) · C(q − D, n − s) / C(q, n)

where C(a, b) denotes the binomial coefficient "a choose b".
The likelihood function is zero for values of q below 43 since the experiment tells us that there must be at least 43 tigers: 20 that were tagged plus the (30-7) that were caught in the recapture part of the experiment and were not tagged.
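A minimal sketch of this likelihood, using the standard library's math.comb as a stand-in for the ModelRisk function (the function name and default arguments here are our own):

```python
from math import comb

def likelihood(q, s=7, n=30, D=20):
    """Hypergeometric probability of seeing s tagged tigers in a
    sample of n, given D tagged tigers in a population of q."""
    if q < n + D - s:      # fewer than 43 tigers is impossible
        return 0.0
    return comb(D, s) * comb(q - D, n - s) / comb(q, n)

print(likelihood(42))      # 0.0 -- below the minimum of 43
print(likelihood(85))      # the likelihood peaks near the best guess
```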

ModelRisk provides a convenient function VoseHypergeoProb(s, n, D, M) which calculates the hypergeometric probability mass function automatically. The example model performs the Bayesian estimate: a discrete uniform prior, with values of q running from 0 to 150, is multiplied by the likelihood function above to arrive at a posterior distribution.

Tigers model formulae:

q = tested value for total tigers M

p(q) = 1

l(X|q) =IF(q<n+D-s,0,VoseHypergeoProb(s,n,D,q))
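The same model can be sketched outside the spreadsheet. This is our own Python rendering, with math.comb replacing VoseHypergeoProb: a flat prior over q = 0 to 150, multiplied by the likelihood and normalised.

```python
from math import comb

def likelihood(q, s=7, n=30, D=20):
    # Hypergeometric pmf; zero below the logical minimum of n + D - s = 43
    if q < n + D - s:
        return 0.0
    return comb(D, s) * comb(q - D, n - s) / comb(q, n)

qs = range(0, 151)
prior = {q: 1.0 for q in qs}                      # p(q) = 1, discrete uniform
unnorm = {q: prior[q] * likelihood(q) for q in qs}
total = sum(unnorm.values())
posterior = {q: v / total for q, v in unnorm.items()}

mode = max(posterior, key=posterior.get)
print(mode)  # 85, matching the best guess
```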

The shape of this posterior distribution is shown in Figure 1.

Figure 1: Posterior distribution for q = [0,150]

The graph peaks at a value of 85, as we would expect, but it appears cut off at the right tail, which shows that we should also look at values of q larger than 150. The analysis is therefore extended to values of q up to 300 with the same flat prior, and this more complete posterior distribution is plotted in Figure 2.
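The truncation is easy to see numerically: in a sketch like the one below (again using math.comb in place of the ModelRisk function), the posterior computed over q = 0 to 150 still carries appreciable probability at its right edge, whereas extending the range to 300 lets the tail die away naturally.

```python
from math import comb

def likelihood(q, s=7, n=30, D=20):
    if q < n + D - s:
        return 0.0
    return comb(D, s) * comb(q - D, n - s) / comb(q, n)

def posterior(q_max):
    # Flat prior over 0..q_max, so the posterior is the normalised likelihood
    weights = [likelihood(q) for q in range(q_max + 1)]
    total = sum(weights)
    return [w / total for w in weights]

post_150 = posterior(150)
post_300 = posterior(300)

# Mass at the right edge of the q = 150 analysis, relative to the peak:
# well away from zero, a sign the range was cut off too early.
print(post_150[150] / max(post_150))
```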

Figure 2: Posterior distribution for q = [0,300]

This second plot represents a good model of the state of the warden's knowledge about the number of tigers on that island. Don't forget that this is a distribution of belief and is not a true probability distribution since there is an exact number of tigers on that island.

In this example, we had to adjust our range of tested values of q in light of the posterior distribution. It is quite common to review the set of tested values of q, either expanding the prior's range or modelling some part of it in more detail when the posterior distribution is concentrated in a small range. It is entirely appropriate to expand the range of the prior, provided we would have been happy to extend it to the new range before seeing the data, and provided we keep the same shape. It would not be appropriate, however, if we had a much more informed prior belief that gave an absolute range for the uncertain parameter and we are now considering stepping outside that range: we would be revising our prior belief in light of the data, which amounts to double-counting the data. That said, if the likelihood function is concentrated very much at one end of the prior's range, it may well be worth reviewing whether the prior distribution or the likelihood function is appropriate.

Continuing with our tigers on an island, let us imagine that the warden is unsatisfied with the uncertainty that remains about the number of tigers, which, at roughly 50 to 250, is rather large. She releases all the tigers, waits a short while, and then recaptures another 30 tigers. This time t tagged tigers are captured. Assuming that a tagged tiger still has the same probability of being captured as an untagged tiger, what is her uncertainty distribution now for the number of tigers on the island?

This is simply a replication of the first problem, except that we no longer use a discrete uniform distribution as her prior. Instead, the distribution of Figure 2 represents her prior belief and the likelihood function is now given by the ModelRisk function VoseHypergeoProb(t, 30, 20, q). The six panels of Figure 3 show what the warden's posterior distribution would have been if the second experiment had trapped t = 1, 3, 5, 7, 10 and 15 tagged tigers instead.
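The sequential update can be sketched as follows (our own Python rendering, with math.comb standing in for VoseHypergeoProb): the posterior from the first experiment becomes the prior, and is multiplied by the likelihood of seeing t tagged tigers in the second sample.

```python
from math import comb

def likelihood(q, t, n=30, D=20):
    # Hypergeometric pmf for t tagged tigers in a sample of n
    if q < n + D - t:
        return 0.0
    return comb(D, t) * comb(q - D, n - t) / comb(q, n)

Q = range(0, 301)

def normalise(w):
    total = sum(w)
    return [x / total for x in w]

# First experiment: uniform prior, s = 7 tagged out of 30.
post1 = normalise([likelihood(q, 7) for q in Q])

# Second experiment: post1 becomes the prior; t tagged out of 30.
def update(prior, t):
    return normalise([p * likelihood(q, t) for q, p in zip(Q, prior)])

for t in (1, 3, 5, 7, 10, 15):
    post2 = update(post1, t)
    mode = max(Q, key=lambda q: post2[q])
    print(t, mode)
```

Note that when the second experiment repeats the first result (t = 7), the update simply multiplies the same likelihood in again, which concentrates the posterior around the same best guess.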

Figure 3: Change in posterior distribution of Tiger estimate depending on the number of tagged tigers seen in the second sample

You might initially imagine that performing another experiment would make you more confident about the actual number of tigers on the island, but the graphs of Figure 3 show that this is not necessarily so. In the top two panels the posterior distribution is now more spread than the prior because the data contradict the prior (the prior and likelihood peak at very different values of q). In the middle left panel, the likelihood disagrees moderately with the prior, but the extra information in the data compensates for this, leaving us with about the same level of uncertainty but with a posterior distribution that sits to the right of the prior. The middle right panel represents the scenario where the second experiment has the same result as the first. Since both experiments produced the same result, our confidence is improved and remains centred around the best guess of 85. In the bottom two panels, the likelihood functions disagree with the priors, yet the posterior distributions have a narrower uncertainty. This is because the likelihood function places its emphasis on the left tail of the possible range of values for q, which is bounded at q = 43.
