## Improving Monte Carlo: Control Variates

I’ve already written quite a lot about Monte Carlo in quantitative finance. MC can be used to value products for which an analytical price is not available in a given model, which includes most exotic derivatives in most models. However, two big problems are the time that it takes to run and the ‘Monte Carlo error’ in the results.

One technique for improving MC is to use a ‘control variate’. The idea is to find a product whose price is strongly correlated with that of the product we’re trying to price, but which is easier to calculate or whose price we already know. When we simulate a path in MC, it will almost surely give either an under-estimate or an over-estimate of the true price, but we don’t know which, and averaging all of these errors is what leads to the Monte Carlo error in the final result. The insight behind the control variate technique is to use the knowledge given to us by the control variate to reduce this error. If the two prices are strongly correlated and a path produces an over-estimate of the product price, it most likely also produces an over-estimate of the control variate, and vice versa, which allows us to improve our estimate of the product we’re trying to price.

The textbook example is the Asian option. Although the arithmetic version of the Asian option discussed in previous posts has no analytic expression in BS, the closely related geometric Asian option does have an analytic price. So, for a given set of model parameters, we can calculate the price of that option exactly. As a reminder, the payoff of an arithmetic Asian option at expiry is

$\Big( {1\over N}\sum_{i=1}^{N} S_{t_i} - K \Big)^+$

and the payoff of the related geometric averaging Asian is

$\Big( \Big(\prod_{i=1}^{N} S_{t_i}\Big)^{1\over N} - K \Big)^+$

where the $t_i$ are the $N$ averaging dates and $K$ is the strike.

Denoting the price of the arithmetic option as $C_A$ and that of the geometric option as $C_G$, the traditional Monte Carlo approach is to generate $M$ paths, for each one calculate $C_A^{(m)}$ (the realisation of the payoff along the path), and take the average over all paths, so that

$C_A \simeq {1\over M} \sum_{m=1}^{M} C_A^{(m)}$

which will get closer to the true price as $M \to \infty$.

Using $C_G$ as a control variate, we instead calculate

$C_A \simeq {1\over M} \sum_{m=1}^{M} \Big( C_A^{(m)} - c\,\big( C_G^{(m)} - C_G \big) \Big)$

where $C_G$ is the price of the geometric option known from the analytical expression, and $c$ is a constant (in this case we will set it to 1).

What do we gain from this? Well, consider the variance of a single sample of the new estimator

${\rm Var}\big[ C_A^{(m)} - c\,( C_G^{(m)} - C_G) \big] = {\rm Var}\big[C_A^{(m)}\big] + c^2\, {\rm Var}\big[C_G^{(m)}\big] - 2c\,{\rm Cov}\big[C_A^{(m)}, C_G^{(m)}\big]$

(since $C_G$ is known analytically, it has zero variance) which is minimal for $c = {\rm Cov}\big[C_A^{(m)}, C_G^{(m)}\big] \,/\, {\rm Var}\big[C_G^{(m)}\big]$, in which case

${\rm Var}\big[ C_A^{(m)} - c\,( C_G^{(m)} - C_G) \big] = \big(1 - \rho^2\big)\, {\rm Var}\big[C_A^{(m)}\big]$

where $\rho$ is the correlation between the two payoffs;

that is, if the two prices are strongly correlated, the variance of the price calculated using the control variate will be significantly smaller. I’ve plotted a sketch of the prices of the two types of average for 100 paths – the correlation is about 99.98%. Consequently, we expect to see a reduction in variance of about 2000 times for a given number of paths (although we now have to do a little more work on each path, as we need to calculate the geometric average as well as the arithmetic average of spots). That’s a roughly 45 times smaller standard error on the price – well over an extra decimal place, which isn’t bad – and this is certainly much easier than running 2000 times as many paths to achieve the same result.
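The whole scheme can be sketched in Python. This is a minimal illustration, not the code behind the numbers above: the Black-Scholes parameters, the monthly fixing schedule and the path count below are my own arbitrary choices, and the closed-form geometric Asian price is the standard result that the geometric average of lognormals is itself lognormal.

```python
import numpy as np
from math import exp, log, sqrt
from statistics import NormalDist

# Illustrative parameters (not from the post): ATM call, monthly fixings
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
N_fix, M = 12, 50_000

def geometric_asian_price():
    """Closed-form BS price of the geometric-average Asian call:
    ln G is normal with mean m and variance s2 below."""
    dt = T / N_fix
    m = log(S0) + (r - 0.5 * sigma**2) * dt * (N_fix + 1) / 2
    s2 = sigma**2 * dt * (N_fix + 1) * (2 * N_fix + 1) / (6 * N_fix)
    s = sqrt(s2)
    d1 = (m - log(K) + s2) / s
    d2 = d1 - s
    Phi = NormalDist().cdf
    return exp(-r * T) * (exp(m + 0.5 * s2) * Phi(d1) - K * Phi(d2))

rng = np.random.default_rng(42)
dt = T / N_fix
z = rng.standard_normal((M, N_fix))
# log-spot at each fixing date along each path
log_paths = log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                + sigma * sqrt(dt) * z, axis=1)
S = np.exp(log_paths)

disc = exp(-r * T)
pay_arith = disc * np.maximum(S.mean(axis=1) - K, 0.0)
pay_geo = disc * np.maximum(np.exp(log_paths.mean(axis=1)) - K, 0.0)

# Plain MC estimate and standard error
plain_price = pay_arith.mean()
plain_se = pay_arith.std(ddof=1) / sqrt(M)

# Control variate with c = 1: subtract the simulated geometric payoff
# and add back its known analytic price
cv = pay_arith - pay_geo
cv_price = cv.mean() + geometric_asian_price()
cv_se = cv.std(ddof=1) / sqrt(M)

print(f"plain MC : {plain_price:.4f} +/- {plain_se:.4f}")
print(f"with CV  : {cv_price:.4f} +/- {cv_se:.4f}")
print(f"payoff correlation: {np.corrcoef(pay_arith, pay_geo)[0, 1]:.5f}")
```

With these settings the two payoffs are correlated well above 99%, and the standard error of the control-variate estimate comes out more than an order of magnitude smaller than the plain estimate for the same number of paths.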

## Interview Questions IV

Another question about correlations today. This time I thought we could have a look at a simple type of random walk, in which the distance travelled at each step is either backwards or forwards with a random length, and at how to deal with the issues that come up.

Let’s say at each step we move a distance along the x-axis that is distributed randomly and uniformly between -1 and +1, and importantly that each step is independent of the others. So, after N steps the total distance travelled, L, is

$L_N = \sum_{i=1}^{N} x_i\ ; \qquad x_i \sim {\mathbb U}[-1,+1]$

where $x_i$ is the i-th step length.

Calculate:

i) the expected distance travelled after N steps

ii) the standard deviation of the distance travelled after N steps

iii) the autocorrelation between the distance travelled at N steps and the distance travelled at N+n steps

Since we’re dealing with uniform variables, it makes sense to start by calculating the expectation and variance of a single realisation of a variable of this type. The expectation is trivially 0, while the variance is (remembering that the density of ${\mathbb U}[-1,+1]$ is ${1\over 2}$ on the interval)

\begin{align*} {\rm Var}[x_i] & = {\mathbb E}[x_i^2] - {\mathbb E}[x_i]^2 \\ & = \int_{-1}^{+1} {1\over 2}\, x^2\, dx - 0 \\ & = {1\over 3} \end{align*}

We’ll also make use of the independence of the individual variables at several points; recall that for independent variables $x$ and $y$, ${\mathbb E}[xy] = {\mathbb E}[x]\,{\mathbb E}[y]$.

i) This one is fairly straightforward. Expectation is a linear operator, so we can take it inside the sum. We know the expectation of an individual variate, so the expectation of the sum is just the sum of these

\begin{align*} {\mathbb E}\Big[\sum_{i=1}^N x_i \Big] & = \sum_{i=1}^N {\mathbb E}[ x_i ]\\ & = N\cdot 0\\ & = 0 \end{align*}

ii) The standard deviation is the square root of the variance, which is the expectation of the square minus the square of the expectation. We know the second of these is 0, so we only need to calculate the first,

\begin{align*} {\rm Var}\Big[\sum_{i=1}^N x_i \Big] & = {\mathbb E}\Big[\Big(\sum_{i=1}^N x_i \Big)^2\Big]\\ & = {\mathbb E}\Big[\sum_{i,j=1}^N x_i x_j\Big]\\ & = \sum_{i,j=1}^N {\mathbb E}[x_i x_j] \end{align*}

There are two types of term here. When i and j are not equal, we can use the independence criterion given above to express this as the product of the two individual expectations, which are both 0, so these terms don’t contribute. So we are left with

\begin{align*} {\rm Var}\Big[\sum_{i=1}^N x_i \Big] & = \sum_{i=1}^N {\mathbb E}[x_i^2] \\ & = {N\over 3} \end{align*}

and the standard deviation is simply the square root of this, $\sqrt{N/3}$.
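Both of these results are easy to sanity-check by simulation. A quick numpy sketch (the choices of N = 100 and 200,000 trials are arbitrary), using the fact that a single ${\mathbb U}[-1,+1]$ step has variance ${1\over 3}$, so the walk’s standard deviation is $\sqrt{N/3}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 100, 200_000

# Each row is one realisation of the walk: sum N independent uniform steps
L_N = rng.uniform(-1.0, 1.0, size=(trials, N)).sum(axis=1)

print(L_N.mean())  # close to 0
print(L_N.std())   # close to sqrt(N / 3) ~ 5.77 for N = 100
```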

iii) This is where things get more interesting – the autocorrelation is the correlation of the sum at one time with its value at a later time. This is a quantity that quants are frequently interested in, since the value of a derivative that depends on values of an underlying stock at several times will depend sensitively on the autocorrelation. We recall the expression for correlation

$\rho(x,y) = {{\rm Cov}(x,y) \over \sqrt{{\rm Var}(x)\,{\rm Var}(y)}}$

So we are trying to calculate

\begin{align*} \rho(L_N, L_{N+n}) = {\rm Cov}\Big(\sum_{i=1}^N x_i\ ,\ \sum_{j=1}^{N+n} x_j \Big) \cdot {3 \over \sqrt{N (N+n)}} \end{align*}

where I’ve substituted in the already-calculated value of the variances of the two sums.

We can again use the independence property of the steps to separate the later sum into two, the earlier sum and the sum of the additional terms. Also, since the expectation of each sum is zero, the covariance of the sums is just the expectation of their product

\begin{align*} \rho(L_N, L_{N+n}) &= {\rm Cov}\Big(\sum_{i=1}^N x_i\ ,\ \sum_{j=1}^{N} x_j + \sum_{j=N+1}^{N+n} x_j \Big) \cdot {3 \over \sqrt{N (N+n)}}\\ &= {\mathbb E}\Big[\sum_{i=1}^N x_i \cdot \Big(\sum_{j=1}^{N} x_j + \sum_{j=N+1}^{N+n} x_j \Big) \Big] \cdot {3 \over \sqrt{N (N+n)}}\\ &= {\mathbb E}\Big[\sum_{i,j=1}^N x_i x_j + \sum_{i=1}^N x_i \cdot \sum_{j=N+1}^{N+n} x_j \Big] \cdot {3 \over \sqrt{N (N+n)}} \end{align*}

and using the results above and the independence of the final two sums (because they are sums of different sets of terms, and each term is independent of all the others) we know

${\mathbb E}\Big[\sum_{i,j=1}^N x_i x_j \Big] = {N \over 3}$

${\mathbb E}\Big[\sum_{i=1}^N x_i \cdot \sum_{j=N+1}^{N+n} x_j \Big] = {\mathbb E}\Big[\sum_{i=1}^N x_i \Big] \cdot {\mathbb E}\Big[\sum_{j=N+1}^{N+n} x_j \Big] = 0$

so

\begin{align*} \rho(L_N, L_{N+n}) & = {N\over \sqrt{N(N+n)}}\\ & = \sqrt{N\over N+n} \end{align*}

What does this tell us? Roughly that the sum of the sequence up to N+n terms is correlated to its value at earlier points, but as n gets larger the correlation decreases, as the new random steps blur out the position due to the initial N steps.

We can test our expressions using the RAND() function in Excel. Try generating sets of random numbers and summing them, then plotting the set of sums of 100 terms against the set of sums of 120 or 200 terms (n.b. in Excel, you probably want to turn auto-calculate off first to stop the randoms from refreshing every time you make a change – instructions can be found here for Excel 2010; for Excel 2013 I found the option inside the “FORMULAS” tab at the far end – set the ‘Calculation Options’ to manual). I’ve done exactly that, and you can see the results below.

You can also try calculating the correlation of the variables that you generate using Excel’s CORREL() function – this should tend towards the expression above as the number of sums that you compute gets large (if you press F9, all of the random numbers in your sheet will be recomputed and you can see the actual correlation jump around, but these jumps will get smaller as the number of sums grows).
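If Excel isn’t to hand, the same experiment is quick in Python. A sketch (N = 100, n = 100 and 50,000 walks are arbitrary choices), comparing the measured correlation against $\sqrt{N/(N+n)}$:

```python
import numpy as np

rng = np.random.default_rng(4)
N, n, walks = 100, 100, 50_000

# Each row is one walk of N + n uniform steps
steps = rng.uniform(-1.0, 1.0, size=(walks, N + n))
L_N = steps[:, :N].sum(axis=1)   # position after N steps
L_Nn = steps.sum(axis=1)         # position after N + n steps

measured = np.corrcoef(L_N, L_Nn)[0, 1]
theory = np.sqrt(N / (N + n))    # sqrt(100/200) ~ 0.707

print(measured, theory)
```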

## Interview Questions III

Today’s question will test some of the statistics and correlation I’ve discussed in the last couple of months. Assume throughout that $X$ and $Y$ are jointly normally distributed, each with mean 0 and variance 1 and with correlation $\rho$, such that

$\begin{pmatrix} X \\ Y \end{pmatrix} \sim {\mathcal N}\left[ \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right]$

a) Calculate ${\mathbb E}\big[\,e^X\,\big]$

b) Calculate ${\mathbb E}\big[\,e^Y \mid X = x\,\big]$

The first expectation is of a lognormal variate, and the second is of a lognormal variate conditional on some earlier value of the variate having been a particular value – these are very typical of the sorts of quantities that a quant deals with every day, so the solution will be quite instructive! Before reading the solution, have a go at each one; the following posts may be useful: SDEs pt. 1, SDEs pt. 2, Results for Common Distributions

a) Here, we use the standard result for the expectation of a lognormal variate: if $Z \sim {\mathcal N}(\mu, \sigma^2)$ then

${\mathbb E}\big[\,e^Z\,\big] = e^{\mu + {1\over 2}\sigma^2}$

so with $\mu = 0$ and $\sigma = 1$, ${\mathbb E}\big[\,e^X\,\big] = e^{1\over 2}$.
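The lognormal result ${\mathbb E}[e^Z] = e^{\mu + \sigma^2/2}$ is easy to verify by simulation – a quick numpy sketch for the standard normal case:

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.standard_normal(2_000_000)

# Sample mean of e^Z should be near exp(0 + 1/2) ~ 1.6487
print(np.exp(z).mean())
```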

b) This one is a little tougher, so first of all I’ll discuss what it means and some possible plans of attack. We want to calculate the expectation of $e^Y$, given that $X$ takes a value of $x$. Of course, if $X$ and $Y$ were independent, the conditioning wouldn’t make any difference and the result would be the same as in a). However, because they are correlated, the realised value of $X$ will have an effect on the distribution of $Y$.

To demonstrate this, I’ve plotted a few scatter-graphs illustrating the effect of specifying $X$ on $Y$, with $X$ and $Y$ uncorrelated and then becoming increasingly more correlated.

The simplest way of attempting this calculation is to use the result for jointly normal variates given in an earlier post, which says that if $X$ and $Y$ have correlation $\rho$, we can express $Y$ in terms of $X$ and a new variate $Z \sim {\mathcal N}(0,1)$ which is uncorrelated with $X$

$Y = \rho X + \sqrt{1 - \rho^2}\, Z$

so

${\mathbb E}\big[\,e^Y \mid X = x\,\big] = {\mathbb E}\big[\,e^{\rho X + \sqrt{1-\rho^2}\,Z} \mid X = x\,\big] = e^{\rho x}\, {\mathbb E}\big[\,e^{\sqrt{1-\rho^2}\,Z}\,\big]$

Since the value of $X$ is already determined (ie. $X = x$), I’ve separated this term out and the only thing left to calculate is the expectation of the term in $Z$. Since $X$ and $Z$ are independent, we can calculate this expectation by the same process as before but featuring slightly more complicated pre-factors

${\mathbb E}\big[\,e^Y \mid X = x\,\big] = e^{\rho x}\, e^{{1\over 2}(1 - \rho^2)}$

We can check the limiting values of this – if $\rho = 0$ then $X$ and $Y$ are independent [this is not a general result by the way – see wikipedia for example – but it IS true for jointly normally distributed variables], in which case ${\mathbb E}\big[\,e^Y \mid X = x\,\big] = e^{1\over 2}$ just as above. If $\rho = 1$, ${\mathbb E}\big[\,e^Y \mid X = x\,\big] = e^{x}$, which also makes sense since in this case $Y = X$, so $x$ fully determines the expectation of $e^Y$.
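We can also check the result ${\mathbb E}[e^Y \mid X = x] = e^{\rho x + {1\over 2}(1-\rho^2)}$ numerically, by building correlated normals from the decomposition above and averaging $e^Y$ over a thin slice of samples near $X = x$. A sketch – the choices $\rho = 0.8$, $x = 1$ and the slice half-width are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, x_star, half_width = 0.8, 1.0, 0.02
n = 4_000_000

# Correlated standard normals via Y = rho*X + sqrt(1 - rho^2)*Z
X = rng.standard_normal(n)
Z = rng.standard_normal(n)
Y = rho * X + np.sqrt(1 - rho**2) * Z

# Estimate E[e^Y | X ~ x_star] by averaging over a thin slice around x_star
mask = np.abs(X - x_star) < half_width
estimate = np.exp(Y[mask]).mean()
theory = np.exp(rho * x_star + 0.5 * (1 - rho**2))

print(estimate, theory)
```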

The more general way to solve this is to use the full 2D joint normal distribution as given in the previous post mentioned before,

$f_{X,Y}(x,y) = {1 \over 2\pi \sqrt{1-\rho^2}}\, \exp\Big( -{x^2 - 2\rho x y + y^2 \over 2(1-\rho^2)} \Big)$

This is the joint probability density of $X$ and $Y$, but it’s not quite what we need – the expectation we are trying to calculate is

${\mathbb E}\big[\,e^Y \mid X = x\,\big] = \int_{-\infty}^{+\infty} e^y\, f_{Y \mid X}(y \mid x)\, dy$

So we need the conditional density of $Y$ given $X = x$, for which we need Bayes’ theorem

$f_{Y \mid X}(y \mid x) = {f_{X,Y}(x,y) \over f_X(x)}$

where $f_X(x) = {1\over\sqrt{2\pi}}\, e^{-{1\over 2}x^2}$ is the marginal density of $X$.

Putting this together, we have

${\mathbb E}\big[\,e^Y \mid X = x\,\big] = \int_{-\infty}^{+\infty} e^y\, {f_{X,Y}(x,y) \over f_X(x)}\, dy$

This integral is left as an exercise to the reader, but it is very similar to those given above and should give the same answer as the previous expression for ${\mathbb E}\big[\,e^Y \mid X = x\,\big]$ after some simplification!