## Asian options III: the Geometric Asian

I’ve introduced the Asian option before: it is similar to a vanilla option, but its payoff is based on the average price over a period of time rather than solely on the price at expiry. We saw that it is exotic – it is not possible to give it a unique price using only the information available from the market.

Today I’m going to introduce another type of option – the geometric Asian option. This option is similar, but its payoff is now based on the geometric average, rather than the arithmetic average, of the spot over the averaging dates.

This option is exotic, just like the regular (arithmetic-average) Asian option. At first sight it seems even more complicated; and what’s more, it is almost never encountered in practice, so why bother with it? Well, it’s very useful as an exercise, because in many models where the arithmetic Asian’s price has no closed form, the geometric Asian happens to have one! Further, since the arithmetic average of a set of positive numbers is never lower than their geometric average, the price of the geometric Asian gives us a lower bound on the price of the arithmetic Asian.
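The bound rests on the AM–GM inequality, which is easy to convince yourself of numerically. A quick sketch (function names and numbers are mine) draws random positive "spot fixings" and confirms the geometric mean never exceeds the arithmetic mean:

```python
import math
import random

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the average log: numerically safer than multiplying everything up
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

random.seed(42)
for _ in range(1000):
    # a fake set of lognormal spot fixings
    path = [100.0 * math.exp(random.gauss(0.0, 0.2)) for _ in range(12)]
    assert geometric_mean(path) <= arithmetic_mean(path) + 1e-12
```

Equality holds only when every fixing is identical, which is why the bound is rarely tight in practice.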

Considering the Black-Scholes model in its simplest form for the moment (although the pricing formula can be extended to more general models), let’s consider what the spot will look like at each of the averaging times. As we did in the earlier post, take a simple geometric Asian averaging over only two times $t_1$ and $t_2$, so that the payoff is

$C(t_2) = \Big( \sqrt{S_{t_1} S_{t_2}} - K \Big)^+$

At $t=0$ the spot is $S_0$. At $t_1$,

$S_{t_1} = S_0 \exp\Big[ \big( r - {1\over 2}\sigma^2 \big) t_1 + \sigma \sqrt{t_1}\, z_1 \Big]$

where $z_1 \sim {\mathbb N}(0,1)$. At $t_2$,

$S_{t_2} = S_{t_1} \exp\Big[ \big( r - {1\over 2}\sigma^2 \big) (t_2 - t_1) + \sigma \sqrt{t_2 - t_1}\, z_2 \Big]$

where $z_2 \sim {\mathbb N}(0,1)$ also, and importantly $z_2$ is uncorrelated with $z_1$ due to the independent increments of the Brownian motion.

Now, the reason we couldn’t find a closed-form solution for the arithmetic Asian was that $S_{t_1} + S_{t_2}$ is the sum of two lognormal distributions, which itself is NOT lognormally distributed. However, as discussed in my post on distributions, the product $S_{t_1} \cdot S_{t_2}$ of two lognormal distributions IS lognormal, so valuing an Asian that depends on the product of these two is similar to pricing a vanilla, with a slight change to the parameters that we need:

$\sqrt{S_{t_1} S_{t_2}} = S_0 \exp\Big[ {1\over 2}\big( r - {1\over 2}\sigma^2 \big)(t_1 + t_2) + {1\over 2}\sigma \sqrt{3 t_1 + t_2}\, z \Big]$

where $z \sim {\mathbb N}(0,1)$ is another normal variable (we’ve used the result about the sum of two normally distributed variables here, writing $\sigma(2\sqrt{t_1}\, z_1 + \sqrt{t_2 - t_1}\, z_2) = \sigma\sqrt{3 t_1 + t_2}\, z$).

If we re-write this as

$\sqrt{S_{t_1} S_{t_2}} = \tilde{F} \exp\Big[ -{1\over 2}\tilde{\sigma}^2 t_2 + \tilde{\sigma}\sqrt{t_2}\, z \Big]$

where $\tilde{F} = \sqrt{F_{t_1} F_{t_2}}\ e^{-{1\over 8}\sigma^2 (t_2 - t_1)}$, with $F_{t_1}$ and $F_{t_2}$ the forwards at the respective times, and $\tilde{\sigma}^2 = {\sigma^2 (3 t_1 + t_2) \over 4 t_2}$, then this is just the same as the vanilla pricing problem solved here. So, we can use a vanilla pricer to price a geometric Asian with two averaging dates – we just need to enter the transformed parameters.

In fact this result is quite general: we can price a geometric Asian over any number of averaging dates $t_1 \leq t_2 \leq \cdots \leq t_N$ using the general transformations below (have a go at demonstrating this following the logic above)

$\tilde{\sigma}^2 = {\sigma^2 \over N^2\, t_N} \sum_{i=1}^N \big( 2(N-i) + 1 \big)\, t_i\ ; \qquad \tilde{F} = S_0 \exp\Big[ {1\over N}\big( r - {1\over 2}\sigma^2 \big) \sum_{i=1}^N t_i + {1\over 2}\tilde{\sigma}^2 t_N \Big]$
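As a sketch of the closed form (all function names and parameter choices are mine), the code below prices a geometric Asian call via the Black formula, using the fact that under Black-Scholes the log of the geometric average is normal with mean $\ln S_0 + {1\over N}(r - \sigma^2/2)\sum_i t_i$ and variance $(\sigma/N)^2 \sum_i (2(N-i)+1)\, t_i$ for sorted dates. With a single averaging date it must degenerate to a vanilla call, which makes a handy sanity check:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def geometric_asian_call(S0, K, r, sigma, times):
    """Closed-form geometric Asian call under Black-Scholes; times sorted ascending."""
    N = len(times)
    T = times[-1]
    # mean and variance of the log of the geometric average
    m = math.log(S0) + (r - 0.5 * sigma**2) * sum(times) / N
    v = (sigma / N)**2 * sum((2 * (N - i) + 1) * t for i, t in enumerate(times, start=1))
    # price it as a Black call on a lognormal with parameters (m, v)
    F = math.exp(m + 0.5 * v)          # effective forward of the geometric average
    d1 = (math.log(F / K) + 0.5 * v) / math.sqrt(v)
    d2 = d1 - math.sqrt(v)
    return math.exp(-r * T) * (F * norm_cdf(d1) - K * norm_cdf(d2))

def bs_call(S0, K, r, sigma, T):
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# one averaging date at expiry => just a vanilla call
assert abs(geometric_asian_call(100, 100, 0.05, 0.2, [1.0])
           - bs_call(100, 100, 0.05, 0.2, 1.0)) < 1e-10
# averaging lowers both the effective forward and vol, so the Asian is cheaper
assert geometric_asian_call(100, 100, 0.05, 0.2, [0.5, 1.0]) < bs_call(100, 100, 0.05, 0.2, 1.0)
```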

## Credit Default Swaps Pt II – Credit Spreads

This post examines CDS contracts – see my later series of posts on bonds and the yield curve for a deeper examination of credit spreads from the bond perspective.

As well as governments, companies also need to borrow money, and one way of achieving this is to issue bonds. Like government bonds, these pay a fixed rate of interest each year. Of course, this rate of interest will typically be quite a bit higher than that on government bonds, to compensate for the increased credit risk associated with them. The difference between the two rates of interest is called the credit spread.

As we did before, today I’m just going to look at pricing a ZCB, but a coupon-bearing bond can be constructed as a portfolio of ZCBs, so it’s a relatively straightforward extension.

Consider a ZCB issued by a risky company. It’s going to pay £1 at expiry, and nothing in between. However, if the company defaults in the meantime we get nothing. In fact, we’ll extend this a bit – instead of getting nothing we say that we will receive a fraction R of the £1, since the company’s assets will be run down and some of its liabilities will get paid. This also allows us to distinguish between different classes of debt based on their seniority (we’re getting dangerously close to corporate finance here, so let’s steer back to the maths).

In order to protect against the risk of loss, we might enter a Credit Default Swap (CDS) with a third party. In exchange for us making regular payments of £K/year to them (here we’re assuming that the payment accrues continuously, for mathematical ease), they will pay us if and when the bond issuer defaults. There are a few varieties of CDS – the protection seller might take the defaulted bond from us and reimburse what we originally paid for it, or they might just pay out an amount equal to our loss. We’ll assume the second variety here, so in the event of default they pay us £(1-R), since this is what we lose (up to some discount factors, but we’re keeping it simple here!). Also, if default occurs we stop making the regular payments, since we no longer need protection – so this product is essentially an insurance policy.

We value this product in the normal way. The value of the fixed leg is just the discounted cash flow of each payment multiplied by the probability that the issuer hasn’t defaulted yet (because we stop paying in that case) – and remember that we said the payments are accruing continuously, so we use an integral instead of a sum:

$V_{\rm fixed} = \int_0^T K\, e^{-rt}\, {\rm S}(t)\, dt$

and the value of the insurance leg is the value of the payment times the chance that default occurs in any given interval

$V_{\rm insurance} = \int_0^T (1-R)\, e^{-rt}\, p(\tau = t)\, dt$

where, in keeping with our notation from before, $p(\tau = t)$ is the probability of default at a given time and ${\rm S}(t)$ is the chance that the issuer hasn’t defaulted by a given time.

The contract has a fair value if these two legs are of equal value, which happens when

$\int_0^T e^{-rt}\, \Big( (1-R)\, p(\tau = t) - K\, {\rm S}(t) \Big)\, dt = 0$

At this point we refer to our exponential default model from Part I of this post.

In the exponential default model described in the previous post, we postulated a probability of default $p(\tau = t) = \lambda e^{-\lambda t}$, which led to the survival function ${\rm S}(t) = e^{-\lambda t}$. Substituting these into the equation above we have

$\int_0^T e^{-(r + \lambda)t}\, \big( (1-R)\,\lambda - K \big)\, dt = 0$

which is only zero when the integrand is also zero, so

$K = \lambda\,(1 - R)$

This is called the credit triangle, and it tells us the price of insuring a risky bond against default if we have its hazard rate. If the expected lifetime of the firm increases (ie. $\lambda$ decreases) then the spread will fall, as insurance is less likely to be required. If the expected recovery after default increases ($R$ increases) then the spread also falls, as the payout will be smaller if it is required.
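The credit triangle is easy to verify numerically. Here’s a sketch (all numbers and names are mine) that integrates both legs with a simple trapezoidal rule and confirms they match when K = λ(1−R):

```python
import math

# toy parameters for the exponential default model (my own choices)
lam, R, r, T = 0.03, 0.4, 0.05, 5.0
K = lam * (1.0 - R)                  # the credit triangle: fair spread

def integrate(f, a, b, n=10000):
    # basic trapezoidal rule
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

survival = lambda t: math.exp(-lam * t)            # S(t)
default_density = lambda t: lam * math.exp(-lam * t)  # p(tau = t)

fixed_leg = integrate(lambda t: K * math.exp(-r * t) * survival(t), 0.0, T)
protection_leg = integrate(lambda t: (1.0 - R) * math.exp(-r * t) * default_density(t), 0.0, T)

assert abs(fixed_leg - protection_leg) < 1e-8
```

With K set to anything other than λ(1−R) the two integrals no longer agree, which is the pricing content of the triangle.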

Although a little care is needed when moving from ZCBs to coupon-bearing bonds, it can be seen that the payments (normally called the spreads) paid to insure the bond against default should be essentially the difference in interest payments between government (‘risk-free’) bonds and the risky bonds we are considering.

The default probabilities we have used here can be calibrated from market-observed values of the spread K between government and corporate bonds. This resembles the process used to calculate implied volatility – the observable quantity in the market is actually the price, not the volatility/hazard rate; we back the parameter out from market prices of simple products and then use it to price more exotic products consistently. This gives the market’s view of the default probabilities in the risk-neutral measure, which means that they may not actually bear any resemblance to real-world default probabilities (eg. historical default rates) – in fact they are usually larger, often several times larger. See for example this paper for more on the subject.

Of course, throughout this analysis we have ignored the risk of the insurer themselves defaulting – indeed, credit derivatives rapidly become a nightmare when considered in general, and the Lehman Brothers default was largely down to some fairly heroic approximations that were made in pricing credit products that ultimately didn’t hold up in extreme environments. I’ll explore some of this behaviour soon in a post on correlated defaults and the Gaussian copula model.

## Credit Default Swaps Pt I: A Default Model for Firms

In today’s post I’m going to discuss a simple model for default of a firm, and in Part II I’ll discuss the price of insuring against losses caused by the default. As usual, the model I discuss today will be a vast over-simplification of reality, but it will serve as a building block for development. Indeed, there are many people in the credit derivatives industry who take these things to a much higher level of complexity.

Modelling the default of a firm is an interesting challenge. Some models are based around ratings agencies, giving firms a certain grade and treating transitions between grades as a Markov chain (similar to a problem I discussed before). I’m going to start with a simpler exponential default model. This says that the event of a given firm defaulting on all of its liabilities is a random event obeying a Poisson process. That is, it is characterised by a single parameter $\lambda$ which gives the likelihood of default in a given time period. We make a simplifying assumption that this is independent of all previous time periods, so default CAN’T be foreseen (this may be a weakness of the model, but perhaps not… discuss!).

Also, the firm can’t default more than once, so the process stops after default. A generalisation of the model will treat $\lambda$ as a function of time $\lambda(t)$ and even potentially a stochastic variable, but we won’t think about that for now.

Mathematically, the probability of the default time $\tau$ occurring in the small window $[t, t+dt]$ is

$\lim_{dt \to 0}\ p(\, \tau < t + dt\ |\ \tau > t\, ) = \lambda\, dt$

If we start at $t = 0$ with the firm not yet having defaulted, this tells us that

$p(\, \tau < dt\ |\ \tau > 0\, ) = \lambda\, dt$

and in a second and third narrow window, using the independence of separate time windows in the Poisson process and taking a physicist’s view on the meaning of $2dt$ and similar terms,

$\begin{matrix} p(\, dt < \tau < 2dt\, ) & = & p(\, \tau < 2dt\ |\ \tau > dt\, ) \cdot p(\, \tau > dt\, ) \\ & = & \lambda dt \cdot (1 - \lambda dt) \end{matrix}$

$\begin{matrix} p(\, 2dt < \tau < 3dt\, ) & = & \lambda dt \cdot \bigl( 1 - \lambda dt \cdot (1 - \lambda dt) - \lambda dt \bigr) \\ & = & \lambda dt \cdot (1 - 2\lambda dt + \lambda^2 dt^2) \\ & = & \lambda dt \cdot (1 - \lambda dt)^2 \end{matrix}$

and in general

$p(\, t < \tau < t + dt\, ) = (1 - \lambda dt)^n \cdot \lambda dt$

where $n = {t \over dt}$, and we must take the limit $dt \to 0$, which we can do by making use of the identity

$\lim_{a \to \infty} \Big( 1 + {x \over a} \Big)^a = e^x$

we see that

$\lim_{dt \to 0}\ p(\, t < \tau < t + dt\, ) = \lambda e^{-\lambda t}\, dt = p(\tau = t)\, dt$

and its cumulative density function is

${\rm F}(t) = \int_0^t p(\tau = u)\, du = 1 - e^{-\lambda t}$

which gives the probability that default has happened before time $t$ (the survival function ${\rm S}(t) = 1 - {\rm F}(t)$ gives the reverse – the chance the firm is still alive at time $t$). *Another derivation of $p(\tau = t)$ is given at the bottom.

A few comments on this result. Firstly, note that $p(\tau = t)$ is the density of the waiting time to the first jump of a Poisson process – the exponential distribution – which makes a lot of sense since a Poisson process was what we started with! It implies a mean survival time of $\lambda^{-1}$, which gives us some intuition about the physical meaning of $\lambda$. The CDF (and consequently the survival function) are just exponential decays; I’ve plotted them below.
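A quick simulation sketch (parameters and names are mine) confirms both headline facts: inverse-transform sampling of the CDF above produces default times with mean 1/λ, and the fraction of simulated firms surviving past t matches S(t) = e^(−λt):

```python
import math
import random

random.seed(1234)
lam = 0.1                      # hazard rate (my choice)
n = 200_000

# inverse-transform sampling: if u ~ U(0,1) then -ln(1-u)/lam has CDF 1 - exp(-lam*t)
taus = [-math.log(1.0 - random.random()) / lam for _ in range(n)]

mean_tau = sum(taus) / n
assert abs(mean_tau - 1.0 / lam) < 0.1       # mean survival time ~ 1/lambda = 10

# empirical survival fraction at t = 5 vs S(5) = exp(-lam * 5) ~ 0.6065
t = 5.0
frac_alive = sum(1 for x in taus if x > t) / n
assert abs(frac_alive - math.exp(-lam * t)) < 0.01
```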

Having characterised the default probability of the firm, in Part II I will think about how it affects the price of products that they issue.

*Another derivation of $p(\tau = t)$ is as follows:

$\lambda\, dt = p(\, \tau < t + dt\ |\ \tau > t\, ) = {p(\, t < \tau < t + dt\, ) \over p(\, \tau > t\, )}$

the first of these terms is approximately $p(\tau = t)\, dt$, while the second is simply the survival function ${\rm S}(t)$, which by definition obeys

$p(\tau = t) = -{\partial {\rm S} \over \partial t}$

combining this we have

$\lambda = -{1 \over {\rm S}} \cdot {\partial {\rm S} \over \partial t}$

and integrating gives

$-\lambda t = \ln {{\rm S}(t) \over {\rm S}(0)}$

from which we get the result above, that

${\rm S}(t) = e^{-\lambda t}$

## Fitting the initial discount curve in a stochastic rates model

I’ve introduced the Vasicek stochastic rates model in an earlier post, and here I’m going to introduce a development of it called the Hull-White (or sometimes Hull-White extended Vasicek) model.

The rates are modelled by a mean-reverting stochastic process

$dr_t = \big( \theta(t) - a\, r_t \big)\, dt + \sigma\, dW_t$

which is similar to the Vasicek model, except that the $\theta$ term is now allowed to vary with time (in general $a$ and $\sigma$ are too, but I’ll ignore those issues for today).

The freedom to set $\theta$ as a deterministic function of time allows us to calibrate the model to match the initial discount curve, which means that, at least initially, our model will give the right price for things like FRAs. Calibrating models to match market prices is one of the main things quants do. Of course, the market doesn’t really obey our model; this means that, in time, the market prices and the prices predicted by our model will drift apart, and the model will need to be re-calibrated. But the better a model captures the various risk factors in the market, the less often this procedure will be needed.

Using the trick

$d\big( e^{at}\, r_t \big) = e^{at}\big( dr_t + a\, r_t\, dt \big)$

to re-express the equation and integrating gives

$r_t = r_0\, e^{-at} + \int_0^t e^{-a(t-s)}\,\theta(s)\, ds + \sigma \int_0^t e^{-a(t-s)}\, dW_s$

where $r_0$ is the rate at $t = 0$. The observable quantities from the discount curve are the initial discount factors $P(0,t)$ (or equivalently the initial forward rates $f(0,t)$), where

$f(0,t) = -{\partial \over \partial t} \ln P(0,t)$

The rate $r_t$ is normally distributed, so the integral $\int_0^t r_s\, ds$ must be too. This is because an integral is essentially a sum, and a sum of normal distributions is also normally distributed. Applying the Ito isometry as discussed before, the expectation of this variable will come wholly from the deterministic terms and the variance will come entirely from the stochastic terms, giving

$\int_0^t r_s\, ds\ \sim\ {\mathbb N}\Big(\, r_0\, B(0,t) + \int_0^t \theta(s)\, B(s,t)\, ds\ ,\ \ \sigma^2 \int_0^t B(s,t)^2\, ds \,\Big)$

where throughout

$B(s,t) = {1 \over a}\Big( 1 - e^{-a(t-s)} \Big)$

and since

$P(0,t) = {\mathbb E}\Big[ e^{-\int_0^t r_s\, ds} \Big]$

we have

$-\ln P(0,t) = r_0\, B(0,t) + \int_0^t \theta(s)\, B(s,t)\, ds - {\sigma^2 \over 2} \int_0^t B(s,t)^2\, ds$

Two differentiations of this expression give

$f(0,t) = r_0\, e^{-at} + \int_0^t \theta(s)\, e^{-a(t-s)}\, ds - {\sigma^2 \over 2} B(0,t)^2$

${\partial f(0,t) \over \partial t} = -a\, r_0\, e^{-at} + \theta(t) - a \int_0^t \theta(s)\, e^{-a(t-s)}\, ds - \sigma^2 B(0,t)\, e^{-at}$

and combining these equations gives an expression for $\theta(t)$ that exactly fits the initial discount curve for the given currency. Since $f(0,t)$ is simply the initial market-observed forward rate to each time horizon coming from the discount curve, this can be compactly expressed as

$\theta(t) = {\partial f(0,t) \over \partial t} + a\, f(0,t) + {\sigma^2 \over 2a}\Big( 1 - e^{-2at} \Big)$

Today we’ve seen how a simple extension to the ‘basic’ Vasicek model allows us to match the initial discount curve seen in the market. Allowing the volatility parameter to vary will allow us to match market prices of other products such as swaptions (an option to enter into a swap), which I’ll discuss another time. But we’re gradually building up a suite of simple models that we can combine later to model much more complicated environments.
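Here is a sketch of what the fitting buys us (the flat-curve setup, parameter values and Euler discretisation are all my own choices, and I’ve used the compact fitting formula θ(t) = ∂f/∂t + a f(0,t) + σ²(1 − e^(−2at))/2a): simulating the fitted model by Monte Carlo and averaging the pathwise discount factors should recover the input discount curve:

```python
import math
import random

# toy setup: flat initial forward curve f(0,t) = f0, so df/dt = 0
a, sigma, f0 = 0.5, 0.01, 0.03

def theta(t):
    # Hull-White fitting formula for a flat curve
    return a * f0 + sigma**2 / (2 * a) * (1.0 - math.exp(-2 * a * t))

# Euler Monte Carlo of dr = (theta(t) - a*r) dt + sigma dW, discounting along each path
random.seed(7)
T, steps, paths = 1.0, 100, 5000
dt = T / steps
total = 0.0
for _ in range(paths):
    r, integral = f0, 0.0
    for i in range(steps):
        integral += r * dt
        r += (theta(i * dt) - a * r) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    total += math.exp(-integral)
mc_bond = total / paths

# the fitted model should reproduce the input discount factor P(0,T) = exp(-f0*T)
assert abs(mc_bond - math.exp(-f0 * T)) < 2e-3
```

Dropping the σ² convexity term from θ introduces a small but systematic mispricing, which is exactly the correction the fitting formula is there to provide.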

## Interview Questions VI: The Drunkard’s Walk

A drunk man is stumbling home at night after closing time. Every step he takes moves him either 1 metre closer or 1 metre further away from his destination, with an equal chance of going in either direction (!). His home is 70 metres down the road, but unfortunately, there is a cliff 30 metres behind him at the other end of the street. What are the chances that he makes it to his house BEFORE tumbling over the edge of the cliff?

This is a fun question and quite heavy on conditional probability. We are trying to find the probability that the drunkard moves 70 metres forward before he ever moves 30 metres backward, or vice versa. There are several ways of attempting this, including some pretty neat martingale maths, but I’m going to attempt it here in the language of matrices and Markov chains.

Essentially, there are 101 states that the drunkard can be in, from right beside the cliff all the way down the road to right beside his front door. Let’s label these from 0 to 100 by the number of metres away from the cliff, so that he starts in state 30. At each step, he transitions from his current state to either one state higher or one state lower with probability 50%, and the process continues until he reaches either the cliff or the door, at which point the process ceases in either a good or a very bad night’s rest. We call these two states, 0 and 100, ‘absorbers’, because the process stops there and no transitions to new states can happen. A Markov diagram that illustrates this process can be drawn like this:

We can characterise each step in the process by a transition matrix acting on a state vector. The drunkard initially has a state of 30 metres, so his state vector is a long string of zeroes with a single 1 at the 30th position:

$S_0 = \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix}$

This vector is probabilistic – a 1 indicates with 100% certainty that the drunkard is in the state representing 30 metres. However, with subsequent moves this probability will be spread over the nearby states as his position’s probability density diffuses into other states. The transition matrix multiplies the drunkard’s state vector to give his state vector after another step:

$P = \begin{pmatrix} 1 & 0.5 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0.5 & \cdots & 0 & 0 \\ 0 & 0.5 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0.5 & & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0.5 & 1 \end{pmatrix}; \quad S_{i+1} = P \cdot S_i$

So, after one step the drunkard’s state vector will have a 0.5 at the positions representing 29 and 31 metres and zeroes elsewhere, saying that he will be in either of these states with probability 0.5, and certainly nowhere else. Note that the 1s at the top-left and bottom-right of the transition matrix will absorb any probability that arrives at those states for the rest of the process.

To keep things simple, let’s consider a much smaller problem with only six states, where state 1 is ‘down the cliff’ and state 6 is ‘home’; and we’ll come back to the larger problem at the end. We want to calculate the limit of the drunkard’s state as the transition matrix is applied a large number of times, ie. to calculate

$S_n = P^n \cdot S_0$

An efficient way to calculate powers of a large matrix is to first diagonalise it. We have $P = U A\, U^{-1}$, where $A$ is a diagonal matrix whose diagonal elements are the eigenvalues of $P$, and $U$ is the matrix whose columns are the eigenvectors of $P$. Note that, as $P$ is not symmetric, $U^{-1}$ will have row vectors that are the LEFT eigenvectors of $P$, which in general will be different to the right eigenvectors. It is easy to see how to raise $P$ to the power n:

$P^n = (U A\, U^{-1})^n = U A^n\, U^{-1}$

since all of the $U$s and $U^{-1}$s in the middle cancel. Since $A$ is diagonal, to raise it to the power of n we simply raise each element (which are just the eigenvalues of $P$) to the power of n. To calculate the eigenvalues of $P$ we solve the characteristic equation

$|P - \lambda_a I| = 0$

with

$P = \begin{pmatrix} 1 & 0.5 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.5 & 1 \end{pmatrix}$

This gives six eigenvalues, two of which are one (these will turn out to correspond to the two absorbing states); the remainder satisfy $|\lambda_a| < 1$. Consequently, when the diagonal matrix is raised to the power n, all of the terms vanish as n becomes large, except for the first and the last eigenvalues, which are 1.

Calculating the eigenvectors is time consuming and we’d like to avoid it if possible. Luckily, in the limit that n gets large, we’ve seen that most of the eigenvalues raised to the power n will go to zero which will reduce significantly the amount we need to calculate. We have

$P^n \cdot S = \bigl( U A^n\, U^{-1} \bigr) \cdot S$

and in the limit that n gets large, $A^n$ is just a matrix of zeroes with a one in the upper-left and lower-right entries. At first sight it looks as though calculating $U^{-1}$ will be required, which is itself an eigenvector problem, but in fact we only have to calculate a single eigenvector – the first (or equivalently the last), which will give the probability of evolving from an initial state S to the final state 0 (or 100).

$U^{-1}$ is the matrix of left eigenvectors of $P$, each of which satisfies

$x_a \cdot P = \lambda_a\, x_a$

and we are looking for the left eigenvector corresponding to an eigenvalue of 1, so we need to solve the matrix equation

$x \cdot \begin{pmatrix} 0 & 0.5 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & -1 & 0.5 & 0 & 0 \\ 0 & 0 & 0.5 & -1 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0.5 & 0 \end{pmatrix} = 0$

We know that if he starts in the first state (ie. over the cliff) he must finish there (trivially, after 0 steps) with probability 100%, so that $x_1 = 1$ and $x_6 = 0$. The solution to this is

$x_n = {6 - n \over 5}$

which is plotted below for each initial state: the probability of ending up in state 1 (down the cliff!) falls linearly with starting distance from the cliff. We can scale this final matrix equation up to the original 101-state problem, and find that for someone starting in state 30 there is a 70% chance of ending up down the cliff, and consequently only a 30% chance of getting home!
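The limiting argument can be checked numerically without any eigenvector machinery, simply by iterating the transition matrix until the probability mass settles in the absorbers. A sketch for the six-state version (state indices here run 0-5 rather than the 1-6 used above, and all names are mine):

```python
# 6-state drunkard's walk: states 0..5, with 0 ('cliff') and 5 ('home') absorbing
N = 6
P = [[0.0] * N for _ in range(N)]
P[0][0] = 1.0                        # absorb at the cliff
P[N - 1][N - 1] = 1.0                # absorb at home
for j in range(1, N - 1):            # interior state j moves up/down with prob 0.5
    P[j - 1][j] = 0.5
    P[j + 1][j] = 0.5

def step(state):
    # one application of the (column-stochastic) transition matrix
    return [sum(P[i][j] * state[j] for j in range(N)) for i in range(N)]

# start one state up from the cliff (the text's state 2) and iterate to the limit
state = [0.0] * N
state[1] = 1.0
for _ in range(500):
    state = step(state)

# limiting cliff probability (6 - n)/5 with n = 2, i.e. 4/5
assert abs(state[0] - 4.0 / 5.0) < 1e-6
assert abs(state[N - 1] - 1.0 / 5.0) < 1e-6
```

After enough iterations essentially all the mass sits in the two absorbing states, matching the x_n = (6−n)/5 solution.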

This problem is basically the same as the Gambler’s Ruin problem, in which a gambler in a casino stakes $1 on each toss of a coin and leaves either after reaching a goal of $N or when broke. There are some very neat ways of solving it via martingale methods that don’t use the mechanics above, which I’ll look at in a future post.

## Asian Options II: Monte Carlo

In a recent post, I introduced Asian options, an exotic option whose payoff depends on the average of the underlying over a sequence of fixing dates.

The payoff of an Asian call is

$C(T) = \Big( {1 \over N} \sum_{i=1}^N S_{t_i} - K \Big)^+$

and to price the option at an earlier time via risk-neutral valuation requires us to solve the integral

$C(t) = P_{t,T}\ \iint \cdots \int \Big( {1 \over N} \sum_{i=1}^N S_{t_i} - K \Big)^+ \phi(S_{t_1}, S_{t_2}, \cdots, S_{t_N})\ dS_{t_1}\, dS_{t_2} \cdots dS_{t_N}$

A simple way to do this is using Monte Carlo – we simulate a spot path drawn from the distribution $\phi(S_{t_1}, S_{t_2}, \cdots, S_{t_N})$, evaluate the payoff at expiry for that path, and then average over a large number of paths to get an estimate of $C(t)$.

It’s particularly easy to do this in Black-Scholes: all we have to do is simulate N Gaussian variables and evolve the underlying according to a geometric Brownian motion.

I’ve updated the C++ and Excel Monte Carlo pricer on the blog to be able to price Asian puts and calls by Monte Carlo – have a go and see how they behave relative to vanilla options. One subtlety is that we can no longer input a single expiry date; we now need an array of dates as our function input. If you have a look in the code, you’ll see that I’ve adjusted the function optionPricer to take a datatype MyArray, which is the default array used by XLW (it won’t be happy if you tell it to take std::vector). This array can be entered as a row or column of Excel cells, and should contain the dates, expressed as years from the present date, to average over.
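For anyone who’d rather see the algorithm than dig through the C++, here is a minimal Python sketch of the same Monte Carlo scheme (the function name and defaults are mine). The K = 0 sanity check works because the discounted expectation of the average spot is then known in closed form:

```python
import math
import random

def asian_call_mc(S0, K, r, sigma, fixing_times, n_paths=20000, seed=0):
    """Monte Carlo arithmetic Asian call: evolve a GBM across sorted fixing dates."""
    rng = random.Random(seed)
    T = fixing_times[-1]
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n_paths):
        S, t_prev, avg = S0, 0.0, 0.0
        for t in fixing_times:
            dt = t - t_prev
            # exact GBM step between fixings
            S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1))
            avg += S
            t_prev = t
        avg /= len(fixing_times)
        total += max(avg - K, 0.0)
    return disc * total / n_paths

# sanity check: with K = 0 the payoff is just the average spot, whose discounted
# expectation is e^{-rT} * (1/N) * sum_i S0 * e^{r t_i}
times = [0.25, 0.5, 0.75, 1.0]
S0, r, sigma = 100.0, 0.05, 0.2
expected = math.exp(-r * times[-1]) * sum(S0 * math.exp(r * t) for t in times) / len(times)
price = asian_call_mc(S0, 0.0, r, sigma, times)
assert abs(price - expected) / expected < 0.01
```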

## The Dupire Local Vol Model

In this post I’m going to look at a further generalisation of the Black-Scholes model, which will allow us to re-price any arbitrary market-observed volatility surface, including those featuring a volatility smile.

I’ve previously looked at how we can produce different at-the-money vols at different times by using a piecewise constant volatility $\sigma(t)$, but we were still unable to produce the smiley vol surfaces often observed in the market. We can go further by allowing vol to depend on both t and the value of the underlying S, so that the full model dynamics are given by the following SDE

$dS_t = S_t \cdot \Big(\ r(t)\, dt\ +\ \sigma(t, S_t)\, dW_t\ \Big)$

Throughout this post we will make constant use of the probability distribution of the underlying implied by this SDE at future times, which I will denote $\phi(t, S_t)$. It can be shown [and I will show in a later post!] that the evolution of this distribution obeys the Kolmogorov Forward Equation (sometimes called the Fokker-Planck equation)

${\partial \phi(t,S_t) \over \partial t} = -{\partial \over \partial S_t}\big( r S_t\, \phi(t,S_t) \big) + {1 \over 2}{\partial^2 \over \partial S_t^2}\big( \sigma^2(t,S_t)\, S_t^2\, \phi(t,S_t) \big)$

This looks a mess, but it essentially tells us how the probability distribution changes with time – we can see that it looks very much like a heat equation with an additional driving term coming from the SDE drift.

Vanilla call option prices are given by

$C(t,S_t) = P(0,t) \int_K^{\infty} \big( S_t - K \big)\, \phi(t, S_t)\, dS_t$

Assuming the market provides vanilla call option prices at all times and strikes [or at least enough for us to interpolate across the region of interest], we can calculate the time derivative of the call price which is equal to

${\partial C \over \partial T} = -rC + P(0,T) \int_K^{\infty} \big( S_T - K \big)\ {\partial \phi \over \partial T}\ dS_T$

and we can substitute in the value of the time derivative of the probability distribution from the Kolmogorov equation above

$rC + {\partial C \over \partial T} = P(0,T) \int_K^{\infty} \big( S_T - K \big) \Big[ -{\partial \over \partial S_T}\big( r S_T \phi \big) + {1 \over 2}{\partial^2 \over \partial S_T^2}\big( \sigma^2 S_T^2 \phi \big) \Big] dS_T$

These two integrals can be solved by integration by parts with a little care

\begin{align*} -\int_K^{\infty} \big( S_T - K \big) {\partial \over \partial S_T}\big( r S_T \phi \big)\, dS_T & = -\Big[ r S_T \phi \big( S_T - K \big) \Big]^{\infty}_{K} + \int_K^{\infty} r S_T \phi\ dS_T \\ & = r \int_K^{\infty} (S_T \phi)\ dS_T \end{align*}

\begin{align*} \int_K^{\infty} \big( S_T - K \big) {\partial^2 \over \partial S_T^2}\big( \sigma^2 S_T^2 \phi \big)\, dS_T & = \Big[ \big( S_T - K \big) {\partial \over \partial S_T}\big( \sigma^2 S_T^2 \phi \big) \Big]^{\infty}_{K} - \int_K^{\infty} {\partial \over \partial S_T}\big( \sigma^2 S_T^2 \phi \big)\ dS_T \\ & = \sigma^2 K^2 \phi(K,T) \end{align*}

where in both cases the boundary terms vanish at the upper limit because the distribution $\phi(t,S_t)$ and its derivatives go to zero rapidly at high spot.

We already have an expression for $\phi(t,S_t)$ in terms of C and its derivatives from our survey of risk-neutral probabilities,

$\phi(t,S_t) = {1 \over P(0,t)}{\partial^2 C \over \partial K^2}$

and we can re-arrange the formula above for call option prices

\begin{align*} P(0,T) \int_K^{\infty} S_T\ \phi\ dS_T & = C + P(0,T) \int_K^{\infty} K \phi\ dS_T \\ & = C - K {\partial C \over \partial K} \end{align*}

(using ${\partial C \over \partial K} = -P(0,T)\int_K^{\infty} \phi\ dS_T$), and substituting these expressions for $\phi(t,S_t)$ and $\int^{\infty}_K (S_T \phi)\ dS_T$ into the equation above

\begin{align*} rC + {\partial C \over \partial T} & = P(0,T) \cdot \Big[ r \int_K^{\infty} (S_T \phi)\ dS_T + {1 \over 2}\sigma^2 K^2 \phi(K,T) \Big] \\ & = rC - rK {\partial C \over \partial K} + {1 \over 2}\sigma^2 K^2 {\partial^2 C \over \partial K^2} \end{align*}

and remember that $\sigma = \sigma(t,S_t)$, which is our Dupire local vol. Cancelling the rC terms from each side and re-arranging gives

$\sigma(T,K) = \sqrt{ {\partial C \over \partial T} + rK {\partial C \over \partial K} \over {1 \over 2} K^2 {\partial^2 C \over \partial K^2} }$

It’s worth taking a moment to think what this means. From the market, we will have access to call prices at many strikes and expiries. If we can choose a robust interpolation method across time and strike, we will be able to calculate the derivative of price with time and with strike, and plug those into the expression above to give us a Dupire local vol at each point on the time-price plane. If we are running a Monte-Carlo engine, this is the vol that we will need to plug in to evolve the spot from a particular spot level and at a particular time, in order to recover the vanilla prices observed on the market.
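This recipe is easy to sketch in code. Below is a minimal finite-difference implementation (the grid steps and function names are my own choices, not from any library): we bump the call surface in strike and expiry to estimate the derivatives, then apply the formula. As a sanity check, a surface generated from Black-Scholes with a flat vol should hand back that same vol as the local vol.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes call price (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def dupire_local_vol(call, K, T, r, dK=0.1, dT=1e-4):
    """sigma(T,K)^2 = (dC/dT + r K dC/dK) / (0.5 K^2 d2C/dK2), central differences."""
    C_T  = (call(K, T + dT) - call(K, T - dT)) / (2.0 * dT)
    C_K  = (call(K + dK, T) - call(K - dK, T)) / (2.0 * dK)
    C_KK = (call(K + dK, T) - 2.0 * call(K, T) + call(K - dK, T)) / dK**2
    return math.sqrt((C_T + r * K * C_K) / (0.5 * K**2 * C_KK))

# Sanity check: a flat-vol Black-Scholes surface should return a flat local vol
S0, r, flat_vol = 100.0, 0.05, 0.2
surface = lambda K, T: bs_call(S0, K, r, flat_vol, T)
lv = dupire_local_vol(surface, K=110.0, T=1.0, r=r)
print(round(lv, 3))
```

In practice the hard part is the interpolation of market quotes, not the differentiation – naive splines can easily produce negative densities ($\partial^2 C / \partial K^2 < 0$) and hence imaginary local vols.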

A nice property of the local vol model is that it can match any observed market call price surface uniquely. However, the model has weaknesses as well – by making the volatility a deterministic function of spot and time, we keep only one source of uncertainty (the spot itself), which is too much of a simplification. Although vanilla prices match, exotics priced using local vol models typically have prices that are much lower than prices observed on the market. The local vol model tends to put most of the vol at early times, so that longer-running exotics significantly underprice.

It is important to understand that this is NOT the implied vol used when quoting vanilla prices. The implied vol and the local vol are related along a spot path by the expression

$\Sigma^2 T = \int_0^T\sigma^2(S_t,t)\,dt$

(where $\Sigma$ is the implied vol) and the two are quite different. Implied vol is the square root of the average variance per unit time, while the local vol gives the amount of additional variance being added at particular positions on the S-t plane. Since we have an expression for local vol in terms of the call price surface, and there is a 1-to-1 correspondence between call prices and implied vols, we can derive an expression to calculate local vols directly from an implied vol surface. The derivation is long and tedious but mathematically straightforward, so I don’t present it here; the result is that the local vol is given by (rates are excluded here for simplicity)

$\sigma(y,T) = \sqrt{{\partial w \over \partial T} \over \Big[ 1 - {y \over w}{\partial w \over \partial y} + {1\over 2}{\partial^2 w \over \partial y^2} + {1 \over 4}\Big( -{1 \over 4} - {1 \over w} + {y^2 \over w^2}\Big)\Big({\partial w \over \partial y}\Big)^2 \Big]}$

where $w = \Sigma^2 T$ is the total implied variance to a given maturity and strike, and $y = \ln{K \over F_T}$ is the log of ‘moneyness’.
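This version of the formula can also be checked mechanically. The sketch below (the quadratic toy smile is an assumption for illustration, not market data) finite-differences a total-variance surface $w(y,T)$ and applies the expression above; for a flat smile it collapses back to the implied vol, as it must.

```python
import math

def local_vol_from_w(w, y, T, dy=1e-3, dT=1e-4):
    """Local vol from total implied variance w(y, T), y = log-moneyness ln(K/F_T)."""
    w0   = w(y, T)
    w_T  = (w(y, T + dT) - w(y, T - dT)) / (2.0 * dT)
    w_y  = (w(y + dy, T) - w(y - dy, T)) / (2.0 * dy)
    w_yy = (w(y + dy, T) - 2.0 * w0 + w(y - dy, T)) / dy**2
    denom = (1.0 - (y / w0) * w_y + 0.5 * w_yy
             + 0.25 * (-0.25 - 1.0 / w0 + y**2 / w0**2) * w_y**2)
    return math.sqrt(w_T / denom)

flat  = lambda y, T: 0.2**2 * T                 # flat smile: w = sigma^2 T
smile = lambda y, T: (0.2**2 + 0.1 * y**2) * T  # toy quadratic smile (an assumption)

print(round(local_vol_from_w(flat, 0.0, 1.0), 4))   # recovers 0.2
print(round(local_vol_from_w(smile, 0.25, 1.0), 4))
```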

This is probably about as far as Black-Scholes alone will take you. Although we can reprice any vanilla surface, we’re still not pricing exotics very well – to correct this we’re going to need to consider other sources of uncertainty in our models. There are a wide variety of ways of doing this, and I’ll begin to look at more advanced models in future posts!

## Asian Options I

Today I’m going to introduce the first exotic option that I’ve looked at in my blog, the Asian option. There is quite a lot to be said about them so I’ll do it over a few posts. I’ll also be updating the pricers to price these options in a variety of ways.

The simplest form of Asian option has a payoff similar to a vanilla option, but instead of depending on the value of the underlying spot price at expiry, it instead depends on the average of the underlying spot price over a series of dates specified in the option contract. That is, the value of the option at expiry is the payoff

$\Big( {1 \over N}\sum_{i=1}^{N} S_{t_i} - K \Big)^+$

where $K$ is the strike, $N$ is the number of averaging dates, and $S_{t_i}$ is the value of the spot that is realised at each of these dates.

First of all, what does it mean that this is an exotic option? From the market, we will be able to see the prices of vanilla options at or near each of the dates that the contract depends on. As I discussed in my post on Risk-Neutral Valuation, from these prices we can calculate the expected distribution of possible future spot prices at each date – shouldn’t we be able to use this to calculate a unique price of the Asian option consistent with the vanilla prices?

The answer, unfortunately, is no. Although we know the distribution at each time, this doesn’t tell us anything about the correlations between the price at time $t_1$ and time $t_2$, which will be critical for the Asian option. To illustrate this, let’s consider a simple Asian that only averages over two dates, $t_1$ and $t_2$, and we’ll take $t_2$ to be the payoff date for ease, and try to calculate its price via risk-neutral valuation. The price at expiry will be

$C(t_2) = \Big( {S_{t_1} + S_{t_2} \over 2} - K \Big)^+$

To calculate the price at earlier times, we use the martingale property of the discounted price in the risk-neutral measure

$C(0) = \delta(0,t_2)\,{\mathbb E}\Big[\Big( {S_{t_1} + S_{t_2} \over 2} - K \Big)^+\Big]$

and expanding this out as an integral, we have

$C(0) = \delta(0,t_2)\int_0^{\infty}\!\!\int_0^{\infty} \Big( {S_1 + S_2 \over 2} - K \Big)^+\,\phi(S_1,S_2)\ dS_1\,dS_2$

where $\delta(0,t_2)$ denotes the discount factor, to avoid confusion with the pdfs inside the integral. From the market, we know both $\phi(S_1)$ and $\phi(S_2)$, which are respectively the market-implied probability distributions of the spot price at times $t_1$ and $t_2$, calculated from vanilla call prices using the expression we derived before

$\phi(S_t) = {1 \over \delta(0,t)}{\partial^2 C \over \partial K^2}$

But, we don’t know the JOINT distribution $\phi(S_1,S_2)$, which expresses the effect of the dependence of the two distributions. We know from basic statistics that

$\phi(S_1,S_2) = \phi(S_2 \mid S_1)\,\phi(S_1)$

and since the spot at the later time depends on the realised value of the spot at the earlier time, the conditional distribution $\phi(S_2 \mid S_1)$ isn’t something vanilla prices tell us – so we don’t have enough information to solve the problem from only the market information available from vanilla options.

What is this telling us? It is saying that the market-available information isn’t enough to uniquely determine a model for underlying price movements with time. There are a variety of models that will all re-create the same vanilla prices, but will disagree on the price of anything more exotic, including our simple Asian option. However, observed vanilla prices do allow us to put some bounds on the price of the option. For example, a careful application of the triangle inequality tells us that

$\Big( {S_{t_1} + S_{t_2} \over 2} - K \Big)^+ \leq {1\over 2}\big( S_{t_1} - K_1 \big)^+ + {1\over 2}\big( S_{t_2} - K_2 \big)^+$

for any $K_1 + K_2 = 2K$; the expressions on the right are proportional to the payoffs at expiry of call options on $S_{t_1}$ and $S_{t_2}$ respectively, and we can see the prices of these on the market, which allows an upper bound on the asian price. This means that, while we can’t uniquely price the option using the market information alone, we CAN be sure that any model consistent with market vanilla prices will produce an asian price that is lower than the limit implied by this inequality.

In order to go further, we need to make a choice of model. Our choice will determine the price that we get for our option, and changing model may well lead to a change in price – analysing this ‘model risk’ is one of a quant’s main jobs. A very simple choice such as Black-Scholes [but note that BS might not be able to re-create an arbitrary vanilla option surface, for example if there is a vol smile at some time slices] will allow us to solve the integral above by numerical integration or by Monte Carlo (this option form still doesn’t have a closed form solution even in BS).
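As a taster for those coming posts, here is a minimal sketch of a Black-Scholes Monte Carlo pricer for the arithmetic Asian (the parameters and function names are my own choices; a production pricer would add variance reduction and standard-error estimates):

```python
import math, random

def asian_call_mc(S0, K, r, sigma, times, n_paths=100_000, seed=42):
    """Monte Carlo price of an arithmetic-average Asian call under Black-Scholes."""
    rng = random.Random(seed)
    disc = math.exp(-r * times[-1])   # payoff made at the final averaging date
    total = 0.0
    for _ in range(n_paths):
        S, t_prev, avg = S0, 0.0, 0.0
        for t in times:
            dt = t - t_prev
            z = rng.gauss(0.0, 1.0)
            # exact lognormal step in the risk-neutral measure
            S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            avg += S
            t_prev = t
        total += max(avg / len(times) - K, 0.0)
    return disc * total / n_paths

# Two averaging dates, as in the t1/t2 example above (parameters are arbitrary)
price = asian_call_mc(100.0, 100.0, 0.05, 0.2, times=[0.5, 1.0])
print(round(price, 2))
```

Note that the result sits below the corresponding basket of vanilla calls, consistent with the upper bound discussed above.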

In the coming posts I’m going to develop the Monte Carlo pricer on the blog to price asian options in BS, and look at a number of simplifying assumptions that can be made which will allow us to find closed form approximations to the price of Asian options.

Finally, a brief note on the naming and usage of Asian options. Opinion seems divided – some people believe they are called Asian options because they’re neither European nor American, while others say it’s because they were first traded in Japan. They are often traded so that the averaging dates are the business days of the last month of the option’s life. Because the averaging is spread out over a period, they are less susceptible to market manipulation on any particular date, which can be important in less liquid markets – although the theoretical prices of Asian options like this are usually very similar to those of vanilla options expiring on any of the averaging dates. More exotic Asians might average over the last day of each month for a year. This average is much less volatile than the underlying over the same period, so Asian options are much cheaper than a portfolio of vanillas, but it’s easy to imagine that they might well match the risk exposure of a company very well, since companies trading in two or more currency areas will probably be happy to hedge their average exchange rate over a period of time rather than the exchange rate at any one particular time.

## Interview Questions V: The Lowest Unique Positive Integer Game

Today’s post will be quite an extended one, about an interesting game theory problem I’ve been thinking about recently. The game is a lottery that works as follows. Each of N players chooses a positive integer, and the player who picks the lowest integer that no-one else has selected wins. There are two variants of the game discussed in the literature, one in which the integers must be between 1 and N, and one in which they have no limit. Except in the specific case of only 3 players, this restriction seems to make no difference to the solution […unless I’ve got my algebra wrong, which is by no means impossible!].

Note that throughout this post, I’ll assume a basic familiarity with probability and knowledge of the formulae for infinite sums (given here, for example), particularly the sum to infinity of a geometric progression:

$\sum_{n=0}^{\infty} ar^n = {a \over 1 - r}, \qquad |r| < 1$

3-Player Case

I start with the N=3 case to illustrate the principles of the problem. Because going in to the problem we have exactly the same information as the other two players, there is no deterministic solution, and in order to maximise our chances of victory we need to adopt a ‘mixed strategy’ which involves selecting a random distribution that assigns a probability to the selection of each integer. This is explained a bit further at this website, although unfortunately the solution that they present is incorrect!

In the case that players are limited to selecting 1-N (so 1, 2 or 3), the problem is analysed in a paper on arXiv (although there’s also a mistake in this paper – this time the N=4 solution presented is incorrect, as I’ll show later), and my solution will follow the author’s.

We are trying to select a set of probabilities $\{p_1, p_2, p_3\}$ such that when all players play this strategy, no player increases her chances of victory by adjusting her strategy (such a solution is called a Nash Equilibrium in game theory, as seen in A Beautiful Mind). Let’s assume player 2 and player 3 are both using these probabilities, and player 1 tries to increase her chance of victory by playing instead the strategy $\{q_1, q_2, q_3\}$.

If she chooses 1, she wins if the other players both play either 2 or 3; if she chooses 2 she wins only if the other players both play 1 or both play 3, and if she chooses 3 she wins only if both other players play 1 or both play 2. So, we can express her total probability of victory as the sum of these three terms

$P({\rm win}) = q_1\big( p_2 + p_3 \big)^2 + q_2\big( p_1^2 + p_3^2 \big) + q_3\big( p_1^2 + p_2^2 \big)$

And subject to the normalisation condition

$q_1 + q_2 + q_3 = 1$

Maximising the victory probability is a natural problem for the method of Lagrange Multipliers. The Lagrange equation

${\cal L} = q_1\big( p_2 + p_3 \big)^2 + q_2\big( p_1^2 + p_3^2 \big) + q_3\big( p_1^2 + p_2^2 \big) - \lambda\big( q_1 + q_2 + q_3 - 1 \big)$

gives us the following three equations

\begin{align*} \big( p_2 + p_3 \big)^2 & = \lambda \\ p_1^2 + p_3^2 & = \lambda \\ p_1^2 + p_2^2 & = \lambda \end{align*}

And as in the paper above, we can solve these simultaneously along with the normalisation condition to give the probabilities

$p_1 = 2\sqrt{3} - 3 \approx 0.46410, \qquad p_2 = p_3 = 2 - \sqrt{3} \approx 0.26795$

Her chances of victory are now 0.28719, regardless of her strategy – the equations above say that as long as two players play this solution, the other player can’t change her probability of winning by choosing ANY strategy. For example, if she switches her strategy to ALWAYS choosing 1, she will now win whenever the other two players play either 2 or 3, the chance of which is given by $\big( p_2 + p_3 \big)^2 = 28 - 16\sqrt{3}$ – and this is still 0.28719.
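This equilibrium is easy to verify by brute force over all 27 combinations of picks (a quick sketch; the helper function is mine):

```python
import math
from itertools import product

# Equilibrium strategy for the 1-to-3 restricted game
p = [2.0 * math.sqrt(3.0) - 3.0, 2.0 - math.sqrt(3.0), 2.0 - math.sqrt(3.0)]

def win_prob(q, p):
    """P(player 1 wins) when she plays q and players 2, 3 both play p."""
    total = 0.0
    for i, j, k in product(range(3), repeat=3):
        picks = [i, j, k]
        unique = [x for x in picks if picks.count(x) == 1]
        if unique and min(unique) == i:  # her pick is the lowest unique one
            total += q[i] * p[j] * p[k]
    return total

print(round(win_prob(p, p), 5))          # 0.28719 at the equilibrium
print(round(win_prob([1, 0, 0], p), 5))  # always play 1: still 0.28719
print(round(win_prob([0, 0, 1], p), 5))  # always play 3: still 0.28719
```

Every pure strategy gives the same 0.28719 against the equilibrium mix, which is exactly the defining property of a Nash Equilibrium in mixed strategies.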

Now what if we relax the restriction that bids be between 1 and 3?

We can see immediately that the solution presented in the first link above, $p_n = \big({1\over 2}\big)^n$, is incorrect – in this case, if the third player always plays 1, she wins ¼ of the time (when both other players play more than 1), while if she always plays 1000, she wins almost a third of the time (whenever the other two players play the same number). But we said her victory probability should be independent of strategy for the Nash Equilibrium, so this can’t be correct. But it is along the right lines – the actual solution does decay rapidly as n increases.

Her chance of victory is now

$P({\rm win}) = \sum_{n=1}^{\infty} q_n \bigg[ \Big( \sum_{m>n} p_m \Big)^2 + \sum_{m<n} p_m^2 \bigg]$

And differentiating this with respect to each $q_n$ in turn gives an infinite set of coupled equations

$\Big( \sum_{m>n} p_m \Big)^2 + \sum_{m<n} p_m^2 = \lambda \qquad \forall\, n$

This turns out to be solvable, but difficult – a sketch of the solution is given at the bottom of the post. The solution is $p_n = A\alpha^n$; the constant $A$ is given by the normalisation condition

$\sum_{n=1}^{\infty} A\alpha^n = {A\alpha \over 1 - \alpha} = 1 \quad\Rightarrow\quad A = {1 - \alpha \over \alpha}$

and we can use a trick to find the parameter $\alpha$. We’ve said that the third player’s odds of victory are independent of her strategy. Consider two strategies, one in which she plays the same as the other players, and another in which she always plays the same, very large number. In the first case, chances of victory are equal for all three players, so they must be equal to 1/3 times one minus the chances of NO-ONE winning (which happens when they all choose the same number)

$P({\rm win}) = {1 \over 3}\Big( 1 - \sum_{n=1}^{\infty} p_n^3 \Big)$

And in the second case, she wins whenever the other two players play the same number

$P({\rm win}) = \sum_{n=1}^{\infty} p_n^2$

We can combine these and substitute in the solution $p_n = A\alpha^n$ with $A = {1-\alpha \over \alpha}$, and solve for $\alpha$:

${1 \over 3}\Big( 1 - \sum_{n=1}^{\infty} A^3\alpha^{3n} \Big) = \sum_{n=1}^{\infty} A^2\alpha^{2n}$

and using the summation formula

$\sum_{n=1}^{\infty} \alpha^{kn} = {\alpha^k \over 1 - \alpha^k}$

Expanding out and cancelling terms where possible, then taking out factors, this reduces to

$\alpha^3 + \alpha^2 + \alpha - 1 = 0 \qquad (1)$

So $\alpha$ is the real root of the equation $\alpha^3 + \alpha^2 + \alpha = 1$, which is given by $\alpha \approx 0.5437$. A quick wikipedia search reveals that this is the reciprocal of something called the tribonacci constant, vital in the construction of the Snub Cube. I don’t know if this has some deeper resonance, but I’d love to hear from anyone who thinks it does! So, the full symmetric Nash Equilibrium in the 3-player, no upper bid limit case is

$p_n = {1 - \alpha \over \alpha}\,\alpha^n$ with $\alpha \approx 0.5437$. Note that the (incorrect) form given in the linked-to website can also be expressed in the same form, but with $\alpha = {1 \over 2}$.
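A short script confirms both the root and the consistency trick used above – the two strategies considered give identical odds when $\alpha$ solves the cubic (the truncation at 200 terms is an arbitrary choice of mine; the tail is negligible):

```python
# Solve a^3 + a^2 + a = 1 by bisection, then check the equilibrium property
def f(a):
    return a**3 + a**2 + a - 1.0

lo, hi = 0.0, 1.0          # f(0) = -1 < 0 and f(1) = 2 > 0 bracket the root
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)
print(round(alpha, 5))     # ~0.54369, reciprocal of the tribonacci constant

# p_n = (1 - alpha)/alpha * alpha^n, truncated far out in the tail
N = 200
p = [(1.0 - alpha) / alpha * alpha**n for n in range(1, N + 1)]

# "Always play the same huge number": win when the other two collide
collide = sum(x * x for x in p)
# "Play the equilibrium": (1/3) * (1 - P(all three collide))
equil = (1.0 - sum(x**3 for x in p)) / 3.0
print(round(collide, 6), round(equil, 6))   # both ~0.2956
```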

4-Player Case

In the case of 4 players and above, the maximisation equation is

$P({\rm win}) = \sum_n q_n \bigg[ \Big( \sum_{m>n} p_m \Big)^3 + 3\sum_{m<n} p_m^2 \sum_{l>n} p_l + \sum_{m<n} p_m^3 \bigg]$

where the new term in the middle of the expression comes from the victory scenario in which two players both choose the same lower integer than our player and so are not unique, while the third player chooses one that is larger. This is the term that is missing in the paper linked to above that leads to the incorrect solution.

In the case that UL = 4, we have four equations plus the normalisation condition to solve. The equations are cubic, so the general solution of the cubic is required. It’s uncommon enough for me to have had to refer to wikipedia, where the solution can be found – we are saved some hassle because the relevant cubics have no quadratic term.

Once again, a sketch of the solution in the unrestricted scenario is presented at the bottom of the post, but it seems to make very little difference whether the upper limit UL is set to 4 or infinity. In both cases, I find that

Why does the upper limit no longer make any difference? We only want to choose a higher number if there is a significant chance of the lower numbers being ‘crowded out’ by the other players. That is, we sometimes choose 4 because there is a small chance that the other three players will all select either 1, 2 or 3. Similarly, we would only choose 5 if there was a significant chance that 4 wasn’t high enough, because there was also a significant chance of all of the other three playing 4 – but from looking at the values, we see that $p_4$ is very small already, and $p_4^3$ is truly tiny. Consequently, in the general case, $p_n$ for $n > N$ is going to be negligibly small.

N-player Case

The problem rapidly becomes hard to compute for large N, but there is a body of literature on the subject, in particular this 2010 paper points out the mistake in the paper I linked to before, and presents a general solution for N players. Note from their graphs that the probability of choosing a number greater than N is essentially 0 for higher numbers of players, so whether or not we impose this restriction doesn’t change the equilibrium materially.

Derivations

I hope you’ve enjoyed my take on this problem, the sketches of my method of solution for the 3- and 4-player cases are given below but can probably be skipped on a first reading.

In the three-player case, the Lagrange equations are

$\Big( \sum_{m>i} p_m \Big)^2 + \sum_{m<i} p_m^2 = \lambda \qquad \forall\, i$

For i=1, and using normalisation, this gives

$\Big( \sum_{m>1} p_m \Big)^2 = \big( 1 - p_1 \big)^2 = \lambda$

Which means for i greater than 1 we have

$\Big( \sum_{m>i} p_m \Big)^2 + \sum_{m<i} p_m^2 = \big( 1 - p_1 \big)^2$

and with a little re-arrangement (subtracting the equations for consecutive values of i)

$p_{i+1}^2 - 2 p_{i+1}\Big( 1 - \sum_{m=1}^{i} p_m \Big) + p_i^2 = 0$

This gives a ‘step-up’ formula for the construction of later terms from the sum of earlier terms. An engineering-style solution is to write a script (or an excel spreadsheet) that calculates $p_2$ to $p_N$ from an initial guess of $p_1$, and then to vary the initial guess to satisfy the normalisation condition. Alternatively, we can show by induction that the expression above is satisfied by our solution $p_n = A\alpha^n$. First of all, assuming the relationship holds for $p_i$, let’s see what it implies for $p_{i+1}$ (for notational simplicity, I’m going to write $\sum_{m=1}^{i} p_m$ below as $\sigma_i$)

and remembering that from our normalisation condition

$\sigma_i = \sum_{m=1}^{i} A\alpha^m = 1 - \alpha^i$

so we see that the ratio demonstrated above satisfies the step-up equation for every $i > 1$. The first term is a special case, so I show this separately

And now setting $i = 1$:

so we see that the first term is satisfied by the expression.
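The ‘engineering-style’ shooting approach mentioned above can be sketched as follows: build $p_2, p_3, \dots$ from a guess of $p_1$ using the step-up relation, and bisect on $p_1$ until the distribution is normalised (the bracketing bounds and iteration counts are my own choices):

```python
import math

def build_p(p1, N=60):
    """Build p_1..p_N from a guess of p_1 using the step-up relation
    p_{i+1} = (1 - s_i) - sqrt((1 - s_i)^2 - p_i^2), with s_i = p_1 + ... + p_i."""
    p = [p1]
    s = p1
    for _ in range(N - 1):
        rest = 1.0 - s
        disc = rest * rest - p[-1] ** 2
        if disc < 0:        # guess inconsistent with a valid distribution
            return None
        p.append(rest - math.sqrt(disc))
        s += p[-1]
    return p

# Shoot on p_1: too small and the probabilities sum to < 1,
# too large and the recursion breaks down
lo, hi = 0.3, 0.6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    trial = build_p(mid)
    if trial is None or sum(trial) > 1.0:
        hi = mid
    else:
        lo = mid

p = build_p(0.5 * (lo + hi))
ratio = p[1] / p[0]   # should approach the tribonacci reciprocal ~0.54369
print(round(p[0], 5), round(ratio, 5))
```

The recovered $p_1 \approx 0.45631 = 1 - \alpha$ and the common ratio $\approx 0.54369$ agree with the closed-form solution $p_n = {1-\alpha \over \alpha}\alpha^n$.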

A similar expression for 4 players is given by the solution to

which gives

with

I’ve not found an analytical solution to this for general $n$ yet, and had to proceed by the variational technique discussed above, giving the answers in the body of the post. If you can solve this exactly for me, I’d be very interested to hear!

## European vs. American Options

All of the options that I’ve discussed so far on this blog have been European options. A European option gives us the right to buy or sell an asset at a fixed price, but only on a particular expiry date. In this post, I’m going to start looking at American options, which give the right to buy or sell at ANY date up until the expiry date.

Surprisingly, despite the apparent extra utility of American options, it turns out that for vanilla call options the price of American and European options is almost always the same! Why is this?

In general, American options are MUCH harder to price than European options, since they depend in detail on the path that the underlying takes on its way to the expiry date, unlike Europeans which just depend on the terminal value, and no closed form solution exists. One thing we can say is that an American option will never be LESS valuable than the corresponding European option, as it gives you extra optionality but doesn’t take anything away. So we can always take the European price to be a lower bound on American prices. Also note that Put-Call Parity no longer holds for Americans, and becomes instead an inequality.

How can we go any further? It is useful in this case to think about the value of an option as made up of two separate parts, an ‘intrinsic value’ and a ‘time value’, which sum to give the true option value. The ‘intrinsic value’ is the value that would be received if the exercise was today – in the case of a vanilla call, this is simply $\max(S - K, 0)$. The ‘time value’ is the ‘extra’ value due to time-to-expiry. This is the volatility-dependent part of the price, since we are shielded by the optionality from price swings in the wrong direction, but are still exposed to upside from swings in our favour. As time goes by, the value of the option must approach the ‘intrinsic value’, as the ‘time value’ decays towards expiry.

Consider the graph above, which shows the BS value of a simple European call under typical parameters. Time value is maximal at-the-money, since this is the point where the implicit insurance that the option provides is most useful to us (far in- or out-of-the-money, the option is only useful if there are large price swings, which are unlikely).

What is the extra value that we should assign to an American call relative to a European call due to the extra optionality it gives us? In the case of an American option, at any point before expiry we can exercise and take the intrinsic value there and then. But up until expiry, the value of a European call option is ALWAYS* more than the intrinsic value, as the time value is non-negative. This means that we can sell the option on the market for more than the price that would be received by exercising an American option before expiry – so a rational investor should never do this, and the price of a European and American vanilla call should be identical.

It seems initially as though the same should be true for put options, but actually this turns out not quite to be right. Consider the graph below, showing the same values for a European vanilla put option, under the same parameters.

Notice that here, unlike before, when the put is far in-the-money the option value becomes smaller than the intrinsic value – the time value of the option is negative! In this case, if we held an American rather than a European option it might well make sense to exercise at this point, since we would receive the intrinsic value, which is greater than the option value on the market [actually it’s slightly more complicated, because in this scenario the American option price would be higher than the European value shown below, so it would need to be a bit more in the money before it was worth exercising – you can see how this sort of recursive problem rapidly becomes hard to deal with!].
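This effect is easy to reproduce numerically. The sketch below prices European puts at a few strikes (arbitrary choices of mine) and compares them to intrinsic value – at-the-money the time value is positive, while deep in-the-money the put drops below intrinsic.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(S, K, r, sigma, T):
    """Black-Scholes European put (no dividends)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

S, r, sigma, T = 100.0, 0.05, 0.2, 1.0

# Compare the European put value to its intrinsic value at several strikes
for K in (100.0, 150.0, 250.0):
    put = bs_put(S, K, r, sigma, T)
    intrinsic = max(K - S, 0.0)
    print(K, round(put, 2), round(intrinsic, 2), put < intrinsic)
```

Only the at-the-money put carries positive time value here; the deep in-the-money puts are worth less than immediate exercise would deliver, which is exactly the opening an American put exploits.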

What is it that causes this effect for in-the-money puts? It turns out that it comes down to interest rates. Roughly what is happening is this – if we exercise an in-the-money American put to receive the intrinsic value $K - S$, we receive cash straight away. But if we leave the option until expiry, our expected payoff is roughly $K - F$, where $F = S\,e^{r(T-t)}$ is the forward value of the spot

so we can see that leaving the underlying to evolve is likely to harm our option value [this is only true for options deep enough in the money that we can roughly neglect the optionality, i.e. the $\max(\,\cdot\,,0)$ in the payoff].

We can put this on a slightly more rigourous footing by thinking about the GREEK for time-dependence, Theta. For vanilla options, this is given by

$\Theta = -{Z(t,T)\,F\,\sigma \over 2\sqrt{T-t}}\,\phi(d_1) \mp r\,K\,Z(t,T)\,\Phi(\pm d_2)$

where $F$ is the forward price from $t$ to $T$, $\phi$ is the standard normal PDF, $\Phi$ is its CDF, $Z(t,T)$ is a zero-coupon bond from $t$ to $T$, $d_1$ and $d_2$ are the usual Black-Scholes quantities, and the upper of the paired signs refers to calls and the lower to puts.

The form for Theta shows exactly what I said in the last paragraph – for both calls and puts there is a negative component coming from the ‘optionality’, which is decreasing with time, and a term coming from the expected change in the spot at expiry due to interest rates which is negative for calls and positive for puts.

The plot below shows Theta for the two options shown in the graphs above, and sure enough where the time value of the European put goes negative, Theta becomes positive – the true option value is increasing with time instead of decreasing as usual, as the true value converges to the intrinsic value from below.
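The sign structure is easy to check numerically. The sketch below implements the standard Black-Scholes Theta in spot terms (equivalent to the forward/zero-coupon-bond form, since $S = Z F$ with no dividends) and compares it to a finite difference of the price in time-to-expiry; the deep in-the-money put shows positive Theta, the at-the-money call negative.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def d1_d2(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return d1, d1 - sigma * math.sqrt(T)

def bs_price(S, K, r, sigma, T, is_call):
    d1, d2 = d1_d2(S, K, r, sigma, T)
    if is_call:
        return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def bs_theta(S, K, r, sigma, T, is_call):
    """Optionality-decay term (same for calls and puts) plus a rates term,
    negative for calls and positive for puts."""
    d1, d2 = d1_d2(S, K, r, sigma, T)
    decay = -S * sigma * norm_pdf(d1) / (2.0 * math.sqrt(T))
    if is_call:
        return decay - r * K * math.exp(-r * T) * norm_cdf(d2)
    return decay + r * K * math.exp(-r * T) * norm_cdf(-d2)

S, r, sigma, T, dT = 100.0, 0.05, 0.2, 1.0, 1e-5
for K, is_call in ((100.0, True), (150.0, False)):
    analytic = bs_theta(S, K, r, sigma, T, is_call)
    # theta = dV/dt (calendar time) = -dV/dT (time-to-expiry)
    fd = -(bs_price(S, K, r, sigma, T + dT, is_call)
           - bs_price(S, K, r, sigma, T - dT, is_call)) / (2.0 * dT)
    print("call" if is_call else "put", round(analytic, 4), round(fd, 4))
```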

*Caveats – I’m assuming a few things here: there are no dividends, rates are positive (negative rates reverse the situation discussed above – so that American CALLS can be more valuable than Europeans), there are no transaction fees or storage costs, and the other sensibleness and simpleness criteria that we usually assume apply.

In between European and American options lie Bermudan options, a class of options that can be exercised early but only at one of a specific set of times. As I said, it is in general really tough to price more exotic options with early exercise features like these, I’ll look at some methods soon – but this introduction is enough for today!