Interview Questions VII: Integrated Brownian Motion

 
For a standard Wiener process denoted W_t, calculate

    \[{\mathbb E} \Bigl[ \Bigl( \int^T_0 W_t dt \Bigr)^2 \Bigr]\]

This integral is the limit of the sum of W_t over each infinitesimal time slice dt from 0 to T; it is called Integrated Brownian Motion.




We see immediately that this will be a random variable with expectation zero, so the expectation of the square is simply the variance of the random variable, which is what we need to calculate. Here I show two ways to solve this: first by taking the limit of a sum, and second by using stochastic integration by parts.


1) Limit of a sum

(1)   \begin{align*} I &= \int^T_0 W_t dt \nonumber \\ &= \lim_{n \to \infty}{\frac T n}\sum^n_{i=1} W_{\frac {Ti} n} \nonumber \end{align*}

The values of the Wiener process appearing in this sum are not independent of one another, since only the increments of the process are independent. However, we can re-write them in terms of sums of these independent increments

(2)   \begin{align*} W_{\frac {Ti} n} &= \sum^i_{j=1} ( W_{\frac {Tj} n} - W_{\frac {T(j-1)} n} ) \nonumber \\ &= \sum^i_{j=1} \overline{W}_{\frac {Tj} n} \nonumber \end{align*}

where \overline{W}_{\frac {Tj} n} = W_{\frac {Tj} n} - W_{\frac {T(j-1)} n} \sim {\mathbb N}(0,{\frac T n}) are the individual independent increments of the Brownian motion. Substituting into our previous equation and reversing the order of the summation

(3)   \begin{align*} \lim_{n \to \infty}{\frac T n}\sum^n_{i=1} W_{\frac {Ti} n} &= \lim_{n \to \infty}{\frac T n} \sum^n_{i=1} \sum^i_{j=1} \overline{W}_{\frac {Tj} n} \nonumber \\ &= \lim_{n \to \infty}{\frac T n} \sum^n_{j=1} \sum^n_{i=j} \overline{W}_{\frac {Tj} n} \nonumber \\ &= \lim_{n \to \infty}{\frac T n} \sum^n_{j=1} (n-j+1) \overline{W}_{\frac {Tj} n} \nonumber \\ \end{align*}

which is simply a weighted sum of independent gaussians. To calculate the total variance, we sum the individual variances using the summation formula for \sum_1^N n^2

(4)   \begin{align*} {\mathbb V}[I] &= \lim_{n \to \infty} {\frac {T^2} {n^2}} \sum^n_{j=1} (n-j+1)^2 {\frac T n} \nonumber \\ &= \lim_{n \to \infty} {\frac {T^3} {n^3}} \Bigl( {\frac 1 3} n^3 + O(n^2) \Bigr) \nonumber \\ &= {\frac 1 3} T^3 \end{align*}

which is the solution.
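As a quick numerical sanity check (not part of the original argument), here's a short Python sketch that builds the same Riemann sum as above from simulated Brownian paths and confirms that the sample variance is close to T^3/3; the parameter values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(42)
    T, n, n_paths = 2.0, 1_000, 100_000
    dt = T / n

    # Brownian paths as cumulative sums of independent N(0, dt) increments
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
    W = np.cumsum(increments, axis=1)

    # Riemann-sum approximation of the integral of W_t over [0, T]
    I = W.sum(axis=1) * dt

    print("sample mean:    ", I.mean())    # close to 0
    print("sample variance:", I.var())     # close to T^3 / 3
    print("T^3 / 3:        ", T**3 / 3)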

2) Stochastic integration by parts

The stochastic version of integration by parts for two (potentially stochastic) processes X_t, Y_t looks like this (strictly there is also a quadratic cross-variation term, but it vanishes here because the X_t we will choose below is deterministic and of finite variation):

    \[X_T \cdot Y_T = X_0 \cdot Y_0 + \int^T_0 X_t dY_t + \int^T_0 Y_t dX_t\]

Re-arranging this gives

    \[\int^T_0 Y_t dX_t = \Bigl[ X_t \cdot Y_t \Bigr]^T_0 - \int^T_0 X_t dY_t\]

Now setting Y_t = -W_t and X_t = (T-t) we have

(5)   \begin{align*} \int^T_0 W_t dt &= \Bigl[ -W_t \cdot (T-t) \Bigr]^T_0 + \int^T_0 (T-t) dW_t \nonumber \\ &= \int^T_0 (T-t) dW_t \nonumber \end{align*}

We recognise this as a weighted sum of independent gaussian increments, which is (as expected) a gaussian variable with expectation 0 and variance that we can calculate with the Ito isometry

(6)   \begin{align*} {\mathbb E}\Bigl[ \Bigl( \int^T_0 (T-t) dW_t \Bigr)^2 \Bigr] &= {\mathbb E}\Bigl[ \int^T_0 (T-t)^2 dt \Bigr] \nonumber \\ &= \Bigl[ T^2t - Tt^2 + {\frac 1 3} t^3 \Bigr]^T_0 \nonumber \\ &= {\frac 1 3} T^3 \end{align*}

which is the solution.
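The second representation can be checked in the same way (again just a throwaway sketch with arbitrary parameters): simulating the weighted sum of increments \int_0^T (T-t) dW_t directly gives the same variance.

    import numpy as np

    rng = np.random.default_rng(1)
    T, n, n_paths = 2.0, 1_000, 100_000
    dt = T / n
    t = np.arange(n) * dt                  # left endpoint of each interval

    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
    J = ((T - t) * dW).sum(axis=1)         # approximates the integral of (T-t) dW_t

    print("sample variance:", J.var())     # close to T^3 / 3
    print("T^3 / 3:        ", T**3 / 3)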

Fitting the initial discount curve in a stochastic rates model

 
I’ve introduced the Vasicek stochastic rates model in an earlier post, and here I’m going to introduce a development of it called the Hull-White (or sometimes Hull-White extended Vasicek) model.

The rates are modelled by a mean-reverting stochastic process

    \[dr = \bigl( \theta(t) - a r(t) \bigr) dt + \sigma dW_t\]

which is similar to the Vasicek model, except that the \theta(t) term is now allowed to vary with time (in general a and \sigma are too, but I’ll ignore those issues for today).

The freedom to set \theta as a deterministic function of time allows us to calibrate the model to match the initial discount curve, which means that at least initially our model will give the right price for things like FRAs. Calibrating models to match market prices is one of the main things quants do. Of course, the market doesn't really obey our model. This means that, in time, the market prices and the prices predicted by our model will drift apart, and the model will need to be re-calibrated. But the better a model captures the various risk factors in the market, the less often this procedure will be needed.

Using the trick

    \[d \bigl( e^{at} r(t)\bigr) = e^{at} \bigl(dr + ar(t)dt \bigr)\]

to re-express the equation and integrating gives

    \[r(t) = r_0 e^{-at} + e^{-at}\int^t_0 \theta(u) e^{au} du + \sigma e^{-at} \int^t_0 e^{au} dW_u\]

where r_0 is the rate at t=0. The observable quantities from the discount curve are the initial discount factors (or equivalently the initial forward rates) P(0,t), where

    \[P(0,t) = {\mathbb E}\bigl[ e^{-\int^t_0 r(u) du} \bigr]\]

The rate r(t) is normally distributed, so the integral \int^t_0 r(u) du must be too. This is because an integral is essentially a sum, and a sum of normal variables is also normally distributed. Applying the Ito isometry as discussed before, the expectation of this variable comes wholly from the deterministic terms and the variance comes entirely from the stochastic term, giving

    \begin{align*} {\mathbb E} \bigl[ \int^t_0 r(u) du \bigr] &= r_0 B(0,t) + \int^t_0 e^{-au} (\int^u_0 \theta(s) e^{as} ds ) du \nonumber \\ {\mathbb V}\bigl[ \int^t_0 r(u) du \bigr] &= \sigma^2 \int^t_0 B(u,t)^2 du \nonumber \end{align*}




where throughout

    \[B(t,t') = {1 \over a}\bigl( 1 - e^{-a(t'-t)}\bigr)\]

and since

    \[{\mathbb E} \bigl[ e^x | x \sim {\mathbb N}(\mu,\sigma^2) \bigr] = e^{\mu + {1\over 2}\sigma^2}\]

we have

    \[P(0,t) = \exp \Bigl\{ -r_0 B(0,t) - \int^t_0 e^{-au} \Bigl( \int_0^u \theta(s) e^{as} ds \Bigr) du + {1 \over 2} \sigma^2 \int^t_0 B(u,t)^2 du \Bigr\}\]

Two differentiations of this expression give

    \[-{\partial \over \partial t} \ln P(0,t) = r_0 e^{-at} + e^{-at} \int^t_0 \theta(u) e^{au} du - {1\over 2} \sigma^2 B(0,t)^2\]

    \[-{\partial^2 \over \partial t^2} \ln P(0,t) = -ar_0 e^{-at} + \theta(t) - ae^{-at} \int^t_0 \theta(u) e^{au} du - \sigma^2 e^{-at} B(0,t)\]

and combining these equations gives an expression for \theta(t) that exactly fits the initial discount curve for the given currency

    \[\theta(t) = -{\partial^2 \over \partial t^2} \ln P(0,t) - a{\partial \over \partial t} \ln P(0,t) + {\sigma^2\over 2a}( 1 - e^{-2at} )\]

and since -{\partial \over \partial t} \ln P(0,t) = f(0,t) is simply the initial market observed forward rate to each time horizon coming from the discount curve, this can be compactly expressed as

    \[\theta(t) = {\partial \over \partial t} f(0,t) + a f(0,t) + {\sigma^2 \over 2a} ( 1 - e^{-2at} )\]
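To make this concrete, here is a minimal sketch (not production code) of the fitting formula above in Python: it computes \theta(t) by finite-differencing a hypothetical initial forward curve f(0,t); the curve shape and the parameter values are made up purely for illustration.

    import numpy as np

    a, sigma = 0.1, 0.01                  # illustrative Hull-White parameters

    def f0(t):
        """Hypothetical initial instantaneous forward curve f(0,t)."""
        return 0.02 + 0.005 * np.sqrt(t)

    def theta(t, h=1e-4):
        """theta(t) = df(0,t)/dt + a f(0,t) + sigma^2/(2a) (1 - exp(-2at))."""
        dfdt = (f0(t + h) - f0(t - h)) / (2 * h)      # central finite difference
        return dfdt + a * f0(t) + sigma**2 / (2 * a) * (1 - np.exp(-2 * a * t))

    print(np.round(theta(np.linspace(0.5, 10.0, 5)), 6))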

Today we’ve seen how a simple extension to the ‘basic’ Vasicek model allows us to match the initial discount curve seen in the market. Allowing the volatility parameter to vary will allow us to match market prices of other products such as swaptions (an option to enter into a swap), which I’ll discuss another time. But we’re gradually building up a suite of simple models that we can combine later to model much more complicated environments.

Interview Questions VI: The Drunkard’s Walk

A drunk man is stumbling home at night after closing time. Every step he takes moves him either 1 metre closer or 1 metre further away from his destination, with an equal chance of going in either direction (!). His home is 70 metres down the road, but unfortunately, there is a cliff 30 metres behind him at the other end of the street. What are the chances that he makes it to his house BEFORE tumbling over the edge of the cliff?




This is a fun question that is quite heavy on conditional probability. We are trying to find the probability that the drunkard moves 70 metres forward BEFORE he has ever moved 30 metres backward, or vice versa. There are several ways of attempting this, including some pretty neat martingale maths, but I'm going to attempt it here in the language of matrices and Markov chains.

Essentially, there are 101 states that the drunkard can be in, from right beside the cliff all the way down the road to right beside his front door. Let's label these from 0 to 100, in terms of the number of metres away from the cliff, so that he starts in state 30. At each step, he transitions from his current state to either one state higher or one state lower with probability 50%, and the process continues until he reaches either the cliff or the door, at which point the process ceases in either a good or a very bad night's rest. We call these two states, 0 and 100, 'absorbers', because the process stops at this point and no transitions to new states can happen. A Markov diagram that illustrates this process can be drawn like this:

A Markov chain displaying the transition probabilities for each state in the drunkard's walk

We can characterise each step in the process by a transition matrix acting on a state vector. The drunkard starts in state 30, so his state vector is a long string of zeroes with a single 1 at the position corresponding to state 30:

S_0 = \begin{pmatrix} 0 \\ \vdots \\ 1\\ \vdots\\ 0 \end{pmatrix}

This vector is probabilistic – a 1 indicates with 100% certainty that the drunkard is in state 30. With subsequent moves, this probability will spread out and diffuse into the nearby states. The transition matrix multiplies the drunkard's state vector to give his state vector after another step:

P = \begin{pmatrix} 1 & 0.5 & 0 & \cdots & 0 & 0\\ 0 & 0 & 0.5 & \cdots & 0 & 0\\ 0 & 0.5 & 0 & \cdots & 0 & 0\\ 0 & 0 & 0.5 & & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 0.5 & 1 \end{pmatrix}; \quad S_{i+1} = P \cdot S_i

So, after one step the drunkard's state vector will have a 0.5 in the positions corresponding to states 29 and 31 and zeroes elsewhere, saying that he will be in either of these states with probability 0.5, and certainly nowhere else. Note that the 1's at the top and bottom of the transition matrix will absorb any probability that arrives at those states for the rest of the process.

To keep things simple, let’s consider a much smaller problem with only six states, where state 1 is ‘down the cliff’ and state 6 is ‘home’; and we’ll come back to the larger problem at the end. We want to calculate the limit of the drunkard’s state as the transition matrix is applied a large number of times, ie. to calculate

S_n = P^n \cdot S_0

An efficient way to calculate powers of a large matrix is to first diagonalise it. We have \inline P = U A\ U^{-1}, where \inline A is a diagonal matrix whose diagonal elements are the eigenvalues of \inline P, and \inline U is the matrix whose columns are the eigenvectors of \inline P. Note that, as \inline P is not symmetric, \inline U^{-1} will have row vectors that are the LEFT eigenvectors of \inline P, which in general will be different to the right eigenvectors. It is easy to see how to raise \inline P to the power n

P^n = (U A\ U^{-1})^n = U A^n\ U^{-1}

since all of the \inline Us and \inline U^{-1}s in the middle cancel. Since \inline A is diagonal, to raise it to the power of n we simply raise each diagonal element (which are just the eigenvalues of \inline P) to the power of n. To calculate the eigenvalues of P we solve the characteristic equation

|P - \lambda_a I| = 0

with

P = \begin{pmatrix} 1 & 0.5 & 0 & 0 & 0 & 0\\ 0 & 0 & 0.5 & 0 & 0 & 0\\ 0 & 0.5 & 0 & 0.5 & 0 & 0\\ 0 & 0 & 0.5 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 & 0 & 0\\ 0 & 0 & 0 & 0 & 0.5 & 1 \end{pmatrix}

This gives six eigenvalues, two of which are one (these will turn out to correspond to the two absorbing states), while the remainder satisfy \inline |\lambda_a| < 1. Consequently, when the diagonal matrix above is raised to the power n, all of these terms vanish as n becomes large, and only the entries corresponding to the two eigenvalues equal to 1 survive.
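You can verify this claim directly with a few lines of numpy (a quick sketch, not needed for the argument):

    import numpy as np

    P = np.array([
        [1, 0.5, 0,   0,   0,   0],
        [0, 0,   0.5, 0,   0,   0],
        [0, 0.5, 0,   0.5, 0,   0],
        [0, 0,   0.5, 0,   0.5, 0],
        [0, 0,   0,   0.5, 0,   0],
        [0, 0,   0,   0,   0.5, 1],
    ])

    eigenvalues = np.linalg.eigvals(P)
    print(np.round(np.sort(eigenvalues.real), 4))
    # roughly [-0.809, -0.309, 0.309, 0.809, 1.0, 1.0]: two 1's, the rest |lambda| < 1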

Calculating the eigenvectors is time consuming and we’d like to avoid it if possible. Luckily, in the limit that n gets large, we’ve seen that most of the eigenvalues raised to the power n will go to zero which will reduce significantly the amount we need to calculate. We have

P^n\cdot S = \bigl( U A^n \ U^{-1}\bigr)\cdot S

and in the limit that n gets large, \inline U \cdot A^n is just a matrix of zeroes with a one in the upper left and lower right entry. At first sight it looks as though calculating \inline U^{-1} will be required, which is itself an eigenvector problem, but in fact we only have to calculate a single eigenvector – the first (or equivalently the last), which will give the probability of evolving from any initial state to the cliff state (or the home state).

\inline U^{-1} is the matrix of left eigenvectors of \inline P, each of which satisfies

x_a \cdot P = \lambda_a x_a

and we are looking for the eigenvectors corresponding to eigenvalue of 1, so we need to solve the matrix equation

x \cdot \begin{pmatrix} 0 & 0.5 & 0 & 0 & 0 & 0\\ 0 & -1 & 0.5 & 0 & 0 & 0\\ 0 & 0.5 & -1 & 0.5 & 0 & 0\\ 0 & 0 & 0.5 & -1 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 & -1 & 0\\ 0 & 0 & 0 & 0 & 0.5 & 0 \end{pmatrix} = 0

We know that if he starts in the first state (ie. over the cliff) he must finish there (trivially, after 0 steps) with probability 100%, so that \inline x_1 = 1 and \inline x_6 = 0. The solution to this is

x_n = {6 - n \over 5}

which is plotted here for each initial state

The probability of the drunkard finishing in either final state, given his initial state. Note that the two probabilities sum to one in all cases, as eventually he must finish in one or the other state.

which says that the probability of ending up in the cliff state falls linearly with starting distance from the cliff. We can scale this result up to the original 101-state problem, and find that for someone starting in state 30 (30 metres from the cliff) there is a 70/100 chance of ending up down the cliff, and consequently only a 30% chance of getting home!
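The full problem is easy to check numerically as well. Here's a small sketch (assuming states labelled 0 to 100, with 0 and 100 absorbing, as above) that builds the transition matrix and applies it repeatedly to the initial state vector; after many steps essentially all of the probability has been absorbed.

    import numpy as np

    n_states = 101                       # states 0 (cliff) up to 100 (home)
    P = np.zeros((n_states, n_states))
    P[0, 0] = P[-1, -1] = 1.0            # absorbing states
    for k in range(1, n_states - 1):
        P[k - 1, k] = 0.5                # one metre towards the cliff
        P[k + 1, k] = 0.5                # one metre towards home

    S = np.zeros(n_states)
    S[30] = 1.0                          # start 30 metres from the cliff

    for _ in range(50_000):              # apply the transition matrix many times
        S = P @ S

    print("P(cliff):", round(S[0], 4))   # approximately 0.70
    print("P(home): ", round(S[-1], 4))  # approximately 0.30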

This problem is basically the same as the Gambler's Ruin problem, where a gambler in a casino stakes $1 on each toss of a coin and leaves after reaching a goal of $N or when he goes broke. There are some very neat ways of solving these problems via martingale methods that don't use the mechanics above, which I'll look at in a future post.

Forwards vs. Futures

I’ve covered Forwards and Futures in previous posts, and now that I’ve covered the basics of Stochastic Interest Rates as well, we can have a look at the difference between Forwards and Futures Contracts from a financial perspective.

As discussed before, the price of a Forward Contract is enforceable by arbitrage if the underlying is available and freely storable and there are Zero Coupon Bonds available to the Forward Contract delivery time. In this case, the forward price is

F(t,T) = S(t) \cdot {1 \over {\rm ZCB}(t,T)}

In this post I’m going to assume a general interest rate model, which in particular may well be stochastic. In such cases, the price of a ZCB at the present time is given by

{\rm ZCB}(t,T) = {\mathbb E}\Big[ \exp{\Big\{-\int_t^T r(t') dt'\Big\} } \Big]

Futures Contracts are a bit more complicated, and we need to extend our earlier description in the case that there are interest rates. The basic description was given before, but additionally in the presence of interest rates, any deposit that is in either party's account is delivered to the OTHER party at the end of each time period. So, taking the example from the previous post, on day 4 we had $4 on account with the exchange – if rates on that day were 10% p.a., over that day the $4 balance would accrue a small amount of interest (roughly a tenth of a cent), which would be paid to the other party.

Let's say we're at time s, and want to calculate the Futures price to time T. Our replication strategy is now as follows, following the classic proof due to Cox, Ingersoll and Ross but in continuous time. Futures Contracts are free to enter into and break out of due to the margin in each account, so entering X Futures Contracts at time t and closing them at time t+\tau will lead to a net receipt (or payment if negative) of \inline {\rm X}\cdot\big[ H(t+\tau,T) - H(t,T)\big]. From t+\tau to T, we invest (borrow) this amount at the short rate and thus receive (need to pay)

{\rm X}\cdot\big[ H(t+\tau,T) - H(t,T)\big]\cdot\prod_t^T \big( 1 + r(t)\tau \big)

and now moving to continuous time

{\rm X}\cdot\big[ H(t+dt,T) - H(t,T)\big]\cdot e^{\int_t^T r(t')\ dt'}

We follow this strategy in continuous time, constantly opening contracts and closing them in the following time period [I'm glossing over discrete vs. continuous time here – as long as the short rate corresponds to the discrete time step involved this shouldn't be a problem], and investing our profits and financing our losses both at the corresponding short rate. We choose a different X for each period [t,t+dt] so that \inline {\rm X}(t) = e^{\int_s^t r(t')dt'}. We also invest an amount H(s,T) at time s at the short rate, and continually roll this over so that it is worth \inline H(s,T)\cdot e^{\int_s^T r(t)dt} at time T.

Only the final step of this strategy costs money to enter, so the net price of the portfolio and trading strategy is H(s,T). The net payoff at expiry is

H(s,T)\cdot e^{\int_s^T r(t)dt} + \sum_s^T {\rm X}\cdot[H(t+dt,T)-H(t,T)]\cdot e^{\int_t^T r(t)dt}

= H(s,T)\cdot e^{\int_s^T r(t)dt} + \sum_s^T e^{\int_s^t r(t')dt'}\cdot[H(t+dt,T)-H(t,T)]\cdot e^{\int_t^T r(t')dt'}

= H(s,T)\cdot e^{\int_s^T r(t)dt} + e^{\int_s^T r(t)dt} \cdot \sum_s^T [H(t+dt,T)-H(t,T)]

= H(s,T)\cdot e^{\int_s^T r(t)dt} + e^{\int_s^T r(t)dt} \cdot [H(T,T)-H(s,T)] = H(T,T) \cdot e^{\int_s^T r(t)dt}

And H(T,T) is S(T), so the net payoff of a portfolio costing H(s,T) is

= S(T) \cdot e^{\int_s^T r(t)dt}

How does this differ from a portfolio costing the Forward price? Recall that in Risk-Neutral Valuation, the present value of an asset is equal to the expectation of its future value discounted by a numeraire. In the risk-neutral measure, this numeraire is a unit of cash B continually re-invested at the short rate, which grows to \inline B(s,T) = e^{\int_s^T r(t')dt' } by expiry. Our portfolio cost H(s,T) at time s and pays \inline S(T)\cdot B(s,T) at expiry, so discounting by the numeraire gives \inline H(s,T) = {\mathbb E}^{\rm RN}\big[ S(T) \ | {\cal F}_s \big] – the Futures Price is a martingale in the risk-neutral measure (sometimes called the 'cash measure' because of its numeraire). So the current value of a Futures Contract on some underlying should be

H(t,T) = {\mathbb E}^{\rm RN}\big[ S(T) | {\cal F}_t \big]

ie. the undiscounted expectation of the future spot in the risk-neutral measure. The Forward Price is instead the expected price in the T-forward measure whose numeraire is a ZCB expiring at time T

F(t,T) = {\mathbb E}^{\rm T}\big[ S(T) | {\cal F}_t \big]

We can express these in terms of each other remembering F(T,T) = S(T) = H(T,T) and using a change of numeraire (post on this soon!), with the money-market numeraire normalised so that B(t) = 1. I also use the expression for two correlated lognormal variables, which I derived at the bottom of this post

\begin{align*} F(t,T) &= {\mathbb E}^{T}\big[ F(T,T) | {\cal F}_t \big] \\ &= {\mathbb E}^{T}\big[ S(T) | {\cal F}_t \big] \\ &= {\mathbb E}^{T}\big[ H(T,T) | {\cal F}_t \big] \\ &= {\mathbb E}^{RN}\big[ H(T,T) {B(t)\over B(T)} {{\rm ZCB}(T,T)\over {\rm ZCB}(t,T)}| {\cal F}_t \big] \\ &= {1 \over {\rm ZCB}(t,T)}{\mathbb E}^{RN}\big[ H(T,T) {1\over B(T)}| {\cal F}_t \big] \\ &= {1 \over {\rm ZCB}(t,T)}{\mathbb E}^{RN}\big[ H(T,T)\big] {\mathbb E}^{RN}\big[ e^{-\int_t^T r(t')dt'} \big] e^{\sigma_H \sigma_B \rho} \\ &= H(t,T) \cdot e^{\sigma_H \sigma_B \rho} \\ \end{align*}

where \inline \sigma_H is the volatility of the Futures price, and \inline \sigma_B is the volatility of a ZCB – in general the algebra will be rather messy!

As a concrete example, let’s consider the following model for asset prices, with S driven by a geometric brownian motion and rates driven by the Vasicek model discussed before

{dS \over S} = r(t) dt + \sigma_S dW_t

dr = a \big[ \theta - r(t)\big] dt + \sigma_r \widetilde{dW_t}

And (critically) assuming that the two Brownian processes are correlated with correlation \rho

dW_t \cdot \widetilde{dW_t} = \rho dt

In this case, the volatility \inline \sigma_B is the volatility of \inline e^{-\int_t^T r(t')dt'}, and as I discussed in the post on stochastic rates, this quantity is tractable and lognormally distributed in this model.

We can see that in the case of correlated stochastic rates, these two prices are not the same – which means that Futures and Forward Contracts are fundamentally different financial products.

 

For two standard normal variates x and y with correlation \rho, we have:

\begin{align*} {\mathbb E}\big[ e^ {\sigma_1 x} \big]& = e^ {{1\over 2}\sigma_1^2 } \end{align*}

and

\begin{align*} {\mathbb E}\big[ e^ {\sigma_1 x + \sigma_2 y} \big]& = {\mathbb E}\big[ e^ {\sigma_1 x + \sigma_2 \rho x + \sigma_2 \sqrt{1-\rho^2}z} \big]\\ & = {\mathbb E}\big[ e^ {(\sigma_1 + \sigma_2 \rho) x + \sigma_2 \sqrt{1-\rho^2}z} \big]\\ & = \big[ e^ {{1\over 2}(\sigma_1 + \sigma_2 \rho)^2 + {1\over 2}(\sigma_2 \sqrt{1-\rho^2})^2} \big]\\ & = \big[ e^ {{1\over 2}\sigma_1^2 + {1\over 2}\sigma_2^2 + \sigma_1 \sigma_2 \rho} \big]\\ & = {\mathbb E}\big[ e^ {\sigma_1 x }\big] {\mathbb E} \big[ e^{\sigma_2 y}\big] e^{ \sigma_1 \sigma_2 \rho} \end{align*}
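This identity is easy to check numerically (a throwaway sketch with arbitrary values of \sigma_1, \sigma_2 and \rho):

    import numpy as np

    rng = np.random.default_rng(0)
    sigma1, sigma2, rho = 0.3, 0.5, -0.4
    n = 2_000_000

    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)   # correlated normals

    mc = np.mean(np.exp(sigma1 * x + sigma2 * y))
    exact = np.exp(0.5 * sigma1**2 + 0.5 * sigma2**2 + sigma1 * sigma2 * rho)

    print("Monte Carlo:", round(mc, 4))
    print("Exact:      ", round(exact, 4))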

Stochastic Rates Models

In a previous post, I introduced the Black-Scholes model, in which the price of an underlying stock is modeled with a stochastic variable that changes unpredictably with time. I’ve also discussed the basic model-independent rates products whose value can be determined at the present time exactly. However, to progress further with interest rate derivatives, we’re going to need to model interest rates more carefully. We’ve assumed rates are deterministic so far, but of course this isn’t true – just like stocks, they change with time in an uncertain manner, so we need to allow them to become stochastic as well.

One way of doing this is by analogy with the BS case, by allowing the short rate (which is the instantaneous risk-neutral interest rate r(t)) to become stochastic as well, and then integrating it over the required period of time to calculate forward rates.

A very basic example of this is the Vasicek Model. In this model the short rate is defined to be stochastic, with behaviour governed by the following SDE

    \[dr = k\ [\ \theta - r(t)\ ]\ dt\ +\ \sigma\ dW\]

where k, \theta and \sigma are constants and dW is the standard Wiener increment as described before. This is marginally more complicated than the BS model, but still belongs to the small family of SDEs that are analytically tractable. Unlike stock prices, we expect rates to be mean-reverting – stock price variance grows with time, but we expect the distribution of rates to be confined to a fairly narrow range by comparison. The term in square brackets above achieves this, since if r(t) is greater than \theta then it will be negative and cause the rate to be pulled down, while if it is below \theta the term will be positive and push the rate up towards \theta.

Solving the equation requires a trick, which is instead of thinking about the rate alone to think about the quantity d\big(\ e^{kt}\cdot r(t)\ \big). This is equal to e^{kt}\cdot dr(t)\ + \ ke^{kt} dt \cdot r(t), and substituting in the incremental rate term from the original equation we have

    \[d\big(\ e^{kt}\cdot r(t)\ \big) = e^{kt} \Big(k\ [ \theta - r(t) ] dt + \sigma dW \Big) + \ k e^{kt} dt \cdot r(t)\]

    \[d\big(\ e^{kt}\cdot r(t)\ \big) = e^{kt} \Big( k\cdot \theta\cdot dt + \sigma\cdot dW \Big)\]

note that the term in r(t) has been cancelled out, and the remaining terms can be integrated directly from a starting time s to a finishing time T

    \[\Big[\ e^{kt}\cdot r(t)\ \Big]_{t=s}^{t=T} = \int_{t=s}^{t=T} k\ \theta\ e^{kt}\cdot dt + \int_{t=s}^{t=T} e^{kt} \sigma\cdot dW\]

    \[e^{kT}\cdot r(T)\ - e^{ks}\cdot r(s)\ = \Big[\ \theta\ e^{kt}\ \Big]_{t=s}^{t=T} + \sigma \int_{t=s}^{t=T} e^{kt}\cdot dW\]

    \[r(T)\ = e^{-k(T-s)}\cdot r(s)\ + \theta\Big(\ 1 - e^{-k(T-s)}\ \Big) + \sigma \int_{s}^{T} e^{-k(T-t)}\cdot dW(t)\]

where r(s) is the initial rate. This is simply a gaussian distribution with mean and variance given by

    \[\mathbb{E} \big[\ r(T)\ |\ {\cal F}_{\rm s}\ \big] = e^{-k(T-s)}\cdot r(s)\ +\ \theta\Big(\ 1 - e^{-k(T-s)}\ \Big)\]

    \[\mathbb{V} \big[\ r(T)\ |\ {\cal F}_{\rm s}\ \big] = {\sigma^2 \over 2k} \Big(\ 1 - e^{-2k(T-s)}\ \Big)\]

where the variance is calculated using the Ito isometry:

    \[\mathbb{V} \big[\ r(T)\ |\ {\cal F}_{\rm s}\ \big] = \mathbb{E} \big[\ \big(r(T)\big)^2\ |\ {\cal F}_{\rm s}\ \big] - \big(\ \mathbb{E} \big[\ r(T)\ |\ {\cal F}_{\rm s}\ \big]\ \big)^2\]

    \[= {\mathbb E}\Big[ \Big( \sigma \int_s^T e^{-k(T-t)} dW(t) \Big)^2 \Big]\]

    \[=\sigma^2 \int_s^T \Big( e^{-k(T-t)}\Big)^2 dt\]

    \[= {\sigma^2 \over 2k} \Big(\ 1 - e^{-2k(T-s)}\ \Big)\]

as stated above. Note that this allows the possibility of rates going negative, which is generally considered to be a weakness of the model, but the chance is usually rather small.
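As a quick check of these moments (a minimal sketch with made-up parameters), we can simulate the SDE directly with a crude Euler scheme and compare the sample mean and variance of r(T) against the formulas above.

    import numpy as np

    rng = np.random.default_rng(1)
    k, theta, sigma = 0.5, 0.03, 0.01        # illustrative Vasicek parameters
    r0, T = 0.02, 5.0
    n_steps, n_paths = 1_000, 100_000
    dt = T / n_steps

    # Euler discretisation of dr = k(theta - r) dt + sigma dW
    r = np.full(n_paths, r0)
    for _ in range(n_steps):
        r += k * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

    mean = np.exp(-k * T) * r0 + theta * (1 - np.exp(-k * T))
    var = sigma**2 / (2 * k) * (1 - np.exp(-2 * k * T))
    print("analytic mean, variance:", mean, var)
    print("sample   mean, variance:", r.mean(), r.var())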




As we know the distribution of the short rate, we can calculate some other relevant quantities. Of primary importance are the Zero Coupon Bonds, which are required for calculation of forward interest rates. A ZCB is a derivative that pays $1 at a future time t, and we can price this using the Risk-Neutral Valuation technique.

According to the fundamental theorem of asset pricing, the current price of a derivative divided by our choice of numeraire must be equal to its future expected price at any time divided by the value of the numeraire at that time, with the expectation taken in the probability measure corresponding to the choice of numeraire.

In the risk-neutral measure, the numeraire is just a unit of currency, initially worth $1 but continually re-invested at the instantaneous short rate, so that its price at time t is $1 \cdot e^{\int_0^t r(t') dt'}. Now, the price of the ZCB is given by

    \[{Z(0,t) \over \$1} = {\mathbb E}^{RN}\Big[\ {Z(s,t)\over \$1 \cdot e^{\int_{0}^s r(t') dt' }}\ |\ {\cal F}_0\ \Big]\]

    \[Z(0,t) = {\mathbb E}\Big[\ {Z(s,t)\cdot e^{-\int_{0}^s r(t') dt' }}\ |\ {\cal F}_0\ \Big]\]

Although the RHS is true at any time, we only know the value of the ZCB exactly at a single time at the moment – the expiry date t, at which it is worth exactly $1. Plugging in these values we have

    \[Z(0,t) = \$1 \cdot {\mathbb E}\Big[\ e^{-\int_{0}^t r(t') dt' }\ |\ {\cal F}_0\ \Big]\]

So the ZCB is given by the expectation of the exponential of (minus) the integral of the rate over a period of time. Since the rate is itself gaussian, and an integral is the limit of a sum, it's not surprising that this quantity is also gaussian (it's effectively the sum of many correlated gaussians, which is again gaussian, as discussed before), but it's rather tricky to calculate – I've included the derivation at the bottom of the post to save space here. The mean and variance are given by

    \[\mathbb{E} \big[\int_s^T r(t)\ dt \ |\ {\cal F}_{\rm s}\ \big] = \big(r(s) - \theta \big) \cdot B(s,T)\ +\ \theta\cdot\big( T- s \big)\]

    \[\mathbb{V} \big[\int^T_s r(t)\ dt \ |\ {\cal F}_{\rm s}\ \big] = {\sigma^2 \over k^2} \Big( (T-s) -B(s,T) - {k\over 2} B(s,T)^2 \Big)\]

where

    \[B(s,T) = {1\over k}\big( 1 - e^{-k(T-s)}\big)\]

The ZCB is given by the expectation of the exponential of a gaussian variable – and we’ve seen on several occasions that

    \[\mathbb{E} \big[\ \exp{(x)}\ |\ x\sim {\mathbb N}(\mu, \sigma^2)\ \big] = \exp{ \Big(\mu + {1\over 2}\sigma^2 \Big) }\]

So the ZCB prices are

    \[Z(s,T) = A(s,T)\ e^{-B(s,T)\ r(s)}\]

with B(s,T) as defined above and

    \[A(s,T) = \exp{ \Big[ \Big( \theta - {\sigma^2 \over 2 k^2}\Big)\Big( B(s,T) - T + s \Big) - {\sigma^2 \over 4 k} B(s,T)^2 \Big]}\]

and as these expressions only depend on the rate at the initial time, we can calculate ZCB prices and hence forward rates for any future expiry dates at any given time if we have the current instantaneous rate.
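Putting the last few expressions together, here's a small sketch of a Vasicek ZCB pricer (the parameter values are purely illustrative):

    import numpy as np

    def vasicek_zcb(r_s, s, T, k, theta, sigma):
        """Z(s,T) = A(s,T) * exp(-B(s,T) * r(s)) in the Vasicek model."""
        B = (1 - np.exp(-k * (T - s))) / k
        A = np.exp((theta - sigma**2 / (2 * k**2)) * (B - (T - s))
                   - sigma**2 * B**2 / (4 * k))
        return A * np.exp(-B * r_s)

    r0, k, theta, sigma = 0.02, 0.5, 0.03, 0.01
    for T in (1.0, 2.0, 5.0, 10.0):
        Z = vasicek_zcb(r0, 0.0, T, k, theta, sigma)
        print(f"T={T:5.1f}   Z(0,T)={Z:.6f}   zero rate={-np.log(Z) / T:.4%}")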

Although we can calculate a discount curve for a given set of parameters, the Vasicek model can’t calibrate to an initial ZCB curve taken from the market, which is a serious disadvantage. There are more advanced generalisations which can, and I’ll discuss some soon, but they will use all of the same tricks and algebra that I’ve covered here.

I’ve written enough for one day here – in later posts I’ll discuss changing to the t-forward measure, in which the ZCB Z(0,t) forms the numeraire instead of a unit of currency, which simplifies many calculations, and I’ll use it to price caplets under stochastic rates, and show that these are equivalent to european options on ZCBs.

An alternative to the short-rate approach discussed today, and one which is very popular these days, is the Libor Market Model (LMM) approach, in which, instead of simulating the short rate and calculating the required forwards from it, the different forwards required are computed directly and in tandem – I'll look further at this approach in another post.

 

Here is the calculation of the distribution of the integral of the instantaneous rate over the period s to T:

    \[r(t)\ = e^{-k(t-s)}\cdot r(s)\ + \theta\Big(\ 1 - e^{-k(t-s)}\ \Big) + \sigma \int_{s}^{t} e^{-k(t-u)}\cdot dW(u)\]

    \[\int_s^T r(t)\ dt = \int_s^T \Big[\ e^{-k(t-s)} r(s)\ + \theta\Big(\ 1 - e^{-k(t-s)}\ \Big) + \sigma \int_{s}^{t} e^{-k(t-u)} dW(u)\Big] dt\]

and splitting this into the terms that contribute to the expectation and the variance we have

    \[\mathbb{E} \big[\int_s^T r(t)\ dt \ |\ {\cal F}_{\rm s}\ \big] = \int_s^T \Big[\ e^{-k(t-s)} r(s)\ + \theta\Big(\ 1 - e^{-k(t-s)}\ \Big)\Big] dt\]

    \[= r(s) \int_s^T e^{-k(t-s)} \ dt + \theta \int_s^T \Big(\ 1 - e^{-k(t-s)}\ \Big) dt\]

    \[= r(s) \Big[ {1 \over -k}\ e^{-k(t-s)} \Big]^T_s + \theta \Big[ t - {1 \over -k}\ e^{-k(t-s)}\ \Big]^T_s\]

    \[= \big(r(s) - \theta \big)B(s,T)\ +\ \theta\cdot\big( T- s \big)\]

to calculate the variance, we first need to deal with the following term

    \[\int_s^T \Big[\ \sigma \int_{s}^{t} e^{-k(t-u)} dW(u)\Big] dt\]

we use stochastic integration by parts

    \[\sigma \int_s^T \int_{s}^{t} e^{-k(t-u)} dW(u)\ dt = \sigma \int_s^T e^{-kt}\big(\int_{s}^{t} e^{ku} dW(u)\big)\ dt\]

    \[= \sigma\int_s^T \big(\int_{s}^{t} e^{ku} dW(u)\big)\ d_t \Big( \int^t_s e^{-kv}dv \Big)\]

    \[= \sigma\Big[ \int_{s}^{t} e^{ku} dW(u)\cdot \int^t_s e^{-kv}dv\ \Big]^T_s - \sigma\int_s^T dW(t)\Big( e^{kt} \int^t_s e^{-kv}dv\ \Big)\]

    \[= \sigma\int_{s}^{T} e^{ku} dW(u)\cdot \int^T_s e^{-kv}dv - \sigma\int_s^T dW(t) e^{kt} \Big( \int^T_s e^{-kv}dv - \int^T_t e^{-kv}dv \Big)\]

    \[= \sigma\int_s^T e^{kt} \Big(\int^T_t e^{-kv}dv\Big) dW(t)\]

    \[= {\sigma \over k}\int_s^T \big(1 - e^{-k(T-t)}\big)\ dW(t)\]

and we’re now in a position to try and find the variance of the integral

    \[\mathbb{V} \big[\int^T_s r(t)\ dt \ |\ {\cal F}_{\rm s}\ \big] = {\mathbb E}\Big[\ \Big({\sigma \over k}\int_s^T \big(1 - e^{-k(T-t)}\big)\ dW(t)\Big)^2\ \Big]\]

    \[= {\mathbb E}\Big[\ {\sigma^2 \over k^2}\int_s^T \big(1 - e^{-k(T-t)}\big)^2\ dt\ \Big]\]

where Ito's isometry has been used again, and several more lines of routine algebra lead to the result

    \[= {\sigma^2 \over k^2} \Big( (T-s) -B(s,T) - {k\over 2} B(s,T)^2 \Big)\]

Stochastic Differential Equations Pt2: The Lognormal Distribution

This post follows from the earlier post on Stochastic Differential Equations.

I finished last time by saying that the solution to the BS SDE for terminal spot at time T was

\inline S_T = S_0 e^{(\mu - {1 \over 2}\sigma^2)T + \sigma W_T}

When we solve an ODE, it gives us an expression for the position of a particle at time T. But we've already said that we are uncertain about the price of an asset in the future, and this expression captures that uncertainty through the \inline \sigma W_T term in the exponent. We said in the last post that the difference between this quantity at two different times s and t was normally distributed, and since this term is the difference between its values at t=0 and t=T (we have implicitly dropped a W_0 term, which is fine because we assumed that the process started at zero), it is also normally distributed,

W_T \sim {\mathbb N}(0,T)

It’s a well-known property of the normal distribution (see the Wikipedia entry for this and many others) that if X \sim {\mathbb N}(0,1) then aX \sim {\mathbb N}(0,a^2) for constant a. We can use this in reverse to reduce W_T to a standard normal variable x, by taking a square root of time outside of the distribution so W_T \sim \sqrt{T}\cdot{\mathbb N}(0,1) and we now only need standard normal variables, which we know lots about. We can repeat our first expression in these terms

\inline S_T = S_0 e^{(\mu - {1 \over 2}\sigma^2)T + \sigma \sqrt{T} X}

What does all of this mean? In an ODE environment, we'd be able to specify the exact position of a particle at time T. Once we try to build in uncertainty via SDEs, we are implicitly sacrificing this ability, so instead we can only talk about expected positions, variances, and other probabilistic quantities. However, we certainly can do this: the properties of the normal distribution are very well understood from a probabilistic standpoint, so we expect to be able to make headway! Just as X is a random variable distributed across a normal distribution, S(t) is now a random variable whose distribution is a function of the random variable X and the other deterministic terms in the expression. We call this distribution the lognormal distribution, since the log of S is distributed normally.

The random nature of S is determined entirely by the random nature of X. If we take a draw from X, that will entirely determine the corresponding value of S, since the remaining terms are deterministic. The first things we might want to do are calculate the expectation of S, its variance, and plot its distribution. To calculate the expectation, we integrate over all possible realisations of X weighted by their probability, complete the square and use the gaussian integral formula with a change of variables

{\mathbb E}[S_t] = \int^{\infty}_{-\infty} S_t(x) p(x) dx

={S_0 \over \sqrt{2\pi}}\int^{\infty}_{-\infty} e^{(\mu-{1\over 2}\sigma^2)t + \sigma \sqrt{t}x} e^{-{1\over 2}x^2} dx

={ {S_0 e^{(\mu-{1\over 2}\sigma^2)t}} \over \sqrt{2\pi}}\int^{\infty}_{-\infty} e^{-{1\over 2}x^2 + \sigma \sqrt{t} x} dx

={{S_0 e^{(\mu-{1\over 2}\sigma^2)t}} \over \sqrt{2\pi}}\int^{\infty}_{-\infty} e^{-{1\over 2} (x - \sigma \sqrt{t})^2} e^{{1\over 2}\sigma^2 t} dx

={{S_0 e^{\mu t}} \over \sqrt{2\pi}}\int^{\infty}_{-\infty} e^{-{1\over 2} y^2} dy

=S_0 e^{\mu t}

which is just exponential growth at the drift rate acting over time [exercise: calculate the variance in a similar way]. We know what the probability distribution of X looks like (it's a standard normal variable), but what does the probability distribution of S look like? We can calculate the pdf using the change-of-variables technique, which says that if S = g(x), then the area under each curve in corresponding regions must be equal:

\int_{x_1}^{x_2} p_x(x) dx = \int_{g(x_1)}^{g(x_2)} p_S(S) dS

p_x(x) dx = p_S(S) dS

p_S(S_t) = p_x(x) {dx \over dS_t}

We know the function S(x), but the easiest way to calculate this derivative is first to invert the function to make it amenable to differentiation

x = {\ln{S_t \over S_0} - (\mu - {1 \over 2}\sigma^2)t \over \sigma \sqrt{t}}

{dx \over dS_t} = {1 \over \sigma \sqrt{t} S_t}

So the pdf of S expressed in terms of S is

 p_S(S_t) = {1 \over S_t \sigma \sqrt{2\pi t}} \exp{-\Bigl(\ln{S_t\over S_0} - (\mu-{1\over 2}\sigma^2)t \Bigr)^2\over 2 \sigma^2 t}

Well it’s a nasty looking function indeed! I’ve plotted it below for a few typical parameter sets and evolution times.

A lognormal PDF with typical parameter values. Of course, it is highly unlikely that parameter values will stay constant for 4 years – we’ll discuss time dependence of these another day
A more-volatile PDF. Note that even though the mode is falling over time, the mean (which is independent of vol) is still going up due to the fat tails at positive values.
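For anyone who wants to reproduce plots like these, here is a minimal sketch (illustrative parameters) that evaluates the pdf derived above and checks it against scipy's built-in lognormal distribution with the matching parameterisation:

    import numpy as np
    from scipy.stats import lognorm

    S0, mu, sigma, t = 100.0, 0.05, 0.2, 2.0
    S = np.linspace(40.0, 220.0, 500)

    # pdf derived above
    pdf = (1 / (S * sigma * np.sqrt(2 * np.pi * t))
           * np.exp(-(np.log(S / S0) - (mu - 0.5 * sigma**2) * t) ** 2
                    / (2 * sigma**2 * t)))

    # scipy's lognormal: shape = sigma*sqrt(t), scale = S0*exp((mu - sigma^2/2)*t)
    pdf_scipy = lognorm.pdf(S, s=sigma * np.sqrt(t),
                            scale=S0 * np.exp((mu - 0.5 * sigma**2) * t))

    print(np.allclose(pdf, pdf_scipy))    # True; plot S against pdf to see the curve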

This distribution is really central to a lot of what we do, so I’ll come back to it soon and discuss a few more of its properties. The one other thing to mention is that if we want to calculate an expected value over S (which will turn out to be something we do a lot), we have two approaches – either integrate over \inline p_S(S_t)

{\mathbb E}[f(S_t)] = \int_0^{\infty} f(S_t)p_S(S_t)dS

or, instead, express the function in terms of x (using \inline S_t = S_0 e^{(\mu - {1 \over 2}\sigma^2)t + \sigma \sqrt{t} x}) and integrate over the normal distribution

{\mathbb E}[f(S_t(x))] = \int_{-\infty}^{\infty} f(S_t(x))p_x(x)dx

This is typically the easier option. I think it is called the Law of the Unconscious Statistician. On that note, we’ve certainly covered enough ground for the moment!

-QuantoDrifter

Stochastic Differential Equations Pt1: The BS Equation

SDEs are of fundamental importance in quantitative finance. Because we don’t know what is going to happen in the future, we need some kind of random process built into our equations to model this uncertainty.

By far the most common process used is Brownian motion, and in 1 dimension this is called the Wiener process, which we will label \inline W_t at time t. This is a diffusive process (basically, there are no 'jumps', and in small time intervals the distance that the particle has traveled is probably very small) with no memory, so that future motion is independent of past motion. If we choose any two times s and t with t > s, then the difference between the value of the process at these times is distributed normally, \inline W_t - W_s \sim {\mathbb N}(0,t-s). This means that our uncertainty about a particle (or asset) following a Wiener process at some future time T can be characterised by a normal distribution with variance T (ie. standard deviation T^0.5) and expectation 0.

The Black-Scholes SDE for a stock price (for reasons that will become clear later, this is usually called geometric brownian motion, by the way) is

{dS \over S} = \mu dt + \sigma dW_t

What does this mean? Well, the first two terms are simple enough for anyone who has studied ordinary differential equations. S is an asset price, t is time and \inline \mu and \inline \sigma are both constants. Without the stochastic term, we would be able to solve the equation by a simple integration on both sides

\int_{S_0}^{S_T} {dS \over S} = \int_0^T \mu dt

S_T = S_0 e^{\mu T}

which is simply exponential growth of a stock with time. Later, we will introduce risk-free bonds that grow in this way, like compound interest in a bank account, but here we are interested in including an element of uncertainty as well. How do we deal with that?

We might be tempted to try the same thing and integrate the whole equation, using the result that \inline d \ln S = S^{-1} \cdot dS = \mu dt + \sigma d W_t as we implicitly did above. Unfortunately, for non-deterministic functions this doesn't work any more. The reasons are rather deep, but for the moment the fix is that we need to use a central result of stochastic calculus, Ito's Lemma, to resolve the conundrum. This is the stochastic version of the chain rule for a function of many variables, which says that if

dS = \mu (S,t) dt + \sigma (S,t) dW_t

then

d( f(S,t)) = \Bigr( {\partial f \over \partial t} + \mu(S,t) {\partial f \over \partial S} + {\sigma(S,t)^2 \over 2}{\partial^2 f \over \partial S^2} \Bigl) dt + \sigma(S,t) {\partial f \over \partial S} dW_t

That seems like quite a mouthful, but for the case that we’re considering, ln(S), it’s actually pretty straight-forward. Since ln(S) doesn’t depend explicitly on time, the time-derivative disappears. It’s quite easy to calculate the first and second derivatives of ln(S) with S, and remembering that we needed to multiply all of the terms in the BS equation above by S to turn the LHS into plain old dS (these were absorbed into our \inline \mu(S,t) and \inline \sigma(S,t)), we get:

d \ln S = ( \mu - {1 \over 2} \sigma^2 ) dt + \sigma dW_t

with the \inline \mu and \inline \sigma just plain old constants again. We have picked up an extra term here, and it has come from the second derivative term in Ito’s Lemma. We are now in a position to integrate both sides of the equation (\inline \int^T \sigma dW_t = \sigma W_T, by the way), and similar to the result in the deterministic case we get

S_T = S_0 e^{(\mu - {1 \over 2}\sigma^2)T + \sigma W_T}

This is the solution of the BS SDE, but what does it mean? There’s quite a bit to go along with above, so I’ll talk a bit more about what this means in the second part of this blog post.
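As a quick numerical aside, here's a rough sketch (arbitrary parameters) checking that the closed form above really does solve the SDE: simulate the SDE with a fine Euler scheme along one Brownian path and compare with the closed form driven by the same path.

    import numpy as np

    rng = np.random.default_rng(3)
    S0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0
    n_steps = 100_000
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal(n_steps)

    # Euler scheme for dS/S = mu dt + sigma dW along one path
    S = S0
    for dw in dW:
        S += S * (mu * dt + sigma * dw)

    # Closed-form solution driven by the same path (W_T is the sum of increments)
    S_exact = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())

    print("Euler:      ", S)
    print("Closed form:", S_exact)       # the two agree closely for small dt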

-QuantoDrifter