## Interview Questions VII: Integrated Brownian Motion

For a standard Wiener process denoted $W_t$, calculate

$\mathbb{E}\Big[\Big(\int_0^T W_t\,dt\Big)^2\Big]$

This integral is the limit of the sum of $W_t$ at each infinitesimal time slice $dt$ from $0$ to $T$; it is called Integrated Brownian Motion.

We see immediately that this will be a random variable with expectation zero, so the expectation of its square is simply the variance of the random variable, which is what we need to calculate. Here I show two ways to solve this: first by taking the limit of a sum, and second by using stochastic integration by parts.


1) Limit of a sum

$\int_0^T W_t\,dt = \lim_{n\to\infty}\sum_{i=1}^n W_{t_i}\,\delta t \quad\quad (1)$

where $\delta t = T/n$ and $t_i = i\,\delta t$. The values $W_{t_i}$ along the Wiener process are not independent of one another, since only the increments are independent. However, we can re-write them in terms of sums of these independent increments

$W_{t_i} = \sum_{j=1}^i \delta W_j \quad\quad (2)$

where $\delta W_j = W_{t_j} - W_{t_{j-1}} \sim {\cal N}(0,\delta t)$ are the individual independent increments of the Brownian motion. Substituting into our previous equation and reversing the order of the summation

$\sum_{i=1}^n W_{t_i}\,\delta t = \sum_{i=1}^n\sum_{j=1}^i \delta W_j\,\delta t = \sum_{j=1}^n (n+1-j)\,\delta W_j\,\delta t \quad\quad (3)$

which is simply a weighted sum of independent Gaussians. To calculate the total variance, we sum the individual variances using the summation formula for $\sum_{k=1}^n k^2$

${\rm Var}\Big[\int_0^T W_t\,dt\Big] = \lim_{n\to\infty}\sum_{j=1}^n (n+1-j)^2\,\delta t^2\cdot\delta t = \lim_{n\to\infty}{n(n+1)(2n+1)\over 6}\Big({T\over n}\Big)^3 = {T^3\over 3} \quad\quad (4)$

which is the solution.

2) Stochastic integration by parts

The stochastic version of integration by parts for potentially stochastic variables $X_t$, $Y_t$ looks like this:

$X_T\,Y_T - X_0\,Y_0 = \int_0^T X_t\,dY_t + \int_0^T Y_t\,dX_t + \int_0^T dX_t\,dY_t$

Re-arranging this gives

$\int_0^T Y_t\,dX_t = X_T\,Y_T - X_0\,Y_0 - \int_0^T X_t\,dY_t - \int_0^T dX_t\,dY_t$

Now setting $X_t = t$ and $Y_t = W_t$, so that $dX_t = dt$ and the cross-variation term $dt\cdot dW_t$ vanishes, we have

$\int_0^T W_t\,dt = T\,W_T - \int_0^T t\,dW_t = \int_0^T (T-t)\,dW_t \quad\quad (5)$

We recognise this as a weighted sum of independent Gaussian increments, which is (as expected) a Gaussian variable with expectation 0 and a variance that we can calculate with the Ito isometry

$\mathbb{E}\bigg[\Big(\int_0^T (T-t)\,dW_t\Big)^2\bigg] = \int_0^T (T-t)^2\,dt = {T^3\over 3} \quad\quad (6)$

which is the solution.
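As a quick numerical check (my own sketch, not part of the original post), we can simulate discretised Brownian paths and confirm that the variance of the time integral approaches $T^3/3$:

```python
import numpy as np

# Estimate Var[ integral_0^T W_t dt ] from simulated Brownian paths
rng = np.random.default_rng(42)
T, n_steps, n_paths = 2.0, 300, 30_000
dt = T / n_steps

# Brownian increments, shape (n_paths, n_steps)
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)           # W at each time slice along each path
integral = W.sum(axis=1) * dt       # Riemann sum approximating the integral

print(integral.var())               # close to T**3 / 3
print(T**3 / 3)
```

The sample variance carries both Monte Carlo noise and a small $O(1/n)$ discretisation bias from the right-endpoint Riemann sum, so agreement is to within a percent or so at these settings.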

## Improving Monte Carlo: Control Variates

I’ve already discussed quite a lot about Monte Carlo in quantitative finance. MC can be used to value products for which an analytical price is not available in a given model, which includes most exotic derivatives in most models. However, two big problems are the time that it takes to run and the ‘Monte Carlo error’ in results.

One technique for improving MC is to use a ‘control variate’. The idea is to find a product whose price is strongly correlated to the product that we’re trying to price, but which is easier to calculate or whose price we already know. When we simulate a path in MC, it will almost surely give either an under-estimate or an over-estimate of the true price, but we don’t know which, and averaging all of these errors is what leads to the Monte Carlo error in the final result. The insight in the control variate technique is to use the knowledge given to us by the control variate to reduce this error. If the two prices are strongly correlated and a path produces an over-estimate of the product price, it most likely also produces an over-estimate of the control variate and vice versa, which will allow us to improve our estimate of the product we’re trying to price.

The textbook example is the Asian option. Although the arithmetic version of the Asian option discussed in previous posts has no analytic expression in BS, a similar geometric Asian option does have an analytic price. So, for a given set of model parameters, we can calculate the geometric option's price exactly. As a reminder, the payoff of an arithmetic Asian option at expiry is

$C_A(T) = \Big({1\over N}\sum_{i=1}^N S_{t_i} - K\Big)^+$

and the payoff of the related geometric-averaging Asian is

$C_G(T) = \bigg(\Big(\prod_{i=1}^N S_{t_i}\Big)^{1\over N} - K\bigg)^+$

Denoting the price of the arithmetic option as $C_A$ and the geometric option as $C_G$, the traditional Monte Carlo approach is to generate $M$ paths, and for each one calculate $c_A(m)$ (the realisation of the payoff along path $m$) and take the average over all paths, so that

$C_A \simeq {1\over M}\sum_{m=1}^M c_A(m)$

which will get closer to the true price as $M\to\infty$.

Using $C_G$ as a control variate, we instead calculate

$C_A \simeq {1\over M}\sum_{m=1}^M\Big[\,c_A(m) + \lambda\,\big(C_G - c_G(m)\big)\Big]$

where $C_G$ is the price of the geometric option known from the analytical expression, $c_G(m)$ is its realisation along path $m$, and $\lambda$ is a constant (in this case we will set it to 1).

What do we gain from this? Well, consider the variance of the new per-path estimator

${\rm Var}\big[\,c_A + \lambda(C_G - c_G)\,\big] = {\rm Var}[c_A] + \lambda^2\,{\rm Var}[c_G] - 2\lambda\,{\rm Cov}[c_A,c_G]$

(since $C_G$ is known, so has zero variance) which is minimal for $\lambda = {\rm Cov}[c_A,c_G]\,/\,{\rm Var}[c_G]$, in which case

${\rm Var}\big[\,c_A + \lambda(C_G - c_G)\,\big] = \big(1-\rho^2\big)\,{\rm Var}[c_A]$

where $\rho$ is the correlation between $c_A$ and $c_G$.

that is, if the two prices are strongly correlated, the variance of the price calculated using the control variate will be significantly smaller. I’ve plotted a sketch of the prices of the two types of average for 100 paths – the correlation is about 99.98%. Consequently, we expect to see a reduction in variance of about 2000 times for a given number of paths (although we now have to do a little more work on each path, as we need to calculate the geometric average as well as the arithmetic average of spots). This is a roughly 45 times smaller standard error on pricing – well over an extra decimal place, which isn’t bad – and this is certainly much easier than running 2000 times as many paths to achieve the same result.
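To see the effect in practice, here is a small Python sketch (my own; the blog's pricer is C++/XLW, and the parameter values below are illustrative assumptions). Since the known analytic price $C_G$ only shifts the mean of the control-variate estimator, the variance comparison doesn't require it:

```python
import numpy as np

# Sketch of control-variate variance reduction with lambda = 1
rng = np.random.default_rng(1)
S0, r, sigma, K, T, N, M = 100.0, 0.02, 0.3, 100.0, 1.0, 12, 100_000
dt = T / N

# GBM paths sampled at the N equally spaced fixing dates
z = rng.normal(size=(M, N))
log_S = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                               + sigma * np.sqrt(dt) * z, axis=1)
S = np.exp(log_S)

c_arith = np.maximum(S.mean(axis=1) - K, 0.0)                  # arithmetic payoff
c_geom = np.maximum(np.exp(np.log(S).mean(axis=1)) - K, 0.0)   # geometric payoff

# The estimator c_arith + (C_G - c_geom) has the same variance as c_arith - c_geom,
# because the analytic constant C_G has zero variance
print("correlation:", np.corrcoef(c_arith, c_geom)[0, 1])
print("variance ratio:", c_arith.var() / (c_arith - c_geom).var())
```

The exact variance ratio depends on the vol and number of fixings; with the relatively high vol assumed here the reduction is smaller than the 2000x quoted above, but still substantial.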

## Fitting the initial discount curve in a stochastic rates model

I’ve introduced the Vasicek stochastic rates model in an earlier post, and here I’m going to introduce a development of it called the Hull-White (or sometimes Hull-White extended Vasicek) model.

The rates are modelled by a mean-reverting stochastic process

$dr_t = a\big[\theta(t) - r_t\big]\,dt + \sigma\,dW_t$

which is similar to the Vasicek model, except that the $\theta$ term is now allowed to vary with time (in general $a$ and $\sigma$ are too, but I’ll ignore those issues for today).

The freedom to set $\theta$ as a deterministic function of time allows us to calibrate the model to match the initial discount curve, which means at least initially our model will give the right price for things like FRAs. Calibrating models to match market prices is one of the main things quants do. Of course, the market doesn’t really obey our model. This means that, in time, the market prices and the prices predicted by our model will drift apart, and the model will need to be re-calibrated. But the better a model captures the various risk factors in the market, the less often this procedure will be needed.

Using the trick

$d\big(e^{at}\,r_t\big) = e^{at}\big(a\,\theta(t)\,dt + \sigma\,dW_t\big)$

to re-express the equation and integrating gives

$r_t = r_0\,e^{-at} + a\int_0^t\theta(s)\,e^{-a(t-s)}\,ds + \sigma\int_0^t e^{-a(t-s)}\,dW_s$

where $r_0$ is the rate at $t=0$. The observable quantities from the discount curve are the initial discount factors (or equivalently the initial forward rates) $P(0,t)$, where

$P(0,t) = \mathbb{E}\Big[\,e^{-\int_0^t r_s\,ds}\,\Big]$

The rate $r_t$ is normally distributed, so the integral $\int_0^t r_s\,ds$ must be too. This is because an integral is essentially a sum, and a sum of normal distributions is also normally distributed. Applying the Ito isometry as discussed before, the expectation of this variable will come wholly from the deterministic terms and the variance will come entirely from the stochastic term, giving

$\int_0^t r_s\,ds \sim {\cal N}\bigg(\,r_0\,B(0,t) + a\int_0^t\theta(s)\,B(s,t)\,ds\;,\;\;\sigma^2\int_0^t B(s,t)^2\,ds\,\bigg)$

where throughout

$B(s,t) = {1\over a}\big(1 - e^{-a(t-s)}\big)$

and since, for a normally distributed variable $X$,

$\mathbb{E}\big[\,e^{-X}\,\big] = e^{-\mathbb{E}[X] + {1\over 2}{\rm Var}[X]}$

we have

$\ln P(0,t) = -\,r_0\,B(0,t) - a\int_0^t\theta(s)\,B(s,t)\,ds + {\sigma^2\over 2}\int_0^t B(s,t)^2\,ds$

Two differentiations of this expression give

$-{\partial\over\partial t}\ln P(0,t) = r_0\,e^{-at} + a\int_0^t\theta(s)\,e^{-a(t-s)}\,ds - {\sigma^2\over 2a^2}\big(1 - e^{-at}\big)^2$

$-{\partial^2\over\partial t^2}\ln P(0,t) = -a\,r_0\,e^{-at} + a\,\theta(t) - a^2\int_0^t\theta(s)\,e^{-a(t-s)}\,ds - {\sigma^2\over a}\big(1 - e^{-at}\big)\,e^{-at}$

and combining these equations gives an expression for $\theta(t)$ that exactly fits the initial discount curve for the given currency

$\theta(t) = -{1\over a}{\partial^2\over\partial t^2}\ln P(0,t) - {\partial\over\partial t}\ln P(0,t) + {\sigma^2\over 2a^2}\big(1 - e^{-2at}\big)$

and since $f(0,t) = -{\partial\over\partial t}\ln P(0,t)$ is simply the initial market-observed forward rate to each time horizon coming from the discount curve, this can be compactly expressed as

$\theta(t) = {1\over a}{\partial f(0,t)\over\partial t} + f(0,t) + {\sigma^2\over 2a^2}\big(1 - e^{-2at}\big)$

Today we’ve seen how a simple extension to the ‘basic’ Vasicek model allows us to match the initial discount curve seen in the market. Allowing the volatility parameter to vary will allow us to match market prices of other products such as swaptions (an option to enter into a swap), which I’ll discuss another time. But we’re gradually building up a suite of simple models that we can combine later to model much more complicated environments.
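The fit can be checked numerically. In this sketch of mine (the flat-plus-slope toy forward curve and the parameter values are assumptions), plugging the fitted $\theta(t)$ into the mean and variance of $\int_0^T r_s\,ds$ by quadrature reproduces the market discount factor:

```python
import numpy as np

# Toy market curve (an assumption for illustration): f(0,t) = 0.02 + 0.01 t,
# so ln P_mkt(0,T) = -(0.02 T + 0.005 T^2)
a, sigma, T = 0.1, 0.01, 2.0
f = lambda t: 0.02 + 0.01 * t            # initial forward curve f(0,t)
df_dt = lambda t: 0.01 + 0.0 * t         # its time derivative
r0 = f(0.0)

# theta(t) from the Hull-White fitting formula
theta = lambda t: df_dt(t) / a + f(t) \
    + sigma**2 / (2 * a**2) * (1 - np.exp(-2 * a * t))

# Model ZCB via ln P(0,T) = -E[int r] + 0.5 Var[int r], by trapezoid quadrature
trap = lambda y, x: np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2
s = np.linspace(0.0, T, 20001)
B = (1 - np.exp(-a * (T - s))) / a
mean = r0 * (1 - np.exp(-a * T)) / a + a * trap(theta(s) * B, s)
var = sigma**2 * trap(B**2, s)

lnP_model = -mean + 0.5 * var
lnP_market = -(0.02 * T + 0.005 * T**2)
print(lnP_model, lnP_market)             # the two agree to quadrature accuracy
```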

## Interview Questions VI: The Drunkard’s Walk

A drunk man is stumbling home at night after closing time. Every step he takes moves him either 1 metre closer or 1 metre further away from his destination, with an equal chance of going in either direction (!). His home is 70 metres down the road, but unfortunately, there is a cliff 30 metres behind him at the other end of the street. What are the chances that he makes it to his house BEFORE tumbling over the edge of the cliff?

This is a fun question and quite heavy on conditional probability. We are trying to find the probability that the drunkard has moved 70 metres forward BEFORE he has ever moved 30 metres backward, or vice versa. There are several ways of attempting this, including some pretty neat martingale maths, but I’m going to attempt it here in the language of matrices and Markov chains.

Essentially, there are 101 states that the drunkard can be in, from right beside the cliff all the way down the road to right beside his front door. Let’s label these from 0 to 100, in terms of the number of metres away from the cliff, so that he starts in state 30. At each step, he transitions from his current state to either one state higher or one state lower with probability 50%, and the process continues until he reaches either the cliff or the door, at which point the process will cease in either a good or a very bad night’s rest. We call these two states, 0 and 100, ‘absorbers’, because the process stops at this point and no transitions to new states can happen. A Markov diagram that illustrates this process can be drawn like this:

We can characterise each step in the process by a transition matrix acting on a state vector. The drunkard starts 30 metres from the cliff, so his state vector is a long string of zeroes with a single 1 at the position corresponding to state 30:

$S_0 = \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix}$

This vector is probabilistic – a 1 indicates with 100% certainty that the drunkard is in state 30. However, with subsequent moves this probability density will be spread over the nearby states as his position’s probability density diffuses into other states. The transition matrix multiplies the drunkard’s state vector to give his state vector after another step:

$P = \begin{pmatrix} 1 & 0.5 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0.5 & \cdots & 0 & 0 \\ 0 & 0.5 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 0.5 & & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0.5 & 1 \end{pmatrix}; \quad S_{i+1} = P\cdot S_i$

So, after one step the drunkard’s state vector will have a 0.5 in the positions corresponding to states 29 and 31 and zeroes elsewhere, saying that he will be in either of these states with probability 0.5, and certainly nowhere else. Note that the 1’s at the top-left and bottom-right of the transition matrix will absorb any probability that arrives at those states for the rest of the process.

To keep things simple, let’s consider a much smaller problem with only six states, where state 1 is ‘down the cliff’ and state 6 is ‘home’; we’ll come back to the larger problem at the end. We want to calculate the limit of the drunkard’s state as the transition matrix is applied a large number of times, i.e. to calculate

$S_n = P^n\cdot S_0$

An efficient way to calculate powers of a large matrix is to first diagonalise it. We have $P = U A\, U^{-1}$, where $A$ is a diagonal matrix whose diagonal elements are the eigenvalues of $P$, and $U$ is the matrix whose columns are the eigenvectors of $P$. Note that, as $P$ is not symmetric, $U^{-1}$ will have row vectors that are the LEFT eigenvectors of $P$, which in general will be different to the right eigenvectors. It is easy to see how to raise $P$ to the power n

$P^n = (U A\, U^{-1})^n = U A^n\, U^{-1}$

since all of the $U$s and $U^{-1}$s in the middle cancel. Since $A$ is diagonal, to raise it to the power of n we simply raise each element (which are just the eigenvalues of $P$) to the power of n. To calculate the eigenvalues of $P$ we solve the characteristic equation

$|P - \lambda_a I| = 0$

with

$P = \begin{pmatrix} 1 & 0.5 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.5 & 1 \end{pmatrix}$

This gives six eigenvalues, two of which are one (these will turn out to correspond to the two absorbing states) and the remainder satisfy $0\leq\lambda_a<1$. Consequently, when raised to the power n in the diagonal matrix above, all of the terms will disappear as n becomes large, except for the first and the last eigenvalues, which are 1.

Calculating the eigenvectors is time consuming and we’d like to avoid it if possible. Luckily, in the limit that n gets large, we’ve seen that most of the eigenvalues raised to the power n will go to zero, which will reduce significantly the amount we need to calculate. We have

$P^n\cdot S = \big(U A^n\, U^{-1}\big)\cdot S$

and in the limit that n gets large, $A^n$ is just a matrix of zeroes with a one in the upper-left and lower-right entries. At first sight it looks as though calculating $U^{-1}$ will be required, which is itself an eigenvector problem, but in fact we only have to calculate a single eigenvector – the first (or equivalently the last), which will give the probability of evolving from an initial state S to the final state 1 (or 6).

$U^{-1}$ is the matrix of left eigenvectors of $P$, each of which satisfies

$x_a\cdot P = \lambda_a\, x_a$

and we are looking for the eigenvectors corresponding to eigenvalue 1, so we need to solve the matrix equation

$x\cdot\begin{pmatrix} 0 & 0.5 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & -1 & 0.5 & 0 & 0 \\ 0 & 0 & 0.5 & -1 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0.5 & 0 \end{pmatrix} = 0$

We know that if he starts in the first state (i.e. over the cliff) he must finish there (trivially, after 0 steps) with probability 100%, so that $x_1 = 1$ and $x_6 = 0$. The solution to this is

$x_n = {6-n\over 5}$

which is plotted here for each initial state

which says that the probability of ending up in state 1 (down the cliff!) falls linearly with starting distance from the cliff. We can scale up this final matrix equation to the original 101-state problem, and find that for someone starting in state 30, there is a 70/100 chance of ending up down the cliff, and consequently only a 30% chance of getting home!
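The six-state example is easy to verify numerically. This sketch of mine applies the transition matrix many times (using numpy's matrix powers rather than an explicit eigendecomposition) and reads off the absorption probabilities:

```python
import numpy as np

# Column-stochastic transition matrix for the six-state walk:
# states 1 and 6 are absorbers, interior states move up/down with prob 0.5
P = np.array([
    [1, 0.5, 0,   0,   0,   0],
    [0, 0,   0.5, 0,   0,   0],
    [0, 0.5, 0,   0.5, 0,   0],
    [0, 0,   0.5, 0,   0.5, 0],
    [0, 0,   0,   0.5, 0,   0],
    [0, 0,   0,   0,   0.5, 1],
], dtype=float)

# Start in state 3 (index 2) and apply the transition matrix many times
S0 = np.zeros(6)
S0[2] = 1.0
S_inf = np.linalg.matrix_power(P, 500) @ S0

# Absorption probabilities match x_n = (6 - n)/5: state 3 -> 3/5 cliff, 2/5 home
print(S_inf[0], S_inf[5])
```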

This problem is basically the same as the Gambler’s Ruin problem, where a gambler in a casino stakes $1 on each toss of a coin and leaves after reaching a goal of $N or when broke. There are some very neat methods for solving it via martingale methods that don’t use the mechanics above, which I’ll look at in a future post.

## Asian Options II: Monte Carlo

In a recent post, I introduced Asian options, an exotic option which pays the average of the underlying over a sequence of fixing dates.

The payoff of an Asian call is

$C(T) = \Big({1\over N}\sum_{i=1}^N S_{t_i} - K\Big)^+$

and to price the option at an earlier time via risk-neutral valuation requires us to solve the integral

$C(t) = P_{t,T}\ \iint\cdots\int\Big({1\over N}\sum_{i=1}^N S_{t_i} - K\Big)^+\,\phi(S_{t_1},S_{t_2},\cdots,S_{t_N})\;dS_{t_1}\,dS_{t_2}\cdots dS_{t_N}$

A simple way to do this is using Monte Carlo – we simulate a spot path drawn from the distribution $\phi(S_{t_1},S_{t_2},\cdots,S_{t_N})$ and evaluate the payoff at expiry of that path, then average over a large number of paths to get an estimate of $C(t)$.

It’s particularly easy to do this in Black-Scholes, all we have to do is simulate N gaussian variables, and evolve the underlying according to a geometric brownian motion.

I’ve updated the C++ and Excel Monte Carlo pricer on the blog to be able to price asian puts and calls by Monte Carlo – have a go and see how they behave relative to vanilla options. One subtlety is that we can no longer input a single expiry date, we now need an array of dates as our function input. If you have a look in the code, you’ll see that I’ve adjusted the function optionPricer to take a datatype MyArray, which is the default array used by XLW (it won’t be happy if you tell it to take std::vector). This array can be entered as a row or column of excel cells, and should be the dates, expressed as years from the present date, to average over.
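A minimal Python sketch of the same pricer logic (the blog's actual implementation is C++/XLW; the function name and parameters here are my own), taking an array of fixing dates as input:

```python
import numpy as np

def asian_call_mc(S0, K, r, sigma, dates, n_paths=100_000, seed=0):
    """Arithmetic Asian call by Monte Carlo; `dates` is an array of fixing
    times in years, mirroring the array input of the XLW pricer."""
    rng = np.random.default_rng(seed)
    dates = np.asarray(dates, dtype=float)
    dts = np.diff(dates, prepend=0.0)      # gaps between successive fixings
    z = rng.normal(size=(n_paths, dates.size))
    # Evolve the GBM exactly across each (possibly uneven) gap
    log_S = np.log(S0) + np.cumsum(
        (r - 0.5 * sigma**2) * dts + sigma * np.sqrt(dts) * z, axis=1)
    payoff = np.maximum(np.exp(log_S).mean(axis=1) - K, 0.0)
    return np.exp(-r * dates[-1]) * payoff.mean()

# Quarterly fixings over one year
print(asian_call_mc(100.0, 100.0, 0.05, 0.2, [0.25, 0.5, 0.75, 1.0]))
```

As a sanity check, with a single fixing date at expiry the price collapses to the vanilla European call, so it can be compared against the Black-Scholes formula.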

## The Dupire Local Vol Model

In this post I’m going to look at a further generalisation of the Black-Scholes model, which will allow us to re-price any arbitrary market-observed volatility surface, including those featuring a volatility smile.

I’ve previously looked at how we can produce different at-the-money vols at different times by using a piecewise constant volatility $\sigma(t)$, but we were still unable to produce the smiley vol surfaces which are often observed in the market. We can go further by allowing vol to depend on both t and the value of the underlying S, so that the full BS model dynamics are given by the following SDE

$dS_t = S_t\cdot\Big(\,r(t)\,dt + \sigma(t,S_t)\,dW_t\,\Big)$

Throughout this post we will make constant use of the probability distribution of the underlying implied by this SDE at future times, which I will denote $\phi(t,S_t)$. It can be shown [and I will show in a later post!] that the evolution of this distribution obeys the Kolmogorov Forward Equation (sometimes called the Fokker-Planck equation)

${\partial\phi(t,S_t)\over\partial t} = -{\partial\over\partial S_t}\big(r S_t\,\phi(t,S_t)\big) + {1\over 2}{\partial^2\over\partial S_t^2}\big(\sigma^2(t,S_t)\,S_t^2\,\phi(t,S_t)\big)$

This looks a mess, but it essentially tells us how the probability distribution changes with time – we can see that it looks very much like a heat equation with an additional driving term due to the SDE drift.

Vanilla call option prices are given by

$C(K,T) = P(0,T)\int_K^{\infty}\big(S_T - K\big)\,\phi(T,S_T)\,dS_T$

Assuming the market provides vanilla call option prices at all times and strikes [or at least enough for us to interpolate across the region of interest], we can calculate the time derivative of the call price, which is equal to

${\partial C\over\partial T} = -rC + P(0,T)\int_K^{\infty}\big(S_T - K\big)\,{\partial\phi\over\partial T}\,dS_T$

and we can substitute in the value of the time derivative of the probability distribution from the Kolmogorov equation above

$rC + {\partial C\over\partial T} = P(0,T)\int_K^{\infty}\big(S_T - K\big)\Big[-{\partial\over\partial S_T}\big(r S_T\phi\big) + {1\over 2}{\partial^2\over\partial S_T^2}\big(\sigma^2 S_T^2\phi\big)\Big]\,dS_T$

These two integrals can be solved by integration by parts with a little care

\begin{align*} -\int_K^{\infty}\big(S_T - K\big){\partial\over\partial S_T}\big(r S_T\phi\big)\,dS_T &= -\Big[r S_T\phi\,\big(S_T - K\big)\Big]_K^{\infty} + \int_K^{\infty} r S_T\phi\;dS_T \\ &= r\int_K^{\infty}\big(S_T\phi\big)\,dS_T\end{align*}

\begin{align*}\int_K^{\infty}\big(S_T - K\big){\partial^2\over\partial S_T^2}\big(\sigma^2 S_T^2\phi\big)\,dS_T &= \Big[\big(S_T - K\big){\partial\over\partial S_T}\big(\sigma^2 S_T^2\phi\big)\Big]_K^{\infty} - \int_K^{\infty}{\partial\over\partial S_T}\big(\sigma^2 S_T^2\phi\big)\,dS_T \\ &= \sigma^2 K^2\phi(T,K)\end{align*}

where the boundary terms disappear at the upper limit due to the distribution $\phi(t,S_t)$ and its derivatives, which go to zero rapidly at high spot, and at the lower limit due to the factor $(S_T - K)$.

We already have an expression for $\phi$ in terms of C and its derivatives from our survey of risk-neutral probabilities,

$\phi(T,K) = {1\over P(0,T)}{\partial^2 C\over\partial K^2}$

and differentiating the formula above for call option prices once with respect to strike gives ${\partial C\over\partial K} = -P(0,T)\int_K^{\infty}\phi\,dS_T$, so that

\begin{align*} P(0,T)\int_K^{\infty} S_T\,\phi\;dS_T &= C + P(0,T)\int_K^{\infty} K\phi\;dS_T \\ &= C - K{\partial C\over\partial K}\end{align*}

Substituting these expressions for $\phi(T,K)$ and $\int_K^{\infty}(S_T\phi)\,dS_T$ into the equation above

\begin{align*} rC + {\partial C\over\partial T} &= P(0,T)\cdot\Big[\,r\int_K^{\infty}\big(S_T\phi\big)\,dS_T + {1\over 2}\sigma^2 K^2\phi(T,K)\Big] \\ &= rC - rK{\partial C\over\partial K} + {1\over 2}\sigma^2 K^2{\partial^2 C\over\partial K^2}\end{align*}

and remember that $\sigma = \sigma(T,K)$, which is our Dupire local vol. Cancelling the rC terms from each side and re-arranging gives

$\sigma(T,K) = \sqrt{{\partial C\over\partial T} + rK{\partial C\over\partial K}\over{1\over 2}K^2{\partial^2 C\over\partial K^2}}$

It’s worth taking a moment to think what this means. From the market, we will have access to call prices at many strikes and expires. If we can choose a robust interpolation method across time and strike, we will be able to calculate the derivative of price with time and with strike, and plug those into the expression above to give us a Dupire local vol at each point on the time-price plane. If we are running a Monte-Carlo engine, this is the vol that we will need to plug in to evolve the spot from a particular spot level and at a particular time, in order to recover the vanilla prices observed on the market.
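As an illustrative check (my own sketch, not from the post): if we generate a call surface from Black-Scholes with a flat 20% vol and zero rates, finite-differencing the Dupire formula should hand back 20% at every strike and expiry:

```python
from math import erf, log, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S0, K, T, vol):
    # Black-Scholes call with zero rates, used as a synthetic market surface
    d1 = (log(S0 / K) + 0.5 * vol**2 * T) / (vol * sqrt(T))
    return S0 * norm_cdf(d1) - K * norm_cdf(d1 - vol * sqrt(T))

def dupire_local_vol(S0, K, T, vol, hK=0.01, hT=1e-4):
    # sigma(T,K)^2 = (dC/dT) / (0.5 K^2 d2C/dK2)  in the r = 0 case
    dC_dT = (bs_call(S0, K, T + hT, vol) - bs_call(S0, K, T - hT, vol)) / (2 * hT)
    d2C_dK2 = (bs_call(S0, K + hK, T, vol) - 2 * bs_call(S0, K, T, vol)
               + bs_call(S0, K - hK, T, vol)) / hK**2
    return sqrt(dC_dT / (0.5 * K**2 * d2C_dK2))

print(dupire_local_vol(100.0, 110.0, 1.0, 0.2))   # ~0.2
```

With a genuinely smiley input surface the same finite differences would return a strike- and time-dependent local vol instead of a constant.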

A nice property of the local vol model is that it can match uniquely any observed market call price surface. However, the model has weaknesses as well – by including only one source of uncertainty (the volatility), we are making too much of a simplification. Although vanilla prices match, exotics priced using local vol models typically have prices that are much lower than prices observed on the market. The local vol model tends to put most of the vol at early times, so that longer running exotics significantly underprice.

It is important to understand that this local vol is NOT the implied vol used when quoting vanilla option prices. The implied vol and the local vol are related along a spot path by the expression

$\Sigma^2\,T = \int_0^T\sigma^2(S_t,t)\,dt$

(where $\Sigma$ is the implied vol) and the two are quite different. Implied vol is the square root of the average variance per unit time, while the local vol gives the amount of additional variance being added at particular positions on the S-t plane. Since we have an expression for local vol in terms of the call price surface, and there is a 1-to-1 correspondence between call prices and implied vols, we can derive an expression to calculate local vols directly from an implied vol surface. The derivation is long and tedious but mathematically straightforward, so I don’t present it here; the result is that the local vol is given by (rates are excluded here for simplicity)

$\sigma(y,T) = \sqrt{{\partial w\over\partial T}\over 1 - {y\over w}{\partial w\over\partial y} + {1\over 2}{\partial^2 w\over\partial y^2} + {1\over 4}\Big(-{1\over 4} - {1\over w} + {y^2\over w^2}\Big)\Big({\partial w\over\partial y}\Big)^2}$

where $w = \Sigma^2 T$ is the total implied variance to a given maturity and strike, and $y = \ln{K\over F_T}$ is the log of ‘moneyness’.

This is probably about as far as Black-Scholes alone will take you. Although we can reprice any vanilla surface, we’re still not pricing exotics very well – to correct this we’re going to need to consider other sources of uncertainty in our models. There are a wide variety of ways of doing this, and I’ll begin to look at more advanced models in future posts!

## Forwards vs. Futures

I’ve covered Forwards and Futures in previous posts, and now that I’ve covered the basics of Stochastic Interest Rates as well, we can have a look at the difference between Forwards and Futures Contracts from a financial perspective.

As discussed before, the price of a Forward Contract is enforceable by arbitrage if the underlying is available and freely storable and there are Zero Coupon Bonds available to the Forward Contract delivery time. In this case, the forward price is

$F(t,T) = S(t)\cdot{1\over{\rm ZCB}(t,T)}$

In this post I’m going to assume a general interest rate model, which in particular may well be stochastic. In such cases, the price of a ZCB at the present time is given by

${\rm ZCB}(t,T) = \mathbb{E}\Big[\exp\Big\{-\int_t^T r(t')\,dt'\Big\}\Big]$

Futures Contracts are a bit more complicated, and we need to extend our earlier description in the case that there are interest rates. The basic description was given before, but additionally in the presence of interest rates, any deposit that is in either party’s account is delivered to the OTHER party at the end of each time period. So, taking the example from the previous post, on day 4 we had $4 on account with the exchange – if rates on that day were 10% p.a., over that day the $4 balance would accrue about 10c interest, which would be paid to the other party.

Let’s say we’re at time s, and want to calculate the Futures price to time T. Our replication strategy is now as follows, following the classic proof due to Cox, Ingersoll and Ross but in continuous time. Futures Contracts are free to enter into and break out of due to the margin in each account, so entering X Futures Contracts at time t and closing them at time t+dt will lead to a net receipt (or payment if negative) of ${\rm X}\cdot\big[H(t+dt,T) - H(t,T)\big]$. From t+dt to T, we invest (borrow) this amount at the short rate and thus receive (need to pay)

${\rm X}\cdot\big[H(t+\tau,T) - H(t,T)\big]\cdot\prod_t^T\big(1 + r(t)\,\tau\big)$

and now moving to continuous time

${\rm X}\cdot\big[H(t+dt,T) - H(t,T)\big]\cdot e^{\int_t^T r(t')\,dt'}$

We follow this strategy in continuous time, constantly opening contracts and closing them in the following time period [I’m glossing over discrete vs. continuous time here – as long as the short rate corresponds to the discrete time step involved this shouldn’t be a problem], and investing our profits and financing our losses both at the corresponding short rate. We choose a different X for each period [t, t+dt] so that ${\rm X}(t) = e^{\int_s^t r(t')\,dt'}$. We also invest an amount H(s,T) at time s at the short rate, and continually roll this over so that it is worth $H(s,T)\cdot e^{\int_s^T r(t')\,dt'}$ at time T

Only the final step of this strategy costs money to enter, so the net price of the portfolio and trading strategy is H(s,T). The net payoff at expiry is

$H(s,T)\cdot e^{\int_s^T r(t)\,dt} + \sum_s^T{\rm X}(t)\cdot\big[H(t+dt,T) - H(t,T)\big]\cdot e^{\int_t^T r(t')\,dt'}$

$= H(s,T)\cdot e^{\int_s^T r(t)\,dt} + \sum_s^T e^{\int_s^t r(t')\,dt'}\cdot\big[H(t+dt,T) - H(t,T)\big]\cdot e^{\int_t^T r(t')\,dt'}$

$= H(s,T)\cdot e^{\int_s^T r(t)\,dt} + e^{\int_s^T r(t)\,dt}\cdot\sum_s^T\big[H(t+dt,T) - H(t,T)\big]$

$= H(s,T)\cdot e^{\int_s^T r(t)\,dt} + e^{\int_s^T r(t)\,dt}\cdot\big[H(T,T) - H(s,T)\big] = H(T,T)\cdot e^{\int_s^T r(t)\,dt}$

And H(T,T) is S(T), so the net payoff of a portfolio costing H(s,T) is

$S(T)\cdot e^{\int_s^T r(t)\,dt}$

How does this differ from a portfolio costing the Forward price? Remember that in Risk-Neutral Valuation, the present value of an asset is equal to the expectation of its future value discounted by a numeraire. In the risk-neutral measure, this numeraire is a unit of cash B continually re-invested at the short rate, which is worth $B(t,T) = e^{\int_t^T r(t')dt'}$ at time T. Dividing the payoff above by this numeraire removes the exponential factor, so we see that the Futures price is a martingale in the risk-neutral measure (sometimes called the ‘cash measure’ because of its numeraire). So the current value of a Futures Contract on some underlying should be

$H(t,T) = \mathbb{E}^{\rm RN}\big[\,S(T)\,|\,{\cal F}_t\,\big]$

i.e. the undiscounted expectation of the future spot in the risk-neutral measure. The Forward price is instead the expected price in the T-forward measure, whose numeraire is a ZCB expiring at time T

$F(t,T) = \mathbb{E}^{\rm T}\big[\,S(T)\,|\,{\cal F}_t\,\big]$

We can express these in terms of each other, remembering F(T,T) = S(T) = H(T,T), and using a change of numeraire (post on this soon!). I also use the expression for two correlated lognormals, which I derived at the bottom of this post

\begin{align*} F(t,T) &= \mathbb{E}^{T}\big[\,F(T,T)\,|\,{\cal F}_t\,\big] \\ &= \mathbb{E}^{T}\big[\,S(T)\,|\,{\cal F}_t\,\big] \\ &= \mathbb{E}^{T}\big[\,H(T,T)\,|\,{\cal F}_t\,\big] \\ &= \mathbb{E}^{RN}\Big[\,H(T,T)\,{B(t)\over B(T)}\,{{\rm ZCB}(T,T)\over{\rm ZCB}(t,T)}\,\Big|\,{\cal F}_t\,\Big] \\ &= {1\over{\rm ZCB}(t,T)}\,\mathbb{E}^{RN}\Big[\,H(T,T)\,{1\over B(T)}\,\Big|\,{\cal F}_t\,\Big] \\ &= {1\over{\rm ZCB}(t,T)}\,\mathbb{E}^{RN}\big[\,H(T,T)\,\big]\,\mathbb{E}^{RN}\big[\,e^{-\int_t^T r(t')dt'}\,\big]\,e^{\sigma_H\sigma_B\rho} \\ &= H(t,T)\cdot e^{\sigma_H\sigma_B\rho}\end{align*}

(taking $B(t) = 1$, i.e. normalising the cash account at the present time, and using the martingale property of $H$ in the risk-neutral measure)

where $\sigma_H$ is the volatility of the Futures Price, and $\sigma_B$ is the volatility of a ZCB – in general the algebra will be rather messy!

As a concrete example, let’s consider the following model for asset prices, with S driven by a geometric brownian motion and rates driven by the Vasicek model discussed before

${dS \over S} = r(t)\, dt + \sigma_S\, dW_t$

$dr = a\big[\theta - r(t)\big] dt + \sigma_r\, \widetilde{dW_t}$

And (critically) assuming that the two brownian processes are correlated according to $\rho$

$dW_t \cdot \widetilde{dW_t} = \rho\, dt$

In this case, the volatility $\sigma_B$ is the volatility of ${\mathbb E}\big[ e^{-\int_t^T r(t')dt'}\big]$, and as I discussed in the post on stochastic rates, this is tractable and lognormally distributed in this model.

We can see that in the case of correlated stochastic rates, these two prices are not the same – which means that Futures and Forward Contracts are fundamentally different financial products.

For two standard normal variates $x$ and $y$ with correlation $\rho$, we have:

${\mathbb E}\big[ e^{\sigma_1 x} \big] = e^{{1\over 2}\sigma_1^2}$

and, writing $y = \rho x + \sqrt{1-\rho^2}\,z$ where $z$ is a standard normal independent of $x$,

\begin{align*} {\mathbb E}\big[ e^{\sigma_1 x + \sigma_2 y} \big] &= {\mathbb E}\big[ e^{\sigma_1 x + \sigma_2 \rho x + \sigma_2 \sqrt{1-\rho^2}\,z} \big]\\ &= {\mathbb E}\big[ e^{(\sigma_1 + \sigma_2 \rho) x + \sigma_2 \sqrt{1-\rho^2}\,z} \big]\\ &= e^{{1\over 2}(\sigma_1 + \sigma_2 \rho)^2 + {1\over 2}\sigma_2^2 (1-\rho^2)}\\ &= e^{{1\over 2}\sigma_1^2 + {1\over 2}\sigma_2^2 + \sigma_1 \sigma_2 \rho}\\ &= {\mathbb E}\big[ e^{\sigma_1 x}\big]\, {\mathbb E}\big[ e^{\sigma_2 y}\big]\, e^{\sigma_1 \sigma_2 \rho} \end{align*}
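As a quick sanity check of this identity (not part of the original derivation – the parameter values below are arbitrary), we can estimate both sides by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma1, sigma2, rho = 0.3, 0.2, 0.5
n = 1_000_000

# build correlated standard normals x, y with correlation rho
x = rng.standard_normal(n)
z = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * z

# LHS by simulation, RHS from the closed form derived above
mc = np.mean(np.exp(sigma1 * x + sigma2 * y))
exact = np.exp(0.5 * sigma1**2 + 0.5 * sigma2**2 + sigma1 * sigma2 * rho)
print(mc, exact)
```

With a million samples the two values typically agree to two or three decimal places.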

## Stochastic Rates Models

In a previous post, I introduced the Black-Scholes model, in which the price of an underlying stock is modeled with a stochastic variable that changes unpredictably with time. I’ve also discussed the basic model-independent rates products whose value can be determined at the present time exactly. However, to progress further with interest rate derivatives, we’re going to need to model interest rates more carefully. We’ve assumed rates are deterministic so far, but of course this isn’t true – just like stocks, they change with time in an uncertain manner, so we need to allow them to become stochastic as well.

One way of doing this is by analogy with the BS case, by allowing the short rate (which is the instantaneous risk-neutral interest rate $r(t)$) to become stochastic as well, and then integrating it over the required period of time to calculate forward rates.

A very basic example of this is the Vasicek Model. In this model the short rate is defined to be stochastic, with behaviour governed by the following SDE

$dr = a\big[\theta - r(t)\big]dt + \sigma\, dW_t$

where $a$, $\theta$ and $\sigma$ are constants and $dW_t$ is the standard Wiener increment as described before. This is marginally more complicated than the BS model, but still belongs to the small family of SDEs that are analytically tractable. Unlike stock prices, we expect rates to be mean-reverting – stock price variance grows with time, but we expect the distribution of rates to be confined to a fairly narrow range by comparison. The term in square brackets above achieves this, since if $r(t)$ is greater than $\theta$ then it will be negative and cause the rate to be pulled down, while if it is below $\theta$ the term will be positive and push the rate up towards $\theta$.

Solving the equation requires a trick, which is instead of thinking about the rate alone to think about the quantity $e^{at}r(t)$. This has differential

$d\big(e^{at}r(t)\big) = a e^{at} r(t)\, dt + e^{at}\, dr$

and substituting in the incremental rate term from the original equation we have

$d\big(e^{at}r(t)\big) = a\theta e^{at}\, dt + \sigma e^{at}\, dW_t$

note that the term in $r(t)\,dt$ has been cancelled out, and the remaining terms can be integrated directly from a starting time $s$ to a finishing time $t$

$r(t) = r(s)\,e^{-a(t-s)} + \theta\big(1 - e^{-a(t-s)}\big) + \sigma \int_s^t e^{-a(t-u)}\, dW_u$

where $r(s)$ is the initial rate. This is simply a gaussian distribution with mean and variance given by

${\mathbb E}\big[r(t)\big] = r(s)\,e^{-a(t-s)} + \theta\big(1 - e^{-a(t-s)}\big) \quad ; \quad {\rm Var}\big[r(t)\big] = {\sigma^2 \over 2a}\big(1 - e^{-2a(t-s)}\big)$

where the variance is calculated using the Ito isometry

${\mathbb E}\Big[\Big(\int_s^t e^{-a(t-u)}\, dW_u\Big)^2\Big] = \int_s^t e^{-2a(t-u)}\, du = {1\over 2a}\big(1 - e^{-2a(t-s)}\big)$

as stated above. Note that this allows the possibility of rates going negative, which is generally considered to be a weakness of the model, but the chance is usually rather small.
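A short simulation can confirm the terminal distribution of the Vasicek short rate. This is my own sketch, not from the original post: an Euler-Maruyama discretisation of the SDE with illustrative (not market-calibrated) parameters, compared against the standard closed-form moments:

```python
import numpy as np

# Euler-Maruyama simulation of dr = a*(theta - r)dt + sigma dW, checked
# against the standard Vasicek terminal moments:
#   mean = r0 e^{-aT} + theta (1 - e^{-aT}),  var = sigma^2/(2a) (1 - e^{-2aT})
a, theta, sigma, r0, T = 0.5, 0.03, 0.01, 0.02, 5.0
n_paths, n_steps = 200_000, 500
dt = T / n_steps

rng = np.random.default_rng(0)
r = np.full(n_paths, r0)
for _ in range(n_steps):
    r += a * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

exact_mean = r0 * np.exp(-a * T) + theta * (1 - np.exp(-a * T))   # ~0.0292
exact_var = sigma**2 / (2 * a) * (1 - np.exp(-2 * a * T))         # ~9.9e-5
print(r.mean(), exact_mean)
print(r.var(), exact_var)
```

The simulated mean and variance match the closed form up to Monte Carlo noise and a small O(dt) discretisation bias.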

As we know the distribution of the short rate, we can calculate some other relevant quantities. Of primary importance are the Zero Coupon Bonds, which are required for calculation of forward interest rates. A ZCB is a derivative that pays $1 at a future time $T$, and we can price this using the Risk-Neutral Valuation technique. According to the fundamental theorem of asset pricing, the current price of a derivative divided by our choice of numeraire must be equal to its future expected price at any time divided by the value of the numeraire at that time, with the expectation taken in the probability measure corresponding to the choice of numeraire. In the risk-neutral measure, the numeraire is a unit of currency, initially worth $1 but continually re-invested at the instantaneous short rate, so that its price at time $t$ is $B(t) = e^{\int_0^t r(u)\,du}$. Now, the price of the ZCB is given by

${{\rm ZCB}(t,T) \over B(t)} = {\mathbb E}\Big[ {{\rm ZCB}(T',T) \over B(T')} \,\Big|\, {\cal F}_t \Big]$

Although the RHS is true at any time $T'$, we only know the value of the ZCB exactly at a single time at the moment – the expiry date $T$, at which it is worth exactly $1. Plugging in these values we have

${\rm ZCB}(t,T) = {\mathbb E}\Big[ e^{-\int_t^T r(u)\,du} \,\Big|\, {\cal F}_t \Big]$

So the ZCB is given by the expectation of (the exponential of minus) the integral of the rate over a period of time. Since the rate is itself gaussian, and an integral is the limit of a sum, it's not surprising that this quantity is also gaussian (it's effectively the sum of many correlated gaussians, which is also gaussian, as discussed before), but it's rather tricky to calculate – I've included the derivation at the bottom of the post to save space here. The mean and variance of $\int_t^T r(u)\,du$ are given by

$\mu = \theta\,(T-t) + \big(r(t) - \theta\big)B(t,T) \quad ; \quad {\rm Var} = {\sigma^2\over a^2}\Big( (T-t) - 2B(t,T) + {1\over 2a}\big(1 - e^{-2a(T-t)}\big) \Big)$

where

$B(t,T) = {1\over a}\big(1 - e^{-a(T-t)}\big)$

The ZCB is given by the expectation of the exponential of a gaussian variable – and we've seen on several occasions that for gaussian $X$

${\mathbb E}\big[ e^{-X} \big] = e^{-\mu + {1\over 2}{\rm Var}}$

So the ZCB prices are

${\rm ZCB}(t,T) = A(t,T)\, e^{-B(t,T)\, r(t)}$

with $B(t,T)$ as defined above and

$\ln A(t,T) = \Big(\theta - {\sigma^2\over 2a^2}\Big)\big(B(t,T) - (T-t)\big) - {\sigma^2\over 4a}\,B(t,T)^2$

and as these expressions only depend on the rate at the initial time, we can calculate ZCB prices – and hence forward rates – for any future expiry date at any given time if we have the current instantaneous rate.
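The affine closed form above (the standard Vasicek ZCB formula) is easy to cross-check by Monte Carlo. This sketch is mine, with illustrative parameters; the simulation uses the short rate's exact transition density and a trapezoidal approximation of the integral:

```python
import numpy as np

# Vasicek ZCB(t,T) = A(t,T) exp(-B(t,T) r_t), checked against a Monte Carlo
# estimate of E[exp(-int_t^T r(u)du)]
a, theta, sigma, r0, T = 0.5, 0.03, 0.01, 0.02, 5.0

def vasicek_zcb(r_t, tau):
    B = (1 - np.exp(-a * tau)) / a
    lnA = (theta - sigma**2 / (2 * a**2)) * (B - tau) - sigma**2 * B**2 / (4 * a)
    return np.exp(lnA - B * r_t)

rng = np.random.default_rng(1)
n_paths, n_steps = 100_000, 500
dt = T / n_steps
decay = np.exp(-a * dt)                                # exact one-step decay
step_sd = sigma * np.sqrt((1 - decay**2) / (2 * a))    # exact one-step std dev

r = np.full(n_paths, r0)
integral = 0.5 * r * dt                                # trapezoidal rule
for k in range(n_steps):
    r = theta + (r - theta) * decay + step_sd * rng.standard_normal(n_paths)
    integral += (0.5 * dt if k == n_steps - 1 else dt) * r
mc_price = np.exp(-integral).mean()

print(vasicek_zcb(r0, T), mc_price)  # agree to ~3 decimal places
```

The closed form and the simulation agree up to Monte Carlo noise, and the formula correctly gives a price of exactly 1 at zero time-to-expiry.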

Although we can calculate a discount curve for a given set of parameters, the Vasicek model can’t calibrate to an initial ZCB curve taken from the market, which is a serious disadvantage. There are more advanced generalisations which can, and I’ll discuss some soon, but they will use all of the same tricks and algebra that I’ve covered here.

I've written enough for one day here – in later posts I'll discuss changing to the T-forward measure, in which the ZCB forms the numeraire instead of a unit of currency, which simplifies many calculations, and I'll use it to price caplets under stochastic rates and show that these are equivalent to european options on ZCBs.

An alternate approach to the short-rate model approach discussed today which is very popular these days is the Libor Market Model (LMM) approach, in which instead of simulating the short rate and calculating the required forwards, the different forwards required are instead computed directly and in tandem – I’ll look further at this approach in another post.

Here is the calculation of the distribution of the integral of the instantaneous rate over the period $t$ to $T$:

\begin{align*} \int_t^T r(u)\,du &= \int_t^T \Big( r(t)\,e^{-a(u-t)} + \theta\big(1 - e^{-a(u-t)}\big) + \sigma \int_t^u e^{-a(u-v)}\,dW_v \Big)\, du \\ &= \theta\,(T-t) + \big(r(t) - \theta\big)B(t,T) + \sigma \int_t^T \int_t^u e^{-a(u-v)}\,dW_v\, du \end{align*}

and splitting this into the terms that contribute to the expectation and the variance we have

$\mu = \theta\,(T-t) + \big(r(t) - \theta\big)B(t,T) \quad ; \quad {\rm Var} = \sigma^2\, {\mathbb E}\Big[ \Big( \int_t^T \int_t^u e^{-a(u-v)}\,dW_v\, du \Big)^2 \Big]$

to calculate the variance, we first need to deal with the following term

$\int_t^T \int_t^u e^{-a(u-v)}\,dW_v\, du$

we use stochastic integration by parts (equivalently, swapping the order of the two integrations)

$\int_t^T \int_t^u e^{-a(u-v)}\,dW_v\, du = \int_t^T \Big( \int_v^T e^{-a(u-v)}\,du \Big)\, dW_v = \int_t^T B(v,T)\, dW_v$

and we're now in a position to find the variance of the integral

${\rm Var} = \sigma^2 \int_t^T B(v,T)^2\, dv$

where Ito's isometry has been used again, and several more lines of routine algebra lead to the result

${\rm Var} = {\sigma^2\over a^2}\Big( (T-t) - 2B(t,T) + {1\over 2a}\big(1 - e^{-2a(T-t)}\big) \Big)$
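The variance of the integrated short rate can also be checked numerically. This is my own sketch with illustrative parameters, simulating the rate via its exact transition density and integrating with the trapezoidal rule:

```python
import numpy as np

# Monte Carlo check of the standard Vasicek result
#   Var[int_t^T r(u)du] = (sigma^2/a^2)(tau - 2B + (1 - e^{-2 a tau})/(2a)),
# with B = (1 - e^{-a tau})/a
a, theta, sigma, r0, T = 0.5, 0.03, 0.01, 0.02, 5.0
n_paths, n_steps = 100_000, 500
dt = T / n_steps

rng = np.random.default_rng(2)
decay = np.exp(-a * dt)
step_sd = sigma * np.sqrt((1 - decay**2) / (2 * a))
r = np.full(n_paths, r0)
integral = 0.5 * r * dt                     # trapezoidal rule, first endpoint
for k in range(n_steps):
    r = theta + (r - theta) * decay + step_sd * rng.standard_normal(n_paths)
    integral += (0.5 * dt if k == n_steps - 1 else dt) * r

B = (1 - np.exp(-a * T)) / a
exact_var = sigma**2 / a**2 * (T - 2 * B + (1 - np.exp(-2 * a * T)) / (2 * a))
print(integral.var(), exact_var)  # agree to within sampling error
```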

## Bootstrapping the Discount Curve from Swap Rates

Today's post will be a short one about the calculation of discount curves from swap rates. I've discussed both swaps and discount curves in previous posts – you should read those before this one or it might not make much sense!

Although bonds can be used to calculate discount bond prices, swaps are typically the most liquid products on the market and go to the longest expiry times (often 80+ years for major currencies), so these are used to calculate many of the points on the discount curve [and often both can be used simultaneously to give better reliability].

In the previous post on swaps, I calculated the swap rate $S$ that makes a swap zero-valued at the current time $t$

$S = {{\rm ZCB}(t,t_0) - {\rm ZCB}(t,t_N) \over \sum_{n=1}^{N} \tau_n\, {\rm ZCB}(t,t_n)}$

where the $t_n$'s here represent the fixing dates of the swap and $\tau_n$ is the year fraction between $t_{n-1}$ and $t_n$ (payment is made at the beginning of the following period, so the $n$'th period's payment is received at $t_{n+1}$).

Consider the sequence of times $t_0, t_1, \dots, t_N$ for which a sequence of swaps is quoted on the market, with swap rates $S_n$ for the swap running from $t_0$ up to $t_n$. We can back out the discount factor at each time as follows:

$S_1 = {{\rm ZCB}(t,t_0) - {\rm ZCB}(t,t_1) \over \tau_1\, {\rm ZCB}(t,t_1)} \quad \Rightarrow \quad {\rm ZCB}(t,t_1) = {{\rm ZCB}(t,t_0) \over 1 + \tau_1 S_1}$

and we can see from this the general procedure, calculating another ZCB from each successive swap rate using the expression

${\rm ZCB}(t,t_n) = {{\rm ZCB}(t,t_0) - S_n \sum_{i=1}^{n-1} \tau_i\, {\rm ZCB}(t,t_i) \over 1 + \tau_n S_n}$

These swaps and ZCBs are called co-initial because they all start at the same time $t_0$.

Now imagine that instead the swaps $S_n$ have their first fixing at time $t_n$ and their final fixing at time $t_N$, for $n \in \{0, 1, \dots, N-1\}$ – such swaps are called co-terminal swaps as they start at different times but finish at the same one. Once again we can calculate the discount factors up to a constant factor, this time by working backwards:

$S_{N-1} = {{\rm ZCB}(t,t_{N-1}) - {\rm ZCB}(t,t_N) \over \tau_N\, {\rm ZCB}(t,t_N)} \quad \Rightarrow \quad {\rm ZCB}(t,t_{N-1}) = {\rm ZCB}(t,t_N)\big(1 + \tau_N S_{N-1}\big)$

and so on – each discount factor can be backed out in terms of ${\rm ZCB}(t,t_N)$ via

${\rm ZCB}(t,t_n) = {\rm ZCB}(t,t_N) + S_n \sum_{i=n+1}^{N} \tau_i\, {\rm ZCB}(t,t_i)$

To specify the exact values of the co-terminal discount factors, we need to know at least one of them exactly. In general the co-initial case will also require this – I implicitly assumed above that the first fixing was at $t_0 = t$, where we know ${\rm ZCB}(t,t) = 1$, but for general co-initial swaps starting in the future we would have the same issue.
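The co-initial bootstrap can be verified with a round-trip sketch (my own toy example – annual fixings and a flat 3% curve are assumptions, not from the post): generate discount factors, compute the implied par swap rates, then recover the discount factors one at a time.

```python
import numpy as np

taus = np.full(10, 1.0)                  # annual fixings (year fractions)
times = np.cumsum(taus)
true_dfs = np.exp(-0.03 * times)         # toy curve: flat 3% continuous rate

# implied par swap rates: S_n = (1 - P_n) / sum_{i<=n} tau_i P_i
annuity = np.cumsum(taus * true_dfs)
swap_rates = (1 - true_dfs) / annuity

# bootstrap: P_n = (1 - S_n * annuity_{n-1}) / (1 + S_n * tau_n)
dfs = []
running_annuity = 0.0
for S, tau in zip(swap_rates, taus):
    P = (1 - S * running_annuity) / (1 + S * tau)
    dfs.append(P)
    running_annuity += tau * P

err = np.max(np.abs(np.array(dfs) - true_dfs))
print(err)  # zero up to floating-point rounding
```

Because the recursion is an exact algebraic inversion of the swap-rate formula, the round trip recovers the curve to machine precision.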

## Solving the Heat Equation

This post follows on from my earlier post on the BS Equation from Delta Hedging. In that post, I showed how delta hedging arguments within the BS model lead to the heat equation

${1\over 2}\sigma^2 {\partial^2 D\over \partial y^2} - {\partial D \over \partial \tau} = 0$

where $D = C e^{r\tau}$ is the undiscounted option price, C(S,t) is the derivative price, $y = \ln S + \big( r - {1\over 2}\sigma^2 \big)\tau$, and $\tau = T - t$. In this post I'm going to solve this equation to give the BS option price for a vanilla call. S can only take positive values, but due to the logarithm, y runs from $-\infty$ to $\infty$.

In physics, this equation is usually encountered with periodic boundary conditions coming from the boundaries of the region of interest, but here we don’t have that luxury so will need to be more careful. In this post I’m going to solve the equation using the method of separation of variables, in a later post I’ll re-solve it using the method of Fourier transforms.

The heat equation is linear, which means that if $f(y,\tau)$ and $g(y,\tau)$ are both solutions, then so is the sum $\big( f(y,\tau) + g(y,\tau) \big)$. This means that if we can find a selection of possible solutions, we can combine them to match the boundary condition given by the known form of $D(y,\tau = 0)$ at expiry (coming from $C(S,\tau = 0) = \big( S - K \big)^+$). We look for separable solutions, which are those of the form $f(y,\tau) = Y(y)T(\tau)$, in which case the derivatives in the heat equation only act on their respective terms:

${1\over 2}\sigma^2 T(\tau){\partial^2 Y(y)\over \partial y^2} - Y(y){\partial T(\tau) \over \partial \tau} = 0$

${1\over 2}\sigma^2 Y''T = Y \dot{T}$

${1\over 2}\sigma^2 {Y''\over Y} = {\dot{T} \over T} = \lambda^2$

where a dash denotes differentiation by $y$ and a dot differentiation by $\tau$. In the final equation, both sides depend only on the independent variables $y$ and $\tau$ respectively – the only way this can be true is if both sides are equal to some constant, which we call $\lambda^2$.

In general, we should consider the possibilities that $\lambda^2$ is zero or negative, but it turns out that these won't matter in this example so I ignore them for brevity. Two equations can be extracted, linked by $\lambda^2$

$Y'' - 2 {\lambda^2 \over \sigma^2} Y = 0$

$\dot{T} - \lambda^2 T = 0$

Solving these equations,

$Y_\lambda(y) = Y_{0,\lambda}\, e^{\sqrt{2}{\lambda \over \sigma}y}$

$T_\lambda(\tau) = e^{\lambda^2 \tau}$

with any positive value of lambda allowed. We first try to find the fundamental solution of the heat equation, which is the solution that satisfies $f(y,\tau = 0) = \delta(y-a)$ where $\delta$ is the Dirac delta function, and I'll use this to find the particular solution for a vanilla call at the end. At $\tau = 0$ all of the functions T are 1, so these can be neglected, and we simply sum up the right number of Y terms, each with a different value of $\lambda$ which we index with n, to produce the delta function:

$\sum_n Y_{0,\lambda_n}\, e^{\sqrt{2}{\lambda_n \over \sigma}y} = \delta(y-a)$

Because $\lambda$ can take any value on the continuous interval $-\infty < \lambda < \infty$, we change the sum to an integral, and also express the delta function using its integral representation, comparing coefficients to equate the two expressions

$\int_{-\infty}^{\infty} Y_{0,\lambda_n}\, e^{\sqrt{2}{\lambda_n \over \sigma}y}\, dn = {1 \over 2\pi} \int_{-\infty}^{\infty} e^{in(y-a)}\, dn$

so

$Y_{0,n} = {1 \over 2\pi}e^{-in a} \quad ; \quad \lambda_n = {1 \over \sqrt{2}}\sigma i n$

And plugging these into our earlier expressions for Y and T, and combining these in the same way for $\tau > 0$ to provide the temporal behaviour, we arrive at

$f(y,\tau) = \int_{-\infty}^{\infty} Y_{\lambda_n}(y)\, T_{\lambda_n}(\tau)\, dn$

$= {1\over 2\pi} \int_{-\infty}^{\infty} e^{in(y-a)}\, e^{-{\sigma^2 n^2 \tau \over 2}}\, dn$

and once again, this is a gaussian integral that we’ll tackle by completing the square…

$= {1\over 2\pi} \int_{-\infty}^{\infty} \exp\Big\{ -{1 \over 2} \sigma^2 \tau \Big( n^2 - 2n {i(y-a)\over \sigma^2 \tau} + \big( {i(y-a)\over \sigma^2 \tau}\big)^2 - \big( {i(y-a)\over \sigma^2 \tau}\big)^2 \Big) \Big\}\, dn$

$= {1\over 2\pi} \int_{-\infty}^{\infty} \exp\Big\{ -{1 \over 2} \sigma^2 \tau \Big( n - {i(y-a)\over \sigma^2 \tau} \Big)^2 \Big\} \cdot \exp\Big\{{-(y-a)^2 \over 2 \sigma^2 \tau}\Big\}\, dn$

$= {1\over 2\pi}\cdot\sqrt{2\pi \over \sigma^2 \tau}\cdot \exp\Big\{{-(y-a)^2 \over 2 \sigma^2 \tau}\Big\}$

$= {1\over \sqrt{2\pi \sigma^2 \tau}}\cdot \exp\Big\{{-(y-a)^2 \over 2 \sigma^2 \tau}\Big\}$

and this is the fundamental solution given at the end of the first post and also at wikipedia. Note that although we used separation of variables to find the forms of the functions Y and T, the separability hasn’t been carried through the summation of many different T-Y combinations – the fundamental solution that we’ve arrived at ISN’T separable!
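It's easy to confirm numerically that this gaussian really does solve the heat equation. Here's a quick finite-difference check (my own sketch, with arbitrary illustrative values of $\sigma$, $a$, $y$ and $\tau$):

```python
import numpy as np

# Check that f(y, tau) = exp(-(y-a)^2 / (2 sigma^2 tau)) / sqrt(2 pi sigma^2 tau)
# satisfies (1/2) sigma^2 f_yy - f_tau = 0
sigma, a = 0.2, 0.0

def f(y, tau):
    return np.exp(-(y - a)**2 / (2 * sigma**2 * tau)) / np.sqrt(2 * np.pi * sigma**2 * tau)

y, tau, h = 0.1, 1.0, 1e-4
f_yy = (f(y + h, tau) - 2 * f(y, tau) + f(y - h, tau)) / h**2   # central 2nd difference
f_tau = (f(y, tau + h) - f(y, tau - h)) / (2 * h)               # central 1st difference
residual = 0.5 * sigma**2 * f_yy - f_tau
print(residual)  # ~0 up to finite-difference error
```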

The fundamental solution is the time and space evolution of a function that is initially a delta-function, such that the evolution always obeys the heat equation. We can use the following property of the delta function to construct specific solutions for particular payoff functions:

$f(y) = \int_{-\infty}^{\infty} \delta(y-a)\, f(a)\, da$

So, because the fundamental solution satisfies $f(y,\tau=0) = \delta(y-a)$, we can construct the payoff function using this integral at $\tau = 0$

$D(y,\tau=0) = \int_{-\infty}^{\infty} f(a,\tau=0)\, D(a,\tau=0)\, da$

and consequently, at later times

$D(y,\tau) = \int_{-\infty}^{\infty} f(a,\tau)\, D(a,\tau=0)\, da$

We can solve this for the specific case $C(S,\tau = 0) = \big( S(\tau=0) - K \big)^+$ as follows (relabelling $a$ as $y'$, and re-instating the discount factor to recover the option price $C = e^{-r\tau}D$)

$C(y,\tau) = {e^{-r\tau} \over \sqrt{2\pi\sigma^2\tau}}\int_{-\infty}^{\infty} \exp\Big\{ - {(y'-y)^2 \over 2\sigma^2 \tau} \Big\} \cdot \big( e^{y'} - K \big)^+\, dy'$

The second factor is only non-zero when $y' > \ln K = y_L$, so we can absorb this condition into the lower limit

$= {e^{-r\tau} \over \sqrt{2\pi\sigma^2\tau}}\int_{y_L}^{\infty} \exp\Big\{ - {(y'-y)^2 \over 2\sigma^2 \tau} \Big\} \cdot \big( e^{y'} - K \big)\, dy'$

and splitting this into two integrals, which we do in turn

${e^{-r\tau} \over \sqrt{2\pi\sigma^2\tau}}\int_{y_L}^{\infty} K\exp\Big\{ - {(y'-y)^2 \over 2\sigma^2 \tau} \Big\}\, dy' = e^{-r\tau}\cdot K\cdot \Phi\Big({y - y_L \over \sigma \sqrt{\tau}}\Big)$

and

${e^{-r\tau} \over \sqrt{2\pi\sigma^2\tau}}\int_{y_L}^{\infty} \exp\Big\{ - {(y'-y)^2 \over 2\sigma^2 \tau} \Big\} \cdot e^{y'}\, dy'$

$= {e^{-r\tau} \over \sqrt{2\pi\sigma^2\tau}}\int_{y_L}^{\infty} \exp\Big\{ - {1\over 2\sigma^2 \tau}\Big( y'^2 + y^2 - 2y'y - 2y'\sigma^2\tau \pm ( y + \sigma^2\tau )^2 \Big) \Big\}\, dy'$

$= {e^{-r\tau} \over \sqrt{2\pi\sigma^2\tau}}\int_{y_L}^{\infty} \exp\Big\{ - {1\over 2\sigma^2 \tau}\Big( \big(y'-(y+\sigma^2 \tau)\big)^2 - \big(y + \sigma^2 \tau\big)^2 + y^2\Big) \Big\}\, dy'$

$= {e^{-r\tau} \over \sqrt{2\pi\sigma^2\tau}}\int_{y_L}^{\infty} \exp\Big\{ - {1\over 2\sigma^2 \tau}\Big( \big(y'-(y+\sigma^2 \tau)\big)^2 - \sigma^2\tau\big(2y + \sigma^2 \tau\big)\Big) \Big\}\, dy'$

$= {e^{-r\tau} \over \sqrt{2\pi\sigma^2\tau}}\int_{y_L}^{\infty} \exp\Big\{ - {\big(y'-(y+\sigma^2 \tau)\big)^2\over 2\sigma^2 \tau}\Big\}\cdot S e^{r\tau}\, dy'$

$= e^{-r\tau} \cdot F \cdot \Phi \Big( {y - y_L + \sigma^2 \tau \over \sigma \sqrt{\tau}} \Big)$

where as usual, $\Phi(x)$ is the cumulative normal distribution; and we've used the expression $y = \ln S + (r - {1\over 2}\sigma^2)\tau$ and also $e^{y + {1 \over 2}\sigma^2 \tau} = S e^{r\tau} = F$, the forward price. Putting all of this together, we finally arrive at the Black-Scholes expression for a vanilla call price,

${e^{-r\tau} \over \sqrt{2\pi\sigma^2\tau}}\int_{y_L}^{\infty} \exp\Big\{ - {(y'-y)^2 \over 2\sigma^2 \tau} \Big\} \cdot \big( e^{y'} - K \big)\, dy'$

$= e^{-r\tau} \Big( F \cdot \Phi(d_1) - K\cdot \Phi(d_2)\Big)$

with $d_1 = {y - y_L + \sigma^2\tau \over \sigma\sqrt{\tau}}$ and $d_2 = {y - y_L \over \sigma\sqrt{\tau}} = d_1 - \sigma\sqrt{\tau}$.

Wow! Finally! Hopefully this will convince you that the method of risk-neutral valuation is a bit more straight-forward!
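As a final check, the closed-form price agrees with direct numerical integration of the payoff against the fundamental solution. This sketch is mine, with arbitrary illustrative market parameters:

```python
import numpy as np
from math import erf, log, sqrt, exp, pi

# Compare C = e^{-r tau}(F Phi(d1) - K Phi(d2)) with trapezoidal integration
# of the integral over y' derived above
S, K, r, sigma, tau = 100.0, 95.0, 0.05, 0.2, 1.0

Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
y = log(S) + (r - 0.5 * sigma**2) * tau
y_L = log(K)
F = S * exp(r * tau)
d2 = (y - y_L) / (sigma * sqrt(tau))
d1 = d2 + sigma * sqrt(tau)
closed = exp(-r * tau) * (F * Phi(d1) - K * Phi(d2))

# trapezoidal rule on [y_L, y + 10 sigma sqrt(tau)]; the gaussian kernel makes
# the truncated tail negligible
yp = np.linspace(y_L, y + 10 * sigma * sqrt(tau), 200_001)
h = yp[1] - yp[0]
g = np.exp(-(yp - y)**2 / (2 * sigma**2 * tau)) * (np.exp(yp) - K)
numeric = exp(-r * tau) / sqrt(2 * pi * sigma**2 * tau) * h * (g.sum() - 0.5 * (g[0] + g[-1]))

print(closed, numeric)  # the two agree closely
```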