# Stochastic Differential Equations Pt2: The Lognormal Distribution

This post follows from the earlier post on Stochastic Differential Equations.

I finished last time by saying that the solution to the BS SDE for terminal spot at time T was

$S_T = S_0\, e^{(\mu - {1 \over 2}\sigma^2)T + \sigma W_T}$

When we solve an ODE, it gives us an expression for the position of a particle at time T. But we’ve already said that we are uncertain about the price of an asset in the future, and this expression captures that uncertainty through the $\sigma W_T$ term in the exponent. We said in the last post that the difference between this quantity at two different times s and t is normally distributed, and since this term is the distance between t=0 and t=T (we have implicitly ignored a term $W_0$, but this is fine because we assumed that the process started at zero) it is also normally distributed,

$W_T \sim \mathcal{N}(0,T)$

It’s a well-known property of the normal distribution (see the Wikipedia entry for this and many others) that if $X \sim \mathcal{N}(0,1)$ then $aX \sim \mathcal{N}(0,a^2)$ for constant $a$. We can use this in reverse to reduce $W_T$ to a standard normal variable $X$, taking a square root of time outside of the distribution so that $W_T \sim \sqrt{T}\cdot\mathcal{N}(0,1)$, and we now only need standard normal variables, which we know lots about. We can rewrite our first expression in these terms:

$S_T = S_0\, e^{(\mu - {1 \over 2}\sigma^2)T + \sigma\sqrt{T}\, X}$
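To make this concrete, here is a minimal numpy sketch that draws standard normal variables and maps them through this expression to produce samples of the terminal spot. The parameter values ($S_0 = 100$, $\mu = 0.05$, $\sigma = 0.2$, $T = 1$) are arbitrary choices for illustration, not anything canonical:

```python
import numpy as np

# A minimal sketch: sample the terminal spot directly from
#   S_T = S_0 * exp((mu - sigma^2/2) * T + sigma * sqrt(T) * X),  X ~ N(0,1).
# The parameter values below are arbitrary choices for illustration.
rng = np.random.default_rng(seed=42)

S0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0

X = rng.standard_normal(1_000_000)          # draws of X ~ N(0,1)
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * X)

print(S_T.mean(), S_T.std())                # sample mean and std of S_T
```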

What does all of this mean? In an ODE setting, we’d be able to specify the exact position of a particle at time T. Once we try to build in uncertainty via SDEs, we implicitly sacrifice this ability, so instead we can only talk about expected positions, variances, and other probabilistic quantities. However, we certainly can do this: the properties of the normal distribution are very well understood from a probabilistic standpoint, so we expect to be able to make headway! Just as X is a random variable drawn from a normal distribution, S(t) is now a random variable whose distribution is a function of the random variable X and the other, deterministic terms in the expression. We call this distribution the lognormal distribution, since the log of S is distributed normally.

The random nature of S is determined entirely by the random nature of X: a draw of X entirely determines the corresponding value of S, since the remaining terms are deterministic. The first things we might want to do are to calculate the expectation of S, calculate its variance, and plot its distribution. To calculate the expectation, we integrate over all possible realisations of X weighted by their probability, complete the square and use the Gaussian integral formula after a change of variables:

${\mathbb E}[S_t] = \int^{\infty}_{-\infty} S_t(x)\, p(x)\, dx$

$= {S_0 \over \sqrt{2\pi}}\int^{\infty}_{-\infty} e^{(\mu - {1\over 2}\sigma^2)t + \sigma\sqrt{t}\,x}\, e^{-{1\over 2}x^2}\, dx$

$= {S_0\, e^{(\mu - {1\over 2}\sigma^2)t} \over \sqrt{2\pi}}\int^{\infty}_{-\infty} e^{-{1\over 2}x^2 + \sigma\sqrt{t}\,x}\, dx$

$= {S_0\, e^{(\mu - {1\over 2}\sigma^2)t} \over \sqrt{2\pi}}\int^{\infty}_{-\infty} e^{-{1\over 2}(x - \sigma\sqrt{t})^2}\, e^{{1\over 2}\sigma^2 t}\, dx$

$= {S_0\, e^{\mu t} \over \sqrt{2\pi}}\int^{\infty}_{-\infty} e^{-{1\over 2}y^2}\, dy$

$= S_0\, e^{\mu t}$

where we substituted $y = x - \sigma\sqrt{t}$ in the penultimate line. This is just the deterministic drift term acting over time [exercise: calculate the variance in a similar way].
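As a quick sanity check, here is a short scipy sketch (using the same illustrative parameters as before) that evaluates the expectation integral numerically and compares it against the closed form $S_0 e^{\mu t}$:

```python
import numpy as np
from scipy.integrate import quad

# Numerically evaluate E[S_t] = integral of S_t(x) * p(x) dx and compare it
# to the closed form S_0 * exp(mu * t). Parameters are illustrative only.
S0, mu, sigma, t = 100.0, 0.05, 0.2, 1.0

def integrand(x):
    S_t = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * x)
    p_x = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)   # standard normal pdf
    return S_t * p_x

numeric, _ = quad(integrand, -np.inf, np.inf)
print(numeric, S0 * np.exp(mu * t))                    # both ~ 105.127
```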

We know what the probability distribution of X looks like (it’s a standard normal variable), but what does the probability distribution of S look like? We can calculate the pdf using the change-of-variables technique, which says that if $S = g(x)$, then the areas under the two curves over corresponding regions must be equal:

$\int_{x_1}^{x_2} p_x(x)\, dx = \int_{g(x_1)}^{g(x_2)} p_S(S)\, dS$

$p_x(x)\, dx = p_S(S)\, dS$

$p_S(S_t) = p_x(x)\, {dx \over dS_t}$

We know the function S(x), but the easiest way to calculate this derivative is first to invert the function to make it amenable to differentiation:

$x = {\ln{S_t \over S_0} - (\mu - {1 \over 2}\sigma^2)t \over \sigma\sqrt{t}}$

${dx \over dS_t} = {1 \over \sigma\sqrt{t}\, S_t}$
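If you want to check the algebra, a small sympy sketch can do the inversion and the differentiation symbolically (all symbols here are generic placeholders, not data):

```python
import sympy as sp

# Symbolically invert S_t = S_0 * exp((mu - sigma^2/2) t + sigma sqrt(t) x)
# for x, then differentiate x with respect to S_t.
S0, St, mu, sigma, t, x = sp.symbols('S_0 S_t mu sigma t x', positive=True)

St_of_x = S0 * sp.exp((mu - sigma**2 / 2) * t + sigma * sp.sqrt(t) * x)

x_of_St = sp.solve(sp.Eq(St, St_of_x), x)[0]
# Should match (log(S_t/S_0) - (mu - sigma^2/2) t) / (sigma sqrt(t)),
# up to rearrangement:
print(sp.simplify(x_of_St))
# Should match 1 / (S_t sigma sqrt(t)):
print(sp.simplify(sp.diff(x_of_St, St)))
```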

So the pdf of $S_t$, expressed in terms of $S_t$, is

$p_S(S_t) = {1 \over S_t \sigma \sqrt{2\pi t}}\, \exp\left(-{\bigl(\ln{S_t \over S_0} - (\mu - {1\over 2}\sigma^2)t\bigr)^2 \over 2\sigma^2 t}\right)$

Well, it’s a nasty-looking function indeed! I’ve plotted it below for a few typical parameter sets and evolution times.
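If you’d like to reproduce such a plot yourself, here is a minimal matplotlib sketch of the density above; the parameter sets are again arbitrary illustrative choices:

```python
import numpy as np
import matplotlib.pyplot as plt

def lognormal_pdf(S, S0, mu, sigma, t):
    """The density p_S(S_t) derived above."""
    arg = (np.log(S / S0) - (mu - 0.5 * sigma**2) * t) ** 2 / (2.0 * sigma**2 * t)
    return np.exp(-arg) / (S * sigma * np.sqrt(2.0 * np.pi * t))

S = np.linspace(1.0, 300.0, 1000)
# A few illustrative parameter sets and evolution times
for sigma, t in [(0.1, 1.0), (0.2, 1.0), (0.2, 4.0)]:
    plt.plot(S, lognormal_pdf(S, S0=100.0, mu=0.05, sigma=sigma, t=t),
             label=f"sigma={sigma}, t={t}")
plt.xlabel("$S_t$")
plt.ylabel("$p_S(S_t)$")
plt.legend()
plt.show()
```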

This distribution is really central to a lot of what we do, so I’ll come back to it soon and discuss a few more of its properties. The one other thing to mention is that if we want to calculate the expected value of a function of S (which will turn out to be something we do a lot), we have two approaches: either integrate over $p_S(S_t)$

${\mathbb E}[f(S_t)] = \int_0^{\infty} f(S_t)\, p_S(S_t)\, dS_t$

or express the function in terms of x (using $S_t = S_0 e^{(\mu - {1 \over 2}\sigma^2)t + \sigma\sqrt{t}\,x}$) and integrate over the normal distribution instead

${\mathbb E}[f(S_t)] = \int_{-\infty}^{\infty} f(S_t(x))\, p_x(x)\, dx$
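Here is a quick quadrature sketch comparing the two routes. The function f below is an arbitrary example chosen for the illustration, and the parameters are the same assumed values as before:

```python
import numpy as np
from scipy.integrate import quad

S0, mu, sigma, t = 100.0, 0.05, 0.2, 1.0
f = np.sqrt                        # an arbitrary example function f

def p_S(S):
    # The lognormal density p_S(S_t) derived above
    arg = (np.log(S / S0) - (mu - 0.5 * sigma**2) * t) ** 2 / (2.0 * sigma**2 * t)
    return np.exp(-arg) / (S * sigma * np.sqrt(2.0 * np.pi * t))

def S_of_x(x):
    # S_t as a function of the standard normal variable x
    return S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * x)

p_x = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

via_S, _ = quad(lambda S: f(S) * p_S(S), 0.0, np.inf)
via_x, _ = quad(lambda x: f(S_of_x(x)) * p_x(x), -np.inf, np.inf)
print(via_S, via_x)                # the two should agree to quadrature accuracy
```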

The second option is typically the easier one; I think the substitution is called the Law of the Unconscious Statistician. On that note, we’ve certainly covered enough ground for the moment!

-QuantoDrifter