Root Finders

One of the nice things about this job is the variety. There’s lots of maths, but the topics aren’t limited to set areas. So one day I might be working with stochastic calculus or statistics, and the next I have to delve into numerical techniques to solve a problem and then code it up.

I hinted at this briefly in my last post, about implied vol. In order to calculate one, I had to solve an equation of the form

C_{BS}({\rm Fwd},K,\sigma,t,\phi) - {\rm Market\ Price} = 0

by varying \sigma, and because the form of C_{BS} isn't tractable in general, numerical techniques must be used.

The easiest solver to implement is a bisection solver. This needs to be given two points on either side of the root, so that C_{BS}(...,\sigma_1,...) is negative and C_{BS}(...,\sigma_2,...) is positive (for vols, sensible values might be \sigma_1 = 0.01% and \sigma_2 = 200%). It then takes the mid-point of these two (call it \sigma_3) and evaluates the function there. If the value is positive, \sigma_3 becomes the new \sigma_2; if negative, \sigma_3 becomes the new \sigma_1. Each iteration halves the remaining interval between the values. The method is very robust and will always home in on the root, but it is relatively slow. The difference between the n-th value c_n and the true value c obeys

| c_n - c | \leq {| b - a| \over 2^n}

where b and a are the initial values. If we want this difference to be smaller than \inline 1\times 10^{-d} (that is, d decimal places of accuracy), we need

{|b-a|\over 2^n} \leq 1\times 10^{-d}

n \geq \log_2 |b-a| + 3.32\cdot d

since \log_2 10 \simeq 3.32. This means that each additional decimal place of accuracy requires a further 3.32 iterations or so.
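As a concrete sketch, here is roughly what a bisection solver might look like in JavaScript. The name bisectionSolver and its signature are my own, for illustration only; the example at the bottom solves x^2 - 2 = 0 rather than an option-pricing problem:

```javascript
// Illustrative bisection solver: finds a root of f in [a, b], where
// f(a) and f(b) must have opposite signs so the root is bracketed.
function bisectionSolver(f, a, b, tolerance) {
  var fa = f(a);
  var fb = f(b);
  if (fa * fb > 0) {
    throw new Error("f(a) and f(b) must have opposite signs");
  }
  while (b - a > tolerance) {
    var mid = 0.5 * (a + b);
    var fmid = f(mid);
    if (fa * fmid <= 0) {
      // the sign change (and hence the root) is in [a, mid]
      b = mid;
      fb = fmid;
    } else {
      // the root is in [mid, b]
      a = mid;
      fa = fmid;
    }
  }
  return 0.5 * (a + b);
}

// Example: solve x^2 - 2 = 0 on [0, 2], i.e. find sqrt(2)
var bisectionRoot = bisectionSolver(function (x) { return x * x - 2; }, 0, 2, 1e-10);
```

With a tolerance of 1e-10 on an interval of width 2, the loop runs the 30-odd iterations the formula above predicts.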

Faster methods are less reliable. For example, the secant method is a finite-difference version of Newton-Raphson. The two initial points don't need to be on opposite sides of the root, but they should be near it. The expression for successive approximations to the root is

x_n = x_{n-1} - f(x_{n-1}){x_{n-1}-x_{n-2} \over f(x_{n-1}) - f(x_{n-2})}

This is an interpolation technique; it works roughly as shown in the figures below. Its convergence is much faster than the bisection technique when close to the root, but, like Newton-Raphson, it is not guaranteed to converge at all. On its own this isn't much use to us – we'd like to combine the benefits of both techniques.

A successful setup for the secant method
Here we can see the secant method working. The first step interpolates between the function values f(x1) and f(x2) and finds a better approximation x3. The next step interpolates f(x2) and f(x3), leading to a new approximation x4 outside the interval [x2,x3]. From this point the procedure converges rapidly – it takes 14 steps to achieve 16 decimal places of accuracy (cf. about 55 steps for similar accuracy from the bisection method).
Here we see the secant method failing, due to the curvature of the function and the poor initial choice of x1 and x2. f(x2) and f(x3) are interpolated, leading to a chord that may or may not cross the function (depending on its behaviour at negative x). Clearly the secant method is not robust to poor initial values or pathological function behaviour.
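The iteration in the formula above is short to code. This is a sketch under my own naming (secantSolver is not from the pricer), stopping when successive approximations agree to within the tolerance, and giving up after a maximum number of iterations since convergence isn't guaranteed:

```javascript
// Illustrative secant solver: iterates
//   x_n = x_{n-1} - f(x_{n-1}) * (x_{n-1} - x_{n-2}) / (f(x_{n-1}) - f(x_{n-2}))
// starting from two guesses x0, x1 (not necessarily bracketing the root).
function secantSolver(f, x0, x1, tolerance, maxIterations) {
  var f0 = f(x0);
  var f1 = f(x1);
  for (var i = 0; i < maxIterations; i++) {
    var denom = f1 - f0;
    if (denom === 0) {
      throw new Error("flat chord: secant step undefined");
    }
    var x2 = x1 - f1 * (x1 - x0) / denom;
    if (Math.abs(x2 - x1) < tolerance) {
      return x2;
    }
    // shift the window of the two most recent points
    x0 = x1; f0 = f1;
    x1 = x2; f1 = f(x1);
  }
  throw new Error("secant method did not converge");
}

// Example: solve x^2 - 2 = 0 starting near the root
var secantRoot = secantSolver(function (x) { return x * x - 2; }, 1, 2, 1e-12, 50);
```

Starting from 1 and 2 this converges in a handful of steps; starting far from the root, or on an awkwardly curved function, the same code can wander off and hit the iteration cap, which is exactly the failure mode in the second figure.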

The industry-standard technique for this is Brent's method. The algorithm is a little complicated, but it essentially combines the two approaches: it uses the secant method, checks at each step that it is performing better than bisection would, and falls back on bisection if not. It also includes a few extra provisions for added speed and stability – once again, Wikipedia is a great reference for this. I've coded up a version of this for the pricer, and there's a demonstration of how it outperforms the bisection method below. Choose any of the functions in the list, enter some points and see how many iterations each one takes to find the root!
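To give a flavour of the idea, here is a heavily simplified hybrid, with names of my own invention. It is not the full Brent algorithm (which also uses inverse quadratic interpolation and tighter acceptance tests): it tries a secant step, and falls back on a plain bisection step whenever the secant step leaves the bracket or the same endpoint survives too many iterations in a row, which guarantees the interval keeps shrinking:

```javascript
// Simplified bisection/secant hybrid, in the spirit of Brent's method.
// Requires a bracketing interval [a, b] with f(a) and f(b) of opposite sign.
function hybridSolver(f, a, b, tolerance) {
  var fa = f(a);
  var fb = f(b);
  if (fa * fb > 0) {
    throw new Error("root must be bracketed by [a, b]");
  }
  var stuck = 0;      // consecutive iterations that kept the same endpoint
  var lastKept = "";
  for (var i = 0; i < 200 && Math.abs(b - a) > tolerance; i++) {
    var s = 0.5 * (a + b);                          // default: bisection step
    if (stuck < 2 && fb !== fa) {
      var candidate = b - fb * (b - a) / (fb - fa); // secant step
      if (candidate > Math.min(a, b) && candidate < Math.max(a, b)) {
        s = candidate;                              // accept: it stays in the bracket
      }
    } else {
      stuck = 0;                                    // forced bisection resets the counter
    }
    var fs = f(s);
    if (fs === 0) return s;
    if (fa * fs < 0) {
      b = s; fb = fs;                               // root in [a, s]: endpoint a kept
      stuck = (lastKept === "a") ? stuck + 1 : 0;
      lastKept = "a";
    } else {
      a = s; fa = fs;                               // root in [s, b]: endpoint b kept
      stuck = (lastKept === "b") ? stuck + 1 : 0;
      lastKept = "b";
    }
  }
  return 0.5 * (a + b);
}

var hybridRoot = hybridSolver(function (x) { return x * x - 2; }, 0, 2, 1e-10);
```

The anti-stalling check matters: a plain secant-with-fallback can creep towards the root from one side while the other endpoint never moves, so the bracket never shrinks below the tolerance. The real Brent method handles this with more care, but the same safeguard idea is at its heart.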

Just a brief note on the coding as well, since I said I'd cover everything here. I'm using JavaScript at the moment, mostly for speed of development. JavaScript is very forgiving (although consequently a pain to debug!): it allows functions to be assigned to variables and passed around as parameters. I created a function

 brentSolver( functionToSolve, lowerBound, upperBound, tolerance )

which takes a function of one argument, functionToSolve(z), that returns the value of the function at z, and finds a root of that function between the lower and upper bounds. Usually we will need the function to have more than one parameter, but again JavaScript is kind to us – in the block of code that calls the solver, we set the parameters we need and define a new function dynamically:

var price = 5;
var fwd = 100;
var strike = 100;
var time = 1;
var optionStyle = -1;

var functionToSolve = function ( vol ) {
  return vanillaPrice( fwd, strike, vol, time, optionStyle ) - price;
};

brentSolver( functionToSolve, 0.001, 200, 0.00000001 );

Much easier than in C++! Below is a basic doodle to let you try out the different solvers on fairly well-known functions – have a play and see what happens.


[Interactive demo: choose a root finder, enter lower and upper bounds, and see how many iterations each method takes to find the root.]
