One of the nice things about this job is the variety. There’s lots of maths, but the topics aren’t limited to set areas. So one day I might be working with stochastic calculus or statistics, and the next I have to delve into numerical techniques to solve a problem and then code it up.
I hinted at this briefly in my last post about implied vol. In order to do this, I had to solve an equation of the form

$$ f(\sigma) = C(\sigma) - C_{\rm market} = 0 $$

by varying $\sigma$, and because the form of $C$ in general isn't tractable, numerical techniques must be used.
The easiest solver to implement is a bisection solver. This needs to be given two points $a$ and $b$ on either side of the root, so that $f(a)$ is negative and $f(b)$ is positive (for vols, sensible values might be $a$ = 0.01% and $b$ = 200%). Then it simply takes the mid-point of these two (call it $c$) and evaluates the function there. If it's positive, $c$ becomes the new $b$, and if negative, $c$ becomes the new $a$. At each iteration this halves the remaining interval between the values. It's very robust and will always home in on the root, but is considered to be relatively slow. The difference between the $n$-th value $c_n$ and the true value $c^*$ obeys

$$ | c_n - c^* | \leq \frac{b - a}{2^n} $$

where $b$ and $a$ are the initial values. If we want this difference to be less than a tolerance $\epsilon$, we have

$$ n \geq \log_2 \Big( \frac{b - a}{\epsilon} \Big) \approx 3.32 \, \log_{10} \Big( \frac{b - a}{\epsilon} \Big) $$

since $\log_2 10 \approx 3.32$. This means that for each additional decimal place of accuracy, a further 3.32 iterations will be needed on average.
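To make this concrete, here's a minimal sketch of a bisection solver in JavaScript – not the pricer's production code, just an illustration with made-up names, assuming the root has already been bracketed:

// A minimal bisection solver - assumes f( lowerBound ) < 0 and f( upperBound ) > 0
function bisectionSolver( f, lowerBound, upperBound, tolerance ) {
    var a = lowerBound;
    var b = upperBound;
    while ( b - a > tolerance ) {
        var c = 0.5 * ( a + b );   // mid-point of the current interval
        if ( f( c ) > 0 ) {
            b = c;                 // root now bracketed by [ a, c ]
        } else {
            a = c;                 // root now bracketed by [ c, b ]
        }
    }
    return 0.5 * ( a + b );
}

Each pass through the loop halves the bracketing interval, which is exactly where the $\frac{b-a}{2^n}$ bound above comes from.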
Faster ways are less reliable. For example, the secant method is a finite difference version of Newton-Raphson. The first points chosen don't need to be on opposite sides of the root, but they should be near to the root. The expression used for successive approximations to the root is

$$ x_{n+1} = x_n - f(x_n) \, \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})} $$
This is an interpolation technique; it works roughly as shown in the figures below. It converges much faster than the bisection method when close to the root, but, like Newton-Raphson, it is not guaranteed to converge at all. This isn't much use to us – we'd like to combine the benefits of both techniques.
[Figures: successive secant interpolation steps homing in on the root]
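In code, the secant iteration is just as short. Here's a rough sketch (again with illustrative names, and note the deliberate lack of safeguards – if the iterates wander off, it simply gives up after maxIterations):

// A minimal secant solver - x0 and x1 should both be near the root
function secantSolver( f, x0, x1, tolerance, maxIterations ) {
    var f0 = f( x0 );
    var f1 = f( x1 );
    for ( var i = 0; i < maxIterations; i++ ) {
        // interpolate a straight line through the last two points
        var x2 = x1 - f1 * ( x1 - x0 ) / ( f1 - f0 );
        if ( Math.abs( x2 - x1 ) < tolerance ) {
            return x2;
        }
        x0 = x1; f0 = f1;
        x1 = x2; f1 = f( x2 );
    }
    return x1;   // not guaranteed to have converged!
}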
The industry-standard technique for this is Brent's method. The algorithm is a little complicated, but it essentially combines these two approaches: it uses the secant method, checks that it is performing better than bisection would, and falls back on bisection if not. It also includes a few extra provisions for added speed and stability – once again, Wikipedia is a great reference for this. I've coded up a version of this for the pricer, and there's a demonstration of how it outperforms the bisection method below. Choose any of the functions in the list, enter some points and see how many iterations each one takes to find the root!
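Full Brent adds inverse quadratic interpolation and some careful bookkeeping on top, so the sketch below is deliberately simplified – closer to Dekker's method than to Brent proper – but it shows the basic idea of trying a secant step and falling back on bisection whenever the step looks unsafe:

// A simplified secant-plus-bisection hybrid (Dekker-style, not full Brent)
// Assumes f( a ) and f( b ) have opposite signs
function hybridSolver( f, a, b, tolerance, maxIterations ) {
    var fa = f( a );
    var fb = f( b );
    for ( var i = 0; i < maxIterations; i++ ) {
        // candidate secant step through the two bracketing points
        var s = b - fb * ( b - a ) / ( fb - fa );
        // if it falls outside the bracket, take a bisection step instead
        if ( !( s > Math.min( a, b ) && s < Math.max( a, b ) ) ) {
            s = 0.5 * ( a + b );
        }
        var fs = f( s );
        // keep whichever pair of points still brackets the root
        if ( fa * fs < 0 ) {
            b = s; fb = fs;
        } else {
            a = s; fa = fs;
        }
        if ( fs === 0 || Math.abs( b - a ) < tolerance ) {
            return s;
        }
    }
    return 0.5 * ( a + b );   // full Brent guarantees convergence; this sketch just gives up
}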
Just a brief note on the coding as well, since I said I'd cover everything here. I'm using JavaScript at the moment, mostly for speed. JavaScript is very forgiving (although consequently a pain to debug!): it lets you define variables as functions and pass them around as parameters. I created a function
brentSolver( functionToSolve, lowerBound, upperBound, tolerance )
which takes a function of one argument, functionToSolve(z), that returns the value of the function at z, and finds a root of that function between the lower and upper bounds. Usually we will need this function to have more than one parameter, but again JavaScript is kind to us – in the block of code that calls it, we set the parameters we need and define a new function dynamically:
price = 5;
fwd = 100;
strike = 100;
time = 1;
optionStyle = -1;

functionToSolve = function ( vol ) {
    return vanillaPrice( fwd, strike, vol, time, optionStyle ) - price;
};

brentSolver( functionToSolve, 0.001, 200, 0.00000001 );
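One caveat: vanillaPrice above is whatever pricing function the pricer already provides, so it isn't defined here. If you want a self-contained example, a minimal stand-in – assuming an undiscounted Black-76 forward price, with optionStyle = +1 for calls and -1 for puts, and the standard Abramowitz & Stegun rational approximation to the normal CDF – could be:

// Standard normal CDF via the Abramowitz & Stegun 26.2.17 approximation (~1e-7 accurate)
function normCdf( x ) {
    var t = 1 / ( 1 + 0.2316419 * Math.abs( x ) );
    var d = Math.exp( -0.5 * x * x ) / Math.sqrt( 2 * Math.PI );
    var p = d * t * ( 0.319381530 + t * ( -0.356563782
            + t * ( 1.781477937 + t * ( -1.821255978 + t * 1.330274429 ) ) ) );
    return x >= 0 ? 1 - p : p;
}

// Undiscounted Black-76 forward price; optionStyle is +1 for a call, -1 for a put
function vanillaPrice( fwd, strike, vol, time, optionStyle ) {
    var sd = vol * Math.sqrt( time );
    var d1 = Math.log( fwd / strike ) / sd + 0.5 * sd;
    var d2 = d1 - sd;
    return optionStyle * ( fwd * normCdf( optionStyle * d1 )
                         - strike * normCdf( optionStyle * d2 ) );
}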
All of which is much easier than in C++!! Below is a basic doodle to let you try out the different solvers on fairly well-known functions – have a play and see what happens.