European vs. American Options

 
All of the options that I’ve discussed so far on this blog have been European options. A European option gives us the right to buy or sell an asset at a fixed price, but only on a particular expiry date. In this post, I’m going to start looking at American options, which give the right to buy or sell at ANY date up until the expiry date.

Surprisingly, despite the apparent extra utility of American options, it turns out that for vanilla options the price of the American and the European contract is almost always the same! Why is this?

The value of a European call option, broken down into its intrinsic and its time component. Volatility is 10%, strike is 100, time to expiry is 1 year and the risk-free rate is 1%

In general, American options are MUCH harder to price than European options: they depend in detail on the path that the underlying takes on its way to the expiry date, unlike Europeans which depend only on the terminal value, and no closed-form solution exists. One thing we can say is that an American option will never be LESS valuable than the corresponding European option, as it gives you extra optionality but doesn’t take anything away. So we can always take the European price to be a lower bound on American prices. Also note that Put-Call Parity no longer holds for Americans, and becomes instead an inequality.

How can we go any further? It is useful in this case to think about the value of an option as made up of two separate parts, an ‘intrinsic value’ and a ‘time value’, which sum to give the true option value. The ‘intrinsic value’ is the value that would be received if the option were exercised today – in the case of a vanilla call, this is simply \max(0,S-K). The ‘time value’ is the ‘extra’ value due to time-to-expiry. This is the volatility-dependent part of the price, since we are shielded by the optionality from price swings in the wrong direction, but are still exposed to upside from swings in our favour. As time goes by, the value of the option must approach the ‘intrinsic value’, as the ‘time value’ decays towards expiry.

Consider the graph above, which shows the BS value of a simple European call under typical parameters. Time value is maximal at-the-money, since this is the point where the implicit insurance that the option provides is most useful to us (far in- or out-of-the-money, the option is only useful if there are large price swings, which are unlikely).




What is the extra value that we should assign to an American call relative to a European call due to the extra optionality it gives us? In the case of an American option, at any point before expiry we can exercise and take the intrinsic value there and then. But up until expiry, the value of a European call option is ALWAYS* more than the intrinsic value, as the time value is non-negative. This means that we can sell the option on the market for more than the price that would be received by exercising an American option before expiry – so a rational investor should never do this, and the price of a European and American vanilla call should be identical.

It seems initially as though the same should be true for put options, but actually this turns out not quite to be right. Consider the graph below, showing the same values for a European vanilla put option, under the same parameters.

The value of a European put option under the same parameters as used above, broken down into intrinsic and time components. Unlike the call option, for far in-the-money puts the time value can be negative, so early exercise can be valuable

Notice that here, unlike before, when the put is far in-the-money the option value becomes smaller than the intrinsic value – the time value of the option is negative! In this case, if we held an American rather than a European option it might well make sense to exercise at this point, since we would receive the intrinsic value, which is greater than the option value on the market [actually it’s slightly more complicated, because in this scenario the American option price would be higher than the European value shown above, so the option would need to be a bit further in-the-money before it was worth exercising – you can see how this sort of recursive problem rapidly becomes hard to deal with!].

What is it that causes this effect for in-the-money puts? It turns out that it comes down to interest rates. Roughly what is happening is this – if we exercise an in-the-money American put to receive the intrinsic value, we receive (K-S) cash straight away. But if we left the option until expiry, our expected payoff is roughly (K-F), where F is the forward value

    \[F(t,T) = {1\over ZCB(t,T)} S(t)\]

so we can see that leaving the underlying to evolve is likely to harm our option value [this is only true for options deep enough in-the-money that we can roughly neglect the \max(0,K-S) criterion].

We can put this on a slightly more rigorous footing by thinking about the Greek for time-dependence, Theta. For vanilla options, this is given by

    \begin{align*} \Theta &= - {\partial V \over \partial \tau} \nonumber \\ &= ZCB(t,T)\cdot\Big\{-{\sigma F(t,T) \phi(d_1)\over 2\sqrt{\tau}} \mp rK\Phi(\pm d_2) \Big\} \nonumber \end{align*}

where F is the forward price from t to T, \phi(x) is the standard normal PDF, \Phi(x) is its CDF, ZCB is a zero-coupon bond from t to T, and the upper sign of the \mp refers to calls and the lower to puts.

The form for Theta shows exactly what I said in the last paragraph – for both calls and puts there is a negative component coming from the ‘optionality’, which is decreasing with time, and a term coming from the expected change in the spot at expiry due to interest rates which is negative for calls and positive for puts.

The plot below shows Theta for the two options shown in the graphs above, and sure enough where the time value of the European put goes negative, Theta becomes positive – the true option value is increasing with time instead of decreasing as usual, as the true value converges to the intrinsic value from below.

The Theta value for European options. For a European put, this becomes positive when the option value falls below the intrinsic value. The difference between these two Thetas is independent of spot, which can be seen directly from put-call parity.

*Caveats – I’m assuming a few things here – there are no dividends, rates are positive (negative rates reverses the situation discussed above – so that American CALLS can be more valuable than Europeans), no transaction fees or storage costs, and the other sensibleness and simpleness criteria that we usually assume apply.

In between European and American options lie Bermudan options, a class of options that can be exercised early but only at one of a specific set of times. As I said, it is in general really tough to price exotic options with early exercise features like these – I’ll look at some methods soon, but this introduction is enough for today!

Excel Monte-Carlo Pricer

I’ve updated the Monte-Carlo pricer to run in Excel. Give it a go and the benefits should be obvious – no more messing around with consoles or repeated typing; Excel provides a handy interface for running C++ code.

What you’ll need to do is:

  • Download the Excel pricing sheet
  • Download the .xll library (large!) – this is the equivalent of the old .exe file
  • Open the Excel file, and drag-and-drop the .xll file inside the pricing sheet
  • Click on the option to “enable this add-on for this session only”
  • The Option Price function should now re-calculate option price as you make changes to the parameters!

Try using the option pricer function yourself – it now works just like a native Excel function, so you can drag it across many cells and it will adjust target cells as usual, allowing easy variation of parameters and plotting of results.

If you’ve successfully set up XLW and Code::Blocks as I described in a previous post, I’ve also made the source code available in a .zip file which you can load in Code::Blocks and fiddle with yourself. I’ll be doing a bit of updating soon to make the code slightly cleaner and allow pricing of more interesting options.

The old console project source code is still available in compiled form and with the source code in a zip file at the bottom of the MC pricers page.

Forwards vs. Futures

I’ve covered Forwards and Futures in previous posts, and now that I’ve covered the basics of Stochastic Interest Rates as well, we can have a look at the difference between Forwards and Futures Contracts from a financial perspective.

As discussed before, the price of a Forward Contract is enforceable by arbitrage if the underlying is available and freely storable and there are Zero Coupon Bonds available to the Forward Contract delivery time. In this case, the forward price is

F(t,T) = S(t) \cdot {1 \over {\rm ZCB}(t,T)}

In this post I’m going to assume a general interest rate model, which in particular may well be stochastic. In such cases, the price of a ZCB at the present time is given by

{\rm ZCB}(t,T) = {\mathbb E}\Big[ \exp{\Big\{-\int_t^T r(t') dt'\Big\} } \Big]

Futures Contracts are a bit more complicated, and we need to extend our earlier description in the case that there are interest rates. The basic description was given before, but additionally in the presence of interest rates, any deposit that is in either party’s account accrues interest that is delivered to the OTHER party at the end of each time period. So, taking the example from the previous post, on day 4 we had $4 on account with the exchange – if rates on that day were 10% p.a., over that day the $4 balance would accrue roughly 0.1c of interest, which would be paid to the other party.

Let’s say we’re at time s, and want to calculate the Futures price to time T. Our replication strategy is now as follows, following the classic proof due to Cox, Ingersoll and Ross but in continuous time. Futures Contracts are free to enter into and break out of due to the margin in each account, so entering X Futures Contracts at time t and closing them at time t+dt will lead to a net receipt (or payment, if negative) of \inline {\rm X}\cdot\big[ H(t+dt,T) - H(t,T)\big]. From t+dt to T, we invest (borrow) this amount at the short rate and thus receive (need to pay)

{\rm X}\cdot\big[ H(t+dt,T) - H(t,T)\big]\cdot\prod_{t'=t+dt}^{T} \big( 1 + r(t')\,dt \big)

and now moving to continuous time

{\rm X}\cdot\big[ H(t+dt,T) - H(t,T)\big]\cdot e^{\int_t^T r(t')\,dt'}

We follow this strategy in continuous time, constantly opening contracts and closing them in the following time period [I’m glossing over discrete vs. continuous time here – as long as the short rate corresponds to the discrete time step involved this shouldn’t be a problem], and investing our profits and financing our losses both at the corresponding short rate. We choose a different X for each period [t,t+dt], so that \inline {\rm X}(t) = e^{\int_s^t r(t')dt'}. We also invest an amount H(s,T) at time s at the short rate, and continually roll this over so that it is worth \inline H(s,T)\cdot e^{\int_s^T r(t')dt'} at time T.

Only the final step of this strategy costs money to enter, so the net price of the portfolio and trading strategy is H(s,T). The net payoff at expiry is

H(s,T)\cdot e^{\int_s^T r(t')dt'} + \sum_s^T {\rm X}\cdot[H(t+dt,T)-H(t,T)]\cdot e^{\int_t^T r(t')dt'}

= H(s,T)\cdot e^{\int_s^T r(t')dt'} + \sum_s^T e^{\int_s^t r(t')dt'}\cdot[H(t+dt,T)-H(t,T)]\cdot e^{\int_t^T r(t')dt'}

= H(s,T)\cdot e^{\int_s^T r(t')dt'} + e^{\int_s^T r(t')dt'} \cdot \sum_s^T [H(t+dt,T)-H(t,T)]

= H(s,T)\cdot e^{\int_s^T r(t')dt'} + e^{\int_s^T r(t')dt'} \cdot [H(T,T)-H(s,T)] = H(T,T) \cdot e^{\int_s^T r(t')dt'}

And H(T,T) is S(T), so the net payoff of a portfolio costing H(s,T) is

= S(T) \cdot e^{\int_s^T r(t')dt'}

How does this differ from a portfolio costing the Forward price? Recall that in Risk-Neutral Valuation, the present value of an asset is the expectation of its future value discounted by the numeraire, which in the risk-neutral measure is a unit of cash B continually re-invested at the short rate, worth \inline B(t,T) = e^{\int_t^T r(t')dt' } at time T. A portfolio costing H(s,T) and paying S(T)\cdot e^{\int_s^T r(t')dt'} therefore tells us that the Futures price is a martingale in the risk-neutral measure (sometimes called the ‘cash measure’ because of its numeraire). So the current value of a Futures Contract on some underlying should be

H(t,T) = {\mathbb E}^{\rm RN}\big[ S(T) | {\cal F}_t \big]

i.e. the undiscounted expectation of the future spot in the risk-neutral measure. The Forward price is instead the expected price in the T-forward measure, whose numeraire is a ZCB expiring at time T

F(t,T) = {\mathbb E}^{\rm T}\big[ S(T) | {\cal F}_t \big]

We can express these in terms of each other, remembering F(T,T) = S(T) = H(T,T) and using a change of numeraire (post on this soon!). I also use the expression for two correlated lognormals, which I derive at the bottom of this post

\begin{align*} F(t,T) &= {\mathbb E}^{T}\big[ F(T,T) \,|\, {\cal F}_t \big] \\ &= {\mathbb E}^{T}\big[ S(T) \,|\, {\cal F}_t \big] \\ &= {\mathbb E}^{T}\big[ H(T,T) \,|\, {\cal F}_t \big] \\ &= {\mathbb E}^{RN}\Big[ H(T,T)\, {B(t)\over B(T)}\, {{\rm ZCB}(T,T)\over {\rm ZCB}(t,T)} \,\Big|\, {\cal F}_t \Big] \\ &= {1\over {\rm ZCB}(t,T)}\,{\mathbb E}^{RN}\Big[ H(T,T)\, e^{-\int_t^T r(t')dt'} \,\Big|\, {\cal F}_t \Big] \\ &= {1\over {\rm ZCB}(t,T)}\,{\mathbb E}^{RN}\big[ H(T,T) \,|\, {\cal F}_t \big]\, {\mathbb E}^{RN}\big[ e^{-\int_t^T r(t')dt'} \,|\, {\cal F}_t \big]\, e^{\sigma_H \sigma_B \rho} \\ &= H(t,T) \cdot e^{\sigma_H \sigma_B \rho} \end{align*}

where \inline \sigma_H is the volatility of the Futures price, and \inline \sigma_B is the volatility of a ZCB – in general the algebra will be rather messy!

As a concrete example, let’s consider the following model for asset prices, with S driven by a geometric brownian motion and rates driven by the Vasicek model discussed before

{dS \over S} = r(t) dt + \sigma_S dW_t

dr = a \big[ \theta - r(t)\big] dt + \sigma_r \widetilde{dW_t}

And (critically) assuming that the two brownian processes are correlated with coefficient \rho

dW_t \cdot \widetilde{dW_t} = \rho dt

In this case, the volatility \inline \sigma_B is the volatility of \inline {\mathbb E}\big[ e^{-\int_t^T r(t')dt'}\big], and as I discussed in the post on stochastic rates, this is tractable and lognormally distributed in this model.

We can see that in the case of correlated stochastic rates, these two prices are not the same – which means that Futures and Forward Contracts are fundamentally different financial products.

 

For two standard normal variates x and y with correlation \rho, writing y = \rho x + \sqrt{1-\rho^2}\, z where z is a standard normal independent of x, we have:

\begin{align*} {\mathbb E}\big[ e^ {\sigma_1 x} \big]& = e^ {{1\over 2}\sigma_1^2 } \end{align*}

and

\begin{align*} {\mathbb E}\big[ e^ {\sigma_1 x + \sigma_2 y} \big]& = {\mathbb E}\big[ e^ {\sigma_1 x + \sigma_2 \rho x + \sigma_2 \sqrt{1-\rho^2}z} \big]\\ & = {\mathbb E}\big[ e^ {(\sigma_1 + \sigma_2 \rho) x + \sigma_2 \sqrt{1-\rho^2}z} \big]\\ & = e^ {{1\over 2}(\sigma_1 + \sigma_2 \rho)^2 + {1\over 2}\sigma_2^2 (1-\rho^2)}\\ & = e^ {{1\over 2}\sigma_1^2 + {1\over 2}\sigma_2^2 + \sigma_1 \sigma_2 \rho}\\ & = {\mathbb E}\big[ e^ {\sigma_1 x }\big]\, {\mathbb E} \big[ e^{\sigma_2 y}\big]\, e^{ \sigma_1 \sigma_2 \rho} \end{align*}

Interview Questions IV

Another question about correlations today. This time I thought we could look at a simple type of random walk, in which at each step we move either backwards or forwards by a random distance, and at how to deal with the issues that come up.

Let’s say at each step we move a distance along the x-axis that is distributed randomly and uniformly between -1 and +1, and importantly that each step is independent of the others. So, after N steps the total distance travelled, L, is

L_N = \sum_{i=1}^{N} x_i\ ; \qquad x_i\sim {\mathbb U}[-1,+1]

where \inline x_i is the i-th step length.

Calculate:

i) the expected distance travelled after N steps

ii) the standard deviation of the distance travelled after N steps

iii) the autocorrelation between the distance travelled at N steps and the distance travelled at N+n steps

Since we’re dealing with uniform variables, it makes sense to start by calculating the expectation and variance of a single realisation of a variable of this type. The expectation is trivially 0, while the variance is

\begin{align*} {\rm Var}[x_i] & = {\mathbb E}[x_i^2] - {\mathbb E}[x_i]^2\\ & = \int_{-1}^{+1} x^2 \,{1\over 2}\, dx - 0 \\ & = {1\over 3} \end{align*}

We’ll also make use of the independence of the individual variables at several points; recall that for independent variables x and y, \inline {\mathbb E}[xy] = {\mathbb E}[x] {\mathbb E}[y]

 

i) This one is fairly straight-forward. Expectation is a linear operator, so we can take it inside the sum. We know the expectation of an individual variate, so the expectation of the sum is just N times this

\begin{align*} {\mathbb E}\Big[\sum_{i=1}^N x_i \Big] & = \sum_{i=1}^N {\mathbb E}[ x_i ]\\ & = N\cdot 0\\ & = 0 \end{align*}

 

ii) The standard deviation is the square root of the variance, which is the expectation of the square minus the square of the expectation. We know the second of these is 0, so we only need to calculate the first,

\begin{align*} {\rm Var}\Big[\sum_{i=1}^N x_i \Big] & = {\mathbb E}\Big[\Big(\sum_{i=1}^N x_i \Big)^2\Big]\\ & = {\mathbb E}\Big[\sum_{i,j=1}^N x_i x_j\Big]\\ &=\sum_{i,j=1}^N {\mathbb E} [x_i x_j] \end{align*}

There are two types of term here. When i and j are not equal, we can use the independence criterion given above to express this as the product of the two individual expectations, which are both 0, so these terms don’t contribute. So we are left with

\begin{align*} {\rm Var}\Big[\sum_{i=1}^N x_i \Big] &=\sum_{i=1}^N {\mathbb E} [x_i^2] \\ &= {N\over3} \end{align*}

and the standard deviation is simply the square root of this.

 

iii) This is where things get more interesting – the autocorrelation is the correlation of the sum at one time with its value at a later time. This is a quantity that quants are frequently interested in, since the value of a derivative that depends on values of an underlying stock at several times will depend sensitively on the autocorrelation. We recall the expression for correlation

\rho(x,y) = {{\rm Cov}(x,y) \over \sqrt{{\rm Var}(x){\rm Var}(y) } }

So we are trying to calculate

\begin{align*} \rho(L_N, L_{N+n}) = {\rm Cov}\Big[\sum_{i=1}^N x_i \,,\, \sum_{j=1}^{N+n} x_j \Big] \cdot {3 \over \sqrt{N (N+n)}} \end{align*}

where I’ve substituted in the already-calculated value of the variances of the two sums.

We can again use the independence property of the steps to separate the later sum into two, the earlier sum and the sum of the additional terms. Also, since the expectation of each sum is zero, the covariance of the sums is just the expectation of their product

\begin{align*} \rho(L_N, L_{N+n})&= {\rm Cov}\Big[\sum_{i=1}^N x_i \,,\, \Big(\sum_{j=1}^{N} x_j + \sum_{j=N+1}^{N+n} x_j \Big) \Big] \cdot {3 \over \sqrt{N (N+n)}}\\&= {\mathbb E}\Big[\sum_{i=1}^N x_i \cdot \Big(\sum_{j=1}^{N} x_j + \sum_{j=N+1}^{N+n} x_j \Big) \Big] \cdot {3 \over \sqrt{N (N+n)}}\\&= {\mathbb E}\Big[\sum_{i,j=1}^N x_i x_j + \sum_{i=1}^N x_i \cdot\sum_{j=N+1}^{N+n} x_j \Big] \cdot {3 \over \sqrt{N (N+n)}} \end{align*}

and using the results above and the independence of the final two sums (they are sums over disjoint sets of terms, and each term is independent of all the others) we know

{\mathbb E}\Big[\sum_{i,j=1}^N x_i x_j \Big] = {N \over 3}

{\mathbb E}\Big[\sum_{i=1}^N x_i \cdot\sum_{j=N+1}^{N+n} x_j \Big] ={\mathbb E}\Big[\sum_{i=1}^N x_i \Big]\cdot {\mathbb E}\Big[\sum_{j=N+1}^{N+n} x_j \Big] = 0

so

\begin{align*}\rho(L_N, L_{N+n}) & = {N\over \sqrt{N(N+n)}}\\ &= \sqrt{N\over N+n} \end{align*}

What does this tell us? Roughly that the sum of the sequence up to N+n terms is correlated to its value at earlier points, but as n gets larger the correlation decreases, as the new random steps blur out the position due to the initial N steps.

We can test our expressions using the RAND() function in excel. Try plotting a sequence of sets of random numbers and summing them, and then plotting the set of sums of 100 terms against the set of sums of 120 or 200 terms (nb. in excel, you probably want to turn auto-calculate off first to stop the randoms from refreshing every time you make a change – instructions can be found here for Excel 2010; for Excel 2013 I found the option inside the “FORMULAS” tab and at the far end – set the ‘Calculation Options’ to manual). I’ve done exactly that, and you can see the results below.

The sum of 100 terms vs. the sum of 120 terms. These are of course highly correlated, as the additional 20 terms usually don’t affect the overall sum to a significant extent
The sum of the first 100 terms against the sum of 200 terms. We can see that the sums are slowly becoming less correlated
This is the sum of the first 100 terms against the first 500. The correlation is much lower than in the graphs above, but note that from the formula we derived we still expect a correlation of around 45% despite the large number of extra terms in the second sum.

You can also try calculating the correlation of the sums you generate using Excel’s CORREL() function – this should tend towards the expression above as the number of sums you compute gets large (if you press F9, all of the random numbers in your sheet will be recomputed and you can see the sample correlation jump around, but the jumps get smaller as the number of sums grows).

Running C++ in Excel via XLW

Anyone who has downloaded and compiled the console version of my Monte Carlo Pricer will know that it’s not incredibly user-friendly. You have to double-click on an .exe file to run it, and then each time select input parameters which involves a fair amount of typing. The output is given as a single number on a console window, which isn’t very convenient for plotting multiple results or for further data analysis.

In fact, console applications aren’t that commonly used by quants. There are a few solutions used in the industry, and the one I’m going to look at today is an Excel wrapper. The idea is that we will write C++ code as normal, then use a program called XLW to wrap it as a link library that can be called from inside Excel. This combines the advantages of Excel’s flexibility with C++’s speed and power. This post is going to be a step-by-step guide to getting things up and running (I’ve struggled with it a bit while trying to set things up) with a very simple example function – in future posts I’ll discuss shifting the old pricer over to an Excel implementation.

I’m going to go through with screenshots from a clean start, hopefully this should be fairly general but if things change in future it may become outdated and require me to update or re-write these guidelines. I will discuss how to set XLW up using the open source Code::Blocks IDE, it’s a bit more advanced than the Dev-C++ that I’ve been using before and I’m shifting to it from now on for this blog.

Instructions:

  • Download and install Code::Blocks, WITHOUT a built in compiler

Auto-installers are available here, but it’s important that you choose the version WITHOUT the built in compiler, as this is out of date

Choose this download!

Double-click on the downloaded file to auto-install; agree to the terms and choose default options (by default it should do a full install needing about 60MB and install to directory “C:\Program Files (x86)\CodeBlocks”)

 

  • Download and install the latest version of MinGW compiler

The latest version of the MinGW compiler can be found here, download and double-click on the .exe file to install. Click through the screens, choosing the following option on the third screen: “Use pre-packaged repository catalogues”; by default the compiler will install to “C:\MinGW”. On the ‘select components’ screen, be sure to select “C++ Compiler”! This WASN’T checked for me by default!

Check C++ compiler on this screen

 

  • Register MinGW with Code::Blocks

In order to register the compiler that we’ve just downloaded, run Code::Blocks (there should now be a shortcut from your start bar).

How to register MinGW with Code::Blocks

In the toolbar, select Settings->Compiler Settings; and in the window that opens select the “Toolchain Executables” tab. Click on “Auto-detect”, and the IDE should find MinGW successfully.

 

  • Download and install XLW

The latest version of XLW is available via the “Download” link on the front page of the project here. At the time of writing, this is version 5.0 (released Feb’ 2013).

Once again, double-click on the downloaded .exe to install, but pay attention to the screens that pop up as I had to make some modifications to the default choices, as follows:

  1. XLW auto-detected Code::Blocks on the third page (Installed Development Environments) – if you see a blank white box here, you’ve not installed the IDE correctly
  2. On the 4th screen, I had to expand the grey ‘X’s and selected the option shown in the image below
Expand the selections, and check ‘Code::Blocks’ under ‘Source’

By default, XLW will install to “C:\Program Files (x86)\XLW\xlw-5.0.0f0”

 

  • Run the XLW template generator

Now we’re getting close. In your start menu, you should now find an XLW option. Go to XLW>xlw-5.0.0f0>xlw>Extract XLW xll template; select “Code::Blocks” on the window that appears, by default the template will be created in the directory “C:\Users\….\Documents\XLL_Project”

Select Code::Blocks in the template extractor options

 

  • FIX REQUIRED!!

Unfortunately, there seems to be a bug which has something to do with how XLW and MinGW are talking to one-another. After fiddling with many install configurations without success, I found the same issue discussed here (the second paragraph with a dark background), along with the following fix:

  1. Go to C:\MinGW\bin and locate “mingw32-make.exe”
  2. Copy this to the directory C:\Users\….\Documents\XLL_Project which you created in the previous step (it should contain “RunInterfaceGenerator.mak” among other files)
  3. Rename “mingw32-make.exe” to “make.exe”
  4. Run Code::Blocks, in the toolbar go to Settings->Compiler again, and in the window that pops up select ‘Restore Defaults’. Click ‘OK’ on both of the warning pop-up windows.

 

  • Open the Template project in Code::Blocks

If it isn’t already running from the last step, run Code::Blocks. Go to the directory C:\Users\….\Documents\XLL_Project that was created before, and open the project file “Template.cbp”. If all of the previous steps have been done correctly, you should simply be able to click “Build” (it’s the cog-shaped icon in one of the task bars) and have it compile without problems. Note that after compilation, a file called “Template.xll” appears in the directory C:\Users\….\Documents\XLL_Project\bin\Debug\

To test that everything is working, add an additional function to “source.cpp” and declare it in “cppinterface.h”, such as the example given in the picture below and at the bottom of this post, and check that the project still compiles.

I’ve added a trivial additional function here to add two doubles – of course, I’ve also updated the header file with a declaration of the function. XLW will do the rest!

 

  • Call our functions in Excel!

Open an instance of Excel. In order to load the functions that we’ve created into Excel, find the file “Template.xll” in the directory C:\Users\….\Documents\XLL_Project\bin\Debug\ and drag-and-drop it into the Excel window.

Here we can see the trivial example we made before in action – our function is adding two doubles in different cells together. I will present more advanced examples in future posts.

You should now find that the additional function you’ve written is available alongside Excel’s in-built functions: congratulations, you’ve made your C++ code run with Excel as a front end! I’ll be looking at the advantages of this a lot in the near future.

 

Additional example function code

In source.cpp:

double // adds two doubles
AddTwoDoubles( double x, // The first double
               double y  // The second double
              )
{
  return x + y;
}

In cppinterface.h (before the #endif ):

double // adds two doubles
AddTwoDoubles( double x, // The first double
               double y  // The second double
              );