Forward Rate Agreements and Swaps

For calibration of discount curves from swap rates, see my post on Bootstrapping the Discount Curve from Swap Rates.

In this post I’m going to introduce two of the fundamental interest rate products, Forward Rate Agreements (FRAs) and Swaps. FRAs allow us to ‘lock in’ a specified interest rate for borrowing between two future times, and Swaps are agreements to exchange a future stream of fixed interest payments for floating ones, or vice versa. Both are model independent, which means we can statically hedge them today using just Zero Coupon Bonds (ZCBs) – so their prices won’t depend on our models for interest rates or underlying prices etc. ZCBs really are fundamental here – if you haven’t read my earlier post on them yet I recommend starting there!

First, a note about conventions. In other contexts I’ve used $r$ as a continuously-compounding rate, such that if I start with $1, by time $t$ it will be worth $e^{rt}$. In the rates world this would be very confusing: rates are always quoted as simple annualised rates, so that $1 lent for half a year at 5% would return exactly $1 \times (1 + 0.5 \times 0.05) = \$1.025$ in half a year’s time (note the factor of 0.5 coming from the year-fraction of the deposit), and this rates convention will be used throughout this post.

Each ZCB gives us a rate at which we can put money on deposit now until a specific expiry time, and we can construct a discount curve from the collection of all ZCBs that we have available to us. In the case of a ZCB, we deposit the money now and receive it back at expiry. A Forward Rate Agreement extends the idea of putting money on deposit now for a fixed period of time to putting it on deposit at a future date for a specific period of time. We’ve picked up an extra variable here – our rate for a deposit starting now depends only on the time of expiry $t_2$, while the FRA rate $f(t_1, t_2)$ will depend on the time $t_1$ that we put the money on deposit as well as the time of expiry $t_2$.

An FRA, just like a deposit, involves two cash flows. We pay the counterparty a notional $M$ at time $t_1$, and receive our notional plus interest, $M(1 + \tau f)$, at time $t_2$; where $\tau = t_2 - t_1$ is the Year Fraction, $f$ is the Forward Rate, and $M$ is the Notional. Since $f$ will be fixed when we sign the contract, we can hedge these two cash flows exactly at $t=0$ using ZCBs. The value of the FRA is the value of receiving the second sum minus the cost of making the first payment:

$$V = M (1 + \tau f) Z(0, t_2) - M Z(0, t_1)$$

This agreement is at ‘fair value’ if the forward rate makes $V = 0$, and re-arranging gives

$$f(t_1, t_2) = \frac{1}{\tau} \left( \frac{Z(0, t_1)}{Z(0, t_2)} - 1 \right)$$

An FRA allows us to ‘lock in’ a particular interest rate for some time in the future – this is the rates-market analogue of the forward price of a stock or commodity for future delivery, which was discussed in an earlier post.
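The forward-rate relationship is easy to check numerically. Here is a minimal sketch – the function names and sample ZCB prices are my own illustrations, not from the original post:

```cpp
#include <cassert>
#include <cmath>

// Fair forward rate implied by two ZCB prices Z(0,t1) and Z(0,t2),
// quoted as a simple annualised rate over the year-fraction tau = t2 - t1.
double forwardRate(double zT1, double zT2, double tau) {
    return (zT1 / zT2 - 1.0) / tau;
}

// Value today of an FRA: receive M*(1 + tau*f) at t2, having paid M at t1.
double fraValue(double notional, double f, double tau, double zT1, double zT2) {
    return notional * ((1.0 + tau * f) * zT2 - zT1);
}
```

Entering the FRA at the fair forward rate gives it zero value by construction, which you can verify with any pair of discount factors.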
Note that the price of all FRAs is uniquely determined from the discount curve [although in reality our discount curve will be limited in both temporal resolution and maximum date by the ZCBs or other products available on the market which we can use to build it].

A Swap is an agreement to exchange two cash flows coming from assets, but not the assets themselves. By far the most common is the Interest Rate Swap, in which two parties agree to swap a stream of fixed rate interest payments on a notional $M$ of cash for a stream of floating rate payments on the same notional. Although the notional might be quite large, usually only the differences between the payments at each time are exchanged, so the actual payments will be very much smaller. The mechanics are probably best demonstrated by example:

A swap is written on a notional of $100Mn, with periods starting in a year and continuing for three years, and with payments at the end of each three-month period; one party pays fixed annualised 5% payments, the other pays floating payments at the three-month deposit rate (fixed at the beginning of each period). What payments actually get made?

The swap starts in a year’s time, but the first payment is made at the end of the first three-month period, in 15 months’ time. At this time, the fixed payment will be $100\text{Mn} \times 0.05 \times 0.25 = \$1.25\text{Mn}$. The floating payment will be whatever the three-month deposit rate was at 1 year, multiplied by the same notional and year-fraction. Let’s say that rate is 4% (although of course we won’t know what it is until the beginning of each period, and the payment will be delayed until the end of the period) – then the floating payment is $1Mn and the net cash flow will be $250,000, paid by the person paying fixed to the person paying floating.

The same calculation happens at each time, and a payment is made equal to the difference between the fixed and the floating leg cash flows. Although the notional is huge, we can see that the actual payments are much, much smaller [be alert for newspapers quoting ‘outstanding notional’ to make positions seem large and unsteady!].

The convention for naming Swaps is that if we are receiving fixed payments, we have a receiver’s swap; and if we are receiving floating payments, it is a payer’s swap.

What is the point of this product? Well, if we have a loan on which we are paying floating rate interest, using a swap we can exchange that for a fixed rate of interest, by making fixed rate payments to the counterparty and receiving floating rate payments back which match the payments that we’re making on the loan. This is of use to companies, who need to handle interest rate risk and might not want to be exposed to rates rising heavily on money they’re borrowing. A bank holding a large portfolio of fixed rate mortgage loans but required to pay interest to a central bank at floating rate might engage in a swap in the reverse direction to hedge its exposure.

How can we price this product? It’s easy to price the fixed payments – since each is known exactly in advance, we can hedge them out using ZCBs.
The value of the sum of the fixed payments is

$$V_{\rm fixed} = M K \sum_{i=1}^{N} \tau_i Z(0, t_i)$$

where $K$ is the fixed rate (5% in our example above) and $\tau_i$ is the relevant year-fraction for each payment (0.25 in our example above).

What about the floating payments? We have to think a little harder here, but it turns out we can use FRAs to hedge these exactly as well. The problem is that we don’t know what the three-month deposit rate will be in a year’s time. But we can replicate it: if we borrow an amount $M$ in a year’s time and put that on deposit for three months, we’ll receive back $M(1 + \tau L)$ in 15 months, where $L$ is the three-month deposit rate fixed at 1 year. We know from our discussion above that we can enter an FRA to borrow $M$ in a year’s time and pay back $M(1 + \tau f)$ in 15 months’ time – so we can guarantee to match the payment profile of the floating leg using the forward rates for the periods in question. The value of the floating payments is therefore

$$V_{\rm float} = M \sum_{i=1}^{N} \tau_i \, f(t_{i-1}, t_i) \, Z(0, t_i) = M \bigl( Z(0, t_0) - Z(0, t_N) \bigr)$$

where the second equality follows from the expression for the forward rate given above, since the sum telescopes. Since we can fix the price of the two legs exactly by arbitrage right now, we can value the swap by comparing the present value of each leg:

$$V_{\rm swap} = V_{\rm float} - V_{\rm fixed}$$

As with FRAs, swaps are said to be at fair value when the values of the fixed and the floating legs match, and the overall value is zero. The fixed rate at which we can enter a Swap for free occurs when $V_{\rm fixed} = V_{\rm float}$, such that

$$K = \frac{Z(0, t_0) - Z(0, t_N)}{\sum_{i=1}^{N} \tau_i Z(0, t_i)}$$

This is called the Swap Rate. It is fully determined by the discount curve, and as we shall see the reverse is also true – swap rates are in one-to-one correspondence with the discount curve. Of course, after a swap is issued the Swap Rate will change constantly, in which case the actual fixed payment will no longer match it and the swap will have non-zero value. If $K$ is the swap’s fixed coupon and $s$ is the current swap rate, then

$$V_{\rm swap} = M (s - K) \sum_{i=1}^{N} \tau_i Z(0, t_i) = M (s - K) A$$

where $A = \sum_i \tau_i Z(0, t_i)$ is called the annuity of the swap. The value is proportional to the difference between the swap rate and the swap’s fixed coupon. Because of the number of institutions that want to handle interest rate risk resulting from loans, IR Swaps are one of the most liquidly traded financial products.
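The swap rate and annuity calculations can be sketched directly from a set of discount factors – the numbers and function names below are illustrative, not market data:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Annuity of a swap: A = sum_i tau_i * Z(0, t_i) over the fixed payment dates
// (a constant year-fraction tau is assumed here for simplicity).
double annuity(const std::vector<double>& payDiscounts, double tau) {
    double a = 0.0;
    for (double z : payDiscounts) a += tau * z;
    return a;
}

// Swap rate implied by the discount curve: K = (Z(0,t0) - Z(0,tN)) / A.
double swapRate(double zStart, double zEnd,
                const std::vector<double>& payDiscounts, double tau) {
    return (zStart - zEnd) / annuity(payDiscounts, tau);
}

// Value per unit notional of a payer's swap (pay fixed K, receive floating)
// when the current swap rate is s: V = (s - K) * A.
double payerSwapValue(double K, double s, double A) {
    return (s - K) * A;
}
```

A swap entered at the prevailing swap rate values to zero, and its value thereafter moves in proportion to the annuity, exactly as described above.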
Although we’ve derived their price here from the Discount Curve, in practice it is often done the other way around – Swaps often exist up to much higher maturity dates than other products, and Discount Curves at long maturities are instead constructed from the swap rates quoted on the market at these dates. This is a very important procedure, but financially rather trivial. I’m not going to cover it here but will probably come back to it in a short post in the near future.

As well as being important in their own right, FRAs and Swaps (along with ZCBs, of course) are the foundation of the rich field of interest rate derivatives. The right (but not the obligation) to enter into an FRA – a call option on an FRA – is called a caplet, and a portfolio of such options on FRAs across different time periods is called a cap, since you have guaranteed a cap on the maximum interest you will have to pay over the whole period to borrow money (a put on an FRA is called a floorlet, and a sequence of these forms a floor, for similar reasons). An option to enter a swap is called a swaption, and these are also heavily traded wherever a borrower might want to re-finance a loan at a later date if interest rates move sharply. The pricing of these products depends on the underlying model that we assume for interest rates, and I’ll start to deal with them in later posts.

Factoryised C++

In two of my most recent posts I’ve talked about Forward Contracts and an implementation of the Normal Quantile function. One strategic reason for this will become apparent from today’s post! I’m going to be talking here about one of the design patterns used in the Monte Carlo code presented on my blog and discussed recently. It isn’t really my goal for this blog to dwell heavily on programming topics, as there are many, many more competent people than me sharing their knowledge on the subject across the web.
However, there are some topics that are quite interesting to discuss and make ideal blog material, and a grounding in Object-Oriented programming (particularly in C++) is vital for the modern Quant.

As I stated before, the goal of the code presented to date in ‘Simple Option Pricer 1.0’ is to choose a fairly simple option payoff, choose a method for generating uniform random numbers, and then choose a method for generating gaussian variates from these (and of course to enter the relevant market parameters like discount factor and volatility). The first thing to do is to download all of the code presented on that page, compile it, and run it a few times to make sure you can get all of it to work. Check that the answers correspond to the results you get from the analytical pricers for the same parameters – although don’t forget that there will be some Monte Carlo error on the answers coming from the C++ – and observe that answers get better (though time taken increases!) as the number of paths is increased.

A very straightforward way of achieving selectivity would be to use a switch statement:

    int option;
    SimpleOption* theOptionPointer;
    cout << "Select a Contract Type: Call [1], Put [2]";
    cin >> option;
    switch ( option ) {
        case 1 :
            theOptionPointer = new CallOption( ... );
            break;
        case 2 :
            theOptionPointer = new PutOption( ... );
            break;
        default :
            throw("You must enter [1] or [2]!");
    }

where I’ve assumed that we’ve defined some suitable option classes elsewhere that all inherit from an abstract base class SimpleOption (if there is demand for it I might go over inheritance another time, but there are many sources on the web. This IS important for quants, but it’s rather dry to go over!). The switch statement looks at the user-entered value and assigns the pointer to an option of the relevant type.
This is a basic example of polymorphism – we’ve declared theOptionPointer to be a pointer to the abstract base class SimpleOption, but then given it the address of a specific implementation of the base class via a derived class, either CallOption or PutOption. If we call any functions through this pointer, they have to be functions declared in the base class itself (although they could be abstract, and overridden in the derived classes). If, for example, CallOption had a method called getStrike(), there would need to be a [potentially abstract] prototype for it in the base class SimpleOption if we want to have access to it via the SimpleOption pointer.

However, the code isn’t that great as it stands. We’re going to need a switch statement for each of the choices we’re giving the user (currently Option type, RNG type and Gaussian converter type), which will lead to a bloated main(){} function. More annoyingly, every time we want to include a new option in any of these selections, as well as coding up the relevant classes for the new choice we’re going to need to go into the main function and alter the choices and control structures given by the switch statements. As well as being a bother, this means the whole project will need to be re-compiled, and for large projects this can be a significant time constraint. You can see roughly how much time this takes by comparing the speed of Building your project (which checks to see if changes have been made, and then only rebuilds the files that are affected) with the time taken to Re-Building your project (which throws away previously compiled files and starts again from the beginning). Give this a try!
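To preview the solution before walking through the actual files, here is a compressed, self-contained sketch of the factory idea. The class names are simplified relative to the blog’s real optionFactory code, and I’ve used a function-local static for the singleton rather than the blog’s factoryAddress() layout:

```cpp
#include <cassert>
#include <map>
#include <string>

// Abstract base class: the only interface main() ever needs to see.
class SimpleOption {
public:
    virtual ~SimpleOption() {}
    virtual double payoff(double spot) const = 0;
};

class CallOption : public SimpleOption {
    double strike_;
public:
    explicit CallOption(double k) : strike_(k) {}
    double payoff(double s) const override { return s > strike_ ? s - strike_ : 0.0; }
};

class PutOption : public SimpleOption {
    double strike_;
public:
    explicit PutOption(double k) : strike_(k) {}
    double payoff(double s) const override { return s < strike_ ? strike_ - s : 0.0; }
};

// Singleton factory: a string-to-constructor table that replaces the switch.
class OptionFactory {
public:
    typedef SimpleOption* (*Builder)(double);
    static OptionFactory& instance() {
        static OptionFactory f;   // created once, on first use
        return f;
    }
    void registerOption(const std::string& name, Builder b) { table_[name] = b; }
    SimpleOption* createOption(const std::string& name, double strike) const {
        return table_.at(name)(strike);   // throws if the name was never registered
    }
private:
    std::map<std::string, Builder> table_;
};

// Templated helper: registers T's builder function when constructed.
template <class T>
struct OptionHelper {
    static SimpleOption* build(double strike) { return new T(strike); }
    explicit OptionHelper(const std::string& name) {
        OptionFactory::instance().registerOption(name, &OptionHelper<T>::build);
    }
};

// File-scope registrations run before main(); in the real code each one
// lives at the top of the corresponding option's .cpp file.
namespace {
    OptionHelper<CallOption> registerCall("call");
    OptionHelper<PutOption>  registerPut("put");
}
```

With this in place, adding a new option means writing its class and one registration line in its own .cpp file – nothing in main() changes.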
I’ve got around this in ‘Simple Option Pricer 1.0’ by coding up a basic Factory for each of the three selections, based on the Factory presented in Chapter 10 of Joshi’s text, C++ Design Patterns and Derivatives Pricing (this is one of my favourite textbooks on the subject – I’ll do some book reviews of other gems in the future). You can see how this works by downloading the following four files. Drop them into the same folder as the rest of the files in ‘Simple Option Pricer 1.0’, and then right-click on the Project in your development environment and add them to the project one by one [NB. if you’re using Visual Studio, you may need to add #include "stdafx.h" to the two .cpp files before this will work – and make sure they’re in the same folder as the rest, as this can cause problems!].

Now, Build your project. If you’ve done everything correctly, it should happen very swiftly – only forward.cpp and gaussianCdfInverter.cpp will need to compile, and the linker will need to run, but the other files won’t need to be recompiled (this is the real strength of the method, by the way!). Now re-run the programme and you should discover two new options available to you in the option pricer! How have we achieved this sorcery?!

In the files optionFactory.cpp and optionFactory.h (and the corresponding files for the RNG and gaussian converters), we’ve defined the factories for the different types of classes we want to use. I’ve used the Singleton Pattern (for brevity I’m not going to discuss it here, but do have a look at Wikipedia) to create and access a single instance of each factory.
The important point is that there will be a single instance of each factory, and we can get its address in memory at any point in the code using

    optionFactory::factoryAddress()

The factories themselves store a two-column table (implemented with a map) which links a string (the option name) in the first column with a function that constructs an option of that type and returns a pointer to it in the second column (this will be a base class pointer, using polymorphism as discussed above). The functions that create these pointers are implemented using another type of polymorphism – templatisation. In the optionFactoryHelper.h header, I’ve defined a new class which is constructed by calling it with a string. It also has a method which builds a new instance of the class T it is templatised on, and returns a pointer to it – which is exactly what we said the factory table needs:

    template <class T>
    simpleOption* simpleOptionHelper<T>::buildOption(double strike) {
        return new T(strike);
    }

The constructor of the helper class registers this method in the factory’s table, along with the string:

    template <class T>
    simpleOptionHelper<T>::simpleOptionHelper(std::string optionType) {
        optionFactory& theFactoryAddress = optionFactory::factoryAddress();
        theFactoryAddress.registerOption(optionType, simpleOptionHelper<T>::buildOption);
    }

Options are registered in their .cpp files; the simple code to register the forward is at the top of forward.cpp, creating a new instance of the helper class templatised on the specific option class that we are registering through it.
This will call the helper class constructor, which as we just saw contains code to register the corresponding option type with the factory:

    namespace {
        simpleOptionHelper<forward> registerForward("forward");
    }

Because these registrations are global objects in anonymous namespaces, they are constructed before main(){} starts running – and once registered, we can call the factory at any time in our code with a string and the required parameters, and it will create a new instance of the derived class with that name:

    simpleOption* thisOptionPointer = optionFactory::factoryAddress().createOption( optionType, strike );

I’ll use this pattern again and again in future as more and more options are added to the code. At some point in the future, I’ll look at a way of combining ALL of the factories into a single, templatised factory process using more polymorphism, but the complication will be that since different processes need different arguments for their constructors, I’ll need to build a generic container class first that can hold many different sorts of arguments. The next thing I want to do, however, is look at easier ways of data input and output, as the current console application is very clunky and wouldn’t be suitable for a large amount of data input. I’ll discuss ways of dealing with this in future posts.

Inverse Normal CDF

Now that I’ve got some Monte Carlo code up, it’s inevitable that I will eventually need an implementation of the inverse of the Normal Cumulative Density Function (CDF). The inverse of a CDF is called a quantile function, by the way, so I’ll often refer to this as the Normal Quantile function. The reason this is so important is that, as I discussed in my post on Random Number Generators, it can be used to convert uniformly distributed random numbers into normally distributed random numbers, which we will need for Monte Carlo simulations if we believe that underlyings are distributed according to a lognormal distribution.
As a reminder, I’ve repeated the graph from the previous post here to show the behaviour of the Normal Quantile.

As with the implementation of an algorithm for the Normal CDF (discussed here), there are several possibilities for implementation. In the end I’ve decided to go with the analytic approximation due to Beasley, Springer and Moro, as presented in Joshi’s text on computational derivative pricing:

    a1 =   2.50662823884        b1 =  -8.47351093090
    a2 = -18.61500062529        b2 =  23.08336743743
    a3 =  41.39119773534        b3 = -21.06224101826
    a4 = -25.44106049637        b4 =   3.13082909833

    c1 = 0.3374754822726147     c6 = 0.0003951896511919
    c2 = 0.9761690190917186     c7 = 0.0000321767881768
    c3 = 0.1607979714918209     c8 = 0.0000002888167364
    c4 = 0.0276438810333863     c9 = 0.0000003960315187
    c5 = 0.0038405729373609

A rational polynomial form is used for the central region of the quantile, where $0.08 < x < 0.92$:

$$y = x - 0.5 \ ; \quad z = y^2$$

$$\Phi^{-1}(x) \simeq y \cdot \frac{a_1 + a_2 z + a_3 z^2 + a_4 z^3}{1 + b_1 z + b_2 z^2 + b_3 z^3 + b_4 z^4}$$

And if $x \leq 0.08$ or $x \geq 0.92$, then:

$$y = x - 0.5 \ ; \quad z = \begin{cases} x & y \leq 0 \\ 1 - x & y > 0 \end{cases} \ ; \quad \kappa = \ln(-\ln z)$$

$$\Phi^{-1}(x) \simeq \pm (c_1 + c_2\kappa + c_3\kappa^2 + c_4\kappa^3 + c_5\kappa^4 + c_6\kappa^5 + c_7\kappa^6 + c_8\kappa^7 + c_9\kappa^8)$$

with a plus sign at the front if $y \geq 0$ and a minus sign if $y < 0$.

This is fairly swift as an algorithm, and the quantile method of generating normal variates will turn out to have some key advantages over the various other methods discussed before, which will become apparent in future posts. In a future post I’ll show how to integrate this method of converting uniform variates to gaussian variates into the Monte Carlo code that I’ve made available here – thanks to the factory pattern that I’ve used, it will turn out to be incredibly straightforward! Finally, if you are wondering why the numbers in these numerical algorithms seem so arbitrary, it’s because they [sort of] are! Once I’ve implemented some optimisation routines (this might not be for a while…) I’ll have a look at how we can find our own expressions for these functions, and hopefully be able to back out some of the numbers above!!

Futures

I covered Forward Contracts in a post a week ago, and promised to look at the related Futures Contract soon. These contracts aim to do the same thing – provide a means of securing an underlying asset for future delivery at a price determined today – but their dynamics are rather different from those of the Forward Contract.

One of the main criticisms of Forward Contracts is that they are written directly between two parties. This means they are subject to credit risk – if one counterparty goes bankrupt, the contract may no longer be honoured. By contrast, a Futures Contract is written via an exchange. This is a central organisation that matches long and short Futures Contract counterparties, and oversees the transaction (and of course charges a fee for its services). Futures Contracts are standardised in terms of expiry dates and contract terms, which increases liquidity in the market. The counterparty who wants to receive the underlying asset at expiry is said to be the long party, while the one providing it is the short party.
As with a Forward Contract, an underlying asset is specified. Each party must have an account with the exchange in order to enter transactions. Each day, the exchange will quote the current Futures Price, which is the price at which the transaction will happen; but entering into the contract costs neither party anything (beyond the small exchange fee mentioned above). Instead, the counterparties must update their accounts daily to reflect the difference between the initial Futures Price quoted and the current Futures Price quoted. If the current price is above the initial price, the party who is short the contract must have this difference in their account; and if it is below the initial price, the party who is long must have this difference in their account.

This is probably best demonstrated by a simple example. Let’s say that I want to enter a Futures Contract on Day 0 for delivery of a barrel of oil on Day 5 (realistically this would be a much longer period of months or years, but in order to keep the number of steps in the example down I’m keeping it short here), and the current Futures Price quoted for a contract expiring in 5 days is $100. I’m the long party, as I want to receive the oil; my counterparty is short, as he will be delivering.

Day 1: Futures Price quoted for expiry on Day 5 increases to $101.50. The short party has to deposit $1.50 in his account to cover this increase.

Day 2: Futures Price increases to $103.50. The short party has to deposit another $2 in his account to cover this increase (so it’s now $3.50 in total).

Day 3: Futures Price falls to $101. The short party can withdraw $2.50 from his account to leave only $1, which covers the change from Day 0.

Day 4: Futures Price for delivery on Day 5 (ie. tomorrow!) crashes to $96. The short party can take all of the money out of his account, and I need to deposit $4 to cover the difference between the current Futures Price and its initial value.

Day 5: Futures Price recovers to $97. I can take $1 out of my account to leave $3 in total. Since this is the day the contract expires, we make the transaction today at the initially agreed Futures Price of $100, and the money in my account is returned to me (if the money were in the short party’s account, it would be returned to him in the same way). Note also that the Futures Price on Day 5 for expiry on Day 5 must be equal to the spot price at that time – so this contract obliges me to trade at a price above the current spot price, just as a Forward Contract at these prices would have done.
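The margining rule in the example can be written down directly – the required balance depends only on where the current quote sits relative to the initial one. A sketch (the struct and function names are my own):

```cpp
#include <cassert>
#include <cmath>

// Required account balances for each party, given the initial futures price
// and the current quote: the short party funds rises, the long party funds falls.
struct Margins {
    double longAccount;
    double shortAccount;
};

Margins requiredMargin(double initialPrice, double currentPrice) {
    double diff = currentPrice - initialPrice;
    Margins m;
    m.shortAccount = diff > 0.0 ? diff : 0.0;
    m.longAccount  = diff < 0.0 ? -diff : 0.0;
    return m;
}
```

Replaying the example with an initial price of 100 reproduces the numbers above: 3.50 in the short party’s account on Day 2, 4 in mine on Day 4, and 3 in mine on Day 5.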

There are several points to make. Firstly, I’ve described a Physically Settled Futures Contract here, since I actually took delivery of a barrel of oil. The standardised contracts will specify a grade of oil and a specific delivery location – many exchanges have physical warehouses where these transactions happen. However, the large majority of Futures Contracts today are Cash Settled: in this case, instead of taking delivery, I exchange money with the short party to achieve the same financial payoff. In the example above, since we were transacting at $100 while the market price was only $97, I would have to pay $3 to the short party (as we shall see, it’s no coincidence that this was the amount in my account!).

This contract more or less eliminates the credit risk due to a counterparty defaulting. Consider, for example, that the oil company had defaulted on Day 2. This might seem like bad news for me – at that point, I was expecting to pay $100 for a commodity worth $103.50. The critical point is the account with the exchange. This amount is forfeited if a counterparty wants to leave the transaction or fails to keep topping it up as required, and is delivered to the other party. In this case, the short party’s account of $3.50 would have been given to me and the contract terminated – since the current price was $103.50, I could enter the same contract with a different counterparty without taking any financial loss. On the other hand, if I’d decided on Day 4 that I no longer wanted to be part of the contract, as it looked like I’d be paying $100 for a contract only worth $96, I might abscond. However, the exchange still has my $4 on account, which it can take and give to the short counterparty, so once again they aren’t hurt by my reneging. The only risk to either party is a large shift in price in a very short time, such that the counterparty is unable to credit a sufficient amount to his account before defaulting.

Although it more or less deals with credit risk, this contract does introduce liquidity risk. Considering again Day 2: the short party had to deposit money in their account to cover the position. Even though the contract eventually ends up profitable for the short party, it is conceivable that cash-flow concerns would make them unable to honour their intermediate commitments, in which case they would have to break the contract at that stage and take a loss. For a Forward Contract, since there is no payment until the expiry date, this couldn’t happen.

Aside from these issues, this contract seems to be essentially the same as a Forward Contract, since it guaranteed a transaction on Day 5 at $100. However, so far we have ignored interest rates, which turn out to make a subtle difference to the final payoff (and which make the maths interesting!). I’m going to do another post soon comparing these two contracts in the presence of interest rates, but I’ll tell you the answer right now to whet your appetite: in the presence of deterministic interest rates, the two payoffs are the same. This isn’t surprising – if we know what will happen in the future then we can hedge both of the contracts by replication (Forward Contracts in the way described in the previous post on them; Futures Contracts as I will describe next time – but have a go and try to construct a replication strategy yourself), so that their values are fixed today. However, this is actually quite unrealistic, and most realistic models of interest rates include stochastic rates of some type – in this case, the Futures Contract becomes a path-dependent derivative and we can’t construct a static hedge for it at the current time, since we don’t know what the interest rate will look like at each stage! More to come soon.

New Section: Monte Carlo in C++

Good news, this blog has a new section – Monte Carlo pricers! I’m going to experiment to find the most convenient way of making it available for download, and would like it to be available both as a compiled .exe and as uncompiled source code, to allow readers to examine and alter it. The first iteration can be found online now in the MONTE CARLO section. It is code for a straightforward Monte Carlo pricer that will price calls and puts, with uniform random variates generated by Park-Miller and converted into gaussian variates using one of the Box-Muller processes (for more information on these, refer to the post on Random Numbers).
I’ve used a factory pattern, so in principle it should be very straightforward to add new options and RNG processes to the code, although there are a few more improvements that I will be making in the near future, which will extend the code to basic path-dependent options and allow easier data input by the user. I’ll be going over various aspects of the code in detail in future posts, and also putting together some instructions on how to compile the code yourself using Dev C++ (a free, open-source compiler) and how to add options and processes via the factory. As always, if you have any problems getting the code to work, please tell me!

Forwards

I’ve mentioned Forwards many times on the blog so far, but haven’t yet given any description of what they are, and it’s about time for a summary! Forwards were probably the first ‘financial derivative’ to be traded, and they still occupy a central role in the field today.

At the most basic level, a Forward Contract is a delayed sale between two parties. The contract is signed now, and the sale and the price are agreed to, but the transaction only happens at a later date. We can immediately see why these contracts would be popular, as they insulate the people who have signed the contract from risk due to price variations. For example, an airline can foresee fairly accurately its fuel requirements over the next year, so it might wish to enter into a Forward Contract with an oil company for a certain amount of fuel to be delivered next year, in order not to be exposed to the risk of price rises. Meanwhile, the oil company knows its costs of production, and might well also decide that, as long as the contract price is profitable, it is locking in business for the future and avoiding the risk of price falls harming its operations. The first thing to note is that, unlike options, the contract is binding – both parties are obliged to enter into the transaction at the given date, and there is no optionality in this contract.
The agreed price has different names, but for consistency with other products quants tend to call it the Strike Price, $K$. Forward Contracts are the simplest form of derivative. A derivative is a financial instrument that derives its price from some other (‘underlying’) instrument. In the example above, the underlying was the cost of oil, but it could equally well have been a foreign exchange rate, stock price or interest rate – indeed, all of these are different sorts of risks that companies will in general want to hedge out.

What is the price of a Forward Contract, and how does it depend on the underlying? If the price of oil rises after the contract above has been signed, the airline will breathe a sigh of relief – it has locked in the forward price for its oil via the Forward Contract, so the contract is clearly a valuable asset for it. By contrast, the oil company will be miffed, as it could have sold the oil on the market for a better price. If prices fall, the situations are reversed. How can we express this mathematically?

As a simple example, I will consider a Forward Contract for me to buy a stock $S$ from a bank in a year’s time for $100. We can calculate the price of this contract using a technique called ‘replication’. What we need to do is construct a portfolio that exactly matches the payments implicit in the Forward Contract, and according to the ‘Law of One Price’, the prices of these two matching portfolios must be the same. Call now $t=0$ and the expiry of the contract $T$.

Portfolio 1: 1 Forward Contract on S at strike price $100 and at time $T$. Transactions are: at time $T$, I pay $100, and receive one unit of stock. This will be worth $S_T$, the price of the underlying at that time. There are no net payments at time $t = 0$.

How can we match this? Well, we will need to ensure the receipt of an amount of cash $S_T$ at time $T$, and an ideal way of ensuring this is by buying a unit of the stock at $t = 0$ and holding it, since this is guaranteed to be worth $S_T$ at $T$. We also need to match the payment of the strike price, $100, at $T$. An ideal product to use for this is a Zero Coupon Bond maturing at $t = T$ (this is equivalent to borrowing money and you can see it either way. We’re making all of the usual assumptions about being able to go long and short equally easily etc.). Recall that Zero Coupon Bonds are worth $1 at time $T$, and worth

$Z(0,T) = e^{-\int_0^T r(t)\,dt} = e^{-rT}$

at $t = 0$, with the last equality holding only in the case of constant interest rates. So, we can short 100 ZCBs at $t = 0$; these will mature at $T$, when we will have to pay the holder $100. The holder will pay us the present value at $t = 0$, which is $100 \cdot Z(0,T)$. We’ve constructed a portfolio that matches all of the payments of portfolio 1 at $T$:

Portfolio 2: Long 1 underlying stock $S$, short 100 ZCBs; close all positions at $T$.

How much does it cost to enter into portfolio 2? As we’ve said, this will identically be the price of portfolio 1. The cost is

$V = S_0 - 100 \cdot Z(0,T)$

or, for a general strike $K$, $V = S_0 - K \cdot Z(0,T)$.

We can see that this price is a linear function of $K$ – being long a Forward Contract is better if the strike is lower! We see that its price doesn’t depend on any assumptions about the underlying’s growth rate (the only assumption was that we can readily trade it on the market at the two times $t = 0$ and $T$), and depends instead on the risk-free interest rate $r$. For this reason the contract is described as ‘Model Independent’ – its price doesn’t depend on any assumptions we make about the world; we can enforce it by no-arbitrage using prices of things that are available to buy right now. Compare that to the hedging strategy for a vanilla option discussed here, which relied on continuous re-hedging in the future, and is hence open to risk if our model of future volatility and interest rates isn’t perfect (and it won’t be!).
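The replication argument above translates into a few lines of code. This is a minimal sketch with illustrative function names of my own (not from the post’s C++ library), assuming a constant continuously-compounded rate:

```python
import math

def zcb_price(r, T):
    """Zero Coupon Bond price Z(0,T) for a constant continuously-compounded rate r."""
    return math.exp(-r * T)

def forward_contract_value(S0, K, r, T):
    """Value today of a forward to buy the stock for K at time T, from the
    replicating portfolio: long one unit of stock, short K ZCBs maturing at T."""
    return S0 - K * zcb_price(r, T)
```

For example, with $S_0 = 100$, $K = 100$, $r = 5\%$ and $T = 1$, the contract is worth about 4.88 to the buyer – positive, because the strike sits below the fair forward price.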

The ‘Forward Price’ is defined as the strike which makes this contract valueless for both parties – the ‘fair price’. Looking at the equation above, we can see that this must be

$F = {S_0 \over Z(0,T)} = S_0\, e^{rT}$

with the second equality again only valid for constant rates. The Forward Price is exactly the expectation of the underlying stock $S$ in the risk-neutral measure; it appears across quantitative finance and often simplifies algebra. Consider for example the following two ways of expressing the BS equation for vanilla option prices:

$C(t) = S_t\, \Phi(d_1) - K e^{-r(T-t)}\, \Phi(d_2)$

$C(t) = e^{-r(T-t)} \bigl[ F\, \Phi(d_1) - K\, \Phi(d_2) \bigr]$

with the various terms as defined in previous posts (nb. the discount factor here from $t$ to $T$ is exactly the same as a ZCB expiring at $T$). I certainly prefer the second form – it is expressed in terms of quantities depending on the same time $T$, and all of the discounting is handled outside the brackets, so it is much more easily modularised when coding it up.

Compare the payoff of a forward at expiry with that of a call and a put option. How are they related? It turns out that

$C(K,T) - P(K,T) = S_0 - K \cdot Z(0,T)$

This result is called put-call parity. It comes about because if we’re long a call and short a put, we will exercise the call only if the price goes up and the counterparty will exercise the put only if the price falls – we’ve effectively locked in a sale at the strike price at time $T$ – such a strategy is called a synthetic forward. Note this is actually quite a deep result – although the price of both a call and a put option will depend on the model that we assume for the underlying, the difference in the two prices is model-independent and enforceable by arbitrage! If we assume an increased level of volatility over the option lifetimes, it will increase both prices by the same amount.
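We can check put-call parity numerically using the BS closed-form prices. A sketch, with helper names of my own (constant rates, no dividends); the point is that the difference C − P comes out the same whatever volatility we plug in:

```python
import math
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def bs_put(S0, K, r, sigma, T):
    """Black-Scholes price of a European put."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * N(-d2) - S0 * N(-d1)

def parity_gap(sigma):
    """C - P for an example contract; equals S0 - K.Z(0,T) for ANY vol."""
    return bs_call(100.0, 100.0, 0.05, sigma, 1.0) - bs_put(100.0, 100.0, 0.05, sigma, 1.0)
```

Doubling the vol moves both option prices, but leaves `parity_gap` untouched – the model-dependence cancels exactly.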

Up to this point, I’ve made the implicit assumptions that the underlying asset doesn’t pay any income, that it’s free to hold, and that we’re able to go long or short with equal ease. Of course, in general, none of these are true. If the asset pays income like dividends, or if there are costs of storage, it is fairly easy to adjust our replication argument above to find a new price for the Forward Contract that takes into account the extra cash flows associated with holding the underlying over the period $[0, T]$. However, we can’t always go short the underlying asset. A particular case of this is commodities, where the asset in question may not exist at the current time (ie. if you haven’t dug it out of the ground or transported it to the delivery point at $t = 0$!), in which case there is no reason that the forward price should be related to the spot price in the deterministic way described above – we have violated our assumption about being able to trade the underlying at the two times $t = 0$ and $T$, which was implicitly vital! However, the Forward Price will still converge towards the spot price as we approach $T$ even in this case, since a Forward Contract for transaction at the current time is simply a spot transaction!

One final word about Forward Contracts – they are ‘Over The Counter’, which means they are signed between two counterparties directly and thus carry Credit Risk – there is a chance that your counterparty might go bankrupt before the contract expiry, in which case the transaction won’t be completed. If your contract was out-of-the-money, you will still be expected to settle your position with them, while if it was in-the-money they may well not be able to pay you your full dues, so credit risk will decrease the actual value of the contract. There are advanced ways of accounting for this that I deal with in a later post (CVA – Credit Valuation Adjustment); alternatively there is a related product called the Futures Contract, which I will cover very soon in another post, and which is superficially similar to the Forward Contract (leading to MUCH confusion in the industry!) but attempts to deal with this credit risk.

BS from Delta-Hedging

Today I’m going to look at another method of getting to the BS equations, by constructing a delta-hedge. This is the way that the equation was in fact first reached historically, and it’s a nice illustration of the principle of hedging. All of the same assumptions are made as in the post that derived the BS equation via Risk Neutral Valuation.

The principle is that because the price $V$ of some derivative is a function of the stochastic underlying $S$, all of the uncertainty in $V$ comes from the same source as the uncertainty in $S$. We try to construct a risk-free portfolio made up of the two of these that perfectly cancels out all of the risk. If the portfolio is risk-free, we know it must grow at the risk-free rate or else we have an arbitrage opportunity.

Our model for $S$ is the geometric brownian motion; note that we allow the rate of growth $\mu$ in general to be different from the risk-free rate $r$:

$dS = \mu S \, dt + \sigma S \, dW_t$

We can express $dV$ in terms of the derivatives of $V$ with respect to $S$ and $t$ using Ito’s lemma, which I discussed in a previous post,

$dV = \Bigl( {\partial V \over \partial t} + \mu S {\partial V \over \partial S} + {1 \over 2}\sigma^2 S^2 {\partial^2 V \over \partial S^2} \Bigr) dt + \sigma S {\partial V \over \partial S}\, dW_t$

Our portfolio is made up of one derivative worth $V$ and a fraction $\alpha$ of the underlying stock, worth $\alpha S$; so the net price is $\Pi = V + \alpha S$. We combine the above two results to give

$d\Pi = \Bigl( {\partial V \over \partial t} + \mu S {\partial V \over \partial S} + {1 \over 2}\sigma^2 S^2 {\partial^2 V \over \partial S^2} + \alpha \mu S \Bigr) dt + \sigma S \Bigl( {\partial V \over \partial S} + \alpha \Bigr) dW_t$

We are trying to find a portfolio that is risk-free, which means we would like the stochastic term to cancel. We see immediately that this happens for $\alpha = -{\partial V \over \partial S}$, which gives

$d\Pi = \Bigl( {\partial V \over \partial t} + {1 \over 2}\sigma^2 S^2 {\partial^2 V \over \partial S^2} \Bigr) dt$

Since this portfolio is risk-free, to prevent arbitrage it must grow deterministically at the risk-free rate

$d\Pi = r \Pi \, dt = r \Bigl( V - S {\partial V \over \partial S} \Bigr) dt$

and so

${\partial V \over \partial t} + r S {\partial V \over \partial S} + {1 \over 2}\sigma^2 S^2 {\partial^2 V \over \partial S^2} - rV = 0$

This is the BS partial differential equation (pde). Note that despite the fact that the constant growth term for the underlying had a rate $\mu$, this has totally disappeared in the pde above – we might disagree with someone else about the expected rate of growth of the stock, but no-arbitrage still demands that we agree with them about the price of the option [as long as we agree about $\sigma$, that is!]

As for any pde, we can only solve for a specific situation if we have boundary conditions – in this case, given by the payoff at expiry $t = T$. At that point we know the exact form that the value of $V$ must take

$V(S, T) = f(S)$

where, for example, $f(S) = \max(S - K,\, 0)$ for a vanilla call.

Our job is to use the pde to evolve the value of $V$ backwards to $t = 0$. In the case of vanilla options this can be done exactly, while for more complicated payoffs we would need to discretise and solve numerically. This gives us another way of valuing options that is complementary (and equivalent) to the expectations approach discussed previously.
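The discretise-and-solve route can be sketched with an explicit finite-difference scheme. This is illustrative code of my own (not from the post’s library); the grid parameters are arbitrary, and an explicit scheme is only stable when the time step is small enough relative to the spatial step:

```python
import math

def bs_call_fd(S0, K, r, sigma, T, s_max=300.0, n_s=150, n_t=2000):
    """Price a European call by stepping the BS pde backwards from expiry
    on a grid in S, using an explicit finite-difference scheme."""
    ds = s_max / n_s
    dt = T / n_t
    # terminal condition: V(S, T) = max(S - K, 0)
    V = [max(i * ds - K, 0.0) for i in range(n_s + 1)]
    for _ in range(n_t):
        new = V[:]
        for i in range(1, n_s):
            S = i * ds
            dVdS = (V[i + 1] - V[i - 1]) / (2.0 * ds)
            d2VdS2 = (V[i + 1] - 2.0 * V[i] + V[i - 1]) / (ds * ds)
            # step backwards in time using V_t = -(r S V_S + 0.5 s^2 S^2 V_SS - r V)
            new[i] = V[i] + dt * (r * S * dVdS + 0.5 * sigma ** 2 * S * S * d2VdS2 - r * V[i])
        new[0] = 0.0              # V(0, t) = 0 for a call
        new[n_s] = s_max - K      # crude far-field boundary, far from S0
        V = new
    i = int(S0 / ds)
    w = (S0 - i * ds) / ds        # linear interpolation at S0
    return (1.0 - w) * V[i] + w * V[i + 1]
```

For $S_0 = K = 100$, $r = 5\%$, $\sigma = 20\%$, $T = 1$ this lands close to the closed-form BS call price of roughly 10.45.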

To solve the equation above, it is useful to first make some substitutions. As we are interested in time-to-expiry only, we make the change of variables $\tau = T - t$, which yields

${\partial V \over \partial \tau} = r S {\partial V \over \partial S} + {1 \over 2}\sigma^2 S^2 {\partial^2 V \over \partial S^2} - rV$

We can eliminate the $S$ terms by considering the change of variables $x = \ln S$. This means that

${\partial V \over \partial S} = {1 \over S}{\partial V \over \partial x}\,; \qquad {\partial^2 V \over \partial S^2} = {1 \over S^2}\Bigl( {\partial^2 V \over \partial x^2} - {\partial V \over \partial x} \Bigr)$

Combining these, the BS equation becomes

${\partial V \over \partial \tau} = \Bigl( r - {1 \over 2}\sigma^2 \Bigr){\partial V \over \partial x} + {1 \over 2}\sigma^2 {\partial^2 V \over \partial x^2} - rV$

The linear term in $V$ can be removed by another transformation $V = e^{-r\tau} D$, so that

$e^{-r\tau}{\partial D \over \partial \tau} - r e^{-r\tau} D = \Bigl( r - {1 \over 2}\sigma^2 \Bigr) e^{-r\tau} {\partial D \over \partial x} + {1 \over 2}\sigma^2 e^{-r\tau} {\partial^2 D \over \partial x^2} - r e^{-r\tau} D$

The exponential terms cancel throughout, and we are left with

${\partial D \over \partial \tau} = \Bigl( r - {1 \over 2}\sigma^2 \Bigr){\partial D \over \partial x} + {1 \over 2}\sigma^2 {\partial^2 D \over \partial x^2}$

One final transformation will be needed before putting in boundary conditions. The transformation will be

$y = x + \Bigl( r - {1 \over 2}\sigma^2 \Bigr)\tau$

But unlike the other transformations I’ve suggested so far, this one mixes the two variables that we are using, so a bit of care is required about what we mean. When I most recently wrote the BS equation, $D$ was a function of $x$ and $\tau$ – this means that the partial differentials with respect to $\tau$ were implicitly holding $x$ constant and vice versa. I’m now going to write $D$ as a function of $y$ and $\tau$ instead, and because the relationship features all three variables we need to take a bit of care with our partial derivatives:

$dD = {\partial D \over \partial \tau}\Big|_{x}\, d\tau + {\partial D \over \partial x}\Big|_{\tau}\, dx$

where vertical lines indicate the variable that is being held constant during evaluation. Now, to move from $x$ to $y$, we expand out the $dD$ term in the same way, using $dy = dx + \bigl( r - {1 \over 2}\sigma^2 \bigr) d\tau$:

$dD = {\partial D \over \partial \tau}\Big|_{y}\, d\tau + {\partial D \over \partial y}\Big|_{\tau}\, dy = \Bigl( {\partial D \over \partial \tau}\Big|_{y} + \Bigl( r - {1 \over 2}\sigma^2 \Bigr){\partial D \over \partial y}\Big|_{\tau} \Bigr) d\tau + {\partial D \over \partial y}\Big|_{\tau}\, dx$

We can compare these last two equations to give expressions for the derivatives that we need after the transformation by comparing the coefficients of $d\tau$ and $dx$

${\partial D \over \partial x}\Big|_{\tau} = {\partial D \over \partial y}\Big|_{\tau}\,; \qquad {\partial D \over \partial \tau}\Big|_{x} = {\partial D \over \partial \tau}\Big|_{y} + \Bigl( r - {1 \over 2}\sigma^2 \Bigr){\partial D \over \partial y}\Big|_{\tau}$

Computing and inserting these derivatives [I’ve given a graphical representation of the first of these equations below, because the derivation is a little dry at present!] into the BS equation gives

${\partial D \over \partial \tau}\Big|_{y} = {1 \over 2}\sigma^2 {\partial^2 D \over \partial y^2}$

This is the well-known Heat Equation in physics. For the sake of brevity I won’t solve it here, but the solution is well known – see for example the wikipedia page – which gives the general solution:

$D(y, \tau) = {1 \over \sqrt{2 \pi \sigma^2 \tau}} \int_{-\infty}^{\infty} D(y', 0)\, e^{-{(y - y')^2 \over 2 \sigma^2 \tau}}\, dy'$

where $D(y', 0)$ is the payoff condition (it’s now an initial condition, as expiry is at $\tau = 0$). The algebra is quite involved so I give the solution its own post, and you can show by substitution that the BS option formula given previously is a solution to the equation.

As an aside, what was the portfolio that I was considering all of the way through? Comparing to the vanilla greeks, we recognise the stock position as (minus) the option delta – the hedging portfolio is just the option together with just enough stock to hedge out the local delta risk. Of course, as time goes by this value will change, and we need to constantly adjust our hedge to account for this. This shows the breakdown caused by one of our assumptions – that we could trade whenever we want and without transaction costs. In fact, because we need to re-hedge at every moment to enforce this portfolio’s risk-free nature, in the presence of transaction costs the hedging costs in this strategy will be infinite! This demonstrates a significant failing of one of our assumptions; I’ll come back again to the effect of this in the real world in future posts.

Time Varying Parameters

When I discussed the BS equation here, one of the assumptions was that $r$ and $\sigma$ were constant parameters. In reality, neither of these will be constant: how much of a problem is this for us? In general, they will both be stochastic and hence unpredictable in the future. In this post however I’m going to stick to deterministic quantities and demonstrate that BS can be readily extended to time-varying rates and vols. This will enable us to price at-the-money options correctly, but still won’t help us with the vol smile effect that I discussed here.

If $r$ and $\sigma$ are time varying, the stochastic differential equation describing the underlying spot price in BS is

${dS \over S} = r(t)\,dt + \sigma(t)\,dW_t$

Using Ito’s Lemma as discussed before, this can be re-written in terms of the log as

$d(\ln S) = \Bigl( r(t) - {1 \over 2}\sigma^2(t) \Bigr) dt + \sigma(t)\, dW_t$

And hence

$S(t) = S(0)\exp\Bigl[ \Bigl( \bar{r} - {1 \over 2}\bar{\sigma}^2 \Bigr)t + \bar{\sigma} \sqrt{t}\, z \Bigr]$

where $z \sim {\mathbb N}(0,1)$. This is still lognormal as before, but we now have an effective rate and an implied volatility defined by (you can see these by comparing to the distribution coming from constant $r$ and $\sigma$):

$\bar{r}(t) = {1 \over t} \int_0^t r(t')\,dt'$

$\bar{\sigma}(t) = \sqrt{ {1 \over t} \int_0^t \sigma^2(t')\,dt' }$
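These two averages are straightforward to compute when the parameters are piecewise constant. A minimal sketch, with my own representation of the curves as (end-time, value) segments:

```python
import math

def integral_pc(segments, t):
    """Integral from 0 to t of a piecewise-constant function given as
    [(t_end, value), ...], with segments in increasing order of t_end."""
    total, prev = 0.0, 0.0
    for t_end, v in segments:
        if t <= t_end:
            return total + v * (t - prev)
        total += v * (t_end - prev)
        prev = t_end
    raise ValueError("t lies beyond the last segment")

def r_bar(r_segments, t):
    """Effective rate: the time-average of r(t') over [0, t]."""
    return integral_pc(r_segments, t) / t

def sigma_bar(vol_segments, t):
    """Implied vol: the root-mean-square of sigma(t') over [0, t]."""
    squared = [(t_end, v * v) for t_end, v in vol_segments]
    return math.sqrt(integral_pc(squared, t) / t)
```

For example, an instantaneous vol of 20% over the first year and 30% over the second gives a two-year implied vol of $\sqrt{(0.04 + 0.09)/2} \approx 25.5\%$.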

$\sigma(t)$ is called the instantaneous vol, but the relevant quantity for option pricing is always the implied vol $\bar{\sigma}(t)$. Assuming that we have a discount curve constructed from traded bonds, we can calculate $r(t)$ from that as I described here, and if we can see liquid at-the-money (ATM) options on the market at different times we can also calculate the $\sigma(t)$ function consistent with them from their implied volatilities, which we will need to simulate paths in Monte Carlo, via the following procedure

• Take all available ATM options, label their expiry times in order $\{ t_1, t_2, \cdots, t_n \}$
• Calculate their implied vols using a vol solver (eg. the one on my PRICERS page). These correspond to $\{ \bar{\sigma}(t_1), \bar{\sigma}(t_2), \cdots, \bar{\sigma}(t_n) \}$
• As a first approximation, we will assume $\sigma(t)$ is constant between time windows (although you could make approximations arbitrarily complicated). In this case, for $0 < t < t_1$ we have

$\bar{\sigma}(t_1) = \sqrt{ {1 \over t_1} \int_0^{t_1} \sigma^2\,dt' } \qquad 0 < t' < t_1$

• Which works out to simply $\sigma(t) = \bar{\sigma}(t_1)$ in this region
• For later vols, the procedure is a little more complicated:

$\bar{\sigma}(t_2) = \sqrt{ {1 \over t_2} \int_0^{t_2} \sigma^2(t')\,dt' } = \sqrt{ {1 \over t_2} \Bigl( \int_{t_1}^{t_2} \sigma^2(t')\,dt' + t_1\, \bar{\sigma}^2(t_1) \Bigr) }$

$t_2 \cdot \bar{\sigma}^2(t_2) - t_1 \cdot \bar{\sigma}^2(t_1) = \int_{t_1}^{t_2} \sigma^2(t')\,dt' = (t_2 - t_1)\, \sigma^2(t_1)$

$\sigma(t_1) = \sqrt{ t_2 \cdot \bar{\sigma}^2(t_2) - t_1 \cdot \bar{\sigma}^2(t_1) \over t_2 - t_1 }$

• And for the general time window $t_n < t < t_{n+1}$:

$\sigma(t) = \sqrt{ t_{n+1} \cdot \bar{\sigma}^2(t_{n+1}) - t_n \cdot \bar{\sigma}^2(t_n) \over t_{n+1} - t_n } \qquad t_n < t < t_{n+1}$

• And this final expression can be used to calculate the value of the instantaneous vol required between each time window to give the correct ATM implied vol

Note that this will fail if $t_{n+1} \cdot \bar{\sigma}^2(t_{n+1}) < t_n \cdot \bar{\sigma}^2(t_n)$ – this is because we expect the distribution variance $t \cdot \bar{\sigma}^2(t)$ to be an increasing function of time. If this didn’t hold for any time window, we’d have an arbitrage opportunity: selling the first option and buying the second to lock in a risk-free profit.
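The whole bootstrap can be sketched in a short function (names are my own). It works window by window with the cumulative variance $t\,\bar{\sigma}^2(t)$, and raises an error if the no-arbitrage condition above is violated:

```python
import math

def bootstrap_instantaneous_vols(expiries, implied_vols):
    """Back out piecewise-constant instantaneous vols sigma(t) from ATM
    implied vols sigma_bar(t_i), one time window at a time."""
    inst_vols = []
    prev_t, prev_total_var = 0.0, 0.0
    for t, vol in zip(expiries, implied_vols):
        total_var = t * vol * vol              # t . sigma_bar^2(t)
        window_var = total_var - prev_total_var
        if window_var < 0.0:
            raise ValueError("decreasing total variance: arbitrage!")
        inst_vols.append(math.sqrt(window_var / (t - prev_t)))
        prev_t, prev_total_var = t, total_var
    return inst_vols
```

Feeding in implied vols of 20% at $t_1 = 1$ and about 25.5% at $t_2 = 2$ recovers instantaneous vols of 20% and then 30%, as in the worked example above.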

As I mentioned, this extension allows BS to correctly price options and forwards at different times by matching to market observables, but still doesn’t predict any volatility smile. Can we take it any further? In fact we can: in a later post I will discuss the local vol model, which extends $\sigma(t)$ to $\sigma(S,t)$ and will allow us to match any set of arbitrage-free market prices.

Random Number Generators

I’m in the process of preparing some C++ code for the blog to allow user-downloadable and -compilable Monte Carlo pricers that won’t overload my server. That’s got me thinking about random number generators, and I’m going to talk a bit about them here [In fact, all of the algorithms discussed below are entirely deterministic, so I should really call them pseudo-random number generators. Whether or not particular algorithms are suitable for use in applications requiring random numbers is an area of active research and occasional catastrophic failure].

Uniform RNGs

Generally, in Monte Carlo simulations we want to be able to generate many ‘sample paths’, each of which is a realisation of a possible evolution of the underlying price, and evaluate the payoff for that path. As I discussed in an earlier post, we usually model these quantities as driven by brownian motions, whose increments are normally distributed, which means we need to generate random variables that are also normally distributed (so, for example, we are more likely to get values near to zero than at large values).

Generating these directly is quite tricky, so the first step is usually to generate uniform random variables and then transform these to gaussian variates. A uniform random variable is a variable with a constant probability of being anywhere within a certain range, typically [0,1] or [-1,1] – for the rest of this post, I denote a uniform variable as U[…], and define u~[0,1] and v~[-1,1]. The pdf of u and v are shown here:

A straightforward method of generating these numbers uses modular multiplication [this is variously called a Park-Miller generator, a linear congruential generator or a Lehmer generator]:

$x_{k+1} = g \cdot x_k \mod n$

For good choices of the variables, this will generate a sequence with a period of up to $n - 1$ that is uniformly distributed across the interval $(0, n)$, so dividing by $n$ rescales the variables to be distributed roughly according to u. The choices of $g$, $n$ and $x_0$ (the ‘seed’) are non-trivial and several are given in the literature; as usual I’ve followed the advice given on wikipedia and plumped for $n = 2^{32} - 5$, $g = 279470273$, and the choice of seed is left to the user but should be coprime to the modulus $n$. Note that this will have a period of about $2^{32}$, so the upper limit on the number of monte-carlo evaluations that can meaningfully be done is around 4 billion draws. It is also easy to see how the generator could be hampered by a poor choice of parameters – if the multiplier and the modulus are not co-prime, for example, the actual period of the sequence can be much less than $n$.
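A minimal sketch of this generator, using the constants quoted above (the class name is my own):

```python
class LehmerGenerator:
    """x_{k+1} = g * x_k mod n, rescaled to (0, 1)."""
    n = 2 ** 32 - 5   # a prime modulus
    g = 279470273

    def __init__(self, seed=123456789):
        if not 0 < seed < self.n:
            raise ValueError("seed must lie strictly between 0 and n")
        self.x = seed

    def uniform(self):
        """Next pseudo-random draw, approximately uniform on (0, 1)."""
        self.x = (self.g * self.x) % self.n
        return self.x / self.n
```

Two generators constructed with the same seed produce identical sequences, which is exactly the reproducibility property discussed below.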

The more advanced Mersenne Twister algorithm aims to correct many of the problems with the Park-Miller generator, such as increasing the period length of the sequence and improving the statistical randomness of the values produced. It’s too complicated to go through in full detail here (it’s a full post in itself!) but many good summaries of the theory and performance of the twister algorithm can be found online.

One other thing to point out is that if initialised in the same way, each time one of the above algorithms is run it will return the same sequence of numbers. At first this might seem like an unwanted property, as it deviates from ‘true’ randomness. However, in our context it will be extremely valuable, as it allows us to test code using the same sequence of numbers each time. This is vital for calculating greeks by bump-and-revalue using monte carlo, as otherwise the monte carlo error in the price will usually swamp the small changes due to greeks, and it is also helpful when optimising code to make sure that results aren’t being affected.

Gaussian RNGs

Once we have a source of uniform random variates, we need to consider methods for converting them to gaussian variates. The simplest and most direct is the inversion method, which involves finding a mapping between uniform random variate u and the standard normal random variate z. Consider first the CDF of the normal variable (I have discussed this before here), shown below:

This shows us the likelihood that z is below a certain value X, and is equal to the integral of the PDF from $-\infty$ to X. Because the PDF is normalised to integrate to 1, the CDF maps each possible outcome of z to a value between 0 and 1, with the most likely outcomes being where the CDF is steepest (since these cover the y-axis values more quickly), which is exactly where the PDF is largest. This is exactly the opposite of the mapping we want, so we consider instead the inverse CDF shown here:

This mapping is exactly what we’re after [although there are others – it’s not unique]. It maps uniformly distributed variates to normally distributed variates and they’re in a 1-to-1 correspondence. This procedure is robust and in principle it is exact. However, it is not always the fastest method, and it requires us to know the inverse CDF, which to evaluate in a reasonable time requires us to make approximations (this technique works for any distribution – although there is no closed form CDF for the normal distribution, for many other distributions the inverse CDF can be expressed in closed form, in which case this is almost certainly the best method).
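In code, the inversion method is a one-liner once an inverse-CDF implementation is to hand. Here, as an illustration, I borrow Python’s `statistics.NormalDist`, which provides a rational approximation of the normal quantile function:

```python
from statistics import NormalDist

_inv_cdf = NormalDist().inv_cdf  # the standard normal quantile function

def gaussians_from_uniforms(uniforms):
    """Map uniform variates on (0,1) to standard normal variates by inversion."""
    return [_inv_cdf(u) for u in uniforms]
```

The median $u = 0.5$ maps to 0, and $u = 0.975$ maps to about 1.96, the familiar two-sided 95% quantile.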

There are several other techniques for obtaining normal distributions from uniform variates, a few of which I have implemented and describe here because they’re quite interesting. The first is called the Box-Muller technique, which converts two independent uniform variates into two independent normal variates by combining the inversion technique described above, the trick that we use to calculate a gaussian integral, and the change-of-variables law for PDFs that I discussed here.

Consider two independent normally-distributed variables x and y. The joint PDF of the two is

$p(x,y)\,dx\,dy = p_z(x)\,p_z(y)\,dx\,dy = {1 \over 2 \pi}\exp\Bigl( -{1 \over 2}(x^2 + y^2) \Bigr)\,dx\,dy$

As for gaussian integration, we can re-express this in polar co-ordinates, remembering that $dx\,dy = r\,dr\,d\theta$

${1 \over 2 \pi}\exp\Bigl( -{1 \over 2}(x^2 + y^2) \Bigr)\,dx\,dy = {1 \over 2 \pi} e^{-{1 \over 2}r^2}\, r\,dr\,d\theta = p_R(r)\,p_{\theta}(\theta)\,r\,dr\,d\theta$

If we can simulate a value of $r$ and $\theta$ using uniforms that obeys these densities, we will be able to transform them to normals x and y straightforwardly. There is radial symmetry, so we can integrate over $d\theta$ to give us the radial cumulative probability density (ie. radial CDF)

$P(r \leq R) = \int_0^R r' e^{-{1 \over 2}r'^2}\,dr'$

Unlike x or y, we can calculate this in closed form and invert it

$\int_0^R r' e^{-{1 \over 2}r'^2}\,dr' = \Bigl[ -e^{-{1 \over 2}r'^2} \Bigr]_0^R = 1 - e^{-{1 \over 2}R^2}$

$P^{-1}(u) = \sqrt{-2 \ln(1 - u)}$

This quantile function allows us to select a point with the correct radial distribution using a first uniform variate, and we then need to select a theta from a uniform distribution around angles – this can be done by multiplying a second uniform variate by $2\pi$ [nb. we can simplify the radial quantile further – note that if u is uniformly distributed across [0,1], then so is (1-u), so we can substitute u for (1-u)]. The values of normally distributed variates are given by the x- and y-coordinates of the resulting point, ie.

$z_1 = \sqrt{-2 \ln u_1} \cdot \sin(2\pi u_2)$

$z_2 = \sqrt{-2 \ln u_1} \cdot \cos(2\pi u_2)$
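The pair of formulae above translate directly into code (a sketch; the helper name is mine):

```python
import math

def box_muller(u1, u2):
    """Turn two independent U(0,1) draws into two independent N(0,1) draws."""
    radius = math.sqrt(-2.0 * math.log(u1))
    angle = 2.0 * math.pi * u2
    return radius * math.sin(angle), radius * math.cos(angle)
```

Since $\sin^2 + \cos^2 = 1$, each output pair satisfies $z_1^2 + z_2^2 = -2\ln u_1$ exactly, which makes a handy sanity check.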

This is a really useful trick, quickly converting two uniform variates into two gaussian variates. We can improve the speed still further if we can get rid of the computationally expensive sine and cosine functions, for which there is another nifty trick using rejection. In the final step, we inserted two uniform variates to produce $z_1$ and $z_2$. We can use the following transformation to turn our initial uniform variates into two new variates that avoid the need for a sine or cosine:

Instead of taking $u_1$ and $u_2$, consider instead two uniform variables $v_1$ and $v_2$ across [-1,1]. Let $s = r^2 = v_1^2 + v_2^2$, and if s > 1 then reject the values and start again. The accepted points now lie within the unit circle, as shown below:

We can use change of variables to show that after rejecting external points, both theta and s are also uniform on [0,1]:

$p(s)\,ds = p(r)\,dr = 2r\,dr$

$p(s) = 2r\,{dr \over ds} = 2r \cdot {1 \over 2r} = 1$

Above, we used $u_1$ and $u_2$ to generate a value of r and an angle, but since the $s$ and $\theta$ that we’ve generated here are also uniform on [0,1], they can be used just as well. Because of our definitions, $\sin(2\pi \theta) = {v_1 \over r}$, so the new expressions are:

$z_1 = \sqrt{-2 \ln s} \cdot {v_1 \over r}$

$z_2 = \sqrt{-2 \ln s} \cdot {v_2 \over r}$

We’ve got an even more efficient set of formulae, at the cost of having to throw out around 20% of our values. It will turn out that this is often quicker than the basic version of the algorithm, but is totally useless if we move to quasi-random numbers, as I will discuss another time.
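A sketch of this polar variant (often attributed to Marsaglia; the function name is my own). Since $r = \sqrt{s}$, the factor $\sqrt{-2\ln s}/r$ can be computed as $\sqrt{-2\ln s / s}$:

```python
import math
import random

def polar_gaussians(rng=random):
    """Generate two independent N(0,1) draws by the polar rejection method."""
    while True:
        v1 = 2.0 * rng.random() - 1.0
        v2 = 2.0 * rng.random() - 1.0
        s = v1 * v1 + v2 * v2
        if 0.0 < s < 1.0:          # reject points outside the unit circle
            break
    factor = math.sqrt(-2.0 * math.log(s) / s)
    return v1 * factor, v2 * factor
```

Passing a seeded `random.Random` instance makes the output reproducible, in line with the remarks on seeding above.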

The final technique I want to talk about is a generalisation of acceptance-rejection, which we’ve just touched on and was also used implicitly in my first post on Monte Carlo.

The idea here is once again to simulate a distribution f(x) by sampling instead from another distribution g(x) that overlaps it everywhere (we can rescale g(x) by a constant A to ensure this), and rejecting values that fall inside A.g(x) but outside of f(x). This is usually used when we can simulate g(x) directly from uniforms by using its inverse CDF – perhaps the best way to demonstrate how this works is through another example. We let f(x) be the normal PDF, and g(x) the exponential distribution, such that

$g(x) = \lambda e^{-\lambda x}$

This only has support on positive x, so we redefine it to cover positive and negative x, and rescale it so that it is everywhere larger than the normal pdf

$A \cdot g(x) = e^1 \cdot e^{-|x|}$

and calculating the inverse CDF gives

$G^{-1}(u) = \begin{cases} -\ln\bigl(2(1-u)\bigr) & 0.5 \leq u < 1 \\ \ln(2u) & 0 < u < 0.5 \end{cases}$

We generate two independent uniform draws $u_1$ and $u_2$. The first is used to simulate a variate distributed according to g(x) via the inverse CDF given here. Then we compare the value of A.g(x) at that point to the value of f(x) evaluated at that point. The second uniform variate is compared to the ratio of the two – if it is GREATER than the ratio, our point has fallen ‘outside’ of f(x), so it is rejected and we start again. If it is LOWER than the ratio, we accept the value of $G^{-1}(u_1)$ as a realisation of our target distribution f(x).

An obvious problem with this technique is that if the distributions don’t overlap well, many variates will be rejected, so we need a good uniform generator to provide enough independent variables for the algorithm to work.
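Putting the recipe together for the normal target and double-exponential envelope above (a sketch, with my own names; the acceptance rate under this loose envelope is only about $1/(2e) \approx 18\%$, which is exactly the inefficiency just described):

```python
import math
import random

SQRT_2PI = math.sqrt(2.0 * math.pi)

def normal_by_rejection(rng=random):
    """Sample N(0,1) by acceptance-rejection under the envelope A.g(x) = e.e^{-|x|}."""
    while True:
        u1, u2 = rng.random(), rng.random()
        if u1 == 0.0:
            continue  # avoid log(0) in the inverse CDF below
        # simulate x from the double exponential via its two-branch inverse CDF
        x = math.log(2.0 * u1) if u1 < 0.5 else -math.log(2.0 * (1.0 - u1))
        f = math.exp(-0.5 * x * x) / SQRT_2PI   # target density: normal PDF
        envelope = math.e * math.exp(-abs(x))   # A.g(x), >= f(x) everywhere
        if u2 < f / envelope:                   # accept with probability f / A.g
            return x
```

As before, passing a seeded `random.Random` gives reproducible draws.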

The algorithm can be made arbitrarily efficient by designing trial distributions g(x) that match f(x) as closely as possible, such as in the ‘Ziggurat algorithm’, which optimises the approach by breaking g(x) into many parts matched to different sections of f(x), and which is often used to generate normal random variates. As with the Mersenne Twister, there are many good guides to this algorithm available online.