# Interview Questions III

Today’s question will test some of the statistics and correlation results I’ve discussed in the last couple of months. Assume throughout that $x$ and $y$ are jointly normally distributed such that

$$ \begin{pmatrix} x \\ y \end{pmatrix} \sim N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix},\; \begin{pmatrix} \sigma_x^2 & \rho\,\sigma_x \sigma_y \\ \rho\,\sigma_x \sigma_y & \sigma_y^2 \end{pmatrix} \right) $$

a) Calculate $E[\,e^x\,]$

b) Calculate $E[\,e^x \mid y = y_0\,]$

The first expectation is of a lognormal variate, and the second is of a lognormal variate conditional on some correlated variate having taken a particular value – these are very typical of the sorts of quantities that a quant deals with every day, so the solution will be quite instructive! Before reading the solution, have a go at each one; the following posts may be useful: SDEs pt. 1, SDEs pt. 2, Results for Common Distributions

a) Here, we use the standard result for expectations of a lognormal variate: if $x \sim N(0, \sigma_x^2)$, then completing the square in the exponent gives

$$ E[\,e^x\,] = \int_{-\infty}^{\infty} e^x \, \frac{1}{\sigma_x \sqrt{2\pi}} \, e^{-\frac{x^2}{2\sigma_x^2}} \, dx = e^{\frac{1}{2}\sigma_x^2} $$

b) This one is a little tougher, so first of all I’ll discuss what it means and some possible plans of attack. We want to calculate the expectation of $e^x$, given that $y$ takes a value of $y_0$. Of course, if $x$ and $y$ were independent, this conditioning wouldn’t make any difference and the result would be the same as in a). However, because they are correlated, the realised value of $y$ will have an effect on the distribution of $x$.
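Before moving on, the result in a) is easy to sanity-check by Monte Carlo. Here is a minimal sketch using only the Python standard library; the choice $\sigma_x = 0.5$ is an arbitrary illustration, not part of the question:

```python
import math
import random

random.seed(42)
sigma_x = 0.5        # arbitrary choice for the check
n = 1_000_000

# Draw x ~ N(0, sigma_x^2) and average e^x over many samples
mc = sum(math.exp(random.gauss(0.0, sigma_x)) for _ in range(n)) / n

exact = math.exp(0.5 * sigma_x ** 2)   # E[e^x] = exp(sigma_x^2 / 2)
print(f"Monte Carlo: {mc:.4f}, exact: {exact:.4f}")
```

With a million samples the two values should agree to two or three decimal places.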

To demonstrate this, I’ve plotted a few scatter-graphs illustrating the effect of specifying $y$ on $x$, with $x$ and $y$ uncorrelated and then becoming increasingly more correlated.

*When x and y are uncorrelated, the realised value of y doesn’t affect the distribution of x, which is still normally distributed around zero.*

*When x and y are correlated, the realised value of y has an effect on the distribution of x, which is no longer centred on zero and has a smaller variance.*

*When the correlation of x and y becomes high, the value of x is almost completely determined by y. Now, if y is specified then x is tightly centred around a value far from zero.*

The simplest way of attempting this calculation is to use the result for jointly normal variates given in an earlier post, which says that if $x$ and $y$ have correlation $\rho$, we can express $x$ in terms of $y$ and a new variate $z \sim N(0,1)$ which is uncorrelated with $y$:

$$ x = \rho \frac{\sigma_x}{\sigma_y}\, y + \sqrt{1-\rho^2}\,\sigma_x\, z $$

so

$$ E[\,e^x \mid y = y_0\,] = e^{\rho \frac{\sigma_x}{\sigma_y} y_0} \cdot E\left[\, e^{\sqrt{1-\rho^2}\,\sigma_x z} \,\right] $$

Since the value of $y$ is already determined (i.e. $y = y_0$), I’ve separated this term out, and the only thing I have to calculate is the expectation of the second term, in $z$. Since $y$ and $z$ are independent, we can calculate the expectation of the $z$ term, which is the same process as before but featuring slightly more complicated pre-factors:

$$ E[\,e^x \mid y = y_0\,] = e^{\rho \frac{\sigma_x}{\sigma_y} y_0 + \frac{1}{2}(1-\rho^2)\sigma_x^2} $$

We can check the limiting values of this – if $\rho = 0$ then $x$ and $y$ are independent [this is not a general result, by the way – see Wikipedia, for example – but it IS true for jointly normally distributed variables]; in this case $E[\,e^x \mid y = y_0\,] = e^{\frac{1}{2}\sigma_x^2}$, just as above. If $\rho = \pm 1$, $E[\,e^x \mid y = y_0\,] = e^{\pm\frac{\sigma_x}{\sigma_y} y_0}$, which also makes sense, since in this case $x = \pm\frac{\sigma_x}{\sigma_y}\, y$, so $y_0$ fully determines the expectation of $e^x$.
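The closed form can be checked the same way as part a): with $y$ pinned at $y_0$, only the independent $z$ term is random, so we can simulate it directly. A sketch with illustrative parameter values (the specific $\sigma_x$, $\sigma_y$, $\rho$, $y_0$ are my own choices):

```python
import math
import random

random.seed(7)
sigma_x, sigma_y, rho, y0 = 0.4, 0.3, 0.6, 0.5   # illustrative parameters
n = 1_000_000

# With y fixed at y0: x = rho*(sigma_x/sigma_y)*y0 + sqrt(1-rho^2)*sigma_x*z, z ~ N(0,1)
drift = rho * (sigma_x / sigma_y) * y0
mc = sum(math.exp(drift + math.sqrt(1 - rho ** 2) * sigma_x * random.gauss(0.0, 1.0))
         for _ in range(n)) / n

# Closed form: exp(rho*(sigma_x/sigma_y)*y0 + 0.5*(1 - rho^2)*sigma_x^2)
exact = math.exp(drift + 0.5 * (1 - rho ** 2) * sigma_x ** 2)
print(f"Monte Carlo: {mc:.4f}, exact: {exact:.4f}")
```

Setting `rho = 0.0` recovers the unconditional answer from part a), matching the limiting case discussed above.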

The more general way to solve this is to use the full 2D joint normal distribution as given in the previous post mentioned before:

$$ p(x,y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left( \frac{x^2}{\sigma_x^2} - \frac{2\rho x y}{\sigma_x \sigma_y} + \frac{y^2}{\sigma_y^2} \right) \right\} $$

This is the joint probability density of $x$ and $y$, but it’s not quite what we need – the expectation we are trying to calculate is

$$ E[\,e^x \mid y = y_0\,] = \int e^x \, p(x \mid y = y_0) \, dx $$

So we need to calculate the conditional density of $x$ given $y = y_0$, for which we need Bayes’ theorem:

$$ p(x \mid y = y_0) = \frac{p(x, y_0)}{p(y_0)} $$

Putting this together, we have

$$ E[\,e^x \mid y = y_0\,] = \frac{1}{p(y_0)} \int e^x \, p(x, y_0) \, dx $$

This integral is left as an exercise to the reader, but it is very similar to those given above and should give the same answer as the previous expression for $E[\,e^x \mid y = y_0\,]$ after some simplification!
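For readers who want to check the exercise without doing the algebra, the Bayes-theorem expression can be evaluated numerically: code up the joint density and the marginal of $y$, integrate $e^x \, p(x, y_0)$ over a wide grid in $x$, divide by $p(y_0)$, and compare against the closed form. A sketch with the same illustrative parameters as before:

```python
import math

sigma_x, sigma_y, rho, y0 = 0.4, 0.3, 0.6, 0.5   # illustrative parameters

def joint_pdf(x, y):
    """2D normal density, zero means, variances sigma_x^2 / sigma_y^2, correlation rho."""
    q = (x**2 / sigma_x**2 - 2*rho*x*y / (sigma_x*sigma_y) + y**2 / sigma_y**2) / (1 - rho**2)
    return math.exp(-0.5 * q) / (2 * math.pi * sigma_x * sigma_y * math.sqrt(1 - rho**2))

def p_y(y):
    """Marginal density of y ~ N(0, sigma_y^2)."""
    return math.exp(-0.5 * (y / sigma_y) ** 2) / (sigma_y * math.sqrt(2 * math.pi))

# E[e^x | y = y0] = (1 / p(y0)) * integral of e^x * p(x, y0) dx, by Bayes' theorem
a, b, n = -5.0, 5.0, 20_000                 # x grid, wide enough for these parameters
h = (b - a) / n
integral = sum(math.exp(a + i*h) * joint_pdf(a + i*h, y0) for i in range(n + 1)) * h

numeric = integral / p_y(y0)
closed = math.exp(rho * (sigma_x / sigma_y) * y0 + 0.5 * (1 - rho**2) * sigma_x**2)
print(f"numerical: {numeric:.6f}, closed form: {closed:.6f}")
```

The two values agree to several decimal places, confirming that the two routes to the conditional expectation are consistent.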