Perturbation Methods in Credit Derivatives. Colin Turfus
They note in addition that results in Chapter 14 allow calibration of the Black–Karasinski model in a multi‐curve framework where the LIBOR spread(s) over the risk‐free rate can be stochastic and potentially correlated with the risk‐free rate. Furthermore, they note that results in Chapter 13 facilitate the extension of Black–Karasinski option pricing formulae, enabling the model to be conveniently calibrated to caps referencing backward‐looking risk‐free rates, as and when a market in these inevitably appears in the post‐IBOR world towards which the finance industry is currently heading.
2.5 EXPOSURE SCENARIO GENERATION
Market risk management is in the throes of a comprehensive re‐working of its risk framework to address the new Basel III regulations. Counterparty risk calculations are causing something of a headache. Previously, interest rate and credit curves were evolved by identifying principal components, allowing each of these components to evolve along a Monte Carlo path, then reconstructing the implied shape of the curves at each exposure time of interest. The pricing engine can then be run, taking the evolved curves along with the evolved values of other required spot variables (equity, FX, inflation, etc.) as input, to obtain conditional prices for each relevant portfolio for each path at each exposure time. The distribution of positive exposure values can then be considered, with the expected positive exposure (EPE) computed as its mean and/or the potential future exposure (PFE) calculated by considering the tail of this distribution. Unfortunately, auditors are unhappy that the somewhat ad hoc method hitherto used for evolving the principal components is inadequately justified, and would like to see something more industry‐standard implemented instead.
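The final step described above, extracting EPE and PFE statistics from the distribution of conditional portfolio values, can be sketched as follows. The portfolio values here are synthetic random data standing in for the output of the pricing engine, and the 97.5% PFE quantile is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical conditional portfolio values: n_paths x n_exposure_times.
# In practice these come from the pricing engine run on the evolved curves.
n_paths, n_times = 10_000, 8
values = rng.normal(loc=0.0, scale=1.0e6, size=(n_paths, n_times))

# Exposure is the positive part of the portfolio value.
positive_exposure = np.maximum(values, 0.0)

# EPE: mean of the positive exposure at each exposure time.
epe = positive_exposure.mean(axis=0)

# PFE: a high quantile (here 97.5%) of the exposure distribution,
# i.e. the tail statistic referred to in the text.
pfe = np.quantile(positive_exposure, 0.975, axis=0)
```

Both profiles are vectors indexed by exposure time; regulatory capital measures such as effective EPE are then simple functions of these.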
An option being considered is to try to evolve the curves in their entirety rather than just principal components. Standard models such as HJM‐based or Black–Karasinski present themselves as candidates. But to be useful, there has to be a convenient mechanism for constructing the entire forward curve at each exposure time of interest, for use as input to the pricing model. The Black–Karasinski lognormal model is preferred to the simpler normal Hull–White alternative on the basis that, as the curve evolves upward or downward, lognormal volatilities rise or fall in proportion, which is intuitively sensible; if a Hull–White model were used instead, consideration would also have to be given to how the volatilities evolve as the associated curve moves up or down. Further, for the curve evolution to work in a credit curve context it must guarantee positive values of all forward rates, which Black–Karasinski does, HJM‐based models can be made to, but Hull–White does not.
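The positivity argument can be illustrated with a minimal Euler–Maruyama simulation of the two one-factor models. All parameter values and the time grid below are illustrative assumptions, not calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 5_000, 250, 0.02   # a 5-year horizon, illustrative
kappa = 0.1                               # mean reversion speed (assumed)
sigma_hw, sigma_bk = 0.01, 0.8            # normal vs lognormal vols (assumed)
r0 = 0.02
theta_bk = np.log(r0)                     # BK reverts in log space

hw = np.full(n_paths, r0)                 # Hull-White: normal dynamics on r
bk_log = np.full(n_paths, np.log(r0))     # Black-Karasinski: dynamics on ln r

for _ in range(n_steps):
    dW = rng.standard_normal(n_paths) * np.sqrt(dt)
    # dr = kappa*(theta - r) dt + sigma dW : can cross zero
    hw += kappa * (r0 - hw) * dt + sigma_hw * dW
    # d ln r = kappa*(theta - ln r) dt + sigma dW : r = exp(ln r) > 0 always
    bk_log += kappa * (theta_bk - bk_log) * dt + sigma_bk * dW

bk = np.exp(bk_log)
```

Exponentiating the simulated log-rate guarantees positivity for Black–Karasinski by construction, whereas a material fraction of the Hull–White paths dip below zero, which is unacceptable for a credit intensity.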
Encouragement is taken from the availability of the highly accurate analytic conditional bond formulae for the Black–Karasinski model set out in Chapter 5. But it is felt that the simple one‐factor model does not do justice to the range of possible evolutions of the shape, not just the level, of interest rate and credit curves. Specifically, there should be fluctuations which impact mainly at the short end of the curve which decay relatively quickly (rapid mean reversion), whereas fluctuations affecting the long end are likely to be longer‐lived (slow mean reversion). So the multi‐factor Black–Karasinski model framework derived in §6.4 and expounded in greater detail in §15.5 is of interest. Work is initiated to implement the forward rate formula (15.49) to allow forward interest rate and credit curves to be generated from simple evolved Brownian variables, mutually correlated as necessary.
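The idea of combining a rapidly mean-reverting factor (short-end fluctuations) with a slowly mean-reverting one (long-end fluctuations) can be sketched as follows. The combination at the end is a simplified lognormal stand-in for the book's formula (15.49), which is not reproduced here, and all parameters are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt = 2_000, 200, 0.025
kappa_fast, kappa_slow = 1.5, 0.05   # rapid vs slow mean reversion (assumed)
sig_fast, sig_slow = 0.6, 0.3        # factor volatilities (assumed)
rho = 0.3                            # correlation between the Brownian drivers

x = np.zeros(n_paths)  # fast factor: short-lived, short-end fluctuations
y = np.zeros(n_paths)  # slow factor: long-lived, long-end fluctuations

# Correlate the two Brownian increments via a Cholesky factor.
chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
for _ in range(n_steps):
    z = chol @ rng.standard_normal((2, n_paths)) * np.sqrt(dt)
    x += -kappa_fast * x * dt + sig_fast * z[0]
    y += -kappa_slow * y * dt + sig_slow * z[1]

# Two-factor lognormal rate/intensity built from the evolved factors;
# positivity is again guaranteed by the exponential.
r_bar = 0.02
r = r_bar * np.exp(x + y)
```

In the full framework the forward curve at each exposure time would be reconstructed analytically from the evolved factor values, rather than from the spot rate alone as in this sketch.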
2.6 MODEL RISK
Model risk management faces a problem that one of the Monte Carlo pricing models used for credit derivatives pricing is found to manifest anomalous‐looking behaviour when high volatility levels are used in conjunction with long times to maturity in the presence of significant rates‐credit correlation. Auditors have asked for the situation to be investigated and an assessment made of what the correct behaviour should be, with the possibility of a reserve being set aside to take account of the risk of model error in the event that such large volatilities are observed in practice.
It is noted that the calculations set out in Chapter 8 illustrate how to price credit‐contingent (default or survival) cash flows accurately under circumstances of relatively weak credit risk, with credit intensity represented by a Black–Karasinski short‐rate model; further, that the relevant formulae are not limited in terms of the size of the credit volatility. The formula (8.54) for the value of protection payments is coded up and compared with the results from the Monte Carlo engine as the volatility level is increased. While the analytic results are seen to increase linearly with the credit volatility, the Monte Carlo results are found to deviate from this behaviour. It is concluded that model error is the likely cause of this deviation. A proposal is made that, in the event that volatility levels exceed a given threshold, a reserve should be set aside based on the difference between the Monte Carlo results and the analytic results derived using the formulae presented in Chapter 8. A suggestion is also made that the front office quantitative analysts consider integrating a pricer based on the analytic formulae into the pricing library and migrating trades over from the Monte Carlo model.
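The proposed reserve logic can be sketched as follows, with the two pricers replaced by hypothetical stand-in functions; the real comparison would use the Chapter 8 analytic formula (8.54) and the production Monte Carlo engine, and the threshold and price numbers below are purely illustrative:

```python
# Hypothetical stand-ins for the two pricers being compared.
def analytic_protection_value(vol: float) -> float:
    # Analytic result increases linearly with credit volatility.
    return 100.0 + 4.0 * vol

def monte_carlo_protection_value(vol: float) -> float:
    # Illustrative deviation from linearity at high vol (the suspected error).
    return 100.0 + 4.0 * vol - 0.8 * vol**3

VOL_THRESHOLD = 0.5  # assumed reserve trigger


def model_error_reserve(vol: float) -> float:
    """Reserve the engine discrepancy once volatility exceeds the threshold."""
    if vol <= VOL_THRESHOLD:
        return 0.0
    return abs(monte_carlo_protection_value(vol) - analytic_protection_value(vol))
```

Below the threshold no reserve is held; above it, the reserve is simply the absolute gap between the challenger (analytic) and champion (Monte Carlo) valuations.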
Buoyed by this success, the model validation team within the model risk management department consider implementing more of the analytic formulae in their benchmark library for use as “challenger models” in the model validation process. They note further from the suggestions in Chapter 16 that, in addition to providing alternative benchmarks, these analytic formulae can upon differentiation provide explicit formulae for the sensitivity of prices to model and market parameters. In this way model uncertainty calculations can be conveniently carried out, potentially at a large number of points in the product‐model phase space. Thus the circumstances where the greatest model uncertainty is to be expected can be identified. In particular, model testing can be focussed on such “hot spots”.
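The idea of scanning the product-model phase space for sensitivity “hot spots” might be sketched as follows. The closed-form price function here is hypothetical, and a central finite difference stands in for the analytic differentiation of the book's formulae:

```python
import numpy as np

# Hypothetical closed-form price as a function of credit volatility sigma
# and maturity T; a real implementation would use the book's formulae.
def price(sigma: float, T: float) -> float:
    return np.exp(-0.02 * T) * (1.0 + 0.5 * sigma**2 * T)

def vol_sensitivity(sigma: float, T: float, h: float = 1e-5) -> float:
    # Central finite difference; explicit differentiated formulae would
    # replace this in the benchmark library.
    return (price(sigma + h, T) - price(sigma - h, T)) / (2.0 * h)

# Scan a grid of the product-model phase space for the largest sensitivity.
sigmas = np.linspace(0.1, 1.0, 10)
maturities = np.linspace(1.0, 30.0, 30)
grid = np.array([[vol_sensitivity(s, T) for T in maturities] for s in sigmas])

i, j = np.unravel_index(np.argmax(grid), grid.shape)
print(f"largest vol sensitivity at sigma={sigmas[i]:.2f}, T={maturities[j]:.1f}")
```

Testing effort can then be concentrated on the grid cells where the sensitivity, and hence the expected model uncertainty, is greatest.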
2.7 MACHINE LEARNING
A new project has started at the bank recently to look into the future prospect of replacing the internal model used for capital risk calculations with a more computationally efficient machine learning‐based alternative. It is recognised that there are a number of unaddressed problems which currently prevent the realisation of such an ambition. One of these is that the cost of the training process for a machine learning‐based algorithm which replicates pricing functionality is prohibitively high for the high‐dimensional problem constituted by revaluing a bank's portfolio under diverse future market scenarios. Another problem is that the state space over which learning must take place is unbounded with respect to many of the relevant parameters, especially market parameters. Furthermore, regulators are very concerned that it be demonstrated in the model validation process that the internal model remains robust under extreme scenarios, such as those which occurred during the credit crunch of 2007. Consequently, a lot of effort is likely to have to be expended in the learning phase of the machine‐learning process on scenarios of marginal importance at the edge of the state space, which may contribute little to the overall risk numbers.
Encouragement is taken from the recent work of Antonov et al. [2020] which addresses the problem of how to handle the outer limits of the phase space in a machine‐learned representation of a pricing