Modern Computational Finance. Antoine Savine

      We only consider path-wise simulations. This means that simulations are provided one path (for all event dates) at a time. The model is, to us, an abstract object that generates multiple scenarios (possible evolutions of the world) and communicates them sequentially for evaluation.
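
      To make the loop order concrete, here is a minimal C++ sketch of a path-wise Monte-Carlo valuation, with the model and the evaluator represented by hypothetical callables rather than the interfaces developed later in the book:

#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// One scenario = the simulated data for all event dates of a single path.
using Scenario = std::vector<double>;

// Path-wise Monte-Carlo: the model fills one complete path at a time,
// the evaluator consumes it, and only then is the next path generated.
double pathWiseValue(
    const std::function<void(std::mt19937_64&, Scenario&)>& model,  // fills one path
    const std::function<double(const Scenario&)>& evaluate,         // payoff over one path
    const size_t nEventDates,
    const size_t nPaths)
{
    std::mt19937_64 rng(12345);
    Scenario scen(nEventDates);  // pre-allocated once, reused for every path
    double sum = 0.0;
    for (size_t p = 0; p < nPaths; ++p)
    {
        model(rng, scen);        // simulate all event dates of this path
        sum += evaluate(scen);   // evaluate the payoff over this path
    }
    return sum / nPaths;         // Monte-Carlo estimate of the expected payoff
}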

      An alternative that we don't consider is step‐wise simulation, where all paths are computed simultaneously, date by date. The model first generates all the possible states of the world for the first event date, then moves on to the second event date, and so on until it reaches the final maturity. Step‐wise simulation is natural with particular random generators, control variates (when paths are modified after simulation so that the expectation of some function matches some target), and, more generally, calibration inside simulations as in the particle method of [16].
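
      For contrast, a step-wise engine inverts the two loops. The sketch below uses toy Gaussian dynamics, for illustration only; the point is that the full cross-section of paths is available at every event date, which is what makes calibration inside simulations possible:

#include <cstddef>
#include <random>
#include <vector>

// Step-wise simulation (not used in this book): all paths are advanced together,
// one event date at a time. allPaths[p][d] is the state of path p at event date d.
void stepWiseSimulate(std::mt19937_64& rng, std::vector<std::vector<double>>& allPaths)
{
    std::normal_distribution<double> gauss(0.0, 1.0);
    const size_t nEventDates = allPaths.empty() ? 0 : allPaths[0].size();
    for (size_t d = 0; d < nEventDates; ++d)        // outer loop: event dates
    {
        for (auto& path : allPaths)                 // inner loop: all paths at once
        {
            const double prev = (d == 0) ? 0.0 : path[d - 1];
            path[d] = prev + gauss(rng);            // toy dynamics
        }
        // Here, the full cross-section for date d is known, e.g. for a
        // control variate adjustment or a particle-method calibration step.
    }
}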

      Monte-Carlo simulations are explained in detail in [19] and [15], in the dedicated chapters of [20], and in our publication [27], which provides a framework and code in C++ and focuses on a parallel implementation.

      We refer to the figure on page 15 for an illustration of the valuation process. The model produces a number of scenarios; every scenario is consumed by the evaluator to compute the payoff over that scenario. This is repeated over a number of paths, followed by an aggregation of the results. The evaluator is the object that computes payoffs (sums of cash flows normalized by the numeraire, which is also part of the scenarios, as described in chapter 5) as a function of the simulated scenarios. The evaluator conducts its work by traversal of the expression tree that represents the script, while maintaining an internal stack for the storage of intermediate results. We describe the evaluator in detail in section 3.6.
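
      As a toy illustration of that mechanism (the evaluator of section 3.6 is a visitor over a much richer node hierarchy), the sketch below evaluates a small expression tree with a stack of intermediate results:

#include <algorithm>
#include <memory>
#include <stack>
#include <vector>

// Toy expression node: either a constant or an operator over child nodes.
struct Node
{
    enum Kind { Constant, Add, Mult, Max } kind;
    double value;                                  // used by Constant nodes
    std::vector<std::unique_ptr<Node>> children;   // used by operator nodes
};

// Post-order traversal: evaluate the children first, then pop their results
// off the stack and push the result of this node.
void evaluate(const Node& node, std::stack<double>& stk)
{
    for (const auto& child : node.children) evaluate(*child, stk);

    if (node.kind == Node::Constant)
    {
        stk.push(node.value);
        return;
    }
    const double rhs = stk.top(); stk.pop();
    const double lhs = stk.top(); stk.pop();
    switch (node.kind)
    {
    case Node::Add:  stk.push(lhs + rhs);          break;
    case Node::Mult: stk.push(lhs * rhs);          break;
    case Node::Max:  stk.push(std::max(lhs, rhs)); break;
    default:                                       break;
    }
    // After the root node is processed, stk.top() holds the value of the expression.
}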

      The evaluation of the script over a path is typically executed a large number of times; therefore, the overall performance is largely dependent on its optimization. This is achieved by many forms of pre‐processing.

      Pre-processing is not only about pre-computing parts of subsequent evaluations, and it is not so much about optimizing the arithmetic calculations performed during simulations; it is mainly about moving most of the administrative, logistic overhead to processing time. CPUs perform mathematical calculations incredibly fast, but accessing data in memory may be slow when it is not carefully organized. With hard-coded payoffs (payoffs coded in C++), it is the compiler that optimizes the code, putting all the right data in the right places in the interest of performance. With scripting, the payoff, as a function of the scenario, is built at run time. The compiler cannot optimize this function for us; this is our responsibility, and this is where the pre-processors come into play.

      A special breed of pre-processors, called indexers, arranges data for the fastest possible access during simulations, with a massive impact on performance. For instance, when some cash-flow depends on a given Libor rate of a given maturity, the indexer “informs” the evaluator, before simulations start, where in pre-allocated memory that Libor will live during simulations, so that the evaluator reads the simulated Libor there directly, without any kind of expensive lookup.
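
      A minimal sketch of such an indexer, assuming a hypothetical LiborDef structure that captures the fixed specification of a Libor requested by the script:

#include <cstddef>
#include <string>
#include <vector>

// Hypothetical specification of a Libor needed by the script. Its simulated value
// changes on every path, but the specification does not, so it can be indexed
// once, before simulations start.
struct LiborDef
{
    std::string startDate;   // e.g. "03Jun2021"
    std::string tenor;       // e.g. "3m"
    std::string basis;       // e.g. "act/360"

    bool operator==(const LiborDef& rhs) const
    {
        return startDate == rhs.startDate && tenor == rhs.tenor && basis == rhs.basis;
    }
};

// Find-or-insert, called at processing time. The returned index is hard-wired
// into the evaluated expression, so that during simulations the evaluator reads
// the simulated Libor at that fixed slot in the scenario, with no lookup of any kind.
size_t indexLibor(std::vector<LiborDef>& defs, const LiborDef& def)
{
    for (size_t i = 0; i < defs.size(); ++i)
        if (defs[i] == def) return i;   // already requested by another cash-flow
    defs.push_back(def);
    return defs.size() - 1;             // first request: append and index it
}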

      The most noticeable benefits are achieved in the context of interest rates (and multiple assets), discussed in part III. The dates when coupons are fixed or paid, and the maturities of the simulated data such as discount factors and Libors, are independent of the scenarios. The values of these simulated variables may differ in every scenario, but their specification, including maturity, is fixed. Pre-processors identify what rates are required for what event dates, and perform memory allocation, indexing, and other logistics before simulations start. During simulations, where performance matters most, the model communicates the values of the simulated data in a pre-indexed array, for fast random access.
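
      In other words, the product side of the pre-processing produces, for every event date, a list of data requests that the model receives before simulations start, something like the hypothetical structure below (dates kept as strings for brevity):

#include <string>
#include <vector>

// Data required from the model on one event date, as collected by the pre-processor.
// The model sizes and fills the corresponding slots of the scenario at these indices.
struct EventRequest
{
    std::string eventDate;                    // the fixing/observation date
    std::vector<std::string> liborDefs;       // specification of each required Libor
    std::vector<std::string> discountDates;   // payment dates needing a discount factor
};

// A product's full request: one entry per event date, in chronological order.
using ProductRequests = std::vector<EventRequest>;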

      This is covered and clarified in part III, but we introduce an example straight away. Consider a script for a caplet:

01Jun2021 caplet pays 0.25 * max( 0, libor( 03Jun2021, 3m, act/360, L3) - STRIKE) on 03Sep2021

$$
V_0 = N_0 \, E\!\left[\frac{0.25\,\max\bigl(0,\ L_{3m}(\text{03Jun2021}) - \text{STRIKE}\bigr)\, DF(\text{01Jun2021},\,\text{03Sep2021})}{N(\text{01Jun2021})}\right]
$$

      While this is all mathematically very clear, a practical implementation is more challenging. The product is scripted at run time, so we don't know, at compile time, exactly what simulation data the evaluator expects from the model. Dedicated data structures must be designed for that purpose, something like (in pseudo-code):

Scenario  := vector of SimulData, one per event date
SimulData := numeraire, vector of libors, vector of discounts, …
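
      In C++, and with illustrative field names, such a scenario could look like the sketch below (the numeraire is simulated per event date, like the other data):

#include <vector>

// Simulated data for one event date: the numeraire plus whatever Libors and
// discount factors the pre-processor has requested for that date.
struct SimulData
{
    double numeraire;
    std::vector<double> libors;   // one slot per indexed Libor definition
    std::vector<double> disc;     // one slot per indexed discount maturity
};

// A scenario is one path: the simulated data for every event date.
using Scenario = std::vector<SimulData>;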

      In our simple example, we have one event date, 01Jun2021, and we need one Libor, fixed on the event date for a loan starting two business days later on 03Jun2021 with coupon paid on 03Sep2021, as well as one discount factor for the payment date. We humans know that from reading the script. A pre-processor figures it out by visiting the script at processing time. This allows it not only to pre-allocate the vectors of Libors and discounts, but also to transform the payoff into something like:

0.25 * max( 0, scen[0].libors[0] - STRIKE) * scen[0].disc[0] / scen[0].numeraire

      The pre-processor also “knows” that event date 0 is 01Jun2021, and that maturity 0 is 03Sep2021 for both the Libors and the discounts. That information may be communicated to the model before simulations start. The model then knows what it has to simulate and where in pre-allocated memory it must write the simulated data. The evaluator doesn't know what that simulated data is; in particular, it doesn't know what the maturities of the Libors and discounts are. All it does is execute the expression:

caplet += 0.25 * max( 0, scen[0].libors[0] - STRIKE) * scen[0].disc[0] / scen[0].numeraire
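
      Putting the pieces together, a sketch of the resulting simulation loop (with the scenario structure from above and the model represented by a hypothetical callable) could read:

#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

struct SimulData { double numeraire; std::vector<double> libors; std::vector<double> disc; };
using Scenario = std::vector<SimulData>;

// The pre-processed caplet expression, exactly as the evaluator executes it:
// fixed indices, no lookup, no knowledge of dates or maturities.
double capletPayoff(const Scenario& scen, const double STRIKE)
{
    return 0.25 * std::max(0.0, scen[0].libors[0] - STRIKE)
        * scen[0].disc[0] / scen[0].numeraire;
}

// Accumulation over paths: the model (any callable that fills the pre-allocated
// scenario) regenerates the same Scenario object on every path.
double capletValue(
    const std::function<void(Scenario&)>& model,
    Scenario& scen,
    const double STRIKE,
    const size_t nPaths)
{
    double caplet = 0.0;
    for (size_t p = 0; p < nPaths; ++p)
    {
        model(scen);                           // simulate event date 01Jun2021
        caplet += capletPayoff(scen, STRIKE);  // evaluate the indexed expression
    }
    return caplet / nPaths;
}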
