Statistical Methods and Modeling of Seismogenesis. Eleftheria Papadimitriou

and the bi-component distribution, composed of a dominant exponential component and a secondary normal component. For the samples drawn from the exponential distribution, the kernel estimates were only marginally worse than the estimates obtained with the model [1.20]. For the samples drawn from the bi-component distribution, the kernel estimates fitted the starting distribution well, whereas the estimates based on the exponential model [1.20] deviated strongly from it. Kijko et al. (2001) used these results to advocate a wider use of the kernel estimation of magnitude distribution, particularly in seismic hazard studies.

      Many other studies have also indicated large differences between the “exponential” and “kernel” estimates of hazard parameters. The Monte Carlo analyses by Kijko et al. (2001) mentioned above, and the real-data studies by Lasocki and Papadimitriou (2006), suggest that when these estimates differ, the “kernel” estimate is the more accurate one. All of this favors the kernel estimation of magnitude distribution functions for seismic hazard assessment.
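      To make this comparison concrete, the Python sketch below draws synthetic magnitudes from a bi-component distribution of the kind described above (a dominant exponential component plus a secondary normal mode), fits the exponential model by maximum likelihood and contrasts it with a Gaussian kernel density estimate. It is only an illustrative sketch: the mixture parameters, the completeness magnitude and the use of scipy's gaussian_kde in place of the specific kernel estimator used by Kijko et al. (2001) are all assumptions.

```python
# Illustrative sketch (not the authors' code): kernel vs. exponential estimates
# of the magnitude distribution for a bi-component sample. All numbers are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m_min = 2.0                      # assumed completeness magnitude
n = 1000

# Bi-component sample: ~90% exponential (beta = 2.0), ~10% normal bump near M = 4.
exp_part = m_min + rng.exponential(scale=1.0 / 2.0, size=int(0.9 * n))
norm_part = rng.normal(loc=4.0, scale=0.2, size=n - exp_part.size)
sample = np.concatenate([exp_part, norm_part[norm_part >= m_min]])

# Exponential (unbounded Gutenberg-Richter-type) model, beta by maximum likelihood.
beta_hat = 1.0 / np.mean(sample - m_min)

# Non-parametric alternative: Gaussian kernel density estimate.
kde = stats.gaussian_kde(sample)

m = np.linspace(m_min, 5.5, 200)
exp_pdf = beta_hat * np.exp(-beta_hat * (m - m_min))
kde_pdf = kde(m)

# The kernel estimate reproduces the secondary mode near M = 4, which the
# single-exponential model cannot.
print("beta_hat =", round(beta_hat, 3))
print("pdf at M = 4: exponential =", round(float(np.interp(4.0, m, exp_pdf)), 3),
      "| kernel =", round(float(np.interp(4.0, m, kde_pdf)), 3))
```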

      The PSHA that uses the kernel estimation of magnitude distribution as an alternative to the parametric models [1.19] and [1.20] has been implemented on the IS-EPOS Platform (tcs.ah-epos.eu, Orlecka-Sikora et al. 2020). The kernel estimation of magnitude distribution is also applied in the SHAPE software package for time-dependent seismic hazard analysis (Leptokaropoulos and Lasocki 2020). SHAPE is open source and can be downloaded from https://git.plgrid.pl/projects/EA/repos/seraapplications/browse/SHAPE_Package.

      Orlecka-Sikora (2004, 2008) presented a method for assessing the confidence intervals of the CDF estimated by the kernel method. The method is based on the bias-corrected and accelerated (BCa) method of Efron (1987), the smoothed bootstrap and second-order bootstrap samples, and is called the iterative bias-corrected and accelerated method (IBCa).
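      For orientation only, the sketch below assembles the standard ingredients of Efron's BCa interval for one functional of the magnitude distribution, the sample CDF at a fixed magnitude: bootstrap replications of the statistic, the bias-correction constant from the fraction of replications below the point estimate, and the acceleration constant from the jackknife values. This is an assumed, simplified implementation with an ordinary (unsmoothed) bootstrap; the IBCa method additionally uses the smoothed bootstrap and second-order bootstrap samples introduced below.

```python
# Hedged sketch of Efron's (1987) BCa confidence interval for the sample CDF
# value at a fixed magnitude. This is not Orlecka-Sikora's IBCa implementation.
import numpy as np
from scipy.stats import norm

def bca_interval(sample, stat, n_boot=2000, alpha=0.05, seed=None):
    rng = np.random.default_rng(seed)
    n = sample.size
    theta_hat = stat(sample)

    # Ordinary bootstrap replications of the statistic.
    boot = np.array([stat(rng.choice(sample, size=n, replace=True))
                     for _ in range(n_boot)])

    # Bias-correction constant z0: fraction of replications below the estimate.
    z0 = norm.ppf(np.mean(boot < theta_hat))

    # Acceleration constant a from the jackknife values of the statistic.
    jack = np.array([stat(np.delete(sample, j)) for j in range(n)])
    d = jack.mean() - jack
    a = np.sum(d**3) / (6.0 * np.sum(d**2) ** 1.5)

    # BCa-adjusted percentiles of the bootstrap distribution.
    z = norm.ppf([alpha / 2, 1 - alpha / 2])
    adj = norm.cdf(z0 + (z0 + z) / (1 - a * (z0 + z)))
    return np.quantile(boot, adj)

# Example: 95% interval for P(M <= 3.0) from synthetic exponential magnitudes.
rng = np.random.default_rng(1)
mags = 2.0 + rng.exponential(0.5, size=300)
print(bca_interval(mags, lambda x: np.mean(x <= 3.0), seed=2))
```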

      The j-th jackknife sample is the (n − 1)-element sample {M1, M2, …, Mj−1, Mj+1, …, Mn}, that is, the initial sample from which the j-th element has been removed. Hence, we can have at most n jackknife samples.
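      In code, this construction is a one-line loop; the magnitudes below are arbitrary illustrative values.

```python
# The j-th jackknife sample: the original sample with its j-th element removed,
# giving at most n samples of size n - 1.
import numpy as np

magnitudes = np.array([2.1, 2.4, 2.2, 3.0, 2.7])   # illustrative values
jackknife_samples = [np.delete(magnitudes, j) for j in range(magnitudes.size)]

for j, s in enumerate(jackknife_samples):
    print(f"jackknife sample {j}: {s}")
```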

      The first-order smoothed bootstrap samples are obtained in the same way as previously (equation [1.25]). The k-th smoothed bootstrap sample is composed of:

      [1.33]
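      Since equations [1.25] and [1.33] are not reproduced here, the sketch below shows only a generic form of the smoothed bootstrap for orientation: a first-order sample resamples the magnitudes with replacement and perturbs each draw with Gaussian kernel noise of bandwidth h, and a second-order sample is generated in the same way from a first-order sample. The bandwidth rule and all numerical values are assumptions and may differ from those used in the cited works.

```python
# Generic smoothed bootstrap sketch (assumed form, not equations [1.25]/[1.33]).
import numpy as np

def smoothed_bootstrap(sample, h, rng):
    """Draw one smoothed bootstrap sample of the same size as `sample`."""
    n = sample.size
    resampled = rng.choice(sample, size=n, replace=True)   # ordinary bootstrap draw
    return resampled + h * rng.standard_normal(n)          # Gaussian kernel smoothing

rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(0.5, size=200)                  # synthetic magnitudes
h = 1.06 * mags.std(ddof=1) * mags.size ** (-0.2)            # assumed Silverman-type bandwidth

first_order = smoothed_bootstrap(mags, h, rng)               # a first-order sample
second_order = smoothed_bootstrap(first_order, h, rng)       # second-order sample drawn from it
print(first_order[:5])
print(second_order[:5])
```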
