Statistical Methods and Modeling of Seismogenesis. Eleftheria Papadimitriou
2.3. Conceptual evolution of a physics-based earthquake simulator
Although based on some of the principles adopted for the previously described simulator algorithms (see section 2.2), the simulation algorithm described in this section was developed independently. It aimed to demonstrate that a model characterized by a few simple and reasonable assumptions can replicate not only the spatial features, but also the temporal behavior and the scaling laws of the observed seismicity. The relations among source parameters used in this algorithm have a physical justification and are consistent with the empirical relations known from the literature (see the appendix in section 2.5).
2.3.1. A physics-based earthquake simulator (2015)
In a study of the Corinth Gulf fault system (CGFS), Console et al. (2015) introduced a new and original earthquake simulator. The algorithm applied in their study was built upon the concepts introduced for earthquake simulators in California (Tullis 2012), such as the constraint imposed by the long-term slip rate on fault segments and the adherence to a physics-based model of rupture growth, without making use of time-dependent rheological parameters on the fault. Because of its limited sophistication, this algorithm is suitable for producing synthetic catalogs that resemble the long-term seismic activity of relatively simple fault systems, containing hundreds of thousands of earthquakes of moderate magnitude, even with quite modest computing resources. The basic concepts upon which this algorithm is built are shown in the flow chart of Figure 2.3. A detailed outline of the computer code is provided in the appendix in section 2.6. Here, we recall only the main features of this algorithm. In this version of the code, the seismogenic source is approximated as a rectangle, composed of a number of cells with assigned dimensions in the along-strike and down-dip directions. The rectangular fault is then divided into an arbitrary number of segments, which do not constitute any barrier to rupture growth. Their only role is to make it possible to assign a different slip rate, associated with tectonic loading, to each of them. Each cell is randomly assigned an initial stress budget within a given interval around an arbitrary average value. This accommodates our lack of knowledge about the initial state of stress and strength at each point of the fault. The stress budget on a cell is progressively changed in three ways:
– it is increased at every time step (e.g. one day) through equation [2.18], according to the long-term slip rate (tectonic loading) mainly constrained by geodetic measurements;
– it is decreased at the occurrence time of every rupture, by a given amount (e.g. 3.3 MPa); the same cell can rupture more than once in the same earthquake;
– it is increased by a Coulomb stress change associated with a point source at the center of any other cell that ruptures during an earthquake; as all cells are assumed to rupture approximately with the same mechanism and the same fault plane, this stress change is always positive (see the appendix in section 2.6 for details).
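The three mechanisms above can be illustrated with a minimal bookkeeping sketch. All names and numerical values here are assumptions chosen for the example, not parameters taken from Console et al. (2015); the Coulomb transfer is represented by a precomputed per-receiver array rather than an actual point-source stress calculation.

```python
import numpy as np

# Illustrative parameters (assumed values, not from the original study)
n_cells = 200
stress_drop = 3.3e6        # Pa: released by each cell rupture (3.3 MPa)
stressing_rate = 1.0e3     # Pa per day: set by the segment's long-term slip rate
mean_initial = 1.0e6       # Pa: arbitrary average initial stress budget
rng = np.random.default_rng(42)

# Random initial budget within a given interval around the arbitrary mean
stress = rng.uniform(0.5 * mean_initial, 1.5 * mean_initial, size=n_cells)

def step_loading(stress):
    """Mechanism 1: uniform increase at every (e.g. daily) time step."""
    return stress + stressing_rate

def apply_rupture(stress, ruptured_idx, coulomb_gain):
    """Mechanisms 2 and 3: fixed stress drop on the ruptured cell, and a
    positive Coulomb stress transfer (precomputed per receiver cell,
    always positive under the common-mechanism assumption) elsewhere."""
    out = stress.copy()
    out[ruptured_idx] -= stress_drop
    receivers = np.arange(stress.size) != ruptured_idx
    out[receivers] += coulomb_gain[receivers]
    return out
```

In a full simulation, `step_loading` would run once per time step and `apply_rupture` once per ruptured cell during an event; a cell rupturing more than once in the same event simply incurs the stress drop repeatedly.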
Figure 2.3. Flow chart of the computer code for earthquake simulation adopted in this study (based on Console et al. 2015). For a color version of this figure, see www.iste.co.uk/limnios/statistical.zip
Events are initiated one by one on the cell with the largest stress budget, but only if that budget exceeds a given stress threshold, which is assumed to be spatially constant over the entire source area for the sake of simplicity. The second ruptured cell of the event is chosen as the cell with the largest stress budget among the eight cells surrounding the nucleation cell, and so on for subsequent ruptured cells, until the stopping condition is met: none of the cells, including and surrounding those previously ruptured in the same event, has a stress budget exceeding the threshold (Figure 2.4).
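The nucleation, growth and stopping logic just described can be sketched as follows. This is a simplified reading of the text, not the authors' code: the candidate set, the in-function stress drop and the default values are assumptions made for the example.

```python
import numpy as np

def simulate_event(stress, threshold, shape, stress_drop=3.3e6):
    """Sketch of the nucleation/growth loop: nucleate on the cell with
    the largest stress budget (if above threshold), then repeatedly
    rupture the highest-stress candidate among already-ruptured cells
    and their eight neighbors, stopping when none exceeds the threshold.
    `stress` is the flattened per-cell stress budget in Pa."""
    ny, nx = shape
    grid = stress.reshape(shape)
    i0 = np.unravel_index(np.argmax(grid), shape)
    if grid[i0] <= threshold:
        return []                      # no cell can nucleate this step
    ruptured = [i0]
    grid[i0] -= stress_drop
    while True:
        # Candidates: every already-ruptured cell and its eight neighbors
        candidates = set()
        for (i, j) in ruptured:
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        candidates.add((ii, jj))
        best = max(candidates, key=lambda c: grid[c])
        if grid[best] <= threshold:
            break                      # stopping condition met
        ruptured.append(best)          # a cell may rupture more than once
        grid[best] -= stress_drop
    return ruptured
```

Because each rupture subtracts a fixed stress drop, the loop is guaranteed to terminate once every candidate has fallen below the threshold (the sketch omits the intra-event Coulomb transfer for brevity).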
Particular attention has been given to the part of the simulator code that tunes the conditions for stopping an already initiated rupture. We obtained reasonable results by introducing a pair of “heuristic” rules to modulate the stress threshold that must be exceeded for an ongoing event to expand into new cells or to repeat the slip on an already ruptured cell. These rules, which have a significant impact on the magnitude distribution of the synthetic catalogs, are:
1) The stress threshold adopted for the nucleation of an event is decreased, after the initial rupture of the nucleation cell, by a quantity proportional to the square root of the number of already ruptured cells, multiplied by a free parameter called the “strength reduction coefficient” (S-R). This feature mimics the sharp decrease of strength at the edges of an expanding rupture, through a sort of weakening mechanism. Increasing this parameter encourages the growth of ruptures, thus decreasing the b-value of the frequency-magnitude distribution. This parameter plays a role similar to that of the η free parameter in the Virtual Quake simulator developed for California (Schultz et al. 2017).
2) The square root of the number of already ruptured cells used in the previous rule is limited to a number equal to the width of the fault system, divided by the size of a cell, and multiplied by a free parameter called “fault aspect ratio” (A-R).
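The two rules above can be expressed compactly. The following sketch assumes the reduction is applied as a fraction of the base threshold; that scaling choice, along with all parameter names, is an assumption for illustration, not a detail stated by the authors.

```python
import math

def growth_threshold(base_threshold, n_ruptured, sr_coeff, aspect_ratio,
                     width_in_cells):
    """Rules 1 and 2: lower the nucleation threshold by a term
    proportional to sqrt(n_ruptured) times the strength-reduction
    coefficient S-R (rule 1), with the square-root term capped at
    width_in_cells * aspect_ratio, i.e. the A-R cap (rule 2)."""
    growth_term = min(math.sqrt(n_ruptured), width_in_cells * aspect_ratio)
    return base_threshold * (1.0 - sr_coeff * growth_term)
```

As the rupture grows, the threshold drops with the square root of the ruptured area until the A-R cap is reached, after which further growth confers no additional weakening.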
While the first of these two empirical rules enhances the capability of an already nucleated event to expand into a larger rupture, the second limits this enhancement so that the rupture size does not exceed the width of the fault system by many times. As numerous tests have shown, the strength reduction coefficient (of the order of a few percent) controls the proportion of seismic moment released by small versus large earthquakes: the smaller this parameter, the larger the number of small events. The fault aspect ratio, by contrast, has no influence on the magnitude distribution of the background activity, but affects the shape of the magnitude distribution in the large-magnitude range. This simple algorithm ensures a stable process, during which the stress budget remains below the nucleation threshold and never vanishes, provided a suitable initial value is chosen. The earthquake rate is modulated by the slip rate assigned to each fault segment.
The smallest magnitude produced by the simulator is that corresponding to the rupture of a single cell; however, the computer code allows the user to arbitrarily choose the smallest magnitude reported in the synthetic catalog (which cannot be smaller than the magnitude associated with an event rupturing an area equal to a single cell). Moreover, the computer code includes an option for running in warm-up mode for a desired number of years, in order to reach a stable state before the actual start of the synthetic catalog. Many tests have shown that the values arbitrarily assigned to the initial stress budget of each cell do not affect the statistical properties of the synthetic catalogs if the selected warm-up time is long enough (e.g. 1,000 years).