Comment on "Systematic survey of high-resolution b-value imaging along Californian faults: inference on asperities"

Yavor Kamer 1,2
1 Swiss Seismological Service, ETH Zürich, Switzerland
2 Chair of Entrepreneurial Risks, Department of Management, Technology and Economics, ETH Zürich, Switzerland

[Tormann et al., 2014] propose a distance exponential weighted (DEW) b-value mapping approach as an improvement to the previous constant-radius and nearest-neighbor methods. To test the performance of their proposed method, the authors introduce a score function:

    score = ( Σ_{i=1}^{n} |b_true,i − b_est,i| + 2 (N − n) ) / N        (1)

where N is the total number of grid nodes, n is the number of non-empty nodes, and b_true and b_est are the true and estimated b-values; each empty node is penalized with a misfit of 2. This score function is applied to a semi-synthetic earthquake catalog to make inferences about the parameters of the method. In this comment we argue that the proposed methodology cannot be applied to seismic analysis, since it requires a priori knowledge of the spatial b-value distribution, which is precisely what it aims to reveal.

The score function of Equation (1) seeks to minimize the absolute difference between the generating (true) and estimated b-values. Like any statistical parameter, the estimation error of the b-value decreases with increasing sample size. However, since the b-value is a measure of slope on a log-log plot, for any given sample size the estimation error is not only asymmetric but its amplitude also depends on the b-value itself. To make a pedagogical analogy: conducting b-value analysis with a limited sample size (Tormann et al. use 150) is similar to measuring temperature with a peculiarly short thermometer. For small sample sizes such a thermometer measures lower temperatures more precisely than higher ones, to the extent that it becomes impossible to distinguish between values at the upper end. Only when the sample size, and hence the resolution, is increased does the thermometer become reliable across the whole measurement range.
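To make this size dependence concrete, the short Monte Carlo sketch below (our own illustration with arbitrary settings, not code from either paper) draws Gutenberg-Richter distributed magnitudes for a few b-values and sample sizes and estimates b with the standard Aki/Utsu maximum-likelihood formula; the resulting 5-95% intervals widen markedly with b, reproducing the behaviour described above.

% Monte Carlo sketch (hypothetical settings): the spread of the
% maximum-likelihood b-value estimate widens with b itself, so a fixed
% sample size resolves low b-values better than high ones.
rng(1);
Mc   = 1.3;                 % assumed completeness magnitude
dM   = 0.1;                 % assumed magnitude binning
nSim = 1e4;                 % Monte Carlo realizations per case
for b = [0.7 1.0 1.3]       % "low", "medium" and "high" b-values
    for N = [150 300 1500]  % sample sizes discussed in the text
        bHat = zeros(nSim, 1);
        for k = 1:nSim
            beta    = b*log(10);                  % GR exponent in natural-log units
            M       = Mc - log(rand(N, 1))/beta;  % exponential magnitudes above Mc
            M       = round(M/dM)*dM;             % discretize to dM bins
            bHat(k) = log10(exp(1))/(mean(M) - (Mc - dM/2)); % Aki/Utsu estimator
        end
        ci = quantile(bHat, [0.05 0.95]);
        fprintf('b = %.1f, N = %4d -> 5-95%% interval [%.2f %.2f]\n', ...
                b, N, ci(1), ci(2));
    end
end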

For illustration, in Figure 1a we show the confidence intervals for six b-values; accordingly, we divide our thermometer into six intervals. The length of each interval is scaled with 1/σ_N as a proxy for resolution, where σ_N is the standard deviation of the b-value estimate for N samples. The analogous thermometers obtained for N = 150, 300 and 1500 are shown in Figure 1b. Intersecting the confidence-interval curves of different b-values, one can derive that even the crudest measurement in the range [0.6-1.5] requires a resolution of Δb = 0.3, which is achieved with at least ~170 samples. An analysis in this constrained setting can amount only to a mere classification of b-values as low (0.70±0.10), medium (1.00±0.13) or high (1.30±0.17). Such a graphical inference is, however, an oversimplification, because determining whether two Gutenberg-Richter (GR) distributions are significantly different (for a given p-value) would require nested hypothesis tests (see Supporting Document).

Compared to the previous sampling techniques, the proposed DEW method features two additional parameters: the maximum number of events (N_max) and the coefficient of exponential decay of the magnitude weights with distance (λ). The remaining parameters, common with the previous approaches, are set to the following values: minimum number of events N_min = 50, maximum search radius R_max = 7.5 km and minimum search radius to find the closest event R_min = 2.5 km. The authors propose a synthetic Monte Carlo approach to identify which parameter values are superior in retrieving the input structure. Their synthetic catalog features three regions with different b-values (0.5, 1.3 and 1.8) embedded in a background of b = 1. The input b-value of the dominant region (both in terms of number of events and of non-empty nodes) is chosen as 0.5. The authors find the best value for λ to be 0.7, while they set N_max to a constant value of 150. Considering the confidence intervals for different b-values (Figure 1a), such small sample sizes and strong weight decay may be sufficient for small b-values; for larger b-values, however, the same parameter values would be sub-optimal. To illustrate this, we obtain the same dataset used by Tormann et al. and conduct the same numerical test, varying the input b-value of the dominant region from 0.5 to 1.25 (Figure 2). We vary the parameters N_max and λ in the ranges [150-1950] and [0.01-1]. For each parameter pair the score function is averaged over 500 realizations of the synthetic catalog. The score function surfaces for the four different input b-values are presented in Figure 2a.
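For reference, a minimal MATLAB sketch of the score of Equation (1), as reconstructed above, is given below; the node-wise estimates bEst would come from the spatial sampling itself (e.g., the DEW weighting, which we do not reproduce here), and nodes without an estimate are flagged with NaN. Averaging this score over many catalog realizations, as in Figure 2, yields one score value per parameter pair.

function s = bMapScore(bTrue, bEst)
% Sketch of the score of Equation (1) as reconstructed above (not the
% authors' code). bTrue and bEst are vectors with one entry per grid
% node; NaN in bEst marks a node with no b-value estimate.
N       = numel(bTrue);            % total number of grid nodes
sampled = ~isnan(bEst);            % non-empty nodes
n       = nnz(sampled);            % number of non-empty nodes
misfit  = sum(abs(bTrue(sampled) - bEst(sampled)));
s       = (misfit + 2*(N - n))/N;  % empty nodes penalized with 2
end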

Although in their synthetic test Tormann et al. conduct their optimization procedure by setting N_max = 150, in the application to real seismicity the authors set N_max = ∞, effectively designating R_max = 7.5 km as the limiting parameter. This parameter flip does not change our conclusion below, however, since for any spatial distribution N_max and R_max are coupled through its fractal dimension [Hirabayashi et al., 1992]. Nevertheless, we repeat the synthetic test setting N_max = ∞ and varying R_max in the range [1-100 km]. The score surfaces are presented in Figure 2b. We observe that the minima vary as a function of the synthetic input: structures with b = 1 (one of the most well-established empirical laws in seismology [Gutenberg and Richter, 1954]) require much larger N_max (i.e., R_max) and lower λ values to be retrieved correctly. This indicates that choosing parameter values based on a synthetic input with low b-values will lead to undersampling of regions with higher b-values. Felzer (2006) observed that for a uniform b-value of b = 1 such undersampling leads to the emergence of artifacts with high or low b-values. Similar concerns regarding the undersampling of the Gutenberg-Richter distribution have been raised numerous times [Shi and Bolt, 1982; Frohlich and Davis, 1993; Kagan, 1999, 2002, 2010; Amorese et al., 2010]. We remind the reader that in this comment we are not concerned with the question of whether the b-value on a certain fault is uniform or not; rather, we argue that arbitrary parameter choices and improper optimization schemes can indeed lead to undersampling.

To illustrate the sensitivity of the published results to parameter choices, we consider the observed Parkfield seismicity. We apply the DEW method to obtain two b-value maps: (a) using the same parameter values as Tormann et al. (2014), R_max = 7.5 km, λ = 0.7; (b) using the parameters that would have been obtained had the input synthetic b-value been chosen as b = 1 rather than b = 0.5, R_max = 40 km, λ = 0.01 (Figure 3). It is important to note that Tormann et al. (2014) use these maps to calculate annual probabilities of an M6 or larger event. Figure 3 suggests that the emergence of high and low b-value anomalies is a mere artifact of undersampling. These artifacts lead to differences of up to two orders of magnitude in the recurrence times (see the sketch below); it would therefore be precarious to use such maps for the assessment of probabilistic seismic hazard on faults. Since one cannot know in advance what b-value the real data feature, and thus cannot choose the parameters accordingly, we maintain that the approach presented by Tormann et al. (2014) cannot be used on real datasets, as its results depend on the input b-values assumed to derive its parameters. Our conclusion also applies to the previously used b-value mapping methods of constant radius and nearest neighbor.
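The order-of-magnitude effect can be checked with a back-of-the-envelope extrapolation (our own illustration with an assumed, hypothetical node rate, not a number from either paper): if the Gutenberg-Richter a-value is anchored by the rate of events above the completeness magnitude Mc, the extrapolated rate of M ≥ 6 events scales as 10^(−b(6 − Mc)), so a swing of the mapped b-value by ~0.5 at Mc = 1.3 changes the implied recurrence time by a factor of 10^(0.5·4.7) ≈ 2·10^2.

% Back-of-the-envelope sketch (assumed numbers): recurrence time of an
% M >= 6 event implied by different local b-values when the annual rate
% of M >= Mc events at a node is held fixed.
Mc      = 1.3;    % completeness magnitude of the Parkfield catalog
rateMc  = 100;    % assumed annual rate of M >= Mc events at a node (hypothetical)
Mtarget = 6.0;
for b = [0.8 1.0 1.3 1.8]
    rate6 = rateMc*10^(-b*(Mtarget - Mc)); % N(>=6) = N(>=Mc) * 10^(-b*(6 - Mc))
    fprintf('b = %.1f -> T(M >= 6) ~ %.2g years\n', b, 1/rate6);
end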

We remind the reader that spatial/temporal b-value mapping has previously been tackled with likelihood-based approaches that take model complexities into account [Imoto, 1987; Ogata and Katsura, 1993]. To put the term model complexities into perspective, we note that for the 4174 complete events in Parkfield, Tormann et al. report ~1000 spatially varying b-values saturated in the range [0.50-1.50], whereas Imoto (1987), using penalized likelihood optimization for a comparably sized dataset (3035 complete events in New Zealand), obtains only 28 b-values in the range [0.94-1.08].

Acknowledgements

The MATLAB codes and the Parkfield earthquake catalog used to generate Figure 2 and Figure 3 can be downloaded from the following link: http://www.mathworks.co.uk/matlabcentral/fileexchange/46016. We encourage the reader to try different parameter sets and explore the large variety of b-value maps one can obtain from a single dataset in the absence of an objective criterion.

References

Amitrano, D. (2012), Variability in the power-law distributions of rupture events, Eur. Phys. J. Spec. Top., 205(1), 199-215, doi:10.1140/epjst/e2012-01571-9.

Amorese, D., J.-R. Grasso, and P. A. Rydelek (2010), On varying b-values with depth: results from computer-intensive tests for Southern California, Geophys. J. Int., 180(1), 347-360, doi:10.1111/j.1365-246x.2009.04414.x.

Felzer, K. R. (2006), Calculating the Gutenberg-Richter b value, AGU Fall Meet. Abstr., S42C-08.

Frohlich, C., and S. Davis (1993), Teleseismic b values; or, much ado about 1.0, J. Geophys. Res., 98, 631-644.

Gutenberg, B., and C. F. Richter (1954), Seismicity of the Earth and Associated Phenomena, 2nd ed., Princeton University Press, Princeton, N.J.

Hirabayashi, T., K. Ito, and T. Yoshii (1992), Multifractal analysis of earthquakes, Pure Appl. Geophys., 138(4), 591-610.

Imoto, M. (1987), A Bayesian method for estimating earthquake magnitude distribution and changes in the distribution with time and space in New Zealand, New Zeal. J. Geol. Geophys., 30(2), 103-116, doi:10.1080/00288306.1987.10422177.

Kagan, Y. Y. (1999), Universality of the seismic moment-frequency relation, Pure Appl. Geophys., 155(2-4), 537-573, doi:10.1007/s000240050277.

Kagan, Y. Y. (2002), Seismic moment distribution revisited: I. Statistical results, Geophys. J. Int., 148(3), 520-541, doi:10.1046/j.1365-246x.2002.01594.x.

Kagan, Y. Y. (2010), Earthquake size distribution: Power-law with exponent β = 1/2?, Tectonophysics, 490(1-2), 103-114, doi:10.1016/j.tecto.2010.04.034.

Ogata, Y., and K. Katsura (1993), Analysis of temporal and spatial heterogeneity of magnitude frequency distribution inferred from earthquake catalogues, Geophys. J. Int., 113(3), 727-738, doi:10.1111/j.1365-246x.1993.tb04663.x.

Shi, Y., and B. Bolt (1982), The standard error of the magnitude-frequency b value, Bull. Seismol. Soc. Am., 72(5), 1677-1687.

Tormann, T., S. Wiemer, and A. Mignan (2014), Systematic survey of high-resolution b-value imaging along Californian faults: inference on asperities, J. Geophys. Res. Solid Earth, doi:10.1002/2013JB010867.

FIGURES

Figure 1. a) Confidence intervals (5% and 95%) of b-value estimates with increasing sample size. b) Analogous thermometers corresponding to sample sizes N = 150, 300 and 1500. The length of each thermometer is representative of its absolute resolution. Where present, confidence-interval overlaps between gradations are shown by sawtooth waves with amplitudes scaled accordingly.

Figure 2. Variation of the minimum (red dots) of the score function for different input b-values as a function of a) the weight decay λ and the maximum allowed sample size N_max, and b) λ and the maximum search radius R_max.

Figure 3. Two b-value maps obtained from the same M ≥ 1.3 Parkfield seismicity of the last 30 years: using the same parameters as Tormann et al. (top panel: 0.50 < b < 1.86) and using the parameters minimizing the score function for a synthetic input of b = 1 (center panel: 0.80 < b < 0.96). The observed seismicity is superimposed as gray circles scaled according to magnitude. Notice that in the top panel the extreme b-values occur mainly in regions with low seismic density. The bottom panel shows the ratio of the expected recurrence times for a magnitude M6 or greater event.