
Tectonophysics (2012). Contents lists available at SciVerse ScienceDirect.

Review Article

Seismicity-based earthquake forecasting techniques: Ten years of progress

Kristy F. Tiampo (a,*), Robert Shcherbakov (a,b)
(a) Department of Earth Sciences, University of Western Ontario, London, ON, Canada
(b) Department of Physics and Astronomy, University of Western Ontario, London, ON, Canada
(*) Corresponding author. E-mail address: ktiampo@uwo.ca (K.F. Tiampo).

Article history: Received 1 February 2011; Received in revised form 10 August 2011; Accepted 25 August 2011; Available online 7 September 2011.

Keywords: Earthquake forecasting; Statistical seismology; Seismotectonics; Seismic hazard

Abstract

Earthquake fault systems interact over a broad spectrum of spatial and temporal scales and, in recent years, studies of the regional seismicity in a variety of regions have produced a number of new techniques for seismicity-based earthquake forecasting. While a wide variety of physical assumptions and statistical approaches are incorporated into the various methodologies, they all endeavor to accurately replicate the statistics and properties of both the historic and instrumental seismic records. As a result, the last ten years have seen significant progress in the field of intermediate- and short-term seismicity-based earthquake forecasting. This progress includes general agreement on the need for prospective testing and successful attempts to standardize both evaluation methods and the appropriate null hypotheses. Here we differentiate the predominant approaches into models based upon techniques for identifying particular physical processes and those that filter, or smooth, the seismicity. Comparison of the methods suggests that while smoothed seismicity models provide improved forecast capability over longer time periods, higher probability gain over shorter time periods is achieved with methods that integrate statistical techniques with our knowledge of the physical process, such as the epidemic-type aftershock sequence (ETAS) model or those related to changes in the b-value. In general, while both classes of seismicity-based forecasts are limited by the relatively short time period available for the instrumental catalog, significant advances have been made in our understanding of both the limitations and potential of seismicity-based earthquake forecasting. There is general agreement that both short-term forecasting, on the order of days to weeks, and longer-term forecasting over five-to-ten year periods, are within reach. This recent progress serves to illuminate both the critical nature of the different temporal scales intrinsic to the earthquake process and the importance of high-quality seismic data for the accurate quantification of time-dependent earthquake hazard.

© 2011 Elsevier B.V. All rights reserved.

Contents

1. Introduction
2. Physical process models
2.1. Accelerating moment release (AMR)
2.2. Characteristic earthquakes
2.3. Variation in b-value
2.4. The M8 family of algorithms
2.5. RTL
2.6. LURR
2.7. Pattern Informatics (PI) index
3. Smoothed seismicity models
3.1. EEPAS
3.2. Time-independent smoothed seismicity
3.3. ETAS methodologies
3.4. Relative Intensity (RI) method
3.5. TripleS
3.6. Non-Poissonian earthquake clustering
3.7. Seismic earthquake potential models
3.8. STEP
3.9. HAZGRIDX
3.10. Proportional Hazard Model (PHM)
4. Conclusions
Acknowledgments
References

1. Introduction

The impact to life and property from large earthquakes is potentially catastrophic. In 2010, the M7.0 earthquake in Haiti became the fifth most deadly earthquake on record, killing more than 200,000 people and resulting in ~$8 billion (USD) in direct damages (Cavallo et al., 2010). Direct economic damage from the M8.8 earthquake that struck Chile in February of 2010 could reach US $30 billion, or 18% of Chile's annual economic output (Kovacs, 2010). As a result of the potential regional and national impact of large earthquakes, research into their prediction has been ongoing on some level for almost 100 years, with intervals marked by optimism, skepticism, and realism (Geller et al., 1997; Jordan, 2006; Kanamori, 1981; Wyss, 1997). More than ten years ago, this controversy erupted in what has become known in the community as the Nature debates (Nature Debates, Debate on earthquake forecasting, nature/debates/earthquake, 1999; Main, 1999b). Prompted in large part by the apparent lack of success of the Parkfield prediction experiment (see, e.g., Bakun et al., 2005), it ultimately focused on the nature of earthquakes themselves and whether they might be intrinsically unpredictable. While this question has yet to be decided, it marked a turning point in the field of earthquake science, such that today earthquake forecasting, or the assessment of time-dependent earthquake hazard, complete with associated probabilities and errors, is the standard in earthquake predictability research. At the same time, a wealth of seismicity data at progressively smaller magnitude levels has been collected over the past forty years, in part related to the original goals of efforts such as the Parkfield experiment and in part out of recognition that there is much still to learn about the underlying process, particularly after the Parkfield prediction window passed without an earthquake (Bakun et al., 2005). While it has long been recognized that temporal and spatial clustering is evident in seismicity data, much of the research associated with these patterns in the early years focused on a relatively small fraction of the events, primarily at the larger magnitudes (Kanamori, 1981). Examples include, but are not limited to, characteristic earthquakes and seismic gaps (Bakun et al., 1986; Ellsworth and Cole, 1997; Haberman, 1981; Swan et al., 1980), Mogi donuts and precursory quiescence (Mogi, 1969; Wyss et al., 1996; Yamashita and Knopoff, 1989), temporal clustering (Dodge et al., 1996; Eneva and Ben-Zion, 1997; Frohlich, 1987; Jones and Hauksson, 1997; Press and Allen, 1995), aftershock sequences (Gross and Kisslinger, 1994; Nanjo et al., 1998), stress transfer and earthquake triggering over large distances (Brodsky, 2006; Deng and Sykes, 1996; Gomberg, 1996; King et al., 1994; Pollitz and Sacks, 1997; Stein, 1999), scaling relations (Pacheco et al., 1992; Romanowicz and Rundle, 1993; Rundle, 1989; Saleur et al., 1995), pattern recognition (Keilis-Borok and Kossobokov, 1990; Kossobokov et al., 1999), and time-to-failure analyses (Bowman et al., 1998; Brehm and Braile, 1998; Bufe and Varnes, 1993; Jaumé and Sykes, 1999).
Although this body of research represents important attempts to describe these characteristic patterns using empirical probability density functions, it was hampered by the poor statistics associated with the small numbers of moderate-to-large events either available or considered for analysis. The availability of new, larger data sets and the computational advancements that facilitate complex time series analysis, including simulations, rigorous statistical tests, and innovative filtering techniques, provided new impetus for earthquake forecasting at a time when the field was apparently polarized on the issue (Nature Debates, Debate on earthquake forecasting, debates/earthquake, Main, 1999b; Jordan, 2006). In 2002, the first prospective forecast using small magnitude earthquake data was published (Rundle et al., 2002), and this was followed by renewed interest in seismicity-based methodologies and a renewed effort aimed at better defining and testing these techniques. Landmark initiatives in the earthquake forecasting validation and testing arena include the working group on Regional Earthquake Likelihood Models (RELM) as well as the Collaboratory on the Study of Earthquake Predictability (CSEP), both founded after 2000 (Field, 2007; Gerstenberger and Rhoades, 2010; Zechar et al., 2010). Although a suite of potential precursory phenomena exist in addition to those associated with changes in seismicity, including tilt and strain precursors, electromagnetic signals, hydrologic phenomena, and chemical emissions (Scholz, 2002; Turcotte, 1991), we limit our discussion to the predominant seismicity-based forecasting techniques actively researched over the past ten years. Other methods not discussed here include forecasting techniques associated with earthquake interactions, such as precursory seismic velocity changes (e.g., Crampin and Gao, 2010) or stress transfer studies (see King et al., 1994; Stein, 1999; and others). Here we review the current status of seismicity-based forecasting methodologies and the progress made in that field since the 1999 Nature debate. In the interest of space, we limit the discussion to methodologies which rely on the instrumental catalog for their data source and which attempt to produce forecasts which are limited in both space and time in some quantifiable manner. As a result, these methods primarily produce intermediate-term forecasts, on the order of years, although we do include a small subset that relies on aftershock statistics to generate short-term forecasts on the order of days. Important discussion exists elsewhere on the appropriate standard for furnishing a testable earthquake forecast (Jackson and Kagan, 2006; Jordan, 2006), as well as the efficacy of various forecast testing methodologies and their evaluation (e.g. Field, 2007; Gerstenberger and Rhoades, 2010; Schorlemmer et al., 2007; Vere-Jones, 1995; Zechar et al., 2010). While there is no attempt here to test the reliability of these forecasting techniques against each other or a particular null hypothesis with rigorous statistics, in some cases attempts are made to compare to either a Poisson null hypothesis or to a null hypothesis which includes spatial and temporal clustering, as in the case of the relative intensity (RI) forecast model (Holliday et al., 2005) (see Section 3.4) or the ETAS model (see Section 3.3) (e.g. Vere-Jones, 1995).
We will discuss briefly any such efforts, or the lack thereof, particularly in those cases where the method has not been formally submitted for independent evaluation. We have separated the methods discussed here into two categories, although there is some unavoidable overlap. This paper begins with a review of a suite of seismicity-based forecasting methodologies, each of which assumes that a particular physical mechanism is associated with the generation of large earthquakes and their precursors, and performs a detailed analysis of the instrumental and/or historic catalog in order to isolate those precursors. We designate these physical process models (Section 2). In this subset we also include two techniques which fall slightly outside the parameters outlined above: the characteristic earthquake hypothesis and the accelerating moment release (AMR) hypothesis. While both use a relatively small subset of larger events and are not optimally formulated to produce time- and space-limited forecasts, their undeniable impact on the earthquake forecasting community mandates their inclusion here. In Section 3 we detail the evolution and current state of smoothed seismicity models. These models primarily apply a series of filtering techniques, often based on knowledge or assumptions about earthquake statistics, to seismicity catalog data in order to forecast on both short and intermediate time scales. We conclude with a short discussion of the limitations and future outlook of seismicity-based forecasting tools.

2. Physical process models

Physical process models are those in which the precursory process relies on one or more physical mechanisms or phenomena associated with the generation of large events. A detailed analysis, often but not always statistical, is performed on the instrumental and/or historic seismicity in order to isolate those precursors. These techniques are based on the assumption that the seismicity acts as a sensor for the underlying physical process and can provide information about the spatial and temporal nature of that process. It should be noted that, while the identification of a physical, and potentially verifiable, source for the earthquake generation process is an attractive feature of these methodologies, differentiating between the source and the subtly varying seismic phenomena is difficult. As a result, many of these techniques rely on pattern recognition or statistical methodologies to isolate the spatio-temporal signal. A comprehensive understanding of their relative successes and failures is often obscured by the complicated nature of the analyses, the simplifying assumptions of the physical models, and the heterogeneities that exist in the real earth. Here we discuss those physical process models that have had the greatest impact on the field and are part of ongoing forecasting research that employs high-quality catalogs from active seismic regions.

2.1. Accelerating moment release (AMR)

Precursory seismic activation, or foreshock activity, has been observed before a number of large events around the world (Bakun et al., 2005; Ellsworth et al., 1981; Jones and Molnar, 1979; Jordan and Jones, 2010; Rikitake, 1976; Sykes and Jaumé, 1990). The most widely applied method for analyzing these precursory increases in seismicity has been known as time-to-failure analysis, accelerating seismic moment release (ASMR), or accelerating moment release (AMR) (Ben-Zion and Lyakhovsky, 2002; Bowman and King, 2001; Bowman et al., 1998; Brehm and Braile, 1998; Bufe and Varnes, 1993; Jaumé and Sykes, 1999; Mignan, 2008; Robinson, 2000; Turcotte et al., 2003; among others). While AMR lies outside the general scope of this review, in that it uses only a relatively small fraction of the instrumental catalog in its analysis and its forecast time periods are poorly defined and often long-term, it is included here because of the important influence it has had on the discipline as well as its potential for incorporation into ongoing forecasting methodologies. A more complete discussion of the history and theory of AMR can be found in Mignan (2011).
Early studies found that the rate of seismic moment release for M ≥ 5 earthquakes increased with an accelerating component prior to large events in the San Francisco area, rather than linearly, and that the rate of cumulative seismic moment was best fit with an exponentially increasing model (Ellsworth et al., 1981; Sykes and Jaumé, 1990). Bufe and Varnes (1993) applied a power-law time-to-failure model (Voight, 1989) to the same seismicity sequences and found that the square root of the seismic energy, or cumulative Benioff strain, provided a better prediction of future events. An in-depth review relating material failure and crack propagation to the time-to-failure mechanism can be found in Main (1999a). Following Bufe and Varnes (1993), the relation for AMR is

\varepsilon(t) = A + B (t_f - t)^m    (2.1.1)

where t_f is the time of the mainshock, A and B are constants, and m typically falls between 0.1 and 0.5 with a mean value of 0.3. Here \varepsilon(t) = \sum_{i=1}^{N(t)} \sqrt{E_i} is the cumulative Benioff strain, where E_i is the seismic energy of the ith earthquake (Ben-Zion and Lyakhovsky, 2002). However, Mignan et al. (2007) showed that the cumulative number of events is preferred, such that precursory accelerating seismicity corresponds to an increase of the a-value, the y-intercept at the minimum magnitude cutoff of the Gutenberg-Richter (GR) curve. This result is supported by recent analyses of natural earthquake catalogs in addition to other AMR studies (see e.g. Bowman and Sammis, 2004; Jiménez et al., 2005). King and Bowman (2003) proposed elastic rebound theory (Reid, 1910) and Coulomb stress interactions (Bakun et al., 1986; King et al., 1994; Smalley et al., 1985; Stein, 1999) as the basis for the Stress Accumulation Model (SAM). In this version, AMR emerges from the background seismicity as the entire region becomes sufficiently stressed for the mainshock to occur, due to stress loading of the fault with time. The associated dimensions are directly related to the extent of increased Coulomb stress, and observations of accelerating moment release in California are related to the critical region defined using the Coulomb stress (Bowman and King, 2001; Mignan et al., 2006a). While conventional Coulomb stress techniques directly calculate stress changes, the stress cycle evolution method of King and Bowman (2003) models the evolution of the stress field relative to the failure stress. Following a large event, regions of increased seismicity (aftershocks) occur where the overall stress field is elevated. It also produces regions of reduced seismicity where the stress field has been lowered (stress shadows): broad, seismically quiet areas of quiescence, as seen in Fig. 1a. It should be pointed out that if the region being investigated is too large, the accelerating moment release is masked by unassociated random background seismicity, but if the selected region is too small, events that are important in identifying the acceleration are excluded (Bowman et al., 1998). Bowman et al. (1998) originally employed a simple search algorithm to define circular regions of AMR before large earthquakes. The cumulative Benioff strain within a series of circular regions was fit to both a power-law time-to-failure equation (Bowman et al., 1998; Bufe and Varnes, 1993) and a straight line. The ratio of the residuals of these fits (c = power-law residuals/linear residuals) is calculated for each radius and is called the c-value. The greater the curvature of the ASMR, the smaller the c-value and the higher the likelihood of an event.
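To make the procedure concrete, the following is a minimal sketch of the c-value calculation for a single candidate region, in the spirit of Bowman et al. (1998). The array names, the use of scipy's curve_fit, and the Gutenberg-Richter energy-magnitude relation (log10 E = 1.5 M + 4.8, E in joules) used to convert magnitude to Benioff strain are illustrative assumptions, not details taken from the studies cited above.

```python
import numpy as np
from scipy.optimize import curve_fit

def benioff_strain(magnitudes):
    # Assumed Gutenberg-Richter energy-magnitude relation (E in joules);
    # Benioff strain is the square root of the seismic energy.
    energy = 10.0 ** (1.5 * np.asarray(magnitudes, float) + 4.8)
    return np.sqrt(energy)

def c_value(times, magnitudes, t_f, m0=0.3):
    """Ratio of power-law to linear RMS residuals for the cumulative
    Benioff strain of events in one candidate region (times < t_f)."""
    eps = np.cumsum(benioff_strain(magnitudes))

    # Power-law time-to-failure model, Eq. (2.1.1): eps(t) = A + B (t_f - t)^m.
    def power_law(t, A, B, m):
        return A + B * (t_f - t) ** m

    popt, _ = curve_fit(power_law, times, eps,
                        p0=[eps[-1], -1.0, m0], maxfev=10000)
    res_pow = np.sqrt(np.mean((eps - power_law(times, *popt)) ** 2))

    # Straight-line (no acceleration) alternative.
    lin = np.polyval(np.polyfit(times, eps, 1), times)
    res_lin = np.sqrt(np.mean((eps - lin) ** 2))

    return res_pow / res_lin  # c < ~0.6 is taken as a reliable AMR signal
```

In practice, as described above and in the next paragraph, the region size (or stress-lobe pattern) would be varied and the minimum c-value retained.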
In recent versions, the region size is fit to a spatial pattern that approximates the stress change pattern associated with particular fault mechanisms. An updated version of SAM, in which accelerating seismicity is fit to the appropriate stress lobe patterns, is shown in Fig. 1b (King and Bowman, 2003). Typical rupture mechanisms, such as that shown in Fig. 2 for the southern San Jacinto Fault, are centered on the epicenter of the scenario earthquake. A series of these rupture scenarios of varying size is plotted for different time periods, and the minimum of the resulting plot is the region with the greatest acceleration. In Fig. 2 the regional optimization algorithm of Bowman et al. (1998) was applied to instrumental catalogs for southern California (Tiampo et al., 2008). The scenario fault in Fig. 2b has source parameters equivalent to a magnitude 7.5 earthquake along the southern San Andreas Fault. The best c-value for this event, calculated from the AMR curve in Fig. 2a, is 0.78, denoting a reliability value of approximately 25%. As c-values must be less than 0.6 for a reliable forecast (Mignan et al., 2006a), the event is unlikely to occur in the near future.

Fig. 1. Stress and seismicity through the earthquake cycle for the stress accumulation model. a) Stress changes for a finite-length fault 80 km long and 15 km wide. The maximum stress level is the failure stress (orange); all other stress levels are lower (the color bar indicates values). The stress field is generated by adding the stress change due to loading of the main fault to a random inhomogeneous regional stress field. Black areas indicate where earthquakes have occurred due to the stress rising above a failure threshold. The removal of the stress shadow from the previous event as a result of loading on the main fault leads to accelerating seismicity in the diagonal lobes (King and Bowman, 2003). b) Accelerating seismicity before a M=6.5 earthquake (King and Bowman, 2003). The solid line indicates the fit of the synthetic data to a power-law time-to-failure equation, while the dashed line is a linear fit to the data. The time scale is given as a percentage of the loading cycle (from Tiampo et al., 2008).

Mignan et al. (2007) and Mignan (2008) proposed a new approach, the Non-Critical Precursory Accelerating Seismicity Theory (Non-Critical PAST). Mignan et al. (2007) demonstrated analytically that, for a fixed region in space, the cumulative number of events, λ(t), that comprise the background seismicity increases as a power-law function through time before the mainshock. This acceleration corresponds to an increase of the a-value over a given region, in agreement with recent observations (Bowman and Sammis, 2004; Mignan and Di Giovambattista, 2008) and with previous simulations (King and Bowman, 2003), while events that occur in the stress shadows (i.e. background seismicity) tend to hide the accelerating seismicity pattern.

Fig. 2. AMR calculated for a large, magnitude 7.5 event on the southern San Jacinto fault, as shown in b). a) Best-fit model to the cumulative Benioff strain release for this scenario (from Tiampo et al., 2008).

While AMR has been observed in many regions (Bowman et al., 1998; Brehm and Braile, 1998; Di Giovambattista and Tyupkin, 2004; Jiang and Wu, 2006, 2010b; Mignan et al., 2006b; Papazachos et al., 2007; Robinson, 2000) and is still studied actively, it is not detected for all locations and events (Ben-Zion and Lyakhovsky, 2002; Gross and Rundle, 1998; Hardebeck et al., 2008; Main, 1999a). The reasons for this remain elusive. One potential explanation lies in the fact that, if the model proposed by Bowman and King (2001) and Mignan et al. (2006a,b) is correct, there is a cycle of activation-quiescence-activation in the seismicity that is localized in space (e.g. Di Giovambattista and Tyupkin, 2004; Evison and Rhoades, 2004; Jaumé and Sykes, 1999). Identification of these spatio-temporal variations can be difficult. As noted in Hardebeck et al. (1998), approximately sixty percent of aftershocks occur in regions where there is a stress increase related to a large event, such that the remaining forty percent of all aftershocks occur in areas designated as stress shadows, or quiescent regions. These can potentially mask the accelerating seismicity pattern. In addition, Ben-Zion and Lyakhovsky (2002) noted that, in simulations of fault networks, AMR occurs only in those cases where the seismicity before a large event has broad frequency-size statistics. As a forecasting tool, AMR presents a significant challenge because of the difficulties associated with fitting cumulative data, as originally noted by Bufe and Varnes (1993). First, this introduces a sample bias, so that distinguishing between AMR and non-AMR signals is difficult and a false diagnosis of AMR can arise from normal variation in the data (Greenhough et al., 2009; Hardebeck et al., 2008). Mignan (2008) showed that the c-value becomes unstable for noise levels higher than 20%, making c-value optimization less efficient. In particular, it cannot identify the quiescence pattern that is coupled to AMR. However, attempts to better quantify the activation-quiescence-activation pattern and its spatial signature have shown moderate success in recent years. Mignan and Di Giovambattista (2008) demonstrated that the Region-Time-Length (RTL) algorithm, another forecasting algorithm that quantifies relative activation and quiescence (see Section 2.5), is sensitive to the quiescent stage defined in Non-Critical PAST simulations, and that precursory accelerating seismicity and quiescence occurred in the same spatio-temporal window before the 1997 Umbria-Marche, Italy, earthquake. Mignan and Tiampo (2010) demonstrated that the Pattern Informatics (PI) index (see Section 2.7) also correctly identifies the quiescent regions associated with simulated AMR signals. Finally, AMR has been applied successfully in volcano-tectonic settings (Chastin and Main, 2003; Kilburn and Voight, 1998). Second, attempts to fit cumulative AMR data are hampered by the nonlinearity of that fit and the associated sample bias. The original promise of time-to-failure analysis was that the time to the next event could be estimated through curve fitting of the associated power law; however, this promise has never materialized (Bufe and Varnes, 1993; Main, 1999a).
Even if the theory is correct, the steep fit of the curve as it nears the actual earthquake occurrence means that even small variations in the data result in large errors in occurrence time. In addition, because of the uncertain forecast time periods and relatively large magnitudes of the events employed by the algorithm (~M ≥ 4.5), the forecast cannot be updated as fast as the ongoing seismic activity and the associated stress changes in an active tectonic region. This results in a significant number of false positives, or forecasts which do not result in subsequent events (Jordan and Jones, 2010; Mignan et al., 2006b). Finally, AMR forecasts are best suited to a binary forecast over this uncertain time period, and as a result have never been explicitly formulated for testing against a random or clustered null hypothesis. On the positive side, the AMR (SAM) approach has the benefit of providing not only an increased likelihood of an event, but also the mechanism and rupture length, which can be converted into a potential magnitude. Coupled with other techniques (Mignan and Di Giovambattista, 2008; Tiampo et al., 2008) that are better adapted to frequent updates for better temporal accuracy, AMR has the potential to augment intermediate-term forecasts with both mechanism and magnitude information.

2.2. Characteristic earthquakes

While the characteristic earthquake hypothesis also lies outside the parameters of study for this review, as outlined above, its widespread impact on both the seismicity-based forecasting community and ongoing hazard models over the past twenty years merits its inclusion here. The characteristic earthquake term originally was coined by Schwartz et al. (1981) and detailed in Schwartz and Coppersmith (1984), but the concept is an extension of the early work of Reid (1910). As noted in Section 2.1, above, elastic rebound theory hypothesizes that a large earthquake releases most of the accumulated stress on a given fault segment and that the next earthquake occurs after the stress builds up until it is restored to a level that results in rupture again. The characteristic earthquake model supposes that faults tend to generate earthquakes of the same size, over a very narrow range of magnitudes, on rupture zones or segments that are similar in location and spatial extent (Ellsworth and Cole, 1997; Parsons and Geist, 2009; Schwartz and Coppersmith, 1984; Schwartz et al., 1981; Wesnousky, 1994). The hypothesis leads to forecasts of specific events on the size of the rupture dimension of the largest earthquakes (~M6.5-9). The model is appealing because it fits historic and empirical observations at the most basic level, i.e. large earthquakes tend to occur where they have occurred in the past (Allen, 1968; Davison and Scholz, 1985; Frankel et al., 2002; Kafka, 2002; Petersen et al., 2007). Again, this particular method differs from the primary methods discussed elsewhere in this paper in that it does not utilize the recent seismicity catalogs, including small-to-medium size events, to quantify intermediate-term seismic hazard.

Fig. 3. (a) Schematic of the typical linear GR magnitude-frequency relation. (b) Schematic of the GR relationship for a characteristic earthquake distribution, with a sharp increase where higher-rate characteristic large-magnitude earthquakes are expected (from Parsons and Geist, 2009).

Instead, it relies on historic events of approximately magnitude five to six and above, and on the largest events from paleoseismic studies (Wesnousky, 1994). Paleoseismic studies (Anderson et al., 1989; Arrowsmith et al., 1997; Biasi and Weldon, 2006; Biasi et al., 2002; Grant and Shearer, 2004; Grant and Sieh, 1994; Lienkaemper, 2001; Lienkaemper and Prescott, 1989; Matsu'ura and Kase, 2010; Pantosti et al., 2008; Rockwell et al., 2003; Sieh, 1984; Sieh et al., 1989; Weldon et al., 2004, among others) provide detailed information on the displacement, rupture area and recurrence interval for inclusion in the characteristic earthquake model (Parsons and Geist, 2009; Schwartz and Coppersmith, 1984; Wallace, 1970; Wesnousky, 1994). In the characteristic earthquake model, the return periods, or recurrence intervals, of the largest, relatively infrequent, events are associated with the most significant seismic hazard for a given fault. While earthquakes obey the GR magnitude-frequency relation for large seismic regions and for smaller events on a single fault (Frohlich and Davis, 1993; Gulia and Wiemer, 2010; Gutenberg and Richter, 1944; Pacheco et al., 1992; Parsons and Geist, 2009; Schorlemmer et al., 2004a, 2005, and others), the rate of characteristic earthquakes may be greater than that expected from the GR scaling law, as shown in Fig. 3 (Parsons and Geist, 2009; Schwartz and Coppersmith, 1984; Wesnousky, 1994). There are many recent studies and applications of the characteristic earthquake theory for seismic hazard assessment (Cao et al., 2003, 2005; Chang and Smith, 2002; Frankel et al., 2002; Parsons, 2004; Petersen et al., 2008; Romeo, 2005; Stirling et al., 1996, 2002b). The two most notable examples are the Parkfield prediction experiment and the incorporation of the characteristic earthquake model into the Working Group on California Earthquake Probabilities (WGCEP) hazard estimates, which incorporate characteristic earthquakes into the construction of seismic hazard models for California (WGCEP, 1988, 1990, 1995, 2002, 2003, 2008). Earthquakes on the Parkfield segment of the San Andreas fault in California were designated as characteristic in the mid-1980s, based upon evidence for the regular recurrence in 1881, 1901, 1922, 1934, and 1966 of an event of the same approximate magnitude and location (Bakun and Lindh, 1985; Bakun and McEvilly, 1984; Bakun et al., 2005). As a result, the National Earthquake Prediction Evaluation Council (NEPEC) issued a forecast for an earthquake of approximately magnitude 6 that had a 95% chance of occurring between 1985 and 1993 near Parkfield, California (Shearer, 1985). The anticipated earthquake did not occur until September 2004, more than ten years after the end of the forecast interval. While a thorough review of the original prediction and the associated studies, modifications, and implications can be found in Jackson and Kagan (2006), the earthquake clearly did not satisfy the assumption of quasi-periodic behavior implicit in the original forecast. The characteristic earthquake model has an ongoing and significant impact on the assessment and quantification of seismic hazard in many regions.
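The renewal logic behind such forecasts can be illustrated with a short sketch: a conditional rupture probability computed from a quasi-periodic recurrence distribution. The lognormal form, its parameterization by mean interval and coefficient of variation, and the Parkfield-like numbers in the example are assumptions for illustration only, not values from the forecasts discussed here.

```python
import numpy as np
from scipy.stats import lognorm

def conditional_probability(mean_interval, cov, elapsed, horizon):
    """P(rupture within `horizon` years | quiet for `elapsed` years)."""
    # Convert mean recurrence interval and coefficient of variation
    # (aperiodicity) into lognormal parameters.
    sigma = np.sqrt(np.log(1.0 + cov ** 2))
    mu = np.log(mean_interval) - 0.5 * sigma ** 2
    F = lognorm(s=sigma, scale=np.exp(mu)).cdf
    return (F(elapsed + horizon) - F(elapsed)) / (1.0 - F(elapsed))

# Illustrative numbers only: ~22-yr mean interval, COV 0.4, 19 yr since
# the last event, probability of rupture within the next 8 yr.
print(conditional_probability(22.0, 0.4, 19.0, 8.0))
```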
However, while evidence persists that earthquakes occur in a quasi-periodic manner for at least some period during the life of a fault, the nature, persistence and variation of that behavior are both spatially and temporally complex (e.g. Biasi and Weldon, 2006; Cao et al., 2003, 2005; Chang and Smith, 2002; Faenza et al., 2003; Frankel et al., 2002; Ishibe and Shimazaki, 2009; Lienkaemper, 2001; Pailoplee et al., 2009; Parsons, 2004; Parsons and Geist, 2009; Peruzza et al., 2010; Petersen et al., 2008; Romeo, 2005; Stirling et al., 1996, 2002b; Vázquez-Prada et al., 2003). In particular, given the relatively short duration of both instrumental and historic catalogs, and the uncertainties associated with paleoseismic dating, the quantification of rupture dimension, fault segmentation and magnitude not only is difficult but also has important effects on the resulting hazard estimates (Biasi and Weldon, 2006; Jackson and Kagan, 2006; Page and Carlson, 2006; Parsons and Geist, 2009; Romeo, 2005; Savage, 1991, 1992; Stein and Newman, 2004; Stein et al., 2005; Stirling and Wesnousky, 1997). Perhaps most importantly, because the recurrence periods for characteristic events are relatively long and the resulting forecasts generally represent a small fraction of the seismic cycle, a forecast formulated from a characteristic return period cannot incorporate the dynamic nature of seismicity. The ongoing seismic activity and the associated interactions and stress changes in an active tectonic region cannot be incorporated into a forecast based on a characteristic earthquake model because its nature does not allow for the incorporation of spatial and temporal changes in activity (Jordan and Jones, 2010). The characteristic earthquake and the related seismic gap model continue to be studied and applied in various forms (e.g. Biasi and Weldon, 2006; Faenza et al., 2003; Fedotov, 1968; Hurukawa and Maung, 2011; Ishibe and Shimazaki, 2009; Kelleher, 1972; Kelleher et al., 1973; Lienkaemper, 2001; McCann et al., 1979; Nishenko, 1989; Nishenko and McCann, 1981; Pailoplee et al., 2009; Peruzza et al., 2010; Sykes, 1971; Sykes and Nishenko, 1984; Thatcher, 1989; Vázquez-Prada et al., 2003). It is possible to qualitatively include characteristic earthquakes in a probabilistic forecast model, as demonstrated by its inclusion in two of the forecast models for Italy submitted for testing to the CSEP test site. Both the long-term stress transfer (LTST) model (Falcone et al., 2010) and the Layered Seismogenic Source in Central Italy (LASSCI) model (Pace et al., 2010) include significant characteristic earthquake components in their formulations. However, its application to intermediate-term forecasting does present several practical and theoretical problems. Recent studies of the characteristic earthquake hypothesis (Jackson and Kagan, 2006; Rong et al., 2003; Stein and Newman, 2004; Stein et al., 2005) suggest that earlier supporting evidence is the result of the limited length of the instrumental earthquake catalog relative to the recurrence intervals, errors in the size or frequency of large events in paleoseismic records, and the variability in the choice of spatial extent and associated slip for the seismic gap region (Jackson and Kagan, 2006; Stein and Newman, 2004; Stein et al., 2005; Thatcher, 1989).
For this discussion it is clear that, whatever the outcome and future application of the characteristic earthquake theory, it is not a good fit to the category of seismicity-based forecasting techniques discussed elsewhere in this work. First, large events in wide spatial areas are used to forecast similar events over long time periods, rather than analyzing significant numbers of events to forecast the probability of larger events in specific, well-defined locations. Second, the inherent long-term nature of the increase or decrease in risk associated with these regions is extremely difficult to quantify in a manner that is both testable and verifiable. For example, while Hurukawa and Maung (2011) outline two seismic gaps in Myanmar, they cannot define the recurrence interval for these events or a time period of increased risk, which is likely on the order of 50 to 100 years. Finally, sampling effects can bias frequency-magnitude statistics at large magnitudes towards a characteristic distribution (Naylor et al., 2009). The small number of large events available over even a thirty or fifty year time period in either regional or worldwide catalogs makes definitive statistical testing extremely difficult and suggests that it will be many more years before the usefulness of this particular technique can be properly evaluated or implemented in an operational forecasting scheme (Jackson and Kagan, 2006; Schorlemmer and Gerstenberger, 2007; Vere-Jones, 1995, 2006; Zechar et al., 2010).

2.3. Variation in b-value

Variations in the b-value, or slope of the GR frequency-magnitude distribution for earthquakes, have been studied intensively over the past twenty years (Cao et al., 1996; Frohlich and Davis, 1993; Gerstenberger et al., 2001; Gutenberg and Richter, 1944; Imoto, 1991; Imoto et al., 1990; Ogata and Katsura, 1993; Schorlemmer et al., 2004a; Wiemer and Benoit, 1996; Wiemer and Schorlemmer, 2007; Wiemer and Wyss, 1997, 2002; Wiemer et al., 1998; Wyss and Wiemer, and many others).

Fig. 4. b-value maps for Izmit, Turkey for (a) the entire time period, 1985 to 1995, (b) January 1985 to April 1990 and (c) April 1990 to August 1995. Circles indicate the crustal volumes used to check the significance of b-value differences in space and time. Dotted lines contour areal strain (in units of microstrain, extension positive), as obtained from GPS observations. Stars mark the epicenters of the Izmit (white) and Duzce (gray) earthquakes (from Westerhaus et al., 2002).

For a more complete review of the early research into b-value variations, see Wiemer and Wyss (2002). In general, this work demonstrates that the b-value is highly heterogeneous in both space and time and on a wide variety of scales (Schorlemmer et al., 2004a; Wiemer and Schorlemmer, 2007; Wiemer and Wyss, 2002). These variations have important implications for earthquake hazard because local and regional probabilistic seismic hazard assessment (PSHA) is commonly performed using the GR frequency-magnitude distribution, particularly in areas of dispersed seismicity (Field, 2007; Wiemer and Schorlemmer, 2007; Wiemer et al., 2009). However, the primary focus of this study will be the implications of changes in the b-value that are potentially associated with future large events, and the associated research into b-value forecasting. Recent work on regional b-values has resulted in two important conclusions. First, the b-value varies with fault mechanism: it is smallest for thrust events (~0.7), intermediate for strike-slip events (~0.9), and greatest for normal events (~1.1). This ordering is inversely related to the mean stress in each regime (Schorlemmer et al., 2005). Gulia and Wiemer (2010) confirmed this result for regional seismicity in Italy. Second, related research suggests that locked patches on faults, or asperities, are characterized by low b-values, while creeping faults have higher b-values (Schorlemmer et al., 2004b; Wiemer and Wyss, 1994, 1997, 2002). Taken together, this suggests that the change in b-value can be used as a stress sensor, locating areas of high or low stress accumulation, particularly toward the end of the seismic cycle, and can be quantified in a regional earthquake forecasting model (Gulia and Wiemer, 2010; Latchman et al., 2008; Schorlemmer et al., 2005). This hypothesis is supported by laboratory results for acoustic emissions. These show that the b-value is sensitive to both stress (Scholz, 1968) and material heterogeneity (Mogi, 1967) to first order, and to stress intensity normalized by fracture toughness to second order (Sammonds et al., 1992). Stress intensity is proportional to effective stress (Sammonds et al., 1992) and to the square root of the length of the nucleating fracture, such that heterogeneous materials tend to be tougher, confirming the relationship between stress, heterogeneity and b-value. Many of the above references discuss increased seismic hazard associated with lower b-values (e.g. Westerhaus et al., 2002) and formulate postdicted maps of b-value variations for large events, as shown in Fig. 4. However, recent work has focused on formulating prospective probabilistic forecasts using b-value variations. Schorlemmer et al. (2005) studied b-value variations along the Parkfield segment of the San Andreas and produced retrospective forecasts for five-year periods by extrapolating the GR distribution with spatially varying b-values over small volumes. Wiemer and Schorlemmer (2007) developed the Asperity-based Likelihood Model (ALM) for California and provided it to the RELM forecast testing site. In this version, they analyze the seismic catalog for California for the minimum magnitude of completeness and to a depth of thirty kilometers. Because the sampling radii for the b-value calculations must range from five to twenty kilometers, depending on the activity rate, two models are calculated. The first is a local model, and the second is a regional model.
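For reference, the maximum-likelihood b-value estimate (Aki, 1965) that underlies such models, together with the log-likelihood that feeds the AIC comparison described next, can be sketched as follows. The binning correction, the array names, and the bin width dm are standard conventions assumed here rather than details of ALM itself.

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with a binning correction."""
    mags = np.asarray(mags, float)
    mags = mags[mags >= m_c]
    # b = log10(e) / (<M> - (m_c - dm/2)); the dm/2 term corrects for
    # magnitudes reported in bins of width dm.
    return np.log10(np.e) / (mags.mean() - (m_c - 0.5 * dm))

def gr_log_likelihood(mags, m_c, b):
    """Log-likelihood of the exponential (GR) magnitude distribution,
    usable in the AICc model comparison described in the text."""
    beta = b * np.log(10.0)
    m = np.asarray(mags, float) - m_c
    return m.size * np.log(beta) - beta * m.sum()
```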
The b-value fit is calculated from a likelihood score (Aki, 1965), and the two models are then compared with the corrected Akaike Information Criterion, AICc (Akaike, 1974; Burnham and Anderson, 2002). The lower AIC score indicates the better model. A search is performed by varying the size of the local regions and comparing them to the regional AIC value. The location with the smallest radius at which the local b-value model scores a lower AIC is used to compute the distribution for the seismicity in that region. Once a magnitude-frequency distribution is determined for each location, the annual rate of events in each magnitude bin between 5.0 ≤ M ≤ 9.0 can be calculated for the forecast (Wiemer and Schorlemmer, 2007). Gulia et al. (2010) provided an ALM forecast for Italy to CSEP. The methodology was similar to that of Wiemer and Schorlemmer (2007), above, except that the magnitude of completeness values were smoothed using a Gaussian kernel.

Fig. 5. The 5-year forecast rates for Italy for the ALM (top), the HALM (middle) and the ALM.IT (bottom) (from Gulia et al., 2010).

In addition, two modified forecasts were created from ALM: in the ALM.IT model, the input catalog is declustered for M ≥ 2 and a Gaussian filter applied on a nodal basis prior to calculation of the a-values in the magnitude-frequency distribution; in the HALM version, the model was modified so that the region was broken into eight subregions based upon tectonic provinces, with the appropriate subregion used in place of the single regional model, depending upon the location of each node. The results are shown in Fig. 5.

Long-term research into b-value statistics provides strong evidence that persistent variations occur that are correlated with the heterogeneous stress field in major fault zones. Continuing efforts have resulted in testable forecasts for earthquake occurrence and provide reassuring evidence that seismicity precursors can be translated into time-dependent hazard maps.

2.4. The M8 family of algorithms

The M8 algorithm (Keilis-Borok and Kossobokov, 1990; Keilis-Borok et al., 1990; Kossobokov, 2006a,b; Kossobokov et al., 1999, 2000, 2002; Latoussakis and Kossobokov, 1990; Peresan et al., 2005, among others) was developed approximately thirty years ago in order to locate regions of heightened earthquake occurrence probability in both space and time. Modified in the intervening years, the current algorithm calculates seven time series from smaller earthquakes, ~M4, for a specified region of investigation whose size is a function of the size of the earthquake that is to be forecast. The values from these time series are used to decide whether to invoke a Time of Increased Probability, or TIP, for a larger event of approximately M6.5-8 (Kossobokov et al., 1999). The M8 algorithm generally involves forecasts over relatively large areas of approximately five times the rupture dimension, or from hundreds to more than one thousand kilometers, and from six months to five years in the future (Kossobokov, 2006a). Predictions are calculated for earthquakes of magnitude M0 and above, with M0 considered in steps of 0.5. The region is scanned using overlapping circles with a diameter directly related to M0: 384 km, 560 km, 854 km and 1333 km for M6.5, M7.0, M7.5 and M8, respectively. Time series for earthquake sequences within each circle are calculated and then normalized by the lower magnitude cutoff. Several running averages are computed for the sequence in sliding time windows, typically six months, which characterize earthquake intensity, its deviation from the average, and seismicity clustering. Specifically, M8 calculates N(t), the number of mainshocks; L(t), the deviation of N(t) from the long-term trend; Z(t), the linear concentration of mainshocks, calculated as the ratio of l, the average diameter of the source, to the average distance r between them; and B(t), the maximum number of aftershocks, a proxy for earthquake clustering. N(t), L(t), and Z(t) are each calculated twice, for two different values of Ñ, the standard value of the average annual number of earthquakes in the sequence, typically 10 and 20. Values are flagged as large when they exceed a percentile Q of the encountered values, typically the 75th percentile for B and the 90th for the other functions. An alarm, or TIP, of five years occurs when at least six out of the seven functions, including B, become large within two consecutive time windows. An example of one sample calculation is shown in Fig. 6 (Kossobokov et al., 1999). Since its inception, the M8 algorithm has been both noted and controversial. Its effectiveness is still disputed, partly because it is effectively a pattern recognition approach to which no single causal physical mechanism has been ascribed (CEPEC Report, 2004a,b; Eneva and Ben-Zion, 1997; Harte et al., 2003; Harte et al., 2007; Kossobokov et al., 2000).
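The alarm logic itself is compact. The following sketch shows only the thresholding and TIP declaration step described above, applied to precomputed windowed time series; the series names and the dictionary data structure are hypothetical, and the actual M8 definitions of the seven functions are those given in the text.

```python
import numpy as np

def declare_tips(series, q_b=75.0, q_other=90.0):
    """series: dict mapping function names ('N1', 'N2', 'L1', 'L2',
    'Z1', 'Z2', 'B') to equal-length arrays of windowed values."""
    names = list(series)
    flags = np.vstack([
        series[name] >= np.percentile(series[name],
                                      q_b if name == 'B' else q_other)
        for name in names
    ])                                    # (7, n_windows) "large" flags
    counts = flags.sum(axis=0)            # how many functions are large
    ok = (counts >= 6) & flags[names.index('B')]  # >= 6 of 7, B included
    # TIP where the condition holds in two consecutive windows.
    return ok[:-1] & ok[1:]
```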
There have been many predictive successes (CEPEC Report, 2004a,b; Kossobokov et al., 1999), but these occur within alarm windows that are quite large in both space and time (Kagan, 1997; Kossobokov et al., 1999; Marzocchi et al., 2003, among others). As a result, the difficulties in understanding and testing the method are numerous. It forecasts large and infrequent events whose statistics are, as noted elsewhere in this work, difficult to evaluate without a sufficient sample size (Jackson and Kagan, 2006; Schorlemmer and Gerstenberger, 2007; Vere-Jones, 1995, 2006; Zechar et al., 2010). Finally, its rigid specification of regions, magnitudes and time mandates a binary forecasting criterion (i.e., success or failure) for evaluation, which makes it difficult to assess and means that it is highly sensitive to false positives (Harte et al., 2003, 2007; Jackson and Kagan, 2006; Marzocchi et al., 2003). Approximately ten years ago, a follow-on algorithm, called the Mendocino Scenario, or MSc, was added to the M8 family (Kossobokov, 2006a; Kossobokov et al., 1999). In this step, predictions are first made using M8; subsequently, the areas of alarm (TIP) are reduced by MSc. Given a TIP diagnosed for a certain territory U, the algorithm is designed to find within U a smaller area, V, where the predicted earthquake can be expected. Note that this particular algorithm requires a reasonably complete catalog of earthquakes with magnitudes above M~4. Territory U is coarse-grained into small squares.

Fig. 6. Sample calculation of the M8 algorithm for the prediction of earthquakes of magnitude 7.5 or higher in the Western United States. (a) Eight overlapping circles of investigation, CI, scan the region. In each CI the sequence of earthquakes above a certain magnitude cutoff is analyzed. (b) Time series from one of the CIs. (c) Seven running averages used in the analysis. When the large values for the functions marked with dots concentrate in a 3-year interval (July 1985 to July 1988), outlined by the light rectangle, a five-year TIP is declared (darker rectangle) (from Kossobokov et al., 1999).

Within each square, the number of earthquakes, including aftershocks, is calculated for consecutive short time windows. Quiet spatio-temporal boxes are identified based on the condition that the number of events is again below the Q percentile. Clusters of quiet boxes that are connected in space or in time are identified as chains, and the subarea, V, is based on these clusters. Thus, the MSc algorithm outlines an area of the TIP where the activity is generally high but has been interrupted for a short time. During the test period, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them (see Fig. 7) (Kossobokov et al., 1999). Kossobokov (2006a,b) applied both M8 and MSc to retrospective forecasting and suggested that the methodology could be rescaled for prediction of both smaller and larger magnitude earthquakes, based on retrospective testing on M5.5 events in Italy and on the M9.0 earthquake in Sumatra. Keilis-Borok et al. (2002) introduced a method for short-term earthquake forecasting. They outlined two seismicity patterns in addition to that employed in the MSc algorithm, ROC and Accord. The ROC pattern records the nearly simultaneous occurrence of medium-magnitude mainshocks at large distances, while the Accord pattern reflects a nearly simultaneous rise of seismic activity in several locations in a region. Both patterns were shown to precede five large earthquakes over a matter of months in California between 1968 and 1999, as well as over longer time periods. A short-term alarm of six to nine months is issued based upon chains of these signals that span large intervals. In mid-2003, the Keilis-Borok group issued two short-term earthquake predictions, one for a M ≥ 7.0 earthquake in a 250,000 mi² region in the northern part of the Japanese islands and one for a M ≥ 6.4 earthquake in a 40,000 mi² area of central California. The predictions were satisfied by the September 2003 Hokkaido and December 2003 San Simeon earthquakes (CEPEC Report, 2004a,b). This was followed by a prediction of a magnitude 6.4 or greater earthquake before September 5, 2004 in a 12,440 mi² region of southern California, and a subsequent prediction of a magnitude 6.4 or greater earthquake to occur before August 14, 2005, within a 12,660 mi² area of southern California. Neither prediction was fulfilled, nor were predictions for a large event in Japan and a moderate earthquake in the area of Slovenia (CEPEC Report, 2004a,b). The current status of the M8 predictions is available online. The issuance of TIPs continues, with a large enough success rate that Harte et al. (2003) implemented the algorithm in the statistical seismology software library SSLib for both its use and testing (R Development Core Team, 2006). This was followed by a modification of the method to produce a continuous probabilistic model for New Zealand for M8, instead of a binary alarm forecast (Harte et al., 2007). The results were favorable when tested against a random null hypothesis, although a physical motivation for successful forecasts remains unclear. One drawback to M8 is that it is a binary forecast, so that its performance is evaluated only by the proportion of hits, misses and false alarms.
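The bookkeeping for such a binary evaluation is straightforward. Below is a hedged sketch that tallies hits, misses and false alarms over hypothetical boolean space-time cells and computes a simple probability gain (the event rate inside alarms relative to the overall rate); the Molchan-style tau and nu quantities are standard conventions in this literature, not outputs of M8 itself.

```python
import numpy as np

def alarm_scores(alarm, event):
    """alarm, event: boolean arrays over the same space-time cells."""
    alarm = np.asarray(alarm, bool)
    event = np.asarray(event, bool)
    hits = int(np.sum(alarm & event))
    misses = int(np.sum(~alarm & event))
    false_alarms = int(np.sum(alarm & ~event))
    nu = misses / max(hits + misses, 1)   # Molchan miss rate
    tau = alarm.mean()                    # fraction of space-time alarmed
    # Probability gain: event rate inside alarms vs. overall event rate.
    gain = (hits / max(alarm.sum(), 1)) / max(event.mean(), 1e-12)
    return {'hits': hits, 'misses': misses, 'false_alarms': false_alarms,
            'miss_rate': nu, 'fraction_alarmed': tau,
            'probability_gain': gain}
```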
In addition, because a TIP is issued for large geographic regions and long durations, the probability gain over a forecast that is spatially accurate but temporally random is generally small, even though there may be very few misses (Romachkova et al., 1998). The question remains as to whether this high reliability can be translated into a significant probability gain that will prove useful to the hazard community.

2.5. RTL

RTL is a statistical method in which three earthquake-related parameters (time, place and magnitude) are combined into a weighted coefficient (Sobolev and Tyupkin, 1997, 1999). The algorithm combines the distance, time and rupture length of clustered seismicity into a single measure; the designation Region-Time-Length (RTL) arises from Region (epicentral distance), Time (interval), and Length (rupture size, i.e., magnitude). The method is used to investigate seismicity changes prior to large events; these changes occur over regions on the order of 100 km, and a few years prior to a large event (Mignan and Di Giovambattista, 2008).

Fig. 7. Investigation circles from a test of the M8 and MSc algorithms in the Circum-Pacific seismic belt; the 170 circles of 667-km radii used for prediction of magnitude 8.0 or greater earthquakes are shaded light, while the 147 circles of 427-km radii used for prediction of earthquakes of magnitude 7.5 or higher are shaded darker (from Kossobokov et al., 1999).

In recent years, RTL has been used to isolate anomalous quiescence and seismicity prior to large events in Japan, Russia, Turkey and Italy (Di Giovambattista and Tyupkin, 2000; Gentili, 2010; Huang, 2006; Huang and Nagao, 2002; Huang and Sobolev, 2001; Huang et al., 2001, 2002; Sobolev, 2001; Sobolev et al., 2002; Wyss et al., 2004). The RTL parameter is defined as the product of three functions:

R(x,t) = [ \sum_{i=1}^{n} \exp(-r_i / r_0) ] - R_{bk}(x,t)    (2.5.1)

T(x,t) = [ \sum_{i=1}^{n} \exp(-(t - t_i) / t_0) ] - T_{bk}(x,t)    (2.5.2)

L(x,t) = [ \sum_{i=1}^{n} (l_i / r_i) ] - L_{bk}(x,t)    (2.5.3)

where r_0 and t_0 are a characteristic distance and time, r_i is the distance from x, t_i the occurrence time, and l_i the rupture dimension, which is a function of the magnitude M_i of the ith event. The value of l_i is calculated using the empirical relation between the size of the source and the magnitude of the earthquake, M_i:

\log(l_i) = 0.44 M_i - 1.289.    (2.5.4)

Here n is the number of events satisfying r_i ≤ 2r_0, (t - t_i) ≤ 2t_0 and M_min ≤ M_i ≤ M_max, where r_0 and t_0 are the characteristic distance and time-span. Typically, r_0 = 50 km, t_0 = 1 year, and M_max ~ 3.8. R_bk(x,t), T_bk(x,t) and L_bk(x,t) are background trends of R(x,t), T(x,t) and L(x,t), respectively. R(x,t), T(x,t) and L(x,t) are dimensionless functions normalized by their standard deviations σ_R, σ_T and σ_L, respectively. The RTL parameter (in units of the product of the standard deviations, σ = σ_R σ_T σ_L) describes the deviation from the background level of seismicity. A negative RTL is interpreted as quiescence and a positive RTL as activation (Di Giovambattista and Tyupkin, 2000; Huang, 2004; Mignan and Di Giovambattista, 2008). Analysis is performed on a declustered catalog, and smaller events, down to the minimum magnitude of completeness, are included in the analysis (Mignan and Di Giovambattista, 2008). Note from the equations above that the contributions to R and T increase exponentially when an earthquake is located near the test location in either time or distance; conversely, a larger distance provides an exponential decrease. L grows if the prior earthquake has a greater magnitude, and decreases when the magnitude is smaller. The RTL parameter is designed such that seismic quiescence results in a negative anomaly in comparison to the average background, and seismic activation results in an increase of the RTL parameter (Di Giovambattista and Tyupkin, 2000; Huang, 2004).

Fig. 8. Plot of the RTL parameter calculated at the epicenter of the Umbria Marche mainshock. t_q corresponds to the period of precursory quiescence and t_f to the time of occurrence of the mainshock. Inset: number of events N used for estimation of the RTL parameter in the time window [t - 2t_0, t] and cylindrical volume of radius 2r_0, with t_0 = 1 year and r_0 = 50 km (from Mignan and Di Giovambattista, 2008, reproduced by permission of American Geophysical Union).

Huang et al. (2002) introduced the Q parameter, an average of the RTL values over some time window [t_1, t_2], to quantify seismic quiescence at any position (x, y, z):

Q(x, y, z; t_1, t_2) = (1/m) \sum_{i=1}^{m} RTL(x, y, z, t_i)    (2.5.5)

where t_i is the time in the window [t_1, t_2], RTL(x,y,z,t_i) is the RTL parameter calculated as the product of the three functions above, and m is the number of data points available in [t_1, t_2].
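A bare-bones implementation of Eqs. (2.5.1)-(2.5.4) at a single test point can be sketched as follows. The background-trend subtraction and the normalization by σ_R, σ_T and σ_L described above are omitted, and the `events` array is a hypothetical declustered input; both would need to be supplied in any real application.

```python
import numpy as np

def rtl_raw(events, t, r0=50.0, t0=1.0):
    """events: array of (distance_km, time_yr, magnitude) rows relative
    to the test point x; returns the un-normalized product R*T*L."""
    r, t_i, mag = np.asarray(events, float).T
    # Selection windows from the text: r_i <= 2 r0 and 0 <= t - t_i <= 2 t0.
    keep = (r > 0) & (r <= 2 * r0) & (t - t_i >= 0) & (t - t_i <= 2 * t0)
    r, t_i, mag = r[keep], t_i[keep], mag[keep]
    l = 10.0 ** (0.44 * mag - 1.289)     # rupture dimension, Eq. (2.5.4)
    R = np.sum(np.exp(-r / r0))          # epicentral-distance term
    T = np.sum(np.exp(-(t - t_i) / t0))  # elapsed-time term
    L = np.sum(l / r)                    # rupture-length term
    return R * T * L
```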
Using this technique, significant precursory seismic quiescence was detected at the epicenter of the Mw=7.4, 1999 August 17 Izmit (Turkey) earthquake, followed by an activation phase approximately two years before the mainshock. The RTL parameter at the epicenter of the 1997 M6 Umbria Marche earthquake is shown in Fig. 8, while a contour plot of Q is shown in Fig. 9. In a review of studies of the M ≥ 7 earthquakes in Kamchatka (Russia) and Tottori and Kobe (Japan), Huang (2004) showed that seismic quiescence generally starts a few years before the occurrence of major earthquakes and lasts from 1 to 2.5 years. This is followed by a period of seismic activation that generally lasts several months. The linear dimension of the quiescence zone reached a few hundred kilometers, approximately ten times larger than the activation zone. The mainshock is most likely to occur once the relevant source region has passed through the quiescence and activation stages. The RTL analysis also has been applied retrospectively to Greece (Huang et al., 2001; Sobolev, 2007; Sobolev and Tyupkin, 1997), Japan (Huang, 2004, 2006), Turkey (Huang et al., 2002), Taiwan (Chen and Wu, 2006), China (Jiang et al., 2004; Rong and Li, 2007) and Italy (Di Giovambattista and Tyupkin, 2000, 2004). Both Chen and Wu (2006) and Gentili (2010) applied an improvement in which they optimized the RTL algorithm by first calculating many sets of RTL values for a variety of r_0 and t_0 and then computing the correlation coefficients over pairs of RTL functions. High correlation between two RTL functions occurs when the values of r_0 and t_0 approach the optimal values (Chen and Wu, 2006). Gentili (2010) hypothesizes that quiescence is a better precursor than activation, and proposes an algorithm, RTLsurv, based on the method of Chen and Wu (2006), that considers all potential quiescence periods and neglects the activation periods. In almost every case listed above, it was found that seismic quiescence takes place approximately one to two years before the event and is followed by activation periods lasting six months to one year. As such, the spatial and temporal regions over which it occurs may prove optimal for intermediate-term forecasting. However, despite the apparent robustness of this method, it has not been adapted into an operational technique for intermediate-term forecasting. Limited testing against a random null hypothesis has been undertaken. While Huang (2006) found that the RTL algorithm performed significantly better for the 2000 Tottori, Japan earthquake, Zechar and Zhuang (2010) found that a more extensive evaluation of multiple forecasts displayed minimal probability gain over random forecasts. More extensive evaluation would require widespread testing of the pattern in order to determine the extent of its occurrence, to construct error models, and to investigate the rate of both false positives and failures to predict.

Fig. 9. Map of the parameter Q(x,y,t), defined as the mean of RTL values for the time window [ , ], with t_0 = 1 year and r_0 = 50 km. The Umbria Marche mainshock is represented by a white circle and events located in the quiescence region (where Q(x,y,t) < -2) by large black dots (from Mignan and Di Giovambattista, 2008, reproduced by permission of American Geophysical Union).

2.6. LURR

The Load/Unload Response Ratio (LURR) originally was proposed as a measure of the seismic energy change in the months and years prior to a large event, so that it might be used as an earthquake predictor (Yin et al., 1995).

12 100 K.F. Tiampo, R. Shcherbakov / Tectonophysics (2012) Fig. 9. Map of the parameter Q(x,y, t), defined as the mean of RTL values for the time window [ , ] with value t 0 = 1 year and r 0 = 50 km. The Umbria Marche mainshock is represented by a white circle and events located in the quiescence region (where Q(x,y,t) b 2) by large black dots (from Mignan and Di Giovambattista, 2008, reproduced by permission of American Geophysical Union). predictor (Yin et al., 1995). The physical idea is that, when the crust is close to instability, more energy is released in the loading period than in the unloading period. If one could measure the ratio between known periods of loading and unloading, then a measure could be derived that pinpointed times and locations of high energy release as a potential precursor. Although the earthquake triggering capability of tidal forces remains controversial, in recent years studies have suggested that it is a measurable effect, at least in certain regions. Certainly, tidal strains are expected to affect large areas of Earth's crust (Cochran et al., 2004; Lockner and Beeler, 1999; Rydelek et al., 1992; Smith and Sammis, 2004; Tanaka, 2010; Tanaka et al., 2002; Vidale et al., 1998, and others). In the case of LURR, the cyclic nature of the tidal stresses are hypothesized to impose loading and unloading on the crust that correspond to positive and negative tidal Coulomb failure stresses (CFS). In LURR, loading and unloading periods are identified based on earth tide induced perturbations in the CFS on optimally oriented faults (Feng et al., 2008; Mora et al., 2002; Peng et al., 2006; Wang et al., 2004a,b; Yin and Mora, 2006; Yin et al., 1995, 2000, 2006, 2008a,b, 2010; Yu and Zhu, 2010; Yu et al., 2006; Zhang et al., 2004, 2006, 2010). LURR primarily has been employed in intermediate earthquake forecasting. The LURR ratio is calculated from Y ¼! N þ E m i i¼1! N Ei m i¼1 þ ð2:6:1þ where E denotes seismic energy (Kanamori and Anderson, 1975), + is for loading events and is for unloading, and m=½, so that E m denotes Benioff strain. In theory, m could be set to calculate other seismicity measures (Yin et al., 2008a). For a given catalog, the incremental CFS stress caused by tidal loading is calculated for each earthquake. The associated energy change is assigned the appropriate positive or negative sign for loading or unloading, respectively. The regions and time periods are then scanned and the LURR ratio Y, above, is calculated and compared with large events. The LURR ratio generally fluctuates around a value of one, but high LURR values frequently are observed a few months or years prior to a strong earthquake (Yin et al., 2008b). Typically, these values increase to a peak, and then drop again shortly before the event. The time and size of the warning regions scale with the size of the upcoming event. LURR peaks occur somewhere between six months before an M~ 5 earthquake and up to two years before an M ~ 8 event. The region size diameter ranges from approximately 100 km for an M ~ 5 event to approximately 1000 km for an M ~ 8 earthquake (Peng et al., 2006; Yin et al., 2010). The LURR technique has been applied primarily in China, California, and Sumatra and shown to have retrospective forecasting capability (Yin et al., 2008a,b, 2010; Zhang et al., 2006, and others). A typical contour plot is shown in Fig. 10, while the temporal behavior before the 1989 Loma Prieta earthquake is shown in Fig. 11. 
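The ratio itself is straightforward once each event has been classified; the sketch below assumes the tidally induced CFS change for each event has already been computed elsewhere, and simply uses its sign to separate loading from unloading events.

```python
import numpy as np

def lurr(energies, delta_cfs, m=0.5):
    """Sketch of the LURR ratio Y of Eq. (2.6.1).

    `energies` holds the seismic energy of each event in the analysis window;
    `delta_cfs` holds the assumed tidally induced Coulomb failure stress change
    at each event, whose sign classifies it as loading (+) or unloading (-).
    With m = 0.5, E**m is the Benioff strain of each event.
    """
    energies = np.asarray(energies, dtype=float)
    loading = np.asarray(delta_cfs) > 0.0
    e_plus = np.sum(energies[loading] ** m)      # loading contribution
    e_minus = np.sum(energies[~loading] ** m)    # unloading contribution
    return e_plus / e_minus if e_minus > 0 else np.inf

# In the background state Y fluctuates around one; sustained high values in a
# region are the candidate precursory signal.
```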
Retrospective LURR alarms also have been obtained, for example, for the 1994 Northridge, 2003 San Simeon, and 2008 Wenchuan earthquakes. However, the method is not successful in retrospectively forecasting
the 1992 Landers sequence. Recent improvements to the methodology include searching for the optimal stress orientation on the assumption that, statistically, fractures are oriented in the regional stress direction. This orientation is called the maximum faulting orientation (MFO) and, after optimizing for this faulting direction, the Landers earthquake displays an LURR peak, as does the 2004 Sumatran earthquake (Yin and Mora, 2006; Yin et al., 2008a).

The LURR technique remains controversial. Smith and Sammis (2004) and Trotta and Tullis (2006) applied the LURR method to the same California data set as Yin et al. (1995). The LURR function is highly variable and dependent on the input parameters, including the choice of the radius of the analysis region, the time window over which results are averaged, and the upper magnitude cutoff. In addition, while Peng et al. (2006) determined that LURR performed significantly better than a random null hypothesis, Trotta and Tullis (2006) found that randomly assigned loading or unloading values cause as much variation in LURR values as actual tidal values. The choice of seismic activity function also influences the results; both the use of Benioff strain and the upper magnitude cutoff affect the role of the largest earthquakes in the analysis and contribute to the lack of robustness (Smith and Sammis, 2004). They also point out that, in the twenty years before the 1994 Northridge earthquake, there were many LURR peaks of the same amplitude or larger than the one used to forecast that event (Trotta and Tullis, 2006). Random fluctuations such as these produce false positives that reduce the potential probability gain associated with LURR and its efficacy as an operational forecasting technique. Finally, efforts to create a probabilistic forecast using LURR by Yu and Zhu (2010) may help to resolve the questions surrounding the forecast capability of the method.

2.7. Pattern Informatics (PI) index

The PI index is an analytical method for quantifying spatio-temporal seismicity rate changes in historic seismicity (Holliday et al., 2006a; Rundle et al., 2002; Tiampo et al., 2002). Practically, the method is an objective measure of the local change in seismicity relative to the long-term background seismicity that has been used to forecast large earthquakes. The method identifies spatio-temporal patterns of anomalous activation or quiescence that serve as proxies for changes in the underlying stress that may precede large earthquakes. As a result, these anomalies can be related to the location of large earthquakes that occur in the years following their formation (Tiampo et al., 2002, 2006a). Again, theory suggests that these seismicity structures are related to changes in the underlying stress level (Dieterich, 1994; Dieterich et al., 2002; Tiampo et al., 2006a; Toda et al., 2002). The PI index is calculated using instrumental catalog data from seismically active areas. Because the GR magnitude frequency relation implies that, for a large enough spatial volume V and a long enough time interval, the frequency of earthquakes is constant for magnitudes m ≥ m_c (Richter, 1958; Turcotte, 1997), it is calculated over a large region with a constant background rate, or a-value of the GR relation. Here m_c is the cutoff magnitude denoting the minimum magnitude of completeness. The seismicity data are gridded by location into boxes. In California, a grid box size of 0.1° in latitude and longitude was successful, but that may vary with tectonic area. Time series are created for each of these gridded locations, where an individual time bin quantifies the total number of events at each location during that time step. Each location is denoted x_i, where i ranges from 1 to N total locations. The observed seismic activity rate ψ_obs(x_i, t) is the number of earthquakes per unit time, of any size, within the box at x_i at time t. Here the time period is one year, so that ψ_obs(x_i, t) is the number of events per year, mean removed. The time-averaged seismicity function S(x_i, t_0, t) over the interval (t - t_0) is

S(x_i, t_0, t) = \frac{1}{t - t_0} \int_{t_0}^{t} ψ_obs(x_i, t') dt'.   (2.7.1)

S(x_i, t_0, t) is calculated for the N locations, where t_0 is a fixed time, such as the start of the catalog. Denoting spatial averages over the N boxes by ⟨·⟩, the phase function Ŝ(x_i, t_0, t) is defined to be the mean-zero, unit-norm function obtained from S(x_i, t_0, t):

Ŝ(x_i, t_0, t) = \frac{S(x_i, t_0, t) - ⟨S(x_i, t_0, t)⟩}{‖S(x_i, t_0, t)‖}.   (2.7.2)

Here ‖S(x_i, t_0, t)‖ is the L2-norm, or the square root of the variance, over all spatial boxes. For a large enough spatial and temporal region, the long-term spatial averages are constant, and the vector Ŝ(x_i, t_0, t) is an effective measure of the local variations in seismicity, given good quality seismic data. Dividing by the constant standard deviation normalizes the regional seismicity by its background and illuminates small, local fluctuations in seismicity. These changes in seismicity are denoted by ΔŜ(x_i, t_1, t_2) = Ŝ(x_i, t_0, t_2) - Ŝ(x_i, t_0, t_1). Again, ΔŜ(x_i, t_1, t_2) represents the changes in spatial and temporal activity related to the underlying stress changes in the system. These can be positive or negative, depending on whether they identify seismic activation or quiescence (Tiampo et al., 2002, 2006b).
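A compact sketch of this sequence of operations follows, assuming the catalog has already been reduced to mean-removed yearly event counts on the grid; the array shapes and index conventions here are illustrative, not those of the original codes.

```python
import numpy as np

def pi_change(rates, i1, i2):
    """Averaged seismicity-change map based on Eqs. (2.7.1)-(2.7.2).

    `rates` is assumed to have shape (T, N): yearly event counts, mean removed,
    for N grid boxes over T annual steps; i1 < i2 are the indices of the change
    interval [t1, t2]. The change is averaged over all base times t0 < i1.
    """
    changes = []
    for t0 in range(i1):
        S1 = rates[t0:i1].mean(axis=0)        # time-averaged rate, Eq. (2.7.1)
        S2 = rates[t0:i2].mean(axis=0)
        s1 = S1 - S1.mean()                   # mean-zero, unit-norm phase
        s2 = S2 - S2.mean()                   # functions, Eq. (2.7.2)
        s1 /= np.linalg.norm(s1)
        s2 /= np.linalg.norm(s2)
        changes.append(s2 - s1)
    return np.mean(changes, axis=0)           # average over base years t0

# Squaring this averaged change and subtracting the spatial mean of the
# squares yields the PI index map, Delta-P, described below.
```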

Finally, ΔŜ(x_i, t_1, t_2) is averaged over all possible base years, t_0. For any given catalog and time period, the PI index, ΔP, is the power associated with ΔŜ(x_i, t_1, t_2): ΔP(x_i, t_1, t_2) = {ΔŜ(x_i, t_1, t_2)}^2 - μ_p, where μ_p is the spatial mean of {ΔŜ(x_i, t_1, t_2)}^2, i.e., the time-dependent background (Tiampo et al., 2002).

In 2002, Rundle et al. published a prospective forecast for California for the period 2000 to 2010, inclusive. Tiampo et al. (2002, 2006a) applied the PI index to California in order to identify systematic space-time variations in seismicity, including stress shadows after large events in southern California. Fig. 12 reproduces that forecast (Fig. 12a) and the anomalous seismicity (activation or quiescence, Fig. 12b) associated with the locations seen in Fig. 12a. Here blue denotes relative quiescence during that time period, while red denotes relative activation. Triangles represent those events of magnitude M ≥ 5.0 that occurred during the training period. Circles identify those events of magnitude M ≥ 5 that occurred during the forecast period. Thirty-nine events occurred between 2000 and 2010; thirty-seven fell on or within one box size (~11 km) of an anomaly (successful forecasts), the margin of error for this forecast (Holliday et al., 2006a, 2007), and two did not (missed events).

Fig. 12. a) PI index forecast for California (ΔP(x_i, 1989, 1999), plotted for locations at which ΔP > 0). Color scale is logarithmic, where the number represents the exponent. Inverted triangles are events of M ≥ 5 that occurred between 1989 and 1999 (triangle size scales with magnitude). Circles are events that occurred from 2000 to 2010, the original forecast time period, again for M ≥ 5 (circle size also scales with magnitude). b) Map of the seismicity rate change, ΔŜ(x_i, 1989, 1999), normalized to the maximum absolute value. The color scale is linear, blue to white to red, where blue represents relative quiescence and red represents relative activation.

The intervening years have led to various extensions or modifications of the PI method, as well as its application to other tectonic regimes. For example, Tiampo et al. (2006a) showed that the method was capable of both detecting premonitory changes in time and predicting events that were not in the instrumental catalog. In Tiampo et al. (2006c), the PI method was adapted to small regions around each individual anomaly, and rupture dimensions for historic events were postdicted with reasonable accuracy. This method also was applied in Tiampo et al. (2008) to locations identified by the SAM method (see Section 2.1). Fig. 13a, a rupture estimate for the San Jacinto fault as identified by the SAM analysis shown in Fig. 2 (Section 2.1), originally was included in that work (Tiampo et al., 2008). Fig. 13b presents actual seismicity for 2010, where the M5.5 earthquake sequence (July 7, 2010) occurred at the same location and with the same spatial extent as forecast in Tiampo et al. (2008). Ongoing research into the PI method led to what became known as the modified PI (MPI) technique. In this method, instrumental seismicity is filtered in magnitude and space (Chen et al., 2005; Holliday et al., 2005, 2006a; Nanjo et al., 2006a,b). The MPI method retrospectively forecast the 1999 Chi-Chi earthquake and the two M ≥ 6.7 Pingtung offshore events of 2006 in Taiwan (Chen et al., 2005; Wu et al., 2008a,b), as well as the 1995 Kobe earthquake in Japan (Nanjo et al., 2006a,b). It also prospectively forecast the 2004 Niigata earthquake (Nanjo et al., 2006a,b), and the 2004 Macquarie Island and 2004 M9.0 Sumatra earthquakes, as shown in Fig. 14 (Holliday et al., 2005).

Fig. 13. a) Rupture forecast for an event on the San Jacinto fault in southern California, using the PI method (after Tiampo et al., 2008), and b) seismicity along the San Jacinto fault for 2010.

Fig. 14. Worldwide application of the PI method. Colored areas are the forecast hotspots for the occurrence of M ≥ 7 earthquakes during the forecast period, derived using the PI method. The color scale gives values of log10(P/Pmax); the spatial grid is 1°. Also shown are the locations of the sixty-three earthquakes with M ≥ 7 that occurred between 2000 and 2004, inclusive (from Holliday et al., 2005).

Holliday et al. (2006b,c) adapted the PI method by combining it with the relative intensity (RI) method (detailed in Section 3.4, below), designated RIPI. After noting that major earthquake episodes preferentially occur during time intervals when fluctuations in seismic intensity, as measured by the PI, are less important than the RI, they calculated a Peirce skill score for each and subtracted the PI score from the RI score. If that skill-score difference is positive, a warning is issued. The time window is defined by the average length of time necessary in that region to produce as many events above the minimum magnitude cutoff as correspond to one event at the forecast magnitude. A RIPI retrospective forecast for Sumatra produces a warning period from mid-2003 until the M9.0 event in December of 2004 (Holliday et al., 2006b). The original PI method continues to be used for forecasting in other regions. Toya et al. (2009) used a three-dimensional PI technique to perform retrospective forecasting for Taiwan and Sumatra. Recent work includes application of the MPI method to various tectonic regions in China (Jiang and Wu, 2008, 2010a; Zhang et al., 2009). Jiang and Wu (2008, 2010a) found that the PI method outperforms the RI method and that retrospective tests accurately forecast the 2008 Wenchuan earthquake. They also found that determining the optimal parameters, such as time period and discretization box size, is difficult and results in a significant number of false positives. This particular result highlights a significant issue in seismicity-based forecasting. Many algorithms require constant moments (in particular, see Eq. (2.7.1)), and instrumental seismic catalogues often are subject to systematic effects, such as varying network coverage and minimum magnitude of completeness; these result in artifacts in the data that appear as anomalies, or false positives. A method for improving seismicity-based forecasts in general, and the PI algorithm in particular, based upon the Thirumalai-Mountain (TM) metric (Thirumalai and Mountain, 1993; Thirumalai et al., 1989), ensures that the choice of spatial region, discretization, and time period results in a stationary time series. Application of this TM method prior to implementing the forecast significantly improves the accuracy and specifically reduces the number of false positives in a forecast (Tiampo et al., 2010). It should be noted that, while Zechar and Jordan (2008) found that the PI method did not perform significantly better than the RI method, they did not perform that test over the full ten-year interval of the published forecast period (Rundle et al., 2002). Nanjo (2010) showed that both the PI and RI methods perform significantly better than the National Seismic Hazard Map (NSHM). The difficulty in pinpointing the time of upcoming events remains the biggest drawback to the PI method. While the method does very well at the intermediate-term forecasting of events (a five-to-ten-year time period) with very few misses (two misses in 39 events over a ten-year period in California), the question remains as to whether the significant number of remaining anomalies (false positives) are the result of the changing nature of the stress regime over that ten-year period, or are the signature of large events that are yet to occur. Retrospective forecasts on both synthetic and high-quality catalogues over incremental time periods might help to resolve the first issue; the second, again, highlights the need for longer time periods over which to observe the evolution of the natural fault system (Jackson and Kagan, 2006; Schorlemmer and Gerstenberger, 2007; Vere-Jones, 1995, 2006; Zechar et al., 2010).

3. Smoothed seismicity models

Smoothed seismicity models are a more general class of seismicity-based forecasting models which define the important physical spatio-temporal features of earthquake processes, characterize these features in a mathematical and/or probabilistic manner, and then calibrate the model based on data available from seismic catalogs for particular tectonic regions. Although the classic versions did not include specific geologic or tectonic information, a few of the newer methods discussed below attempt to incorporate certain of these features into their forecasts. Originally developed by Frankel (1995), the smoothed seismicity approach has been extended to many different algorithms and regions around the world (see, e.g., Helmstetter et al., 2006, 2007; Kafka, 2002; Kagan and Jackson, 1994, 2000; Kagan et al., 2007; Nanjo, 2010; Rhoades and Evison, 2004; Stirling et al., 2002a, among others). Although the particular smoothing algorithm varies, the use of a two-dimensional Gaussian function in which the smoothing distance is specific to the tectonic region is still the most widely implemented technique (Frankel et al., 1996; Petersen et al., 2008). Smoothed models can be formulated to account for the clustering that exists in natural seismicity as a result of the spatial and temporal correlations between events that arise because of stress-transfer interactions (King et al., 1994). In addition, while actual earthquake catalog data often are limited by the short time periods available for recorded data, particularly at smaller magnitudes, spatial smoothing can compensate for this lack of data as well as for errors in the data, such as those in magnitude or location (Nanjo, 2010; Werner and Sornette, 2008). Over the past ten years, significant progress has been made in developing methods for characterizing the physical processes related to seismicity generation in this class of models. Note that a number of techniques have been adapted to short-term forecasting on the order of days, either in addition to or in lieu of intermediate-term forecasting. Smoothed seismicity models are intuitively attractive because they concentrate seismic hazard in areas that have had earthquakes in the past, a property of seismicity that has been substantiated by a number of researchers (see, e.g., Allen, 1968; Davison and Scholz, 1985; Frankel et al., 2002; Kafka, 2002; Petersen et al., 2007). While many versions can be quite complicated, the basic formulation is relatively straightforward and the results can be easily checked against instrumental catalog statistics. Virtually all methods can be compared with both random and clustered null hypotheses in a relatively straightforward manner. Many also can evolve in time, potentially monitoring the dynamics of the fault system. Proportional hazard model (PHM) forecasts (Section 3.10), for example, currently are recalculated both at regular intervals and as large events occur that modify the ongoing spatio-temporal nature of the seismicity in the region (Faenza and Marzocchi, 2010). However, errors or lack of information in the instrumental catalogues can result in large errors in the resulting forecasts, particularly for large events, which have sparse statistics, and for areas that have been quiescent in recent memory (Werner and Sornette, 2008). Here we discuss those methods which have had the greatest impact on the field and are an ongoing area of research. In particular, a number of pattern recognition methodologies, although they may show significant promise, are omitted because they are difficult to implement or are not widely applied to date. These include neural network techniques (Adeli and Panakkat, 2009; Alves, 2006; Madahizadeh and Allamehzadeh, 2009; Sri Lakshmi and Tiwari, 2009), pattern recognition algorithms such as K-means clustering (Morales-Esteban et al., 2010), hidden Markov model methods (Ebel et al., 2007), and cellular automata simulations (Jiménez et al., 2008). The methodologies discussed below are the subject of ongoing forecasting research in active seismic regions with high-quality catalogues.
3.1. EEPAS

The Every Earthquake a Precursor According to Scale (EEPAS) model is based on the precursory scale increase phenomenon, where minor seismicity increases occur before, and in the same region as, large events in much the same manner as aftershocks. As a result, it is both a physically-based and a smoothed seismicity model, but is classified here as the latter because the generated forecasts are intrinsically linked to the distributions associated with each modelled parameter. Originally formulated based upon observations of precursory swarms (Evison and Rhoades, 1997, 1999), the idea was extended to the general class of foreshocks to identify localized precursors (Evison and Rhoades, 1999, 2002, 2004). The EEPAS stochastic model was formulated based upon the simple idea that every earthquake is a precursor, and that its input into the model is scaled with its magnitude (Rhoades, 2010; Rhoades and Evison, 2004). For a thorough review of the history of EEPAS, see Rhoades (2010), but the relationship for the total seismic rate density, λ(t, m, x, y), at any magnitude m, location (x, y), and time t, is given by

λ(t, m, x, y) = μ λ_0(t, m, x, y) + \sum_{t_i \geq t_0, m_i \geq m_0} η(m_i) λ_i(t, m, x, y)   (3.1.1)

where μ is a constant, λ_0 is the baseline rate density, t_0 is the start time of the catalog, η is a normalizing function, and the sum is taken over all prior earthquakes i with origin time t_i ≥ t_0 and magnitude m_i ≥ m_0. λ_i is a transient increment of the future rate density due to each earthquake,

λ_i(t, m, x, y) = w_i f_{1i}(t) g_{1i}(m) h_{1i}(x, y)   (3.1.2)

with

f_{1i}(t) = H(t - t_i) \frac{1}{(t - t_i) σ_T \ln(10) \sqrt{2π}} \exp[-\frac{1}{2} (\frac{\log(t - t_i) - a_T - b_T m_i}{σ_T})^2],   (3.1.3)

g_{1i}(m) = \frac{1}{σ_M \sqrt{2π}} \exp[-\frac{1}{2} (\frac{m - a_M - b_M m_i}{σ_M})^2],   (3.1.4)

h_{1i}(x, y) = \frac{1}{2π σ_A^2 10^{b_A m_i}} \exp[-\frac{(x - x_i)^2 + (y - y_i)^2}{2 σ_A^2 10^{b_A m_i}}].   (3.1.5)

H(s) is the Heaviside function and a_M, b_M, σ_M, a_T, b_T, σ_T, σ_A, and b_A are parameters derived from predictive, recursive relations for regional earthquake catalogues (Rhoades, 2007).

Fig. 15. Normalized rate density of earthquake occurrence under the EEPAS model, relative to a reference rate density in which one earthquake per year is expected exceeding any magnitude m in an area of 10^m km²: (a) as a function of location for fixed time and magnitude; (b) as a function of time for fixed magnitude and location; (c) as a function of magnitude, for fixed time and location. The fixed values are those of the Western Tottori earthquake of 2000/10/6, as marked in (a), (b) and (c), respectively (from Rhoades and Evison, 2005).
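A sketch of the transient increment of Eqs. (3.1.2)-(3.1.5) for a single precursory earthquake is given below; the parameter values are placeholders for illustration only, not the fitted values of Rhoades (2007).

```python
import numpy as np

# Illustrative EEPAS parameters; real values are fitted per catalog
# (Rhoades, 2007) -- these numbers are placeholders, not published ones.
A_T, B_T, SIG_T = 1.0, 0.5, 0.3      # log10 precursor-time scaling
A_M, B_M, SIG_M = 1.0, 1.0, 0.4      # predicted magnitude scaling
SIG_A, B_A = 5.0, 0.35               # spatial scale (km) and its scaling

def eepas_increment(t, m, x, y, ti, mi, xi, yi, wi=1.0):
    """Transient rate-density increment of one precursor, Eqs. (3.1.2)-(3.1.5)."""
    if t <= ti:
        return 0.0                                # Heaviside factor H(t - ti)
    sa2 = SIG_A**2 * 10.0**(B_A * mi)             # magnitude-scaled spatial variance
    f = np.exp(-0.5 * ((np.log10(t - ti) - A_T - B_T * mi) / SIG_T)**2) \
        / ((t - ti) * SIG_T * np.log(10) * np.sqrt(2 * np.pi))        # Eq. (3.1.3)
    g = np.exp(-0.5 * ((m - A_M - B_M * mi) / SIG_M)**2) \
        / (SIG_M * np.sqrt(2 * np.pi))                                # Eq. (3.1.4)
    h = np.exp(-((x - xi)**2 + (y - yi)**2) / (2 * sa2)) \
        / (2 * np.pi * sa2)                                           # Eq. (3.1.5)
    return wi * f * g * h
```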

Qualitatively, the model is structured like an epidemic-type branching process, but here the small earthquakes do not trigger larger ones. Instead, as in the PI method, they are a sensor for an upcoming larger event (Evison and Rhoades, 2001). Also, the various versions of the normal distribution utilized above quantify the normally distributed errors in the predictive relations derived from the data, while the standard frequency magnitude relation is captured in the normalizing function η(m_i), which is taken from the catalog GR frequency magnitude relation and, for most applications, reduces to a constant (Rhoades, 2007). The precursory relations, originally obtained by Evison (1977), reveal that the forecasting time scales vary from five to thirty years for magnitudes ranging from five to eight, with a corresponding rupture area of 2000 to 20,000 km², on the same order as the aftershock region. Events of M ≥ 4 are required as input in order to forecast earthquakes with M ≥ 5.8 (Rhoades, 2007; Rhoades and Evison, 2004). The EEPAS method has been used for seismicity forecasting in New Zealand, California, Greece, and Japan, including an EEPAS model that was submitted to the RELM test (Rhoades, 2007, 2010; Rhoades and Evison, 2005; Rhoades and Gerstenberger, 2009). Results for Japan are shown in Fig. 15 (Rhoades and Evison, 2005). Recently, Rhoades and Gerstenberger (2009) formulated a forecast model that includes a long-term component from EEPAS and a short-term component from the short-term earthquake probability (STEP) model for aftershock activity (see Section 3.8). Because the strength of EEPAS is that it provides a statistically consistent model for foreshocks, this presents an important opportunity to integrate and test the significance of the underlying physical assumptions as well as the combined forecast capability.

3.2. Time-independent smoothed seismicity

In 1994, Kagan and Jackson first described a method for developing smoothed seismicity models by extrapolating seismic catalog information into probabilistic forecasts. Effectively, this is a time-independent forecast in which rates from historic and instrumental seismic catalogues are spatially smoothed and prorated for particular time periods. In the intervening years, this particular method has been applied in the northwest and southwest Pacific (Jackson and Kagan, 1999; Kagan and Jackson, 2000), California (Helmstetter et al., 2006; Kagan et al., 2007), and Italy (Werner et al., 2010). Here the earthquake rate density, Λ(θ, ϕ, m, t), the probability per unit area, time, and magnitude, is assumed constant in time and is estimated as the sum of contributions from all events above a prescribed magnitude cutoff. As is the case with most smoothed seismicity models, it can be applied for any minimum magnitude cutoff; for example, Kagan et al. (2007) employ a cutoff magnitude of 5.0, while in the version of Helmstetter et al. (2007) the cutoff magnitude is 2.0. The general form of the function is

Λ(θ, ϕ, m, t) = f(θ, ϕ) g(m) h(t)   (3.2.1)

where θ is latitude, ϕ is longitude, m is magnitude, t is time, f(θ, ϕ) is the spatial density function, and g(m) is the normalized magnitude distribution. h(t) is the rate (number per unit time) of all earthquakes within the area and magnitude range of interest and, for a time-independent forecast, is assumed to be constant. Note the similarity of the form of Eq. (3.2.1) to Eq. (3.1.2) of Section 3.1. Again, here the time variation is represented by a constant, creating a time-independent forecast, while in EEPAS (Section 3.1) that function has a log-normal dependence. Various researchers employ different spatial density functions. In general, f is a weighted sum of smoothing kernels, each centered at the epicenter of a previous event. For example, Kagan and Jackson (1995) employ the function

f(θ, ϕ) = \sum_i f_i(θ_i, ϕ_i) + s,   (3.2.2)

where s is a constant that accounts for unexpected events not in the catalog and

f_i(θ_i, ϕ_i) = f_i(r_i) = A (m_i - 5.0) \frac{1}{r_i} [1 + δ \cos^2(ψ)].   (3.2.3)

The distance from each epicenter, r_i, is truncated at 200 km; beyond that, the kernel function equals zero. A is a normalization constant, δ is a parameter quantifying the degree of azimuthal concentration, and ψ measures the orientation of the map point relative to the fault-plane azimuth for a given event in the catalog (Kagan and Jackson, 1994, 1995). Helmstetter et al. (2007) and Werner et al. (2010) employ a different kernel function for spatial smoothing:

K_{d_i}(r) = \frac{C(d_i)}{(r^2 + d_i^2)^{1.5}}.   (3.2.4)

Here d_i is the adaptive smoothing distance and C is a normalization constant. The spatial density function can be optimized for the various parameters using the existing catalog; Helmstetter et al. (2007) employ a log-likelihood technique, for example. Finally, g(m), the earthquake size distribution, is chosen to follow a tapered GR magnitude frequency relation (Bird and Kagan, 2004; Gutenberg and Richter, 1944), with parameters that vary with tectonic region. Again, results are scaled for the forecast time period of interest.

Fig. 16. Time-independent earthquake forecast based on the Italian merged instrumental catalog: expected number of earthquakes of M ≥ 4.95 per cell over the five-year period from January 1, 2010, to December 31, 2014, based on smoothing the locations of earthquakes of M ≥ 2.95 in the instrumental catalog from July 1, 1984, to June 25, 2009 (from Werner et al., 2010).
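As a simple illustration of this class of model, the sketch below sums the power-law kernel of Eq. (3.2.4) over a catalog to build a normalized spatial density f; for brevity it uses one fixed smoothing distance rather than the adaptive, per-event distances of Helmstetter et al. (2007).

```python
import numpy as np

def smoothed_rate_map(grid_xy, event_xy, d=10.0):
    """Sketch of a time-independent spatial density f built from Eq. (3.2.4).

    Sums the kernel K_d(r) = C / (r^2 + d^2)^1.5 over all catalog epicenters
    at each grid node; `grid_xy` has shape (M, 2), `event_xy` is an iterable
    of (x, y) epicenters, and `d` is the smoothing distance in km.
    """
    rate = np.zeros(len(grid_xy))
    for (xe, ye) in event_xy:
        r2 = (grid_xy[:, 0] - xe) ** 2 + (grid_xy[:, 1] - ye) ** 2
        rate += 1.0 / (r2 + d * d) ** 1.5     # un-normalized kernel contribution
    return rate / rate.sum()                  # normalize to a spatial pdf f
```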

The forecasts of Jackson and Kagan (1999) for the western Pacific rim and California can be found online (predictions_index.html). A five-year CSEP forecast for Italy can be seen in Fig. 16 (Werner et al., 2010). While time-independent forecasts such as these, scaled to the appropriate time periods and with well-characterized errors, are important and very useful for seismic hazard estimation, implicit in this work is the assumption that a complete seismicity catalog would provide all the information required, probabilistically, on the location and timing of future events. That might be possible, at least on historic time scales, if the catalogues contained a complete record of all possible events in a tectonic region, which currently is not the case. In addition, this particular algorithm does not include the possibility that there are short-term time-dependent fluctuations that may improve forecasting capability on variable time scales.

3.3. ETAS methodologies

The original epidemic-type aftershock sequence (ETAS) model was formulated by Ogata (1985a,b, 1987, 1988, 1989). Not simply a model of aftershock sequences, ETAS is fundamentally a model of triggered, interacting seismicity in which all events have identical roles in the triggering process. Again, it is both a physically-based and a smoothed seismicity model, but is classified here as the latter because forecasts are intrinsically linked to the distributions associated with each parameter. In this process every earthquake is regarded as both triggered by earlier events and a potential trigger for subsequent earthquakes, i.e., every event is a potential aftershock, mainshock or foreshock, with its own aftershock sequence. For general seismicity, a background term with a random component is added to the formulation. In the intervening years, the model has been used in many studies to describe the spatio-temporal distribution and features of actual seismicity (Console and Murru, 2001; Console et al., 2003; Helmstetter and Sornette, 2002, 2003a,b; Ma and Zhuang, 2001; Ogata, 1988, 1998, 1999, 2005; Ogata and Zhuang, 2006; Saichev and Sornette, 2006; Vere-Jones, 2006; Zhuang et al., 2004, 2005, among others). For a thorough review of the early years of ETAS development and application, see Ogata (1999) and Helmstetter and Sornette (2002). In recent years, ETAS has been used by a number of researchers for the development of smoothed seismicity forecast models, both short- and long-term (Console and Murru, 2001; Console et al., 2003, 2006a,b, 2007, 2010; Falcone et al., 2010; Helmstetter et al., 2005, 2006, 2007; Lombardi and Marzocchi, 2010a,b; Murru et al., 2009). In general, the ETAS algorithm is used in a branching model, where a parent event of a given magnitude and location produces a series of child events that occur within some specified region and time. The average number of children produced for every parent event is the branching ratio (Helmstetter and Sornette, 2003b). The ETAS model includes the contribution of every previous event based upon the magnitude of the triggering earthquake, the spatial distance from the triggering event, and the time interval between the triggering event and the time of the forecast, and follows the form

λ_i(x, y, t, m) = h(t - t_i) \exp[-β(m - m_0)] f(x - x_i, y - y_i).   (3.3.1)

Note, again, that this is of the same form as Eqs. (3.1.2) and (3.2.1), above: a normalizing constant times three functions, one of which encodes the temporal behavior, a second the magnitude relationship, and a third the spatial pattern. Here i denotes the individual event, x_i and y_i are the location of that event, m_i is the event magnitude, m_0 is a lower bound on the magnitude of triggering, and β = b ln 10, where b is the slope of the GR magnitude frequency relation (Console et al., 2010; Helmstetter and Sornette, 2003b). h(t - t_i) is taken from the modified Omori law (Ogata, 1983; Utsu et al., 1995):

h(t) = (t + c)^{-p} ρ(m),   (3.3.2)

where c and p are characteristic parameters, p > 1, and

ρ(m) = k 10^{α(m - m_0)}.   (3.3.3)

ρ(m) gives the total number of aftershocks triggered by an event of magnitude m. α often is some value less than b, while in some applications it is set equal to zero, resulting in triggering only by earthquakes greater than m_0 (Console et al., 2010; Helmstetter and Sornette, 2003a,b; Lombardi and Marzocchi, 2010a,b). Research also has shown that it is possible to substitute other physical models in place of the Omori law; for example, Console et al. (2007) employ the rate-and-state law to generate seismicity rates in an epidemic-type model (Console et al., 2006a, 2010; Dieterich, 1994; Falcone et al., 2010; Ruina, 1983). The spatial distribution function can vary, but often is chosen to be a circular function of the triggering distance, for example:

f(r, θ) = [\frac{d_i^2}{r^2 + d_i^2}]^q,   (3.3.4)

where f(x, y) has been converted to polar coordinates, r is the distance from (x, y), q is a free parameter that models the decay with distance, and d_i is the triggering distance for a given earthquake. d_i often can be characterized as a function of magnitude, such as (Console et al., 2010; Kagan, 2002; Lombardi and Marzocchi, 2010a,b)

d_i = d_0 10^{0.5(m_i - m_0)}.   (3.3.5)

In order to produce a forecast map using ETAS, a time-independent background seismicity rate generally is added to the time-dependent ETAS branching model. While this component might be based on a long-term hazard map, as in the case of the short-term earthquake probability (STEP) model (see Section 3.8, below), or on a long-term time-independent forecast as in Section 3.2, above, the form of the final equation is

λ(x, y, t, m) = ν u(x, y) + \sum_{t_i < t} λ_i(x, y, t, m)   (3.3.6)

where ν is the background rate for the entire catalog and u(x, y) is a pdf of event rates for the entire region (Console et al., 2010; Lombardi and Marzocchi, 2010a,b). In practice, the various parameters nested in Eqs. (3.3.1) through (3.3.6) are determined from the regional seismic catalogues and optimized for different time periods using one of several potential optimization schemes. In addition, because the physics of the ETAS process is dominated by the triggering mechanism, many time-dependent forecasts produced with this methodology are short-term forecasts, on the order of days, as shown in Fig. 17 (Falcone et al., 2010).
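A minimal sketch of the combined rate of Eqs. (3.3.1)-(3.3.6) follows; the parameter values are illustrative placeholders, and in any real application they would be fitted to the regional catalog.

```python
import numpy as np

# Illustrative ETAS parameters; real values are fitted to the regional catalog.
K, ALPHA, C, P = 0.02, 0.8, 0.01, 1.1   # productivity and Omori parameters
BETA = 1.0 * np.log(10)                 # b ln(10), with b = 1 assumed
D0, Q, M0 = 1.0, 1.5, 3.0               # spatial scale (km), decay, magnitude floor

def etas_rate(x, y, t, m, catalog, nu_u=1e-6):
    """Sketch of the ETAS rate of Eqs. (3.3.1)-(3.3.6) at (x, y, t, m).

    `catalog` rows are (xi, yi, ti, mi); `nu_u` stands in for the background
    term nu * u(x, y) at this location.
    """
    rate = nu_u
    for xi, yi, ti, mi in catalog:
        if ti >= t or mi < M0:
            continue                          # only prior events above m0 trigger
        h = K * 10.0 ** (ALPHA * (mi - M0)) / (t - ti + C) ** P  # Eqs. (3.3.2)-(3.3.3)
        di2 = (D0 * 10.0 ** (0.5 * (mi - M0))) ** 2              # Eq. (3.3.5), squared
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        f = (di2 / (r2 + di2)) ** Q                              # Eq. (3.3.4)
        g = BETA * np.exp(-BETA * (m - M0))                      # GR magnitude density
        rate += h * g * f                                        # Eq. (3.3.1) contribution
    return rate
```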
ETAS forecasting models have been applied in California, Italy, Greece, and Japan (Console and Murru, 2001; Console et al., 2003, 2006a,b, 2007, 2010; Falcone et al., 2010; Helmstetter and Sornette, 2003b; Helmstetter et al., 2006, 2007; Lombardi and Marzocchi, 2010b; Murru et al., 2009), where it has been shown that they perform better, at least on short time scales, than the Poisson null hypothesis. Finally, the double-branching model (DBM), a double-branching version of the ETAS algorithm, was developed to incorporate longer-period physics into the model and was adapted for longer-term forecasts (Lombardi and Marzocchi, 2010a; Marzocchi and Lombardi, 2008). The DBM incorporates a second branching process, after the application of an ETAS model, to account for the long-term modulation of earthquake occurrence. After the fitting of the ETAS parameters, the catalog is declustered and the residual seismicity is modeled with a relation similar in form to the original ETAS model given above. Results for five- and ten-year forecasts are shown in Fig. 18 (Lombardi and Marzocchi, 2010a).

Fig. 17. Short-term occurrence-rate density (events of M ≥ 4.0 per day per cell) for the whole Italian territory, starting on March 21, 2009, 00:00 UTC, for the following 24 h, from an ETAS-type model which propagates seismicity using a) the Omori law and b) a rate-and-state model (from Falcone et al., 2010).

Fig. 18. DBM maps of the probability of occurrence of one or more events with M ≥ 4.5 per cell over the next a) 5 and b) 10 years. Green circles outline the spatial bin with the highest probability (from Lombardi and Marzocchi, 2010a).

3.4. Relative Intensity (RI) method

The RI forecast model was first proposed by Holliday et al. (2005), primarily as a better null hypothesis for forecast testing than a random, nonclustered seismicity model. The idea is to use the rate of occurrence of earthquakes in the past in order to forecast the locations of future large earthquakes, in which future large earthquakes are considered more likely where higher seismic activity occurred in the past. The RI algorithm is the simplest of the smoothed seismicity models and was originally formulated as a binary forecast, although it has been modified in several ways since that time. Initially, the study region is tiled with square boxes. In California, these typically are 0.1° in latitude and longitude, so that the forecast locations are small, on the order of the rupture dimension of the smallest forecast magnitude. The number of earthquakes with magnitude M ≥ m_c, where m_c is the minimum magnitude cutoff, in each box is determined over the time period of the catalog. The RI score for each box then is computed as the total number of earthquakes in the box in that time period divided by the largest such value over all boxes. A threshold value in the interval [0, 1] is then selected, and all boxes with scores above that threshold are expected to have a large event over the forecast period of interest, resulting in a binary forecast. The remaining boxes, with RI scores smaller than the threshold, represent sites at which large earthquakes are not expected to occur. The result is a map of locations in a seismogenic region where earthquakes are forecast to occur over some future intermediate-term time span. Note that a high threshold reduces the forecast regions but results in more events that are not predicted, while reducing the threshold reduces the failures to predict but increases the false alarms (Holliday et al., 2005). The RI rapidly was adopted as a null hypothesis for general testing of other forecasts due to its intrinsic superiority to nonclustered random seismicity hypotheses, a natural and expected result because earthquakes tend to occur where earthquakes have occurred in the past (Frankel et al., 2002; Tiampo et al., 2002; Zechar and Jordan, 2008). Later, it was expanded for use as a forecasting model in its own right (Holliday et al., 2006b; Nanjo, 2010; Shcherbakov et al., 2010). The RI method has been applied for prospective forecasting in a variety of tectonic regimes, including California, Japan, and worldwide (Holliday et al., 2005; Rundle et al., 2003; Nanjo et al., 2006a,b; Tiampo et al., 2002). Nanjo (2010) showed that the RI method in California performed better than the NSHM over a ten-year time period. Holliday et al. (2006b,c) demonstrated that, for particular time periods, the RI method provides important information on the likelihood of future events. They combined the RI method with the PI method into the RIPI forecasting method (Holliday et al., 2006b), discussed at greater length in Section 2.7. Nanjo (2010) expanded the RI method in order to convert the model from a binary system into a testable CSEP model for Italy that forecasts the numbers of earthquakes at predefined magnitudes. The final inputs were both 5-year and 10-year models, as well as a tuned 3-month model. He modified the original RI approach for the process of binning the data in order to improve the forecasts, which added additional smoothing (Holliday et al., 2007; Nanjo et al., 2006a): the seismicity rate is computed for each box by averaging over the Moore neighbourhood, the eight surrounding boxes. Then, in order to provide a continuous forecast for every box, the forecast is extrapolated into a given magnitude bin within some given magnitude range, M_1 ≤ M < M_2, using the GR frequency magnitude law as given by historic seismicity (Nanjo, 2010). The RI methodology also can be expanded to other seismicity measures. Shcherbakov et al. (2010) used the cumulative Benioff strain in each cell during a training period in order to develop a worldwide forecast for a future time period, where Benioff strain is the square root of the seismic energy. The cumulative Benioff strain, B, at time t is computed using the data from the CMT (Harvard) catalog from 1976 to 2007, inclusive, for magnitudes M ≥ 5.5, where

B_{xy}(t) = \sum_{i=1}^{N_{xy}(t)} \sqrt{E_{xy}^{(i)}}.   (3.4.1)

Here, E_{xy}^{(i)} is the seismic energy release from the ith earthquake, (xy) is the cell coordinate, and N_{xy}(t) is the cumulative number of earthquakes through time t. These values are normalized by dividing by the maximum value, B_max, over all the box locations. The RI map then is converted to a binary forecast by introducing a threshold cumulative Benioff strain; those cells with Benioff strains greater than this threshold constitute alarm cells where future earthquakes are forecast to occur. One of the primary goals of this work was to develop a standard optimization procedure for binary forecasts in order to select the optimal threshold (Shcherbakov et al., 2010). An optimized worldwide forecast for this RI version is shown in Fig. 19. The RI method has significant forecast capability (Holliday et al., 2005; Rundle et al., 2003; Nanjo, 2010; Nanjo et al., 2006a,b; Tiampo et al., 2002).
However, like many of these techniques, it produces a relatively high false positive rate. While methods exist for lowering that false positive rate (see Section 2.7, above), the choice of threshold and grid size is critical to its performance (Shcherbakov et al., 2010; Zechar and Jordan, 2010). Also, as expected, the method is sensitive to the quality of the data, as it intrinsically relies on forecasting events where high rates of activity have occurred in the past. As a result, areas that have been quiescent for long periods will result in false negatives, or misses, in intermediate-term forecasts. However, better acquisition of seismicity data with time will greatly improve the accuracy of future RI forecasts.

Fig. 19. Spatial distribution of 2° × 2° alarm cells for a Benioff strain RI forecast. Earthquakes with magnitudes M ≥ 5.5 are considered during the time period 1/1/1976 through 1/1/2004. Also shown as squares are the locations of earthquakes with M ≥ 7.2 that occurred during the forecast period T_f = 1/1/2004 to T_e = 1/1/2008. The threshold value (0.042) corresponds to the maximum Peirce skill score (from Shcherbakov et al., 2010).

3.5. TripleS

The Simple Smoothed Seismicity model (TripleS) was developed as a test of a very simple model for earthquake forecasting, with a minimal number of parameters. At its most basic, it applies a Gaussian smoothing filter to a catalog data set and optimizes a single parameter, σ, which controls the spatial extent of smoothing, against retrospective forecasts (Zechar and Jordan, 2010). The simplest smoothed seismicity method is the RI forecast technique, as detailed above in Section 3.4; in that model, the smoothing is anisotropic and uniform. TripleS instead applies a two-dimensional isotropic Gaussian smoothing, using a continuous kernel function that allows for a wider region of influence:

K_σ(x, y) = \frac{1}{2πσ^2} \exp(-\frac{x^2 + y^2}{2σ^2}).   (3.5.1)

Integrated in two dimensions over the cell boundaries (x_1, y_1, x_2, y_2), the kernel becomes

K_σ(x_{eqk}, y_{eqk}; x_1, x_2, y_1, y_2) = \frac{1}{4} [\mathrm{erf}(\frac{x_{eqk} - x_1}{\sqrt{2} σ}) - \mathrm{erf}(\frac{x_{eqk} - x_2}{\sqrt{2} σ})] [\mathrm{erf}(\frac{y_{eqk} - y_1}{\sqrt{2} σ}) - \mathrm{erf}(\frac{y_{eqk} - y_2}{\sqrt{2} σ})].   (3.5.2)

Zechar and Jordan (2010) developed both five- and ten-year forecasts using TripleS. First, they derived a relation between distance from the epicenter and σ in order to determine the distance at which the effect of any one epicenter disappears. They then implemented an optimization procedure for the area skill score, a performance metric detailed in Zechar and Jordan (2008); retrospective forecast experiments were devised to optimize the smoothing distance with respect to the area skill score (Zechar and Jordan, 2010). Results for Italy are shown in Fig. 20. The TripleS technique provides an important opportunity to test the effects of complex formulations on seismicity-based forecasting, and the inverse: future results will provide insight into the first-order results available from simple smoothed seismicity models and evaluate the additional benefits of more complex mathematical and physical models.

Fig. 20. Map views of the corrected submitted five-year and ten-year TripleS forecasts for a hybrid Italian catalog. These are the space-rate representations of the forecasts, summed over the magnitude bins, where the unit of measure is the expected number of earthquakes (from Zechar and Jordan, 2010).
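The integrated kernel is equally compact in code; the sketch below evaluates Eq. (3.5.2) for one epicenter and one cell, with coordinates and σ assumed to be in consistent units.

```python
import math

def gaussian_box_weight(x_eq, y_eq, x1, x2, y1, y2, sigma):
    """Sketch of the integrated TripleS kernel, Eq. (3.5.2).

    Returns the fraction of an epicenter's Gaussian mass, Eq. (3.5.1), that
    falls inside the cell [x1, x2] x [y1, y2].
    """
    s = math.sqrt(2.0) * sigma
    wx = math.erf((x_eq - x1) / s) - math.erf((x_eq - x2) / s)
    wy = math.erf((y_eq - y1) / s) - math.erf((y_eq - y2) / s)
    return 0.25 * wx * wy

# Summing this weight over all catalog epicenters, cell by cell, yields the
# smoothed rate map; sigma is the single free parameter, optimized
# retrospectively (e.g., against the area skill score).
```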

23 K.F. Tiampo, R. Shcherbakov / Tectonophysics (2012) of the short-term non-poissonian earthquake clustering model, a daily forecasting model submitted to the RELM project by Ebel et al. (2007), this is extended to the assumption that the average statistical properties of the spatial and temporal occurrences of earthquakes with M 4.0 during the forecast period will be the same as those of the past 70 or so years, including aftershocks and foreshocks. The initial spatial formulation is based on this premise. Because this is primarily an aftershock forecasting algorithm, the average occurrence rate of aftershocks is modeled using Omori's law (Utsu et al., 1995), forming the basis for forecasting activity near the epicenter of a large earthquake immediately following that event. If an earthquake of M 4.0 occurs anywhere in the region, a circle of radius R is drawn around the epicenter, as defined by Gardner and Knopoff (1974). In the case of California, the Reasenberg and Jones (1989) relation for the Omori law was chosen to calculate the expected rate of earthquakes of M 4.0. In addition, the Poisson distribution of interevent times is the statistical distribution from which short-term forecasts of new mainshocks are derived. The forecast assumes that all events of magnitude smaller than the mainshock are aftershocks. If a new event has a magnitude greater than the first event, the forecast assumes that the first earthquake was a foreshock. When the forecast aftershock/foreshock rate drops below the background mainshock rate for any given location, then the background mainshock rate is substituted. Finally, for those locations that are outside the aftershock zones, the average rate of M 4 events for a regional declustered earthquake catalog is calculated and this mean mainshock rate is distributed throughout the entire area proportional to its past distribution (Ebel et al., 2007). Ebel et al. (2007) detail the various forecast choices for any given day and how they are combined into short-term forecasts. An example showing the forecast evolution in the days before and after the December 2003 San Simeon earthquake are shown in Fig. 21. Note again that one of the benefits of a smoothed seismicity map is that the discretization can be used to test the limitations of the algorithm and the available data and, at least in principle, the errors associated with both can be evaluated as well Seismic earthquake potential models The seismic earthquake potential model, as proposed by Ward (2007), is another version of a smoothed seismicity model where the principle theory is that earthquakes are likely to occur in the future in the same locations as they have occurred in the past. The actual locations and time dependence of the events is constructed from this principle based on some combination of the generally accepted laws of seismicity the GR frequency magnitude distribution (Gutenberg and Richter, 1944), the modified Omori law (Utsu et al., 1995), and Båth's law (Båth, 1965). Again the basic requirement is an instrumental catalogue of earthquake locations, dates, and magnitudes and the estimated minimum magnitude of completeness, m c. In the seismic earthquake potential model submitted by Ward (2007) to the RELM testing center, two catalogues that spanned 1850 to 2003 and 1925 to 2003, from Kagan (2005) and Kagan et al. (2006), respectively, were tested. 
The earthquake rate potential ρ(r) over a given area is computed using a Gaussian filter, ρðr i Þ ¼ T 1 cat j h i exp r i r j =Δ 2 πδ 2 : ð3:7:1þ Here T cat is the inverse of the catalog duration, r is the location of any two points i and j, and j is over all events larger than the minimum magnitude. These smoothed rates are rescaled to ensure that the total number of events is the same for the model as in the actual catalog region. Once the rate at the minimum magnitude is known, then the rates at higher magnitudes are extrapolated from the GR magnitude frequency relation with the historic b-value and maximum magnitude. An example of the result is shown in Fig STEP In 2005, the short-term earthquake probability (STEP) model was inaugurated at (Gerstenberger et al., 2005). STEP is another method that employs a universal seismicity law (in this case the modified-omori aftershock law) (Utsu et al., 1995) with historic and instrumental data in order to create a time-dependent forecast. Because the STEP model is based on the Omori law it is a short-term forecast that produces forecasts on a time scale of days and whose primary signal is related to aftershock sequences, much like the non-poisson clustering model of Ebel et al. (2007) (Section 3.6). The STEP model combines a time-independent occurrence model from tectonic fault data with stochastic clustering models whose parameters are derived from the long-term and recent catalog data. The time-independent model is drawn from the 1996 U.S. Geological Survey (USGS) long-term hazard maps (Frankel et al., 1997). Three stochastic models are calculated for incorporation into the background model: a generic clustering model, a sequence specific model, and a spatially heterogeneous model (Gerstenberger et al., 2005). For the generic clustering model, the rate at time t is given by (Reasenberg and Jones, 1989, 1994): λðþ¼10 t a þbm ð m M Þ = ðt þ cþ p ; ð3:8:1þ where a, b, c and p are constants and M m is the mainshock magnitude. The sequence specific model is estimated using a posteriori values for the parameters of each event in the sequence, if the sequence is long enough. A third, spatially heterogeneous model is calculated at each grid point where parameters are calculated based upon the locally-averaged seismicity in a relatively narrow region (Wiemer and Katsumata, 1999; Wiemer et al., 2002). For the first two models, once the total rate of aftershocks is calculated, it is distributed in an area that extends one-half of a fault length from the source, with a spatial density proportional to 1/r 2, where r is the distance from the source. In the spatially heterogeneous model, spatially variable rates are calculated from the actual distribution of aftershocks that have occurred and been recorded. Each model fit is evaluated using the corrected AIC (Akaike, 1974; Burnham and Anderson, 2002; Kenneth et al., 2002). The relative weight for each model is calculated based on its AIC score. The final model is a weighted sum of the three stochastic models (Gerstenberger et al., 2005, 2007). Finally, in the website version of the STEP model, hazard calculations based on ground shaking are calculated from Boore et al. (1997). Typical hazard maps for California, showing the probability of exceeding a particular level of ground shaking, are shown in Fig. 23. 
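As a minimal example of the generic clustering component, Eq. (3.8.1) can be evaluated directly; the default parameters below are the generic California values of Reasenberg and Jones (1989), and the helper name is ours, not STEP's.

```python
def reasenberg_jones_rate(t, mm, m=4.0, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Sketch of the generic aftershock rate of Eq. (3.8.1).

    Daily rate of aftershocks of magnitude >= m, at time t (days) after a
    mainshock of magnitude mm; a, b, c, p default to the generic California
    values of Reasenberg and Jones (1989).
    """
    return 10.0 ** (a + b * (mm - m)) / (t + c) ** p

# Example: expected daily rate of M >= 4 events two days after an M6.5 mainshock.
rate = reasenberg_jones_rate(t=2.0, mm=6.5)
```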
The original model for California was submitted to the RELM testing center in 2005 (Gerstenberger et al., 2007), and a subsequent, revised version for Italy was submitted to the CSEP testing center (Woessner et al., 2010). The basic premise is the same, with three important differences. First, the time-independent model is derived from a declustered catalog which then is smoothed using the TripleS method of Zechar and Jordan (2010) (Section 3.5, above). The second modification was to the aftershock productivity relationship: the values derived by Lolli and Gasperini (2003) were substituted, and the model denoted STEP-LG. The third method, denoted STEP-NG, employed the method of Christophersen and Smith (2008) for estimating the average productivity based on mean abundance, the mean number of aftershocks as a function of mainshock magnitude. They note that, in general, the spatially varying model produces the best fit to the local data, but that further from the fault, where data are sparser and the minimum magnitude of completeness is greater, the sequence-specific model fits the data better (Woessner et al., 2010). This serves to illustrate the biggest potential drawback to short-term aftershock forecasting: the quality of the available data is critical to the production of accurate hazard maps, as in the case of all real-time or near real-time information systems, arguing for the continued improvement of local and regional seismic networks in areas of high seismic hazard.

Fig. 22. Seismic earthquake potential models for M > 5.5, 6.5, and 7.5 (a, b, and c panels), assuming M_min = 5.5, b = 0.9 and M_max = 8.1, for the intervals 1850-2003 (left) and 1925-2003 (right). Because N(M_min) is conserved, the choice of M_max has little effect on the rates of all but great earthquakes (d panels) (from Ward, 2007).

Fig. 23. The probability of exceeding MMI VI over a given 24-h period that starts at 14:07 Pacific Daylight Time on 28 July. a) The time-independent hazard based on the 1996 USGS hazard maps for California; SF and LA are the locations of San Francisco and Los Angeles, respectively. b) The time-dependent hazard which exceeds the background, including contributions from several events: the 22 December 2003 San Simeon earthquake (SS, Mw = 6.5), an M~4.3 earthquake four days earlier near Ventura (VB), an M~3.8 event that occurred minutes before the map was made near San Bernardino (FN), the 1999 M~7.1 Hector Mine earthquake (LHM), and the 1989 M~6.9 Loma Prieta earthquake (LP). c) The combination of these two contributions, representing the total forecast of the likelihood of ground shaking in the next 24-h period. d) The ratio of the time-dependent contribution to the background (from Gerstenberger et al., 2005).

3.9. HAZGRIDX

HAZGRIDX, as proposed by Akinci (2010), is another version of a smoothed seismicity model in which smoothing is governed by the GR magnitude frequency relation. Starting with a declustered seismicity catalog for Italy, the minimum magnitude of completeness is determined. The seismicity then is smoothed using the spatially smoothed seismicity method (Frankel, 1995), and the smoothed rate of events in each cell, normalized over the region, is calculated as

ñ_i = \frac{\sum_{j: Δ_{ij} \leq 3c} n_j e^{-Δ_{ij}^2/c^2}}{\sum_{j: Δ_{ij} \leq 3c} e^{-Δ_{ij}^2/c^2}},   (3.9.1)

where Δ_ij is the distance between the centers of grid cells i and j and the parameter c is the correlation distance. The sum is taken over all cells j within a distance of 3c of cell i. A five-year CSEP forecast for Italy was created by smoothing over a correlation distance, c, of 15 km and calculating activity rates for each box that fulfill the regional GR magnitude frequency relation. A time-independent Poisson model is employed to calculate the recurrence rate for each event. Results for the 15 km correlation distance are shown in Fig. 24. Akinci (2010) notes that not only does catalog completeness have a crucial effect on the reliability and quality of potential seismicity-based forecasts, but accurate estimation of the GR b-value also is critical: in models such as this, a low b-value will increase the hazard value, while a high one reduces it. The acquisition of high-quality seismic data over long time periods is necessary in order to robustly estimate regional b-values and provide more accurate smoothed seismicity forecasts.

Fig. 24. (a) Forecast seismicity rates obtained from HAZGRIDX using a 15 km correlation distance (expected number of M ≥ 5.0 events per year in each cell) for the CSI 1.1 case, using the spatially smoothed locations of M ≥ 2.9 earthquakes. The yellow asterisk shows the mainshock of the April 6, 2009, L'Aquila earthquake. (b) Five-year probabilities given as log10 rate of events per year for M ≥ 5.0 predicted in 10 km × 10 km cells around each location. Earthquake rates are per km² (from Akinci, 2010).
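A sketch of the smoothing step of Eq. (3.9.1) on a regular grid follows; the cell size, array layout, and brute-force double loop are illustrative simplifications.

```python
import numpy as np

def hazgridx_smooth(n, cell_km=10.0, c=15.0):
    """Sketch of the Gaussian count smoothing of Eq. (3.9.1).

    `n` is a 2-D array of declustered event counts per grid cell; `c` is the
    correlation distance in km (15 km in Akinci, 2010). Cells farther than 3c
    from cell i are excluded from the sums.
    """
    ny, nx = n.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    smoothed = np.zeros_like(n, dtype=float)
    for i in range(ny):
        for j in range(nx):
            d2 = ((ys - i) ** 2 + (xs - j) ** 2) * cell_km ** 2  # squared distance, km^2
            w = np.exp(-d2 / c ** 2)
            w[d2 > (3 * c) ** 2] = 0.0                           # truncate beyond 3c
            smoothed[i, j] = (w * n).sum() / w.sum()             # Eq. (3.9.1)
    return smoothed
```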
3.10. Proportional Hazard Model (PHM)

The Proportional Hazard Model (PHM) is a multivariate non-parametric statistical method that characterizes the temporal dependence of a hazard function representing the instantaneous conditional probability of an occurrence (Cox, 1972; Faenza et al., 2003, 2004; Kalbfleisch and Prentice, 1980). The model does not assume any a priori statistical distribution of the events and can be used to simultaneously integrate different kinds of information. In this case, it allows for analysis of the earthquake occurrence process without the requirement to assume a model such as the characteristic earthquake distribution. In addition, it allows for testing the impact of individual pieces of physical information on the event distribution as they are integrated into the model, and for assessing their relative importance (Faenza and Marzocchi, 2010). The PHM was applied in studies of the spatio-temporal distribution of destructive earthquakes in Italy (Cinti et al., 2004; Faenza and Pierdominici, 2007; Faenza et al., 2003), medium-sized central European earthquakes (Faenza et al., 2009), and large earthquakes worldwide (Faenza et al., 2008), which showed that temporal clustering of events on the order of a few years occurs as a precursory signal prior to large events. The spatial scale of this clustering ranges from tens to hundreds of kilometers.

Two types of random variables (RV) are considered in this version of PHM: the inter-event time (IET), the time interval between two consecutive events, and the censoring time (CT), the time between the most recent event in the catalog and the end of the catalog itself (Faenza and Marzocchi, 2010). These are combined with other information, or covariates, which are linked to the RVs through a hazard function, λ(t; z):

$$\lambda(t; z) = \lambda_0(t) \exp(z\beta), \qquad (3.10.1)$$

where $\lambda_0(t)$ is an unspecified baseline hazard function, $z$ is the covariate vector, and $\beta$ is a column vector that provides the weight for each covariate.

The temporal signature is contained in $\lambda_0(t)$, while $\exp(z\beta)$ carries information about the other processes (Faenza and Marzocchi, 2010). Note that $\lambda_0(t)$ is independent of $z$ in the equation above, implying a simple scaling relationship between them. Also, like many of the smoothed seismicity models, it is assumed that past seismicity is a good representation of future seismicity. The coefficients in $\lambda_0(t)$ and $\beta$ are estimated through a Maximum Likelihood Estimation strategy (Faenza, 2005). The evaluation of the hazard function is based on the empirical survivor function. For the earthquake hazard function above, this is

$$S(t; z) = \exp\!\left(-\int_0^t \lambda_0(u) \exp(z\beta)\, du\right) = S_0(t)^{\exp(z\beta)}. \qquad (3.10.2)$$

By comparing the survivor function above to the survivor function for a Poisson process, it is possible to pinpoint departures from a Poisson process, or clustering, in the data (Faenza and Marzocchi, 2010; Faenza et al., 2003; Kalbfleisch and Prentice, 1980). Once a set of locations is chosen, either on a grid or for a set of tectonic subregions, both the IETs and one CT are calculated for each location, relative to the time elapsed since the most recent event. The vector z for a grid is then a two-dimensional vector that comprises the logarithm of the rate of occurrence and the magnitude of each event. It also can be further discretized based upon subregions (Cinti et al., 2004; Faenza and Marzocchi, 2010; Faenza et al., 2003). Studies in Italy using these two discretizations determined that only the rate of occurrence is significantly different from zero, suggesting that, for the parameters tested, the rate of occurrence is the only important covariate in modeling the spatio-temporal distribution of moderate to large earthquakes (Faenza and Marzocchi, 2010).

The probability of an event at any location z, used to derive a forecast map over a given time period, is then

$$P(t, \Delta\tau; z) = \frac{S(t; z) - S(t + \Delta\tau; z)}{S(t; z)}, \qquad (3.10.3)$$

where $\Delta\tau$ is the forecast time interval, $t$ is the time since the last event (CT), and $S(t; z)$ is the survivor function (Faenza and Marzocchi, 2010). An ongoing forecast for M ≥ 5.5 earthquakes in Italy has been produced since 2005; the forecast is updated every January 1 and after each target event occurrence. Although not specifically tested against either a random or clustered null hypothesis, the forecast performed well for the 2009 M 6.2 L'Aquila earthquake (Pondrelli et al., 2010) and for the subsequent M 5.6 event.

Five- and ten-year CSEP forecast models were created for Italy using PHM and can be seen in Fig. 25 (Faenza and Marzocchi, 2010). Note the region of increased hazard associated with the location of the 2009 L'Aquila earthquake. This illustrates one of the open questions associated with seismicity-based forecasts in general and smoothed seismicity methods in particular: large events that occur during the forecast testing period often produce a significant and persistent signal in the resulting forecasts. These can be valid signals, representing the higher hazard associated with potential aftershocks, but the hazard may also be overestimated or may affect the relative estimation of other regional seismic hazards.
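The forecast step of Eqs. (3.10.2) and (3.10.3) can be sketched numerically as follows; the baseline survivor function, covariate values, and weights below are hypothetical placeholders (an exponential baseline corresponds to the Poisson reference case), not estimates from Faenza and Marzocchi (2010).

```python
import numpy as np

def survivor(t, z, beta, s0):
    """S(t; z) = S0(t)**exp(z.beta), Eq. (3.10.2).

    s0 is a callable baseline survivor function S0(t); in practice it would be
    estimated non-parametrically from the catalog (an assumption here)."""
    return s0(t) ** np.exp(np.dot(z, beta))

def forecast_probability(t, dtau, z, beta, s0):
    """P(t, dtau; z) of Eq. (3.10.3): probability of an event in (t, t + dtau],
    given survival up to the censoring time t."""
    s_t = survivor(t, z, beta, s0)
    return (s_t - survivor(t + dtau, z, beta, s0)) / s_t

# Hypothetical example: exponential baseline survivor function, with z holding
# the log occurrence rate and magnitude covariates used in the Italian studies.
s0 = lambda t: np.exp(-0.05 * t)       # placeholder baseline, rate 0.05 per unit time
z = np.array([np.log(0.2), 5.8])       # [log rate, magnitude], hypothetical values
beta = np.array([0.7, 0.0])            # hypothetical fitted weights
print(forecast_probability(t=12.0, dtau=5.0, z=z, beta=beta, s0=s0))
```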
4. Conclusions

Recent developments in the field of statistical seismology, in conjunction with the availability of large quantities of seismic data at smaller scales and with computational advances, have significantly improved our understanding of time-dependent earthquake processes. As a result, the last ten years have seen significant progress in the field of intermediate- and short-term seismicity-based earthquake forecasting. These seismicity-based forecasting techniques can be differentiated into models based upon techniques for identifying particular physical processes and those that filter, or smooth, the seismicity. Such filters are often, although not always, based upon well-characterized seismic relations such as the modified Omori law. Examination of the primary difference between these two classes of models reflects their major strengths and weaknesses. While physical models generally have the potential to provide more detail in both space and time, the basis for their successes and failures is often obscured by the simplifying assumptions of the model and the complicated interactions that exist in the real earth. On the other hand, while the

Fig. 25. The number of expected events from the PHM model on the CSEP spatial grid: left, M ≥ 5.0; right, M ≥ 6.0 (from Faenza and Marzocchi, 2010).
