USC-SCEC/CEA Technical Report #1
Milestone 1A

Submitted to the California Earthquake Authority
801 K Street, Suite 1000
Sacramento, CA

By the Southern California Earthquake Center
University of Southern California
3651 Trousdale Parkway, ZHS 169
Los Angeles, CA

Principal Investigator: Thomas H. Jordan, Director, SCEC
Project 1 Manager: Edward H. Field, United States Geological Survey
Projects 2/3 Manager: Paul Somerville, URS Corporation

February 8, 2006

Summary and Introduction

This report is submitted on behalf of the Working Group on California Earthquake Probabilities (WGCEP). WGCEP is a collaboration of scientists from the Southern California Earthquake Center (SCEC), the United States Geological Survey (USGS), and the California Geological Survey (CGS). This report contains deliverables under Milestone 1A as follows:

Section I. Uniform California Earthquake Rupture Forecast 1.0 (UCERF 1.0). This section gives a technical description of the Uniform California Earthquake Rupture Forecast (UCERF) 1.0 and a San Andreas Fault (SAF) Assessment developed by the WGCEP. Initial versions of these products were formally reviewed by the WGCEP Management Oversight Committee (MOC) and Scientific Review Panel (SRP), and by some members of the California Earthquake Prediction Evaluation Council (CEPEC), at a November 17-18, 2005 meeting at the University of Southern California. One recommendation that came out of that meeting was to add a Poisson component to the southern California ruptures in UCERF 1.0 to make them consistent with the northern California ruptures; this modification is reflected in the version presented here. A paper describing UCERF 1.0 has been submitted for publication in the journal Seismological Research Letters. This model will serve as an important prototype for implementing an interface for downstream seismic-hazard and loss calculations.

Section II. San Andreas Fault Assessment. The San Andreas Fault (SAF) Assessment is composed of two components: (1) A Synoptic View of Southern SAF Paleoseismic Constraints, and (2) A Framework for Developing a Time-Dependent Earthquake Rupture Forecast. The first is a comprehensive evaluation of what existing paleoseismic data on the southern San Andreas imply about earthquake ruptures on the fault. This study has raised important questions, including the degree to which faults are segmented (a key assumption in previous WGCEPs and in UCERF 1.0). The second is a proposed, generalized framework for UCERF 2.0, specifically designed with the following innovations in mind: (a) the relaxation of fault segmentation as a model option; (b) the inclusion of fault-to-fault rupture jumps (observed in recent earthquakes but not generally accommodated by previous WGCEPs or in UCERF 1.0); and (c) the accommodation of alternative time-dependent earthquake probability models (because different viable approaches exist). In addition to a discussion of the SAF problem, the SAF Assessment provides a preliminary framework that will allow a consistent treatment of all faults throughout the state.

Other WGCEP elements, including the WGCEP project plan, a review of previous WGCEPs, and SHA analysis tools for UCERF 1.0, are available upon request or from the WGCEP website.

Project 1 Oversight

The overall management of this project is under the Principal Investigator, Thomas H. Jordan, the director of the Southern California Earthquake Center. Project 1 is directed by Edward H. Field, a research scientist at the United States Geological Survey in Pasadena, California. Independent committees assist the directors of Project 1.

The Management Oversight Committee (MOC) is in charge of resource allocation and approves project plans, budgets, and schedules. The MOC will endorse the models before submittal of reports. MOC members are Thomas Jordan for SCEC, Rufus Catchings and Jill McCarthy for the USGS, and Michael Reichle for the CGS.

The Executive Committee (EXCOM) is responsible for convening experts, reviewing options, and making decisions on the implementation of the model and supporting databases. The EXCOM will not advocate specific model components, but will make sure that datasets are accommodated so as to adequately span the range of models. EXCOM members are Edward Field, Thomas Parsons, Mark Petersen, and Ross Stein of the USGS; Ray Weldon of SCEC; and Chris Wills of the CGS.

The Science Review Panel (SRP) is an independent body of experts who will decide whether the WGCEP has considered an adequate range of models, given the forecast duration of interest, and whether the relative weights have been set appropriately in the models. SRP members are William Ellsworth (Chair), Art Frankel, Mike Blanpied, and David Schwartz of the USGS; David Jackson and Jim Dieterich of SCEC; Lloyd Cluff of PG&E; and Allin Cornell of Stanford.

Table of Contents

Summary and Introduction
Section I. Uniform California Earthquake Rupture Forecast (UCERF) 1.0
Section II. San Andreas Fault (SAF) Assessment
    A Synoptic View of S. SAF Paleoseismic Constraints
    A Framework for Developing a Time-Dependent Earthquake Rupture Forecast

Section I

Uniform California Earthquake Rupture Forecast (UCERF) 1.0: A Time-Independent and Time-Dependent Model for the State of California

Mark D. Petersen, Tianqing Cao, Kenneth W. Campbell, and Arthur D. Frankel
U.S. Geological Survey, California Geological Survey, and EQE International

ABSTRACT

In this paper we compare the 2002 U.S. Geological Survey (USGS) time-independent (Poisson) California hazard calculations with preliminary time-dependent calculations. The time-independent calculations are time invariant and are based on the 2002 USGS national seismic hazard model, which contains about 200 fault sources (Frankel et al., 2002). The time-dependent model incorporates current state-of-the-art methods for calculating hazard and will provide a benchmark for comparison with future statewide Working Group on California Earthquake Probabilities (WGCEP) updates and with the Regional Earthquake Likelihood Models (RELM). This model is referred to as the Uniform California Earthquake Rupture Forecast version 1.0 model (UCERF 1.0). The time-dependent model is based on the 2002 USGS time-independent hazard parameters such as long-term earthquake recurrence, fault slip rates, magnitude-frequency distributions, and ground motion attenuation relations. In addition to these fault and ground motion parameters, the time-dependent hazard also requires paleoseismic information to estimate the elapsed time since the last event and the uncertainty in the earthquake recurrence distribution. In this paper we apply information published in the WGCEP 2003 northern California model, a statewide analysis by Cramer et al. (2000), an analysis of the Cascadia subduction zone by Petersen et al. (2002), and paleoseismic results published by Weldon et al. (2004) to estimate these additional parameters. We have updated the time-dependent fault probabilities for the San Andreas, San Gregorio, Hayward, Rodgers Creek, Calaveras, Green Valley, Concord, Greenville, Mount Diablo, San Jacinto, Elsinore, Imperial, and Laguna Salada faults and the Cascadia subduction zone. Probabilities of earthquake ruptures are calculated for 1, 5, 10, 20, 30, and 50 year intervals beginning in the year 2006. Time-dependent probabilistic ground motion maps for peak horizontal ground acceleration on rock with a 10% probability of exceedance in the next 30 years are generally as much as 10-20% higher than corresponding time-independent maps near the southern San Andreas, the Cascadia subduction zone, and the eastern San Francisco Bay area faults. These maps are as much as 10-20% lower along the San Andreas fault north of the San Francisco Bay area and near the southern San Jacinto fault. The differences between the time-dependent and time-independent models at the high frequencies that control peak horizontal ground acceleration are significant near the faults, but at sites farther away the differences may be negligible at all return periods.

INTRODUCTION

Ground shaking hazard maps that are based on sound earth-science research are effective tools for mitigating damage from future earthquakes. Applying the assumptions that future earthquakes will occur on active faults or near previous events, and that the ground shaking from these events will fall within the range of globally recorded motions, leads to probabilistic hazard maps that predict ground-shaking potential across a region. Development of hazard and risk maps requires technical interactions between earth scientists and engineers in estimating the rate of potential earthquakes across a region, quantifying likely ground shaking levels at a site, and understanding how buildings respond to strong ground shaking.

For the past 25 years the U.S. Geological Survey (USGS) and California Geological Survey (CGS) have cooperated with professional organizations to incorporate hazard maps and other hazard products in public and corporate documents such as building codes, insurance rate structures, and earthquake risk mitigation plans (Algermissen and Perkins, 1982; Frankel et al., 1996; Petersen et al., 1996; Frankel et al., 2002). These hazard products are used in making public-policy decisions; it is therefore essential that the official USGS-CGS hazard models reflect the best available science. This qualification is also required by the statute that regulates the California Earthquake Authority, which provides most earthquake insurance in the state. To adequately represent the best available science, the hazard maps need to be updated regularly to keep pace with new scientific advancements.

The USGS and CGS promote science research and consensus building by: (1) providing internal and external funding to scientists and engineers for collecting, interpreting, and publishing geologic and seismic information needed to quantify earthquake hazard and risk across the country, and (2) involving scientists from academia, government, and industry in workshops and working groups to define the current best available science. The methodologies, computer codes, and input data used in developing these products need to be openly available for review and analysis. Input parameters and codes for the current hazard maps may be obtained from the USGS and CGS websites, where a user may access documentation that describes the methodologies and parameters, a relational database and tables that explain how the fault parameters and uncertainties were chosen, interactive tools that allow the user to view hazard map information and to disaggregate the hazard models, and web interfaces that present building code design values at a latitude and longitude or zip code of interest.

The USGS has historically developed time-independent models of earthquake occurrence that are based on the assumption that the probability of the occurrence of an earthquake in a given period of time follows a Poisson distribution. Probabilities calculated in this way require only knowledge of the mean recurrence time. Results of these calculations do not vary with time (i.e., results are independent of the time since the last event) and are a reasonable basis for the earthquake-resistant provisions in building codes and long-term mitigation strategies.
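As a concrete illustration of the Poisson assumption, the short sketch below (with illustrative numbers that are not taken from this report) computes the time-independent probability of at least one event in a fixed window from a fault's mean recurrence time:

```python
import math

def poisson_probability(mean_recurrence_yrs, window_yrs):
    """Probability of at least one rupture in the window under a
    time-independent (Poisson) model: P = 1 - exp(-window / T_mean)."""
    rate = 1.0 / mean_recurrence_yrs        # mean annual rate of events
    return 1.0 - math.exp(-rate * window_yrs)

# Example: a hypothetical fault with a 250-year mean recurrence interval.
print(poisson_probability(250.0, 30.0))     # ~0.11 for a 30-year window
```

Note that the result depends only on the mean recurrence time, not on when the fault last ruptured.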
In contrast, time-dependent models of earthquake occurrence are based on the assumption that the probability of occurrence of an earthquake in a given time period follows a renewal model, that is, a lognormal, Brownian Passage Time (BPT), or other probability distribution in which the probability of the event depends on the time since the last event (Appendix A). In addition to the

mean frequency (or recurrence time) of earthquakes, these models require additional information about the variability of the frequency of events (the variance or standard deviation) and the time of the last event. The time-dependent models are intuitively appealing because they produce results broadly consistent with the elastic rebound theory of earthquakes. Reliable time-dependent models are desired for setting insurance rates as well as for other short- to intermediate-term mitigation strategies. The USGS and CGS are beginning to develop these types of hazard products as new geologic and seismic information regarding the dates of previous events along faults becomes available.

In application, both the time-independent and time-dependent models also depend on assumptions about the magnitude-frequency characteristics of earthquake occurrence, the simplest of which is the characteristic earthquake model, in which all large earthquakes along a particular fault segment are assumed to have similar magnitudes, average displacements, and rupture lengths. More complicated models include Gutenberg-Richter magnitude-frequency distributions and multi-segment ruptures. Inasmuch as time-dependent models require more input parameters and assumptions than time-independent models, there is not yet the same degree of consensus about the methods and results for these calculations.

Both time-independent and time-dependent hazard calculations require moment-balanced models that are consistent with global plate rate models and with slip rates determined on individual faults. Geologists can estimate the average slip rates on faults in California from offset geologic features that have been dated using radiometric techniques. At sites along some faults we know the approximate times of past events extending hundreds or thousands of years into the past, but we do not know the magnitudes of, or the lengths of faults involved in, these past earthquakes. A fundamental constraint that we apply to candidate earthquake occurrence models, commonly called moment balancing, is the requirement that over the long term the displacements from the earthquakes sum to the observed slip rate all along the fault. Models that permit smaller earthquakes will generally contain more frequent earthquakes in order to add up to the total slip rate. Because the ground motion hazard increases with the frequency of earthquakes, models that permit smaller but more frequent earthquakes will typically lead to higher hazard estimates.

In this paper we describe the general characteristics of the time-independent (Poisson) 2002 USGS-CGS California seismic hazard models and develop a time-dependent model, the first version of the Working Group on California Earthquake Probabilities (WGCEP) uniform California earthquake rupture forecast model (UCERF 1.0). The time-independent and time-dependent hazard maps and hazard curves provide a comparison for the Regional Earthquake Likelihood Models (RELM) presented in this volume and for future WGCEP models that will be developed over the next couple of years. The time-dependent model does not have the same consensus inputs that are incorporated in the standard time-independent model, so the user should use caution in applying these maps. However, this new model builds on information collected from several Working Group on California Earthquake Probabilities models (1988, 1990, 1995, 1999, 2003), time-dependent models published by Cramer et al.
(2000) and Petersen et al. (2002), paleoseismic data from Weldon et al. (2004), and recent seismicity data. Time-dependent analysis incorporates first-order information on the elapsed time since the last earthquake and should provide a reasonable basis for comparison.
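To make the moment-balancing constraint concrete, the sketch below (all fault parameters are assumed for illustration, not taken from the model) balances a characteristic-only model and a Gutenberg-Richter model against the same moment accumulation rate, showing that the model permitting smaller earthquakes requires far more frequent events:

```python
import numpy as np

MU = 3.0e10                                  # shear modulus, Pa (assumed)

def moment(mw):
    """Seismic moment in N*m (Hanks & Kanamori, 1979)."""
    return 10.0 ** (1.5 * mw + 9.05)

# Hypothetical fault: 100 km x 12 km, slipping at 25 mm/yr.
moment_rate = MU * (100e3 * 12e3) * 0.025    # N*m accumulated per year

# Characteristic model: all moment released in M 7.8 events.
rate_char = moment_rate / moment(7.8)

# Moment-balanced Gutenberg-Richter model (b = 1) from M 6.5 to 7.8.
mags = np.arange(6.5, 7.85, 0.1)
rel = 10.0 ** (-mags)                        # relative G-R bin rates
rates_gr = rel * moment_rate / np.sum(rel * moment(mags))

print(f"characteristic: one event per {1/rate_char:.0f} yr")
print(f"G-R, M>=6.5 total: one event per {1/rates_gr.sum():.0f} yr")
```

Under this illustrative moment budget the Gutenberg-Richter model yields substantially more frequent M >= 6.5 events than the characteristic model, which is why the choice of magnitude-frequency distribution matters for hazard.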

THE 2002 USGS-CGS TIME-INDEPENDENT SEISMIC HAZARD MODEL

The USGS and CGS released California hazard maps in 1996 and 2002 using a probabilistic seismic hazard framework and incorporating information from regional workshops held across the country (Petersen et al., 1996; Frankel et al., 1996; Frankel et al., 2002). Hundreds of earth scientists and engineers participated in USGS regional workshops to define the best available science. The current hazard model is based on fault information, seismicity catalogs, geodetic data, and ground-shaking attenuation relations that were discussed at these workshops. An advisory committee reviewed these models and made recommendations on how the resulting products could be improved. The 2002 California model incorporates nearly two hundred fault sources that generate thousands of earthquake ruptures. It is not possible to describe the details of the methodology and fault parameters in the limited space available in this publication. Instead, we refer the reader to the published references, data, and products available from the USGS and CGS websites and publications (Petersen et al., 1996; Frankel et al., 1996; Frankel et al., 2002) that provide the input parameters and codes needed to reproduce the official hazard model. In the section below we provide only a general description of the methodology, input data, and results.

The fault sources considered in the model contribute more to the hazard in California than the background seismicity does. Faults are divided into two classes, A-type and B-type. The A-type faults generally have slip rates greater than 1 mm/yr and paleoseismic data that constrain the recurrence intervals of large earthquakes (Figure 1). Various editions of the Working Group on California Earthquake Probabilities (WGCEP) reports indicate that sufficient information is available for these faults to allow development of rupture histories and time-dependent forecasts of earthquake ruptures. Models are developed using single-segment and multi-segment earthquake ruptures as defined by various working groups in northern California and southern California. The B-type faults include all of the other faults in California that have published slip rates and fault locations that can be used to estimate a recurrence interval. To calculate the recurrence, the moment of the potential earthquake is divided by the moment rate determined from the long-term fault slip rate to obtain the recurrence time of a characteristic-size earthquake. We use a logic tree to account for epistemic uncertainty in our knowledge of which magnitude-frequency distribution is correct: in the hazard model, a Gutenberg-Richter distribution that spans magnitudes between 6.5 and the characteristic size is weighted 1/3, and a characteristic distribution defined by a simple delta function is weighted 2/3. Modeling uncertainties in the characteristic or maximum magnitudes of a fault are accounted for explicitly in the calculation procedure.

In addition to the fault sources, a random source is used to account for earthquakes on unknown fault sources, moderate-size earthquakes on faults, and other earthquake sources that have not been quantified. This portion of the model is most important in areas that lack identified active faults; however, it also contributes significantly to the overall seismic hazard across the state.
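The B-type recurrence recipe just described can be sketched as follows (a minimal sketch with hypothetical fault dimensions and slip rate; the actual model uses its own magnitude-area and moment relations):

```python
MU = 3.0e10  # crustal shear modulus, Pa (typical assumed value)

def char_recurrence_yrs(mag, length_km, width_km, slip_rate_mm_yr):
    """Recurrence time of the characteristic event from moment balance:
    event moment divided by the fault's annual moment accumulation."""
    moment = 10.0 ** (1.5 * mag + 9.05)   # N*m, Hanks & Kanamori (1979)
    moment_rate = MU * (length_km * 1e3) * (width_km * 1e3) \
                     * (slip_rate_mm_yr * 1e-3)
    return moment / moment_rate

# Hypothetical B-type fault: M 7.0 characteristic event, 60 km x 12 km,
# slipping at 2 mm/yr.
print(f"{char_recurrence_yrs(7.0, 60.0, 12.0, 2.0):.0f} years")  # ~820
```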

The random background source model is based on earthquake catalogs. For the 1996 and 2002 hazard analyses, we developed a California statewide earthquake catalog for magnitudes greater than 4 from the late 1700s to 1994 and 2000, respectively (Petersen et al., 1996; Petersen et al., 2000; Toppozada et al., 2000). This statewide catalog was developed using regional catalogs from the U.S. Geological Survey (Menlo Park and Pasadena), the California Institute of Technology, the University of California, Berkeley, and the University of Nevada, Reno, and various publications regarding earthquake moment magnitudes and aftershocks. Mine blasts and duplicate earthquakes were removed from the catalog. For the time-independent hazard assessment we only consider independent events; dependent events are not consistent with the assumption of independence in a Poisson process. We applied the algorithm of Gardner and Knopoff (1974) to decluster the catalog, removing aftershocks and foreshocks that are identified using magnitude and distance criteria. The earthquakes in the catalog are spatially binned over a grid and smoothed using a Gaussian distribution with a 50 km correlation length to obtain the rate of earthquakes in the background model (Frankel, 1995). The hazard is then computed by using the rate at each grid node in conjunction with a Gutenberg-Richter magnitude-frequency distribution and attenuation relations to obtain the rate of exceedance at each ground motion level.

In portions of eastern California and western Nevada there are differences between the model rate of earthquakes calculated from the geologic slip rate data and the historic rate of earthquakes from the earthquake catalog. Recent geodetic data appear more consistent with the historic earthquake data; both suggest higher contemporary strain rates than would be implied by geologic studies. Therefore, in four regions of eastern California and western Nevada we have used the geodetic data to supplement our earthquake fault models. The earthquakes are modeled using geodetically based slip rates that are spread uniformly across a zone and modeled using a Gutenberg-Richter magnitude-frequency distribution. Future research should help delineate the particular faults that are accommodating the observed geodetic strains and determine whether recent data reflect the long-term strain rates or are dominated by secular variability.

Once we have quantified the earthquake sources we can apply published empirical attenuation relations to estimate the potential ground shaking levels from the modeled earthquakes. We have applied four attenuation relations, equally weighted, for coastal California earthquakes (Abrahamson and Silva, 1997; Boore et al., 1997; Campbell and Bozorgnia, 2003; Sadigh et al., 1997). For the extensional region we have also applied the attenuation relation of Spudich et al. (1999). Ground motions from the Cascadia subduction zone are calculated using the attenuation relations for interface earthquakes of Youngs et al. (1997) and Sadigh et al. (1997) and for deep intraslab earthquakes of Atkinson and Boore (2003) and Youngs et al. (1997). Generally, attenuation relations should be updated when sufficient strong motion data are recorded that show inconsistencies with the previous relations. Figure 2 shows the time-independent, or Poisson, hazard of peak ground acceleration for 10% probability of exceedance in 30 years from the 2002 national seismic hazard model.
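The gridded-seismicity smoothing can be sketched as below (a minimal illustration assuming a 0.1-degree grid and using a standard Gaussian filter; the kernel normalization in Frankel (1995) differs slightly, so treat the sigma conversion as an approximation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

cell_km = 11.1                       # ~0.1 degree of latitude per cell
corr_km = 50.0                       # correlation length from the text

# Hypothetical grid of declustered M >= 4 event counts.
counts = np.zeros((80, 60))
counts[40, 30] = 10.0                # e.g., ten events in one cell

# Smooth the counts with the correlation length expressed in cells.
smoothed = gaussian_filter(counts, sigma=corr_km / cell_km)

catalog_years = 200.0
rate_per_cell = smoothed / catalog_years    # annual M >= 4 rate per cell
print(rate_per_cell.sum() * catalog_years)  # total count preserved: ~10.0
```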

Tests of hazard maps

It is difficult to test the hazard maps that are used in building codes because they consider earthquakes with return periods of more than two thousand years, whereas most readily available test criteria come from the relatively short historical record, which spans only a couple hundred years. Most of the RELM models are compared to the most recent five years of seismicity. This short catalog is useful for testing very high-probability ground-motion predictions, but does not test the low-probability (2500-year) ground motions that are considered in building codes. We have performed tests of different components of the hazard model using three datasets that are somewhat independent of the data used to generate the models: (1) the historic earthquake magnitude-frequency distribution from the catalog, (2) plate tectonic strain-rate models, and (3) historical intensity data.

One simple way to test the magnitude-frequency distribution of the hazard source model is to compare the rate of earthquakes in the model with the observed rate of earthquakes in the 100- to 200-year historic earthquake catalog. In the USGS-CGS source model the catalog was used to estimate the rate of M 4 and greater background (random) earthquakes, meaning that the actual magnitude-frequency distribution was not used directly in the model. The fault information was added to the random earthquakes to produce a complete source model. The complete source model was tested to determine whether the earthquake rates at each potential magnitude are consistent with the magnitude-frequency distribution observed in the historic earthquake record. Such tests showed significant differences between the 1996 model rate of magnitude 6.5 to 7.0 events and the historic rate of earthquakes (Petersen et al., 2000). However, this earthquake rate discrepancy was considered in developing the 2002 model, and the model was modified so that the rate of earthquakes agreed much more closely with the rate of historic earthquakes (Frankel et al., 2002, Figure 6).

The moment rate measured across the plate boundary is another constraint that is applied in developing a proper seismic hazard model for California. We used the NUVEL I plate tectonic rate model (DeMets et al., 1990) for comparison with the geological vector slip rates in the 1996 model (which is very similar to the 2002 model; Petersen et al., Figure 4). The slip rates predicted by the NUVEL I plate rate model and the USGS-CGS slip rate model are generally, with a few exceptions, within 10% of each other. Orientations of the vector slip rates are also generally compatible. A clearer understanding of geological slip rates and orientations would reduce the differences between the datasets, especially in southern California.

Another test that may be applied to the hazard maps is to use the historical record of intensity (either the Modified Mercalli Intensity data or the 10,000-year precarious rock data) across California for comparison with the intensities predicted by the hazard models. Stirling and Petersen (2005) compared MMI intensity data with the seismic hazard intensities at 26 sites across California and many other sites across the United States and New Zealand. The intensity data indicate that historical rates for peak ground acceleration were similar to the 2002 California hazard model accelerations after some biases in intensity were corrected. The intensity data are difficult to analyze and contain large uncertainties because intensity is a qualitative measure.
Nevertheless, this analysis provides some level of confidence that the short return period portion

of the hazard models is producing high-frequency ground motions that are similar to the observed historical intensities.

THE USGS-CGS PRELIMINARY TIME-DEPENDENT SEISMIC HAZARD MODEL

The time-dependent hazard presented in this paper is based on the time-independent, or Poissonian, 2002 national seismic hazard model and additional recurrence information for the A-type faults, which include the San Andreas, San Gregorio, Hayward, Rodgers Creek, Calaveras, Green Valley, Concord, Greenville, Mount Diablo, San Jacinto, Elsinore, Imperial, and Laguna Salada faults and the Cascadia subduction zone (Figure 1). A-type faults are defined as having geologic evidence for long-term rupture histories and an estimate of the elapsed time since the last earthquake. A simple elastic dislocation model predicts that the probability of an earthquake rupture increases with time as the tectonic loading builds stress on a fault. Thus, the elapsed time is the first-order parameter in calculating time-dependent earthquake probabilities. However, other parameters such as static elastic fault interactions, viscoelastic stress transfer, and dynamic stress changes from earthquakes on nearby faults will also influence the short-term probabilities of earthquake occurrence. In this paper we only consider the influence of the elapsed time since the last earthquake.

Over the past 30 years, the USGS and CGS have developed time-dependent source and ground motion models for California using the elapsed time since the last earthquake (Working Group on California Earthquake Probabilities, 1988, 1990, 1995 (led by the Southern California Earthquake Center), 1999, 2003; Cramer et al., 2000; Petersen et al., 2002; Appendix A). The probabilities of occurrence for the next event were assessed using Poisson, Gaussian, lognormal, and Brownian Passage Time statistical distributions. Past working groups applied a value of about 0.5 +/- 0.2 for the ratio of the total sigma to the mean of the recurrence distribution. This ratio, known as the coefficient of variation, accounts for the periodicity in the recurrence times of earthquakes: a coefficient of variation of 1.0 represents irregular behavior (nearly Poissonian), and a coefficient of variation of 0 indicates periodic behavior.

For this analysis, we have applied the parameters shown in Table 1 to calculate the time-dependent earthquake probabilities. The basic parameters needed for these simple models are the mean recurrence interval (T-bar), the parametric uncertainty (Sigma-P), the intrinsic variability (Sigma-I), and the year of the last earthquake. The parametric sigma is calculated from the uncertainties in mean displacement and mean slip rate of each fault (Cramer et al., 2000). The intrinsic sigma describes the randomness in the periodicity of the recurrence intervals. The total sigma for the lognormal distribution is the square root of the sum of the squares of the intrinsic and parametric sigmas. For this analysis we assume characteristic earthquake recurrence models with segment boundaries defined by previous working groups.

We calculated the time-dependent hazard using the 2002 Working Group on California Earthquake Probabilities report (WGCEP, 2003) for the San Francisco Bay area, the 2002 national seismic hazard model and the Cramer et al. (2000) model for the other faults in northern and southern California, and the Petersen et al. (2002) model for the Cascadia subduction zone. Ned Field and Bill Ellsworth reran the computer code that was used to produce the WGCEP (2003)

report and provided an update to the time-dependent probabilities for the San Francisco Bay area for 1, 5, 10, 20, 30, and 50-year time periods beginning in 2006.

For the Cascadia subduction zone, we applied a time-dependent model for the magnitude 9.0 events using the results of Petersen et al. (2002). Recurrence rates for the magnitude 9 earthquakes in the model were estimated from paleo-tsunami data along the coast of Oregon and Washington. The M 8.3 earthquakes in the subduction plate-interface model were parameterized using Poisson process statistics (Frankel et al., 2002). The M 9.0 and M 8.3 models were equally weighted in the 2002 hazard model as well as in this time-dependent model. The San Andreas (Parkfield segment), San Jacinto, Elsinore, Imperial, and Laguna Salada faults were all modeled using single-segment ruptures following the methodology of Cramer et al. (2000). Multi-segment ruptures were allowed in the WGCEP 1995 model, but these were not incorporated in this preliminary time-dependent model.

We developed three southern San Andreas models that consider various combinations of the five segments of the southern San Andreas Fault that were defined by previous working groups (Cholame, Carrizo, Mojave, San Bernardino, and Coachella) and three multiple-segment ruptures. In the time-dependent models, 11% of the occurrence rate is based on the Poisson model and 89% is based on the time-dependent model, similar to the method applied in WGCEP (2003). For the time-dependent portion of the model, it is easier to define the single-segment time-dependent probabilities because there are published recurrence rates and elapsed times since the last rupture for these segments based on historical and paleo-earthquake studies (e.g., WGCEP 1995). However, defining time-dependent multiple-rupture (cascade) probabilities is much more complicated.

The first time-independent model (T.I. Model 1) assumes single-segment and multiple-segment ruptures with weights that balance the moment rate and that are similar to the observed paleoseismic rupture rates (Frankel et al., 2002; Appendix A). Possible rupture models of the southern San Andreas include: (a) ruptures along the five individual segments, (b) rupture of the southern two segments and of the northern three segments (similar to the 1857 earthquake rupture), and (c) rupture of all five segments together. For each of the complete rupture models, the magnitude of the earthquake was determined from the rupture area. Recurrence rates were assessed by dividing the moment rate along the rupture by the seismic moment corresponding to this calculated magnitude. The single-segment rupture models were weighted 10% and the multi-segment rupture models were weighted 90% (50% for sub-model b and 40% for sub-model c) to fit the observed paleoseismic data.

In the first time-dependent model (T.D. Model 1), which is based on T.I. Model 1, probabilities are calculated, as in previous working groups, by using a lognormal distribution for all the segments with the parametric sigmas listed in Table 1. For the time-dependent portion of the model, we have adjusted the Poisson probabilities to account for the information from the time-dependent probabilities of single-segment events. The southern San Andreas individual fault segments have higher time-dependent probabilities than the corresponding Poisson probabilities (a probability gain); therefore, the multi-segment ruptures should also have higher time-dependent probabilities than in the Poisson model.
Since it is not known in advance which segment might

trigger rupture of the cascade, this multi-segment rupture probability is calculated using the weighted average of the probability gains from each of the segments involved in the rupture, where the weights are proportional to the 30-year time-dependent probability of each segment. We show an example containing two segments in Appendices B and C.

The second time-independent model (T.I. Model 2) is also based on the 2002 national seismic hazard model (model 2) and considers characteristic displacements for earthquake ruptures. This model assumes two multiple-segment ruptures that are composed of segments from Cholame through Mojave (1857-type ruptures) and from San Bernardino through Coachella. In addition, single-segment ruptures of the Cholame and Mojave segments are considered. The model assumes that the Carrizo segment ruptures only in 1857-type earthquakes, with a rate of 4.7e-3 events/yr based on paleoseismic observations. Therefore, this multi-segment rupture accounts for 22 mm/yr of the total slip rate of 34 mm/yr (WGCEP, 1995), given the earthquake rate and a 4.75 m characteristic slip on the Cholame segment. The remaining 12 mm/yr is taken up by single-segment ruptures of the Cholame segment. Using a single-segment magnitude of 7.3 and a 12 mm/yr slip rate yields a single-segment recurrence rate for Cholame of 2.5e-3/yr. For the Mojave segment, the slip rate available after the slip from 1857-type ruptures is removed is 9 mm/yr. Using an earthquake of magnitude 7.4 (4.4 m/event) for single-segment rupture and a slip rate of 9 mm/yr yields a recurrence rate of 2.05e-3/yr for a single-segment Mojave rupture. For the San Bernardino through Coachella rupture, an M 7.7 earthquake with a recurrence rate of 5.5e-3 events/yr is used to be consistent with the paleoseismic data. Inclusion of other ruptures on these segments leads to estimated recurrence rates that exceed the paleoseismic observations. The total moment rate of this model is 92% of the total predicted moment rate.

The second time-dependent model (T.D. Model 2), which is based on T.I. Model 2, accommodates the difference between the total segment time-dependent rupture rate (the time-dependent rate of all potential ruptures that involve that segment) and the corresponding multiple-segment rupture rate that involves that segment. The segment time-dependent probabilities for all ruptures combined are calculated the same way as for the first model and are shown in Table 1. The Carrizo segment is assumed to rupture only in 1857-type events, and its total segment time-dependent probability is the same as the time-dependent probability for the 1857-type events (following the partial cascades model in Petersen et al., 1996 and Cramer et al., 2000). We first calculate a time-dependent probability P_ctotal for any type of rupture scenario involving the Cholame segment (single segment or 1857 type), using the total recurrence rate derived from the time-independent calculation of Model 2. Next we calculate the time-dependent probability P_1857 for 1857-type ruptures using the paleoseismic recurrence rate. The time-dependent rate of a single-segment Cholame rupture is then the total time-dependent rate (calculated from P_ctotal) minus the rate of 1857-type events (converted from P_1857). An example is shown in Appendix C. The time-dependent rate for the Coachella and San Bernardino segments rupturing together has to be the smaller of the two segment rates.
In T.I. Model 2, the San Bernardino segment is not allowed to rupture by itself. When the conditional probability weighting is applied, however, this rupture has to be allowed in order to accommodate the excess rate on this segment. Its time-dependent rate is the segment rate (converted from probability) minus the event rate of the Coachella and San Bernardino segments rupturing together.
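The slip-budget arithmetic behind T.I. Model 2 can be checked with the numbers quoted above (a minimal sketch using only values stated in the text):

```python
# Slip-budget arithmetic for T.I. Model 2 (values from the text).
rate_1857 = 4.7e-3            # 1857-type rupture rate, events/yr
slip_1857_cholame = 4.75      # characteristic slip on Cholame, m/event

# Slip rate consumed by 1857-type ruptures on the Cholame segment:
print(rate_1857 * slip_1857_cholame * 1e3)  # ~22 mm/yr of the 34 mm/yr total

# Remaining 12 mm/yr drives single-segment Cholame ruptures (4.75 m/event):
print(12e-3 / 4.75)           # ~2.5e-3 events/yr, matching the text

# Mojave: 9 mm/yr left over, 4.4 m per M 7.4 event:
print(9e-3 / 4.4)             # ~2.05e-3 events/yr, matching the text
```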

For the third model we have applied two rupture scenarios that are based on new (i.e., post-2002) geologic data and interpretations: (1) the single-segment time-dependent rates that were used in Model 1 above and (2) two multi-segment ruptures, the 1857-type rupture that includes the Carrizo, Cholame, and Mojave segments and the southern multi-segment rupture that includes the San Bernardino and Coachella segments. The recurrence rates and elapsed times since the last earthquake for the multi-segment ruptures are based on geologic data shown in Weldon et al. (2004, Figure 12). The five single-segment rupture models were weighted 10% and the two multi-segment ruptures were weighted 90%, similar to the weighting in T.I. Model 1. The multi-segment earthquakes incorporate a recurrence time of 200 years (5 events in 1,000 years) and an elapsed time of 149 years for the 1857-type event, and a recurrence time of 220 years (4 events in 880 years) and an elapsed time of 310 years for the southern two-segment rupture.

In general the models of Weldon et al. (2004) are moment balanced using slip rate. However, when we apply the 200- and 220-yr recurrence intervals to the 1857-type rupture (M 7.8) and the southern multi-segment rupture that includes the San Bernardino and Coachella segments (M 7.7), we get a moment rate that is about 80% of that of the other models. The reason for the lower moment is that the magnitude of the multi-segment rupture is not specified in the Weldon et al. (2004) model. If the magnitude of the 1857-type rupture is raised from 7.8 to 7.9, the updated model releases about the same moment as the other models and is moment balanced. This slight magnitude adjustment would not change the hazard calculation significantly, since ground motions from M 7.8 and M 7.9 earthquakes are very similar. Therefore, for Model 3 we have maintained the M 7.8 magnitude in order to be consistent with the magnitudes used in the other models, recognizing that the moment rate is a little lower as a result. Weldon et al. (2004) also show data indicating variability in the southern extent of the 1857 ruptures and the northern extent of the southernmost multi-segment rupture in the vicinity of the 1812 rupture. Therefore, we have also included an aleatory variability for the segment boundary near the southern end of the 1857 rupture and have not used the 1812 rupture as the date of the last event. We have developed time-independent (Poisson) and time-dependent models for these ruptures (T.I. Model 3 and T.D. Model 3). An example calculation is shown in Appendix C.

The time-dependent hazard of peak ground acceleration for 10% probability in 30 years is shown in Figure 3, and the time-dependent probabilities are listed in Table 2. The time-dependent map is developed from the WGCEP (2003) model for the San Francisco Bay area; the Cramer et al. (2000) model for the San Andreas (Parkfield), San Jacinto, Elsinore, Imperial, and Laguna Salada faults; and the Petersen et al. (2002) model for the Cascadia subduction zone. In addition, the southern San Andreas hazard was developed using T.D. Models 1, 2, and 3 with equal weighting.

COMPARISON OF TIME-INDEPENDENT AND TIME-DEPENDENT SEISMIC HAZARD MODELS

To compare the time-independent and time-dependent hazard, we have produced hazard curves and maps from three equally weighted Poisson models (T.I. Models 1, 2, and 3) and three equally weighted time-dependent models (T.D. Models 1, 2, and 3).
The differences between these models are in southern California; northern California remains the same in all of the

models. The time-independent (Poisson) and time-dependent maps are shown in Figures 2 and 3, and the ratio of the time-dependent to the time-independent map is shown in Figure 4. The ratio map shows that the largest positive changes to the hazard due to time-dependence are along the eastern San Francisco Bay area faults, the Cascadia subduction zone, the southern San Andreas Fault, and the northern San Jacinto Fault, which are all elevated by up to 10-20% with respect to the Poisson model. Hazard along the San Andreas Fault in northern California and along the southern San Jacinto, Superstition Hills, and Imperial faults in southern California is reduced by up to 10-20% with respect to the time-independent hazard because these faults all experienced earthquakes within the past few decades.

Figure 5 shows hazard curves that indicate the differences between hazard models at three sites: Parkfield, Los Angeles, and Cajon Pass (located near the intersection of the San Bernardino and Mojave segments of the San Andreas fault). The Parkfield hazard curve indicates that the 30-year time-dependent probability is consistently 10% higher than the Poisson probability over a large range of annual frequency of exceedance levels. However, the 5-year probability is considerably lower than the corresponding Poisson probability. The Parkfield segment of the San Andreas fault last ruptured in 2004, and the mean recurrence for this segment is only 25 years. Thus, for a 30-year window, which is longer than the 25-year average recurrence, the probability of an earthquake rupture is enhanced compared to the Poisson value. For a 5-year window the probabilities are much lower because this time period falls in the early portion of the seismic cycle.

The site at Los Angeles indicates that the time-dependent probabilities are not controlling the hazard at distances of about 50 km. Deaggregations for sites in the greater Los Angeles region indicate that local faults tend to contribute more to the hazard at the high frequencies that control peak horizontal ground acceleration than do the large events on the San Andreas fault. This may not be true at longer periods, for which the San Andreas fault is more important. The differences between the time-dependent and time-independent curves are negligible for peak ground acceleration (less than 1%).

Hazard at sites along the San Andreas Fault depends on how the local fault segments were modeled. For example, the Cajon Pass site is controlled by earthquakes on the Mojave and San Bernardino segments as well as by the multi-segment ruptures that include those segments. We have included this site to show the differences between the time-dependent and time-independent ruptures for each of the three time-independent and time-dependent models. The hazard differences are about 10-20% between these models at the risk levels used in building codes. The time-dependent version typically gives larger ground motions because of the long elapsed time since the last earthquake. Future versions of the time-dependent hazard maps will allow stress interactions and viscoelastic effects that will add some variability to these curves.

CONCLUSIONS

In this paper we have presented both time-independent and time-dependent probabilities for several faults, as well as statewide ground motion hazard maps for California that show the value of

peak ground acceleration with a 10% probability of exceedance for a time period of 30 years starting in 2006. The time-dependent maps differ by about 10% to 20% from the time-independent maps. The southern San Andreas fault, the Cascadia subduction zone, and the eastern San Francisco Bay area faults generally have elevated hazard relative to the time-independent maps. This is because a relatively long time has passed since the last earthquakes on these faults: about 150 years since the 1857 M 7.9 Fort Tejon earthquake, 300 years since the 1700 M 9 Cascadia earthquake, and 137 years since the 1868 M 6.8 earthquake on the southern Hayward fault. All of these faults are, most likely, in the latter half of their seismic cycles. The northern San Andreas fault, southern San Jacinto fault, and Imperial fault, on the other hand, have time-dependent hazard that is lower than the time-independent hazard because of the relatively short periods since the 1906 (M 7.8) San Francisco earthquake, the 1968 (M 6.4) Borrego Mountain earthquake, and the 1979 Imperial Valley earthquake, which place these faults in the first half of their seismic cycles. Sites located well away from the A-type faults are typically controlled by local faults, especially for high frequencies greater than 1 Hz.

The three time-independent and corresponding time-dependent models proposed in this paper are based on characteristic earthquake recurrence models that have distinct segment boundaries; for T.D. Model 3 we have allowed the end of the rupture to vary according to the geologic models. For the past 15 years WGCEP reports have all applied the characteristic model with fixed or slightly variable boundaries. Recent studies (e.g., Weldon et al., 2005) suggest that other, more random ruptures also fit the same geologic data constraining earthquake ruptures on the southern San Andreas Fault. This implies that strict characteristic models should be relaxed in future time-dependent hazard calculations to account for this potential variability in source models. Variable rupture characteristics may not result in significant changes to the hazard at low frequencies (i.e., long return periods), but they should be considered in future WGCEP models.

Probabilistic hazard maps are used for making important risk mitigation decisions regarding building design, insurance rates, land use planning, and public policy issues that need to balance safety and economics. This map is the basis for the Working Group on California Earthquake Probabilities Uniform California Earthquake Rupture Forecast version 1.0 (UCERF 1.0) and will be used to compare current 2006 methods with future, more complex models. It is important that state-of-the-art science is incorporated in hazard maps that are used for public policy. Generally, hazard products should be updated regularly as new information on earthquake recurrence and ground shaking becomes available from the science community. Research on such important hazard topics as recurrence times and rupture histories of prehistoric earthquakes, magnitude-frequency distributions for individual faults, and the effects of shallow and deep site conditions on ground shaking will improve these maps in the future.

ACKNOWLEDGEMENTS

We acknowledge Mike Blanpied, Bill Ellsworth, and Ned Field for calculating the time-dependent hazard using the 2002 Working Group model and Ken Rukstales for producing GIS maps for Figure 1. Rob Wesson, Yuehua Zeng, Mark Stirling, and Ned Field provided helpful reviews of the manuscript.

APPENDIX A

For this paper we have calculated the time-dependent probabilities for time periods of 5, 10, 20, 30, and 50 years. For these calculations we have generally assumed a lognormal probability density function; the WG2002 report used a Brownian Passage Time model, which does not cause a significant difference from the lognormal distribution except for very long elapsed times since the previous earthquake. Following the WGCEP 1995 report, the density function f(t) has the following form:

f(t) = \frac{1}{t \, \sigma_{\ln t} \sqrt{2\pi}} \exp\left( -\frac{[\ln(t/\hat{\mu})]^2}{2 \sigma_{\ln t}^2} \right),   (B1)

where \hat{\mu} is the median of the recurrence distribution, \sigma_{\ln t} is the intrinsic sigma, and t is the time of interest. If \hat{\mu} and \sigma_{\ln t} are known, then the conditional time-dependent probability in the time interval (t_e, t_e + \Delta T) is given by:

P(t_e \le T \le t_e + \Delta T \mid T > t_e) = \frac{P(t_e \le T \le t_e + \Delta T)}{P(t_e \le T < \infty)},   (B2)

where t_e is the elapsed time and \Delta T is the time period of interest. A Poisson process follows the rule P = 1 - \exp(-rT), where P is the Poisson probability, r is the rate of earthquakes, and T is the time period of interest. To convert between probability P and rate r, we can use the formula:

r = -\ln(1 - P)/T.   (B3)

We calculate the probability and annualize this rate using the above formula.
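A minimal numerical sketch of equation (B2), using scipy's lognormal distribution (the parameter values are illustrative, not taken from Table 1):

```python
from scipy.stats import lognorm

def conditional_probability(median_yrs, sigma_ln, elapsed_yrs, window_yrs):
    """Conditional probability of rupture in (t_e, t_e + dT) given no
    rupture through t_e, for a lognormal recurrence model (eq. B2)."""
    dist = lognorm(s=sigma_ln, scale=median_yrs)  # scale = median
    num = dist.cdf(elapsed_yrs + window_yrs) - dist.cdf(elapsed_yrs)
    den = 1.0 - dist.cdf(elapsed_yrs)
    return num / den

# Illustrative values: 200-yr median recurrence, sigma 0.5, 149 yrs elapsed.
print(conditional_probability(200.0, 0.5, 149.0, 30.0))  # ~0.19
```

Equation (B3) would then convert the resulting conditional probability into an equivalent annual rate.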

APPENDIX B

If we denote the calculated time-dependent and time-independent (Poisson) probabilities for two single-segment rupture events as P_a^t, P_a^p, P_b^t, and P_b^p, the ratios R_a = P_a^t / P_a^p and R_b = P_b^t / P_b^p are sometimes called the probability gain or loss over the average Poisson probabilities. For a multi-segment (cascade) event involving these two segments, we also define the probability gain or loss as R_ab = P_ab^t / P_ab^p, in which the Poisson probability P_ab^p is known. Since P_ab^p already accounts for the conditional probability of multi-segment rupture, we further assume that the cascade event is triggered by independent rupture of one of the segments A or B. So R_ab = R_a if the cascade event starts from A, and R_ab = R_b if it starts from B. Assuming segment A is more likely to rupture in some future time period than segment B, then R_a > R_b, and the chance of a cascade event occurring must be smaller than the chance of A rupturing but larger than the chance of B rupturing. Therefore R_ab has to be smaller than R_a but larger than R_b if R_a > R_b, and vice versa. Considering that a cascade event can start from A or B with different likelihoods, we approximate R_ab by weighting R_a and R_b by P_a^t and P_b^t, their probabilities of rupture, resulting in the cascade-event ratio

R_ab = (P_a^t R_a + P_b^t R_b) / (P_a^t + P_b^t).

The physical basis for this weighting is that a multi-segment rupture has to start from one of the segments, and the segment with the higher probability is more likely to lead to or trigger a multi-segment event.

APPENDIX C

Example applications for calculating time-dependent rates

Models 1 and 3: In this section we show how the annual occurrence rates for a multi-segment rupture are calculated in Models 1 and 3. For our first example, we calculate the rate of a rupture that involves all five segments. Assuming a lognormal distribution, the time-dependent 30-year probabilities for the Coachella, San Bernardino, Mojave, and Carrizo segments are 0.325, 0.358, 0.342, and 0.442, with a corresponding value for the Cholame segment. The equivalent annual rates are calculated using the formula r = -\ln(1-p)/t, where p is the segment time-dependent probability in t (30 years). Each rate is divided by the Poissonian rate of the 2002 model, producing the probability gain for each segment; the gains for the five segments are 1.141, 1.918, 1.065, 1.690, and 1.114. The weighted gain for this 5-segment rupture, the average of the segment gains weighted by the segment probabilities, is 1.384. The final annual rate for this rupture is the Poissonian rate multiplied by this gain and by the 2002 model weight (0.4).

For Model 3, the cascading allows only 1857- and 1690-type events, whose recurrence times are 200 and 220 years respectively, which differ from the 2002 model. We follow the same steps as in T.D. Model 1 to calculate the time-dependent annual rates for the multi-segment ruptures, using the new Poissonian rates for the multi-segment events. After obtaining the time-dependent annual rates for the 1857 and 1690 multi-segment ruptures, we weight each of the Weldon et al. (2004) rupture scenarios included in the model.
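The Model 1 arithmetic can be reproduced with a short script. Note two assumptions: the Cholame 30-year probability (0.512 below) is a placeholder chosen to be consistent with the weighted gain quoted above, and the Poisson cascade rate is an arbitrary illustrative value.

```python
T = 30.0  # years

# Segment 30-yr time-dependent probabilities (Coachella, San Bernardino,
# Mojave, Carrizo, Cholame). The Cholame value 0.512 is an assumed
# placeholder, not a value preserved in this report.
p_td = [0.325, 0.358, 0.342, 0.442, 0.512]

# Per-segment probability gains quoted in Appendix C.
gains = [1.141, 1.918, 1.065, 1.690, 1.114]

# Weighted gain for the 5-segment cascade (Appendix B weighting).
gain = sum(p * g for p, g in zip(p_td, gains)) / sum(p_td)
print(f"weighted gain: {gain:.3f}")   # ~1.384, matching the text

# Final annual rate: Poisson cascade rate (assumed here) times the
# weighted gain times the 2002 model weight of 0.4.
r_poisson = 1.0e-3
print(f"final rate: {r_poisson * gain * 0.4:.2e} per year")
```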

Model 2: In the 2002 model 2, the Poissonian rates for the five segments differ from those in T.D. Model 1. We apply these different mean recurrence times, with the same elapsed times and intrinsic and parametric uncertainties, and calculate the time-dependent 30-year probabilities and their equivalent annual rates as in Model 1 for the Coachella, San Bernardino, Mojave, Carrizo, and Cholame segments. The Carrizo segment in T.D. Model 2 ruptures only in 1857-type events, so the time-dependent annual rate for 1857-type ruptures is defined as the rate for the Carrizo segment. The Cholame and Mojave segments are allowed in the 2002 model to rupture independently; the time-dependent rates for these two segments are their segment rates, converted from their 30-year probabilities, minus the rate of 1857-type events. The time-dependent rate for the Coachella and San Bernardino segments rupturing together has to be the smaller of the two segment rates. In the 2002 model, the San Bernardino segment is not allowed to rupture by itself; now, however, the difference between the San Bernardino segment rate and the rate for the San Bernardino and Coachella segments rupturing together defines the single-segment rupture rate on the San Bernardino segment.

REFERENCES

Abrahamson, N.A. and W.J. Silva (1997). Empirical response spectral attenuation relations for shallow crustal earthquakes, Seism. Res. Lett., v. 68, no. 1.
Algermissen, S.T. and D.M. Perkins (1982). A probabilistic estimate of maximum acceleration in rock in the contiguous United States, U.S. Geol. Surv. Open-File Rept.
Atkinson, G.M. and D.M. Boore (2003). Empirical ground-motion relations for subduction zone earthquakes and their application to Cascadia and other regions, Bull. Seism. Soc. Am., v. 93.
Boore, D.M., W.B. Joyner, and T.E. Fumal (1997). Equations for estimating horizontal response spectra and peak acceleration from western North American earthquakes: a summary of recent work, Seism. Res. Lett., v. 68.
Campbell, K.W. and Y. Bozorgnia (2003). Updated near-source ground motion (attenuation) relations for the horizontal and vertical components of peak ground acceleration and acceleration response spectra, Bull. Seism. Soc. Am., v. 93.
Cramer, C.H., M.D. Petersen, T. Cao, T.R. Toppozada, and M.S. Reichle (2000). A time-dependent probabilistic seismic-hazard model for California, Bull. Seism. Soc. Am., v. 90.
DeMets, C., R.G. Gordon, D.F. Argus, and S. Stein (1990). Current plate motions, Geophys. J. Int., v. 101.
Frankel, A. (1995). Mapping seismic hazard in the Central and Eastern United States, Seism. Res. Lett., v. 66, no. 4.
Frankel, A., C. Mueller, T. Barnhard, D. Perkins, E. Leyendecker, N. Dickman, S. Hanson, and M. Hopper (1996). National seismic-hazard maps: documentation June 1996, U.S. Geol. Surv. Open-File Rept.
Frankel, A.D., M.D. Petersen, C.S. Mueller, K.M. Haller, R.L. Wheeler, E.V. Leyendecker, R.L. Wesson, S.C. Harmsen, C.H. Cramer, D.M. Perkins, and K.S. Rukstales (2002). Documentation for the 2002 update of the national seismic hazard map, U.S. Geol. Surv. Open-File Rept.
Gardner, J.K. and L. Knopoff (1974). Is the sequence of earthquakes in southern California, with aftershocks removed, Poissonian?, Bull. Seism. Soc. Am., v. 64.
Petersen, M.D., W.A. Bryant, C.H. Cramer, T. Cao, N.S. Reichle, A.D. Frankel, J.J. Lienkaemper, P.A. McCrory, and D.P. Schwartz (1996). Probabilistic seismic hazard

assessment for the state of California, California Division of Mines and Geology Open-File Rept. and U.S. Geol. Surv. Open-File Rept.
Petersen, M.D., C.H. Cramer, M.S. Reichle, A.D. Frankel, and T.C. Hanks (2000). Discrepancy between earthquake rates implied by historic earthquakes and a consensus geologic source model for California, Bull. Seism. Soc. Am., v. 90.
Petersen, M.D., C.H. Cramer, and A.D. Frankel (2002). Simulations of seismic hazard for the Pacific Northwest of the United States from earthquakes associated with the Cascadia subduction zone, Pure Appl. Geophys., v. 159.
Sadigh, K., C.Y. Chang, J. Egan, F. Makdisi, and R. Youngs (1997). Attenuation relationships for shallow crustal earthquakes based on California strong motion data, Seism. Res. Lett., v. 68.
Spudich, P., W.B. Joyner, A.G. Lindh, D.M. Boore, B.M. Margaris, and J.B. Fletcher (1999). SEA99: A revised ground motion prediction relation for use in extensional tectonic regimes, Bull. Seism. Soc. Am., v. 89.
Stirling, M. and M. Petersen (2005). Comparison of intensity data with seismic hazard models in the U.S. and New Zealand, preprint.
Toppozada, T., D. Branum, M. Petersen, C. Hallstrom, C. Cramer, and M. Reichle (2000). Epicenters of and areas damaged by M 5 and greater California earthquakes, California Division of Mines and Geology Map Sheet 49.
Weldon, R., K. Scharer, T. Fumal, and G. Biasi (2004). Wrightwood and the earthquake cycle: what a long recurrence record tells us about how faults work, GSA Today, v. 14, no. 9, p. 8.
Wells, D.L. and K.J. Coppersmith (1994). New empirical relationships among magnitude, rupture length, rupture width, and surface displacements, Bull. Seism. Soc. Am., v. 84.
Working Group on California Earthquake Probabilities (1988). Probabilities of large earthquakes occurring in California on the San Andreas fault, U.S. Geol. Surv. Open-File Rept.
Working Group on California Earthquake Probabilities (1990). Probabilities of large earthquakes in the San Francisco Bay region, California, U.S. Geol. Surv. Circ.
Working Group on California Earthquake Probabilities (1995). Seismic hazards in southern California: probable earthquakes, 1994 to 2024, Bull. Seism. Soc. Am., v. 85.
Working Group on California Earthquake Probabilities (1999). Earthquake probabilities in the San Francisco Bay region: 2000 to 2030 - a summary of findings, U.S. Geol. Surv. Open-File Rept.
Working Group on California Earthquake Probabilities (2003). Earthquake probabilities in the San Francisco Bay region: 2002-2031, U.S. Geol. Surv. Open-File Rept.
Youngs, R.R., S.J. Chiou, W.J. Silva, and J.R. Humphrey (1997). Strong ground motion attenuation relationships for subduction zone earthquakes, Seism. Res. Lett., v. 68, no. 1.

Figure 1: Locations and names of A-faults contained in the source model.

Figure 2: Time-independent (Poisson) map for rock site condition and a 10% probability of exceedance in 30 years. This map was developed from the 2002 national seismic hazard model but also includes the new Poisson model for T.I. Model 3.

Figure 3: Time-dependent map for rock site condition and a 10% probability of exceedance in 30 years. This map was developed by equally weighting three time-dependent models (T.D. Models 1, 2, and 3).

Figure 4: Ratio of the time-dependent map (Figure 3) to the time-independent map (Figure 2) for rock site conditions and a 10% probability of exceedance in 30 years.

Figure 5: Hazard curves showing annual frequency of exceedance versus peak ground acceleration at three sites for the time-independent (Poisson) and time-dependent models. The annual frequency of exceedance is obtained by taking the 30-year probability and calculating the equivalent annual frequency of exceedance as shown in Appendix B.

TABLE 1: Parameters used in the time-dependent analysis
Columns: T (mean), T (median), Sigma-P, Sigma-T, Last Event, Elapse Time, P in 30 yrs

SOUTHERN SAN ANDREAS FAULT
Model 1: SAF - Coachella seg.; SAF - San Bernardino seg.; SAF - Mojave seg.; SAF - Carrizo seg.; SAF - Cholame seg.
Model 2: SAF - Coachella seg.; SAF - San Bernardino seg.; SAF - Mojave seg.; SAF - Carrizo seg.; SAF - Cholame seg.
Model 3: Same as Model 1 for single segments; SAF - Parkfield seg.

ELSINORE FAULT
Whittier; Elsinore - Glen Ivy seg.; Elsinore - Temecula seg.; Elsinore - Julian seg.; Elsinore - Coyote Mtn. seg.; Laguna Salada

SAN JACINTO FAULT
SJF - San Bernardino seg.; SJF - San Jacinto Valley seg.; SJF - Anza seg.; SJF - Coyote Creek seg.; SJF - Borrego seg.; SJF - Superstition Mtn. seg.; SJF - Superstition Hills seg.; Imperial

CASCADIA SUBDUCTION ZONE
Cascadia megathrust - mid (M 9.0); Cascadia megathrust - top (M 9.0); Cascadia megathrust - bottom (M 9.0); Cascadia megathrust - old (M 9.0)

TABLE 2: Probabilities calculated for different time periods
Columns: Fault, 5-year, 10-year, 20-year, 30-year, and 50-year probabilities

NORTHERN SAN ANDREAS FAULT
SAF - Santa Cruz seg.; SAF - Peninsula seg.; SAF - North Coast seg. (so.); SAF - North Coast seg. (no.); SAF - Santa Cruz & Peninsula; SAF - Peninsula & North Coast (so.); SAF - North Coast seg. (so. & no.); SAF - Santa Cruz, Peninsula & North Coast (so.); SAF - Peninsula & North Coast (so. & no.); SAF Rupture; SAF Rupture (floating)

HAYWARD-RODGERS CREEK
Hayward (southern); Hayward (northern); Hayward (so. & no.); Rodgers Creek; Hayward (no.)-Rodgers Creek; Hayward (so. & no.)-Rodgers Creek; Hayward-Rodgers Creek (floating)

CALAVERAS FAULT
Calaveras (southern); Calaveras (central); Calaveras (so. & cent.); Calaveras (northern); Calaveras (cent. & no.); Calaveras (so., cent. & no.); Calaveras (entire, floating); Calaveras (so. & cent., floating)

CONCORD-GREEN VALLEY FAULT
Concord; Green Valley (southern); Concord-Green Valley (so.); Green Valley (northern); Green Valley (so. & no.); Concord-Green Valley (entire); Concord-Green Valley (floating)

SAN GREGORIO FAULT
San Gregorio (southern); San Gregorio (northern); San Gregorio (so. & no.); San Gregorio (floating)

GREENVILLE FAULT
Greenville (southern); Greenville (northern); Greenville (so. & no.); Greenville (floating)

Mt. Diablo Thrust

SOUTHERN SAN ANDREAS FAULT
Model 1: SAF - Coachella seg.; SAF - San Bernardino seg.; SAF - Mojave seg.; SAF - Carrizo seg.; SAF - Cholame seg.
Model 2: SAF - Coachella seg.; SAF - San Bernardino seg.; SAF - Mojave seg.; SAF - Carrizo seg.; SAF - Cholame seg.; SAF - Parkfield seg.

ELSINORE FAULT
Whittier; Elsinore - Glen Ivy seg.; Elsinore - Temecula seg.; Elsinore - Julian seg.; Elsinore - Coyote Mtn. seg.; Laguna Salada

SAN JACINTO FAULT
SJF - San Bernardino seg.; SJF - San Jacinto Valley seg.; SJF - Anza seg.; SJF - Coyote Creek seg.; SJF - Borrego seg.; SJF - Superstition Mtn. seg.; SJF - Superstition Hills seg.; Imperial

CASCADIA SUBDUCTION ZONE
Cascadia megathrust - mid (M 9.0); Cascadia megathrust - top (M 9.0); Cascadia megathrust - bottom (M 9.0); Cascadia megathrust - old (M 9.0)

Section II

A Synoptic View of Southern San Andreas Fault Paleoseismic Constraints

by Ray Weldon

The seismic hazard associated with the Southern San Andreas fault has traditionally been based on conceptual recurrence and segmentation models parameterized by the locally available paleoseismic data (Working Groups on California Earthquake Probabilities). While this may be the best way to infer hazard, we are exploring an alternative approach that attempts to determine hazard directly from the data, without appealing to simple models, or at least one that expands the range of possible models allowed by the paleoseismic data. To accomplish this goal we have gathered all of the existing paleoseismic data from the Southern San Andreas fault, including complete probability density functions that describe the range of possible ages for the events, the coseismic displacement associated with as many ruptures as possible, and the slip rate across the fault. From this data set we explore the range of possible fault behaviors by constructing scenarios from the data. To date these data have only been qualitatively interpreted. Figure 1 shows three scenarios that have been discussed by Weldon et al. (2004, GSA Today 14, pp. 4-10; and 2005, Science, reproduced here in Appendix A). Scenario (A) describes the data with highly periodic, alternating north and south Southern San Andreas ruptures, each of characteristic size. Scenario (B) attempts to explain the data with as great a variety of ruptures as possible, and (C) describes the data as a combination of long ruptures that span the entire Southern San Andreas fault, with 1812-type earthquakes in between to satisfy the paleoseismic data. Obviously, many more possibilities exist.

To fully explore the range of rupture scenarios consistent with the paleoseismic data, and to generate scenarios objectively, we have automated the process. We link sites as one might string pearls: first one site, then one and its neighbor, and so on, to include all adjoining site linkages (see Figure 2). We accommodate dating uncertainty by allowing a rupture to include a site even though the original record did not report an event at that time. We apply a likelihood penalty to such ruptures, but this approach keeps absent or incorrect event information at an individual site from trumping a rupture otherwise favored by adjoining paleoseismic records. Rupture likelihood also considers consistency with dating and surface-displacement evidence. Scenarios, which we call possible fault histories, are constructed by drawing from the pool of all possible ruptures until all reported events from all the sites have been included. Each rupture history can thus be regarded, with greater or lesser probability, as what might have happened, given all available evidence. We develop likelihoods among rupture histories based on the combined likelihood of their contributing ruptures. By

generating thousands of histories and keeping the most likely ones, we obtain chronologies, locations, and rupture lengths that can be translated into a history of ground shaking near the San Andreas fault and evaluated for their seismic-hazard implications. Two examples are shown in Figure 3.

Figure 1 - Three possible rupture sequences on the southern San Andreas fault. Vertical colored bars are 95% confidence ranges in age for earthquakes at the sites listed at the lower margin, and horizontal bars are rupture extent. Open boxes represent multiple-event age ranges; the individual event ages are unknown. Grey shading indicates regions and times with no data. (A) In this

model, the data are interpreted in terms of north and south ruptures with substantial overlap, and the 1812 event is anomalous. (B) A random-looking distribution of event timing and rupture lengths is constructed to fit the data. (C) Wall-to-wall ruptures are used to explain the data with a minimum number of earthquakes; in this scenario additional small earthquakes, like 1812, are necessary to explain the data, and 1857 was anomalously short. Site abbreviations (see appendices for references): PK = Parkfield; LY = Las Yeguas, Young et al. (2002); CP = Carrizo Plain, integration of Liu et al. (2004), Sims (1994), Grant and Sieh (1994); FM = Frazier Mountain, Lindvall et al. (2002); 3P = Three Points, reinterpretation of Rust (1982); LR = Littlerock, reinterpretation of Schwartz and Weldon (1986); PC = Pallett Creek, Salyards et al. (1992), Biasi et al. (2002), Sieh et al. (1989); WW = Wrightwood, Biasi et al. (2002), Fumal et al. (2002b), Weldon et al. (2002); CC = Cajon Creek, Weldon and Sieh (1985); PT = Pitman Canyon, Seitz et al. (2000), G. Seitz et al. (2003, personal commun.); PL = Plunge Creek, McGill et al. (2002); BF = Burro Flats, Yule and Sieh (2001), D. Yule and K. Sieh (2003, personal commun.); TP = Thousand Palms Oasis, Fumal et al. (2002a); IO = Indio, reinterpretation of Sieh (1986), Sieh and Williams (1990), Fumal et al. (2002a); SC = Salt Creek, Williams (1989), P.L. Williams (2003, personal commun.).

Figure 2 - The left panel shows a hypothetical set of overlapping event ages from 4 sites. All possible ruptures that include these 4 sites can be constructed by progressively linking the sites, beginning with 1 earthquake per site, up to a single earthquake that spans all 4 sites. The right panel shows three possible results for 4 earthquakes at each of 4 sites. Each possible history is then weighted according to its consistency with the available age control, displacement per event, and fault slip rate/moment rate.
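The linking scheme just described is easy to prototype. Below is a minimal sketch in Python, under stated assumptions: the site list, event-age windows, penalty weight, and likelihood rule are all hypothetical placeholders, not the actual data or code used in this study. It enumerates every "string of pearls" (contiguous run of sites) and down-weights runs that include a site with no reported event in the trial age window.

```python
import itertools

# Ordered paleoseismic sites along the fault (hypothetical subset).
SITES = ["PK", "CP", "3P", "PC", "WW", "PT", "BF", "IO"]

# Hypothetical event reports: site -> list of (age_min, age_max) windows, years AD.
EVENTS = {
    "PK": [(1855, 1860)], "CP": [(1855, 1860), (1460, 1600)],
    "3P": [(1810, 1815)], "PC": [(1810, 1815), (1460, 1560)],
    "WW": [(1810, 1815), (1460, 1560)], "PT": [(1660, 1720)],
    "BF": [(1660, 1720)], "IO": [(1650, 1720)],
}

MISS_PENALTY = 0.1  # likelihood factor when a linked site reported no event


def contiguous_runs(sites):
    """All 'strings of pearls': every run of one or more adjacent sites."""
    for start, end in itertools.combinations(range(len(sites) + 1), 2):
        yield sites[start:end]


def rupture_likelihood(run, age_lo, age_hi):
    """Score a candidate rupture spanning `run` during [age_lo, age_hi].

    Sites whose records contain an overlapping event window contribute a
    factor of 1; sites with no matching event contribute MISS_PENALTY, so
    absent evidence at one site cannot veto a rupture favored by neighbors.
    """
    like = 1.0
    for site in run:
        hit = any(lo <= age_hi and hi >= age_lo for lo, hi in EVENTS.get(site, []))
        like *= 1.0 if hit else MISS_PENALTY
    return like


if __name__ == "__main__":
    candidates = [(run, rupture_likelihood(run, 1805, 1820))
                  for run in contiguous_runs(SITES)]
    candidates.sort(key=lambda rl: -rl[1])
    for run, like in candidates[:5]:
        print("-".join(run), round(like, 3))
```

A full scenario generator would repeatedly draw from this pool of scored ruptures until every reported event is accounted for, multiplying the likelihoods of the drawn ruptures to score each history.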

Figure 3 - Two possible earthquake histories generated by a preliminary version of our scenario generator, with the rupture lengths of 1857, 1812, and the immediately prehistoric ~1690 event shown for comparison. (Figure annotations: the upper part of the record cannot be generated until we combine historical and C-14 data; before about 600 AD there is not enough data to compare sites.)

APPENDIX A

PERSPECTIVES: GEOPHYSICS

Past and Future Earthquakes on the San Andreas Fault

Ray J. Weldon, Thomas E. Fumal, Glenn P. Biasi, Katherine M. Scharer*

The San Andreas fault is one of the most famous and, because of its proximity to large population centers in California, one of the most dangerous earthquake-generating faults on Earth. Concern about the timing, magnitude, and location of future earthquakes, combined with convenient access, has motivated more research on this fault than on any other. In recent years, an increasing number of sites along the fault have provided evidence for prehistoric earthquakes (1, 2). Damaging earthquakes are generated by ruptures that can span hundreds of kilometers on a fault. Data from many sites must therefore be integrated into "rupture scenarios": possible histories of earthquakes that include the date, location, and size (length of fault rupture) of all earthquakes on a fault during a period of time. Recently, rupture scenarios for the southern San Andreas fault have stimulated interest in how different scenarios affect interpretations of seismic hazard and underlying models of earthquake recurrence behavior.

Large earthquakes occur infrequently on individual faults. Scientists therefore cannot test recurrence models for damaging earthquakes by waiting for a series of large earthquakes to occur or by consulting instrumental records, which span at most 100 years. Records of large earthquakes must instead be dug out of the geologic record to characterize earthquakes that predate the instrumental record. Such studies tend to provide samples of the date and ground displacement at isolated sites along the ruptures, hundreds of kilometers long, caused by large paleoearthquakes. Key insights into fault recurrence behavior have been gained from site-specific data on the southern San Andreas fault (3, 4). However, measurements of the date and displacement often vary considerably between sites. Further advances in understanding the San Andreas fault will require the construction of rupture scenarios. Given the large body of data and recent advances in interpretive methodology, this goal is now within reach for the southern San Andreas fault.

Ruptures on the southern San Andreas fault. Lines are probability density functions for the dates of individual earthquakes, colored by site; the 1812 and 1857 earthquakes have exact dates. Peaks and valleys in the smoothed sum of the individual probability density functions suggest that large parts

of the fault rupture every ~200 years in individual large earthquakes or series of a few earthquakes. See (2) for site locations and data sources.

To date, 56 dates of prehistoric earthquakes have been published, based on data from 12 sites on the southern 550 km of the San Andreas fault. There are also about 10 paleoseismic records for the earthquakes of 1857 and 1812. Analysis of these data (4, 5) yields probability density functions of the dates of the earthquakes (see the first figure). These date distributions provide the raw material for correlating ruptures from site to site and the first step toward constructing a history of large earthquakes. Unfortunately, rupture scenarios based on earthquake date alone are poorly constrained and support a wide range of earthquake models. The existing data can be explained by highly periodic overlapping ruptures (see the second figure, top panel), randomly distributed ruptures (middle panel), and even repeated ruptures spanning the entire southern San Andreas fault (bottom panel). Each model implies a different level of hazard to the Los Angeles region (see the figure legend, second figure) and supports a different physical model of faulting (2).

Strong overlap of event dates (see the first figure) may occur when many sites along the fault record the same earthquake or when a sequence of earthquakes occurs within years or decades. Poor or no overlap may indicate earthquakes with lesser rupture extent, or errors in the dating and interpretation of paleoseismic data. Given the rupture lengths of the 1812 and 1857 earthquakes (~150 and 300 km, respectively) and the lack of substantial rupture in the 148 years since 1857, most scientists doubt the possibility of frequent small ruptures on the southern San Andreas fault. Three recent developments strengthen the hypothesis that the fault breaks in relatively infrequent, large earthquakes.

Cartoon of rupture scenarios. Black boxes denote paleoearthquake dates at sites along the fault. Black horizontal bars show the extents of the 1857 and 1812 ruptures. Three scenarios accommodate all dates. (Top) Ruptures spanning the northern two-thirds of the fault (like the 1857 earthquake) alternate with shorter ruptures centered on the southern third. This model yields a conditional probability of earthquake recurrence of ~70% in the next 30 years, largely due to the long time since a southern event. (Middle) Ruptures of variable length recur randomly. This model yields a conditional probability of ~40% in the next 30 years, assuming Poisson behavior. (Bottom) Long ruptures (violet) span most of the fault, with small additional ruptures (like the 1812 earthquake) (orange) to satisfy all dates. This model yields a conditional probability of ~20% in the next 30 years, assuming quasi-periodic behavior of short and long events.

First, the relationship between a displacement observed at a site and the probability of seeing the same rupture at the next site along the fault has been quantified (6). Commonly observed displacements of 4 to 7 m (7, 8) imply rupture lengths of more than 100 km (9), much more than the distances between paleoseismic sites. Date ranges from nearby sites that overlap poorly are thus likely in error. Second, different chemical, physical, and biological fractions of materials such as peat and charcoal yield very different radiocarbon dates (10-12). Because the type of material varies between sites, overlap of dates may be imperfect even if a single rupture spans the sites. Third, careful documentation of evidence from multiple excavations (8, 10-12) shows a wide range in the quality of event evidence from excavation to excavation and site to site. Thus, the evidence for some paleoearthquakes may have been misinterpreted.

A much clearer picture of earthquakes on the southern San Andreas fault should emerge in the next 5 to 10 years. The groups of earthquake date ranges seen every ~200 years in the first figure will probably withstand this reevaluation. Some of these groups contain a single earthquake that ruptured through many sites and may have ruptured large parts of the southern San Andreas fault. Others contain multiple earthquakes at individual sites and could be multiple earthquakes with overlapping ruptures, like the 1812 and 1857 earthquakes (see the second figure). The current 148-year hiatus is probably not exceptional. However, no lull in the past 1600 years appears to have lasted more than ~200 years, and when the current hiatus ends, a substantial portion of the fault is likely to rupture, either as a single long rupture or as a series of overlapping ruptures in a short time interval.

References and Notes
1. L. Grant, W. R. Lettis, Eds., Special issue on The Paleoseismology of the San Andreas Fault, Bull. Seismol. Soc. Am. 92 (2002).
2. R. J. Weldon, K. M. Scharer, T. E. Fumal, G. P. Biasi, GSA Today 14, 4 (September 2004).
3. K. Sieh, M. Stuiver, D. Brillinger, J. Geophys. Res. 94, 603 (1989).
4. G. P. Biasi, R. J. Weldon, T. E. Fumal, G. G. Seitz, Bull. Seismol. Soc. Am. 92, 2761 (2002).
5. At four further sites, no exact dates can be determined for individual earthquakes, but the data can be related to other paleoearthquakes and thus help constrain the overall rupture scenarios.
6. G. P. Biasi, R. J. Weldon, Bull. Seismol. Soc. Am., in press.
7. J. Liu, Y. Klinger, K. Sieh, C. M. Rubin, Geology 32, 649 (2004).
8. R. J. Weldon et al., Bull. Seismol. Soc. Am. 92, 2704 (2002).
9. D. L. Wells, K. J. Coppersmith, Bull. Seismol. Soc. Am. 84, 974 (1994).

10. T. E. Fumal et al., Bull. Seismol. Soc. Am. 92, 2726 (2002).
11. G. G. Seitz, thesis, University of Oregon (1999).
12. K. M. Scharer, thesis, University of Oregon (2005).

*R. J. Weldon and K. M. Scharer are in the Department of Geological Sciences, University of Oregon, Eugene, OR 97403, USA. T. E. Fumal is with the Earthquake Hazards Team, U.S. Geological Survey, Menlo Park, CA 94025, USA. G. P. Biasi is at the Seismological Laboratory, University of Nevada, Reno, NV 89557, USA.

A Framework for Developing a Time-Dependent Earthquake Rupture Forecast

by Edward (Ned) Field

Introduction

This document outlines an attempt to establish a simple framework that can accommodate a variety of approaches to modeling time-dependent earthquake probabilities. There is an emphasis on modularity and extensibility, so that simple, existing approaches can be applied now, yet more sophisticated approaches can also be added later. A primary goal is to relax the assumption of persistent rupture boundaries (segmentation) and to allow fault-to-fault jumps in a time-dependent framework (no solution to this problem has previously been articulated, at least not as far as the author and other members of the WGCEP are aware).

Long-Term Earthquake Rate Model

The purpose of this model is to give the long-term rate of all possible earthquake ruptures (magnitude, rupture surface, and average rake) for the entire region. Of course "all possible" might include an infinite number, so what we mean is all those at some level of discretization (and above some magnitude cutoff) that is sufficient for hazard and loss estimation. Although a tectonic region may evolve (with old faults healing and new faults being created), the concept of a long-term model is legitimate in that there will be some absolute rate of ruptures over any given time span. By "long" we simply mean long enough to capture the statistics of events relevant to the forecast duration, and short enough that the system does not significantly change.

Known faults provide a means of identifying where future ruptures will occur. Given the fault slip rate, seismogenic depths, and knowledge or assumptions about the spatial extent of ruptures and/or magnitude-frequency distributions, it is possible to solve for the relative rate of all ruptures on each fault. We can write the rate of these discrete events as:

FR_{f,m,r} = rate of the r-th rupture of the m-th magnitude on the f-th fault

(where the possible rupture surfaces will depend on the magnitude). Of course known faults do not provide the location of all possible ruptures, so we need to account for off-fault seismicity as well (often referred to as "background" seismicity). Such seismicity is usually modeled with a grid of Gutenberg-Richter (GR) sources, where the a-values and perhaps b-values are spatially variable. Regardless of the magnitude-frequency distribution, we can write the rate of background seismicity as:

BR_{i,j,m} = background rate of the m-th magnitude at the i-th latitude and j-th longitude

An example of background seismicity from the National Seismic Hazard Mapping Program 1996 model (NSHMP, 1996) is shown in Figure 1 along with their fault ruptures. The

spatial variation in their background rates is determined from smoothed historical seismicity, although stressing rates from a deformation model could be used as well. One problem with using gridded-seismicity models is that they don't explicitly provide a finite rupture surface for the events, as needed for seismic-hazard analysis (SHA); rather, they provide hypocentral locations (and treating ruptures as point sources underestimates hazard). One must, therefore, either construct and consider all possible finite rupture surfaces, or assign a single surface at random, as done in the NSHMP 1996 and 2002 models (NSHMP, 1996 & 2002). This highlights one of the primary advantages of fault-based models, as they provide valuable information about rupture surfaces.

Figure 1. Fault (red) and background-grid (gray) rupture sources from the 1996 NSHMP model for southern California (the darkness of the grid points is proportional to the a-value of the background GR seismicity).
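To illustrate the point-source problem just described, the following sketch assigns a finite rupture trace to a gridded hypocenter, roughly in the spirit of the random-surface assignment mentioned above: a length from a magnitude-length scaling relation and a randomly drawn strike. The scaling coefficients and the random-strike rule are illustrative placeholders, not the NSHMP's exact parameters.

```python
import math
import random

def rupture_length_km(mag, a=-2.44, b=0.59):
    """Rupture length from a Wells-Coppersmith-style scaling relation,
    log10(L) = a + b*M. Coefficients here are illustrative placeholders."""
    return 10.0 ** (a + b * mag)

def finite_trace(lat, lon, mag, rng=random.Random(0)):
    """Assign a straight finite rupture trace, centered on the grid point,
    with a randomly drawn strike."""
    half_len = 0.5 * rupture_length_km(mag)
    strike = rng.uniform(0.0, 360.0)
    # Convert along-strike offsets to degrees (small-offset approximation).
    dlat = half_len * math.cos(math.radians(strike)) / 111.0
    dlon = half_len * math.sin(math.radians(strike)) / \
        (111.0 * math.cos(math.radians(lat)))
    return (lat - dlat, lon - dlon), (lat + dlat, lon + dlon)

if __name__ == "__main__":
    print(finite_trace(34.0, -118.0, 6.5))  # endpoints of an M 6.5 trace
```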

Relaxing the Assumption of Fixed Rupture Boundaries

Previous WGCEPs have assumed, at least for time-dependent probabilities, that faults are segmented (an example for the Hayward/Rodgers Creek fault from the 2002 WGCEP (WGCEP, 2003) is given below in Figure 2). This means that ruptures can occur on one or perhaps more segments, but cannot occur on just part of any segment. Previous models have also precluded ruptures that jump from one fault to another, as occurred in both the 1992 M 7.3 Landers and the 2002 M 7.9 Denali earthquakes. For the latter, rupture began on the Susitna Glacier fault, jumped onto the Denali fault, and then jumped off onto the Totschunda fault. The following outlines a recipe for building a model that relaxes these assumptions. It is basically a hybrid between the models outlined by Field et al. (1999) and Andrews and Schwerer (2000), although the latter deserves most of the credit.

We start with a model of fault sections and slip rates, such as that shown in Figure 1 (although applied statewide here). Note that fault sectioning has been applied only for descriptive purposes, and should not be interpreted as rupture segmentation. The first task is to subsection these faults into smaller increments (~5 km lengths) such that further sub-sectioning would not influence SHA (from here on we'll refer to these ~5 km subsections as "sections"). Following Andrews and Schwerer (2000), but for a larger region and using smaller fault sections, we then define every possible earthquake rupture involving two or more contiguous sections (at least two because we consider only events that rupture the entire seismogenic thickness). We use an indicator matrix, I_{mi}, containing 1s and 0s to denote whether the i-th section is involved in the m-th rupture (1 if yes and 0 otherwise). The M ruptures so defined can include those that involve fault-to-fault jumps if deemed appropriate (e.g., for faults that are separated by less than 10 km). We then use the slip rate on each section to constrain the rate of each rupture:

\sum_{m=1}^{M} I_{mi} u_m f_m = r_i

where f_m is the desired long-term rate of the m-th rupture, u_m is the average slip of the m-th rupture (e.g., obtained from a magnitude-area relationship), and r_i is the known slip rate of the i-th section. This system of equations can be solved for all f_m. However, it is underdetermined, so an infinite number of solutions exist. Our task now is simply to add equations in order to achieve a unique solution, or at least to significantly narrow the solution space. Again following Andrews and Schwerer (2000), we can add positivity constraints for all f_m, as well as equations representing the constraint that rates for the region as a whole must exhibit a Gutenberg-Richter distribution (taking into consideration uncertainties in the latter at the largest magnitudes). This is the extent to which Andrews and Schwerer (2000) took their model, and they concluded that additional constraints would be needed to get reliable estimates of the rate of any particular event. It is doubtful that our extension of their San Francisco Bay Area model to the entire state (including significantly smaller fault sections) will change that conclusion. Therefore, we need to apply whatever additional constraints we can find to narrow the solution space as much as possible. One approach is to penalize any ruptures that are thought to be less probable. For example, there might be reasons to believe that ruptures do not pass certain points on a fault. Such segmentation could easily be imposed. In addition, dynamic rupture modeling might be able to support the assertion that certain fault-to-fault rupture geometries are less probable.
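To make the inversion concrete, here is a minimal sketch for a single toy fault, under stated assumptions: equal-area sections, placeholder slip rates, a placeholder slip-versus-length function, and only the slip-rate and positivity constraints. The regional Gutenberg-Richter rows and any penalty equations would be appended as additional weighted rows of the design matrix. This is an illustration of the bookkeeping, not the actual WGCEP implementation.

```python
import numpy as np
from scipy.optimize import nnls

N_SEC = 8                    # ~5 km fault sections along one toy fault
r = np.full(N_SEC, 0.02)     # section slip rates (m/yr); placeholder values

# Enumerate ruptures: every contiguous run of >= 2 sections.
ruptures = [(s, e) for s in range(N_SEC) for e in range(s + 2, N_SEC + 1)]

def avg_slip(n_sections):
    """Placeholder for a magnitude-area-based average slip (m)."""
    return 1.0 + 0.5 * n_sections

# Build the design matrix A, where A[i, m] = I[i, m] * u_m so that A @ f = r.
A = np.zeros((N_SEC, len(ruptures)))
for m, (s, e) in enumerate(ruptures):
    A[s:e, m] = avg_slip(e - s)

# Non-negative least squares enforces the positivity constraint and returns
# one of the infinitely many feasible solutions; extra constraint rows
# (regional GR budget, paleoseismic rates, penalized rupture endpoints)
# would be appended to A and r with appropriate weights.
f, misfit = nnls(A, r)

for (s, e), rate in zip(ruptures, f):
    if rate > 1e-12:
        print(f"sections {s}-{e - 1}: rate = {rate:.2e} /yr")
print("misfit:", misfit)
```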

It remains to be seen how best to apply such constraints in the inversion, but one simple approach would be to force these rates to be some fraction of their otherwise unconstrained values. We can also add equations representing observed rupture rates from historical seismicity (e.g., Parkfield) or from paleoseismic studies at specific fault locations. Furthermore, the systematic interpretation of all southern San Andreas data by Weldon et al. (2005) will be particularly important to include. Finally, assumptions could be made (and associated constraints applied) as to the magnitude-frequency distribution of particular faults (e.g., Field et al. (1999) assumed a percentage of characteristic versus Gutenberg-Richter events on all faults, and tuned this percentage to match the regional rates). However, to the extent that fault-to-fault jumps are allowed, it becomes difficult to define exactly what "the fault" is when it comes to an assumed magnitude-frequency distribution.

Also following Field et al. (1999), off-fault seismicity can be modeled with a Gutenberg-Richter distribution that makes up for the difference between the regional moment rate and the fault-model moment rate, and where the maximum magnitude is uniquely defined by matching the total regional rate of events above the lowest magnitude.

The important point is that we define all possible fault-rupture events (with no a priori segment boundaries and allowing fault-to-fault jumps) and solve for the rates via a linear inversion that applies all presently available constraints (including strict segmentation if appropriate). We will thereby have a formal, tractable, mathematical framework for adding additional constraints when they become available in the future. One might be concerned that the solution space will still be large after applying all available constraints. However, this is exactly what we need to know for SHA, as the solution space will represent all viable models, which can and should be accommodated as epistemic uncertainties in the analysis. The size of this inversion may be problematic, although Tom Jordan says it's no big deal.

In conclusion, the long-term model simply represents an answer to the question: over a long period of time, what is the rate of every possible discrete rupture? There will, of course, be strong differences of opinion on what such a model should look like, especially when it comes to whether strict fault segmentation is enforced and whether and how different faults can link up to produce even larger events. However, the fact that there is some long-term rate of events cannot be disputed, especially if we specify what that "long" time span actually is. The question then becomes: given a viable long-term model (of which there will be many, for the reasons just stated), how do we make a credible time-dependent forecast for a future time span?

Rate Changes in the Long-Term Model

Just as everyone would agree that there is some rate of events over a long period of time (given by the long-term model), so too would they agree that these rates vary to some degree on shorter time scales. Because large events occur infrequently, we are only able to demonstrate statistically significant rate changes for all events above some magnitude threshold, which we will take as M = 5 here, since this is the threshold of interest to hazard.
Suppose we have a model that can predict the average rate change for M ≥ 5 events, relative to the long-term model, over a specified time span and as a function of space:

ΔN_{i,j}(timespan)

where i and j are latitude and longitude indices, and timespan includes both a start time and a duration. Then we can simply apply these rate changes to all ruptures in the long-term model in order to make a time-dependent forecast for that specific time span. The only slightly tricky issue is how rate changes are assigned to large fault ruptures. If we assume a uniform probability distribution of hypocenters, then we can simply apply the average ΔN_{i,j} predicted over the entire rupture surface.

The obvious assumption here is that the change in rate of M ≥ 5 events implies an equivalent change in the rate of larger events. This seems reasonable in that large events must start out as small events, so an increase in the rate of smaller events implies an increased probability of large-event nucleation. This is precisely the assumption made in the empirical model applied by the 2002 WGCEP (where all long-term rates were scaled down by an amount determined by temporal extrapolation of the post-1906 regional seismicity-rate reduction). This assumption is also implicit in the time-dependent models of Stein et al. (e.g., 1997), and in the Short-Term Earthquake Probability (STEP) model of Gerstenberger et al. (2005), which uses foreshock/aftershock statistics. The wide application of this assumption therefore implies that it is certainly a viable approach for time-dependent earthquake forecasts.

Of course the challenge is to come up with credible models of ΔN_{i,j}(timespan). Here again, there is likely to be a wide range of options (three having just been mentioned in the previous paragraph). Fortunately we can, in principle, apply any that are available, including alarm-based predictions (where a declaration is made that an earthquake will occur in some polygon by some date), as long as the probability of being wrong is quantified. Care will be needed to make sure the ΔN_{i,j}(timespan) model is consistent with the long-term model (e.g., moment rates must balance over the long term; stated another way, no double counting). For models such as STEP that predict changes that are a function of magnitude as well (e.g., by virtue of b-value changes), rates should be modified on a magnitude-by-magnitude basis using ΔN_{i,j,m}(timespan) (where a subscript for magnitude has been added). The advantage here, relative to the current STEP implementation, is that the upper magnitude cutoff is not arbitrary and the largest ruptures are not treated as point sources, but rather come from the fault-based ruptures defined in the long-term model.

Some might question the wisdom of applying the time-dependent modifications of the background model described here. The answer comes down to whether, after considering the use of the final outcome and potential problems with the model, we are better off applying it than using only Poisson probabilities. For example, if the California Earthquake Authority (CEA) wants forecasts updated on a yearly basis, should we or should we not include foreshock/aftershock statistics? We obviously have to apply the approach presented here to some extent if we are to model the post-1906 seismicity lull as was done by the 2002 WGCEP.
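The bookkeeping for applying ΔN_{i,j}(timespan) is simple. The sketch below uses a toy rate-change grid and a hypothetical rupture footprint, and implements the uniform-hypocenter averaging assumption stated above.

```python
import numpy as np

# Toy rate-change field: delta_N[i, j] = rate gain for M >= 5 events in cell
# (i, j) over the forecast timespan (1.0 means no change from the long-term
# model). Values are placeholders.
delta_N = np.ones((10, 10))
delta_N[4:7, 4:7] = 1.8   # e.g., an aftershock-driven rate increase

def time_dependent_rate(long_term_rate, cells):
    """Scale a rupture's long-term rate by the mean gain over the grid
    cells covered by its rupture surface (uniform-hypocenter assumption)."""
    gains = [delta_N[i, j] for (i, j) in cells]
    return long_term_rate * float(np.mean(gains))

# A hypothetical rupture whose surface projects onto these grid cells:
rupture_cells = [(5, 4), (5, 5), (5, 6), (5, 7)]
print(time_dependent_rate(1.0e-3, rupture_cells))  # scaled rate, /yr
```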

Time-Dependent Conditional Probabilities

From the long-term model we have the rate of all possible earthquake ruptures (FR_{f,m,r} and BR_{i,j,m}), perhaps modified (or not) by a rate-change model ΔN_{i,j}(timespan) as just described. The question here is how to modify the fault-rupture probabilities based on information such as the date of, or amount of slip in, previous events. As demonstrated by the 1988 WGCEP, computing conditional probabilities is relatively straightforward if strict segmentation is applied (ruptures are confined to specific segments, with no possibility of overlap or multi-segment ruptures). Using the average recurrence interval (perhaps determined by moment balancing), a coefficient of variation, the date of the most recent event, and an assumed distribution (e.g., lognormal), one can easily compute the conditional probability of occurrence for a given time span. Unfortunately the procedure is not so simple if one allows multi-segment ruptures (sometimes referred to as "cascade" events).

The WGCEP-2002 Approach

The long-term model developed by the 2002 WGCEP resulted in a moment-balanced relative frequency of occurrence for each single- and multi-segment rupture combination on each fault. A simplified example of the possible ruptures and their frequencies of occurrence for the Hayward/Rodgers Creek fault is shown in Figure 2 (see the caption for details, and note that none of the simplifications influences the conclusions drawn here). Again, the frequency of each rupture has been constrained to honor the long-term moment rate of each fault section. Let's now look at how they computed conditional probabilities for each earthquake rupture. We will focus here on their Brownian Passage Time (BPT) model, but the basic conclusions drawn will apply to their other two conditional-probability models as well (the BPT-step and time-predictable models).

They used the BPT model to compute the conditional probability that each segment will rupture, using the long-term rupture rate for each segment (which is simply the sum of the rates of all ruptures that include that segment) and the date of the last event on the segment. They then partitioned each segment probability among the events that include that segment, along with the probability that the rupture would nucleate in that segment, to give the conditional probability of each rupture. In other words, each segment was treated as a point process. Therefore, if a segment has just ruptured by itself, for example, there should be a near-zero probability that it will rupture again soon after (according to their renewal model). However, there is nothing in their model stopping a neighboring segment from triggering a multi-segment event that re-ruptures the segment that just ruptured. In other words, one point process can be reset by a different, independent point process, which seems to violate the concept of a point process.
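For reference, the segment-level calculation described above reduces to a standard conditional-probability computation. The sketch below evaluates the BPT density numerically and conditions on survival to the elapsed time; the recurrence parameters are placeholders.

```python
import math
from scipy.integrate import quad

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse Gaussian) density; mu is the mean
    recurrence interval and alpha the aperiodicity (coefficient of variation)."""
    return math.sqrt(mu / (2.0 * math.pi * alpha**2 * t**3)) * \
        math.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t))

def conditional_prob(mu, alpha, elapsed, duration):
    """P(event in (elapsed, elapsed + duration] | no event through elapsed)."""
    num, _ = quad(bpt_pdf, elapsed, elapsed + duration, args=(mu, alpha))
    den, _ = quad(bpt_pdf, 1e-9, elapsed, args=(mu, alpha))
    return num / (1.0 - den)

# Placeholder segment: 210-yr mean recurrence, alpha = 0.5, 140 yr since the
# last event, 30-yr forecast window.
print(conditional_prob(210.0, 0.5, 140.0, 30.0))
```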

Figure 2. Example of a long-term, moment-balanced rupture model for the Hayward/Rodgers Creek fault, obtained from the WGCEP-2002 Fortran code. The image, taken from the WGCEP report, shows the segments on the left and the possible ruptures on the right. The tables below give information including the rate of each earthquake rupture and the rate at which each segment ruptures. Note that this example represents a single iteration of their code, where modal values in the input file were given exclusive weight, aseismicity parameters were set to 1.0, no GR-tail seismicity was included, sigma of the characteristic magnitude-frequency distribution was set to zero, and the floating ruptures were given zero weight (specifically, the line of the input file that specified the segmentation model given exclusive weight read "model-a-modified").

Segment Info
Name | Length (km) | Width (km) | Slip rate (mm/yr) | Rupture rate (/yr) | Date of last event
HS
HN
RC

Rupture Info
Name | Mag | Rate (/yr)
HS ( e-3)
HN ( e-3)
HS+HN ( e-3)
RC ( e-3)
HN+RC ( e-3)
HS+HN+RC ( e-3)
floating

This issue is illustrated by the Monte Carlo simulations shown in Figure 3, where WGCEP-2002 probabilities of rupture were computed for 1-year time intervals, ruptures were allowed to occur at random according to their probabilities for that year, dates of last events were updated on each relevant segment when a rupture occurred, and probabilities were updated for the next year. This process was repeated for 10,000 1-year time steps. The simulation results show that segment ruptures occur more often than they should soon after previous ruptures, which is at odds with the model used to predict the segment probabilities in the first place. Thus, there seems to be a logical inconsistency in the WGCEP-2002 approach. If one does not allow multi-segment ruptures, then the simulated recurrence intervals match the BPT distributions exactly (as expected). However, strict segmentation is exactly what we are trying to relax. Another way to state the problem with the WGCEP-2002 approach is that the probability of one segment triggering a rupture that extends into a neighboring segment is completely independent of when that neighboring segment last ruptured.
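The simulation behind Figure 3 can be reproduced in outline. The sketch below uses placeholder rupture rates (the Figure 2 values did not survive transcription) and shares each segment's BPT probability among ruptures in proportion to their long-term rates, a simplification of the WGCEP-2002 nucleation bookkeeping; it is intended only to illustrate the mechanics, not to reproduce the report's exact numbers.

```python
import numpy as np
from scipy.integrate import quad

def bpt_pdf(t, mu, alpha=0.5):
    """BPT density with mean recurrence mu and aperiodicity alpha."""
    return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
        np.exp(-((t - mu) ** 2) / (2 * mu * alpha**2 * t))

def one_year_prob(mu, elapsed):
    """P(rupture in the next year | elapsed years since the last rupture)."""
    num, _ = quad(bpt_pdf, elapsed, elapsed + 1.0, args=(mu,))
    den, _ = quad(bpt_pdf, 1e-9, max(elapsed, 1e-9), args=(mu,))
    return num / (1.0 - den)

# Toy stand-in for the Figure 2 model: long-term rupture rates (/yr).
RUPTURES = {("HS",): 2e-3, ("HN",): 2e-3, ("RC",): 2e-3,
            ("HS", "HN"): 1e-3, ("HN", "RC"): 1e-3, ("HS", "HN", "RC"): 5e-4}
SEGS = ("HS", "HN", "RC")
seg_rate = {s: sum(r for rup, r in RUPTURES.items() if s in rup) for s in SEGS}

rng = np.random.default_rng(0)
elapsed = {s: 1.0 / seg_rate[s] for s in SEGS}  # start one mean interval out
intervals = {s: [] for s in SEGS}

for year in range(2000):        # shortened from the report's 10,000 steps
    for rup, rate in RUPTURES.items():
        # Segment BPT probability, partitioned among the ruptures that
        # include the segment in proportion to their long-term rates.
        p = np.mean([one_year_prob(1.0 / seg_rate[s], elapsed[s]) *
                     rate / seg_rate[s] for s in rup])
        if rng.random() < p:
            for s in rup:       # reset every segment the rupture spans
                intervals[s].append(elapsed[s])
                elapsed[s] = 0.0
    for s in SEGS:
        elapsed[s] += 1.0

for s in SEGS:
    if intervals[s]:
        print(s, len(intervals[s]), "events, mean interval",
              round(float(np.mean(intervals[s])), 1), "yr")
```

Tallying the short end of the simulated interval distribution is what exposes the inconsistency: a multi-segment rupture can record a near-zero interval on a segment that its own renewal model says should be quiet.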

Figure 3. The BPT distribution of recurrence intervals used to compute segment rupture probabilities (red line), as well as the distribution of segment recurrence intervals obtained by simulating ruptures according to the WGCEP-2002 methodology (gray bins). The BPT probabilities assume a coefficient of variation of 0.5 and the segment rates given in Figure 2. Note the relatively high rate of short recurrence intervals in the simulations.

An Alternative Approach

What we seem to lack is a logically consistent way to apply time-dependent conditional probabilities where both single- and various multi-segment ruptures are allowed. The first question we should ask is whether a strict segmentation model might be adequate (making conditional-probability calculations trivial). Certainly the segmentation model of the 1988 WGCEP is inconsistent with the two most important historical earthquakes, namely the 1857 and 1906 San Andreas Fault (SAF) events. The only salvation for strict segmentation is if those historical events represent the only ruptures that occur on those sections of the SAF (or if other ruptures can be safely ignored). The best test of this hypothesis, at least that this author is aware of, is the paleoseismic data analysis of Weldon et al. (2005) for the southern SAF. They use both dates and amounts of slip inferred for previous events at points along the fault, along with Bayesian inference, to constrain the spatial and temporal distribution of previous events. Uncertainties inevitably allow more than one interpretation, two of which are shown in Figure 4 (the image and caption were provided by Weldon via personal communication). Figure 4A represents an interpretation that is largely consistent with a two-segment model, with 1857-type ruptures to the north and separate ruptures to the south. Figure 4B, which is also consistent with the data, is an alternative interpretation in which no systematic rupture boundaries are apparent (not even if multi-segment ruptures are acknowledged). Thus, it appears that one viable interpretation is that no meaningful, systematic rupture boundaries can be identified on the one fault that has been studied the most.

Figure 4. Rupture scenarios for the Southern San Andreas fault. Vertical bars represent the age ranges of paleoseismic events recognized to date, and horizontal bars represent possible ruptures. Gray shows regions/times without data. In (A) all events seen on the northern 2/3 of the fault are constrained to be as much like the 1857 AD rupture as possible, and all other sites are grouped to produce ruptures that span the southern ½ of the fault; this model is referred to as the North Bend/South Bend scenario. In (B) ruptures are constructed to be as varied as possible, while still satisfying the existing age data.

The results for the southern SAF imply that we need a logically consistent way of applying conditional probabilities not only in the case where we have multi-segment ruptures, but also in the case where no persistent rupture boundaries exist. The most general case would be one where a variety of rupture sizes are allowed to occur anywhere along the fault (as outlined in the previous section for the long-term model). Again, the question is how to sensibly apply conditional probabilities in this situation. More specifically, given a viable interpretation of the past history of events on the fault (of which Figure 4 shows only two of perhaps thousands for the southern SAF), how do we compute a conditional probability for each possible future rupture? The challenge arises from the fact that these probabilities are not independent, as the degree to which one rupture is more likely might imply that another, somewhat overlapping rupture is less likely.

The approach outlined below makes use of Bayesian methods. Before introducing the proposed solution, however, it's probably worth quoting from an excellent manuscript that's available on-line (D'Agostini, 2003): "two crucial aspects of the Bayesian approach [are]: 1) As it is used in everyday language, the term probability has the intuitive meaning of the degree of belief that an event will occur. 2) Probability depends on our state of knowledge, which is usually different for different people. In other words, probability is unavoidably subjective." The concept of subjective probabilities is not new to SHA. Indeed, the different branches of a logic tree, which represent epistemic uncertainties, explicitly embody such differences of opinion (where at most only one can be correct).

Our present understanding of what influences the timing and growth of ruptures in complex fault systems suggests that a simple model (e.g., strict segmentation with point renewal processes) will not be reliable. On the other hand, our most sophisticated physics-based models (Ward, 2000; Rundle et al., 2004) are both too simplistic and have too many free parameters to be of direct use in forecasting (although they probably constitute our best hope for the future). Society nevertheless has to make informed decisions, so we are obligated to assimilate our present knowledge (however imperfect) and make the best forecast we can, including an honest assessment of the uncertainties therein. It is hoped that the Bayesian approach outlined below constitutes a credible framework for doing just this. (Note that in what follows it is assumed that the reader has a basic understanding of Bayesian methods; the above reference provides an excellent introduction and overview if not.)

The long-term fault model given above, FR_{f,m,r}, gives the rate of each fault rupture over a long period of time (perhaps modified by ΔN_{i,j}(timespan) if appropriate). Therefore, if we have no additional information, then the consequent Poisson probabilities, which we write as P_pois(FR_{f,m,r}), represent the best forecast we can make. What we would like to do is improve this forecast based on additional information or data, written generically as D. Fortunately, Bayes' theorem gives us exactly what we need to make such an improvement. Specifically:

P(FR_{f,m,r} | D) = \frac{P_{pois}(FR_{f,m,r}) P(D | FR_{f,m,r})}{\sum_{f,m,r} P_{pois}(FR_{f,m,r}) P(D | FR_{f,m,r})}

This says that the relative probability of a given rupture, FR_{f,m,r}, is just the Poisson probability of that rupture (the prior distribution) scaled by the probability that the additional data are consistent with that rupture. One immediate issue is that the additional data (D) might not be available for all possible ruptures. However, if they are available for a sufficiently large set, such that the collective probability of these ruptures doesn't change (only their relative values), then I think the above can be applied. For example, we might be confident that the total probability of a rupture on all faults with date-of-last-event information is 0.1; application of Bayes' theorem would then constitute a rational approach for modifying the relative long-term probabilities that each will be the one that occurs.

If the above approach is legitimate, then the challenge becomes finding appropriate model(s) for the conditional probability (also called the likelihood):

P(D | FR_{f,m,r})

Again, this simply gives the probability that the data are consistent with the occurrence of that rupture (not that the data are caused by the rupture). This likelihood is essentially used in Bayes' theorem to assign a probability gain relative to the long-term Poisson probabilities. For example, in the spirit of the time-dependent renewal models applied by previous WGCEPs, we might want to apply a model that captures the notion that events are less probable where one has recently occurred, and more probable where one has not (in the "seismic gaps").
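In vector form the update is a one-liner. The sketch below renormalizes so that the collective probability of the rupture set with available data is preserved, per the argument above; the prior and likelihood values are placeholders.

```python
import numpy as np

# Poisson prior probabilities for a set of ruptures that all have
# date-of-last-event data (placeholder values; they need not sum to 1).
prior = np.array([0.02, 0.01, 0.04, 0.03])

# Likelihoods P(D | rupture): how consistent the data are with each
# rupture being the one that occurs (placeholder values).
likelihood = np.array([0.1, 0.9, 0.5, 0.2])

# Bayes' theorem, renormalized so the group's total probability is unchanged.
posterior = prior * likelihood
posterior *= prior.sum() / posterior.sum()

print(posterior)                      # relative values shift with the data
print(posterior.sum(), prior.sum())   # collective probability preserved
```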

Looking at Figure 4, this reasoning would identify the southernmost part of the southern SAF as the most likely place for a large rupture. Recall that in the long-term model we have defined the rupture extent of every possible earthquake (as well as its magnitude, from a magnitude-area relationship, which uniquely defines the average slip as well). Suppose we could look in a crystal ball and know exactly which one would occur next, but were left with figuring out when it would occur. The following are two approaches for defining a probability distribution for the time of the next occurrence (and therefore could constitute the basis for the likelihood function in Bayes' theorem above):

Method 1 (Average Slip-Predictable Model): Here we say the best estimate of the date of the next event is when sufficient moment has accumulated since the last event(s) to match the moment, M_o, of the next event (the latter coming from the long-term model). If v_i, t_{oi}, and A_i are the slip rate, date of last event, and area of each fault section comprising the rupture, then this best-estimate time of occurrence, \bar{t}, satisfies:

M_o = \mu \sum_{i=0}^{I} v_i A_i (\bar{t} - t_{oi})

where \mu is the shear modulus. Solving for \bar{t} we have:

\bar{t} = \frac{M_o + \mu \sum_{i=0}^{I} v_i A_i t_{oi}}{\mu \sum_{i=0}^{I} v_i A_i}

or \bar{t} = \overline{\Delta t} + \bar{t}_o, where

\overline{\Delta t} = \frac{M_o}{\mu \sum_{i=0}^{I} v_i A_i}  and  \bar{t}_o = \frac{\sum_{i=0}^{I} v_i A_i t_{oi}}{\sum_{i=0}^{I} v_i A_i}.

\overline{\Delta t} is the time needed to accumulate the required moment, and \bar{t}_o is a weight-averaged time of the last event(s), where the plural indicates that the date of the last event, t_{oi}, may vary between sections.

Of course there are both uncertainties and intrinsic variability, so we can represent \overline{\Delta t} as a random variable (using a BPT model or any other reasonable distribution), giving a range of possible next-event times. However, this is not a renewal process in the sense of the exact same rupture occurring repeatedly (characteristic earthquakes), as we have made no assumptions regarding persistent rupture boundaries. The expression for \overline{\Delta t} above shows that the time of the next event depends on the size, or more specifically the amount of slip, of the next event. The implicit assumption is, therefore, that a rupture brings each fault section, on average, back to some base level of stress (as in a slip-predictable model). Thus, if we know, or are willing to hypothesize, the size of the event, then we can predict its most likely time of occurrence. There is no dependence on the amount of slip in previous event(s). This model will lead us astray if each large event does not, on average, exhaust the stress available to produce subsequent events.

Method 2 (Average Time-Predictable Model): If we also know the amount of slip produced by the last event in each fault section (D_i), then we can apply the time-predictable model on a section-by-section basis. That is, the best estimate of the time of the next event on each section is:

t_i = \frac{D_i}{v_i} + t_{oi}

Then for a hypothesized rupture including multiple sections, we can define the best estimate of the rupture time of the next event as a weighted average of t_i over the sections:

\hat{t} = \frac{\sum_{i=0}^{I} A_i (D_i / v_i + t_{oi})}{\sum_{i=0}^{I} A_i}

or \hat{t} = \widehat{\Delta t} + \hat{t}_o, where

\widehat{\Delta t} = \frac{\sum_{i=0}^{I} A_i (D_i / v_i)}{\sum_{i=0}^{I} A_i}  and  \hat{t}_o = \frac{\sum_{i=0}^{I} A_i t_{oi}}{\sum_{i=0}^{I} A_i}.

Note that we use hats over the symbols here to distinguish them from the barred symbols of method 1 above. In contrast to method 1, the timing of the next event does not depend on how large it will be, but rather on how large the last event(s) were (implicitly assuming some triggering threshold for each section, as in the time-predictable model). This model will interpret a relatively low amount of slip in an event as indicating that another event should occur soon (because significant stress remains), rather than indicating that the fault section has already been depleted. Therefore, the differing assumptions between methods 1 and 2 above are that the time of the next event depends either on the size of the next event or on the size of the last event, respectively (albeit both with some intrinsic variability, as modeled by a BPT-type distribution). We have made no presumptions regarding the persistence of repeated, identical ruptures, but rather have presumed knowledge of the location of the next event.

We will return to how this can be used in Bayes' theorem shortly, but let's digress for a moment to examine whether the assumptions in methods 1 and/or 2 above are testable. The question is whether either of these predicted intervals, \overline{\Delta t} or \widehat{\Delta t}, correlates with actual observed intervals (t - \bar{t}_o or t - \hat{t}_o, respectively, where t is the observed time) and whether the normalized differences, (t - \bar{t}_o)/\overline{\Delta t} or (t - \hat{t}_o)/\widehat{\Delta t}, exhibit something like a BPT distribution. Perhaps worldwide observations are not abundant enough to provide an adequate test of this, so for clues we turn to the physics-based Virtual California simulations of Rundle et al. (2004). Virtual California is a cellular-automaton-type earthquake simulator composed of 650 fault sections, each of which is about 10 km in length. Each section is loaded according to its long-term slip rate and is allowed to rupture according to a friction law. Both static and quasi-dynamic stress interactions are accounted for between sections, giving rise to a self-organization of the statistical dynamics. In particular, the distribution of events for the entire region (California) exhibits a Gutenberg-Richter distribution. The interesting question is whether this complex interaction effectively erases the predictability hoped for from elastic-rebound-type considerations. Specifically, do methods 1 or 2 above provide any predictability with respect to Virtual California events? The results shown in Figure 5 are quite encouraging, as the distribution of events for method 1 (the average slip-predictable approach, shown in 5a) is fit well by a BPT distribution with a coefficient of variation of less than 0.2. The results for method 2 (the average time-predictable approach) are even better, with a still smaller coefficient of variation. That method 2 would exhibit more predictability is not surprising, however, because Virtual California's stochastic element is applied as a random 10% over- or under-shoot of the amount of slip in each event; therefore, knowing the size of the last event is more helpful than knowing the size of the next event. These results are encouraging in that one of the most sophisticated physics-based earthquake simulators implies there is some predictability in the system. The relevant question, however, is how robust this conclusion is, given all the assumptions and simplifications embodied in Virtual California.
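For completeness, the two timing estimates reduce to a few lines of arithmetic. The sketch below evaluates \overline{\Delta t} and \bar{t}_o (method 1) and \widehat{\Delta t} and \hat{t}_o (method 2) for a hypothetical three-section rupture; every number is a placeholder, and the final line forms the normalized intervals that Figure 5 histograms.

```python
import numpy as np

MU = 3.0e10  # shear modulus, Pa

# Hypothetical three-section rupture (SI units; all values are placeholders).
A = np.array([1.5e8, 1.5e8, 1.5e8])          # section areas, m^2
v = np.array([0.025, 0.030, 0.020])          # section slip rates, m/yr
t_last = np.array([1857.0, 1857.0, 1812.0])  # dates of last event, yr AD
D_last = np.array([4.0, 5.0, 3.0])           # slip in last event, m
M0_next = 1.0e20                              # hypothesized next-event moment, N*m

# Method 1 (average slip-predictable): time to accumulate M0_next.
dt_bar = M0_next / (MU * np.sum(v * A))
t0_bar = np.sum(v * A * t_last) / np.sum(v * A)

# Method 2 (average time-predictable): area-weighted section renewal times.
dt_hat = np.sum(A * D_last / v) / np.sum(A)
t0_hat = np.sum(A * t_last) / np.sum(A)

print(f"method 1: dt = {dt_bar:.0f} yr, predicted date = {dt_bar + t0_bar:.0f}")
print(f"method 2: dt = {dt_hat:.0f} yr, predicted date = {dt_hat + t0_hat:.0f}")

# Normalized intervals for a hypothetical observed event time t_obs:
t_obs = 2000.0
print((t_obs - t0_bar) / dt_bar, (t_obs - t0_hat) / dt_hat)
```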
What we need is to propagate all the uncertainties in this model,

as well as to examine the results of other earthquake simulators, to see what predictability remains (e.g., the variability of the coefficients of variation in methods 1 and 2 above). Results obtained using the Ward (2000) earthquake simulator (not shown here) reveal predictability as well, but with significantly higher coefficients of variation (~0.8).

Figure 5. The distribution of observed versus predicted recurrence intervals from the Virtual California (VC) simulation of Rundle et al. (2004). (a) VC distribution of (t - \bar{t}_o)/\overline{\Delta t}; (b) VC distribution of (t - \hat{t}_o)/\widehat{\Delta t}. The red bins in (a) and (b) are for prediction methods 1 and 2 (the average slip- and time-predictable methods, respectively); various PDF fits are shown as well. (c) Rupture locations on the northern San Andreas Fault for a 3000-year time slice, showing a lack of persistent rupture boundaries. (d) VC distribution of (t - \bar{t}_o)/\overline{\Delta t} after assigning a random time of occurrence to each VC rupture, revealing Poissonian behavior as expected. (e) VC events randomized in time on the northern SAF.

Also shown in Figure 5 is a space-time diagram for VC ruptures on the northern San Andreas Fault (5c), as well as the distribution of method-1 recurrence intervals after assigning a random time of occurrence to all VC ruptures (5d, which shows an approximately exponential distribution, as expected for a Poisson model). Figure 5e shows a space-time diagram for the time-randomized events on the northern SAF. Note that the Poissonian behavior in Figure 5e exhibits much more clustering and larger gaps in seismicity than Figure 5c. This exemplifies how the physics in Virtual California (specifically, the frictional strength of faults) effectively regularizes the occurrence of earthquakes to avoid too much stress buildup or release. An interesting corollary is whether the Poisson model, with its associated random clustering and gaps in event times, can be ruled out on the basis of simple fault-strength arguments.


More information

BRIEFING. 1. The greatest magnitude changes in seismic risk have occurred in California, with significant but lesser changes in the Pacific Northwest.

BRIEFING. 1. The greatest magnitude changes in seismic risk have occurred in California, with significant but lesser changes in the Pacific Northwest. Catastrophe Management Services BRIEFING September 2008 Preparing for a new view of U.S. earthquake risk T The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2) By 2007 Working Group

More information

Overview of Seismic PHSA Approaches with Emphasis on the Management of Uncertainties

Overview of Seismic PHSA Approaches with Emphasis on the Management of Uncertainties H4.SMR/1645-29 "2nd Workshop on Earthquake Engineering for Nuclear Facilities: Uncertainties in Seismic Hazard" 14-25 February 2005 Overview of Seismic PHSA Approaches with Emphasis on the Management of

More information

Uniform California Earthquake Rupture Forecast,

Uniform California Earthquake Rupture Forecast, Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) Framework Working Group on California Earthquake Probabilities (WGCEP) Technical Report #8 July 9, 2012 Submitted to: California Earthquake

More information

Quantifying the effect of declustering on probabilistic seismic hazard

Quantifying the effect of declustering on probabilistic seismic hazard Proceedings of the Ninth Pacific Conference on Earthquake Engineering Building an Earthquake-Resilient Society 14-16 April, 2011, Auckland, New Zealand Quantifying the effect of declustering on probabilistic

More information

THE REVISED 2002 CALIFORNIA PROBABILISTIC SEISMIC HAZARD MAPS JUNE 2003

THE REVISED 2002 CALIFORNIA PROBABILISTIC SEISMIC HAZARD MAPS JUNE 2003 THE REVISED 2002 CALIFORNIA PROBABILISTIC SEISMIC HAZARD MAPS JUNE 2003 by Tianqing Cao, William A. Bryant, Badie Rowshandel, David Branum, and Christopher J. Wills INTRODUCTION These maps show an estimate

More information

EARTHQUAKE CLUSTERS, SMALL EARTHQUAKES

EARTHQUAKE CLUSTERS, SMALL EARTHQUAKES EARTHQUAKE CLUSTERS, SMALL EARTHQUAKES AND THEIR TREATMENT FOR HAZARD ESTIMATION Gary Gibson and Amy Brown RMIT University, Melbourne Seismology Research Centre, Bundoora AUTHORS Gary Gibson wrote his

More information

2014 Update of the United States National Seismic Hazard Maps

2014 Update of the United States National Seismic Hazard Maps 2014 Update of the United States National Seismic Hazard Maps M.D. Petersen, C.S. Mueller, K.M. Haller, M Moschetti, S.C. Harmsen, E.H. Field, K.S. Rukstales, Y. Zeng, D.M. Perkins, P. Powers, S. Rezaeian,

More information

Arthur Frankel, William Stephenson, David Carver, Jack Odum, Robert Williams, and Susan Rhea U.S. Geological Survey

Arthur Frankel, William Stephenson, David Carver, Jack Odum, Robert Williams, and Susan Rhea U.S. Geological Survey Probabilistic Seismic Hazard Maps for Seattle: 3D Sedimentary Basin Effects, Nonlinear Site Response, and Uncertainties from Random Velocity Variations Arthur Frankel, William Stephenson, David Carver,

More information

AN OVERVIEW AND GUIDELINES FOR PROBABILISTIC SEISMIC HAZARD MAPPING

AN OVERVIEW AND GUIDELINES FOR PROBABILISTIC SEISMIC HAZARD MAPPING CO 2 TRACCS INTERNATIONAL WORKSHOP Bucharest, 2 September, 2012 AN OVERVIEW AND GUIDELINES FOR PROBABILISTIC SEISMIC HAZARD MAPPING M. Semih YÜCEMEN Department of Civil Engineering and Earthquake Studies

More information

SEISMIC HAZARD ANALYSIS

SEISMIC HAZARD ANALYSIS SEISMIC HAZARD ANALYSIS Instructional Material Complementing FEMA 451, Design Examples Seismic Hazard Analysis 5a - 1 This topic addresses deterministic and probabilistic seismic hazard analysis, ground

More information

ACCOUNTING FOR SITE EFFECTS IN PROBABILISTIC SEISMIC HAZARD ANALYSIS: OVERVIEW OF THE SCEC PHASE III REPORT

ACCOUNTING FOR SITE EFFECTS IN PROBABILISTIC SEISMIC HAZARD ANALYSIS: OVERVIEW OF THE SCEC PHASE III REPORT ACCOUNTING FOR SITE EFFECTS IN PROBABILISTIC SEISMIC HAZARD ANALYSIS: OVERVIEW OF THE SCEC PHASE III REPORT Edward H FIELD 1 And SCEC PHASE III WORKING GROUP 2 SUMMARY Probabilistic seismic hazard analysis

More information

Simulation-based Seismic Hazard Analysis Using CyberShake

Simulation-based Seismic Hazard Analysis Using CyberShake Simulation-based Seismic Hazard Analysis Using CyberShake SCEC CyberShake Collaboration: Robert Graves, Scott Callaghan, Feng Wang, Thomas H. Jordan, Philip Maechling, Kim Olsen, Kevin Milner, En-Jui Lee,

More information

Part 2 - Engineering Characterization of Earthquakes and Seismic Hazard. Earthquake Environment

Part 2 - Engineering Characterization of Earthquakes and Seismic Hazard. Earthquake Environment Part 2 - Engineering Characterization of Earthquakes and Seismic Hazard Ultimately what we want is a seismic intensity measure that will allow us to quantify effect of an earthquake on a structure. S a

More information

Occurrence of negative epsilon in seismic hazard analysis deaggregation, and its impact on target spectra computation

Occurrence of negative epsilon in seismic hazard analysis deaggregation, and its impact on target spectra computation Occurrence of negative epsilon in seismic hazard analysis deaggregation, and its impact on target spectra computation Lynne S. Burks 1 and Jack W. Baker Department of Civil and Environmental Engineering,

More information

Epistemic Uncertainty in Seismic Hazard Analysis for Australia

Epistemic Uncertainty in Seismic Hazard Analysis for Australia Australian Earthquake Engineering Society 2011 Conference, 18-20 November, Barossa Valley, South Australia Epistemic Uncertainty in Seismic Hazard Analysis for Australia Paul Somerville 1,2 and Hong Kie

More information

Knowledge of in-slab earthquakes needed to improve seismic hazard estimates for southwestern British Columbia

Knowledge of in-slab earthquakes needed to improve seismic hazard estimates for southwestern British Columbia USGS OPEN FILE REPORT #: Intraslab Earthquakes 1 Knowledge of in-slab earthquakes needed to improve seismic hazard estimates for southwestern British Columbia John Adams and Stephen Halchuk Geological

More information

San Francisco Bay Area Earthquake Simulations: A step toward a Standard Physical Earthquake Model

San Francisco Bay Area Earthquake Simulations: A step toward a Standard Physical Earthquake Model San Francisco Bay Area Earthquake Simulations: A step toward a Standard Physical Earthquake Model Steven N. Ward Institute of Geophysics and Planetary Physics, University of California, Santa Cruz, CA,

More information

A TESTABLE FIVE-YEAR FORECAST OF MODERATE AND LARGE EARTHQUAKES. Yan Y. Kagan 1,David D. Jackson 1, and Yufang Rong 2

A TESTABLE FIVE-YEAR FORECAST OF MODERATE AND LARGE EARTHQUAKES. Yan Y. Kagan 1,David D. Jackson 1, and Yufang Rong 2 Printed: September 1, 2005 A TESTABLE FIVE-YEAR FORECAST OF MODERATE AND LARGE EARTHQUAKES IN SOUTHERN CALIFORNIA BASED ON SMOOTHED SEISMICITY Yan Y. Kagan 1,David D. Jackson 1, and Yufang Rong 2 1 Department

More information

Jack Loveless Department of Geosciences Smith College

Jack Loveless Department of Geosciences Smith College Geodetic constraints on fault interactions and stressing rates in southern California Jack Loveless Department of Geosciences Smith College jloveless@smith.edu Brendan Meade Department of Earth & Planetary

More information

The Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) Project Plan

The Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) Project Plan UCERF3_Project_Plan_v55.doc 1 The Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) Project Plan by the Working Group on California Earthquake Probabilities (WGCEP) Notes on this version

More information

Earthquakes. Earthquake Magnitudes 10/1/2013. Environmental Geology Chapter 8 Earthquakes and Related Phenomena

Earthquakes. Earthquake Magnitudes 10/1/2013. Environmental Geology Chapter 8 Earthquakes and Related Phenomena Environmental Geology Chapter 8 Earthquakes and Related Phenomena Fall 2013 Northridge 1994 Kobe 1995 Mexico City 1985 China 2008 Earthquakes Earthquake Magnitudes Earthquake Magnitudes Richter Magnitude

More information

Conditional Spectrum Computation Incorporating Multiple Causal Earthquakes and Ground-Motion Prediction Models

Conditional Spectrum Computation Incorporating Multiple Causal Earthquakes and Ground-Motion Prediction Models Bulletin of the Seismological Society of America, Vol. 13, No. 2A, pp. 113 1116, April 213, doi: 1.1785/1211293 Conditional Spectrum Computation Incorporating Multiple Causal Earthquakes and Ground-Motion

More information

Uniform Hazard Spectrum(UHS) for performance based seismic design

Uniform Hazard Spectrum(UHS) for performance based seismic design Uniform Hazard Spectrum(UHS) for performance based seismic design *Jun-Kyoung Kim 1), Soung-Hoon Wee 2) and Seong-Hwa Yoo 2) 1) Department of Fire Protection and Disaster Prevention, Semyoung University,

More information

Commentary Appendix A DEVELOPMENT OF MAXIMUM CONSIDERED EARTHQUAKE GROUND MOTION MAPS FIGURES THROUGH

Commentary Appendix A DEVELOPMENT OF MAXIMUM CONSIDERED EARTHQUAKE GROUND MOTION MAPS FIGURES THROUGH Commentary Appendix A DEVELOPMENT OF MAXIMUM CONSIDERED EARTHQUAKE GROUND MOTION MAPS FIGURES 3.3-1 THROUGH 3.3-14 BACKGROUND The maps used in the Provisions through 1994 provided the A a (effective peak

More information

The Length to Which an Earthquake will go to Rupture. University of Nevada, Reno 89557

The Length to Which an Earthquake will go to Rupture. University of Nevada, Reno 89557 The Length to Which an Earthquake will go to Rupture Steven G. Wesnousky 1 and Glenn P. Biasi 2 1 Center of Neotectonic Studies and 2 Nevada Seismological Laboratory University of Nevada, Reno 89557 Abstract

More information

PREPARING FOR A NEW VIEW OF U.S. EARTHQUAKE RISK

PREPARING FOR A NEW VIEW OF U.S. EARTHQUAKE RISK October 2015 PREPARING FOR A NEW VIEW OF U.S. EARTHQUAKE RISK The latest scientific studies are set to change vendor earthquake catastrophe models over the next 18 months. But will these changes be significant

More information

DCPP Seismic FAQ s Geosciences Department 08/04/2011 GM1) What magnitude earthquake is DCPP designed for?

DCPP Seismic FAQ s Geosciences Department 08/04/2011 GM1) What magnitude earthquake is DCPP designed for? GM1) What magnitude earthquake is DCPP designed for? The new design ground motions for DCPP were developed after the discovery of the Hosgri fault. In 1977, the largest magnitude of the Hosgri fault was

More information

GPS Strain & Earthquakes Unit 5: 2014 South Napa earthquake GPS strain analysis student exercise

GPS Strain & Earthquakes Unit 5: 2014 South Napa earthquake GPS strain analysis student exercise GPS Strain & Earthquakes Unit 5: 2014 South Napa earthquake GPS strain analysis student exercise Strain Analysis Introduction Name: The earthquake cycle can be viewed as a process of slow strain accumulation

More information

Shaking Hazard Compatible Methodology for Probabilistic Assessment of Fault Displacement Hazard

Shaking Hazard Compatible Methodology for Probabilistic Assessment of Fault Displacement Hazard Surface Fault Displacement Hazard Workshop PEER, Berkeley, May 20-21, 2009 Shaking Hazard Compatible Methodology for Probabilistic Assessment of Fault Displacement Hazard Maria Todorovska Civil & Environmental

More information

Development of Ground Motion Time Histories for Seismic Design

Development of Ground Motion Time Histories for Seismic Design Proceedings of the Ninth Pacific Conference on Earthquake Engineering Building an Earthquake-Resilient Society 14-16 April, 2011, Auckland, New Zealand Development of Ground Motion Time Histories for Seismic

More information

Documentation for the 2002 Update of the National Seismic Hazard Maps

Documentation for the 2002 Update of the National Seismic Hazard Maps 1 Documentation for the 2002 Update of the National Seismic Hazard Maps by Arthur D. Frankel 1, Mark D. Petersen 1, Charles S. Mueller 1, Kathleen M. Haller 1, Russell L. Wheeler 1, E.V. Leyendecker 1,

More information

Project 17 Development of Next-Generation Seismic Design Value Maps

Project 17 Development of Next-Generation Seismic Design Value Maps Project 17 Development of Next-Generation Seismic Design Value Maps Geo Structures 2016 16 February 2016 R.O. Hamburger, SE, SECB www.sgh.com Some History Prior to 1988 - maps provided broad seismic zones

More information

Module 7 SEISMIC HAZARD ANALYSIS (Lectures 33 to 36)

Module 7 SEISMIC HAZARD ANALYSIS (Lectures 33 to 36) Lecture 34 Topics Module 7 SEISMIC HAZARD ANALYSIS (Lectures 33 to 36) 7.3 DETERMINISTIC SEISMIC HAZARD ANALYSIS 7.4 PROBABILISTIC SEISMIC HAZARD ANALYSIS 7.4.1 Earthquake Source Characterization 7.4.2

More information

Review of The Canterbury Earthquake Sequence and Implications. for Seismic Design Levels dated July 2011

Review of The Canterbury Earthquake Sequence and Implications. for Seismic Design Levels dated July 2011 SEI.ABR.0001.1 Review of The Canterbury Earthquake Sequence and Implications for Seismic Design Levels dated July 2011 Prepared by Norman Abrahamson* 152 Dracena Ave, Piedmont CA 94611 October 9, 2011

More information

Seismic Source Characterization in Siting New Nuclear Power Plants in the Central and Eastern United States

Seismic Source Characterization in Siting New Nuclear Power Plants in the Central and Eastern United States Seismic Source Characterization in Siting New Nuclear Power Plants in the Central and Eastern United States ABSTRACT : Yong Li 1 and Nilesh Chokshi 2 1 Senior Geophysicist, 2 Deputy Director of DSER Nuclear

More information

SEISMIC-HAZARD ASSESSMENT: Conditional Probability

SEISMIC-HAZARD ASSESSMENT: Conditional Probability Conditional Probability SEISMIC-HAZARD ASSESSMENT: Conditional Probability Supplies Needed calculator PURPOSE Previous exercises in this book have outlined methods for inferring the patterns and history

More information

Seismic Hazard Epistemic Uncertainty in the San Francisco Bay Area and its Role in Performance-Based Assessment

Seismic Hazard Epistemic Uncertainty in the San Francisco Bay Area and its Role in Performance-Based Assessment Seismic Hazard Epistemic Uncertainty in the San Francisco Bay Area and its Role in Performance-Based Assessment Brendon A Bradley a) This paper investigates epistemic uncertainty in the results of seismic

More information

THE EFFECT OF THE LATEST SUMATRA EARTHQUAKE TO MALAYSIAN PENINSULAR

THE EFFECT OF THE LATEST SUMATRA EARTHQUAKE TO MALAYSIAN PENINSULAR JURNAL KEJURUTERAAN AWAM (JOURNAL OF CIVIL ENGINEERING) Vol. 15 No. 2, 2002 THE EFFECT OF THE LATEST SUMATRA EARTHQUAKE TO MALAYSIAN PENINSULAR Assoc. Prof. Dr. Azlan Adnan Hendriyawan Structural Earthquake

More information

Seismic Hazard Estimate from Background Seismicity in Southern California

Seismic Hazard Estimate from Background Seismicity in Southern California Bulletin of the Seismological Society of America, Vol. 86, No. 5, pp. 1372-1381, October 1996 Seismic Hazard Estimate from Background Seismicity in Southern California by Tianqing Cao, Mark D. Petersen,

More information

The Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) Project Plan

The Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) Project Plan UCERF3_Project_Plan_v39.doc 1 The Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) Project Plan by the Working Group on California Earthquake Probabilities (WGCEP) Notes: This version

More information

EARTHQUAKE HAZARD ASSESSMENT IN KAZAKHSTAN

EARTHQUAKE HAZARD ASSESSMENT IN KAZAKHSTAN EARTHQUAKE HAZARD ASSESSMENT IN KAZAKHSTAN Dr Ilaria Mosca 1 and Dr Natalya Silacheva 2 1 British Geological Survey, Edinburgh (UK) imosca@nerc.ac.uk 2 Institute of Seismology, Almaty (Kazakhstan) silacheva_nat@mail.ru

More information

Guidelines for Site-Specific Seismic Hazard Reports for Essential and Hazardous Facilities and Major and Special-Occupancy Structures in Oregon

Guidelines for Site-Specific Seismic Hazard Reports for Essential and Hazardous Facilities and Major and Special-Occupancy Structures in Oregon Guidelines for Site-Specific Seismic Hazard Reports for Essential and Hazardous Facilities and Major and Special-Occupancy Structures in Oregon By the Oregon Board of Geologist Examiners and the Oregon

More information

Modelling Strong Ground Motions for Subduction Events in the Wellington Region, New Zealand

Modelling Strong Ground Motions for Subduction Events in the Wellington Region, New Zealand Proceedings of the Ninth Pacific Conference on Earthquake Engineering Building an Earthquake-Resilient Society 14-16 April, 2011, Auckland, New Zealand Modelling Strong Ground Motions for Subduction Events

More information

A NEW PROBABILISTIC SEISMIC HAZARD MODEL FOR NEW ZEALAND

A NEW PROBABILISTIC SEISMIC HAZARD MODEL FOR NEW ZEALAND A NEW PROBABILISTIC SEISMIC HAZARD MODEL FOR NEW ZEALAND Mark W STIRLING 1 SUMMARY The Institute of Geological and Nuclear Sciences (GNS) has developed a new seismic hazard model for New Zealand that incorporates

More information

Geo736: Seismicity and California s Active Faults Introduction

Geo736: Seismicity and California s Active Faults Introduction Geo736: Seismicity and California s Active Faults Course Notes: S. G. Wesnousky Spring 2018 Introduction California sits on the boundary of the Pacific - North American plate boundary (Figure 1). Relative

More information

GROUND MOTION TIME HISTORIES FOR THE VAN NUYS BUILDING

GROUND MOTION TIME HISTORIES FOR THE VAN NUYS BUILDING GROUND MOTION TIME HISTORIES FOR THE VAN NUYS BUILDING Prepared for the PEER Methodology Testbeds Project by Paul Somerville and Nancy Collins URS Corporation, Pasadena, CA. Preliminary Draft, Feb 11,

More information

RECORD OF REVISIONS. Page 2 of 17 GEO. DCPP.TR.14.06, Rev. 0

RECORD OF REVISIONS. Page 2 of 17 GEO. DCPP.TR.14.06, Rev. 0 Page 2 of 17 RECORD OF REVISIONS Rev. No. Reason for Revision Revision Date 0 Initial Report - this work is being tracked under Notification SAPN 50638425-1 8/6/2014 Page 3 of 17 TABLE OF CONTENTS Page

More information

DEVELOPMENT OF DESIGN RESPONSE SPECTRAL SHAPES FOR CENTRAL AND EASTERN U.S. (CEUS) AND WESTERN U.S. (WUS) ROCK SITE CONDITIONS*

DEVELOPMENT OF DESIGN RESPONSE SPECTRAL SHAPES FOR CENTRAL AND EASTERN U.S. (CEUS) AND WESTERN U.S. (WUS) ROCK SITE CONDITIONS* DEVELOPMENT OF DESIGN RESPONSE SPECTRAL SHAPES FOR CENTRAL AND EASTERN U.S. (CEUS) AND WESTERN U.S. (WUS) ROCK SITE CONDITIONS* 1 INTRODUCTION W. J. Silva, R.R. Youngs, and I.M. Idriss In developing response

More information

UPDATE OF THE PROBABILISTIC SEISMIC HAZARD ANALYSIS AND DEVELOPMENT OF SEISMIC DESIGN GROUND MOTIONS AT THE LOS ALAMOS NATIONAL LABORATORY

UPDATE OF THE PROBABILISTIC SEISMIC HAZARD ANALYSIS AND DEVELOPMENT OF SEISMIC DESIGN GROUND MOTIONS AT THE LOS ALAMOS NATIONAL LABORATORY F I N A L R E P O R T UPDATE OF THE PROBABILISTIC SEISMIC HAZARD ANALYSIS AND DEVELOPMENT OF SEISMIC DESIGN GROUND MOTIONS AT THE LOS ALAMOS NATIONAL LABORATORY Prepared for Los Alamos National Laboratory

More information

The effect of bounds on magnitude, source-to-site distance and site condition in PSHA-based ground motion selection

The effect of bounds on magnitude, source-to-site distance and site condition in PSHA-based ground motion selection The effect of bounds on magnitude, source-to-site distance and site condition in PSHA-based ground motion selection K. Tarbali & B.A. Bradley Department of Civil and Natural Resources Engineering, University

More information

Module 7 SEISMIC HAZARD ANALYSIS (Lectures 33 to 36)

Module 7 SEISMIC HAZARD ANALYSIS (Lectures 33 to 36) Lecture 35 Topics Module 7 SEISMIC HAZARD ANALYSIS (Lectures 33 to 36) 7.4.4 Predictive Relationships 7.4.5 Temporal Uncertainty 7.4.6 Poisson Model 7.4.7 Other Models 7.4.8 Model Applicability 7.4.9 Probability

More information

UCERF3 Task R2- Evaluate Magnitude-Scaling Relationships and Depth of Rupture: Proposed Solutions

UCERF3 Task R2- Evaluate Magnitude-Scaling Relationships and Depth of Rupture: Proposed Solutions UCERF3 Task R2- Evaluate Magnitude-Scaling Relationships and Depth of Rupture: Proposed Solutions Bruce E. Shaw Lamont Doherty Earth Observatory, Columbia University Statement of the Problem In UCERF2

More information

2018 Blue Waters Symposium June 5, Southern California Earthquake Center

2018 Blue Waters Symposium June 5, Southern California Earthquake Center Integrating Physics-based Earthquake Cycle Simulator Models and High-Resolution Ground Motion Simulations into a Physics-based Probabilistic Seismic Hazard Model PI: J. Vidale; Former PI: T. H. Jordan

More information

San Andreas and Other Fault Sources; Background Source

San Andreas and Other Fault Sources; Background Source 1 San Andreas and Other Fault Sources; Background Source SSC TI Team Evaluation Steve Thompson Diablo Canyon SSHAC Level 3 PSHA Workshop #3 Feedback to Technical Integration Team on Preliminary Models

More information

Plate Boundary Observatory Working Group for the Central and Northern San Andreas Fault System PBO-WG-CNSA

Plate Boundary Observatory Working Group for the Central and Northern San Andreas Fault System PBO-WG-CNSA Plate Boundary Observatory Working Group for the Central and Northern San Andreas Fault System PBO-WG-CNSA Introduction Our proposal focuses on the San Andreas fault system in central and northern California.

More information

The Earthquake Cycle Chapter :: n/a

The Earthquake Cycle Chapter :: n/a The Earthquake Cycle Chapter :: n/a A German seismogram of the 1906 SF EQ Image courtesy of San Francisco Public Library Stages of the Earthquake Cycle The Earthquake cycle is split into several distinct

More information

Ground Motions from the 2008 Wells, Nevada Earthquake Sequence and Implications for Seismic Hazard

Ground Motions from the 2008 Wells, Nevada Earthquake Sequence and Implications for Seismic Hazard Nevada Bureau of Mines and Geology Special Publication 36 Ground Motions from the 2008 Wells, Nevada Earthquake Sequence and Implications for Seismic Hazard by Mark Petersen 1, Kris Pankow 2, Glenn Biasi

More information

GROUND MOTION TIME HISTORIES FOR THE VAN NUYS BUILDING

GROUND MOTION TIME HISTORIES FOR THE VAN NUYS BUILDING GROUND MOTION TIME HISTORIES FOR THE VAN NUYS BUILDING Prepared for the PEER Methodology Testbeds Project by Paul Somerville and Nancy Collins URS Corporation, Pasadena, CA March 7, Site Conditions The

More information

WESTERN STATES SEISMIC POLICY COUNCIL POLICY RECOMMENDATION Definitions of Recency of Surface Faulting for the Basin and Range Province

WESTERN STATES SEISMIC POLICY COUNCIL POLICY RECOMMENDATION Definitions of Recency of Surface Faulting for the Basin and Range Province WESTERN STATES SEISMIC POLICY COUNCIL POLICY RECOMMENDATION 15-3 Definitions of Recency of Surface Faulting for the Basin and Range Province Policy Recommendation 15-3 WSSPC recommends that each state

More information

Earthquakes. Building Earth s Surface, Part 2. Science 330 Summer What is an earthquake?

Earthquakes. Building Earth s Surface, Part 2. Science 330 Summer What is an earthquake? Earthquakes Building Earth s Surface, Part 2 Science 330 Summer 2005 What is an earthquake? An earthquake is the vibration of Earth produced by the rapid release of energy Energy released radiates in all

More information

Coulomb stress changes due to Queensland earthquakes and the implications for seismic risk assessment

Coulomb stress changes due to Queensland earthquakes and the implications for seismic risk assessment Coulomb stress changes due to Queensland earthquakes and the implications for seismic risk assessment Abstract D. Weatherley University of Queensland Coulomb stress change analysis has been applied in

More information

Estimating Earthquake-Rupture Rates on a Fault or Fault System

Estimating Earthquake-Rupture Rates on a Fault or Fault System Bulletin of the Seismological Society of America, Vol. 101, No. 1, pp. 79 92, February 2011, doi: 10.1785/0120100004 Estimating Earthquake-Rupture Rates on a Fault or Fault System by Edward H. Field and

More information

Earthquakes and Earthquake Hazards Earth - Chapter 11 Stan Hatfield Southwestern Illinois College

Earthquakes and Earthquake Hazards Earth - Chapter 11 Stan Hatfield Southwestern Illinois College Earthquakes and Earthquake Hazards Earth - Chapter 11 Stan Hatfield Southwestern Illinois College What Is an Earthquake? An earthquake is the vibration of Earth, produced by the rapid release of energy.

More information

Earthquakes Earth, 9th edition, Chapter 11 Key Concepts What is an earthquake? Earthquake focus and epicenter What is an earthquake?

Earthquakes Earth, 9th edition, Chapter 11 Key Concepts What is an earthquake? Earthquake focus and epicenter What is an earthquake? 1 2 3 4 5 6 7 8 9 10 Earthquakes Earth, 9 th edition, Chapter 11 Key Concepts Earthquake basics. "" and locating earthquakes.. Destruction resulting from earthquakes. Predicting earthquakes. Earthquakes

More information

Earthquakes Chapter 19

Earthquakes Chapter 19 Earthquakes Chapter 19 Does not contain complete lecture notes. What is an earthquake An earthquake is the vibration of Earth produced by the rapid release of energy Energy released radiates in all directions

More information

The Bridge from Earthquake Geology to Earthquake Seismology

The Bridge from Earthquake Geology to Earthquake Seismology The Bridge from Earthquake Geology to Earthquake Seismology Computer simulation Earthquake rate Fault slip rate Magnitude distribution Fault geometry Strain rate Paleo-seismology David D. Jackson djackson@g.ucla.edu

More information

Regional Workshop on Essential Knowledge of Site Evaluation Report for Nuclear Power Plants.

Regional Workshop on Essential Knowledge of Site Evaluation Report for Nuclear Power Plants. Regional Workshop on Essential Knowledge of Site Evaluation Report for Nuclear Power Plants. Development of seismotectonic models Ramon Secanell Kuala Lumpur, 26-30 August 2013 Overview of Presentation

More information

Lab 9: Satellite Geodesy (35 points)

Lab 9: Satellite Geodesy (35 points) Lab 9: Satellite Geodesy (35 points) Here you will work with GPS Time Series data to explore plate motion and deformation in California. This lab modifies an exercise found here: http://www.unavco.org:8080/cws/pbonucleus/draftresources/sanandreas/

More information

From the Testing Center of Regional Earthquake Likelihood Models. to the Collaboratory for the Study of Earthquake Predictability

From the Testing Center of Regional Earthquake Likelihood Models. to the Collaboratory for the Study of Earthquake Predictability From the Testing Center of Regional Earthquake Likelihood Models (RELM) to the Collaboratory for the Study of Earthquake Predictability (CSEP) Danijel Schorlemmer, Matt Gerstenberger, Tom Jordan, Dave

More information

10.1 A summary of the Virtual Seismologist (VS) method for seismic early warning

10.1 A summary of the Virtual Seismologist (VS) method for seismic early warning 316 Chapter 10 Conclusions This final Chapter is made up of the following: a summary of the Virtual Seismologist method for seismic early warning, comments on implementation issues, conclusions and other

More information

Probabilistic Tsunami Hazard Analysis. Hong Kie Thio AECOM, Los Angeles

Probabilistic Tsunami Hazard Analysis. Hong Kie Thio AECOM, Los Angeles Probabilistic Tsunami Hazard Analysis Hong Kie Thio AECOM, Los Angeles May 18, 2015 Overview Introduction Types of hazard analysis Similarities and differences to seismic hazard Methodology Elements o

More information

BC HYDRO SSHAC LEVEL 3 PSHA STUDY METHODOLOGY

BC HYDRO SSHAC LEVEL 3 PSHA STUDY METHODOLOGY BC HYDRO SSHAC LEVEL 3 PSHA STUDY METHODOLOGY M. W. McCann, Jr. 1, K. Addo 2 and M. Lawrence 3 ABSTRACT BC Hydro recently completed a comprehensive Probabilistic Seismic Hazard Analysis (PSHA) to evaluate

More information

Estimating fault slip rates, locking distribution, elastic/viscous properites of lithosphere/asthenosphere. Kaj M. Johnson Indiana University

Estimating fault slip rates, locking distribution, elastic/viscous properites of lithosphere/asthenosphere. Kaj M. Johnson Indiana University 3D Viscoelastic Earthquake Cycle Models Estimating fault slip rates, locking distribution, elastic/viscous properites of lithosphere/asthenosphere Kaj M. Johnson Indiana University In collaboration with:

More information

I. Locations of Earthquakes. Announcements. Earthquakes Ch. 5. video Northridge, California earthquake, lecture on Chapter 5 Earthquakes!

I. Locations of Earthquakes. Announcements. Earthquakes Ch. 5. video Northridge, California earthquake, lecture on Chapter 5 Earthquakes! 51-100-21 Environmental Geology Summer 2006 Tuesday & Thursday 6-9:20 p.m. Dr. Beyer Earthquakes Ch. 5 I. Locations of Earthquakes II. Earthquake Processes III. Effects of Earthquakes IV. Earthquake Risk

More information

Estimation of Short Term Shelter Needs FEMA Earthquake HAZUS Model

Estimation of Short Term Shelter Needs FEMA Earthquake HAZUS Model July 2017 ESRI International Users Conference Estimation of Short Term Shelter Needs FEMA Earthquake HAZUS Model Techniques & Results Douglas Schenk / Sampa Patra GIS Group / Information Services Division

More information

Uncertainties in a probabilistic model for seismic hazard analysis in Japan

Uncertainties in a probabilistic model for seismic hazard analysis in Japan Uncertainties in a probabilistic model for seismic hazard analysis in Japan T. Annaka* and H. Yashiro* * Tokyo Electric Power Services Co., Ltd., Japan ** The Tokio Marine and Fire Insurance Co., Ltd.,

More information

SCEC Simulation Data Access

SCEC Simulation Data Access SCEC Simulation Data Access 16 February 2018 Philip Maechling (maechlin@usc.edu) Fabio Silva, Scott Callaghan, Christine Goulet, Silvia Mazzoni, John Vidale, et al. SCEC Data Management Approach SCEC Open

More information

Probabilistic Seismic Hazard Analysis of Nepal considering Uniform Density Model

Probabilistic Seismic Hazard Analysis of Nepal considering Uniform Density Model Proceedings of IOE Graduate Conference, 2016 pp. 115 122 Probabilistic Seismic Hazard Analysis of Nepal considering Uniform Density Model Sunita Ghimire 1, Hari Ram Parajuli 2 1 Department of Civil Engineering,

More information

Non-Ergodic Probabilistic Seismic Hazard Analyses

Non-Ergodic Probabilistic Seismic Hazard Analyses Non-Ergodic Probabilistic Seismic Hazard Analyses M.A. Walling Lettis Consultants International, INC N.A. Abrahamson University of California, Berkeley SUMMARY A method is developed that relaxes the ergodic

More information

VALIDATION AGAINST NGA EMPIRICAL MODEL OF SIMULATED MOTIONS FOR M7.8 RUPTURE OF SAN ANDREAS FAULT

VALIDATION AGAINST NGA EMPIRICAL MODEL OF SIMULATED MOTIONS FOR M7.8 RUPTURE OF SAN ANDREAS FAULT VALIDATION AGAINST NGA EMPIRICAL MODEL OF SIMULATED MOTIONS FOR M7.8 RUPTURE OF SAN ANDREAS FAULT L.M. Star 1, J. P. Stewart 1, R.W. Graves 2 and K.W. Hudnut 3 1 Department of Civil and Environmental Engineering,

More information

Definitions. Seismic Risk, R (Σεισμική διακινδύνευση) = risk of damage of a structure

Definitions. Seismic Risk, R (Σεισμική διακινδύνευση) = risk of damage of a structure SEISMIC HAZARD Definitions Seismic Risk, R (Σεισμική διακινδύνευση) = risk of damage of a structure Seismic Hazard, Η (Σεισμικός κίνδυνος) = expected intensity of ground motion at a site Vulnerability,

More information

Regional deformation and kinematics from GPS data

Regional deformation and kinematics from GPS data Regional deformation and kinematics from GPS data Jessica Murray, Jerry Svarc, Elizabeth Hearn, and Wayne Thatcher U. S. Geological Survey Acknowledgements: Rob McCaffrey, Portland State University UCERF3

More information

Stress triggering and earthquake probability estimates

Stress triggering and earthquake probability estimates JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 109,, doi:10.1029/2003jb002437, 2004 Stress triggering and earthquake probability estimates Jeanne L. Hardebeck 1 Institute for Geophysics and Planetary Physics, Scripps

More information

Magnitude-Area Scaling of Strike-Slip Earthquakes. Paul Somerville, URS

Magnitude-Area Scaling of Strike-Slip Earthquakes. Paul Somerville, URS Magnitude-Area Scaling of Strike-Slip Earthquakes Paul Somerville, URS Scaling Models of Large Strike-slip Earthquakes L Model Scaling (Hanks & Bakun, 2002) Displacement grows with L for L > > Wmax M

More information