Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3) Framework

Working Group on California Earthquake Probabilities (WGCEP)
Technical Report #8
July 9, 2012

Submitted to: California Earthquake Authority, 801 K Street, Suite 100, Sacramento, CA

Submitted by: University of Southern California, Southern California Earthquake Center, 3651 Trousdale Parkway, ZHS 169, Los Angeles, CA

The 2012 Working Group on California Earthquake Probabilities

Executive Committee (ExCom)
The ExCom is responsible for convening experts, reviewing options, making recommendations, and orchestrating implementation of the model and supporting databases. The role of the ExCom is not to advocate specific model components, but to ensure that a minimum set of models is considered that spans the range of viability.
Edward (Ned) Field (Chair), USGS, Golden
Timothy Dawson, CGS
Andrew Michael, USGS, Menlo Park
Thomas Parsons, USGS, Menlo Park
Ray Weldon, Univ. of Oregon

Management Oversight Committee (MOC)
The MOC is in charge of resource allocation and approving project plans, budgets, and schedules. The MOC is also responsible for seeing that the models are properly reviewed and delivered.
Thomas Jordan (Chair), SCEC Director
Tom Brocher, USGS, Menlo Park
Jill McCarthy, USGS, Golden
Chris Wills, CGS

Scientific Review Panel (SRP)
The SRP is an independent body of experts who will decide whether the WGCEP has considered an adequate range of models, given the forecast duration of interest, and that logic-tree branch weights have been set appropriately.
Bill Ellsworth (Chair), USGS, Menlo Park
Duncan Agnew, Scripps, IGPP
Ramon Arrowsmith, Arizona State University
Yehuda Ben-Zion, Univ. of Southern California
Greg Beroza (PC liaison), Stanford
Mike Blanpied, USGS, Reston
Arthur Frankel, USGS, Seattle
Sue Hough, USGS, Pasadena
Warner Marzocchi, INGV, Italy
Hamid Haddadi, CGS
Rick Schoenberg, UCLA
David Schwartz, USGS, Menlo Park

UCERF3 Deformation Model Evaluation Committee
These individuals are responsible for evaluating the suite of deformation models proposed in UCERF3 and assigning relative weights to each in the final model.
Tom Parsons (Chair), USGS, Menlo Park
Jack Boatwright, USGS, Menlo Park
Timothy Dawson, CGS
Arthur Frankel, USGS, Seattle
Jim Dieterich, UC Riverside
Dave Jackson, UCLA
Wayne Thatcher, USGS, Menlo Park
Ray Weldon, Univ. of Oregon
Chris Wills, CGS

Contributors
These individuals contribute in a major way, either by providing expert opinion or specific model components.
Ramon Arrowsmith, ASU
Glenn Biasi, Univ. of Nevada, Reno
Peter Bird, UCLA
Karen Felzer, USGS, Pasadena
Jeanne Hardebeck, USGS, Menlo Park
Ruth Harris, USGS, Menlo Park
Dave Jackson, UCLA
Kaj Johnson, Indiana
Christopher Madden, OSU
Kevin Milner, SCEC
Anna Olsen, USGS, Golden
Morgan Page, USGS, Pasadena
Keith Porter, Univ. of Colorado
Peter Powers, USGS, Golden
Danijel Schorlemmer, SCEC
Bruce Shaw, Columbia Univ. (LDEO)
Xiaopeng Tong, UCSD
Wayne Thatcher, USGS, Menlo Park
Yuehua Zeng, USGS, Golden


Table of Contents

A. Introduction
  A.1. Goals and Objectives
  A.2. Project Organization
  A.3. The UCERF3 Framework
  A.4. Review and Consensus-Building Processes
  A.5. UCERF3 Limitations and Potential Applications
B. Fault Model
  B.1. Definition
  B.2. Fault-Zone Polygons
  B.3. Logic-Tree Branches
  B.4. Development Process
C. Deformation Models
  C.1. Geologic Slip-Rate Constraints
  C.2. The Geologic Deformation Model
  C.3. Deformation Models from Joint Inversion of Geodetic and Geologic Data
  C.4. Creep and Aseismicity
  C.5. Implied Moment Rates
  C.6. Logic-Tree Branch Weights
D. Earthquake Rate Models and the Grand Inversion
  D.1. Methodology
  D.3. Implementation Ingredients
    D.3.1. Slip-Rate Balancing (Equation Set 1)
    D.3.2. Paleoseismic Event-Rate Matching (Equation Set 2)
    D.3.3. Improbability Constraint (Equation Set 3)
    D.3.4. Other Constraints (Related to Equation Sets 4 & 5)
  D.4. Inversion Models and Associated Gridded Seismicity
    D.4.1. Characteristic Branches
    D.4.2. Gutenberg-Richter Branches
  D.5. Gardner-Knopoff Aftershock Filter
  D.6. Grand Inversion using UCERF2 Ingredients
  D.7. UCERF3.0 Earthquake Rate Models
    D.7.1. Reference Branches
    D.7.2. Other Comparisons
  D.8. Conclusions
E. Earthquake Probability Models
  E.1. The Empirical Model
  E.2. Self-Consistent Elastic Rebound Models
  E.3. Spatiotemporal Clustering Models
  E.4. Implementation Using UCERF
  E.5. Logic-Tree Branches
F. Conclusions and Recommendations
  F.1. Major Issues
  F.2. Recommendations
References
List of Acronyms


A. Introduction

Most damaging earthquakes in California are caused by the rupture of pre-existing faults in Earth's upper crust (above a depth of about 20 km), where previous deformation has already weakened the brittle rocks within an active fault zone. An earthquake occurs when increasing tectonic stress causes the fault to fail suddenly, displacing the rocks on either side and radiating energy in seismic waves. An earthquake rupture forecast comprises statements, couched in terms of probabilities, about the locations, rates of occurrence, and magnitudes of future fault ruptures.

Earthquake rupture forecasting is a critical technology for ensuring the seismic safety of California's 38 million residents. It provides the predictive framework for the probabilistic seismic hazard analysis (PSHA) required for earthquake engineering, the loss modeling required for setting insurance rates and other financial safety mechanisms, and the preparedness measures needed to improve community resilience to earthquake disasters (NRC, 2011).

The science of earthquake forecasting has been moving rapidly, owing to new data from seismology, geology, and geodesy and a better understanding of earthquake processes within active fault systems (e.g., Jordan et al., 2011). Motivated by the need to apply the best available science to the practical issues of forecasting, the California Earthquake Authority (CEA) joined with the United States Geological Survey (USGS), California Geological Survey (CGS), and the Southern California Earthquake Center (SCEC) in September 2004 to create a new Working Group on California Earthquake Probabilities. Previous WGCEPs had produced long-term, time-dependent models for the San Andreas fault system (WGCEP, 1988, 1990), Southern California (WGCEP, 1995), and the San Francisco Bay Area (WGCEP, 2003). The new WGCEP was charged with developing the Uniform California Earthquake Rupture Forecast (UCERF), a statewide framework for time-dependent earthquake forecasting, and with coordinating this framework with the National Seismic Hazard Mapping Program (NSHMP).

The initial project produced a prototype model, UCERF1 (Petersen et al., 2007a), and a full-fledged consensus model, UCERF2 (Field et al., 2009), and the models were distributed in OpenSHA, a flexible open-source computational platform (Field et al., 2003). UCERF2 was comprehensively documented in a special joint report with 16 appendices, supplementary data, and an introductory fact sheet (WGCEP, 2007) and released to the public in April 2008. The time-independent component of this model, the UCERF2 long-term earthquake rate model, was used as the California component of the updated National Seismic Hazard Map (Petersen et al., 2008), ensuring the consistency of these complementary hazard analyses.

The time-dependent component of UCERF2, like previous WGCEP models, relied on a stress-renewal model to condition earthquake probabilities on the dates of previous large events, such as the 1906 San Francisco and the 1857 Fort Tejon earthquakes on the San Andreas fault. This type of time dependence assumes the probability rate drops immediately after an earthquake releases tectonic stress on a fault and rises as the stress re-accumulates, in accordance with the elastic rebound theory of the earthquake cycle (Reid, 1911; WGCEP, 2003, Chapter 5). UCERF2 was calibrated for variations in the cycle using a comprehensive set of historical and paleoseismic observations.
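The conditional-probability bookkeeping behind such renewal models can be illustrated with a minimal sketch. The example below assumes a lognormal recurrence distribution and hypothetical parameter values; the actual WGCEP calculations use Brownian Passage Time distributions and many additional ingredients (see Section E).

```python
import math

def lognormal_cdf(t, median, sigma):
    """CDF of a lognormal recurrence-time distribution (median in years, sigma in log units)."""
    return 0.5 * (1.0 + math.erf((math.log(t) - math.log(median)) / (sigma * math.sqrt(2.0))))

def conditional_probability(t_elapsed, dt, median, sigma):
    """P(rupture within the next dt years | no rupture in the t_elapsed years since the last event)."""
    f_now = lognormal_cdf(t_elapsed, median, sigma)
    f_later = lognormal_cdf(t_elapsed + dt, median, sigma)
    return (f_later - f_now) / (1.0 - f_now)

# Hypothetical fault section: 150-yr median recurrence, sigma of 0.5,
# 100 years elapsed since the last rupture, 30-yr forecast window.
print(conditional_probability(100.0, 30.0, 150.0, 0.5))
```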

A second WGCEP project supported by the USGS, CGS, SCEC, and CEA was launched in early 2010. Its primary goal was to improve the UCERF probabilistic framework by incorporating multi-fault ruptures and spatiotemporal clustering. The accomplishments of this 28-month UCERF3 project are described in this technical report, which has three purposes: (1) to document the resulting UCERF3 framework and its implementation in OpenSHA; (2) to present an initial long-term model derived within this framework, the UCERF3.0 model (Figure 1); and (3) to make recommendations regarding future UCERF developments.

In particular, UCERF3 extends the forecasting capabilities for California into the realm of the short-term probabilities needed for operational earthquake forecasting (Jordan et al., 2011; NRC, 2011). As described in this report, the use of UCERF3 for this purpose will require considerable road testing of the framework's time-dependent components, as well as the development of robust interoperability with real-time seismicity information. Further coordination with the NSHMP will also be needed to ensure the compatibility of UCERF3 with the next release of the NSHMP, scheduled for 2014.

Figure 1. Three-dimensional perspective view of California, showing the 2601 fault sections of UCERF3 Fault Model 3.1. Colors indicate the rates at which each fault section participates in earthquake ruptures with magnitudes ≥ 6.5. The participation rates were calculated from a single branch of the UCERF3.0 model (the Characteristic reference branch for the Zeng deformation model). The light blue boundary identifies the UCERF model region, which comprises California and a buffer zone. The black boxes define the San Francisco Bay Area and Los Angeles regions used in hazard calculations. The Cascadia megathrust is not shown on this fault map; it and the Mendocino transform fault extend beyond the UCERF model region.

A.1. Goals and Objectives

In the UCERF2 project, WGCEP 2007 developed a statewide model that used consistent methodologies, data-handling standards, and treatment of uncertainties across all regions of California. The Working Group also identified a number of unresolved issues related to earthquake rupture forecasting and recommended research that could further improve the UCERF framework. For example, a reanalysis of earthquakes in California revealed that the previous national hazard maps (e.g., NSHMP, 2002) significantly over-predicted the rate of earthquakes with magnitudes near 6.5. The Working Group devoted substantial effort to reducing this bulge in earthquake rates. While these adjustments were successful in making the predictions consistent with the frequency-magnitude statistics of historical earthquakes, they only began to address what some scientists believed to be the root causes of the discrepancy: assumptions regarding fault segmentation that limit earthquake size and a corresponding underestimation of fault-to-fault ruptures.

Multi-fault ruptures are common in strike-slip fault systems; examples include the 1992 Landers earthquake (M7.3), the 2002 Denali (Alaska) earthquake (M7.9), and the 2010 El Mayor-Cucapah earthquake (M7.2). Therefore, one of the main goals of the UCERF3 project has been to relax the fault-segmentation constraints of previous models and improve probability estimates of the fault-to-fault ruptures that could possibly span proximate fault segments in California. To achieve this goal, WGCEP has developed a formal inversion method for constructing earthquake rate models that relaxes segmentation and allows fault-to-fault ruptures while honoring all available information on fault geometries, slip rates, and frequency-magnitude statistics. This Grand Inversion scheme is described in Section D.

UCERF2 included neither the earthquake clustering evident in aftershock sequences nor the earthquake triggering caused by static or dynamic stress changes. In some situations, these effects can dominate the time-dependent probabilities for damaging earthquakes (Jordan et al., 2011). The goal has been to incorporate clustering models, such as the Epidemic Type Aftershock Sequence (ETAS) model (Ogata, 1988), into the fault-based UCERF framework. Achieving this goal has required extending ETAS from a point-process model to a finite-fault model. Section E documents this extended ETAS model and its current implementation in the UCERF3 framework.

In addition to achieving these two main goals, the UCERF3 project tackled several other key issues identified by WGCEP in its UCERF2 report:

Improved deformation models. The deformation models for UCERF2 were constructed primarily from fault slip rates determined from geologic observations; geodetic data were used only indirectly, except as constraints on a few fault slip rates and a few broad zones of deformation where the distribution of fault slip rates was poorly determined (called type-C zones). In UCERF3, WGCEP has aimed to derive more accurate deformation models based on kinematically consistent inversions of both geodetic and geologic data, rather than relying on expert opinion to sort out discrepancies between the two observational approaches. A second objective has been to replace type-C zones with more spatially refined off-fault strain estimates. Deformation models with these attributes are presented in Section C.
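The core idea of the Grand Inversion can be sketched with a toy problem: treat the rates of all candidate ruptures as unknowns and solve for non-negative rates that reproduce the section slip rates. The matrix entries, target slip rates, and use of non-negative least squares below are illustrative only; the actual UCERF3 inversion adds paleoseismic, magnitude-distribution, and improbability constraints and is solved at much larger scale with a simulated-annealing algorithm (Section D).

```python
import numpy as np
from scipy.optimize import nnls

# Three fault sections (rows) and six candidate ruptures (columns); entry (s, r) is the
# average slip (m) that rupture r produces on section s per event. Values are hypothetical.
D = np.array([
    [1.0, 0.0, 0.0, 1.5, 0.0, 2.0],   # section 0
    [0.0, 1.0, 0.0, 1.5, 1.5, 2.0],   # section 1
    [0.0, 0.0, 1.0, 0.0, 1.5, 2.0],   # section 2
])
target_slip_rate = np.array([0.02, 0.03, 0.01])  # m/yr for each section

# Non-negative least squares: rupture rates x (events/yr) that best reproduce the slip rates.
rates, misfit = nnls(D, target_slip_rate)
print("rupture rates (/yr):", rates.round(4), " misfit:", misfit)
```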

Interpretation of historical seismicity. WGCEP 2003 interpreted the apparent recent seismicity lull as a stress shadow cast by the great 1906 San Francisco earthquake; however, WGCEP 2007 found that much of California exhibits a similar lull, which called the stress-shadow hypothesis into question. The apparent variation of historical seismicity was the largest single source of epistemic uncertainty in the UCERF2 time-dependent probability model. The evidence for historical seismicity changes has been re-examined, and its treatment using empirical time-dependent models has been reassessed (Section E).

Magnitude-area relationships. Earthquake rupture forecasts depend heavily on magnitude-area relationships, and those used in UCERF2 appear to be at odds with models preferred by ground-motion simulations, such as SCEC's CyberShake project (Graves et al., 2010). The UCERF3 objectives are to resolve inconsistencies among published magnitude-area relationships, especially with respect to the depth extent of large ruptures, and to evaluate their implications for statewide earthquake probabilities, including implications for aseismicity factors and coupling coefficients. Results are given in Section D.

Self-consistent renewal models. WGCEP 2007 demonstrated that the inclusion of multi-segment ruptures can lead to inconsistencies in computing conditional time-dependent probabilities from single-segment renewal models. Additional problems arise when long-term renewal models are combined with short-term earthquake clustering models, such as ETAS. UCERF3 implements self-consistent probability models based on renewal concepts that can accommodate fault-to-fault ruptures and short-term clustering (Section E).

WGCEP has addressed these and other issues through an extensive suite of coordinated studies, which are documented in the 19 appendices to this report (Table 1).

A.2. Project Organization

The WGCEP organizational structure used in the UCERF2 study was retained for UCERF3, comprising an Executive Committee (ExCom), a Management Oversight Committee (MOC), a Scientific Review Panel (SRP), and a large group of contributing experts. The ExCom, chaired by Dr. Field, has been responsible for convening experts, reviewing options, and making decisions about model components, as well as implementing the UCERF3 framework and supporting databases. An important role of the ExCom has been to ensure that the components of UCERF3 span the range of model viability. The MOC, chaired by Dr. Jordan, has allocated resources and approved project plans, budgets, and schedules; it has also overseen the model review and delivery processes. The SRP, chaired by Dr. Ellsworth, is an independent body of experts that has reviewed the project plans, research results, and model elements. In particular, the SRP has provided WGCEP with guidance regarding model viability and the range of models needed to represent epistemic uncertainties. Other WGCEP contributors include research scientists, resource experts, model advocates, and IT professionals. Members of these groups are listed at the beginning of this report and under "Participants" on the WGCEP website.

The UCERF3 project has been supported using the internal resources of the USGS, CGS, and SCEC, and by the CEA through a contract to SCEC, managed by the MOC (Jordan, PI). CEA's Multidisciplinary Research Team participated in the reviews of WGCEP products and reports; however, no CEA personnel were directly involved in the development of the UCERF3 framework.

Table 1. Appendices to Technical Report #8.*

A. Updates to the California Reference Fault Parameter Database: UCERF3 Fault Models 3.1 and 3.2 (Dawson, TE)
B. Geologic Slip-Rate Data and Geologic Deformation Model (Dawson, TE, and RJ Weldon II)
C. Deformation Models for UCERF3 (Bird, P, JM Bormann, TE Dawson, EH Field, WC Hammond, TA Herring, KM Johnson, R McCaffrey, T Parsons, Z-K Shen, WR Thatcher, RJ Weldon II, and Y Zeng)
D. Compilation of Creep Rate Data for California Faults and Calculation of Moment Reduction Due to Creep (Weldon II, RJ, DA Schmidt, X Tong, BA Wisely, LJ Austin, TE Dawson, and DT Sandwell)
E. Evaluation of Magnitude-Scaling Relationships and Depth of Rupture: Recommendation for UCERF3 (Shaw, BE)
F. Distribution of Slip in Ruptures (Biasi, G, RJ Weldon II, and TE Dawson)
G. Paleoseismic Sites Recurrence Database (Weldon II, RJ, TE Dawson, and C Madden)
H. Paleoseismic Interevent Times Interpreted for an Unsegmented Earthquake Rupture Forecast (Parsons, T)
I. Probability of Detection of Ground Rupture at Paleoseismic Sites (Weldon II, RJ, and G Biasi)
J. Fault-to-Fault Rupture Probabilities (Biasi, G, T Parsons, RJ Weldon II, and TE Dawson)
K. The UCERF3 Earthquake Catalog (Felzer, KR)
L. Observed Magnitude Frequency Distributions (Felzer, KR)
M. Smoothed Seismicity Model (Felzer, KR)
N. Grand Inversion Implementation and Exploration of Logic Tree Branches (Page, MT, EH Field, and KR Milner)
O. Gridded Seismicity Sources (Powers, PM)
P. Models of Earthquake Recurrence and Down-Dip Edge of Rupture for the Cascadia Subduction Zone (Frankel, AD, and MD Petersen)
Q. The Empirical Model (Felzer, KR)
R. Compilation of Slip in the Last Event Data and Analysis of Last Event, Repeated Slip, and Average Displacement for Recent and Prehistoric Ruptures (Madden, C, DE Haddad, JB Salisbury, O Zielke, JR Arrowsmith, J Colunga, and RJ Weldon II)
S. Constraining ETAS Parameters from the UCERF3 Catalog and Validating the ETAS Model for M ≥ 6.5 Earthquakes (Hardebeck, JL)

* The draft versions of the appendices included with this report are available on the WGCEP website; fully reviewed and revised versions will be included as citable elements of a USGS Open-File Report on UCERF3.

A.3. The UCERF3 Framework

The UCERF model domain, the polygon surrounding the state boundaries in Figure 1, was chosen by WGCEP 2007 to be the testing region of the Regional Earthquake Likelihood Models (RELM) project (Field, 2007a; Schorlemmer et al., 2007). This standard has also been adopted as the California testing region for the Collaboratory for the Study of Earthquake Predictability (CSEP; Zechar et al., 2010), which will facilitate the prospective testing of the UCERF3 probability models against future seismicity and the evaluation of its short-term components relative to the CSEP experimental forecasts.

The UCERF3 framework, like its UCERF2 predecessor, has been constructed from the four main model components shown in Figure 2. The Fault Model gives the physical geometry of the larger, known faults; the Deformation Model assigns slip rates and aseismicity factors to each fault section; the Earthquake Rate Model gives the long-term rate of all earthquakes throughout the region above a specified threshold (M ≥ 5 for both UCERF and NSHMP); and the Earthquake Probability Model gives a probability for each event over a specified time span.

Figure 2. Four main model components of the UCERF3 framework: Fault Models specify the spatial geometry of larger, more active faults; Deformation Models provide fault slip rates used to calculate seismic moment release; Earthquake Rate Models give the long-term rate of all possible damaging earthquakes throughout a region; and Probability Models give the probability that each earthquake in the given Earthquake Rate Model will occur during a specified time span.

Although dividing any complex interactive system into separate components is to some degree artificial and arbitrary, the scheme of Figure 2 has continued to be useful in UCERF model development. In the case of UCERF3, the most problematic distinction is between the Earthquake Rate Model and the Earthquake Probability Model. All previous WGCEP and NSHMP forecast models have first defined the long-term rate of each event, which has both physical meaning (in terms of being measurable, at least in principle) and practical utility (e.g., in current building codes). However, drawing this distinction becomes problematic when constructing a fully time-dependent model; for instance, when trying to differentiate between a multi-fault rupture rate and the probability that one fault might quickly trigger another as a separate event. WGCEP derived insights about how this problem should be handled through a posteriori analysis of the results from physics-based earthquake simulators, which do not separate long-term earthquake rates from short-term probabilities a priori (see Section E).

WGCEP has retained the separation of UCERF3 components to facilitate the construction and testing of the models. This modularization, which has been achieved in OpenSHA via object-oriented computer programming, also aids in defining alternative models (as logic-tree branches) and in creating and improving model components. WGCEP 2007 put a significant effort into developing an object-oriented, open-source, extensible cyberinfrastructure that could accommodate future improvements to the UCERF framework (see "Model Framework" on the WGCEP website). The Working Group also developed distributed data resources and a flexible set of analysis tools (see "Data" and "Tools" on the WGCEP website).

These prior investments in cyberinfrastructure greatly facilitated the extensions of the earthquake forecasting framework achieved in UCERF3. They have been augmented considerably to accommodate the expanded UCERF3 framework, including implementations of OpenSHA on supercomputers.

The next four sections of this report describe the main model components depicted in Figure 2, and they define the branch structure of the UCERF3.0 logic tree, shown in Figure 3. The important submodules of these main components are referred to as Key Components of UCERF3, which are fully described in the appendices (Table 1).

Figure 3. Logic-tree branches for the UCERF3.0 model. Highlighted branches are the Characteristic reference branch (bold black and red) and the Gutenberg-Richter reference branch (bold black and blue), described in Section D.7.

Because there will be no single consensus model for UCERF3, it is important that the modeling framework adequately portray epistemic uncertainties, which represent our incomplete understanding of how nature works, as well as the aleatory uncertainties, which represent the inherent randomness assumed in any given model (SSHAC, 1997). Epistemic uncertainties are represented as they were in UCERF2, using a logic tree structured by the four main model components. The logic-tree branches, described in more detail below, account for multiple models constructed under different assumptions and with different parameters (Figure 3). The branch points often represent choices among the Key Components. Figure 3 shows the logic tree for the UCERF3.0 model.

A.4. Review and Consensus-Building Processes

Discussion of model options and consensus building has been achieved through a series of community workshops listed in Table 2. These workshops included participants from the broader community who provided input to the WGCEP. Some workshops focused on the scientific ingredients going into UCERF3, while others were aimed at informing and getting feedback from user communities.

Decisions with respect to logic-tree branches and weights are the responsibility of the ExCom. In some cases, input about branch weighting has been solicited from ad hoc evaluation committees when special expertise was needed. An example is the UCERF3 Deformation Model Evaluation Committee, set up to assess the deformation models (see the listing at the beginning of this report and under "Participants" on the WGCEP website). The ExCom also provides the scientific rationale for why the models were selected and how the weights were assigned. ExCom decisions are subject to review by the SRP and the MOC. While the ExCom will continue to rely on expert opinion in establishing logic-tree branch weights when necessary, a goal of the UCERF3 project is to base branch weighting on criteria that are as quantitative, reproducible, and testable as possible. Throughout the project, emphasis has been placed on developing model-analysis tools and objective metrics that can be used in establishing branch weights; many examples are given in this report.

UCERF3 is being developed in full cooperation and coordination with the NSHMP. For example, the Cascadia subduction zone, which extends northward from California, is treated as a special case with its own logic tree. The earthquake rupture forecast for this megathrust has been developed by a joint NSHMP/UCERF working group, which convened three workshops on the Cascadia subduction zone over the course of this project (Table 2). The results are fully described in Appendix P by Frankel and Petersen (2012).

WGCEP is proceeding under the assumption that the time-independent version of UCERF3 will be used for the next round of USGS hazard maps for California, which are scheduled for release circa 2014. Based on preliminary evaluations, the initial model presented here, UCERF3.0, will have to be substantially revised to serve this purpose. A major milestone will be the NSHMP California Workshop, scheduled for October 17-18, 2012. It is the intention to finalize the UCERF3 Earthquake Rate Model by this time. In his dual role as WGCEP chair and as USGS lead for the California component of the NSHMP forecast model, Dr. Field will continue to coordinate this integration process.
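As a minimal illustration of how logic-tree branch weights are used, the sketch below averages a forecast quantity over alternative branches. The branch names echo the UCERF3 deformation models, but the weights and rates are hypothetical placeholders for this example, not values adopted by WGCEP.

```python
# Each complete logic-tree branch is a choice of model components; a mean forecast quantity
# is the weight-averaged mixture over branches (weights must sum to 1).
branches = [
    {"deformation_model": "Geologic",  "weight": 0.30, "m6p7_rate": 0.051},
    {"deformation_model": "Zeng",      "weight": 0.30, "m6p7_rate": 0.047},
    {"deformation_model": "NeoKinema", "weight": 0.20, "m6p7_rate": 0.044},
    {"deformation_model": "ABM",       "weight": 0.20, "m6p7_rate": 0.049},
]
assert abs(sum(b["weight"] for b in branches) - 1.0) < 1e-9
mean_rate = sum(b["weight"] * b["m6p7_rate"] for b in branches)
print(f"branch-averaged rate of M>=6.7 events: {mean_rate:.4f} per year")
```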

Table 2. WGCEP (2012) consensus-building activities, including planned activities.

Oct 17-18*: California Workshop of the National Seismic Hazard Mapping Project (NSHMP)
Sep*: Overview of Final UCERF3 Model for User Communities
TBD*: Workshop on Assumptions and Model Testing
Jul 9: UCERF3 Model Framework (Report #8) Submitted
May 8-9: Scientific Review Panel Meeting
Apr 30: Review of Preliminary UCERF3 Model (Report #7) Submitted
Mar 31: Preliminary UCERF3 Model (Report #6) Submitted
Mar: Cascadia Subduction Zone Workshop
Jan 26: UCERF3 Deformation Model Meeting
Jan 5-6: WGCEP All-Hands Meeting
Dec 15: Cascadia Subduction Zone Workshop
Oct 25: Joint UCERF3 and NGA-W2 Workshop on Common Issues
Oct 24: UCERF3 Plan Overview (Emphasizing the Grand Inversion for Users)
Sep 30: Final UCERF3 Plan (Report #5) Submitted
Sep: SCEC Annual Meeting
Jun 30: SRP Review of Proposed UCERF3 Plan (Report #4) Submitted
Jun: Scientific Review Panel Meeting
Jun 11: Workshop on Distribution of Slip in Large Earthquakes
Jun 10: Workshop on Instrumental & Historical Seismicity
Jun 9: Workshop on the Use of Physics-Based Simulators
Jun 8: Workshop on Time-Dependent Models
Jun 4-5: Workshop on UCERF3 Deformation Models
May 31: Proposed UCERF3 Plan (Report #3) Submitted
Apr 8: Statewide Fault Model and Paleoseismic Data Workshop (in southern California)
Apr 6: Statewide Fault Model and Paleoseismic Data Workshop (in northern California)
Mar 2-3: Distribution of Slip in Large Earthquakes Meeting
Jan 12: WGCEP All-Hands Meeting
Dec 31: UCERF3 Methodology Assessment, Proposed Solutions to Issues (Report #2) Submitted
Nov: Cascadia Subduction Zone Workshop
Nov: Scientific Review Panel Meeting
Nov 3-4: CEPEC/NEPEC Meeting
Sep: SCEC Annual Meeting
Aug 2: Fault-to-Fault Jumps Task Meeting
Jun 30: UCERF3 Methodology Assessment, Issues and Research Plan (Report #1) Submitted
Apr 1-2: Workshop on Incorporating Geodetic Surface Deformation Data in UCERF3
Feb: WGCEP All-Hands Meeting
Dec 1-2: UCERF3 Kick-Off and Planning Meeting

* planned activities.

The entire WGCEP process has also been monitored by representatives of the National Earthquake Prediction Evaluation Council (NEPEC) and the California Earthquake Prediction Evaluation Council (CEPEC). The final UCERF3 models will be submitted to NEPEC and CEPEC for review prior to any public release.

A.5. UCERF3 Limitations and Potential Applications

Previous WGCEP efforts have repeatedly demonstrated the truth of two clichés: "it's easier said than done" and "the devil is in the details." Current WGCEP examples of this include: (1) a significant increase, relative to UCERF2, in the total statewide moment rate implied by the new deformation models; (2) difficulties in achieving a model that exhibits a Gutenberg-Richter distribution of earthquake nucleation rates both on and off all faults; (3) inconsistencies between the spatiotemporal clustering implied by ETAS and the strongly characteristic magnitude-frequency distributions in UCERF2; and (4) the recognition that, in the fault-based framework of UCERF3, plausible ETAS models require the implementation of elastic rebound. The latter two issues illustrate the tight coupling between the long-term and the short-term components that is required to obtain a self-consistent earthquake rupture forecast.

In this report, WGCEP demonstrates the new UCERF3 capabilities for modeling a wider range of plausible earthquake behaviors while fitting richer sets of data with better results. However, the model described here, UCERF3.0, is only a prototype. The end-to-end modeling has revealed a variety of issues and inconsistencies, described in this report, that will require review and possible remediation before a UCERF3 time-independent model can be finalized. In addition, new aspects of the model, such as the inclusion of complex multi-fault ruptures of very low probability, may require some rethinking of the way UCERF models are applied; e.g., in the determination of the largest credible event for ensuring the seismic safety of critical facilities. Recommendations on how to proceed toward the final UCERF3 models are summarized in Section F.2.

Both the elastic rebound and ETAS components have been implemented in the UCERF3 framework, as described in this report and at recent professional meetings (Field, 2012). However, UCERF3-based results for these time-dependent components are not presented here and will be deferred until issues related to the deformation models and other elements are resolved. The time-dependent components will be incorporated after the time-independent models have been reviewed and provisionally accepted.

A particularly ambitious aspect of UCERF3 is to develop an operational earthquake forecast: an authoritative model that can be revised quickly after events that significantly modify estimates of subsequent earthquake probabilities. The WGCEP goal is to construct a model that will produce forecasts across a wide range of time scales, from short term (days to weeks), through intermediate term (e.g., annual forecasts), to long term (decades to centuries). Short-term forecasts could be used, for example, to alert emergency officials of the increased hazard due to a moderate-sized earthquake occurring near a fault that is considered close to failure. Yearly forecasts could be used by homeowners to decide whether to buy earthquake insurance for the following year, or by those needing to price insurance premiums or catastrophe bonds. Long-term forecasts are currently used in building codes.

A unified model with a full range of forecasting capabilities would be an improvement over the current practice of issuing short-term and long-term forecasts that are not necessarily consistent. For instance, in an Epidemic Type Aftershock Sequence (ETAS) model, the long-term probabilities represented by the background rate of events trade off against the aftershock productivity parameters, which control the short-term probabilities. Also, while aftershock sequences are generally considered to be a short-term phenomenon, they can produce significant probability changes over periods of years to decades, as demonstrated by the current sequences in Christchurch, New Zealand, and Tokyo, Japan. By considering all time dependencies within a single modeling framework, we are attempting to develop a consistent set of forecasts.

The utility of UCERF3 will be dictated by the interests of user communities, as well as user confidence in the forecast, given its uncertainties. Therefore, we have supported an ongoing dialogue between potential users and model developers throughout the project. Potential users include the California Earthquake Prediction Evaluation Council (CEPEC) and the National Earthquake Prediction Evaluation Council (NEPEC), which render advice on earthquake threats following significant events. The USGS currently makes short-term forecasts during earthquake clusters, which are used by the California Emergency Management Agency (CalEMA), other emergency responders, and utilities. UCERF3 will improve the basis for this type of short-term forecasting by making such forecasts consistent with the long-term seismic hazard model. It will also help to quantify how multi-fault ruptures can be distinguished from a series of separate, but quickly triggered, earthquakes. As a prototype unified forecasting model, UCERF3 will allow the USGS and CGS, as well as CEA, to explore the technical and societal issues associated with time-dependent operational forecasting.
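The trade-off noted above can be made concrete with a minimal temporal ETAS sketch (after Ogata, 1988): the forecast rate is a background rate plus Omori-law contributions from prior events, so raising the background rate mu while lowering the productivity k can fit the same catalog. The catalog and parameter values below are hypothetical, and the UCERF3 implementation is a spatiotemporal, fault-based extension of this point-process form.

```python
def etas_rate(t, catalog, mu, k, alpha, c, p, m_min):
    """Conditional intensity lambda(t) of a temporal ETAS model: background rate mu plus
    magnitude-scaled Omori-law aftershock terms from each earlier event."""
    rate = mu
    for t_i, m_i in catalog:
        if t_i < t:
            rate += k * 10.0 ** (alpha * (m_i - m_min)) / (t - t_i + c) ** p
    return rate

# Hypothetical catalog of (time in days, magnitude) and illustrative parameter values.
catalog = [(0.0, 6.0), (1.2, 4.5), (3.0, 5.1)]
print(etas_rate(10.0, catalog, mu=0.2, k=0.05, alpha=1.0, c=0.01, p=1.1, m_min=3.0))
```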

B. Fault Model

The UCERFs are fault-based earthquake rupture forecasts; that is, most large earthquakes are expected to occur as ruptures on identified faults. The totality of identified faults constitutes the fault model, which is one of the four main components of the UCERF framework (Figure 2). In developing the UCERF3 fault models, WGCEP has adopted and modified the database created for UCERF2, the California Reference Geologic Fault Parameter Database. The updates to the database include significant fault revisions and the inclusion of new faults based on recent studies, as well as a re-evaluation of fault endpoints. The deformation model for the Cascadia subduction zone is discussed by Frankel and Petersen (2012) in Appendix P.

B.1. Definition

A fault model gives the spatial geometry of the larger, active faults throughout the region, with alternative models representing epistemic uncertainties in the fault-system geometry. By definition, a fault model is composed of a list of fault sections; each fault section is described by the following attributes (collected into an illustrative record in the code sketch at the end of this section):

- Fault Section Name (e.g., "San Andreas (Parkfield)")
- Fault Trace (list of latitudes, longitudes, and depths for the upper fault edge)
- Upper and Lower Seismogenic-Depth Estimates
- Average Dip Estimate
- Average Rake Estimate (although this can be modified by a deformation model)
- Fault Zone Polygon (an areal representation of a fault zone)

Because distinct Fault Sections are defined only to the extent that one or more of these attributes vary along strike, some fault sections can be quite long (e.g., the northern San Andreas fault has only four sections). A Fault Section does not define a rupture segment in UCERF3 (see Section D). The complete master list of fault sections for California is given in the Fault Section Database, which is part of the California Reference Geologic Fault Parameter Database. Some fault section entries in this database are mutually exclusive (e.g., representing alternative representations). A fault model is a list of fault sections that is intended to be a complete, viable representation of the large, known, and active faults throughout the region. These constitute the Fault Models key component of UCERF3 (Dawson, Appendix A).

B.2. Fault-Zone Polygons

The Fault Zone Polygon parameter is new to UCERF3 (Appendix O). It is intended for use in specifying:

- whether the fault section represents a simple surface or a broader, braided system of faults;
- the area over which the deformation-model slip rate applies;
- the faults to which observed micro-seismicity is attributed;
- the area over which elastic-rebound-based probability reductions are applied; and
- whether a future large earthquake is identified with a rupture of a UCERF3 modeled fault.
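For illustration only, the following sketch collects the fault-section attributes listed in Section B.1, including the new zone polygon, into a single record. The field names and types are assumptions made for this example; they do not reproduce the schema of the Fault Section Database or of OpenSHA.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FaultSection:
    """Minimal container for the fault-section attributes described in Section B.1."""
    name: str                                  # e.g., "San Andreas (Parkfield)"
    trace: List[Tuple[float, float, float]]    # (lat, lon, depth-km) points along the upper edge
    upper_seis_depth_km: float
    lower_seis_depth_km: float
    avg_dip_deg: float
    avg_rake_deg: float                        # may be modified by a deformation model
    zone_polygon: List[Tuple[float, float]] = field(default_factory=list)  # (lat, lon) vertices

example = FaultSection(
    name="Hypothetical Section",
    trace=[(35.0, -120.0, 0.0), (35.1, -119.9, 0.0)],
    upper_seis_depth_km=0.0, lower_seis_depth_km=12.0,
    avg_dip_deg=90.0, avg_rake_deg=180.0,
)
print(example.name, example.lower_seis_depth_km - example.upper_seis_depth_km, "km seismogenic")
```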

A retrospective example of the identification problem is whether, with respect to UCERF2, the 2010 El Mayor-Cucapah earthquake was an event on the Laguna Salada source or a background-seismicity event. In the UCERF2 parameterization this is ambiguous, but with the new fault zone polygons, the 2010 El Mayor-Cucapah earthquake would count as an event on the Laguna Salada source.

Figure 4. The definition and creation of fault zone polygons. (A) Perspective view of the Garlock, southern San Andreas, and San Jacinto fault systems, color coded by slip rate (other faults in this region are not shown). Down-dip projections of the faults are stippled and the geologically defined surface polygons are solid. The geologic polygons typically extend to 1 km on either side of the fault traces, but in many places are much broader to accommodate additional mapped surface features. (B) Schematic diagram of the union of geologic, surface-projection, and trace-buffer polygons to form the complete fault zone polygon used in UCERF3. The example fault dips at ~70°, so the buffer polygon extends to 6 km on either side of the fault trace. The dashed orange lines in the complete polygon mark the subdivisions used to define polygons for the individual fault sections. (C) Cross-strike cross section of a fault showing how dip variations influence the widths of the trace buffer, the surface projection, and the complete fault zone polygon used in UCERF3.

In UCERF3, each fault section represents a proxy source for all events that nucleate inside its fault-zone polygon. Polygons based on geologic considerations (Figure 4a) were assigned to each fault section in Appendix A, thereby satisfying the first bullet above.

However, for some faults, such as the San Andreas, this zone gets as narrow as 1 km on each side of the fault, which is too thin for the other intended uses listed above. This reflects the fact that there is no single polygon definition that will perfectly satisfy all intended uses.

A fault-zone width was effectively defined in UCERF2 based on standards established previously by the NSHMP for distinguishing fault-based sources from gridded, off-fault seismicity. Specifically, the maximum magnitudes for gridded seismicity were reduced in the vicinity of fault-based sources to avoid overlap with the minimum magnitude of fault sources. Although this led to a checkerboard pattern for the zones around faults, the average width of these zones is about 12 km on either side of vertically dipping faults. WGCEP has therefore adopted a default width of 12 km for vertically dipping faults, with this tapering to the surface projection for faults dipping less than 50 degrees.

More specifically, fault zone polygons are the combination (or union) of three independently defined polygons (Figure 4b): the geologically defined polygon for the fault; the surface projection of the fault, if dipping; and a buffer on either side of the fault trace. The width of the buffer polygon on either side of a fault trace scales linearly from 0 km at 50° dip to 12 km at 90° dip (Figure 4c); a simple numerical sketch of this scaling is given after Figure 5, below. This provides vertical faults with a broad zone of influence that scales down as dip decreases and the area of the surface-projection polygon increases. Figure 4b demonstrates how the three polygons are combined. This definition is consistent with past NSHMP practice, but also avoids a checkerboard pattern for the fault zones. This fault-zone definition is still somewhat arbitrary, however, and the potential hazard implications of this choice need to be considered carefully for any specific use.

B.3. Logic-Tree Branches

UCERF3 comprises two alternative fault models, FM 3.1 and FM 3.2, which are analogous to the fault-model alternatives FM 2.1 and FM 2.2 used in UCERF2. These two new models, shown in Figure 5, represent alternative representations of several fault groups. Reducing all possible combinations to just two models introduces some correlation between the alternatives for different faults; however, the groupings are judicious choices for minimizing the number of logic-tree branches and should be adequate in representing the epistemic uncertainties for the most common types of hazard estimates (e.g., mean hazard).

B.4. Development Process

Fault Models FM 3.1 and 3.2 were developed in coordination with the Statewide Community Fault Model (SCFM) project, which builds on SCEC's Community Fault Model development (Plesch et al., 2007). Two dedicated workshops were held to solicit feedback from the broader community, one on April 6, 2011 in southern California and another two days later in northern California (Table 2). Relative to UCERF2, the primary modifications are: (1) 162 new fault sections were added, mostly in northern California, and 76 fault sections were revised; (2) fault endpoints were re-examined by reviewing more detailed geologic maps, to enable better quantification of multi-fault rupture probabilities;

and (3) connector fault sections were added between larger faults where deemed appropriate, to enable multi-fault ruptures or, where needed, to define block boundaries for the deformation models. Figure 5 (top) shows UCERF3 FM 3.1 and compares it to UCERF2 FM 2.1; Figure 5 (bottom) displays the fault sections that are unique to Fault Models 3.1 and 3.2, respectively. The details of how these fault models were constructed are given by Dawson (2012) in Appendix A.

Figure 5. Fault Model 2.1 from UCERF2 (top) and the new UCERF3 Fault Model 3.1 (middle). Shown at the bottom are maps of the fault sections that are unique to Fault Models 3.1 and 3.2, respectively. The green polygons represent the type-C zones applied in UCERF2. The slip rates for Fault Model 2.1 are from the UCERF2 Deformation Model 2.1, and those shown for the UCERF3 fault models are from the Geologic Deformation Model. The Cascadia megathrust, which is excluded from this representation, is described in Appendix P.
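The dip-dependent buffer width described in Section B.2 reduces to a one-line rule; the sketch below simply encodes the stated scaling from 0 km at 50° dip to 12 km at 90° dip (the 70°, 6 km case matches the example in the Figure 4 caption). The function name and defaults are for this illustration only.

```python
def trace_buffer_width_km(dip_deg, max_width_km=12.0, min_dip_deg=50.0, max_dip_deg=90.0):
    """Half-width of the fault-trace buffer polygon (Section B.2): 0 km at dips of 50 degrees
    or less, scaling linearly to 12 km at 90 degrees."""
    if dip_deg <= min_dip_deg:
        return 0.0
    frac = (dip_deg - min_dip_deg) / (max_dip_deg - min_dip_deg)
    return max_width_km * min(frac, 1.0)

for dip in (30, 50, 70, 90):
    print(dip, "deg ->", round(trace_buffer_width_km(dip), 1), "km")
```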

C. Deformation Models

Forecasting earthquakes in California depends on measurements, estimates, and models of fault slip rates. In the fault-based UCERF models, the rates at which faults slip, when combined with magnitude-area relationships and magnitude-frequency distributions, control the majority of calculated earthquake rates. In addition to fault slip rates, identified crustal deformation not associated with modeled faults, termed off-fault deformation (even though the deformation is at least partially occurring on unmodeled faults), contributes to the earthquake hazard. Fault slip rates and off-fault deformation are assembled from geologic information, such as datable, offset markers that can be tied across a fault, and from modeling space-geodetic measurements like Global Positioning System (GPS) observations. GPS can also identify off-fault deformation, as can historical and current seismicity.

In the UCERF2 deformation models, each fault section of a fault model was assigned a single slip-rate estimate where available, and off-fault deformation was represented by a set of geographic polygons, the type-C zones, to which we assigned an effective slip rate (Figure 5a). The UCERF2 deformation models were based on the evaluation of geologic and geodetic data by expert opinion; the deformation was summed across various transects as an a posteriori check that the total plate-tectonic rate was matched. A goal for UCERF3 was to base the deformation models on more quantitative, kinematically consistent modeling that directly includes the geodetic data, rather than relying on expert opinion to broker any discrepancies between the inferred geodetic and geologic slip rates. A second goal was to replace type-C zones with more spatially refined off-fault strain estimates.

WGCEP also sought to remove an ambiguity endemic to fault-based deformation models: whether slip rates represent only slip on the main fault surface, or whether they include deformation in a zone surrounding the fault. This is now handled by adding the Fault Zone Polygon attribute to all fault sections, which defines a fault deformation zone across which the slip rates apply (see Section B.2). In UCERF3, for the first time, both geologic and geodetic data have been systematically combined into a set of deformation models. Each deformation model gives slip-rate estimates on the surface elements of the fault model, plus deformation rates outside the explicitly modeled fault polygons, which specify the off-fault deformation.

C.1. Geologic Slip-Rate Constraints

Because UCERF2 expert-opinion slip rates were influenced by both geologic and geodetic data, an effort was made to extract and compile the geologic-only constraints at points on faults where such data exist (Dawson and Weldon, Appendix B). In addition to geologic slip rates, Appendix B also includes the supporting data, such as information about the site location, offset features, dating constraints, number of events, reported uncertainties, comments, and separate qualitative ratings of the offset features, dating constraints, and overall slip rate. In the majority of cases, these data were compiled from the original sources, although extensive use was also made of the written summaries included in the USGS Quaternary Fault and Fold Database (USGS QFFD).

Given the number of Quaternary active faults in California, the dataset of geologic slip rates is surprisingly sparse.

The compilation includes ~230 reported slip rates, of which about 150 are ranked as moderately to well constrained. Of the ~350 fault sections in the UCERF3 fault model, only about 150 are directly constrained by slip-rate data (Figure 7b). This emphasizes the danger of presuming that previously defined slip rates are actually based on solid geologic data.

C.2. The Geologic Deformation Model

One of the new UCERF3 deformation models is a purely geologic model that includes no constraints from geodesy or plate-motion models (Appendix B). As with the other deformation models, the output is an estimated slip rate at all points on the model faults. Where available, these slip rates were assigned using the revised geologic data in Appendix B; elsewhere, best-estimate values were taken from UCERF2, except where the latter included hybrid slip rates (from both geology and geodesy) or where the old slip rates were inconsistent with other types of data, such as the USGS rate category and published slip rates.

One issue in deriving this model is that a number of fault sections had no previously assigned slip rate. In UCERF2, these were simply excluded from the deformation model (e.g., sections in the type-C zones). In UCERF3, these sections have been assigned to a rate category based primarily on recency of activity and, to a lesser extent, on geomorphic expression of fault activity and comparison to similar, nearby faults with known or assigned rates. When using recency of activity to assign slip-rate bounds, the following criteria were applied (the simple rule is also sketched in code at the end of this subsection):

- Quaternary active (< 1.6 Ma): 0.0 to 0.2 mm/yr
- Late Pleistocene (< ~130,000 years): 0.2 to 1.0 mm/yr
- Holocene (< ~11,000 years): 1.0 to 5.0 mm/yr

Very few faults were placed into the last category, primarily because the fastest-slipping faults are already well characterized throughout California. (An exception is offshore faults, which are difficult to study.) The best estimate was typically chosen to be the midpoint of the reported range, unless other data suggested an alternative. This deformation model is displayed in the middle panel of Figure 5 and also in Figure 7c.

Table 3 lists the moment-rate contributions from the various types of sources in UCERF2, and Table 4 lists the implied moment rates for the new deformation models. Of particular note is that the new faults, which include the UCERF2 faults that lacked slip rates plus the new faults added to UCERF3, constitute a collective moment rate slightly greater than the total off-fault moment rate in UCERF2 (the off-fault background and C-zone contributions in Table 3). Discounting the added faults, the UCERF3 Geologic Model represents only a 5% moment-rate increase over the UCERF2 model, primarily because of a few overlapping faults that were not explicitly tapered with the relative plate-motion rate in mind, as was done in the UCERF2 deformation models. Making these corrections, which are underway, should provide even better agreement.
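As a minimal illustration of the category-based assignment above (the category labels, bounds, and midpoint default mirror the text; the function name and dictionary are just for this sketch):

```python
# Slip-rate bounds (mm/yr) assigned by recency of activity for sections lacking direct data
# (Section C.2); the best estimate defaults to the midpoint of the assigned range.
RATE_BOUNDS_MM_YR = {
    "Quaternary (<1.6 Ma)": (0.0, 0.2),
    "Late Pleistocene (<~130 ka)": (0.2, 1.0),
    "Holocene (<~11 ka)": (1.0, 5.0),
}

def default_slip_rate(recency):
    lo, hi = RATE_BOUNDS_MM_YR[recency]
    return 0.5 * (lo + hi)

print(default_slip_rate("Late Pleistocene (<~130 ka)"))  # 0.6 mm/yr
```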

Figure 6. (a) Distribution of UCERF3 GPS velocity vectors for California, referenced to the North America plate (from Appendix C). Error ellipses represent 50% confidence regions. (b) UCERF2 residual velocities, computed as the difference between the observed GPS velocities and the predictions of the UCERF2 Deformation Model 2.1. The velocity scales of the two plots are the same. Some of the velocity differences in (b) are due to post-seismic effects of the Landers and Hector Mine ruptures, but overall the residual vectors indicate that UCERF2 underestimates the average statewide deformation rate.

C.3. Deformation Models from Joint Inversion of Geodetic and Geologic Data

An explicit goal of UCERF3 has been to replace expert-opinion slip rates and off-fault deformation zones with slip rates and off-fault strain rates derived from kinematically consistent inversions of GPS and geologic data. Several workshops and meetings were convened to address this problem (Table 2). Appendix C describes the GPS database shown in Figure 6 and discusses three deformation models developed for UCERF3 by inverting these data together with the geologic constraints:

NeoKinema: A model obtained by inverting geologic, geodetic, and principal-stress data using the finite-element method of Bird (2009) to estimate the long-term velocity field both on and off faults. It is not based on a block geometry.

Zeng: A model by Zeng and Shen (2012) that represents faults as buried dislocations in a homogeneous elastic halfspace. Each fault segment slips at a solved-for slip rate beneath a locking depth, except at a few segments where shallow creep is allowed. A continuity constraint allows adjustment between more and less block-like deformation. The model used here is on the less block-like end of this spectrum.

Averaged Block Model (ABM): A model constructed by averaging five different block models using a kinematically consistent method. The input models were Rob McCaffrey's DefNode, Bill Hammond's block model, Kaj Johnson's quasi-block model, and special (more block-like) versions of NeoKinema and Zeng's model. The averaging was done by using the slip rates from all five block-model inversions as data in a unified block-model inversion. An additional step was to compute slip rates on faults that are not on block boundaries, achieved by mapping half of the intra-block strain within 10 km on each side of a fault into that fault's slip rate; thus, half of the strain within this region remains off-fault.

Figure 7. (a) Fault slip rates for the UCERF2 Deformation Model 2.1, (b) sites of geologic slip-rate constraints, and (c-f) fault slip rates for the four UCERF3 deformation models.

Each of these deformation models provides slip-rate estimates for virtually every fault. They eliminate the UCERF2 type-C zones and handle distributed deformation by estimating strain-rate tensors on a 0.1° × 0.1° grid covering California.

This strain-rate grid accounts for all modeled deformation that is not accommodated by the faults. All of the models listed above were constrained by a consensus GPS velocity field constructed for this purpose (Appendix C), as well as by the geologic slip-rate constraints from Appendix B. The slip rates for these deformation models are shown in Figure 7, and the spatial distribution of moment rates is discussed below.

Table 3. Moment rates for the various types of sources in UCERF2 (M0, given in Nm/yr or dyne-cm/yr).5 [The numeric moment-rate entries were not recoverable in this transcription; the surviving percent-of-total entries are listed.]

- CA Faults: 73%
- Non-CA Faults: 3%
- Off-fault background: 16%
- C Zones (seismic): 4%
- C Zones (aseismic)
- Total (seismic)
- Total (including aseismic)

Notes:
1) Value reflects the 10% reduction for smaller earthquakes (generally < 6.5, which are treated as background seismicity) and aftershocks. The UCERF2 faults are only a subset of the UCERF3 fault model, to which more than 100 new fault sections have been added.
2) Faults outside of California (in Nevada and Oregon), but within the UCERF model region.
3) Value does not include type-C zones or deep seismicity near Cascadia (in the NSHMP file agrd_deeps_out), but does include aftershocks and the special areas for Brawley, Mendocino, and the Creeping SAF.
4) Half the moment rate in type-C zones was deemed aseismic.
5) Values in this table were compiled by running the OpenSHA method UCERF3.utils.UCERF2_MFD_ConstraintFetcher.computeMomentRates().

C.4. Creep and Aseismicity

In this report, the term creep refers to inter-seismic creep, which operates over decadal and longer time periods, rather than afterslip or transient/triggered creep, which operate over much shorter periods. Appendix D, by Weldon et al. (2012), documents observations of creep on California faults and develops a new methodology to estimate the seismic moment reduction due to creep. This work builds on UCERF2 Appendix P and approximately doubles the number of creep estimates, primarily from InSAR and dense geodetic network data. In addition, from micro-repeating earthquakes (recognized as repeatedly rupturing asperities embedded within a creeping fault surface), one can estimate creep at depth at a growing inventory of fault locations. These data also allow one to infer, through models, how creep extends to depth. UCERF3 research has built on the work of Savage and Lisowski (1993) to improve the estimates of how surface creep partitions into fault-area reduction (via the aseismicity factor) and slip-rate reduction (via the coupling coefficient).
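The implied maximum magnitude (Mmax) and mean recurrence interval values quoted with the moment rates in Table 4, below, come from constraining a truncated Gutenberg-Richter distribution to the observed rate of 8.7 M ≥ 5 events per year with a b-value of 1.0 (Appendix L). A minimal numerical sketch of that calculation follows; it assumes one common form of the Hanks-Kanamori moment-magnitude relation (log10 M0 = 1.5M + 9.05, M0 in N·m) and a hypothetical target moment rate, since the actual statewide values are tabulated in the report.

```python
import math

def truncated_gr_moment_rate(m_max, n_m5=8.7, b=1.0, m_min=5.0):
    """Total seismic moment rate (N*m/yr) of a truncated Gutenberg-Richter distribution
    with the given rate of M >= m_min events, b-value, and maximum magnitude."""
    beta = b * math.log(10.0)
    gamma = 1.5 * math.log(10.0)        # from M0 = 10**(1.5*M + 9.05) N*m
    c = 10.0 ** 9.05
    dm = m_max - m_min
    norm = n_m5 * beta / (1.0 - math.exp(-beta * dm))
    return norm * c * math.exp(gamma * m_min) * (math.exp((gamma - beta) * dm) - 1.0) / (gamma - beta)

def implied_m_max(target_moment_rate, lo=6.0, hi=10.0):
    """Bisect for the M_max whose truncated-GR moment rate matches the target."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if truncated_gr_moment_rate(mid) < target_moment_rate:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical statewide moment rate of 2.3e19 N*m/yr gives an implied M_max near 8.2.
print(round(implied_m_max(2.3e19), 2))
```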

Table 4. Moment rates (M0, given in Nm/yr) for the UCERF3 deformation models, plus implied values of maximum magnitude (Mmax) and mean recurrence interval (MRI) of M ≥ 8 events.6 [The table body lists, for Fault Models 3.1 and 3.2 under the Averaged Block Model, Geologic, NeoKinema, and Zeng deformation models, and for UCERF2 Deformation Model 2.1, the on-fault M0 (with the contribution of new faults in parentheses), the off-fault M0, the percent off-fault, the total M0 (seismic and off-fault aseismic), the increase over UCERF2, the change in on-fault M0 from UCERF2, the MRI, and Mmax; the numeric entries were not recoverable in this transcription.]

Notes:
1) Value includes fault-specific down-dip widths and creep-based moment-rate reductions; the default is 0.1 where no creep data exist. For reference, the average lower seismogenic depth is ~12 km in the UCERF3 Fault Models; with surface creep, the average seismogenic thickness is ~11 km. The values in parentheses are the moment-rate contributions from the more than 150 new fault sections added in UCERF3 (not included in UCERF2). However, UCERF3 does not include most of the Non-CA Faults listed in Table 3; contributions from these are included in "off fault" here.
2) Values from K. Johnson's analysis of off-fault strain rates (Appendix C), which assume a seismogenic thickness of 11 km. The exception is the Geologic model, for which there is no off-fault strain-rate map; its value was computed assuming the total moment rate for the Averaged Block Model is correct (for UCERF3 FM 3.1). The UCERF2 value includes the aseismic contribution from C zones (Table 3).
3) Relative to the UCERF2 total value (Table 3). UCERF3 values from the geodetic models include off-fault deformation, a fraction of which is likely aseismic. On-fault values have aseismic contributions removed. Much of the increase is from just three new UCERF3 fault sections: Brawley (Seismic Zone) alt 1, Cerro Prieto, and Mendocino.
4) Comparison of on-fault moment rate for the same faults as used in the UCERF2 model.
5) Implied values computed assuming a truncated GR distribution constrained to have the observed rate of 8.7 M ≥ 5 events per year (Appendix L) and a b-value of 1.0.
6) Values in this table were compiled by running the following OpenSHA method on 6/24/12: UCERF3.analysis.DeformationModelsCalc.calcMoRateAndMmaxDataForDefModels().

The aseismicity factor is defined as the fraction of area between the upper and lower seismogenic depths over which all slip is released aseismically. By reducing rupture area, aseismicity reduces the magnitudes of events in UCERF models.
is fully seismic (not accommodated by creep); values less than 1.0 generally reduce the rates of events rather than their magnitudes. In UCERF2, about 20% of the fault sections were assigned non-zero aseismicity factors. None of the UCERF2 fault sections had site-specific coupling coefficients; instead, slip rates were reduced across the board by 10% on all faults. This reduction accounted for aftershocks, sub-seismogenic ruptures on the faults, and the extent to which distributed off-fault deformation was mapped onto the fault slip rates.

Figure 8. (a) The UCERF3 creep model, which specifies moment-rate reduction as a function of the creep-rate/slip-rate ratio. (b) The moment-rate reductions in the UCERF3 creep model are applied in two ways: as an aseismicity factor, which reduces seismogenic area, and as a coupling coefficient, which reduces the slip rate. See text for details.

In UCERF2, surface creep was only applied as an area reduction, by setting the aseismicity factor equal to the surface creep rate divided by the total average slip rate. It is now understood that most creep
is shallow and decreases rapidly with depth, which implies that the UCERF2 approach almost certainly over-reduced the seismogenic moment rate. A new model to account for the dependence of creep on depth and slip rate, developed by Weldon et al. (2012) in Appendix D, is summarized in Figure 8. Moment-rate reductions as a function of the creep fraction (the creep-rate/slip-rate ratio) were obtained by integrating over a depth-dependent model (Figure 8a). These reductions are applied as two limiting cases (Figure 8b):

Low creep fraction. For faults with small amounts of creep, the aseismic slip is assumed to occur at the surface, while at depth the fault remains fully coupled. In the UCERF3.0 implementation, moment-reduction factors up to 0.9 are applied only as an aseismicity factor, which reduces seismogenic area.

High creep fraction. For highly creeping faults, aseismic slip is assumed to occur at all depths. Moment-reduction factors greater than 0.9 are applied as a coupling coefficient, which reduces the slip rate.

In other words, the first 90% of moment-rate reduction comes from a seismogenic-area reduction, and any remaining moment-rate reduction (above 90%) is applied as a slip-rate reduction. As discussed in Appendix D, the UCERF3.0 transition threshold is set to 90% because the Parkfield section requires a large area reduction in order to reproduce a characteristic magnitude of approximately 6.0. Total moment-rate reduction in this implementation is capped at 95%, for which area is reduced by 90% and slip rate is reduced by 50%. The slip-rate reductions on highly creeping faults act to limit the rate of through-going ruptures, in particular on the San Andreas Creeping Section. The dependence on long-term slip rate (Figure 8a) implies that the moment-rate reductions vary among the deformation models. In UCERF2, the moment-rate reduction was set to zero for faults with no creep data. However, it is very difficult to recognize creep on most California faults, where slip rates are on the order of a few mm/yr, especially for low creep fractions. To account for this observational bias, UCERF3.0 sets a default value for the aseismicity factor at 0.1, implying a 10% area reduction.

C.5. Implied Moment Rates

The total moment rate (seismic plus aseismic) for UCERF2, not including the Cascadia subduction zone, is given in Table 3. This integrated rate implies a maximum magnitude (Mmax) of 8.15 for a truncated Gutenberg-Richter (GR) frequency-magnitude distribution constrained to have the observed rate of 8.7 M ≥ 5 events per year and a b-value of 1.0, which are the preferred UCERF3.0 values of Felzer (2012b) in Appendix L. The corresponding mean recurrence interval (MRI) for M ≥ 8 events is 380 years. In comparison, the total moment rates computed for the UCERF3 deformation models are 10% to 19% larger (Table 4). The corresponding Mmax varies from 8.25 to 8.32 and the MRI from 265 to 220 years. The large residuals in Figure 6b, which are independent of the UCERF3 models, indicate that UCERF2 underestimates the total moment rate.

The models also differ in the percentage of off-fault versus on-fault moment, with the Geologic Deformation Model having the least off-fault moment (~11% for Fault Model 3.2, assuming the ABM has the correct total moment rate). The geodetically constrained models, on the other hand, have between 25%
and 35% of their moment off faults. The most direct comparison to UCERF2 is the on-fault moment rate with the new faults excluded. In this comparison, the geodetic models result in a moment-rate decrease relative to UCERF2, ranging from -6% to -27%. The UCERF3 Geologic model has a 5% increase in on-fault moment rate, primarily because overlapping fault traces were not manually tapered to fit relative plate motions, as they were in UCERF2.

Figure 9 shows the spatial distribution of moment rates implied by the deformation models. For comparison, Figure 10 shows the spatial distribution of moment rate implied by both the UCERF2 and UCERF3 smoothed seismicity maps (the latter from Appendix M of Felzer, 2012c).

In summary, the deformation models have several implications for moment rates. Geodetic observations capture all current strain, both seismogenic and aseismic; as a result, models based on these data represent high end-members of calculated moment rate. Fault-based geologic models necessarily represent low end-members, because it is impossible to identify every possible fault. When the comparison is restricted to the moment rates on faults present in both the UCERF2 and UCERF3 models, only the UCERF3 Geologic model shows a moment increase, and it is small (5%). Therefore, the moment increase in the UCERF3 deformation models comes principally from newly included faults plus off-fault strain. Since only the geodetically determined spatial distribution of strain (and not its magnitude) is planned for use in the Earthquake Rate Model, the UCERF3 deformation models do not necessarily imply any moment increase resulting from geodetic observations. Even if geodetic off-fault strain were to contribute to moment-rate calculations, it could, for example, be subject to a 50% aseismicity factor, as was done for C-Zones in UCERF2.

C.6. Logic-Tree Branch Weights

All the UCERF3 deformation models fit their intended input datasets to a reasonable degree. In general, the fits to geologic observations are better than the fits to GPS, with a mean misfit of less than 2 mm/yr for all models in comparison to the UCERF3 Geologic Model. Reduced chi-squared estimates of fit to the GPS data range from χ²_red = 10.1 for the ABM to χ²_red = 15.3 for NeoKinema. These values are high, but not unusually so, given the small reported errors on GPS observations and a desire to avoid overfitting the data, which would lead to strong along-fault slip-rate variations. Additional criteria for judging the merit of the models include (1) the total moment release rate, (2) consistency with relative plate-motion vectors, (3) rake consistency with well-studied faults, and (4) agreement with geologic slip rates. The geologic data of Appendix B were used as input to all four models. The Geologic Model fits these data precisely. The continuum solutions (NeoKinema and Zeng) make the smallest possible changes to the Geologic Model needed to fit the GPS data and are thus similar in character. This suggests giving the Geologic Model a weight that is commensurate with the view that the current geodetic signal is not consistent with the long-term relative plate-motion rate and direction.

Figure 9. Spatial distribution of moment rates on faults (first column), off faults (second column), and the total (third column) for the UCERF2 and new UCERF3 deformation models. The latter were all computed using UCERF3 Fault Model 3.1. The fourth column shows the ratio of the total moment rate for each model relative to the average UCERF3 deformation model. (This average excludes the Geologic model because it lacks off-fault estimates.) Off-fault moment rates for new models assume a seismogenic thickness of 11 km.

The UCERF3 core working group (WGCEP ExCom and Key Contributors) reviewed the deformation models and reported that all four models are viable. However, several characteristics suggest that the continuum solutions deserve higher weights: they have more smoothly varying long-term slip rates, more constant rakes, and a higher proportion of total moment on identified faults, and they have fewer very low slip-rate subsections on high slip-rate faults and fewer rake reversals. In addition, the NeoKinema model is fit to an independent stress-direction data set. Based on these recommendations, WGCEP fixed the UCERF3.0 weights at 35% for NeoKinema, 30% for Zeng, 20% for the Average Block Model, and 15% for the Geologic Model. Of course, these weights are subject to a posteriori modifications. Most notably, it has been found that the geodetic solutions break with geologic consensus on a few low-slip-rate faults in the Los Angeles and San Francisco Bay regions (see Figure 31), and the changes are sufficiently large to affect hazard estimates. A fault-by-fault review is being conducted that will identify these faults, and the final deformation models will be edited.

Figure 10. The spatial distribution of moment rates implied by the UCERF2 smoothed-seismicity model (Appendix J of that report) and the new UCERF3 smoothed-seismicity model (Appendix M of this report). The calculation assumes a Gutenberg-Richter distribution with the same maximum magnitude everywhere, and also assumes a total regional moment rate equal to the ABM model value.
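As a concrete illustration of how the implied Mmax and MRI values of Section C.5 and Table 4 are obtained, the following sketch balances the moment rate of a truncated Gutenberg-Richter distribution (b = 1.0, 8.7 M ≥ 5 events/yr) against a total regional moment rate. The moment-rate value used here is a placeholder, not a UCERF number, and the result also depends on the truncation convention, so the output should be read only as a minimal worked example of the calculation.

# Minimal sketch: given a total regional moment rate, find the Mmax of a
# truncated Gutenberg-Richter MFD (b = 1.0, 8.7 M>=5 events/yr) whose moment
# rate matches it, then report the mean recurrence interval of M>=8 events.
import numpy as np
from scipy.optimize import brentq

B_VALUE = 1.0
RATE_M5 = 8.7          # M>=5 events per year (Appendix L preferred value)
DM = 0.01              # magnitude bin width


def truncated_gr(m_max):
    """Incremental rates on a magnitude grid from M5 to m_max, summing to RATE_M5."""
    mags = np.arange(5.0 + DM / 2, m_max, DM)
    rel = 10.0 ** (-B_VALUE * mags)
    return mags, RATE_M5 * rel / rel.sum()


def moment_rate(m_max):
    mags, rates = truncated_gr(m_max)
    return np.sum(rates * 10.0 ** (1.5 * mags + 9.05))   # N*m/yr


target = 2.3e19  # PLACEHOLDER total moment rate (N*m/yr); not a UCERF value
m_max = brentq(lambda m: moment_rate(m) - target, 6.0, 9.5)
mags, rates = truncated_gr(m_max)
mri_m8 = 1.0 / rates[mags >= 8.0].sum()
print("implied Mmax = %.2f, MRI of M>=8 = %.0f yr" % (m_max, mri_m8))

With a larger target moment rate, both Mmax and the rate of M ≥ 8 events increase, which is why the UCERF3 deformation models imply shorter MRIs than UCERF2.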

D. Earthquake Rate Models and the Grand Inversion

The earthquake-rate component of the UCERF3 model framework (Figure 2) defines the long-term rate of all possible earthquake ruptures above the magnitude threshold (M ≥ 5) and with a discretization sufficient to represent hazard. Each earthquake-rate model comprises two types of sources: (1) ruptures with dimensions larger than the seismogenic depth occurring on explicitly modeled faults (supra-seismogenic on-fault ruptures), and (2) other earthquakes modeled as gridded seismicity, where each cell of a 0.1° × 0.1° geographic grid is assigned a magnitude-frequency distribution of earthquake nucleation rates. The gridded seismicity is further separated into grid cells that are inside a fault-zone polygon (sub-seismogenic on-fault ruptures) and those outside fault-zone polygons (off-fault ruptures). Cells partially inside a fault-zone polygon are fractionally apportioned; Appendix O describes the bookkeeping details.

Rather than building the models for each fault and separately adding background seismicity, as was done in UCERF2, the UCERF3 procedure is to solve for the rates of all events simultaneously using the inverse approach outlined by Field and Page (2010), which builds on the work of Andrews and Schwerer (2000). This approach allows the relaxation of fault segmentation and the inclusion of multi-fault ruptures, which is a major goal of the UCERF3 project. The implementation of this unified methodology in the UCERF3 framework has become informally known as the Grand Inversion.

The UCERF3 earthquake-rate models include aftershocks. However, the UCERF3 framework includes standardized procedures for removing aftershocks when appropriate, e.g., for the NSHMP or in constructing the Earthquake Probability Models (see Section D.5).

D.1. Methodology

The Grand Inversion methodology is fully described by Page et al. (2012) in Appendix N. To summarize its salient aspects, we first consider only those ruptures that occur on the faults that populate the Fault and Deformation Models, and we only model events that have a rupture length greater than or equal to the seismogenic thickness. To relax segmentation, we subdivide each fault section into S equal-length subsections with lengths of about half the seismogenic thickness (e.g., S = 2601 for the example shown in Figure 11). The surfaces of possible ruptures are taken to be the complete set of two or more contiguous fault subsections. The minimum of two subsections ensures that the minimum rupture lengths are approximately equal to the seismogenic thickness, since subsection lengths are about half that. Contiguous means subsections separated by less than some specified distance; for UCERF3.0, this maximum separation was set at 5 km, based on the assessment of Biasi (2012, Appendix J). The rupture set is further filtered by retaining only those fault-to-fault ruptures that pass all of the following viability criteria (Appendix N):

1. All fault sections connect within 5 km or less.
2. Ruptures cannot include a given subsection more than once.
3. Ruptures must contain at least 2 subsections of any main fault section.
4. Ruptures can only jump between fault sections at their closest points (in 3D).
5. The maximum azimuth change between neighboring subsections is 60°.
6. The maximum azimuth change between the first and last subsection is 60°.
7. The maximum cumulative rake change (summing over each neighboring subsection pair) is 180°. The Geologic deformation model rakes are used to ensure rupture-set consistency among branches for each fault model.
8. The maximum cumulative azimuth change, computed by summing absolute values over each neighboring subsection pair, is less than 560° (a filter that reduces "squirrelyness").
9. Branch points (potential connections between main fault sections) must pass the Coulomb criteria described in Appendices J and N.

This filtering leads to 220,045 and 220,094 unique viable ruptures for Fault Models 3.1 and 3.2, respectively. For reference, mapping the UCERF2 ruptures into their nearest equivalents in Fault Model 3.1 yields only 7,773 ruptures. The much larger UCERF3 rupture set reflects the high connectivity of the California fault system: nearly all the subsections in the fault models can be connected to nearly all others without jumping more than 5 km (green subset in Figure 11). This high connectivity was largely ignored in UCERF2.

Figure 11. Fault Model 3.1 sections divided into an integer number of equal-length subsections (lengths equal to, or just less than, half the section's seismogenic thickness). All subsections shown in green are connected to all others in green without jumping more than 5 km between faults.
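To illustrate the flavor of this filtering, the sketch below checks a candidate rupture (an ordered list of subsections) against simplified versions of the jump-distance, subsection-count, and azimuth-change criteria. The thresholds are the ones listed above, but the data structures, distance function, and example coordinates are hypothetical stand-ins; the actual UCERF3 implementation also applies the rake-change and Coulomb criteria (Appendices J and N) and operates on the full statewide subsection inventory.

# Simplified sketch of the rupture-viability filtering described above.
from collections import Counter
import math

MAX_JUMP_KM = 5.0
MAX_AZIMUTH_CHANGE = 60.0       # degrees, neighbor-to-neighbor and end-to-end
MAX_CUM_AZIMUTH_CHANGE = 560.0  # degrees, summed absolute changes


def angle_diff(a1, a2):
    """Smallest absolute difference between two azimuths, in degrees."""
    d = abs(a1 - a2) % 360.0
    return min(d, 360.0 - d)


def is_viable(rupture, distance_km):
    """rupture: ordered list of dicts with 'id', 'parent', and 'azimuth'.
    distance_km(a, b): separation between two subsections in km."""
    ids = [s["id"] for s in rupture]
    if len(ids) != len(set(ids)):                       # no repeated subsections
        return False
    if any(n < 2 for n in Counter(s["parent"] for s in rupture).values()):
        return False                                    # >= 2 subsections per section
    changes = []
    for a, b in zip(rupture[:-1], rupture[1:]):
        if distance_km(a, b) > MAX_JUMP_KM:             # contiguity / jump limit
            return False
        changes.append(angle_diff(a["azimuth"], b["azimuth"]))
    if changes and max(changes) > MAX_AZIMUTH_CHANGE:
        return False
    if angle_diff(rupture[0]["azimuth"], rupture[-1]["azimuth"]) > MAX_AZIMUTH_CHANGE:
        return False
    return sum(changes) <= MAX_CUM_AZIMUTH_CHANGE


# Hypothetical usage with planar (x, y) coordinates in km:
subs = [{"id": i, "parent": "A", "azimuth": 0.0, "xy": (0.0, 6.0 * i)} for i in range(3)]
dist = lambda a, b: math.hypot(a["xy"][0] - b["xy"][0], a["xy"][1] - b["xy"][1])
print(is_viable(subs, dist))   # False: the 6-km jumps exceed the 5-km limit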

Box 1. Grand Inversion Equations

A system of equations to solve for the long-term rate or frequency (f_r) of each r-th rupture. These constraints can be applied with varying weights to balance the influence of each.

Equation Set (1), Slip-Rate Balancing:
    Σ_{r=1}^{R} D_sr f_r = v_s
Here v_s is the subsection slip rate (from a deformation model) and D_sr is the average slip on the s-th subsection in the r-th event (by average we mean over multiple occurrences of the rupture, and as measured at mid-seismogenic depth).

Equation Set (2), Paleoseismic Event-Rate Matching:
    Σ_{r=1}^{R} G_sr P_r^paleo f_r = f_s^paleo
Here f_s^paleo is a paleoseismically inferred event-rate estimate (where known), G_sr indicates whether the r-th rupture involves the s-th subsection, and P_r^paleo is the probability that the r-th rupture would be seen in a paleoseismic trench.

Equation Set (3), Improbability Constraint:
    λ_r f_r = 0
This allows us to force relatively improbable events to have a lower rate (e.g., based on multi-fault rupture likelihoods). A higher value of λ_r adds more misfit for a given rupture rate, forcing the inversion to minimize that rupture rate further.

Equation Set (4), A Priori Constraint:
    f_r = f_r^(a priori)
Constrain the rates of ruptures to target values. This can be used on individual ruptures (e.g., make Parkfield occur every ~25 years) or on a complete rupture set in order to obtain a unique solution of interest (e.g., keep final rates as close as possible to those in UCERF2 while satisfying other data).

Equation Set (5), Regional Gutenberg-Richter (GR) Constraint:
    Σ_{r=1}^{R} M_mr^g f_r ≤ GR_m^g
This forces geographic regions (or sub-regions, g) to have a magnitude-frequency distribution that is less than or equal to a Gutenberg-Richter rate (an inequality constraint to prevent over-prediction "bulges"). GR_m^g represents the GR rate of the m-th magnitude bin in the g-th sub-region, and the matrix M_mr^g contains the product of whether the r-th rupture falls in the m-th magnitude bin (either 0 or 1) multiplied by the fraction of that rupture that nucleates within the g-th sub-region. Note that GR_m^g needs to have truly off-fault and sub-seismogenic ruptures removed, and that this can also be applied as an equality constraint (both discussed in the main text).

Other equations can be added as discussed in the text.

The inversion method estimates the long-term rates of the R viable ruptures, { f_r : r = 1, 2, ..., R }, by solving the system of equations described in Box 1. The equations in the inversion can be weighted by the uncertainties in the data or by the degree of belief in a particular constraint. Conceptually, this approach is simpler and more objective, reproducible, and consistent than that adopted in UCERF2. The (largely
artificial) distinction between Type-A and Type-B faults has been dropped, and Type-C zones have been removed or included as off-fault seismicity. However, the inversion method can still accommodate expert opinion and subjectivity in the form of weights applied to the different inversion constraints. Expert opinion is also used to weight the various logic-tree branches in a UCERF3 model, which will generally include alternative rate models from different inversions. Therefore, the Grand Inversion constitutes a more flexible framework for incorporating expert opinion as well as data-based constraints. The framework is also extensible: constraints other than those in Box 1 can easily be added to the inversion.

In their exploratory study, Field and Page (2010) solved the inverse problem with the non-negative least squares algorithm of Lawson and Hanson (1974). This algorithm is not computationally feasible for an inversion using the statewide system of faults. Therefore, we have developed a parallelized code that can efficiently solve very large equation sets by simulated annealing (Appendix N). Owing to its computational structure and efficiency, simulated annealing can also provide a range of models that sample the solution manifold of this inverse problem, which generally is both under-determined and over-determined (mixed-determined). We are investigating how this sampling can be used to represent the epistemic uncertainty associated with model non-uniqueness. Other important details of the inversions, including the weights applied to the various equation sets in UCERF3.0 and our use of high-performance computing, are discussed in Appendix N.

D.3. Implementation Ingredients

This section describes the various data and models used in the inversion for UCERF3.0. Inversion results are summarized in Section D.6.

D.3.1. Slip-Rate Balancing (Equation Set 1)

Slip-rate balancing requires knowing the average slip on the s-th subsection in the r-th rupture (D_sr), where the averaging is done over multiple occurrences, and the slip value is taken to be that at mid-seismogenic depths. Using slip-rate balancing rather than moment balancing avoids depth-of-rupture ambiguities. The average slip for a given rupture, D_r, is partitioned among the subsections to get D_sr. As in UCERF2, the magnitude of each rupture is computed from a magnitude-area relationship, M(A_r). In UCERF2 the Hanks and Bakun (2008) and Ellsworth-B (WGCEP, 2003) relationships were used with equal weights. For UCERF3, we have also included a slightly modified version of the Shaw (2009) relationship, as justified by Shaw (2012) in Appendix E. These three relationships, referred to hereafter as HanksBakun08, EllsworthB, and Shaw09mod, are plotted in Figure 12.

Two different approaches are used to get D_r for each rupture. One, used in UCERF2, involves converting the magnitude from the M(A_r) relationship to moment and then dividing by the rupture area (A_r) and the shear rigidity (μ) to get D_r:

    D_r = M0_r / (μ A_r) = 10^(1.5 M(A_r) + 9.05) / (μ A_r)
where M0_r is the moment of the r-th rupture. Because A_r in the above equation is based on the depth of microseismicity, D_r might be an overestimate if larger ruptures penetrate deeper than smaller ruptures. An alternative is to obtain D_r using one of two viable slip-length scaling relationships derived in Appendix E from surface-slip observations. The first assumes slip scales as the square root of area ("SqrtLength"), and the second assumes constant stress drop ("ConstStressDrop"). Examples obtained using these two slip-length models, as well as using the three magnitude-area relationships above, are given in Figure 12 (bottom). The slip-length models generally give a smaller D_r for a given rupture, which could be real (e.g., due to slip penetrating below the depth of microseismicity) or could reflect a bias in the slip measurements (e.g., surface values lower than those at seismogenic depths). Including these slip-length relationships can account for this type of epistemic uncertainty, which was not included in UCERF2.

Figure 12. The magnitude-area (top) and slip-length (bottom) relationships used in UCERF3, which are documented in Appendix E. The M(A)-derived slip-length curves at the bottom assume an average seismogenic thickness of 11 km in converting area to length.
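As a numerical illustration of the first approach, the sketch below converts rupture area to magnitude with a commonly quoted form of the Ellsworth-B relation (M = 4.2 + log10 A, with A in km²), converts magnitude to moment using the expression above, and divides by μA_r to obtain D_r. The rigidity value and the example rupture dimensions are illustrative assumptions.

# Minimal sketch of the first D_r approach: area -> magnitude (Ellsworth-B)
# -> moment -> average slip. Rigidity is an illustrative assumption.
import math

MU = 3.0e10  # shear rigidity (Pa)


def ellsworth_b_mag(area_km2):
    return 4.2 + math.log10(area_km2)


def average_slip_m(area_km2):
    mag = ellsworth_b_mag(area_km2)
    moment = 10.0 ** (1.5 * mag + 9.05)          # N*m, as in the text
    return moment / (MU * area_km2 * 1.0e6)      # area converted to m^2


# Example: a 100 km x 11 km rupture (1100 km^2)
print("M = %.2f, D_r = %.2f m" % (ellsworth_b_mag(1100.0), average_slip_m(1100.0)))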

As discussed in Appendix E, some magnitude-area and slip-length relationship combinations are incompatible given their underlying assumptions. For example, using HanksBakun08 to get magnitude from area and then using ConstStressDrop to get slip from length can lead to unreasonably large implied down-dip widths for long ruptures. Table 5 lists the combinations allowed in UCERF3.0, together with their associated weights (which are also shown in the logic tree in Figure 3). All branches are given equal weight.

Table 5. Magnitude-area and slip-length model combinations used in UCERF3, as described in the text. See Appendix E for equations and further justification.

Magnitude-Area Relationship    Slip-Length Relationship    Branch Weight
EllsworthB                     EllsworthB                  20%
HanksBakun08                   HanksBakun08                20%
Shaw09mod                      Shaw09mod                   20%
EllsworthB                     SqrtLength                  20%
Shaw09mod                      ConstStressDrop             20%

D_r must then be partitioned among the subsections to get D_sr, which, again, represents the average over multiple occurrences of the given rupture. As discussed by Biasi (2012) in Appendix F, the preferred choice is the observationally based tapered-slip (square-root-sine) model of Weldon et al. (2007), which was applied to Type-A faults in UCERF2. Doing so assumes that the intra-event, along-strike variability averages over multiple occurrences to the tapered shape. In UCERF3.0, this simple taper is also applied to multi-fault ruptures; there is no pinching out of the slip across internal stepovers. Of course, the choice of D_sr model should be consistent with how slip rates vary in the deformation model; e.g., if slip is persistently low at fault stepovers, then the slip rates should be lower there. Because the deformation models do not resolve such along-strike slip-rate variations, applying a multi-rainbow shape is not warranted, especially given the epistemic uncertainties on what is happening at seismogenic depths. However, because the deformation-model slip rates do not ramp down toward the ends of faults that terminate (i.e., where there are no connections to other faults), applying the tapered slip model tends to produce a higher rate of smaller earthquakes at these endpoints (Appendix N). To minimize this potential artifact, UCERF3.0 also utilizes a uniform (boxcar) slip distribution (D_sr = D_r) as an alternative branch in its logic tree. Both options are given equal weight, although the boxcar model is chosen as the reference-branch option. A more correct solution may be to taper both the slip and the slip rates at fault terminations, a target for future research. Better observations are needed to constrain how the multi-fault slip functions actually vary in nature.

The UCERF3 framework is capable of applying the WGCEP (2003) D_sr model, in which the slip is proportional to the slip rate of each subsection. However, this option was given zero weight for the same reasons it was zeroed out in the UCERF2 logic tree: lack of observational support and the implication that some ruptures cannot happen. While there also appears to be some evidence to support a characteristic-slip model, in which the amount of slip on a subsection is similar for all ruptures, applying this would be difficult due to very limited observational constraints (requiring the propagation of large epistemic
uncertainties with unknown spatial correlation structure). Nevertheless, future studies should examine the consistency of UCERF3 results with paleoseismic slip data, including the coefficient-of-variation analysis of Hecker et al. (paper in review).

Because the inversion solves for the rate of seismogenic-thickness and larger ruptures, the slip rates in Equation Set (1) need to be reduced to account for the moment released in sub-seismogenic-scale events. As discussed below, the correction for a Characteristic Inversion Model is different from that of a Gutenberg-Richter Inversion Model.

D.3.2. Paleoseismic Event-Rate Matching (Equation Set 2)

In Appendix G, Weldon et al. (2012) provide an updated compilation of the paleoseismic data on large-event recurrence intervals at various locations in California. In Appendix H, Parsons (2012) provides estimates of the mean paleoseismic event rates, f_s^paleo, for these data. In particular, the new methodology in Appendix H does not assume segmentation or any particular underlying probability distribution. The updated rates are compared with the values used in UCERF2 in Table 6. A new model for the probability of seeing a given rupture in a trench, P_r^paleo, is given by Weldon (2012) in Appendix I. The probability depends on both the average slip of the rupture (D_r) and the position of the site relative to the nearest end of the rupture, implying that one is less likely to observe surface offsets near the ends of a rupture, consistent with the tapered slip model. Table 7 lists representative values for P_r^paleo. This could be done on a trench-by-trench basis to account for the unique depositional environment of each site; however, the probability model adopted in UCERF3.0 is generic, not site-specific.

D.3.3. Improbability Constraint (Equation Set 3)

Improbability constraints can enforce relatively low probabilities on any designated event or event type (e.g., multi-fault ruptures that involve large jumps). If the relative rupture probabilities are P_r^multifault, ranging from 1.0 for ruptures that have no obvious impediments to 0.0 for impossible ruptures, then the associated equation for each rupture in the inversion is given a weight of 1/P_r^multifault. Appendix J (by Biasi) summarizes the various observational and theoretical studies that can guide the assignment of improbability constraints. As noted above, the Coulomb calculations given in Appendix J were used to cull the total UCERF3.0 set down to the subset of ruptures deemed viable. Because the rates of multi-fault ruptures are already constrained in the inversion by both slip-rate balancing (larger ruptures consume more slip) and the regional GR constraint (larger events have lower collective rates), the improbability constraints can be redundant; therefore, they are not applied in UCERF3.0 (see Appendix N). The need for the improbability constraint will require further exploration.
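To make the structure of Equation Sets (1) and (2) concrete, the following toy sketch sets up a fabricated three-subsection, four-rupture system and solves it with non-negative least squares, in the spirit of the exploratory study of Field and Page (2010). All slip values, slip rates, detection probabilities, and weights are invented for illustration; the actual UCERF3 inversion is solved by weighted simulated annealing on the full statewide rupture set (Appendix N).

# Toy illustration of Equation Sets (1)-(2): three subsections, four candidate
# ruptures, fabricated slip rates and one paleoseismic rate.
import numpy as np
from scipy.optimize import nnls

# D[s, r]: average slip (m) of rupture r on subsection s (0 where not involved)
D = np.array([[1.0, 0.0, 1.5, 2.0],
              [1.0, 1.0, 1.5, 2.0],
              [0.0, 1.0, 0.0, 2.0]])
v = np.array([0.020, 0.025, 0.010])      # target slip rates (m/yr), invented

# Paleoseismic constraint at a site on subsection 1 (index 1):
P_paleo = np.array([0.6, 0.6, 0.8, 0.9]) # detection probabilities, invented
G = np.array([1.0, 1.0, 1.0, 1.0])       # all four ruptures cross the site
f_paleo = 0.01                           # inferred event rate (1/yr), invented
w_paleo = 1.0                            # relative weight of this constraint

A = np.vstack([D, w_paleo * (G * P_paleo)])
b = np.concatenate([v, [w_paleo * f_paleo]])

rates, residual = nnls(A, b)             # long-term rupture rates f_r >= 0
print("rupture rates (1/yr):", np.round(rates, 5))
print("fit slip rates (m/yr):", np.round(D @ rates, 4))

In the full problem, rows carrying the improbability, a priori, and regional MFD constraints of Equation Sets (3) through (5) are appended in the same way, each with its own weight.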

Table 6. Event-rate estimates at paleoseismic sites from Appendix H, including new data from Appendix G. Comparisons to UCERF2 values are also listed for sites that were available in that study.

Table 7. Example values from the probability-of-paleoseismic-detection model (P_r^paleo) used in Equation Set (2) of the inversion, from Appendix H. Columns give the average slip (D_r, meters), the approximate magnitude, and the probability of detection at fractional distances from the end of the rupture (Dist = 0.05, Dist = 0.25, ...).

D.3.4. Other Constraints (Related to Equation Sets 4 & 5)

Equation Sets (4) and (5) are most easily understood in the context of defining the gridded seismicity. Because the implementation differs between the Characteristic and Gutenberg-Richter branches, further equation details are given in the respective sections below. Here we describe the constraints common to all branches.

A primary constraint and logic-tree branch choice is the total rate of M ≥ 5 events per year within the UCERF3 model region (Figure 1), denoted R_{M≥5}^{total}. The seismicity analysis in Appendix L (Felzer, 2012b) has determined three branch options for R_{M≥5}^{total}:

7.6 events/yr (10% weight)
8.7 events/yr (60% weight)
10.0 events/yr (30% weight)

The spatial distribution of the off-fault gridded seismicity is set by choosing a spatial probability density function (SpatialPDF) map, such as those shown in Figure 13. One option is the UCERF2 smoothed-seismicity map shown in Figure 13a (from Appendix J of the UCERF2 report, and based on Frankel, 1995), which uses a 2D Gaussian smoothing kernel with a sigma of 50 km and a somewhat narrower, anisotropic smoothing near some active faults. Felzer (2012c) in Appendix M gives an alternative (Figure 13b); this UCERF3 smoothed seismicity was derived by an adaptive smoothing algorithm in which the kernel width depends on data density (Helmstetter et al., 2007). The higher resolution of the UCERF3 smoothing comes with greater uncertainty, but it is recommended by its superior performance in the formal RELM test (Zechar et al., 2012) and by its seismicity localization, which in some areas is more consistent with surveys of precariously balanced rocks (Brune et al., 2006). UCERF2 smoothing may be more appropriate for the larger events that dominate the hazard analysis (M > 6) and for the 50-year spans of interest to building codes. Another option for the SpatialPDF is to use off-fault moment-rate maps from the deformation models of Appendix C. UCERF3.0 includes only a single off-fault moment-rate map, which represents an average of the ABM, NeoKinema, and Zeng models (Figure 13c).

Like previous WGCEP and NSHMP models, the UCERF3 framework explicitly differentiates between fault-based sources and gridded seismicity, starting with the deformation model. The fraction of observed seismicity attributable to off-fault events relative to fault-based sources is implied by the choice of the SpatialPDFs of Figure 13 and the fault-zone polygons of the fault model. The SpatialPDF values inside all fault polygons are summed to ensure that rates properly add up in the final model (Appendix O). For the relevant branch options, the percentage of on-fault seismicity is:

53% for UCERF2 smoothed seismicity + Fault Model 3.1
...% for UCERF2 smoothed seismicity + Fault Model 3.2
...% for UCERF3 smoothed seismicity + Fault Model 3.1
...% for UCERF3 smoothed seismicity + Fault Model 3.2

None of the deformation models are used to define the percentage of on-fault seismicity, because the calculation depends too heavily on assuming a constant magnitude-frequency distribution throughout the
region, as well as on specifying the fraction of the moment rate that is released aseismically, both on and off the faults.

Figure 13. The various off-fault spatial seismicity probability density functions (SpatialPDFs) used in UCERF3 for setting gridded seismicity. Values in each map sum to unity. (a) The UCERF2 smoothed-seismicity model. (b) The UCERF3 smoothed-seismicity model of Felzer (Appendix M), which has a higher-resolution, adaptive smoothing kernel. (c) The SpatialPDF implied by the average of the off-fault moment-rate maps from Appendix C, which includes the Average Block Model, NeoKinema, and Zeng deformation models, and averages over both Fault Models 3.1 and 3.2. The color scale gives the log10 probability for each grid cell.

Another logic-tree choice is the assumed maximum magnitude of off-fault seismicity, M_{max}^{off-fault}. The three branch options applied in UCERF3 adopt values of 7.2, 7.6, and 8.0 (Figure 3). The implications of these choices depend on the Inversion Model, as discussed below.

D.4. Inversion Models and Associated Gridded Seismicity

Here we describe the Inversion Model logic-tree branches (Figure 3), including their conceptual motivation, how the remaining inversion constraints are constructed, and how the gridded seismicity is specified for each option. A supplementary file (the Supplementary Data File) lists many of the metrics discussed here for the logic-tree branch combinations.

D.4.1. Characteristic Branches

These branches represent the possibility that faulting is governed by characteristic behavior, which has come to mean one or more of the following: (1) segmentation of faulting, in which ruptures persistently terminate at certain locations; (2) event rates at higher magnitudes that exceed an extrapolation of the
Gutenberg-Richter magnitude-frequency distribution (MFD) from smaller magnitudes; and (3) a narrow range of slip from event to event at a point on a fault. In principle, these attributes could be implemented as direct constraints in the Grand Inversion. However, distilling the essence of a characteristic model into independent inversion constraints has proven difficult. In UCERF3.0, the characteristic branches are simply constrained to stay as close as possible to UCERF2, although the model is adjusted where required to better fit the observations.

A target MFD for the entire region is determined by choosing the total rate, R_{M≥5}^{total}, and the implied regional maximum magnitude, M_max. The latter is computed from the largest-area rupture in the selected deformation model and the selected magnitude-area relationship. The target MFD is taken to be a perfect truncated Gutenberg-Richter distribution with a b-value of 1.0 (black curve in Figure 14). This total MFD is partitioned into a supra-seismogenic on-fault MFD, for use in Equation Set (5), as well as MFDs for the sub-seismogenic on-fault seismicity and the off-fault seismicity.

Because a fault section represents a proxy for all ruptures that nucleate within its fault-zone polygon, the total rate of events for each fault section is determined by multiplying R_{M≥5}^{total} by the sum of SpatialPDF values inside the section's fault-zone polygon. Summing these rates over all fault sections gives the total rate of on-fault events (left side of the orange curve in Figure 14a). Following UCERF2, the MFD for sub-seismogenic, on-fault ruptures is assumed to be Gutenberg-Richter up to the minimum magnitude of supra-seismogenic ruptures. In UCERF2 this supra-seismogenic transition was M 6.5 on most faults (except where the characteristic magnitude was less), whereas in UCERF3 the transition is fault-section dependent, owing to variations in seismogenic widths. Below the minimum supra-seismogenic magnitude among all fault sections (6.15 or 6.35, depending on logic-tree choices), the total target on-fault MFD (orange curve in Figure 14a) parallels the total target (black curve), offset by the difference between R_{M≥5}^{total} and the total off-fault rate. Because the total target on-fault MFD must match the total target above the maximum magnitude of the off-fault events, M_{max}^{off-fault}, the total target on-fault MFD has a tri-linear functional form in log-rate space (Figure 14a). This total on-fault target MFD has a b-value of 1.0 above and below the transition points and a lower value in between.

The off-fault MFD (green curve in Figure 14a) is simply the difference between the total target MFD (black curve) and the total on-fault target MFD (orange curve). The supra-seismogenic, on-fault MFD, shown as light blue in Figure 14b, is the total on-fault target (orange curve) minus the total sub-seismogenic on-fault MFD (pink curve). This curve summarizes the constraints included in Equation Set (5) for the entire UCERF3 region. In principle, these constraints could be broken into arbitrarily small sub-regions, but uncertainties in the observed magnitude-frequency distributions impose practical limits. In UCERF3.0, MFD constraints were applied separately to the northern and southern California regions, as equality constraints below magnitude 7.8 and as inequality constraints at higher magnitudes.
The high-magnitude inequalities allow the final solution to roll off more quickly than the target MFD where allowed or required by the other data constraints. The total gridded seismicity for UCERF3 (gray line in Figure 14b) is the sum of the off-fault MFD (green) and the total sub-seismogenic on-fault MFD (pink). This result is consistent with the total MFD for UCERF2 gridded seismicity (magenta line in Figure 14b), though smoother in shape. The red line in Figure 14b shows the total MFD for fault-based sources in UCERF2, which by itself exceeds the total
regional target between magnitude 6.5 and 7.5 (the proverbial MFD "bulge" described in the UCERF2 report), which is eliminated in the Grand Inversion by imposing the constraints of Equation Set (5).

Figure 14. Examples of the various magnitude-frequency distributions considered in setting up the Characteristic Inversion Model branches. See the main text for an explanation of each curve. These examples are for the reference-branch settings given in Table 5.

Up to this point, the deformation-model moment rates (Table 4) have not been used in constructing the various target MFDs. A useful pre-inversion diagnostic is to compute implied coupling coefficients, defined here as the moment rate of the target MFD divided by the deformation-model moment rate from Table 4. These diagnostics are listed separately for the total on-fault and off-fault model components, and
for all logic-tree branches, in the Supplementary Data File. As defined, these implied coupling coefficients can exceed 1.0 if the target MFD implies more moment rate than the deformation model.

The off-fault coupling coefficients implied for a deformation model depend primarily on the choices of R_{M≥5}^{total} and M_{max}^{off-fault}. One could alternatively tune the latter to some desired coupling coefficient. However, off-fault coupling coefficients are essentially unconstrained, and the parameterization in terms of R_{M≥5}^{total} and M_{max}^{off-fault} is more intuitive for most hazard analysts. The possibility of using the implied off-fault coupling coefficients in deciding a posteriori branch weights will be explored in future iterations of UCERF3.

The on-fault coupling coefficients implied by the deformation models are more useful. Values greater than 1.0 are remedied by the inversion rolling off the final MFD more rapidly at the highest magnitudes relative to the target. Values less than 1.0 imply that the slip rates are higher than can be accommodated by the target MFD, which requires a Fault Moment Rate Fix. The following options have been implemented in the UCERF3.0 logic tree:

Apply Implied Coupling Coefficient: reduce the slip rates on all fault sections by the implied coupling coefficient.
Relax MFD Constraint: permit an over-prediction bulge in the final MFD if needed to satisfy slip rates.
Apply Both Options: apply both of the above.
Do Nothing: let the inversion decide where to reduce slip rates in matching the target MFD.

An additional, perhaps more desirable, option would be to target specific faults suspected of having relatively low coupling coefficients. For example, UCERF2 excluded the Mendocino, Cerro Prieto, and Brawley faults for this reason, following the precedent set by the NSHMP to treat them as special fault zones (e.g., Frankel et al., 2002). However, the Do Nothing option above seems to naturally reduce the Cerro Prieto and Brawley fault slip rates in the Grand Inversion (Appendix N). In UCERF3, the Mendocino fault was treated in the same way as the San Andreas, allowing ruptures to jump between the two proximate sections. Fault-specific coupling coefficients could be applied in future modeling if the data warrant such constraints. The UCERF3.0 weights for the Fault Moment Rate Fixes, which are given in Figure 3, are rather ad hoc, but they represent the current WGCEP consensus.

The a priori constraints in Equation Set (4) depend on which of two Characteristic Inversion Model options has been chosen from the logic tree (Figure 3):

Characteristic Unconstrained: apply the inversion without any a priori rates in Equation Set (4). Because this type of model is under-determined, a large set of inversion runs is needed to sample the solution space (see Appendix N); results can either be averaged or used to sprout additional sub-branches of the logic tree.

Characteristic UCERF2 Constrained: set the a priori rates in Equation Set (4) to the UCERF2 rates for every equivalent event in the UCERF3 framework. For ruptures that are not in the UCERF2 model (e.g., multi-fault ruptures and those on new fault sections), the inversion minimizes the total event rate. For the new faults, this minimization favors larger events, which satisfy slip rates at lower event rates, and thus produces a characteristic event distribution. Weaker weighting yields solutions
closer to the UCERF2 rates. Rupture rates are prevented from going to zero by enforcing a minimum rate, or water level, as described below.

To avoid double counting, the slip rates are reduced in the inversion according to the moment rate implied by the sub-seismogenic, on-fault ruptures. This reduction was originally done on a fault-section basis, but the high rates of observed seismicity in some areas produced sub-seismogenic MFDs with moment rates greater than those assigned to the fault section (leading to negative corrected moments). Therefore, in UCERF3.0, only a system-wide average is applied to reduce slip rates for sub-seismogenic ruptures. The reduction for each logic-tree branch is derived by dividing the moment rate of the sub-seismogenic MFD (Figure 14b) by the total on-fault moment rate from the deformation model (Table 4); the values vary from 3.9% to 12%.

The off-fault gridded seismicity sources are defined by partitioning the associated MFD (green curve in Figure 14a) among those grid cells that are outside fault-zone polygons, weighted by the relative SpatialPDF value in each grid cell. The Characteristic branches utilize either the UCERF2 or the UCERF3 smoothed-seismicity model (Figure 13a or 13b). In UCERF3.0, equal weight is given to each branch. WGCEP prefers the UCERF3 SpatialPDF option for the reasons already described, but equal weighting will be maintained until the full implication of this choice can be ascertained. The branch associated with the SpatialPDF for the deformation-model average (Figure 13c) was assigned zero weight.

The sub-seismogenic MFD for each fault section is distributed among the grid cells that fall within the associated fault-zone polygon, consistent with the smoothed-seismicity rates used to derive the sub-seismogenic MFDs. Given the ad hoc choices used to construct fault-zone polygons (Figure 4), the hazard implications of this methodology need to be understood. The primary influence of fault-zone width is on the maximum magnitude for gridded seismicity near faults. Inside the polygons this maximum magnitude is defined by the minimum magnitude of supra-seismogenic on-fault ruptures (~6.25 on average), whereas outside the polygons it is specified by M_{max}^{off-fault}, with UCERF3.0 logic-tree options of 7.2, 7.6, or 8.0. Therefore, changing the fault-zone widths primarily changes the gridded-seismicity maximum magnitudes near the outer edges of the polygons, which should have only a very small effect on UCERF3.0 hazard estimates because the hazard near faults is generally dominated by supra-seismogenic, on-fault ruptures.

D.4.2. Gutenberg-Richter Branches

These branches represent the possibility that faulting is everywhere governed by a Gutenberg-Richter (GR) magnitude-frequency distribution. The GR hypothesis is in basic disagreement with all previous WGCEP and NSHMP models, which assumed that ruptures on large, well-developed faults exhibit characteristic behavior (Section D.4.1). Furthermore, trying to impose more GR-like behavior on faults in the UCERF2 framework only served to exacerbate the MFD bulge near M 6.7. The UCERF3 Grand Inversion greatly reduces this problem by allowing multi-fault ruptures. Given the support the GR hypothesis has received in recent analyses (e.g., Page et al., 2011), we believe it is important to try to accommodate such a branch in the UCERF3 logic tree.

As shown in Figure 15, the target MFDs for the GR case are simple to construct.
As in the Characteristic branches, the regional target for total seismicity (black line) is specified by R_{M≥5}^{total} and
M_max. The total event rate from this MFD is partitioned into an on-fault event rate (58% of the total) and an off-fault event rate (42% of the total), yielding a target on-fault MFD (orange line) and a target off-fault MFD (green line). The latter is truncated at the specified value of M_{max}^{off-fault}.

Figure 15. Magnitude-frequency distributions implied by the Gutenberg-Richter hypothesis. The Total Target GR satisfies the total regional rate (8.7 events/yr for UCERF3.0) and the M_max implied by the largest event in the fault system (8.6 for the Geologic deformation model and the HanksBakun08 magnitude-area relationship). The Total On-Fault Target and Truly Off-Fault curves are scaled to an on-fault fraction of 58%; the Truly Off-Fault MFD is truncated at M_{max}^{off-fault} = 7.6. The two GR Implied On Fault curves represent the end-member MFDs implied if all sections have a GR distribution of nucleations; all other GR branches fall between these end-members. These GR-implied curves roll off at the highest magnitudes because not all fault sections participate in the largest events.

Each fault section is hypothesized to nucleate a GR distribution of earthquakes with a b-value of 1.0 between magnitude zero and the maximum magnitude in which the section participates. The a-value of the section GR MFD is fixed by the section moment rate. The MFDs for all the sections are summed to obtain a total on-fault MFD implied by the GR hypothesis. The on-fault MFDs from the UCERF3.0 logic-tree branches are bounded by the end-member cases shown in Figure 15 as a red curve (for the Geologic deformation model and Shaw09mod magnitude-area relationship) and a magenta curve (for the ABM deformation model and HanksBakun08 magnitude-area relationship). The on-fault MFDs implied by both end-members exceed the on-fault target (orange line), and the higher MFD even exceeds the target for all earthquakes combined (black line). Indeed, under the GR hypothesis the deformation models imply an on-fault event rate that exceeds the rate of observed on-fault seismicity for all logic-tree branches, which calls into question the viability of the GR model. This discrepancy can be quantified as an implied coupling coefficient, defined
as the ratio of the total rate of the target on-fault MFD to the total rate of the GR-implied on-fault MFD (red or magenta curve). These implied coupling coefficients, listed in the Supplementary Data File, range from 0.34 to 0.9 for all GR branches in Figure 3. The extreme values correspond to the two cases shown in Figure 15. Because not all fault sections participate in the largest event, the implied on-fault GR MFDs roll off at high magnitudes. Hence, more moment has to be taken up by smaller earthquakes, which increases the overall event rates. The problem generally goes away if all sections share the same maximum magnitude, but this would require including many of the squirrelly-looking ruptures that were rejected by the viability criteria of Section D.1. Another option is to allow certain faults to fill in the taper at higher magnitudes, lowering the overall rate, but this amounts to allowing characteristic MFDs, which is inconsistent with the GR hypothesis. There is some evidence that faults might have a lower b-value than the surrounding regions (Page et al., 2011). Assuming b = 0.95 on faults reduces the discrepancy, but not enough to warrant adding such a logic-tree option to UCERF3.0. The bottom line is that some form of on-fault moment reduction must occur for the GR branches to be viable. The UCERF3.0 branch options for this moment reduction are described below.

For the examples shown in Figure 15, adding the target on-fault MFD to the off-fault MFD produces a discontinuity in the total MFD at M_{max}^{off-fault}. This artifact, due to the idealized discontinuous distributions, can be mitigated by exponentially tapering the off-fault Gutenberg-Richter MFD according to the method introduced by Kagan (2002), which is more consistent with physical models of source finiteness (e.g., Sornette and Sornette, 1999). The tapered-GR model depends on a corner magnitude (M_corner) rather than a maximum magnitude; the corner magnitude can be derived from M_{max}^{off-fault} by maintaining the same total off-fault event rate and moment rate. Comparisons are given in Figure 16 for all the M_{max}^{off-fault} options in the UCERF3.0 logic tree. There is general agreement within WGCEP that the tapered GR is more appropriate for the off-fault seismicity of the GR branches.

The off-fault implied coupling coefficients (the ratios of the moment rates of the off-fault MFD to those of the deformation models) are listed in the Supplementary Data File. Among the GR logic-tree branches, these coefficients range upward from 0.29. Therefore, the problem of balancing moment rates in the GR model extends to the off-fault component. As in the case of the Characteristic branches, not much can be done to modify these implied coupling coefficients, although they can be used to modify branch weights a posteriori, which should be considered in future UCERF3 modeling.

In UCERF3.0, the logic-tree branches for Fault Moment Rate Fixes include two equally weighted options for dealing with the low on-fault implied coupling coefficients (Figure 3):

Apply Implied Coupling Coefficient: reduce the slip rates on all fault sections by the implied coupling coefficient.
Do Nothing: let the inversion decide where to reduce slip rates in matching the target MFD.

The other two options in Figure 3, which involve relaxing the MFD constraint altogether, violate the GR hypothesis (at least in spirit) and are therefore given zero weight.
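The corner-magnitude matching just described can be sketched numerically. The following example builds an off-fault truncated GR distribution (b = 1.0, M_{max}^{off-fault} = 8.0, and a total rate of 3.6 M ≥ 5 events/yr, as in Figure 16) and solves for the Kagan (2002) corner magnitude that preserves both the total event rate and the total moment rate. The discretization choices are illustrative assumptions, so the result should match the corner-magnitude values quoted in Figure 16 only approximately.

# Sketch of corner-magnitude matching for a tapered Gutenberg-Richter MFD.
import numpy as np
from scipy.optimize import brentq

B = 1.0
BETA = 2.0 * B / 3.0          # taper exponent in moment space
RATE = 3.6                    # M>=5 events per year
DM = 0.005
M_MIN, M_MAX_TRUNC = 5.0, 8.0


def moment(m):
    return 10.0 ** (1.5 * m + 9.05)


def truncated_moment_rate():
    mags = np.arange(M_MIN + DM / 2, M_MAX_TRUNC, DM)
    rel = 10.0 ** (-B * mags)
    return np.sum(RATE * rel / rel.sum() * moment(mags))


def tapered_moment_rate(m_corner):
    edges = np.arange(M_MIN, 9.5 + DM, DM)            # taper extends past Mmax
    m0_t, m0_c = moment(M_MIN), moment(m_corner)
    # cumulative rate above each bin edge, normalized to RATE at M_MIN
    cum = RATE * (m0_t / moment(edges)) ** BETA * np.exp((m0_t - moment(edges)) / m0_c)
    incr = cum[:-1] - cum[1:]                          # incremental bin rates
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.sum(incr * moment(centers))


target = truncated_moment_rate()
m_corner = brentq(lambda m: tapered_moment_rate(m) - target, 6.0, 9.0)
print("corner magnitude ~ %.2f" % m_corner)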

Figure 16. Comparison of truncated (dashed) and tapered (solid) Gutenberg-Richter distributions for the three M_{max}^{off-fault} logic-tree branch options. All of these MFDs have a total rate of 3.6 events per year, which is a typical value for off-fault seismicity in UCERF3. The tapered-distribution corner magnitudes are determined by maintaining the total event and moment rates. The tapered distribution for M_{max}^{off-fault} = 8.0 (corner magnitude of 7.7) implies an off-fault M ≥ 8 event every 3000 years.

Examples of the final MFD constraints for the UCERF3.0 model are presented in Figure 17. For the Apply Implied Coupling Coefficient branch, the slip rates are simply scaled by the implied coupling coefficient in Equation Set (1) before computing each fault-section GR MFD. These MFDs are then divided into supra- and sub-seismogenic MFDs based on the minimum supra-seismogenic magnitude for the fault section. Summing over all fault sections gives the total supra- and sub-seismogenic MFDs (light blue and pink lines in Figure 17, respectively), which together sum to the coupling-corrected GR-implied MFD. For the Do Nothing branch, the slip rates are left unchanged in Equation Set (1), and the inversion decides where to cut them to match the target MFD. The supra-seismogenic on-fault MFD for this branch (darker blue curve in Figure 17) is constructed by multiplying the total target (black curve) by the fraction of seismicity that is on-fault (0.58 in this example) and then subtracting the sub-seismogenic MFD (pink curve). The results for the Apply Implied Coupling Coefficient and Do Nothing branches differ most at the highest magnitudes (blue and purple curves in Figure 17).

Figure 17. Examples of the various target MFDs for the GR Inversion Model branches; see text for details.

The supra-seismogenic on-fault target MFDs in Figure 17 exemplify the constraints used in Equation Set (5) for the GR inversions. As with the Characteristic branches, these constraints are imposed separately for northern and southern California, and they are applied as an equality constraint below magnitude 7.8 and as an inequality constraint above this magnitude. In the UCERF3.0 inversions, only the UCERF3 smoothed seismicity is used to compute the fractional rates of on- and off-fault events (58% and 42%, respectively, from Figure 13b). To avoid double counting, the slip rates in Equation Set (1) are reduced by the amount implied by the sub-seismogenic MFD for each fault section, about 10% on average.

The simulated-annealing inversion algorithm is initiated with a GR starting model (Appendix N). To obtain a good GR starting model, the total rupture set for the inversion was partitioned into discrete magnitude bins, and the supra-seismogenic MFD rate for each bin was divided among the ruptures in the bin, weighted by the minimum slip rate among all sections utilized by each rupture. Models are derived from this starting model by two GR inversion options:

GR Unconstrained: the data are inverted without any a priori rates in Equation Set (4). Because this model is under-determined, numerous inversion runs are needed to sample the solution manifold.

GR Constrained: in preliminary modeling, the data were inverted with the a priori rates in Equation Set (4) set to the GR starting model. However, imposing a uniform participation MFD is a more computationally efficient way to enforce GR behavior. (As demonstrated by Field and Page, 2010, a GR model has a uniform distribution of participation rates simply because smaller events are more frequent whereas larger events influence a longer stretch of fault.) This computational approach is
now implemented in UCERF3.0. An attractive alternative, imposing a GR nucleation MFD on each section, has yet to be fully explored.

In all inversions, a minimum rate was applied to all ruptures; for the GR branches, the minimum was set by scaling down the rate of each rupture in the GR starting solution. This magnitude-dependent value ensures that every rupture has a non-zero, albeit exceedingly small, rate of occurrence (typically 10^-8 per year or smaller, depending on magnitude and local slip rates).

The off-fault gridded seismicity sources are defined by partitioning the associated MFD (green curve in Figure 17) among those grid cells that are outside fault-zone polygons, weighted by the relative SpatialPDF value in each grid cell. For the GR branches of UCERF3.0, the a priori branch weights for the SpatialPDF choices are:

20% on UCERF2 Smoothed Seismicity (Figure 13a)
30% on UCERF3 Smoothed Seismicity (Figure 13b)
50% on Deformation Model Average (Figure 13c)

Thus, 50% of the weight goes to smoothed seismicity and 50% to the deformation-model average. The sub-seismogenic MFD for each fault section is distributed among the grid cells that fall within the associated fault-zone polygon. Consequently, the sub-seismogenic on-fault rates match slip rates rather than observed seismicity. On the Apply Implied Coupling Coefficient branch, the slip-rate constraints are the original rates corrected by the implied coupling coefficient. On the Do Nothing branch, the post-inversion (model-implied) slip rates are used to set the sub-seismogenic MFDs, ensuring consistency with the final model. The influence of fault-zone polygons is potentially greater in the GR case because the implied coupling coefficients are sensitive to the fraction of seismicity on versus off faults, which depends on the fault-zone widths. Checks indicate that such adjustments are generally small compared to the overall moment-rate issues for the GR branches.

D.5. Gardner-Knopoff Aftershock Filter

The UCERF3 Earthquake Rate Model includes aftershocks, whereas previous NSHMP models have removed such events using the Gardner-Knopoff declustering algorithm (Gardner and Knopoff, 1974). The definition of aftershocks implied by this type of declustering is out of date; for example, it implies that the fraction of aftershocks relative to mainshocks is magnitude dependent, whereas recent aftershock studies do not support any such dependence (e.g., Felzer et al., 2004). Nevertheless, the Gardner-Knopoff definition is still used in formulating and applying hazard policies, e.g., in current building codes that rely on the NSHMP models. To facilitate comparison and consistency with other earthquake hazard models, a procedure for removing Gardner-Knopoff aftershocks has been implemented in the UCERF3 framework.

Gardner-Knopoff filtering typically reduces the b-value from about 1.0 (full catalog) to about 0.8 (declustered catalog). For this reason, the NSHMP has generally used a b-value of 0.8 for modeling smaller events as gridded seismicity. The b-value difference can be combined with the fraction of M ≥ 5 events designated as mainshocks by the declustering model to construct a Gardner-Knopoff filter.

According to Appendix I (Felzer) of the UCERF2 report, the total number of M ≥ 5 events per year in the UCERF region is 7.50 for the full catalog, compared with 4.17 for a Gardner-Knopoff declustered catalog, which implies that 56% of these events are Gardner-Knopoff main shocks. The corresponding GR MFDs for the full and declustered catalogs are compared in Figure 18a. The UCERF3 Gardner-Knopoff aftershock filter is simply the ratio of the declustered MFD to the total MFD in Figure 18a, capped at 1.0 above the point where the two MFDs cross over. This filter compares well with binned data values from Appendix L (Felzer, 2012b, Table 10), as shown in Figure 18b.

In applying this Gardner-Knopoff aftershock filter, the rate of each UCERF3 rupture is simply scaled by the value on the red curve in Figure 18b. This implicitly assumes that the fraction of aftershocks is location independent, which is consistent with past NSHMP applications. Observational evidence for a systematic spatial dependence is lacking, though the available data do not exclude the possibility.

Figure 18. (a) Cumulative Gutenberg-Richter distributions implied by the Gardner-Knopoff declustering algorithm. The full-catalog MFD (black line) has been normalized to unit rate for M ≥ 5; the relative rate for the declustered-catalog MFD is given by the blue line. (b) Comparison of the UCERF3 Gardner-Knopoff aftershock filter (red curve) with the data from Appendix L. The former is obtained by taking the ratio of incremental distributions from (a) and setting values above the crossover at M 6.8 to 1.0.
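To make the filter construction concrete, the short script below rebuilds the two GR MFDs from the numbers quoted above (M ≥ 5 rates of 7.50 and 4.17 per year; b-values of 1.0 and 0.8) and takes their incremental ratio, capped at 1.0 above the crossover. This is an illustrative Python calculation, not the OpenSHA implementation, and the magnitude discretization is an arbitrary choice; with these inputs the ratio reaches 1.0 near M 6.8, consistent with the crossover noted in the Figure 18 caption.

```python
import numpy as np

# Quantities from the text: annual rates of M >= 5 events and GR b-values
RATE_FULL_M5, B_FULL = 7.50, 1.0         # full catalog
RATE_DECL_M5, B_DECL = 4.17, 0.8         # Gardner-Knopoff declustered catalog

mags = np.arange(5.05, 8.05, 0.1)        # incremental bin centers (arbitrary)

def incremental_gr(rate_m5, b, mags, dm=0.1):
    """Incremental GR rates with the cumulative M >= 5 rate fixed to rate_m5."""
    cum_lower = rate_m5 * 10.0 ** (-b * (mags - 5.0))
    cum_upper = rate_m5 * 10.0 ** (-b * (mags + dm - 5.0))
    return cum_lower - cum_upper

full = incremental_gr(RATE_FULL_M5, B_FULL, mags)
decl = incremental_gr(RATE_DECL_M5, B_DECL, mags)

# Gardner-Knopoff filter: fraction of events treated as mainshocks,
# capped at 1.0 above the magnitude where the two MFDs cross.
gk_filter = np.minimum(decl / full, 1.0)

for m, f in zip(mags, gk_filter):
    print(f"M {m:.2f}: mainshock fraction {f:.2f}")
```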

D.6. Grand Inversion using UCERF2 Ingredients

The performance characteristics of the Grand Inversion have been assessed by applying the inversion to UCERF2 ingredients, which included Deformation Model 2.1. Figure 19 shows in map view how the inversion is able to improve the slip-rate fits compared to UCERF2. The UCERF2-implied slip rates exhibit rainbow patterns on many of the Type-B faults, which is a consequence of giving floating ruptures a uniform probability of occurrence along the fault; this tapers slip rates toward the endpoints (which must occur if the faults really terminate), but it also contributes to an over-prediction of slip rates at mid sections. The Grand Inversion generally provides better fits to the slip rates on a subsection-by-subsection basis.

Figure 19. Slip-rate misfit for UCERF2 (left) and UCERF2 ingredients fit by the Grand Inversion (right). The UCERF3 inversion methodology generally provides better fits to the slip rates on a subsection-by-subsection basis. The UCERF2 solution tends to under-predict the rates near fault-section endpoints and over-predict near the section centers, although the model is moment-balanced if section-averaged. Note that the Mendocino fault and the San Andreas creeping section were not part of the UCERF2 on-fault solution; those misfits of the UCERF2 reference model should be disregarded.

Figure 20 illustrates how the inversion of the UCERF2 ingredients performs in fitting the paleoseismic recurrence-interval data from Appendix G at two different levels of weighting. If the paleoseismic data are highly weighted, as in panel (c), the data are well fit; the squared residual sum is reduced to about half that of the original UCERF2 model (panel a). However, narrow peaks and troughs are evident in model rates at many of the trench sites along the fault section (red lines), indicating that small events or sharp boundaries in the rupture ensemble are being incorporated into the model to satisfy the data. If the weighting is reduced, as in panel (b), the fit to the data degrades as the model event rates

54 48 Earthquake Rate Models UCERF3 Technical Report #8 become smoother along strike. In deriving the preliminary model UCERF3.0 presented below, the lower weighting was assumed. Figure 20. Paleoseismic event rates for (a) the UCERF2 Reference Model and two annealed models derived by inversion using the UCERF2 ingredients with (b) lower weighting and (c) higher weighting of the paleoseismic data. Data are paleoseismically visible participation rates (red lines) along all fault sections that have paleoseismic rates determined by trenching; black circles are the mean event rates and black error bars are 95% confidence bounds. Panel (b) corresponds to the inversion solution shown in Figure 19. D.7. UCERF3.0 Earthquake Rate Models The inversion results that constitute the UCERF3.0 model are presented in Appendix N and summarized here, together with a preliminary evaluation of model components. Generally speaking, the UCERF3 Grand Inversion appears to work well, and the flexibility and efficiency of its high-performance, highthroughput computing allow any of the logic-tree branches in Figure 3 to be run in short order. However, the inversions for the main UCERF3.0 branches indicate that some of the model components, in particular the deformation models, need further scrutiny, and perhaps revision, before any UCERF3 model can be considered for practical use. At a minimum, this analysis is likely to lead to an a posteriori reassignment of logic-tree branch weights. One of the primary challenges for UCERF3 evaluation is the increased number of viable ruptures in the fault system more than 200,000, compared to less than 8,000 for UCERF2. Furthermore, the rupture

rates in UCERF2 were largely prescribed in terms of the assumed magnitude-frequency distributions for each source, so validation and interpretation of the model was a much simpler task. The complexity of the UCERF3 model requires a richer inventory of model metrics and visualization tools, and an important aspect of this report is the presentation of new resources that have been developed for this purpose. Given the preliminary nature of these evaluations, considerable external review will be necessary to ensure that the proposed metrics are sufficient for establishing the viability of the UCERF3 results, as well as for objective analysis that can adequately vet the UCERF3 models for practical applications.

D.7.1. Reference Branches

Two reference branches were chosen to illustrate UCERF3.0 results (Table 6): a Characteristic Reference Branch (Char_ref) and a Gutenberg-Richter Reference Branch (GR_ref). These branches share the black options in Table 6, but they differ in the blue options. In particular, they share the Zeng Deformation Model, but they differ in the inversion model, as described in Sections D.4.1 and D.4.2, respectively.

Table 6. Logic-tree values for the UCERF3.0 reference branches for the Characteristic and Gutenberg-Richter inversions. Different options are highlighted in blue.

Logic Tree Branch                                  Char_ref                              GR_ref
Fault Model                                        FM 3.1                                FM 3.1
Deformation Model                                  Zeng                                  Zeng
Scaling Relationship (mag-area and slip-length)    Ellsworth B for Both                  Ellsworth B for Both
Slip Along Rupture (D_sr)                          Boxcar                                Boxcar
Total M ≥ 5 Event Rate (per year)
Inversion Model                                    Characteristic UCERF2 Unconstrained   Gutenberg-Richter Unconstrained
Off-Fault M_max
Off-Fault SpatialPDF                               UCERF3 Smoothed Seis                  Deformation Model Ave
Fault Moment Rate Fixes                            Do Nothing                            Do Nothing

It should be emphasized that reference does not imply preferred, nor does it indicate the most highly weighted branches in the logic tree. The highest a priori weight was assigned to the NeoKinema model by the UCERF3 Deformation Model Evaluation Committee (see Section C). However, post-inversion evaluations have raised issues that will require further scrutiny before a posteriori branch weights can be finalized. Model iterations beyond UCERF3.0 will almost certainly be necessary.

Figure 21 (left panel) demonstrates that the inversion-derived supra-seismogenic, on-fault MFD for the reference branches (dark blue) matches the target very well. The small peak at M 6 reflects a special constraint that is needed for Parkfield, where the rate of those earthquakes alone constitutes a considerable fraction of the total regional rate of M 6 events. This discrepancy is easily remedied (see Appendix N).

Figure 21. California-wide magnitude-frequency distributions for the Char_ref branch (left) and the GR_ref branch (right). Five MFDs are shown for each model: black, the total target MFD defined from seismicity (largely hidden under the red curve); blue, the on-fault MFD found by the inversion; cyan, the target on-fault MFD used in the inversion MFD constraints; grey, the gridded-seismicity MFD (off-fault plus sub-seismogenic on-fault MFDs); magenta, the UCERF2 background-seismicity MFD (for comparison); red, the total implied MFD. The deviation from the Inversion Target around M 6 is due to the Parkfield ruptures, which are not included in the MFD constraint.

The overall fit of the Char_ref branch to the slip-rate data is good, although the model does under-predict the total on-fault target moment by 8% (Figure 22, left panel). The misfit is concentrated on the Imperial and Cerro Prieto faults near the California-Mexico border, where the model slip rates are lower than observed, owing primarily to the MFD constraint, which limits the number of moderately sized ruptures on these relatively isolated faults. Slip rates for the GR_ref branch are systematically lower than the deformation-model targets, under-predicting the total on-fault target moment by 25% (Figure 22, right panel). This problem was anticipated in the pre-inversion analysis of Section D.4.2, and options to reduce the discrepancy are discussed there and in Appendix N.

The reference branches are compared to the paleoseismic event rates in Figure 23. In these solutions, the weight given to these constraints was lowered, which produces model slip rates that are smoother along strike, comparable to Figure 20b. As seen in Figure 20c, the fits to these data can be improved by increasing the weights, but the resulting model has strong along-strike variations in slip rate, in some cases caused by an increased or decreased frequency of small ruptures at the trench sites. The trade-offs associated with this and other aspects of constraint weighting in the Grand Inversion need to be more fully explored. It is especially important to understand how these choices affect the distribution of large ruptures along the major faults.

Another set of diagnostics is the on-fault participation-rate maps for various magnitude thresholds, which can be compared to equivalent UCERF2 maps. These are presented in Appendix N (e.g., Figure 14 therein). Participation-rate maps for the complete reference-branch models, including gridded seismicity, are compared to UCERF2 in Figure 24. The main difference can be seen at the higher magnitudes, where the rates for UCERF3 are higher than those of UCERF2, owing to the higher deformation rates and the inclusion of large earthquakes as fault-to-fault ruptures.

Figure 22. Slip-rate misfit for the Char_ref branch (left) and GR_ref branch (right) of model UCERF3.0. The single black subsection on the Great Valley fault in the center of the state has a zero slip rate in the Zeng deformation model.

Figure 23. Paleoseismic event rates for the Char_ref branch (left) and GR_ref branch (right) of model UCERF3.0. This plot shows paleoseismically visible participation rates along all fault sections that have a paleoseismic trench in red; the mean event rates and 95% confidence bounds are shown in black. The weighting of the paleoseismic data in these inversions is similar to that in Figure 20b.

58 52 Earthquake Rate Models UCERF3 Technical Report #8 Figure 24. Participation rate maps for (a) UCERF2, (b) the UCERF3.0 Characteristic branch (Zeng deformation model), and (c) the UCERF3.0 GR_ref branch (Zeng deformation model) for three magnitude thresholds.

The SCEC-VDO visualization software has proven indispensable for evaluating inversion solutions, allowing easy 3D viewing of individual ruptures (magnitudes and rates), as well as aggregate quantities such as slip rates and participation rates. The output of a new and much-used SCEC-VDO visualization tool is shown in Figure 25, which displays all ruptures in which a given fault section participates, arrayed in rate order above the fault. The incremental and cumulative nucleation MFDs can be plotted for any fault section. Other examples are given in Appendix N.

Figure 25. Diagnostic plots for the Hayward North fault section for the Char_ref branch. Upper left: SCEC-VDO display of all fault subsections that rupture with (any part of) the Hayward North fault, colored by the rate at which they rupture. Lower left: Incremental and cumulative nucleation magnitude distributions for this fault section. Right: SCEC-VDO display of rupture traces plotted above the faults, colored by rate. This visualization shows that a common stopping point for these ruptures is at the Calaveras junction, south of the Hayward. These ruptures can link to the Calaveras fault and then to the San Andreas fault, although the rates of these fault-to-fault jumps are very low.

Figure 26 shows a plot of implied segmentation along the San Andreas fault for UCERF2 mapped onto FM 3.1, for inversion results using UCERF2 ingredients (e.g., Deformation Model 2.1), and for the Char_ref branch. The extent to which inversion ruptures involve one or more fault jumps can also be assessed and compared to the observational data. An example is given in Figure 27; the comparison with data on fault-to-fault ruptures implies that this branch of UCERF3.0 is not producing an over-abundance of multi-fault ruptures. Another metric is the rate of ruptures that occur on multiply named faults, which can be compared to the Wesnousky dataset (in which 14 of 28 ruptures, or 50%, involve faults with different names). For the UCERF3 Characteristic reference branch, 41% of M ≥ 7 ruptures involve differently named faults, based on the convention that larger faults with multiple sections are treated as a single named fault.
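Because this rate-weighted bookkeeping is easy to get wrong, a minimal sketch of the multiple-name metric is given below; the rupture lists and parent-fault names are hypothetical inputs for illustration, not the UCERF3 diagnostic code itself.

```python
def multi_name_fraction(rup_rates, rup_mags, rup_parent_names, min_mag=7.0):
    """Rate-weighted fraction of M >= min_mag ruptures whose subsections
    belong to more than one (differently named) parent fault."""
    total = multi = 0.0
    for rate, mag, names in zip(rup_rates, rup_mags, rup_parent_names):
        if mag < min_mag:
            continue
        total += rate
        if len(set(names)) > 1:      # rupture spans differently named faults
            multi += rate
    return multi / total if total > 0 else 0.0

# Hypothetical example: three M >= 7 ruptures
rates = [1e-3, 5e-4, 2e-4]
mags = [7.0, 7.3, 7.8]
parents = [["Hayward"], ["Hayward", "Calaveras"], ["Calaveras", "San Andreas"]]
print(f"{100 * multi_name_fraction(rates, mags, parents):.0f}% multi-name")
```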

Figure 26. Display showing the segmentation of ruptures on the San Andreas/Brawley/Imperial fault system. Rates at which neighboring subsections rupture together are shown in green, and the rates at which they do not are shown in red; the two rates are normalized by the total rate of ruptures involving the two subsections and therefore sum to unity. Top: UCERF2 model. Middle: UCERF3 inversion of UCERF2 ingredients. Bottom: UCERF3.0 Char_ref branch. Where the red line reaches 1 (and the green reaches 0), there is strict segmentation; no ruptures break through that location.
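The normalization described in the caption can be written out directly. The sketch below computes, for two neighboring subsections, the fractions of their combined rupture rate in which they rupture together versus separately (the green and red values in Figure 26); the inputs are hypothetical, and this is only an illustrative Python version of the metric.

```python
def corupture_fractions(rup_rates, rup_sections, sec_a, sec_b):
    """Fraction of ruptures involving subsections sec_a and/or sec_b in which
    the two rupture together (green in Figure 26) versus separately (red);
    normalized by the total rate of ruptures involving either subsection,
    so the two fractions sum to one."""
    together = separate = 0.0
    for rate, secs in zip(rup_rates, rup_sections):
        has_a, has_b = sec_a in secs, sec_b in secs
        if has_a and has_b:
            together += rate
        elif has_a or has_b:
            separate += rate
    total = together + separate
    return (together / total, separate / total) if total > 0 else (0.0, 0.0)

# Hypothetical example: three ruptures touching subsections 10 and 11
rates = [0.002, 0.001, 0.0005]
sections = [[9, 10, 11, 12], [10], [11, 12]]
print(corupture_fractions(rates, sections, 10, 11))  # (together, separate)
```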

61 UCERF3 Technical Report #8 Earthquake Rate Models 55 Figure 27. The total rate of M7+ ruptures in the Characteristic reference-branch solution that have 0, 1, 2, and 3 jumps greater than 1 km (in 3D, according to FM 3.1). The purple line shows the rate of observed ruptures from the Wesnousky dataset of large surface-rupturing earthquakes (Wesnousky and Biasi, 2011). Each line is normalized to the same total rate. This UCERF3.0 branch under-predicts the observed rate of multi-fault ruptures, suggesting that the requirement of the inversion to stay close to UCERF2 may be too restrictive. See Appendix N for additional details. D.7.2. Other Comparisons Diagnostics have been computed to assess how each of the simulated annealing inversions matches the data constraints. Figure 28 is a screen shot from the output of one tool, showing the relative fits to each constraint for a branch ensemble: single-branch permutations initiated with the Char_ref branch in Table 6. Graphical tools have also been developed that can display this information in real time as a simulated annealing run progresses on the host supercomputer (e.g., Figure 3 of Appendix N). This metric indicates how well a model branch fits the data. It will be helpful in assigning a posteriori weights to UCERF3 branches. Figure 29 shows several UCERF3.0 MFDs for the San Francisco and Los Angeles areas that are delineated by the boxes in Figure 1 and compares them with the seismicity data and UCERF2 results. In the San Francisco region, the UCERF3.0 branches are in good agreement with UCERF2 (and also the WGCEP 2003 study); however, the seismicity rates are lower, which reflects the rate decrease after the 1906 earthquake. This decrease is captured in the Empirical time-dependent model discussed in Section E.1. The comparison for the LA region indicates a bigger difference between models: the UCERF3.0 solutions are closer to a GR MFD than UCERF2, which shows considerable curvature (bulge) in the range M6-7. All of the models are consistent with the data for the LA region.

Figure 28. Screen shot of a tool for displaying how well different logic-tree branches match the data constraints. Each set of histograms represents a different branch, labeled below the bins. Successive branches change only one branching option at a time, beginning with the Char_ref branch in Table 6. Energy is simulated-annealing parlance for an uncertainty-normalized sum over squared data residuals.

Figure 29. MFDs for the San Francisco (left) and Los Angeles areas (right). Heavy orange lines are mean rates and 95% confidence intervals for the Felzer catalog (Appendix K), declustered using the Gardner-Knopoff procedure described in Section D.5. Black lines represent the mean and 95% confidence intervals for the UCERF2 (NSHMP) time-independent model. The blue line represents the MFDs for the Char_ref branch (Zeng deformation model), the red line is for a Characteristic reference branch that substitutes the NeoKinema deformation model for Zeng, and the green line is for the GR_ref branch (Zeng deformation model). These MFDs include all events that have any part of their rupture surface inside the box (i.e., participation rather than nucleation MFDs). Boxes defining the two regions are shown in Figure 1.

The implications of the UCERF3.0 model for hazard can be measured using risk-targeted ground motions (RTGM; Luco et al., 2007) at various locations throughout California. To calculate RTGM, the hazard curve at a site is iteratively combined, via a risk integral, with fragility functions for different

target ground motions. The target ground motion is varied until the iterations converge on a result that would yield a 1% probability of collapse in 50 years. As a proxy for small and large buildings, RTGM values are calculated at 0.2-sec and 1.0-sec spectral accelerations, respectively. It should also be noted that, in engineering practice, RTGM is taken as the lesser of the probabilistic and deterministic ground motion at a site. For the purposes of evaluating UCERF3, only the probabilistic component has been considered.

RTGM has two advantages for model hazard assessment: (1) it is a scalar metric of the complete hazard curve; and (2) it was used by the Building Seismic Safety Council (BSSC) to evaluate the 2009 update to the NEHRP Provisions and will probably be used by the BSSC to evaluate UCERF3 models. The implementation of this metric in OpenSHA for any UCERF logic tree is a notable project achievement.

Figure 30. Comparison of probabilistic risk-targeted ground motions for 0.2-sec spectral acceleration at three California cities. In each plot the dark blue bins represent the summed weights of different RTGM values across the 480 UCERF2 time-dependent logic-tree branches. The green line represents the average UCERF2 value, the orange line represents the official values from the US Seismic Design Maps, and the four lines labeled U3 represent UCERF3 Characteristic reference branches for all four deformation models. The RTGM values were computed using the weighted combination of the three Next Generation Attenuation relationships (NGAs) that were used in the 2008 NSHMP. Mean UCERF2 values are generally lower than the US Design Map RTGM because the former does not consider additional epistemic uncertainty on ground motion that was included in the 2008 NSHMP.
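For concreteness, the sketch below shows one common way to carry out the risk-integral iteration just described, in the spirit of Luco et al. (2007): a lognormal collapse fragility is convolved with the hazard-curve slope, and the target ground motion is adjusted until the 50-year collapse probability equals 1%. The fragility dispersion of 0.8, the 10% collapse-probability anchor, the bisection search, and the power-law hazard curve are generic assumptions for illustration; they are not necessarily the values or algorithm used in the BSSC/USGS/OpenSHA implementation.

```python
import numpy as np
from scipy import stats

def collapse_rate(sa, exceed_rate, fragility_median, beta=0.8):
    """Risk integral: fragility (lognormal CDF) times the absolute slope of
    the hazard curve (annual exceedance rate vs. spectral acceleration)."""
    frag = stats.lognorm(s=beta, scale=fragility_median).cdf(sa)
    density = -np.gradient(exceed_rate, sa)        # rate per unit ground motion
    y = frag * density
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(sa)))  # trapezoid rule

def rtgm(sa, exceed_rate, target_prob=0.01, years=50.0, beta=0.8, anchor_prob=0.10):
    """Bisection for the ground motion whose implied collapse probability over
    `years` equals `target_prob`; the fragility median is anchored so that the
    conditional collapse probability at the RTGM level equals `anchor_prob`."""
    z = stats.norm.ppf(1.0 - anchor_prob)          # ~1.28 for a 10% anchor
    lo, hi = sa[0], sa[-1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lam = collapse_rate(sa, exceed_rate, mid * np.exp(z * beta), beta)
        p = 1.0 - np.exp(-lam * years)
        lo, hi = (mid, hi) if p > target_prob else (lo, mid)
    return 0.5 * (lo + hi)

# Hypothetical 0.2-sec hazard curve (annual exceedance rate vs. Sa in g)
sa = np.linspace(0.01, 4.0, 400)
exceed_rate = 2e-3 * (0.2 / sa) ** 2.5             # made-up power-law hazard
print(f"RTGM ~ {rtgm(sa, exceed_rate):.2f} g")
```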

Figure 30 shows RTGM results for three of the approximately 20 sites throughout California considered by the BSSC. Four different values are shown for UCERF3: one for the Char_ref branch, and one for each of the three alternative deformation models. The Characteristic reference branch that uses NeoKinema has an anomalously high value for Los Angeles, owing to the relatively high slip rates it places on the Compton, Elysian Park, and Puente Hills thrust faults (Figure 31). This conclusion underscores the need for a careful, fault-by-fault evaluation of slip rates in all deformation models.

Figure 31. SCEC-VDO visualizations of the San Francisco Bay area (top) and Los Angeles area (bottom), showing fault slip rates (in mm/yr) from the four deformation models (ABM, Geologic, NeoKinema, and Zeng). Color scale saturates at 10 mm/yr. The NeoKinema model gives slip rates for the Compton, Elysian Park, and Puente Hills thrust faults that are substantially higher than the geologic consensus, which explains the anomalously high value for the corresponding branch in Figure 30a.

Figure 32 shows an example of statewide portfolio loss analysis that was conducted using OpenSHA, building on work described in Trimming the UCERF2 Logic Tree by Porter et al. (submitted to SRL, 2012; manuscript available upon request). This portfolio-based loss-evaluation metric is directly relevant to the insurance concerns of the California Earthquake Authority. While the particular calculation shown on the right side of Figure 32 was generated to explore the convergence properties of our Characteristic

65 UCERF3 Technical Report #8 Earthquake Rate Models 59 branch, the methodology can now be used to quantify the influence of various logic tree branches, and to potentially trim branches that are unimportant for loss metrics. Figure 32. (below) Tornado diagram illustrating the influence of various UCERF2 logic-tree branches on statewide portfolio loss estimates (from Porter et al, 2012). (right) Distribution of values for multiple simulated-annealing runs using the UCERF3 Char_ref branch. D.8. Conclusions The Grand Inversion is largely successful in its design goals of: (1) fitting data better, including elimination of the bulge; (2) including multi-fault ruptures; (3) sampling a broader range of models that are consistent with data; and (4) providing a general and extensible framework for future improvements. Although the UCERF3 approach is still an approximation of the system (e.g., it precludes ruptures that are partially on fault and partially off fault), it nevertheless represents a considerable step forward for constructing system-level earthquake rate models. Much has been learned since the preliminary model implementation, especially in terms of the extent to which various possible logic-tree branches are conceptually correlated or inconsistent. The number of free parameters has been significantly reduced, streamlining both the implementation and description of the model. For example, the off-fault aseismicity parameter used in versions of UCERF3 has been dropped from the parameterization because its value is actually implied by other logic-tree choices (resulting in the implied off-fault coupling coefficient described in Section D.4). The challenges associated with the Gutenberg-Richter branches are another example of what has been learned from this system-level integration. The inversion results for these models are presented in Appendix N. The pre-inversion analysis indicated that all of the data could not be satisfied because, under the GR hypothesis, the fault slip rates lead to a considerable over prediction of the total, statewide rate of

66 60 Earthquake Rate Models UCERF3 Technical Report #8 events (see Section D.4.2). Applying the GR hypothesis on a fault-by-fault basis appears to be inconsistent with the current observational constraints. UCERF2 only had a single logic-tree branch with respect to gridded seismicity (other than the maximum magnitude for off-fault seismicity near faults). UCERF3 now has alternative off-fault maximum magnitudes, alternative models for the spatial distribution of seismicity (SpatialPDFs), and alternative off-fault MFDs (for Characteristic versus GR branches). An important component of epistemic uncertainty has therefore been added, which is especially relevant for areas of the state where hazard is not dominated by fault sources. In spite of this progress, considerable work remains in exploring logic-tree branches, not only with respect to scientific viability, but also with respect to the practical implications. Some of the key questions are: Should on-fault coupling coefficients be applied on a fault-by-fault basis, rather than applying a single value across the board, or should the inversion be allowed to decide where to cut slip rates? Is an improbability constraint needed to further distinguish the relative likelihood of alternative multi-fault ruptures? How do we test the credibility of current results with respect to such ruptures? Are results consistent with the average slip-per-event data compiled in Appendix S and the implied coefficients of variation? Are results consistent with known correlations (or anti-correlations) in rupture dates between neighboring paleoseismic sites? Are the slip-rate and paleo-event-rate data over fit? Should a broader range of models be explored by Monte Carlo sampling of these data according to their uncertainties? How stable are simulated annealing results for over-determined problems, and how can the solution space be systematically sampled for under-determined problems? How should logic tree branch weights be defined, given both known and unknown correlations between the branch options? The path forward is discussed in Section F.

E. Earthquake Probability Models

An Earthquake Probability Model specifies, for each event in the long-term Earthquake Rate Model, the probability that one or more such events will occur during a specified time interval. The main goals for UCERF3 are the following: (1) reexamine the evidence for the time-dependence of historical seismicity that motivated the UCERF2 Empirical Model; (2) develop self-consistent elastic-rebound models; and (3) apply spatiotemporal clustering models. The goal is to build these features into a single forecasting model that is applicable across a wide range of time scales.

This section outlines the current implementation of the probability-model components in the UCERF3 framework. The rupture forecast presented in this report, UCERF3.0, is a long-term Earthquake Rate Model that does not include time-dependent probabilities. Although these components have been unit-tested and are ready for integration into the model, this step has been deferred, owing to unresolved issues with the time-independent model, described in Section D. A plan for this integration is presented in Section F.

E.1. The Empirical Model

The UCERF2 Empirical Model was based on a comparison between the instrumental earthquake catalog for California (1932-present) and its historical catalog. The decrease in seismicity rate documented in the San Francisco Bay Area has been attributed to the static stress shadow of the 1906 earthquake (WGCEP, 2003). In the UCERF2 study, rate decreases were found in the north coast and central and southern parts of the state, as well as in the San Francisco Bay Area (WGCEP, 2007, Table 11). The seismicity rates for UCERF2 were calculated in spatial regions that were drawn to enclose areas with similar levels of earthquake-catalog completeness. Differences between the more recent and longer-term catalog rates were calculated for each spatial region. In the UCERF2 Empirical Model, these values were applied as empirical corrections to the long-term earthquake rates on each fault in the region.

There are two problems with this approach. In many areas of the state the historical catalog is too incomplete to verify whether the rate changes are significant. Moreover, each of the UCERF2 spatial regions encompasses many faults, so that any fault-to-fault variations are averaged out. This approach followed WGCEP (2003) in using a single value for the empirical rate decrease on Bay Area faults, although it was recognized that individual faults probably had different behaviors.

In UCERF3, more precise empirical rate changes have been derived on a grid-cell basis rather than over expanded regions (Appendix Q). The seismicity-smoothing algorithm developed by Helmstetter et al. (2007) was used to average the historical and current seismicity on this scale, and the ratios of the smoothed values are computed in each cell where the historical data are sufficient. The results, given in Figure 33, confirm the rate decrease in the San Francisco Bay Area described by WGCEP (2003) and evident in Figure 31, and they also indicate a rate decrease along the San Andreas fault south of the Bay Area. Rate changes in other regions are less significant and, given the substantial uncertainties, do not warrant inclusion in the UCERF3 model.
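A minimal sketch of the grid-cell rate-change calculation is given below: the historical and recent catalogs are smoothed onto a common grid, and the cell-by-cell ratio is kept only where the historical record is judged complete. The Gaussian kernel is a stand-in for the Helmstetter et al. (2007) smoothing, and all inputs (grid size, durations, completeness mask) are hypothetical; this is not the Appendix Q code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rate_change_map(historical_counts, recent_counts, years_hist, years_recent,
                    completeness_mask, sigma=1.5):
    """Ratio of smoothed recent to smoothed historical seismicity rates per
    grid cell, evaluated only where the historical catalog is complete."""
    hist_rate = gaussian_filter(historical_counts / years_hist, sigma)
    recent_rate = gaussian_filter(recent_counts / years_recent, sigma)
    ratio = np.full(hist_rate.shape, np.nan)
    ok = completeness_mask & (hist_rate > 0)
    ratio[ok] = recent_rate[ok] / hist_rate[ok]
    return ratio

# Hypothetical 20 x 20 cell example
rng = np.random.default_rng(0)
hist = rng.poisson(3.0, size=(20, 20)).astype(float)
recent = rng.poisson(1.5, size=(20, 20)).astype(float)
mask = np.ones((20, 20), dtype=bool)
print(np.nanmean(rate_change_map(hist, recent, 80.0, 30.0, mask)))
```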

Figure 33. The new empirical model showing which parts of California exhibit a significant rate change over the post-1850 versus post-1984 time periods, from Felzer (2012d, Appendix Q).

E.2. Self-Consistent Elastic Rebound Models

Elastic-rebound-motivated renewal models have been the foundation of the time-dependent probabilities in all previous WGCEPs. Computing conditional probabilities is simple when a fault is assumed to obey strict segmentation, i.e., where no multi-segment ruptures occur (e.g., WGCEP, 1988, 1990). However, the calculation is not straightforward when multi-segment ruptures are possible, in essence because a point-process model is being used to describe a spatially distributed process.

The methodology of WGCEP (2003) was applied by WGCEP (2007) in computing elastic-rebound probabilities for UCERF2. The WGCEP (2003) approach first computes the probability that each segment will rupture from the long-term rate and date of last event, assuming a Brownian Passage Time (BPT) distribution, and then partitions these probabilities among all ruptures that could be triggered by the segment. As discussed in Appendix N of the UCERF2 report, this methodology is not self-consistent. One manifestation is that final segment probabilities, when aggregated over all ruptures, are not equal to the segment probabilities as originally computed. Another, revealed by Monte Carlo simulations, is that the distribution of segment recurrence intervals implied by the model disagrees with the initial assumptions (Figure 34). For example, there is nothing that stops a segment from going by itself one day, and then being triggered by a neighboring segment the next, which leads to shorter-than-assumed recurrence

intervals. The simulated rate of events is therefore biased high relative to the long-term rate (about 3% for the UCERF2 example in Figure 34a). WGCEP (2007) applied the WGCEP (2003) methodology despite these shortcomings, because (1) an alternative was lacking; (2) the effects were minor, since UCERF2 generally had only a few segments per fault; and (3) the methodology captured the overall intent of pushing probabilities in a direction consistent with elastic rebound, making the final values acceptable from the Bayesian perspective of probability as a statement of the degree of belief that an event will occur (D'Agostini, 2003).

These problems worsen as the fault is divided into more and more segments, especially if segmentation is relaxed altogether. Figure 34b shows the results for a simple unsegmented example, where the final distribution of recurrence intervals looks nothing like that assumed, and there is a substantial bias (~20%) in the total rate of events and overall moment rate.

Figure 34. The distribution of recurrence intervals for the WGCEP (2003) methodology of computing time-dependent probabilities. Those assumed are shown in red (BPT with a coefficient of variation (COV) of 0.5), and those implied by Monte Carlo simulations are shown as gray bins. (a) An example for the Cholame segment of the southern SAF as modeled for UCERF2. (b) An example for an 80-km fault with 5-km segments (essentially un-segmented) and a Gutenberg-Richter distribution of events. Both examples are taken from Appendix N of the UCERF2 report.

Figure 35. The distribution of M ≥ 6.5 recurrence intervals at one location on the northern San Andreas Fault from the RSQsim earthquake simulator of Dieterich and Richards-Dinger (2010). The model for this simulation is the so-called norcal1 fault system for northern California that has been used by the SCEC Simulators Working Group for simulator comparisons.

In developing and evaluating other approaches, use has been made of physics-based simulators (Ward, 2000; Rundle et al., 2006; and Dieterich and Richards-Dinger, 2010), which adhere to elastic rebound and make no assumptions regarding segmentation. Of course, no simulator correctly represents natural earthquake processes, so it is important to evaluate any inferred statistical behavior for robustness against the range of simulator results, as well as against actual observations.

If a fault does not obey segmentation, then it is not possible for all points on that fault to honor a renewal-model distribution such as BPT or log-normal, especially where the tails of neighboring ruptures overlap. This is exemplified in Figure 35, which shows the distribution of recurrence intervals at a point on the northern SAF from a simulation by Dieterich and Richards-Dinger (2010), which models the entire northern California fault system. This plot does not display any of the usual renewal-model distributions. Stated another way, even if we had perfect knowledge of the recurrence-interval distribution at one or more points on a fault (as in Figure 35), it is not clear how to turn this information into rupture probabilities for an un-segmented model. Again, the problem arises from the attempt to apply a point-process model to spatially distributed processes.

A promising alternative procedure has been formulated and implemented in the UCERF3 framework. Consider the situation in which one knows exactly where the next big earthquake will occur and is faced with the task of predicting when it will occur. A sensible approach would be to apply an average time-predictable model:

$$T_r^{pred} = \frac{1}{S}\sum_{s=1}^{S}\left(\frac{D_s^{last}}{v_s} + T_s^{last}\right) = \sum_{s=1}^{S}\frac{D_s^{last}}{S\,v_s} + \sum_{s=1}^{S}\frac{T_s^{last}}{S} = \Delta T_r^{pred} + T_r^{last}$$

This equation states that the predicted time of the r-th rupture (T_r^pred) is the average time at which the slip rate (v_s) on each subsection will have recovered the amount of slip (D_s^last) that occurred in the last event on that subsection at time T_s^last. The average is taken over the total number of subsections (S) involved in the given event, so that ΔT_r^pred is the average predicted interval and T_r^last = Σ_{s=1}^{S} T_s^last / S is the average date of last event. The fact that T_s^last can vary along the rupture reflects the un-segmented nature of the model, and thus represents a straightforward generalization of the time-predictable model introduced by Bufe et al. (1977) and Shimazaki and Nakata (1980). The equation can be rewritten as T_r^pred = ΔT_r^pred + T_r^last, where

$$\Delta T_r^{pred} = \sum_{s=1}^{S}\frac{D_s^{last}}{S\,v_s} \approx \frac{D_r^{last}}{v_r},$$

and D_r^last and v_r are the slip-in-last-event and slip rate, respectively, averaged over the subsections involved in the r-th rupture.

Data are not available to directly test the agreement between the predicted intervals, ΔT_r^pred = T_r^pred - T_r^last, and the observed intervals, ΔT_r^obs = T_r^obs - T_r^last, where T_r^obs is the occurrence time of an event. However, from synthetic catalogs produced by physics-based simulators, one can examine the distribution of the ratio of the observed (i.e., simulated) to predicted intervals, ΔT_r^obs / ΔT_r^pred.

To evaluate the consistency of this formulation with simulator data, a size threshold for events that reset the clock (i.e., that reset T_s^last) needs to be established. For example, if a fault really does exhibit a Gutenberg-Richter distribution of earthquakes down to low magnitude, do the smallest events reset the clock? This could be a problem because the low amounts of slip associated with these little earthquakes would imply short recurrence intervals. To avoid this problem, only earthquakes that rupture the full seismogenic thickness (M ≥ 6.5) are considered, consistent with the UCERF3 long-term Earthquake Rate Model, in which the fault-based events are restricted to those that rupture the full seismogenic thickness.
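As a concreteness check on the formula above, the sketch below evaluates the average time-predictable prediction for a single rupture from per-subsection slip-in-last-event, slip-rate, and date-of-last-event values. It is an illustrative Python calculation with hypothetical inputs, not the UCERF3/OpenSHA implementation.

```python
import numpy as np

def predicted_rupture_time(last_slip_m, slip_rate_m_per_yr, last_event_yr):
    """Average time-predictable model: each subsection's clock restarts at its
    last event time, and its predicted interval is the time needed for the
    slip rate to recover the last-event slip; the rupture-level prediction is
    the average over the participating subsections."""
    last_slip = np.asarray(last_slip_m, dtype=float)
    v = np.asarray(slip_rate_m_per_yr, dtype=float)
    t_last = np.asarray(last_event_yr, dtype=float)
    per_subsection_pred = t_last + last_slip / v   # T_s^last + D_s^last / v_s
    dt_pred = np.mean(last_slip / v)               # average predicted interval
    t_pred = np.mean(per_subsection_pred)          # = dt_pred + mean(T_s^last)
    return t_pred, dt_pred

# Hypothetical 3-subsection rupture
t_pred, dt_pred = predicted_rupture_time(
    last_slip_m=[2.0, 2.5, 1.5],
    slip_rate_m_per_yr=[0.02, 0.025, 0.015],
    last_event_yr=[1857.0, 1857.0, 1812.0])
print(f"predicted interval ~ {dt_pred:.0f} yr, predicted date ~ {t_pred:.0f}")
```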

Figure 36. Distribution of ΔT_r^obs / ΔT_r^pred obtained from three physics-based simulators. The input model used in these simulations is the same as in Figure 35. Shown with black and blue lines are best-fit BPT and log-normal distributions, respectively, with parameters as follows: (a) mean = 1.2 and COV = 0.30; (b) mean = 1.1 and COV = 0.23; and (c) mean = 1. The blue line is generally hidden below the black line.

Figure 37. Same as Figure 36a, but where the event times were randomized uniformly over the simulation duration before computing normalized recurrence intervals.

Figure 36a shows the distribution of ΔT_r^obs / ΔT_r^pred obtained from the Dieterich and Richards-Dinger (2010) simulator for all M ≥ 6.5 events that occurred in a 22,000-year synthetic catalog. Compared to Figure 35, the results in Figure 36a are much more consistent with a BPT or log-normal distribution (with COV ≈ 0.3). This is because the "probability of what" question has been changed from the recurrence interval at a point on the fault (Figure 35) to the time of the next event, given knowledge that it will be the next one to rupture. For comparison, Figure 37 shows the same result as in Figure 36a, but for which the event times in the simulation have been randomized, generating a Poisson-like distribution of recurrence intervals. Figures 36b and 36c show the same results as in Figure 36a, but for the Ward (2000) and Rundle et al. (2006) simulators, respectively. The agreement between simulators in Figure 36 is encouraging: all seem consistent with a BPT or log-normal distribution with a COV between 0.23 and 0.3. In particular, there are no short recurrence intervals in Figure 36, implying that the simulators do not generate M ≥ 6.5 aftershocks or triggered events within the rupture surface of larger main shocks.

The preceding discussion assumes one knows exactly where the next rupture will occur, leaving only the question of when. Because which of the many ruptures in the long-term model will be the next to occur is not known, all possibilities must be considered. This can be done using the following expression for the Poisson-equivalent time-dependent rate of each rupture:

$$R_r^{timedep} = R_r^{longterm}\,\frac{P_r^{BPT}\left(\Delta T_r^{pred},\,T_r^{last},\,COV\right)}{P_r^{Pois}\left(\Delta T_r^{pred}\right)}$$

where R_r^longterm is the long-term rate of the r-th rupture, and P_r^BPT(ΔT_r^pred, T_r^last, COV) and P_r^Pois(ΔT_r^pred) are the probabilities computed from the BPT and Poisson models, respectively, assuming the r-th rupture is the next to go. Note that the long-term rate of a rupture tends toward zero as the fault is represented with an increasing number of smaller subsections. The ratio in the last term of the equation acts as a probability gain or reduction factor for the r-th rupture. Probabilities of events will be correlated to the extent they overlap spatially (i.e., share a large number of subsections).

This model gives lower probabilities for events in areas that have recently ruptured and higher probabilities where they have not, as required to fill seismic gaps. It also allows some spatial overlap of events that occur in close temporal proximity (as observed for cascading sequences, such as the one on the North Anatolian fault). Monte Carlo simulations with the model shown in Figure 34b demonstrate that this methodology is relatively unbiased in terms of event rates and moment rates. This consistency will be confirmed once the methodology has been implemented for the full UCERF3 model.

A reasonable alternative implementation would be the equivalent average slip-predictable model, where D_s^last above is replaced with D_s^next, the slip in the next event. However, Monte Carlo simulations reveal a significant bias in this approach; the procedure preferentially chooses smaller events earlier in the cycle, thereby skewing both the overall rates and the magnitude-frequency distribution relative to the long-term model.

One drawback of this approach is that it requires knowledge of the amount of slip in the last event on each subsection. Therefore, we have produced a new comprehensive compilation of these data for California faults (Appendix R). Where D_s^last is unknown, we can quantify the associated epistemic uncertainties from the long-term model, the assumed time dependence, and the observed open interval. Whether such uncertainty bounds add any value compared to a Poisson model remains to be seen.

It is not surprising that the simulators imply elastic-rebound predictability, because they are based on the physics of elastic rebound. However, the results in Figure 36 suggest that elastic interactions within a complex fault system do not mask this type of predictability. Further simulations will be conducted to see how robust this conclusion is with respect to model tuning. Other issues to be explored with this methodology include:

What conditions most effectively reset the clock on each subsection (e.g., magnitude threshold, down-dip width of rupture)?
What is the magnitude dependence of this predictability; e.g., does the COV decrease with increasing magnitude?
Are there differences in the predictability among faults or fault sections?
How should tests of this methodology against either real or simulated observations be formalized?
What are the implications of recent evidence that micro-repeaters and laboratory earthquakes are neither time nor slip predictable?

The point of this section has not been to argue that elastic-rebound predictability indeed exists or to represent the physical basis of this behavior. Rather, a probabilistic, rule-based approach for modifying Poisson probabilities has been developed that is simple and consistent with elastic-rebound theory. The method presented here seems as defensible as previous WGCEP approaches, and the support from physics-based simulators is significant value added.
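The probability-gain factor defined above can be evaluated numerically with the BPT (inverse Gaussian) distribution, as in the sketch below: the conditional BPT probability of rupture in the forecast window, given the predicted interval, the open interval since the (averaged) date of last event, and an assumed COV, is divided by the corresponding Poisson probability. It is an illustrative Python calculation with hypothetical parameter values, not the UCERF3 implementation.

```python
import numpy as np
from scipy import stats

def bpt_poisson_gain(dt_pred, yrs_since_last, duration, cov=0.3):
    """Ratio of the conditional BPT probability of rupture in the forecast
    window to the Poisson probability implied by the same mean interval.
    BPT is the inverse Gaussian with mean dt_pred and coefficient of
    variation cov (scipy parametrization: mu = cov**2, scale = mean/cov**2)."""
    bpt = stats.invgauss(mu=cov**2, scale=dt_pred / cov**2)
    t0, t1 = yrs_since_last, yrs_since_last + duration
    p_bpt = (bpt.cdf(t1) - bpt.cdf(t0)) / max(bpt.sf(t0), 1e-12)
    p_pois = 1.0 - np.exp(-duration / dt_pred)
    return p_bpt / p_pois

# Hypothetical rupture: predicted interval 150 yr, 140 yr open, 30-yr forecast
gain = bpt_poisson_gain(dt_pred=150.0, yrs_since_last=140.0, duration=30.0)
print(f"probability gain ~ {gain:.2f}")
# Time-dependent rate = long-term rate x gain (per the equation above)
```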

E.3. Spatiotemporal Clustering Models

A major goal for UCERF3 is to include spatiotemporal clustering to account for triggered earthquakes that can be large and damaging. A good example of such clustering is the Joshua Tree, Landers, Big Bear, and Hector Mine sequence that occurred in southeastern California in the 1990s. A more recent example is the M 7.1 Darfield earthquake in New Zealand, which produced a very damaging M 6.3 aftershock in Christchurch five months later. Even the great M 9.0 Tohoku earthquake in Japan can be considered an aftershock of an M 7.2 earthquake that occurred just two days before. According to UCERF2, such chains of events would be pure coincidence. The weight of opinion represented by the scientific literature, however, points to some kind of triggering phenomenon. If we accept this interpretation, then the next relevant question is whether such triggering is important for the policy decisions represented in building codes, earthquake insurance, and other forms of risk reduction. For example, would the California Earthquake Authority still be solvent had the Mojave sequence occurred in the LA basin? Answering such questions requires an appropriate time-dependent earthquake rupture forecast.

Because the physical processes responsible for earthquake triggering remain controversial (e.g., Felzer and Brodsky, 2006; Richards-Dinger et al., 2010), we have based UCERF3 on empirical, statistical clustering models (e.g., Ogata, 1988; Reasenberg and Jones, 1989, 1994). The Short Term Earthquake Probability (STEP) methodology of Gerstenberger et al. (2005) is one available approach, which applies aftershock statistics to revise earthquake probabilities in real time for M ≥ 3 events that occur throughout an aftershock zone. The model we propose for UCERF3 builds on the STEP methodology. In so doing, we have addressed the following issues:

1. STEP requires that each observed event be associated with a single main shock, which becomes problematic where aftershock zones overlap, especially because these zones evolve with time as more data are collected.
2. In STEP, triggered events are sampled from a Gutenberg-Richter distribution between M 5 and M 8 everywhere in the region, which is inconsistent with the underlying long-term model, which, for example, constrains M 8 events to occur on only a few faults such as the San Andreas.
3. There is nothing in the STEP formulation to prevent an M 8 event from immediately triggering itself. In fact, the likelihood of any particular event in STEP is greatest the moment after it actually occurs, which is inconsistent with elastic rebound.
4. Only one aftershock sequence influences probabilities at a given point in space (whichever sequence has the highest rate change).
5. STEP over-predicts both the total rate and moment rate of large earthquakes due to an inconsistency between the declustering applied in the long-term model (Gardner and Knopoff, 1974) and the aftershock statistics used for spatiotemporal clustering (Reasenberg and Jones, 1989, 1994).
6. STEP combines different models based on a sophisticated analysis of generic, sequence-specific, and spatially variable parameters. This may improve predictability, but it introduces significant

complexity, and it makes tracking aleatory versus epistemic uncertainties a challenge, especially because the combination of models is spatially variable.

Here we outline the UCERF3 methodology and discuss how it addresses these issues. The UCERF3 Earthquake Rate Models estimate the long-term rate of all possible events throughout the region at some level of discretization and above some magnitude threshold. Assuming a uniform distribution of nucleation points on each earthquake surface, rupture rates can be translated into nucleation rates as a function of space within each 0.1° × 0.1° bin. Likewise, an occurrence of a magnitude M event in a given bin can be mapped into one of the viable ruptures in the long-term Earthquake Rate Model, a simple bookkeeping matter. The steps involved for the anticipated UCERF3 spatiotemporal clustering model include:

a. For a given start time and forecast duration, we collect all previously observed M ≥ 2.5 events, plus randomly sampled spontaneous (non-triggered) events from our long-term Earthquake Rate Model, including any empirical-model and/or elastic-rebound modifications as described in the previous sections. We now have all candidate main shocks.

b. For each main shock in (a), we randomly sample times of occurrence of primary aftershocks from the ETAS formulation of the modified Omori law (e.g., Felzer, 2009):

$$n(t) = \frac{k\,10^{(M_{main} - M_{min})}}{(c + t)^{p}}$$

where we use generic parameter values from Hardebeck et al. (2008) for this report (k = 0.008, p = 2.34, c given in days, and M_min = 2.5). These parameter values will be replaced by the updated analysis in Appendix J (Hardebeck).

c. We next need to decide where each of these primary aftershocks occurs. Using the long-term nucleation rate of M ≥ 2.5 events throughout the region from the Earthquake Rate Model (below, left panel), multiplied by a spatial decay of (R + R_min)^-n, where R is the distance from the main-shock fault surface (below, middle panel), we randomly sample a nucleation grid cell for the primary aftershock from the distribution in the image below (right panel). (Inline figure panels: long-term rate of M ≥ 2.5 in each bin; aftershock distance decay from the main shock; probability that the aftershock falls in each bin.)

d. To decide the magnitude of the primary aftershock, we randomly sample a magnitude according to the relative rate of each magnitude, using the nucleation magnitude-frequency distribution for the grid cell chosen in step (c), which may or may not be Gutenberg-Richter. (Inline figure panels: long-term rate of M ≥ 2.5 in each bin; the long-term MFD in the chosen bin; probability that the aftershock falls in each bin.)

e. To decide which specific rupture (from the long-term Earthquake Rate Model) the primary aftershock represents, we randomly sample a rupture from the long-term Earthquake Rate Model according to the relative rate at which each viable rupture (of that magnitude) nucleates in that grid cell. (Inline figure panel: the ruptures that nucleate in that bin.)

f. To collect secondary aftershocks from the primary aftershocks, we repeat steps (b) through (e) to get secondary aftershocks from all primary aftershocks, then likewise for tertiary events, and so forth until no more events are generated. We now have a complete synthetic catalog for the chosen time span.

g. We repeat (a) through (f) to generate whatever number of alternative synthetic catalogs are needed to get statistically meaningful hazard or loss estimates. (A simplified sketch of this sampling loop is given below, following the discussion of Figure 38.)

This algorithm avoids having to assign each observed event to a main shock, and it allows multiple events to influence triggering probabilities at a given location. It also samples aftershocks directly from the long-term model, avoiding the inconsistency noted in item (2) above. This means a main shock is more likely to trigger an M 8 earthquake if it occurs near a fault capable of generating such an event (e.g., the Bombay Beach scenario). Furthermore, by using long-term rates in steps (c) through (e) that have been corrected for elastic-rebound influences as discussed in the previous section, we can prevent large, fault-based main shocks from sampling themselves as aftershocks. In addition, updating the model based on ongoing M ≥ 2.5 seismicity will delineate any blue versus red lobes that would be present if

static stress changes are important. Including events down to M ≥ 2.5 will allow sequences of smaller events to trigger larger, more distant earthquakes via secondary and subsequent triggering, as demonstrated by Felzer et al. (2002) for the Landers and Hector Mine sequence. The smaller events can also connect together large earthquakes over long periods of time. For instance, the M ≥ 2.5 seismicity following the 1971 San Fernando earthquake shows a decaying aftershock sequence that was still above background rates at the time of the Northridge earthquake 23 years later, in 1994. Because the fraction of events that are triggered is magnitude independent, we simply reduce the rates of events in the background model by a common multiplicative factor so that the total simulated rate of events equals that observed (thereby avoiding the double-counting problem with STEP at large magnitudes).

This algorithm generates suites of synthetic catalogs, each of which represents a viable sequence of triggered events. This is an advantage, because loss modeling is generally conducted using synthetic catalogs (referred to as stochastic event sets) in order to account for the spatial correlation of ground motions across a portfolio of sites. New for loss modeling will be UCERF3 event sets that include spatiotemporal clustering, not just samples from a Poisson process. One advantage of aftershocks being sampled from the long-term model is that losses for every event in that model can be pre-computed and stored (assuming the portfolio does not change with time). Then the losses for each synthetic catalog from UCERF3 can be easily (and quickly) aggregated, and statistics can be compiled over the different viable synthetic catalogs. This efficiency will facilitate operational earthquake loss forecasting.

The creation of synthetic catalogs differs from the current STEP implementation, which is not Monte Carlo based, but rather gives the rates of events averaged over all viable sequences. The two approaches should be equivalent, all other things being equal, as long as a sufficient number of synthetic catalogs are sampled and averaged in the Monte Carlo approach. If there exists a need for a single, STEP-like forecast representing the average over all possible sequences, and the Monte Carlo approach is inefficient, then we can explore alternative formulations for achieving this. Another contrast with STEP is that we do not solve for and apply sequence-specific parameters other than how ongoing seismicity changes subsequent forecasts. Our approach is to see how well our model does in simplified form before adding such sophistication. The CSEP testing center will be useful in terms of deciding what further complexities are warranted.

E.4. Implementation Using UCERF2

The software components needed for this spatiotemporal model have been implemented in the OpenSHA platform. Here, example results applied to the UCERF2 long-term model are described. Numerical implementation details, such as finite discretizations and Monte Carlo sampling of probability distributions, can be found in the OpenSHA repository. Figure 38 shows a simulated aftershock sequence for an M 7.25 Landers earthquake as represented in UCERF2, where the distance decay and minimum distance for this simulation are n = 1.7 and R_min = 0.3, respectively.
According to Figure 38d, the expected number of M ≥ 6.1 aftershocks is ~1.0 (consistent with Båth's law), the expected number of M ≥ 6.5 aftershocks is ~0.5, and the expected number of M ≥ 7.25 aftershocks (the main shock magnitude) is 0.06.
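To make steps (a) through (g) more concrete (and as forward-referenced in step (g) above), the sketch below strings together the Omori-law time sampling and the generation-by-generation cascade for a Landers-sized main shock. It is a deliberately simplified, illustrative Python version, not the OpenSHA implementation: the spatial sampling of step (c) and the rupture bookkeeping of step (e) are omitted, the magnitude draw is a plain Gutenberg-Richter placeholder rather than the local nucleation MFD, and the Omori c-value (not recovered from the source) and the generation cap are assumptions chosen to keep the toy cascade finite.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generic Omori/ETAS parameters quoted in step (b); the value of c was not
# recovered from the source, so c = 0.2 days is an assumed placeholder.
K, P, C_DAYS, M_MIN = 0.008, 2.34, 0.2, 2.5

def omori_integral(t_max_days):
    """Integral of 1/(c + t)**P from 0 to t_max (valid for P > 1)."""
    return (C_DAYS ** (1 - P) - (C_DAYS + t_max_days) ** (1 - P)) / (P - 1)

def sample_primary_times(m_parent, t_max_days):
    """Poisson number of primary aftershocks of an m_parent event, with times
    drawn from the truncated Omori decay by inverse-transform sampling."""
    expected = K * 10.0 ** (m_parent - M_MIN) * omori_integral(t_max_days)
    u = rng.random(rng.poisson(expected))
    a, b = C_DAYS ** (1 - P), (C_DAYS + t_max_days) ** (1 - P)
    return (a - u * (a - b)) ** (1.0 / (1 - P)) - C_DAYS

def sample_magnitude(m_max=8.0):
    """Placeholder Gutenberg-Richter (b = 1) magnitude draw. In the UCERF3
    scheme the magnitude and the specific rupture would instead be sampled
    from the long-term model's nucleation MFD in the chosen grid cell."""
    u = rng.random()
    return M_MIN - np.log10(1.0 - u * (1.0 - 10.0 ** (-(m_max - M_MIN))))

def simulate_sequence(m_parent, t_parent=0.0, t_max_days=365.0, gen=1, max_gen=20):
    """Recursively build (time, magnitude, generation) triples."""
    if gen > max_gen:                      # safety cap for the toy example
        return []
    events = []
    for dt in sample_primary_times(m_parent, t_max_days - t_parent):
        t, m = t_parent + dt, sample_magnitude()
        events.append((t, m, gen))
        events += simulate_sequence(m, t, t_max_days, gen + 1, max_gen)
    return events

catalog = simulate_sequence(m_parent=7.25)
mags = [m for _, m, _ in catalog]
print(f"{len(catalog)} triggered events; largest M {max(mags, default=0):.1f}")
```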

Figure 38. Results of an ETAS-simulated aftershock sequence for 360 days following the occurrence of an M 7.25 Landers earthquake as represented by UCERF2, and as described in the text. (a) Spatial probability distribution for primary-event epicenters (in 0.1° × 0.1° bins). (b) Map of simulated aftershock hypocenters, where the number of events in each generation is indicated at the upper right. (c) Expected magnitude probability distribution for all aftershocks in 0.1-magnitude bins (red), and the numbers sampled in the simulation (black). (d) Expected number of aftershocks greater than each magnitude (red), the numbers sampled in the simulation (black), and the GR distribution with b = 1 for comparison. (e) Temporal decay for the sequence (in 0.5-day bins). (f) Spatial decay for the sequence (in 10-km bins), where the difference between Expected from UCERF2 and Pure Expectation is due to a lack of events outside California.

One important aspect of the implementation shown in Figure 38 is that the Landers main shock was prohibited from sampling itself as a direct or indirect aftershock. Figure 39 shows the expected number of aftershocks in an alternative simulation where Landers events are allowed to occur as aftershocks. This plot, which can be compared to Figure 38d, shows that the expected numbers of M ≥ 6.1, M ≥ 6.5, and M ≥ 7.25 aftershocks are ~3, ~2, and 0.5, respectively. Furthermore, and perhaps more importantly, if an M ≥ 6.5 event is indeed triggered, it has a 64% chance of being a re-rupture of Landers itself. If an M ≥ 7.25 event is triggered, it has an 86% chance of being another Landers event. These probabilities are clearly inconsistent with global observations of large triggered events; such a high fraction of large (full seismogenic thickness) aftershocks are not observed to occur on the same rupture surface as a large main shock.

Figure 39. Same as Figure 38d, but where the Landers main shock is allowed to trigger itself as a primary or subsequent-generation aftershock. Also shown here are the expected number of primary events only (blue), as well as the number of primary events expected to be triggered on the Landers source (green). The point here is that a large fraction of triggered events are on the Landers source itself, with the fraction generally increasing with magnitude.

Ironically, these results imply that aftershock statistics might indeed constitute the greatest evidence for elastic rebound. That is, given the distance decay of aftershocks, without elastic rebound the most likely event for any earthquake to trigger is itself. Of course, this assertion depends not only on the ETAS parameter values (especially n and R_min), but also on how the long-term model is constructed (e.g., how background seismicity in UCERF2 is modeled relative to fault-based sources like Landers). We are therefore currently conducting further tests, and if elastic rebound is indeed required by ETAS, then for UCERF3 we intend to apply the methodology outlined in the previous section.

Figure 40. Same as Figure 38, but for an M 6.75 Northridge main shock as represented by UCERF2.

Figure 40 shows a simulated aftershock sequence for an M 6.75 Northridge earthquake as represented in UCERF2. The same distance decay and minimum distance have been used for this simulation (n = 1.7 and R_min = 0.3, respectively), and Northridge events are prohibited from being sampled as aftershocks in this example. According to Figure 40d, the expected number of M ≥ 6.7 aftershocks is ~0.5, implying a ~50% chance of triggering something as large as or larger than the main shock itself. This high probability is a consequence of the very characteristic (or non-Gutenberg-Richter) magnitude-frequency distribution for sampled aftershocks in this area (Figure 37c). In fact, the likelihood of sampling an M 6.7 aftershock from this model is about the same as that of sampling an M 5.6 earthquake. This is a direct reflection of the very characteristic distribution implied by UCERF2 in this area, as the magnitude-frequency distribution of sampled aftershocks is simply that of the long-term model weighted by distance from the main shock. Not only is this M 6.7 triggering probability dubiously high, but these Northridge ETAS simulations often run away and never converge, because large triggered events keep triggering other large events. These issues become worse if the Northridge main shock is allowed to re-rupture itself as an aftershock.

This raises the issue of just how non-GR the long-term model can be over sub-regions without running into problems with the methodology proposed here. Figure 41 shows the magnitude-frequency distribution from the UCERF2 long-term model in the vicinity of Northridge, revealing again a very non-GR distribution. While this region represents one of the more extreme examples from UCERF2, and may itself stretch scientific credibility, many other regions exhibit this behavior to one degree or another. The inversion approach to constructing the UCERF3 earthquake rate model explicitly includes a regional Gutenberg-Richter constraint, so we have a mechanism for invoking this to the extent it is needed. The question, again, is how much we need to invoke this constraint, and whether doing so will create other problems.

If problems remain with respect to non-GR regions in some branches of the Earthquake Rate Model, one option will be to sample aftershock magnitudes from a pure Gutenberg-Richter distribution while still sampling from the events available in the long-term model, according to their relative nucleation rates (a rough sketch of this option is given after the list of issues below). Aftershocks would then still come from, and therefore be consistent with, the long-term model (an improvement over STEP), but we would lose the feature that long-term ETAS simulations have rates matching our Earthquake Rate Model, so we would not have solved that problem of STEP.

We need to compare aftershock sequences simulated using this methodology against those actually observed in California for as many sequences as possible (using the analyses in Appendix S). This will indicate not only the extent to which sequence-specific parameters are important, but also the extent to which the model fails to reproduce features of real sequences. An example of the latter would be a higher rate of aftershocks occurring near large, mature faults (e.g., Powers and Jordan, 2010), where this behavior is not already manifested in background seismicity rates.

Other issues that we will consider include:

- For an operational system, how far back in time do we need to go in collecting the main shocks that will be carried forward as such in the simulations? Is this magnitude dependent? Is there a tradeoff here with declustering assumptions in developing the background seismicity model?
- The influence of spatial smoothing in the development of the background seismicity model on the spatial distribution of aftershock sequences (e.g., does tighter smoothing account for the effect identified by Powers and Jordan, 2010?).
- Computation time.
- The spatiotemporal clustering model is not completely independent of the elastic rebound model (e.g., the COV in the latter must be somewhat influenced by aftershock statistics in the former).
- What is the physical difference between a multi-fault rupture (added to the earthquake rate model above) and a quickly triggered separate event (as modeled here)? Could slip-length scaling distinguish these populations?
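The following is a minimal sketch of how the GR-resampling option mentioned above could work: magnitudes are drawn from a pure Gutenberg-Richter distribution and then mapped to long-term-model events of similar magnitude in proportion to their relative nucleation rates. The event list, rates, and the 0.1-magnitude matching tolerance are placeholder assumptions, not the adopted algorithm.

```python
import numpy as np

# Minimal sketch of the GR-resampling option: draw an aftershock magnitude from
# a pure Gutenberg-Richter distribution, then map it to an actual long-term-model
# event of similar magnitude chosen in proportion to relative nucleation rates.

rng = np.random.default_rng(0)
B_VALUE, M_MIN, M_MAX = 1.0, 2.5, 8.0  # placeholder b-value and truncation

def sample_gr_magnitude():
    """One magnitude from a GR distribution with b = B_VALUE, truncated at M_MIN/M_MAX."""
    u = rng.random()
    span = 1.0 - 10.0 ** (-B_VALUE * (M_MAX - M_MIN))
    return M_MIN - np.log10(1.0 - u * span) / B_VALUE

def pick_event(events, target_mag, tol=0.1):
    """Pick a long-term-model event within `tol` of target_mag, weighted by nucleation rate.

    `events` is a list of dicts with keys 'id', 'mag', and 'rate'; if no event lies
    within the tolerance, fall back to the closest-magnitude event.
    """
    near = [e for e in events if abs(e["mag"] - target_mag) <= tol]
    if not near:
        near = [min(events, key=lambda e: abs(e["mag"] - target_mag))]
    rates = np.array([e["rate"] for e in near], dtype=float)
    return near[rng.choice(len(near), p=rates / rates.sum())]

# Hypothetical long-term-model events (gridded-seismicity cells and a fault rupture):
events = [{"id": "bg_cell_a",   "mag": 5.1, "rate": 3.0e-3},
          {"id": "bg_cell_b",   "mag": 5.2, "rate": 1.0e-3},
          {"id": "fault_rup_1", "mag": 6.7, "rate": 4.0e-4}]
print(pick_event(events, sample_gr_magnitude()))
```

Under this scheme the magnitude-frequency distribution of sampled aftershocks is GR by construction, while the particular events (and hence their locations and rupture surfaces) are still drawn from the long-term model.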

Figure 41. The map above shows the expected number of M ≥ 6.5 hypocenters, in bins, over a 5-year period as predicted by UCERF2, and the plot below shows the incremental (blue) and cumulative (black) magnitude-frequency distributions for events that nucleate inside the black box shown in the map. UCERF2 sources that nucleate inside this box include: Holser, alt 1, Northridge, Oak Ridge (Onshore), Oak Ridge Connected, Pitas Point Connected, San Cayetano, San Gabriel, Santa Susana, alt 1, Santa Ynez (East), Santa Ynez Connected, Sierra Madre (San Fernando), Sierra Madre Connected, Simi-Santa Rosa, Ventura-Pitas Point, Verdugo, and background gridded seismicity. The box is defined by latitudes 34.25º and 34.55º and longitudes º and º.

E.5. Logic-Tree Branches

Possible logic-tree branches here include the following:

- Application (or not) of the revised Empirical Model
- The COV used in the average time-predictable, elastic-rebound calculations
- Uncertainty in the amount of slip in the most recent event along each fault
- Alternative ETAS parameters, and perhaps a sequence-specific option
