ANNUAL REPORT TO THE CENTRAL WEATHER BUREAU

ANNUAL REPORT TO THE CENTRAL WEATHER BUREAU ON Earthquake Early Warning System and Implementation of the Strong Motion Instrumentation Program

Submitted by

Ta-liang Teng
Southern California Earthquake Center
University of Southern California
Los Angeles, California

William H.K. Lee
U.S. Geological Survey
Menlo Park, California
Also at 862 Richardson Court, Palo Alto, California

November 28,

Table of Contents

Executive Summary
Section A: Earthquake Early-Warning System Development
Section B: Earthquake Prediction and Earthquake Early Warning in Taiwan
Section C: Instrument Evaluation and Strong-Motion Software Development
Section D: Processing the Strong-Motion Records from the Chi-Chi Earthquake Sequence

Executive Summary

This contract performs work that assists the Central Weather Bureau (CWB) in planning and executing the large seismological observation programs of its Seismological Center, and helps with instrument specifications, calibrations, data quality control, and database construction. In parallel with these tasks, this contract also helps CWB develop and implement the software necessary for earthquake rapid reporting and for the still ongoing task of earthquake early warning. This contract also reviews the current global scene in earthquake prediction research and recommends the strategy CWB should take in helping Taiwan deal with its seismic hazard problem.

Seismic Hazard Management in Taiwan (Sections A and B)

CWB, with help from scientists in Taiwan and abroad, has made important contributions to earthquake rapid reporting. Throughout the 1999 Chi-Chi earthquake sequence, CWB's rapid reporting capability (about one minute after the origin time) generated very important and timely information for the Taiwan government agencies responsible for emergency response and the dispatching of rescue missions. On the subject of earthquake rapid reporting and earthquake early warning, Taiwan has made a major scientific contribution and currently still leads the world in this accomplishment. At a hearing of the United States Congress on October 20, 1999, Mr. Waverly Person, a key witness representing the United States Geological Survey, praised the seismic instrumentation and performance in Taiwan, and urged the United States Congress to fund a similar effort in the United States. Because of limited CWB funding in the past few years, further development of the rapid reporting system into a true earthquake early-warning system has been slow. We urge CWB to continue and intensify its effort to bring its earthquake early-warning program to completion, and further to harden the seismic data communication links so as to minimize system failure during future large earthquakes.

Earthquake Prediction in Taiwan (Section B)

This section consists of an assessment of current earthquake prediction research in the world; it also offers a suggested earthquake hazard reduction strategy for Taiwan. We have written this section of the CWB Final Report based on (1) our discussions with the world's leading seismologists over the past two years, (2) a synthesis of the published results in earthquake prediction research over the past 30 years, and (3) a summary of the recent scientific debates on earthquake prediction hosted by the journal Nature. A simplified summary follows:

A. In view of Taiwan's available scientific manpower and limited funding resources, it is prudent for Taiwan, and especially for CWB, not to get deeply involved in large experimental earthquake prediction programs. We do not believe that any precursor-chasing earthquake prediction program in the world has arrived at significant and repeatable results.

B. Taiwan should maintain and further augment its current GPS program. It has produced excellent strain accumulation data over the past 30 years and especially during the Chi-Chi earthquake. The GPS data are well understood, accurate, and repeatable. They give a measure of the ongoing deformation of Taiwan in response to the 8 cm/year plate convergence rate.

C. Because of the excellent 3-component broadband seismic database from 1990 to 2000, Taiwan has an excellent chance to test the usefulness of shear-wave splitting observations as a means of stress-forecasting. That, plus the GPS data, may lead to a practical measure potentially useful for earthquake prediction analysis. Dr. Stuart Crampin of England has long proposed this approach, and his team is performing a field experiment in Iceland. Taiwan already has the needed data on hand; that is, the field experiment in Taiwan is effectively done, and at least a 10-year database is available, which may even be extended to a 30-year seismic database. A Taiwan shear-wave splitting program would be inexpensive and would need little manpower. Moreover, results from Taiwan would be much more definitive and relevant for tectonic earthquakes than those from Crampin's group, which deal mainly with volcanic earthquakes.

Strong-Motion Software Development and Processing the Chi-Chi Strong-Motion Records (Sections C and D)

The enormous seismic data recovery of the TSMIP Program and of the CWBSN during the 921 Chi-Chi earthquake sequence must be considered one of the most important seismological observations anywhere in the world; the data thoroughly cover one of the largest earthquakes in Taiwan, both in space and in time. This contract has spent an equally enormous amount of time checking and improving the accuracy of this Chi-Chi data set. Taiwan's Chi-Chi data set immediately reminds the scientific world of the important astronomical observations meticulously performed by Tycho Brahe in the late 16th century. Without Tycho's complete set of observations of unprecedented accuracy and coverage, Kepler could not have discovered his laws of elliptical planetary orbits. This not only brought about a revolution in astronomy but also directly led to the development of modern science, such as Newton's theory of gravitation. CWB's role in the development of big science may not be exactly comparable to the Tycho-Kepler case, but in time this Chi-Chi data set will make CWB shine; findings from the data will punctuate seismological progress with major breakthroughs, which will undoubtedly lead to quantum jumps in our understanding of earthquakes and of their predictability.

Section C summarizes the development of powerful interactive software for quality assurance of strong-motion data. This software and several supporting programs were used to process about 10,000 strong-motion records obtained during the first 6 hours and 13 minutes after the Chi-Chi main shock (17:47 to 24:00 of September 20, 1999, UTC time). Some sample results are shown in Section D for 5 near-source regions: (1) along the Chelungpu fault, (2) from the hanging wall of the Chelungpu fault, (3) from the footwall of the Chelungpu fault, (4) north of the earthquake ruptures, and (5) southwest of the Chi-Chi hypocenter.

Section A: Earthquake Early-Warning System Development

Contents

I. Introduction
II. Performance of the RTD System during the 921 Chi-Chi Earthquake in 1999
III. Current Status of the CWB-USC Project
IV. Comments and Suggestions
References
Appendix: Statement of Mr. Waverly Person, Geophysicist, U. S. Geological Survey, before the Subcommittee on Basic Research, Committee on Science, U. S. House of Representatives

I. Introduction

As increasing urbanization takes place worldwide, earthquake hazards pose serious threats to lives and property in urban areas near major active faults on land or subduction zones offshore. Taiwan is very vulnerable to damaging earthquakes because a large population is concentrated in urban areas on an island with many active faults on land and at sea. The disastrous 921 Chi-Chi earthquake (Mw = 7.6) occurred at 1:47 a.m., 21 September 1999, local time (Shin et al., 2000). The death toll is about 3,000, with over 10,000 people injured. Thousands of houses collapsed, making more than 100,000 people homeless. The economic loss was estimated at tens of billions of dollars.

At the time of the earthquake, the Taiwan Rapid Earthquake Information Release (RTD) System automatically determined the location and magnitude of the main shock and prepared a shake map within 102 seconds after the earthquake's origin time. This is the world's best record for reporting a major earthquake after its occurrence. For comparison, similar results were reported in about 5 minutes for the Hector Mine earthquake (Mw = 7.1) of October 16, 1999 in southern California (Dave Wald, personal communication, 1999), and it took two days before reliable information about the Kocaeli (Izmit), Turkey, earthquake (Mw = 7.4) was obtained (Person, 1999). At a hearing of the United States Congress on October 20, 1999, Mr. Waverly Person, a key witness representing the United States Geological Survey, praised the seismic instrumentation and performance in Taiwan, and urged the United States Congress to fund a similar effort in the United States. Mr. Waverly Person's statement was released to the public on the Internet (Person, 1999) and is reproduced in full in the Appendix of this Section A.

Earthquake early-warning systems can be a useful tool for reducing earthquake hazards, if cities are favorably located with respect to earthquake sources and their citizens are properly trained to respond to earthquake warning messages. The physical basis for earthquake early-warning systems is well understood: destructive S- and surface waves travel at about half the speed of the P-waves, and seismic waves travel much more slowly than signals transmitted by telephone or radio (Lee et al., 1996).

Since 1992, an active program to develop and implement earthquake early-warning systems has been conducted by CWB in cooperation with the U.S. Geological Survey ( ) and with the Southern California Earthquake Center of the University of Southern California (1992-present). Development continues to the present, but progress has been slow due to the lack of sufficient funding. The Taiwan Rapid Earthquake Information Release (RTD) System (Wu et al., 1997; Teng et al., 1997) is one of the few real-time seismic systems in the world that can detect and locate a nearby earthquake in about 1 minute. During the disastrous 921 Chi-Chi earthquake in 1999, the RTD system performed well, and accurate information was sent out automatically 102 seconds after the earthquake's origin time. This took a little longer than usual because the Chi-Chi earthquake was large (Mw = 7.6); in order to ascertain its magnitude correctly, the RTD system had to wait until the principal motions began to subside. An assessment of its performance is given in the next section.
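The physical basis just described can be turned into a rough back-of-the-envelope estimate of the warning time available at a given city. The sketch below (in Python) is purely illustrative and is not the RTD algorithm; the wave speeds, distances, and processing delay are assumed values.

    # Illustrative estimate of earthquake early-warning lead time.
    # Velocities and delays are assumed typical values, not CWB system parameters.
    VP = 6.5   # assumed average crustal P-wave speed, km/s
    VS = 3.5   # assumed average crustal S-wave speed, km/s

    def warning_time(city_distance_km, station_distance_km, processing_delay_s):
        """Seconds between warning issuance and S-wave arrival at the city."""
        t_s_city = city_distance_km / VS        # when strong shaking reaches the city
        t_detect = station_distance_km / VP     # when the P wave reaches the nearest station
        return t_s_city - (t_detect + processing_delay_s)

    # Example: a city 100 km from the source, nearest station 20 km away,
    # 15 s of detection, processing, and telemetry delay.
    print(round(warning_time(100.0, 20.0, 15.0), 1), "s of warning")   # about 10.5 s

Under these assumed numbers only about ten seconds of warning remain, which illustrates why station density near the source and short processing delays matter so much.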

II. Performance of the RTD System during the 921 Chi-Chi Earthquake in 1999

At the time of the earthquake, the Taiwan Rapid Earthquake Information Release (RTD) System automatically determined the location and magnitude of the mainshock and prepared a shake map within 102 seconds after the earthquake's origin time. This is the world's best record for reporting a major earthquake after its occurrence. For comparison, similar results were reported in about 5 minutes for the Hector Mine earthquake of 1999 in southern California.

The RTD System quickly obtained a good estimate of the hypocenter location and magnitude, and a shake map. This information was immediately disseminated to government emergency response agencies electronically in four ways: by e-mail, the World Wide Web, fax, and pager. The RTD System performed as originally designed, and it kept up with the numerous aftershocks after the main shock. A detailed report on the RTD System performance during the Chi-Chi earthquake was published in Wu et al. (2000).

CWB also operates a digital short-period telemetered seismic network (S13), which is the primary tool used for routine earthquake monitoring. The following table compares the results from these two systems, as well as the solution by the USGS using global digital stations around the world and a refined solution by Willie Lee using a few near-field strong-motion records:

o RTD: °N, °E, depth 10 km, ML = 7.3; results in 102 sec.
o S13: °N, °E, depth 7 km; results in ~30 min.
o USGS: 238. °N, °E, normal, MS = 7.6; results in ~50 min.
o Refined: °N, °E, depth 11 km.

Values for the epicenter and the focal depth are essentially the same from the RTD and the S13 systems. The latitude of the USGS solution agrees well with the local solutions, but the longitude differs by about 25 km. Amplitudes from the short-period instruments are saturated, but the RTD's ML value agrees well with the Mw value of 7.6 from the USGS's moment tensor solution. A refined solution was obtained by combining the S13 arrival-time data with a few S-P interval times from near-source strong-motion records, using the HYPO71PC program (Lee and Valdes, 1994) and the CWB layered velocity model. The epicenter does not change, and the focal depth of 11 km provides the best fit to the observed data.
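As a side note on why even a few near-source S-P interval times are so valuable in the refined solution: under the crude assumption of a uniform crust, each second of S-P time corresponds to roughly 8 km of hypocentral distance. The sketch below uses assumed wave speeds and is only a rule of thumb, not the HYPO71PC travel-time inversion actually used.

    # Rough illustration of how an S-P interval time constrains hypocentral distance.
    # The constant-velocity formula is a rule of thumb; HYPO71PC performs a full
    # travel-time inversion with the CWB layered velocity model.
    VP = 6.5   # assumed P-wave speed, km/s (illustrative, not the CWB model)
    VS = 3.5   # assumed S-wave speed, km/s

    def distance_from_sp(sp_interval_s):
        """Approximate hypocentral distance (km) from an S-P interval time (s)."""
        return sp_interval_s * VP * VS / (VP - VS)   # roughly 8 km per second of S-P time

    for t_sp in (2.0, 5.0, 10.0):
        print(f"S-P = {t_sp:4.1f} s  ->  ~{distance_from_sp(t_sp):5.1f} km")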

A simple experiment (fixing the epicenter but placing the focal depth at various depths from 0 to 20 km) indicates that a focal depth shallower than 9 km or deeper than 13 km has significantly higher RMS residuals and more misfits of the first P motions.

The 921 Chi-Chi earthquake experience indicated that a major weakness in the RTD System is the telephone telemetry system. We had previously recommended additional methods of telemetry (e.g., by satellite or radio) to CWB; unfortunately, due to cost and other considerations, this has not yet been implemented. After the Chi-Chi earthquake, CWB began implementation of a real-time sub-net system at Hualien and at Taichung. This concept was proposed by us several years ago, and we are glad that CWB now recognizes its value.

Although the RTD System performed well during the 921 Chi-Chi earthquake, it was originally designed by Willie Lee using technology that is by now nearly 10 years old. Many improvements are desirable, in particular:

o Alternative telemetry methods must be implemented to supplement the current telephone telemetry, and
o Further development of the RTD System with the latest hardware and software technology is necessary in order to make it a true earthquake early-warning system.

III. Current Status of the CWB-USC Project

The software used for earthquake early warning and for real-time strong-motion monitoring in building arrays was originally developed by W. H. K. Lee and associates over 10 years ago (Lee, 1989; 1990). At that time, the only reliable operating system for PCs was the Microsoft Disk Operating System (MS-DOS). However, MS-DOS allows only a single task to be executed and has a limited addressable space of 640 kilobytes. Consequently, various nonstandard schemes were used to overcome these limitations. Because PC hardware and software have advanced so rapidly in the past decade, it is no longer possible to purchase new hardware that executes the old software reliably; the new hardware is backward compatible only for software written to standard protocols. Hence, we have no choice but to develop new software for real-time seismic data acquisition, processing, and analysis, especially for earthquake early warning purposes.

We began re-writing the real-time seismic software in . Because almost all new PCs are shipped with the Microsoft Windows 95/98 Operating System, we must write our software to be compatible with it. Because there are some limitations of Windows 95/98, we decided to develop the new software under the Microsoft Windows NT Operating System. Since software developed for Windows NT can also be executed in Windows 95/98, we are better off this way because, if necessary, we can also use Windows NT (which is more stable than Windows 95/98, but has other problems and costs more).

The first step in our new software development was to replace the 16-bit PC-SUDS Library (originally written by Robert Banfill under the direction of W. H. K. Lee) with a 32-bit equivalent for Windows NT and Windows 95/98. The first version of the PC-SUDS32 Library was released in February 1997 (Dodge, 1997), and the second version is included in the 1997 Annual Report (Teng et al., 1997) as Appendix 1 of Section A. Using the PC-SUDS32 Library, it is much quicker and easier to develop real-time, multi-tasking computer programs for seismic applications. For example, an application program (Sudsfix) for use in quality control of strong-motion data collected by CWB is described in Section C of the 1997 Annual Report.

The second step in our new software development was to replace the RTP software used in the CWB building arrays. A Multi-tasking Real-time Seismic Software package (MRSS, Version 1) was developed by W. H. K. Lee and D. Dodge and was first released in March 1997 (Lee and Dodge, 1997); it is included in the 1997 Annual Report (Teng et al., 1997) as Appendix 2 of Section A. A modified version of MRSS for the special CWB array in Tainan (with additional wind sensors), called MRSS-64, is also included in the 1997 Annual Report as Appendix 3 of Section A. The MRSS (Version 1) software consists of two parts: a data acquisition module supplied by Symmetric Research (1997), and a data processing module called DISPATCH, which manages the incoming digitized data and executes a batch program consisting of user-specified data processing modules. The advantage of this approach is flexibility: a user can easily try different data processing modules, or mix and match various modules (including standard 16-bit DOS programs). However, this flexibility comes at a price in terms of efficiency and reliability.

During 1997/1998, we developed Version 2 of the MRSS software, which was released on March 26, 1998 (Dodge and Lee, 1998), in time for the 3 new building arrays installed by CWB in April 1998. A slightly improved Version 2.3 is included in the 1998 Annual Report as Appendix 1 of Section A, and the corresponding source code is included as Appendix 2 of Section A. Unfortunately, the MRSS software was not well received, because Windows operating systems do not in general restart automatically when power returns after an outage, and because the user interface is very different from that of the traditional RTP software.

The traditional RTP software is not Y2K-compliant, as pointed out by Yih-Min Wu and others. Yih-Min Wu also provided us with a modification of the XRTP and XRTPDB software so that they are Y2K-compliant. Because RTP contains some commercial software modules whose source code we do not have access to, we re-wrote the RTP software to bypass these modules. In addition to making it Y2K-compliant, we also merged the XRTP and RTP software into a new package called irss16, which can be executed under Windows 95/98. The irss16 software is now being tested, and our next step is to merge XRTPDB with irss16. For ease of development, the irss16 software is the first step towards the irss32 software, which will be a fully 32-bit Windows program.
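The DISPATCH idea described above (incoming digitized data handed to a user-specified chain of processing modules) can be sketched in outline as follows. The directory layout and module commands are hypothetical examples, not the actual MRSS code.

    # Minimal sketch of a DISPATCH-style batch runner: watch a directory for newly
    # acquired waveform files and run each user-specified processing module on them.
    # Directory names, file suffix, and module commands are hypothetical.
    import subprocess
    import time
    from pathlib import Path

    INCOMING = Path("incoming")        # where the acquisition module writes files
    PROCESSED = Path("processed")      # where finished files are moved
    MODULES = [                        # user-specified processing chain
        ["demux_module", "{file}"],    # hypothetical demultiplexer
        ["picker_module", "{file}"],   # hypothetical phase picker
    ]

    def dispatch_once():
        for wf in sorted(INCOMING.glob("*.sud")):
            for cmd in MODULES:
                # substitute the current file name into the module command line
                subprocess.run([arg.format(file=str(wf)) for arg in cmd], check=False)
            wf.rename(PROCESSED / wf.name)

    if __name__ == "__main__":
        INCOMING.mkdir(exist_ok=True)
        PROCESSED.mkdir(exist_ok=True)
        while True:                    # poll for new data files
            dispatch_once()
            time.sleep(1.0)

The flexibility (any mix of modules, swapped by editing a list) and the cost (every file pays the overhead of launching external processes) correspond to the trade-off between flexibility and efficiency noted above.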

New development of the irss32 software was halted because of the occurrence of the 921 Chi-Chi earthquake in 1999. All efforts have been diverted to processing and analyzing the strong-motion data from the Chi-Chi earthquake and its aftershocks. Over 30,000 strong-motion records were organized, and about 10,000 strong-motion records received preliminary processing. In particular, a final version of the Chi-Chi main-shock strong-motion data was completed as described in Section B.

IV. Comments and Suggestions

From a political point of view, it is very desirable to improve this RTD system into an earthquake early-warning system. Implementing an operational earthquake early-warning system in Taiwan is technically feasible, as demonstrated by the prototype earthquake early-warning system at Hualien earlier (Chung et al., 1995; Lee et al., 1996). However, to achieve a useful earthquake warning time of seconds, a dense telemetered seismic network is necessary. This will require considerably more funding for:

o Capital equipment (several tens of millions of US dollars),
o Operation (several million US dollars per year), and
o A technically competent staff (dozens of first-rate engineers, technicians, and seismologists).

We are also concerned about the social implications of releasing an earthquake early-warning message to the public, because:

o Taiwan is not favorably located (as compared to, say, Mexico) for earthquake early warning: potential earthquakes are too close to urban areas, so very little time is available for issuing an earthquake warning and for reacting to it.
o Taiwan does not yet have an emergency preparedness agency that can effectively react to an earthquake warning.
o The Mexico City experience (Lee and Espinosa-Aranda, 2000) clearly illustrates the difficulties in educating an urban population for earthquake response.

In view of the financial, staffing, and social difficulties, we can only recommend that CWB pursue the technical development of improving the Taiwan Rapid Earthquake Information Release System, especially by taking advantage of emerging new technology, such as the Internet and Web tools. We also think it is useful to investigate the "single-station" technique successfully used by the Japanese in their UrEDAS system (Nakamura, 1996), and the "direct-search" technique for earthquake location recently demonstrated by Anthony Lomax.
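A "direct-search" location of the kind just mentioned can be illustrated by a brute-force grid search over trial hypocenters that minimizes the root-mean-square misfit of observed arrival times. The sketch below uses an invented station geometry, invented picks, and a uniform half-space velocity; it is not Lomax's method, only an illustration of the idea.

    # Toy "direct-search" earthquake location: grid search over trial epicenters and
    # origin times, minimizing the RMS misfit of P arrival times in a uniform medium.
    # Stations, picks, and the velocity are invented for illustration only.
    import itertools
    import math

    VP = 6.5  # assumed uniform P-wave speed, km/s

    # Station coordinates (x, y in km) and observed P arrival times (s).
    stations = [((0.0, 0.0), 7.7), ((40.0, 0.0), 6.3), ((0.0, 60.0), 5.6)]

    def rms_misfit(x, y, t0):
        """RMS residual (s) for a trial epicenter (x, y) and origin time t0."""
        residuals = [
            t_obs - (t0 + math.hypot(sx - x, sy - y) / VP)
            for (sx, sy), t_obs in stations
        ]
        return math.sqrt(sum(r * r for r in residuals) / len(residuals))

    # Coarse grid: x, y in 0-100 km (5 km steps), origin time 0-10 s (0.5 s steps).
    trials = itertools.product(range(0, 101, 5), range(0, 101, 5), [0.5 * i for i in range(21)])
    best = min(trials, key=lambda p: rms_misfit(*p))
    print("best trial (x km, y km, t0 s):", best, "RMS:", round(rms_misfit(*best), 2))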

11 the "direct-search" technique for earthquake location as demonstrated by Anthony Lomax recently. Although the RTD System performed well during the 921 Chi-Chi earthquake, we wish to emphasize that it was originally designed by Willie Lee using technology that is nearly 10 years old by now. During the 921 Chi-chi earthquake, the leased telephone lines performed poorly. We strongly recommend that CWB seeks alternative telemetry methods (e.g., satellite communications) to supplement the current telephone telemetry, and further development of the RTD System with the latest hardware and software technology is necessary in order to make it a truly earthquake warning system. In the original design of the RTD system, we recommended "clusters" or "subnets" so that the RTD system is not centralized in the CWB headquarters, but is a distributed system with independent sub-centers. We are pleased that CWB is now implementing sub-centers at Hualien and Taichung. Finally, with limited support from CWB, we can only do one topic well. At present, we are tied up with processing the large amount of the strong-motion data recorded by CWB. Unless there is a significant increase of support, we can not assist CWB much on new development of earthquake early warning system. A-7

References

Chung, J. K., W. H. K. Lee, and T. C. Shin (1995). A prototype earthquake warning system in Taiwan: operation and results. IUGG XXI General Assembly, Abstracts Week A, p. 406.

Dodge, D. (1997). PC-SUDS32 Library, unpublished report (see Appendix 1 of Section A in Teng et al., 1997).

Dodge, D., and W. H. K. Lee (1998). Multi-tasking Real-time Seismic Software (Version 2.3) for CWB Building Arrays, unpublished report (see Appendices 1 and 2 of Section A in Teng et al., 1998).

Lee, W. H. K. (Editor) (1989). Toolbox for Seismic Data Acquisition, Processing, and Analysis. IASPEI Software Library, Volume 1, Seismological Society of America, El Cerrito, CA, 283 pp; Second Edition.

Lee, W. H. K. (Editor) (1990). Toolbox for Plotting and Displaying Seismic and Other Data. IASPEI Software Library, Volume 2, Seismological Society of America, El Cerrito, CA, 207 pp; Second Edition.

Lee, W. H. K., and C. M. Valdes (1994). User manual for HYPO71PC (a personal computer version of the HYPO71 earthquake location program), IASPEI Software Library, Volume 1 (2nd Edition), Seism. Soc. Am., El Cerrito, CA.

Lee, W. H. K., T. C. Shin, and T. L. Teng (1996). Design and implementation of earthquake early warning systems in Taiwan. Proc. 11th World Conf. Earthq. Eng., Paper No.

Lee, W. H. K., and D. Dodge (1997). Multi-tasking Real-time Seismic Software (Version 1.0) for CWB Building Arrays, unpublished report (see Appendix 2 of Section A in Teng et al., 1997).

Lee, W. H. K., and J. M. Espinosa-Aranda (2000). Earthquake early-warning systems: current status and perspectives, in Proceedings of the International IDNDR-Conference on Early Warning Systems for the Reduction of Natural Disasters, Potsdam, Germany, September 7-11, 1998, in press.

Nakamura, Y. (1996). Real-time information system for hazards mitigation. Proc. 11th World Conf. Earthq. Eng., Paper No.

Person, W. (2000). Statement of Mr. Waverly Person, Geophysicist, U. S. Geological Survey, before the Subcommittee on Basic Research, Committee on Science, U. S. House of Representatives.

Shin, T. C., K. W. Kuo, W. H. K. Lee, T. L. Teng, and Y. B. Tsai (2000). A preliminary report on the 1999 Chi-Chi (Taiwan) earthquake. Seism. Res. Letters, vol. 71.

Symmetric Research (1997; 1998). DSPA64 User's Guide, Symmetric Research, 15 Central Way, Suite #9, Kirkland, WA 98033, USA.

Teng, T. L., L. Wu, T. C. Shin, Y. B. Tsai, and W. H. K. Lee (1997). One minute after: strong motion map, effective epicenter, and effective magnitude. Bull. Seism. Soc. Am., vol. 87.

Teng, T. L., M. Hsu, W. H. K. Lee, Y. B. Tsai, F. T. Wu, and G. Liu (1997). Annual Report to the Central Weather Bureau on Earthquake Early Warning System and Implementation of the Strong Motion Instrumentation Program, Technical Report, Volume 17, Seismology Center, Central Weather Bureau, Taiwan, June.

Teng, T. L., M. Hsu, W. H. K. Lee, Y. B. Tsai, F. T. Wu, and G. Liu (1998). Annual Report to the Central Weather Bureau on Earthquake Early Warning System and Implementation of the Strong Motion Instrumentation Program, Technical Report, Volume 20, Seismology Center, Central Weather Bureau, Taiwan, July.

Wu, Y. M., C. C. Chen, T. C. Shin, Y. B. Tsai, W. H. K. Lee, and T. L. Teng (1997). Taiwan Rapid Earthquake Information Release System. Seism. Res. Letters, vol. 68.

Wu, Y. M., W. H. K. Lee, C. C. Chen, T. C. Shin, T. L. Teng, and Y. B. Tsai (2000). Performance of the Taiwan Rapid Earthquake Information Release System. Seism. Res. Letters, vol. 71.

Appendix: Statement of Mr. Waverly Person, Geophysicist, U. S. Geological Survey, before the Subcommittee on Basic Research, Committee on Science, U. S. House of Representatives

INTRODUCTION

Mr. Chairman and members of the subcommittee, thank you for this opportunity to present, on behalf of the U.S. Geological Survey, testimony on the role of the USGS's National Earthquake Information Center (NEIC) during the recent large earthquakes and the lessons learned from these events. I would like to begin by describing how the NEIC works.

THE ROLE OF NEIC

The NEIC, in Golden, Colorado, is operated by the USGS as part of the Earthquake Hazards Program and maintains a 24-hour-a-day Earthquake Early Alerting Service. This service rapidly and accurately determines the location and magnitude of significant earthquakes throughout the world. The NEIC immediately and electronically sends this information to key civil defense and public safety agencies, including railroads, power plants, pipeline companies, Federal and state emergency services, the State Department, the United Nations Department of Humanitarian Affairs in Geneva, the national and international news media, scientists involved in aftershock studies, and the general public. The facts about a damaging earthquake abroad are also relayed to the staff of the American embassies and consulates in the affected countries. The USGS issues rapid reports and press releases for earthquakes of magnitude 4.5 or greater in the conterminous United States, magnitude 6.5 or greater elsewhere in the world, and other earthquakes of lesser magnitude that are potentially damaging or cause public concern.

At present, we have approximately 150 seismograph stations from the U.S. and around the world transmitting seismic data to NEIC via satellite and the Internet on a real-time basis. These stations enable us to rapidly locate and compute the magnitudes of significant earthquakes throughout the world very accurately. As an example of how this system works, I have provided a chronology of the events which occurred following the M 7.6, September 20, 1999, Taiwan earthquake. The origin of the Taiwan earthquake was 13:47 EDT, and within the first minute the event was recorded on a USGS Global Seismic Network station in Taiwan. Nine minutes after the origin of the event, the first energy, in the form of P- (for Primary) waves, arrived at the westernmost U.S. National Seismic Network station in Shemya, Alaska.

By 14:00 EDT, news media began to inquire about the earthquake via correspondents in Taipei. Although this was 13 minutes after the mainshock, it is important to note that energy from the event was just beginning to arrive at stations within the conterminous U.S. By 14:03 EDT the USGS had enough information from the Global Seismic Network and the U.S. National Seismic Network to locate the event and issue automatic, computer-generated e-mail, fax, and pager announcements of the event to critical offices and disaster relief agencies. [Figure 1]. The preliminary size of this event was reported as only 6.0. This is because the magnitude estimate was based only on the first-arriving, high-frequency P-wave energy. The low-frequency surface wave energy, which gives a more accurate measurement of large earthquakes, did not arrive until 14:20 EDT. Such adjustments in magnitude estimates are to be expected for large events when they are observed by seismographs located thousands of kilometers away from the epicenter. By 14:35 EDT the NEIC was able to issue a preliminary reviewed solution of magnitude 7.6 based on surface wave amplitudes. Again, computer-generated e-mail, fax, and pager announcements based on the reviewed solution were sent to critical offices and disaster relief agencies. [Figure 2]. During the ensuing 20 minutes, the USGS Earthquake Program office and the State Department Operations Center were contacted by phone, and manual fax messages were confirmed as sent, including to the White House Situation Room. The notification process was complete by 15:25 EDT, 1 hour and 38 minutes after the initiation of the mainshock. [Figure 3]. During the next hours and days, the NEIC gave numerous media interviews, including reports on a vigorous aftershock sequence.

The steps described above in characterizing an earthquake and issuing earthquake announcements are essentially identical for locating and characterizing domestic earthquakes. In a domestic earthquake, however, the U.S. National Seismic Network and NEIC take on a much more pivotal role in earthquake alert, response, assessment, and recovery efforts. In the same way, the local networks in Turkey and Taiwan served as the central nervous system during each of their respective events, providing information critical to earthquake response efforts. It is thus instructive to consider the performance of the Turkish and Taiwanese local networks, with an eye toward learning from their experiences and better preparing for the inevitable events in the U.S.

LESSONS LEARNED

In responding to a devastating earthquake, time is of the essence. Emergency managers require immediate information on the location and magnitude of the epicenter and the severity and geographic distribution of strong ground shaking in order to determine the levels of mobilization required and, to some extent, the types of resources needed to respond effectively and to efficiently allocate what is available. Our Turkish colleagues report that it took two days to assess the full extent of the damage from the M 7.4 earthquake in Turkey and to direct teams of relief workers to the hardest hit areas. Part of the problem the Turkish government had in dealing with the crisis was that the Turkish seismic network was saturated by the large event and did not perform effectively. The earthquake's magnitude was initially estimated as 6.7 by the Turkish Kandilli Observatory, which operates the seismic network in the Izmit region.

This estimate was considerably lower than the actual magnitude of 7.4 and, according to our Turkish colleagues, initially gave a wrong impression to scientists and government officials of the likely damage. The Kandilli network is relatively modern for locating events, but it is not designed to handle big earthquakes. As a result, the severity and extent of the damage and human loss were underestimated, and emergency response officials were not provided with adequate and timely information with which to work. This was precisely the same problem that plagued Japan after the 1995 Kobe earthquake, when government officials didn't realize the magnitude of the disaster.

In contrast, in Taiwan, relief efforts moved much more quickly, aided significantly by the modern seismographic alert system in place in Taiwan. An intensive program to instrument urban areas was completed in 1996, and now Taiwan maintains a network consisting of approximately 1000 digital seismic recording instruments, the highest concentration of modern digital accelerographs in the world. For comparison, the station spacing of the more abundant free-field accelerographs (i.e., those instruments not situated in buildings or structures) in Taiwan is about 3 km in the metropolitan areas, versus a 25 km uniform spacing in Japan. In the U.S., although the state of California is 12 times the size of Taiwan, there are only 1400 of these strong ground motion stations spread across the entire U.S., nowhere close to the station density in place in Taiwan. This difference is highlighted in Figure 4, which compares the station spacing in Taiwan and northern California. Equally important, most of the U.S. strong motion stations are not capable of transmitting critical ground shaking information electronically to a central recording center.

This dense array of accelerographs allowed the Taiwan Seismic Center to rapidly compute the parameters of the Taiwan earthquake and associated aftershocks. The location, depth, and magnitude were computed in 102 seconds, and the results agreed well with the final magnitude of 7.6 determined by NEIC about 3 hours after the earthquake. Once the earthquake parameters were determined by the Taiwan processing center, information was distributed via e-mail to scientists, government officials, and emergency response people. In addition, the Taiwan alert system automatically created a shaking intensity map for the island in the first 102 seconds. The latter provided an estimate of shaking severity and the level of damage likely associated with such shaking. The map enabled emergency managers to promptly locate the hardest hit areas and to send appropriate help. Because damage from an earthquake is not evenly distributed about its epicenter but is instead greatly affected by local geologic conditions, an accurate shaking intensity map is an invaluable tool for emergency managers. But resolving this non-uniform pattern of ground shaking in an urbanized area requires dense instrumentation, and in the U.S. we fall well short of the Taiwan model.

Given present capabilities, USGS seismologists expect that an earthquake in northern California comparable to the magnitude 7.6 Taiwan event would require several hours before an equivalent shake-intensity map could be produced, assuming our facilities in Menlo Park, California, are not damaged. Is several hours sufficient? According to Mr. Richard Eisner, the Associate Director of California's Office of Emergency Services, the percentage of people rescued after the "golden hour" (the first 60 minutes after a large earthquake) is extremely small.

This was the case for Northridge and Loma Prieta, and similar results appear to hold true for Turkey and Taiwan. So the difference between 100 seconds and 100 minutes can be the difference between life and death. Although southern California, by virtue of the FEMA-sponsored TriNet, can now produce a shake map for an earthquake in its region almost as fast and almost as well as Taiwan, the San Francisco Bay area, Seattle, Portland, New Madrid, Salt Lake City, and other cities in earthquake-prone areas of the U.S. are all vulnerable. In these regions, the U.S. network is comparable to the Kandilli Observatory network in Turkey and is currently far inferior to the network operating in Taiwan.

In February of 1999, the USGS responded to a request by this subcommittee and submitted a report entitled "An Assessment of Seismic Monitoring in the United States: Requirement for an Advanced National Seismic System." Delineated in this report are the needs and associated costs of upgrading the U.S. seismic networks to achieve an effective national seismic monitoring strategy. In addition to seismograph and communications upgrades, the report suggests that NEIC needs to be modernized to serve as a backup for the regional networks and data centers. It should be able to replicate their services (including the production of ground shaking intensity maps) should a regional center fail due to a major earthquake, power loss, or other extreme event, as occurred during the Loma Prieta and Northridge earthquakes. Furthermore, the report notes that NEIC needs to modernize and expand its data and information products and associated dissemination procedures. The report indicates that these modernization steps are necessary if the U.S. hopes to follow Taiwan's lead and effectively plan for mitigating the effects of earthquakes and providing rapid relief to earthquake-devastated regions.

Although my remarks have emphasized the need to speed earthquake disaster relief, it is also important to note that implementation of an advanced national seismic system offers the capability to recognize when an earthquake is in progress and to provide seconds to tens of seconds of warning before the onset of strong shaking at a site, depending on its distance from the epicenter. Early warnings can enable individuals in vulnerable situations to protect themselves or others. School children can take cover under desks to avoid injury from falling structural debris or nonstructural building components. Surgeons can suspend delicate operations. Businesses and industries can stop critical processes, such as the handling of toxic substances, and protect assets such as active databases. Utilities and transportation lifelines can take preventive action to avoid major service disruptions.

Modern seismic networks also provide critical information for long-term earthquake loss reduction. Structural engineers and scientists are critically dependent on strong-motion data collected by the USGS for improving building codes. A growing network of advanced instruments will provide these essential engineering measurements from future earthquakes. The greatest long-term value of the Advanced National Seismic System is thus in gathering data on site response and time-series information that engineers and planners rely on so that buildings can be designed to withstand shaking. Although earthquakes cannot be prevented, we can engineer our buildings to withstand strong ground shaking, thereby mitigating the effects of earthquakes and providing improved protection for U.S. citizens from future loss of life and property.

The USGS also is working on means of integrating data from instrument networks and other sources, and providing accurate and timely disaster information to be delivered to the proper disaster management officials. Through its Center for Integration of Natural Disaster Information (CINDI), the Survey researches means of integrating near-real-time data from multiple sources to provide scientific information that helps public officials and citizens make well-informed decisions. The USGS also cooperates with other federal agencies and the private sector to explore ways to better integrate and deliver pertinent information by leveraging advances being made in the computer and communications industries. All these efforts, including an Advanced National Seismic System, support the development of a Disaster Information Network to provide information useful in mitigating potential disasters and adequately responding to disasters when they occur.

CONCLUSIONS

In conclusion, there is much the U.S. can do to learn from these recent seismic events in Turkey and Taiwan, and to improve the nation's ability to mitigate the effects of a damaging or disruptive earthquake. The technology is currently available to significantly improve the delivery of earthquake disaster information, and Taiwan has served as a trendsetter in this regard. For the USGS, these advances hinge upon upgrading our current vintage network and transforming it into a digital network with much denser clustering of instruments in urban centers where damage is likely to be greatest and where immediate response is critical. Effective advancement also depends on support for a disaster information network that will better coordinate efforts of Federal, state, and local agencies in responding to disaster situations. As stated above, the first 60 minutes after an earthquake are critical, and many U.S. cities in earthquake-prone areas are vulnerable. It is also worth noting that improved response is equally dependent on close and careful coordination among those who generate, deliver, and use this time-critical earthquake information. Technology transfer, training, and public education will be important to ensure that new products of upgraded seismic networks, such as near-real-time ground shaking maps, are understood and used successfully in managing emergencies, promoting greater safety, and improving recovery. The USGS is working actively with the Federal Emergency Management Agency and State and local agencies to achieve these goals, and H.R. 1184, the Earthquake Hazards Reduction Authorization Act of 1999, which was recently passed by the House with an overwhelming vote of support, also addresses these important concerns.

Mr. Chairman, this concludes my remarks. I would be happy to respond to any questions.

Waverly Person: Mr. Person is a geophysicist with the U.S. Geological Survey and is in charge of the National Earthquake Information Service in Golden, Colorado. He has 37 years of experience in responding to earthquakes and currently serves as the Chief of the National Earthquake Information Service within the USGS National Earthquake Information Center. As such, Mr. Person serves as the spokesperson for the USGS on matters pertaining to the release of early earthquake reporting information and general information on earthquake activity around the world. He also plans and carries out the operation of the Earthquake Early Alerting Service, which involves the location of all significant earthquakes throughout the world and the dissemination of this information to emergency groups, scientists, the media, and the general public.

Mr. Person is an Editor of Seismological Notes, a regular publication in the Bulletin of the Seismological Society of America, and he writes articles on earthquakes for the USGS's Earthquake Information Bulletin, a bimonthly publication. He has authored over fifty articles and reports on earthquakes around the world and has been featured in numerous newspaper, TV, and magazine pieces. He is a distinguished recipient of the Department of the Interior's Meritorious Service Award and is a valuable member of the USGS's Earthquake Hazards Program.

Note on Figures: The four figures presented by Mr. Waverly Person were not released to the public. However, only Figure 4 is relevant to Taiwan, and it was obtained from Mr. Person as shown below.

Section B: Earthquake Prediction and Earthquake Early Warning in Taiwan

Contents

I. Introduction
II. Earthquake Prediction
III. Precursors
IV. Difficulties
    The complex system
    Crustal Stress Field
    Magnitude Uncertainty
V. Strategies of Coping with Earthquake Hazard
    Strategy of Earthquake Preparedness
    Real-time Monitoring Strategy: Earthquake Rapid Reporting & Early Warning
Appendix. The Debate on Earthquake Prediction hosted by the Nature Magazine

I. Introduction

This section consists of an assessment of current earthquake prediction research in the world; it also offers a suggested earthquake hazard reduction strategy for Taiwan. We have written this section of the CWB Final Report based on (1) our discussions with the world's leading seismologists over the past two years, (2) a synthesis of the published results in earthquake prediction research over the past 30 years, and (3) a summary of the recent scientific debates on earthquake prediction hosted by the journal Nature. A simplified summary follows:

A. In view of Taiwan's available scientific manpower and limited funding resources, it is prudent for Taiwan, and especially for CWB, not to get deeply involved in large experimental earthquake prediction programs. We do not believe that any precursor-chasing earthquake prediction program in the world has arrived at significant and repeatable results.

B. Taiwan should maintain and further augment its current GPS program. It has produced excellent strain accumulation data over the past 30 years and especially during the Chi-Chi earthquake. The GPS data are well understood, accurate, and repeatable, and they give a measure of the ongoing deformation of Taiwan in response to the 8 cm/year plate convergence rate.

C. Because of the excellent 3-component broadband seismic database from 1990 to 2000, Taiwan has an excellent chance to test the usefulness of shear-wave splitting observations as a means of stress-forecasting. That, plus the GPS data, may lead to a practical measure useful for earthquake prediction purposes. Dr. Stuart Crampin of England has long proposed this approach, and his team is performing a field experiment in Iceland. Taiwan already has the needed data on hand; that is, the field experiment is done and at least a 10-year database is available, which may even be extended to a 30-year seismic database. A Taiwan shear-wave splitting research program would be inexpensive and would need little manpower. Moreover, results from Taiwan would be much more definitive and relevant for tectonic earthquakes than those from Crampin's group, which is working in a region of volcanic earthquakes.

D. CWB, with help from scientists in Taiwan and abroad, has made important contributions to earthquake rapid reporting. Throughout the 1999 Chi-Chi earthquake sequence, CWB's rapid reporting capability (within about one minute after the origin time) generated very important and timely information for the Taiwan government agencies responsible for emergency response and the dispatching of rescue missions. On the subject of earthquake rapid reporting and earthquake early warning, Taiwan has made a major scientific contribution and currently still leads the world in this accomplishment. CWB should continue its effort to bring its earthquake early-warning program to completion, and further to harden the seismic data communication links so as to minimize system failure during future large earthquakes.

II. Earthquake Prediction

There are two frequently asked questions related to earthquakes. Question 1: Can earthquakes be predicted? Question 2: How can we best mitigate the earthquake hazard and thus reduce human suffering? These questions have become particularly relevant in the aftermath of the disastrous 921 Chi-Chi earthquake that dealt a severe blow to Taiwan. It is clearly the responsibility of scientists to seriously address these questions.

There are two kinds of earthquake prediction: probabilistic and deterministic. The first kind is actually a long-term earthquake forecast based on earthquake statistics. The second kind concentrates on the detection and observation of physical, chemical, and tectonic signals that may precede the occurrence of a damaging earthquake. For this paper, we limit our discussion of earthquake prediction to the second kind, namely deterministic earthquake prediction. Scientists agree that a valid prediction should consist of four essential elements:

o Time window
o Place window
o Magnitude range
o Probability range

It is also understood that prediction research deals only with large earthquakes, ones that will have significant social consequences. Normally, this means earthquakes of M ≥ 6.

Prior to the 1960s, earthquake predictions were made mostly by quacks and charlatans. After the 1990s, their voices became loud again as many scientists drifted away from prediction research. We will first discuss, as background, the scientific prediction research activities that have taken place in the recent past, especially during the 1980s and 1990s.

In the late 60s and early 70s, a number of papers were published by scientists in the then Soviet Union and in mainland China claiming progress in the detection of premonitory signals (or precursors) that could be potentially useful in earthquake prediction. Mainland China also claimed to have predicted the 1975 Haicheng earthquake. From the mid 70s to the late 80s, there was a flurry of efforts organized by the U.S. and Japan to pursue a scientifically sound confirmation of these prediction claims. During that period, many reputable scientists worked on earthquake prediction. There were several reasons for this optimism during that period:

o PLATE TECTONICS: Tells us why earthquakes occur, where they occur, and gives us an average rate of occurrence. For example, Taiwan is under NW-SE compression with a fast deformation rate of 8 cm/year. This drives Taiwan's orogeny and frequent large earthquakes.

o LAB STUDIES: Tell us that there are measurable physical and chemical changes, observed in the laboratory, which occur prior to the failure of rock samples. Some scientists have put forward hypotheses explaining these lab results and have speculated on the possibility of similar observations in the field. There is the concept of dilatancy, the inelastic increase in volume of rock materials upon application of external stresses; the increase in volume is due to the creation of microcracks. There is a suggestion of dilatancy hardening, that rock becomes stronger or harder in response to intense stress. This is usually used to explain the quiet period before a large earthquake. Then the influx of water into the microcracks in the damaged rock increases the pore pressure and weakens the rock, making the ultimate failure of the rock, or an earthquake, imminent.

III. Precursors

Many signals have actually been reported to have been observed in the field prior to some earthquakes. They include:

o Seismic Wave Studies (Vp/Vs, foreshocks, and spectral behavior)
o Groundwater Chemistry (radon and other soil gases: H2, methane, CO2, etc.)
o Electric Resistivity Methods (accuracy needed: 1% in 10 km)
o Tectonic Tilt Measurements
o Tectonic Strain Measurements
o In-situ Stress Measurements
o Geomagnetic Measurements
o Gravity Measurements
o Groundwater Monitoring (level changes and turbidity changes)
o Acoustic Emissions
o Ground Temperature Changes
o Anomalous Animal Behaviors
o And many others

Unfortunately, none of these signals has been observed consistently, and these observations cannot be duplicated in time or space.

In pursuing the above measurements, there has been a very sizable research effort in the U.S., Japan, and China. Even Taiwan got into the act for a little while. The total global spending must have exceeded $1 billion, plus several generations of graduate students.

Almost twenty years were spent on the above measurements, and the results are disappointing: the early observations made by the Soviet Union colleagues cannot be duplicated, and the Haicheng prediction claim is found not to be convincing under careful scrutiny. Most of the precursory measurements made in the U.S. and Japan cannot be duplicated either. It has become apparent to many scientists that the problem is much harder than originally thought.

IV. Difficulties

To detect signals useful for predicting a forthcoming earthquake, we are logically looking for signs indicative of an increase of tectonic stress. However, at the earth's surface, by definition, all stresses should vanish. This forces us to measure the physical and chemical changes induced by the stress changes. Since the rate of tectonic stress build-up is slow and the incremental amount is small, whatever stress-induced changes occur are secondary phenomena that give out weak signals. Consequently, these precursor signals are generally so feeble that they are seriously drowned out by the strong ambient noise. Oftentimes, in order to monitor one fault (say, the 1000 km San Andreas fault), a large number of expensive instruments of many kinds are needed. With a finite budget, only a small number of instruments can be installed over limited sections of a few faults, and earthquakes usually do not occur where instrument clusters have been planned. Also, because of limited budgets, most field installations are compromised in design. Poor data quality and inadequate maintenance introduce substantial errors, to the extent that where tectonic tilt is supposed to be measured, the tiltmeters become thermometers because of inadequate heat insulation! Groundwater chemistry (radon, helium, methane, etc.) is supposed to be measured on samples from deep wells; because of field difficulties, samples from shallow springs and wells are used, and the results are thus heavily contaminated by groundwater pumping activities and weather factors. These difficulties made it necessary, at least for U.S. scientists, to put deterministic earthquake prediction research on the back burner, while some Japanese scientists still press on with prediction research, under increasing public criticism.

There are more fundamental aspects that outline other dimensions of the difficulty in earthquake prediction.

The complex system

Earthquakes occur in a complex crust-mantle system, not a homogeneous or layered elastic halfspace. This system includes some distinct structures, such as the seismogenic zones and faults, which are embedded in a highly heterogeneous structure at all length scales. In the laboratory and in mathematical models, we approximate the generation of an earthquake by a displacement dislocation in an elastic body, whereby the rupturing process excites earthquake waves that inflict damage on human structures. This approach fails because the above approximation is no longer valid in a complex system where the interaction between parts of the system displays chaotic behavior.

Because of this chaotic component of the process, many scientists believe that earthquakes cannot be predicted in a deterministic way. They think that prediction can only be made in a statistical sense, with considerable uncertainty.

Crustal Stress Field

We have pretty good knowledge of the rate of plate motion. If we could assume that the motion is stationary, it would be possible to estimate the stress change with time, and therefore the earthquake recurrence rate. However, the crust-mantle system is complex, and its elastic and inelastic behaviors are not well known. While the average rate of plate motion for the entire plate may be stationary, this stationarity is not guaranteed for individual locations. The bulk property of a plate is hardly known; it is clearly neither rigid nor perfectly elastic. We know even less about the rock's failure strength in bulk (not just in a hand sample). The stress field derived from the analytical solution of a layered halfspace departs greatly from reality. Since earthquakes occur on a complex array of faults, the stress field may be very irregular, and there is no simple way to predict local stress concentrations, which are usually believed to lead to the nucleation of an earthquake. Even if we can locate points of potential rupture nucleation, it is still very difficult to predict whether a point of nucleation will or will not escalate into a large event.

Magnitude Uncertainty

A useful prediction must contain an accurate magnitude estimate of the forthcoming event. However, a small earthquake may trigger another event in the adjacent area, thus escalating into a much larger and damaging event. The size of an earthquake is thought to be somewhat limited by the total length of the fault it occurs on. Even with a given regional stress field, there is no way to assess whether a small rupture initiated on a large fault will, or will not, run through the entire fault length to produce a big earthquake. A long fault may be segmented, and it is believed that a given segment of a fault may accommodate a largest earthquake, called the characteristic earthquake, whose magnitude is proportional to the length of the segment. This concept of the characteristic earthquake has many difficulties, because there are many exceptions. Examples are the Nankai trough offshore southwestern Japan and the San Andreas fault in California. Very large (M 8+) earthquakes have occurred in the "segmented" Nankai trough, in 1498, 1707, 1854, 1944, and 1946. Sometimes an earthquake ruptures one segment, and sometimes it ruptures the entire length of several segments; the difference in magnitude would have very different social consequences for Japan. In 1857 and 1906, the San Andreas fault ruptured almost 300 km each time, in southern and northern California respectively. But the 1989 Loma Prieta earthquake ruptured only a part of the 1906 segment of the San Andreas fault; in fact, it is even unclear whether the Loma Prieta rupture was indeed on the San Andreas fault. Should the 1906 rupture repeat today, the damage would be 100 times worse due to modern development! Because of the structure of the complex system, which we know only poorly, and the fact that our understanding of the mechanics of rupture dynamics is quite primitive, we are far from being ready to predict when a rupture will start and when it will stop. Therefore, large uncertainty in magnitude estimates will remain.

In the spring of 1999, an open scientific debate hosted by the journal Nature specifically addressed the feasibility of deterministic earthquake prediction. With broad participation of scientists from all over the world, the general consensus reached was that, at present, short-term deterministic prediction of an individual earthquake, within sufficiently narrow limits to allow a planned evacuation program, is an unrealistic goal. There is a large body of arguments leading to this conclusion; we have summarized them in the sections above. As an attachment (see Attachment) to this report, we present the views of the individual scientists.

V. Strategies of Coping with Earthquake Hazard

Deterministic earthquake prediction is not an achievable goal, at least at present. A good fraction of the world population, including the people of Taiwan, must live with earthquakes. Do we have means to live with earthquakes safely? The answer is yes. Some of the measures are already in place, and we can do more. We discuss the current status below, and also report the progress Taiwan has made, for which Taiwan has been recognized as a world leader.

Strategy of Earthquake Preparedness

The first strategy is long-term earthquake forecasting for hazard mitigation, through improved building-code enforcement and better long-term land-use planning. A careful long-term probabilistic assessment of the trend of earthquake occurrence in a region is a fruitful approach and can lead to a significant reduction in loss of life and property damage. Some people call this long-term earthquake prediction, which is somewhat misleading, because its results generally come as a statement such as "in the coming 50 years, there is a 30% chance that this location will experience strong-motion shaking exceeding 20% of g (the gravitational acceleration)". Statements like this are very practical and useful in regional seismic hazard assessment; they are the starting point for regional development planning and earthquake-resistant building codes. In California, Japan, and Taiwan, this subject is being pursued in earnest. In fact, were it not for the building-code revisions Taiwan adopted in recent years, the Chi-Chi casualties would have been far worse. Think of it this way: the central region of Taiwan has a higher population density than the region struck by the recent Turkey earthquake; the Chi-Chi earthquake was larger; a thrust fault is usually more damaging than a strike-slip fault; and the Chelungpu fault produced a much bigger fault scarp than the ruptured Anatolian fault. Yet the Chi-Chi casualties were only a small fraction of those of the recent Turkey earthquake.
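The probabilistic statement quoted above ("a 30% chance in 50 years of exceeding 0.2 g") maps onto a simple Poisson exceedance calculation. The sketch below is a minimal illustration of that arithmetic, assuming exceedances follow a stationary Poisson process; the numbers are the ones quoted in the text, not new results.

    import math

    def annual_rate_from_exceedance(prob: float, years: float) -> float:
        """Annual rate lambda such that P(at least one exceedance in `years`) = prob,
        assuming a stationary Poisson process: prob = 1 - exp(-lambda * years)."""
        return -math.log(1.0 - prob) / years

    def exceedance_probability(rate: float, years: float) -> float:
        """P(at least one exceedance in `years`) for annual rate `rate`."""
        return 1.0 - math.exp(-rate * years)

    rate = annual_rate_from_exceedance(0.30, 50.0)   # about 0.0071 per year
    print(f"annual exceedance rate ~ {rate:.4f} /yr "
          f"(return period ~ {1.0 / rate:.0f} years)")
    print(f"check: P(exceedance in 50 yr) = {exceedance_probability(rate, 50.0):.2f}")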

The Chi-Chi earthquake has produced an enormous amount of strong-motion data. The Taiwan government has correctly and quickly fed these data into earthquake risk rezoning and building-code revision for the reconstruction. Furthermore, the Taiwan government has ordered a detailed mapping of the ruptured Chelungpu fault, with the intention of declaring an earthquake zone in which no construction for human occupancy is allowed. This is an act similar to California's Active Fault Zones Act, and a good measure for mitigating casualties in future earthquakes. In the future, enforcement of the revised earthquake building codes and of Taiwan's active fault zone designations will significantly mitigate the seismic hazard on this island.

Real-time Monitoring Strategy: Earthquake Rapid Reporting & Early Warning

The recent progress on real-time seismic monitoring in Taiwan is particularly outstanding. Four countries are competing scientifically in the area of earthquake rapid reporting and early warning: the U.S., Japan, Mexico, and Taiwan. At this moment, the U.S. and Japan are clearly about four years behind Taiwan, as they have not yet completed effective real-time seismic monitoring systems, whereas Taiwan began issuing routine rapid earthquake reports (within a minute or two of earthquake occurrence) as early as 1995. Several scientific papers document this accomplishment. All moderate to large events have been successfully reported within 1-2 minutes of their occurrence. Mexico has claimed to have accomplished earthquake early warning as well, and has published a paper on it. There is a very particular situation in Mexico: the earthquake source area (the Cocos subduction zone) and the target area (Mexico City) are separated by a large distance, D = 350 km. This allows about T = 80 seconds of lead time before the strong shaking waves arrive at Mexico City after an earthquake has occurred in the Cocos subduction zone. Of course, at a distance of 350 km, moderate to small earthquakes will not even be felt. Mexican scientists claimed to have successfully issued an earthquake early warning a few years ago. However, their system and software are not as well developed as Taiwan's, and they have acknowledged completely missing an important M = 7.4 event from the Cocos subduction zone a week after the Chi-Chi event.

Real-time strong-motion monitoring by the Taiwan Central Weather Bureau's (CWB) telemetered seismic network (CWBSN) was established in the mid-1990s, and CWB has been working on rapid reporting of seismic information since then, aiming first at a one-minute target time after the occurrence of a large earthquake. If rapid reporting can be achieved before the arrival of the strong shaking, earthquake early warning becomes possible. CWB has achieved this (generation of the intensity map, epicenter, and magnitude) within about one minute of the occurrence of a large earthquake. Both rapid reporting and early warning apply principally to large (M >> 5) events; the requirement of on-scale waveform recording prompted CWB to integrate strong-motion sensors (e.g., force-balance accelerometers) into its telemetered seismic monitoring system, a first in global seismic monitoring practice. Time-domain recursive processing is applied to the multi-channel incoming seismic signals by a group of networked computers to generate the intensity map; a simplified sketch of this kind of per-station processing follows.
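The exact recursive algorithm used by CWB is not described in this report; the fragment below is only a hypothetical illustration of the kind of per-channel bookkeeping such a system needs: track the running peak ground acceleration on each telemetered channel as samples arrive, and bin it into a shaking level using gal thresholds. The station name is invented and the threshold table is indicative only.

    # Hypothetical per-station peak-acceleration tracking for an intensity map.
    # Illustrative sketch only, not CWB's operational algorithm.

    PGA_THRESHOLDS_GAL = [(250, "VI"), (80, "V"), (25, "IV"), (8, "III"), (2.5, "II")]

    class StationTracker:
        def __init__(self, name: str):
            self.name = name
            self.peak_gal = 0.0           # running peak ground acceleration

        def update(self, accel_gal: float) -> None:
            """Feed one new acceleration sample (in gal); keep the absolute peak."""
            self.peak_gal = max(self.peak_gal, abs(accel_gal))

        def intensity(self) -> str:
            for threshold, label in PGA_THRESHOLDS_GAL:
                if self.peak_gal >= threshold:
                    return label
            return "I"

    # Toy usage: one hypothetical station receiving streamed samples.
    station = StationTracker("TCU-example")
    for sample in (3.0, 45.0, 120.0, 95.0):
        station.update(sample)
    print(station.name, f"peak = {station.peak_gal:.0f} gal,", "intensity", station.intensity())

The map of per-station peaks produced this way is what the isoseismal contouring described next would operate on.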
From the isoseismal contours, an effective epicenter is immediately identified that resides in the middle of the largest (usually the 100-gal) contour of the intensity map. An effective magnitude is also defined that can

be derived immediately from the surface area covered by the largest (usually the 100-gal) contour. A subsequent method of magnitude determination uses the P waveform. For a large event with a finite rupture surface, the epicenter and magnitude so derived are more adequate estimates of the source location and of the destructive strength. The effective epicenter gives the center of the damage area, in contrast with the conventional epicenter, which gives only the initial point of rupture nucleation. The effective magnitude more closely reflects the earthquake's damage potential, rather than the classical magnitude definition that emphasizes the total energy release. CWB has succeeded in obtaining this crucial source information well within about one minute. The information is electronically sent to the governmental agencies responsible for emergency management, and is also e-mailed to international scientists interested in receiving these results. Essentially all events above M = 4 occurring in Taiwan have been rapid-reported, with a reporting time of about 1-2 minutes. This time can be reduced further, to better than 30 seconds, showing that earthquake early warning is indeed an achievable goal. The rapid reporting and early warning information is electronically transmitted to the emergency response agencies of the Taiwan government, allowing rapid emergency response actions. In the case of the Chi-Chi earthquake, the complete earthquake information (epicenter, magnitude, and shaking intensity map) was sent out in exactly 102 seconds. Not only the main shock but all large aftershocks were reported within 1-2 minutes of their occurrence (Table 2). We believe that this rapid reporting has been instrumental in saving lives and in preventing the escalation of secondary earthquake damage, such as fires. As the aftermath of the Chi-Chi earthquake is gradually cleaned up, CWB's contribution in this regard will be commended. It is interesting to note that even the U.S. Congress has recognized this achievement and recorded in the Congressional Record that Taiwan leads the world in real-time seismology and earthquake rapid reporting. We also believe that in the near future CWB will contribute significantly to the earthquake early warning task, which will have important implications for the safe operation of Taiwan's future rapid rail system. However, we should note that in Taiwan (as well as in Japan and the U.S.) the distance D between the earthquake source area and the target area is short (~100 km), and therefore the lead time T is short. These conditions make the problem much more difficult than that of Mexico City, as the rough calculation below illustrates. Of course, for situations in which D is much smaller than 100 km, there will be no earthquake early warning to speak of.

As a concluding statement, we may safely state that while deterministic earthquake prediction does not seem to be an achievable goal at present, there are sound strategies for living safely with earthquakes in Taiwan. Among these strategies, the work of earthquake rapid reporting and early warning will make Taiwan's accomplishments shine in the scientific world.
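The dependence of warning lead time on source-to-target distance discussed above can be illustrated with a back-of-the-envelope calculation. The sketch below assumes an S-wave speed of about 3.5 km/s and a fixed detection-plus-reporting latency; both numbers are illustrative assumptions, not CWB system parameters.

    # Rough early-warning lead time: time until damaging S waves arrive,
    # minus the time needed to detect, process, and disseminate the alert.
    # Wave speed and latency below are illustrative assumptions only.

    S_WAVE_SPEED_KM_S = 3.5      # assumed average shear-wave speed
    REPORT_LATENCY_S = 20.0      # assumed detection + processing + telemetry delay

    def lead_time_seconds(distance_km: float) -> float:
        s_arrival = distance_km / S_WAVE_SPEED_KM_S
        return s_arrival - REPORT_LATENCY_S

    for d in (350, 100, 50):     # Mexico City-like, Taiwan-like, very close
        t = lead_time_seconds(d)
        print(f"D = {d:3d} km  ->  lead time ~ {max(t, 0):.0f} s"
              + ("  (no usable warning)" if t <= 0 else ""))

With these assumed values the 350-km case reproduces the roughly 80-second lead time quoted for Mexico City, while at 100 km or less the usable warning shrinks to a few seconds or vanishes.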

Appendix. The Debate on Earthquake Prediction hosted by the Nature Magazine

Contents

Is the reliable prediction of individual earthquakes a realistic scientific goal? by Ian Main ... 11
Earthquake prediction: is this debate necessary? by Robert Geller ... 14
Earthquake precursors and crustal 'transients' by Pascal Bernard ... 17
How well can we predict earthquakes? by Andrew Michael ... 20
Earthquake prediction: feasible and useful? by Christopher Scholz ... 23
Earthquake prediction is difficult but not impossible by Leon Knopoff ... 25
Earthquake Prediction: What should we be debating? by Robert Geller ... 28
Without funding no progress by Max Wyss ... 31
Comments by Pascal Bernard ... 33
Comments by Per Bak ... 34
The status of earthquake prediction by David Jackson ... 36
Without progress no funding by Robert Geller ... 39
Comments by David Jackson ... 41
A case for intermediate-term earthquake prediction: don't throw the baby out with the bath water! by David Bowman and Charles Sammis ... 42
On the existence and complexity of empirical precursors by Francesco Biagi ... 44
Realistic predictions: are they worthwhile? by Andrew Michael ... 46
Sociological aspects of the prediction debate by Robert Geller ... 48
Comments by Max Wyss ... 51
Stress-forecasting: an apparently viable third strategy by Stuart Crampin ... 52
Comments by Zhongliang Wu ... 55
Comments by Christopher Scholz ... 57
The need for objective testing by Robert Geller ... 58
More agreement than division by Max Wyss
Contributions by Didier Sornette ... 63
Concluding Remarks by Ian Main ... 67

Is the reliable prediction of individual earthquakes a realistic scientific goal? by Ian Main (25 February 1999)

The recent earthquake in Colombia has once again illustrated to the general public the inability of science to predict such natural catastrophes. Despite the significant global effort that has gone into the investigation of the nucleation process of earthquakes, such events still seem to strike suddenly and without obvious warning. Not all natural catastrophes are so apparently unpredictable, however. For example, the explosive eruption of Mount St Helens in 1980 was preceded by visible ground deformation of up to 1 meter per day, by eruptions of gas and steam, and by thousands of small earthquakes, culminating in the magnitude 5 event that finally breached the carapace. In this example, nearly two decades ago now, the general public had been given official warning of the likelihood of such an event, on a time scale of a few months. So, if other sudden-onset natural disasters can be predicted to some degree, what is special about earthquakes? Why have no unambiguous, reliable precursors been observed, as they commonly are in laboratory tests? In the absence of reliable, accurate prediction methods, what should we do instead? How far should we go in even trying to predict earthquakes?

The idea that science cannot predict everything is not new; it dates back to the 1755 Great Lisbon earthquake, which shattered contemporary European belief in a benign, predictable Universe 1. In the eighteenth century 'Age of Reason', the picture of a predictable Universe 1 was based on the spectacular success of linear mathematics, such as Newton's theory of gravitation. The history of science during this century has to some extent echoed this earlier debate. Theories from the earlier part of the century, such as Einstein's relativity, and the development of quantum mechanics, were found to be spectacularly, even terrifyingly, successful when tested against experiment and observation. Such success was mirrored in the increasing faith that the general public placed in science. However, the century is closing with the gradual realization by both practitioners and the general public that we should not expect scientific predictions to be infallible. Even simple nonlinear systems can exhibit 'chaotic' behaviour, whereas more 'complex' nonlinear systems, with lots of interacting elements, can produce remarkable statistical stability while retaining an inherently random (if not completely chaotic) component 2.

The null hypothesis to be disproved is not that earthquakes are predictable, but that they are not. The question to be addressed in this debate is whether the accurate, reliable prediction of individual earthquakes is a realistic scientific goal, and, if not, how far should we go in attempting to assess the predictability of the earthquake generation process? Recent research and observation have shown that the process of seismogenesis is not completely random: earthquakes tend to be localized in space, primarily on plate boundaries, and seem to be clustered in time more than would be expected for a random process.
The scale-invariant nature of fault morphology, the earthquake frequency-magnitude distribution, the spatiotemporal clustering of earthquakes, the relatively constant dynamic stress drop, and the apparent ease with which earthquakes can be triggered by small perturbations in stress are all testament to a degree of determinism and predictability in the properties of earthquake populations 3,4. The debate here centres on the prediction of individual events. For the purposes of this debate, we define a sliding scale of earthquake 'prediction' as follows. 1. Time-independent hazard. We assume that earthquakes are a random (Poisson) process in time, and use past locations of earthquakes, active faults, geological recurrence times and/or fault slip rates from plate tectonic or satellite data to constrain the future long-term seismic hazard 5. We then calculate the likely occurrence of groundshaking from a combination of source magnitude probability with path and site effects, and include a calculation of the associated errors. Such calculations can also be used in building design and planning of land use, and for the estimation of earthquake insurance. 2. Time-dependent hazard. Here we accept a degree of predictability in the process, in that the seismic hazard varies with time. We might include linear theories, where the hazard increases after the last previous event 6, or the idea of a 'characteristic earthquake' with a relatively similar magnitude, location and approximate repeat time predicted B-11

31 from the geological dating of previous events 7. Surprisingly, the tendency of earthquakes to cluster in space and time include the possibility of a seismic hazard that actually decreases with time 8. This would allow the refinement of hazard to include the time and duration of a building's use as a variable in calculating the seismic risk. 3. Earthquake forecasting. Here we would try to predict some of the features of an impending earthquake, usually on the basis of the observation of a precursory signal. The prediction would still be probabilistic, in the sense that the precise magnitude, time and location might not be given precisely or reliably, but that there is some physical connection above the level of chance between the observation of a precursor and the subsequent event. Forecasting would also have to include a precise statement of the probabilities and errors involved, and would have to demonstrate more predictability than the clustering referred to in time-dependent hazard. The practical utility of this would be to enable the relevant authorities to prepare for an impending event on a timescale of months to weeks. Practical difficulties include identifying reliable, unambiguous precursors 9-11, and the acceptance of an inherent proportion of missed events or false alarms, involving evacuation for up to several months at a time, resulting in a loss of public confidence. 4. Deterministic prediction. Earthquakes are inherently predictable. We can reliably know in advance their location (latitude, longitude and depth), magnitude, and time of occurrence, all within narrow limits (again above the level of chance), so that a planned evacuation can take place. Time-independent hazard has now been standard practice for three decades, although new information from geological and satellite data is increasingly being used as a constraint. In contrast, few seismologists would argue that deterministic prediction as defined above is a reasonable goal in the medium term, if not for ever 12. In the USA, the emphasis has long been shifted to a better fundamental understanding of the earthquake process, and on an improved calculation of the seismic hazard, apart from an unsuccessful attempt to monitor precursors to an earthquake near Parkfield, California, which failed to materialize on time. In Japan, particularly in the aftermath of the Kobe earthquake in 1995, there is a growing realization that successful earthquake prediction might not be realistic 13. In China, thirty false alarms have brought power lines and business operations to a standstill in the past three years, leading to recent government plans to clamp down on unofficial 'predictions' 14. So, if we cannot predict individual earthquakes reliably and accurately with current knowledge 15-20, how far should we go in investigating the degree of predictability that might exist? References 1. Voltaire, Candide (Penguin, London, 1997, first published 1759). 2. Bak, P. How Nature Works: The Science of Self-organised Criticality (Oxford Univ. Press, 1997). 3. Turcotte, D.L. Fractals and Chaos in Geology and Geophysics (Cambridge Univ. Press, 1991). 4. Main, I., Statistical physics, seismogenesis and seismic hazard, Rev. Geophys. 34, (1996). 5. Reiter, L. Earthquake Hazard Analysis (Columbia Univ. Press, New York, 1991). 6. Shimazaki, K. & Nakata, T., Time-predictable recurrence model for large earthquakes, Geophys. Res. Lett. 7, (1980). 7. Schwartz, D.P. 
& Coppersmith, K.J., Fault behavior and characteristic earthquakes: Examples from the Wasatch and San Andreas fault systems, J. Geophys. Res. 89, (1984). 8. Davis, P.M., Jackson, D.D. & Kagan, Y.Y., The longer its been since the last earthquake, the longer the expected time till the next?, Bull. Seism. Soc. Am. 79, (1989). 9. Wyss, M., Second round of evaluation of proposed earthquake precursors, Pure Appl. Geophys. 149, 3-16 (1991). 10. Campbell, W.H. A misuse of public funds: UN support for geomagnetic forecasting of earthquakes and meteorological disasters, Eos Trans. Am. Geophys. Union 79, (1998). 11. Scholz, C.H. The Mechanics of Earthquakes and Faulting (Cambridge Univ. Press, 1990). 12. Main, I., Earthquakes - Long odds on prediction, Nature 385, (1997). 13. Saegusa, A., Japan tries to understand quakes, not predict them, Nature 397, 284 (1999). 14. Saegusa, A., China clamps down on inaccurate warnings, Nature 397, 284 (1999). 15. Macelwane, J.B., Forecasting earthquakes, Bull. Seism. Soc. Am. 36, 1-4 (1946). B-12

32 16. Turcotte, D.H., Earthquake prediction, A. Rev. Earth Planet. Sci. 19, (1991). 17. Sneider, R. & van Eck, T., Earthquake prediction: a political problem?, Geol. Rdsch. 86, (1997). 18. Jordan, T.H., Is the study of earthquakes a basic science?, Seismol. Res. Lett. 68, (1997). 19. Evans, R., Asessment of schemes for earthquake prediction: editor's introduction, Geophys. J. Int. 131, (1997). 20. Geller, R.J., Earthquake prediction: a critical review, Geophys. J. Int (1997). 21. Main, I.G., Sammonds P.R. & Meredith, P.G., Application of a modified Griffith criterion to the evolution of fractal damage during compressional rock failure, Geophys. J. Int. 115, (1993). 22. Argus, D. & Lyzenga, G.A., Site velocities before and after the Loma Prieta and the Gulf of Alaska earthquakes determined from VLBI, Geophys. Res. Lett. 21, (1994). B-13

Earthquake prediction: is this debate necessary? by Robert Geller (25 February 1999)

Because large earthquakes release huge amounts of energy, many researchers have thought that there ought to be some precursory phenomena that could be consistently observed and identified, and used as the basis for making reliable and accurate predictions. Over the past 100 years, and particularly since 1960, great efforts, all unsuccessful, have been made to find such hypothetical precursors. For further details see my review 1, which includes eight pages of references (in 6-point type, to save space) to this vast body of work.

The public, media, and government regard an 'earthquake prediction' as an alarm of an imminent large earthquake, with enough accuracy and reliability to take measures such as the evacuation of cities. 'Prediction' is used exclusively in the above sense here; in other words, longer-term forecasts of seismic hazards or statistical forecasts of aftershock probabilities are not classified as predictions.

Three obvious questions arise:
1. What pitfalls have undermined prediction research?
2. Why are earthquakes so difficult to predict?
3. Why is prediction still being discussed?
These questions are answered below.

Most earthquake prediction research is empirical, featuring the 'case-study' approach. After a large earthquake, data of all sorts are examined retrospectively in the hope of finding a precursory signal. Workers reporting candidate precursors frequently set up observatories to look for similar signals before future earthquakes. Empiricism should not necessarily be dismissed out of hand, as it has led to many important scientific discoveries. However, as noted by E.B. Wilson, without proper controls the empirical approach can lead to absurd conclusions, for example that the beating of tom-tom drums will restore the Sun after an eclipse. Lack of controls is one of the main problems that has dogged the search for precursors. Another chronic problem is attributing 'anomalous' signals to earthquakes before considering more plausible explanations. One research group has repeatedly claimed to be observing electrical precursors of earthquakes (and even managed to get relatively favourable publicity in Nature's news columns 2,3), but it now seems likely that the signals are noise due to nearby digital radio-telecommunications transmitters, and are unrelated to earthquakes 4.

Rigorous statistical analyses are rarely performed by prediction researchers, leading to a plethora of marginal claims. There are two main problems. First, most precursor claims involve retrospective studies, and it is easy to 'tune' parameters after the fact to produce apparently significant correlations that are actually bogus 5. Second, earthquakes are clustered in space and time, and spuriously high levels of statistical significance can easily be obtained unless appropriate null hypotheses are used 6,7.

Why is prediction so difficult? This question cannot be answered conclusively, as we do not yet have a definitive theory of the seismic source. The Earth's crust (where almost all earthquakes occur) is highly heterogeneous, as is the distribution of strength and stored elastic strain energy. The earthquake source process seems to be extremely sensitive to small variations in the initial conditions (as are fracture and failure processes in general). There is complex and highly nonlinear interaction between faults in the crust, making prediction yet more difficult.
In short, there is no good reason to think that earthquakes ought to be predictable in the first place. A few laboratory failure B-14

34 experiments might seem to suggest otherwise, but they are conducted on a limited scale and do not replicate the complex and heterogeneous conditions of the problem in situ. If reliable and accurate prediction is impossible now and for the foreseeable future, why is it being debated on Nature's web site? The answer seems to be sociological rather than scientific. Certain research topics are fatally attractive to both scientists and the general public, owing to the combination of their extreme difficulty and great potential reward. No less a scientist than Sir Isaac Newton regarded alchemy (the transmutation of elements by chemical reactions) as his primary research field. His continued failures drove him to despair, and led him to give up science for a sinecure as Master of the Mint. Sir Isaac's failures notwithstanding, alchemy continued to attract the fruitless efforts of talented scientists for another 100 years. Earthquake prediction seems to be the alchemy of our times. The examples of alchemy and perpetual motion machines show that the only way to 'prove' something is impossible is by developing a satisfactory theory of the underlying phenomenon (nuclear physics and thermodynamics, respectively). No satisfactory theory of the earthquake source process exists at present. Further work should be encouraged, but it will probably lead to a better understanding of why prediction is effectively impossible rather than to effective methods for prediction. Governments in many countries have awarded lavish funding for work on earthquake prediction 1. Such funding frequently greatly exceeds what is available through normal peer-reviewed channels for even highly meritorious work. It is regrettable that this disparity sometimes induces reputable scientists to label their work as 'earthquake prediction research' to get a share of such funding. In view of the bleak prospects, there is no obvious need for specialized organizations and research programmes for prediction. Researchers in this area should seek funding through normal peer-reviewed channels (such as the NSF in the USA), in competition with all other research in earthquake science. This would probably lead to an almost complete phasing out of prediction research, not because of censorship but rather owing to the poor quality of most present work in this field. Of course meritorious prediction proposals (if any exist) should be funded. More importantly, meritorious work on estimating long-term seismic hazards, real-time seismology and improving design standards for earthquake-resistant construction should be funded, along with basic research and the operation of observational networks, as the key components in an integrated programme of seismological research. Now that prediction research is under pressure in many countries, including Japan 8, some prediction proponents might seek to reposition their work as one component of such an integrated research programme for the reduction of seismic hazards. However, in view of the goals and methods of prediction research, this seems unwarranted. Under the ground rules of this debate, participants are not allowed to see other contributions before publication. However, unlike earthquakes themselves, the arguments used by prediction proponents are eminently predictable. See box for a rebuttal of some of the arguments that are likely to be used by the other side in this debate. 
The sad history of earthquake prediction research teaches us a lesson that we should already have learned from cold fusion, polywater and similar debacles. Namely, the potential importance of a particular research topic should not induce a lowering of scientific standards. In the long run (and in the short run too), science progresses when rigorous research methodology is followed. References 1. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, (1997). 2. Masood, E. Greek earthquake stirs controversy over claims for prediction method. Nature 375, 617 (1995). 3. Masood, E. Court charges open split in Greek earthquake experts. Nature 377, 375 (1995). 4. Pham, V.N., Boyer, D., Chouliaras, G., LeMouël, J., Rossignol, J.C. & Stavrakakis, G. Characteristics of electromagnetic noise in the Ioannina region (Greece); a possible origin for so called 'seismic electric signals' (SES). Geophys. Res. Lett. 25, (1998). B-15

35 5. Mulargia, F. Retrospective validation of the time association of precursors. Geophys. J. Int. 131, (1997). 6. Kagan, Y. VAN earthquake predictions: an attempt at statistical evaluation. Geophys. Res. Lett. 23, (1996). 7. Stark, P.B. Earthquake prediction: the null hypothesis. Geophys. J. Int. 131, (1997). 8. Saegusa, A. Japan to try to understand quakes, not predict them. Nature 397, 284 (1999). 9. Frisch, U. Turbulence (Cambridge Univ. Press, Cambridge, 1995). 10.Abercrombie, R.E. & Mori, J. Occurrence patterns of foreshocks to large earthquakes in the western United States. Nature 381, (1996). B-16

36 Earthquake precursors and crustal 'transients' by Pascal Bernard (25 February 1999) For the public, the main question that seismologists should ask themselves is, "Can earthquakes be predicted?". Nature's earthquake prediction debate follows this simple line of inquiry, although presented in a slightly more subtle form by Ian Main: "How accurately and reliably can we predict earthquakes, and how far can we go in investigating the degree of predictability that might exist?" This is still, however, a question formulated under social pressure. I argue that this question should be left to one side by scientists to allow progress in a more general and comprehensive framework, by studying the whole set of crustal instabilities or 'transients' and not only earthquake precursors. First I shall outline the major observations relevant to this problem, and the two standard models for earthquake occurrence and predictability. I shall then comment briefly on these models and show how a more general approach could lead to a better understanding of earthquake predictability. Relevant observations of crustal instabilities O1: Continuous or transient aseismic fault slip is reported for several major faults that reach the Earth's surface 1. This slip might involve only the upper few kilometres of the fault or, for some fault segments, it might involve the whole thickness of the brittle crust. The transient creep events occur at various time scales (hours, days or months). O2: Silent and slow earthquakes observed at long periods show that significant transient, low-frequency slip events can occur on faults on a timescale of minutes 2. The reported seismic nucleation phases, lasting from fractions of a second to seconds, seem to scale with the final rupture size, and sometimes with the dimension of the pre-shock cluster, if such a cluster exists 3. O3: Fluid migration instabilities in the crust have been reported from studies of the mineralization of veins, nearsurface measurements of groundwater geochemistry and pore-pressure measurements in deep boreholes 4,5 ; nonhydrostatic pore pressure at depths of several kilometres is observed in many places. O4: Seismicity is not a Poisson process: clusters of earthquakes can last from hours to years, and have reported dimensions from hundreds of metres to hundreds of kilometres 6 ; seismic quiescence on various spatial scales has been reported to have occurred on a time scale of years 7. O5: Earthquake sizes have power-law distributions (possibly up to some finite magnitude threshold). O6: Size and roughness of fault segments follow power-law distributions; borehole logs of rock parameters (such as density and velocity) also reveal power-law distributions 8. Two standard models M1: Processes reported in O1 to O4, and their subsequent effects (such as ground deformation and electromagnetic effects) can sometimes be recognized (retrospectively) as being precursors to large earthquakes 3,9. This is the basis for the preparation-zone paradigm in seismogenesis. M2: Observations O5 and O6 provides the basis for self-organized critical models for the crust (SOC), or similar models leading to a chaotic system with a large degree of freedom, in which earthquakes are inherently unpredictable in size, space and time (such as cascade or avalanche processes) B-17
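Observation O5 above (power-law earthquake size distribution) is usually quantified by the Gutenberg-Richter b-value. As a concrete aside, the sketch below applies the standard Aki maximum-likelihood estimator to a catalogue of magnitudes; the catalogue is invented and the estimator ignores magnitude-binning corrections.

    import math

    def b_value_aki(magnitudes, completeness_mag):
        """Aki (1965) maximum-likelihood b-value for events at or above the
        completeness magnitude; binning corrections are ignored in this sketch."""
        above = [m for m in magnitudes if m >= completeness_mag]
        mean_mag = sum(above) / len(above)
        return math.log10(math.e) / (mean_mag - completeness_mag)

    # Invented toy catalogue (magnitudes), completeness assumed at M 2.0.
    catalogue = [2.1, 2.3, 2.0, 2.8, 2.2, 3.1, 2.4, 2.6, 2.0, 4.0, 2.5, 2.2]
    print(f"estimated b-value ~ {b_value_aki(catalogue, 2.0):.2f}")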

37 Many authors have convincingly shown that proponents of M1 have not been very successful if at all in providing statistical evidence for such correlations between anomalies and earthquakes, nor for stating what would distinguish a 'precursor-type' from a 'non-precursor-type' anomaly 12. Furthermore, it is difficult to explain how the size of the preparation zone, which is expected to be relatively small, can scale with the final size of large earthquake. On model M2, my opinion is that proponents of seismicity's being nearly chaotic are not very convincing either, because their physical model of the crust is a crude, oversimplified one, from which the important mechanical processes reported in O1 to O4 are absent. A generalized SOC model for the crust To resolve this, one should consider SOC models applied to the whole set of instabilities in the crust (fluid, aseismic and seismic), not only to the seismic ones. In this more global framework, it would be surprising if the characteristic parameters of the slow instabilities that span a large range of scales (duration, dimension and amplitude) did not obey a power-law distribution, just as earthquakes do. Indeed, they all result from nonlinear processes developing on the same fractal structures: the system of faults and the rock matrix (O5 and O6). Although we might have to wait for quite a long time before testing this hypothesis with enough observations, as deep aseismic slip or fluid transients are usually difficult if not impossible to detect from the surface, such a model does seem quite plausible. Under this working hypothesis it can be suggested that each type of transient process might trigger not only itself in cascades, but might sometimes also be coupled to another: fluid instabilities triggering or being triggered by fault creep, earthquakes triggering or being triggered by fluid instabilities or transient fault creep triggering or being triggered by earthquakes. Numerous observations support the existence of these coupled processes, mostly in the shallow crust, where aseismic processes are dominant 13,15. Indirect evidence also exists deeper in the brittle crust, as some foreshock sequences seem to be triggered by aseismic slip 3. The brittle-ductile transition zone might be another favourable location in which significant transient aseismic processes and seismic instabilities can coexist and be coupled on the fault system, because the faults zones there might exhibit unstable as well as stable frictional behaviour; interestingly enough, it also the common nucleation point for large earthquakes. It can thus be proposed that models M1 and M2 can be merged into a more general framework of crustal instabilities, still within a SOC model, sometimes displaying coupled processes that lead, in favourable cases, to the observation of precursors to large earthquakes. In such a model, the slow instability leading up to the earthquake is expected to remain unpredictable. However, if one were able to detect and monitor the progression of the slow instability, and to develop a physical model of the coupling process between the fluid or aseismic transient and the seismic nucleation, one might be able to predict some characteristics of the impending earthquake. 
The remaining problem is the scaling of the precursors to the earthquake size, which could be tackled by considering that some of the large slow transients (size L1) might lead to seismic ruptures large enough for breaking a whole asperity (size L2 > L1), thus allowing dynamic propagation at least up to the next large barrier on the fault (distance L3 >> L2). The possible existence of probabilistic scaling laws between L1 and L2, and between L2 and L3, might be the condition for the existence of reliable precursors. What should we do? Clearly, geophysicists should focus on deciphering and modelling the physics of the frictional and fluid migration transient processes in the crust 16,17. From the observational point of view, differential tomography with active B-18

38 sources or multiplets, dense arrays of continuous GPS receivers and of borehole strain meters and tilt meters, and deep borehole observations in fault zones (for tracking the role of fluids directly), might be the key to success. Hence, to the question, "Is the reliable prediction of individual earthquakes a realistic scientific goal?", my answer would be in the negative, as this should not yet be a scientific target. However, to the more relevant question, "Is the understanding of crustal transients an important and realistic scientific goal?", I would answer in the affirmative, and add that significant progress in this field is required before questions about earthquake predictability can be answered realistically. References 1.Gladwin et al. Measurements of the strain field associated with episodic creep events on the San Andreas fault at San Juan Bautista, California. J. Geophys. Res. 99, (1994). 2. McGuire et al. Time-domain observations of a slow precursor to the 1994 Romanche transform earthquake. Science 274, (1996). 3. Dodge et al. Detailed observations of California foreshock sequences: implications for the earthquake initiation process. J. Geophys. Res. 101, (1996). 4. Hickman et al. Introduction to special section: mechanical involvement of fluids in faulting. J. Geophys. Res. 100, (1995). 5. Roeloffs et al. Hydrological effects on water level changes associated with episodic fault creep near Parkfield, California. J. Geophys. Res. 94, (1989). 6. Kossobokov and Carslon, Active zone versus activity: A study of different seismicity patterns in the context of the prediction algorithm M8. J. Geophys. Res. 100, (1995). 7. Wyss and Martirosyan, Seismic quiescence before the M7, 1998, Spitak earthquake, Armenia. Geophys. J. Int. 134, (1998). 8. Leary, Rock as a critical-point system and the inherent implausibility of reliable earthquake prediction. Geophys. J. Int. 131, (1997). 9. Fraser-Smith et al. Low-frequency magnetic field measurements near the epicenter of the Ms 7.1 Loma Prieta earthquake. Geophys. Res. Lett. 17, (1990). 10. Bak and Tang, Earthquakes as a self-organized critical phenomenon. J. Geophys. Res. 94, (1989). 11. Allègre, C. et al. Scaling organization of fracture tectonic (SOFT) and earthquake mechanism. Phys. Earth Planet. Int. 92, (1995). 12. Geller, R.J. et al. Earthquakes cannot be predicted. Science 275, (1997). 13. Gwyther et al., Anomalous shear strain at Parkfield during Geophys. Res. Lett. 23, (1996). 14. Johnson and McEvilly, Parkfield seismicity: fluid-driven? J. Geophys. Res. 100, (1995). 15. Leary and Malin, Ground deformation events preceding the homestead valley earthquakes. Bull. Seismol. Soc. Am. 74, (1984). 16. Scholz, Earthquake and friction laws. Nature 391, (1998). 17. Sibson, Implications of fault-valve behavior for rupture nucleation and recurrence. Tectonophysics 211, (1992). B-19

39 How well can we predict earthquakes? by Andrew Michael (4 March 1999) How well can we predict earthquakes? As suggested in Ian Main's introduction to this forum, we can easily predict the behaviour of populations of earthquakes and we clearly cannot completely predict the behaviour of individual earthquakes. But where is the boundary between the easy and the impossible? In search of this boundary let us take a tour through Ian Main's four levels of earthquake prediction. Level 1, time-independent hazard estimation, clearly shows that we can predict the behaviour of earthquake populations. Here we seek spatially varying estimates of average earthquake rates. Such calculations are common and the results are widely used. To argue otherwise you must believe in equal earthquake hazards for both California and Britain. Time-dependent earthquake hazard estimation, level 2 in Ian Main's scheme, can be divided into two parts. Temporal and spatial earthquake clustering, which I shall denote as level 2a, can lead to some definite improvements over the time-independent estimates. Aftershocks are a major part of any earthquake catalogue and the largest ones are capable of doing additional damage. Probabilistic estimates of aftershock rates can be used to aid emergency response and recovery operations after damaging earthquakes 1,2. Although predicting aftershocks is an admirable goal, by definition it does not include predicting the largest and most damaging earthquakes. Recognizing foreshocks would allow us to predict these more important events. But no one has been able to identify which earthquakes are foreshocks. This has limited us to statistical analyses in which we probabilistically estimate the odds that an earthquake is a foreshock 3 or treat each earthquake as a main shock and allow for the possibility that one of its aftershocks might be larger 1,2. In both cases, the probabilities that any earthquake will be followed by a larger event are only a few per cent over the first several days. There might also be significant uncertainties in these probabilities 4. Understanding earthquake clustering in terms of stress transfer and rate and state friction laws 5-7 might allow us to place these statistical models on a firmer physical footing, but this will not necessarily reduce these uncertainties. Earthquake clustering is now a routine time-dependent hazard estimation tool in California. Joint foreshock and aftershock probabilities are automatically released by the United States Geological Survey and the State of California after earthquakes over magnitude 5. But does level 2a let us predict the behaviour of individual earthquakes or merely the behaviour of a population? Predictions based on aftershocks can be fulfilled by a variety of possible events, so they predict the behaviour of a population of earthquakes. In contrast, statistical models of foreshocks target a specific main shock 3,4. But actually writing a definition for a single earthquake is quite difficult4 and so at best these are predictions for one of a small population. Also, given the long time between main shocks, it is difficult to test these predictions of individual events or small populations. The other choice is to do a test over a broad area but then we are really testing the behaviour of the population. The second part of level 2 continues with the prediction of specific events by using the concept of an earthquake cycle based on the elastic rebound theory 8. 
The use of this cycle, which I shall refer to as level 2b, led from plate tectonics through seismic gaps and on to time-dependent hazard analysis based on a probabilistic model of the time between earthquakes on a fault segment. To achieve level 1 we need only know the average rate of earthquakes in a region. To achieve level 2b we must assign those rates to specific fault segments, determine the date of the last event on each fault segment, and choose an earthquake recurrence model. Determining the history of earthquakes on a fault segment is especially difficult in areas such as California, where the historic record is short compared with the average time between major earthquakes. It is also difficult in areas in which the historic record is longer. Although we might know when past earthquakes occurred, we might not know on which fault they occurred. Palaeoseismology, with fault trenches and tree ring studies, attempts to address these questions, but its success varies depending on local conditions. B-20
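The 'probabilistic model of the time between earthquakes on a fault segment' mentioned above is typically a renewal calculation: given the elapsed time since the last event, what is the conditional chance of rupture in the next few decades? The sketch below assumes a lognormal recurrence distribution with invented parameters, purely to illustrate the arithmetic; it is not any working group's actual model.

    import math

    def lognormal_cdf(t: float, median: float, sigma: float) -> float:
        """CDF of a lognormal recurrence-time distribution (median in years)."""
        return 0.5 * (1.0 + math.erf(math.log(t / median) / (sigma * math.sqrt(2.0))))

    def conditional_probability(elapsed: float, horizon: float,
                                median: float, sigma: float) -> float:
        """P(rupture within `horizon` years | no rupture in the first `elapsed` years)."""
        f0 = lognormal_cdf(elapsed, median, sigma)
        f1 = lognormal_cdf(elapsed + horizon, median, sigma)
        return (f1 - f0) / (1.0 - f0)

    # Invented segment: median recurrence 150 yr, sigma 0.5, last event 100 yr ago.
    p = conditional_probability(elapsed=100.0, horizon=30.0, median=150.0, sigma=0.5)
    print(f"conditional 30-year probability ~ {p:.0%}")

The difficulties listed in the text (unknown dates of past events, arbitrary segmentation, choice of recurrence model) all enter through the inputs to a calculation of this kind.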

A few consensus reports have been issued that include time-dependent hazard estimates for California 9,10; an update to the estimates for Northern California is currently under way. Although these analyses attempt to predict individual model events, there is so much uncertainty in the individual predictions that the results are presented as a probabilistic sum over many models. A further problem with level 2b is that these predictions might be impossible to test during our lifetimes. Thus, our faith in these predictions relies on our ability to test the components that went into them and our faith in the 'experts' who must make somewhat arbitrary choices when assembling these components. Although the quality of these predictions is debatable, their impact is clearer. Widespread release of earthquake hazard estimates in the San Francisco Bay area has led businesses and governments to spend hundreds of millions on earthquake preparedness 11.

Level 3, the use of precursors, could lead to the prediction of either individual events or the behaviour of the population depending on how large an area the precursors cover. Given that years of effort have led to no widely accepted precursors, perhaps there are no valid earthquake precursors. Or have our efforts been too weak to find them? Although Ian Main asserts that the effort to find precursors has been enormous, it has used only a few per cent of the US earthquake research budget. This limited effort has allowed a wide variety of dense instrumentation to be installed in very few areas, and these areas have not yet experienced a large event 12,13. Although the level of effort must be considered against other seismological and societal goals, it is impossible to rule out the existence of precursors on the basis of a lack of observations.

Another option is to show that there cannot be any valid earthquake precursors because the system is simply too chaotic. This would also rule out level 4: the deterministic prediction of individual events. For instance, if when an earthquake begins there is no way of suggesting how large it will become, prediction will be very difficult. Laboratory faults display a slow nucleation process and some recent work suggests a slow 14, magnitude-proportional 15 nucleation process for real faults, but this remains controversial 16,17. Other fruitful topics for further research include understanding the frictional behaviour of faults and why they are so much weaker than simple laboratory models 18,19. The predictability of the weakening mechanism might affect our view of how predictable the entire system is. For instance, opening-mode vibrations 20 might be more predictable than the collapse of high-pore-fluid compartments 21,22. Until we understand better the basic processes of real faults it is too early to say that we will not improve on our current predictive capability. And our knowledge might improve with new observations such as those made in deep drill holes 24.

In conclusion, scientists are now making societally useful predictions based on both the behaviour of the population of earthquakes and of individual events, although these predictions are best posed in terms of at least small populations.
Progress in this field might be difficult but we should heed Sir Peter Medawar's advice 25 : "No kind of prediction is more obviously mistaken or more dramatically falsified than that which declares that something which is possible in principle (that is, which does not flout some established scientific law) will never or can never happen." References 1. Reasenberg, P.A. & Jones, L.M. California aftershock hazard forecasts. Science. 247, (1990) 2. Reasenberg, P.A. & Jones, L.M. Earthquake hazard after a mainshock in California. Science 243, (1989) 3. Agnew, D.C. & Jones, L.M. Prediction probabilities from foreshocks. J. Geophys. Res. 96, (1991) 4. Michael, A.J. & Jones, L.M. Seismicity alert probabilities at Parkfield, California, revisited. Bull. Seism. Soc. Am. 87, (1998). 5. Dieterich, J. A constitutive law for rate of earthquake production and its application to earthquake clustering. J. Geophys. Res. 99, (1994). 6. King, G.C.P., Stein, R.S. & Lin, J. Static stress changes and the triggering of earthquakes: The 1992 Landers, California, earthquake sequence. Bull. Seism. Soc. Am. 84, (1994) 7. Stein, R.S., King, G.C.P. & Lin, J. Stress triggering of the 1994 M=6.7 Northridge, California, earthquake by its predecessors. Science 265, (1994) B-21

41 8. Reid, H.F. The California earthquake of April 18, 1906; the mechanics of the earthquake, Vol. 2, 192 (Carnegie Inst. Wash. Pub. 87, 1910) 9. Working Group on California Earthquake Probabilities. Probabilities of large earthquakes occurring in California on the San Andreas fault. (U.S. Geol. Survey Open-File Rep , 1988) 10. Working Group on California Earthquake Probabilities. Probabilities of large earthquakes in the San Francisco Bay region, California. (U.S. Geol. Survey Circular 1053, 1990) 11. Bakun, W.H. Pay a little now, or a lot later. (U.S. Geol. Survey Fact Sheet , 1995) 12. Bakun, W.H. & Lindh, A.G. The Parkfield, California, earthquake prediction experiment. Science 229, (1985) 13. Roeloffs, E.A. & Langbein, J. The earthquake prediction experiment at Parkfield, California. Rev. Geophys. 32, (1994) 14. Iio, Y. Slow initial phase of the P-wave velocity pulse generated by microearthquakes. Geophys. Res. Lett. 19, (1992) 15. Ellsworth, W.L. & Beroza, G.C. Seismic evidence for an earthquake nucleation phase. Science 268, (1995) 16. Mori, J.J. & Kanamori, H. Initial rupture of earthquakes in the 1995 Ridgecrest, California sequence. Geophys. Res. Lett. 23, (1996) 17. Ellsworth, W.L. & Beroza, G.C. Observation of the seismic nucleation phase in the Ridgecrest, California, earthquake sequence. Geophys. Res. Lett. 25, (1998) 18. Brune, J.N., Henyey, T.L. & Roy, R.F. Heat flow, stress, and rate of slip along the San Andreas fault, California. J. Geophys. Res. 74, (1969) 19. Lachenbruch, A.H. & Sass, J.H. Heat flow and energetics of the San Andreas fault zone: Magnitude of deviatoric stresses in the Earth's crust and uppermost mantle. J. Geophys. Res. 85, (1980) 20. Anooshehpoor, A. & Brune, J.N. Frictional heat generation and seismic radiation in a foam rubber model of earthquakes: Faulting, friction, and earthquake mechanics; Part 1. Pure Appl. Geophys. 142, (1994) 21. Rice, J.R. in Fault Mechanics and Transport Properties in Rocks (eds. Evans, B. & Wong, T.-F.) (Academic, 1992) 22. Byerlee, J. Friction, overpressure and fault normal compression. Geophys. Res. Lett. 17, (1990) 23. Byerlee, J. Model for episodic flow of high pressure water in fault zones before earthquakes. Geology 21, (1993) 24. Hickman, S., Zoback, M., Younker, Y. & Ellsworth, W. Deep scientific drilling in the San Andreas fault zone. Eos 75, (1994) 25. Medawar, P. B. Pluto's Republic (Oxford Univ. Press, London, 1982) B-22

Earthquake prediction: feasible and useful? by Christopher Scholz (4 March 1999)

There has been a recent recrudescence 1,2 of the long debate on the feasibility of short-term earthquake prediction, namely, the prediction, with a lead time of days to weeks, of the time, location and magnitude of a future event. This type of earthquake prediction is inherently difficult to research and has a chequered past, with many intriguing but fragmentary observations of possibly precursory phenomena but no scientifically based and verified successes 3. The current debate has taken the matter further, with the assertion, based on two arguments, that such prediction is intrinsically impossible.

The first argument is that the Earth is in a state of self-organized criticality (SOC), everywhere near the rupture point, so that earthquakes of any size can occur randomly anywhere at any time. SOC refers to a global state, such as that of the whole Earth or a large portion of it containing many earthquake-generating faults with uncorrelated states. However, to paraphrase what Tip O'Neill, the late Speaker of the US House of Representatives, said about politics, earthquake prediction is always local. This point is illustrated in Fig. 1 (omitted), which shows the canonical sandpile model of SOC. The pile is built by a rain of sand and, when its sides reach the critical angle of repose (Fig. 1A), landslides of all sizes begin to occur. If we focus now on only one sector of the sandpile, there will occasionally occur a system-sized landslide (Fig. 1B), which brings the local slope well below the angle of repose. No landslides can then occur in this locality until the slope is built back up to the angle of repose. It is the problem of long-term earthquake prediction to estimate when this will occur. In earthquake prediction research, this is known as the 'seismic gap' hypothesis. A test of this hypothesis 4, which had negative results, was flawed because it used earthquakes that were smaller than system-sized and took only a bite out of the side (Fig. 1C), which clearly does not preclude subsequent local earthquakes.

Their second argument is based on the conjecture that an earthquake cannot 'know' how big it will become because that depends entirely on initial conditions (local state of stress and strength of the fault). This will prevent the earthquake magnitude from being predicted even if one could sense its nucleation (which friction theory predicts might be detectable days or weeks before the earthquake instability 5). Could this conjecture be false? There are observations that indicate that the size of foreshock zones, and a precursory slip phase of earthquakes, which might map the nucleation region, scale with the size of the subsequent mainshock 6,7. Thus the detection of the nucleation zone size might allow the prediction of the size of the subsequent earthquake. If, however, this conjecture is true, can it preclude the prediction of the earthquake's size? No, but the problem would then change; it would require determining the initial conditions, namely the size of the region around the nucleation zone that is loaded near the critical state. Other methods, such as those espoused in the 'dilatancy-diffusion' theory of earthquake prediction 8, might make that possible. Therefore, although we do not have a method for making short-term predictions, I do not believe it is justified to assert that it is impossible. What, then, can we say about other types of earthquake prediction: their feasibility and utility?
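Before turning to that question, the sandpile picture invoked above can be made concrete. The following is a minimal Bak-Tang-Wiesenfeld-style sandpile on a small grid, offered only as an illustration of how a critical state with avalanches of all sizes arises; it is not a model of any particular fault, and the grid size and grain count are arbitrary.

    import random

    def sandpile_avalanches(size=16, grains=5000, seed=1):
        """Drop grains on a size x size grid; any cell holding 4+ grains topples,
        sending one grain to each neighbour (grains falling off the edge are lost).
        Returns the number of topplings triggered by each dropped grain."""
        random.seed(seed)
        grid = [[0] * size for _ in range(size)]
        avalanche_sizes = []
        for _ in range(grains):
            grid[random.randrange(size)][random.randrange(size)] += 1
            topples = 0
            unstable = True
            while unstable:
                unstable = False
                for i in range(size):
                    for j in range(size):
                        if grid[i][j] >= 4:
                            grid[i][j] -= 4
                            topples += 1
                            unstable = True
                            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                                ni, nj = i + di, j + dj
                                if 0 <= ni < size and 0 <= nj < size:
                                    grid[ni][nj] += 1
            avalanche_sizes.append(topples)
        return avalanche_sizes

    sizes = sandpile_avalanches()
    print("largest avalanche:", max(sizes), "topplings;",
          "fraction of drops causing no avalanche:", round(sizes.count(0) / len(sizes), 2))

Once the pile reaches its critical state, the avalanche sizes span many orders of magnitude, which is the property Scholz's "system-sized landslide" argument relies on.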
Long-term prediction, which is the estimate, on a decadal time scale, of the probable failure time of segments of active faults, is now an established part of seismic hazard analysis 9. On the basis of that methodology, several studies forecast the 1989 Loma Prieta, California, earthquake in the six years before that event 10. The utility of this kind of prediction is that with a decadal lead time, it can guide engineering and emergency planning measures to B-23

43 mitigate the impact of the earthquake. An intermediate-term prediction is an update of the long-term prediction brought about by an increase in seismicity (Fig. 1D) or some other indicator that the fault is near its failure point. In another type of prediction, an Immediate Alert, seismic waves above a certain threshold send an electronic alert, which, with a lead time of several seconds, can be used for such things as shutting down nuclear reactors, gas and electricity grids, and the like. A system like this is in use in Japan to stop high-speed trains in the event of an earthquake. Finally, the finding that earthquakes often trigger other earthquakes on nearby faults leads to another prediction model, which might be called a post-earthquake seismic hazard reassessment. In this methodology, shortly after a large earthquake the resulting stress changes are calculated on all nearby faults and warnings issued about those faults that have been brought closer to failure by the preceding earthquake 11. What, then, should we do about short-term earthquake prediction? Should we declare it impossible and banish it from our minds? I think not: there is much yet to be learned about earthquake physics, and rapid progress is being made, particularly in the applications of the rate/state variable-friction laws to the problem 12. Until now we have been working in the dark, with the only observables being the earthquakes themselves. Dense permanent global positioning system (GPS) networks are presently being installed in California and Japan and elsewhere that, together with satellite radar interferometry, will allow us to view for the first time the evolution of strain fields in space and time. Who knows what might turn up? Then there are the curious 'precursory' phenomena, which continue to be serendipitously observed from time to time. What could their mechanism be? References 1. Main, I.G. Long odds on prediction. Nature 385, (1997). 2. Geller, R.J., Jackson D.D., Kagan, Y.Y. & Mulargia, F. Earthquakes cannot be predicted. Science 275, (1997). 3. Scholz, C.H. Whatever happened to earthquake prediction. Geotimes, pp , March (1997). 4. Kagan, Y.Y. & Jackson, D.D. Seismic gap hypothesis: ten years after. J. Geophys. Res. 96, (1991). 5. Dieterich, J.H. & Kilgore, B. Implications of fault constitutive properties for earthquake prediction. Proc. Natl Acad. Sci. USA 93, (1996). 6. Dodge, D.A., Beroza, G.C. & Ellsworth, W.L. Detailed observation of California foreshock sequences: implications for the earthquake initiation process. J. Geophys. Res. 101, (1996). 7. Ellsworth, W.L. & Beroza, G.C. Seismic evidence for an earthquake nucleation phase. Science 268, (1995). 8. Scholz, C.H., Sykes, L.R. & Aggarwal, Y.P. Earthquake prediction, a physical basis. Science 181, (1973). 9. Working group on California Earthquake Probabilities. Probabilities of Large Earthquakes Occurring in California on the San Andreas Fault (U.S. Geol. Surv. Open-file Rep , 1988). 10. Harris, R.A. Forecasts of the 1989 Loma Prieta, California, earthquake. Bull. Seismol. Soc. Am. 88, (1998). 11. Toda, S., Stein, R.S., Reasenberg, P.A., Dieterich, J.H. & Yoshida, A. Stress transfer by the 1995 Mw 6.9 Kobe, Japan, shock: effect on aftershocks and future earthquake probabilities. J. Geophys. Res. 103, (1998). 12. Scholz, C.H. Earthquakes and friction laws. Nature 391, (1998). B-24

Earthquake prediction is difficult but not impossible by Leon Knopoff (11 March 1999)
For a prediction to be successful, the probability of occurrence in a time interval and a space domain must be specified in advance, as must the lower magnitude. There are two important additional constraints: a utilitarian constraint demands that the lower magnitude bound be appropriate to societal needs; in other words, we are especially interested in strong destructive earthquakes. The time intervals for societal needs in the developing countries are of the order of days, but in the developed countries the windows can be broader, even of the order of years, because the response can be one of marshalling resources to improve construction, for example. A second constraint is that we must guard against self-indulgence: if the time or space windows are made too broad, or the magnitude threshold is made too low, then we can increase the probability of success up to 100% without any serious effort on our part (as, equally, will a Poisson random process). To avoid this problem we must specify how our probability estimate for the window compares with the poissonian estimate. Despite our assertions about the desirability of probabilistic estimates, the problem is not statistical. There have been too few large enough events in any sufficiently small area in the past century to be able to define probabilities of the largest events sufficiently accurately.
Cyclic inferences
There are two ways in which to proceed. One is to study the time intervals between earthquakes in the region in this magnitude scale. If earthquakes are periodic, the problem is solved. Current estimates of interval times through measurements by global positioning by satellite (GPS) of rates of slip, coupled with geological estimates of slips in great earthquakes, give only average values of interval times. However, from palaeoseismicity, we find that the interval times for the strongest earthquakes at one site on the San Andreas fault have large variability 1. The statistical distribution of these interval times is poorly identified even in this, the best of cases. And a long duration since the last occurrence is no guarantee that the next event is imminent; the next event could be farther in the future 2, as Ian Main has also noted. The conclusion depends on the actual interval time distribution, which is unknown. The failure of the Parkfield prediction is a case in point: extrapolation from a brief set of interval times was insufficient to provide adequate information about the distribution of interval times. The variability of interval times is due to the influence of earthquakes on nearby faults; the earthquakes on a given fault cannot be taken as occurring as though they were independent of the activity on the other faults in the neighbourhood. Without information about the distribution of interval times, an earthquake prediction programme based only on GPS and short runs of palaeoseismicity must fail; the average values of slips and slip rates alone are not sufficient to solve the problem, but they comprise one of several pieces of information important to the prediction problem. Indeed, it is only on some faults that we have information about the date of the most recent sizable earthquake.
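Knopoff's requirement that any prediction window be scored against the poissonian estimate can be made concrete with a short calculation. The rate, window length and claimed probability in the sketch below are hypothetical numbers chosen only to show how such a probability gain would be computed.

import math

def poisson_prob_at_least_one(rate_per_year, window_years):
    # Probability that a Poisson process with this annual rate produces
    # at least one event in the window.
    return 1.0 - math.exp(-rate_per_year * window_years)

# Hypothetical case: a region averaging one M >= 7 event per century, and a
# prediction that assigns 30% probability to a particular 5-year window.
baseline = poisson_prob_at_least_one(rate_per_year=1.0 / 100.0, window_years=5.0)
claimed = 0.30
print(f"Poisson baseline for the window: {baseline:.3f}")
print(f"Probability gain of the prediction: {claimed / baseline:.1f}x")

A claimed probability only matters to the extent that it exceeds this baseline; in the example the gain is roughly sixfold, which is the kind of figure that would then have to be verified over many trials.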
What is lacking in this version of the programme is a theoretical effort to understand the distribution of interval times in one subarea due to earthquakes on an inhomogeneous network of inhomogeneous faults and subfaults, a modelling problem of considerable difficulty.
De novo prediction
The second and more attractive approach is to search for the immediate precursors of strong earthquakes. Here there have been many culs-de-sac: searches for foreshocks, tilts, radon, electrical precursors and variations in velocity ratios of P-waves to S-waves have either failed or are at best unproven. In general, these efforts (a) failed to restrict the problem to the study of large earthquakes and (b) failed to evaluate seriously the success in units of poissonian behaviour. In many cases the invalid assumption was made that one could use the prediction of small earthquakes as a proxy for the prediction of large ones. Part of the blame for the use of the assumption can be attributed to a misinterpretation of the Gutenberg-Richter magnitude frequency distribution. The illusion of the G-R distribution is that there are no characteristic scale sizes except for the largest-magnitude events that a region can support. We now know that there are at least three subscales in the Southern California distribution: the central trapped-mode core of the fracture in the largest earthquakes has a dimension of the order of 100 m (ref. 3); the dimension of the zone of aftershocks astride a large fracture is of the order of 1-3 km; and the thickness of the brittle seismogenic layer is of the order of 15 km. (Space limitations do not allow me to discuss the cause of the apparent log-linearity of the G-R distribution in the presence of characteristic length scales 4.) Because of the wealth of scales, the 'prediction' of earthquakes at a smaller scale to understand larger ones cannot be valid. The assumption that we can amplify our data set by a study of large earthquakes worldwide is also not tenable, because of the large variability of the faulting environment for the largest earthquakes from region to region.
Statistics of rare events
The small number of events means that again we need a physics-based theory of the precursory process to amplify the meager data. In the area of physics, another blind alley was followed. The beguiling attractiveness of the illusion of scale-independence of the G-R law suggested that the model of self-organized criticality (SOC), which also yielded scale-independent distributions, might be appropriate. (The logic is evidently faulty: if mammals have four legs, and tables have four legs, it does not follow that tables are mammals, or the reverse.) The model of SOC permits a hierarchical development of large events out of the nonlinear interaction of smaller events, at rates in relation to their sizes, and culminating in the largest event. However, there are several important arguments against the applicability of SOC to the earthquake problem.
1. Faults and fault systems are inhomogeneous: we have already noted the presence of several scale sizes.
2. Seismicity at almost all scales is absent from most faults, before any large earthquake on that fault; the San Andreas Fault in Southern California is remarkably somnolent at all magnitudes on the section that tore in the 1857 earthquake.
3. There is no evidence for long-range correlations of the stress field before large earthquakes.
I do not see that the salient properties of SOC that are requisites for its application are reproduced in the earthquake data. It is now time to develop a sound physics-based theory of the precursory process that takes us away from simplistic models. Such a theory should study the organization of seismicity on the complex geometry of faults and fault systems, and should bring to bear the properties of rocks under high deformational stress and under realistic loading and unloading rates. It is impossible to anticipate catastrophic failure on a purely elastic-loading/brittle-fracture model of rupture. As it has been for nearly 60 years 5, the detection of non-elastic deformation under high stress before fracture is the most promising avenue for the detection and identification of precursors.
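Knopoff's remarks on the Gutenberg-Richter distribution can be connected to the routine way its slope is estimated. The sketch below applies Aki's maximum-likelihood b-value formula to a synthetic catalogue; the completeness magnitude and the true b-value are assumptions of the example, and the small correction for magnitude binning is omitted.

import math
import random

def b_value_mle(magnitudes, completeness_mag):
    # Aki's maximum-likelihood estimate of the Gutenberg-Richter b-value for
    # events at or above the completeness magnitude (binning correction omitted).
    m = [x for x in magnitudes if x >= completeness_mag]
    return math.log10(math.e) / (sum(m) / len(m) - completeness_mag)

# Synthetic catalogue drawn from a G-R law with b = 1.0 above Mc = 3.0, used
# only to check that the estimator recovers the assumed value.
random.seed(0)
mc, b_true = 3.0, 1.0
catalogue = [mc + random.expovariate(b_true * math.log(10.0)) for _ in range(5000)]
print(f"Recovered b-value: {b_value_mle(catalogue, mc):.2f}")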
The nucleation of the largest earthquakes on inhomogeneous faults will take place at sites of greatest compressional strength, which are of geometrical origin 6. These localized sites are those most likely to display precursory accelerated slip. The tasks of identifying these sites in advance and of measuring the deformation at them are not easy, even for impending large earthquakes. The task of identifying faults and measuring slip on them before remote small earthquakes, such as the recent Armenia, Colombia, event, does not seem to be possible at present. In my opinion, fluctuations in seismicity are not active agents that participate in a process of self-organization toward large events. Rather, they serve as qualitative stress gauges to indicate that regions of the Earth's crust are in a state of high stress or otherwise. We have used fluctuations in the rates of occurrence of intermediate-magnitude earthquakes to construct hindsight predictive techniques 7 that are successful at about the 80% level (with large error bars) and represent an improvement over poissonian estimates of the order of 3:1 for a region the size of Southern California, with time constants of the order of 10 years, and with a magnitude threshold around 6.8. This is not much progress, but it is a step in the right direction.
Challenges not insolubles
The recent paper by Geller et al. 8 is in error on two counts. First, it states that the model of SOC shows that earthquakes are unpredictable. In fact, SOC 'predicts' stresses more readily than do chaotic systems. I have indicated above that the model of SOC is inapplicable to earthquakes on several counts: the data fail to show scale independence, the data fail to show long-range correlations in the stress field, and individual faults are remarkably inactive before large earthquakes. Second, the paper 8 states that the problem is too difficult, and we should therefore give up trying. I believe the opposite. The community has indeed tried the seemingly easy methods, and they have failed. For 25 years the leadership of our national programmes in prediction have been making the assumption that the problem is simple and will therefore have a simple prescriptive solution. We have been guilty of jumping on bandwagons without asking the basic questions, "What is an earthquake? What determines its size, and why is it likely to occur where and when it does?" These are physics questions; they are not likely to be solved by statistically unsubstantiable means. We have so far been unsuccessful at prediction because laboratory and theoretical studies of the physics of deformation and fracture have been largely unsupported. The problem is not simple; however, that does not mean it is insoluble. As I have indicated, there are weak solutions at present for large space-time windows. The short-term problem is much more difficult.
References
1. Sieh, K., Stuiver, M. & Brillinger, D. A more precise chronology of earthquakes produced by the San Andreas Fault in Southern California. J. Geophys. Res. 94, (1989). 2. Sornette, D. & Knopoff, L. The paradox of the expected time until the next earthquake. Bull. Seismol. Soc. Am. 87, (1997). 3. Li, Y.G., Aki, K., Adams, D., Hasemi, A. & Lee, W.H.K. Seismic guided waves trapped in the fault zone of the Landers, California, earthquake of 1992. J. Geophys. Res. 99, (1994). 4. Knopoff, L. b-values for large and small Southern California earthquakes (to be submitted); The distribution of declustered earthquakes in Southern California (to be submitted). 5. Griggs, D.T. Experimental flow of rocks under conditions favoring recrystallization. Bull. Geol. Soc. Am. 51, (1940). 6. Nielsen, S.B. & Knopoff, L. The equivalent strength of geometrical barriers to earthquakes. J. Geophys. Res. 103, (1998). 7. Knopoff, L., Levshina, T., Keilis-Borok, V.I. & Mattoni, C. Increased long-range intermediate-magnitude earthquake activity prior to strong earthquakes in California. J. Geophys. Res. 101, (1996). 8. Geller, R.J., Jackson, D.D., Kagan, Y.Y. & Mulargia, F. Earthquakes cannot be predicted. Science 275, (1997).

47 Earthquake Prediction: What should we be debating? by Robert Geller (11 March 1999) The topic posed by the editors for this debate, is whether the reliable prediction of individual earthquakes is a realistic scientific goal. Translated into everyday language, this becomes: given the present state of the art, does earthquake prediction research merit a significant investment of public funds? My initial contribution to this debate stated the negative case. None of the other debaters appears to have made strong arguments for the affirmative. Mission: Impossible? The arguments presented by some of the other debaters are variations on the following theme: 1. Prediction has not yet been shown to be impossible. 2. Some other things that were called impossible later turned out to be possible. 3. So, why shouldn't prediction also turn out to be possible? However, convincing supporting data, not just the mere fact that impossibility has not yet been proven, should be required before any proposed research is approved. This is particularly true for fields like prediction 1, or cold fusion 2, where previous work has been notoriously unsuccessful. Note that we do not have to decide whether or not prediction is inherently impossible. We just have to decide whether or not there are compelling grounds at present for establishing large-scale ("throw money at the problem") programmes for prediction research. The answer given current knowledge is clearly negative, but the question could, if necessary, be reopened at any time if new proposals were backed by well documented and convincing results. The Gambler's Fallacy Simple mechanical systems used in games of chance provide cautionary examples for prediction researchers. If an honest die is rolled, the probability of each number's landing face up is 1 in 6, but (due to sensitivity to small variations in the initial conditions) it is impossible to reliably and accurately predict the outcome of individual rolls. Many gamblers nevertheless try in vain to look for patterns in the outcome of previous rolls of a die. Such gamblers, as a group, lose their money, but for a short time a lucky few will win. By looking only at the winners, while ignoring the much larger group of losers, it is easy to jump to the mistaken conclusion that the winners have found a way to beat the odds. The root cause of the gambler's fallacy is drawing conclusions from a retrospectively chosen and unrepresentative sample of a much larger data set. This is also the fundamental problem bedevilling 'case studies' of alleged earthquake precursors, but the fallacy here is a bit less obvious. This is because the probabilities for each roll of a die are fixed, but the probability of earthquake occurrence is spatially variable, and varies strongly temporally depending on previous seismicity. A benchmark for prediction methods The probability of earthquake occurrence is much larger than usual immediately after an earthquake occurs, decaying with time as a power law 3. This is the basis for the 'automatic alarm' prediction strategy4: issue an alarm automatically after every earthquake above a certain size, on the chance that it might be a foreshock of a larger earthquake. B-28

48 The exact success and alarm rates of the automatic alarm strategy will depend on the choice of windows, but there will probably be hundreds of false alarms for every success, and on the order of half the significant earthquakes will probably be missed. Thus, as emphasized by its proposer 4, this strategy is not in general sufficiently reliable and accurate to justify issuing public alarms. (Probabilistic prediction of aftershocks -- see discussion by Michael -- may be an exception where public alarms are justifiable.) Note that the 'automatic alarm' strategy is a scientifically valid method for making forecasts in Main's category 2 (time-dependent hazard), although its actual utility in hazard mitigation is unclear. The automatic alarm strategy can be implemented at essentially no cost, as all we need are the hypocentral data from a seismic network. No measurements of electromagnetic signals, radio-isotope levels in well water, or any of the other phenomena that are sometimes claimed to be earthquake precursors are required. Although the automatic alarm strategy falls far short of the accuracy and reliability required for issuing public alarms, it achieves a significant probability gain over predictions issued completely at random. The automatic alarm strategy should be adopted as the benchmark for testing other proposed prediction methods. Unless and until a proposed method has been shown to outperform the automatic alarm strategy (none has ever been shown to do so), it does not warrant intensive investigation. Needed: objective testing, not case studies What is wrong with present prediction research? Wyss cites scientifically weak work and scientifically unqualified publicity seekers as problems. I agree 5, but I do not think these are the main problems. The principal problem appears to be the use of the anecdotal 'case study' approach by prediction researchers. At an early stage this approach can be valuable, but there are now literally thousands of published claims of precursors 1. The value of further additions to this list is questionable. Wyss's contribution to this debate cites both "increased moment release" (more small earthquakes than usual) and "seismic quiescence" (fewer small earthquakes than usual) as precursors. Thus it appears that any variability whatsoever in background seismicity can be claimed, after the fact, to have been a precursor. To determine whether these variations in seismicity levels are random fluctuations or real physical phenomena, objective testing of unambiguously stated hypotheses is required6. It is regrettable that the other contributors to the first two weeks of this debate have not sufficiently acknowledged the importance of objective statistical testing in resolving the prediction debate. Researchers looking for precursors could greatly benefit from the experience of pharmaceutical research, where new drugs are routinely evaluated by randomized double-blind testing using placebos 7. Long-term forecasts: where do we stand? If they were reliable and accurate, long-term forecasts could be useful in engineering and emergency planning measures to mitigate the impact of earthquakes. Unfortunately, however, there is serious question about the accuracy and reliability of proposed methods for long-term seismicity forecasts. For example, several long-term forecasts have been issued on the basis of the 'seismic gap' hypothesis. 
However, when these forecasts were later subjected to independent testing, they were shown not to have outperformed random chance 8. There has been a running controversy over the seismic-gap forecasts; for information on both sides see the works cited in the bibliography of ref. 8. Scholz claims there were successful long-term forecasts of the 1989 Loma Prieta, California, earthquake. However, this claim is dubious, as the long-term forecasts were for an earthquake on a different fault, and with a different focal mechanism, than the actual earthquake (see section 4.4 of ref. 1). Furthermore, even if the claim of 'success' were warranted, this appears to be a classic example of the gambler's fallacy of picking one possibly atypical sample out of a much larger dataset. B-29

49 Scholz does not cite ref. 8, but does cite an earlier work by the same authors in his discussion of the seismic-gap hypothesis. Scholz contends that Kagan & Jackson incorrectly rejected the seismic gap hypothesis because their study considered some earthquakes that were too small. Scholz's criticism is apparently based on ref. 9, but Jackson & Kagan replied to this criticism in ref. 10. Unfortunately Scholz discussed only the criticism but not the reply. It appears that the real problem is that the original forecasts did not fully specify the 'rules of the game', thus forcing anyone who evaluates these forecasts to choose some of the rules in retrospect, after the actual seismicity is already known. (In fairness, it must be added that the forecasts in question are among the best of their kind, as they were stated with sufficient precision to be objectively tested, albeit with some ambiguity.) The only way to avoid such problems in the future is for forecasters and independent evaluators to thoroughly thrash out all of the ground rules at the time a forecast is issued, before the actual seismicity is known. The downside of long-term forecasts Until we have well validated methods, we should be reluctant to recommend that government authorities take strong action on the basis of long-term forecasts, although no great harm and some good is likely to result from taking sensible precautionary measures on a moderate scale in regions for which long-term forecasts have been issued. There is, however, a risk that the authorities in regions for which long-term forecasts have not been issued may become overly complacent. This is not merely a theoretical possibility. Several hypothetical future earthquakes in and around the Tokyo area have been the subject of extensive discussion in Japan for the past 25 years (see section 5 of ref. 1). Partly as a result of these forecasts, local governments in western Japan, including Kobe, incorrectly assumed that their region was not at significant risk, and failed to take sufficient precautionary measures against earthquakes. This was one of the causes of the unexpectedly large damage caused by the 1995 Kobe earthquake. The bottom line Rather than debating whether or not reliable and accurate earthquake prediction is possible, we should instead be debating the extent to which earthquake occurrence is stochastic. Since it appears likely that earthquake occurrence is at least partly stochastic (or effectively stochastic), efforts at achieving deterministic prediction seem unwarranted. We should instead be searching for reliable statistical methods for quantifying the probability of earthquake occurrence as a function of space, time, earthquake size, and previous seismicity. The case study approach to earthquake prediction research should be abandoned in favour of the objective testing of unambiguously formulated hypotheses. In view of the lack of proven forecasting methods, scientists should exercise caution in issuing public warnings regarding future seismic hazards. Finally, prediction proponents should refrain from using the argument that prediction has not yet been proven to be impossible as justification for prediction research. References 1. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, (1997). 2. Huizenga, J.R. Cold Fusion: The Scientific Fiasco of the Century (University of Rochester Press, Rochester, NY, 1992). 3. Kagan, Y. & Knopoff, L. Statistical short-term earthquake prediction. Science 236, (1987). 4. Kagan, Y. 
VAN earthquake predictions: an attempt at statistical evaluation. Geophys. Res. Lett. 23, (1996). 5. Geller, R.J. Predictable publicity. Astron. Geophys Quart. J. R. Astr. Soc. 38(1), (1997). 6. Jackson, D.D. Hypothesis testing and earthquake prediction. Proc. Natl Acad. Sci. USA, 93, (1996). 7. Shapiro, A.K. & Shapiro, E. The powerful placebo (Johns Hopkins University Press, Baltimore, MD, 1997). 8. Kagan, Y.Y. & Jackson, D.D. New seismic gap hypothesis: Five years after. J. Geophys. Res. 100, (1995). 9. Nishenko, S.P. & Sykes, L.R. Comment on "Seismic Gap Hypothesis: Ten years after" by Y.Y. Kagan and D.D. Jackson. J. Geophys. Res. 98, (1993). 10. Jackson, D.D. & Kagan, Y.Y. Reply. J. Geophys. Res. 98, (1993).
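Geller's "automatic alarm" benchmark is simple enough to score mechanically once a catalogue and a set of windows are fixed. The toy sketch below counts hits, false alarms and missed target events for an arbitrary choice of thresholds and window; the miniature catalogue and all thresholds are hypothetical, and a real test would use a full catalogue with rules stated in advance.

def score_automatic_alarms(catalog, trigger_mag, target_mag, window_days):
    """Toy scoring of the 'automatic alarm' benchmark on a catalogue of
    (time_in_days, magnitude) tuples. An alarm is issued after every event with
    magnitude >= trigger_mag; it counts as a hit if an event with magnitude
    >= target_mag follows within window_days, otherwise as a false alarm."""
    catalog = sorted(catalog)
    targets = [t for t, m in catalog if m >= target_mag]
    hits = false_alarms = 0
    predicted_targets = set()
    for t0, m0 in catalog:
        if m0 < trigger_mag:
            continue
        followed = [t for t in targets if t0 < t <= t0 + window_days]
        if followed:
            hits += 1
            predicted_targets.update(followed)
        else:
            false_alarms += 1
    missed = len(targets) - len(predicted_targets)
    return hits, false_alarms, missed

# Hypothetical mini-catalogue of (days, magnitude) pairs, for illustration only.
demo = [(0, 4.2), (3, 5.1), (4, 6.3), (40, 4.5), (200, 4.8), (201, 6.1)]
print(score_automatic_alarms(demo, trigger_mag=4.0, target_mag=6.0, window_days=7))

Any proposed precursor-based method would, on Geller's argument, have to beat this kind of baseline scoring on the same catalogue before it merited further attention.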

Without funding no progress by Max Wyss (11 March 1999)
The contributions to the debate about earthquake prediction research in Nature so far clearly show that we have hardly scratched the surface of the problem of how earthquake ruptures initiate and how to predict them. This arises from the difficulty of the problem and the lack of a vigorous program to study these questions. As Andrew Michael has said, funding for earthquake prediction research is a small fraction of the seismology program in the U.S., and seismology is poorly funded compared to disciplines like astronomy.
Great efforts over the past 100 years?!
The contributions of Bernard, Michael and Scholz to this debate show that we have only a rudimentary understanding of the physics of earthquake ruptures, of transients in the Earth's crust and of the possibility of predicting earthquakes. They also point out that numerous crustal parameters may contain relevant information, but that no generally accepted, irrefutably hard evidence exists for any of these that would allow reliable earthquake prediction. In this debate Geller repeats the exaggeration "Over the past 100 years, and particularly since 1960, great efforts, all unsuccessful, have been made to find such hypothetical precursors." Such strong wording was not acceptable in his recent article in the Geophysical Journal International 1, because articles in that journal are reviewed. The facts are that the first blueprint on prediction research was not assembled until the mid-1960s and that blueprint was not followed. No prediction research program existed before the 1970s, and after the short flurry of activity in the mid-1970s, funding in the US and Europe dried up. Those of us who work in the field of earthquake rupture or prediction know from first-hand experience that when seeking research funding, the expression "earthquake prediction" in a research proposal to the NSF or the USGS will guarantee that it will not be funded. There is no question in my mind that we will make no serious progress toward learning how to predict earthquakes unless we assure high quality control in prediction research and start to fund it at a scale comparable to the funding of astrophysical research.
The definition of earthquake prediction
The definition of "earthquake prediction" as one leading to "a planned evacuation" by the moderator of this debate is not likely to be accepted, because social scientists warn that evacuations may do more harm than good, and because an accepted definition exists. A valid earthquake prediction is any statement that specifies:
- Location ± uncertainty
- Size ± uncertainty
- Occurrence time ± uncertainty
- Probability of the prediction being fulfilled 2.
Since there exist a number of different types of consumers (individuals, officials, government agencies, insurance and other companies, police and fire-fighting departments), predictions with vastly different uncertainties are of interest. The consumer can judge from the uncertainties whether or not a given prediction is useful. Insurance companies and those who make decisions on reinforcing old buildings are more interested in long-term predictions with large uncertainties than in accurate short-term predictions.
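The definition Wyss quotes (location, size and occurrence time, each with an uncertainty, plus a probability of fulfilment) lends itself to a fixed record format, which is what objective testing ultimately requires. The sketch below is one possible layout; the field names and the example values are ours and purely illustrative.

from dataclasses import dataclass

@dataclass
class EarthquakePrediction:
    """Minimal record of a testable prediction in the sense of Allen (1976):
    every element carries an explicit uncertainty, plus the probability that
    the prediction will be fulfilled. Field names are illustrative only."""
    latitude_deg: float
    longitude_deg: float
    location_uncertainty_km: float
    magnitude: float
    magnitude_uncertainty: float
    origin_time: str               # ISO 8601 string, e.g. "2001-06-01T00:00:00Z"
    time_uncertainty_days: float
    probability: float             # probability that the prediction is fulfilled

# Hypothetical example values, not an actual prediction.
example = EarthquakePrediction(23.8, 120.8, 50.0, 7.0, 0.3,
                               "2001-06-01T00:00:00Z", 365.0, 0.25)
print(example)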

The engineering solution is not enough
Everyone, except perhaps some real estate developers, and builders of nuclear reactors and high dams, agrees that we should build according to strict codes assuring earthquake resistance. However, the great majority of people will live and work for the next 50 years in buildings that exist today and were built when lax codes were in force. The sad fact is that in most parts of the world there is no money available to reinforce these buildings. Hence, long- and intermediate-term predictions as a motivating force for precautions 1, as well as short-term prediction, if attainable, are bound to benefit people significantly if they are based on sound science and responsibly announced. If the current knowledge of the earthquake initiation process is so poorly founded that experienced researchers can maintain the profound differences of opinion present in this debate, we are in desperate need of the major research effort that is not at present being made 1.
References
1. Geller, R. J. Earthquake prediction: A critical review, Geophys. J. Int. 131, (1997). 2. Allen, C. R. Responsibilities in earthquake prediction, Bull. Seism. Soc. Amer. 66, (1976).

52 Comments by Pascal Bernard (11 March 1999) In Bob Geller's contribution, the logical construction of the sentence "empiricism should not necessarily be dismissed out of hands" presents the observational part of geophysics as a simple accessory to a better understanding of geophysical processes, which may on occasion be useless. I totally disagree with such an opinion. Observation is an absolutely necessary part of the game of science, as is the elaboration of theories with which it continually interacts, guiding it or being guided by it. Rather one could say that "observation is a necessary, but not sufficient approach to the problem". Geller also claims that "a few laboratory failure experiments might seem to" suggest that earthquakes are predictable, but that they are "conducted on a limited scale and do not replicate the complex and heterogeneous conditions of the problem in situ". My impression is that the more heterogeneous and complex the medium, the more we should expect to have detectable processes in preparation for large-scale failure. This is supported by the fact that the breakage of a very homogeneous material such as glass is said to provide much weaker warnings than rock samples before failure. Geller goes on to say that "the only way to prove that something is impossible is by developing a satisfactory theory of the underlying phenomenon". If one replaces "something" by "earthquake prediction", we should conclude that Bob agrees with all other contributors to Nature's debate, including myself, who expressed a very similar view. Indeed, we all agree that there is as yet no satisfactory theory about the nucleation of earthquakes. This part of the debate should thus be closed here. I do not see why the research on natural phenomena like geophysical transients (see my contribution) should necessarily be "integrated in a research programme for the reduction of seismic hazard", as requested by Geller. It may of course be linked to such a program because of this "prediction" rationale, but it has its own self-consistent ways in terms of physics; observational and experimental tools to be developed; and hypotheses to be tested. In addition, it may contribute to other important societal needs as in oil exploration (role of faults and fluids), or engineering (failure of concrete, friction of tires,...). I, of course, totally agree with Geller's concluding remark that rigorous research methodology should be followed. Intriguing crustal transients are observed: let us be curious, and try to understand them. One of this week's contributors considers the consequences for earthquake predictions if they are indeed selforganized critical phenomena. B-33

Comments by Per Bak (11 March 1999)
In order to understand the level at which we can expect to predict earthquakes, it is important to understand the dynamic nature of the phenomenon. Is it periodic? Is it chaotic? Is it random in space and time? Simple mathematical modelling, and comparison with empirical observations, indicate that we are dealing with a self-organized critical phenomenon 1-4. Using the notation of Pascal Bernard, these include O5, power law distribution of earthquake size, and O6, fractal, power law distribution of fault segments, mimicking the highly inhomogeneous world-wide distribution of faults and fault-zones. More interestingly, the earthquakes in SOC models are clustered in time and space, and therefore also reproduce the observation O4. This may give the strongest support for the SOC hypothesis, since no alternative models exhibiting this feature have been proposed. The distribution of waiting times T between earthquakes of a given size falls off as a power law, T^(-a). It is this feature that allows for prediction of earthquakes at levels 2 and 3, beyond the level of chance, in Main's notation. Ito 5 has analysed a model previously introduced by Bak and Sneppen 6 in a different context. He found that the waiting times for actual earthquakes in California were well represented by an exponent a = 1.4, which compares well with the value obtained from the model, a = 1.5. This implies that the longer you have waited since the last event of a given size, the longer you still have to wait, as noted in Main's opening piece, but in sharp contrast to popular belief! For the smallest time-scales, this represents foreshocks and aftershocks. For the longest time-scales this implies that in regions where there have been no large earthquakes for thousands or millions of years, we can expect to wait thousands or millions of years before we are going to see another one. We can 'predict' that it is relatively safe to stay in a region with little recent historical activity, as everyone knows. There is no characteristic timescale where the probability starts increasing, as would be the case if we were dealing with a periodic phenomenon. The phenomenon is fractal in space and time, ranging from minutes and hours to millions of years in time, and from meters to thousands of kilometers in space. This behaviour could hardly be more different from Christopher Scholz's description that "SOC refers to a global state...containing many earthquake generating faults with uncorrelated states" and that in the SOC state "earthquakes of any size can occur randomly anywhere at any time". Ironically, some real sandpiles 7 exhibit the oscillatory phenomenon depicted by Scholz, but this has nothing to do with self-organized criticality! In fact, one of the independent arguments in favour of earthquakes as SOC is the relatively small stress drop (3 MPa), independent of earthquake size, compared to the absolute magnitude of the Earth's stress field at earthquake nucleation depths (300 MPa) (for review see ref. 8). Thus the stress change is sufficiently small that this type of oscillatory behaviour (for sandpiles with large changes in angle of repose) may be precluded. Assuming that we are dealing with an SOC phenomenon, what can this tell us about the prospects of going on from statistical prediction towards the level 5 of individual prediction? Unfortunately, the size of an individual earthquake is contingent upon minor variations of the actual configuration of the crust of the Earth 8, as discussed in Main's introduction.
Thus, any precursor state of a large event is essentially identical to a precursor state of a small event. The earthquake does not "know how large it will become", as eloquently stated by Scholz. Thus, if the crust of the earth is in a SOC state, there is a bleak future for individual earthquake prediction. On the other hand, the consequences of the spatio-temporal correlation function for time-dependent hazard calculations have so far not been fully exploited! References 1. Bak, P. How Nature Works. The Science of Self-organized Criticality (Copernicus, New York, and Oxford University Press, Oxford, 1997). B-34

2. Sornette, A. & Sornette, D. Self-organized criticality and earthquakes, Europhys. Lett. 9, 197 (1989). 3. Olami, Z., Feder, H. J. & Christensen, K. Self-organized criticality in a cellular automaton modeling earthquakes, Phys. Rev. E 48, (1993). 4. Bak, P. & Tang, C. Earthquakes as an SOC phenomenon, J. Geophys. Res. 94, (1989). 5. Ito, K. Punctuated equilibrium model of evolution is also an SOC model of earthquakes, Phys. Rev. E 52, (1995). 6. Bak, P. & Sneppen, K. Punctuated equilibrium and criticality in a simple model of evolution, Phys. Rev. Lett. 71, (1993). 7. Jaeger, H.M., Liu, C. & Nagel, S. Relaxation of the angle of repose. Phys. Rev. Lett. 62, (1989). 8. Main, I. Statistical physics, seismogenesis and seismic hazard, Rev. Geophys. 34, (1996). 9. Bak, P. & Paczuski, M. Complexity, Contingency, and Criticality, Proc. Natl Acad. Sci. USA 92, (1995).
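Bak's point that "the longer you have waited, the longer you still have to wait" is a direct property of the power-law waiting-time distribution he quotes. The sketch below evaluates the median further waiting time implied by a survival function P(T > t) proportional to t^(1 - a) with a = 1.4; the exponent is taken from the text, and the calculation illustrates the assumed law only, not any particular catalogue.

A = 1.4  # waiting-time exponent quoted in the text for California seismicity

def median_residual_wait(elapsed):
    """Median additional waiting time, given that 'elapsed' time has already
    passed without an event, for a survival function P(T > t) ~ t**(1 - A)."""
    return elapsed * (2.0 ** (1.0 / (A - 1.0)) - 1.0)

for years in (1, 10, 100):
    print(f"waited {years:3d} yr -> median further wait ~ {median_residual_wait(years):7.1f} yr")

For this exponent the median remaining wait grows in direct proportion to the time already waited, which is exactly the opposite of the intuition that an overdue fault is about to fail.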

55 The status of earthquake prediction by David Jackson (18 March 1999) What is it? Earthquake prediction invites debate partly because it resists definition. To Ian Main's very helpful definitions I would add that an earthquake forecast implies substantially elevated probability. For deterministic prediction, that probability is so high that it justifies exceptional response (although not necessarily evacuation as Ian Main suggests; evacuation is not generally envisaged as a response to earthquake warnings, and it would probably be counterproductive even if future earthquakes could be predicted accurately.). Thus prediction demands high certainty. Forecasting and predicting earthquakes must involve probabilities. We can predict thunder after lightning without talking of probabilities because the sequence is so repeatable. But earthquakes are more complex: we need probabilities both to express our degree of confidence and to test that our forecasting is skilful (better than an extrapolation of past seismicity). What we can do We can estimate relative time-independent hazard well (Japan is more hazardous than Germany) but our precision is limited (Is Japan more hazardous than New Zealand?). Hazard statements are quantitative, but even after 30 years none of the models has been prospectively tested (for agreement with later earthquakes). We can estimate well the long-term seismic moment rate (a measure of displacement rate integrated over fault area) but to estimate earthquake rates we need to know their size distribution. There are very different ideas about how to do this 1,2 but none has been tested scientifically. What we cannot do We cannot specify time-dependent hazard well at all: in fact, we have two antithetical paradigms. Clustering models predict that earthquake probability is enhanced immediately after a large event. Aftershocks provide a familiar example, but large main-shocks also cluster 3. The seismic gap theory asserts that large, quasi-periodic 'characteristic earthquakes' deplete stress energy, preventing future earthquakes nearby until the stress is restored 4. How could these antithetical models coexist? It is easy: there are many examples of each behaviour in the earthquake record. So far, the seismic gap model has failed every prospective test. The 'Parkfield earthquake' 5 has been overdue since 1993, and a 1989 forecast 6 for 98 circum-pacific zones predicted that nine characteristic earthquakes should have happened by 1994; only two occurred. Our attempts at earthquake forecasting, as Ian Main defines it 7, have failed. (Note that 'earthquake forecasting' is often defined differently. Nishenko4 defined it to mean estimation of time-dependent earthquake probability, possibly on a decade time scale, and not necessarily involving precursors.) Most studies of earthquake forecasting assumed that precursors would be so obvious that estimates of background (unconditional) and anomalous (conditional) probabilities were unnecessary. Hundreds of anomalous observations have been identified retrospectively and nominated as likely precursors, but none has been shown to lead to skill in forecasting 7. Given the bleak record in earthquake forecasting, there is no prospect of deterministic earthquake prediction in the foreseeable future. B-36
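Jackson's statement that the 1989 circum-Pacific forecast implied about nine characteristic earthquakes by 1994, of which only two occurred, can be given a rough significance with a Poisson calculation. Treating the zones as independent and the expected count as exactly nine are simplifications of ours, made only to illustrate the size of the discrepancy.

import math

def poisson_cdf(k, lam):
    # P(X <= k) for a Poisson random variable with mean lam.
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

# If the forecast really implied about nine events in the test period,
# observing two or fewer would be very unlikely under that forecast.
expected, observed = 9, 2
print(f"P(<= {observed} events | mean {expected}) = {poisson_cdf(observed, expected):.4f}")

Under these simplifying assumptions the probability of so large a shortfall is well below one percent, which is the quantitative sense in which the forecast is said to have failed its prospective test.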

56 What is the difficulty? In principle, earthquakes might be predicted by one of two strategies: detecting precursors, or detailed modelling of earthquake physics. For precursors, confidence would come from empirical observations; understanding mechanisms would be desirable but not necessary. Earthquake physics involves modelling strain, stress and strength, for example, in some detail. The precursor strategy will not work because earthquakes are too complicated and too infrequent. Even if precursors existed, a few observations would not lead to prediction, because their signature would vary with place and time. This problem cannot be overcome simply by monitoring more phenomena such as electric, magnetic or gravity fields, or geochemical concentrations. Each phenomenon has its own non-seismic natural variations. Monitoring these phenomena without complete understanding is courting trouble. Monitoring them properly is a huge effort with only a remote connection to earthquakes. Such studies would certainly unearth more examples of anomalies that might be interpreted as precursors, but establishing a precursory connection would require observations of many earthquakes in the same place. Earthquake physics is an interesting and worthwhile study in its own right, but short-term earthquake prediction is not a reasonable expectation. One idea is that high stresses throughout the impending rupture area might induce recognizable inelastic processes, such as creep or stress weakening. Even if these phenomena occur they will not lead to earthquake prediction, for several reasons. Earthquakes start small, becoming big ones by dynamic rupture. The critically high stress needed to start rupture is not required to keep it going. The telltale signs, if they were to exist, need affect only the nucleation point (several kilometres deep), not the eventual rupture area. Even very large earthquakes cluster 8, indicating that seismogenic areas are almost always ready. Earthquakes clearly respond to stress changes from past earthquakes 9, but the response is complex. For example, most aftershocks occur on planes for which the shear stress should have been reduced by the main shock. Monitoring strain accumulation and deducing the boundary conditions and mechanical properties of the crust will tell a lot about earthquakes and perhaps allow us to predict some properties. To forecast better than purely statistical approaches would be in itself a solid accomplishment, which must come long before deterministic prediction. Part of our difficulty is a lack of rigour in Earth sciences. We examine past data for patterns (as we should) but we pay very little attention to validating these patterns. Many of the patterns conflict: some contend that seismicity increases before large earthquakes 10, others that it generally decreases 11. We generally explain exceptions retrospectively rather than describe the patterns, rules and limitations precisely enough to test hypotheses. What is possible? Some argue that earthquakes possess a property known as self-organized criticality (SOC), so earthquakes cannot be predicted because seismogenic regions are always in a critical state. But SOC would not preclude precursors. For example, if lightning were governed by SOC, we could still predict thunder with a short warning time. Nothing discussed above makes earthquake prediction either possible or impossible. 
Others argue that SOC comes and goes and that outward signs of SOC (such as frequent moderate earthquakes) provide the clue that a big earthquake is due. If SOC comes and goes, it is not clear how to recognize it. To be useful, it must apply to big events only, and we would need many (rare) examples to learn how big they must be. SOC would presumably appear gradually, so at any one time it might give at best a modest probability gain. The important question is not whether earthquake prediction is possible but whether it is easy. Otherwise it is not a realistic goal now, because we must learn earthquake behaviour from large earthquakes themselves, which visit too infrequently to teach us. B-37

57 What should be done? Earthquake hazard estimation is the most effective way for Earth scientists to reduce earthquake losses. Many outstanding scientific questions need answers: the most important is how to determine the magnitude distribution for large earthquakes, which is needed to estimate their frequencies. Time-dependent hazard is worth pursuing, but prospective tests are needed to identify the models that work. These tests should cover large areas of the globe, so that we need not wait too long for earthquakes. For global tests we need global data, especially on earthquakes, active faults and geodetic deformation. Basic earthquake science is a sound investment for many reasons. Progress will lead to advancements in understanding tectonics, Earth history, materials and complexity, to name just a few. Results will also benefit hazard estimation. Wholesale measurements of phenomena such as electric fields with no clear relationship to earthquakes will not help. For real progress we need a methodical approach and a better strategy for testing hypotheses. We have good reason to expect wonderful discoveries, but not deterministic prediction. References 1. Wells, D.L. & Coppersmith, K.J. New empirical relationships among magnitude, rupture length, rupture area, and surface displacement. Bull. Seism. Soc. Am. 84, (1994). 2. Kagan, Y.Y. Seismic moment-frequency relation for shallow earthquakes: regional comparison. J. Geophys. Res. 102, (1997). 3. Kagan, Y.Y. & Jackson, D.D. Long-term earthquake clustering. Geophys. J. Int. 104, (1991). 4. Nishenko, S.P. Circum-Pacific seismic potential: Pure Appl. Geophys. 135, (1991). 5. Roeloffs, E. & Langbein, J. The earthquake prediction experiment at Parkfield, California. Rev. Geophys. 32, (1994). 6. Kagan, Y.Y. & Jackson, D.D. New seismic gap hypothesis: five years after. J. Geophys. Res. 100, (1995). 7. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, (1997). 8. Kagan, Y.Y. & Jackson, D.D. Worldwide doublets of large shallow earthquakes. Bull. Seis. Soc. Am. (submitted). 9. Deng, J.S. & Sykes, L. Stress evolution in southern California and triggering of moderate- small- and micro-size earthquakes. J. Geophys. Res. 102, (1979), and references therein. 10. Ellsworth, W.L., Lindh, A.G., Prescott, W.H. & Herd, D.G. in Earthquake Prediction, An International Review (eds Simpson, D. & Richards, P.) (Am. Geophys. Un., Washington, D.C., 1981). 11. Ohtake, M., Matumoto, T. & Latham, G. in Earthquake Prediction, An International Review (eds Simpson D. & Richards, P.) (Am. Geophys. Un., Washington, D.C., 1981). B-38

Without progress no funding by Robert Geller (18 March 1999)
Wyss asserted that without funding at "a scale comparable to the funding of astrophysical research... serious progress toward learning how to predict earthquakes" was impossible. However, extensive prediction efforts in several countries in several eras have all failed. Further allocation of public funds appears unwarranted unless there are specific and compelling grounds for thinking that a proposed new prediction programme will be successful. In my first article in this debate I said that over the past 100 years, and particularly since 1960, there had been great efforts, all unsuccessful, to find precursory phenomena that could be used to make reliable and accurate predictions of earthquakes. Wyss claims that this statement is incorrect, but below I would like to demonstrate its veracity.
A tale of two countries
In 1891 the Nobi (sometimes called Mino-Owari) earthquake caused significant damage in Japan. In response, the Japanese government established the Imperial Earthquake Investigation Committee in 1892. Imamura (Ref. 1, p. 346), a well-known seismologist, wrote as follows in 1937: "[The Committee] attacked with every resource at their command the various problems bearing on earthquake prediction, such as earth tiltings and earth pulsations, variation in the elements of terrestrial magnetism, variation in underground temperatures, variation in latitude, secular variation in topography, etc., but satisfactory results were not obtained". J.B. Macelwane 2, also a leading seismologist of his day (one of the major medals of the American Geophysical Union is named in his honour), commented as follows in 1946: "The problem of earthquake forecasting has been under intensive investigation in California and elsewhere for some forty years, and we seem to be no nearer a solution of the problem than we were in the beginning. In fact the outlook is much less hopeful." Thus the existence of prediction research efforts before 1960 is supported by two leading authorities of the era. One stated that a government body had attacked the prediction problem "with every resource at their command" without obtaining satisfactory results, and another that "intensive investigations" "in California and elsewhere for some forty years" had not led to any progress towards prediction.
Only in America?
Wyss says that "no prediction research program existed before the 1970s". Even if the efforts reported by Imamura and Macelwane were disregarded, Wyss's statement would still be incorrect unless applied only to work in the US. Japan's prediction research program started in 1965, and the Soviet prediction research program started in the Garm "polygon" (test field area for intensive geophysical observations) shortly after the 1948 Ashkhabad earthquake 4. These substantial efforts by qualified professionals should not be ignored just because they were not in the US or western Europe. Japan has spent about 2 x 10^11 yen on earthquake prediction since 1965 (Asahi Shinbun newspaper, 10 January 1998), but this programme has been unsuccessful 5,6. Before adopting Wyss's suggestion of funding prediction research "at a scale comparable to the funding of astrophysical research", US government authorities should find out what went wrong in Japan. The set of possible explanations lies between two extremes.
1. Owing to some unknown difference, the seismologists in Japan failed, whereas their counterparts in the US would have succeeded if only they had had comparable funding.
2. The goals and methods of the programme were completely unrealistic.

59 Needless to say, I think (2) is correct. Nature's Tokyo correspondent appears to share my views 7. As Wyss is implicitly advocating position (1), he should explain his reasons for taking this view. The bottom line All of the debaters, including both Wyss and myself, agree that scientifically sound efforts to improve our knowledge of the earthquake source process should be made. We can be cautiously optimistic that, in the long run, such work may indirectly contribute to the mitigation of earthquake hazards. However, proposed work in this area should be evaluated by the normal peer-review process and should not be labelled as "earthquake prediction" research. References 1. Imamura, A. Theoretical and Applied Seismology. (Maruzen, Tokyo, 1937). 2. Macelwane, J.B. Forecasting earthquakes. Bull. Seism. Soc. Am. 36, 1-4 (1946). (reprinted in Geophys. J. Int. 131, , 1997). 3. Kanamori, H. Recent developments in earthquake prediction research in Japan, Tectonophysics 9, (1970). 4. Savarensky, E.F. On the prediction of earthquakes. Tectonophysics 6, (1968). 5. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, (1997). 6. Saegusa, A., Japan to try to understand quakes, not predict them. Nature, 397, 284 (1999). 7. Swinbanks, D. Without a way to measure their success, Japanese projects are very hard to stop. Nature, 357, 619 (1992). B-40

60 Comments by David Jackson (18 March 1999) Scholz omitted crucial parts of the the recent history of the seismic gap forecast and test. He remarked that our test of the gap theory was 'flawed' because it used earthquakes 'smaller than system-sized'. This was also asserted in a published comment by Nishenko & Sykes 1 and answered by Jackson & Kagan 2. But 'system-sized' was never defined in the original seismic gap model 3. The model was widely used to estimate potential for earthquakes of magnitude 7.0 and larger, so we used this threshold in our original test. The results of our test were essentially unchanged if we used larger events (magnitude 7.5 and above) as recommended by Sykes & Nishenko. More importantly, a revised version of the seismic gap model has been published 4 that is much more specific and defines the magnitude of earthquake appropriate to each seismic zone. Nishenko deserves much credit for stating the seismic gap model in testable form. Unfortunately the new gap model also failed 5 because it predicted far more earthquakes than observed in the following five-year period. Now 10 years have elapsed with the same result. Defining the 'system-sized' magnitude is a fundamental difficulty, not a semantic issue. Small earthquakes are clearly clustered, but the seismic gap model posits that large 'system-sized' events have the opposite behaviour. The definition becomes important because some different physics must take over for large events if the gap hypothesis is true. The same difficulty exists for the sand-pile analogy, whether or not it describes earthquake behaviour well. Small areas on the surface of a sand pile can suffer 'sandslides' even if they are not locally at a critical slope, because slope failures above or below can affect them. Scholz's argument that a local area might become immune by having recently slipped assumes that it is big enough to preclude upslope or downslope failures. Identifying that particular size requires a knowledge of the state of the whole system, which is not available in the earthquake analogy. The seismic gap model has no meaning without a definition of 'system-sized', and the model fails with the only specific definition offered so far. References 1. Nishenko, S.P. & Sykes, L.R. Comment on 'Seismic gap hypothesis: ten years after' by Y.Y. Kagan and D.D. Jackson. J. Geophys. Res. 98, (1993). 2. Jackson, D.D. & Kagan, Y.Y. Reply to Nishenko and Sykes. J. Geophys. Res. 98, (1993). 3. McCann, W.R., Nishenko, S.P., Sykes, L.R. & Krause, J. Seismic gaps and plate tectonics: seismic potential for major boundaries. Pure Appl. Geophys. 117, (1979). 4. Nishenko, S.P. Circum-Pacific seismic potential: Pure Appl. Geophys. 135, (1991). 5. Kagan, Y.Y. & Jackson, D.D. New seismic gap hypothesis: five years after. J. Geophys. Res. 100, (1995). B-41

61 A case for intermediate-term earthquake prediction: don't throw the baby out with the bath water! by David Bowman and Charles Sammis (18 March 1999) As anyone who has ever spent any time in California can attest, much public attention is being focused on the great earthquake-prediction debate. Unfortunately, this attention focuses on deterministic predictions on the day-to-week timescale. But as some of the participants in this debate have pointed out 1,2 current efforts to identify reliable shortterm precursors to large earthquakes have been largely unsuccessful, suggesting that earthquakes are such a complicated process that reliable (and observable) precursors might not exist. That is not to say that earthquakes do not have some 'preparatory phase', but rather that this phase might be not be consistently observable by geophysicists on the surface. But does this mean that all efforts to determine the size, timing and locations of future earthquakes are fruitless? Or are we being misled by human scales of time and distance? As Robert Geller said in his earlier comments in this debate, 'the public, media and government regard an "earthquake prediction" as an alarm of an imminent large earthquake, with enough accuracy and reliability to take measures such as the evacuation of cities'. As Geller has pointed out on many occasions, this goal might be too ambitious. However, according to the categories of earthquake prediction defined by Ian Main in the introduction of this debate, most such efforts fall into category 4 (deterministic prediction). But what about forecasting earthquakes on the year-to-decade scale? Although 'predictions' over this timescale might not justify such drastic actions as the evacuation of cities, it would certainly give policy-makers as well as individual citizens sufficient time to brace themselves for the impending event, in much the same way that California was able to prepare itself for last winter's El Niño. With this paradigm in mind, forecasting on the year-to-decade scale would be immensely useful. In recent years there has been the suggestion that even this goal might be inherently impossible. Central to this argument is the claim by many authors that the crust is in a continuous state of self-organized criticality 2-6 (and Per Bak's contribution to this debate). In the context of earthquakes, 'criticality' is defined as a system in which the stress field is correlated at all scales, meaning that at any time there is an equal probability that an event will grow to any size. If the system exhibits self-organized criticality, it will spontaneously evolve to criticality and will remain there through dissipative feedback mechanisms, relying on a constant driving stress to keep the system at the critical state. The implication of this model is that, at any time, an earthquake has a finite probability of growing into a large event, suggesting that earthquakes are inherently unpredictable. However, this is contradicted by recent observations of the evolution of the static stress field after large earthquakes. In one of the first studies on this subject 7 it was found that the 1906 San Francisco earthquake produced a 'shadow' in the static stress field that seemed to inhibit earthquakes for many years after the M = 7.9 event. After this work, several other studies observed stress shadows after numerous events including the 1857 Fort Tejon 8,9 and 1952 Kern County 8 earthquakes. 
An excellent review of these and other observations of stress shadows after large earthquakes can be found in a recent Journal of Geophysical Research special issue on stress triggers, stress shadows and implications for seismic hazard 10. In an earlier comment during this debate, Christopher Scholz discussed these stress shadows in the framework of self-organized criticality (Fig. 1B in his comment), and mentioned that this concept is equivalent to the 'seismic gap' hypothesis. However, it should be noted that recent years have seen the proliferation of models that describe how the system emerges from these stress shadows. The hypothesis for this viewpoint (which has come to be known as intermittent criticality) is that a large regional earthquake is the end result of a process in which the stress field becomes correlated over increasingly long scale-lengths (that is, the system approaches a critical state). The scale over which the stress field is correlated sets the size of the largest earthquake that can be expected at that time. The largest event possible in a given fault network cannot occur until regional criticality has been achieved. This large event then reduces the correlation length, moving the system away from the critical state on its associated network, B-42

62 creating a period of relative quiescence, after which the process repeats by rebuilding correlation lengths towards criticality and the next large event. The differences between these models for regional seismicity have important consequences for efforts to quantify the seismic hazard in a particular region. Self-organized criticality has been used as a justification for the claim that earthquakes are inherently unpredictable 2. Models of intermittent criticality, in contrast, do not preclude the possibility of discovering reliable precursors of impending great earthquakes. Indeed, several modern models use this concept to predict observable changes in regional seismicity patterns before large earthquakes. It can be argued that models of intermittent criticality not only hold the promise of providing additional criteria for intermediate-term earthquake forecasting methods but also might provide a theoretical basis for such approaches. Although models of intermittent criticality might promise improved methods for intermediate-term earthquake prediction, we must be careful not to overstate their claims. Ideally, the scientific community and the public at large should approach these methods much the same way as weather prediction. It should be fully expected that forecasts will change through time, in much the same way that the five-day weather forecast on the evening news changes. However, this will require a fundamental shift in the way we as Earth scientists think about earthquakes. We must acknowledge that the Earth is a complicated nonlinear system and that even the best intermediate-term forecasts cannot hold up to the standards imposed by Geller in his comments earlier in this debate. References 1. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, (1997). 2. Geller, R.J., Jackson, D.D., Kagan, Y.Y. & Mulargia, F. Earthquakes cannot be predicted. Science 275, (1997). 3. Sornette, A. & Sornette, D. Self-organized criticality and earthquakes. Europhys. Lett. 9, 197 (1989). 4. Bak, P. & Tang, C. Earthquakes as a self-organized critical phenomenon. J. Geophys. Res. 94, (1989). 5. Ito, K. & Matsuzaki, M. Earthquakes as self-organized critical phenomena. J. Geophys. Res. 95, (1990). 6. Main, I. Statistical physics, seismogenesis, and seismic hazard. Rev. Geophys. 34, (1996). 7. Simpson, R.W. & Reasenberg, P.A. in The Loma Prieta, California Earthquake of October 17, Tectonic Processes and Models (ed. Simpson, R.W.) F55-F89 (U.S. Geol. Surv. Prof. Pap F, 1994). 8. Harris, R.A. & Simpson, R.W. In the shadow of 1857-the effect of the great Ft. Tejon earthquake on subsequent earthquakes in southern California. Geophys. Res. Lett. 23, (1996). 9. Deng, J. & Sykes, L.R. Evolution of the stress field in southern California and triggering of moderate-size earthquakes. J. Geophys. Res. 102, (1997). 10. Harris, R.A. Introduction to special section: stress triggers, stress shadows, and implications for seismic hazard. J. Geophys. Res. 103, (1998). 11. Sornette, D. & Sammis, C.G. Complex critical exponents from renormalization group theory of earthquakes: implications for earthquake predictions. J. Phys. I 5, (1995). 12. Saleur, H., Sammis, C.G. & Sornette, D. Renormalization group theory of earthquakes. Nonlin. Processes Geophys. 3, (1996). 13. Sammis, C.G., Sornette, D. & Saleur, H. in Reduction and Predictability of Natural Disasters (SFI Studies in the Sciences of Complexity vol. 25) (eds Rundle, J.B., Klein, W. & Turcotte, D.L.)
(Addison-Wesley, Reading, Massachusetts, 1996). 14. Sammis, C.G. & Smith, S. Seismic cycles and the evolution of stress correlation in cellular automaton models of finite fault networks. Pure Appl. Geophys. (in the press). 15. Huang, Y., Saleur, H., Sammis, C.G. & Sornette, D. Precursors, aftershocks, criticality and self-organized criticality. Europhys. Lett. 41, (1998). 16. Bowman, D.D., Ouillon, G., Sammis, C.G., Sornette, A. & Sornette, D. An observational test of the critical earthquake concept. J. Geophys. Res. 103, (1998). 17. Jaumé, S.C. & Sykes, L.R. Evolving towards a critical point: a review of accelerating moment/energy release prior to large and great earthquakes. Pure Appl. Geophys. (in the press). 18. Brehm, D.J. & Braile, L.W. Intermediate-term earthquake prediction using precursory events in the New Madrid seismic zone. Bull. Seismol. Soc. Am. 88, (1998). B-43
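The observational tests of the critical earthquake concept cited above (for example refs 16-18) look for accelerating moment release by fitting the cumulative Benioff strain of regional seismicity to a power-law time-to-failure form. The sketch below illustrates that kind of fit on a purely synthetic record; the data, starting values and bounds are invented for illustration and are not taken from any of the cited studies.

import numpy as np
from scipy.optimize import curve_fit

def time_to_failure(t, A, B, tf, m):
    # Power-law time-to-failure form used in accelerating-moment-release
    # studies (Bufe & Varnes style): cumulative strain = A - B * (tf - t)**m.
    return A - B * (tf - t) ** m

# Synthetic normalized cumulative Benioff strain accelerating toward tf = 10 yr.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 9.5, 40)                      # observation times (years)
true = time_to_failure(t, 1.0, 0.5, 10.0, 0.3)     # assumed 'true' parameters
strain = true + rng.normal(0.0, 0.01, t.size)      # add small measurement noise

# Fit the four parameters; tf is bounded to lie after the last observation.
p0 = [1.0, 0.4, 10.5, 0.4]
bounds = ([0.0, 0.0, t[-1] + 0.01, 0.05], [10.0, 10.0, 30.0, 1.0])
popt, _ = curve_fit(time_to_failure, t, strain, p0=p0, bounds=bounds)
A, B, tf, m = popt
print(f"estimated failure time tf = {tf:.2f} yr, exponent m = {m:.2f}")

In practice the fitted critical time and exponent are only as meaningful as the choice of region and magnitude cut-off, which is where the hypothesis-testing concerns raised elsewhere in this debate apply.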

63 On the existence and complexity of empirical precursors by Francesco Biagi (18 March 1999) Earthquake prediction is strictly related to empirical precursors. Despite the results presented in recent decades in support of the existence of empirical precursors, there is scepticism in the scientific community about whether they exist 1-3. The widespread argument is that precursor signals reported are unrelated to earthquake activity and that they could have occurred by chance. If this were true, earthquake prediction would not be possible. Since 1974 our group has been performing research on empirical precursors. Tilts, hydrogeochemicals, electromagnetic emissions and radiowave disturbances have been investigated. We have reported results for the Friuli earthquake 4 (1976), the Umbria earthquake 5 (1979), the Irpinia earthquake 6 (1980), the Spitak earthquake 7,8 (1988) and the largest earthquakes that occurred in southern Kamchatka 9,10 during the past decade. Our field measurements and empirical data led us to suppose that there is an extremely small possibility that the precursors detected occurred randomly and are unrelated to the earthquakes. But it seems that the relationship linking earthquakes and premonitory anomalies is very complex and might be different in relation to seismogenetic zones. Consequently no general rules can be assumed. The following main aspects can be emphasized: there are earthquakes that will produce no precursors in the geophysical and geochemical parameters of a network, even if the earthquakes are large enough to be considered as potential sources of precursors; there are network sites in which one type of precursor will appear before some earthquakes and not before others, although these earthquakes could be potential sources of precursors; there are different premonitory anomaly forms both at different sites of a network for the same earthquake and at the same site for different earthquakes. These and other features are related to the anisotropy of the natural processes and it might therefore not be possible to eliminate them. The main problem in using precursors in earthquake prediction is to discover whether in a seismogenetic area these features are totally random or whether there are significant recurrences. In the first case the prediction of earthquakes is a null hypothesis; in the second case the prediction of some earthquakes might be possible. On the basis of 25 years of field research I believe that a satisfactory solution to this problem is still lacking. More data must be collected and more geophysical and geochemical parameters must be tested. Unfortunately, progress in this research area is connected with the occurrence of earthquakes. Many earthquakes (considered as sources of precursors) are necessary for defining in a meaningful way the relationship linking earthquakes and precursors in a seismogenetic area, but the occurrence of earthquakes cannot be planned. As a result a deadline for the definition of the problem cannot be foreseen and might be tens of years in the future. In this framework, countries in which research on precursors is still encouraged and funded are very few. Generally this research is prevented, so that in Europe any reference to earthquake precursors in a scientific proposal will guarantee that it will not be funded. Therefore, reputable and qualified scientists in this field are boycotted a priori. Is this the right way to conduct science? References: 1. Geller, R.J. Earthquake prediction: a critical review. Geophys.
J. Int. 131, (1997). 2. Geller, R.J., Jackson, D.D., Kagan, Y.Y. & Mulargia, F. Earthquakes cannot be predicted. Science 275, (1997). B-44

64 3.Stark, P.B. Earthquake prediction: the null hypothesis. Geophys. J. Int. 131, (1997). 4. Biagi, P.F., Caloi, P., Migani, M. & Spadea, M.C. Tilt variations and seismicity that preceded the strong Friuli earthquake of May 6th, Ann. Geofis. 29, 137 (1976). 5. Alessio, M. et al. Study of some precursory phenomena for the Umbria earthquake of September 19, Nuovo Cim. C 3, 589 (1980). 6. Allegri, L. et al. Radon and tilt anomalies detected before the Irpinia (South Italy) earthquake of November 23, 1980 at great distances from the epicenter. Geophys. Res. Lett. 10, 269 (1983). 7. Areshidze, G. et al. Anomalies in geophysical and geochemical parameters revealed in the occasion of the Paravani (M = 5.6) and Spitak (M = 6.9) earthquakes (Caucasus). Tectonophysics 202, (1992). 8. Bella, F. et al. Helium content in thermal waters in the Caucasus from 1985 to 1991 and correlations with the seismic activity. Tectonophysics 246, (1995). 9. Bella, F. et al. Hydrogeochemical anomalies in Kamchatka (Russia). Phys. Chem. Earth 23, (1998). 10. Biagi, P.F. et al. Hydrogeochemical anomalies in Kamchatka (Russia) on the occasion of the strongest (M = 6.9) earthquakes in the last ten years. Nat. Hazards (in the press). B-45

65 Realistic predictions: are they worthwhile? by Andrew Michael (25 March 1999) It is certainly possible to define the reliable prediction of individual earthquakes so narrowly that success is impossible. For instance, in Main's level 4 he refers to predictions with such precision and accuracy that a planned evacuation can take place. None of the contributors has suggested that this is a possibility, and I agree with Wyss that using this straw man as the standard will not lead to a useful debate. However, Main's levels 2 and 3 may lead to socially useful tools regardless of whether we call them predictions or probabilistic forecasts. As Main's extremely accurate short-term predictions are impossible, the public should neither expect to be saved from calamity by such predictions nor support research based on this expectation. However, further research into earthquake prediction may well bring real social benefits even if they are less spectacular than the vision of huge populations in mass exodus. As discussed by Richard Andrews 1, head of the California Office of Emergency Services, earthquakes are the one natural disaster that currently allows for no advance warning. Storms approach, fires grow, floods migrate after large rainfalls, but earthquakes can turn a perfectly normal day into a disaster in seconds. So, if we can make low-probability, short-term forecasts (such as those currently based on foreshock and aftershock models), what can society do with them? Raising awareness There are a number of low cost actions that can have large payoffs if a damaging earthquake occurs. Often earthquake preparedness plans are not maintained as real world pressures focus attention onto other problems. Low probability warnings can serve as reminders, like fire-drills, to update plans and review the state of preparedness. For instance, childcare facilities might check their first aid supplies and review the parents' emergency contact numbers. Companies might service their emergency generators. Such actions can be very valuable, even if the predicted event comes later than expected 2. Many hospitals now store medical supplies offsite in order to make more efficient use of their main buildings. However, this can create problems if an earthquake simultaneously causes casualties and cuts off transportation to the storage areas. Under a low probability warning, additional supplies can be moved to the hospital at little cost. Some unusual industrial processes, such as handling nuclear fuel rods, may be more difficult during an earthquake and can be put off until a time of lower risk. Low probability warnings also have an advantage over the high-probability, deterministic predictions that Main gave as one end result. One frequent concern about earthquake prediction is the cost of false alarms. Extreme actions like evacuations and shutting down businesses have great economic and social costs and thus false alarms are very troubling. In contrast, low probability forecasts temporarily focus public attention on reasonable preparedness activities and have not caused problems when carefully carried out in California. The warnings issued by the California state government include recommendations for specific actions, giving the public a method of dealing with the advisories without causing problems. A need to know Improvements in these low probability predictions might come from a continued search for precursors.
Geller 3 suggests that this search has been vigorous but, at least in the US, the large networks of diverse instrumentation dreamed of during the optimism of the 1960s 4 were never realized. The result is that we have few records of the strain and electromagnetic fields very close to earthquake ruptures. Current strain records suggest that strain records B-46

66 cannot observe the earthquake source preparation process from outside the rupture zone 5, but without data from within the rupture zone it is difficult to say that no precursors will be found. Even knowing that earthquake prediction is impossible would be useful. Amateur scientists will continue to predict earthquakes and, without a swift, knowledgeable response from the scientific community, these predictions will do more harm than good 6. Proving that earthquakes are truly unpredictable will help us deal with the problems posed by less scientific approaches. However, our current understanding of earthquake physics cannot prove this point. For instance, the majority of contributions to this debate have discussed self-organized criticality models, but there is no agreement on what they imply for earthquake prediction or whether they are a good explanation for earthquakes (see contributions from Per Bak, David Bowman & Charles Sammis, and Chris Scholz). Testing studies As highlighted by Geller and Wyss, an important concern is the quality of earthquake prediction research. As Geller points out, we must be more careful to separate hypothesis development from hypothesis testing. Earthquake prediction research is dominated by case studies, which are good for hypothesis development, but we often lack the global studies that are necessary for hypothesis tests. As also noted by Geller, Wyss cites that some earthquakes are preceded by increased seismicity, some by quiescence, and some by neither. Viewed as case studies, this has led to the development of both activation and quiescence precursors. But, viewed as a global hypothesis test, this assortment of differing observations suggests completely random behaviour 7. Unless we can separate out when to expect each behaviour a priori, such precursors are useless. A similar problem currently exists with those proposing that earthquakes can be predicted with 'time to failure analysis', a version of the activation hypothesis with its roots in materials science. While many case studies have been presented, these are all hypothesis development. We now need a good hypothesis test but such studies are unfortunately rare. Thus we need some way to encourage more researchers to undertake these critical tests. Certainly, earthquake prediction is extremely difficult, but it is possible that we will be able to improve our ability to make low-probability, short-term forecasts, and these may be much better for society than the high-probability ones that are most likely impossible. The trick will be to improve the quality of both the data collected, particularly in the near-source region, and the work done with it. References 1. Andrews, R. The Parkfield earthquake prediction of October 1992: the emergency services response. Earthquakes and Volcanoes 23, (1992). 2. Michael, A. J., Reasenberg, P., Stauffer, P. H. & Hendley, J. W., II. Quake forecasting - an emerging capability. USGS Fact Sheet , 2 (1995). 3. Geller, R. J. Earthquake prediction: a critical review. Geophys. J. Int. 131, (1997). 4. Press, F. et al. Earthquake prediction: a proposal for a ten year program of research. Ad Hoc Panel on Earthquake Prediction. (Office of Science and Technology, Washington, D.C., 1965). 5. Johnston, M. J. S., Linde, A. T., Gladwin, M. T. & Borcherdt, R. D. Fault failure with moderate earthquakes. Tectonophysics 144, (1987). 6. Kerr, R. A. The lessons of Dr. Browning. Science 253, (1991). 7. Matthews, M. V. & Reasenberg, P. A.
Statistical methods for investigating quiescence and other temporal seismicity patterns. Pure Appl. Geophys 126, (1988). B-47
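The 'low probability, short term' forecasts Michael describes (for example, those based on foreshock statistics) boil down to a small conditional probability that a larger event will follow within a fixed window. A minimal sketch of that arithmetic follows; the per-event foreshock probability and the three-day window are generic assumptions for illustration, not values from any official California advisory.

def advisory_probability(p_per_event, n_events):
    """P(at least one candidate shock is a foreshock), assuming independence."""
    return 1.0 - (1.0 - p_per_event) ** n_events

# Assumed, generic numbers: each moderate shock has ~5% chance of being a
# foreshock of something larger within 3 days; a small swarm of 3 such shocks.
p = advisory_probability(0.05, 3)
print(f"advisory: roughly {100 * p:.0f}% chance of a larger event within 3 days")

A number of this size justifies the low-cost actions listed above (checking supplies, deferring unusual operations) but clearly not evacuation, which is Michael's point.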

67 Sociological aspects of the prediction debate by Robert Geller (25 March 1999) The question at the heart of this debate appears to be whether earthquake prediction should be recognised as a distinct and independent research field, or whether it is just one possible research topic in the general field of study of the earthquake source process. As there are no known grounds for optimism that reliable and accurate earthquake prediction (as defined in my first article) can be realized in the foreseeable future, the case for the latter position appears clear-cut. As a consequence there is no obvious need for specialised organisations for prediction research. Besides the benefits that always accrue from pruning deadwood, abolition of such organisations would force prediction proponents and critics to confront each other in common forums, thereby speeding the resolution of the controversy. Paradigms Lost? A specialized community of scientists, which has its own journals, meetings, and paradigms, is the arbiter of what is acceptable in its own field 1. Viewed in sociological terms, such groups strive for recognition of their authority from the broader scientific community. In the long-run this recognition is dependent on whether a community's methods and theories can successfully explain experiments or observations. In the short- and intermediate-term however, subjective and sociological factors can lead to recognition being accorded to scientific communities whose paradigms are lacking in merit, or to the needless prolonging of controversies. Some revisionist historians of science have recently called attention to these sociological aspects of scientific research (in discussions commonly referred to as 'science wars'). While physical theories are certainly more than arbitrary social conventions, working scientists must admit that there may be room for improvement of present methods for resolving scientific disputes. The earthquake prediction debate provides an example of how sociological factors can impede the resolution of a scientific controversy. Cold fusion: Case Closed Cold fusion is a case where current methods for resolving controversies worked reasonably well 2. Cold fusion proponents attempted to set up all the trappings of a genuine research field (specialized research institutes, conferences, journals, funding programs), but once the underlying experiments were shown to be unreliable, the cold fusion enterprise quickly collapsed. This episode was basically a success story for science, although relatively large costs were incurred in the evaluation process before cold fusion was rejected 2. One reason the controversy could be efficiently resolved was that much of the debate was carried out in the open, for example at meetings of scientific societies or in scientific journals. Consequently the largely positive conclusions reached by cold fusion 'believers' at their own specialized conferences were not accorded credence by the scientific community as a whole. Ten years after the first public cold fusion claims, a small band of cold fusion proponents continues to hold out (New York Times, 23 March 1999). Until fairly recently international cold fusion conferences were still being held 3. Nevertheless, the cold fusion community has clearly failed to convince the scientific community as a whole of the legitimacy of its claims and methods. Cold fusion is typical, rather than unique. 
In all episodes of 'pathological science' there are some credentialed scientists who hold out indefinitely in support of generally discredited theories 4. Debates are resolved when the mainstream scientific community decides that one side or the other has nothing new to say and treats the discussion B-48

68 as effectively closed, barring truly new data. Perhaps it is time to consider whether the prediction debate has reached this point. Chronic Problems in Geoscience Geoscience is an observational field and controversies are harder to resolve than in more experimental disciplines. For example, Wegener's early 20th century evidence for continental drift was widely disregarded because of objections to the proposed driving mechanism 5. It was not until evidence from paleomagnetism, marine geophysics and seismology became clear-cut that the geoscience community generally embraced plate tectonics, of which continental drift is one consequence. The dramatic turnabout that led to the acceptance of continental drift has perhaps made geoscientists wary of resolving other controversies, lest they later be proven wrong. But all such decisions are inherently made on an interim basis, and controversies can always be reopened if new data are obtained. Allowing controversies such as the earthquake prediction debate to remain open indefinitely wastes time and energy, thereby slowing scientific progress. Ironically, the advent of plate tectonics was viewed in the late 1960s and early 1970s as reason for optimism about earthquake prediction 6. This was not wholly unreasonable, as plate tectonics explains why large earthquakes are concentrated along plate boundaries, and also the direction of earthquake slip. Unfortunately, we now know, as noted by Jackson, that plate tectonics does not allow either short-term or long-term prediction with success beyond random chance (although some controversy still lingers; see Scholz and Jackson). Deconstructing the debate On the surface, the central question in Nature's current prediction debate has been how much funding should be allocated to 'prediction research'. At the extremes, Wyss says as much as is now given to research in astrophysics, while I say none, except through the normal peer-review process; the other debaters hold positions between these. Wyss and I reach diametrically opposite conclusions despite our agreement that there are no immediate prospects for reliable and accurate prediction. The reason appears to be that Wyss's implicit starting point is that earthquake prediction is a legitimate scientific research field, and should be funded as such. On the other hand, I argue that prediction research is in principle a perfectly legitimate research topic within the field of study of the earthquake source process (although much prediction research is of too low a quality to warrant funding), but that it is not a legitimate research field in its own right. One hallmark of a research field is the existence of a widely recognised journal. It is interesting to note that the journal Earthquake Prediction Research ceased publication in 1986 after only 4 volumes. Resolving the debate My point of view leads to a number of specific conclusions. One is that discussion of 'prediction research' at scientific meetings should be held together with all other talks on the earthquake source process, rather than off in its own room, attended only by prediction 'believers'. This might make life unpleasant for everyone in the short run, as it would force prediction proponents and critics into head-on confrontations, but in the long run such discussions, although sometimes painful for all concerned, would be invaluable for resolving the prediction controversy.
Holding prediction and earthquake source sessions in the same room at the same time would also encourage the development of common terminology, and would lead to more rapid dissemination of new research results. The major international body for seismology is the International Association of Seismology and Physics of the Earth's Interior (IASPEI). One of the working groups under the IASPEI is the 'Subcommission on Earthquake Prediction'. This and similar bodies were founded 20 or 30 years ago at a time when there was more optimism about prospects for prediction than exists at present6. The need for such bodies should be re-examined in light of current knowledge of the difficulties besetting prediction research. Even if such bodies were not abolished, their terms of reference ought to be redefined to reflect current scientific knowledge. B-49

69 I emphasise that I have no intention of criticising the officers or individual members of the IASPEI Subcommission (although I don't share some of their scientific views). Rather my point is that the very existence of a separate body for 'prediction research' is an impediment to scientific progress, as it tends to cleave 'prediction research' apart from work on the seismic source in general. There are many other prediction organisations whose continued existence might usefully be reviewed. Among these are the various bodies associated with the earthquake prediction program in Japan (see section 5.3 of ref. 6), and the US National Earthquake Prediction Evaluation Council, which endorsed the unsuccessful Parkfield prediction (see section 6 of ref. 6). The European Seismological Commission's Subcommission for Earthquake Prediction Research is another organisation that might merit review. Just as war is too important to be left to the military, earthquake prediction should not be left only to prediction proponents and ignored by the rest of the seismological community. Unfortunately this is a generally accurate, albeit somewhat oversimplified, description of the present situation. I feel that if special organisations for earthquake prediction were abolished, thereby forcing the prediction debate into the open, it would be possible to achieve some resolution relatively soon. However, unless this is done, the earthquake prediction debate appears doomed to linger in its present form almost indefinitely. Anyone comparing my articles in this debate to that of Macelwane 7 in 1946 will be struck by how little has changed. Let us hope that seismologists in 2049 will not be making similar comments. References 1. Kuhn, T.S., The Structure of Scientific Revolutions. 2nd ed. (University of Chicago Press, Chicago, 1970). 2. Huizenga, J.R. Cold Fusion: The Scientific Fiasco of the Century. (University of Rochester Press, Rochester, 1992). 3. Morrison, D.R.O. Damning verdict on cold fusion. Nature 382, 572 (1996). 4. Langmuir, I. Pathological Science. Physics Today 42(10), (1989). 5. Menard, H.W. The Ocean of Truth. (Princeton University Press, Princeton N.J., 1986). 6. Geller, R.J. Earthquake prediction: a critical review. Geophys. J. Int. 131, (1997). 7. Macelwane, J.B. Forecasting earthquakes. Bull. Seism. Soc. Am. 36, 1-4 (1946). (reprinted in Geophys. J. Int. 131, , 1997). B-50

70 Comments by Max Wyss (25 March 1999) In his contribution to this debate of 11th March (week 3), Geller makes a significant error. Contrary to his statement "increased moment release [more small earthquakes than usual]", increased moment release is mostly due to large (M7) and medium magnitude (M6) earthquakes, not small ones. 1-5 References 1. Varnes, D.J., Predicting earthquakes by analyzing accelerating precursory seismic activity. Pageoph, 130, (1989). 2. Sykes, L.R. & Jaume, S.C. Seismic activity on neighboring faults as a long-term precursor to large earthquakes in the San Francisco Bay area. Nature, 348, (1990). 3. Jaume, S.C. & Sykes, L.R. Evolution of moderate seismicity in the San Francisco Bay region, 1850 to 1993: Seismicity changes related to occurrence of large and great earthquakes. J. Geophys. Res. 101, (1996). 4. Bufe, C.G., Nishenko, S.P. & Varnes, D.J. Seismicity trends and potential for large earthquakes in the Alaska- Aleutian region. Pageoph, 142, (1994). 5. Sornette, D. & Sammis, C.G. Complex critical components from renormalization group theory of earthquakes: Implications for earthquake prediction. J. Phys. France 5, (1995). B-51
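Wyss's correction can be checked with the standard moment-magnitude relation, log10 M0 = 1.5 Mw + 9.1 (M0 in newton-metres): moment grows by a factor of roughly 30 per magnitude unit, so regional moment release is dominated by the few largest events. A short check of the arithmetic (assuming only that standard relation):

def seismic_moment(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori relation)."""
    return 10.0 ** (1.5 * mw + 9.1)

ratio = seismic_moment(7.0) / seismic_moment(5.0)
print(f"one M7 event carries the moment of ~{ratio:.0f} M5 events")
# -> ~1000, so 'increased moment release' is not the same as 'more small earthquakes'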

71 Stress-forecasting: an apparently viable third strategy by Stuart Crampin (25 March 1999) All discussions so far have referred (perhaps not surprisingly) to the properties of earthquakes, their times, locations, nucleation mechanisms, physics of the source, possible precursors, etc. I think this will lead nowhere. Earthquakes are extraordinarily varied and impossible to average. Perhaps the only feature of earthquakes that can be relied on is that they release a large amount of stress which, because rock is weak, has necessarily accumulated over a large volume of rock. If this build-up of stress can be monitored, then the time and magnitude of the earthquake when fracture criticality is reached can be 'stress-forecast'. I suggest that we already know how to do this. The effects have been seen with hindsight for eight earthquakes worldwide, and the time and magnitude of an M=5 earthquake have been successfully stress-forecast. Let me try to introduce a little realism into the debate. Earthquakes are complex. They vary: with magnitude and direction of stress-field; shape of the fault planes; orientation of fault plane with respect to stress field; presence or absence of fluids; nature of fluids; fluid-pressure; asperities on fault plane; debris on fault plane; presence or absence of fault gouge; presence or absence of water channels, pressure seals; height of water table; temperature; state of Earth tides; state of ocean tides; air pressure; local geology; other earthquakes; and so on and so on. Each of these phenomena could in certain circumstances have major effects on time, place, and magnitude of impending earthquakes. Consequently, no two earthquakes are identical (although seismic records, being dominated by the ray path, may be very similar). To understand, model, and accurately predict the behaviour of such a source requires intimate knowledge of every grain of the fault gouge and every microcrack in the rockmass. This might be possible in theory but in practice is totally unknowable by tens of orders of magnitude (and similarly beyond the capacity of any existing or foreseeable computer to model or manipulate, again by tens of orders of magnitude). Earthquake prediction is not just a difficult subject where more knowledge or funding is required; it is out of our reach by astronomical-sized factors. This is the reason why techniques that depend on any feature of the source, or on source-induced precursors, understanding of nucleation processes, etc., are not likely to succeed. There is just far too much heterogeneity, by once more tens of orders of magnitude. It is pretty clear by now that there is no magic formula waiting to be discovered, as some of the discussions seem to imply. So Bob Geller's first entry in this debate is correct: on the basis of looking at the earthquake source, prediction of the time, place, and magnitude is practically impossible. There is just far too much possible variety. Consequently, it is hardly surprising that the classical earthquake prediction of time, magnitude and place of future earthquakes within narrow limits seems impossible. The claims of success listed by Max Wyss seem extremely shaky. One may wish for something to turn up, as some in this debate have done, but I suggest that it is clear from any contemplation of the enormous complexity and variability of the earthquake source that such hopes are futile and not worth wasting time or spending money on. Can we do anything? I believe we can, but not by examining the source.
Rock is weak to shear stress, which means that the stress released by earthquakes has to accumulate over enormous volumes of rock. Perhaps hundreds of millions of cubic kilometres before an M=8 earthquake. There is mounting direct and indirect evidence 1-4 that changes in seismic shear-wave splitting (seismic bi-refringence) can monitor the necessary build up of stress almost anywhere in the vast stressed rockmass before the earthquake can occur. Most rocks in the crust contain stress-aligned fluid-saturated grain-boundary cracks and pores 1. These are the most compliant elements of the rockmass and their geometry is modified by the build up of stress 2,3,5. Variations in seismic shear-wave splitting reflect changes of crack geometry, and hence can monitor the build-up of stress before earthquakes 2 and the release of stress at the time of (or in one case shortly before) the earthquake. Such changes B-52

72 have been identified with hindsight before three earthquakes in the USA, one in China 3,5, and now routinely before four earthquakes in SW Iceland 6 (please see these references for further details of these studies). The interpretation of these changes in shear-wave splitting is that stress builds up until the rock reaches fracture criticality, when the cracking is so extensive that there are through-going fractures (at the percolation threshold) and the earthquake occurs 2,6. The rate of increase of stress can be estimated by the changes in shear-wave splitting, and the level of fracture criticality from previous earthquakes. When the increasing stress reaches fracture criticality, the earthquake occurs. Magnitude can be estimated from the inverse of the rate of stress increase 6: for a given rate of stress input, if stress accumulates over a small volume the rate is faster but the final earthquake smaller, whereas if stress accumulates over a larger volume the rate is slower but the earthquake larger. As of 17th March 1999, one earthquake has been successfully stress-forecast in real time, giving the time and magnitude of an M=5 earthquake in SW Iceland 6. Non-specific stress-forecasts were issued to the Icelandic National Civil Defence Committee on the 27th and 29th of October 1998. The final time-magnitude window (a window is necessary because of uncertainties in estimates), issued on 10th November 1998, was an M>=5 soon or, if stress continued to increase, an M>=6 before the end of February 1999. Three days later (13th November 1998), there was an M=5 earthquake within 2 km of the centre of the three stations where changes in shear-wave splitting were observed. We claim this is a successful real-time stress-forecast, as anticipated from the behaviour noted with hindsight elsewhere. Shear-wave splitting does not indicate potential earthquake locations, but analysis of local seismicity by Ragnar Stefánsson correctly predicted the small fault on which the stress-forecast earthquake occurred. It appears that monitoring the build-up of stress before earthquakes can forecast the time and magnitude of impending earthquakes. Three comments about stress-forecasting: 1. Stress-forecasting seems to give reasonable estimates of time and magnitude but gives little or no information about location, where perhaps Bob Geller's stochasticism takes over. However, as Chris Scholz says, "earthquake prediction is always local". If it is known that a large earthquake is going to occur (that is, when there has been a stress-forecast), local information may be able to indicate the fault that will break, as happened in Iceland. 2. Stress-forecasting was possible in SW Iceland only because of the unique seismicity of the onshore transform zone of the Mid-Atlantic Ridge, where nearly continuous swarm activity provided sufficient shear-waves to illuminate the rockmass. Routine stress-forecasting elsewhere, without such swarm activity, would require controlled-source seismology. 3. The phenomena we are observing are not precursors. Apart from the decrease in stress at the time of the earthquake, the effects are independent of the earthquake source parameters. Shear-wave splitting monitors a more fundamental process, the effects of the build-up of stress on the rockmass, which allows the estimation of the rate of increase and the time when fracture criticality is reached.
For reasons not fully understood, but probably to do with the underlying critical nature of the non-linear fluid-rock interactions 4,7, the effect of the stressed fluid-saturated microcracks on shear-waves is remarkably stable 1-3,5. We see exactly the same behaviour before the 1996 Vatnajökull eruption in Iceland as we see before earthquakes. About 1 cubic kilometre of magma was injected into the crust over a five-month period. The major difference from an earthquake is that the stress was not abruptly released by the eruption, as it would have been by an earthquake; following the eruption the stress relaxed over a period of several years as it was accommodated by a spreading cycle of the Mid-Atlantic Ridge. I suggest that monitoring the build-up of stress is a third strategy for predicting earthquakes beyond the two - detecting precursors, and detailed modelling of earthquake physics - suggested by Dave Jackson. Like many features of shear-wave splitting, it appears to be comparatively stable and to have considerable accuracy in forecasting time and magnitude. scrampin@ed.ac.uk. B-53

73 References 1. Crampin, S. The fracture criticality of crustal rocks. Geophys. J. Int. 118, (1994). 2. Crampin, S. & Zatsepin, S. V. Modelling the compliance of crustal rock, II - response to temporal changes before earthquakes. Geophys. J. Int. 129, (1997). 3. Crampin, S. Calculable fluid-rock interactions. J. Geol. Soc. (in the press) (1999). 4. Crampin, S. Shear-wave splitting in a critical crust: the next step. Rev. Inst. Fran. Pet. 53, (1998). 5. Crampin, S. Stress-forecasting: a viable alternative to earthquake prediction in a dynamic Earth. Trans. R. Soc. Edin., Earth Sci. 89, (1998). 6. Crampin, S., Volti, T. & Stefansson, R. A successfully stress-forecast earthquake. Geophys. J. Int. (in the press) (1999). 7. Crampin, S. Going APE: II - The physical manifestation of self-organized criticality. 67th Ann. Int. Mtg. SEG, Dallas, Expanded Abstracts, 1, (1997). B-54
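Reduced to its operational core, the stress-forecasting procedure sketched above fits a trend to normalized shear-wave splitting time delays, extrapolates it to an assumed fracture-criticality level to estimate the time of the impending event, and reads the magnitude off the (inverse) rate of increase. The following sketch only illustrates that logic; the synthetic measurements, the threshold value and the rate-to-magnitude scaling are invented placeholders, not the calibrations used in Crampin's papers.

import numpy as np

# Synthetic normalized shear-wave splitting time delays (ms/km) vs. time (days),
# standing in for real band-1 measurements that increase as stress builds up.
days = np.arange(0, 120, 10, dtype=float)
delays = 4.0 + 0.02 * days + np.random.default_rng(1).normal(0, 0.1, days.size)

slope, intercept = np.polyfit(days, delays, 1)        # linear build-up of stress

CRITICALITY_LEVEL = 7.0        # assumed fracture-criticality threshold (ms/km)
t_critical = (CRITICALITY_LEVEL - intercept) / slope   # extrapolated failure time

# Assumed calibration: larger events accumulate stress more slowly, so the
# magnitude estimate decreases with the logarithm of the observed rate.
magnitude_estimate = 5.0 - np.log10(slope / 0.02)

print(f"rate of increase: {slope:.3f} ms/km per day")
print(f"forecast time of fracture criticality: day {t_critical:.0f}")
print(f"rough magnitude estimate: M {magnitude_estimate:.1f}")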

74 Comments by Zhongliang Wu (1 April 1999) Testing hypotheses is an essential part of earthquake prediction. The 'game rule' associated with this test is especially important because it leads to the criteria for accepting or rejecting a statistical hypothesis related to earthquake prediction. Various studies have been carried out to measure prediction efficiency, to formulate hypothesis tests, and to verify prediction schemes (for example refs 1-5). Up to now, however, the 'game rules' have not paid enough attention to an important problem, specifically that earthquakes are different from one another. In the statistical test, it often happens that all earthquakes within a magnitude-space-time range are treated as the same, which has no sound geophysical basis. The reason for making this argument is that earthquakes are different in their source process and tectonic environment, and can be divided into several classes. A certain precursor will be valid only for a certain class of earthquakes. For example, if tidal triggering is regarded as a potential earthquake precursor, then caution must be taken that such a triggering mechanism is only significant for earthquakes of normal-fault type. In contrast, for thrust and strike-slip earthquakes, such a triggering effect is not significant 6. There are three general cases associated with earthquake prediction: successful predictions, false-alarms, and failures-to-predict. A 'game rule' is mainly a comparison of the performance of a prediction approach with that of random prediction, according to the normal rate of seismicity 3,4. If earthquakes can be classified into different categories, then false-alarms and failures-to-predict have a different physical significance. For any specific precursor, failure-to-predict some earthquakes is inevitable because the precursor under consideration is not valid for all classes of earthquakes. In the study of potential precursors, therefore, an appropriate research strategy is to suppress the false-alarms and to tolerate the failures-to-predict*. The classification of earthquakes by the physics of their seismic source is far from complete, and more detailed studies on the source process of earthquakes and seismogenesis are needed. Furthermore, we do not know the exact role of slow earthquakes 7,8 and aseismic slip 9 in the dynamics of earthquakes. In the future, seismologists may provide an earthquake catalogue with a classification of earthquake sources for the statistical testing of earthquake prediction schemes. Although at present we do not have such a catalogue, it is clear that the assumption that all earthquakes are the same will lead to a harsh 'game rule' in evaluating the performance of an earthquake prediction scheme. An extreme example is Geller's claim 10 that time and effort need not be wasted on evaluating prediction schemes that cannot outperform Kagan's 'automatic alarm' strategy 11. If earthquakes were all the same, then this claim would be absolutely reasonable. However, from the perspective of the classification of earthquakes, such a stance might lead to the loss of some useful information. Geller et al. 12 proposed that the question of precursor testing can be addressed using a Bayesian approach, where each failed attempt at prediction lowers the a priori probability for the next attempt.
In this case, if all earthquakes are treated as the same and no difference is made between failures-to-predict and false-alarms, the 'game rule' will be extremely harsh, and many potential precursors will be rejected by this 'razor'. From this point of view, it is too early to accept the conclusions that the search for earthquake precursors has proved fruitless and that earthquakes cannot be predicted 11,12. At the other extreme, the inattention of some proponents of earthquake prediction to the differences between earthquakes, and their attempts to 'improve' the performance of the proposed precursors (that is, to decrease both the B-55

75 rate of false-alarms and the rate of failures-to-predict) have led in recent years to too many declarations of prediction, which in turn have led to too many meaningless false-alarms. It is interesting that one of the pioneering works in seismology was the classification of earthquakes by R. Hoernes in the nineteenth century; many years later, we have almost forgotten this 'classical' problem and treat all earthquakes alike in our statistical tests. On the other hand, a constructive contribution of the present 'game rule' is that it goes toward an objective test, which is important in earthquake prediction studies. Even if we have a catalogue with appropriate classification, some of the basic principles will still be valid. References 1. Wyss, M. Evaluation of Proposed Earthquake Precursors. (American Geophysical Union, Washington, D. C., 1991). 2. Wyss, M. Second round of evaluation of proposed earthquake precursors. Pure Appl. Geophys. 149, 3-16 (1997). 3. Stark, P. B. A few statistical considerations for ascribing statistical significance to earthquake predictions. Geophys. Res. Lett. 23, (1996). 4. Stark, P. B. Earthquake prediction: the null hypothesis. Geophys. J. Int. 131, (1997). 5. Kagan, Y. Y. Are earthquakes predictable? Geophys. J. Int. 131, (1997). 6. Tsuruoka, H., Ohtake, M. & Sato, H. Statistical test of the tidal triggering of earthquakes: contribution of the ocean tide loading effect. Geophys. J. Int. 122, (1995). 7. Kanamori, H. & Hauksson, E. A slow earthquake in the Santa Maria Basin, California. Bull. Seism. Soc. Am. 85, (1992). 8. Kanamori, H. & Kikuchi, M. The 1992 Nicaragua earthquake: a slow tsunami earthquake associated with subducted sediments. Nature 361, (1993). 9. Dragoni, M. & Tallarico, A. Interaction between seismic and aseismic slip along a transcurrent plate boundary: a model for seismic sequences. Phys. Earth Planet. Interiors 72, (1992). 10. Geller, R. J. Earthquake prediction: a critical review. Geophys. J. Int. 131, (1997). 11. Kagan, Y. Y. VAN earthquake predictions - an attempt at statistical evaluation. Geophys. Res. Lett. 23, (1996). 12. Geller, R. J., Jackson, D. D., Kagan, Y. Y. & Mulargia, F. Earthquakes cannot be predicted. Science 275, (1997). * Such a research strategy does not conflict with the ethics of seismological study. To tolerate the failures-to-predict does not mean that seismologists are irresponsible. Comparing the study of earthquake prediction to the study of medicine, it is unreasonable to require a technique or instrument to be able to diagnose all diseases. Similarly, it is not rational to require that an earthquake precursor is valid for all kinds of earthquakes. A decrease in failures-to-predict can be achieved by discovering new precursors which are valid for other kinds of earthquake. B-56
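Wu's argument amounts to scoring a precursor only within the class of earthquakes it is claimed to apply to, so that failures-to-predict outside that class do not count against it while false alarms still do. A hedged sketch of such class-aware bookkeeping follows; the prediction log and class labels are hypothetical.

from collections import defaultdict

# Hypothetical prediction log: (earthquake class, alarm issued?, event occurred?)
records = [
    ("normal-fault", True,  True),    # hit
    ("normal-fault", True,  False),   # false alarm
    ("normal-fault", False, True),    # failure-to-predict within the class
    ("thrust",       False, True),    # miss outside the claimed class
    ("strike-slip",  False, True),    # miss outside the claimed class
]

def score_by_class(records):
    """Hits, misses and false alarms, kept separate for each source class."""
    stats = defaultdict(lambda: {"hits": 0, "misses": 0, "false_alarms": 0})
    for cls, alarm, event in records:
        if alarm and event:
            stats[cls]["hits"] += 1
        elif alarm and not event:
            stats[cls]["false_alarms"] += 1
        elif event:
            stats[cls]["misses"] += 1
    return dict(stats)

for cls, s in score_by_class(records).items():
    print(cls, s)

A tidal-triggering precursor, for instance, would then be judged on its normal-fault record alone, which is the 'suppress false alarms, tolerate failures-to-predict' strategy described in the text.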

76 Comments by Christopher Scholz (1 April 1999) Geller and Jackson have both reproached me for not citing all of the Jackson and Kagan papers in my earlier statement. Space requirements did not allow for a fuller discussion at that time. The 'seismic gap' hypothesis is nothing more than a restatement of Reid's elastic rebound theory. Is it incorrect? This theory applies only to system-size events, whose size, rather than being undefined as suggested by Jackson, is defined in the case of subduction zones by the seismically coupled down-dip width, which can be determined from the areal extent of large earthquakes in the region. The problem is that this is geographically quite variable, ranging from 50 km (M 7.3) to 200 km (M 8.4). So arbitrarily assuming a constant value of 7.0 (ref. 1) or 7.5 (ref. 2) will always include some events too small to qualify, this being doubly so because the Gutenberg-Richter relation ensures that the catalogue will be dominated by events near the lower size cut-off. Hence with that procedure one can expect too many events in 'safe' zones, which was the result of refs 1 and 2, although, as expected, there were fewer discrepancies when the higher magnitude cutoff was used. This was the flaw I pointed out in my first contribution to these debates. Thus the elastic rebound theory was not properly tested. In their more recent study 3, they found, in contrast, fewer events than predicted by Nishenko 4. But here the failure was in a different part of the physics: the assumptions of recurrence times made by Nishenko. These recurrence times are based on very little data, no theory, and are unquestionably suspect. But this failure needs to be separated from a failure of the elastic rebound theory, which would lead us to contemplate weird physics. When conducting such statistical tests, it is important to remain aware of what, in the physics, one is testing. References 1. Kagan, Y.Y. & Jackson, D.D. Seismic gap hypothesis: ten years after. J. Geophys. Res. 96, (1991). 2. Jackson, D.D. & Kagan, Y.Y. Reply to Nishenko and Sykes. J. Geophys. Res. 98, (1993). 3. Kagan, Y.Y. & Jackson, D.D. New seismic gap hypothesis: five years after. J. Geophys. Res. 100, (1995). 4. Nishenko, S.P. Circum-Pacific seismic potential: Pure Appl. Geophys. 135, (1991). B-57
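Scholz's point that a fixed magnitude cut-off leaves the catalogue dominated by events near that cut-off follows directly from the Gutenberg-Richter relation: with a b-value near 1, the fraction of M >= 7.0 events smaller than M 7.5 is 1 - 10^(-0.5), roughly two-thirds. A one-line check:

b = 1.0                                     # typical Gutenberg-Richter b-value
frac_below_7_5 = 1.0 - 10.0 ** (-b * 0.5)   # share of M>=7.0 events with M<7.5
print(f"{100 * frac_below_7_5:.0f}% of M>=7.0 events lie below M7.5")  # ~68%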

77 The need for objective testing by Robert Geller (1 April 1999) Wyss's letter in week 5 of this debate claims that I made a 'significant error' in my week 3 article. I explain here why this claim should not be accepted, placing my rebuttal in a more general context. Testable algorithms required All readers have undoubtedly studied Newton's law of gravitational attraction: F = G m1 m2 / r^2. It took a great scientist to discover this law, but any competent scientist can use it. Because this law is quantitative, it can easily be checked against observed data, thus permitting it to be verified or disproved. This precision is what led to the discovery that it is only approximately correct, as significant discrepancies were found for cases where relativistic effects are important. In contrast, prediction proponents provide retrospective 'case studies' that are represented as showing the existence of some precursory phenomenon before some particular earthquake. Unlike Newton's law, there is no formula or algorithm that can be objectively tested using other data. This shortcoming is crucial. Prediction proponents should, but do not, provide a 'predictor in the box' software package, with all parameters fixed. We could then input any data of the type considered by the method being tested (for example seismicity, geoelectrical, geodetic, etc.), using either real-time data or recorded data for regions other than that used to develop the algorithm. The software package would then generate predictions that could be tested against an intelligent null hypothesis such as the automatic alarm strategy 1. Wyss's criticism rebutted In his week 1 article, Wyss said that "some [earthquakes] are preceded by increased moment release during the years before them, and some are preceded by seismic quiescence." In my week 3 article I commented as follows. "Wyss's contribution to this debate cites both 'increased moment release' (more small earthquakes than usual) and 'seismic quiescence' (fewer small earthquakes than usual) as precursors. Thus it appears that any variability whatsoever in background seismicity can be claimed, after the fact, to have been a precursor. To determine whether these variations in seismicity levels are random fluctuations or real physical phenomena, objective testing of unambiguously stated hypotheses is required." Wyss claims that this is "a significant error", because "increased moment release is mostly due to large (M7) and medium magnitude (M6) earthquakes, not small ones." Here he has defined "large", "medium," and "small" in a specific way. The "significant error", if any, that I made was perhaps to be insufficiently precise in defining the size of earthquakes. Similar criticisms could be levelled at most of the articles in this debate, which are written in a quasi-journalistic style to make them easily accessible to readers. Needed: more statistics, less rhetoric Individual case studies, rather than objective tests of a hypothesis, are generally presented in support of the 'increased moment release' model. One recent study 2 attempts to make progress towards objective testing, but this B-58

78 study's null hypothesis does not appear realistic (see ref. 3), and the statistical testing does not appear to account for the effect of retroactively adjustable parameters (see ref. 4). Another recent study 5, which appears to have been carefully conducted, found that a random (Poisson) model outperformed the increased moment release model (sometimes also known as 'time-to-failure' analysis) by a factor of four for the particular dataset that was analyzed. (Note that Wyss cited ref. 2 in his Week 1 article, but did not cite ref. 5 in any of his three articles (week 1, 3, 5) to date.) In my week 3 article I said that many prediction proponents appeared to have fallen into 'The Gambler's Fallacy' of drawing conclusions from a retrospectively chosen unrepresentative subset of a much larger dataset. The hypotheses of 'seismic quiescence' and 'increased moment release' both appear to be examples of this pitfall. If their proponents disagree with me, they should attempt to refute me, not by quibbling over the definitions of 'small', 'medium', and 'large', but rather by objective statistical testing. References 1. Kagan, Y. VAN earthquake predictions: an attempt at statistical evaluation. Geophys. Res. Lett. 23, (1996). 2. Bowman, D.D., Ouillon, G., Sammis, C.G, Sornette, A. & Sornette, D. An observational test of the critical earthquake concept. J. Geophys. Res. 103, 24,359-24,372 (1998). 3. Stark, P.B. Earthquake prediction: the null hypothesis. Geophys. J. Int. 131, (1997). 4. Mulargia, F. Retrospective validation of the time association of precursors. Geophys. J. Int. 131, (1997). 5. Gross, S. & Rundle, J. A systematic test of time-to-failure analysis. Geophys. J. Int. 133, (1998). B-59
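Geller's 'predictor in the box' requirement is, in effect, that a candidate algorithm with frozen parameters be scored against an intelligent null such as Kagan's automatic-alarm strategy: stay alarmed for a fixed interval after every smaller event. The sketch below computes that baseline on a synthetic, purely random catalogue (all numbers are assumptions for illustration); on random data the probability gain comes out near 1 by construction, whereas on real, clustered seismicity this simple strategy already achieves a gain that any proposed precursor must beat.

import numpy as np

rng = np.random.default_rng(42)
T_DAYS = 3650
small_events = np.sort(rng.uniform(0, T_DAYS, 200))   # synthetic M>=5 catalogue
targets = np.sort(rng.uniform(0, T_DAYS, 10))         # synthetic M>=6 targets

def automatic_alarm_score(small_events, targets, window=10.0):
    """Fraction of targets caught, and of time spent in alarm, for the rule
    'stay alarmed for `window` days after every smaller event'."""
    caught = sum(np.any((t - small_events >= 0) & (t - small_events <= window))
                 for t in targets)
    # Fraction of time covered by at least one alarm window (merged intervals).
    covered, end = 0.0, -np.inf
    for s in small_events:
        start = max(s, end)
        end = max(end, s + window)
        covered += max(0.0, min(end, T_DAYS) - start)
    return caught / len(targets), covered / T_DAYS

hit_rate, alarm_fraction = automatic_alarm_score(small_events, targets)
print(f"automatic alarm: {100 * hit_rate:.0f}% of targets caught, "
      f"{100 * alarm_fraction:.0f}% of time in alarm "
      f"(probability gain ~ {hit_rate / alarm_fraction:.1f})")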

79 More agreement than division by Max Wyss (1 April 1999) As we all agree that we know little about how earthquakes initiate and how to predict them, it follows that we will study the problem and eventually reach a relatively satisfactory solution. The question is, will we do it with the significant financial support and expedience this humanitarian effort deserves, or will we force individual scientists to do it in their non-existent spare time? Definition of earthquake prediction The attempt to define earthquake prediction in such a narrow way (a time resolution of a few days) is failing, thus forcing it to be declared as generally impossible. This is a red herring. If I'm not mistaken, nobody in the debate has disagreed with the point, made by many, that there are significant benefits derived from intermediate- and long-term predictions, even if they are made with relatively low probabilities. In which case, let us see if we cannot make progress in formulating rigorous, yet consumer-friendly statements describing the time-dependent earthquake hazard in some locations. That is, predict earthquakes. Quality of research There are two types of influences that degrade the quality of research into earthquake prediction. Firstly, there are two emotional factors. Limelight seekers are attracted to this field and others are electrified into hasty work by the idea of coming to the rescue of the populace. But there is a second problem: lack of financial support. A researcher who explores a hypothesis exclusively after regular working hours is likely to present the 'best' results only, instead of exploring all the possible, but marginal, data sets that may exist. Thus, our weapons in the struggle for high quality work are threefold: 1. Funding the research at an adequate level such that the most capable scientists are attracted to this field; such that a researcher has the time to penetrate to the greatest depth allowed by a data set; and such that the necessary high quality data are available. 2. Rigorous peer review of research proposals. 3. Stringent reviews of journal articles. We should use all of these tools to foster high quality earthquake prediction research. Case histories are often all we have Very large earthquakes occur too infrequently to test hypotheses on how to predict them with the statistical rigor one would like (for example, L. Knopoff's contribution to this debate), and potential data sets for testing are further reduced by the need to separate different tectonic settings (see Z. Wu's contribution to this debate). In addition, most earthquakes occur far from existing dense instrumentation networks, making it impossible to gather data pertinent to most hypotheses. Thus sets of case histories in which the individual cases may number about a dozen will be all we will have for years to come, whether we like it or not. However, I do not see this as a reason to give up prediction research or rupture initiation studies altogether, as long as we have not exhausted the data available. As it is, we have hardly scratched the surface, because of lack of funding. B-60

80 Here we go again. "extensive prediction efforts in several countries in several eras have all failed" (Geller, this debate). Such exaggerations know no bounds. What "several eras" has the fledgling discipline of seismology seen? I'm utterly unimpressed by the quotes Geller used to support his assertion in week four of this debate, because these quotes stem from the years 1937 and 1946. No wonder satisfactory results concerning the problem of earthquake prediction were not achieved, although this problem was "attacked with every resource at their command," since they had essentially no resources, had not even classified earthquakes as to type and size, and had not yet understood the reason for earthquake ruptures on the planet. The most basic tool of seismologists is the seismograph network. Rudimentary networks first came into existence in a few locations in the 1930s. A worldwide network was installed in the mid-1960s, and anyone who wishes to analyze high-resolution earthquake catalogues produced by dense networks cannot start their data set before the 1980s, because up to that time the data were so poor. Thus the researchers around 1940, whom Geller quotes, had hardly an opportunity to catch a glimpse of the distribution of earthquakes in space, in time and as a function of size. They were in no position to conduct serious prediction research. They did not have even the most basic, let alone sophisticated tools. In addition, the reason for fault ruptures (earthquakes) on this planet was only discovered in the late 1960s. Only then did it become clear that the planet cools by moving heat, generated in its interior due to radioactive decay, by convection to the surface, where brittle plates are pushed past one another, generating earthquakes. Preoccupied with consolidating this discovery for about a decade, seismologists spent no time on prediction research and plans drawn up for such a program remained largely unimplemented (see A. Michael in this debate). It is clear that in the US there was never a serious research program for earthquake prediction. There did exist a thorough seismology program to detect and discriminate nuclear explosions in the USSR and China, which was very successful, because it attracted the best workers in the field, since it was well funded. Separation of sub-disciplines Geller seems preoccupied by separation of sub-disciplines. Most researchers and educators try to combat the barriers that constantly appear between sub-disciplines, but it is difficult to keep the channels of communication open. Of course the problem of earthquake prediction is intimately intertwined with those of the seismic source processes, of tectonics and self-organized criticality. In addition, laboratory experiments on rock fracture, computer simulation of faulting, crustal deformation measurements and modelling, as well as studies of ground water properties are all important for, and applicable to, problems in prediction research. It would be beneficial if, as Geller suggests, an audience of such wide expertise were in one room at a conference. For one thing Geller himself would then not make such elementary mistakes as to confuse increased seismic moment release with "more small earthquakes." However, as we all know, wide participation in specialist lectures at conferences is unrealistic. People cannot be forced to attend. To foster interactions between sub-disciplines, one must make the special effort of interdisciplinary retreat-meetings.
Thus, I disagree with Geller when he sees a need for organizational changes. The boundaries of sub-disciplines establish themselves and there is no particular evil associated with them. However, I agree that more frequent retreat-meetings with attendance by experts from a wide range of fields are needed to advance earthquake prediction research.

Conclusion

I conclude that the criticism of earthquake prediction research has some worthy targets: the low-quality work and the exaggerated claims that exist in this field. I hope we can reduce the originators of these problems to a small minority B-61

81 (they will never completely disappear). However, when the criticism takes on the character of a crusade, which tries to outlaw earthquake prediction research, many of us grow a bit tired of the "debate". B-62

82 Contributions by Didier Sornette (8 April 1999) [Note: Although this debate is now closed, the following contribution makes an interesting counterpoint to Ian Main's concluding remarks and so was held over for posting in this final week.]

Predicting earthquakes requires an understanding of the underlying physics, which calls for novel multidisciplinary approaches at a level never yet undertaken. Notwithstanding past efforts in several countries in the last decades, I fail to see that the scientific community has used the full potential of artificial/computational intelligence, statistical physics, super-computer modelling, and large-scale monitoring of a full spectrum of physical measurements, coupled together with more traditional seismological and geological approaches, to make a dent in the earthquake problem. What we have learned is that past failures in earthquake prediction reflect the biased view that it was a simple problem.

The alchemy of earthquakes

Paradoxes, misunderstandings and controversies often appear when restricted to the 'narrow' window of our present knowledge. Consider the example regarding the importance that Sir Isaac Newton attributed to alchemy as his primary research field, leading to the provoking statement by Geller in the first week of this debate that "Earthquake prediction seems to be the alchemy of our times". The lesson I personally take from this example is that Newton was fundamentally right to expect that physical processes could lead to the transmutation of one element into another. However, physics and technology were not at his time sufficiently advanced, and science had to wait for Becquerel and for the Curies to open the modern 'alchemy' (nuclear science) era. The question then boils down to the fact that Newton wasted his time pursuing a (valid) goal which was, however, out of his reach. Similarly, we need fundamentally new approaches for understanding what earthquakes are, but hopefully less time might be needed to understand what the 'alchemy of earthquakes' is, simply because we are so much better armed and science is progressing so much faster than ever before. I consider the understanding of earthquakes to be a requisite to the assessment of prediction potentials for two reasons. Simple 'black box' pattern recognition techniques have been tried repeatedly and have shown limited success, probably in part due to the poor quality and scarcity of the data. A fundamental understanding of earthquakes, not only of the source problem but of the full seismic cycles, is thus called for. Only such an understanding could lead us to a quantitative assessment of the potentials and limitations of earthquake prediction, as chaos and dynamical system theory have helped in understanding (some of) the limits of weather forecasting. We are very far behind meteorology for two reasons:
1. we still have very limited precise quantitative measurements of the many parameters involved;
2. the physical phenomena underlying earthquakes are much more intricate and interwoven, and we do not have a fundamental Navier-Stokes equation for the crust organization.
It is thus too early to state anything conclusive about the fundamental limitation of earthquake prediction. B-63

83 Mechano-chemistry

Earthquakes are indeed very poorly understood. The standard theory is based on the rebound theory of earthquakes formulated by Reid in 1910, which was later elaborated as a friction phenomenon by Brace and Byerlee in 1966, with many recent developments using Ruina-Dieterich-type laws. This textbook picture still poses many fundamental paradoxes, such as the strain paradox 1, the stress paradox 2, the heat flow paradox 3 and so on 4. Resolutions of these paradoxes usually call for additional assumptions on the nature of the rupture process (such as novel modes of deformations and ruptures) prior to and/or during an earthquake, on the nature of the fault, and on the effect of trapped fluids within the crust at seismogenic depths (see ref. 4 and references therein). There is no unifying understanding of these paradoxes. As recalled by Crampin in this debate, earthquakes depend on many geological and physical conditions. In particular, there is a lot of direct and indirect evidence for the prominent role of water, both mechanically (pore pressure) and chemically (recrystallization, particle effects, texture), and their probable interplay 4,5. There is growing recognition that mineral structures can form and deform at much milder pressures and temperatures than their pure equilibrium phase diagram would suggest, when in contact with water or in the presence of anisotropic strain and stress (ref. 5 and references therein). As an example, I have recently proposed 5 that water in the presence of finite localized strain within fault gouges may lead to the modification of mineral textures, involving dynamic recrystallization and maybe phase transformations of stable minerals into metastable polymorphs of higher free energy density. The interplay between mechanical deformation, activated chemical transformation and rupture opens new windows to look at earthquakes, beyond the (reductionist) mechanical paradigm.

Self-Organized Criticality

As mentioned by Bak in this debate, the SOC hypothesis has been suggested, on the one hand, on the basis of the observation of power law distributions, such as the Gutenberg-Richter law for earthquakes and the fault length distribution, and of the fractal geometry of sets of earthquake epicenters and of fault patterns, and on the other hand on the study of highly simplified models with somewhat similar scale-invariant properties. The most interesting aspect of SOC is its prediction that the stress field should exhibit long-range spatial correlations as well as important amplitude fluctuations. The exact solution of simple SOC models 6 has shown that the spatial correlation of the stress-stress fluctuations around the average stress is long range and decays as a power law with distance. Such models suggest that the stress fluctuations not only reflect but also constitute an active and essential component of the organizing principle leading to SOC. It is an intriguing possibility that the observed increase of long-range intermediate-magnitude earthquake activity prior to a strong earthquake 7,8 may be a signature of increasing long-range correlations. This theoretical framework supports the view developed by Crampin in this debate that stress monitoring on a large scale may be a good strategy. Two important consequences can be drawn from the SOC hypothesis. First, at any time, a (small) fraction of the crust is close to the rupture instability. 
Together with the localization of seismicity on faults, this leads to the conclusion that a fraction of the crust is susceptible to rupture, while presently being quiescent. The quantitative determination of the susceptible fraction is dependent on the specificity of the model and cannot thus be ascertained with precision for the crust. What is important, however, is that the susceptible part of the crust can be activated with relatively small perturbations or by modification of the overall driving conditions. This remark leads to a natural interpretation of triggered 9 and induced seismicity by human activity in the SOC framework 10. The second important but often ignored point is that, in the SOC picture, the crust is NOT almost everywhere on the verge of rupture and is not maintaining itself perpetually near the critical point. For instance, numerical simulations show that in discrete models made of interacting blocks carrying a continuous scalar stress variable, the average B-64

84 stress is about two-thirds of the stress threshold for rupture. In these models, the crust is, on average, far from rupture. However, it exhibits strong fluctuations such that a subset of space is very close to rupture at any time. The average is thus a poor representation of the large variability of the stress amplitudes in the crust. This leads to the prediction that not all perturbations will lead to triggered or induced seismicity and that some regions will be very stable. SOC models suggest that local stress measurements may not be representative of the global organization.

Criticality and predictability

In the present context, criticality and self-organized criticality, used in the sense of statistical physics, refer to two very different concepts, which leads to a lot of confusion, as seen in this debate. First, SOC is self-organized (thus there is no apparent 'tuning', see however ref. 11) while criticality is not. Second, the hallmarks of criticality are the existence of specific precursory patterns (increasing 'susceptibility' and correlation length) in space and time. The idea that a large earthquake could be a critical phenomenon has been put forward by different groups, starting almost two decades ago. Attempts to link earthquakes and critical phenomena find support in the demonstration that rupture in heterogeneous media is a critical phenomenon. Also indicative is the often reported observation of increased intermediate-magnitude seismicity before large events (see Bowman and Sammis's contribution to this debate and references therein). Criticality carries with it the concepts of coarse-graining and universality, and suggests a robustness of its signatures when observed at sufficiently large scale. This is in contrast with the conclusion that one needs a detailed knowledge of the huge complexity of the geology and mechanics of fault systems (fault geometry, strength variations in the fault zone material, rheological properties, state of stress, etc.) to perform a prediction (see Crampin's contribution to this debate). Criticality and SOC can coexist. If rupture of a laboratory sample is the well-defined conclusion of the loading history, the same cannot be said for the crust, where 'there is life' after large earthquakes. An illustration of the coexistence of criticality and of SOC is found in a simple sandpile model of earthquakes on a hierarchical fault structure 15. Here, the important ingredient is to take into account both the nonlinear dynamics and the complex geometry. While the system self-organizes at large time scales according to the expected statistical characteristics, such as the Gutenberg-Richter law for earthquake magnitude frequency, most of the large earthquakes have precursors occurring over time scales of decades and over distances of hundreds of kilometers. Within the critical viewpoint, these intermediate earthquakes are both 'witnesses' and 'actors' of the building-up of correlations. These precursors produce an energy release which, when measured as a time-to-failure process, is quite consistent with an accelerating power law behaviour. In addition, the statistical average (over many large earthquakes) of the correlation length, measured as the maximum size of the precursors, also increases as a power law of the time to the large earthquake. From the point of view of self-organized criticality, this is surprising news: large earthquakes do not lose their 'identity'. 
In this model 15, a large earthquake is different from a small one, a very different story from that told by common SOC wisdom in which 'any precursor state of a large event is essentially identical to a precursor state of a small event and an earthquake does not know how large it will become', as stated by Scholz and Bak in this debate. The difference comes from the absence of geometry in standard SOC models. Reintroducing geometry is essential. In models with hierarchical fault structures 15, we find a degree of predictability of large events. Most of the large earthquakes, whose typical recurrence time is of the order of a century or so, can be predicted from about four years in advance with a precision better than a year. An important ingredient is the existence of log-periodic corrections to the power law increase of the seismic activity prior to large events, reflecting the hierarchical geometry, which help in 'synchronizing' a better fit to the data. The B-65

85 associated discrete scale invariance and complex exponents are expected to occur in such out-of-equilibrium hierarchical systems with threshold dynamics 16. Of course, extreme caution should be exercised, but the theory is beautiful in its self-consistency and, even if probably largely inaccurate, it may provide a useful guideline. Hierarchical geometry need not be introduced by hand, as it emerges spontaneously from the self-consistent organization of the fault-earthquake process 17.

References
1. Jackson, D.D. et al. Southern California deformation. Science 277, (1997).
2. Zoback, M.L. et al. New evidence on the state of stress of the San Andreas fault zone. Science 238, (1987).
3. Lachenbruch, A.H. & Sass, J.H. Heat flow and energetics of the San Andreas fault zone. J. Geophys. Res. 85, (1980).
4. Sornette, D. Earthquakes: from chemical alteration to mechanical rupture. Phys. Rep., in the press (1999).
5. Sornette, D. Mechanochemistry: an hypothesis for shallow earthquakes. In Earthquake Thermodynamics and Phase Transformations in the Earth's Interior (Teisseyre, R. & Majewski, E., eds; Cambridge University Press, 1999).
6. Dhar, D. Self-organized critical state of sandpile automaton models. Phys. Rev. Lett. 64, (1990).
7. Knopoff, L. et al. Increased long-range intermediate-magnitude earthquake activity prior to strong earthquakes in California. J. Geophys. Res. 101, (1996).
8. Bowman, D.D. et al. An observational test of the critical earthquake concept. J. Geophys. Res. 103, (1998).
9. King, G.C.P., Stein, R.S. & Lin, J. Static stress changes and the triggering of earthquakes. Bull. Seism. Soc. Am. 84, (1994).
10. Grasso, J.R. & Sornette, D. Testing self-organized criticality by induced seismicity. J. Geophys. Res. 103, (1998).
11. Sornette, D., Johansen, A. & Dornic, I. Mapping self-organized criticality onto criticality. J. Phys. I France 5, (1995).
12. Allegre, C.J., Le Mouel, J.L. & Provost, A. Scaling rules in rock fracture and possible implications for earthquake predictions. Nature 297, (1982).
13. Keilis-Borok, V. The lithosphere of the Earth as a large nonlinear system. Geophys. Monogr. Ser. 60, (1990).
14. Sornette, A. & Sornette, D. Earthquake rupture as a critical point: consequences for telluric precursors. Tectonophysics 179, (1990).
15. Huang, Y. et al. Precursors, aftershocks, criticality and self-organized criticality. Europhys. Lett. 41, (1998).
16. Sornette, D. Discrete scale invariance and complex dimensions. Phys. Rep. 297, (1998).
17. Sornette, D., Miltenberger, P. & Vanneste, C. Statistical physics of fault patterns self-organized by repeated earthquakes. Pure Appl. Geophys. 142, (1994).
B-66

86 Concluding Remarks by Ian Main (8 April 1999)

This debate has highlighted both a degree of consensus and a degree of continuing controversy within the thorny subject of the predictability of earthquakes. In terms of the four levels of prediction of seismicity I introduced at the start of this debate, a consensus has emerged that at least some form of time-dependent seismic hazard can be justified on both physical and observational grounds. The phenomenon of earthquake triggering leads to a transient, local increase in probability of future earthquakes, for example as aftershocks, but also sometimes in the form of subsequently larger events. In fact, warnings based on such clustering are already in use in California (Michael, week 2). On the other hand, all of the contributors to this debate who expressed an opinion agree that the deterministic prediction of an individual earthquake, within sufficiently narrow limits to allow a planned evacuation programme, is an unrealistic goal.

Seismic gap hypothesis

If we examine the intermediate scenarios, we find a continuing debate on the applicability of the seismic gap hypothesis to natural seismicity (Jackson, week 4; Scholz, weeks 2, 6). This would in principle allow the calculation of longer-term conditional probabilities for time-dependent seismic hazard calculations. The root of this debate lies in the need for a consistent definition of the gap hypothesis in a form that can be objectively tested by statistical means, and the need to consider at least the possibility of a conditional probability density which may decrease with time due to the clustering properties of seismicity (Knopoff, week 3 article). Scholz (week 6) finds it hard to believe that the gap hypothesis can fail, because of the 'weird' physics this might require. However, the gap hypothesis assumes the repetition of identical individual events, and takes no account of interactions between neighbouring faults, or the possibility of strain release by a population of smaller events. For the time being, the gap hypothesis remains open to question and future testing, preferably in prospective mode, but also in comparison with palaeoseismological data, bearing in mind some of the potential pitfalls involved (Michael, week 2).

Precursors

Perhaps the greatest degree of controversy lingers in the possibility of making probabilistic forecasts of future earthquakes based on the observation of precursory phenomena. The reason for this continuing controversy is twofold:
1. The plain practical difficulty of objectively identifying any precursory phenomena with sufficient clarity and repeatability to convince the sceptical of their general existence. Some believe that this problem will be overcome in time (Wyss, week 1; Knopoff, week 3; Biagi, week 4), although at least one feels that the effort needed will require resources comparable with those currently spent on astronomical research (Wyss, week 3). Many contributors to the debate, while acknowledging the problems, argue nevertheless that we should not rule out a priori the possibility of a level of prediction above that of earthquake clustering (Bernard, week 1; Michael, week 2; Scholz, week 2; Knopoff, week 3; Bowman & Sammis, week 4; Wu, week 5). 
Geller (week 4) argued that a massive investment of new resources specifically targeted at earthquake prediction would be an unwise investment without more obvious success, and Bernard (week 1) that any increase in resources should be targeted first at a better fundamental understanding of earthquakes and crustal transients themselves. Geller (week 6) criticised the 'case study' approach inherent in the investigation of precursory phenomena, on the grounds that sample bias can lead to apparent statistical significance - the 'gambler's fallacy'. Wyss (week 6) responded by saying that this may be all we have for the foreseeable future, but this does not of itself refute Geller's argument. B-67

87 2. The lack of a universally-agreed physical model for the complicated and non-linear process of seismogenesis. We may all agree that the local physics of fracture or friction play a strong role, but how do these scale from the controlled conditions of the laboratory to the field case, how do we account for changes in the boundary conditions, and how do we take into account the strong interactions between faults during earthquakes? Self-organized criticality In fact there is still a debate on the applicability of the current model for earthquake populations, based on the notion of self-organised criticality (Bernard, week 1; Scholz, week 2; Knopoff, week 3; Bak, week 3; Bowman & Sammis, week 4). Perhaps surprisingly to some, the contributions here have highlighted the fact that self-organised criticality itself does not preclude some degree of predictability in the statistical properties of the system. (Even a completely chaotic system has a degree of short-term predictability). In fact, there are no realistic grounds for ruling out the existence of finite fluctuations in local stress or global correlation length, which in turn may influence future probabilities of the earthquake population. The problem may instead be that such fluctuations can be small (Bak, week 3), and hard to distinguish from the continued fluctuations inherent in a self-organised critical state, so that a 'background' level is hard to define. Finally it would be hard to devise any model which can truly account for the enormous complexity of the Earth (Knopoff, week 3; Crampin, week 5). Knopoff (week 3) argued instead that the notion of self-organised criticality does not describe the statistical properties of the San Andreas fault in detail. However, to first order at least, this model remains the best contemporary explanation for a plethora of scaling phenomena in geology and geophysics, including earthquake and fault populations (refs 1,2; Sornette, week 7). In terms of deterministic prediction, physicists reading this article will be aware that the critical point represents the cusp between the deterministic and the random. It is this fundamental competition between order and chaos which explains the long-range correlations seen in earthquake and fault populations, as well as smaller-scale physical phenomena such as critical opalescence. If earthquake physics similarly requires an irreducibly random element in order to explain the observed long-range correlations, and the surprising sensitivity to small perturbations in stress that we see in induced seismicity, then this goes a long way to explaining theoretically the emerging consensus on the unlikelihood of our ever being able to achieve accurate deterministic prediction. Policy issues The debate has also highlighted a number of issues which may come under the heading of science policy and organisation. Geller (week 5) argued that earthquake prediction should not be treated as a subject in its own right, and rather that those working in the subject area should present their results in broader fora which include those working in earthquake source physics or statistics, and requiring similar standards of objective testing. Wyss (week 6) concurred. Bernard (week 1) advocated a more fundamental approach to the observation of crustal transients, targeting the study against a range of potential applications. A second policy issue is that of funding. 
Wyss (week 1) and Biagi (week 4) complained that the mere mention of 'Earthquake Prediction' in a proposal can guarantee its failure. An alternative explanation is that such proposals simply do not come over as strongly as the alternatives in a rapidly-moving and increasingly competitive world. Geller (week 6) and Jackson (week 4) highlighted the need for clear, unambiguous, and time-independent statements of individual predictions, in order to allow the objective testing of any individual prediction. Many reviewers may also be mindful of the scientifically weak work that Wyss (week 1) highlighted. As a consequence Geller (week 4) argued that without significant progress there should be no special funding for this area, although explicitly stating that meritorious earthquake prediction research should be funded under the normal rules of peer review. Wyss (week 3) argued that without literally astronomical funding, there will be no significant B-68

88 progress. Those who deal with science policy in earthquake-prone countries will have to make up their own minds on the costs and potential benefits involved. Benefits of predictions Although we are a long way from consensus on how far we should go in terms of addressing scientific questions with potential application to earthquake prediction, many contributors nevertheless addressed the potential utility of predictions at different levels introduced at the start of the debate. Michael (week 5) pointed out that low-probability short-term forecasts, while not justifying mass evacuation of cities, may help maintain a state of preparedness beneficial in earthquake-prone areas. Despite the continuing scientific debate on the gap hypothesis, he also pointed out (week 2) that time-dependent hazard maps already in use in California have led to significant practical benefits in terms of increased investment in aseismic building construction. However, there is also a potential downside to identifying some areas as being at high risk, in the sense that this may lead to unwarranted complacency in areas identified as being at low risk (Geller, week 3). Even if deterministic prediction could be achieved, mass evacuation programmes may not be the best option, given its own potential problems (Wyss, week 1; Michael, week 2; and Jackson, week 4). We remain a long way from proving that any earthquake prediction scheme can succeed better than predictions based on the statistics of earthquake clustering, but this debate has highlighted in the clearest terms possible that when scientists speak of 'earthquake prediction', they do not imply the type of accurate short-term prediction that might allow public evacuations before an individual event. Instead the predictions implied come under the general category of probabilistic forecasts for a population of earthquakes. Such forecasts may instead serve as a motivation for aseismic design, and maintained vigilance by the general public and civil defence agencies. In the end it is not earthquakes themselves which kill people, it is the collapse of man-made structures which does most of the damage. While we continue to explore the degree of predictability of earthquakes on rigorous observational, statistical and theoretical grounds, we should therefore not lose sight of the fact that the best way of preparing for the inevitable remains in the development of land use plans, and building and infrastructure design codes to mitigate their worst effects. References 1. Main, I.G. Statistical physics, seismogenesis and seismic hazard. Rev. Geophys. 34, (1996). 2. Turcotte, D.L. Fractals and chaos in geology & geophysics, 2nd ed. (Cambridge University Press, Cambridge, UK., 1997). B-69

89 Author Affiliations

Ian Main: Department of Geology and Geophysics, University of Edinburgh, Edinburgh, UK
Per Bak: Department of Physics, Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen
Pascal Bernard: Institut de Physique du Globe de Paris, France
Pier Francesco Biagi: Physics Department, University of Bari, Bari, Italy
Stuart Crampin: Centre for Reservoir Geoscience, Department of Geology & Geophysics, University of Edinburgh, Grant Institute, West Mains Road, Edinburgh EH9 3JW, Scotland
Robert J. Geller: Department of Earth and Planetary Physics, Graduate School of Science, Tokyo University, Bunkyo, Tokyo, Japan
David D. Jackson: Southern California Earthquake Center, University of California, Los Angeles, CA, USA
Andrew Michael: United States Geological Survey, Menlo Park, California, USA
Christopher H. Scholz: Lamont-Doherty Earth Observatory, Columbia University, Palisades, New York, USA
Didier Sornette: Director of Research, National Center for Scientific Research; LPMC, CNRS UMR6622 and Universite de Nice-Sophia Antipolis, B.P., NICE Cedex 2, France
Zhongliang Wu: Institute of Geophysics, China Seismological Bureau, China
Max Wyss: Geophysical Institute, University of Alaska, Fairbanks, Alaska, USA
B-70

90 Section C: Instrument Evaluation and Strong-Motion Software Development

Contents
I. Introduction ... C-2
II. Strong-Motion Software Development ... C-2
III. Strong-Motion Quality Assurance Program: SMQC.EXE ... C-3
IV. Supporting Packages and Miscellaneous Utilities for Strong-Motion Processing ... C-30
References ... C-67
C-1

91 I. Introduction

Instrumentation evaluation was not performed in 1999/2000 because no new instruments were submitted for the CWB 1999/2000 procurements of (i) strong-motion array systems for buildings and bridges, and (ii) free-field digital accelerographs. During 1998/1999, an intensive effort was made to develop a simple strong-motion database on CD-ROMs with integrated browsing and analytic tools. We developed an interactive database archival system (idas), so that a database of strong-motion data is integrated with interactive software for browsing the database and for processing and analyzing the selected data. Although this idea is not new, it has become technically feasible to implement only recently. We believe that we are the first to implement this concept for seismic data. Details were reported in last year's Annual Report by Teng, Li and Lee (1999). The disastrous 921 Chi-Chi earthquake changed our plan completely, as over 30,000 strong-motion records were obtained by CWB (Shin et al., 2000), and there is an urgent need to make the data available to the seismological and engineering communities.

II. Strong-Motion Software Development

It is well known that a large collection of data from hundreds of instruments requires extensive quality assurance, because to err is human. In this case, the data collection was also complicated by four different types of accelerographs, each having a different convention. Because of the urgent need to release the strong-motion data from the Chi-Chi earthquake, we performed first-order quality control on the recorded data, and a pre-publication CD was released in early December 1999 (Lee et al., 1999). We realized that a systematic method must be developed in order to process over 30,000 strong-motion records, an amount more than all the world has recorded from the 1930s through 1999 for earthquakes with magnitude 4 or greater in the near field (<50 km). For finalizing the Chi-Chi earthquake strong-motion data (Lee et al., 2000), Doug Dodge and Willie Lee wrote several computer programs to perform quality control on the recorded free-field strong-motion data, and to correct many errors in the recorded strong-motion data files. In order to perform assurance of data quality quickly and efficiently, a graphical interactive computer program called SMQC was developed. In addition, several supporting programs have also been developed to supplement the SMQC program. These programs will be described in the following sections. C-2

92 III. Strong-Motion Quality Assurance Program: SMQC.EXE

The wealth of data produced by the CWB strong-motion network, particularly in response to the 1999 Chi-Chi earthquake, has brought with it a number of challenges in preparing the data for analysis by the engineering community. The large volume of data, combined with inherent inefficiencies and limitations in the old DOS-based analysis and processing programs, has produced a serious backlog in the efforts to prepare data for use. In addition, some of the required processing steps cannot be accomplished using the old tools. For example, memory limitations of the previous generation of tools prevent visual analysis of data streams consisting of many stations and channels over time intervals of many minutes to several hours. Combining recordings acquired from individual triggered stations into complete event sets is often quite difficult. Although some of the existing analysis tools allow a level of interactivity, they are still awkward by modern standards. Finally, the partitioning of analysis tasks into discrete tools, although suitable for pipeline operations, makes some tasks almost impossible. For example, the event rate during the early part of the Chi-Chi aftershock sequence was so high that many aftershocks produced mixed-coda recordings. For these recordings, there is no easy way to tell a priori which P-wave arrivals belong with which event. A practical way to locate these events is by an iterative process in which a few picks are made, a location is calculated, predicted arrivals are compared to the waveforms, and the initial hypothesis is tested. This process is so awkward with the old tools as to be nearly impossible. For these reasons, we created a Windows-based waveform analysis and processing tool that we call the Strong Motion Quality Control (SMQC) program. SMQC consists of approximately non-blank lines of ANSI C++ code, and is completely object-oriented in design. Graphics are provided by the TAxis component, which we have developed (also in ANSI C++). The business logic is embedded in a set of classes collectively known as the SeisObjs Class Library. These classes are platform-independent, and have been compiled on several different architectures. The user interface of SMQC was developed using Borland's Visual Component Library, an object-oriented graphics system. This document illustrates the features of the new program.

Features of SMQC

SMQC is, for the most part, a Windows 95-compliant application. Although it does not support a Windows install/uninstall capability, its user interface follows the Windows Style Guide and the program relies on the system registry for configuration. The program has a Windows icon and can be started from Explorer or the desktop either by (1) double-clicking on the icon, or (2) dropping a Suds file onto the icon. C-3

93 SMQC supports four read methods:
Single Suds File
Continuous Suds File
Read From List of Files
Suds File Insert
Seismograms can be viewed as single traces, as 3-component channel sets, as record sections, or in simple multi-trace plots. In all views, there are unlimited levels of zoom / un-zoom. As an alternative to zooming, there are also trace magnification controls in all the seismogram views. The main analysis window has an interactive picking capability that is available in both single-trace and three-trace modes. All seismogram views support display of predicted P- and S-wave arrivals for Suds files with both hypocenter and station information. There are three hypocenter locators built into SMQC. These are HYPO71 (Lee and Lahr, 1975), a Simplex-based locator, and an adaptive grid search locator. The Simplex and grid search locators are capable of producing a hypocenter estimate with only three picks, and can use both L1 and L2 norms. Epicenter locations can be plotted on an interactive map that shows the coverage ellipse and the residual at each station.

Main Analysis Window

Figure 1 shows the main analysis window in Three-Trace mode. At the top of the view are a menu and two toolbars. The file name and its save status are shown in the figure caption. Each component view shows the trace for that component along with summary information for that trace. The nature and placement of the summary information is user-configurable, and is saved from session to session. Trace color and line width, background color, and other axis properties are also configurable and persistent. Each trace has an independent magnification control to the right of the trace window. Traces can be zoomed and un-zoomed semi-independently. That is, when one trace is zoomed, the other traces zoom in time to maintain the same time axis limits for all traces. However, zooming one trace does not affect the amplitude limits of the other trace views. C-4

94 Figure 1. Main Analysis Window in Three-Trace mode. Figure 2 shows an explanation of the toolbars on the Main Analysis Window. Figure 2. Toolbar Button Functions. C-5

95 Toolbar 1

The Open and Save buttons allow input and output of single SUDS files.
The Print button sends the plot of the current trace(s) to an installed printer.
The Station Dialog button opens a dialog for viewing and editing of station information for the current trace.
The Origin Dialog button opens a dialog allowing viewing, editing, and generation of event source location information.
In multi-file mode, the Previous File button loads the previous file in the list.
The Previous Trace button moves to the previous trace in the current SUDS file.
The Next Trace button moves to the next trace in the current SUDS file.
In multi-file mode, the Next File button loads the next file in the list.
The Trace Selector Combo presents a list of all the traces in memory and allows moving to a desired trace in one step.
The Unzoom button undoes multiple levels of zoom to the full view of the current trace.
The Exit button quits the program.

Toolbar 2

The Remove Mean button removes the mean from all traces in memory.
The Remove Pre-P Mean button removes the mean of the pre-P data for all traces with a P-wave pick.
The Deglitch Traces button applies an adaptive deglitching algorithm to the traces in memory.
The Taper Traces button applies a user-configurable taper to the traces in memory.
The Correct Time button allows a time correction to be determined and applied to all traces with origin and station information, and for which a P-wave pick has been made.
The Filter Traces button applies the currently-defined filter to the traces in memory.
The Add to Pick List button adds the phase picks in the current SUDS file to a list of picks that can be output for use by external locator programs.
The No Pick Mode and Pick Mode buttons allow toggling back and forth between a mode in which mouse clicks set phase picks and a mode in which the mouse is used to select and to zoom.
The Delete Trace button deletes the current trace from memory.

Figure 3 shows the expanded menus for the File, View, and PickList menus. Many of the menu items are available on the toolbars. However, a few are not and deserve mention here. C-6
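Before turning to the menus, the sketch below illustrates one of the simpler toolbar operations listed above, the Remove Pre-P Mean button: the mean of the samples recorded before the P-wave pick is estimated and subtracted from the whole trace. This is only a sketch of the idea under our own assumptions; the Trace structure, its field names, and the storage of the pick as a sample index are illustrative stand-ins, not the actual SeisObjs classes.

#include <cstddef>
#include <vector>

// Illustrative stand-in for a seismogram trace; the real SeisObjs classes
// carry much more (header, timing, gain), but only the samples and a
// P-wave pick are needed to show the operation.
struct Trace {
    std::vector<double> samples;   // digitized amplitudes
    std::ptrdiff_t pPickIndex;     // sample index of the P pick, -1 if none
};

// Subtract the mean of the pre-P portion of the trace from every sample.
// Traces without a P pick, or with too few pre-P samples, are left alone.
void removePrePMean(Trace& tr, std::size_t minPreSamples = 10)
{
    if (tr.pPickIndex <= 0) return;
    const std::size_t n = static_cast<std::size_t>(tr.pPickIndex);
    if (n < minPreSamples || n > tr.samples.size()) return;

    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) sum += tr.samples[i];
    const double mean = sum / static_cast<double>(n);

    for (double& s : tr.samples) s -= mean;   // remove the baseline offset
}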

96 Figure 3. SMQC Menus. The Open Continuous Suds File item allows opening a very long (hours to days) SUDS file from which segments can be extracted at will. The Open File List item allows reading a file containing a list of SUDS files to be processed. In this mode, SMQC can readily switch between files using toolbar buttons. The Insert File item allows a SUDS file to be inserted into the data in memory. If the file being inserted contains traces for channels already in memory, the new traces are merged into the current data. Otherwise the new channels are inserted into the list of channels (in order of distance from epicenter if necessary information is available). The Save Plot As item allows the current trace (or traces if in 3-component mode) to be output as a Windows EMF graphic file. The View menu provides items for opening a number of dialogs used for configuring SMQC. These dialogs are discussed in a later section. The menu item shown as 3-Component Plot allows toggling back and forth between 3-Component and single-component views of data for a single station. The All Traces item opens the Multi-Trace View. The PickList menu is used for managing a PickList. As mentioned earlier, the PickList is an in-memory record of phase picks made in a single SMQC session. The contents of the PickList can be output in formats recognized by the HYPO71 and Velest programs so that picks made by SMQC can be used by those programs. Figure 4 shows the Main Analysis window of SMQC in Single-Trace mode. The plotting Region of the window shows just one trace in this mode. Switching among the channels is done using either the Trace Selector or the Previous and Next Trace buttons. C-7
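The Insert File behaviour just described (merging traces for channels already in memory, and inserting new channels in order of epicentral distance) can be sketched roughly as follows. The channel record, the key used to match channels, and the simple append-by-time merge are illustrative assumptions made to keep the example self-contained; they are not the actual SeisObjs merge logic.

#include <algorithm>
#include <string>
#include <vector>

// Illustrative channel record: station/component key, epicentral distance,
// segment start time (s), sample interval (s), and samples.
struct ChannelData {
    std::string key;          // e.g. station + component code
    double distanceKm;        // epicentral distance, used for ordering
    double startTime;         // seconds relative to some reference
    double dt;                // sample interval
    std::vector<double> samples;
};

// Merge an inserted channel into the in-memory set: if the channel already
// exists, append the new segment after the existing one (gaps and overlaps
// are ignored in this sketch); otherwise add it and keep the set ordered
// by epicentral distance.
void insertChannel(std::vector<ChannelData>& inMemory, const ChannelData& incoming)
{
    auto it = std::find_if(inMemory.begin(), inMemory.end(),
                           [&](const ChannelData& c) { return c.key == incoming.key; });
    if (it != inMemory.end()) {
        const double existingEnd = it->startTime + it->dt * it->samples.size();
        if (incoming.startTime >= existingEnd)          // naive time-ordered append
            it->samples.insert(it->samples.end(),
                               incoming.samples.begin(), incoming.samples.end());
        return;
    }
    inMemory.push_back(incoming);
    std::sort(inMemory.begin(), inMemory.end(),
              [](const ChannelData& a, const ChannelData& b) {
                  return a.distanceKm < b.distanceKm;
              });
}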

97 Figure 4. SMQC in single-component display mode. Figure 5 shows the same view as does Figure 4, but the user has zoomed in to a 3.5s region around the P-wave arrival. The zooming operation can be repeated an unlimited number of times. The view can be un-zoomed a step at a time by holding down the Shift key and clicking with the mouse inside the plot. Alternatively, the Unzoom button can be used to return to the original plot dimensions. C-8

98 Figure 5. Zoomed-in View of Trace. Multi-Trace View When working with event-oriented SUDS files (files containing many traces from different stations for the same time interval, possibly with a single origin), it is often convenient to be able to view all the traces together. SMQC provides this capability in the Multi-Trace View. The Multi-Trace View has two modes available. In the sequential mode shown in Figure 6, the traces are plotted either in the order in which they were read, or if origin and station information are available, in order of distance from the epicenter. If an origin is available, its summary statistics are shown in the plot title. Station names are (optionally) plotted beneath each trace on the left-hand side of the plot. Phase picks are always plotted with each trace, and if origin and station information is available, predicted P- and S-wave arrival times are (optionally) shown as well. C-9

99 Figure 6. Multi-trace view in sequential mode. If the origin and station information are available, then the Multi-Trace view can be put in Record-section mode. This mode is shown in Figure 7. In Record-section mode, the traces are centered on their epicentral distance. Although some traces may partially obscure other nearby traces in this mode, it is often easy to identify phases because of the linear trends visible in this type of plot. C-10
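Positioning each trace at its epicentral distance, as Record-section mode does, requires a distance from the origin to every station. A standard way to compute that distance from geographic coordinates is the great-circle (haversine) formula sketched below; SMQC's own distance routine is not reproduced in this report, so this is simply a reference implementation of the idea on a spherical Earth.

#include <cmath>

// Great-circle (epicentral) distance in kilometres between an epicenter and
// a station, using the haversine formula on a spherical Earth. Adequate for
// ordering and plotting traces at local to regional distances.
double epicentralDistanceKm(double epiLatDeg, double epiLonDeg,
                            double staLatDeg, double staLonDeg)
{
    const double kPi = 3.14159265358979323846;
    const double kDegToRad = kPi / 180.0;
    const double kEarthRadiusKm = 6371.0;

    const double dLat = (staLatDeg - epiLatDeg) * kDegToRad;
    const double dLon = (staLonDeg - epiLonDeg) * kDegToRad;
    const double lat1 = epiLatDeg * kDegToRad;
    const double lat2 = staLatDeg * kDegToRad;

    const double a = std::sin(dLat / 2.0) * std::sin(dLat / 2.0) +
                     std::cos(lat1) * std::cos(lat2) *
                     std::sin(dLon / 2.0) * std::sin(dLon / 2.0);
    return 2.0 * kEarthRadiusKm * std::asin(std::sqrt(a));
}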

100 Figure 7. Multi-trace view in record-section mode. In both sequential and record-section modes, the user can zoom and unzoom. It is also possible to send multi-trace plots to an installed Windows printer by clicking the Printer button on the toolbar.

Picking in SMQC

Picking in SMQC, although currently limited to the Main Trace View, is nevertheless quite flexible. The view is put into pick mode by clicking the pick mode button on the second toolbar. While in pick mode, clicking anywhere in a trace plot sets a pick at the (time) location of the mouse pointer. The attributes of the new pick are taken to be the current values shown in the Pick Parameters dialog (Figure 8). This dialog is opened either from the View menu or by right-clicking on the pick and selecting Pick Parameters from the context menu. Picks are created in the selected state, C-11

101 and while selected, are highlighted. The attributes of a selected pick are changed using the Pick Parameters dialog. Figure 8. Pick Parameters Dialog. Figure 9. Single-trace view showing user-defined P-wave pick and predicted P- and S-wave arrivals from stored velocity model. Once a pick has been created, the Main Trace view returns to No Pick mode. This state is shown in Figure 9. In either mode, a pick can be selected at any time by clicking on it with the mouse. While selected, dragging the mouse moves the pick. A selected pick can be deleted using either the Delete button in the Pick Parameters dialog or the Delete key on the keyboard. C-12

102 Configuration Dialogs Much of the behavior and look of SMQC is configurable and persistent. All configurable properties are set using configuration dialogs. The property values are stored in the system registry. Figure 10 shows the Read Options configuration dialog. For some purposes users may wish to restrict input to selected stations or channels. This is accomplished using the Station and channel restriction controls within this dialog. Figure 10. Read Options Dialog. When inserting seismograms into a workset, it may be convenient to trim the input seismograms to the same time window as the seismograms already in memory. This is especially useful when inserting from continuous data files. This behavior is enabled by checking the Restrict Insert to Time Interval of Existing Data checkbox. Some files may contain dropouts. If the data are otherwise zero mean, the dropouts do not present a particular problem. Otherwise, the dropouts can interfere with visualization and with some trace operations (e.g. mean removal). SMQC can change the dropouts to have the value of the mean of the remainder of the trace. This capability is enabled within this dialog by checking the Fix Dropouts on Input checkbox. C-13
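The Fix Dropouts on Input option described above replaces dropout samples with the mean of the rest of the trace. A minimal sketch of that repair follows; because the way a dropout is flagged differs between recorders, the test is passed in as a caller-supplied predicate, and the sentinel value in the usage comment is purely illustrative rather than an actual SUDS convention.

#include <cstddef>
#include <functional>
#include <vector>

// Replace dropout samples with the mean of the remaining (good) samples,
// following the behaviour described for "Fix Dropouts on Input".
// Illustrative only: not the SMQC/SeisObjs implementation.
void fixDropouts(std::vector<double>& samples,
                 const std::function<bool(double)>& isDropout)
{
    double sum = 0.0;
    std::size_t good = 0;
    for (double s : samples) {
        if (!isDropout(s)) { sum += s; ++good; }
    }
    if (good == 0) return;                       // nothing to anchor the repair on
    const double mean = sum / static_cast<double>(good);

    for (double& s : samples) {
        if (isDropout(s)) s = mean;              // dropout takes the trace mean
    }
}

// Example usage, treating a recorder-specific sentinel value as a dropout
// (the value -32768 is an assumption for illustration):
// fixDropouts(trace, [](double s) { return s == -32768.0; });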

103 Figure 11 shows the Tools Options dialog. This dialog currently controls properties of the Filter, Taper, and Deglitch tools. In Figure 11, the dialog is shown with the Filter Properties tab sheet open. This sheet allows design of one- and two-pass IIR filters constructed from either Butterworth or Bessel analog prototypes. Figure 11. Tools Options Dialog. The Plot Properties dialog shown in Figure 12 controls many of the characteristics of plots generated by SMQC. The Miscellaneous tab sheet (shown in Figure 12) controls parameters common to all plots. These include line color, background color, axis line and axis label colors, and font properties for axis text objects. C-14
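To give a flavour of the one- and two-pass IIR filtering offered by the Filter Properties sheet, the sketch below designs the simplest such filter, a second-order Butterworth low-pass obtained by the bilinear transform, and applies it either in a single forward pass or forward and backward (two passes), which cancels the phase distortion. These are textbook coefficients under our own simplifying choices, not SMQC's actual filter code.

#include <algorithm>
#include <cmath>
#include <vector>

// Second-order Butterworth low-pass designed with the bilinear transform.
struct Biquad { double b0, b1, b2, a1, a2; };

Biquad designButterworthLowpass(double cutoffHz, double sampleRateHz)
{
    const double kPi = 3.14159265358979323846;
    const double k = std::tan(kPi * cutoffHz / sampleRateHz);   // prewarped
    const double norm = 1.0 / (1.0 + std::sqrt(2.0) * k + k * k);
    Biquad f;
    f.b0 = k * k * norm;
    f.b1 = 2.0 * f.b0;
    f.b2 = f.b0;
    f.a1 = 2.0 * (k * k - 1.0) * norm;
    f.a2 = (1.0 - std::sqrt(2.0) * k + k * k) * norm;
    return f;
}

// Single forward pass (Direct Form I difference equation), in place.
void filterOnePass(std::vector<double>& x, const Biquad& f)
{
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;
    for (double& s : x) {
        const double y = f.b0 * s + f.b1 * x1 + f.b2 * x2 - f.a1 * y1 - f.a2 * y2;
        x2 = x1; x1 = s;
        y2 = y1; y1 = y;
        s = y;
    }
}

// Two-pass (forward-backward) filtering: doubles the effective order and
// removes the phase shift of the one-pass filter.
void filterTwoPass(std::vector<double>& x, const Biquad& f)
{
    filterOnePass(x, f);
    std::reverse(x.begin(), x.end());
    filterOnePass(x, f);
    std::reverse(x.begin(), x.end());
}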

104 Figure 12. Plot Properties Dialog. Each of the four possible plots within the Main Analysis window can have its size and position configured using other tab sheets within the dialog. The position, size, and visibility of descriptive text within the plot windows are controlled using the Axis Text tab sheet. The Pick Line tab sheet controls the line width and color of selected and unselected pick lines. It is also used to control the visibility and characteristics of travel-time markers and origin markers. The Plot Output Setup dialog shown in Figure 13 controls the size and position of printed plots generated from any view, and of metafiles generated in the Main Analysis window. C-15

105 Figure 13. Plot Output Dimensions Dialog. The Station Data dialog shown in Figure 14 shows the parameters for the current station-channel selected in the Main Analysis window. Parameters can also be edited here. Figure 14. Station Data Dialog. C-16

106 Locating Earthquakes in SMQC SMQC provides a flexible, interactive environment for locating earthquakes. After picking, events can be located with a single mouse click. Computed locations are immediately displayed on an interactive map that shows the misfit at each station. From the map, the user can easily select a waveform for viewing, and in the waveform view can easily compare predicted arrivals to actual phase picks. The starting point for all location efforts is the Origin Information dialog. Clicking the Origin Dialog on the first toolbar of the Main Analysis window opens this dialog. Figure 15. Origin Dialog Collapsed. The Origin Information dialog can be viewed collapsed to save space as in Figure 15, or it can be expanded to show origin details as in Figure 16. Figure 16. Origin Dialog Expanded showing Location page. C-17

107 To locate an earthquake, the user clicks the Locate button, shown to the left. In most cases, the hypocenter location will be computed in a second or less. The View Location Map will then become enabled if it was not already. Also, if the Origin Information dialog is expanded, then the origin information and statistics will be displayed as well. The newly computed hypocenter location is not written into the SUDS origin structure until the commit button has been pressed. Figure 17. Location Algorithm Properties Dialog. C-18

108 Three locators are available within SMQC. They are HYPO71, a Simplex algorithm-based locator, and a grid search locator. For most purposes, the HYPO71 locator gives good results, and it is significantly faster than either of the other two locators. However, because the Simplex and grid search locators do not solve a linear inverse problem, and because these locators can use a robust L1 norm, they may give better results in the presence of outliers or when the coverage is poor. An additional advantage of these direct search locators is that they can produce an answer given only three picks. Figure 18. Location Map Dialog. C-19

109 To switch among locators, the user chooses View Location Algorithm Dialog from the Origin Information Dialog, and then checks the desired locator in the Solution Method box. Note that the Hypo locator uses an auxiliary program (hypo71.exe), and the path to this program must be set. Also, a temp file directory in which the user has read/write privileges must be set. Finally, note that only the version of HYPO71 distributed with SMQC can be used. The grid search locator uses a two-step adaptive grid search method. The latitude, longitude, and depth boundaries define a rectangular region over which the coarse search is made. After the coarse search is complete, a fine mesh is generated centered on the minimum residual grid point of the coarse search. The location precision in any dimension is approximately (Max Value - Min Value) / (NumberCoarse * NumberFine). Figure 19. Location Map Zoomed in to show detail of epicenter and station residuals. C-20
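The two-step adaptive grid search just described can be sketched as follows. To keep the example self-contained, the travel times come from a uniform-velocity half-space with straight rays, an L1 misfit is used, and the origin time is taken as the mean pick-minus-travel-time residual; the structure names, the 6.0 km/s velocity, and these simplifications are our own assumptions and not the velocity model or bookkeeping of the real locator.

#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Pick { double staLatDeg, staLonDeg, arrivalTime; };   // P arrival time in s
struct Hypocenter { double latDeg, lonDeg, depthKm, originTime, misfit; };

// Straight-ray travel time in a uniform half-space (illustrative only).
double travelTime(double srcLat, double srcLon, double depthKm, const Pick& p)
{
    const double kKmPerDegLat = 111.2;            // rough local conversion
    const double vP = 6.0;                        // assumed P velocity, km/s
    const double dx = (p.staLonDeg - srcLon) * kKmPerDegLat *
                      std::cos(srcLat * 3.14159265358979323846 / 180.0);
    const double dy = (p.staLatDeg - srcLat) * kKmPerDegLat;
    return std::sqrt(dx * dx + dy * dy + depthKm * depthKm) / vP;
}

// L1 misfit for a trial source; the origin time is the mean of
// (observed arrival - predicted travel time). Picks must be non-empty.
double l1Misfit(double lat, double lon, double depth,
                const std::vector<Pick>& picks, double& originTime)
{
    double sum = 0.0;
    for (const Pick& p : picks) sum += p.arrivalTime - travelTime(lat, lon, depth, p);
    originTime = sum / static_cast<double>(picks.size());
    double misfit = 0.0;
    for (const Pick& p : picks)
        misfit += std::fabs(p.arrivalTime - originTime - travelTime(lat, lon, depth, p));
    return misfit / static_cast<double>(picks.size());
}

// One grid sweep over a box with n nodes per dimension (n >= 2).
Hypocenter sweep(double lat0, double lat1, double lon0, double lon1,
                 double dep0, double dep1, int n, const std::vector<Pick>& picks)
{
    Hypocenter best;
    best.misfit = std::numeric_limits<double>::max();
    for (int i = 0; i < n; ++i)
      for (int j = 0; j < n; ++j)
        for (int k = 0; k < n; ++k) {
            const double lat = lat0 + (lat1 - lat0) * i / (n - 1);
            const double lon = lon0 + (lon1 - lon0) * j / (n - 1);
            const double dep = dep0 + (dep1 - dep0) * k / (n - 1);
            double t0;
            const double m = l1Misfit(lat, lon, dep, picks, t0);
            if (m < best.misfit) best = {lat, lon, dep, t0, m};
        }
    return best;
}

// Two-step search: coarse sweep over the whole box, then a fine sweep
// centered on the best coarse node, as described in the text above.
Hypocenter locateGridSearch(double lat0, double lat1, double lon0, double lon1,
                            double dep0, double dep1,
                            int nCoarse, int nFine, const std::vector<Pick>& picks)
{
    const Hypocenter c = sweep(lat0, lat1, lon0, lon1, dep0, dep1, nCoarse, picks);
    const double dLat = (lat1 - lat0) / (nCoarse - 1);
    const double dLon = (lon1 - lon0) / (nCoarse - 1);
    const double dDep = (dep1 - dep0) / (nCoarse - 1);
    return sweep(c.latDeg - dLat, c.latDeg + dLat,
                 c.lonDeg - dLon, c.lonDeg + dLon,
                 std::max(0.0, c.depthKm - dDep), c.depthKm + dDep,
                 nFine, picks);
}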

110 Regardless of how the location is computed, it will automatically be plotted on the Map View (Figure 18). If the Map View is not open, the user clicks the View Location Map button on the Origin Information dialog to see the current map. All stations in the SUDS file are plotted on the map. However, only stations with picks have residuals plotted. In Figure 18 the epicenter is obscured by station information. The user can use the mouse to zoom in for a close-up. This is shown in Figure 19. There, the epicenter is shown as a + symbol centered within a 95% confidence ellipse. The user can easily evaluate sources of misfit in the solution by clicking on a station symbol. That action causes the trace(s) for that station to be loaded into the Main Analysis window, where the user can review the picks, modify them if necessary, and then relocate the event.

Graphical Output in SMQC

SMQC can produce high-resolution printer plots for the Main Analysis window and the Multi-trace window. In addition, contents of the Main Analysis window can be saved as Windows EMF files and imported into many Windows programs capable of displaying graphics. This provides a mechanism for inserting SMQC plots directly into reports. An example SMQC plot imported into MS-Word is shown in Figure 20. Figure 20. Example EMF File produced by SMQC. C-21

111 Example Analysis of Aftershock Using Continuous Waveform Data

Figure 21 shows 1000 s of data displayed in the Continuous Data Dialog. The data are from a continuous SUDS file containing the Chi-Chi mainshock and 10 hours of aftershocks for 19 stations near the mainshock epicenter. Figure 21. Continuous Data Dialog with 1000-second window showing several events. C-22

112 Within this dialog, the user can use the forward and reverse arrow buttons to move through the data. The Width control is used to set the amount of data displayed, and the Move By control is used to set the amount by which the data are shifted. The Segment Position control provides visual feedback of the current position in the file. A number of earthquake signals are visible in the data window. Figure 22. Continuous Data Dialog zoomed in on event near 700 s. C-23
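Selecting a window in the Continuous Data Dialog amounts to cutting a fixed-length segment out of each continuous channel, clamped to the data actually available. A stripped-down version of that cut is sketched below; start-time bookkeeping in real SUDS data is more involved, so the structure and field names here are illustrative assumptions rather than the SeisObjs types.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Continuous channel: start time of the first sample (s) and sample interval (s).
struct ContinuousChannel {
    double startTime;
    double dt;
    std::vector<double> samples;
};

// Cut the window [selectStart, selectStart + width) out of a continuous
// channel, clamped to the available data. Returns the extracted samples;
// the start time of the new segment is reported through segmentStart.
std::vector<double> extractWindow(const ContinuousChannel& ch,
                                  double selectStart, double width,
                                  double& segmentStart)
{
    const double rel = (selectStart - ch.startTime) / ch.dt;
    std::ptrdiff_t first = static_cast<std::ptrdiff_t>(std::floor(rel));
    const std::ptrdiff_t count = static_cast<std::ptrdiff_t>(width / ch.dt);

    first = std::max<std::ptrdiff_t>(first, 0);
    const std::ptrdiff_t last =
        std::min<std::ptrdiff_t>(first + count,
                                 static_cast<std::ptrdiff_t>(ch.samples.size()));

    segmentStart = ch.startTime + first * ch.dt;
    if (last <= first) return {};                       // window outside the data
    return std::vector<double>(ch.samples.begin() + first, ch.samples.begin() + last);
}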

113 For this example, the third prominent event near 700 s is chosen for analysis using the mouse. The result after selection is shown in Figure 22. The selected data window is 150 s long, and contains adequate pre-event and post-event data. By clicking the Select For Analysis button, the data are loaded into the main analysis module of SMQC. Figure 23. Main Analysis Single-Trace View after loading data segment. Figure 23 shows the Single-Trace view of the Main Analysis module of SMQC after loading the waveform segment. The first trace (T078) is displayed, the Trace Navigator is enabled and set to T078-v, and the Forward trace arrow button is enabled. All trace modification buttons on the toolbars (except those that depend on having picks set) are enabled. From the View menu, the user can choose All Traces to display the Multi-trace view shown in Figure 24. Within this view the current trace (T078) is shown highlighted and the station names are displayed. Because there has been no hypocenter solution calculated yet, there is no origin time available, and the traces cannot be shown in record-section form. C-24

114 Figure 24. The Multi-trace view showing newly imported segment from continuous data file. At this point the user sets picks for a few of the traces with unambiguous P-wave onsets, and clicks the Locate button on the Origin dialog. After examining statistics shown on the Origin dialog and on the Map view, the user decides to inspect the overall consistency of the solution. By pressing the Commit button on the Origin dialog, the origin information and predicted arrivals are added to the waveform views. The effect on the Multi-trace view is shown in Figure 25. The vertical line from top to bottom of the view marks the origin time. Each trace has markers showing the expected arrival times of the P- and S-waves. Traces with picks have the actual pick times marked as well. For all the traces, with or without picks, the predicted P-wave times are reasonably close to the visible P- wave onsets. The S-waves are more problematic, primarily because this is a double event with a small precursor (which was picked) followed by a larger, nearby event. C-25

115 Figure 25. Multi-trace view after preliminary location. Examining the solution in the map view (Figure 26), it is obvious that the solution is well constrained epicentrally. In fact, there is at least one redundant station (T079) that could be profitably removed from the solution. However, the depth does not appear to be well constrained. (Note the depth of 15 km, which is an indicator that HYPO71 found the solution to be unconstrained in depth.) C-26

116 Figure 26. Location map after final location. In fact, the solution can be improved. Using the first solution as a guide, a few minor adjustments are made to the pick times. Also, two stations with high residuals but with little independent constraint on the solution are removed from the solution. In addition, the locator is switched to Grid Search. The resulting epicenter location is shown in Figure 27. The new epicenter has moved very little. However, the depth has moved from 15 km to about 4 km. The residual has decreased from about 0.3 s to about 0.1 s. Formal uncertainties in latitude, longitude, and depth are all less than 2.9 km. C-27

117 Figure 27. Map view after second solution. The record section for the revised solution is shown in Figure 28. C-28

118 Figure 28. Record Section plot after final location. C-29


More information

GENERAL. CHAPTER 1 BACKGROUND AND PURPOSE OF THE GUIDELINES Background of the Guidelines Purpose of the Guidelines...

GENERAL. CHAPTER 1 BACKGROUND AND PURPOSE OF THE GUIDELINES Background of the Guidelines Purpose of the Guidelines... GENERAL CHAPTER 1 BACKGROUND AND PURPOSE OF THE GUIDELINES... 1 1.1 Background of the Guidelines... 1 1.2 Purpose of the Guidelines... 3 CHAPTER 2 APPLICATION OF THE GUIDELINES... 3 2.1 Potential Users

More information

GIS Geographical Information Systems. GIS Management

GIS Geographical Information Systems. GIS Management GIS Geographical Information Systems GIS Management Difficulties on establishing a GIS Funding GIS Determining Project Standards Data Gathering Map Development Recruiting GIS Professionals Educating Staff

More information

Twitter s Effectiveness on Blackout Detection during Hurricane Sandy

Twitter s Effectiveness on Blackout Detection during Hurricane Sandy Twitter s Effectiveness on Blackout Detection during Hurricane Sandy KJ Lee, Ju-young Shin & Reza Zadeh December, 03. Introduction Hurricane Sandy developed from the Caribbean stroke near Atlantic City,

More information

United States Multi-Hazard Early Warning System

United States Multi-Hazard Early Warning System United States Multi-Hazard Early Warning System Saving Lives Through Partnership Lynn Maximuk National Weather Service Director, Central Region Kansas City, Missouri America s s Weather Enterprise: Protecting

More information

Seismic Recording Station AZ_PFO Summary Report

Seismic Recording Station AZ_PFO Summary Report Seismic Recording Station AZ_PFO Summary Report Thank you for hosting station AZ_PFO on your property. We hope that you find the enclosed report interesting. Your station is one of the 2000 USArray seismic

More information

Earthquake early warning: Adding societal value to regional networks and station clusters

Earthquake early warning: Adding societal value to regional networks and station clusters Earthquake early warning: Adding societal value to regional networks and station clusters Richard Allen, UC Berkeley Seismological Laboratory rallen@berkeley.edu Sustaining funding for regional seismic

More information

the IRIS Consortium Collaborative, Multi-user Facilities for Research and Education Briefing NSF Business Systems Review September 9, 2008

the IRIS Consortium Collaborative, Multi-user Facilities for Research and Education Briefing NSF Business Systems Review September 9, 2008 the IRIS Consortium Collaborative, Multi-user Facilities for Research and Education Briefing NSF Business Systems Review September 9, 2008 A facilities program for collection and distribution of seismological

More information

METEOROLOGICAL WARNINGS STUDY GROUP (METWSG)

METEOROLOGICAL WARNINGS STUDY GROUP (METWSG) METWSG/4-SN No. 6 12/3/12 METEOROLOGICAL WARNINGS STUDY GROUP (METWSG) FOURTH MEETING Montréal, 15 to 18 May 2012 Agenda Item 6: Wind shear, turbulence and tsunami warnings TSUNAMI INFORMATION (Presented

More information

Seismic Quiescence before the 1999 Chi-Chi, Taiwan, M w 7.6 Earthquake

Seismic Quiescence before the 1999 Chi-Chi, Taiwan, M w 7.6 Earthquake Bulletin of the Seismological Society of America, Vol. 96, No. 1, pp. 321 327, February 2006, doi: 10.1785/0120050069 Seismic Quiescence before the 1999 Chi-Chi, Taiwan, M w 7.6 Earthquake by Yih-Min Wu

More information

Magnitude 7.1 PERU. There are early reports of homes and roads collapsed leaving one dead and several dozen injured.

Magnitude 7.1 PERU. There are early reports of homes and roads collapsed leaving one dead and several dozen injured. A magnitude 7.1 earthquake has occurred offshore Peru. The earthquake struck just after 4 a.m. local time and was centered near the coast of Peru, 40 km (25 miles) south-southwest of Acari, Peru at a depth

More information

Disaster Risk Reduction in Survey for Seismic Protection of MES

Disaster Risk Reduction in Survey for Seismic Protection of MES Survey for Seismic Protection Ministry of Emergency Situations of Republic of Armenia Disaster Risk Reduction in Survey for Seismic Protection of MES Syuzanna Kakoyan Leading specialist at the Department

More information

Earthquakes. Earthquake Magnitudes 10/1/2013. Environmental Geology Chapter 8 Earthquakes and Related Phenomena

Earthquakes. Earthquake Magnitudes 10/1/2013. Environmental Geology Chapter 8 Earthquakes and Related Phenomena Environmental Geology Chapter 8 Earthquakes and Related Phenomena Fall 2013 Northridge 1994 Kobe 1995 Mexico City 1985 China 2008 Earthquakes Earthquake Magnitudes Earthquake Magnitudes Richter Magnitude

More information

Report of the Working Group 2 Data Sharing and Integration for Disaster Management *

Report of the Working Group 2 Data Sharing and Integration for Disaster Management * UNITED NATIONS E/CONF.104/6 ECONOMIC AND SOCIAL COUNCIL Twentieth United Nations Regional Cartographic Conference for Asia and the Pacific Jeju, 6-9 October 2015 Item 5 of the provisional agenda Report

More information

WMO. Key Elements of PWS and Effective EWS. Haleh Haleh Kootval Chief, PWS Programme

WMO. Key Elements of PWS and Effective EWS. Haleh Haleh Kootval Chief, PWS Programme WMO Key Elements of PWS and Effective EWS Haleh Haleh Kootval Chief, PWS Programme Workshop Objectives This workshop is all about Service Delivery and becoming excellent at it through: Sharing experiences

More information

ACTIVITIES OF THE HEADQUARTERS FOR EARTHQUAKE RESEARCH PROMOTION

ACTIVITIES OF THE HEADQUARTERS FOR EARTHQUAKE RESEARCH PROMOTION Journal of Japan Association for Earthquake Engineering, Vol.4, No.3 (Special Issue), 2004 ACTIVITIES OF THE HEADQUARTERS FOR EARTHQUAKE RESEARCH PROMOTION Sadanori HIGASHI 1 1 Member of JAEE, Earthquake

More information

I. Locations of Earthquakes. Announcements. Earthquakes Ch. 5. video Northridge, California earthquake, lecture on Chapter 5 Earthquakes!

I. Locations of Earthquakes. Announcements. Earthquakes Ch. 5. video Northridge, California earthquake, lecture on Chapter 5 Earthquakes! 51-100-21 Environmental Geology Summer 2006 Tuesday & Thursday 6-9:20 p.m. Dr. Beyer Earthquakes Ch. 5 I. Locations of Earthquakes II. Earthquake Processes III. Effects of Earthquakes IV. Earthquake Risk

More information

APPLICATIONS OF EARTHQUAKE HAZARD MAPS TO LAND-USE AND EMERGENCY PLANNING EXAMPLES FROM THE PORTLAND AREA

APPLICATIONS OF EARTHQUAKE HAZARD MAPS TO LAND-USE AND EMERGENCY PLANNING EXAMPLES FROM THE PORTLAND AREA APPLICATIONS OF EARTHQUAKE HAZARD MAPS TO LAND-USE AND EMERGENCY PLANNING EXAMPLES FROM THE PORTLAND AREA O. Gerald Uba Metro, Portland, Oregon OVERVIEW The extent to which we understand "below ground"

More information

Design of Safety Monitoring and Early Warning System for Buried Pipeline Crossing Fault

Design of Safety Monitoring and Early Warning System for Buried Pipeline Crossing Fault 5th International Conference on Civil Engineering and Transportation (ICCET 2015) Design of Safety Monitoring and Early Warning System for Buried Pipeline Crossing Fault Wu Liu1,a, Wanggang Hou1,b *, Wentao

More information

"The Big One" by sea and not by land

The Big One by sea and not by land "The Big One" by sea and not by land By Los Angeles Times, adapted by Newsela staff on 03.24.14 Word Count 629 Surfer Lee Johnson emerges from the water at San Onofre State Beach, Calif., with the twin

More information

LAB 6 SUPPLEMENT. G141 Earthquakes & Volcanoes

LAB 6 SUPPLEMENT. G141 Earthquakes & Volcanoes G141 Earthquakes & Volcanoes Name LAB 6 SUPPLEMENT Using Earthquake Arrival Times to Locate Earthquakes In last week's lab, you used arrival times of P- and S-waves to locate earthquakes from a local seismograph

More information

Seismic Recording Station TA_109C Summary Report

Seismic Recording Station TA_109C Summary Report Seismic Recording Station TA_109C Summary Report Thank you for hosting station TA_109C on your property. We hope that you find the enclosed report interesting. Your station is one of the 2000 USArray seismic

More information

EARTHQUAKES. Bruce A. Bolt. Fifth Edition. W. H. Freeman and Company New York. University of California, Berkeley

EARTHQUAKES. Bruce A. Bolt. Fifth Edition. W. H. Freeman and Company New York. University of California, Berkeley EARTHQUAKES Fifth Edition Bruce A. Bolt University of California, Berkeley DS W. H. Freeman and Company New York Preface xi What We Feel in an Earthquake 1 The 1906 Eye-Opening San Francisco Earthquake

More information

Lecture Outline Wednesday-Monday April 18 23, 2018

Lecture Outline Wednesday-Monday April 18 23, 2018 Lecture Outline Wednesday-Monday April 18 23, 2018 Questions? Lecture Final Exam Lecture Section 1 Friday May 4, 8:00-10:00am Lecture Section 2 Friday May 4, 3:10-5:10 pm Final Exam is 70% new material

More information

WINTER STORM Annex II

WINTER STORM Annex II WINTER STORM Annex II I. PURPOSE A. This annex has been prepared to ensure a coordinated response by state agencies to requests from local jurisdictions to reduce potential loss of life and to ensure essential

More information

Overview. Tools of the Trade. USGS Decision-Making Tools for Pre-Earthquake Mitigation and Post-Earthquake Response

Overview. Tools of the Trade. USGS Decision-Making Tools for Pre-Earthquake Mitigation and Post-Earthquake Response USGS Decision-Making Tools for Pre-Earthquake Mitigation and Post-Earthquake Response Tools of the Trade DAVID WALD United States Geological Survey NEHRP Workshop: Developing Earthquake Scenarios Sept

More information

M-8.1 EARTHQUAKE 87KM SW OF PIJIJIAPAN, MEXICO EXACT LOCATION: N W DEPTH: 69.7KM SEPTEMBER 7, 11:49 PST

M-8.1 EARTHQUAKE 87KM SW OF PIJIJIAPAN, MEXICO EXACT LOCATION: N W DEPTH: 69.7KM SEPTEMBER 7, 11:49 PST M-8.1 EARTHQUAKE 87KM SW OF PIJIJIAPAN, MEXICO EXACT LOCATION: 15.068 N 93.715 W DEPTH: 69.7KM SEPTEMBER 7, 2017 @ 11:49 PST Photo: Luis Alberto Cruz / AP Photo: Carlos Jasso 1 THE 2017 CHIAPAS MEXICO

More information

In the early morning hours of

In the early morning hours of Figure 1. Brace that Chimney! Bracing of masonry chimneys is very difficult to do properly and has generally been ineffective in preventing their failure during earthquakes. While replacement of the chimney

More information

How to Use This Presentation

How to Use This Presentation How to Use This Presentation To View the presentation as a slideshow with effects select View on the menu bar and click on Slide Show. To advance through the presentation, click the right-arrow key or

More information

Unit 5: NWS Hazardous Weather Products. Hazardous Weather and Flooding Preparedness

Unit 5: NWS Hazardous Weather Products. Hazardous Weather and Flooding Preparedness Unit 5: NWS Hazardous Weather Products Objectives Describe the mission of the NWS Describe the basic organizational structure of the NWS Explain the purpose of various NWS products Explain how Probability

More information

Magnitude 7.2 OAXACA, MEXICO

Magnitude 7.2 OAXACA, MEXICO A magnitude 7.2 earthquake has occurred in Oaxaca, Mexico at a depth of 24.6 km (15 miles). It was felt as far away as Guatemala. There have been no reported deaths directly linked to the earthquake. Emergency

More information

TEGAM s Connection to the EarthScope Project

TEGAM s Connection to the EarthScope Project TEGAM s Connection to the EarthScope Project Introduction The EarthScope Project is an undertaking funded by the National Science Foundation in partnership with the United States Geological Survey and

More information

10.1 A summary of the Virtual Seismologist (VS) method for seismic early warning

10.1 A summary of the Virtual Seismologist (VS) method for seismic early warning 316 Chapter 10 Conclusions This final Chapter is made up of the following: a summary of the Virtual Seismologist method for seismic early warning, comments on implementation issues, conclusions and other

More information

NEW LOCAL MAGNITUDE CALIBRATION FOR VRANCEA (ROMANIA) INTERMEDIATE-DEPTH EARTHQUAKES

NEW LOCAL MAGNITUDE CALIBRATION FOR VRANCEA (ROMANIA) INTERMEDIATE-DEPTH EARTHQUAKES Romanian Reports in Physics, Vol. 64, No. 4, P. 1097 1108, 2012 EARTH PHYSICS NEW LOCAL MAGNITUDE CALIBRATION FOR VRANCEA (ROMANIA) INTERMEDIATE-DEPTH EARTHQUAKES M. CRAIU, A. CRAIU, C. IONESCU, M. POPA,

More information

Ground motion intensity map of the Tainan earthquake (Central Weather Bureau).

Ground motion intensity map of the Tainan earthquake (Central Weather Bureau). Taiwan lies on the boundary between the Eurasian Plate and the Philippine Sea Plate, which are converging at 80 mm per year. The island is the result of uplift caused by the collision of the northern end

More information

Earthquakes. Photo credit: USGS

Earthquakes. Photo credit: USGS Earthquakes Earthquakes Photo credit: USGS Pancaked Building - 1985 Mexico City Earthquakes don t kill people - buildings do! An earthquake is the motion or trembling of the ground produced by sudden displacement

More information

California s New Earthquake Early Warning System And Why We Are Different

California s New Earthquake Early Warning System And Why We Are Different Civilization exists by Geologic consent Subject to change without notice. -- Durant EERI SACRAMENTO CHAPTER CALIFORNIA STATE UNIVERSITY, SACRAMENTO APRIL 27, 2017 California s New Earthquake Early Warning

More information

revised October 30, 2001 Carlos Mendoza

revised October 30, 2001 Carlos Mendoza Earthquake Sources in the circum-caribbean Region Puerto Rico Tsunami Mitigation and Warning Program Federal Emergency Management Agency Preliminary Report: Task 3 revised October 30, 2001 Carlos Mendoza

More information

Ground motion attenuation relations of small and moderate earthquakes in Sichuan region

Ground motion attenuation relations of small and moderate earthquakes in Sichuan region Earthq Sci (2009)22: 277 282 277 Doi: 10.1007/s11589-009-0277-x Ground motion attenuation relations of small and moderate earthquakes in Sichuan region Lanchi Kang 1, and Xing Jin 1,2 1 Fuzhou University,

More information

EARTHQUAKE RELATED PROJECTS IN NIED, JAPAN. Yoshimitsu Okada NIED (National Research Institute for Earth Science and Disaster Prevention), Japan

EARTHQUAKE RELATED PROJECTS IN NIED, JAPAN. Yoshimitsu Okada NIED (National Research Institute for Earth Science and Disaster Prevention), Japan OECD/NEA WS 1/8 EARTHQUAKE RELATED PROJECTS IN NIED, JAPAN Yoshimitsu Okada NIED (National Research Institute for Earth Science and Disaster Prevention), Japan Abstract Earthquake related projects in NIED

More information

A.C.R.E and. C3S Data Rescue Capacity Building Workshops. December 4-8, 2017 Auckland, New Zealand. Session 3: Rescue of Large Format and Analog Data

A.C.R.E and. C3S Data Rescue Capacity Building Workshops. December 4-8, 2017 Auckland, New Zealand. Session 3: Rescue of Large Format and Analog Data A.C.R.E and C3S Data Rescue Capacity Building Workshops December 4-8, 2017 Auckland, New Zealand Dr. Rick Crouthamel, D.Sc. Executive Director Session 3: Rescue of Large Format and Analog Data 4 December

More information

Magnitude 7.9 SE of KODIAK, ALASKA

Magnitude 7.9 SE of KODIAK, ALASKA A magnitude 7.9 earthquake occurred at 12:31 am local time 181 miles southeast of Kodiak at a depth of 25 km (15.5 miles). There are no immediate reports of damage or fatalities. Light shaking from this

More information

2018 NASCIO Award Submission Category: Cross-Boundary Collaboration and Partnerships. Project Title: Tennessee Wildfires: A Coordinated GIS Response

2018 NASCIO Award Submission Category: Cross-Boundary Collaboration and Partnerships. Project Title: Tennessee Wildfires: A Coordinated GIS Response 2018 NASCIO Award Submission Category: Cross-Boundary Collaboration and Partnerships Project Title: Tennessee Wildfires: A Coordinated GIS Response Sevier County, Tennessee State of Tennessee, Emergency

More information

Analysis Of Earthquake Records of Istanbul Earthquake Rapid Response System Stations Related to the Determination of Site Fundamental Frequency

Analysis Of Earthquake Records of Istanbul Earthquake Rapid Response System Stations Related to the Determination of Site Fundamental Frequency Analysis Of Earthquake Records of Istanbul Earthquake Rapid Response System Stations Related to the Determination of Site Fundamental Frequency A. C. Zulfikar, H. Alcik & E. Cakti Bogazici University,Kandilli

More information

ShakeAlert Phase 1: West Coast Earthquake Early Warning. Doug Given, USGS EEW Coordinator Education Symposium, Dec. 4, 2018

ShakeAlert Phase 1: West Coast Earthquake Early Warning. Doug Given, USGS EEW Coordinator Education Symposium, Dec. 4, 2018 ShakeAlert Phase 1: West Coast Earthquake Early Warning Doug Given, USGS EEW Coordinator Education Symposium, Dec. 4, 2018 Population WA 7M OR 4M Annualized Earthquake Losses, $6.1B 61% in California,

More information

Tornado Drill Exercise Plan (EXPLAN)

Tornado Drill Exercise Plan (EXPLAN) Tornado Drill Exercise Plan (EXPLAN) As part of the National Weather Service s (NWS) Severe Weather Preparedness Week in Indiana Purdue University March 19, 2019 As of Feb 19, 2019 TABLE OF CONTENTS Introduction...

More information

1 Introduction. Station Type No. Synoptic/GTS 17 Principal 172 Ordinary 546 Precipitation

1 Introduction. Station Type No. Synoptic/GTS 17 Principal 172 Ordinary 546 Precipitation Use of Automatic Weather Stations in Ethiopia Dula Shanko National Meteorological Agency(NMA), Addis Ababa, Ethiopia Phone: +251116639662, Mob +251911208024 Fax +251116625292, Email: Du_shanko@yahoo.com

More information

Geospatial natural disaster management

Geospatial natural disaster management Geospatial natural disaster management disasters happen. are you ready? Natural disasters can strike almost anywhere at any time, with no regard to a municipality s financial resources. These extraordinarily

More information

Characteristics and introduction of Earthquake in Asia-Pacific region

Characteristics and introduction of Earthquake in Asia-Pacific region Characteristics and introduction of Earthquake in Asia-Pacific region 1906 San Francisco 2011 Tohoku 1999 Chi-Chi 1985 Mexico City 2004 Sumatra Chung-Han Chan 詹忠翰 2011 Christchurch To understand the characteristics

More information

Earthquake Source. Kazuki Koketsu. Special Session: Great East Japan (Tohoku) Earthquake. Earthquake Research Institute, University of Tokyo

Earthquake Source. Kazuki Koketsu. Special Session: Great East Japan (Tohoku) Earthquake. Earthquake Research Institute, University of Tokyo 2012/9/24 17:20-17:35 WCEE SS24.4 Special Session: Great East Japan (Tohoku) Earthquake Earthquake Source Kazuki Koketsu Earthquake Research Institute, University of Tokyo 1 Names and features of the earthquake

More information

Special feature: Are its lessons being adequately applied? Follow-up on the ten-year anniversary of the Hanshin-Awaji Earthquake

Special feature: Are its lessons being adequately applied? Follow-up on the ten-year anniversary of the Hanshin-Awaji Earthquake Special feature: Are its lessons being adequately applied? Follow-up on the ten-year anniversary of the Hanshin-Awaji Earthquake - Are we prepared for future massive earthquakes? - Hisakazu SAKAI Member

More information

UGRC 144 Science and Technology in Our Lives/Geohazards

UGRC 144 Science and Technology in Our Lives/Geohazards UGRC 144 Science and Technology in Our Lives/Geohazards Session 3 Understanding Earthquakes and Earthquake Hazards Lecturer: Dr. Patrick Asamoah Sakyi Department of Earth Science, UG Contact Information:

More information

DETERMINATION OF EARTHQUAKE PARAMETERS USING SINGLE STATION BROADBAND DATA IN SRI LANKA

DETERMINATION OF EARTHQUAKE PARAMETERS USING SINGLE STATION BROADBAND DATA IN SRI LANKA DETERMINATION OF EARTHQUAKE PARAMETERS USING SINGLE STATION BROADBAND DATA IN SRI LANKA S.W.M. SENEVIRATNE* MEE71 Supervisors: Yasuhiro YOSHIDA** Tatsuhiko HARA*** ABSTRACT We determined epicenters and

More information

A GLOBAL MODEL FOR AFTERSHOCK BEHAVIOUR

A GLOBAL MODEL FOR AFTERSHOCK BEHAVIOUR A GLOBAL MODEL FOR AFTERSHOCK BEHAVIOUR Annemarie CHRISTOPHERSEN 1 And Euan G C SMITH 2 SUMMARY This paper considers the distribution of aftershocks in space, abundance, magnitude and time. Investigations

More information

WESTERN STATES SEISMIC POLICY COUNCIL POLICY RECOMMENDATION Earthquake and Tsunami Planning Scenarios

WESTERN STATES SEISMIC POLICY COUNCIL POLICY RECOMMENDATION Earthquake and Tsunami Planning Scenarios WESTERN STATES SEISMIC POLICY COUNCIL POLICY RECOMMENDATION 18-1 Earthquake and Tsunami Planning Scenarios Policy Recommendation 18-1 WSSPC strongly encourages states, provinces, territories, First Nations,

More information

Applications on Slope Land Management through GIS Technology

Applications on Slope Land Management through GIS Technology Applications on Slope Land Management through GIS Technology Hsiu-Yi Ko, Jo-Yu Liu,Tai-Chung Hsiao, Tian-Ying Chou, Ying-Hui Chang Geographic Information Systems Research Center of Feng Chia University

More information

Economic and Social Council

Economic and Social Council United Nations Economic and Social Council Distr.: General 2 July 2012 E/C.20/2012/10/Add.1 Original: English Committee of Experts on Global Geospatial Information Management Second session New York, 13-15

More information

Special edition paper Development of Shinkansen Earthquake Impact Assessment System

Special edition paper Development of Shinkansen Earthquake Impact Assessment System Development of Shinkansen Earthquake Impact Assessment System Makoto Shimamura*, Keiichi Yamamura* Assuring safety during earthquakes is a very important task for the Shinkansen because the trains operate

More information

EARTHQUAKE EARLY WARNING SYSTEM AT A LOCAL GOVERNMENT AND A PRIVATE COMPANY IN JAPAN

EARTHQUAKE EARLY WARNING SYSTEM AT A LOCAL GOVERNMENT AND A PRIVATE COMPANY IN JAPAN First European Conference on Earthquake Engineering and Seismology (A joint event of the 13 th ECEE & 30 th General Assembly of the ESC) Geneva, Switzerland, 3-8 September 2006 Paper Number: 741 EARTHQUAKE

More information

Earthquake Doublet Sequences: Evidence of Static Triggering in the Strong Convergent Zones of Taiwan

Earthquake Doublet Sequences: Evidence of Static Triggering in the Strong Convergent Zones of Taiwan Terr. Atmos. Ocean. Sci., Vol. 19, No. 6, 589-594, December 2008 doi: 10.3319/TAO.2008.19.6.589(PT) Earthquake Doublet Sequences: Evidence of Static Triggering in the Strong Convergent Zones of Taiwan

More information

Natural Disaster :.JP s Experience and Preparation

Natural Disaster :.JP s Experience and Preparation Natural Disaster :.JP s Experience and Preparation 14 March. 2018 Hiro Hotta 1 Where s Japan Japan Puerto Rico earthquakes large enough to feel : 2,000-20,000 times a year typhoons disastrous

More information

Assessing Hazards and Risk

Assessing Hazards and Risk Page 1 of 6 EENS 204 Tulane University Natural Disasters Prof. Stephen A. Nelson Assessing Hazards and Risk This page last updated on 07-Jan-2004 As discussed before, natural disasters are produced by

More information

Directed Reading. Section: How and Where Earthquakes Happen WHY EARTHQUAKES HAPPEN. Skills Worksheet. 1. Define earthquake.

Directed Reading. Section: How and Where Earthquakes Happen WHY EARTHQUAKES HAPPEN. Skills Worksheet. 1. Define earthquake. Skills Worksheet Directed Reading Section: How and Where Earthquakes Happen 1. Define earthquake. 2. When do earthquakes usually occur? 3. What is a fault? WHY EARTHQUAKES HAPPEN 4. Rocks along both sides

More information

Running Head: HAZARD MITIGATION PLAN OUTLINE FOR MISSISSIPPI 1

Running Head: HAZARD MITIGATION PLAN OUTLINE FOR MISSISSIPPI 1 Running Head: HAZARD MITIGATION PLAN OUTLINE FOR MISSISSIPPI 1 Hazard Mitigation Plan Outline for Mississippi Name: Institution: HAZARD MITIGATION PLAN OUTLINE FOR MISSISSIPPI 2 Hazard Mitigation Plan

More information

THE 3D SIMULATION INFORMATION SYSTEM FOR ASSESSING THE FLOODING LOST IN KEELUNG RIVER BASIN

THE 3D SIMULATION INFORMATION SYSTEM FOR ASSESSING THE FLOODING LOST IN KEELUNG RIVER BASIN THE 3D SIMULATION INFORMATION SYSTEM FOR ASSESSING THE FLOODING LOST IN KEELUNG RIVER BASIN Kuo-Chung Wen *, Tsung-Hsing Huang ** * Associate Professor, Chinese Culture University, Taipei **Master, Chinese

More information

AIRCURRENTS THE TOHOKU EARTHQUAKE AND STRESS TRANSFER STRESS TRANSFER

AIRCURRENTS THE TOHOKU EARTHQUAKE AND STRESS TRANSFER STRESS TRANSFER THE TOHOKU EARTHQUAKE AND STRESS TRANSFER AIRCURRENTS 11.2011 Edited Editor s Note: The March 11th Tohoku Earthquake was unprecedented in Japan s recorded history. In April, AIR Currents described the

More information

South Carolina Seismic Network Bulletin

South Carolina Seismic Network Bulletin South Carolina Seismic Network Bulletin Volume XIII 2003 Prepared by: Pradeep Talwani Abhijit Gangopadhyay and Richard Cannon SPONSORS: Department of Energy/Westinghouse Savannah River Company United States

More information

Earthquakes. Earthquakes and Plate Tectonics. Earthquakes and Plate Tectonics. Chapter 6 Modern Earth Science. Modern Earth Science. Section 6.

Earthquakes. Earthquakes and Plate Tectonics. Earthquakes and Plate Tectonics. Chapter 6 Modern Earth Science. Modern Earth Science. Section 6. Earthquakes Chapter 6 Modern Earth Science Earthquakes and Plate Tectonics Section 6.1 Modern Earth Science Earthquakes and Plate Tectonics Earthquakes are the result of stresses in Earth s s lithosphere.

More information

HAZUS-MH: Earthquake Event Report

HAZUS-MH: Earthquake Event Report HAZUS-MH: Earthquake Event Report Region Name: El Paso County Earthquake Scenario: El Paso County Random EQ Print Date: February 08, 2006 Disclaimer: The estimates of social and economic impacts contained

More information

Indian Ocean Tsunami Warning System: Example from the 12 th September 2007 Tsunami

Indian Ocean Tsunami Warning System: Example from the 12 th September 2007 Tsunami Indian Ocean Tsunami Warning System: Example from the 12 th September 2007 Tsunami Charitha Pattiaratchi 1 Professor of Coastal Oceanography, The University of Western Australia Email: chari.pattiaratchi@uwa.edu.au

More information

Evidence for plate tectonics

Evidence for plate tectonics Evidence for plate tectonics See class powerpoint Printed tables 2x essay Qs markschemes Discuss/Evaluate the evidence for plate tectonics Discuss/evaluate the evidence for plate tectonics Essay: To what

More information

Lesson 8. Natural Disasters

Lesson 8. Natural Disasters Lesson 8 Natural Disasters 1 Reading is NOT a spectator sport! 2 Reading requires active participation! 3 PREDICT Try to figure out what information will come next and how the selection might end. 4 Natural

More information

Source:

Source: Source: http://www.pastforward.ca/perspectives/columns/10_02_05.htm At 16:53, on Wednesday, January 12th, 2010, a devastating 7.0 earthquake struck Haiti's capital, Port-au-Prince. The earthquake left

More information

Pacific Catastrophe Risk Assessment And Financing Initiative

Pacific Catastrophe Risk Assessment And Financing Initiative Pacific Catastrophe Risk Assessment And Financing Initiative PALAU September is expected to incur, on average,.7 million USD per year in losses due to earthquakes and tropical cyclones. In the next 5 years,

More information

PERFORMANCE. 1 Scheduled and successfully completed observing time

PERFORMANCE. 1 Scheduled and successfully completed observing time PERFORMANCE INDICATORS The ATNF assesses its performance through key performance indicators, based on those used generally by CSIRO but adapted to be appropriate for a National Facility. Unless otherwise

More information

Recovery Renewal Rebuilding

Recovery Renewal Rebuilding Recovery Renewal Rebuilding Federal Reserve Foreclosure Series Washington, D.C. October 20, 2008 Mayor Jay Williams, Youngstown OH The City of Youngstown Youngstown State University Urban Strategies Inc.

More information

M 7.1 EARTHQUAKE 5KM ENE OF RABOSO, MEXICO EXACT LOCATION: N W DEPTH: 51.0KM SEPTEMBER 19, 1:14 LOCAL TIME

M 7.1 EARTHQUAKE 5KM ENE OF RABOSO, MEXICO EXACT LOCATION: N W DEPTH: 51.0KM SEPTEMBER 19, 1:14 LOCAL TIME M 7.1 EARTHQUAKE 5KM ENE OF RABOSO, MEXICO EXACT LOCATION: 18.584 N 98.399 W DEPTH: 51.0KM SEPTEMBER 19, 2017 @ 1:14 LOCAL TIME Photo: Eduardo Verdugo / AP Photo: Alfredo Estrella/ Agence France-Presse/

More information

Three Fs of earthquakes: forces, faults, and friction. Slow accumulation and rapid release of elastic energy.

Three Fs of earthquakes: forces, faults, and friction. Slow accumulation and rapid release of elastic energy. Earthquake Machine Stick-slip: Elastic Rebound Theory Jerky motions on faults produce EQs Three Fs of earthquakes: forces, faults, and friction. Slow accumulation and rapid release of elastic energy. Three

More information

IMPLEMENT ROUTINE AND RAPID EARTHQUAKE MOMENT-TENSOR DETERMINATION AT THE NEIC USING REGIONAL ANSS WAVEFORMS

IMPLEMENT ROUTINE AND RAPID EARTHQUAKE MOMENT-TENSOR DETERMINATION AT THE NEIC USING REGIONAL ANSS WAVEFORMS Final Technical Report Award number: 05HQGR0062 IMPLEMENT ROUTINE AND RAPID EARTHQUAKE MOMENT-TENSOR DETERMINATION AT THE NEIC USING REGIONAL ANSS WAVEFORMS Lupei Zhu Saint Louis University Department

More information

What is the Right Answer?

What is the Right Answer? What is the Right Answer??! Purpose To introduce students to the concept that sometimes there is no one right answer to a question or measurement Overview Students learn to be careful when searching for

More information

Tsunami Response and the Enhance PTWC Alerts

Tsunami Response and the Enhance PTWC Alerts Tsunami Response and the Enhance PTWC Alerts Ken Gledhill GeoNet Project Director Chair, Intergovernmental Coordination Group, Pacific Tsunami Warning and Mitigation System (PTWS) Overview 1. Procedures

More information

Disclaimer. This report was compiled by an ADRC visiting researcher (VR) from ADRC member countries.

Disclaimer. This report was compiled by an ADRC visiting researcher (VR) from ADRC member countries. Disclaimer This report was compiled by an ADRC visiting researcher (VR) from ADRC member countries. The views expressed in the report do not necessarily reflect the views of the ADRC. The boundaries and

More information

Title: Storm of the Century: Documenting the 1993 Superstorm

Title: Storm of the Century: Documenting the 1993 Superstorm A. PAIIF 2011 Cover Page Title: Storm of the Century: Documenting the 1993 Superstorm Point of Contact: Name: Alex Granger Email: alex.granger@noaa.gov Phone Number: 571-555-9278 Mailing Address: Research

More information

Chapter 6: Writing the News Story in Simple Steps

Chapter 6: Writing the News Story in Simple Steps Chapter 6: Writing the News Story in Simple Steps Here we finish the job of writing the news story, which we began in Chapter 4: Writing the intro in simple steps. We consider ranking key points, structuring

More information

Pacific Catastrophe Risk Assessment And Financing Initiative

Pacific Catastrophe Risk Assessment And Financing Initiative Pacific Catastrophe Risk Assessment And Financing Initiative VANUATU September 211 Country Risk Profile: VANUATU is expected to incur, on average, 48 million USD per year in losses due to earthquakes and

More information