GPS Monitoring 1

GPS Monitoring of the San Andreas Fault and Modeling of Slip Rate on the Mojave Section of the San Andreas Fault

By Nayeli Jimenez and Mischel Bartie

PRISM Research Program Coordinator: Rollie Trapp
Though the San Andreas fault has not produced a large earthquake in over two centuries, the land around the fault expresses the strain it is undergoing. To understand how much strain is building, we used sensitive GPS surveying equipment to measure site velocities. First, we physically surveyed four sites: Cherry in Beaumont, Lacy and Ryn6 in Hemet, and Caba in Cabazon. We also modeled GPS site velocities from the Southern California Earthquake Center's (SCEC) Crustal Motion Model 4 (CMM4) to characterize crustal deformation within the Mojave transect across the San Andreas fault, using MS Excel. Over the span of seven days we tested over 100,000 slip-rate combinations to find a line that best fit the CMM4 site velocity data from the Mojave transect. Of all the combinations computed, the best-fitting line was actually one calibrated by hand. The slip rates that gave us the best-fitting line are as follows (in mm/yr): unnamed offshore fault, 0; San Clemente fault, 1; Santa Cruz fault, 1; San Pedro Basin fault, 0; Palos Verdes fault, 0; Newport-Inglewood fault, 11; San Andreas fault, 17; Mirage Valley fault, 2; Helendale fault, 1; Lenwood/Lockhart fault, 0; Harper fault, 1; Blackwater fault, 4; Goldstone Lake fault, 5; and S. Death Valley fault, 3. Though the line fits the data almost perfectly, the modeled Newport-Inglewood slip rate is highly unrealistic; we can only account for this by the absence of the Elsinore fault from our model. We also obtained relatively high slip rates in the Eastern California Shear Zone (ECSZ) for the Blackwater and Goldstone Lake faults, but these rates fit the GPS data with striking consistency, and no other known faults nearby could accommodate that slip, so we conclude that these numbers are fairly accurate. Within Southern California, hundreds of faults contribute to multiple earthquakes every day, but most are too small to feel without highly sensitive equipment.
In this seismically active state, the San Andreas Fault (SAF) poses a constant threat and raises the burning question: when will the next "big one" hit? Geologists and seismologists work diligently to answer this question, but earthquake prediction is virtually impossible, so they do the next best thing: they estimate how rapidly stress is building up on the fault, which helps to estimate when the fault may be due for its next earthquake. The SAF is a transform plate boundary, a border along which the Pacific plate and the North American plate "slide" past each other. However, the fault is not perfectly smooth, so it "locks up" and great amounts of stress build. While this stress is building, the land around the fault deforms in an elastic fashion, and measuring this movement can help calculate the amount of deformation that has occurred and the slip that may occur once the stress is released. By using precise GPS equipment we are able to measure the amount of movement around this fault. By utilizing predetermined points, also known as survey benchmarks, in and
around San Bernardino and Riverside counties, we can use satellites to track and record small amounts of movement, which enables us to compare this data to information gathered in previous years. If you ever take a stroll through the San Bernardino area, you might stumble upon one of the twenty-five survey benchmarks used in this study. These benchmarks (see picture below) are stable points on Earth's surface that can be surveyed year after year to measure the rate and direction of motion of the underlying portion of the Earth's crust. We used GPS to measure the position of the benchmarks by placing a high-quality GPS antenna precisely over a benchmark. If you look at the picture below, you'll notice a "+" mark in the center of the circle. This mark is important because we carefully align our equipment over its axis. Being level and centered is extremely important when comparing the locations of these benchmarks over a series of years, because equipment that is not positioned accurately over the benchmark could falsely indicate an immense shift in ground movement. Once we finish gathering our data, over approximately five days, we send it to the University of Arizona to be processed into site coordinates we can work with in programs like Excel. The time series presented on the left side of the poster show a comparison of the 2010 positions relative to previous years. We collected GPS data from four sites (Cabazon, Cherry, Lacy and Ryan 6) and obtained the north, east and vertical position for each site. We plotted these positions as a function of time, including data from prior years. Each graph above shows the site's position over time; the slope of the line represents the rate of movement in the east, north or vertical direction. GPS is able to resolve the north and east positions more accurately than the vertical position.
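The velocity estimate described above (the slope of a position-versus-time graph) can be sketched with an ordinary least-squares fit. The benchmark positions below are made-up numbers for illustration, not our survey data:

```python
def velocity_from_timeseries(times, positions):
    """Least-squares slope of position (mm) vs. time (yr):
    the site velocity in mm/yr."""
    n = len(times)
    t_mean = sum(times) / n
    p_mean = sum(positions) / n
    num = sum((t - t_mean) * (p - p_mean) for t, p in zip(times, positions))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

# Hypothetical yearly north positions for one benchmark (mm, relative)
years = [2006, 2007, 2008, 2009, 2010]
north = [0.0, 19.8, 40.5, 60.1, 79.9]  # roughly 20 mm/yr northward
v_north = velocity_from_timeseries(years, north)
```

In practice the processed coordinates come with uncertainties, and the scatter of the yearly positions about the fitted line indicates how well the velocity is resolved; as noted above, the vertical component scatters more than the horizontal ones.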
We combined the north and east velocities to calculate the horizontal velocity using the equation below, which is also illustrated in the images below:

(v_h)^2 = (v_n)^2 + (v_e)^2

where v_h is the horizontal velocity, v_n is the velocity north, and v_e is the velocity east. We chose to model the fault slip rates within a transect across the Mojave section of the San Andreas fault, as shown at right. This transect does not include the sites at which we collected GPS data, but GPS velocities are publicly available for many sites within this transect. These site velocities were taken from the Southern California Earthquake Center's (SCEC) Crustal Motion Model version 4 (CMM4). To model the fault slip rates, we had to calculate the component of velocity of each site that is parallel to the overall trend of
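The two calculations just described, combining the north and east components into a horizontal velocity and projecting it onto the fault trend, can be sketched in Python. The strike angle and sample velocities here are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def horizontal_velocity(vn, ve):
    """Combine north and east velocity components (mm/yr):
    vh = sqrt(vn^2 + ve^2)."""
    return math.hypot(vn, ve)

def fault_parallel(vn, ve, strike_deg):
    """Project a (north, east) velocity onto a fault striking
    strike_deg degrees clockwise from north."""
    strike = math.radians(strike_deg)
    # Unit vector along strike has components (cos, sin) in (north, east)
    return vn * math.cos(strike) + ve * math.sin(strike)

# Hypothetical site velocity: 20 mm/yr north, -25 mm/yr east
vn, ve = 20.0, -25.0
vh = horizontal_velocity(vn, ve)      # ~32.0 mm/yr
vpar = fault_parallel(vn, ve, -65.0)  # component along a NW-striking fault
```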
the faults. To do this, the coordinates had to be rotated so that the velocity components were parallel and perpendicular to the San Andreas fault before the equation could be applied. When we began modeling transects, so many students were already working within the San Bernardino range that we decided to work with the Mojave transect. The Mojave transect (see transect map, middle bottom) crosses many different active faults. Using an equation for the elastic behavior of materials, we can predict what the GPS site velocities should be, given any set of slip rates and locking depths for the faults. The graph at right below shows the fault-parallel velocities of the sites (blue dots) as well as the velocities predicted by several different models (solid lines). Our task was to come up with a set of slip rates on all the faults that would best fit all the site velocities within this area. The slip-rate ranges we tested for each fault were initially based on published slip rates obtained by other methods and compiled in a seismic hazard report (Wills and others, 2007). We then chose a range of slip rates for each fault, surrounding the published estimates, to test for the rates that are most consistent with the GPS velocities. We quickly found that the number of combinations to be tested exceeded 1 million. To simplify the problem, we first tested models for the Eastern California Shear Zone (ECSZ) only. Once we found the best-fitting rates for these faults, we calculated the percentage of ECSZ slip carried by each ECSZ fault and held these percentages fixed in all future models. Thereafter, as other faults were added to the model, the total slip rate of the ECSZ was allowed to vary, but slip was always apportioned to the individual ECSZ faults using the same percentages. This reduced the number of models tested to 3,125 for the faults within the ECSZ and 101,090 for the remaining faults and the combined ECSZ.
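The elastic prediction referred to above is commonly implemented with the screw-dislocation formula of Savage and Burford (1973), in which a locked fault slipping at rate s below locking depth D contributes v(x) = (s/pi)*arctan(x/D) to the fault-parallel velocity at distance x from the fault trace, and the contributions of multiple faults add. A minimal sketch under that assumption follows; the fault positions, slip rates, and locking depths are made-up numbers, not our model values:

```python
import math

def predicted_velocity(x_site, faults):
    """Fault-parallel velocity (mm/yr) at position x_site (km) across the
    transect, summing the elastic contribution of each locked fault.
    faults: list of (x_fault_km, slip_rate_mm_yr, locking_depth_km)."""
    v = 0.0
    for x_fault, slip, depth in faults:
        v += (slip / math.pi) * math.atan((x_site - x_fault) / depth)
    return v

# Hypothetical three-fault transect (positions in km along the profile)
faults = [(0.0, 17.0, 15.0),   # a San Andreas-like fault
          (60.0, 4.0, 10.0),   # an ECSZ fault
          (90.0, 3.0, 10.0)]   # another ECSZ fault

# Predicted velocity profile at a few site positions
profile = [predicted_velocity(x, faults) for x in (-100, -10, 0, 50, 150)]
```

Far from the faults the predicted velocity flattens toward plus or minus half the total slip rate, which is why the model curves in the graph step up across each fault.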
We tested each of these combinations and calculated the chi-squared statistic for each to assess the goodness of fit of each model. After sorting the 101,090 total slip-rate combinations by chi-squared, we identified the best fit to the site velocities as the model with the lowest chi-squared. However, some models with higher chi-squared values looked visually better. This is because chi-squared is weighted toward regions with more data, so models that fit regions of sparse data poorly may still have a low chi-squared. We also realized we could get a better-fitting model if the slip rates for the ECSZ faults were changed slightly, since we had restricted each to a fixed percentage of the combined ECSZ rate. The majority of the faults within the ECSZ were fixed at 11%, with the S. Death Valley fault at 33% of the combined slip rate (for example, a combined ECSZ slip rate of 18 mm/yr would assign 6 mm/yr to S. Death Valley and 2 mm/yr to each of the other six faults). This limits the range of our values and leaves no room for relative variation among the ECSZ faults, which restricts any future modeling attempts. To our surprise, the hand-calibrated model was the best fit overall, and we think it was caused by the slight
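The search described above amounts to enumerating slip-rate combinations, predicting the velocity at each site, and scoring each model with chi-squared, the sum over sites of ((observed - predicted) / sigma)^2. The sketch below shows the idea for a small two-fault case; the site data, uncertainties, and candidate rates are illustrative values, not the CMM4 data or our tested ranges:

```python
import itertools
import math

def predicted_velocity(x_site, faults):
    """Elastic (screw-dislocation) velocity from a list of
    (x_fault_km, slip_rate_mm_yr, locking_depth_km) faults."""
    return sum(s / math.pi * math.atan((x_site - xf) / d)
               for xf, s, d in faults)

def chi_squared(observed, predicted, sigma):
    """Goodness of fit; smaller is better. Sites with small sigma
    (well-constrained data) dominate the sum."""
    return sum(((o - p) / s) ** 2
               for o, p, s in zip(observed, predicted, sigma))

# Hypothetical sites: position (km), observed velocity (mm/yr), 1-sigma error
xs     = [-50.0, -10.0, 10.0, 80.0]
obs    = [-7.5, -2.4, 2.4, 7.0]
sigmas = [0.5, 0.5, 0.5, 0.5]

# Grid search over candidate rates for faults at x = 0 and x = 60 km (D = 15 km)
best = min(
    (chi_squared(obs,
                 [predicted_velocity(x, [(0.0, s1, 15.0), (60.0, s2, 15.0)])
                  for x in xs],
                 sigmas), s1, s2)
    for s1, s2 in itertools.product(range(10, 21), range(0, 6)))
chi2, s1, s2 = best  # lowest chi-squared and the slip rates that produced it
```

Sorting or minimizing over every combination in this way scales as the product of the per-fault ranges, which is why fixing the ECSZ percentages was necessary to keep the count near 100,000 rather than over a million.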
change in the ECSZ slip rates. Nonetheless, we kept the slip rates within the reasonable ranges established before any modeling was done. In the future we would suggest taking the extra time to run the larger number of combinations to improve the accuracy of the computed best-fit line. As for the San Andreas slip rates, we plotted every 100th slip-rate combination, starting with the lowest-chi-squared model, until we found the chi-squared value at which the models no longer fit well visually. In good-fitting models the San Andreas fault slip rate was always between 16 mm/yr and 22 mm/yr, never lower and never higher. The Newport-Inglewood slip rate in the hand-calibrated model is unrealistically high because it compensates for the absence of the Elsinore fault. The slip rates of the Blackwater and Goldstone Lake faults came out higher than expected, even higher than those included in the automated combinations, but they may indeed represent the actual slip rates of these two faults, since this model was the one that best fit all the site velocities.