REDUCTIONS IN SEASONAL CLIMATE FORECAST DEPENDABILITY AS A RESULT OF DOWNSCALING


J. M. Schneider, J. D. Garbrecht

ABSTRACT. This investigation addresses a practical question from an agricultural planning and management perspective: are the NOAA/CPC seasonal climate forecasts skillful enough to retain utility after they have been downscaled to field and daily scales for use in crop models to predict impacts on crop production? Utility is defined herein as a net forecast dependability of at least 50%, where net dependability is the product of the large-scale 3-month forecast dependability with a factor accounting for losses in dependability due to the higher spatiotemporal variability of 1-month station data. This loss factor is estimated from station data by computing the frequency of matching sign (FOMS) between the direction of departures from average of 3-month forecast division values and 1-month station values, for average temperature and precipitation, over a 10-year study period and 96 stations in six regions of the U.S. The resulting FOMS does not display any consistent differences across regions, locations, or months, and so is averaged across all months and stations. The average FOMS calculated in this manner is 76% for average temperature and 66% for total precipitation. The decimal FOMS are then used as a multiplicative loss factor on previously reported 3-month forecast division reliability values to produce estimates of the net reliability for downscaled forecasts at locations within each forecast division. The resulting guidance depends on region and forecast variable, with the forecasts for above-average temperature emerging as worthy of consideration for use in agricultural applications over the majority of the contiguous U.S. The Northeast, the Great Lakes, parts of the Northern Great Plains, interior California, and northwest Nevada are the only regions where net dependability is too low to warrant immediate consideration.
Conversely, forecasts for cooler than average temperature do not retain sufficient net dependability after downscaling to be an attractive option in any part of the contiguous U.S. at this time. Forecasts for wetter or drier than average conditions retained sufficient net dependability to encourage further development over only about 10% of the contiguous U.S., in regions well known to experience the strongest ENSO impacts on precipitation. The forecast divisions where agricultural decision support might benefit from NOAA/CPC seasonal precipitation forecasts are located in Florida, south Texas, southwest New Mexico, Arizona, central and southern California, and parts of Oregon, Washington, Idaho, and Montana.

Keywords. Agricultural management, Average air temperature, Climate, Climatology, Decision support, Downscaling, Forecast, Precipitation, Seasonal.

Submitted for review in July 2007 as manuscript number SW 7072; approved for publication by the Soil & Water Division of ASABE in May 2008. The authors are Jeanne M. Schneider, Research Meteorologist, and Jurgen D. Garbrecht, Research Hydraulic Engineer, USDA-ARS Grazinglands Research Laboratory, El Reno, Oklahoma. Corresponding author: Jeanne M. Schneider, USDA-ARS Grazinglands Research Laboratory, 7207 West Cheyenne St., El Reno, OK 73036; phone: 405-262-5291, ext. 251; fax: 405-262-0133; e-mail: Jeanne.Schneider@ars.usda.gov.

Transactions of the ASABE, Vol. 51(3): 915-925. 2008 American Society of Agricultural and Biological Engineers, ISSN 0001-2351.

Decision support in crop and forage agriculture is based largely on field studies, with some support from crop modeling. The effects of climate on agricultural yields and profitability are usually represented by a local climatology derived from nearby weather station data, either as the values for weather during field studies or expressed as station statistics (e.g., mean, standard deviation, skewness of precipitation; frequency of wet days; growing degree days or similar thermal units) used to drive a daily weather generator for crop modeling. Given the variations in weather from year to year, seasonal climate forecasts appear to offer an opportunity to reduce risks and maximize profits under varying climate. Official seasonal climate forecasts for average temperature and total precipitation have been offered by the National Oceanic and Atmospheric Administration's Climate Prediction Center (NOAA/CPC) for the contiguous U.S. since December 1994 (Barnston et al., 2000). Unfortunately, any attempt to incorporate the NOAA/CPC seasonal climate forecasts into agricultural decision support is faced with several immediate obstacles: the probabilistic nature of the forecasts; the question of the skill or dependability of the forecasts; the infrequency of forecasts significantly different from climatology; and, most relevant to this analysis, the physical and temporal scale of the forecasts. Crops and forages grow in individual fields, but seasonal climate forecasts are offered for large areas (each approximately 9 × 10^4 km², termed forecast divisions herein) and 3-month periods, so some type of downscaling in both space and time is required to use them at the field scale. There are statistical reasons why forecasts are generated for regional and seasonal scales, in particular the higher variability of weather (especially precipitation) at a location compared to an area average; i.e., it is more difficult to discern a robust seasonal forecast signal in noisy station data (e.g., Gong et al., 2003). However, the potential payoff for individual operators across the U.S. is large enough to justify developing and testing a methodology for incorporating any

useful climate forecast signal, derived from official forecasts that are freely available (in this case, the NOAA/CPC forecasts), into risk-based decision support systems. The essential elements of an application methodology have been created and are outlined below. First, we assessed the practical utility of the NOAA/CPC seasonal climate forecasts for agricultural applications over each forecast division, since the information on forecast performance previously available considered only national summaries. Several measures were created to assess forecast utility (Schneider and Garbrecht, 2003, 2006) to address the following two questions:

(1) Are the forecast departures large enough to justify their use? The NOAA/CPC seasonal climate forecasts are statements of shifts in odds relative to conditions during a 30-year reference period, termed a climatology. The forecasts may indicate a shift in odds toward either end of the climatological distribution (e.g., wetter or drier, warmer or cooler) or may be for equal chances, which means a forecast equal to the climatology. To be useful in agricultural management, the forecasts need to be significantly different from climatology in order to offer new information beyond the climate information already accounted for in current management practices. Further, are the forecasts for large departures offered often enough to bother? If non-climatology forecasts are offered rarely (e.g., one every second year on average), they may not offer sufficient return on the investment required to modify management practices to include them. This forecast characteristic was addressed with a measure called usefulness (Schneider and Garbrecht, 2003).

(2) Are the forecasts skillful enough to justify their use? In other words, do these probabilistic forecasts for shifts in odds get the odds right? This forecast characteristic is termed reliability in the climate forecast community.
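For illustration only, the usefulness screen described above might look like the following sketch. The tercile framing (33.3% climatological chance per category) and the 5-point minimum shift are assumptions chosen for the example, not the published criteria of Schneider and Garbrecht (2003):

```python
# Sketch: screening probabilistic forecasts for "usefulness", i.e.,
# whether the forecast shift in odds departs enough from climatology.
# The 33.3% climatological tercile probability and the 5-point minimum
# shift are illustrative assumptions, not the authors' values.
CLIMATOLOGY = 33.3  # percent chance of each tercile under climatology

def is_useful(forecast_prob, min_shift=5.0):
    """True when the forecast tercile probability departs from
    climatology by at least `min_shift` percentage points."""
    return abs(forecast_prob - CLIMATOLOGY) >= min_shift

def usefulness(forecast_probs, min_shift=5.0):
    """Fraction of issued forecasts that pass the usefulness screen."""
    return sum(is_useful(p, min_shift) for p in forecast_probs) / len(forecast_probs)

# Toy sequence of issued probabilities for one tercile category:
print(usefulness([33.3, 40.0, 36.0, 50.0, 33.3]))  # 0.4
```

A forecast of equal chances (33.3%) never passes the screen, matching the text's point that only forecasts significantly different from climatology carry new information.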
Schneider and Garbrecht (2006) combined a threshold requirement determined by the definition of usefulness with the concept of reliability to produce a measure called dependability. This measure computes the success rate of forecasts in predicting climate variations numerically distinct from climatology, where success is defined as correctly predicting the direction of the variation from the mean (warmer/cooler, wetter/drier). Summarizing the results from Schneider and Garbrecht (2003, 2006), forecast usefulness and dependability for the 3-month forecasts vary significantly across the U.S., as shown in table 1. These results are consistent with a recent analysis of NOAA/CPC forecast skill reported by Livezey and Timofeyeva (2008), which employed traditional meteorological measures for probabilistic forecasts. The point here, however, is that both sets of analyses addressed forecast performance over 3-month periods and the relatively large-area forecast divisions. As such, they do not address questions of dependability or skill if the forecasts are applied at smaller space or time scales.

The primary components of our forecast application methodology are a spatial downscaling methodology (Schneider and Garbrecht, 2002; Garbrecht et al., 2004) and a temporal disaggregation approach (Schneider et al., 2005) for the NOAA/CPC seasonal climate forecasts. Our spatial downscaling approach is different from the technique currently employed by NOAA/National Weather Service (NOAA/NWS) for their city-specific climate forecasts. We assume that the predicted shift in probability for the large spatial scale applies to all points within that forecast area.
This approach deliberately sidesteps the challenges associated with deriving relationships between statistics for large areas and an embedded station (the approach taken by NOAA/NWS, which is unfortunately problematic for precipitation; e.g., Meyers et al., 2008) and defers the related problem to the analysis reported here. Our temporal disaggregation approach consists of two parts: first, transforming the overlapping 3-month forecasts to a sequence of 1-month forecasts (Schneider et al., 2005); second, employing a tailored weather generator that properly reflects the 1-month forecasts in the production of ensembles of daily weather (Garbrecht et al., 2004). Together, the spatial downscaling and temporal disaggregation techniques provide the means to apply the seasonal climate forecasts at the field spatial scale and daily time step. Hereafter, we refer to this collection of techniques as spatiotemporal downscaling. This brought us to the crux of this analysis: is there any forecast signal left after the spatiotemporal downscaling? What is the net outcome? Are the differences between monthly precipitation totals or average temperature at a station, and 3-month totals or averages at forecast division scales, so large that the NOAA/CPC forecasts might completely lose their already limited utility when we try to use them in agricultural management? We expect a negative impact on dependability specifically, and the open question is the degree of reduction, or loss in probabilistic skill. If a sufficient degree of dependability survives the spatiotemporal downscaling, there is reason to proceed with modeling and development of climate-forecast-dependent decision support using the methodologies in hand. If not, it might be prudent to defer until the forecasts improve enough to overcome the losses in dependability due to downscaling. Note that all possible downscaling methodologies will face similar challenges, perhaps varying in degree and with region.
The analysis results presented here are specific to our spatiotemporal downscaling methodology. In addition, these analysis results are intended to be indicative, rather than definitive or exhaustive.

METHODS

The variable we are downscaling is the forecast division 3-month departure from the climatological average for precipitation or average temperature. The utility parameter we are most concerned with is the dependability of the seasonal forecasts (Schneider and Garbrecht, 2006, hereafter SG06). By definition, dependability will decrease whenever the sign of the actual departure (positive or negative) is different at smaller space or shorter time scales than at forecast scales. An estimate of the expected reduction in dependability can be developed by doing a simple count of historical cases where the signs of departures at the different scales match (i.e., are the same sign). In other words, if the forecast division was wetter than average, was the station also wetter? Or if the 3-month total was drier than average, was the 1-month total also drier than average? Note that this analysis will not use actual seasonal forecasts; instead, we use daily station data

Table 1. Selected results for dependability from Schneider and Garbrecht (2006), tabulated by forecast direction and lead time. The study period covered 1997 through the first three months of 2005, a total of 97 forecasts at the shortest lead time. The ratios are the dependability for each forecast division, in fractional form. Dependability is defined as the number of matching (same direction) outcomes divided by the number of useful forecasts in that direction. A forecast was deemed useful if it satisfied a minimum departure from climatology. If the forecasts are reliable in the sense of getting the odds right, then these ratios should be approximately equal to 0.5. Note that small samples (arbitrarily defined as fewer than six useful forecasts) may not be good indicators of future forecast performance. Accordingly, all cases where the dependability was less than 0.5 or where there were fewer than six useful forecasts have been shaded in the table. The unshaded cases are deemed dependable at the 3-month forecast division scale.
Warm Cool Wet Dry Forecast Division 3.5 Month 6.5 Month 3.5 Month 3.5 Month N New England 1/1 2/4 2/3 1/4 0/1 0/0 0/0 0/0 NE New England 1/1 2/4 1/2 0/3 0/1 0/0 0/0 0/1 N New York 2/3 2/2 2/4 0/2 2/2 1/1 0/1 0/0 S New England 1/1 4/5 3/3 0/1 1/1 0/0 0/0 0/0 E Great Lakes 2/3 2/2 3/7 0/1 2/4 2/4 1/3 0/1 Ohio 4/6 1/2 3/5 0/0 4/4 0/0 2/6 2/5 Mid Atlantic Coast 3/5 6/9 2/5 0/0 1/1 0/0 0/2 0/0 N Appalachians 4/4 3/3 5/7 0/0 1/2 0/0 1/3 1/1 Central Appalachians 7/12 4/6 4/7 0/0 1/1 1/1 0/3 1/3 Coastal Virginia 6/9 5/8 6/10 0/0 1/1 0/0 1/2 0/0 S Appalachians 8/10 9/11 7/11 0/1 0/2 0/2 1/3 0/0 Coastal Carolinas 7/10 11/12 9/11 0/4 4/4 0/0 5/7 2/4 Interior Carolinas 10/13 9/11 6/11 0/3 2/4 2/3 5/5 1/1 Upper Michigan 6/10 6/10 6/9 0/3 4/5 1/1 3/5 0/1 N Minnesota 8/13 5/9 5/8 0/6 4/5 1/2 0/2 1/2 E North Dakota 9/12 4/6 5/6 1/6 1/3 0/2 0/1 0/1 W North Dakota 8/11 4/6 3/5 1/6 1/3 0/1 1/3 0/1 E Montana 6/9 4/5 2/4 0/5 1/2 1/2 2/4 1/4 N Central Montana 14/15 6/7 1/2 0/1 1/3 2/3 7/8 8/9 S Central Montana 12/14 4/6 2/3 0/1 2/5 2/3 5/7 6/8 W Montana 14/15 5/9 4/7 0/1 3/6 2/4 7/7 7/8 N Central Michigan 4/8 3/8 4/8 0/2 2/6 1/2 3/9 0/3 S Michigan 3/7 0/3 4/8 0/1 2/6 3/6 2/6 1/4 E Central Illinois 4/9 1/3 5/6 1/2 1/2 0/1 1/6 1/6 N Illinois 4/9 3/6 5/6 1/3 1/1 0/0 1/3 1/2 N Wisconsin 5/9 6/9 5/9 1/6 2/3 0/0 1/1 1/2 SE Minnesota 7/9 6/9 7/9 2/7 0/2 0/0 0/1 2/3 E South Dakota 6/10 4/6 3/4 1/6 1/3 0/0 0/1 3/4 Central South Dakota 5/9 1/4 3/3 2/5 2/2 0/0 0/2 0/3 W South Dakota 6/8 2/2 1/1 1/2 0/0 0/1 2/3 0/1 NE Wyoming 7/7 0/1 0/1 0/1 1/2 1/2 2/3 0/0 NW Wyoming 8/9 0/1 0/2 1/2 1/2 2/3 1/4 1/1 E Iowa 4/7 4/6 4/5 5/8 1/2 0/0 0/2 2/3 NW Iowa 6/9 3/5 3/3 3/7 1/3 0/0 1/2 2/5 Central Nebraska 7/7 1/2 0/1 1/2 0/4 1/1 0/1 3/6 S Nebraska 7/7 0/2 0/0 2/5 2/4 0/0 3/5 1/4 W NE Cheyenne 6/6 1/1 0/0 1/2 1/1 0/0 0/0 0/3 E Kentucky 6/9 3/5 5/6 0/0 2/3 1/1 1/5 1/4 W Kentucky 6/8 3/5 3/4 0/2 2/2 2/3 2/6 3/5 SE Missouri 4/7 5/7 2/3 0/2 1/1 1/1 2/4 2/3 NE Missouri 3/7 1/2 4/4 2/3 0/0 0/0 0/3 1/2 NW Missouri 6/7 
1/2 2/2 4/5 2/3 0/0 1/3 2/4 E Kansas 5/7 4/6 0/2 1/2 2/3 0/0 0/4 2/5 Central Kansas 7/7 3/4 0/0 2/3 3/4 1/1 1/5 0/4 W Kansas 7/9 4/5 0/1 2/4 4/5 1/1 1/6 0/4

averaged over single months and 3-month periods in comparison to encompassing forecast division data to develop our estimates of the frequency with which the signs of departures match. If the signs always match perfectly (frequency of 100%), there would be no loss in dependability expected due to the spatiotemporal downscaling. If the frequency is less than 100%, then we can expect the dependability score to be decreased by that factor. Our analysis examines ten years (1991-2000) of actual precipitation and average temperature data at the spatiotemporal scales in question. The study duration was chosen as a compromise between considerations related to the probabilistic forecasts and the variable nature of the observations, versus the application decision-making framework. Probabilistic forecasts require multi-year applications to realize the forecast signal and any associated practical values, or for

Forecast Division Table 1. (Continued). Warm Cool Wet Dry 3.5 Month 3.5 Month NE Colorado 7/9 5/5 2/2 1/2 2/2 1/1 0/2 1/4 SE Colorado 10/12 6/6 1/1 1/3 5/5 1/1 1/5 1/5 W Colorado 12/14 12/14 5/7 0/2 0/1 0/0 0/4 0/4 SW Wyoming 6/8 0/2 1/2 1/2 0/0 0/1 0/0 0/0 Central Tennessee 7/8 7/9 6/7 0/0 2/4 0/1 0/2 0/0 W Tennessee 6/7 6/9 5/6 0/2 2/3 0/0 0/2 0/0 Ozark Mountains 6/8 4/6 3/5 0/0 2/2 0/0 0/3 1/3 Central Oklahoma 7/10 5/7 0/1 0/1 3/8 0/3 0/3 1/4 Abilene, Texas 11/14 8/10 1/2 1/4 4/9 2/5 1/6 0/5 N High Plains Texas 13/16 8/10 2/3 0/1 4/10 3/7 0/9 0/6 N Georgia 7/9 9/14 7/14 1/4 2/3 1/3 4/4 2/2 N Alabama 7/9 7/10 8/13 1/4 0/0 0/0 2/4 0/1 Central Mississippi 9/10 6/9 5/10 1/3 2/2 0/0 2/3 1/2 S Arkansas 8/9 5/8 5/7 0/1 3/3 1/1 1/4 1/3 E Texas 9/11 7/11 3/5 1/2 12/17 9/13 1/5 2/3 Dallas, Texas 13/14 7/11 3/5 2/6 11/15 8/12 3/7 3/4 San Antonio, Texas 19/25 12/21 7/10 3/5 9/14 6/9 3/7 3/6 Far S Texas 25/33 15/22 9/13 0/5 10/11 7/9 7/9 4/6 W Central Texas 25/30 10/18 5/8 0/5 8/14 4/9 4/8 4/8 W Texas Panhandle 42/46 19/23 12/13 1/4 6/10 3/7 9/13 6/8 Jacksonville, Fla. 12/15 11/16 11/16 4/10 8/11 5/6 11/12 6/6 Central Florida 18/26 17/25 19/27 2/3 11/12 7/11 19/23 7/10 S Florida 41/47 47/52 50/53 0/2 14/16 13/17 15/20 8/12 Florida Panhandle 13/16 7/12 10/15 3/5 4/6 1/3 6/7 4/4 Coastal Louisiana 13/14 8/11 8/11 1/4 3/5 1/2 4/4 2/2 Coastal Texas, Houston 14/16 10/14 9/10 1/4 9/13 5/9 2/5 0/2 NE Washington 13/17 6/12 5/11 0/2 9/10 6/7 1/3 2/2 Pendleton, Oregon 12/16 7/11 6/9 0/1 2/4 2/3 2/2 0/0 Central Washington 18/20 9/13 7/11 0/1 5/9 4/6 3/3 2/2 Seattle, Wash. 
19/24 17/24 15/21 0/0 9/15 11/14 4/4 6/7 Coastal Washington 23/26 14/26 18/30 0/0 11/18 11/13 4/6 5/8 E Idaho 12/14 3/6 4/7 1/3 0/0 0/1 1/1 1/1 Idaho Central Mountains 16/18 7/10 5/8 0/1 4/4 3/3 2/2 1/1 SW Idaho 14/16 6/10 5/9 0/2 1/3 0/1 2/3 3/4 E Oregon 13/15 6/11 5/9 0/0 2/4 1/2 1/2 2/2 Oregon Coastal Valley 14/17 13/18 9/15 0/0 6/7 5/7 1/1 2/2 Oregon Coast 18/19 13/24 8/20 0/0 9/14 4/10 4/4 4/4 NE Utah 13/17 18/20 4/6 0/1 0/0 0/0 1/1 0/0 SE Utah 22/30 25/28 12/17 0/0 3/4 0/0 0/6 0/4 W Utah 22/31 18/25 11/18 0/0 2/2 0/0 1/2 1/3 NE Nevada 19/27 12/22 9/15 0/0 1/2 0/0 4/6 3/4 NW Nevada 13/21 19/24 11/14 0/0 4/5 0/0 5/7 4/5 Sacramento, Calif. 8/20 11/25 4/14 0/0 4/7 1/1 3/3 2/2 N Calif. Coast 15/20 13/21 9/17 0/0 5/8 0/2 2/2 1/1 Central Nevada 35/43 33/41 23/30 0/0 5/7 0/0 1/3 3/5 Fresno, Calif. 16/25 12/25 11/21 0/0 5/9 1/1 5/6 3/4 Central Calif. Coast 14/19 9/20 6/17 0/0 6/7 2/2 2/3 0/2 S Calif. Coast 19/23 15/30 9/21 0/3 7/9 2/2 5/5 1/2 SE California 36/57 36/58 28/49 0/0 6/10 2/2 8/8 2/4 Las Vegas, Nevada 56/62 57/63 56/61 0/0 8/10 3/4 9/12 6/7 SW Arizona 66/75 60/70 60/68 0/0 7/10 3/7 16/18 11/12 NE Arizona 43/51 37/47 32/39 0/0 5/8 3/6 11/17 7/9 SE Arizona 67/73 63/70 58/64 0/0 8/12 3/6 19/20 13/14 N New Mexico 15/20 13/16 3/4 0/0 6/10 1/4 2/10 1/8 E New Mexico 20/24 11/14 4/5 1/2 6/11 2/8 3/11 1/9 C New Mexico 30/38 17/24 12/16 0/0 8/11 3/6 8/14 4/9 S New Mexico 44/52 25/31 18/22 1/1 7/13 2/7 11/13 8/10

any assessment of forecast performance, so ten or more years of analysis would be preferred. Further, ten years is a short period from a climatology viewpoint (especially for precipitation); i.e., 10-year statistical descriptions only capture part of the variability. But the practical reality is that any longer period loses relevance for possible agricultural applications by

Figure 1. Locations of stations used in the correlation analysis, grouped into six regions: Pacific Northwest, Northern Great Plains, Great Lakes, Southeast, Southwest, and Southern Great Plains (TX through KS). Squares indicate sites used for both average temperature and precipitation, circles indicate sites used just for precipitation, and triangles indicate sites used just for average temperature.

individual operators. Even five years is a long time from a rancher or farmer's point of view, especially relative to typical short-term agricultural operating loans. Ten years serves as a workable compromise. We chose two locations in each of eight forecast divisions in six regions of the contiguous U.S., for a total of 96 locations for each variable (fig. 1). While we considered conducting this analysis just for regions where the forecasts have demonstrated dependability, we decided such an approach would be less than satisfactory. Climate forecast techniques continue to evolve, and it is impossible to anticipate where performance improvements might manifest first. We believe it is preferable to have an answer that will continue to be valid as the forecasts improve, and since our analysis approach does not depend on the current forecast techniques per se, this should be possible. We used actual average temperature and precipitation data over a 10-year period at station locations (NOAA/NCDC, 2005), and averaged over forecast divisions (called climate divisions by NOAA/CPC; e.g., NOAA/CPC, 2006a) on 1-month and 3-month scales, including all 12 overlapping 3-month forecast periods. Station data sites were chosen on the basis of continuity of monthly data in the record, with the majority of sites requiring no interpolation from nearby stations to fill data gaps. Only 0.1% of the precipitation data and 0.8% of the average temperature data required filling to provide continuous monthly data during the study period, with negligible impact on the resulting analysis.
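The core computation of the analysis, counting how often departures from average at two scales share the same sign, can be sketched as follows. This is a minimal illustration with invented toy numbers, not the authors' code or data:

```python
# Sketch of the frequency of matching signs (FOMS) between departures
# from average at two spatiotemporal scales. Zero departures are treated
# as non-positive here; how exact ties were handled is not stated in the text.
def foms(large_scale_departures, small_scale_departures):
    """Percent of paired cases where both departures have the same sign."""
    pairs = list(zip(large_scale_departures, small_scale_departures))
    matches = sum(1 for big, small in pairs if (big > 0) == (small > 0))
    return 100.0 * matches / len(pairs)

# Toy example: five 3-month division departures vs. 1-month station departures
division = [1.2, -0.4, 0.8, -2.1, 0.3]
station = [0.5, -0.2, -1.1, -0.9, 0.6]
print(foms(division, station))  # 80.0 -- four of the five pairs match in sign
```

A FOMS of 100% would imply no loss of dependability from downscaling; anything less scales the dependability down by the same factor, as described in the text.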
The number of cases (points of comparison) in the analysis depends on which aspect of the downscaling is under consideration (space, time, or both) and the variable (precipitation or average temperature). The time disaggregation technique developed for precipitation (Schneider et al., 2005) uses all three of the 3-month periods that include the month in question; e.g., June depends on April-May-June, May-June-July, and June-July-August forecasts and averages. The time disaggregation technique developed for average temperature uses only the centered 3-month period for the month in question; e.g., June depends only on the May-June-July forecast and averages. The differences in the techniques reflect the differences in the characteristics of precipitation (highly variable in space and time) and average temperature (less variable). Table 2 summarizes the number of cases by variable and by stage of downscaling. Since "frequency of matching signs of departures from average" is a long and clumsy phrase, we will use FOMS hereafter, usually with units of percentage. A visual example of the FOMS between 3-month precipitation totals for a forecast division, and totals for a station within that forecast division, is presented in figure 2. An example of the FOMS between 3-month and 1-month station precipitation is presented in figure 3.

Table 2. Number of cases in each step of the frequency analysis (per station, over the 10-year study period).

Precipitation:
- Spatial downscaling (forecast division to station, both 3-month periods): 120 per station (10 years, each with 12 3-month periods)
- Time disaggregation (3-month to 1-month at individual stations): 360 per station (10 years, each with 12 3-month periods, times 3)
- Both space and time (forecast division at 3 months to stations at 1 month): 360 per station (10 years, each with 12 3-month periods, times 3)

Average temperature:
- Spatial downscaling (forecast division to station, both 3-month periods): 120 per station (10 years, each with 12 3-month periods)
- Time disaggregation (3-month to 1-month at individual stations): 120 per station (10 years, each with 12 3-month periods)
- Both space and time (forecast division at 3 months to stations at 1 month): 120 per station (10 years, each with 12 3-month periods)
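To make the June example and the per-station case counts concrete, here is a small sketch (not the authors' code) of the month-to-period mapping; the function and month names are illustrative:

```python
# Sketch: which overlapping 3-month periods contribute to a single target
# month. Precipitation uses all three trimesters containing the month;
# average temperature uses only the centered trimester (Schneider et al., 2005).
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def trimester(start):
    """The 3-month period beginning at month index `start` (wraps at year end)."""
    return tuple(MONTHS[(start + k) % 12] for k in range(3))

def contributing_periods(month, variable="precipitation"):
    m = MONTHS.index(month)
    if variable == "temperature":
        return [trimester(m - 1)]                      # centered only
    # trailing, centered, and leading trimesters, in that order
    return [trimester(m - 2), trimester(m - 1), trimester(m)]

print(contributing_periods("Jun"))
# [('Apr', 'May', 'Jun'), ('May', 'Jun', 'Jul'), ('Jun', 'Jul', 'Aug')]

# Per-station case counts over the 10-year study period (Table 2):
years, periods = 10, 12
print(years * periods)      # 120 (spatial downscaling; all temperature cases)
print(years * periods * 3)  # 360 (precipitation time and combined cases)
```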

Figure 2. Scatterplot of 3-month departures from average precipitation (inches) for the forecast division (54) including Abilene, TX, versus the corresponding 3-month departures at Abilene, TX (slope of linear fit: 0.76; R = 0.78; percent matching sign = 83.3). The top left quadrant encompasses cases where the large-area precipitation departure from average was positive (wetter than average) but the station departure was negative (drier than average). Similarly, the top right quadrant holds cases where both departures were positive (wetter than average). The lower right quadrant holds cases with negative large-area departures but positive station departures, and the lower left quadrant holds cases where both departures were negative. For this forecast division and station, 100 of 120 cases had matching signs, or a frequency of matching signs (FOMS) of 83% (100/120 × 100).

Figure 3. An example showing the FOMS for 3-month versus 1-month precipitation totals for a station (3-month vs. 1-month departures from average precipitation, inches, at Abilene, TX; slope of linear fit: 0.97; R = 0.57; percent matching sign = 70.3). All months are combined on this plot, showing all contributing 3-month periods (centered, leading, trailing) for each month, producing a total of 360 points. The quadrants have similar meanings as in figure 2; for example, the top right quadrant encompasses cases where the 3-month and 1-month precipitation departures from average were both positive (wetter than average). The different symbols represent the 3-month periods contributing to the 1-month disaggregated value.
For example, for June, centered indicates comparison between the month in question (June) and the 3 month period May June July, while trailing indicates that the month in question (June) is at the end of the comparison 3 month period April May June. For this station, the matching sign cases total 253 of 360, or FOMS of 70%. trates the most complex of the comparisons, with the different 3 month periods contributing to the estimation of each 1 month precipitation departure indicated in order to illustrate the general insensitivity of FOMS to the contributing 3 month period. Since the FOMS did not depend in any systematic fashion on the relative positioning of the 1 month period, all results reported below are for the combined 360 cases. An example for both the spatial downscaling and temporal disaggregation is not shown. Generally, the FOMS in time are lower than the FOMS in space, for both variables. RESULTS Recall that our goal is to estimate the degree of impact on forecast dependability due to our spatiotemporal downscaling methodology, and that we are using the frequency of matching signs of departures from average (FOMS) in actual data at the relevant spatiotemporal scales to estimate the magnitude of the loss in dependability. FOMS will necessarily depend to some degree on the number and location of stations selected for analysis, and the length and period of record (for example, 1961 1990 versus 1971 2000). However, we expect our selected cases to be sufficient to provide guidance in this application. The spatial downscaling (forecast division to station for 3 month values), temporal disaggregation (3 month to 1 month values at stations), and the combined results (3 month forecast division vs. 1 month station values) were examined separately to provide some insight as to which aspect of the process produced the largest impact. We also searched for any indication of significant differences in FOMS with region or season, coastal vs. inland sites, and arid vs. 
humid environments. Differences in FOMS for a given month between sites can be large, but there were no consistent patterns to support separation by any of the suspected seasonal or geographic factors (e.g., coastal versus inland). As a result, we present the FOMS results for all months as a single number for each station. PRECIPITATION The FOMS for precipitation, for each station, grouped by region, for downscaling in space (forecast division average total to station total), disaggregation in time (station 3 month to 1 month totals), and the full spatiotemporal downscaling (forecast division 3 month total to station 1 month total) are presented in figure 4. The station to station variation in FOMS due to spatial downscaling is large (a range of 25% in the Great Lakes and Pacific Northwest regions), but the average FOMS value for all stations is better than might have been expected, about 80%. The spread in FOMS between stations due to temporal disaggregation is smaller (13% in the Southwest), but the magnitude is also smaller, averaging a bit over 70%. For the complete spatiotemporal downscaling, the average FOMS for precipitation is only 66%, with a spread of 14%. Stated another way, the direction of the departure from average precipitation (wetter or drier) is different for 1 month station data from that of the 3 month forecast division data roughly 1 in 3 times. This means that the dependability of the largescale seasonal precipitation forecasts will be decreased accordingly when the forecasts are applied at 1 month and local scales. 920 TRANSACTIONS OF THE ASABE
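The FOMS statistic underlying these comparisons is simply the fraction of paired departures from average whose signs agree, expressed as a percentage. A minimal sketch of the computation (function name and toy data are ours, not from the paper; zero departures are counted as positive here, a convention the paper does not specify):

```python
def foms(large_scale, local):
    """Frequency of matching signs (%) between paired departures from
    average, e.g. 3-month forecast-division vs. 1-month station values."""
    pairs = list(zip(large_scale, local))
    # A "match" means both departures are wetter/warmer than average,
    # or both are drier/cooler; zero is treated as positive here.
    matches = sum(1 for a, b in pairs if (a >= 0) == (b >= 0))
    return 100.0 * matches / len(pairs)

# Toy departures (inches), not the Abilene data:
division_3mo = [1.2, -0.5, 0.3, -2.0, 0.8]
station_1mo = [0.9, -0.1, -0.4, -1.5, 1.1]
print(foms(division_3mo, station_1mo))  # 4 of 5 signs match -> 80.0
```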

Figure 4. FOMS for precipitation departures at each station, organized by region and type of downscaling or disaggregation (panels: downscaling in space, downscaling in time, and downscaling in space and time; vertical axis: FOMS (%), 50 to 100). The abbreviations refer to the regions in figure 1: GL = Great Lakes, NGP = Northern Great Plains, PNW = Pacific Northwest, SW = Southwest, SGP = Southern Great Plains, and SE = Southeast. The mean FOMS for each region is indicated by a horizontal line. The overall mean FOMS in precipitation across all stations after downscaling and disaggregation (average of all values in the last panel) is 66.4%.

Figure 5. FOMS for average temperature departures at each station, organized by region and type of downscaling or disaggregation (panels: downscaling in space, disaggregating in time, and downscaling in space and time; vertical axis: FOMS (%), 50 to 100). The abbreviations are the same as in figure 4, and the mean FOMS for each region is indicated with a horizontal line. The overall mean FOMS in average temperature across all stations after downscaling and disaggregation (average of all values in the last panel) is 76.6%.

If the variations in space and time were statistically independent, one would expect the FOMS after downscaling and disaggregation to be the product of the (decimal) FOMS of the two components: 0.8 × 0.7 = 0.56, or 56%. The good news is that they are dependent to a degree, so the net FOMS is higher, but 66% still implies a significant loss (34%) in dependability for spatiotemporally downscaled precipitation forecasts.

AVERAGE TEMPERATURE

The FOMS for average temperature for each station, grouped by region, for downscaling in space, disaggregation in time, and the combined procedures are presented in figure 5. The pattern in scatter and magnitude of the FOMS for average temperature departures is very similar to that for precipitation: spatial downscaling produces a larger spread (20%) in FOMS than temporal disaggregation (16%), but also has a larger average FOMS. FOMS after spatial downscaling averages about 88%, after temporal disaggregation about 79%, and the average FOMS after both techniques is 76%, with a spread of 18%. In other words, the direction of the departure from average temperature (warmer or cooler) differs between the 1-month station data and the 3-month forecast division data roughly 1 in 4 times. This implies a reduction in large-scale dependability of 24% for 1-month and local applications of the seasonal average temperature forecasts. Stations in the Pacific Northwest and Southwest have slightly higher FOMS for precipitation departures than the other regions, but given the high scatter between stations and the significant terrain influences in those regions, the difference could be an artifact of the particular set of stations chosen for the analysis. The FOMS for average temperature departures also vary slightly by region, but we judge these differences to be possible sampling artifacts as well.
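The independence comparison above can be checked directly. A quick sketch using the paper's rounded average FOMS values (as decimals); following the paper's heuristic, a combined FOMS above the product of the two components indicates that the spatial and temporal sign matches are not independent:

```python
# Average FOMS values from the paper (decimals): space, time, combined.
precip = {"space": 0.80, "time": 0.70, "both": 0.66}
temp = {"space": 0.88, "time": 0.79, "both": 0.76}

for name, f in (("precipitation", precip), ("temperature", temp)):
    # Under statistical independence, the combined sign-match rate would
    # be roughly the product of the two component rates.
    expected = f["space"] * f["time"]
    print(f"{name}: expected {expected:.2f}, observed {f['both']:.2f}")
    assert f["both"] > expected  # observed exceeds the independence estimate
```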
The primary difference between the FOMS for average temperature departures and those for precipitation departures is in the magnitude of the overall average FOMS values, which are higher for average temperature. This 10% difference in average FOMS (66% versus 76%) after spatiotemporal downscaling for the two variables is a direct reflection of the more variable nature of precipitation compared to average temperature on these scales.

POST-DOWNSCALING NET DEPENDABILITY

Since our goal is an estimate of the possible impact due to spatiotemporal downscaling of future forecasts, and given the lack of strong indications of variation in FOMS with location and season, we simply averaged the FOMS across all seasons and locations to produce a number that represents the expected impact on forecast dependability for each variable. It has long been understood that the spatiotemporal correlation in temperature is higher than that for precipitation, so we expected the losses in dependability due to spatiotemporal downscaling to be smaller for average temperature and greater for precipitation, and that is what we found. The multiplicative FOMS factors are 0.67 for precipitation dependability and 0.76 for average temperature dependability. The next step is to apply these factors to the large-scale forecast dependabilities to determine where the forecasts retain sufficient dependability (in the sense of correctly predicting the odds for useful departures from climatology) to justify downscaling and examining the forecasts for possible incorporation into agricultural decision support systems. To compute the net dependability that we expect for downscaled forecasts, we multiplied the FOMS factor by the 3-month/forecast-division dependability for each forecast division (as decimals), as reported in SG06.
The SG06 dependability results were developed from forecasts issued from Jan-Feb-Mar 1997 through Jan-Feb-Mar 2005 (97 forecast cycles over a bit more than eight years), which is not a direct match to the 10-year analysis period used to develop the multiplicative FOMS factors (1991-2000). We do not expect the mismatch in analysis periods to be important in this analysis. We also eliminated all forecast divisions with fewer than six useful forecasts (less than 6% of the 97 forecasts issued during that period), expecting such low frequencies of useful forecasts (Schneider and Garbrecht, 2003) to be of little utility in agricultural applications.

Figure 6. Maps of the net dependability of spatiotemporally downscaled forecasts for the shortest lead time; the numbers are the percentages of forecasts expected to have correctly predicted the direction of 1-month station precipitation or average temperature departures. Forecast divisions with low usefulness (fewer than six forecasts satisfying the 8% departure threshold in SG06) are left blank. Regions with net dependabilities of 50% or larger are emphasized with shading.
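The screening just described can be sketched as follows. The FOMS factors, the six-forecast minimum, and the 50% threshold are from this study; the division names and SG06 dependability values below are hypothetical stand-ins, not data from SG06:

```python
FOMS_FACTOR = {"precip": 0.67, "temp": 0.76}  # multiplicative factors from this study
MIN_USEFUL = 6    # minimum useful forecasts out of the 97 SG06 cycles
THRESHOLD = 0.50  # required minimum net dependability

def net_dependability(sg06_dependability, variable):
    """Expected dependability after spatiotemporal downscaling."""
    return FOMS_FACTOR[variable] * sg06_dependability

# Hypothetical forecast divisions: (useful forecast count, SG06 dependability).
divisions = {"div_A": (12, 0.80), "div_B": (4, 0.90), "div_C": (20, 0.70)}

promising = [
    name for name, (useful, dep) in divisions.items()
    if useful >= MIN_USEFUL and net_dependability(dep, "precip") > THRESHOLD
]
print(promising)  # only div_A: 0.67 * 0.80 = 0.536 > 0.50; div_B has too few useful forecasts
```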


The resulting net dependability for each forecast division at the shortest lead time (0.5 months), separated by variable and direction, is shown in figure 6. Net dependability results for longer lead times can be derived in the same manner from the dependabilities reported in figures 3 through 6 of SG06. The numbers on the maps in figure 6 can be interpreted as the percentage of forecasts expected to have correctly predicted the direction of 1-month station precipitation or average temperature departures when the forecasts were for conditions at least 8% wetter, drier, warmer, or cooler than the 30-year average (our threshold for "usefulness"), per the definition of dependability in SG06. Since the SG06 analysis used the average (mean) of the 3-month, large-scale climatology as the dividing point, by definition a dependable forecast (one that correctly predicts the odds) will have a dependability of 50%. (The skewness of precipitation does produce a difference between the mean and the median (50% probability), but we judge this difference to be small enough to ignore in this context.) From a practical viewpoint, a net dependability greater than 50% is obviously preferred. Despite the variability in our estimates of the losses in dependability due to downscaling (the FOMS multiplicative factors), we continue to use 50% net dependability as the required minimum for designating a forecast division as promising for further investigation of seasonal forecast applications at field scale. Note that these are expected values, in the sense of averages over 10 years. As shown in the FOMS results, individual stations can experience significantly better or worse results over a 10-year period. The biggest losses in net dependability from the spatiotemporal downscaling are for the precipitation forecasts.
The impact of the 66% FOMS for precipitation departures is significant, reducing the number of forecast divisions with at least six useful forecasts and dependabilities >50% from 33 to 8 for wetter-than-average forecast departures, and from 22 to 13 for drier-than-average forecast departures. (There are 102 forecast divisions covering the contiguous U.S.) With their larger FOMS, the average temperature forecasts fare much better: the number of forecast divisions with at least six useful forecasts for warmer-than-average conditions and dependabilities >50% dropped from 91 to 78. Unfortunately, for the cooler-than-average forecasts, the single forecast division that satisfied the dependability criteria at the 3-month/forecast-division scale dropped below 50% in net dependability.

CONCLUSION

The point of this analysis was to determine whether a sufficient degree of dependability survives our spatiotemporal downscaling methodology to justify possible modeling and development of climate-forecast-dependent decision support using the current NOAA/CPC climate forecasts and the methodologies in hand. The resulting guidance is mixed, depending on region and forecast variable, with the forecasts for above-average temperature emerging as worthy of consideration in 78 of 102 forecast divisions, covering most of the contiguous U.S. The Northeast, the Great Lakes, parts of the Northern Great Plains, interior California, and northwest Nevada are the only regions where net dependability is insufficient, precluding immediate consideration of the warmer-than-average forecasts. Forecasts for wetter-than-average conditions retained sufficient net dependability to encourage further development in only 8 of 102 forecast divisions, and forecasts for drier-than-average conditions in only 13 of 102, all in regions well known to experience the strongest ENSO impacts on precipitation.
These forecast divisions are located in Florida, south Texas, southwest New Mexico, Arizona, central and southern California, and parts of Oregon, Washington, Idaho, and Montana. Conversely, forecasts for cooler-than-average temperature do not retain sufficient net dependability after downscaling to be an attractive option in any part of the contiguous U.S. at this time. For anyone considering the use of downscaled NOAA/CPC forecasts in agricultural decision support, we suggest a few checks before proceeding, due to the large station-to-station variability in correlations and aspects of the seasonal timing of the forecasts. Net dependability is necessary, but not sufficient, to guarantee the utility of seasonal forecasts for a particular application. For example, useful forecasts might not be offered during the months most critical to a particular crop, or the magnitude of the forecast departures from average might not be large enough to induce a discernible impact on productivity or financial outcome. Those located in a region with net downscaled dependability >50% should examine the climate forecast time series for the encompassing forecast division (digitally available at NOAA/CPC, 2006b) to determine whether the timing and frequency of forecasts appear promising for the crop or forage of interest. If potentially useful forecasts are offered during the months of interest, then calculate the FOMS between the 3-month forecast division values (NOAA/CPC, 2006c) and the closest 1-month station values for precipitation or average temperature. If the FOMS are close to (or better than) the average numbers reported herein, consider downscaling the NOAA/CPC climate forecasts to pursue the development of climate-forecast-based decision support. An alternative to the use of downscaled NOAA/CPC forecasts is the development of custom climate forecast tools for a particular crop and location.
This approach is being used successfully for a number of crops in Florida (e.g., Southeast Climate Consortium, 2008), but it can require a significant development effort in comparison to that required to downscale the NOAA/CPC forecasts. The failures in net dependability reported here are primarily the result of the limits in our collective knowledge of the sources of climate variability for individual locations, and are only weakly related to the choice of downscaling methodology. All general climate forecasts, regardless of the agency producing them, suffer limitations similar to those of the NOAA/CPC forecasts examined here (e.g., Goddard et al., 2003), and lose skill when downscaled (e.g., Gong et al., 2003). The skill of the NOAA/CPC seasonal climate forecasts can be expected to improve on large spatiotemporal scales as innovations continue to be tested, demonstrated, and implemented. However, the fundamental mismatch between forecast scales and application scales will limit the utility of any such improvements for agricultural applications. Currently, the seasonal forecasts are based on averages over NOAA/NCDC climate divisions, huge areas that encompass significant variability in seasonal precipitation. This variability is the basis of the 80% FOMS in spatial downscaling of precipitation. Forecasts developed for smaller areas, and in particular the experimental climate divisions based on precipitation variability (Wolter and Allured, 2007), have the potential to avoid most of the spatial downscaling issue, assuming that they display dependability comparable to or slightly less than the current forecasts. Noting that the loss in net dependability due to temporal disaggregation is distinctly larger than that due to spatial downscaling, it would appear that the bigger improvement could be achieved if the climate forecasts were offered for single months rather than 3-month periods. Again, this would depend on the skill of the 1-month forecasts, but the dependability would not need to be as high, since one could avoid the 3-month to 1-month disaggregation completely. Regardless, the availability of smaller-spatial-scale, 1-month forecasts would significantly simplify the evaluation and implementation of the NOAA/CPC forecasts in agricultural applications. On this basis alone, such a line of development deserves consideration as a possible improvement to the current NOAA/CPC forecasts.

REFERENCES

Barnston, A. G., Y. He, and D. A. Unger. 2000. A forecast product that maximizes utility for state-of-the-art seasonal climate prediction. Bull. American Meteor. Soc. 81(6): 1271-1279.
Garbrecht, J. D., J. M. Schneider, and X. J. Zhang. 2004. Downscaling NOAA's seasonal precipitation forecasts to predict hydrologic response. In Proc. 18th Conference on Hydrology. Paper 6.8. Boston, Mass.: American Meteorological Society.
Goddard, L., A. G. Barnston, and S. J. Mason. 2003. Evaluation of the IRI's net assessment seasonal climate forecasts: 1997-2001. Bull. American Meteor. Soc. 84(12): 1761-1781.
Gong, X., A. G. Barnston, and M. N. Ward. 2003. The effect of spatial aggregation on the skill of seasonal precipitation forecasts. J. Climate 16(18): 3059-3071.
Livezey, R. E., and M. M. Timofeyeva. 2008. Insights from a skill analysis of the first decade of long-lead U.S. three-month temperature and precipitation forecasts. Bull. American Meteor. Soc. 89 (in press).
Meyers, J. C., M. Timofeyeva, and A. C. Comrie. 2008. Developing the local 3-month precipitation outlook (abstract). In Proc. 19th Conference on Probability and Statistics. Paper P1.4. Boston, Mass.: American Meteorological Society. Available at: www.confex.com/ams/htsearch.cgi. Accessed 2 May 2008.
NOAA/CPC. 2006a. Seasonal outlooks: Probability of exceedance (POE) maps. Silver Spring, Md.: NOAA Climate Prediction Center. Available at: www.cpc.ncep.noaa.gov/products/predictions/long_range/poe_index.php?lead=1&var=p. Accessed 12 December 2006.
NOAA/CPC. 2006b. Probability of exceedance (POEs) of CPC's long-lead seasonal forecasts for temperature and precipitation, since December 1994. Silver Spring, Md.: NOAA Climate Prediction Center. Available at: www.cpc.ncep.noaa.gov/pacdir/NFORdir/HUGEdir2/hut.html. Accessed 12 December 2006.
NOAA/CPC. 2006c. CPC outlook archive: Observation data. Silver Spring, Md.: NOAA Climate Prediction Center. Available at: www.cpc.ncep.noaa.gov/pacdir/nfordir/hugedir2/huo.html. Accessed 12 December 2006.
NOAA/NCDC. 2005. Surface data: Daily cooperative station data. Asheville, N.C.: NOAA National Climatic Data Center. Available at: www.ncdc.noaa.gov/oa/climate/climatedata.html#daily. Accessed 20 June 2005.
Schneider, J. M., and J. D. Garbrecht. 2002. A blueprint for the use of NOAA/CPC precipitation climate forecasts in agricultural applications. In Proc. 3rd Symposium on Environmental Applications. Paper J9.12. Boston, Mass.: American Meteorological Society.
Schneider, J. M., and J. D. Garbrecht. 2003. A measure of the usefulness of seasonal precipitation forecasts for agricultural applications. Trans. ASAE 46(2): 257-267.
Schneider, J. M., and J. D. Garbrecht. 2006. Dependability and effectiveness of seasonal forecasts for agricultural applications. Trans. ASABE 49(6): 1737-1753.
Schneider, J. M., J. D. Garbrecht, and D. A. Unger. 2005. A heuristic method for time disaggregation of seasonal climate forecasts. Weather and Forecasting 20(2): 212-221.
Southeast Climate Consortium. 2008. AgClimate: A service of the Climate Consortium. Tallahassee, Fla.: Florida State University. Available at: www.agclimate.org/development/apps/agclimate/controller/perl/agclimate.pl. Accessed 5 May 2008.
Wolter, K., and D. Allured. 2007. New climate divisions for monitoring and predicting climate in the U.S. Intermountain West Climate Summary 3(5): 2-6.
