On the Use of Forecasts when Forcing Annual Totals on Seasonally Adjusted Data


The 34th International Symposium on Forecasting
Rotterdam, The Netherlands, June 29 to July 2, 2014

On the Use of Forecasts when Forcing Annual Totals on Seasonally Adjusted Data

Michel Ferland, Susie Fortier and Margaret Wu (1)

Abstract

The use of forecasts in seasonal adjustment software such as X-12-ARIMA or the SAS X12 procedure is well known to reduce revisions. In practice, seasonally adjusted series are also often benchmarked to the annual totals of the original (unadjusted) series to maintain consistency between the two sources. This step is done through an X-12 option called FORCE. Under this option, the use of forecasts in the seasonal adjustment process influences the benchmark for the last incomplete year of data: if forecasts are used and cover the rest of the incomplete year, they contribute to an explicit benchmark; if not, the benchmark is implicit and relies only on the assumptions of the specific benchmarking method (e.g. modified Denton in PROC X12). Once the last year of data becomes complete as more data are obtained, there is a switch from the implicit to the explicit benchmark when forecasts are not used. The impact of this behaviour on revisions to the forced seasonally adjusted data was studied empirically using data from various sources. Both month-to-month revisions in the first twelve months following the initial (concurrent) estimate and revisions to a final estimate were considered.

Keywords: Benchmarking, modified Denton method, X-12-ARIMA, PROC X12.

1. Introduction

This paper presents the results of an empirical study on the impact of forecasts when forcing annual totals during seasonal adjustment. The motivation for this study arose during the 2011 annual seasonal adjustment review of the Employment Insurance program at Statistics Canada. In addition to the usual review of seasonal adjustment models and parameters, we were also implementing a change of production tool for seasonal adjustment at that time, going from the U.S. Census Bureau X-12-ARIMA program to the SAS X12 procedure.

Although both tools essentially produce the same seasonally adjusted data, there is a small difference in the benchmarking method used when forcing annual totals. While the modified additive first-difference Denton method (Denton, 1971; Cholette, 1984) is available in PROC X12, recent versions of X-12-ARIMA (Monsell, 2007) propose a more flexible approach based on a simplified version of the Cholette-Dagum regression-based method (Dagum and Cholette, 2006; Quenneville et al., 2006).

Although small, the differences between the forced seasonally adjusted Employment Insurance data obtained with PROC X12 and with X-12-ARIMA were still larger than expected, especially for series for which we did not use forecasts. The investigations to identify the source of these differences eventually led to a larger-scale empirical study covering data from other Statistics Canada surveys and programs.

(1) Statistics Canada, 150 Tunney's Pasture Driveway, Ottawa, Ontario, Canada, K1A 0T6. E-mail: michel.ferland@statcan.gc.ca, susie.fortier@statcan.gc.ca, margaret.wu@statcan.gc.ca.

Before presenting the highlights of the study in section 4 and the conclusions in section 5, sections 2 and 3 first set the stage by discussing benchmarking when forcing annual totals during seasonal adjustment and by addressing the impact that forecasts have in this context. Although the case of monthly data is used throughout this paper, the discussion also applies to quarterly data.

2. Forcing Annual Totals and Benchmarking

When forcing annual totals during seasonal adjustment, the goal is simply to impose the annual totals of the original data on the seasonally adjusted data. Since annual totals from both sources are usually close to each other, we often use the term "preserving" annual totals during seasonal adjustment. For people familiar with X-12-ARIMA, forcing annual totals means that we are ultimately interested in the seasonally adjusted data from table D11A instead of table D11.

The main motivation behind annual total preservation in seasonal adjustment comes from a need for consistency between both sources of data, a typical requirement in systems of National Accounts. Quenneville and Fortier (2012) also demonstrate how movement preservation can be improved when restoring accounting constraints in time series by solving the problem in two steps, the first step corresponding to the preservation of annual totals during the seasonal adjustment stage.

In practice, annual total preservation is implemented using time series benchmarking techniques, which aim at imposing the level of the benchmarks (annual totals from the original data in this context) while preserving the month-to-month movement in the seasonally adjusted data. This subject is covered in detail in Quenneville et al. (2006).

Annual total benchmarking in the SAS X12 procedure is specified with the FORCE option available in the X11 statement. It is based on the modified additive first-difference Denton method, which was also available in X-11-ARIMA and early versions of X-12-ARIMA. This method comes down to finding the solution of the following minimization problem:

    \min_{\theta_t} \; \sum_{t=2}^{T} \left[ (\theta_t - s_t) - (\theta_{t-1} - s_{t-1}) \right]^2    (1)

subject to the constraints

    \sum_{t \in m} \theta_t = a_m = \sum_{t \in m} y_t, \qquad m = 1, \ldots, M,

where s_t are the seasonally adjusted estimates, θ_t are the forced seasonally adjusted estimates (the resulting benchmarked estimates), y_t are the original series data points and a_m the M annual totals calculated from the original data (the benchmarks). This formulation of the objective function in equation (1) emphasizes movement preservation, as we are trying to minimize changes in the month-to-month movement (month-to-month difference in additive mode) of the seasonally adjusted data before and after benchmarking.

In X-12-ARIMA, annual total preservation is specified with the FORCE spec. It is based on a simplified version of the Cholette-Dagum regression-based benchmarking method, for which the minimization problem in its additive form becomes, given a parameter ρ,

    \min_{\theta_t} \; (1 - \rho^2)\,(\theta_1 - s_1)^2 + \sum_{t=2}^{T} \left[ (\theta_t - s_t) - \rho\,(\theta_{t-1} - s_{t-1}) \right]^2    (2)

subject to the same annual total constraints as in equation (1).
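To make the two objective functions concrete, the following Python sketch (our own illustration, not the PROC X12 or X-12-ARIMA implementation) solves the equality-constrained least-squares problem of equation (2) through its KKT system. The function name force_totals and the assumption of complete calendar years of twelve months are ours; setting rho = 1 reproduces the modified Denton solution of equation (1).

    import numpy as np

    def force_totals(s, benchmarks, rho=1.0):
        """Benchmark a monthly seasonally adjusted series to annual totals.

        s          : 1-D array of seasonally adjusted values, starting in January.
        benchmarks : dict {year_index: annual_total}, where year_index 0 covers
                     s[0:12], year_index 1 covers s[12:24], and so on (complete
                     years only).
        rho        : movement-preservation parameter; rho = 1 reproduces the
                     modified additive Denton method of equation (1).

        Returns the forced (benchmarked) series theta = s + corrections.
        """
        s = np.asarray(s, dtype=float)
        T = len(s)

        # Quadratic form of equation (2) on the corrections e = theta - s:
        # (1 - rho^2) e_1^2 + sum_{t>=2} (e_t - rho e_{t-1})^2 = e' (D'D) e.
        D = np.zeros((T, T))
        D[0, 0] = np.sqrt(max(1.0 - rho ** 2, 0.0))   # this row vanishes when rho = 1
        for t in range(1, T):
            D[t, t], D[t, t - 1] = 1.0, -rho
        Q = D.T @ D

        # Constraints: within each benchmarked year m, the corrections must absorb
        # the annual discrepancy a_m - sum(s over year m).
        years = sorted(benchmarks)
        J = np.zeros((len(years), T))
        d = np.zeros(len(years))
        for i, m in enumerate(years):
            J[i, 12 * m:12 * (m + 1)] = 1.0
            d[i] = benchmarks[m] - s[12 * m:12 * (m + 1)].sum()

        # Solve the KKT system of the equality-constrained minimization.
        K = np.block([[2.0 * Q, J.T], [J, np.zeros((len(years), len(years)))]])
        e = np.linalg.solve(K, np.concatenate([np.zeros(T), d]))[:T]
        return s + e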

As we will illustrate later in this section, ρ is linked to movement preservation. Note that the multiplicative (proportional) form of the simplified Cholette-Dagum regression-based method is also available in X-12-ARIMA. Notice how using ρ = 1 in equation (2) returns equation (1), corresponding to the modified Denton method. The difference between X-12-ARIMA and PROC X12 with regard to annual total benchmarking therefore basically comes down to the value of ρ.

To illustrate the impact of the parameter ρ on the benchmarking solution of the minimization problem formulated in equation (2), we will study the corrections, or adjustments, that need to be applied to the seasonally adjusted series in order to obtain the forced seasonally adjusted series. Figure 1 plots the correction lines, or curves, corresponding to the benchmarking solution for a fictitious example with monthly data available up to November 2006, using four values of ρ: 0, 0.7, 0.9 and 1. The red bars represent the average monthly discrepancies that need to be allocated, in an additive way, to the seasonally adjusted series in order to obtain the target annual totals from the original data. For example, the discrepancy between both sources in year 2000 is 2.5 per month, which means that a total of 30 units needs to be added to the seasonally adjusted data in order to reach the target annual total in 2000.

When ρ = 0, the corrections (in purple) exactly match the average monthly discrepancies (red bars), resulting in large changes in the December-to-January movements every year, as illustrated by the large vertical shifts in the correction line. Better movement preservation is achieved when the correction line is smooth and becomes a curve without apparent breaks or large vertical shifts, as illustrated in Figure 1 with values of ρ of 0.7 (in orange), 0.9 (in blue) and 1 (in green). The parameter ρ is referred to as the movement preservation parameter, with values ranging from 0 (no movement preservation) to 1 (maximum preservation).

Figure 1: Impact of the parameter ρ on the annual total benchmarking additive corrections for a fictitious example
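For readers who want to reproduce the flavour of Figure 1, the following usage example (hypothetical numbers, reusing the force_totals sketch above) compares the corrections in an incomplete final year for the four values of ρ shown in the figure.

    import numpy as np

    # Hypothetical series: six complete years plus eleven months of an incomplete
    # year, with each complete year's original-data total 30 units (2.5 per month)
    # above the seasonally adjusted total, as in the fictitious example of Figure 1.
    rng = np.random.default_rng(0)
    s = 100 + rng.normal(0.0, 1.0, 6 * 12 + 11)
    benchmarks = {m: s[12 * m:12 * (m + 1)].sum() + 30.0 for m in range(6)}

    for rho in (0.0, 0.7, 0.9, 1.0):
        corrections = force_totals(s, benchmarks, rho) - s   # force_totals: sketch above
        print(f"rho = {rho:.1f}: mean correction over the incomplete year = "
              f"{corrections[-11:].mean():6.3f}")
    # Smaller values of rho pull the corrections in the year without an annual total
    # toward zero more quickly, while rho = 1 carries the last complete year's level
    # of correction (about 2.5 here) forward.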

It is interesting to note in Figure 1 that while the corrections with ρ > 0 are reasonably close to each other in the years for which there is an annual total (years with a red bar), the differences between the corrections are much larger when there is no annual total. This is illustrated in Figure 1 with the shaded area for year 2006, where an annual total does not yet exist as data are only available up to November. In such cases the value of ρ plays an important role in the benchmarking solution for the last year, with corrections going to zero (no correction) faster as the value of ρ decreases. To each value of ρ actually correspond implicit benchmarks for 2006 and beyond; these implicit benchmarks rely only on assumptions regarding future unknown discrepancies (future red bars). In particular, notice how future discrepancies are assumed to remain at a constant level when ρ = 1, while they are assumed to eventually reach zero when ρ < 1. Chen and Wu (2006) and Dagum and Cholette (2006) suggest using ρ = 0.9 for monthly data, as it generally provides excellent movement preservation while minimizing revisions to the benchmarked series resulting from the incorporation of the next benchmark when it becomes available.

3. Forecasting in the Context of Seasonal Adjustment with Forced Annual Totals

As shown by Dagum (1982), forecasts help reduce revisions to concurrent seasonally adjusted estimates. The general guideline at Statistics Canada is to use twelve months of regARIMA forecasts when their quality is deemed good. In practice, the quality of the forecasts is considered good when the average forecast error over the last three years is smaller than 15% and the regARIMA model adequacy diagnostics are acceptable. Otherwise, when the quality is deemed poor, forecasts are not used.

When forcing annual totals, forecasts also provide a benchmark for the last incomplete year that gradually converges to the true annual total in December as more data points become available and replace forecasted values. This explicit benchmark limits the impact of the choice of ρ on the benchmarking solution for the last incomplete year by replacing the implicit benchmark associated with the value of ρ.

Figure 2 on the next page plots the corrections corresponding to the benchmarking solution for the same fictitious example as in Figure 1, keeping only the lines for ρ = 0.9 (in blue) and ρ = 1 (in green), respectively corresponding to the values used with X-12-ARIMA and PROC X12. The area shaded in green in Figure 2 illustrates the situation where data are available up to November 2006 and forecasts are not used, resulting in the use of an implicit benchmark that depends on ρ for the last year (2006), since an annual total is not yet available. The area shaded in purple can instead be seen as illustrating the case where data would only be available up to November 2005 while a year's worth of forecasts is used (i.e. up to November 2006), resulting in the use of an explicit benchmark for the last year (2005) this time around, since an annual total is available; this annual total is composed of eleven true original data points and one forecast. Notice how the differences between the two correction lines are much smaller when forecasts are used and an explicit benchmark independent of ρ is available for the last year (purple shaded area) than when forecasts are not used and an implicit benchmark that depends on ρ is used instead (green shaded area). This ultimately explains the larger differences we noticed in the Employment Insurance estimates between X-12-ARIMA and PROC X12 when forecasts were not used.
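This mechanism can be checked numerically with the force_totals sketch from section 2 (all numbers hypothetical): completing the last year with a forecasted month supplies an explicit benchmark, and the ρ = 0.9 and ρ = 1 solutions for that year then nearly coincide, whereas they diverge when the benchmark stays implicit.

    import numpy as np

    # Hypothetical series ending in November of its sixth year; force_totals() is the
    # sketch from section 2, and 100.0 / +30.0 are placeholder values for the December
    # forecast and for the annual discrepancy of the forecast-completed year.
    rng = np.random.default_rng(1)
    s = 100 + rng.normal(0.0, 1.0, 5 * 12 + 11)
    benchmarks = {m: s[12 * m:12 * (m + 1)].sum() + 30.0 for m in range(5)}

    s_fcst = np.append(s, 100.0)                      # series completed with a forecast
    bench_fcst = {**benchmarks, 5: s_fcst[60:72].sum() + 30.0}

    for label, series, b in [("implicit benchmark (no forecast)", s, benchmarks),
                             ("explicit benchmark (with forecast)", s_fcst, bench_fcst)]:
        gap = force_totals(series, b, 0.9) - force_totals(series, b, 1.0)
        print(f"{label}: max |rho = 0.9 vs rho = 1| over the last year = "
              f"{np.abs(gap[60:71]).max():.3f}")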

Figure 2: Illustration of the impact that forecasts can have on the corrections for incomplete years

4. Empirical Study

Knowing that the value of ρ cannot be specified in PROC X12 and that ρ mainly has an impact on the forced seasonally adjusted data for the last year when forecasts are not used, the question is: should forecasts always be used when forcing annual totals, even for series for which the forecast quality is deemed poor? Or, stated differently, knowing that using forecasts helps reduce differences in the forced seasonally adjusted data for the last year obtained from X-12-ARIMA (our primary analytical tool and former production tool) and PROC X12 (our new production tool), what would be the impact of using poor forecasts when forcing annual totals? Those questions were investigated through the empirical study presented in this section.

The study consisted of a revision analysis of the forced seasonally adjusted estimates using the X-12-ARIMA HISTORY spec with the Denton method (ρ = 1) in order to mimic benchmarking as implemented in PROC X12. We studied month-to-month revisions to the forced seasonally adjusted estimates over the first twelve lags (i.e. revisions from lag 0 to lag 1, lag 1 to lag 2, ..., lag 11 to lag 12) and revisions to the final estimates at lags 0, 12, 24, 36, 48 and 60; that is, differences between the final or best forced seasonally adjusted estimates and the initial estimates at lag 0, the 1-year-later estimates at lag 12 and so on, up to the 5-year-later estimates at lag 60. We compared the impact on these revisions of using forecasts or not, for series where the quality of the forecasts was deemed good and for those where it was considered poor, using the average forecast error in the last three years as the criterion, with a threshold of 15%.
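The two revision measures can also be stated in code form. The sketch below is our own illustration of the definitions above: it works on a simple matrix of estimate vintages rather than on the actual X-12-ARIMA HISTORY output, and it averages over all reference months without the breakdowns by initial or revision month used in Figures 3 to 6.

    import numpy as np

    def revision_summaries(vintages):
        """vintages[v, t] is the forced seasonally adjusted estimate of month t
        computed with data available up to month v (np.nan where no estimate exists
        yet); the last row is treated as the final estimate.  The earlier estimate
        is used as the denominator of the percent revisions (an assumption).

        Returns:
          m2m[k]   : average absolute percent revision from lag k to lag k+1, k = 0..11
          final[j] : average absolute percent revision between the final estimate and
                     the lag-(12*j) estimate, for lags 0, 12, 24, 36, 48 and 60
        """
        V, T = vintages.shape
        last = vintages[-1]

        m2m = []
        for k in range(12):
            revs = [abs(vintages[t + k + 1, t] - vintages[t + k, t]) / abs(vintages[t + k, t])
                    for t in range(T) if t + k + 1 < V
                    and np.isfinite(vintages[t + k, t]) and np.isfinite(vintages[t + k + 1, t])]
            m2m.append(100 * np.mean(revs) if revs else np.nan)

        final = []
        for lag in (0, 12, 24, 36, 48, 60):
            revs = [abs(last[t] - vintages[t + lag, t]) / abs(vintages[t + lag, t])
                    for t in range(T) if t + lag < V and np.isfinite(vintages[t + lag, t])]
            final.append(100 * np.mean(revs) if revs else np.nan)

        return np.array(m2m), np.array(final)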

We used a representative set of 1,500 series from the following five monthly programs at Statistics Canada: Employment Insurance, Food Services, International Trade, International Travel and Manufacturing, with about 75% of the series considered to have good forecasts. The series contained about 15 years of data on average and covered the most recent downturn in 2008. The results of the empirical study are summarized in the remainder of this section. All the graphs presented in the following pages represent average absolute percent revisions (differences) across all data points in the revision window, ranging from 6 to 12 years depending on the length of the seasonal filter, and across either all series, all series with good forecasts or all series with poor forecasts.

Figure 3 summarizes month-to-month revisions to the forced seasonally adjusted estimates over the course of a year, or the first twelve lags, when using forecasts. For example, the darker blue line on the leftmost part of the graph represents the average absolute percent month-to-month revisions to initial January estimates, which are first revised in February, then again in March, and so on until the 12th lagged revision occurring in January of the next year. The revisions for all twelve months display a bowl shape when using forecasts, which is the result of the combined effect of the trend and seasonal filters. The larger revisions at the early lags come mostly from the trend filter, while those at the later lags come mostly from the seasonal filter, as next year's forecasted value is updated as we get closer to the real value one year later.

Figure 3: Average month-to-month revisions when forecasts are used (by initial month)

Figure 4 on the next page also presents average percent month-to-month revisions to the forced seasonally adjusted estimates, but when forecasts are not used. Revisions display a bowl shape here again, but the revisions associated with the later lags remain relatively small until we reach a new value for the same month one year later. This is explained by the fact that until we reach that point in time, revisions are mainly caused by the trend filter, since we do not forecast next year's value. However, the most striking difference compared to Figure 3 is the large revisions that occur in December when forecasts are not used, as illustrated by the large spike for the revision month of December in Figure 4.

Figure 4: Average month-to-month revisions when forecasts are not used (by initial month)

When we reach December and forecasts are not used, a new annual total suddenly becomes available for the last year. It is this switch from an implicit to an explicit benchmark when December is reached that causes the large revisions to the forced seasonally adjusted estimates. This situation does not occur when forecasts are used (see Figure 3), since an annual total (an explicit benchmark) is always available for the last year. This annual total, mostly composed of forecasts at first, is revised every month as new data become available during the year and gradually converges to the true total in December. In fact, there is also a switch from an implicit to an explicit benchmark when December is reached and forecasts are used, but for the following year. Looking closely at Figure 3, a small spike is actually noticeable in December, but nothing that compares to the amplitude of the spike in Figure 4.

Figure 5 on the next page again summarizes average month-to-month revisions to the forced seasonally adjusted estimates that occur over the first year, but only for series with good forecasts. Averages have been recalculated based on the month in which the revisions occur, emphasizing the evolution of the revisions throughout the year. The first thing to note is that revisions are relatively stable throughout the year, except for larger revisions in December when forecasts are not used (in blue), as noted earlier in Figure 4. Revisions when forecasts are used (in red) are slightly larger than when forecasts are not used (in blue) prior to December. These added revisions are caused by the updating of the forecasted annual total (the explicit benchmark) for the last year as new data points become available. But when we compare the overall average over the twelve months, average revisions are about the same whether forecasts are used or not, at 0.40% and 0.39% respectively. Although using forecasts does not really reduce the overall amount of revisions to the forced seasonally adjusted estimates for series considered to have good forecasts, it distributes them more evenly across the year, which is generally preferable to having large revisions every December when forecasts are not used.

Figure 5: Average month-to-month revisions for series with good forecasts

In a similar fashion, Figure 6 presents the average month-to-month revisions to the forced seasonally adjusted estimates, but for series considered to have poor forecasts. Average revisions in Figure 6 generally behave the same way as in Figure 5, except that they are larger on average, at 0.90% when using forecasts and 0.93% when not using forecasts, and not quite as stable throughout the year. This can be explained by the fact that series with poor forecasts generally correspond to less well-behaved series.

Figure 6: Average month-to-month revisions for series with poor forecasts

Overall, using poor forecasts does not seem to increase revisions to the forced seasonally adjusted estimates. And although month-to-month revisions are not really reduced either on average, large revisions every December are again avoided when forecasts are used.

Revisions to the final forced seasonally adjusted estimates are presented in Figure 7. This graph presents average absolute percent differences between the final or best forced seasonally adjusted estimates, obtained several years later, and the initial estimates at lag 0, the 1-year-later estimates at lag 12 and so on, up to the 5-year-later estimates at lag 60. Four lines are included in the graph according to the quality of the forecasts and whether the forecasts are used or not.

Figure 7: Average revisions to final estimates

As expected, all revisions converge to zero as we advance in time and get closer to the final estimates, and revisions for series with good forecasts (in red) are smaller than for series with poor forecasts (in blue). It is particularly interesting to note, however, that average revisions are systematically smaller when forecasts are used (dashed lines versus solid lines), even when they are considered poor. Using poor forecasts therefore does not seem to generate more revisions to the final forced seasonally adjusted estimates either; on average, it is rather the opposite.

5. Conclusion

In summary, results from the empirical study using the Denton approach (ρ = 1), as used by PROC X12, show that whether forecasts are good or poor, average month-to-month revisions to the forced seasonally adjusted estimates over the first year are about the same whether forecasts are used or not.

However, revisions show a significant increase in December when forecasts are not used, which is the result of the switch from an implicit to an explicit benchmark as the last year becomes complete. This situation does not occur when using twelve months of forecasts, since an annual total (an explicit benchmark) is always available for the last year. Results from the empirical study also show that whether forecasts are good or poor, average revisions to the final forced seasonally adjusted estimates are systematically smaller when forecasts are used.

In light of these results, some changes were made in the application of Statistics Canada's Quality Guidelines on Seasonal Adjustment (Statistics Canada, 2009) when forcing annual totals. Using a year's worth of forecasts is now the recommended default approach when annual totals are forced in production with the SAS X12 procedure, even for series for which the forecasts are considered poor. In addition to globally improving revision diagnostics on the forced seasonally adjusted estimates, forecasts have the advantage of reducing differences originating from the benchmarking methods available in PROC X12 (Denton approach) and X-12-ARIMA (Cholette-Dagum approach), our primary analytical tool and former production tool. More specifically, completing the last year with forecasts limits the impact of using different values of ρ in PROC X12 (ρ = 1) and X-12-ARIMA (ρ = 0.9) by replacing the ρ-dependent implicit benchmark for the last incomplete year with an explicit benchmark that does not depend on ρ.

This being said, the fact that forecasts are beneficial, on average, when forcing annual totals does not protect us against specific cases where they might cause problems. For this reason, we monitor forecasts on a regular basis and intervene when necessary. Examples of such interventions include using external forecasts, deactivating forecasts temporarily, or changing additive outliers to level shifts or ramps, which can be helpful during turning points by improving the level of the forecasts. As with other seasonal adjustment options, the recommended default approach for the use of forecasts can be overridden when appropriate. As always, proper time and effort should be put into the analysis of the series and into the initial selection and maintenance of options.

References

Chen, Z.G. and Wu, K.H. (2006): Comparison of Benchmarking Methods with and without a Survey Error Model, International Statistical Review, 74, pp. 285-304.

Cholette, P. (1984): Adjusting Sub-Annual Series to Yearly Benchmarks, Survey Methodology, 10, pp. 35-49.

Dagum, E.B. (1982): Revisions of the Seasonally Adjusted Data Due to Filter Changes, Proceedings of the Business and Economic Statistics Section, American Statistical Association, pp. 39-45.

Dagum, E.B. and Cholette, P. (2006): Benchmarking, Temporal Distribution and Reconciliation Methods of Time Series. Springer-Verlag, New York, Lecture Notes in Statistics, No. 186.

Denton, F. (1971): Adjustment of Monthly or Quarterly Series to Annual Totals: An Approach Based on Quadratic Minimization, Journal of the American Statistical Association, 66, pp. 99-102.

Monsell, B.C. (2007): Release Notes for Version 0.3 of X-12-ARIMA, Research Report No. RRS2007/03. Statistical Research Division, U.S. Census Bureau. Available at http://www.census.gov/srd/papers/pdf/rrs2007-03.pdf.

Quenneville, B. and Fortier, S. (2012): Restoring Accounting Constraints in Time Series: Methods and Software for a Statistical Agency, in Economic Time Series: Modeling and Seasonality. Chapman & Hall/CRC.

Quenneville, B., Fortier, S., Chen, Z.G. and Latendresse, E. (2006): Recent Developments in Benchmarking to Annual Totals in X-12-ARIMA and at Statistics Canada, Proceedings of the 2006 Eurostat Conference on Seasonality, Seasonal Adjustment and Their Implications for Short-Term Analysis and Forecasting. Luxembourg, May 2006.

SAS Institute Inc. (2009): The X12 Procedure, SAS 9.2 Documentation: SAS/ETS 9.2 User's Guide. Cary, NC: SAS Institute Inc.

Statistics Canada (2009): Seasonal adjustment and trend-cycle estimation, Statistics Canada Quality Guidelines. Statistics Canada, Ottawa, Canada. Catalogue no. 12-539-X. Available at http://www.statcan.gc.ca/pub/12-539-x/2009001/seasonal-saisonnal-eng.htm.

U.S. Census Bureau (2007): X-12-ARIMA Reference Manual, Version 0.3. Washington, DC.