INTRODUCTION TO TIME SERIES
Econometrics 2
Heino Bohn Nielsen
September 2005

This note introduces some key concepts in time series econometrics. First, we present by means of examples some characteristic features of macro-economic and financial time series. We then present some underlying assumptions on the way time series data are sampled and give a heuristic introduction to stochastic processes. Next, we present the property of stationarity and discuss how a time series which is not stationary can sometimes be made stationary by means of simple transformations. Stationarity turns out to be an important requirement for the usual results from (cross-sectional) regression analysis to carry over to the time series case.

1 Introduction

Most data in macro-economics and finance come in the form of time series. A time series is a set of observations

    y_1, y_2, ..., y_t, ..., y_T,    (1)

where the index t represents time, so that the observations have a natural temporal ordering. Most time series in economics, and all time series considered in this course, are observed at fixed intervals, such that the distances between successive time points, t and t+1, are constant.

A characteristic feature of many economic time series is a clear dependence over time, and there is often a non-zero correlation between observations at time t and t-k for some lag k. As an example, graph (A) in Figure 1 shows monthly data for the US unemployment rate in percent of the labour force, 1948:1-2005:7. The development in unemployment

is relatively sluggish, and a reasonable conjecture on the unemployment rate in a given month (for example the most recent, u_{2005:7}) would be a function of the unemployment rate in the previous month, u_{2005:6}. The temporal ordering of the observations implies that the observation for 2005:6 always precedes the observation for 2005:7, and technically u_{t-1} is predetermined when u_t is generated. This suggests that the past of u_t can be included in the information set in the analysis of u_t, and a natural object of interest in a time series analysis would be a model for u_t given the past, e.g. the conditional expectation, E[u_t | u_1, u_2, ..., u_{t-1}].

Another characteristic feature shared by many time series is a tendency to drift. As an example, consider in graph (B) the quarterly Danish productivity, 1971:1-2003:2, compiled as the log of real output per hour worked. In this case the drift represents the growth of the real economy, and the slope of the trend is by and large constant and approximately equal to 0.5% from quarter to quarter.

A third interesting feature is a tendency for some time series to move together. Graph (C) shows time series for the log of real disposable income, y_t, and the log of real consumption, c_t, for the private sector in Denmark. The movements from quarter to quarter sometimes differ markedly, but the slow underlying movements seem to be related. This suggests that there could be some sort of equilibrium relation governing the movements of consumption and income in the long run. A stylized economic theory may suggest that while both y_t and c_t have an upward drift due to the real underlying growth in productivity and real wages, the savings rate, y_t - c_t, should be stable, implying that the time series in graph (C) should move together.

For many financial time series, not only the mean but also the variance is of interest. This is because the variance is related to measures of the uncertainty or risk of a particular variable. A characteristic feature of this type of data is a tendency for the volatility to change over time, reflecting periods of rest and unrest in financial markets. Graph (D) shows day-to-day percentage changes in the NASDAQ stock market index for the period January 3, 2000 to February 26, 2004. A visual inspection suggests that the volatility is much larger in some periods than in others.

One primary goal of this course is to introduce the tools for analyzing time series with the characteristics presented in Figure 1. We want to construct measures of the dependence between successive observations, y_t and y_{t-1}, and to develop simple econometric models for the time dependence. We also want tools to analyze trending variables; in particular, we want to address the dynamic interaction and the existence of equilibrium relations.

2 Stochastic Processes and Stationarity

The main assumption underlying time series analysis is that the observation at time t, y_t, is a realization of a random variable, y_t. Note that the standard notation does not distinguish between the random variable and a realization. Taken as a whole, the observed

time series in (1) is a realization of a sequence of random variables, y_t (t = 1, 2, ..., T), often referred to as a stochastic process.

[Figure 1: Examples of macro-economic and financial time series. Panels: (A) US unemployment rate; (B) Danish productivity (logs); (C) Danish income and consumption (logs); (D) daily change in the NASDAQ index (%), 3/1-2000 to 26/2-2004.]

Here we notice an important difference between cross-section data and time series data. Recall that in the cross-section case we think of a data set, x_i (i = 1, 2, ..., N), as being sampled as N independent draws from a large population; and if N is sufficiently large we can characterize the distribution by the sample moments, e.g. the mean and variance. In the time series context, on the other hand, we are faced with T random variables, y_t (t = 1, 2, ..., T), and only one realization from each. In general, therefore, we have no hope of characterizing the distributions corresponding to each of the random variables, unless we impose additional restrictions.

Graph (A) in Figure 2 illustrates the idea of a general stochastic process, where the distributions differ from time to time. It is obvious that based on a single realized time series we cannot say much about the underlying stochastic process. A realization of a stochastic process is just a sample path of T real numbers; and if history took a different course we would have observed a different sample path. If we could rerun history a number of times, M say, we would have M realized sample paths corresponding to different states of nature. Letting a superscript (m) denote the realizations (m = 1, 2, ..., M), we would have M observed time series:

    Realization 1:   y_1^(1), y_2^(1), ..., y_t^(1), ..., y_T^(1)
    Realization 2:   y_1^(2), y_2^(2), ..., y_t^(2), ..., y_T^(2)
       ...
    Realization m:   y_1^(m), y_2^(m), ..., y_t^(m), ..., y_T^(m)        (2)
       ...
    Realization M:   y_1^(M), y_2^(M), ..., y_t^(M), ..., y_T^(M)

For each point in time, t, we would then have a cross-section of M random draws, y_t^(1), y_t^(2), ..., y_t^(m), ..., y_t^(M), from the same distribution. This cross-section is not drawn from a fixed population, but is drawn from a hypothetical population of possible outcomes, corresponding to a particular distribution in graph 2 (A). Often we are interested in the unconditional mean, E[y_t], which we could estimate with the sample average

    Ê[y_t] = (1/M) Σ_{m=1}^{M} y_t^(m).    (3)

The cross-sectional mean in (3) is sometimes referred to as the ensemble mean, and it is the mean of a particular distribution in Figure 2. This concept is fundamentally different from the time average of a particular realized sample path, e.g.

    ȳ = (1/T) Σ_{t=1}^{T} y_t^(1).    (4)

Notice that when we analyze a single realization, we normally ignore the superscript and use the notation y_t = y_t^(1).
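Although history cannot be rerun in practice, it can be in a simulation. The sketch below (not part of the original note) generates M artificial sample paths from an assumed stationary first-order autoregressive process and computes the ensemble mean (3) at a fixed t as well as the time average (4) of a single path; the process and all parameter values are illustrative assumptions only. Under the stationarity and weak dependence assumptions discussed next, both estimates are close to the common mean µ.

```python
import numpy as np

# Illustrative sketch (assumed AR(1) process, not from the note):
#   y_t = (1 - phi) * mu + phi * y_{t-1} + eps_t,  eps_t ~ N(0, sigma^2),
# simulated M times to mimic the M hypothetical histories in (2).
rng = np.random.default_rng(0)
M, T = 1000, 200                     # number of "histories" and sample length (assumed)
mu, phi, sigma = 5.0, 0.7, 1.0

y = np.empty((M, T))
y[:, 0] = mu + rng.normal(0.0, sigma / np.sqrt(1 - phi**2), M)  # start in the stationary distribution
for t in range(1, T):
    y[:, t] = (1 - phi) * mu + phi * y[:, t - 1] + rng.normal(0.0, sigma, M)

t_fixed = 100
ensemble_mean = y[:, t_fixed].mean()   # eq. (3): average across realizations at a fixed t
time_average = y[0, :].mean()          # eq. (4): average over time for one realization

print(f"true mean mu      : {mu:.3f}")
print(f"ensemble mean (3) : {ensemble_mean:.3f}")
print(f"time average  (4) : {time_average:.3f}")
```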

Stationarity. Of course, it is not possible in economics to generate more realizations of history. But if the distribution of the random variable y_t remains unchanged over time, then we can think of the T observations, y_1, ..., y_T, as drawn from the same distribution; and we can make inference on the underlying distribution of y_t based on observations from different points in time. The property that the distribution of y_t is the same for all t is referred to as stationarity. Formally, a time series, y_1, y_2, ..., y_t, ..., y_T, is called strictly stationary if the joint distributions of n observations, (y_{t_1}, y_{t_2}, ..., y_{t_n}) and (y_{t_1+h}, y_{t_2+h}, ..., y_{t_n+h}), are the same for all integers h and all indices t_1, t_2, ..., t_n. Strict stationarity implies that all structures and characteristics do not depend on the location on the time axis, and for example all moments are constant over time. A weaker concept requires only constancy of the first two moments (mean and variance), and a time series is called weakly stationary, or covariance stationary, or second-order stationary, if

    E[y_t] = µ,
    V[y_t] = E[(y_t - µ)^2] = γ_0,
    Cov[y_t, y_{t-k}] = E[(y_t - µ)(y_{t-k} - µ)] = γ_k,    for k = 1, 2, ...

Note that the mean, µ, and the variance, γ_0, are the same for all t, and the covariance between y_t and y_{t-k}, denoted γ_k, only depends on the distance between the observations, k. Often it is furthermore assumed that µ, γ_0, and γ_k are finite.

[Figure 2: Stochastic processes and realized time series. Panel (A): a general stochastic process; panel (B): a stationary stochastic process.]

The idea of a stationary stochastic process is illustrated in graph (B) in Figure 2. Here each new observation, y_t, contains information on the same distribution, and we can use all observations to estimate the common mean, µ. Notice that the realized time series are identical in graphs (A) and (B), and for a small number of observations it is often difficult to distinguish stationary from non-stationary time series.

Weak Dependence. Under stationarity the structures are the same for all y_t, and we would like to use information from different points in time to characterize the properties. For this we need a law of large numbers (LLN) and a central limit theorem (CLT) to apply to the time series y_t (t = 1, 2, ..., T); and for this we need an additional assumption of weak dependence. There are different ways to formulate the appropriate assumptions, but it all boils down to ensuring the existence of a LLN and a CLT, and for the purpose of this course we consider the additional assumption as a technical regularity condition. You might recall the CLT for independent and identically distributed (IID) variables. This result can be extended to allow y_t to depend on y_{t-k}, but the dependence cannot be too strong. In particular, the additional condition says that the observations y_t and y_{t-k} become approximately independent if we choose k large enough. This ensures that each new observation contains some new information on E[y_t], and the time average, ȳ, is a consistent estimator of the ensemble mean, E[y_t] = µ. A covariance stationary time series, y_t, can be characterized in terms of covariances, and weak dependence implies that the correlation

between y_t and y_{t-k} approaches zero sufficiently fast as k increases. For more information, the interested reader is referred to Hayashi (2000, chapter 2.2).

As we will see later in the course, most of the results derived for regression models for IID data carry over to the analysis of stationary and weakly dependent time series. In the IID case we assume independent data with identical distributions. For the time series case this is typically too strong, and independence is replaced by the assumption of weak dependence, which implies that the observations become independent asymptotically. Likewise, the assumption of identical drawings is replaced by the assumption of stationarity. If the assumptions fail to hold, the standard results from regression analysis cannot be applied; and the most important distinction in time series econometrics is whether the time series of interest are stationary or not. For the first part of this course we focus on stationary time series; but later we consider how the tools should be modified to allow for the analysis of non-stationary time series.

3 Stationarity and Time Dependence

It follows from the definition that a stationary time series should fluctuate around a constant level. For a stationary time series the level can be seen as the equilibrium value of y_t, while deviations from the mean, y_t - µ, can be interpreted as deviations from equilibrium. In terms of economics, the existence of an equilibrium requires that there are some forces in the economy that pull y_t towards the equilibrium level. We say that the variable is mean reverting or equilibrium correcting. If the variable, y_t, is hit by a shock in period t so that y_t is pushed out of equilibrium, stationarity implies that the variable should revert to equilibrium, and a heuristic characterization of a stationary process is that a shock to the process y_t has only transitory effects.

The fact that a time series adjusts back to the mean does not imply that the deviations from equilibrium cannot look systematic. Stationarity requires that the unconditional distribution of y_t is constant, but the distribution given the past, e.g. the distribution of y_t | y_{t-1}, may depend on y_{t-1}, so that y_t and y_{t-1} are correlated. Another way to phrase this is the following: the unconditional distributions of y_t depicted in graph 2 (B) are based on the assumption that we do not know the history of the process, y_{t-1}, y_{t-2}, ... If y_t has a tendency to fluctuate in a systematic manner and we have observed that observation four, y_4, is below µ, that could suggest that the next observation is also likely to be small, and the conditional expectation, E[y_5 | y_4], is likely to be smaller than µ. In terms of economics, the deviations from equilibrium could reflect business cycles, which we expect to be systematic.
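A small simulation (not part of the original note) makes the mean-reversion argument concrete. The first-order autoregressive process and the parameter values below are assumed purely for illustration: a one-off shock pushes the series away from its mean µ, and the expected deviation shrinks geometrically with the horizon, so the effect is only transitory. In the same sketch the conditional expectation E[y_{t+1} | y_t] = µ + φ(y_t - µ) lies below µ whenever y_t does, mirroring the discussion of E[y_5 | y_4] above.

```python
import numpy as np

# Illustrative sketch (assumed AR(1) process, not from the note):
#   y_t = mu + phi * (y_{t-1} - mu) + eps_t,   |phi| < 1,
# hit by one large additional shock in period t0. The expected deviation
# k periods after the shock is roughly phi**k times the shock, i.e. transitory.
rng = np.random.default_rng(1)
T, mu, phi, sigma = 120, 5.0, 0.8, 0.5
t0, shock = 60, 4.0

y = np.full(T, mu)
for t in range(1, T):
    eps = rng.normal(0.0, sigma) + (shock if t == t0 else 0.0)
    y[t] = mu + phi * (y[t - 1] - mu) + eps

for k in (0, 5, 10, 20):
    print(f"deviation from mu {k:2d} periods after the shock: {y[t0 + k] - mu:6.2f} "
          f"(roughly {shock * phi**k:5.2f} plus noise)")
```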

Measures of Time Dependence. One way to characterize the time dependence in a time series, y_t, also known as the persistence, is by the correlation between y_t and y_{t-k}, defined as

    Corr(y_t, y_{t-k}) = Cov(y_t, y_{t-k}) / √( V(y_t) V(y_{t-k}) ).

If the correlation is positive we denote it positive autocorrelation, and in a graph of the time series it is visible as a tendency for a large observation, y_t, to be followed by another large observation, y_{t+1}, and vice versa. Negative autocorrelation is visible as a tendency for a large observation to be followed by a small observation. Under stationarity it holds that Cov(y_t, y_{t-k}) = γ_k depends only on k, and the variance is constant, V(y_t) = V(y_{t-k}) = γ_0. For a stationary process the formula can therefore be simplified, and we define the autocorrelation function (ACF) as

    ρ_k = Cov(y_t, y_{t-k}) / V(y_t) = γ_k / γ_0,    k = ..., -2, -1, 0, 1, 2, ...

The term autocorrelation function refers to the fact that we consider ρ_k as a function of the lag length k. It follows from the definition that ρ_0 = 1, that the autocorrelations are symmetric, ρ_{-k} = ρ_k, and that they are bounded, |ρ_k| ≤ 1. For a stationary time series the autocorrelations could be non-zero for a number of lags, but we expect ρ_k to approach zero as k increases. In most cases the convergence to zero is relatively fast. For a given data set, the sample autocorrelations can be estimated by, e.g.,

    ρ̃_k = [ (1/(T-k)) Σ_{t=k+1}^{T} (y_t - ȳ)(y_{t-k} - ȳ) ] / [ (1/T) Σ_{t=1}^{T} (y_t - ȳ)^2 ],

or alternatively by

    ρ̂_k = [ (1/(T-k)) Σ_{t=k+1}^{T} (y_t - ȳ)(y_{t-k} - ȳ) ] / [ (1/(T-k)) Σ_{t=k+1}^{T} (y_{t-k} - ȳ)^2 ],

where ȳ is the full-sample mean defined in (4). The first estimator, ρ̃_k, is the most efficient as it uses all the available observations in the denominator. For convenience it is sometimes preferred to discard the first k observations in the denominator and use the estimator ρ̂_k, which is just the OLS estimator in the regression model y_t = c + ρ_k y_{t-k} + residual.

There are several formulations of the variance of the estimated autocorrelation function. The simplest result is that if the true correlations are all zero, ρ_1 = ρ_2 = ... = 0, then the asymptotic distribution of ρ̂_k is normal with variance V(ρ̂_k) = 1/T. A 95% confidence band for ρ̂_k is therefore given by ±2/√T.
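As a concrete illustration (not from the original note), the sketch below computes sample autocorrelations of a simulated series with the two estimators described above and compares them with the ±2/√T band; the data-generating process and all parameter values are assumptions made only for the example.

```python
import numpy as np

# Illustrative sketch: sample ACF of a simulated AR(1) series (assumed example),
# using the two estimators described in the text.
rng = np.random.default_rng(2)
T, phi = 400, 0.6
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

def acf_tilde(y, k):
    """rho-tilde_k: full-sample variance in the denominator."""
    ybar = y.mean()
    num = np.mean((y[k:] - ybar) * (y[:-k] - ybar))   # (1/(T-k)) * sum of cross products
    den = np.mean((y - ybar) ** 2)                    # (1/T) * sum over all T observations
    return num / den

def acf_hat(y, k):
    """rho-hat_k: discards the first k observations in the denominator (OLS-type)."""
    ybar = y.mean()
    num = np.mean((y[k:] - ybar) * (y[:-k] - ybar))
    den = np.mean((y[:-k] - ybar) ** 2)
    return num / den

band = 2 / np.sqrt(T)   # approximate 95% band under the hypothesis of no autocorrelation
print(f"95% band: +/- {band:.3f}")
for k in (1, 2, 5, 10):
    print(f"k={k:2d}  rho~_k={acf_tilde(y, k):6.3f}  rho^_k={acf_hat(y, k):6.3f}  "
          f"(AR(1) theory: phi**k = {phi**k:.3f})")
```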

A complementary measure of the time dependence is the so-called partial autocorrelation function (PACF), which is the correlation between y_t and y_{t-k} conditional on the intermediate values, i.e.

    θ_k = Corr(y_t, y_{t-k} | y_{t-1}, ..., y_{t-k+1}).

The PACF can be estimated as the OLS estimator θ̂_k in the regression

    y_t = c + θ_1 y_{t-1} + ... + θ_k y_{t-k} + residual,

where the intermediate lags are included. If θ_1 = θ_2 = ... = 0, it again holds that V(θ̂_k) = 1/T, and a 95% confidence band is given by ±2/√T. For a stationary time series the partial autocorrelation function, θ_k, should also approach zero as k increases. In the next section we consider examples of the estimated ACF, and later in the course we will discuss how the ACF and PACF convey important information on the properties of the data.

4 Transformations to Stationarity

A brief look at the time series in Figure 1 suggests that not all economic time series are stationary. The US unemployment rate in graph 3 (A) seems to fluctuate around a constant level. The deviations are extremely persistent, however, and the unemployment rate is above the constant level for most of the 20-year period 1975-1995. If the time series is considered stationary, the speed of adjustment towards the equilibrium level is very low; and it is probably a more reasonable assumption that the expected value of the unemployment rate is not constant. An economic interpretation of this finding could be that there have been systematic changes in the natural rate of unemployment over the decades, which have led to very persistent changes in the recorded unemployment rate. The persistence of the time series itself translates into very large and significant autocorrelations. Graph 3 (B) indicates that the ACF remains positive even at long lag lengths, and the correlation between u_t and u_{t-15} is around 0.6. Whether the unemployment rate actually corresponds to a sample path from a stationary stochastic process is an empirical question. We will discuss formal testing of this hypothesis at a later stage.

Trend-Stationarity. The Danish productivity level in graph (B) of Figure 1 has a clear positive drift and is hence non-stationary. The trend appears to be very systematic, however, and it seems that productivity fluctuates around the linear trend. One reason could be that there is a constant autonomous increase in productivity of around 2% per year; and equilibrium is defined in terms of this trend, so that the deviations of productivity from the underlying trend are stationary. Graph (C) in Figure 3 shows the deviation of productivity from a linear trend. The deviations are calculated as the estimated residual, q̂_t, in the linear regression on a constant and a trend,

    q_t = µ_0 + µ_1 t + residual,    (5)

where q_t is the log of measured productivity. The deviations from the trend, q̂_t = q_t - µ̂_0 - µ̂_1 t, are still systematic, reflecting labour hoarding and other business cycle effects, but there seems to be a clear reversion of productivity to the trending mean. This is also reflected in the ACF of the deviations, which dies out relatively fast. A non-stationary time series, q_t, that becomes stationary after a deterministic linear trend has been removed is often denoted trend stationary. One way to think about a trend-stationary time series is that the stochastic part of the process is stationary, but the stochastic fluctuations appear around a trending deterministic component, µ_0 + µ_1 t. Deterministic de-trending, as in the regression (5), is one possible way to transform a non-stationary time series to stationarity.
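The de-trending in (5) is just an OLS regression of the series on a constant and a linear trend, with the residuals as the estimated deviations q̂_t. The sketch below (not part of the original note) applies it to a simulated trend-stationary series; the data-generating process and parameter values are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch: deterministic de-trending as in regression (5),
#   q_t = mu_0 + mu_1 * t + residual,
# applied to an assumed trend-stationary series: a linear trend plus a
# stationary AR(1) deviation (all numbers made up for illustration).
rng = np.random.default_rng(3)
T = 200
t = np.arange(1, T + 1)

dev = np.zeros(T)
for s in range(1, T):
    dev[s] = 0.7 * dev[s - 1] + rng.normal(0.0, 0.02)
q = 0.25 + 0.005 * t + dev                      # roughly 0.5% growth per period

X = np.column_stack([np.ones(T), t])            # constant and linear trend
mu_hat, *_ = np.linalg.lstsq(X, q, rcond=None)  # OLS estimates (mu_0, mu_1)
q_hat = q - X @ mu_hat                          # estimated deviations from the trend

print(f"estimated slope mu_1: {mu_hat[1]:.4f} (true value 0.005)")
print(f"std. dev. of q: {q.std():.3f}   std. dev. of de-trended q: {q_hat.std():.3f}")
```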

Difference-Stationarity. Also the aggregate Danish consumption in graph (C) of Figure 1 is clearly non-stationary due to the upward drift. In this case, the deviations from a deterministic trend are still very persistent, and the de-trended consumption also looks non-stationary. An alternative way to remove the non-stationarity is to consider the first difference, Δc_t = c_t - c_{t-1}. The changes in Danish consumption, Δc_t, depicted in Figure 3 (E), fluctuate around a constant level and could look stationary. The persistence is not very strong, which is also reflected in the ACF in graph (F). A variable, c_t, which is itself non-stationary but where the first difference, Δc_t, is stationary, is denoted difference stationary. Often it is also referred to as integrated of first order, or I(1), because it behaves like a stationary variable that has been integrated (or cumulated) once.

Whether it is most appropriate to transform a variable to stationarity using deterministic de-trending or by first differencing has been subject to much debate in the economics and econometrics literature. The main difference is whether the stochastic component of the variable is stationary or not, that is, whether the stochastic shocks to the process have transitory effects only (the trend-stationary case) or whether shocks can have permanent effects (the difference-stationary case). We return to the discussion later, where we present formal tests between the two cases.

We may note that the first difference of a variable in logs has a special interpretation. As an example, let c_t = log(CONS_t) denote the log of consumption. It holds that

    Δc_t = c_t - c_{t-1} = log( CONS_t / CONS_{t-1} ) = log( 1 + (CONS_t - CONS_{t-1}) / CONS_{t-1} ) ≈ (CONS_t - CONS_{t-1}) / CONS_{t-1},

where the last approximation is good if the growth rate is close to zero. Therefore, Δc_t can be interpreted as the relative growth rate of CONS_t; and changes in c_t are interpretable as percentage changes (divided by 100).

Cointegration. A third way to remove non-stationarity is by taking linear combinations of several variables. In graph (C) in Figure 1 the developments of income and consumption have many similarities, and the savings rate, s_t = y_t - c_t, depicted in graph 3 (G), is much more stable. It seems to fluctuate around a constant level, with peaks corresponding to known business cycle episodes in the Danish economy. The time dependence dies out relatively fast, cf. graph (H), and the savings rate may correspond to a stationary process.
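The savings-rate example can be mimicked in a simulation. The sketch below (not part of the original note) constructs two drifting series that share a common cumulated (integrated) component, so that each series wanders while their difference fluctuates around a constant level; the construction and all parameter values are assumptions made only for illustration.

```python
import numpy as np

# Illustrative sketch: two non-stationary series sharing a common cumulated
# ("integrated") component, so that a linear combination of them is stable.
# The construction and parameters are assumed for illustration only.
rng = np.random.default_rng(4)
T = 300
common = np.cumsum(0.005 + rng.normal(0.0, 0.01, T))   # drifting cumulated shocks

y = common + rng.normal(0.0, 0.01, T)                  # plays the role of log income
c = common - 0.05 + rng.normal(0.0, 0.01, T)           # plays the role of log consumption
s = y - c                                              # the "savings rate" y_t - c_t

print(f"range of y     : {y.max() - y.min():.2f}   (wanders with the common component)")
print(f"range of s=y-c : {s.max() - s.min():.2f}   (stays in a narrow band)")
```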

The property that a combination of non-stationary variables becomes stationary is denoted cointegration. The interpretation is that the variables themselves, y_t and c_t, move in a non-stationary manner, but they are tied together in an equilibrium relation, and the dynamics of the economy ensure that the savings rate shows only transitory deviations from the equilibrium level. Cointegration is the main tool in the time series analysis of non-stationary variables, and we will return to cointegration analysis later in the course.

References

[1] Hayashi, Fumio (2000): Econometrics, Princeton University Press.
[2] Stock, James H. and Mark W. Watson (2003): Introduction to Econometrics, Addison Wesley.
[3] Verbeek, Marno (2004): A Guide to Modern Econometrics, Second edition, John Wiley and Sons.
[4] Wooldridge, Jeffrey M. (2003): Introductory Econometrics: A Modern Approach, 2nd edition, South-Western College Publishing.

[Figure 3: Examples of time series transformations to stationarity. Panels: (A) US unemployment rate; (B) ACF for (A); (C) Danish productivity (log) minus trend; (D) ACF for (C); (E) change in Danish consumption (log); (F) ACF for (E); (G) Danish savings rate (log); (H) ACF for (G).]