Exercises - Time series analysis

1 Descriptive analysis of a time series

(1) Estimate the trend of the series of gasoline consumption in Spain using a straight line over the period 1945 to 1995 and generate forecasts for 24 months. Compare the results with Holt's method (a code sketch follows this list).
(2) Apply a decomposition method to the gasoline consumption series and estimate the seasonal coefficients. Compare the results obtained using Holt's method and using moving averages.
(3) Obtain the periodogram for the series of Columbus's voyage and interpret it.
(4) Obtain the periodogram for the gasoline consumption series and interpret it.
(5) Prove that the predictions made using Holt's method verify the recursive equation $\hat{z}_{T+1}(1) = \hat{z}_T(1) + \beta_T + \gamma(1-\theta)(z_{T+1} - \mu_T - \beta_T)$.
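
A minimal sketch for exercises (1)-(2), assuming statsmodels is installed; the `gasoline` series below is a synthetic stand-in for the Spanish consumption data, not the real series:

```python
# Straight-line trend vs. Holt's method, 24 steps ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import Holt

rng = np.random.default_rng(0)
t = np.arange(120)
gasoline = pd.Series(100 + 0.5 * t + rng.normal(0, 5, 120))  # synthetic stand-in

# Least-squares straight line, extrapolated 24 periods ahead
b, a = np.polyfit(t, gasoline.values, 1)       # slope, intercept
trend_forecast = a + b * np.arange(120, 144)

# Holt's (double) exponential smoothing over the same horizon
holt_forecast = Holt(gasoline).fit().forecast(24)

print(trend_forecast[:3])
print(holt_forecast.values[:3])
```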

2 Time series and stochastic processes

(1) Figure 1 shows monthly rainfall in Santiago de Compostela during the years 1988-1997. Would it be a stationary process?
(2) Let us consider the process $z_t = .5z_{t-1} + a_t$, where $a_t$ is a white noise process. Calculate the marginal mean, the variance and the first order autocovariance. Is the process stationary? (A simulation check follows this list.)
(3) In the above exercise, calculate the expectation and variance of the conditional distribution $f(z_t \mid z_{t-1})$. Compare these results with those obtained in the above exercise for the marginal distribution.
(4) Let us consider the process $z_t = -.5a_{t-1} + a_t$. Calculate its mean and its first and second order autocovariances. Is the process stationary?
(5) Prove that the above process can be written as $z_t = -.5(z_{t-1} + .5a_{t-2}) + a_t = -.5z_{t-1} - .25a_{t-2} + a_t$. Use this equation to calculate the expectation and variance of the conditional distribution $f(z_t \mid z_{t-1})$. Compare the results with those of the marginal distribution.
(6) Given a stationary series, prove that if the autocovariances are all positive then the mean of the process will be estimated with greater variance than if all the autocovariances are null.
(7) Prove that the mean of the process in exercise (4) is estimated with less variability than in a process with independent data and the same marginal variance.

Figure 1. Monthly rainfall in Santiago de Compostela, Spain, during 1988-1997.
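
A quick numerical check of exercise (2): simulate the AR(1) and compare the sample moments with the theoretical values $E[z_t] = 0$, $\gamma_0 = \sigma^2/(1-\phi^2)$ and $\gamma_1 = \phi\gamma_0$ (a sketch, assuming unit innovation variance):

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n = 0.5, 200_000
a = rng.normal(0, 1, n)
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + a[t]     # z_t = .5 z_{t-1} + a_t

z = z[1000:]                          # discard burn-in
c = z - z.mean()
print("sample mean:", z.mean())
print("sample var:", z.var(), "theory:", 1 / (1 - phi**2))
print("lag-1 autocov:", np.mean(c[1:] * c[:-1]), "theory:", phi / (1 - phi**2))
```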

3 Autoregressive processes

(1) Generate samples of an AR(1) process with $\phi = .7$ using a computer program as follows: (a) generate a vector $a_t$ of 150 random normal variables $(0,1)$; (b) take $z_1 = a_1$; (c) for $t = 2, \ldots, 150$ calculate $z_t = .7z_{t-1} + a_t$; (d) to avoid the effect of the initial conditions, eliminate the first 50 observations and take the values $z_{51}, \ldots, z_{150}$ as a sample of size 100 of the AR(1) process. (This recipe is implemented in the sketch after this list.)
(2) Obtain the theoretical autocorrelation function of the process $z_t = .7z_{t-1} + a_t$, where $a_t$ is white noise. Compare the theoretical results with those observed in the sample from the above exercise.
(3) Obtain the theoretical autocorrelation function of the process $z_t = .9z_{t-1} - .18z_{t-2} + a_t$, where $a_t$ is white noise. Generate a realization of the process using a computer and compare the sample function with the theoretical one.
(4) Express in operator notation the processes of exercises (2) and (3). Represent each process in the form $z_t = \sum_i \psi_i a_{t-i}$ by obtaining the inverse operators.
(5) Write the theoretical autocorrelation function of the process $(1 - 1.2B + .32B^2)z_t = a_t$. Obtain the representation of this process as $z_t = \sum_i \psi_i a_{t-i}$ and comment on the relationship between the coefficients $\psi_i$ and the autocorrelation function.
(6) Prove that the process $y_t = z_t - z_{t-1}$, where $z_t = .9z_{t-1} + a_t$ is stationary, is also stationary.
(7) Prove that the above process can be written as $y_t = -.09z_{t-2} + a_t - .1a_{t-1}$.
(8) Calculate the inverse operator of $(1 - .8B)(1 - B)$.
(9) Justify whether the process $(1 - .5B)(1 - .7B)(1 - .2B)z_t = a_t$ is stationary and write it in its usual expression.
(10) Calculate the theoretical partial autocorrelation coefficients of the following AR(2) process: $z_t = .7z_{t-1} - .5z_{t-2} + a_t$, where $a_t$ is white noise.
(11) Prove that if a process is AR(1) and we run the regression $z_t = \beta z_{t+1} + u_t$, we obtain $\beta = \phi$ and $var(u_t) = \gamma_0(1 - \phi^2)$, where $\gamma_0$ is the variance of the process.
(12) Prove that if a process is AR(1) and we run the regression $z_t = \alpha z_{t-1} + \beta z_{t+1} + u_t$, we obtain $\beta = \alpha = \phi/(1 + \phi^2)$ and $var(u_t) = \gamma_0(1 - \phi^2)/(1 + \phi^2)$. Observe that the variance of the innovations is now less than that of the AR(1) and less than in the above exercise, and interpret this result.
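
A sketch of the simulation recipe in exercise (1), followed by the comparison asked for in exercise (2) between the theoretical ACF $\rho_k = \phi^k$ and the sample ACF:

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.normal(0, 1, 150)            # step (a): 150 N(0,1) innovations
z = np.empty(150)
z[0] = a[0]                          # step (b): z_1 = a_1
for t in range(1, 150):              # step (c): z_t = .7 z_{t-1} + a_t
    z[t] = 0.7 * z[t - 1] + a[t]
sample = z[50:]                      # step (d): keep z_51..z_150 (size 100)

# Exercise (2): theory vs. sample autocorrelations
c = sample - sample.mean()
for k in range(1, 6):
    r_k = np.sum(c[k:] * c[:-k]) / np.sum(c**2)
    print(f"lag {k}: theory {0.7**k:.3f}, sample {r_k:.3f}")
```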

4 Moving average and ARMA processes

(1) Given the zero mean process $z_t = (1 - .7B)a_t$: (a) calculate the autocorrelation function; (b) write it as an AR($\infty$) process.
(2) Prove that the MA(1) processes $z_t = a_t - .5a_{t-1}$ and $z_t = a_t - 2a_{t-1}$ have the same autocorrelation structure, but that one is invertible and the other is not.
(3) Prove that the two processes $z_t = a_t + .5a_{t-1}$ and $z_t = .5a_t + a_{t-1}$ are indistinguishable, since they have the same variance and the same autocorrelation structure.
(4) Given the MA(2) process $z_t = a_t - 1.2a_{t-1} + .35a_{t-2}$: (a) check whether it is invertible; (b) calculate its autocorrelation structure; (c) write it as an AR($\infty$) process.
(5) Given the model $z_t = 5 + .9z_{t-1} + a_t + .4a_{t-1}$: (a) calculate its autocorrelation structure (see the sketch after this list); (b) write it in MA($\infty$) form; (c) write it in AR($\infty$) form.
(6) Given the process $(1 - B + .21B^2)z_t = a_t - .3a_{t-1}$: (a) check whether it is stationary and invertible; (b) obtain the autocorrelation function; (c) obtain its AR($\infty$) representation; (d) obtain its MA($\infty$) representation.
(7) Obtain the autocorrelation function of an ARMA(1,1) process by writing it as an MA($\infty$).
(8) Prove that if we add two MA(1) processes we obtain a new MA(1) process whose MA parameter is a linear combination of the MA parameters of the two processes, with weights proportional to the ratios between the innovation variances of the summands and the innovation variance of the sum process.
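
The theoretical autocorrelations in exercises (5)-(7) can be cross-checked numerically with statsmodels' `arma_acf`, which takes the lag polynomials in the $(1, -\phi_1, \ldots)$ / $(1, \theta_1, \ldots)$ convention; a sketch for the ARMA(1,1) of exercise (5):

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_acf

# ARMA(1,1): z_t = .9 z_{t-1} + a_t + .4 a_{t-1}
ar = np.array([1, -0.9])   # AR polynomial 1 - .9B
ma = np.array([1, 0.4])    # MA polynomial 1 + .4B
print(arma_acf(ar, ma, lags=5))   # rho_0 .. rho_4
```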

5 Integrated and long-memory processes

(1) Prove that the sum and the difference of two stationary processes are stationary.
(2) Prove that the model $z_t = a + bt + ct^2 + a_t$, where $a_t$ is a white noise process, becomes a non-invertible stationary process when two differences are taken.
(3) Prove that the autocorrelations of a random walk can be approximated by $\rho(t, t+k) \approx 1 - \frac{k}{2t}$.
(4) Simulate an ARIMA(0,1,1) process using the parameter values $\theta = .4$, $.7$ and $.9$, and study the decay of the autocorrelation function of the process (see the sketch after this list).
(5) Simulate the process $\nabla^2 z_t = (1 - .8B)a_t$ and study the decay of the autocorrelation function of the process.
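
A sketch for exercise (4): simulate the ARIMA(0,1,1) level for each $\theta$ and watch how slowly its sample ACF decays:

```python
import numpy as np

def sample_acf(x, k):
    c = x - x.mean()
    return np.sum(c[k:] * c[:-k]) / np.sum(c**2)

rng = np.random.default_rng(7)
for theta in (0.4, 0.7, 0.9):
    a = rng.normal(0, 1, 1001)
    w = a[1:] - theta * a[:-1]     # w_t = (1 - theta B) a_t
    z = np.cumsum(w)               # integrate once: ARIMA(0,1,1) level
    print(f"theta={theta}:",
          [round(sample_acf(z, k), 2) for k in (1, 5, 10, 25)])
```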

6 Seasonal ARIMA processes

(1) Find the seasonal coefficients for a quarterly series that follows the model $z_t = 10 + \cos(\pi t/2 + \pi/8) + a_t$.
(2) Prove that the series from the above exercise can be modelled using $\nabla_4 z_t = (1 - B^4)a_t$.
(3) We assume that a monthly series follows the model $z_t = 30 + \cos(\pi t/6 + \pi/8) + V_t + a_t$, where $a_t$ is a white noise process with variance $\sigma_a^2$ and the process $V_t$ verifies $V_t = V_{t-12} + \epsilon_t$, where $\epsilon_t$ is a white noise process with variance $\sigma_\epsilon^2$. Prove that this series follows the ARIMA model $\nabla_{12} z_t = (1 - \Theta B^{12})a_t$, where $\Theta \neq 1$. (Suggestion: prove that the process $\epsilon_t + a_t - a_{t-12}$ has an MA(1)$_{12}$ structure and that the autocorrelation of order twelve is $-\sigma_a^2/(\sigma_\epsilon^2 + 2\sigma_a^2)$.)
(4) Find the theoretical autocorrelation function of the process $(1 - .4B)w_t = (1 + .5B^{12})a_t$.
(5) Find the theoretical autocorrelation function of the process $(1 - .4B)(1 - .8B^{12})w_t = a_t$.
(6) Find the theoretical autocorrelation function of the process $w_t = (1 - \theta B)(1 - \Theta B^{12})a_t$ and compare it with that of the non-multiplicative process $w_t = (1 - \theta B - \Theta B^{12})a_t$ (see the sketch after this list).
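
A sketch for exercise (6), comparing the two models around the seasonal lag; the values $\theta = .4$, $\Theta = .6$ are illustrative choices, not from the exercise text:

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_acf

theta, Theta = 0.4, 0.6   # illustrative parameter values

# Multiplicative MA polynomial (1 - theta B)(1 - Theta B^12)
mult = np.polymul([1, -theta], [1] + [0] * 11 + [-Theta])
# Non-multiplicative MA polynomial 1 - theta B - Theta B^12
nonmult = np.zeros(14)
nonmult[0], nonmult[1], nonmult[13] = 1, -theta, -Theta

# The multiplicative model also has a nonzero autocorrelation at lag 13
print("multiplicative, lags 11-13:", arma_acf([1], mult, lags=14)[11:14])
print("non-multiplicative, lags 11-13:", arma_acf([1], nonmult, lags=14)[11:14])
```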

7 Forecasting with ARIMA models

(1) Given the process $z_t = 2 + .8z_{t-1} - .1z_{t-2} + a_t$ and four observations $(4, 3, 1, 2.5)$, generate predictions for 4 periods ahead (see the sketch after this list).
(2) Indicate what the long-term forecast generated by the model from the above exercise will be.
(3) Assuming that the variance of the innovations in the above exercise is 2, calculate the confidence intervals for the predictions one and two steps ahead.
(4) Calculate the predictions for $t = 100$, $101$ and $102$ and the final prediction equation of the MA(2) process $z_t = 5 + a_t - .5a_{t-1}$, knowing that the predictions carried out with information up to $t = 97$ were $\hat{z}_{97}(1) = 5.1$ and $\hat{z}_{97}(2) = 5.3$, and that we later observed $z_{98} = 4.9$ and $z_{99} = 5.5$.
(5) Explain the structure of the forecasts generated by the model $z_t = 3 + (1 - .7B)a_t$.
(6) Explain the long-term structure of the forecasts using the model $\nabla_{12} z_t = (1 - .7B)a_t$.
(7) Prove that in the IMA(1,1) process, which can be written $z_{T+1} = (1 - \theta)\sum_{j=0}^{\infty} \theta^j z_{T-j} + a_{T+1}$, we have $\hat{z}_T(k) = \hat{z}_T(k-1)$ for $k \geq 2$. Notice that $\hat{z}_T(2) = (1 - \theta)[\hat{z}_T(1) + \theta z_T + \theta^2 z_{T-1} + \ldots]$ and replace the expression of $\hat{z}_T(1)$.
(8) Calculate the predictability of the process $z_t = 2 + .8z_{t-1} - .1z_{t-2} + a_t$.
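
A sketch for exercises (1)-(2), assuming the reconstructed AR(2) equation above and taking the four observations as $z_1, \ldots, z_4$. Point forecasts follow by recursion with $E[a_t] = 0$, and they converge to the process mean $c/(1 - \phi_1 - \phi_2)$:

```python
# AR(2) point forecasts by direct recursion (reconstructed coefficients)
c, phi1, phi2 = 2.0, 0.8, -0.1
z = [4.0, 3.0, 1.0, 2.5]            # the four given observations

for _ in range(4):                   # 4 periods ahead, innovations set to 0
    z.append(c + phi1 * z[-1] + phi2 * z[-2])
print("forecasts:", [round(v, 3) for v in z[4:]])

# Exercise (2): the long-term forecast is the process mean
print("long-term forecast:", c / (1 - phi1 - phi2))   # 2/0.3 = 6.67
```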

8 Identifying possible ARIMA models

(1) An early criterion for determining the number of differences needed to make a series stationary is the Tintner criterion, which consists in differencing while the variance of the resulting series diminishes and stopping when the variance increases on taking a new difference. Prove that if we start with a stationary series whose first order autocorrelation is greater than .5, the variance of the series diminishes when it is differenced. Suggestion: let $x_t$ denote the original series and $n_t = \nabla x_t$; then $Var(n_t) = 2\sigma_x^2(1 - \rho_1)$. (A sketch of the criterion in code follows this list.)
(2) Identify a model for the airline series.
(3) Identify a model for the series of the Spanish population over 16 years of age.
(4) Justify that a sample of 100 observations generated by the model $(1 - .2B)z_t = a_t$ can easily be identified as one generated by an MA(1) or an ARMA(1,1). Suggestion: express the model as an MA($\infty$) and take into account that the bounds for the coefficients of the ACF and PACF are of order $T^{-1/2}$.
(5) Check that the error correction model for testing whether the true model is M1: $(1 - .7B)\nabla z_t = a_t$ or M2: $(1 - .7B)(1 - \phi B)z_t = a_t$ with $\phi < 1$, is $\nabla z_t = \alpha z_{t-1} + .7\nabla z_{t-1} + a_t$. Which values of $\alpha$ indicate each of the two models?
(6) Check the equivalence between the condition $\alpha_0 = 1$ and a unit root in the augmented Dickey-Fuller test by obtaining the coefficients $\alpha_i$ as a function of the $\phi_j$, equating powers in both polynomials. Check that $\alpha_p = -\phi_{p+1}$ and $\alpha_{p-1} = -\phi_{p+1} - \phi_p$ are obtained, and in general $\alpha_i = -\sum_{j=i+1}^{p+1} \phi_j$ for $i \geq 1$ and $\alpha_0 = \phi_1 + \ldots + \phi_{p+1}$, which confirms that the condition $(1 - \phi_1 - \ldots - \phi_{p+1}) = 0$ implies the condition $\alpha_0 = 1$.
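
A sketch of the Tintner-style rule in exercise (1): keep differencing while the sample variance decreases. The series here is a simulated random walk (hypothetical data), so one difference should be selected:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(0, 1, 500))   # random walk: needs d = 1

d = 0
while np.diff(x).var() < x.var():      # stop when a new difference raises the variance
    x = np.diff(x)
    d += 1
print("number of differences chosen:", d)
```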

9 Estimation and selection of ARMA models

(1) Prove that the variance of the estimator of $\mu$ for an AR(p) obtained by conditional estimation is greater than the variance of the sample mean of the entire process.
(2) Prove that in an AR(2) process the marginal distribution of the first observation is normal, with $E[\omega_1] = \mu$ and $Var[\omega_1] = \gamma_0$, the marginal variance of the process, and that for the second observation $E[\omega_2 \mid \omega_1] = \mu + \frac{\phi_1}{1 - \phi_2}(\omega_1 - \mu)$ and $Var[\omega_2 \mid \omega_1] = \gamma_0\left(1 - \frac{\phi_1^2}{(1 - \phi_2)^2}\right)$.
(3) Prove that the ML estimator of $\sigma_a^2$ for an AR(p) process is $\hat{\sigma}_a^2 = \sum_{t=p+1}^{T}\left(\omega_t - \hat{\mu} - \sum_{i=1}^{p}\hat{\phi}_i(\omega_{t-i} - \hat{\mu})\right)^2 / T$.
(4) Verify that the state space representation of an AR(1) model has $\Omega = \phi$, $H = 1$ and $R = \sigma^2$.
(5) Write the Kalman filter equations to forecast using an AR(1) and verify that they reduce to $\hat{z}_{t|t-1} = \phi z_{t-1}$ with variance $p_{t|t-1} = \sigma^2$ (see the sketch after this list).
(6) Write the state space equations for an MA(1) and prove that $\Omega = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, $H = (1, 0)$ and $R = \sigma^2 \begin{pmatrix} 1 & \theta \\ \theta & \theta^2 \end{pmatrix}$ are verified.
(7) Using the fact that for square matrices $A$ and $C$ and rectangular matrices $B$ and $D$ with appropriate dimensions $(A + BCD)^{-1} = A^{-1} - A^{-1}B(DA^{-1}B + C^{-1})^{-1}DA^{-1}$, prove that the revision equation of the covariances of the state estimation can be written as $S_t^{-1} = S_{t|t-1}^{-1} + H_t' V_t^{-1} H_t$.
(8) Letting precision denote the inverse variance, justify that the above expression can be interpreted as saying that the final precision is the sum of the initial precision and the precision contributed by the last observation.
(9) Write the equations of the Kalman filter for an AR(2) and relate the method of calculating the predictions to that studied in Chapter 8.
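
A sketch for exercise (5): a scalar Kalman filter for an AR(1) observed exactly (state $z_t$, transition $\phi$, no measurement noise). After each update the filtered variance is 0, so the prediction step reduces to $\hat{z}_{t|t-1} = \phi z_{t-1}$ with $p_{t|t-1} = \sigma^2$, as the exercise claims:

```python
import numpy as np

phi, sigma2 = 0.7, 1.0
rng = np.random.default_rng(11)
z = [0.0]
for _ in range(99):                      # simulate an AR(1) path
    z.append(phi * z[-1] + rng.normal(0, np.sqrt(sigma2)))

x_hat, P = 0.0, sigma2 / (1 - phi**2)    # start at the marginal distribution
for t in range(1, 100):
    x_pred = phi * x_hat                 # prediction: z_hat_{t|t-1}
    P_pred = phi**2 * P + sigma2         # prediction variance: p_{t|t-1}
    x_hat, P = z[t], 0.0                 # exact observation: update collapses

print("p_{t|t-1} after the first update:", phi**2 * 0.0 + sigma2)  # = sigma^2
print("one-step forecast of the next value:", phi * z[-1])
```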

10 Model diagnosis and prediction

(1) Fit a model to the data for unemployment in Spain and analyze the residuals of the model to verify that they are suitable.
(2) Prove that the mean of the mixture distribution $\alpha f(\mu_1, \sigma_1^2) + (1 - \alpha)f(\mu_2, \sigma_2^2)$ is $\mu = \alpha\mu_1 + (1 - \alpha)\mu_2$ and its variance is $\alpha\sigma_1^2 + (1 - \alpha)\sigma_2^2 + \alpha(\mu_1 - \mu)^2 + (1 - \alpha)(\mu_2 - \mu)^2$ (a numerical check follows this list).
(3) Use the above result to obtain the mean and variance when we combine predictions from different models.
(4) Prove that if the most probable model is the one with the smallest residual variance, the intervals constructed by model averaging will be wider than those of that single model.
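
A numerical check of the mixture formulas in exercise (2), with arbitrary illustrative parameters, verified by Monte Carlo:

```python
import numpy as np

alpha, mu1, s1, mu2, s2 = 0.3, 1.0, 0.5, 4.0, 1.5

# Mixture mean and variance from the formulas in exercise (2)
mu = alpha * mu1 + (1 - alpha) * mu2
var = (alpha * s1**2 + (1 - alpha) * s2**2
       + alpha * (mu1 - mu)**2 + (1 - alpha) * (mu2 - mu)**2)

# Monte Carlo verification
rng = np.random.default_rng(5)
n = 200_000
pick = rng.random(n) < alpha
draws = np.where(pick, rng.normal(mu1, s1, n), rng.normal(mu2, s2, n))
print("formula:", mu, var)
print("monte carlo:", draws.mean(), draws.var())
```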