Applied Time Series Notes


White noise: $e_t$ has mean 0, variance $\sigma^2$, uncorrelated.

Moving Average

Order 1: $(Y_t - \mu) = e_t - \theta_1 e_{t-1}$, all $t$.
Order $q$: $(Y_t - \mu) = e_t - \theta_1 e_{t-1} - \cdots - \theta_q e_{t-q}$, all $t$.
Infinite order: $(Y_t - \mu) = e_t - \theta_1 e_{t-1} - \theta_2 e_{t-2} - \theta_3 e_{t-3} - \cdots$, all $t$.

Have to be careful here - the right side may not "converge", i.e. may not exist.

Example: $Y_t - \mu = e_t + e_{t-1} + e_{t-2} + e_{t-3} + \cdots$ has infinite variance:
$\Pr\{Y_t > C\} = \Pr\{Z > (C-\mu)/\infty\} = \Pr\{Z > 0\} = 1/2$ for any $C$ (makes no sense!)

Example: $Y_t - \mu = e_t + \rho e_{t-1} + \rho^2 e_{t-2} + \rho^3 e_{t-3} + \cdots$
$\operatorname{Var}\{Y_t\} = \sigma^2(1 + \rho^2 + \rho^4 + \cdots) = \sigma^2/(1-\rho^2)$ for $|\rho| < 1$.

Subtracting $\rho$ times the lagged series:
$Y_t - \mu = e_t + \rho e_{t-1} + \rho^2 e_{t-2} + \rho^3 e_{t-3} + \cdots$
$Y_{t-1} - \mu = e_{t-1} + \rho e_{t-2} + \rho^2 e_{t-3} + \rho^3 e_{t-4} + \cdots$
$(Y_t - \mu) - \rho(Y_{t-1} - \mu) = e_t + 0$

Autoregressive - AR(1)

$(Y_t - \mu) = \rho(Y_{t-1} - \mu) + e_t$, all $t$, so $E(Y_t) - \mu = \rho[E(Y_{t-1}) - \mu]$.

Stationarity: $E(Y_t)$ is constant in $t$ (call it $\mu$) and $\operatorname{Cov}(Y_t, Y_{t-j}) = \gamma(j)$ is a function of $j$ only. Here $E(Y_t) = \mu$ (if $|\rho| < 1$).

Assuming $|\rho| < 1$:
$(Y_t - \mu) = \rho(Y_{t-1} - \mu) + e_t = \rho[\rho(Y_{t-2} - \mu) + e_{t-1}] + e_t = \rho[\rho[\rho(Y_{t-3} - \mu) + e_{t-2}] + e_{t-1}] + e_t$, etc., so
$(Y_t - \mu) = e_t + \rho e_{t-1} + \rho^2 e_{t-2} + \cdots$

Again $E(Y_t) = \mu$, and
$\operatorname{Var}(Y_t) = \sigma^2(1 + \rho^2 + \rho^4 + \rho^6 + \cdots) = \sigma^2/(1-\rho^2)$
$\operatorname{Cov}(Y_t, Y_{t-j}) = \rho^j \sigma^2/(1-\rho^2)$
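The variance algebra above is easy to check by simulation. A minimal SAS sketch (the seed, sample size, and burn-in are my own arbitrary choices, not from the notes): generate an AR(1) with $\rho = .5$ and $\sigma^2 = 48$; the sample variance should come out near $\sigma^2/(1-\rho^2) = 64$.

  * Simulate an AR(1): y(t) = rho*y(t-1) + e(t), e ~ N(0, 48);
  data ar1sim;
    rho = 0.5;
    sigma = sqrt(48);
    y = 0;
    do t = 1 to 5100;
      e = sigma * rannor(12345);   * white noise shock;
      y = rho*y + e;               * AR(1) recursion with mu = 0;
      if t > 100 then output;      * discard burn-in so the start-up value is forgotten;
    end;
    keep t y;
  run;

  proc means data=ar1sim n mean var;  * variance should be near 48/0.75 = 64;
    var y;
  run;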

Example: the plot of $\gamma(j)$ versus $j$ is the Autocovariance Function.

  j:      0    1    2    3    4    5    6    7
  g(j):  64   32   16    8    4    2    1   1/2

(1) Find $\rho$, the variance of $Y$, the variance of $e$, and $\mu$:
$\rho = 0.5$ (the geometric decay rate). The variance of $Y$ is 64.
Variance of $e$: using $\gamma(0) = 64 = \sigma^2/(1-\rho^2) = \sigma^2/(1-.25)$, we get $\sigma^2 = 64(.75) = 48$.
The covariances have no information about $\mu$.

(2) Forecast: $Y_{n+1} = \mu + \rho(Y_n - \mu) + e_{n+1}$, with $\mu = 90$ known (or estimated).
Data $Y_1, Y_2, \ldots, Y_n$ with $Y_n = 106$. We see that $\rho = 0.5$.
$\hat Y_{n+1} = 90 + .5(106 - 90) = 98$, error $Y_{n+1} - \hat Y_{n+1} = e_{n+1}$.
$Y_{n+2} = \mu + \rho^2(Y_n - \mu) + e_{n+2} + \rho e_{n+1}$, so $\hat Y_{n+2} = \mu + \rho^2(Y_n - \mu) = 94$, error $= e_{n+2} + \rho e_{n+1}$.
In general $\hat Y_{n+j} = \mu + \rho^j(Y_n - \mu)$, error $= e_{n+j} + \rho e_{n+j-1} + \cdots + \rho^{j-1} e_{n+1}$.
Forecasts: 98, 94, 92, 91, 90.5, ...

(3) Forecast intervals (large $n$): $\mu$, $\rho$ known (or estimated and assumed known).
Forecast errors: $e_{n+1}$; $e_{n+2} + \rho e_{n+1}$; $e_{n+3} + \rho e_{n+2} + \rho^2 e_{n+1}$; ...
(1) Can't know the future $e$'s.
(2) Can estimate the variance $\sigma^2(1 + \rho^2 + \cdots + \rho^{2(j-1)})$.
(3) Estimate $\sigma^2$: use $r_t = (Y_t - \mu) - \rho(Y_{t-1} - \mu)$ and $\hat\sigma^2 = \sum r_t^2/n$, or get $S_y^2 = \sum (Y_t - \bar Y)^2/n$ and $\hat\sigma^2 = S_y^2(1-\rho^2)$.
Interval: $\hat Y_{n+j} \pm 1.96\sqrt{\hat\sigma^2(1 + \rho^2 + \cdots + \rho^{2(j-1)})}$
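A data-step sketch of these forecast calculations (the numbers are from the example above; the step itself is my illustration, not from the notes):

  * Forecasts and 95% intervals: mu = 90, rho = .5, sigma^2 = 48, Y(n) = 106;
  data fcst;
    mu = 90; rho = 0.5; s2 = 48; yn = 106;
    do j = 1 to 5;
      yhat = mu + rho**j * (yn - mu);   * point forecast mu + rho^j (Yn - mu);
      v + s2 * rho**(2*(j-1));          * accumulates sigma^2(1 + rho^2 + ... + rho^(2(j-1)));
      lcl = yhat - 1.96*sqrt(v);
      ucl = yhat + 1.96*sqrt(v);
      output;
    end;
    keep j yhat v lcl ucl;
  run;

  proc print data=fcst; run;   * yhat = 98, 94, 92, 91, 90.5;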

Estimating an AR(1)

$Y_t = \mu(1-\rho) + \rho Y_{t-1} + e_t = \alpha + \rho Y_{t-1} + e_t$

Looks like a regression: regress $Y_t$ on 1 and $Y_{t-1}$, or regress $Y_t - \bar Y$ on $Y_{t-1} - \bar Y$ (no intercept).

1. $n^{-1}\sum (Y_{t-1} - \bar Y)^2$ converges to $E\{(Y_{t-1} - \mu)^2\} = \gamma(0) = \sigma^2/(1-\rho^2)$.

2. $\sqrt{n}\,[\sum (Y_{t-1} - \bar Y)e_t/n]$ is $\sqrt{n}$ times a mean of $(Y_{t-1} - \mu)e_t$ terms - uncorrelated (but not independent). Nevertheless $\sqrt{n}\,[\sum (Y_{t-1} - \bar Y)e_t/n]$ converges to $N(0, ?)$, where the variance is $E\{(Y_{t-1} - \mu)^2 e_t^2\} = E\{(Y_{t-1} - \mu)^2\}E\{e_t^2\} = \gamma(0)\sigma^2$.

3. $\sqrt{n}(\hat\rho - \rho) = \sqrt{n}\,[\sum (Y_{t-1} - \bar Y)e_t/n] \big/ [n^{-1}\sum (Y_{t-1} - \bar Y)^2]$; in the limit this is $N(0, \gamma(0)\sigma^2/\gamma(0)^2) = N(0, 1-\rho^2)$.

EXAMPLE: Winning percentage (x 1000) for baseball's National League pennant winner.

Regression:
  Year    Y_t    Y_{t-1}
  1921    614      .
  1922    604     614
   ...    ...     ...
  1993    642     605

PROC REG: $\hat Y_t = 341.4 + .44116\, Y_{t-1}$, $s^2 = 863.7$ (standard errors 66.06 and .108), i.e. $\hat Y_t - 610.6 = .44116\,(Y_{t-1} - 610.6)$.

  Year   Forecast                              Forecast Standard Error
  1994:  341.4 + .44(642)    = 624.46          sqrt(863.7) = 29.4
  1995:  341.4 + .44(624.46) = 616.73          sqrt(863.7 (1 + .44^2)) = 32.10
         (or 1995: 610.6 + .44^2 (642 - 610.6))
  2054:  610.6 + .44^60 (41.38) = 610.6 = mu-hat    sqrt(863.7/(1 - .44^2)) = sqrt(gamma-hat(0))

so the long term forecast is just the mean.

Theory assigns standard error $\sqrt{(1-\rho^2)/n}$ to $\hat\rho$. We have $\sqrt{(1-.44^2)/73} = .105$.
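In SAS the regression fit is one LAG away. A sketch (assuming a data set named PENNANT with variables YEAR and WINPCT; the data set name is hypothetical):

  data nl;
    set pennant;              * hypothetical input data set: YEAR, WINPCT;
    winlag = lag(winpct);     * Y(t-1); missing for the first year, as in the table above;
  run;

  proc reg data=nl;
    model winpct = winlag;    * intercept estimates mu(1-rho), slope estimates rho;
  run;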

Identification - Part 1

Autocorrelation: $\rho(j) = \gamma(j)/\gamma(0)$. (For AR(1), $\rho(j) = \rho^j$.) This is the ACF.

Partial Autocorrelation: regress $Y_t$ on $Y_{t-1}, Y_{t-2}, \ldots, Y_{t-j}$. The last coefficient $C_j$ is called the $j$th partial autocorrelation; its plot against $j$ is the PACF.

More formally, $\hat\gamma(j) = n^{-1}\sum (Y_t - \mu)(Y_{t-j} - \mu)$ estimates $\gamma(j)$, and the $X'X$ regression matrix looks like

$\begin{pmatrix} \sum(Y_{t-1}-\bar Y)^2 & \sum(Y_{t-1}-\bar Y)(Y_{t-2}-\bar Y) & \cdots \\ \sum(Y_{t-1}-\bar Y)(Y_{t-2}-\bar Y) & \sum(Y_{t-2}-\bar Y)^2 & \cdots \\ \vdots & & \end{pmatrix}$

so formally $X'Xb = X'Y$ is analogous to the population equation (also the "best predictor" idea):

$\begin{pmatrix} \gamma(0) & \gamma(1) & \cdots & \gamma(j-1) \\ \gamma(1) & \gamma(0) & \cdots & \gamma(j-2) \\ \vdots & & & \vdots \\ \gamma(j-1) & \gamma(j-2) & \cdots & \gamma(0) \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_j \end{pmatrix} = \begin{pmatrix} \gamma(1) \\ \gamma(2) \\ \vdots \\ \gamma(j) \end{pmatrix}$

with $b_j = C_j$. This defines $C_j$, the $j$th partial autocorrelation.

For AR(1) the partials are $C_1 = \rho$ and $C_j = 0$ for $j > 1$.

Moving Average MA(1)

$Y_t = \mu + e_t - \theta e_{t-1}$, $E(Y_t) = \mu$, $\operatorname{Var}(Y_t) = \sigma^2(1 + \theta^2)$.

Autocovariances:
  j:       0                  1         2   3   4
  g(j):   sigma^2(1+theta^2)  -theta*sigma^2   0   0   0
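The defining system above can be solved directly. A PROC IML sketch (my own illustration) using the AR(1) autocovariances 64, 32, 16, 8 from the earlier example; the last element of b is the $j$th partial autocorrelation, so for $j = 3$ it should come out 0:

  proc iml;
    gamma = {64, 32, 16, 8};       * gamma(0), ..., gamma(3) from the AR(1) example;
    j = 3;
    G = toeplitz(gamma[1:j]);      * matrix with (i,k) entry gamma(|i-k|);
    b = solve(G, gamma[2:j+1]);    * the X'X b = X'Y analogue: G*b = (gamma(1),...,gamma(j))`;
    print b;                       * b = (0.5, 0, 0)`: C1 = rho = .5, C2 = C3 = 0;
  quit;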

Example:
  j:      0   1   2   3   4
  g(j):  10   4   0   0   0

$\sigma^2(1+\theta^2) = 10$ and $-\theta\sigma^2 = 4$. Take the ratio: $-\theta/(1+\theta^2) = 4/10$, so $4(1+\theta^2) = -10\theta$, i.e. $\theta^2 + 2.5\theta + 1 = 0$, $(\theta + .5)(\theta + 2) = 0$, so $\theta = -2$ or $\theta = -1/2$.

Forecast MA(1): $Y_{n+1} = \mu + e_{n+1} - \theta e_n$. Now $e_n$ has already occurred, but what is it? I want $\hat Y_{n+1} = \mu - \theta e_n$, so I need $e_n$. Use backward substitution:

$e_n = Y_n - \mu + \theta e_{n-1} = (Y_n - \mu) + \theta(Y_{n-1} - \mu) + \theta^2(Y_{n-2} - \mu) + \theta^3(Y_{n-3} - \mu) + \cdots$

If $|\theta| < 1$, truncation (i.e. not knowing $Y_0$, $Y_{-1}$, etc.) won't hurt too much. If $|\theta| > 1$, major problems.

Moral: in our example, choose $\theta = -1/2$ so we can "invert" the process, i.e. write it as a long AR.

$Y_t - 90 = e_t + .5 e_{t-1}$. Data = 98, 94, 92, 85, 89, 93, 92.
$\hat Y_t = 90 + .5\hat e_{t-1}$ (how to start?) One way: recursion with $\hat e_0 = 0$:

  Y_t - mu:        8   4   2   -5   -1    3    2
  Yhat_t - mu:     0   4   0    1   -3    1    1
  ehat_t:     (0)  8   0   2   -6    2    2    1

$\hat Y_8 = 90 + .5(1) = 90.5$, error $e_8$.
$\hat Y_9 = \hat Y_{10} = \hat Y_{11} = \cdots = 90 = \mu$, error $e_n + .5 e_{n-1}$.

AR(p)

$(Y_t - \mu) = \alpha_1(Y_{t-1} - \mu) + \alpha_2(Y_{t-2} - \mu) + \cdots + \alpha_p(Y_{t-p} - \mu) + e_t$
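A data-step version of that recursion (the step is my sketch; the data and $\theta$ are from the example):

  * MA(1) forecasting recursion: Y(t) - 90 = e(t) + .5 e(t-1), ehat(0) = 0;
  data ma1fc;
    retain ehat 0;
    input y @@;
    yhat = 90 + 0.5*ehat;    * one-step-ahead forecast from the previous residual;
    ehat = y - yhat;         * updated residual estimate;
    datalines;
  98 94 92 85 89 93 92
  ;
  run;

  proc print data=ma1fc; run;   * ehat: 8, 0, 2, -6, 2, 2, 1 as in the table above;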

MA(q)

$Y_t = \mu + e_t - \theta_1 e_{t-1} - \cdots - \theta_q e_{t-q}$

Covariance, MA(q):
$Y_t - \mu = e_t - \theta_1 e_{t-1} - \cdots - \theta_j e_{t-j} - \theta_{j+1} e_{t-j-1} - \cdots - \theta_q e_{t-q}$
$Y_{t-j} - \mu = e_{t-j} - \theta_1 e_{t-j-1} - \cdots$
Covariance $= [-\theta_j + \theta_1\theta_{j+1} + \theta_2\theta_{j+2} + \cdots + \theta_{q-j}\theta_q]\,\sigma^2$ (and 0 if $j > q$).

Example:
  j:       0     1    2   3   4   5
  g(j):  285  -182   40   0   0   0        so MA(2)

$\sigma^2[1 + \theta_1^2 + \theta_2^2] = 285$
$\sigma^2[-\theta_1 + \theta_1\theta_2] = -182$      giving $\sigma^2 = 100$, $\theta_1 = 1.3$
$\sigma^2[-\theta_2] = 40$                           giving $\theta_2 = -.4$

$Y_t = e_t - 1.3 e_{t-1} + .4 e_{t-2}$.

Can we write $e_t = (Y_t - \mu) + C_1(Y_{t-1} - \mu) + C_2(Y_{t-2} - \mu) + \cdots$? Will the $C_j$ die off exponentially, i.e. is this invertible?

Backshift: $Y_t = \mu + (1 - 1.3B + .4B^2)e_t$ where $B(e_t) = e_{t-1}$, $B^2(e_t) = B(e_{t-1}) = e_{t-2}$, etc., so

$e_t = \frac{1}{1 - 1.3B + .4B^2}\,(Y_t - \mu)$

Formally (partial fractions),
$\frac{1}{(1-.5B)(1-.8B)} = \frac{-5/3}{1-.5B} + \frac{8/3}{1-.8B}$

and $\frac{1}{1-X} = 1 + X + X^2 + X^3 + \cdots$ if $|X| < 1$, so

$-\tfrac{5}{3}(1 + .5B + .25B^2 + .125B^3 + \cdots) + \tfrac{8}{3}(1 + .8B + .64B^2 + .512B^3 + \cdots) = 1 + 1.3B + 1.29B^2 + \cdots = 1 + \sum C_j B^j$

Obviously the $C_j$'s die off exponentially.
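The $C_j$ can also be generated without partial fractions: $(1 - 1.3B + .4B^2)\sum C_j B^j = 1$ implies the recursion $C_j = 1.3\,C_{j-1} - .4\,C_{j-2}$ with $C_0 = 1$, $C_1 = 1.3$. A small sketch of mine, not from the notes:

  * weights of 1/(1 - 1.3B + .4B^2): C0 = 1, C1 = 1.3, Cj = 1.3*C(j-1) - .4*C(j-2);
  data piwts;
    cprev2 = 1;      * C0;
    cprev1 = 1.3;    * C1;
    do j = 2 to 10;
      c = 1.3*cprev1 - 0.4*cprev2;   * recursion; C2 = 1.29 as in the expansion above;
      output;
      cprev2 = cprev1; cprev1 = c;
    end;
    keep j c;
  run;

  proc print data=piwts; run;   * the c values decay geometrically;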

AR(2)

$(Y_t - \mu) - .9(Y_{t-1} - \mu) + .2(Y_{t-2} - \mu) = e_t$, i.e.
$(1 - .9B + .2B^2)(Y_t - \mu) = e_t$
$(1 - .5B)(1 - .4B)(Y_t - \mu) = e_t$

Right away we see that $(1 - .5X)(1 - .4X) = 0$ has all roots exceeding 1 in magnitude, so as we did with the MA(2) we can write $(Y_t - \mu) = e_t + C_1 e_{t-1} + C_2 e_{t-2} + \cdots$ with the $C_j$ dying off exponentially. Past "shocks" $e_{t-j}$ are not so important in determining $Y_t$.

AR(p): $(1 - \alpha_1 B - \alpha_2 B^2 - \cdots - \alpha_p B^p)(Y_t - \mu) = e_t$. If all roots of $1 - \alpha_1 m - \alpha_2 m^2 - \cdots - \alpha_p m^p = 0$ have $|m| > 1$, the series is stationary (shocks temporary).

MA(2): $Y_t - \mu = (1 - \theta_1 B - \theta_2 B^2)e_t$. If all roots of $1 - \theta_1 m - \theta_2 m^2 = 0$ have $|m| > 1$, the series is invertible (we can extract $e_t$ from the $Y$'s).

Alternative version of the characteristic equation (I prefer this):
$m^p - \alpha_1 m^{p-1} - \alpha_2 m^{p-2} - \cdots - \alpha_p = 0$; stationary <=> all roots $|m| < 1$.

Mixed Models ARMA(p, q)

Example: $(Y_t - \mu) - .5(Y_{t-1} - \mu) = e_t + .8 e_{t-1}$

$Y_t - \mu = \frac{1 + .8B}{1 - .5B}\,e_t = (1 + .8B)(1 + .5B + .25B^2 + .125B^3 + \cdots)e_t = e_t + 1.3 e_{t-1} + .65 e_{t-2} + .325 e_{t-3} + \cdots$

Yule-Walker equations: multiply through by $(Y_{t-j} - \mu)$ and take expected values,
$E\{(Y_{t-j} - \mu)[(Y_t - \mu) - .5(Y_{t-1} - \mu)]\} = E\{(Y_{t-j} - \mu)(e_t + .8 e_{t-1})\}$

  j = 0:  $\gamma(0) - .5\gamma(1) = \sigma^2(1 + 1.04)$    [in general $\gamma(0) - \alpha\gamma(1) = \sigma^2(1 + \theta(\alpha + \theta))$]
  j = 1:  $\gamma(1) - .5\gamma(0) = \sigma^2(.8)$          [in general $\gamma(1) - \alpha\gamma(0) = \theta\sigma^2$]
  j > 1:  $\gamma(j) - .5\gamma(j-1) = 0$
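Checking the stationarity condition numerically - a PROC IML sketch of my own for the AR(2) factorization above; POLYROOT takes the coefficients in order of decreasing power:

  proc iml;
    * roots of .2 m**2 - .9 m + 1 = 0, i.e. of (1 - .5m)(1 - .4m) = 0;
    r = polyroot({0.2 -0.9 1});
    print r;   * roots 2.5 and 2 (imaginary parts 0): both exceed 1, so stationary;
  quit;

The same check with the MA coefficients in place of the AR coefficients answers the invertibility question.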

$\begin{pmatrix} 1 & -.5 \\ -.5 & 1 \end{pmatrix}\begin{pmatrix} \gamma(0) \\ \gamma(1) \end{pmatrix} = \begin{pmatrix} 2.04 \\ 0.80 \end{pmatrix}\sigma^2 \quad\Longrightarrow\quad \begin{pmatrix} \gamma(0) \\ \gamma(1) \end{pmatrix} = \begin{pmatrix} 3.2533 \\ 2.4267 \end{pmatrix}\sigma^2$

  j:      0      1      2      3      4
  rho(j): 1    .746   .373   .186   .093   etc.

Define $\gamma(-j) = \gamma(j)$ and $\rho(-j) = \rho(j)$.

In general Yule-Walker relates covariances to parameters. Two uses:
(1) Given the model, get $\gamma(j)$ and $\rho(j)$.
(2) Given estimates of $\gamma(j)$, get rough estimates of the parameters.

Identification - Part II

Inverse Autocorrelation (IACF). For the model
$(Y_t - \mu) - \alpha_1(Y_{t-1} - \mu) - \cdots - \alpha_p(Y_{t-p} - \mu) = e_t - \theta_1 e_{t-1} - \cdots - \theta_q e_{t-q}$
define the IACF as the ACF of the dual model:
$(Y_t - \mu) - \theta_1(Y_{t-1} - \mu) - \cdots - \theta_q(Y_{t-q} - \mu) = e_t - \alpha_1 e_{t-1} - \cdots - \alpha_p e_{t-p}$

The IACF of an AR(p) is the ACF of an MA(p); the IACF of an MA(q) is the ACF of an AR(q).

How do we estimate the ACF, IACF, and PACF from data?
Autocovariances: $\hat\gamma(j) = \sum_{t=j+1}^{n}(Y_t - \bar Y)(Y_{t-j} - \bar Y)/n$
ACF: $\hat\rho(j) = \hat\gamma(j)/\hat\gamma(0)$
PACF: plug $\hat\gamma(j)$ into the formal defining formula and solve for $C_j$.
IACF: approximate by fitting a long autoregression
$(Y_t - \mu) = \hat\alpha_1(Y_{t-1} - \mu) + \cdots + \hat\alpha_k(Y_{t-k} - \mu) + e_t$
and then compute the ACF of the dual model $Y_t - \mu = e_t - \hat\alpha_1 e_{t-1} - \cdots - \hat\alpha_k e_{t-k}$.
To fit the long autoregression, plug $\hat\gamma(j)$ into the Yule-Walker equations for an AR(k), or just regress $Y_t$ on $Y_{t-1}, Y_{t-2}, \ldots, Y_{t-k}$.

All 3 functions - ACF, PACF, IACF - are computed in PROC ARIMA. How do we interpret them? Compare to a catalog of theoretical ACF, PACF, and IACF shapes for AR, MA, and ARMA models. See the SAS System for Forecasting Time Series book for several examples - section 3.3.

Variance for the IACF and PACF: approximately $1/n$. For the ACF, SAS uses Bartlett's formula; for $\hat\rho(j)$ this is

$\frac{1}{n}\Big(1 + 2\sum_{i=1}^{j-1}\hat\rho^2(i)\Big)$

(Fuller gives Bartlett's formula as 6.2.11 after first deriving a more accurate estimate of the variance of $\hat\rho(i)$. The sum there is infinite, so in SAS the hypothesis being tested is $H_0\!:\rho(j) = 0$ assuming $\rho(i) = 0$ for $i > j$. Assuming an MA of order no more than $j$, is the $j$th autocorrelation 0?)

Syntax:

  PROC ARIMA;
     IDENTIFY VAR=Y (NOPRINT NLAG=10 CENTER);
     ESTIMATE P=2 Q=1 (NOCONSTANT NOPRINT ML PLOT);
     FORECAST LEAD=7 OUT=OUT1 ID=DATE INTERVAL=MONTH;

(1) The abbreviations I, E, F will work.
(2) Must have I preceding E, and E preceding F.
(3) CENTER subtracts $\bar Y$.
(4) NOCONSTANT is like NOINT in PROC REG.
(5) ML (maximum likelihood) takes more time but has slightly better accuracy than the default least squares.
(6) PLOT gives the ACF, PACF, and IACF of the residuals.

Diagnostics: Box-Ljung chi-square on the data $Y_t$ or on a residual series.
(1) Compute the estimated ACF $\hat\rho(j)$.
(2) The test is $Q = n(n+2)\sum_{j=1}^{k}\hat\rho^2(j)/(n-j)$.
(3) Compare to a $\chi^2$ distribution with $k - p - q$ d.f. (for an ARMA(p, q)).
* SAS (PROC ARIMA) will give the Q test on the original data and on the residuals from fitted models.

* Q statistics are given in sets of 6, i.e. for j = 1 to 6, for j = 1 to 12, for j = 1 to 18, etc. Note that these are cumulative.
* For the original series, H0: the series is white noise to start with.
* For the residuals, H0: the residual series is white noise.

Suppose the residuals are autocorrelated - what does it mean? We could predict future residuals from past ones - then why not do it? The model predicts using correlation, so autocorrelated residuals mean the model has not captured all the predictability in the data. So...

H0: model is sufficient vs. H1: needs more work <=> a "lack of fit" test.

Let's try some examples. All have this kind of header, and all have 1500 observations:

  ARIMA Procedure
  Name of variable = Y1.
  Mean of working series = -0.0306
  Standard deviation = 1.726685
  Number of observations = 1500

Y1

Autocorrelations (lag-0 autocovariance = 2.98144):
  Lag:   1        2        3        4        5        6        7        8
  ACF:  0.80384  0.63586  0.50040  0.40408  0.33788  0.28084  0.22803  0.19744

Inverse Autocorrelations: -0.501 at lag 1; lags 2-8 negligible (two standard errors here are about 2/sqrt(1500) = 0.05).
Partial Autocorrelations: 0.804 at lag 1; lags 2-8 negligible.

Autocorrelation Check for White Noise (to lag 6): Chi-Square = 2493.02, DF = 6, Prob = 0.000.
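As a check on the Q formula from the syntax page, the printed statistic for Y1 can be reproduced (approximately, since the displayed correlations are rounded) from the first six autocorrelations. A sketch of my own:

  * Ljung-Box Q for Y1 from the printed autocorrelations, n = 1500;
  data qcheck;
    array r(6) _temporary_ (0.804 0.636 0.500 0.404 0.338 0.281);
    n = 1500;
    do j = 1 to 6;
      q + n*(n+2)*r(j)**2 / (n-j);   * n(n+2) * sum of rhohat(j)^2/(n-j);
    end;
    p = 1 - probchi(q, 6);           * compare to chi-square with 6 df;
    put q= p=;                       * written to the log; q lands near the printed 2493;
  run;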

Y2

Autocorrelations (lag-0 autocovariance = 2.84007):
  Lag:   1         2        3         4        5         6        7         8
  ACF: -0.79184  0.63585  -0.51416  0.40367  -0.32460  0.27033  -0.22044  0.17119

Inverse Autocorrelations: 0.474 at lag 1; lags 2-8 negligible.
Partial Autocorrelations: -0.792 at lag 1; lags 2-8 negligible.

Autocorrelation Check for White Noise (to lag 6): Chi-Square = 2462.73, DF = 6, Prob = 0.000.

Y3

Autocorrelations (lag-0 autocovariance = 1.68768):
  Lag:   1        2        3        4        5        6        7        8
  ACF:  0.51664  0.54852  0.35749  0.32524  0.25637  0.22704  0.16740  0.15946

Inverse Autocorrelations: -0.172 at lag 1 and a sizable negative value at lag 2; negligible beyond.
Partial Autocorrelations: 0.517 and 0.384 at lags 1-2; negligible beyond.

Autocorrelation Check for White Noise (to lag 6): Chi-Square = 1382.13, DF = 6, Prob = 0.000.

Y4

Autocorrelations (lag-0 autocovariance = 1.87853):
  Lag:   1        2         3         4         5        6        7        8
  ACF:  0.48166  -0.16687  -0.37872  -0.19181  0.05524  0.13108  0.07326  0.02967

Inverse Autocorrelations: -0.606 and 0.274 at lags 1-2; negligible beyond.
Partial Autocorrelations: 0.482 and -0.519 at lags 1-2; negligible beyond.

Autocorrelation Check for White Noise (to lag 6): Chi-Square = 692.35, DF = 6, Prob = 0.000.

Y5

Autocorrelations (lag-0 autocovariance = 1.77591):
  Lag:   1         2         3         4         5        6        7        8
  ACF:  0.50037  -0.00314  -0.04169  -0.02831  0.01702  0.01841  0.00206  0.03667

Inverse Autocorrelations: -0.779, 0.586 at lags 1-2, then alternating in sign and decaying slowly in magnitude.
Partial Autocorrelations: alternating and decaying: 0.500, -0.338, 0.220, -0.152, 0.149, -0.116, ...

Autocorrelation Check for White Noise (to lag 6): Chi-Square = 381.10, DF = 6, Prob = 0.000.

Y6

Autocorrelations (lag-0 autocovariance = 1.3101):
  Lag:   1         2         3         4         5        6        7         8
  ACF: -0.11669  -0.31571  -0.04260  -0.02664  0.03526  0.02042  -0.05012  0.03374

Inverse Autocorrelations: all positive and decaying slowly, roughly 0.43, 0.52, 0.33, 0.28, 0.17, 0.11, 0.08, 0.02.
Partial Autocorrelations: all negative through lag 8: -0.117, -0.334, -0.149, -0.193, -0.086, -0.085, ...

Autocorrelation Check for White Noise (to lag 6): Chi-Square = 176.67, DF = 6, Prob = 0.000.

Y7

Autocorrelations (lag-0 autocovariance = 1.05471):
  Lag:   1         2         3         4         5        6         7         8
  ACF:  0.02710  -0.00234  -0.03150  -0.02152  0.01729  0.02231  -0.02510  0.03316

Inverse Autocorrelations: all negligible.
Partial Autocorrelations: all negligible.

Autocorrelation Check for White Noise (to lag 6): Chi-Square = 4.55, DF = 6, Prob = 0.603.

Y8

Autocorrelations (lag-0 autocovariance = 2.7591):
  Lag:   1        2        3        4        5        6        7        8
  ACF:  0.75687  0.44129  0.24585  0.14558  0.10418  0.07771  0.05736  0.05649

Inverse Autocorrelations: -0.679 at lag 1, roughly 0.27 at lag 2, -0.099 at lag 3; negligible beyond.
Partial Autocorrelations: 0.757, -0.308, 0.109 at lags 1-3; negligible beyond.

Autocorrelation Check for White Noise (to lag 6): Chi-Square = 1302.4, DF = 6, Prob = 0.000.

Back to the National League example:

  Winning Percentage for National League Pennant Winner
  Name of variable = WINPCT.
  Mean of working series = 610.3699
  Standard deviation = 32.01905
  Number of observations = 73

Autocorrelations:
  Lag:   0         1        2        3        4        5        6       7        8
  Cov:  1025.219  446.181  454.877  212.299  145.290  154.803  92.699  125.231  144.686
  ACF:  1.00000   0.43521  0.44369  0.20708  0.14172  0.15099  0.09042 0.12215  0.14113

Inverse Autocorrelations: -0.196 and -0.349 at lags 1-2; lags 3-8 within two standard errors of zero (about 2/sqrt(73) = 0.23).
Partial Autocorrelations: 0.43521 and 0.31370 at lags 1-2; lags 3-8 within two standard errors of zero.

Autocorrelation Check for White Noise (to lag 6): Chi-Square = 37.03, DF = 6, Prob = 0.000.
Autocorrelations: 0.435  0.444  0.207  0.142  0.151  0.090

Looks like an AR(2) or MA(2) may fit well.

How to fit an MA? Data: 10 12 13 11 9 10 8 9 8 (mean = 10).

Sum of squares for $\theta = -.5$ in the model $Y_t = e_t - \theta e_{t-1}$ (in deviations from the mean):

  Y:                 10   12   13    11     9    10      8       9        8
  y = Y - 10:         0    2    3     1    -1     0     -2      -1       -2
  yhat = .5 ehat(-1): 0    0    1     1     0   -0.5    0.25   -1.125    0.0625
  ehat = y - yhat:    0    2    2     0    -1    0.5   -2.25    0.125   -2.0625

Sum of squared errors = 0 + 2^2 + ... + 2.0625^2 = 18.58.

  theta:     .1    0    -.1    -.2    -.3    -.4     -.5     -.6    -.7
  SS(err):  26.7   24   21.9   20.3   19.2   18.64   18.58   19.2   20.6

so $\hat\theta \approx -.5$.

A better way: make the derivative of SSq vanish at $\hat\theta$,

$\frac{\partial}{\partial\theta}\,\mathrm{SSq}(\hat\theta) = 2\sum_t e_t(\hat\theta)\,\frac{\partial e_t}{\partial\theta}(\hat\theta) = 0$

How? If $e_t(\hat\theta)$ is a residual from a regression on $\frac{\partial e_t}{\partial\theta}(\hat\theta)$, then the derivative is 0 by orthogonality of residuals to regressors.

Taylor's series: $e_t(\theta) = e_t(\hat\theta) + \frac{\partial e_t}{\partial\theta}(\hat\theta)\,(\theta - \hat\theta) + \text{remainder}$. Ignore the remainder and evaluate at the true $\theta$, where $e_t(\theta)$ = white noise:

$e_t(\hat\theta) \approx -\frac{\partial e_t}{\partial\theta}(\hat\theta)\,(\theta - \hat\theta) + e_t(\theta)$

We can calculate $e_t(\hat\theta)$ and $-\frac{\partial e_t}{\partial\theta}(\hat\theta)$, and the error term is white noise! Estimate $(\theta - \hat\theta)$ by regression and iterate to convergence.
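The SS(err) grid above is easy to reproduce in a data step before moving to the Gauss-Newton code on the next page (the step is my sketch; the data are from the example):

  * SS(theta) for the MA(1) grid: e(t) = y(t) + theta*e(t-1), e(0) = 0;
  data ssgrid;
    array yv(9) _temporary_ (0 2 3 1 -1 0 -2 -1 -2);   * y = Y - 10;
    do theta = 0.1 to -0.7 by -0.1;
      ss = 0; e1 = 0;
      do t = 1 to 9;
        e = yv(t) + theta*e1;   * residual recursion;
        ss = ss + e*e;
        e1 = e;
      end;
      output;                   * minimum SS = 18.58 near theta = -.5;
    end;
    keep theta ss;
  run;

  proc print data=ssgrid; run;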

Also: one can show the regression standard errors are justified in large samples.

1. $e_t(\hat\theta) = Y_t + \hat\theta\, e_{t-1}(\hat\theta)$ for the initial $\hat\theta$.
2. $\frac{\partial e_t}{\partial\theta}(\hat\theta) = e_{t-1}(\hat\theta) + \hat\theta\,\frac{\partial e_{t-1}}{\partial\theta}(\hat\theta)$
3. Regress sequence (1) on sequence (2); the fitted coefficient estimates $(\theta - \hat\theta)$.

  data ma;   * begin Hartley modification;
    theta = -.2 - .447966 + .3168 - .6*.244376;
    call symput('h', put(theta,8.5));
    title "Using theta = &h";
    if _n_ = 1 then do; e1 = 0; w1 = 0; end;
    input y @@;
    e = y + theta*e1;
    w = -e1 + theta*w1;
    output;
    retain;
    e1 = e; w1 = w;
  cards;
  0 2 3 1 -1 0 -2 -1 -2
  ;
  proc print noobs; var y e e1 w w1;
  proc reg; model e = w / noint;
  run;

  ------------------------------------------------------------------------------------
  Using theta = -0.47779

   Y       E          E1         W          W1
   0     0.00000    0.00000    0.00000    0.00000
   2     2.00000    0.00000    0.00000    0.00000
   3     2.04442    2.00000   -2.00000    0.00000
   1     0.02319    2.04442   -1.08883   -2.00000
  -1    -1.01108    0.02319    0.49704   -1.08883
   0     0.48309   -1.01108    0.77360    0.49704
  -2    -2.23081    0.48309   -0.85271    0.77360
  -1     0.06586   -2.23081    2.63823   -0.85271
  -2    -2.03147    0.06586   -1.32639    2.63823

  Parameter Estimates
                      Parameter    Standard    T for H0:
  Variable    DF      Estimate     Error       Parameter=0    Prob > |T|
  W            1      0.034087     0.3868007   0.088          0.9319

Another way to estimate ARMA models is EXACT MAXIMUM LIKELIHOOD. Gonzalez-Farias' dissertation uses this methodology for nonstationary series.

AR(1): $(Y_t - \mu) = \rho(Y_{t-1} - \mu) + e_t$
$Y_1 - \mu \sim N(0, \sigma^2/(1-\rho^2))$
$(Y_t - \mu) - \rho(Y_{t-1} - \mu) \sim N(0, \sigma^2)$, $t = 2, 3, \ldots, n$

Likelihood:

$L = \frac{\sqrt{1-\rho^2}}{\sigma\sqrt{2\pi}}\, e^{-(Y_1-\mu)^2 (1-\rho^2)/(2\sigma^2)}\; \prod_{t=2}^{n} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-[(Y_t-\mu)-\rho(Y_{t-1}-\mu)]^2/(2\sigma^2)}$

$\sqrt{1-\rho^2}$ is positive in $(-1, 1)$ and 0 at $\pm 1$, so this is easy to maximize.

Logarithms:

$\ln(L) = \tfrac{1}{2}\ln(1-\rho^2) - \tfrac{n}{2}\ln[2\pi\, s^2(\rho)] - \tfrac{n}{2}$

where $s^2(\rho) = \mathrm{SSq}/n$,
$\mathrm{SSq} = (Y_1-\mu)^2(1-\rho^2) + \sum_{t=2}^{n}[(Y_t-\mu) - \rho(Y_{t-1}-\mu)]^2$, and
$\mu = \mu(\rho) = \dfrac{(Y_1 + Y_n) + (1-\rho)\sum_{t=2}^{n-1} Y_t}{2 + (n-2)(1-\rho)}$.

If $|\rho| < 1$, then choosing $\rho$ to maximize $\ln(L)$ does not differ in the limit from choosing $\rho$ to minimize $\sum_{t=2}^{n}[(Y_t-\mu) - \rho(Y_{t-1}-\mu)]^2$ (least squares and maximum likelihood are about the same for large samples: OLS is approximately the MLE). Gonzalez-Farias shows that the MLE and OLS differ in a nontrivial way, even in the limit, when $\rho = 1$.

Example of MLE for the Iron and Steel Exports data:

  DATA STEEL STEEL2;
    ARRAY Y(44); n=44; pi = 4*atan(1);
    do t=1 to n; input EXPORT @@; OUTPUT STEEL; Y(t)=EXPORT; end;
    do RHO = .44 to .51 by .01;
      MU = (Y(1) + Y(n) + (1-rho)*sum(of Y2-Y43))/(2+(1-rho)*42);
      SSq = (1-rho**2)*(Y(1)-mu)**2;
      do t = 2 to n;
        SSq = SSq + (Y(t)-mu - rho*(Y(t-1)-mu))**2;
      end;
      lnl = .5*log(1-rho*rho) - (n/2)*log(2*pi*SSq/n) - n/2;
      output STEEL2;
    end;
    drop y1-y44;
  CARDS;
  3.89 2.41 2.80 8.27 7.12 7.24 7.15 6.05 5.12 5.03 6.88
  4.70 5.06 3.16 3.26 4.55 2.43 3.16 4.55 5.17 6.95 3.46
  2.13 3.47 2.79 2.52 2.80 4.04 3.08 2.82 2.17 2.78 5.94
  8.14 3.55 3.61 5.06 7.13 4.15 3.86 3.22 3.50 3.76 5.11
  ;
  proc arima data=steel;
    i var=export noprint;
    e p=1 ml;
  proc plot data=steel2;
    plot lnl*rho / vpos=20 hpos=40;
    title "Log likelihood for Iron Exports data";
  proc print data=steel2;
  run;

  Log likelihood for Iron Exports data

  ARIMA Procedure
  Maximum Likelihood Estimation

              Parameter   Approx.               T
              Estimate    Std Error   Ratio   Lag
  MU          4.419       0.4300      10.28     0
  AR1,1       0.46415     0.13579      3.42     1

[PROC PLOT output: LNL plotted against RHO (legend: A = 1 obs, B = 2 obs, etc.). The log likelihood peaks near RHO = 0.46 at about -81.186, falling off toward -81.21 at RHO = 0.44 and -81.25 at RHO = 0.51, consistent with the PROC ARIMA estimate 0.46415.]

  IRON AND STEEL EXPORTS EXCLUDING SCRAPS, WEIGHT IN MILLION TONS, 1937-1980

  OBS   N      PI      T   EXPORT   RHO     MU       SSQ       LNL
   1    44   3.14159   45   5.11    0.44   4.4100   102.765   -81.2062
   2    44   3.14159   45   5.11    0.45   4.4112   102.687   -81.1914
   3    44   3.14159   45   5.11    0.46   4.4123   102.636   -81.1861
   4    44   3.14159   45   5.11    0.47   4.4135   102.610   -81.1866
   5    44   3.14159   45   5.11    0.48   4.4148   102.611   -81.1992
   6    44   3.14159   45   5.11    0.49   4.4161   102.638   -81.2051
   7    44   3.14159   45   5.11    0.50   4.4174   102.692   -81.2231
   8    44   3.14159   45   5.11    0.51   4.4188   102.772   -81.2469