
Time Series Analysis

hm@imm.dtu.dk
Informatics and Mathematical Modelling
Technical University of Denmark
DK-2800 Kgs. Lyngby

Outline of the lecture

- Input-output systems
- The z-transform: important issues from Sec. 4.4
- Cross-correlation functions, from Sec. 6.2.2
- Transfer function models: identification, estimation, validation, prediction, Chap. 8

The z-transform

A way to describe dynamical systems in discrete time:

Z({x_t}) = X(z) = \sum_{t=-\infty}^{\infty} x_t z^{-t}   (z complex)

The z-transform of a time delay: Z({x_{t-\tau}}) = z^{-\tau} X(z)

The transfer function of the system is called H(z) = \sum_{t=-\infty}^{\infty} h_t z^{-t}:

y_t = \sum_{k=-\infty}^{\infty} h_k x_{t-k}  <=>  Y(z) = H(z) X(z)

Relation to the frequency response function: H(\omega) = H(e^{i\omega})
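
A quick worked example (mine, not from the slides): for the first-order system y_t = 0.8 y_{t-1} + x_t, the delay rule gives Y(z) = 0.8 z^{-1} Y(z) + X(z), so

H(z) = Y(z)/X(z) = 1/(1 - 0.8 z^{-1}),

whose impulse response h_k = 0.8^k decays exponentially.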

Cross-covariance and cross-correlation functions

Estimates of the cross-covariance function:

C_{XY}(k) = \frac{1}{N} \sum_{t=1}^{N-k} (X_t - \bar{X})(Y_{t+k} - \bar{Y})

C_{XY}(-k) = \frac{1}{N} \sum_{t=1}^{N-k} (X_{t+k} - \bar{X})(Y_t - \bar{Y})

Estimate of the cross-correlation function:

\hat{\rho}_{XY}(k) = C_{XY}(k) / \sqrt{C_{XX}(0) C_{YY}(0)}

If at least one of the processes is white noise, and if the processes are uncorrelated, then \hat{\rho}_{XY}(k) is approximately normally distributed with mean 0 and variance 1/N.
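
These estimators are easy to write out directly; a minimal sketch in R (used below as a stand-in for the S-PLUS on the later slides; the function name and series are my own):

ccf_hat <- function(x, y, k) {
  ## rho_hat_XY(k) exactly as defined above
  N  <- length(x)
  xc <- x - mean(x); yc <- y - mean(y)
  Cxy <- if (k >= 0) sum(xc[1:(N - k)] * yc[(1 + k):N]) / N
         else        sum(xc[(1 - k):N] * yc[1:(N + k)]) / N
  Cxy / sqrt((sum(xc^2) / N) * (sum(yc^2) / N))
}
## under the white-noise null, values outside +/- 2/sqrt(N) stand out at roughly the 5% level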

Systems without measurement noise

[Diagram: input {X_t} -> System -> output {Y_t}]

Y_t = \sum_{i=-\infty}^{\infty} h_i X_{t-i}

Given \gamma_{XX} and the system description we obtain

\gamma_{YY}(k) = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} h_i h_j \gamma_{XX}(k - j + i)   (1)

\gamma_{XY}(k) = \sum_{i=-\infty}^{\infty} h_i \gamma_{XX}(k - i)   (2)

Systems with measurement noise

[Diagram: input {X_t} -> System -> sum with noise {N_t} -> output {Y_t}]

Y_t = \sum_{i=-\infty}^{\infty} h_i X_{t-i} + N_t

Time domain relations

Given \gamma_{XX} and \gamma_{NN} we obtain

\gamma_{YY}(k) = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} h_i h_j \gamma_{XX}(k - j + i) + \gamma_{NN}(k)   (3)

\gamma_{XY}(k) = \sum_{i=-\infty}^{\infty} h_i \gamma_{XX}(k - i)   (4)

IMPORTANT ASSUMPTION: No feedback in the system.

Spectral relations

f_{YY}(\omega) = H(e^{i\omega}) H(e^{-i\omega}) f_{XX}(\omega) + f_{NN}(\omega) = G^2(\omega) f_{XX}(\omega) + f_{NN}(\omega)

f_{XY}(\omega) = H(e^{i\omega}) f_{XX}(\omega) = H(\omega) f_{XX}(\omega)

The frequency response function, which is a complex function, is usually split into a modulus and an argument:

H(\omega) = |H(\omega)| e^{i \arg H(\omega)} = G(\omega) e^{i\phi(\omega)},

where G(\omega) and \phi(\omega) are the gain and phase, respectively, of the system at the frequency \omega from the input {X_t} to the output {Y_t}.
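
A small R sketch of gain and phase (my own illustration; the example polynomial is the "2 real poles" system from a later slide):

gain_phase <- function(omega, delta, w) {
  ## evaluate omega(B)/delta(B) at B = exp(-i w), i.e. H(e^{i w})
  poly_at <- function(coefs) sapply(w, function(wi)
    sum(coefs * exp(-1i * wi * (seq_along(coefs) - 1))))
  Hw <- poly_at(omega) / poly_at(delta)
  list(gain = Mod(Hw), phase = Arg(Hw))
}
w  <- seq(0, pi, length.out = 256)
gp <- gain_phase(omega = 1, delta = c(1, -1.7, 0.72), w = w)
plot(w, gp$gain, type = "l")   # clear low-pass behaviour for this system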

Estimating the impulse response

The poles and zeros characterize the impulse response (Appendix A and Chapter 8).

If we can estimate the impulse response from recordings of input and output, we get information that allows us to suggest a structure for the transfer function.

[Figure: three panels over lags 0-30 — the true impulse response, the SCCF, and the SCCF after pre-whitening]

Estimating the impulse response

On the previous slide we saw that we got a good picture of the true impulse response when pre-whitening the data. The reason is that

\gamma_{XY}(k) = \sum_{i=-\infty}^{\infty} h_i \gamma_{XX}(k - i),

and only if {X_t} is white noise do we get \gamma_{XY}(k) = h_k \sigma_X^2. Therefore, if {X_t} is white noise, the SCCF \hat{\rho}_{XY}(k) is proportional to \hat{h}_k. Normally {X_t} is not white noise; we fix this using pre-whitening.

Pre-whitening

a) A suitable ARMA model is applied to the input series:

   \phi(B) X_t = \theta(B) \alpha_t

b) We perform a pre-whitening of the input series:

   \alpha_t = \theta(B)^{-1} \phi(B) X_t

c) The output series {Y_t} is filtered with the same model, i.e.

   \beta_t = \theta(B)^{-1} \phi(B) Y_t

d) Now the impulse response function is estimated by

   \hat{h}_k = C_{\alpha\beta}(k) / C_{\alpha\alpha}(0) = C_{\alpha\beta}(k) / s_\alpha^2

Example using S-PLUS

## ARMA structure for x; AR(1)
x.struct <- list(order=c(1,0,0))
## Estimate the model (check for convergence):
x.fit <- arima.mle(x - mean(x), model=x.struct)
## Extract the model:
x.mod <- x.fit$model
## Filter x:
x.start <- rep(mean(x), 1000)
x.filt <- arima.sim(model=list(ma=x.mod$ar), innov=x, start.innov=x.start)
## Filter y:
y.start <- rep(mean(y), 1000)
y.filt <- arima.sim(model=list(ma=x.mod$ar), innov=y, start.innov=y.start)
## Estimate SCCF for the filtered series:
acf(cbind(y.filt, x.filt))
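
For readers without S-PLUS, a rough base-R analogue of the same pre-whitening (my sketch, restricted to a pure AR(1) model of the input for simplicity; x and y are the hypothetical input and output series):

## Fit an AR(1) to the input and apply its AR polynomial as a FIR filter
## to both series, then inspect the cross-correlations.
x.fit  <- arima(x, order = c(1, 0, 0))
phi    <- x.fit$model$phi
x.filt <- na.omit(stats::filter(x - mean(x), c(1, -phi), method = "convolution", sides = 1))
y.filt <- na.omit(stats::filter(y - mean(y), c(1, -phi), method = "convolution", sides = 1))
ccf(as.numeric(x.filt), as.numeric(y.filt))   # shape proportional to h_k for k >= 0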

Graphical output

[Figure: output of acf(cbind(y.filt, x.filt)) — a 2x2 panel with the ACF of y.filt, the cross-correlations of y.filt and x.filt at positive lags, the same at negative lags, and the ACF of x.filt]

Systems with measurement noise

[Diagram: input {X_t} -> System -> sum with noise {N_t} -> output {Y_t}]

Y_t = \sum_{i=-\infty}^{\infty} h_i X_{t-i} + N_t

\gamma_{XY}(k) = \sum_{i=-\infty}^{\infty} h_i \gamma_{XX}(k - i)

Transfer function models

[Diagram: {X_t} passes through \omega(B) B^b / \delta(B); {\varepsilon_t} passes through \theta(B) / \phi(B) to give {N_t}; the two contributions are summed to give {Y_t}]

Y_t = (\omega(B)/\delta(B)) B^b X_t + (\theta(B)/\phi(B)) \varepsilon_t

Also called Box-Jenkins models. Can be extended to include more inputs; see the book.
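
A minimal R sketch simulating such a model (my own illustration; all coefficients are invented):

n  <- 500; b <- 2
x  <- as.numeric(arima.sim(model = list(ar = 0.5), n = n))   # input process
xb <- c(rep(0, b), head(x, n - b))                           # B^b X_t: delay of b steps
sys   <- stats::filter(0.8 * xb, 0.6, method = "recursive")  # omega(B) = 0.8, delta(B) = 1 - 0.6B
noise <- as.numeric(arima.sim(model = list(ar = 0.4, ma = 0.3), n = n, sd = 0.2))  # theta/phi part
y  <- as.numeric(sys) + noise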

Some names

- FIR: Finite Impulse Response
- ARX: AutoRegressive with eXternal input
- ARMAX / CARMA: AutoRegressive Moving Average with eXternal input / Controlled ARMA
- OE: Output Error
- Regression models with ARMA noise

Identification of transfer function models

h(B) = \omega(B) B^b / \delta(B) = h_0 + h_1 B + h_2 B^2 + h_3 B^3 + h_4 B^4 + ...

Using pre-whitening we estimate the impulse response and, based on this, guess an appropriate structure for h(B) (see page 197 for examples). It is a good idea to experiment with some structures.

Matlab (use q^{-1} instead of B):

A = 1; B = 1; C = 1; D = 1;
F = [1 -2.55 2.41 -0.85];
mod = idpoly(A, B, C, D, F, 1, 1)
impulse(mod)

PEZdemo (complex poles/zeros should be in pairs):
http://users.ece.gatech.edu/mcclella/matlabguis/
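
An R counterpart of the Matlab snippet (my sketch; impulse_response is a hypothetical helper): it obtains h_k by passing a unit pulse through \omega(B) B^b / \delta(B).

impulse_response <- function(omega, delta, b = 0, n = 30) {
  pulse <- c(1, rep(0, n))
  num   <- head(convolve(pulse, rev(omega), type = "open"), n + 1)  # omega(B) applied to the pulse
  h     <- stats::filter(num, -delta[-1], method = "recursive")     # division by delta(B) = 1 + d1 B + ...
  head(c(rep(0, b), as.numeric(h)), n + 1)                          # the delay B^b shifts h by b lags
}
## the "2 exponentials" system from the next slide, as transcribed there:
round(impulse_response(omega = c(2, -1.8), delta = c(1, -1.8, 0.81), n = 5), 3)
## 2.000 1.800 1.620 1.458 1.312 1.181   (geometric decay with rate 0.9)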

2 exponentials

h(B) = (2 - 1.8B) / (1 - 1.8B + 0.81B^2)

[Figure: pole-zero plot, impulse response, and step response over lags 0-60]

2 real poles

h(B) = 1 / (1 - 1.7B + 0.72B^2)   (no zeros)

[Figure: pole-zero plot, impulse response, and step response over lags 0-60]

2 complex poles

h(B) = 1 / (1 - 1.5B + 0.81B^2)   (no zeros)

[Figure: pole-zero plot, impulse response, and step response over lags 0-30]

1 exponential + 2 complex poles

h(B) = (2 - 2.35B + 0.69B^2) / (1 - 2.35B + 2.02B^2 - 0.66B^3)

[Figure: pole-zero plot, impulse response, and step response over lags 0-60]

Identification of the transfer function for the noise

After selecting the structure of the transfer function for the input, we estimate the parameters of the model

Y_t = (\omega(B)/\delta(B)) B^b X_t + N_t

We extract the residuals {N_t} and identify a structure for an ARMA model of this series:

N_t = (\theta(B)/\phi(B)) \varepsilon_t,   i.e.   \phi(B) N_t = \theta(B) \varepsilon_t

We then have the full structure of the model, and we estimate all parameters simultaneously.
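
Base R has no estimator for the full rational transfer function form, but the special case of a regression model with ARMA noise (one of the classes named on the "Some names" slide) follows the same two-pass logic; a rough sketch (x and y hypothetical):

y1   <- y[-1]
xlag <- cbind(x0 = x[-1], x1 = head(x, -1))         # input at lags 0 and 1
fit0 <- arima(y1, order = c(0, 0, 0), xreg = xlag)  # first pass: white-noise errors
acf(residuals(fit0))                                # identify an ARMA structure for N_t
fit1 <- arima(y1, order = c(1, 0, 1), xreg = xlag)  # then estimate everything simultaneously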

Estimation

- Form 1-step predictions, treating the input {X_t} as known in the future (if {X_t} is really stochastic we condition on the observed values)
- Select the parameters so that the sum of squares of these prediction errors is as small as possible
- If {\varepsilon_t} is normal then the ML estimates are obtained
- For FIR and ARX models we can write the model as Y_t = X_t^T \theta + \varepsilon_t and use LS estimates (see the sketch below)
- Moment estimates: based on the structure of the transfer function we find the theoretical impulse response and match it against the lowest lags of the estimated impulse response
- Output error estimates ...
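
A minimal R sketch of the LS route for an ARX model (my illustration; the model and its coefficients are invented):

set.seed(1)
n <- 500
x <- as.numeric(arima.sim(model = list(ar = 0.7), n = n))    # persistent input
y <- numeric(n)
for (t in 2:n) y[t] <- 0.8 * y[t - 1] + 0.5 * x[t - 1] + rnorm(1, sd = 0.2)
tt  <- 2:n                                                   # Y_t = X_t^T theta + eps_t
fit <- lm(y[tt] ~ 0 + y[tt - 1] + x[tt - 1])                 # theta = (a, b), no intercept
coef(fit)                                                    # close to the true (0.8, 0.5)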

Model validation

As for ARMA models, with these additions:

- Test for cross-correlation between the residuals and the input: \hat{\rho}_{\varepsilon X}(k) ~ N(0, 1/N), which is (approximately) correct when {\varepsilon_t} is white noise and when there is no correlation between the input and the residuals (see the sketch below)
- A Portmanteau test can also be performed
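
A small R sketch of the residual-input check (my own helper; res and x are hypothetical residual and input series):

check_resid_input <- function(res, x, lag.max = 30) {
  N  <- length(res)
  cc <- ccf(res, x, lag.max = lag.max, plot = FALSE)
  ## under the null, rho_hat_epsX(k) ~ N(0, 1/N): flag lags outside +/- 2/sqrt(N)
  cc$lag[abs(cc$acf) > 2 / sqrt(N)]
}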

Prediction \hat{Y}_{t+k|t}

We must consider two situations:

- The input is controllable, i.e. we can decide it, and we can predict under different input scenarios. In this case the prediction error variance originates from the ARMA part (N_t) only.
- The input is only known until the present time point t, and to predict the output we must also predict the input. In this case the prediction error variance depends also on the autocovariance of the input process. The book considers the case where the input can be modelled as an ARMA process.

Prediction (cont'd)

\hat{Y}_{t+k|t} = \sum_{i=0}^{k-1} h_i \hat{X}_{t+k-i|t} + \sum_{i=k}^{\infty} h_i X_{t+k-i} + \hat{N}_{t+k|t}

Y_{t+k} - \hat{Y}_{t+k|t} = \sum_{i=0}^{k-1} h_i (X_{t+k-i} - \hat{X}_{t+k-i|t}) + N_{t+k} - \hat{N}_{t+k|t}

If the input is controllable then \hat{X}_{t+k-i|t} = X_{t+k-i}.

The book also considers the case where the output is known until time t and the input until time t + j.

Prediction (cont'd)

We have

N_t = \sum_{i=0}^{\infty} \psi_i \varepsilon_{t-i}

and if we model the input as an ARMA process we have

X_t = \sum_{i=0}^{\infty} \tilde{\psi}_i \eta_{t-i}

(writing \tilde{\psi} for the MA weights of the input to distinguish them from those of the noise). Thereby we get:

V[Y_{t+k} - \hat{Y}_{t+k|t}] = \sigma_\eta^2 \sum_{l=0}^{k-1} ( \sum_{i_1+i_2=l} h_{i_1} \tilde{\psi}_{i_2} )^2 + \sigma_\varepsilon^2 \sum_{i=0}^{k-1} \psi_i^2
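
A minimal R sketch of the noise-only part of this variance, i.e. the controllable-input case where only \sigma_\varepsilon^2 \sum \psi_i^2 remains (my illustration, using the AR(1) noise model of the example on the next slide):

phi <- 0.4; sigma2_eps <- 0.036                                # N_t = 1/(1 - 0.4B) eps_t
psi <- c(1, ARMAtoMA(ar = phi, ma = numeric(0), lag.max = 9))  # psi_0 = 1, psi_i = 0.4^i
varNk <- sigma2_eps * cumsum(psi^2)                            # k-step variances, k = 1..10
round(varNk[1:3], 4)                                           # 0.0360 0.0418 0.0427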

Example

Y_t = (0.4/(1 - 0.6B)) X_t + (1/(1 - 0.4B)) \varepsilon_t,   \sigma_\varepsilon^2 = 0.036

 t |    y |     x |       h |   xf |       N
 1 | 2.04 | 1.661 | 0.00403 | 2.21 | -0.1645
 2 | 3.05 | 4.199 | 0.00672 | 3.00 |  0.0407
 3 | 2.34 | 1.991 | 0.01120 | 2.60 | -0.2566
 4 | 2.49 | 2.371 | 0.01866 | 2.51 | -0.0186
 5 | 3.30 | 3.521 | 0.03110 | 2.91 |  0.3826
 6 | 3.53 | 3.269 | 0.05184 | 3.06 |  0.4768
 7 | 2.72 | 0.741 | 0.08640 | 2.13 |  0.5880
 8 | 2.46 | 2.238 | 0.14400 | 2.17 |  0.2888
 9 |   NA | 2.544 | 0.24000 | 2.32 |      NA   (forecasts: N = 0.1155, y = 2.44)
10 |   NA | 3.201 | 0.40000 | 2.67 |      NA   (forecasts: N = 0.0462, y = 2.72)

To forecast y at t = 9, 10 we must filter x as in xf, calculate N for the historic data, forecast N, and add that to xf (the future values):

> Nfc <- arima.forecast(N[1:8], model=list(ar=0.4), sigma2=0.036, n=2)
> Nfc$mean
[1] 0.1155 0.0462
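
A cross-check in plain R (my sketch): for AR(1) noise with \phi = 0.4, the k-step forecast is simply \phi^k times the last observed value N_8 = 0.2888.

N8 <- 0.2888
round(0.4^(1:2) * N8, 4)   # 0.1155 0.0462 -- matches arima.forecast above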

Intervention models

I_t = 1 if t = t_0, and I_t = 0 if t \neq t_0

Y_t = (\omega(B)/\delta(B)) I_t + (\theta(B)/\phi(B)) \varepsilon_t

See a real-life example in the book.
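
A minimal R sketch of such a pulse intervention (my own illustration; all coefficients are invented):

n <- 100; t0 <- 50
I   <- as.numeric(seq_len(n) == t0)                        # unit pulse at t0
eff <- stats::filter(0.8 * I, 0.7, method = "recursive")   # omega = 0.8, delta(B) = 1 - 0.7B
y   <- as.numeric(eff) + as.numeric(arima.sim(model = list(ar = 0.3), n = n, sd = 0.1))
plot(y, type = "l"); abline(v = t0, lty = 3)               # decaying bump after t0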