Lecture Note #3 (Chap. 6): Identification of Time-Series Models. Model Structure, ARMAX Models, and Difference Equations

System Modeling and Identification, Lecture Note #3 (Chap. 6)
CHBE 702, Korea University, Prof. Dae Ryook Yang

Model structure
- Time series
- Multivariable time series: x_k = [x_1k  x_2k  ...  x_mk]^T
- Multidimensional time series (temporal + spatial)

Model classification
- SISO/MIMO
- Linear/Nonlinear
- Deterministic/Stochastic

For linear discrete-time systems:
- Difference-equation and ARMAX models
- Transfer function models
- State-space models

Chap. 6 Identification of Time-Series Models
- Model structure
- Parametric estimation
- Least-squares method: excellent properties when the disturbances are uncorrelated; otherwise there may be systematic errors and bias, and more sophisticated methods are needed that handle correlated disturbances and extend linear regression.

ARMAX Models and Difference Equations
The ARMAX (AutoRegressive Moving Average with eXogenous input) model is

    A(z^-1) y_k = z^-d B(z^-1) u_k + C(z^-1) w_k

where d is the time delay and

    A(z^-1) = 1 + a_1 z^-1 + a_2 z^-2 + ... + a_nA z^-nA
    B(z^-1) = b_0 + b_1 z^-1 + ... + b_nB z^-nB
    C(z^-1) = 1 + c_1 z^-1 + ... + c_nC z^-nC

with unknown parameters theta = [a_1 ... a_nA  b_0 ... b_nB  c_1 ... c_nC]^T.

Special cases (a simulation sketch follows below):

    A(z^-1) y_k = w_k                                       (AR)
    y_k = C(z^-1) w_k                                       (MA or FIR)
    A(z^-1) y_k = z^-d B(z^-1) u_k + w_k                    (ARX)
    A(z^-1) y_k = C(z^-1) w_k                               (ARMA)
    A(z^-1) y_k = B(z^-1) u_k + C(z^-1) w_k / (1 - z^-1)    (ARIMAX or CARIMA: Integrated, Controlled)
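To make the structures above concrete, here is a minimal Python sketch (not part of the original note) that simulates the general ARMAX difference equation; the function name and example coefficients are illustrative assumptions, and the special cases follow by leaving polynomials empty.

```python
import numpy as np

def simulate_armax(a, b, c, u, d=1, sigma_w=1.0, seed=0):
    """Simulate A(z^-1) y_k = z^-d B(z^-1) u_k + C(z^-1) w_k.

    a = [a_1..a_nA], b = [b_0..b_nB], c = [c_1..c_nC]; A and C are monic.
    b=[] gives ARMA, c=[] gives ARX, a=[] and b=[] gives MA.
    """
    rng = np.random.default_rng(seed)
    N = len(u)
    w = sigma_w * rng.standard_normal(N)
    y = np.zeros(N)
    for k in range(N):
        ar = sum(a[i] * y[k-1-i] for i in range(len(a)) if k-1-i >= 0)
        ex = sum(b[j] * u[k-d-j] for j in range(len(b)) if k-d-j >= 0)
        ma = w[k] + sum(c[m] * w[k-1-m] for m in range(len(c)) if k-1-m >= 0)
        y[k] = -ar + ex + ma  # rearranged difference equation, A and C monic
    return y, w

# The system of Example 6.8 below: y_k = 0.9 y_{k-1} + 0.2 u_{k-1} + w_k + 0.7 w_{k-1}
u = np.sign(np.random.default_rng(1).standard_normal(500))  # random binary input
y, w = simulate_armax(a=[-0.9], b=[0.2], c=[0.7], u=u)
```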

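Since the ARX special case is a linear regression model, ordinary least squares applies to it directly. A minimal sketch under that assumption (names are illustrative, not the lecture's code); later sketches in this note reuse it:

```python
import numpy as np

def arx_least_squares(y, u, na, nb, d=1):
    """Fit A(z^-1) y_k = z^-d B(z^-1) u_k + v_k by ordinary least squares.

    Regressor phi_k = [-y_{k-1} .. -y_{k-na}, u_{k-d} .. u_{k-d-nb+1}];
    returns theta = [a_1..a_na, b_0..b_{nb-1}]. The estimate is unbiased
    only when the disturbance {v_k} is uncorrelated (white).
    """
    k0 = max(na, d + nb - 1)  # first index with a complete regressor
    Phi = np.array([[-y[k-i] for i in range(1, na + 1)]
                    + [u[k-d-j] for j in range(nb)] for k in range(k0, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, np.asarray(y)[k0:], rcond=None)
    return theta
```

Running this on the colored-noise data simulated above gives a biased [a, b], which is exactly what motivates the PEM, ML, and IV methods developed in the rest of the note.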
Uses of the model structures:
- ARMAX: general model.
- ARX: controlled autoregressive model, and a linear regression model when the disturbance is measured.
- AR: models harmonics compounded with noise; truncated impulse response (FIR) model.
- ARMA: model-based spectrum analysis.
- ARIMAX: for nonzero means and drift, or nonstationary disturbances:

      A(z^-1) y_k = B(z^-1) u_k + C(z^-1) w_k / (1 - z^-1)

  Integrated white noise is nonstationary (a random walk).

Transfer Function Models

    y_k = H_u(z) u_k + H_v(z) v_k,   E{v_i v_j} = sigma_v^2 delta_ij

The general structure is

    A(z^-1) y_k = [B(z^-1)/F(z^-1)] u_k + [C(z^-1)/D(z^-1)] w_k

with the special cases

    y_k = [B(z^-1)/F(z^-1)] u_k + v_k                       (output error (OE) model)
    y_k = [B(z^-1)/F(z^-1)] u_k + [C(z^-1)/D(z^-1)] w_k     (Box-Jenkins (BJ) model)

- OE model: no assumption on the disturbance sequence {v_k}.
- BJ model: the disturbance is the white-noise sequence {w_k} filtered by C/D.

Prediction Error Method (PEM)
Predict y_k from previous data and the identified model, and choose the parameters to minimize the prediction error variance:

    min_theta E{(y_k - yhat_k(theta))^2} = min_theta E{eps_k^2(theta)}

This minimizes the variance of the one-step-ahead prediction of the output, where the prediction is based on the data available at the present time. For the ARMAX model the optimal predictor is

    yhat_k = (1 - A) y_k + B u_k + (C - 1) w_k,   w_k = C^-1 (A y_k - B u_k)

To see that this is optimal, write y_k = z_k + w_k, where z_k = (1 - A) y_k + B u_k + (C - 1) w_k depends only on data up to time k-1. Then

    E{(y_k - yhat_k)^2} = E{(z_k + w_k - yhat_k)^2} = E{(z_k - yhat_k)^2} + E{w_k^2} >= sigma_w^2

and the minimum attainable variance sigma_w^2 is reached for yhat_k = z_k.

Difference between output error and prediction error:

    yhat_k = -a yhat_{k-1} + b u_{k-1}   (output error method)
    yhat_k = -a y_{k-1} + b u_{k-1}      (prediction error method)

- The output error method relies more on the accuracy of the model for future outputs; the prediction error method uses the actual outputs.
- Output error identification is a nonlinear estimation problem; prediction error identification is a linear estimation problem.

Algorithm for OE identification (a sketch follows below):
1. Least-squares identification to find initial estimates of F and B:  M1: F(z^-1) y_k = B(z^-1) u_k + v_k.
2. Filter the data through the estimated F (a prewhitening filter): y_k^f = y_k / Fhat(z^-1), u_k^f = u_k / Fhat(z^-1).
3. Estimate F and B again from the filtered model  M2: F(z^-1) y_k^f = B(z^-1) u_k^f + v_k.
4. Repeat steps 2-3 until the estimates converge.
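A rough sketch of steps 1-4 under simplifying assumptions; it reuses arx_least_squares from the earlier sketch, since M1 and M2 are ARX problems with A = F, and omits convergence checks:

```python
import numpy as np

def filter_inverse(f, x):
    """Return x^f such that F(z^-1) x^f_k = x_k, with F monic, f = [f_1..f_nf]."""
    xf = np.zeros(len(x))
    for k in range(len(x)):
        xf[k] = x[k] - sum(f[i] * xf[k-1-i] for i in range(len(f)) if k-1-i >= 0)
    return xf

def oe_identify(y, u, nf, nb, d=1, n_iter=10):
    """Iterative OE identification: LS fit, refilter the raw data, repeat."""
    yf, uf = np.asarray(y, float), np.asarray(u, float)
    for _ in range(n_iter):
        theta = arx_least_squares(yf, uf, na=nf, nb=nb, d=d)  # steps 1 and 3
        f = theta[:nf]
        yf = filter_inverse(f, y)  # step 2: prewhiten with the latest Fhat
        uf = filter_inverse(f, u)
    return theta  # [f_1..f_nf, b_0..b_{nb-1}]
```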

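For the PEM criterion, the prediction errors of an ARMAX model can be generated recursively from eps_k = C^-1 (A y_k - z^-d B u_k). A minimal illustrative sketch, with theta packed as [a, b, c]:

```python
import numpy as np

def armax_prediction_errors(theta, y, u, na, nb, nc, d=1):
    """Recursively compute eps with C(z^-1) eps_k = A(z^-1) y_k - z^-d B(z^-1) u_k."""
    a, b, c = theta[:na], theta[na:na+nb], theta[na+nb:]
    eps = np.zeros(len(y))
    for k in range(len(y)):
        Ay = y[k] + sum(a[i] * y[k-1-i] for i in range(na) if k-1-i >= 0)
        Bu = sum(b[j] * u[k-d-j] for j in range(nb) if k-d-j >= 0)
        Ce = sum(c[m] * eps[k-1-m] for m in range(nc) if k-1-m >= 0)
        eps[k] = Ay - Bu - Ce  # C is monic
    return eps

# PEM criterion: V(theta) = np.sum(armax_prediction_errors(theta, y, u, na, nb, nc)**2)
```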
Comparison of error models for identification (the slide's comparison figure is not recoverable from the transcription).

Maximum-Likelihood Method
Select the estimate so that the observation Y is most probable:

    max_theta p(Y | theta),   where p(Y | theta) is the likelihood function.

Example 6.4. Y = Phi theta + v, with E{v} = 0 and E{v v^T} = Sigma_v. Assume

    p(v) = ((2 pi)^N det Sigma_v)^(-1/2) exp(-0.5 v^T Sigma_v^-1 v)

Then the likelihood function is

    p(Y | theta) = ((2 pi)^N det Sigma_v)^(-1/2) exp(-0.5 (Y - Phi theta)^T Sigma_v^-1 (Y - Phi theta))
    log L(theta) = -0.5 log((2 pi)^N det Sigma_v) - 0.5 (Y - Phi theta)^T Sigma_v^-1 (Y - Phi theta)

If the model is linear in the parameters with normally distributed white noise, the maximum-likelihood estimate is the same as the Markov estimate.

Cramer-Rao lower bound:

    Cov(thetahat) >= [ E{ (d log L / d theta)(d log L / d theta)^T } ]^-1 = -[ E{ d^2 log L / d theta^2 } ]^-1

where the bracketed expectation is the Fisher information matrix.

For the ARMAX model A(z^-1) y_k = z^-d B(z^-1) u_k + C(z^-1) w_k,

    log L(theta) = -0.5 log((2 pi)^N det Sigma_v) - 0.5 v^T Sigma_v^-1 v

where the residuals are generated by

    v_k = y_k + a_1 y_{k-1} + ... + a_nA y_{k-nA} - b_0 u_{k-d} - ... - b_nB u_{k-d-nB} - c_1 v_{k-1} - ... - c_nC v_{k-nC}

and theta = [a_1 ... a_nA  b_0 ... b_nB  c_1 ... c_nC]^T is unknown.

The empirical likelihood function when Sigma_v = sigma_v^2 I (sigma_v^2 unknown):

    log L(theta, sigma_v^2) = -(N/2) log(2 pi) - (N/2) log(sigma_v^2) - (1/(2 sigma_v^2)) V(theta),   V(theta) = sum_k v_k^2(theta)

Setting the derivative with respect to sigma_v^2 to zero gives

    d log L / d sigma_v^2 = -N/(2 sigma_v^2) + V(theta)/(2 sigma_v^4) = 0   =>   sigma_v^2 = V(theta)/N

so maximizing the likelihood over theta amounts to minimizing V(theta), with the stationarity condition V'(theta) = 0 solved iteratively:

    theta_{i+1} = theta_i - [V''(theta_i)]^-1 V'(theta_i)   (Newton-Raphson method)

Example 6.5 (LS and ML identification). S: y_k + a y_{k-1} = b u_{k-1} + w_k + c w_{k-1}. For colored noise, ML identification performs better than LS; LS can estimate only a and b. Beware of local minima.

Example 6.6 (Pseudolinear regression; a sketch follows below). S: A(z^-1) y_k = B(z^-1) u_k + C(z^-1) w_k.
1. Estimate high-order polynomials A and B by least squares.
2. The computed residual sequence {eps_k} yields a good approximation of the white-noise sequence {w_k}.
3. Extend the regressor with {eps_k} and then estimate the polynomials A, B and C by least-squares identification.
It is also called two-step linear regression.
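A rough sketch of Example 6.6's two-step linear regression, reusing arx_least_squares from above; the high-order choice n_hi is an illustrative assumption:

```python
import numpy as np

def pseudolinear_regression(y, u, na, nb, nc, n_hi=10, d=1):
    """Two-step linear regression (Example 6.6), a minimal sketch."""
    # Step 1: high-order ARX fit; its residuals approximate the white noise w_k.
    theta_hi = arx_least_squares(y, u, na=n_hi, nb=n_hi, d=d)
    eps = np.zeros(len(y))
    for k in range(len(y)):
        ar = sum(theta_hi[i] * (-y[k-1-i]) for i in range(n_hi) if k-1-i >= 0)
        ex = sum(theta_hi[n_hi+j] * u[k-d-j] for j in range(n_hi) if k-d-j >= 0)
        eps[k] = y[k] - (ar + ex)  # residual as white-noise proxy
    # Step 2: extended regressor [-y_past, u_past, eps_past], one linear LS fit.
    k0 = max(na, d + nb - 1, nc)
    Phi = np.array([[-y[k-i] for i in range(1, na + 1)]
                    + [u[k-d-j] for j in range(nb)]
                    + [eps[k-m] for m in range(1, nc + 1)] for k in range(k0, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, np.asarray(y)[k0:], rcond=None)
    return theta  # [a_1..a_na, b_0..b_{nb-1}, c_1..c_nc]
```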

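The Newton-Raphson minimization of V(theta) can be sketched with a Gauss-Newton approximation of V'' (our simplification, not the lecture's derivation), reusing armax_prediction_errors from above:

```python
import numpy as np

def pem_gauss_newton(y, u, na, nb, nc, theta0, d=1, n_iter=20, h=1e-6):
    """Minimize V(theta) = sum eps_k(theta)^2 by damped Gauss-Newton steps.

    V'(theta) = 2 J^T eps and V''(theta) ~ 2 J^T J, where J = d eps / d theta
    is approximated here by central differences.
    """
    theta = np.asarray(theta0, float)
    n = len(theta)
    for _ in range(n_iter):
        eps = armax_prediction_errors(theta, y, u, na, nb, nc, d)
        J = np.zeros((len(eps), n))
        for i in range(n):
            e = np.zeros(n); e[i] = h
            J[:, i] = (armax_prediction_errors(theta + e, y, u, na, nb, nc, d)
                       - armax_prediction_errors(theta - e, y, u, na, nb, nc, d)) / (2 * h)
        # Newton-type step with a small regularizer for numerical safety.
        theta = theta - np.linalg.solve(J.T @ J + 1e-8 * np.eye(n), J.T @ eps)
    V = np.sum(armax_prediction_errors(theta, y, u, na, nb, nc, d) ** 2)
    return theta, V / len(y)  # ML noise variance estimate sigma_v^2 = V(theta)/N
```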
Kalman Filter
State-space model:

    x_{k+1} = Phi x_k + Gamma u_k + v_k
    y_k = C x_k + D u_k + e_k
    E{v_k} = 0, E{e_k} = 0, E{v_k v_k^T} = R1, E{e_k e_k^T} = R2, P(0) = E{x_0 x_0^T} = R0

Optimal estimate of x based on the input-output data:

    min J(xhat) = E{(x - xhat)(x - xhat)^T}

The Kalman filter (Kalman-Bucy filter) minimizes this criterion when v and e are independent and normally distributed:

    xhat_{k+1} = Phi xhat_k + Gamma u_k + K_k (y_k - C xhat_k)
    K_k = Phi P_k C^T (R2 + C P_k C^T)^-1
    P_{k+1} = Phi P_k Phi^T + R1 - Phi P_k C^T (R2 + C P_k C^T)^-1 C P_k Phi^T

Case of time-varying parameters (a sketch follows below):

    theta_{k+1} = theta_k + v_k,   y_k = phi_k^T theta_k + e_k,   E{v_k} = 0, E{e_k} = 0

    thetahat_{k+1} = thetahat_k + K_k (y_k - phi_k^T thetahat_k)
    K_k = P_k phi_k (R2 + phi_k^T P_k phi_k)^-1
    P_{k+1} = P_k + R1 - P_k phi_k (R2 + phi_k^T P_k phi_k)^-1 phi_k^T P_k

- Excellent for time-varying systems.
- R1 and R2 are important design parameters that should match the temporal variations of theta and the observation noise, respectively.

Derivation
The prediction error: xtilde_k = x_k - xhat_k.
The prediction error dynamics: xtilde_{k+1} = (Phi - K C) xtilde_k + v_k - K e_k.
The mean prediction error: E{xtilde_{k+1}} = (Phi - K C) E{xtilde_k}.
The mean square prediction error:

    E{xtilde_{k+1} xtilde_{k+1}^T} = (Phi - K C) E{xtilde_k xtilde_k^T} (Phi - K C)^T + R1 + K R2 K^T

Let P_k = E{xtilde_k xtilde_k^T} and Q = R2 + C P_k C^T. Then

    P_{k+1} = Phi P_k Phi^T - K C P_k Phi^T - Phi P_k C^T K^T + R1 + K Q K^T
            = Phi P_k Phi^T + R1 + (K - Phi P_k C^T Q^-1) Q (K - Phi P_k C^T Q^-1)^T - Phi P_k C^T Q^-1 C P_k Phi^T

Minimizing P_{k+1} over K makes the completed square vanish:

    K = Phi P_k C^T (R2 + C P_k C^T)^-1
    P_{k+1} = Phi P_k Phi^T + R1 - Phi P_k C^T (R2 + C P_k C^T)^-1 C P_k Phi^T   (Riccati equation)

Instrumental Variable Method
Correlation between the regressors and the prediction error leads to bias in the parameter estimates obtained from least-squares solutions of the linear regression problem. The IV method replaces the regressor Phi by some other variable Z:

    thetahat = (Z^T Phi)^-1 Z^T Y

For the estimator to be consistent,

    E{Z^T v} = 0,   rank(Z^T Phi) = p

and its covariance is

    Cov(thetahat) = E{(thetahat - theta)(thetahat - theta)^T} = (Z^T Phi)^-1 Z^T Sigma_v Z (Phi^T Z)^-1

The instrumental variables should be chosen so that they are simultaneously uncorrelated with v and highly correlated with Phi.
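A minimal sketch of the parameter-tracking recursion above for the scalar-output case; the function name, initial values, and noise covariances are illustrative assumptions:

```python
import numpy as np

def kalman_param_tracker(y, Phi_rows, R1, R2, theta0=None, P0=None):
    """Track theta_k in y_k = phi_k^T theta_k + e_k, theta_{k+1} = theta_k + v_k.

    Phi_rows: (N, p) array whose k-th row is phi_k. R1: (p, p) process noise
    covariance (how fast theta may drift). R2: scalar measurement noise variance.
    """
    p = Phi_rows.shape[1]
    theta = np.zeros(p) if theta0 is None else np.asarray(theta0, float)
    P = 100.0 * np.eye(p) if P0 is None else P0
    est = []
    for yk, phik in zip(y, Phi_rows):
        denom = R2 + phik @ P @ phik          # scalar innovation variance
        K = P @ phik / denom                  # gain K_k
        theta = theta + K * (yk - phik @ theta)
        P = P + R1 - np.outer(K, phik @ P)    # covariance recursion
        est.append(theta.copy())
    return np.array(est)
```

Larger R1 lets thetahat_k follow faster parameter drift at the cost of noisier estimates; larger R2 smooths the updates.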

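A sketch of the IV estimate thetahat = (Z^T Phi)^-1 Z^T Y for the first-order system of Example 6.8 below; the preliminary estimates (a0, b0) used to build the noise-free instrument are an assumption (in practice, e.g., the biased LS estimates):

```python
import numpy as np

def iv_estimate(y, u, a0, b0):
    """IV estimate for y_k = a y_{k-1} + b u_{k-1} + colored noise.

    Instruments z_k = [zeta_{k-1}, u_{k-1}], where zeta is the noise-free
    model output zeta_k = a0 zeta_{k-1} + b0 u_{k-1}: uncorrelated with the
    noise, but correlated with the regressor.
    """
    N = len(y)
    zeta = np.zeros(N)
    for k in range(1, N):
        zeta[k] = a0 * zeta[k-1] + b0 * u[k-1]
    Phi = np.column_stack([y[:-1], u[:-1]])    # regressor [y_{k-1}, u_{k-1}]
    Z = np.column_stack([zeta[:-1], u[:-1]])   # instruments [zeta_{k-1}, u_{k-1}]
    return np.linalg.solve(Z.T @ Phi, Z.T @ y[1:])  # (Z^T Phi)^-1 Z^T Y
```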
Example 6.8. S: y_k = 0.9 y_{k-1} + 0.2 u_{k-1} + w_k + 0.7 w_{k-1}, with E{w_k} = 0 and E{w_k^2} = sigma_w^2.
With the regressor phi_k = [y_{k-1}  u_{k-1}] and theta = [a  b]^T, the least-squares estimate is biased:

    thetahat_LS = (Phi^T Phi)^-1 Phi^T Y = [0.957  0.047]^T

With the instrumental variables z_k = [zeta_{k-1}  u_{k-1}], where zeta_k = a zeta_{k-1} + b u_{k-1} is the noise-free model output,

    thetahat_IV = (Z^T Phi)^-1 Z^T Y = [0.98  0.075]^T

which shows reduced bias.

Example 6.9. For a choice of instruments built from delayed inputs alone, z_k = [u_{k-1}  u_{k-2}],

    thetahat_IV = (Z^T Phi)^-1 Z^T Y = [0.43  0.047]^T

which gives a very poor estimate. It can be difficult to choose appropriate instrumental variables; thus an iterative procedure is usually used.

Some Aspects of Application
- Prefiltering, smoothing, prewhitening:

      Y^f(z) = F(z^-1) Y(z),   U^f(z) = F(z^-1) U(z)
      M: A(z^-1) y_k^f = B(z^-1) u_k^f + v_k

  For periodic variation, use F(z^-1) = 1 - z^-d, where d is the period of the trend.
- Bias reduction and trend elimination: subtract the sample means ybar = (1/N) sum y_k and ubar = (1/N) sum u_k, and use

      M: A(z^-1)(y_k - ybar) = B(z^-1)(u_k - ubar) + v_k

- Differencing of the data:

      M: A(z^-1) Delta y_k = B(z^-1) Delta u_k + v_k

  This gives improved accuracy.
- Offset estimation via an extra parameter:

      M: A(z^-1) y_k = B(z^-1) u_k + v_k + w_0

  where w_0 is the extra parameter; note that it introduces a new noise correlation.

Example 6.10 (The Yule-Walker equations; a sketch follows below). Consider the AR process

    S: A(z^-1) y_k = w_k,   E{w_k} = 0, E{w_k^2} = sigma_w^2

The covariance function satisfies

    C_yy(tau) = E{y_k y_{k-tau}} = E{(-sum_i a_i y_{k-i} + w_k) y_{k-tau}}
              = -sum_i a_i C_yy(tau - i) + sigma_w^2 delta(tau),   tau >= 0

so that

    C_yy(tau) = -sum_{i=1}^{nA} a_i C_yy(tau - i),   tau > 0

Choosing p > nA and stacking these relations for tau = 1, ..., p (with C_yy(-tau) = C_yy(tau), and the covariances estimated from M > nA data points) gives the Yule-Walker system

    [ C_yy(0)     C_yy(1)     ...  C_yy(nA-1) ] [ a_1  ]     [ C_yy(1) ]
    [ C_yy(1)     C_yy(0)     ...  C_yy(nA-2) ] [ a_2  ] = - [ C_yy(2) ]
    [   ...                                   ] [ ...  ]     [   ...   ]
    [ C_yy(p-1)   C_yy(p-2)   ...  C_yy(p-nA) ] [ a_nA ]     [ C_yy(p) ]

which can be solved for the a_i from the estimated covariances. This has the form of an instrumental-variable estimate, thetahat = (Z^T Phi)^-1 Z^T Y, with delayed outputs as instruments.
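A minimal sketch of the Yule-Walker estimate using sample covariances (function name illustrative; p = na gives the square system, p > na an over-determined one solved by least squares):

```python
import numpy as np

def yule_walker_ar(y, na, p=None):
    """Estimate AR coefficients from the Yule-Walker equations (Example 6.10)."""
    p = na if p is None else p
    y = np.asarray(y, float) - np.mean(y)
    N = len(y)
    # Sample covariances C_yy(tau) for tau = 0..p.
    C = np.array([y[tau:] @ y[:N - tau] / N for tau in range(p + 1)])
    # Row tau: sum_i a_i C_yy(tau - i) = -C_yy(tau), using C_yy(-t) = C_yy(t).
    R = np.array([[C[abs(tau - 1 - i)] for i in range(na)] for tau in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(R, -C[1:p + 1], rcond=None)
    sigma_w2 = C[0] + a @ C[1:na + 1]  # from the tau = 0 relation
    return a, sigma_w2  # a = [a_1..a_nA] of A(z^-1)

# Usage: a, s2 = yule_walker_ar(y, na=2) returns the coefficients of A(z^-1).
```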

Convergence and Consistency
- Convergence in L_p (0 < p < infinity):

      lim_{N->inf} E{ |x_N - x|^p } = 0

- Convergence almost surely:

      P{ lim_{N->inf} x_N = x } = 1

- Convergence in probability:

      lim_{N->inf} P{ |x_N - x| > eps } = 0   for every eps > 0

- Central limit theorem: let {x_k} be a sequence of independent random variables with common distribution function F, finite mean mu, and finite variance sigma^2. With S_N = sum_{k=1}^N x_k, the normalized sum

      X_N = (S_N - N mu) / (sigma sqrt(N))

  has a limiting normal distribution: X_N -> Normal(0, 1) in distribution as N -> infinity.

Efficient estimate: thetahat is efficient if

    E{(thetahat - theta)(thetahat - theta)^T} <= E{(thetatilde - theta)(thetatilde - theta)^T}

for any other estimate thetatilde.

Consistent estimate (probability limit):

    lim_{N->inf} E{(thetahat_N - theta)^2} = 0   (consistency in mean square)
    lim_{N->inf} P{ |thetahat_N - theta| > eps } = 0 for every eps > 0, i.e. plim thetahat_N = theta

Unbiased and asymptotically unbiased estimates:

    E{thetahat} = theta                     (unbiased estimate)
    lim_{N->inf} E{thetahat_N} = theta      (asymptotically unbiased estimate)
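A small Monte Carlo illustration (ours, not from the note) of the CLT normalization and of mean-square consistency of the sample mean, using uniform samples as an assumed example distribution:

```python
import numpy as np

# x_k uniform on [0,1]: mu = 0.5, sigma^2 = 1/12.
rng = np.random.default_rng(0)
mu, sigma = 0.5, np.sqrt(1.0 / 12.0)

for N in (10, 100, 10_000):
    x = rng.uniform(0.0, 1.0, size=(5000, N))            # 5000 replications of N samples
    X = (x.sum(axis=1) - N * mu) / (sigma * np.sqrt(N))  # CLT normalization
    mse = np.mean((x.mean(axis=1) - mu) ** 2)            # -> sigma^2 / N -> 0 (consistency)
    print(f"N={N}: mean(X)={X.mean():+.3f}, var(X)={X.var():.3f}, MSE(xbar)={mse:.2e}")
```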