Advanced Process Control Tutorial, Problem Set 2: Development of Control-Relevant Models through System Identification


1. Consider the time series

   x(k) = β1 + β2 k + w(k)

   where β1 and β2 are known constants and w(k) is a white noise process with variance σ^2.

   (a) Show that the mean of the moving average process

       y(k) = (1/(2p+1)) Σ_{j=-p}^{p} x(k-j)

       is β1 + β2 k. Is x(k) a stationary process?

   (b) Find a transformation that produces a stationary process starting from x(k). (Hint: consider a transformation using the backward difference operator, i.e. z(k) = (1 - q^{-1}) x(k).)

2. Show that the autocovariance function satisfies

   r(s, k) = E[(v(s) - μ_v(s))(v(k) - μ_v(k))] = E[v(s) v(k)] - μ_v(s) μ_v(k)

   where E[v(s)] = μ_v(s).

3. For a moving average process of the form

   x(k) = (1/2) w(k-2) + w(k-1) + 2 w(k) - (1/2) w(k+1)

   where the w(k) are independent with zero mean and variance σ_w^2, determine the autocovariance and autocorrelation functions as functions of the lag τ = s - k.

4. Estimate the autocorrelation of the finite sequence u = {1, 2, 3, 4, 5, 6}. Comment on the relationship between r_uu(τ) and r_uu(-τ).

5. If h = {1, 2, 3, 4} and u = {5, 6, 7, 8}, estimate the cross-correlation r_hu.

6. Consider the two series

   x(k) = w(k)
   y(k) = w(k) - θ w(k-1) + u(k)

   where w(k) and u(k) are independent zero-mean white noise sequences with variances σ^2 and λ^2, respectively, and θ is an unspecified constant.

   (a) Express the autocorrelation function ρ_y(τ) of the sequence {y(k)} for τ = ±1, ±2, ... as a function of σ^2, λ^2, and θ.

   (b) Determine the cross-correlation function ρ_xy(τ) relating {x(k)} and {y(k)}.

   (c) Show that {x(k)} and {y(k)} are jointly stationary. (Series with constant means, and autocovariance and cross-covariance functions that depend only on τ, are said to be jointly stationary.)

7. Consider a moving average process

   v(k) = e(k) + c1 e(k-1) + c2 e(k-2)    (1)

   where {e(k)} is a zero-mean white noise process with variance λ^2. Show that the stochastic process {v(k)} has zero mean and autocovariance

   R_v(0) = E[v(k) v(k)]   = (1 + c1^2 + c2^2) λ^2    (2)
   R_v(1) = E[v(k) v(k-1)] = (c1 + c1 c2) λ^2         (3)
   R_v(2) = E[v(k) v(k-2)] = c2 λ^2                   (4)
   R_v(k) = 0 for k > 2                               (5)

   Note that {v(k)} is a typical example of colored noise.

8. Consider an ARX model of the form

   y(k) = a y(k-1) + b u(k-1) + e(k)    (6)

   It is desired to estimate the model parameters (a, b) using the measurement data set {y(k) : k = 0, 1, ..., N} collected from an experiment in which the input sequence {u(k) : k = 0, 1, ..., N} was injected into the system.

   (a) Show that the least squares estimate of the parameters generated from the input-output data is given by

       [ Σ y(k-1)^2        Σ y(k-1) u(k-1) ] [ â ]   [ Σ y(k) y(k-1) ]
       [ Σ y(k-1) u(k-1)   Σ u(k-1)^2      ] [ b̂ ] = [ Σ y(k) u(k-1) ]    (7)

       where all summations run from k = 1 to N.

   (b) When the data length is large (i.e. N → ∞), show that equation (7) is equivalent to

       [ E[y(k-1)^2]        E[y(k-1) u(k-1)] ] [ â ]   [ E[y(k) y(k-1)] ]
       [ E[y(k-1) u(k-1)]   E[u(k-1)^2]      ] [ b̂ ] = [ E[y(k) u(k-1)] ]    (8)

       or

       [ R_y(0)    R_yu(0) ] [ â ]   [ R_y(1)  ]
       [ R_yu(0)   R_u(0)  ] [ b̂ ] = [ R_yu(1) ]    (9)

       where R_y(τ) denotes the autocorrelation function and R_yu(τ) the cross-correlation function.

   (c) Defining the regressor vector and the parameter vector

       ϕ(k) = [ y(k-1)  u(k-1) ]^T    (10)
       θ = [ â  b̂ ]^T                 (11)

       show that equation (7) can be written as

       E[ϕ(k) ϕ(k)^T] θ = E[ϕ(k) y(k)]    (12)

       Hint: show that

       Ω^T Ω = Σ ϕ(k) ϕ(k)^T ,  Ω^T Y = Σ ϕ(k) y(k)

       where

       Ω = [ ϕ(1)^T ]        Y = [ y(1) ]
           [ ϕ(2)^T ]            [ y(2) ]
           [  ...   ]            [  ...  ]
           [ ϕ(N)^T ]            [ y(N) ]

9. Generalize the results of the previous problem to a general ARX model of the form

   y(k) = a1 y(k-1) + ... + an y(k-n) + b1 u(k-1) + ... + bn u(k-n) + e(k)    (13)

10. Model conversions

    (a) Consider the OE model

        y(k) = [2 q^{-1} / (1 - 0.6 q^{-1})] u(k) + v(k)

        Using long division, convert the model into the form

        y(k) = h1 u(k-1) + ... + hn u(k-n) + v(k)

        where n is selected such that terms with |h_i| < 0.01 are neglected. How many terms are required, and what can you say about h_n as n increases? The resulting model is called a finite impulse response (FIR) model, and the h_i are called impulse response coefficients (why?).
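The long division in Problem 10(a) is easy to check numerically. The sketch below (plain Python, standard library only) generates the impulse response coefficients by simulating the equivalent difference equation y(k) = 0.6 y(k-1) + 2 u(k-1) driven by a unit impulse; the cutoff |h_i| < 0.01 and the loop length are the only choices made here.

```python
# Problem 10(a): impulse response of G(q) = 2 q^-1 / (1 - 0.6 q^-1).
# Long division gives h_i = 2 * 0.6**(i - 1); equivalently, simulate the
# difference equation y(k) = 0.6*y(k-1) + 2*u(k-1) with a unit impulse input.
u = [1.0] + [0.0] * 49        # unit impulse at k = 0
h, y_prev = [], 0.0
for k in range(1, 50):
    y = 0.6 * y_prev + 2.0 * u[k - 1]
    h.append(y)               # h[i-1] holds coefficient h_i
    y_prev = y

# Keep terms down to the last |h_i| >= 0.01; the FIR model needs 11 terms.
n = max(i for i, hi in enumerate(h, start=1) if abs(hi) >= 0.01)
print(n)                      # 11
```

Since h_i = 2(0.6)^{i-1} decays geometrically, the truncation is guaranteed to terminate; this is exactly why part (b), with pole 1.5 outside the unit circle, has no FIR approximation.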

    (b) Consider the OE model

        y(k) = [2 q^{-1} / (1 - 1.5 q^{-1})] u(k) + v(k)

        Can you find an FIR model for this system? Justify your answer.

    (c) Consider the AR model

        v(k) = [1 / (1 - 0.5 q^{-1})] e(k)

        where {e(k)} is a zero-mean white noise signal with unit variance. Using long division, convert the model into the moving average (MA) form

        v(k) = e(k) + h1 e(k-1) + ... + hn e(k-n)

        where n is selected such that terms with |h_i| < 0.01 are neglected.

    (d) Consider the AR model

        v(k) = [1 / ((1 - 0.5 q^{-1})(1 - 0.25 q^{-1}))] e(k)

        Using long division, convert the model into moving average (MA) form.

    (e) Consider the AR model

        v(k) = [1 / (1 - q^{-1})] e(k)

        Using long division, is it possible to convert the model into moving average (MA) form?

11. Consider a process governed by the FIR equation

    y(k) = h1 u(k-1) + h2 u(k-2) + e(k)    (14)

    where {e(k)} is a sequence of independent normal N(0, λ) random variables.

    (a) Determine the estimates (ĥ1, ĥ2) when the input signal {u(k)} is a step input introduced at k = 0.

    (b) Carry out the same investigation as in part (a) when the input signal {u(k)} is white noise with unit variance.
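Problem 11(b) can be explored with a short Monte Carlo experiment. In the sketch below the true values h1 = 1.0, h2 = 0.5 and the noise level 0.1 are illustrative assumptions, not part of the problem statement:

```python
import numpy as np

# Problem 11(b): least-squares FIR estimates with a white-noise input.
# Assumed true values: h1 = 1.0, h2 = 0.5; noise standard deviation 0.1.
rng = np.random.default_rng(0)
N = 20_000
u = rng.standard_normal(N)             # white-noise input, unit variance
e = 0.1 * rng.standard_normal(N)
y = np.zeros(N)
y[2:] = 1.0 * u[1:-1] + 0.5 * u[:-2] + e[2:]

# Regressor rows [u(k-1), u(k-2)] for k = 2, ..., N-1, as in eq. (18)
Phi = np.column_stack([u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)                           # close to [1.0, 0.5]
```

With a white-noise input the matrix E[ϕ(k)ϕ(k)^T] is (close to) the identity, so the estimates converge cleanly to the true coefficients; repeating the experiment with a step input shows why part (a) behaves differently.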

12. Consider data generated by the discrete-time system

    System: y(k) = h1 u(k-1) + h2 u(k-2) + e(k)    (15)

    where {e(k)} is a sequence of independent normal N(0, 1) random variables. Assume that the parameter h of the model

    Model: y(k) = h u(k)    (16)

    is determined by least squares.

    (a) Determine the estimates obtained for large observation sets when the input u(k) is a step function. (This is a simple illustration of the problem of fitting a low order model to data generated by a more complex system. The result obtained depends critically on the character of the input signal.)

    (b) Carry out the same investigation as in part (a) when the input signal is white noise with unit variance.

13. Consider the FIR model

    y(k) = h1 u(k-1) + ... + h_N u(k-N) + v(k)    (17)

    Show that the least squares estimates of the impulse response coefficients are given by equation (12) with

    ϕ(k) = [ u(k-1) ... u(k-N) ]^T    (18)
    θ = [ ĥ1 ... ĥN ]^T              (19)

    In other words, generalize the results of Problem 8 to a general FIR model.

14. If it is desired to identify the parameters of the FIR model (17), taking clues from the previous problem, what is the requirement on the rank of the matrix E[ϕ(k)ϕ(k)^T]? This condition is called persistency of excitation.

15. For an FIR model, show that the parameter estimates are unbiased if {v(k)} is a zero-mean sequence.

16. Consider the discrete-time system given by equation (6), where the input signal {u(k)} and noise {e(k)} are sequences of independent random variables with zero mean and standard deviations σ and λ, respectively. Determine the covariance of the parameter estimates obtained for large observation sets.
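Problem 12 can likewise be simulated. The sketch below fits the one-parameter model y(k) = h u(k) to data from the two-parameter system (15); the true values h1 = 1.0, h2 = 0.5 and the noise level are again illustrative assumptions. Note how the estimate converges to different limits for the two input signals.

```python
import numpy as np

# Problem 12: fit y(k) = h*u(k) by least squares to data generated by
# y(k) = h1*u(k-1) + h2*u(k-2) + e(k). Assumed true values h1 = 1.0,
# h2 = 0.5; noise standard deviation 0.1 (illustrative choices).
rng = np.random.default_rng(1)
h1, h2, N = 1.0, 0.5, 50_000

def fit(u):
    e = 0.1 * rng.standard_normal(N)
    y = np.zeros(N)
    y[2:] = h1 * u[1:-1] + h2 * u[:-2] + e[2:]
    return np.sum(y * u) / np.sum(u * u)   # least squares for y(k) = h*u(k)

h_step = fit(np.ones(N))               # step input: estimate tends to h1 + h2
h_wn = fit(rng.standard_normal(N))     # white-noise input: estimate tends to 0
print(round(h_step, 2), round(h_wn, 2))
```

The step input makes the under-parameterized model pick up the steady-state gain h1 + h2, while a white-noise input leaves y(k) uncorrelated with u(k), driving the estimate to zero: the fitted low-order model depends entirely on the input spectrum.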

17. Consider the discrete-time system

    y(k) = a0 y(k-1) + b0 u(k-1) + e(k) + c0 e(k-1)    (20)

    where the input signal {u(k)} and noise {e(k)} are sequences of independent random variables with zero mean and standard deviations σ and λ, respectively. Assume that a model of the form

    y(k) = a y(k-1) + b u(k-1) + ε(k)    (21)

    is estimated by least squares. Determine the asymptotic values of the estimates when

    (a) {u(k)} is a zero-mean white noise process with standard deviation σ

    (b) {u(k)} is a step input of magnitude σ

    (c) In particular, compare the estimated values (â, b̂) with the true values (a0, b0) for the system

        a0 = 0.8 ; b0 = 1 ; c0 = 0.5    (22)

        for the cases (a) σ = 1, λ = 0.1 and (b) σ = 1, λ = . By comparing the estimates for cases (a) and (b) with the true values, what can you conclude about the effect of the signal-to-noise ratio (σ^2/λ^2) on the parameter estimates?

18. Consider the discrete-time model

    v(k) = a + b k + e(k)    (23)

    where {e(k)} is a sequence of independent normal N(0, λ) random variables. Determine the least squares estimates of the model parameters and the covariance of the estimates. Discuss the behavior of the estimates as the number of data points increases.

19. Consider data generated by

    y(k) = b + e(k) ;  k = 1, 2, ..., N    (24)

    where {e(k) : k = 1, 3, 4, ...} is a sequence of independent random variables. Furthermore, assume that there is a large error at k = 2, i.e., e(2) = A where A is a large number. Determine the estimate obtained and discuss how it depends on A. (This is a simple example that shows how sensitive the least squares estimate is to occasional large errors.)
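A numerical companion to Problem 17: the script below simulates system (20) with the parameter values from (22) and fits model (21) by least squares. Since the second noise level is not legible in the problem statement, λ = 1.0 is used here as an assumed "low signal-to-noise" case.

```python
import numpy as np

# Problem 17: least squares applied to the ARMAX system (20) with true
# values a0 = 0.8, b0 = 1, c0 = 0.5 from (22). lam = 1.0 in the second
# case is an assumed value (the problem statement leaves it blank).
rng = np.random.default_rng(2)

def ls_estimate(lam, N=200_000, sigma=1.0, a0=0.8, b0=1.0, c0=0.5):
    u = sigma * rng.standard_normal(N)
    e = lam * rng.standard_normal(N)
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = a0 * y[k - 1] + b0 * u[k - 1] + e[k] + c0 * e[k - 1]
    Phi = np.column_stack([y[:-1], u[:-1]])      # regressors [y(k-1), u(k-1)]
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta

theta_lo = ls_estimate(0.1)   # high SNR: (a_hat, b_hat) close to (0.8, 1.0)
theta_hi = ls_estimate(1.0)   # low SNR: a_hat biased away from 0.8
print(theta_lo, theta_hi)
```

The bias appears because the colored noise e(k) + c0 e(k-1) is correlated with the regressor y(k-1); it shrinks as σ^2/λ^2 grows, which is the conclusion part (c) is after.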

20. Suppose that we wish to identify a plant that is operating in closed loop as follows:

    Plant dynamics: y(k) = a y(k-1) + b u(k-1) + e(k)    (25)
    Feedback control law: u(k) = -β y(k)                 (26)

    where {e(k)} is a sequence of independent normal N(0, λ) random variables.

    (a) Show that we cannot identify the parameters (a, b) from observations of y and u, even when β is known.

    (b) Assume that an external independent perturbation is introduced in the input signal:

        u(k) = -β y(k) + r(k)    (27)

        where {r(k)} is a sequence of independent normal N(0, σ) random variables. Show that it is now possible to recover estimates of the open loop model parameters using the closed loop data. (Note: Here {r(k)} has been taken as a zero-mean white noise sequence to simplify the analysis. In practice, an independent PRBS signal is added to the manipulated input to make the model parameters identifiable under closed loop conditions.)

21. The English mathematician Richardson proposed the following simple model for the arms race between two countries:

    x(k+1) = a x(k) + b y(k) + f    (28)
    y(k+1) = c x(k) + d y(k) + g    (29)

    where x(k) and y(k) are the yearly expenditures on arms of the two nations and (a, b, c, d, f, g) are model parameters. The following data has been obtained from the World Armaments and Disarmaments Year Book 1982. Determine the parameters of the model by least squares and investigate the stability of the model.

22. Consider an ARMA model of the form

    y(k) = -a y(k-1) + e(k) + c e(k-1)    (30)

    which is equivalent to

    y(k) = H(q) e(k) = [(1 + c q^{-1}) / (1 + a q^{-1})] e(k)    (31)

    where {e(k)} is a sequence of independent normal N(0, λ) random variables. Develop a 1-step ahead predictor ŷ(k+1|k), which uses only the current and past measurements of y.
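The identifiability issue in Problem 20 can be demonstrated numerically. In the sketch below, a = 0.8, b = 1.0, β = 0.5 and the noise and dither levels are illustrative choices; under pure feedback u(k) = -β y(k) the two regressors are exactly collinear, so the normal-equation matrix has rank 1.

```python
import numpy as np

# Problem 20: closed-loop identifiability. Under pure feedback u = -beta*y,
# y(k-1) and u(k-1) are exactly proportional and E[phi phi^T] is singular;
# adding a dither r(k) restores identifiability.
# a = 0.8, b = 1.0, beta = 0.5 and the noise levels are illustrative values.
rng = np.random.default_rng(3)
a, b, beta, N = 0.8, 1.0, 0.5, 50_000

def simulate(dither_std):
    e = 0.1 * rng.standard_normal(N)
    r = dither_std * rng.standard_normal(N)
    y, u = np.zeros(N), np.zeros(N)
    for k in range(1, N):
        y[k] = a * y[k - 1] + b * u[k - 1] + e[k]
        u[k] = -beta * y[k] + r[k]
    Phi = np.column_stack([y[:-1], u[:-1]])
    return Phi, y[1:]

Phi0, _ = simulate(0.0)
rank0 = np.linalg.matrix_rank(Phi0.T @ Phi0)
print(rank0)                          # 1: (a, b) cannot be separated

Phi1, Y1 = simulate(1.0)
theta, *_ = np.linalg.lstsq(Phi1, Y1, rcond=None)
print(theta)                          # recovers approximately (0.8, 1.0)
```

With rank 1, any (â, b̂) on the line â - β b̂ = a - β b fits the data equally well; the dither r(k) breaks the collinearity, which is exactly the role of the PRBS perturbation mentioned in the note.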

23. Consider an ARMAX model of the form

    y(k) = -a y(k-1) + b u(k-1) + e(k) + c e(k-1)    (32)

    which is equivalent to

    y(k) = G(q) u(k) + H(q) e(k) = [b q^{-1} / (1 + a q^{-1})] u(k) + [(1 + c q^{-1}) / (1 + a q^{-1})] e(k)    (33)

    where {e(k)} is a sequence of independent normal N(0, λ) random variables. Develop a 1-step ahead predictor ŷ(k+1|k), which uses only the current and past measurements of y.

24. Consider the Box-Jenkins model

    y(k) = G(q) u(k) + H(q) e(k)

    G(q) = (q + b)/(q + a) ,  H(q) = (q + c)/(q + d)

    Derive the one-step prediction

    ŷ(k|k-1) = [H(q)]^{-1} G(q) u(k) + [1 - (H(q))^{-1}] y(k)
    y(k) = ŷ(k|k-1) + e(k)

    and express the dynamics of ŷ(k|k-1) as a time-domain difference equation.

25. Consider the moving average (MA) process

    y(k) = H(q) e(k)    (34)

    H(q) = 1 - 1.1 q^{-1} + 0.3 q^{-2}    (35)

    Compute H^{-1}(q) as an infinite expansion by long division and develop an auto-regressive model of the form

    e(k) = H^{-1}(q) y(k)    (36)

    This model facilitates estimation of the noise e(k) based on current and past measurements of y(k).

26. Given an ARMAX model of the form

    y(k) = [B(q)/A(q)] u(k) + [C(q)/A(q)] e(k) = [0.1 q^{-1} / (1 - 0.9 q^{-1})] u(k) + [(1 - 0.2 q^{-1}) / (1 - 0.9 q^{-1})] e(k)    (37)

    rearrange this model as

    y(k) = [C^{-1}(q) B(q) / (C^{-1}(q) A(q))] u(k) + [1 / (C^{-1}(q) A(q))] e(k)    (38)

    Compute C^{-1}(q) as an infinite expansion by long division and truncate the expansion after a finite number of terms once the coefficients become small, i.e.

    C_T^{-1}(q) ≈ 1 + c1 q^{-1} + ... + cn q^{-n}    (39)

    Using this truncated C_T^{-1}(q), express the model in ARX form

    y(k) = [B̃(q)/Ã(q)] u(k) + [1/Ã(q)] e(k)    (40)

    Ã(q) = C_T^{-1}(q) A(q) ;  B̃(q) = C_T^{-1}(q) B(q)    (41)

    This simple calculation illustrates how a low order ARMAX model can be approximated by a high order ARX model.

27. Consider the transfer functions

    G1(q) = (q - 0.5)(q + 0.5) / [(q - 1)(q^2 - 1.5q + 0.7)]
    G2(q) = (q - 0.2)(q + 0.2) / [(q - 1)(q^2 - 1.5q + 0.7)]
    H(q) = (q - 0.8) / (q^2 - 1.5q + 0.7)

    Derive state-space realizations using the observable canonical form for the following systems (cases (a) to (d)):

    (a) y(k) = G1(q) u1(k) + v(k)

    (b) y(k) = G1(q) u1(k) + G2(q) u2(k) + v(k)

    (c) y(k) = G1(q) u1(k) + H(q) e(k)

    (d) y(k) = G1(q) u(k) + G2(q) u(k) + H(q) e(k)

    (e) Given that the sequence {e(k)} is a zero-mean white noise sequence with standard deviation equal to 0.5, express the resulting state-space models for cases (c) and (d) in the form

        x(k+1) = Φ x(k) + Γ u(k) + w(k)
        y(k) = C x(k) + e(k)

        and estimate the covariance of the white noise sequence {w(k)}.

    (f) Derive a state-space realization using the controllable canonical form for case (a).

28. Derive state realizations for

    [ y1(k) ]   [ 1 / (q^2 - 1.5q + 0.8) ] [ q + 0.5   q - 1.5 ] [ u1(k) ]   [ v1(k) ]
    [ y2(k) ] =                            [ q - 0.5   q + 1.5 ] [ u2(k) ] + [ v2(k) ]

    in controllable and observable canonical forms.

29. A system is represented by

    G(s) = 3 / [(s + 4)(s + 1)]

    (a) Derive continuous-time state-space realizations

        dx/dt = A x + B u ;  y = C x

        in (i) controllable canonical form and (ii) observable canonical form.

    (b) Convert each of the continuous-time state-space models into the discrete state-space form

        x(k+1) = Φ x(k) + Γ u(k) ;  y(k) = C x(k)

        Is the canonical structure in continuous time preserved after discretization? Show that both discrete realizations have identical transfer functions G(q).

    (c) If the canonical structures are not preserved after discretization, derive discrete state realizations in (i) controllable canonical form and (ii) observable canonical form starting from G(q).

30. Derive state realizations for

    [ y1(t) ]   [ 1 / (s^2 + 3s + 2) ] [ s + 1.5   s - 2 ] [ u1(t) ]
    [ y2(t) ] =                        [ s - 3     s + 2 ] [ u2(t) ]

    in controllable and observable canonical forms.
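As a sanity check on Problems 27-29, the controllable and observable canonical forms of the same transfer function must be input-output equivalent. The sketch below builds both realizations of a second-order example, G(q) = (q - 0.5)/(q^2 - 1.5q + 0.7) (an illustrative choice sharing the stable denominator factor of G1), and compares their Markov parameters h_k = C Φ^{k-1} Γ:

```python
import numpy as np

# Controllable vs. observable canonical realizations of
# G(q) = (b1*q + b2)/(q^2 + a1*q + a2), here (q - 0.5)/(q^2 - 1.5q + 0.7).
a1, a2 = -1.5, 0.7           # denominator q^2 + a1*q + a2
b1, b2 = 1.0, -0.5           # numerator  b1*q + b2

# Controllable canonical form
Phi_c = np.array([[-a1, -a2], [1.0, 0.0]])
Gam_c = np.array([[1.0], [0.0]])
C_c = np.array([[b1, b2]])

# Observable canonical form: the dual realization
Phi_o, Gam_o, C_o = Phi_c.T, C_c.T, Gam_c.T

def markov(Phi, Gam, C, n=10):
    x, h = Gam, []
    for _ in range(n):
        h.append((C @ x).item())     # h_k = C Phi^(k-1) Gamma
        x = Phi @ x
    return np.array(h)

h_c = markov(Phi_c, Gam_c, C_c)
h_o = markov(Phi_o, Gam_o, C_o)
print(np.allclose(h_c, h_o))         # True: identical input-output behaviour
```

The two state-space models have different state trajectories but identical Markov parameters, i.e. the same transfer function; this is the property Problem 29(b) asks you to verify after discretization.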