EL1820 Modeling of Dynamical Systems
EL1820 Modeling of Dynamical Systems
Lecture 9 - Parameter estimation in linear models
- Model structures
- Parameter estimation via prediction error minimization
- Properties of the estimate: bias and variance
Today's goal. You should be able to
- distinguish between common model structures used in identification
- estimate model parameters using the prediction-error method
- calculate the optimal parameters for ARX models using least squares
- estimate bias and variance of estimates from model and input-signal properties
System identification. Basic idea: estimate the system from measurements of u(t) and y(t), where w(t) is a disturbance acting on the system and e(t) is measurement noise; the available data are the sampled signals u(kh) and y(kh).
Many issues:
- choice of sampling frequency and input signal (experimental conditions)
- what class of models, and how to model disturbances
- estimating model parameters from sampled, finite, and noisy data
System identification via parameter estimation. A linear system maps the input u[k] and disturbance w[k] to the output y[k]. We need to fix the model structure before trying to estimate parameters:
- system model and disturbance model
- model order (degrees of the transfer-function polynomials)
Model structures. Commonly used structures (BJ includes the others as special cases):
- ARX (autoregressive with exogenous input): A(q) y[k] = B(q) u[k] + e[k]
- ARMAX (autoregressive moving average with exogenous input): A(q) y[k] = B(q) u[k] + C(q) e[k]
- OE (output error): y[k] = (B(q)/F(q)) u[k] + e[k]
- BJ (Box-Jenkins): y[k] = (B(q)/F(q)) u[k] + (C(q)/D(q)) e[k]
Transfer function parameterizations. The transfer functions G(q) and H(q) in the linear model
y[k] = G(q; θ) u[k] + H(q; θ) e[k]
are parameterized as
G(q; θ) = q^{-n_k} (b_0 + b_1 q^{-1} + ... + b_{n_b} q^{-n_b}) / (1 + f_1 q^{-1} + ... + f_{n_f} q^{-n_f})
H(q; θ) = (1 + c_1 q^{-1} + ... + c_{n_c} q^{-n_c}) / (1 + d_1 q^{-1} + ... + d_{n_d} q^{-n_d})
where the parameter vector θ contains {b_k}, {f_k}, {c_k}, {d_k}.
Note: n_k determines the dead time; n_b, n_f, n_c, n_d give the orders of the polynomials.
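As a concrete illustration, the parameterized G(q; θ) can be evaluated at any point on the unit circle to obtain its frequency response. The coefficient values below are hypothetical, chosen only to show the bookkeeping of this parameterization; this is a sketch, not part of the lecture:

```python
import numpy as np

# Hypothetical second-order example: n_b = 1, n_f = 2, dead time n_k = 1
b = [0.5, 0.2]        # b_0, b_1
f = [1.0, -1.2, 0.5]  # leading 1, then f_1, f_2
n_k = 1

def G(z, b, f, n_k):
    """Evaluate G(z) = z^(-n_k) * (b_0 + b_1 z^-1 + ...) / (1 + f_1 z^-1 + ...)."""
    num = sum(bi * z**(-i) for i, bi in enumerate(b))
    den = sum(fi * z**(-i) for i, fi in enumerate(f))
    return z**(-n_k) * num / den

# Static gain: evaluate at z = 1 (i.e. at frequency ω = 0)
print(abs(G(1.0, b, f, n_k)))   # equals (0.5 + 0.2) / (1 - 1.2 + 0.5)
```

Setting z = e^{iω} for a grid of ω values gives the Bode magnitude |G(e^{iω}; θ)| used later in the lecture's frequency-domain plots.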
Model order selection from physical insight. Physical insight can often help us determine the right model order. If the system is sampled using a zero-order hold (piecewise-constant input):
- n_f equals the number of poles of the continuous-time system
- if the system has no delay and no direct term, then n_b = n_f − 1 and n_k = 1
- if the system has no delay but a direct term, then n_b = n_f and n_k = 0
- if the continuous-time system has time delay τ, then n_k = τ/h + 1
Note: n_b does not depend on the number of continuous-time zeros!
Basic principle of parameter estimation. The system produces y[k] from u[k] and the disturbance w[k]; the model, fed the same input, produces the prediction ŷ[k].
- For given parameters θ, the model predicts that the system output will be ŷ[k; θ].
- Determine θ so that ŷ[k; θ] matches the observed output y[k] as closely as possible.
To solve the parameter estimation problem, we note that
1. The value of ŷ[k; θ] depends on the disturbance model.
2. The concept "as closely as possible" must be given a mathematical formulation.
April 29, 2004
Prediction error minimization (PEM)
1. Compute ŷ[k | k−1; θ], the model's prediction of the system output given information available at time k−1.
2. Form the prediction error ε[k] = y[k] − ŷ[k | k−1; θ].
3. Construct the loss function V_N(θ) = (1/N) Σ_{k=1}^N ε²[k].
4. The optimal θ is the one minimizing the loss function: θ̂ = arg min_θ V_N(θ).
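The four steps above can be sketched directly in code. The first-order ARX predictor below is a stand-in example of my own (using the sign convention of the ARX example later in the lecture), not a prescribed implementation:

```python
import numpy as np

def pem_loss(theta, y, u, predictor):
    """Steps 1-3: predict, form the errors, average their squares: V_N(theta)."""
    eps = y - predictor(theta, y, u)      # prediction errors
    return np.mean(eps**2)

def arx_predictor(theta, y, u):
    """One-step predictor for the model y[k] = a*y[k-1] + b*u[k-1] + e[k]."""
    a, b = theta
    y_hat = np.zeros_like(y)
    y_hat[1:] = a * y[:-1] + b * u[:-1]   # uses data up to time k-1 only
    return y_hat

# Step 4 would minimize pem_loss over theta; for noise-free data generated
# with (a, b) = (0.5, 1.0), the loss at the true parameters is zero.
u = np.sin(0.2 * np.arange(50))
y = np.zeros(50)
for k in range(1, 50):
    y[k] = 0.5 * y[k-1] + 1.0 * u[k-1]
print(pem_loss((0.5, 1.0), y, u, arx_predictor))  # → 0.0
```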
Prediction using linear models. Consider the linear model y[k] = G(q) u[k] + H(q) e[k]. Multiply by H^{-1}(q) (to make the noise term white) and rewrite as
y[k] = (1 − H^{-1}(q)) y[k] + H^{-1}(q) G(q) u[k] + e[k]
Since {e[k]} is a white-noise sequence, our best prediction is
ŷ[k] = (1 − H^{-1}(q)) y[k] + H^{-1}(q) G(q) u[k]
Since H(q) is monic, 1 − H^{-1}(q) contains at least one delay, so the prediction uses only old outputs (measured up to time k−1).
Prediction using ARX models. For ARX models, H = 1/A and G = q^{-n_k} B/A, so
(1 − H^{-1}(q)) y[k] = (1 − A(q)) y[k] = (−a_1 q^{-1} − ... − a_{n_a} q^{-n_a}) y[k]
H^{-1}(q) G(q) u[k] = q^{-n_k} B(q) u[k] = (b_0 + b_1 q^{-1} + ... + b_{n_b} q^{-n_b}) q^{-n_k} u[k]
Thus the predictor is linear in the parameters:
ŷ[k | k−1; θ] = φ^T[k] θ
where θ = (a_1, ..., a_{n_a}, b_0, ..., b_{n_b})^T and φ[k] = (−y[k−1], ..., −y[k−n_a], u[k−n_k], ..., u[k−n_k−n_b])^T.
Linear regression. Linear model and linear predictor ({e[k]}: white noise):
y[k] = φ^T[k] θ_0 + e[k],  ŷ[k] = φ^T[k] θ
It is convenient to express the residuals ε[k] = y[k] − ŷ[k] in vector form:
ε_N = y_N − Φ_N θ
where ε_N = (ε[1], ..., ε[N])^T, y_N = (y[1], ..., y[N])^T, and Φ_N is the matrix with rows φ^T[1], ..., φ^T[N]. Then the loss function can be written as
V_N(θ) = (1/N) Σ_{k=1}^N ε²[k] = (1/N) ε_N^T ε_N = (1/N) (y_N − Φ_N θ)^T (y_N − Φ_N θ)
and the optimal estimate is found by solving ∂V/∂θ = 0:
θ̂ = (Φ_N^T Φ_N)^{-1} Φ_N^T y_N
(provided the inverse exists; see the end of the slides for a proof).
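A minimal numerical check of the least-squares formula, using numpy's `lstsq` (which solves the same normal equations in a numerically safer way than forming the explicit inverse). The simulated first-order system and its parameter values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300
a_true, b_true = 0.7, 0.5                 # hypothetical true parameters
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):                     # noise-free ARX data
    y[k] = a_true * y[k-1] + b_true * u[k-1]

# Regressor matrix Phi_N with rows phi^T[k] = (y[k-1], u[k-1])
Phi = np.column_stack([y[:-1], u[:-1]])
Y = y[1:]
theta_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print(theta_hat)   # recovers (a_true, b_true) exactly for noise-free data
```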
Example: estimation in ARX models. Estimate the model parameters a and b in the ARX model
y[k] = a y[k−1] + b u[k−1] + e[k]
from input and output sequences {u[k]}, {y[k]} for k = 0, ..., N.
Using θ = (a, b)^T and φ[k] = (y[k−1], u[k−1])^T, we find
Φ_N^T Φ_N = [ Σ y²[k−1]  Σ y[k−1] u[k−1] ; Σ u[k−1] y[k−1]  Σ u²[k−1] ]
so the optimal estimate is given by
θ̂ = (Φ_N^T Φ_N)^{-1} [ Σ y[k−1] y[k] ; Σ u[k−1] y[k] ]
with all sums taken over k = 1, ..., N.
Note: the estimate is computed from (sample) covariances of u[k] and y[k].
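The closed-form 2×2 solution on this slide can be checked by simulation; the sample size, noise level, and true (a, b) below are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
a, b = 0.7, 1.5                                      # hypothetical true values
u = rng.standard_normal(N + 1)
e = 0.1 * rng.standard_normal(N + 1)
y = np.zeros(N + 1)
for k in range(1, N + 1):
    y[k] = a * y[k-1] + b * u[k-1] + e[k]

yp, up, yk = y[:-1], u[:-1], y[1:]                   # y[k-1], u[k-1], y[k]
R = np.array([[np.sum(yp * yp), np.sum(yp * up)],
              [np.sum(up * yp), np.sum(up * up)]])   # Phi_N^T Phi_N
r = np.array([np.sum(yp * yk), np.sum(up * yk)])     # Phi_N^T y_N
a_hat, b_hat = np.linalg.solve(R, r)
print(a_hat, b_hat)                                  # close to the true (a, b)
```

Dividing both R and r by N turns the sums into the sample covariances the slide's note refers to, without changing the solution.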
Estimation in general model structures. Estimation is more difficult when the predictor is not linear in the parameters. In general, we must minimize V_N(θ) using iterative numerical methods, e.g.
θ^{(i+1)} = θ^{(i)} − μ^{(i)} M^{(i)} V_N'(θ^{(i)})
Example: Newton's method uses M^{(i)} = (V_N''(θ^{(i)}))^{-1}, while Gauss-Newton approximates M^{(i)} using first-order derivatives only.
Problem: the result is locally optimal, but not necessarily globally optimal.
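A generic Gauss-Newton iteration of the form above can be sketched as follows. The numerical Jacobian and the exponential curve-fitting example are my own illustration under the stated update rule, not the lecture's algorithm:

```python
import numpy as np

def gauss_newton(theta0, y, predictor, n_iter=50, mu=1.0, h=1e-6):
    """Minimize V_N(theta) = mean((y - yhat)^2) using M = (J^T J)^{-1},
    where J is the Jacobian of the predictor (first-order derivatives only)."""
    theta = np.array(theta0, dtype=float)
    for _ in range(n_iter):
        eps = y - predictor(theta)
        J = np.empty((len(y), len(theta)))   # d(yhat)/d(theta), numerically
        for i in range(len(theta)):
            tp = theta.copy()
            tp[i] += h
            J[:, i] = (predictor(tp) - predictor(theta)) / h
        theta = theta + mu * np.linalg.solve(J.T @ J, J.T @ eps)
    return theta

# Fit y = c1 * exp(c2 * t), a model that is nonlinear in its parameters
t = np.linspace(0.0, 1.0, 40)
y = 2.0 * np.exp(-1.3 * t)                   # noise-free data, true (2.0, -1.3)
theta_hat = gauss_newton([1.0, -1.0], y, lambda th: th[0] * np.exp(th[1] * t))
print(theta_hat)
```

Starting the iteration from several initial points is the usual practical guard against the local-minimum problem mentioned above.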
Example. G(s) = 10/(s² + 2s + 10), sampled with h = 0.05, var{v} = …
[Figure: Bode magnitude and phase (vs. frequency in rad/s) of the true system and of estimated ARX and OE models.]
Model structure matters!
Properties of PEM estimates. What can we say about models estimated using prediction-error minimization? Model errors have two components:
1. Bias errors: arise if the model is unable to capture the true system.
2. Variance errors: due to the influence of stochastic disturbances.
We will study two properties of general prediction-error methods:
1. Convergence: what happens to θ̂_N as N grows?
2. Accuracy: what can we say about the size of θ̂_N − θ_0 as N increases?
Convergence. If the disturbances acting on the system are stochastic, then so is the prediction error ε[k]. Under quite general conditions (even if the ε[k] are not independent),
lim_{N→∞} (1/N) Σ_{k=1}^N ε²[k; θ] = E{ε²[k; θ]}
and
θ̂_N → θ* = arg min_θ E{ε²[k; θ]} as N → ∞
Even if the model cannot reflect reality, the estimate will minimize the mean-squared prediction error!
Example. Assume you try to estimate the parameter b in the model ŷ[k] = b u[k−1], while the true system is
y[k] = u[k−1] + u[k−2] + e[k]
where {u[k]} and {e[k]} are white-noise signals, independent of each other. What will the PEM estimate converge to?
PEM will find the parameter that minimizes the mean-squared error:
E{ε²[k]} = E{(y[k] − ŷ[k])²} = E{(u[k−1] + u[k−2] + e[k] − b u[k−1])²}
= E{((1 − b) u[k−1] + u[k−2])²} + σ_e² = (1 − b)² σ_u² + σ_u² + σ_e²
This expression is minimized by b = 1 (the asymptotic estimate).
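The asymptotic result b* = 1 can be verified numerically; the sample size and random seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
u = rng.standard_normal(N + 2)
e = rng.standard_normal(N + 2)
y = np.zeros(N + 2)
y[2:] = u[1:-1] + u[:-2] + e[2:]      # true system: y[k] = u[k-1] + u[k-2] + e[k]

# Least-squares (= PEM) estimate of b in the model yhat[k] = b*u[k-1]
b_hat = np.sum(y[2:] * u[1:-1]) / np.sum(u[1:-1] ** 2)
print(b_hat)                           # approaches 1 as N grows
```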
Consistency. Assume there is some θ_0 such that {ε[k; θ_0]} is white noise; then E{ε²[k; θ]} is minimized by this value (see the end of the slides for a proof). If, moreover,
ŷ[k; θ_0] = ŷ[k; θ]  ⟹  θ = θ_0
then one can conclude that θ̂_N → θ_0 as N → ∞.
θ*: frequency-domain characterization. Assume that the true system is described by y[k] = G_0(q) u[k] + w[k], and that we try to estimate a model of the form (H*(q) independent of θ)
y[k] = G(q; θ) u[k] + H*(q) e[k]
If {u[k]} and {w[k]} are independent, then
θ* = lim_{N→∞} θ̂_N = arg min_θ ∫_{−π}^{π} |G_0(e^{iω}) − G(e^{iω}; θ)|² Φ_u(ω) / |H*(e^{iω})|² dω
θ* minimizes a least-squares criterion weighted by Φ_u(ω)/|H*(e^{iω})|²: good fit where Φ_u(ω) has much energy, or H*(e^{iω}) has little energy. We can therefore focus model accuracy on the important frequency range by choosing {u[k]}.
Example. Output-error method using a low- and a high-frequency input signal.
[Figure: two magnitude plots (vs. frequency in rad/s) comparing the true system with the OE estimate obtained for each input.]
Estimation error variance. If {e[k]} is white noise with variance λ, then
E{(θ̂_N − θ_0)(θ̂_N − θ_0)^T} ≈ (1/N) λ R^{-1}
where
R = E{ψ[k; θ_0] ψ^T[k; θ_0]},  ψ[k; θ_0] = (d/dθ) ŷ[k; θ] |_{θ=θ_0}
The error variance decreases with
- the sensitivity of the prediction error with respect to the parameters
- the number of measurements
Estimation error variance, cont'd. We can estimate the estimation-error covariance via
P̂_N = (1/N) λ̂ R̂_N^{-1}
where
λ̂ = (1/N) Σ_{k=1}^N ε²[k; θ̂_N],  R̂_N = (1/N) Σ_{k=1}^N ψ[k; θ̂_N] ψ^T[k; θ̂_N]
Moreover, one can show that
√N (θ̂_N − θ_0) →d N(0, λ R^{-1})
This can be used to compute confidence regions for the parameter estimates.
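These formulas can be exercised on a linear ARX example, where the predictor gradient is simply ψ[k] = φ[k]; the simulated system and its parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2000
a, b = 0.7, 1.5                             # hypothetical true parameters
lam = 0.01                                  # true noise variance
u = rng.standard_normal(N + 1)
e = np.sqrt(lam) * rng.standard_normal(N + 1)
y = np.zeros(N + 1)
for k in range(1, N + 1):
    y[k] = a * y[k-1] + b * u[k-1] + e[k]

Phi = np.column_stack([y[:-1], u[:-1]])     # rows psi^T[k] = phi^T[k]
Y = y[1:]
theta_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
eps = Y - Phi @ theta_hat
lam_hat = np.mean(eps**2)                   # estimate of the noise variance
R_hat = Phi.T @ Phi / N
P_hat = lam_hat * np.linalg.inv(R_hat) / N  # estimated covariance of theta_hat
std = np.sqrt(np.diag(P_hat))
print(theta_hat, std)   # e.g. 95% confidence interval: theta_hat[i] +/- 1.96*std[i]
```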
Error variance in the frequency domain. For the variance of the frequency response of the estimate, we have
var{G(e^{iω}; θ̂)} ≈ (n/N) Φ_w(ω)/Φ_u(ω),  for n, N ≫ 1
The variance
- increases with the number of model parameters n
- decreases with the number of observations and with the signal-to-noise ratio
- again, the frequency content of the input influences the accuracy of the model
This is similar to the error bounds in spectral analysis: |G(e^{iω}; θ̂)| typically decreases as ω → π/h, while the variance stays constant (or increases!), which implies high relative error at high frequencies.
Example. Confidence intervals for frequency responses for two different input spectra.
[Figure: for each input, its spectrum and the resulting estimate with confidence region, plotted against frequency in rad/s.]
Next lecture: experimental conditions and model validation.
Bonus: calculation of ∂V/∂θ
V(θ) = (1/N) (y_N − Φ_N θ)^T (y_N − Φ_N θ) = (1/N) (y_N^T y_N − 2 θ^T Φ_N^T y_N + θ^T Φ_N^T Φ_N θ)
where θ^T Φ_N^T y_N = Σ_i θ_i (Φ_N^T y_N)_i and θ^T Φ_N^T Φ_N θ = Σ_{i,j} θ_i θ_j (Φ_N^T Φ_N)_{ij}. Therefore
∂V/∂θ_k = −(2/N) (Φ_N^T y_N)_k + (2/N) Σ_i (Φ_N^T Φ_N)_{ki} θ_i
or, in vector form,
∂V/∂θ = −(2/N) Φ_N^T y_N + (2/N) (Φ_N^T Φ_N) θ = 0  for θ = θ̂
Hence (Φ_N^T Φ_N) θ̂ = Φ_N^T y_N, i.e., θ̂ = (Φ_N^T Φ_N)^{-1} Φ_N^T y_N.
Bonus: proof that θ_0 minimizes E{ε²[k; θ]} when {ε[k; θ_0]} is white noise
E{ε²[k; θ]} = E{(y[k] − ŷ[k; θ_0] + ŷ[k; θ_0] − ŷ[k; θ])²}, where y[k] − ŷ[k; θ_0] = ε[k; θ_0]
= E{ε²[k; θ_0]} + E{(ŷ[k; θ_0] − ŷ[k; θ])²} + 2 E{ε[k; θ_0] (ŷ[k; θ_0] − ŷ[k; θ])}
≥ E{ε²[k; θ_0]}  if  E{ε[k; θ_0] (ŷ[k; θ_0] − ŷ[k; θ])} = 0
Now, y[k] = ε[k; θ_0] + ŷ[k; θ_0] is a function of ε[k; θ_0], ε[k−1; θ_0], ..., u[k], u[k−1], ..., because ŷ[k; θ_0] is a function of y[k−1], y[k−2], ... and u[k], u[k−1], ..., where y[k−1], ... depend on previous values ε[k−1; θ_0], ..., and so on.
Then, since {ε[k; θ_0]} is white noise, ε[k; θ_0] is uncorrelated with y[k−1], ... and u[k], ..., hence uncorrelated with both ŷ[k; θ_0] and ŷ[k; θ], i.e., E{ε[k; θ_0] (ŷ[k; θ_0] − ŷ[k; θ])} = 0.
This shows that E{ε²[k; θ]} ≥ E{ε²[k; θ_0]} for all θ.
More informationECE 636: Systems identification
ECE 636: Systems identification Lectures 7 8 onparametric identification (continued) Important distributions: chi square, t distribution, F distribution Sampling distributions ib i Sample mean If the variance
More informationSystem Identification, Lecture 4
System Identification, Lecture 4 Kristiaan Pelckmans (IT/UU, 2338) Course code: 1RT880, Report code: 61800 - Spring 2016 F, FRI Uppsala University, Information Technology 13 April 2016 SI-2016 K. Pelckmans
More informationFurther Results on Model Structure Validation for Closed Loop System Identification
Advances in Wireless Communications and etworks 7; 3(5: 57-66 http://www.sciencepublishinggroup.com/j/awcn doi:.648/j.awcn.735. Further esults on Model Structure Validation for Closed Loop System Identification
More informationSpatial Statistics with Image Analysis. Lecture L08. Computer exercise 3. Lecture 8. Johan Lindström. November 25, 2016
C3 Repetition Creating Q Spectral Non-grid Spatial Statistics with Image Analysis Lecture 8 Johan Lindström November 25, 216 Johan Lindström - johanl@maths.lth.se FMSN2/MASM25L8 1/39 Lecture L8 C3 Repetition
More informationResponse Surface Methods
Response Surface Methods 3.12.2014 Goals of Today s Lecture See how a sequence of experiments can be performed to optimize a response variable. Understand the difference between first-order and second-order
More informationAUTOMATIC CONTROL COMMUNICATION SYSTEMS LINKÖPINGS UNIVERSITET. Questions AUTOMATIC CONTROL COMMUNICATION SYSTEMS LINKÖPINGS UNIVERSITET
The Problem Identification of Linear and onlinear Dynamical Systems Theme : Curve Fitting Division of Automatic Control Linköping University Sweden Data from Gripen Questions How do the control surface
More information! # % & () +,.&/ 01),. &, / &
! # % & () +,.&/ ),. &, / & 2 A NEW METHOD FOR THE DESIGN OF ENERGY TRANSFER FILTERS Xiaofeng Wu, Z Q Lang and S. A. Billings Department of Automatic Control and Systems Engineering The University of Sheffield
More informationOPTIMAL DESIGN INPUTS FOR EXPERIMENTAL CHAPTER 17. Organization of chapter in ISSO. Background. Linear models
CHAPTER 17 Slides for Introduction to Stochastic Search and Optimization (ISSO)by J. C. Spall OPTIMAL DESIGN FOR EXPERIMENTAL INPUTS Organization of chapter in ISSO Background Motivation Finite sample
More informationLinear Approximations of Nonlinear FIR Systems for Separable Input Processes
Linear Approximations of Nonlinear FIR Systems for Separable Input Processes Martin Enqvist, Lennart Ljung Division of Automatic Control Department of Electrical Engineering Linköpings universitet, SE-581
More information1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation
1 Outline. 1. Motivation 2. SUR model 3. Simultaneous equations 4. Estimation 2 Motivation. In this chapter, we will study simultaneous systems of econometric equations. Systems of simultaneous equations
More informationEstimation theory. Parametric estimation. Properties of estimators. Minimum variance estimator. Cramer-Rao bound. Maximum likelihood estimators
Estimation theory Parametric estimation Properties of estimators Minimum variance estimator Cramer-Rao bound Maximum likelihood estimators Confidence intervals Bayesian estimation 1 Random Variables Let
More informationEL2520 Control Theory and Practice
EL2520 Control Theory and Practice Lecture 8: Linear quadratic control Mikael Johansson School of Electrical Engineering KTH, Stockholm, Sweden Linear quadratic control Allows to compute the controller
More informationLECTURE 10 LINEAR PROCESSES II: SPECTRAL DENSITY, LAG OPERATOR, ARMA. In this lecture, we continue to discuss covariance stationary processes.
MAY, 0 LECTURE 0 LINEAR PROCESSES II: SPECTRAL DENSITY, LAG OPERATOR, ARMA In this lecture, we continue to discuss covariance stationary processes. Spectral density Gourieroux and Monfort 990), Ch. 5;
More informationImproving Convergence of Iterative Feedback Tuning using Optimal External Perturbations
Proceedings of the 47th IEEE Conference on Decision and Control Cancun, Mexico, Dec. 9-, 2008 Improving Convergence of Iterative Feedback Tuning using Optimal External Perturbations Jakob Kjøbsted Huusom,
More informationAppendix A: The time series behavior of employment growth
Unpublished appendices from The Relationship between Firm Size and Firm Growth in the U.S. Manufacturing Sector Bronwyn H. Hall Journal of Industrial Economics 35 (June 987): 583-606. Appendix A: The time
More informationUniversity of Pavia. M Estimators. Eduardo Rossi
University of Pavia M Estimators Eduardo Rossi Criterion Function A basic unifying notion is that most econometric estimators are defined as the minimizers of certain functions constructed from the sample
More informationFinite Sample Confidence Regions for Parameters in Prediction Error Identification using Output Error Models
Proceedings of the 7th World Congress The International Federation of Automatic Control Finite Sample Confidence Regions for Parameters in Prediction Error Identification using Output Error Models Arnold
More informationLecture 2: Statistical Decision Theory (Part I)
Lecture 2: Statistical Decision Theory (Part I) Hao Helen Zhang Hao Helen Zhang Lecture 2: Statistical Decision Theory (Part I) 1 / 35 Outline of This Note Part I: Statistics Decision Theory (from Statistical
More informationCovariance Stationary Time Series. Example: Independent White Noise (IWN(0,σ 2 )) Y t = ε t, ε t iid N(0,σ 2 )
Covariance Stationary Time Series Stochastic Process: sequence of rv s ordered by time {Y t } {...,Y 1,Y 0,Y 1,...} Defn: {Y t } is covariance stationary if E[Y t ]μ for all t cov(y t,y t j )E[(Y t μ)(y
More informationJust-in-Time Models with Applications to Dynamical Systems
Linköping Studies in Science and Technology Thesis No. 601 Just-in-Time Models with Applications to Dynamical Systems Anders Stenman REGLERTEKNIK AUTOMATIC CONTROL LINKÖPING Division of Automatic Control
More informationExpressions for the covariance matrix of covariance data
Expressions for the covariance matrix of covariance data Torsten Söderström Division of Systems and Control, Department of Information Technology, Uppsala University, P O Box 337, SE-7505 Uppsala, Sweden
More informationModelling Non-linear and Non-stationary Time Series
Modelling Non-linear and Non-stationary Time Series Chapter 2: Non-parametric methods Henrik Madsen Advanced Time Series Analysis September 206 Henrik Madsen (02427 Adv. TS Analysis) Lecture Notes September
More informationUnivariate Time Series Analysis; ARIMA Models
Econometrics 2 Fall 24 Univariate Time Series Analysis; ARIMA Models Heino Bohn Nielsen of4 Outline of the Lecture () Introduction to univariate time series analysis. (2) Stationarity. (3) Characterizing
More information