Modeling and Analysis of Dynamic Systems

1 Modeling and Analysis of Dynamic Systems, by Dr. Guillaume Ducard, Fall 2016, Institute for Dynamic Systems and Control, ETH Zurich, Switzerland

2 Outline: Lecture 9: Model Parametrization

4 Introduction: You came up with a mathematical model of a system, which contains some parameters (e.g. mass, elasticity, specific heat, ...). Now you need to run experiments to identify the model parameters. How to proceed?

5 Introduction: Least Squares Methods
Classical LS methods, for static and linear systems: closed-form solutions available.
Nonlinear LS methods, for dynamic and nonlinear systems: only numerical (optimization) solutions available.
Remark: there are closed-form approaches for linear dynamic systems as well; see master-level courses (e.g. Introduction to Recursive Filtering and Estimation).

6 Planning experiments is about knowing:
what excitation of the system to apply (choice of correct input signals),
what to measure in the system (choice of sensors, their location, etc.),
whether the measurements are for linear or nonlinear model identification,
the frequency content of the excitation signals,
the noise level at the input and output of the system,
safety issues,
so as to best and efficiently identify the system parameters. Choose signals such that all the relevant dynamics and static effects inside the plant are excited with the correct amount of input energy; a sketch of such an excitation signal is given below. More details in the script.
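
As an illustration of excitation design, the sketch below generates a multi-sine input whose spectral lines cover an assumed frequency band of interest; the band edges, experiment length, and amplitude limit are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# Multi-sine excitation covering an assumed band of interest (0.1 ... 10 Hz),
# so that the plant dynamics in that band are excited with controlled energy.
fs = 100.0                            # sampling frequency [Hz] (assumption)
t = np.arange(0.0, 20.0, 1.0 / fs)    # 20 s experiment (assumption)
freqs = np.logspace(-1, 1, 15)        # 15 spectral lines between 0.1 and 10 Hz
rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)  # random phases lower the crest factor
u = np.zeros_like(t)
for f, p in zip(freqs, phases):
    u += np.sin(2.0 * np.pi * f * t + p)
u /= np.max(np.abs(u))                # scale to an assumed actuator/safety limit of +-1
```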

7 The data obtained experimentally may be used for two purposes:
1. To identify unknown system structures and system parameters, using a first set of data: u_1, y_{r,1}.
2. To validate the results of the system modeling and parameter identification, using a second set of data: u_2, y_{r,2}.
[Figure: the real plant and the modeled system are driven by the same input u; the plant produces the measured output y_r, the model the output y_m.]

8 A word of caution: it is of fundamental importance not to use the same data set for both purposes. The real quality of a parametrized model may only be assessed by comparing the prediction of that model with measurement data that have not been used in the model parametrization. Remark: the model and its identification are validated if, for the same input signal u_2, the output signals y_{r,2} and y_m are sufficiently similar. A minimal identification/validation split is sketched below.
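
A minimal sketch of this identification/validation split in Python (numpy), assuming a hypothetical static model y = π_1 u + π_2 u²; the data and noise levels are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
make_H = lambda u: np.column_stack([u, u**2])   # hypothetical regressor h(u) = [u, u^2]
pi_true = np.array([2.0, -0.5])                 # made-up "true" parameters

u1 = rng.uniform(-1.0, 1.0, 100)                # first data set: identification
y1 = make_H(u1) @ pi_true + 0.05 * rng.standard_normal(100)
u2 = rng.uniform(-1.0, 1.0, 100)                # second data set: validation only
y2 = make_H(u2) @ pi_true + 0.05 * rng.standard_normal(100)

pi_hat, *_ = np.linalg.lstsq(make_H(u1), y1, rcond=None)  # identify on data set 1
y_m = make_H(u2) @ pi_hat                                 # model output for input u2
rmse = np.sqrt(np.mean((y2 - y_m) ** 2))  # compare y_m with y_{r,2}: small error validates
print(pi_hat, rmse)
```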

10 Introduction: Least Squares estimation is used to fit the parameters of a linear and static model (a model that mathematically describes the inputs/outputs of the system). The model is never exact: for the same input signals, there will be a difference between the outputs of the model and the true system outputs: modeling errors. Remark: these errors may be considered as deterministic or stochastic variables. Both formulations are equivalent, as long as these errors are completely unpredictable and not correlated with the inputs.

11 LS Formulation
[Figure: elementary least-squares model structure; the input u and the error e enter the system's model, which produces the output y.]
It is assumed that the output of the real system may be approximated by the output of the system's model with some model error e, according to the linear equation:
y(k) = h^T(u(k)) π + e(k)
with k ∈ [1, ..., r] the discrete-time instant.

12 In the equation y(k) = h^T(u(k)) π + e(k):
k: index of discrete time (discrete-time instant k),
u(k) ∈ R^m: input vector,
y(k) ∈ R: output signal (measurement, scalar),
π ∈ R^q: vector of the q unknown parameters (those we want to estimate),
h(·) ∈ R^q: regressor; depends on u in a nonlinear but algebraic way,
e(k): error (scalar).
Typically, there are more measurements than unknown parameters: r ≫ q.

13 LS Objective: estimate the vector of unknown parameters π ∈ R^q such that the model error e is minimized. In order to do that, let us formulate the problem in matrix form (derived on the blackboard during the class), together with an example.

15 Least-squares solution and comments
π_LS = [H^T W H]^{-1} H^T W ỹ
The regression matrix H must have full column rank, i.e., all q parameters (π_1, π_2, ..., π_q) are required to explain the data.
Moore-Penrose inverse: M^+ = (M^T M)^{-1} M^T, for M ∈ R^{r×q}, r > q, rank{M} = q.
A numerical sketch of this closed-form solution follows below.
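
A minimal numpy sketch of the closed-form weighted LS solution, on made-up data; np.linalg.solve is used rather than forming the inverse explicitly, which is numerically preferable.

```python
import numpy as np

def weighted_ls(H, y, W):
    """Closed-form weighted LS: pi_LS = (H^T W H)^{-1} H^T W y.
    H must have full column rank; solve() avoids forming the inverse."""
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

# Tiny made-up example with r = 5 measurements and q = 2 parameters.
H = np.column_stack([np.ones(5), np.arange(5.0)])  # regression matrix, full column rank
y = np.array([0.1, 1.9, 4.1, 6.0, 8.1])            # measurement vector y~
W = np.eye(5)            # equal weighting; another PSD W would trust some samples more
print(weighted_ls(H, y, W))   # close to intercept 0, slope 2 for these data
```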

16 Least-squares solution and comments
If the error e is an uncorrelated white noise signal with mean value 0 and variance σ^2, then:
1. the expected value of the parameter estimate π_LS is equal to its true value, E(π_LS) = π_true (of course, only if the model perfectly describes the true system);
2. covariance matrix: Σ = σ^2 (H^T W H)^{-1}.
Both properties can be checked by simulation, as sketched below.
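
A small Monte-Carlo check of these two properties (with W = I and assumed values for H, π_true and σ):

```python
import numpy as np

rng = np.random.default_rng(3)
H = np.column_stack([np.ones(50), np.linspace(0.0, 1.0, 50)])  # fixed regression matrix
pi_true, sigma = np.array([1.0, -2.0]), 0.1                    # assumed truth and noise level

est = []
for _ in range(5000):
    y = H @ pi_true + sigma * rng.standard_normal(50)   # white, zero-mean noise e
    est.append(np.linalg.lstsq(H, y, rcond=None)[0])
est = np.array(est)

print(est.mean(axis=0))                     # ~ pi_true: the estimate is unbiased
print(np.cov(est.T))                        # ~ sigma^2 (H^T H)^{-1} ...
print(sigma**2 * np.linalg.inv(H.T @ H))    # ... the two matrices should nearly match
```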

17 Least Squares Solution: geometric interpretation
Particular case: q = 2, r = 3. The result of the LS identification can be interpreted geometrically: the columns of H define the directions (projection vectors) that span a plane (defined here by the two vectors h_1, h_2), and therefore ẽ_LS is perpendicular to that plane.
ỹ = H π_LS + ẽ_LS
ỹ = [h_1 h_2] [π_LS,1, π_LS,2]^T + ẽ_LS
ỹ = π_LS,1 h_1 + π_LS,2 h_2 + ẽ_LS

18 [Figure: the measurement vector ỹ is decomposed into its projection π_LS,1 h_1 + π_LS,2 h_2 onto the plane spanned by h_1 and h_2, plus the residual ẽ_LS orthogonal to that plane. This orthogonality is checked numerically below.]
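
A quick numerical check of this orthogonality, H^T ẽ_LS = 0, for random data with q = 2, r = 3 (an assumed example):

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((3, 2))   # q = 2 columns h1, h2 spanning a plane in R^3
y = rng.standard_normal(3)        # measurement vector y~
pi_ls, *_ = np.linalg.lstsq(H, y, rcond=None)
e_ls = y - H @ pi_ls              # residual e~_LS
print(H.T @ e_ls)                 # ~ [0, 0]: e~_LS is orthogonal to h1 and h2
```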

20 Iterative Least Squares: up to now, a batch-like approach has been assumed (batch-like approach: 1. all measurements are made, 2. the data are organized in the LS problem formulation, 3. the LS solution is computed once):
π_LS = [H^T W H]^{-1} H^T W ỹ
Problems:
1. The computation of the matrix inverse is the most time-consuming step.
2. Once r measurements have been taken and a solution has been computed, it is numerically very inefficient to repeat the full matrix inversion procedure when an additional measurement becomes available.

21 Instead, an iterative solution of the form
π_LS(r+1) = f(π_LS(r), y(r+1)),
initialized by π_LS(0) = E{π}, would be much more efficient. How do we build up a recursive least-squares algorithm?

22 Recursive LS Formulation
1. Start: π_LS = [H^T W H]^{-1} H^T W ỹ
2. Simplification: consider the weighting matrix simply as W = I (the extension with W is easily possible):
π_LS = [H^T H]^{-1} H^T ỹ
3. Formulate the matrix products as sums:
π_LS(r) = [Σ_{k=1}^{r} h(k) h^T(k)]^{-1} Σ_{k=1}^{r} h(k) y(k)
4. Use the matrix inversion lemma.

26 Matrix Inversion Lemma
Suppose M ∈ R^{n×n} is a regular matrix (det(M) ≠ 0), and v ∈ R^n is a column vector which satisfies the condition 1 + v^T M^{-1} v ≠ 0. In this case:
[M + v v^T]^{-1} = M^{-1} − (1 / (1 + v^T M^{-1} v)) M^{-1} v v^T M^{-1}
Remarks:
Proof by inspection: multiply from the left with M + v v^T.
Main advantage of this lemma: no matrix inversion other than M^{-1} is needed; the inversion of the new matrix M + v v^T may be carried out very efficiently. A numerical check follows below.
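
A numerical verification of the lemma on random data (the matrix and vector below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
M = A @ A.T + 4.0 * np.eye(4)     # regular (here even positive definite) matrix
v = rng.standard_normal(4)

Minv = np.linalg.inv(M)           # the only explicit inversion needed
lhs = np.linalg.inv(M + np.outer(v, v))
rhs = Minv - (Minv @ np.outer(v, v) @ Minv) / (1.0 + v @ Minv @ v)
print(np.max(np.abs(lhs - rhs)))  # ~ 1e-15: both sides agree
```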

27 π_LS(r) = [Σ_{k=1}^{r} h(k) h^T(k)]^{-1} Σ_{k=1}^{r} h(k) y(k)
To simplify the notation, a matrix Ω is defined as:
Ω(r) = [Σ_{k=1}^{r} h(k) h^T(k)]^{-1}
Then compute Ω(r+1):
Ω(r+1) = [Σ_{k=1}^{r+1} h(k) h^T(k)]^{-1} = [Σ_{k=1}^{r} h(k) h^T(k) + h(r+1) h^T(r+1)]^{-1}

28 Ω(r+1) = [Σ_{k=1}^{r} h(k) h^T(k) + h(r+1) h^T(r+1)]^{-1}
Using the inversion lemma [M + v v^T]^{-1} = M^{-1} − (1 / (1 + v^T M^{-1} v)) M^{-1} v v^T M^{-1} gives the recursive formulation of the matrix inverse:
Ω(r+1) = Ω(r) − (1 / (1 + c(r+1))) Ω(r) h(r+1) h^T(r+1) Ω(r)
where c(r+1) = h^T(r+1) Ω(r) h(r+1) (scalar).

29 π_LS(r) = Ω(r) Σ_{k=1}^{r} h(k) y(k)
How to compute the estimate recursively?
π_LS(r+1) = Ω(r+1) Σ_{k=1}^{r+1} h(k) y(k)
= [Ω(r) − (1 / (1 + c(r+1))) Ω(r) h(r+1) h^T(r+1) Ω(r)] (Σ_{k=1}^{r} h(k) y(k) + h(r+1) y(r+1))

30 Expanding the product yields four terms:
π_LS(r+1) = Ω(r) Σ_{k=1}^{r} h(k) y(k)   [this is π_LS(r)]
+ Ω(r) h(r+1) y(r+1)
− (1 / (1 + c(r+1))) Ω(r) h(r+1) h^T(r+1) Ω(r) Σ_{k=1}^{r} h(k) y(k)   [the trailing factor is π_LS(r)]
− (1 / (1 + c(r+1))) Ω(r) h(r+1) h^T(r+1) Ω(r) h(r+1) y(r+1)   [here h^T(r+1) Ω(r) h(r+1) = c(r+1)]
where the last term is simplified using c(r+1) / (1 + c(r+1)) = 1 − 1 / (1 + c(r+1)).

31 Collecting terms:
π_LS(r+1) = π_LS(r) + Ω(r) h(r+1) y(r+1) − (1 / (1 + c(r+1))) Ω(r) h(r+1) h^T(r+1) π_LS(r) − (1 − 1 / (1 + c(r+1))) Ω(r) h(r+1) y(r+1)
π_LS(r+1) = π_LS(r) − (1 / (1 + c(r+1))) Ω(r) h(r+1) h^T(r+1) π_LS(r) + (1 / (1 + c(r+1))) Ω(r) h(r+1) y(r+1)

32 Recursive computation of the parameter vector π_LS:
π_LS(r+1) = π_LS(r) + (1 / (1 + c(r+1))) Ω(r) h(r+1) (y(r+1) − h^T(r+1) π_LS(r))
with the recursive update of the gain matrix Ω:
Ω(r+1) = Ω(r) − (1 / (1 + c(r+1))) Ω(r) h(r+1) h^T(r+1) Ω(r)
where c(r+1) = h^T(r+1) Ω(r) h(r+1) (scalar),
and the initialization π_LS(0), Ω(0). A minimal implementation follows below.
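
A minimal Python implementation of these update equations (W = I). The initialization π_LS(0) = 0, Ω(0) = αI with large α is a common heuristic and an assumption here, not part of the slides:

```python
import numpy as np

class RecursiveLS:
    """Recursive least squares (W = I) following the update equations above.
    Initialization pi(0) = 0, Omega(0) = alpha*I with large alpha is a
    common heuristic (assumption)."""

    def __init__(self, q, alpha=1e6):
        self.pi = np.zeros(q)            # parameter estimate pi_LS(r)
        self.Omega = alpha * np.eye(q)   # gain matrix Omega(r)

    def update(self, h, y):
        Oh = self.Omega @ h
        c = h @ Oh                       # c(r+1) = h^T Omega h (scalar)
        self.pi = self.pi + Oh * (y - h @ self.pi) / (1.0 + c)
        self.Omega = self.Omega - np.outer(Oh, Oh) / (1.0 + c)
        return self.pi

# Usage on streaming made-up data: recover pi_true = [0.5, 1.5].
rng = np.random.default_rng(5)
rls, pi_true = RecursiveLS(q=2), np.array([0.5, 1.5])
for _ in range(500):
    u = rng.uniform(-1.0, 1.0)
    h = np.array([1.0, u])               # regressor h(u(k))
    rls.update(h, h @ pi_true + 0.01 * rng.standard_normal())
print(rls.pi)                            # ~ [0.5, 1.5]
```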

33 The recursive computation of the parameter vector
π_LS(r+1) = π_LS(r) + (1 / (1 + c(r+1))) Ω(r) h(r+1) (y(r+1) − h^T(r+1) π_LS(r))
can be rewritten as:
π_LS(r+1) = π_LS(r) + δ(r+1) (y(r+1) − h^T(r+1) π_LS(r))
Comments on the recursive formulation: the term δ(r+1) = (1 / (1 + c(r+1))) Ω(r) h(r+1) is a vector indicating the correction direction, applied to the innovation term (or prediction error). It is interesting to note that the correction direction does not depend on the magnitude of the prediction error.

35 Exponential Forgetting
New error weighting for the recursive case:
ε(r) = Σ_{k=1}^{r} λ^{r−k} (y(k) − h^T(k) π_LS(k))^2,  λ < 1
This introduces an exponential forgetting process: older errors have a smaller influence on the result of the parameter estimation, so the algorithm can cope with slowly varying parameters.
Update equations:
π_LS(r+1) = π_LS(r) + (1 / (λ + c(r+1))) Ω(r) h(r+1) (y(r+1) − h^T(r+1) π_LS(r))
Ω(r+1) = (1/λ) Ω(r) (I − (1 / (λ + c(r+1))) h(r+1) h^T(r+1) Ω(r))
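
The forgetting-factor variant changes only the two update lines of the RLS sketch above; a single step is sketched below, with λ = 0.98 as an assumed typical value:

```python
import numpy as np

def rls_forgetting_step(pi, Omega, h, y, lam=0.98):
    """One RLS step with exponential forgetting, following the update
    equations above; lambda = 0.98 is an assumed value, not from the slides."""
    Oh = Omega @ h
    c = h @ Oh                                       # c(r+1) = h^T Omega h
    pi_new = pi + Oh * (y - h @ pi) / (lam + c)
    Omega_new = (Omega - np.outer(Oh, Oh) / (lam + c)) / lam
    return pi_new, Omega_new
```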

37 Kaczmarz's projection algorithm
Each new prediction error e(r+1) = y(r+1) − h^T(r+1) π(r) contains new information on the parameters π only in the direction of h(r+1). Therefore, the π(r+1) is sought which requires the smallest possible change π(r+1) − π(r) to explain the new observation.
Cost function to minimize:
J(π) = (1/2) (π(r+1) − π(r))^T (π(r+1) − π(r)) + µ (y(r+1) − h^T(r+1) π(r+1))
Necessary conditions for the minimum:
∂J/∂π(r+1) = 0,  ∂J/∂µ = 0

38 Solving these linear equations for π(r+1) and µ gives:
π(r+1) = π(r) + (h(r+1) / (h^T(r+1) h(r+1))) (y(r+1) − h^T(r+1) π(r))
Usually this solution is modified as
π(r+1) = π(r) + (γ h(r+1) / (λ + h^T(r+1) h(r+1))) (y(r+1) − h^T(r+1) π(r)),  0 < γ < 2, 0 < λ < 1
to achieve the desired convergence and forgetting.
Discussion:
Kaczmarz's projection algorithm requires less computational effort than regular LS.
It converges much more slowly than the regular LS algorithm.
The choice of algorithm depends on the resources at hand and the convergence speed requirements.
A minimal sketch is given below.
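
A minimal sketch of the modified Kaczmarz update (the γ and λ values are assumptions within the stated ranges); note in the usage how it needs noticeably more steps than RLS to approach the true parameters:

```python
import numpy as np

def kaczmarz_step(pi, h, y, gamma=1.0, lam=0.1):
    """Modified Kaczmarz projection step; gamma and lam are assumed values
    within the ranges 0 < gamma < 2, 0 < lambda < 1 given above."""
    return pi + gamma * h * (y - h @ pi) / (lam + h @ h)

# Usage on the same kind of streaming data as the RLS example: many more
# steps are needed to get close to pi_true = [0.5, 1.5].
rng = np.random.default_rng(6)
pi, pi_true = np.zeros(2), np.array([0.5, 1.5])
for _ in range(5000):
    u = rng.uniform(-1.0, 1.0)
    h = np.array([1.0, u])
    pi = kaczmarz_step(pi, h, h @ pi_true + 0.01 * rng.standard_normal())
print(pi)   # approaches [0.5, 1.5], but more slowly than RLS
```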

39 Next lecture + upcoming exercise
Next lecture: Stability Analysis, Properties of Linear Systems.
Next exercises: Least squares, Parameter identification.
