Kalman Filter and Parameter Identification. Florian Herzog


1 Kalman Filter and Parameter Identification Florian Herzog 2013

2 Continuous-time Kalman Filter

In this chapter, we shall use stochastic processes with independent increments w_1(·) and w_2(·) at the input and the output, respectively, of a dynamic system. We shall switch back and forth between the mathematically precise description of these (normalized) Brownian motions by their increments and the sloppy description by the corresponding white noises v̇(·) and ṙ(·), respectively, which is preferred in engineering circles:

Q^{1/2}(t) dw_1(t) = [v̇(t) − u(t)] dt   (1)
R^{1/2}(t) dw_2(t) = [ṙ(t) − r(t)] dt,   (2)

where Q^{1/2}(t) and R^{1/2}(t) are the volatility parameters. We assume a linear dynamical system, where the inputs u(t) and the measurements y(t) are disturbed by noise.

Stochastic Systems,

3 Continuous-time Kalman Filter

Consider the following linear time-varying dynamic system of order n which is driven by the m-vector-valued white noise v̇(·). Its initial state x(t_0) is a random vector ξ and its p-vector-valued output y(·) is corrupted by the additive white noise ṙ(·).

System description in the mathematically precise form:

dx(t) = A(t)x(t) dt + B(t) dv(t)
      = [A(t)x(t) + B(t)u(t)] dt + B(t)Q^{1/2}(t) dw_1(t)   (3)
x(t_0) = ξ   (4)
y(t) dt = C(t)x(t) dt + dr(t)
        = [C(t)x(t) + r(t)] dt + R^{1/2}(t) dw_2(t)   (5)

4 Continuous-time Kalman Filter

System description in the engineer's form:

ẋ(t) = A(t)x(t) + B(t)v̇(t)   (7)
x(t_0) = ξ   (8)
y(t) = C(t)x(t) + ṙ(t),   (9)

where

ξ : N(x_0, Σ_0)   (10)
v̇(t) : N(u(t), Q(t))   (11)
ṙ(t) : N(r(t), R(t)).   (12)

Note that in the last two lines, we have deleted the factor 1/dt which ought to multiply Q(t) and R(t).

5 Continuous-time Kalman Filter

The filtering problem is stated as follows: Find the filter with the state vector x̂ which is optimal in the following sense:

The state estimate is bias-free: E{x(t) − x̂(t)} ≡ 0.

The error of the state estimate has infimal covariance matrix: Σ_opt(t) ≤ Σ_subopt(t).

6 Continuous-time Kalman Filter

With the above-mentioned assumptions that ξ, v̇, and ṙ are mutually independent and Gaussian, the optimal filter is the following linear dynamic system:

x̂̇(t) = A(t)x̂(t) + B(t)u(t) + H(t)[y(t) − r(t) − C(t)x̂(t)]
      = [A(t) − H(t)C(t)]x̂(t) + B(t)u(t) + H(t)[y(t) − r(t)]   (13)
x̂(t_0) = x_0   (14)

with

H(t) = Σ(t)C^T(t)R^{-1}(t),   (15)

where Σ(t) is the covariance matrix of the error x(t) − x̂(t) of the state estimate x̂(t).

7 Continuous-time Kalman Filter

The covariance matrix Σ(t) of the error x(t) − x̂(t) of the state estimate x̂(t) satisfies the following matrix Riccati differential equation:

Σ̇(t) = A(t)Σ(t) + Σ(t)A^T(t) − Σ(t)C^T(t)R^{-1}(t)C(t)Σ(t) + B(t)Q(t)B^T(t)   (16)
Σ(t_0) = Σ_0.   (17)

The estimation error e(t) = x(t) − x̂(t) satisfies the differential equation

ė = ẋ − x̂̇
  = Ax + Bu + B(v̇ − u) − [A − HC]x̂ − Bu − HCx − H(ṙ − r)
  = [A − HC]e + B(v̇ − u) − H(ṙ − r).   (18)
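As a sketch of how the Riccati equation (16) can be propagated numerically, the following forward-Euler integration drives Σ(t) to its steady state, where the right-hand side of (16) vanishes and the gain (15) becomes constant. All numerical values (A, B, C, Q, R, the step size, and the horizon) are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Illustrative time-invariant system (assumed values)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)

def riccati_rhs(S):
    # Right-hand side of the Riccati equation (16)
    return A @ S + S @ A.T - S @ C.T @ Rinv @ C @ S + B @ Q @ B.T

Sigma = np.eye(2)        # Sigma(t_0) = Sigma_0
dt = 1e-3
for _ in range(50000):   # integrate far enough to reach steady state
    Sigma = Sigma + dt * riccati_rhs(Sigma)

# At steady state the right-hand side vanishes; the gain (15) is constant
residual = riccati_rhs(Sigma)
H = Sigma @ C.T @ Rinv
assert np.max(np.abs(residual)) < 1e-6
```

Forward Euler is the crudest possible integrator; it suffices here because A is stable and the fixed point of the Euler map is exactly a root of (16).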

8 Continuous-time Kalman Filter

The covariance matrix Σ(t) of the state estimation error e(t) is governed by the matrix differential equation

Σ̇ = [A − HC]Σ + Σ[A − HC]^T + BQB^T + HRH^T   (19)
  = AΣ + ΣA^T + BQB^T − ΣC^T R^{-1}CΣ + [H − ΣC^T R^{-1}] R [H − ΣC^T R^{-1}]^T
  ≥ AΣ + ΣA^T + BQB^T − ΣC^T R^{-1}CΣ.   (20)

Obviously, Σ̇(t) is infimized at all times t for

H(t) = Σ(t)C^T(t)R^{-1}(t).   (21)

9 Continuous-time Kalman Filter

Given (19) for the covariance matrix Σ(t):

Σ̇ = [A − HC]Σ + Σ[A − HC]^T + BQB^T + HRH^T.

Its first differential with respect to H, at some value of H, with increment dH can be formulated as follows:

dΣ̇(H, dH) = ∂Σ̇(H)/∂H · dH
           = −dH CΣ − ΣC^T dH^T + dH RH^T + HR dH^T
           = U{[HR − ΣC^T] dH^T},   (22)

where ^T is the matrix transposing operator and U is the operator which adds the transposed matrix to its matrix argument.

10 Continuous-time Kalman Filter

Consider the following stochastic system of first order with the random initial condition ξ : N(x_0, Σ_0) and the uncorrelated white noises v̇ : N(0, Q) and ṙ : N(0, R):

ẋ(t) = ax(t) + b[u(t) + v̇(t)]
x(0) = ξ
y(t) = x(t) + ṙ(t).

Find the time-invariant Kalman filter! The resulting time-invariant Kalman filter is described by the following equations:

x̂̇(t) = −√(a² + b²Q/R) x̂(t) + b u(t) + (a + √(a² + b²Q/R)) y(t)
x̂(0) = x_0.
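The closed-form result above can be checked numerically: the steady-state error variance is the positive root of the scalar algebraic Riccati equation 0 = 2aΣ + b²Q − Σ²/R, and the gain is H = Σ/R. The parameter values below are assumed purely for illustration:

```python
import math

# Illustrative parameter values (assumed, not from the text)
a, b, Q, R = -1.0, 1.0, 2.0, 0.5

# Positive root of the scalar ARE: 0 = 2 a Sigma + b^2 Q - Sigma^2 / R
Sigma = R * (a + math.sqrt(a**2 + b**2 * Q / R))

# Time-invariant gain H = Sigma C / R with C = 1
H = Sigma / R

# Sigma solves the ARE ...
assert abs(2 * a * Sigma + b**2 * Q - Sigma**2 / R) < 1e-12
# ... and the filter pole a - H equals -sqrt(a^2 + b^2 Q / R), hence stable
assert abs((a - H) + math.sqrt(a**2 + b**2 * Q / R)) < 1e-12
```

Note that the filter pole −√(a² + b²Q/R) is stable regardless of the sign of a.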

11 Continuous/discrete-time Kalman Filter

The continuous-time measurement (9), which is corrupted by the white noise error ṙ(t) with infinite covariance,

y(t) = C(t)x(t) + ṙ(t),

must be replaced by an averaged discrete-time measurement

y_k = C_k x(t_k) + r̃_k   (23)
r̃_k : N(r_k, R_k)   (24)
Cov(r̃_i, r̃_j) = R_i δ_ij.   (25)

12 Continuous/discrete-time Kalman Filter

At any sampling time t_k we denote the state estimate by x̂(t_k|t_{k−1}) and the error covariance matrix by Σ(t_k|t_{k−1}) before the new measurement y_k has been processed, and write x̂(t_k|t_k) and Σ(t_k|t_k) for the state estimate and the corresponding error covariance matrix after the new measurement y_k has been processed. As a consequence, the continuous-time/discrete-time Kalman filter alternatingly performs update steps at the sampling times t_k, processing the latest measurement information y_k, and open-loop extrapolation steps between the times t_k and t_{k+1}, with no measurement information available:

13 Continuous/discrete-time Kalman Filter

The Continuous-Time/Discrete-Time Kalman Filter

Update at time t_k:

x̂(t_k|t_k) = x̂(t_k|t_{k−1}) + Σ(t_k|t_{k−1})C_k^T [C_k Σ(t_k|t_{k−1})C_k^T + R_k]^{-1} [y_k − r_k − C_k x̂(t_k|t_{k−1})]   (26)
Σ(t_k|t_k) = Σ(t_k|t_{k−1}) − Σ(t_k|t_{k−1})C_k^T [C_k Σ(t_k|t_{k−1})C_k^T + R_k]^{-1} C_k Σ(t_k|t_{k−1})   (27)

or, equivalently:

Σ^{-1}(t_k|t_k) = Σ^{-1}(t_k|t_{k−1}) + C_k^T R_k^{-1} C_k.   (28)

14 Continuous/discrete-time Kalman Filter

Continuous-time extrapolation from t_k to t_{k+1} with the initial conditions x̂(t_k|t_k) and Σ(t_k|t_k):

x̂̇(t|t_k) = A(t)x̂(t|t_k) + B(t)u(t)   (29)
Σ̇(t|t_k) = A(t)Σ(t|t_k) + Σ(t|t_k)A^T(t) + B(t)Q(t)B^T(t).   (30)

Assuming that the Kalman filter starts with an update at the initial time t_0, this Kalman filter is initialized at t_0 as follows:

x̂(t_0|t_{−1}) = x_0   (31)
Σ(t_0|t_{−1}) = Σ_0.   (32)

15 Discrete-time Kalman Filter

We consider the following discrete-time stochastic system:

x_{k+1} = F_k x_k + G_k ṽ_k   (33)
x_0 = ξ   (34)
y_k = C_k x_k + r̃_k   (35)
ξ : N(x_0, Σ_0)   (36)
ṽ_k : N(u_k, Q_k)   (37)
Cov(ṽ_i, ṽ_j) = Q_i δ_ij   (38)
r̃_k : N(r_k, R_k)   (39)
Cov(r̃_i, r̃_j) = R_i δ_ij   (40)
ξ, ṽ, r̃ : mutually independent.   (41)

16 Discrete-time Kalman Filter

We have the following (approximate) zero-order-hold-equivalence correspondences to the continuous-time stochastic system:

F_k = Φ(t_{k+1}, t_k)   (transition matrix)   (42)
G_k = ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, t)B(t) dt   (zero-order-hold equivalence)   (43)
C_k = C(t_k)   (44)
Q_k = Q(t_k)/(t_{k+1} − t_k)   (45)
R_k = R(t_k)/Δt_k,   Δt_k = averaging time at time t_k.   (46)
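For a time-invariant system the correspondences (42) and (43) can be evaluated explicitly. The double integrator below is an assumed example (not from the text); since A² = 0 there, the transition matrix is Φ(s) = I + As exactly, and the integral in (43) is approximated on a fine grid and compared against the closed-form result:

```python
import numpy as np

# Assumed example: double integrator, A = [[0,1],[0,0]], B = [[0],[1]]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
dt = 0.1

def Phi(s):
    # A is nilpotent (A @ A = 0), so the matrix exponential terminates
    return np.eye(2) + A * s

# F_k = Phi(t_{k+1}, t_k), eq. (42)
F = Phi(dt)

# G_k = integral of Phi(t_{k+1}, t) B over the sampling interval, eq. (43),
# with the substitution s = t_{k+1} - t and a composite trapezoidal rule
s_grid = np.linspace(0.0, dt, 1001)
vals = np.stack([Phi(s) @ B for s in s_grid])
G = np.sum(0.5 * (vals[1:] + vals[:-1]), axis=0) * (s_grid[1] - s_grid[0])

# Closed-form ZOH result for the double integrator
assert np.allclose(F, [[1.0, dt], [0.0, 1.0]])
assert np.allclose(G, [[dt**2 / 2], [dt]], atol=1e-8)
```

For a general time-invariant A one would use a matrix exponential routine instead of the truncated series.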

17 Discrete-time Kalman Filter

The Discrete-Time Kalman Filter

Update at time t_k:

x̂_{k|k} = x̂_{k|k−1} + Σ_{k|k−1}C_k^T [C_k Σ_{k|k−1}C_k^T + R_k]^{-1} [y_k − r_k − C_k x̂_{k|k−1}]   (47)
Σ_{k|k} = Σ_{k|k−1} − Σ_{k|k−1}C_k^T [C_k Σ_{k|k−1}C_k^T + R_k]^{-1} C_k Σ_{k|k−1}   (48)

or, equivalently:

Σ_{k|k}^{-1} = Σ_{k|k−1}^{-1} + C_k^T R_k^{-1} C_k.   (49)

18 Discrete-time Kalman Filter

Extrapolation from t_k to t_{k+1}:

x̂_{k+1|k} = F_k x̂_{k|k} + G_k u_k   (50)
Σ_{k+1|k} = F_k Σ_{k|k} F_k^T + G_k Q_k G_k^T.   (51)

Initialization at time t_0:

x̂_{0|−1} = x_0   (52)
Σ_{0|−1} = Σ_0.   (53)
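Equations (47)-(48) and (50)-(51) translate directly into code. The sketch below runs one update-extrapolation cycle on a scalar system; all numerical values are illustrative, and r_k = 0, u_k = 0 are assumed for brevity:

```python
import numpy as np

def kf_update(x_pred, P_pred, y, C, R, r=0.0):
    """Update (47)-(48): incorporate the measurement y_k."""
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_post = x_pred + K @ (y - r - C @ x_pred)
    P_post = P_pred - K @ C @ P_pred
    return x_post, P_post

def kf_predict(x_post, P_post, F, G, Q, u):
    """Extrapolation (50)-(51): propagate to t_{k+1}."""
    return F @ x_post + G @ u, F @ P_post @ F.T + G @ Q @ G.T

# One cycle on a scalar system (illustrative values)
F = np.array([[0.9]]); G = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
x, P = np.array([0.0]), np.array([[1.0]])          # x_{0|-1}, Sigma_{0|-1}

x, P = kf_update(x, P, y=np.array([1.2]), C=C, R=R)
assert np.isclose(P[0, 0], 1.0 / 3.0)              # matches the form (49)
x, P = kf_predict(x, P, F, G, Q, u=np.array([0.0]))
assert np.isclose(x[0], 0.72) and np.isclose(P[0, 0], 0.37)
```

For ill-conditioned problems one would replace the explicit inverse with a solve, or use a square-root (factorized) covariance update; this sketch keeps the textbook form.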

19 Extended Kalman Filter

Since the linearized problem may not be a good approximation, we stick to the following philosophy: Where it does not hurt: analyze the nonlinear system. Where it hurts: use linearization.

We consider the following nonlinear stochastic system:

ẋ(t) = f(x(t), u(t), t) + B(t)v̇(t)   (54)
x(t_0) = ξ   (55)
y(t) = g(x(t), t) + ṙ(t)   (56)
ξ : N(x_0, Σ_0)   (57)
v̇ : N(v(t), Q(t))   (58)
ṙ : N(r(t), R(t)).   (59)

20 Extended Kalman Filter

The extended Kalman filter

Dynamics of the state estimate:

x̂̇(t) = f(x̂(t), u(t), t) + B(t)v(t) + H(t)[y(t) − r(t) − g(x̂(t), t)]   (60)
x̂(t_0) = x_0   (61)

with H(t) = Σ(t)C^T(t)R^{-1}(t). The error covariance matrix Σ(t) must be calculated in real time using the following matrix Riccati differential equation:

Σ̇(t) = A(t)Σ(t) + Σ(t)A^T(t) − Σ(t)C^T(t)R^{-1}(t)C(t)Σ(t) + B(t)Q(t)B^T(t)   (62)
Σ(t_0) = Σ_0.   (63)

21 Extended Kalman Filter

The matrices A(t) and C(t) correspond to the dynamics matrix and the output matrix, respectively, of the linearization of the nonlinear system around the estimated trajectory:

A(t) = ∂f(x̂(t), u(t), t)/∂x   (64)
C(t) = ∂g(x̂(t), t)/∂x.   (65)
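When analytic Jacobians are inconvenient, (64) and (65) are often approximated by finite differences at the current estimate. The pendulum-type model below is an assumed example, not from the text:

```python
import numpy as np

# Assumed nonlinear model: f(x, u) = [x2, -sin(x1) + u], g(x) = x1
def f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u])

def g(x):
    return np.array([x[0]])

def jacobian(fun, x, eps=1e-6):
    """Forward-difference Jacobian of fun at x (column i = d fun / d x_i)."""
    fx = fun(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (fun(x + dx) - fx) / eps
    return J

x_hat, u = np.array([0.3, -0.1]), 0.0
A = jacobian(lambda x: f(x, u), x_hat)   # A(t) = df/dx at the estimate, (64)
C = jacobian(g, x_hat)                   # C(t) = dg/dx at the estimate, (65)

assert np.allclose(A, [[0.0, 1.0], [-np.cos(0.3), 0.0]], atol=1e-4)
assert np.allclose(C, [[1.0, 0.0]], atol=1e-4)
```

Because the Jacobians are evaluated along x̂(t), they must be recomputed continuously as the estimate evolves.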

22 Extended Kalman Filter

Because the dynamics of the error covariance matrix Σ(t) correspond to the linearized system, we face the following problems:

This filter is not optimal.
The state estimate will be biased.
The reference point for the linearization is questionable.
The matrix Σ(t) is only an approximation of the state error covariance matrix.
The state vectors x(t) and x̂(t) are not Gaussian even if ξ, v̇, and ṙ are.

23 Extended Kalman Filter

Again, the continuous-time measurement corrupted with the white noise ṙ with infinite covariance,

y(t) = g(x(t), t) + ṙ(t),   (66)

must be replaced by an averaged discrete-time measurement

y_k = g(x(t_k), t_k) + r̃_k   (67)
r̃_k : N(r_k, R_k)   (68)
Cov(r̃_i, r̃_j) = R_i δ_ij.   (69)

24 Continuous-time/discrete-time extended Kalman filter

Update at time t_k:

x̂(t_k|t_k) = x̂(t_k|t_{k−1}) + Σ(t_k|t_{k−1})C_k^T [C_k Σ(t_k|t_{k−1})C_k^T + R_k]^{-1} [y_k − r_k − g(x̂(t_k|t_{k−1}), t_k)]   (70)
Σ(t_k|t_k) = Σ(t_k|t_{k−1}) − Σ(t_k|t_{k−1})C_k^T [C_k Σ(t_k|t_{k−1})C_k^T + R_k]^{-1} C_k Σ(t_k|t_{k−1})   (71)–(72)

Extrapolation between t_k and t_{k+1}:

x̂̇(t|t_k) = f(x̂(t|t_k), u(t), t) + B(t)v(t)   (73)
Σ̇(t|t_k) = A(t)Σ(t|t_k) + Σ(t|t_k)A^T(t) + B(t)Q(t)B^T(t)   (74)

25 Continuous-time/discrete-time extended Kalman filter

Again, the matrices A(t) and C_k are the dynamics matrix and the output matrix, respectively, of the nonlinear system linearized around the estimated trajectory:

A(t) = ∂f(x̂(t), u(t), t)/∂x   (75)
C_k = ∂g(x̂(t_k|t_{k−1}), t_k)/∂x.   (76)

Initialization at t_0:

x̂(t_0|t_{−1}) = x_0   (77)
Σ(t_0|t_{−1}) = Σ_0.   (78)

26 Kalman Filter and Parameter Identification

The system and the measurement equations in discrete time are given by

x_{k+1} = F_k x_k + G_k u_k + [G_k Q_k^{1/2}] w_k^{(1)}   (79)
y_k = C_k x_k + r_k + [R_k^{1/2}] w_k^{(2)}.   (80)

The update equations of the Kalman Filter are

x̂_{k|k} = x̂_{k|k−1} + Σ_{k|k−1}C_k^T [C_k Σ_{k|k−1}C_k^T + R_k]^{-1} v_k   (81)
v_k = y_k − r_k − C_k x̂_{k|k−1}   (82)
Σ_{k|k} = Σ_{k|k−1} − Σ_{k|k−1}C_k^T [C_k Σ_{k|k−1}C_k^T + R_k]^{-1} C_k Σ_{k|k−1},   (83)

where v_k denotes the prediction error.

27 Kalman Filter and Parameter Identification

The prediction equations are given by

x̂_{k|k−1} = F_{k−1} x̂_{k−1|k−1} + G_{k−1} u_{k−1}   (84)
Σ_{k|k−1} = F_{k−1} Σ_{k−1|k−1} F_{k−1}^T + G_{k−1} Q_{k−1} G_{k−1}^T.   (85)

The algorithm is carried out in two distinct parts:

Prediction Step: The first step consists of forming an optimal predictor of the next observation, given all the information available up to time t_{k−1}. This results in a so-called a priori estimate for time t_k.

Updating Step: The a priori estimate is updated with the new information arriving at time t_k, which is combined with the already available information from time t_{k−1}. The result of this step is the filtered estimate, or the a posteriori estimate.

28 Kalman Filter and Parameter Identification

The prediction error is defined by

v_k = y_k − ŷ_{k|k−1} = y_k − E[y_k | F_{k−1}] = y_k − r_k − C_k x̂_{k|k−1}

and its covariance matrix can be calculated by

K_{k|k−1} = Cov(v_k | F_{k−1}) = E[v_k v_k^T | F_{k−1}]
          = C_k E[x̃_{k|k−1} x̃_{k|k−1}^T | F_{k−1}] C_k^T + E[w_k^{(2)} w_k^{(2)T} | F_{k−1}]
          = C_k Σ_{k|k−1} C_k^T + R_k,

where x̃_{k|k−1} = x_k − x̂_{k|k−1} denotes the state prediction error. This result is important for forming the log-likelihood estimator.

29 Kalman Filter and Parameter Identification

We start to examine the appropriate log-likelihood function within our framework of the Kalman Filter. Let ψ ∈ R^d be the vector of unknown parameters, which belongs to the admissible parameter space Ψ. The various system matrices of the state space models, as well as the variances of the stochastic processes given in the previous section, depend on ψ. The likelihood function of the state space model is given by the joint density of the observational data y = (y_N, y_{N−1}, ..., y_1),

l(y, ψ) = p(y_N, y_{N−1}, ..., y_1; ψ),   (86)

which reflects how likely it would have been to have observed the data if ψ were the true values of the parameters.

30 Kalman Filter and Parameter Identification

Using the definition of conditional probability and employing Bayes' theorem recursively, we now write the joint density as a product of conditional densities,

l(y, ψ) = p(y_N | y_{N−1}, ..., y_1; ψ) · ... · p(y_k | y_{k−1}, ..., y_1; ψ) · ... · p(y_1; ψ),   (87)

where we approximate the initial density function p(y_1; ψ) by p(y_1 | y_0; ψ). Furthermore, since we deal with a Markovian system, future values y_l with l > k depend on (y_k, y_{k−1}, ..., y_1) only through the current value y_k, so the expression in (87) can be rewritten to depend only on the last observed values; it reduces to

l(y, ψ) = p(y_N | y_{N−1}; ψ) · ... · p(y_k | y_{k−1}; ψ) · ... · p(y_1; ψ).   (88)

31 Kalman Filter and Parameter Identification

For the purpose of estimating the parameter vector ψ, we express the likelihood function in terms of the prediction error. Since the variance of the prediction error v_k is the same as the conditional variance of y_k, i.e.

Cov[v_k | F_{k−1}] = Cov[y_k | F_{k−1}],   (89)

we are able to state the density function. The density function p(y_k | y_{k−1}; ψ) is a Gaussian (normal) distribution with conditional mean

E[y_k | F_{k−1}] = r_k + C_k x̂_{k|k−1}   (90)

and conditional covariance matrix

Cov[y_k | F_{k−1}] = K_{k|k−1}.   (91)

32 Kalman Filter and Parameter Identification

The density function of an n-dimensional normal distribution can be written in matrix notation as

p(x) = 1/((2π)^{n/2} |Σ|^{1/2}) e^{−(1/2)(x−µ)^T Σ^{-1}(x−µ)},

where µ denotes the mean value and Σ the covariance matrix. In our case, x − µ is replaced by y_k − r_k − C_k x̂_{k|k−1} = v_k and the covariance matrix is K_{k|k−1}. The probability density function thus takes the form

p(y_k | y_{k−1}; ψ) = 1/((2π)^{p/2} |K_{k|k−1}|^{1/2}) e^{−(1/2) v_k^T K_{k|k−1}^{-1} v_k},   (92)

where v_k ∈ R^p.
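Equation (92) can be evaluated directly from the innovation v_k and its covariance K_{k|k−1}. A minimal sketch, with a 1-D sanity check against the scalar normal density (the numeric values are assumed):

```python
import numpy as np

def innovation_density(v, K):
    """Multivariate normal density (92) at innovation v with covariance K."""
    p = v.size
    quad = v @ np.linalg.solve(K, v)           # v^T K^{-1} v
    norm = np.sqrt((2.0 * np.pi) ** p * np.linalg.det(K))
    return np.exp(-0.5 * quad) / norm

# 1-D sanity check: (92) reduces to the scalar normal density
v = np.array([0.5])
K = np.array([[2.0]])
expected = np.exp(-0.5 * 0.25 / 2.0) / np.sqrt(2.0 * np.pi * 2.0)
assert np.isclose(innovation_density(v, K), expected)
```

In a likelihood routine one works with the logarithm of (92) instead, which avoids underflow when many densities are multiplied.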

33 Kalman Filter and Parameter Identification

Taking the logarithm of (92) yields

ln p(y_k | y_{k−1}; ψ) = −(p/2) ln(2π) − (1/2) ln|K_{k|k−1}| − (1/2) v_k^T K_{k|k−1}^{-1} v_k,   (93)

which gives us the whole log-likelihood function as the sum

L(y, ψ) = −(1/2) ∑_{k=1}^N [ p ln(2π) + ln|K_{k|k−1}| + v_k^T K_{k|k−1}^{-1} v_k ].   (94)

In order to estimate the unknown parameters from the log-likelihood function in (94), we employ an appropriate optimization routine which maximizes L(y, ψ) with respect to ψ.

34 Kalman Filter and Parameter Identification

Given the simplicity of the innovation form of the likelihood function, we can find the values of the conditional expectation of the prediction error and the conditional covariance matrix for every given parameter vector ψ using the Kalman Filter algorithm. Hence, we are able to calculate the value of the conditional log-likelihood function numerically. The parameters are estimated based on the optimization

ψ_ML = arg max_{ψ∈Ψ} L(y, ψ),   (95)

where Ψ denotes the parameter space. The optimization is an unconstrained optimization problem if Ψ = R^d, or a constrained optimization if the parameter space Ψ ⊂ R^d.
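To make (94)-(95) concrete, here is a sketch of the prediction-error likelihood for an assumed scalar model x_{k+1} = f x_k + w_k, y_k = x_k + e_k with ψ = (f, q, r). Everything below (the model, the parameter values, the seed) is illustrative; in practice −L(y, ψ) would be handed to a numerical optimizer over Ψ rather than merely evaluated:

```python
import numpy as np

def neg_log_likelihood(psi, ys):
    """-L(y, psi) from (94) for the assumed scalar model
    x_{k+1} = f x_k + w_k, y_k = x_k + e_k, with psi = (f, q, r)."""
    f, q, r = psi
    x, P = 0.0, 1.0                  # x_{0|-1} and Sigma_{0|-1}
    nll = 0.0
    for y in ys:
        v = y - x                    # prediction error v_k (C = 1, r_k = 0)
        K = P + r                    # K_{k|k-1} = C Sigma C^T + R
        nll += 0.5 * (np.log(2.0 * np.pi) + np.log(K) + v * v / K)
        x, P = x + (P / K) * v, P - P * P / K   # update, cf. (81), (83)
        x, P = f * x, f * f * P + q             # prediction, cf. (84), (85)
    return nll

# Simulate data from known parameters and compare likelihood values
rng = np.random.default_rng(0)
x_true, ys = 0.0, []
for _ in range(200):
    x_true = 0.8 * x_true + rng.normal(0.0, np.sqrt(0.2))
    ys.append(x_true + rng.normal(0.0, np.sqrt(0.5)))

nll_true = neg_log_likelihood((0.8, 0.2, 0.5), ys)
nll_bad = neg_log_likelihood((0.0, 0.01, 0.01), ys)
assert nll_true < nll_bad   # the generating parameters are far more likely
```

Maximizing L (minimizing this function) over ψ with a quasi-Newton or constrained routine yields the ML estimate (95); positivity of q and r is the typical constraint.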



More information

x(n + 1) = Ax(n) and y(n) = Cx(n) + 2v(n) and C = x(0) = ξ 1 ξ 2 Ex(0)x(0) = I

x(n + 1) = Ax(n) and y(n) = Cx(n) + 2v(n) and C = x(0) = ξ 1 ξ 2 Ex(0)x(0) = I A-AE 567 Final Homework Spring 213 You will need Matlab and Simulink. You work must be neat and easy to read. Clearly, identify your answers in a box. You will loose points for poorly written work. You

More information

2 Introduction of Discrete-Time Systems

2 Introduction of Discrete-Time Systems 2 Introduction of Discrete-Time Systems This chapter concerns an important subclass of discrete-time systems, which are the linear and time-invariant systems excited by Gaussian distributed stochastic

More information

State Estimation of Linear and Nonlinear Dynamic Systems

State Estimation of Linear and Nonlinear Dynamic Systems State Estimation of Linear and Nonlinear Dynamic Systems Part I: Linear Systems with Gaussian Noise James B. Rawlings and Fernando V. Lima Department of Chemical and Biological Engineering University of

More information

System Modeling and Identification CHBE 702 Korea University Prof. Dae Ryook Yang

System Modeling and Identification CHBE 702 Korea University Prof. Dae Ryook Yang System Modeling and Identification CHBE 702 Korea University Prof. Dae Ryook Yang 1-1 Course Description Emphases Delivering concepts and Practice Programming Identification Methods using Matlab Class

More information

State Estimation using Moving Horizon Estimation and Particle Filtering

State Estimation using Moving Horizon Estimation and Particle Filtering State Estimation using Moving Horizon Estimation and Particle Filtering James B. Rawlings Department of Chemical and Biological Engineering UW Math Probability Seminar Spring 2009 Rawlings MHE & PF 1 /

More information

The regression equation for the gain bears a striking resemblance to the regression equation for the estimate X t : -1k -1 P t t = Ctr tt 7.

The regression equation for the gain bears a striking resemblance to the regression equation for the estimate X t : -1k -1 P t t = Ctr tt 7. 7.8 The Kalman Filter 307 Yt Figure 7.14 Network for implementing sequential Bayes estimate. The regression equation for the gain bears a striking resemblance to the regression equation for the estimate

More information

1. Find the solution of the following uncontrolled linear system. 2 α 1 1

1. Find the solution of the following uncontrolled linear system. 2 α 1 1 Appendix B Revision Problems 1. Find the solution of the following uncontrolled linear system 0 1 1 ẋ = x, x(0) =. 2 3 1 Class test, August 1998 2. Given the linear system described by 2 α 1 1 ẋ = x +

More information

Module 9: Stationary Processes

Module 9: Stationary Processes Module 9: Stationary Processes Lecture 1 Stationary Processes 1 Introduction A stationary process is a stochastic process whose joint probability distribution does not change when shifted in time or space.

More information

Toward nonlinear tracking and rejection using LPV control

Toward nonlinear tracking and rejection using LPV control Toward nonlinear tracking and rejection using LPV control Gérard Scorletti, V. Fromion, S. de Hillerin Laboratoire Ampère (CNRS) MaIAGE (INRA) Fondation EADS International Workshop on Robust LPV Control

More information

ECE 636: Systems identification

ECE 636: Systems identification ECE 636: Systems identification Lectures 3 4 Random variables/signals (continued) Random/stochastic vectors Random signals and linear systems Random signals in the frequency domain υ ε x S z + y Experimental

More information

Basic Concepts in Data Reconciliation. Chapter 6: Steady-State Data Reconciliation with Model Uncertainties

Basic Concepts in Data Reconciliation. Chapter 6: Steady-State Data Reconciliation with Model Uncertainties Chapter 6: Steady-State Data with Model Uncertainties CHAPTER 6 Steady-State Data with Model Uncertainties 6.1 Models with Uncertainties In the previous chapters, the models employed in the DR were considered

More information

Lecture 16: State Space Model and Kalman Filter Bus 41910, Time Series Analysis, Mr. R. Tsay

Lecture 16: State Space Model and Kalman Filter Bus 41910, Time Series Analysis, Mr. R. Tsay Lecture 6: State Space Model and Kalman Filter Bus 490, Time Series Analysis, Mr R Tsay A state space model consists of two equations: S t+ F S t + Ge t+, () Z t HS t + ɛ t (2) where S t is a state vector

More information

Probabilistic Graphical Models

Probabilistic Graphical Models Probabilistic Graphical Models Brown University CSCI 2950-P, Spring 2013 Prof. Erik Sudderth Lecture 12: Gaussian Belief Propagation, State Space Models and Kalman Filters Guest Kalman Filter Lecture by

More information

Lecture 1: Pragmatic Introduction to Stochastic Differential Equations

Lecture 1: Pragmatic Introduction to Stochastic Differential Equations Lecture 1: Pragmatic Introduction to Stochastic Differential Equations Simo Särkkä Aalto University, Finland (visiting at Oxford University, UK) November 13, 2013 Simo Särkkä (Aalto) Lecture 1: Pragmatic

More information

The Multivariate Normal Distribution 1

The Multivariate Normal Distribution 1 The Multivariate Normal Distribution 1 STA 302 Fall 2014 1 See last slide for copyright information. 1 / 37 Overview 1 Moment-generating Functions 2 Definition 3 Properties 4 χ 2 and t distributions 2

More information

Stochastic Processes

Stochastic Processes Elements of Lecture II Hamid R. Rabiee with thanks to Ali Jalali Overview Reading Assignment Chapter 9 of textbook Further Resources MIT Open Course Ware S. Karlin and H. M. Taylor, A First Course in Stochastic

More information

The Kalman Filter. Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience. Sarah Dance

The Kalman Filter. Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience. Sarah Dance The Kalman Filter Data Assimilation & Inverse Problems from Weather Forecasting to Neuroscience Sarah Dance School of Mathematical and Physical Sciences, University of Reading s.l.dance@reading.ac.uk July

More information

6.4 Kalman Filter Equations

6.4 Kalman Filter Equations 6.4 Kalman Filter Equations 6.4.1 Recap: Auxiliary variables Recall the definition of the auxiliary random variables x p k) and x m k): Init: x m 0) := x0) S1: x p k) := Ak 1)x m k 1) +uk 1) +vk 1) S2:

More information

Kalman Filter. Predict: Update: x k k 1 = F k x k 1 k 1 + B k u k P k k 1 = F k P k 1 k 1 F T k + Q

Kalman Filter. Predict: Update: x k k 1 = F k x k 1 k 1 + B k u k P k k 1 = F k P k 1 k 1 F T k + Q Kalman Filter Kalman Filter Predict: x k k 1 = F k x k 1 k 1 + B k u k P k k 1 = F k P k 1 k 1 F T k + Q Update: K = P k k 1 Hk T (H k P k k 1 Hk T + R) 1 x k k = x k k 1 + K(z k H k x k k 1 ) P k k =(I

More information

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49

State-space Model. Eduardo Rossi University of Pavia. November Rossi State-space Model Financial Econometrics / 49 State-space Model Eduardo Rossi University of Pavia November 2013 Rossi State-space Model Financial Econometrics - 2013 1 / 49 Outline 1 Introduction 2 The Kalman filter 3 Forecast errors 4 State smoothing

More information

9 Multi-Model State Estimation

9 Multi-Model State Estimation Technion Israel Institute of Technology, Department of Electrical Engineering Estimation and Identification in Dynamical Systems (048825) Lecture Notes, Fall 2009, Prof. N. Shimkin 9 Multi-Model State

More information

Lessons in Estimation Theory for Signal Processing, Communications, and Control

Lessons in Estimation Theory for Signal Processing, Communications, and Control Lessons in Estimation Theory for Signal Processing, Communications, and Control Jerry M. Mendel Department of Electrical Engineering University of Southern California Los Angeles, California PRENTICE HALL

More information

Online monitoring of MPC disturbance models using closed-loop data

Online monitoring of MPC disturbance models using closed-loop data Online monitoring of MPC disturbance models using closed-loop data Brian J. Odelson and James B. Rawlings Department of Chemical Engineering University of Wisconsin-Madison Online Optimization Based Identification

More information

A Concise Course on Stochastic Partial Differential Equations

A Concise Course on Stochastic Partial Differential Equations A Concise Course on Stochastic Partial Differential Equations Michael Röckner Reference: C. Prevot, M. Röckner: Springer LN in Math. 1905, Berlin (2007) And see the references therein for the original

More information

Dynamic Consistency for Stochastic Optimal Control Problems

Dynamic Consistency for Stochastic Optimal Control Problems Dynamic Consistency for Stochastic Optimal Control Problems Cadarache Summer School CEA/EDF/INRIA 2012 Pierre Carpentier Jean-Philippe Chancelier Michel De Lara SOWG June 2012 Lecture outline Introduction

More information

Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother

Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother Prediction of ESTSP Competition Time Series by Unscented Kalman Filter and RTS Smoother Simo Särkkä, Aki Vehtari and Jouko Lampinen Helsinki University of Technology Department of Electrical and Communications

More information

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS

FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS FUNDAMENTAL FILTERING LIMITATIONS IN LINEAR NON-GAUSSIAN SYSTEMS Gustaf Hendeby Fredrik Gustafsson Division of Automatic Control Department of Electrical Engineering, Linköpings universitet, SE-58 83 Linköping,

More information

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt.

The concentration of a drug in blood. Exponential decay. Different realizations. Exponential decay with noise. dc(t) dt. The concentration of a drug in blood Exponential decay C12 concentration 2 4 6 8 1 C12 concentration 2 4 6 8 1 dc(t) dt = µc(t) C(t) = C()e µt 2 4 6 8 1 12 time in minutes 2 4 6 8 1 12 time in minutes

More information

Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations

Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Lecture 6: Multiple Model Filtering, Particle Filtering and Other Approximations Department of Biomedical Engineering and Computational Science Aalto University April 28, 2010 Contents 1 Multiple Model

More information

Name of the Student: Problems on Discrete & Continuous R.Vs

Name of the Student: Problems on Discrete & Continuous R.Vs Engineering Mathematics 05 SUBJECT NAME : Probability & Random Process SUBJECT CODE : MA6 MATERIAL NAME : University Questions MATERIAL CODE : JM08AM004 REGULATION : R008 UPDATED ON : Nov-Dec 04 (Scan

More information

Sequential Monte Carlo Methods for Bayesian Computation

Sequential Monte Carlo Methods for Bayesian Computation Sequential Monte Carlo Methods for Bayesian Computation A. Doucet Kyoto Sept. 2012 A. Doucet (MLSS Sept. 2012) Sept. 2012 1 / 136 Motivating Example 1: Generic Bayesian Model Let X be a vector parameter

More information

NONUNIFORM SAMPLING FOR DETECTION OF ABRUPT CHANGES*

NONUNIFORM SAMPLING FOR DETECTION OF ABRUPT CHANGES* CIRCUITS SYSTEMS SIGNAL PROCESSING c Birkhäuser Boston (2003) VOL. 22, NO. 4,2003, PP. 395 404 NONUNIFORM SAMPLING FOR DETECTION OF ABRUPT CHANGES* Feza Kerestecioğlu 1,2 and Sezai Tokat 1,3 Abstract.

More information

FAST ALGORITHM FOR ESTIMATING MUTUAL INFORMATION, ENTROPIES AND SCORE FUNCTIONS. Dinh Tuan Pham

FAST ALGORITHM FOR ESTIMATING MUTUAL INFORMATION, ENTROPIES AND SCORE FUNCTIONS. Dinh Tuan Pham FAST ALGORITHM FOR ESTIMATING MUTUAL INFORMATION, ENTROPIES AND SCORE FUNCTIONS Dinh Tuan Pham Laboratoire de Modelling and Computation, CNRS, IMAG BP. 53, 384 Grenoble cedex, France ABSTRACT This papers

More information

COMPOSITE OPTIMAL CONTROL FOR INTERCONNECTED SINGULARLY PERTURBED SYSTEMS. A Dissertation by. Saud A. Alghamdi

COMPOSITE OPTIMAL CONTROL FOR INTERCONNECTED SINGULARLY PERTURBED SYSTEMS. A Dissertation by. Saud A. Alghamdi COMPOSITE OPTIMAL CONTROL FOR INTERCONNECTED SINGULARLY PERTURBED SYSTEMS A Dissertation by Saud A. Alghamdi Master of Mathematical Science, Queensland University of Technology, 21 Bachelor of Mathematics,

More information

A Crash Course on Kalman Filtering

A Crash Course on Kalman Filtering A Crash Course on Kalman Filtering Dan Simon Cleveland State University Fall 2014 1 / 64 Outline Linear Systems Probability State Means and Covariances Least Squares Estimation The Kalman Filter Unknown

More information

Optimal control and estimation

Optimal control and estimation Automatic Control 2 Optimal control and estimation Prof. Alberto Bemporad University of Trento Academic year 2010-2011 Prof. Alberto Bemporad (University of Trento) Automatic Control 2 Academic year 2010-2011

More information

Machine Learning 4771

Machine Learning 4771 Machine Learning 4771 Instructor: ony Jebara Kalman Filtering Linear Dynamical Systems and Kalman Filtering Structure from Motion Linear Dynamical Systems Audio: x=pitch y=acoustic waveform Vision: x=object

More information

ADAPTIVE FILTER THEORY

ADAPTIVE FILTER THEORY ADAPTIVE FILTER THEORY Fifth Edition Simon Haykin Communications Research Laboratory McMaster University Hamilton, Ontario, Canada International Edition contributions by Telagarapu Prabhakar Department

More information

Local E-optimality Conditions for Trajectory Design to Estimate Parameters in Nonlinear Systems

Local E-optimality Conditions for Trajectory Design to Estimate Parameters in Nonlinear Systems Local E-optimality Conditions for Trajectory Design to Estimate Parameters in Nonlinear Systems Andrew D. Wilson and Todd D. Murphey Abstract This paper develops an optimization method to synthesize trajectories

More information

Labor-Supply Shifts and Economic Fluctuations. Technical Appendix

Labor-Supply Shifts and Economic Fluctuations. Technical Appendix Labor-Supply Shifts and Economic Fluctuations Technical Appendix Yongsung Chang Department of Economics University of Pennsylvania Frank Schorfheide Department of Economics University of Pennsylvania January

More information

X t = a t + r t, (7.1)

X t = a t + r t, (7.1) Chapter 7 State Space Models 71 Introduction State Space models, developed over the past 10 20 years, are alternative models for time series They include both the ARIMA models of Chapters 3 6 and the Classical

More information

Cheng Soon Ong & Christian Walder. Canberra February June 2018

Cheng Soon Ong & Christian Walder. Canberra February June 2018 Cheng Soon Ong & Christian Walder Research Group and College of Engineering and Computer Science Canberra February June 2018 (Many figures from C. M. Bishop, "Pattern Recognition and ") 1of 254 Part V

More information

A New Subspace Identification Method for Open and Closed Loop Data

A New Subspace Identification Method for Open and Closed Loop Data A New Subspace Identification Method for Open and Closed Loop Data Magnus Jansson July 2005 IR S3 SB 0524 IFAC World Congress 2005 ROYAL INSTITUTE OF TECHNOLOGY Department of Signals, Sensors & Systems

More information

LQR, Kalman Filter, and LQG. Postgraduate Course, M.Sc. Electrical Engineering Department College of Engineering University of Salahaddin

LQR, Kalman Filter, and LQG. Postgraduate Course, M.Sc. Electrical Engineering Department College of Engineering University of Salahaddin LQR, Kalman Filter, and LQG Postgraduate Course, M.Sc. Electrical Engineering Department College of Engineering University of Salahaddin May 2015 Linear Quadratic Regulator (LQR) Consider a linear system

More information

OPTIMAL CONTROL. Sadegh Bolouki. Lecture slides for ECE 515. University of Illinois, Urbana-Champaign. Fall S. Bolouki (UIUC) 1 / 28

OPTIMAL CONTROL. Sadegh Bolouki. Lecture slides for ECE 515. University of Illinois, Urbana-Champaign. Fall S. Bolouki (UIUC) 1 / 28 OPTIMAL CONTROL Sadegh Bolouki Lecture slides for ECE 515 University of Illinois, Urbana-Champaign Fall 2016 S. Bolouki (UIUC) 1 / 28 (Example from Optimal Control Theory, Kirk) Objective: To get from

More information