
CALIFORNIA INSTITUTE OF TECHNOLOGY
Control and Dynamical Systems, CDS 110b
R. M. Murray, Kalman Filters, 25 January 2006

Reading: This set of lectures provides a brief introduction to Kalman filtering, following the treatment in Friedland, Chapter 11.

1 Introduction

[Figure 1: Block diagram of a basic feedback loop, showing controller C and process P with reference r, error e, control u, disturbance d, noise n, and output y.]

2 Linear Quadratic Estimators

Consider a stochastic system

    ẋ = Ax + Bu + Fv,   E{v(s)v^T(t)} = Q(t)δ(t − s)
    y = Cx + w,         E{w(s)w^T(t)} = R(t)δ(t − s)

Assume that the disturbance v and noise w are zero-mean and Gaussian (but not necessarily stationary):

    p(v) = 1/√((2π)^n det Q) e^(−(1/2) v^T Q^(−1) v)

and similarly for p(w), with R in place of Q.
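As a quick numerical sanity check (not part of the original notes), the Gaussian density formula above can be compared against SciPy's implementation; NumPy and SciPy are assumed available, and the matrix Q and vector v below are illustrative values:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_density(v, Q):
    """p(v) = exp(-0.5 v^T Q^{-1} v) / sqrt((2*pi)^n det Q)."""
    n = len(v)
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(Q))
    return float(np.exp(-0.5 * v @ np.linalg.solve(Q, v)) / norm)

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # example covariance (SPD)
v = np.array([0.3, -0.7])
p = gaussian_density(v, Q)
p_ref = multivariate_normal(mean=np.zeros(2), cov=Q).pdf(v)
print(abs(p - p_ref) < 1e-12)  # the two evaluations agree
```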

This is a multi-variable Gaussian with covariance matrix Q; in the scalar case, Q = σ².

Problem statement: Find the estimate x̂(t) that minimizes the mean square error

    E{(x(t) − x̂(t))(x(t) − x̂(t))^T}

given {y(τ) : 0 ≤ τ ≤ t}.

Proposition. x̂(t) = E{x(t) | y(τ), τ ≤ t}.

The optimal estimate is just the expectation of the random process x given the constraint of the observed output. This is the way Kalman originally formulated the problem. One can also think of it as a least squares problem: given all previous y(t), find the estimate x̂ that satisfies the dynamics and minimizes the square error with the measured data.

Proof. See text. Basic idea: show that the conditional mean minimizes the mean square error.

Theorem 1 (Kalman-Bucy, 1961). The optimal estimator has the form of a linear observer

    dx̂/dt = Ax̂ + Bu + L(y − Cx̂)

where L(t) = P(t)C^T R^(−1) and P(t) = E{(x(t) − x̂(t))(x(t) − x̂(t))^T} satisfies

    Ṗ = AP + PA^T − PC^T R(t)^(−1) CP + FQ(t)F^T,   P(0) = E{x(0)x^T(0)}.

Proof (sketch). The error e = x − x̂ has dynamics

    ė = (A − LC)e + ξ,   ξ = Fv − Lw,   R_ξ = FQF^T + LRL^T.

The covariance matrix P_e = P for this process satisfies (from last lecture)

    Ṗ = (A − LC)P + P(A − LC)^T + FQF^T + LRL^T.

We need to find L such that P(t) is as small as possible. One can show that the L that achieves this satisfies RL^T = CP, i.e. L = PC^T R^(−1) (see Friedland, Section 9.4).

Remarks and properties:
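The covariance equation above can be integrated numerically. The sketch below (a double-integrator example chosen for illustration, not taken from the notes) Euler-integrates the Riccati differential equation from P(0) = I and checks that P(t) settles to the steady-state solution of the corresponding algebraic Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator driven by process noise through F; position measured.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.0], [1.0]])
Q = np.array([[1.0]])   # process noise intensity (illustrative)
R = np.array([[0.1]])   # measurement noise intensity (illustrative)
Rinv = np.linalg.inv(R)

# Euler integration of Pdot = A P + P A^T - P C^T R^{-1} C P + F Q F^T
P = np.eye(2)           # P(0) = E{x(0) x^T(0)}, taken as I here
dt = 1e-3
for _ in range(20000):  # integrate for 20 seconds
    Pdot = A @ P + P @ A.T - P @ C.T @ Rinv @ C @ P + F @ Q @ F.T
    P = P + dt * Pdot

# P(t) approaches the algebraic Riccati solution (via the dual ARE form)
P_ss = solve_continuous_are(A.T, C.T, F @ Q @ F.T, R)
print(np.max(np.abs(P - P_ss)) < 1e-6)  # True: converged to steady state
```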

1. The Kalman filter has the form of a recursive filter: given P(t) = E{e(t)e^T(t)} at time t, we can compute how the estimate and covariance change; there is no need to keep track of old values of the output.

2. The Kalman filter gives both the estimate x̂(t) and the covariance P_e(t), so you can see how well the error is converging.

3. If the noise is stationary (Q, R constant) and if P is stable, then the observer gain is constant: L = PC^T R^(−1), where P solves the algebraic Riccati equation

    AP + PA^T − PC^T R^(−1)CP + FQF^T = 0.

This is the problem solved by the lqe command in MATLAB.

4. The Kalman filter extracts the maximum possible information from the output data. Let

    r = y − Cx̂

be the residual or innovations process. One can show that for the Kalman filter the correlation matrix of the residual is

    R_r(t, s) = W(t)δ(t − s),

i.e. the residual is white noise, so the output error has no remaining dynamic information content (see Friedland, Section 11.5 for the complete calculation).

3 Extended Kalman Filters

Consider a nonlinear system

    ẋ = f(x, u, v),   x ∈ R^n, u ∈ R^m
    y = Cx + w

where v and w are Gaussian white noise processes with covariance matrices Q and R. Nonlinear observer:

    dx̂/dt = f(x̂, u, 0) + L(y − Cx̂)

Error dynamics: with e = x − x̂,

    ė = f(x, u, v) − f(x̂, u, 0) − LC(x − x̂)
      = F(e, x̂, u, v) − LCe,   where F(e, x̂, u, v) = f(e + x̂, u, v) − f(x̂, u, 0).

Now linearize around the current estimate x̂:

    ė = Ãe + F̃v − LCe + h.o.t.,

where Ãe gives the linearized error dynamics, F̃v the noise contribution, and LCe the observer correction,
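The stationary case in remark 3 can be reproduced in Python instead of MATLAB's lqe: solve the filter algebraic Riccati equation and form the constant gain L. The double-integrator system and noise intensities below are illustrative choices, not from the notes:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator with process noise entering through F; position measured.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.0], [1.0]])
Qn = np.array([[1.0]])   # process noise intensity (assumed)
Rn = np.array([[0.1]])   # measurement noise intensity (assumed)

# Filter ARE: A P + P A^T - P C^T R^{-1} C P + F Q F^T = 0,
# solved via duality as solve_continuous_are(A^T, C^T, F Q F^T, R).
P = solve_continuous_are(A.T, C.T, F @ Qn @ F.T, Rn)
L = P @ C.T @ np.linalg.inv(Rn)   # constant observer gain L = P C^T R^{-1}

# The ARE residual should vanish, and A - L C should be stable.
res = A @ P + P @ A.T - P @ C.T @ np.linalg.inv(Rn) @ C @ P + F @ Qn @ F.T
print(np.max(np.abs(res)) < 1e-8)
print(np.all(np.linalg.eigvals(A - L @ C).real < 0))
```

Both checks print True: the steady-state covariance solves the ARE and the resulting observer error dynamics are stable.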

where à = F e F = F v (0,ˆx,u,0) (0,ˆx,u,0) = f x = f v (ˆx,u,0) (ˆx,u,0) Depend on current estimate ˆx Idea: design observer for the linearized system around current estimate ˆx = f(ˆx,u, 0) + L(y Cˆx) L = PC T R 1 P = (à LC)P + P(à LC)T + FQ F T + LRL T P(t 0 ) = E{x(t 0 )x T (t 0 )} This is called the (Schmidt) extended Kalman filter (EKF) Remarks: 1. Can t prove very much about EKF due to nonlinear terms 2. In applications, works very well. One of the most used forms of the Kalman filter Application: parameter ID Consider a linear system with unknown parameters ξ ẋ = A(ξ)x + B(ξ)u + Fv y = C(ξ)x + w ξ R p Parameter ID problem: given u(t) and y(t), estimate the value of the parameters ξ. One approach: treat ξ as unkown state } ẋ = A(ξ)x + B(ξ)u + Fv ξ = 0 d dt [ ] x = ξ f([ x ξ ],u,v) {[ ][}}] [ { A(ξ) 0 x B(ξ) + 0 0 ξ 0 y = C(ξ)x + w h([ x ξ ],w) ] u + [ ] F v 0 Now use EKF to estimate x and ξ = determine unknown parameters ξ R p. Remark: need various observability conditions on augmented system in order for this to work. 4

4 LQG Control

Return to the original H₂ control problem (Figure 1):

    ẋ = Ax + Bu + Fv
    y = Cx + w

with v, w Gaussian white noise with covariances R_v, R_w. Stochastic control problem: find C(s) to minimize

    J = E{ ∫₀^∞ [ (y − r)^T Q (y − r) + u^T R u ] dt }

Assume for simplicity that r = 0 (otherwise, translate the state accordingly).

Theorem 2. The optimal controller has the form

    dx̂/dt = Ax̂ + Bu + L(y − Cx̂)
    u = −K(x̂ − x_d)

where L is the optimal observer gain, ignoring the controller, and K is the optimal controller gain, ignoring the noise. This is called the separation principle (for H₂ control).

5 Sensor Fusion
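The separation principle stated in Theorem 2 above can be checked numerically: in (x, e) coordinates the closed loop is block triangular, so its eigenvalues are those of A − BK together with those of A − LC. The double-integrator system and weights below are illustrative choices, not from the notes:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Design K ignoring the noise (LQR) and L ignoring the controller (Kalman
# filter), then verify the separation principle on the closed loop.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.0], [1.0]])
Qx, Ru = np.eye(2), np.array([[1.0]])          # LQR weights (assumed)
Qv, Rw = np.array([[1.0]]), np.array([[0.1]])  # noise intensities (assumed)

S = solve_continuous_are(A, B, Qx, Ru)
K = np.linalg.inv(Ru) @ B.T @ S                # controller gain
P = solve_continuous_are(A.T, C.T, F @ Qv @ F.T, Rw)
L = P @ C.T @ np.linalg.inv(Rw)                # observer gain

# Closed loop in (x, e) coordinates is block triangular, so its spectrum
# is the union of eig(A - B K) and eig(A - L C).
Acl = np.block([[A - B @ K, B @ K], [np.zeros((2, 2)), A - L @ C]])
ev = np.sort_complex(np.linalg.eigvals(Acl))
ev_sep = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(A - B @ K), np.linalg.eigvals(A - L @ C)]))
print(np.allclose(ev, ev_sep))  # True: separation principle holds
```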