CALIFORNIA INSTITUTE OF TECHNOLOGY
Control and Dynamical Systems, CDS 110b
R. M. Murray, Kalman Filters, 14 January 2007

Reading: Friedland, Chapter 11

This set of lectures provides a brief introduction to Kalman filtering, following the treatment in Friedland.

1 Introduction

[Figure 1: Block diagram of a basic feedback loop, with controller C and process P blocks and signals r (reference), e (error), u (input), d (disturbance), n (noise), and y (output).]

2 Linear Quadratic Estimators

Consider a stochastic system

    Ẋ = AX + BU + FV,    E{V(s)V^T(t)} = R_V(t) δ(t - s)
    Y = CX + W,          E{W(s)W^T(t)} = R_W(t) δ(t - s).

Assume that the disturbance V and the noise W are zero mean and Gaussian (but not necessarily stationary):

    p(v) = 1 / sqrt((2π)^n det R_V) · exp( -(1/2) v^T R_V^{-1} v ),

and p(w) has the same form with R_W in place of R_V. Each is a multivariable Gaussian with covariance matrix R_V (respectively R_W); in the scalar case, R_V = σ².
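To make the noise model concrete, here is a minimal simulation sketch in Python/NumPy using a simple Euler-Maruyama discretization. The matrices and noise intensities are illustrative placeholders chosen for this example (the lecture leaves A, B, C, F, R_V, R_W generic).

```python
import numpy as np

# Illustrative double integrator with process disturbance and measurement noise
# (placeholder values, not from the lecture).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.0], [1.0]])
R_V = np.array([[0.1]])    # disturbance intensity
R_W = np.array([[0.01]])   # measurement noise intensity

dt, T = 0.01, 10.0
steps = int(T / dt)
rng = np.random.default_rng(0)

x = np.zeros((2, 1))
xs, ys = [], []
for k in range(steps):
    u = np.array([[0.0]])  # open-loop input, for illustration only
    # Euler-Maruyama step: white noise of intensity R_V enters as sqrt(dt) * N(0, R_V)
    v = rng.multivariate_normal(np.zeros(1), R_V).reshape(-1, 1)
    x = x + (A @ x + B @ u) * dt + F @ v * np.sqrt(dt)
    # Sampled measurement: discrete-time noise covariance approximates R_W / dt
    w = rng.multivariate_normal(np.zeros(1), R_W / dt).reshape(-1, 1)
    y = C @ x + w
    xs.append(x.copy())
    ys.append(y.copy())
```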

Problem statement: Find the estimate X̂(t) that minimizes the mean square error E{(X(t) - X̂(t))(X(t) - X̂(t))^T} given {Y(τ) : 0 ≤ τ ≤ t}.

Proposition. X̂(t) = E{X(t) | Y(τ), τ ≤ t}.

The optimal estimate is just the expectation of the random process X given the constraint of the observed output. This is the way Kalman originally formulated the problem. One can also think of it as a least squares problem: given all previous Y(τ), find the estimate X̂ that satisfies the dynamics and minimizes the square error with the measured data.

Proof. See text. Basic idea: show that the conditional mean minimizes the mean square error.

Theorem 1 (Kalman-Bucy, 1961). The optimal estimator has the form of a linear observer

    dX̂/dt = AX̂ + BU + L(Y - CX̂),

where L(t) = P(t)C^T R_W^{-1} and P(t) = E{(X(t) - X̂(t))(X(t) - X̂(t))^T} satisfies

    Ṗ = AP + PA^T - PC^T R_W^{-1}(t) CP + F R_V(t) F^T,    P(0) = E{X(0)X^T(0)}.

Proof (sketch). The error E = X - X̂ has dynamics

    Ė = (A - LC)E + ξ,    ξ = FV - LW,    R_ξ = F R_V F^T + L R_W L^T.

The covariance matrix P_E = P for this process satisfies (from last lecture)

    Ṗ = (A - LC)P + P(A - LC)^T + F R_V F^T + L R_W L^T.

We need to find L such that P(t) is as small as possible. One can show that the L that achieves this is given by R_W L^T = CP, i.e. L = PC^T R_W^{-1} (see Friedland, Section 9.4).
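A minimal sketch of how the Kalman-Bucy equations can be propagated numerically, continuing the Python example above (it assumes A, B, C, F, R_V, R_W, dt, steps, and ys from that sketch, and uses forward Euler for both the estimate and the Riccati equation, with the step size assumed small enough for accuracy).

```python
import numpy as np

# Continuous-time Kalman-Bucy filter, propagated with forward Euler.
# Assumes A, B, C, F, R_V, R_W, dt, steps, ys from the simulation sketch above.
R_W_inv = np.linalg.inv(R_W)

x_hat = np.zeros((2, 1))   # initial estimate (assumed)
P = np.eye(2)              # P(0) = E{X(0) X(0)^T} (assumed)

for k in range(steps):
    u = np.array([[0.0]])
    y = ys[k]
    L = P @ C.T @ R_W_inv                          # L(t) = P C^T R_W^{-1}
    # dX_hat/dt = A X_hat + B u + L (y - C X_hat)
    x_hat = x_hat + (A @ x_hat + B @ u + L @ (y - C @ x_hat)) * dt
    # P_dot = A P + P A^T - P C^T R_W^{-1} C P + F R_V F^T
    P = P + (A @ P + P @ A.T - P @ C.T @ R_W_inv @ C @ P + F @ R_V @ F.T) * dt
```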

Remarks and properties

1. The Kalman filter has the form of a recursive filter: given P(t) = E{E(t)E^T(t)} at time t, we can compute how the estimate and covariance change; we don't need to keep track of old values of the output.

2. The Kalman filter gives both the estimate X̂(t) and the covariance P_E(t), so you can see how well the error is converging.

3. If the noise is stationary (R_V, R_W constant) and if P is stable, then the observer gain is constant:

       L = PC^T R_W^{-1},    0 = AP + PA^T - PC^T R_W^{-1} CP + F R_V F^T

   (the algebraic Riccati equation). This is the problem solved by the lqe command in MATLAB; a numerical sketch follows these remarks.

4. The Kalman filter extracts the maximum possible information from the output data. Let R = Y - CX̂ denote the residual (or innovations) process. One can show that for the Kalman filter the correlation matrix is

       R_R(t, s) = R_W(t) δ(t - s),

   i.e. the residual is white noise, so the output error has no remaining dynamic information content (see Friedland, Section 11.5, for the complete calculation).
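Remark 3 can be reproduced numerically. The sketch below computes the steady-state gain by solving the (dual) algebraic Riccati equation with SciPy, playing the role of MATLAB's lqe for this example; it assumes the placeholder matrices A, C, F, R_V, R_W defined in the first sketch.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Steady-state Kalman filter gain (Remark 3), assuming A, C, F, R_V, R_W from above.
# The filter ARE  A P + P A^T - P C^T R_W^{-1} C P + F R_V F^T = 0  is the dual of
# the control ARE, so we solve it with (A^T, C^T) in place of (A, B).
P_inf = solve_continuous_are(A.T, C.T, F @ R_V @ F.T, R_W)
L_inf = P_inf @ C.T @ np.linalg.inv(R_W)
print("steady-state observer gain L =\n", L_inf)
```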

3 Extended Kalman Filters

Consider a nonlinear system

    Ẋ = f(X, U, V),    X ∈ R^n, U ∈ R^m
    Y = CX + W,

where V, W are Gaussian white noise processes with covariance matrices R_V and R_W. Use a nonlinear observer

    dX̂/dt = f(X̂, U, 0) + L(Y - CX̂).

The error E = X - X̂ has dynamics

    Ė = f(X, U, V) - f(X̂, U, 0) - LC(X - X̂) = F(E, X̂, U, V) - LCE,

where F(E, X̂, U, V) = f(E + X̂, U, V) - f(X̂, U, 0).

Now linearize around the current estimate X̂:

    Ė ≈ ∂F/∂E · E + F(0, X̂, U, 0) + ∂F/∂V · V - LCE + h.o.t.
      = ÃE + F̃V - LCE,

where F(0, X̂, U, 0) = 0 (the first term is the noise contribution and the last is the observer gain term) and

    Ã = ∂F/∂E |_(0, X̂, U, 0) = ∂f/∂X |_(X̂, U, 0),
    F̃ = ∂F/∂V |_(0, X̂, U, 0) = ∂f/∂V |_(X̂, U, 0)

both depend on the current estimate X̂. Idea: design the observer for the system linearized around the current estimate:

    dX̂/dt = f(X̂, U, 0) + L(Y - CX̂)
    L = PC^T R_W^{-1}
    Ṗ = (Ã - LC)P + P(Ã - LC)^T + F̃ R_V F̃^T + L R_W L^T,    P(t_0) = E{X(t_0)X^T(t_0)}.

This is called the (Schmidt) extended Kalman filter (EKF).

Remarks:

1. We can't prove very much about the EKF because of the nonlinear terms.
2. In applications it works very well; it is one of the most used forms of the Kalman filter.
3. Unscented Kalman filters are a related alternative.
4. Information form of the Kalman filter (next week?).

Application: parameter ID. Consider a linear system with unknown parameters ξ ∈ R^p:

    Ẋ = A(ξ)X + B(ξ)U + FV
    Y = C(ξ)X + W.

Parameter ID problem: given U(t) and Y(t), estimate the value of the parameters ξ. One approach: treat ξ as an unknown state with trivial dynamics dξ/dt = 0 and form the augmented system

         [ X ]   [ A(ξ)  0 ] [ X ]   [ B(ξ) ]       [ F ]
    d/dt [   ] = [          ] [   ] + [      ] U  +  [   ] V   =: f((X, ξ), U, V)
         [ ξ ]   [  0    0 ] [ ξ ]   [  0   ]       [ 0 ]

    Y = C(ξ)X + W                                              =: h((X, ξ), W).

Now use the EKF to estimate both X and ξ, which determines the unknown parameters ξ ∈ R^p. Remark: various observability conditions on the augmented system are needed in order for this to work. A sketch of this idea is given below.
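As an illustration of the parameter ID idea (not from the lecture), the following sketch runs a continuous-time EKF on the augmented state (X, ξ) for a scalar plant dx/dt = -ξ x + u with unknown ξ. All numerical values, the input signal, and the Euler integration are assumptions made for the example.

```python
import numpy as np

# Parameter ID via EKF (illustrative sketch): scalar plant  dx/dt = -xi*x + u,
# unknown constant parameter xi, measurement y = x + w.  Augmented state z = [x, xi].
rng = np.random.default_rng(1)
xi_true = 2.0
R_V = 0.01       # disturbance intensity on x
R_W = 0.01       # measurement noise intensity
dt, steps = 0.01, 2000

def f(z, u):                        # augmented dynamics f((X, xi), U, 0)
    x, xi = z
    return np.array([-xi * x + u, 0.0])

def jac_A(z):                       # A_tilde = df/dz evaluated at the estimate
    x, xi = z
    return np.array([[-xi, -x],
                     [0.0, 0.0]])

C = np.array([[1.0, 0.0]])
F = np.array([[1.0], [0.0]])        # disturbance enters only the x equation
# (in practice a small fictitious intensity is often added on xi to keep P alive)

z_true = np.array([1.0, xi_true])   # true (x, xi); xi is constant
z_hat = np.array([1.0, 0.0])        # initial guess for (x, xi)
P = np.eye(2)

for k in range(steps):
    u = np.sin(0.5 * k * dt)        # persistently exciting input (assumed)
    # Simulate the true plant (Euler-Maruyama) and a sampled measurement.
    v = rng.normal(0.0, np.sqrt(R_V))
    z_true = z_true + f(z_true, u) * dt + F[:, 0] * v * np.sqrt(dt)
    y = (C @ z_true).item() + rng.normal(0.0, np.sqrt(R_W / dt))
    # EKF step: linearize about the current estimate and update estimate and covariance.
    A_t = jac_A(z_hat)
    L = P @ C.T / R_W                               # L = P C^T R_W^{-1}
    z_hat = z_hat + (f(z_hat, u) + (L * (y - (C @ z_hat).item())).ravel()) * dt
    P = P + ((A_t - L @ C) @ P + P @ (A_t - L @ C).T
             + F * R_V @ F.T + L * R_W @ L.T) * dt

print("estimated parameter xi:", z_hat[1])
```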

4 LQG Control

Return to the original H_2 control problem (Figure 1):

    Ẋ = AX + BU + FV
    Y = CX + W,

with V, W Gaussian white noise with covariances R_V, R_W. Stochastic control problem: find C(s) to minimize

    J = E{ ∫_0^∞ [ (Y - r)^T R_Y (Y - r) + U^T R_U U ] dt },

where R_Y ≥ 0 and R_U > 0 are weighting matrices (design parameters, not the noise intensities). Assume for simplicity that the reference input r = 0 (otherwise, translate the state accordingly).

Theorem 2. The optimal controller has the form

    dX̂/dt = AX̂ + BU + L(Y - CX̂)
    U = -K(X̂ - X_d),

where L is the optimal observer gain ignoring the controller and K is the optimal controller gain ignoring the noise. This is called the separation principle (for H_2 control).
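The separation principle translates directly into computation: the sketch below builds an LQG controller for the earlier placeholder system by solving the control and filter Riccati equations independently with SciPy. The weights R_Y, R_U and all numerical values are assumptions for illustration, not part of the lecture.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQG design via the separation principle (illustrative placeholder system).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.0], [1.0]])
R_V = np.array([[0.1]])     # disturbance intensity
R_W = np.array([[0.01]])    # measurement noise intensity
R_Y = np.array([[1.0]])     # output weight (design choice)
R_U = np.array([[0.1]])     # input weight (design choice)

# 1. Optimal state-feedback gain K, ignoring the noise (LQR):
#    A^T S + S A - S B R_U^{-1} B^T S + C^T R_Y C = 0,   K = R_U^{-1} B^T S
S = solve_continuous_are(A, B, C.T @ R_Y @ C, R_U)
K = np.linalg.solve(R_U, B.T @ S)

# 2. Optimal observer gain L, ignoring the controller (steady-state Kalman filter):
P = solve_continuous_are(A.T, C.T, F @ R_V @ F.T, R_W)
L = P @ C.T @ np.linalg.inv(R_W)

# 3. Controller:  dX_hat/dt = A X_hat + B U + L (Y - C X_hat),   U = -K (X_hat - X_d)
print("K =", K)
print("L =\n", L)
```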