Optimization-Based Control
Richard M. Murray
Control and Dynamical Systems, California Institute of Technology

DRAFT v1.7a, 19 February 2008
© California Institute of Technology. All rights reserved.

This manuscript is for review purposes only and may not be reproduced, in whole or in part, without written consent from the author.

Chapter 5
Kalman Filtering

In this chapter we derive the Kalman filter in continuous time (also referred to as the Kalman-Bucy filter).

Prerequisites. Readers should have basic familiarity with continuous-time stochastic systems at the level presented in Chapter 4.

5.1 Linear Quadratic Estimators

Consider a stochastic system
\[
\dot X = AX + Bu + FV, \qquad Y = CX + W,
\]
where $X$ represents the state, $u$ is the (deterministic) input, $V$ represents disturbances that affect the dynamics of the system and $W$ represents measurement noise. Assume that the disturbance $V$ and noise $W$ are zero-mean, Gaussian white noise (but not necessarily stationary):
\[
p(v) = \frac{1}{\sqrt{(2\pi)^n \det R_V}}\, e^{-\frac{1}{2} v^T R_V^{-1} v}, \qquad E\{V(s)V^T(t)\} = R_V(t)\,\delta(t - s),
\]
\[
p(w) = \frac{1}{\sqrt{(2\pi)^n \det R_W}}\, e^{-\frac{1}{2} w^T R_W^{-1} w}, \qquad E\{W(s)W^T(t)\} = R_W(t)\,\delta(t - s).
\]
We also assume that the cross correlation between $V$ and $W$ is zero, so that the disturbances are not correlated with the noise. Note that we use multivariable Gaussians here, with (incremental) covariance matrices $R_V \in \mathbb{R}^{m \times m}$ and $R_W \in \mathbb{R}^{p \times p}$. In the scalar case, $R_V = \sigma_V^2$ and $R_W = \sigma_W^2$.

We formulate the optimal estimation problem as finding the estimate $\hat X(t)$ that minimizes the mean square error $E\{(X(t) - \hat X(t))(X(t) - \hat X(t))^T\}$ given $\{Y(\tau) : 0 \le \tau \le t\}$. It can be shown that this is equivalent to finding the expected value of $X$ subject to the constraint given by all of the previous measurements, so that $\hat X(t) = E\{X(t) \mid Y(\tau),\, \tau \le t\}$. This was the way that Kalman originally formulated the problem, and it can be viewed as solving a least squares problem: given all previous $Y(\tau)$, find the estimate $\hat X$ that satisfies the dynamics and minimizes the squared error against the measured data. We omit the proof, since we will work directly with the error formulation.
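As a concrete illustration of this setup, the stochastic system above can be simulated with an Euler-Maruyama discretization. The double-integrator matrices and noise intensities below are illustrative assumptions, not values from the text; note that over a step of length $dt$ the disturbance increment has covariance $R_V\,dt$, while a sampled white measurement noise has covariance $R_W/dt$.

```python
import numpy as np

# Euler-Maruyama simulation of dX = (A X + B u) dt + F dV, Y = C X + W.
# The double-integrator A, B, C and the noise intensities are illustrative
# choices for this sketch, not values from the text.
rng = np.random.default_rng(0)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
RV = np.array([[0.1]])    # disturbance intensity
RW = np.array([[0.01]])   # measurement noise intensity

dt, T = 0.001, 1.0
X = np.zeros((2, 1))
for k in range(int(T / dt)):
    u = np.array([[1.0]])                                   # deterministic input
    dV = rng.multivariate_normal(np.zeros(1), RV * dt).reshape(-1, 1)
    X = X + (A @ X + B @ u) * dt + F @ dV                   # state update

# one sampled measurement: white noise at rate 1/dt has covariance RW/dt
W = rng.multivariate_normal(np.zeros(1), RW / dt).reshape(-1, 1)
Y = C @ X + W
```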

Theorem 5.1 (Kalman-Bucy, 1961). The optimal estimator has the form of a linear observer
\[
\dot{\hat X} = A\hat X + BU + L(Y - C\hat X),
\]
where $L(t) = P(t)C^T R_W^{-1}$ and $P(t) = E\{(X(t) - \hat X(t))(X(t) - \hat X(t))^T\}$ satisfies
\[
\dot P = AP + PA^T - PC^T R_W^{-1}(t) CP + F R_V(t) F^T, \qquad P(0) = E\{X(0)X^T(0)\}.
\]

Sketch of proof. The error dynamics are given by
\[
\dot E = (A - LC)E + \xi, \qquad \xi = FV - LW, \qquad R_\xi = F R_V F^T + L R_W L^T.
\]
The covariance matrix $P_E = P$ for this process satisfies
\begin{align*}
\dot P &= (A - LC)P + P(A - LC)^T + F R_V F^T + L R_W L^T \\
&= AP + PA^T + F R_V F^T - LCP - PC^T L^T + L R_W L^T \\
&= AP + PA^T + F R_V F^T + (L R_W - PC^T) R_W^{-1} (L R_W - PC^T)^T - PC^T R_W^{-1} CP,
\end{align*}
where the last line follows by completing the square. We need to find $L$ such that $P(t)$ is as small as possible, which can be done by choosing $L$ so that $\dot P$ decreases by the maximum amount possible at each instant in time. This is accomplished by setting
\[
L R_W = PC^T \quad\Longrightarrow\quad L = PC^T R_W^{-1}.
\]

Note that the Kalman filter has the form of a recursive filter: given $P(t) = E\{E(t)E^T(t)\}$ at time $t$, we can compute how the estimate and covariance change. Thus we do not need to keep track of old values of the output. Furthermore, the Kalman filter gives the estimate $\hat X(t)$ and the covariance $P_E(t)$, so we can see how well the error is converging.

If the noise is stationary ($R_V$, $R_W$ constant) and if $P(t)$ converges, then the observer gain is constant and satisfies the algebraic Riccati equation
\[
L = PC^T R_W^{-1}, \qquad 0 = AP + PA^T - PC^T R_W^{-1} CP + F R_V F^T.
\]
This is the most commonly used form of the estimator, since it gives an explicit formula for the estimator gains that minimize the error covariance. The gain matrix for this case can be computed using the lqe command in MATLAB.

Another property of the Kalman filter is that it extracts the maximum possible information from the output data. To see this, consider the residual random process
\[
R = Y - C\hat X
\]
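When the noise is stationary, the steady-state gain from the algebraic Riccati equation above can also be computed numerically; in Python, `scipy.linalg.solve_continuous_are` plays the role of MATLAB's `lqe`. The double-integrator data below is an illustrative assumption, not from the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Steady-state Kalman gain from the filter ARE
#   A P + P A^T - P C^T R_W^{-1} C P + F R_V F^T = 0,  L = P C^T R_W^{-1}.
# solve_continuous_are solves the dual (control-form) ARE, so we pass the
# transposed data A^T, C^T. Matrices are an illustrative double integrator.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
RV = np.array([[0.1]])
RW = np.array([[0.01]])

P = solve_continuous_are(A.T, C.T, F @ RV @ F.T, RW)
L = P @ C.T @ np.linalg.inv(RW)

# Sanity check: ARE residual is (numerically) zero and A - L C is stable.
residual = A @ P + P @ A.T - P @ C.T @ np.linalg.inv(RW) @ C @ P + F @ RV @ F.T
```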

(this process is also called the innovations process). It can be shown for the Kalman filter that the correlation matrix of $R$ is given by
\[
R_R(t, s) = R_W(t)\,\delta(t - s).
\]
This implies that the residuals are a white noise process, so the output error has no remaining dynamic information content.

5.2 Extensions of the Kalman Filter

Correlated disturbances and noise

The derivation of the Kalman filter assumes that the disturbances and noise are independent and white. Removing the assumption of independence is straightforward and simply results in a cross term ($E\{V(t)W^T(s)\} = R_{VW}\,\delta(s - t)$) being carried through all calculations. To remove the assumption of white noise sources, we construct a filter that takes white noise as an input and produces a noise source with the appropriate correlation function (or, equivalently, spectral power density function). The intuition behind this approach is that we must have an internal model of the noise and/or disturbances in order to capture the correlation between different times.

Extended Kalman filters

Consider a nonlinear system
\[
\dot X = f(X, U, V), \qquad Y = CX + W,
\]
where $X \in \mathbb{R}^n$, $U \in \mathbb{R}^m$, and $V$, $W$ are Gaussian white noise processes with covariance matrices $R_V$ and $R_W$.

Nonlinear observer:
\[
\dot{\hat X} = f(\hat X, U, 0) + L(Y - C\hat X).
\]
Error dynamics: with $E = X - \hat X$,
\[
\dot E = f(X, U, V) - f(\hat X, U, 0) - LC(X - \hat X) = F(E, \hat X, U, V) - LCE,
\]
where
\[
F(E, \hat X, U, V) = f(E + \hat X, U, V) - f(\hat X, U, 0).
\]
Now linearize around the current estimate $\hat X$:
\[
\dot E = \frac{\partial F}{\partial E} E
+ \underbrace{F(0, \hat X, U, 0)}_{=0}
+ \underbrace{\frac{\partial F}{\partial V} V}_{\text{noise}}
- \underbrace{LCE}_{\text{observer gain}}
+ \text{h.o.t.}
= \tilde A E + \tilde F V - LCE + \text{h.o.t.},
\]

where
\[
\tilde A = \frac{\partial F}{\partial E}\bigg|_{(0, \hat X, U, 0)} = \frac{\partial f}{\partial X}\bigg|_{(\hat X, U, 0)}, \qquad
\tilde F = \frac{\partial F}{\partial V}\bigg|_{(0, \hat X, U, 0)} = \frac{\partial f}{\partial V}\bigg|_{(\hat X, U, 0)}
\]
depend on the current estimate $\hat X$.

Idea: design the observer for the linearized system around the current estimate:
\[
\dot{\hat X} = f(\hat X, U, 0) + L(Y - C\hat X), \qquad L = PC^T R_W^{-1},
\]
\[
\dot P = (\tilde A - LC)P + P(\tilde A - LC)^T + \tilde F R_V \tilde F^T + L R_W L^T, \qquad P(t_0) = E\{X(t_0)X^T(t_0)\}.
\]
This is called the (Schmidt) extended Kalman filter (EKF).

Remarks:
1. We cannot prove very much about the EKF due to the nonlinear terms.
2. In applications it works very well; it is one of the most used forms of the Kalman filter.
3. Unscented Kalman filters
4. Information form of the Kalman filter (next week?)

Application: parameter ID

Consider a linear system with unknown parameters $\xi \in \mathbb{R}^p$:
\[
\dot X = A(\xi)X + B(\xi)U + FV, \qquad Y = C(\xi)X + W.
\]
Parameter ID problem: given $U(t)$ and $Y(t)$, estimate the value of the parameters $\xi$.

One approach: treat $\xi$ as an unknown state with dynamics $\dot\xi = 0$:
\[
\frac{d}{dt}\begin{bmatrix} X \\ \xi \end{bmatrix}
= \underbrace{\begin{bmatrix} A(\xi) & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} X \\ \xi \end{bmatrix}
+ \begin{bmatrix} B(\xi) \\ 0 \end{bmatrix} U
+ \begin{bmatrix} F \\ 0 \end{bmatrix} V}_{f(X,\, \xi,\, U,\, V)},
\qquad
Y = \underbrace{C(\xi)X + W}_{h(X,\, \xi,\, W)}.
\]
Now use the EKF to estimate $X$ and $\xi$, and thereby determine the unknown parameters $\xi \in \mathbb{R}^p$.

Remark: we need various observability conditions on the augmented system in order for this to work.
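A minimal sketch of this parameter-ID idea, assuming a scalar system $\dot x = -\xi x + u$ with unknown $\xi$: run the EKF of this section on the augmented state $z = (x, \xi)$, with $\dot\xi = 0$. All numerical values (true $\xi = 2$, noise levels, constant input) are illustrative assumptions.

```python
import numpy as np

# EKF on the augmented state z = (x, xi) for xdot = -xi*x + u, y = x + w.
# True xi = 2; the filter starts from xi_hat = 1. Noise levels, the small
# process noise RV (which keeps P from collapsing), and u are illustrative.
rng = np.random.default_rng(2)

RW = 0.01                        # measurement noise weight in the filter
RV = np.diag([1e-4, 1e-3])       # assumed process noise on (x, xi)
Cm = np.array([[1.0, 0.0]])

xi_true, u = 2.0, 1.0
x = 1.0                          # true state
z = np.array([0.0, 1.0])         # estimate: x_hat = 0, xi_hat = 1
P = np.eye(2)

dt, T = 0.001, 20.0
for k in range(int(T / dt)):
    x += dt * (-xi_true * x + u)                    # true (noise-free) dynamics
    y = x + 0.01 * rng.standard_normal()            # small sampled noise, for illustration
    # Jacobian A~ of f(z, u) = (-xi*x + u, 0) at the current estimate
    At = np.array([[-z[1], -z[0]], [0.0, 0.0]])
    L = P @ Cm.T / RW                               # L = P C^T R_W^{-1}
    z = z + dt * (np.array([-z[1] * z[0] + u, 0.0]) + (L * (y - z[0])).ravel())
    # Riccati form of the covariance update (equivalent to the P equation
    # above once L = P C^T R_W^{-1} is substituted)
    P = P + dt * (At @ P + P @ At.T - P @ Cm.T @ Cm @ P / RW + RV)

# z[1] is xi_hat; with these settings it should approach the true value
```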

Unscented Kalman filter

5.3 LQG Control

Return to the original H2 control problem:
\[
\dot X = AX + BU + FV, \qquad Y = CX + W,
\]
with $V$, $W$ Gaussian white noise with covariances $R_V$, $R_W$.

Stochastic control problem: find $C(s)$ to minimize
\[
J = E\left\{ \int_0^\infty \left[ (Y - r)^T R_V (Y - r) + U^T R_W U \right] dt \right\}.
\]
Assume for simplicity that the reference input $r = 0$ (otherwise, translate the state accordingly).

Theorem 5.2 (Separation principle). The optimal controller has the form
\[
\dot{\hat X} = A\hat X + BU + L(Y - C\hat X), \qquad U = -K(\hat X - X_d),
\]
where $L$ is the optimal observer gain ignoring the controller and $K$ is the optimal controller gain ignoring the noise.

This is called the separation principle (for H2 control).

5.4 Application to the Caltech Ducted Fan

To illustrate the use of the Kalman filter, we consider the problem of estimating the state for the Caltech ducted fan, described already in Section ??.

5.5 Further Reading

Exercises

5.1 Consider the problem of estimating the position of an autonomous mobile vehicle using a GPS receiver and an IMU (inertial measurement unit). The dynamics of the vehicle are given by

5.5. FURTHER READING 6 φ y l θ ẋ = cos θ v ẏ = sin θ v θ = 1 tan φv, l x We assume that the vehicle is disturbance free, but that we have noisy measurements from the GPS receiver and IMU and an initial condition error. In this problem we will utilize the full form of the Kalman filter (including the P equation). (a) Suppose first that we only have the GPS measurements for the xy position of the vehicle. These measurements give the position of the vehicle with approximately 1 meter accuracy. Model the GPS error as Gaussian white noise with σ = 1.2 meter in each direction and design an optimal estimator for the system. Plot the estimated states and the covariances for each state starting with an initial condition of 5 degree heading error at 10 meters/sec forward speed (i.e., choose x(0) = (0, 0, 5π/180) and ˆx = (0, 0, 0)). (b) An IMU can be used to measure angular rates and linear acceleration. Assume that we use a Northrop Grumman LN200 to measure the angular rate θ. Use the datasheet on the course web page to determine a model for the noise process and design a Kalman filter that fuses the GPS and IMU to determine the position of the vehicle. Plot the estimated states and the covariances for each state starting with an initial condition of 5 degree heading error at 10 meters/sec forward speed. Note: be careful with units on this problem!