LMIs for Observability and Observer Design

LMIs for Observability and Observer Design
Matthew M. Peet, Arizona State University
Lecture 06: LMIs for Observability and Observer Design

Observability

Consider a system with no input:
ẋ(t) = Ax(t), x(0) = x_0
y(t) = Cx(t)

Definition 1. For a given T, the pair (C, A) is Observable on [0, T] if, given y(t) for t ∈ [0, T], we can reconstruct x_0.

Definition 2. Given (C, A), the flow map Ψ_T : R^n → F(R, R^p) is
Ψ_T : x_0 ↦ Ce^{At} x_0, t ∈ [0, T].
So y = Ψ_T x_0 means y(t) = Ce^{At} x_0.

Proposition 1. The pair (C, A) is observable if and only if Ψ_T is left-invertible, which is equivalent to ker Ψ_T = {0}.

Observability

Definition 3. The Observability Matrix O(C, A) is defined as
O(C, A) = [C; CA; CA^2; ...; CA^{n-1}]

Theorem 4.
ker Ψ_T = ker C ∩ ker CA ∩ ker CA^2 ∩ ... ∩ ker CA^{n-1} = ker O(C, A)

Definition 5. The Unobservable Subspace is N_{CA} = ker Ψ_T = ker O(C, A).

Theorem 6. For a given pair (C, A), the following are equivalent:
ker Y = {0}
ker Ψ_T = {0}
ker O(C, A) = {0}

If the state is observable, then it is observable arbitrarily fast.
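
As a quick numerical illustration of Definition 3 and Theorem 6, the observability matrix can be stacked and rank-checked directly. This is a minimal sketch with a hypothetical (C, A) pair, not one taken from the lecture; full rank of O(C, A) is equivalent to observability.

import numpy as np

# Hypothetical 3-state, single-output pair used only for illustration.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
C = np.array([[1.0, 0.0, 0.0]])
n = A.shape[0]

# Stack C, CA, ..., CA^{n-1} to form the observability matrix O(C, A).
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print("rank O(C, A) =", np.linalg.matrix_rank(O), "(= n means (C, A) is observable)")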

The Observability Gramian

Definition 7. For the pair (C, A), the Observability Gramian is defined as
Y = Ψ_T^* Ψ_T = ∫_0^T e^{A^T s} C^T C e^{As} ds

Observable Ellipsoid: The set of initial states which result in an output y with norm ‖y‖ ≤ 1 is the ellipsoid
{x ∈ R^n : ‖Ψ_T x‖^2 = x^T Y x ≤ 1}
an ellipsoid with semiaxis lengths 1/√λ_i(Y)
an ellipsoid with semiaxis directions given by the eigenvectors of Y
If λ_i(Y) = 0 for some i, then (C, A) is not observable. Note that the major axes correspond to the WEAKLY observable states.
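
A minimal numerical sketch of Definition 7: the finite-horizon Gramian can be approximated by quadrature, and its eigendecomposition gives the semiaxis lengths and directions of the observable ellipsoid. The matrices and the horizon T below are hypothetical, not taken from the lecture.

import numpy as np
from scipy.linalg import expm

# Hypothetical data for illustration.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
n, T, N = A.shape[0], 5.0, 2000

# Trapezoidal approximation of Y = int_0^T e^{A^T s} C^T C e^{A s} ds.
Y = np.zeros((n, n))
ds = T / N
for k in range(N + 1):
    s = k * ds
    M = expm(A.T * s) @ C.T @ C @ expm(A * s)
    Y += (0.5 if k in (0, N) else 1.0) * M * ds

lam, V = np.linalg.eigh(Y)
print("semiaxis lengths 1/sqrt(lambda_i):", 1.0 / np.sqrt(lam))
print("semiaxis directions (columns of V):\n", V)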

Duality

The Controllability and Observability matrices are related:
O(C, A) = C(A^T, C^T)^T
C(A, B) = O(B^T, A^T)^T
For this reason, the studies of controllability and observability are closely related:
ker O(C, A) = [image C(A^T, C^T)]^⊥
image C(A, B) = [ker O(B^T, A^T)]^⊥
We can investigate observability of (C, A) by studying controllability of (A^T, C^T):
(C, A) is observable if and only if (A^T, C^T) is controllable.

Lemma 8 (An LMI for the Observability Gramian). (C, A) is observable if and only if Y > 0 is the unique solution to
A^T Y + Y A + C^T C = 0
Recall W > 0 with AW + W A^T + BB^T = 0 for controllability!
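
The Lyapunov equation in Lemma 8 can be checked numerically. A minimal sketch with a hypothetical stable pair (C, A): solving A^T Y + Y A + C^T C = 0 and testing Y > 0 is, by duality, the same computation as finding the controllability Gramian of (A^T, C^T).

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Hurwitz A and output matrix C.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q,
# so a = A^T and q = -C^T C gives A^T Y + Y A + C^T C = 0.
Y = solve_continuous_lyapunov(A.T, -C.T @ C)
print("min eigenvalue of Y:", np.linalg.eigvalsh(Y).min())  # > 0  <=>  (C, A) observable

# Duality: the same call is the controllability Gramian equation for (A^T, C^T),
# i.e. (A^T) W + W (A^T)^T + (C^T)(C^T)^T = 0.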

Observers

Suppose we have designed a full-state feedback controller
u(t) = F x(t),
but we can only measure y(t) = Cx(t)!
Question: How do we find x(t)?
If (C, A) is observable, then we can reconstruct x(t) from y on the interval [t, t + T]. But by then it's too late! We need x(t) in real time!

Definition 9. An Observer is an artificial dynamical system whose output tracks x(t).

Suppose we want to observe the following system:
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
Let's assume the observer is itself a state-space system.
What are our inputs and outputs?
What is the dimension of the system?

Observers

Inputs: u(t) and y(t).
Output: the estimate of the state, x̂(t).
Assume the observer has the same dimension as the system:
ż(t) = Mz(t) + Ny(t) + Pu(t)
x̂(t) = Qz(t) + Ry(t) + Su(t)
We want lim_{t→∞} e(t) = lim_{t→∞} (x(t) − x̂(t)) = 0 for any u, z(0), and x(0).
We would also like internal stability, etc.

Coupled System and Observer Dynamics

System Dynamics:
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
Observer Dynamics:
ż(t) = Mz(t) + Ny(t) + Pu(t)
x̂(t) = Qz(t) + Ry(t) + Su(t)

Dynamics of the Error: What are the dynamics of e(t) = x(t) − x̂(t)?
ė(t) = ẋ(t) − d/dt x̂(t)
= Ax(t) + Bu(t) − Qż(t) − Rẏ(t) − Su̇(t)
= Ax(t) + Bu(t) − Q(Mz(t) + Ny(t) + Pu(t)) − R(Cẋ(t) + Du̇(t)) − Su̇(t)
= Ax(t) + Bu(t) − QMz(t) − QN(Cx(t) + Du(t)) − QPu(t) − RC(Ax(t) + Bu(t)) − (S + RD)u̇(t)
= (A − RCA − QNC)e(t) + ((A − RCA − QNC)Q − QM)z(t) + (A − RCA − QNC)Ry(t) + (B − RCB − QP − QND)u(t) − (S + RD)u̇(t)
Designing an observer requires that these error dynamics be Hurwitz (and that the coefficients of z(t), y(t), u(t), and u̇(t) vanish).

The Luenberger Observer

For now, we consider a special kind of observer, parameterized by the matrix L:
ż(t) = (A + LC)z(t) − Ly(t) + (B + LD)u(t)
     = Az(t) + Bu(t) + L(Cz(t) + Du(t) − y(t))
x̂(t) = z(t)
In the general formulation, this corresponds to
M = A + LC;  N = −L;  P = B + LD;  Q = I;  R = 0;  S = 0.
So in this case z(t) = x̂(t) and (A − RCA − QNC) = QM = A + LC.
Furthermore, (A − RCA − QNC)R = 0 and (A − RCA − QNC)Q − QM = 0.
Thus the criterion for convergence is that A + LC be Hurwitz.
Question: Can we choose L such that A + LC is Hurwitz?
This is similar to choosing F such that A + BF is Hurwitz.
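
A minimal simulation sketch of a Luenberger observer, with hypothetical plant matrices and a hand-picked gain L for which A + LC is Hurwitz; the estimation error decays regardless of the input and the initial mismatch.

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical plant; L is chosen so that A + LC is Hurwitz.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))
L = np.array([[-3.0], [-2.0]])

def u(t):
    return np.array([np.sin(t)])          # arbitrary known input

def f(t, w):
    x, z = w[:2], w[2:]
    y = C @ x + D @ u(t)                  # measured output
    dx = A @ x + B @ u(t)
    dz = A @ z + B @ u(t) + L @ (C @ z + D @ u(t) - y)   # Luenberger update
    return np.concatenate([dx, dz])

sol = solve_ivp(f, (0.0, 10.0), np.array([1.0, -1.0, 0.0, 0.0]))
print("final estimation error:", sol.y[:2, -1] - sol.y[2:, -1])   # ~ 0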

Observability

It turns out that observability and detectability are the useful properties here.

Theorem 10. The eigenvalues of A + LC are freely assignable through L if and only if (C, A) is observable.

If we only need A + LC Hurwitz, then the test is easier: we only need detectability.

Theorem 11. An observer exists if and only if (C, A) is detectable.

Note: this theorem applies to ANY observer, not just Luenberger observers.
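
Theorem 10 is usually exercised through the dual problem: since eig(A + LC) = eig(A^T + C^T L^T), any state-feedback pole-placement routine applied to (A^T, C^T) yields L. A sketch with a hypothetical observable pair, using scipy's place_poles:

import numpy as np
from scipy.signal import place_poles

# Hypothetical observable pair.
A = np.array([[0.0, 1.0], [3.0, -2.0]])
C = np.array([[1.0, 0.0]])

# place_poles returns K with eig(A^T - C^T K) at the desired locations,
# so L = -K^T gives eig(A + LC) at those locations.
res = place_poles(A.T, C.T, [-4.0, -5.0])
L = -res.gain_matrix.T
print("eig(A + LC):", np.linalg.eigvals(A + L @ C))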

An LMI for Observer Synthesis

Question: How do we compute L?
The eigenvalues of A + LC and (A + LC)^T = A^T + C^T L^T are the same. This is the same problem as controller design!

Theorem 12. There exists a K such that A + BK is stable if and only if there exist P > 0 and Z such that
AP + PA^T + BZ + Z^T B^T < 0, where K = ZP^{-1}.

Theorem 13. There exists an L such that A + LC is stable if and only if there exist P > 0 and Z such that
A^T P + PA + C^T Z + Z^T C < 0, where L = P^{-1} Z^T.

So now we know how to design a Luenberger observer (also called an estimator).
The error dynamics are dictated by the eigenvalues of A + LC.
It is generally a good idea for the observer to converge faster than the plant.
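
A minimal sketch of the LMI in Theorem 13 using cvxpy (assuming cvxpy with an SDP-capable solver such as SCS is installed); the plant matrices are hypothetical, and the strict inequalities are handled with a small margin eps.

import numpy as np
import cvxpy as cp

# Hypothetical detectable pair.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]

P = cp.Variable((n, n), symmetric=True)
Z = cp.Variable((p, n))                      # Z = L^T P, so L = P^{-1} Z^T
eps = 1e-6
lmi = A.T @ P + P @ A + C.T @ Z + Z.T @ C
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), lmi << -eps * np.eye(n)])
prob.solve()

L = np.linalg.solve(P.value, Z.value.T)      # L = P^{-1} Z^T
print("eig(A + LC):", np.linalg.eigvals(A + L @ C))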

Observer-Based Controllers

Summary: What do we know?
How to design a controller which uses the full state.
How to design an observer which converges to the full state.
Question: Is the combined system stable?
We know the error dynamics converge. Let's look at the coupled dynamics.

Proposition 2. The system defined by
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
u(t) = F x̂(t)
d/dt x̂(t) = (A + LC + BF + LDF) x̂(t) − Ly(t)
has eigenvalues equal to those of A + LC and A + BF.

Note that we have eliminated the explicit dependence on u(t) in the observer, since u(t) = F x̂(t).

Observer-Based Controllers

The proof is relatively easy.

Proof. The state dynamics are
ẋ(t) = Ax(t) + BF x̂(t).
Rewrite the estimation dynamics as
d/dt x̂(t) = (A + LC + BF + LDF) x̂(t) − Ly(t)
= (A + LC) x̂(t) + (B + LD) F x̂(t) − LCx(t) − LDu(t)
= (A + LC) x̂(t) + (B + LD) u(t) − LCx(t) − LDu(t)
= (A + LC) x̂(t) + Bu(t) − LCx(t)
= (A + LC + BF) x̂(t) − LCx(t)
In state-space form, we get
d/dt [x(t); x̂(t)] = [A, BF; −LC, A + LC + BF] [x(t); x̂(t)]

Observer-Based Controllers

Proof (continued).
d/dt [x(t); x̂(t)] = [A, BF; −LC, A + LC + BF] [x(t); x̂(t)]
Use the similarity transform T = T^{-1} = [I, 0; I, −I]:
T Ā T^{-1} = [I, 0; I, −I] [A, BF; −LC, A + LC + BF] [I, 0; I, −I]
= [I, 0; I, −I] [A + BF, −BF; A + BF, −(A + LC + BF)]
= [A + BF, −BF; 0, A + LC]
which has the eigenvalues of A + BF and A + LC.
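
The separation property in Proposition 2 is easy to confirm numerically. A sketch with hypothetical gains F and L (any stabilizing choices work): the eigenvalues of the coupled matrix are exactly those of A + BF together with those of A + LC.

import numpy as np

# Hypothetical plant with stabilizing state-feedback gain F and observer gain L.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
F = np.array([[-3.0, -3.0]])    # A + BF Hurwitz
L = np.array([[-4.0], [-5.0]])  # A + LC Hurwitz

# Coupled matrix from Proposition 2: [A, BF; -LC, A + LC + BF]
Acl = np.block([[A, B @ F], [-L @ C, A + L @ C + B @ F]])
print(np.sort_complex(np.linalg.eigvals(Acl)))
print(np.sort_complex(np.concatenate([np.linalg.eigvals(A + B @ F),
                                      np.linalg.eigvals(A + L @ C)])))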

An LMI for Observer D-Stability

Use the Controller Synthesis LMI to choose K. Then use the following LMI to choose L. If both A + LC and A + BK satisfy the D-stability condition, then the eigenvalues of the closed-loop system will as well.

Lemma 14 (An LMI for D-Observer Design). Suppose there exist P > 0 and Z such that
[−rP, (PA + ZC)^T; PA + ZC, −rP] < 0,
(PA + ZC)^T + PA + ZC + 2αP < 0, and
[c((PA + ZC)^T + PA + ZC), (PA + ZC)^T − (PA + ZC); PA + ZC − (PA + ZC)^T, c((PA + ZC)^T + PA + ZC)] < 0.
Then if L = P^{-1}Z, the pole locations z ∈ C of A + LC satisfy |z| ≤ r, Re z ≤ −α, and c|z + z̄| ≥ |z − z̄|.
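
A sketch of Lemma 14 in cvxpy, with hypothetical plant matrices and illustrative region parameters r, alpha, and c; the three LMIs are imposed together and L is recovered as P^{-1}Z.

import numpy as np
import cvxpy as cp

# Hypothetical data; r (disk radius), alpha (decay rate), c (sector) are illustrative.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]
r, alpha, c = 10.0, 1.0, 2.0

P = cp.Variable((n, n), symmetric=True)
Z = cp.Variable((n, p))                      # here Z = P L, so L = P^{-1} Z
S = P @ A + Z @ C
eps = 1e-6
cons = [P >> eps * np.eye(n),
        cp.bmat([[-r * P, S.T], [S, -r * P]]) << -eps * np.eye(2 * n),
        S + S.T + 2 * alpha * P << -eps * np.eye(n),
        cp.bmat([[c * (S + S.T), S.T - S],
                 [S - S.T, c * (S + S.T)]]) << -eps * np.eye(2 * n)]
cp.Problem(cp.Minimize(0), cons).solve()

L = np.linalg.solve(P.value, Z.value)
print("eig(A + LC):", np.linalg.eigvals(A + L @ C))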

One and Two-Step Discrete-Time Observers

x̂_{k+1} = A x̂_k + B u_k + L(C x̂_k + D u_k − y_k)
This gives error (e_k = x_k − x̂_k) dynamics
e_{k+1} = (A + LC) e_k
So the problem is exactly the same as in the continuous-time case.

New Problem: The feedback at step k doesn't include the latest measurement y_k. Instead, take the output of the previous estimator and propagate it forward:
x̄_k = A x̂_{k−1} + B u_{k−1}   (current state estimate w/o update)
x̂_k = x̄_k + L(C x̄_k + D u_k − y_k)
Eliminating x̂, we get the Current State Estimator:
x̄_{k+1} = A x̄_k + B u_k + AL(C x̄_k + D u_k − y_k)
The error dynamics then become
e_{k+1} = (A + LCA) e_k
This is not a more difficult problem to solve (replace C with CA).
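
A sketch of current-estimator design via the dual pole-placement problem (hypothetical discrete-time matrices): the error dynamics e_{k+1} = (A + LCA) e_k mean we simply design against the pair (CA, A) instead of (C, A).

import numpy as np
from scipy.signal import place_poles

# Hypothetical discrete-time system.
A = np.array([[0.0, 1.0], [-0.5, 1.2]])
C = np.array([[1.0, 0.0]])

# Replace C with CA and solve the dual placement problem; poles inside the
# unit circle make the error dynamics e_{k+1} = (A + LCA) e_k converge.
res = place_poles(A.T, (C @ A).T, [0.1, 0.2])
L = -res.gain_matrix.T
print("eig(A + LCA):", np.linalg.eigvals(A + L @ C @ A))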

Summary of LMIs Learned

Examples:

Example 6.2: Jet Aircraft. ẋ = Ax + Bu and y = Cx, with
A = [−0.0558, −0.9968, 0.0802, 0.0415; 0.5980, −0.1150, −0.0318, 0; −3.0500, 0.388, −0.465, 0; 0, 0.0805, 1, 0]
B = [0.0729, 0.0001; −4.75, 1.23; 1.53, 10.63; 0, 0]
C = [0, 1, 0, 0; 0, 0, 0, 1]

Example 6.3: Discrete-Time System. x_{k+1} = Ax_k + Bu_k and y_k = Cx_k, with
A = [0, 1, 0; 1, 1, 0; 1, 0, 0],  B = [0; 1; 0],  C = [0, 0, 1; 1, 0, 0]
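
As a closing sketch, the observability test from earlier in the lecture applied to Example 6.2 as reconstructed above (the signs of the entries are inferred from the standard lateral aircraft model, so treat the numbers as illustrative):

import numpy as np

A = np.array([[-0.0558, -0.9968,  0.0802, 0.0415],
              [ 0.5980, -0.1150, -0.0318, 0.0],
              [-3.0500,  0.3880, -0.4650, 0.0],
              [ 0.0,     0.0805,  1.0,    0.0]])
C = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
n = A.shape[0]

# Rank of the observability matrix: full rank <=> (C, A) observable,
# so an observer gain L with A + LC Hurwitz exists (Theorems 10-11).
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print("rank O(C, A) =", np.linalg.matrix_rank(O), "of", n)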