
Modern Optimal Control
Matthew M. Peet, Arizona State University
Lecture 21: Optimal Output Feedback Control

Optimal Output Feedback
Recall: the Linear Fractional Transformation (LFT)

Plant:
\[
\begin{bmatrix} z \\ y \end{bmatrix} = P \begin{bmatrix} w \\ u \end{bmatrix},
\qquad
P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}
  = \left[ \begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{array} \right]
\]

Controller:
\[
u = K y, \qquad K = \left[ \begin{array}{c|c} A_K & B_K \\ \hline C_K & D_K \end{array} \right]
\]

The map from w to z is given by
\[
S(P, K) = P_{11} + P_{12} K (I - P_{22} K)^{-1} P_{21}.
\]
This interconnection is called the (lower) star-product of P and K, or the (lower) Linear Fractional Transformation.
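
The star-product formula is easy to sanity-check numerically for static (constant-gain) blocks. The following is a minimal numpy sketch; the function name and the static-gain setting are illustrative assumptions, not part of the lecture.

```python
import numpy as np

def star_product(P11, P12, P21, P22, K):
    """Lower star product S(P, K) = P11 + P12 K (I - P22 K)^{-1} P21,
    evaluated here for constant matrices (e.g., a frequency-by-frequency check)."""
    ny = P22.shape[0]
    # solve() applies (I - P22 K)^{-1} to P21 without forming the inverse
    return P11 + P12 @ K @ np.linalg.solve(np.eye(ny) - P22 @ K, P21)

# Example with random static blocks (nz = nw = nu = ny = 2)
rng = np.random.default_rng(0)
P11, P12 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
P21, P22 = rng.standard_normal((2, 2)), 0.1 * rng.standard_normal((2, 2))
K = rng.standard_normal((2, 2))
print(star_product(P11, P12, P21, P22, K))
```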

Optimal Control
Choose K to minimize \(\| P_{11} + P_{12} (I - K P_{22})^{-1} K P_{21} \|_{H_\infty}\). Equivalently, choose K to minimize
\[
\left\|
\begin{array}{c|c}
\begin{bmatrix} A & 0 \\ 0 & A_K \end{bmatrix}
+ \begin{bmatrix} B_2 & 0 \\ 0 & B_K \end{bmatrix}
  \begin{bmatrix} I & -D_K \\ -D_{22} & I \end{bmatrix}^{-1}
  \begin{bmatrix} 0 & C_K \\ C_2 & 0 \end{bmatrix}
&
\begin{bmatrix} B_1 + B_2 D_K Q D_{21} \\ B_K Q D_{21} \end{bmatrix}
\\ \hline
\begin{bmatrix} C_1 & 0 \end{bmatrix}
+ \begin{bmatrix} D_{12} & 0 \end{bmatrix}
  \begin{bmatrix} I & -D_K \\ -D_{22} & I \end{bmatrix}^{-1}
  \begin{bmatrix} 0 & C_K \\ C_2 & 0 \end{bmatrix}
&
D_{11} + D_{12} D_K Q D_{21}
\end{array}
\right\|_{H_\infty}
\]
where \(Q = (I - D_{22} D_K)^{-1}\).

Optimal Control
Recall the Matrix Inversion Lemma:

Lemma 1.
\[
\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1}
= \begin{bmatrix}
A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1} & -A^{-1} B (D - C A^{-1} B)^{-1} \\
-(D - C A^{-1} B)^{-1} C A^{-1} & (D - C A^{-1} B)^{-1}
\end{bmatrix}
\]
\[
= \begin{bmatrix}
(A - B D^{-1} C)^{-1} & -(A - B D^{-1} C)^{-1} B D^{-1} \\
-D^{-1} C (A - B D^{-1} C)^{-1} & D^{-1} + D^{-1} C (A - B D^{-1} C)^{-1} B D^{-1}
\end{bmatrix}
\]
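
A quick numerical check of both block-inverse formulas (a sketch; the random, well-conditioned blocks are an assumption made for invertibility):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 4 * np.eye(3)   # shifted to keep A invertible
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2)) + 4 * np.eye(2)   # shifted to keep D invertible

M = np.block([[A, B], [C, D]])
Ai, Di = np.linalg.inv(A), np.linalg.inv(D)
SA = np.linalg.inv(D - C @ Ai @ B)   # inverse of the Schur complement of A
SD = np.linalg.inv(A - B @ Di @ C)   # inverse of the Schur complement of D

form1 = np.block([[Ai + Ai @ B @ SA @ C @ Ai, -Ai @ B @ SA],
                  [-SA @ C @ Ai,              SA]])
form2 = np.block([[SD,           -SD @ B @ Di],
                  [-Di @ C @ SD, Di + Di @ C @ SD @ B @ Di]])

assert np.allclose(form1, np.linalg.inv(M)) and np.allclose(form2, form1)
```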

Optimal Control
Recall that
\[
\begin{bmatrix} I & -D_K \\ -D_{22} & I \end{bmatrix}^{-1}
= \begin{bmatrix} I + D_K Q D_{22} & D_K Q \\ Q D_{22} & Q \end{bmatrix}
\]
where \(Q = (I - D_{22} D_K)^{-1}\). Then
\[
A_{cl} := \begin{bmatrix} A & 0 \\ 0 & A_K \end{bmatrix}
+ \begin{bmatrix} B_2 & 0 \\ 0 & B_K \end{bmatrix}
  \begin{bmatrix} I & -D_K \\ -D_{22} & I \end{bmatrix}^{-1}
  \begin{bmatrix} 0 & C_K \\ C_2 & 0 \end{bmatrix}
= \begin{bmatrix} A & 0 \\ 0 & A_K \end{bmatrix}
+ \begin{bmatrix} B_2 & 0 \\ 0 & B_K \end{bmatrix}
  \begin{bmatrix} I + D_K Q D_{22} & D_K Q \\ Q D_{22} & Q \end{bmatrix}
  \begin{bmatrix} 0 & C_K \\ C_2 & 0 \end{bmatrix}
= \begin{bmatrix} A + B_2 D_K Q C_2 & B_2 (I + D_K Q D_{22}) C_K \\ B_K Q C_2 & A_K + B_K Q D_{22} C_K \end{bmatrix}.
\]
Likewise,
\[
C_{cl} := \begin{bmatrix} C_1 & 0 \end{bmatrix}
+ \begin{bmatrix} D_{12} & 0 \end{bmatrix}
  \begin{bmatrix} I + D_K Q D_{22} & D_K Q \\ Q D_{22} & Q \end{bmatrix}
  \begin{bmatrix} 0 & C_K \\ C_2 & 0 \end{bmatrix}
= \begin{bmatrix} C_1 + D_{12} D_K Q C_2 & D_{12} (I + D_K Q D_{22}) C_K \end{bmatrix}.
\]

Thus we have
\[
\left[\begin{array}{cc|c}
A + B_2 D_K Q C_2 & B_2 (I + D_K Q D_{22}) C_K & B_1 + B_2 D_K Q D_{21} \\
B_K Q C_2 & A_K + B_K Q D_{22} C_K & B_K Q D_{21} \\ \hline
C_1 + D_{12} D_K Q C_2 & D_{12} (I + D_K Q D_{22}) C_K & D_{11} + D_{12} D_K Q D_{21}
\end{array}\right]
\]
where \(Q = (I - D_{22} D_K)^{-1}\). This is nonlinear in \((A_K, B_K, C_K, D_K)\). Hence we make a change of variables (the first of several):
\[
A_{K2} = A_K + B_K Q D_{22} C_K, \quad
B_{K2} = B_K Q, \quad
C_{K2} = (I + D_K Q D_{22}) C_K, \quad
D_{K2} = D_K Q.
\]

This yields the system
\[
\left[\begin{array}{cc|c}
A + B_2 D_{K2} C_2 & B_2 C_{K2} & B_1 + B_2 D_{K2} D_{21} \\
B_{K2} C_2 & A_{K2} & B_{K2} D_{21} \\ \hline
C_1 + D_{12} D_{K2} C_2 & D_{12} C_{K2} & D_{11} + D_{12} D_{K2} D_{21}
\end{array}\right],
\]
which is affine in \(\begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}\).

Hence we can optimize over our new variables. However, the change of variables must be invertible. If we recall that
\[
(I - QM)^{-1} = I + Q (I - MQ)^{-1} M,
\]
then we get
\[
I + D_K Q D_{22} = I + D_K (I - D_{22} D_K)^{-1} D_{22} = (I - D_K D_{22})^{-1}.
\]
Examine the variable \(C_{K2}\):
\[
C_{K2} = \bigl(I + D_K (I - D_{22} D_K)^{-1} D_{22}\bigr) C_K = (I - D_K D_{22})^{-1} C_K.
\]
Hence, given \(C_{K2}\), we can recover \(C_K\) as \(C_K = (I - D_K D_{22}) C_{K2}\).
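
The push-through identity recalled above can also be confirmed numerically; a small sketch (Qm and Mm are generic stand-ins, not the Q of this slide):

```python
import numpy as np

rng = np.random.default_rng(0)
Qm = 0.3 * rng.standard_normal((2, 3))   # generic "Q" (scaled so I - Qm Mm is invertible)
Mm = 0.3 * rng.standard_normal((3, 2))   # generic "M"
lhs = np.linalg.inv(np.eye(2) - Qm @ Mm)
rhs = np.eye(2) + Qm @ np.linalg.inv(np.eye(3) - Mm @ Qm) @ Mm
assert np.allclose(lhs, rhs)             # (I - QM)^{-1} = I + Q (I - MQ)^{-1} M
```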

Now suppose we have \(D_{K2}\). Then
\[
D_{K2} = D_K Q = D_K (I - D_{22} D_K)^{-1}
\]
implies that
\[
D_K = D_{K2} (I - D_{22} D_K) = D_{K2} - D_{K2} D_{22} D_K,
\]
or \((I + D_{K2} D_{22}) D_K = D_{K2}\), which can be inverted to get
\[
D_K = (I + D_{K2} D_{22})^{-1} D_{K2}.
\]

Once we have \(C_K\) and \(D_K\), the other variables are easily recovered as
\[
B_K = B_{K2} Q^{-1} = B_{K2} (I - D_{22} D_K), \qquad
A_K = A_{K2} - B_K (I - D_{22} D_K)^{-1} D_{22} C_K.
\]
To summarize, the original variables can be recovered as
\[
D_K = (I + D_{K2} D_{22})^{-1} D_{K2}, \quad
B_K = B_{K2} (I - D_{22} D_K), \quad
C_K = (I - D_K D_{22}) C_{K2}, \quad
A_K = A_{K2} - B_K (I - D_{22} D_K)^{-1} D_{22} C_K.
\]
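
The four recovery formulas can be checked by a numeric round trip: apply the change of variables to a random controller, then invert it. A hedged sketch (dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
n, nu, ny = 3, 2, 2
AK, BK = rng.standard_normal((n, n)), rng.standard_normal((n, ny))
CK, DK = rng.standard_normal((nu, n)), rng.standard_normal((nu, ny))
D22 = 0.1 * rng.standard_normal((ny, nu))   # small, so I - D22 DK is invertible

Q = np.linalg.inv(np.eye(ny) - D22 @ DK)

# Forward change of variables
AK2 = AK + BK @ Q @ D22 @ CK
BK2 = BK @ Q
CK2 = (np.eye(nu) + DK @ Q @ D22) @ CK
DK2 = DK @ Q

# Recovery, exactly as summarized above
DKr = np.linalg.solve(np.eye(nu) + DK2 @ D22, DK2)
BKr = BK2 @ (np.eye(ny) - D22 @ DKr)
CKr = (np.eye(nu) - DKr @ D22) @ CK2
AKr = AK2 - BKr @ np.linalg.solve(np.eye(ny) - D22 @ DKr, D22 @ CKr)

for orig, rec in [(AK, AKr), (BK, BKr), (CK, CKr), (DK, DKr)]:
    assert np.allclose(orig, rec)
```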

Or
\[
\left[\begin{array}{c|c} A_{cl} & B_{cl} \\ \hline C_{cl} & D_{cl} \end{array}\right]
:= \left[\begin{array}{cc|c}
A + B_2 D_{K2} C_2 & B_2 C_{K2} & B_1 + B_2 D_{K2} D_{21} \\
B_{K2} C_2 & A_{K2} & B_{K2} D_{21} \\ \hline
C_1 + D_{12} D_{K2} C_2 & D_{12} C_{K2} & D_{11} + D_{12} D_{K2} D_{21}
\end{array}\right]
= \begin{bmatrix} A & 0 & B_1 \\ 0 & 0 & 0 \\ C_1 & 0 & D_{11} \end{bmatrix}
+ \begin{bmatrix} 0 & B_2 \\ I & 0 \\ 0 & D_{12} \end{bmatrix}
  \begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}
  \begin{bmatrix} 0 & I & 0 \\ C_2 & 0 & D_{21} \end{bmatrix}.
\]
That is,
\[
A_{cl} = \begin{bmatrix} A & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & B_2 \\ I & 0 \end{bmatrix}\begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}\begin{bmatrix} 0 & I \\ C_2 & 0 \end{bmatrix}, \qquad
B_{cl} = \begin{bmatrix} B_1 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 & B_2 \\ I & 0 \end{bmatrix}\begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}\begin{bmatrix} 0 \\ D_{21} \end{bmatrix},
\]
\[
C_{cl} = \begin{bmatrix} C_1 & 0 \end{bmatrix} + \begin{bmatrix} 0 & D_{12} \end{bmatrix}\begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}\begin{bmatrix} 0 & I \\ C_2 & 0 \end{bmatrix}, \qquad
D_{cl} = D_{11} + \begin{bmatrix} 0 & D_{12} \end{bmatrix}\begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}\begin{bmatrix} 0 \\ D_{21} \end{bmatrix}.
\]
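
That the affine expression reproduces the closed loop from the earlier slide can likewise be checked numerically. A sketch, assuming the controller order equals the plant order:

```python
import numpy as np

rng = np.random.default_rng(0)
n, nu, ny, nw, nz = 3, 2, 2, 2, 2
A,   B1  = rng.standard_normal((n, n)),   rng.standard_normal((n, nw))
B2,  C1  = rng.standard_normal((n, nu)),  rng.standard_normal((nz, n))
C2,  D11 = rng.standard_normal((ny, n)),  rng.standard_normal((nz, nw))
D12, D21 = rng.standard_normal((nz, nu)), rng.standard_normal((ny, nw))
D22 = 0.1 * rng.standard_normal((ny, nu))
AK, BK = rng.standard_normal((n, n)), rng.standard_normal((n, ny))
CK, DK = rng.standard_normal((nu, n)), rng.standard_normal((nu, ny))

Q = np.linalg.inv(np.eye(ny) - D22 @ DK)
AK2, BK2 = AK + BK @ Q @ D22 @ CK, BK @ Q
CK2, DK2 = (np.eye(nu) + DK @ Q @ D22) @ CK, DK @ Q

E = np.eye(nu) + DK @ Q @ D22
direct = np.block([
    [A + B2 @ DK @ Q @ C2,   B2 @ E @ CK,            B1 + B2 @ DK @ Q @ D21],
    [BK @ Q @ C2,            AK + BK @ Q @ D22 @ CK, BK @ Q @ D21],
    [C1 + D12 @ DK @ Q @ C2, D12 @ E @ CK,           D11 + D12 @ DK @ Q @ D21],
])

base  = np.block([[A, np.zeros((n, n)), B1],
                  [np.zeros((n, 2 * n + nw))],
                  [C1, np.zeros((nz, n)), D11]])
left  = np.block([[np.zeros((n, n)), B2],
                  [np.eye(n), np.zeros((n, nu))],
                  [np.zeros((nz, n)), D12]])
theta = np.block([[AK2, BK2], [CK2, DK2]])
right = np.block([[np.zeros((n, n)), np.eye(n), np.zeros((n, nw))],
                  [C2, np.zeros((ny, n)), D21]])

assert np.allclose(direct, base + left @ theta @ right)
```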

Lemma 2 (Transformation Lemma). Suppose that
\[
\begin{bmatrix} Y_1 & I \\ I & X_1 \end{bmatrix} > 0.
\]
Then there exist \(X_2, X_3, Y_2, Y_3\) such that
\[
X = \begin{bmatrix} X_1 & X_2 \\ X_2^T & X_3 \end{bmatrix}
  = \begin{bmatrix} Y_1 & Y_2 \\ Y_2^T & Y_3 \end{bmatrix}^{-1} = Y^{-1} > 0,
\]
where \(Y_{cl} = \begin{bmatrix} Y_1 & I \\ Y_2^T & 0 \end{bmatrix}\) has full rank.

Proof. Since
\(\begin{bmatrix} Y_1 & I \\ I & X_1 \end{bmatrix} > 0\),
by the Schur complement, \(X_1 > 0\) and \(X_1^{-1} - Y_1 < 0\). Since \(I - X_1 Y_1 = X_1 (X_1^{-1} - Y_1)\), we conclude that \(I - X_1 Y_1\) is invertible. Choose any two square invertible matrices \(X_2\) and \(Y_2\) such that
\[
X_2 Y_2^T = I - X_1 Y_1.
\]
Because \(Y_2\) is non-singular,
\[
Y_{cl}^T = \begin{bmatrix} Y_1 & Y_2 \\ I & 0 \end{bmatrix} \quad\text{and}\quad X_{cl} = \begin{bmatrix} I & 0 \\ X_1 & X_2 \end{bmatrix}
\]
are also non-singular.

Proof (continued). Now define \(X\) and \(Y\) as
\[
X = X_{cl}^T Y_{cl}^{-1}, \qquad Y = Y_{cl} X_{cl}^{-T}.
\]
Then
\[
XY = X_{cl}^T Y_{cl}^{-1} Y_{cl} X_{cl}^{-T} = I,
\]
so \(Y = X^{-1}\). A direct computation shows that the first block rows of \(X\) and \(Y\) are \([X_1\ X_2]\) and \([Y_1\ Y_2]\), respectively, and that \(X_{cl} Y_{cl} = \begin{bmatrix} Y_1 & I \\ I & X_1 \end{bmatrix}\), so
\[
Y_{cl}^T X Y_{cl} = (X_{cl} Y_{cl})^T = \begin{bmatrix} Y_1 & I \\ I & X_1 \end{bmatrix} > 0,
\]
and hence, since \(Y_{cl}\) is invertible, \(X > 0\).
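
The construction in the proof can be exercised numerically. A sketch, assuming the convenient choice Y2 = I (used again on the Conclusion slide) and an explicit X3 derived from the proof's construction (the slide does not state X3 explicitly):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
R = rng.standard_normal((n, n))
X1 = R @ R.T + n * np.eye(n)                    # X1 > 0
S = rng.standard_normal((n, n))
Y1 = np.linalg.inv(X1) + S @ S.T + np.eye(n)    # Y1 - X1^{-1} > 0

# Schur-complement hypothesis: [Y1 I; I X1] > 0
big = np.block([[Y1, np.eye(n)], [np.eye(n), X1]])
assert np.all(np.linalg.eigvalsh(big) > 0)

# Proof's construction with Y2 = I and X2 Y2^T = I - X1 Y1
Y2 = np.eye(n)
X2 = np.eye(n) - X1 @ Y1
X3 = -X2.T @ Y1                                 # explicit X3 (derived; hypothetical form)
X = np.block([[X1, X2], [X2.T, X3]])

assert np.all(np.linalg.eigvalsh(0.5 * (X + X.T)) > 0)   # X > 0
Y = np.linalg.inv(X)
assert np.allclose(Y[:n, :n], Y1) and np.allclose(Y[:n, n:], Y2)
```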

Lemma 3 (Converse Transformation Lemma). Given
\(X = \begin{bmatrix} X_1 & X_2 \\ X_2^T & X_3 \end{bmatrix} > 0\),
where \(X_2\) has full column rank, let
\[
Y = \begin{bmatrix} Y_1 & Y_2 \\ Y_2^T & Y_3 \end{bmatrix} = X^{-1}.
\]
Then
\(\begin{bmatrix} Y_1 & I \\ I & X_1 \end{bmatrix} > 0\)
and \(Y_{cl} = \begin{bmatrix} Y_1 & I \\ Y_2^T & 0 \end{bmatrix}\) has full column rank.

Proof. Since \(X_2\) is full rank,
\(X_{cl} = \begin{bmatrix} I & 0 \\ X_1 & X_2 \end{bmatrix}\)
also has full column rank. Note that \(XY = I\) implies
\[
Y_{cl}^T X = \begin{bmatrix} Y_1 & Y_2 \\ I & 0 \end{bmatrix}\begin{bmatrix} X_1 & X_2 \\ X_2^T & X_3 \end{bmatrix}
= \begin{bmatrix} I & 0 \\ X_1 & X_2 \end{bmatrix} = X_{cl}.
\]
Hence \(Y_{cl}^T = X_{cl} X^{-1} = X_{cl} Y\) has full column rank. Now, since \(XY = I\) implies \(X_1 Y_1 + X_2 Y_2^T = I\), we have
\[
X_{cl} Y_{cl} = \begin{bmatrix} I & 0 \\ X_1 & X_2 \end{bmatrix}\begin{bmatrix} Y_1 & I \\ Y_2^T & 0 \end{bmatrix}
= \begin{bmatrix} Y_1 & I \\ X_1 Y_1 + X_2 Y_2^T & X_1 \end{bmatrix}
= \begin{bmatrix} Y_1 & I \\ I & X_1 \end{bmatrix}.
\]
Furthermore, because \(Y_{cl}\) has full rank,
\[
\begin{bmatrix} Y_1 & I \\ I & X_1 \end{bmatrix} = X_{cl} Y_{cl} = X_{cl} Y X_{cl}^T = Y_{cl}^T X Y_{cl} > 0.
\]

Theorem 4. The following are equivalent.
- There exists a \(\hat K = \left[\begin{array}{c|c} A_K & B_K \\ \hline C_K & D_K \end{array}\right]\) such that \(\|S(P, \hat K)\|_{H_\infty} < \gamma\).
- There exist \(X_1, Y_1, A_n, B_n, C_n, D_n\) such that \(\begin{bmatrix} X_1 & I \\ I & Y_1 \end{bmatrix} > 0\) and
\[
\begin{bmatrix}
A Y_1 + Y_1 A^T + B_2 C_n + C_n^T B_2^T & (\star)^T & (\star)^T & (\star)^T \\
A^T + A_n + (B_2 D_n C_2)^T & X_1 A + A^T X_1 + B_n C_2 + C_2^T B_n^T & (\star)^T & (\star)^T \\
(B_1 + B_2 D_n D_{21})^T & (X_1 B_1 + B_n D_{21})^T & -\gamma I & (\star)^T \\
C_1 Y_1 + D_{12} C_n & C_1 + D_{12} D_n C_2 & D_{11} + D_{12} D_n D_{21} & -\gamma I
\end{bmatrix} < 0,
\]
where \((\star)^T\) denotes blocks determined by symmetry. Moreover,
\[
\begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}
= \begin{bmatrix} X_2 & X_1 B_2 \\ 0 & I \end{bmatrix}^{-1}
  \left( \begin{bmatrix} A_n & B_n \\ C_n & D_n \end{bmatrix} - \begin{bmatrix} X_1 A Y_1 & 0 \\ 0 & 0 \end{bmatrix} \right)
  \begin{bmatrix} Y_2^T & 0 \\ C_2 Y_1 & I \end{bmatrix}^{-1}
\]
for any full-rank \(X_2\) and \(Y_2\) such that
\[
\begin{bmatrix} X_1 & X_2 \\ X_2^T & X_3 \end{bmatrix} = \begin{bmatrix} Y_1 & Y_2 \\ Y_2^T & Y_3 \end{bmatrix}^{-1}.
\]

Proof: If. Suppose there exist \(X_1, Y_1, A_n, B_n, C_n, D_n\) such that the LMI is feasible. Since \(\begin{bmatrix} X_1 & I \\ I & Y_1 \end{bmatrix} > 0\), by the Transformation Lemma there exist \(X_2, X_3, Y_2, Y_3\) such that
\[
X := \begin{bmatrix} X_1 & X_2 \\ X_2^T & X_3 \end{bmatrix} = \begin{bmatrix} Y_1 & Y_2 \\ Y_2^T & Y_3 \end{bmatrix}^{-1} = Y^{-1} > 0,
\]
where \(Y_{cl} = \begin{bmatrix} Y_1 & I \\ Y_2^T & 0 \end{bmatrix}\) has full rank. Let \(K = \left[\begin{array}{c|c} A_K & B_K \\ \hline C_K & D_K \end{array}\right]\) where
\[
D_K = (I + D_{K2} D_{22})^{-1} D_{K2}, \quad
B_K = B_{K2} (I - D_{22} D_K), \quad
C_K = (I - D_K D_{22}) C_{K2}, \quad
A_K = A_{K2} - B_K (I - D_{22} D_K)^{-1} D_{22} C_K,
\]

Proof: If (continued). and where
\[
\begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}
= \begin{bmatrix} X_2 & X_1 B_2 \\ 0 & I \end{bmatrix}^{-1}
  \left( \begin{bmatrix} A_n & B_n \\ C_n & D_n \end{bmatrix} - \begin{bmatrix} X_1 A Y_1 & 0 \\ 0 & 0 \end{bmatrix} \right)
  \begin{bmatrix} Y_2^T & 0 \\ C_2 Y_1 & I \end{bmatrix}^{-1}.
\]

Proof: If (continued). As discussed previously, this means the closed-loop system is
\[
\left[\begin{array}{c|c} A_{cl} & B_{cl} \\ \hline C_{cl} & D_{cl} \end{array}\right]
= \begin{bmatrix} A & 0 & B_1 \\ 0 & 0 & 0 \\ C_1 & 0 & D_{11} \end{bmatrix}
+ \begin{bmatrix} 0 & B_2 \\ I & 0 \\ 0 & D_{12} \end{bmatrix}
  \begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}
  \begin{bmatrix} 0 & I & 0 \\ C_2 & 0 & D_{21} \end{bmatrix}
\]
\[
= \begin{bmatrix} A & 0 & B_1 \\ 0 & 0 & 0 \\ C_1 & 0 & D_{11} \end{bmatrix}
+ \begin{bmatrix} 0 & B_2 \\ I & 0 \\ 0 & D_{12} \end{bmatrix}
  \begin{bmatrix} X_2 & X_1 B_2 \\ 0 & I \end{bmatrix}^{-1}
  \left( \begin{bmatrix} A_n & B_n \\ C_n & D_n \end{bmatrix} - \begin{bmatrix} X_1 A Y_1 & 0 \\ 0 & 0 \end{bmatrix} \right)
  \begin{bmatrix} Y_2^T & 0 \\ C_2 Y_1 & I \end{bmatrix}^{-1}
  \begin{bmatrix} 0 & I & 0 \\ C_2 & 0 & D_{21} \end{bmatrix}.
\]
We now apply the KYP lemma to this system.

Proof: If (continued). Expanding out, we obtain
\[
\begin{bmatrix} Y_{cl} & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix}^T
\begin{bmatrix}
A_{cl}^T X + X A_{cl} & X B_{cl} & C_{cl}^T \\
B_{cl}^T X & -\gamma I & D_{cl}^T \\
C_{cl} & D_{cl} & -\gamma I
\end{bmatrix}
\begin{bmatrix} Y_{cl} & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix}
=
\begin{bmatrix}
A Y_1 + Y_1 A^T + B_2 C_n + C_n^T B_2^T & (\star)^T & (\star)^T & (\star)^T \\
A^T + A_n + (B_2 D_n C_2)^T & X_1 A + A^T X_1 + B_n C_2 + C_2^T B_n^T & (\star)^T & (\star)^T \\
(B_1 + B_2 D_n D_{21})^T & (X_1 B_1 + B_n D_{21})^T & -\gamma I & (\star)^T \\
C_1 Y_1 + D_{12} C_n & C_1 + D_{12} D_n C_2 & D_{11} + D_{12} D_n D_{21} & -\gamma I
\end{bmatrix} < 0.
\]
Hence, by the KYP lemma, \(S(P, K) = \left[\begin{array}{c|c} A_{cl} & B_{cl} \\ \hline C_{cl} & D_{cl} \end{array}\right]\) satisfies \(\|S(P, K)\|_{H_\infty} < \gamma\).

Proof: Only If. Now suppose that \(\|S(P, K)\|_{H_\infty} < \gamma\) for some \(K = \left[\begin{array}{c|c} A_K & B_K \\ \hline C_K & D_K \end{array}\right]\). Since \(\|S(P, K)\|_{H_\infty} < \gamma\), by the KYP lemma, there exists a
\[
X = \begin{bmatrix} X_1 & X_2 \\ X_2^T & X_3 \end{bmatrix} > 0
\]
such that
\[
\begin{bmatrix}
A_{cl}^T X + X A_{cl} & X B_{cl} & C_{cl}^T \\
B_{cl}^T X & -\gamma I & D_{cl}^T \\
C_{cl} & D_{cl} & -\gamma I
\end{bmatrix} < 0.
\]
Because the inequalities are strict, we can assume that \(X_2\) has full column rank. Define
\[
Y = \begin{bmatrix} Y_1 & Y_2 \\ Y_2^T & Y_3 \end{bmatrix} = X^{-1}
\quad\text{and}\quad
Y_{cl} = \begin{bmatrix} Y_1 & I \\ Y_2^T & 0 \end{bmatrix}.
\]
Then, according to the Converse Transformation Lemma, \(Y_{cl}\) has full column rank and \(\begin{bmatrix} X_1 & I \\ I & Y_1 \end{bmatrix} > 0\).

Proof: Only If (continued). Now, using the given \(A_K, B_K, C_K, D_K\), define the variables
\[
\begin{bmatrix} A_n & B_n \\ C_n & D_n \end{bmatrix}
= \begin{bmatrix} X_2 & X_1 B_2 \\ 0 & I \end{bmatrix}
  \begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}
  \begin{bmatrix} Y_2^T & 0 \\ C_2 Y_1 & I \end{bmatrix}
+ \begin{bmatrix} X_1 A Y_1 & 0 \\ 0 & 0 \end{bmatrix},
\]
where
\[
A_{K2} = A_K + B_K (I - D_{22} D_K)^{-1} D_{22} C_K, \quad
B_{K2} = B_K (I - D_{22} D_K)^{-1},
\]
\[
C_{K2} = \bigl(I + D_K (I - D_{22} D_K)^{-1} D_{22}\bigr) C_K, \quad
D_{K2} = D_K (I - D_{22} D_K)^{-1}.
\]
Then, as before,
\[
\left[\begin{array}{c|c} A_{cl} & B_{cl} \\ \hline C_{cl} & D_{cl} \end{array}\right]
= \begin{bmatrix} A & 0 & B_1 \\ 0 & 0 & 0 \\ C_1 & 0 & D_{11} \end{bmatrix}
+ \begin{bmatrix} 0 & B_2 \\ I & 0 \\ 0 & D_{12} \end{bmatrix}
  \begin{bmatrix} X_2 & X_1 B_2 \\ 0 & I \end{bmatrix}^{-1}
  \left( \begin{bmatrix} A_n & B_n \\ C_n & D_n \end{bmatrix} - \begin{bmatrix} X_1 A Y_1 & 0 \\ 0 & 0 \end{bmatrix} \right)
  \begin{bmatrix} Y_2^T & 0 \\ C_2 Y_1 & I \end{bmatrix}^{-1}
  \begin{bmatrix} 0 & I & 0 \\ C_2 & 0 & D_{21} \end{bmatrix}.
\]

Proof: Only If (continued). Expanding out the LMI, we find
\[
\begin{bmatrix}
A Y_1 + Y_1 A^T + B_2 C_n + C_n^T B_2^T & (\star)^T & (\star)^T & (\star)^T \\
A^T + A_n + (B_2 D_n C_2)^T & X_1 A + A^T X_1 + B_n C_2 + C_2^T B_n^T & (\star)^T & (\star)^T \\
(B_1 + B_2 D_n D_{21})^T & (X_1 B_1 + B_n D_{21})^T & -\gamma I & (\star)^T \\
C_1 Y_1 + D_{12} C_n & C_1 + D_{12} D_n C_2 & D_{11} + D_{12} D_n D_{21} & -\gamma I
\end{bmatrix}
=
\begin{bmatrix} Y_{cl} & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix}^T
\begin{bmatrix}
A_{cl}^T X + X A_{cl} & X B_{cl} & C_{cl}^T \\
B_{cl}^T X & -\gamma I & D_{cl}^T \\
C_{cl} & D_{cl} & -\gamma I
\end{bmatrix}
\begin{bmatrix} Y_{cl} & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix} < 0.
\]

Conclusion
To solve the H∞-optimal output-feedback problem, we solve
\[
\min_{\gamma, X_1, Y_1, A_n, B_n, C_n, D_n} \gamma \quad\text{such that}\quad
\begin{bmatrix} X_1 & I \\ I & Y_1 \end{bmatrix} > 0,
\]
\[
\begin{bmatrix}
A Y_1 + Y_1 A^T + B_2 C_n + C_n^T B_2^T & (\star)^T & (\star)^T & (\star)^T \\
A^T + A_n + (B_2 D_n C_2)^T & X_1 A + A^T X_1 + B_n C_2 + C_2^T B_n^T & (\star)^T & (\star)^T \\
(B_1 + B_2 D_n D_{21})^T & (X_1 B_1 + B_n D_{21})^T & -\gamma I & (\star)^T \\
C_1 Y_1 + D_{12} C_n & C_1 + D_{12} D_n C_2 & D_{11} + D_{12} D_n D_{21} & -\gamma I
\end{bmatrix} < 0.
\]
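
In practice this is a semidefinite program. Below is a hedged CVXPY sketch of the synthesis step; the function name, the small feasibility margin eps (used to approximate the strict inequalities), and the explicit symmetrization are my own choices, not part of the lecture.

```python
import numpy as np
import cvxpy as cp

def hinf_synthesis_lmi(A, B1, B2, C1, C2, D11, D12, D21, eps=1e-6):
    """min gamma s.t. [X1 I; I Y1] > 0 and the 4x4 synthesis LMI < 0."""
    n, nw, nz = A.shape[0], B1.shape[1], C1.shape[0]
    nu, ny = B2.shape[1], C2.shape[0]
    X1 = cp.Variable((n, n), symmetric=True)
    Y1 = cp.Variable((n, n), symmetric=True)
    An = cp.Variable((n, n))
    Bn = cp.Variable((n, ny))
    Cn = cp.Variable((nu, n))
    Dn = cp.Variable((nu, ny))
    gam = cp.Variable(nonneg=True)

    b11 = A @ Y1 + Y1 @ A.T + B2 @ Cn + Cn.T @ B2.T
    b21 = A.T + An + (B2 @ Dn @ C2).T
    b22 = X1 @ A + A.T @ X1 + Bn @ C2 + C2.T @ Bn.T
    b31 = (B1 + B2 @ Dn @ D21).T
    b32 = (X1 @ B1 + Bn @ D21).T
    b41 = C1 @ Y1 + D12 @ Cn
    b42 = C1 + D12 @ Dn @ C2
    b43 = D11 + D12 @ Dn @ D21
    lmi = cp.bmat([[b11, b21.T, b31.T, b41.T],
                   [b21, b22,   b32.T, b42.T],
                   [b31, b32,   -gam * np.eye(nw), b43.T],
                   [b41, b42,   b43,   -gam * np.eye(nz)]])
    pd = cp.bmat([[X1, np.eye(n)], [np.eye(n), Y1]])
    m = 2 * n + nw + nz
    cons = [0.5 * (pd + pd.T) >> eps * np.eye(2 * n),   # symmetrized to be safe
            0.5 * (lmi + lmi.T) << -eps * np.eye(m)]    # lmi is symmetric by construction
    cp.Problem(cp.Minimize(gam), cons).solve()          # needs an SDP solver, e.g. SCS
    return gam.value, X1.value, Y1.value, An.value, Bn.value, Cn.value, Dn.value
```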

Conclusion
Then, we construct our controller using
\[
D_K = (I + D_{K2} D_{22})^{-1} D_{K2}, \quad
B_K = B_{K2} (I - D_{22} D_K), \quad
C_K = (I - D_K D_{22}) C_{K2}, \quad
A_K = A_{K2} - B_K (I - D_{22} D_K)^{-1} D_{22} C_K,
\]
where
\[
\begin{bmatrix} A_{K2} & B_{K2} \\ C_{K2} & D_{K2} \end{bmatrix}
= \begin{bmatrix} X_2 & X_1 B_2 \\ 0 & I \end{bmatrix}^{-1}
  \left( \begin{bmatrix} A_n & B_n \\ C_n & D_n \end{bmatrix} - \begin{bmatrix} X_1 A Y_1 & 0 \\ 0 & 0 \end{bmatrix} \right)
  \begin{bmatrix} Y_2^T & 0 \\ C_2 Y_1 & I \end{bmatrix}^{-1},
\]
and where \(X_2\) and \(Y_2\) are any matrices which satisfy \(X_2 Y_2^T = I - X_1 Y_1\), e.g. let \(Y_2 = I\) and \(X_2 = I - X_1 Y_1\).
- The optimal controller is NOT uniquely defined.
- Don't forget to check invertibility of \(I - D_{22} D_K\).
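
Numerically, the recovery step might look as follows (a sketch; the function name, the choice Y2 = I, and the use of solve() in place of explicit inverses are my own):

```python
import numpy as np

def recover_controller(A, B2, C2, D22, X1, Y1, An, Bn, Cn, Dn):
    n, nu, ny = A.shape[0], B2.shape[1], C2.shape[0]
    Y2 = np.eye(n)              # the slide's suggested choice
    X2 = np.eye(n) - X1 @ Y1    # so that X2 Y2^T = I - X1 Y1

    # [AK2 BK2; CK2 DK2] = [X2 X1B2; 0 I]^{-1} ([An Bn; Cn Dn] - [X1AY1 0; 0 0]) [Y2^T 0; C2Y1 I]^{-1}
    L = np.block([[X2, X1 @ B2], [np.zeros((nu, n)), np.eye(nu)]])
    Mid = np.block([[An - X1 @ A @ Y1, Bn], [Cn, Dn]])
    R = np.block([[Y2.T, np.zeros((n, ny))], [C2 @ Y1, np.eye(ny)]])
    K2 = np.linalg.solve(L, Mid) @ np.linalg.inv(R)
    AK2, BK2 = K2[:n, :n], K2[:n, n:]
    CK2, DK2 = K2[n:, :n], K2[n:, n:]

    # Undo the change of variables; check invertibility of I - D22 DK!
    DK = np.linalg.solve(np.eye(nu) + DK2 @ D22, DK2)
    BK = BK2 @ (np.eye(ny) - D22 @ DK)
    CK = (np.eye(nu) - DK @ D22) @ CK2
    AK = AK2 - BK @ np.linalg.solve(np.eye(ny) - D22 @ DK, D22 @ CK)
    return AK, BK, CK, DK
```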

Conclusion
The H∞-optimal controller is a dynamic system.
- Transfer function: \(\hat K(s) = \left[\begin{array}{c|c} A_K & B_K \\ \hline C_K & D_K \end{array}\right]\)
- It minimizes the effect of the external input (w) on the external output (z):
\[
\|z\|_{L_2} \le \|S(P, K)\|_{H_\infty} \|w\|_{L_2}
\]
- Minimum energy gain.

Alternative Formulation

Theorem 5. Let \(N_o\) and \(N_c\) be full-rank matrices whose images satisfy
\[
\operatorname{Im} N_o = \ker \begin{bmatrix} C_2 & D_{21} \end{bmatrix}, \qquad
\operatorname{Im} N_c = \ker \begin{bmatrix} B_2^T & D_{12}^T \end{bmatrix}.
\]
Then the following are equivalent.
- There exists a \(\hat K = \left[\begin{array}{c|c} A_K & B_K \\ \hline C_K & D_K \end{array}\right]\) such that \(\|S(P, \hat K)\|_{H_\infty} < 1\).
- There exist \(X_1, Y_1\) such that \(\begin{bmatrix} X_1 & I \\ I & Y_1 \end{bmatrix} > 0\) and
\[
\begin{bmatrix} N_o & 0 \\ 0 & I \end{bmatrix}^T
\begin{bmatrix}
X_1 A + A^T X_1 & (\star)^T & (\star)^T \\
B_1^T X_1 & -I & (\star)^T \\
C_1 & D_{11} & -I
\end{bmatrix}
\begin{bmatrix} N_o & 0 \\ 0 & I \end{bmatrix} < 0,
\]
\[
\begin{bmatrix} N_c & 0 \\ 0 & I \end{bmatrix}^T
\begin{bmatrix}
A Y_1 + Y_1 A^T & (\star)^T & (\star)^T \\
C_1 Y_1 & -I & (\star)^T \\
B_1^T & D_{11}^T & -I
\end{bmatrix}
\begin{bmatrix} N_c & 0 \\ 0 & I \end{bmatrix} < 0.
\]
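
A hedged CVXPY sketch of the two analysis LMIs of Theorem 5, with the annihilator bases computed by scipy's null_space (the function name and the feasibility margin eps are illustrative; any full-rank bases of the two kernels would do):

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import null_space

def theorem5_feasible(A, B1, B2, C1, C2, D11, D12, D21, eps=1e-6):
    n, nw, nz = A.shape[0], B1.shape[1], C1.shape[0]
    No = null_space(np.hstack([C2, D21]))          # Im No = Ker [C2 D21]
    Nc = null_space(np.hstack([B2.T, D12.T]))      # Im Nc = Ker [B2^T D12^T]
    To = np.block([[No, np.zeros((n + nw, nz))],
                   [np.zeros((nz, No.shape[1])), np.eye(nz)]])
    Tc = np.block([[Nc, np.zeros((n + nz, nw))],
                   [np.zeros((nw, Nc.shape[1])), np.eye(nw)]])

    X1 = cp.Variable((n, n), symmetric=True)
    Y1 = cp.Variable((n, n), symmetric=True)
    inner_o = cp.bmat([[X1 @ A + A.T @ X1, X1 @ B1,     C1.T],
                       [B1.T @ X1,         -np.eye(nw), D11.T],
                       [C1,                D11,         -np.eye(nz)]])
    inner_c = cp.bmat([[A @ Y1 + Y1 @ A.T, Y1 @ C1.T,   B1],
                       [C1 @ Y1,           -np.eye(nz), D11],
                       [B1.T,              D11.T,       -np.eye(nw)]])
    lmi_o = To.T @ inner_o @ To
    lmi_c = Tc.T @ inner_c @ Tc
    pd = cp.bmat([[X1, np.eye(n)], [np.eye(n), Y1]])
    cons = [0.5 * (pd + pd.T) >> eps * np.eye(2 * n),
            0.5 * (lmi_o + lmi_o.T) << -eps * np.eye(lmi_o.shape[0]),
            0.5 * (lmi_c + lmi_c.T) << -eps * np.eye(lmi_c.shape[0])]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()                                   # feasibility check; needs an SDP solver
    return prob.status, X1.value, Y1.value
```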