Lyapunov Operator

Let $A \in \mathbf{F}^{n \times n}$ be given, and define a linear operator $\mathcal{L}_A : \mathbf{C}^{n \times n} \to \mathbf{C}^{n \times n}$ as
$$\mathcal{L}_A(X) := A^* X + X A$$
Suppose $A$ is diagonalizable (what follows can be generalized even if this is not possible; the qualitative results still hold). Let $\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ be the eigenvalues of $A$. In general, these are complex numbers. Let $V \in \mathbf{C}^{n \times n}$ be the matrix of eigenvectors of $A$,
$$V = [v_1 \; v_2 \; \cdots \; v_n] \in \mathbf{C}^{n \times n}$$
so $V$ is invertible, and for each $i$,
$$A v_i = \lambda_i v_i$$
Let $w_i \in \mathbf{C}^n$ be defined so that
$$V^{-1} = \begin{bmatrix} w_1^* \\ w_2^* \\ \vdots \\ w_n^* \end{bmatrix}$$
Since $V^{-1} V = I_n$, we have, for each $1 \le i, j \le n$,
$$w_i^* v_j = \delta_{ij} := \begin{cases} 0 & \text{if } i \ne j \\ 1 & \text{if } i = j \end{cases}$$
Moreover, $V^{-1} A = \Lambda V^{-1}$ (where $\Lambda := \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$), so each $w_i$ satisfies $w_i^* A = \lambda_i w_i^*$, equivalently $A^* w_i = \bar{\lambda}_i w_i$. For each $1 \le i, j \le n$ define
$$X_{ij} := w_i w_j^* \in \mathbf{C}^{n \times n}$$
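
The eigenstructure just described is easy to probe numerically. The sketch below is our own check (not part of the notes): it represents $\mathcal{L}_A$ as the matrix $I \otimes A^* + A^T \otimes I$ acting on the column-stacked $\mathrm{vec}(X)$, and confirms that its spectrum is $\{\bar{\lambda}_i + \lambda_j\}$.

```python
import numpy as np

# Our own numerical check: represent L_A(X) = A*X + XA on vec(X) and
# confirm its eigenvalues are {conj(lambda_i) + lambda_j}.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

I = np.eye(n)
# column-major vec: vec(A*X + XA) = (I kron A* + A^T kron I) vec(X)
L = np.kron(I, A.conj().T) + np.kron(A.T, I)

lam = np.linalg.eigvals(A)
expected = np.array([lc + l for lc in lam.conj() for l in lam])
actual = np.linalg.eigvals(L)
assert np.allclose(np.sort_complex(expected), np.sort_complex(actual))
print("spec(L_A) = {conj(lambda_i) + lambda_j}")
```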

Lyapunov Operator: Eigenvalues

Lemma 63: The set $\{X_{ij}\}_{i,j=1,\ldots,n}$ is a linearly independent set, and this set is a full set of eigenvectors for the linear operator $\mathcal{L}_A$. Moreover, the set of eigenvalues of $\mathcal{L}_A$ is
$$\{ \bar{\lambda}_i + \lambda_j \}_{i,j=1,\ldots,n}$$

Proof: Suppose $\{\alpha_{ij}\}_{i,j=1,\ldots,n}$ are scalars and
$$\sum_{j=1}^n \sum_{i=1}^n \alpha_{ij} X_{ij} = 0_{n \times n} \qquad (7.19)$$
Premultiply this by $v_l^*$ and postmultiply by $v_k$:
$$0 = v_l^* \left( \sum_{j=1}^n \sum_{i=1}^n \alpha_{ij} X_{ij} \right) v_k = \sum_{j=1}^n \sum_{i=1}^n \alpha_{ij} \, v_l^* w_i w_j^* v_k = \sum_{j=1}^n \sum_{i=1}^n \alpha_{ij} \, \delta_{li} \delta_{jk} = \alpha_{lk}$$
which holds for any $1 \le k, l \le n$. Hence, all of the $\alpha$'s are 0, and the set $\{X_{ij}\}$ is indeed a linearly independent set; since there are $n^2$ of them, they form a basis for $\mathbf{C}^{n \times n}$. Also, by the definition of $\mathcal{L}_A$, we have
$$\mathcal{L}_A(X_{ij}) = A^* X_{ij} + X_{ij} A = A^* w_i w_j^* + w_i w_j^* A = \bar{\lambda}_i w_i w_j^* + w_i (\lambda_j w_j^*) = (\bar{\lambda}_i + \lambda_j) X_{ij}$$
So $X_{ij}$ is indeed an eigenvector of $\mathcal{L}_A$, with eigenvalue $\bar{\lambda}_i + \lambda_j$.

Lyapunov Operator: Alternate Invertibility Proof

Do a Schur decomposition of $A$, namely $A = Q \Lambda Q^*$, with $Q$ unitary and $\Lambda$ upper triangular. Then the equation $A^* X + X A = R$ can be rewritten:
$$A^* X + X A = R$$
$$Q \Lambda^* Q^* X + X Q \Lambda Q^* = R$$
$$\Lambda^* Q^* X Q + Q^* X Q \Lambda = Q^* R Q$$
$$\Lambda^* \tilde{X} + \tilde{X} \Lambda = \tilde{R}$$
where $\tilde{X} := Q^* X Q$ and $\tilde{R} := Q^* R Q$. Since $Q$ is invertible, the redefinition of variables is an invertible transformation. Writing out all of the equations (we are trying to show that for every $\tilde{R}$ there is a unique $\tilde{X}$ satisfying the equation) yields an upper triangular system with $n^2$ unknowns, and the values $\bar{\lambda}_i + \lambda_j$ on the diagonal. Do it. This is an easy (and computationally motivated) proof that the Lyapunov operator $\mathcal{L}_A$ is an invertible operator if and only if $\bar{\lambda}_i + \lambda_j \ne 0$ for all eigenvalues of $A$.

Lyapunov Operator: Integral Formula

If $A$ is stable (all eigenvalues have negative real parts), then $\bar{\lambda}_i + \lambda_j \ne 0$ for all combinations. Hence $\mathcal{L}_A$ is an invertible linear operator, and we can explicitly solve the equation $\mathcal{L}_A(X) = -Q$ for $X$ using an integral formula.

Theorem 64: Let $A \in \mathbf{F}^{n \times n}$ be given, and suppose that $A$ is stable. Then $\mathcal{L}_A$ is invertible, and for any $Q \in \mathbf{F}^{n \times n}$, the unique $X \in \mathbf{F}^{n \times n}$ solving $\mathcal{L}_A(X) = -Q$ is given by
$$X = \int_0^\infty e^{A^* \tau} Q e^{A \tau} \, d\tau$$

Proof: For each $t \ge 0$, define
$$S(t) := \int_0^t e^{A^* \tau} Q e^{A \tau} \, d\tau$$
Since $A$ is stable, the integrand is made up of decaying exponentials, and $S(t)$ has a well-defined limit as $t \to \infty$, which we have already denoted $X := \lim_{t \to \infty} S(t)$. For any $t \ge 0$,
$$A^* S(t) + S(t) A = \int_0^t \left( A^* e^{A^* \tau} Q e^{A \tau} + e^{A^* \tau} Q e^{A \tau} A \right) d\tau = \int_0^t \frac{d}{d\tau} \left( e^{A^* \tau} Q e^{A \tau} \right) d\tau = e^{A^* t} Q e^{A t} - Q$$
Taking limits on both sides gives
$$A^* X + X A = -Q$$
as desired.
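
As a sanity check (ours, with arbitrary test data), the integral formula can be compared against a library Lyapunov solver. SciPy's `solve_continuous_lyapunov(a, q)` solves $a x + x a^H = q$, so passing $a = A^*$ and $q = -Q$ matches $\mathcal{L}_A(X) = -Q$.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)   # shifted to be stable
Q = rng.standard_normal((n, n)); Q = Q @ Q.T        # Q = Q^T >= 0

# library solution of A^T X + X A = -Q  (A real, so A^* = A^T)
X = solve_continuous_lyapunov(A.T, -Q)
assert np.allclose(A.T @ X + X @ A, -Q)

# crude Riemann-sum quadrature of X = int_0^inf e^{A^T t} Q e^{A t} dt
ts, dt = np.linspace(0.0, 20.0, 4001, retstep=True)
X_quad = sum(expm(A.T * t) @ Q @ expm(A * t) * dt for t in ts)
print("quadrature vs solver:", np.max(np.abs(X - X_quad)))  # small; limited by the crude quadrature
```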

Lyapunov Operator: A Few Other Useful Facts

Theorem 65: Suppose $A, Q \in \mathbf{F}^{n \times n}$ are given. Assume that $A$ is stable and $Q = Q^* \succeq 0$. Then the pair $(A, Q)$ is observable if and only if
$$X := \int_0^\infty e^{A^* \tau} Q e^{A \tau} \, d\tau \succ 0$$

Proof: Suppose not, i.e., $(A, Q)$ is not observable. Then there is an $x_o \in \mathbf{F}^n$, $x_o \ne 0$, such that $Q e^{A t} x_o = 0$ for all $t \ge 0$. Consequently $x_o^* e^{A^* t} Q e^{A t} x_o = 0$ for all $t \ge 0$. Integrating gives
$$0 = \int_0^\infty \left( x_o^* e^{A^* t} Q e^{A t} x_o \right) dt = x_o^* \left( \int_0^\infty e^{A^* t} Q e^{A t} \, dt \right) x_o = x_o^* X x_o$$
Since $x_o \ne 0$, $X$ is not positive definite.

Conversely, suppose that $X$ is not positive definite (note that, by virtue of the definition, $X$ is at least positive semidefinite). Then there exists $x_o \in \mathbf{F}^n$, $x_o \ne 0$, such that $x_o^* X x_o = 0$. Using the integral form for $X$ (since $A$ is stable by assumption) and the fact that $Q \succeq 0$ gives
$$\int_0^\infty \left\| Q^{1/2} e^{A \tau} x_o \right\|^2 d\tau = 0$$
The integrand is nonnegative and continuous, hence it must be 0 for all $\tau \ge 0$. Hence $Q e^{A \tau} x_o = 0$ for all $\tau \ge 0$. Since $x_o \ne 0$, this implies that $(A, Q)$ is not observable.

Lyapunov Operator: More

Theorem 66: Let $(A, C)$ be detectable, and suppose $X$ is any solution to $A^* X + X A = -C^* C$ (there is no a priori assumption that $\mathcal{L}_A$ is an invertible operator, hence there could be multiple solutions). Then $X \succeq 0$ if and only if $A$ is stable.

Proof: Suppose not, i.e., $A$ is not stable. Then there is a $v \in \mathbf{C}^n$, $v \ne 0$, and $\lambda \in \mathbf{C}$ with $\mathrm{Re}(\lambda) \ge 0$, such that $A v = \lambda v$ (and $v^* A^* = \bar{\lambda} v^*$). Since $(A, C)$ is detectable and the eigenvalue $\lambda$ is in the closed right-half plane, $C v \ne 0$. Note that
$$\|C v\|^2 = v^* C^* C v = -v^* (A^* X + X A) v = -(\bar{\lambda} + \lambda) v^* X v = -2 \, \mathrm{Re}(\lambda) \, v^* X v$$
Since $\|C v\|^2 > 0$ and $\mathrm{Re}(\lambda) \ge 0$, we must actually have $\mathrm{Re}(\lambda) > 0$ and $v^* X v < 0$. Hence $X$ is not positive semidefinite.

Conversely, if $A$ is stable, there is only one solution to the equation $A^* X + X A = -C^* C$; moreover, it is given by the integral formula
$$X = \int_0^\infty e^{A^* \tau} C^* C e^{A \tau} \, d\tau$$
which is clearly positive semidefinite.

Lyapunov Ordering

Finally, some simple results about the ordering of Lyapunov solutions. Let $A$ be stable, so $\mathcal{L}_A$ is guaranteed to be invertible. Use $\mathcal{L}_A^{-1}(-Q)$ to denote the unique solution $X$ to the equation $A^* X + X A = -Q$.

Lemma 67: If $Q_1 = Q_1^* \succeq Q_2 = Q_2^*$, then $\mathcal{L}_A^{-1}(-Q_1) \succeq \mathcal{L}_A^{-1}(-Q_2)$.

Lemma 68: If $Q_1 = Q_1^* \succ Q_2 = Q_2^*$, then $\mathcal{L}_A^{-1}(-Q_1) \succ \mathcal{L}_A^{-1}(-Q_2)$.

Proofs: Let $X_1$ and $X_2$ be the two solutions; obviously
$$A^* X_1 + X_1 A = -Q_1$$
$$A^* X_2 + X_2 A = -Q_2$$
$$A^* (X_1 - X_2) + (X_1 - X_2) A = -(Q_1 - Q_2)$$
Since $A$ is stable, the integral formula applies, and
$$X_1 - X_2 = \int_0^\infty e^{A^* \tau} (Q_1 - Q_2) e^{A \tau} \, d\tau$$
Obviously, if $Q_1 - Q_2 \succeq 0$, then $X_1 - X_2 \succeq 0$, which proves Lemma 67. If $Q_1 - Q_2 \succ 0$, then the pair $(A, Q_1 - Q_2)$ is observable, and by Theorem 65, $X_1 - X_2 \succ 0$, proving Lemma 68.

Warning: the converses of Lemma 67 and Lemma 68 are NOT true.
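
A quick numerical illustration of the ordering lemmas (our own example, again using SciPy's Lyapunov solver):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(8)
n = 4
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)   # stable
Q2 = rng.standard_normal((n, n)); Q2 = Q2 @ Q2.T    # Q2 = Q2^T >= 0
Q1 = Q2 + np.eye(n)                                 # Q1 - Q2 = I > 0

X1 = solve_continuous_lyapunov(A.T, -Q1)            # A^T X1 + X1 A = -Q1
X2 = solve_continuous_lyapunov(A.T, -Q2)            # A^T X2 + X2 A = -Q2

# Lemma 68: Q1 - Q2 > 0 forces X1 - X2 > 0
E = (X1 - X2 + (X1 - X2).T) / 2                     # symmetrize roundoff
assert np.all(np.linalg.eigvalsh(E) > 0)
```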

Algebraic Riccati Equation

The algebraic Riccati equation (A.R.E.) is
$$A^T X + X A + X R X - Q = 0 \qquad (7.20)$$
where $A, R, Q \in \mathbf{R}^{n \times n}$ are given matrices, with $R = R^T$ and $Q = Q^T$, and solutions $X \in \mathbf{C}^{n \times n}$ are sought. We will primarily be interested in solutions $X$ of (7.20) which satisfy
$$A + R X \;\text{ is Hurwitz}$$
and are called stabilizing solutions of the Riccati equation. Associated with the data $A$, $R$, and $Q$, we define the Hamiltonian matrix $H \in \mathbf{R}^{2n \times 2n}$ by
$$H := \begin{bmatrix} A & R \\ Q & -A^T \end{bmatrix} \qquad (7.21)$$

Invariant Subspace: Definition

Suppose $A \in \mathbf{C}^{n \times n}$. Then (via matrix-vector multiplication), we view $A$ as a linear map from $\mathbf{C}^n$ to $\mathbf{C}^n$. A subspace $\mathcal{V} \subseteq \mathbf{C}^n$ is said to be $A$-invariant if for every vector $v \in \mathcal{V}$, the vector $A v \in \mathcal{V}$ too.

In terms of matrices, suppose that the columns of $W \in \mathbf{C}^{n \times m}$ are a basis for $\mathcal{V}$. In other words, they are linearly independent, and span the space $\mathcal{V}$, so
$$\mathcal{V} = \{ W \eta : \eta \in \mathbf{C}^m \}$$
Then $\mathcal{V}$ being $A$-invariant is equivalent to the existence of a matrix $S \in \mathbf{C}^{m \times m}$ such that
$$A W = W S$$
The matrix $S$ is the matrix representation of $A$ restricted to $\mathcal{V}$, referred to the basis $W$. Clearly, since $W$ has linearly independent columns, the matrix $S$ is unique.

If $(Q, \Lambda)$ is the Schur decomposition of $A$, then for any $k$ with $1 \le k \le n$, the first $k$ columns of $Q$ span a $k$-dimensional invariant subspace of $A$, as seen below: partitioning,
$$A [Q_1 \; Q_2] = [Q_1 \; Q_2] \begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ 0 & \Lambda_{22} \end{bmatrix}$$
Clearly, $A Q_1 = Q_1 \Lambda_{11}$.
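
This Schur-based construction is immediate to carry out numerically; the short sketch below (ours) checks $AW = WS$ with $W$ the leading columns of the Schur factor.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(2)
n, k = 5, 2
A = rng.standard_normal((n, n))

Lam, Q = schur(A, output='complex')   # A = Q Lam Q*, Lam upper triangular
Q1 = Q[:, :k]                         # basis W for a k-dimensional V
S = Lam[:k, :k]                       # representation of A restricted to V

assert np.allclose(A @ Q1, Q1 @ S)    # AW = WS
```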

Eigenvectors in Invariant Subspaces

Note that since $S \in \mathbf{C}^{m \times m}$ has at least one eigenvector, say $\xi \ne 0_m$, with associated eigenvalue $\lambda$, it follows that $W \xi \in \mathcal{V}$ is an eigenvector of $A$:
$$A (W \xi) = (A W) \xi = W S \xi = W (\lambda \xi) = \lambda (W \xi)$$
Hence, every nontrivial invariant subspace of $A$ contains an eigenvector of $A$.

Stable Invariant Subspace: Definition

If $S$ is Hurwitz, then $\mathcal{V}$ is a stable, $A$-invariant subspace. Note that while the matrix $S$ depends on the particular choice of the basis for $\mathcal{V}$ (i.e., the specific columns of $W$), the eigenvalues of $S$ depend only on the subspace $\mathcal{V}$, not on the basis choice. To see this, suppose that the columns of $\tilde{W} \in \mathbf{C}^{n \times m}$ span the same space as the columns of $W$. Obviously, there is a matrix $T \in \mathbf{C}^{m \times m}$ such that $W = \tilde{W} T$. The linear independence of the columns of $W$ (and $\tilde{W}$) implies that $T$ is invertible. Also,
$$A \tilde{W} = A W T^{-1} = W S T^{-1} = \tilde{W} \underbrace{T S T^{-1}}_{\tilde{S}}$$
Clearly, the eigenvalues of $S$ and $\tilde{S}$ are the same.

The linear operator $A|_{\mathcal{V}}$ is the transformation from $\mathcal{V}$ to $\mathcal{V}$ defined by $A$, simply restricted to $\mathcal{V}$. Since $\mathcal{V}$ is $A$-invariant, this is well-defined. Clearly, $S$ is the matrix representation of this operator, using the columns of $W$ as the basis choice.

$A^T X + X A + X R X - Q = 0$: Invariant Subspace

The first theorem gives a way of constructing solutions to (7.20) in terms of invariant subspaces of $H$.

Theorem 69: Let $\mathcal{V} \subset \mathbf{C}^{2n}$ be an $n$-dimensional invariant subspace of $H$ ($H$ is a linear map from $\mathbf{C}^{2n}$ to $\mathbf{C}^{2n}$), and let $X_1, X_2 \in \mathbf{C}^{n \times n}$ be two complex matrices such that
$$\mathcal{V} = \mathrm{span} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$$
If $X_1$ is invertible, then $X := X_2 X_1^{-1}$ is a solution to the algebraic Riccati equation, and $\mathrm{spec}(A + R X) = \mathrm{spec}(H|_{\mathcal{V}})$.

Proof: Since $\mathcal{V}$ is $H$-invariant, there is a matrix $\Lambda \in \mathbf{C}^{n \times n}$ with
$$\begin{bmatrix} A & R \\ Q & -A^T \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \Lambda \qquad (7.22)$$
which gives
$$A X_1 + R X_2 = X_1 \Lambda \qquad (7.23)$$
and
$$Q X_1 - A^T X_2 = X_2 \Lambda \qquad (7.24)$$
Pre- and postmultiplying equation (7.23) by $X_2 X_1^{-1}$ and $X_1^{-1}$ respectively gives
$$\left( X_2 X_1^{-1} \right) A + \left( X_2 X_1^{-1} \right) R \left( X_2 X_1^{-1} \right) = X_2 \Lambda X_1^{-1} \qquad (7.25)$$
and postmultiplying (7.24) by $X_1^{-1}$ gives
$$Q - A^T \left( X_2 X_1^{-1} \right) = X_2 \Lambda X_1^{-1} \qquad (7.26)$$
Combining (7.25) and (7.26) yields
$$A^T (X_2 X_1^{-1}) + (X_2 X_1^{-1}) A + (X_2 X_1^{-1}) R (X_2 X_1^{-1}) - Q = 0$$
so $X_2 X_1^{-1}$ is indeed a solution. Equation (7.23) implies that $A + R \left( X_2 X_1^{-1} \right) = X_1 \Lambda X_1^{-1}$, and hence they have the same eigenvalues. But, by definition, $\Lambda$ is a matrix representation of the map $H|_{\mathcal{V}}$, so the theorem is proved.
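
Theorem 69 is also the standard computational recipe. The sketch below (our own construction, with arbitrary test data) picks $\mathcal{V}$ to be the stable invariant subspace of $H$, obtained from an ordered real Schur form, and verifies that $X = X_2 X_1^{-1}$ solves (7.20) and that $A + R X$ is Hurwitz.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
R = -B @ B.T                 # R = R^T (sign convention of (7.20))
Q = -C.T @ C                 # Q = Q^T

H = np.block([[A, R], [Q, -A.T]])
T, Z, sdim = schur(H, output='real', sort='lhp')   # stable block first
assert sdim == n                                   # n stable eigenvalues
X1, X2 = Z[:n, :n], Z[n:, :n]
X = X2 @ np.linalg.inv(X1)

assert np.allclose(A.T @ X + X @ A + X @ R @ X - Q, 0, atol=1e-8)
assert np.all(np.linalg.eigvals(A + R @ X).real < 0)   # stabilizing
```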

$A^T X + X A + X R X - Q = 0$: Invariant Subspace

The next theorem shows that the specific choice of matrices $X_1$ and $X_2$ which span $\mathcal{V}$ is not important.

Theorem 70: Let $\mathcal{V}$, $X_1$, and $X_2$ be as above, and suppose there are two other matrices $\tilde{X}_1, \tilde{X}_2 \in \mathbf{C}^{n \times n}$ such that the columns of $\begin{bmatrix} \tilde{X}_1 \\ \tilde{X}_2 \end{bmatrix}$ also span $\mathcal{V}$. Then $X_1$ is invertible if and only if $\tilde{X}_1$ is invertible. Consequently, both $X_2 X_1^{-1}$ and $\tilde{X}_2 \tilde{X}_1^{-1}$ are solutions to (7.20), and in fact they are equal.

Remark: Hence, the question of whether or not $X_1$ is invertible is a property of the subspace, and not of the particular basis choice for the subspace. Therefore, we say that an $n$-dimensional subspace $\mathcal{V}$ has the invertibility property if, in any basis, the top portion is invertible. In the literature, this is often called the complementary property.

Proof: Since both sets of columns span the same $n$-dimensional subspace, there is an invertible $K \in \mathbf{C}^{n \times n}$ such that
$$\begin{bmatrix} \tilde{X}_1 \\ \tilde{X}_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} K$$
Obviously, $\tilde{X}_1 = X_1 K$ is invertible if and only if $X_1$ is invertible, and
$$\tilde{X}_2 \tilde{X}_1^{-1} = (X_2 K)(X_1 K)^{-1} = X_2 X_1^{-1}$$

$A^T X + X A + X R X - Q = 0$: Invariant Subspace

As we would hope, the converse of Theorem 69 is true.

Theorem 71: If $X \in \mathbf{C}^{n \times n}$ is a solution to the A.R.E., then there exist matrices $X_1, X_2 \in \mathbf{C}^{n \times n}$, with $X_1$ invertible, such that $X = X_2 X_1^{-1}$ and the columns of $\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$ span an $n$-dimensional invariant subspace of $H$.

Proof: Define $\Lambda := A + R X \in \mathbf{C}^{n \times n}$. Multiplying this by $X$ gives $X \Lambda = X A + X R X = Q - A^T X$, the second equality coming from the fact that $X$ is a solution to the Riccati equation. We write these two relations as
$$\begin{bmatrix} A & R \\ Q & -A^T \end{bmatrix} \begin{bmatrix} I_n \\ X \end{bmatrix} = \begin{bmatrix} I_n \\ X \end{bmatrix} \Lambda$$
Hence, the columns of $\begin{bmatrix} I_n \\ X \end{bmatrix}$ span an $n$-dimensional invariant subspace of $H$, and defining $X_1 := I_n$ and $X_2 := X$ completes the proof.

Hence, there is a one-to-one correspondence between $n$-dimensional invariant subspaces of $H$ with the invertibility property and solutions of the Riccati equation.

$A^T X + X A + X R X - Q = 0$: Eigenvalue Distribution of $H$

In the previous section, we showed that any solution $X$ of the Riccati equation is intimately related to an invariant subspace $\mathcal{V}$ of $H$, and the matrix $A + R X$ is similar (i.e., related by a similarity transformation) to the restriction of $H$ to $\mathcal{V}$, $H|_{\mathcal{V}}$. Hence, the eigenvalues of $A + R X$ will always be a subset of the eigenvalues of $H$. Recall the particular structure of the Hamiltonian matrix $H$,
$$H := \begin{bmatrix} A & R \\ Q & -A^T \end{bmatrix}$$
Since $H$ is real, we know that the eigenvalues of $H$ are symmetric about the real axis, including multiplicities. The added structure of $H$ gives additional symmetry in the eigenvalues, as follows.

Lemma 72: The eigenvalues of $H$ are symmetric about the origin (and hence also about the imaginary axis).

Proof: Define $J \in \mathbf{R}^{2n \times 2n}$ by
$$J = \begin{bmatrix} 0 & -I \\ I & 0 \end{bmatrix}$$
Note that $J H J^{-1} = -H^T$. Also, $J H = -H^T J = (J H)^T$. Let $p(\lambda)$ be the characteristic polynomial of $H$. Then
$$p(\lambda) := \det(\lambda I - H) = \det\left( \lambda I - J H J^{-1} \right) = \det\left( \lambda I + H^T \right) = \det(\lambda I + H) = (-1)^{2n} \det(-\lambda I - H) = p(-\lambda) \qquad (7.27)$$
Hence the roots of $p$ are symmetric about the origin, and the symmetry about the imaginary axis follows.
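
Lemma 72 is easy to confirm numerically (our own check): for random symmetric $R$ and $Q$, the spectrum of $H$ coincides with its negation as a multiset.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n))
R = rng.standard_normal((n, n)); R = R + R.T    # R = R^T
Q = rng.standard_normal((n, n)); Q = Q + Q.T    # Q = Q^T
H = np.block([[A, R], [Q, -A.T]])

lam = np.linalg.eigvals(H)
# spectrum symmetric about the origin: {lam} = {-lam} as multisets
assert np.allclose(np.sort_complex(lam), np.sort_complex(-lam))
```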

$A^T X + X A + X R X - Q = 0$: Hermitian Solutions

Lemma 73: If $H$ has no eigenvalues on the imaginary axis, then $H$ has $n$ eigenvalues in the open right-half plane, and $n$ eigenvalues in the open left-half plane.

Theorem 74: Suppose $\mathcal{V}$ is an $n$-dimensional invariant subspace of $H$, and $X_1, X_2 \in \mathbf{C}^{n \times n}$ form a matrix whose columns span $\mathcal{V}$. Let $\Lambda \in \mathbf{C}^{n \times n}$ be a matrix representation of $H|_{\mathcal{V}}$. If $\bar{\lambda}_i + \lambda_j \ne 0$ for all eigenvalues $\lambda_i, \lambda_j \in \mathrm{spec}(\Lambda)$, then $X_1^* X_2 = X_2^* X_1$.

Proof: Using the matrix $J$ from the proof of Lemma 72, we have that
$$\begin{bmatrix} X_1^* & X_2^* \end{bmatrix} J H \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$$
is Hermitian, and equals $\left( X_2^* X_1 - X_1^* X_2 \right) \Lambda$. Define $W := X_2^* X_1 - X_1^* X_2$. Then $W^* = -W$, and since $W \Lambda$ is Hermitian, $W \Lambda + \Lambda^* W = 0$. By the eigenvalue assumption on $\Lambda$, the linear map $L_\Lambda : \mathbf{C}^{n \times n} \to \mathbf{C}^{n \times n}$ defined by
$$L_\Lambda(X) = X \Lambda + \Lambda^* X$$
is invertible, so $W = 0$.

Corollary 75: Under the same assumptions as in Theorem 74, if $X_1$ is invertible, then $X_2 X_1^{-1}$ is Hermitian.

$A^T X + X A + X R X - Q = 0$: Real Solutions

Real (rather than complex) solutions arise when a symmetry condition is imposed on the invariant subspace.

Theorem 76: Let $\mathcal{V}$ be an $n$-dimensional invariant subspace of $H$ with the invertibility property, and let $X_1, X_2 \in \mathbf{C}^{n \times n}$ form a $2n \times n$ matrix whose columns span $\mathcal{V}$. Then $\mathcal{V}$ is conjugate symmetric ($\{\bar{v} : v \in \mathcal{V}\} = \mathcal{V}$) if and only if $X_2 X_1^{-1} \in \mathbf{R}^{n \times n}$.

Proof: ($\Leftarrow$) Define $X := X_2 X_1^{-1}$. By assumption, $X \in \mathbf{R}^{n \times n}$, and
$$\mathrm{span} \begin{bmatrix} I_n \\ X \end{bmatrix} = \mathcal{V}$$
so $\mathcal{V}$ is conjugate symmetric.

($\Rightarrow$) Since $\mathcal{V}$ is conjugate symmetric, there is an invertible matrix $K \in \mathbf{C}^{n \times n}$ such that
$$\begin{bmatrix} \bar{X}_1 \\ \bar{X}_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} K$$
Therefore
$$\overline{\left( X_2 X_1^{-1} \right)} = \bar{X}_2 \bar{X}_1^{-1} = (X_2 K)(X_1 K)^{-1} = X_2 X_1^{-1}$$
as desired.

$A^T X + X A + X R X - Q = 0$: Stabilizing Solutions

Recall that for any square matrix $M$, there is a largest stable invariant subspace. That is, there is a stable invariant subspace $\mathcal{V}_S$ that contains every other stable invariant subspace of $M$. This largest stable invariant subspace is simply the span of all of the eigenvectors and generalized eigenvectors associated with the open left-half-plane eigenvalues. If the Schur form has the eigenvalues (on the diagonal of $\Lambda$) ordered, then it is spanned by the first set of columns of $Q$.

Theorem 77: There is at most one solution $X$ to the Riccati equation such that $A + R X$ is Hurwitz, and if it does exist, then $X$ is real, and symmetric.

Proof: If $X$ is such a stabilizing solution, then by Theorem 71 it can be constructed from some $n$-dimensional invariant subspace $\mathcal{V}$ of $H$. If $A + R X$ is stable, then we must also have $\mathrm{spec}(H|_{\mathcal{V}}) \subset \mathbf{C}^-$. Recall that the spectrum of $H$ is symmetric about the imaginary axis, so $H$ has at most $n$ eigenvalues in $\mathbf{C}^-$, and the only possible choice for $\mathcal{V}$ is the stable eigenspace of $H$. By Theorem 70, every solution (if there are any) constructed from this subspace will be the same; hence there is at most one stabilizing solution. Associated with the stable eigenspace, any $\Lambda$ as in equation (7.22) will be stable, so $\bar{\lambda}_i + \lambda_j \ne 0$ for all $\lambda_i, \lambda_j \in \mathrm{spec}(\Lambda)$. Hence, if the stable eigenspace has the invertibility property, from Corollary 75 we have that $X := X_2 X_1^{-1}$ is Hermitian. Finally, since $H$ is real, the stable eigenspace of $H$ is indeed conjugate symmetric, so by Theorem 76 the associated solution $X$ is in fact real, and hence symmetric.

$A^T X + X A + X R X - Q = 0$: $\mathrm{dom}_S \, \mathrm{Ric}$

If $H$ as in (7.21) has no imaginary-axis eigenvalues, then $H$ has a unique $n$-dimensional stable invariant subspace $\mathcal{V}_S \subset \mathbf{C}^{2n}$, and since $H$ is real, there exist matrices $X_1, X_2 \in \mathbf{R}^{n \times n}$ such that
$$\mathrm{span} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \mathcal{V}_S$$
More concretely, there exists a matrix $\Lambda \in \mathbf{R}^{n \times n}$ such that
$$H \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \Lambda, \qquad \mathrm{spec}(\Lambda) \subset \mathbf{C}^-$$
This leads to the following definition.

Definition 78 (definition of $\mathrm{dom}_S \, \mathrm{Ric}$): Consider $H \in \mathbf{R}^{2n \times 2n}$ as defined in (7.21). If $H$ has no imaginary-axis eigenvalues, and the matrices $X_1, X_2 \in \mathbf{R}^{n \times n}$ for which $\mathrm{span} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$ is the stable, $n$-dimensional invariant subspace of $H$ satisfy $\det(X_1) \ne 0$, then $H \in \mathrm{dom}_S \, \mathrm{Ric}$. For $H \in \mathrm{dom}_S \, \mathrm{Ric}$, define $\mathrm{Ric}_S(H) := X_2 X_1^{-1}$, which is the unique matrix $X \in \mathbf{R}^{n \times n}$ such that $A + R X$ is stable, and
$$A^T X + X A + X R X - Q = 0$$
Moreover, $X = X^T$.
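
A minimal sketch of $\mathrm{Ric}_S$ in code (the function name and tolerances below are ours, not from the notes), computed from the ordered real Schur form of $H$. The two guard clauses correspond to the two membership requirements in Definition 78.

```python
import numpy as np
from scipy.linalg import schur

def ric_s(A, R, Q):
    """Stabilizing ARE solution X = Ric_S(H) via the stable invariant
    subspace of the Hamiltonian H (a sketch; tolerances are ad hoc)."""
    n = A.shape[0]
    H = np.block([[A, R], [Q, -A.T]])
    if np.any(np.abs(np.linalg.eigvals(H).real) < 1e-10):
        raise ValueError("H has (near-)imaginary-axis eigenvalues")
    T, Z, sdim = schur(H, output='real', sort='lhp')  # stable block first
    X1, X2 = Z[:n, :n], Z[n:, :n]
    if np.linalg.cond(X1) > 1e12:
        raise ValueError("stable subspace lacks the invertibility property")
    return X2 @ np.linalg.inv(X1)

# usage on LQR-type data (R = -B B^T, Q = -C^T C, as on the pages below)
rng = np.random.default_rng(9)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))
X = ric_s(A, -B @ B.T, -C.T @ C)
assert np.allclose(X, X.T, atol=1e-8)                       # X = X^T
assert np.all(np.linalg.eigvals(A - B @ B.T @ X).real < 0)  # A + RX Hurwitz
```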

$A^T X + X A + X R X - Q = 0$: A Useful Lemma

Theorem 79: Suppose $H$ (as in (7.21)) does not have any imaginary-axis eigenvalues, $R$ is either positive semidefinite or negative semidefinite, and $(A, R)$ is a stabilizable pair. Then $H \in \mathrm{dom}_S \, \mathrm{Ric}$.

Proof: Let $\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$ form a basis for the stable, $n$-dimensional invariant subspace of $H$ (this exists, by virtue of the eigenvalue assumption about $H$). Then
$$\begin{bmatrix} A & R \\ Q & -A^T \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \Lambda \qquad (7.28)$$
where $\Lambda \in \mathbf{C}^{n \times n}$ is a stable matrix. Since $\Lambda$ is stable, it must be that $X_1^* X_2 = X_2^* X_1$ (Theorem 74). We simply need to show that $X_1$ is invertible. First we prove that $\mathrm{Ker} \, X_1$ is $\Lambda$-invariant, that is, the implication
$$x \in \mathrm{Ker} \, X_1 \implies \Lambda x \in \mathrm{Ker} \, X_1$$
Let $x \in \mathrm{Ker} \, X_1$. The top row of (7.28) gives $R X_2 = X_1 \Lambda - A X_1$. Premultiply this by $x^* X_2^*$ and postmultiply by $x$ to get
$$x^* X_2^* R X_2 x = x^* X_2^* (X_1 \Lambda - A X_1) x = x^* X_2^* X_1 \Lambda x = x^* X_1^* X_2 \Lambda x = 0$$
since $X_1 x = 0$ and $X_2^* X_1 = X_1^* X_2$. Since $R \succeq 0$ or $R \preceq 0$, it follows that $R X_2 x = 0$. Now, using the top row of (7.28) again gives $X_1 \Lambda x = R X_2 x + A X_1 x = 0$, as desired. Hence $\mathrm{Ker} \, X_1$ is a subspace that is $\Lambda$-invariant. Now, if $\mathrm{Ker} \, X_1 \ne \{0\}$, then there is a vector $v \ne 0$ and $\lambda \in \mathbf{C}$, $\mathrm{Re}(\lambda) < 0$, such that
$$X_1 v = 0, \qquad \Lambda v = \lambda v, \qquad R X_2 v = 0$$

We also have $Q X_1 - A^T X_2 = X_2 \Lambda$, from the second row of (7.28). Multiplying by $v$ gives $-A^T X_2 v = \lambda X_2 v$ (using $X_1 v = 0$), and together with $R X_2 v = 0$ this yields
$$v^* X_2^* \begin{bmatrix} A - (-\bar{\lambda}) I & R \end{bmatrix} = 0$$
Since $\mathrm{Re}(-\bar{\lambda}) > 0$ (because $\mathrm{Re}(\lambda) < 0$) and $(A, R)$ is assumed stabilizable, it must be that $X_2 v = 0$. But $X_1 v = 0$, and $\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$ is full column rank. Hence $v = 0$, and $\mathrm{Ker} \, X_1$ is indeed the trivial set $\{0\}$. This implies that $X_1$ is invertible, and hence $H \in \mathrm{dom}_S \, \mathrm{Ric}$.

$A^T X + X A + X R X - Q = 0$: Quadratic Regulator

The next theorem is a main result of LQR theory.

Theorem 80: Let $A, B, C$ be given, with $(A, B)$ stabilizable, and $(A, C)$ detectable. Define
$$H := \begin{bmatrix} A & -B B^T \\ -C^T C & -A^T \end{bmatrix}$$
Then $H \in \mathrm{dom}_S \, \mathrm{Ric}$. Moreover, $0 \preceq X := \mathrm{Ric}_S(H)$, and
$$\mathrm{Ker} \, X \subseteq \mathrm{Ker} \begin{bmatrix} C \\ C A \\ \vdots \\ C A^{n-1} \end{bmatrix}$$

Proof: Use stabilizability of $(A, B)$ and detectability of $(A, C)$ to show that $H$ has no imaginary-axis eigenvalues. Specifically, suppose $v_1, v_2 \in \mathbf{C}^n$ and $\omega \in \mathbf{R}$ are such that
$$\begin{bmatrix} A & -B B^T \\ -C^T C & -A^T \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = j\omega \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$$
Premultiply the top and bottom rows by $v_2^*$ and $v_1^*$ respectively, combine, and conclude $v_1 = v_2 = 0$. Show that if $(A, B)$ is stabilizable, then $(A, -B B^T)$ is also stabilizable. At this point, apply Theorem 79 to conclude $H \in \mathrm{dom}_S \, \mathrm{Ric}$. Then, rearrange the Riccati equation into a Lyapunov equation and use Theorem 64 to show that $X \succeq 0$. Finally, show that $\mathrm{Ker} \, X \subseteq \mathrm{Ker} \, C$, and that $\mathrm{Ker} \, X$ is $A$-invariant. That shows that $\mathrm{Ker} \, X \subseteq \mathrm{Ker} \, C A^j$ for any $j \ge 0$.
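
A cross-check of Theorem 80 on random data (our own test): the solution obtained from the stable subspace of $H$ should agree with `scipy.linalg.solve_continuous_are`, which solves $A^T X + X A - X B r^{-1} B^T X + q = 0$; with $r = I$ and $q = C^T C$ this is exactly the ARE above.

```python
import numpy as np
from scipy.linalg import schur, solve_continuous_are

rng = np.random.default_rng(5)
n, m, p = 4, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

H = np.block([[A, -B @ B.T], [-C.T @ C, -A.T]])
T, Z, sdim = schur(H, output='real', sort='lhp')
X_ham = Z[n:, :n] @ np.linalg.inv(Z[:n, :n])

X_are = solve_continuous_are(A, B, C.T @ C, np.eye(m))
assert np.allclose(X_ham, X_are, atol=1e-8)
# X >= 0, as Theorem 80 asserts
assert np.all(np.linalg.eigvalsh((X_ham + X_ham.T) / 2) >= -1e-9)
```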

Derivatives: Linear Maps

Recall that the derivative of $f : \mathbf{R} \to \mathbf{R}$ at $x$ is defined to be the number $l_x$ such that
$$\lim_{\delta \to 0} \frac{f(x + \delta) - f(x) - l_x \delta}{\delta} = 0$$
Suppose $f : \mathbf{R}^n \to \mathbf{R}^m$ is differentiable. Then $f'(x)$ is the unique matrix $L_x \in \mathbf{R}^{m \times n}$ which satisfies
$$\lim_{\delta \to 0_n} \frac{1}{\|\delta\|} \left( f(x + \delta) - f(x) - L_x \delta \right) = 0$$
Hence, while $f : \mathbf{R}^n \to \mathbf{R}^m$, the value of $f'$ at a specific location $x$, $f'(x)$, is a linear operator from $\mathbf{R}^n$ to $\mathbf{R}^m$. Hence the function $f'$ is a map from $\mathbf{R}^n$ to $\mathcal{L}(\mathbf{R}^n, \mathbf{R}^m)$.

Similarly, if $f : \mathbf{H}^{n \times n} \to \mathbf{H}^{n \times n}$ (the Hermitian matrices) is differentiable, then the derivative of $f$ at $X$ is the unique linear operator $L_X : \mathbf{H}^{n \times n} \to \mathbf{H}^{n \times n}$ satisfying
$$\lim_{\Delta \to 0_{n \times n}} \frac{1}{\|\Delta\|} \left( f(X + \Delta) - f(X) - L_X(\Delta) \right) = 0$$

Derivative of Riccati: Lyapunov Operator

$$f(X) := X D X - A^* X - X A - C$$
Note,
$$f(X + \Delta) = (X + \Delta) D (X + \Delta) - A^* (X + \Delta) - (X + \Delta) A - C$$
$$= X D X - A^* X - X A - C + \Delta D X + X D \Delta - A^* \Delta - \Delta A + \Delta D \Delta$$
$$= f(X) - \Delta (A - D X) - (A - D X)^* \Delta + \Delta D \Delta$$
For any $W \in \mathbf{C}^{n \times n}$, let $\mathcal{L}_W : \mathbf{H}^{n \times n} \to \mathbf{H}^{n \times n}$ be the Lyapunov operator $\mathcal{L}_W(X) := W^* X + X W$. Using this notation,
$$f(X + \Delta) - f(X) - \mathcal{L}_{-A + D X}(\Delta) = \Delta D \Delta$$
Therefore
$$\lim_{\Delta \to 0_{n \times n}} \frac{1}{\|\Delta\|} \left[ f(X + \Delta) - f(X) - \mathcal{L}_{-A + D X}(\Delta) \right] = \lim_{\Delta \to 0_{n \times n}} \frac{1}{\|\Delta\|} \left[ \Delta D \Delta \right] = 0$$
This means that the Lyapunov operator $\mathcal{L}_{-A + D X}$ is the derivative of the matrix-valued function
$$X D X - A^* X - X A - C$$
at $X$.
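
A finite-difference check of this derivative computation (our own test; the perturbation size is arbitrary). The remainder $f(X + \Delta) - f(X) - \mathcal{L}_{-A + D X}(\Delta)$ should be exactly $\Delta D \Delta$, which is second order in $\Delta$.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)); D = D + D.T
C = rng.standard_normal((n, n)); C = C + C.T
X = rng.standard_normal((n, n)); X = X + X.T

f = lambda Y: Y @ D @ Y - A.T @ Y - Y @ A - C   # A real: A^* = A^T

Delta = 1e-6 * rng.standard_normal((n, n)); Delta = Delta + Delta.T
lhs = f(X + Delta) - f(X)
rhs = -(A - D @ X).T @ Delta - Delta @ (A - D @ X)   # L_{-A+DX}(Delta)
assert np.allclose(lhs, rhs, atol=1e-9)              # agree to first order
assert np.allclose(lhs - rhs, Delta @ D @ Delta, atol=1e-12)
```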

Iterative Algorithms: Roots

Suppose $f : \mathbf{R} \to \mathbf{R}$ is differentiable. An iterative algorithm to find a root of $f(x) = 0$ is
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$
Suppose $f : \mathbf{R}^n \to \mathbf{R}^n$ is differentiable. Then, to first order,
$$f(x + \delta) \approx f(x) + L_x \delta$$
An iterative algorithm to find a root of $f(x) = 0_n$ is obtained by setting the first-order approximation to zero and solving for $\delta$, yielding the update law (with $x_{k+1} = x_k + \delta$)
$$x_{k+1} = x_k - L_{x_k}^{-1}[f(x_k)]$$
Implicitly, it is written as
$$L_{x_k}[x_{k+1} - x_k] = -f(x_k) \qquad (7.29)$$
For the Riccati equation, with $f(X) := X D X - A^* X - X A - C$, we have derived that the derivative of $f$ at $X$ is the Lyapunov operator $\mathcal{L}_{-A + D X}$. The iteration in (7.29) appears as
$$\mathcal{L}_{-A + D X_k}(X_{k+1} - X_k) = -X_k D X_k + A^* X_k + X_k A + C$$
In detail, this is
$$-(A - D X_k)^* (X_{k+1} - X_k) - (X_{k+1} - X_k)(A - D X_k) = -X_k D X_k + A^* X_k + X_k A + C$$
which simplifies (check) to
$$(A - D X_k)^* X_{k+1} + X_{k+1} (A - D X_k) = -X_k D X_k - C$$
Of course, questions about the well-posedness and convergence of the iteration must be addressed in detail.
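
Below is a sketch of this Newton iteration in code (ours; it assumes, for simplicity, that $A$ itself is Hurwitz so that $X_0 = 0$ is a valid starting point, and it uses SciPy's Lyapunov solver for each step). The limit is cross-checked against `solve_continuous_are`.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)   # Hurwitz by construction
B = rng.standard_normal((n, 1))
D = B @ B.T                                         # D = D^T >= 0
C = rng.standard_normal((n, n)); C = C @ C.T        # C = C^T > 0

X = np.zeros((n, n))            # X_0 = 0, so A - D X_0 = A is Hurwitz
for _ in range(50):
    Acl = A - D @ X             # A - D X_k
    # (A - D X_k)^T X_{k+1} + X_{k+1}(A - D X_k) = -X_k D X_k - C
    X_new = solve_continuous_lyapunov(Acl.T, -X @ D @ X - C)
    done = np.max(np.abs(X_new - X)) < 1e-12
    X = X_new
    if done:
        break

assert np.allclose(X @ D @ X - A.T @ X - X @ A - C, 0, atol=1e-8)
assert np.all(np.linalg.eigvals(A - D @ X).real < 0)          # stabilizing
assert np.allclose(X, solve_continuous_are(A, B, C, np.eye(1)), atol=1e-8)
```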

Comparisons: Gohberg, Lancaster and Rodman

Theorems 2.1-2.3 in "On Hermitian Solutions of the Symmetric Algebraic Riccati Equation," by Gohberg, Lancaster and Rodman, SIAM Journal on Control and Optimization, vol. 24, no. 6, Nov. 1986, are the basis for comparing equality and inequality solutions of Riccati equations. The definitions, theorems and main ideas are below.

Suppose that $A \in \mathbf{R}^{n \times n}$, $C \in \mathbf{R}^{n \times n}$, $D \in \mathbf{R}^{n \times n}$, with $D = D^T \succeq 0$, $C = C^T$, and $(A, D)$ stabilizable. Consider the matrix equation
$$X D X - A^T X - X A - C = 0 \qquad (7.30)$$

Definition 81: A symmetric solution $X_+ = X_+^T$ of (7.30) is maximal if $X_+ \succeq X$ for any other symmetric solution $X$ of (7.30).

Check: Maximal solutions (if they exist) are unique.

Theorem 82: If there is a symmetric solution to (7.30), then there is a maximal solution $X_+$. The maximal solution satisfies
$$\max_i \mathrm{Re} \, \lambda_i (A - D X_+) \le 0$$

Theorem 83: If there is a symmetric solution to (7.30), then there is a sequence $\{X_j\}_{j=1}^\infty$ such that for each $j$, $X_j = X_j^T$ and

$0 \preceq X_j D X_j - A^T X_j - X_j A - C$

$X_j \succeq X$ for any $X = X^T$ solving (7.30)

$A - D X_j$ is Hurwitz

$\lim_{j \to \infty} X_j = X_+$

Theorem 84: If there are symmetric solutions to (7.30), then for every symmetric $\tilde{C} \succeq C$, there exist symmetric solutions (and hence a maximal solution $\tilde{X}_+$) to
$$X D X - A^T X - X A - \tilde{C} = 0$$
Moreover, the maximal solutions are related by $\tilde{X}_+ \succeq X_+$.

Identities

For any Hermitian matrices $Y$ and $\hat{Y}$, it is trivial to verify
$$Y (A - D Y) + (A - D Y)^* Y + Y D Y = Y (A - D \hat{Y}) + (A - D \hat{Y})^* Y + \hat{Y} D \hat{Y} - (Y - \hat{Y}) D (Y - \hat{Y}) \qquad (7.31)$$
Let $X$ denote any Hermitian solution (if one exists) to the A.R.E.
$$X D X - A^* X - X A - C = 0 \qquad (7.32)$$
This (the existence of a Hermitian solution) is the departure point for all that will eventually follow. For any Hermitian matrix $Z$,
$$-C = -X D X + A^* X + X A = X (A - D X) + (A - D X)^* X + X D X = X (A - D Z) + (A - D Z)^* X + Z D Z - (X - Z) D (X - Z) \qquad (7.33)$$
This exploits the fact that $X$ solves the original Riccati equation, and uses the identity (7.31), with $Y = X$, $\hat{Y} = Z$.

Iteration

Take any $\tilde{C} = \tilde{C}^* \succeq C$ (which could be $C$, for instance). Also, let $X_0 = X_0^* \succeq 0$ be chosen so that $A - D X_0$ is Hurwitz. Check: why is this possible? Iterate (if possible, which we will show it is) as
$$X_{\nu+1} (A - D X_\nu) + (A - D X_\nu)^* X_{\nu+1} = -X_\nu D X_\nu - \tilde{C} \qquad (7.34)$$
Note that by the Hurwitz assumption on $A - D X_0$, at least the first iteration is possible ($\nu = 0$). Also, recall that the iteration is analogous to a first-order algorithm for root finding, $x_{n+1} := x_n - \frac{f(x_n)}{f'(x_n)}$.

Use $Z := X_\nu$ in equation (7.33), and subtract from the iteration equation (7.34), leaving
$$(X_{\nu+1} - X)(A - D X_\nu) + (A - D X_\nu)^* (X_{\nu+1} - X) = -(X - X_\nu) D (X - X_\nu) - \left( \tilde{C} - C \right) \qquad (7.35)$$
which is valid whenever the iteration equation is valid.

Main Induction

Now for the induction: assume $X_\nu = X_\nu^*$ and $A - D X_\nu$ is Hurwitz. Note that this is true for $\nu = 0$. Since $A - D X_\nu$ is Hurwitz, there is a unique solution $X_{\nu+1}$ to equation (7.34), and $X_{\nu+1} = X_{\nu+1}^*$. Moreover, since $A - D X_\nu$ is Hurwitz and the right-hand side of (7.35) is $\preceq 0$, it follows that $X_{\nu+1} - X \succeq 0$ (note that $X_0 - X$ is not necessarily positive semidefinite).

Rewrite (7.31) with $Y = X_{\nu+1}$ and $\hat{Y} = X_\nu$:
$$X_{\nu+1}(A - D X_{\nu+1}) + (A - D X_{\nu+1})^* X_{\nu+1} + X_{\nu+1} D X_{\nu+1} = \underbrace{X_{\nu+1}(A - D X_\nu) + (A - D X_\nu)^* X_{\nu+1}}_{= \, -X_\nu D X_\nu - \tilde{C}} + X_\nu D X_\nu - (X_{\nu+1} - X_\nu) D (X_{\nu+1} - X_\nu)$$
Rewriting,
$$-\tilde{C} = X_{\nu+1}(A - D X_{\nu+1}) + (A - D X_{\nu+1})^* X_{\nu+1} + X_{\nu+1} D X_{\nu+1} + (X_{\nu+1} - X_\nu) D (X_{\nu+1} - X_\nu)$$
Writing (7.33) with $Z = X_{\nu+1}$ gives
$$-C = X (A - D X_{\nu+1}) + (A - D X_{\nu+1})^* X + X_{\nu+1} D X_{\nu+1} - (X - X_{\nu+1}) D (X - X_{\nu+1})$$
Subtracting these yields
$$-\left( \tilde{C} - C \right) = (X_{\nu+1} - X)(A - D X_{\nu+1}) + (A - D X_{\nu+1})^* (X_{\nu+1} - X) + (X_{\nu+1} - X_\nu) D (X_{\nu+1} - X_\nu) + (X - X_{\nu+1}) D (X - X_{\nu+1}) \qquad (7.36)$$
Now suppose that $\mathrm{Re}(\lambda) \ge 0$, and $v \in \mathbf{C}^n$ is such that $(A - D X_{\nu+1}) v = \lambda v$. Pre- and postmultiply (7.36) by $v^*$ and $v$ respectively. This gives
$$\underbrace{-v^* \left( \tilde{C} - C \right) v}_{\le 0} = \underbrace{2 \mathrm{Re}(\lambda)}_{\ge 0} \underbrace{v^* (X_{\nu+1} - X) v}_{\ge 0} + \underbrace{\left\| D^{1/2} (X_{\nu+1} - X_\nu) v \right\|^2}_{\ge 0} + \underbrace{\left\| D^{1/2} (X - X_{\nu+1}) v \right\|^2}_{\ge 0}$$
Hence, both sides are in fact equal to zero, and therefore
$$D^{1/2} (X_{\nu+1} - X_\nu) v = 0$$

Since $D \succeq 0$, it follows that $D (X_{\nu+1} - X_\nu) v = 0$, which means $D X_{\nu+1} v = D X_\nu v$. Hence
$$\lambda v = (A - D X_{\nu+1}) v = (A - D X_\nu) v$$
But $A - D X_\nu$ is Hurwitz, and $\mathrm{Re}(\lambda) \ge 0$, so $v = 0_n$, as desired. This means $A - D X_{\nu+1}$ is Hurwitz.

Summarizing: Suppose there exists a Hermitian $X = X^*$ solving
$$X D X - A^* X - X A - C = 0$$
Let $X_0 = X_0^* \succeq 0$ be chosen so $A - D X_0$ is Hurwitz. Let $\tilde{C}$ be a Hermitian matrix with $\tilde{C} \succeq C$. Iteratively define (if possible) a sequence $\{X_\nu\}_{\nu=1}^\infty$ via
$$X_{\nu+1} (A - D X_\nu) + (A - D X_\nu)^* X_{\nu+1} = -X_\nu D X_\nu - \tilde{C}$$
The following is true:

Given the choice of $X_0$ and $\tilde{C}$, the sequence $\{X_\nu\}_{\nu=1}^\infty$ is well-defined, unique, and has $X_\nu = X_\nu^*$.

For each $\nu \ge 0$, $A - D X_\nu$ is Hurwitz.

For each $\nu \ge 1$, $X_\nu \succeq X$.

It remains to show that for $\nu \ge 1$, $X_{\nu+1} \preceq X_\nu$ (although it is not necessarily true that $X_1 \preceq X_0$).

Decreasing Proof

For $\nu \ge 1$, apply (7.31) with $Y = X_\nu$ and $\hat{Y} = X_{\nu-1}$, and substitute the iteration equation (7.34) (shifted back one step), giving
$$X_\nu (A - D X_\nu) + (A - D X_\nu)^* X_\nu + X_\nu D X_\nu = -\tilde{C} - (X_\nu - X_{\nu-1}) D (X_\nu - X_{\nu-1})$$
The iteration equation is
$$X_{\nu+1} (A - D X_\nu) + (A - D X_\nu)^* X_{\nu+1} = -X_\nu D X_\nu - \tilde{C}$$
Subtracting leaves
$$(X_\nu - X_{\nu+1})(A - D X_\nu) + (A - D X_\nu)^* (X_\nu - X_{\nu+1}) = -(X_\nu - X_{\nu-1}) D (X_\nu - X_{\nu-1})$$
The fact that $A - D X_\nu$ is Hurwitz and $D \succeq 0$ implies that $X_\nu - X_{\nu+1} \succeq 0$. Hence, we have shown that for all $\nu \ge 1$,
$$X \preceq X_{\nu+1} \preceq X_\nu$$

Monotonic Matrix Sequences: Convergence

Every nonincreasing sequence of real numbers which is bounded below has a limit. Specifically, if $\{x_k\}_{k=0}^\infty$ is a sequence of real numbers, and there is a real number $\gamma$ such that $x_k \ge \gamma$ and $x_{k+1} \le x_k$ for all $k$, then there is a real number $x$ such that
$$\lim_{k \to \infty} x_k = x$$
A similar result holds for symmetric matrices. Specifically, if $\{X_k\}_{k=0}^\infty$ is a sequence of real, symmetric matrices, and there is a real symmetric matrix $\Gamma$ such that $X_k \succeq \Gamma$ and $X_{k+1} \preceq X_k$ for all $k$, then there is a real symmetric matrix $X$ such that
$$\lim_{k \to \infty} X_k = X$$
This is proven by first considering the sequences $\left\{ e_i^T X_k e_i \right\}_{k=0}^\infty$, which shows that all of the diagonal entries of the sequence have limits. Then look at $\left\{ (e_i + e_j)^T X_k (e_i + e_j) \right\}_{k=0}^\infty$ to conclude convergence of the off-diagonal terms.

Limit of Sequence: Maximal Solution

The sequence $\{X_\nu\}_{\nu=0}^\infty$, generated by (7.34), has a limit,
$$\lim_{\nu \to \infty} X_\nu =: \tilde{X}_+$$
Remember that $\tilde{C}$ was any Hermitian matrix $\succeq C$. Facts:

Since each $A - D X_\nu$ is Hurwitz, in passing to the limit we may get eigenvalues with zero real parts; hence $\max_i \mathrm{Re} \, \lambda_i (A - D \tilde{X}_+) \le 0$.

Since each $X_\nu = X_\nu^*$, it follows that $\tilde{X}_+ = \tilde{X}_+^*$.

Since each $X_\nu \succeq X$ (where $X$ is any Hermitian solution to the equality $X D X - A^* X - X A - C = 0$, equation (7.32)), it follows by passing to the limit that $\tilde{X}_+ \succeq X$.

The iteration equation is
$$X_{\nu+1} (A - D X_\nu) + (A - D X_\nu)^* X_{\nu+1} = -X_\nu D X_\nu - \tilde{C}$$
Taking limits yields
$$\tilde{X}_+ \left( A - D \tilde{X}_+ \right) + \left( A - D \tilde{X}_+ \right)^* \tilde{X}_+ = -\tilde{X}_+ D \tilde{X}_+ - \tilde{C}$$
which has a few cancellations, leaving
$$\tilde{X}_+ A + A^* \tilde{X}_+ - \tilde{X}_+ D \tilde{X}_+ + \tilde{C} = 0$$
so $\tilde{X}_+$ is a Hermitian solution of the Riccati equation with constant term $\tilde{C}$. Finally, if $X_+$ denotes the above limit when $\tilde{C} = C$, it follows that the above ideas hold, so $X_+ \succeq X$ for all solutions $X$ of (7.32): $X_+$ is the maximal solution. And really finally, since $X_+$ is a Hermitian solution to (7.32), the properties of $\tilde{X}_+$ imply that $\tilde{X}_+ \succeq X_+$.

Analogies with the Scalar Case: $d x^2 - 2 a x - c = 0$

Matrix statement | Scalar specialization
$D = D^* \succeq 0$ | $d \in \mathbf{R}$, with $d \ge 0$
$(A, D)$ stabilizable | $a < 0$ or $d > 0$
existence of Hermitian solution | $a^2 + c d \ge 0$
maximal solution $X_+$ | $x_+ = \frac{a + \sqrt{a^2 + c d}}{d}$ for $d > 0$
maximal solution $X_+$ | $x_+ = -\frac{c}{2a}$ for $d = 0$
$A - D X_+$ | $-\sqrt{a^2 + c d}$ for $d > 0$
$A - D X_+$ | $a$ for $d = 0$
$A - D X_0$ Hurwitz | $x_0 > \frac{a}{d}$ for $d > 0$
$A - D X_0$ Hurwitz | $a < 0$ for $d = 0$
$A - D X$ Hurwitz | $x > \frac{a}{d}$ for $d > 0$
$A - D X$ Hurwitz | $a < 0$ for $d = 0$

If $d = 0$, then the graph of $-2 a x - c$ (with $a < 0$) is a straight line with a positive slope. There is one (and only one) real solution, at $x = -\frac{c}{2a}$.

If $d > 0$, then the graph of $d x^2 - 2 a x - c$ is an upward-directed parabola. The minimum occurs at $x = \frac{a}{d}$. There are either 0, 1, or 2 real solutions to $d x^2 - 2 a x - c = 0$:

If $a^2 + c d < 0$, then there are no real solutions (the minimum value is positive).

If $a^2 + c d = 0$, then there is one solution, at $x = \frac{a}{d}$, which is where the minimum occurs.

If $a^2 + c d > 0$, then there are two solutions, at $x_- = \frac{a}{d} - \frac{\sqrt{a^2 + c d}}{d}$ and $x_+ = \frac{a}{d} + \frac{\sqrt{a^2 + c d}}{d}$, which are equidistant from the minimum location $\frac{a}{d}$.
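
The scalar column of the table can be checked directly against the Hamiltonian machinery (our own 1x1 example). The A.R.E. (7.20) matches $d x^2 - 2 a x - c = 0$ with $A = a$, $R = -d$, $Q = -c$, so $H = \begin{bmatrix} a & -d \\ -c & -a \end{bmatrix}$.

```python
import numpy as np
from scipy.linalg import schur

a, d, c = -1.0, 2.0, 3.0
assert a * a + c * d > 0                   # two real solutions exist

x_plus = (a + np.sqrt(a * a + c * d)) / d  # maximal solution from the table

H = np.array([[a, -d], [-c, -a]])          # Hamiltonian of the 1x1 ARE
T, Z, sdim = schur(H, output='real', sort='lhp')
x_ric = Z[1, 0] / Z[0, 0]                  # X2 X1^{-1} in dimension one

assert np.isclose(x_ric, x_plus)
assert np.isclose(a - d * x_plus, -np.sqrt(a * a + c * d))  # table row A - D X_+
```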