
Lyapunov Operator

Let A ∈ F^{n×n} be given, and define a linear operator L_A : C^{n×n} → C^{n×n} by

    L_A(X) := A^*X + XA.

Suppose A is diagonalizable (what follows can be generalized even if this is not possible; the qualitative results still hold). Let {λ_1, λ_2, ..., λ_n} be the eigenvalues of A. In general, these are complex numbers. Let V ∈ C^{n×n} be the matrix of eigenvectors of A, so

    V = [v_1  v_2  ···  v_n] ∈ C^{n×n}

is invertible, and A v_i = λ_i v_i for each i. Let w_i ∈ C^n be defined so that the rows of V^{-1} are w_1^T, w_2^T, ..., w_n^T. Since V^{-1}V = I_n, for each 1 ≤ i, j ≤ n,

    w_i^T v_j = δ_ij := { 0 if i ≠ j,  1 if i = j }.

Note also that V^{-1}A = ΛV^{-1} gives w_i^T A = λ_i w_i^T; taking conjugate transposes, the vectors y_i := w̄_i satisfy A^* y_i = λ̄_i y_i, and y_i^* v_j = δ_ij (equivalently v_j^* y_i = δ_ij). For each 1 ≤ i, j ≤ n define

    X_ij := y_i y_j^* ∈ C^{n×n}.

Eigenvalues of the Lyapunov Operator

Lemma 63: The set {X_ij}_{i=1,...,n; j=1,...,n} is a linearly independent set, and this set is a full set of eigenvectors for the linear operator L_A. Moreover, the set of eigenvalues of L_A is the set of complex numbers

    { λ̄_i + λ_j : 1 ≤ i, j ≤ n }.

Proof: Suppose {α_ij}_{i=1,...,n; j=1,...,n} are scalars and

    Σ_{j=1}^n Σ_{i=1}^n α_ij X_ij = 0_{n×n}.   (7.19)

Premultiply this by v_l^* and postmultiply by v_k:

    0 = v_l^* ( Σ_j Σ_i α_ij X_ij ) v_k = Σ_j Σ_i α_ij (v_l^* y_i)(y_j^* v_k) = Σ_j Σ_i α_ij δ_li δ_kj = α_lk,

which holds for any 1 ≤ k, l ≤ n. Hence all of the α's are 0, and the set {X_ij} is indeed a linearly independent set (of n² matrices, hence a basis for C^{n×n}). Also, by definition of L_A, we have

    L_A(X_ij) = A^*X_ij + X_ij A
              = A^* y_i y_j^* + y_i (A^* y_j)^*
              = λ̄_i y_i y_j^* + y_i (λ̄_j y_j)^*
              = (λ̄_i + λ_j) y_i y_j^*
              = (λ̄_i + λ_j) X_ij.

So X_ij is indeed an eigenvector of L_A, with eigenvalue λ̄_i + λ_j.
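As a quick numerical sanity check of Lemma 63 (an illustration, not part of the notes): in the vec basis, L_A is represented by the n² × n² matrix kron(I, A^*) + kron(A^T, I), whose eigenvalues should be exactly the numbers λ̄_i + λ_j. A minimal Python/NumPy sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    # matrix of L_A(X) = A* X + X A acting on vec(X) (column-major vec):
    # vec(A* X) = kron(I, A*) vec(X) and vec(X A) = kron(A^T, I) vec(X)
    L = np.kron(np.eye(n), A.conj().T) + np.kron(A.T, np.eye(n))

    lam = np.linalg.eigvals(A)
    predicted = np.array([lam[i].conj() + lam[j] for i in range(n) for j in range(n)])
    actual = np.linalg.eigvals(L)

    # every predicted value conj(lambda_i) + lambda_j matches an eigenvalue of L
    err = max(np.abs(actual - p).min() for p in predicted)
    print(err)  # ~ 1e-14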

Alternate Invertibility Proof

Do a Schur decomposition of A, namely A = QΛQ^*, with Q unitary and Λ upper triangular. Then the equation A^*X + XA = R can be rewritten as

    A^*X + XA = R
    QΛ^*Q^*X + XQΛQ^* = R
    Λ^*(Q^*XQ) + (Q^*XQ)Λ = Q^*RQ
    Λ^*X̃ + X̃Λ = R̃,

where X̃ := Q^*XQ and R̃ := Q^*RQ. Since Q is invertible, the redefinition of variables is an invertible transformation. Writing out all of the scalar equations (we are trying to show that for every R, there is a unique X satisfying the equation) yields an upper triangular system in the n² unknowns, with the values λ̄_i + λ_j on the diagonal. Do it. This is an easy (and computationally motivated) proof that the Lyapunov operator L_A is an invertible operator if and only if λ̄_i + λ_j ≠ 0 for all pairs of eigenvalues of A.

Integral Formula

If A is stable (all eigenvalues have negative real parts), then λ̄_i + λ_j ≠ 0 for all combinations. Hence L_A is an invertible linear operator, and we can explicitly solve the equation L_A(X) = -Q for X using an integral formula.

Theorem 64: Let A ∈ F^{n×n} be given, and suppose that A is stable. Then L_A is invertible, and for any Q ∈ F^{n×n}, the unique X ∈ F^{n×n} solving L_A(X) = -Q is given by

    X = ∫_0^∞ e^{A^*τ} Q e^{Aτ} dτ.

Proof: For each t ≥ 0, define

    S(t) := ∫_0^t e^{A^*τ} Q e^{Aτ} dτ.

Since A is stable, the integrand is made up of decaying exponentials, and S(t) has a well-defined limit as t → ∞, which we have already denoted X := lim_{t→∞} S(t). For any t ≥ 0,

    A^*S(t) + S(t)A = ∫_0^t ( A^* e^{A^*τ} Q e^{Aτ} + e^{A^*τ} Q e^{Aτ} A ) dτ
                    = ∫_0^t d/dτ ( e^{A^*τ} Q e^{Aτ} ) dτ
                    = e^{A^*t} Q e^{At} - Q.

Taking limits on both sides gives A^*X + XA = -Q, as desired.
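The integral formula is easy to test numerically. The sketch below is illustrative, not part of the notes; it compares a truncated trapezoid approximation of the integral with a direct Lyapunov solve. (SciPy's solve_continuous_lyapunov(B, R) solves BY + YB^H = R, so B = A^T gives the equation A^TX + XA = -Q for real A.)

    import numpy as np
    from scipy.linalg import expm, solve_continuous_lyapunov

    rng = np.random.default_rng(1)
    n = 3
    A = rng.standard_normal((n, n))
    A -= (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(n)  # force A stable
    Q = rng.standard_normal((n, n)); Q = Q.T @ Q              # Q = Q^T >= 0

    # composite trapezoid approximation of int_0^T e^{A^T t} Q e^{A t} dt
    T, N = 40.0, 2000
    ts = np.linspace(0.0, T, N + 1)
    vals = np.array([expm(A.T * t) @ Q @ expm(A * t) for t in ts])
    X_int = (T / N) * ((vals[0] + vals[-1]) / 2 + vals[1:-1].sum(axis=0))

    # direct solve of A^T X + X A = -Q (A real, so A^* = A^T)
    X_lyap = solve_continuous_lyapunov(A.T, -Q)
    print(np.linalg.norm(X_int - X_lyap))  # small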

A Few Other Useful Facts

Theorem 65: Suppose A, Q ∈ F^{n×n} are given. Assume that A is stable and Q = Q^* ≥ 0. Then the pair (A, Q) is observable if and only if

    X := ∫_0^∞ e^{A^*τ} Q e^{Aτ} dτ > 0.

Proof: First suppose (A, Q) is not observable. Then there is an x_o ∈ F^n, x_o ≠ 0, such that Qe^{At}x_o = 0 for all t ≥ 0. Consequently x_o^* e^{A^*t} Q e^{At} x_o = 0 for all t ≥ 0. Integrating gives

    0 = ∫_0^∞ x_o^* e^{A^*t} Q e^{At} x_o dt = x_o^* ( ∫_0^∞ e^{A^*t} Q e^{At} dt ) x_o = x_o^* X x_o.

Since x_o ≠ 0, X is not positive definite.

Conversely, suppose that X is not positive definite (note that by virtue of the definition, X is at least positive semidefinite). Then there exists x_o ∈ F^n, x_o ≠ 0, such that x_o^* X x_o = 0. Using the integral form for X (valid since A is stable by assumption) and the fact that Q ≥ 0 gives

    ∫_0^∞ ‖Q^{1/2} e^{Aτ} x_o‖² dτ = 0.

The integrand is nonnegative and continuous, hence it must be 0 for all τ ≥ 0. Hence Qe^{Aτ}x_o = 0 for all τ ≥ 0. Since x_o ≠ 0, this implies that (A, Q) is not observable.
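A small illustration of Theorem 65 (a sketch with made-up data, not from the notes): with a diagonal stable A, a choice Q = C^TC that is blind to one mode makes the Gramian singular, while an observable choice makes it positive definite.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[-1.0, 0.0],
                  [0.0, -2.0]])
    C_obs = np.array([[1.0, 1.0]])   # sees both modes
    C_uno = np.array([[1.0, 0.0]])   # blind to the second state

    for C in (C_obs, C_uno):
        # Gramian X solves A^T X + X A = -C^T C
        X = solve_continuous_lyapunov(A.T, -C.T @ C)
        print(np.linalg.eigvalsh(X).min())
    # first case: strictly positive; second case: 0 (X singular)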

More Useful Facts

Theorem 66: Let (A, C) be detectable, and suppose X is any solution to A^*X + XA = -C^*C (there is no a priori assumption that L_A is an invertible operator, hence there could be multiple solutions). Then X ≥ 0 if and only if A is stable.

Proof: First suppose A is not stable. Then there is a v ∈ C^n, v ≠ 0, and λ ∈ C with Re(λ) ≥ 0 and Av = λv (and v^*A^* = λ̄v^*). Since (A, C) is detectable and the eigenvalue λ is in the closed right-half plane, Cv ≠ 0. Note that

    -‖Cv‖² = -v^*C^*Cv = v^*(A^*X + XA)v = (λ̄ + λ) v^*Xv = 2 Re(λ) v^*Xv.

Since ‖Cv‖² > 0 and Re(λ) ≥ 0, we must actually have Re(λ) > 0 and v^*Xv < 0. Hence X is not positive semidefinite.

Conversely, if A is stable, there is only one solution to the equation A^*X + XA = -C^*C; moreover it is given by the integral formula

    X = ∫_0^∞ e^{A^*τ} C^*C e^{Aτ} dτ,

which is clearly positive semidefinite.

Ordering of Lyapunov Solutions

Finally, some simple results about the ordering of Lyapunov solutions. Let A be stable, so L_A is guaranteed to be invertible. Use L_A^{-1}(-Q) to denote the unique solution X of the equation A^*X + XA = -Q.

Lemma 67: If Q_1 = Q_1^* ≥ Q_2 = Q_2^*, then L_A^{-1}(-Q_1) ≥ L_A^{-1}(-Q_2).

Lemma 68: If Q_1 = Q_1^* > Q_2 = Q_2^*, then L_A^{-1}(-Q_1) > L_A^{-1}(-Q_2).

Proofs: Let X_1 and X_2 be the two solutions; obviously

    A^*X_1 + X_1A = -Q_1
    A^*X_2 + X_2A = -Q_2
    A^*(X_1 - X_2) + (X_1 - X_2)A = -(Q_1 - Q_2).

Since A is stable, the integral formula applies, and

    X_1 - X_2 = ∫_0^∞ e^{A^*τ} (Q_1 - Q_2) e^{Aτ} dτ.

Obviously, if Q_1 - Q_2 ≥ 0, then X_1 - X_2 ≥ 0, which proves Lemma 67. If Q_1 - Q_2 > 0, then the pair (A, Q_1 - Q_2) is observable, and by Theorem 65, X_1 - X_2 > 0, proving Lemma 68.

Warning: The converses of Lemma 67 and Lemma 68 are NOT true.
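A numerical spot-check of Lemma 67 (illustrative, with randomly generated data):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    rng = np.random.default_rng(2)
    n = 3
    A = rng.standard_normal((n, n))
    A -= (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(n)   # make A stable

    Q2 = rng.standard_normal((n, n)); Q2 = Q2.T @ Q2
    P = rng.standard_normal((n, n)); Q1 = Q2 + P.T @ P         # Q1 - Q2 >= 0

    X1 = solve_continuous_lyapunov(A.T, -Q1)   # A^T X1 + X1 A = -Q1
    X2 = solve_continuous_lyapunov(A.T, -Q2)   # A^T X2 + X2 A = -Q2
    print(np.linalg.eigvalsh(X1 - X2).min() >= -1e-10)         # X1 - X2 >= 0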

Algebraic Riccati Equation

The algebraic Riccati equation (A.R.E.) is

    A^T X + XA + XRX - Q = 0   (7.20)

where A, R, Q ∈ R^{n×n} are given matrices, with R = R^T and Q = Q^T, and solutions X ∈ C^{n×n} are sought. We will primarily be interested in solutions X of (7.20) which satisfy

    A + RX is Hurwitz,

and are called stabilizing solutions of the Riccati equation. Associated with the data A, R, and Q, we define the Hamiltonian matrix H ∈ R^{2n×2n} by

    H := [ A    R
           Q   -A^T ].   (7.21)

Invariant Subspace Definition

Suppose A ∈ C^{n×n}. Then (via matrix-vector multiplication), we view A as a linear map from C^n → C^n. A subspace V ⊆ C^n is said to be A-invariant if for every vector v ∈ V, the vector Av ∈ V too. In terms of matrices, suppose that the columns of W ∈ C^{n×m} are a basis for V. In other words, they are linearly independent, and span the space V, so

    V = { Wη : η ∈ C^m }.

Then V being A-invariant is equivalent to the existence of a matrix S ∈ C^{m×m} such that

    AW = WS.

The matrix S is the matrix representation of A restricted to V, referred to the basis W. Clearly, since W has linearly independent columns, the matrix S is unique. If (Q, Λ) is the Schur decomposition of A, then for any k with 1 ≤ k ≤ n, the first k columns of Q span a k-dimensional invariant subspace of A, as seen below: partitioning Q = [Q_1 Q_2] with Q_1 ∈ C^{n×k},

    A [Q_1  Q_2] = [Q_1  Q_2] [ Λ_11  Λ_12
                                0     Λ_22 ].

Clearly, AQ_1 = Q_1Λ_11.
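The Schur-based invariant subspace is easy to exhibit numerically; the following sketch (not from the notes) verifies AW = WS with W the first k Schur vectors:

    import numpy as np
    from scipy.linalg import schur

    rng = np.random.default_rng(3)
    n, k = 5, 2
    A = rng.standard_normal((n, n))
    T, Qs = schur(A, output='complex')    # A = Qs T Qs^*, T upper triangular
    Q1 = Qs[:, :k]                        # first k Schur vectors
    S = T[:k, :k]                         # representation of A restricted to span(Q1)
    print(np.allclose(A @ Q1, Q1 @ S))    # True: AW = WS with W = Q1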

Eigenvectors in Invariant Subspaces

Note that since S ∈ C^{m×m} has at least one eigenvector, say ξ ≠ 0_m, with associated eigenvalue λ, it follows that Wξ ∈ V is an eigenvector of A:

    A(Wξ) = (AW)ξ = WSξ = W(λξ) = λ(Wξ).

Hence, every nontrivial invariant subspace of A contains an eigenvector of A.

Stable Invariant Subspace

If S is Hurwitz, then V is a stable A-invariant subspace. Note that while the matrix S depends on the particular choice of the basis for V (i.e., the specific columns of W), the eigenvalues of S depend only on the subspace V, not on the basis choice. To see this, suppose that the columns of W̃ ∈ C^{n×m} span the same space as the columns of W. Obviously, there is a matrix T ∈ C^{m×m} such that W = W̃T. The linear independence of the columns of W (and W̃) implies that T is invertible. Also,

    A W̃ = AWT^{-1} = WST^{-1} = W̃ (TST^{-1}) =: W̃ S̃.

Clearly, the eigenvalues of S and S̃ = TST^{-1} are the same. The linear operator A|_V is the transformation from V → V defined as A, simply restricted to V. Since V is A-invariant, this is well-defined. Clearly, S is the matrix representation of this operator, using the columns of W as the basis choice.

The Riccati Equation and Invariant Subspaces

The first theorem gives a way of constructing solutions to (7.20) in terms of invariant subspaces of H.

Theorem 69: Let V ⊆ C^{2n} be an n-dimensional invariant subspace of H (H is a linear map from C^{2n} to C^{2n}), and let X_1, X_2 ∈ C^{n×n} be two complex matrices such that

    V = span [ X_1
               X_2 ].

If X_1 is invertible, then X := X_2X_1^{-1} is a solution to the algebraic Riccati equation, and spec(A + RX) = spec(H|_V).

Proof: Since V is H-invariant, there is a matrix Λ ∈ C^{n×n} with

    [ A    R   ] [ X_1 ]   [ X_1 ]
    [ Q   -A^T ] [ X_2 ] = [ X_2 ] Λ,   (7.22)

which gives

    AX_1 + RX_2 = X_1Λ   (7.23)

and

    QX_1 - A^TX_2 = X_2Λ.   (7.24)

Pre- and postmultiplying equation (7.23) by X_2X_1^{-1} and X_1^{-1} respectively gives

    (X_2X_1^{-1})A + (X_2X_1^{-1})R(X_2X_1^{-1}) = X_2ΛX_1^{-1},   (7.25)

and postmultiplying (7.24) by X_1^{-1} gives

    Q - A^T(X_2X_1^{-1}) = X_2ΛX_1^{-1}.   (7.26)

Combining (7.25) and (7.26) yields

    A^T(X_2X_1^{-1}) + (X_2X_1^{-1})A + (X_2X_1^{-1})R(X_2X_1^{-1}) - Q = 0,

so X_2X_1^{-1} is indeed a solution. Equation (7.23) implies that A + R(X_2X_1^{-1}) = X_1ΛX_1^{-1}, and hence they have the same eigenvalues. But, by definition, Λ is a matrix representation of the map H|_V, so the theorem is proved.
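A sketch of Theorem 69 in code (illustrative; it assumes H is diagonalizable and that the selected X_1 block happens to be invertible, both of which hold generically for random data):

    import numpy as np

    rng = np.random.default_rng(4)
    n = 3
    A = rng.standard_normal((n, n))
    R = rng.standard_normal((n, n)); R = R + R.T       # R = R^T
    Qm = rng.standard_normal((n, n)); Qm = Qm + Qm.T   # Q = Q^T

    H = np.block([[A, R], [Qm, -A.T]])
    w, V = np.linalg.eig(H)
    idx = np.argsort(w.real)[:n]        # pick n eigenvalues (smallest real parts)
    basis = V[:, idx]                   # eigenvectors span an H-invariant subspace
    X1, X2 = basis[:n, :], basis[n:, :]
    X = X2 @ np.linalg.inv(X1)          # assumes X1 invertible for this sample

    residual = A.T @ X + X @ A + X @ R @ X - Qm
    print(np.linalg.norm(residual))     # ~ 0: X solves the ARE

    # spec(A + RX) consists of the selected eigenvalues of H
    eigs = np.linalg.eigvals(A + R @ X)
    print(max(np.abs(w[idx] - e).min() for e in eigs))   # ~ 0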

Invariant Subspace: Basis Independence

The next theorem shows that the specific choice of matrices X_1 and X_2 which span V is not important.

Theorem 70: Let V, X_1, and X_2 be as above, and suppose there are two other matrices X̃_1, X̃_2 ∈ C^{n×n} such that the columns of the stacked matrix (X̃_1 on top of X̃_2) also span V. Then X_1 is invertible if and only if X̃_1 is invertible. Consequently, both X_2X_1^{-1} and X̃_2X̃_1^{-1} are solutions to (7.20), and in fact they are equal.

Remark: Hence, the question of whether or not X_1 is invertible is a property of the subspace, and not of the particular basis choice for the subspace. Therefore, we say that an n-dimensional subspace V has the invertibility property if in any basis, the top portion is invertible. In the literature, this is often called the complementary property.

Proof: Since both sets of columns span the same n-dimensional subspace, there is an invertible K ∈ C^{n×n} such that

    [ X̃_1 ]   [ X_1 ]
    [ X̃_2 ] = [ X_2 ] K.

Obviously, X̃_1 = X_1K is invertible if and only if X_1 is invertible, and

    X̃_2X̃_1^{-1} = (X_2K)(X_1K)^{-1} = X_2KK^{-1}X_1^{-1} = X_2X_1^{-1}.

Invariant Subspace: the Converse

As we would hope, the converse of Theorem 69 is true.

Theorem 71: If X ∈ C^{n×n} is a solution to the A.R.E., then there exist matrices X_1, X_2 ∈ C^{n×n}, with X_1 invertible, such that X = X_2X_1^{-1} and the columns of

    [ X_1 ]
    [ X_2 ]

span an n-dimensional invariant subspace of H.

Proof: Define Λ := A + RX ∈ C^{n×n}. Multiplying this by X gives XΛ = XA + XRX = Q - A^TX, the second equality coming from the fact that X is a solution to the Riccati equation. We write these two relations as

    [ A    R   ] [ I_n ]   [ I_n ]
    [ Q   -A^T ] [ X   ] = [ X   ] Λ.

Hence, the columns of the stacked matrix (I_n on top of X) span an n-dimensional invariant subspace of H, and defining X_1 := I_n and X_2 := X completes the proof.

Hence, there is a one-to-one correspondence between n-dimensional invariant subspaces of H with the invertibility property, and solutions of the Riccati equation.

Eigenvalue Distribution of H

In the previous section, we showed that any solution X of the Riccati equation is intimately related to an invariant subspace V of H, and the matrix A + RX is similar (i.e., related by a similarity transformation) to the restriction of H to V, H|_V. Hence, the eigenvalues of A + RX will always be a subset of the eigenvalues of H. Recall the particular structure of the Hamiltonian matrix H,

    H := [ A    R
           Q   -A^T ].

Since H is real, we know that the eigenvalues of H are symmetric about the real axis, including multiplicities. The added structure of H gives additional symmetry in the eigenvalues, as follows.

Lemma 72: The eigenvalues of H are symmetric about the origin (and hence also about the imaginary axis).

Proof: Define J ∈ R^{2n×2n} by

    J = [ 0   I
         -I   0 ].

Note that JHJ^{-1} = -H^T. Also, JH = -H^TJ = (JH)^T. Let p(λ) be the characteristic polynomial of H. Then

    p(λ) := det(λI - H) = det(λI - JHJ^{-1}) = det(λI + H^T) = det(λI + H) = (-1)^{2n} det(-λI - H) = p(-λ).   (7.27)

Hence the roots of p are symmetric about the origin, and (since H is real) the symmetry about the imaginary axis follows.
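Lemma 72 can be checked directly in a few lines (an illustration, not from the notes):

    import numpy as np

    rng = np.random.default_rng(5)
    n = 3
    A = rng.standard_normal((n, n))
    R = rng.standard_normal((n, n)); R = R + R.T
    Qm = rng.standard_normal((n, n)); Qm = Qm + Qm.T

    H = np.block([[A, R], [Qm, -A.T]])
    I = np.eye(n)
    J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])
    print(np.allclose(J @ H @ np.linalg.inv(J), -H.T))   # True

    # for every eigenvalue w_i of H, -w_i is also an eigenvalue
    w = np.linalg.eigvals(H)
    print(max(np.abs(w + wi).min() for wi in w))          # ~ 0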

Hermitian Solutions

Lemma 73: If H has no eigenvalues on the imaginary axis, then H has n eigenvalues in the open right-half plane, and n eigenvalues in the open left-half plane.

Theorem 74: Suppose V is an n-dimensional invariant subspace of H, and X_1, X_2 ∈ C^{n×n} form a matrix whose columns span V. Let Λ ∈ C^{n×n} be a matrix representation of H|_V. If λ̄_i + λ_j ≠ 0 for all eigenvalues λ_i, λ_j ∈ spec(Λ), then X_1^*X_2 = X_2^*X_1.

Proof: Using the matrix J from the proof of Lemma 72, the matrix

    [X_1^*  X_2^*] JH [ X_1
                        X_2 ]

is Hermitian (JH is real and symmetric), and it equals (X_1^*X_2 - X_2^*X_1)Λ. Define W := X_1^*X_2 - X_2^*X_1. Then W^* = -W, and Hermitianness of WΛ gives WΛ = (WΛ)^* = Λ^*W^* = -Λ^*W, i.e., WΛ + Λ^*W = 0. By the eigenvalue assumption on Λ, the linear map L_Λ : C^{n×n} → C^{n×n} defined by

    L_Λ(X) = XΛ + Λ^*X

is invertible, so W = 0.

Corollary 75: Under the same assumptions as in Theorem 74, if X_1 is invertible, then X_2X_1^{-1} is Hermitian.

Real Solutions

Real (rather than complex) solutions arise when a symmetry condition is imposed on the invariant subspace.

Theorem 76: Let V be an n-dimensional invariant subspace of H with the invertibility property, and let X_1, X_2 ∈ C^{n×n} form a 2n×n matrix whose columns span V. Then V is conjugate symmetric ({v̄ : v ∈ V} = V) if and only if X_2X_1^{-1} ∈ R^{n×n}.

Proof: (⇐) Define X := X_2X_1^{-1}. By assumption, X ∈ R^{n×n}, and

    span [ I_n
           X   ] = V.

Since this basis matrix is real, V is conjugate symmetric.

(⇒) Since V is conjugate symmetric, there is an invertible matrix K ∈ C^{n×n} such that

    [ X̄_1 ]   [ X_1 ]
    [ X̄_2 ] = [ X_2 ] K.

Therefore

    (X_2X_1^{-1})‾ = X̄_2X̄_1^{-1} = (X_2K)(X_1K)^{-1} = X_2X_1^{-1},

as desired: a matrix equal to its own conjugate is real.

Stabilizing Solutions

Recall that for any square matrix M, there is a largest stable invariant subspace. That is, there is a stable invariant subspace V_S that contains every other stable invariant subspace of M. This largest stable invariant subspace is simply the span of all of the eigenvectors and generalized eigenvectors associated with the open left-half plane eigenvalues. If the Schur form has the eigenvalues (on the diagonal of Λ) ordered, then it is spanned by the first set of columns of Q.

Theorem 77: There is at most one solution X to the Riccati equation such that A + RX is Hurwitz, and if it does exist, then X is real, and symmetric.

Proof: If X is such a stabilizing solution, then by Theorem 71 it can be constructed from some n-dimensional invariant subspace V of H. If A + RX is stable, then we must also have spec(H|_V) ⊂ C⁻ (the open left-half plane). Recall that the spectrum of H is symmetric about the imaginary axis, so H has at most n eigenvalues in C⁻, and the only possible choice for V is the stable eigenspace of H. By Theorem 70, every solution (if there are any) constructed from this subspace will be the same; hence there is at most one stabilizing solution. Associated with the stable eigenspace, any Λ as in equation (7.22) will be stable. Hence, if the stable eigenspace has the invertibility property, from Corollary 75 we have that X := X_2X_1^{-1} is Hermitian. Finally, since H is real, the stable eigenspace of H is indeed conjugate symmetric, so by Theorem 76 the associated solution X is in fact real, and symmetric.

dom_S Ric

If H in (7.21) has no imaginary-axis eigenvalues, then H has a unique n-dimensional stable invariant subspace V_S ⊆ C^{2n}, and since H is real, there exist matrices X_1, X_2 ∈ R^{n×n} such that

    span [ X_1
           X_2 ] = V_S.

More concretely, there exists a matrix Λ ∈ R^{n×n} with spec(Λ) ⊂ C⁻ such that

    H [ X_1 ]   [ X_1 ]
      [ X_2 ] = [ X_2 ] Λ.

This leads to the following definition.

Definition 78 (Definition of dom_S Ric): Consider H ∈ R^{2n×2n} as defined in (7.21). If H has no imaginary-axis eigenvalues, and the matrices X_1, X_2 ∈ R^{n×n} for which the columns of the stacked matrix (X_1 on top of X_2) span the stable n-dimensional invariant subspace of H satisfy det(X_1) ≠ 0, then H ∈ dom_S Ric. For H ∈ dom_S Ric, define Ric_S(H) := X_2X_1^{-1}, which is the unique matrix X ∈ R^{n×n} such that A + RX is stable and

    A^TX + XA + XRX - Q = 0.

Moreover, X = X^T.
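A minimal computational sketch of Ric_S, using SciPy's ordered real Schur decomposition to extract the stable invariant subspace (one standard way to do it; the function below is an illustration, with the invertibility of X_1 simply assumed):

    import numpy as np
    from scipy.linalg import schur

    def ric_s(A, R, Qm):
        """Stabilizing Riccati solution X = Ric_S(H), H = [[A, R], [Qm, -A^T]]."""
        n = A.shape[0]
        H = np.block([[A, R], [Qm, -A.T]])
        # ordered Schur form: sort='lhp' moves open-left-half-plane
        # eigenvalues to the top-left; sdim counts them
        T, Z, sdim = schur(H, output='real', sort='lhp')
        assert sdim == n, "H must have exactly n open-LHP eigenvalues"
        X1, X2 = Z[:n, :n], Z[n:, :n]   # first n Schur vectors span V_S
        return X2 @ np.linalg.inv(X1)   # assumes the invertibility property

For H ∈ dom_S Ric this returns the unique stabilizing solution: A + R @ ric_s(A, R, Qm) then has all eigenvalues in the open left-half plane.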

A Useful Lemma

Theorem 79: Suppose H (as in (7.21)) does not have any imaginary-axis eigenvalues, R is either positive semidefinite or negative semidefinite, and (A, R) is a stabilizable pair. Then H ∈ dom_S Ric.

Proof: Let X_1, X_2 (stacked) form a basis for the stable, n-dimensional invariant subspace of H (this exists, by virtue of the eigenvalue assumption about H). Then

    [ A    R   ] [ X_1 ]   [ X_1 ]
    [ Q   -A^T ] [ X_2 ] = [ X_2 ] Λ   (7.28)

where Λ ∈ C^{n×n} is a stable matrix. Since Λ is stable, it must be that X_1^*X_2 = X_2^*X_1 (Theorem 74). We simply need to show X_1 is invertible. First we prove that Ker X_1 is Λ-invariant, that is, the implication

    x ∈ Ker X_1  ⟹  Λx ∈ Ker X_1.

Let x ∈ Ker X_1. The top row of (7.28) gives RX_2 = X_1Λ - AX_1. Premultiply this by x^*X_2^* and postmultiply by x to get

    x^*X_2^*RX_2x = x^*X_2^*(X_1Λ - AX_1)x = x^*X_2^*X_1Λx = x^*X_1^*X_2Λx = (X_1x)^*X_2Λx = 0,

using X_2^*X_1 = X_1^*X_2 and X_1x = 0. Since R ≥ 0 or R ≤ 0, it follows that RX_2x = 0. Now, using the top row of (7.28) again gives X_1Λx = AX_1x + RX_2x = 0, as desired. Hence Ker X_1 is a subspace that is Λ-invariant. Now, if Ker X_1 ≠ {0}, then there is a vector v ≠ 0 and λ ∈ C with Re(λ) < 0 (since Λ is stable) such that

    X_1v = 0,   Λv = λv,   RX_2v = 0.

We also have QX_1 - A^TX_2 = X_2Λ, from the second row of (7.28). Multiplying by v (and using X_1v = 0, Λv = λv) gives -A^TX_2v = λX_2v; taking conjugate transposes, v^*X_2^*A = -λ̄v^*X_2^*. Together with v^*X_2^*R = (RX_2v)^* = 0, this reads

    v^*X_2^* [ A - (-λ̄)I    R ] = 0.

Since Re(-λ̄) > 0, and (A, R) is assumed stabilizable, it must be that X_2v = 0. But X_1v = 0 as well, and the stacked matrix of X_1 and X_2 has full column rank. Hence v = 0, and Ker X_1 is indeed the trivial subspace {0}. This implies that X_1 is invertible, and hence H ∈ dom_S Ric.

Quadratic Regulator

The next theorem is a main result of LQR theory.

Theorem 80: Let A, B, C be given, with (A, B) stabilizable, and (A, C) detectable. Define

    H := [ A       -BB^T
           -C^TC   -A^T  ].

Then H ∈ dom_S Ric. Moreover, X := Ric_S(H) ≥ 0, and

    Ker X ⊆ Ker [ C
                  CA
                  ⋮
                  CA^{n-1} ].

Proof (outline; a code illustration follows below):
- Use stabilizability of (A, B) and detectability of (A, C) to show that H has no imaginary-axis eigenvalues. Specifically, suppose v_1, v_2 ∈ C^n and ω ∈ R are such that

    [ A       -BB^T ] [ v_1 ]        [ v_1 ]
    [ -C^TC   -A^T  ] [ v_2 ]  =  jω [ v_2 ].

  Premultiply the top and bottom rows by v_2^* and v_1^* respectively, combine, and conclude v_1 = v_2 = 0.
- Show that if (A, B) is stabilizable, then (A, -BB^T) is also stabilizable. At this point, apply Theorem 79 to conclude H ∈ dom_S Ric.
- Then, rearrange the Riccati equation into a Lyapunov equation and use Theorem 64 to show that X ≥ 0.
- Finally, show that Ker X ⊆ Ker C, and that Ker X is A-invariant. That shows that Ker X ⊆ Ker CA^j for any j ≥ 0.
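For the LQR data, this can be cross-checked against SciPy's Riccati solver (an illustration, not the notes' method; solve_continuous_are(A, B, Q, R_u) solves A^TX + XA - XBR_u^{-1}B^TX + Q = 0, which with R_u = I is exactly (7.20) with R = -BB^T and Q = -C^TC moved to the other side):

    import numpy as np
    from scipy.linalg import solve_continuous_are

    rng = np.random.default_rng(6)
    n, m, p = 4, 2, 2
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, m))
    C = rng.standard_normal((p, n))
    # random (A, B) and (A, C) are stabilizable / detectable with probability 1

    # stabilizing solution of A^T X + X A - X B B^T X + C^T C = 0
    X = solve_continuous_are(A, B, C.T @ C, np.eye(m))
    print(np.linalg.eigvalsh(X).min())                    # >= 0 (up to roundoff)
    print(np.linalg.eigvals(A - B @ B.T @ X).real.max())  # < 0: A + RX is Hurwitz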

Derivatives as Linear Maps

Recall that the derivative of f : R → R at x is defined to be the number l_x such that

    lim_{δ→0} ( f(x+δ) - f(x) - l_x δ ) / δ = 0.

Suppose f : R^n → R^m is differentiable. Then f'(x) is the unique matrix L_x ∈ R^{m×n} which satisfies

    lim_{δ→0_n} (1/‖δ‖) ( f(x+δ) - f(x) - L_x δ ) = 0.

Hence, while f : R^n → R^m, the value of f' at a specific location x, f'(x), is a linear operator from R^n → R^m. Hence the function f' is a map from R^n → L(R^n, R^m). Similarly, if f : H^{n×n} → H^{n×n} is differentiable (H^{n×n} denoting the Hermitian matrices), then the derivative of f at X is the unique linear operator L_X : H^{n×n} → H^{n×n} satisfying

    lim_{Δ→0_{n×n}} (1/‖Δ‖) ( f(X+Δ) - f(X) - L_X(Δ) ) = 0.

Derivative of the Riccati Map

Let

    f(X) := XDX - A^*X - XA - C.

Note

    f(X+Δ) = (X+Δ)D(X+Δ) - A^*(X+Δ) - (X+Δ)A - C
           = XDX - A^*X - XA - C + ΔDX + XDΔ - A^*Δ - ΔA + ΔDΔ
           = f(X) - Δ(A - DX) - (A - DX)^*Δ + ΔDΔ.

For any W ∈ C^{n×n}, let L_W : H^{n×n} → H^{n×n} be the Lyapunov operator L_W(X) := W^*X + XW. Using this notation,

    f(X+Δ) - f(X) - L_{-A+DX}(Δ) = ΔDΔ.

Therefore

    lim_{Δ→0_{n×n}} (1/‖Δ‖) [ f(X+Δ) - f(X) - L_{-A+DX}(Δ) ] = lim_{Δ→0_{n×n}} (1/‖Δ‖) [ ΔDΔ ] = 0.

This means that the Lyapunov operator L_{-A+DX} is the derivative, at X, of the matrix-valued function

    XDX - A^*X - XA - C.

Iterative Algorithms for Roots

Suppose f : R → R is differentiable. An iterative algorithm to find a root of f(x) = 0 is

    x_{k+1} = x_k - f(x_k)/f'(x_k).

Suppose f : R^n → R^n is differentiable. Then to first order,

    f(x+δ) ≈ f(x) + L_x δ.

An iterative algorithm to find a root of f(x) = 0_n is obtained by setting the first-order approximation to zero and solving for δ, yielding the update law (with x_{k+1} = x_k + δ)

    x_{k+1} = x_k - L_{x_k}^{-1} [f(x_k)].

Implicitly, it is written as

    L_{x_k} [x_{k+1} - x_k] = -f(x_k).   (7.29)

For the Riccati map

    f(X) := XDX - A^*X - XA - C

we have derived that the derivative of f at X is the Lyapunov operator L_{-A+DX}. The iteration in (7.29) appears as

    L_{-A+DX_k} (X_{k+1} - X_k) = -(X_kDX_k - A^*X_k - X_kA - C).

In detail, this is

    -(A - DX_k)^*(X_{k+1} - X_k) - (X_{k+1} - X_k)(A - DX_k) = -X_kDX_k + A^*X_k + X_kA + C,

which simplifies (check) to

    (A - DX_k)^*X_{k+1} + X_{k+1}(A - DX_k) = -X_kDX_k - C.

Of course, questions about the well-posedness and convergence of the iteration must be addressed in detail.
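The iteration is straightforward to implement when each A - DX_k is Hurwitz; a sketch follows (illustrative only, with D = BB^T, C = C_o^TC_o, and X_0 = 0, which is a stabilizing start here since A is made Hurwitz):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def riccati_newton(A, D, C, X0, iters=20):
        """Newton iteration for f(X) = XDX - A^T X - X A - C = 0 (a sketch)."""
        X = X0
        for _ in range(iters):
            Acl = A - D @ X
            # each step solves (A - D X_k)^T Y + Y (A - D X_k) = -X_k D X_k - C
            X = solve_continuous_lyapunov(Acl.T, -X @ D @ X - C)
        return X

    rng = np.random.default_rng(7)
    n = 3
    A = rng.standard_normal((n, n))
    A -= (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(n)  # A Hurwitz
    B = rng.standard_normal((n, 1)); D = B @ B.T
    Co = rng.standard_normal((1, n)); C = Co.T @ Co
    X = riccati_newton(A, D, C, np.zeros((n, n)))
    print(np.linalg.norm(X @ D @ X - A.T @ X - X @ A - C))    # ~ 0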

Comparisons: Gohberg, Lancaster and Rodman

Theorems in "On Hermitian Solutions of the Symmetric Algebraic Riccati Equation," by Gohberg, Lancaster and Rodman, SIAM Journal on Control and Optimization, vol. 24, no. 6, Nov. 1986, are the basis for comparing equality and inequality solutions of Riccati equations. The definitions, theorems and main ideas are below.

Suppose that A ∈ R^{n×n}, C ∈ R^{n×n}, D ∈ R^{n×n}, with D = D^T ≥ 0, C = C^T, and (A, D) stabilizable. Consider the matrix equation

    XDX - A^TX - XA - C = 0.   (7.30)

Definition 81: A symmetric solution X_+ = X_+^T of (7.30) is maximal if X_+ ≥ X for any other symmetric solution X of (7.30). Check: maximal solutions (if they exist) are unique.

Theorem 82: If there is a symmetric solution to (7.30), then there is a maximal solution X_+. The maximal solution satisfies

    max_i Re λ_i(A - DX_+) ≤ 0.

Theorem 83: If there is a symmetric solution to (7.30), then there is a sequence {X_j}_{j=1}^∞ such that for each j, X_j = X_j^T and
- 0 ≥ X_jDX_j - A^TX_j - X_jA - C,
- X_j ≥ X for any X = X^T solving (7.30),
- A - DX_j is Hurwitz,
- lim_{j→∞} X_j = X_+.

Theorem 84: If there are symmetric solutions to (7.30), then for every symmetric C̃ ≥ C, there exist symmetric solutions (and hence a maximal solution X̃_+) to

    XDX - A^TX - XA - C̃ = 0.

Moreover, the maximal solutions are related by X̃_+ ≥ X_+.

Identities

For any Hermitian matrices Y and Ŷ, it is trivial to verify

    Y(A - DY) + (A - DY)^*Y + YDY
      = Y(A - DŶ) + (A - DŶ)^*Y + ŶDŶ - (Y - Ŷ)D(Y - Ŷ).   (7.31)

Let X denote any Hermitian solution (if one exists) to the A.R.E.

    XDX - A^*X - XA - C = 0.   (7.32)

This (the existence of a Hermitian solution) is the departure point for all that will eventually follow. For any Hermitian matrix Z,

    -C = -XDX + A^*X + XA
       = X(A - DX) + (A - DX)^*X + XDX
       = X(A - DZ) + (A - DZ)^*X + ZDZ - (X - Z)D(X - Z).   (7.33)

This exploits that X solves the original Riccati equation, and uses the identity (7.31) with Y = X, Ŷ = Z.

Iteration

Take any C̃ ≥ C (which could be C, for instance). Also, let X_0 = X_0^* ≥ 0 be chosen so that A - DX_0 is Hurwitz. Check: why is this possible? (This is where stabilizability of (A, D) is used.) Iterate (if possible, which we will show it is) as

    X_{ν+1}(A - DX_ν) + (A - DX_ν)^*X_{ν+1} = -X_νDX_ν - C̃.   (7.34)

Note that by the Hurwitz assumption on A - DX_0, at least the first iteration is possible (ν = 0). Also, recall that the iteration is analogous to a first-order algorithm for root finding, x_{n+1} := x_n - f(x_n)/f'(x_n). Use Z := X_ν in equation (7.33), and subtract from the iteration equation (7.34), leaving

    (X_{ν+1} - X)(A - DX_ν) + (A - DX_ν)^*(X_{ν+1} - X) = -(X - X_ν)D(X - X_ν) - (C̃ - C),   (7.35)

which is valid whenever the iteration equation is valid.

Main Induction

Now for the induction: assume X_ν = X_ν^* and A - DX_ν is Hurwitz. Note that this is true for ν = 0. Since A - DX_ν is Hurwitz, there is a unique (Hermitian) solution X_{ν+1} to equation (7.34). Moreover, since A - DX_ν is Hurwitz and the right-hand side of (7.35) is ≤ 0, it follows that X_{ν+1} - X ≥ 0 (note that X_0 - X is not necessarily positive semidefinite).

Rewrite (7.31) with Y = X_{ν+1} and Ŷ = X_ν:

    X_{ν+1}(A - DX_{ν+1}) + (A - DX_{ν+1})^*X_{ν+1} + X_{ν+1}DX_{ν+1}
      = X_{ν+1}(A - DX_ν) + (A - DX_ν)^*X_{ν+1} + X_νDX_ν - (X_{ν+1} - X_ν)D(X_{ν+1} - X_ν).

By the iteration equation (7.34), the first two terms on the right-hand side together equal -X_νDX_ν - C̃. Rewriting,

    -C̃ = X_{ν+1}(A - DX_{ν+1}) + (A - DX_{ν+1})^*X_{ν+1} + X_{ν+1}DX_{ν+1} + (X_{ν+1} - X_ν)D(X_{ν+1} - X_ν).

Writing (7.33) with Z = X_{ν+1} gives

    -C = X(A - DX_{ν+1}) + (A - DX_{ν+1})^*X + X_{ν+1}DX_{ν+1} - (X - X_{ν+1})D(X - X_{ν+1}).

Subtract these, yielding

    -(C̃ - C) = (X_{ν+1} - X)(A - DX_{ν+1}) + (A - DX_{ν+1})^*(X_{ν+1} - X)
                + (X_{ν+1} - X_ν)D(X_{ν+1} - X_ν) + (X - X_{ν+1})D(X - X_{ν+1}).   (7.36)

Now suppose that Re(λ) ≥ 0 and v ∈ C^n are such that (A - DX_{ν+1})v = λv. Pre- and postmultiply (7.36) by v^* and v respectively. This gives

    -v^*(C̃ - C)v = 2 Re(λ) v^*(X_{ν+1} - X)v + ‖D^{1/2}(X_{ν+1} - X_ν)v‖² + ‖D^{1/2}(X - X_{ν+1})v‖²,

where the left-hand side is ≤ 0 and each term on the right-hand side is ≥ 0 (using X_{ν+1} - X ≥ 0 and Re(λ) ≥ 0). Hence, both sides are in fact equal to zero, and in particular D^{1/2}(X_{ν+1} - X_ν)v = 0.

Since D ≥ 0, it follows that D(X_{ν+1} - X_ν)v = 0, which means DX_{ν+1}v = DX_νv. Hence

    λv = (A - DX_{ν+1})v = (A - DX_ν)v.

But A - DX_ν is Hurwitz, and Re(λ) ≥ 0, so v = 0_n, as desired. This means A - DX_{ν+1} is Hurwitz.

Summarizing: Suppose there exists a Hermitian X = X^* solving

    XDX - A^*X - XA - C = 0.

- Let X_0 = X_0^* ≥ 0 be chosen so A - DX_0 is Hurwitz.
- Let C̃ be a Hermitian matrix with C̃ ≥ C.
- Iteratively define (if possible) a sequence {X_ν}_{ν=1}^∞ via

    X_{ν+1}(A - DX_ν) + (A - DX_ν)^*X_{ν+1} = -X_νDX_ν - C̃.

The following is true:
- Given the choice of X_0 and C̃, the sequence {X_ν}_{ν=1}^∞ is well-defined, unique, and has X_ν = X_ν^*.
- For each ν ≥ 0, A - DX_ν is Hurwitz.
- For each ν ≥ 1, X_ν ≥ X.

It remains to show that for ν ≥ 1, X_{ν+1} ≤ X_ν (although it is not necessarily true that X_1 ≤ X_0).

Decreasing Proof

For ν ≥ 1, apply (7.31) with Y = X_ν and Ŷ = X_{ν-1}, and substitute the iteration equation (7.34) (shifted back one step), giving

    X_ν(A - DX_ν) + (A - DX_ν)^*X_ν + X_νDX_ν = -C̃ - (X_ν - X_{ν-1})D(X_ν - X_{ν-1}).

The iteration equation is

    X_{ν+1}(A - DX_ν) + (A - DX_ν)^*X_{ν+1} = -X_νDX_ν - C̃.

Subtracting leaves

    (X_ν - X_{ν+1})(A - DX_ν) + (A - DX_ν)^*(X_ν - X_{ν+1}) = -(X_ν - X_{ν-1})D(X_ν - X_{ν-1}).

The fact that A - DX_ν is Hurwitz and D ≥ 0 implies that X_ν - X_{ν+1} ≥ 0. Hence, we have shown that for all ν ≥ 1,

    X ≤ X_{ν+1} ≤ X_ν.
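A numerical illustration of the monotonicity (a sketch, not from the notes; it assumes a Hermitian solution exists, which holds here since A is made Hurwitz and C > 0, so the chain of Hurwitz closed loops goes through):

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    rng = np.random.default_rng(8)
    n = 3
    A = rng.standard_normal((n, n))
    A -= (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(n)   # A Hurwitz
    B = rng.standard_normal((n, 1)); D = B @ B.T
    Cm = rng.standard_normal((n, n)); Cm = Cm.T @ Cm            # C > 0 (a.s.)
    Ctil = Cm + np.eye(n)                                       # Ctil >= C

    X = np.zeros((n, n))    # X0 = 0, so A - D X0 = A is Hurwitz
    prev = None
    for nu in range(15):
        # iteration (7.34): solve for X_{nu+1} given X_nu
        X = solve_continuous_lyapunov((A - D @ X).T, -X @ D @ X - Ctil)
        if prev is not None and nu >= 1:
            print(nu, np.linalg.eigvalsh(prev - X).min() >= -1e-8)  # decreasing
        prev = X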

Monotonic Matrix Sequences: Convergence

Every nonincreasing sequence of real numbers which is bounded below has a limit. Specifically, if {x_k}_{k=0}^∞ is a sequence of real numbers, and there is a real number γ such that x_k ≥ γ and x_{k+1} ≤ x_k for all k, then there is a real number x such that lim_{k→∞} x_k = x.

A similar result holds for symmetric matrices. Specifically, if {X_k}_{k=0}^∞ is a sequence of real, symmetric matrices, and there is a real symmetric matrix Γ such that X_k ≥ Γ and X_{k+1} ≤ X_k for all k, then there is a real symmetric matrix X such that lim_{k→∞} X_k = X. This is proven by first considering the sequences {e_i^T X_k e_i}_{k=0}^∞, which shows that all of the diagonal entries of the sequence have limits. Then look at {(e_i + e_j)^T X_k (e_i + e_j)}_{k=0}^∞ to conclude convergence of the off-diagonal terms.

Limit of the Sequence: Maximal Solution

The sequence {X_ν}_{ν=0}^∞, generated by (7.34), has a limit,

    lim_{ν→∞} X_ν =: X̃_+.

Remember that C̃ was any Hermitian matrix ≥ C. Facts:
- Since each A - DX_ν is Hurwitz, in passing to the limit we may get eigenvalues with zero real parts, hence max_i Re λ_i(A - DX̃_+) ≤ 0.
- Since each X_ν = X_ν^*, it follows that X̃_+ = X̃_+^*.
- Since each X_ν ≥ X (where X is any Hermitian solution to the equality XDX - A^*X - XA - C = 0, equation (7.32)), it follows by passing to the limit that X̃_+ ≥ X.

The iteration equation is

    X_{ν+1}(A - DX_ν) + (A - DX_ν)^*X_{ν+1} = -X_νDX_ν - C̃.

Taking limits yields

    X̃_+(A - DX̃_+) + (A - DX̃_+)^*X̃_+ = -X̃_+DX̃_+ - C̃,

which has a few cancellations, leaving

    X̃_+A + A^*X̃_+ - X̃_+DX̃_+ + C̃ = 0,

i.e., X̃_+ is a Hermitian solution of the Riccati equation with C̃ in place of C. Finally, if X_+ denotes the above limit when C̃ = C, it follows that the above ideas hold, so X_+ ≥ X for all Hermitian solutions X of (7.32): X_+ is the maximal solution. And really finally, since X_+ is itself a Hermitian solution to (7.32), the third fact above (applied with X = X_+) implies that X̃_+ ≥ X_+.

Analogies with the Scalar Case: dx² - 2ax - c = 0

    Matrix statement                        Scalar specialization
    ------------------------------------    -------------------------------------
    D = D^* ≥ 0                             d ∈ R, with d ≥ 0
    (A, D) stabilizable                     a < 0 or d > 0
    existence of Hermitian solution         a² + cd ≥ 0
    maximal solution X_+                    x_+ = (a + √(a² + cd))/d for d > 0
    maximal solution X_+                    x_+ = -c/(2a) for d = 0
    A - DX_+                                -√(a² + cd) for d > 0
    A - DX_+                                a for d = 0
    A - DX_0 Hurwitz                        x_0 > a/d for d > 0
    A - DX_0 Hurwitz                        a < 0 for d = 0
    A - DX Hurwitz                          x > a/d for d > 0
    A - DX Hurwitz                          a < 0 for d = 0

If d = 0, then the graph of -2ax - c (with a < 0) is a straight line with a positive slope. There is one (and only one) real solution, at x = -c/(2a). If d > 0, then the graph of dx² - 2ax - c is an upward-directed parabola. The minimum occurs at x = a/d. There are either 0, 1, or 2 real solutions to dx² - 2ax - c = 0:
- If a² + cd < 0, then there are no real solutions (the minimum value is positive).
- If a² + cd = 0, then there is one solution, at x = a/d, which is where the minimum occurs.
- If a² + cd > 0, then there are two solutions, at

    x_- = a/d - √(a² + cd)/d   and   x_+ = a/d + √(a² + cd)/d,

which are equidistant from the minimizer a/d.
