Math 102 - Homework 8 (selected problems)
David Lipshutz

Problem 1. (Strang, 5.5: #14) In the list below, which classes of matrices contain A and which contain B?
\[
A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{pmatrix}
\qquad\text{and}\qquad
B = \frac{1}{4}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{pmatrix}
\]
Orthogonal, invertible, projection, permutation, Hermitian, rank-1, diagonalizable, Markov. Find the eigenvalues of A and B.

Proof. A is orthogonal, invertible, not a projection (since $A^2 \neq A$), a permutation matrix, not Hermitian (since $A \neq A^H = A^T$), diagonalizable and Markov. The characteristic polynomial of A is $p(\lambda) = \lambda^4 - 1$, so the eigenvalues of A are $\pm 1$ and $\pm i$. B is a projection (onto the line spanned by $(1,1,1,1)$), Hermitian, rank-1, diagonalizable and Markov. One eigenvalue of B is 1, since B is a Markov matrix, and the remaining eigenvalues are 0 since B has rank 1.

Problem 2. (Strang, 5.5: #16) Write one significant fact about the eigenvalues of each of the following.
(a) A real symmetric matrix.
(b) A stable matrix: all solutions to $du/dt = Au$ approach zero.
(c) An orthogonal matrix.
(d) A Markov matrix.
(e) A defective matrix (nondiagonalizable).
(f) A singular matrix.

Proof.
(a) Every eigenvalue is real.
(b) Every eigenvalue has negative real part.
(c) Every eigenvalue has absolute value equal to 1.
(d) $\lambda = 1$ is an eigenvalue, and every eigenvalue satisfies $|\lambda| \le 1$.
(e) The matrix has a repeated eigenvalue with fewer independent eigenvectors than the eigenvalue's multiplicity.
(f) 0 is an eigenvalue of the matrix.
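As a numerical illustration of (b), here is a minimal NumPy/SciPy sketch (the matrix below is an arbitrary example chosen for illustration, not from the text): its eigenvalues have negative real parts, and $\|e^{At}u(0)\|$ decays toward zero.

```python
import numpy as np
from scipy.linalg import expm

# Example matrix chosen for illustration: eigenvalues -1 and -2 (negative real parts).
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
print(np.linalg.eigvals(A))           # both eigenvalues are negative

u0 = np.array([1.0, 1.0])
for t in [0.0, 1.0, 5.0, 10.0]:
    u = expm(A * t) @ u0              # u(t) = e^{At} u(0)
    print(t, np.linalg.norm(u))       # the norm decays toward 0
```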
Problem 3. (Strang, 5.5: #18) Show that a unitary matrix has $|\det U| = 1$, but possibly $\det U$ is different from $\det U^H$. Describe all 2 by 2 matrices that are unitary.

Proof. If $\lambda_1, \dots, \lambda_n$ are the eigenvalues of U, then $|\det U| = |\lambda_1 \cdots \lambda_n| = |\lambda_1| \cdots |\lambda_n| = 1$, since every eigenvalue of a unitary matrix has absolute value 1. Suppose
\[
U = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}, \qquad\text{then}\qquad U^H = \begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix},
\]
so $\det U = i$ and $\det U^H = -i \neq \det U$. Now suppose
\[
U = \begin{pmatrix} r_1 e^{i\theta_1} & r_3 e^{i\theta_3} \\ r_2 e^{i\theta_2} & r_4 e^{i\theta_4} \end{pmatrix}
\]
is unitary. Then U has orthonormal columns, so $r_1^2 + r_2^2 = 1 = r_3^2 + r_4^2$. Let $r_1 = \sin\varphi_1$, so that $r_2 = \cos\varphi_1$, and let $r_3 = \cos\varphi_2$, so that $r_4 = \sin\varphi_2$, where $0 \le \varphi_1, \varphi_2 \le \pi/2$. By the orthogonality of the column vectors,
\[
\sin\varphi_1 \cos\varphi_2\, e^{i(\theta_3 - \theta_1)} + \cos\varphi_1 \sin\varphi_2\, e^{i(\theta_4 - \theta_2)} = 0.
\]
This implies that $\varphi_2 = \varphi_1$ (the two magnitudes must be equal) and that $e^{i(\theta_3 - \theta_1)} = -e^{i(\theta_4 - \theta_2)}$, i.e. $\theta_4 = \theta_2 + \theta_3 - \theta_1 - \pi$. So
\[
U = \begin{pmatrix} \sin\varphi\, e^{i\theta_1} & \cos\varphi\, e^{i\theta_3} \\ \cos\varphi\, e^{i\theta_2} & \sin\varphi\, e^{i(\theta_2+\theta_3-\theta_1-\pi)} \end{pmatrix}
\quad\text{or, equivalently,}\quad
U = \begin{pmatrix} \sin\varphi\, e^{i\theta_1} & \cos\varphi\, e^{i\theta_3} \\ \cos\varphi\, e^{i\theta_2} & -\sin\varphi\, e^{i(\theta_2+\theta_3-\theta_1)} \end{pmatrix}
\]
for some $0 \le \varphi \le \pi/2$ and real $\theta_1, \theta_2, \theta_3$.

Problem 4. (Strang, 5.5: #38) If $v_1, \dots, v_n$ is an orthonormal basis for $\mathbf{C}^n$, the matrix with those columns is what kind of matrix? Show that any vector z equals $(v_1^H z)v_1 + \cdots + (v_n^H z)v_n$.

Proof. The matrix $V = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix}$ is unitary since its columns are orthonormal, so $VV^H = I$ and
\[
z = Iz = VV^H z = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix} \begin{pmatrix} v_1^H z \\ \vdots \\ v_n^H z \end{pmatrix} = (v_1^H z)v_1 + \cdots + (v_n^H z)v_n.
\]

Problem 5. (Strang, 5.5: #44) How are the eigenvalues of $A^H$ (square matrix) related to the eigenvalues of A?

Proof. If $\lambda$ is an eigenvalue of A, then $\det(A - \lambda I) = 0$, which implies that $A - \lambda I$ is singular, so $(A - \lambda I)^H = A^H - \bar{\lambda} I$ is singular. Therefore $\det(A^H - \bar{\lambda} I) = 0$, so $\bar{\lambda}$ is an eigenvalue of $A^H$. Thus $\lambda$ is an eigenvalue of A if and only if $\bar{\lambda}$ is an eigenvalue of $A^H$: the eigenvalues of $A^H$ are the complex conjugates of the eigenvalues of A.
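As a quick numerical sanity check of this fact, a minimal sketch on a random complex matrix (assuming NumPy):

```python
import numpy as np

# Problem 5 check: the eigenvalues of A^H are the conjugates of the eigenvalues of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
eig_A = np.linalg.eigvals(A)
eig_AH = np.linalg.eigvals(A.conj().T)
print(np.allclose(np.sort_complex(eig_AH), np.sort_complex(eig_A.conj())))  # True
```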
Problem 6. (Strang, 5.5: #46) If $A + iB$ is a unitary matrix (A and B are real), show that
\[
Q = \begin{pmatrix} A & -B \\ B & A \end{pmatrix}
\]
is an orthogonal matrix.

Proof. We have that
\[
I = (A + iB)(A + iB)^H = (A + iB)(A^T - iB^T) = (AA^T + BB^T) - i(AB^T - BA^T),
\]
which implies that $AA^T + BB^T = I$ and $AB^T = BA^T$. Then
\[
QQ^T = \begin{pmatrix} A & -B \\ B & A \end{pmatrix}\begin{pmatrix} A^T & B^T \\ -B^T & A^T \end{pmatrix} = \begin{pmatrix} AA^T + BB^T & AB^T - BA^T \\ BA^T - AB^T & BB^T + AA^T \end{pmatrix} = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix}.
\]
So Q is an orthogonal matrix.

Problem 7. (Strang, 5.6: #8) What matrix M changes the basis $V_1 = (1, 1)$, $V_2 = (1, 4)$ to the basis $v_1 = (2, 5)$, $v_2 = (1, 4)$? The columns of M come from expressing $V_1$ and $V_2$ as combinations $\sum_i m_{ij} v_i$ of the v's.

Proof. Note that $V_1 = v_1 - v_2$ and $V_2 = v_2$, so we want the matrix that takes $aV_1 + bV_2$ to $a(v_1 - v_2) + bv_2 = av_1 + (b - a)v_2$. This is given by
\[
M = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}.
\]

Problem 8. (Strang, 5.6: #38) These Jordan matrices have eigenvalues 0, 0, 0, 0. They have two eigenvectors (find them). But the block sizes don't match and J is not similar to K:
\[
J = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\qquad\text{and}\qquad
K = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]
For any matrix M, compare JM and MK. If they are equal, show that M is not invertible. Then $M^{-1}JM = K$ is impossible.

Proof. The eigenvectors of J are $e_1$ and $e_3$, and the eigenvectors of K are $e_1$ and $e_4$. We have
\[
JM = \begin{pmatrix} M_{21} & M_{22} & M_{23} & M_{24} \\ 0 & 0 & 0 & 0 \\ M_{41} & M_{42} & M_{43} & M_{44} \\ 0 & 0 & 0 & 0 \end{pmatrix}
\qquad\text{and}\qquad
MK = \begin{pmatrix} 0 & M_{11} & M_{12} & 0 \\ 0 & M_{21} & M_{22} & 0 \\ 0 & M_{31} & M_{32} & 0 \\ 0 & M_{41} & M_{42} & 0 \end{pmatrix}.
\]
If $JM = MK$, then comparing entries forces $M_{11} = M_{21} = M_{31} = M_{41} = 0$ (the first two columns give $M_{21} = M_{41} = 0$, $M_{11} = M_{22}$ and $M_{31} = M_{42}$, while the zero second and fourth rows of JM give $M_{22} = M_{42} = 0$). So the first column of M is zero, $\det M = 0$, M is not invertible, and $M^{-1}JM = K$ is impossible.

Problem 9. (Strang, 5.6: #42) Prove that AB has the same eigenvalues as BA.

Proof. If $\lambda$ is an eigenvalue of AB, then $ABx = \lambda x$ for some $x \neq 0$, which implies $BA(Bx) = \lambda(Bx)$, so $\lambda$ is an eigenvalue of BA with eigenvector $Bx$, provided $Bx \neq 0$. If $Bx = 0$, then $\lambda x = ABx = 0$, so $\lambda = 0$; in that case $\det(BA) = \det B \det A = \det(AB) = 0$, so $\lambda = 0$ is an eigenvalue of BA as well.
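A quick numerical spot-check of this fact on random matrices (assuming NumPy):

```python
import numpy as np

# Problem 9 check: AB and BA have the same eigenvalues (random 4x4 example).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
print(np.sort_complex(np.linalg.eigvals(A @ B)))
print(np.sort_complex(np.linalg.eigvals(B @ A)))   # same list, up to rounding
```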
Problem 10. (Strang, 5.6: #44) Why is each of these statements true?
(a) If A is similar to B, then $A^2$ is similar to $B^2$.
(b) $A^2$ and $B^2$ can be similar when A and B are not similar (try $\lambda = 0, 0$).
(c) $\begin{pmatrix} 3 & 0 \\ 0 & 4 \end{pmatrix}$ is similar to $\begin{pmatrix} 3 & 1 \\ 0 & 4 \end{pmatrix}$.
(d) $\begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix}$ is not similar to $\begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix}$.
(e) If we exchange rows 1 and 2 of A and then exchange columns 1 and 2 of A, the resulting matrix has the same eigenvalues.

Proof.
(a) If $A = PBP^{-1}$, then $A^2 = (PBP^{-1})(PBP^{-1}) = PB^2P^{-1}$.
(b) Let $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $B = 0$. Then $A^2 = 0$ is obviously similar to $B^2 = 0$, but $PBP^{-1} = 0 \neq A$ for all invertible P, so A is not similar to B.
(c) Since the eigenvalues of $\begin{pmatrix} 3 & 1 \\ 0 & 4 \end{pmatrix}$ are $\lambda = 3, 4$ with respective eigenvectors $(1, 0)$ and $(1, 1)$, the matrix can be diagonalized as follows:
\[
\begin{pmatrix} 3 & 1 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 3 & 0 \\ 0 & 4 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{-1},
\]
so the two matrices are similar.
(d) $\begin{pmatrix} 3 & 0 \\ 0 & 3 \end{pmatrix} = 3I$, and $P(3I)P^{-1} = 3I \neq \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix}$ for all invertible P.
(e) Exchanging rows 1 and 2 of A and then exchanging columns 1 and 2 produces $PAP$, where
\[
P = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & I \end{pmatrix}
\]
(in block form) is the permutation matrix that exchanges the first two coordinates. Since $P^{-1} = P$, the resulting matrix $PAP^{-1}$ is similar to A and therefore has the same eigenvalues.
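Parts (c) and (e) can be spot-checked numerically; a minimal sketch, where the 3 by 3 matrix in the second check is a random example:

```python
import numpy as np

# (c): the explicit similarity S diag(3,4) S^{-1} equals [[3,1],[0,4]].
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(S @ np.diag([3.0, 4.0]) @ np.linalg.inv(S))        # [[3. 1.] [0. 4.]]

# (e): exchanging rows 1,2 and then columns 1,2 of A preserves the eigenvalues.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
P = np.eye(3)[[1, 0, 2]]                                  # permutation exchanging rows 1 and 2
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(P @ A @ P)))      # same eigenvalues
```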
Problem 11. (Strang, Appendix B: #6) Find the Jordan form J and the matrix M for A and B (B has eigenvalues 1, 1, 1, -1). What is the solution to $du/dt = Au$, and what is $e^{At}$?
\[
A = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}
\qquad\text{and}\qquad
B = \begin{pmatrix} 1 & 1 & 0 & -1 \\ 0 & 2 & 0 & -1 \\ 2 & 1 & -1 & -1 \\ 2 & 1 & -2 & 0 \end{pmatrix}.
\]

Proof. The method for finding the Jordan form of a matrix given in the book goes as follows:

1. Find linearly independent vectors such that A maps each vector either to its eigenvalue times itself (an eigenvector) or to its eigenvalue times itself plus the previous vector in its chain. For A the standard basis vectors work:
\[
Ae_1 = 0e_1, \quad Ae_3 = 0e_3 + e_1, \quad Ae_5 = 0e_5 + e_3, \quad Ae_2 = 0e_2 \quad\text{and}\quad Ae_4 = 0e_4 + e_2.
\]

2. Set M to be the matrix with those column vectors: $M = \begin{pmatrix} e_1 & e_3 & e_5 & e_2 & e_4 \end{pmatrix}$.

3. Then $J = M^{-1}AM$:
\[
J = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.
\]

4. To solve $du/dt = Au$, change variables so $u = Mv$; then $M\,dv/dt = du/dt = Au = AMv = MJv$, or $dv/dt = Jv$. Since
\[
e^{Jt} = \begin{pmatrix} 1 & t & t^2/2 & 0 & 0 \\ 0 & 1 & t & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & t \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix},
\]
the solution is $u(t) = Me^{Jt}M^{-1}u(0)$, and $e^{At} = Me^{Jt}M^{-1}$.

B has eigenvalues 1, 1, 1, -1, with associated eigenvector $(1, 1, 3, 3)$ for $\lambda = -1$ and eigenvectors $(0, 1, 0, 1)$ and $(1, 0, 1, 0)$ for $\lambda = 1$. There is one instance of the eigenvalue -1 and one associated eigenvector, so there is one block associated to the eigenvalue -1. There are three instances of the eigenvalue 1 with only two eigenvectors, so there are two blocks associated to the eigenvalue 1; since the eigenvalue 1 appears three times, one of the blocks must be 2 by 2 and the other 1 by 1. So the Jordan form is:
\[
J = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}.
\]
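The block count can also be read off numerically, since the number of Jordan blocks for an eigenvalue equals the dimension of its eigenspace; a minimal NumPy sketch using B as written above:

```python
import numpy as np

# Number of Jordan blocks for eigenvalue lam = nullity of (B - lam*I).
B = np.array([[1, 1, 0, -1],
              [0, 2, 0, -1],
              [2, 1, -1, -1],
              [2, 1, -2, 0]], dtype=float)
n = B.shape[0]
for lam in [1.0, -1.0]:
    nullity = n - np.linalg.matrix_rank(B - lam * np.eye(n))
    print(lam, nullity)    # lam = 1 gives 2 blocks, lam = -1 gives 1 block
```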
1. To find M, set $BM = MJ$:
\[
B \begin{pmatrix} x_1 & x_2 & x_3 & x_4 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & x_3 & x_4 \end{pmatrix} \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}.
\]

2. This leads to $Bx_1 = x_1$ and $Bx_2 = x_2 + x_1$, or $(B - I)x_2 = x_1$, and $Bx_3 = x_3$ and $Bx_4 = -x_4$. That means that $x_1$ is an eigenvector of B with eigenvalue 1 that is also in the column space of $B - I$. Through some simple computation, $x_1 = (1, 1, 1, 1)$ fits these specifications. Then $x_2 = (0, 1/2, 0, -1/2)$, $x_3 = (0, 1, 0, 1)$ and $x_4 = (1, 1, 3, 3)$.

3. Setting $J = M^{-1}BM$, we get
\[
J = \begin{pmatrix} 3/2 & 0 & -1/2 & 0 \\ -1 & 1 & 1 & -1 \\ -1/2 & 1/2 & -1/2 & 1/2 \\ -1/2 & 0 & 1/2 & 0 \end{pmatrix}
\begin{pmatrix} 1 & 1 & 0 & -1 \\ 0 & 2 & 0 & -1 \\ 2 & 1 & -1 & -1 \\ 2 & 1 & -2 & 0 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 1 \\ 1 & 1/2 & 1 & 1 \\ 1 & 0 & 0 & 3 \\ 1 & -1/2 & 1 & 3 \end{pmatrix}
= \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix},
\]
which is what we want.
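The formula $e^{At} = Me^{Jt}M^{-1}$ for A can be spot-checked in the same way; a minimal sketch using SciPy's matrix exponential at an arbitrarily chosen time t:

```python
import numpy as np
from scipy.linalg import expm

# Problem 11 check for A: expm(A*t) should equal M e^{Jt} M^{-1}.
A = np.zeros((5, 5))
A[0, 2] = A[1, 3] = A[2, 4] = 1.0            # A e3 = e1, A e4 = e2, A e5 = e3
M = np.eye(5)[:, [0, 2, 4, 1, 3]]            # columns e1, e3, e5, e2, e4
t = 1.7
eJt = np.array([[1, t, t**2 / 2, 0, 0],
                [0, 1, t,        0, 0],
                [0, 0, 1,        0, 0],
                [0, 0, 0,        1, t],
                [0, 0, 0,        0, 1]])
print(np.allclose(expm(A * t), M @ eJt @ np.linalg.inv(M)))   # True
```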