MATH 511 ADVANCED LINEAR ALGEBRA, SPRING 2006
Sherod Eubanks
HOMEWORK 2
2.1: 2, 5, 9, 12; 2.3: 3, 6; 2.4: 2, 4, 5, 9, 11

Section 2.1: Unitary Matrices

Problem 2
If $\lambda \in \sigma(U)$ and $U \in M_n$ is unitary, show that $|\lambda| = 1$.

Solution. If $\lambda \in \sigma(U)$, $U \in M_n$ is unitary, and $Ux = \lambda x$ for $x \neq 0$, then by Theorem 2.1.4(g) we have
\[ \|x\|_2 = \|Ux\|_2 = \|\lambda x\|_2 = |\lambda|\,\|x\|_2, \]
hence $|\lambda| = 1$, as desired.

Problem 5
Show that the permutation matrices in $M_n$ are orthogonal and that the permutation matrices form a subgroup of the group of real orthogonal matrices. How many different permutation matrices are there in $M_n$?

Solution. By definition, a matrix $P \in M_n$ is called a permutation matrix if exactly one entry in each row and column is equal to 1, and all other entries are 0. That is, letting $e_i \in \mathbb{C}^n$ denote the standard basis element of $\mathbb{C}^n$ that has a 1 in the $i$th row and zeros elsewhere, and letting $S_n$ be the set of all permutations on $n$ elements, we have $P = [e_{\sigma(1)} \;\cdots\; e_{\sigma(n)}] = P_\sigma$ for some permutation $\sigma \in S_n$, where $\sigma(k)$ denotes the image of $k$ under $\sigma$. Observe that for any $\sigma \in S_n$, since
\[ e_{\sigma(i)}^T e_{\sigma(j)} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases} \]
for $1 \le i, j \le n$ by the definition of the $e_i$, we have
\[ P_\sigma^T P_\sigma = \begin{pmatrix} e_{\sigma(1)}^T e_{\sigma(1)} & \cdots & e_{\sigma(1)}^T e_{\sigma(n)} \\ \vdots & \ddots & \vdots \\ e_{\sigma(n)}^T e_{\sigma(1)} & \cdots & e_{\sigma(n)}^T e_{\sigma(n)} \end{pmatrix} = I_n \quad (= P_\sigma P_\sigma^T), \]
where $I_n$ denotes the $n \times n$ identity matrix. Hence $P_\sigma^{-1} = P_\sigma^T$ (permutation matrices are trivially nonsingular), and so $P_\sigma$ is (real) orthogonal. Since the above holds for any $\sigma \in S_n$, it follows that every permutation matrix is orthogonal.

Now, notice that $I_n$ is the permutation matrix corresponding to the identity of the group $S_n$, so the set of all permutation matrices in $M_n$ is (trivially) nonempty and contains the identity element of $GL_n$. Moreover, by the preceding paragraph, for each $\sigma \in S_n$ and corresponding permutation matrix $P_\sigma$ we have $P_\sigma^T = P_\sigma^{-1}$, and in fact $P_\sigma^{-1} = P_{\sigma^{-1}}$: $P_\sigma$ has a 1 in column $i$, row $\sigma(i)$, so $P_\sigma^{-1} = P_\sigma^T = P_\tau$ has a 1 in column $\sigma(i)$, row $i$, whence $\tau(\sigma(i)) = i$ for all $i = 1, \ldots, n$. Thus $\tau \circ \sigma = e$, the identity element of $S_n$, so $\tau = \sigma^{-1}$ since $S_n$ is a group. As such, the inverse (transpose) of a permutation matrix is again a permutation matrix. Finally, if $\nu \in S_n$ is any other permutation, then since column $j$ of $P_\nu$ is $e_{\nu(j)}$ and $P_\sigma e_k = e_{\sigma(k)}$, we have
\[ P_\sigma P_\nu = [P_\sigma e_{\nu(1)} \;\cdots\; P_\sigma e_{\nu(n)}] = [e_{(\sigma \circ \nu)(1)} \;\cdots\; e_{(\sigma \circ \nu)(n)}] = P_{\sigma \circ \nu}, \]
hence, as $\sigma \circ \nu \in S_n$, the product of permutation matrices is again a permutation matrix (this is rather easy to see directly from the definition of a permutation matrix, but it illustrates the connection between permutation matrices in $M_n$ and permutations in $S_n$). Therefore, the set of all permutation matrices is not only a subgroup of $GL_n$, but, since each is orthogonal, a subgroup of the group of real orthogonal matrices as well. Moreover, the mapping $\sigma \mapsto P_\sigma$ is a bijection, hence, as $o(S_n) = n!$, there are $n!$ different permutation matrices in $M_n$ (thus the order of the subgroup in question is $n!$).

Problem 9
If $U \in M_n$ is unitary, show that $\bar{U}$, $U^T$, and $U^*$ are all unitary.

Solution. Let $U \in M_n$ be unitary. That $\bar{U}$ is unitary follows readily from Theorem 2.1.4(d); that $U^T$ is unitary follows from the fact that, as the columns of $U$ form an orthonormal set by Theorem 2.1.4(e), the rows of $U^T$ form an orthonormal set. Now, since $\bar{U}$ is unitary and $U^* = \bar{U}^T$, the rows of $U^*$ form an orthonormal set (being the columns of $\bar{U}$), and thus $U^*$ is unitary.

Problem 12
Show that if $A \in M_n$ is similar to a unitary matrix, then $A^{-1}$ is similar to $A^*$.

Solution. If $A \in M_n$ is similar to the unitary matrix $U$, then there is a nonsingular matrix $S$ such that $U = SAS^{-1}$, hence $AS^{-1} = S^{-1}U$, and as such
\[ A(S^{-1}U^*S) = S^{-1}UU^*S = S^{-1}S = I_n. \]
Since $S$ and $U$ are nonsingular, $S^{-1}U^*S$ is nonsingular, hence it follows that $A$ is nonsingular (by the exercise preceding Theorem 2.1.4). Thus $A^{-1} = S^{-1}U^*S$, so that $U^* = SA^{-1}S^{-1}$; on the other hand, taking adjoints in $U = SAS^{-1}$ gives $U^* = (S^{-1})^* A^* S^*$. Therefore, since $S^*S$ is nonsingular and $(S^{-1})^* = (S^*)^{-1}$ by the nonsingularity of $S$,
\[ A^{-1} = S^{-1}(S^{-1})^* A^* S^* S = (S^*S)^{-1} A^* (S^*S), \]
which implies that $A^{-1}$ and $A^*$ are similar.

Section 2.3: Schur's Unitary Triangularization Theorem

Problem 3
Let $A \in M_n(\mathbb{R})$.
Explain why the nonreal eigenvalues of $A$ (if any) must occur in conjugate pairs.

Solution. A simple answer to the given question is that since $A \in M_n(\mathbb{R})$, the characteristic polynomial $p_A(t)$ has real coefficients, so any nonreal roots occur in conjugate pairs; hence any nonreal eigenvalues of $A$ must occur in conjugate pairs. This also follows from Theorem 2.3.4, since there is a real orthogonal matrix $Q \in M_n(\mathbb{R})$ such that $Q^T A Q \in M_n(\mathbb{R})$ has the block upper triangular form
\[ Q^T A Q = \begin{pmatrix} A_1 & & & * \\ & A_2 & & \\ & & \ddots & \\ 0 & & & A_k \end{pmatrix}, \]
where each $A_i$ is either a real $1 \times 1$ matrix (whose entry is then a real eigenvalue of $A$) or a real $2 \times 2$ matrix with a nonreal pair of complex conjugate eigenvalues. Hence, since $\sigma(A) = \sigma(Q^T A Q)$ by similarity, any nonreal eigenvalues of $A$ must occur in conjugate pairs.

Problem 6
Let $A, B \in M_n$ be given, and suppose $A$ and $B$ are simultaneously similar to upper triangular matrices; that is, $S^{-1}AS$ and $S^{-1}BS$ are both upper triangular for some nonsingular $S \in M_n$. Show that every eigenvalue of $AB - BA$ must be zero.

Solution. Put $T_A = S^{-1}AS$ and $T_B = S^{-1}BS$. Then $T_A T_B = S^{-1}ASS^{-1}BS = S^{-1}ABS$ and similarly $T_B T_A = S^{-1}BAS$, and since $T_A$ and $T_B$ are upper triangular, so are $T_A T_B$ and $T_B T_A$. Now,
\[ T_A T_B - T_B T_A = S^{-1}ABS - S^{-1}BAS = S^{-1}(AB - BA)S, \]
and as $T_A T_B$ and $T_B T_A$ are both upper triangular, $T_A T_B - T_B T_A$ is also upper triangular, so the eigenvalues of $AB - BA$ are the diagonal entries of $T_A T_B - T_B T_A$. But if $T_A = [t_{ij}]$ and $T_B = [s_{ij}]$, then $t_{ij} = s_{ij} = 0$ for $i > j$, hence
\[ T_A T_B = \begin{pmatrix} t_{11} & & * \\ & \ddots & \\ 0 & & t_{nn} \end{pmatrix} \begin{pmatrix} s_{11} & & * \\ & \ddots & \\ 0 & & s_{nn} \end{pmatrix} = \begin{pmatrix} t_{11}s_{11} & & * \\ & \ddots & \\ 0 & & t_{nn}s_{nn} \end{pmatrix}, \]
so the diagonal of $T_A T_B$ is $t_{ii}s_{ii}$, $i = 1, \ldots, n$, and by a similar computation the diagonal of $T_B T_A$ is $s_{ii}t_{ii}$ (i.e., the two sets of diagonal entries are the same). Therefore the diagonal of $T_A T_B - T_B T_A$ is $t_{ii}s_{ii} - s_{ii}t_{ii} = 0$ for all $i = 1, \ldots, n$, which implies that every eigenvalue of $AB - BA$ is zero, as desired.

Section 2.4: Some Implications of Schur's Theorem

Problem 2
If $A \in M_n$, show that the rank of $A$ is not less than the number of nonzero eigenvalues of $A$.

Solution. If $A \in M_n$ and $\sigma(A) = \{\lambda_1, \ldots, \lambda_n\}$, then by Schur's Theorem there is a unitary matrix $U$ such that $U^*AU = T = [t_{ij}]$, where $T$ is upper triangular and $t_{ii} = \lambda_i$, $i = 1, \ldots, n$. If $k$ of the eigenvalues of $A$ are nonzero, then $T$ has $k$ nonzero and $n - k$ zero entries along its main diagonal.
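As a numeric aside (not part of the original solution), the claim of this problem can be spot-checked with numpy. The test matrix below is an arbitrary choice; since it is already upper triangular, its eigenvalues are its diagonal entries $2, 0, 0$.

```python
import numpy as np

# Spot-check: rank(A) is at least the number of nonzero eigenvalues of A.
# The matrix is an arbitrary upper triangular example, so its eigenvalues
# are the diagonal entries 2, 0, 0.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

num_nonzero = int(np.sum(np.abs(np.diag(A)) > 1e-10))  # nonzero eigenvalues
rank = np.linalg.matrix_rank(A)

assert rank >= num_nonzero
print(rank, num_nonzero)  # here rank = 2 strictly exceeds the count 1
```

Note that the inequality is strict for this matrix, in keeping with the remark in the solution that $\mathrm{rank}(A) > k$ is possible.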
As such, the $k$ columns of $T$ containing the nonzero diagonal entries constitute a linearly independent set (since $T$ is upper triangular), and as such $\mathrm{rank}(T) \ge k$. But then $\mathrm{rank}(A) \ge k$, since $U$ is nonsingular and rank is invariant under multiplication by nonsingular matrices. Of course, we may certainly have $\mathrm{rank}(A) > k$, for if
\[ A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \]
then $A$ is already upper triangular and $\sigma(A) = \{0\}$, so even though $A$ has no nonzero eigenvalues, $\mathrm{rank}(A) = 1 > 0$.

Problem 4
Let $A \in M_n$ be a nonsingular matrix. Show that any matrix that commutes with $A$ also commutes with $A^{-1}$.

Solution. Here we provide two proofs of the given statement. First, if $A \in M_n$ is nonsingular and $AB = BA$ for some $B \in M_n$, then multiplying on the left by $A^{-1}$ gives $B = A^{-1}BA$, and multiplying this on the right by $A^{-1}$ gives $BA^{-1} = A^{-1}B$; indeed, $B$ commutes with $A^{-1}$ if and only if it commutes with $A$.

Second, by Corollary 2.4.4, since $A \in M_n$ is nonsingular, there is a polynomial $q(t)$, whose coefficients depend on $A$ and with $\deg q(t) \le n - 1$, such that $A^{-1} = q(A)$. Put $k = \deg q(t)$ and write $q(t) = a_k t^k + a_{k-1} t^{k-1} + \cdots + a_1 t + a_0$, where $a_k \neq 0$. Now, observe that showing that $BA = AB$ implies $Bq(A) = q(A)B$ will prove the given statement. Note that for any $p \in \mathbb{N}$ we have
\[ BA^p = BAA^{p-1} = ABA^{p-1} = \cdots = A^iBA^{p-i} = \cdots = A^{p-1}BA = A^pB, \]
so $B$ commutes with any positive integer power of $A$; as such, we compute
\begin{align*}
q(A)B &= (a_kA^k + a_{k-1}A^{k-1} + \cdots + a_1A + a_0I)B \\
&= a_kA^kB + a_{k-1}A^{k-1}B + \cdots + a_1AB + a_0IB \\
&= a_kBA^k + a_{k-1}BA^{k-1} + \cdots + a_1BA + a_0BI \\
&= B(a_kA^k + a_{k-1}A^{k-1} + \cdots + a_1A + a_0I) = Bq(A),
\end{align*}
and thus $A^{-1}B = q(A)B = Bq(A) = BA^{-1}$, as desired.

Problem 5
Use (2.3.1) to show that if $A \in M_n$ has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, then
\[ \sum_{i=1}^n \lambda_i^k = \mathrm{tr}(A^k), \qquad k = 1, 2, \ldots. \]

Solution. First, if $A \in M_n$ and $\sigma(A) = \{\lambda_1, \ldots, \lambda_n\}$, then letting $p(t) = t^k$ for $k = 1, 2, \ldots$, by Theorem 1.1.6 we have that $p(A) = A^k$ has eigenvalues $p(\lambda_i) = \lambda_i^k$, $i = 1, \ldots, n$. Now, by Schur's Theorem, for each $k = 1, 2, \ldots$ there is a unitary matrix $U_k \in M_n$ such that $U_k^* A^k U_k = T_k = [t^{(k)}_{ij}]$, where $T_k$ is upper triangular and $t^{(k)}_{ii} = \lambda_i^k$, $i = 1, \ldots, n$. Hence, by Problem 11 (below), as $\mathrm{tr}(AB) = \mathrm{tr}(BA)$, and as the trace of a matrix is the sum of its eigenvalues,
\[ \mathrm{tr}(A^k) = \mathrm{tr}(U_kU_k^*A^k) = \mathrm{tr}(U_k^*A^kU_k) = \mathrm{tr}(T_k) = \sum_{i=1}^n \lambda_i^k, \qquad k = 1, 2, \ldots, \]
as desired.

Problem 9
Let $A \in M_n$, $B \in M_m$ be given, and suppose $A$ and $B$ have no eigenvalues in common; that is, $\sigma(A) \cap \sigma(B)$ is empty. Use the Cayley-Hamilton theorem (2.4.2) to show that the equation $AX - XB = 0$, $X \in M_{n,m}$, has only the solution $X = 0$.
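As a numeric aside (not part of the original solution), this uniqueness claim can be spot-checked with numpy by writing the map $X \mapsto AX - XB$ in Kronecker-product form: with column-stacking $\mathrm{vec}$, it has matrix $(I_m \otimes A) - (B^T \otimes I_n)$, and the claim says this matrix is nonsingular. The matrices below are arbitrary choices with disjoint spectra.

```python
import numpy as np

# Spot-check: if sigma(A) and sigma(B) are disjoint, AX - XB = 0 forces X = 0.
# With column-stacking vec, vec(AX - XB) = (kron(I, A) - kron(B.T, I)) vec(X),
# so the claim is that this Kronecker-form matrix is nonsingular.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])   # eigenvalues 1, 3
B = np.array([[4.0, 0.0],
              [1.0, 5.0]])   # eigenvalues 4, 5, disjoint from sigma(A)

n = A.shape[0]
m = B.shape[0]
M = np.kron(np.eye(m), A) - np.kron(B.T, np.eye(n))

# Nonsingular, so M @ vec(X) = 0 has only the trivial solution.
assert abs(np.linalg.det(M)) > 1e-10
```

Consistent with the proof below, the eigenvalues of this Kronecker-form matrix are the differences $\lambda_i - \mu_j$, all nonzero here.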
Deduce from this fact that the equation $AX - XB = C$ has a unique solution $X \in M_{n,m}$ for each given $C \in M_{n,m}$.

Solution. Suppose $AX = XB$, for $A$, $B$, and $X$ as given above. Then, assuming that $A^kX = XB^k$ for $k = 1, \ldots, p$, we have
\[ A^{p+1}X = A(A^pX) = A(XB^p) = (AX)B^p = (XB)B^p = XB^{p+1}, \]
thus by induction $A^kX = XB^k$ for all $k = 1, 2, \ldots$. In this way, if $p(t)$ is any polynomial, it follows that $p(A)X = Xp(B)$ (as in Problem 4 above). So $p_A(A)X = Xp_A(B)$, where $p_A(t)$ is the characteristic polynomial of $A$, and as $p_A(A) = 0$ by the Cayley-Hamilton Theorem, we have $Xp_A(B) = 0$. But since $p_A(t) = (t - \lambda_1)(t - \lambda_2) \cdots (t - \lambda_n)$, where $\lambda_i \in \sigma(A)$, $i = 1, \ldots, n$, it follows that
\[ p_A(B) = \prod_{i=1}^n (B - \lambda_iI). \]
Moreover, the eigenvalues of the matrix $p_A(B)$ are $p_A(\mu_j)$ for $\mu_j \in \sigma(B)$, $j = 1, \ldots, m$, and since $\sigma(A) \cap \sigma(B) = \emptyset$, we have $\mu_j \neq \lambda_i$ for any $1 \le i \le n$ and $1 \le j \le m$, hence
\[ p_A(\mu_j) = \prod_{i=1}^n (\mu_j - \lambda_i) \neq 0 \]
for each $j = 1, \ldots, m$. So, as all of the eigenvalues of $p_A(B)$ are nonzero, it follows that $p_A(B)$ is nonsingular, and as such $Xp_A(B) = 0$ has the unique solution $X = 0$; hence $AX - XB = 0$ has the unique solution $X = 0$. So, considering the linear transformation $T : M_{n,m} \to M_{n,m}$ defined by $T(X) = AX - XB$: as $T(X) = 0$ has the unique solution $X = 0$, the map $T$ is injective, hence (being a linear map on a finite-dimensional space) bijective, and it follows that $T(X) = C$ has a unique solution for each $C \in M_{n,m}$, and the proof is complete.

Problem 11
Let $A, B \in M_n$ be given and consider the commutator $C = AB - BA$. Show that $\mathrm{tr}(C) = 0$. Consider
\[ A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \]
and show that a commutator need not be nilpotent; that is, some eigenvalues of a commutator can be nonzero, even though the sum of the eigenvalues must be zero.

Solution. First, by the definition of trace as the sum of diagonal entries, we have $\mathrm{tr}(C) = \mathrm{tr}(AB - BA) = \mathrm{tr}(AB) - \mathrm{tr}(BA)$; by Theorem 1.3.20, the eigenvalues of $AB$ and $BA$ are the same (counting multiplicity), and as the trace of a matrix is also the sum of its eigenvalues, we have $\mathrm{tr}(AB) = \mathrm{tr}(BA)$, so that $\mathrm{tr}(C) = 0$. Now, observe that with $A$ and $B$ as given above, we have
\[ C = AB - BA = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \]
so that $C$ has (nonzero) eigenvalues $1$ and $-1$, and hence $C$ is not nilpotent, but we see that $\mathrm{tr}(C) = 1 + (-1) = 0$.