Math 5327 Fall 2018 Homework 7

1. For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable.

(a) $A = \begin{pmatrix} 3 & 1 & 0 \\ 1 & 2 & 0 \\ 1 & 1 & 0 \end{pmatrix}$

Solution: $\det(xI - A) = \begin{vmatrix} x-3 & -1 & 0 \\ -1 & x-2 & 0 \\ -1 & -1 & x \end{vmatrix} = x\bigl((x-3)(x-2) - 1\bigr) = x(x^2 - 5x + 5)$. The eigenvalues are $0$ and $\frac{5 \pm \sqrt{5}}{2}$. Since the minimal polynomial divides the characteristic polynomial and has the same zeros, $m_A(x) = x(x^2 - 5x + 5)$ as well. Since $m_A(x)$ factors into distinct linear terms over $\mathbb{R}$, $A$ is diagonalizable.

As a check, $A^2 = \begin{pmatrix} 10 & 5 & 0 \\ 5 & 5 & 0 \\ 4 & 3 & 0 \end{pmatrix}$ and $A^3 = \begin{pmatrix} 35 & 20 & 0 \\ 20 & 15 & 0 \\ 15 & 10 & 0 \end{pmatrix}$. We can see from this that $A^3 + 5A = 5A^2$, consistent with $m_A(x) = x^3 - 5x^2 + 5x$.

(b) $B = \begin{pmatrix} 0 & r & 0 & 0 \\ 0 & 0 & s & 0 \\ 0 & 0 & 0 & t \\ 0 & 0 & 0 & 0 \end{pmatrix}$, so $B^2 = \begin{pmatrix} 0 & 0 & rs & 0 \\ 0 & 0 & 0 & st \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$, $B^3 = \begin{pmatrix} 0 & 0 & 0 & rst \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$.

Solution: The characteristic polynomial is $x^4$, so the minimal polynomial is a divisor of $x^4$. This is tricky: the minimal polynomial is $x^4$ if $r, s, t$ are all nonzero; the minimal polynomial is $x^3$ if $s \ne 0$ and exactly one of $r, t$ is nonzero; and the minimal polynomial is $x^2$ if $s = 0$ but at least one of $r, t$ is not, or when $s \ne 0$ but both $r$ and $t$ are $0$. Finally, the minimal polynomial is $x$ when $r = s = t = 0$. Only in this last case is $B$ diagonalizable.

(c) $C = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 3 & 4 \\ 0 & 0 & 0 & 4 \end{pmatrix}$

Solution: The characteristic polynomial is $(x-1)(x-2)(x-3)(x-4)$, the same as the minimal polynomial. Distinct linear factors mean that $C$ is diagonalizable.
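As a quick numerical sanity check of part 1(a) (not part of the original solution), the relation $A^3 - 5A^2 + 5A = 0$ can be verified directly. The helper name `matmul` is our own, for illustration only:

```python
# Sanity check for 1(a): A satisfies its minimal polynomial m_A(x) = x^3 - 5x^2 + 5x.
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[3, 1, 0],
     [1, 2, 0],
     [1, 1, 0]]
A2 = matmul(A, A)
A3 = matmul(A2, A)

# m_A(A) = A^3 - 5A^2 + 5A should be the zero matrix.
mA_of_A = [[A3[i][j] - 5 * A2[i][j] + 5 * A[i][j] for j in range(3)]
           for i in range(3)]
print(A2)        # [[10, 5, 0], [5, 5, 0], [4, 3, 0]]
print(mA_of_A)   # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```

The printed powers agree with the $A^2$ and $A^3$ quoted in the solution.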
(d) $D = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

Solution: The matrix is lower triangular, so $c_D(x) = (x-1)^3(x-2)$. This means $m_D(x) = (x-1)^k(x-2)$ for some $k$ between 1 and 3. We start by calculating

$(D - I)(D - 2I) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 3 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & -1 & 0 & 0 \\ 3 & 2 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 2 & -2 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$,

and $(D - I)^2(D - 2I) = 0$. This is enough to tell us that $m_D(x) = (x-1)^2(x-2)$. Since $m_D(x)$ has repeated zeros, $D$ is not diagonalizable.

2. Suppose $A = \begin{pmatrix} 0 & 0 & -c \\ 1 & 0 & -b \\ 0 & 1 & -a \end{pmatrix}$. Show that the minimal and characteristic polynomials of $A$ are the same. This shows that every monic cubic polynomial $p(x)$ has a matrix $A$ with $p(x)$ as both its minimal and characteristic polynomials.

Solution: We have $\det(xI - A) = \begin{vmatrix} x & 0 & c \\ -1 & x & b \\ 0 & -1 & x+a \end{vmatrix} = x\bigl(x(x+a) + b\bigr) + c = x^3 + ax^2 + bx + c$.

For the minimal polynomial, consider the four matrices $I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$, $A = \begin{pmatrix} 0 & 0 & -c \\ 1 & 0 & -b \\ 0 & 1 & -a \end{pmatrix}$, $A^2 = \begin{pmatrix} 0 & -c & ac \\ 0 & -b & ab - c \\ 1 & -a & a^2 - b \end{pmatrix}$, $A^3 = \begin{pmatrix} -c & ac & -a^2c + bc \\ -b & ab - c & -a^2b + b^2 + ac \\ -a & a^2 - b & -a^3 + 2ab - c \end{pmatrix}$.

Based on the first columns, we see that $I, A, A^2$ are linearly independent, which tells us $m_A(x)$ has degree at least 3. Since $A^3 + aA^2 + bA + cI = 0$, the minimal polynomial is $x^3 + ax^2 + bx + c$, the same as the characteristic polynomial.
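Both computations above (the minimal polynomial of $D$ in 1(d) and the companion-matrix identity in problem 2) can be checked numerically. This is an illustrative sketch, not part of the original solution; the helper names are our own, and $a, b, c = 2, 3, 5$ is an arbitrary sample choice of coefficients:

```python
# Check 1(d): (D - I)(D - 2I) != 0 but (D - I)^2 (D - 2I) = 0,
# so m_D(x) = (x - 1)^2 (x - 2).  Then check problem 2 on sample a, b, c.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add_scaled(X, Y, s):
    """Entrywise X + s*Y."""
    n = len(X)
    return [[X[i][j] + s * Y[i][j] for j in range(n)] for i in range(n)]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

D = [[2, 0, 0, 0],
     [1, 1, 0, 0],
     [3, 2, 1, 0],
     [0, 0, 0, 1]]
I4 = identity(4)
DmI = add_scaled(D, I4, -1)    # D - I
Dm2I = add_scaled(D, I4, -2)   # D - 2I
P1 = matmul(DmI, Dm2I)         # (D - I)(D - 2I): nonzero
P2 = matmul(DmI, P1)           # (D - I)^2 (D - 2I): zero
print(P1[2])  # the one nonzero row: [2, -2, 0, 0]

# Problem 2: the companion matrix of x^3 + a x^2 + b x + c satisfies
# A^3 + a A^2 + b A + c I = 0 (sample coefficients a, b, c = 2, 3, 5).
a, b, c = 2, 3, 5
A = [[0, 0, -c],
     [1, 0, -b],
     [0, 1, -a]]
A2 = matmul(A, A)
A3 = matmul(A2, A)
I3 = identity(3)
pA = [[A3[i][j] + a * A2[i][j] + b * A[i][j] + c * I3[i][j] for j in range(3)]
      for i in range(3)]
print(pA)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```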
3. Let $A = \begin{pmatrix} 1 & 1 \\ -4 & 1 \end{pmatrix}$.

(a) If $T: \mathbb{R}^2 \to \mathbb{R}^2$ is defined by $T(v) = Av$, show that the only $T$-invariant subspaces of $\mathbb{R}^2$ are $\{0\}$ and $\mathbb{R}^2$.

Solution: First, $0$ and $V$ are always $T$-invariant for $T: V \to V$, so $\mathbb{R}^2$ is $T$-invariant. The only other subspaces of $\mathbb{R}^2$ are 1-dimensional. If $U$ were a 1-dimensional $T$-invariant subspace with basis vector $u$, then $T(u)$ must be in $U$, meaning that $T(u) = cu$ for some scalar $c$. That is, 1-dimensional $T$-invariant subspaces are eigenspaces. The characteristic polynomial of $A$ is $x^2 - 2x + 5$, which has no real zeros, so $A$ has no real eigenvalues, and there can't be any 1-dimensional $T$-invariant subspaces.

(b) If $S: \mathbb{C}^2 \to \mathbb{C}^2$ is defined by $S(v) = Av$, show that $\mathbb{C}^2$ has two 1-dimensional $S$-invariant subspaces.

Solution: Since there are two complex eigenvalues ($1 + 2i$ and $1 - 2i$), there are two 1-dimensional $S$-invariant subspaces. Specifically, they are the 1-dimensional spaces spanned by the vectors $\begin{pmatrix} 1 \\ 2i \end{pmatrix}$ and $\begin{pmatrix} 1 \\ -2i \end{pmatrix}$.

4. Let $T: P_3 \to P_3$ be defined by $T(ax^3 + bx^2 + cx + d) = (a+b)x^3 + (b-a)x^2 + (a+b+d)x + (a-b+2c+d)$.

(a) Show that $P_1$ and $V = \{p(x) \mid p(1) = 0\}$ are both $T$-invariant subspaces of $P_3$.

Solution: For $P_1$, $T(cx + d) = dx + 2c + d$, which is still in $P_1$. For $V$, suppose $q(x) = T(p(x))$. With $p(x) = ax^3 + bx^2 + cx + d$, we have $q(x) = (a+b)x^3 + (b-a)x^2 + (a+b+d)x + (a-b+2c+d)$. Thus, $q(1) = 2a + 2b + 2c + 2d = 2p(1)$. So when $p(1) = 0$, $q(1) = 0$ as well, showing $V$ is $T$-invariant.

(b) Show that $V$ contains a 2-dimensional $T$-invariant subspace $W$ such that $P_3 = P_1 \oplus W$.

Solution: There is probably a better way to do this. I just did the following: A basis for $V$ is $\{x - 1,\ x^2 - 1,\ x^3 - 1\}$, and if we apply $T$ to each of these, we get $-(x-1)$, $x^3 + x^2 - 2$, $x^3 - x^2$. If the original basis vectors are $b_1, b_2, b_3$, then $T(b_2) = b_2 + b_3$ and $T(b_3) = b_3 - b_2$. This means that $W = \mathrm{span}\{b_2, b_3\} = \mathrm{span}\{x^2 - 1,\ x^3 - 1\}$ is $T$-invariant. Since $\{1, x, x^2 - 1, x^3 - 1\}$ is a basis for $P_3$, it follows that $P_3 = P_1 \oplus W$.
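The eigenvectors claimed in 3(b) can be checked directly with Python's built-in complex numbers. This is an illustrative sketch, not part of the original solution; the helper name `apply` is our own:

```python
# Check 3(b): for A = [[1, 1], [-4, 1]], the vectors (1, 2i) and (1, -2i)
# are eigenvectors with eigenvalues 1 + 2i and 1 - 2i.
A = [[1, 1],
     [-4, 1]]

def apply(A, v):
    """Matrix-vector product for a 2x2 matrix."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

# Each eigenvalue 1 +/- 2i paired with its claimed eigenvector (1, +/-2i):
pairs = [(1 + 2j, [1, 2j]), (1 - 2j, [1, -2j])]
results = [apply(A, v) == [lam * x for x in v] for lam, v in pairs]
print(results)  # [True, True]
```

Each `True` confirms $Av = \lambda v$, so each vector spans a 1-dimensional $S$-invariant subspace of $\mathbb{C}^2$.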
(c) Use your answer to part (b) to find a basis for $P_3$ for which the matrix of $T$ is block diagonal.

Solution: One basis is the one I listed above, $B = \{1, x, x^2 - 1, x^3 - 1\}$. With respect to this basis we have

$[T]_B = \bigl([T(1)]_B\ [T(x)]_B\ [T(x^2 - 1)]_B\ [T(x^3 - 1)]_B\bigr) = \bigl([x+1]_B\ [2]_B\ [x^3 + x^2 - 2]_B\ [x^3 - x^2]_B\bigr) = \begin{pmatrix} 1 & 2 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 1 & 1 \end{pmatrix}$.

5. Let $T: F^{n \times n} \to F^{n \times n}$ be defined by $T(A) = BA$, where $B$ is some fixed matrix. Similarly, define $S$ by $S(A) = AB$.

(a) If $v$ is an eigenvector of $B$, show that $A = (v\ 0\ \cdots\ 0)$ is an eigenvector of $T$.

Solution: If $Bv = cv$, then $T(A) = BA = B(v\ 0\ \cdots\ 0) = (Bv\ B0\ \cdots\ B0) = (cv\ 0\ \cdots\ 0) = cA$.

(b) Prove that $m_T(x) = m_S(x) = m_B(x)$.

Solution: Note that $T^k(A) = B^kA$ and $S^k(A) = AB^k$. It follows that for any polynomial $p(x) = a_mx^m + a_{m-1}x^{m-1} + \cdots + a_1x + a_0$,

$p(T)(A) = (a_mT^m + a_{m-1}T^{m-1} + \cdots + a_1T + a_0I)(A) = a_mT^m(A) + a_{m-1}T^{m-1}(A) + \cdots + a_1T(A) + a_0I(A) = a_mB^mA + a_{m-1}B^{m-1}A + \cdots + a_1BA + a_0IA = (a_mB^m + a_{m-1}B^{m-1} + \cdots + a_1B + a_0I)A = p(B)A$.

Similarly, $p(S)(A) = A\,p(B)$. If $p(x)$ annihilates $B$, then $p(T)(A) = p(B)A = 0A = 0$ and $p(S)(A) = A\,p(B) = A0 = 0$, so $p(x)$ will annihilate both $S$ and $T$ as well. And if $p(x)$ annihilates $T$, take $A = I$ and we get $0 = p(T)(I) = p(B)I$, so $p(B) = 0$. Similarly, if $p(x)$ annihilates $S$, then $0 = p(S)(I) = I\,p(B)$, so $p(B) = 0$. What this means is that the annihilating polynomials for $B$ are exactly the same as the annihilating polynomials for $S$ and the annihilating polynomials for $T$. Consequently, the smallest one is the same in each case, so $m_T(x) = m_S(x) = m_B(x)$.
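The block-diagonal matrix in 4(c) can be double-checked by applying $T$ to coefficient vectors. In the sketch below (our own addition, not part of the original solution), a polynomial $ax^3 + bx^2 + cx + d$ is encoded as the tuple `(a, b, c, d)`, and each column of the claimed $[T]_B$ is verified against the actual action of $T$:

```python
# Check 4(c): with basis {1, x, x^2 - 1, x^3 - 1}, T applied to each basis
# vector matches the combination read off the claimed block-diagonal matrix.
def applyT(p):
    """T(ax^3 + bx^2 + cx + d), returned as a coefficient tuple (a, b, c, d)."""
    a, b, c, d = p
    return (a + b, b - a, a + b + d, a - b + 2 * c + d)

def comb(coeffs, vectors):
    """Linear combination of coefficient tuples."""
    return tuple(sum(s * v[i] for s, v in zip(coeffs, vectors)) for i in range(4))

# Basis B = {1, x, x^2 - 1, x^3 - 1} as (a, b, c, d) tuples:
basis = [(0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, -1), (1, 0, 0, -1)]

# Columns of the claimed [T]_B, read top to bottom:
columns = [(1, 1, 0, 0),   # T(1)       = 1*1 + 1*x
           (2, 0, 0, 0),   # T(x)       = 2*1
           (0, 0, 1, 1),   # T(x^2 - 1) = (x^2 - 1) + (x^3 - 1)
           (0, 0, -1, 1)]  # T(x^3 - 1) = -(x^2 - 1) + (x^3 - 1)

results = [applyT(v) == comb(col, basis) for v, col in zip(basis, columns)]
print(results)  # [True, True, True, True]
```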
(c) Prove that the characteristic polynomials for $S$ and $T$ are equal. They should have degree $n^2$.

Solution: An example is probably helpful here. Suppose that $B = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$. Then with respect to the standard basis $\mathcal{S} = \{E_{1,1}, E_{1,2}, E_{2,1}, E_{2,2}\}$ we have

$[S]_{\mathcal{S}} = \begin{pmatrix} 1 & 3 & 0 & 0 \\ 2 & 4 & 0 & 0 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 2 & 4 \end{pmatrix}$ and $[T]_{\mathcal{S}} = \begin{pmatrix} 1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 2 \\ 3 & 0 & 4 & 0 \\ 0 & 3 & 0 & 4 \end{pmatrix}$.

More generally, with $\mathcal{S} = \{E_{1,1}, E_{1,2}, \ldots, E_{1,n}, E_{2,1}, \ldots, E_{n,n}\}$, we have $S(E_{i,j}) = E_{i,j}B$, and this will be the matrix of all 0's except that its $i$th row will be the $j$th row of $B$. The basis vector $E_{i,j}$ is in position $(i-1)n + j$ in the basis, which means we are dealing with this column number in $[S]$. These entries will be the entries in a row of $B$, and they get put into the $i$th block as the $j$th column. What this means is that $[S]_{\mathcal{S}}$ is block diagonal, consisting of $n$ blocks of $B^t$. Consequently, the characteristic polynomial of this matrix has the form $\bigl(\det(xI - B^t)\bigr)^n$. But $B$ and $B^t$ have the same characteristic polynomial, so $c_S(x) = (c_B(x))^n$.

Obviously, $T$ is more annoying. The trick is to use a slightly different basis. Instead of the standard basis, we use $\mathcal{B} = \{E_{1,1}, E_{2,1}, \ldots, E_{n,1}, E_{1,2}, \ldots, E_{n,n}\}$. Letting $B = (v_1\ v_2\ \cdots\ v_n)$, $BE_{i,j}$ is the matrix that has $v_i$ in column $j$ and 0's for all the other columns. What this means is that with respect to this basis, the matrix of $T$ will be block diagonal with $B$'s on the diagonal. The characteristic polynomial of this matrix is again $c_B(x)^n$. So $c_T(x) = c_S(x) = (c_B(x))^n$, a polynomial of degree $n^2$.

(d) Prove that $S$ and $T$ are diagonalizable if and only if $B$ is diagonalizable.

Solution: This is a nice problem to show the power of the theory we've built up. Since $S$, $T$, and $B$ all have the same minimal polynomial, and a matrix/transformation is diagonalizable if and only if its minimal polynomial factors into distinct linear terms, $S$ and $T$ have this property if and only if $B$ does.
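The block structure in 5(c) can also be phrased with Kronecker products: in the row-by-row basis ordering, $[S] = I \otimes B^t$ and $[T] = B \otimes I$. This reformulation is our own addition (the text's $2 \times 2$ example matches it), and the sketch below verifies it for the $B = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ example by building $[S]$ and $[T]$ column by column:

```python
# Check 5(c): with basis {E11, E12, E21, E22} (row-major flattening),
# the matrix of S(A) = AB is kron(I, B^t), and that of T(A) = BA is kron(B, I).
def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def kron(X, Y):
    """Kronecker product of square matrices X (n x n) and Y (m x m)."""
    n, m = len(X), len(Y)
    return [[X[i // m][j // m] * Y[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def vec(A):
    """Row-major flattening of a matrix."""
    return [x for row in A for x in row]

B = [[1, 2], [3, 4]]
n = 2
I2 = [[1, 0], [0, 1]]

# Build [S] and [T] column by column from their action on the E_ij basis.
S_cols, T_cols = [], []
for i in range(n):
    for j in range(n):
        E = [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]
        S_cols.append(vec(matmul(E, B)))   # S(E_ij) = E_ij B
        T_cols.append(vec(matmul(B, E)))   # T(E_ij) = B E_ij
S_mat, T_mat = transpose(S_cols), transpose(T_cols)

print(S_mat == kron(I2, transpose(B)))  # True: blocks of B^t on the diagonal
print(T_mat == kron(B, I2))             # True
```

The Kronecker view gives the same conclusion as the text: $\det(xI - I \otimes B^t) = \det(xI - B^t)^n$ and, after the basis change described above, the same power appears for $T$.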
For extra credit:

6. Generalize 2 to the $n \times n$ case.

Solution: The result is that the matrix $M = \begin{pmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{pmatrix}$ has both characteristic and minimal polynomial equal to $x^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0$. It is relatively easy to show this is true for the characteristic polynomial. For the minimal polynomial, it is not hard to show that $I, M, M^2, \ldots, M^{n-1}$ are linearly independent, based on the first column of each matrix. It takes a little more work to get the exact form for the minimal polynomial.

7. Let $T: F^{n \times n} \to F^{n \times n}$ be defined by $T(A) = BA - AB$, where $B$ is some fixed matrix. Prove or give a counterexample: $T$ is diagonalizable if and only if $B$ is diagonalizable. I will give partial credit if you can give examples of eigenvalues and eigenvectors for $T$.

Solution: I will only give a partial solution here. It turns out that $T$ is diagonalizable if and only if $B$ is. I will only talk about the case where $B$ is diagonalizable. In that case, we try to get $n^2$ independent eigenvectors (matrices) for $T$. Since $B$ is diagonalizable, there are $n$ linearly independent eigenvectors $v_1, v_2, \ldots, v_n$ for $B$ with eigenvalues $c_1, c_2, \ldots, c_n$, not all necessarily distinct. It takes a little effort, but if $B$ is diagonalizable, then $B^t$ is also diagonalizable, so $B^t$ also has $n$ linearly independent eigenvectors $w_1, \ldots, w_n$. Let these vectors have eigenvalues $d_1, \ldots, d_n$. These vectors have the property that $B^tw = dw$, or, taking transposes, $w^tB = dw^t$. Now let $A_{i,j} = v_iw_j^t$. Each $A_{i,j}$ is an $n \times n$ matrix. Moreover,

$T(A_{i,j}) = Bv_iw_j^t - v_iw_j^tB = c_iv_iw_j^t - d_jv_iw_j^t = (c_i - d_j)v_iw_j^t$.

That is, each $A_{i,j}$ is an eigenvector, with eigenvalue $c_i - d_j$. It turns out that the $A$'s are all linearly independent (I will leave this to you to check). Consequently, we have $n^2$ independent eigenvectors for $T$, so $T$ is diagonalizable.
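The eigenvector construction in problem 7 can be checked on a small example. The matrix $B = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$ is our own sample choice (diagonalizable, eigenvalues 2 and 3), and the eigenvectors of $B$ and $B^t$ below were found by hand; this sketch is not part of the original solution:

```python
# Check problem 7: for diagonalizable B, the matrices A_ij = v_i w_j^t are
# eigenvectors of T(A) = BA - AB with eigenvalues c_i - d_j.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[2, 1],
     [0, 3]]
# Eigenpairs (c, v) of B:      B v = c v
eig_B = [(2, [1, 0]), (3, [1, 1])]
# Eigenpairs (d, w) of B^t:    B^t w = d w
eig_Bt = [(2, [1, -1]), (3, [0, 1])]

checks = []
for c, v in eig_B:
    for d, w in eig_Bt:
        A = [[v[r] * w[s] for s in range(2)] for r in range(2)]  # A = v w^t
        BA, AB = matmul(B, A), matmul(A, B)
        TA = [[BA[r][s] - AB[r][s] for s in range(2)] for r in range(2)]
        expected = [[(c - d) * A[r][s] for s in range(2)] for r in range(2)]
        checks.append(TA == expected)
print(checks)  # [True, True, True, True]
```

All four rank-one matrices $A_{i,j}$ come out as eigenvectors of $T$ with eigenvalues $c_i - d_j \in \{0, -1, 1, 0\}$, exactly as the solution claims.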