Linear Algebra 2 Final Exam, December 7, 2015. SOLUTIONS

1. (5.5 points) Let $T : \mathbb{R}^2 \to \mathbb{R}^4$ be a linear mapping satisfying $T(1,1) = (-1, 0, 2, 3)$, $T(2,3) = (2, 3, 0, 0)$. Determine $T(x,y)$ for $(x,y) \in \mathbb{R}^2$.

Solution. We first find $a, b \in \mathbb{R}$ such that $(x,y) = a(1,1) + b(2,3)$. This means solving
$$a + 2b = x, \qquad a + 3b = y,$$
which gives $a = 3x - 2y$, $b = y - x$. Thus
$$T(x,y) = aT(1,1) + bT(2,3) = (3x-2y)(-1,0,2,3) + (y-x)(2,3,0,0) = (-5x+4y,\ -3x+3y,\ 6x-4y,\ 9x-6y).$$

2. (5.5 points) Suppose $V$ is a vector space over the field $F$ and assume that $V$ is NOT finite dimensional. Show that there is a sequence of vectors $v_1, v_2, \ldots$ in $V$ such that $v_1, \ldots, v_m$ is linearly independent for each positive integer $m$.

Hint: The sequence can be constructed by induction. The vector $v_1$ can be almost arbitrary; show that for every $n$, given $v_1, \ldots, v_n$ you can find $v_{n+1}$ to continue the process.

Proof. Since $V \ne \{0\}$, there is $v \in V$ such that $v \ne 0$. We set $v_1 = v$. Assume $v_1, \ldots, v_m$ have been found for some $m \ge 1$ so that $v_1, \ldots, v_m$ is linearly independent. This has been done for $m = 1$. Since $V$ is infinite dimensional, $V \ne \mathrm{span}(v_1, \ldots, v_m)$. Thus there is $v \in V \setminus \mathrm{span}(v_1, \ldots, v_m)$. Set $v_{m+1} = v$. If a list of vectors is linearly dependent, then one of the vectors belongs to the span of the preceding ones; this is clearly not so for $v_1, \ldots, v_{m+1}$. It follows that $v_1, \ldots, v_{m+1}$ is linearly independent. The desired sequence has been inductively constructed.

3. (5.5 points) Let $V$ be a finite dimensional vector space and let $U$ be a subspace of $V$. Prove: If $\dim U = \dim V$, then $U = V$.

Hint: Show that a basis of $U$ must span $V$.

Proof. Let $n = \dim V$ and assume $\dim U = \dim V = n$. Then $U$ has a basis of $n$ elements, say $u_1, \ldots, u_n$. These vectors are linearly independent, so they are $n$ linearly independent vectors in $V$. Since $\dim V = n$, they must also span $V$. Thus $V = U$.

4. (5.5 points) Let $V, W$ be finite dimensional vector spaces, let $T \in L(V, W)$, and let $v_1, \ldots, v_n$ be a basis of $V$.
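The formula obtained in Problem 1 can be sanity-checked numerically against the two prescribed values (a quick numpy check, not part of the required solution):

```python
import numpy as np

# T(x, y) = (-5x + 4y, -3x + 3y, 6x - 4y, 9x - 6y), as derived in Problem 1
def T(x, y):
    return np.array([-5*x + 4*y, -3*x + 3*y, 6*x - 4*y, 9*x - 6*y])

# The formula must reproduce the two values that define T
assert np.array_equal(T(1, 1), [-1, 0, 2, 3])
assert np.array_equal(T(2, 3), [2, 3, 0, 0])

# Linearity spot check: T(3u + 2v) = 3 T(u) + 2 T(v) for u = (1,1), v = (2,3)
assert np.array_equal(T(3*1 + 2*2, 3*1 + 2*3), 3*T(1, 1) + 2*T(2, 3))
```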
Prove that $T$ is injective (one-to-one) if and only if $Tv_1, \ldots, Tv_n$ are linearly independent in $W$.

Proof. (This was done in class.) Assume first that $T$ is injective. If $\sum_{j=1}^n c_j Tv_j = 0$ for some $c_1, \ldots, c_n \in F$, then $0 = \sum_{j=1}^n c_j Tv_j = T\bigl(\sum_{j=1}^n c_j v_j\bigr)$.
Since $T$ is injective, it follows that $\sum_{j=1}^n c_j v_j = 0$, hence $c_1 = c_2 = \cdots = c_n = 0$ since $v_1, \ldots, v_n$ are linearly independent, being a basis. This proves $Tv_1, \ldots, Tv_n$ are linearly independent.

Conversely, assume $Tv_1, \ldots, Tv_n$ are linearly independent. Let $v \in V$ and assume $Tv = 0$. Since $v_1, \ldots, v_n$ is a basis of $V$ we can write $v = \sum_{j=1}^n c_j v_j$, where $c_1, \ldots, c_n \in F$. Then $0 = Tv = \sum_{j=1}^n c_j Tv_j$; by the linear independence of $Tv_1, \ldots, Tv_n$ we conclude that $c_1 = \cdots = c_n = 0$, thus $v = 0$. Thus $N(T) = \{0\}$ and $T$ is injective.

5. (5.5 points) Let $V, W$ be vector spaces, let $T \in L(V, W)$, and let $v_1, \ldots, v_n$ be vectors in $V$ such that $Tv_1, \ldots, Tv_n$ are linearly independent in $W$. Prove $v_1, \ldots, v_n$ are linearly independent in $V$.

Proof. Assume $c_1, \ldots, c_n$ are elements of the field $F$ such that $\sum_{j=1}^n c_j v_j = 0$. Then $0 = T0 = T\bigl(\sum_{j=1}^n c_j v_j\bigr) = \sum_{j=1}^n c_j Tv_j$. Since $Tv_1, \ldots, Tv_n$ are linearly independent, we conclude that $c_1 = \cdots = c_n = 0$.

6. Let $V$ be a finite dimensional vector space.

(a) (5.5 points) Assume $\dim V$ is even. Prove: There exists $T \in L(V)$ such that $N(T) = R(T)$. (As done during the whole course, I use $N(T)$ for the null space of $T$ and $R(T)$ for the range of $T$.)

(b) (5.5 points) Assume $\dim V$ is odd. Prove: There does NOT exist $T \in L(V)$ such that $N(T) = R(T)$.

Hint: What role can the parity of the dimension play? What can you do with an even basis that you can't do with an odd one? And does the Fundamental Theorem of Linear Mappings have anything to say here?

(a) Proof. Assume $\dim V = 2m$ is even. Let $v_1, \ldots, v_{2m}$ be a basis of $V$. Since linear operators are uniquely determined by their action on a basis, there exists $T \in L(V)$ such that $Tv_j = v_{m+j}$ for $j = 1, \ldots, m$ and $Tv_j = 0$ if $m+1 \le j \le 2m$. Then $R(T) = \mathrm{span}(v_{m+1}, \ldots, v_{2m})$. In fact, let $v = \sum_{j=1}^{2m} c_j v_j \in V$. Then $Tv = \sum_{j=1}^{2m} c_j Tv_j = \sum_{j=1}^{m} c_j v_{m+j} \in \mathrm{span}(v_{m+1}, \ldots, v_{2m})$, proving that $R(T) \subseteq \mathrm{span}(v_{m+1}, \ldots, v_{2m})$. Conversely, let $v \in \mathrm{span}(v_{m+1}, \ldots, v_{2m})$. Then, for some $c_{m+1}, \ldots, c_{2m}$ in $F$,
$$v = \sum_{j=m+1}^{2m} c_j v_j = \sum_{k=1}^{m} c_{m+k} Tv_k = T\Bigl(\sum_{k=1}^{m} c_{m+k} v_k\Bigr) \in R(T).$$
This establishes that $R(T) = \mathrm{span}(v_{m+1}, \ldots, v_{2m})$. Assuming now $v \in R(T)$, we can write $v = \sum_{j=m+1}^{2m} c_j v_j$ for some $c_{m+1}, \ldots, c_{2m}$ in $F$; thus $Tv = \sum_{j=m+1}^{2m} c_j Tv_j = 0$ (since $Tv_j = 0$ for $j \ge m+1$), proving $R(T) \subseteq N(T)$. Conversely, if $Tv = 0$, writing $v = \sum_{j=1}^{2m} c_j v_j$ for $c_1, \ldots, c_{2m}$ in $F$, we have $0 = Tv = \sum_{j=1}^{2m} c_j Tv_j = \sum_{j=1}^{m} c_j v_{m+j}$. Since $v_{m+1}, \ldots, v_{2m}$ are linearly independent, $c_1 = \cdots = c_m = 0$, so $v = \sum_{j=m+1}^{2m} c_j v_j \in \mathrm{span}(v_{m+1}, \ldots, v_{2m}) = R(T)$, proving $N(T) \subseteq R(T)$. Thus $N(T) = R(T)$.

(b) Proof. Assume $R(T) = N(T)$ and let $m = \dim R(T) = \dim N(T)$. By the fundamental theorem of linear mappings, $\dim V = \dim N(T) + \dim R(T) = 2m$, thus $\dim V$ must be even.

7. (5.5 points) Let $V$ be a vector space, not necessarily finite dimensional. Let $\varphi, \psi \in V' = L(V, F)$ and assume $N(\varphi) = N(\psi) \ne V$. Prove: There exists $c \in F$, $c \ne 0$, such that $\psi = c\varphi$.

Hint: Suppose $\varphi(u) = 1$. If $v \in V$ you might want to consider $v - \varphi(u)v$.

Note and apology: This was a bad hint; it should have been: Consider $v - \varphi(v)u$. Obviously, it had to be wrong: $v - \varphi(u)v = v - v = 0$.

Proof. Since $N(\varphi) \ne V$, there exists $u \in V$ such that $\varphi(u) \ne 0$. Dividing $u$ by $\varphi(u)$ if necessary, we can assume $\varphi(u) = 1$. Let $v \in V$. Then $v - \varphi(v)u \in N(\varphi)$; in fact, $\varphi(v - \varphi(v)u) = \varphi(v) - \varphi(v)\varphi(u) = \varphi(v) - \varphi(v) = 0$. Since $N(\varphi) = N(\psi)$ it follows that $0 = \psi(v - \varphi(v)u) = \psi(v) - \varphi(v)\psi(u)$. Thus $\psi(v) = c\varphi(v)$ for all $v \in V$, with $c = \psi(u)$. Notice that $\psi(u) \ne 0$ since $u \notin N(\varphi) = N(\psi)$.

8. Let $T : \mathbb{R}^3 \to \mathbb{R}^3$ be defined by $T(x,y,z) = (4x + 4z,\ -5x + 2y - 10z,\ -x)$.

(a) (5.5 points) Prove that $T$ has a single real eigenvalue. Hint: One approach is to evaluate the characteristic polynomial. Carefully!

(b) (5.5 points) If $\lambda_0$ is this eigenvalue, show that $E(\lambda_0, T) = \{v : Tv = \lambda_0 v\}$ has dimension 2. Hint: Maybe use row reduction.

(c) (5.5 points) Find a basis with respect to which the matrix of $T$ is upper triangular. Hint: If you did part (b) you already have two vectors for the basis. Any vector independent of these can complete it.

(a) The matrix of $T$ with respect to the standard basis of $\mathbb{R}^3$ is
$$\begin{pmatrix} 4 & 0 & 4 \\ -5 & 2 & -10 \\ -1 & 0 & 0 \end{pmatrix}.$$
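The computations in parts (a) and (b) can be cross-checked numerically from this matrix (a numpy sketch, for illustration only; the exam of course expects the hand computation):

```python
import numpy as np

# Matrix of T(x, y, z) = (4x + 4z, -5x + 2y - 10z, -x) in the standard basis
T = np.array([[ 4, 0,   4],
              [-5, 2, -10],
              [-1, 0,   0]], dtype=float)

# (a) det(lam*I - T) should equal (lam - 2)^3 at every lam
for lam in [0.0, 1.0, 3.0, 5.0]:
    assert np.isclose(np.linalg.det(lam*np.eye(3) - T), (lam - 2)**3)

# (b) dim E(2, T) = 3 - rank(T - 2I) should be 2
assert np.linalg.matrix_rank(T - 2*np.eye(3)) == 1

# The eigenvectors found by hand: T v = 2 v for v1 = (-2,0,1), v2 = (0,1,0)
for v in (np.array([-2.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])):
    assert np.allclose(T @ v, 2*v)
```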
We have
$$\det(\lambda I - T) = \det\begin{pmatrix} \lambda - 4 & 0 & -4 \\ 5 & \lambda - 2 & 10 \\ 1 & 0 & \lambda \end{pmatrix} = (\lambda - 4)\det\begin{pmatrix} \lambda - 2 & 10 \\ 0 & \lambda \end{pmatrix} - 4\det\begin{pmatrix} 5 & \lambda - 2 \\ 1 & 0 \end{pmatrix}$$
$$= (\lambda - 4)(\lambda - 2)\lambda + 4(\lambda - 2) = (\lambda - 2)(\lambda^2 - 4\lambda + 4) = (\lambda - 2)^3.$$
Since $\lambda$ is an eigenvalue of $T$ if and only if $\det(\lambda I - T) = 0$, it is clear that 2 is the only eigenvalue of $T$.

(b) We have $(x,y,z) \in E(2,T)$ if and only if $T(x,y,z) = 2(x,y,z)$, which works out to
$$2x + 4z = 0, \qquad -5x - 10z = 0, \qquad -x - 2z = 0.$$
All three equations are essentially the same; $(x,y,z)$ is in the eigenspace if and only if $x = -2z$ (with $y$ arbitrary). The following is thus a basis of the eigenspace: $(-2, 0, 1)$, $(0, 1, 0)$. Thus $\dim E(2,T) = 2$.

(c) Let $v_1 = (-2, 0, 1)$, $v_2 = (0, 1, 0)$ be the basis we found of $E(2,T)$. It is quite easy to see that the vector $(0,0,1)$ is independent of $v_1, v_2$. Perhaps the easiest way to prove this is to see that $T(0,0,1) = (4, -10, 0) \ne 2(0,0,1)$, so $(0,0,1) \notin E(2,T)$ and thus must be independent of $v_1, v_2$. Since $\dim \mathbb{R}^3 = 3$, setting $v_3 = (0,0,1)$, we have a basis of $\mathbb{R}^3$. With respect to this basis the matrix of $T$ must be upper triangular because $Tv_1 = 2v_1 \in \mathrm{span}(v_1)$, $Tv_2 = 2v_2 \in \mathrm{span}(v_1, v_2)$, and $Tv_3$ is some combination of $v_1, v_2, v_3$, hence in the span of $v_1, v_2, v_3$.

9. (5.5 points) Let $V = \mathbb{R}^n$ and let $v_1, \ldots, v_n$ be $n$ vectors in $\mathbb{R}^n$. Let $A$ be the matrix whose $k$-th row is the vector $v_k$; that is, $A = (v_{kj})_{1 \le k, j \le n}$, where $v_k = (v_{k1}, \ldots, v_{kn})$. Prove: $v_1, \ldots, v_n$ is an orthonormal basis of $\mathbb{R}^n$ if and only if $AA^t = I$. (Here $A^t$ is the transpose of $A$; $\mathbb{R}^n$ is considered an inner product space with the usual inner product.)

Proof. If the rows of $A$ are $v_1, \ldots, v_n$, one sees that $AA^t = (\langle v_i, v_j \rangle)_{1 \le i,j \le n}$. Thus $AA^t = I$ if and only if $\langle v_i, v_j \rangle = \delta_{ij}$ for $1 \le i, j \le n$, where the $\delta_{ij}$ are the Kronecker deltas; i.e., if and only if $v_1, \ldots, v_n$ are orthonormal. Since $n$ orthonormal vectors in $\mathbb{R}^n$ are linearly independent, hence a basis, this holds if and only if $v_1, \ldots, v_n$ is an orthonormal basis of $\mathbb{R}^n$.

10. (5.5 points) Let $T$ be a self adjoint operator on a finite dimensional inner product space and assume that 3 and 5 are the only eigenvalues of $T$. Prove that $T^2 - 8T + 15I = 0$.
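Before the proof, a quick numerical illustration: for any symmetric matrix built to have eigenvalues 3 and 5 only, the polynomial $T^2 - 8T + 15I$ vanishes (a numpy sketch with a randomly chosen example, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a self-adjoint T whose only eigenvalues are 3 and 5:
# T = Q D Q^T with Q orthogonal (from a QR factorization) and D = diag(3, 3, 5, 5)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
D = np.diag([3.0, 3.0, 5.0, 5.0])
T = Q @ D @ Q.T

# T^2 - 8T + 15I = (T - 3I)(T - 5I), which kills both eigenspaces
assert np.allclose(T @ T - 8*T + 15*np.eye(4), 0)
```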
Hint: The Spectral Theorem could play a role here. Perhaps the best way to use it is to see that $V$ splits into a direct sum of eigenspaces and see what happens on each eigenspace.

Proof. By the spectral theorem, $T$ is diagonalizable, thus $V$ has a basis of eigenvectors. Equivalently, given that the only eigenvalues of $T$ are 3 and 5, $V = E(3,T) \oplus E(5,T)$. Let $v \in V$. Then $v = u + w$ where $u \in E(3,T)$, $w \in E(5,T)$. Thus
$$(T^2 - 8T + 15I)v = (T^2 - 8T + 15I)u + (T^2 - 8T + 15I)w = T^2 u - 8Tu + 15u + T^2 w - 8Tw + 15w$$
$$= 9u - 24u + 15u + 25w - 40w + 15w = 0.$$
This holding for all $v \in V$, we are done.

11. Let $V$ be a vector space over the field $F$ and let $T \in L(V)$. Let $W = \{v \in V : T^k v = 0 \text{ for some } k \in \mathbb{N}\}$.

(a) (5.5 points) Prove $W$ is a subspace of $V$.

(b) (5.5 points) Prove: If $W$ is finite dimensional and $m = \dim W$, then $W = \{v \in V : T^m v = 0\}$.

(c) (5.5 points) Prove: If $V$ is finite dimensional, then $V = N(T^m) \oplus R(T^m)$, where $m = \dim W$.

Hints: If $T^k v = 0$ but $T^{k-1} v \ne 0$, see that $v, Tv, \ldots, T^{k-1}v$ are linearly independent. You need to use part (b) to prove part (c). The Fundamental Theorem of Linear Mappings could also be invoked with profit.

(a) Proof. Obviously $0 = T0 \in W$. Let $v, w \in W$. There exist then $k, l \in \mathbb{N}$ such that $T^k v = 0$, $T^l w = 0$. If $m = \max(k, l)$ then $T^m v = T^m w = 0$, thus $T^m(v + w) = 0$ and $v + w \in W$. If $v \in W$ there is $k \in \mathbb{N}$ such that $T^k v = 0$; thus $T^k(cv) = cT^k v = 0$ for all $c \in F$. It follows that $W$ is a subspace.

(b) Proof. Let $m = \dim W$. If $W = \{0\}$ then there is nothing to prove. (Actually, the case $W = \{0\}$ requires one to allow $k \in \mathbb{N} \cup \{0\}$, but I'll gloss over this.) Assume $W \ne \{0\}$. Let $v \in W$, $v \ne 0$. There exists $k \in \mathbb{N}$ such that $T^k v = 0$; let $k$ be the smallest such positive integer, so that $T^{k-1} v \ne 0$.

Claim: $v, Tv, \ldots, T^{k-1}v$ are linearly independent. In fact, assume $\sum_{j=0}^{k-1} c_j T^j v = 0$ for some $c_0, \ldots, c_{k-1}$ in $F$ (where $T^0$ is interpreted as being the identity map $I$). Assuming not all $c_j$ equal 0, there is a first index, call it $l$, $0 \le l \le k-1$, such that $c_l \ne 0$. Thus
$$\sum_{j=l}^{k-1} c_j T^j v = 0.$$
Apply $T^{k-1-l}$ to both sides of this equation. If $j > l$ then $T^{k-1-l} T^j v = T^{k + (j-l-1)} v = 0$, since then $j - l - 1 \ge 0$. Thus what we get when applying $T^{k-1-l}$ is $c_l T^{k-1} v = 0$, a contradiction since $c_l \ne 0$ and $T^{k-1} v \ne 0$. This establishes the claim.

Since $v, Tv, \ldots, T^{k-1}v$ is a list of $k$ linearly independent vectors in $W$, it follows that $k \le m$, hence $T^m v = T^{m-k} T^k v = 0$.

(c) Proof. This is perhaps the most difficult exercise in this test. We see first that $N(T^m) \cap R(T^m) = \{0\}$; this is the difficult part. Let $v \in N(T^m) \cap R(T^m)$, so that $T^m v = 0$ and $v = T^m w$ for some $w \in V$.
Then $T^{2m} w = T^m v = 0$, hence $w \in W$, hence, by part (b), $0 = T^m w = v$. This proves $N(T^m) \cap R(T^m) = \{0\}$. If we now let $X = N(T^m) \oplus R(T^m)$, we have $\dim X = \dim R(T^m) + \dim N(T^m)$; on the other hand, by the fundamental theorem of linear mappings, $\dim R(T^m) + \dim N(T^m) = \dim V$. Thus $\dim X = \dim V$, proving $V = X = R(T^m) \oplus N(T^m)$.

12. If $M$ is an $n \times n$ matrix with entries in the field $F$, we denote by $p_M(\lambda)$ the characteristic polynomial of $M$; thus $p_M(\lambda) = \det(\lambda I - M)$.
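As a check on the sign convention $p_M(\lambda) = \det(\lambda I - M)$, here is a small numerical example (the matrix is made up purely for illustration):

```python
import numpy as np

# Hypothetical upper triangular example with eigenvalues 2 and 3,
# so p_M(lambda) = (lambda - 2)(lambda - 3)
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

for lam in [0.0, 1.0, 4.0, -2.0]:
    assert np.isclose(np.linalg.det(lam*np.eye(2) - M), (lam - 2)*(lam - 3))
```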
(a) (5.5 points) Let $A, B$ be similar matrices; that is, there is an invertible matrix $R$ such that $B = RAR^{-1}$. Prove: $p_A(\lambda) = p_B(\lambda)$ for all $\lambda \in F$.

(b) (5.5 points) Prove: If $A, B$ are $n \times n$ matrices and one of $A, B$ is invertible, then $p_{AB}(\lambda) = p_{BA}(\lambda)$ for all $\lambda \in F$.

(a) Proof. In this case
$$\lambda I - B = \lambda I - RAR^{-1} = R(\lambda I - A)R^{-1},$$
hence
$$p_B(\lambda) = \det(\lambda I - B) = \det(R(\lambda I - A)R^{-1}) = \det(R)\det(\lambda I - A)\det(R^{-1}) = \det(RR^{-1})\det(\lambda I - A) = \det(I)\det(\lambda I - A) = p_A(\lambda).$$

(b) Proof. Assume without loss of generality that $A$ is invertible. Then $AB = A(BA)A^{-1}$, so the result follows from part (a).

Remark: The same result is true for all matrices; i.e., if $A, B$ are $n \times n$ matrices then $p_{AB}(\lambda) = p_{BA}(\lambda)$ for all $\lambda \in F$. Over $\mathbb{R}$ or $\mathbb{C}$ this is a consequence of the exercise, the fact that invertible matrices are dense in the space of all matrices, and the continuity of matrix operations and of the determinant. Using this result, one can get a relatively simple proof of the following problem from this year's Putnam Exam: Let $A, B, M$ be $n \times n$ matrices with real entries and assume $AM = MB$. Then $\det(A - MX) = \det(B - XM)$ for all $n \times n$ real matrices $X$.
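The identity of part (b), and the remark that it persists for arbitrary (possibly singular) matrices, can be illustrated numerically (a numpy sketch with random matrices; an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))   # generically invertible
B = rng.standard_normal((n, n))

def p(M, lam):
    # Characteristic polynomial convention of Problem 12: det(lam*I - M)
    return np.linalg.det(lam*np.eye(n) - M)

# p_AB = p_BA when A is invertible (part (b))
for lam in [0.0, 1.0, -2.5, 7.0]:
    assert np.isclose(p(A @ B, lam), p(B @ A, lam))

# The remark: the identity also holds when A is singular
A[:, 0] = 0.0                     # force det(A) = 0
for lam in [0.0, 1.0, -2.5, 7.0]:
    assert np.isclose(p(A @ B, lam), p(B @ A, lam))
```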