Linear Algebra 2 Final Exam, December 7, 2015. SOLUTIONS

1. (5.5 points) Let $T : \mathbb{R}^2 \to \mathbb{R}^4$ be a linear mapping satisfying $T(1,1) = (-1, 0, 2, 3)$, $T(2,3) = (2, 3, 0, 0)$. Determine $T(x,y)$ for $(x,y) \in \mathbb{R}^2$.

We first find $a, b \in \mathbb{R}$ such that $(x,y) = a(1,1) + b(2,3)$. This means solving
$$a + 2b = x, \qquad a + 3b = y,$$
which solves to $a = 3x - 2y$, $b = y - x$. Thus
$$T(x,y) = aT(1,1) + bT(2,3) = (3x-2y)(-1,0,2,3) + (y-x)(2,3,0,0) = (-5x+4y,\ -3x+3y,\ 6x-4y,\ 9x-6y).$$
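(Not part of the exam solution: the following is a quick symbolic check of the formula just obtained, assuming SymPy is available.)

```python
# Symbolic check of Problem 1 (illustration only, not part of the exam solution).
# Solve (x, y) = a(1,1) + b(2,3) for a, b, then rebuild T(x, y) by linearity.
import sympy as sp

x, y, a, b = sp.symbols('x y a b')

sol = sp.solve([sp.Eq(a + 2*b, x), sp.Eq(a + 3*b, y)], [a, b])
print(sol)  # {a: 3*x - 2*y, b: -x + y}

T11 = sp.Matrix([-1, 0, 2, 3])   # T(1, 1)
T23 = sp.Matrix([2, 3, 0, 0])    # T(2, 3)
Txy = (sol[a] * T11 + sol[b] * T23).expand()
print(Txy.T)  # Matrix([[-5*x + 4*y, -3*x + 3*y, 6*x - 4*y, 9*x - 6*y]])
```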

2. (5.5 points) Suppose $V$ is a vector space over the field $\mathbb{F}$ and assume that $V$ is NOT finite dimensional. Show that there is a sequence of vectors $v_1, v_2, \ldots$ in $V$ such that $v_1, \ldots, v_m$ is linearly independent for each positive integer $m$. Hint: The sequence can be constructed by induction. The vector $v_1$ can be almost arbitrary; show that for every $n$, given $v_1, \ldots, v_n$, you can find $v_{n+1}$ to continue the process.

Proof. Since $V \ne \{0\}$, there is $v \in V$ such that $v \ne 0$. We set $v_1 = v$. Assume $v_1, \ldots, v_m$ have been found for some $m \ge 1$ so that $v_1, \ldots, v_m$ is linearly independent. This has been done for $m = 1$. Since $V$ is infinite dimensional, $V \ne \operatorname{span}(v_1, \ldots, v_m)$. Thus there is $v \in V \setminus \operatorname{span}(v_1, \ldots, v_m)$; set $v_{m+1} = v$. If a list of vectors is linearly dependent, then one of the vectors belongs to the span of the preceding ones; this is clearly not so for $v_1, \ldots, v_{m+1}$. It follows that $v_1, \ldots, v_{m+1}$ is linearly independent. The desired sequence has been inductively constructed.

3. (5.5 points) Let $V$ be a finite dimensional vector space and let $U$ be a subspace of $V$. Prove: If $\dim U = \dim V$, then $U = V$. Hint: Show that a basis of $U$ must span $V$.

Proof. Let $n = \dim V$. Assume $\dim U = \dim V = n$. Then $U$ has a basis of $n$ elements, say $u_1, \ldots, u_n$. These vectors are linearly independent, so they are $n$ linearly independent vectors in $V$. Since $\dim V = n$, they must also span $V$. Thus $V = \operatorname{span}(u_1, \ldots, u_n) \subseteq U$, so $U = V$.

4. (5.5 points) Let $V, W$ be finite dimensional vector spaces, let $T \in L(V, W)$ and let $v_1, \ldots, v_n$ be a basis of $V$. Prove that $T$ is injective (one-to-one) if and only if $Tv_1, \ldots, Tv_n$ are linearly independent in $W$.

Proof. (This was done in class.) Assume first $T$ is injective. If $\sum_{j=1}^n c_j Tv_j = 0$ for some $c_1, \ldots, c_n \in \mathbb{F}$, then
$$0 = \sum_{j=1}^n c_j Tv_j = T\Big(\sum_{j=1}^n c_j v_j\Big).$$
Since $T$ is injective, it follows that $\sum_{j=1}^n c_j v_j = 0$, hence $c_1 = c_2 = \cdots = c_n = 0$ since $v_1, \ldots, v_n$ are linearly independent, being a basis. This proves $Tv_1, \ldots, Tv_n$ are linearly independent. Conversely, assume $Tv_1, \ldots, Tv_n$ are linearly independent. Let $v \in V$ and assume $Tv = 0$. Since $v_1, \ldots, v_n$ is a basis of $V$ we can write $v = \sum_{j=1}^n c_j v_j$, where $c_1, \ldots, c_n \in \mathbb{F}$. Then $0 = Tv = \sum_{j=1}^n c_j Tv_j$; by the linear independence of $Tv_1, \ldots, Tv_n$ we conclude that $c_1 = \cdots = c_n = 0$, thus $v = 0$. Thus $N(T) = \{0\}$ and $T$ is injective.

5. (5.5 points) Let $V, W$ be vector spaces, $T \in L(V, W)$, and let $v_1, \ldots, v_n$ be vectors in $V$ such that $Tv_1, \ldots, Tv_n$ are linearly independent in $W$. Prove $v_1, \ldots, v_n$ are linearly independent in $V$.

Proof. Assume $c_1, \ldots, c_n$ are elements of the field $\mathbb{F}$ such that $\sum_{j=1}^n c_j v_j = 0$. Then
$$0 = T0 = T\Big(\sum_{j=1}^n c_j v_j\Big) = \sum_{j=1}^n c_j Tv_j.$$
Since $Tv_1, \ldots, Tv_n$ are linearly independent, we conclude that $c_1 = \cdots = c_n = 0$.

6. Let $V$ be a finite dimensional vector space.
(a) (5.5 points) Assume $\dim V$ is even. Prove: There exists $T \in L(V)$ such that $N(T) = R(T)$. (As done during the whole course, I use $N(T)$ for the null space of $T$, $R(T)$ for the range of $T$.)
(b) (5.5 points) Assume $\dim V$ is odd. Prove: There does NOT exist $T \in L(V)$ such that $N(T) = R(T)$.
Hint: What role can the parity of the dimension play? What can you do with an even basis that you can't do with an odd one? And does the Fundamental Theorem of Linear Mappings have anything to say here?

(a) Proof. Assume $\dim V = 2m$ is even. Let $v_1, \ldots, v_{2m}$ be a basis of $V$. Since linear operators are uniquely determined by their action on a basis, there exists $T \in L(V)$ such that $Tv_j = v_{m+j}$ for $j = 1, \ldots, m$ and $Tv_j = 0$ if $m+1 \le j \le 2m$. We claim $R(T) = \operatorname{span}(v_{m+1}, \ldots, v_{2m})$. In fact, let $v = \sum_{j=1}^{2m} c_j v_j \in V$. Then
$$Tv = \sum_{j=1}^{2m} c_j Tv_j = \sum_{j=1}^{m} c_j v_{m+j} \in \operatorname{span}(v_{m+1}, \ldots, v_{2m}),$$
proving that $R(T) \subseteq \operatorname{span}(v_{m+1}, \ldots, v_{2m})$. Conversely, let $v \in \operatorname{span}(v_{m+1}, \ldots, v_{2m})$. Then $v = \sum_{j=m+1}^{2m} c_j v_j$ for some $c_{m+1}, \ldots, c_{2m}$ in $\mathbb{F}$, so
$$v = \sum_{k=1}^{m} c_{m+k}\, v_{m+k} = \sum_{k=1}^{m} c_{m+k}\, Tv_k = T\Big(\sum_{k=1}^{m} c_{m+k} v_k\Big) \in R(T).$$
This establishes that $R(T) = \operatorname{span}(v_{m+1}, \ldots, v_{2m})$. Assuming now $v \in R(T)$, we can write $v = \sum_{j=m+1}^{2m} c_j v_j$ for some $c_{m+1}, \ldots, c_{2m}$ in $\mathbb{F}$; thus $Tv = \sum_{j=m+1}^{2m} c_j Tv_j = 0$, proving $R(T) \subseteq N(T)$. Conversely, if $Tv = 0$, writing $v = \sum_{j=1}^{2m} c_j v_j$ for $c_1, \ldots, c_{2m}$ in $\mathbb{F}$, we have
$$0 = T\Big(\sum_{j=1}^{2m} c_j v_j\Big) = \sum_{j=1}^{2m} c_j Tv_j = \sum_{j=1}^{m} c_j v_{m+j}.$$
Since $v_{m+1}, \ldots, v_{2m}$ are linearly independent, $c_1 = \cdots = c_m = 0$, so $v = \sum_{j=m+1}^{2m} c_j v_j \in \operatorname{span}(v_{m+1}, \ldots, v_{2m}) = R(T)$. Thus $N(T) = R(T)$. A concrete instance of this construction is checked in the sketch below.

(b) Proof. Assume $R(T) = N(T)$ and let $m = \dim R(T) = \dim N(T)$. By the Fundamental Theorem of Linear Mappings, $\dim V = \dim N(T) + \dim R(T) = 2m$, thus $\dim V$ must be even.
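(Not part of the exam solution: a concrete instance of the construction in 6(a), taking $V = \mathbb{R}^4$ with the standard basis $e_1, \ldots, e_4$, so $m = 2$; the check assumes SymPy is available.)

```python
# Illustration of Problem 6(a) with V = R^4, standard basis, m = 2:
# T e1 = e3, T e2 = e4, T e3 = T e4 = 0, so N(T) = R(T) = span(e3, e4).
import sympy as sp

T = sp.Matrix([
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
])

print([v.T for v in T.nullspace()])    # basis of N(T): e3 and e4
print([v.T for v in T.columnspace()])  # basis of R(T): e3 and e4
print(T * T == sp.zeros(4, 4))         # True: T^2 = 0, i.e. R(T) is contained in N(T)
```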

7. (5.5 points) Let $V$ be a vector space, not necessarily finite dimensional. Let $\varphi, \psi \in V' = L(V, \mathbb{F})$ and assume $N(\varphi) = N(\psi) \ne V$. Prove: There exists $c \in \mathbb{F}$, $c \ne 0$, such that $\psi = c\varphi$. Hint: Suppose $\varphi(u) = 1$. If $v \in V$ you might want to consider $v - \varphi(u)v$. Note and apology: This was a bad hint; it should have been: Consider $v - \varphi(v)u$. Obviously, it had to be wrong: $v - \varphi(u)v = v - v = 0$.

Proof. Since $N(\varphi) \ne V$, there exists $u \in V$ such that $\varphi(u) \ne 0$. Dividing $u$ by $\varphi(u)$ if necessary, we can assume $\varphi(u) = 1$. Let $v \in V$. Then $v - \varphi(v)u \in N(\varphi)$; in fact,
$$\varphi(v - \varphi(v)u) = \varphi(v) - \varphi(v)\varphi(u) = \varphi(v) - \varphi(v) = 0.$$
Since $N(\varphi) = N(\psi)$ it follows that
$$0 = \psi(v - \varphi(v)u) = \psi(v) - \varphi(v)\psi(u).$$
Thus $\psi(v) = c\varphi(v)$ for all $v \in V$, with $c = \psi(u)$. Notice that $\psi(u) \ne 0$ since $u \notin N(\varphi) = N(\psi)$.

8. Let $T : \mathbb{R}^3 \to \mathbb{R}^3$ be defined by $T(x, y, z) = (4x + 4z,\ -5x + 2y - 10z,\ -x)$.
(a) (5.5 points) Prove that $T$ has a single real eigenvalue. Hint: One approach is to evaluate the characteristic polynomial. Carefully!
(b) (5.5 points) If $\lambda_0$ is this eigenvalue, show that $E(\lambda_0, T) = \{v : Tv = \lambda_0 v\}$ has dimension 2. Hint: Maybe use row reduction.
(c) (5.5 points) Find a basis with respect to which the matrix of $T$ is upper triangular. Hint: If you did part (b) you already have two vectors for the basis. Any vector independent of these can complete it.

(a) The matrix of $T$ with respect to the standard basis of $\mathbb{R}^3$ is
$$A = \begin{pmatrix} 4 & 0 & 4 \\ -5 & 2 & -10 \\ -1 & 0 & 0 \end{pmatrix}.$$
We have
$$\det(\lambda I - T) = \det\begin{pmatrix} \lambda - 4 & 0 & -4 \\ 5 & \lambda - 2 & 10 \\ 1 & 0 & \lambda \end{pmatrix} = (\lambda - 4)\det\begin{pmatrix} \lambda - 2 & 10 \\ 0 & \lambda \end{pmatrix} - 4\det\begin{pmatrix} 5 & \lambda - 2 \\ 1 & 0 \end{pmatrix} = (\lambda - 4)(\lambda - 2)\lambda + 4(\lambda - 2) = (\lambda - 2)^3.$$
Since $\lambda$ is an eigenvalue of $T$ if and only if $\det(\lambda I - T) = 0$, it is clear that $2$ is the only eigenvalue of $T$.

(b) We have $(x, y, z) \in E(2, T)$ if and only if $T(x, y, z) = 2(x, y, z)$, which works out to
$$2x + 4z = 0, \qquad -5x - 10z = 0, \qquad -x - 2z = 0.$$
All three equations are essentially the same; $(x, y, z)$ is in the eigenspace if and only if $x = -2z$ (with $y$ arbitrary). The following is thus a basis of the eigenspace: $(-2, 0, 1),\ (0, 1, 0)$. Thus $\dim E(2, T) = 2$.

(c) Let $v_1 = (-2, 0, 1)$, $v_2 = (0, 1, 0)$ be the basis we found of $E(2, T)$. It is quite easy to see that the vector $(0, 0, 1)$ is independent of $v_1, v_2$. Perhaps the easiest way to prove this is to note that $T(0, 0, 1) = (4, -10, 0) \ne 2(0, 0, 1)$, so $(0, 0, 1) \notin E(2, T)$ and thus must be independent of $v_1, v_2$. Since $\dim \mathbb{R}^3 = 3$, setting $v_3 = (0, 0, 1)$ we have a basis of $\mathbb{R}^3$. With respect to this basis the matrix of $T$ must be upper triangular because $Tv_1 = 2v_1 \in \operatorname{span}(v_1)$, $Tv_2 = 2v_2 \in \operatorname{span}(v_1, v_2)$, and $Tv_3$ is some combination of $v_1, v_2, v_3$ (in fact $Tv_3 = -2v_1 - 10v_2 + 2v_3$), hence in the span of $v_1, v_2, v_3$.
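(Not part of the exam solution: a SymPy check of the characteristic polynomial, the eigenspace, and the triangularizing basis from Problem 8.)

```python
# Check of Problem 8: characteristic polynomial, E(2, T), triangular form.
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([
    [ 4, 0,   4],
    [-5, 2, -10],
    [-1, 0,   0],
])

print(sp.factor((lam * sp.eye(3) - A).det()))   # (lambda - 2)**3

E2 = (A - 2 * sp.eye(3)).nullspace()
print([v.T for v in E2])                        # 2 vectors, so dim E(2, T) = 2

# Columns of P are the basis v1 = (-2,0,1), v2 = (0,1,0), v3 = (0,0,1) from (c).
P = sp.Matrix([[-2, 0, 0], [0, 1, 0], [1, 0, 1]])
print(P.inv() * A * P)                          # upper triangular, 2's on the diagonal
```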

4 We have det(λi T ) = det = (λ 4) det λ 4 0 4 5 λ 2 10 1 0 λ ( λ 2 10 0 λ ) 4 det = (λ 4)((λ 2)λ + 4(λ 2) = (λ 2) 3. ( 5 λ 2 1 0 Since λ is an eigenvalue of T if and only if det(λi T ) = 0 it is clear that 2 is the only eigenvalue of T. (b) We have (x, y, z) E(2, T ) if and only if T (x, y, z) = 2(x, y, z), which works out to 2x + 4z = 0, 5x 10z = 0, x 2z = 0. All three equations are essentially the same; (x, y, z) is in the eigenspace if and only if x = 2z (arbitrary y). The following is thus a basis of the eigenspace ( 2, 0, 1), (0, 1, 0). Thus dim E(2, T ) = 2. (c) Let v 1 = ( 2, 0, 1), v 2 = (0, 1, 0) be the basis we found of E(2, T ). It is quite easy to see that the vector (0, 0, 1) is independent of v 1, v 2. Perhaps the easiest way to prove this is to see that T (0, 0, 1) = (4, 10, 0) 2(0, 0, 1), so (0, 0, 1) / E(2, T ) thus must be independent of v 1, v 2. Since dim R 3 = 3, setting v 3 = (0, 0, 1), we have a basis of R 3. With respect to this basis the matrix of T must be upper triangular because T v 1 = 2v 2 span(v 1 ), T v 2 = 2v 2 span(v 1, v 2 ), and T v 3 is some irrelevant combination of v 1, v 2, v 3, hence in the span of v 1, v 2, v 3. 9. (5.5 points) Let V = R n and let v 1,..., v n be n-vectors in R n. Let A be the matrix whose k-th row is the vector v k ; that is A = (x kj ) 1 k,j n, where v k = (v k1,..., v kn ). Prove: v 1,..., v n is an orthonormal basis of R n if and only if AA t = I. (Here A t is the transpose of A; R n is considered an inner product space with the usual inner product). Proof. If the rows of A are v 1,..., v n one sees that AA t = ( v i, v j ) 1 i,j n. Thus AA t = I if and only if v i, v j = δ ij for 0 i, j 1, where δ ij are the Kronecker deltas; i.e., if and only if v 1,..., v n is an orthonormal basis of R n. ) 10. (5.5 points) Let T be a self adjoint operator on a finite dimensional inner product space and assume that 3 and 5 are the only eigenvalues of T. Prove that T 2 8T + 15I = 0. Hint: The spectral Theorem could play a role here. Perhaps the best way to use it is to see V splits into a direct sum of eigenspaces and see what happens on each eigenspace. Proof. By the spectral theorem, T is diagonalizable, thus has a basis of eigenvectors. Equivalently, given that the only eigenvalues of T are 3 and 5, V = E(3, T ) E(5, T ). Let v V. Then v = u+w where u E(3, T ), w E(5, T ). Thus (T 2 8T + 15I)v = (T 2 8T + 15I)u + (T 2 8T + 15I)w = T 2 u 8T u + 15u + T 2 w 8T w + 15w = 9u 24u + 15u + 25w 40w + 15w = 0.

10. (5.5 points) Let $T$ be a self-adjoint operator on a finite dimensional inner product space $V$ and assume that 3 and 5 are the only eigenvalues of $T$. Prove that $T^2 - 8T + 15I = 0$. Hint: The Spectral Theorem could play a role here. Perhaps the best way to use it is to see that $V$ splits into a direct sum of eigenspaces and see what happens on each eigenspace.

Proof. By the spectral theorem, $T$ is diagonalizable, thus $V$ has a basis of eigenvectors of $T$. Equivalently, given that the only eigenvalues of $T$ are 3 and 5, $V = E(3, T) \oplus E(5, T)$. Let $v \in V$. Then $v = u + w$ where $u \in E(3, T)$, $w \in E(5, T)$. Thus
$$(T^2 - 8T + 15I)v = (T^2 - 8T + 15I)u + (T^2 - 8T + 15I)w = 9u - 24u + 15u + 25w - 40w + 15w = 0.$$
This holding for all $v \in V$, we are done.
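(Not part of the exam solution: a numerical sketch of Problem 10, building one particular self-adjoint operator, a real symmetric matrix whose only eigenvalues are 3 and 5, and verifying the identity with NumPy.)

```python
# Illustration of Problem 10: a symmetric matrix whose only eigenvalues are 3 and 5
# satisfies T^2 - 8T + 15I = 0, since (T - 3I)(T - 5I) vanishes on each eigenspace.
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # orthonormal eigenbasis (columns of Q)
D = np.diag([3.0, 3.0, 5.0, 5.0])                 # eigenvalues 3 and 5 only
T = Q @ D @ Q.T                                   # real symmetric, hence self-adjoint

print(np.allclose(T, T.T))                                            # True
print(np.allclose(T @ T - 8 * T + 15 * np.eye(4), np.zeros((4, 4))))  # True
```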

11. Let $V$ be a vector space over the field $\mathbb{F}$ and let $T \in L(V)$. Let $W = \{v \in V : T^k v = 0 \text{ for some } k \in \mathbb{N}\}$.
(a) (5.5 points) Prove $W$ is a subspace of $V$.
(b) (5.5 points) Prove: If $W$ is finite dimensional and $m = \dim W$, then $W = \{v \in V : T^m v = 0\}$.
(c) (5.5 points) Prove: If $V$ is finite dimensional, then $V = N(T^m) \oplus R(T^m)$, where $m = \dim W$.
Hints: If $T^k v = 0$ but $T^{k-1} v \ne 0$, see that $v, Tv, \ldots, T^{k-1}v$ are linearly independent. You need to use part (b) to prove part (c). The Fundamental Theorem of Linear Mappings could also be invoked with profit.

(a) Proof. Obviously $T0 = 0$, so $0 \in W$. Let $v, w \in W$. There exist then $k, l \in \mathbb{N}$ such that $T^k v = 0$, $T^l w = 0$. If $r = \max(k, l)$ then $T^r v = T^r w = 0$, thus $T^r(v + w) = 0$ and $v + w \in W$. If $v \in W$ there is $k \in \mathbb{N}$ such that $T^k v = 0$; thus $T^k(cv) = cT^k v = 0$ for all $c \in \mathbb{F}$. It follows that $W$ is a subspace.

(b) Proof. Let $m = \dim W$. If $W = \{0\}$ then there is nothing to prove. (Actually, the case $W = \{0\}$ requires one to allow $k \in \mathbb{N} \cup \{0\}$, interpreting $T^0 = I$, but I'll gloss over this.) Assume $W \ne \{0\}$. Let $v \in W$, $v \ne 0$. There exists $k \in \mathbb{N}$ such that $T^k v = 0$; let $k$ be the smallest such positive integer, so that $T^{k-1} v \ne 0$. Claim: $v, Tv, \ldots, T^{k-1}v$ are linearly independent. In fact, assume $\sum_{j=0}^{k-1} c_j T^j v = 0$ for some $c_0, \ldots, c_{k-1}$ in $\mathbb{F}$ (where $T^0$ is interpreted as being the identity map $I$). Assuming not all $c_j$ equal 0, there is a first index, call it $l$, $0 \le l \le k-1$, such that $c_l \ne 0$. Thus
$$\sum_{j=l}^{k-1} c_j T^j v = 0.$$
Apply $T^{k-1-l}$ to both sides of this equation. If $j > l$ then $T^{k-1-l} T^j v = T^{k + (j - l - 1)} v = 0$ since then $j - l - 1 \ge 0$. Thus what we get when applying $T^{k-1-l}$ is $c_l T^{k-1} v = 0$, a contradiction since $c_l \ne 0$ and $T^{k-1} v \ne 0$. This establishes the claim. The vectors $v, Tv, \ldots, T^{k-1}v$ all lie in $W$, so we have a list of $k$ linearly independent vectors in $W$; it follows that $m \ge k$, hence $T^m v = T^{m-k}(T^k v) = 0$. Thus $W \subseteq \{v \in V : T^m v = 0\}$; the opposite inclusion is immediate from the definition of $W$.

(c) Proof. This is perhaps the most difficult exercise in this test. We see first that $N(T^m) \cap R(T^m) = \{0\}$. This is the difficult part. Let $v \in N(T^m) \cap R(T^m)$, so that $T^m v = 0$ and $v = T^m w$ for some $w \in V$. Then $T^{2m} w = T^m v = 0$, hence $w \in W$, hence, by part (b), $0 = T^m w = v$. This proves $N(T^m) \cap R(T^m) = \{0\}$. If we now let $X = N(T^m) \oplus R(T^m)$, we have that $\dim X = \dim R(T^m) + \dim N(T^m)$; on the other hand, by the Fundamental Theorem of Linear Mappings, $\dim R(T^m) + \dim N(T^m) = \dim V$. Thus $\dim X = \dim V$, proving (by Problem 3) that $V = X = N(T^m) \oplus R(T^m)$.

12. If $M$ is an $n \times n$ matrix with entries in the field $\mathbb{F}$, we denote by $p_M(\lambda)$ the characteristic polynomial of $M$; thus $p_M(\lambda) = \det(\lambda I - M)$.
(a) (5.5 points) Let $A, B$ be similar matrices; that is, there is an invertible matrix $R$ such that $B = RAR^{-1}$. Prove: $p_A(\lambda) = p_B(\lambda)$ for all $\lambda \in \mathbb{F}$.
(b) (5.5 points) Prove: If $A, B$ are $n \times n$ matrices and one of $A, B$ is invertible, then $p_{AB}(\lambda) = p_{BA}(\lambda)$ for all $\lambda \in \mathbb{F}$.

(a) Proof. In this case,
$$\lambda I - B = \lambda I - RAR^{-1} = R(\lambda I - A)R^{-1},$$
hence
$$p_B(\lambda) = \det(\lambda I - B) = \det(R(\lambda I - A)R^{-1}) = \det(R)\det(\lambda I - A)\det(R^{-1}) = \det(RR^{-1})\det(\lambda I - A) = \det(I)\det(\lambda I - A) = p_A(\lambda).$$

(b) Proof. Assume without loss of generality that $A$ is invertible. Then $AB = A(BA)A^{-1}$, so $AB$ and $BA$ are similar and the result follows from part (a).

The same result is true for all matrices; i.e., if $A, B$ are $n \times n$ matrices then $p_{AB}(\lambda) = p_{BA}(\lambda)$ for all $\lambda \in \mathbb{F}$. Over $\mathbb{R}$ or $\mathbb{C}$ this is a consequence of the exercise, the fact that invertible matrices are dense in the space of all matrices, and the continuity of matrix operations and of the determinant. Using this result, one can get a relatively simple proof of the following problem from this year's Putnam Exam: Let $A, B, M$ be $n \times n$ matrices with real entries, assume $AM = MB$, and assume $A$ and $B$ have the same characteristic polynomial. Then $\det(A - MX) = \det(B - XM)$ for all $n \times n$ real matrices $X$.
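(Not part of the exam solution: a SymPy spot-check of Problem 12(b) and of the remark following it; the two singular matrices below are my choice for the illustration.)

```python
# Spot-check of Problem 12(b) and the remark after it: p_AB = p_BA even when
# neither A nor B is invertible (both matrices below have determinant 0).
import sympy as sp

lam = sp.symbols('lambda')

def char_poly(M):
    """Characteristic polynomial det(lambda*I - M), expanded."""
    return sp.expand((lam * sp.eye(M.rows) - M).det())

A = sp.Matrix([[1, 2, 3], [2, 4, 6], [0, 1, 1]])   # det A = 0
B = sp.Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])   # det B = 0

print(char_poly(A * B))                        # lambda**3 - 3*lambda**2 + 2*lambda
print(char_poly(B * A))                        # the same polynomial
print(char_poly(A * B) == char_poly(B * A))    # True
```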