MATH 110: LINEAR ALGEBRA PRACTICE FINAL SOLUTIONS


Question 1.

(1) Write $f(x) = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_0$. Then $F = f(T) = a_m T^m + a_{m-1} T^{m-1} + \cdots + a_0 I$. Since $T$ is upper triangular, $(T^k)_{ii} = T_{ii}^k$ for all positive $k$ and $i \in \{1,\dots,n\}$. Hence
$$F_{ii} = a_m T_{ii}^m + a_{m-1} T_{ii}^{m-1} + \cdots + a_0 = f(T_{ii}).$$

(2) $TF = Tf(T) = T(a_m T^m + a_{m-1}T^{m-1} + \cdots + a_0 I) = a_m T^{m+1} + a_{m-1}T^m + \cdots + a_0 T = (a_m T^m + a_{m-1}T^{m-1} + \cdots + a_0 I)T = f(T)T = FT.$

(3) Since $T$ is upper triangular, so is each power of $T$, and hence so is $F = f(T)$. Therefore $T_{ij} = 0$ and $F_{ij} = 0$ whenever $i > j$. Hence
$$(FT)_{i,i+1} = \sum_{k=1}^{n} F_{ik} T_{k,i+1} = \sum_{k=i}^{i+1} F_{ik} T_{k,i+1} = F_{ii} T_{i,i+1} + F_{i,i+1} T_{i+1,i+1}$$
and
$$(TF)_{i,i+1} = \sum_{k=1}^{n} T_{ik} F_{k,i+1} = \sum_{k=i}^{i+1} T_{ik} F_{k,i+1} = T_{ii} F_{i,i+1} + T_{i,i+1} F_{i+1,i+1}.$$
Then $(FT)_{i,i+1} = (TF)_{i,i+1}$ implies that $F_{i,i+1}(T_{ii} - T_{i+1,i+1}) = F_{ii} T_{i,i+1} - T_{i,i+1} F_{i+1,i+1}$. But, by hypothesis, $T_{ii} - T_{i+1,i+1} \neq 0$, so
$$F_{i,i+1} = (F_{ii} T_{i,i+1} - T_{i,i+1} F_{i+1,i+1})/(T_{ii} - T_{i+1,i+1}).$$

(4) As before, $T_{ij} = 0$ and $F_{ij} = 0$ whenever $i > j$. So we have
$$(FT)_{i,i+k} = \sum_{j=1}^{n} F_{ij} T_{j,i+k} = \sum_{j=i}^{i+k} F_{ij} T_{j,i+k} = F_{i,i+k} T_{i+k,i+k} + \sum_{j=i}^{i+k-1} F_{ij} T_{j,i+k}$$
and
$$(TF)_{i,i+k} = \sum_{j=1}^{n} T_{ij} F_{j,i+k} = \sum_{j=i}^{i+k} T_{ij} F_{j,i+k} = T_{ii} F_{i,i+k} + \sum_{j=i+1}^{i+k} T_{ij} F_{j,i+k}.$$
Equating the two expressions, we obtain
$$F_{i,i+k}(T_{ii} - T_{i+k,i+k}) = \sum_{j=i}^{i+k-1} F_{ij} T_{j,i+k} - \sum_{j=i+1}^{i+k} T_{ij} F_{j,i+k}.$$
Again, by hypothesis, $T_{ii} - T_{i+k,i+k} \neq 0$, so
$$F_{i,i+k} = \Big(\sum_{j=i}^{i+k-1} F_{ij} T_{j,i+k} - \sum_{j=i+1}^{i+k} T_{ij} F_{j,i+k}\Big)\Big/(T_{ii} - T_{i+k,i+k}).$$
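The identities in parts (1)–(3) are easy to sanity-check numerically before moving on. Below is a small sketch in Python (my own check, not part of the original solutions), evaluating a polynomial at an upper-triangular matrix with distinct diagonal entries of my own choosing, using exact rational arithmetic:

```python
from fractions import Fraction as Fr

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def poly_of_matrix(coeffs, T):
    # Horner's rule: f(T) = (...(a_m*T + a_{m-1}*I)*T + ...)*T + a_0*I
    n = len(T)
    F = [[coeffs[0] * (i == j) for j in range(n)] for i in range(n)]
    for a in coeffs[1:]:
        F = matmul(F, T)
        for i in range(n):
            F[i][i] += a
    return F

# f(x) = 2x^3 - x + 5, and an upper-triangular T with distinct diagonal
coeffs = [Fr(2), Fr(0), Fr(-1), Fr(5)]          # highest degree first
f = lambda x: ((coeffs[0]*x + coeffs[1])*x + coeffs[2])*x + coeffs[3]
T = [[Fr(1), Fr(4), Fr(-2)],
     [Fr(0), Fr(2), Fr(3)],
     [Fr(0), Fr(0), Fr(-1)]]
F = poly_of_matrix(coeffs, T)

# (1) the diagonal of f(T) is f applied to the diagonal of T
assert all(F[i][i] == f(T[i][i]) for i in range(3))
# (2) T commutes with F = f(T)
assert matmul(T, F) == matmul(F, T)
# (3) the superdiagonal recursion, valid since T[i][i] != T[i+1][i+1]
for i in range(2):
    num = F[i][i] * T[i][i+1] - T[i][i+1] * F[i+1][i+1]
    assert F[i][i+1] == num / (T[i][i] - T[i+1][i+1])
print("parts (1)-(3) verified on this example")
```

Horner's rule keeps the evaluation to $m$ matrix multiplications, and the final loop is exactly the recursion derived in part (3).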

(5) The above calculations work equally well for power series that converge at all eigenvalues of $T$, in place of polynomials. So, noting that $\cos(x) = \sum_{n=0}^{\infty} (-1)^n x^{2n}/(2n)!$, we can apply the above considerations. Let us set
$$T = \begin{pmatrix} \pi/4 & 7 \\ 0 & -\pi/4 \end{pmatrix}$$
and $F = \cos(T)$. By the above work, $F_{11} = \cos(\pi/4) = \sqrt{2}/2$, $F_{22} = \cos(-\pi/4) = \sqrt{2}/2$, $F_{21} = 0$, and
$$F_{12} = (F_{11}T_{12} - T_{12}F_{22})/(T_{11} - T_{22}) = \frac{7\cdot\frac{\sqrt{2}}{2} - 7\cdot\frac{\sqrt{2}}{2}}{\frac{\pi}{4} + \frac{\pi}{4}} = 0.$$
So
$$F = \begin{pmatrix} \sqrt{2}/2 & 0 \\ 0 & \sqrt{2}/2 \end{pmatrix}.$$

Question 2. The first two parts of this problem only ask you to show that $AB$ and $BA$ have (almost) the same eigenvalues, but don't actually demand you show their characteristic polynomials (almost) agree. Showing the latter condition is stronger than the former, and the hints given actually lead to this stronger result. However, there is a simpler method for the weaker condition, and it is included after the full solution.

(1) Without loss of generality, we may assume that $A$ is nonsingular and hence invertible. So $A^{-1}(AB)A = BA$. Since $AB$ and $BA$ are similar, they have the same eigenvalues.

(2) Let us compute the characteristic polynomials of $AB$ and $BA$. We'll be working in the field $F(x)$, consisting of fractions of polynomials in $x$ with coefficients in $F$. Over $F(x)$ we have the block factorizations
$$\begin{bmatrix} I_n & A \\ 0 & I_n \end{bmatrix}\begin{bmatrix} xI_n - AB & 0 \\ B & I_n \end{bmatrix} = \begin{bmatrix} xI_n & A \\ B & I_n \end{bmatrix} = \begin{bmatrix} I_n & 0 \\ x^{-1}B & I_n \end{bmatrix}\begin{bmatrix} xI_n & A \\ 0 & I_n - x^{-1}BA \end{bmatrix}.$$
Taking determinants of both sides, and noting that the two triangular factors with identity diagonal blocks have determinant 1,
$$\det(xI_n - AB) = \det(xI_n)\det(I_n - x^{-1}BA) = x^n\cdot x^{-n}\det(xI_n - BA) = \det(xI_n - BA).$$
Hence $AB$ and $BA$ have the same characteristic polynomial.

(3) Again, we'll be working over $F(x)$.

The same factorizations as in (2), now with $A$ an $m \times n$ matrix and $B$ an $n \times m$ matrix, read
$$\begin{bmatrix} I_m & A \\ 0 & I_n \end{bmatrix}\begin{bmatrix} xI_m - AB & 0 \\ B & I_n \end{bmatrix} = \begin{bmatrix} xI_m & A \\ B & I_n \end{bmatrix} = \begin{bmatrix} I_m & 0 \\ x^{-1}B & I_n \end{bmatrix}\begin{bmatrix} xI_m & A \\ 0 & I_n - x^{-1}BA \end{bmatrix}.$$
Taking determinants,
$$\det(xI_m - AB) = \det(xI_m)\det(I_n - x^{-1}BA) = x^m \cdot x^{-n}\det(xI_n - BA) = x^{m-n}\det(xI_n - BA).$$
Hence $\det(xI_m - AB) = x^{m-n}\det(xI_n - BA)$.

Here is a simpler method which just demonstrates that $AB$ and $BA$ have the same eigenvalues, except possibly for $0$ in the $m \neq n$ case. Suppose $(\lambda, v)$ is an eigenpair for $AB$. Then $BA(Bv) = B(ABv) = B\lambda v = \lambda Bv$. As long as $Bv \neq 0$, we have that $(\lambda, Bv)$ is an eigenpair for $BA$. But if $\lambda \neq 0$, then $ABv = \lambda v \neq 0$, so $Bv \neq 0$. This shows that, without any restrictions on $m$ and $n$, $AB$ and $BA$ have the same non-zero eigenvalues (the eigenvalues go back from $BA$ to $AB$ by symmetry). Now suppose $m = n$ and $0$ is an eigenvalue for $AB$. Then either $A$ or $B$ is singular, so $0$ is also an eigenvalue for $BA$. Now suppose $m > n$. Then as $B$ is $n \times m$, $B$ has non-trivial nullspace, so $0$ must be an eigenvalue for $AB$. But $0$ may or may not be an eigenvalue for $BA$; you may want to think of an example where it is not.

Question 3. (1) Let $a_i$ denote the $i$th column of $A$ and $q_i$ the $i$th column of $Q$. Using the algorithm for computing QR decompositions given in class, we obtain $R_{11} = \|a_1\| = 2$ and $q_1 = a_1/R_{11} = \frac{1}{2}(1,1,1,1)^t$. Also, $R_{12} = \langle a_2, q_1 \rangle = 4$, $R_{13} = \langle a_3, q_1 \rangle = 6$, $R_{22} = \|a_2 - R_{12}q_1\| = \|(3,-3,3,-3)^t\| = 6$, and $q_2 = (3,-3,3,-3)^t/R_{22} = \frac{1}{2}(1,-1,1,-1)^t$. Continuing in this fashion, one obtains
$$Q = \begin{bmatrix} q_1 & q_2 & q_3 \end{bmatrix} \qquad\text{and}\qquad R = \begin{bmatrix} 2 & 4 & 6 \\ 0 & 6 & 4 \\ 0 & 0 & 6 \end{bmatrix},$$
where $q_3$, computed in the same way, likewise has all entries $\pm\frac{1}{2}$.

(2) By the computation done in class, $x = R^{-1}Q^t y = (26, 23, 21)^t$.

Question 4. (1) Multiplying $X$ by $P$ on the left interchanges rows $i$ and $n+1-i$ of $X$ (for each $i \in \{1,2,\dots,n\}$), so $PX = (X_{n+1-i,j})$. Multiplying $PX$ by $P$ on the right interchanges columns $j$ and $n+1-j$ of $PX$ (for each $j \in \{1,2,\dots,n\}$), so $PXP = (X_{n+1-i,n+1-j})$.

(2) Let $J$ be a Jordan block with eigenvalue $\lambda$. For each $i \in \{1,2,\dots,n\}$, $J_{ii} = \lambda$, and hence $\lambda = J_{n+1-i,n+1-i} = (PJP)_{ii}$, by the above formula. Similarly, each $J_{i,i+1} = 1$, so $1 = J_{n+1-i,n+2-i} = (PJP)_{i,i-1}$. It is also easy to see that since $J_{ij} = 0$ whenever $j \notin \{i, i+1\}$, we have $(PJP)_{ij} = 0$ whenever $j \notin \{i, i-1\}$. Hence $PJP^{-1} = PJP = J^t$.
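Before passing to general Jordan matrices, the single-block identity $PJP = J^t$ can be checked directly. Here is a quick Python sketch (my own verification, not part of the original solutions), with a block size and eigenvalue chosen arbitrarily:

```python
def reversal(n):
    # the permutation matrix P with 1s on the anti-diagonal
    return [[1 if j == n - 1 - i else 0 for j in range(n)] for i in range(n)]

def jordan_block(lam, n):
    # lam on the diagonal, 1s on the superdiagonal
    return [[lam if i == j else 1 if j == i + 1 else 0
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

n, lam = 4, 5
P = reversal(n)
J = jordan_block(lam, n)

# P is its own inverse, and conjugating J by P transposes it
assert matmul(P, P) == [[int(i == j) for j in range(n)] for i in range(n)]
assert matmul(matmul(P, J), P) == transpose(J)
print("PJP == J^t verified")
```

The same check succeeds for any block size, since reversing rows and columns carries the superdiagonal onto the subdiagonal.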

Now consider a matrix in Jordan canonical form,
$$J = \begin{bmatrix} J_1 & 0 & \cdots & 0 \\ 0 & J_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & J_k \end{bmatrix},$$
where the $J_i$ are Jordan blocks. Also, let
$$P' = \begin{bmatrix} P_1 & 0 & \cdots & 0 \\ 0 & P_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & P_k \end{bmatrix},$$
where for each $i$, $P_i$ has the same size as $J_i$, and each $P_i$ is of the form of the $P$ in (1). Then
$$P'JP' = \begin{bmatrix} P_1 J_1 P_1 & 0 & \cdots & 0 \\ 0 & P_2 J_2 P_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & P_k J_k P_k \end{bmatrix} = \begin{bmatrix} J_1^t & 0 & \cdots & 0 \\ 0 & J_2^t & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & J_k^t \end{bmatrix} = J^t.$$

Finally, let $A$ be an $n \times n$ complex matrix, and suppose that $A = SJS^{-1}$ is its Jordan decomposition. Then $A^t = (S^{-1})^t J^t S^t = (S^t)^{-1} J^t S^t$. But, by the above, $J^t$ is similar to $J$, and hence $A^t$ is similar to $J$. By the uniqueness of the Jordan canonical form, $J$ is the Jordan canonical form of $A^t$.

Question 5. Let $D$ be the diagonal matrix with $D_{ii} = R_{ii}$. Also, let $L = R^* D^{-1}$ ($D$ is invertible, since $A$, and hence $R$, is invertible), and let $U = DR$. Then
$$LU = R^* D^{-1} D R = R^* R = R^* (Q^* Q) R = (QR)^*(QR) = A^* A,$$
since $Q$ is unitary. Here $L$ is lower triangular and $U$ is upper triangular, as required.

Question 6. First note that because $\{u_1, u_2, v_1, v_2, v_3\}$ is a basis for $F^5$, these vectors must be distinct, and the sets $\{u_1, u_2\}$ and $\{v_1, v_2, v_3\}$ are independent. Now clearly $u_i \in E_c$ for each $i$, so $\operatorname{span}(u_1, u_2) \subseteq E_c$, and $\operatorname{span}(u_1, u_2)$ is 2-dimensional. So $\dim(E_c) \geq 2$. Likewise, $\operatorname{span}(v_1, v_2, v_3)$ is a 3-dimensional subspace of $E_d$. Now we have
$$5 = \dim(F^5) \geq \dim(E_c + E_d) = \dim(E_c) + \dim(E_d) - \dim(E_c \cap E_d).$$
But as $c \neq d$, $E_c \cap E_d = \{0\}$, so the last term is $0$. Thus $\dim(E_c) = 2$ and $\dim(E_d) = 3$, so $\operatorname{span}(u_1, u_2) = E_c$, and likewise for $d$.

Alternatively, as $A$'s characteristic polynomial has degree 5, and $\dim(E_\lambda) \leq \operatorname{mult}(\lambda)$, the multiplicity of $\lambda$, it is easy to see that the multiplicities of $c$ and $d$ are 2 and 3 respectively. Thus $E_c$ cannot have dimension $> 2$, so it must coincide with $\operatorname{span}(u_1, u_2)$, and likewise for $d$.

Question 7. In the following I'll also define $u_j = v_j$ for $j > r$, for simplicity of notation. The given basis consists of eigenvectors for $A$, and is orthonormal. Therefore $A$ is normal, so $A^* A = A A^*$. Let $A = QDQ^*$ be the decomposition of $A$ given by this basis (so the columns of $Q$ are, in order, $u_1, \dots, u_n$, and $D = Q^* A Q$). Then $D = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, where $\lambda_i = c$ for $i \leq r$ and $\lambda_i = d$ for $i > r$. For $Q$ is the change-of-coordinates matrix from the given eigenbasis to the standard basis. Or just consider
$$D_{ij} = e_i^t D e_j = e_i^* Q^* A Q e_j = (Qe_i)^* A (Qe_j) = u_i^* A u_j = u_i^* \lambda_j u_j = \lambda_j \delta_{ij}.$$

Here I have used that the $u_i$'s form an orthonormal set.

(7.1) Using $A = QDQ^*$, we get $A^* u_i = QD^* Q^* u_i = QD^* e_i = Q\bar{\lambda}_i e_i = \bar{\lambda}_i Qe_i = \bar{\lambda}_i u_i$. So $A^* u_i = \bar{c}\,u_i$ for $i \leq r$ and $A^* u_i = \bar{d}\,u_i$ for $i > r$.

(7.2) As $\beta = \{u_1, \dots, u_n\}$ forms a basis,
$$CS(A) = \operatorname{span}(L_A(\beta)) = \operatorname{span}(Au_1, \dots, Au_n) = \operatorname{span}(cu_1, \dots, cu_r, du_{r+1}, \dots, du_n).$$
Likewise,
$$CS(A^*) = \operatorname{span}(L_{A^*}(\beta)) = \operatorname{span}(\bar{c}u_1, \dots, \bar{c}u_r, \bar{d}u_{r+1}, \dots, \bar{d}u_n).$$
If $c \neq 0$ and $d \neq 0$, it is clear that these spans coincide with $\operatorname{span}(u_1, \dots, u_n) = \mathbb{C}^n$. If $c \neq 0 = d$, then both spans coincide with $\operatorname{span}(u_1, \dots, u_r)$. Generalizing the result of problem 6, this is $E_c$. If $c = 0 \neq d$, we get $E_d$ instead. We can't have $c = d$, as we assumed they were distinct. So we are done.

Note that having only 2 distinct eigenvalues wasn't important here; the same arguments generalize to give similar results (such as $CS(A) = CS(A^*)$ for $A$ normal) for $A$ having more eigenvalues.

Question 8. (8.1) True. $A$ is skew-symmetric ($A^t = -A$) and real, so $A^* A = A^t A = -A^2 = A A^t = A A^*$, so $A$ is normal. This is equivalent to $A$ being diagonalizable by a unitary matrix. (Note we are talking about complex matrices here, as the question was about unitary matrices. There actually isn't a factorization $A = QDQ^t$ with $Q$ real orthogonal and $D$ real diagonal, since a nonzero real skew-symmetric matrix has non-real eigenvalues.)

(8.2) False. $A$ being unitarily diagonalizable is also equivalent to the existence of an orthonormal basis of eigenvectors for $A$. But $A$ is upper triangular with eigenvalues 1, 2, 3. Each of the eigenspaces must have dimension 1, and computing them, one easily sees that if $v_1 \in E_1$ and $v_2 \in E_2$, both non-zero, then $v_1$ and $v_2$ are not orthogonal.

(8.3) False. The matrices
$$\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}$$
do not commute. Just try it.

(8.4) True. Such matrices may be simultaneously diagonalized, so they commute, by a homework problem. To see this, let $J = QDQ^{-1}$. $Q$'s columns are eigenvectors for $J$, so they must also be eigenvectors for $K$ by assumption. Therefore $K = QCQ^{-1}$ for some diagonal matrix $C$ (with the corresponding eigenvalues on its diagonal). Therefore
$$JK = QDQ^{-1}QCQ^{-1} = QDCQ^{-1} = QCDQ^{-1} = KJ,$$
since diagonal matrices commute.

(8.5) False. This requires the first column of $Q^{-1}$ to be an eigenvector for $A$: in the basis $\beta$ given by $Q^{-1}$'s columns, $[L_A]_\beta = T$, and the first standard basis vector is an eigenvector for an upper triangular matrix. Alternatively, letting $Q^{-1}$'s first column be $q$, we have $q \neq 0$ and $AQ^{-1} = Q^{-1}T$, so
$$Aq = A(Q^{-1}e_1) = Q^{-1}Te_1 = Q^{-1}T_{11}e_1 = T_{11}Q^{-1}e_1 = T_{11}q.$$

So $A$ must have a real eigenvector. But this is false for the $90^\circ$ rotation matrix
$$\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}.$$

(8.6) False. Let $u = (1,1,1,1,1)^t$. Note that $u^t M = u^t$ because $M$ is a probability matrix. Now suppose $Mx = y$. Then $8 = u^t y = u^t M x = u^t x = 7$, a contradiction.

Question 9. Suppose all of $f$'s roots are distinct, and are $\lambda_1, \dots, \lambda_n$. Let $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$. I will show that any complex matrix whose characteristic polynomial is $f$ is similar to $\Lambda$. This is sufficient, because if $M$ and $N$ are both similar to $\Lambda$, then they are similar to one another. So let $M$ be a complex matrix whose characteristic polynomial is $f$. Then $M$ is diagonalizable, as it has $n$ distinct eigenvalues. Thus $Q^{-1}MQ = \Theta$, where $\Theta = \operatorname{diag}(\theta_1, \dots, \theta_n)$. But the $\theta_i$'s are the roots of $f$, as are the $\lambda_i$'s. So $\Theta$ and $\Lambda$ have the same diagonal entries, but in possibly different orders. Thus they are similar through a permutation matrix $P$, and $M$ was similar to $\Theta$, so $M$ is similar to $\Lambda$ also.

If you're interested, I'll now explicitly construct $P$: a permutation matrix such that $P\Theta P^t = \Lambda$. For each $i$, there is a unique $j$ such that $\lambda_i = \theta_j$, as $M$'s eigenvalues are $f$'s roots. Let $\pi: \{1,\dots,n\} \to \{1,\dots,n\}$ be the permutation (bijective function) that sends $i$ to $j$ (so $\lambda_i = \theta_{\pi(i)}$). We want the $i$th row of $P$ to extract $\lambda_i$ from $\Theta$, which is in the $\pi(i)$th position in $\Theta$, so set $P_{ij} = \delta_{\pi(i),j}$. Because $\pi$ is a permutation, it is easy to check that $P$ is a permutation matrix. Let $P_i$ be the $i$th row of $P$. As $PP^t = I$, $P_i P_j^t = \delta_{ij}$. We have
$$(P\Theta P^t)_{ij} = P_i \Theta P_j^t = \theta_{\pi(i)} P_i P_j^t = \theta_{\pi(i)}\delta_{ij} = \lambda_i \delta_{ij} = \Lambda_{ij}.$$
Therefore $P\Theta P^t = \Lambda$, as required.

Now we do the other direction. Suppose $f$'s roots, counted according to multiplicity, are $\lambda_1, \dots, \lambda_n$. By reordering if needed, suppose that $r > 1$ and that $\lambda_1 = \lambda_i$ iff $i \leq r$ (so the multiplicity of $\lambda = \lambda_1$ is $r > 1$). Let $D = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, let $J = J(\lambda_1, r)$ be the $r \times r$ Jordan block with eigenvalue $\lambda_1$, and let $C = \operatorname{diag}(J, \lambda_{r+1}, \dots, \lambda_n)$. Then both $D$ and $C$ have characteristic polynomial $f$, but they are not similar. This follows from the uniqueness of the Jordan canonical form: two Jordan matrices are similar iff for each $i$ and $\lambda$ they have the same number of $i \times i$ Jordan blocks with eigenvalue $\lambda$ (though the blocks may appear in different orders). $D$ and $C$ do not satisfy this, so they are not similar.

We may also see it directly, as follows. Writing $C$ as a $2 \times 2$ block matrix whose first block is $r \times r$,
$$v \in E_\lambda^C \iff (C - \lambda I)v = 0 \iff \begin{bmatrix} J(0,r) & 0 \\ 0 & C_{22} - \lambda I \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = 0.$$
This is equivalent to requiring $J(0,r)v_1 = 0$ and $(C_{22} - \lambda I)v_2 = 0$. But $C_{22}$ has diagonal entries distinct from $\lambda$ (by choice of $r$), so $C_{22} - \lambda I$ is upper triangular and invertible. So we must have $v_2 = 0$. Clearly the rank of $J(0,r)$ is $r - 1$, and its nullity is 1, which means $N(J(0,r))$ and $E_\lambda^C$ have dimension 1. But we assumed $1 < r$, so $\dim(E_\lambda^C) < \operatorname{mult}(\lambda)$, so $C$ is not diagonalizable. As $D$ is in fact diagonal, they cannot be similar.
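The pair $D$, $C$ above can be checked on a concrete instance. The Python sketch below (my own illustration, taking $\lambda_1 = 2$ with multiplicity $r = 2$ and a third root $3$) confirms that the two matrices share a characteristic polynomial (both are triangular, so it is determined by the diagonal) yet have different eigenspace dimensions at the repeated root, so they cannot be similar:

```python
from fractions import Fraction as Fr

def rank(M):
    # Gaussian elimination over the rationals
    A = [[Fr(x) for x in row] for row in M]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# Same characteristic polynomial (x-2)^2 (x-3), different Jordan structure
D = [[2, 0, 0], [0, 2, 0], [0, 0, 3]]
C = [[2, 1, 0], [0, 2, 0], [0, 0, 3]]   # contains the 2x2 block J(2, 2)

# Both are triangular, so their characteristic polynomials agree iff
# their diagonals agree as multisets.
assert sorted(D[i][i] for i in range(3)) == sorted(C[i][i] for i in range(3))

def eigenspace_dim(M, lam):
    n = len(M)
    shifted = [[M[i][j] - lam * (i == j) for j in range(n)] for i in range(n)]
    return n - rank(shifted)   # nullity of M - lam*I

# Similar matrices have equal eigenspace dimensions; these differ at 2.
assert eigenspace_dim(D, 2) == 2
assert eigenspace_dim(C, 2) == 1
print("same characteristic polynomial, but D and C are not similar")
```

The eigenspace-dimension comparison is exactly the direct argument given above: $\dim N(C - 2I) = 1 < 2 = \operatorname{mult}(2)$, so $C$ is not diagonalizable while $D$ is.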

Question 10. Note that if $C$ is an $n \times n$ matrix with columns $c_i$, then $e_i^t C e_j = e_i^t c_j = C_{ij}$, and $e_i^* = e_i^t$ ($e_i$ is the standard basis vector, which is real). So
$$\langle Be_j, e_i \rangle = e_i^* B e_j = e_i^t B e_j = B_{ij},$$
and
$$\langle e_j, Ae_i \rangle = (Ae_i)^* e_j = e_i^* A^* e_j = e_i^t A^* e_j = (A^*)_{ij}.$$
Applying the assumption, $B_{ij} = (A^*)_{ij}$ for each $i, j$, so $B = A^*$.

Question 11. Let $W = \{v \in V : \langle x, v \rangle = 0\}$. We show that $W$ satisfies the three conditions required of a subspace. By one of the first theorems on inner products, $\langle x, 0 \rangle = 0$, so $0 \in W$. If $u, v \in W$, then $\langle x, u + v \rangle = \langle x, u \rangle + \langle x, v \rangle = 0 + 0 = 0$, so $u + v \in W$. If $u \in W$ and $c \in F$, then $\langle x, cu \rangle = \bar{c}\langle x, u \rangle = \bar{c} \cdot 0 = 0$, so $cu \in W$.
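Question 10's index computation can be sanity-checked numerically. The Python sketch below (my own example matrix; the inner product is the standard one on $\mathbb{C}^n$, linear in its first argument) verifies that $B = A^*$ satisfies $\langle Be_j, e_i \rangle = \langle e_j, Ae_i \rangle$ on the standard basis:

```python
def inner(u, v):
    # standard inner product on C^n, linear in the first argument
    return sum(a * b.conjugate() for a, b in zip(u, v))

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def conj_transpose(M):
    n = len(M)
    return [[M[j][i].conjugate() for j in range(n)] for i in range(n)]

A = [[1 + 2j, 3 - 1j],
     [0 + 1j, 2 + 0j]]
B = conj_transpose(A)                     # B = A*
e = [[1 + 0j, 0j], [0j, 1 + 0j]]          # standard basis of C^2

# <B e_j, e_i> = B_ij and <e_j, A e_i> = (A*)_ij, so the two sides agree
for i in range(2):
    for j in range(2):
        assert inner(matvec(B, e[j]), e[i]) == inner(e[j], matvec(A, e[i]))
        assert inner(matvec(B, e[j]), e[i]) == B[i][j]
print("B = A* satisfies <Bx, y> = <x, Ay> on basis vectors")
```

By linearity in each argument, agreement on basis vectors extends to all of $\mathbb{C}^n$, which is the converse direction of Question 10's uniqueness statement.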