Math 113 Homework 5. Bowei Liu, Chao Li. Fall 2013

This homework is due Thursday, November 7th, at the start of class. Remember to write clearly and to justify your solutions. Please make sure to put your name on the first page and to number and staple your pages.

Problems from the book: Axler, Chapter 5, problems 1, 8, and 15; Chapter 8, problems 5 and 10. In addition, do the following problems.

Problem 1 (Ch. 5). Show that the sum of invariant subspaces of a linear endomorphism is still invariant.

Proof. Suppose that $U_1, \dots, U_m$ are invariant subspaces of $T \in L(V)$. We wish to show that $U_1 + \cdots + U_m$ is also an invariant subspace. Consider any $v \in U_1 + \cdots + U_m$. By definition, we can write
$$v = u_1 + \cdots + u_m$$
for some $u_1 \in U_1, \dots, u_m \in U_m$. Since $U_1, \dots, U_m$ are invariant subspaces, we know $Tu_1 \in U_1 \subseteq U_1 + \cdots + U_m$, ..., $Tu_m \in U_m \subseteq U_1 + \cdots + U_m$. Hence, since $U_1 + \cdots + U_m$ is a subspace and therefore closed under addition, we conclude that
$$Tv = Tu_1 + \cdots + Tu_m \in U_1 + \cdots + U_m,$$
i.e. $U_1 + \cdots + U_m$ is an invariant subspace.

Problem 8 (Ch. 5). Find all eigenvalues and eigenvectors of the map $T \colon (z_1, z_2, \dots) \mapsto (z_2, z_3, \dots)$.

Proof. Suppose that $v \neq 0$ is an eigenvector of $T$ with eigenvalue $\lambda$, i.e. $Tv = \lambda v$. Writing out the components of $v$, we have
$$Tv = (v_2, v_3, \dots) = (\lambda v_1, \lambda v_2, \dots) = \lambda v.$$
Equating components, $v_2 = \lambda v_1$, $v_3 = \lambda v_2$, and so on, so
$$v_2 = \lambda v_1, \quad v_3 = \lambda^2 v_1, \quad \dots, \quad v_n = \lambda^{n-1} v_1, \quad \dots.$$
Hence every eigenvector of $T$ must be of the form
$$v = (v_1, \lambda v_1, \lambda^2 v_1, \dots)$$
for some $\lambda \in F$ and $v_1 \neq 0$. Conversely, every vector of this form is indeed an eigenvector of $T$ with eigenvalue $\lambda$, since in that case
$$Tv = (\lambda v_1, \lambda^2 v_1, \dots) = \lambda(v_1, \lambda v_1, \dots) = \lambda v.$$
This shows that we have indeed exhibited all eigenvectors and their associated eigenvalues: every $\lambda \in F$ occurs.
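As an informal sanity check of the Problem 8 computation (this sketch is ours, not part of the assigned solution), one can verify with sympy that a sequence of the form $(v_1, \lambda v_1, \lambda^2 v_1, \dots)$, truncated to finitely many components, is shifted to $\lambda$ times itself, component by component:

```python
# Hypothetical check of Problem 8: for v_i = lam^i * v1, the shift map should
# satisfy (Tv)_i = lam * v_i on every component we can compare.
import sympy

lam, v1 = sympy.symbols("lam v1")

N = 8                                     # number of components to compare
v = [v1 * lam**i for i in range(N + 1)]   # v_i = lam^i * v1
Tv = v[1:]                                # the shift map drops the first entry

for i in range(N):
    assert sympy.simplify(Tv[i] - lam * v[i]) == 0

print("T(v) agrees with lam * v on the first", N, "components")
```

Truncation only loses the last component, so checking the first $N$ components of $Tv = \lambda v$ is enough to illustrate the identity $v_{i+1} = \lambda v_i$.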

Problem 15 (Ch. 5). Let $p$ be a complex polynomial and $T$ a complex linear transformation. Show that $a$ is an eigenvalue of $p(T)$ if and only if $a = p(\lambda)$ for some eigenvalue $\lambda$ of $T$.

Proof. ($\Leftarrow$): Suppose $\lambda$ is the eigenvalue corresponding to the nonzero eigenvector $v \in V$, i.e. $Tv = \lambda v$. Then if $a = p(\lambda)$, we claim that $a$ is the eigenvalue of $p(T)$ corresponding to $v$. Indeed, if we write $p(x) = a_n x^n + \cdots + a_0$, we get
$$p(T)v = (a_n T^n + \cdots + a_0 I)v = a_n T^n v + \cdots + a_0 v = a_n \lambda^n v + \cdots + a_0 v = p(\lambda)v = a v.$$

($\Rightarrow$): Note that the statement is obviously true if $p(x) \equiv 0$, since the only eigenvalue of the zero map is $0 = p(\lambda)$ for any $\lambda$. Now assume $p(x)$ is not the zero polynomial, and suppose that $a$ is an eigenvalue of $p(T)$, i.e. there exists some nonzero $v \in V$ such that $p(T)v = av$, i.e. $(p(T) - aI)v = 0$. By factoring the polynomial
$$p(x) - a = c(x - r_1)\cdots(x - r_n), \qquad c \neq 0,$$
we get
$$c(T - r_1 I)(T - r_2 I)\cdots(T - r_n I)v = 0.$$
Now we examine this chain of maps starting from the right. Since $v \neq 0$, either $(T - r_n I)v = 0$ (i.e. $Tv = r_n v$, so $r_n$ is an eigenvalue of $T$), or $(T - r_n I)v \neq 0$. In the latter case, if we define $v' := (T - r_n I)v \neq 0$, we get
$$c(T - r_1 I)(T - r_2 I)\cdots(T - r_{n-1} I)v' = 0.$$
By the same logic, now either $r_{n-1}$ is an eigenvalue of $T$ (corresponding to the nonzero eigenvector $v'$), or $v'' := (T - r_{n-1} I)v' \neq 0$ and
$$c(T - r_1 I)\cdots(T - r_{n-2} I)v'' = 0.$$
Continuing on in this fashion, in the last case we arrive at $c(T - r_1 I)w = 0$ for some nonzero vector $w$, so $r_1$ must be an eigenvalue of $T$. This shows that at least one of $r_1, \dots, r_n$ is an eigenvalue of $T$; whichever one it is, call it $\lambda = r_i$.

(Alternatively, to see that at least one of $r_1, \dots, r_n$ must be an eigenvalue of $T$, we can simply note that if this were not the case, i.e. if $T - r_i I$ were injective for each $i = 1, 2, \dots, n$, then their composition would be injective, contradicting $(p(T) - aI)v = 0$ with $v \neq 0$.)

Now, $\lambda$ is an eigenvalue of $T$, and by construction it is a root of $p(x) - a$, i.e. $(x - \lambda) \mid p(x) - a$, which implies that $p(\lambda) - a = 0$, so $p(\lambda) = a$, as desired.

Problem 5 (Ch. 8).
Suppose $S, T \in L(V)$ with $ST$ nilpotent, and let $n$ be a positive integer such that $(ST)^n = 0$. Since multiplying any operator by the zero operator yields the zero operator, we also get
$$T(ST)^n S = 0.$$
But this is precisely $(TS)^{n+1} = 0$, and thus $TS$ is also nilpotent.
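The bump in the exponent from $(ST)^n = 0$ to $(TS)^{n+1} = 0$ can be sharp. A small numerical illustration with matrices of our own choosing (a sketch, not part of the solution):

```python
# Hypothetical example: with S = e12 and T = e11 we get ST = 0, yet TS is
# nonzero; only the next power (TS)^2 vanishes, matching the exponent bump.
import numpy as np

S = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # the matrix unit e12
T = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # the matrix unit e11

ST = S @ T
TS = T @ S

assert np.all(ST == 0)        # (ST)^1 = 0, so n = 1 here
assert not np.all(TS == 0)    # TS itself is NOT zero ...
assert np.all(TS @ TS == 0)   # ... but (TS)^(n+1) = (TS)^2 = 0

print("(ST)^1 = 0 while TS needs the second power to vanish")
```

So the nilpotency index of $TS$ can genuinely exceed that of $ST$, and the extra sandwiching by $T$ and $S$ in the proof is needed.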

Problem 10 (Ch. 8). This is not true. For example, take $V$ to be a two-dimensional complex vector space spanned by $\{v_1, v_2\}$. Define $T \colon V \to V$ by $T(v_1) = 0$ and $T(v_2) = v_1$, extended linearly to all of $V$. Then the null space of $T$ is the subspace $V_1$ spanned by $\{v_1\}$, and the range of $T$ is also $V_1$. So the null space and range of $T$ do not sum to the whole space, and in fact they have nonzero intersection as well.

Question 1. Let $T_1, T_2 \in L(\mathbb{C}^2)$ be given by the matrices
$$T_1 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad T_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$
What are the invariant subspaces of $T_1$ and $T_2$?

Proof. $T_1$: $\{0\}$ is the only possible $0$-dimensional subspace, and it is clearly invariant. Similarly, $\mathbb{C}^2$ is the only possible $2$-dimensional subspace and is also clearly invariant. The invariant $1$-dimensional subspaces are exactly the spans of single eigenvectors (see 5.2, 5.3 in the book). Since $T_1$ is already in Jordan normal form, its eigenvectors are precisely the nonzero multiples of $(1, 0)$; thus the only invariant $1$-dimensional subspace is $\{(z, 0) : z \in \mathbb{C}\}$.

$T_2$: Since this is the identity transformation, every vector is fixed by $T_2$, so every subspace is invariant: $\{0\}$, $\mathbb{C}^2$, and $\{\lambda(z_1, z_2) : \lambda \in \mathbb{C}\}$ for each $(z_1, z_2) \neq (0, 0)$ in $\mathbb{C}^2$.

Question 2. Let $T \in L(P_n(\mathbb{C}))$ be given by $T(p) = p + p' + p''$. What is the Jordan form of $T$? (Hint: show that $(T - I)^{n+1} = 0$ and that $(T - I)^n$ is not.)

Ans: It is the single $(n+1) \times (n+1)$ Jordan block
$$\begin{bmatrix} 1 & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & 1 \end{bmatrix},$$
with $1$'s on the diagonal, $1$'s one square above the diagonal, and $0$'s everywhere else.

Proof. Note that $T - I$ is given by $T - I \colon p \mapsto p' + p''$. When applied to a polynomial $p$ of degree $n$, we find that $p' + p''$ has degree at most $n - 1$. Hence it follows that $(T - I)^n p$ has degree at most $0$, i.e. is a constant function, so $(T - I)^{n+1}$ is the zero map. On the other hand, $(T - I)^n$ is not the zero map, since $(T - I)^n(x^n) = n! \neq 0$.
(To see this, one can note that $T - I$ acts as the differential operator
$$\frac{d}{dx} + \frac{d^2}{dx^2} = \frac{d}{dx}\left(1 + \frac{d}{dx}\right),$$
and hence we can apply the binomial expansion to see
$$(T - I)^n = \frac{d^n}{dx^n}\left(1 + \binom{n}{1}\frac{d}{dx} + \binom{n}{2}\frac{d^2}{dx^2} + \cdots + \binom{n}{n-1}\frac{d^{n-1}}{dx^{n-1}} + \frac{d^n}{dx^n}\right);$$
applied to $x^n$, only the leading term $\frac{d^n}{dx^n}$ contributes, giving $n!$. Alternatively, one can argue by degrees: if $p$ has degree exactly $k \geq 1$, then $p'$ has degree exactly $k - 1$ and $p''$ has degree at most $k - 2$, so $p' + p''$ still has degree exactly $k - 1$.)

This shows that $n$ is the largest power of $T - I$ that does not kill $x^n$, and thus, by Lemma 8.40, there exists a basis for $V$ of the form
$$\{x^n,\ (T - I)(x^n),\ (T - I)^2(x^n),\ \dots,\ (T - I)^n(x^n)\}.$$
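The two claims that drive this proof, $(T - I)^n(x^n) = n! \neq 0$ and $(T - I)^{n+1} = 0$ on $P_n(\mathbb{C})$, can also be checked symbolically. The following sympy sketch (ours, for illustration only) verifies both for $n = 5$; since $(T - I)^{n+1}$ is linear, it suffices to check that it kills the basis monomials $1, x, \dots, x^n$.

```python
# Hypothetical symbolic check of the nilpotency claims for T(p) = p + p' + p''.
import sympy

x = sympy.symbols("x")

def T_minus_I(p):
    """(T - I)(p) = p' + p''."""
    return sympy.diff(p, x) + sympy.diff(p, x, 2)

def power(p, k):
    """Apply (T - I) to p a total of k times."""
    for _ in range(k):
        p = T_minus_I(p)
    return p

n = 5
assert power(x**n, n) == sympy.factorial(n)   # (T - I)^n (x^n) = n! != 0
for j in range(n + 1):                        # (T - I)^(n+1) kills 1, x, ..., x^n
    assert sympy.expand(power(x**j, n + 1)) == 0

print("(T - I)^5 (x^5) =", power(x**n, n))
```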

Using these basis elements as a Jordan basis, we immediately find that $T - I$ is represented by the single $(n+1) \times (n+1)$ nilpotent Jordan block
$$\begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix}.$$
Adding $I$ to this matrix then finishes the proof.

Question 3. Let $S, T \in L(V)$ be operators on a finite-dimensional, nonzero vector space $V$ over $\mathbb{C}$. Suppose that $S$ and $T$ commute, namely that $ST = TS$. Show that there is a nonzero vector $v \in V$ such that $v$ is an eigenvector of both $S$ and $T$. (Hint: find a nonzero eigenvector of $T$, show that its eigenspace is $S$-invariant, and find an eigenvector of $S$ within that eigenspace.)

Proof. Since the vector space is over the algebraically closed field $\mathbb{C}$, the linear operator $T$ has an eigenvalue $\lambda$. Denote by $V_\lambda$ the subspace consisting of all eigenvectors of $T$ with eigenvalue $\lambda$, together with $0$; that is,
$$V_\lambda = \{v \in V : Tv = \lambda v\}.$$
Clearly $V_\lambda$ is a subspace of $V$ (one easily verifies that $V_\lambda$ is closed under addition and scalar multiplication). We claim that $V_\lambda$ is an invariant subspace of $S$, namely $S(V_\lambda) \subseteq V_\lambda$. Indeed, for any element $v \in V_\lambda$, since $ST = TS$, we have
$$T(S(v)) = S(T(v)) = S(\lambda v) = \lambda S(v),$$
so $S(v) \in V_\lambda$. We restrict $S$ to the subspace $V_\lambda$ to get $S|_{V_\lambda} \in L(V_\lambda)$. Since $V_\lambda$ is a nonzero complex vector space and $S|_{V_\lambda}$ is a linear operator on it, it has an eigenvalue $\mu \in \mathbb{C}$; that is, there is a nonzero vector $u \in V_\lambda$ such that
$$S|_{V_\lambda}(u) = \mu u.$$
This $u \in V_\lambda$ is the vector we want, for we have
$$T(u) = \lambda u, \qquad S(u) = S|_{V_\lambda}(u) = \mu u,$$
as desired.

Question 4. Let $T \in L(\mathbb{C}^n)$ be an operator on $\mathbb{C}^n$. Suppose that $\mathbb{C}^n$ has a basis of eigenvectors of $T$, all of whose eigenvalues have absolute value strictly less than $1$. Show that
$$\lim_{k \to \infty} T^k(v) = 0$$
for all $v \in \mathbb{C}^n$. You may use the fact that $\lim_{k \to \infty} \lambda^k = 0$ if $|\lambda| < 1$.

Proof. Suppose $\{v_1, \dots, v_n\}$ is a basis of eigenvectors of $T$ with eigenvalues $\lambda_1, \dots, \lambda_n$. For any vector $v \in \mathbb{C}^n$, there are complex numbers $a_1, \dots, a_n$ such that
$$v = a_1 v_1 + \cdots + a_n v_n.$$
By definition we have $T(v_j) = \lambda_j v_j$ for $j = 1, \dots, n$, so inductively
$$T^k(v_j) = \lambda_j^k v_j, \qquad j = 1, \dots, n.$$
Therefore
$$T^k(v) = T^k(a_1 v_1 + \cdots + a_n v_n) = a_1 T^k(v_1) + \cdots + a_n T^k(v_n) = a_1 \lambda_1^k v_1 + \cdots + a_n \lambda_n^k v_n.$$

With the assumption that $|\lambda_j| < 1$ for $j = 1, \dots, n$, we have
$$\lim_{k \to \infty} \lambda_j^k = 0, \qquad j = 1, \dots, n.$$
So we conclude
$$\lim_{k \to \infty} \left(a_1 \lambda_1^k v_1 + \cdots + a_n \lambda_n^k v_n\right) = 0,$$
or equivalently
$$\lim_{k \to \infty} T^k(v) = 0.$$

Question 5. Let $N \in L(V)$ be a nilpotent operator, and let $p(x)$ be a polynomial. Show that $p(N)$ is nilpotent if and only if $p(0) = 0$.

Proof. ($\Leftarrow$) is easier. Suppose $p(0) = 0$, and take $n$ to be an integer satisfying $N^n = 0$. Since $p(0) = 0$, we can factor $x$ out of $p$: there is a polynomial $q(x)$ such that $p(x) = x q(x)$. Therefore $p(x)^n = x^n q(x)^n$, and since $N$ commutes with $q(N)$ we have
$$p(N)^n = N^n q(N)^n = 0,$$
which means $p(N)$ is nilpotent.

($\Rightarrow$): we give two proofs.

Proof 1. The first proof is based on the following fact: a linear operator $T \in L(V)$ is nilpotent if and only if $T$ has no nonzero eigenvalue (or equivalently, $0$ is the only eigenvalue of $T$). To see this, if there is some integer $n$ such that $T^n = 0$, then for any eigenvalue $\lambda$ of $T$, clearly $\lambda^n$ is an eigenvalue of $T^n$, so it must be $0$. This gives $\lambda^n = 0$, and hence $\lambda = 0$. Conversely, if $0$ is the only eigenvalue of $T$, then by Proposition 5.16, $T$ has an upper-triangular matrix $A$ with respect to some basis of $V$, and since the eigenvalues of $T$ are all $0$, the diagonal entries of $A$ are all $0$. A $k \times k$ upper-triangular matrix with zero diagonal satisfies $A^k = 0$, so $T^k = 0$ and $T$ is nilpotent.

Now suppose $p(N)$ is nilpotent. From the above, $0$ is the only eigenvalue of $p(N)$. We argue by contradiction: if $p(0) \neq 0$, then there is some nonzero constant $c$ and some polynomial $q(x)$ such that $p(x) = x q(x) + c$. Suppose $N^m = 0$, and pick a nonzero eigenvector $v$ of $p(N)$. Its eigenvalue is $0$, so
$$(N q(N) + c\,\mathrm{Id})(v) = 0, \qquad \text{i.e.} \qquad N q(N) v = -c v.$$
Applying $N q(N)$ repeatedly, we get
$$(N q(N))^m(v) = (-c)^m v.$$
But $N$ commutes with $q(N)$, so $(N q(N))^m = N^m q(N)^m$, which is the zero operator since $N^m = 0$. Hence
$$(-c)^m v = 0,$$
and since $c \neq 0$ this forces $v = 0$. Contradiction.

Proof 2. The second proof uses Bézout's identity: let $F$ be a field, and suppose we are given two polynomials $P(x), Q(x) \in F[x]$. Then there exist two polynomials $f(x), g(x) \in F[x]$ such that
$$P(x) f(x) + Q(x) g(x) = (P, Q)(x),$$
where $(P, Q)(x)$ is the greatest common divisor of $P$ and $Q$.

Now suppose $p(N)$ is nilpotent. Let $k, m$ be two integers with $p(N)^k = 0$ and $N^m = 0$. We argue by contradiction: if $p(0) \neq 0$, then $p(x)$ is relatively prime to $x$, and therefore $p(x)^k$ is relatively prime to $x^m$. Hence by Bézout's identity there exist two polynomials $f(x), g(x)$ such that
$$p(x)^k f(x) + x^m g(x) = 1.$$

Substituting the operator $N$ into the above expression, we get
$$p(N)^k f(N) + N^m g(N) = \mathrm{Id}.$$
Under the assumption that $p(N)^k = 0$ and $N^m = 0$, the left-hand side is the zero operator while the right-hand side is the identity on a nonzero space, which is a contradiction.

Question 6. Approximately how much time did you spend working on this homework?
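Finally, an optional numerical sanity check of Question 5 (the matrix and polynomials below are our own illustrative choices, not part of the assigned solution): for a nilpotent Jordan block $N$, a polynomial with $p(0) = 0$ yields a nilpotent $p(N)$, while adding a nonzero constant term does not.

```python
# Hypothetical check of Question 5 on a 3x3 nilpotent Jordan block.
import numpy as np

N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # nilpotent: N^3 = 0
I = np.eye(3)

def matpow(A, k):
    """Compute A^k by repeated multiplication."""
    out = np.eye(A.shape[0])
    for _ in range(k):
        out = out @ A
    return out

p1 = N @ N + 2 * N          # p(x) = x^2 + 2x,      so p(0) = 0
p2 = N @ N + 2 * N + I      # p(x) = x^2 + 2x + 1,  so p(0) = 1

assert np.all(matpow(p1, 3) == 0)       # p1(N) is strictly upper triangular: nilpotent
assert not np.all(matpow(p2, 3) == 0)   # p2(N) has 1's on the diagonal: never nilpotent

print("p(0) = 0 gives a nilpotent p(N); p(0) = 1 does not")
```

The second matrix has all diagonal entries equal to $1$, so every power of it keeps those $1$'s, matching the eigenvalue argument in Proof 1.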