The Jordan Canonical Form


The Jordan canonical form describes the structure of an arbitrary linear transformation on a finite-dimensional vector space over an algebraically closed field. Here we develop it using only the most basic concepts of linear algebra, with no reference to determinants or ideals of polynomials.

THEOREM 1. Let $\beta_1, \dots, \beta_n$ be linearly independent vectors in a vector space. If they are in the span of $\alpha_1, \dots, \alpha_k$, then $k \ge n$.

Proof. We prove the following claim: Let $\beta_1, \dots, \beta_n$ be linearly independent vectors in a vector space. For all $j$ with $0 \le j \le n$ and all vectors $\alpha_1, \dots, \alpha_k$, if $\beta_1, \dots, \beta_n$ are in the span of $\beta_1, \dots, \beta_j, \alpha_1, \dots, \alpha_k$, then $j + k \ge n$.

The proof of the claim is by induction on $k$. For $k = 0$, the claim is obvious since $\beta_1, \dots, \beta_n$ are linearly independent. Suppose the claim is true for $k - 1$, and suppose that $\beta_1, \dots, \beta_n$ are in the span of the vectors $\beta_1, \dots, \beta_j, \alpha_1, \dots, \alpha_k$ (we may assume $j < n$, since otherwise $j + k \ge n$ holds trivially). Then in particular we have
$$\beta_{j+1} = b_1 \beta_1 + \dots + b_j \beta_j + a_1 \alpha_1 + \dots + a_k \alpha_k. \tag{1}$$
For some $i$ we must have $a_i \ne 0$, since $\beta_1, \dots, \beta_n$ are linearly independent, so we can solve (1) for $\alpha_i$ as a linear combination of
$$\beta_1, \dots, \beta_j, \beta_{j+1}, \alpha_1, \dots, \alpha_{i-1}, \alpha_{i+1}, \dots, \alpha_k. \tag{2}$$
Hence the vectors (2) span $\beta_1, \dots, \beta_n$. By the induction hypothesis, $(j + 1) + (k - 1) \ge n$, so $j + k \ge n$. This proves the claim. The case $j = 0$ of the claim gives the theorem.

By this theorem, any two bases of a finite-dimensional vector space have the same number of elements, the dimension of the vector space.

Let $T$ be a linear transformation on the finite-dimensional vector space $V$ over the field $F$. An annihilating polynomial for $T$ is a non-zero polynomial $p$ such that $p(T) = 0$.

THEOREM 2. Let $T$ be a linear transformation on the finite-dimensional vector space $V$. Then there exists an annihilating polynomial for $T$.
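The argument behind Theorem 2 can be carried out numerically: in an $n$-dimensional space the $n + 1$ vectors $\alpha, T\alpha, \dots, T^n\alpha$ must be linearly dependent, and the dependence coefficients give a polynomial $p_i$ with $p_i(T)\alpha = 0$. The following sketch (the matrix `A` and vector `alpha` are illustrative assumptions, not from the note) finds such a polynomial for one vector:

```python
import numpy as np

# For one vector alpha, the n+1 vectors alpha, A.alpha, ..., A^n.alpha
# in R^n are linearly dependent; a null vector of the matrix K whose
# columns are these vectors gives polynomial coefficients a_0, ..., a_n
# with p(A).alpha = 0, as in the proof of Theorem 2.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
n = A.shape[0]
alpha = np.array([1.0, 1.0, 1.0])

# Columns: alpha, A.alpha, ..., A^n.alpha  (n+1 columns in R^n).
K = np.column_stack([np.linalg.matrix_power(A, j) @ alpha for j in range(n + 1)])

# The right-singular vector for the smallest singular value spans the
# null space of K, which is non-trivial since K has more columns than rows.
_, _, Vt = np.linalg.svd(K)
a = Vt[-1]

# Check: p(A).alpha = 0 where p = a_0 + a_1 x + ... + a_n x^n.
pA_alpha = sum(a[j] * (np.linalg.matrix_power(A, j) @ alpha) for j in range(n + 1))
print(np.allclose(pA_alpha, 0))
```

Taking the product of the polynomials found for each basis vector, as in the proof below, yields an annihilating polynomial for the whole transformation.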

Proof. Let $\alpha_1, \dots, \alpha_n$ be a basis for $V$. For each $i$ with $1 \le i \le n$, by Theorem 1 there exist scalars $a_0, \dots, a_n$, not all $0$, such that
$$a_0 \alpha_i + a_1 T\alpha_i + \dots + a_n T^n \alpha_i = 0.$$
That is, $p_i(T)\alpha_i = 0$ where $p_i = a_0 + a_1 x + \dots + a_n x^n$. Let $p$ be the product of all the $p_i$. Then $p$ is an annihilating polynomial for $T$, since $p(T)\alpha_i = 0$ for each basis vector $\alpha_i$.

We denote the null space of the linear transformation $T$ by $N_T$ and its range by $R_T$.

THEOREM 3. Let $T$ be a linear transformation on the finite-dimensional vector space $V$. Then $N_T$ and $R_T$ are linear subspaces of $V$ invariant under $T$, with
$$\dim N_T + \dim R_T = \dim V. \tag{3}$$
If $N_T \cap R_T = \{0\}$ then
$$V = N_T \oplus R_T \tag{4}$$
is a decomposition of $V$ as a direct sum of subspaces invariant under $T$.

Proof. It is clear that $N_T$ and $R_T$ are linear subspaces of $V$ invariant under $T$. Let $\alpha_1, \dots, \alpha_k$ be a basis for $N_T$ and extend it by the vectors $\alpha_{k+1}, \dots, \alpha_n$ to a basis for $V$. Then $T\alpha_{k+1}, \dots, T\alpha_n$ are a basis for $R_T$: they span $R_T$, and if $a_{k+1} T\alpha_{k+1} + \dots + a_n T\alpha_n = 0$ then $a_{k+1}\alpha_{k+1} + \dots + a_n \alpha_n \in N_T$, so all of the coefficients are $0$. This proves (3), from which (4) follows if $N_T \cap R_T = \{0\}$.

THEOREM 4. Let $T$ be a linear transformation on a non-zero finite-dimensional vector space $V$ over an algebraically closed field $F$. Then $T$ has an eigenvector.

Proof. By Theorem 2 there exists an annihilating polynomial $p$ for $T$. Since $F$ is algebraically closed, $p$ is a non-zero scalar multiple of $(x - c_k) \cdots (x - c_1)$ for some scalars $c_k, \dots, c_1$. Let $\alpha$ be a non-zero vector and let $i$ be the least number such that $(T - c_i I) \cdots (T - c_1 I)\alpha = 0$. If $i = 1$, then $\alpha$ is an eigenvector with the eigenvalue $c_1$; otherwise, $\beta = (T - c_{i-1} I) \cdots (T - c_1 I)\alpha$ is an eigenvector with the eigenvalue $c_i$.
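The proof of Theorem 4 is constructive, and the construction is short enough to execute directly. In this sketch the matrix `T`, its annihilating roots, and the starting vector are illustrative assumptions; the roots come from $(x-2)(x-3)$, which annihilates this particular `T`:

```python
import numpy as np

# Theorem 4's construction: apply the factors (T - c_i I) to a nonzero
# vector until the result first becomes 0; the last nonzero vector is
# an eigenvector for the eigenvalue of the factor that killed it.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
roots = [2.0, 3.0]           # (x - 2)(x - 3) annihilates this T
I = np.eye(2)

beta = np.array([1.0, 1.0])  # arbitrary nonzero starting vector
for c in roots:
    nxt = (T - c * I) @ beta
    if np.allclose(nxt, 0):
        # beta is an eigenvector of T with eigenvalue c
        print(c, beta)
        break
    beta = nxt
```

Note that the eigenvalue found depends on the starting vector and on the order of the factors; the theorem only asserts that some eigenvector is produced.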

THEOREM 5. Let $T$ be a linear transformation on the finite-dimensional vector space $V$ with an eigenvalue $c$. Let
$$V_c = \{\, \alpha \in V : \text{for some } j,\ (T - cI)^j \alpha = 0 \,\}. \tag{5}$$
Then there exists $r$ such that
$$V_c = N_{(T - cI)^r} \tag{6}$$
and
$$V = N_{(T - cI)^r} \oplus R_{(T - cI)^r} \tag{7}$$
is a decomposition of $V$ as a direct sum of subspaces invariant under $T$.

Proof. If $\alpha \in V_c$ and $a \in F$, then $a\alpha \in V_c$; if $\alpha_1 \in V_c$ (so that $(T - cI)^{j_1}\alpha_1 = 0$ for some $j_1$) and $\alpha_2 \in V_c$ (so that $(T - cI)^{j_2}\alpha_2 = 0$ for some $j_2$), then $\alpha_1 + \alpha_2 \in V_c$ (since $(T - cI)^j(\alpha_1 + \alpha_2) = 0$ whenever $j \ge j_1, j_2$). Thus $V_c$ is a linear subspace of $V$. It has a finite basis, since $V$ is finite-dimensional, so there is an $r$ such that $(T - cI)^r \alpha = 0$ for each basis element $\alpha$ and consequently for all $\alpha$ in $V_c$. This proves (6).

I claim that $V_c = N_{(T - cI)^r}$ and $R_{(T - cI)^r}$ have intersection $\{0\}$. Suppose that $\alpha$ is in both spaces. Then $\alpha = (T - cI)^r \beta$ for some $\beta$, since $\alpha$ is in $R_{(T - cI)^r}$. Since it is in $N_{(T - cI)^r}$, $(T - cI)^r \alpha = (T - cI)^{2r} \beta = 0$, so $\beta \in V_c$ by the definition (5) of $V_c$. Hence $(T - cI)^r \beta = 0$ by (6), so $\alpha = 0$. This proves the claim. By Theorem 3 we have (7). Each of the two spaces is invariant under $T - cI$, and under $cI$, so also under $T = (T - cI) + cI$.

THEOREM 6. Let $T$ be a linear transformation on the finite-dimensional vector space $V$ over the algebraically closed field $F$, and let the scalars $c_1, \dots, c_k$ be the distinct eigenvalues of $T$. Then there exist numbers $r_i$, for $1 \le i \le k$, such that
$$V = N_{(T - c_1 I)^{r_1}} \oplus \dots \oplus N_{(T - c_k I)^{r_k}} \tag{8}$$
is a direct sum decomposition of $V$ into subspaces invariant under $T$.

Proof. From Theorem 5 by induction on the number of distinct eigenvalues.

A linear transformation $N$ is nilpotent of degree $r$ in case $N^r = 0$ but $N^{r-1} \ne 0$; it is nilpotent in case it is nilpotent of degree $r$ for some $r$. Notice that on each of the subspaces of the direct sum decomposition (8), the operator $T$ is a scalar multiple of $I$ plus a nilpotent operator. Thus our remaining task is to find the structure of a nilpotent operator.
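The decomposition (8) can be checked numerically on a concrete matrix. Since the exponent $r$ in (6) never needs to exceed $\dim V$, taking $r = n$ always works; the following sketch (the matrix `T` is an illustrative assumption) verifies that the generalized eigenspace dimensions add up to $\dim V$:

```python
import numpy as np

# Illustration of Theorem 6: for each distinct eigenvalue c of T, the
# generalized eigenspace is N_{(T - cI)^n} (r = n = dim V always
# suffices), and the dimensions of these spaces sum to dim V.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = T.shape[0]

def generalized_nullity(T, c):
    """dim N_{(T - cI)^n}, computed as n - rank((T - cI)^n)."""
    M = np.linalg.matrix_power(T - c * np.eye(n), n)
    return n - np.linalg.matrix_rank(M)

# The distinct eigenvalues of this particular T are 2 and 5.
dims = [generalized_nullity(T, c) for c in (2.0, 5.0)]
print(dims, sum(dims) == n)
```

Here the eigenvalue $2$ has a one-dimensional eigenspace but a two-dimensional generalized eigenspace, which is exactly the gap that the nilpotent analysis below accounts for.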

THEOREM 7. Let $N$ be nilpotent of degree $r$ on the vector space $V$. Then we have strict inclusions
$$N_N \subset N_{N^2} \subset \dots \subset N_{N^{r-1}} \subset N_{N^r} = V. \tag{9}$$

Proof. The inclusions are obvious. They are strict inclusions because by definition there is a vector $\alpha$ in $V$ such that $N^r \alpha = 0$ but $N^{r-1} \alpha \ne 0$. Then $N^{r-i} \alpha$ is in $N_{N^i}$ but not in $N_{N^{i-1}}$.

We say that the vectors $\beta_1, \dots, \beta_k$ are linearly independent of the linear subspace $W$ in case $b_1 \beta_1 + \dots + b_k \beta_k$ is in $W$ only if $b_1 = \dots = b_k = 0$.

THEOREM 8. Let $N$ be a nilpotent linear transformation of degree $r$ on the finite-dimensional vector space $V$. Then there exist a number $m$ and vectors $\alpha_1, \dots, \alpha_m$ such that the non-zero vectors of the form $N^j \alpha_l$, for $j \ge 0$ and $1 \le l \le m$, are a basis for $V$. Any vectors linearly independent of $N_{N^{r-1}}$ can be included among the $\alpha_1, \dots, \alpha_m$. For $1 \le l \le m$, let $V_l$ be the subspace with basis $\alpha_l, \dots, N^{s_l - 1} \alpha_l$, where $s_l$ is the least number such that $N^{s_l} \alpha_l = 0$. Then
$$V = V_1 \oplus \dots \oplus V_m \tag{10}$$
is a direct sum decomposition of $V$ into $s_l$-dimensional subspaces invariant under $N$, and $N$ is nilpotent of degree $s_l$ on $V_l$.

For $1 \le i \le r$, let $\varphi(i)$ be the number of subspaces in the decomposition (10) of dimension at least $i$. Then
$$\dim N_{N^i} - \dim N_{N^{i-1}} = \varphi(i),$$
so the number of subspaces in (10) of any given dimension is determined uniquely by $N$.

Proof. We prove the statements of the first paragraph by induction on $r$. For $r = 1$, we have $N = 0$ and the result is trivial. Suppose that the result holds for $r - 1$, and consider a nilpotent linear transformation of degree $r$. Given vectors linearly independent of $N_{N^{r-1}}$, extend them to a maximal such set $\beta_1, \dots, \beta_k$ (so that they, together with any basis for $N_{N^{r-1}}$, are a basis for $V$). Then the vectors $N\beta_1, \dots, N\beta_k$ are in $N_{N^{r-1}}$ and are linearly independent of $N_{N^{r-2}}$, for if $b_1 N\beta_1 + \dots + b_k N\beta_k \in N_{N^{r-2}}$ then $b_1 \beta_1 + \dots + b_k \beta_k \in N_{N^{r-1}}$, so $b_1 = \dots = b_k = 0$. Now $N$ restricted to $N_{N^{r-1}}$ is nilpotent of degree $r - 1$, so by the
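The uniqueness count $\varphi(i) = \dim N_{N^i} - \dim N_{N^{i-1}}$ is easy to verify on a concrete nilpotent matrix. In this sketch `N` is an assumed example built as a direct sum of one chain of length $3$ and one of length $2$, so $\varphi$ should come out as $(2, 2, 1)$:

```python
import numpy as np

# phi(i) = dim N_{N^i} - dim N_{N^{i-1}} counts the subspaces V_l of
# dimension at least i in the decomposition (10), i.e. the Jordan blocks
# of size at least i.  N below has one 3-chain and one 2-chain.
N = np.zeros((5, 5))
N[1, 0] = N[2, 1] = 1.0   # chain of length 3: e0 -> e1 -> e2 -> 0
N[4, 3] = 1.0             # chain of length 2: e3 -> e4 -> 0
n = N.shape[0]

def nullity(M):
    """Dimension of the null space, n - rank(M)."""
    return M.shape[0] - np.linalg.matrix_rank(M)

r = 3  # degree of nilpotency of this N
phi = [nullity(np.linalg.matrix_power(N, i)) - nullity(np.linalg.matrix_power(N, i - 1))
       for i in range(1, r + 1)]
print(phi)
```

Two blocks have dimension at least $1$ and at least $2$, but only one has dimension at least $3$, so `phi` recovers the block structure from rank data alone.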

induction hypothesis there are vectors $\alpha_1, \dots, \alpha_m$, including $N\beta_1, \dots, N\beta_k$ among them, such that the non-zero vectors of the form $N^j \alpha_l$ are a basis for $N_{N^{r-1}}$. Adjoin the vectors $\beta_1, \dots, \beta_k$ to them; then this is a basis for $V$ of the desired form.

Now the statements of the second paragraph follow directly. (See the following example, in which $N$ is nilpotent of degree $5$ on a $24$-dimensional space. The bottom $i$ rows are $N_{N^i}$.)
$$\begin{array}{cccccc}
V_1 & V_2 & V_3 & V_4 & V_5 & V_6 \\
\alpha_1 & \alpha_2 & & & & \\
N\alpha_1 & N\alpha_2 & \alpha_3 & \alpha_4 & \alpha_5 & \\
N^2\alpha_1 & N^2\alpha_2 & N\alpha_3 & N\alpha_4 & N\alpha_5 & \\
N^3\alpha_1 & N^3\alpha_2 & N^2\alpha_3 & N^2\alpha_4 & N^2\alpha_5 & \alpha_6 \\
N^4\alpha_1 & N^4\alpha_2 & N^3\alpha_3 & N^3\alpha_4 & N^3\alpha_5 & N\alpha_6
\end{array}$$

We have done all the work necessary to establish the Jordan canonical form; it remains only to put the pieces together. It is convenient to express the result in matrix language. Let $B(r; c)$ be the $r \times r$ lower triangular matrix with $c$ along the diagonal, $1$ everywhere immediately below the diagonal, and $0$ everywhere else. Such a matrix is called a Jordan block. Notice that in the decomposition (10), the matrix of $N$ on $V_l$, with respect to the basis described in Theorem 8, is the Jordan block $B(s_l; 0)$. (With the basis in reverse order, the entries $1$ are immediately above the diagonal. Either convention is acceptable.) A matrix that is a direct sum of Jordan blocks is in Jordan form.

THEOREM 9. Let $T$ be a linear transformation on the finite-dimensional vector space $V$ over the algebraically closed field $F$. Then there exists a basis of $V$ such that the matrix of $T$ is in Jordan form. This matrix is unique except for the order of the Jordan blocks.

Proof. By Theorems 6 and 8.

The proof shows that the same result holds for a field that is not algebraically closed, provided that $T$ has some annihilating polynomial that factors into first-degree factors.

http://math.princeton.edu/~nelson/217/jordan.pdf
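Theorem 9 can be exercised in SymPy, whose `Matrix.jordan_form` returns a change-of-basis matrix $P$ and a Jordan-form matrix $J$ with $A = P J P^{-1}$ (SymPy puts the $1$s immediately above the diagonal, i.e. it uses the reverse-order basis convention mentioned above). The matrix `A` here is an illustrative assumption with the single eigenvalue $1$ of geometric multiplicity $1$:

```python
import sympy as sp

# A has characteristic polynomial (x - 1)^2 but only a one-dimensional
# eigenspace for 1, so its Jordan form is a single 2x2 block B(2; 1).
A = sp.Matrix([[0, 1],
               [-1, 2]])
P, J = A.jordan_form()   # A = P * J * P**(-1)
print(J)
```

Because the block sizes are determined uniquely by the ranks of the powers $(A - cI)^i$ (the count $\varphi(i)$ of Theorem 8), any correct implementation must return the same $J$ up to the order of the blocks.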