14. The minimal polynomial

For an example of a matrix which cannot be diagonalised, consider the matrix
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
The characteristic polynomial is $\lambda^2$, so that the only eigenvalue is $\lambda = 0$. The corresponding eigenspace $E_0(A)$ is spanned by $(1, 0)$. In particular $E_0(A)$ is one dimensional. But if $A$ were diagonalisable then it would have a basis of eigenvectors. So $A$ cannot be diagonalisable. Thus at the very least we must allow an extra matrix
$$A = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}.$$
This behaves exactly the same as before, since when we subtract $\lambda I_2$ we are back to the case of
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
In fact:

Lemma 14.1. Let $A, B \in M_{n,n}(F)$ and let $\lambda \in F$. Then $A$ is similar to $B$ if and only if $A - \lambda I_n$ is similar to $B - \lambda I_n$.

Proof. There are two ways to see this. First we proceed directly. Suppose that $A$ is similar to $B$. Then we may find an invertible matrix $P \in M_{n,n}(F)$ such that $B = PAP^{-1}$. In this case
$$P(A - \lambda I_n)P^{-1} = PAP^{-1} - \lambda P I_n P^{-1} = B - \lambda I_n.$$
Thus $A - \lambda I_n$ is similar to $B - \lambda I_n$. The converse is clear, since
$$A = (A - \lambda I_n) - \mu I_n \qquad \text{and} \qquad B = (B - \lambda I_n) - \mu I_n,$$
where $\mu = -\lambda$, so we may apply the forward direction to the shifted matrices.

So much for the direct approach. The other way to proceed is as follows. $A$ and $B$ correspond to the same linear function $\varphi \colon V \to V$, where $V = F^n$, computed with respect to two different isomorphisms $f \colon V \to F^n$. To get $A$ we might as well suppose that we take $f$ to be the identity map. To get $B$ we take $f$ corresponding to the matrix $P$ such that $A = PBP^{-1}$. But then $A - \lambda I_n$ and $B - \lambda I_n$ correspond to the same linear function $\varphi - \lambda I$, where $I \colon V \to V$ is the identity function.
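As a quick sanity check of Lemma 14.1, here is a minimal sketch in Python, using SymPy for exact arithmetic; the particular $P$, $A$ and $\lambda$ are arbitrary choices of mine, not taken from the notes.

```python
from sympy import Matrix, eye, Rational

# Hypothetical test data: any invertible P and any A, lambda will do.
P = Matrix([[1, 2], [0, 1]])           # invertible, det = 1
A = Matrix([[3, 1], [0, 3]])
lam = Rational(3)

B = P * A * P.inv()                    # B is similar to A

# Lemma 14.1: conjugating A - lambda*I by P gives B - lambda*I,
# because P * (lambda*I) * P^{-1} = lambda*I.
lhs = P * (A - lam * eye(2)) * P.inv()
rhs = B - lam * eye(2)
assert lhs == rhs
print("P(A - lam*I)P^{-1} == B - lam*I:", lhs == rhs)
```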

What are the possibilities for $n = 3$? Again we know that we must have repeated eigenvalues (in some sense). By (14.1) we know that we might as well suppose we have eigenvalue $\lambda = 0$. Here then are some possibilities:
$$A_0 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \qquad \text{and} \qquad A_2 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$
It is interesting to consider the eigenspaces in all three cases. In the first case every vector is an eigenvector with eigenvalue $0$, so $E_0(A_0) = F^3$. In the second case the kernel is $z = 0$, so that $(1, 0, 0)$ and $(0, 1, 0)$ span $E_0(A_1)$. In the third case the kernel is $y = z = 0$, so that $E_0(A_2)$ is spanned by $(1, 0, 0)$. But we already know that similar matrices have eigenspaces of the same dimension. So $A_0$, $A_1$ and $A_2$ are inequivalent matrices.

In fact there is another way to see that these matrices are not similar.

Definition 14.2. Let $\varphi \colon V \to V$ be a linear transformation. We say that $\varphi$ is nilpotent if $\varphi^k = 0$ for some positive integer $k$. The smallest such integer is called the order of $\varphi$. We say that a matrix is nilpotent if the corresponding linear function is nilpotent.

Almost by definition, if $A$ and $B$ are similar matrices then $A$ is nilpotent if and only if $B$ is nilpotent, and the orders are then the same. Now $A_0 = 0$, $A_1^2 = 0$ and $A_2^3 = 0$; in fact $A_2^2 \neq 0$. So $A_0$, $A_1$ and $A_2$ are not similar matrices, since they have different orders (of nilpotency).

In general then we should look at
$$(A - \lambda I_n)^k,$$
or, what comes to the same thing, $(\varphi - \lambda I)^k$. This in fact suggests that we should consider polynomials in $A$ (or $\varphi$). Given a polynomial $m(x) \in P(F)$, we first note that it makes sense to consider $m(\varphi)$. We can always compute powers of $\varphi$, we can always multiply by a scalar and we can always add linear functions (or matrices). For example, if $m(x) = 3x^3 - 5x + 7$ then
$$m(\varphi) = 3\varphi^3 - 5\varphi + 7I$$
is a linear function. To see that there is something non-trivial going on here, note that by contrast it makes no sense to think about polynomials in two linear transformations. The problem is that $\varphi \circ \psi \neq \psi \circ \varphi$ in general, and yet the variables $x$ and $y$ are supposed to commute for any polynomial in these variables.
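Both the orders of nilpotency and the evaluation of a polynomial at a matrix are easy to check by machine. Here is a minimal sketch in Python/NumPy (the helper `nilpotency_order` is my own name, not from the notes); note how the constant term $7$ of $m(x)$ becomes $7I$.

```python
import numpy as np

A1 = np.array([[0, 0, 0],
               [0, 0, 1],
               [0, 0, 0]])
A2 = np.array([[0, 1, 0],
               [0, 0, 1],
               [0, 0, 0]])

def nilpotency_order(A):
    """Smallest k with A^k = 0, assuming A is nilpotent.
    For a nilpotent n x n matrix the order is at most n."""
    power = np.eye(A.shape[0], dtype=A.dtype)
    for k in range(1, A.shape[0] + 1):
        power = power @ A
        if not power.any():            # power is the zero matrix
            return k
    raise ValueError("matrix is not nilpotent")

print(nilpotency_order(A1))            # 2
print(nilpotency_order(A2))            # 3

# Evaluating m(x) = 3x^3 - 5x + 7 at a matrix: the constant term
# contributes 7 times the identity.
M = A2
m_of_M = 3 * np.linalg.matrix_power(M, 3) - 5 * M + 7 * np.eye(3)
print(m_of_M)
```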

Definition-Lemma 14.3. Let $V$ and $W$ be vector spaces over the same field $F$. $\operatorname{Hom}(V, W)$ denotes the set of all linear transformations $\varphi \colon V \to W$. Then $\operatorname{Hom}(V, W)$ is a vector space over $F$, with the natural rules for addition and scalar multiplication. If $V$ and $W$ are finite dimensional then so is $\operatorname{Hom}(V, W)$ and
$$\dim \operatorname{Hom}(V, W) = \dim V \cdot \dim W.$$

Proof. In fact the set $W^V$ of all functions $V \to W$ is a vector space, with addition and scalar multiplication defined pointwise. Since the subset of linear transformations is closed under addition and scalar multiplication, $\operatorname{Hom}(V, W)$ is a linear subspace of $W^V$. In particular it is a vector space.

Now suppose that $V$ and $W$ are finite dimensional. Pick isomorphisms $f \colon V \to F^n$ and $g \colon W \to F^m$. This induces a natural map
$$\Phi \colon \operatorname{Hom}(V, W) \to \operatorname{Hom}(F^n, F^m).$$
Given $\varphi$ we send this to $\Phi(\varphi) = g \circ \varphi \circ f^{-1}$. Suppose that $\varphi_1$ and $\varphi_2 \in \operatorname{Hom}(V, W)$. Pick $v \in V$. Then
\begin{align*}
\Phi(\varphi_1 + \varphi_2)(v) &= (g \circ (\varphi_1 + \varphi_2) \circ f^{-1})(v) \\
&= g((\varphi_1 + \varphi_2)(f^{-1}(v))) \\
&= g(\varphi_1(f^{-1}(v))) + g(\varphi_2(f^{-1}(v))) \\
&= (g \circ \varphi_1 \circ f^{-1})(v) + (g \circ \varphi_2 \circ f^{-1})(v) \\
&= \Phi(\varphi_1)(v) + \Phi(\varphi_2)(v).
\end{align*}
But then $\Phi(\varphi_1 + \varphi_2) = \Phi(\varphi_1) + \Phi(\varphi_2)$. Therefore $\Phi$ respects addition. Similarly $\Phi$ respects scalar multiplication. It follows that $\Phi$ is linear. It is easy to see that $\Phi$ is a bijection. Thus $\Phi$ is a linear isomorphism, and we might as well assume that $V = F^n$ and $W = F^m$.

Now define a map
$$\Psi \colon \operatorname{Hom}(F^n, F^m) \to M_{m,n}(F),$$
by sending a linear map $\varphi$ to the associated matrix $A$. It is again straightforward to check that $\Psi$ is linear. We already know $\Psi$ is bijective. Hence $\Psi$ is a linear isomorphism. So the vector space $\operatorname{Hom}(V, W)$ is isomorphic to $M_{m,n}(F)$. But the latter has dimension $mn = \dim V \cdot \dim W$.
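For a concrete instance of the dimension count (not spelled out in the notes, but standard): the matrices $E_{ij} \in M_{m,n}(F)$, with a $1$ in position $(i, j)$ and zeros elsewhere, form a basis, since every $A = (a_{ij})$ can be written uniquely as
$$A = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} E_{ij}.$$
There are $mn$ such matrices, whence $\dim M_{m,n}(F) = mn$.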

Proposition 14.4. Let $V$ be a finite dimensional vector space over a field $F$ and let $\varphi \colon V \to V$ be a linear function. Then there is a non-zero polynomial $m(x) \in P(F)$ such that $m(\varphi) = 0$.

Proof. Consider the linear transformations $1, \varphi, \varphi^2, \ldots$. Since this is an infinite set of vectors in the finite dimensional vector space $\operatorname{Hom}(V, V)$, it follows that there are scalars $a_0, a_1, a_2, \ldots, a_d$, not all zero, such that
$$a_0 + a_1\varphi + a_2\varphi^2 + \cdots + a_d\varphi^d = 0.$$
Let $m(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_d x^d \in P(F)$. Then $m(\varphi) = 0$ by construction.

Definition 14.5. Let $\varphi \colon V \to V$ be a linear transformation, where $V$ is a finite dimensional vector space. The minimal polynomial of $\varphi$ is the smallest degree monic polynomial $m(x)$ such that $m(\varphi) = 0$.

Clearly the minimal polynomials of $A_0$, $A_1$ and $A_2$ above are $x$, $x^2$ and $x^3$. In what follows we will adopt the convention that the degree of the zero polynomial is undefined.

Lemma 14.6 (Division Algorithm). Let $F$ be a field and let $a(x)$, $b(x) \in P(F)$. Suppose $a(x) \neq 0$. Then we may find $q(x)$ and $r(x) \in P(F)$ such that
$$b(x) = q(x)a(x) + r(x),$$
where either $r(x) = 0$ or $\deg r(x) < \deg a(x)$.

Proof. By induction on the degree of $b(x)$. If either $b(x) = 0$ or $\deg b(x) < \deg a(x)$ then take $q(x) = 0$ and $r(x) = b(x)$. Otherwise we may assume that $\deg b(x) \geq \deg a(x)$ and that the result holds for any polynomial $c(x)$ of smaller degree than $b(x)$.

Suppose that $a(x) = \alpha x^d + \cdots$ and $b(x) = \beta x^e + \cdots$, where the dots indicate lower degree terms. By assumption $d \leq e$. Let
$$c(x) = b(x) - (\gamma x^f)a(x), \qquad \text{where } \gamma = \frac{\beta}{\alpha} \text{ and } f = e - d \geq 0.$$
Then either $c(x) = 0$ or $c(x)$ has smaller degree than the degree of $b(x)$. By induction there are polynomials $q_1(x)$ and $r(x)$ such that
$$c(x) = q_1(x)a(x) + r(x),$$
where either $r(x) = 0$ or $\deg r(x) < d$. Then
\begin{align*}
b(x) &= c(x) + \gamma x^f a(x) \\
&= q_1(x)a(x) + r(x) + \gamma x^f a(x) \\
&= (q_1(x) + \gamma x^f)a(x) + r(x).
\end{align*}
So if we take $q(x) = q_1(x) + \gamma x^f$ then we are done by induction.
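The proof of (14.6) is effective: it is exactly polynomial long division. Here is a minimal sketch in Python over $\mathbb{Q}$; the function name and the coefficient-list representation are my own conventions, not from the notes.

```python
from fractions import Fraction

def poly_divmod(b, a):
    """Divide b(x) by a(x) over Q: returns (q, r) with b = q*a + r and
    either r = [] or deg r < deg a.  Polynomials are coefficient lists,
    lowest degree first, no trailing zeros; [] is the zero polynomial.
    Each loop pass is the step c(x) = b(x) - (gamma x^f) a(x) of 14.6."""
    assert a, "a(x) must be non-zero"
    d = len(a) - 1                        # d = deg a
    q = [Fraction(0)] * max(len(b) - d, 0)
    r = [Fraction(c) for c in b]
    while r and len(r) - 1 >= d:          # while deg r >= deg a
        e = len(r) - 1                    # e = degree of current remainder
        gamma = r[-1] / Fraction(a[-1])   # gamma = beta / alpha
        f = e - d
        q[f] = gamma                      # record the term gamma * x^f
        for i, coeff in enumerate(a):     # r <- r - gamma * x^f * a(x)
            r[f + i] -= gamma * coeff
        while r and r[-1] == 0:           # strip trailing zeros
            r.pop()
    return q, r

# Example: divide x^3 + 2x + 1 by x^2 + 1 (coefficients low to high).
q, r = poly_divmod([1, 2, 0, 1], [1, 0, 1])
print(q, r)   # q = x, r = x + 1, printed as Fractions, low degree first
```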

Theorem 14.7. Let $\varphi \colon V \to V$ be a linear function, where $V$ is a finite dimensional vector space. The minimal polynomial $m(x)$ of $\varphi$ is the unique monic polynomial such that $m(\varphi) = 0$ and such that, if $n(x)$ is any other polynomial with $n(\varphi) = 0$, then $m(x)$ divides $n(x)$.

Proof. Suppose that $m(x)$ is the minimal polynomial. Then $m(x)$ is monic and $m(\varphi) = 0$. Suppose that $n(x)$ is another polynomial such that $n(\varphi) = 0$. By (14.6) we may find $q(x)$ and $r(x)$ such that
$$n(x) = q(x)m(x) + r(x),$$
where $\deg r(x) < \deg m(x)$ (or $r(x) = 0$). Now
$$n(\varphi) = q(\varphi)m(\varphi) + r(\varphi).$$
By assumption $n(\varphi) = m(\varphi) = 0$. But then $r(\varphi) = 0$. By minimality of the degree of $m(x)$, the only possibility is that $r(x) = 0$, the zero polynomial. But then $m(x)$ divides $n(x)$.

Now suppose that $n(x)$ is monic, $n(\varphi) = 0$ and $n(x)$ divides any polynomial which vanishes when evaluated at $\varphi$. Let $m(x)$ be the minimal polynomial. As $m(\varphi) = 0$, $n(x)$ divides $m(x)$. But then the degree of $n(x)$ is at most the degree of $m(x)$. By minimality of the degree of the minimal polynomial, $n(x)$ and $m(x)$ have the same degree. As both are monic, it follows that $n(x) = m(x)$.

Example 14.8. Consider the matrices $A_0$, $A_1$ and $A_2$ above. Now $n_i(x) = x^{i+1}$ is a polynomial such that $n_i(A_i) = 0$. So the minimal polynomial $m_i(x)$ must divide $x^{i+1}$ in each case. From this it is easy to see that the minimal polynomials are in fact the $n_i(x)$, so that $m_0(x) = x$, $m_1(x) = x^2$ and $m_2(x) = x^3$.
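Example 14.8 can also be verified mechanically, using the idea in the proof of Proposition 14.4: vectorise the powers $I, A, A^2, \ldots$ and look for the first linear dependence. A sketch in Python/SymPy (the helper name is mine; SymPy keeps the arithmetic exact):

```python
from sympy import Matrix, symbols, Poly

x = symbols('x')

def minimal_polynomial(A):
    """Minimal polynomial of a square SymPy matrix, found as in the
    proof of Proposition 14.4: the first monic linear dependence among
    the vectorised powers I, A, A^2, ... of A."""
    n = A.shape[0]
    powers = [A**0]
    # By Proposition 14.4 a dependence occurs by degree dim Hom(V,V) = n^2.
    for d in range(1, n * n + 1):
        powers.append(powers[-1] * A)
        # Columns are vec(I), vec(A), ..., vec(A^d).
        M = Matrix.hstack(*[P.reshape(n * n, 1) for P in powers])
        null = M.nullspace()
        if null:
            coeffs = null[0] / null[0][d]   # normalise: monic top term
            return Poly(sum(coeffs[i] * x**i for i in range(d + 1)), x)
    raise RuntimeError("unreachable by Proposition 14.4")

A1 = Matrix([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
A2 = Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
print(minimal_polynomial(A1))   # -> x**2
print(minimal_polynomial(A2))   # -> x**3
```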

For a more involved example, consider the matrix $B = B_k(\lambda) \in M_{k,k}(F)$, where $\lambda$ is a scalar repeated down the main diagonal and we have a trailing sequence of $1$'s on the superdiagonal:
$$B_k(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}.$$
First consider
$$N = B_k(0) = B_k(\lambda) - \lambda I_k = B - \lambda I_k.$$
This matrix is nilpotent. In fact $N^k = 0$, but $N^{k-1} \neq 0$. So if we set $n(x) = (x - \lambda)^k$ then $n(B) = 0$. Once again, the minimal polynomial $m(x)$ of $B$ must divide $n(x)$. So $m(x) = (x - \lambda)^i$ for some $i \leq k$. But since $N^{k-1} \neq 0$, in fact $i = k$, and the minimal polynomial of $B$ is precisely
$$m(x) = (x - \lambda)^k.$$
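A quick numerical check of these claims about $B_k(\lambda)$, for the hypothetical values $k = 4$ and $\lambda = 2$:

```python
import numpy as np

k, lam = 4, 2.0
# B_k(lambda): lambda on the main diagonal, 1's on the superdiagonal
# (np.eye's keyword k=1 selects the superdiagonal).
B = lam * np.eye(k) + np.eye(k, k=1)
N = B - lam * np.eye(k)                   # N = B_k(0)

print(np.linalg.matrix_power(N, k - 1))   # non-zero: a single 1 in the corner
print(np.linalg.matrix_power(N, k))       # the zero matrix

# Hence n(x) = (x - lam)^k kills B:
nB = np.linalg.matrix_power(B - lam * np.eye(k), k)
print(np.allclose(nB, 0))                 # True
```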