Chapters 4 & 5: Vector Spaces & Linear Transformations


Philip Gressman, University of Pennsylvania (Math 240-002, 2014C)

Objective

The purpose of Chapter 4 is to think abstractly about vectors. We will identify certain fundamental laws which are true for the kind of vectors (i.e., column vectors) that we already know, and then consider the abstract notion of a vector space. The point of doing this is to apply the theory we already know and love to situations which appear superficially different but are really very similar from the right perspective. The main application for us will be solutions of linear ordinary differential equations: even though the solutions are functions instead of column vectors, they behave in exactly the same way and can be understood completely.

Abstract Setup

We begin with a collection V of things we call vectors and a collection F of scalars (usually the real, rational, or complex numbers).

A1 We are given a formula for vector addition u + v. The sum of two things in V must be another thing in V.
A2 We are given a formula for scalar multiplication cv for c ∈ F and v ∈ V. It must be another thing in V.
A3 (Commutativity) It should be true that u + v = v + u for any u, v ∈ V.
A4 (Associativity of +) It should be true that (u + v) + w = u + (v + w) for any u, v, w ∈ V.
A5 (Additive identity) There should be a vector called 0 such that v + 0 = v for any v ∈ V.
A6 (Additive inverses) For any v ∈ V, there should be a vector -v so that v + (-v) = 0.
A7 (Multiplicative identity) For any v ∈ V, we should have 1v = v.

Abstract Setup (continued)

A8 (Associativity of scalar multiplication) For any v ∈ V and c_1, c_2 ∈ F, we should have (c_1 c_2)v = c_1(c_2 v).
A9 (Distributivity I) For any u, v ∈ V and c ∈ F, it should be true that c(u + v) = cu + cv.
A10 (Distributivity II) For any c_1, c_2 ∈ F and u ∈ V, it should be true that (c_1 + c_2)u = c_1 u + c_2 u.

The Point: Someone hands you a set of things V they call vectors and a set F of scalars. If they call V a vector space over F, they mean that they have checked that A1 through A10 are always true. If someone asks you whether a given V is a vector space over F, you must check that each of A1 through A10 holds under all circumstances.
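To make the checking concrete, here is a quick numerical spot-check of several of the axioms for R^4 with the usual operations, written in NumPy. This is only a sanity check on randomly chosen vectors, not a proof: the axioms must hold for all vectors and all scalars.

```python
import numpy as np

# Numerical spot-check (not a proof) of some vector-space axioms for R^4.
rng = np.random.default_rng(0)
u, v, w = rng.random(4), rng.random(4), rng.random(4)
c1, c2 = 2.5, -1.5

assert np.allclose(u + v, v + u)                    # A3 commutativity
assert np.allclose((u + v) + w, u + (v + w))        # A4 associativity of +
assert np.allclose(v + np.zeros(4), v)              # A5 additive identity
assert np.allclose(v + (-v), np.zeros(4))           # A6 additive inverses
assert np.allclose((c1 * c2) * v, c1 * (c2 * v))    # A8 scalar associativity
assert np.allclose(c1 * (u + v), c1 * u + c1 * v)   # A9 distributivity I
assert np.allclose((c1 + c2) * u, c1 * u + c2 * u)  # A10 distributivity II
```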

Vector Spaces: Examples

- Real-valued convergent sequences {a_n} are a vector space over R when {a_n} + {b_n} is the sequence c_n := a_n + b_n and scalar multiplication is done termwise. They are not a vector space over C.
- Complex-valued convergent sequences {b_n} are a vector space over C. They are also a vector space over R.
- Real-valued functions f on an open interval I such that f, f', f'', ..., f^(k) are all continuous functions on I form a vector space under pointwise addition and pointwise scalar multiplication. It is called C^k(I).
- Polynomials with real coefficients are a vector space over R under the usual notions of addition and scalar multiplication.

Vector Spaces: Beyond the Definition

Any two things which are vector spaces over F will have many things in common, far beyond just the properties A1 through A10. For example:

- 0u = 0 for any u ∈ V. Why? Well, 0u + u = (0 + 1)u = 1u = u; now add -u to both sides.
- There is only one vector 0 which always works for A5. Why? If 0' also works, then 0' = 0' + 0 = 0 + 0' = 0.
- c0 = 0 for any c ∈ F. Why? c0 + c0 = c(0 + 0) = c0 by Distributivity I; now add -(c0) to both sides.

See Theorem 4.2.6 for several more examples.

Subspaces

A vector subspace S is a subset of V which is a vector space in its own right. The conditions to check are A1 and A2 (namely, that sums of things in S remain in S and that scalar multiples of things in S still belong to S). All the other conditions are automatically true because they were true in V.

Important Example: If you have any homogeneous system of equations in n unknowns, then the solution set of the system will be a subspace of F^n (that's the notation for n-tuples of things in F).

Second Important Example: Solutions of any linear homogeneous ODE form a vector subspace of the vector space of all functions on the real line (or on a finite interval I if you wish).

Subspaces (continued)

For systems of equations with m unknowns, the solutions of any homogeneous system always form a vector subspace of R^m (or C^m if solutions are allowed to be complex). Given an n × m matrix A, we define the null space or kernel of A to be the set of those vectors x in R^m (or C^m) such that Ax = 0.
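As an illustration, here is a minimal NumPy sketch that computes a basis of the null space numerically via the singular value decomposition; the matrix and the tolerance are illustrative choices, not from the slides.

```python
import numpy as np

# Sketch: a basis for the null space (kernel) of A via the SVD.
# Rows of Vt whose singular values are ~0 span the kernel.
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])           # rank 1, so nullity = 3 - 1 = 2
U, s, Vt = np.linalg.svd(A)
tol = 1e-10
null_basis = Vt[np.sum(s > tol):].T    # columns form a basis of the kernel
print(null_basis.shape[1])             # 2
print(np.allclose(A @ null_basis, 0))  # True: every basis vector solves Ax = 0
```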

Linear Combinations

If you are given vectors v_1, v_2, ..., v_k, any vector w which can be written in the form

w = c_1 v_1 + c_2 v_2 + ... + c_k v_k

for scalars c_1, c_2, ..., c_k is said to be a linear combination of v_1, ..., v_k.

Definition: Given any list of vectors v_1, ..., v_k ∈ V, the set of all linear combinations of v_1, ..., v_k is a vector subspace of V. It is called the span of v_1, ..., v_k.

Types of Problems

Determine whether a given set of vectors spans a vector space (tests sketched in code below):
- A collection of fewer than n vectors in R^n never spans all of R^n.
- A collection of exactly n vectors in R^n spans all of R^n if and only if the determinant of the matrix they make by adjoining columns is nonzero.
- A collection of more than n row vectors spans R^n if and only if the rank of the matrix made by adjoining rows is n.

Given a list of vectors, classify all vectors in the span:
- Write a system of equations whose unknowns are the coefficients c_j in the linear combination. Try to solve the system b = c_1 v_1 + ... + c_k v_k for an arbitrary vector b.
- Determine whether the spans of two sets are the same or different.
- Identify whether a particular vector is in the span or not.
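Here is a minimal sketch of the determinant and rank tests from the first group above, with illustrative vectors.

```python
import numpy as np

# Sketch: do v1, v2, v3 span R^3?  With exactly n = 3 vectors, test the
# determinant of the matrix made by adjoining columns; the rank test
# works in general.
v1 = np.array([1., 0., 1.])
v2 = np.array([0., 1., 1.])
v3 = np.array([1., 1., 0.])
M = np.column_stack([v1, v2, v3])

print(abs(np.linalg.det(M)) > 1e-12)  # True: nonzero determinant, so they span
print(np.linalg.matrix_rank(M) == 3)  # True: the equivalent rank test
```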

Types of Problems (continued)

Identify when a specific vector y is in the span of v_1, ..., v_k: determine whether the matrix equation (v_1 | ... | v_k) c = y, whose unknown is the coefficient vector c = (c_1, ..., c_k)^T, can be solved.

Classify the span of v_1, ..., v_k: treat the entries of y as variables and determine what constraints (if any) on the entries of y are needed to make the equations consistent.

Two Ways to Describe Subspaces

There are two basic ways to give a complete description of a vector subspace:

- Give a list of vectors that span the subspace; in other words, find a collection of vectors which is (1) small enough to belong to the subspace, and (2) big enough that they don't belong to any smaller subspace.
- Give a complete list of linear constraints (i.e., equations) that all vectors in the subspace must satisfy.

In both cases, it is important to know the minimal number of objects (either vectors or constraints) that must be there: there's not much point in working harder than necessary.

Linear Dependence

A collection of vectors v_1, ..., v_k is said to be linearly dependent when it is possible to write 0 as a nontrivial linear combination of v_1, ..., v_k, i.e.,

c_1 v_1 + ... + c_k v_k = 0 with not every one of c_1, ..., c_k equal to zero.

Linear dependence of a set of vectors means that one or more vectors may be removed without changing their span. In matrix terms, linear dependence means that the system (v_1 | ... | v_k) c = 0 has nontrivial solutions c = (c_1, ..., c_k)^T.

Bases

Definition: A basis of a vector space V is a collection of vectors v_1, ..., v_k which is linearly independent and spans V. In other words,

c_1 v_1 + c_2 v_2 + ... + c_k v_k = y

has a unique solution for any choice of y ∈ V (i.e., always one solution, never more than one). The column vector (c_1, ..., c_k)^T is called the coordinates of y with respect to the ordered basis v_1, ..., v_k.
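A sketch of finding the coordinates of y with respect to a basis of R^3 by solving the linear system above; the basis and the vector y are illustrative choices.

```python
import numpy as np

# Sketch: coordinates of y with respect to a basis of R^3.
# Solving [v1 v2 v3] c = y gives the unique coordinate vector c.
v1 = np.array([1., 0., 0.])
v2 = np.array([1., 1., 0.])
v3 = np.array([1., 1., 1.])
B = np.column_stack([v1, v2, v3])
y = np.array([2., 3., 4.])

c = np.linalg.solve(B, y)     # coordinates of y in the basis v1, v2, v3
print(c)                      # [-1. -1.  4.]
print(np.allclose(B @ c, y))  # True: c1*v1 + c2*v2 + c3*v3 = y
```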

Bases (continued)

The standard basis of R^n is given by e_1 = (1, 0, ..., 0)^T, e_2 = (0, 1, 0, ..., 0)^T, ..., e_n = (0, ..., 0, 1)^T.

- Bases of the same vector space must always have the same number of vectors. This number is called the dimension of the vector space. Any basis of R^n must have n vectors.
- If dim V = n, then any list of more than n vectors must be linearly dependent.
- If dim V = n, then a list of fewer than n vectors never spans.
- If you know you have the right number of vectors, then linear independence and spanning go hand in hand (i.e., if one is true then the other is true; if one is false then the other is false).

Changing Bases

Coordinates change when the basis changes; the effect of changing bases is encoded in the change of basis matrix. The columns of the change of basis matrix are the coordinates in the new basis of the old basis vectors.

Basic Skills (the last one is sketched in code below):
- How do you verify that a given set is a basis of a vector space?
- How do you discover a basis for a given vector space?
- How do you convert coordinates from one basis to another?
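A minimal sketch of converting coordinates between two bases of R^2. The bases are illustrative; the key identity is c_new = P_new^{-1} P_old c_old, where the columns of P_old and P_new are the basis vectors written in the standard basis.

```python
import numpy as np

# Sketch: change of basis in R^2.
P_old = np.column_stack([[1., 0.], [1., 1.]])  # old basis: (1,0), (1,1)
P_new = np.column_stack([[1., 1.], [0., 1.]])  # new basis: (1,1), (0,1)

change = np.linalg.inv(P_new) @ P_old          # old coordinates -> new coordinates
c_old = np.array([2., 3.])                     # coordinates in the old basis
c_new = change @ c_old

# Both coordinate vectors describe the same underlying vector:
print(np.allclose(P_new @ c_new, P_old @ c_old))  # True
```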

Linear Independence Redux

The mechanical steps to check a list of vectors v_1, ..., v_k for linear independence:

- If they all belong to some vector space V of dimension < k, then stop: they cannot be linearly independent.
- If they belong to R^n or C^n, treat the statement c_1 v_1 + ... + c_k v_k = 0 as an equation to solve for the unknowns c_1, ..., c_k. Solve by row reduction as usual; if there is only one solution (the trivial one), they are linearly independent.
- If they belong to an exotic vector space but you happen to know a basis, write everything out in coordinates. This reduces the computational problem to the previous case.
- If you do not know a nice basis, e.g. in C^∞(I), you have to be more clever. In this case, one option is to use what is known as the Wronskian (sketched below).
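For the last case, here is a small sympy sketch of the Wronskian test; the functions sin and cos are an illustrative choice.

```python
import sympy as sp

# Sketch: Wronskian test for linear independence in a function space.
# A Wronskian that is not identically zero proves independence.
x = sp.symbols('x')
f, g = sp.sin(x), sp.cos(x)
W = sp.Matrix([[f, g],
               [sp.diff(f, x), sp.diff(g, x)]]).det()
print(sp.simplify(W))  # -1: never zero, so sin and cos are independent
```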

Rank-Nullity Theorem

Theorem: For any m × n matrix A, rank(A) + nullity(A) = n. In other words, the rank of A plus the dimension of the null space of A equals the number of columns.

Remember:
- The rank of the matrix equals the number of nonzero rows after row reduction.
- The rank is also the dimension of the row space.
- The rank is also the dimension of the span of the columns.
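A quick numerical check of the theorem on an illustrative 3 × 4 matrix.

```python
import numpy as np

# Sketch: verify rank(A) + nullity(A) = n (number of columns).
A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],     # = 2 * row 1, so it adds no rank
              [1., 0., 1., 0.]])
n = A.shape[1]
rank = np.linalg.matrix_rank(A)
nullity = n - rank                  # rank-nullity theorem
print(rank, nullity)                # 2 2

# Cross-check the nullity: near-zero singular values (plus any columns
# beyond min(m, n)) count the dimension of the kernel.
s = np.linalg.svd(A, compute_uv=False)
print(int(np.sum(s < 1e-10)) + (n - len(s)))  # 2, matching the nullity
```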

Applications of Rank-Nullity

Analysis of the equation Ax = 0 when A is an m × n matrix:
- If the rank of A is n, then the only solution is x = 0.
- Otherwise, if r is the rank, then the nullity is n - r, meaning that all solutions have the form x = c_1 x_1 + ... + c_{n-r} x_{n-r} for any linearly independent set of solutions x_1, ..., x_{n-r}.

Analysis of Ax = b:
- If b is not in the column space, then there is no solution.
- If b is in the column space and rank(A) = n, then there is a unique solution.
- If b is in the column space and rank(A) = r < n, then x = c_1 x_1 + ... + c_{n-r} x_{n-r} + x_p, where x_1, ..., x_{n-r} are linearly independent solutions of Ax = 0 and x_p is any single solution of Ax = b.

Linear Transformations

Definition: A linear transformation is any mapping T : V → W between vector spaces V and W over the same field which satisfies T(u + v) = Tu + Tv and T(cv) = c(Tv) for any vectors u, v ∈ V and any scalar c.

The canonical example is when T is multiplication by some given matrix A, but there are a variety of other things which are linear transformations.

Connection to Matrices

Suppose T is a linear transformation from a vector space V with ordered basis B into a vector space W with ordered basis C.

Basic Question: If v ∈ V has coordinates (c_1, ..., c_n)^T with respect to B, what will be the coordinates (d_1, ..., d_m)^T of Tv with respect to C? (The answer is computed by the matrix of T with respect to B and C, illustrated below.)
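As a sketch of the answer, here is the matrix of the linear transformation T = d/dx on polynomials of degree at most 3, built column by column: column j holds the coordinates of T applied to the j-th basis vector. The basis (1, x, x^2, x^3) for both B and C is an illustrative choice.

```python
import sympy as sp

# Sketch: the matrix of T = d/dx on polynomials of degree <= 3
# with respect to the ordered basis B = C = (1, x, x^2, x^3).
x = sp.symbols('x')
basis = [sp.Integer(1), x, x**2, x**3]

cols = []
for b in basis:
    Tb = sp.diff(b, x)                       # apply T to a basis vector
    coeffs = sp.Poly(Tb, x).all_coeffs()[::-1] if Tb != 0 else []
    cols.append([coeffs[i] if i < len(coeffs) else 0 for i in range(4)])

T = sp.Matrix(4, 4, lambda i, j: cols[j][i]) # columns are coordinate vectors
sp.pprint(T)  # rows: [0 1 0 0], [0 0 2 0], [0 0 0 3], [0 0 0 0]
```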

Geometry of Matrix Multiplication

To understand the importance of eigenvectors and eigenvalues (§§5.6-8), it is first necessary to understand the geometric meaning of matrix multiplication. If you visit the link http://www.math.hawaii.edu/phototranspage.html, you can explore the result of applying any 2 × 2 matrix you choose to any image of your choice. I chose the image http://freezertofield.com/wp-content/uploads/2011/09/baby-doll-sheep-300x275.jpg (thanks, Google!):

Geometry of Matrix Multiplication (examples)

[Each slide showed two images: the original sheep in the xy-plane on the left, and its image in the x'y'-plane on the right, under (x', y')^T = A (x, y)^T for the indicated matrix A.]

- A = [0.5 0; 0 1]: horizontal compression by a factor of 0.5.
- A = [1 0; 0 1.8]: vertical stretch by a factor of 1.8.
- A = [1 0.7; 0 1]: horizontal shear.
- A = [1 0; 0.6 1]: vertical shear.
- A = [1.2 0.5; 0.5 1.2]: stretching along the diagonal directions (1, 1) and (1, -1).
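Since the images themselves are not reproduced here, a tiny NumPy sketch shows the same effect numerically by applying the horizontal shear above to the corners of the unit square.

```python
import numpy as np

# Sketch: the horizontal shear [1 0.7; 0 1] applied to the unit square.
shear = np.array([[1., 0.7],
                  [0., 1. ]])
square = np.array([[0., 1., 1., 0.],   # x-coordinates of the corners
                   [0., 0., 1., 1.]])  # y-coordinates of the corners
print(shear @ square)
# [[0.  1.  1.7 0.7]
#  [0.  0.  1.  1. ]]  -- the top edge slides to the right: a horizontal shear
```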

Eigenvectors

Eigenvectors and Eigenvalues: For a square matrix A, if there is a (nonzero) column vector E and a number λ so that AE = λE, then we call E an eigenvector and λ its eigenvalue.

Geometrically, the existence of an eigenvector means that there is a direction which is preserved by the matrix A: in the direction of E, the only effect of multiplication by A is to stretch (or shrink, or reflect) by a factor λ.

The nicest possible situation is when an n × n matrix has n linearly independent eigenvectors. This would mean that the matrix A can be understood solely in terms of stretchings in each of the various directions. In practice, this does not always happen.

Finding the Eigenvalues

Characteristic Equation: If A is an n × n matrix, then the determinant det(A - λI) is a polynomial of degree n in λ (called the characteristic polynomial). The eigenvalues of A must be solutions of the equation det(A - λI) = 0. Why? Because det(A - λI) = 0 if and only if there is a nonzero column vector E for which (A - λI)E = 0, which means exactly that E is an eigenvector with eigenvalue λ.

Fact: For triangular matrices, the eigenvalues are just the diagonal entries.
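A minimal NumPy sketch of the characteristic polynomial and its roots for an illustrative 2 × 2 matrix.

```python
import numpy as np

# Sketch: characteristic polynomial and eigenvalues of a small matrix.
A = np.array([[2., 1.],
              [1., 2.]])
print(np.poly(A))            # [ 1. -4.  3.], i.e. lambda^2 - 4*lambda + 3
print(np.roots(np.poly(A)))  # [3. 1.]: roots of the characteristic polynomial
print(np.linalg.eigvals(A))  # [3. 1.]: the same eigenvalues, computed directly
```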

Finding the Eigenvectors

Solving for E: Once you know the roots of the characteristic polynomial, the way you find eigenvectors is to solve the system of equations (A - λ_j I)X = 0, where λ_j is fixed to be any root. Any nonzero solution X is an eigenvector.

Fact: Assuming you've solved for the eigenvalues correctly, there will always be at least one solution of this system (i.e., at least one eigenvector). Sometimes there will be multiple eigenvectors. You should always choose them to be linearly independent (and remember that the number of identically zero rows you find when row reducing A - λI tells you exactly how many eigenvectors to find).
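A sketch of the same computation done numerically: np.linalg.eig returns the eigenvalues together with unit-length eigenvectors as the columns of the second output.

```python
import numpy as np

# Sketch: eigenvectors via np.linalg.eig.  Column i of `vecs` is a
# unit eigenvector for the eigenvalue vals[i].
A = np.array([[2., 1.],
              [1., 2.]])
vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):
    print(lam, np.allclose(A @ v, lam * v))  # A v = lambda v holds for each pair
```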

Points to Remember

Not Enough Eigenvectors: Sometimes an n × n matrix may not have n linearly independent eigenvectors (eigenvectors with different eigenvalues are automatically linearly independent). This can only happen when the characteristic polynomial has repeated roots, and even then it only happens some of the time.

Complex Eigenvalues: If there are complex roots of the characteristic equation, you will get complex eigenvalues. If your matrix has all real entries, then complex eigenvalues always appear in conjugate pairs. Moreover, if you find an eigenvector for one of these eigenvalues, you can automatically construct an eigenvector for the conjugate eigenvalue by taking the complex conjugate of the eigenvector itself.

Diagonalization

Suppose V_1, ..., V_n are column vectors. Linear combinations of these vectors are vectors which can be written as W = c_1 V_1 + ... + c_n V_n for some scalars c_1, ..., c_n. In matrix form, the formula becomes (V_1 | ... | V_n) c = W, where c = (c_1, ..., c_n)^T.

If V_1, ..., V_n are linearly independent vectors in n-dimensional space, then the matrix formed by concatenating the column vectors (call it P) will be invertible.

Diagonalization (continued)

In short, if W is a column vector and P is a matrix whose columns are linearly independent vectors V_1, ..., V_n, then the column vector P^{-1}W tells you the unique choice of coefficients c_1, ..., c_n necessary to make W = c_1 V_1 + ... + c_n V_n.

Next, suppose that V_1, ..., V_n happen to be eigenvectors of a matrix A (with eigenvalues λ_1, ..., λ_n). Then

W = c_1 V_1 + ... + c_n V_n implies AW = c_1 λ_1 V_1 + ... + c_n λ_n V_n.

In other words, P^{-1}AW = D P^{-1}W, where D = diag(λ_1, λ_2, ..., λ_n) is the diagonal matrix with the eigenvalues down its diagonal.

Diagonalization (conclusions)

We conclude: if A is an n × n matrix which happens to have n linearly independent eigenvectors, let P be the matrix whose columns are those eigenvectors and let D be the diagonal matrix made of the eigenvalues of A. Then P^{-1}AW = DP^{-1}W for any column vector W you like. Consequently A = PDP^{-1} and D = P^{-1}AP. We call D the diagonalization of A.

- Having a diagonalization of A means that in some appropriate basis, the matrix A just acts like a coordinate-wise scaling.
- A sufficient condition for A to be diagonalizable is that it have n distinct eigenvalues.
- High powers of A are easy to compute with diagonalizations: A^N = PD^N P^{-1} (sketched below).
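A sketch of diagonalization and fast powers in NumPy, for an illustrative symmetric matrix (symmetric so that everything stays real).

```python
import numpy as np

# Sketch: diagonalize A = P D P^{-1} and use it to compute a high power.
A = np.array([[2., 1.],
              [1., 2.]])
vals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(vals)

print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}
A10 = P @ np.diag(vals**10) @ np.linalg.inv(P)    # A^10 = P D^10 P^{-1}
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))  # True
```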

Diagonalization (symmetric matrices and uses)

If A is symmetric, then it can always be diagonalized, and in this case the eigenvectors can always be chosen to be mutually orthogonal. Diagonalizations are useful for reducing the complexity of problems (since operating with diagonal matrices is generally much easier than with more general matrices). Applications include the classification of conic sections and the solution of systems of linear ordinary differential equations.

Diagonalization: Further Examples

- Classification and transformation of conic sections
- Logistics / planning / random walks
- Computing functions of a matrix
- Stress matrices: http://en.wikipedia.org/wiki/stress_(mechanics)
- Data analysis: http://en.wikipedia.org/wiki/Principal_component_analysis
- Moments of inertia: http://en.wikipedia.org/wiki/moment_of_inertia
- Quantum mechanics...

Jordan Forms, or When Diagonalization Doesn't Work

Some matrices / linear transformations cannot be diagonalized. Examples: the matrix [1 2 3; 0 1 2; 0 0 1], or differentiation on P_n for n ≥ 1.

In this case we can do the next best thing and find what is called the Jordan Canonical Form. A Jordan block corresponding to λ is a square matrix with λ in each diagonal entry, 1 in every entry immediately above the diagonal, and zero elsewhere.

Finding the Jordan Form

For any fixed eigenvalue λ_0:
- The number of blocks equals the number of eigenvectors, which equals the nullity of A - λ_0 I.
- The sum of the sizes of all blocks associated to λ_0 equals the power of (λ - λ_0) found in the characteristic polynomial.

Example (see the sketch below): If det(A - λI) = (λ - 1)(λ - 2)^3, then we can say that:
- A is a 4 × 4 matrix.
- There is one eigenvector with eigenvalue 1.
- As for eigenvalue 2, there are either three eigenvectors, two eigenvectors (1 × 1 and 2 × 2 Jordan blocks), or one eigenvector (a single 3 × 3 Jordan block).
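A minimal sympy sketch: the classic 2 × 2 matrix with a repeated eigenvalue but only one eigenvector, so a single 2 × 2 Jordan block appears. The matrix is an illustrative choice.

```python
import sympy as sp

# Sketch: Jordan form with sympy.  This matrix has eigenvalue 1 with only
# one eigenvector, so it is not diagonalizable.
A = sp.Matrix([[1, 1],
               [0, 1]])
P, J = A.jordan_form()
sp.pprint(J)                   # [[1, 1], [0, 1]]: one 2x2 Jordan block, lambda = 1
print(A == P * J * P.inv())    # True
```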

Some Examples to Think About

Example 1: det(A - λI) = (λ - 1)(λ - 2)^2 (λ - 3)^3, the nullity of A - 2I is 2, and the nullity of A - 3I is also 2.

Example 2: det(A - λI) = (λ - 1)^2 (λ - 2)^2 (λ - 3)^4, and the nullities of A - I, A - 2I, and A - 3I are all 2.

Generalized Eigenvectors

Definition: A vector v ≠ 0 is called a generalized eigenvector with eigenvalue λ when (A - λI)^p v = 0 for some p.

If (A - λI)^p v = 0, then automatically (A - λI)^{p+1} v = 0, and so on for all higher powers. Thus if v is specified, we generally need to know the smallest value of p that makes the formula true.

To build a basis in which the matrix takes its Jordan form, we start with vectors v such that (A - λI)^p v = 0 with p minimal; then we put the vectors

(A - λI)^{p-1} v, (A - λI)^{p-2} v, ..., (A - λI)v, v

in that order into the basis.
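A sketch of building such a Jordan chain by hand for the same 2 × 2 example as above: here v satisfies (A - I)^2 v = 0 but (A - I)v ≠ 0, so p = 2 is minimal, and the ordered chain ((A - I)v, v) is a basis in which A takes its Jordan form.

```python
import sympy as sp

# Sketch: a Jordan chain for the 2x2 Jordan block with eigenvalue 1.
A = sp.Matrix([[1, 1],
               [0, 1]])
N = A - sp.eye(2)            # A - lambda*I with lambda = 1
v = sp.Matrix([0, 1])        # generalized eigenvector of minimal order p = 2
chain = [N * v, v]           # ((A - I)v, v) = ((1,0)^T, (0,1)^T)

P = sp.Matrix.hstack(*chain) # put the chain, in order, into the basis
sp.pprint(P.inv() * A * P)   # the 2x2 Jordan block for lambda = 1
```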