Lecture 11: Finish Gaussian elimination and applications; intro to eigenvalues and eigenvectors (1)


Travis Schedler. Tue, Oct 18, 2011 (version: Tue, Oct 18, 6:00 PM)

Goals (2)
- Solving systems of equations
- PLU decomposition of matrices
- Eigenvalues and eigenvectors
- Invariant subspaces
Read the Gaussian Elimination handout (on the website).

Warm-up exercise (3)
Let V be a finite-dimensional vector space and let T ∈ L(V). Read both of the following, then prove the one your group is assigned:
(a) Prove that (I, T, T^2, ..., T^{(dim V)^2}) is linearly dependent. Hint: use that dim L(V) = (dim V)^2 (we proved this from the fact that M is an isomorphism).
(b) Recall from the PS that, if p(x) = a_m x^m + ... + a_1 x + a_0 is a polynomial, then we set p(T) = a_m T^m + ... + a_1 T + a_0 I. Using (a), prove that p(T) = 0 for some nonzero polynomial p of degree at most (dim V)^2.
Now, if F = C, we can write p(x) = a_m (x − r_1) ... (x − r_m) (fundamental theorem of algebra). From the last PS (#9), if V ≠ {0}, we conclude that, for some j, (T − r_j I) is not injective. That is, there exists a nonzero v ∈ null(T − r_j I), i.e., a nonzero eigenvector v of eigenvalue r_j ∈ C (Tv = r_j v).

Solution to warm-up exercise (4)
(a) Since this is a list of length (dim V)^2 + 1 > (dim V)^2 in a vector space of dimension (dim V)^2, it must be linearly dependent.
(b) From (a), a_0 I + a_1 T + ... + a_{(dim V)^2} T^{(dim V)^2} = 0 for some a_0, ..., a_{(dim V)^2}, not all zero. Set p(x) = a_0 + a_1 x + ... + a_{(dim V)^2} x^{(dim V)^2}. Then p is nonzero and p(T) = 0.
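As a concrete illustration (not part of the lecture), one can exhibit such a p for a specific 2×2 matrix: the degree-2 polynomial p(x) = x^2 − (tr T)x + (det T) satisfies p(T) = 0, well under the (dim V)^2 = 4 degree bound. A minimal pure-Python check:

```python
# Illustrative sketch: for a concrete 2x2 matrix T, the polynomial
# p(x) = x^2 - (tr T) x + (det T) annihilates T, giving a nonzero p
# of degree 2 <= (dim V)^2 = 4 with p(T) = 0.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal(c, A):
    return [[c * x for x in row] for row in A]

T = [[1, 2], [3, 4]]
I = [[1, 0], [0, 1]]
tr = T[0][0] + T[1][1]                     # trace = 5
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]  # determinant = -2

# p(T) = T^2 - (tr T) * T + (det T) * I
pT = mat_add(mat_add(mat_mul(T, T), scal(-tr, T)), scal(det, I))
print(pT)  # [[0, 0], [0, 0]]
```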

Solving systems of equations (5)
Observe: a system of linear equations Σ_j a_{i,j} x_j = b_i (one equation for each i) is Ax = b, with A ∈ Mat(m, n, F), b ∈ Mat(m, 1, F), and x = (x_1, ..., x_n)^T.
To solve: perform Gaussian or Gauss–Jordan elimination on the augmented matrix (A | b) (on board).
Equivalently: if SA = C, for C (reduced) row echelon, then Ax = b is equivalent to Cx = Sb. Here we just set the free entries of x arbitrarily and solve for the pivot entries.
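A minimal Gauss–Jordan sketch (in Python with exact rationals; the lecture's handout is the authoritative algorithm) that reduces (A | b) to reduced row echelon form and reads off the solution:

```python
# A minimal Gauss-Jordan elimination sketch over Q: reduce the augmented
# matrix (A | b) to reduced row echelon form, then read off the solution
# when A is square and invertible.
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix to reduced row echelon form, in place."""
    rows, cols = len(aug), len(aug[0])
    pivot_row = 0
    for col in range(cols - 1):          # last column is b
        # find a row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, rows) if aug[r][col] != 0), None)
        if pr is None:
            continue                     # free column, no pivot here
        aug[pivot_row], aug[pr] = aug[pr], aug[pivot_row]   # row swap
        piv = aug[pivot_row][col]
        aug[pivot_row] = [x / piv for x in aug[pivot_row]]  # scale pivot to 1
        for r in range(rows):            # clear the column above and below
            if r != pivot_row and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[pivot_row])]
        pivot_row += 1
    return aug

# Solve  x + 2y = 5,  3x + 4y = 6
aug = [[Fraction(1), Fraction(2), Fraction(5)],
       [Fraction(3), Fraction(4), Fraction(6)]]
gauss_jordan(aug)
solution = [row[-1] for row in aug]
print(solution)  # [Fraction(-4, 1), Fraction(9, 2)]
```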

PLU decomposition (6)
Motivation: view SA = B, for B row echelon, as a decomposition A = S^{-1} B.
- Gaussian elimination gives E_m E_{m−1} ... E_1 A = B.
- The E_i that are not permutations are lower triangular.
- The product of lower-triangular matrices is also lower triangular.
- So if there are no permutations, we get SA = B with S lower triangular. Hence A = LB, for L = S^{-1} also lower triangular.
- If there are permutations, we can do those first, once we know what they are. Then P^{-1} A = LB, for L lower triangular and P^{-1} a permutation matrix.
Upshot: A = PLB. If B is invertible [which requires B to be square, i.e., m = n], then B is upper triangular. Then write U = B, and A = PLU.
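The no-permutation case A = LU can be sketched as follows (illustrative only; it assumes no row swaps are ever needed, and the function name `lu_no_pivot` is ours):

```python
# A sketch of the no-permutation case A = LU, using exact rationals.
# L collects the elimination multipliers (so L = S^{-1} in the notation
# above), and U is the row echelon matrix produced by elimination.
from fractions import Fraction

def lu_no_pivot(A):
    """Return (L, U) with A = L U, L unit lower triangular, U upper triangular.
    Assumes no row swaps are needed (every pivot encountered is nonzero)."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]       # elimination multiplier
            L[i][k] = m                 # undoing the row operation goes into L
            U[i] = [x - m * y for x, y in zip(U[i], U[k])]
    return L, U

A = [[2, 1], [4, 5]]
L, U = lu_no_pivot(A)
# Recombine to check A = L U
prod = [[sum(L[i][k] * U[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod == [[2, 1], [4, 5]])  # True
```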

Uniqueness of reduced row echelon form (7)
Theorem. For every matrix A ∈ Mat(m, n, F), there exists a unique reduced row echelon form matrix C such that SA = C for some invertible S.
Note that row operations, and in particular multiplication by invertible matrices, leave the row space unchanged. We prove:
Theorem. Let U ⊆ Mat(1, n, F) be a subspace. Then there exists a unique reduced row echelon form matrix C such that rowspace(C) = U.
Proof (on board): The last nonzero row of such a matrix is the unique nonzero vector in the row space with as many zeros as possible followed by a 1. The remaining rows span the complementary subspace U′ ⊆ U of vectors with a 0 in the pivot entry of the last row. By induction on dim U, the remaining rows form the unique reduced row echelon form matrix with U′ as its row space.
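The uniqueness theorem can be observed numerically (a sketch, not a proof): two matrices whose rows span the same subspace reduce to the identical RREF.

```python
# Illustration of the uniqueness theorem: two matrices with the same
# row space reduce to the identical reduced row echelon form.
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M, computed over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pr = 0
    for c in range(cols):
        r = next((i for i in range(pr, rows) if M[i][c] != 0), None)
        if r is None:
            continue
        M[pr], M[r] = M[r], M[pr]
        piv = M[pr][c]
        M[pr] = [x / piv for x in M[pr]]
        for i in range(rows):
            if i != pr and M[i][c] != 0:
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[pr])]
        pr += 1
    return M

A = [[1, 2, 3], [4, 5, 6]]
B = [[5, 7, 9], [3, 3, 3]]   # rows are sums/differences of A's rows: same row space
print(rref(A) == rref(B))    # True: same row space forces the same RREF
```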

Eigenvalues (8)
Definition. A (nonzero) eigenvector v ∈ V of T ∈ L(V) of eigenvalue λ is a solution of the equation Tv = λv.
Ambiguity: sometimes "eigenvector" implies nonzero, and sometimes we allow the zero eigenvector. However:
Definition. We call λ ∈ F an eigenvalue of T if there exists a nonzero eigenvector v of T of eigenvalue λ.
So "an eigenvalue of T" always means there is a nonzero eigenvector.
Definition. Given any λ ∈ F, the λ-eigenspace of T is the collection of all eigenvectors of T with eigenvalue λ, together with the zero vector.
Then λ is an eigenvalue of T iff the λ-eigenspace is nonzero.

Eigenspaces are vector spaces (9)
Proposition. The λ-eigenspace of T is a vector space.
Proof: If u and v are eigenvectors of eigenvalue λ, then T(u + v) = T(u) + T(v) = λ(u + v), so u + v is an eigenvector of eigenvalue λ. The rest (closure under scalar multiplication, and containing the zero vector) is similar.
Theorem (Theorem 5.10). If F = C, and V is finite-dimensional and nonzero, then every T ∈ L(V) has an eigenvalue.
Proof: This was in the warm-up exercise!

Real transformations (10)
However, if F = R, then not all linear transformations admit an eigenvalue. Example? We saw it already on PS1: the 90° rotation of R^2 does not!
Theorem (to be proved later!). Suppose F = R, T ∈ L(V), and V is finite-dimensional and nonzero. Then there exists a subspace U ⊆ V such that dim U ∈ {1, 2} and T(U) ⊆ U.
Example: for the rotation above, we can take U = V = R^2.
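The rotation example can be checked numerically (illustrative): for a 2×2 matrix the eigenvalues are the roots of t^2 − (tr T)t + det T, and for the 90° rotation these are ±i, so neither is real.

```python
# The 90-degree rotation of R^2 has no real eigenvalue, but over C it does:
# solve the degree-2 characteristic equation t^2 - (tr T) t + det T = 0
# with the quadratic formula.
import cmath

def eigenvalues_2x2(T):
    """Roots of the characteristic polynomial of a 2x2 matrix, over C."""
    tr = T[0][0] + T[1][1]
    det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

R = [[0, -1], [1, 0]]        # rotation by 90 degrees
lam1, lam2 = eigenvalues_2x2(R)
print(lam1, lam2)            # 1j -1j : purely imaginary, so no real eigenvalue
```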

Invariant subspaces (11)
This motivates:
Definition. Let T ∈ L(V). An invariant subspace U ⊆ V is a subspace such that T(u) ∈ U for all u ∈ U.
Examples: V itself, {0}, eigenspaces.
Proposition. The following are equivalent: (a) v is a nonzero eigenvector of T; (b) Span(v) is a one-dimensional invariant subspace.
Proof: (a) implies (b): T(av) = aλv ∈ Span(v) for all a ∈ F. (b) implies (a): if T(v) ∈ Span(v), then T(v) = λv for some λ ∈ F; also v is nonzero, since Span(v) ≠ {0}.
Now the theorem on the last slide says that if F = R, then every T ∈ L(V) (with V finite-dimensional and nonzero) admits a nonzero invariant subspace of dimension at most 2.
Corollary (preview): if dim V is odd (and F = R), then T has an eigenvalue.
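A quick numeric check of (a) implies (b) (illustrative, with a hypothetical matrix T): for an eigenvector v, every T(av) is again a multiple of v, so Span(v) is invariant.

```python
# Illustrative check that an eigenvector spans an invariant subspace:
# for an eigenvector v of T with eigenvalue lambda, T(a v) = a*lambda*v
# stays in Span(v) for every scalar a.
from fractions import Fraction

def apply(T, v):
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

T = [[2, 1], [0, 3]]
v = [1, 1]                     # T v = [3, 3] = 3 v, so v is an eigenvector, lambda = 3
for a in [Fraction(1), Fraction(-2), Fraction(5, 3)]:
    av = [a * x for x in v]
    Tav = apply(T, av)
    # T(a v) = 3 a v, a multiple of v, hence in Span(v)
    print(Tav == [3 * a * x for x in v])  # True each time
```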