Eigenvalues and Eigenvectors
Review: Invertibility. Eigenvalues and Eigenvectors. The Finite-Dimensional Case.
January 18, 2018


Review

1. We looked at general determinant functions and proved that they are all multiples of a special one, called $\det$: $f(A) = f(I_n)\det A$.
2. We proved the product formula: $\det AB = \det A \det B$.
3. We proved that $\det A = \det A^t$.
4. We proved Expansion by Cofactors, both by rows and by columns; that is, if we denote:

Cofactors

$$\operatorname{cof} a_{kj} = (-1)^{k+j} \det A_{kj},$$

where $A_{kj}$ is the matrix obtained from $A$ by deleting row $k$ and column $j$, then

$$\det A = \sum_{j=1}^{n} a_{kj}\operatorname{cof} a_{kj} \quad (k\text{-th row expansion}), \qquad \det A = \sum_{k=1}^{n} a_{kj}\operatorname{cof} a_{kj} \quad (j\text{-th column expansion}).$$

The Main Tools: Everything was done using Gauss-Jordan row reduction. This is what you need to really understand.

The Cofactor Matrix

We already noted that:

$$\det A = \sum_{j=1}^{n} a_{kj}\operatorname{cof} a_{kj} \quad (k\text{-th row expansion}), \qquad \det A = \sum_{k=1}^{n} a_{kj}\operatorname{cof} a_{kj} \quad (j\text{-th column expansion}).$$

Note however that if we compute the mixed sums

$$\sum_{j=1}^{n} a_{lj}\operatorname{cof} a_{kj} \quad (l \ne k), \qquad \sum_{k=1}^{n} a_{ki}\operatorname{cof} a_{kj} \quad (i \ne j),$$

then we get zero.

The Cofactor Matrix

The reason is that:

- the first one is the determinant of the matrix $B$ obtained from $A$ by replacing row $k$ with row $l$ (so a matrix with two equal rows);
- the second one is the determinant of the matrix $B$ obtained from $A$ by replacing column $j$ with column $i$ (so a matrix with two equal columns).

If you stare at the above formulas, you see that they look like formulas for the product of two matrices, except for the wrong position of the summation index. We fix that by considering $(\operatorname{cof} A)^t$, the transpose of the cofactor matrix.

So we get the formula

Theorem. $A(\operatorname{cof} A)^t = (\det A) I_n = (\operatorname{cof} A)^t A$. The $n \times n$ matrix $A$ is invertible if and only if $\det A \ne 0$, and in this case the formula for the inverse is given by $A^{-1} = \frac{1}{\det A}(\operatorname{cof} A)^t$.

Proof: We proved the hard part: that $\det A \ne 0$ implies $A$ is invertible and the inverse is given by the above formula. Conversely, if $A$ is invertible, then $$\det A \det A^{-1} = \det(A A^{-1}) = \det I_n = 1,$$ so $\det A \ne 0$.
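The theorem is easy to check numerically. A minimal sketch in Python with NumPy (the 3x3 matrix here is an arbitrary example, not one from the lecture):

```python
import numpy as np

def cofactor_matrix(A):
    """Entry (k, j) is cof a_kj = (-1)**(k+j) * det(A with row k and column j deleted)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for k in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, k, axis=0), j, axis=1)
            C[k, j] = (-1) ** (k + j) * np.linalg.det(minor)
    return C

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
adj = cofactor_matrix(A).T          # (cof A)^t, the transpose of the cofactor matrix
detA = np.linalg.det(A)

# A (cof A)^t = (det A) I_n = (cof A)^t A
assert np.allclose(A @ adj, detA * np.eye(3))
assert np.allclose(adj @ A, detA * np.eye(3))

# A^{-1} = (1/det A) (cof A)^t
A_inv = adj / detA
assert np.allclose(A_inv, np.linalg.inv(A))
```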

Cramer's Rule

In solving linear systems $Ax = b$, if $A$ is invertible we immediately get $x = A^{-1}b$, and if you plug in the formula that we just got, we get $x = \frac{1}{\det A}(\operatorname{cof} A)^t b$. But if we use column expansion, we see that $((\operatorname{cof} A)^t b)_i$ is the determinant of the matrix obtained by substituting column $i$ of $A$ with $b$. So $x_i$ is a quotient of two determinants. Read from the book!

And finally, even though we are not going to use it, you have to see at least once the formula

Formula for det: $$\det A = \sum_{\sigma \in S_n} (-1)^{\epsilon(\sigma)} a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)}.$$
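Cramer's rule can likewise be sketched in a few lines of NumPy; the system below is an arbitrary illustration, not one from the lecture:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("A is singular; Cramer's rule does not apply")
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b                  # substitute column i of A with b
        x[i] = np.linalg.det(Ai) / detA
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```

Each coordinate really is a quotient of two determinants, exactly as the column-expansion argument predicts; in practice one solves by elimination, since computing $n+1$ determinants is far more expensive.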

Linear Transformations with Diagonal Matrix Representation

Recall that we get a matrix representation for the linear transformation $T : V \to W$ only after choosing a basis in $V$ and a basis in $W$. You can check that in finite dimensions you can always find such a pair of bases giving a diagonal matrix representation. That is not what we are interested in. What we want is a linear map $T : V \to V$ that has a diagonal matrix representation in a given basis of $V$ (the same basis used on both sides). It is not hard to see that:

Linear Transformations with Diagonal Matrix Representation

Theorem. Given a linear transformation $T : V \to V$, where $\dim V = n$, $T$ has a diagonal representation if and only if there exists a basis $(e_1, \ldots, e_n)$ of $V$ such that $T(e_i) = \lambda_i e_i$ for some scalars $\lambda_i \in \mathbb{R}$ (or $\lambda_i \in \mathbb{C}$, or in general $\lambda_i \in k$).

This brings us to the following definition:
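In matrix terms the theorem says $A = PDP^{-1}$, where the columns of $P$ form the eigenbasis and $D$ carries the scalars $\lambda_i$. A quick NumPy check on an arbitrary example matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are eigenvectors, eigvals[i] is the scalar for column i.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Diagonal representation: A = P D P^{-1}.
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# Each basis vector satisfies T(e_i) = lambda_i e_i.
for i in range(2):
    assert np.allclose(A @ P[:, i], eigvals[i] * P[:, i])
```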

Definition. Let $V$ be a linear space, $S \subseteq V$ a linear subspace, and $T : S \to V$ a linear map. A scalar $\lambda$ is called an eigenvalue of $T$ if there exists a nonzero $v \in S$ such that $Tv = \lambda v$. Such a $v$ is called an eigenvector, or more precisely an eigenvector corresponding to the eigenvalue $\lambda$.

Note: I have used the customary notation for linear maps, $Tv$ instead of $T(v)$.

Examples

1. $0$ is an eigenvalue if and only if the null space of $T$ contains nonzero elements.
2. The identity map $I : V \to V$ has $1$ as its only eigenvalue, and every nonzero vector is an eigenvector.
3. In infinite dimensions, a linear transformation may have infinitely many eigenvalues. For example, the differentiation operator, defined on the subspace $C^1(\mathbb{R}) \subseteq C(\mathbb{R})$ with values in $C(\mathbb{R})$ by $Df = f'$, has every real number as an eigenvalue, as can be seen from $De^{\lambda x} = \lambda e^{\lambda x}$.
4. If you allow complex-valued functions, then every complex $\lambda$ is an eigenvalue for the differentiation operator.
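Example 3 can be sanity-checked numerically: differentiating $e^{\lambda x}$ on a grid recovers $\lambda e^{\lambda x}$ (the value $\lambda = 1.7$ and the grid are arbitrary choices for the sketch):

```python
import numpy as np

lam = 1.7
x = np.linspace(0.0, 1.0, 2001)
f = np.exp(lam * x)

# Numerical derivative of f on the grid (central differences in the interior).
df = np.gradient(f, x)

# D e^{lam x} = lam * e^{lam x}; exclude the less accurate endpoint stencils.
assert np.allclose(df[1:-1], lam * f[1:-1], rtol=1e-3)
```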

Examples

5. The integration operator $T : C(\mathbb{R}) \to C(\mathbb{R})$ defined by $$Tf(x) = \int_0^x f(t)\,dt$$ has no eigenvalues. Indeed, if $Tf = \lambda f$, then $$\int_0^x f(t)\,dt = \lambda f(x),$$ and differentiating (using the Fundamental Theorem of Calculus) gives $f = \lambda f'$. If $\lambda = 0$, then $f = 0$, so $0$ is not an eigenvalue; while if $\lambda \ne 0$, then the general solution is $f(x) = c e^{\frac{1}{\lambda} x}$, and since $f(0) = 0$, this forces $c = 0$, so $f = 0$.

Linear Independence

Theorem. Let $u_1, \ldots, u_k$ be nonzero eigenvectors of a linear transformation $T : S \to V$, $S \subseteq V$, corresponding to distinct eigenvalues $\lambda_1, \ldots, \lambda_k$. Then the eigenvectors $u_1, \ldots, u_k$ are linearly independent.

Proof. We are going to use induction on $k$. The case $k = 1$ is true since the $u_i$'s are different from zero. Assume that there are scalars $c_i$ such that $$\sum_{i=1}^k c_i u_i = 0.$$ (We want to show that all of them are zero.)

Linear Independence

Proof (cont.). Applying $T$ we also get $$\sum_{i=1}^k c_i \lambda_i u_i = 0.$$ Multiplying the first equation by $\lambda_k$ and subtracting it from this one, we get $$\sum_{i=1}^{k-1} c_i (\lambda_i - \lambda_k) u_i = 0.$$ By induction this implies that $c_i(\lambda_i - \lambda_k) = 0$ for $1 \le i \le k-1$, and this forces $c_i = 0$ for $1 \le i \le k-1$ (since $\lambda_i \ne \lambda_k$). So what remains from the first equation is $c_k u_k = 0$, which implies $c_k = 0$ (again since $u_k \ne 0$).
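The theorem can be illustrated numerically: for a matrix with distinct eigenvalues, stacking the eigenvectors as columns gives a full-rank matrix. The triangular matrix below is an arbitrary example chosen so the eigenvalues can be read off the diagonal:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 5.0]])   # triangular, so eigenvalues are 2, 3, 5 (distinct)

eigvals, U = np.linalg.eig(A)    # columns of U are eigenvectors

# Distinct eigenvalues...
assert len(set(np.round(eigvals, 12))) == 3
# ...imply linearly independent eigenvectors: U has full column rank.
assert np.linalg.matrix_rank(U) == 3
```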

Some Terminology

Note that saying that $\lambda$ is an eigenvalue for $T$ is the same as saying that the null space of the transformation $\lambda I - T$ is different from zero. The set of all eigenvectors corresponding to $\lambda$, together with the zero vector, is this null space. It is denoted by $E(\lambda)$ and is a linear subspace of $S$.

The preceding Theorem obviously implies that if $S$ has finite dimension equal to $n$, then $T$ has at most $n$ distinct eigenvalues.

From now on $V$ will have finite dimension $n$. In this case we are going to assume that $T$ is defined on the whole space $V$.

Recall that:

1. An $n \times n$ matrix $A$ is invertible if and only if $\det A \ne 0$.
2. The linear transformation $T : V \to V$ is invertible if and only if it is $1$-$1$ (injective), that is, its null space consists only of $\{0\}$.

Characteristic Polynomial

We just proved:

Theorem. Suppose $T : V \to V$ is a linear transformation, with matrix $A$ relative to the choice of a basis $e_1, \ldots, e_n$. Then $\lambda$ is an eigenvalue of $T$ if and only if $$\det(\lambda I_n - A) = 0.$$

Theorem. The function $$f(\lambda) = \det(\lambda I_n - A)$$ is a polynomial of degree $n$ (in $\lambda$). It is called the characteristic polynomial of $A$. It depends only on $T$, so we are going to call it also the characteristic polynomial of $T$.
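NumPy's `np.poly` computes exactly the coefficients of $\det(\lambda I_n - A)$, and its roots are the eigenvalues. A small check on an arbitrary 2x2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of f(lambda) = det(lambda*I - A), highest degree first.
coeffs = np.poly(A)

# For a 2x2 matrix: f(lambda) = lambda^2 - (tr A) lambda + det A.
assert np.allclose(coeffs, [1.0, -np.trace(A), np.linalg.det(A)])

# The roots of the characteristic polynomial are exactly the eigenvalues.
assert np.allclose(np.sort(np.roots(coeffs)), np.sort(np.linalg.eigvals(A)))
```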

Characteristic Polynomial

Proof:

$$\det(\lambda I_n - A) = \det\begin{pmatrix} \lambda - a_{11} & \cdots & -a_{1i} & \cdots & -a_{1n} \\ \vdots & & \vdots & & \vdots \\ -a_{i1} & \cdots & \lambda - a_{ii} & \cdots & -a_{in} \\ \vdots & & \vdots & & \vdots \\ -a_{n1} & \cdots & -a_{ni} & \cdots & \lambda - a_{nn} \end{pmatrix}.$$

It is easy to see by induction that the highest term is $\lambda^n$ and the free term is $(-1)^n \det A$.

The Trace

Proof (cont.): A little bit harder is that the coefficient of $\lambda^{n-1}$ is $$-\sum_{i=1}^n a_{ii}.$$ The expression $\sum_{i=1}^n a_{ii}$ is called the trace of $A$, respectively the trace of $T$, and is denoted $\operatorname{tr} A$, respectively $\operatorname{tr} T$.

If we choose another basis, then the matrix $B$ with respect to that basis will be $B = CAC^{-1}$, where $C$ corresponds to the change of basis. So $$\det(\lambda I_n - B) = \det[C(\lambda I_n - A)C^{-1}] = \det(\lambda I_n - A).$$
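Both claims, that the trace appears (with a minus sign) as the $\lambda^{n-1}$ coefficient, and that the characteristic polynomial does not change under a change of basis, are easy to verify numerically on a random example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))      # change-of-basis matrix (invertible with probability 1)
B = C @ A @ np.linalg.inv(C)

# Coefficient of lambda^(n-1) in det(lambda*I - A) is -tr A.
assert np.isclose(np.poly(A)[1], -np.trace(A))

# Similar matrices have the same characteristic polynomial, hence the same trace.
assert np.allclose(np.poly(B), np.poly(A))
assert np.isclose(np.trace(B), np.trace(A))
```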