Chapter 5 Eigenvalues and Eigenvectors


Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces

5.1 Eigenvalues and Eigenvectors

Eigenvalue and Eigenvector If A is an n×n matrix, then a nonzero vector x in R^n is called an eigenvector of A (or of the matrix operator T_A) if Ax is a scalar multiple of x; that is, Ax = λx for some scalar λ. The scalar λ is called an eigenvalue of A (or of T_A), and x is said to be an eigenvector corresponding to λ. In this special case, multiplication by A leaves the direction of x unchanged; x is only scaled by λ.

Example The vector is an eigenvector of the matrix.

Computing Eigenvalues and Eigenvectors To compute eigenvalues and eigenvectors, rewrite Ax = λx as (λI − A)x = 0. For λ to be an eigenvalue of A this equation must have a nonzero solution for x. From Theorem 4.9.4, this is so if and only if the coefficient matrix λI − A has a zero determinant. Theorem 5.1.1: If A is an n×n matrix, then λ is an eigenvalue of A if and only if it satisfies the equation det(λI − A) = 0. This is called the characteristic equation of A.
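As a numerical sketch of Theorem 5.1.1 (using NumPy, which the notes themselves do not use; the matrix here is chosen only for illustration), every eigenvalue returned by a solver makes det(λI − A) vanish:

```python
import numpy as np

# A sample 2x2 matrix, chosen for illustration (triangular, so its
# eigenvalues 3 and -1 can be read off the diagonal by Theorem 5.1.2)
A = np.array([[3.0, 0.0],
              [8.0, -1.0]])

eigenvalues = np.linalg.eigvals(A)

# Theorem 5.1.1: each eigenvalue lambda satisfies det(lambda*I - A) = 0
n = A.shape[0]
for lam in eigenvalues:
    char_value = np.linalg.det(lam * np.eye(n) - A)
    assert abs(char_value) < 1e-9
```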

Example Find all eigenvalues of the matrix. Characteristic equation:

Characteristic Polynomial When the determinant det(λI − A) is expanded, the result is a polynomial p(λ) of degree n that is called the characteristic polynomial of A. Since the characteristic polynomial of an n×n matrix has degree n, A has at most n distinct eigenvalues. Some solutions of the characteristic equation may be complex numbers; it is possible for a matrix to have complex eigenvalues.

Example Find the eigenvalues of the matrix.

Example Find the eigenvalues of the matrix. Theorem 5.1.2: If A is an n×n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries on the main diagonal of A.

Theorem 5.1.3 If A is an n×n matrix, the following statements are equivalent: λ is an eigenvalue of A. The system of equations (λI − A)x = 0 has nontrivial solutions. There is a nonzero vector x such that Ax = λx. λ is a solution of the characteristic equation det(λI − A) = 0.

Finding Eigenvectors and Bases for Eigenspaces The eigenvectors corresponding to λ are the nonzero vectors in the null space of the matrix λI − A, that is, the nonzero solutions of (λI − A)x = 0. We call this null space the eigenspace of A corresponding to λ. The eigenspace of A corresponding to the eigenvalue λ is the solution space of the homogeneous system (λI − A)x = 0.

Example The eigenvalues of the matrix are found from the characteristic equation; for each eigenvalue, solving the system (λI − A)x = 0 gives a basis for the corresponding eigenspace.

Example Find bases for the eigenspaces of
A = [ 0 0 -2 ]
    [ 1 2  1 ]
    [ 1 0  3 ]
Solution: The characteristic equation of matrix A is λ³ − 5λ² + 8λ − 4 = 0, or in factored form, (λ − 1)(λ − 2)² = 0; thus, the eigenvalues of A are λ = 1 and λ = 2, so there are two eigenspaces of A. To find them, solve (λI − A)x = 0. If λ = 2, this system becomes
[  2 0  2 ] [ x1 ]   [ 0 ]
[ -1 0 -1 ] [ x2 ] = [ 0 ]
[ -1 0 -1 ] [ x3 ]   [ 0 ]

Example Solving the system yields x1 = -s, x2 = t, x3 = s. Thus, the eigenvectors of A corresponding to λ = 2 are the nonzero vectors of the form
x = [ -s ]   [ -s ]   [ 0 ]     [ -1 ]     [ 0 ]
    [  t ] = [  0 ] + [ t ] = s [  0 ] + t [ 1 ]
    [  s ]   [  s ]   [ 0 ]     [  1 ]     [ 0 ]
The vectors [-1 0 1]^T and [0 1 0]^T are linearly independent and form a basis for the eigenspace corresponding to λ = 2. Similarly, the eigenvectors of A corresponding to λ = 1 are the nonzero vectors of the form x = s[-2 1 1]^T. Thus, [-2 1 1]^T is a basis for the eigenspace corresponding to λ = 1.
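The computations in this example can be checked numerically. The following sketch (using NumPy, not part of the original notes) verifies the characteristic polynomial, the eigenvalues, and the eigenspace bases found above:

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])

# Characteristic polynomial: lambda^3 - 5 lambda^2 + 8 lambda - 4
assert np.allclose(np.poly(A), [1, -5, 8, -4])

# Eigenvalues are 1 and 2 (2 with algebraic multiplicity two)
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1.0, 2.0, 2.0])

# Basis vectors for the eigenspace of lambda = 2
for v in (np.array([-1.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])):
    assert np.allclose(A @ v, 2.0 * v)

# Basis vector for the eigenspace of lambda = 1
v = np.array([-2.0, 1.0, 1.0])
assert np.allclose(A @ v, 1.0 * v)
```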

Powers of a Matrix If λ is an eigenvalue of A and x is a corresponding eigenvector, then A²x = A(Ax) = A(λx) = λ(Ax) = λ(λx) = λ²x, which shows that λ² is an eigenvalue of A² and that x is a corresponding eigenvector.

Theorems Theorem 5.1.4 If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a corresponding eigenvector, then λ^k is an eigenvalue of A^k and x is a corresponding eigenvector. Theorem 5.1.5 A square matrix A is invertible if and only if λ = 0 is not an eigenvalue of A.
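Both theorems can be illustrated numerically; this sketch (NumPy; the triangular matrix is chosen here for illustration, so its eigenvalues 2 and 3 are visible on the diagonal) checks that λ^k is an eigenvalue of A^k with the same eigenvector, and that no eigenvalue of the invertible matrix is zero:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 3.0                      # eigenvalue of the triangular matrix A
x = np.array([1.0, 1.0])       # corresponding eigenvector: A x = 3 x
assert np.allclose(A @ x, lam * x)

# Theorem 5.1.4: lambda^k is an eigenvalue of A^k, same eigenvector x
k = 5
Ak = np.linalg.matrix_power(A, k)
assert np.allclose(Ak @ x, lam**k * x)

# Theorem 5.1.5: A is invertible, so 0 is not among its eigenvalues
assert np.all(np.abs(np.linalg.eigvals(A)) > 1e-9)
```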

Proof of Theorem 5.1.5 Assume that A is an n×n matrix and observe that λ = 0 is a solution of the characteristic equation λⁿ + c₁λⁿ⁻¹ + … + cₙ = 0 if and only if the constant term cₙ is zero. Thus it suffices to prove that A is invertible if and only if cₙ ≠ 0. But det(λI − A) = λⁿ + c₁λⁿ⁻¹ + … + cₙ, or, on setting λ = 0, det(−A) = cₙ, or (−1)ⁿ det(A) = cₙ. It follows from the last equation that det(A) = 0 if and only if cₙ = 0, and this in turn implies that A is invertible if and only if cₙ ≠ 0.

Theorem 5.1.6 (Equivalent Statements) If A is an n×n matrix, and if T_A: Rⁿ → Rⁿ is multiplication by A, then the following are equivalent: A is invertible. Ax = 0 has only the trivial solution. The reduced row-echelon form of A is Iₙ. A is expressible as a product of elementary matrices. Ax = b is consistent for every n×1 matrix b. Ax = b has exactly one solution for every n×1 matrix b. det(A) ≠ 0. The column vectors of A are linearly independent. The row vectors of A are linearly independent. The column vectors of A span Rⁿ. The row vectors of A span Rⁿ.

Theorem 5.1.6 (Equivalent Statements) The column vectors of A form a basis for Rⁿ. The row vectors of A form a basis for Rⁿ. A has rank n. A has nullity 0. The orthogonal complement of the nullspace of A is Rⁿ. The orthogonal complement of the row space of A is {0}. The range of T_A is Rⁿ. T_A is one-to-one. λ = 0 is not an eigenvalue of A. (If λ = 0, then (λI − A)x = 0 becomes −Ax = 0, i.e., Ax = 0, which has only the trivial solution x = 0 since A is invertible.)

5.2 Diagonalization

Matrix Diagonalization Problem Problem 1: Given an n×n matrix A, does there exist an invertible matrix P such that P⁻¹AP is diagonal? Problem 2: Given an n×n matrix A, does A have n linearly independent eigenvectors? The matrix product P⁻¹AP in Problem 1 is called a similarity transformation of the matrix A. If A and B are square matrices, then we say that B is similar to A if there is an invertible matrix P such that B = P⁻¹AP.

Similarity Invariants If B is similar to A, then A is similar to B, since we can express A as A = Q⁻¹BQ by taking Q = P⁻¹. We usually say simply that A and B are similar matrices. If B = P⁻¹AP, then A and B have the same determinant: det(B) = det(P⁻¹AP) = det(P⁻¹)det(A)det(P) = (1/det(P))det(A)det(P) = det(A). Any property that is shared by all similar matrices is called a similarity invariant or is said to be invariant under similarity.

Similarity Invariants
Determinant: A and P⁻¹AP have the same determinant.
Invertibility: A is invertible if and only if P⁻¹AP is invertible.
Rank: A and P⁻¹AP have the same rank.
Nullity: A and P⁻¹AP have the same nullity.
Trace: A and P⁻¹AP have the same trace.
Characteristic polynomial: A and P⁻¹AP have the same characteristic polynomial.
Eigenvalues: A and P⁻¹AP have the same eigenvalues.
Eigenspace dimension: If λ is an eigenvalue of A (and hence of P⁻¹AP), then the eigenspace of A corresponding to λ and the eigenspace of P⁻¹AP corresponding to λ have the same dimension. (They need not have the same eigenvectors: if x is an eigenvector of A for λ, then P⁻¹x is a corresponding eigenvector of P⁻¹AP.)
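Several of these invariants can be checked directly; in this sketch (NumPy; A and P are arbitrary matrices chosen for illustration), B = P⁻¹AP shares the determinant, trace, eigenvalues, and rank of A:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # any invertible matrix
B = np.linalg.inv(P) @ A @ P     # B is similar to A

# Shared similarity invariants
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
assert np.isclose(np.trace(A), np.trace(B))
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)
```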

Diagonalizable A square matrix A is said to be diagonalizable if it is similar to some diagonal matrix; that is, if there exists an invertible matrix P such that P⁻¹AP is diagonal. In this case the matrix P is said to diagonalize A. Theorem 5.2.1: If A is an n×n matrix, the following statements are equivalent. (a) A is diagonalizable. (b) A has n linearly independent eigenvectors.

Proof of Theorem 5.2.1 Since A is assumed diagonalizable, there is an invertible matrix P such that P⁻¹AP is diagonal, say P⁻¹AP = D, where D has diagonal entries λ₁, λ₂, …, λₙ. It follows that AP = PD.

Proof of Theorem 5.2.1 If we now let p₁, p₂, …, pₙ denote the column vectors of P, then the successive columns of PD are λ₁p₁, λ₂p₂, …, λₙpₙ. We also know that the successive columns of AP are Ap₁, Ap₂, …, Apₙ. Thus we have Ap₁ = λ₁p₁, Ap₂ = λ₂p₂, …, Apₙ = λₙpₙ. Since P is invertible, its columns are all nonzero. Thus, λ₁, λ₂, …, λₙ are eigenvalues of A, and p₁, p₂, …, pₙ are corresponding eigenvectors. Since P is invertible, p₁, p₂, …, pₙ are linearly independent. Thus A has n linearly independent eigenvectors.

Procedure for Diagonalizing a Matrix The preceding theorem guarantees that an n×n matrix A with n linearly independent eigenvectors is diagonalizable, and the proof provides the following method for diagonalizing A. Step 1. Find n linearly independent eigenvectors of A, say p₁, p₂, …, pₙ. Step 2. Form the matrix P having p₁, p₂, …, pₙ as its column vectors. Step 3. The matrix P⁻¹AP will then be diagonal with λ₁, λ₂, …, λₙ as its successive diagonal entries, where λᵢ is the eigenvalue corresponding to pᵢ for i = 1, 2, …, n. If there is a total of n such vectors, then A is diagonalizable; otherwise, A is not diagonalizable.
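The three steps can be sketched in code (NumPy; the determinant check on P is a simple numerical stand-in for "n linearly independent eigenvectors" and is not robust for borderline cases):

```python
import numpy as np

def diagonalize(A, tol=1e-9):
    """Sketch of the procedure: find eigenvectors, form P from them,
    and check that P is invertible (columns linearly independent)."""
    eigenvalues, eigenvectors = np.linalg.eig(A)   # Step 1: eigenpairs
    P = eigenvectors                               # Step 2: columns p1,...,pn
    if abs(np.linalg.det(P)) < tol:                # fewer than n independent ones
        raise ValueError("A is not diagonalizable")
    D = np.linalg.inv(P) @ A @ P                   # Step 3: P^-1 A P is diagonal
    return P, D

# The matrix diagonalized in the example above
A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])
P, D = diagonalize(A)
assert np.allclose(D, np.diag(np.diag(D)), atol=1e-8)      # D is diagonal
assert np.allclose(np.sort(np.diag(D).real), [1.0, 2.0, 2.0])
```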

Example Find a matrix P that diagonalizes
A = [ 0 0 -2 ]
    [ 1 2  1 ]
    [ 1 0  3 ]
Solution: From the previous example, we have the following bases for the eigenspaces:
λ = 2: p1 = [-1 0 1]^T, p2 = [0 1 0]^T
λ = 1: p3 = [-2 1 1]^T
Thus,
P = [ -1 0 -2 ]
    [  0 1  1 ]
    [  1 0  1 ]
Also,
D = P⁻¹AP = [ 2 0 0 ]
            [ 0 2 0 ]
            [ 0 0 1 ]

Example There is no preferred order for the columns of P. Since the ith diagonal entry of P⁻¹AP is the eigenvalue for the ith column vector of P, changing the order of the columns of P just changes the order of the eigenvalues on the diagonal of P⁻¹AP.

Example (A Non-Diagonalizable Matrix) Find a matrix P that diagonalizes
A = [  1 0 0 ]
    [  1 2 0 ]
    [ -3 5 2 ]
Solution: The characteristic polynomial of A is det(λI − A) = (λ − 1)(λ − 2)², so the eigenvalues are λ = 1 and λ = 2. The bases for the eigenspaces are
λ = 1: p1 = [1/8 -1/8 1]^T
λ = 2: p2 = [0 0 1]^T
Since there are only two basis vectors in total, A is not diagonalizable.

Example (Alternative Solution) If one is interested only in determining whether a matrix is diagonalizable and is not concerned with actually finding a diagonalizing matrix P, then it is not necessary to compute bases for the eigenspaces; it suffices to find the dimensions of the eigenspaces. For this example, the eigenspace corresponding to λ = 1 is the solution space of the system (I − A)x = 0. The coefficient matrix I − A has rank 2. Thus its nullity is 1, and hence the solution space is one-dimensional.

Example (Alternative Solution) The eigenspace corresponding to λ = 2 is the solution space of the system (2I − A)x = 0. The coefficient matrix 2I − A also has rank 2 and nullity 1, so the eigenspace corresponding to λ = 2 is also one-dimensional. Since the eigenspaces produce a total of only two basis vectors, and A is 3×3, the matrix A is not diagonalizable.

Theorems Theorem 5.2.2 If v₁, v₂, …, v_k are eigenvectors of A corresponding to distinct eigenvalues λ₁, λ₂, …, λ_k, then {v₁, v₂, …, v_k} is a linearly independent set. Theorem 5.2.3 If an n×n matrix A has n distinct eigenvalues, then A is diagonalizable.

Example Since the matrix
A = [ 0   1 0 ]
    [ 0   0 1 ]
    [ 4 -17 8 ]
has three distinct eigenvalues, 4, 2 + √3, and 2 − √3, A is diagonalizable. Further,
P⁻¹AP = [ 4 0      0     ]
        [ 0 2 + √3 0     ]
        [ 0 0      2 − √3 ]
for some invertible matrix P, and the matrix P can be found using the procedure for diagonalizing a matrix.

A Diagonalizable Matrix The eigenvalues of a triangular matrix are the entries on its main diagonal (Theorem 5.1.2). Thus, a triangular matrix with distinct entries on the main diagonal is diagonalizable. For example,
A = [ 1 2 4 0 ]
    [ 0 3 1 7 ]
    [ 0 0 5 8 ]
    [ 0 0 0 2 ]
is a diagonalizable matrix, since its eigenvalues 1, 3, 5, 2 are distinct.

Computing Powers of a Matrix If A is an n×n matrix and P is an invertible matrix, then (P⁻¹AP)² = P⁻¹APP⁻¹AP = P⁻¹AIAP = P⁻¹A²P, and more generally (P⁻¹AP)^k = P⁻¹A^kP for any positive integer k. If A is diagonalizable and P⁻¹AP = D is a diagonal matrix, then P⁻¹A^kP = (P⁻¹AP)^k = D^k. Thus, A^k = PD^kP⁻¹. The matrix D^k is easy to compute; for example, if
D = [ d1 0  ... 0  ]             [ d1^k 0    ... 0    ]
    [ 0  d2 ... 0  ]  then D^k = [ 0    d2^k ... 0    ]
    [ :  :      :  ]             [ :    :        :    ]
    [ 0  0  ... dn ]             [ 0    0    ... dn^k ]
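This identity makes high powers cheap: D^k only requires raising the diagonal entries to the kth power. A sketch (NumPy, reusing the 3×3 matrix diagonalized earlier) compares A¹³ = PD¹³P⁻¹ against direct repeated multiplication:

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])

# Diagonalize: the columns of P are eigenvectors, D holds eigenvalues
eigenvalues, P = np.linalg.eig(A)

# A^13 = P D^13 P^-1, where D^13 just raises each diagonal entry
k = 13
A_k = P @ np.diag(eigenvalues**k) @ np.linalg.inv(P)

# Agreement with direct repeated multiplication
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```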

Example Find A¹³, where A is the matrix diagonalized in the earlier example. Since A = PDP⁻¹ with D diagonal, A¹³ = PD¹³P⁻¹, and D¹³ is obtained by raising each diagonal entry of D to the 13th power.

Theorem 5.2.4 If λ is an eigenvalue of a square matrix A and x is a corresponding eigenvector, and if k is any positive integer, then λ^k is an eigenvalue of A^k and x is a corresponding eigenvector. Example: A²x = A(Ax) = A(λx) = λAx = λ(λx) = λ²x

Repeated Eigenvalues and Diagonalizability If a matrix has all distinct eigenvalues, then it is diagonalizable. Matrices with repeated eigenvalues might be non-diagonalizable. For example, the identity matrix I₃ has the repeated eigenvalue 1 but is diagonalizable, since any nonzero vector in R³ is an eigenvector of I₃, and so we can find three linearly independent eigenvectors.

Geometric and Algebraic Multiplicity Example: if the characteristic polynomial of a matrix is (λ − 1)(λ − 2)², then the eigenspace corresponding to λ = 1 is at most one-dimensional, and the eigenspace corresponding to λ = 2 is at most two-dimensional. Definition If λ₀ is an eigenvalue of an n×n matrix A, then the dimension of the eigenspace corresponding to λ₀ is called the geometric multiplicity of λ₀, and the number of times that λ − λ₀ appears as a factor in the characteristic polynomial of A is called the algebraic multiplicity of λ₀.

Theorem 5.2.5 (Geometric and Algebraic Multiplicity) If A is a square matrix, then: For every eigenvalue of A, the geometric multiplicity is less than or equal to the algebraic multiplicity. A is diagonalizable if and only if the geometric multiplicity equals the algebraic multiplicity for every eigenvalue.
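The two multiplicities can be computed side by side: the geometric multiplicity of λ is the nullity of λI − A. A sketch (NumPy; the rounding step assumes real eigenvalues and is a simplification, not a robust clustering of numerical eigenvalues) applied to the non-diagonalizable example above:

```python
import numpy as np
from collections import Counter

def multiplicities(A, tol=1e-8):
    """Return {eigenvalue: (algebraic, geometric)} multiplicities.
    Geometric multiplicity = nullity of (lambda*I - A) = n - rank."""
    n = A.shape[0]
    eigvals = np.round(np.linalg.eigvals(A).real, 6)  # assumes real eigenvalues
    result = {}
    for lam, alg in Counter(eigvals).items():
        rank = np.linalg.matrix_rank(lam * np.eye(n) - A, tol=tol)
        result[lam] = (alg, n - rank)
    return result

# The non-diagonalizable example: lambda = 2 has algebraic multiplicity 2
# but geometric multiplicity only 1
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [-3.0, 5.0, 2.0]])
m = multiplicities(A)
assert m[1.0] == (1, 1)
assert m[2.0] == (2, 1)
```

By Theorem 5.2.5, A is diagonalizable exactly when every value maps to an equal pair.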

5.3 Complex Vector Spaces

Review of Complex Numbers If z = a + bi is a complex number, then: Re(z) = a and Im(z) = b are called the real part of z and the imaginary part of z, respectively. |z| = √(a² + b²) is called the modulus (or absolute value) of z. z̄ = a − bi is called the complex conjugate of z. The angle φ satisfying a = |z| cos φ and b = |z| sin φ is called an argument of z. z = |z|(cos φ + i sin φ) is called the polar form of z.

Complex Eigenvalues The characteristic equation of a general n×n matrix A has the form det(λI − A) = 0, where the left side is a polynomial of degree n in λ. It is possible for the characteristic equation to have imaginary solutions: the matrix in the example has characteristic equation λ² + 1 = 0, which has the imaginary solutions λ = i and λ = −i.

Vectors in Cⁿ A vector space in which scalars are allowed to be complex numbers is called a complex vector space. Definition: If n is a positive integer, then a complex n-tuple is a sequence of n complex numbers (v₁, v₂, …, vₙ). The set of all complex n-tuples is called complex n-space and is denoted by Cⁿ. Scalars are complex numbers, and the operations of addition, subtraction, and scalar multiplication are performed componentwise.

Vectors in Cⁿ If v₁, v₂, …, vₙ are complex numbers, then we call v = (v₁, v₂, …, vₙ) a vector in Cⁿ and v₁, v₂, …, vₙ its components. Every vector in Cⁿ can be split into real and imaginary parts: v = Re(v) + i Im(v). The complex conjugate of v is v̄ = (v̄₁, v̄₂, …, v̄ₙ) = Re(v) − i Im(v).

Vectors in Cⁿ The vectors in Rⁿ can be viewed as those vectors in Cⁿ whose imaginary part is zero; that is, a vector v in Cⁿ is in Rⁿ if and only if v = v̄. Real matrix: the entries of the matrix are required to be real numbers. Complex matrix: the entries of the matrix are allowed to be complex numbers. If A is a complex matrix, then Re(A) and Im(A) are the matrices formed from the real and imaginary parts of the entries of A.

Example Let v = (3 + i, −2i, 5). Then Re(v) = (3, 0, 5), Im(v) = (1, −2, 0), and v̄ = (3 − i, 2i, 5).

Algebraic Properties of the Complex Conjugate Theorem 5.3.1: If u and v are vectors in Cⁿ, and if k is a scalar, then (a) the conjugate of ū is u, (b) the conjugate of ku is k̄ū, (c) the conjugate of u + v is ū + v̄, (d) the conjugate of u − v is ū − v̄. Theorem 5.3.2: If A is a complex matrix and B is a complex matrix for which the products below are defined, then (a) the conjugate of Ā is A, (b) the conjugate of Aᵀ is (Ā)ᵀ, (c) the conjugate of AB is ĀB̄.

The Complex Euclidean Inner Product Definition: If u = (u₁, u₂, …, uₙ) and v = (v₁, v₂, …, vₙ) are vectors in Cⁿ, then the complex Euclidean inner product of u and v (also called the complex dot product) is denoted by u·v and is defined as u·v = u₁v̄₁ + u₂v̄₂ + … + uₙv̄ₙ. We also define the Euclidean norm on Cⁿ to be ‖v‖ = √(v·v) = √(|v₁|² + |v₂|² + … + |vₙ|²). We call v a unit vector in Cⁿ if ‖v‖ = 1, and we say two vectors u and v are orthogonal if u·v = 0.
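A small sketch (NumPy; the sample vectors are chosen here for illustration) of this definition. Note the conjugation lands on the second factor, matching the definition above, so `np.vdot` (which conjugates its first argument) is deliberately not used:

```python
import numpy as np

def complex_dot(u, v):
    # u.v = u1*conj(v1) + ... + un*conj(vn)
    return np.sum(u * np.conj(v))

u = np.array([1 + 1j, 2 - 1j])
v = np.array([3j, 1 + 2j])

# Antisymmetry: u.v is the conjugate of v.u
assert np.isclose(complex_dot(u, v), np.conj(complex_dot(v, u)))

# Norm: ||v|| = sqrt(v.v) is real and nonnegative
norm_v = np.sqrt(complex_dot(v, v).real)
assert np.isclose(norm_v, np.linalg.norm(v))
```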

Example Find u·v, v·u, ‖u‖, and ‖v‖ for the given vectors.

Theorem 5.3.3 In Rⁿ, u·v = v·u; in Cⁿ, u·v is the conjugate of v·u. Theorem 5.3.3: If u, v, and w are vectors in Cⁿ, and if k is a scalar, then the complex Euclidean inner product has the following properties: (a) u·v = conjugate of v·u [Antisymmetry property] (b) u·(v + w) = u·v + u·w [Distributive property] (c) (ku)·v = k(u·v) [Homogeneity property] (d) u·(kv) = k̄(u·v) [Antihomogeneity property] (e) v·v ≥ 0, and v·v = 0 if and only if v = 0 [Positivity property]

Proof of Theorem 5.3.3(d) By antisymmetry and homogeneity, u·(kv) is the conjugate of (kv)·u = k(v·u). To complete the proof, take the conjugate, substitute k̄ for the conjugate of k, and use the fact that the conjugate of v·u is u·v; this gives u·(kv) = k̄(u·v).

Vector Concepts in Cⁿ Except for the use of complex scalars, the notions of linear combination, linear independence, subspace, spanning, basis, and dimension carry over without change to Cⁿ. If A is an n×n matrix with complex entries, then the complex roots of the characteristic equation det(λI − A) = 0 are called complex eigenvalues of A. λ is a complex eigenvalue of A if and only if there exists a nonzero vector x in Cⁿ such that Ax = λx; x is called a complex eigenvector of A corresponding to λ.

Vector Concepts in Cⁿ The complex eigenvectors of A corresponding to λ are the nonzero solutions of the linear system (λI − A)x = 0, and the set of all such solutions is a subspace of Cⁿ, called the eigenspace of A corresponding to λ.

Theorem 5.3.4 If λ is an eigenvalue of a real n×n matrix A, and if x is a corresponding eigenvector, then λ̄ is also an eigenvalue of A, and x̄ is a corresponding eigenvector. Proof: Since λ is an eigenvalue of A and x is a corresponding eigenvector, we have Ax = λx, so the conjugate of Ax equals the conjugate of λx, which is λ̄x̄. However, Ā = A since A is real, so it follows from part (c) of Theorem 5.3.2 that the conjugate of Ax is Āx̄ = Ax̄. Therefore, Ax̄ = λ̄x̄, in which x̄ ≠ 0, so λ̄ is an eigenvalue with corresponding eigenvector x̄.

Example Find the eigenvalues and bases for the eigenspaces of the matrix. Solution: The eigenvalues are λ = i and λ = −i. To find the eigenvectors, we must solve the system (λI − A)x = 0. We solve this system by reducing the augmented matrix by Gauss-Jordan elimination.

Example The reduced row echelon form of the coefficient matrix must have a row of zeros because the system has nontrivial solutions. Each row must therefore be a scalar multiple of the other, and hence the first row can be made into a row of zeros by adding a suitable multiple of the second row to it. Accordingly, we can simply set the entries in the first row to zero, then interchange the rows, and then multiply the new first row by a suitable scalar to obtain the reduced row echelon form, from which a general solution of the system is obtained.

Example This tells us that the eigenspace corresponding to λ = i is one-dimensional and consists of all complex scalar multiples of the basis vector. As a check, let us confirm that Ax = ix.

Example We could find the eigenspace corresponding to λ = −i in a similar way.

Eigenvalues and Eigenvectors For a 2×2 matrix A, the characteristic polynomial is det(λI − A) = λ² − tr(A)λ + det(A), so we express the characteristic equation as λ² − tr(A)λ + det(A) = 0.

Eigenvalues and Eigenvectors Recall that if ax² + bx + c = 0 is a quadratic equation with real coefficients, then the discriminant b² − 4ac determines the nature of the roots: b² − 4ac > 0 [Two distinct real roots] b² − 4ac = 0 [One repeated real root] b² − 4ac < 0 [Two conjugate imaginary roots] For the characteristic equation, a = 1, b = −tr(A), c = det(A), so the discriminant is tr(A)² − 4 det(A).

Theorem 5.3.5 If A is a 2×2 matrix with real entries, then the characteristic equation of A is λ² − tr(A)λ + det(A) = 0, and (a) A has two distinct real eigenvalues if tr(A)² − 4 det(A) > 0; (b) A has one repeated real eigenvalue if tr(A)² − 4 det(A) = 0; (c) A has two complex conjugate eigenvalues if tr(A)² − 4 det(A) < 0.

Example Find the eigenvalues of the given 2×2 matrices. Solution (a): tr(A) = 7 and det(A) = 12, so the characteristic equation of A is λ² − 7λ + 12 = 0, giving the distinct real eigenvalues λ = 3 and λ = 4. Solution (b): tr(A) = 2 and det(A) = 1, so the characteristic equation of A is λ² − 2λ + 1 = 0, giving the repeated real eigenvalue λ = 1. Solution (c): tr(A) = 4 and det(A) = 13, so the characteristic equation of A is λ² − 4λ + 13 = 0, giving the complex conjugate eigenvalues λ = 2 ± 3i.
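The quadratic formula applied to λ² − tr(A)λ + det(A) = 0 can be sketched directly (NumPy; the specific matrices below are hypothetical instances chosen only to have the traces and determinants of cases (a) and (c), since the originals are not reproduced in the text):

```python
import numpy as np

def eigenvalues_2x2(A):
    """Eigenvalues of a 2x2 matrix from lambda^2 - tr(A) lambda + det(A) = 0."""
    tr, det = np.trace(A), np.linalg.det(A)
    disc = tr**2 - 4 * det
    root = np.sqrt(complex(disc))   # complex sqrt handles disc < 0
    return (tr + root) / 2, (tr - root) / 2

# Case (a): tr = 7, det = 12 -> two distinct real eigenvalues 4 and 3
l1, l2 = eigenvalues_2x2(np.array([[3.0, 0.0], [8.0, 4.0]]))
assert np.isclose(l1, 4) and np.isclose(l2, 3)

# Case (c): tr = 4, det = 13 -> conjugate pair 2 +/- 3i
l1, l2 = eigenvalues_2x2(np.array([[2.0, 3.0], [-3.0, 2.0]]))
assert np.isclose(l1, 2 + 3j) and np.isclose(l2, 2 - 3j)
```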

Symmetric Matrices Have Real Eigenvalues Theorem 5.3.6: If A is a real symmetric matrix, then A has real eigenvalues. Proof sketch: Suppose that λ is an eigenvalue of A and x is a corresponding eigenvector, so Ax = λx. Multiplying both sides on the left by x̄ᵀ gives x̄ᵀAx = λx̄ᵀx, so λ = (x̄ᵀAx)/(x̄ᵀx). Since the denominator x̄ᵀx = ‖x‖² is real and positive, we prove that λ is real by showing that the numerator x̄ᵀAx equals its own conjugate.

Theorem 5.3.7: The eigenvalues of the real matrix
C = [ a -b ]
    [ b  a ]
are λ = a ± bi. If a and b are not both zero, then this matrix can be factored as
C = [ |λ| 0   ] [ cos φ -sin φ ]
    [ 0   |λ| ] [ sin φ  cos φ ]
where φ is the angle from the positive x-axis to the ray that joins the origin to the point (a, b).

Proof of Theorem 5.3.7 The characteristic equation of C is λ² − 2aλ + (a² + b²) = 0, and the eigenvalues of C are λ = a ± bi. Assuming that a and b are not both zero, let φ be the angle from the positive x-axis to the ray that joins the origin to the point (a, b). The angle φ is an argument of the eigenvalue λ = a + bi, so we have a = |λ| cos φ and b = |λ| sin φ, from which the factorization follows.
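The factorization in Theorem 5.3.7 can be verified numerically; in this sketch (NumPy; a = 3, b = 4 is an arbitrary illustrative choice), C is exactly a scaling by |λ| followed by a rotation through φ:

```python
import numpy as np

a, b = 3.0, 4.0
C = np.array([[a, -b],
              [b, a]])

# |lambda| = sqrt(a^2 + b^2); phi = angle to the point (a, b)
modulus = np.hypot(a, b)
phi = np.arctan2(b, a)

scale = modulus * np.eye(2)
rotation = np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

# Theorem 5.3.7: C = (scaling by |lambda|) (rotation through phi)
assert np.allclose(C, scale @ rotation)

# and its eigenvalues are a +/- bi
assert np.allclose(np.sort_complex(np.linalg.eigvals(C)), [a - b*1j, a + b*1j])
```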

Theorem 5.3.8 Let A be a real 2×2 matrix with complex eigenvalues λ = a ± bi, where b ≠ 0. If x is an eigenvector of A corresponding to λ = a − bi, then the matrix P = [Re(x) Im(x)] is invertible and
A = P [ a -b ] P⁻¹
      [ b  a ]

Example Factor the matrix as in Theorem 5.3.8, using the eigenvalue λ = a − bi and the corresponding eigenvector x. Solution: Form P = [Re(x) Im(x)]; then A can be factored as A = P [ a −b; b a ] P⁻¹.

Geometric Interpretation of Theorem 5.3.8 By Theorem 5.3.7, the middle factor [ a −b; b a ] is a scaling by |λ| followed by a rotation through φ, so Theorem 5.3.8 expresses A as A = P (scaling · rotation) P⁻¹. If we now view P as the transition matrix from the basis B = [Re(x) Im(x)] to the standard basis, then this equation tells us that computing a product Ax₀ can be broken into a three-step process.

Geometric Interpretation of Theorem 5.3.8 Step 1. Map x₀ from standard coordinates into B-coordinates by forming the product P⁻¹x₀. Step 2. Rotate and scale the vector P⁻¹x₀ by multiplying it by the rotation-scaling matrix [ a −b; b a ]. Step 3. Map the rotated and scaled vector back to standard coordinates by multiplying by P, to obtain Ax₀.

Power Sequences If A is the standard matrix for an operator on Rⁿ and x₀ is some fixed vector in Rⁿ, then one might be interested in the behavior of the power sequence x₀, Ax₀, A²x₀, …, A^k x₀, … With the help of MATLAB one can show that if the first 100 terms are plotted as ordered pairs (x, y), then the points move along the elliptical path shown in Figure 5.3.4a. To understand why, we need to examine the eigenvalues and eigenvectors of A.

Power Sequences We obtain the factorization of Theorem 5.3.8, in which the middle factor is a rotation about the origin through the angle φ whose tangent is b/a.

Power Sequences The matrix P is the transition matrix from the basis B to the standard basis, and P⁻¹ is the transition matrix from the standard basis to the basis B. If n is a positive integer, the product Aⁿx₀ can be computed by first mapping x₀ into the point P⁻¹x₀ in B-coordinates, then multiplying by the rotation matrix n times to rotate this point about the origin through the angle nφ, and then multiplying the result by P to map it back to standard coordinates.

Power Sequences In B-coordinates, each successive multiplication by A causes the point P⁻¹x₀ to advance through the angle φ, thereby tracing a circular orbit about the origin. However, the basis B is skewed (not orthogonal), so when the points on the circular orbit are transformed back to standard coordinates, the effect is to distort the circular orbit into the elliptical orbit traced by Aⁿx₀.

Power Sequences [x₀ is mapped to B-coordinates] [The point (1, ½) is rotated through the angle φ] [The rotated point is mapped back to standard coordinates]
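This behavior can be reproduced in a small sketch (NumPy; the angle θ, the skewed basis P, and the starting point are hypothetical choices, since the notes' specific matrix is not reproduced here). With eigenvalues of modulus 1, the orbit in B-coordinates is a circle of constant radius, while the standard-coordinate orbit stays on a bounded ellipse:

```python
import numpy as np

# Hypothetical setup: eigenvalues e^{+-i*theta} (modulus 1), skewed basis P
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P = np.array([[1.0, 0.5],
              [0.0, 1.0]])          # skewed (non-orthogonal) basis
A = P @ R @ np.linalg.inv(P)

# In B-coordinates each step is a pure rotation, so |P^-1 A^n x0| is constant
P_inv = np.linalg.inv(P)
x = np.array([1.0, 0.5])
radii = []
for _ in range(100):
    radii.append(np.linalg.norm(P_inv @ x))
    x = A @ x
assert np.allclose(radii, radii[0])

# The standard-coordinate orbit stays bounded (elliptical), never spiraling out
assert np.linalg.norm(x) < 10.0
```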