CHAPTER 3. Matrix Eigenvalue Problems


A SERIES OF CLASS NOTES FOR 2005-2006 TO INTRODUCE LINEAR AND NONLINEAR PROBLEMS TO ENGINEERS, SCIENTISTS, AND APPLIED MATHEMATICIANS

DE CLASS NOTES 3

A COLLECTION OF HANDOUTS ON SYSTEMS OF ORDINARY DIFFERENTIAL EQUATIONS (ODEs)

CHAPTER 3  Matrix Eigenvalue Problems

1. Eigenvalue Problems for Matrices
2. Hermitian Matrices
3. Basis of Eigenvectors

Handout #1                MATRIX EIGENVALUE PROBLEMS                Prof. Moseley

For a given square matrix A, the nonzero vectors x and scalars λ such that Ax = λx are special. For example, for the 2x2 matrix A and 2x1 vector x below,

    [ 1  1 ] [ 1 ]   [ 2 ]      [ 1 ]
    [ 1  1 ] [ 1 ] = [ 2 ]  = 2 [ 1 ]        (1)

The proportionality constant λ (here λ = 2) is called an eigenvalue of A, and x is an eigenvector of A associated with the eigenvalue λ. Since we can write (1) as the homogeneous equation

    (A - λI)x = 0,        (2)

the eigenvalues are exactly the solutions of the polynomial equation

    det(A - λI) = 0,        (3)

that is, the values of λ that make the matrix A - λI singular. Recall that (2) has only the trivial solution x = 0 if det(A - λI) ≠ 0, so that A - λI is nonsingular. If det(A - λ1 I) = 0, then there are an infinite number of solutions to (2), and they form a subspace of R^n (called the eigenspace associated with λ1). By definition, an eigenvector for a given eigenvalue is any nontrivial vector in its eigenspace. Unfortunately, although this conflicts with the formal definition of an eigenvector, for a given eigenvalue λ = λ1 the vectors in a basis of this eigenspace are often referred to as the eigenvectors associated with λ1. However, as with the concept of linear independence, once you understand the idea, the loose semantics are not a problem. Solving the eigenvalue problem Ax = λx means finding all eigenvalues and basis sets for their associated eigenspaces; informally (especially among engineers), this is called finding the eigenvectors.

Interestingly, an eigenvalue problem is different from solving Ax = b. If the entries in A and b are in a field K, then, since only a finite number of field operations are needed for the solution process (Gauss elimination), all solutions will be in K^n. However, the solution of Ax = λx requires the roots of (3), which may be nonreal even if the entries in A are integers. Hence to solve an eigenvalue problem we must be able to obtain the roots of an nth-degree polynomial, and so we really must use the field C.
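Equation (3) can be checked numerically. The sketch below (assuming NumPy is available; it uses the matrix from equation (1)) finds the eigenvalues both as roots of the characteristic polynomial and directly with np.linalg.eig.

```python
import numpy as np

# The matrix from equation (1): both rows are (1, 1).
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# np.poly on a square matrix returns the coefficients of the
# characteristic polynomial p(lambda) = det(A - lambda*I).
coeffs = np.poly(A)            # here: lambda^2 - 2*lambda
roots = np.roots(coeffs)       # eigenvalues as roots of p

# Direct computation of eigenvalues and eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)

print(np.sort(roots.real))     # eigenvalues 0 and 2 (up to rounding)
print(np.sort(eigvals.real))   # same values from eig
```

Both routes agree: λ = 0 and λ = 2, and note that an eigenvalue may be zero even though an eigenvector must be nonzero.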
Eigenvectors in R^3 may have a geometric interpretation (e.g., in the design of rotors for motors). However, we are interested in solving Ax = λx in R^n so that we can use it as part of the solution process for a system of linear homogeneous ODEs.

EXAMPLE #1. Solve the eigenvalue problem Ax = λx if

    A = [  1  1 ]
        [ -1  3 ]

Step 1. Find the eigenvalues of A.

    A - λI = [ 1-λ   1  ]     p(λ) = det(A - λI) = (1-λ)(3-λ) + 1 = λ^2 - 4λ + 4 = (λ - 2)^2.
             [ -1   3-λ ]

Hence p(λ) = 0 gives (λ - 2)^2 = 0, so λ = 2, 2 (a repeated real root). We say that the eigenvalue λ = 2 has algebraic multiplicity two. (If p(λ) = (λ - λ1)^{n1} ··· (λ - λi)^{ni} ··· (λ - λk)^{nk}, then the algebraic multiplicity of λi is ni.)

Step 2. Find the eigenvectors (i.e., find a basis set for the eigenspace associated with each eigenvalue). We first solve (A - 2I)x = 0, where

    A - 2I = [ -1  1 ]
             [ -1  1 ]

To solve (A - 2I)x = 0 we use Gauss elimination to reduce A - 2I. It need not be augmented by the right-hand side, since the problem is homogeneous: the RHS is 0, and any elementary row operation leaves it zero.

    [ -1  1 ]   R2 -> R2 - R1   [ -1  1 ]     so  -x1 + x2 = 0,  i.e.  x1 = x2.
    [ -1  1 ]   ------------->  [  0  0 ]

Hence

    x = [ x1 ] = x1 [ 1 ]
        [ x1 ]      [ 1 ]

is the general solution of (A - 2I)x = 0. Thus there are an infinite number of solutions, all scalar multiples of the vector x = [1, 1]^T (we could have chosen any other nonzero multiple as well). The dimension of the eigenspace is one. In general, the dimension of the eigenspace is called the geometric multiplicity of the eigenvalue.

THEOREM. The algebraic multiplicity of an eigenvalue is always greater than or equal to its geometric multiplicity.

The nice case is when they are equal. Unfortunately, that is not the case in this example.
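The gap between the two multiplicities can be seen numerically: the geometric multiplicity is the dimension of the null space of A - 2I, i.e., n minus its rank. This is a sketch assuming NumPy, and assuming the Example #1 matrix is [[1, 1], [-1, 3]] as reconstructed here.

```python
import numpy as np

# Example #1: A has the repeated eigenvalue lambda = 2.
A = np.array([[1.0, 1.0],
              [-1.0, 3.0]])

eigvals = np.linalg.eigvals(A)   # both approximately 2

# Geometric multiplicity = dim null(A - 2I) = n - rank(A - 2I).
M = A - 2.0 * np.eye(2)
geometric_mult = 2 - np.linalg.matrix_rank(M)

print(np.round(eigvals.real, 6))  # algebraic multiplicity two
print(geometric_mult)             # geometric multiplicity one
```

The rank of A - 2I is 1, so there is only a one-dimensional eigenspace: the matrix is defective.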

We record the information in a table.

    TABLE
    e-values                          Associated e-vectors
    λ1 = λ2 = 2                       x1 = [1, 1]^T
    (the algebraic multiplicity       (only one vector in the basis set for the
    is two)                           eigenspace, so the geometric multiplicity is one)

EXAMPLE #2. Solve the eigenvalue problem Ax = λx if

    A = [  0  1 ]
        [ -1  0 ]

Step 1. Find the eigenvalues of A.

    A - λI = [ -λ   1 ]     p(λ) = det(A - λI) = λ^2 + 1.
             [ -1  -λ ]

    p(λ) = 0  =>  λ^2 + 1 = 0  =>  λ = ±i.

Step 2. Find the eigenvectors (i.e., find a basis set for the eigenspace associated with each eigenvalue). We first solve (A - iI)x = 0, where

    A - iI = [ -i   1 ]
             [ -1  -i ]

To solve (A - iI)x = 0 we use Gauss elimination to reduce A - iI. It need not be augmented by the right-hand side, since the problem is homogeneous: the RHS is 0, and any elementary row operation leaves it zero.

    [ -i   1 ]   R2 -> R2 + iR1   [ -i  1 ]     so  -i x1 + x2 = 0,  i.e.  x2 = i x1.
    [ -1  -i ]   -------------->  [  0  0 ]

Hence

    x = [ x1   ] = x1 [ 1 ]
        [ i x1 ]      [ i ]

is the general solution of (A - iI)x = 0. Thus there are an infinite number of solutions, all scalar multiples of the vector [1, i]^T.

If A has all real entries, then p(λ) = det(A - λI) has all real coefficients, so the nonreal roots of det(A - λI) = 0 come in complex conjugate pairs. Furthermore, since A is real, if λ = µ + iν is an eigenvalue with eigenvector a + ib, then a - ib is an eigenvector for the conjugate eigenvalue µ - iν. Hence x = [1, -i]^T is an eigenvector associated with λ = -i. We record the information in a table.

    TABLE
    e-values          Associated e-vectors
    λ1 =  i           x1 = [1,  i]^T
    λ2 = -i           x2 = [1, -i]^T
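The conjugate-pair structure of Example #2 can be confirmed numerically. This is a sketch assuming NumPy is available, with the matrix [[0, 1], [-1, 0]] from Example #2.

```python
import numpy as np

# Example #2: a real matrix with nonreal eigenvalues.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Nonreal eigenvalues of a real matrix come in conjugate pairs: here +i and -i.
assert np.allclose(np.sort_complex(eigvals), [-1j, 1j])

# Each column of eigvecs is an eigenvector; verify A x = lambda x.
for k in range(2):
    assert np.allclose(A @ eigvecs[:, k], eigvals[k] * eigvecs[:, k])

print(eigvals)   # i and -i (order may vary)
```

Note that np.linalg.eig returns eigenvectors normalized to unit length, so they are scalar multiples of [1, i]^T and [1, -i]^T rather than those vectors themselves.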

Handout #2                HERMITIAN MATRICES                Prof. Moseley

DEFINITION #1. A square matrix A ∈ C^{n×n} is called Hermitian (or self-adjoint) if A = A*, where A* is the transpose of the complex conjugate of A.

THEOREM #1. If A ∈ C^{n×n} is Hermitian (self-adjoint), then:
1. All eigenvalues are real.
2. There exists a full set of n linearly independent eigenvectors for A, which form a basis for C^n (if A is real, the eigenvectors can be chosen to be real and hence form a basis for R^n).
3. If x and y are eigenvectors for different eigenvalues, then they are orthogonal (perpendicular), so that (x, y) = x̄^T y = 0.
4. If a given eigenvalue λ has more than one linearly independent eigenvector associated with it (we say λ has geometric multiplicity m), then the eigenvectors can be chosen to be orthogonal.

COROLLARY #1. If A ∈ R^{n×n} ⊆ C^{n×n} = R^{n×n} + iR^{n×n} is real and symmetric, then it is Hermitian (self-adjoint) and its eigenvalues are real. Also, it has a full set of eigenvectors which may be chosen to be orthogonal.

In an appropriate setting, the real matrices associated with linear circuits and with systems of linear springs are symmetric and hence have real eigenvalues. Such matrices may also be positive or negative definite, indicating exponential growth or decay; nonreal eigenvalues may indicate pure or damped oscillations.
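Theorem #1 is what specialized numerical routines exploit. As an illustration (a sketch assuming NumPy; the matrix below is an arbitrary Hermitian example, not one from the text), np.linalg.eigh returns real eigenvalues and an orthonormal set of eigenvectors:

```python
import numpy as np

# An arbitrary Hermitian matrix: A equals its conjugate transpose.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)

# eigh is specialized for Hermitian (self-adjoint) matrices.
eigvals, eigvecs = np.linalg.eigh(A)

# Property 1: the eigenvalues are real (eigh returns a real array).
assert eigvals.dtype.kind == 'f'

# Properties 2-4: the eigenvectors form an orthonormal basis of C^2,
# so the conjugate-transpose product is the identity.
assert np.allclose(eigvecs.conj().T @ eigvecs, np.eye(2))

print(eigvals)   # two real eigenvalues, approximately 1 and 4
```

For this particular matrix the characteristic polynomial is λ^2 - 5λ + 4 = (λ - 1)(λ - 4), consistent with the printed values.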

Handout #3                BASIS OF EIGENVECTORS                Prof. Moseley

Let T: R^n -> R^n be defined by T(x) = Ax. A matrix is "nice" if it has a full set of n linearly independent eigenvectors, which then form a basis of R^n. Recall that B = {ê1, ..., ên}, where êi = [0, ..., 0, 1, 0, ..., 0]^T (with the 1 in the ith slot), is a basis of R^n. If x = [x1, x2, ..., xn]^T ∈ R^n, then x1, x2, ..., xn are the coordinates of x with respect to this basis. If x1, x2, ..., xn are a full set of n linearly independent eigenvectors that form a basis of R^n, then the coordinates of x with respect to this basis can be obtained from the set of linear algebraic equations

    x = c1 x1 + ··· + cn xn.

Recall that if A is Hermitian (self-adjoint), then it has a full set of eigenvectors that are orthogonal. The coordinates with respect to an orthogonal basis are particularly easy to obtain. Let B = {x1, ..., xn} be an orthogonal basis, so that any vector x can be written as

    x = c1 x1 + ··· + cn xn.        (1)

We obtain the coordinate cj by taking the dot product of (1) with xj:

    (xj, x) = (xj, c1 x1 + ··· + cn xn)
            = c1 (xj, x1) + ··· + cn (xj, xn)
            = cj (xj, xj)
            = cj ||xj||^2,

since (xj, xi) = 0 for i ≠ j. Hence

    cj = (xj, x) / ||xj||^2.

We might call these the Fourier series coefficients for this basis.
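The coordinate formula cj = (xj, x) / ||xj||^2 can be sketched numerically (assuming NumPy; the symmetric matrix below is an arbitrary illustration, not one from the text):

```python
import numpy as np

# A real symmetric matrix has an orthogonal basis of eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)   # columns are orthonormal eigenvectors

x = np.array([3.0, -1.0])              # an arbitrary vector to expand

# Fourier coefficients c_j = (x_j, x) / ||x_j||^2 for each basis vector x_j.
# (The denominators are 1 here since eigh normalizes the eigenvectors.)
c = np.array([eigvecs[:, j] @ x / (eigvecs[:, j] @ eigvecs[:, j])
              for j in range(2)])

# Reassembling c_1 x_1 + c_2 x_2 recovers x.
x_rebuilt = eigvecs @ c
assert np.allclose(x_rebuilt, x)
print(c)
```

No linear system needs to be solved: each coordinate comes from a single dot product, which is exactly why orthogonal eigenvector bases are so convenient.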