Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science


Eigenvalues of a Matrix Recap: An N×N matrix A has an eigenvector x (non-zero) with corresponding eigenvalue λ if Ax = λx. This means Ax − λx = 0, i.e., (A − λI)x = 0. If a matrix-vector product gives a zero vector, then either the vector is zero, or the matrix has zero determinant (is singular). Here this means det(A − λI) = 0.

Left and Right Eigenvectors A right eigenvector of a matrix A satisfies Ax = λx. For an N×N matrix we can also define a left matrix product y^T A. So if y^T A = λ y^T then y is a left eigenvector of A. If A is symmetric, A = A^T, then (Ax)^T = x^T A^T = x^T A = (λx)^T = λ x^T. So the left and right eigenvectors of a symmetric matrix are the same.
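
As a quick sketch (not part of the original slides), we can check that a right eigenvector of a symmetric matrix is also a left eigenvector. The matrix A = [2 1; 1 2] and its eigenpair λ = 3, x = (1, 1)^T are illustrative choices:

```python
# For a symmetric matrix, left and right eigenvectors coincide:
# if Ax = lambda x, then x^T A = lambda x^T as well.
A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric illustrative matrix
x = [1.0, 1.0]                 # right eigenvector for lambda = 3
lam = 3.0
# compute x^T A (a row vector)
xA = [sum(x[i] * A[i][j] for i in range(2)) for j in range(2)]
assert xA == [lam * xi for xi in x]   # x is also a left eigenvector
```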

Symmetric Matrices A matrix is symmetric if its transpose is equal to itself: A is symmetric if A^T = A. The complex analogue is a Hermitian matrix, A^H = A. The eigenvalues of a real symmetric (complex Hermitian) matrix are real, and its eigenvectors are orthogonal.

Symmetric Matrices The eigenvalues of a real symmetric matrix are real, and its eigenvectors are orthogonal. Write A X_R = X_R diag(λ1, …, λN) and X_L A = diag(λ1, …, λN) X_L. Multiply the first equation on the left by X_L, the second on the right by X_R, and subtract: (X_L X_R) diag(λ1, …, λN) = diag(λ1, …, λN) (X_L X_R). So the matrix of dot products of the left and right eigenvectors commutes with the diagonal matrix of eigenvalues. Only matrices that commute with a diagonal matrix of distinct elements are themselves diagonal, so X_L X_R is diagonal, and the left and right eigenvectors are mutually orthogonal. But X_L = X_R^T, so the right eigenvectors are orthogonal to each other.

Characteristic Equation Ax = λx can be written as (A − λI)x = 0, which holds for x ≠ 0, so (A − λI) is singular and det(A − λI) = 0. det(A − λI) is called the characteristic polynomial. If A is n×n the polynomial is of degree n, and so A has n eigenvalues, counting multiplicities.
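
For a 2×2 matrix the characteristic polynomial is λ² − tr(A)λ + det(A), so the eigenvalues follow from the quadratic formula. A minimal sketch (the matrix values are illustrative, and real eigenvalues are assumed):

```python
import math

def eig2x2(A):
    """Eigenvalues of a 2x2 matrix via the characteristic polynomial
    lambda^2 - tr(A)*lambda + det(A) = 0 (assumes real roots)."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4.0 * det)
    return sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])

print(eig2x2([[4.0, 3.0], [1.0, 2.0]]))  # -> [1.0, 5.0]
```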

Example Let A = [4 3; 1 2]. Then det(A − λI) = (4 − λ)(2 − λ) − (3)(1) = λ² − 6λ + 5 = (λ − 1)(λ − 5) = 0. Hence the two eigenvalues are 1 and 5.

Example (continued) Once we have the eigenvalues, the eigenvectors can be obtained by substituting back into (A − λI)x = 0. This gives eigenvectors (1, −1)^T and (1, 1/3)^T. Note that we can scale the eigenvectors any way we want. Determinants are not used for finding the eigenvalues of large matrices.
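
A quick verification sketch (not from the slides) that each pair above really satisfies Ax = λx for the 2×2 example A = [4 3; 1 2]:

```python
def matvec(A, x):
    """Matrix-vector product for a matrix given as a list of rows."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[4.0, 3.0], [1.0, 2.0]]
pairs = [(1.0, [1.0, -1.0]), (5.0, [1.0, 1.0 / 3.0])]
for lam, x in pairs:
    Ax = matvec(A, x)
    # A x should equal lambda x (up to rounding)
    assert all(abs(axi - lam * xi) < 1e-12 for axi, xi in zip(Ax, x))
```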

Positive Definite Matrices A complex matrix A is positive definite if for every nonzero complex vector x the quadratic form x^H A x is real and x^H A x > 0, where x^H denotes the conjugate transpose of x (i.e., change the sign of the imaginary part of each component of x and then transpose).
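
For real matrices the quadratic form is x^T A x. A sketch that spot-checks positivity for an illustrative symmetric matrix (A = [2 1; 1 2], whose eigenvalues 1 and 3 are both positive):

```python
import random

def quad_form(A, x):
    """Quadratic form x^T A x for a 2x2 matrix."""
    return sum(x[i] * A[i][j] * x[j] for i in range(2) for j in range(2))

A = [[2.0, 1.0], [1.0, 2.0]]
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    if any(x):  # definition only applies to nonzero x
        assert quad_form(A, x) > 0.0
```

A random spot-check is of course no proof, but it often catches a non-definite matrix quickly.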

Eigenvalues of Positive Definite Matrices If A is positive definite and λ and x are an eigenvalue/eigenvector pair, then Ax = λx, so x^H A x = λ x^H x. Since x^H A x and x^H x are both real and positive, it follows that λ is real and positive.

Properties of Positive Definite Matrices If A is a positive definite matrix then: A is nonsingular. The inverse of A is positive definite. Gaussian elimination can be performed on A without pivoting. The eigenvalues of A are positive.

Hermitian Matrices A square matrix for which A = A^H is said to be a Hermitian matrix. If A is real and Hermitian it is symmetric, and A = A^T. A Hermitian matrix need not be positive definite, but every positive definite matrix (in the sense above) is Hermitian. Every eigenvalue of a Hermitian matrix is real. Eigenvectors of a Hermitian matrix corresponding to different eigenvalues are orthogonal to each other, i.e., their scalar product is zero.

The Power Method Label the eigenvalues in order of decreasing absolute value, so |λ1| > |λ2| ≥ … ≥ |λn|. Consider the iteration formula: y_{k+1} = A y_k, where we start with some initial y_0, so that y_k = A^k y_0. Then y_k converges in direction to the eigenvector x_1 corresponding to the dominant eigenvalue λ1.

Eigen Decomposition Recall from a previous class the eigen decomposition. Let λ1, λ2, …, λn be the eigenvalues of the n×n matrix A and x_1, x_2, …, x_n the corresponding eigenvectors. Let Λ be the diagonal matrix with λ1, λ2, …, λn on the main diagonal. Let X be the n×n matrix whose jth column is x_j. Then AX = XΛ, and so we have the eigen decomposition of A: A = X Λ X^-1. This requires X to be invertible, thus the eigenvectors of A must be linearly independent.

Powers of Matrices Also recall: if A = X Λ X^-1 then A² = (X Λ X^-1)(X Λ X^-1) = X Λ (X^-1 X) Λ X^-1 = X Λ² X^-1. Hence we have: A^p = X Λ^p X^-1. Thus A^p has the same eigenvectors as A, and its eigenvalues are λ1^p, λ2^p, …, λn^p. We can use these results as the basis of an iterative algorithm for finding the eigenvalues of a matrix.
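
A small sketch of this fact for p = 2, using the illustrative 2×2 matrix A = [4 3; 1 2] and its eigenpair λ = 5, x = (1, 1/3)^T: A² should map x to λ² x = 25 x.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4.0, 3.0], [1.0, 2.0]]
A2 = matmul(A, A)              # A^2 = X Lambda^2 X^-1
x = [1.0, 1.0 / 3.0]           # eigenvector of A for lambda = 5
Ax2 = matvec(A2, x)
# same eigenvector, eigenvalue squared: A^2 x = 25 x
assert all(abs(v - 25.0 * xi) < 1e-12 for v, xi in zip(Ax2, x))
```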

Proof We know that A^k = X Λ^k X^-1, so: y_k = A^k y_0 = X Λ^k X^-1 y_0. Now we have: Λ^k = λ1^k diag(1, (λ2/λ1)^k, …, (λn/λ1)^k). The terms on the diagonal after the first get smaller in absolute value as k increases, since λ1 is the dominant eigenvalue.

Proof (continued) Writing c = X^-1 y_0 for the coefficients of y_0 in the eigenvector basis, we have y_k = λ1^k [c_1 x_1 + c_2 (λ2/λ1)^k x_2 + … + c_n (λn/λ1)^k x_n] ≈ λ1^k c_1 x_1 for large k. Since λ1^k c_1 x_1 is just a constant times x_1, we have the required result.

Example Let A = [2 -12; 1 -5] and y_0 = [1 1]^T. Then
y_1 = -4 [2.50 1.00]^T
y_2 = 10 [2.80 1.00]^T
y_3 = -22 [2.91 1.00]^T
y_4 = 46 [2.96 1.00]^T
y_5 = -94 [2.98 1.00]^T
y_6 = 190 [2.99 1.00]^T
The iteration is converging on a scalar multiple of [3 1]^T, which is the correct dominant eigenvector.
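
A sketch reproducing the unscaled iteration y_{k+1} = A y_k, taking the matrix as A = [2 -12; 1 -5] and y_0 = (1, 1)^T:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2.0, -12.0], [1.0, -5.0]]
y = [1.0, 1.0]
for k in range(1, 7):
    y = matvec(A, y)
    # the ratio of the components approaches 3, i.e. direction [3 1]
    print(k, y, y[0] / y[1])
```

Note how the magnitude of y_k grows by roughly |λ1| = 2 each step even as the direction settles; that growth is what motivates the scaled variant below.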

Rayleigh Quotient Note that once we have the eigenvector, the corresponding eigenvalue can be obtained from the Rayleigh quotient: dot(Ax, x)/dot(x, x), where dot(a, b) is the scalar product of vectors a and b defined by: dot(a, b) = a_1 b_1 + a_2 b_2 + … + a_n b_n. So for our example, λ1 = -2.
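
A sketch of the Rayleigh quotient for the dominant eigenvector x = (3, 1)^T of the example matrix A = [2 -12; 1 -5]:

```python
def dot(a, b):
    """Scalar product a_1 b_1 + ... + a_n b_n."""
    return sum(ai * bi for ai, bi in zip(a, b))

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2.0, -12.0], [1.0, -5.0]]
x = [3.0, 1.0]
lam = dot(matvec(A, x), x) / dot(x, x)  # Rayleigh quotient
print(lam)  # -> -2.0
```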

Scaling The factor λ1^k can cause problems, as it may become very large (or very small) as the iteration progresses. To avoid this problem we scale the iteration formula: y_{k+1} = (A y_k)/r_{k+1}, where r_{k+1} is the component of A y_k with largest absolute value.

Example with Scaling Let A = [2 -12; 1 -5] and y_0 = [1 1]^T.
A y_0 = [-10 -4], so r_1 = -10 and y_1 = [1.00 0.40].
A y_1 = [-2.8 -1.0], so r_2 = -2.8 and y_2 = [1.0 0.3571].
A y_2 = [-2.2857 -0.7857], so r_3 = -2.2857 and y_3 = [1.0 0.3437].
A y_3 = [-2.1250 -0.7188], so r_4 = -2.1250 and y_4 = [1.0 0.3382].
A y_4 = [-2.0588 -0.6912], so r_5 = -2.0588 and y_5 = [1.0 0.3357].
A y_5 = [-2.0286 -0.6786], so r_6 = -2.0286 and y_6 = [1.0 0.3345].
r_k is converging to the correct eigenvalue -2.
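
The same scaled iteration as a runnable sketch, with A = [2 -12; 1 -5] and y_0 = (1, 1)^T:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2.0, -12.0], [1.0, -5.0]]
y = [1.0, 1.0]
for k in range(1, 7):
    Ay = matvec(A, y)
    r = max(Ay, key=abs)        # component of A y_k with largest |value|
    y = [v / r for v in Ay]     # rescale so largest component is 1
    print(k, round(r, 4), [round(v, 4) for v in y])
```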

Scaling Factor At step k+1, the scaling factor r_{k+1} is the component with largest absolute value in A y_k. When k is sufficiently large, A y_k ≈ λ1 y_k. The component with largest absolute value in y_k is 1 (since y_k was scaled in the previous step to have largest component 1). Hence r_{k+1} -> λ1 as k -> ∞.

MATLAB Code
function [lambda, y] = powermethod(A, y, n)
% n steps of the scaled power method starting from y
for i = 1:n
    y = A*y;               % apply the matrix
    [c, j] = max(abs(y));  % find the largest-magnitude component
    lambda = y(j);         % current estimate of the dominant eigenvalue
    y = y/lambda;          % rescale so the largest component is 1
end

Convergence The Power Method relies on us being able to ignore terms of the form (λj/λ1)^k when k is large enough. Thus, the convergence of the Power Method depends on |λ2/λ1|. If |λ2/λ1| = 1 the method will not converge. If |λ2/λ1| is close to 1 the method will converge slowly.
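
A sketch of the convergence rate, using the illustrative diagonal matrix diag(2, 1), for which |λ2/λ1| = 1/2: the off-direction error shrinks by exactly that factor each step.

```python
# Power iteration on diag(2, 1): the subdominant component of the
# scaled iterate is the error, and it halves every step (|l2/l1| = 1/2).
y = [1.0, 1.0]
errs = []
for _ in range(5):
    y = [2.0 * y[0], 1.0 * y[1]]  # apply diag(2, 1)
    y = [v / y[0] for v in y]     # scale so the first component is 1
    errs.append(y[1])             # error = size of subdominant component
print(errs)  # each entry is half the previous one
```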