Lecture 12: Eigenvalue Problem
Review of eigenvalues, some properties, power method, shift method, inverse power method, deflation, QR method


Eigenvalue
The eigenvalue problem is A x = λ x, i.e., (A − λI) x = 0. If det(A − λI) ≠ 0, only the trivial solution x = 0 exists. To obtain a non-trivial solution, λ must satisfy the characteristic equation

  det(A − λI) = det [ a11−λ  a12   ...  a1n
                      a21    a22−λ ...  a2n
                      ...
                      an1    an2   ...  ann−λ ] = 0

Properties of Eigenvalues
1) trace of A: tr(A) = Σ_i a_ii = Σ_i λ_i
2) det A = Π_i λ_i
3) If A is symmetric, the eigenvectors are orthogonal: x_i^T x_j = 0 for i ≠ j
4) If the eigenvalues of A are λ1, λ2, ..., λn, then the eigenvalues of (A − aI) are λ1 − a, λ2 − a, ..., λn − a
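These four properties can be checked numerically. A minimal NumPy sketch, using an illustrative symmetric matrix that is not from the slides:

```python
import numpy as np

# Illustrative symmetric 2x2 matrix (not from the slides).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam, V = np.linalg.eigh(A)   # eigenvalues (ascending) + orthonormal eigenvectors

print(np.isclose(np.trace(A), lam.sum()))        # 1) tr(A) = sum of eigenvalues
print(np.isclose(np.linalg.det(A), lam.prod()))  # 2) det(A) = product of eigenvalues
print(np.allclose(V.T @ V, np.eye(2)))           # 3) symmetric A: orthogonal eigenvectors
a = 2.0
print(np.allclose(np.linalg.eigvalsh(A - a * np.eye(2)), lam - a))  # 4) shift property
```

All four checks print True for any symmetric A.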

Application Examples
If a matrix A represents a system, then eigenvalues can be used for system identification, e.g., stability analysis and so on. For infinite-dimensional systems, eigenvalue-eigenfunction pairs are used, e.g., the resonance frequency of an oscillator or cavity, waveguide modes, etc. In image and speech recognition, eigenfaces and eigenvoices are used for identification. In statistics, principal component analysis (PCA) is used to find relationships among variables.

Geometrical Interpretation of Eigenvectors
Transformation A: A x = λ x. The transformation of an eigenvector x is mapped onto the same line as x. Symmetric matrix: orthogonal eigenvectors. Relation to singular values: if A is singular, 0 is one of the eigenvalues.

Power Method
Any vector x can be written as
  x = α1 φ1 + α2 φ2 + ... + αn φn,
where αm is a constant and φm denotes an eigenvector. Continuing the multiplication by A and using A φm = λm φm:
  A^n x = α1 λ1^n φ1 + α2 λ2^n φ2 + ... + αn λn^n φn,
where λm denotes an eigenvalue and A^n = A·A···A (n times).

Power Method
Factor out the term with the largest eigenvalue λ1:
  A^n x = λ1^n [ α1 φ1 + α2 (λ2/λ1)^n φ2 + ... + αn (λn/λ1)^n φn ].
As you continue to multiply the vector by [A], the ratios (λm/λ1)^n vanish, so
  A^n x → λ1^n α1 φ1.

Power Method
The basic computation of the power method is summarized as
  u_k = A u_{k-1}   (normalizing u_k at each step),
so u_k ∝ A^k u_0, and
  lim_{k→∞} u_k = φ1.
The dominant eigenvalue then follows from the ratio of successive iterates,
  λ1 = lim_{k→∞} (A u_{k-1})_i / (u_{k-1})_i
for any component i.

The Power Method Algorithm
  y = nonzero random vector
  for k = 1, 2, ..., n_iter:
      x = A*y
      y = x / ||x||            (y is the approximate eigenvector)
      μ = (y^T A y)/(y^T y)    (approximate eigenvalue)
      r = μ y − A y            (residual; stop when ||r|| is small)
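A minimal NumPy sketch of this algorithm; the function name and the test matrix are illustrative, not from the slides:

```python
import numpy as np

def power_method(A, n_iter=100, tol=1e-10, seed=0):
    """Power iteration following the algorithm above (illustrative helper)."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(A.shape[0])        # nonzero random starting vector
    mu = 0.0
    for _ in range(n_iter):
        x = A @ y
        y = x / np.linalg.norm(x)              # approximate eigenvector
        mu = (y @ A @ y) / (y @ y)             # approximate eigenvalue (Rayleigh quotient)
        if np.linalg.norm(A @ y - mu * y) < tol:   # residual r = A y - mu y
            break
    return mu, y

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                     # eigenvalues 3 and 1
lam, v = power_method(A)
print(lam)                                     # ~3, the dominant eigenvalue
```

The iteration count needed depends on the ratio |λ2/λ1|, which is the disadvantage noted below.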

Example of Power Method
Consider a matrix A and assume an arbitrary starting vector. Multiply the vector by the matrix [A] and normalize the result of the product at each step. As you continue to multiply, each successive normalized vector settles down, and the iteration yields the dominant eigenvalue λ1 = 4 and its eigenvector u.

Example of Power Method
A second worked example proceeds the same way for another matrix: after each multiplication by [A] the vector is normalized, and within a few iterations the components stop changing, at which point the dominant eigenvalue λ1 and its eigenvector u are read off.

Power Method: Advantages
Simple, easy to implement. The eigenvector corresponding to the dominant (i.e., maximum-magnitude) eigenvalue is generated at the same time. The inverse power method solves for the minimal eigenvalue/eigenvector pair.

Power Method: Disadvantages
The power method provides only one eigenvalue/eigenvector pair, the one corresponding to the maximum eigenvalue. Modifications must be implemented to find the other eigenvalues, e.g., the shift method, deflation, etc. Also, if the dominant eigenvalue is not sufficiently larger than the others, a large number of iterations is required.

Shift Method
It is possible to obtain another eigenvalue from the set of equations by using a technique known as shifting the matrix. Starting from [A] x = λ x, subtract s[I] x from each side, thereby changing the maximum eigenvalue:
  ([A] − s[I]) x = (λ − s) x

Shift Method
Let λmax be the maximum eigenvalue of the matrix A. Then rewrite the matrix in the form
  [B] = [A] − λmax [I].
The power method can then be applied to obtain the largest (in magnitude) eigenvalue of [B]; shifting it back by λmax recovers another eigenvalue of [A].
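A NumPy sketch of the shift method; the matrix and helper function are illustrative, not the slides' example:

```python
import numpy as np

# Once the power method has found lambda_max of A, run it again on
# B = A - lambda_max*I; the dominant eigenvalue of B, shifted back by
# lambda_max, is another eigenvalue of A.
rng = np.random.default_rng(0)

def dominant(M, n_iter=500):
    """Plain power iteration returning the Rayleigh-quotient estimate."""
    y = rng.standard_normal(M.shape[0])   # random nonzero start
    for _ in range(n_iter):
        x = M @ y
        y = x / np.linalg.norm(x)
    return y @ M @ y

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])          # eigenvalues 4 and 2
lam_max = dominant(A)               # ~4
B = A - lam_max * np.eye(2)         # eigenvalues of B: ~0 and ~-2
lam_2 = dominant(B) + lam_max       # dominant eigenvalue of B, shifted back: ~2
print(lam_max, lam_2)
```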

Example of Shift Method
Form the shifted matrix [B] = [A] − 4[I] from the earlier example and assume an arbitrary starting vector. Multiply by the matrix [B] and normalize the result of the product at each step. Continuing with the iteration, the final value is −5. However, to get the true eigenvalue of [A] you need to shift back:
  λ = λB + λmax = −5 + 4 = −1

Inverse Power Method
The inverse power method is similar to the power method, except that it finds the smallest eigenvalue. Starting from [A] x = λ x, multiply both sides by [A]^{-1}:
  [A]^{-1}[A] x = λ [A]^{-1} x
  x = λ [A]^{-1} x
  [A]^{-1} x = (1/λ) x = μ x,   with [B] = [A]^{-1}

Inverse Power Method
The algorithm is the same as the power method, applied to [B] = [A]^{-1}; the eigenvector it produces is the eigenvector of the smallest eigenvalue of [A]. To obtain the smallest eigenvalue from the power method result:
  λmin = 1/μmax

Inverse Power Method
To avoid calculating the inverse matrix, the inverse power algorithm uses an LU decomposition: at each step the next vector is found by solving
  [A] x_{k+1} = y_k,   with [A] = [L][U]
(one forward substitution with [L] and one back substitution with [U], reusing the same factorization every iteration).
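A sketch of inverse iteration with a reused factorization; it assumes SciPy's lu_factor/lu_solve for the triangular solves, and the matrix is illustrative:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 3 and 1
lu, piv = lu_factor(A)              # factor A = LU once (with pivoting)
y = np.array([1.0, 0.0])
for _ in range(100):
    x = lu_solve((lu, piv), y)      # solve A x = y, i.e. x = A^{-1} y,
    y = x / np.linalg.norm(x)       # without ever forming A^{-1}

mu = y @ lu_solve((lu, piv), y)     # dominant eigenvalue of A^{-1}
lam_min = 1.0 / mu                  # smallest eigenvalue of A
print(lam_min)                      # ~1
```

Factoring once and solving repeatedly costs O(n^3) + k·O(n^2), versus O(n^3) per step if A were inverted or re-solved from scratch.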

Matrix Deflation
It is possible to obtain the eigenvectors one after another. Properly assigning the deflation vector is important, e.g., Wielandt's deflation.

Example Using Deflation (I)

Example Using Deflation (II)

Wielandt's Deflation
Let x1, λ1 be an eigenvector/eigenvalue pair of A. Consider
  A1 = A − λ1 x1 v^T,   with v chosen so that v^T x1 = 1.
Clearly 0 is an eigenvalue of A1, since
  A1 x1 = A x1 − λ1 x1 (v^T x1) = λ1 x1 − λ1 x1 = 0.
For the remaining pairs λj, xj, j = 2, 3, ..., n, A1 keeps the eigenvalues λj, i.e., A1 yj = λj yj for modified eigenvectors yj.

Wielandt's Deflation (2)
Thus yj, λj, j = 2, 3, ..., n, are eigenvector/eigenvalue pairs of A1. To construct A1, first find a vector v such that v^T x1 = 1. The modified eigenvectors are
  yj = xj − (λ1 v^T xj / λj) x1,
which can be checked by substitution:
  A1 yj = A1 xj − (λ1 v^T xj / λj) A1 x1
        = λj xj − λ1 (v^T xj) x1 − 0 = λj yj.

Wielandt's Deflation (3)
If x1 has a nonzero first element, normalize it to make its first element 1. Then we can choose
  v^T = (1/λ1) e1^T A,   with e1^T = [1 0 ... 0],
since v^T x1 = (1/λ1) e1^T (A x1) = e1^T x1 = 1. Then
  A1 = A − λ1 x1 v^T = A − x1 e1^T A = (I − x1 e1^T) A.
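A NumPy sketch of this deflation with v^T = e1^T A / λ1; the matrix and the starting vector are illustrative, not from the slides:

```python
import numpy as np

# After (lambda_1, x1) is found, A1 = (I - x1 e1^T) A has eigenvalues
# {0, lambda_2, ..., lambda_n}, so the power method on A1 yields lambda_2.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])     # eigenvalues 3+sqrt(3), 3, 3-sqrt(3)

def dominant(M, n_iter=1000):
    """Plain power iteration with Rayleigh-quotient estimate."""
    y = np.array([1.0, 0.3, -0.2])  # arbitrary nonzero starting vector
    for _ in range(n_iter):
        x = M @ y
        y = x / np.linalg.norm(x)
    return y @ M @ y, y

lam1, x1 = dominant(A)              # ~4.732
x1 = x1 / x1[0]                     # scale x1 so its first element is 1
e1 = np.array([1.0, 0.0, 0.0])
A1 = (np.eye(3) - np.outer(x1, e1)) @ A   # deflated matrix; first row is zero
lam2, _ = dominant(A1)              # next eigenvalue of A
print(lam1, lam2)                   # ~4.732, ~3.0
```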

Other Methods
Rayleigh quotient iteration, bisection method, divide-and-conquer, the QR algorithm, etc.

Similar Matrices
Definition: A and B are similar matrices if and only if there exists a nonsingular matrix P such that B = P^{-1} A P (or P B P^{-1} = A).
Theorem: Similar matrices have the same set of eigenvalues.
Proof: Since A p = λ p and A = P B P^{-1},
  P B P^{-1} p = λ p  ⇒  P^{-1}(P B P^{-1} p) = P^{-1}(λ p)  ⇒  B (P^{-1} p) = λ (P^{-1} p).
Then λ is an eigenvalue of A (with eigenvector p) if and only if λ is an eigenvalue of B (with eigenvector P^{-1} p). QED
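The theorem is easy to check numerically. A small sketch with illustrative matrices (P is random, hence nonsingular with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # triangular, so eigenvalues 2 and 3
P = rng.standard_normal((2, 2))
B = np.linalg.solve(P, A @ P)       # B = P^{-1} A P without forming P^{-1}

print(np.sort(np.linalg.eigvals(A).real))   # [2. 3.]
print(np.sort(np.linalg.eigvals(B).real))   # [2. 3.] as well, up to roundoff
```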

QR Algorithm
Given a square matrix A we can factor A = QR, where Q is orthogonal (i.e., Q^T Q = I, or Q^T = Q^{-1}) and R is upper triangular.
Algorithm to find eigenvalues:
Start:   A^(1) = A = Q^(1) R^(1)   (note: R^(1) = Q^(1)T A)
         A^(2) = R^(1) Q^(1) = Q^(1)T A Q^(1)   (A^(2) and A are similar)
Factor:  A^(2) = Q^(2) R^(2)
         A^(3) = R^(2) Q^(2) = Q^(2)T Q^(1)T A Q^(1) Q^(2)   (similar to A^(2))
General: Given A^(m), factor A^(m) = Q^(m) R^(m), then
         A^(m+1) = R^(m) Q^(m)   (similar to A^(m), ..., A^(2), A^(1) = A)

QR Algorithm
Note: If the eigenvalues of A satisfy |λ1| > |λ2| > ... > |λn|, then the iterates A^(m) converge to an upper triangular matrix having the eigenvalues of A on the diagonal (proof omitted):

  A^(m) →  [ λ1  *   ...  *
             0   λ2  ...  *
             ...
             0   0   ...  λn ]

QR Algorithm: Eigenvectors
How do we compute the eigenvectors? Note:
  A^(m) = Q^(m-1)T ... Q^(2)T Q^(1)T A Q^(1) Q^(2) ... Q^(m-1).
Let Q* = Q^(1) Q^(2) ... Q^(m-1). Then A^(m) = Q*T A Q*.
If A^(m) becomes diagonal, the eigenvectors of A^(m) are e1, e2, ..., en. Since A^(m) and A are similar, the eigenvectors of A are (Q*T)^{-1} e_i = Q* e_i, i.e., the eigenvectors of A are the columns of Q*. For any symmetric real matrix, A^(m) converges to a diagonal matrix. For more details, see Matrix Computations by Golub and Van Loan, or other books on matrix computations.
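A NumPy sketch of the iteration on a symmetric matrix (the matrix is illustrative): A^(m+1) = R^(m) Q^(m) converges to a diagonal matrix of eigenvalues, and the accumulated product Q* carries the eigenvectors as columns.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 3 and 1
Am = A.copy()
Qstar = np.eye(2)
for _ in range(50):
    Q, R = np.linalg.qr(Am)         # factor the current iterate
    Am = R @ Q                      # similar to the previous iterate
    Qstar = Qstar @ Q               # accumulate Q* = Q^(1) Q^(2) ... Q^(m)

print(np.diag(Am))                  # ~[3. 1.]: eigenvalues on the diagonal
print(Qstar[:, 0])                  # first column: eigenvector for lambda = 3 (up to sign)
```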

Examples
Two numerical examples apply the QR iteration to a symmetric matrix A: as the iteration proceeds, the off-diagonal entries of the iterates A^(m) shrink toward zero, the diagonal entries approach the eigenvalues of A, and the entries of the accumulated Q^(m) settle to the orthonormal eigenvectors (columns with values near ±.5 and ±.707).