Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm


References: Trefethen & Bau textbook

Eigenvalue problem: given a matrix A, find (a/several/all) eigenvalue(s) λ and corresponding eigenvector(s) v such that

    A v = λ v.

Iterative Methods: methods such as LU or QR factorization are direct: they perform a fixed number of operations and then finish with the answer. But eigenvalue calculations in general cannot be direct, because they are equivalent to finding roots of the characteristic polynomial: for degree 5 or greater, there does not exist a finite sequence of arithmetic operations that produces the roots in general. Eigenvalue methods must instead be iterative:
- construct a sequence of approximations;
- truncate that sequence after "convergence";
- typically we are concerned with a fast convergence rate (rather than an operation count).

Notation: for x ∈ R^n, we take ||x|| = sqrt(x^T x) to be the Euclidean length of x. In iterative methods, x_k usually means the vector x at the k-th iteration (rather than the k-th entry of x); some sources write x_k or x^(k).

Power Iteration: a simple method for calculating a single (largest-magnitude) eigenvalue of a square matrix A, and its associated eigenvector. For an arbitrary y ∈ R^n, set x_0 = y/||y|| as the initial vector, and then for k = 0, 1, 2, ...

    compute y_k = A x_k   and set   x_{k+1} = y_k / ||y_k||.

This is the Power Method (or Power Iteration), and it computes unit vectors in the directions of x_0, A x_0, A^2 x_0, A^3 x_0, ..., A^k x_0.

Suppose that A is diagonalizable, so that there is a basis of eigenvectors of A: {v_1, v_2, ..., v_n} with A v_i = λ_i v_i and ||v_i|| = 1, i = 1, 2, ..., n, and assume that |λ_1| > |λ_2| ≥ ··· ≥ |λ_n|. Then we can write

    x_0 = Σ_{i=1}^n α_i v_i

for some α_i ∈ R, i = 1, 2, ..., n, so

    A^k x_0 = A^k Σ_{i=1}^n α_i v_i = Σ_{i=1}^n α_i A^k v_i.

However, since A v_i = λ_i v_i implies A^2 v_i = A(A v_i) = λ_i A v_i = λ_i^2 v_i, inductively A^k v_i = λ_i^k v_i. So

    A^k x_0 = Σ_{i=1}^n α_i λ_i^k v_i = λ_1^k [ α_1 v_1 + Σ_{i=2}^n α_i (λ_i/λ_1)^k v_i ].

Since (λ_i/λ_1)^k → 0 as k → ∞, A^k x_0 tends to look like λ_1^k α_1 v_1 as k gets large. The result is that, by normalizing to a unit vector,

    A^k x_0 / ||A^k x_0|| → ±v_1   and   ||A^k x_0|| / ||A^{k-1} x_0|| → |λ_1^k α_1| / |λ_1^{k-1} α_1| = |λ_1|

as k → ∞, and the sign of λ_1 is identified by looking at, e.g., (A^k x_0)_1 / (A^{k-1} x_0)_1.

Essentially the same argument works when we normalize at each step: the Power Iteration may be seen to compute y_k = β_k A^k x_0 for some β_k > 0. Then, from the above,

    x_{k+1} = y_k / ||y_k|| = A^k x_0 / ||A^k x_0|| → ±v_1.

Similarly, y_{k-1} = β_{k-1} A^{k-1} x_0 for some β_{k-1} > 0. Thus

    x_k = A^{k-1} x_0 / ||A^{k-1} x_0||   and hence   y_k = A x_k = A^k x_0 / ||A^{k-1} x_0||.

Therefore, as above, ||y_k|| = ||A^k x_0|| / ||A^{k-1} x_0|| → |λ_1|, and the sign of λ_1 may be identified by looking at, e.g., (x_{k+1})_1 / (x_k)_1. Hence the largest eigenvalue (and its eigenvector) can be found.

Note: it is possible for a chosen vector x_0 that α_1 = 0, but rounding errors in the computation generally introduce a small component in v_1, so that in practice this is not a concern!

This simple method is the basis for more effective methods (cf. the Rayleigh quotient iteration and the Arnoldi algorithm as used in Matlab's sparse eigs command). For general dense matrices, we will look at a popular method known as the QR Algorithm. There are also divide-and-conquer algorithms (since the late 1990s).
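The normalized iteration above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the notes: the function name, tolerance, and iteration cap are arbitrary choices, and the sign of λ_1 is recovered from the first components as described above.

```python
import numpy as np

def power_iteration(A, y, tol=1e-12, maxit=1000):
    """Power iteration for the largest-magnitude eigenvalue of A.

    Follows the notes: x_{k+1} = A x_k / ||A x_k||, with ||y_k|| -> |lambda_1|
    and the sign of lambda_1 read off from (x_{k+1})_1 / (x_k)_1.
    """
    x = y / np.linalg.norm(y)              # x_0 = y / ||y||
    lam = 0.0
    for _ in range(maxit):
        yk = A @ x                         # y_k = A x_k
        xk1 = yk / np.linalg.norm(yk)      # x_{k+1} = y_k / ||y_k||
        lam_new = np.linalg.norm(yk)       # ||y_k|| -> |lambda_1|
        if x[0] != 0:                      # recover the sign of lambda_1
            lam_new = np.copysign(lam_new, xk1[0] / x[0])
        converged = abs(lam_new - lam) < tol
        lam, x = lam_new, xk1
        if converged:
            break
    return lam, x
```

The convergence rate is governed by |λ_2/λ_1|, so the loop may hit the iteration cap when the two largest eigenvalues are close in magnitude.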

QR Algorithm

We consider only the case where A is symmetric.

Recall: a matrix A is similar to B if there is a nonsingular matrix P for which A = P^{-1} B P. Similar matrices have the same eigenvalues, since if A = P^{-1} B P,

    det(A − λI) = det(P^{-1} (B − λI) P) = det(P^{-1}) det(P) det(B − λI),

so det(A − λI) = 0 if, and only if, det(B − λI) = 0.

The basic QR algorithm is:

    set A_1 = A
    for k = 1, 2, ...
        form the QR factorization A_k = Q_k R_k and set A_{k+1} = R_k Q_k

Proposition. The symmetric matrices A_1, A_2, ..., A_k, ... are all similar and thus have the same eigenvalues.

Proof. Since

    A_{k+1} = R_k Q_k = (Q_k^T Q_k) R_k Q_k = Q_k^T (Q_k R_k) Q_k = Q_k^T A_k Q_k = Q_k^{-1} A_k Q_k,

A_{k+1} is symmetric if A_k is, and is similar to A_k.

At least when A has distinct eigenvalues, this basic QR algorithm can be shown to work: A_k converges to a diagonal matrix as k → ∞, the diagonal entries of which are the eigenvalues. However, a really practical, fast algorithm is based on some refinements.

Reduction to tridiagonal form: the idea is to apply explicit similarity transformations Q A Q^{-1} = Q A Q^T, with Q orthogonal, so that Q A Q^T is tridiagonal. Note: direct reduction to triangular form would reveal the eigenvalues, but is not possible. If H(w) is a Householder reflector chosen to annihilate the first column of A below the diagonal,

    H(w) A = [ × × ··· ×
               0 × ··· ×
               ⋮ ⋮     ⋮
               0 × ··· × ],

then H(w) A H(w)^T is generally full, i.e., all zeros created by the pre-multiplication are destroyed by the post-multiplication. However, if

    A = [ γ  u^T
          u  C  ]   (as A = A^T)   and   w = [ 0
                                               ŵ ],

where ŵ is chosen so that H(ŵ) u = (α, 0, ..., 0)^T,

it follows that

    H(w) A = [ γ  u^T
               α
               0
               ⋮   H(ŵ) C
               0          ],

i.e., the u^T part of the first row of A is unchanged. However, then

    H(w) A H(w)^{-1} = H(w) A H(w)^T = H(w) A H(w) = [ γ  α  0 ··· 0
                                                       α
                                                       0
                                                       ⋮      B
                                                       0         ],

where B = H(ŵ) C H(ŵ)^T, since u^T H(ŵ)^T = (α, 0, ..., 0); note that H(w) A H(w)^T is symmetric as A is. Now we inductively apply this to the smaller matrix B, as described for the QR factorization but using post- as well as pre-multiplications. The result of n − 2 such Householder similarity transformations is the matrix

    H(w_{n−2}) ··· H(w_2) H(w) A H(w) H(w_2) ··· H(w_{n−2}),

which is tridiagonal.

The QR factorization of a tridiagonal matrix can now easily be achieved with n − 1 Givens rotations: if A is tridiagonal,

    J(n−1, n) ··· J(2,3) J(1,2) A = R,  upper triangular,

where Q^T = J(n−1, n) ··· J(2,3) J(1,2). Precisely, R has a diagonal and two superdiagonals, and zeros everywhere else (exercise: check!). In the QR algorithm, the next matrix in the sequence is RQ.

Lemma. In the QR algorithm applied to a symmetric tridiagonal matrix, the symmetry and tridiagonal form are preserved when Givens rotations are used.

Proof. We have already shown that if A_k = QR is symmetric, then so is A_{k+1} = RQ. If A_k = QR = J(1,2)^T J(2,3)^T ··· J(n−1, n)^T R is tridiagonal, then A_{k+1} = RQ = R J(1,2)^T J(2,3)^T ··· J(n−1, n)^T. Recall that post-multiplication of a matrix by J(i, i+1)^T

replaces columns i and i+1 by linear combinations of that pair of columns, while leaving columns j = 1, 2, ..., i−1, i+2, ..., n alone. Thus, since R is upper triangular, the only subdiagonal entry in R J(1,2)^T is in position (2,1). Similarly, the only subdiagonal entries in R J(1,2)^T J(2,3)^T = (R J(1,2)^T) J(2,3)^T are in positions (2,1) and (3,2). Inductively, the only subdiagonal entries in

    R J(1,2)^T J(2,3)^T ··· J(i−2, i−1)^T J(i−1, i)^T = (R J(1,2)^T J(2,3)^T ··· J(i−2, i−1)^T) J(i−1, i)^T

are in positions (j, j−1), j = 2, ..., i. So the lower triangular part of A_{k+1} only has nonzeros on its first subdiagonal. But then, since A_{k+1} is symmetric, it must be tridiagonal.

Using shifts: one further and final step in making an efficient algorithm is the use of shifts:

    for k = 1, 2, ...
        form the QR factorization of A_k − μ_k I = Q_k R_k and set A_{k+1} = R_k Q_k + μ_k I

For any chosen sequence of values μ_k ∈ R, the matrices {A_k}_{k=1}^∞ are symmetric and tridiagonal if A_1 has these properties, and similar to A_1. The simplest shift to use is μ_k = a_{n,n}, which leads rapidly in almost all cases to

    A_k = [ T_k  0
            0^T  λ ],

where T_k is (n−1) by (n−1) and tridiagonal, and λ is an eigenvalue of A_1. Inductively, once this form has been found, the QR algorithm with shift a_{n−1,n−1} can be concentrated on the (n−1) by (n−1) leading submatrix T_k alone. This process is called deflation.

The overall algorithm for calculating the eigenvalues of an n by n symmetric matrix A:

    reduce A to tridiagonal form by orthogonal (Householder) similarity transformations
    for m = n, n−1, ..., 2
        while |a_{m−1,m}| > tol
            [Q, R] = qr(A − a_{m,m} I)
            A = R*Q + a_{m,m} I
        end while
        record eigenvalue λ_m = a_{m,m}
        A ← leading (m−1) by (m−1) submatrix of A
    end for
    record eigenvalue λ_1 = a_{1,1}
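The overall algorithm can be sketched in NumPy as follows. This is a sketch under stated assumptions, not the notes' own code: np.linalg.qr (which is Householder-based) stands in for the Givens-rotation factorization described above (the Lemma guarantees the tridiagonal form is preserved either way), and the function names, tolerance, and iteration cap are illustrative.

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by the n-2 Householder
    similarity transformations H(w) A H(w) described above."""
    T = np.array(A, dtype=float)
    n = T.shape[0]
    for k in range(n - 2):
        u = T[k+1:, k].copy()                  # the vector u below the diagonal
        alpha = -np.copysign(np.linalg.norm(u), u[0])
        w = u.copy()
        w[0] -= alpha                          # w = u - alpha e_1 gives H(w) u = alpha e_1
        wn = np.linalg.norm(w)
        if wn > 0:
            w /= wn
            H = np.eye(n - k - 1) - 2.0 * np.outer(w, w)   # Householder reflector
            T[k+1:, k:] = H @ T[k+1:, k:]      # pre-multiplication
            T[:, k+1:] = T[:, k+1:] @ H        # post-multiplication (H = H^T)
    return T

def symmetric_qr_eigenvalues(A, tol=1e-12, maxit=10000):
    """Shifted QR algorithm with deflation, following the pseudocode above."""
    T = tridiagonalize(A)
    eigs = []
    m = T.shape[0]
    while m > 1:
        its = 0
        while abs(T[m-2, m-1]) > tol and its < maxit:
            mu = T[m-1, m-1]                   # shift a_{m,m}
            Q, R = np.linalg.qr(T - mu * np.eye(m))
            T = R @ Q + mu * np.eye(m)
            its += 1
        eigs.append(T[m-1, m-1])               # record lambda_m, then deflate
        T = T[:m-1, :m-1]
        m -= 1
    eigs.append(T[0, 0])                       # lambda_1 = a_{1,1}
    return sorted(eigs)
```

Note that the simple shift a_{m,m} can stall on specially structured matrices (a Wilkinson shift is the standard remedy), which is why the inner loop carries an iteration cap.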