Section 4.5 Eigenvalues of Symmetric Tridiagonal Matrices

Key Terms: symmetric matrix, tridiagonal matrix, orthogonal matrix, QR-factorization, rotation matrices (plane rotations), eigenvalues

We will now complete our study of the matrix eigenvalue problem for symmetric matrices by developing an algorithm that computes all of the eigenvalues of a symmetric tridiagonal matrix. The technique developed in this section is called the QR algorithm. Unlike the technique used for reduction to tridiagonal form, the QR algorithm is iterative in nature: the sequence of matrices it generates converges to a diagonal matrix, so the eigenvalues of the "final" matrix in the sequence are just the entries along its main diagonal. Since every step is a similarity transformation, these diagonal entries are also the eigenvalues of the original matrix. Householder matrices, however, will not be used here.

Overview of eigenvalue approximation using the QR-method: Let A be an n x n symmetric matrix which has been reduced to symmetric tridiagonal form, denoted A_H, using Householder matrices. This reduction preserves the eigenvalues and produces a matrix closer to diagonal form, so fewer arithmetic operations are required at later steps. Next we apply a piece of theory to A_H.

Theorem: Any m x n complex matrix A with m >= n can be factored into a product of an m x n matrix Q with orthonormal columns and an n x n upper triangular matrix R. (This is called a QR-factorization; we discuss it later.) In the case that A is real and square, the matrix Q is an orthogonal matrix.
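To make the theorem concrete, here is a minimal pure-Python sketch of a QR-factorization of a small real square matrix via classical Gram-Schmidt (the section's own codes are MATLAB; `qr_factor` and `matmul` are illustrative helper names, and production codes use Householder reflections or Givens rotations instead for numerical stability):

```python
# Illustrative QR-factorization of a small real square matrix by
# classical Gram-Schmidt; a sketch only, not a production routine.

def qr_factor(A):
    """Return (Q, R) with A = Q R, Q orthogonal, R upper triangular."""
    n = len(A)
    acols = [[A[i][j] for i in range(n)] for j in range(n)]  # columns of A
    qcols = []                                # orthonormal columns of Q
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = acols[j][:]
        for k in range(j):
            # project the j-th column onto earlier orthonormal directions
            R[k][j] = sum(qcols[k][i] * acols[j][i] for i in range(n))
            v = [v[i] - R[k][j] * qcols[k][i] for i in range(n)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        qcols.append([x / R[j][j] for x in v])
    Q = [[qcols[j][i] for j in range(n)] for i in range(n)]  # row-major Q
    return Q, R

def matmul(A, B):
    """Product of two square matrices stored as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

One can check on a small symmetric tridiagonal matrix that Q*R reproduces A, that Q has orthonormal columns, and that R is upper triangular.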

We use the QR-factorization to generate a sequence of similarity transformations whose limit is a diagonal matrix. The eigenvalues of A and A_H are preserved and appear on the diagonal of that diagonal matrix. The sequence is constructed as follows. Let A_0 = A_H and find its QR-factorization A_0 = Q_0 R_0. (Matrix Q_0 is orthogonal, so its inverse is Q_0^T.) Define A_1 = R_0 Q_0 = Q_0^T A_0 Q_0, since R_0 = Q_0^T A_0. Find the QR-factorization of A_1, A_1 = Q_1 R_1, and define A_2 = R_1 Q_1 = Q_1^T A_1 Q_1, since R_1 = Q_1^T A_1. Thus A_2 = Q_1^T Q_0^T A_0 Q_0 Q_1. Apply the same steps to A_2 and repeat the process. We call this iterative process the BASIC QR-method. It generates a sequence A_0, A_1, A_2, ..., A_k, ... converging to D, a diagonal matrix, and the diagonal entries of A_k approximate the eigenvalues of matrix A. The process preserves the tridiagonal form, and the entries of the first subdiagonal and first superdiagonal decrease in magnitude as k gets large. We stop the iteration when these off-diagonal entries are sufficiently small; at that point the sequence has effectively converged to a diagonal matrix.
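One step of the basic QR-method above can be sketched in a few lines. The following is a pure-Python illustration (not the basicqr.m code; `basic_qr_step` is a hypothetical name) that computes the factorization A_k = Q_k R_k with Givens plane rotations, which suffices to zero the subdiagonal when A_k is tridiagonal or upper Hessenberg:

```python
import math

def basic_qr_step(A):
    """One basic QR step: factor A = Q R with Givens rotations, then
    return R Q (= Q^T A Q, a similarity transform of A).
    Assumes A is tridiagonal or upper Hessenberg."""
    n = len(A)
    R = [row[:] for row in A]
    rots = []
    for i in range(n - 1):                    # zero the subdiagonal of R
        a, b = R[i][i], R[i + 1][i]
        r = math.hypot(a, b)
        if r == 0.0:
            rots.append((1.0, 0.0))           # nothing to zero; identity
            continue
        c, s = a / r, b / r
        rots.append((c, s))
        for j in range(n):                    # apply the rotation on the left
            u, w = R[i][j], R[i + 1][j]
            R[i][j], R[i + 1][j] = c * u + s * w, -s * u + c * w
    for i, (c, s) in enumerate(rots):         # multiply R by Q on the right
        for k in range(n):
            u, w = R[k][i], R[k][i + 1]
            R[k][i], R[k][i + 1] = c * u + s * w, -s * u + c * w
    return R
```

Repeatedly applying this step to, say, the 3 x 3 tridiagonal matrix [[4,1,0],[1,3,1],[0,1,2]] drives the off-diagonal entries toward zero and the diagonal toward the eigenvalues 3 + sqrt(3), 3, 3 - sqrt(3), appearing in decreasing order of magnitude.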

Example: Apply the basicqr.m iteration to the symmetric tridiagonal matrix A. Observe that the off-diagonal elements converge toward zero, while the diagonal elements converge toward the eigenvalues of A, which, to three decimal places, are 5.951, 3.084, and -1.035. Furthermore, the eigenvalues appear along the diagonal of A_k in decreasing order of magnitude. This example illustrates the general behavior of the QR algorithm: the off-diagonal entries converge toward zero, and the rate of convergence of an off-diagonal entry is O(|λ_j / λ_{j-1}|), where j is its row number and j-1 its column number. (We give no proof of this rate.)
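The claimed linear rate |λ_j / λ_{j-1}| is easy to observe numerically on a 2 x 2 symmetric matrix, where a single plane rotation gives the whole QR step (a pure-Python sketch; `qr_step_2x2` is an illustrative name):

```python
import math

def qr_step_2x2(A):
    """One unshifted QR step on a symmetric 2 x 2 matrix.
    A single plane rotation gives the factorization A = Q R;
    the step returns R Q = Q^T A Q."""
    a, b = A[0][0], A[1][0]
    r = math.hypot(a, b)
    c, s = a / r, b / r                       # rotation zeroing the (2,1) entry
    R = [[c * a + s * b, c * A[0][1] + s * A[1][1]],
         [0.0, -s * A[0][1] + c * A[1][1]]]
    # R Q: apply the transposed rotation on the right
    return [[c * R[0][0] + s * R[0][1], -s * R[0][0] + c * R[0][1]],
            [c * R[1][0] + s * R[1][1], -s * R[1][0] + c * R[1][1]]]
```

For A = [[3, 1], [1, 1]], whose eigenvalues are 2 ± sqrt(2), the ratio of successive off-diagonal entries settles near λ_2/λ_1 = (2 - sqrt(2))/(2 + sqrt(2)) ≈ 0.172, matching the predicted linear rate.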

In the example we used the m-file basicqr.m:

>> help basicqr
 basicqr  Apply the basic QR method to matrix A iter times.
          Use in the form ==> basicqr(A,iter) <==
          There are choices for various display options.

MATLAB has an m-file more robust than basicqr.m, namely qr.m; its various options appear in the help file. A simple MATLAB code using qr.m that behaves like the basic QR-method is given below.

for k = 1:10, [Q,R] = qr(A); A = R*Q, pause, end

Change the number of iterations as needed. We will not prove the QR theorem; a discussion is given in the text. The orthogonal matrices constructed in the QR-algorithm are often called rotation matrices.

Notes: The eig command in MATLAB uses a more robust implementation than the basic QR-(eigen)method discussed here, and it is quite dependable. If a matrix has eigenvalues very close in magnitude, the convergence of the eigenmethod based on QR-factorizations can be quite slow. If a matrix is defective or nearly defective, the convergence of the QR-(eigen)method can also be slow, and the basic QR-(eigen)method can fail. When computing QR-factorizations involving symmetric matrices, it is recommended that the matrix first be transformed to symmetric tridiagonal form. QR-factorizations are often implemented using another technique called (Givens) plane rotations. If A is not symmetric, how can you approximate its eigenvalues?

Eigenvalues of non-symmetric matrices. In contrast with the symmetric problem, the eigenvectors of a non-symmetric matrix do not form an orthogonal set. Moreover, the eigenvectors may not even form a linearly independent system: although this need not happen, an eigenvalue of multiplicity k can correspond to an eigenspace of dimension less than k. The second difference from the symmetric problem is that a non-symmetric matrix cannot easily be reduced to a tridiagonal matrix or another equally compact form. Because of the asymmetry, zeroing all the elements below the first subdiagonal (using an orthogonal transformation) does not zero the elements in the upper triangle, so we obtain only a matrix in upper Hessenberg form. This slows the algorithm down, because the entire upper triangle of the matrix must be updated after each iteration of the QR algorithm. Finally, the third difference is that the eigenvalues of a non-symmetric matrix can be complex (as can their corresponding eigenvectors).
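The reduction to upper Hessenberg form described above can be sketched with Householder similarity transformations. This is a pure-Python illustration (`to_hessenberg` is a hypothetical name; library routines such as LAPACK's DGEHRD do the same job with blocking and careful scaling):

```python
import math

def to_hessenberg(A):
    """Reduce a real square matrix to upper Hessenberg form by
    Householder similarity transformations (eigenvalues preserved)."""
    n = len(A)
    H = [row[:] for row in A]
    for k in range(n - 2):
        # Householder vector that zeroes H[k+2:n][k]
        x = [H[i][k] for i in range(k + 1, n)]
        alpha = -math.copysign(math.hypot(*x), x[0] if x[0] != 0 else 1.0)
        v = x[:]
        v[0] -= alpha
        vnorm = math.hypot(*v)
        if vnorm == 0.0:
            continue                          # column already in shape
        v = [t / vnorm for t in v]
        # H := P H, with P = I - 2 v v^T acting on rows k+1 .. n-1
        for j in range(n):
            s = 2 * sum(v[i] * H[k + 1 + i][j] for i in range(len(v)))
            for i in range(len(v)):
                H[k + 1 + i][j] -= s * v[i]
        # H := H P, acting on columns k+1 .. n-1 (completes the similarity)
        for i in range(n):
            s = 2 * sum(v[j] * H[i][k + 1 + j] for j in range(len(v)))
            for j in range(len(v)):
                H[i][k + 1 + j] -= s * v[j]
    return H
```

Since each step is a similarity transformation, quantities such as the trace and the eigenvalues of the original matrix are preserved, while everything below the first subdiagonal is annihilated.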

Algorithm description. Solving the non-symmetric eigenvalue problem proceeds in several steps. In the first step, the matrix is reduced to upper Hessenberg form by an orthogonal transformation. In the second step, which takes the most time, the matrix is reduced to upper Schur form by an orthogonal transformation. If only the eigenvalues are needed, this step is the last, because the eigenvalues are located in the diagonal blocks of the quasi-triangular matrix of the canonical Schur form. If the eigenvectors are needed as well, a backward substitution with the Schur vectors and the quasi-triangular matrix must be performed (in effect, solving a system of linear equations; the backward substitution itself takes little time, but the need to save all the transformations makes the algorithm about twice as slow). The Schur decomposition, which dominates the running time, is computed by the QR algorithm with multiple shifts. The algorithm is taken from the LAPACK library and is a block-matrix analogue of the ordinary QR-algorithm with double shift. Like all other block-matrix algorithms, it requires tuning to achieve optimal performance. (Thankfully all of this is incorporated in MATLAB's eig command.) More details:
http://people.math.gatech.edu/~klounici6/2605/lectures%20notes%20carlen/chap4.pdf
https://www.math.washington.edu/~morrow/498_13/eigenvalues.pdf
http://web.stanford.edu/class/cme335/lecture4sup.pdf
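The idea behind the shifted QR step, the building block of the multi-shift algorithm just mentioned, can be sketched for tridiagonal or upper Hessenberg input. This is a single-shift, pure-Python illustration with a Rayleigh shift (`shifted_qr_step` is a hypothetical name; LAPACK's DHSEQR uses blocked multi-shifts):

```python
import math

def shifted_qr_step(A):
    """One QR step with a Rayleigh shift on a symmetric tridiagonal
    (or upper Hessenberg) matrix: factor A - shift*I = Q R with Givens
    rotations, then return R Q + shift*I, a similarity transform of A."""
    n = len(A)
    shift = A[n - 1][n - 1]                   # cheap eigenvalue estimate
    R = [[A[i][j] - (shift if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    rots = []
    for i in range(n - 1):                    # zero the subdiagonal
        a, b = R[i][i], R[i + 1][i]
        r = math.hypot(a, b)
        if r == 0.0:
            rots.append((1.0, 0.0))
            continue
        c, s = a / r, b / r
        rots.append((c, s))
        for j in range(n):
            u, w = R[i][j], R[i + 1][j]
            R[i][j], R[i + 1][j] = c * u + s * w, -s * u + c * w
    for i, (c, s) in enumerate(rots):         # form R Q
        for k in range(n):
            u, w = R[k][i], R[k][i + 1]
            R[k][i], R[k][i + 1] = c * u + s * w, -s * u + c * w
    for i in range(n):                        # add the shift back
        R[i][i] += shift
    return R
```

Each step is a similarity transform, so the eigenvalues are unchanged; the shift typically makes the last subdiagonal entry collapse toward zero within a handful of iterations, after which the last row and column can be deflated and the process continued on the smaller leading block.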