
Eigensystems

We will discuss the matrix diagonalization algorithms in Numerical Recipes in the context of the eigenvalue problem in quantum mechanics,

    A |n\rangle = \lambda_n |n\rangle,    (1)

where A is a real, symmetric Hamiltonian operator and |n⟩ is the n-th eigenvector with eigenvalue λ_n. In an N-dimensional vector space, Eq. (1) becomes

    \sum_{j=1}^{N} A_{ij} x_j^{(n)} = \lambda_n x_i^{(n)},    (2)

where A is an N×N matrix and x_i^{(n)} is the i-th element of the n-th eigenvector x^{(n)} ∈ R^N.

ORTHONORMAL BASIS

(Orthogonality) The basis set {|n⟩ | n = 1, ..., N} can be made orthonormal, i.e.,

    \langle m | n \rangle = \sum_{i=1}^{N} x_i^{(m)} x_i^{(n)} = \delta_{mn},    (3)

or, by defining the transformation matrix U as

    U_{in} = x_i^{(n)}    (4)

(i.e., the n-th column of U is the n-th eigenvector), U is orthogonal,

    U^T U = I,    (5)

where I is the identity matrix.

Proof of Eq. (3): First note that all eigenvalues λ_n are real. (By multiplying Eq. (1) by ⟨n| from the left, ⟨n|A|n⟩ = λ_n ⟨n|n⟩. For a Hermitian matrix, and of course for a real, symmetric matrix, ⟨n|A|n⟩ is real, and ⟨n|n⟩ is also real since its complex conjugate is itself.) Next, by multiplying Eq. (1) by ⟨m| from the left, we obtain

    \langle m | A | n \rangle = \lambda_n \langle m | n \rangle.    (6)

Similarly,

    \langle n | A | m \rangle = \lambda_m \langle n | m \rangle.    (7)

By taking the complex conjugate of Eq. (7) and noting the reality of the eigenvalues,

    \langle m | A | n \rangle = \lambda_m \langle m | n \rangle.    (8)

Subtracting Eq. (8) from Eq. (6),

    0 = (\lambda_n - \lambda_m) \langle m | n \rangle.    (9)

If λ_n ≠ λ_m, Eq. (9) requires that ⟨m|n⟩ = 0. On the other hand, if λ_n = λ_m, we can still make the two eigenvectors orthogonal without modifying the eigenvalue, for example by the Gram-Schmidt orthogonalization procedure: the substitution

    |n\rangle \to |n\rangle - |m\rangle \langle m | n \rangle    (10)

makes ⟨m|n⟩ → ⟨m|n⟩ − ⟨m|m⟩⟨m|n⟩ = ⟨m|n⟩ − ⟨m|n⟩ = 0, and is followed by the normalization |n⟩ → |n⟩ / ⟨n|n⟩^{1/2}. //
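The Gram-Schmidt update, Eq. (10), translates directly into code. Below is a minimal C sketch, not taken from Numerical Recipes; the dimension N and the row-per-vector storage are assumptions of this illustration.

#include <math.h>

#define N 3                       /* vector-space dimension (illustrative) */

/* Orthonormalize nvec vectors stored as rows of v, using Eq. (10):
   |n> -> |n> - |m><m|n> for all m < n, then |n> -> |n>/<n|n>^(1/2). */
void gram_schmidt(double v[][N], int nvec)
{
    for (int n = 0; n < nvec; n++) {
        for (int m = 0; m < n; m++) {
            double proj = 0.0;                        /* <m|n> */
            for (int i = 0; i < N; i++) proj += v[m][i] * v[n][i];
            for (int i = 0; i < N; i++) v[n][i] -= proj * v[m][i];
        }
        double norm = 0.0;                            /* <n|n> */
        for (int i = 0; i < N; i++) norm += v[n][i] * v[n][i];
        norm = sqrt(norm);
        for (int i = 0; i < N; i++) v[n][i] /= norm;
    }
}

Because the vectors are processed in order, each projection is subtracted against vectors that are already orthonormal, so no ⟨m|m⟩ denominator is needed.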

(Completeness) The orthonormal basis set {|n⟩} is also complete, i.e., in the N-dimensional vector space,

    \sum_{n=1}^{N} |n\rangle \langle n| = 1    (11)

is the identity operator. Equivalently,

    \sum_{n=1}^{N} x_i^{(n)} x_j^{(n)} = \delta_{ij},    (12)

or

    U U^T = I.    (13)

Equation (11) states that any vector |ψ⟩ in the N-dimensional vector space is a linear combination of the basis vectors, since there are only N linearly independent vectors in this vector space:

    |\psi\rangle = \sum_{n=1}^{N} |n\rangle \langle n | \psi \rangle.    (14)

The orthogonality and completeness together state that

    U^T U = U U^T = I,    (15)

or

    U^{-1} = U^T.    (16)

ORTHOGONAL TRANSFORMATION

Now, we use the orthogonal matrix U to restate the matrix eigenvalue problem. To do so, multiply Eq. (2) by x_i^{(m)} and sum the resulting equation over i:

    \sum_{i=1}^{N} \sum_{j=1}^{N} x_i^{(m)} A_{ij} x_j^{(n)} = \lambda_n \sum_{i=1}^{N} x_i^{(m)} x_i^{(n)} = \lambda_n \delta_{mn},    (17)

where we have used the orthonormality, Eq. (3). Using U, Eq. (17) can be rewritten as

    U^T A U = \Lambda,    (18)

where

    \Lambda_{mn} = \lambda_m \delta_{mn}    (19)

is a diagonal matrix. Thus the matrix eigenvalue problem amounts to finding an orthogonal matrix U, or the associated orthogonal transformation, Eq. (18), that eliminates all the off-diagonal matrix elements.

GRAND STRATEGY

The grand strategy of matrix diagonalization is to nudge the matrix A toward diagonal form by a sequence of orthogonal transformations,

    A \to P_1^T A P_1 \to P_2^T P_1^T A P_1 P_2 \to \cdots,    (20)

so that its off-diagonal elements gradually disappear. At the end, the orthogonal matrix is

    U = P_1 P_2 \cdots.    (21)

ORTHOGONAL TRANSFORMATION ~ ROTATION: JACOBI TRANSFORMATION

As an illustration, let us consider a two-state system, for which the most general Hamiltonian matrix is

    H = \begin{pmatrix} \varepsilon_1 & \delta \\ \delta & \varepsilon_2 \end{pmatrix}.    (22)

(We define the indices such that ε_1 < ε_2, i.e., the first state is the lower-energy state.) We express the first eigenvector of this Hamiltonian as

    u = \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix} = \cos\theta \, |1\rangle + \sin\theta \, |2\rangle,    (23)

which is most general. (Because of the normalization condition, any vector in the 2-dimensional vector space can be specified by one parameter.) Once we specify the first eigenvector, the second is readily determined from the orthonormality as

    v = \begin{pmatrix} -\sin\theta \\ \cos\theta \end{pmatrix};    (24)

see the figure below. The rotation angle θ specifies the deviation of the first eigenvector u from |1⟩.

[Figure: the orthonormal pair u, v obtained by rotating the basis vectors |1⟩, |2⟩ by the angle θ.]

The orthogonal matrix is then

    U = [\, u \ v \,] = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.    (25)

To find the specific rotation angle θ, let us return to the original eigenvalue problem,

    \begin{pmatrix} \lambda - \varepsilon_1 & -\delta \\ -\delta & \lambda - \varepsilon_2 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = 0.    (26)

The eigenvalues are obtained by solving the secular equation,

    \det(\lambda I - H) = \begin{vmatrix} \lambda - \varepsilon_1 & -\delta \\ -\delta & \lambda - \varepsilon_2 \end{vmatrix} = (\lambda - \varepsilon_1)(\lambda - \varepsilon_2) - \delta^2 = 0,    (27)

and its two solutions are

    \lambda_\pm = \frac{\varepsilon_1 + \varepsilon_2 \pm \sqrt{(\varepsilon_1 - \varepsilon_2)^2 + 4\delta^2}}{2}.    (28)

Now let us examine the lower eigenenergy λ_-. By substituting the eigenvalue λ_- and the corresponding eigenvector, Eq. (23), into Eq. (26), we obtain

    \theta = \tan^{-1} \frac{\varepsilon_2 - \varepsilon_1 - \sqrt{(\varepsilon_1 - \varepsilon_2)^2 + 4\delta^2}}{2\delta}.    (29)

For example, if the off-diagonal element δ is small, we can expand Eq. (29) in its power series, the first term of which is (we have assumed ε_1 < ε_2)

    \theta = \frac{\delta}{\varepsilon_1 - \varepsilon_2}.    (30)

Jacobi Transformation

In the Jacobi transformation, each orthogonal transformation P_i in Eq. (20) is a two-dimensional rotation applied to a pair of rows, i and j, and to the pair of columns with the same indices. One such rotation eliminates the pair (i, j) and (j, i) of off-diagonal elements. A sequence of two-dimensional rotations will eventually eliminate all the off-diagonal elements. (In fact, later rotations may partially restore off-diagonal elements eliminated earlier. Nevertheless, this procedure converges, and the square sum of all the off-diagonal elements becomes smaller as we continue.)
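As a concrete check, here is a self-contained C sketch of the two-state diagonalization (the values of ε_1, ε_2, and δ are arbitrary illustrative choices): it computes θ from Eq. (29), builds U of Eq. (25), and prints U^T H U, which should be diagonal with λ_- and λ_+ of Eq. (28) on the diagonal.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double eps1 = 1.0, eps2 = 3.0, delta = 0.5;   /* illustrative values */
    double H[2][2] = {{eps1, delta}, {delta, eps2}};

    /* rotation angle, Eq. (29) */
    double disc  = sqrt((eps1 - eps2)*(eps1 - eps2) + 4.0*delta*delta);
    double theta = atan((eps2 - eps1 - disc) / (2.0*delta));
    double U[2][2] = {{cos(theta), -sin(theta)}, {sin(theta), cos(theta)}};

    /* L = U^T H U should be diag(lambda_-, lambda_+) */
    double L[2][2] = {{0.0}};
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
                for (int l = 0; l < 2; l++)
                    L[i][j] += U[k][i] * H[k][l] * U[l][j];

    printf("U^T H U  = [%9.6f %9.6f; %9.6f %9.6f]\n",
           L[0][0], L[0][1], L[1][0], L[1][1]);
    printf("lambda-+ = %9.6f %9.6f\n",
           (eps1 + eps2 - disc)/2.0, (eps1 + eps2 + disc)/2.0);
    return 0;
}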

HOUSEHOLDER TRANSFORMATIONS FOR TRIDIAGONALIZATION

Instead of eliminating one pair of off-diagonal elements at a time as in the Jacobi transformation, a Householder transformation eliminates an entire row, except for its first two elements, at a time. In Chapter 11 of Numerical Recipes, Householder transformations are used to reduce a real, symmetric matrix to tridiagonal form, in which only the diagonal (A_ii), upper-subdiagonal (A_{i,i+1}), and lower-subdiagonal (A_{i+1,i}) elements may be nonzero. The function tred2() achieves this. The resulting tridiagonal matrix is then diagonalized (i.e., both subdiagonals are eliminated) using another set of orthogonal transformations in the function tqli().

The magical orthogonal matrix P is constructed from a vector in the N-dimensional vector space. First, let us prove a useful lemma.

(Lemma) Let v (∈ R^N) and

    P = I - \frac{2 v v^T}{v^T v};    (31)

then P is symmetric and orthogonal, i.e.,

    P^T P = P P = I.    (32)

(Proof) First,

    P_{ij} = \delta_{ij} - \frac{2 v_i v_j}{\sum_{k=1}^{N} v_k^2}    (33)

is symmetric with respect to the exchange of the indices i and j. Next,

    P^T P = \left( I - \frac{2 v v^T}{v^T v} \right) \left( I - \frac{2 v v^T}{v^T v} \right)
          = I - \frac{4 v v^T}{v^T v} + \frac{4 v (v^T v) v^T}{(v^T v)^2}
          = I - \frac{4 v v^T}{v^T v} + \frac{4 v v^T}{v^T v} = I. //

Now, given an arbitrary vector x in the N-dimensional vector space, we can devise an orthogonal matrix that eliminates all the elements of x but the first one when multiplied to x.

(Theorem) For x (∈ R^N), let

    v = x \pm |x| \, e_1,    (34)

where

    e_1 = (1, 0, \ldots, 0)^T    (35)

and the vector 2-norm is defined as

    |x| = \sqrt{x^T x} = \left( \sum_{i=1}^{N} x_i^2 \right)^{1/2}.    (36)

Then

    P x = \left( I - \frac{2 v v^T}{v^T v} \right) x = \mp |x| \, e_1,    (37)

i.e., the Householder matrix P, when multiplied, eliminates all the elements of x but the first one.

(Proof) Note that

    v^T v = (x \pm |x| e_1)^T (x \pm |x| e_1) = |x|^2 \pm 2 |x| x_1 + |x|^2 = 2 |x| (|x| \pm x_1),

and similarly v^T x = |x|^2 \pm |x| x_1 = |x| (|x| \pm x_1). Then

    P x = x - \frac{2 v (v^T x)}{v^T v} = x - \frac{2 |x| (|x| \pm x_1)}{2 |x| (|x| \pm x_1)} \, v = x - v = \mp |x| \, e_1. //
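The theorem is easy to verify numerically. In the following C sketch (the vector x is an arbitrary example), the sign in Eq. (34) is chosen as sign(x_1), which avoids cancellation in forming v, so the result is P x = -sign(x_1) |x| e_1.

#include <stdio.h>
#include <math.h>

#define N 4

int main(void)
{
    double x[N] = {3.0, 1.0, -2.0, 4.0};    /* arbitrary example vector */
    double v[N], vv = 0.0, vx = 0.0, norm = 0.0;

    for (int i = 0; i < N; i++) norm += x[i]*x[i];
    norm = sqrt(norm);                       /* |x|, Eq. (36) */

    for (int i = 0; i < N; i++) v[i] = x[i];
    v[0] += (x[0] >= 0.0 ? norm : -norm);    /* v = x + sign(x1)|x| e1, Eq. (34) */

    for (int i = 0; i < N; i++) { vv += v[i]*v[i]; vx += v[i]*x[i]; }

    /* P x = x - 2 v (v^T x) / (v^T v); every component but the first vanishes,
       and the first equals -sign(x1)|x| */
    for (int i = 0; i < N; i++)
        printf("%9.6f\n", x[i] - 2.0*v[i]*vx/vv);
    return 0;
}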

The Householder matrix can be used for tridiagonalization as follows. Let us decompose a real, symmetric matrix A as

    A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & & & \\ \vdots & & & \\ a_{N1} & & & \end{pmatrix} = \begin{pmatrix} a_{11} & A_1^T \\ A_1 & A' \end{pmatrix},    (38)

where A_1 = (a_{21}, \ldots, a_{N1})^T, A_1^T, and A' are (N-1)×1, 1×(N-1), and (N-1)×(N-1) matrices, respectively. Now let

    v \ (\in \mathbb{R}^{N-1}) = A_1 + \mathrm{sign}(a_{21}) \, |A_1| \, e_1.    (39)

(The sign has been chosen to minimize the cancellation error.) Then

    P' A_1 = \left( I_{N-1} - \frac{2 v v^T}{v^T v} \right) A_1 = -\mathrm{sign}(a_{21}) \, |A_1| \, e_1 \equiv k \, e_1.    (40)

Now, embedding P' in

    P_1 = \begin{pmatrix} 1 & 0^T \\ 0 & P' \end{pmatrix},

we obtain

    P_1 A P_1 = \begin{pmatrix} a_{11} & (P' A_1)^T \\ P' A_1 & P' A' P' \end{pmatrix} = \begin{pmatrix} a_{11} & k \, e_1^T \\ k \, e_1 & P' A' P' \end{pmatrix},    (41)

i.e., all the elements in the first row and first column but a_{11}, a_{12}, and a_{21} have been eliminated by this transformation. Next, a similar Householder transformation is applied to the first column and first row of the (N-1)×(N-1) submatrix P' A' P', which eliminates all the elements in the second row and second column of the original matrix but a_{22}, a_{23}, and a_{32}, and so on (see the figure below).

[Figure: elimination pattern after successive Householder transformations; white cells represent eliminated matrix elements.]
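The complete reduction can be sketched in a few lines of C. The version below forms each P explicitly and multiplies full matrices, which is wasteful but mirrors Eq. (41) literally; tred2() in Numerical Recipes achieves the same reduction far more efficiently, without ever forming P. The 4×4 symmetric matrix is an arbitrary example.

#include <stdio.h>
#include <math.h>

#define N 4

void mat_mul(double C[N][N], double A[N][N], double B[N][N])
{
    double T[N][N];                     /* temporary, so C may alias A or B */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            T[i][j] = 0.0;
            for (int k = 0; k < N; k++) T[i][j] += A[i][k]*B[k][j];
        }
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) C[i][j] = T[i][j];
}

int main(void)
{
    double A[N][N] = {{4,1,2,2},{1,3,0,1},{2,0,2,1},{2,1,1,5}};

    for (int k = 0; k < N-2; k++) {     /* N-2 Householder similarity steps */
        double v[N] = {0.0}, norm = 0.0, vv = 0.0;
        for (int i = k+1; i < N; i++) norm += A[i][k]*A[i][k];
        norm = sqrt(norm);              /* |A_1| of the current submatrix */
        for (int i = k+1; i < N; i++) v[i] = A[i][k];
        v[k+1] += (A[k+1][k] >= 0.0 ? norm : -norm);  /* Eq. (39) */
        for (int i = k+1; i < N; i++) vv += v[i]*v[i];
        if (vv == 0.0) continue;        /* column already eliminated */

        /* P = I - 2 v v^T / v^T v; identity in the first k+1 rows/columns */
        double P[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                P[i][j] = (i == j) - 2.0*v[i]*v[j]/vv;

        mat_mul(A, P, A);               /* A <- P A   */
        mat_mul(A, A, P);               /* A <- A P, Eq. (41) */
    }
    for (int i = 0; i < N; i++, printf("\n"))
        for (int j = 0; j < N; j++) printf("%9.6f ", A[i][j]);
    return 0;
}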

After (N-2) such transformations, all the elements except the diagonal and the upper and lower subdiagonal elements have been eliminated.

DIAGONALIZATION OF A TRIDIAGONAL MATRIX: QR DECOMPOSITION

QR Decomposition

The diagonalization of the tridiagonal matrix obtained above can use the QR decomposition (or the similar QL decomposition). That is, any square matrix A can be decomposed into

    A = Q R,    (42)

where Q is an orthogonal matrix and R is an upper-triangular matrix, i.e., R_{ij} = 0 for i > j. For example, this can be achieved by using Householder transformations as follows. First, we decompose the matrix A into its first column A_1 and the rest \bar{A}:

    A = \begin{pmatrix} a_{11} & & \\ \vdots & \bar{A} & \\ a_{N1} & & \end{pmatrix} = [\, A_1 \ \bar{A} \,].    (43)

Let

    v \ (\in \mathbb{R}^{N}) = A_1 + \mathrm{sign}(a_{11}) \, |A_1| \, e_1;    (44)

then

    P A_1 = \left( I - \frac{2 v v^T}{v^T v} \right) A_1 = -\mathrm{sign}(a_{11}) \, |A_1| \, e_1 \equiv k \, e_1,    (45)

and thus

    P A = [\, P A_1 \ P \bar{A} \,] = [\, k \, e_1 \ P \bar{A} \,],    (46)

i.e., all the elements in the first column but the first one have been eliminated. Next, we can apply a similar elimination to the A(2:N, 2:N) submatrix to eliminate all the lower-triangular elements in the second column, and so on (see the figure below).

[Figure: column-by-column elimination of subdiagonal elements by successive Householder transformations.]

After (N-1) transformations, the resulting matrix is upper-triangular, i.e.,

    P_{N-1} \cdots P_2 P_1 A = R,    (47)

or

    A = P_1^{-1} P_2^{-1} \cdots P_{N-1}^{-1} R \equiv Q R.    (48)

(Since each P_i is symmetric and orthogonal, P_i^{-1} = P_i, and Q = P_1 P_2 \cdots P_{N-1} is itself orthogonal.)
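A C sketch of Eqs. (43)-(48) follows (the 3×3 matrix is an arbitrary example): the reflections overwrite A with R, while Q accumulates the product P_1 P_2 ⋯ P_{N-1}, so that Q R reproduces the original matrix.

#include <stdio.h>
#include <math.h>

#define N 3

int main(void)
{
    double A[N][N] = {{2,1,1},{1,3,2},{1,0,0}};   /* arbitrary example */
    double Q[N][N] = {{1,0,0},{0,1,0},{0,0,1}};   /* accumulates P1 P2 ... */

    for (int k = 0; k < N-1; k++) {
        double v[N] = {0.0}, norm = 0.0, vv = 0.0;
        for (int i = k; i < N; i++) norm += A[i][k]*A[i][k];
        norm = sqrt(norm);                         /* |A_1| of the submatrix */
        for (int i = k; i < N; i++) v[i] = A[i][k];
        v[k] += (A[k][k] >= 0.0 ? norm : -norm);   /* Eq. (44); sign avoids cancellation */
        for (int i = k; i < N; i++) vv += v[i]*v[i];
        if (vv == 0.0) continue;                   /* column already eliminated */

        for (int j = 0; j < N; j++) {              /* A <- P_k A, Eq. (46) */
            double vx = 0.0;
            for (int i = k; i < N; i++) vx += v[i]*A[i][j];
            for (int i = k; i < N; i++) A[i][j] -= 2.0*v[i]*vx/vv;
        }
        for (int i = 0; i < N; i++) {              /* Q <- Q P_k (P_k symmetric) */
            double vx = 0.0;
            for (int j = k; j < N; j++) vx += Q[i][j]*v[j];
            for (int j = k; j < N; j++) Q[i][j] -= 2.0*vx*v[j]/vv;
        }
    }
    /* A now holds R (upper-triangular); Q is orthogonal and Q R = original A */
    for (int i = 0; i < N; i++, printf("\n"))
        for (int j = 0; j < N; j++) printf("%9.6f ", A[i][j]);
    return 0;
}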

Orthogonal Transformation

Let Eq. (42) be the QR decomposition of matrix A. Then, define another matrix by

    A' = R Q.    (49)

Since R = Q^{-1} A = Q^T A from Eq. (42), Eq. (49) defines an orthogonal transformation,

    A \to A' = Q^T A Q.    (50)

It can be proven that, if A is tridiagonal, then A' is also tridiagonal, i.e., the orthogonal transformation preserves the tridiagonality. The QR algorithm consists of successive applications of this orthogonal transformation.

(QR algorithm) For s = 1, 2, ...:

    1. Q_s R_s = A_s \quad (\text{QR-decompose } A_s);
    2. A_{s+1} = R_s Q_s.    (51)

The following theorem then guarantees that the eigenvalues and eigenvectors can be obtained by the QR algorithm.

(Theorem) 1. \lim_{s \to \infty} A_s is upper-triangular, and 2. the eigenvalues appear on its diagonal.

In Chapter 11 of Numerical Recipes, the function tqli() uses the QL algorithm instead of the above QR algorithm (i.e., it achieves lower-triangularity) in order to minimize the cancellation error.
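Finally, the QR algorithm of Eq. (51) can be sketched by combining the pieces above. This is the plain, unshifted iteration applied to an arbitrary 3×3 tridiagonal example with a fixed iteration count; tqli() instead uses the QL variant with implicit shifts and a convergence test, which is far more efficient.

#include <stdio.h>
#include <math.h>

#define N 3

/* one QR step: factor A = Q R by Householder reflections, then A <- R Q */
void qr_step(double A[N][N])
{
    double Q[N][N] = {{0.0}};
    for (int i = 0; i < N; i++) Q[i][i] = 1.0;

    for (int k = 0; k < N-1; k++) {               /* A <- P(N-1)...P1 A = R */
        double v[N] = {0.0}, norm = 0.0, vv = 0.0;
        for (int i = k; i < N; i++) norm += A[i][k]*A[i][k];
        norm = sqrt(norm);
        for (int i = k; i < N; i++) v[i] = A[i][k];
        v[k] += (A[k][k] >= 0.0 ? norm : -norm);
        for (int i = k; i < N; i++) vv += v[i]*v[i];
        if (vv == 0.0) continue;
        for (int j = 0; j < N; j++) {
            double vx = 0.0;
            for (int i = k; i < N; i++) vx += v[i]*A[i][j];
            for (int i = k; i < N; i++) A[i][j] -= 2.0*v[i]*vx/vv;
        }
        for (int i = 0; i < N; i++) {             /* Q <- Q P_k */
            double vx = 0.0;
            for (int j = k; j < N; j++) vx += Q[i][j]*v[j];
            for (int j = k; j < N; j++) Q[i][j] -= 2.0*vx*v[j]/vv;
        }
    }
    double T[N][N] = {{0.0}};                     /* A <- R Q, Eq. (49) */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            for (int k = 0; k < N; k++) T[i][j] += A[i][k]*Q[k][j];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) A[i][j] = T[i][j];
}

int main(void)
{
    double A[N][N] = {{2,1,0},{1,3,1},{0,1,4}};   /* tridiagonal example */
    for (int s = 0; s < 50; s++) qr_step(A);
    for (int i = 0; i < N; i++)                   /* eigenvalues on the diagonal */
        printf("lambda ~ %9.6f\n", A[i][i]);
    return 0;
}

Each step is the orthogonal transformation A_{s+1} = Q_s^T A_s Q_s of Eq. (50); the subdiagonal elements decay geometrically with ratios |λ_{i+1}/λ_i|, which is why production implementations add shifts to accelerate convergence.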