Krylov Subspace Methods for Large/Sparse Eigenvalue Problems


Tsung-Ming Huang, Department of Mathematics, National Taiwan Normal University, Taiwan. April 17, 2012.

Outline
1. Lanczos decomposition
   - Householder transformation
2. Implicitly restarted Lanczos method
3. Arnoldi method
4. Generalized eigenvalue problem
   - Krylov-Schur restarting

Theorem 1
Let $\mathcal{V}$ be an eigenspace of $A$ and let $V$ be an orthonormal basis for $\mathcal{V}$. Then there is a unique matrix $H$ such that
$$AV = VH.$$
The matrix $H$ is given by
$$H = V^H A V.$$
If $(\lambda, x)$ is an eigenpair of $A$ with $x \in \mathcal{V}$, then $(\lambda, V^H x)$ is an eigenpair of $H$. Conversely, if $(\lambda, s)$ is an eigenpair of $H$, then $(\lambda, V s)$ is an eigenpair of $A$.

Theorem 2 (Optimal residuals)
Let $[V\ V_\perp]$ be unitary. Let $R = AV - VH$ and $S = V^H A - H V^H$. Then $\|R\|$ and $\|S\|$ are minimized when
$$H = V^H A V,$$
in which case
(a) $H = V^H A V$,
(b) $R = V_\perp (V_\perp^H A V)$, $S = (V^H A V_\perp) V_\perp^H$,
(c) $V^H R = 0$.

Definition 3
Let $V$ be orthonormal. Then $V^H A V$ is a Rayleigh quotient of $A$.

Theorem 4
Let $V$ be orthonormal, $A$ be Hermitian, $H = V^H A V$ and $R = AV - VH$. If $\theta_1, \ldots, \theta_k$ are the eigenvalues of $H$, then there are eigenvalues $\lambda_{j_1}, \ldots, \lambda_{j_k}$ of $A$ such that
$$|\theta_i - \lambda_{j_i}| \le \|R\|_2 \quad\text{and}\quad \sqrt{\sum_{i=1}^{k} (\theta_i - \lambda_{j_i})^2} \le \sqrt{2}\,\|R\|_F.$$

Suppose the eigenvalue with maximum modulus is wanted.

Power method
Compute the dominant eigenpair.

Disadvantage
At each step it considers only the single vector $A^k u$, which throws away the information contained in the previously generated vectors $u, Au, A^2u, \ldots, A^{k-1}u$.

Definition 5
Let $A$ be of order $n$ and let $u \ne 0$ be an $n$-vector. Then $\{u, Au, A^2u, A^3u, \ldots\}$ is a Krylov sequence based on $A$ and $u$. We call the matrix
$$K_k(A, u) = \begin{bmatrix} u & Au & A^2u & \cdots & A^{k-1}u \end{bmatrix}$$
the $k$th Krylov matrix. The space $\mathcal{K}_k(A, u) = \mathcal{R}[K_k(A, u)]$ is called the $k$th Krylov subspace.
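The following is a minimal NumPy sketch of assembling the $k$th Krylov matrix column by column (the function name krylov_matrix is ours, for illustration only):

import numpy as np

def krylov_matrix(A, u, k):
    # K_k(A, u) = [u, Au, A^2 u, ..., A^{k-1} u]
    K = np.empty((u.shape[0], k), dtype=np.result_type(A, u))
    K[:, 0] = u
    for j in range(1, k):
        K[:, j] = A @ K[:, j - 1]   # next member of the Krylov sequence
    return K

In exact arithmetic the column space of krylov_matrix(A, u, k) is $\mathcal{K}_k(A, u)$; in floating point the columns quickly become nearly linearly dependent, which is why the orthonormal bases constructed later are used instead.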

By the definition of $\mathcal{K}_k(A, u)$, any vector $v \in \mathcal{K}_k(A, u)$ can be written in the form
$$v = \gamma_1 u + \gamma_2 Au + \cdots + \gamma_k A^{k-1}u \equiv p(A)u,$$
where
$$p(A) = \gamma_1 I + \gamma_2 A + \gamma_3 A^2 + \cdots + \gamma_k A^{k-1}.$$
Assume that $A^H = A$ and $Ax_i = \lambda_i x_i$ for $i = 1, \ldots, n$. Write $u$ in the form
$$u = \alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n.$$
Since $p(A)x_i = p(\lambda_i)x_i$, we have
$$p(A)u = \alpha_1 p(\lambda_1)x_1 + \alpha_2 p(\lambda_2)x_2 + \cdots + \alpha_n p(\lambda_n)x_n. \quad (1)$$
If $|p(\lambda_i)|$ is large compared with $|p(\lambda_j)|$ for $j \ne i$, then $p(A)u$ is a good approximation to $x_i$.

Theorem 6
If $x_i^H u \ne 0$ and $p(\lambda_i) \ne 0$, then
$$\tan\angle(p(A)u, x_i) \le \max_{j \ne i} \frac{|p(\lambda_j)|}{|p(\lambda_i)|}\,\tan\angle(u, x_i).$$
Proof: From (1), we have
$$\cos\angle(p(A)u, x_i) = \frac{|x_i^H p(A)u|}{\|p(A)u\|_2\,\|x_i\|_2} = \frac{|\alpha_i p(\lambda_i)|}{\sqrt{\sum_{j=1}^{n} |\alpha_j p(\lambda_j)|^2}}$$
and
$$\sin\angle(p(A)u, x_i) = \frac{\sqrt{\sum_{j \ne i} |\alpha_j p(\lambda_j)|^2}}{\sqrt{\sum_{j=1}^{n} |\alpha_j p(\lambda_j)|^2}}.$$

Hence
$$\tan^2\angle(p(A)u, x_i) = \frac{\sum_{j \ne i} |\alpha_j p(\lambda_j)|^2}{|\alpha_i p(\lambda_i)|^2} \le \max_{j \ne i}\frac{|p(\lambda_j)|^2}{|p(\lambda_i)|^2}\cdot\frac{\sum_{j \ne i}|\alpha_j|^2}{|\alpha_i|^2} = \max_{j \ne i}\frac{|p(\lambda_j)|^2}{|p(\lambda_i)|^2}\,\tan^2\angle(u, x_i).$$
Assume that $p(\lambda_i) = 1$; then
$$\tan\angle(p(A)u, x_i) \le \max_{j \ne i}|p(\lambda_j)|\,\tan\angle(u, x_i) \quad\text{and}\quad p(A)u \in \mathcal{K}_k.$$
Hence
$$\tan\angle(x_i, \mathcal{K}_k) \le \min_{\deg(p)\le k-1,\ p(\lambda_i)=1}\ \max_{j \ne i}|p(\lambda_j)|\,\tan\angle(u, x_i).$$

Assume that $\lambda_1 > \lambda_2 \ge \cdots \ge \lambda_n$ and that our interest is in the eigenvector $x_1$. Then
$$\tan\angle(x_1, \mathcal{K}_k) \le \min_{\deg(p)\le k-1,\ p(\lambda_1)=1}\ \max_{\lambda \in [\lambda_n, \lambda_2]}|p(\lambda)|\cdot\tan\angle(u, x_1).$$
Question: How to compute
$$\min_{\deg(p)\le k-1,\ p(\lambda_1)=1}\ \max_{\lambda \in [\lambda_n, \lambda_2]}|p(\lambda)|\,?$$

Definition 7
The Chebyshev polynomials are defined by
$$c_k(t) = \begin{cases} \cos(k\cos^{-1}t), & |t| \le 1, \\ \cosh(k\cosh^{-1}t), & t \ge 1. \end{cases}$$

Theorem 8
(i) $c_0(t) = 1$, $c_1(t) = t$ and $c_{k+1}(t) = 2t\,c_k(t) - c_{k-1}(t)$, $k = 1, 2, \ldots$.
(ii) For $t > 1$,
$$c_k(t) = \frac{1}{2}\left[\left(t + \sqrt{t^2 - 1}\right)^k + \left(t + \sqrt{t^2 - 1}\right)^{-k}\right].$$
(iii) For $t \in [-1, 1]$, $|c_k(t)| \le 1$. Moreover, if
$$t_{ik} = \cos\frac{(k-i)\pi}{k}, \quad i = 0, 1, \ldots, k,$$
then $c_k(t_{ik}) = (-1)^{k-i}$.
(iv) For $s > 1$,
$$\min_{\deg(p)\le k,\ p(s)=1}\ \max_{t\in[0,1]}|p(t)| = \frac{1}{c_k(s)}, \quad (2)$$
and the minimum is attained only for $p(t) = c_k(t)/c_k(s)$.
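A small sketch evaluating $c_k(t)$ through the three-term recurrence of Theorem 8(i), which agrees with both branches of Definition 7 for every real $t$:

def chebyshev(k, t):
    # c_0(t) = 1, c_1(t) = t, c_{k+1}(t) = 2 t c_k(t) - c_{k-1}(t)
    if k == 0:
        return 1.0
    c_prev, c = 1.0, t
    for _ in range(k - 1):
        c_prev, c = c, 2.0 * t * c - c_prev
    return c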

For applying (2), we define
$$\lambda = \lambda_2 + (\mu - 1)(\lambda_2 - \lambda_n)$$
to transform the interval $[\lambda_n, \lambda_2]$ onto $[0, 1]$. Then the value of $\mu$ at $\lambda_1$ is
$$\mu_1 = 1 + \frac{\lambda_1 - \lambda_2}{\lambda_2 - \lambda_n}$$
and
$$\min_{\deg(p)\le k-1,\ p(\lambda_1)=1}\ \max_{\lambda\in[\lambda_n,\lambda_2]}|p(\lambda)| = \min_{\deg(p)\le k-1,\ p(\mu_1)=1}\ \max_{\mu\in[0,1]}|p(\mu)| = \frac{1}{c_{k-1}(\mu_1)}.$$

Theorem 9
Let $A^H = A$ and $Ax_i = \lambda_i x_i$, $i = 1, \ldots, n$, with $\lambda_1 > \lambda_2 \ge \cdots \ge \lambda_n$. Let
$$\eta = \frac{\lambda_1 - \lambda_2}{\lambda_2 - \lambda_n}.$$
Then
$$\tan\angle[x_1, \mathcal{K}_k(A, u)] \le \frac{\tan\angle(x_1, u)}{c_{k-1}(1+\eta)} = \frac{2\tan\angle(x_1, u)}{\left(1 + \sqrt{2\eta + \eta^2}\right)^{k-1} + \left(1 + \sqrt{2\eta + \eta^2}\right)^{-(k-1)}}.$$

For $k$ large and $\eta$ small, the bound becomes approximately
$$\tan\angle[x_1, \mathcal{K}_k(A, u)] \lesssim \frac{2\tan\angle(x_1, u)}{\left(1 + \sqrt{2\eta}\right)^{k-1}}.$$
Compare it with the power method: if $\lambda_1 > \lambda_2 \ge \cdots \ge \lambda_n$, then the convergence rate is $|\lambda_2/\lambda_1|^k$. For example, let $\lambda_1 = 1$, $\lambda_2 = 0.95$, and let $\lambda_3, \ldots, \lambda_{100}$ be the remaining eigenvalues of $A \in \mathbb{R}^{100\times100}$. With $\eta = (\lambda_1 - \lambda_2)/(\lambda_2 - \lambda_{100})$, the bound on the convergence rate is $1/(1 + \sqrt{2\eta})$, which is noticeably smaller than $0.95$. Thus the square-root effect gives a great improvement over the rate of $0.95$ for the power method.
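A quick numeric check of the square-root effect; the values $\lambda_1 = 1$ and $\lambda_2 = 0.95$ are from the example above, while taking $\lambda_{100} = 0$ for the smallest eigenvalue is our assumption, made only to produce concrete numbers:

import numpy as np

lam1, lam2, lam_n = 1.0, 0.95, 0.0       # lam_n = lambda_100 (assumed)
eta = (lam1 - lam2) / (lam2 - lam_n)     # gap ratio of Theorem 9
krylov_rate = 1.0 / (1.0 + np.sqrt(2.0 * eta))   # per-step rate in the Krylov bound
power_rate = lam2 / lam1                 # per-step rate of the power method
print(eta, krylov_rate, power_rate)      # ~0.053, ~0.76, 0.95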

Definition 10
A Householder transformation or elementary reflector is a matrix of the form
$$H = I - uu^H,$$
where $\|u\|_2 = \sqrt{2}$. Note that $H$ is Hermitian and unitary.

Theorem 11
Let $x$ be a vector with $x_1 \ne 0$ and set $\rho = \bar{x}_1/|x_1|$. Let
$$u = \frac{\rho x/\|x\|_2 + e_1}{\sqrt{1 + \rho x_1/\|x\|_2}}.$$
Then
$$Hx = -\bar{\rho}\,\|x\|_2\,e_1.$$

Proof: Since
$$\left[\rho x/\|x\|_2 + e_1\right]^H\left[\rho x/\|x\|_2 + e_1\right] = \rho\bar{\rho} + \bar{\rho}\bar{x}_1/\|x\|_2 + \rho x_1/\|x\|_2 + 1 = 2\left[1 + \rho x_1/\|x\|_2\right],$$
it follows that
$$u^Hu = 2 \quad\Longrightarrow\quad \|u\|_2 = \sqrt{2},$$
and
$$u^Hx = \frac{\bar{\rho}\|x\|_2 + x_1}{\sqrt{1 + \rho x_1/\|x\|_2}}.$$

Hence,
$$Hx = x - (u^Hx)u = x - \frac{\bar{\rho}\|x\|_2 + x_1}{1 + \rho x_1/\|x\|_2}\left(\frac{\rho x}{\|x\|_2} + e_1\right) = \left[1 - \frac{(\bar{\rho}\|x\|_2 + x_1)\rho}{\|x\|_2 + \rho x_1}\right]x - \frac{\bar{\rho}\|x\|_2 + x_1}{1 + \rho x_1/\|x\|_2}\,e_1 = -\bar{\rho}\,\|x\|_2\,e_1,$$
since $(\bar{\rho}\|x\|_2 + x_1)\rho = \|x\|_2 + \rho x_1$ and
$$\frac{\bar{\rho}\|x\|_2 + x_1}{1 + \rho x_1/\|x\|_2} = \frac{(\bar{\rho}\|x\|_2 + x_1)\|x\|_2}{\|x\|_2 + \rho x_1} = \bar{\rho}\,\|x\|_2.$$
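A minimal NumPy sketch of Theorem 11, assuming $x_1 \ne 0$ (the helper name householder is ours):

import numpy as np

def householder(x):
    # returns u with ||u||_2 = sqrt(2) and (I - u u^H) x = -conj(rho) ||x||_2 e_1
    normx = np.linalg.norm(x)
    rho = np.conj(x[0]) / abs(x[0])
    u = rho * x / normx
    u[0] += 1.0                          # rho x/||x||_2 + e_1
    return u / np.sqrt(1.0 + abs(x[0]) / normx)

x = np.array([3.0 + 4.0j, 1.0, -2.0j])
u = householder(x)
print(x - u * np.vdot(u, x))             # ~ [-conj(rho)*||x||_2, 0, 0]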

Definition 12
A complex $m \times n$ matrix $R = [r_{ij}]$ is called an upper (lower) triangular matrix if $r_{ij} = 0$ for $i > j$ ($i < j$).

Definition 13
Given $A \in \mathbb{C}^{m\times n}$, $Q \in \mathbb{C}^{m\times m}$ unitary and $R \in \mathbb{C}^{m\times n}$ upper triangular such that $A = QR$, the product $QR$ is called a QR-factorization of $A$.

Theorem 14
Any complex $m \times n$ matrix $A$ can be factorized as the product $A = QR$, where $Q$ is $m \times m$ unitary and $R$ is $m \times n$ upper triangular.

Proof: Let $A^{(0)} = A = [a_1^{(0)}\ a_2^{(0)}\ \cdots\ a_n^{(0)}]$. Find $Q_1 = I - 2w_1w_1^H$ such that $Q_1a_1^{(0)} = c_1e_1$. Then
$$A^{(1)} = Q_1A^{(0)} = [Q_1a_1^{(0)},\ Q_1a_2^{(0)},\ \ldots,\ Q_1a_n^{(0)}] = \begin{bmatrix} c_1 & * & \cdots & * \\ 0 & & & \\ \vdots & a_2^{(1)} & \cdots & a_n^{(1)} \\ 0 & & & \end{bmatrix}. \quad (3)$$
Find
$$Q_2 = \begin{bmatrix} 1 & 0 \\ 0 & I - 2w_2w_2^H \end{bmatrix}$$
such that $(I - 2w_2w_2^H)a_2^{(1)} = c_2e_1$. Then
$$A^{(2)} = Q_2A^{(1)} = \begin{bmatrix} c_1 & * & * & \cdots & * \\ 0 & c_2 & * & \cdots & * \\ 0 & 0 & & & \\ \vdots & \vdots & a_3^{(2)} & \cdots & a_n^{(2)} \\ 0 & 0 & & & \end{bmatrix}.$$

We continue this process. After $l - 1$ steps, where $l = \min(m, n)$, the matrix
$$A^{(l-1)} = Q_{l-1}\cdots Q_1A \equiv R$$
is upper triangular. Then $A = QR$, where $Q = Q_1\cdots Q_{l-1}$.

Suppose that the columns of $K_{k+1}$ are linearly independent and let
$$K_{k+1} = U_{k+1}R_{k+1}$$
be the QR factorization of $K_{k+1}$. Then the columns of $U_{k+1}$ are the results of successively orthogonalizing the columns of $K_{k+1}$.
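A sketch of the proof's construction for real matrices, accumulating the reflectors into $Q$ (we use the standard sign choice that avoids cancellation):

import numpy as np

def householder_qr(A):
    # Householder QR as in the proof of Theorem 14: R = Q_{l-1} ... Q_1 A
    R = np.array(A, dtype=float)
    m, n = R.shape
    Q = np.eye(m)
    for j in range(min(m - 1, n)):
        x = R[j:, j]
        normx = np.linalg.norm(x)
        if normx == 0.0:
            continue
        v = x.copy()
        v[0] += np.copysign(normx, x[0])             # avoid cancellation
        v /= np.linalg.norm(v)
        R[j:, :] -= 2.0 * np.outer(v, v @ R[j:, :])  # apply I - 2 v v^T from the left
        Q[:, j:] -= 2.0 * np.outer(Q[:, j:] @ v, v)  # accumulate Q = Q_1 ... Q_{l-1}
    return Q, R

A = np.random.rand(5, 3)
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))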

Theorem 15
Let $\|u_1\|_2 = 1$ and let the columns of $K_{k+1}(A, u_1)$ be linearly independent. Let $U_{k+1} = [u_1\ \cdots\ u_{k+1}]$ be the Q-factor of $K_{k+1}$. Then there is a $(k+1)\times k$ unreduced upper Hessenberg matrix
$$\hat{H}_k = \begin{bmatrix} \hat{h}_{11} & \hat{h}_{12} & \cdots & \hat{h}_{1k} \\ \hat{h}_{21} & \hat{h}_{22} & \cdots & \hat{h}_{2k} \\ & \ddots & \ddots & \vdots \\ & & \hat{h}_{k,k-1} & \hat{h}_{kk} \\ & & & \hat{h}_{k+1,k} \end{bmatrix} \quad\text{with } \hat{h}_{i+1,i} \ne 0 \quad (4)$$
such that
$$AU_k = U_{k+1}\hat{H}_k. \quad\text{(Arnoldi decomposition)} \quad (5)$$
Conversely, if $U_{k+1}$ is orthonormal and satisfies (5), where $\hat{H}_k$ is defined in (4), then $U_{k+1}$ is the Q-factor of $K_{k+1}(A, u_1)$.

Proof: ($\Rightarrow$) Let $K_k = U_kR_k$ be the QR factorization and $S_k = R_k^{-1}$. Then
$$AU_k = AK_kS_k = K_{k+1}\begin{bmatrix} 0 \\ S_k \end{bmatrix} = U_{k+1}R_{k+1}\begin{bmatrix} 0 \\ S_k \end{bmatrix} = U_{k+1}\hat{H}_k,$$
where
$$\hat{H}_k = R_{k+1}\begin{bmatrix} 0 \\ S_k \end{bmatrix}.$$
It implies that $\hat{H}_k$ is a $(k+1)\times k$ Hessenberg matrix with
$$\hat{h}_{i+1,i} = r_{i+1,i+1}s_{ii} = \frac{r_{i+1,i+1}}{r_{ii}}.$$
Thus, by the nonsingularity of $R_k$, $\hat{H}_k$ is unreduced.

($\Leftarrow$) If $k = 1$, then
$$Au_1 = h_{11}u_1 + h_{21}u_2 = \begin{bmatrix} u_1 & u_2 \end{bmatrix}\begin{bmatrix} h_{11} \\ h_{21} \end{bmatrix},$$
which implies that
$$K_2 = \begin{bmatrix} u_1 & Au_1 \end{bmatrix} = \begin{bmatrix} u_1 & u_2 \end{bmatrix}\begin{bmatrix} 1 & h_{11} \\ 0 & h_{21} \end{bmatrix}.$$
Since $[u_1\ u_2]$ is orthonormal, $[u_1\ u_2]$ is the Q-factor of $K_2$. Assume $U_k$ is the Q-factor of $K_k$, i.e., $K_k = U_kR_k$. By the definition of the Krylov matrix, we have
$$K_{k+1} = \begin{bmatrix} u_1 & AK_k \end{bmatrix} = \begin{bmatrix} u_1 & AU_kR_k \end{bmatrix} = \begin{bmatrix} u_1 & U_{k+1}\hat{H}_kR_k \end{bmatrix} = U_{k+1}\begin{bmatrix} e_1 & \hat{H}_kR_k \end{bmatrix}.$$
Hence $U_{k+1}$ is the Q-factor of $K_{k+1}$.

The uniqueness of Hessenberg reduction

Definition 16
Let $H$ be upper Hessenberg of order $n$. Then $H$ is unreduced if $h_{i+1,i} \ne 0$ for $i = 1, \ldots, n-1$.

Theorem 17 (Implicit Q theorem)
Suppose $Q = (q_1\ \cdots\ q_n)$ and $V = (v_1\ \cdots\ v_n)$ are unitary matrices with $Q^HAQ = H$ and $V^HAV = G$ being upper Hessenberg. Let $k$ denote the smallest positive integer for which $h_{k+1,k} = 0$, with the convention that $k = n$ if $H$ is unreduced. If $v_1 = q_1$, then $v_i = \pm q_i$ and $|h_{i,i-1}| = |g_{i,i-1}|$ for $i = 2, \ldots, k$. Moreover, if $k < n$, then $g_{k+1,k} = 0$.

Definition 18
Let $U_{k+1} \in \mathbb{C}^{n\times(k+1)}$ be orthonormal. If there is a $(k+1)\times k$ unreduced upper Hessenberg matrix $\hat{H}_k$ such that
$$AU_k = U_{k+1}\hat{H}_k, \quad (6)$$
then (6) is called an Arnoldi decomposition of order $k$. If $\hat{H}_k$ is reduced, we say the Arnoldi decomposition is reduced.

Partition
$$\hat{H}_k = \begin{bmatrix} H_k \\ h_{k+1,k}e_k^T \end{bmatrix}$$
and set $\beta_k = h_{k+1,k}$. Then (6) is equivalent to
$$AU_k = U_kH_k + \beta_k u_{k+1}e_k^T.$$

Theorem 19
Suppose the Krylov sequence $K_{k+1}(A, u_1)$ does not terminate at $k+1$. Then, up to scaling of the columns of $U_{k+1}$, the Arnoldi decomposition of $K_{k+1}$ is unique.

Proof: Since the Krylov sequence $K_{k+1}(A, u_1)$ does not terminate at $k+1$, the columns of $K_{k+1}(A, u_1)$ are linearly independent. By Theorem 15, there is an unreduced matrix $H_k$ and $\beta_k \ne 0$ such that
$$AU_k = U_kH_k + \beta_k u_{k+1}e_k^T, \quad (7)$$
where $U_{k+1} = [U_k\ u_{k+1}]$ is an orthonormal basis for $\mathcal{K}_{k+1}(A, u_1)$. Suppose there is another orthonormal basis $\tilde{U}_{k+1} = [\tilde{U}_k\ \tilde{u}_{k+1}]$ for $\mathcal{K}_{k+1}(A, u_1)$, an unreduced matrix $\tilde{H}_k$ and $\tilde{\beta}_k \ne 0$ such that
$$A\tilde{U}_k = \tilde{U}_k\tilde{H}_k + \tilde{\beta}_k\tilde{u}_{k+1}e_k^T.$$

Then we claim that $\tilde{U}_k^Hu_{k+1} = 0$. For otherwise there is a column $\tilde{u}_j$ of $\tilde{U}_k$ such that
$$\tilde{u}_j = \alpha u_{k+1} + U_ka, \quad \alpha \ne 0.$$
Hence
$$A\tilde{u}_j = \alpha Au_{k+1} + AU_ka,$$
which implies that $A\tilde{u}_j$ contains a component along $A^{k+1}u_1$. Since the Krylov sequence $K_{k+1}(A, u_1)$ does not terminate at $k+1$, we have
$$\mathcal{K}_{k+2}(A, u_1) \ne \mathcal{K}_{k+1}(A, u_1).$$
Therefore, $A\tilde{u}_j$ lies in $\mathcal{K}_{k+2}(A, u_1)$ but not in $\mathcal{K}_{k+1}(A, u_1)$. On the other hand, $A\tilde{u}_j$ is a column of $A\tilde{U}_k = \tilde{U}_k\tilde{H}_k + \tilde{\beta}_k\tilde{u}_{k+1}e_k^T$ and hence lies in $\mathcal{K}_{k+1}(A, u_1)$, which is a contradiction.

Since $U_{k+1}$ and $\tilde{U}_{k+1}$ are orthonormal bases for $\mathcal{K}_{k+1}(A, u_1)$ and $\tilde{U}_k^Hu_{k+1} = 0$, it follows that
$$\mathcal{R}(U_k) = \mathcal{R}(\tilde{U}_k) \quad\text{and}\quad U_k^H\tilde{u}_{k+1} = 0,$$
that is,
$$U_k = \tilde{U}_kQ$$
for some unitary matrix $Q$. Hence
$$A(\tilde{U}_kQ) = (\tilde{U}_kQ)(Q^H\tilde{H}_kQ) + \tilde{\beta}_k\tilde{u}_{k+1}(e_k^TQ),$$
or
$$AU_k = U_k(Q^H\tilde{H}_kQ) + \tilde{\beta}_k\tilde{u}_{k+1}e_k^TQ. \quad (8)$$
On premultiplying (7) and (8) by $U_k^H$, we obtain
$$H_k = U_k^HAU_k = Q^H\tilde{H}_kQ.$$
Similarly, premultiplying by $u_{k+1}^H$, we obtain
$$\beta_ke_k^T = u_{k+1}^HAU_k = \tilde{\beta}_k(u_{k+1}^H\tilde{u}_{k+1})e_k^TQ.$$

It follows that the last row of $Q$ is $\omega_ke_k^T$, where $|\omega_k| = 1$. Since the norm of the last column of $Q$ is one, the last column of $Q$ is $\omega_ke_k$. Since $H_k$ is unreduced, it follows from the implicit Q theorem that
$$Q = \operatorname{diag}(\omega_1, \ldots, \omega_k), \quad |\omega_j| = 1, \quad j = 1, \ldots, k.$$
Thus, up to column scaling, $U_k = \tilde{U}_kQ$ is the same as $\tilde{U}_k$. Subtracting (8) from (7), we find that
$$\beta_ku_{k+1} = \omega_k\tilde{\beta}_k\tilde{u}_{k+1},$$
so that up to scaling $u_{k+1}$ and $\tilde{u}_{k+1}$ are the same.

Let $A$ be Hermitian and let
$$AU_k = U_kT_k + \beta_ku_{k+1}e_k^T \quad (9)$$
be an Arnoldi decomposition. Since $T_k$ is upper Hessenberg and $T_k = U_k^HAU_k$ is Hermitian, it follows that $T_k$ is tridiagonal and can be written in the form
$$T_k = \begin{bmatrix} \alpha_1 & \beta_1 & & & \\ \beta_1 & \alpha_2 & \beta_2 & & \\ & \beta_2 & \alpha_3 & \ddots & \\ & & \ddots & \ddots & \beta_{k-1} \\ & & & \beta_{k-1} & \alpha_k \end{bmatrix}.$$
Equation (9) is called a Lanczos decomposition.

The first column of (9) is
$$Au_1 = \alpha_1u_1 + \beta_1u_2, \quad\text{or}\quad u_2 = \frac{Au_1 - \alpha_1u_1}{\beta_1}.$$
From the orthonormality of $u_1$ and $u_2$, it follows that
$$\alpha_1 = u_1^HAu_1 \quad\text{and}\quad \beta_1 = \|Au_1 - \alpha_1u_1\|_2.$$
More generally, from the $j$-th column of (9) we get the relation
$$u_{j+1} = \frac{Au_j - \alpha_ju_j - \beta_{j-1}u_{j-1}}{\beta_j},$$
where
$$\alpha_j = u_j^HAu_j \quad\text{and}\quad \beta_j = \|Au_j - \alpha_ju_j - \beta_{j-1}u_{j-1}\|_2.$$
This is the Lanczos three-term recurrence.

Algorithm 1 (Lanczos recurrence)
Let $u_1$ be given. This algorithm generates the Lanczos decomposition
$$AU_k = U_kT_k + \beta_ku_{k+1}e_k^T,$$
where $T_k$ is symmetric tridiagonal.
1. $u_0 = 0$; $\beta_0 = 0$;
2. for $j = 1$ to $k$
3.   $u_{j+1} = Au_j$
4.   $\alpha_j = u_j^Hu_{j+1}$
5.   $v = u_{j+1} - \alpha_ju_j - \beta_{j-1}u_{j-1}$
6.   $\beta_j = \|v\|_2$
7.   $u_{j+1} = v/\beta_j$
8. end for $j$
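A direct NumPy transcription of Algorithm 1 for a real symmetric $A$ (no reorthogonalization, so orthogonality decays in floating point, as discussed below):

import numpy as np

def lanczos(A, u1, k):
    # returns U (n x (k+1)), alpha, beta with A U_k = U_k T_k + beta_k u_{k+1} e_k^T
    n = u1.shape[0]
    U = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    U[:, 0] = u1 / np.linalg.norm(u1)
    for j in range(k):
        v = A @ U[:, j]
        alpha[j] = U[:, j] @ v
        v -= alpha[j] * U[:, j]
        if j > 0:
            v -= beta[j - 1] * U[:, j - 1]   # three-term recurrence
        beta[j] = np.linalg.norm(v)
        U[:, j + 1] = v / beta[j]
    return U, alpha, beta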

Reorthogonalization
Let $\tilde{u}_{j+1} = Au_j - \alpha_ju_j - \beta_{j-1}u_{j-1}$ with $\alpha_j = u_j^HAu_j$. Re-orthogonalize $\tilde{u}_{j+1}$ against $U_j$, i.e.,
$$\tilde{u}_{j+1} := \tilde{u}_{j+1} - \sum_{i=1}^{j}(u_i^H\tilde{u}_{j+1})u_i = Au_j - \left(\alpha_j + u_j^H\tilde{u}_{j+1}\right)u_j - \beta_{j-1}u_{j-1} - \sum_{i=1}^{j-1}(u_i^H\tilde{u}_{j+1})u_i.$$
Take
$$\beta_j = \|\tilde{u}_{j+1}\|_2, \quad u_{j+1} = \tilde{u}_{j+1}/\beta_j.$$

Theorem 20 (Stopping criterion)
Suppose that $j$ steps of the Lanczos algorithm have been performed and that
$$S_j^HT_jS_j = \operatorname{diag}(\theta_1, \ldots, \theta_j)$$
is the Schur decomposition of the tridiagonal matrix $T_j$. If $Y_j \in \mathbb{C}^{n\times j}$ is defined by
$$Y_j \equiv [y_1\ \cdots\ y_j] = U_jS_j,$$
then for $i = 1, \ldots, j$ we have
$$\|Ay_i - \theta_iy_i\|_2 = \beta_j|s_{ji}|,$$
where $S_j = [s_{pq}]$.

Proof: Post-multiplying
$$AU_j = U_jT_j + \beta_ju_{j+1}e_j^T$$
by $S_j$ gives
$$AY_j = Y_j\operatorname{diag}(\theta_1, \ldots, \theta_j) + \beta_ju_{j+1}e_j^TS_j,$$
i.e.,
$$Ay_i = \theta_iy_i + \beta_ju_{j+1}(e_j^TS_je_i), \quad i = 1, \ldots, j.$$
The proof is complete by taking norms.

Remark 1
The stopping criterion is $\beta_j|s_{ji}|$; one does not need to compute $\|Ay_i - \theta_iy_i\|_2$. In general, $\beta_j$ is not small, but it is possible that $\beta_j|s_{ji}|$ is small.
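A sketch of the stopping criterion of Theorem 20: the residual norms of all Ritz pairs are read off from the small matrix $T_j$ alone (the helper name ritz_residuals is ours):

import numpy as np

def ritz_residuals(alpha, beta):
    # alpha: diagonal of T_j; beta: beta_1..beta_j (beta[-1] = beta_j of the decomposition)
    j = len(alpha)
    T = np.diag(alpha) + np.diag(beta[:j - 1], 1) + np.diag(beta[:j - 1], -1)
    theta, S = np.linalg.eigh(T)                   # T_j = S_j diag(theta) S_j^T
    return theta, beta[j - 1] * np.abs(S[-1, :])   # beta_j |s_{ji}|

Each entry of the second return value equals $\|Ay_i - \theta_iy_i\|_2$ without ever forming the long vectors $y_i$.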

Theorem 21
Let $A$ be an $n \times n$ symmetric matrix with eigenvalues $\lambda_1 \ge \cdots \ge \lambda_n$ and corresponding orthonormal eigenvectors $z_1, \ldots, z_n$. If $\theta_1 \ge \cdots \ge \theta_j$ are the eigenvalues of $T_j$ obtained after $j$ steps of the Lanczos iteration, then
$$\lambda_1 \ge \theta_1 \ge \lambda_1 - \frac{(\lambda_1 - \lambda_n)(\tan\varphi_1)^2}{[c_{j-1}(1 + 2\rho_1)]^2},$$
where $\cos\varphi_1 = |u_1^Tz_1|$, $c_{j-1}$ is the Chebyshev polynomial of degree $j-1$ and
$$\rho_1 = \frac{\lambda_1 - \lambda_2}{\lambda_2 - \lambda_n}.$$

Proof: From the Courant-Fischer theorem we have
$$\theta_1 = \max_{y \ne 0}\frac{y^TT_jy}{y^Ty} = \max_{y \ne 0}\frac{(U_jy)^TA(U_jy)}{(U_jy)^T(U_jy)} = \max_{0 \ne w \in \mathcal{K}_j(A, u_1)}\frac{w^TAw}{w^Tw}.$$
Since $\lambda_1$ is the maximum of $w^TAw/w^Tw$ over all nonzero $w$, it follows that $\lambda_1 \ge \theta_1$. To obtain the lower bound for $\theta_1$, note that
$$\theta_1 = \max_{p \in P_{j-1}}\frac{u_1^Tp(A)Ap(A)u_1}{u_1^Tp(A)^2u_1},$$
where $P_{j-1}$ is the set of all polynomials of degree at most $j-1$. If $u_1 = \sum_{i=1}^{n}d_iz_i$, then
$$\frac{u_1^Tp(A)Ap(A)u_1}{u_1^Tp(A)^2u_1} = \frac{\sum_{i=1}^{n}d_i^2p(\lambda_i)^2\lambda_i}{\sum_{i=1}^{n}d_i^2p(\lambda_i)^2} \ge \lambda_1 - (\lambda_1 - \lambda_n)\frac{\sum_{i=2}^{n}d_i^2p(\lambda_i)^2}{d_1^2p(\lambda_1)^2 + \sum_{i=2}^{n}d_i^2p(\lambda_i)^2}.$$

We can make the lower bound tight by selecting a polynomial $p(\alpha)$ that is large at $\alpha = \lambda_1$ in comparison with its value at the remaining eigenvalues. Set
$$p(\alpha) = c_{j-1}\left(-1 + 2\,\frac{\alpha - \lambda_n}{\lambda_2 - \lambda_n}\right),$$
where $c_{j-1}(z)$ is the $(j-1)$-st Chebyshev polynomial generated by
$$c_j(z) = 2zc_{j-1}(z) - c_{j-2}(z), \quad c_0 = 1,\ c_1 = z.$$
These polynomials are bounded by unity on $[-1, 1]$. It follows that $|p(\lambda_i)|$ is bounded by unity for $i = 2, \ldots, n$, while $p(\lambda_1) = c_{j-1}(1 + 2\rho_1)$. Thus,
$$\theta_1 \ge \lambda_1 - (\lambda_1 - \lambda_n)\,\frac{1 - d_1^2}{d_1^2}\,\frac{1}{c_{j-1}^2(1 + 2\rho_1)}.$$
The desired lower bound is obtained by noting that $\tan(\varphi_1)^2 = (1 - d_1^2)/d_1^2$.

Theorem 22
Using the same notation as in Theorem 21,
$$\lambda_n \le \theta_j \le \lambda_n + \frac{(\lambda_1 - \lambda_n)\tan^2\varphi_n}{[c_{j-1}(1 + 2\rho_n)]^2},$$
where
$$\rho_n = \frac{\lambda_{n-1} - \lambda_n}{\lambda_1 - \lambda_{n-1}}, \quad \cos\varphi_n = |u_1^Tz_n|.$$
Proof: Apply Theorem 21 with $A$ replaced by $-A$.

Restarted Lanczos method
Let
$$AU_m = U_mT_m + \beta_mu_{m+1}e_m^T$$
be a Lanczos decomposition.
1. In principle, we can keep expanding the Lanczos decomposition until the Ritz pairs have converged.
2. Unfortunately, this is limited by the amount of memory needed to store $U_m$.
3. Restart the Lanczos process once $m$ becomes so large that we cannot store $U_m$.
   - Implicitly restarting method
   - Krylov-Schur decomposition

Implicitly restarted Lanczos method
Choose a new starting vector for the underlying Krylov sequence. A natural choice would be a linear combination of the Ritz vectors that we are interested in.

Filter polynomials
Assume $A$ has a complete system of eigenpairs $(\lambda_i, x_i)$ and we are interested in the first $k$ of these eigenpairs. Expand $u_1$ in the form
$$u_1 = \sum_{i=1}^{k}\gamma_ix_i + \sum_{i=k+1}^{n}\gamma_ix_i.$$
If $p$ is any polynomial, we have
$$p(A)u_1 = \sum_{i=1}^{k}\gamma_ip(\lambda_i)x_i + \sum_{i=k+1}^{n}\gamma_ip(\lambda_i)x_i.$$

Choose $p$ so that the values $|p(\lambda_i)|$ $(i = k+1, \ldots, n)$ are small compared with the values $|p(\lambda_i)|$ $(i = 1, \ldots, k)$. Then $p(A)u_1$ is rich in the components of the $x_i$ that we want and deficient in the ones that we do not want. $p$ is called a filter polynomial. Suppose we have Ritz values $\theta_1, \ldots, \theta_m$ and $\theta_1, \ldots, \theta_{m-k}$ are not interesting. Then take
$$p(t) = (t - \theta_1)\cdots(t - \theta_{m-k}).$$

Implicitly restarted Lanczos: Let
$$AU_m = U_mT_m + \beta_mu_{m+1}e_m^T \quad (10)$$
be a Lanczos decomposition of order $m$. Choose a filter polynomial $p$ of degree $m-k$ and use the implicit restarting process to reduce the decomposition to a decomposition
$$A\tilde{U}_k = \tilde{U}_k\tilde{T}_k + \tilde{\beta}_k\tilde{u}_{k+1}e_k^T$$
of order $k$ with starting vector $p(A)u_1$.

Let $\theta_1, \ldots, \theta_m$ be the eigenvalues of $T_m$ and suppose that $\theta_1, \ldots, \theta_{m-k}$ correspond to the part of the spectrum we are not interested in. Then take
$$p(t) = (t - \theta_1)(t - \theta_2)\cdots(t - \theta_{m-k}).$$
The starting vector $p(A)u_1$ is equal to
$$p(A)u_1 = (A - \theta_{m-k}I)\cdots(A - \theta_2I)(A - \theta_1I)u_1 = (A - \theta_{m-k}I)\left[\cdots\left[(A - \theta_2I)\left[(A - \theta_1I)u_1\right]\right]\cdots\right].$$
In the first step, we construct a Lanczos decomposition with starting vector $(A - \theta_1I)u_1$. From (10), we have
$$(A - \theta_1I)U_m = U_m(T_m - \theta_1I) + \beta_mu_{m+1}e_m^T \quad (11)$$
$$= U_mQ_1R_1 + \beta_mu_{m+1}e_m^T,$$
where
$$T_m - \theta_1I = Q_1R_1$$
is the QR factorization of $T_m - \theta_1I$.

Postmultiplying by $Q_1$, we get
$$(A - \theta_1I)(U_mQ_1) = (U_mQ_1)(R_1Q_1) + \beta_mu_{m+1}(e_m^TQ_1).$$
It implies that
$$AU_m^{(1)} = U_m^{(1)}T_m^{(1)} + \beta_mu_{m+1}b_{m+1}^{(1)H},$$
where
$$U_m^{(1)} = U_mQ_1, \quad T_m^{(1)} = R_1Q_1 + \theta_1I, \quad b_{m+1}^{(1)H} = e_m^TQ_1.$$
($T_m^{(1)}$: one step of the single shifted QR algorithm.)

Theorem 23
Let $T_m$ be unreduced tridiagonal. Then $T_m^{(1)}$ has the form
$$T_m^{(1)} = \begin{bmatrix} \hat{T}_{m-1}^{(1)} & 0 \\ 0 & \theta_1 \end{bmatrix},$$
where $\hat{T}_{m-1}^{(1)}$ is unreduced tridiagonal.

Proof: Let $T_m - \theta_1I = Q_1R_1$ be the QR factorization of $T_m - \theta_1I$ with
$$Q_1 = G(1, 2, \varphi_1)\cdots G(m-1, m, \varphi_{m-1}),$$
where $G(i, i+1, \varphi_i)$ for $i = 1, \ldots, m-1$ are Givens rotations.

Since $T_m$ is unreduced tridiagonal, i.e., the subdiagonal elements of $T_m$ are nonzero, we get
$$\varphi_i \ne 0 \quad\text{for } i = 1, \ldots, m-1 \quad (12)$$
and
$$(R_1)_{ii} \ne 0 \quad\text{for } i = 1, \ldots, m-1. \quad (13)$$
Since $\theta_1$ is an eigenvalue of $T_m$, the matrix $T_m - \theta_1I$ is singular and then
$$(R_1)_{mm} = 0. \quad (14)$$
Using the results of (12), (13) and (14), we get
$$T_m^{(1)} = R_1Q_1 + \theta_1I = R_1G(1, 2, \varphi_1)\cdots G(m-1, m, \varphi_{m-1}) + \theta_1I = \begin{bmatrix} \hat{H}_{m-1}^{(1)} & \hat{h}_{12} \\ 0 & \theta_1 \end{bmatrix}, \quad (15)$$
where $\hat{H}_{m-1}^{(1)}$ is unreduced upper Hessenberg.

By the definition of $T_m^{(1)}$, we get
$$Q_1T_m^{(1)}Q_1^H = Q_1(R_1Q_1 + \theta_1I)Q_1^H = Q_1R_1 + \theta_1I = T_m.$$
It implies that $T_m^{(1)}$ is unitarily similar to the Hermitian matrix $T_m$, hence Hermitian and tridiagonal; combined with (15), the block-diagonal form claimed in Theorem 23 is obtained.

Remark 2
$U_m^{(1)} = U_mQ_1$ is orthonormal. The vector $b_{m+1}^{(1)H} = e_m^TQ_1$ has the form
$$b_{m+1}^{(1)H} = \begin{bmatrix} 0 & \cdots & 0 & q_{m-1,m}^{(1)} & q_{m,m}^{(1)} \end{bmatrix};$$
i.e., only the last two components of $b_{m+1}^{(1)}$ are nonzero.

For, on postmultiplying (11) by $e_1$, we get
$$(A - \theta_1I)u_1 = (A - \theta_1I)(U_me_1) = U_m^{(1)}R_1e_1 = r_{11}^{(1)}u_1^{(1)}.$$
Since $T_m$ is unreduced, $r_{11}^{(1)}$ is nonzero. Therefore, the first column of $U_m^{(1)}$ is a multiple of $(A - \theta_1I)u_1$. By the definition of $T_m^{(1)}$, we get
$$Q_1T_m^{(1)}Q_1^H = Q_1(R_1Q_1 + \theta_1I)Q_1^H = Q_1R_1 + \theta_1I = T_m.$$
Therefore, $\theta_1, \theta_2, \ldots, \theta_m$ are also the eigenvalues of $T_m^{(1)}$.

Similarly,
$$(A - \theta_2I)U_m^{(1)} = U_m^{(1)}(T_m^{(1)} - \theta_2I) + \beta_mu_{m+1}b_{m+1}^{(1)H} \quad (16)$$
$$= U_m^{(1)}Q_2R_2 + \beta_mu_{m+1}b_{m+1}^{(1)H},$$
where
$$T_m^{(1)} - \theta_2I = Q_2R_2$$
is the QR factorization of $T_m^{(1)} - \theta_2I$. Postmultiplying by $Q_2$, we get
$$(A - \theta_2I)(U_m^{(1)}Q_2) = (U_m^{(1)}Q_2)(R_2Q_2) + \beta_mu_{m+1}(b_{m+1}^{(1)H}Q_2).$$
It implies that
$$AU_m^{(2)} = U_m^{(2)}T_m^{(2)} + \beta_mu_{m+1}b_{m+1}^{(2)H},$$
where $U_m^{(2)} \equiv U_m^{(1)}Q_2$ is orthonormal,

$$T_m^{(2)} \equiv R_2Q_2 + \theta_2I = \begin{bmatrix} \hat{T}_{m-2}^{(2)} & 0 & 0 \\ 0 & \theta_2 & 0 \\ 0 & 0 & \theta_1 \end{bmatrix}$$
is tridiagonal with unreduced leading block $\hat{T}_{m-2}^{(2)}$, and
$$b_{m+1}^{(2)H} \equiv b_{m+1}^{(1)H}Q_2 = q_{m-1,m}^{(1)}e_{m-1}^HQ_2 + q_{m,m}^{(1)}e_m^HQ_2 = \begin{bmatrix} 0 & \cdots & 0 & * & * & * \end{bmatrix};$$
i.e., only the last three components of $b_{m+1}^{(2)}$ are nonzero. For, on postmultiplying (16) by $e_1$, we get
$$(A - \theta_2I)u_1^{(1)} = (A - \theta_2I)(U_m^{(1)}e_1) = U_m^{(2)}R_2e_1 = r_{11}^{(2)}u_1^{(2)}.$$
Since the leading block of $T_m^{(1)}$ is unreduced, $r_{11}^{(2)}$ is nonzero. Therefore, the first column of $U_m^{(2)}$ is a multiple of
$$(A - \theta_2I)u_1^{(1)} = \frac{1}{r_{11}^{(1)}}(A - \theta_2I)(A - \theta_1I)u_1.$$

Repeating this process with $\theta_3, \ldots, \theta_{m-k}$, the result is a Krylov decomposition
$$AU_m^{(m-k)} = U_m^{(m-k)}T_m^{(m-k)} + \beta_mu_{m+1}b_{m+1}^{(m-k)H}$$
with the following properties:
1. $U_m^{(m-k)}$ is orthonormal.
2. $T_m^{(m-k)}$ is tridiagonal.
3. The first $k-1$ components of $b_{m+1}^{(m-k)H}$ are zero.
4. The first column of $U_m^{(m-k)}$ is a multiple of $(A - \theta_1I)\cdots(A - \theta_{m-k}I)u_1$.

Corollary 24
Let $\theta_1, \ldots, \theta_m$ be the eigenvalues of $T_m$. If the implicitly restarted QR step is performed with shifts $\theta_1, \ldots, \theta_{m-k}$, then the matrix $T_m^{(m-k)}$ has the form
$$T_m^{(m-k)} = \begin{bmatrix} T_{kk}^{(m-k)} & 0 \\ 0 & D^{(m-k)} \end{bmatrix},$$
where $D^{(m-k)}$ is a diagonal matrix with the Ritz values $\theta_1, \ldots, \theta_{m-k}$ on its diagonal.

For $k = 3$ and $m = 6$,
$$A\begin{bmatrix} u & u & u & u & u & u \end{bmatrix} = \begin{bmatrix} u & u & u & u & u & u \end{bmatrix}\begin{bmatrix} \times & \times & & & & \\ \times & \times & \times & & & \\ & \times & \times & & & \\ & & & \theta_3 & & \\ & & & & \theta_2 & \\ & & & & & \theta_1 \end{bmatrix} + u\begin{bmatrix} 0 & 0 & q & q & q & q \end{bmatrix}.$$
Therefore, the first $k$ columns of the decomposition can be written in the form
$$AU_k^{(m-k)} = U_k^{(m-k)}T_{kk}^{(m-k)} + t_{k+1,k}u_{k+1}^{(m-k)}e_k^T + \beta_mq_{mk}u_{m+1}e_k^T,$$
where $U_k^{(m-k)}$ consists of the first $k$ columns of $U_m^{(m-k)}$, $T_{kk}^{(m-k)}$ is the leading principal submatrix of order $k$ of $T_m^{(m-k)}$, and $q_{mk}$ is from the matrix $Q = Q_1\cdots Q_{m-k}$.

Hence, if we set
$$\tilde{U}_k = U_k^{(m-k)}, \quad \tilde{T}_k = T_{kk}^{(m-k)}, \quad \tilde{\beta}_k = \left\|t_{k+1,k}u_{k+1}^{(m-k)} + \beta_mq_{mk}u_{m+1}\right\|_2, \quad \tilde{u}_{k+1} = \frac{1}{\tilde{\beta}_k}\left(t_{k+1,k}u_{k+1}^{(m-k)} + \beta_mq_{mk}u_{m+1}\right),$$
then
$$A\tilde{U}_k = \tilde{U}_k\tilde{T}_k + \tilde{\beta}_k\tilde{u}_{k+1}e_k^T$$
is a Lanczos decomposition whose starting vector is proportional to $(A - \theta_1I)\cdots(A - \theta_{m-k}I)u_1$.
- We avoid any matrix-vector multiplications in forming the new starting vector.
- We get its Lanczos decomposition of order $k$ for free.
- For large $n$ the major cost will be in computing $U_mQ$.

Practical implementation

Restarted Lanczos method
Input: Lanczos decomposition $AU_m = U_mT_m + \beta_mu_{m+1}e_m^T$.
Output: new Lanczos decomposition $AU_k = U_kT_k + \beta_ku_{k+1}e_k^T$.
1: Compute the eigenvalues $\theta_1, \ldots, \theta_m$ of $T_m$.
2: Determine the shifts, say $\theta_1, \ldots, \theta_{m-k}$, and set $b_m = e_m$.
3: for $j = 1, \ldots, m-k$ do
4:   Compute the QR factorization $T_m - \theta_jI = Q_mR_m$.
5:   Update $T_m := R_mQ_m + \theta_jI$, $U_m := U_mQ_m$, $b_m := Q_m^Hb_m$.
6: end for
7: Compute $v = \beta_ku_{k+1} + \beta_mb_m(k)u_{m+1}$, where $\beta_k = T_m(k+1, k)$ and $u_{k+1} = U_m(:, k+1)$.
8: Set $U_k := U_m(:, 1{:}k)$, $\beta_k = \|v\|_2$, $u_{k+1} = v/\beta_k$, and $T_k := T_m(1{:}k, 1{:}k)$.
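A dense NumPy sketch of this restart procedure; it applies one explicitly shifted QR step per unwanted Ritz value, which is mathematically equivalent to (though less efficient than) the implicit Givens version given below:

import numpy as np

def restart(U, T, beta_m, u_next, shifts, k):
    # filter A U_m = U_m T_m + beta_m u_{m+1} e_m^T down to order k
    m = T.shape[0]
    b = np.zeros(m); b[-1] = 1.0             # b_m = e_m
    for theta in shifts:                      # the m-k unwanted Ritz values
        Q, R = np.linalg.qr(T - theta * np.eye(m))
        T = R @ Q + theta * np.eye(m)         # one single-shift QR step
        U = U @ Q
        b = Q.T @ b
    v = T[k, k - 1] * U[:, k] + beta_m * b[k - 1] * u_next
    beta_k = np.linalg.norm(v)
    return U[:, :k], T[:k, :k], beta_k, v / beta_k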

Question
Can we implicitly compute $Q_m$ and get the new tridiagonal matrix $T_m$?

General algorithm
1. Determine the first column $c_1$ of $T_m - \theta_jI$.
2. Let $\tilde{Q}$ be a Householder transformation such that $\tilde{Q}^Hc_1 = \sigma e_1$.
3. Set $\tilde{T} = \tilde{Q}^HT_m\tilde{Q}$.
4. Use Householder transformations $\hat{Q}$ to reduce $\tilde{T}$ to a new tridiagonal form $T = \hat{Q}^H\tilde{T}\hat{Q}$.
5. Set $Q_m = \tilde{Q}\hat{Q}$.

Question
Is the general algorithm equivalent to one step of the single shift QR algorithm?

Answer:
(I) Let
$$T_m - \theta_jI = \begin{bmatrix} c_1 & C \end{bmatrix} = Q_mR_m = \begin{bmatrix} q & \check{Q}_m \end{bmatrix}\begin{bmatrix} \rho & r^H \\ 0 & \check{R} \end{bmatrix}$$
be the QR factorization of $T_m - \theta_jI$. Then $c_1 = \rho q$. Partition $\tilde{Q} = [\hat{q}\ \check{Q}]$; then $c_1 = \sigma\tilde{Q}e_1 = \sigma\hat{q}$, which implies that $q$ and $\hat{q}$ are both proportional to $c_1$.
(II) Since $T = \hat{Q}^H\tilde{T}\hat{Q}$ is tridiagonal, we have $\hat{Q}e_1 = e_1$. Hence
$$(\tilde{Q}\hat{Q})e_1 = \tilde{Q}e_1 = \hat{q},$$
which implies that the first column of $\tilde{Q}\hat{Q}$ is proportional to $q$.
(III) Since $(\tilde{Q}\hat{Q})^HT_m(\tilde{Q}\hat{Q})$ is tridiagonal and the first column of $\tilde{Q}\hat{Q}$ is proportional to $q$, by the implicit Q theorem, if $T$ is unreduced, then $\tilde{Q}\hat{Q}$ agrees with $Q_m$ up to column scaling, and $T = (\tilde{Q}\hat{Q})^HT_m(\tilde{Q}\hat{Q})$ is the tridiagonal matrix produced by one step of the single shift QR algorithm.

Definition 25 (Givens rotation)
A plane rotation (also called a Givens rotation) is a matrix of the form
$$G = \begin{bmatrix} c & s \\ -\bar{s} & c \end{bmatrix},$$
where $c$ is real and $|c|^2 + |s|^2 = 1$. Given $a \ne 0$ and $b$, set
$$v = \sqrt{|a|^2 + |b|^2}, \quad c = \frac{|a|}{v}, \quad s = \frac{a\bar{b}}{|a|\,v};$$
then
$$\begin{bmatrix} c & s \\ -\bar{s} & c \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} v\,a/|a| \\ 0 \end{bmatrix}.$$
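A small sketch computing the pair $(c, s)$ of Definition 25 (assuming $a \ne 0$, as in the definition):

import numpy as np

def givens(a, b):
    # G = [[c, s], [-conj(s), c]] maps (a, b) to (v a/|a|, 0)
    if b == 0:
        return 1.0, 0.0
    v = np.hypot(abs(a), abs(b))
    c = abs(a) / v
    s = a * np.conjugate(b) / (abs(a) * v)
    return c, s

a, b = 3.0 + 1.0j, 2.0 - 1.0j
c, s = givens(a, b)
print(c * a + s * b, -np.conjugate(s) * a + c * b)   # second entry ~ 0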

Let
$$G_{ij} = \begin{bmatrix} I_{i-1} & & & & \\ & c & & s & \\ & & I_{j-i-1} & & \\ & -\bar{s} & & c & \\ & & & & I_{n-j} \end{bmatrix}.$$
(I) Compute the first column
$$t_1 = \begin{bmatrix} \alpha_1 - \theta_j & \beta_1 & 0 & \cdots & 0 \end{bmatrix}^T$$
of $T_m - \theta_jI$ and determine a Givens rotation $G_{12}$ such that $G_{12}t_1 = \gamma e_1$.
(II) Set $T_m := G_{12}T_mG_{12}^H$. This similarity transformation preserves the tridiagonal structure except for a bulge created in positions $(3, 1)$ and $(1, 3)$.

(III) Construct an orthonormal $Q$ such that $T_m := QT_mQ^H$ is tridiagonal: choose $G_{23}$ to annihilate the $(3, 1)$ bulge, which moves the bulge to positions $(4, 2)$ and $(2, 4)$; then choose $G_{34}$ to annihilate the $(4, 2)$ entry, moving the bulge to positions $(5, 3)$ and $(3, 5)$; continue with $G_{45}, \ldots, G_{m-1,m}$ until the bulge is chased off the bottom of the matrix and $T_m$ is again tridiagonal.

Implicit restarting for the Lanczos method
Input: Lanczos decomposition $AU_m = U_mT_m + \beta_mu_{m+1}e_m^T$.
Output: new Lanczos decomposition $AU_k = U_kT_k + \beta_ku_{k+1}e_k^T$.
1: Compute the eigenvalues $\theta_1, \ldots, \theta_m$ of $T_m$.
2: Determine the shifts, say $\theta_1, \ldots, \theta_{m-k}$, and set $b_m = e_m$.
3: for $j = 1, \ldots, m-k$ do
4:   Compute a Givens rotation $G_{12}$ such that $G_{12}\,[\alpha_1 - \theta_j,\ \beta_1,\ 0,\ \ldots,\ 0]^T = \gamma e_1$.
5:   Update $T_m := G_{12}T_mG_{12}^H$, $U_m := U_mG_{12}^H$, $b_m := G_{12}b_m$.
6:   Compute Givens rotations $G_{23}, \ldots, G_{m-1,m}$ such that $T_m := G_{m-1,m}\cdots G_{23}T_mG_{23}^H\cdots G_{m-1,m}^H$ is tridiagonal.
7:   Update $U_m := U_mG_{23}^H\cdots G_{m-1,m}^H$ and $b_m := G_{m-1,m}\cdots G_{23}b_m$.
8: end for
9: Compute $v = \beta_ku_{k+1} + \beta_mb_m(k)u_{m+1}$, where $\beta_k = T_m(k+1, k)$ and $u_{k+1} = U_m(:, k+1)$.
10: Set $U_k := U_m(:, 1{:}k)$, $\beta_k = \|v\|_2$, $u_{k+1} = v/\beta_k$, and $T_k := T_m(1{:}k, 1{:}k)$.

Problem
Mathematically, the $u_j$ must be orthogonal. In practice, they can lose orthogonality.

Solution
Reorthogonalize the vectors at each step.

$j$-th step of the Lanczos process
Input: $\beta_{j-1}$ and an orthonormal matrix $U_j = [u_1, \ldots, u_j]$.
Output: $\alpha_j$, $\beta_j$ and a unit vector $u_{j+1}$ with $u_{j+1}^HU_j = 0$ and
$$Au_j = \alpha_ju_j + \beta_{j-1}u_{j-1} + \beta_ju_{j+1}.$$
1: Compute $u_{j+1} = Au_j - \beta_{j-1}u_{j-1}$ and $\alpha_j = u_j^Hu_{j+1}$;
2: Update $u_{j+1} := u_{j+1} - \alpha_ju_j$;
3: for $i = 1, \ldots, j$ do
4:   Compute $\gamma_i = u_i^Hu_{j+1}$ and update $u_{j+1} := u_{j+1} - \gamma_iu_i$;
5: end for
6: Update $\alpha_j := \alpha_j + \gamma_j$, and compute $\beta_j = \|u_{j+1}\|_2$ and $u_{j+1} := u_{j+1}/\beta_j$.

Lanczos algorithm with implicit restarting
Input: initial unit vector $u_1$, number $k$ of desired eigenpairs, restarting number $m$ and stopping tolerance $\varepsilon$.
Output: desired eigenpairs $(\theta_i, x_i)$ for $i = 1, \ldots, k$.
1: Compute the Lanczos decomposition of order $k$: $AU_k = U_kT_k + \beta_ku_{k+1}e_k^T$;
2: repeat
3:   Extend the Lanczos decomposition from order $k$ to order $m$: $AU_m = U_mT_m + \beta_mu_{m+1}e_m^T$;
4:   Use the implicitly restarting scheme to reform a new Lanczos decomposition of order $k$;
5:   Compute the eigenpairs $(\theta_i, s_i)$, $i = 1, \ldots, k$, of $T_k$;
6: until ($\beta_k|e_k^Ts_i| < \varepsilon$ for $i = 1, \ldots, k$)
7: Compute the eigenvectors $x_i = U_ks_i$ for $i = 1, \ldots, k$.

Arnoldi method
Recall the Arnoldi decomposition of an unsymmetric $A$:
$$AU_k = U_kH_k + h_{k+1,k}u_{k+1}e_k^T, \quad (17)$$
where $H_k$ is unreduced upper Hessenberg. Write the last column of (17) in the form
$$Au_k = U_kh_k + h_{k+1,k}u_{k+1}.$$
Then from the orthogonality of $U_{k+1}$, we have
$$h_k = U_k^HAu_k.$$
Since $h_{k+1,k}u_{k+1} = Au_k - U_kh_k$ and $\|u_{k+1}\|_2 = 1$, we must have
$$h_{k+1,k} = \|Au_k - U_kh_k\|_2, \quad u_{k+1} = h_{k+1,k}^{-1}(Au_k - U_kh_k).$$

Arnoldi process
1: for $k = 1, 2, \ldots$ do
2:   $h_k = U_k^HAu_k$;
3:   $v = Au_k - U_kh_k$;
4:   $h_{k+1,k} = \|v\|_2$;
5:   $u_{k+1} = v/h_{k+1,k}$;
6:   $\hat{H}_k = \begin{bmatrix} \hat{H}_{k-1} & h_k \\ 0 & h_{k+1,k} \end{bmatrix}$;
7: end for

The computation of $u_{k+1}$ is actually a form of the well-known Gram-Schmidt algorithm. In the presence of inexact arithmetic, cancellation in statement 3 can cause it to fail to produce orthogonal vectors. The cure is a process called reorthogonalization.
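A compact NumPy version of the Arnoldi process above (classical Gram-Schmidt form, without the reorthogonalization step of the next slide):

import numpy as np

def arnoldi(A, u1, k):
    # builds A U_k = U_{k+1} Hhat_k with Hhat_k (k+1) x k upper Hessenberg
    n = u1.shape[0]
    U = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    U[:, 0] = u1 / np.linalg.norm(u1)
    for j in range(k):
        v = A @ U[:, j]
        H[:j + 1, j] = U[:, :j + 1].conj().T @ v   # h_k = U_k^H A u_k
        v = v - U[:, :j + 1] @ H[:j + 1, j]        # v = A u_k - U_k h_k
        H[j + 1, j] = np.linalg.norm(v)
        U[:, j + 1] = v / H[j + 1, j]
    return U, H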

Reorthogonalized Arnoldi process
1: for $k = 1, 2, \ldots$ do
2:   $h_k = U_k^HAu_k$;
3:   $v = Au_k - U_kh_k$;
4:   $w = U_k^Hv$;
5:   $h_k = h_k + w$;
6:   $v = v - U_kw$;
7:   $h_{k+1,k} = \|v\|_2$;
8:   $u_{k+1} = v/h_{k+1,k}$;
9:   $\hat{H}_k = \begin{bmatrix} \hat{H}_{k-1} & h_k \\ 0 & h_{k+1,k} \end{bmatrix}$;
10: end for

Let $y_i^{(k)}$ be an eigenvector of $H_k$ associated with the eigenvalue $\lambda_i^{(k)}$ and $x_i^{(k)} = U_ky_i^{(k)}$ the Ritz approximate eigenvector.

Theorem 26
$$(A - \lambda_i^{(k)}I)x_i^{(k)} = h_{k+1,k}\,(e_k^Ty_i^{(k)})\,u_{k+1},$$
and therefore,
$$\|(A - \lambda_i^{(k)}I)x_i^{(k)}\|_2 = |h_{k+1,k}|\,|e_k^Ty_i^{(k)}|.$$

Generalized eigenvalue problem
Consider the generalized eigenvalue problem
$$Ax = \lambda Bx,$$
where $B$ is nonsingular. Let
$$C = B^{-1}A.$$
Applying the Arnoldi process to the matrix $C$, we get
$$CU_k = U_kH_k + h_{k+1,k}u_{k+1}e_k^T,$$
or
$$AU_k = BU_kH_k + h_{k+1,k}Bu_{k+1}e_k^T. \quad (18)$$

Write the $k$-th column of (18) in the form
$$Au_k = BU_kh_k + h_{k+1,k}Bu_{k+1}. \quad (19)$$
Let $U_k$ satisfy $U_k^HU_k = I_k$. Then
$$h_k = U_k^HB^{-1}Au_k$$
and
$$h_{k+1,k}u_{k+1} = B^{-1}Au_k - U_kh_k \equiv t_k,$$
which implies that
$$h_{k+1,k} = \|t_k\|_2, \quad u_{k+1} = h_{k+1,k}^{-1}t_k.$$

Shift-and-invert Lanczos for the GEP
Consider the generalized eigenvalue problem
$$Ax = \lambda Bx, \quad (20)$$
where $A$ is symmetric and $B$ is symmetric positive definite.

Shift-and-invert
Compute the eigenvalues which are closest to a given shift value $\sigma$. Transform (20) into
$$(A - \sigma B)^{-1}Bx = (\lambda - \sigma)^{-1}x. \quad (21)$$
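A sketch of the shift-and-invert operator with SciPy: $(A - \sigma B)$ is factored once and reused at every Lanczos step (splu and the sparse types are standard SciPy calls; the wrapper name is ours):

import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

def shift_invert_operator(A, B, sigma):
    # returns w -> (A - sigma*B)^{-1} (B w), factoring A - sigma*B once
    lu = splu(csc_matrix(A - sigma * B))
    return lambda w: lu.solve(B @ w)

Each application then costs one multiplication by $B$ plus one pair of sparse triangular solves, which is what makes shift-and-invert practical for large problems.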

The basic recursion for applying the Lanczos method to (21) is
$$(A - \sigma B)^{-1}BU_j = U_jT_j + \beta_ju_{j+1}e_j^T, \quad (22)$$
where the basis $U_j$ is $B$-orthogonal and $T_j$ is a real symmetric tridiagonal matrix defined by
$$T_j = \begin{bmatrix} \alpha_1 & \beta_1 & & \\ \beta_1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_{j-1} \\ & & \beta_{j-1} & \alpha_j \end{bmatrix},$$
or equivalently,
$$(A - \sigma B)^{-1}Bu_j = \alpha_ju_j + \beta_{j-1}u_{j-1} + \beta_ju_{j+1}.$$

By the condition $U_j^TBU_j = I_j$, it holds that
$$\alpha_j = u_j^TB(A - \sigma B)^{-1}Bu_j, \quad \beta_j^2 = t_j^TBt_j,$$
where
$$t_j \equiv (A - \sigma B)^{-1}Bu_j - \alpha_ju_j - \beta_{j-1}u_{j-1} = \beta_ju_{j+1}.$$
An eigenpair $(\theta_i, s_i)$ of $T_j$ is used to get an approximate eigenpair $(\lambda_i, x_i)$ of $(A, B)$ by
$$\lambda_i = \sigma + \frac{1}{\theta_i}, \quad x_i = U_js_i.$$
The corresponding residual is
$$r_i = AU_js_i - \lambda_iBU_js_i = (A - \sigma B)U_js_i - \theta_i^{-1}BU_js_i = -\theta_i^{-1}\left[BU_j - (A - \sigma B)U_jT_j\right]s_i$$
$$= -\theta_i^{-1}(A - \sigma B)\left[(A - \sigma B)^{-1}BU_j - U_jT_j\right]s_i = -\theta_i^{-1}\beta_j(e_j^Ts_i)(A - \sigma B)u_{j+1},$$
which implies that $r_i$ is small whenever $|\beta_j(e_j^Ts_i)/\theta_i|$ is small.

Shift-and-invert Lanczos method for the symmetric GEP
1: Given a starting vector $t$, compute $q = Bt$ and $\beta_0 = \sqrt{q^Tt}$.
2: for $j = 1, 2, \ldots$ do
3:   Compute $w_j = q/\beta_{j-1}$ and $u_j = t/\beta_{j-1}$.
4:   Solve the linear system $(A - \sigma B)t = w_j$.
5:   Set $t := t - \beta_{j-1}u_{j-1}$; compute $\alpha_j = w_j^Tt$ and reset $t := t - \alpha_ju_j$.
6:   $B$-reorthogonalize $t$ to $u_1, \ldots, u_j$ if necessary.
7:   Compute $q = Bt$ and $\beta_j = \sqrt{q^Tt}$.
8:   Compute the approximate eigenvalues $T_j = S_j\Theta_jS_j^T$.
9:   Test for convergence.
10: end for
11: Compute the approximate eigenvectors $X = U_jS_j$.

Krylov-Schur restarting method (locking and purging)

Theorem 27 (Schur decomposition)
Let $A \in \mathbb{C}^{n\times n}$. Then there is a unitary matrix $V$ such that
$$A = VTV^H,$$
where $T$ is upper triangular.

The process of Krylov-Schur restarting:
- Compute the Schur decomposition of the Rayleigh quotient.
- Move the desired eigenvalues to the beginning.
- Throw away the rest of the decomposition.

Use the Lanczos process to generate the Lanczos decomposition of order $m \equiv k + p$:
$$(A - \sigma B)^{-1}BU_{k+p} = U_{k+p}T_{k+p} + \beta_{k+p}u_{k+p+1}e_{k+p}^T. \quad (23)$$
Let
$$T_{k+p} = V_{k+p}D_{k+p}V_{k+p}^T \equiv \begin{bmatrix} V_k & V_p \end{bmatrix}\operatorname{diag}(D_k, D_p)\begin{bmatrix} V_k & V_p \end{bmatrix}^T \quad (24)$$
be a Schur decomposition of $T_{k+p}$. The diagonal elements of $D_k$ and $D_p$ contain the $k$ wanted and the $p$ unwanted Ritz values, respectively.

Substituting (24) into (23), it holds that
$$(A - \sigma B)^{-1}B(U_{k+p}V_{k+p}) = (U_{k+p}V_{k+p})(V_{k+p}^TT_{k+p}V_{k+p}) + \beta_{k+p}u_{k+p+1}(e_{k+p}^TV_{k+p}),$$
which implies that
$$(A - \sigma B)^{-1}B\tilde{U}_k = \tilde{U}_kD_k + u_{k+p+1}t_k^T = \begin{bmatrix} \tilde{U}_k & \tilde{u}_{k+1} \end{bmatrix}\begin{bmatrix} D_k \\ t_k^T \end{bmatrix}$$
is a Krylov decomposition of order $k$, where $\tilde{U}_k \equiv U_{k+p}V_k$, $\tilde{u}_{k+1} = u_{k+p+1}$ and
$$\begin{bmatrix} t_k^T & t_p^T \end{bmatrix} \equiv \beta_{k+p}e_{k+p}^TV_{k+p}.$$
The new vectors $\tilde{u}_{k+2}, \ldots, \tilde{u}_{k+p+1}$ are computed sequentially, starting from $\tilde{u}_{k+2}$ with
$$(A - \sigma B)^{-1}B\tilde{U}_{k+1} = \begin{bmatrix} \tilde{U}_{k+1} & \tilde{u}_{k+2} \end{bmatrix}\begin{bmatrix} D_k & t_k \\ t_k^T & \alpha_{k+1} \\ 0 & \tilde{\beta}_{k+1} \end{bmatrix},$$

or equivalently,
$$(A - \sigma B)^{-1}B\tilde{u}_{k+1} = \tilde{U}_kt_k + \alpha_{k+1}\tilde{u}_{k+1} + \tilde{\beta}_{k+1}\tilde{u}_{k+2},$$
for
$$\alpha_{k+1} = \tilde{u}_{k+1}^TB(A - \sigma B)^{-1}B\tilde{u}_{k+1}.$$
Consequently, a new Krylov decomposition of order $k + p$ can be generated by
$$(A - \sigma B)^{-1}B\tilde{U}_{k+p} = \tilde{U}_{k+p}\tilde{T}_{k+p} + \tilde{\beta}_{k+p}\tilde{u}_{k+p+1}e_{k+p}^T, \quad (25)$$
where
$$\tilde{T}_{k+p} = \begin{bmatrix} D_k & t_k & & & \\ t_k^T & \alpha_{k+1} & \tilde{\beta}_{k+1} & & \\ & \tilde{\beta}_{k+1} & \alpha_{k+2} & \ddots & \\ & & \ddots & \ddots & \tilde{\beta}_{k+p-1} \\ & & & \tilde{\beta}_{k+p-1} & \alpha_{k+p} \end{bmatrix}. \quad (26)$$
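A minimal sketch of the Krylov-Schur truncation step (24)-(25) in the symmetric case, where the Schur decomposition of $T_{k+p}$ reduces to its spectral decomposition; the boolean mask wanted, selecting the $k$ wanted Ritz values, is our illustrative interface:

import numpy as np

def krylov_schur_truncate(U, T, beta, u_next, wanted):
    # input:  op U = U T + beta u_next e^T   (order k+p, T symmetric)
    # output: op U_k = U_k D_k + u_next t_k^T (Krylov decomposition of order k)
    theta, V = np.linalg.eigh(T)                  # Schur = spectral decomposition here
    order = np.concatenate([np.flatnonzero(wanted), np.flatnonzero(~wanted)])
    V = V[:, order]                                # wanted Ritz values moved to the front
    k = int(np.count_nonzero(wanted))
    D_k = np.diag(theta[order[:k]])
    t_k = beta * V[-1, :k]                         # t_k^T = beta e^T V_k
    return U @ V[:, :k], D_k, t_k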

Shift-and-invert Krylov-Schur method for solving $Ax = \lambda Bx$
Build an initial Lanczos decomposition of order $k + p$:
$$(A - \sigma B)^{-1}BU_{k+p} = U_{k+p}T_{k+p} + \beta_{k+p}u_{k+p+1}e_{k+p}^T.$$
repeat
  Compute the Schur decomposition $T_{k+p} = [V_k\ V_p]\operatorname{diag}(D_k, D_p)[V_k\ V_p]^T$.
  Set $U_k := U_{k+p}V_k$, $u_{k+1} := u_{k+p+1}$ and $t_k^T := \beta_{k+p}e_{k+p}^TV_k$ to get $(A - \sigma B)^{-1}BU_k = U_kD_k + u_{k+1}t_k^T$.
  for $j = k+1, \ldots, k+p$ do
    Solve the linear system $(A - \sigma B)q = Bu_j$.
    if $j = k+1$ then
      Set $q := q - U_kt_k$.
    else
      Set $q := q - \beta_{j-1}u_{j-1}$.
    end if
    Compute $\alpha_j = u_j^TBq$ and reset $q := q - \alpha_ju_j$.
    $B$-reorthogonalize $q$ to $u_1, \ldots, u_{j-1}$ if necessary.
    Compute $\beta_j = \sqrt{q^TBq}$ and $u_{j+1} = q/\beta_j$.
  end for
  Decompose $T_{k+p} = S_{k+p}\Theta_{k+p}S_{k+p}^T$ to get the approximate eigenvalues.
  Test for convergence.
until all the $k$ wanted eigenpairs have converged


More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

Orthogonal iteration to QR

Orthogonal iteration to QR Notes for 2016-03-09 Orthogonal iteration to QR The QR iteration is the workhorse for solving the nonsymmetric eigenvalue problem. Unfortunately, while the iteration itself is simple to write, the derivation

More information

Davidson Method CHAPTER 3 : JACOBI DAVIDSON METHOD

Davidson Method CHAPTER 3 : JACOBI DAVIDSON METHOD Davidson Method CHAPTER 3 : JACOBI DAVIDSON METHOD Heinrich Voss voss@tu-harburg.de Hamburg University of Technology The Davidson method is a popular technique to compute a few of the smallest (or largest)

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Eigenvalue Problems CHAPTER 1 : PRELIMINARIES

Eigenvalue Problems CHAPTER 1 : PRELIMINARIES Eigenvalue Problems CHAPTER 1 : PRELIMINARIES Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Mathematics TUHH Heinrich Voss Preliminaries Eigenvalue problems 2012 1 / 14

More information

Krylov Subspaces. Lab 1. The Arnoldi Iteration

Krylov Subspaces. Lab 1. The Arnoldi Iteration Lab 1 Krylov Subspaces Lab Objective: Discuss simple Krylov Subspace Methods for finding eigenvalues and show some interesting applications. One of the biggest difficulties in computational linear algebra

More information

Orthogonal iteration to QR

Orthogonal iteration to QR Week 10: Wednesday and Friday, Oct 24 and 26 Orthogonal iteration to QR On Monday, we went through a somewhat roundabout algbraic path from orthogonal subspace iteration to the QR iteration. Let me start

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

5.3 The Power Method Approximation of the Eigenvalue of Largest Module

5.3 The Power Method Approximation of the Eigenvalue of Largest Module 192 5 Approximation of Eigenvalues and Eigenvectors 5.3 The Power Method The power method is very good at approximating the extremal eigenvalues of the matrix, that is, the eigenvalues having largest and

More information

Lecture 4 Orthonormal vectors and QR factorization

Lecture 4 Orthonormal vectors and QR factorization Orthonormal vectors and QR factorization 4 1 Lecture 4 Orthonormal vectors and QR factorization EE263 Autumn 2004 orthonormal vectors Gram-Schmidt procedure, QR factorization orthogonal decomposition induced

More information

Eigenvalue and Eigenvector Problems

Eigenvalue and Eigenvector Problems Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems

More information

Notes on Eigenvalues, Singular Values and QR

Notes on Eigenvalues, Singular Values and QR Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 17 1 / 26 Overview

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 7: More on Householder Reflectors; Least Squares Problems Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 15 Outline

More information

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 19, 2010 Today

More information

The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English.

The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English. Chapter 4 EIGENVALUE PROBLEM The German word eigen is cognate with the Old English word āgen, which became owen in Middle English and own in modern English. 4.1 Mathematics 4.2 Reduction to Upper Hessenberg

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to

More information

Orthonormal Transformations

Orthonormal Transformations Orthonormal Transformations Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 25, 2010 Applications of transformation Q : R m R m, with Q T Q = I 1.

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

The QR Algorithm. Marco Latini. February 26, 2004

The QR Algorithm. Marco Latini. February 26, 2004 The QR Algorithm Marco Latini February 26, 2004 Contents 1 Introduction 2 2 The Power and Inverse Power method 2 2.1 The Power Method............................... 2 2.2 The Inverse power method...........................

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm

Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Computing Eigenvalues and/or Eigenvectors;Part 2, The Power method and QR-algorithm Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 13, 2009 Today

More information

Applied Numerical Linear Algebra. Lecture 8

Applied Numerical Linear Algebra. Lecture 8 Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ

More information

Lecture 4 Basic Iterative Methods I

Lecture 4 Basic Iterative Methods I March 26, 2018 Lecture 4 Basic Iterative Methods I A The Power Method Let A be an n n with eigenvalues λ 1,...,λ n counted according to multiplicity. We assume the eigenvalues to be ordered in absolute

More information

MAA507, Power method, QR-method and sparse matrix representation.

MAA507, Power method, QR-method and sparse matrix representation. ,, and representation. February 11, 2014 Lecture 7: Overview, Today we will look at:.. If time: A look at representation and fill in. Why do we need numerical s? I think everyone have seen how time consuming

More information

Orthonormal Transformations and Least Squares

Orthonormal Transformations and Least Squares Orthonormal Transformations and Least Squares Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 30, 2009 Applications of Qx with Q T Q = I 1. solving

More information

Alternative correction equations in the Jacobi-Davidson method

Alternative correction equations in the Jacobi-Davidson method Chapter 2 Alternative correction equations in the Jacobi-Davidson method Menno Genseberger and Gerard Sleijpen Abstract The correction equation in the Jacobi-Davidson method is effective in a subspace

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

ENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition

ENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition ENGG5781 Matrix Analysis and Computations Lecture 8: QR Decomposition Wing-Kin (Ken) Ma 2017 2018 Term 2 Department of Electronic Engineering The Chinese University of Hong Kong Lecture 8: QR Decomposition

More information

Eigenvalue problems. Eigenvalue problems

Eigenvalue problems. Eigenvalue problems Determination of eigenvalues and eigenvectors Ax x, where A is an N N matrix, eigenvector x 0, and eigenvalues are in general complex numbers In physics: - Energy eigenvalues in a quantum mechanical system

More information

Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method

Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Approximating the matrix exponential of an advection-diffusion operator using the incomplete orthogonalization method Antti Koskela KTH Royal Institute of Technology, Lindstedtvägen 25, 10044 Stockholm,

More information

MULTIGRID ARNOLDI FOR EIGENVALUES

MULTIGRID ARNOLDI FOR EIGENVALUES 1 MULTIGRID ARNOLDI FOR EIGENVALUES RONALD B. MORGAN AND ZHAO YANG Abstract. A new approach is given for computing eigenvalues and eigenvectors of large matrices. Multigrid is combined with the Arnoldi

More information

The Singular Value Decomposition and Least Squares Problems

The Singular Value Decomposition and Least Squares Problems The Singular Value Decomposition and Least Squares Problems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 27, 2009 Applications of SVD solving

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Iterative Methods for Sparse Linear Systems

Iterative Methods for Sparse Linear Systems Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University

More information

ABSTRACT. Professor G.W. Stewart

ABSTRACT. Professor G.W. Stewart ABSTRACT Title of dissertation: Residual Arnoldi Methods : Theory, Package, and Experiments Che-Rung Lee, Doctor of Philosophy, 2007 Dissertation directed by: Professor G.W. Stewart Department of Computer

More information

Block Lanczos Tridiagonalization of Complex Symmetric Matrices

Block Lanczos Tridiagonalization of Complex Symmetric Matrices Block Lanczos Tridiagonalization of Complex Symmetric Matrices Sanzheng Qiao, Guohong Liu, Wei Xu Department of Computing and Software, McMaster University, Hamilton, Ontario L8S 4L7 ABSTRACT The classic

More information