A Jacobi-Davidson method for two real parameter nonlinear eigenvalue problems arising from delay differential equations


Heinrich Voss (Hamburg University of Technology (TUHH), Institute of Mathematics), joint work with Karl Meerbergen (KU Leuven) and Christian Schröder (TU Berlin). TUHH, Heinrich Voss, Delay JD, Beijing, May (41 slides).

Outline
1 Problem definition
2 Small critical delay problems
3 Iterative projection methods for nonlinear eigenproblems
4 A Jacobi-Davidson-type method for the two-real-parameter EVP
5 Numerical experience



Problem definition

Consider the linear time-invariant delay differential equation

Mẋ(t) + Ax(t) + Bx(t − τ) = 0

with given M, A, B ∈ C^{n×n}; τ ≥ 0 is the delay.

This system is (asymptotically) stable if, for every bounded initial condition, it holds that x(t) → 0 as t → ∞.

A necessary and sufficient condition is that the spectrum of the nonlinear eigenvalue problem

λMu + Au + e^{−τλ}Bu = 0

is contained in the open left half plane.
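The characteristic equation can be made concrete on the scalar delay equation ẋ(t) = −x(t − τ), i.e. M = 1, A = 0, B = 1, whose first critical delay is the textbook value τ = π/2 with purely imaginary eigenvalue λ = i. A minimal numerical sketch (the scalar example is illustrative and not from the slides):

```python
import cmath
import math

def T(lam, M, A, B, tau):
    """Evaluate the (here scalar) nonlinear eigenvalue operator
    T(lam) = lam*M + A + exp(-tau*lam)*B."""
    return lam * M + A + cmath.exp(-tau * lam) * B

# Scalar delay equation x'(t) = -x(t - tau): M = 1, A = 0, B = 1.
# First critical delay tau = pi/2, purely imaginary eigenvalue lam = i:
M, A, B = 1.0, 0.0, 1.0
tau_crit = math.pi / 2
residual = abs(T(1j, M, A, B, tau_crit))
print(residual)  # numerically zero: i + exp(-i*pi/2) = i - i = 0
```

At this τ an eigenvalue sits exactly on the imaginary axis, which is the stability-boundary situation the talk is after.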


Problem, continued

Approach at hand: homotopy, i.e. follow the eigenvalues close to the imaginary axis as τ changes.

Problem: expensive and unreliable; a different eigenvalue curve may have crossed the imaginary axis before the followed one arrives there.

Wanted: critical delays τ at which the system changes stability. A necessary condition is that the system has a purely imaginary eigenvalue.



Solving small critical delay problems

Recall the DEVP

(iωM + A + e^{−iωτ}B)u = 0,

where M, A, B ∈ C^{n×n}. We are interested in solutions (ω, τ, u) ∈ R × R × C^n, u ≠ 0.

Introducing the parameter µ = e^{−iωτ} translates the problem into the two-parameter problem

iωMu + Au + µBu = 0.

Note that µ lies on the unit circle, thus µ̄ = µ^{−1}. Hence the complex conjugate equation reads

−iωM̄ū + Āū + µ^{−1}B̄ū = 0.
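The two-parameter reformulation is easy to test numerically. The sketch below manufactures a solution (the rank-one choice of A is purely illustrative) and checks that conjugating the first equation, using that ω is real and µ̄ = µ^{−1}, gives the second one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Manufactured example: pick M, B, u, omega and mu on the unit circle, then
# choose A (an illustrative rank-one choice) so that
# i*omega*M u + A u + mu*B u = 0 holds exactly.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
omega = 1.3
mu = np.exp(-1j * 0.7)                       # |mu| = 1
y = 1j * omega * (M @ u) + mu * (B @ u)
A = -np.outer(y, u.conj()) / (u.conj() @ u)  # then A @ u = -y

res1 = np.linalg.norm(1j * omega * (M @ u) + A @ u + mu * (B @ u))
# Conjugating, with omega real and conj(mu) = 1/mu, gives the second equation:
res2 = np.linalg.norm(-1j * omega * (M.conj() @ u.conj())
                      + A.conj() @ u.conj()
                      + (1 / mu) * (B.conj() @ u.conj()))
print(res1, res2)  # both numerically zero
```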


Solving small critical delay problems

Using Kronecker products we eliminate ω:

0 = −(iωMu) ⊗ (M̄ū) − (Mu) ⊗ (−iωM̄ū)
  = (Au + µBu) ⊗ (M̄ū) + (Mu) ⊗ (Āū + µ^{−1}B̄ū)
  = ((A + µB) ⊗ M̄ + M ⊗ (Ā + µ^{−1}B̄))(u ⊗ ū),

and arrive at a rational eigenvalue problem. Expanding and multiplying by µ yields the quadratic eigenvalue problem

µ²(B ⊗ M̄)z + µ(A ⊗ M̄ + M ⊗ Ā)z + (M ⊗ B̄)z = 0,  z = u ⊗ ū.

We have thus shown: if (ω, τ, u) is a solution of the DEVP and µ = e^{−iωτ} (which lies on the unit circle), then (µ, u ⊗ ū) is an eigenpair of the quadratic eigenvalue problem (QEVP) above.
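This Kronecker elimination can be verified numerically; a sketch using the same manufactured solution as before (the rank-one A and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Manufactured solution of the two-parameter problem:
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
omega, mu = 1.3, np.exp(-1j * 0.7)
y = 1j * omega * (M @ u) + mu * (B @ u)
A = -np.outer(y, u.conj()) / (u.conj() @ u)

# Coefficients of the quadratic eigenvalue problem in mu:
A2 = np.kron(B, M.conj())
A1 = np.kron(A, M.conj()) + np.kron(M, A.conj())
A0 = np.kron(M, B.conj())
z = np.kron(u, u.conj())

res = np.linalg.norm(mu**2 * (A2 @ z) + mu * (A1 @ z) + A0 @ z)
print(res)  # numerically zero: (mu, u ⊗ conj(u)) is a QEVP eigenpair
```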


Solving small critical delay problems

The converse is also true:

Theorem. Let M, A, B ∈ C^{n×n} with M nonsingular. Then any eigenvector z ∈ C^{n²} of the QEVP corresponding to a simple eigenvalue µ ∈ C can be written as z = α u₁ ⊗ u₂ for some vectors u₁, u₂ ∈ C^n and some α ∈ C. Moreover, if |µ| = 1 then u₁ = ū₂ = u, and there is an ω ∈ R such that the DEVP holds.

We have thus transformed the problem of finding solutions of the DEVP into the problem of finding the eigenvalues of modulus one of a QEVP. This can be done by a structure-preserving method similar to the method presented in Fassbender, Mackey, Mackey, Schröder (2008).


Solving small critical delay problems

Once an eigenpair (µ, z) of the QEVP is known, u can be recovered from z = α u ⊗ ū. One possibility is to compute the dominant singular vector of the n × n matrix Z with z = vec(Z). Alternatively, one could choose u as a column of Z, scaled such that u has norm one.

Subsequently, ω can be obtained by projection, i.e.

ω = −Im((Mu)^H(Au + µBu)) / ‖Mu‖₂².

Finally, τ may be computed from µ and ω as τ = −Im(ln(µ))/ω.
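The recovery of u, ω, and τ from an eigenpair (µ, z) can be sketched as follows; the example is manufactured (illustrative rank-one A), and z is handed over only up to an unknown scaling:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u_true /= np.linalg.norm(u_true)
omega_true = 1.3
tau_true = 0.7 / omega_true            # so that mu = exp(-i*omega*tau) = exp(-0.7i)
mu = np.exp(-1j * omega_true * tau_true)
y = 1j * omega_true * (M @ u_true) + mu * (B @ u_true)
A = -np.outer(y, u_true.conj()) / (u_true.conj() @ u_true)

z = 2.17 * np.kron(u_true, u_true.conj())  # eigenvector, known only up to scaling

# Recover u as the dominant (left) singular vector of Z, where z = vec(Z):
Z = z.reshape(n, n)                    # Z is proportional to u * u^H, rank one
U_svd, s, Vh = np.linalg.svd(Z)
u = U_svd[:, 0]
# Recover omega by projection, then tau from mu and omega:
Mu = M @ u
omega = -np.imag(Mu.conj() @ (A @ u + mu * (B @ u))) / np.linalg.norm(Mu) ** 2
tau = -np.imag(np.log(mu)) / omega

print(omega, tau)  # recovers 1.3 and 0.7/1.3 (u only up to phase, which cancels)
```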


Structure-preserving method for the QEVP

µ²(B ⊗ M̄)z + µ(A ⊗ M̄ + M ⊗ Ā)z + (M ⊗ B̄)z = 0.

Recall that the Kronecker product does not, in general, commute, i.e. A ⊗ B ≠ B ⊗ A. However, there is a symmetric permutation matrix P such that A ⊗ B = P(B ⊗ A)P.

Hence, the QEVP can be rewritten as

(µ²A₂ + µA₁ + A₀)z = 0,

where A₂ = PĀ₀P and A₁ = PĀ₁P, with P = P^{−1} = P^T ∈ R^{n²×n²}.

Such problems are called PCP-palindromic eigenvalue problems.
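The permutation P is the n² × n² commutation (vec-permutation) matrix; a sketch that builds it explicitly and checks the properties used here:

```python
import numpy as np

def vec_permutation(n):
    """The symmetric permutation P with P @ kron(X, Y) @ P = kron(Y, X)
    (the n^2 x n^2 commutation matrix; it satisfies P = P.T = inv(P))."""
    P = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            P[i * n + j, j * n + i] = 1.0
    return P

rng = np.random.default_rng(3)
n = 3
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = vec_permutation(n)

ok_sym = np.allclose(P, P.T)                          # symmetric
ok_inv = np.allclose(P @ P, np.eye(n * n))            # involutory: P = inv(P)
ok_comm = np.allclose(np.kron(X, Y), P @ np.kron(Y, X) @ P)
print(ok_sym, ok_inv, ok_comm)  # True True True
```

With X = M, Y = B̄ this gives exactly A₂ = P Ā₀ P, and the symmetry of A₁ under this conjugation follows the same way.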


Structure-preserving method for the QEVP

The method can be divided into three steps.

First, the quadratic eigenvalue problem is reformulated as the linear problem

( µ [0  P; P  0] [Ā₁−Ā₂  Ā₀; Ā₀  Ā₀] [0  P; P  0] + [A₁−A₂  A₀; A₀  A₀] ) [µz; z] = 0

(block matrices written row-wise), which is a linear PCP-palindromic problem, because with P also [0  P; P  0] is a real symmetric permutation.

Second, using the factorization

[0  P; P  0] = UU^T,  where  U = (1/√2) [I  −iP; P  iI],

we define

C := U^H [A₁−A₂  A₀; A₀  A₀] Ū.
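The block signs in this linearization are hard to read in this transcription, so the reconstruction above is checked numerically below: with W = [A₁−A₂  A₀; A₀  A₀] and P̃ = [0  P; P  0], the pencil µ P̃ W̄ P̃ + W annihilates the vector [µz; z] whenever (µ, z) is a QEVP eigenpair (manufactured example, illustrative rank-one A):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
# Manufactured QEVP eigenpair (mu, z), as in the Kronecker construction:
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
omega, mu = 1.3, np.exp(-1j * 0.7)
y = 1j * omega * (M @ u) + mu * (B @ u)
A = -np.outer(y, u.conj()) / (u.conj() @ u)
A2 = np.kron(B, M.conj())
A1 = np.kron(A, M.conj()) + np.kron(M, A.conj())
A0 = np.kron(M, B.conj())
z = np.kron(u, u.conj())

# Permutation P with P @ conj(A0) @ P = A2 and P @ conj(A1) @ P = A1:
N = n * n
P = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        P[i * n + j, j * n + i] = 1.0
Pt = np.block([[np.zeros((N, N)), P], [P, np.zeros((N, N))]])

# Linear PCP-palindromic pencil mu * Pt @ conj(W) @ Pt + W:
W = np.block([[A1 - A2, A0], [A0, A0]])
pencil = mu * (Pt @ W.conj() @ Pt) + W
big_vec = np.concatenate([mu * z, z])

print(np.linalg.norm(pencil @ big_vec))  # numerically zero
```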


Structure-preserving method for the QEVP

Since U is unitary, i.e. U^H U = I = U^T Ū, this pencil is equivalent to

U^H ( µ [0  P; P  0] [Ā₁−Ā₂  Ā₀; Ā₀  Ā₀] [0  P; P  0] + [A₁−A₂  A₀; A₀  A₀] ) Ū
  = µ U^T [Ā₁−Ā₂  Ā₀; Ā₀  Ā₀] U + U^H [A₁−A₂  A₀; A₀  A₀] Ū
  = µ C̄ + C.

Third, consider an eigenpair (θ, x) of the real generalized eigenvalue problem

Re(C)x = θ Im(C)x

or, equivalently,

Cx + ((i + θ)/(i − θ)) C̄x = 0.

Note that µ := (i + θ)/(i − θ) is on the unit circle if and only if θ is real or θ = ∞ (the latter resulting in µ = −1). Note also that real simple eigenvalues of real pencils can be computed stably by, e.g., the real QZ algorithm.


Structure-preserving method for the QEVP

Require: M, A, B ∈ C^{n×n}
Ensure: solutions (ωⱼ, τⱼ, uⱼ), j = 1, 2, ..., of (iωM + A + e^{−iωτ}B)u = 0
1: A₀ = M ⊗ B̄, A₁ = A ⊗ M̄ + M ⊗ Ā
2: Construct the permutation P
3: C = (1/2) [I  −iP; P  iI]^H [A₁ − PĀ₀P  A₀; A₀  A₀] [I  iP; P  −iI]
4: Compute all eigenpairs (θⱼ, xⱼ) of Re(C)x = θ Im(C)x for which θⱼ is real
5: for j = 1, 2, ... do
6:   µⱼ = (i + θⱼ)/(i − θⱼ)
7:   zⱼ = [I  iP] xⱼ
8:   Compute uⱼ as the dominant singular vector of mat(zⱼ)
9:   ωⱼ = −Im((Muⱼ)^H(Auⱼ + µⱼBuⱼ)) / ‖Muⱼ‖₂²
10:  τⱼ = −Im(ln(µⱼ))/ωⱼ
11: end for

The dominant computational part is step 4, with a complexity of O(n⁶).
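The whole pipeline can be sketched end-to-end on a tiny manufactured problem. This is a sketch under assumptions: the signs inside U and the middle block matrix are reconstructions (the transcription lost them), verified only numerically here, and the rank-one A plants a known critical pair ω* = 1.3, τ* = 0.7/1.3:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(5)
n = 3
# Manufacture (M, A, B) with a known critical pair:
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
omega_t, tau_t = 1.3, 0.7 / 1.3
mu_t = np.exp(-1j * omega_t * tau_t)
y = 1j * omega_t * (M @ u_true) + mu_t * (B @ u_true)
A = -np.outer(y, u_true.conj()) / (u_true.conj() @ u_true)

# Steps 1-3: coefficients, permutation P, and the matrix C:
N = n * n
A0 = np.kron(M, B.conj())
A1 = np.kron(A, M.conj()) + np.kron(M, A.conj())
P = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        P[i * n + j, j * n + i] = 1.0
A2 = P @ A0.conj() @ P
I = np.eye(N)
U = np.block([[I, -1j * P], [P, 1j * I]]) / np.sqrt(2)  # [[0,P],[P,0]] = U @ U.T
W = np.block([[A1 - A2, A0], [A0, A0]])
C = U.conj().T @ W @ U.conj()

# Step 4: real generalized eigenproblem Re(C) x = theta * Im(C) x:
theta, X = eig(C.real, C.imag)

# Steps 5-10: keep (numerically) real, finite theta; recover mu, u, omega, tau:
found = []
for j in np.flatnonzero(np.isfinite(theta) & (np.abs(theta.imag) < 1e-6)):
    th = theta[j].real
    mu = (1j + th) / (1j - th)
    x = X[:, j]
    z = x[:N] + 1j * (P @ x[N:])
    Uz, s, Vh = np.linalg.svd(z.reshape(n, n))
    u = Uz[:, 0]
    Mu = M @ u
    omega = -np.imag(Mu.conj() @ (A @ u + mu * (B @ u))) / np.linalg.norm(Mu) ** 2
    tau = -np.imag(np.log(mu)) / omega
    found.append((mu, omega, tau))

best = min(found, key=lambda t: abs(t[0] - mu_t))
print(best[1], best[2])  # the planted pair (1.3, 0.7/1.3) is among the candidates
```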



Iterative projection methods

For linear sparse eigenproblems T(λ) = λB − A, very efficient methods are iterative projection methods (e.g. the Lanczos, Arnoldi, and Jacobi-Davidson methods), where approximations to the wanted eigenvalues and eigenvectors are obtained from projections of the eigenproblem to subspaces of small dimension, which are expanded in the course of the algorithm.

Generalizations to nonlinear sparse eigenproblems:
nonlinear rational Krylov: Ruhe (2000, 2005), Jarlebring, V. (2005)
Arnoldi method: quadratic problems: Meerbergen (2001); general problems: V. (2003, 2004), Liao, Bai, Lee, Ko (2006), Liao (2007)
Jacobi-Davidson: polynomial problems: Sleijpen, Booten, Fokkema, van der Vorst (1996), Hwang, Lin, Wang, Wang (2004, 2005); general problems: T. Betcke, V. (2004), V. (2004, 2007), Schwetlick, Schreiber (2006), Schreiber (2008)


Iterative projection method

Require: initial basis V with V^H V = I; set m = 1
1: while m ≤ number of wanted eigenvalues do
2:   compute eigenpair (µ, y) of the projected problem V^H T(λ)V y = 0
3:   determine Ritz vector u = Vy, ‖u‖ = 1, and residual r = T(µ)u
4:   if ‖r‖ < ε then
5:     accept approximate eigenpair λₘ = µ, xₘ = u; increase m ← m + 1
6:     reduce search space V if necessary
7:     choose approximation (λₘ, u) to the next eigenpair, and compute r = T(λₘ)u
8:   end if
9:   expand search space V = [V, v_new]
10:  update projected problem
11: end while

Main tasks: expand the search space; choose an eigenpair of the projected problem (locking, purging).
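The skeleton can be illustrated in the simplest linear symmetric case T(λ) = A − λI, where the projected problem is an ordinary eigenproblem and the search space is expanded by the (orthonormalized) residual. This is an illustrative sketch of the projection loop, not the nonlinear method of the talk:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 60
S = rng.standard_normal((n, n))
Amat = (S + S.T) / 2          # symmetric test matrix, T(lam) = Amat - lam*I

# Iterative projection method for the smallest eigenvalue:
V = rng.standard_normal((n, 1))
V /= np.linalg.norm(V)
for _ in range(n):
    theta_all, Y = np.linalg.eigh(V.T @ Amat @ V)   # projected problem
    theta, yv = theta_all[0], Y[:, 0]               # Ritz pair (smallest)
    u = V @ yv                                      # Ritz vector
    r = Amat @ u - theta * u                        # residual
    if np.linalg.norm(r) < 1e-10:
        break
    # expand the search space by the orthonormalized residual:
    v_new = r - V @ (V.T @ r)
    v_new /= np.linalg.norm(v_new)
    V = np.hstack([V, v_new[:, None]])

lam_min = np.linalg.eigvalsh(Amat)[0]
print(theta - lam_min)  # the Ritz value converges to the smallest eigenvalue
```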


Expanding the subspace

Given a subspace V ⊂ C^n, expand V by a direction with high approximation potential for the next wanted eigenvector.

Let θ be an eigenvalue of the projected problem V^H T(λ)Vy = 0 and x = Vy a corresponding Ritz vector. Then inverse iteration yields the suitable candidate

v := T(θ)^{−1} T′(θ)x.

BUT: in each step one has to solve a large linear system with a varying matrix, and in a truly large problem the vector v will not be accessible, but only an inexact solution ṽ := v + e of T(θ)v = T′(θ)x, and the next iterate will be a solution of the projection of T(λ)x = 0 onto the expanded space Ṽ := span{V, ṽ}.
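In the linear special case T(λ) = A − λI (so T′(θ) = −I), the candidate v is, up to a scalar, a solve with the shifted matrix, and its approximation power is the classical inverse-iteration amplification. A small illustrative experiment (sizes and perturbations are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 40
S = rng.standard_normal((n, n))
Amat = (S + S.T) / 2
lam, Q = np.linalg.eigh(Amat)
x_true = Q[:, 0]                          # target eigenvector

# Crude approximation x and a Ritz-like value theta near lam[0]:
x = x_true + 0.1 * rng.standard_normal(n)
x /= np.linalg.norm(x)
theta = lam[0] + 1e-3

# For T(lam) = Amat - lam*I, T'(lam) = -I, so the inverse-iteration candidate
# v = T(theta)^{-1} T'(theta) x is (up to sign) a shifted solve:
v = np.linalg.solve(Amat - theta * np.eye(n), x)
v /= np.linalg.norm(v)

def angle_to(w):
    return np.arccos(min(1.0, abs(w @ x_true)))

print(angle_to(x), angle_to(v))  # the solve sharply reduces the angle
```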


Expansion of search space, continued

We assume that x is already a good approximation to an eigenvector of T(·). Then v will be an even better approximation, and therefore the eigenvector we are looking for will be very close to the plane E := span{x, v}.

We therefore neglect the influence of the orthogonal complement of x in V on the next iterate and discuss the nearness of the planes E and Ẽ := span{x, ṽ}.

If the angle between these two planes is small, then the projection of T(λ) onto Ṽ should be similar to the one onto span{V, v}, and the approximation properties of inverse iteration should be maintained. If this angle can become large, then it is not surprising that the convergence properties of inverse iteration are not reflected by the projection method.


Theorem. Let φ₀ = arccos(x^H v) denote the angle between x and v (both normalized), and denote the relative error of ṽ by ε := ‖e‖/‖v‖. Then the maximal possible acute angle between the planes E and Ẽ is

β(ε) = arccos √(1 − ε²/sin² φ₀)  if ε ≤ sin φ₀,
β(ε) = π/2                       if ε ≥ sin φ₀.
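The bound can be probed by random sampling of perturbations e with ‖e‖ = ε‖v‖ (a sketch; φ₀ and ε are illustrative, and ε := ‖e‖/‖v‖ as reconstructed above):

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(8)
n = 5
phi0 = 0.9                               # angle between x and v
x = np.eye(n)[:, 0]
v = np.cos(phi0) * np.eye(n)[:, 0] + np.sin(phi0) * np.eye(n)[:, 1]
eps = 0.25                               # relative error, eps < sin(phi0)

beta = np.arccos(np.sqrt(1 - eps**2 / np.sin(phi0)**2))

worst = 0.0
for _ in range(2000):
    e = rng.standard_normal(n)
    e *= eps * np.linalg.norm(v) / np.linalg.norm(e)   # ||e|| = eps * ||v||
    E = np.column_stack([x, v])
    Et = np.column_stack([x, v + e])                   # plane spanned by x, v+e
    worst = max(worst, subspace_angles(E, Et).max())

print(worst <= beta + 1e-12, worst, beta)  # the sampled angles never exceed beta
```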

Proof. (Given graphically on the slide; figure not reproduced in this transcription.)


Expansion by inexact inverse iteration

Obviously, for every α ∈ R, α ≠ 0, the plane E is also spanned by x and x + αv.

If Ẽ(α) is the plane which is spanned by x and a perturbed realization x + αv + e of x + αv, then by the same arguments as in the proof of the Theorem, the maximum angle between E and Ẽ(α) is

γ(α, ε) = arccos √(1 − ε²/sin² φ(α))  if ε ≤ sin φ(α),
γ(α, ε) = π/2                         if ε ≥ sin φ(α),

where φ(α) denotes the angle between x and x + αv.

Since the mapping φ ↦ arccos √(1 − ε²/sin² φ) decreases monotonically, the expansion of the search space by an inexact realization of t := x + αv is most robust with respect to small perturbations if α is chosen such that x and x + αv are orthogonal.


Expansion by inexact inverse iteration, continued

t := x + αv is orthogonal to x iff

v = x − (x^H x) / (x^H T(θ)^{−1} T′(θ)x) · T(θ)^{−1} T′(θ)x,   (∗)

which yields a maximum acute angle between E and Ẽ(α) of

γ(α, ε) = arccos √(1 − ε²)  if ε ≤ 1,
γ(α, ε) = π/2               if ε ≥ 1.

Expansion by inexact inverse iteration (two slides of illustrating figures; not reproduced in this transcription).


Jacobi-Davidson method

The expansion

v = x − (x^H x) / (x^H T(θ)^{−1} T′(θ)x) · T(θ)^{−1} T′(θ)x   (∗)

of the current search space V is the solution of the equation

(I − T′(θ)xx^H / (x^H T′(θ)x)) T(θ) (I − xx^H / (x^H x)) t = r,  t ⊥ x.

This is the so-called correction equation of the Jacobi-Davidson method, which was derived in T. Betcke & Voss (2004), generalizing the approach of Sleijpen and van der Vorst (1996) for linear and polynomial eigenvalue problems.

Hence, the Jacobi-Davidson method is the most robust realization of an expansion of a search space such that the direction of inverse iteration is contained in the expanded space, in the sense that it is least sensitive to inexact solves of linear systems T(θ)v = T′(θ)x.
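That the direction (∗) solves the correction equation (and is orthogonal to x) can be checked directly. The sketch below manufactures T(θ) and T′(θ) and enforces the Galerkin condition x^H T(θ)x = 0, which holds when θ solves the projected problem (all matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 6
T1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # T'(theta)
T0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)
# Enforce the Galerkin condition x^H T x = 0:
T = T0 - (x.conj() @ T0 @ x) * np.outer(x, x.conj())
r = T @ x

# Direction (*) from the slides:
w = np.linalg.solve(T, T1 @ x)
v = x - ((x.conj() @ x) / (x.conj() @ w)) * w

# v is orthogonal to x and solves the Jacobi-Davidson correction equation:
Pl = np.eye(n) - np.outer(T1 @ x, x.conj()) / (x.conj() @ (T1 @ x))
Pr = np.eye(n) - np.outer(x, x.conj()) / (x.conj() @ x)
print(abs(x.conj() @ v), np.linalg.norm(Pl @ T @ (Pr @ v) - r))  # both ~ 0
```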

Nonlinear Jacobi-Davidson

1: Start with orthonormal basis V; set m = 1
2: determine preconditioner M ≈ T(σ)^{−1}, σ close to the first wanted eigenvalue
3: while m ≤ number of wanted eigenvalues do
4:   compute eigenpair (µ, y) of the projected problem V^H T(λ)V y = 0
5:   determine Ritz vector u = Vy, ‖u‖ = 1, and residual r = T(µ)u
6:   if ‖r‖ < ε then
7:     accept approximate eigenpair λₘ = µ, xₘ = u; increase m ← m + 1
8:     reduce search space V if necessary
9:     choose new preconditioner M ≈ T(µ)^{−1} if indicated
10:    choose approximation (λₘ, u) to the next eigenpair, and compute r = T(λₘ)u
11:  end if
12:  solve approximately the correction equation
       (I − T′(µ)uu^H / (u^H T′(µ)u)) T(µ) (I − uu^H / (u^H u)) t = r,  t ⊥ u
13:  t = t − VV^H t, ṽ = t/‖t‖, reorthogonalize if necessary
14:  expand search space V = [V, ṽ]
15:  update projected problem
16: end while


Correction equation

The correction equation is solved approximately by a few steps of an iterative solver (GMRES or BiCGStab).

The operator T(σ) is restricted to map the subspace x^⊥ into itself. Hence, if M ≈ T(σ) is a preconditioner of T(σ), σ ≈ µ, then the preconditioner for an iterative solver of the correction equation should be modified correspondingly to

M̃ := (I − T′(µ)xx^H / (x^H T′(µ)x)) M (I − xx^H / (x^H x)).

Taking into account the projectors in the preconditioner, i.e. using M̃ instead of M, raises the cost of the preconditioned Krylov solver only slightly (cf. Sleijpen, van der Vorst): only one additional linear solve with system matrix M is required.


77 Iterative projection methods for nonlinear eigenproblems Common approach to Jacobi Davidson Expand the search space V by the direction defined by one step of Newton s method applied to ( ) ( T (θ)x 0 x T =, x 1 0) i.e. ( ) ( T (θ) T (θ)x v 2x T = 0 α) ( ) T (θ)x 0 T (θ)v + αt (θ)x = T (θ)x, 2x T v = 0 v = x αt (θ) 1 T (θ)x, x T v = 0 x T x v = x x T T (θ) 1 T (θ)x T (θ) 1 T (θ)x, and the quadratic convergence of Newton s method explains the fast convergence of the Jacobi Davidson method. TUHH Heinrich Voss Delay JD Beijing, May / 41

78 Iterative projection methods for nonlinear eigenproblems Common approach to Jacobi Davidson BUT the resulting correction equation is solved only very inexactly which spoils the good approximation properties of Newton s method. TUHH Heinrich Voss Delay JD Beijing, May / 41 Expand the search space V by the direction defined by one step of Newton s method applied to ( ) ( T (θ)x 0 x T =, x 1 0) i.e. ( ) ( T (θ) T (θ)x v 2x T = 0 α) ( ) T (θ)x 0 T (θ)v + αt (θ)x = T (θ)x, 2x T v = 0 v = x αt (θ) 1 T (θ)x, x T v = 0 x T x v = x x T T (θ) 1 T (θ)x T (θ) 1 T (θ)x, and the quadratic convergence of Newton s method explains the fast convergence of the Jacobi Davidson method.
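Up to scaling, the Newton direction above reproduces inverse iteration with T(θ)^{-1}T'(θ). A small numpy illustration on the linear family T(λ) = A - λI (so T'(λ) = -I), with a Rayleigh quotient update for θ; this is a toy stand-in for the argument, not the delay problem itself:

```python
import numpy as np

# One Newton step on [T(θ)x; x^T x - 1] = 0 gives, after normalization,
#   x_new ∝ T(θ)^{-1} T'(θ) x,
# i.e. inverse iteration. For T(λ) = A - λI this is classical shifted
# inverse iteration; combined with a Rayleigh quotient update it is RQI.

rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric test matrix

theta = np.linalg.eigvalsh(A)[0] + 0.1               # rough shift near an eigenvalue
x = rng.standard_normal(n); x /= np.linalg.norm(x)
res0 = np.linalg.norm(A @ x - theta * x)

for _ in range(6):
    w = np.linalg.solve(A - theta * np.eye(n), -x)   # T(θ)^{-1} T'(θ) x
    x = w / np.linalg.norm(w)
    theta = x @ A @ x                                # Rayleigh quotient update
res = np.linalg.norm(A @ x - theta * x)
print(res0, res)   # the residual drops very rapidly
```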

Iterative projection methods for nonlinear eigenproblems: More general approach

Assume that we are given a base method

  x_{k+1} = S(x_k, θ_k),   θ_{k+1} = p(x_{k+1}, θ_k)

which converges locally:

  ||x^ - x_{k+1}|| = O(||x^ - x_k||^{q_1}),   |λ^ - θ_{k+1}| = O(|λ^ - θ_k|^{q_2}).

Robustification:

(i) Find v_k = x_k + α S(x_k, θ_k)x_k with (t_k)^H B x_k = 0

(ii) Choose x_{k+1} ∈ span{x_k, v_k}, for instance by solving the projected problem P_k^H T(λ)P_k z_k = 0, x_{k+1} = P_k z_k, where P_k is the orthogonal projector onto span{x_k, v_k}

(iii) θ_{k+1} = p(x_{k+1})

Here B is a Hermitian positive definite matrix (for instance, if it is known in advance that the eigenvectors are B-orthogonal).

A Jacobi-Davidson-type method for two-real-param. EVP: Outline

1 Problem definition
2 Small critical delay problems
3 Iterative projection methods for nonlinear eigenproblems
4 A Jacobi-Davidson-type method for two-real-param. EVP
5 Numerical Experience

A JD-type method for two-real-param. EVP

Solve the NEP in two real parameters

  T(ω, τ)u := (iωM + A + e^{-iωτ}B)u = 0

by a straightforward adaptation of the (nonlinear) JD method:

1 given an ansatz space span(V) of eigenvector approximations (known approximations of eigenvectors, or a random vector)
2 solve the projected problem V^H T(ω, τ)Vz = 0 (using the presented method for small problems)
3 compute approximate eigenvectors u_i = Vz_i and residuals r_i = T(ω_i, τ_i)u_i
4 stop if enough eigentriples have converged (we use ||r_i||_2 ≤ tol)
5 compute a correction c of an approximate eigenvector û, i.e., û + c is a better approximation (that's the interesting part)
6 expand V ← orth[V, c] and GOTO 2 (repeat as long as necessary)

Correction equation - medium sized problems

Given: approximate eigentriple (ω̂, τ̂, û). Wanted: correction (δ, ε, c).

One step of Newton's method applied to

  [ T(ω̂ + δ, τ̂ + ε)(û + c) ; û^H c ] = 0

yields

  [ T(ω̂, τ̂)   T_ω(ω̂, τ̂)û   T_τ(ω̂, τ̂)û ] [ c ]     [ r̂ ]
  [ û^H        0              0            ] [ δ ] = - [ 0 ],
                                             [ ε ]

a linear system of n + 1 complex equations in n complex and 2 real unknowns; no standard software is available for this. Appending the complex conjugate equations gives the square system (with T̂ := T(ω̂, τ̂), etc.)

  [ T̂     0            T̂_ω û          T̂_τ û        ] [ c ]     [ r̂       ]
  [ 0     conj(T̂)     conj(T̂_ω û)   conj(T̂_τ û)  ] [ d ]     [ conj(r̂) ]
  [ û^H   0            0               0             ] [ δ ] = - [ 0        ]
  [ 0     û^T          0               0             ] [ ε ]     [ 0        ].

Lemma: d = conj(c) and δ, ε ∈ R.

What if n is large and a preconditioner is available for T̂ only?
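For medium-sized n, the mixed complex/real system can simply be rewritten as a square real system: splitting the n + 1 complex equations into real and imaginary parts gives 2n + 2 real equations in the 2n + 2 real unknowns (Re c, Im c, δ, ε). A numpy sketch of that option (one possible implementation, not necessarily the one used on the slides):

```python
import numpy as np

def solve_bordered(T, a, b, u, r):
    """Solve T c + δ a + ε b = -r, u^H c = 0, with c complex and δ, ε real,
    by splitting all complex equations into real and imaginary parts."""
    n = T.shape[0]
    S = np.zeros((2 * n + 2, 2 * n + 2))
    rhs = np.zeros(2 * n + 2)
    # T c = (Tr + i Ti)(cr + i ci): real and imaginary parts
    S[:n, :n], S[:n, n:2*n] = T.real, -T.imag
    S[n:2*n, :n], S[n:2*n, n:2*n] = T.imag, T.real
    S[:n, 2*n], S[n:2*n, 2*n] = a.real, a.imag        # δ column (δ real)
    S[:n, 2*n+1], S[n:2*n, 2*n+1] = b.real, b.imag    # ε column (ε real)
    # u^H c = 0: real part and imaginary part
    S[2*n, :n], S[2*n, n:2*n] = u.real, u.imag
    S[2*n+1, :n], S[2*n+1, n:2*n] = -u.imag, u.real
    rhs[:n], rhs[n:2*n] = -r.real, -r.imag
    sol = np.linalg.solve(S, rhs)
    return sol[:n] + 1j * sol[n:2*n], sol[2*n], sol[2*n+1]

rng = np.random.default_rng(3)
n = 6
cplx = lambda: rng.standard_normal(n) + 1j * rng.standard_normal(n)
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a, b, u, r = cplx(), cplx(), cplx(), cplx()

c, delta, eps = solve_bordered(T, a, b, u, r)
print(np.linalg.norm(T @ c + delta * a + eps * b + r), abs(u.conj() @ c))
```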

Correction equation - large size

With

  T = [ T̂  0 ; 0  conj(T̂) ],   K = [ T̂_ω û  T̂_τ û ; conj(T̂_ω û)  conj(T̂_τ û) ],   U = [ û  0 ; 0  conj(û) ]

the system reads

  [ T    K ] [ c ; conj(c) ]     [ r̂ ; conj(r̂) ]
  [ U^H  0 ] [ δ ; ε       ] = - [ 0  ; 0       ].

Eliminating (δ, ε) yields

  (I - K(U^H K)^{-1}U^H) T (I - UU^H) [ c ; conj(c) ] = -(I - K(U^H K)^{-1}U^H) [ r̂ ; conj(r̂) ].

This looks like other JD correction equations: T is complemented by projectors.

How does this help with the preconditioner?
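The elimination can be checked numerically: the oblique projector I - K(U^H K)^{-1}U^H annihilates K, so applying it to the bordered system removes the (δ, ε) block, and the c-part of the bordered solution satisfies the projected equation. A small real-arithmetic sketch (U taken with orthonormal columns; all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 10
T = rng.standard_normal((m, m))
K = rng.standard_normal((m, 2))
U, _ = np.linalg.qr(rng.standard_normal((m, 2)))   # orthonormal columns
r = rng.standard_normal(m)

# direct solve of the bordered system [T K; U^H 0] [c; d] = [-r; 0]
Big = np.block([[T, K], [U.T, np.zeros((2, 2))]])
sol = np.linalg.solve(Big, np.concatenate([-r, np.zeros(2)]))
c = sol[:m]

# oblique projector arising from the elimination of (δ, ε)
Pi = np.eye(m) - K @ np.linalg.solve(U.T @ K, U.T)
lhs = Pi @ T @ (np.eye(m) - U @ U.T) @ c
print(np.linalg.norm(Pi @ K),        # the projector annihilates K
      np.linalg.norm(lhs + Pi @ r))  # c solves the projected equation
```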

Preconditioning the correction equation

Correction equation:

  (I - K(U^H K)^{-1}U^H) T (I - UU^H) [ c ; conj(c) ] = -(I - K(U^H K)^{-1}U^H) [ r̂ ; conj(r̂) ]

Preconditioner:

  (I - K(U^H K)^{-1}U^H) P (I - UU^H)   with   P = [ P  0 ; 0  conj(P) ],

where P is a preconditioner of T̂. Applied within a left preconditioned Krylov solver, this leads to the operator

  (I - UU^H) P^{-1} (I - K(U^H P^{-1}K)^{-1}U^H P^{-1}) (I - K(U^H K)^{-1}U^H) T (I - UU^H).

An efficient implementation needs one application of P^{-1} and of T̂ per iteration, and additionally 2 T̂-products and 3 P-solves, as in other JD variants.
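One way to realize "a single P-solve per inner iteration" (a sketch of the usual JD bookkeeping, with hypothetical names): solving the projected preconditioner equation for z with U^H z = 0 reduces to z = (I - Q S^{-1} U^H) P^{-1} w, where Q = P^{-1}K and the small matrix S = U^H Q are computed once per outer step:

```python
import numpy as np

rng = np.random.default_rng(5)
m = 12
P = rng.standard_normal((m, m)) + m * np.eye(m)    # stand-in preconditioner
K = rng.standard_normal((m, 2))
U, _ = np.linalg.qr(rng.standard_normal((m, 2)))
w = rng.standard_normal(m)

Q = np.linalg.solve(P, K)        # cached once per outer step (2 P-solves)
S = U.T @ Q                      # small 2x2 system

def prec_apply(w):
    y = np.linalg.solve(P, w)    # the single P-solve per inner iteration
    return y - Q @ np.linalg.solve(S, U.T @ y)

z = prec_apply(w)
PiK = np.eye(m) - K @ np.linalg.solve(U.T @ K, U.T)
print(np.linalg.norm(U.T @ z),             # z is orthogonal to range(U)
      np.linalg.norm(PiK @ (P @ z - w)))   # P z - w lies in range(K)
```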

Some details

Alternative expansion:
- sometimes the projected problem has no solution (Re(C)x = θ Im(C)x has no real eigenvalues)
- what, then, should be used as the approximate Ritz triple for the correction equation?
- we use ω̂ = 0, e^{-iω̂τ̂} = σ (given) and û ∈ span(V) such that ||T(ω̂, τ̂)û||_2 = min

Restart:
- the cost grows like O(k^6), so it becomes prohibitive for larger k
- restart with the space spanned by the converged eigenvectors and a few unconverged ones with the best residuals

Real problems:
- if M, A, B are real, then eigentriples come in pairs (ω, τ, u), (-ω, τ, conj(u))
- when an eigenvector u converges, add conj(u) to the search space
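The pairing for real data follows directly from T(ω, τ) = iωM + A + e^{-iωτ}B (the sign convention in the exponent is assumed from the delay term): conjugating the residual and flipping the sign of ω gives the same vector, so (ω, τ, u) is an eigentriple exactly when (-ω, τ, conj(u)) is. A quick numeric check:

```python
import numpy as np

# For real M, A, B:  T(-ω, τ) conj(u) = conj(T(ω, τ) u),
# hence eigentriples come in pairs (ω, τ, u), (-ω, τ, conj(u)).

rng = np.random.default_rng(6)
n = 7
M, A, B = (rng.standard_normal((n, n)) for _ in range(3))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
omega, tau = 0.7, 1.3

T = lambda w, t: 1j * w * M + A + np.exp(-1j * w * t) * B
lhs = T(-omega, tau) @ u.conj()
rhs = (T(omega, tau) @ u).conj()
print(np.linalg.norm(lhs - rhs))   # identical up to rounding
```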

Numerical Experience: Outline

1 Problem definition
2 Small critical delay problems
3 Iterative projection methods for nonlinear eigenproblems
4 A Jacobi-Davidson-type method for two-real-param. EVP
5 Numerical Experience

Numerical example

Consider the parabolic problem

  u_t - div((1 + x^2 + y^2 + z^2) grad u) + [1, 0, 1] · grad u + u - α(1 + x^2) u(t - τ) = 0   (1)

with spatial variables x, y and z on Ω = (0, 1) × (0, 1) × (0, 1), with Dirichlet boundary condition u = 0 on the boundary of Ω.

A discretization with piecewise quadratic ansatz functions on a tetrahedral grid using COMSOL yielded an eigenvalue problem

  Mẋ(t) + Ax(t) + Bx(t - τ) = 0

of dimension n. For α = 100 the problem has four pairs of eigentriples.
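For readers without COMSOL, a hypothetical 1D finite-difference analogue shows the shape of the matrices M, A, B in such a discretization; this is an illustrative stand-in on (0, 1) with homogeneous Dirichlet conditions, not the actual 3D FEM discretization used in the experiments:

```python
import numpy as np

# 1D analogue:  u_t - ((1+x^2) u_x)_x + u_x + u - α(1+x^2) u(t-τ) = 0,
# discretized into  M x'(t) + A x(t) + B x(t-τ) = 0  with M = I.

n = 50
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
kappa = 1.0 + x**2                    # diffusion coefficient at the nodes

A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2 * kappa[i] / h**2 + 1.0              # diffusion + reaction u
    if i > 0:
        A[i, i - 1] = -kappa[i] / h**2 - 1.0 / (2*h) # diffusion + convection u_x
    if i < n - 1:
        A[i, i + 1] = -kappa[i] / h**2 + 1.0 / (2*h)
M = np.eye(n)
alpha = 100.0
B = -alpha * np.diag(kappa)

# without the delay term the system x' = -A x is stable:
print(np.linalg.eigvals(A).real.min() > 0)
```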

Numerical Example

[Figure: convergence history for the example. x axis: iteration number (tics every 5; 31 iterations in total); y axis: residual norm on a logarithmic scale (tics every 10^2), down to the convergence tolerance.]


More information

U AUc = θc. n u = γ i x i,

U AUc = θc. n u = γ i x i, HARMONIC AND REFINED RAYLEIGH RITZ FOR THE POLYNOMIAL EIGENVALUE PROBLEM MICHIEL E. HOCHSTENBACH AND GERARD L. G. SLEIJPEN Abstract. After reviewing the harmonic Rayleigh Ritz approach for the standard

More information

Domain decomposition on different levels of the Jacobi-Davidson method

Domain decomposition on different levels of the Jacobi-Davidson method hapter 5 Domain decomposition on different levels of the Jacobi-Davidson method Abstract Most computational work of Jacobi-Davidson [46], an iterative method suitable for computing solutions of large dimensional

More information

Nonlinear Eigenvalue Problems: An Introduction

Nonlinear Eigenvalue Problems: An Introduction Nonlinear Eigenvalue Problems: An Introduction Cedric Effenberger Seminar for Applied Mathematics ETH Zurich Pro*Doc Workshop Disentis, August 18 21, 2010 Cedric Effenberger (SAM, ETHZ) NLEVPs: An Introduction

More information

The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method. V. Simoncini. Dipartimento di Matematica, Università di Bologna

The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method. V. Simoncini. Dipartimento di Matematica, Università di Bologna The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The

More information

Two-sided Eigenvalue Algorithms for Modal Approximation

Two-sided Eigenvalue Algorithms for Modal Approximation Two-sided Eigenvalue Algorithms for Modal Approximation Master s thesis submitted to Faculty of Mathematics at Chemnitz University of Technology presented by: Supervisor: Advisor: B.sc. Patrick Kürschner

More information

Keeping σ fixed for several steps, iterating on µ and neglecting the remainder in the Lagrange interpolation one obtains. θ = λ j λ j 1 λ j σ, (2.

Keeping σ fixed for several steps, iterating on µ and neglecting the remainder in the Lagrange interpolation one obtains. θ = λ j λ j 1 λ j σ, (2. RATIONAL KRYLOV FOR NONLINEAR EIGENPROBLEMS, AN ITERATIVE PROJECTION METHOD ELIAS JARLEBRING AND HEINRICH VOSS Key words. nonlinear eigenvalue problem, rational Krylov, Arnoldi, projection method AMS subject

More information

RAYLEIGH QUOTIENT ITERATION AND SIMPLIFIED JACOBI-DAVIDSON WITH PRECONDITIONED ITERATIVE SOLVES FOR GENERALISED EIGENVALUE PROBLEMS

RAYLEIGH QUOTIENT ITERATION AND SIMPLIFIED JACOBI-DAVIDSON WITH PRECONDITIONED ITERATIVE SOLVES FOR GENERALISED EIGENVALUE PROBLEMS RAYLEIGH QUOTIENT ITERATION AND SIMPLIFIED JACOBI-DAVIDSON WITH PRECONDITIONED ITERATIVE SOLVES FOR GENERALISED EIGENVALUE PROBLEMS MELINA A. FREITAG, ALASTAIR SPENCE, AND EERO VAINIKKO Abstract. The computation

More information

ABSTRACT OF DISSERTATION. Ping Zhang

ABSTRACT OF DISSERTATION. Ping Zhang ABSTRACT OF DISSERTATION Ping Zhang The Graduate School University of Kentucky 2009 Iterative Methods for Computing Eigenvalues and Exponentials of Large Matrices ABSTRACT OF DISSERTATION A dissertation

More information

Reduction of nonlinear eigenproblems with JD

Reduction of nonlinear eigenproblems with JD Reduction of nonlinear eigenproblems with JD Henk van der Vorst H.A.vanderVorst@math.uu.nl Mathematical Institute Utrecht University July 13, 2005, SIAM Annual New Orleans p.1/16 Outline Polynomial Eigenproblems

More information

Arnoldi Methods in SLEPc

Arnoldi Methods in SLEPc Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Robust successive computation of eigenpairs for nonlinear eigenvalue problems

Robust successive computation of eigenpairs for nonlinear eigenvalue problems Robust successive computation of eigenpairs for nonlinear eigenvalue problems C. Effenberger July 21, 2012 Newton-based methods are well-established techniques for solving nonlinear eigenvalue problems.

More information

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Purdue University Purdue e-pubs Department of Computer Science Technical Reports Department of Computer Science 2009 On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Maxim Maumov

More information

A Jacobi Davidson type method for the generalized singular value problem

A Jacobi Davidson type method for the generalized singular value problem A Jacobi Davidson type method for the generalized singular value problem M. E. Hochstenbach a, a Department of Mathematics and Computing Science, Eindhoven University of Technology, P.O. Box 513, 5600

More information

Linear Algebra and its Applications

Linear Algebra and its Applications Linear Algebra and its Applications 431 (2009) 471 487 Contents lists available at ScienceDirect Linear Algebra and its Applications journal homepage: www.elsevier.com/locate/laa A Jacobi Davidson type

More information

Efficient computation of transfer function dominant poles of large second-order dynamical systems

Efficient computation of transfer function dominant poles of large second-order dynamical systems Chapter 6 Efficient computation of transfer function dominant poles of large second-order dynamical systems Abstract This chapter presents a new algorithm for the computation of dominant poles of transfer

More information

1. Introduction. We consider nonlinear eigenvalue problems of the form

1. Introduction. We consider nonlinear eigenvalue problems of the form ROBUST SUCCESSIVE COMPUTATION OF EIGENPAIRS FOR NONLINEAR EIGENVALUE PROBLEMS C. EFFENBERGER Abstract. Newton-based methods are well-established techniques for solving nonlinear eigenvalue problems. If

More information

A Jacobi-Davidson method for solving complex symmetric eigenvalue problems Arbenz, P.; Hochstenbach, M.E.

A Jacobi-Davidson method for solving complex symmetric eigenvalue problems Arbenz, P.; Hochstenbach, M.E. A Jacobi-Davidson method for solving complex symmetric eigenvalue problems Arbenz, P.; Hochstenbach, M.E. Published in: SIAM Journal on Scientific Computing DOI: 10.1137/S1064827502410992 Published: 01/01/2004

More information

ABSTRACT. Professor G.W. Stewart

ABSTRACT. Professor G.W. Stewart ABSTRACT Title of dissertation: Residual Arnoldi Methods : Theory, Package, and Experiments Che-Rung Lee, Doctor of Philosophy, 2007 Dissertation directed by: Professor G.W. Stewart Department of Computer

More information

Jacobi Davidson Methods for Cubic Eigenvalue Problems (March 11, 2004)

Jacobi Davidson Methods for Cubic Eigenvalue Problems (March 11, 2004) Jacobi Davidson Methods for Cubic Eigenvalue Problems (March 11, 24) Tsung-Min Hwang 1, Wen-Wei Lin 2, Jinn-Liang Liu 3, Weichung Wang 4 1 Department of Mathematics, National Taiwan Normal University,

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

COMPUTING DOMINANT POLES OF TRANSFER FUNCTIONS

COMPUTING DOMINANT POLES OF TRANSFER FUNCTIONS COMPUTING DOMINANT POLES OF TRANSFER FUNCTIONS GERARD L.G. SLEIJPEN AND JOOST ROMMES Abstract. The transfer function describes the response of a dynamical system to periodic inputs. Dominant poles are

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

An Arnoldi method with structured starting vectors for the delay eigenvalue problem

An Arnoldi method with structured starting vectors for the delay eigenvalue problem An Arnoldi method with structured starting vectors for the delay eigenvalue problem Elias Jarlebring, Karl Meerbergen, Wim Michiels Department of Computer Science, K.U. Leuven, Celestijnenlaan 200 A, 3001

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

Inexact Inverse Iteration for Symmetric Matrices

Inexact Inverse Iteration for Symmetric Matrices Inexact Inverse Iteration for Symmetric Matrices Jörg Berns-Müller Ivan G. Graham Alastair Spence Abstract In this paper we analyse inexact inverse iteration for the real symmetric eigenvalue problem Av

More information

ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS

ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation

More information

Nonlinear palindromic eigenvalue problems and their numerical solution

Nonlinear palindromic eigenvalue problems and their numerical solution Nonlinear palindromic eigenvalue problems and their numerical solution TU Berlin DFG Research Center Institut für Mathematik MATHEON IN MEMORIAM RALPH BYERS Polynomial eigenvalue problems k P(λ) x = (

More information

Alternative correction equations in the Jacobi-Davidson method. Mathematical Institute. Menno Genseberger and Gerard L. G.

Alternative correction equations in the Jacobi-Davidson method. Mathematical Institute. Menno Genseberger and Gerard L. G. Universiteit-Utrecht * Mathematical Institute Alternative correction equations in the Jacobi-Davidson method by Menno Genseberger and Gerard L. G. Sleijpen Preprint No. 173 June 1998, revised: March 1999

More information

A numerical method for polynomial eigenvalue problems using contour integral

A numerical method for polynomial eigenvalue problems using contour integral A numerical method for polynomial eigenvalue problems using contour integral Junko Asakura a Tetsuya Sakurai b Hiroto Tadano b Tsutomu Ikegami c Kinji Kimura d a Graduate School of Systems and Information

More information

1. Introduction. In this paper we consider variants of the Jacobi Davidson (JD) algorithm [27] for computing a few eigenpairs of

1. Introduction. In this paper we consider variants of the Jacobi Davidson (JD) algorithm [27] for computing a few eigenpairs of SIAM J. SCI. COMPUT. Vol. 25, No. 5, pp. 1655 1673 c 2004 Society for Industrial and Applied Mathematics A JACOBI DAVIDSON METHOD FOR SOLVING COMPLEX SYMMETRIC EIGENVALUE PROBLEMS PETER ARBENZ AND MICHIEL

More information

Universiteit-Utrecht. Department. of Mathematics. The convergence of Jacobi-Davidson for. Hermitian eigenproblems. Jasper van den Eshof.

Universiteit-Utrecht. Department. of Mathematics. The convergence of Jacobi-Davidson for. Hermitian eigenproblems. Jasper van den Eshof. Universiteit-Utrecht * Department of Mathematics The convergence of Jacobi-Davidson for Hermitian eigenproblems by Jasper van den Eshof Preprint nr. 1165 November, 2000 THE CONVERGENCE OF JACOBI-DAVIDSON

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

KU Leuven Department of Computer Science

KU Leuven Department of Computer Science Compact rational Krylov methods for nonlinear eigenvalue problems Roel Van Beeumen Karl Meerbergen Wim Michiels Report TW 651, July 214 KU Leuven Department of Computer Science Celestinenlaan 2A B-31 Heverlee

More information

On the choice of abstract projection vectors for second level preconditioners

On the choice of abstract projection vectors for second level preconditioners On the choice of abstract projection vectors for second level preconditioners C. Vuik 1, J.M. Tang 1, and R. Nabben 2 1 Delft University of Technology 2 Technische Universität Berlin Institut für Mathematik

More information

University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR April 2012

University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR April 2012 University of Maryland Department of Computer Science TR-5009 University of Maryland Institute for Advanced Computer Studies TR-202-07 April 202 LYAPUNOV INVERSE ITERATION FOR COMPUTING A FEW RIGHTMOST

More information

Harmonic and refined extraction methods for the singular value problem, with applications in least squares problems

Harmonic and refined extraction methods for the singular value problem, with applications in least squares problems Harmonic and refined extraction methods for the singular value problem, with applications in least squares problems Michiel E. Hochstenbach December 17, 2002 Abstract. For the accurate approximation of

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

How to Detect Definite Hermitian Pairs

How to Detect Definite Hermitian Pairs How to Detect Definite Hermitian Pairs Françoise Tisseur School of Mathematics The University of Manchester ftisseur@ma.man.ac.uk http://www.ma.man.ac.uk/~ftisseur/ Joint work with Chun-Hua Guo and Nick

More information

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra

Definite versus Indefinite Linear Algebra. Christian Mehl Institut für Mathematik TU Berlin Germany. 10th SIAM Conference on Applied Linear Algebra Definite versus Indefinite Linear Algebra Christian Mehl Institut für Mathematik TU Berlin Germany 10th SIAM Conference on Applied Linear Algebra Monterey Bay Seaside, October 26-29, 2009 Indefinite Linear

More information

A Parallel Scalable PETSc-Based Jacobi-Davidson Polynomial Eigensolver with Application in Quantum Dot Simulation

A Parallel Scalable PETSc-Based Jacobi-Davidson Polynomial Eigensolver with Application in Quantum Dot Simulation A Parallel Scalable PETSc-Based Jacobi-Davidson Polynomial Eigensolver with Application in Quantum Dot Simulation Zih-Hao Wei 1, Feng-Nan Hwang 1, Tsung-Ming Huang 2, and Weichung Wang 3 1 Department of

More information

A Chebyshev-based two-stage iterative method as an alternative to the direct solution of linear systems

A Chebyshev-based two-stage iterative method as an alternative to the direct solution of linear systems A Chebyshev-based two-stage iterative method as an alternative to the direct solution of linear systems Mario Arioli m.arioli@rl.ac.uk CCLRC-Rutherford Appleton Laboratory with Daniel Ruiz (E.N.S.E.E.I.H.T)

More information

Introduction to Iterative Solvers of Linear Systems

Introduction to Iterative Solvers of Linear Systems Introduction to Iterative Solvers of Linear Systems SFB Training Event January 2012 Prof. Dr. Andreas Frommer Typeset by Lukas Krämer, Simon-Wolfgang Mages and Rudolf Rödl 1 Classes of Matrices and their

More information

Iterative methods for positive definite linear systems with a complex shift

Iterative methods for positive definite linear systems with a complex shift Iterative methods for positive definite linear systems with a complex shift William McLean, University of New South Wales Vidar Thomée, Chalmers University November 4, 2011 Outline 1. Numerical solution

More information

Contour Integral Method for the Simulation of Accelerator Cavities

Contour Integral Method for the Simulation of Accelerator Cavities Contour Integral Method for the Simulation of Accelerator Cavities V. Pham-Xuan, W. Ackermann and H. De Gersem Institut für Theorie Elektromagnetischer Felder DESY meeting (14.11.2017) November 14, 2017

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

Computable error bounds for nonlinear eigenvalue problems allowing for a minmax characterization

Computable error bounds for nonlinear eigenvalue problems allowing for a minmax characterization Computable error bounds for nonlinear eigenvalue problems allowing for a minmax characterization Heinrich Voss voss@tuhh.de Joint work with Kemal Yildiztekin Hamburg University of Technology Institute

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics Research Matters February 25, 2009 The Nonlinear Eigenvalue Problem Nick Higham Part III Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester

More information

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply

More information

Computing Transfer Function Dominant Poles of Large Second-Order Systems

Computing Transfer Function Dominant Poles of Large Second-Order Systems Computing Transfer Function Dominant Poles of Large Second-Order Systems Joost Rommes Mathematical Institute Utrecht University rommes@math.uu.nl http://www.math.uu.nl/people/rommes joint work with Nelson

More information

A JACOBI-DAVIDSON ITERATION METHOD FOR LINEAR EIGENVALUE PROBLEMS. GERARD L.G. SLEIJPEN y AND HENK A. VAN DER VORST y

A JACOBI-DAVIDSON ITERATION METHOD FOR LINEAR EIGENVALUE PROBLEMS. GERARD L.G. SLEIJPEN y AND HENK A. VAN DER VORST y A JACOBI-DAVIDSON ITERATION METHOD FOR LINEAR EIGENVALUE PROBLEMS GERARD L.G. SLEIJPEN y AND HENK A. VAN DER VORST y Abstract. In this paper we propose a new method for the iterative computation of a few

More information

Universiteit-Utrecht. Department. of Mathematics. Jacobi-Davidson algorithms for various. eigenproblems. - A working document -

Universiteit-Utrecht. Department. of Mathematics. Jacobi-Davidson algorithms for various. eigenproblems. - A working document - Universiteit-Utrecht * Department of Mathematics Jacobi-Davidson algorithms for various eigenproblems - A working document - by Gerard L.G. Sleipen, Henk A. Van der Vorst, and Zhaoun Bai Preprint nr. 1114

More information