Iterative projection methods for sparse nonlinear eigenvalue problems


Iterative projection methods for sparse nonlinear eigenvalue problems
Heinrich Voss
Hamburg University of Technology, Institute of Mathematics
TUHH, Shanghai, May 27

Outline
1 Introduction
2 Numerical methods for dense problems
3 Iterative projection methods
4 Variational characterization of eigenvalues
5 Numerical example


Introduction: Nonlinear eigenvalue problem
The term "nonlinear eigenvalue problem" is not used in a unique way in the literature. On the one hand it denotes parameter-dependent operator equations T(λ, u) = 0 that are nonlinear with respect to the state variable u, where one discusses
- positivity of solutions
- multiplicity of solutions
- dependence of solutions on the parameter; bifurcation
- (change of) stability of solutions


In this presentation
For D ⊆ C let T(λ) ∈ C^(n×n), λ ∈ D, be a family of matrices (more generally, a family of closed linear operators on a Banach space). Find λ ∈ D and x ≠ 0 such that

    T(λ)x = 0.

Then λ is called an eigenvalue of T(·), and x a corresponding eigenvector. In this talk we assume the matrices to be large and sparse.


Introduction: Electronic structure of quantum dots
Semiconductor nanostructures have attracted tremendous interest in the past few years because of their special physical properties and their potential for applications in micro- and optoelectronic devices. In such nanostructures the free carriers are confined to a small region of space by potential barriers, and if the size of this region is less than the electron wavelength, the electronic states become quantized at discrete energy levels. The ultimate limit of low-dimensional structures is the quantum dot, in which the carriers are confined in all three directions, thus reducing the degrees of freedom to zero. Therefore, a quantum dot can be thought of as an artificial atom.


Problem
Determine the relevant energy states (i.e. eigenvalues) and corresponding wave functions (i.e. eigenfunctions) of a three-dimensional quantum dot embedded in a matrix.

Problem ct.
Governing equation: the Schrödinger equation

    −∇ · ( ℏ²/(2 m(x, E)) ∇Φ ) + V(x) Φ = E Φ,   x ∈ Ω_q ∪ Ω_m,

where ℏ is the reduced Planck constant, m(x, E) is the electron effective mass, and V(x) is the confinement potential. m and V are discontinuous across the heterojunction.
Boundary and interface conditions:
- Φ = 0 on the outer boundary of the matrix Ω_m
- BenDaniel–Duke condition on the interface: (1/m_m) ∂Φ/∂n restricted to ∂Ω_m equals (1/m_q) ∂Φ/∂n restricted to ∂Ω_q


Introduction: Electron effective mass
The electron effective mass m = m_j, j ∈ {q, m}, depends on the energy level E. Most simulations in the literature consider constant electron effective masses (e.g. m_q = 0.023 m_0 and m_m = 0.067 m_0 for an InAs quantum dot and the GaAs matrix, respectively, where m_0 is the free electron mass).
Numerical examples demonstrate that the electronic behavior is substantially different: for the pyramidal quantum dot in our examples there are only 3 confined electron states for the linear model, whereas the nonlinear model exhibits 7 confined states (i.e. energy levels which are smaller than the confinement potential). The ground state of the nonlinear model is about 7% and the first excited state about 18% smaller than for the linear approximation.


Effective mass ct.
The dependence of m(x, E) on E can be derived from the eight-band k·p analysis and effective mass theory. Projecting the 8×8 Hamiltonian onto the conduction band results in the single-band Hamiltonian eigenvalue problem with

    m(x, E) = m_q(E) for x ∈ Ω_q,   m(x, E) = m_m(E) for x ∈ Ω_m,

    1/m_j(E) = P_j² ( 2/(E + g_j − V_j) + 1/(E + g_j − V_j + δ_j) ),   j ∈ {m, q},

where m_j is the electron effective mass, V_j the confinement potential, P_j the momentum, g_j the main energy gap, and δ_j the spin-orbit splitting in the jth region.
Other types of effective mass (taking into account the effect of strain, e.g.) appear in the literature. They are all rational functions of E where 1/m(x, E) is monotonically decreasing with respect to E, and that's all we need.
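The rational structure above is easy to evaluate directly. The following sketch uses illustrative placeholder values for P_j, g_j, V_j, δ_j (not the InAs/GaAs material constants) and checks the one property the later analysis relies on: 1/m_j(E) decreases monotonically in E.

```python
import numpy as np

def inv_mass(E, P, g, V, delta):
    """Reciprocal effective mass 1/m_j(E) of the rational k.p model.

    P (momentum), g (main energy gap), V (confinement potential) and
    delta (spin-orbit splitting) are illustrative placeholder values."""
    return P**2 * (2.0 / (E + g - V) + 1.0 / (E + g - V + delta))

# Both denominators grow with E, so 1/m_j(E) decreases as long as E + g - V > 0:
E_grid = np.linspace(0.0, 1.0, 101)
vals = inv_mass(E_grid, P=1.0, g=1.5, V=0.5, delta=0.3)
```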


Variational form
Find E ∈ R and Φ ∈ H¹_0(Ω), Φ ≠ 0, Ω := Ω_q ∪ Ω_m, such that

    a(Φ, Ψ; E) := (ℏ²/2) ∫_{Ω_q} (1/m_q(x, E)) ∇Φ · ∇Ψ dx + (ℏ²/2) ∫_{Ω_m} (1/m_m(x, E)) ∇Φ · ∇Ψ dx
                  + ∫_{Ω_q} V_q(x) Φ Ψ dx + ∫_{Ω_m} V_m(x) Φ Ψ dx
                  = E ∫_Ω Φ Ψ dx =: E b(Φ, Ψ)   for every Ψ ∈ H¹_0(Ω).

Discretizing by FEM yields a rational matrix eigenvalue problem.
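To make the resulting nonlinear matrix structure concrete, here is a one-dimensional toy analogue (finite differences instead of the FEM used in the talk; the dot region [0.4, 0.6], the coefficients 1/m_q(E) = 1/(1+E), 1/m_m = 1, V_q = 0, V_m = 1, and the normalization ℏ²/2 = 1 are all illustrative assumptions). It assembles T(E) = A(E) − E·I, a small eigenvalue problem that is rational in E.

```python
import numpy as np

def assemble_T(E, n=100):
    """Assemble T(E) = A(E) - E*I for a 1D toy analogue of the variational
    form: -(c(x, E) u')' + V(x) u = E u on (0, 1), u(0) = u(1) = 0, where
    c = 1/m is piecewise defined and rational in E inside the 'dot'."""
    h = 1.0 / (n + 1)
    xm = np.linspace(0.5 * h, 1.0 - 0.5 * h, n + 1)   # cell midpoints x_{i+1/2}
    c = np.where((xm >= 0.4) & (xm <= 0.6), 1.0 / (1.0 + E), 1.0)
    xg = np.linspace(h, 1.0 - h, n)                    # interior grid points
    V = np.where((xg >= 0.4) & (xg <= 0.6), 0.0, 1.0)
    # standard 3-point stencil for the variable-coefficient second derivative
    A = (np.diag(c[:-1] + c[1:])
         - np.diag(c[1:-1], 1) - np.diag(c[1:-1], -1)) / h**2
    return A + np.diag(V) - E * np.eye(n)

T = assemble_T(0.5)
```

The discrete T(E) inherits the symmetry of the bilinear form a(·,·; E), which is what the variational characterization of eigenvalues later exploits.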



Classes of nonlinear eigenproblems: Quadratic eigenvalue problems
Recent survey: Tisseur, Meerbergen 2001

    T(λ) = λ²A + λB + C,   A, B, C ∈ C^(n×n)

- Dynamic analysis of structures
- Stability analysis of flows in fluid mechanics
- Signal processing
- Vibration of spinning structures
- Vibration of fluid–solid structures (Conca et al. 1992)
- Lateral buckling analysis (Prandtl 1899)
- Corner singularities of anisotropic material (Wendland et al. 1992)
- Constrained least squares problems (Golub 1973)
- Regularization of total least squares problems (Sima et al. 2004)
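For dense quadratic problems, the classical approach is linearization: T(λ) = λ²A + λB + C is equivalent to a 2n×2n linear generalized eigenproblem. A minimal numpy sketch using one common companion form, with random coefficient matrices purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A, B, C = rng.standard_normal((3, n, n))

# With y = lam*x, T(lam) x = 0 becomes L0 z = lam L1 z for z = [x; y]:
#   lam (B x + A y) = -C x   and   lam x = y.
Z, I = np.zeros((n, n)), np.eye(n)
L0 = np.block([[-C, Z], [Z, I]])
L1 = np.block([[B, A], [I, Z]])

# L1 is invertible whenever A is, so the 2n eigenvalues of the quadratic
# problem are the eigenvalues of L1^{-1} L0.
eigs = np.linalg.eigvals(np.linalg.solve(L1, L0))

def smallest_singular_value(lam):
    """T(lam) must be (numerically) singular at each computed eigenvalue."""
    return np.linalg.svd(lam**2 * A + lam * B + C, compute_uv=False)[-1]
```

For large sparse problems this doubling of the dimension is exactly what the iterative projection methods below try to avoid.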


Classes of nonlinear eigenproblems: Polynomial eigenvalue problems

    T(λ) = Σ_{j=0}^{k} λ^j A_j,   A_j ∈ C^(n×n)

- Optimal control problems (Mehrmann 1991)
- Corner singularities of anisotropic material (Kozlov et al. 2001)
- Dynamic element discretization of linear eigenproblems (V. 1987)
- Least squares element methods (Rothe 1989)
- System of coupled Euler–Bernoulli and Timoshenko beams (Balakrishnan et al. 2004)
- Nonlinear integrated optics (Botchev et al. 2008)
- Electronic states of quantum dots (Hwang et al. 2005)


Classes of nonlinear eigenproblems: Rational eigenvalue problems

    T(λ) = λ²M + K − Σ_{m=1}^{NMAT} 1/(1 + b_m λ) ΔK_m

- Vibration of structures with viscoelastic damping
- Vibration of sandwich plates (Soni 1981)
- Dynamics of plates with concentrated masses (Andreev et al. 1988)
- Vibration of fluid–solid structures (Conca et al. 1989)
- Exact condensation of linear eigenvalue problems (Wittrick, Williams 1971)
- Electronic states of semiconductor heterostructures (Luttinger, Kohn 1954)


Classes of nonlinear eigenproblems: General nonlinear eigenproblems

    T(λ) = K − λM + i Σ_{j=1}^{p} √(λ − σ_j) W_j

- Exact dynamic element methods
- Nonlinear eigenproblems in accelerator design (Igarashi et al. 1995)
- Vibrations of poroelastic structures (Dazel et al. 2002)
- Vibro-acoustic behavior of piezoelectric/poroelastic structures (Batifol et al. 2007)
- Nonlinear integrated optics (Dirichlet-to-Neumann) (Botchev 2008)
- Stability of acoustic pressure levels in combustion chambers (van Leeuwen 2007)


Classes of nonlinear eigenproblems: Exponential dependence on eigenvalues

    d/dt s(t) = A_0 s(t) + Σ_{j=1}^{p} A_j s(t − τ_j),   ansatz s(t) = x e^{λt},

    T(λ) = −λI + A_0 + Σ_{j=1}^{p} e^{−τ_j λ} A_j

- Stability of time-delay systems
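In the scalar case (n = 1, a single delay; the values of a_0, a_1, τ below are illustrative) T(λ) reduces to the characteristic function f(λ) = −λ + a_0 + a_1 e^{−τλ}, whose real roots can be found by Newton's method:

```python
import numpy as np

a0, a1, tau = -1.0, 0.5, 1.0   # illustrative scalar delay system
f = lambda lam: -lam + a0 + a1 * np.exp(-tau * lam)    # T(lam) for n = 1
fp = lambda lam: -1.0 - a1 * tau * np.exp(-tau * lam)  # derivative f'(lam)

# Newton's method on the characteristic equation; f is strictly decreasing
# here, so the real root is unique.
lam = 0.0
for _ in range(50):
    lam = lam - f(lam) / fp(lam)
```

For these parameters the real root is negative, i.e. the corresponding mode s(t) = x e^{λt} is stable; the remaining (infinitely many) characteristic roots are complex.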


Numerical methods for dense problems: Inverse iteration
Require: initial pair (λ_0, x_0) close to the wanted eigenpair, normalization vector v
1: for k = 0, 1, 2, ... until convergence do
2:   solve T(λ_k) u_{k+1} = T′(λ_k) x_k for u_{k+1}
3:   λ_{k+1} = λ_k − (v^H x_k)/(v^H u_{k+1})
4:   normalize x_{k+1} = u_{k+1}/(v^H u_{k+1})
5: end for
Inverse iteration (being a variant of Newton's method) converges locally and quadratically for isolated eigenpairs. Replacing the update of λ by the Rayleigh functional, inverse iteration becomes cubically convergent for real symmetric problems (Rothe, V. 1989); in the general case cubic convergence was proved for a two-sided version of the Rayleigh functional iteration (Schreiber 2008).
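The steps above can be sketched compactly in numpy. The test problem is an illustrative quadratic one with diagonal coefficients, chosen so that the exact eigenvalues are known; T′ is supplied analytically.

```python
import numpy as np

def inverse_iteration(T, Tprime, lam0, x0, v, tol=1e-12, maxit=50):
    """Nonlinear inverse iteration for T(lam) x = 0 (Steps 2-4 of the slide).

    v is the fixed normalization vector; x is scaled so that v^H x = 1."""
    lam = lam0
    x = x0 / (v.conj() @ x0)
    for _ in range(maxit):
        u = np.linalg.solve(T(lam), Tprime(lam) @ x)   # Step 2
        lam = lam - (v.conj() @ x) / (v.conj() @ u)    # Step 3
        x = u / (v.conj() @ u)                         # Step 4
        if np.linalg.norm(T(lam) @ x) < tol:
            break
    return lam, x

# Illustrative problem: T(lam) = lam^2 I + lam C + K with diagonal C, K, so
# the eigenvalues are the roots of lam^2 + c_i lam + k_i
# (here 1, 2, 3, 4 and -2, -4, -6, -8).
n = 4
C = np.diag([1.0, 2.0, 3.0, 4.0])
K = np.diag([-2.0, -8.0, -18.0, -32.0])
T = lambda lam: lam**2 * np.eye(n) + lam * C + K
Tp = lambda lam: 2.0 * lam * np.eye(n) + C

lam, x = inverse_iteration(T, Tp, lam0=0.9, x0=np.ones(n), v=np.ones(n))
```

Started close to 1, the iteration converges quadratically to the eigenvalue 1 of this problem.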


Numerical methods for dense problems: Residual inverse iteration (Neumaier 1985)
Require: initial pair (λ_0, x_0) close to the wanted eigenpair, shift σ, normalization vector v
1: for k = 0, 1, 2, ... until convergence do
2:   solve v^H T(σ)^{-1} T(λ_{k+1}) x_k = 0 for λ_{k+1}, or x_k^H T(λ_{k+1}) x_k = 0 if T(λ) is Hermitian and λ_{k+1} is real
3:   solve T(σ) d_k = T(λ_{k+1}) x_k for d_k
4:   set u_{k+1} = x_k − d_k, and normalize x_{k+1} = u_{k+1}/(v^H u_{k+1})
5: end for
If T(λ) is twice continuously differentiable, λ̂ is a simple zero of det T(λ), and x̂ is an eigenvector normalized by v^H x̂ = 1, then residual inverse iteration converges for all σ sufficiently close to λ̂, and it holds that

    ‖x_{k+1} − x̂‖ / ‖x_k − x̂‖ = O(|σ − λ̂|)   and   |λ_{k+1} − λ̂| = O(‖x_k − x̂‖^q),

where q = 2 if T(λ) is Hermitian, λ̂ is real, and λ_{k+1} solves x_k^H T(λ_{k+1}) x_k = 0 in Step 2, and q = 1 otherwise.
The domains of attraction of inverse iteration and residual inverse iteration are often very small, and both methods are very sensitive to inexact solves of the linear systems.
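A sketch of the Hermitian variant on the same kind of illustrative diagonal quadratic problem: the equation x_k^H T(λ)x_k = 0 in Step 2 is solved by a scalar Newton iteration, and T(σ)^{-1} is precomputed once to mimic the key feature of the method, reusing a single factorization of T(σ).

```python
import numpy as np

def residual_inverse_iteration(T, Tprime, sigma, lam0, x0, tol=1e-10, maxit=100):
    """Residual inverse iteration (Neumaier 1985), Hermitian T(lam), real lam.

    Only systems with the fixed matrix T(sigma) are solved, so a single
    factorization can be reused; here we precompute the inverse (sketch only)."""
    Tsig_inv = np.linalg.inv(T(sigma))
    lam = lam0
    x = x0 / np.linalg.norm(x0)
    for _ in range(maxit):
        for _ in range(30):                     # Step 2: Newton for x^H T(lam) x = 0
            flam = x @ T(lam) @ x
            lam = lam - flam / (x @ Tprime(lam) @ x)
            if abs(flam) < 1e-14:
                break
        d = Tsig_inv @ (T(lam) @ x)             # Step 3
        x = x - d                               # Step 4
        x = x / np.linalg.norm(x)
        if np.linalg.norm(T(lam) @ x) < tol:
            break
    return lam, x

n = 4
C = np.diag([1.0, 2.0, 3.0, 4.0])
K = np.diag([-2.0, -8.0, -18.0, -32.0])         # eigenvalues 1, 2, 3, 4, -2, -4, -6, -8
T = lambda lam: lam**2 * np.eye(n) + lam * C + K
Tp = lambda lam: 2.0 * lam * np.eye(n) + C

lam, x = residual_inverse_iteration(T, Tp, sigma=0.8, lam0=0.9, x0=np.ones(n))
```

With σ = 0.8 the eigenvector error contracts by roughly |σ − λ̂| = 0.2 per step, as the theorem predicts, and the iteration settles on the eigenvalue 1.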



Iterative projection methods
For linear sparse eigenproblems T(λ) = λB − A, very efficient methods are iterative projection methods (e.g. the Lanczos, Arnoldi, and Jacobi–Davidson methods), where approximations to the wanted eigenvalues and eigenvectors are obtained from projections of the eigenproblem to subspaces of small dimension, which are expanded in the course of the algorithm.
Generalizations to nonlinear sparse eigenproblems:
- nonlinear rational Krylov: Ruhe (2000, 2005); Jarlebring, V. (2005)
- Arnoldi method: quadratic problems: Meerbergen (2001); general problems: V. (2003, 2004); Liao, Bai, Lee, Ko (2006); Liao (2007)
- Jacobi–Davidson: polynomial problems: Sleijpen, Booten, Fokkema, van der Vorst (1996); Hwang, Lin, Wang, Wang (2004, 2005); general problems: T. Betcke, V. (2004); V. (2004, 2007); Schwetlick, Schreiber (2006); Schreiber (2008)


Iterative projection method
Require: initial basis V with V^H V = I; set m = 1
1: while m ≤ number of wanted eigenvalues do
2:   compute an eigenpair (μ, y) of the projected problem V^H T(λ)V y = 0
3:   determine the Ritz vector u = Vy, ‖u‖ = 1, and the residual r = T(μ)u
4:   if ‖r‖ < ε then
5:     accept the approximate eigenpair λ_m = μ, x_m = u; increase m ← m + 1
6:     reduce the search space V if necessary
7:     choose an approximation (λ_m, u) to the next eigenpair, and compute r = T(λ_m)u
8:   end if
9:   expand the search space V = [V, v_new]
10:  update the projected problem
11: end while
Main tasks:
- expand the search space
- choose an eigenpair of the projected problem (locking, purging)
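A minimal dense sketch of this loop for a single eigenvalue. Two concrete (but by no means unique) choices are made for the open tasks: the expansion in Step 9 is the residual inverse iteration direction T(σ)^{-1} r, and the small projected problem in Step 2 is solved by the method of successive linear problems; the test data is again the illustrative diagonal quadratic problem.

```python
import numpy as np

def projection_method(T, Tprime, sigma, v0, tol=1e-10, maxit=25):
    """Iterative projection method for T(lam) x = 0, one eigenvalue near sigma.

    Projected problem V^H T(lam) V y = 0 solved by successive linear problems;
    search space expanded by v = T(sigma)^{-1} r (residual inverse iteration)."""
    n = len(v0)
    Tsig_inv = np.linalg.inv(T(sigma))       # in practice: reuse a factorization
    V = (v0 / np.linalg.norm(v0)).reshape(n, 1)
    lam = sigma
    for _ in range(maxit):
        for _ in range(50):                  # Step 2: projected nonlinear problem
            A = V.conj().T @ T(lam) @ V
            B = V.conj().T @ Tprime(lam) @ V
            theta, Y = np.linalg.eig(np.linalg.solve(B, A))
            j = np.argmin(np.abs(theta))     # smallest correction to lam
            lam = lam - theta[j]
            if abs(theta[j]) < 1e-14:
                break
        u = V @ Y[:, j]                      # Step 3: Ritz vector and residual
        u = u / np.linalg.norm(u)
        r = T(lam) @ u
        if np.linalg.norm(r) < tol:          # Steps 4-5: accept
            return lam, u
        v = Tsig_inv @ r                     # Step 9: expansion direction
        v = v - V @ (V.conj().T @ v)         # orthogonalize against V
        V = np.hstack([V, (v / np.linalg.norm(v)).reshape(n, 1)])
    return lam, u

n = 4
C = np.diag([1.0, 2.0, 3.0, 4.0])
K = np.diag([-2.0, -8.0, -18.0, -32.0])      # true eigenvalues 1..4, -2, -4, -6, -8
T = lambda lam: lam**2 * np.eye(n) + lam * C + K
Tp = lambda lam: 2.0 * lam * np.eye(n) + C

lam, u = projection_method(T, Tp, sigma=0.8, v0=np.ones(n))
```

Each outer sweep touches the large problem only through one residual and one solve with the fixed matrix T(σ); all nonlinear work happens in the small projected problem.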


Iterative projection methods: Expanding the subspace
Given a subspace V ⊂ C^n, expand V by a direction with high approximation potential for the next wanted eigenvector.
Let θ be an eigenvalue of the projected problem V^H T(λ)V y = 0 and x = Vy a corresponding Ritz vector. Then inverse iteration yields the suitable candidate

    v := T(θ)^{-1} T′(θ) x.

BUT: in each step one has to solve a large linear system with a varying matrix, and in a truly large problem the vector v will not be accessible; only an inexact solution ṽ := v + e of T(θ)v = T′(θ)x will be, and the next iterate will be a solution of the projection of T(λ)x = 0 onto the expanded space Ṽ := span{V, ṽ}.


Iterative projection methods: Expansion of search space ct.
We assume that x is already a good approximation to an eigenvector of T(·). Then v will be an even better approximation, and therefore the eigenvector we are looking for will be very close to the plane E := span{x, v}.
We therefore neglect the influence of the orthogonal complement of x in V on the next iterate and discuss the nearness of the planes E and Ẽ := span{x, ṽ}. If the angle between these two planes is small, then the projection of T(λ) onto Ṽ should be similar to the one onto span{V, v}, and the approximation properties of inverse iteration should be maintained. If this angle can become large, then it is not surprising that the convergence properties of inverse iteration are not reflected by the projection method.


Theorem

Let φ_0 = arccos(|x^H v| / (‖x‖ ‖v‖)) denote the angle between x and v, and denote the relative error of ṽ by ε := ‖e‖/‖v‖. Then the maximal possible acute angle between the planes E and Ẽ is

β(ε) = arccos √(1 − ε²/sin²φ_0)  if ε ≤ sin φ_0,
β(ε) = π/2                       if ε ≥ sin φ_0.
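The bound can be checked numerically. The sketch below uses made-up vectors x, v: it samples perturbations e with ‖e‖ = ε‖v‖ and measures the acute angle between the planes span{x, v} and span{x, v + e} as the largest principal angle (computed from the SVD of the product of orthonormal bases). No sampled angle exceeds β(ε):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
x = rng.standard_normal(n); x /= np.linalg.norm(x)
v = rng.standard_normal(n); v /= np.linalg.norm(v)

phi0 = np.arccos(abs(x @ v))                 # angle between x and v
eps = 0.5 * np.sin(phi0)                     # relative error level, eps < sin(phi0)
beta = np.arccos(np.sqrt(1 - eps**2 / np.sin(phi0)**2))   # bound from the Theorem

def largest_principal_angle(A, B):
    """Largest principal angle between the column spans of A and B."""
    QA, _ = np.linalg.qr(A)
    QB, _ = np.linalg.qr(B)
    s = np.linalg.svd(QA.T @ QB, compute_uv=False)
    return np.arccos(np.clip(s.min(), -1.0, 1.0))

E = np.column_stack([x, v])
worst = 0.0
for _ in range(2000):
    e = rng.standard_normal(n)
    e *= eps * np.linalg.norm(v) / np.linalg.norm(e)      # enforce ||e|| = eps*||v||
    worst = max(worst, largest_principal_angle(E, np.column_stack([x, v + e])))

print(worst <= beta + 1e-9)   # True: no sampled angle exceeds beta(eps)
```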


Proof. (The slide gives a geometric argument with a figure, which is not reproduced in this transcription.)

Expansion by inexact inverse iteration

Obviously, for every α ∈ R, α ≠ 0, the plane E is also spanned by x and x + αv. If Ẽ(α) is the plane spanned by x and a perturbed realization x + αv + e of x + αv, then by the same arguments as in the proof of the Theorem the maximal angle between E and Ẽ(α) is

γ(α, ε) = arccos √(1 − ε²/sin²φ(α))  if ε ≤ sin φ(α),
γ(α, ε) = π/2                        if ε ≥ sin φ(α),

where φ(α) denotes the angle between x and x + αv.

Since the mapping φ ↦ arccos √(1 − ε²/sin²φ) decreases monotonically, the expansion of the search space by an inexact realization of t := x + αv is most robust with respect to small perturbations if α is chosen such that x and x + αv are orthogonal.


Expansion by inexact inverse iteration ct.

t := x + αv is orthogonal to x iff

v = x − (x^H x)/(x^H T(θ)^{-1} T'(θ) x) · T(θ)^{-1} T'(θ) x,  (*)

which yields the maximal acute angle between E and Ẽ(α)

γ(α, ε) = arccos √(1 − ε²)  if ε ≤ 1,
γ(α, ε) = π/2               if ε ≥ 1.
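A quick numerical check, on a made-up matrix family (all names illustrative), that the direction built this way is indeed orthogonal to x:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
# hypothetical family T(l) = A - l*B (illustrative only)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
T  = lambda l: A - l * B
Tp = lambda l: -B

theta = 0.7
x = rng.standard_normal(n)

w = np.linalg.solve(T(theta), Tp(theta) @ x)   # inverse-iteration direction
v = x - (x @ x) / (x @ w) * w                  # orthogonalized expansion direction

print(abs(x @ v) < 1e-8)   # True: the expansion is orthogonal to x
```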


Expansion by inexact inverse iteration ct. (The following slides show plots illustrating the robustness of the expansion directions; figures not reproduced in this transcription.)


Jacobi Davidson method

The expansion

v = x − (x^H x)/(x^H T(θ)^{-1} T'(θ) x) · T(θ)^{-1} T'(θ) x  (*)

of the current search space V is the solution of the equation

(I − (T'(θ) x x^H)/(x^H T'(θ) x)) T(θ) (I − (x x^H)/(x^H x)) t = −r,  t ⊥ x,

with r = T(θ)x. This is the so-called correction equation of the Jacobi Davidson method, which was derived in T. Betcke & Voss (2004), generalizing the approach of Sleijpen and van der Vorst (1996) for linear and polynomial eigenvalue problems.

Hence, the Jacobi Davidson method is the most robust realization of an expansion of a search space such that the direction of inverse iteration is contained in the expanded space, in the sense that it is least sensitive to inexact solves of linear systems T(θ)v = T'(θ)x.
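This equivalence can be verified numerically for an exact solve. The sketch below uses a made-up family T(λ) = A − λB; the correction equation is solved on an orthonormal basis of the complement of x via least squares, and the right-hand side is projected as well since θ is not an exact Ritz value in this toy setup:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
T  = lambda l: A - l * B            # hypothetical family (illustrative only)
Tp = lambda l: -B

theta = 0.3
x = rng.standard_normal(n); x /= np.linalg.norm(x)
r = T(theta) @ x

# orthogonalized inverse-iteration expansion direction
w = np.linalg.solve(T(theta), Tp(theta) @ x)
v = x - (x @ x) / (x @ w) * w

# correction equation restricted to the orthogonal complement of x
Q, _ = np.linalg.qr(np.column_stack([x, rng.standard_normal((n, n - 1))]))
U = Q[:, 1:]                                    # orthonormal basis of x^perp
tx = Tp(theta) @ x
P1 = np.eye(n) - np.outer(tx, x) / (x @ tx)     # I - T'(θ)xx^H / (x^H T'(θ)x)
t = U @ np.linalg.lstsq(P1 @ T(theta) @ U, -(P1 @ r), rcond=None)[0]

# t and v span the same direction
cosang = abs(t @ v) / (np.linalg.norm(t) * np.linalg.norm(v))
print(cosang > 1 - 1e-8)   # True: the correction-equation solution spans the same direction
```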


Nonlinear Jacobi-Davidson

1: start with orthonormal basis V; set m = 1
2: determine preconditioner M ≈ T(σ)^{-1}, σ close to the first wanted eigenvalue
3: while m ≤ number of wanted eigenvalues do
4:   compute eigenpair (μ, y) of the projected problem V^H T(λ)V y = 0
5:   determine Ritz vector u = Vy, ‖u‖ = 1, and residual r = T(μ)u
6:   if ‖r‖ < ε then
7:     accept approximate eigenpair λ_m = μ, x_m = u; increase m ← m + 1
8:     reduce search space V if necessary
9:     choose new preconditioner M ≈ T(μ)^{-1} if indicated
10:    choose approximation (λ_m, u) to the next eigenpair, and compute r = T(λ_m)u
11:  end if
12:  solve approximately the correction equation
       (I − (T'(μ)uu^H)/(u^H T'(μ)u)) T(μ) (I − (uu^H)/(u^H u)) t = −r,  t ⊥ u
13:  t = t − VV^H t, ṽ = t/‖t‖, reorthogonalize if necessary
14:  expand search space V = [V, ṽ]
15:  update projected problem
16: end while
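As a rough illustration, the loop can be sketched for a small made-up quadratic-in-λ family (all names, sizes, and coefficients are illustrative; the correction equation is "solved" here via the equivalent orthogonalized inverse-iteration expansion with a dense factorization, which is affordable only at toy size):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
A0 = rng.standard_normal((n, n))
A = (A0 + A0.T) / 2                # symmetric "stiffness-like" part (illustrative)
T  = lambda l: 0.01 * l**2 * np.eye(n) - l * np.eye(n) + A   # hypothetical family
Tp = lambda l: 0.02 * l * np.eye(n) - np.eye(n)

def solve_projected(V, target):
    """Solve V^H T(l)V y = 0 by companion linearization; return pair nearest target."""
    k = V.shape[1]
    Mp, Cp, Kp = 0.01 * np.eye(k), -np.eye(k), V.T @ A @ V
    Apen = np.block([[-Cp, -Kp], [np.eye(k), np.zeros((k, k))]])
    Bpen = np.block([[Mp, np.zeros((k, k))], [np.zeros((k, k)), np.eye(k)]])
    lam, Z = np.linalg.eig(np.linalg.solve(Bpen, Apen))
    j = np.argmin(abs(lam - target))
    return lam[j].real, Z[:k, j].real

sigma = 0.0                                  # shift near the wanted eigenvalue
v0 = rng.standard_normal(n)
V = (v0 / np.linalg.norm(v0)).reshape(n, 1)

for _ in range(30):
    mu, y = solve_projected(V, sigma)        # step 4
    u = V @ y; u /= np.linalg.norm(u)        # step 5
    r = T(mu) @ u
    if np.linalg.norm(r) < 1e-8:             # step 6 (single eigenpair wanted here)
        break
    w = np.linalg.solve(T(mu), Tp(mu) @ u)   # "exact" correction/expansion direction
    t = u - (u @ u) / (u @ w) * w
    t -= V @ (V.T @ t)                       # step 13: orthogonalize against basis
    if np.linalg.norm(t) < 1e-12:
        break
    V = np.hstack([V, (t / np.linalg.norm(t)).reshape(n, 1)])   # step 14

print(np.linalg.norm(T(mu) @ u))   # residual norm at termination
```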


Comments on 1:

Here pre-information on eigenvectors can be introduced into the algorithm (e.g., approximate eigenvectors of contiguous problems in reanalysis).

If no information on eigenvectors is at hand, and if we are interested in eigenvalues close to the parameter σ ∈ D: choose the initial vector at random, execute a few Arnoldi steps for the linear eigenproblem T(σ)u = θu or T(σ)u = θT'(σ)u, and choose the eigenvector corresponding to the smallest eigenvalue in modulus, or a small number of Schur vectors, as initial basis of the search space.

Starting with a random vector without this preprocessing usually yields a value μ in step 4 which is far away from σ and will avert convergence.



Comments on 4:

Since the dimension of the projected problem is small, it can be solved by any dense solver, such as
linearization if T(λ) is polynomial,
methods based on the characteristic function det T(λ) = 0,
inverse iteration,
residual inverse iteration,
successive linear problems.

Notice that symmetry properties of the original problem are inherited by the projected problem, and one can take advantage of these properties when solving the projected problem (safeguarded iteration, structure-preserving methods).
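The method of successive linear problems, for instance, can be sketched for a small dense projected problem (made-up coefficients; each step solves the linear eigenproblem T(λ_k)y = θT'(λ_k)y and updates λ_{k+1} = λ_k − θ with θ the eigenvalue of smallest modulus):

```python
import numpy as np

rng = np.random.default_rng(5)
k = 5
# hypothetical small projected problem T(l) = 0.01*l^2*I - l*I + Kp (illustrative)
R = rng.standard_normal((k, k))
Kp = (R + R.T) / 2
T  = lambda l: 0.01 * l**2 * np.eye(k) - l * np.eye(k) + Kp
Tp = lambda l: 0.02 * l * np.eye(k) - np.eye(k)

lam = 0.0                                   # initial guess
for _ in range(100):
    # eigenvalues theta of T(lam) y = theta * T'(lam) y
    theta, Y = np.linalg.eig(np.linalg.solve(Tp(lam), T(lam)))
    j = np.argmin(abs(theta))               # eigenvalue of smallest modulus
    step = theta[j].real
    lam -= step
    if abs(step) < 1e-12:
        break

y = Y[:, j].real
print(np.linalg.norm(T(lam) @ y) / np.linalg.norm(y))   # small relative residual
```

Near a simple eigenvalue this iteration is Newton-like and converges quadratically.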



Comments on 8:

As the subspaces expand in the course of the algorithm, the increasing storage or the computational cost for solving the projected eigenvalue problems may make it necessary to restart the algorithm and purge some of the basis vectors. Since a restart destroys information on the eigenvectors, and particularly on the one the method is just aiming at, we restart only if an eigenvector has just converged.

Reasonable search spaces after a restart are
the space spanned by the already converged eigenvectors (or a space slightly larger),
an invariant subspace of T(σ), or an eigenspace of T(σ)y = μT'(σ)y corresponding to eigenvalues of small modulus.



EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems EIGIFP: A MATLAB Program for Solving Large Symmetric Generalized Eigenvalue Problems JAMES H. MONEY and QIANG YE UNIVERSITY OF KENTUCKY eigifp is a MATLAB program for computing a few extreme eigenvalues

More information

A Parallel Scalable PETSc-Based Jacobi-Davidson Polynomial Eigensolver with Application in Quantum Dot Simulation

A Parallel Scalable PETSc-Based Jacobi-Davidson Polynomial Eigensolver with Application in Quantum Dot Simulation A Parallel Scalable PETSc-Based Jacobi-Davidson Polynomial Eigensolver with Application in Quantum Dot Simulation Zih-Hao Wei 1, Feng-Nan Hwang 1, Tsung-Ming Huang 2, and Weichung Wang 3 1 Department of

More information

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries

Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries Structured Krylov Subspace Methods for Eigenproblems with Spectral Symmetries Fakultät für Mathematik TU Chemnitz, Germany Peter Benner benner@mathematik.tu-chemnitz.de joint work with Heike Faßbender

More information

Arnoldi Methods in SLEPc

Arnoldi Methods in SLEPc Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,

More information

Jacobi s Ideas on Eigenvalue Computation in a modern context

Jacobi s Ideas on Eigenvalue Computation in a modern context Jacobi s Ideas on Eigenvalue Computation in a modern context Henk van der Vorst vorst@math.uu.nl Mathematical Institute Utrecht University June 3, 2006, Michel Crouzeix p.1/18 General remarks Ax = λx Nonlinear

More information

ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS

ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 3 : SEMI-ITERATIVE METHODS Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation

More information

Numerical Methods for Large Scale Eigenvalue Problems

Numerical Methods for Large Scale Eigenvalue Problems MAX PLANCK INSTITUTE Summer School in Trogir, Croatia Oktober 12, 2011 Numerical Methods for Large Scale Eigenvalue Problems Patrick Kürschner Max Planck Institute for Dynamics of Complex Technical Systems

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems

ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems. Part I: Review of basic theory of eigenvalue problems ECS231 Handout Subspace projection methods for Solving Large-Scale Eigenvalue Problems Part I: Review of basic theory of eigenvalue problems 1. Let A C n n. (a) A scalar λ is an eigenvalue of an n n A

More information

Efficient computation of transfer function dominant poles of large second-order dynamical systems

Efficient computation of transfer function dominant poles of large second-order dynamical systems Chapter 6 Efficient computation of transfer function dominant poles of large second-order dynamical systems Abstract This chapter presents a new algorithm for the computation of dominant poles of transfer

More information

A comparison of solvers for quadratic eigenvalue problems from combustion

A comparison of solvers for quadratic eigenvalue problems from combustion INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS [Version: 2002/09/18 v1.01] A comparison of solvers for quadratic eigenvalue problems from combustion C. Sensiau 1, F. Nicoud 2,, M. van Gijzen 3,

More information

of dimension n 1 n 2, one defines the matrix determinants

of dimension n 1 n 2, one defines the matrix determinants HARMONIC RAYLEIGH RITZ FOR THE MULTIPARAMETER EIGENVALUE PROBLEM MICHIEL E. HOCHSTENBACH AND BOR PLESTENJAK Abstract. We study harmonic and refined extraction methods for the multiparameter eigenvalue

More information

SOLVING RATIONAL EIGENVALUE PROBLEMS VIA LINEARIZATION

SOLVING RATIONAL EIGENVALUE PROBLEMS VIA LINEARIZATION SOLVNG RATONAL EGENVALUE PROBLEMS VA LNEARZATON YANGFENG SU AND ZHAOJUN BA Abstract Rational eigenvalue problem is an emerging class of nonlinear eigenvalue problems arising from a variety of physical

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method. V. Simoncini. Dipartimento di Matematica, Università di Bologna

The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method. V. Simoncini. Dipartimento di Matematica, Università di Bologna The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

A HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY

A HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY A HARMONIC RESTARTED ARNOLDI ALGORITHM FOR CALCULATING EIGENVALUES AND DETERMINING MULTIPLICITY RONALD B. MORGAN AND MIN ZENG Abstract. A restarted Arnoldi algorithm is given that computes eigenvalues

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

MICHIEL E. HOCHSTENBACH

MICHIEL E. HOCHSTENBACH VARIATIONS ON HARMONIC RAYLEIGH RITZ FOR STANDARD AND GENERALIZED EIGENPROBLEMS MICHIEL E. HOCHSTENBACH Abstract. We present several variations on the harmonic Rayleigh Ritz method. First, we introduce

More information

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis

Eigenvalue Problems. Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalue Problems Eigenvalue problems occur in many areas of science and engineering, such as structural analysis Eigenvalues also important in analyzing numerical methods Theory and algorithms apply

More information

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics

Research Matters. February 25, The Nonlinear Eigenvalue Problem. Nick Higham. Part III. Director of Research School of Mathematics Research Matters February 25, 2009 The Nonlinear Eigenvalue Problem Nick Higham Part III Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester

More information

Solving Polynomial Eigenvalue Problems Arising in Simulations of Nanoscale Quantum Dots

Solving Polynomial Eigenvalue Problems Arising in Simulations of Nanoscale Quantum Dots Solving Polynomial Eigenvalue Problems Arising in Simulations of Nanoscale Quantum Dots Weichung Wang Department of Mathematics National Taiwan University Recent Advances in Numerical Methods for Eigenvalue

More information

Rayleigh-Ritz majorization error bounds with applications to FEM and subspace iterations

Rayleigh-Ritz majorization error bounds with applications to FEM and subspace iterations 1 Rayleigh-Ritz majorization error bounds with applications to FEM and subspace iterations Merico Argentati and Andrew Knyazev (speaker) Department of Applied Mathematical and Statistical Sciences Center

More information

Iterative Methods for Linear Systems of Equations

Iterative Methods for Linear Systems of Equations Iterative Methods for Linear Systems of Equations Projection methods (3) ITMAN PhD-course DTU 20-10-08 till 24-10-08 Martin van Gijzen 1 Delft University of Technology Overview day 4 Bi-Lanczos method

More information

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna.

Recent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna. Recent advances in approximation using Krylov subspaces V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The framework It is given an operator

More information

HARMONIC RAYLEIGH RITZ EXTRACTION FOR THE MULTIPARAMETER EIGENVALUE PROBLEM

HARMONIC RAYLEIGH RITZ EXTRACTION FOR THE MULTIPARAMETER EIGENVALUE PROBLEM HARMONIC RAYLEIGH RITZ EXTRACTION FOR THE MULTIPARAMETER EIGENVALUE PROBLEM MICHIEL E. HOCHSTENBACH AND BOR PLESTENJAK Abstract. We study harmonic and refined extraction methods for the multiparameter

More information

Research Matters. February 25, The Nonlinear Eigenvalue. Director of Research School of Mathematics

Research Matters. February 25, The Nonlinear Eigenvalue. Director of Research School of Mathematics Research Matters February 25, 2009 The Nonlinear Eigenvalue Nick Problem: HighamPart II Director of Research School of Mathematics Françoise Tisseur School of Mathematics The University of Manchester Woudschoten

More information

Iterative methods for symmetric eigenvalue problems

Iterative methods for symmetric eigenvalue problems s Iterative s for symmetric eigenvalue problems, PhD McMaster University School of Computational Engineering and Science February 11, 2008 s 1 The power and its variants Inverse power Rayleigh quotient

More information

Modeling of Thermoelastic Damping in MEMS Resonators

Modeling of Thermoelastic Damping in MEMS Resonators Modeling of Thermoelastic Damping in MEMS Resonators T. Koyama a, D. Bindel b, S. Govindjee a a Dept. of Civil Engineering b Computer Science Division 1 University of California, Berkeley MEMS Resonators

More information

RANA03-02 January Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis

RANA03-02 January Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis RANA03-02 January 2003 Jacobi-Davidson methods and preconditioning with applications in pole-zero analysis by J.Rommes, H.A. van der Vorst, EJ.W. ter Maten Reports on Applied and Numerical Analysis Department

More information

U AUc = θc. n u = γ i x i,

U AUc = θc. n u = γ i x i, HARMONIC AND REFINED RAYLEIGH RITZ FOR THE POLYNOMIAL EIGENVALUE PROBLEM MICHIEL E. HOCHSTENBACH AND GERARD L. G. SLEIJPEN Abstract. After reviewing the harmonic Rayleigh Ritz approach for the standard

More information

Solving a rational eigenvalue problem in fluidstructure

Solving a rational eigenvalue problem in fluidstructure Solving a rational eigenvalue problem in fluidstructure interaction H. VOSS Section of Mathematics, TU Germany Hambu~g-Harburg, D-21071 Hamburg, Abstract In this paper we consider a rational eigenvalue

More information

The parallel computation of the smallest eigenpair of an. acoustic problem with damping. Martin B. van Gijzen and Femke A. Raeven.

The parallel computation of the smallest eigenpair of an. acoustic problem with damping. Martin B. van Gijzen and Femke A. Raeven. The parallel computation of the smallest eigenpair of an acoustic problem with damping. Martin B. van Gijzen and Femke A. Raeven Abstract Acoustic problems with damping may give rise to large quadratic

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

A Structure-Preserving Method for Large Scale Eigenproblems. of Skew-Hamiltonian/Hamiltonian (SHH) Pencils

A Structure-Preserving Method for Large Scale Eigenproblems. of Skew-Hamiltonian/Hamiltonian (SHH) Pencils A Structure-Preserving Method for Large Scale Eigenproblems of Skew-Hamiltonian/Hamiltonian (SHH) Pencils Yangfeng Su Department of Mathematics, Fudan University Zhaojun Bai Department of Computer Science,

More information

A numerical method for polynomial eigenvalue problems using contour integral

A numerical method for polynomial eigenvalue problems using contour integral A numerical method for polynomial eigenvalue problems using contour integral Junko Asakura a Tetsuya Sakurai b Hiroto Tadano b Tsutomu Ikegami c Kinji Kimura d a Graduate School of Systems and Information

More information

Contour Integral Method for the Simulation of Accelerator Cavities

Contour Integral Method for the Simulation of Accelerator Cavities Contour Integral Method for the Simulation of Accelerator Cavities V. Pham-Xuan, W. Ackermann and H. De Gersem Institut für Theorie Elektromagnetischer Felder DESY meeting (14.11.2017) November 14, 2017

More information

A Tuned Preconditioner for Inexact Inverse Iteration for Generalised Eigenvalue Problems

A Tuned Preconditioner for Inexact Inverse Iteration for Generalised Eigenvalue Problems A Tuned Preconditioner for for Generalised Eigenvalue Problems Department of Mathematical Sciences University of Bath, United Kingdom IWASEP VI May 22-25, 2006 Pennsylvania State University, University

More information

AA242B: MECHANICAL VIBRATIONS

AA242B: MECHANICAL VIBRATIONS AA242B: MECHANICAL VIBRATIONS 1 / 17 AA242B: MECHANICAL VIBRATIONS Solution Methods for the Generalized Eigenvalue Problem These slides are based on the recommended textbook: M. Géradin and D. Rixen, Mechanical

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

ABSTRACT OF DISSERTATION. Ping Zhang

ABSTRACT OF DISSERTATION. Ping Zhang ABSTRACT OF DISSERTATION Ping Zhang The Graduate School University of Kentucky 2009 Iterative Methods for Computing Eigenvalues and Exponentials of Large Matrices ABSTRACT OF DISSERTATION A dissertation

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

Two-sided Eigenvalue Algorithms for Modal Approximation

Two-sided Eigenvalue Algorithms for Modal Approximation Two-sided Eigenvalue Algorithms for Modal Approximation Master s thesis submitted to Faculty of Mathematics at Chemnitz University of Technology presented by: Supervisor: Advisor: B.sc. Patrick Kürschner

More information

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace

On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Purdue University Purdue e-pubs Department of Computer Science Technical Reports Department of Computer Science 2009 On the Modification of an Eigenvalue Problem that Preserves an Eigenspace Maxim Maumov

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give

More information

An Arnoldi method with structured starting vectors for the delay eigenvalue problem

An Arnoldi method with structured starting vectors for the delay eigenvalue problem An Arnoldi method with structured starting vectors for the delay eigenvalue problem Elias Jarlebring, Karl Meerbergen, Wim Michiels Department of Computer Science, K.U. Leuven, Celestijnenlaan 200 A, 3001

More information

A JACOBI-DAVIDSON ITERATION METHOD FOR LINEAR EIGENVALUE PROBLEMS. GERARD L.G. SLEIJPEN y AND HENK A. VAN DER VORST y

A JACOBI-DAVIDSON ITERATION METHOD FOR LINEAR EIGENVALUE PROBLEMS. GERARD L.G. SLEIJPEN y AND HENK A. VAN DER VORST y A JACOBI-DAVIDSON ITERATION METHOD FOR LINEAR EIGENVALUE PROBLEMS GERARD L.G. SLEIJPEN y AND HENK A. VAN DER VORST y Abstract. In this paper we propose a new method for the iterative computation of a few

More information

SOLVING MESH EIGENPROBLEMS WITH MULTIGRID EFFICIENCY

SOLVING MESH EIGENPROBLEMS WITH MULTIGRID EFFICIENCY SOLVING MESH EIGENPROBLEMS WITH MULTIGRID EFFICIENCY KLAUS NEYMEYR ABSTRACT. Multigrid techniques can successfully be applied to mesh eigenvalue problems for elliptic differential operators. They allow

More information

Jacobi-Davidson Algorithm for Locating Resonances in a Few-Body Tunneling System

Jacobi-Davidson Algorithm for Locating Resonances in a Few-Body Tunneling System Jacobi-Davidson Algorithm for Locating Resonances in a Few-Body Tunneling System Bachelor Thesis in Physics and Engineering Physics Gustav Hjelmare Jonathan Larsson David Lidberg Sebastian Östnell Department

More information

Preconditioned Eigenvalue Solvers for electronic structure calculations. Andrew V. Knyazev. Householder Symposium XVI May 26, 2005

Preconditioned Eigenvalue Solvers for electronic structure calculations. Andrew V. Knyazev. Householder Symposium XVI May 26, 2005 1 Preconditioned Eigenvalue Solvers for electronic structure calculations Andrew V. Knyazev Department of Mathematics and Center for Computational Mathematics University of Colorado at Denver Householder

More information

Krylov subspace projection methods

Krylov subspace projection methods I.1.(a) Krylov subspace projection methods Orthogonal projection technique : framework Let A be an n n complex matrix and K be an m-dimensional subspace of C n. An orthogonal projection technique seeks

More information

Domain decomposition on different levels of the Jacobi-Davidson method

Domain decomposition on different levels of the Jacobi-Davidson method hapter 5 Domain decomposition on different levels of the Jacobi-Davidson method Abstract Most computational work of Jacobi-Davidson [46], an iterative method suitable for computing solutions of large dimensional

More information

A Jacobi Davidson type method for the product eigenvalue problem

A Jacobi Davidson type method for the product eigenvalue problem A Jacobi Davidson type method for the product eigenvalue problem Michiel E Hochstenbach Abstract We propose a Jacobi Davidson type method to compute selected eigenpairs of the product eigenvalue problem

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Robust successive computation of eigenpairs for nonlinear eigenvalue problems

Robust successive computation of eigenpairs for nonlinear eigenvalue problems Robust successive computation of eigenpairs for nonlinear eigenvalue problems C. Effenberger July 21, 2012 Newton-based methods are well-established techniques for solving nonlinear eigenvalue problems.

More information

A Parallel Additive Schwarz Preconditioned Jacobi-Davidson Algorithm for Polynomial Eigenvalue Problems in Quantum Dot Simulation

A Parallel Additive Schwarz Preconditioned Jacobi-Davidson Algorithm for Polynomial Eigenvalue Problems in Quantum Dot Simulation A Parallel Additive Schwarz Preconditioned Jacobi-Davidson Algorithm for Polynomial Eigenvalue Problems in Quantum Dot Simulation Feng-Nan Hwang a, Zih-Hao Wei a, Tsung-Ming Huang b, Weichung Wang c, a

More information

A Jacobi Davidson Algorithm for Large Eigenvalue Problems from Opto-Electronics

A Jacobi Davidson Algorithm for Large Eigenvalue Problems from Opto-Electronics RANMEP, Hsinchu, Taiwan, January 4 8, 2008 1/30 A Jacobi Davidson Algorithm for Large Eigenvalue Problems from Opto-Electronics Peter Arbenz 1 Urban Weber 1 Ratko Veprek 2 Bernd Witzigmann 2 1 Institute

More information