Lecture 3: Inexact inverse iteration with preconditioning


Department of Mathematical Sciences. Computational Methods with Applications, Harrachov, Czech Republic, 24th August 2007. (Joint work with M. Robbé and M. Sadkane (Brest).)

Outline

1 Introduction
2 Preconditioned GMRES
3 Inexact Subspace iteration
4 Preconditioned RQI and Jacobi-Davidson
5 Conclusions

1 Introduction

$Ax = \lambda x$, $\lambda \in \mathbb{C}$, $x \in \mathbb{C}^n$: seek $\lambda$ near a given shift $\sigma$. $A$ is large, sparse and nonsymmetric (for a discretised PDE: $Ax = \lambda M x$).

Inverse iteration: solve $(A - \sigma I)\,y = x$ with preconditioned iterative solves.

Extensions:
- Inverse subspace iteration
- Jacobi-Davidson method
- Shift-invert Arnoldi method (Melina Freitag)

Inexact inverse iteration

Assume $x^{(i)}$ is an approximate normalised eigenvector. Use iterative solves (e.g. GMRES) of
$$(A - \sigma I)\, y = x^{(i)}$$
(an inner-outer iteration): stop the inner solve once $\|x^{(i)} - (A - \sigma I)\, y_k\| \le \tau^{(i)}$, then rescale $y_k$ to get $x^{(i+1)}$ ($\tau^{(i)}$: solve tolerance).

(Right-)preconditioned solves, with $P^{-1}$ known:
$$(A - \sigma I)\, P^{-1} \tilde y = x^{(i)}, \qquad y = P^{-1} \tilde y.$$
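
As a concrete illustration, here is a minimal sketch of the inner-outer loop in Python/SciPy, with the decreasing tolerance $\tau^{(i)} = C\,\|r^{(i)}\|$ used later in the talk. The function name, shift handling and stopping threshold are illustrative choices, not from the slides.

```python
# Minimal sketch of inexact inverse iteration (assumes SciPy >= 1.12 for
# the `rtol` keyword of gmres; older versions call it `tol`).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def inexact_inverse_iteration(A, sigma, x0, C=0.01, maxit=25, restol=1e-10):
    x = x0 / np.linalg.norm(x0)
    M = (A - sigma * sp.eye(A.shape[0])).tocsr()    # shifted matrix A - sigma*I
    lam = np.vdot(x, A @ x)                         # Rayleigh quotient estimate
    for i in range(maxit):
        r = A @ x - lam * x                         # eigenvalue residual r^(i)
        if np.linalg.norm(r) < restol:
            break
        tau = C * np.linalg.norm(r)                 # decreasing solve tolerance
        y, info = spla.gmres(M, x, rtol=tau, atol=0.0)  # inexact inner solve
        x = y / np.linalg.norm(y)                   # rescale: x^(i+1)
        lam = np.vdot(x, A @ x)                     # update eigenvalue estimate
    return lam, x
```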

Convergence of inexact inverse iteration

Given $x^{(i)}$ and $\lambda^{(i)}$, define the eigenvalue residual $r^{(i)} = A x^{(i)} - \lambda^{(i)} x^{(i)}$.

Theorem (Convergence). If the solve tolerance $\tau^{(i)}$ is chosen to decrease proportionally to the norm of the eigenvalue residual $\|r^{(i)}\|$, then we recover the rate of convergence achieved when using direct solves. Other options/strategies are possible.

Literature: Ruhe/Wiberg (1972), Lai/Lin/Lin (1997), Smit/Paardekooper (1999), Golub/Ye (2000), Simoncini/Eldén (2002), Notay (2003), Notay/Hochstenbach (2007), Berns-Müller/Graham/Spence (2006), Freitag/Spence (2006), ...

TODAY: assume $\sigma = 0$, i.e. the inverse power method $A y = x^{(i)}$, preconditioned as
$$A P^{-1} \tilde y = x^{(i)}, \qquad y = P^{-1} \tilde y.$$
- inverse iteration
- inverse subspace iteration
- link with Jacobi-Davidson
- Shift-Invert Arnoldi (Melina Freitag, this afternoon)

Always assume a decreasing tolerance: $\tau^{(i)} = C\,\|A x^{(i)} - \lambda^{(i)} x^{(i)}\|$. Example:

Convection-Diffusion problem

Finite difference discretisation on a $32 \times 32$ grid of the convection-diffusion operator
$$-\Delta u + 5 u_x + 5 u_y = \lambda u \quad \text{on } (0,1)^2,$$
with homogeneous Dirichlet boundary conditions ($961 \times 961$ matrix). Smallest eigenvalue: $\lambda_1 \approx 32.18560954$. Preconditioned GMRES with tolerance $\tau^{(i)} = 0.01\,\|r^{(i)}\|$, ILU-based preconditioners.
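
For reference, a sketch of how such a test matrix can be assembled in Python/SciPy, assuming standard second-order central differences; the sign convention for $-\Delta$ and the Kronecker assembly are my reconstruction, not spelled out on the slide.

```python
# Sketch: central-difference discretisation of -Δu + 5u_x + 5u_y on (0,1)^2
# with homogeneous Dirichlet BCs; m = 31 interior points per direction
# gives the 961 x 961 matrix used in the talk.
import scipy.sparse as sp

def convection_diffusion(m=31, wind=5.0):
    h = 1.0 / (m + 1)
    I = sp.eye(m)
    D2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2  # -d2/dx2
    D1 = sp.diags([-1.0, 0.0, 1.0], [-1, 0, 1], shape=(m, m)) / (2*h)  # d/dx
    # 2D operator via Kronecker products: -Laplacian + wind*(u_x + u_y)
    return (sp.kron(I, D2) + sp.kron(D2, I)
            + wind * (sp.kron(I, D1) + sp.kron(D1, I))).tocsr()

A = convection_diffusion()   # 961 x 961, nonsymmetric
```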

Convection-Diffusion problem: no preconditioning, stopping test $\|A y_k - x^{(i)}\| \le \tau^{(i)}$

[Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms vs total number of inner iterations. Legend: unpreconditioned solves, 453 inner iterations in total.]

Question: why is there no increase in inner iterations as $i$ increases?

Convection-Diffusion problem: preconditioning, stopping test $\|A P^{-1} \tilde y_k - x^{(i)}\| \le \tau^{(i)}$

[Figure: inner iterations vs outer iterations, for $P^{-1}$ and $P_i^{-1}$.]

Question: why is $P_i^{-1}$ better than $P^{-1}$? Also, $P_i$ is a rank-one change to $P$, tuned to the eigenproblem!

2 Preconditioned GMRES

Theory: unpreconditioned solves to find $\lambda_1, x_1$

$x^{(i)}$ is an approximation to $x_1$. Decompose
$$x^{(i)} = \cos\theta^{(i)}\, x_1 + \sin\theta^{(i)}\, x_\perp,$$
so the angle $\theta^{(i)}$ between $x^{(i)}$ and $x_1$ measures the error, and
$$r^{(i)} = A x^{(i)} - \lambda^{(i)} x^{(i)}, \qquad \|r^{(i)}\| \le C \sin\theta^{(i)}.$$
Parlett (1998); the ideas extend to nonsymmetric problems.

GMRES applied to $A y = x^{(i)}$

Let $y_k$ be the iterate after $k$ steps, with stopping test $\|x^{(i)} - A y_k\| \le \tau^{(i)} = C\,\|r^{(i)}\|$. The GMRES residual satisfies (minimising over polynomials with $p_k(0) = 1$)
$$\|x^{(i)} - A y_k\| = \min_{p_k} \|p_k(A)\, x^{(i)}\| \le \min_{q_{k-1}} \left\| q_{k-1}(A)\Big(I - \tfrac{1}{\lambda_1} A\Big)\big(\cos\theta^{(i)} x_1 + \sin\theta^{(i)} x_\perp\big)\right\| \le C \rho^k \sin\theta^{(i)}, \quad 0 < \rho < 1,$$
which yields
$$k \le 1 + C_1 \left( \log C_2 + \log \frac{\sin\theta^{(i)}}{\tau^{(i)}} \right),$$
so the bound on $k$ does not increase with $i$.

Reason: $x^{(i)} = \cos\theta^{(i)} x_1 + \sin\theta^{(i)} x_\perp$, i.e. $x^{(i)}$ = eigenvector of $A$ + a term that tends to $0$.

GMRES applied to $A P^{-1} \tilde y = x^{(i)}$

Let $(\mu_1, u_1)$ be the eigenpair of $A P^{-1}$ nearest zero, $A P^{-1} u_1 = \mu_1 u_1$, and decompose
$$x^{(i)} = \cos\theta^{(i)}\, u_1 + \sin\theta^{(i)}\, u_\perp.$$
As before,
$$k \le 1 + C_1 \left( \log C_2 + \log \frac{\sin\theta^{(i)}}{\tau^{(i)}} \right),$$
BUT now $\sin\theta^{(i)} \to 0$ only if $u_1 \in \mathrm{span}\{x_1\}$, which generally won't hold. Since $\sin\theta^{(i)} \not\to 0$, the inner iteration costs increase with $i$.

Reason: $x^{(i)} = \cos\theta^{(i)} u_1 + \sin\theta^{(i)} u_\perp$, i.e. $x^{(i)}$ = eigenvector of $A P^{-1}$ + a term that does not tend to $0$.

New tuned preconditioner $P_i$

Idea: recreate the good relationship between the right-hand side and the iteration matrix, i.e. arrange that $x^{(i)}$ = eigenvector of the iteration matrix + a term that tends to $0$. Define
$$P_i = P + (A - P)\, x^{(i)} x^{(i)H}.$$
$P_i$ is a rank-one change to $P$, so $P_i^{-1}$ can be applied via Sherman-Morrison. Since $x^{(i)H} x^{(i)} = 1$,
$$P_i x^{(i)} = P x^{(i)} + (A - P)\, x^{(i)} x^{(i)H} x^{(i)} = A x^{(i)},$$
hence $A x^{(i)}$ is an eigenvector of $A P_i^{-1}$:
$$A P_i^{-1} (A x^{(i)}) = A x^{(i)}.$$
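
A small sketch of how $P_i^{-1}$ can be applied given only the action of $P^{-1}$ (for example from an ILU factorisation); the observation $P^{-1}(A - P)x = P^{-1}(Ax) - x$ avoids ever forming $P$ explicitly. Function names are illustrative, not from the talk.

```python
# Sketch: apply P_i^{-1} = (P + (A-P) x x^H)^{-1} via Sherman-Morrison,
# given P_solve(z) = P^{-1} z (e.g. ilu.solve from scipy's spilu).
import numpy as np
import scipy.sparse.linalg as spla

def make_tuned_solve(A, P_solve, x):
    """Return z -> P_i^{-1} z, for normalised x."""
    w = P_solve(A @ x) - x                       # P^{-1} (A - P) x
    denom = 1.0 + np.vdot(x, w)                  # 1 + x^H P^{-1} (A - P) x
    def tuned_solve(z):
        t = P_solve(z)
        return t - w * (np.vdot(x, t) / denom)   # Sherman-Morrison update
    return tuned_solve

# The tuning property P_i^{-1}(A x) = x, equivalently A P_i^{-1}(A x) = A x,
# follows directly from the formulas above. Used as a right preconditioner:
# M = spla.LinearOperator(A.shape, matvec=make_tuned_solve(A, ilu.solve, x))
# y, info = spla.gmres(A, x, M=M)
```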

GMRES with the tuned preconditioner

Recall: we solve $A P_i^{-1} \tilde y = x^{(i)}$, and $A P_i^{-1}(A x^{(i)}) = A x^{(i)}$. Is $x^{(i)}$ a nice right-hand side for $A P_i^{-1}$?

From $r^{(i)} = A x^{(i)} - \lambda^{(i)} x^{(i)}$,
$$x^{(i)} = \frac{1}{\lambda^{(i)}}\, A x^{(i)} - \frac{1}{\lambda^{(i)}}\, r^{(i)}.$$
Idea of tuning: change the iteration matrix so that $x^{(i)}$ = eigenvector of $A P_i^{-1}$ + a term that tends to $0$. The GMRES analysis is then essentially the same as in the unpreconditioned case: no increase in inner iterations as $i$ increases. This explains the earlier numerical results.

Theory for the tuned preconditioner

As before, $P_i = P + (A - P)\, x^{(i)} x^{(i)H}$. Now introduce the ideal preconditioner
$$P_{\mathrm{ideal}} = P + (A - P)\, x_1 x_1^H.$$
Then $x_1$ is an eigenvector of $A P_{\mathrm{ideal}}^{-1}$, and GMRES applied to $A P_{\mathrm{ideal}}^{-1} \tilde y = x_1$ converges in one step. The system $A P_i^{-1} \tilde y = x^{(i)}$ is close to this ideal system, which gives a proof that the inner iteration count is independent of $i$.

Subspace (block) version: Robbé/Sadkane/Spence.
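
A quick check of the one-step claim (my expansion of the slide's argument, using $x_1^H x_1 = 1$ and $A x_1 = \lambda_1 x_1$):
$$P_{\mathrm{ideal}}\, x_1 = P x_1 + (A - P)\, x_1 (x_1^H x_1) = A x_1
\quad\Longrightarrow\quad
A P_{\mathrm{ideal}}^{-1}(A x_1) = A x_1.$$
Since $x_1 = \frac{1}{\lambda_1} A x_1$ exactly, the right-hand side $x_1$ is itself an eigenvector of $A P_{\mathrm{ideal}}^{-1}$ with eigenvalue $1$, so the degree-one GMRES residual polynomial $p_1(z) = 1 - z$ already gives a zero residual.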

3 Inexact Subspace iteration

Numerical Example
- same PDE example as before (5, 10); $n = 2025$, $nnz = 9945$
- ILU preconditioner, drop tolerance 0.1
- subspace dimension 6
- seek the first 4 eigenvalues (stop when residual $\le 10^{-8}$)

Unpreconditioned Block-GMRES

[Figure: inner iterations $k_i$ vs outer iterations $i$; no preconditioner.]

Preconditioned Block-GMRES

[Figure: inner iterations $k_i$ vs outer iterations $i$; tuned preconditioner vs ILU preconditioner.]

[Figure: residual norms $\Gamma_i$ vs outer iterations $i$; tuned preconditioner vs ILU preconditioner.]

[Figure: residual norms $\Gamma_i$ vs total number of inner iterations $\sum_{j=0}^{i} k_j$; tuned preconditioner vs ILU preconditioner.]

Numerical Example
- Matrix Market library, qc2534: complex symmetric (non-Hermitian), $n = 2534$, $nnz = 463360$
- ILU preconditioner
- subspace dimension 16
- seek the first 10 eigenvalues

Preconditioned GMRES

[Figure: inner iterations $k_i$ vs outer iterations $i$; tuned preconditioner vs ILU preconditioner.]

[Figure: residual norms $\Gamma_i$ vs total number of inner iterations $\sum_{j=0}^{i} k_j$; tuned preconditioner vs ILU preconditioner.]

4 Preconditioned RQI and Jacobi-Davidson

RQI and J-D: Exact solves

Rayleigh quotient iteration: at each iteration a system of the form
$$(A - \rho(x) I)\, y = x$$
has to be solved.

Jacobi-Davidson method: at each iteration a system of the form
$$(I - x x^H)(A - \rho(x) I)(I - x x^H)\, s = -r$$
has to be solved, where $r = (A - \rho(x) I)\, x$ is the eigenvalue residual and $s \perp x$.

Exact solves (Sleijpen and van der Vorst, 1996): $y = \alpha (x + s)$ for some constant $\alpha$.
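
To make the JD correction equation concrete, a minimal sketch of one inexact solve with a projected operator (my illustration; the talk itself uses FOM/GMRES inner solvers, and the helper names here are invented):

```python
# Sketch: one inexact Jacobi-Davidson correction-equation solve,
# (I - xx^H)(A - rho*I)(I - xx^H) s = -r with s ⊥ x, via GMRES.
import numpy as np
import scipy.sparse.linalg as spla

def jd_correction(A, rho, x, tol=1e-2):
    n = A.shape[0]
    proj = lambda v: v - x * np.vdot(x, v)       # I - x x^H (x normalised)
    def matvec(v):
        pv = proj(v)
        return proj(A @ pv - rho * pv)           # projected, shifted operator
    op = spla.LinearOperator((n, n), matvec=matvec, dtype=complex)
    r = A @ x - rho * x                          # eigenvalue residual
    s, info = spla.gmres(op, -r, rtol=tol)       # inexact inner solve
    return proj(s)                               # enforce s ⊥ x
```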

RQI and J-D: Inexact solves

For the same two systems, solved inexactly: Galerkin-Krylov solver (Simoncini and Eldén (2002); Hochstenbach and Sleijpen (2003) for two-sided RQ iteration): $y_{k+1} = \beta (x + s_k)$ for some constant $\beta$, if both systems are solved using a Galerkin-Krylov subspace method.

RQI and J-D: Preconditioned solves

Preconditioning for RQ iteration: at each iteration a system of the form
$$(A - \rho(x) I)\, P^{-1} \tilde y = x, \qquad y = P^{-1} \tilde y,$$
has to be solved.

Preconditioning for the JD method: at each iteration a system of the form
$$(I - x x^H)(A - \rho(x) I)(I - x x^H)\, \mathcal{P}^{-1} \tilde s = -r, \qquad s = \mathcal{P}^{-1} \tilde s,$$
has to be solved. Note the restricted preconditioner $\mathcal{P} := (I - x x^H)\, P\, (I - x x^H)$. Equivalence does not hold!

Example: sherman5.mtx, fixed shift; (preconditioned) FOM as inner solver

[Figure: convergence history of the eigenvalue residuals, no preconditioner; simplified Jacobi-Davidson vs inverse iteration.]

[Figure: convergence history of the eigenvalue residuals, standard preconditioner; simplified Jacobi-Davidson vs inverse iteration.]

Preconditioned solves: tuned RQI ⇔ preconditioned JD

Tuning condition: $\tilde P x = x$.

Preconditioning for RQ iteration: the inner solve builds the Krylov space
$$\mathrm{span}\{x,\ (A - \rho(x) I)\tilde P^{-1} x,\ ((A - \rho(x) I)\tilde P^{-1})^2 x,\ \ldots\}.$$

Preconditioning for the JD method: the inner solve builds the Krylov space
$$\mathrm{span}\{r,\ \Pi_1 (A - \rho(x) I)\Pi_2^P P^{-1} r,\ (\Pi_1 (A - \rho(x) I)\Pi_2^P P^{-1})^2 r,\ \ldots\},$$
where $\Pi_1 = I - x x^H$ and $\Pi_2^P = I - \dfrac{P^{-1} x\, x^H}{x^H P^{-1} x}$.
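
For illustration, the two projectors from the JD-side Krylov space above, in the same Python/SciPy style as the earlier sketches (again assuming only the action of $P^{-1}$ is available; the names are invented):

```python
# Sketch: one application of Π1 (A - ρI) Π2^P P^{-1}, the operator whose
# powers generate the JD-side Krylov space.
import numpy as np

def make_jd_apply(A, rho, x, P_solve):
    Pinv_x = P_solve(x)                                   # P^{-1} x
    pi1 = lambda v: v - x * np.vdot(x, v)                 # Π1 = I - x x^H
    pi2 = lambda v: v - Pinv_x * (np.vdot(x, v) / np.vdot(x, Pinv_x))  # Π2^P
    def apply(v):
        w = pi2(P_solve(v))                               # Π2^P P^{-1} v
        return pi1(A @ w - rho * w)                       # Π1 (A - ρI) w
    return apply
```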

Consider subspaces (abbreviating $\mathcal{A} \equiv A - \rho(x) I$).

Lemma. The subspaces
$$K_k = \mathrm{span}\{x,\ \mathcal{A}\tilde P^{-1} x,\ (\mathcal{A}\tilde P^{-1})^2 x,\ \ldots,\ (\mathcal{A}\tilde P^{-1})^k x\}$$
and
$$L_k = \mathrm{span}\{x,\ r,\ \Pi_1 \mathcal{A} \Pi_2^P P^{-1} r,\ (\Pi_1 \mathcal{A} \Pi_2^P P^{-1})^2 r,\ \ldots,\ (\Pi_1 \mathcal{A} \Pi_2^P P^{-1})^{k-1} r\}$$
are equal.

Proof: based on Stathopoulos and Saad (1998).

Equivalence for inexact solves

Theorem. Let both
$$(A - \rho(x) I)\, \tilde P^{-1} \tilde y = x, \qquad y = \tilde P^{-1} \tilde y,$$
and
$$(I - x x^H)(A - \rho(x) I)(I - x x^H)\, \mathcal{P}^{-1} \tilde s = -r, \qquad s = \mathcal{P}^{-1} \tilde s,$$
be solved with the same Galerkin-Krylov method. Then
$$y_{k+1}^{RQ} = \gamma\, (x + s_k^{JD}).$$
Proof: based on Simoncini and Eldén (2002).

Heuristic explanation of RQI + $\tilde P$ ⇔ JD + $P$

Tuning: $\tilde P x = x$, where
$$\tilde P = P + (I - P)\, x x^H = x x^H + P\, (I - x x^H).$$
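
A one-line check of the tuning condition (my expansion, using $x^H x = 1$):
$$\tilde P x = P x + (I - P)\, x\, (x^H x) = P x + x - P x = x,$$
and the second form follows by expanding $P + x x^H - P x x^H = x x^H + P(I - x x^H)$.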

Example: sherman5.mtx, fixed shift; (preconditioned) FOM as inner solver

[Figure: convergence history of the eigenvalue residuals, no preconditioner; simplified Jacobi-Davidson vs inverse iteration.]

[Figure: convergence history of the eigenvalue residuals; simplified JD with standard preconditioner, inverse iteration with standard preconditioner, inverse iteration with tuned preconditioner.]

5 Conclusions

Conclusions
- When using Krylov solvers for shifted systems $(A - \sigma I)\, y = x^{(i)}$ in eigenvalue computations, it is best if the iteration matrix has a good relationship with the right-hand side.
- For any preconditioner, this good relationship is achieved by a small-rank change called tuning, at essentially no extra cost.
- Numerical results on eigenvalue problems obtained from mixed FEM Navier-Stokes with a DD preconditioner show the same gains.

References

M. A. Freitag and A. Spence, A tuned preconditioner for inexact inverse iteration applied to Hermitian eigenvalue problems, 2005. Submitted to IMA J. Numer. Anal.

M. A. Freitag and A. Spence, Convergence rates for inexact inverse iteration with application to preconditioned iterative solves, BIT, 47 (2007), pp. 27-44.

M. A. Freitag and A. Spence, Rayleigh quotient iteration and simplified Jacobi-Davidson method with preconditioned iterative solves, 2007. Submitted to Linear Algebra Appl.

M. Robbé, M. Sadkane, and A. Spence, Inexact inverse subspace iteration with preconditioning applied to non-Hermitian eigenvalue problems, 2006. Submitted to SIAM J. Matrix Anal. Appl.