Preconditioned inverse iteration and shift-invert Arnoldi method


Preconditioned inverse iteration and shift-invert Arnoldi method. Melina Freitag, Department of Mathematical Sciences, University of Bath. CSC Seminar, Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, 3rd May 2011. Joint work with Alastair Spence (Bath).


Outline
1. Introduction
2. Inexact inverse iteration
3. Inexact shift-invert Arnoldi method
4. Inexact shift-invert Arnoldi method with implicit restarts
5. Conclusions


Problem and iterative methods. Find a small number of eigenvalues and eigenvectors of Ax = λx, λ ∈ C, x ∈ C^n, where A is large, sparse and nonsymmetric. Iterative solvers: the power method, simultaneous iteration, the Arnoldi method and the Jacobi-Davidson method. The first three of these involve repeated application of the matrix A to a vector and generally converge to the largest/outlying eigenvalues.


Shift-invert strategy. Wish to find a few eigenvalues close to a shift σ. The problem becomes (A − σI)^{-1} x = (1/(λ − σ)) x, so each step of the iterative method involves repeated application of A := (A − σI)^{-1} to a vector. Inner iterative solve: (A − σI) y = x using a Krylov method for linear systems, leading to an inner-outer iterative method.
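The effect of the transformation can be checked numerically; a minimal numpy sketch (the diagonal matrix and the shift are illustrative, not from the talk):

```python
import numpy as np

# Shift-invert maps an eigenvalue lambda of A to 1/(lambda - sigma),
# so the eigenvalue closest to the shift becomes the dominant one.
A = np.diag([1.0, 2.0, 5.0, 10.0])        # illustrative matrix, eigenvalues on the diagonal
sigma = 1.9
B = np.linalg.inv(A - sigma * np.eye(4))  # (A - sigma I)^{-1}
mu = np.linalg.eigvals(B)
dominant = mu[np.argmax(np.abs(mu))]      # largest-magnitude eigenvalue of the transform
lam = sigma + 1.0 / dominant              # recover the original eigenvalue
print(lam)                                # close to 2.0, the eigenvalue nearest sigma
```

A power-type method applied to B would therefore converge to the eigenvalue of A nearest σ.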

Shift-invert strategy. This talk: inner iteration and preconditioning; fixed shifts only; inverse iteration and the Arnoldi method.

Outline
1. Introduction
2. Inexact inverse iteration
3. Inexact shift-invert Arnoldi method
4. Inexact shift-invert Arnoldi method with implicit restarts
5. Conclusions


The algorithm: inexact inverse iteration.

for i = 1, 2, ... do
  choose τ^(i)
  solve ‖(A − σI) y^(i) − x^(i)‖ = ‖d^(i)‖ ≤ τ^(i)
  rescale x^(i+1) = y^(i) / ‖y^(i)‖
  update λ^(i+1) = (x^(i+1))^H A x^(i+1)
  test the eigenvalue residual r^(i+1) = (A − λ^(i+1) I) x^(i+1)
end for

Convergence rates: if τ^(i) = C ‖r^(i)‖, then the convergence rate is linear (the same convergence rate as for exact solves).
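The loop above can be sketched in Python/numpy for a real matrix; here the inexact inner solve is modelled by an exact solve whose residual is perturbed to have norm exactly τ^(i) = C‖r^(i)‖ (the function name, the perturbation model and the constants are illustrative):

```python
import numpy as np

def inexact_inverse_iteration(A, sigma, x0, C=0.01, maxit=50, tol=1e-10):
    """Inexact inverse iteration with decreasing inner tolerance tau = C*||r||.
    The inner Krylov solve is mimicked by an exact solve with residual norm tau."""
    rng = np.random.default_rng(0)
    n = A.shape[0]
    M = A - sigma * np.eye(n)
    x = x0 / np.linalg.norm(x0)
    lam = x @ A @ x                       # Rayleigh quotient (real case)
    for _ in range(maxit):
        r = A @ x - lam * x               # eigenvalue residual
        if np.linalg.norm(r) < tol:
            break
        tau = C * np.linalg.norm(r)       # inner tolerance shrinks with the residual
        d = rng.standard_normal(n)
        d *= tau / np.linalg.norm(d)      # residual vector with ||d|| = tau
        y = np.linalg.solve(M, x + d)     # satisfies ||(A - sigma I) y - x|| = tau
        x = y / np.linalg.norm(y)         # rescale
        lam = x @ A @ x                   # update eigenvalue estimate
    return lam, x
```

With τ^(i) proportional to the outer residual the iterate still converges linearly, as the slide states.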


The inner iteration for (A − σI) y = x. Standard GMRES theory for y_0 = 0 and A diagonalisable:
‖x − (A − σI) y_k‖ ≤ κ(W) min_{p ∈ P_k} max_{j=1,...,n} |p(λ_j)| ‖x‖,
where the λ_j are the eigenvalues of A − σI and A − σI = W Λ W^{-1}. Number of inner iterations needed for ‖x − (A − σI) y_k‖ ≤ τ:
k ≈ C_1 + C_2 log(‖x‖ / τ).

The inner iteration for (A − σI) y = x. More detailed GMRES theory for y_0 = 0:
‖x − (A − σI) y_k‖ ≤ κ(W) |λ_1| min_{p ∈ P_{k−1}} max_{j=2,...,n} |p(λ_j)| ‖Qx‖,
where the λ_j are the eigenvalues of A − σI and Q projects onto the space not spanned by the eigenvector. Number of inner iterations:
k ≈ C_1 + C_2 log(‖Qx‖ / τ).


The inner iteration for (A − σI) y = x. Good news: with τ^(i) = C ‖r^(i)‖,
k^(i) ≈ C_1 + C_2 log(C_3 ‖r^(i)‖ / τ^(i)),
so the iteration number is approximately constant! Bad news: for a standard preconditioner P, solving
(A − σI) P^{-1} ỹ^(i) = x^(i), y^(i) = P^{-1} ỹ^(i),
gives k^(i) ≈ C_1 + C_2 log(‖Q x^(i)‖ / τ^(i)) = C_1 + C_2 log(C̃ / τ^(i)) with τ^(i) = C ‖r^(i)‖, so the iteration number increases!

Convection-diffusion operator. Finite difference discretisation on a 32×32 grid of the convection-diffusion operator −Δu + 5u_x + 5u_y = λu on (0,1)^2 with homogeneous Dirichlet boundary conditions (961×961 matrix). Smallest eigenvalue: λ_1 ≈ 32.18560954. Preconditioned GMRES with tolerance τ^(i) = 0.01 ‖r^(i)‖; standard and tuned preconditioner (incomplete LU).

Convection-diffusion operator: right preconditioning, 569 inner iterations in total. [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

The inner iteration for (A σi)p 1 ỹ = x How to overcome this problem Use a different preconditioner, namely one that satisfies P ix (i) = Ax (i), P i := P +(A P)x (i) x (i)h minor modification and minor extra computational cost, [AP 1 i ]Ax (i) = Ax (i).

The inner iteration for (A σi)p 1 ỹ = x How to overcome this problem Use a different preconditioner, namely one that satisfies P ix (i) = Ax (i), P i := P +(A P)x (i) x (i)h minor modification and minor extra computational cost, [AP 1 i ]Ax (i) = Ax (i). Why does that work? Assume we have found eigenvector x 1 Ax 1 = Px 1 = λ 1x 1 (A σi)p 1 x 1 = λ1 σ λ 1 x 1 and convergence of Krylov method applied to (A σi)p 1 ỹ = x 1 in one iteration. For general x (i) k (i) C 1 +C 2 log C3 r(i), where τ (i) = C r (i). τ (i)


Convection-diffusion operator: right preconditioning (569 iterations) vs tuned right preconditioning (115 iterations). [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

Outline
1. Introduction
2. Inexact inverse iteration
3. Inexact shift-invert Arnoldi method
4. Inexact shift-invert Arnoldi method with implicit restarts
5. Conclusions


The algorithm: Arnoldi's method. The Arnoldi method constructs an orthogonal basis of the k-dimensional Krylov subspace
K_k(A, q^(1)) = span{q^(1), A q^(1), A^2 q^(1), ..., A^{k−1} q^(1)},
A Q_k = Q_k H_k + q^(k+1) h_{k+1,k} e_k^H = Q_{k+1} H̃_k, Q_k^H Q_k = I,
where H̃_k stacks H_k on top of the row h_{k+1,k} e_k^H. Eigenvalues of H_k are approximations to (outlying) eigenvalues of A: for an eigenpair (θ, u) of H_k and x = Q_k u,
r_k = A x − θ x = (A Q_k − Q_k H_k) u = h_{k+1,k} (e_k^H u) q^(k+1).
At each step, A is applied to q_k; orthonormalising A q_k against Q_k yields q^(k+1).
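The construction can be sketched in a few lines; this is a textbook modified Gram-Schmidt version in Python/numpy (an illustrative sketch, not the talk's implementation, with no breakdown handling):

```python
import numpy as np

def arnoldi(A, q1, k):
    """Basic Arnoldi iteration: returns an orthonormal basis Q (n x (k+1)) of the
    Krylov subspace and the (k+1) x k upper Hessenberg H with A Q_k = Q_{k+1} H."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for j in range(k):
        w = A @ Q[:, j]                  # apply A to the latest basis vector
        for i in range(j + 1):           # modified Gram-Schmidt orthogonalisation
            H[i, j] = Q[:, i].conj() @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)  # subdiagonal entry h_{j+1,j}
        Q[:, j + 1] = w / H[j + 1, j]    # next orthonormal basis vector
    return Q, H
```

The eigenvalues of the leading k×k block H[:k, :k] are the Ritz values approximating outlying eigenvalues of A.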

Example: a random complex matrix of dimension n = 144 generated in Matlab:
G = numgrid('N',14); B = delsq(G); A = sprandn(B) + 1i*sprandn(B);
[Figure: eigenvalues of A in the complex plane.]

[Figures: Ritz values after 5, 10, 15, 20, 25 and 30 Arnoldi steps, plotted against the eigenvalues of A.]

The algorithm (take σ = 0): shift-invert Arnoldi's method, A := A^{-1}. The Arnoldi method constructs an orthogonal basis of the k-dimensional Krylov subspace
K_k(A^{-1}, q^(1)) = span{q^(1), A^{-1} q^(1), (A^{-1})^2 q^(1), ..., (A^{-1})^{k−1} q^(1)},
A^{-1} Q_k = Q_k H_k + q^(k+1) h_{k+1,k} e_k^H = Q_{k+1} H̃_k, Q_k^H Q_k = I.
Eigenvalues of H_k are approximations to (outlying) eigenvalues of A^{-1}, with residual
r_k = A^{-1} x − θ x = (A^{-1} Q_k − Q_k H_k) u = h_{k+1,k} (e_k^H u) q^(k+1).
At each step, A^{-1} is applied to q_k, i.e. the linear system A q̃^(k+1) = q_k is solved.


Inexact solves (Simoncini (2005), Bouras and Frayssé (2000)). Wish to solve A q̃^(k+1) = q_k inexactly, with residual ‖q_k − A q̃^(k+1)‖ = ‖d_k‖ ≤ τ_k. This leads to the inexact Arnoldi relation
A^{-1} Q_k = Q_{k+1} H̃_k + D_k, D_k = [d_1 ... d_k].
For u an eigenvector of H_k:
r_k = (A^{-1} Q_k − Q_k H_k) u = h_{k+1,k} (e_k^H u) q^(k+1) + D_k u.

Inexact solves (Simoncini (2005), Bouras and Frayssé (2000)). The term D_k u is a linear combination of the columns of D_k,
D_k u = Σ_{l=1}^k d_l u_l:
if |u_l| is small, then ‖d_l‖ is allowed to be large! Requiring ‖d_l‖ |u_l| ≤ ε/k gives ‖D_k u‖ < ε, and since |u_l| ≤ C(l,k) ‖r_{l−1}‖ it suffices to solve A q̃^(k+1) = q_k with ‖d_k‖ ≤ C (1/‖r_{k−1}‖): the solve tolerance can be relaxed as the outer residual decreases.


The inner iteration for A P^{-1} q̃^(k+1) = q_k. Preconditioning: the GMRES convergence bound
‖q_k − A P^{-1} q̃^(k+1)_l‖ ≤ κ min_{p ∈ Π_l} max_{i=1,...,n} |p(μ_i)| ‖q_k‖
depends on the eigenvalue clustering of A P^{-1} (the μ_i), the condition number, and the right-hand side (initial guess).


The inner iteration for A P^{-1} q̃^(k+1) = q_k. Preconditioning: introduce a preconditioner P and solve
A P^{-1} q̃^(k+1) = q_k, q^(k+1) = P^{-1} q̃^(k+1),
using GMRES. Tuned preconditioner for Arnoldi's method: use P_k satisfying P_k Q_k = A Q_k, given by
P_k = P + (A − P) Q_k Q_k^H.


The inner iteration for A q̃ = q. Theorem (properties of the tuned preconditioner): let P = A + E be a preconditioner for A and assume k steps of Arnoldi's method have been carried out. Then k eigenvalues of A P_k^{-1} are equal to one, [A P_k^{-1}] A Q_k = A Q_k, and the remaining n − k eigenvalues are close to the corresponding eigenvalues of A P^{-1}. Implementation: Sherman-Morrison-Woodbury; only minor extra cost (one back substitution per outer iteration).
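A dense sketch of the Sherman-Morrison-Woodbury application, writing P_k = P + U Q_k^H with U = (A − P) Q_k (the function and variable names are illustrative; in practice the solves with P are ILU forward/back substitutions):

```python
import numpy as np

def apply_tuned_inverse(P, A, Qk, z):
    """Return P_k^{-1} z for P_k = P + (A - P) Q_k Q_k^H without forming P_k,
    via the Sherman-Morrison-Woodbury formula (one extra small k x k solve)."""
    U = (A - P) @ Qk                                # n x k low-rank factor
    PinvU = np.linalg.solve(P, U)
    Pinvz = np.linalg.solve(P, z)
    S = np.eye(Qk.shape[1]) + Qk.conj().T @ PinvU   # k x k capacitance matrix
    return Pinvz - PinvU @ np.linalg.solve(S, Qk.conj().T @ Pinvz)
```

Beyond the solves with P itself, only a k×k system appears, which is the "minor extra cost" mentioned on the slide.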

Numerical example: sherman5.mtx, a nonsymmetric matrix from the Matrix Market library (3312×3312). Smallest eigenvalue: λ_1 ≈ 4.69 × 10^{-2}. Preconditioned GMRES as inner solver (both fixed tolerance and relaxation strategy); standard and tuned preconditioner (incomplete LU).

No tuning and standard preconditioner, Arnoldi tolerance 10^{-14}/m. [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

Tuning the preconditioner: Arnoldi tolerance 10^{-14}/m vs Arnoldi tolerance 10^{-14} with tuning. [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

Relaxation: Arnoldi tolerance 10^{-14}/m, Arnoldi relaxed tolerance, and Arnoldi tolerance 10^{-14} with tuning. [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

Tuning and relaxation strategy: Arnoldi tolerance 10^{-14}/m, Arnoldi relaxed tolerance, Arnoldi tolerance 10^{-14}/m with tuning, and Arnoldi relaxed tolerance with tuning. [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

Ritz values of exact and inexact Arnoldi

Exact eigenvalues    Ritz values (exact Arnoldi)    Ritz values (inexact Arnoldi, tuning)
+4.69249563e-02      +4.69249563e-02                +4.69249563e-02
+1.25445378e-01      +1.25445378e-01                +1.25445378e-01
+4.02658363e-01      +4.02658347e-01                +4.02658244e-01
+5.79574381e-01      +5.79625498e-01                +5.79817301e-01
+6.18836405e-01      +6.18798666e-01                +6.18650849e-01

Table: Ritz values of exact Arnoldi's method and inexact Arnoldi's method with the tuning strategy, compared to the exact eigenvalues closest to zero, after 14 shift-invert Arnoldi steps.

Outline
1. Introduction
2. Inexact inverse iteration
3. Inexact shift-invert Arnoldi method
4. Inexact shift-invert Arnoldi method with implicit restarts
5. Conclusions


Implicitly restarted Arnoldi (Sorensen (1992)). Exact shifts: take a (k + p)-step Arnoldi factorisation
A Q_{k+p} = Q_{k+p} H_{k+p} + q^(k+p+1) h_{k+p+1,k+p} e_{k+p}^H,
compute Λ(H_{k+p}) and select p shifts for an implicit QR iteration, then implicitly restart with the new starting vector q̂^(1) = p(A) q^(1) / ‖p(A) q^(1)‖. Aim of IRA:
A Q_k = Q_k H_k + q^(k+1) h_{k+1,k} e_k^H with h_{k+1,k} → 0.
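ARPACK implements exactly this implicitly restarted Arnoldi method, including a shift-invert mode; a minimal call through scipy (the matrix is illustrative, and here the inner solves are exact factorisations rather than preconditioned iterative solves):

```python
import numpy as np
from scipy.sparse.linalg import eigs

A = np.diag(np.arange(1.0, 21.0))  # illustrative matrix with eigenvalues 1..20
# Shift-invert IRA: ARPACK runs implicitly restarted Arnoldi on (A - sigma I)^{-1}
# and returns the k eigenvalues of A closest to the shift sigma.
vals, vecs = eigs(A, k=4, sigma=0.0)
print(np.sort(vals.real))          # the four eigenvalues nearest zero
```

The talk's question is what happens when the solves with (A − σI) inside such a method are themselves inexact.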

Relaxation strategy for IRA. Theorem (informal statement): for any given ε > 0, if the inexact-solve residuals d_l are kept below a bound that is tight for the first k steps and relaxed for l > k, then ‖A Q_m u − Q_m u θ‖ ≤ R_m ε. The precise statement and proof are very technical, but the conclusion is that the relaxation strategy also works for IRA!

Tuning for implicitly restarted Arnoldi's method: introduce a preconditioner P and solve
A P_k^{-1} q̃^(k+1) = q_k, q^(k+1) = P_k^{-1} q̃^(k+1),
using GMRES and a tuned preconditioner satisfying P_k Q_k = A Q_k, given by P_k = P + (A − P) Q_k Q_k^H.


Tuning: why does tuning help? Assume we have found an approximate invariant subspace, that is,
A^{-1} Q_k = Q_k H_k + q^(k+1) h_{k+1,k} e_k^H with h_{k+1,k} ≈ 0,
and let A^{-1} have the upper Hessenberg form
[Q_k Q̄_k]^H A^{-1} [Q_k Q̄_k] = [ H_k, T_12 ; h_{k+1,k} e_1 e_k^H, T_22 ],
where [Q_k Q̄_k] is unitary and H_k ∈ C^{k×k} and T_22 ∈ C^{(n−k)×(n−k)} are upper Hessenberg. If h_{k+1,k} ≈ 0, then
[Q_k Q̄_k]^H A P_k^{-1} [Q_k Q̄_k] = [ I, Q_k^H A P_k^{-1} Q̄_k ; 0, T_22^{-1} (Q̄_k^H P Q̄_k)^{-1} ] + O(|h_{k+1,k}|).

Tuning: why does tuning help? In the limiting case h_{k+1,k} = 0 the relation is exact:
[Q_k Q̄_k]^H A P_k^{-1} [Q_k Q̄_k] = [ I, Q_k^H A P_k^{-1} Q̄_k ; 0, T_22^{-1} (Q̄_k^H P Q̄_k)^{-1} ].


Tuning: another advantage. The system to be solved at each step of Arnoldi's method is
A P_k^{-1} q̃^(k+1) = q_k, q^(k+1) = P_k^{-1} q̃^(k+1).
Assuming an invariant subspace has been found (A^{-1} Q_k = Q_k H_k), then A P_k^{-1} q_k = q_k: the right-hand side is an eigenvector of the system matrix, and Krylov methods converge in one iteration!


Tuning: another advantage. In practice
A^{-1} Q_k = Q_k H_k + q^(k+1) h_{k+1,k} e_k^H and ‖A P_k^{-1} q_k − q_k‖ = O(|h_{k+1,k}|),
so the number of inner iterations decreases as the outer iteration proceeds. The rigorous analysis is quite technical.

Numerical example: sherman5.mtx, a nonsymmetric matrix from the Matrix Market library (3312×3312). Compute the k = 8 eigenvalues closest to zero; IRA with exact shifts, p = 4. Preconditioned GMRES as inner solver (fixed tolerance and relaxation strategy); standard and tuned preconditioner (incomplete LU).

No tuning and standard preconditioner, Arnoldi tolerance 10^{-12}. [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

Tuning: Arnoldi tolerance 10^{-12} vs Arnoldi tolerance 10^{-12} with tuning. [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

Relaxation: Arnoldi tolerance 10^{-12}, Arnoldi relaxed tolerance, and Arnoldi tolerance 10^{-12} with tuning. [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

Tuning and relaxation strategy: Arnoldi tolerance 10^{-12}, Arnoldi relaxed tolerance, Arnoldi tolerance 10^{-12} with tuning, and Arnoldi relaxed tolerance with tuning. [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

Numerical example: qc2534.mtx, a matrix from the Matrix Market library. Compute the k = 6 eigenvalues closest to zero; IRA with exact shifts, p = 4. Preconditioned GMRES as inner solver (fixed tolerance and relaxation strategy); standard and tuned preconditioner (incomplete LU).

Tuning and relaxation strategy: Arnoldi tolerance 10^{-12}, Arnoldi relaxed tolerance, Arnoldi tolerance 10^{-12} with tuning, and Arnoldi relaxed tolerance with tuning. [Figure: inner iterations vs outer iterations. Figure: eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations.]

Outline
1. Introduction
2. Inexact inverse iteration
3. Inexact shift-invert Arnoldi method
4. Inexact shift-invert Arnoldi method with implicit restarts
5. Conclusions

Conclusions. For eigenvalue computations it is advantageous to consider small-rank changes to standard preconditioners. This works for any preconditioner, and for shift-invert versions of the power method, simultaneous iteration and the Arnoldi method. Inexact inverse iteration with a special tuned preconditioner is equivalent to the Jacobi-Davidson method (without subspace expansion). For the Arnoldi method, the best results are obtained when relaxation and tuning are combined.

References
M. A. Freitag and A. Spence, Rayleigh quotient iteration and simplified Jacobi-Davidson method with preconditioned iterative solves.
M. A. Freitag and A. Spence, Convergence rates for inexact inverse iteration with application to preconditioned iterative solves, BIT, 47 (2007), pp. 27-44.
M. A. Freitag and A. Spence, A tuned preconditioner for inexact inverse iteration applied to Hermitian eigenvalue problems, IMA Journal of Numerical Analysis, 28 (2008), p. 522.
M. A. Freitag and A. Spence, Shift-invert Arnoldi's method with preconditioned iterative solves, SIAM Journal on Matrix Analysis and Applications, 31 (2009), pp. 942-969.
F. Xue and H. C. Elman, Convergence analysis of iterative solvers in inexact Rayleigh quotient iteration, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 877-899.
F. Xue and H. C. Elman, Fast inexact subspace iteration for generalized eigenvalue problems with spectral transformation, Linear Algebra and its Applications, (2010).