A Tuned Preconditioner for Inexact Inverse Iteration for Generalised Eigenvalue Problems


A Tuned Preconditioner for Inexact Inverse Iteration for Generalised Eigenvalue Problems
Department of Mathematical Sciences, University of Bath, United Kingdom
IWASEP VI, May 22-25, 2006, Pennsylvania State University, University Park
Joint work with: Melina Freitag and Eero Vainikko


The problem: Ax = λMx; seek (λ_1, x_1) with Ax_1 = λ_1 Mx_1.
Large sparse nonsymmetric matrices.
Stability calculations for linearised Navier-Stokes using mixed FEM.
Hopf bifurcation: λ complex. Simple eigenvalues.
Inverse iteration with iterative solves for shifted linear systems; also Jacobi-Davidson, Arnoldi, ...
TODAY: costs of system solves


Inexact inverse iteration (an inner-outer iteration)
Ax = λMx, (A - σM)^{-1} M x = (1/(λ - σ)) x.
Fixed shift σ; starting guess x^(0) with c^H x^(0) = 1.
for i = 1, 2, ... do
  choose a tolerance τ^(i)
  solve (A - σM) y^(i) = M x^(i) inexactly: ||M x^(i) - (A - σM) y^(i)|| ≤ τ^(i)
  update the eigenvector: x^(i+1) = y^(i) / (c^H y^(i))
  update the eigenvalue: λ^(i+1) = Rayleigh quotient
  eigenvalue residual: r^(i+1) = (A - λ^(i+1) M) x^(i+1)
end for
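The loop above can be sketched as follows, assuming NumPy/SciPy. Here the inexact inner solve is mimicked by a sparse direct solve plus a deliberately injected residual of size τ^(i); in practice it would be a preconditioned Krylov solve to tolerance τ^(i). The diagonal test pencil, the shift, and the constant C are illustrative choices, not data from the talk.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def inexact_inverse_iteration(A, M, sigma, x0, n_outer=25, C=0.01):
    """Fixed-shift inexact inverse iteration for A x = lambda M x."""
    rng = np.random.default_rng(0)
    c = np.zeros(len(x0)); c[0] = 1.0          # normalisation functional
    x = x0 / (c @ x0)                          # c^H x^(0) = 1
    lu = spla.splu(sp.csc_matrix(A - sigma * M))
    lam = None
    for _ in range(n_outer):
        lam = (x @ (A @ x)) / (x @ (M @ x))    # Rayleigh quotient
        r = A @ x - lam * (M @ x)              # eigenvalue residual
        tau = C * np.linalg.norm(r)            # decreasing solve tolerance
        y = lu.solve(M @ x)                    # exact solve of (A - sigma M) y = M x ...
        e = rng.standard_normal(len(x))
        y += lu.solve(tau * e / np.linalg.norm(e))  # ... plus a residual of size tau
        x = y / (c @ y)                        # renormalise so c^H x = 1
    return lam, x

# Illustrative pencil: eigenvalues 1, ..., n with M = I; shift near lambda_1 = 1
n = 50
A = sp.diags(np.arange(1.0, n + 1.0))
M = sp.identity(n, format="csr")
lam, x = inexact_inverse_iteration(A, M, sigma=0.9, x0=np.ones(n))
print(abs(lam - 1.0))   # tiny: converged to the eigenvalue nearest the shift
```

Despite the inner solves becoming no more accurate than τ^(i) = C ||r^(i)||, the outer iteration still converges, which is the linear-convergence result quoted later in the talk.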

PDE Example
Consider the convection-diffusion eigenvalue problem -Δu + 5u_x + 5u_y = λu and its discretisations:
Finite difference discretisation: A_1 x = λx
Finite element discretisation: A_2 x = λ M_2 x
Apply inexact inverse iteration with fixed shift σ and decreasing tolerance:
(A_1 - σI) y^(i) = x^(i),    (A_2 - σM_2) y^(i) = M_2 x^(i)
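For concreteness, here is one way a finite-difference matrix like A_1 might be assembled, assuming central differences for the convection-diffusion operator -Δu + 5u_x + 5u_y on the unit square with homogeneous Dirichlet boundary conditions; the grid size is an illustrative choice.

```python
import scipy.sparse as sp

def fd_convection_diffusion(m, wind=5.0):
    """Central-difference matrix for -Laplacian(u) + wind*(u_x + u_y) on an m x m grid."""
    h = 1.0 / (m + 1)
    I = sp.identity(m)
    # 1-D operators: second difference (for -d^2/dx^2) and central first difference
    D2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2
    D1 = sp.diags([-1.0, 1.0], [-1, 1], shape=(m, m)) / (2.0 * h)
    # 2-D operator via Kronecker products: -u_xx - u_yy + wind*(u_x + u_y)
    return (sp.kron(I, D2) + sp.kron(D2, I)
            + wind * (sp.kron(I, D1) + sp.kron(D1, I))).tocsr()

A1 = fd_convection_diffusion(16)
print(A1.shape)   # (256, 256)
```

The convection terms make A_1 nonsymmetric, which is why the nonsymmetric convergence theory below is the relevant one.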

PDE Example: Numerics [figures: inner vs. outer iterations for the two discretisations]

Questions (for decreasing solve tolerances τ^(i))
Since the linear systems are being solved more and more accurately, why isn't the number of inner iterations increasing with i for A_1 x = λx?
Why is the inner iteration behaviour different for the two discretisations?
Can we achieve no increase in the number of inner iterations for A_2 x = λ M_2 x? (Yes: "tuning")
What implications are there for preconditioned iterative solves? Next talk...

Convergence Analysis
Ax_1 = λ_1 x_1, A symmetric (see Parlett's book): C_1 sin θ^(i) ≤ ||r^(i)|| ≤ C_2 sin θ^(i), where r^(i) = (A - λ^(i) M) x^(i) and θ^(i) is the angle between x^(i) and x_1.
Nonsymmetric: Ax = λMx [Golub/Ye (2000); Berns-Müller/Spence (2006)].
If τ^(i) = C ||r^(i)|| then LINEAR convergence.


Krylov solver for By = b
B = (A - σI) and b = x (later: B = (A - σM) and b = Mx).
||b - B y_k|| = min_{p_k} ||p_k(B) b|| ≤ C ρ^k ||b||, (0 < ρ < 1).
If ||b - B y_k|| ≤ τ, then k ≤ C_1 + C_2 log(||b||/τ).
Bound on k increases as τ decreases.
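The bound k ≤ C_1 + C_2 log(||b||/τ) can be observed numerically. The sketch below runs a minimal textbook GMRES (Arnoldi plus a small least-squares solve, no restarts) to successively tighter tolerances; the well-conditioned diagonal test matrix is an illustrative choice, not the talk's example.

```python
import numpy as np

def gmres_count(B, b, tau, maxiter=200):
    """Return the number of GMRES iterations k needed so that ||b - B y_k|| <= tau."""
    n = len(b)
    Q = np.zeros((n, maxiter + 1))
    H = np.zeros((maxiter + 1, maxiter))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(1, maxiter + 1):
        v = B @ Q[:, k - 1]                       # Arnoldi expansion
        for j in range(k):                        # modified Gram-Schmidt
            H[j, k - 1] = Q[:, j] @ v
            v -= H[j, k - 1] * Q[:, j]
        H[k, k - 1] = np.linalg.norm(v)
        Q[:, k] = v / H[k, k - 1]
        # small least-squares problem: min ||beta e1 - H y||
        e1 = np.zeros(k + 1); e1[0] = beta
        y = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)[0]
        rnorm = np.linalg.norm(e1 - H[:k + 1, :k] @ y)
        if rnorm <= tau:
            return k
    return maxiter

rng = np.random.default_rng(1)
B = np.diag(np.linspace(1.0, 10.0, 200))          # well-conditioned test matrix
b = rng.standard_normal(200)
counts = [gmres_count(B, b, tau) for tau in (1e-2, 1e-4, 1e-6, 1e-8)]
print(counts)   # nondecreasing: the iteration count grows as tau decreases
```

The counts grow roughly linearly in log(1/τ), matching the bound.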

Krylov for By = b: sophisticated analysis for a well-separated eigenvalue
For B = (A - σI): B x_1 = (λ_1 - σ) x_1.
||b - B y_k|| = min_{p_k} ||p_k(B) b|| ≤ ||p_{k-1}(B) q_1(B) b|| = ||p_{k-1}(B) q_1(B) Q b||,
where q_1 is linear with q_1(λ_1 - σ) = 0 and Q projects away from x_1.
To achieve ||b - B y_k|| ≤ τ: k ≤ C_3 + C_4 log(||Qb||/τ).
Bound on k depends on ||Qb||/τ.

Some answers, for τ^(i) = C ||r^(i)|| = O(sin θ^(i))
(A - σI) y^(i) = x^(i): here b = x^(i) and ||Q x^(i)|| = O(sin θ^(i)). Hence ||Qb||/τ^(i) = O(1) and the bound on k doesn't increase. Consistent with the numerics for A_1 x = λx.
(A - σM) y^(i) = M x^(i): here b = M x^(i) and ||Q M x^(i)|| = O(1). Hence ||Qb||/τ^(i) = O(sin θ^(i))^{-1} and the bound on k increases. Consistent with the numerics for A_2 x = λ M_2 x.


Tuning: for (A - σM) y^(i) = M x^(i)
Introduce the tuning matrix T and consider (think of it as a preconditioner)
T^{-1} (A - σM) y^(i) = T^{-1} M x^(i).
Key Condition: T^{-1} M x^(i) = x^(i). Rearrange to: M x^(i) = T x^(i).
Implement by a rank-one change to the identity: T := I + (M x^(i) - x^(i)) c^H (with c^H x^(i) = 1).
Use Sherman-Morrison to get the action of T^{-1}.
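A minimal numerical check of this construction: with T = I + (Mx - x)c^H and c^H x = 1 we get Tx = Mx, so T^{-1}Mx = x, and the action of T^{-1} is a single Sherman-Morrison formula. The matrix M and vector x below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
M = 0.1 * rng.standard_normal((n, n)) + n * np.eye(n)  # illustrative "mass" matrix
x = rng.standard_normal(n); x[0] = 1.0
c = np.zeros(n); c[0] = 1.0                # normalisation: c^H x = 1
u = M @ x - x                              # rank-one direction in T = I + u c^H

def apply_T_inv(v):
    # Sherman-Morrison: (I + u c^H)^{-1} v = v - u (c^H v) / (1 + c^H u)
    return v - u * (c @ v) / (1.0 + c @ u)

# Key Condition: T^{-1} (M x) = x, up to rounding
print(np.linalg.norm(apply_T_inv(M @ x) - x))
```

Since T^{-1} is applied through this closed-form rank-one correction, tuning costs only inner products and vector updates on top of the untuned solve.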

PDE Example: Numerics [figures: inner vs. outer iterations with tuning]


Flow past a circular cylinder (incompressible Navier-Stokes)
Re = 25, λ ≈ ±10i
Mixed FEM: Q2-Q1 elements
Elman preconditioner: 2-level additive Schwarz
54000 degrees of freedom

Flow past a circular cylinder (incompressible Navier-Stokes) [figures: fixed shift; Rayleigh quotient shift]

Tuned preconditioner
Preconditioned iterates should behave as in the unpreconditioned standard EVP case.
Right-preconditioned system: (A - σM) P_S^{-1} ỹ^(i) = M x^(i).
The theory of the Krylov solver for By = b indicates that M x^(i) should be close to an eigenvector of (A - σM) P_S^{-1}.
Introduce a preconditioner P so that we solve (A - σM) P^{-1} ỹ^(i) = M x^(i).
Remember A x_1 = λ_1 M x_1, so impose the condition P x^(i) ≈ λ^(i) M x^(i)?
Take P x^(i) = A x^(i), since then P x^(i) = λ^(i) M x^(i) + (A x^(i) - λ^(i) M x^(i)), and the second term is the eigenvalue residual, which tends to zero.

Implementation of P x^(i) = A x^(i)
Given a standard preconditioner P_S:
Evaluate u^(i) = A x^(i) - P_S x^(i).
Rank-one update: P = P_S + u^(i) c^H, so that
P x^(i) = P_S x^(i) + u^(i) c^H x^(i) = P_S x^(i) + u^(i) = A x^(i).
Use Sherman-Morrison: one extra backsolve per outer iteration.
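A minimal sketch of this update, using a Jacobi-type P_S purely for illustration: P = P_S + u^(i) c^H with u^(i) = A x^(i) - P_S x^(i) satisfies P x^(i) = A x^(i), and Sherman-Morrison gives the action of P^{-1} at the cost of one extra P_S backsolve.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
A = 0.1 * rng.standard_normal((n, n)) + n * np.eye(n)  # illustrative matrix
P_S = np.diag(np.diag(A))                  # standard preconditioner (Jacobi, for illustration)
x = rng.standard_normal(n); x[0] = 1.0
c = np.zeros(n); c[0] = 1.0                # normalisation: c^H x = 1
u = A @ x - P_S @ x                        # u^(i) = A x^(i) - P_S x^(i)

def apply_P_inv(v):
    # Sherman-Morrison for P = P_S + u c^H:
    #   P^{-1} v = w - z (c^H w) / (1 + c^H z),  w = P_S^{-1} v,  z = P_S^{-1} u
    w = np.linalg.solve(P_S, v)
    z = np.linalg.solve(P_S, u)            # the one extra backsolve
    return w - z * (c @ w) / (1.0 + c @ z)

# Tuning property: P x = A x, i.e. P^{-1} (A x) = x, up to rounding
print(np.linalg.norm(apply_P_inv(A @ x) - x))
```

In practice z = P_S^{-1} u^(i) is computed once per outer iteration and reused for every inner Krylov step, so the tuned preconditioner costs essentially the same per inner iteration as P_S itself.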

Example [figures: fixed shift; Rayleigh quotient shift]

Conclusion
When solving an eigenvalue problem, think of adding the property P x^(i) = A x^(i) to your favourite preconditioner. This can be achieved by a simple and cheap rank-one modification.

References
M. A. Freitag and A. Spence, Convergence rates for inexact inverse iteration with application to preconditioned iterative solves, 2006. Submitted to BIT.
M. A. Freitag and A. Spence, A preconditioner for inexact inverse iteration applied to Hermitian eigenvalue problems, 2006. Submitted to IMA J. Numer. Anal.