Inexactness and flexibility in linear Krylov solvers


Inexactness and flexibility in linear Krylov solvers
Luc Giraud, ENSEEIHT (N7) - IRIT, Toulouse
Matrix Analysis and Applications, CIRM Luminy, October 15-19, 2007, in honor of Gérard Meurant for his 60th birthday

The original title
1. Inexactness: joint work with J. Langou (Univ. Colorado Denver) and S. Gratton (CNES, Toulouse) on inexact/relaxed GMRES (SISC-07), where $AV_k + [\Delta A_1 v_1, \ldots, \Delta A_k v_k] = V_{k+1} \bar{H}_k$
2. and flexibility in linear Krylov solvers: joint work with S. Gratton and X. Pinel (CERFACS) on flexible GMRES with deflated restarting (ongoing), where $[A M_1 v_1, \ldots, A M_k v_k] = V_{k+1} \bar{H}_k$
Converged title: Convergence in backward error of inexact GMRES.


The Arnoldi algorithm
$A$ an $n \times n$ nonsingular matrix; Krylov subspace $\mathcal{K}_k(A, v_1) = \mathrm{span}\{v_1, Av_1, \ldots, A^{k-1} v_1\}$.
The Arnoldi algorithm on $A$, starting with $v_1$, generates an orthonormal set of vectors $v_j$ such that $AV_k = V_{k+1} \bar{H}_k$, with $V_k = [v_1, \ldots, v_k]$ and $\bar{H}_k$ upper Hessenberg.
The algorithm breaks down when $\mathcal{K}_k(A, v_1)$ is an $A$-invariant subspace.
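The relation $AV_k = V_{k+1}\bar{H}_k$ is easy to check numerically. Below is a minimal NumPy sketch of the modified Gram-Schmidt Arnoldi process; the function name and interface are ours, for illustration only.

```python
import numpy as np

def arnoldi(A, v1, k):
    """Modified Gram-Schmidt Arnoldi: returns V (n x (k+1)) and Hbar ((k+1) x k)."""
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # orthogonalize against v_1, ..., v_{j+1}
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:              # happy breakdown: the Krylov space is A-invariant
            return V[:, : j + 1], H[: j + 1, : j + 1]   # then A V = V H with square H
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

With V, H = arnoldi(A, v1, k) and no breakdown, np.allclose(A @ V[:, :k], V @ H) should hold up to rounding.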

Inexact GMRES method
Take the basic GMRES method and perturb the matrix-vector products: $w_k = (A + \Delta A_k) v_k$.
This is an easy way to control the inner accuracy. Why?
- The matrix is not known to full accuracy (parameter estimation, Schur complement in non-overlapping DDM, ...).
- Computing $Ax$ with poor accuracy is cheap (FMM).
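For experiments, an inexact product can be emulated by adding an explicit perturbation of prescribed norm to each matrix-vector product. A small sketch under that assumption; the helper and its rank-one perturbation are illustrative, not the construction used in the talk.

```python
import numpy as np

def perturbed_matvec(A, v, tau, rng=np.random.default_rng()):
    """Return (A + dA) v for a random rank-one dA with ||dA||_2 = tau."""
    n = A.shape[0]
    u = rng.standard_normal(n)
    w = rng.standard_normal(n)
    dA = tau * np.outer(u / np.linalg.norm(u), w / np.linalg.norm(w))  # ||dA||_2 = tau exactly
    return A @ v + dA @ v
```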

Inexact GMRES algorithm - MGS variant
1. $x_0$ initial guess, $r_0 = b - (A + \Delta A_0)x_0$, $\beta = \|r_0\|$ and $v_1 = r_0/\beta$
2. For $k = 1, 2, \ldots$ Do
3.   Compute $w_k = (A + \Delta A_k)v_k$
4.   For $i = 1, \ldots, k$ Do
5.     $h_{i,k} = w_k^T v_i$
6.     $w_k = w_k - h_{i,k} v_i$
7.   EndDo
8.   $h_{k+1,k} = \|w_k\|$
9.   If $h_{k+1,k} = 0$ Goto 12
10.  $v_{k+1} = w_k / h_{k+1,k}$
11. EndDo
12. Set up the $(m+1) \times m$ matrix $\bar{H}_m = (h_{i,j})_{1 \le i \le m+1,\ 1 \le j \le m}$
13. Compute $y_m = \mathrm{argmin}_y \|\beta e_1 - \bar{H}_m y\|$
14. Compute $x_m = x_0 + V_m y_m$


Historical developments
Relaxed GMRES method. Consider the normwise backward error
$\eta_{A,b}(x) = \dfrac{\|Ax - b\|}{\|A\|\,\|x\| + \|b\|}$, and $0 < \varepsilon < 1$.
Numerous numerical illustrations in Bouras, Frayssé (SIMAX-05) show that if a relaxed GMRES is run on a computer with perturbations controlled so that
$\dfrac{\|\Delta A_k\|}{\|A\|} \le \min\left(1,\ \max\left(\varepsilon,\ \dfrac{\varepsilon}{\|Ax_{k-1} - b\|}\right)\right)$,
then the GMRES iterate $x_k$ reaches, for some $k \le n$, a backward error $\eta_{A,b}(x_k)$ less than $\varepsilon$.
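Read as a formula-to-code translation, the per-iteration perturbation bound of this criterion might look as follows. This is a sketch: the function name is ours, and prev_residual_norm stands for $\|Ax_{k-1} - b\|$.

```python
def bf_perturbation_bound(norm_A, eps, prev_residual_norm):
    # Bouras-Frayssé criterion: ||dA_k|| <= ||A|| * min(1, max(eps, eps / ||A x_{k-1} - b||))
    return norm_A * min(1.0, max(eps, eps / prev_residual_norm))
```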

Some properties of the BF criterion
BF criterion: $\dfrac{\|\Delta A_k\|}{\|A\|} \le \min\left(1,\ \max\left(\varepsilon,\ \dfrac{\varepsilon}{\|Ax_{k-1} - b\|}\right)\right)$
- Often a pure relaxation criterion in practice ($\|Ax_k - b\|$ is decreasing along the iterations).
- Never performs perturbations $\|\Delta A_k\|/\|A\|$ smaller than the target backward error $\varepsilon$, nor greater than 1.
- Criterion weakness: knowledge of the exact $\|Ax_{k-1} - b\|$ required, scaling issues, ...

Exact relations in the inexact algorithm
Exact arithmetic assumed. From the Gram-Schmidt process follows the inexact Arnoldi relation
$AV_k + [\Delta A_1 v_1, \ldots, \Delta A_k v_k] = V_{k+1} \bar{H}_k(\Delta A_1, \ldots, \Delta A_k)$
- Least squares: $y_k = \mathrm{argmin}_y \|\bar{H}_k y - \|r_0\| e_1\|$
- True residual: $r_k = b - Ax_k$
- Computed residual: $\tilde{r}_k = V_{k+1}(\|r_0\| e_1 - \bar{H}_k y_k)$
The norm $\|\tilde{r}_k\|$ is readily available from the incremental solution of the least-squares problem $\min_y \|\bar{H}_k y - \|r_0\| e_1\|$.
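A small sketch of the distinction above (our naming, dense NumPy): the computed residual norm is read off the small least-squares problem, whereas the true residual requires an extra product with the unperturbed $A$; the two can drift apart.

```python
import numpy as np

def residuals(A, b, x0, V, H, beta):
    """V: n x (k+1) Arnoldi basis, H: (k+1) x k upper Hessenberg, beta = ||r_0||."""
    k = H.shape[1]
    rhs = np.zeros(k + 1)
    rhs[0] = beta
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)    # y_k = argmin ||beta e1 - Hbar_k y||
    x = x0 + V[:, :k] @ y
    r_tilde = np.linalg.norm(rhs - H @ y)          # computed residual norm
    r_true = np.linalg.norm(b - A @ x)             # true residual norm
    return r_true, r_tilde
```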

Inexact GMRES algorithm as an exact GMRES on a perturbed matrix
Simoncini, Szyld (SISC-03) and Van den Eshof, Sleijpen (SIMAX-04) define $G_k = [\Delta A_1 v_1, \ldots, \Delta A_k v_k]$. The inexact Arnoldi relation reads
$(A + G_k V_k^T) V_k = V_{k+1} \bar{H}_k$.
The computed residual norms $\|\tilde{r}_k\|$ are non-increasing. If there exists a family of matrices $\Delta A_k$ such that
$\|\Delta A_k\| \le \dfrac{\sigma_{\min}(\bar{H}_m(\Delta A_1, \ldots, \Delta A_m))\,\varepsilon}{\|\tilde{r}_{k-1}\|}, \quad k \le m,$
then $\|r_k - \tilde{r}_k\| \le \varepsilon$.
Information on the exact residual is obtained from $\|r_m\| \le \|\tilde{r}_m\| + \|r_m - \tilde{r}_m\|$.
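A one-line check of this reformulation, using only the orthonormality of the Arnoldi basis ($V_k^T V_k = I_k$, exact arithmetic assumed):

```latex
\[
(A + G_k V_k^{T})\, V_k
  = A V_k + G_k \underbrace{(V_k^{T} V_k)}_{=\, I_k}
  = A V_k + [\Delta A_1 v_1, \ldots, \Delta A_k v_k]
  = V_{k+1} \bar{H}_k .
\]
```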

Next steps
What we would like to get:
1. Remove the dependency among the $\Delta A_i$,
2. Control the possible singularity of $\bar{H}_m$,
3. Design an implementable algorithm that can reach a prescribed backward error accuracy.

$\eta_{A,b}(x_k) = \min\{\tau > 0 : \|\Delta A\| \le \tau\|A\|,\ \|\Delta b\| \le \tau\|b\| \text{ and } (A + \Delta A)x_k = b + \Delta b\} = \dfrac{\|Ax_k - b\|}{\|A\|\,\|x_k\| + \|b\|}$,

$\eta_{b}(x_k) = \min\{\tau > 0 : \|\Delta b\| \le \tau\|b\| \text{ and } A x_k = b + \Delta b\} = \dfrac{\|Ax_k - b\|}{\|b\|}$.
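The two backward errors translate directly into code; a sketch using 2-norms (the choice of norm and the function names here are ours, for illustration).

```python
import numpy as np

def eta_A_b(A, b, x):
    """Normwise backward error on (A, b)."""
    r = b - A @ x
    return np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x) + np.linalg.norm(b))

def eta_b(A, b, x):
    """Normwise backward error on b only."""
    return np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```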

Control the possible singularity of $\bar{H}_m$
If $\|\Delta A_k\| \le \dfrac{c\,\sigma_{\min}(A)}{n}$ for $0 < c < 1$, then since $V_{m+1}\bar{H}_m = (A + G_m V_m^T)V_m$ we have $\sigma_{\min}(A + G_m V_m^T) \le \sigma_{\min}(\bar{H}_m)$, and with
$\|G_m V_m^T\| = \left\|\sum_{i=1}^{m} \Delta A_i v_i v_i^T\right\| \le \sum_{i=1}^{m} \|\Delta A_i v_i v_i^T\| \le c\,\sigma_{\min}(A)$,
it follows that $0 < (1 - c)\,\sigma_{\min}(A) \le \sigma_{\min}(\bar{H}_m)$, because $\sigma_{\min}(A) - \|G_m V_m^T\| \le \sigma_{\min}(A + G_m V_m^T)$.
1. With such perturbations, the $\|\tilde{r}_k\|$ will be monotonically decreasing until the happy breakdown, where it will be zero.
2. Replacing the constraint on the perturbation size based on $\sigma_{\min}(\bar{H}_m)$ by the more stringent bound $(1-c)\,\sigma_{\min}(A)$ still ensures the former result on the residual gap.

Convergence of relaxed GMRES for $\eta_b$
$\eta_b(x_k) \le \underbrace{\dfrac{\|r_k - \tilde{r}_k\|}{\|b\|}}_{\varepsilon_g} + \underbrace{\dfrac{\|\tilde{r}_k\|}{\|b\|}}_{\varepsilon_c}$
Theorem. Let us denote by $m$ the step where the breakdown occurs in the inexact GMRES algorithm. Let $c$ be such that $0 < c < 1$ and let $\varepsilon_c$ and $\varepsilon_g$ be any positive real numbers. Assume for all $k \le m$,
$\|\Delta A_k\| \le \dfrac{1}{n}\,\sigma_{\min}(A)\,\min\left(c,\ \dfrac{(1-c)\,\|b\|\,\varepsilon_g}{\|\tilde{r}_{k-1}\|}\right). \quad (1)$
Then there exists $l$, $0 < l \le m$, such that the following stopping criterion is satisfied,
$\|\tilde{r}_l\| \le \varepsilon_c \|b\|, \quad (2)$
and $\eta_b(x_l) \le \varepsilon_c + \varepsilon_g$.

Convergence of relaxed GMRES for $\eta_{A,b}$
$\eta_{A,b}(x_k) \le \dfrac{\|r_k - \tilde{r}_k\|}{\|A\|\,\|x_k\| + \|b\|} + \dfrac{\|\tilde{r}_k\|}{\|A\|\,\|x_k\| + \|b\|}$
Theorem. Let us denote by $m$ the step where the breakdown occurs in the inexact GMRES algorithm. Let $c$, $x_0$ and $x$ be such that $2c\,\|x_0\| \le \|x\|$ and $0 < c < 1$. Let $\varepsilon_c$ and $\varepsilon_g$ be any positive real numbers. Suppose that for all $k \le m$,
$\|\Delta A_k\| \le \dfrac{1}{n}\,\sigma_{\min}(A)\,\min\left(c,\ \dfrac{(1-c)\,\gamma\,\varepsilon_g}{\|\tilde{r}_{k-1}\|}\right), \quad (3)$
where $\gamma = \dfrac{\|A\|\,\|x\| + \|b\|}{4 + 2\,\varepsilon_c\,\kappa(A)}$. Then there exists $l$, $l \le m$, such that the following stopping criterion is satisfied,
$\|\tilde{r}_l\| \le \varepsilon_c \|A\|\,\|x_l\|, \quad (4)$
and $\eta_{A,b}(x_l) \le \varepsilon_c + \varepsilon_g$.


The perturbation matrices $\Delta A_k$ are generated using the rand Matlab command.
Strategy $S^*$: $\|\Delta A_k\| = \dfrac{\sigma_{\min}(A)}{4n}\,\min\left(1,\ \dfrac{3\gamma\,\varepsilon}{2\,\|\tilde{r}_{k-1}\|}\right)$
Strategy $S_b$: $\|\Delta A_k\| = \dfrac{\sigma_{\min}(A)}{4n}\,\min\left(1,\ \dfrac{3\gamma_b\,\varepsilon}{2\,\|\tilde{r}_{k-1}\|}\right)$, simpler to implement but more stringent in terms of perturbation size.
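Read as formulas, the two strategies differ only in the scalar multiplying $\varepsilon$. A sketch of that bound (our naming), where gamma_like stands for $\gamma$ (strategy $S^*$) or $\gamma_b$ (strategy $S_b$) and is supplied by the caller, since estimating $\sigma_{\min}(A)$, $\kappa(A)$ or $\|x\|$ is a separate question.

```python
def strategy_bound(sigma_min_A, n, gamma_like, eps, r_prev_norm):
    """Allowed ||dA_k||: sigma_min(A)/(4n) * min(1, 3*gamma_like*eps / (2*||r~_{k-1}||))."""
    return sigma_min_A / (4.0 * n) * min(1.0, 3.0 * gamma_like * eps / (2.0 * r_prev_norm))
```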

Implemented algorithm: relaxed GMRES with strategy $S$
1: Choose a convergence threshold $\varepsilon = \varepsilon_c + \varepsilon_g$
2: Choose an initial guess $x_0$
3: $r_0 = b - Ax_0$; $\beta = \|r_0\|$
4: $v_1 = r_0 / \|r_0\|$
5: for $k = 1, 2, \ldots$ do
6:   $z = (A + \Delta A_k)v_k$, $\Delta A_k$ being such that strategy $S$ holds
7:   for $i = 1$ to $k$ do
8:     $h_{i,k} = v_i^T z$
9:     $z = z - h_{i,k} v_i$
10:  end for
11:  $h_{k+1,k} = \|z\|$
12:  $v_{k+1} = z / h_{k+1,k}$
13:  Solve the least-squares problem $\min_y \|\beta e_1 - \bar{H}_k y\|$ for $y_k$
14:  if $\|\tilde{r}_k\| = \|\beta e_1 - \bar{H}_k y_k\| \le \varepsilon_c \|A\|\,\|x_k\|$ then
15:    Set $x_k = x_0 + V_k y_k$
16:    Exit
17:  end if
18: end for
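A runnable sketch of this loop in NumPy for dense matrices. Hedged: the perturbations $\Delta A_k$ are random matrices scaled to a bound returned by a user-supplied callback pert_bound(k, r_prev); the names and interface are ours, and no restarting or Givens-based least-squares update is included.

```python
import numpy as np

def relaxed_gmres(A, b, x0, eps_c, pert_bound, maxit=200, rng=None):
    """Relaxed GMRES sketch: inexact products (A + dA_k) v_k with ||dA_k|| <= pert_bound(k, ||r~_{k-1}||)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = A.shape[0]
    norm_A = np.linalg.norm(A, 2)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, maxit + 1))
    H = np.zeros((maxit + 1, maxit))
    V[:, 0] = r0 / beta
    r_prev = beta
    x = x0.copy()
    for k in range(1, maxit + 1):
        # inexact matrix-vector product, with a random dA_k scaled to the allowed size
        tau = pert_bound(k, r_prev)
        dA = rng.standard_normal((n, n))
        dA *= tau / np.linalg.norm(dA, 2)
        z = (A + dA) @ V[:, k - 1]
        for i in range(k):                               # modified Gram-Schmidt
            H[i, k - 1] = V[:, i] @ z
            z = z - H[i, k - 1] * V[:, i]
        H[k, k - 1] = np.linalg.norm(z)
        breakdown = H[k, k - 1] == 0.0
        if not breakdown:
            V[:, k] = z / H[k, k - 1]
        # small least-squares problem: min || beta e1 - Hbar_k y ||
        rhs = np.zeros(k + 1)
        rhs[0] = beta
        y, *_ = np.linalg.lstsq(H[: k + 1, :k], rhs, rcond=None)
        x = x0 + V[:, :k] @ y
        r_tilde = np.linalg.norm(rhs - H[: k + 1, :k] @ y)   # computed residual norm
        if breakdown or r_tilde <= eps_c * norm_A * np.linalg.norm(x):
            return x, k
        r_prev = r_tilde
    return x, maxit
```

One could plug in a strategy-$S^*$-like bound as, for example, pert_bound = lambda k, r: sigma_min / (4 * n) * min(1.0, 3 * gamma * eps / (2 * r)), with sigma_min, gamma and eps estimated beforehand; this usage is an assumption on our part, matching the formulas above.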

Figure: Relaxed GMRES with strategies $S^*$ and $S_b$ on PDE225 ($n = 225$, full GMRES with ILU(0.1), $\kappa_2(A_{\mathrm{Prec}}) = 22.486$, $\|A_{\mathrm{Prec}}\|_2 = 1.676$), $\varepsilon = 10^{-13}$. Curves: backward error and perturbation size for $S^*$ and $S_b$, and backward error for exact GMRES, versus iteration.

Figure: Relaxed GMRES with strategies $S^*$ and $S_b$ on UTM300 ($n = 300$, full GMRES with ILU(0.001), $\kappa_2(A_{\mathrm{Prec}}) = 1.6122 \times 10^{7}$, $\|A_{\mathrm{Prec}}\|_2 = 18635$), $\varepsilon = 10^{-8}$. Curves: backward error and perturbation size for $S^*$ and $S_b$, and backward error for exact GMRES, versus iteration.

Backward stability of GMRES
Related papers:
- J. Drkošová, M. Rozložník, Z. Strakoš and A. Greenbaum. Numerical stability of the GMRES method. BIT, 35:309-330, 1995.
- C. C. Paige, M. Rozložník and Z. Strakoš. Modified Gram-Schmidt (MGS), least squares, and backward stability of MGS-GMRES. SIAM J. Matrix Anal. Appl., 28(1):264-284, 2006.
Design of some relaxation heuristics:
- Heuristic $S(\varepsilon)$: $\|\Delta A_k\| = \varepsilon\,\|A\|$ (exact GMRES run in floating-point arithmetic with machine precision $\varepsilon$)
- Heuristic $S^*(\varepsilon)$: $\|\Delta A_k\| = \max\left(\varepsilon\,\|A\|,\ \dfrac{\sigma_{\min}(A)}{4n}\,\min\left(1,\ \dfrac{3\gamma\,\varepsilon_g}{\|\tilde{r}_{k-1}\|}\right)\right)$
- Heuristic $S_b(\varepsilon)$: $\|\Delta A_k\| = \max\left(\varepsilon\,\|A\|,\ \dfrac{\sigma_{\min}(A)}{4n}\,\min\left(1,\ \dfrac{3\gamma_b\,\varepsilon_g}{\|\tilde{r}_{k-1}\|}\right)\right)$

Heuristics

matrix      n    t (ILU threshold)   ε         N_ex   N_S(ε)   N_S*(ε)   N_S_b(ε)
e05r0400   236   10^-3               10^-14     21      21        21        21
e05r0000   236   10^-2               10^-6      25      25        26        26
GRE115     115   10^-1               10^-10     15      15        15        15
GRE185     185   10^-2               10^-14     21      21        21        21
GRE343     343   10^-1               10^-10     29      29        30        30
CAVITY03   317   10^-3               10^-10     18      18        18        18
PDE225     225   10^-1               10^-13     21      21        22        22
SAYLR1     238   10^-1               10^-11     29      29        30        30
UTM300     300   10^-3               10^-8      17      17        18        18
WEST0381   381   10^-2               10^-6      12      12        13        13
BFW398A    398   10^-1               10^-8      40      40        41        41

Table: number of GMRES iterations with the various strategies (N_ex: exact GMRES; N_S(ε), N_S*(ε), N_S_b(ε): heuristics S(ε), S*(ε), S_b(ε)).

Relaxed FMM for 3D Maxwell solution
PhD dissertation of J. Langou (EADS-CERFACS), 2003. Parallel out-of-core FMM code (EADS-IW).
Figure: Cetaf test case without preconditioner (0°, 90°); convergence curves for precfmm 1, precfmm 2 and precfmm 3.


- Similar results can be derived for GMRES with relaxed right-preconditioning and for an inexact initial residual.
- Relaxation/inexactness for GMRES is understood in exact arithmetic.
- Backward stability of relaxed/inexact GMRES is proved in finite precision for the Householder variant; the MGS variant remains to be done.
- Implementation is possible in many scientific computing simulations: electromagnetism (FMM), domain decomposition (inexact local solvers), block preconditioners (inexact block solvers).

Thank you for your attention. Happy birthday, Gérard!

Bibliography
- A. Bouras and V. Frayssé. Inexact matrix-vector products in Krylov methods for solving linear systems: a relaxation strategy. SIAM Journal on Matrix Analysis and Applications, 26(3):660-678, 2005.
- A. Bouras, V. Frayssé, and L. Giraud. A relaxation strategy for inner-outer linear solvers in domain decomposition methods. Technical Report TR/PA/00/17, CERFACS, Toulouse, France, 2000.
- L. Giraud, S. Gratton, and J. Langou. Convergence in backward error of relaxed GMRES. SIAM Journal on Scientific Computing, 29(2):710-728, 2007.
- V. Simoncini and D. B. Szyld. Theory of inexact Krylov subspace methods and applications to scientific computing. SIAM Journal on Scientific Computing, 25(2):454-477, 2003.
- J. van den Eshof and G. L. G. Sleijpen. Inexact Krylov subspace methods for linear systems. SIAM Journal on Matrix Analysis and Applications, 26(1):125-153, 2004.