Inexactness and flexibility in linear Krylov solvers
1 Inexactness and flexibility in linear Krylov solvers. Luc Giraud, ENSEEIHT (N7) - IRIT, Toulouse. Matrix Analysis and Applications, CIRM Luminy, October 15-19, 2007, in honor of Gérard Meurant for his 60th birthday.
2 The original title. 1. Inexactness: joint work with J. Langou (Univ. Colorado Denver) and S. Gratton (CNES, Toulouse) on inexact/relaxed GMRES (SISC-07), A V_k + [ΔA_1 v_1, ..., ΔA_k v_k] = V_{k+1} H̄_k. 2. ... and flexibility in linear Krylov solvers: joint work with S. Gratton and X. Pinel (CERFACS) on flexible GMRES with deflated restarting (ongoing), [A M_1 v_1, ..., A M_k v_k] = V_{k+1} H̄_k. Converged title: Convergence in backward error of inexact GMRES.
3 Outline 1 Introduction 2 3 4
5 The Arnoldi algorithm. Let A be an n × n nonsingular matrix and K_k(A, v_1) = span{v_1, A v_1, ..., A^{k-1} v_1} the Krylov subspace. The Arnoldi algorithm on A, starting from v_1, generates an orthonormal set of vectors v_j such that A V_k = V_{k+1} H̄_k, with V_k = [v_1, ..., v_k] and H̄_k upper Hessenberg. The algorithm breaks down when K_k(A, v_1) is an A-invariant subspace. L. Giraud 5/30 Inexactness and flexibility in linear Krylov solvers
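The algorithm above can be sketched in a few lines of Python/NumPy (a minimal illustration, not code from the talk; the function name `arnoldi` is ours):

```python
import numpy as np

def arnoldi(A, v1, k):
    """Build an orthonormal Krylov basis V_{k+1} and the (k+1) x k upper
    Hessenberg matrix H_bar such that A @ V[:, :k] = V @ H_bar."""
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:          # breakdown: K_j is A-invariant
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

For a generic A no breakdown occurs and the returned factors satisfy the Arnoldi relation to rounding error, with V orthonormal.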
6 Inexact GMRES method. Take the basic GMRES method and perturb the matrix-vector products: w_k = (A + ΔA_k) v_k. This is an easy way to control the inner accuracy. Why? The matrix may not be known to full accuracy (parameter estimation, Schur complement in non-overlapping DDM, ...), or computing A x with poor accuracy may be cheap (FMM).
7 Inexact GMRES algorithm - MGS variant
1. x_0 initial guess, r_0 = b − (A + ΔA_0) x_0, β = ‖r_0‖ and v_1 = r_0/β
2. For k = 1, 2, ..., m Do
3.   Compute w_k = (A + ΔA_k) v_k
4.   For i = 1, ..., k Do
5.     h_{i,k} = w_k^T v_i
6.     w_k = w_k − h_{i,k} v_i
7.   EndDo
8.   h_{k+1,k} = ‖w_k‖
9.   If h_{k+1,k} = 0 Goto 12
10.  v_{k+1} = w_k / h_{k+1,k}
11. EndDo
12. Set up the (m + 1) × m matrix H̄_m = (h_{i,j}), 1 ≤ i ≤ m + 1, 1 ≤ j ≤ m
13. Compute y_m = argmin_y ‖β e_1 − H̄_m y‖
14. Compute x_m = x_0 + V_m y_m
9 Historical developments. Relaxed GMRES method: consider the normwise backward error η_{A,b}(x) = ‖A x − b‖ / (‖A‖ ‖x‖ + ‖b‖), and 0 < ε < 1. Numerous numerical illustrations in Bouras, Frayssé (SIMAX-05) show that if a relaxed GMRES is run on a computer, with perturbations controlled so that ‖ΔA_k‖ ≤ ‖A‖ min(1, max(ε, ε / ‖A x_{k−1} − b‖)), the GMRES iterate x_k reaches, for some k ≤ n, a backward error η_{A,b}(x_k) less than ε.
10 Some properties of the BF criterion
BF criterion: ‖ΔA_k‖ ≤ ‖A‖ min(1, max(ε, ε / ‖A x_{k−1} − b‖)).
- Often a pure relaxation criterion in practice (‖A x_k − b‖ is decreasing along the iterations).
- Never performs perturbations ‖ΔA_k‖/‖A‖ smaller than the target backward error ε, nor greater than 1.
- Criterion weaknesses: knowledge of the exact ‖A x_{k−1} − b‖ is required; scaling issues; ...
12 Exact relations in the inexact algorithm (exact arithmetic assumed).
From the Gram-Schmidt process follows the inexact Arnoldi relation A V_k + [ΔA_1 v_1, ..., ΔA_k v_k] = V_{k+1} H̄_k(ΔA_1, ..., ΔA_k).
Least squares: y_k = argmin_y ‖H̄_k y − ‖r_0‖ e_1‖.
True residual: r_k = b − A x_k. Computed residual: r̃_k = V_{k+1} (‖r_0‖ e_1 − H̄_k y_k).
The norm ‖r̃_k‖ is readily available from the incremental solution of the least-squares problem min ‖H̄_k y − ‖r_0‖ e_1‖.
14 Inexact GMRES algorithm as an exact GMRES on a perturbed matrix.
Simoncini, Szyld (SISC-03) and van den Eshof, Sleijpen (SIMAX-04) define G_k = [ΔA_1 v_1, ..., ΔA_k v_k]. The inexact Arnoldi relation reads (A + G_k V_k^T) V_k = V_{k+1} H̄_k.
The computed residual norms ‖r̃_k‖ are non-increasing. If there exists a family of matrices ΔA_k such that ‖ΔA_k‖ ≤ σ_min(H̄_m(ΔA_1, ..., ΔA_m)) ε / ‖r̃_{k−1}‖ for k ≤ m, then ‖r_k − r̃_k‖ ≤ ε.
Information on the exact residual is obtained from ‖r_m‖ ≤ ‖r̃_m‖ + ‖r_m − r̃_m‖.
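The two equivalent writings of the inexact Arnoldi relation are easy to verify numerically: run Arnoldi with a randomly perturbed product at each step, collect G_k = [ΔA_1 v_1, ..., ΔA_k v_k], and check that A V_k + G_k = V_{k+1} H̄_k = (A + G_k V_k^T) V_k to rounding error. A self-contained sketch (our own, not code from the talk):

```python
import numpy as np

def inexact_arnoldi(A, v1, k, pert, seed=1):
    """Arnoldi with a perturbed product (A + Delta_j) v_j at every step;
    returns V, H_bar and G = [Delta_1 v_1, ..., Delta_k v_k]."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    G = np.zeros((n, k))
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(k):
        Delta = rng.standard_normal((n, n))
        Delta *= pert / np.linalg.norm(Delta, 2)  # ||Delta_j||_2 = pert
        G[:, j] = Delta @ V[:, j]
        w = A @ V[:, j] + G[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H, G
```

Since V_k^T V_k = I, the second identity reduces to the first: (A + G_k V_k^T) V_k = A V_k + G_k.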
15 Next steps. What we would like to get:
1. Remove the dependency among the ΔA_i,
2. Control the possible singularity of H̄_m,
3. Design an implementable algorithm that can reach a prescribed backward error accuracy.
η_{A,b}(x_k) = min {τ > 0 : ‖ΔA‖ ≤ τ ‖A‖, ‖Δb‖ ≤ τ ‖b‖ and (A + ΔA) x_k = b + Δb} = ‖A x_k − b‖ / (‖A‖ ‖x_k‖ + ‖b‖),
η_b(x_k) = min {τ > 0 : ‖Δb‖ ≤ τ ‖b‖ and A x_k = b + Δb} = ‖A x_k − b‖ / ‖b‖.
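Both backward errors above have direct one-line implementations; a NumPy sketch (function names are ours):

```python
import numpy as np

def eta_Ab(A, b, x):
    """Normwise backward error on (A, b): ||Ax - b|| / (||A|| ||x|| + ||b||)."""
    return np.linalg.norm(A @ x - b) / (
        np.linalg.norm(A, 2) * np.linalg.norm(x) + np.linalg.norm(b))

def eta_b(A, b, x):
    """Backward error on b only: ||Ax - b|| / ||b||."""
    return np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

By construction η_{A,b}(x) ≤ η_b(x) for any x, since its denominator is larger; both vanish exactly at the solution.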
16 Control the possible singularity of H̄_m. If ‖ΔA_k‖ ≤ c σ_min(A)/n for 0 < c < 1, then, since V_{m+1} H̄_m = (A + G_m V_m^T) V_m, we have σ_min(A + G_m V_m^T) ≤ σ_min(H̄_m), with ‖G_m V_m^T‖ = ‖Σ_{i=1}^m ΔA_i v_i v_i^T‖ ≤ Σ_{i=1}^m ‖ΔA_i v_i v_i^T‖ ≤ c σ_min(A), so that 0 < (1 − c) σ_min(A) ≤ σ_min(H̄_m), because σ_min(A) − ‖G_m V_m^T‖ ≤ σ_min(A + G_m V_m^T).
1. With such perturbations, ‖r̃_k‖ decreases monotonically until happy breakdown, where it is zero.
2. Replacing the constraint on the perturbation size based on σ_min(H̄_m) by the more stringent bound (1 − c) σ_min(A) still ensures the former result on the residual gap.
17 Convergence of relaxed GMRES for η_b
η_b(x_k) ≤ ‖r_k − r̃_k‖/‖b‖ + ‖r̃_k‖/‖b‖, where the first term is the residual-gap contribution (bounded by ε_g) and the second the computed-residual contribution (bounded by ε_c).
Theorem. Let m denote the step where breakdown occurs in the inexact GMRES algorithm. Let c be such that 0 < c < 1 and let ε_c and ε_g be any positive real numbers. Assume for all k ≤ m that
‖ΔA_k‖ ≤ (1/n) σ_min(A) min(c, (1 − c) ‖b‖ ε_g / ‖r̃_{k−1}‖). (1)
Then there exists l, 0 < l ≤ m, such that the stopping criterion ‖r̃_l‖ ≤ ε_c ‖b‖ (2) is satisfied, and η_b(x_l) ≤ ε_c + ε_g.
18 Convergence of relaxed GMRES for η_{A,b}
η_{A,b}(x_k) ≤ ‖r_k − r̃_k‖/(‖A‖ ‖x_k‖ + ‖b‖) + ‖r̃_k‖/(‖A‖ ‖x_k‖ + ‖b‖)
Theorem. Let m denote the step where breakdown occurs in the inexact GMRES algorithm. Let c, x_0 and x be such that 2c ‖x_0‖ ≤ ‖x‖ and 0 < c < 1. Let ε_c and ε_g be any positive real numbers. Suppose that for all k ≤ m
‖ΔA_k‖ ≤ (1/n) σ_min(A) min(c, (1 − c) γ ε_g / ‖r̃_{k−1}‖), (3)
where γ = (‖A‖ ‖x‖ + ‖b‖) / (4 + 2 ε_c κ(A)). Then there exists l, l ≤ m, such that the stopping criterion ‖r̃_l‖ ≤ ε_c ‖A‖ ‖x_l‖ (4) is satisfied, and η_{A,b}(x_l) ≤ ε_c + ε_g.
20 The perturbation matrices are generated using the rand Matlab command, scaled to the prescribed norm:
Strategy S*: ‖ΔA_k‖ = (σ_min(A)/(4n)) min(1, 3γ ε / (2 ‖r̃_{k−1}‖)),
Strategy S_b: ‖ΔA_k‖ = (σ_min(A)/(4n)) min(1, 3γ_b ε / (2 ‖r̃_{k−1}‖)).
Simpler to implement but more stringent in terms of perturbation size.
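The S* perturbation size as written above translates directly into code (a sketch with our own function name; σ_min(A), γ, ε and ‖r̃_{k−1}‖ are passed in as scalars):

```python
def strategy_S_star(sigma_min, n, gamma, eps, res_norm):
    """Perturbation size for strategy S* as displayed on the slide:
    ||Delta A_k|| = sigma_min(A)/(4n) * min(1, 3*gamma*eps / (2*||r_{k-1}||))."""
    return sigma_min / (4 * n) * min(1.0, 3.0 * gamma * eps / (2.0 * res_norm))
```

As with the BF criterion, the allowed perturbation grows as the computed residual ‖r̃_{k−1}‖ shrinks, but here it is capped at σ_min(A)/(4n), which keeps H̄_m safely nonsingular.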
21 Implemented algorithm. Relaxed GMRES with strategy S:
1: Choose a convergence threshold ε = ε_c + ε_g
2: Choose an initial guess x_0
3: r_0 = b − A x_0; β = ‖r_0‖
4: v_1 = r_0 / ‖r_0‖
5: for k = 1, 2, ... do
6:   z = (A + ΔA_k) v_k, ΔA_k being such that strategy S holds
7:   for i = 1 to k do
8:     h_{i,k} = v_i^T z
9:     z = z − h_{i,k} v_i
10:  end for
11:  h_{k+1,k} = ‖z‖
12:  v_{k+1} = z / h_{k+1,k}
13:  Solve the least-squares problem min ‖β e_1 − H̄_k y‖ for y_k
14:  if ‖r̃_k‖ = ‖β e_1 − H̄_k y_k‖ ≤ ε_c ‖A‖ ‖x_k‖ then
15:    Set x_k = x_0 + V_k y_k
16:    Exit
17:  end if
18: end for
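A self-contained Python sketch in the spirit of the algorithm above, with our own simplifications: x_0 = 0, random perturbation matrices as in the experiments, a simplified relaxation rule standing in for strategy S (the talk's exact constants are omitted), and the stopping test ‖r̃_k‖ ≤ ε_c ‖b‖ of the η_b variant:

```python
import numpy as np

def relaxed_gmres(A, b, eps_c, eps_g, seed=0):
    """Relaxed GMRES sketch (x0 = 0, MGS orthogonalization).

    Each product A v_k is perturbed by a random Delta_k whose norm may
    grow as the computed residual decreases."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
    beta = np.linalg.norm(b)                 # r0 = b since x0 = 0
    V = np.zeros((n, n + 1))
    V[:, 0] = b / beta
    H = np.zeros((n + 1, n))
    res = beta
    for k in range(n):
        # relaxation: allowed perturbation ~ eps_g * ||b|| / ||r_{k-1}||
        tol = sigma_min / n * min(1.0, eps_g * beta / res)
        Delta = rng.standard_normal((n, n))
        Delta *= tol / np.linalg.norm(Delta, 2)   # ||Delta_k||_2 = tol
        w = (A + Delta) @ V[:, k]
        for i in range(k + 1):                    # modified Gram-Schmidt
            H[i, k] = w @ V[:, i]
            w -= H[i, k] * V[:, i]
        H[k + 1, k] = np.linalg.norm(w)
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y)  # computed residual
        if H[k + 1, k] == 0.0 or res <= eps_c * beta:     # breakdown / converged
            break
        V[:, k + 1] = w / H[k + 1, k]
    return V[:, :k + 1] @ y, res
```

On a well-conditioned test matrix the final true backward error stays of the order of ε_c + ε_g, as the theorem predicts for the properly constant-tuned strategy.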
22 Figure: Relaxed GMRES with strategies S* and S_b - PDE225 (n = 225), full GMRES with ILU(0.1) preconditioning; curves show the backward error and perturbation size for S* and S_b, and the backward error for exact GMRES.
23 Figure: Relaxed GMRES with strategies S* and S_b - UTM300 (n = 300, κ_2(A_Prec) ≈ 10^7), full GMRES with ILU(0.001) preconditioning; curves show the backward error and perturbation size for S* and S_b, and the backward error for exact GMRES.
24 Backward stability of GMRES. Related papers:
J. Drkošová, A. Greenbaum, M. Rozložník, and Z. Strakoš. Numerical stability of the GMRES method. BIT, vol. 35, 1995.
C. C. Paige, M. Rozložník, and Z. Strakoš. Modified Gram-Schmidt (MGS), least squares, and backward stability of MGS-GMRES. SIAM J. Matrix Anal. Appl., vol. 28(1), 2006.
Design of some relaxation heuristics:
Heuristic S(ε): ‖ΔA_k‖ = ε ‖A‖ (exact GMRES run in a floating-point arithmetic with machine precision ε).
Heuristic S*(ε): ‖ΔA_k‖ = max(ε ‖A‖, (σ_min(A)/(4n)) min(1, 3γ ε_g / (2 ‖r̃_{k−1}‖))).
Heuristic S_b(ε): ‖ΔA_k‖ = max(ε ‖A‖, (σ_min(A)/(4n)) min(1, 3γ_b ε_g / (2 ‖r̃_{k−1}‖))).
25 Heuristics. Table: number of GMRES iterations with the various strategies (columns: matrix, n, ε, and iteration counts N_ex, N_S, N_S*, N_S_b) for the matrices e05r..., GRE..., CAVITY..., PDE..., SAYLR..., UTM..., WEST..., and BFW398A.
26 Relaxed FMM for 3D Maxwell solution. PhD dissertation of J. Langou (EADS-CERFACS); parallel out-of-core FMM code - EADS-IW. Figure: Cetaf test case without preconditioner, (0°, 90°); curves for precfmm 3, precfmm 2, precfmm ...
28 Similar results can be derived for GMRES with relaxed right-preconditioning and with an inexact initial residual. Relaxation/inexactness for GMRES is understood in exact arithmetic. Backward stability of relaxed/inexact GMRES in finite precision is proved for the Householder variant... MGS remains to be done. Implementation is possible in many scientific computing simulations: electromagnetism (FMM), domain decomposition (inexact local solvers), block preconditioners (inexact block solvers).
30 Thank you for your attention. Happy birthday, Gérard!
31 Bibliography
A. Bouras and V. Frayssé. Inexact matrix-vector products in Krylov methods for solving linear systems: a relaxation strategy. SIAM Journal on Matrix Analysis and Applications, 26(3), 2005.
A. Bouras, V. Frayssé, and L. Giraud. A relaxation strategy for inner-outer linear solvers in domain decomposition methods. Technical Report TR/PA/00/17, CERFACS, Toulouse, France, 2000.
L. Giraud, S. Gratton, and J. Langou. Convergence in backward error of relaxed GMRES. SIAM J. Scientific Computing, 29(2), 2007.
V. Simoncini and D. B. Szyld. Theory of inexact Krylov subspace methods and applications to scientific computing. SIAM J. Scientific Computing, 25(2), 2003.
J. van den Eshof and G. L. G. Sleijpen. Inexact Krylov subspace methods for linear systems. SIAM Journal on Matrix Analysis and Applications, 26(1), 2004.
More informationIncomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices
Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Eun-Joo Lee Department of Computer Science, East Stroudsburg University of Pennsylvania, 327 Science and Technology Center,
More informationRounding error analysis of the classical Gram-Schmidt orthogonalization process
Cerfacs Technical report TR-PA-04-77 submitted to Numerische Mathematik manuscript No. 5271 Rounding error analysis of the classical Gram-Schmidt orthogonalization process Luc Giraud 1, Julien Langou 2,
More informationSolving Ax = b, an overview. Program
Numerical Linear Algebra Improving iterative solvers: preconditioning, deflation, numerical software and parallelisation Gerard Sleijpen and Martin van Gijzen November 29, 27 Solving Ax = b, an overview
More informationSolution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method
Solution of eigenvalue problems Introduction motivation Projection methods for eigenvalue problems Subspace iteration, The symmetric Lanczos algorithm Nonsymmetric Lanczos procedure; Implicit restarts
More information9.1 Preconditioned Krylov Subspace Methods
Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete
More informationA Chebyshev-based two-stage iterative method as an alternative to the direct solution of linear systems
A Chebyshev-based two-stage iterative method as an alternative to the direct solution of linear systems Mario Arioli m.arioli@rl.ac.uk CCLRC-Rutherford Appleton Laboratory with Daniel Ruiz (E.N.S.E.E.I.H.T)
More informationSolving large sparse eigenvalue problems
Solving large sparse eigenvalue problems Mario Berljafa Stefan Güttel June 2015 Contents 1 Introduction 1 2 Extracting approximate eigenpairs 2 3 Accuracy of the approximate eigenpairs 3 4 Expanding the
More informationITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS
ITERATIVE METHODS FOR SPARSE LINEAR SYSTEMS YOUSEF SAAD University of Minnesota PWS PUBLISHING COMPANY I(T)P An International Thomson Publishing Company BOSTON ALBANY BONN CINCINNATI DETROIT LONDON MADRID
More informationRounding error analysis of the classical Gram-Schmidt orthogonalization process
Numer. Math. (2005) 101: 87 100 DOI 10.1007/s00211-005-0615-4 Numerische Mathematik Luc Giraud Julien Langou Miroslav Rozložník Jasper van den Eshof Rounding error analysis of the classical Gram-Schmidt
More informationPreconditioned GMRES Revisited
Preconditioned GMRES Revisited Roland Herzog Kirk Soodhalter UBC (visiting) RICAM Linz Preconditioning Conference 2017 Vancouver August 01, 2017 Preconditioned GMRES Revisited Vancouver 1 / 32 Table of
More informationLSMR: An iterative algorithm for least-squares problems
LSMR: An iterative algorithm for least-squares problems David Fong Michael Saunders Institute for Computational and Mathematical Engineering (icme) Stanford University Copper Mountain Conference on Iterative
More informationApplied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic
Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.4: Krylov Subspace Methods 2 / 51 Krylov Subspace Methods In this chapter we give
More informationCommunication-avoiding Krylov subspace methods
Motivation Communication-avoiding Krylov subspace methods Mark mhoemmen@cs.berkeley.edu University of California Berkeley EECS MS Numerical Libraries Group visit: 28 April 2008 Overview Motivation Current
More informationSolving Large Nonlinear Sparse Systems
Solving Large Nonlinear Sparse Systems Fred W. Wubs and Jonas Thies Computational Mechanics & Numerical Mathematics University of Groningen, the Netherlands f.w.wubs@rug.nl Centre for Interdisciplinary
More informationA Jacobi Davidson Method for Nonlinear Eigenproblems
A Jacobi Davidson Method for Nonlinear Eigenproblems Heinrich Voss Section of Mathematics, Hamburg University of Technology, D 21071 Hamburg voss @ tu-harburg.de http://www.tu-harburg.de/mat/hp/voss Abstract.
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 16-02 The Induced Dimension Reduction method applied to convection-diffusion-reaction problems R. Astudillo and M. B. van Gijzen ISSN 1389-6520 Reports of the Delft
More informationON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS
ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan,
More informationITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 11 : JACOBI DAVIDSON METHOD
ITERATIVE PROJECTION METHODS FOR SPARSE LINEAR SYSTEMS AND EIGENPROBLEMS CHAPTER 11 : JACOBI DAVIDSON METHOD Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 14-1 Nested Krylov methods for shifted linear systems M. Baumann and M. B. van Gizen ISSN 1389-652 Reports of the Delft Institute of Applied Mathematics Delft 214
More informationGMRES: Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems
GMRES: Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University December 4, 2011 T.-M. Huang (NTNU) GMRES
More informationContents. 1 Repeated Gram Schmidt Local errors Propagation of the errors... 3
Contents 1 Repeated Gram Schmidt 1 1.1 Local errors.................................. 1 1.2 Propagation of the errors.......................... 3 Gram-Schmidt orthogonalisation Gerard Sleijpen December
More informationConvergence Behavior of Left Preconditioning Techniques for GMRES ECS 231: Large Scale Scientific Computing University of California, Davis Winter
Convergence Behavior of Left Preconditioning Techniques for GMRES ECS 231: Large Scale Scientific Computing University of California, Davis Winter Quarter 2013 March 20, 2013 Joshua Zorn jezorn@ucdavis.edu
More information