Iterative Methods for Linear Systems of Equations

1 Iterative Methods for Linear Systems of Equations. Projection methods (3). ITMAN PhD-course, DTU. Martin van Gijzen, Delft University of Technology.

2 Overview day 4
- Bi-Lanczos method
- Computation of an approximate solution: Bi-CG, QMR, CGS and Bi-CGSTAB
- Convergence
- The induced dimension theorem and IDR(s)

3 Introduction Yesterday, we discussed Arnoldi-based methods. A method like GMRES minimises the residual norm, but it has to compute and store a new orthogonal basis vector in each iteration. The cost of doing this becomes prohibitive if many iterations have to be performed. Today, we discuss another class of iterative methods. They are based on the Bi-Lanczos method, and hence closely related to CG. They use short recurrences, but the iterates have no optimality property. We will also discuss a newer method, IDR(s). This method is not derived from the Bi-Lanczos method, but it is mathematically related.

4 The Bi-Lanczos method The symmetric Lanczos method constructs a matrix $Q_k$ such that $Q_k^T Q_k = I$ and $Q_k^T A Q_k = T_k$ tridiagonal. The Bi-Lanczos algorithm constructs matrices $W_k$ and $V_k$ such that $W_k^T A V_k = T_k$ is tridiagonal and such that $W_k^T V_k = I$. Hence $W_k$ and $V_k$ are not orthogonal themselves, but they form a bi-orthogonal pair.

5 The Bi-Lanczos method (2)
Choose two vectors $v_1$ and $w_1$ such that $v_1^T w_1 = 1$.
Initialization: $\beta_1 = \delta_1 = 0$, $w_0 = v_0 = 0$.
FOR $k = 1, 2, \ldots$ DO
    $\alpha_k = w_k^T A v_k$
    $\hat v_{k+1} = A v_k - \alpha_k v_k - \beta_k v_{k-1}$      (new direction, orthogonal to the previous $w$'s)
    $\hat w_{k+1} = A^T w_k - \alpha_k w_k - \delta_k w_{k-1}$      (new shadow direction, orthogonal to the previous $v$'s)
    $\delta_{k+1} = \sqrt{|\hat v_{k+1}^T \hat w_{k+1}|}$
    $\beta_{k+1} = \hat v_{k+1}^T \hat w_{k+1} / \delta_{k+1}$
    $w_{k+1} = \hat w_{k+1} / \beta_{k+1}$
    $v_{k+1} = \hat v_{k+1} / \delta_{k+1}$
END FOR
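The recurrences translate almost line for line into code. Below is a minimal NumPy sketch (our own illustration, not from the slides; the function name, the stopping tolerance and the square-root scaling of $\delta_{k+1}$ are assumptions following the common textbook convention):

import numpy as np

def bi_lanczos(A, v1, w1, k_max, tol=1e-12):
    # Build V, W with W^T A V tridiagonal and W^T V = I.
    # Assumes v1^T w1 = 1 (rescale beforehand).
    n = A.shape[0]
    V = np.zeros((n, k_max + 1)); W = np.zeros((n, k_max + 1))
    V[:, 0], W[:, 0] = v1, w1
    alpha = np.zeros(k_max)
    beta = np.zeros(k_max + 1); delta = np.zeros(k_max + 1)  # beta[0] = delta[0] = 0
    v_prev = np.zeros(n); w_prev = np.zeros(n)
    for k in range(k_max):
        v, w = V[:, k], W[:, k]
        alpha[k] = w @ (A @ v)
        v_hat = A @ v - alpha[k] * v - beta[k] * v_prev
        w_hat = A.T @ w - alpha[k] * w - delta[k] * w_prev
        vw = v_hat @ w_hat
        if abs(vw) < tol:          # breakdown (see the breakdown slide below)
            return V[:, :k+1], W[:, :k+1], alpha[:k+1], beta[:k+1], delta[:k+1]
        delta[k+1] = np.sqrt(abs(vw))
        beta[k+1] = vw / delta[k+1]
        V[:, k+1] = v_hat / delta[k+1]
        W[:, k+1] = w_hat / beta[k+1]
        v_prev, w_prev = v, w
    return V, W, alpha, beta, delta

In exact arithmetic W.T @ V is the identity and W.T @ A @ V is tridiagonal, which is easy to check numerically for small random matrices.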

6 The Bi-Lanczos method (3) Let
$$T_k = \begin{pmatrix} \alpha_1 & \beta_2 & & \\ \delta_2 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_k \\ & & \delta_k & \alpha_k \end{pmatrix}$$
and $V_k = [v_1\; v_2\; \ldots\; v_k]$, $W_k = [w_1\; w_2\; \ldots\; w_k]$. Then
$$A V_k = V_k T_k + \delta_{k+1} v_{k+1} e_k^T \quad\text{and}\quad A^T W_k = W_k T_k^T + \beta_{k+1} w_{k+1} e_k^T.$$

7 The Bi-Lanczos method (4) Note that $\{v_1, \ldots, v_k\}$ is a basis for $K^k(A; v_1)$ and $\{w_1, \ldots, w_k\}$ is a basis for $K^k(A^T; w_1)$.

8 Breakdown of Bi-Lanczos Bi-Lanczos breaks down if $\hat w_{k+1}^T \hat v_{k+1} = 0$. This can happen in the following cases:
- $\hat v_{k+1} = 0$: $\{v_1, \ldots, v_k\}$ spans an invariant subspace (w.r.t. $A$)
- $\hat w_{k+1} = 0$: $\{w_1, \ldots, w_k\}$ spans an invariant subspace (w.r.t. $A^T$)
- $\hat w_{k+1} \ne 0$ and $\hat v_{k+1} \ne 0$: serious breakdown

9 Bi-CG A CG-type method can now be defined as follows. Take $v_1 = r_0 / \|r_0\|$ and $w_1 = \tilde r_0 / \|\tilde r_0\|$ (with e.g. $\tilde r_0 = r_0$).
- Compute $W_k$ and $V_k$ using Bi-Lanczos.
- Compute $x_k = x_0 + V_k y_k$ with $y_k$ such that $W_k^T A V_k y_k = W_k^T r_0$, i.e. $T_k y_k = \|r_0\| e_1$.
This algorithm can be cast in a more practical form in the same way as CG. The resulting algorithm is called Bi-CG.

10 The Bi-CG algorithm
$r_0 = b - A x_0$; choose $\tilde r_0$; $p_0 = r_0$, $\tilde p_0 = \tilde r_0$      (initialization)
FOR $k = 0, 1, \ldots$ DO
    $\alpha_k = \dfrac{\tilde r_k^T r_k}{\tilde p_k^T A p_k}$
    $x_{k+1} = x_k + \alpha_k p_k$      (update iterate)
    $r_{k+1} = r_k - \alpha_k A p_k$      (update residual)
    $\tilde r_{k+1} = \tilde r_k - \alpha_k A^T \tilde p_k$      (update shadow residual)
    $\beta_k = \dfrac{\tilde r_{k+1}^T r_{k+1}}{\tilde r_k^T r_k}$
    $p_{k+1} = r_{k+1} + \beta_k p_k$      (update direction vector)
    $\tilde p_{k+1} = \tilde r_{k+1} + \beta_k \tilde p_k$      (update shadow direction vector)
END FOR
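This loop transcribes directly into NumPy. A minimal sketch (the stopping test and the choice $\tilde r_0 = r_0$ are our assumptions, not part of the slide, and no breakdown safeguards are included):

import numpy as np

def bicg(A, b, x0, tol=1e-8, max_iter=500):
    x = x0.copy()
    r = b - A @ x
    r_t = r.copy()                      # shadow residual, here chosen as r_0
    p, p_t = r.copy(), r_t.copy()
    rho = r_t @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (p_t @ Ap)
        x += alpha * p
        r -= alpha * Ap
        r_t -= alpha * (A.T @ p_t)      # shadow recurrence uses A^T
        if np.linalg.norm(r) < tol:
            break
        rho_new = r_t @ r
        beta = rho_new / rho
        rho = rho_new
        p = r + beta * p
        p_t = r_t + beta * p_t
    return x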

11 Properties of Bi-CG
- The method uses limited memory: only five vectors need to be stored.
- The method is not optimal. However, if $A$ is SPD and $\tilde r_0 = r_0$ we get CG back (but with twice the number of operations per iteration!).
- The method is finite: all residuals are orthogonal to the shadow residuals, $\tilde r_i^T r_j = 0$ if $i \ne j$.
- The method is not robust: $\tilde r_k^T r_k$ may be zero while $r_k \ne 0$.

12 Quasi-minimising the residuals In analogy to CG and MINRES (and to FOM and GMRES), we can define a quasi-minimal residual algorithm. To this end, we first rewrite the Bi-Lanczos relations.

13 The Bi-Lanczos relations Define the $(k+1) \times k$ matrix $\underline T_k$ as
$$\underline T_k = \begin{pmatrix} \alpha_1 & \beta_2 & & \\ \delta_2 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_k \\ & & \delta_k & \alpha_k \\ & & & \delta_{k+1} \end{pmatrix}.$$
We can now write $A V_k = V_{k+1} \underline T_k$.

14 Quasi-minimal residuals (1) We would like to solve the problem: find $x_k = x_0 + V_k y_k$ such that $\|r_k\|$ is minimal. Now
$$r_k = b - A x_k = r_0 - A V_k y_k = \|r_0\| v_1 - A V_k y_k.$$
Hence the problem is to minimise
$$\|r_k\| = \big\|\, \|r_0\| v_1 - A V_k y_k \,\big\| = \big\|\, V_{k+1}\big(\|r_0\| e_1 - \underline T_k y_k\big) \,\big\|.$$
If $V_{k+1}$ had orthonormal columns this would be equivalent to: minimise $\big\|\, \|r_0\| e_1 - \underline T_k y_k \,\big\|$.

15 Quasi-minimal residuals (2) However, in general $V_{k+1}$ does not have orthonormal columns. We can ignore this fact and still compute $y_k$ by minimising $\big\|\, \|r_0\| e_1 - \underline T_k y_k \,\big\|$. The resulting algorithm is called QMR. For many problems, the convergence of QMR is smoother than that of Bi-CG, but not essentially faster.
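The quasi-minimisation step is just a small $(k+1) \times k$ least-squares problem with the tridiagonal $\underline T_k$. A sketch of this projection step (our own illustration; qmr_solve_small and the array convention of the bi_lanczos sketch above are assumptions, and practical QMR instead updates a QR factorisation of $\underline T_k$ incrementally with Givens rotations):

import numpy as np

def qmr_solve_small(alpha, beta, delta, r0_norm):
    # alpha[j] = alpha_{j+1}, beta[j] = beta_{j+1}, delta[j] = delta_{j+1},
    # as returned by the bi_lanczos sketch above.
    k = len(alpha)
    T = np.zeros((k + 1, k))                  # underline-T_k
    for j in range(k):
        T[j, j] = alpha[j]
        if j > 0:
            T[j - 1, j] = beta[j]             # superdiagonal beta_{j+1}
            T[j, j - 1] = delta[j]            # subdiagonal delta_{j+1}
    T[k, k - 1] = delta[k]                    # trailing delta_{k+1}
    rhs = np.zeros(k + 1); rhs[0] = r0_norm   # ||r_0|| e_1
    y, *_ = np.linalg.lstsq(T, rhs, rcond=None)
    return y

The quasi-minimised iterate is then $x_k = x_0 + V_k y_k$.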

16 Bi-CG wastes time Bi-CG constructs its approximations as a linear combination of the columns of $V_k$. The only reason why we construct the shadow vectors is to compute the scalars $\alpha_k$ and $\beta_k$. Another disadvantage of Bi-CG and QMR is that operations with $A^T$ have to be performed, which may be difficult in matrix-free computations. Next we look at the following questions: Can the computational work of Bi-CG be better exploited? Can operations with $A^T$ be avoided?

17 Bi-CG: a closer look In the Bi-CG method, the residual vector can be expressed as $r_k = \phi_k(A) r_0$ and the direction vector as $p_k = \pi_k(A) r_0$, with $\phi_k$ and $\pi_k$ polynomials of degree $k$. The shadow vectors $\tilde r_k$ and $\tilde p_k$ are defined through the same recurrences as $r_k$ and $p_k$. Hence $\tilde r_k = \phi_k(A^T)\, \tilde r_0$ and $\tilde p_k = \pi_k(A^T)\, \tilde r_0$.

18 Towards a faster algorithm The scalar $\alpha_k$ in Bi-CG is given by
$$\alpha_k = \frac{(\phi_k(A^T)\tilde r_0)^T (\phi_k(A) r_0)}{(\pi_k(A^T)\tilde r_0)^T (A\, \pi_k(A) r_0)} = \frac{\tilde r_0^T \phi_k^2(A) r_0}{\tilde r_0^T A\, \pi_k^2(A) r_0}.$$
The idea is now to find an algorithm whose residuals $\hat r_k$ satisfy $\hat r_k = \phi_k^2(A) r_0$. The algorithm based on this idea is called CGS (by Sonneveld in 1989).

19 Conjugate Gradients Squared
$x_0$ is an initial guess; $r_0 = b - A x_0$;
$\tilde r_0$ arbitrary, such that $\tilde r_0^T r_0 \ne 0$; $\rho_0 = \tilde r_0^T r_0$;
$\beta_{-1} = \rho_0$; $p_{-1} = q_0 = 0$;
FOR $i = 0, 1, 2, \ldots$ DO
    $u_i = r_i + \beta_{i-1} q_i$
    $p_i = u_i + \beta_{i-1}(q_i + \beta_{i-1} p_{i-1})$
    $\hat v = A p_i$
    $\alpha_i = \rho_i / (\tilde r_0^T \hat v)$
    $q_{i+1} = u_i - \alpha_i \hat v$
    $x_{i+1} = x_i + \alpha_i (u_i + q_{i+1})$
    $r_{i+1} = r_i - \alpha_i A (u_i + q_{i+1})$
    $\rho_{i+1} = \tilde r_0^T r_{i+1}$
    $\beta_i = \rho_{i+1} / \rho_i$
END FOR
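In code the first iteration needs no special case, because $q_0 = p_{-1} = 0$ makes the value of $\beta_{-1}$ irrelevant. A minimal NumPy sketch (the stopping test and $\tilde r_0 = r_0$ are our assumptions):

import numpy as np

def cgs(A, b, x0, tol=1e-8, max_iter=500):
    x = x0.copy()
    r = b - A @ x
    r_t = r.copy()                     # shadow vector, here chosen as r_0
    rho = r_t @ r
    p = np.zeros_like(r); q = np.zeros_like(r)
    beta = 0.0
    for _ in range(max_iter):
        u = r + beta * q
        p = u + beta * (q + beta * p)
        v_hat = A @ p
        alpha = rho / (r_t @ v_hat)
        q = u - alpha * v_hat
        uq = u + q
        x += alpha * uq
        r -= alpha * (A @ uq)          # two matvecs per step, no A^T needed
        if np.linalg.norm(r) < tol:
            break
        rho_new = r_t @ r
        beta = rho_new / rho
        rho = rho_new
    return x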

20 CGS (2) For many problems CGS converges considerably faster than Bi-CG, sometimes twice as fast, but its convergence is more erratic. Bi-CG shows peaks in its convergence curves; these peaks are squared by CGS. The peaks can destroy the accuracy of the solution and even the convergence of the method. The search for a more smoothly converging method led to the development of Bi-CGSTAB by Van der Vorst.

21 Bi-CGSTAB Bi-CGSTAB is based on the following observation. Instead of squaring the Bi-CG polynomial, we can construct other iteration methods that generate $x_k$ such that $r_k = \psi_k(A)\phi_k(A) r_0$. Bi-CGSTAB takes a polynomial of the form
$$\psi_k(x) = (1 - \omega_1 x)(1 - \omega_2 x)\cdots(1 - \omega_k x).$$
The $\omega_k$ are chosen to minimise the current residual, in the same way as in the one-step minimal residual method.

22 Bi-CGSTAB: the algorithm
$x_0$ is an initial guess; $r_0 = b - A x_0$;
$\tilde r_0$ arbitrary, such that $\tilde r_0^T r_0 \ne 0$;
$\rho_{-1} = \alpha_{-1} = \omega_{-1} = 1$; $v_{-1} = p_{-1} = 0$;
FOR $k = 0, 1, 2, \ldots$ DO
    $\rho_k = \tilde r_0^T r_k$
    $\beta_{k-1} = (\rho_k/\rho_{k-1})(\alpha_{k-1}/\omega_{k-1})$
    $p_k = r_k + \beta_{k-1}(p_{k-1} - \omega_{k-1} v_{k-1})$
    $v_k = A p_k$
    $\alpha_k = \rho_k / (\tilde r_0^T v_k)$
    $s = r_k - \alpha_k v_k$
    $t = A s$
    $\omega_k = (t^T s)/(t^T t)$
    $x_{k+1} = x_k + \alpha_k p_k + \omega_k s$
    $r_{k+1} = s - \omega_k t$
END FOR
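A minimal NumPy sketch of this loop (our own transcription; the stopping test and $\tilde r_0 = r_0$ are assumptions, and no breakdown safeguards are included):

import numpy as np

def bicgstab(A, b, x0, tol=1e-8, max_iter=500):
    x = x0.copy()
    r = b - A @ x
    r_t = r.copy()                          # shadow vector, here chosen as r_0
    rho = alpha = omega = 1.0
    v = np.zeros_like(r); p = np.zeros_like(r)
    for _ in range(max_iter):
        rho_new = r_t @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_t @ v)
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)           # one-step residual minimisation
        x += alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
    return x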

23 Convergence of Bi-CG methods No bounds on the residual norm can be given for the four methods of today: they have no optimality property. Some general observations can be made:
- All four methods that we discussed today are finite.
- They all show superlinear convergence.
- The rate of convergence of Bi-CG and QMR is basically the same, but the convergence of QMR is smoother.
- Convergence of Bi-CGSTAB and CGS is usually faster than that of the other two methods, but the convergence of CGS can be erratic, which has a negative effect on the accuracy.

24 Convergence: first example The figure below shows typical convergence curves for Bi-CGSTAB, CGS, QMR and Bi-CG. [Figure: residual norm versus number of matrix-vector multiplications for the four methods.]

25 Convergence: second example The figure below shows another example of the convergence of Bi-CGSTAB, CGS, QMR and Bi-CG. [Figure: residual norm versus number of matrix-vector multiplications for the four methods.]

26 BiCGSTAB2 and BiCGstab(l) For some problems, in particular for real problems with large complex eigenvalues, Bi-CGSTAB does not work well. The reason is that for such problems it is not possible to construct a residual-minimising polynomial of the form $\psi_k(x) = (1 - \omega_1 x)(1 - \omega_2 x)\cdots(1 - \omega_k x)$: such a polynomial has only real roots, and can therefore not approximate large complex eigenvalues. To overcome this problem, Gutknecht proposed BiCGSTAB2, which uses a polynomial with quadratic factors. Sleijpen and Fokkema proposed BiCGstab($\ell$), which uses polynomial factors of degree $\ell$.

27 Intermezzo Of the Bi-CG-type methods, Bi-CGSTAB is by far the most widely used. Recently, the new method IDR(s) has been proposed by Sonneveld and van Gijzen. IDR(1) and Bi-CGSTAB are mathematically equivalent, but the convergence of IDR(s) with $s > 1$ is often much faster than that of Bi-CGSTAB. Although Bi-CGSTAB and IDR(1) are mathematically equivalent, IDR(s) is based on a completely different approach.

28 The IDR approach for solving $Ax = b$ Generate residuals $r_n = b - A x_n$ that lie in subspaces $G_j$ of decreasing dimension. These nested subspaces are related by
$$G_j = (I - \omega_j A)(G_{j-1} \cap S),$$
where $S$ is a fixed proper subspace of $\mathbb{C}^N$ and the $\omega_j \in \mathbb{C}$ are nonzero scalars. Ultimately $r_n \in \{0\}$ (IDR theorem).

29 The IDR theorem Theorem 1 (IDR). Let $A$ be any matrix in $\mathbb{C}^{N \times N}$, let $v_0$ be any nonzero vector in $\mathbb{C}^N$, and let $G_0$ be the complete Krylov space $K^N(A; v_0)$. Let $S$ denote any (proper) subspace of $\mathbb{C}^N$ such that $S$ and $G_0$ do not share a nontrivial invariant subspace of $A$, and define the sequence $G_j$, $j = 1, 2, \ldots$ as $G_j = (I - \omega_j A)(G_{j-1} \cap S)$, where the $\omega_j$'s are nonzero scalars. Then (i) $G_j \subset G_{j-1}$ for all $j > 0$, and (ii) $G_j = \{0\}$ for some $j \le N$.
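The dimension-reduction property is easy to observe numerically. The sketch below (our own illustration; orth, nullspace and all parameter choices are ours) represents each $G_j$ by an orthonormal basis, forms $G_{j-1} \cap S$ with $S = \{v : P^H v = 0\}$, applies $I - \omega_j A$, and prints the dimensions, which generically drop by $s$ per step:

import numpy as np

def orth(B, tol=1e-10):
    # Orthonormal basis of range(B) via the SVD.
    if B.shape[1] == 0:
        return B
    U, sv, _ = np.linalg.svd(B, full_matrices=False)
    return U[:, sv > tol * sv.max()]

def nullspace(M, tol=1e-10):
    # Orthonormal basis of the null space of M.
    U, sv, Vh = np.linalg.svd(M, full_matrices=True)
    rank = int((sv > tol * sv.max()).sum()) if sv.size else 0
    return Vh[rank:].conj().T

rng = np.random.default_rng(1)
N, s, omega = 12, 2, 0.7             # any nonzero omega works
A = rng.standard_normal((N, N))
P = rng.standard_normal((N, s))      # S = null(P^H) has dimension N - s
G = np.eye(N)                        # generically G_0 = K^N(A; v_0) = C^N
dims = [G.shape[1]]
while G.shape[1] > 0:
    C = nullspace(P.T @ G)           # coordinates of G_{j-1} ∩ S in the basis G
    GS = G @ C
    G = orth(GS - omega * (A @ GS))  # G_j = (I - omega_j A)(G_{j-1} ∩ S)
    dims.append(G.shape[1])
print(dims)                          # typically [12, 10, 8, 6, 4, 2, 0]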

30 Making an IDR algorithm

31 Making an IDR algorithm (2) Definition of $S$: $S$ can be defined as the orthogonal complement of $\mathrm{span}(p_1, \ldots, p_s)$. Let $P$ be the matrix with $p_1, \ldots, p_s$ as its columns; then $v \in S \Leftrightarrow P^H v = 0$. Residual difference vectors: we compute residual difference vectors $\Delta r_n = r_{n+1} - r_n = -A \Delta x_n$, which allows us to update the solution vector $x_{n+1}$ together with the residual $r_{n+1}$.

32 Making an IDR algorithm (3) Intermediate residuals: intermediate residuals $r_n$ can be generated by repeating the algorithm; once $s + 1$ residuals in $G_j$ have been computed, the next residual will be in $G_{j+1}$. Choice of $\omega$: every $(s+1)$-st step a new $\omega$ may be chosen; we choose it such that the norm of the next residual is minimised.

33 Prototype IDR(s) algorithm
while $\|r_n\| > \mathrm{TOL}$ and $n < \mathrm{MAXIT}$ do
    for $k = 0$ to $s$ do
        Solve $c$ from $P^H \Delta R_n c = P^H r_n$
        $v = r_n - \Delta R_n c$;  $t = A v$
        if $k = 0$ then
            $\omega = (t^H v)/(t^H t)$
        end if
        $\Delta r_n = -\Delta R_n c - \omega t$
        $\Delta x_n = -\Delta X_n c + \omega v$
        $r_{n+1} = r_n + \Delta r_n$;  $x_{n+1} = x_n + \Delta x_n$
        $n = n + 1$
        $\Delta R_n = (\Delta r_{n-1} \;\ldots\; \Delta r_{n-s})$;  $\Delta X_n = (\Delta x_{n-1} \;\ldots\; \Delta x_{n-s})$
    end for
end while
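For completeness, a compact NumPy sketch of this prototype (our own transcription; the start-up phase of $s$ minimal-residual steps and the random, orthonormalised $P$ follow the IDR(s) paper, while the function name, tolerances and defaults are our choices):

import numpy as np

def idrs(A, b, s=4, tol=1e-8, maxit=1000, seed=0):
    N = len(b)
    rng = np.random.default_rng(seed)
    # Columns of P define S = null(P^H); a random orthonormal P works well.
    P = np.linalg.qr(rng.standard_normal((N, s)))[0]
    x = np.zeros(N)
    r = b - A @ x
    dR = np.zeros((N, s)); dX = np.zeros((N, s))
    # Start-up: s minimal-residual steps fill the difference matrices.
    for k in range(s):
        t = A @ r
        om = (t @ r) / (t @ t)
        dX[:, k] = om * r
        dR[:, k] = -om * t
        x += dX[:, k]; r += dR[:, k]
    M = P.T @ dR                       # small s x s system matrix P^H (Delta R)
    n_it, oldest = s, 0
    while np.linalg.norm(r) > tol and n_it < maxit:
        for k in range(s + 1):
            c = np.linalg.solve(M, P.T @ r)
            v = r - dR @ c             # v lies in G_{j-1} ∩ S
            t = A @ v
            if k == 0:                 # new omega once per cycle of s+1 steps
                om = (t @ v) / (t @ t)
            dr = -(dR @ c) - om * t
            dx = -(dX @ c) + om * v
            r += dr; x += dx
            dR[:, oldest] = dr; dX[:, oldest] = dx
            M[:, oldest] = P.T @ dr    # keep M = P^H (Delta R) up to date
            oldest = (oldest + 1) % s
            n_it += 1
    return x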

34 Convergence of IDR(s) The figure below shows typical convergence of Bi-CGSTAB, GMRES (optimal) and IDR(s) for a convection-diffusion problem. [Figure: $\log_{10}\|r\|$ versus number of MATVECs for GMRES, Bi-CGSTAB, BiCGstab(2), IDR(1), IDR(2), IDR(4) and IDR(8).]

35 Concluding remarks (1) Today we discussed short-recurrence methods for solving nonsymmetric problems. Of these methods, Bi-CGSTAB is by far the most popular. A logical question to ask is: are there better methods than Bi-CGSTAB that use short recurrences? The answer is: sure! For example IDR(s). However, neither Bi-CGSTAB nor IDR(s) reduces the error in an optimal way. Another question is: is it possible to derive an optimal method for general matrices that uses short recurrences? The answer is: unfortunately not. This was proven by Faber and Manteuffel in 1984.

36 Concluding remarks (2) When should we use IDR(s) and when GMRES? It is difficult to give a general answer; rules of thumb are:
- Are matrix-vector multiplications (and/or preconditioning operations) expensive? Then GMRES is your best bet, at least if the number of iterations remains limited.
- Are matrix-vector multiplications inexpensive, and do you expect that many iterations are needed (for example because you use a simple preconditioner)? Then use IDR(s), with s = 4 as a good default; see the example below.
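As a concrete illustration of these rules of thumb, the snippet below runs GMRES and Bi-CGSTAB from SciPy next to the idrs sketch above on a small 1D convection-diffusion model problem (the matrix and all parameters are our own illustrative choices, not the problem from the slides):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1D convection-diffusion: -u'' + w u' = 1 on a uniform grid,
# discretised with central differences.
n, w = 1000, 20.0
h = 1.0 / (n + 1)
main = (2.0 / h**2) * np.ones(n)
lower = (-1.0 / h**2 - w / (2 * h)) * np.ones(n - 1)
upper = (-1.0 / h**2 + w / (2 * h)) * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format='csr')
b = np.ones(n)

x_gm, info_gm = spla.gmres(A, b, restart=50)   # strong progress per matvec
x_bs, info_bs = spla.bicgstab(A, b)            # cheap short recurrences
x_id = idrs(A, b, s=4)                         # prototype sketch above
for name, x in [('gmres', x_gm), ('bicgstab', x_bs), ('idr(4)', x_id)]:
    print(name, np.linalg.norm(b - A @ x))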
