Iterative Methods for Sparse Linear Systems


1 Iterative Methods for Sparse Linear Systems
Luca Bergamaschi
Department of Mathematical Methods and Models for Scientific Applications, University of Padova

2 Discretized diffusion equation

After discretization in space, the PDE is converted into a system of ODEs

$$P \frac{\partial u}{\partial t} = Hu + b,$$

where $P = I$ in the FD discretization. The stiffness matrix $H$ is symmetric negative definite; the capacity (or mass) matrix $P$ is symmetric positive definite. Discretization in time then transforms the system of ODEs into a linear system, of size equal to the number of gridpoints, to be solved at each time step. The time discretization divides the interval $(0,T]$ into subintervals of length $\Delta t_i$, so that the solution in time is computed at $t_k = \sum_{i=1}^{k} \Delta t_i$, and uses finite differences to approximate the time derivative. For example, applying the implicit Euler method ($u' \approx (u(t_{k+1}) - u(t_k))/\Delta t_k$) gives

$$(P - \Delta t_k H)\, u(t_{k+1}) = P\, u(t_k) + \Delta t_k\, b.$$
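As a concrete illustration of the resulting time-stepping loop, here is a minimal Python sketch of implicit Euler on a 1D FD discretization with $P = I$; the matrix $H$, the step size and the source term are illustrative, not taken from the lecture.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative 1D FD stiffness matrix H (symmetric negative definite)
n = 100
h = 1.0 / (n + 1)
H = sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)],
             [-1, 0, 1], format="csc") / h**2

P = sp.identity(n, format="csc")   # P = I for FD
b = np.ones(n)                     # source term (illustrative)
dt = 1e-3
u = np.zeros(n)                    # initial condition

# Implicit Euler: (P - dt*H) u_{k+1} = P u_k + dt*b at every time step
lu = spla.splu((P - dt * H).tocsc())   # factor once; only the rhs changes
for _ in range(10):
    u = lu.solve(P @ u + dt * b)
```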

3 Eigenvalues and condition number of the Laplacian

The 2D FD discretization of the Laplace equation takes the general form (in the case $\Delta x = \Delta y = h$):

$$H = \frac{1}{h^2}\begin{pmatrix} S & -I & & \\ -I & S & -I & \\ & -I & S & -I \\ & & -I & S \end{pmatrix}, \qquad S = \begin{pmatrix} 4 & -1 & & \\ -1 & 4 & -1 & \\ & -1 & 4 & -1 \\ & & -1 & 4 \end{pmatrix}.$$

The eigenvalues of $S$, and hence those of $H$, are explicitly known:

$$\lambda_{j,k}(H) = \frac{4}{h^2}\sin^2\left(\frac{j\pi h}{2}\right) + \frac{4}{h^2}\sin^2\left(\frac{k\pi h}{2}\right), \qquad j,k = 1,\ldots,n_x = \frac{1}{h} - 1.$$
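The formula is easy to check numerically. The following sketch (assuming the unit square and the standard Kronecker-product construction of $H$; all names are illustrative) compares the explicit eigenvalues with those of the assembled matrix:

```python
import numpy as np
import scipy.sparse as sp

nx = 15
h = 1.0 / (nx + 1)
T = sp.diags([-np.ones(nx - 1), 2 * np.ones(nx), -np.ones(nx - 1)], [-1, 0, 1])
I = sp.identity(nx)
H = (sp.kron(I, T) + sp.kron(T, I)) / h**2   # block form: S on the diagonal, -I off it

j, k = np.meshgrid(np.arange(1, nx + 1), np.arange(1, nx + 1))
lam = 4 / h**2 * (np.sin(j * np.pi * h / 2)**2 + np.sin(k * np.pi * h / 2)**2)

# the explicit eigenvalues match those computed from the matrix
assert np.allclose(np.sort(lam.ravel()), np.linalg.eigvalsh(H.toarray()))
```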

4 Eigenvalues and condition number of the Laplacian

Theorem. The smallest and largest eigenvalues of $H$ behave like

$$\lambda_n = 2\pi^2 + O(h^2), \qquad \lambda_1 = 8h^{-2} + O(1).$$

Proof.
$$\lambda_n = \frac{8}{h^2}\sin^2\left(\frac{\pi h}{2}\right) = \frac{8}{h^2}\left(\frac{h\pi}{2} + O(h^3)\right)^2 = 2\pi^2 + O(h^2).$$
$$\lambda_1 = \frac{8}{h^2}\sin^2\left(\frac{n_x\,\pi h}{2}\right) = \frac{8}{h^2}\sin^2\left(\frac{\pi}{2} - \frac{\pi h}{2}\right) = \frac{8}{h^2}\cos^2\left(\frac{\pi h}{2}\right) = \frac{8}{h^2}\left(1 - O(h^2)\right)^2 = 8h^{-2} + O(1).$$

Corollary. The condition number of $H$ behaves like $\kappa(H) = \dfrac{4}{\pi^2 h^2} + O(1)$ (the same $h^{-2}$ behaviour holds when $\Omega \subset \mathbb{R}^d$).

Corollary. The number of iterations of the CG method for solving $Hx = b$ is proportional to

$$h^{-1} = \begin{cases} n & \text{1D discretizations} \\ \sqrt{n} & \text{2D discretizations} \\ \sqrt[3]{n} & \text{3D discretizations} \end{cases}$$

where $n$ is the total number of unknowns.
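The $h^{-1}$ growth of the iteration count can be observed directly. Below is a sketch with a hand-rolled CG (so it does not depend on any particular library interface; all names are mine): halving $h$ should roughly double the number of iterations.

```python
import numpy as np
import scipy.sparse as sp

def cg_iterations(A, b, tol=1e-8, maxit=100000):
    """Plain conjugate gradient; returns the number of iterations to reach tol."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            return k
        p = r + (rr_new / rr) * p
        rr = rr_new
    return maxit

def laplace2d(nx):
    h = 1.0 / (nx + 1)
    T = sp.diags([-np.ones(nx - 1), 2 * np.ones(nx), -np.ones(nx - 1)], [-1, 0, 1])
    I = sp.identity(nx)
    return ((sp.kron(I, T) + sp.kron(T, I)) / h**2).tocsr()

for nx in (31, 63, 127):          # h is halved at each step
    A = laplace2d(nx)
    print(nx, cg_iterations(A, np.ones(A.shape[0])))
```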

5 Iterative methods on the diffusion equations

Again stationary iterative methods (writing $A = D + L + U$, with $D$ the diagonal and $L$, $U$ the strictly lower and upper triangular parts):

1. Jacobi iteration.
$$x_{k+1} = -D^{-1}(L+U)x_k + D^{-1}b = E_J x_k + q_J.$$
2. Gauss-Seidel iteration.
$$x_{k+1} = -(D+L)^{-1}U x_k + (D+L)^{-1}b = E_{GS} x_k + q_{GS}.$$
3. Acceleration of Gauss-Seidel = SOR method, depending on $\omega \in \mathbb{R}$.
$$x_{k+1} = (D+\omega L)^{-1}\left((1-\omega)D - \omega U\right)x_k + \omega(D+\omega L)^{-1}b = E_{SOR} x_k + q_{SOR}.$$
(An implementation sketch of the three splittings follows this slide.)

Theorem (Young & Varga). If all the eigenvalues of the Jacobi iteration matrix are real, then
1. $\lambda(E_{GS}) = \lambda(E_J)^2$;
2. if in addition $\rho(E_J) < 1$, then there is an optimal value of $\omega$:
$$\omega_{opt} = \frac{2}{1+\sqrt{1-\rho(E_J)^2}}, \qquad \rho(E_{SOR}(\omega_{opt})) = \omega_{opt} - 1.$$
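A minimal dense-matrix sketch of the three stationary methods, written directly from the splitting formulas above. In practice one never forms or inverts $M$ explicitly (the diagonal/triangular solves are done in place); this form simply mirrors the formulas. Parameter values are illustrative.

```python
import numpy as np

def stationary_solve(A, b, method="jacobi", omega=1.5, tol=1e-8, maxit=100000):
    """Jacobi / Gauss-Seidel / SOR iteration x_{k+1} = M^{-1}(N x_k + c)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    if method == "jacobi":
        M, N, c = D, -(L + U), b
    elif method == "gauss-seidel":
        M, N, c = D + L, -U, b
    elif method == "sor":
        M, N, c = D + omega * L, (1 - omega) * D - omega * U, omega * b
    x = np.zeros_like(b)
    for k in range(1, maxit + 1):
        x = np.linalg.solve(M, N @ x + c)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k
    return x, maxit
```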

6 Iterative methods for the diffusion equation

Recall

$$H = \frac{1}{h^2}\begin{pmatrix} S & -I & & \\ -I & S & -I & \\ & -I & S & -I \\ & & -I & S \end{pmatrix}, \qquad S = \begin{pmatrix} 4 & -1 & & \\ -1 & 4 & -1 & \\ & -1 & 4 & -1 \\ & & -1 & 4 \end{pmatrix}.$$

Since $D = \mathrm{diag}(H) = 4h^{-2}I$, the eigenvalues of $E_J = I - D^{-1}H$ are

$$\lambda_{j,k}(E_J) = 1 - \sin^2\left(\frac{j\pi h}{2}\right) - \sin^2\left(\frac{k\pi h}{2}\right),$$

so that

$$\lambda_{\max}(E_J) \approx 1 - \frac{\pi^2 h^2}{2}, \qquad \lambda_{\max}(E_{GS}) \approx \left(1 - \frac{\pi^2 h^2}{2}\right)^2 \approx 1 - \pi^2 h^2, \qquad \lambda_{\max}(E_{SOR}) \approx \frac{2}{1+\pi h} - 1 \approx 1 - 2\pi h.$$

7 Iterative methods on the diffusion equations

What about CG? Asymptotically, the reduction factor is

$$\mu = \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}, \qquad \text{where } \kappa(H) = \frac{4}{\pi^2 h^2}.$$

Hence

$$\mu \approx \frac{\frac{2}{h\pi}-1}{\frac{2}{h\pi}+1} = \frac{2-h\pi}{2+h\pi} \approx 1 - \pi h.$$

Asymptotic behaviour:

method               error reduction factor   type
Jacobi               $1 - 5h^2$               sharp
Gauss-Seidel         $1 - 10h^2$              sharp
SOR with ω_opt       $1 - 6h$                 sharp
SD (no precond.)     $1 - 5h^2$               tight
CG (no precond.)     $1 - 3h$                 tight

8 Some results

2D discretization of the diffusion equation by FD, with $h = 0.005$, tol $= 10^{-8}$. Results:

Method                   iterations   CPU time   comments
Jacobi                                           $\|r_k\| > 10^6$
Gauss-Seidel
SOR, $\omega = 1.969$                            $\omega_{opt}$
SOR, $\omega = 1.95$
SOR, $\omega = 1.98$
CG, no prec.
CG, IC(0) prec.

2D discretization of the diffusion equation by FD, with $h = 0.0025$, tol $= 10^{-8}$. Results:

Method                   iterations   CPU time   comments
SOR, $\omega = 1.984$                            $\omega_{opt}$
CG, no prec.
CG, IC(0) prec.

9 Results and Comments

Results with $h = 0.00125$, tol $= 10^{-8}$:

Method                   iterations   CPU time   comments
SOR, $\omega = 1.992$                            $\omega_{opt}$
CG, no prec.
CG, IC(0) prec.

Comments

- Theoretically SOR($\omega_{opt}$) is the fastest method, with CG very close.
- $\omega_{opt}$ is always very difficult to assess.
- The bound on SOR is sharp, while that on CG is only tight.
- CG can be preconditioned, while SOR cannot.
- CG does better than the estimates also when the eigenvalues are spread inside the spectral interval.
- The dependence of the condition number on $h^{-2}$ (and of the number of iterations on $h^{-1}$) is only alleviated by preconditioning.
- SOR depends dramatically on $\omega$. When solving the FD-discretized diffusion equation, it takes advantage of the a priori knowledge of the spectral interval.

10 Iterative methods for general systems

PCG is expected to converge only for spd matrices. Why? One reason is that for non-spd matrices it is impossible to have, at the same time, orthogonality and a minimization property by means of a short-term recurrence. PCG at iteration $k$ minimizes the error over a subspace of dimension $k$ and constructs an orthogonal basis using a short-term recurrence. Extensions of PCG to general nonsymmetric matrices:

1. The PCG method applied to the (spd) system $A^TAx = A^Tb$ (normal equations). This system is often ill-conditioned: already in the case of symmetric matrices, $\kappa(A^TA) = \kappa(A^2) = \kappa(A)^2$. It is also difficult to find a preconditioner when $A^TA$ cannot be formed explicitly.
2. Methods that provide orthogonality + minimization by using a long-term recurrence (GMRES).
3. Methods that provide (bi-)orthogonality. Examples: BiCG, BiCGstab.
4. Methods that provide some minimization property. Examples: QMR, TFQMR.

11 The GMRES method

Given an arbitrary nonzero vector $v$, a Krylov subspace of dimension $m$ is defined as

$$K_m(v) = \mathrm{span}\left(v, Av, A^2v, \ldots, A^{m-1}v\right).$$

The GMRES (Generalized Minimal RESidual) method finds the solution of the linear system $Ax = b$ by minimizing the norm of the residual $r_m = b - Ax_m$ over all the vectors $x_m$ written as

$$x_m = x_0 + y, \qquad y \in K_m(r_0),$$

where $x_0$ is an arbitrary initial vector and $K_m$ is the Krylov subspace generated by the initial residual. First note that the natural basis of $K_m$,

$$\{r_0, Ar_0, A^2r_0, \ldots, A^{m-1}r_0\},$$

is poorly linearly independent.
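The loss of independence is easy to see in floating point: successive powers $A^k r_0$ align with the dominant eigenvector, so the condition number of the raw basis matrix explodes. A small illustrative sketch (random matrix, dimensions arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 12
A = rng.standard_normal((n, n)) + 5 * np.eye(n)   # illustrative matrix
K = np.empty((n, m))
K[:, 0] = rng.standard_normal(n)
for j in range(1, m):
    K[:, j] = A @ K[:, j - 1]                     # raw Krylov basis r0, A r0, ...

print(np.linalg.cond(K))   # huge: the columns are nearly linearly dependent
```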

12 Orthogonalization of the Krylov basis

Theorem. If the eigenvalues of $A$ are such that $|\lambda_1| > |\lambda_2| \ge \cdots$, then, up to a normalization,

$$A^k r_0 = v_1 + O\left(\left|\frac{\lambda_2}{\lambda_1}\right|^k\right),$$

with $v_1$ the eigenvector corresponding to $\lambda_1$.

To compute a really independent basis for $K_m$ we have to orthonormalize these vectors using the Gram-Schmidt procedure (an implementation sketch follows this slide):

$\beta = \|r_0\|$, $v_1 = r_0/\beta$
DO $k = 1, m$
1. $w_{k+1} = Av_k$
2. DO $j = 1, k$
3.   $h_{jk} = w_{k+1}^T v_j$
4.   $w_{k+1} := w_{k+1} - h_{jk} v_j$
5. END DO
6. $h_{k+1,k} = \|w_{k+1}\|$; $v_{k+1} = w_{k+1}/h_{k+1,k}$
END DO

Once the Krylov subspace basis is computed, the GMRES method minimizes the norm of the residual over

$$x_0 + K_m = \left\{x_0 + \sum_{i=1}^{m} z_i v_i\right\}. \qquad (1)$$
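A direct Python transcription of the procedure above (the Arnoldi process). The function name and interface are mine; the returned matrices satisfy the relation $AV_m = V_{m+1}H_m$ derived on the next slides.

```python
import numpy as np

def arnoldi(A, r0, m):
    """Gram-Schmidt orthonormalization of the Krylov basis of K_m(r0).

    Returns V, an n x (m+1) matrix with orthonormal columns, and the
    (m+1) x m Hessenberg matrix H such that A @ V[:, :m] == V @ H.
    """
    n = len(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for k in range(m):
        w = A @ V[:, k]
        for j in range(k + 1):
            H[j, k] = w @ V[:, j]
            w -= H[j, k] * V[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] == 0.0:       # lucky breakdown: the subspace is A-invariant
            return V[:, :k + 1], H[:k + 1, :k + 1]
        V[:, k + 1] = w / H[k + 1, k]
    return V, H
```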

13 Minimizing the residual

Theorem. The new vectors $v_k$ satisfy

$$Av_k = \sum_{j=1}^{k+1} h_{jk} v_j, \qquad k = 1, \ldots, m.$$

Proof. From step 4 of the Gram-Schmidt algorithm we have

$$w_{k+1} = Av_k - \sum_{j=1}^{k} h_{jk} v_j.$$

Now, substituting $w_{k+1} = h_{k+1,k} v_{k+1}$, we obtain

$$h_{k+1,k} v_{k+1} = Av_k - \sum_{j=1}^{k} h_{jk} v_j,$$

and the thesis holds.

14 Minimizing the residual

Now define $V_m = [v_1, v_2, \ldots, v_m]$ and $H_m = (h_{jk})$. The statement of the theorem reads

$$AV_m = V_{m+1} H_m.$$

$H_m$ is a rectangular $(m+1) \times m$ matrix. It is a Hessenberg matrix, since it has the following nonzero pattern:

$$H_m = \begin{pmatrix}
h_{11} & h_{12} & \cdots & \cdots & h_{1m} \\
h_{21} & h_{22} & h_{23} & & \vdots \\
       & h_{32} & h_{33} & \ddots & \vdots \\
       &        & \ddots & \ddots & \vdots \\
       &        &        & h_{m,m-1} & h_{mm} \\
       &        &        &           & h_{m+1,m}
\end{pmatrix}.$$

15 Minimizing the residual

[Figure: block-matrix picture of the relation $A\,V_m = V_{m+1}\,H_m$.]

16 Minimizing the residual

Now we would like to minimize $\|r_m\|$ among all the $x_m$:

$$x_m = x_0 + \sum_{i=1}^{m} z_i v_i$$

or

$$x_m = x_0 + V_m z, \qquad z = (z_1, \ldots, z_m)^T.$$

Now

$$r_m = b - Ax_m = b - A(x_0 + V_m z) = r_0 - AV_m z = \beta v_1 - V_{m+1}H_m z = V_{m+1}\left(\beta e_1 - H_m z\right), \qquad (2)$$

where $e_1 = [1, 0, \ldots, 0]^T$.

17 Minimizing the residual

Recalling that $V_{m+1}^T V_{m+1} = I_{m+1}$:

$$\|r_m\| = \sqrt{r_m^T r_m} = \sqrt{(\beta e_1 - H_m z)^T V_{m+1}^T V_{m+1} (\beta e_1 - H_m z)} = \sqrt{(\beta e_1 - H_m z)^T(\beta e_1 - H_m z)} = \|\beta e_1 - H_m z\|, \qquad (3)$$

and hence $z = \mathrm{argmin}\, \|\beta e_1 - H_m z\|$.

Comments:

- $H_m$ has more rows than columns, so $H_m z = \beta e_1$ has (in general) no solution.
- The minimization problem $z = \mathrm{argmin}\, \|\beta e_1 - H_m z\|$ is very small ($m = 20, 50, 100$) as compared to the original size $n$. The computational solution of this problem is therefore very cheap.

18 Least squares minimization

Any (even rectangular) matrix $H$ can be factorized as the product of an orthogonal matrix $Q$ and an upper triangular matrix $R$ (known as the QR factorization). When $H$ is not square, the resulting $R$ has as many final zero rows as the gap between the number of rows and columns. Let us now factorize our $H_m$ as $H_m = QR$ (computational cost $O(m^3)$). Then, in view of the orthogonality of $Q$:

$$\min \|\beta e_1 - H_m z\| = \min \left\|Q\left(\beta Q^T e_1 - Rz\right)\right\| = \min \|g - Rz\|,$$

where $g = \beta Q^T e_1$.
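A small sketch of this least-squares step, using NumPy's QR on the whole Hessenberg matrix (production GMRES codes instead update the factorization with one Givens rotation per iteration). The function name is mine.

```python
import numpy as np

def hessenberg_ls(H, beta):
    """Solve z = argmin ||beta*e1 - H z|| for an (m+1) x m Hessenberg H.

    Returns z and the attained minimum, which equals |g_{m+1}|.
    """
    m = H.shape[1]
    Q, R = np.linalg.qr(H, mode="complete")   # Q: (m+1)x(m+1), R: (m+1)xm
    g = beta * Q.T[:, 0]                      # g = beta * Q^T e_1
    z = np.linalg.solve(R[:m, :m], g[:m])     # drop the last (zero) row of R
    return z, abs(g[m])
```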

19 End of minimization

Matrix $R$ takes on the form

$$R = \begin{pmatrix}
r_{11} & r_{12} & \cdots & r_{1m} \\
       & r_{22} & \cdots & r_{2m} \\
       &        & \ddots & \vdots \\
       &        &        & r_{mm} \\
0      & \cdots & \cdots & 0
\end{pmatrix}. \qquad (4)$$

The solution of $\min \|g - Rz\|$ is simply accomplished by solving $\tilde R z = \tilde g$, where $\tilde R$ is obtained from $R$ by dropping the last row and $\tilde g$ collects the first $m$ components of $g$. This last system, being square, small and upper triangular, is easily and cheaply solved. Finally note that $\min \|r_m\| = |g_{m+1}|$.

20 Algorithm

ALGORITHM: GMRES
Input: $x_0$, $A$, $b$, $k_{max}$, tol
$r_0 = b - Ax_0$, $k = 0$, $\rho_0 = \|r_0\|$, $\beta = \rho_0$, $v_1 = r_0/\beta$
WHILE $\rho_k > \text{tol}\,\|b\|$ AND $k < k_{max}$ DO
1. $k = k + 1$
2. $v_{k+1} = Av_k$
3. DO $j = 1, k$
     $h_{jk} = v_{k+1}^T v_j$
     $v_{k+1} = v_{k+1} - h_{jk} v_j$
   END DO
4. $h_{k+1,k} = \|v_{k+1}\|$
5. $v_{k+1} = v_{k+1}/h_{k+1,k}$
6. $z_k = \mathrm{argmin}\, \|\beta e_1 - H_k z\|$
7. $\rho_k = \|\beta e_1 - H_k z_k\|$
END WHILE
$x_k = x_0 + V_k z_k$
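A compact, runnable Python version of the algorithm above. It solves the small least-squares problem from scratch at every step for clarity, where a production code would update a QR factorization incrementally; names and defaults are illustrative.

```python
import numpy as np

def gmres(A, b, x0=None, tol=1e-8, kmax=200):
    """Unrestarted GMRES following the algorithm above."""
    n = len(b)
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    bnorm = np.linalg.norm(b)
    if beta <= tol * bnorm:
        return x0
    V = np.zeros((n, kmax + 1))
    H = np.zeros((kmax + 1, kmax))
    V[:, 0] = r0 / beta
    g = np.zeros(kmax + 1)
    g[0] = beta                                    # beta * e_1
    for k in range(kmax):
        w = A @ V[:, k]
        for j in range(k + 1):                     # Gram-Schmidt
            H[j, k] = w @ V[:, j]
            w -= H[j, k] * V[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] > 0.0:
            V[:, k + 1] = w / H[k + 1, k]
        # z = argmin ||beta*e1 - H_k z||, a small least-squares problem
        z = np.linalg.lstsq(H[:k + 2, :k + 1], g[:k + 2], rcond=None)[0]
        rho = np.linalg.norm(g[:k + 2] - H[:k + 2, :k + 1] @ z)
        if rho <= tol * bnorm or H[k + 1, k] == 0.0:
            return x0 + V[:, :k + 1] @ z
    return x0 + V[:, :kmax] @ z
```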

21 Practical GMRES implementations

GMRES is optimal in the sense that it minimizes the residual over a subspace of increasing size (finite termination property). GMRES is a VERY computationally costly method:

1. Storage: it needs to keep in memory all the vectors of the Krylov basis. When the number of iterations becomes large (of the order of hundreds), the storage may be prohibitive.
2. Computational cost: the cost of the Gram-Schmidt orthogonalization increases with the iteration number ($O(mn)$), so again problems arise when the number of iterations is high.

Practical implementations fix a maximum number of vectors to be kept in memory, say $p$. After $p$ iterations, $x_p$ is computed and a new Krylov subspace is constructed starting from $r_p$ (restarted GMRES; see the sketch below). In this way, however, we lose optimality, with a consequent slowing down of the method.
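A minimal restart driver around the gmres() sketch given earlier; $p$ is the restart parameter and the helper name is mine.

```python
import numpy as np

def gmres_restarted(A, b, p=30, tol=1e-8, max_restarts=100):
    """GMRES(p): restart from the current residual every p iterations."""
    x = np.zeros(len(b))
    for _ in range(max_restarts):
        x = gmres(A, b, x0=x, tol=tol, kmax=p)   # fresh Krylov space from r_p
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```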

22 Convergence

Assume for simplicity that $A$ is diagonalizable, that is $A = V\Lambda V^{-1}$, where $\Lambda$ is the diagonal matrix of the eigenvalues and the columns of $V$ are normalized eigenvectors of $A$. Then, minimizing over the polynomials $p_k$ of degree $k$ with $p_k(0) = 1$,

$$\|r_k\| = \min_{p_k} \left\|V p_k(\Lambda) V^{-1} r_0\right\| \le \kappa(V)\, \min_{p_k} \|p_k(\Lambda)\|\,\|r_0\|$$

or

$$\frac{\|r_k\|}{\|r_0\|} \le \kappa(V)\, \min_{p_k} \max_i |p_k(\lambda_i)|.$$

Matrix $V$ may be ill-conditioned, so we cannot simply relate the convergence of GMRES to the eigenvalue distribution of $A$.

23 Preconditioning

Also for general systems, preconditioning amounts to premultiplying the original system by a nonsingular matrix $M^{-1}$. Here preconditioning, more than reducing the condition number of $A$, is aimed at clustering the eigenvalues away from the origin. Common general-purpose preconditioners:

1. Diagonal: $M = \mathrm{diag}(\|A_i\|)$, with $A_i$ the $i$-th row of $A$.
2. ILU decomposition (based on a sparsity pattern and/or on a dropping tolerance).
3. Approximate inverse preconditioners.

Computational cost: instead of applying $A$, one applies $M^{-1}A$ to a vector (step 2 of the algorithm) as

$$t = Av_k, \qquad M v_{k+1} = t.$$
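The two-step application $t = Av_k$, $Mv_{k+1} = t$ is exactly what a LinearOperator wrapper expresses. A sketch with SciPy's incomplete-LU factorization acting as $M$; the test matrix and drop tolerance are illustrative, and the solver is called with default tolerances to stay independent of SciPy version differences in keyword names.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.diags([-np.ones(n - 1), 4 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csc")
A = (A + sp.random(n, n, density=0.001, random_state=0)).tocsc()  # mild nonsymmetry
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)                  # ILU factors of A
M = spla.LinearOperator((n, n), matvec=ilu.solve)   # action of M^{-1} on a vector

x, info = spla.gmres(A, b, M=M, restart=50)         # preconditioned GMRES
```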

24 Numerical example

Convergence profile (semi-logarithmic plot) of GMRES as applied to a very small ($n = 27$) matrix with two preconditioners and some choices of the restart parameter.

[Figure: $\|r_k\|$ versus the number of iterations; legend: diagonal, nrest = 1; diagonal, nrest = 10; diagonal, nrest = 50; ILU, nrest = 1; ILU, nrest = ...]

25 Other iterative methods for nonsymmetric systems

- They are based on a fixed amount of work per iteration (short-term recurrence).
- There are no a priori theoretical estimates on the error.
- There is a possibility of failure (breakdown), which can be alleviated through the use of look-ahead strategies.

26 The Lanczos method

It is useful to underline the connection between symmetric and nonsymmetric problems.

1. GMRES for spd matrices produces a tridiagonal matrix instead of a Hessenberg one (a symmetric Hessenberg matrix is tridiagonal). This means that there is a three-term recurrence which implements the Gram-Schmidt orthogonalization on a Krylov subspace generated by an spd matrix.
2. For $A$ spd, $V_m^T A V_m = T_m$, and it can be proved that the extremal eigenvalues of $T_m$ converge quickly to the extremal eigenvalues of $A$. Moreover, there is a strict connection between the Lanczos process and the CG method (the entries of $T_m$ can be computed from the CG iterates). Knowing approximately the largest and the smallest eigenvalues is useful for an appropriate exit test for the CG method (recall the first lecture on the reliability of exit tests).
3. If $A$ is nonsymmetric, the two-sided Lanczos method can be employed to construct a biorthogonal basis for the Krylov subspaces corresponding to $A$ and $A^T$ by using a short-term recurrence.

27 The Biconjugate Gradient method (BiCG)

Based on the two-sided Lanczos process:

1. Given an arbitrary $\hat r_0$ not orthogonal to $r_0$, it constructs two sets of vectors $v_1, \ldots, v_n \in K(A, r_0)$ and $w_1, \ldots, w_n \in K(A^T, \hat r_0)$ such that $v_i^T w_j = 0$, $i \ne j$. In matrix form

$$AV_k = V_{k+1} T_{k+1,k}, \qquad A^T W_k = W_{k+1} \hat T_{k+1,k}, \qquad V_k^T W_k = I_k.$$

2. Write $x_k = x_0 + V_k y_k$. Among the several choices for the vector $y_k$, BiCG forces $r_k = r_0 - AV_k y_k$ to be orthogonal to $W_k$. This leads to the equation

$$W_k^T r_0 - W_k^T A V_k y_k = 0.$$

By noticing that $W_k^T A V_k = T_k$ and that $W_k^T r_0 = \beta e_1$, $\beta = \|r_0\|$, the equation for $y_k$ becomes $T_k y_k = \beta e_1$.

3. The practical implementation of the BiCG algorithm implicitly performs an LDU factorization of the matrix $T_k$ (also CG for spd matrices is equivalent to the Lanczos process with an $LDL^T$ factorization of $T_k$).

28 BiCG Algorithm

ALGORITHM: BICONJUGATE GRADIENT
Input: $x_0$, $\hat r_0$, $A$, $b$, $k_{max}$, tol
$r_0 = p_0 = b - Ax_0$, $k = 0$, $\hat p_0 = \hat r_0$
WHILE $\|r_k\| > \text{tol}\,\|b\|$ AND $k < k_{max}$ DO
1. $z = Ap_k$, $u = A^T \hat p_k$
2. $\alpha_k = \dfrac{\hat r_k^T r_k}{\hat p_k^T z}$
3. $x_{k+1} = x_k + \alpha_k p_k$
4. $r_{k+1} = r_k - \alpha_k z$, $\hat r_{k+1} = \hat r_k - \alpha_k u$
5. $\beta_k = \dfrac{\hat r_{k+1}^T r_{k+1}}{\hat r_k^T r_k}$
6. $p_{k+1} = r_{k+1} + \beta_k p_k$, $\hat p_{k+1} = \hat r_{k+1} + \beta_k \hat p_k$
7. $k = k + 1$
END WHILE
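A runnable Python transcription of the algorithm (the hatted quantities become the "shadow" vectors; here $\hat r_0 = r_0$, an arbitrary admissible choice). Note the transpose matvec inside the loop, which is the drawback discussed on the next slide.

```python
import numpy as np

def bicg(A, b, x0=None, tol=1e-8, kmax=1000):
    """BiCG as in the algorithm above."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    rh = r.copy()                       # shadow residual (rhat_0 = r_0 here)
    p, ph = r.copy(), rh.copy()
    bnorm = np.linalg.norm(b)
    for _ in range(kmax):
        if np.linalg.norm(r) <= tol * bnorm:
            break
        z = A @ p
        u = A.T @ ph                    # matvec with the transpose
        rho = rh @ r
        alpha = rho / (ph @ z)
        x += alpha * p
        r -= alpha * z
        rh -= alpha * u
        beta = (rh @ r) / rho
        p = r + beta * p
        ph = rh + beta * ph
    return x
```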

29 BiCGstab

Drawbacks of the BiCG method:

1. The need to implement a matrix-vector product also for the transpose of the matrix (this can be a problem: sometimes the matrix is not available explicitly, but only as the result of its application to a vector).
2. No local minimization property is fulfilled in BiCG. This leads to sometimes large oscillations in the convergence profile.

First CGS (the Conjugate Gradient Squared method) avoids using $A^T$; then a subsequent improvement, the BiConjugate Gradient stabilized (BiCGstab) method, is aimed at avoiding such erratic behavior by trying to produce a residual of the form

$$r_k = \Psi_k(A)\,\varphi_k(A)\,r_0, \qquad \Psi_k(z) = \prod_{j=1}^{k}(1 - \omega_j z),$$

where the $\omega_j$ are chosen to minimize

$$\|r_j\| = \left\|(1 - \omega_j A)\,\Psi_{j-1}(A)\,\varphi_j(A)\,r_0\right\|.$$

30 BiCGstab

ALGORITHM: BICGSTAB
Input: $x_0$, $A$, $b$, $k_{max}$, tol
$r_0 = b - Ax_0$, $k = 0$, choose $\hat r_0$ s.t. $\hat r_0^T r_0 \ne 0$
$\rho_0 = \alpha_0 = \eta_1 = 1$; $p_0 = v_0 = 0$
WHILE $\|r_k\| > \text{tol}\,\|b\|$ AND $k < k_{max}$ DO
1. $k = k + 1$
2. $\rho_k = \hat r_0^T r_{k-1}$
3. $\beta_k = (\rho_k/\rho_{k-1})(\alpha_{k-1}/\eta_k)$
4. $p_k = r_{k-1} + \beta_k(p_{k-1} - \eta_k v_{k-1})$
5. $v_k = Ap_k$
6. $\alpha_k = \rho_k/(\hat r_0^T v_k)$
7. $s = r_{k-1} - \alpha_k v_k$
8. $t = As$
9. $\eta_{k+1} = s^T t / t^T t$
10. $x_k = x_{k-1} + \alpha_k p_k + \eta_{k+1} s$
11. $r_k = s - \eta_{k+1} t$
END WHILE
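And a runnable Python version of the same loop (names mine; the shadow vector is chosen as $\hat r_0 = r_0$). No transpose matvec is needed.

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-8, kmax=1000):
    """BiCGstab following the algorithm above."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    rh0 = r.copy()                      # rhat_0 with rhat_0^T r_0 != 0
    rho = alpha = eta = 1.0
    p = np.zeros(n)
    v = np.zeros(n)
    bnorm = np.linalg.norm(b)
    for _ in range(kmax):
        if np.linalg.norm(r) <= tol * bnorm:
            break
        rho, rho_old = rh0 @ r, rho
        beta = (rho / rho_old) * (alpha / eta)
        p = r + beta * (p - eta * v)
        v = A @ p
        alpha = rho / (rh0 @ v)
        s = r - alpha * v
        t = A @ s
        eta = (s @ t) / (t @ t)
        x += alpha * p + eta * s
        r = s - eta * t
    return x
```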

31 Bibliography

Books on iterative methods
- A. Greenbaum. Iterative Methods for Solving Linear Systems. SIAM, 1997.
- C. T. Kelley. Iterative Methods for Linear and Nonlinear Equations. SIAM, 1995.

Preconditioning
- M. Benzi. Preconditioning Techniques for Large Linear Systems: A Survey. Journal of Computational Physics, 182 (2002), pp. 418-477.

Conjugate gradient
- J. R. Shewchuk. An Introduction to the Conjugate Gradient Method Without the Agonizing Pain.

Nonsymmetric linear systems
- Y. Saad and M. H. Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput., 7 (1986), pp. 856-869.
- H. A. van der Vorst. Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J. Sci. Stat. Comput., 13 (1992), pp. 631-644.
