A stable variant of Simpler GMRES and GCR

A stable variant of Simpler GMRES and GCR

Miroslav Rozložník, joint work with Pavel Jiránek and Martin H. Gutknecht
Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic

Applied Linear Algebra Conference in honor of Hans Schneider, University of Novi Sad, Serbia, May 24-28, 2010

Minimum residual methods for Ax = b

We consider a nonsingular linear system $Ax = b$, $A \in \mathbb{R}^{N \times N}$, $b \in \mathbb{R}^N$.

Krylov subspace methods: given an initial guess $x_0$, compute iterates $\{x_n\}$ such that
$$x_n \in x_0 + \mathcal{K}_n, \qquad r_n \equiv b - Ax_n \in r_0 + A\mathcal{K}_n, \qquad r_0 \equiv b - Ax_0,$$
where $\mathcal{K}_n \equiv \operatorname{span}(r_0, Ar_0, \dots, A^{n-1} r_0)$ is the $n$-th Krylov subspace generated by $A$ and $r_0$.

Minimum residual approach: minimize the residual $r_n = b - Ax_n = r_0 - d_n$,
$$\|r_n\| = \min_{d \in A\mathcal{K}_n} \|r_0 - d\|, \qquad r_n \perp A\mathcal{K}_n.$$

Minimum residual methods for Ax = b

GMRES by Saad and Schultz [1986]: Arnoldi process $AQ_n = Q_{n+1} H_{n+1,n}$, iterate $x_n = x_0 + Q_n y_n$, where $y_n$ solves $\min_y \|\varrho_0 e_1 - H_{n+1,n}\, y\|$, $\varrho_0 \equiv \|r_0\|$.

Numerical stability of GMRES: Drkošová, Greenbaum, R, and Strakoš [1995], Arioli and Fassino [1996], Greenbaum, R, and Strakoš [1997], Paige, R, and Strakoš [2006].

Other implementations:
Simpler GMRES: Walker and Zhou [1994];
ORTHODIR, ORTHOMIN( ), GCR: Young and Jea [1980], Vinsome [1976], Eisenstat, Elman, and Schultz [1983].
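
To make the Arnoldi relation concrete, here is a minimal NumPy sketch of computing $x_n = x_0 + Q_n y_n$. It is not part of the talk; the function name and the dense least-squares solve (in place of the usual Givens-rotation update of the Hessenberg factorization) are illustrative choices.

```python
import numpy as np

def gmres_sketch(A, b, x0, n_steps):
    """Minimal GMRES sketch: Arnoldi (modified Gram-Schmidt) plus a small
    least-squares solve for y_n; no happy-breakdown or restart handling."""
    r0 = b - A @ x0
    rho0 = np.linalg.norm(r0)           # rho_0 = ||r_0||
    N = b.size
    Q = np.zeros((N, n_steps + 1))      # Arnoldi basis Q_{n+1}
    H = np.zeros((n_steps + 1, n_steps))
    Q[:, 0] = r0 / rho0
    for n in range(n_steps):
        w = A @ Q[:, n]
        for k in range(n + 1):          # modified Gram-Schmidt orthogonalization
            H[k, n] = Q[:, k] @ w
            w = w - H[k, n] * Q[:, k]
        H[n + 1, n] = np.linalg.norm(w)
        Q[:, n + 1] = w / H[n + 1, n]
    # y_n minimizes || rho_0 e_1 - H_{n+1,n} y ||
    rhs = np.zeros(n_steps + 1)
    rhs[0] = rho0
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    return x0 + Q[:, :n_steps] @ y
```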

Sketch of Simpler GMRES and GCR

$Z_n \equiv [z_1, \dots, z_n]$ is a (normalized) basis of $\mathcal{K}_n$, and $V_n \equiv [v_1, \dots, v_n]$ is an orthonormal basis of $A\mathcal{K}_n$: $AZ_n = V_n U_n$, with $U_n$ upper triangular.

The residual $r_n \in r_0 + A\mathcal{K}_n = r_0 + \mathcal{R}(V_n)$, $r_n \perp \mathcal{R}(V_n)$, satisfies
$$r_n = (I - V_n V_n^T)\, r_0 = (I - v_n v_n^T)\, r_{n-1} = r_{n-1} - \alpha_n v_n, \qquad \alpha_n \equiv v_n^T r_{n-1},$$
whereas $x_n \in x_0 + \mathcal{K}_n = x_0 + \mathcal{R}(Z_n)$ satisfies
$$x_n = x_0 + Z_n t_n, \qquad U_n t_n = V_n^T r_0 = [\alpha_1, \dots, \alpha_n]^T.$$

The approximate solution can also be updated at each step:
$$P_n \equiv [p_1, \dots, p_n] = A^{-1} V_n, \qquad Z_n = P_n U_n, \qquad x_n = x_{n-1} + \alpha_n p_n.$$
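
The scheme above fits in a few lines of NumPy. The sketch below is one possible realization, assuming a user-supplied callback next_direction that returns the next basis vector $z_{n+1}$ (the function and variable names are mine, not from the talk); it builds $V_n$ and $U_n$ by a plain Gram-Schmidt step and performs the residual and solution updates exactly as written above.

```python
import numpy as np

def sgmres_gcr_sketch(A, b, x0, n_steps, next_direction):
    """Illustrative sketch of the common scheme A Z_n = V_n U_n described above.
    next_direction(n, r, V) is assumed to return the new (unit) direction z_{n+1};
    no breakdown handling or reorthogonalization is attempted here."""
    N = b.size
    x, r = x0.astype(float), b - A @ x0
    V = np.zeros((N, n_steps))          # orthonormal basis of A K_n
    P = np.zeros((N, n_steps))          # P_n = A^{-1} V_n, i.e. Z_n = P_n U_n
    for n in range(n_steps):
        z = next_direction(n, r, V[:, :n])
        w = A @ z
        u = V[:, :n].T @ w              # entries u_{1,n}, ..., u_{n-1,n} of U_n
        w = w - V[:, :n] @ u            # classical Gram-Schmidt step (sketch only)
        unn = np.linalg.norm(w)         # diagonal entry u_{n,n}
        V[:, n] = w / unn
        P[:, n] = (z - P[:, :n] @ u) / unn   # p_n = A^{-1} v_n, obtained without A^{-1}
        alpha = V[:, n] @ r             # alpha_n = v_n^T r_{n-1}
        r = r - alpha * V[:, n]         # r_n = r_{n-1} - alpha_n v_n
        x = x + alpha * P[:, n]         # x_n = x_{n-1} + alpha_n p_n
    return x, r
```

Returning r/||r|| from next_direction gives a GCR/ORTHOMIN-like iteration built on the residual basis, whereas returning r_0/||r_0|| at the first step and the last column of V afterwards reproduces the Simpler GMRES basis.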

Backward and forward errors

Assume stable computation of $U_n$ (e.g., using modified Gram-Schmidt orthogonalization or Householder reflections), $c\,\varepsilon\,\kappa(A)\,\kappa(Z_n) < 1$, and $r_n \neq 0$.

Solution of a triangular system:
$$\frac{\|b - Ax_n\|}{\|A\|\,\|x_n\| + \|b\|} \le c\,\varepsilon\,\kappa(Z_n)\left(1 + \frac{\|x_0\|}{\|x_n\|}\right), \qquad
\frac{\|x - x_n\|}{\|x_n\|} \le \frac{c\,\varepsilon\,\kappa(A)\,\kappa(Z_n)}{1 - c\,\varepsilon\,\kappa(A)\,\kappa(Z_n)}\left(1 + \frac{\|x_0\|}{\|x_n\|}\right).$$

Updated approximate solutions:
$$\frac{\|b - Ax_n\|}{\|A\|\,\|x_n\| + \|b\|} \le c\,\varepsilon\,\kappa(A)\,\kappa(Z_n)\left(1 + \frac{\|x_0\|}{\|x_n\|}\right), \qquad
\frac{\|x - x_n\|}{\|x_n\|} \le \frac{c\,\varepsilon\,\kappa(A)\,\kappa(Z_n)}{1 - c\,\varepsilon\,\kappa(A)\,\kappa(Z_n)}\left(1 + \frac{\|x_0\|}{\|x_n\|}\right).$$
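
The normwise relative backward error on the left-hand side of these bounds is cheap to monitor at every iterate; a small helper along these lines (the name is mine) can be used to reproduce plots such as those shown later.

```python
import numpy as np

def normwise_backward_error(A, b, x):
    """Normwise relative backward error ||b - A x|| / (||A|| ||x|| + ||b||)."""
    return np.linalg.norm(b - A @ x) / (
        np.linalg.norm(A, 2) * np.linalg.norm(x) + np.linalg.norm(b))
```

For large sparse A one would of course replace the explicit 2-norm of A by a norm estimate.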

Choice of Z_n

1. $Z_n = [r_0/\|r_0\|,\ V_{n-1}]$ (Simpler GMRES, ORTHODIR). Robust, but numerically less stable (Liesen, R, and Strakoš [2002]):
$$\frac{\|r_0\|}{\|r_{n-1}\|} \le \kappa([r_0/\|r_0\|,\ V_{n-1}]) \le 2\,\frac{\|r_0\|}{\|r_{n-1}\|}.$$

2. $Z_n = R_n$, $R_n \equiv [r_0/\|r_0\|, \dots, r_{n-1}/\|r_{n-1}\|]$ (ORTHOMIN( ), GCR). The residuals are linearly independent iff the residual norms strictly decrease, and then $R_n$ is well conditioned:
$$\max_{k=1,\dots,n-1}\left(\frac{\|r_k\|^2}{\|r_{k-1}\|^2 - \|r_k\|^2}\right)^{1/2} \le \kappa(R_n) \le \sqrt{n+1}\left(1 + \sum_{k=1}^{n-1}\frac{\|r_k\|^2}{\|r_{k-1}\|^2 - \|r_k\|^2}\right)^{1/2}.$$

3. Can we benefit from the advantages of both bases without suffering from their disadvantages? Adaptive choice of the direction $z_n$ (see the code sketch after this slide): introduce $\nu \in [0, 1)$ and compute $z_n$ as
$$z_n = \begin{cases} r_0/\|r_0\| & \text{if } n = 1,\\ r_{n-1}/\|r_{n-1}\| & \text{if } n > 1 \text{ and } \|r_{n-1}\| \le \nu\,\|r_{n-2}\|,\\ v_{n-1} & \text{otherwise.}\end{cases}$$

Compare the PCR algorithm of Ramage and Wathen [1994]: a hybridization of the fast ORTHOMIN and the robust ORTHODIR.
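
The adaptive rule of item 3 can be packaged as a small callback that drives the update sketch given earlier; the factory below is an illustration under the same assumed calling convention (names are mine, not from the talk).

```python
import numpy as np

def make_adaptive_direction(nu):
    """Adaptive choice of z_n with threshold nu in [0, 1): take the residual
    direction whenever the residual norm dropped at least by the factor nu,
    otherwise fall back to the last orthonormal basis vector (Simpler GMRES)."""
    norms = []                                  # ||r_0||, ||r_1||, ... seen so far
    def next_direction(n, r, V):
        rho = np.linalg.norm(r)
        norms.append(rho)
        if n == 0 or rho <= nu * norms[-2]:
            return r / rho                      # r_{n-1}/||r_{n-1}|| (residual basis)
        return V[:, -1]                         # v_{n-1} (Simpler GMRES basis)
    return next_direction
```

Plugged into the sgmres_gcr_sketch above, for instance as sgmres_gcr_sketch(A, b, x0, 30, make_adaptive_direction(0.5)), a value of nu close to 1 switches to the residual (GCR-like) basis almost every step, while nu close to 0 keeps the Simpler GMRES basis until the residual norm drops sharply.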

Conditioning of the adaptive Z_n

Assume near stagnation in the initial phase and fast convergence in the subsequent steps:
$\|r_{k-1}\| > \nu\,\|r_{k-2}\|$ for $k = 2, \dots, q$ (Simpler GMRES basis),
$\|r_{k-1}\| \le \nu\,\|r_{k-2}\|$ for $k = q+1, \dots, n$ (residual basis).

Then
$$\max\left\{1,\ \tfrac{1}{2}\,\gamma_{n,q}\,\frac{\|r_{q-1}\|}{\|r_0\|}\right\}\frac{\|r_0\|}{\|r_{q-1}\|}\,\bar{\gamma}_{n,q}^{-1} \;\le\; \kappa(Z_n) \;\le\; 2\,\bar{\gamma}_{n,q}\,\frac{\|r_0\|}{\|r_{q-1}\|},$$
where
$$\gamma_{n,q} \equiv \max_{k=q,\dots,n-1}\left(\frac{\|r_k\|^2}{\|r_{k-1}\|^2 - \|r_k\|^2}\right)^{1/2}, \qquad \bar{\gamma}_{n,q} \equiv \sqrt{n-q+1}\left(1 + \sum_{k=q}^{n-1}\frac{\|r_k\|^2}{\|r_{k-1}\|^2 - \|r_k\|^2}\right)^{1/2}.$$

Similar estimates hold in the general case (multiple switches between the bases). Moreover,
$$\kappa(Z_n) \le 2\sqrt{2}\;\frac{\nu^{1-q}}{1-\nu}.$$
The quasi-optimal choice $\nu_{\mathrm{opt}} = (\sqrt{1+n^2}-1)/n$ leads to $\kappa(Z_n)\big|_{\nu=\nu_{\mathrm{opt}}} = O(n)$.
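
If the quasi-optimal threshold is to be used in practice, it is a one-line computation; a tiny helper, assuming the formula for the quasi-optimal value as reconstructed above:

```python
import numpy as np

def nu_opt(n):
    """Quasi-optimal threshold nu_opt = (sqrt(1 + n^2) - 1) / n for n >= 1."""
    return (np.sqrt(1.0 + n * n) - 1.0) / n

# Example values: nu_opt(10) ~ 0.905, nu_opt(100) ~ 0.990 -- always inside [0, 1).
```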

[Figure: FS1836, b = A(1, ..., 1)^T — Simpler GMRES fails, the adaptive variant wins. Backward errors and the quantities εκ(basis), εκ(A basis), εκ(A) plotted against the iteration number n.]

[Figure: FS1836, b = u_min — GCR fails, the adaptive variant wins. Backward errors and the quantities εκ(basis), εκ(A basis), εκ(A) plotted against the iteration number n.]

[Figure: dependence of κ(Z_n) on ν — condition number of the basis plotted against the threshold parameter ν for the test matrices SHERMAN2, E05R0000, UTM300, CAVITY03, and PDE.]

Conclusions and references

Simpler GMRES and ORTHODIR do not break down, but they are potentially unstable in the case of fast convergence.
Simpler GMRES with residuals and ORTHOMIN( )/GCR can break down, but already moderate convergence implies good conditioning of the basis.
Interpolating between the two bases by an adaptive selection of the new direction vector combines the advantages of both approaches and provides a stable variant of Simpler GMRES or GCR.

References:
P. Jiránek, M. Rozložník, and M. H. Gutknecht. How to make Simpler GMRES and GCR more stable. SIAM J. Matrix Anal. Appl., 30(4):1483-1499, 2008.
P. Jiránek and M. Rozložník. Adaptive version of Simpler GMRES. Numerical Algorithms, 53 (2010).

Thank you for your attention!

References I

M. Arioli and C. Fassino. Roundoff error analysis of algorithms based on Krylov subspace methods. BIT, 36(2), 1996.
J. Drkošová, A. Greenbaum, M. Rozložník, and Z. Strakoš. Numerical stability of GMRES. BIT, 35(3), 1995.
S. C. Eisenstat, H. C. Elman, and M. H. Schultz. Variational iterative methods for nonsymmetric systems of linear equations. SIAM J. Numer. Anal., 20(2), 1983.
A. Greenbaum, M. Rozložník, and Z. Strakoš. Numerical behaviour of the modified Gram-Schmidt GMRES implementation. BIT, 37(3), 1997.
J. Liesen, M. Rozložník, and Z. Strakoš. Least squares residuals and minimal residual methods. SIAM J. Sci. Comput., 23(5), 2002.
C. C. Paige, M. Rozložník, and Z. Strakoš. Modified Gram-Schmidt (MGS), least squares, and backward stability of MGS-GMRES. SIAM J. Matrix Anal. Appl., 28(1), 2006.
A. Ramage and A. J. Wathen. Iterative solution techniques for the Stokes and Navier-Stokes equations. Internat. J. Numer. Methods Fluids, 19(1):67-83, 1994.
Y. Saad and M. H. Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput., 7(3), 1986.
P. K. W. Vinsome. Orthomin, an iterative method for solving sparse sets of simultaneous linear equations. In Proceedings of the Fourth Symposium on Reservoir Simulation. Society of Petroleum Engineers of the American Institute of Mining, Metallurgical, and Petroleum Engineers, 1976.

References II

H. F. Walker and L. Zhou. A simpler GMRES. Numer. Linear Algebra Appl., 1(6), 1994.
D. M. Young and K. C. Jea. Generalized conjugate-gradient acceleration of nonsymmetrizable iterative methods. Linear Algebra Appl., 34(1), 1980.
