Variants of BiCGSafe method using shadow three-term recurrence

Seiji Fujino (Research Institute for Information Technology, Kyushu University), Takashi Sekimoto (Ryoyu Systems Co. Ltd.)

1 Introduction

After the appearance of the CGS [10] and BiCGStab [12] methods, a number of iterative methods based on the Lanczos polynomial combined with an auxiliary polynomial were proposed independently [9][13][14]. The strategy of combining the two polynomials was then generalized to a product of two polynomials in 1997 [15]. However, the optimization of the product of polynomials remains an open problem. A first solution beyond the naive realization of product-type iterative methods was given in part by Fujino et al. in 2005 [4], through the adoption of the associate residual, in place of the residual, for determining the two undetermined parameters. They found a clue in the ordering in which the polynomials are developed and, as a result, succeeded in considerably reducing the instability of convergence. They named the method BiCGSafe in view of its safe convergence. With the same strategy, i.e., adoption of the associate residual, variants of the GPBiCG method such as GPBiCG AR were produced in 2009 [6]. A different approach also exists, e.g., the two-term recurrence approach proposed by Rutishauser in 1959 [5][8]. Recently, Abe and Sleijpen [1] applied it to the improvement of some variants of the GPBiCG method; however, while some of these variants succeed, others fail in terms of convergence rate and stability of convergence. In this paper, we derive variants of the BiCGSafe method [4] by considering the ordering of the Lanczos and stability polynomials that constitute the algorithm of the product-type BiCG method [3]. This paper is organized as follows. In section 2 we describe the derivation of variants of the BiCG method. In section 3, the algorithm of the revised variants of the BiCGSafe method is derived using the shadow three-term recurrence.
In section 4, safe convergence of the variants of the BiCGSafe method is demonstrated by numerical experiments, and in section 5 we draw some conclusions.

2 Derivation of variants of BiCGSafe method

We focus on the iterative solution of a linear system of equations

    Ax = b    (2.1)

in which A is a nonsingular real n × n matrix and b a given real n-vector. Starting from some initial guess x_0 for the solution, the BiCG (Bi-Conjugate Gradient) method [3] generates a sequence x_k with the property that the k-th residual r_k := b - Ax_k lies in the Krylov subspace generated by A from r_0, i.e.,

    r_k ∈ K_{k+1} := span(r_0, Ar_0, ..., A^k r_0).    (2.2)

The sequence of residual polynomials R_k is defined by

    r_k := R_k(A) r_0,    (2.3)

and the polynomials R_k(λ) are referred to as the Lanczos polynomials. In the standard treatment of the BiCG method, second polynomials P_k play a role; they are defined by

    p_k := P_k(A) r_0.    (2.4)

We note that the basic recurrence relations between R_k(λ) and P_k(λ) are

    R_0(λ) = 1,  P_0(λ) = 1,    (2.5)
    R_{k+1}(λ) = R_k(λ) - α_k λ P_k(λ),
    P_{k+1}(λ) = R_{k+1}(λ) + β_k P_k(λ),  k = 0, 1, 2, ....    (2.6)

Then we can introduce three-term recurrence relations for the Lanczos polynomials R_k(λ) alone by eliminating P_k(λ) from (2.5) and (2.6):

    R_0(λ) = 1,  R_1(λ) = (1 - α_0 λ) R_0(λ),    (2.7)
    R_{k+1}(λ) = (1 + (β_{k-1} α_k / α_{k-1}) - α_k λ) R_k(λ) - (β_{k-1} α_k / α_{k-1}) R_{k-1}(λ),  k = 1, 2, ....    (2.8)
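The equivalence of the coupled recurrences (2.5)-(2.6) and the three-term recurrence (2.7)-(2.8) is an algebraic identity, so it can be checked numerically at a sample point λ. A minimal sketch; the coefficient values α_k, β_k below are arbitrary illustrative choices:

```python
import numpy as np

# Hypothetical coefficient sequences alpha_k, beta_k; the identity holds for
# any choice, so random values suffice to illustrate it.
rng = np.random.default_rng(0)
alpha = rng.uniform(0.5, 1.5, size=10)
beta = rng.uniform(0.1, 0.9, size=10)
lam = 0.37  # sample evaluation point lambda

# Coupled two-term recurrences (2.5)-(2.6)
R, P = [1.0], [1.0]
for k in range(9):
    R.append(R[k] - alpha[k] * lam * P[k])
    P.append(R[k + 1] + beta[k] * P[k])

# Three-term recurrence (2.7)-(2.8) for R_k alone
S = [1.0, (1.0 - alpha[0] * lam) * 1.0]
for k in range(1, 9):
    c = beta[k - 1] * alpha[k] / alpha[k - 1]
    S.append((1.0 + c - alpha[k] * lam) * S[k] - c * S[k - 1])

assert np.allclose(R, S)  # both recurrences generate the same polynomials
```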

Zhang [15] discovered that an often excellent convergence property can be gained by choosing stability polynomials H_k(λ) that are built up in the same three-term recurrence form as the polynomials R_k(λ) in (2.7) and (2.8), with suitable undetermined parameters ζ_k and η_k:

    H_0(λ) = 1,  H_1(λ) = (1 - ζ_0 λ) H_0(λ),    (2.9)
    H_{k+1}(λ) = (1 + η_k - ζ_k λ) H_k(λ) - η_k H_{k-1}(λ),  k = 1, 2, ....    (2.10)

Moreover, we define the polynomial G_k(λ) as

    G_k(λ) = (H_k(λ) - H_{k+1}(λ)) / λ.    (2.11)

By reconstructing (2.10) using the stability polynomials H_k(λ) and G_k(λ), we obtain the coupled two-term recursion

    H_0(λ) = 1,  G_0(λ) = ζ_0,    (2.12)
    H_k(λ) = H_{k-1}(λ) - λ G_{k-1}(λ),    (2.13)
    G_k(λ) = ζ_k H_k(λ) + η_k G_{k-1}(λ),  k = 1, 2, ....    (2.14)

3 Algorithm of revised variants of BiCGSafe method

In this section, we consider the following alternative recurrence [8], with polynomials G̃_{k+1}(λ) and H_{k+1}(λ) in place of the polynomials G_k(λ) and H_{k+1}(λ):

    G̃_0(λ) = 0,  H_0(λ) = 1,    (3.1)
    G̃_{k+1}(λ) = ζ_k λ H_k(λ) + η_k G̃_k(λ),    (3.2)
    H_{k+1}(λ) = H_k(λ) - G̃_{k+1}(λ),  k = 0, 1, ....    (3.3)

Here the parameters ζ_k, η_k are undetermined scalars. The polynomial G̃_{k+1}(λ) satisfies the relationship G̃_{k+1}(λ) = λ G_k(λ). When λ is very close to zero, computation of G_k(λ) = (H_k(λ) - H_{k+1}(λ)) / λ causes more roundoff error than that of G̃_{k+1}(λ), since the latter avoids the division by λ:

    G̃_{k+1}(λ) = λ G_k(λ) = λ { (H_k(λ) - H_{k+1}(λ)) / λ } = H_k(λ) - H_{k+1}(λ).    (3.4)

Using eqns. (3.2) and (3.3), the following recurrence (3.5) holds; we refer to it as the shadow three-term recurrence:

    H_{k+1}(λ) = H_k(λ) - ζ_k λ H_k(λ) - η_k G̃_k(λ).    (3.5)

3.1 Ordering of development of the two polynomials

In this subsection, we consider developing the residual polynomial H_{k+1}(λ) R_{k+1}(λ) by means of the polynomials of the BiCG method, the alternative recurrences (3.1)-(3.3) due to Rutishauser, and the above shadow three-term recurrence (3.5).
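The three forms of the stability polynomials, namely the three-term recurrence (2.9)-(2.10), the coupled two-term recursion (2.12)-(2.14), and the alternative recurrence (3.1)-(3.3), generate the same H_k(λ), with G̃_{k+1}(λ) = λ G_k(λ). A minimal numerical sketch, with arbitrary illustrative values for ζ_k and η_k:

```python
import numpy as np

# Hypothetical parameter sequences zeta_k, eta_k; the equivalences below are
# algebraic identities, so any choice serves to illustrate them.
rng = np.random.default_rng(1)
zeta = rng.uniform(0.5, 1.5, size=10)
eta = rng.uniform(0.1, 0.9, size=10)
lam = 0.41  # sample evaluation point lambda

# Three-term recurrence (2.9)-(2.10) for H_k
H3 = [1.0, 1.0 - zeta[0] * lam]
for k in range(1, 9):
    H3.append((1.0 + eta[k] - zeta[k] * lam) * H3[k] - eta[k] * H3[k - 1])

# Coupled two-term recursion (2.12)-(2.14)
H, G = [1.0], [zeta[0]]
for k in range(1, 10):
    H.append(H[k - 1] - lam * G[k - 1])
    G.append(zeta[k] * H[k] + eta[k] * G[k - 1])

# Alternative recurrence (3.1)-(3.3); Gt stands for the tilde polynomial
Ha, Gt = [1.0], [0.0]
for k in range(9):
    Gt.append(zeta[k] * lam * Ha[k] + eta[k] * Gt[k])
    Ha.append(Ha[k] - Gt[k + 1])

assert np.allclose(H3, H)    # (2.9)-(2.10) and (2.12)-(2.14) agree
assert np.allclose(H3, Ha)   # the alternative recurrence reproduces H_k
assert np.allclose(Gt[1:], [lam * g for g in G[:9]])  # Gt_{k+1} = lambda*G_k
```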
We derive the algorithm of the variants of the BiCGSafe method by first developing the Lanczos polynomial R_{k+1}(λ) in the residual vector r_{k+1} := H_{k+1}(A) R_{k+1}(A) r_0, as in the algorithm of the BiCGSafe method itself:

    H_{k+1}(λ) R_{k+1}(λ)
      = H_{k+1}(λ) (R_k(λ) - α_k λ P_k(λ))
      = (H_k(λ) - ζ_k λ H_k(λ) - η_k G̃_k(λ)) R_k(λ) - α_k λ (H_k(λ) - G̃_{k+1}(λ)) P_k(λ)
      = H_k(λ) R_k(λ) - α_k λ H_k(λ) P_k(λ)
        - (ζ_k λ H_k(λ) R_k(λ) + η_k G̃_k(λ) R_k(λ) - α_k λ G̃_{k+1}(λ) P_k(λ)).    (3.6)

Moreover, we can develop the recurrences appearing in eqn. (3.6) as follows:

    H_k(λ) P_k(λ) = H_k(λ) (R_k(λ) + β_{k-1} P_{k-1}(λ))
                  = H_k(λ) R_k(λ) + β_{k-1} (H_{k-1}(λ) P_{k-1}(λ) - G̃_k(λ) P_{k-1}(λ)),    (3.7)
    λ H_k(λ) P_k(λ) = λ H_k(λ) R_k(λ) + β_{k-1} (λ H_{k-1}(λ) P_{k-1}(λ) - λ G̃_k(λ) P_{k-1}(λ)),    (3.8)
    G̃_{k+1}(λ) P_k(λ) = (ζ_k λ H_k(λ) + η_k G̃_k(λ)) P_k(λ)    (3.9)
                      = ζ_k λ H_k(λ) P_k(λ) + η_k (G̃_k(λ) R_k(λ) + β_{k-1} G̃_k(λ) P_{k-1}(λ)),    (3.10)
    G̃_{k+1}(λ) R_{k+1}(λ) = G̃_{k+1}(λ) (R_k(λ) - α_k λ P_k(λ))
                          = ζ_k λ H_k(λ) R_k(λ) + η_k G̃_k(λ) R_k(λ) - α_k λ G̃_{k+1}(λ) P_k(λ).    (3.11)

Here we introduce three auxiliary vectors:

    p_k := H_k(A) P_k(A) r_0,    (3.12)
    u_k := G̃_{k+1}(A) P_k(A) r_0,    (3.13)
    w_{k+1} := G̃_{k+1}(A) R_{k+1}(A) r_0.    (3.14)

Substituting these auxiliary vectors into eqns. (3.6)-(3.11), we obtain the following vector recurrences:

    r_{k+1} = r_k - α_k A p_k - w_{k+1},    (3.15)
    p_k = r_k + β_{k-1} (p_{k-1} - u_{k-1}),    (3.16)
    A p_k = A r_k + β_{k-1} (A p_{k-1} - A u_{k-1}),    (3.17)
    u_k = ζ_k A p_k + η_k (w_k + β_{k-1} u_{k-1}),    (3.18)
    w_{k+1} = ζ_k A r_k + η_k w_k - α_k A u_k.    (3.19)

Concerning the update formula for the approximate solution vector x_{k+1}, we utilize the recurrence obtained from eqn. (3.15):

    b - A x_{k+1} = r_{k+1} = r_k - α_k A p_k - w_{k+1} = b - A x_k - α_k A p_k - w_{k+1}.    (3.20)

Here we introduce an additional auxiliary vector z_k defined by w_{k+1} = A z_k, i.e., z_k = A^{-1} G̃_{k+1}(A) R_{k+1}(A) r_0. Then, from the relationship

    λ^{-1} G̃_{k+1}(λ) R_{k+1}(λ) = ζ_k H_k(λ) R_k(λ) + η_k λ^{-1} G̃_k(λ) R_k(λ) - α_k G̃_{k+1}(λ) P_k(λ),    (3.21)

we can derive the recurrence for the auxiliary vector z_k, and accordingly the recurrence for the solution vector x_{k+1}:

    z_k = ζ_k r_k + η_k z_{k-1} - α_k u_k,    (3.22)
    x_{k+1} = x_k + α_k p_k + z_k.    (3.23)

As in the BiCGSafe method, we decide the parameters ζ_k, η_k by introducing the associate residual r^a_k := H_{k+1}(A) R_k(A) r_0.
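With A replaced by a scalar λ, the vector recurrences (3.15)-(3.19) become polynomial identities. The following sketch, with arbitrary illustrative parameter values, checks them against the defining products r_k = H_k R_k, p_k = H_k P_k, u_k = G̃_{k+1} P_k, w_k = G̃_k R_k:

```python
import numpy as np

# Hypothetical parameter sequences; the recurrences are identities in lambda,
# so random coefficients suffice to verify them numerically.
rng = np.random.default_rng(2)
alpha = rng.uniform(0.5, 1.5, size=10)
beta = rng.uniform(0.1, 0.9, size=10)
zeta = rng.uniform(0.5, 1.5, size=10)
eta = rng.uniform(0.1, 0.9, size=10)
lam = 0.29  # sample evaluation point lambda

# BiCG polynomials (2.5)-(2.6) and stability polynomials (3.1)-(3.3)
R, P, H, Gt = [1.0], [1.0], [1.0], [0.0]
for k in range(9):
    R.append(R[k] - alpha[k] * lam * P[k])
    P.append(R[k + 1] + beta[k] * P[k])
    Gt.append(zeta[k] * lam * H[k] + eta[k] * Gt[k])
    H.append(H[k] - Gt[k + 1])

r = [H[k] * R[k] for k in range(10)]       # r_k  = H_k R_k
p = [H[k] * P[k] for k in range(10)]       # p_k  = H_k P_k
u = [Gt[k + 1] * P[k] for k in range(9)]   # u_k  = Gt_{k+1} P_k
w = [Gt[k] * R[k] for k in range(10)]      # w_k  = Gt_k R_k

for k in range(1, 8):
    # (3.15)  r_{k+1} = r_k - alpha_k*lam*p_k - w_{k+1}
    assert np.isclose(r[k + 1], r[k] - alpha[k] * lam * p[k] - w[k + 1])
    # (3.16)  p_k = r_k + beta_{k-1}(p_{k-1} - u_{k-1})
    assert np.isclose(p[k], r[k] + beta[k - 1] * (p[k - 1] - u[k - 1]))
    # (3.18)  u_k = zeta_k*lam*p_k + eta_k*(w_k + beta_{k-1}*u_{k-1})
    assert np.isclose(u[k], zeta[k] * lam * p[k] + eta[k] * (w[k] + beta[k - 1] * u[k - 1]))
    # (3.19)  w_{k+1} = zeta_k*lam*r_k + eta_k*w_k - alpha_k*lam*u_k
    assert np.isclose(w[k + 1], zeta[k] * lam * r[k] + eta[k] * w[k] - alpha[k] * lam * u[k])
```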
By using the relationship

    H_{k+1}(λ) R_k(λ) = H_k(λ) R_k(λ) - ζ_k λ H_k(λ) R_k(λ) - η_k G̃_k(λ) R_k(λ),    (3.24)

we can derive the recurrence for the associate residual r^a_k,

    r^a_k = r_k - ζ_k A r_k - η_k w_k,    (3.25)

and decide the parameters ζ_k, η_k by minimizing the 2-norm of r^a_k:

    ||r^a_k||_2 = ||r_k - ζ_k A r_k - η_k w_k||_2.    (3.26)
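Minimizing (3.26) over ζ_k and η_k is a two-variable linear least-squares problem; its normal equations give the closed forms that reappear as eqns. (3.33)-(3.34). A sketch with hypothetical stand-in vectors for r_k, A r_k and w_k:

```python
import numpy as np

# Hypothetical sample vectors standing in for r_k, A r_k and w_k; the point is
# the 2x2 normal-equation solve behind (3.26), not any particular data.
rng = np.random.default_rng(3)
a = rng.standard_normal(50)   # a_k = r_k
c = rng.standard_normal(50)   # c_k = A r_k
b = rng.standard_normal(50)   # b_k = w_k

# Normal equations for min_{zeta,eta} || a - zeta*c - eta*b ||_2
den = np.dot(c, c) * np.dot(b, b) - np.dot(b, c) * np.dot(c, b)
zeta = (np.dot(b, b) * np.dot(c, a) - np.dot(b, a) * np.dot(c, b)) / den
eta = (np.dot(c, c) * np.dot(b, a) - np.dot(b, c) * np.dot(c, a)) / den

# Cross-check against a generic least-squares solve
zeta_ls, eta_ls = np.linalg.lstsq(np.column_stack([c, b]), a, rcond=None)[0]
assert np.isclose(zeta, zeta_ls) and np.isclose(eta, eta_ls)
```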

From the above derivation, we obtain the algorithm; we refer to it as BiCGSafe variant 1 (abbreviated as BiCGSafe var 1). Moreover, in the development of the polynomial G̃_{k+1}(λ) P_k(λ) of eqn. (3.9), the following development can also be made:

    G̃_{k+1}(λ) P_k(λ) = G̃_{k+1}(λ) (R_k(λ) + β_{k-1} P_{k-1}(λ))
                      = G̃_{k+1}(λ) R_k(λ) + β_{k-1} G̃_{k+1}(λ) P_{k-1}(λ)
                      = G̃_{k+1}(λ) R_k(λ) + β_{k-1} (ζ_k λ H_k(λ) P_{k-1}(λ) + η_k G̃_k(λ) P_{k-1}(λ)).    (3.27)

In the same way as for BiCGSafe var 1, we obtain the algorithm of BiCGSafe variant 2 (abbreviated as BiCGSafe var 2). The difference between the two methods appears in the update formula for the residual vector r_{k+1}: eqn. (3.43) for var 1 versus eqn. (3.44) for var 2. These update procedures differ from those of the algorithm of the BiCGSafe method.

Algorithm of BiCGSafe var 1 (var 2) methods:

    x_0 is an initial guess, r_0 = b - A x_0,
    choose r*_0 such that (r*_0, r_0) ≠ 0, β_{-1} = 0,
    for k = 0, 1, ... until ||r_k|| ≤ ε ||r_0||:
        p_k = r_k + β_{k-1} (p_{k-1} - u_{k-1}),    (3.28)
        compute A r_k,    (3.29)
        A p_k = A r_k + β_{k-1} t_k,    (3.30)
        α_k = (r*_0, r_k) / (r*_0, A p_k),    (3.31)
        a_k = r_k,  b_k = y_k,  c_k = A r_k,    (3.32)
        ζ_k = [(b_k, b_k)(c_k, a_k) - (b_k, a_k)(c_k, b_k)] / [(c_k, c_k)(b_k, b_k) - (b_k, c_k)(c_k, b_k)],    (3.33)
        η_k = [(c_k, c_k)(b_k, a_k) - (b_k, c_k)(c_k, a_k)] / [(c_k, c_k)(b_k, b_k) - (b_k, c_k)(c_k, b_k)],    (3.34)
        (if k = 0, then ζ_k = (c_k, a_k)/(c_k, c_k), η_k = 0),    (3.35)
        q_k = ζ_k A r_k + η_k y_k,    (3.36)
        u_k = q_k + β_{k-1} (ζ_k t_k + η_k u_{k-1}),    (3.37)
        z_k = ζ_k r_k + η_k z_{k-1} - α_k u_k,    (3.38)
        compute A u_k,    (3.39)
        y_{k+1} = q_k - α_k A u_k,    (3.40)
        t_{k+1} = A p_k - A u_k,    (3.41)
        x_{k+1} = x_k + α_k p_k + z_k,    (3.42)
        r_{k+1} = r_k - α_k A p_k - y_{k+1}    (3.43)
                (= r_k - α_k t_{k+1} - q_k),    (3.44)
        β_k = (α_k / ζ_k) (r*_0, r_{k+1}) / (r*_0, r_k),    (3.45)
    end for

4 Numerical experiments

All computations were done in double-precision floating-point arithmetic and performed on a Dell PowerEdge R210 II (CPU: Intel Xeon E3-1220, clock: 3.1 GHz, memory: 8 GB, OS: Scientific Linux 6.0).
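The listing (3.28)-(3.45) can be transcribed directly. Below is a minimal sketch of BiCGSafe var 1 only, using the var 1 residual update (3.43); the zero initialization of the auxiliary vectors p, u, y, z, t is our assumption, as the listing does not state it explicitly:

```python
import numpy as np

def bicgsafe_var1(A, b, x0=None, tol=1e-10, maxiter=500):
    """Sketch of BiCGSafe var 1, eqns. (3.28)-(3.45); variable names follow
    the paper's notation (y plays the role of w, t of Ap - Au)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    rs0 = r.copy()                     # shadow residual r*_0 = r_0
    nrm0 = np.linalg.norm(r)
    p = u = y = z = t = np.zeros(n)    # assumed zero initialization
    beta = 0.0
    for k in range(maxiter):
        if np.linalg.norm(r) <= tol * nrm0:
            break
        p = r + beta * (p - u)                         # (3.28)
        Ar = A @ r                                     # (3.29)
        Ap = Ar + beta * t                             # (3.30)
        alpha = (rs0 @ r) / (rs0 @ Ap)                 # (3.31)
        a, bv, c = r, y, Ar                            # (3.32)
        if k == 0:                                     # (3.35)
            zeta, eta = (c @ a) / (c @ c), 0.0
        else:                                          # (3.33)-(3.34)
            den = (c @ c) * (bv @ bv) - (bv @ c) * (c @ bv)
            zeta = ((bv @ bv) * (c @ a) - (bv @ a) * (c @ bv)) / den
            eta = ((c @ c) * (bv @ a) - (bv @ c) * (c @ a)) / den
        q = zeta * Ar + eta * y                        # (3.36)
        u = q + beta * (zeta * t + eta * u)            # (3.37)
        z = zeta * r + eta * z - alpha * u             # (3.38)
        Au = A @ u                                     # (3.39)
        y = q - alpha * Au                             # (3.40)
        t = Ap - Au                                    # (3.41)
        x = x + alpha * p + z                          # (3.42)
        r_new = r - alpha * Ap - y                     # (3.43)
        beta = (alpha / zeta) * (rs0 @ r_new) / (rs0 @ r)  # (3.45)
        r = r_new
    return x

# Usage on a small, well-conditioned nonsymmetric test system
rng = np.random.default_rng(4)
n = 60
A = 4.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
xs = rng.standard_normal(n)
b = A @ xs
x = bicgsafe_var1(A, b)
assert np.linalg.norm(b - A @ x) <= 1e-8 * np.linalg.norm(b)
```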
The Intel Fortran Compiler version 11.0 was used, and all codes were compiled with the -O3 optimization option. The right-hand side b was imposed from the physical load conditions. The stopping criterion for successful convergence of the iterative methods is that the relative residual 2-norm ||r_{k+1}||_2 / ||b - A x_0||_2 falls below the prescribed tolerance. In all cases the iteration was started with the initial guess x_0 = 0, and the maximum number of iterations was fixed in advance. Matrices are normalized with diagonal scaling. The initial shadow residual r*_0 is set equal to the initial residual r_0 (= b - A x_0). We show the specifications of the test matrices in Table 1. All matrices are taken from the Florida sparse matrix collection [11].
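The paper states only that the matrices are normalized with diagonal scaling, without specifying the scaling. One common realization, shown here as an assumption, is the two-sided scaling D^{-1/2} A D^{-1/2} with D = diag(A), which gives the scaled matrix a unit diagonal:

```python
import numpy as np

def diagonal_scale(A, b):
    # Assumed scaling: D^{-1/2} A D^{-1/2} with D = diag(A).
    # After solving the scaled system As ys = bs, recover x = ys / d.
    d = np.sqrt(np.abs(np.diag(A)))
    As = A / np.outer(d, d)   # D^{-1/2} A D^{-1/2}
    bs = b / d                # D^{-1/2} b
    return As, bs, d

# Small illustrative system with positive diagonal entries
rng = np.random.default_rng(5)
A = np.diag(rng.uniform(1.0, 9.0, 5)) + 0.1 * rng.standard_normal((5, 5))
b = rng.standard_normal(5)
As, bs, d = diagonal_scale(A, b)
assert np.allclose(np.diag(As), 1.0)  # scaled matrix has unit diagonal
```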

Table 1: Specifications of the test matrices (matrix, dimension n, total number of nonzero entries, and average number of nonzeros per row). The test set comprises af, air-cfl5 (n = 1,536,000), airfoil_2d, atmosmodd (n = 1,270,432), bcircuit, big (n = 13,209), epb1 (n = 14,734), epb2, epb3, Freescale1 (n = 3,428,755), poisson3db (n = 85,623), raefsky3 (n = 21,200), sme3da, sme3db (n = 29,067), sme3dc (n = 42,930), viscoplastic1 (n = 4,326), wang3, wang4, water_tank (n = 60,740) and xenon2 (n = 157,464).

We examined the performance of the GPBiCG, GPBiCG v1 (Abe-Sleijpen GPBiCG variant 1), GPBiCG v2 (Abe-Sleijpen GPBiCG variant 2), BiCGSafe, BiCGSafe var 1 and BiCGSafe var 2 methods with Eisenstat SSOR(m) preconditioning [2][7]. Table 2 presents the numerical results of these six methods. Here "pre-t." means the computation time for constructing the preconditioner, "itr-t." the iteration time, and "tot-t." the total time in seconds. "ratio" means the ratio of the computation time of the GPBiCG method to that of each iterative method. "max" denotes non-convergence when the iterations reach the maximum iteration count. TRR (True Relative Residual) means the value of ||b - Ax_{k+1}||_2 / ||b - Ax_0||_2 for the approximate solution x_{k+1}. The bold figures indicate the fastest case for each matrix. We can see the following facts from Table 2: the BiCGSafe var 1 and BiCGSafe var 2 methods outperform the other tested iterative methods in computation time for successful convergence; the performance of the GPBiCG v1 and GPBiCG v2 methods is poor; and the TRRs of the BiCGSafe var 1 and BiCGSafe var 2 methods are good. We exhibit a comparison of the computation times in seconds of the iterative methods with E-SSOR(m) preconditioning in Table 3, where "spurious" means that the TRR of the approximate solution is worse than the required accuracy. It can be seen that the BiCGSafe, BiCGSafe var 1 and BiCGSafe var 2 methods are very efficient.
5 Conclusions

We derived the algorithms of two variants of the BiCGSafe method, i.e., the BiCGSafe var 1 and BiCGSafe var 2 methods. Moreover, we examined the performance and robustness of convergence of these iterative methods through numerical experiments. As a result, it turned out that the BiCGSafe var 1 and BiCGSafe var 2 methods work well from the viewpoint of convergence rate and accuracy of the approximate solution.

References

[1] Abe, K., Sleijpen, G.L.G., Solving linear equations with a STABilized GPBiCG method, Proc. of the 14th Kansetouchi Workshop of JSIAM, pp. 29-34, Okayama University of Science, January.
[2] Eisenstat, S.C., Efficient implementation of a class of preconditioned conjugate gradient methods, SIAM J. Sci. Stat. Comput., Vol. 2, No. 1, pp. 1-4, 1981.
[3] Fletcher, R., Conjugate gradient methods for indefinite systems, Lecture Notes in Mathematics, Vol. 506, Springer, pp. 73-89, 1976.
[4] Fujino, S., Fujiwara, M., Yoshida, M., A proposal of preconditioned BiCGSafe method with safe convergence, Proc. of the 17th IMACS World Congress on Scientific Computation, Applied Mathematics and Simulation, CD-ROM, Paris, 2005.
[5] Jea, K.C., Young, D.M., Generalized conjugate-gradient acceleration of nonsymmetrizable iterative methods, Linear Algebra Appl., Vol. 34, pp. 159-194, 1980.
[6] Moethuthu, Fujino, S., Stability of GPBiCG AR method based on minimization of associate residual, Journal of ASCM, 2009.

Table 2: Performance of some GPBiCG-type and BiCGSafe-type methods with E-SSOR(m) preconditioning (columns: method, ω, m, itr., pre-t., itr-t., tot-t., log10(TRR), ratio) for (a) matrix bcircuit, (b) matrix sme3da and (c) matrix sme3dc; the methods compared are GPBiCG, GPBiCG v1, GPBiCG v2, BiCGSafe, BiCGSafe var 1 and BiCGSafe var 2. On sme3da, the GPBiCG (log10(TRR) = -7.32), GPBiCG v1 (-7.40) and GPBiCG v2 (-6.59) results are spurious; on sme3dc, the GPBiCG v1 (-7.20) and GPBiCG v2 (-7.02) results are spurious.

Table 3: Comparison of computation time in seconds of iterative methods with E-SSOR(m) preconditioning (columns: method, max, spurious, conv.) for the GPBiCG v1, GPBiCG v2, BiCGSafe, BiCGSafe var 1 and BiCGSafe var 2 methods.

[7] Murakami, K., Fujino, S., Onoue, Y., Taira, K., Consideration on Eisenstat preconditioning from the viewpoint of memory access, Transactions of JSST, Vol. 3, No. 2, pp. 36-47.
[8] Rutishauser, H., Theory of gradient methods, in: Refined Iterative Methods for Computation of the Solution and the Eigenvalues of Self-Adjoint Boundary Value Problems, Mitt. Inst. Angew. Math. ETH Zürich, Birkhäuser, pp. 24-49, 1959.
[9] Saad, Y., Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, Philadelphia, 2003.
[10] Sonneveld, P., CGS: a fast Lanczos-type solver for nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., Vol. 10, pp. 36-52, 1989.
[11] University of Florida Sparse Matrix Collection, http://www.cise.ufl.edu/research/sparse/matrices/index.html
[12] van der Vorst, H.A., Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., Vol. 13, pp. 631-644, 1992.
[13] van der Vorst, H.A., Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, Cambridge, 2003.
[14] Weiss, R., Parameter-Free Iterative Linear Solvers, Akademie Verlag, Berlin, 1996.
[15] Zhang, S.-L., GPBi-CG: Generalized product-type methods based on Bi-CG for solving nonsymmetric linear systems, SIAM J. Sci. Comput., Vol. 18, pp. 537-551, 1997.


More information

A PRECONDITIONER FOR THE HELMHOLTZ EQUATION WITH PERFECTLY MATCHED LAYER

A PRECONDITIONER FOR THE HELMHOLTZ EQUATION WITH PERFECTLY MATCHED LAYER European Conference on Computational Fluid Dynamics ECCOMAS CFD 2006 P. Wesseling, E. Oñate and J. Périaux (Eds) c TU Delft, The Netherlands, 2006 A PRECONDITIONER FOR THE HELMHOLTZ EQUATION WITH PERFECTLY

More information

A stable variant of Simpler GMRES and GCR

A stable variant of Simpler GMRES and GCR A stable variant of Simpler GMRES and GCR Miroslav Rozložník joint work with Pavel Jiránek and Martin H. Gutknecht Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic miro@cs.cas.cz,

More information

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm

Lecture 9: Krylov Subspace Methods. 2 Derivation of the Conjugate Gradient Algorithm CS 622 Data-Sparse Matrix Computations September 19, 217 Lecture 9: Krylov Subspace Methods Lecturer: Anil Damle Scribes: David Eriksson, Marc Aurele Gilles, Ariah Klages-Mundt, Sophia Novitzky 1 Introduction

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

CME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication.

CME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication. CME342 Parallel Methods in Numerical Analysis Matrix Computation: Iterative Methods II Outline: CG & its parallelization. Sparse Matrix-vector Multiplication. 1 Basic iterative methods: Ax = b r = b Ax

More information

Utilizing Quadruple-Precision Floating Point Arithmetic Operation for the Krylov Subspace Methods. SIAM Conference on Applied 1

Utilizing Quadruple-Precision Floating Point Arithmetic Operation for the Krylov Subspace Methods. SIAM Conference on Applied 1 Utilizing Quadruple-Precision Floating Point Arithmetic Operation for the Krylov Subspace Methods SIAM Conference on Applied 1 Outline Krylov Subspace methods Results of Accurate Computing Tools for Accurate

More information

Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems

Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite Linear Systems Abstract and Applied Analysis Article ID 237808 pages http://dxdoiorg/055/204/237808 Research Article Some Generalizations and Modifications of Iterative Methods for Solving Large Sparse Symmetric Indefinite

More information

S-Step and Communication-Avoiding Iterative Methods

S-Step and Communication-Avoiding Iterative Methods S-Step and Communication-Avoiding Iterative Methods Maxim Naumov NVIDIA, 270 San Tomas Expressway, Santa Clara, CA 95050 Abstract In this paper we make an overview of s-step Conjugate Gradient and develop

More information

IDR EXPLAINED. Dedicated to Richard S. Varga on the occasion of his 80th birthday.

IDR EXPLAINED. Dedicated to Richard S. Varga on the occasion of his 80th birthday. IDR EXPLAINED MARTIN H. GUTKNECHT Dedicated to Richard S. Varga on the occasion of his 80th birthday. Abstract. The Induced Dimension Reduction (IDR) method is a Krylov space method for solving linear

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

A Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers

A Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers Applied and Computational Mathematics 2017; 6(4): 202-207 http://www.sciencepublishinggroup.com/j/acm doi: 10.11648/j.acm.20170604.18 ISSN: 2328-5605 (Print); ISSN: 2328-5613 (Online) A Robust Preconditioned

More information

PROJECTED GMRES AND ITS VARIANTS

PROJECTED GMRES AND ITS VARIANTS PROJECTED GMRES AND ITS VARIANTS Reinaldo Astudillo Brígida Molina rastudillo@kuaimare.ciens.ucv.ve bmolina@kuaimare.ciens.ucv.ve Centro de Cálculo Científico y Tecnológico (CCCT), Facultad de Ciencias,

More information

Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration

Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration D. Calvetti a, B. Lewis b and L. Reichel c a Department of Mathematics, Case Western Reserve University,

More information

Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices

Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Incomplete LU Preconditioning and Error Compensation Strategies for Sparse Matrices Eun-Joo Lee Department of Computer Science, East Stroudsburg University of Pennsylvania, 327 Science and Technology Center,

More information

On the influence of eigenvalues on Bi-CG residual norms

On the influence of eigenvalues on Bi-CG residual norms On the influence of eigenvalues on Bi-CG residual norms Jurjen Duintjer Tebbens Institute of Computer Science Academy of Sciences of the Czech Republic duintjertebbens@cs.cas.cz Gérard Meurant 30, rue

More information

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS

Jordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative

More information

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc. Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information

Recycling Bi-Lanczos Algorithms: BiCG, CGS, and BiCGSTAB

Recycling Bi-Lanczos Algorithms: BiCG, CGS, and BiCGSTAB Recycling Bi-Lanczos Algorithms: BiCG, CGS, and BiCGSTAB Kapil Ahuja Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements

More information

Iterative Solvers in the Finite Element Solution of Transient Heat Conduction

Iterative Solvers in the Finite Element Solution of Transient Heat Conduction Iterative Solvers in the Finite Element Solution of Transient Heat Conduction Mile R. Vuji~i} PhD student Steve G.R. Brown Senior Lecturer Materials Research Centre School of Engineering University of

More information

Introduction. Chapter One

Introduction. Chapter One Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

Algorithm for Sparse Approximate Inverse Preconditioners in the Conjugate Gradient Method

Algorithm for Sparse Approximate Inverse Preconditioners in the Conjugate Gradient Method Algorithm for Sparse Approximate Inverse Preconditioners in the Conjugate Gradient Method Ilya B. Labutin A.A. Trofimuk Institute of Petroleum Geology and Geophysics SB RAS, 3, acad. Koptyug Ave., Novosibirsk

More information

Convergence property of IDR variant methods in the integral equation analysis of electromagnetic scattering problems

Convergence property of IDR variant methods in the integral equation analysis of electromagnetic scattering problems LETTER IEICE Electronics Express, Vol.11, No.9, 1 8 Convergence property of IDR variant methods in the integral equation analysis of electromagnetic scattering problems Hidetoshi Chiba a), Toru Fukasawa,

More information

Arnoldi Methods in SLEPc

Arnoldi Methods in SLEPc Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Jae Heon Yun and Yu Du Han

Jae Heon Yun and Yu Du Han Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose

More information

Conjugate Gradient (CG) Method

Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

A Novel Approach for Solving the Power Flow Equations

A Novel Approach for Solving the Power Flow Equations Vol.1, Issue.2, pp-364-370 ISSN: 2249-6645 A Novel Approach for Solving the Power Flow Equations Anwesh Chowdary 1, Dr. G. MadhusudhanaRao 2 1 Dept.of.EEE, KL University-Guntur, Andhra Pradesh, INDIA 2

More information

BiCGCR2: A new extension of conjugate residual method for solving non-hermitian linear systems

BiCGCR2: A new extension of conjugate residual method for solving non-hermitian linear systems BiCGCR2: A new extension of conjugate residual method for solving non-hermitian linear systems Xian-Ming Gu 1,2, Ting-Zhu Huang 1, Bruno Carpentieri 3, 1. School of Mathematical Sciences, University of

More information

Gradient Method Based on Roots of A

Gradient Method Based on Roots of A Journal of Scientific Computing, Vol. 15, No. 4, 2000 Solving Ax Using a Modified Conjugate Gradient Method Based on Roots of A Paul F. Fischer 1 and Sigal Gottlieb 2 Received January 23, 2001; accepted

More information

Valeria Simoncini and Daniel B. Szyld. Report October 2009

Valeria Simoncini and Daniel B. Szyld. Report October 2009 Interpreting IDR as a Petrov-Galerkin method Valeria Simoncini and Daniel B. Szyld Report 09-10-22 October 2009 This report is available in the World Wide Web at http://www.math.temple.edu/~szyld INTERPRETING

More information

High performance Krylov subspace method variants

High performance Krylov subspace method variants High performance Krylov subspace method variants and their behavior in finite precision Erin Carson New York University HPCSE17, May 24, 2017 Collaborators Miroslav Rozložník Institute of Computer Science,

More information

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili,

Generalized AOR Method for Solving System of Linear Equations. Davod Khojasteh Salkuyeh. Department of Mathematics, University of Mohaghegh Ardabili, Australian Journal of Basic and Applied Sciences, 5(3): 35-358, 20 ISSN 99-878 Generalized AOR Method for Solving Syste of Linear Equations Davod Khojasteh Salkuyeh Departent of Matheatics, University

More information

College of William & Mary Department of Computer Science

College of William & Mary Department of Computer Science Technical Report WM-CS-2009-06 College of William & Mary Department of Computer Science WM-CS-2009-06 Extending the eigcg algorithm to non-symmetric Lanczos for linear systems with multiple right-hand

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1

Parallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1 Parallel Numerics, WT 2016/2017 5 Iterative Methods for Sparse Linear Systems of Equations page 1 of 1 Contents 1 Introduction 1.1 Computer Science Aspects 1.2 Numerical Problems 1.3 Graphs 1.4 Loop Manipulations

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

IDR(s) as a projection method

IDR(s) as a projection method Delft University of Technology Faculty of Electrical Engineering, Mathematics and Computer Science Delft Institute of Applied Mathematics IDR(s) as a projection method A thesis submitted to the Delft Institute

More information

Iterative Linear Solvers

Iterative Linear Solvers Chapter 10 Iterative Linear Solvers In the previous two chapters, we developed strategies for solving a new class of problems involving minimizing a function f ( x) with or without constraints on x. In

More information

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation Tao Zhao 1, Feng-Nan Hwang 2 and Xiao-Chuan Cai 3 Abstract In this paper, we develop an overlapping domain decomposition

More information

QUADERNI. del. Dipartimento di Matematica. Ezio Venturino, Peter R. Graves-Moris & Alessandra De Rossi

QUADERNI. del. Dipartimento di Matematica. Ezio Venturino, Peter R. Graves-Moris & Alessandra De Rossi UNIVERSITÀ DI TORINO QUADERNI del Dipartimento di Matematica Ezio Venturino, Peter R. Graves-Moris & Alessandra De Rossi AN ADAPTATION OF BICGSTAB FOR NONLINEAR BIOLOGICAL SYSTEMS QUADERNO N. 17/2005 Here

More information

A Block Compression Algorithm for Computing Preconditioners

A Block Compression Algorithm for Computing Preconditioners A Block Compression Algorithm for Computing Preconditioners J Cerdán, J Marín, and J Mas Abstract To implement efficiently algorithms for the solution of large systems of linear equations in modern computer

More information

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant

More information

Further experiences with GMRESR

Further experiences with GMRESR Further experiences with GMRESR Report 92-2 C. Vui Technische Universiteit Delft Delft University of Technology Faculteit der Technische Wisunde en Informatica Faculty of Technical Mathematics and Informatics

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

Chebyshev semi-iteration in Preconditioning

Chebyshev semi-iteration in Preconditioning Report no. 08/14 Chebyshev semi-iteration in Preconditioning Andrew J. Wathen Oxford University Computing Laboratory Tyrone Rees Oxford University Computing Laboratory Dedicated to Victor Pereyra on his

More information

Dedicated to Lothar Reichel on the occasion of his 60th birthday

Dedicated to Lothar Reichel on the occasion of his 60th birthday REVISITING (k, l)-step METHODS MARTIN H. GUTKNECHT Dedicated to Lothar Reichel on the occasion of his 60th birthday Abstract. In the mid-1980s B.N. Parsons [29] and the author [10] independently had the

More information