Formulation of a Preconditioned Algorithm for the Conjugate Gradient Squared Method in Accordance with Its Logical Structure


Applied Mathematics, 2015, 6, 1389-1406. Published Online July 2015 in SciRes.

Formulation of a Preconditioned Algorithm for the Conjugate Gradient Squared Method in Accordance with Its Logical Structure

Shoji Itoh^1, Masaaki Sugihara^2
^1 Division of Science, School of Science and Engineering, Tokyo Denki University, Saitama, Japan
^2 Department of Physics and Mathematics, College of Science and Engineering, Aoyama Gakuin University, Kanagawa, Japan
Email: itosho@acm.org

Received 22 June 2015; accepted 27 July 2015; published 30 July 2015

Copyright 2015 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution-NonCommercial International License (CC BY-NC).

Abstract

In this paper, we propose an improved preconditioned algorithm for the conjugate gradient squared method (improved PCGS) for the solution of linear equations. Further, the logical structures underlying the formation of this preconditioned algorithm are demonstrated via a number of theorems. This improved PCGS algorithm retains some mathematical properties that are associated with the CGS derivation from the bi-conjugate gradient method under a non-preconditioned system. A series of numerical comparisons with the conventional PCGS illustrates the enhanced effectiveness of our improved scheme with a variety of preconditioners. The logical structure underlying the formation of the improved PCGS brings a spillover effect to various bi-Lanczos-type algorithms with minimal residual operations, because these algorithms were constructed by adopting the idea behind the derivation of CGS. These bi-Lanczos-type algorithms are very important, because they are often adopted to solve the systems of linear equations that arise from large-scale numerical simulations.

Keywords

Linear Systems, Krylov Subspace Method, Bi-Lanczos Algorithm, Preconditioned System, PCGS

1. Introduction

In scientific and technical computation, natural phenomena and engineering problems are described through numerical models. These models are often reduced to a system of linear equations:

How to cite this paper: Itoh, S. and Sugihara, M. (2015) Formulation of a Preconditioned Algorithm for the Conjugate Gradient Squared Method in Accordance with Its Logical Structure. Applied Mathematics, 6, 1389-1406.

$A x = b,$    (1)

where $A$ is a large, sparse coefficient matrix of size $n \times n$, $x$ is the solution vector, and $b$ is the right-hand side (RHS) vector.

The conjugate gradient squared (CGS) method is a way to solve (1) [1]. The CGS method is a type of bi-Lanczos algorithm that belongs to the class of Krylov subspace methods. Bi-Lanczos-type algorithms are derived from the bi-conjugate gradient (BiCG) method [2] [3], which assumes the existence of a dual system

$A^{\mathrm{T}} x^* = b^*.$    (2)

Characteristically, the coefficient matrix of (2) is the transpose of $A$. In this paper, we term (2) a "shadow system". Bi-Lanczos-type algorithms have the advantage of requiring less memory than Arnoldi-type algorithms, another class of Krylov subspace methods.

The CGS method is derived from BiCG. Furthermore, various bi-Lanczos-type algorithms, such as BiCGStab [4], GPBiCG [5], and so on, have been constructed by adopting the idea behind the derivation of CGS. These bi-Lanczos-type algorithms are very important, because they are often adopted to solve systems in the form of (1) that arise from large-scale numerical simulations.

Many iterative methods, including bi-Lanczos algorithms, are often applied together with some preconditioning operation. Such algorithms are called preconditioned algorithms; for example, preconditioned CGS (PCGS). The application of preconditioning operations to iterative methods effectively enhances their performance. Indeed, the effects attributable to different preconditioning operations are greater than those produced by different iterative methods [6]. However, if a preconditioned algorithm is poorly designed, there may be no beneficial effect from the preconditioning operation. Consequently, PCGS holds an important position within the Krylov subspace methods. In this paper, we identify a mathematical issue with the conventional PCGS algorithm and propose an improved PCGS^1. This improved PCGS algorithm is derived rationally in accordance with its logical structure.

In this paper, "preconditioned algorithm" and "preconditioned system" refer to a solving algorithm described with some preconditioning operator $M$ (or preconditioner, preconditioning matrix) and to the system converted by the operator based on $M$, respectively. These terms never indicate the algorithm for the preconditioning operation itself, such as incomplete LU decomposition, approximate inverse, and so on. For example, for a preconditioned system, the original linear system (1) becomes

$\tilde{A} \tilde{x} = \tilde{b},$    (3)
$\tilde{A} = M_L^{-1} A M_R^{-1}, \quad \tilde{x} = M_R x, \quad \tilde{b} = M_L^{-1} b$    (4)

under the preconditioner $M = M_L M_R$ ($M \approx A$). Here, the matrix and the vectors in the preconditioned system are denoted by a tilde. However, the conversions in (3) and (4) are not implemented; rather, we construct the preconditioned algorithm that is equivalent to solving (3).

This paper is organized as follows. Section 2 provides an overview of the derivation of the CGS method and the properties of its two scalar coefficients ($\alpha_k$ and $\beta_k$). This forms an important part of our argument for the preconditioned BiCG and PCGS algorithms in the next section. Section 3 introduces the PCGS algorithm. An issue with the conventional PCGS algorithm is identified, and an improved algorithm is proposed. In Section 4, we present some numerical results to demonstrate the effect of the improved PCGS algorithm with a variety of preconditioners. As a consequence, the effectiveness of the improved algorithm is clearly established. Finally, our conclusions are presented in Section 5.

2. Derivation of the CGS Method and Preconditioned Algorithm of the BiCG Method

In this section, we derive the CGS method from the BiCG method and introduce the preconditioned BiCG algorithm^1.
^1 This improved PCGS was proposed in Ref. [7] from the viewpoint of the logic of its deriving process, but this manuscript demonstrates some mathematical theorems underlying that logic. Further, only numerical results using the ILU(0) preconditioner are shown in Ref. [7], whereas results using a variety of typical preconditioners are shown in this manuscript.
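As a reading aid for the conversion (3)-(4), the following small Python/numpy sketch (our illustration, not part of the original paper; the diagonal splitting used for $M_L$ and $M_R$ is an arbitrary assumption) checks that solving the explicitly converted system and mapping back through $M_R$ reproduces the solution of $A x = b$. As stressed above, a practical preconditioned algorithm never forms $\tilde{A}$ explicitly.

```python
# Minimal numpy sketch of the preconditioned-system conversion (3)-(4).
# Illustrative only: a practical algorithm never forms M_L^{-1} A M_R^{-1}.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.3  # well-conditioned test matrix
b = rng.standard_normal(n)

# A simple split preconditioner M = M_L M_R (diagonal splitting, assumed here;
# any nonsingular splitting with M close to A would serve the same purpose).
d = np.sqrt(np.abs(np.diag(A)))
M_L, M_R = np.diag(d), np.diag(d)

A_tilde = np.linalg.solve(M_L, A) @ np.linalg.inv(M_R)  # M_L^{-1} A M_R^{-1}
b_tilde = np.linalg.solve(M_L, b)                       # M_L^{-1} b

x_tilde = np.linalg.solve(A_tilde, b_tilde)             # solve the converted system (3)
x = np.linalg.solve(M_R, x_tilde)                       # map back: x = M_R^{-1} x_tilde

print(np.allclose(A @ x, b))                            # True: equivalent to A x = b
```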

2.1. Structure of the BiCG Method

BiCG [2] [3] is an iterative method for linear systems in which the coefficient matrix $A$ is nonsymmetric. The algorithm proceeds as follows:

Algorithm 1. BiCG method:
$x_0$ is an initial guess, $r_0 = b - A x_0$, set $\beta_{-1} = 0$,
$(r^*_0, r_0) \neq 0$, e.g., $r^*_0 = r_0$.
For $k = 0, 1, 2, \dots$ until convergence, Do:
    $p_k = r_k + \beta_{k-1} p_{k-1}$,
    $p^*_k = r^*_k + \beta_{k-1} p^*_{k-1}$,
    $\alpha_k = \frac{(r^*_k, r_k)}{(p^*_k, A p_k)}$,    (5)
    $x_{k+1} = x_k + \alpha_k p_k$,
    $r_{k+1} = r_k - \alpha_k A p_k$,
    $r^*_{k+1} = r^*_k - \alpha_k A^{\mathrm{T}} p^*_k$,
    $\beta_k = \frac{(r^*_{k+1}, r_{k+1})}{(r^*_k, r_k)}$.    (6)
End Do

BiCG implements the following theorems.

Theorem 1 (Hestenes et al. [8], Sonneveld [1] and others^2). For a non-preconditioned system, there are recurrence relations that define the degree $k$ of the residual polynomial $R_k(\lambda)$ and the probing direction polynomial $P_k(\lambda)$. These are

$R_0(\lambda) = 1, \quad P_0(\lambda) = 1,$    (7)
$R_k(\lambda) = R_{k-1}(\lambda) - \alpha_{k-1} \lambda P_{k-1}(\lambda),$    (8)
$P_k(\lambda) = R_k(\lambda) + \beta_{k-1} P_{k-1}(\lambda),$    (9)

where $\alpha_k$ and $\beta_k$ are based on (5) and (6).

Using the polynomials of Theorem 1, the residual vector for the linear system (1) and the shadow residual vector for the shadow system (2) can be written as

$r_k = R_k(A) r_0,$    (10)
$r^*_k = R_k(A^{\mathrm{T}}) r^*_0.$    (11)

The probing direction vectors are represented by

$p_k = P_k(A) r_0,$    (12)
$p^*_k = P_k(A^{\mathrm{T}}) r^*_0,$    (13)

where $r_0 = b - A x_0$ is the initial residual vector and $r^*_0 = b^* - A^{\mathrm{T}} x^*_0$ is the initial shadow residual vector. However, in practice, $r^*_0$ is set in such a way as to satisfy the condition $(r^*_0, r_0) \neq 0$. Here, $(u, v)$ denotes the inner product between the vectors $u$ and $v$. In this paper, we set $r^*_0 = r_0$ to satisfy the condition $(r^*_0, r_0) \neq 0$ exactly.

^2 Of course, [8] discusses the conjugate gradient method for a symmetric system.
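For concreteness, a minimal dense-numpy sketch of Algorithm 1 follows (our code, not part of the original paper; the function name, tolerance handling, and absence of breakdown safeguards are our own choices).

```python
# A minimal numpy sketch of Algorithm 1 (BiCG). Dense linear algebra, no
# breakdown safeguards; r_s denotes the shadow residual r*.
import numpy as np

def bicg(A, b, x0, tol=1e-12, maxiter=None):
    n = len(b)
    maxiter = maxiter or n
    x = x0.copy()
    r = b - A @ x
    r_s = r.copy()                        # r*_0 = r_0, so (r*_0, r_0) != 0
    p = np.zeros(n); p_s = np.zeros(n)
    beta = 0.0
    for k in range(maxiter):
        p = r + beta * p                  # p_k = r_k + beta_{k-1} p_{k-1}
        p_s = r_s + beta * p_s
        Ap = A @ p
        rho = r_s @ r                     # (r*_k, r_k)
        alpha = rho / (p_s @ Ap)          # (5)
        x = x + alpha * p
        r = r - alpha * Ap                # r_{k+1}
        r_s = r_s - alpha * (A.T @ p_s)   # r*_{k+1}, the only use of A^T
        beta = (r_s @ r) / rho            # (6)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x, k + 1
```

A call such as `x, iters = bicg(A, b, np.zeros_like(b))` then realizes the setting $x_0 = 0$, $r^*_0 = r_0$ used in this paper.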

This is a typical way of ensuring $(r^*_0, r_0) \neq 0$; other settings are beyond the scope of this paper.

Theorem 2 (Fletcher [2]). The BiCG method satisfies the following conditions:

$(r^*_i, r_j) = 0 \ (i \neq j)$: bi-orthogonality conditions,    (14)
$(p^*_i, A p_j) = 0 \ (i \neq j)$: bi-conjugacy conditions.    (15)

2.2. Derivation of the CGS Method

The CGS method is derived by transforming the scalar coefficients in the BiCG method so as to avoid the matrix $A^{\mathrm{T}}$ [1]. The polynomials defined by (10)-(13) are substituted into (14) and (15), which construct the numerators and denominators of $\alpha_k$ and $\beta_k$ in BiCG^3. Then

$(r^*_k, r_k) = (R_k(A^{\mathrm{T}}) r^*_0, R_k(A) r_0) = (r^*_0, R_k^2(A) r_0),$    (16)
$(p^*_k, A p_k) = (P_k(A^{\mathrm{T}}) r^*_0, A P_k(A) r_0) = (r^*_0, A P_k^2(A) r_0),$    (17)

and the following theorem can be applied.

Theorem 3 (Sonneveld [1]). The CGS coefficients $\alpha_k^{\mathrm{CGS}}$ and $\beta_k^{\mathrm{CGS}}$ are equivalent to the corresponding coefficients in BiCG under certain transformation and substitution operations based on the bi-orthogonality and bi-conjugacy conditions.

Proof. We apply (16) and (17), and introduce the substitutions

$r_k^{\mathrm{CGS}} \equiv R_k^2(A) r_0, \quad p_k^{\mathrm{CGS}} \equiv P_k^2(A) r_0.$    (18)

Then

$\alpha_k^{\mathrm{BiCG}} = \frac{(r^*_k, r_k)}{(p^*_k, A p_k)} = \frac{(r^*_0, R_k^2(A) r_0)}{(r^*_0, A P_k^2(A) r_0)} = \frac{(r^*_0, r_k^{\mathrm{CGS}})}{(r^*_0, A p_k^{\mathrm{CGS}})} = \alpha_k^{\mathrm{CGS}},$    (19)

$\beta_k^{\mathrm{BiCG}} = \frac{(r^*_{k+1}, r_{k+1})}{(r^*_k, r_k)} = \frac{(r^*_0, R_{k+1}^2(A) r_0)}{(r^*_0, R_k^2(A) r_0)} = \frac{(r^*_0, r_{k+1}^{\mathrm{CGS}})}{(r^*_0, r_k^{\mathrm{CGS}})} = \beta_k^{\mathrm{CGS}}.$    (20) □

The CGS method is derived from BiCG by Theorem 4.

Theorem 4 (Sonneveld [1]). The CGS method is derived from the linear system's recurrence relations in the BiCG method, under the property of equivalence between the coefficients $\alpha_k$, $\beta_k$ in CGS and BiCG. However, the solution vector $x_k$ is derived from a recurrence relation based on $r_k = b - A x_k$.

Proof. The coefficients $\alpha_k$ and $\beta_k$ in BiCG and CGS were derived in Theorem 3. The recurrence relations (8) and (9) for BiCG are squared to give

$R_k^2(\lambda) = \left( R_{k-1}(\lambda) - \alpha_{k-1} \lambda P_{k-1}(\lambda) \right)^2 = R_{k-1}^2(\lambda) - 2 \alpha_{k-1} \lambda P_{k-1}(\lambda) R_{k-1}(\lambda) + \alpha_{k-1}^2 \lambda^2 P_{k-1}^2(\lambda),$

$P_k^2(\lambda) = \left( R_k(\lambda) + \beta_{k-1} P_{k-1}(\lambda) \right)^2 = R_k^2(\lambda) + 2 \beta_{k-1} P_{k-1}(\lambda) R_k(\lambda) + \beta_{k-1}^2 P_{k-1}^2(\lambda).$

Further, we can apply $r_k = R_k^2(A) r_0$ and $p_k = P_k^2(A) r_0$ from (18), and substitute

$u_k = P_k(A) R_k(A) r_0,$    (21)
$q_k = P_k(A) R_{k+1}(A) r_0.$    (22)

Thus we have derived the CGS method^4. □

^3 In this paper, if we specifically distinguish these coefficients, we write $\alpha_k^{\mathrm{BiCG}}$ and $\beta_k^{\mathrm{BiCG}}$.
^4 In this paper, we use the superscript "CGS" alongside $\alpha_k$, $\beta_k$ and the relevant vectors to describe this algorithm.
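The pivotal move in (16), (17) and Theorem 3 is shifting a polynomial in $A^{\mathrm{T}}$ across the inner product, $(P(A^{\mathrm{T}}) u, P(A) v) = (u, P^2(A) v)$. The following quick numerical check of this identity, with a random matrix and a random cubic polynomial, is our own illustration under those assumptions.

```python
# Numerical check of the identity behind (16)-(17): a polynomial in A^T can be
# moved across the inner product, which is how the squared polynomials
# R_k^2 and P_k^2 arise in the CGS derivation.
import numpy as np

rng = np.random.default_rng(1)
n = 30
A = rng.standard_normal((n, n))
u, v = rng.standard_normal(n), rng.standard_normal(n)
c = rng.standard_normal(4)                  # coefficients of a cubic polynomial

def poly(M, c):
    # c[0] I + c[1] M + c[2] M^2 + c[3] M^3
    P, Mk = c[0] * np.eye(len(M)), np.eye(len(M))
    for ck in c[1:]:
        Mk = Mk @ M
        P += ck * Mk
    return P

lhs = (poly(A.T, c) @ u) @ (poly(A, c) @ v)  # (P(A^T) u, P(A) v)
rhs = u @ (poly(A, c) @ poly(A, c) @ v)      # (u, P^2(A) v)
print(np.isclose(lhs, rhs))                  # True
```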

Algorithm 2. CGS method:
$x_0$ is an initial guess, $r_0 = b - A x_0$, set $\beta_{-1} = 0$,
$(r^*_0, r_0) \neq 0$, e.g., $r^*_0 = r_0$.
For $k = 0, 1, 2, \dots$ until convergence, Do:
    $u_k = r_k + \beta_{k-1} q_{k-1}$,
    $p_k = u_k + \beta_{k-1} (q_{k-1} + \beta_{k-1} p_{k-1})$,
    $\alpha_k = \frac{(r^*_0, r_k)}{(r^*_0, A p_k)}$,
    $q_k = u_k - \alpha_k A p_k$,
    $x_{k+1} = x_k + \alpha_k (u_k + q_k)$,
    $r_{k+1} = r_k - \alpha_k A (u_k + q_k)$,
    $\beta_k = \frac{(r^*_0, r_{k+1})}{(r^*_0, r_k)}$.
End Do

The following Proposition 5 and Corollary 1 are given as a supplementary explanation for Algorithm 2. These are almost trivial, but they are comparatively important in the next section's discussion.

Proposition 5. There exist the following relations:

$r_0^{\mathrm{CGS}} = r_0,$    (23)
$r^{*\,\mathrm{CGS}}_0 = r^*_0,$    (24)

where $r_0^{\mathrm{CGS}}$ is the initial residual vector at $k = 0$ in the iterative part of CGS, $r^{*\,\mathrm{CGS}}_0$ is the initial shadow residual vector in CGS, $r_0$ is the initial residual vector, and $r^*_0$ is the initial shadow residual vector.

Proof. Equation (23) follows because (18) for $k = 0$ gives $r_0^{\mathrm{CGS}} = R_0^2(A) r_0 = r_0$. Here, $R_0^2(A) = I$ by (7), where $I$ denotes the identity matrix. Equation (24) is derived as follows. Applying (18) to (16), we obtain

$(r^*_k, r_k) = (R_k(A^{\mathrm{T}}) r^*_0, R_k(A) r_0) = (r^*_0, R_k^2(A) r_0) = (r^*_0, r_k^{\mathrm{CGS}}).$

This equation shows that the inner product of the CGS on the right is obtained from the inner product of the BiCG on the left. Therefore, the $r^*_0$ that composes the polynomial $R_k(A^{\mathrm{T}}) r^*_0$ expressing the shadow residual vector of the BiCG is the same as the $r^*_0$ in CGS. Hereafter, $r^{*\,\mathrm{CGS}}_0$ can be represented by $r^*_0$ to the extent that the two need not be distinguished. □

Corollary 1. There exists the following relation:

$p_0^{\mathrm{CGS}} = r_0^{\mathrm{CGS}} = r_0,$

where $p_0^{\mathrm{CGS}}$ is the probing direction vector in CGS, $r_0^{\mathrm{CGS}}$ is the initial residual vector at $k = 0$ in the iterative part of CGS, and $r_0$ is the initial residual vector.
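A minimal dense-numpy sketch of Algorithm 2 follows (again our code, with no breakdown safeguards). Note that, unlike the BiCG sketch above, no product with $A^{\mathrm{T}}$ appears.

```python
# A minimal numpy sketch of Algorithm 2 (CGS): transpose-free, two
# matrix-vector products per iteration, no breakdown safeguards.
import numpy as np

def cgs(A, b, x0, tol=1e-12, maxiter=None):
    n = len(b)
    maxiter = maxiter or n
    x = x0.copy()
    r = b - A @ x
    r_s = r.copy()                        # r*_0 = r_0
    p = np.zeros(n); q = np.zeros(n)
    beta = 0.0
    for k in range(maxiter):
        u = r + beta * q                  # u_k = r_k + beta_{k-1} q_{k-1}
        p = u + beta * (q + beta * p)
        Ap = A @ p
        rho = r_s @ r                     # (r*_0, r_k)
        alpha = rho / (r_s @ Ap)
        q = u - alpha * Ap
        x = x + alpha * (u + q)
        r = r - alpha * (A @ (u + q))
        beta = (r_s @ r) / rho
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x, k + 1
```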

2.3. Derivation of the Preconditioned BiCG Algorithm

In this subsection, the preconditioned BiCG algorithm is derived from the non-preconditioned BiCG method (Algorithm 1). First, some basic aspects of the BiCG method under a preconditioned system are expressed, and then a standard preconditioned BiCG algorithm is given.

When the BiCG method (Algorithm 1) is applied to linear equations under a preconditioned system,

$\tilde{A} \tilde{x} = \tilde{b},$    (25)

we obtain a BiCG method under the preconditioned system (Algorithm 3). In this paper, matrices and vectors under the preconditioned system are denoted with a tilde, such as $\tilde{A}$, and the coefficients $\alpha_k$, $\beta_k$ are specified by $\tilde{\alpha}_k$, $\tilde{\beta}_k$.

Algorithm 3. BiCG method under the preconditioned system:
$\tilde{x}_0$ is an initial guess, $\tilde{r}_0 = \tilde{b} - \tilde{A} \tilde{x}_0$, set $\tilde{\beta}_{-1} = 0$,
$(\tilde{r}^*_0, \tilde{r}_0) \neq 0$, e.g., $\tilde{r}^*_0 = \tilde{r}_0$.
For $k = 0, 1, 2, \dots$ until convergence, Do:
    $\tilde{p}_k = \tilde{r}_k + \tilde{\beta}_{k-1} \tilde{p}_{k-1}$,
    $\tilde{p}^*_k = \tilde{r}^*_k + \tilde{\beta}_{k-1} \tilde{p}^*_{k-1}$,
    $\tilde{\alpha}_k = \frac{(\tilde{r}^*_k, \tilde{r}_k)}{(\tilde{p}^*_k, \tilde{A} \tilde{p}_k)}$,    (26)
    $\tilde{x}_{k+1} = \tilde{x}_k + \tilde{\alpha}_k \tilde{p}_k$,
    $\tilde{r}_{k+1} = \tilde{r}_k - \tilde{\alpha}_k \tilde{A} \tilde{p}_k$,
    $\tilde{r}^*_{k+1} = \tilde{r}^*_k - \tilde{\alpha}_k \tilde{A}^{\mathrm{T}} \tilde{p}^*_k$,
    $\tilde{\beta}_k = \frac{(\tilde{r}^*_{k+1}, \tilde{r}_{k+1})}{(\tilde{r}^*_k, \tilde{r}_k)}$.    (27)
End Do^5

We now state Theorem 6 and Theorem 7, which are clearly derived from Theorem 1 and Theorem 2, respectively.

Theorem 6. Under the preconditioned system, there are recurrence relations that define the degree $k$ of the residual polynomial $\tilde{R}_k(\tilde{\lambda})$ and the probing direction polynomial $\tilde{P}_k(\tilde{\lambda})$^6. These are

$\tilde{R}_0(\tilde{\lambda}) = 1, \quad \tilde{P}_0(\tilde{\lambda}) = 1,$    (28)
$\tilde{R}_k(\tilde{\lambda}) = \tilde{R}_{k-1}(\tilde{\lambda}) - \tilde{\alpha}_{k-1} \tilde{\lambda} \tilde{P}_{k-1}(\tilde{\lambda}),$    (29)
$\tilde{P}_k(\tilde{\lambda}) = \tilde{R}_k(\tilde{\lambda}) + \tilde{\beta}_{k-1} \tilde{P}_{k-1}(\tilde{\lambda}),$    (30)

where $\tilde{\lambda}$ is the variable under the preconditioned system, and $\tilde{\alpha}_k$, $\tilde{\beta}_k$ in these relations are based on (26) and (27).

Using the polynomials of Theorem 6, the residual vectors of the preconditioned linear system (25) and the shadow residual vectors of the following preconditioned shadow system,

$\tilde{A}^{\mathrm{T}} \tilde{x}^* = \tilde{b}^*,$    (31)

can be represented as

$\tilde{r}_k = \tilde{R}_k(\tilde{A}) \tilde{r}_0,$    (32)
$\tilde{r}^*_k = \tilde{R}_k(\tilde{A}^{\mathrm{T}}) \tilde{r}^*_0,$    (33)

respectively. The probing direction vectors are given by

$\tilde{p}_k = \tilde{P}_k(\tilde{A}) \tilde{r}_0,$    (34)
$\tilde{p}^*_k = \tilde{P}_k(\tilde{A}^{\mathrm{T}}) \tilde{r}^*_0,$    (35)

respectively.

^5 If we wish to emphasize different methods, a superscript is applied to the relevant vectors to denote the method, such as $\tilde{p}_k^{\mathrm{BiCG}}$.
^6 We represent the polynomials $\tilde{R}$ and $\tilde{P}$ in italic font to denote a preconditioned system.

Under the preconditioned system, $\tilde{r}^*_0$ is set in such a way as to satisfy the condition $(\tilde{r}^*_0, \tilde{r}_0) \neq 0$. In this paper, we set $\tilde{r}^*_0$ based on the equation $\tilde{r}^*_0 = \tilde{r}_0$ to satisfy the condition $(\tilde{r}^*_0, \tilde{r}_0) \neq 0$ exactly. Some variations based on $(\tilde{r}^*_0, \tilde{r}_0) \neq 0$, such as Algorithm 4 below, are allowed. Other settings are beyond the scope of this paper.

Remark 1. The shadow system given by (31) does not exist, but it is very important to construct systems in which the transpose of the matrix $\tilde{A}$ exists.

Theorem 7. The BiCG method under the preconditioned system satisfies the following conditions:

$(\tilde{r}^*_i, \tilde{r}_j) = 0 \ (i \neq j)$: bi-orthogonality conditions under the preconditioned system,    (36)
$(\tilde{p}^*_i, \tilde{A} \tilde{p}_j) = 0 \ (i \neq j)$: bi-conjugacy conditions under the preconditioned system.    (37)

Next, we derive the standard PBiCG algorithm. Here, the preconditioned linear system (25) and its shadow system (31) are formed as follows:

$M_L^{-1} A M_R^{-1} (M_R x) = M_L^{-1} b,$    (38)
$M_R^{-\mathrm{T}} A^{\mathrm{T}} M_L^{-\mathrm{T}} (M_L^{\mathrm{T}} x^*) = M_R^{-\mathrm{T}} b^*.$    (39)

Definition 1. For the PBiCG algorithm, the solution vector is denoted as $x_k$. Furthermore, the residual vectors of the linear system and the shadow system of the PBiCG algorithm are written as $r_k$ and $r^*_k$, respectively, and their probing direction vectors are $p_k$ and $p^*_k$, respectively.

Using this notation, each vector of the BiCG under the preconditioned system given by Algorithm 3 is converted as below:

$\tilde{r}_k = M_L^{-1} r_k, \quad \tilde{r}^*_k = M_R^{-\mathrm{T}} r^*_k, \quad \tilde{p}_k = M_R p_k, \quad \tilde{p}^*_k = M_L^{\mathrm{T}} p^*_k.$    (40)

Substituting the elements of (40) into (36) and (37), we have

$(\tilde{r}^*_i, \tilde{r}_j) = (M_R^{-\mathrm{T}} r^*_i, M_L^{-1} r_j) = (r^*_i, M^{-1} r_j),$    (41)
$(\tilde{p}^*_i, \tilde{A} \tilde{p}_j) = (M_L^{\mathrm{T}} p^*_i, M_L^{-1} A M_R^{-1} M_R p_j) = (p^*_i, A p_j).$    (42)

Consequently, (26) and (27) become

$\alpha_k = \frac{(r^*_k, M^{-1} r_k)}{(p^*_k, A p_k)},$    (43)
$\beta_k = \frac{(r^*_{k+1}, M^{-1} r_{k+1})}{(r^*_k, M^{-1} r_k)}.$    (44)

Before the iterative step, we give the following Definition 2.

Definition 2. For some preconditioned algorithms, the initial residual vector of the linear system is written as $r_0^{\mathrm{P}}$, and the initial shadow residual vector of the shadow system is written as $r^{*\,\mathrm{P}}_0$, before the iterative step.

We adopt the following preconditioning conversion after (40):

$\tilde{r}_0 = M_L^{-1} r_0^{\mathrm{P}}, \quad \tilde{r}^*_0 = M_R^{-\mathrm{T}} r^{*\,\mathrm{P}}_0.$    (45)

Consequently, we can derive the following standard PBiCG algorithm [9] [10].

Algorithm 4. Standard preconditioned BiCG algorithm:
$x_0$ is an initial guess, $r_0^{\mathrm{P}} = b - A x_0$, set $\beta_{-1} = 0$,
$(r^{*\,\mathrm{P}}_0, M^{-1} r_0^{\mathrm{P}}) \neq 0$, e.g., $r^{*\,\mathrm{P}}_0 = r_0^{\mathrm{P}}$.
For $k = 0, 1, 2, \dots$ until convergence, Do:
    $p_k = M^{-1} r_k + \beta_{k-1} p_{k-1}$,
    $p^*_k = M^{-\mathrm{T}} r^*_k + \beta_{k-1} p^*_{k-1}$,
    $\alpha_k = \frac{(r^*_k, M^{-1} r_k)}{(p^*_k, A p_k)}$,    (46)
    $x_{k+1} = x_k + \alpha_k p_k$,
    $r_{k+1} = r_k - \alpha_k A p_k$,    (47)
    $r^*_{k+1} = r^*_k - \alpha_k A^{\mathrm{T}} p^*_k$,    (48)
    $\beta_k = \frac{(r^*_{k+1}, M^{-1} r_{k+1})}{(r^*_k, M^{-1} r_k)}$.    (49)
End Do

Algorithm 4 satisfies $r_k = r_0^{\mathrm{P}}$ and $r^*_k = r^{*\,\mathrm{P}}_0$ when $k = 0$ in the iterative part.

Remark 2. Because we apply a preconditioning conversion such as $\tilde{x} = M_R x$ in the iteration of the BiCG method under the preconditioned system (Algorithm 3), $\tilde{x}_k = M_R x_k$ holds for the $x_k$ in the iteration of Algorithm 4. Further, the initial solution to Algorithm 3 is technically $\tilde{x}_0$, but this is actually calculated by multiplying the initial solution $x_0$ by $M_R$.

In this section, we have shown that $\alpha_k$ in BiCG is equivalent to $\alpha_k$ in CGS using (19), and that $\beta_k$ in BiCG is equivalent to $\beta_k$ in CGS using (20). In the next section, we propose an improved PCGS algorithm by applying this result to the preconditioned system.

3. Improved PCGS Algorithm

In this section, we first explain the derivation of PCGS and present the conventional PCGS algorithm. We identify an issue with this conventional PCGS algorithm, and we propose an improved PCGS that overcomes this issue.

3.1. Derivation of the PCGS Algorithm

Figure 1 illustrates the logical structure of the solving methods and the preconditioned algorithms discussed in this paper.

[Figure 1. Relation between BiCG, CGS, and their preconditioned algorithms. CGS is obtained from BiCG by derivation ($\alpha_k^{\mathrm{BiCG}} = \alpha_k^{\mathrm{CGS}}$, $\beta_k^{\mathrm{BiCG}} = \beta_k^{\mathrm{CGS}}$), and PBiCG is obtained from BiCG by preconditioning conversion. The conventional PCGS follows from applying the preconditioning conversion to CGS, whereas the improved PCGS (our proposal) follows from applying the CGS derivation to PBiCG.]
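The following is a minimal numpy sketch of Algorithm 4 (our code). For clarity, $M$ is passed as an explicit matrix, and $M^{-1}$, $M^{-\mathrm{T}}$ are applied with dense solves; a real implementation would instead use, for example, triangular solves with the factors of an incomplete factorization.

```python
# A minimal numpy sketch of Algorithm 4 (standard preconditioned BiCG).
import numpy as np

def pbicg(A, b, M, x0, tol=1e-12, maxiter=None):
    n = len(b)
    maxiter = maxiter or n
    x = x0.copy()
    r = b - A @ x                         # r_0^P
    r_s = r.copy()                        # r*_0^P = r_0^P
    z = np.linalg.solve(M, r)             # M^{-1} r_k
    z_s = np.linalg.solve(M.T, r_s)       # M^{-T} r*_k
    rho = r_s @ z                         # (r*_0, M^{-1} r_0) != 0
    p = np.zeros(n); p_s = np.zeros(n)
    beta = 0.0
    for k in range(maxiter):
        p = z + beta * p                  # p_k = M^{-1} r_k + beta_{k-1} p_{k-1}
        p_s = z_s + beta * p_s            # p*_k = M^{-T} r*_k + beta_{k-1} p*_{k-1}
        Ap = A @ p
        alpha = rho / (p_s @ Ap)          # (46)
        x = x + alpha * p
        r = r - alpha * Ap                # (47)
        r_s = r_s - alpha * (A.T @ p_s)   # (48)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = np.linalg.solve(M, r)
        z_s = np.linalg.solve(M.T, r_s)
        rho_new = r_s @ z
        beta = rho_new / rho              # (49)
        rho = rho_new
    return x, k + 1
```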

Typically, PCGS algorithms are derived via a CGS method under a preconditioned system (Algorithm 5). Algorithm 5 is derived by applying the CGS method (Algorithm 2) to the preconditioned linear system (25). In this section, the vectors and the coefficients $\tilde{\alpha}_k$, $\tilde{\beta}_k$ are PCGS elements, except for those before the iteration (Definition 2). If we wish to emphasize different methods, we apply a superscript to the relevant vectors, such as $\tilde{\alpha}_k^{\mathrm{CGS}}$, $\tilde{p}_k^{\mathrm{CGS}}$.

Algorithm 5. CGS method under the preconditioned system:
$\tilde{x}_0$ is an initial guess, $\tilde{r}_0 = \tilde{b} - \tilde{A} \tilde{x}_0$, set $\tilde{\beta}_{-1} = 0$,
$(\tilde{r}^*_0, \tilde{r}_0) \neq 0$, e.g., $\tilde{r}^*_0 = \tilde{r}_0$.
For $k = 0, 1, 2, \dots$ until convergence, Do:
    $\tilde{u}_k = \tilde{r}_k + \tilde{\beta}_{k-1} \tilde{q}_{k-1}$,
    $\tilde{p}_k = \tilde{u}_k + \tilde{\beta}_{k-1} (\tilde{q}_{k-1} + \tilde{\beta}_{k-1} \tilde{p}_{k-1})$,
    $\tilde{\alpha}_k = \frac{(\tilde{r}^*_0, \tilde{r}_k)}{(\tilde{r}^*_0, \tilde{A} \tilde{p}_k)}$,
    $\tilde{q}_k = \tilde{u}_k - \tilde{\alpha}_k \tilde{A} \tilde{p}_k$,
    $\tilde{x}_{k+1} = \tilde{x}_k + \tilde{\alpha}_k (\tilde{u}_k + \tilde{q}_k)$,
    $\tilde{r}_{k+1} = \tilde{r}_k - \tilde{\alpha}_k \tilde{A} (\tilde{u}_k + \tilde{q}_k)$,
    $\tilde{\beta}_k = \frac{(\tilde{r}^*_0, \tilde{r}_{k+1})}{(\tilde{r}^*_0, \tilde{r}_k)}$.
End Do

The conventional PCGS algorithm (Algorithm 6) is derived via the CGS method, as shown in Figure 1, but this algorithm does not reflect the logic of Subsection 2.2 in its preconditioning conversion. In contrast, our proposed improved PCGS algorithm (Algorithm 7) applies the derivation from BiCG to CGS directly to the PBiCG algorithm, thus maintaining the logic of Subsection 2.2.

3.2. Conventional PCGS and Its Issue

The conventional PCGS algorithm is adopted in many documents and numerical libraries [4] [9] [11]. It is derived by applying the following preconditioning conversion to Algorithm 5:

$\tilde{A} = M_L^{-1} A M_R^{-1}, \quad \tilde{x} = M_R x, \quad \tilde{b} = M_L^{-1} b, \quad \tilde{r}_k = M_L^{-1} r_k,$    (50)
$\tilde{r}^*_k = M_L^{\mathrm{T}} r^*_k, \quad \tilde{p}_k = M_L^{-1} p_k, \quad \tilde{u}_k = M_L^{-1} u_k, \quad \tilde{q}_k = M_L^{-1} q_k.$

This gives the following Algorithm 6 ("Conventional PCGS" in Figure 1).

Algorithm 6. Conventional PCGS algorithm:
$x_0$ is an initial guess, $r_0 = b - A x_0$, set $\beta_{-1} = 0$,
$(r^*_0, r_0) \neq 0$, e.g., $r^*_0 = r_0$.
For $k = 0, 1, 2, \dots$ until convergence, Do:
    $u_k = r_k + \beta_{k-1} q_{k-1}$,
    $p_k = u_k + \beta_{k-1} (q_{k-1} + \beta_{k-1} p_{k-1})$,
    $\alpha_k = \frac{(r^*_0, r_k)}{(r^*_0, A M^{-1} p_k)}$,    (51)
    $q_k = u_k - \alpha_k A M^{-1} p_k$,
    $x_{k+1} = x_k + \alpha_k M^{-1} (u_k + q_k)$,
    $r_{k+1} = r_k - \alpha_k A M^{-1} (u_k + q_k)$,
    $\beta_k = \frac{(r^*_0, r_{k+1})}{(r^*_0, r_k)}$.    (52)
End Do
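A minimal numpy sketch of Algorithm 6 follows (our code, under the same assumptions as the sketches above). Note where the preconditioner enters: $\alpha_k$ and $\beta_k$ use the unpreconditioned inner products $(r^*_0, r_k)$, and $M^{-1}$ appears only inside the vector updates; this is exactly the point questioned below.

```python
# A minimal numpy sketch of Algorithm 6 (conventional PCGS).
import numpy as np

def pcgs_conventional(A, b, M, x0, tol=1e-12, maxiter=None):
    n = len(b)
    maxiter = maxiter or n
    x = x0.copy()
    r = b - A @ x
    r_s = r.copy()                        # r*_0 = r_0
    p = np.zeros(n); q = np.zeros(n)
    beta = 0.0
    rho = r_s @ r                         # (r*_0, r_0)
    for k in range(maxiter):
        u = r + beta * q
        p = u + beta * (q + beta * p)
        AMp = A @ np.linalg.solve(M, p)   # A M^{-1} p_k (1st preconditioning op)
        alpha = rho / (r_s @ AMp)         # (51): no M^{-1} in the inner products
        q = u - alpha * AMp
        uq = np.linalg.solve(M, u + q)    # M^{-1}(u_k + q_k) (2nd preconditioning op)
        x = x + alpha * uq
        r = r - alpha * (A @ uq)
        rho_new = r_s @ r
        beta = rho_new / rho              # (52)
        rho = rho_new
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x, k + 1
```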

This PCGS algorithm was described in [4], the paper that proposed the BiCGStab method, and it has been employed as a standard approach up to the present. This version of PCGS seems to be a compliant algorithm on the surface, because the operation $(r^*_0, r_k)$ in (51) and (52) does not include the preconditioning operator $M^{-1}$ under the conversions $\tilde{r}_k = M_L^{-1} r_k$ and $\tilde{r}^*_k = M_L^{\mathrm{T}} r^*_k$ from (50). However, if we apply $\tilde{r}^*_k = M_L^{\mathrm{T}} r^*_k$ from (50) to the preconditioning conversion of the shadow residual vector in the BiCG method, we obtain

$r^*_k = M_L^{-\mathrm{T}} \tilde{r}^*_k.$    (53)

This is different from the conversion given by (40), and we cannot obtain coefficients equivalent to (43) and (44) using (53).

3.3. Derivation of the Improved PCGS Method from $\tilde{\alpha}_k^{\mathrm{BiCG}}$ and $\tilde{\beta}_k^{\mathrm{BiCG}}$

In this subsection, we present an improved PCGS algorithm ("Improved PCGS" in Figure 1). We formulate this algorithm by applying the CGS derivation process directly to the BiCG method under the preconditioned system (Algorithm 3). The polynomials (32)-(35) of the residual vectors and the probing direction vectors in PBiCG are substituted into the numerators and denominators of $\tilde{\alpha}_k^{\mathrm{BiCG}}$ and $\tilde{\beta}_k^{\mathrm{BiCG}}$. We have

$(\tilde{r}^*_k, \tilde{r}_k) = (\tilde{R}_k(\tilde{A}^{\mathrm{T}}) \tilde{r}^*_0, \tilde{R}_k(\tilde{A}) \tilde{r}_0) = (\tilde{r}^*_0, \tilde{R}_k^2(\tilde{A}) \tilde{r}_0),$    (54)
$(\tilde{p}^*_k, \tilde{A} \tilde{p}_k) = (\tilde{P}_k(\tilde{A}^{\mathrm{T}}) \tilde{r}^*_0, \tilde{A} \tilde{P}_k(\tilde{A}) \tilde{r}_0) = (\tilde{r}^*_0, \tilde{A} \tilde{P}_k^2(\tilde{A}) \tilde{r}_0),$    (55)

and we apply the following Theorem 8.

Theorem 8. The PCGS coefficients $\tilde{\alpha}_k$ and $\tilde{\beta}_k$ are equivalent to the corresponding coefficients in PBiCG under certain transformation and substitution operations based on the bi-orthogonality and bi-conjugacy conditions under the preconditioned system.

Proof. We apply (54) and (55), and introduce the substitutions

$\tilde{r}_k^{\mathrm{CGS}} \equiv \tilde{R}_k^2(\tilde{A}) \tilde{r}_0, \quad \tilde{p}_k^{\mathrm{CGS}} \equiv \tilde{P}_k^2(\tilde{A}) \tilde{r}_0.$    (56)

Then

$\tilde{\alpha}_k^{\mathrm{BiCG}} = \frac{(\tilde{r}^*_k, \tilde{r}_k)}{(\tilde{p}^*_k, \tilde{A} \tilde{p}_k)} = \frac{(\tilde{r}^*_0, \tilde{R}_k^2(\tilde{A}) \tilde{r}_0)}{(\tilde{r}^*_0, \tilde{A} \tilde{P}_k^2(\tilde{A}) \tilde{r}_0)} = \frac{(\tilde{r}^*_0, \tilde{r}_k^{\mathrm{CGS}})}{(\tilde{r}^*_0, \tilde{A} \tilde{p}_k^{\mathrm{CGS}})} = \tilde{\alpha}_k^{\mathrm{CGS}},$    (57)

$\tilde{\beta}_k^{\mathrm{BiCG}} = \frac{(\tilde{r}^*_{k+1}, \tilde{r}_{k+1})}{(\tilde{r}^*_k, \tilde{r}_k)} = \frac{(\tilde{r}^*_0, \tilde{r}_{k+1}^{\mathrm{CGS}})}{(\tilde{r}^*_0, \tilde{r}_k^{\mathrm{CGS}})} = \tilde{\beta}_k^{\mathrm{CGS}}.$    (58) □

The PCGS method is derived from PBiCG using Theorem 9.

Theorem 9. The PCGS method is derived from the linear system's recurrence relations in the PBiCG method, under the property of equivalence between the coefficients $\tilde{\alpha}_k$, $\tilde{\beta}_k$ in PCGS and PBiCG. However, the solution vector $\tilde{x}_k$ is derived from a recurrence relation based on $\tilde{r}_k = \tilde{b} - \tilde{A} \tilde{x}_k$.

Proof. The coefficients $\tilde{\alpha}_k$ and $\tilde{\beta}_k$ in PBiCG and PCGS were derived in Theorem 8. The recurrence relations (29) and (30) for PBiCG are squared to give

$\tilde{R}_k^2(\tilde{\lambda}) = \left( \tilde{R}_{k-1}(\tilde{\lambda}) - \tilde{\alpha}_{k-1} \tilde{\lambda} \tilde{P}_{k-1}(\tilde{\lambda}) \right)^2 = \tilde{R}_{k-1}^2(\tilde{\lambda}) - 2 \tilde{\alpha}_{k-1} \tilde{\lambda} \tilde{P}_{k-1}(\tilde{\lambda}) \tilde{R}_{k-1}(\tilde{\lambda}) + \tilde{\alpha}_{k-1}^2 \tilde{\lambda}^2 \tilde{P}_{k-1}^2(\tilde{\lambda}),$

$\tilde{P}_k^2(\tilde{\lambda}) = \left( \tilde{R}_k(\tilde{\lambda}) + \tilde{\beta}_{k-1} \tilde{P}_{k-1}(\tilde{\lambda}) \right)^2 = \tilde{R}_k^2(\tilde{\lambda}) + 2 \tilde{\beta}_{k-1} \tilde{P}_{k-1}(\tilde{\lambda}) \tilde{R}_k(\tilde{\lambda}) + \tilde{\beta}_{k-1}^2 \tilde{P}_{k-1}^2(\tilde{\lambda}).$

Further, we can apply $\tilde{r}_k = \tilde{R}_k^2(\tilde{A}) \tilde{r}_0$ and $\tilde{p}_k = \tilde{P}_k^2(\tilde{A}) \tilde{r}_0$ from (56), and substitute

$\tilde{u}_k = \tilde{P}_k(\tilde{A}) \tilde{R}_k(\tilde{A}) \tilde{r}_0,$    (59)
$\tilde{q}_k = \tilde{P}_k(\tilde{A}) \tilde{R}_{k+1}(\tilde{A}) \tilde{r}_0.$    (60) □

The following Proposition 10 and Corollary 2 are given as a supplementary explanation under the preconditioned system.

Proposition 10. There exist the following relations:

$\tilde{r}_0^{\mathrm{CGS}} = \tilde{r}_0,$    (61)
$\tilde{r}^{*\,\mathrm{CGS}}_0 = \tilde{r}^*_0,$    (62)

where $\tilde{r}_0^{\mathrm{CGS}}$ is the initial residual vector at $k = 0$ in the iterative part of PCGS, $\tilde{r}^{*\,\mathrm{CGS}}_0$ is the initial shadow residual vector in PCGS, $\tilde{r}_0$ is the initial residual vector, and $\tilde{r}^*_0$ is the initial shadow residual vector.

Proof. Equation (61) follows because (56) for $k = 0$ gives $\tilde{r}_0^{\mathrm{CGS}} = \tilde{R}_0^2(\tilde{A}) \tilde{r}_0 = \tilde{r}_0$. Here, $\tilde{R}_0^2(\tilde{A}) = I$ by (28). Equation (62) is derived as follows. Applying (56) to (54), we obtain

$(\tilde{r}^*_k, \tilde{r}_k) = (\tilde{R}_k(\tilde{A}^{\mathrm{T}}) \tilde{r}^*_0, \tilde{R}_k(\tilde{A}) \tilde{r}_0) = (\tilde{r}^*_0, \tilde{R}_k^2(\tilde{A}) \tilde{r}_0) = (\tilde{r}^*_0, \tilde{r}_k^{\mathrm{CGS}}).$

This equation shows that the inner product of the PCGS on the right is obtained from the inner product of the PBiCG on the left. Therefore, the $\tilde{r}^*_0$ that composes the polynomial $\tilde{R}_k(\tilde{A}^{\mathrm{T}}) \tilde{r}^*_0$ expressing the shadow residual vector of the PBiCG is the same as the $\tilde{r}^*_0$ in PCGS. Hereafter, $\tilde{r}^{*\,\mathrm{CGS}}_0$ can be represented by $\tilde{r}^*_0$ to the extent that the two need not be distinguished. □

Corollary 2. There exists the following relation:

$\tilde{p}_0^{\mathrm{CGS}} = \tilde{r}_0^{\mathrm{CGS}} = \tilde{r}_0,$

where $\tilde{p}_0^{\mathrm{CGS}}$ is the probing direction vector in PCGS, $\tilde{r}_0^{\mathrm{CGS}}$ is the initial residual vector at $k = 0$ in the iterative part of PCGS, and $\tilde{r}_0$ is the initial residual vector.

The preconditioning conversions of $\tilde{r}_k$, $\tilde{p}_k$ and $\tilde{r}^*_0$ are subjected to the same treatment as the BiCG preconditioning conversions of $\tilde{r}_k$, $\tilde{p}_k$ in (40) and $\tilde{r}^*_0$ in (45). Further, by (62), the $\tilde{r}^*_0$ of PCGS is the same as the $\tilde{r}^*_0$ of PBiCG. Then

$\tilde{\alpha}_k = \frac{(\tilde{r}^*_0, \tilde{r}_k)}{(\tilde{r}^*_0, \tilde{A} \tilde{p}_k)} = \frac{(M_R^{-\mathrm{T}} r^*_0, M_L^{-1} r_k)}{(M_R^{-\mathrm{T}} r^*_0, M_L^{-1} A M_R^{-1} M_R p_k)} = \frac{(r^*_0, M^{-1} r_k)}{(r^*_0, M^{-1} A p_k)},$    (63)

$\tilde{\beta}_k = \frac{(\tilde{r}^*_0, \tilde{r}_{k+1})}{(\tilde{r}^*_0, \tilde{r}_k)} = \frac{(r^*_0, M^{-1} r_{k+1})}{(r^*_0, M^{-1} r_k)}.$    (64)

As a consequence, the following improved PCGS algorithm is derived^7.

^7 We apply the superscript "P" to $\alpha_k$, $\beta_k$ and the relevant vectors to denote this algorithm.

Algorithm 7. Improved preconditioned CGS algorithm:
$x_0$ is an initial guess, $r_0 = b - A x_0$, set $\beta_{-1} = 0$,
$(r^*_0, M^{-1} r_0) \neq 0$, e.g., $r^*_0 = M^{-1} r_0$.
For $k = 0, 1, 2, \dots$ until convergence, Do:
    $u_k = M^{-1} r_k + \beta_{k-1} q_{k-1}$,
    $p_k = u_k + \beta_{k-1} (q_{k-1} + \beta_{k-1} p_{k-1})$,
    $\alpha_k = \frac{(r^*_0, M^{-1} r_k)}{(r^*_0, M^{-1} A p_k)}$,
    $q_k = u_k - \alpha_k M^{-1} A p_k$,
    $x_{k+1} = x_k + \alpha_k (u_k + q_k)$,
    $r_{k+1} = r_k - \alpha_k A (u_k + q_k)$,
    $\beta_k = \frac{(r^*_0, M^{-1} r_{k+1})}{(r^*_0, M^{-1} r_k)}$.
End Do

Algorithm 7 can also be derived by applying the following preconditioning conversion to Algorithm 5; here, the preconditioning conversions of $\tilde{u}_k$ and $\tilde{q}_k$ are treated in the same way as the conversion of $\tilde{p}_k$:

$\tilde{A} = M_L^{-1} A M_R^{-1}, \quad \tilde{x} = M_R x, \quad \tilde{b} = M_L^{-1} b, \quad \tilde{r}_k = M_L^{-1} r_k,$
$\tilde{r}^*_0 = M_R^{-\mathrm{T}} r^*_0, \quad \tilde{p}_k = M_R p_k, \quad \tilde{u}_k = M_R u_k, \quad \tilde{q}_k = M_R q_k.$

The number of preconditioning operations in the iterative part of Algorithm 7 is the same as that in Algorithm 6.

4. Numerical Experiments

In this section, we compare the conventional and improved PCGS algorithms numerically. The test problems were generated by building real unsymmetric matrices corresponding to linear systems taken from the Tim Davis collection [12] and the Matrix Market [13]. The RHS vector $b$ of (1) was generated by setting all elements of the exact solution vector $x_{\mathrm{exact}}$ to 1.0 and substituting this into (1). The solution algorithms were implemented using the sequential mode of the Lis numerical computation library (version 1.1.2) [14] in double precision, with the compiler options registered in the Lis Makefile. Furthermore, we set the initial solution to $x_0 = 0$, and we considered an algorithm to have converged when $\|r_{k+1}\|_2 / \|b\|_2 \leq 1.0 \times 10^{-12}$, where $r_k$ is the residual vector in the algorithm and $k$ is the iteration number. The maximum number of iterations was set to the size of the coefficient matrix. The numerical experiments were executed on a DELL Precision T7400 (Intel Xeon CPU, 16 GB RAM) running CentOS (kernel 2.6.18) with the Intel icc 11.0 compiler.

The results using the non-preconditioned CGS are listed in Table 1. The results given by the conventional PCGS and the improved PCGS are listed in Tables 2-5. Each table adopts a different preconditioner in Lis [14]: Jacobi (point Jacobi), ILU(0)^8, SAINV, and Crout ILU. In these tables, significant advantages of one algorithm over the other are emphasized in bold font^9. Additionally, matrix names given in italic font in Table 1 indicate cases that encounter some difficulty. The evolution of the convergence for each preconditioner is shown in Figures 2-5. In this paper, we do not compare the computational speeds of these preconditioners.

In many cases, the results given by the improved PCGS are better than those from the conventional algorithm. We should pay particular attention to the results for the matrices mcca, mcfe and watt_1. In these cases, it appears that the conventional PCGS converges faster with any preconditioner, but the TRE values are worse than those from the improved algorithm. The iteration number for the conventional PCGS is therefore not emphasized in bold font in these instances. The consequences of this anomaly are worth investigating further; this will be the subject of future work.

^8 The value zero in ILU(0) indicates the fill-in level, that is, no fill-in.
^9 Because the principal argument presented in this paper concerns the structure of the algorithm for PCGS based on an improved preconditioning transformation procedure, the time required to obtain numerical solutions and the convergence history graphs (Figures 2-5) are provided as a reference. As mentioned before, there is almost no difference between Algorithm 6 and Algorithm 7 as regards the number of operations in the iteration, which would exert a strong influence on the computation time.
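For comparison with the sketch of Algorithm 6 above, here is a minimal numpy sketch of Algorithm 7 (our code). The coefficients now use $(r^*_0, M^{-1} r_k)$ and $(r^*_0, M^{-1} A p_k)$, matching (63) and (64), while $r_k$ remains the true residual $b - A x_k$; the preconditioner is still applied only twice per iteration.

```python
# A minimal numpy sketch of Algorithm 7 (improved PCGS).
import numpy as np

def pcgs_improved(A, b, M, x0, tol=1e-12, maxiter=None):
    n = len(b)
    maxiter = maxiter or n
    x = x0.copy()
    r = b - A @ x
    r_s = np.linalg.solve(M, r)           # r*_0 = M^{-1} r_0, as in the text
    z = r_s.copy()                        # z = M^{-1} r_0
    rho = r_s @ z                         # (r*_0, M^{-1} r_0)
    p = np.zeros(n); q = np.zeros(n)
    beta = 0.0
    for k in range(maxiter):
        u = z + beta * q                  # u_k = M^{-1} r_k + beta_{k-1} q_{k-1}
        p = u + beta * (q + beta * p)
        MAp = np.linalg.solve(M, A @ p)   # M^{-1} A p_k (1st preconditioning op)
        alpha = rho / (r_s @ MAp)         # (63)
        q = u - alpha * MAp
        x = x + alpha * (u + q)
        r = r - alpha * (A @ (u + q))     # true residual of A x = b
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = np.linalg.solve(M, r)         # M^{-1} r_{k+1} (2nd preconditioning op)
        rho_new = r_s @ z
        beta = rho_new / rho              # (64)
        rho = rho_new
    return x, k + 1
```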

Table 1. Numerical results for a variety of test problems (CGS, Algorithm 2), with columns Matrix, N, NNZ, Iter, TRR, TRE, and Time. In this table, N is the problem size and NNZ is the number of nonzero elements. The items in each column are, from left to right: the number of iterations required to converge (denoted "Iter"); the $\log_{10}$ of the true relative residual 2-norm (denoted "TRR"), calculated as $\|b - A\hat{x}\|_2 / \|b\|_2$, where $\hat{x}$ is the numerical solution; the $\log_{10}$ of the true relative error 2-norm (denoted "TRE"), calculated from the numerical solution and the exact solution as $\|\hat{x} - x_{\mathrm{exact}}\|_2 / \|x_{\mathrm{exact}}\|_2$; and, as a reference, the CPU time (denoted "Time" [s]). Several matrix names are represented in italic font; these describe certain situations, such as "Breakdown", "No convergence", and insufficient accuracy in TRR or TRE.
[Table rows cover 17 test matrices, including arc…, bfwa…, cryg2500, epb…, jpwh_991, mcca, mcfe, memplus, olm1000, olm5000, two pde… matrices, three sherman… matrices, viscoplastic…, and watt_1; the numeric entries are not recoverable from this transcription. Qualitatively: jpwh_991 broke down, and cryg2500, mcca, mcfe, olm1000, olm5000, and two of the sherman matrices did not converge.]

5. Conclusions

In this paper, we have developed an improved PCGS algorithm by applying the procedure for deriving CGS to the BiCG method under a preconditioned system, and we have also presented some mathematical theorems underlying the logic of the deriving process. The improved PCGS does not increase the number of preconditioning operations in the iterative part of the algorithm. Our numerical results established that solutions obtained with the proposed algorithm are superior to those from the conventional algorithm for a variety of preconditioners.

However, the improved algorithm may still break down during the iterative procedure. This is an artefact of certain characteristics of the non-preconditioned BiCG and CGS methods, mainly the operations based on the bi-orthogonality and bi-conjugacy conditions. Nevertheless, this improved logic can be applied to other bi-Lanczos-based algorithms with minimal residual operations.

In future work, we will analyze the mechanisms of the conventional and improved PCGS algorithms and consider other variations of these algorithms. Furthermore, we will consider other settings of the initial shadow residual vector $r^*_0$, beyond $r^*_0 = r_0$, that ensure $(r^*_0, r_0) \neq 0$ holds.

Table 2. Numerical results for a variety of test problems (Jacobi-PCGS): conventional PCGS (Algorithm 6) vs. improved PCGS (Algorithm 7), with columns Iter, TRR, TRE, and Time for each algorithm.
[Numeric entries are not recoverable from this transcription. Qualitatively: cryg2500, olm1000, olm5000, and sherman2 converged under neither algorithm, and jpwh_991 broke down under both.]

Figure 2. Convergence histories of the relative residual 2-norm (Jacobi-PCGS). Red line: conventional PCGS; blue line: improved PCGS.

Table 3. Numerical results for a variety of test problems (ILU(0)-PCGS): conventional PCGS (Algorithm 6) vs. improved PCGS (Algorithm 7), with columns Iter, TRR, TRE, and Time for each algorithm.
[Numeric entries are not recoverable from this transcription. Qualitatively: the conventional PCGS broke down on jpwh_991 and failed to converge on cryg2500, olm1000, and olm5000, whereas the improved PCGS converged in all of these cases.]

Figure 3. Convergence histories of the relative residual 2-norm (ILU(0)-PCGS). Red line: conventional PCGS; blue line: improved PCGS.

Table 4. Numerical results for a variety of test problems (SAINV-PCGS): conventional PCGS (Algorithm 6) vs. improved PCGS (Algorithm 7), with columns Iter, TRR, TRE, and Time for each algorithm.
[Numeric entries are not recoverable from this transcription. Qualitatively: the conventional PCGS broke down on jpwh_991 while the improved PCGS converged on it; cryg2500, olm1000, olm5000, sherman2, and viscoplastic2 converged under neither algorithm.]

Figure 4. Convergence histories of the relative residual 2-norm (SAINV-PCGS). Red line: conventional PCGS; blue line: improved PCGS.

Table 5. Numerical results for a variety of test problems (Crout ILU-PCGS): conventional PCGS (Algorithm 6) vs. improved PCGS (Algorithm 7), with columns Iter, TRR, TRE, and Time for each algorithm.
[Numeric entries are not recoverable from this transcription. Qualitatively: the conventional PCGS broke down on jpwh_991 and failed to converge on cryg2500 and olm1000, whereas the improved PCGS converged in these cases; viscoplastic2 converged under neither algorithm.]

Figure 5. Convergence histories of the relative residual 2-norm (Crout ILU-PCGS). Red line: conventional PCGS; blue line: improved PCGS.

Acknowledgements

This work is partially supported by a Grant-in-Aid for Scientific Research (C) from MEXT, Japan.

References

[1] Sonneveld, P. (1989) CGS, a Fast Lanczos-Type Solver for Nonsymmetric Linear Systems. SIAM Journal on Scientific and Statistical Computing, 10, 36-52.
[2] Fletcher, R. (1976) Conjugate Gradient Methods for Indefinite Systems. In: Watson, G.A., Ed., Numerical Analysis Dundee 1975, Lecture Notes in Mathematics, Vol. 506, Springer-Verlag, Berlin/New York, 73-89.
[3] Lanczos, C. (1952) Solution of Systems of Linear Equations by Minimized Iterations. Journal of Research of the National Bureau of Standards, 49, 33-53.
[4] Van der Vorst, H.A. (1992) Bi-CGSTAB: A Fast and Smoothly Converging Variant of Bi-CG for the Solution of Nonsymmetric Linear Systems. SIAM Journal on Scientific and Statistical Computing, 13, 631-644.
[5] Zhang, S.-L. (1997) GPBi-CG: Generalized Product-Type Methods Based on Bi-CG for Solving Nonsymmetric Linear Systems. SIAM Journal on Scientific Computing, 18, 537-551.
[6] Itoh, S. and Sugihara, M. (2010) Systematic Performance Evaluation of Linear Solvers Using Quality Control Techniques. In: Naono, K., Teranishi, K., Cavazos, J. and Suda, R., Eds., Software Automatic Tuning: From Concepts to State-of-the-Art Results, Springer, New York.
[7] Itoh, S. and Sugihara, M. (2013) Preconditioned Algorithm of the CGS Method Focusing on Its Deriving Process. Transactions of the Japan SIAM. (In Japanese)
[8] Hestenes, M.R. and Stiefel, E. (1952) Methods of Conjugate Gradients for Solving Linear Systems. Journal of Research of the National Bureau of Standards, 49, 409-436.
[9] Barrett, R., Berry, M., Chan, T.F., Demmel, J., Donato, J., Dongarra, J., et al. (1994) Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods. SIAM, Philadelphia.
[10] Van der Vorst, H.A. (2003) Iterative Krylov Methods for Large Linear Systems. Cambridge University Press, Cambridge.
[11] Meurant, G. (2005) Computer Solution of Large Linear Systems. Elsevier, Amsterdam.
[12] Davis, T.A. The University of Florida Sparse Matrix Collection.
[13] Matrix Market.
[14] SSI Project: Lis (Library of Iterative Solvers for Linear Systems).


Research Article Residual Iterative Method for Solving Absolute Value Equations Abstract and Applied Analysis Volume 2012, Article ID 406232, 9 pages doi:10.1155/2012/406232 Research Article Residual Iterative Method for Solving Absolute Value Equations Muhammad Aslam Noor, 1 Javed

More information

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA

SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS. Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 1 SOLVING SPARSE LINEAR SYSTEMS OF EQUATIONS Chao Yang Computational Research Division Lawrence Berkeley National Laboratory Berkeley, CA, USA 2 OUTLINE Sparse matrix storage format Basic factorization

More information

Further experiences with GMRESR

Further experiences with GMRESR Further experiences with GMRESR Report 92-2 C. Vui Technische Universiteit Delft Delft University of Technology Faculteit der Technische Wisunde en Informatica Faculty of Technical Mathematics and Informatics

More information

Lecture 8 Fast Iterative Methods for linear systems of equations

Lecture 8 Fast Iterative Methods for linear systems of equations CGLS Select x 0 C n x = x 0, r = b Ax 0 u = 0, ρ = 1 while r 2 > tol do s = A r ρ = ρ, ρ = s s, β = ρ/ρ u s βu, c = Au σ = c c, α = ρ/σ r r αc x x+αu end while Graig s method Select x 0 C n x = x 0, r

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

EXASCALE COMPUTING. Hiding global synchronization latency in the preconditioned Conjugate Gradient algorithm. P. Ghysels W.

EXASCALE COMPUTING. Hiding global synchronization latency in the preconditioned Conjugate Gradient algorithm. P. Ghysels W. ExaScience Lab Intel Labs Europe EXASCALE COMPUTING Hiding global synchronization latency in the preconditioned Conjugate Gradient algorithm P. Ghysels W. Vanroose Report 12.2012.1, December 2012 This

More information

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS

A SPARSE APPROXIMATE INVERSE PRECONDITIONER FOR NONSYMMETRIC LINEAR SYSTEMS INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, SERIES B Volume 5, Number 1-2, Pages 21 30 c 2014 Institute for Scientific Computing and Information A SPARSE APPROXIMATE INVERSE PRECONDITIONER

More information

Inexact inverse iteration with preconditioning

Inexact inverse iteration with preconditioning Department of Mathematical Sciences Computational Methods with Applications Harrachov, Czech Republic 24th August 2007 (joint work with M. Robbé and M. Sadkane (Brest)) 1 Introduction 2 Preconditioned

More information

QUADERNI. del. Dipartimento di Matematica. Ezio Venturino, Peter R. Graves-Moris & Alessandra De Rossi

QUADERNI. del. Dipartimento di Matematica. Ezio Venturino, Peter R. Graves-Moris & Alessandra De Rossi UNIVERSITÀ DI TORINO QUADERNI del Dipartimento di Matematica Ezio Venturino, Peter R. Graves-Moris & Alessandra De Rossi AN ADAPTATION OF BICGSTAB FOR NONLINEAR BIOLOGICAL SYSTEMS QUADERNO N. 17/2005 Here

More information

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners

Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Solving Symmetric Indefinite Systems with Symmetric Positive Definite Preconditioners Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University of Minnesota 2 Department

More information

Valeria Simoncini and Daniel B. Szyld. Report October 2009

Valeria Simoncini and Daniel B. Szyld. Report October 2009 Interpreting IDR as a Petrov-Galerkin method Valeria Simoncini and Daniel B. Szyld Report 09-10-22 October 2009 This report is available in the World Wide Web at http://www.math.temple.edu/~szyld INTERPRETING

More information

Iterative Linear Solvers

Iterative Linear Solvers Chapter 10 Iterative Linear Solvers In the previous two chapters, we developed strategies for solving a new class of problems involving minimizing a function f ( x) with or without constraints on x. In

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Arnoldi Methods in SLEPc

Arnoldi Methods in SLEPc Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,

More information

CME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication.

CME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication. CME342 Parallel Methods in Numerical Analysis Matrix Computation: Iterative Methods II Outline: CG & its parallelization. Sparse Matrix-vector Multiplication. 1 Basic iterative methods: Ax = b r = b Ax

More information

A short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering

A short course on: Preconditioned Krylov subspace methods. Yousef Saad University of Minnesota Dept. of Computer Science and Engineering A short course on: Preconditioned Krylov subspace methods Yousef Saad University of Minnesota Dept. of Computer Science and Engineering Universite du Littoral, Jan 19-3, 25 Outline Part 1 Introd., discretization

More information

Fast iterative solvers

Fast iterative solvers Solving Ax = b, an overview Utrecht, 15 november 2017 Fast iterative solvers A = A no Good precond. yes flex. precond. yes GCR yes A > 0 yes CG no no GMRES no ill cond. yes SYMMLQ str indef no large im

More information

APPLIED NUMERICAL LINEAR ALGEBRA

APPLIED NUMERICAL LINEAR ALGEBRA APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation

More information

Multilevel low-rank approximation preconditioners Yousef Saad Department of Computer Science and Engineering University of Minnesota

Multilevel low-rank approximation preconditioners Yousef Saad Department of Computer Science and Engineering University of Minnesota Multilevel low-rank approximation preconditioners Yousef Saad Department of Computer Science and Engineering University of Minnesota SIAM CSE Boston - March 1, 2013 First: Joint work with Ruipeng Li Work

More information

Conjugate Gradient (CG) Method

Conjugate Gradient (CG) Method Conjugate Gradient (CG) Method by K. Ozawa 1 Introduction In the series of this lecture, I will introduce the conjugate gradient method, which solves efficiently large scale sparse linear simultaneous

More information

THE RELATIONSHIPS BETWEEN CG, BFGS, AND TWO LIMITED-MEMORY ALGORITHMS

THE RELATIONSHIPS BETWEEN CG, BFGS, AND TWO LIMITED-MEMORY ALGORITHMS Furman University Electronic Journal of Undergraduate Mathematics Volume 12, 5 20, 2007 HE RELAIONSHIPS BEWEEN CG, BFGS, AND WO LIMIED-MEMORY ALGORIHMS ZHIWEI (ONY) QIN Abstract. For the solution of linear

More information