On the Preconditioning of the Block Tridiagonal Linear System of Equations


Davod Khojasteh Salkuyeh
Department of Mathematics, University of Mohaghegh Ardabili, P.O. Box 179, Ardabil, Iran

Abstract. Two algorithms for computing the inverse factors of general tridiagonal and pentadiagonal matrices are obtained. These algorithms are then used to compute a block ILU preconditioner for the block tridiagonal linear system of equations. Some numerical results are given to show the robustness and efficiency of the preconditioner. The performance of the proposed preconditioner is compared with that of a recently proposed method.

AMS Subject Classification: 65F10, 65F50.
Keywords: Krylov subspace methods, inverse factors, block ILU factorization, block tridiagonal matrices.

1 Introduction

Consider the linear system of equations

    Ax = b,    (1)

where the vectors x, b ∈ R^n and the matrix A ∈ R^{n×n} is of the form

    A = \begin{pmatrix}
          D_1 & C_2    &         &         &     \\
          B_2 & D_2    & C_3     &         &     \\
              & \ddots & \ddots  & \ddots  &     \\
              &        & B_{m-1} & D_{m-1} & C_m \\
              &        &         & B_m     & D_m
        \end{pmatrix}.    (2)

Matrices of this type typically arise in the finite difference or finite element discretization of second order partial differential equations. Nowadays, iterative methods based on Krylov subspaces, such as CG [13], GMRES [14], CGS [15] and Bi-CGSTAB [16], are used to solve (1). In order to be effective, these methods are combined with a good preconditioner. More precisely, iterative methods usually involve a second matrix that transforms the coefficient matrix into one with a more favorable spectrum. The transformation matrix is called a preconditioner. If M is a nonsingular matrix which approximates A^{-1} (M ≈ A^{-1}, or MA ≈ I, where I is the identity matrix), the transformed linear system

    MAx = Mb,    (3)

which is called the left-preconditioned system, has the same solution as (1), but the convergence rate of iterative methods applied to (3) may be much higher. There are also right- and split-preconditioned systems [4, 5, 13].
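The effect of (3) is easy to observe numerically. The following minimal NumPy sketch is purely illustrative and not taken from this paper: the diagonal scaling used for M below is an arbitrary simple choice of approximation to A^{-1}. It checks that the left-preconditioned system has the same solution as (1) while MA has a much more favorable spectrum.

```python
import numpy as np

# A small nonsymmetric tridiagonal system with a widely varying diagonal.
n = 200
diag = np.linspace(2.0, 200.0, n)
A = np.diag(diag) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = A @ np.ones(n)

# A crude approximation M of A^{-1}: invert only the diagonal of A.
M = np.diag(1.0 / diag)

# Same solution as Ax = b, but MA is far better conditioned than A.
x = np.linalg.solve(M @ A, M @ b)
print(np.allclose(x, np.ones(n)))                  # True
print(np.linalg.cond(A), np.linalg.cond(M @ A))    # condition number drops sharply
```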

There are many ways to construct a preconditioner [3, 4, 5, 10, 11, 17]; very good surveys of preconditioners can be found in [4, 5, 13]. One way is to compute sparse lower and upper triangular matrices M_L and M_U such that M_L ≈ L and M_U ≈ U, where A = LU is the LU factorization of A. In this case we have A ≈ M = M_L M_U, which is called an incomplete LU factorization (ILU) of A. Here, A^{-1} ≈ M^{-1} = M_U^{-1} M_L^{-1}. Iterative methods based on Krylov subspaces usually involve matrix-vector multiplications; hence, to compute z = M^{-1} r, one can solve Mz = r for z by forward and backward substitution. In this paper we focus our attention on the ILU preconditioning of (1).

This paper is organized as follows. In section 2, we review the block ILU preconditioner for block tridiagonal matrices presented by Koulaei and Toutounian in [9]. Two new algorithms for computing the approximate inverse factors of tridiagonal and pentadiagonal matrices are presented in section 3. Some numerical experiments are given in section 4. Section 5 is devoted to concluding remarks.

2 Koulaei and Toutounian's algorithms for block ILU preconditioning of block tridiagonal matrices

Consider the block tridiagonal matrix A blocked in the form (2). Let m_i be the order of the ith square diagonal block D_i, with \sum_{i=1}^{m} m_i = n. Let also D be the block diagonal matrix consisting of the diagonal blocks D_i, and let L (U) be the block strictly lower (upper) triangular matrix consisting of the sub-diagonal (super-diagonal) blocks B_i (C_i). Then the matrix A can be written as A = D + L + U. Let Σ be the block diagonal matrix with m_i × m_i blocks Σ_i satisfying

    Σ_1 = D_1,    Σ_i = D_i - B_i Σ_{i-1}^{-1} C_i,    i = 2, 3, ..., m.

Then the matrix A may be factorized in the block form [1, 2, 6, 7, 8, 9]

    A = (Σ + L) Σ^{-1} (Σ + U).    (4)

If A is either a symmetric positive definite matrix or a block H-matrix, then the factorization (4) exists [1, 9]. Since the inverse of a sparse matrix is in general full, the matrices Σ_i are in general full, even if B_i, D_i and C_i are sparse. Hence, to compute a sparse approximate factorization of the form (4) for the matrix A, it is enough to use sparse approximations of the inverses of the Σ_i. To do this, let Λ_i be sparse matrices defined by

    Δ_1 = D_1,    Δ_i = D_i - B_i Λ_{i-1} C_i,    Λ_{i-1} ≈ Δ_{i-1}^{-1},    i = 2, ..., m.

Now let

    Δ_i = L_i U_i,    i = 1, ..., m,

be the LU factorizations of the blocks Δ_i. In this case we have

    A ≈ M = (Δ + L) Δ^{-1} (Δ + U),

where Δ is the block diagonal matrix with diagonal blocks Δ_i.
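To make the recursion for the Σ_i concrete, here is a small NumPy sketch (the block sizes and randomly generated blocks are illustrative assumptions, not data from the paper) that builds the Σ_i and verifies the exact factorization (4).

```python
import numpy as np

rng = np.random.default_rng(0)
m, s = 4, 3                       # m diagonal blocks of order s (illustrative)
D = [np.diag(rng.uniform(4, 5, s)) + rng.uniform(-0.5, 0.5, (s, s))
     for _ in range(m)]
B = [None] + [np.diag(rng.uniform(-0.5, 0.5, s)) for _ in range(1, m)]
C = [None] + [np.diag(rng.uniform(-0.5, 0.5, s)) for _ in range(1, m)]

# Sigma_1 = D_1,  Sigma_i = D_i - B_i Sigma_{i-1}^{-1} C_i
S = [D[0]]
for i in range(1, m):
    S.append(D[i] - B[i] @ np.linalg.solve(S[i - 1], C[i]))

# Assemble A, Sigma + L, Sigma and Sigma + U, then check (4).
n = m * s
A = np.zeros((n, n)); SL = np.zeros((n, n))
SU = np.zeros((n, n)); Sd = np.zeros((n, n))
for i in range(m):
    r = slice(i * s, (i + 1) * s)
    A[r, r] = D[i]; SL[r, r] = S[i]; SU[r, r] = S[i]; Sd[r, r] = S[i]
    if i > 0:
        p = slice((i - 1) * s, i * s)
        A[r, p] = B[i]; SL[r, p] = B[i]    # sub-diagonal blocks
        A[p, r] = C[i]; SU[p, r] = C[i]    # super-diagonal blocks

print(np.allclose(SL @ np.linalg.solve(Sd, SU), A))   # True
```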

In factored form, M can be written as

    M = \begin{pmatrix}
          L_1 &        &        &     \\
          V_2 & L_2    &        &     \\
              & \ddots & \ddots &     \\
              &        & V_m    & L_m
        \end{pmatrix}
        \begin{pmatrix}
          U_1 & W_2    &        &     \\
              & U_2    & \ddots &     \\
              &        & \ddots & W_m \\
              &        &        & U_m
        \end{pmatrix},    (5)

where V_i = B_i U_{i-1}^{-1} and W_i = L_{i-1}^{-1} C_i, i = 2, ..., m. In this paper we consider the important case in which the diagonal blocks D_i are tridiagonal and the blocks B_i and C_i are diagonal.

In [9], two recurrence formulas for computing the sparse approximate inverse factors of tridiagonal and pentadiagonal matrices are obtained. We recall that the lower and upper unit triangular matrices W^T and Z, respectively, are said to be the inverse factors of a matrix A if W^T A Z = D, where D is a diagonal matrix. The blocks Δ_i are kept tridiagonal or pentadiagonal, and by means of these algorithms the inverse factors of the blocks Δ_i are computed. The recurrence formula presented in [9] for tridiagonal matrices is as follows. Let

    T = tridiag(b_i, a_i, c_{i+1}) =
        \begin{pmatrix}
          a_1 & c_2    &         &         &     \\
          b_2 & a_2    & c_3     &         &     \\
              & \ddots & \ddots  & \ddots  &     \\
              &        & b_{n-1} & a_{n-1} & c_n \\
              &        &         & b_n     & a_n
        \end{pmatrix},    (6)

and let

    W^T = \begin{pmatrix}
            1         &           &        &             &   \\
            y_{11}    & 1         &        &             &   \\
            y_{21}    & y_{22}    & 1      &             &   \\
            \vdots    & \vdots    & \ddots & \ddots      &   \\
            y_{n-1,1} & y_{n-1,2} & \cdots & y_{n-1,n-1} & 1
          \end{pmatrix},    (7)

    Z = \begin{pmatrix}
          1 & z_{11} & z_{12} & \cdots & z_{1,n-1}   \\
            & 1      & z_{22} & \cdots & z_{2,n-1}   \\
            &        & \ddots & \ddots & \vdots      \\
            &        &        & 1      & z_{n-1,n-1} \\
            &        &        &        & 1
        \end{pmatrix},    (8)

be its inverse factors, i.e., W^T T Z = D = diag(λ_1, ..., λ_n). Then the algorithm runs as follows (KTT: Koulaei and Toutounian's algorithm for Tridiagonal matrices).

Algorithm 1. KTT algorithm
1.  λ_1 := a_1
2.  For k = 1, ..., n-1, Do:
3.    y_{kk} := -b_{k+1}/λ_k and z_{kk} := -c_{k+1}/λ_k
4.    If k > 1 then
5.      For i = k-1, k-2, ..., 1, Do:
6.        y_{ki} := -(b_{i+1}/λ_i) y_{k,i+1} and z_{ik} := -(c_{i+1}/λ_i) z_{i+1,k}
7.      EndDo
8.    EndIf
9.    λ_{k+1} := a_{k+1} - b_{k+1} c_{k+1}/λ_k
10. EndDo

The recurrence formula presented in [9] for pentadiagonal matrices is as follows. Let

    P = pentadiag(e_i, b_i, a_i, c_{i+1}, f_{i+2}) =
        \begin{pmatrix}
          a_1 & c_2    & f_3     &         &         &     \\
          b_2 & a_2    & c_3     & f_4     &         &     \\
          e_3 & b_3    & a_3     & \ddots  & \ddots  &     \\
              & \ddots & \ddots  & \ddots  & \ddots  & f_n \\
              &        & e_{n-1} & b_{n-1} & a_{n-1} & c_n \\
              &        &         & e_n     & b_n     & a_n
        \end{pmatrix},    (9)

and let W^T and Z be its inverse factors, defined as in (7) and (8), respectively. Then the algorithm proposed in [9] for computing the inverse factors of P is Algorithm 2 (KTP: Koulaei and Toutounian's algorithm for Pentadiagonal matrices).

Algorithm 2. KTP algorithm
1.  λ_1 := a_1, γ_1 := b_2 and ξ_1 := c_2
2.  For k = 1, ..., n-1, Do:
3.    y_{kk} := -γ_k/λ_k and z_{kk} := -ξ_k/λ_k
4.    If k > 1, then y_{k,k-1} := -(1/λ_{k-1})(e_{k+1} + γ_{k-1} y_{kk})
5.      and z_{k-1,k} := -(1/λ_{k-1})(f_{k+1} + ξ_{k-1} z_{kk})
6.    If k > 2, then
7.      For i = k-2, ..., 1, Do:
8.        y_{ki} := -(1/λ_i)(γ_i y_{k,i+1} + e_{i+2} y_{k,i+2})
9.        z_{ik} := -(1/λ_i)(ξ_i z_{i+1,k} + f_{i+2} z_{i+2,k})
10.     EndDo
11.   EndIf
12.   If k = 1, then λ_2 := a_2 - b_2 c_2/λ_1,
13.   Else λ_{k+1} := a_{k+1} - γ_k ξ_k/λ_k - e_{k+1} f_{k+1}/λ_{k-1}
14.   γ_{k+1} := b_{k+2} - e_{k+2} ξ_k/λ_k and ξ_{k+1} := c_{k+2} - f_{k+2} γ_k/λ_k
15. EndDo

In this paper we propose other recurrence formulas for tridiagonal and pentadiagonal matrices that are more effective than the KTT and KTP algorithms.
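For reference, here is a NumPy sketch of the KTT recurrence (Algorithm 1), written 0-indexed as it would be programmed; the random tridiagonal test matrix is an assumption used only for the sanity check.

```python
import numpy as np

def ktt(a, b, c):
    """KTT recurrence (Algorithm 1), 0-indexed sketch: returns the unit
    triangular factors WT (lower) and Z (upper) and lam, with
    WT @ T @ Z = diag(lam) for T = tridiag(b, a, c);
    here b[i] = T[i, i-1] and c[i] = T[i-1, i] (b[0], c[0] unused)."""
    n = len(a)
    WT = np.eye(n); Z = np.eye(n)
    lam = np.zeros(n); lam[0] = a[0]
    for k in range(n - 1):
        WT[k + 1, k] = -b[k + 1] / lam[k]          # y_kk
        Z[k, k + 1] = -c[k + 1] / lam[k]           # z_kk
        for i in range(k - 1, -1, -1):             # i = k-1, ..., 1 in the paper
            WT[k + 1, i] = -(b[i + 1] / lam[i]) * WT[k + 1, i + 1]
            Z[i, k + 1] = -(c[i + 1] / lam[i]) * Z[i + 1, k + 1]
        lam[k + 1] = a[k + 1] - b[k + 1] * c[k + 1] / lam[k]
    return WT, Z, lam

rng = np.random.default_rng(1)
n = 6
a = rng.uniform(3, 4, n); b = rng.uniform(-1, 1, n); c = rng.uniform(-1, 1, n)
T = np.diag(a) + np.diag(b[1:], -1) + np.diag(c[1:], 1)
WT, Z, lam = ktt(a, b, c)
print(np.allclose(WT @ T @ Z, np.diag(lam)))       # True
```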

3 New recurrence formulas for computing inverse factors of tridiagonal and pentadiagonal matrices

3.1 Tridiagonal matrices

Let T be the tridiagonal matrix defined by Eq. (6), and let T = LDU be its LDU factorization. For the existence of the LDU factorization we refer the reader to [12]. It is easily seen that L = (l_{ij}) and U^T are lower unit bidiagonal matrices. On the other hand, we have L^{-1} T U^{-1} = D, and as a result W^T = L^{-1} and Z = U^{-1} are the inverse factors of T. Here we obtain the matrix Z; the matrix W^T can be computed in a similar way. Let Z = (z_1, z_2, ..., z_n). Since ZU = I_{n×n},

    (z_1, z_2, ..., z_n)
    \begin{pmatrix}
      1 & u_{12} &        &        &           \\
        & 1      & u_{23} &        &           \\
        &        & \ddots & \ddots &           \\
        &        &        & 1      & u_{n-1,n} \\
        &        &        &        & 1
    \end{pmatrix}
    = (e_1, e_2, ..., e_n).

Therefore z_1 = e_1 and e_{k+1} = u_{k,k+1} z_k + z_{k+1}, i.e.,

    z_{k+1} = e_{k+1} - u_{k,k+1} z_k,    k = 1, ..., n-1.    (10)

We have

    W^T T Z = D  ⟹  T Z = W^{-T} D  ⟹  Z^T T Z = Z^T W^{-T} D.

It is easily seen that Z^T W^{-T} D, and hence Z^T T Z, is lower triangular with diag(Z^T T Z) = D. Therefore

    z_i^T T z_j = d_i if i = j,  and  z_i^T T z_j = 0 if j > i.    (11)

Hence from (10) we have

    0 = z_k^T T z_{k+1} = z_k^T T e_{k+1} - u_{k,k+1} z_k^T T z_k  ⟹  u_{k,k+1} = (z_k^T T e_{k+1})/(z_k^T T z_k).

By letting v_i = z_i^T T, we have

    u_{k,k+1} = (v_k e_{k+1})/(v_k z_k),    (12)

and, from z_{k+1} = e_{k+1} - u_{k,k+1} z_k,

    v_{k+1} = z_{k+1}^T T = e_{k+1}^T T - u_{k,k+1} v_k.    (13)

Lemma 3.1. Let v_k = (v_k^{(1)}, v_k^{(2)}, ..., v_k^{(n)}). Then v_k^{(k+1)} = c_{k+1} and v_k^{(j)} = 0 for j = k+2, ..., n.

Proof. The proof is by induction on k. For k = 1 we have

    v_1 = z_1^T T = e_1^T T = (a_1, c_2, 0, ..., 0),

so the lemma holds for k = 1. Assume the lemma holds for k. Then

    v_{k+1} = e_{k+1}^T T - u_{k,k+1} v_k
            = (0, ..., 0, b_{k+1}, a_{k+1}, c_{k+2}, 0, ..., 0) - u_{k,k+1} (v_k^{(1)}, ..., v_k^{(k)}, c_{k+1}, 0, ..., 0)
            = (-u_{k,k+1} v_k^{(1)}, ..., -u_{k,k+1} v_k^{(k)}, a_{k+1} - u_{k,k+1} c_{k+1}, c_{k+2}, 0, ..., 0).

This shows that the lemma is true for k+1. □

Now, from Eq. (11) and Lemma 3.1 we have v_k e_{k+1} = c_{k+1} and v_k z_k = d_k. Therefore, from (12) we conclude that u_{k,k+1} = c_{k+1}/d_k. We now derive a recurrence formula for computing d_k, k = 1, ..., n. First of all, d_1 = v_1 z_1 = e_1^T T e_1 = a_1. From Eq. (11) we have v_k z_{k+1} = 0. Hence from Eqs. (10) and (13) we deduce

    d_{k+1} = v_{k+1} z_{k+1} = (e_{k+1}^T T - u_{k,k+1} v_k) z_{k+1} = e_{k+1}^T T z_{k+1}
            = e_{k+1}^T T (e_{k+1} - u_{k,k+1} z_k) = a_{k+1} - u_{k,k+1} e_{k+1}^T T z_k.

It is easily verified that e_{k+1}^T T z_k = b_{k+1}. Therefore

    d_{k+1} = a_{k+1} - u_{k,k+1} b_{k+1}.

The above discussion can be summarized as Algorithm 3 (NT: New algorithm for Tridiagonal matrices).

Algorithm 3. NT algorithm
1.  z_{11} := 1, w_{11} := 1, and d_1 := a_1
2.  For k = 1, ..., n-1, Do:
3.    u_{k,k+1} := c_{k+1}/d_k and l_{k+1,k} := b_{k+1}/d_k
4.    z_{k+1,k+1} := 1 and w_{k+1,k+1} := 1
5.    For i = 1, 2, ..., k, Do:
6.      z_{i,k+1} := -u_{k,k+1} z_{i,k}
7.      w_{i,k+1} := -l_{k+1,k} w_{i,k}
8.    EndDo
9.    d_{k+1} := a_{k+1} - u_{k,k+1} b_{k+1}
10. EndDo

It is necessary to mention that in this algorithm z_k = (z_{1,k}, ..., z_{n,k})^T and w_k = (w_{1,k}, ..., w_{n,k})^T are the column vectors of Z and W, respectively. The NT algorithm gives not only the inverse factors of T but also its LDU factorization. Hence, if the LDU factorization of T is available, then the inverse factors of T can be easily computed. In the next subsection we use this fact to compute the inverse factors of a pentadiagonal matrix.

Both the NT and KTT algorithms give the same inverse factors of the matrix T, but the cost of computing them by the NT algorithm is lower: computing the kth (k ≥ 2) column of Z (or W) by the NT algorithm requires k + 2 flops, whereas the KTT algorithm requires 2k + 1. The numerical experiments in the next section confirm this. Another advantage of the NT algorithm over the KTT algorithm is its parallelism: the entries of each column or row of the inverse factors can be computed by the NT algorithm from the previous column or row simultaneously. This is not the case for the KTT algorithm, in which each entry of a column or row of the inverse factors must be computed from the previous entry of the same column or row; without completing the computation of a column or row, the computation of the next column or row cannot be started.
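A NumPy sketch of the NT recurrence (Algorithm 3) illustrates the point just made: each new column of Z and W is obtained from the previous column by a single scaling, so all of its entries can be updated independently. The random test matrix is only an assumed sanity check.

```python
import numpy as np

def nt(a, b, c):
    """NT recurrence (Algorithm 3), 0-indexed sketch: one sweep produces
    the inverse factors W, Z and the pivots d of T = tridiag(b, a, c),
    with W.T @ T @ Z = diag(d); b[i] = T[i, i-1], c[i] = T[i-1, i]."""
    n = len(a)
    Z = np.eye(n); W = np.eye(n)
    d = np.zeros(n); d[0] = a[0]
    for k in range(n - 1):
        u = c[k + 1] / d[k]                   # u_{k,k+1}
        l = b[k + 1] / d[k]                   # l_{k+1,k}
        # z_{k+1} = e_{k+1} - u z_k and w_{k+1} = e_{k+1} - l w_k:
        # all k+1 entries of the new column update independently.
        Z[: k + 1, k + 1] = -u * Z[: k + 1, k]
        W[: k + 1, k + 1] = -l * W[: k + 1, k]
        d[k + 1] = a[k + 1] - u * b[k + 1]
    return W, Z, d

rng = np.random.default_rng(2)
n = 6
a = rng.uniform(3, 4, n); b = rng.uniform(-1, 1, n); c = rng.uniform(-1, 1, n)
T = np.diag(a) + np.diag(b[1:], -1) + np.diag(c[1:], 1)
W, Z, d = nt(a, b, c)
print(np.allclose(W.T @ T @ Z, np.diag(d)))   # True
```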

3.2 Pentadiagonal matrices

Let us consider the pentadiagonal matrix (9), and let P = LDU be its LDU factorization. It can be verified that L and U^T are lower unit triangular matrices of bandwidth 3. Let

    L = \begin{pmatrix}
          1   &     &        &         &     &   \\
          v_2 & 1   &        &         &     &   \\
          ṽ_3 & v_3 & 1      &         &     &   \\
              & ṽ_4 & \ddots & \ddots  &     &   \\
              &     & \ddots & v_{n-1} & 1   &   \\
              &     &        & ṽ_n     & v_n & 1
        \end{pmatrix},
    U = \begin{pmatrix}
          1 & w_2 & w̃_3    &        &         &     \\
            & 1   & w_3    & w̃_4   &         &     \\
            &     & \ddots & \ddots & \ddots  &     \\
            &     &        & 1      & w_{n-1} & w̃_n \\
            &     &        &        & 1       & w_n \\
            &     &        &        &         & 1
        \end{pmatrix},

and D = diag(λ_1, λ_2, ..., λ_n). It is easily verified that the entries of L, U and D can be computed via Algorithm 4.

Algorithm 4. LDU factorization of a pentadiagonal matrix
1.  λ_1 := a_1, v_2 := b_2/λ_1, w_2 := c_2/λ_1
2.  λ_2 := a_2 - b_2 w_2 (= a_2 - c_2 v_2)
3.  For k = 3, ..., n, Do:
4.    v_k := (b_k - e_k w_{k-1})/λ_{k-1} and ṽ_k := e_k/λ_{k-2}
5.    w_k := (c_k - f_k v_{k-1})/λ_{k-1} and w̃_k := f_k/λ_{k-2}
6.    λ_k := a_k - λ_{k-1} v_k w_k - e_k f_k/λ_{k-2}
7.  EndDo

Similar to the method used in the previous subsection, let W^T and Z be the inverse factors of P, i.e., W^T P Z = D = diag(λ_1, ..., λ_n). In this case we have Z = U^{-1}, and hence ZU = I, where I is the identity matrix of order n. By equating the kth (k = 1, ..., n) columns of the two sides of ZU = I we obtain

    z_1 = e_1,    z_2 = e_2 - w_2 z_1,    z_k = e_k - w_k z_{k-1} - w̃_k z_{k-2},    k = 3, ..., n,

where e_j is the jth column of I. In the same manner, if W = (y_1, ..., y_n), then we deduce that

    y_1 = e_1,    y_2 = e_2 - v_2 y_1,    y_k = e_k - v_k y_{k-1} - ṽ_k y_{k-2},    k = 3, ..., n.
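The following NumPy sketch (illustrative; the random pentadiagonal test matrix is an assumption) implements Algorithm 4 together with the three-term column recurrences just derived, and checks that W^T P Z = diag(λ).

```python
import numpy as np

def penta_inverse_factors(a, b, c, e, f):
    """Algorithm 4 plus the column recurrences above, 0-indexed sketch:
    inverse factors W, Z of P = pentadiag(e, b, a, c, f) with
    W.T @ P @ Z = diag(lam). Bands: a[i] = P[i,i], b[i] = P[i,i-1],
    c[i] = P[i-1,i], e[i] = P[i,i-2], f[i] = P[i-2,i]."""
    n = len(a)
    W = np.eye(n); Z = np.eye(n)
    lam = np.zeros(n)
    v = np.zeros(n); w = np.zeros(n)
    vt = np.zeros(n); wt = np.zeros(n)          # second-band factors
    lam[0] = a[0]
    v[1] = b[1] / lam[0]; w[1] = c[1] / lam[0]
    lam[1] = a[1] - c[1] * v[1]
    Z[0, 1] = -w[1]; W[0, 1] = -v[1]
    for k in range(2, n):
        vt[k] = e[k] / lam[k - 2]; wt[k] = f[k] / lam[k - 2]
        v[k] = (b[k] - e[k] * w[k - 1]) / lam[k - 1]
        w[k] = (c[k] - f[k] * v[k - 1]) / lam[k - 1]
        # z_k = e_k - w_k z_{k-1} - wt_k z_{k-2}, likewise for y_k
        Z[:k, k] = -w[k] * Z[:k, k - 1] - wt[k] * Z[:k, k - 2]
        W[:k, k] = -v[k] * W[:k, k - 1] - vt[k] * W[:k, k - 2]
        lam[k] = a[k] - lam[k - 1] * v[k] * w[k] - e[k] * f[k] / lam[k - 2]
    return W, Z, lam

rng = np.random.default_rng(3)
n = 7
a = rng.uniform(5, 6, n)
b, c, e, f = (rng.uniform(-1, 1, n) for _ in range(4))
P = (np.diag(a) + np.diag(b[1:], -1) + np.diag(c[1:], 1)
     + np.diag(e[2:], -2) + np.diag(f[2:], 2))
W, Z, lam = penta_inverse_factors(a, b, c, e, f)
print(np.allclose(W.T @ P @ Z, np.diag(lam)))   # True
```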

Therefore, by the above discussion, we can state a new algorithm for computing the inverse factors of a pentadiagonal matrix as follows (NP: New algorithm for Pentadiagonal matrices).

Algorithm 5. NP algorithm
1.  Let W = (y_{ij}) and Z = (z_{ij}). Set W = Z = I.
2.  λ_1 := a_1, v_2 := b_2/λ_1, w_2 := c_2/λ_1
3.  λ_2 := a_2 - c_2 v_2
4.  z_{12} := -w_2 and y_{12} := -v_2
5.  For k = 3, ..., n, Do:
6.    v_k := (b_k - e_k w_{k-1})/λ_{k-1} and ṽ_k := e_k/λ_{k-2}
7.    w_k := (c_k - f_k v_{k-1})/λ_{k-1} and w̃_k := f_k/λ_{k-2}
8.    z_{k-1,k} := -w_k and y_{k-1,k} := -v_k
9.    z_{k-2,k} := -w_k z_{k-2,k-1} - w̃_k and y_{k-2,k} := -v_k y_{k-2,k-1} - ṽ_k
10.   For i = k-3, k-4, ..., 1, Do:
11.     z_{i,k} := -w_k z_{i,k-1} - w̃_k z_{i,k-2}
12.     y_{i,k} := -v_k y_{i,k-1} - ṽ_k y_{i,k-2}
13.   EndDo
14.   λ_k := a_k - λ_{k-1} v_k w_k - e_k f_k/λ_{k-2}
15. EndDo

In the NP algorithm, computing the kth (k ≥ 3) column of Z (or W) requires 3k + 3 flops, whereas the KTP algorithm requires 4k + 5. Another advantage of the NP algorithm over the KTP algorithm, as mentioned for the NT algorithm, is its potential parallelism.

4 Numerical experiments

All the numerical experiments presented in this section were computed in double precision with some MATLAB codes on a personal computer (Pentium 3, CPU 797 MHz). In all the experiments, the vector b = A(1, 1, ..., 1)^T was taken as the right-hand side of the linear system, and the null vector as the initial guess. The stopping criterion used was

    ‖b - A x_i‖_2 / ‖b‖_2 < 10^{-7}.

The iterative methods used were the CG (in the SPD case), CGS, Bi-CGSTAB and GMRES(10) algorithms.

For the first set of numerical experiments, let

    A_1 = tridiag(-2.71, 2, -1)  and  A_2 = pentadiag(-1, -1, 2.5, -1, -1).

We used the Bi-CGSTAB algorithm in conjunction with the preconditioners obtained by the KTT and NT algorithms for A_1 x = b, and by the KTP and NP algorithms for A_2 x = b, for different values of n. In both cases the preconditioner was taken to be M = Z̃ D^{-1} W̃^T, where Z̃ and W̃ are approximations of the inverse factors Z and W, respectively, with six diagonals, and M was applied as a left preconditioner. The numerical results are given in Tables 1 and 2. In these and the following tables, P-time, It-time, T-time and P-Its stand for the CPU time for constructing the preconditioner, the time required for convergence, T-time = P-time + It-time, and the number of iterations required for convergence, respectively. All timings are in seconds. Moreover, if the number of iterations of an iterative method for a problem is the same with the two different preconditioners, then a single CPU time is recorded for both.
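To illustrate how a preconditioner of this kind is formed, the sketch below runs the NT sweep on a tridiagonal matrix, keeps only s diagonals of each inverse factor, and forms M = Z̃ D^{-1} W̃^T. The matrix entries are stand-ins rather than the exact test data, and dense matrices are used only for clarity; the norm of MA - I shrinks as more diagonals are kept (s = 6 corresponds to the six diagonals used above).

```python
import numpy as np

n = 500
a = np.full(n, 2.0); b = np.full(n, -0.4); c = np.full(n, -1.0)
A = np.diag(a) + np.diag(b[1:], -1) + np.diag(c[1:], 1)

# NT sweep (Algorithm 3): inverse factors Z, W and pivots d of A.
Z = np.eye(n); W = np.eye(n)
d = np.zeros(n); d[0] = a[0]
for k in range(n - 1):
    u = c[k + 1] / d[k]; l = b[k + 1] / d[k]
    Z[:k + 1, k + 1] = -u * Z[:k + 1, k]
    W[:k + 1, k + 1] = -l * W[:k + 1, k]
    d[k + 1] = a[k + 1] - u * b[k + 1]

# Keep s diagonals of each (upper triangular) factor:
# M = Z_s D^{-1} W_s^T approximates A^{-1} better as s grows.
for s in (2, 6, 12):
    Zs = np.triu(Z) - np.triu(Z, s)
    Ws = np.triu(W) - np.triu(W, s)
    M = Zs @ np.diag(1.0 / d) @ Ws.T
    print(s, np.linalg.norm(M @ A - np.eye(n), ord=np.inf))
```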

Table 1: Numerical results of the left-preconditioned Bi-CGSTAB algorithm in conjunction with the KTT and NT algorithms for the matrix A_1 = tridiag(-2.71, 2, -1).

              NT Alg.                            KTT Alg.
    n    P-time  It-time  T-time  P-Its     P-time  It-time  T-time  P-Its

Table 2: Numerical results of the left-preconditioned Bi-CGSTAB algorithm in conjunction with the KTP and NP algorithms for the matrix A_2 = pentadiag(-1, -1, 2.5, -1, -1).

              NP Alg.                            KTP Alg.
    n    P-time  It-time  T-time  P-Its     P-time  It-time  T-time  P-Its

Some observations can be made here. Table 1 shows that the CPU time for constructing the preconditioner by the NT algorithm is always less than that of the KTT algorithm. There is no significant difference between the numbers of iterations of the two methods; the small differences between the numbers of iterations, and consequently the total CPU times, are due to error propagation, which differs between the two algorithms. The numerical results presented in Table 2 show that the NP algorithm gives better results than the KTP algorithm.

For the second set of our numerical experiments we consider the equation

    -Δu = f,    in Ω = (0, 1) × (0, 1).

Discretization of this equation on an (m+1) × (m+1) grid, using second order centered differences for the Laplacian, gives a linear system of equations of order n = m^2 whose coefficient matrix is SPD [12] and of the form (2) with B_i = C_i = -I and D_i = tridiag(-1, 4, -1). For solving a linear system with an SPD coefficient matrix by iterative methods, the CG algorithm is the method of choice. Hence the left-preconditioned CG algorithm was used with the preconditioner computed by (5), and the numerical results are given in Table 3. Since the coefficient matrix is symmetric, only one of the inverse factors was computed.

Table 3: Numerical results for the second set of experiments.

              NT and NP algorithms               KTT and KTP algorithms
    n    p    P-time  It-time  T-time  P-Its     P-time  It-time  T-time  P-Its

The preconditioners were constructed by keeping the blocks Δ_i tridiagonal or pentadiagonal. For keeping the Δ_i tridiagonal we used the KTT and NT algorithms; in the same way, for keeping the Δ_i pentadiagonal we used the KTP and NP algorithms. It is necessary to mention that, for keeping the blocks Δ_i tridiagonal, it is enough to keep Λ_{i-1} tridiagonal. To do this, we first compute a tridiagonal approximate inverse factor Z̃_{i-1} of Δ_{i-1} by using the KTT or NT algorithm, i.e., Z̃_{i-1}^T Δ_{i-1} Z̃_{i-1} ≈ D̃_{i-1}, and then set Λ_{i-1} = Z̃_{i-1} D̃_{i-1}^{-1} Z̃_{i-1}^T. Obviously, the block Λ_{i-1} is then tridiagonal. In the same manner, by using the KTP and NP algorithms one can keep the blocks Δ_i pentadiagonal. The numerical results presented in Table 3 show that the NT and NP algorithms are in general more effective than the KTT and KTP algorithms, respectively. Note that p = 1 stands for the NT and KTT algorithms and p = 2 for the NP and KTP algorithms.

Our third set of test matrices arises from the centered difference discretization of

    -Δu + 2δ_1 u_x + 2δ_2 u_y - δ_3 u = f,    in Ω = [0, 1] × [0, 1],

where δ_1, δ_2 and δ_3 are constants, on a uniform (m+1) × (m+1) grid. This discretization gives a linear system of equations in the n = m^2 unknowns u_{ij} ≈ u(ih, jh):

    -(1 + δ_2 h) u_{i,j-1} - (1 + δ_1 h) u_{i-1,j} + (4 - δ_3 h^2) u_{ij} - (1 - δ_1 h) u_{i+1,j} - (1 - δ_2 h) u_{i,j+1} = h^2 f_{ij},

where f_{ij} = f(ih, jh) and h = 1/(m+1). In our tests we let δ_1 = 2, δ_2 = 4 and δ_3 = 0; in this case it is easily verified that the coefficient matrix is diagonally dominant. We used the left-preconditioned GMRES(10), Bi-CGSTAB and CGS algorithms to solve the preconditioned linear systems for m = 100, 200 and 300. For constructing the preconditioner of the form (5), the blocks Δ_i were kept tridiagonal or pentadiagonal, as in the second set of numerical experiments. The numerical results are given in Table 4; columns 3 and 4 of that table show the results of the iterative methods without preconditioning. As the numerical experiments in Table 4 show, the CPU time for constructing the preconditioner by the NT and NP algorithms is always less than that of the KTT and KTP algorithms, respectively. We also see that the numbers of iterations of the new methods and of the previous ones are comparable.
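For completeness, here is a NumPy sketch assembling the third test matrix from the stencil above (a natural row-by-row ordering of the grid unknowns is assumed); with δ_1 = δ_2 = δ_3 = 0 it reduces to the Poisson matrix of the second set of experiments. The final line checks (weak) diagonal dominance.

```python
import numpy as np

def cd_matrix(m, d1=2.0, d2=4.0, d3=0.0):
    """Centered-difference matrix of -Lap(u) + 2*d1*u_x + 2*d2*u_y - d3*u
    on a uniform (m+1) x (m+1) grid, natural ordering, n = m^2 unknowns."""
    h = 1.0 / (m + 1)
    n = m * m
    A = np.zeros((n, n))
    for j in range(m):                # grid column (y index)
        for i in range(m):            # grid row (x index)
            p = j * m + i             # unknown u_{i+1,j+1}
            A[p, p] = 4.0 - d3 * h * h
            if i > 0:     A[p, p - 1] = -(1.0 + d1 * h)   # u_{i-1,j}
            if i < m - 1: A[p, p + 1] = -(1.0 - d1 * h)   # u_{i+1,j}
            if j > 0:     A[p, p - m] = -(1.0 + d2 * h)   # u_{i,j-1}
            if j < m - 1: A[p, p + m] = -(1.0 - d2 * h)   # u_{i,j+1}
    return A

A = cd_matrix(20)
offdiag = np.abs(A).sum(axis=1) - np.abs(A.diagonal())
print(np.all(np.abs(A.diagonal()) >= offdiag))   # True: diagonally dominant
```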

Table 4: Numerical results for the third set of experiments.

                    Unp.         NT and NP algorithms                KTT and KTP algorithms
    n    Method    Its  Time  p  P-time  It-time  T-time  P-Its     P-time  It-time  T-time  P-Its
         (GMRES(10), Bi-CGSTAB and CGS for each n)

5 Conclusion

In this paper we have presented two recurrence formulas for computing the inverse factors of general tridiagonal and pentadiagonal matrices. By using these algorithms we then constructed an ILU preconditioner for block tridiagonal matrices. Numerical experiments on some test matrices were given to compare our approaches with Koulaei and Toutounian's algorithms. The numerical results show that both methods are robust, but the CPU time of the preconditioner construction phase of the new methods is always less than that of the method of [9]. Moreover, the new methods are suitable for parallel computers.

Acknowledgements

The author would like to thank the reviewers for their valuable comments, which improved the presentation of the paper.

References

[1] O. Axelsson, Iterative Solution Methods, Cambridge University Press, Cambridge, 1996.

[2] O. Axelsson and B. Polman, On approximate factorization methods for block matrices suitable for vector and parallel processors, Linear Algebra Appl., 77 (1986), 3-26.

[3] M. Benzi, Preconditioning techniques for large linear systems: a survey, J. Computational Physics, 182 (2002).

[4] M. Benzi and M. Tuma, A comparative study of sparse approximate inverse preconditioners, Applied Numerical Mathematics, 30 (1999).

[5] M. Benzi and M. Tuma, A sparse approximate inverse preconditioner for nonsymmetric linear systems, SIAM J. Sci. Comput., 19 (1998).

[6] T. F. Chan and P. S. Vassilevski, A framework for block ILU factorizations using block-size reduction, Mathematics of Computation, 64 (1995).

[7] P. Concus, G. H. Golub and G. Meurant, Block preconditioning for the conjugate gradient method, SIAM J. Sci. Stat. Comput., 6 (1985).

[8] D. K. Salkuyeh and F. Toutounian, BILUS: a block version of ILUS factorization, J. Appl. Math. & Computing, 15 (2004), 299-312.

[9] M. H. Koulaei and F. Toutounian, On computing of block ILU preconditioner for block tridiagonal systems, Journal of Computational and Applied Mathematics, 202 (2007).

[10] L. Y. Kolotilina and A. Y. Yeremin, Factorized sparse approximate inverse preconditioning I: theory, SIAM J. Matrix Anal. Appl., 14 (1993).

[11] L. Y. Kolotilina and A. Y. Yeremin, Factorized sparse approximate inverse preconditioning II: solution of 3D FE systems on massively parallel computers, Int. J. High Speed Comput., 7 (1995).

[12] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, 2000.

[13] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing, Boston, 1996.

[14] Y. Saad and M. H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), 856-869.

[15] P. Sonneveld, CGS, a fast Lanczos-type solver for nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 10 (1989), 36-52.

[16] H. A. van der Vorst, Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 13 (1992), 631-644.

[17] J. Zhang, A sparse approximate inverse technique for parallel preconditioning of general sparse matrices, Appl. Math. Comput., 130 (2002).
