
Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495-509

MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX

Jae Heon Yun and Yu Du Han

Abstract. We propose variants of the modified incomplete Cholesky factorization preconditioner for a symmetric positive definite (SPD) matrix. Spectral properties of these preconditioners are discussed, and then numerical results of the preconditioned CG (PCG) method using these preconditioners are provided to show the effectiveness of the preconditioners.

Received February 5; revised April 24.
Mathematics Subject Classification: 65F10, 65F50.
Key words and phrases: modified incomplete Cholesky factorization, P-regular splitting, preconditioner, PCG method.
This research was supported by the Chungbuk National University Basic Science Research Fund BSRI-0-6.

1. Introduction

In this paper, we consider a linear system of equations

(1)    Ax = b,    x, b ∈ R^n,

where A ∈ R^{n×n} is a large sparse symmetric positive definite (SPD) matrix. Since A is a large sparse matrix, direct methods such as Gaussian elimination become prohibitively expensive because of the large number of fill-in elements. As an alternative, the preconditioned CG (PCG) iterative method [2] is widely used for finding an approximate solution of problem (1). Given an initial guess x_0, the PCG method iteratively computes new approximations x_k to the true solution x* = A^{-1}b. The iterate x_k is accepted as a solution if the residual r_k = b - Ax_k satisfies ||r_k||_2 / ||b||_2 ≤ tol. The convergence rate and robustness of the PCG method largely depend on how well the preconditioner approximates A.
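For concreteness, the following is a minimal sketch of the PCG iteration with this stopping criterion, written in Python with NumPy. It is our illustration rather than code from the paper, and apply_prec, which applies the action z = M^{-1} r of a generic preconditioner M, is an assumed interface.

    import numpy as np

    def pcg(A, b, apply_prec, tol=1e-8, maxit=1000):
        """Preconditioned CG for SPD A; apply_prec(r) returns M^{-1} r."""
        x = np.zeros_like(b, dtype=float)   # initial guess x_0 = 0
        r = b - A @ x                       # residual r_0 = b - A x_0
        z = apply_prec(r)
        p = z.copy()
        rz = r @ z
        bnorm = np.linalg.norm(b)
        for k in range(maxit):
            if np.linalg.norm(r) <= tol * bnorm:   # ||r_k||_2 / ||b||_2 <= tol
                return x, k
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            z = apply_prec(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, maxit

With apply_prec = lambda r: r this reduces to unpreconditioned CG; the preconditioners constructed in Section 3 enter only through this callback.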

One of the most powerful preconditioning methods in terms of reducing the number of iterations is the incomplete Cholesky factorization method studied by Meijerink and van der Vorst [4] and Yun [8]. It was shown in [4] that for every zero pattern set, there exists an incomplete Cholesky factorization of a symmetric M-matrix. However, the incomplete Cholesky factorization for a symmetric positive definite matrix does not always exist. To this end, Robert [7] introduced the modified incomplete Cholesky factorization, which exists for any symmetric positive definite matrix. The purpose of this paper is to study variants of the modified incomplete Cholesky factorization for a symmetric positive definite matrix.

This paper is organized as follows. In Section 2, we consider some properties of P-regular splittings. In Section 3, we consider variants of the modified incomplete Cholesky factorization. In Section 4, we provide numerical results. In Section 5, some conclusions are drawn.

2. Properties of P-regular splittings

For two matrices A = (a_ij) and B = (b_ij), A ≥ B denotes a_ij ≥ b_ij for all i and j, and A > B denotes a_ij > b_ij for all i and j. We say that a real matrix A is monotone if Ax ≥ 0 implies x ≥ 0. It is well known that A is monotone if and only if A^{-1} ≥ 0. A real square matrix A = (a_ij) is called an M-matrix if a_ij ≤ 0 for i ≠ j and A is monotone. A real matrix A is positive definite (positive semi-definite) if Re(x^H A x) > 0 (Re(x^H A x) ≥ 0) for every nonzero vector x ∈ C^n, or equivalently x^T A x > 0 (x^T A x ≥ 0) for every nonzero vector x ∈ R^n, where Re(x^H A x) refers to the real part of x^H A x. The spectral radius ρ(A) of a matrix A is ρ(A) = max{ |λ| : λ ∈ σ(A) }, where σ(A) denotes the spectrum of A, that is, the set of eigenvalues of A.

A representation A = M - N is called a splitting of A when M is nonsingular. A splitting A = M - N is a convergent splitting of A if ρ(M^{-1}N) < 1. A splitting A = M - N is a P-regular splitting if M^T + N is positive definite. Obviously, the matrix M^T + N is positive definite if and only if M + N^T is positive definite. Throughout the paper, we use the notation M ≻ 0 (M ⪰ 0) for a matrix to be symmetric positive definite (symmetric positive semi-definite), and M_1 ⪰ M_2 (M_1 ≻ M_2) denotes that M_1 - M_2 ⪰ 0 (M_1 - M_2 ≻ 0).

Theorem 2.1 ([3]). Let A be a symmetric matrix and let A = M - N be a P-regular splitting of A. Then ρ(M^{-1}N) < 1 if and only if A is positive definite.

Theorem 2.2 ([5]). Let A ≻ 0 and let A = M_1 - N_1 = M_2 - N_2 be two splittings of A. If 0 ⪯ N_1 ⪯ N_2, then ρ(M_1^{-1}N_1) ≤ ρ(M_2^{-1}N_2) < 1. If 0 ≺ N_1 ≺ N_2, then ρ(M_1^{-1}N_1) < ρ(M_2^{-1}N_2) < 1.

Theorem 2.3 ([6]). Let A be a symmetric matrix and let A = M - N be a splitting of A. If M ≻ 0 and ρ(M^{-1}N) < 1, then A is positive definite and A = M - N is a P-regular splitting of A.

Theorem 2.4. Let A ≻ 0 and let A = M - N be a splitting of A with N ⪰ 0. Then ρ(M^{-1}N) < 1, 0 < λ_{M^{-1}A} ≤ 1, and 0 ≤ λ_{M^{-1}N} < 1, where λ_{M^{-1}A} and λ_{M^{-1}N} denote eigenvalues of M^{-1}A and M^{-1}N, respectively.

Proof. Suppose that λ is an eigenvalue of M^{-1}N. Then there exists a nonzero vector x such that M^{-1}Nx = λx. Thus Nx = λMx, and hence x^H N x = λ x^H M x = λ x^H A x + λ x^H N x. Since A ≻ 0, λ ≠ 1 (otherwise Nx = Mx would give Ax = 0), so x^H N x = (λ / (1 - λ)) x^H A x. Since x^H A x > 0 and x^H N x ≥ 0, λ / (1 - λ) ≥ 0. It follows that 0 ≤ λ < 1. Hence 0 ≤ λ_{M^{-1}N} < 1 is proved. From M^{-1}A = I - M^{-1}N, λ_{M^{-1}A} = 1 - λ_{M^{-1}N}. Thus 0 < λ_{M^{-1}A} ≤ 1. For the proof of ρ(M^{-1}N) < 1, N ⪰ 0 implies that M = A + N ≻ 0 and that M^T + N = A + 2N is positive definite. Hence A = M - N is a P-regular splitting of A, and ρ(M^{-1}N) < 1 is obtained from Theorem 2.1.

Theorem 2.5. Suppose that A ≻ 0. Let A = M - N be a splitting of A with M symmetric. Then ρ(M^{-1}N) < 1 if and only if A = M - N is a P-regular splitting.

Proof. Suppose that A = M - N is a P-regular splitting. Then ρ(M^{-1}N) < 1 is immediately obtained from Theorem 2.1. For the proof of the other direction, suppose that ρ(M^{-1}N) < 1. From the identity M^{-1}A = I - M^{-1}N, it can be seen that every eigenvalue of M^{-1}A has positive real part. Notice that M^{-1}A is similar to A^{1/2} M^{-1} A^{1/2}. Since A^{1/2} M^{-1} A^{1/2} is symmetric, M^{-1}A has real eigenvalues. It follows that M^{-1}A has positive eigenvalues. Hence A^{1/2} M^{-1} A^{1/2} ≻ 0, from which M ≻ 0. From Theorem 2.3, one obtains that A = M - N is a P-regular splitting.
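As a quick numerical illustration of Theorem 2.4 (our own check, not part of the paper; the test matrices A and N are arbitrary assumptions), the following Python snippet builds A ≻ 0 and a splitting A = M - N with N ⪰ 0 and verifies the stated eigenvalue bounds:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    G = rng.standard_normal((n, n))
    A = G @ G.T + n * np.eye(n)          # assumed test matrix, A symmetric positive definite
    C = rng.standard_normal((n, 5))
    N = C @ C.T                          # N symmetric positive semi-definite
    M = A + N                            # so that A = M - N

    lam_A = np.linalg.eigvals(np.linalg.solve(M, A)).real
    lam_N = np.linalg.eigvals(np.linalg.solve(M, N)).real
    print(lam_N.max() < 1 and lam_N.min() >= -1e-12)     # 0 <= lambda(M^{-1}N) < 1
    print(lam_A.min() > 0 and lam_A.max() <= 1 + 1e-12)  # 0 < lambda(M^{-1}A) <= 1

Since ρ(M^{-1}N) is the largest of the (real, nonnegative) eigenvalues of M^{-1}N here, the first check also confirms ρ(M^{-1}N) < 1.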

Theorem 2.6 ([1]). Let A ≻ 0 and let A = M - N be a splitting of A with M symmetric. Let B_k = Σ_{i=0}^{k-1} (M^{-1}N)^i M^{-1} for k ≥ 1. Then

(a) B_k is symmetric.
(b) For odd k, B_k is positive definite if and only if M is positive definite.
(c) For even k, B_k is positive definite if and only if M + N is positive definite.

If the B_k defined in Theorem 2.6 is symmetric positive definite, then it can be used as a preconditioner for the PCG method, and it is called the k-step polynomial preconditioner corresponding to the splitting A = M - N. It is easy to see that lim_{k→∞} B_k = A^{-1} if ρ(M^{-1}N) < 1.

It is well known that the convergence rate of the PCG method with a preconditioner M for solving Ax = b depends on how small λ_max(M^{-1}A) / λ_min(M^{-1}A) is, where λ_max(M^{-1}A) and λ_min(M^{-1}A) denote the maximum and minimum eigenvalues of M^{-1}A, respectively. More specifically, if A = M - N is a splitting of A with M SPD, then the convergence rate of the PCG method with the preconditioner M may depend on how small ρ(M^{-1}N) is. Thus, if ρ(M^{-1}N) < 1 and M ≻ 0 is easily invertible, then M may be considered a good preconditioner for the PCG method.
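To make B_k concrete, here is a small Python sketch (our illustration; the test matrices are arbitrary assumptions) that forms B_k explicitly and checks both its positive definiteness and the convergence B_k → A^{-1}. In practice B_k is never formed; its action is applied iteratively, as in Algorithm 3.3 below.

    import numpy as np

    def poly_prec(M, N, k):
        """B_k = sum_{i=0}^{k-1} (M^{-1}N)^i M^{-1}, formed densely for illustration."""
        Minv = np.linalg.inv(M)
        T = Minv @ N                          # iteration matrix M^{-1}N
        B = np.zeros_like(M)
        P = np.eye(M.shape[0])
        for _ in range(k):
            B += P @ Minv                     # accumulate (M^{-1}N)^i M^{-1}
            P = P @ T
        return B

    rng = np.random.default_rng(1)
    n = 30
    G = rng.standard_normal((n, n))
    A = G @ G.T + n * np.eye(n)               # assumed test matrix, A > 0
    N = np.diag(rng.random(n))                # N >= 0
    M = A + N                                 # splitting A = M - N
    for k in (1, 2, 5, 20):
        B = poly_prec(M, N, k)
        spd = np.all(np.linalg.eigvalsh((B + B.T) / 2) > 0)
        print(k, spd, np.linalg.norm(B @ A - np.eye(n)))

Since B_k A = I - (M^{-1}N)^k, the printed error decays like ρ(M^{-1}N)^k, which is exactly the sense in which a small ρ(M^{-1}N) makes M a good preconditioner.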

3. Modified incomplete Cholesky factorizations

Let P_n = {(i, j) : i ≠ j, 1 ≤ i, j ≤ n}. It was shown in [4] that for every symmetric zero pattern set Z ⊂ P_n (i.e., (i, j) ∈ Z implies (j, i) ∈ Z), there exists an incomplete Cholesky factorization of a symmetric M-matrix A such that A = U^T D U - R is a regular splitting of A, where U is an upper triangular M-matrix and D is a diagonal matrix whose ith diagonal element is the inverse of the ith diagonal element of U. The matrix U^T D U in this factorization is clearly positive definite since D is positive definite. Thus, the U^T D U obtained from the incomplete Cholesky factorization of a symmetric M-matrix can be used as a preconditioner for the PCG method. However, the next example shows that the incomplete Cholesky factorization does not always exist for a symmetric positive definite matrix A which is not an M-matrix. That is, there may exist a diagonal matrix D with a nonpositive diagonal element such that A = U^T D U - R.

Example 3.1. There is a 4 × 4 symmetric matrix A, all of whose eigenvalues are positive (so that A ≻ 0), which is not an M-matrix and for which, with the zero pattern set Z = {(1, 3), (2, 4), (3, 1), (4, 2)}, the incomplete factorization of A such that A = U^T D U - R produces a diagonal matrix D containing the negative diagonal element -25. Since D contains a negative diagonal element, U^T D U is not positive definite and hence cannot be used as a preconditioner of the PCG iterative method. Also note that A = U^T D U - R is then not a regular splitting of A.

The incomplete factorization of A proposed in [6] such that A = U^T D U - R requires that all diagonal elements of the matrix R be zero. By suppressing this requirement, Robert [7] proposed a modified incomplete Cholesky factorization which always exists for any symmetric positive definite matrix A.

Theorem 3.2 ([7]). Let A ≻ 0. Then for every symmetric zero pattern set Z ⊂ P_n there exists a modified incomplete Cholesky factorization of A such that A = U^T D U - R, where U is an upper triangular matrix with positive diagonal elements, D is a diagonal matrix whose ith diagonal element is the inverse of the ith diagonal element of U, and R is positive semi-definite.

The modified incomplete Cholesky factorization (MICF) in [7] is described in a theoretical way, so an MICF algorithm which can be easily implemented is provided below. Let a_ij denote the (i, j) component of A and let Z be a zero pattern set. Since A is symmetric, only the lower triangular part of A is used and updated.

Algorithm 3.1 (MICF).
    d_1 = 1/a_11
    For i = 2, n
        For j = 1, i-1
            For k = i, n
                a_ki = a_ki - a_kj a_ij d_j
        For k = i+1, n
            If (k, i) ∈ Z, then
                a_ii = a_ii + |a_ki|
                a_kk = a_kk + |a_ki|
                a_ki = 0
            End if
        d_i = 1/a_ii
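A dense Python transcription of Algorithm 3.1 is sketched below. It is our own rendering, not code from the paper; Z is assumed to be a set of 0-based (row, column) index pairs, and the factors are returned so that U^T D U can be formed as the MICF preconditioner.

    import numpy as np

    def micf(A, Z):
        """Algorithm 3.1 (MICF), dense version.

        Returns (Lt, d) where Lt is the lower triangular factor U^T and
        D = diag(d), so that M = Lt @ np.diag(d) @ Lt.T is U^T D U."""
        a = np.tril(np.asarray(A, dtype=float))  # only the lower part is used
        n = a.shape[0]
        d = np.zeros(n)
        d[0] = 1.0 / a[0, 0]
        for i in range(1, n):
            for j in range(i):                   # update column i by columns j < i
                a[i:, i] -= a[i:, j] * a[i, j] * d[j]
            for k in range(i + 1, n):            # enforce the zero pattern Z
                if (k, i) in Z:
                    a[i, i] += abs(a[k, i])      # diagonal compensation keeps R >= 0
                    a[k, k] += abs(a[k, i])
                    a[k, i] = 0.0
            d[i] = 1.0 / a[i, i]
        return a, d

With Z empty this reduces to a complete U^T D U factorization of A, so the sketch can be sanity-checked by verifying Lt @ np.diag(d) @ Lt.T ≈ A.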

From Algorithm 3.1, the modified incomplete Cholesky factorization A = U^T D U - R of the matrix A given in Example 3.1 is obtained with all diagonal elements of D positive. Notice that the lower triangular matrix U^T and the diagonal matrix D are obtained directly from Algorithm 3.1, while the matrix R is computed from the identity R = U^T D U - A. Since we are only interested in the matrix U^T D U, which can be used as a preconditioner for the PCG method, the algorithms in this paper do not contain the computational step for the matrix R. From now on, the U^T D U ≻ 0 obtained from Algorithm 3.1 is called the MICF preconditioner.

Below we propose a variant of the modified incomplete Cholesky factorization (VMICF) algorithm which is more efficient than Algorithm 3.1 (see Tables 1 and 3). This variant also yields a splitting A = Û^T D̂ Û - R̂ such that Û^T D̂ Û ≻ 0 and R̂ ⪰ 0. From now on, the Û^T D̂ Û ≻ 0 is called the VMICF preconditioner.

Algorithm 3.2 (VMICF).
    For i = 1, n
        d_i = 1/a_ii
        For k = i+1, n
            t_k = a_ki d_i
        For j = i+1, n
            For k = j, n
                If (k, j) ∈ Z, then
                    a_kj = a_kj - t_k a_ji
                    a_kk = a_kk + |a_kj|
                    a_jj = a_jj + |a_kj|
                    a_kj = 0
                Else
                    a_kj = a_kj - t_k a_ji
                End if
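A matching dense sketch of Algorithm 3.2, again our own rendering with the same interface and caveats as the micf() sketch above:

    import numpy as np

    def vmicf(A, Z):
        """Algorithm 3.2 (VMICF), dense version; same interface as micf()."""
        a = np.tril(np.asarray(A, dtype=float))
        n = a.shape[0]
        d = np.zeros(n)
        for i in range(n):
            d[i] = 1.0 / a[i, i]
            t = a[i + 1:, i] * d[i]            # t_k = a_ki d_i for k = i+1..n
            for j in range(i + 1, n):          # right-looking update of trailing columns
                for k in range(j, n):
                    a[k, j] -= t[k - i - 1] * a[j, i]
                    if (k, j) in Z:            # drop the entry, compensate diagonals
                        a[k, k] += abs(a[k, j])
                        a[j, j] += abs(a[k, j])
                        a[k, j] = 0.0
        return a, d

Without dropping (Z empty) the two sketches compute the same factorization, only in a different order (left-looking for MICF, right-looking for VMICF); with a nonempty Z they drop and compensate at different moments, which is why the resulting splittings can differ, as Example 3.3 below illustrates.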

The main difference between Algorithms 3.1 and 3.2 is as follows. At the ith step, Algorithm 3.1 updates the ith column using the first (i-1) columns and then modifies diagonal elements according to the zero pattern set Z, while Algorithm 3.2 updates the last (n-i) columns using the ith column and then modifies diagonal elements according to the zero pattern set Z. Here we provide an example which shows the difference between Algorithms 3.1 and 3.2.

Example 3.3. Let A be a 4 × 4 irreducibly diagonally dominant symmetric matrix with positive diagonal elements, so that A ≻ 0, and let Z = {(1, 2), (2, 1), (3, 4), (4, 3)}. From Algorithm 3.1 one obtains a splitting A = U^T D U - R, while from Algorithm 3.2 one obtains a different splitting A = Û^T D̂ Û - R̂; the spectral condition numbers κ_2(A) = ||A||_2 ||A^{-1}||_2, κ_2(U^T D U), and κ_2(Û^T D̂ Û) can be computed directly for this example. It is easy to show that 0 ⪯ R ⪯ R̂, so from Theorem 2.2 one obtains ρ((U^T D U)^{-1} R) ≤ ρ((Û^T D̂ Û)^{-1} R̂) < 1. Note that direct calculations yield ρ((U^T D U)^{-1} R) = 1/3 and ρ((Û^T D̂ Û)^{-1} R̂) = 1/2.

Suppose that A = U^T D U - R and A = Û^T D̂ Û - R̂ are splittings of A obtained from Algorithms 3.1 and 3.2, respectively. Since U^T D U ≻ 0, Û^T D̂ Û ≻ 0, R ⪰ 0 and R̂ ⪰ 0, Theorem 2.5 gives ρ((U^T D U)^{-1} R) < 1 and ρ((Û^T D̂ Û)^{-1} R̂) < 1. In addition, Theorem 2.6 implies that for any k ≥ 1,

M_k = Σ_{i=0}^{k-1} ((U^T D U)^{-1} R)^i (U^T D U)^{-1} and M̂_k = Σ_{i=0}^{k-1} ((Û^T D̂ Û)^{-1} R̂)^i (Û^T D̂ Û)^{-1}

are symmetric positive definite. Hence M_k and M̂_k can be used as preconditioners for the PCG method; they are called the k-step MICF and k-step VMICF polynomial preconditioners, respectively. Notice that for k = 1, M_k and M̂_k reduce to the standard MICF and VMICF preconditioners, respectively. It can be shown that 0 ⪯ R ⪯ R̂ and thus ρ((U^T D U)^{-1} R) ≤ ρ((Û^T D̂ Û)^{-1} R̂). From this point of view, the MICF preconditioner obtained from Algorithm 3.1 may provide a better convergence rate of the PCG than the VMICF preconditioner obtained from Algorithm 3.2.

Numerical experiments in Section 4 show that the convergence rate of the PCG with the MICF preconditioner is as good as or slightly better than that with the VMICF preconditioner, but the construction time for the MICF preconditioner is much larger than that for the VMICF preconditioner (see Tables 1 and 3). So, the VMICF is a more efficient preconditioner than the MICF on the Cray T3E supercomputer.

Next we consider other variants of the modified incomplete Cholesky factorization preconditioner. Let A be a symmetric positive definite 2 × 2 block matrix of the form

(2)    A = [A_1, C; C^T, A_2],

where A_1 and A_2 are square matrices. Since A ≻ 0, A_1 ≻ 0 and A_2 ≻ 0. Thus, there exist modified incomplete Cholesky factorizations for A_1 and A_2.

Lemma 3.4. Suppose that A is a symmetric positive definite matrix of the form (2). Then for every nonzero vector x = (x_1 ; x_2),

x_1^T A_1 x_1 + x_2^T A_2 x_2 + 2 x_1^T C x_2 > 0 and x_1^T A_1 x_1 + x_2^T A_2 x_2 - 2 x_1^T C x_2 > 0.

Proof. Since A is a symmetric positive definite matrix of the form (2), for any nonzero vector x = (x_1 ; x_2),

(x_1^T, -x_2^T) A (x_1 ; -x_2) = x_1^T A_1 x_1 - 2 x_1^T C x_2 + x_2^T A_2 x_2 > 0,

which is the second inequality. Since x^T A x = (x_1^T, x_2^T) A (x_1 ; x_2) = x_1^T A_1 x_1 + 2 x_1^T C x_2 + x_2^T A_2 x_2 > 0, the first inequality is also obtained.

Theorem 3.5. Suppose that A is a symmetric positive definite matrix of the form (2). Let A_1 = U_1^T D_1 U_1 - R_1 and A_2 = U_2^T D_2 U_2 - R_2 be splittings obtained from Algorithm 3.1 or 3.2. Let

U = [U_1, 0; 0, U_2],    D = [D_1, 0; 0, D_2],    Ū = [U_1, (U_1^T D_1)^{-1} C; 0, U_2].

Let M = U^T D U, M̄ = Ū^T D Ū, N = M - A, and N̄ = M̄ - A. Then A = M - N and A = M̄ - N̄ are P-regular splittings of A.

Proof. Note that

M + N = 2M - A = [U_1^T D_1 U_1 + R_1, -C; -C^T, U_2^T D_2 U_2 + R_2].

Let x = (x_1 ; x_2) be a nonzero vector. Then

x^T (M + N) x = x_1^T (U_1^T D_1 U_1 + R_1) x_1 + x_2^T (U_2^T D_2 U_2 + R_2) x_2 - 2 x_1^T C x_2
             = 2 x_1^T R_1 x_1 + 2 x_2^T R_2 x_2 + x_1^T A_1 x_1 + x_2^T A_2 x_2 - 2 x_1^T C x_2.

Since R_1 and R_2 are positive semi-definite, Lemma 3.4 implies that x^T (M + N) x > 0. Hence A = M - N is a P-regular splitting of A.

Next we will show that A = M̄ - N̄ is a P-regular splitting of A. Note that

M̄ + N̄ = 2M̄ - A = [2 U_1^T D_1 U_1 - A_1, C; C^T, 2 U_2^T D_2 U_2 + 2 C^T (U_1^T D_1)^{-T} D_1 (U_1^T D_1)^{-1} C - A_2]
       = [A_1 + 2R_1, C; C^T, 2 C^T (U_1^T D_1)^{-T} D_1 (U_1^T D_1)^{-1} C + A_2 + 2R_2].

If we let V = (U_1^T D_1)^{-1} C, then

M̄ + N̄ = [A_1 + 2R_1, C; C^T, 2 V^T D_1 V + A_2 + 2R_2].

Thus for each nonzero vector x, one obtains

x^T (M̄ + N̄) x = x_1^T (A_1 + 2R_1) x_1 + 2 x_1^T C x_2 + x_2^T (A_2 + 2R_2) x_2 + 2 x_2^T V^T D_1 V x_2
             = x_1^T A_1 x_1 + 2 x_1^T C x_2 + x_2^T A_2 x_2 + 2 x_1^T R_1 x_1 + 2 x_2^T R_2 x_2 + 2 x_2^T V^T D_1 V x_2.

By assumption, R_1 ⪰ 0, R_2 ⪰ 0 and D_1 ≻ 0. Hence Lemma 3.4 implies that x^T (M̄ + N̄) x > 0. It follows that A = M̄ - N̄ is a P-regular splitting of A.
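The construction in Theorem 3.5 is easy to reproduce on top of the micf() sketch given after Algorithm 3.1. The following illustration is ours (with an arbitrary assumed test matrix, and block_precs an assumed helper name); it forms both M and M̄ and checks the P-regularity conclusion numerically.

    import numpy as np

    def block_precs(A1, A2, C, Z1=frozenset(), Z2=frozenset()):
        """Form M = U^T D U and Mbar = Ubar^T D Ubar of Theorem 3.5 from
        per-block MICF factors (dense illustration using micf() above)."""
        Lt1, d1 = micf(A1, Z1)                    # A1 = U1^T D1 U1 - R1
        Lt2, d2 = micf(A2, Z2)                    # A2 = U2^T D2 U2 - R2
        M11 = Lt1 @ np.diag(d1) @ Lt1.T
        M22 = Lt2 @ np.diag(d2) @ Lt2.T
        Zero = np.zeros_like(C)
        M = np.block([[M11, Zero], [Zero.T, M22]])  # block diagonal preconditioner
        V = np.linalg.solve(Lt1 @ np.diag(d1), C)   # V = (U1^T D1)^{-1} C
        Mbar = np.block([[M11, C], [C.T, M22 + V.T @ np.diag(d1) @ V]])
        return M, Mbar

    rng = np.random.default_rng(2)
    G = rng.standard_normal((8, 8))
    A = G @ G.T + 8 * np.eye(8)                   # assumed SPD test matrix
    A1, A2, C = A[:4, :4], A[4:, 4:], A[:4, 4:]
    M, Mbar = block_precs(A1, A2, C)
    print(np.linalg.eigvalsh(2 * M - A).min() > 0)     # M + N = 2M - A > 0
    print(np.linalg.eigvalsh(2 * Mbar - A).min() > 0)  # Mbar + Nbar > 0

Note that the off-diagonal block of M̄ works out to U_1^T D_1 (U_1^T D_1)^{-1} C = C, so M̄ reproduces the off-diagonal blocks of A exactly, while M simply discards them.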

Since the two splittings A = M - N and A = M̄ - N̄ introduced in Theorem 3.5 are P-regular splittings, Theorem 2.1 gives ρ(M^{-1}N) < 1 and ρ(M̄^{-1}N̄) < 1. Hence, M ≻ 0 and M̄ ≻ 0 can be used as preconditioners for the PCG method. Theorem 2.6 also shows that

B_k = Σ_{i=0}^{k-1} (M^{-1}N)^i M^{-1} and B̄_k = Σ_{i=0}^{k-1} (M̄^{-1}N̄)^i M̄^{-1}

are symmetric positive definite, and thus they can be used as k-step polynomial preconditioners for the PCG method. Below we provide an efficient algorithm for computing B_k r, which is one of the basic time-consuming computational kernels of the PCG, where r is a given vector.

Algorithm 3.3 (PRESOL(k)).
    x_0 = 0
    For i = 1, k
        x_i = x_{i-1} + M^{-1} (r - A x_{i-1})

From Algorithm 3.3, it can be seen that x_k = B_k r (indeed, x_i = M^{-1}N x_{i-1} + M^{-1} r, so x_k = Σ_{i=0}^{k-1} (M^{-1}N)^i M^{-1} r). Notice that Algorithm 3.3 computes B_k r without using the matrix N. If we replace M in Algorithm 3.3 with M̄, then B̄_k r is computed. Since the U_i's and D_i's in Theorem 3.5 can be computed independently for different i, M and M̄ can be constructed in parallel based on matrix blocks. The idea provided in Theorem 3.5 can be easily extended to a general m × m block-tridiagonal matrix A which is symmetric positive definite.
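A direct Python rendering of Algorithm 3.3 (our transcription; solve_M stands for the assumed action of M^{-1}, e.g. triangular solves with the factors of U^T D U):

    import numpy as np

    def presol(A, r, solve_M, k):
        """Algorithm 3.3 (PRESOL(k)): returns x_k = B_k r without forming N,
        where solve_M(v) applies M^{-1} to a vector v."""
        x = np.zeros_like(r)                    # x_0 = 0
        for _ in range(k):
            x = x + solve_M(r - A @ x)          # x_i = x_{i-1} + M^{-1}(r - A x_{i-1})
        return x

Passing a routine that applies M̄^{-1} instead of M^{-1} yields B̄_k r, as noted above; inside PCG, presol plays the role of apply_prec in the earlier sketch.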

4. Numerical results

The test PDE problem considered in this paper is

(3)    -(a(x, y) u_x(x, y))_x - (b(x, y) u_y(x, y))_y + c(x, y) u(x, y) = f(x, y)

with a(x, y) > 0, b(x, y) > 0, c(x, y) ≥ 0, and (x, y) ∈ Ω, where Ω is a square region, and with suitable boundary conditions on ∂Ω, the boundary of Ω. All numerical results have been obtained using the PCG method. The MICF and VMICF preconditioners used for the numerical experiments were obtained without fill-in. In all cases, the PCG was started with the initial vector x_0 = 0 and was stopped when ||r_i||_2 / ||b||_2 < 10^{-8}, where r_i refers to the ith residual b - A x_i. All numerical experiments have been carried out using 64-bit arithmetic on the Cray T3E at the KISTI supercomputing center. In Tables 1 to 4, ITER stands for the number of iterations needed to satisfy the stopping criterion, Prec stands for the preconditioner used, and P-time and I-time stand for the CPU time required for constructing the preconditioners and the CPU time required by the PCG with these preconditioners, respectively. All timing results are reported in seconds using one processor of the Cray T3E. For all test problems, only the matrix A, which is constructed from the five-point finite difference discretization of the given PDE, is of importance, so the right-hand side vector b is created artificially. Hence, the right-hand side function f in Examples 4.1 and 4.2 is not relevant.

Example 4.1. We consider Equation (3) over the square region Ω = (0, 1) × (0, 1) with a(x, y) = b(x, y) = cos x, c(x, y) = 0, and Dirichlet boundary condition u = 0 on ∂Ω. That is, the following PDE problem is considered:

-∇·(cos x ∇u) = f in Ω,    u = 0 on ∂Ω.

We have used two uniform meshes with ∆x = ∆y, a coarser one and a finer one, which lead to two test matrices of different orders n, where ∆x and ∆y refer to the mesh sizes in the x-direction and y-direction, respectively. We have used both the natural row-wise ordering and the Red-Black ordering of the mesh grid. The matrix A generated from this discretization is a symmetric M-matrix and hence it is also an SPD matrix. To generate an SPD matrix which is not an M-matrix, all off-diagonal elements of the matrix A are made positive by taking their absolute values. Once the matrix A is constructed, the right-hand side vector b is chosen so that the exact solution is the discretization of 10xy(1-x)(1-y)e^{x^{4.5}}. Numerical results for this problem are listed in Tables 1 and 2.

Example 4.2. We consider Equation (3) over the square region Ω = (0, 1) × (0, 1) with a(x, y) = b(x, y) = 1, c(x, y) = 0, and Dirichlet boundary condition u = 0 on ∂Ω. That is, the following PDE problem is considered:

-∆u = f in Ω,    u = 0 on ∂Ω.

We have used the same uniform meshes as in Example 4.1. Once the matrix A is constructed as in Example 4.1, the right-hand side vector b is chosen so that b = A[1, 1, ..., 1]^T. Numerical results for this problem are listed in Tables 3 and 4.
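As a reference for how such test matrices can be assembled, here is a compact sketch (our own, assuming the constant-coefficient case of Example 4.2, homogeneous Dirichlet conditions eliminated from the system, and the helper names five_point_matrix and spd_non_m_matrix) using SciPy sparse matrices, together with the absolute-value trick described above for producing an SPD matrix that is not an M-matrix:

    import scipy.sparse as sp

    def five_point_matrix(m):
        """Five-point finite difference matrix for -Δu on an m x m interior
        grid of the unit square with u = 0 on the boundary (an M-matrix)."""
        I = sp.identity(m, format="csr")
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m), format="csr")
        return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

    def spd_non_m_matrix(A):
        """SPD test matrix that is not an M-matrix: keep the diagonal and
        take absolute values of all off-diagonal entries."""
        D = sp.diags(A.diagonal())
        return (D + abs(A - D)).tocsr()

For the natural row-wise ordering this is the matrix directly; the Red-Black ordering corresponds to a symmetric permutation P A P^T of the same matrix.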

Table 1. Numerical results of the PCG for Example 4.1: ITER, P-Time, and I-Time for the MICF and VMICF preconditioners, for the natural and Red-Black orderings and for both the M-matrix and the SPD (non-M-matrix) cases, on the two problem sizes.

Table 2. Numerical results of the PCG with the k-step VMICF polynomial preconditioner for Example 4.1: ITER, P-Time, and I-Time for each ordering and matrix type on the two problem sizes.

5. Conclusions

The construction time of the MICF preconditioner is much larger than that of the VMICF preconditioner, while the convergence rate of the PCG with the MICF preconditioner is as good as or only slightly better than that with the VMICF preconditioner (see Tables 1 and 3). So, the VMICF preconditioner is recommended for use on the Cray T3E.

Table 3. Numerical results of the PCG for Example 4.2: ITER, P-Time, and I-Time for the MICF and VMICF preconditioners, for the natural and Red-Black orderings and for both the M-matrix and the SPD (non-M-matrix) cases, on the two problem sizes.

Table 4. Numerical results of the PCG with the k-step VMICF polynomial preconditioner for Example 4.2: ITER, P-Time, and I-Time for each ordering and matrix type on the two problem sizes.

The PCG with the k-step VMICF polynomial preconditioner requires fewer iterations as k increases (see Tables 2 and 4), but its total computing time increases as k increases. This is because the reduction in the number of iterations is not enough to offset the increase in the computing time required for PRESOL(k). Notice that the 1-step VMICF polynomial preconditioner is the same as the VMICF preconditioner.

Hence, the k-step VMICF polynomial preconditioner is recommended on the Cray T3E only for k = 1. That is, the PCG with the VMICF preconditioner performs efficiently on the Cray T3E.

References

[1] L. Adams, m-step preconditioned conjugate gradient methods, SIAM J. Sci. Stat. Comput. 6 (1985).
[2] M. R. Hestenes and E. L. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Nat. Bur. Stand. 49 (1952).
[3] F. John, Advanced Numerical Analysis, Lecture Notes, Department of Mathematics, New York University, 1956.
[4] J. A. Meijerink and H. A. van der Vorst, An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix, Math. Comp. 31 (1977).
[5] R. Nabben, A note on comparison theorems for splittings and multisplittings of hermitian positive definite matrices, Linear Algebra Appl. 233 (1996).
[6] J. M. Ortega, Introduction to Parallel and Vector Solution of Linear Systems, Plenum Press, New York, 1988.
[7] Y. Robert, Regular incomplete factorizations of real positive definite matrices, Linear Algebra Appl. 48 (1982).
[8] J. H. Yun, Block incomplete factorization preconditioners for a symmetric block-tridiagonal M-matrix, J. Comput. Appl. Math. 94 (1998).

Jae Heon Yun, Department of Mathematics, College of Natural Sciences, Chungbuk National University, Cheongju, Korea
E-mail: gmjae@cbucc.chungbuk.ac.kr

Yu Du Han, Department of Mathematics, College of Natural Sciences, Chungbuk National University, Cheongju, Korea
