BLOCK ILU PRECONDITIONED ITERATIVE METHODS FOR REDUCED LINEAR SYSTEMS

CANADIAN APPLIED MATHEMATICS QUARTERLY
Volume 12, Number 3, Fall 2004

BLOCK ILU PRECONDITIONED ITERATIVE METHODS FOR REDUCED LINEAR SYSTEMS

N. GUESSOUS AND O. SOUHAR

ABSTRACT. This paper deals with the iterative solution of a large sparse linear system when the matrix is in block-partitioned form arising from discrete elliptic partial differential equations (PDEs). We introduce a block red-black coloring technique to reorder the unknowns, based on the standard red-black coloring. We propose preconditioning techniques which combine block incomplete LU factorization with the Schur complement technique, where the reduced system is preconditioned with different preconditioners. Some theoretical results to control the residual norm are presented.

1. Introduction. In this paper, we address the problem of developing preconditioners for the solution of large linear systems

(1.1) Au = b,

where A is an n x n real sparse matrix. The above system is assumed to have the following block form:

(1.2) \begin{pmatrix} B & F \\ E & C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}.

This block structure arises from the standard and block red-black colorings described in Sections 2 and 3, or from the domain decomposition method, which is based on decomposing the physical domain Ω into a number of subdomains Ω_i, 1 ≤ i ≤ k, for each of which an independent incomplete factorization can be computed and applied in parallel [7-9, 25, 31]. The main idea is to obtain more parallelism at the subdomain level rather than at the grid point level. Another approach to increase

AMS subject classification: 65F10, 65N06, 65N22.
Keywords: Sparse linear systems, ordering methods, block incomplete LU factorization, Schur complement, Krylov subspace.
Copyright (c) Applied Mathematics Institute, University of Alberta.

the degree of parallelism is to use several hyperplane wavefronts to sweep through the grid. For example, van der Vorst [37] considered starting wavefronts from each of the four corners in a 2D rectangular grid. Earlier, Meurant [26] and van der Vorst [38] used a similar idea in which the grid is divided into equal parts (e.g., halves or quadrants) and each part is ordered in its own natural ordering.

A broad class of preconditioners is based on incomplete LU factorizations, which were originally developed for M-matrices arising from the discretization of simple elliptic partial differential equations. For more details on this class of preconditioners, the interested reader can refer to [1, 2, 10, 11, 20, 21, 23, 24, 27-31, 34-36, 39]. The matrix A can be written as

(1.3) A = \begin{pmatrix} B & 0 \\ E & S \end{pmatrix} \begin{pmatrix} I & B^{-1}F \\ 0 & I \end{pmatrix},

where S = C - EB^{-1}F is the Schur complement matrix. The matrix S does not need to be formed explicitly in order to solve the reduced system by an iterative method. For example, if a Krylov subspace method without preconditioning is used, then the only operations that are required with the matrix S are matrix-vector operations w = Sv. Such operations can be performed as follows:

(a) Compute v' = Fv.
(b) Solve Bz = v'.
(c) Compute w = Cv - Ez.

The above procedure involves only matrix-vector multiplications and one linear system solution (b) with B. Note that the linear system with B must be solved exactly, by using a direct solution technique or by an iterative technique with a high level of accuracy.

It is well known that the ordering of the unknowns can have a significant effect on the convergence of a preconditioned iterative method and on its implementation on a parallel computer [13, 14], especially in the application of ILU preconditioners. Indeed, consider a graph containing a central node whose only edges join this node to the (n - 1) remaining ones, and consider the two extreme orderings: the first gives the number 1 to the central node, and the second gives it the last number n.
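A minimal Python sketch of steps (a)-(c) above, assuming SciPy sparse blocks B, E, F, C as in (1.2); the use of a sparse direct factorization (splu) for the exact solve with B is an illustrative choice, not a prescription of the paper.

import scipy.sparse.linalg as spla

def schur_matvec_factory(B, E, F, C):
    """Return a function computing w = (C - E B^{-1} F) v without forming S."""
    B_lu = spla.splu(B.tocsc())           # exact (direct) factorization of B
    def matvec(v):
        vp = F @ v                        # (a) v' = F v
        z = B_lu.solve(vp)                # (b) solve B z = v'
        return C @ v - E @ z              # (c) w = C v - E z
    return matvec

Wrapped in a scipy.sparse.linalg.LinearOperator, such a routine lets a Krylov subspace method use S without ever forming it.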

Let A_1 and A_2 be the adjacency matrices associated with these two orderings, respectively; they can be written as

A_1 = \begin{pmatrix} \times & \times & \cdots & \times \\ \times & \times & & \\ \vdots & & \ddots & \\ \times & & & \times \end{pmatrix}, \qquad A_2 = \begin{pmatrix} \times & & & \times \\ & \ddots & & \vdots \\ & & \times & \times \\ \times & \cdots & \times & \times \end{pmatrix}.

Let A_1 = L_1U_1 and A_2 = L_2U_2 be the factorizations obtained by the Gaussian elimination process; A_1 and A_2 have the same number of nonzero entries. With the first ordering, the structure L_1 + U_1 has all its coefficients different from zero, whereas with the second ordering L_2 + U_2 has the same structure as A_2. Furthermore, the factors produced by the ILU(0) factorization of A_2 are exactly L_2 and U_2!

This paper is organized as follows. Section 2 gives a brief overview of the standard red-black coloring technique. Section 3 describes an extension of the previous section, named block red-black coloring. Section 4 provides some preconditioners arising from block ILU factorization and some theoretical results to control the residual norm. Section 5 gives some numerical experiments and a few concluding remarks.

2. Standard red-black coloring. The problem addressed by multicoloring is to determine a coloring of the nodes of the adjacency graph of a matrix such that any two adjacent nodes have different colors. For a simple two-dimensional finite difference grid (a five-point operator), we can easily separate the grid points into two sets, red and black, so that the nodes in one set are adjacent only to nodes in the other set, as shown in Figure 1 for a 12 x 8 grid (white nodes correspond to red nodes). Assume that the unknowns are labeled by listing the red unknowns first

together, followed by the black ones; we obtain a system of the form

(2.1) \begin{pmatrix} D_1 & F \\ E & D_2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix},

where D_1 and D_2 are diagonal matrices. One way to exploit the red-black ordering is to use the standard SSOR or ILU(0) preconditioners for solving (2.1). The preconditioning operations are highly parallel. For example, the linear system that arises from the forward solve in SSOR will have the form

(2.2) \begin{pmatrix} D_1 & 0 \\ E & D_2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}.

Figure 1. A standard red-black coloring of a 12 x 8 grid.

This system can be solved by performing the following sequence of operations:

1. Solve D_1 x = f.
2. Compute g' = g - Ex.
3. Solve D_2 y = g'.

This consists of two diagonal scalings (operations 1 and 3) and a sparse matrix-vector product (operation 2). The situation is identical with the ILU(0) preconditioning. A simple look at the structure of the ILU factors reveals that many more elements are dropped with the red-black ordering than with the natural ordering.
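A minimal sketch of the forward solve (2.2), assuming D_1 and D_2 are stored as one-dimensional arrays of their diagonal entries and E as a SciPy sparse matrix (these storage choices are assumptions for illustration):

def forward_solve_red_black(D1, E, D2, f, g):
    """Solve [[D1, 0], [E, D2]] [x; y] = [f; g] with diagonal D1, D2 given as 1-D arrays."""
    x = f / D1                  # operation 1: diagonal scaling
    g_prime = g - E @ x         # operation 2: sparse matrix-vector product
    y = g_prime / D2            # operation 3: diagonal scaling
    return x, y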

The result is that the number of iterations needed to achieve convergence can be much higher with the red-black ordering than with the natural ordering [30].

The second method that has been used in connection with the red-black ordering solves the reduced system, which involves only the black unknowns. Eliminating the red unknowns from (2.1) results in the reduced system

(2.3) (D_2 - E D_1^{-1} F) y = g - E D_1^{-1} f.

Note that this new system is again a sparse linear system with about half as many unknowns. In addition, it has been observed that for easy problems the reduced system can often be solved efficiently with only diagonal preconditioning. The computation of the reduced system is a highly parallel and inexpensive process. We can also use a multilevel ordering of the grid points. For more details on this technique and on the order of the condition number, see Brand and Heinemann [4] and [12], where a repeated red-black (RRB) ordering is proposed.

3. Block red-black coloring. We now consider an extension of the red-black coloring, which consists of transforming the system (1.1) into the following block form:

(3.1) \begin{pmatrix} B & F \\ E & C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}.

Here B and C are block diagonal matrices, where each block is of size 2. A block of size 2 is obtained by coupling a node j with its neighbor (j + 1) of the same color (red), followed by two nodes of the other color (black). Assume again that the unknowns are labeled by listing the red unknowns first, followed by the black ones. This block two-by-two coloring is illustrated in Figure 2 for a 12 x 8 grid (white nodes correspond to red nodes).

Figure 2. A block red-black coloring of a 12 x 8 grid.

This leads to 2 x 2 submatrices of the form

(3.2) \begin{pmatrix} b_{j,j} & b_{j,j+1} \\ b_{j+1,j} & b_{j+1,j+1} \end{pmatrix}, \qquad \begin{pmatrix} c_{j,j} & c_{j,j+1} \\ c_{j+1,j} & c_{j+1,j+1} \end{pmatrix}.

A recursive block version of this technique, the repeated block red-black (RBRB) ordering, can be applied as in [4, 12]. This technique starts with all the red nodes, followed by half of the remaining black nodes, corresponding to alternating horizontal grid lines. It can easily be checked that the remaining black nodes again form a regular grid, with double the mesh spacing of the original one. The process can then be repeated recursively until a sufficiently coarse level is reached, on which the usual natural ordering can be applied. Figure 3 shows an example of an RBRB ordering.

Figure 3. An RBRB ordering, stopping after one coarse level.
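A minimal sketch of one way to build a block red-black permutation for an nx x ny grid: horizontally adjacent points are paired into two-node blocks and the blocks are colored in a checkerboard fashion, so that same-colored couplings of the five-point stencil stay inside a pair. The exact pattern of Figure 2 is an assumption here, reconstructed from the description above, and nx is assumed even so that every pair is complete.

import numpy as np

def block_red_black_permutation(nx, ny):
    """Return the permutation listing the red unknowns first, then the black ones."""
    idx = np.arange(nx * ny).reshape(ny, nx)            # natural (row-wise) numbering
    i, j = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    block_color = (i + j // 2) % 2                      # color of each horizontal pair
    red = idx[block_color == 0]                         # paired nodes stay consecutive
    black = idx[block_color == 1]
    return np.concatenate([red, black])

Applying perm = block_red_black_permutation(nx, ny) to a five-point matrix A in natural ordering via A[perm, :][:, perm] yields a system of the form (3.1) whose blocks B and C are block diagonal with 2 x 2 blocks.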

4. Preconditioning techniques and preconditioned iterations. Some results to control the preconditioned residual norm are given in this section. First, a preconditioner is developed to solve the linear system in block form:

(4.1) \begin{pmatrix} B & F \\ E & C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix},

where B and C are square matrices. Furthermore, we assume that B is nonsingular. The block lower-upper (LU) triangular factorization of A can be written as

(4.2) \begin{pmatrix} B & F \\ E & C \end{pmatrix} = \begin{pmatrix} B & 0 \\ E & S \end{pmatrix} \begin{pmatrix} I & B^{-1}F \\ 0 & I \end{pmatrix},

where S is the Schur complement matrix

(4.3) S = C - EB^{-1}F.

To compute the solution (x, y)^T of the linear system (4.1), we therefore need to perform the following steps:

(a) Compute g' = g - EB^{-1}f.
(b) Solve Sy = g'.
(c) Solve Bx = f - Fy.

If the cost of computing the exact inverse of B is prohibitive, we first approximate B^{-1} by a sparse matrix Y using sparse approximate inverse techniques (see [3, 18, 19]) in order to approximate the Schur complement S by a sparse matrix. The matrix \tilde{S} = C - EYF is sparse, and solving (b) with it is more economical, since \tilde{S} is sparser than S. Note that we can also use an approximate factorization of S:

(4.4) S = \tilde{S} + R_{\tilde{S}} = L_{\tilde{S}} U_{\tilde{S}} + R_{\tilde{S}},

which defines a preconditioner \tilde{S} = L_{\tilde{S}} U_{\tilde{S}} for the Schur complement matrix; here R_{\tilde{S}} is the error matrix.
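A minimal sketch of steps (a)-(c) above, assuming an exact sparse factorization of B and a caller-supplied routine solve_S for the Schur complement system; solve_S is a placeholder assumption and may stand for a direct solver or an iterative one.

import scipy.sparse.linalg as spla

def block_solve(B, F, E, f, g, solve_S):
    """Solve [[B, F], [E, C]] [x; y] = [f; g] via the Schur complement."""
    B_lu = spla.splu(B.tocsc())
    g_prime = g - E @ B_lu.solve(f)       # (a) g' = g - E B^{-1} f
    y = solve_S(g_prime)                  # (b) solve S y = g'
    x = B_lu.solve(f - F @ y)             # (c) solve B x = f - F y
    return x, y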

4.1. Left preconditioning ILU. In the following we suppose that the inverse of B is computed exactly. Then the preconditioner can be written as

(4.5) M = \begin{pmatrix} B & 0 \\ E & \tilde{S} \end{pmatrix} \begin{pmatrix} I & B^{-1}F \\ 0 & I \end{pmatrix} = L_M U_M,

with \tilde{S} being some approximation to the Schur complement S, for example in the form of an approximate LU factorization of S. When C is a nonsingular block diagonal matrix, we can choose \tilde{S} = C, since the only inverse that occurs is then that of a block diagonal matrix with small blocks, or \tilde{S} = I, i.e., no preconditioning of S. We can easily show that the preconditioned system has the particular form

(4.6) M^{-1}A = \begin{pmatrix} I & B^{-1}F \\ 0 & I \end{pmatrix}^{-1} \begin{pmatrix} I & 0 \\ 0 & \tilde{S}^{-1}S \end{pmatrix} \begin{pmatrix} I & B^{-1}F \\ 0 & I \end{pmatrix} = U_M^{-1} D U_M.

Then the quality of the preconditioner M depends on that of \tilde{S}, and the concentration of the spectrum of M^{-1}A depends only on that of \tilde{S}^{-1}S, since σ(M^{-1}A) = {1} ∪ σ(\tilde{S}^{-1}S), where σ(X) denotes the spectrum of a matrix X.

Consider now a GMRES iteration to solve the preconditioned system

(4.7) M^{-1}Au = M^{-1}b.

Given an initial guess u_0 = (x_0, y_0)^T, we denote the preconditioned initial residual by

r_0 = M^{-1}(b - Au_0) = \begin{pmatrix} z_0 \\ s_0 \end{pmatrix}.

It is useful to recall that the GMRES algorithm finds a vector u in the affine subspace u_0 + K_m, where K_m is a Krylov subspace:

K_m = K_m(M^{-1}A, r_0) = Span\{ r_0, (M^{-1}A)r_0, \dots, (M^{-1}A)^{m-1} r_0 \}.

GMRES minimizes \|M^{-1}(b - Au)\|_2 over all u in u_0 + K_m, u = u_0 + w, w ∈ K_m. Then the preconditioned residual takes the form

r = M^{-1}(b - Au) = M^{-1}(b - Au_0 - Aw) = r_0 - M^{-1}Aw.
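A minimal sketch of applying M^{-1} from (4.5) to a residual vector, i.e., solving M(z, s)^T = (r_1, r_2)^T by a block forward and backward substitution; solve_B and solve_S_tilde are assumed callables for systems with B and \tilde{S}.

def apply_M_inverse(F, E, solve_B, solve_S_tilde, r1, r2):
    """Return (z, s) with (z; s) = M^{-1} (r1; r2) for M = L_M U_M of (4.5)."""
    # forward substitution with L_M = [[B, 0], [E, S~]]
    t = solve_B(r1)
    s = solve_S_tilde(r2 - E @ t)
    # backward substitution with U_M = [[I, B^{-1} F], [0, I]]
    z = t - solve_B(F @ s)
    return z, s

Such a routine, wrapped in a scipy.sparse.linalg.LinearOperator, can be passed as the preconditioner to a Krylov solver such as scipy.sparse.linalg.gmres.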

Theorem 4.1. Assume that the reduced system \tilde{S}^{-1}Sy = \tilde{S}^{-1}(g - EB^{-1}f) is solved with GMRES starting with an arbitrary initial guess y_0, and let s_m = \tilde{S}^{-1}(g' - Sy_m) be the preconditioned residual obtained at the m-th step. Then the preconditioned residual vector r_m obtained at the m-th step of GMRES for solving the block system (4.1), preconditioned by M of (4.5) and with an initial guess u_0 = (x_0, y_0)^T satisfying Bx_0 = f - Fy_0, satisfies the inequality

\|r_m\|_2 ≤ (1 + \|B^{-1}F\|_2) \|s_m\|_2.

In particular, if s_m = 0, then r_m = 0.

Proof. Let s_m = p_m(\tilde{S}^{-1}S)s_0 be the preconditioned residual obtained at the m-th step of the GMRES algorithm starting with an arbitrary initial guess y_0 to solve the preconditioned reduced system

(4.8) \tilde{S}^{-1}Sy = \tilde{S}^{-1}g',

where g' = g - EB^{-1}f and p_m is the m-th residual polynomial, which minimizes \|p(\tilde{S}^{-1}S)s_0\|_2 among all polynomials p of degree m satisfying the constraint p(0) = 1. Let

p_m(t) = \sum_{i=0}^{m} α_i t^i.

The preconditioned residual r_m obtained at the m-th step of the GMRES algorithm for solving the preconditioned unreduced system minimizes \|p(M^{-1}A)r_0\|_2 over all polynomials p of degree m which are consistent, i.e., p(0) = 1, and satisfies

\|r_m\|_2 ≤ \|p_m(M^{-1}A)r_0\|_2 = \left\| U_M^{-1} \begin{pmatrix} p_m(I) & 0 \\ 0 & p_m(\tilde{S}^{-1}S) \end{pmatrix} U_M \begin{pmatrix} z_0 \\ s_0 \end{pmatrix} \right\|_2 = \left\| U_M^{-1} \begin{pmatrix} p_m(1)\,δ \\ p_m(\tilde{S}^{-1}S)\,s_0 \end{pmatrix} \right\|_2,

where

(4.9) δ = z_0 + B^{-1}F s_0.

Due to the particular structure of B, namely a block diagonal matrix, and if we suppose that its blocks are of small size, we can solve systems with B by a direct solution technique. Under these conditions we solve

(4.10) Bx_0 = f - Fy_0

exactly, and we have δ = 0. Then

(4.11) \|r_m\|_2 ≤ \|U_M^{-1}\|_2 \|p_m(\tilde{S}^{-1}S)s_0\|_2,

that is,

\|r_m\|_2 ≤ \|U_M^{-1}\|_2 \|s_m\|_2 ≤ (1 + \|B^{-1}F\|_2) \|s_m\|_2.

Remark 4.1. In general, systems involving B may be solved in many ways, depending on their difficulty and on what we know about B. If B is known to be well conditioned, then triangular solves with incomplete LU factors may be sufficient. For more difficult B matrices, the incomplete factors may be used as a preconditioner for an inner iterative process for B. Suppose now that an iterative method is used to solve (4.10). Let x_0 be the exact solution and x_0^0 an initial guess; then at the k-th step the error is e_0^k = x_0 - x_0^k. We have

Mr_0 = b - Au_0 = \begin{pmatrix} f - Bx_0^k - Fy_0 \\ g - Ex_0^k - Cy_0 \end{pmatrix} = \begin{pmatrix} Be_0^k \\ g - Ex_0^k - Cy_0 \end{pmatrix}.

Also, we get

Mr_0 = \begin{pmatrix} Bz_0 + Fs_0 \\ Ez_0 + EB^{-1}Fs_0 + \tilde{S}s_0 \end{pmatrix}.

From the first components we have B(z_0 + B^{-1}Fs_0) = Be_0^k; since B is nonsingular,

(4.12) δ = e_0^k,

and we have

(4.13) \|r_m\|_2 ≤ \|U_M^{-1}\|_2 \left( p_m(1)^2 \|e_0^k\|_2^2 + \|s_m\|_2^2 \right)^{1/2}.
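A minimal sketch of the inner solve discussed in Remark 4.1: the system Bx_0 = f - Fy_0 is solved iteratively, with incomplete LU factors of B as the inner preconditioner. The use of SciPy's spilu and an inner GMRES is an illustrative assumption.

import scipy.sparse.linalg as spla

def solve_B_inner(B, rhs, drop_tol=1e-3):
    """Inner iterative solve of B x = rhs, preconditioned by incomplete factors of B."""
    ilu = spla.spilu(B.tocsc(), drop_tol=drop_tol)
    M = spla.LinearOperator(B.shape, matvec=ilu.solve)
    x, info = spla.gmres(B, rhs, M=M)     # info == 0 signals convergence
    return x, info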

4.2. Split preconditioning. Let A = L_M U_M + R_A, where R_A is the error matrix and

L_M = \begin{pmatrix} B & 0 \\ E & \tilde{S} \end{pmatrix}, \qquad U_M = \begin{pmatrix} I & B^{-1}F \\ 0 & I \end{pmatrix}.

Then

(4.14) L_M^{-1} A U_M^{-1} = \begin{pmatrix} I & 0 \\ 0 & \tilde{S}^{-1}S \end{pmatrix},

and the split preconditioning is obtained by solving

(4.15) L_M^{-1} A U_M^{-1} v = L_M^{-1} b, \qquad U_M u = v.

The Krylov subspace associated with this system is

K_m(L_M^{-1} A U_M^{-1}, r_0) = Span\{ r_0, (L_M^{-1} A U_M^{-1}) r_0, \dots, (L_M^{-1} A U_M^{-1})^{m-1} r_0 \},

where

r_0 = L_M^{-1} b - (L_M^{-1} A U_M^{-1}) U_M u_0 = \begin{pmatrix} z_0 \\ s_0 \end{pmatrix},

u_0 = (x_0, y_0)^T is an initial guess for (4.1), and v_0 = U_M u_0.

Theorem 4.2. Assume that the reduced system (4.8) is solved with GMRES starting with an arbitrary initial guess y_0, and let s_m = \tilde{S}^{-1}(g' - Sy_m) be the preconditioned residual obtained at the m-th step. Then the preconditioned residual vector r_{m+1} obtained at the (m+1)-th step of GMRES for solving the block system (4.15) with an arbitrary initial guess u_0 = (x_0, y_0)^T satisfies the inequality

(4.16) \|r_{m+1}\|_2 ≤ \|I - \tilde{S}^{-1}S\|_2 \|s_m\|_2.

In particular, if s_m = 0, then r_{m+1} = 0.

Proof. Let p_m be the m-th residual polynomial, which minimizes \|p(\tilde{S}^{-1}S)s_0\|_2 among all polynomials p of degree m satisfying the constraint p(0) = 1; then the residual vector of the m-th GMRES approximation associated with the preconditioned reduced system (4.8) has the form s_m = p_m(\tilde{S}^{-1}S)s_0. We define the polynomial q_{m+1}(t) = (1 - t)p_m(t). The rest is similar to the proof of Theorem 4.1.
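A minimal sketch of the split preconditioning (4.15): GMRES is run on L_M^{-1} A U_M^{-1} v = L_M^{-1} b, and the solution is recovered from U_M u = v. The callables solve_B and solve_S_tilde and the splitting of the unknowns at index n_top are assumptions for illustration.

import numpy as np
import scipy.sparse.linalg as spla

def split_preconditioned_gmres(A, F, E, solve_B, solve_S_tilde, b, n_top):
    """n_top is the number of unknowns in the (1,1) block B."""
    n = A.shape[0]

    def apply_L_inv(r):                   # solve L_M w = r (block forward substitution)
        w1 = solve_B(r[:n_top])
        w2 = solve_S_tilde(r[n_top:] - E @ w1)
        return np.concatenate([w1, w2])

    def apply_U_inv(v):                   # solve U_M u = v (block backward substitution)
        y = v[n_top:]
        x = v[:n_top] - solve_B(F @ y)
        return np.concatenate([x, y])

    op = spla.LinearOperator((n, n), matvec=lambda v: apply_L_inv(A @ apply_U_inv(v)))
    v, info = spla.gmres(op, apply_L_inv(b), restart=20)
    return apply_U_inv(v), info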

Definition 4.1. A matrix B ∈ R^{n x n} is said to be a Z-matrix if all of its off-diagonal entries (if any) are nonpositive: b_{i,j} ≤ 0 for all i ≠ j.

Remark 4.2. Assume that B is a block diagonal matrix, where each block is of size 2. Furthermore, we suppose that B is a Z-matrix and strictly diagonally dominant. Then each block \begin{pmatrix} b_{j,j} & b_{j,j+1} \\ b_{j+1,j} & b_{j+1,j+1} \end{pmatrix} is nonsingular and its inverse can be written as

(4.17) \begin{pmatrix} b_{j,j} & b_{j,j+1} \\ b_{j+1,j} & b_{j+1,j+1} \end{pmatrix}^{-1} = \frac{1}{\det_j} \begin{pmatrix} b_{j+1,j+1} & -b_{j,j+1} \\ -b_{j+1,j} & b_{j,j} \end{pmatrix} = \begin{pmatrix} β_{j,j} & β_{j,j+1} \\ β_{j+1,j} & β_{j+1,j+1} \end{pmatrix},

where \det_j = b_{j,j} b_{j+1,j+1} - b_{j+1,j} b_{j,j+1}. Since B is strictly diagonally dominant, there exists ε_j > 0 such that \det_j > ε_j, and one easily obtains

(4.18) 0 ≤ β_{i,j} < γ/ε_B for each 1 ≤ i, j ≤ m,

where ε_B = min_j{ε_j} and γ = max_{1 ≤ i,j ≤ n} |a_{i,j}|. In other words,

\|B^{-1}\|_2 ≤ \|B^{-1}\|_F = \left( \sum_{j=1}^{m} \sum_{i=1}^{m} β_{i,j}^2 \right)^{1/2} ≤ \frac{\sqrt{2m}\, γ}{ε_B}.

By Theorem 4.1, we deduce

(4.19) \|r_m\|_2 ≤ \left( 1 + \frac{\sqrt{2m}\, γ}{ε_B} \|F\|_2 \right) \|s_m\|_2.
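A minimal sketch of the explicit blockwise inverse (4.17), assuming the 2 x 2 diagonal blocks of B are stored as a NumPy array of shape (number of blocks, 2, 2); this storage layout is an illustrative assumption.

import numpy as np

def invert_2x2_blocks(blocks):
    """blocks[k] = [[b_jj, b_jj1], [b_j1j, b_j1j1]]; returns the blockwise inverses."""
    det = blocks[:, 0, 0] * blocks[:, 1, 1] - blocks[:, 1, 0] * blocks[:, 0, 1]
    inv = np.empty_like(blocks)
    inv[:, 0, 0] = blocks[:, 1, 1] / det
    inv[:, 0, 1] = -blocks[:, 0, 1] / det
    inv[:, 1, 0] = -blocks[:, 1, 0] / det
    inv[:, 1, 1] = blocks[:, 0, 0] / det
    return inv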

4.3. Schur complement preconditioning. Assume that C has the same order m and the same structure as B (block diagonal with blocks of size 2) and satisfies the same properties as B. Then we can choose \tilde{S} = C as a preconditioner for the Schur complement matrix S. In this case, from Theorem 4.2, one easily has

\|r_{m+1}\|_2 ≤ \|I - C^{-1}S\|_2 \|s_m\|_2 = \|C^{-1}EB^{-1}F\|_2 \|s_m\|_2 ≤ \|C^{-1}\|_2 \|B^{-1}\|_2 \|E\|_2 \|F\|_2 \|s_m\|_2 ≤ \frac{2m\, γ^2}{ε_C ε_B} \|E\|_2 \|F\|_2 \|s_m\|_2.

Then we get

(4.20) \|r_{m+1}\|_2 ≤ n \left( \frac{γ}{ε} \right)^2 \|E\|_2 \|F\|_2 \|s_m\|_2,

where ε = min{ε_B, ε_C}.

Now take S = L_S U_S + R_S to be an approximate factorization of S. In the other version, the Schur complement matrix S is preconditioned from the left and the right by L_S and U_S, respectively. Consider the reduced system

(4.21) L_S^{-1} S U_S^{-1} w = L_S^{-1} g', \qquad U_S y = w,

where g' = g - EB^{-1}f. The system (4.21) is solved by GMRES with an arbitrary initial guess y_0 (U_S y_0 = w_0). Let

(4.22) s_m = L_S^{-1} g' - (L_S^{-1} S U_S^{-1}) U_S y_m.

Then the residual vector obtained at the (m+1)-th step of GMRES for solving the system (4.2), with left preconditioning L_A and right preconditioning U_A given by the factors in (4.24), and with an initial guess u_0 = (x_0, y_0)^T, satisfies the inequality

(4.23) \|r_{m+1}\|_2 ≤ \|I - L_S^{-1} S U_S^{-1}\|_2 \|s_m\|_2.

The proof of this result starts with the equality

(4.24) \begin{pmatrix} B & 0 \\ E & L_S \end{pmatrix}^{-1} A \begin{pmatrix} I & B^{-1}F \\ 0 & U_S \end{pmatrix}^{-1} = \begin{pmatrix} I & 0 \\ 0 & L_S^{-1} S U_S^{-1} \end{pmatrix}.

The rest is similar to that of the previous result.
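A minimal sketch of preconditioning an explicitly formed Schur complement with an incomplete factorization, in the spirit of \tilde{S} = L_S U_S; SciPy's spilu (a threshold-based ILU) is used here as a stand-in for the ILUT factorization used in Section 5, which is an assumption of this sketch.

import scipy.sparse.linalg as spla

def schur_ilu_preconditioner(S_red, drop_tol=1e-4, fill_factor=10):
    """Return a LinearOperator applying (L_S U_S)^{-1}, for use as a GMRES preconditioner."""
    ilu = spla.spilu(S_red.tocsc(), drop_tol=drop_tol, fill_factor=fill_factor)
    n = S_red.shape[0]
    return spla.LinearOperator((n, n), matvec=ilu.solve)

For example: y, info = spla.gmres(S_red, g_red, M=schur_ilu_preconditioner(S_red), restart=20).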

5. Numerical experiments and conclusion. We consider the five-point upwind finite difference scheme for the Laplace and convection-diffusion equations [40]. The advantage of this scheme is that it produces stable discretizations, whereas with the standard central finite difference approximation, and for small mesh sizes, spurious oscillations appear in the discrete solution of the convection-diffusion equation [27]. We use the standard and block red-black colorings described in Sections 2 and 3, with a uniform mesh size h in both the x and y directions. In these cases, we compute the Schur complement matrix S explicitly. In our tests, we present the number of preconditioned GMRES(20) iterations [33] for four different grid sizes. In Tables 4 and 7, solu shows the CPU time in seconds for the solution phase; prec shows the CPU time in seconds taken in constructing the preconditioners; spar shows the sparsity ratio, which is the ratio between the number of nonzero elements of the preconditioner in its block ILU factorization and that of the original matrix. The numerical experiments were conducted on a Pentium III CPU at 550 MHz with 64 MB RAM. The right-hand side was generated by assuming that the exact solution is a vector of all ones, and the initial guess was a vector of random numbers. The computations were stopped when the 2-norm of the residual was reduced by a factor of 10^8, or when 200 iterations were reached, indicated by (-).

We denote by M_S the preconditioner defined by (4.5), where the solution of the reduced system

(5.1) Sy = g - EB^{-1}f

is performed using backward and forward solution steps by applying the ILUT(10^{-4}, 10) factorization [29], and by M_I and M_C the preconditioners where the reduced system (5.1) is unpreconditioned and preconditioned by C^{-1}, respectively.

The first set of test problems is a Laplace equation with Dirichlet boundary conditions,

(5.2) -∆u = rhs,

defined on the unit square. The second set of test problems is a convection-diffusion equation with Dirichlet boundary conditions,

(5.3) -ν∆u + v · ∇u = rhs,

defined on the unit square; v is a convective flow (velocity vector), and the viscosity parameter ν governs the ratio between convection and diffusion and defines the inverse of the Reynolds number, Re = 1/ν.
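A minimal sketch of the test setting: a five-point upwind discretization of -ν∆u + v·∇u on the unit square with Dirichlet conditions, solved by GMRES(20). The grid size, the Case 1 velocity field, and the absence of any reordering or preconditioning here are illustrative assumptions, not the paper's experimental code.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def convection_diffusion_matrix(n, nu, v1, v2):
    """Interior grid of n x n points, mesh size h = 1/(n+1); v1, v2 are functions of (x, y)."""
    h = 1.0 / (n + 1)
    A = sp.lil_matrix((n * n, n * n))
    for j in range(n):                       # y index
        for i in range(n):                   # x index
            k = j * n + i
            x, y = (i + 1) * h, (j + 1) * h
            a, b = v1(x, y), v2(x, y)
            cW = cE = cS = cN = -nu / h**2   # five-point Laplacian part
            cP = 4 * nu / h**2
            cP += (abs(a) + abs(b)) / h      # first-order upwind convection part
            cW -= max(a, 0) / h
            cE -= max(-a, 0) / h
            cS -= max(b, 0) / h
            cN -= max(-b, 0) / h
            A[k, k] = cP
            if i > 0:     A[k, k - 1] = cW
            if i < n - 1: A[k, k + 1] = cE
            if j > 0:     A[k, k - n] = cS
            if j < n - 1: A[k, k + n] = cN
    return A.tocsr()

# Case 1 velocity v = (cos x, sin y); right-hand side from the exact solution of all ones.
n, nu = 32, 1e-2
A = convection_diffusion_matrix(n, nu, lambda x, y: np.cos(x), lambda x, y: np.sin(y))
b = A @ np.ones(A.shape[0])
u, info = spla.gmres(A, b, rtol=1e-8, restart=20)   # keyword is `tol` in older SciPy releases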

[TABLE 1: Iterations to convergence for the Laplace problem by different preconditioners (columns M_I, M_C, M_S for the standard and block red-black orderings and four grid sizes h^{-1}).]

Looking at the results for the Laplace problem (Table 1), it turns out that the preconditioner M_S requires a smaller number of iterations to achieve convergence than M_I and M_C, for both the standard and the block red-black orderings. Furthermore, M_I appears to be better with the standard red-black coloring than with the block version, which is not the case for the preconditioners M_C and M_S. One can easily see that the number of iterations increases strongly with h^{-1} for the preconditioners M_I and M_C, and only weakly for M_S.

Tables 2 and 3, and 5 and 6, show the number of iterations for the convection-diffusion problem in two cases, discretized using the standard and block colorings and preconditioned by the three preconditioners. Case 2 was proposed by Ruge and Stuben [28] to test their algebraic multigrid method; this case has some semi-recirculation effect, because the first of its convection coefficients vanishes at y = 0.5, while the first case has neither a stagnation point nor a semi-recirculation effect. We observe that for the preconditioner M_I, the number of iterations is lower with the standard ordering than with the block version, except when 10^{-6} ≤ ν ≤ 10^{-2} and h^{-1} = 33 and 65 in the first case, and when ν = 10^{-2} for all mesh sizes in the second case. For the preconditioner M_C and the second case, the block version requires fewer iterations than the standard version, but for the first case we cannot deduce which ordering is better. For the preconditioner M_S, we observe a moderate deterioration for 10^{-3} ≤ ν ≤ 10^{-1} for both cases and both orderings. With the standard coloring we obtain fewer iterations than with the block version for both cases, except for h^{-1} = 65 and h^{-1} = 105 when ν = 1 in the first case and 10^{-2} ≤ ν ≤ 1 in the second one.

Case 1: v = (cos x, sin y)^T.

[TABLE 2: Iterations to convergence for the convection-diffusion problem (Case 1) by different preconditioners (M_I, M_C, M_S) using the standard version, for several values of h and ν.]

[TABLE 3: Iterations to convergence for the convection-diffusion problem (Case 1) by different preconditioners (M_I, M_C, M_S) using the block version, for several values of h and ν.]

From Tables 4 and 7, it is seen that with the standard red-black coloring, M_S requires less CPU time for the processing phase and less memory than with the block version. We also see that the CPU time for the solution phase depends on the number of iterations required to achieve convergence.

[TABLE 4: Performance of the preconditioner M_S for solving the convection-diffusion problem (Case 1); columns solu, prec and spar for the standard and block red-black colorings and several values of h and ν.]

Case 2: v = ((2y - 1)(1 - x^2), 2xy(y - 1))^T.

[TABLE 5: Iterations to convergence for the convection-diffusion problem (Case 2) with different preconditioners (M_I, M_C, M_S) using the standard version, for several values of h and ν.]

6. Conclusion. We have presented a technique to reorder the nodes of the adjacency graph of a sparse matrix; this technique is based on the point red-black coloring. For this ordering we have also presented a block ILU preconditioner with different approximations of the Schur complement matrix for solving sparse linear systems. We have shown, both from a theoretical and from a practical point of view, that the efficiency of the global preconditioner M for the original block system depends strongly on how well \tilde{S} approximates the Schur complement matrix S.

[TABLE 6: Iterations to convergence for the convection-diffusion problem (Case 2) with different preconditioners (M_I, M_C, M_S) using the block version, for several values of h and ν.]

[TABLE 7: Performance of the preconditioner M_S for solving the convection-diffusion problem (Case 2); columns solu, prec and spar for the standard and block red-black colorings and several values of h and ν.]

From the numerical experiments that we have conducted in this paper (Laplace and convection-diffusion equations), we have learned that the preconditioner M_S is much better than M_I and M_C, for which convergence is not reached for small ν and large problems (with small mesh size h), and leads to considerably faster convergence. As can be seen by comparing the tables, we can solve more problems with the standard red-black ordering and the M_S preconditioner than with the block version, but the use of larger blocks may be more efficient for some parallel architectures.

Acknowledgment. The authors would like to thank the anonymous referee for their accurate remarks in improving the quality of this paper and for directing them to several of the references.

REFERENCES

1. O. Axelsson, Iterative Solution Methods, University Press, Cambridge.
2. R. Barrett, M. Berry, T. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine and H. A. van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia, PA.
3. M. Benzi and M. Tuma, A sparse approximate inverse preconditioner for nonsymmetric linear systems, SIAM J. Sci. Comput. 19 (1998).
4. C. Brand and Z. E. Heinemann, A new iterative solution technique for reservoir simulation equations on locally refined grids, SPE (18410).
5. B. L. Buzbee, F. W. Dorr, J. A. George and G. H. Golub, The direct solution of the discrete Poisson equation on irregular regions, SIAM J. Num. Anal. 8 (1971).
6. B. L. Buzbee, G. H. Golub and C. W. Nielson, On direct methods for solving Poisson's equations, SIAM J. Num. Anal. 7 (1970).
7. T. F. Chan, Analysis of preconditioners for domain decomposition, SIAM J. Numer. Anal. 24 (2) (1987).
8. T. F. Chan and D. Goovaerts, A note on the efficiency of domain decomposed incomplete factorization, SIAM J. Sci. Stat. Comput. 11 (4) (1990).
9. T. F. Chan and H. A. van der Vorst, Approximate and incomplete factorization, Technical Report 871, Department of Mathematics, University of Utrecht.
10. E. Chow and Y. Saad, Approximate inverse preconditioners via sparse-sparse iterations, SIAM J. Sci. Comput. 19 (3) (1998).
11. E. Chow and Y. Saad, Approximate inverse techniques for block-partitioned matrices, SIAM J. Sci. Comput. 18 (1997).
12. P. Ciarlet, Jr., Repeated red-black ordering: a new approach, Numer. Alg. (1994).
13. E. F. D'Azevedo, P. A. Forsyth and Wei-Pai Tang, Ordering methods for preconditioned conjugate gradient methods applied to unstructured grid problems, SIAM J. Matrix Anal. Appl. 13 (3) (1992).
14. I. S. Duff and G. A. Meurant, The effect of ordering on preconditioned conjugate gradients, BIT 29 (1989).
15. H. C. Elman, A stability analysis of incomplete LU factorization, Math. Comp. 47 (175) (1986).
16. H. C. Elman and G. H. Golub, Iterative methods for cyclically reduced non-self-adjoint linear systems, Math. Comp. 54 (1990).
17. H. C. Elman and G. H. Golub, Iterative methods for cyclically reduced non-self-adjoint linear systems II, Math. Comp. 56 (1991).
18. N. I. M. Gould and J. A. Scott, Sparse approximate-inverse preconditioners using norm-minimization techniques, SIAM J. Sci. Comput. 19 (1998).
19. M. J. Grote and T. Huckle, Parallel preconditioning with sparse approximate inverses, SIAM J. Sci. Comput. 18 (1997).
20. N. Guessous and O. Souhar, Multilevel block ILU preconditioner for sparse nonsymmetric M-matrices, J. Comput. Appl. Math. 162 (1) (2004).
21. N. Guessous and O. Souhar, Recursive two-level ILU preconditioner for nonsymmetric M-matrices, J. Appl. Math. Comput. 16 (1) (2004).
22. M. T. Jones and P. E. Plassman, A parallel graph coloring heuristic, SIAM J. Sci. Comput. 14 (1993).
23. L. Yu. Kolotilina and A. Yu. Yeremin, Factorized sparse approximate inverse preconditioning, SIAM J. Matrix Anal. 14 (1993), 45-58.

24. T. A. Manteuffel, An incomplete factorization technique for positive definite linear systems, Math. of Comput. 32 (1980).
25. G. Meurant, Domain decomposition methods for partial differential equations on parallel computers, Int. J. Supercomputing Appls. 2 (1988).
26. G. Meurant, The block preconditioned conjugate gradient method on vector computers, BIT 24 (1984).
27. Y. Notay, A robust algebraic multilevel preconditioner for nonsymmetric M-matrices, Tech. Rep. GANMN 99-01, Université Libre de Bruxelles, Brussels, Belgium.
28. J. W. Ruge and K. Stuben, Efficient solution of finite difference and finite element equations, in: Multigrid Methods for Integral and Differential Equations (D. J. Paddon and H. Holstein, eds.), Clarendon Press, Oxford.
29. Y. Saad, ILUT: A dual threshold incomplete LU preconditioner, Numer. Linear Algebra Appl. 1 (4) (1994).
30. Y. Saad, ILUM: A multi-elimination ILU preconditioner for general sparse matrices, SIAM J. Sci. Comput. 17 (4) (1996).
31. Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing, New York, NY.
32. Y. Saad, SPARSKIT: A Basic Tool Kit for Sparse Matrix Computations, Technical Report CSRD TR 1029, University of Illinois at Urbana-Champaign, IL.
33. Y. Saad and M. H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput. 7 (1986).
34. Y. Saad and J. Zhang, BILUM: Block versions of multi-elimination and multi-level ILU preconditioner for general sparse linear systems, SIAM J. Sci. Comput. 20 (6) (1999).
35. Y. Saad and J. Zhang, BILUTM: A domain-based multilevel block ILUT preconditioner for general sparse matrices, SIAM J. Matrix Anal. Appl. 21 (1) (1999).
36. Y. Saad and J. Zhang, Diagonal threshold techniques in robust multi-level ILU preconditioners for general sparse linear systems, Numer. Linear Algebra Appl. 6 (1999).
37. H. A. van der Vorst, High performance preconditioning, SIAM J. Sci. Stat. Comput. 10 (1989).
38. H. A. van der Vorst, Large tridiagonal and block tridiagonal linear systems on vector and parallel computers, Parallel Computing 5 (1987).
39. J. Zhang, A Sparse Approximate Inverse Technique for Parallel Preconditioning of General Sparse Matrices, Technical Report, Department of Computer Science, University of Kentucky, Lexington, KY.
40. J. Zhang, Preconditioned iterative methods and finite difference schemes for convection-diffusion, Appl. Math. and Comput. 109 (2000).

Ecole Normale Supérieure, Bensouda, B.P. 5206, Fès, Morocco
E-mail address: guessous najib@hotmail.com

Département de Mathématiques et Informatique, Faculté des Sciences Dhar-Mahraz, B.P. 1796 Atlas, Fès, Morocco
E-mail address: drsouhar@hotmail.com


More information

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Timo Heister, Texas A&M University 2013-02-28 SIAM CSE 2 Setting Stationary, incompressible flow problems

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

Solving PDEs with CUDA Jonathan Cohen

Solving PDEs with CUDA Jonathan Cohen Solving PDEs with CUDA Jonathan Cohen jocohen@nvidia.com NVIDIA Research PDEs (Partial Differential Equations) Big topic Some common strategies Focus on one type of PDE in this talk Poisson Equation Linear

More information

Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions

Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions Domain Decomposition Preconditioners for Spectral Nédélec Elements in Two and Three Dimensions Bernhard Hientzsch Courant Institute of Mathematical Sciences, New York University, 51 Mercer Street, New

More information

Iterative Methods for Linear Systems

Iterative Methods for Linear Systems Iterative Methods for Linear Systems 1. Introduction: Direct solvers versus iterative solvers In many applications we have to solve a linear system Ax = b with A R n n and b R n given. If n is large the

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

Arnoldi Methods in SLEPc

Arnoldi Methods in SLEPc Scalable Library for Eigenvalue Problem Computations SLEPc Technical Report STR-4 Available at http://slepc.upv.es Arnoldi Methods in SLEPc V. Hernández J. E. Román A. Tomás V. Vidal Last update: October,

More information

Efficient Solvers for the Navier Stokes Equations in Rotation Form

Efficient Solvers for the Navier Stokes Equations in Rotation Form Efficient Solvers for the Navier Stokes Equations in Rotation Form Computer Research Institute Seminar Purdue University March 4, 2005 Michele Benzi Emory University Atlanta, GA Thanks to: NSF (MPS/Computational

More information

Residual iterative schemes for largescale linear systems

Residual iterative schemes for largescale linear systems Universidad Central de Venezuela Facultad de Ciencias Escuela de Computación Lecturas en Ciencias de la Computación ISSN 1316-6239 Residual iterative schemes for largescale linear systems William La Cruz

More information

Iterative Methods for Linear Systems of Equations

Iterative Methods for Linear Systems of Equations Iterative Methods for Linear Systems of Equations Projection methods (3) ITMAN PhD-course DTU 20-10-08 till 24-10-08 Martin van Gijzen 1 Delft University of Technology Overview day 4 Bi-Lanczos method

More information

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan,

More information

ADDITIVE SCHWARZ FOR SCHUR COMPLEMENT 305 the parallel implementation of both preconditioners on distributed memory platforms, and compare their perfo

ADDITIVE SCHWARZ FOR SCHUR COMPLEMENT 305 the parallel implementation of both preconditioners on distributed memory platforms, and compare their perfo 35 Additive Schwarz for the Schur Complement Method Luiz M. Carvalho and Luc Giraud 1 Introduction Domain decomposition methods for solving elliptic boundary problems have been receiving increasing attention

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

The Deflation Accelerated Schwarz Method for CFD

The Deflation Accelerated Schwarz Method for CFD The Deflation Accelerated Schwarz Method for CFD J. Verkaik 1, C. Vuik 2,, B.D. Paarhuis 1, and A. Twerda 1 1 TNO Science and Industry, Stieltjesweg 1, P.O. Box 155, 2600 AD Delft, The Netherlands 2 Delft

More information

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS U..B. Sci. Bull., Series A, Vol. 78, Iss. 4, 06 ISSN 3-707 ON A GENERAL CLASS OF RECONDIIONERS FOR NONSYMMERIC GENERALIZED SADDLE OIN ROBLE Fatemeh anjeh Ali BEIK his paper deals with applying a class

More information

A High-Performance Parallel Hybrid Method for Large Sparse Linear Systems

A High-Performance Parallel Hybrid Method for Large Sparse Linear Systems Outline A High-Performance Parallel Hybrid Method for Large Sparse Linear Systems Azzam Haidar CERFACS, Toulouse joint work with Luc Giraud (N7-IRIT, France) and Layne Watson (Virginia Polytechnic Institute,

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Local Mesh Refinement with the PCD Method

Local Mesh Refinement with the PCD Method Advances in Dynamical Systems and Applications ISSN 0973-5321, Volume 8, Number 1, pp. 125 136 (2013) http://campus.mst.edu/adsa Local Mesh Refinement with the PCD Method Ahmed Tahiri Université Med Premier

More information

Preconditioning for Nonsymmetry and Time-dependence

Preconditioning for Nonsymmetry and Time-dependence Preconditioning for Nonsymmetry and Time-dependence Andy Wathen Oxford University, UK joint work with Jen Pestana and Elle McDonald Jeju, Korea, 2015 p.1/24 Iterative methods For self-adjoint problems/symmetric

More information

WHEN studying distributed simulations of power systems,

WHEN studying distributed simulations of power systems, 1096 IEEE TRANSACTIONS ON POWER SYSTEMS, VOL 21, NO 3, AUGUST 2006 A Jacobian-Free Newton-GMRES(m) Method with Adaptive Preconditioner and Its Application for Power Flow Calculations Ying Chen and Chen

More information

AN INTRODUCTION TO DOMAIN DECOMPOSITION METHODS. Gérard MEURANT CEA

AN INTRODUCTION TO DOMAIN DECOMPOSITION METHODS. Gérard MEURANT CEA Marrakech Jan 2003 AN INTRODUCTION TO DOMAIN DECOMPOSITION METHODS Gérard MEURANT CEA Introduction Domain decomposition is a divide and conquer technique Natural framework to introduce parallelism in the

More information

Solving Sparse Linear Systems: Iterative methods

Solving Sparse Linear Systems: Iterative methods Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccs Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary c 2008,2010

More information