Preconditioning Techniques for Large Linear Systems Part III: General-Purpose Algebraic Preconditioners
Michele Benzi
Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia, USA

Scuola di Dottorato di Ricerca in Scienze Matematiche, Dipartimento di Matematica, Università degli Studi di Padova
Outline

1 Introduction
2 Generalities about preconditioning
3 Basic concepts of algebraic preconditioning
4 Incomplete factorizations
5 Sparse approximate inverses
6 IF via approximate inverses
7 Balanced Incomplete Factorization (BIF)
8 Conclusions
Preconditioned iterative methods

Solving large linear systems Ax = b by Krylov-type methods.

Preconditioning may be viewed as a transformation:

    M^{-1} A x = M^{-1} b,   or   A M^{-1} y = b,  x = M^{-1} y.

Examples: matrix splittings (block Jacobi, Gauss-Seidel, SSOR); incomplete factorizations; sparse approximate inverses; AMG...

The preconditioner M (or M^{-1}) should be cheap and fast to compute, and should result in rapid convergence of the preconditioned iterative method. But it should also be:
- sufficiently robust;
- sparse (i.e., low storage requirements).

The case of sequences of linear systems A^{(k)} x^{(k)} = b^{(k)}, k = 0, 1, 2, ..., also arises.
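As a small illustration of the transformation (not from the slides), the sketch below applies the simplest algebraic preconditioner, M = diag(A) (Jacobi), to a deliberately badly scaled SPD matrix. The matrix and scaling are invented for the example; the point is only that the transformed system has the same solution but a much smaller condition number.

```python
import numpy as np

# 1D Laplacian with a deliberately bad symmetric scaling, so that even
# the simplest algebraic preconditioner M = diag(A) (Jacobi) pays off.
n = 6
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
S = np.diag(2.0 ** np.arange(n))        # illustrative scaling only
A = S @ L @ S
b = np.ones(n)

# Left preconditioning: M^{-1} A x = M^{-1} b with M = diag(A).
Minv = np.diag(1.0 / np.diag(A))
Ahat, bhat = Minv @ A, Minv @ b

# Same solution as the original system, but much better conditioned.
assert np.allclose(np.linalg.solve(A, b), np.linalg.solve(Ahat, bhat))
assert np.linalg.cond(Ahat) < np.linalg.cond(A) / 5
```

In a real Krylov code one never forms M^{-1}A explicitly; the preconditioner is applied to vectors inside the iteration.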
Preconditioned iterative methods

Structure of this lecture:

1 Brief discussion of algebraic vs. problem-specific preconditioning
2 Description of the guiding principles behind algebraic preconditioning (IF and SAI); robustness problems of standard techniques
3 Some recent approaches which exploit information on the matrix inverse
4 An approach based on a novel decomposition of the input matrix
5 Other recent developments: hybrid and multi-level methods (briefly)
A quote

"In ending this book with the subject of preconditioners, we find ourselves at the philosophical center of the scientific computing of the future... Nothing will be more central to computational science in the next century than the art of transforming a problem that appears intractable into another whose solution can be approximated rapidly. For Krylov subspace matrix iterations, this is preconditioning."

From L. N. Trefethen and D. Bau, III, Numerical Linear Algebra, SIAM, 1997.
Algebraic vs. Problem-Specific Preconditioning

Algebraic preconditioners use only information extracted from the input matrix A, usually supplemented by some user-provided tuning parameters, such as drop tolerances or limits on the amount of fill-in allowed.

Main examples include:
- preconditioners based on classical (block) splittings A = M - N;
- incomplete factorizations: M = L̄Ū ≈ A;
- approximate inverse preconditioners: G = M^{-1} ≈ A^{-1};
- algebraic multigrid (AMG);
- hybrids obtained by combining some of the above.

Such preconditioners are good candidates for inclusion in general-purpose software packages. Although they may not be optimal for any particular problem, they are widely applicable and have proven reasonably robust in countless applications. They are also being continually improved.
Algebraic vs. Problem-Specific Preconditioning

Discretization of a continuous problem (a system of PDEs, an integral equation, etc.) leads to a sequence of linear systems A_n x_n = b_n, where A_n is n × n and n → ∞ as the discretization is refined (that is, as h → 0).

Definition: A preconditioner is optimal if it results in a rate of convergence of the preconditioned iteration that is asymptotically constant as the problem size increases, and if the cost of each preconditioned iteration scales linearly in the size of the problem. (For integral equations, the cost per iteration may instead scale like O(n log n).)

For example, in the SPD case, if κ₂(M_n^{-1} A_n) ≤ C, where C is a constant independent of n, then M_n is an optimal preconditioner provided the action of M_n^{-1} A_n on a vector can be computed in O(n) work.
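The need for optimality is easy to see numerically. For the 1D Laplacian, a standard model problem (this check is illustrative and not from the slides), the condition number of the unpreconditioned matrix grows like O(n²), so iteration counts of unpreconditioned Krylov methods grow as the mesh is refined:

```python
import numpy as np

# Unpreconditioned 1D Laplacian: kappa_2(A_n) grows like O(n^2),
# so an optimal preconditioner must counteract this growth.
def lap(n):
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

conds = {n: np.linalg.cond(lap(n)) for n in (10, 20, 40)}

# Doubling n roughly quadruples the condition number.
assert conds[20] > 3 * conds[10]
assert conds[40] > 3 * conds[20]
```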
Algebraic vs. Problem-Specific Preconditioning

In contrast, problem-specific preconditioners, which are designed for a narrow class of problems, are often optimal. These methods make extensive use of the developer's knowledge of the application at hand, including information about the physics, the geometry, and the particular discretization technique used.

These preconditioners are usually not suitable for other types of problems, so their range of applicability is limited.

Many PDE-based (or physics-based) preconditioners belong to this class. An example is Diffusion Synthetic Acceleration (DSA) in radiation transport. Other examples of problem-specific preconditioners, especially for incompressible flow problems, will be discussed later in these lectures.
Algebraic vs. Problem-Specific Preconditioning

The two approaches, algebraic and problem-specific, are not necessarily mutually exclusive (much as with direct vs. iterative methods).

Most problem-specific preconditioners use algebraic ones as building blocks, e.g., to solve or to approximate subproblems arising within the overall preconditioning strategy. Conversely, some algebraic preconditioners are flexible enough that they can be tailored to specific applications.

Moreover, there has been a trend in recent years to build algebraic preconditioners that mimic the properties of specialized preconditioners; algebraic multilevel methods are one instance.
Implicit vs. explicit preconditioners

An implicit, or direct, preconditioner is an approximation of the input matrix: M ≈ A.

An explicit, or inverse, preconditioner is an approximation of the inverse of the input matrix: G = M^{-1} ≈ A^{-1}. This is motivated by the observation that, even though A^{-1} is a dense matrix, many of its entries are negligibly small.

Examples of implicit preconditioners include classical splittings, incomplete factorizations, and their block and multilevel variants. Examples of explicit preconditioners include polynomial preconditioners, sparse approximate inverses, and data-sparse approximate inverses. Both factored and non-factored forms are in use.
Implicit vs. explicit preconditioners

Application of an implicit preconditioner within a Krylov method (such as CG or GMRES) requires solving one or more linear systems, often with triangular or block triangular matrices. In contrast, application of an explicit preconditioner requires one or more matrix-vector products.

Explicit preconditioners are therefore easier to parallelize. Generally speaking, however, the construction of an explicit preconditioner tends to be more costly than that of an implicit one. This is to be expected, since A (or its action) is known but A^{-1} is not.

Also, convergence rates are usually better with implicit preconditioners than with explicit ones. But there are exceptions!
Incomplete Factorization (IF) methods

When a sparse matrix is factored by Gaussian elimination, fill-in usually takes place: the triangular factors L and U of the coefficient matrix A are considerably less sparse than A. Even though sparsity-preserving reordering techniques can be used to reduce fill-in, sparse direct methods are not considered viable for solving very large linear systems, such as those arising from the discretization of three-dimensional boundary value problems, due to time and space constraints.

However, by discarding part of the fill-in in the course of the factorization process, simple but powerful preconditioners can be obtained in the form M = L̄Ū, where L̄ and Ū are the incomplete (approximate) LU factors.
Incomplete Factorization (IF) methods

Incomplete factorization algorithms differ in the rules that govern the dropping of fill-in in the incomplete factors. Fill-in can be discarded based on several different criteria, such as position, value, or a combination of the two.

Letting n̲ = {1, 2, ..., n}, one can fix a subset S ⊆ n̲ × n̲ of positions in the matrix, usually including the main diagonal and all (i, j) such that a_ij ≠ 0, and allow fill-in in the LU factors only in positions which are in S. Formally, an incomplete factorization step can be described as

    a_ij ← a_ij − a_ik a_kk^{-1} a_kj   if (i, j) ∈ S,
    a_ij ← a_ij                          otherwise,

for each k and for i, j > k.
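The update rule above translates directly into code. The following dense Python sketch (for exposition only; a real implementation works on sparse data structures) applies the update only at positions in S and then checks the defining property of pattern-based ILU, namely that L̄Ū matches A exactly on S:

```python
import numpy as np

def ilu_pattern(A, S):
    """Incomplete LU of A, dropping all fill outside the position set S.

    S is a set of (i, j) pairs containing the diagonal and the nonzeros
    of A. The update a_ij <- a_ij - a_ik * a_kj / a_kk is applied only
    when (i, j) is in S. Dense sketch, not an efficient implementation.
    """
    F = A.astype(float).copy()
    n = F.shape[0]
    for k in range(n):
        for i in range(k + 1, n):
            if (i, k) in S:
                F[i, k] /= F[k, k]           # multiplier, stored as L entry
                for j in range(k + 1, n):
                    if (i, j) in S:
                        F[i, j] -= F[i, k] * F[k, j]
    return F  # L (unit lower, strictly below diagonal) and U stored together

# Example: S = nonzero pattern of A gives the no-fill factorization ILU(0).
A = np.array([[4., -1, 0, -1],
              [-1, 4, -1, 0],
              [0, -1, 4, -1],
              [-1, 0, -1, 4]])
S = {(i, j) for i in range(4) for j in range(4) if A[i, j] != 0}
F = ilu_pattern(A, S)
L = np.tril(F, -1) + np.eye(4)
U = np.triu(F)

# L*U agrees with A at every position in S; fill outside S was dropped.
assert all(abs((L @ U)[i, j] - A[i, j]) < 1e-12 for (i, j) in S)
assert F[1, 3] == 0 and F[3, 1] == 0
```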
Incomplete Factorization (IF) methods

Very simple sparsity patterns can yield cheap, cache-efficient preconditioners. Example: banded patterns applied to BCSSTK38, n = 8032, nnz = 181,746; SPD (a small structural analysis problem from Boeing). [Table comparing the bandwidth of the pattern against PCG iteration counts; the numerical values did not survive transcription. "nc" denotes no convergence.]
Incomplete Factorization (IF) methods

Notice that the incomplete factorization may fail due to division by zero or near-zero (usually referred to as pivot breakdown), even if A admits an LU factorization without pivoting. Partial pivoting can help, but it is costly and does not always suffice in the incomplete case.

If S coincides with the set of positions which are nonzero in A, we obtain the no-fill ILU factorization, or ILU(0). For SPD matrices the same concept applies to the Cholesky factorization A = LL^T, resulting in the no-fill IC factorization, or IC(0). When used with the conjugate gradient algorithm, this preconditioner leads to the ICCG method (Meijerink & van der Vorst, 1977).
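In practice one rarely codes an incomplete factorization by hand. As an analogous illustration (SciPy's spilu wraps SuperLU's threshold-based incomplete LU, not exactly ILU(0) or IC(0)), an incomplete factorization can be wrapped as the preconditioner M for a Krylov solver:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Standard model problem: 2D Laplacian on a 30x30 grid.
m = 30
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
A = sp.kronsum(T, T).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU used as preconditioner; applying M^{-1} amounts to
# triangular solves with the incomplete factors (ilu.solve).
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, maxiter=200)
assert info == 0  # converged
assert np.linalg.norm(A @ x - b) <= 1e-4 * np.linalg.norm(b)
```

The drop_tol and fill_factor parameters are the user-provided tuning parameters mentioned earlier: they trade preconditioner accuracy against storage.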
Incomplete Factorization (IF) methods

The no-fill ILU and IC preconditioners are very simple to implement, inexpensive to compute, and reasonably effective for significant classes of problems, such as low-order discretizations of scalar elliptic PDEs leading to M-matrices or to diagonally dominant matrices. No pivot breakdown can occur in these cases (Meijerink & van der Vorst, 1977; Manteuffel, 1980).

However, for more difficult and realistic problems the no-fill factorizations result in too crude an approximation of A, and more sophisticated preconditioners, which allow some fill-in in the incomplete factors, are needed. For instance, this is the case for highly nonsymmetric and indefinite matrices such as those arising in many CFD applications.
Incomplete Factorization (IF) methods

A hierarchy of ILU preconditioners may be obtained based on the concept of levels of fill. A level of fill is attributed to each matrix entry that occurs in the incomplete factorization process, and fill-ins are dropped based on their level of fill. The formal definition is as follows.

The initial level of fill of a matrix entry a_ij is defined to be

    lev_ij = 0   if a_ij ≠ 0 or i = j,
    lev_ij = ∞   otherwise.

Each time this entry is modified by the ILU process, its level of fill is updated according to

    lev_ij = min{lev_ij, lev_ik + lev_kj + 1}.

Let l be a nonnegative integer. In ILU(l), all fill-ins whose level is greater than l are dropped. Note that for l = 0 we recover the no-fill ILU(0) preconditioner.
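The retained pattern of ILU(l) can be computed purely symbolically from these rules. A minimal dense sketch (for exposition; the test matrix is an invented small example):

```python
import numpy as np

def ilu_levels(A, l):
    """Symbolic ILU(l): return the boolean pattern of positions kept when
    all fill-ins with level of fill greater than l are dropped."""
    n = A.shape[0]
    # Initial levels: 0 where a_ij != 0 or i == j, infinity otherwise.
    lev = np.where((A != 0) | np.eye(n, dtype=bool), 0.0, np.inf)
    for k in range(n):
        for i in range(k + 1, n):
            if lev[i, k] <= l:                 # entry (i, k) is retained
                for j in range(k + 1, n):
                    # Update rule: lev_ij = min{lev_ij, lev_ik + lev_kj + 1}
                    lev[i, j] = min(lev[i, j], lev[i, k] + lev[k, j] + 1)
    return lev <= l

# 2D Laplacian on a 3x3 grid (9x9 matrix), where elimination creates fill.
T = 2.0 * np.eye(3) - np.eye(3, k=1) - np.eye(3, k=-1)
A = np.kron(np.eye(3), T) + np.kron(T, np.eye(3))

S0 = ilu_levels(A, 0)
S1 = ilu_levels(A, 1)
assert np.array_equal(S0, A != 0)   # l = 0 recovers the ILU(0) pattern
assert np.all(S1[S0]) and S1.sum() > S0.sum()  # higher l admits more fill
```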
Example: level-based incomplete LU factorizations ILU(l)

- Motivated by decay in the factors of diagonally dominant matrices
- Structure of the incomplete factors can be predicted using the matrix graph

[Fill patterns for a small test matrix at increasing levels of fill: ILU(0): nz = 217; ILU(1): nz = 289; ILU(2): nz = 349; ILU(3): nz = 457; ILU(4): nz = 541; ILU(5): nz = 601; ILU(6): nz = 637; ILU(7): nz = 649.]
Numerical Example

Fast symbolic construction of the ILU(l) pattern is available (Hysom & Pothen, SISC 2001). But the preconditioner is typically expensive to apply even for a modest number of levels.

Example: matrix ENGINE, n = 143,571, nnz = 2,424,822; SPD. [Table of level l (0 through 8) vs. preconditioner size and PCG iterations; the preconditioner size grows from 2,424,822 nonzeros at level 0 to over 54 million at level 8, while the iteration counts did not survive transcription.]
Preprocessing incomplete factorizations
Preprocessing originally designed for direct solvers is often very useful to improve the robustness of ILU preconditioners:
- Symmetric reorderings (RCM, MD, ND, etc.)
- "Static pivoting": nonsymmetric permutations and scalings aimed at increasing diagonal dominance (Duff & Koster, SIMAX 1999, 2001; B., Haws & Tůma, SISC 2000; Saad, SISC 2005; Mayer, SISC 2008)
- Extension to symmetric indefinite problems (Duff & Pralet, SIMAX 2005; Hagemann & Schenk, SISC 2006)
- Block variants (many authors)
But, for very tough problems, this is still not enough to guarantee convergence of the preconditioned iteration.
Example (cont.)
Preprocessing: the matrix is reordered with Multiple Minimum Degree (MMD), a fill-reducing ordering.
Matrix ENGINE, n = 143,571, nnz = 2,424,822, MMD ordering.
[Table: preconditioner size and PCG iteration count for fill levels 0 through 8, with and without the MMD reordering; "nc" marks a run that did not converge.]
Some improvement observed, but not entirely robust.
The use of drop tolerances
In many cases, an efficient preconditioner can be obtained from an incomplete factorization where new fill-ins are accepted or discarded on the basis of their size. In this way, only fill-ins that contribute significantly to the quality of the preconditioner are stored and used.
A drop tolerance is a positive number τ which is used in a dropping criterion. An absolute dropping strategy can be used, whereby new fill-ins are accepted only if greater than τ in absolute value. This criterion may work poorly if the matrix is badly scaled, in which case it is better to use a relative drop tolerance.
For example, when eliminating row i, a new fill-in is accepted only if it is greater in absolute value than τ ||a_i||_2, where a_i denotes the ith row of A. Other criteria are also in use.
The use of drop tolerances
A drawback of this approach is that it is difficult to choose a good value of the drop tolerance: usually, this is done by trial and error on a few sample matrices from a given application, until a satisfactory value of τ is found. Good results are often obtained within a fairly narrow range of values, but the optimal τ is strongly problem-dependent.
Another difficulty is that it is impossible to predict the amount of storage that will be needed for the incomplete LU factors. An efficient, predictable algorithm is obtained by limiting the number of nonzeros allowed in each row of the triangular factors. Saad (1994) has proposed the following dual threshold strategy:
Fix a drop tolerance τ and a number p of fill-ins to be allowed in each row of the incomplete L/U factors; at each step of the elimination process, drop all fill-ins that are smaller than τ times the 2-norm of the current row; of all the remaining ones, keep (at most) the p largest in magnitude.
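The dual threshold rule is easy to state in code. The sketch below implements only the per-row dropping step (function and parameter names are mine, not from any particular ILUT implementation); in a full ILUT it would be applied to each working row after its elimination updates.

```python
import numpy as np

def drop_row(w, tau, p, keep=()):
    """ILUT-style dual dropping applied to a working row w.

    Step 1: zero all entries smaller than tau * ||w||_2 (relative threshold).
    Step 2: of the survivors, keep at most the p largest in magnitude;
            positions in `keep` (e.g. the diagonal) are always retained.
    """
    w = np.asarray(w, dtype=float).copy()
    w[np.abs(w) < tau * np.linalg.norm(w)] = 0.0
    candidates = [j for j in np.nonzero(w)[0] if j not in keep]
    candidates.sort(key=lambda j: abs(w[j]), reverse=True)
    for j in candidates[p:]:              # drop everything beyond the p largest
        w[j] = 0.0
    return w

row = np.array([4.0, -0.003, 1.5, 0.2, -2.5, 0.001])
# tau = 1e-2 removes the two tiny entries; p = 2 then keeps only the two
# largest remaining off-diagonal entries (position 0 plays the diagonal).
filtered = drop_row(row, tau=1e-2, p=2, keep={0})
```

The two thresholds play different roles: τ controls quality (small entries contribute little), while p makes storage predictable.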
The use of drop tolerances
A variant of this approach allows, in each row of the incomplete factors, p nonzeros in addition to the positions that were already nonzero in the original matrix A. This makes sense for irregular problems in which the nonzeros of A are not distributed uniformly.
The resulting preconditioner, denoted ILUT(τ, p), is quite powerful. If it fails on a problem for a given choice of the parameters τ and p, it will often succeed with a smaller value of τ and/or a larger value of p. The corresponding incomplete Cholesky preconditioner for SPD matrices, denoted ICT, can also be defined.
ILUT(τ, p) and the variant with partial pivoting, ILUTP(τ, p), are quite effective and widely used in many industrial applications. However, failures can still occur.
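SciPy exposes a drop-tolerance incomplete LU through SuperLU's spilu routine; here is a minimal usage sketch (the test matrix and parameter values are arbitrary choices of mine) combining it with GMRES:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spilu, gmres, LinearOperator

# Nonsymmetric 1D convection-diffusion-like test matrix (illustrative)
n = 200
A = diags([-1.0, 2.0, -0.3], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# drop_tol plays the role of tau; fill_factor bounds memory growth,
# roughly the role of the per-row cap p in ILUT(tau, p)
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator((n, n), matvec=ilu.solve)   # apply M^{-1} via triangular solves

x, info = gmres(A, b, M=M, atol=1e-10)         # info == 0 on convergence
```

Note that each preconditioner application is a pair of sparse triangular solves with the incomplete factors, which is cheap as long as the factors stay sparse.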
Example
IC(0)/ICT may fail while simple diagonal scaling works!
Matrix LDOOR (structural analysis of a car door), n = 952,203, nnz = 23,737,339.
[Table: PCG converges with the Jacobi preconditioner (952,203 nonzeros), while IC(0) (23,737,339 nonzeros) and five ICT variants of increasing size (up to 37,809,756 nonzeros) all exceed 1000 PCG iterations.]
Stability considerations
ILU preconditioners attempt to make the residual matrix R := A − M small in some norm. However, this does not always result in good preconditioners.
As observed by several authors (Elman, Saad, ...), a more meaningful approximation measure is based on the size of the error matrix
E := I − AM^{-1}.
Approximate inverse preconditioners attempt to make E small, but this may require a huge number of nonzeros in the preconditioner (unless the entries of A^{-1} exhibit fast off-diagonal decay).
Note that ||E|| = ||RM^{-1}|| ≤ ||R|| ||M^{-1}||. Hence, if M is very ill-conditioned (||M^{-1}|| is very large), then a very large error matrix may occur even if A − M is small. This often results in failure to converge.
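These quantities are easy to compare numerically. Below is a small dense sketch (my own naive ILU(0), purely for illustration) that computes R and E for M = LU and checks the bound ||E||_F ≤ ||R||_F ||M^{-1}||_2:

```python
import numpy as np

def ilu0(A):
    """ILU(0): Gaussian elimination with fill restricted to the pattern of A."""
    n = A.shape[0]
    pat = A != 0
    F = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            if pat[i, k]:
                F[i, k] /= F[k, k]
                for j in range(k + 1, n):
                    if pat[i, j]:              # update only existing positions
                        F[i, j] -= F[i, k] * F[k, j]
    return np.tril(F, -1) + np.eye(n), np.triu(F)

rng = np.random.default_rng(0)
n = 30
A = np.where(rng.random((n, n)) < 0.15, rng.standard_normal((n, n)), 0.0)
np.fill_diagonal(A, 5.0)                       # keep the example well conditioned

L, U = ilu0(A)
M = L @ U
Minv = np.linalg.inv(M)
R = A - M                                      # residual matrix
E = np.eye(n) - A @ Minv                       # error matrix, equals -R M^{-1}
bound = np.linalg.norm(R, "fro") * np.linalg.norm(Minv, 2)
```

On a well-conditioned example both norms are small; the point of the bound is that when M is nearly singular, ||E|| can blow up even though ||R|| stays tiny.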
Stability considerations
Example (B., Szyld & van Duin, SISC 1999): the system Ax = b is a discretization of a convection-dominated convection-diffusion equation. Solver: Bi-CGSTAB. Orderings: lexicographic and MMD.
Let N_1 := ||A − L̄Ū||_F and N_2 := ||I − A(L̄Ū)^{-1}||_F.
[Table: N_1, N_2, and Bi-CGSTAB iteration counts for ILU(0) and ILUT(0.01, 5) under the lexicographic and MMD orderings; "nc" marks runs that did not converge.]
Permuting large entries of A to the main diagonal
[Figures: sparsity patterns of a Jacobian from the Navier-Stokes equations, original and permuted with MC64 + RCM.]
After preprocessing, ILUT with Bi-CGSTAB converges in 24 iterations. No convergence on the original system.
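SciPy does not ship MC64, but a structural analogue of the same two-step idea — permute columns so the diagonal is zero-free, then apply a symmetric reordering such as RCM — can be sketched with the csgraph routines. Note the caveat: maximum_bipartite_matching is unweighted, whereas MC64 also maximizes the magnitudes of the diagonal entries.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching, reverse_cuthill_mckee

# Small matrix with a structurally zero diagonal (illustrative)
A = csr_matrix(np.array([
    [0.0, 3.0, 0.0, 0.0],
    [2.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 4.0],
    [0.0, 1.0, 5.0, 0.0],
]))

# Column permutation giving a zero-free diagonal: perm[i] is the column
# matched to row i (entries are -1 if no perfect matching exists),
# so B = A[:, perm] satisfies B[i, i] != 0.
perm = maximum_bipartite_matching(A, perm_type='column')
B = A[:, perm]

# Symmetric fill-reducing/bandwidth-reducing reordering of the result
p = reverse_cuthill_mckee(B, symmetric_mode=False)
C = B[p][:, p]
```

The symmetric RCM step is applied after the matching so that the zero-free diagonal obtained in the first step is preserved.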
Outline
1 Introduction
2 Generalities about preconditioning
3 Basic concepts of algebraic preconditioning
4 Incomplete factorizations
5 Sparse approximate inverses
6 IF via approximate inverses
7 Balanced Incomplete Factorization (BIF)
8 Conclusions
Sparse approximate inverses
Idea: directly approximate the inverse with a sparse matrix G ≈ A^{-1}; then applying the preconditioner requires only matrix-vector products with G.
Mostly motivated by parallel processing; also less prone to instabilities than ILU, and easy to update when solving a sequence of linear systems.
Also useful for constructing robust smoothers for multigrid, and for other purposes such as approximating Schur complements.
By now, a large body of literature exists (hundreds of papers since the 1990s).
Successfully used in numerous applications, including:
- solution of dense linear systems from BEM in electromagnetics, acoustics, and elastodynamics problems
- solution of sparse linear systems from photon and neutron transport, CFD, Markov chains, eigenproblems, etc.
- quantum chemistry applications
- image processing (restoration, deblurring, inpainting)
Sparse approximate inverses
Main approaches: sparse approximate inverses (SAIs) can be factored or unfactored.
Factored forms are of the type G = ZW where, for instance, Z ≈ U^{-1} and W ≈ L^{-1}.
Factored forms are especially useful if A is SPD. In this case W = Z^T, and the approximate inverse G = ZZ^T is guaranteed to be SPD. This allows for the use of the conjugate gradient (CG) method.
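A standard construction for an unfactored SAI (not spelled out on these slides) minimizes ||I − AG||_F over a prescribed sparsity pattern; the Frobenius norm decouples the problem into n independent small least-squares problems, one per column of G. A minimal dense sketch, using the pattern of A itself as the fixed pattern of G (a common initial choice):

```python
import numpy as np

def spai_fixed_pattern(A):
    """G ~= A^{-1} minimizing ||I - A G||_F with pattern(G) = pattern(A).

    Each column g_j solves min || e_j - A[:, J] g ||_2 over its allowed
    positions J. Real implementations also restrict the rows and may
    enlarge the pattern adaptively; this sketch keeps it minimal.
    """
    n = A.shape[0]
    G = np.zeros((n, n))
    I = np.eye(n)
    for j in range(n):
        J = np.nonzero(A[:, j])[0]           # allowed nonzero positions of g_j
        g, *_ = np.linalg.lstsq(A[:, J], I[:, j], rcond=None)
        G[J, j] = g
    return G

# Illustrative SPD test matrix (1D Laplacian-like)
n = 8
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
G = spai_fixed_pattern(A)
res = np.linalg.norm(np.eye(n) - A @ G, "fro")
```

Because the columns are independent, this construction parallelizes trivially, which is the main motivation mentioned above.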
More information1. Fast Iterative Solvers of SLE
1. Fast Iterative Solvers of crucial drawback of solvers discussed so far: they become slower if we discretize more accurate! now: look for possible remedies relaxation: explicit application of the multigrid
More informationUsing an Auction Algorithm in AMG based on Maximum Weighted Matching in Matrix Graphs
Using an Auction Algorithm in AMG based on Maximum Weighted Matching in Matrix Graphs Pasqua D Ambra Institute for Applied Computing (IAC) National Research Council of Italy (CNR) pasqua.dambra@cnr.it
More informationLecture 17: Iterative Methods and Sparse Linear Algebra
Lecture 17: Iterative Methods and Sparse Linear Algebra David Bindel 25 Mar 2014 Logistics HW 3 extended to Wednesday after break HW 4 should come out Monday after break Still need project description
More informationSolving PDEs with Multigrid Methods p.1
Solving PDEs with Multigrid Methods Scott MacLachlan maclachl@colorado.edu Department of Applied Mathematics, University of Colorado at Boulder Solving PDEs with Multigrid Methods p.1 Support and Collaboration
More informationLecture 18 Classical Iterative Methods
Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,
More informationJordan Journal of Mathematics and Statistics (JJMS) 5(3), 2012, pp A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS
Jordan Journal of Mathematics and Statistics JJMS) 53), 2012, pp.169-184 A NEW ITERATIVE METHOD FOR SOLVING LINEAR SYSTEMS OF EQUATIONS ADEL H. AL-RABTAH Abstract. The Jacobi and Gauss-Seidel iterative
More informationChallenges for Matrix Preconditioning Methods
Challenges for Matrix Preconditioning Methods Matthias Bollhoefer 1 1 Dept. of Mathematics TU Berlin Preconditioning 2005, Atlanta, May 19, 2005 supported by the DFG research center MATHEON in Berlin Outline
More informationBindel, Fall 2016 Matrix Computations (CS 6210) Notes for
1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.
More informationToday s class. Linear Algebraic Equations LU Decomposition. Numerical Methods, Fall 2011 Lecture 8. Prof. Jinbo Bi CSE, UConn
Today s class Linear Algebraic Equations LU Decomposition 1 Linear Algebraic Equations Gaussian Elimination works well for solving linear systems of the form: AX = B What if you have to solve the linear
More informationParallelization of Multilevel Preconditioners Constructed from Inverse-Based ILUs on Shared-Memory Multiprocessors
Parallelization of Multilevel Preconditioners Constructed from Inverse-Based ILUs on Shared-Memory Multiprocessors J.I. Aliaga 1 M. Bollhöfer 2 A.F. Martín 1 E.S. Quintana-Ortí 1 1 Deparment of Computer
More informationAND. Key words. preconditioned iterative methods, sparse matrices, incomplete decompositions, approximate inverses. Ax = b, (1.1)
BALANCED INCOMPLETE FACTORIZATION RAFAEL BRU, JOSÉ MARÍN, JOSÉ MAS AND M. TŮMA Abstract. In this paper we present a new incomplete factorization of a square matrix into triangular factors in which we get
More information6.4 Krylov Subspaces and Conjugate Gradients
6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P
More informationLecture 8: Fast Linear Solvers (Part 7)
Lecture 8: Fast Linear Solvers (Part 7) 1 Modified Gram-Schmidt Process with Reorthogonalization Test Reorthogonalization If Av k 2 + δ v k+1 2 = Av k 2 to working precision. δ = 10 3 2 Householder Arnoldi
More informationThe amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.
AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative
More informationAn efficient multigrid solver based on aggregation
An efficient multigrid solver based on aggregation Yvan Notay Université Libre de Bruxelles Service de Métrologie Nucléaire Graz, July 4, 2012 Co-worker: Artem Napov Supported by the Belgian FNRS http://homepages.ulb.ac.be/
More informationParallel Numerics, WT 2016/ Iterative Methods for Sparse Linear Systems of Equations. page 1 of 1
Parallel Numerics, WT 2016/2017 5 Iterative Methods for Sparse Linear Systems of Equations page 1 of 1 Contents 1 Introduction 1.1 Computer Science Aspects 1.2 Numerical Problems 1.3 Graphs 1.4 Loop Manipulations
More informationRecent advances in approximation using Krylov subspaces. V. Simoncini. Dipartimento di Matematica, Università di Bologna.
Recent advances in approximation using Krylov subspaces V. Simoncini Dipartimento di Matematica, Università di Bologna and CIRSA, Ravenna, Italy valeria@dm.unibo.it 1 The framework It is given an operator
More informationLINEAR SYSTEMS (11) Intensive Computation
LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY
More informationMODIFICATION AND COMPENSATION STRATEGIES FOR THRESHOLD-BASED INCOMPLETE FACTORIZATIONS
MODIFICATION AND COMPENSATION STRATEGIES FOR THRESHOLD-BASED INCOMPLETE FACTORIZATIONS S. MACLACHLAN, D. OSEI-KUFFUOR, AND YOUSEF SAAD Abstract. Standard (single-level) incomplete factorization preconditioners
More informationc 2000 Society for Industrial and Applied Mathematics
SIAM J. SCI. COMPUT. Vol. 21, No. 5, pp. 1851 1868 c 2000 Society for Industrial and Applied Mathematics ORDERINGS FOR FACTORIZED SPARSE APPROXIMATE INVERSE PRECONDITIONERS MICHELE BENZI AND MIROSLAV TŮMA
More informationOn the Preconditioning of the Block Tridiagonal Linear System of Equations
On the Preconditioning of the Block Tridiagonal Linear System of Equations Davod Khojasteh Salkuyeh Department of Mathematics, University of Mohaghegh Ardabili, PO Box 179, Ardabil, Iran E-mail: khojaste@umaacir
More informationLecture # 20 The Preconditioned Conjugate Gradient Method
Lecture # 20 The Preconditioned Conjugate Gradient Method We wish to solve Ax = b (1) A R n n is symmetric and positive definite (SPD). We then of n are being VERY LARGE, say, n = 10 6 or n = 10 7. Usually,
More informationRecent advances in sparse linear solver technology for semiconductor device simulation matrices
Recent advances in sparse linear solver technology for semiconductor device simulation matrices (Invited Paper) Olaf Schenk and Michael Hagemann Department of Computer Science University of Basel Basel,
More informationLab 1: Iterative Methods for Solving Linear Systems
Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as
More informationIndefinite and physics-based preconditioning
Indefinite and physics-based preconditioning Jed Brown VAW, ETH Zürich 2009-01-29 Newton iteration Standard form of a nonlinear system F (u) 0 Iteration Solve: Update: J(ũ)u F (ũ) ũ + ũ + u Example (p-bratu)
More informationFEM and sparse linear system solving
FEM & sparse linear system solving, Lecture 9, Nov 19, 2017 1/36 Lecture 9, Nov 17, 2017: Krylov space methods http://people.inf.ethz.ch/arbenz/fem17 Peter Arbenz Computer Science Department, ETH Zürich
More informationNotes on PCG for Sparse Linear Systems
Notes on PCG for Sparse Linear Systems Luca Bergamaschi Department of Civil Environmental and Architectural Engineering University of Padova e-mail luca.bergamaschi@unipd.it webpage www.dmsa.unipd.it/
More informationEfficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization
Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Timo Heister, Texas A&M University 2013-02-28 SIAM CSE 2 Setting Stationary, incompressible flow problems
More informationRobust solution of Poisson-like problems with aggregation-based AMG
Robust solution of Poisson-like problems with aggregation-based AMG Yvan Notay Université Libre de Bruxelles Service de Métrologie Nucléaire Paris, January 26, 215 Supported by the Belgian FNRS http://homepages.ulb.ac.be/
More informationJae Heon Yun and Yu Du Han
Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose
More informationarxiv: v4 [math.na] 1 Sep 2018
A Comparison of Preconditioned Krylov Subspace Methods for Large-Scale Nonsymmetric Linear Systems Aditi Ghai, Cao Lu and Xiangmin Jiao arxiv:1607.00351v4 [math.na] 1 Sep 2018 September 5, 2018 Department
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 11 Partial Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002.
More informationNEWTON-GMRES PRECONDITIONING FOR DISCONTINUOUS GALERKIN DISCRETIZATIONS OF THE NAVIER-STOKES EQUATIONS
NEWTON-GMRES PRECONDITIONING FOR DISCONTINUOUS GALERKIN DISCRETIZATIONS OF THE NAVIER-STOKES EQUATIONS P.-O. PERSSON AND J. PERAIRE Abstract. We study preconditioners for the iterative solution of the
More informationSolving Sparse Linear Systems: Iterative methods
Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccs Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary c 2008,2010
More informationSolving Sparse Linear Systems: Iterative methods
Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 2: Iterative Methods Dianne P. O Leary
More informationAn Efficient Low Memory Implicit DG Algorithm for Time Dependent Problems
An Efficient Low Memory Implicit DG Algorithm for Time Dependent Problems P.-O. Persson and J. Peraire Massachusetts Institute of Technology 2006 AIAA Aerospace Sciences Meeting, Reno, Nevada January 9,
More informationANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS
ANALYSIS OF AUGMENTED LAGRANGIAN-BASED PRECONDITIONERS FOR THE STEADY INCOMPRESSIBLE NAVIER-STOKES EQUATIONS MICHELE BENZI AND ZHEN WANG Abstract. We analyze a class of modified augmented Lagrangian-based
More informationAdaptive algebraic multigrid methods in lattice computations
Adaptive algebraic multigrid methods in lattice computations Karsten Kahl Bergische Universität Wuppertal January 8, 2009 Acknowledgements Matthias Bolten, University of Wuppertal Achi Brandt, Weizmann
More informationNumerical Linear Algebra
Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost
More informationProgram Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects
Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen
More informationDirect and Incomplete Cholesky Factorizations with Static Supernodes
Direct and Incomplete Cholesky Factorizations with Static Supernodes AMSC 661 Term Project Report Yuancheng Luo 2010-05-14 Introduction Incomplete factorizations of sparse symmetric positive definite (SSPD)
More information