THEORETICAL AND NUMERICAL COMPARISON OF PROJECTION METHODS DERIVED FROM DEFLATION, DOMAIN DECOMPOSITION AND MULTIGRID METHODS
J.M. TANG, R. NABBEN, C. VUIK, AND Y.A. ERLANGGA

Abstract. For various applications, it is well known that a two-level-preconditioned Krylov method is an efficient method for solving large and sparse linear systems. Beside a traditional preconditioner, like an incomplete Cholesky decomposition, a projector is included as preconditioner to get rid of the effect of a number of small or large eigenvalues of the coefficient matrix. In the literature, various projection methods are known, coming from the fields of deflation, domain decomposition and multigrid. At first glance, the projectors seem to be different. However, from an abstract point of view, it can be shown that these methods are closely related. The aim of this paper is to compare these projection methods both theoretically and numerically. We investigate their convergence properties and stability by considering their implementation, the effect of rounding errors, inexact coarse solves, severe termination criteria and perturbed starting vectors. Finally, we end up with a suggestion for a second-level preconditioner that is as stable as the abstract balancing preconditioner and as cheap and fast as the deflation preconditioner.

Key words. deflation, domain decomposition, multigrid, conjugate gradients, two-grid schemes, preconditioning, implementation, SPD matrices, hybrid methods, coarse grid corrections, projection methods.

AMS subject classifications. 65F10, 65F50, 65N22, 65N.

1. Introduction. The Conjugate Gradient (CG) method [14] is a very popular iterative method for solving large linear systems of equations,

    Ax = b,  A = [a_ij] ∈ R^{n×n},    (1.1)

whose coefficient matrix A is sparse and symmetric positive definite (SPD).
The convergence rate of CG depends on the condition number of A, i.e., after j iterations of CG, the exact error is bounded by

    ||x − x_j||_A ≤ 2 ||x − x_0||_A ((√κ − 1)/(√κ + 1))^j,    (1.2)

where x_0 is the starting vector, κ = κ(A) denotes the spectral condition number of A, and ||x||_A is the A-norm of x, defined as ||x||_A = √(x^T A x). If κ is large, it is advisable to solve a preconditioned system, M^{-1}Ax = M^{-1}b, instead of (1.1). The SPD preconditioner, M^{-1}, should be chosen such that M^{-1}A has a more clustered spectrum, or a smaller condition number, than A. Furthermore, systems My = z must be cheap to solve, relative to the improvement they provide in the convergence rate. Nowadays, the design and analysis of preconditioners for CG are a main focus of research. Even fast solvers, like multigrid (MG) or domain decomposition methods (DDM), are used as preconditioners. Traditional preconditioners are diagonal scaling, basic iterative methods, approximate inverse preconditioning, and incomplete Cholesky preconditioners. A discussion and overview of these fast solvers (MG and

Part of this work has been done during the visit of the first and third authors at Technische Universität Berlin.

Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft Institute of Applied Mathematics, Mekelweg 4, 2628 CD Delft, The Netherlands (j.m.tang@tudelft.nl, c.vuik@tudelft.nl). Part of the work of these authors has been funded by the Dutch BSIK/BRICKS project.

Technische Universität Berlin, Institut für Mathematik, Straße des 17. Juni 136, D Berlin, Germany (nabben@math.tu-berlin.de, erlangga@math.tu-berlin.de). The work of these authors has been partially funded by the Deutsche Forschungsgemeinschaft (DFG), Project NA248/2-2.
DDM) and traditional preconditioners has been given in [43]. However, it appears that the resulting preconditioned CG method shows slow convergence in many applications with highly refined grids, and in flows with high coefficient ratios in the original differential equations. In these cases, the presence of small eigenvalues has a harmful influence on the convergence of preconditioned CG. Recently, it has appeared that, beside traditional preconditioners, a second-level preconditioning can be used to get rid of the effect of the small eigenvalues. This new type is also known as projectors or projection methods, where extra coarse linear systems have to be solved during the iterations. Deflation is one of the frequently used projection methods, and it can itself be divided into several variants, see, e.g., [8, 16, 27, 30, 39]. Other typical examples of projectors are additive coarse grid correction [4, 5] and abstract balancing methods [17–19], which are well known in the fields of MG and DDM. Various projectors appear to be useful for solving problems with large jumps in the coefficients, combined with domain decomposition methods [32, 36], and in combination with block-Jacobi-type preconditioners in parallel computing [9, 38, 40]. Recently, it has appeared that two-level preconditioning can also be useful for problems with constant coefficients, which are solved on sequential computers [34]. At first glance, the projectors from deflation, DDM and MG seem to be different. Eigenvector approximations are usually used as deflation projectors, whereas, in the cases of DDM and MG, special projections are built to transfer information to the whole domain or to a coarser grid. However, from an abstract point of view, these projections are comparable, or even identical in some sense. In [24–26], theoretical comparisons have been given of the deflation, abstract balancing and additive coarse grid correction projectors.
It has been shown that the deflation method is expected to converge faster than the other two projectors, by considering, e.g., the effective condition numbers. For certain starting vectors, deflation and abstract balancing even produce the same iterates. Although these projectors seem comparable, deflation turned out to be unstable in the concise numerical experiments: the residuals stagnated or even diverged when the required accuracy was (too) high. On the other hand, the other two projectors are more stable, but they have the drawbacks that balancing is more expensive to apply and that coarse grid correction is slower in convergence. More recent papers about the stability of projectors can be found in, e.g., [2, 11, 13]. Note that, in [24–26], the comparisons of deflation, abstract balancing and coarse grid correction were mainly based on theoretical aspects, whereas the numerical comparison was done only concisely. Additionally, there are more attractive projectors available in the literature which were not included. Some of these employ essentially the same operators, where slight differences can only be noticed in the implementation. In this paper, we examine a wide set of projection methods, used in different fields. Beside deflation, abstract balancing and coarse grid correction, some more attractive projectors are included in this set. First, this set is compared theoretically, by considering the corresponding spectral properties, numerical implementations and equivalences. Thereafter, the main focus is on the numerical experiments, where the projectors are tested on their convergence properties and stability. The effect of the different implementations is analyzed extensively. The following questions will be answered. Which projectors are stable with respect to rounding errors?
Which projectors can be applied if one uses inexact coarse solvers, severe termination criteria or perturbed starting vectors? Is there a second-level preconditioner that is as stable as abstract balancing and as cheap and fast as deflation? Beside the projection methods considered in this paper, some other variants are known, such as augmented subspace CG [7], the deflated Lanczos method [30] and the
Odir and Omin versions of CG combined with extra vectors [1]. We refer to [30, 31] for a discussion and comparison of these methods. Finally, another comparison of projection methods has been done in [11], where methods like Init-CG, Def-CG, Proj-CG and SLRU have been compared. The aim of that paper is to obtain an optimal solver that exploits accurate spectral information about the coefficient matrix in an efficient way.

This paper is organized as follows. In Section 2, we introduce and discuss the projection methods. Section 3 is devoted to the theoretical comparison of these methods. Subsequently, the numerical comparison of the projectors is carried out in Section 4. Finally, the conclusions are given in Section 5.

2. Projection Methods. In this section, the projectors will be defined and motivated, but we first start with some terminology and preliminary results.

Definition 2.1. Suppose that an SPD coefficient matrix, A ∈ R^{n×n}, an SPD preconditioning matrix, M^{-1} ∈ R^{n×n}, and a deflation subspace matrix, Z ∈ R^{n×k}, with full rank and k < n, are given. Then, we define the invertible coarse matrix, E ∈ R^{k×k}, the correction matrix, Q ∈ R^{n×n}, and the deflation matrix, P ∈ R^{n×n}, as follows:

    P := I − AQ,  Q := Z E^{-1} Z^T,  E := Z^T A Z,

where I is the identity matrix of appropriate size.

Lemma 2.2. The following equalities hold:
(a) P = P^2;
(b) PA = AP^T;
(c) P^T Z = 0, P^T Q = 0;
(d) PAZ = 0, PAQ = 0;
(e) QA = I − P^T, QAZ = Z, QAQ = Q;
(f) Q^T = Q.
Proof. See, e.g., [35].

If k ≪ n and Z has rank k, the coarse matrix E can easily be computed and factored, and it is SPD for any such Z. From an abstract point of view, all projectors will consist of an arbitrary M^{-1}, combined with one or more of the matrices P and Q. In the next subsection, we give a concise explanation of, and common choices for, these matrices in the different fields. Nevertheless, from our point of view, the given matrices M^{-1} and Z are just arbitrary.
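The operators of Definition 2.1 and the identities of Lemma 2.2 are easy to check numerically. The following NumPy sketch uses a random SPD matrix A and a random full-rank Z; both are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 30, 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                  # SPD by construction
Z = rng.standard_normal((n, k))              # full-rank deflation subspace
E = Z.T @ A @ Z                              # coarse matrix, k x k, SPD
Q = Z @ np.linalg.solve(E, Z.T)              # correction matrix  Q = Z E^{-1} Z^T
P = np.eye(n) - A @ Q                        # deflation matrix   P = I - A Q

I = np.eye(n)
ok = np.allclose
assert ok(P, P @ P)                          # (a) P is a projector
assert ok(P @ A, A @ P.T)                    # (b) P A = A P^T
assert ok(P.T @ Z, 0) and ok(P.T @ Q, 0)     # (c)
assert ok(P @ A @ Z, 0) and ok(P @ A @ Q, 0) # (d)
assert ok(Q @ A, I - P.T)                    # (e) Q A = I - P^T
assert ok(Q @ A @ Z, Z) and ok(Q @ A @ Q, Q) # (e) continued
assert ok(Q, Q.T)                            # (f) Q is symmetric
```

Only the small k-by-k matrix E is ever factored; P and Q are formed densely here purely for verification.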
This abstract setting allows us to compare the different approaches used in DDM, MG and deflation.

2.1. Background of the Matrices in Domain Decomposition, Multigrid and Deflation. In the projection methods used in DDM, like the balancing Neumann–Neumann or the (two-level) additive coarse grid correction method, the preconditioner, M^{-1}, consists of the local exact or inexact solves on subdomains. For example, M^{-1} can be the additive Schwarz preconditioner. Moreover, Z describes a restriction operator, while Z^T is the prolongation or interpolation operator based on the subdomains. In this case, E is called the coarse grid or Galerkin matrix. To speed up the convergence of the additive coarse grid correction method, a coarse grid correction matrix, Q, can be added. Finally, P can be interpreted as a subspace correction, in which each subdomain is agglomerated into a single cell. More details can be found in [32, 36]. Also in the MG approach, Z and Z^T are the restriction and prolongation operators, respectively, where there can be a connection between some subdomains. E and Q are again the coarse grid (or Galerkin) matrix and the coarse grid correction matrix, respectively, corresponding to the Galerkin approach. The matrix P can be interpreted as a coarse grid correction using an interpolation operator with extreme coarsening, where linear systems with E are usually solved recursively. In the context of
MG projection methods, M^{-1} should work as a smoother, followed by a coarse grid correction P. We refer to [37, 42] for more details. In deflation methods, M^{-1} can be an arbitrary preconditioner, such as an incomplete Cholesky factorization. Furthermore, the deflation subspace matrix Z consists of so-called deflation vectors, used in the deflation matrix P. In this case, the column space of Z builds the deflation subspace, i.e., the space to be projected out of the residuals. It consists of, for example, eigenvectors, approximations of eigenvectors, or piecewise constant or linear vectors, which are strongly related to DDM. If one chooses eigenvectors, the corresponding eigenvalues are shifted to zero in the spectrum of the deflated matrix. This fact has motivated the name deflation method. In the literature, it is also known as the spectral preconditioner, see, e.g., [11]. Usually, systems with E are solved directly, using, e.g., a Cholesky decomposition.

2.2. General Linear System. The general linear system, which will be the basis for the projection methods, is given by

    P A x̂ = b̂,  P, A ∈ R^{n×n},    (2.1)

where calligraphic-style P and A denote the abstract operator and system matrix. In the standard preconditioned CG method, x̂ = x is the solution of the original linear system Ax = b, A = A is the SPD coefficient matrix, P = M_PREC^{-1} represents a traditional SPD preconditioner, and b̂ = M_PREC^{-1} b is the right-hand side. We will call this method Traditional Preconditioned CG (PREC), see also [12, 22]. Next, A may also be a combination of A and P, such that A is symmetric positive (semi-)definite (SP(S)D), while P remains a traditional preconditioner. Note that this does not cause difficulties for CG, since CG can deal with SPSD matrices as long as the linear system is consistent [15]. Furthermore, instead of choosing one traditional preconditioner for P, we can combine different traditional preconditioners and the matrices P and Q in an additive or multiplicative way, which will be illustrated below.
The additive combination of two SPD preconditioners C_1 and C_2 leads to P_a2, given by

    P_a2 := C_1 + C_2,    (2.2)

which should be SPD. Of course, the summation of the preconditioners can be done with different weights for C_1 and C_2. Moreover, (2.2) can easily be generalized to P_ai for more SPD preconditioners C_1, C_2, ..., C_i. The multiplicative combination of preconditioners can be explained by considering the stationary iterative methods induced by the preconditioners. Assuming that C_1 and C_2 are two SPD preconditioners, we can combine x_{i+1/2} := x_i + C_1(b − Ax_i) and x_{i+1} := x_{i+1/2} + C_2(b − Ax_{i+1/2}), to obtain x_{i+1} = x_i + P_m2(b − Ax_i) with

    P_m2 := C_1 + C_2 − C_2 A C_1,    (2.3)

which is the multiplicative operator consisting of two preconditioners. In addition, C_1 and C_2 can again be combined with another SPD preconditioner, C_3, in a multiplicative way, which yields

    P_m3 = C_1 + C_2 + C_3 − C_2 A C_1 − C_3 A C_2 − C_3 A C_1 + C_3 A C_2 A C_1.    (2.4)

This can also be generalized to P_mi, for C_1, C_2, ..., C_i.

2.3. Definition of the Projection Methods. The projection methods to be compared are given and motivated below.
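The additive and multiplicative combinations above underlie all of the operators defined next. As a quick numerical sanity check of (2.3), one can verify that two successive stationary steps equal a single step with P_m2; a sketch, with Jacobi and a scaled identity as two illustrative SPD preconditioners:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                 # SPD test matrix
C1 = np.diag(1.0 / np.diag(A))              # Jacobi (illustrative choice)
C2 = (n / np.trace(A)) * np.eye(n)          # scaled identity (illustrative)
b = rng.standard_normal(n)
x = rng.standard_normal(n)

# Two successive stationary iteration steps ...
x_half = x + C1 @ (b - A @ x)
x_next = x_half + C2 @ (b - A @ x_half)

# ... equal one step with the multiplicative operator of (2.3).
P_m2 = C1 + C2 - C2 @ A @ C1
assert np.allclose(x_next, x + P_m2 @ (b - A @ x))
```

The same bookkeeping, applied once more with C_3, produces the three-term operator (2.4).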
2.3.1. Additive Method. If one substitutes a traditional preconditioner C_1 = M^{-1} and a coarse grid correction matrix C_2 = Q into the additive combination given in (2.2), then this yields

    P_AD = M^{-1} + Q.    (2.5)

Using the additive Schwarz preconditioner for M^{-1}, the abstract form (2.5) includes the additive coarse grid correction preconditioner [3], and the resulting method is known as BPS. This operator has further been analyzed in, e.g., [4, 5, 28]. If the multiplicative Schwarz preconditioner is taken for M^{-1}, we obtain the Hybrid-2 preconditioner [36, p. 47]. In the MG language, P_AD is sometimes called an additive multigrid preconditioner. In this paper, the resulting method, associated with P_AD, will be called Additive Coarse Grid Correction (AD).

2.3.2. Deflation Methods. The deflation technique has been exploited by several authors [9, 10, 16, 20, 21, 23–25, 27, 30, 39]. Below, we first describe the deflation method following [39], and thereafter [16, 27, 30]. First note that Q = Q^T, (I − P^T)x = Qb and AP^T = PA hold, by Lemma 2.2. Then, in order to solve Ax = b, we write x = (I − P^T)x + P^T x, where (I − P^T)x = Qb can be computed immediately. For the part P^T x, we solve the deflated system

    P A x̃ = P b.    (2.6)

Obviously, (2.6) is singular, and it can only be solved by CG as long as it is consistent, see also [15]. Since the matrix A is nonsingular and Ax = b is consistent, this is certainly true for (2.6), where the same projection is applied to both sides of the nonsingular system. If A is singular, this projection can also be applied in many cases, see [33, 34]. Then, because P^T x̃ = P^T x, the unique solution x can be obtained from a solution x̃ of (2.6) by premultiplying x̃ by P^T and adding Qb, i.e., x = Qb + P^T x̃. Subsequently, the deflated system can also be solved using a preconditioner, M^{-1}, which gives

    M^{-1} P A x̃ = M^{-1} P b,    (2.7)

see [39] for details.
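The solve-then-correct recipe of (2.6), x = Qb + P^T x̃, can be illustrated with a small NumPy sketch. For brevity, the singular but consistent deflated system is solved here with a dense least-squares solve instead of CG; the random A and Z are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 40, 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                 # SPD test matrix
b = rng.standard_normal(n)
Z = rng.standard_normal((n, k))             # full-rank deflation subspace
E = Z.T @ A @ Z
Q = Z @ np.linalg.solve(E, Z.T)
P = np.eye(n) - A @ Q

# Solve the singular, consistent deflated system  P A xt = P b
# (least-squares solve used here in place of CG, for brevity).
xt = np.linalg.lstsq(P @ A, P @ b, rcond=None)[0]

# Uniqueness step: x = Q b + P^T xt recovers the solution of A x = b.
x = Q @ b + P.T @ xt
assert np.allclose(A @ x, b, atol=1e-8)
```

Note that any solution x̃ of (2.6) works: the component of x̃ in the null space of PA lies in the column space of Z and is annihilated by P^T.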
The linear system (2.7) can be written in the form of (2.1) by taking P = M^{-1}, A = PA and b̂ = M^{-1}Pb. Note that this is well-defined, since it can be shown that PA is an SPSD matrix. The resulting method will be called Deflation Variant 1 (DEF1). An alternative way to describe the deflation technique is to start with a random vector, x̄, and to set x_0 := Qb + P^T x̄. Then, it can be shown that the solution of Ax = b can be constructed from the deflated system

    A P^T y = r_0,  r_0 := b − Ax_0,    (2.8)

where it can be proven that P^T y is uniquely determined. Again, the deflated system (2.8) can also be solved with the preconditioner M^{-1}, leading to M^{-1}AP^T y = M^{-1}r_0. After some rewriting, x can be uniquely determined from

    P^T M^{-1} A x = P^T M^{-1} b,    (2.9)

see, e.g., [16] for more details. The resulting method will be denoted as Deflation Variant 2 (DEF2). In contrast to M^{-1}AP^T y = M^{-1}r_0, Eq. (2.9) cannot be written in the form of (2.1) with an SPD operator P and an SPSD matrix A. Fortunately, in Subsection 3.2, it will be shown that (2.9) is identical to a linear system which is in the form of (2.1).

Remark 1. The main difference between DEF1 and DEF2 is their flipped projection operators. In addition, define the 'uniqueness' operation as v = Qb +
P^T ṽ, for certain vectors v and ṽ. This operation is carried out at the end of the iteration process in DEF1, so that an arbitrarily chosen starting vector x̄ can be used. On the contrary, this operation is applied prior to the iteration process in DEF2, which can be interpreted as adopting a special starting vector.

2.3.3. Adapted Deflation Methods. If one applies C_1 = Q and C_2 = M^{-1} in the multiplicative combination given in (2.3), then this yields

    P_A-DEF1 = M^{-1} P + Q.    (2.10)

In the MG language, this operator results from the nonsymmetric multigrid iteration scheme where one first applies a coarse grid correction, followed by a smoothing step. Note that, although Q and M^{-1} are SPD preconditioners, (2.10) is a nonsymmetric operator and, even more, it is not symmetric with respect to the inner product induced by A. In addition, P_A-DEF1 can also be seen as an adapted deflation preconditioner, since M^{-1}P from DEF1 is combined in an additive way with the coarse grid correction Q. Hence, the resulting method, corresponding to P_A-DEF1, will be denoted as Adapted Deflation Variant 1 (A-DEF1). Subsequently, we can also reverse the order of Q and M^{-1} in (2.3), i.e., we choose C_1 = M^{-1} and C_2 = Q in (2.3), which implies

    P_A-DEF2 = P^T M^{-1} + Q.    (2.11)

Using the additive Schwarz preconditioner for M^{-1}, P_A-DEF2 is the two-level Hybrid-II Schwarz preconditioner [32, p. 48]. In MG methods, P_A-DEF2 is the (nonsymmetric) multigrid preconditioner where M^{-1} is used as a smoother. Similar to A-DEF1, P_A-DEF2 is nonsymmetric. Fortunately, in Subsection 3.2, we will see that A-DEF2 is identical to a method based on a symmetric operator. As in the case of P_A-DEF1, the operator P_A-DEF2 can also be seen as an adapted deflation preconditioner, since P^T M^{-1} from DEF2 is combined with Q, in an additive way.
Therefore, the resulting method will be called Adapted Deflation Variant 2 (A-DEF2).

2.3.4. Abstract Balancing Methods. The operators P_A-DEF1 and P_A-DEF2 can be symmetrized by using the multiplicative combination of three preconditioners. If one substitutes C_1 = Q, C_2 = M^{-1} and C_3 = Q into (2.4), we obtain

    P_BNN = P^T M^{-1} P + Q.

In the MG philosophy, P_BNN results from a symmetric multigrid iteration scheme, where one first applies a coarse grid correction, followed by a smoothing step, and ends with another coarse grid correction. However, in MG, the symmetrization is usually done by another smoothing step. Moreover, P_BNN is a well-known operator in DDM. In combination with the additive Schwarz preconditioner for M^{-1}, and after some scaling and special choices of Z, the operator P_BNN is known as the Balancing Neumann–Neumann preconditioner, introduced in [17] and further analyzed in, e.g., [6, 18, 19, 29, 36]. In the abstract form, P_BNN is called the Hybrid-1 preconditioner [36, p. 34]. Here, we will call it Abstract Balancing Neumann–Neumann (BNN). Moreover, we will also consider two variants of BNN, see below. In the first variant, we omit the term Q of P_BNN, giving us

    P_R-BNN1 = P^T M^{-1} P,

which still remains a symmetric operator. To our knowledge, P_R-BNN1 is unknown in the literature, so this is the first time its properties are analyzed. The corresponding method is called Reduced Balancing Neumann–Neumann Variant 1 (R-BNN1). Next, in the second variant of BNN, we omit both P and Q of P_BNN, resulting in

    P_R-BNN2 = P^T M^{-1},    (2.12)
and this method will be denoted as Reduced Balancing Neumann–Neumann Variant 2 (R-BNN2). Notice that the operators of R-BNN2 and DEF2 are equal, i.e., P_DEF2 = P_R-BNN2 = P^T M^{-1}; only the implementation appears to differ, see Subsection 2.4.1. In fact, the implementation of DEF2 is equal to the approach applied in, e.g., [30], where the deflation method has been derived by combining a deflated Lanczos procedure and the standard CG algorithm. On the other hand, R-BNN2 is the approach where deflation has been incorporated into the CG algorithm in a direct way [16], but it is also the approach where a hybrid variant has been employed in DDM [36]. Finally, as mentioned earlier, P^T M^{-1} is a nonsymmetric preconditioner, but it will be shown in Subsection 3.2 that both P_R-BNN1 and P_R-BNN2 are identical to P_BNN for certain starting vectors. Hence, we classify these methods as variants of the original abstract balancing method, rather than as variants of the deflation methods.

2.4. Aspects of Projection Methods. A list of the methods to be compared is given in Table 2.1. More details about the methods can be found in the references given in the last column of this table. Subsequently, the implementation and the computational cost of these methods will be considered in this subsection.

Name    | Method                           | Operator           | References
--------|----------------------------------|--------------------|------------
PREC    | Traditional Preconditioned CG    | M^{-1}             | [12, 22]
AD      | Additive Coarse Grid Correction  | M^{-1} + Q         | [3, 32, 36]
DEF1    | Deflation Variant 1              | M^{-1} P           | [39]
DEF2    | Deflation Variant 2              | P^T M^{-1}         | [16, 27, 30]
A-DEF1  | Adapted Deflation Variant 1      | M^{-1} P + Q       | [32]
A-DEF2  | Adapted Deflation Variant 2      | P^T M^{-1} + Q     | [32]
BNN     | Abstract Balancing               | P^T M^{-1} P + Q   | [17]
R-BNN1  | Reduced Balancing Variant 1      | P^T M^{-1} P       |
R-BNN2  | Reduced Balancing Variant 2      | P^T M^{-1}         | [17, 36]

Table 2.1. List of methods to be compared. The operator of each method can be interpreted as the preconditioner P given in (2.1), with A = A.
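The operators of Table 2.1 can be formed explicitly for a small example. The following sketch (random SPD A, random Z, and Jacobi as an illustrative stand-in for M^{-1}) confirms which of them are symmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 30, 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                 # SPD test matrix
Minv = np.diag(1.0 / np.diag(A))            # Jacobi as a stand-in for M^{-1}
Z = rng.standard_normal((n, k))
E = Z.T @ A @ Z
Q = Z @ np.linalg.solve(E, Z.T)
P = np.eye(n) - A @ Q

ops = {                                     # operators of Table 2.1
    "PREC":   Minv,
    "AD":     Minv + Q,
    "DEF1":   Minv @ P,
    "DEF2":   P.T @ Minv,
    "A-DEF1": Minv @ P + Q,
    "A-DEF2": P.T @ Minv + Q,
    "BNN":    P.T @ Minv @ P + Q,
    "R-BNN1": P.T @ Minv @ P,
    "R-BNN2": P.T @ Minv,
}
sym = {name: np.allclose(op, op.T) for name, op in ops.items()}
assert sym["PREC"] and sym["AD"] and sym["BNN"] and sym["R-BNN1"]
assert not (sym["DEF1"] or sym["DEF2"] or sym["A-DEF1"] or sym["A-DEF2"])
```

The nonsymmetric operators are nevertheless usable, since (as discussed in Section 3) most of them are equivalent to symmetric formulations for suitable starting vectors.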
If possible, references to the methods and their implementations are given in the last column.

2.4.1. Implementation Issues. The general implementation of any method given in Table 2.1 can be found in the algorithm below. For each method, the corresponding operators M_1, M_2, M_3 and vectors V_start and V_end can be found in Table 2.2. For more details, we refer to [35].

General implementation for solving Ax = b:

    Select random x̄ and V_start, M_1, M_2, M_3, V_end from Table 2.2
    x_0 := V_start,  r_0 := b − Ax_0
    z_0 := M_1 r_0,  p_0 := M_2 z_0
    for j := 0, 1, ..., until convergence do
        w_j := M_3 A p_j
        α_j := (r_j, z_j) / (p_j, w_j)
        x_{j+1} := x_j + α_j p_j
        r_{j+1} := r_j − α_j w_j
        z_{j+1} := M_1 r_{j+1}
        β_j := (r_{j+1}, z_{j+1}) / (r_j, z_j)
        p_{j+1} := M_2 z_{j+1} + β_j p_j
    end for
    x_it := V_end

From this algorithm and Table 2.2, it can be observed that one or more preconditioning and projection operations are carried out in the steps where M_i with i = 1, 2, 3 are involved. For most projectors, these steps are combined to obtain the preconditioned/projected residuals, z_{j+1}. DEF2 is the only method where a projection step is applied to the search directions, p_{j+1}, whereas DEF1 is the only method where the projection is performed to create w_j. Moreover, notice that we use the same random starting vector x̄ in each method, but the actual starting vector, V_start, differs for each method. Finally, it can also be noticed that the ending vector, V_end, is the same for all methods, except DEF1. Next, recall that P, as given in (2.1), should be SPD to guarantee convergence
of CG. This is obviously the case for PREC, AD, DEF1 and BNN. As mentioned earlier, it can be shown that DEF2, A-DEF2, R-BNN1 and R-BNN2 also give appropriate operators, where it turns out that V_start = Qb + P^T x̄ plays an important role in this derivation. A-DEF1 is the only method which does not have an SPD operator and which can also not be decomposed or transformed into an SPD operator P. Nonetheless, we will see that it works fine in several test cases, see Section 4.

Method  | V_start       | M_1               | M_2 | M_3 | V_end
--------|---------------|-------------------|-----|-----|------------------
PREC    | x̄             | M^{-1}            | I   | I   | x_{j+1}
AD      | x̄             | M^{-1} + Q        | I   | I   | x_{j+1}
DEF1    | x̄             | M^{-1}            | I   | P   | Qb + P^T x_{j+1}
DEF2    | Qb + P^T x̄    | M^{-1}            | P^T | I   | x_{j+1}
A-DEF1  | x̄             | M^{-1} P + Q      | I   | I   | x_{j+1}
A-DEF2  | Qb + P^T x̄    | P^T M^{-1} + Q    | I   | I   | x_{j+1}
BNN     | x̄             | P^T M^{-1} P + Q  | I   | I   | x_{j+1}
R-BNN1  | Qb + P^T x̄    | P^T M^{-1} P      | I   | I   | x_{j+1}
R-BNN2  | Qb + P^T x̄    | P^T M^{-1}        | I   | I   | x_{j+1}

Table 2.2. Choices of V_start, M_1, M_2, M_3 and V_end for each method, as used in the general algorithm.

2.4.2. Computational Cost. The computational cost of each method depends not only on the choices of M^{-1} and Z, but also on the implementation and storage of the matrices. It is easy to see that, for each iteration, PREC requires 1 matrix-vector multiplication (MVM), 4 inner products (IP), 3 vector updates (VU) and 1 preconditioning step. Next, note that AZ and E should be computed and stored beforehand, so that only one MVM with A is required in each iteration of the projection methods. Moreover, we distinguish two cases for Z and AZ: Z is sparse, so that both Z and AZ can be stored as vectors; or Z is full, and therefore Z and AZ are full matrices. For each projection method, the extra computational cost per iteration, with respect to PREC, is given in Table 2.3. In the table, the number of operations Py and Qy per iteration, for a given vector y, is also provided.
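A minimal dense sketch of the general implementation, with the operator choices of Table 2.2 for DEF1 and BNN, is given below. Unlike the algorithm above, the initial residual is here also passed through M_3, which for DEF1 yields the projected initial residual of standard deflated CG; all matrices are illustrative random choices:

```python
import numpy as np

def projected_cg(A, b, M1, M2, M3, v_start, v_end, tol=1e-10, maxit=500):
    """Sketch of the general implementation; M1, M2, M3 are callables
    playing the roles of Table 2.2, v_end maps the final iterate."""
    x = v_start.copy()
    r = M3(b - A @ x)              # initial residual (projected when M3 = P)
    z = M1(r)
    p = M2(z)
    rz = r @ z
    for _ in range(maxit):
        if np.linalg.norm(r) < tol:
            break
        w = M3(A @ p)
        alpha = rz / (p @ w)
        x = x + alpha * p
        r = r - alpha * w
        z = M1(r)
        rz_new = r @ z
        p = M2(z) + (rz_new / rz) * p
        rz = rz_new
    return v_end(x)

rng = np.random.default_rng(3)
n, k = 40, 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                  # SPD test matrix
b = rng.standard_normal(n)
Minv = np.diag(1.0 / np.diag(A))             # Jacobi as a stand-in for M^{-1}
Z = rng.standard_normal((n, k))
E = Z.T @ A @ Z
Q = Z @ np.linalg.solve(E, Z.T)
P = np.eye(n) - A @ Q
xbar = rng.standard_normal(n)
ident = lambda v: v

# DEF1: M1 = M^{-1}, M3 = P, uniqueness step at the end (Table 2.2).
x_def1 = projected_cg(A, b, lambda r: Minv @ r, ident, lambda v: P @ v,
                      xbar, lambda x: Q @ b + P.T @ x)
# BNN: M1 = P^T M^{-1} P + Q, plain start and end.
x_bnn = projected_cg(A, b, lambda r: P.T @ (Minv @ (P @ r)) + Q @ r,
                     ident, ident, xbar, ident)

assert np.allclose(A @ x_def1, b, atol=1e-6)
assert np.allclose(A @ x_bnn, b, atol=1e-6)
```

In a practical implementation, P and Q would of course not be formed as dense matrices; only the actions on vectors (with E precomputed and factored) are needed.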
Note that, if both Py and Qy should be computed for the same vector y, as in A-DEF1 and BNN, then Qy can be determined efficiently, since it only requires one extra IP if Z is sparse, or one extra MVM if Z is full. From Table 2.3, it can be seen that AD is obviously the cheapest method, while BNN and R-BNN1 are the most expensive projection methods, since two operations with P and P^T are involved. Finally, observe that using a projection method is only efficient if Z is sparse, or if the number of deflation vectors is relatively small in the case of a full matrix Z.

3. Theoretical Comparison. In this section, a comparison of the eigenvalue distributions of the operators will be made, and thereafter, some relations between the abstract balancing method and the other projectors will be derived.

3.1. Spectral Analysis of the Methods. It is well known that the eigenvalue distribution of the system corresponding to PREC is always worse than those of the projection methods. For example, in [41], we have shown that

    κ_eff(M^{-1} P A) ≤ κ(M^{-1} A),

for any SPD matrices A and M^{-1}, and any Z with full rank. This means that the effective condition number, κ_eff, of DEF1 is always below the condition number, κ,
Table 2.3. Extra computational cost of each iteration of the projection methods (AD, DEF1, DEF2, A-DEF1, A-DEF2, BNN, R-BNN1, R-BNN2) compared to PREC, in terms of operations Py or P^T y and Qy, inner products (IP), matrix-vector multiplications (MVM), vector updates (VU) and coarse system solves (CCS). Note that IP holds for sparse Z and MVM holds for full Z. [The numerical entries of this table were lost in transcription.]

of PREC. It appears that the condition number associated with PREC is always larger than those associated with the other projection methods. Therefore, we restrict ourselves to the projection methods in the analysis below. In [24, 25], it has been shown that the effective condition number of DEF1 is below the condition numbers of both AD and BNN, i.e.,

    κ_eff(M^{-1} P A) ≤ κ((M^{-1} + Q) A),
    κ_eff(M^{-1} P A) ≤ κ((P^T M^{-1} P + Q) A),

for all full-rank Z and SPD matrices A and M^{-1}. It is important to notice that such an inequality between AD and BNN does not hold. One would expect that the condition number associated with BNN is below the one associated with AD, but this is not the case; see [26] for a counterexample. Beside the comparisons of AD, DEF1 and BNN, done in [24–26], more relations between the eigenvalue distributions of these and other projection methods are given next. We show that DEF1, DEF2, R-BNN1 and R-BNN2 have identical spectra, and the same holds for BNN, A-DEF1 and A-DEF2, see Theorem 3.1. Here, σ(C) = {λ_1, λ_2, ..., λ_n} denotes the spectrum of an arbitrary matrix C with eigenvalues λ_i.

Theorem 3.1. The following two statements hold:

    σ(M^{-1} P A) = σ(P^T M^{-1} A) = σ(P^T M^{-1} P A);
    σ((P^T M^{-1} P + Q) A) = σ((M^{-1} P + Q) A) = σ((P^T M^{-1} + Q) A).

Proof. Note first that σ(CD) = σ(DC), σ(C + I) = σ(C) + 1 (an elementwise shift of the spectrum) and σ(C) = σ(C^T) hold for arbitrary matrices C, D ∈ R^{n×n}, see also [35, Lemma 3.1].
Using these facts and Lemma 2.2, we obtain immediately

    σ(M^{-1} P A) = σ(A M^{-1} P) = σ(P^T M^{-1} A),

and

    σ(M^{-1} P A) = σ(M^{-1} P^2 A) = σ(M^{-1} P A P^T) = σ(P^T M^{-1} P A),

which proves the first statement. Moreover, we have

    σ(P^T M^{-1} P A + Q A) = σ(P^T M^{-1} P A − P^T + I)
                            = σ((M^{-1} P A − I) P^T) + 1
                            = σ(M^{-1} P^2 A − P^T) + 1
                            = σ(M^{-1} P A − P^T + I)
                            = σ(M^{-1} P A + Q A),
and, likewise,

    σ(P^T M^{-1} A + Q A) = σ(P^T M^{-1} A − P^T) + 1
                          = σ(A M^{-1} P − P) + 1
                          = σ(P A M^{-1} P − P) + 1
                          = σ(P^T M^{-1} A P^T − P^T) + 1
                          = σ(P^T M^{-1} P A + Q A),

which completes the proof of the second statement.

As a consequence of Theorem 3.1, DEF1, DEF2, R-BNN1 and R-BNN2 can be seen as one class of projectors, whereas BNN, A-DEF1 and A-DEF2 form another class of projectors. These two classes can be connected by Theorem 2.8 of [25], which states that if σ(M^{-1}PA) = {0, ..., 0, μ_{k+1}, ..., μ_n} is given, then σ(P^T M^{-1}PA + QA) = {1, ..., 1, μ_{k+1}, ..., μ_n}. It appears that the reverse statement also holds. For completeness, these results are given in Theorem 3.2.

Theorem 3.2. Let the spectra of DEF1 and BNN be given by σ(M^{-1}PA) = {λ_1, ..., λ_n} and σ(P^T M^{-1}PA + QA) = {μ_1, ..., μ_n}, respectively. Then, the eigenvalues within these spectra can be numbered such that the following statements hold: λ_i = 0 and μ_i = 1 for i = 1, ..., k; and λ_i = μ_i for i = k + 1, ..., n.

Proof. By Lemma 2.2, (P^T M^{-1}P + Q)AZ = Z and M^{-1}PAZ = 0 hold. As a consequence, the columns of Z are eigenvectors corresponding to the eigenvalues of BNN and DEF1 which are equal to 1 and 0, respectively. Next, due to Theorem 2.8 of [25], it suffices to show that if σ(P^T M^{-1}PA + QA) = {1, ..., 1, μ_{k+1}, ..., μ_n} holds, then this implies σ(M^{-1}PA) = {0, ..., 0, μ_{k+1}, ..., μ_n}. The proof is as follows. Consider the eigenvalues μ_i and corresponding eigenvectors v_i, with i = k + 1, ..., n, of BNN, i.e., (P^T M^{-1}P + Q)Av_i = μ_i v_i, which implies

    P^T (P^T M^{-1} P + Q) A v_i = μ_i P^T v_i.    (3.1)

Applying Lemma 2.2, we have (P^T)^2 M^{-1}PA + P^T QA = P^T M^{-1}PAP^T. Using the latter expression, Eq. (3.1) can be rewritten as P^T M^{-1}PAw_i = μ_i w_i, with w_i = P^T v_i. Note that P^T x = 0 if x ∈ Col(Z), due to Lemma 2.2. However, w_i ≠ 0, since v_i ∉ Col(Z) for i = k + 1, ..., n.
Hence, μ_i is also an eigenvalue of P^T M^{-1}PA. Theorem 3.1 gives σ(M^{-1}PA) = σ(P^T M^{-1}PA), so that μ_i is also an eigenvalue of DEF1.

Due to Theorem 3.2, both DEF1 and BNN provide almost the same spectra, with the same clustering: the zero eigenvalues of DEF1 are replaced by eigenvalues equal to one if BNN is used. Next, Theorem 3.3 connects all methods in terms of spectra.

Theorem 3.3. Suppose the spectrum of DEF1, DEF2, R-BNN1 or R-BNN2 is given by {0, ..., 0, λ_{k+1}, ..., λ_n}, and let the spectrum of BNN, A-DEF1 or A-DEF2 be given by {1, ..., 1, μ_{k+1}, ..., μ_n}. If the λ_i and μ_i in these spectra are sorted increasingly, then λ_i = μ_i for all i = k + 1, ..., n.

Proof. The theorem follows immediately from Theorems 3.1 and 3.2.

From Theorem 3.3, it can be concluded that all projectors have almost the same clusters of eigenvalues.

3.2. Comparison of the Abstract Balancing and Other Projection Methods. In this subsection, it will be shown that DEF2, A-DEF2, R-BNN1 and R-BNN2 are identical methods in exact arithmetic. More importantly, we will prove that these projectors are equal to the expensive projector BNN, for certain starting
vectors. First, Lemma 3.4 shows that some steps in the BNN implementation can be reduced, see also [36].

Lemma 3.4. Suppose that V_start = Qb + P^T x, instead of V_start = x, is used in BNN. Then,

Qr_{j+1} = 0;   Pr_{j+1} = r_{j+1},

for all j = 0, 1, 2, ..., in the implementation of BNN.

Proof. Both statements can be proven by induction. For the first statement, the proof is as follows. It can be verified that Qr_1 = 0 and QAp_1 = 0. By the inductive hypothesis, Qr_j = 0 and QAp_j = 0 hold. Then, for the inductive step, we obtain Qr_{j+1} = 0 and QAp_{j+1} = 0, since

Qr_{j+1} = Qr_j − α_j QAp_j = 0,

and

QAp_{j+1} = QAz_{j+1} + β_j QAp_j = QAP^T M^{-1} P r_{j+1} + QAQ r_{j+1} = 0,

where we have used Lemma 2.2. Next, for the second statement, Pr_1 = r_1 and PAp_1 = Ap_1 can easily be shown. Assume that Pr_j = r_j and PAp_j = Ap_j hold. Then, both Pr_{j+1} = r_{j+1} and PAp_{j+1} = Ap_{j+1} hold, because

Pr_{j+1} = Pr_j − α_j PAp_j = r_j − α_j Ap_j = r_{j+1},

and

PAp_{j+1} = PAz_{j+1} + β_j PAp_j = PAP^T M^{-1} P r_{j+1} + β_j Ap_j = AP^T M^{-1} P r_{j+1} + β_j Ap_j = A(z_{j+1} + β_j p_j) = Ap_{j+1},

where we have applied the result of the first statement. This concludes the proof.

As a result of Lemma 3.4, it can be concluded that, if V_start = Qb + P^T x is used, BNN is completely identical to R-BNN1, R-BNN2 and A-DEF2 in exact arithmetic. Moreover, it is also mathematically identical to DEF2, since the operator P^T M^{-1} is the same in R-BNN2 and DEF2. Thus, besides the fact that DEF2, A-DEF2, R-BNN1 and R-BNN2 are identical methods, they are also identical to BNN in some cases. These results are summarized in Theorem 3.5.

Theorem 3.5. BNN with V_start = Qb + P^T x is identical to DEF2, A-DEF2, R-BNN1 and R-BNN2, in exact arithmetic.

As a consequence of Theorem 3.5, the corresponding operators are all appropriate, so that CG works well in combination with them.
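Before turning to numerical stability, the spectral statements of Theorems 3.2 and 3.3 can be verified directly on a small dense example. The sketch below is a minimal numpy experiment, not the paper's implementation: the random SPD matrix, the deflation subspace Z and the Jacobi choice for M^{-1} are illustrative assumptions. It builds E = Z^T A Z, Q = Z E^{-1} Z^T and P = I − AQ, and compares the spectra of DEF1, M^{-1}PA, and BNN, (P^T M^{-1} P + Q)A.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 3
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)              # SPD test matrix (illustrative)
Z = rng.standard_normal((n, k))          # arbitrary full-rank deflation subspace
Minv = np.diag(1.0 / np.diag(A))         # Jacobi preconditioner as M^{-1}

E = Z.T @ A @ Z                          # coarse (Galerkin) matrix
Q = Z @ np.linalg.solve(E, Z.T)          # Q = Z E^{-1} Z^T
P = np.eye(n) - A @ Q                    # deflation projector, with P^T = I - QA

ev_def1 = np.sort(np.linalg.eigvals(Minv @ P @ A).real)             # DEF1
ev_bnn = np.sort(np.linalg.eigvals((P.T @ Minv @ P + Q) @ A).real)  # BNN

# DEF1 has k zero eigenvalues (Theorem 3.2) ...
assert np.all(np.abs(ev_def1[:k]) < 1e-8)
# ... and BNN replaces them by ones, keeping the rest (Theorems 3.2 and 3.3):
assert np.allclose(np.sort(np.append(ev_def1, np.ones(k))),
                   np.sort(np.append(ev_bnn, np.zeros(k))), atol=1e-8)
```

Appending k ones to the DEF1 spectrum and k zeros to the BNN spectrum makes the two multisets identical, which is exactly the statement of Theorem 3.3.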
Moreover, note that Lemma 3.4 may not be fully satisfied in practice, due to rounding errors. Therefore, although BNN, DEF2, A-DEF2, R-BNN1 and R-BNN2 are mathematically identical, all involved projectors except BNN may lead to inaccurate solutions and may suffer from instabilities in numerical experiments. In these cases, the projection and correction steps omitted from the BNN algorithm appear to be important. Next, we provide a more detailed comparison between BNN and DEF1, in terms of exact errors in the A-norm, see Theorem 3.6. It is a generalization of Theorems 3.4 and 3.5 of [25], where we now apply random starting vectors, x, instead of zero starting vectors.
Theorem 3.6. Let (x_{j+1})_{DEF1} and (x_{j+1})_{BNN} denote the iterates x_{j+1} of DEF1 and BNN, respectively. Then, they satisfy

‖x − (x_{j+1})_{DEF1}‖_A ≤ ‖x − (x_{j+1})_{BNN}‖_A;
(x_{j+1})_{DEF1} = (x_{j+1})_{BNN}, if V_start = Qb + P^T x instead of V_start = x is used in BNN.

Proof. The proof is largely the same as that of [25, Thm. 3.4 and 3.5].

From Theorem 3.6, we conclude that the errors of the iterates built by DEF1 are never larger than those of BNN. Even stronger, DEF1 and BNN produce the same iterates in exact arithmetic, if V_start = Qb + P^T x is used in BNN.

4. Numerical Comparison. In this section, a numerical comparison of the projectors is given. We restrict ourselves to one specific test problem, a 2-D porous media flow problem. Other 2-D test problems, with the Laplace equation and bubbly flows, have been considered in [35]. Since those results are comparable, it suffices to present the results of one test problem. We consider the Poisson equation with a discontinuous coefficient,

−∇ · ( ρ(x)^{-1} ∇p(x) ) = 0,   x = (x, y) ∈ Ω = (0, 1)²,   (4.1)

where ρ and p denote the piecewise-constant density and the fluid pressure, respectively. The contrast ε = 10^{-6}, the jump between the high and low density, is fixed. In addition, we impose a Dirichlet condition on the boundary y = 1 and homogeneous Neumann conditions on the other boundaries. The geometry of the problem, which consists of layers, is given in Figure 4.1. The layers are denoted by the disjoint sets Ω_j, j = 1, 2, ..., k, such that Ω = ∪_{j=1}^k Ω_j. The discretized domain and layers are denoted by Ω_h and Ω_{h_j}, respectively. Then, for each Ω_{h_j} with j = 1, 2, ..., k, each deflation vector, z_j, is defined as follows:

(z_j)_i := 0, for x_i ∈ Ω_h \ Ω_{h_j};   (z_j)_i := 1, for x_i ∈ Ω_{h_j},

where x_i is a grid point of Ω_h. Then, we define Z := [z_1 z_2 ··· z_k], consisting of orthogonal, disjoint piecewise-constant vectors.
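The layerwise deflation vectors above can be assembled as in the following sketch. It assumes, for illustration only, a lexicographic ordering of an nx × nx grid and k horizontal layers of (nearly) equal height; the actual layer geometry of Figure 4.1 may differ.

```python
import numpy as np

nx, k = 29, 5                         # grid size and number of layers (n = nx^2)
rows = (np.arange(nx) * k) // nx      # layer index of each horizontal grid line
layer_of_point = np.repeat(rows, nx)  # lexicographic ordering of the grid points

# (z_j)_i = 1 if grid point x_i lies in layer j, and 0 otherwise
Z = np.zeros((nx * nx, k))
Z[np.arange(nx * nx), layer_of_point] = 1.0

# the columns have disjoint supports, hence Z^T Z is diagonal (mutual orthogonality)
assert np.allclose(Z.T @ Z, np.diag(Z.sum(axis=0)))
assert np.allclose(Z.sum(axis=1), 1.0)  # every point belongs to exactly one layer
```

Because each column is a 0/1 indicator of one layer, Z is extremely sparse, which keeps the cost of applying Q = Z E^{-1} Z^T low.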
Other, more sophisticated choices of the deflation vectors can also be used, see [35, Sect. 4.1].

[Fig. 4.1. Geometry of the porous media problem with k = 5 layers (alternating shale and sandstone, Ω_1, ..., Ω_5), each having a fixed density ρ. The number of deflation vectors is equal to the number of layers.]

A standard second-order finite-difference scheme is used to discretize (4.1), resulting in our main linear system, Ax = b, with A ∈ R^{n×n}. Moreover, we choose the Incomplete Cholesky decomposition [22] as the preconditioner, M^{-1}, but any other SPD preconditioner could also be used.
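For reference, a second-order five-point discretization can be sketched as follows. This minimal version is a simplification of (4.1): it assumes a constant coefficient ρ ≡ 1 and Dirichlet conditions on all boundaries, unlike the jumping ρ and mixed boundary conditions used above, and serves only to illustrate that the resulting A is symmetric positive definite.

```python
import numpy as np

nx = 29                                                  # interior points per direction
T = 2 * np.eye(nx) - np.eye(nx, k=1) - np.eye(nx, k=-1)  # 1-D second-difference matrix
I = np.eye(nx)
A = np.kron(I, T) + np.kron(T, I)                        # 5-point Laplacian, n = nx^2

assert np.allclose(A, A.T)                # symmetric
assert np.linalg.eigvalsh(A).min() > 0    # and positive definite
```

In practice one stores A in a sparse format; the dense Kronecker construction is used here only to keep the sketch short.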
We will start with a numerical experiment using standard parameters, which means that an appropriate termination criterion, exact computation of E^{-1} and exact starting vectors are used. Subsequently, numerical experiments will be performed with an inexact E^{-1}, severe termination tolerances and perturbed starting vectors, respectively. The results for each method will be presented by giving the number of iterations and the 2-norms of the exact errors (i.e., ‖x_it − x‖_2) in a table, and by giving the exact errors in the A-norm during the iteration process (i.e., ‖x_j − x‖_A, with x_j denoting the j-th iterate) in a figure. Note that figures with the residuals and with the exact errors in the 2-norm (i.e., ‖x_j − x‖_2) during the iteration process are omitted, because they do not provide extra relevant information in our test cases. Finally, for all methods, we terminate the iterative process if the norm of the relative preconditioned/projected residual, ‖z_{j+1}‖_2 / ‖z_1‖_2, is below a tolerance δ, or if the maximum allowed number of iterations, equal to 250, is reached.

4.1. Experiment using Standard Parameters. In the first numerical experiment, standard parameters are used, with stopping tolerance δ = 10^{-8}, exact coarse matrix E^{-1} and exact starting vectors V_start. The results are presented in Table 4.1 and Figure 4.2. Note that we only give figures for a few test cases, because they all show similar behavior. Moreover, for the sake of a better view, the results for PREC are omitted in these figures.

[Table 4.1. Number of required iterations for convergence and the 2-norm of the exact error, ‖x_it − x‖_2, of all proposed methods (PREC, AD, DEF1, DEF2, A-DEF1, A-DEF2, BNN, R-BNN1, R-BNN2), for the test problem with standard parameters and n = 29², 54² (k = 5) and n = 41², 55² (k = 7).]
Considering both Table 4.1 and Figure 4.2, we notice that all methods need more iterations to converge as the number of grid points, n, or the number of layers, k, is enlarged. It can also be seen that all methods, except PREC, AD and A-DEF1, perform more or less the same, which confirms the theory (cf. Theorems 3.1 and 3.3). PREC is obviously the slowest method, followed by AD. Moreover, the differences between AD and the other projectors are relatively small, although the plots show an erratic behavior for AD. Additionally, A-DEF1 converges somewhat more slowly, and gives less accurate solutions, than the other methods except PREC and AD.

4.2. Experiment using Inaccurate Coarse Solves. In practice, it may be difficult to find an accurate solution of the coarse system, Ey = z, at each iteration of a projection method. Therefore, only an approximate solution ỹ may be available, for instance if one uses an iterative solver with a low accuracy. In this case, ỹ can be interpreted as Ẽ^{-1} z, where Ẽ is an inexact matrix based on E. This motivates our next experiment, where we use Ẽ^{-1} defined as

Ẽ^{-1} := (I + ψR) E^{-1} (I + ψR),   ψ > 0,   (4.2)
where R ∈ R^{k×k} is a symmetric random matrix with elements from the interval [−0.5, 0.5], see also [24, Sect. 3] for more details. Note that the theoretical results, as derived in Subsection 3.2, no longer hold. The sensitivity of the projectors to this inaccurate coarse matrix will be investigated for various values of ψ. Note that the results for PREC are not influenced by this adaptation of E^{-1}; they are only included for reference. The results can be found in Table 4.2 and Figure 4.3.

[Fig. 4.2. Exact errors in the A-norm, ‖x_j − x‖_A, of AD, DEF1, DEF2, A-DEF1, A-DEF2, BNN, R-BNN1 and R-BNN2, for the test problem with standard parameters: (a) n = 29², k = 5; (b) n = 41², k = 7.]

By studying Table 4.2 and Figure 4.3, we see that the most stable projectors are AD, BNN and A-DEF2. Furthermore, notice that A-DEF1, R-BNN1 and R-BNN2 converge for ψ ≤ 10^{-8}, but sometimes to an incorrect solution. In addition, DEF1 and DEF2 are obviously the worst methods, which is as expected, since the zero eigenvalues of the deflated system become small non-zero eigenvalues, due to
the perturbation. It can be observed that the errors diverge in all test cases for DEF2, whereas they remain bounded in the case of DEF1.

[Table 4.2. Number of required iterations for convergence and the 2-norm of the exact error of all methods, for the test problem with parameters n = 29², k = 5 and inexact coarse matrix Ẽ^{-1}, for several values of ψ up to 10^{-4}; NC denotes no convergence.]

4.3. Experiment using Severe Termination Tolerances. We perform a numerical experiment using various values for the tolerance δ. Note that a relatively small δ may lead to a termination criterion that is too severe with respect to the machine precision. However, the goal of this experiment is to test the sensitivity of the projectors to δ, rather than to perform realistic experiments. The results are provided in Table 4.3 and Figure 4.4.

[Table 4.3. Number of required iterations for convergence and the 2-norm of the exact error of all methods, for the test problem with parameters n = 29², k = 5 and various termination tolerances, starting from δ = 10^{-8}; NC denotes no convergence.]

By considering Table 4.3 and Figure 4.4, it can be seen that all methods, except A-DEF1, perform well for moderate tolerances. However, for δ ≤ 10^{-16}, A-DEF1, R-BNN2, DEF1 and DEF2 show difficulties, since they do not converge appropriately, or even diverge. This is in contrast to PREC, AD, A-DEF2, BNN and R-BNN1, which give good convergence results in all test cases. Therefore, these projectors can be characterized as stable methods with respect to severe termination tolerances.

4.4. Experiment using Perturbed Starting Vectors. In Subsection 3.2, it has been proven that BNN with V_start = Qb + P^T x is equal to DEF2, A-DEF2, R-BNN1 and R-BNN2, in exact arithmetic.
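The special role of this starting vector can be illustrated with a small numpy sketch (random SPD data, for illustration only, not the paper's test problem): with x_0 = Qb + P^T x̄, the initial residual already satisfies the two invariants Qr_0 = 0 and Pr_0 = r_0 of Lemma 3.4, for any guess x̄.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 15, 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # SPD test matrix (illustrative)
Z = rng.standard_normal((n, k))      # full-rank deflation subspace
E = Z.T @ A @ Z
Q = Z @ np.linalg.solve(E, Z.T)      # Q = Z E^{-1} Z^T
P = np.eye(n) - A @ Q                # deflation projector, P^T = I - QA

b = rng.standard_normal(n)
xbar = rng.standard_normal(n)        # arbitrary starting guess

x0 = Q @ b + P.T @ xbar              # the special starting vector Qb + P^T x
r0 = b - A @ x0                      # initial residual

assert np.allclose(Q @ r0, 0)        # Qr_0 = 0 (Lemma 3.4)
assert np.allclose(P @ r0, r0)       # Pr_0 = r_0 (Lemma 3.4)
```

This holds because r_0 = P(b − A x̄), QP = 0 and P² = P; CG then keeps these invariants in exact arithmetic, which is what makes BNN reducible to its cheaper variants.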
In this case, the resulting operators are well-defined, and they should perform appropriately. In our next experiment, we perturb V_start in DEF2, A-DEF2, R-BNN1 and R-BNN2, and examine whether this influences the results. Note that, in this case, there are no equivalences between
BNN and these methods.

[Fig. 4.3. Exact errors in the A-norm, ‖x_j − x‖_A, of AD, DEF1, DEF2, A-DEF1, A-DEF2, BNN, R-BNN1 and R-BNN2, for the test problem with n = 29², k = 5 and inexact coarse matrix Ẽ^{-1}, for two values of ψ.]

The perturbed starting vector, denoted by W_start, is defined by

W_start := (1 + γ y_0) V_start,   γ ≥ 0,

where y_0 is a random vector with elements from the interval [−0.5, 0.5]. Note that, if DEF2, R-BNN1 or R-BNN2 converges with W_start, we may obtain a non-unique solution, since the corresponding operator is singular. Therefore, as in the case of DEF1, we should apply the uniqueness step mentioned in Remark 1 at the end of the iteration process. Note that this procedure is not required for A-DEF2, because that method corresponds to a non-singular operator. Now, we perform the numerical experiment using W_start for different values of γ. The results
can be found in Table 4.4 and Figure 4.5. Here, we use asterisks to stress that extra uniqueness steps are applied for some methods. Moreover, notice that PREC, AD, DEF1 and BNN are not included in this experiment, since they apply the random vector V_start = x by definition.

[Fig. 4.4. Exact errors in the A-norm, ‖x_j − x‖_A, of AD, DEF1, DEF2, A-DEF1, A-DEF2, BNN, R-BNN1 and R-BNN2, for the test problem with n = 29², k = 5 and two severe termination tolerances δ.]

From the results, it can be noticed that all involved methods converge appropriately for γ = 10^{-8}. For γ ≥ 10^{-6}, DEF2 and R-BNN2 fail to converge, or converge to the wrong solution, even after the uniqueness step. The most stable methods are obviously A-DEF2 and R-BNN1. This experiment clearly shows that the reduced variants of BNN have different stability properties with respect
[Table 4.4. Number of required iterations for convergence and the 2-norm of the exact error of DEF2, A-DEF2, R-BNN1 and R-BNN2, for the test problem with parameters n = 29², k = 5 and perturbed starting vectors with γ = 10^{-8}, 10^{-6}, 10^{0}. An asterisk (*) means that an extra uniqueness step is applied in that test case; NC denotes no convergence.]

[Fig. 4.5. Exact errors in the A-norm, ‖x_j − x‖_A, of DEF2, A-DEF2, R-BNN1 and R-BNN2, for the test problem with n = 29², k = 5 and perturbed starting vectors, for two values of γ.]
to perturbations in starting vectors.

4.5. Summary and Discussion. From a numerical point of view, we have observed that the theoretical results only hold for standard parameters, that is, with exact computation of the coarse-matrix inverse E^{-1}, an appropriate choice of the stopping tolerance and, for certain projectors, no perturbations in the starting vectors. In these cases, the numerical results confirm the theoretical fact that all projection methods perform approximately the same, although A-DEF1 showed problems in some test cases. This can be understood from the fact that A-DEF1 corresponds to a non-SPSD operator.

If the dimension of the coarse matrix, E, becomes large, it is favorable to solve the corresponding systems iteratively with a low accuracy. In this case, we have seen that DEF1, DEF2, R-BNN1 and R-BNN2 had difficulties converging. It could be observed that the errors during the iterative process of DEF2 exploded, whereas DEF1 converged slowly to the solution. The most robust methods turned out to be AD, BNN, A-DEF1 and A-DEF2.

If the matrix A is ill-conditioned and the tolerance of the termination criterion, chosen by the user, becomes too severe, it would be advantageous if the projection method still worked appropriately. However, we have observed that DEF1, DEF2, R-BNN2 and A-DEF1 cannot deal with too strict tolerances. This is in contrast to AD, BNN, A-DEF2 and R-BNN1, which remain stable in all test cases.

In theory, BNN is identical to DEF2, A-DEF2, R-BNN1 and R-BNN2, for certain starting vectors. Besides the fact that these reduced variants, except A-DEF2, were not able to deal with inaccurate coarse solves, some of them were also sensitive to perturbations of the starting vector. Both DEF2 and R-BNN2 were unstable, whereas A-DEF2 and R-BNN1 appeared to be insensitive to these perturbations.
This can be of importance if one uses multigrid-like subdomains, where the number of subdomains, k, is very large, and the starting vector cannot be computed accurately.

In the numerical experiments, we have observed that several methods showed divergence, stagnation or erratic behavior of the errors during the iterative process. This may be caused by the fact that the residuals gradually lose orthogonality with respect to the columns of Z, see also [30]. It can easily be shown that

Z^T r_j = 0,   j = 1, 2, ...,   (4.3)

should hold for DEF1, DEF2, A-DEF2, R-BNN1 and R-BNN2. However, it appeared that (4.3) was not always satisfied in the experiments. A remedy that recovers this orthogonality for the badly-converging methods is described in, e.g., [30]. If we define the reorthogonalization matrix W as

W := I − Z(Z^T Z)^{-1} Z^T,   (4.4)

then W is orthogonal to Z, i.e.,

Z^T W = Z^T − Z^T Z (Z^T Z)^{-1} Z^T = 0.   (4.5)

Now, orthogonality of the residuals, r_j, can be preserved by premultiplying r_j by W right after r_j is computed in the algorithm:

r_j := W r_j,   j = 1, 2, ....   (4.6)

As a consequence, these adapted residuals satisfy (4.3), due to (4.5). (1)

(1) Note that (4.3) is not valid for AD, A-DEF1 and BNN. In the case of AD and BNN, this is not a problem, because they appear extremely stable in most test cases. This is in contrast to A-DEF1, which is unstable in several test cases; the instability of this projector cannot be resolved using the reorthogonalization strategy. Moreover, note that the reorthogonalization operation (4.6) is relatively cheap, provided that Z is sparse.
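The reorthogonalization steps (4.4)–(4.6) can be sketched as follows. This is a minimal numpy illustration with a random Z; in an actual solver one would apply W without forming it explicitly, exploiting the sparsity of Z.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 30, 5
Z = rng.standard_normal((n, k))                    # full-rank deflation-vector matrix
W = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)  # reorthogonalization matrix (4.4)

assert np.allclose(Z.T @ W, 0)                     # (4.5): Z^T W = 0

r = rng.standard_normal(n)      # a residual that has drifted away from Z-orthogonality
r = W @ r                       # correction step (4.6), applied right after computing r
assert np.allclose(Z.T @ r, 0)  # the adapted residual satisfies (4.3)
```

Since W is the orthogonal projector onto the complement of Col(Z), applying it costs only two sparse matrix-vector products with Z (plus one small k × k solve) per iteration.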
More informationPreconditioned Locally Minimal Residual Method for Computing Interior Eigenpairs of Symmetric Operators
Preconditioned Locally Minimal Residual Method for Computing Interior Eigenpairs of Symmetric Operators Eugene Vecharynski 1 Andrew Knyazev 2 1 Department of Computer Science and Engineering University
More informationConjugate Gradient Method
Conjugate Gradient Method direct and indirect methods positive definite linear systems Krylov sequence spectral analysis of Krylov sequence preconditioning Prof. S. Boyd, EE364b, Stanford University Three
More informationCME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication.
CME342 Parallel Methods in Numerical Analysis Matrix Computation: Iterative Methods II Outline: CG & its parallelization. Sparse Matrix-vector Multiplication. 1 Basic iterative methods: Ax = b r = b Ax
More information7.3 The Jacobi and Gauss-Siedel Iterative Techniques. Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP.
7.3 The Jacobi and Gauss-Siedel Iterative Techniques Problem: To solve Ax = b for A R n n. Methodology: Iteratively approximate solution x. No GEPP. 7.3 The Jacobi and Gauss-Siedel Iterative Techniques
More informationINTERGRID OPERATORS FOR THE CELL CENTERED FINITE DIFFERENCE MULTIGRID ALGORITHM ON RECTANGULAR GRIDS. 1. Introduction
Trends in Mathematics Information Center for Mathematical Sciences Volume 9 Number 2 December 2006 Pages 0 INTERGRID OPERATORS FOR THE CELL CENTERED FINITE DIFFERENCE MULTIGRID ALGORITHM ON RECTANGULAR
More informationAlternative correction equations in the Jacobi-Davidson method
Chapter 2 Alternative correction equations in the Jacobi-Davidson method Menno Genseberger and Gerard Sleijpen Abstract The correction equation in the Jacobi-Davidson method is effective in a subspace
More informationMultigrid and Domain Decomposition Methods for Electrostatics Problems
Multigrid and Domain Decomposition Methods for Electrostatics Problems Michael Holst and Faisal Saied Abstract. We consider multigrid and domain decomposition methods for the numerical solution of electrostatics
More informationIntroduction. Chapter One
Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and
More informationUnconstrained optimization
Chapter 4 Unconstrained optimization An unconstrained optimization problem takes the form min x Rnf(x) (4.1) for a target functional (also called objective function) f : R n R. In this chapter and throughout
More informationA Jacobi Davidson Method with a Multigrid Solver for the Hermitian Wilson-Dirac Operator
A Jacobi Davidson Method with a Multigrid Solver for the Hermitian Wilson-Dirac Operator Artur Strebel Bergische Universität Wuppertal August 3, 2016 Joint Work This project is joint work with: Gunnar
More informationarxiv: v1 [hep-lat] 2 May 2012
A CG Method for Multiple Right Hand Sides and Multiple Shifts in Lattice QCD Calculations arxiv:1205.0359v1 [hep-lat] 2 May 2012 Fachbereich C, Mathematik und Naturwissenschaften, Bergische Universität
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationNewton-Multigrid Least-Squares FEM for S-V-P Formulation of the Navier-Stokes Equations
Newton-Multigrid Least-Squares FEM for S-V-P Formulation of the Navier-Stokes Equations A. Ouazzi, M. Nickaeen, S. Turek, and M. Waseem Institut für Angewandte Mathematik, LSIII, TU Dortmund, Vogelpothsweg
More informationDomain decomposition for the Jacobi-Davidson method: practical strategies
Chapter 4 Domain decomposition for the Jacobi-Davidson method: practical strategies Abstract The Jacobi-Davidson method is an iterative method for the computation of solutions of large eigenvalue problems.
More informationJae Heon Yun and Yu Du Han
Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose
More informationLecture 18 Classical Iterative Methods
Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,
More informationMULTIGRID ARNOLDI FOR EIGENVALUES
1 MULTIGRID ARNOLDI FOR EIGENVALUES RONALD B. MORGAN AND ZHAO YANG Abstract. A new approach is given for computing eigenvalues and eigenvectors of large matrices. Multigrid is combined with the Arnoldi
More informationITERATIVE METHODS BASED ON KRYLOV SUBSPACES
ITERATIVE METHODS BASED ON KRYLOV SUBSPACES LONG CHEN We shall present iterative methods for solving linear algebraic equation Au = b based on Krylov subspaces We derive conjugate gradient (CG) method
More informationNumerical Solution I
Numerical Solution I Stationary Flow R. Kornhuber (FU Berlin) Summerschool Modelling of mass and energy transport in porous media with practical applications October 8-12, 2018 Schedule Classical Solutions
More informationSolution of eigenvalue problems. Subspace iteration, The symmetric Lanczos algorithm. Harmonic Ritz values, Jacobi-Davidson s method
Solution of eigenvalue problems Introduction motivation Projection methods for eigenvalue problems Subspace iteration, The symmetric Lanczos algorithm Nonsymmetric Lanczos procedure; Implicit restarts
More informationAdaptive Multigrid for QCD. Lattice University of Regensburg
Lattice 2007 University of Regensburg Michael Clark Boston University with J. Brannick, R. Brower, J. Osborn and C. Rebbi -1- Lattice 2007, University of Regensburg Talk Outline Introduction to Multigrid
More informationScalable Domain Decomposition Preconditioners For Heterogeneous Elliptic Problems
Scalable Domain Decomposition Preconditioners For Heterogeneous Elliptic Problems Pierre Jolivet, F. Hecht, F. Nataf, C. Prud homme Laboratoire Jacques-Louis Lions Laboratoire Jean Kuntzmann INRIA Rocquencourt
More informationPreconditioned inverse iteration and shift-invert Arnoldi method
Preconditioned inverse iteration and shift-invert Arnoldi method Melina Freitag Department of Mathematical Sciences University of Bath CSC Seminar Max-Planck-Institute for Dynamics of Complex Technical
More informationLinear algebra issues in Interior Point methods for bound-constrained least-squares problems
Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek
More informationThe Removal of Critical Slowing Down. Lattice College of William and Mary
The Removal of Critical Slowing Down Lattice 2008 College of William and Mary Michael Clark Boston University James Brannick, Rich Brower, Tom Manteuffel, Steve McCormick, James Osborn, Claudio Rebbi 1
More information4.6 Iterative Solvers for Linear Systems
4.6 Iterative Solvers for Linear Systems Why use iterative methods? Virtually all direct methods for solving Ax = b require O(n 3 ) floating point operations. In practical applications the matrix A often
More informationA SHORT NOTE COMPARING MULTIGRID AND DOMAIN DECOMPOSITION FOR PROTEIN MODELING EQUATIONS
A SHORT NOTE COMPARING MULTIGRID AND DOMAIN DECOMPOSITION FOR PROTEIN MODELING EQUATIONS MICHAEL HOLST AND FAISAL SAIED Abstract. We consider multigrid and domain decomposition methods for the numerical
More informationThe amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.
AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative
More informationLecture # 20 The Preconditioned Conjugate Gradient Method
Lecture # 20 The Preconditioned Conjugate Gradient Method We wish to solve Ax = b (1) A R n n is symmetric and positive definite (SPD). We then of n are being VERY LARGE, say, n = 10 6 or n = 10 7. Usually,
More informationThe solution of the discretized incompressible Navier-Stokes equations with iterative methods
The solution of the discretized incompressible Navier-Stokes equations with iterative methods Report 93-54 C. Vuik Technische Universiteit Delft Delft University of Technology Faculteit der Technische
More information6. Iterative Methods for Linear Systems. The stepwise approach to the solution...
6 Iterative Methods for Linear Systems The stepwise approach to the solution Miriam Mehl: 6 Iterative Methods for Linear Systems The stepwise approach to the solution, January 18, 2013 1 61 Large Sparse
More informationKrylov Subspace Methods that Are Based on the Minimization of the Residual
Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean
More informationIterative methods for Linear System
Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and
More informationDELFT UNIVERSITY OF TECHNOLOGY
DELFT UNIVERSITY OF TECHNOLOGY REPORT 11-14 On the convergence of inexact Newton methods R. Idema, D.J.P. Lahaye, and C. Vuik ISSN 1389-6520 Reports of the Department of Applied Mathematical Analysis Delft
More informationComputational Linear Algebra
Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 3: Iterative Methods PD
More informationIndefinite and physics-based preconditioning
Indefinite and physics-based preconditioning Jed Brown VAW, ETH Zürich 2009-01-29 Newton iteration Standard form of a nonlinear system F (u) 0 Iteration Solve: Update: J(ũ)u F (ũ) ũ + ũ + u Example (p-bratu)
More informationIterative Methods for Sparse Linear Systems
Iterative Methods for Sparse Linear Systems Luca Bergamaschi e-mail: berga@dmsa.unipd.it - http://www.dmsa.unipd.it/ berga Department of Mathematical Methods and Models for Scientific Applications University
More informationIterative Methods for Linear Systems of Equations
Iterative Methods for Linear Systems of Equations Projection methods (3) ITMAN PhD-course DTU 20-10-08 till 24-10-08 Martin van Gijzen 1 Delft University of Technology Overview day 4 Bi-Lanczos method
More informationContents. Preface... xi. Introduction...
Contents Preface... xi Introduction... xv Chapter 1. Computer Architectures... 1 1.1. Different types of parallelism... 1 1.1.1. Overlap, concurrency and parallelism... 1 1.1.2. Temporal and spatial parallelism
More information7.2 Steepest Descent and Preconditioning
7.2 Steepest Descent and Preconditioning Descent methods are a broad class of iterative methods for finding solutions of the linear system Ax = b for symmetric positive definite matrix A R n n. Consider
More informationInexact inverse iteration for symmetric matrices
Linear Algebra and its Applications 46 (2006) 389 43 www.elsevier.com/locate/laa Inexact inverse iteration for symmetric matrices Jörg Berns-Müller a, Ivan G. Graham b, Alastair Spence b, a Fachbereich
More informationA Recursive Trust-Region Method for Non-Convex Constrained Minimization
A Recursive Trust-Region Method for Non-Convex Constrained Minimization Christian Groß 1 and Rolf Krause 1 Institute for Numerical Simulation, University of Bonn. {gross,krause}@ins.uni-bonn.de 1 Introduction
More information