SIAM J. MATRIX ANAL. APPL., Vol. 35, No. 1, © 2014 Society for Industrial and Applied Mathematics

A NEW ANALYSIS OF BLOCK PRECONDITIONERS FOR SADDLE POINT PROBLEMS

YVAN NOTAY

Abstract. We consider symmetric saddle point matrices. We analyze block preconditioners based on the knowledge of a good approximation for both the top left block and the Schur complement resulting from its elimination. We obtain bounds on the eigenvalues of the preconditioned matrix that depend only on the quality of these approximations, as measured by the related condition numbers. Our analysis applies to indefinite block diagonal preconditioners, block triangular preconditioners, inexact Uzawa preconditioners, block approximate factorization preconditioners, and a further enhancement of these preconditioners based on symmetric block Gauss–Seidel-type iterations. The analysis is unified and allows the comparison of these different approaches. In particular, it reveals that block triangular and inexact Uzawa preconditioners lead to identical eigenvalue distributions. These theoretical results are illustrated on the discrete Stokes problem. It turns out that the provided bounds allow one to localize accurately both real and nonreal eigenvalues. The relative quality of the different types of preconditioners is also as expected from the theory.

Key words. saddle point, preconditioning, Uzawa method, block triangular, SIMPLE, convergence analysis, linear systems, Stokes problem, PDE-constrained optimization

AMS subject classifications. 65F08, 65F10, 65F50, 65N

DOI. 10.1137/

1. Introduction. We consider linear systems K u = b for which the system matrix K has the following saddle point structure:

(1.1)    K = [ A   B^T ;  B   -C ],

where A is an n × n symmetric positive definite (SPD) matrix and where C is an m × m symmetric nonnegative definite matrix. We also assume that m ≤ n and that B has full rank, or that C is positive definite on the null space of B^T (the case of rank deficient B with C = 0 is treated in the appendix). These assumptions entail that the system is nonsingular; see, e.g., [4]. We also refer to this work for an overview of the many applications in which such linear systems arise, as well as for a general introduction to the different solution methods.

Our focus in this paper is on an important class of preconditioning techniques that exploit the knowledge of a good preconditioner M_A for A, and of a good preconditioner M_S for the negative Schur complement

(1.2)    S = C + B A^{-1} B^T.

Since both A and S are SPD, we assume that M_A and M_S are SPD as well. Techniques for obtaining such preconditioners are often application dependent; see, again, [4] for examples and pointers to the literature. Here we disregard internal details of

(Footnote: Received by the editors March 5, 2013; accepted for publication in revised form by M. Benzi September 2013; published electronically February 6, 2014. Service de Métrologie Nucléaire (C.P. 165/84), Université Libre de Bruxelles, B-1050 Brussels, Belgium (ynotay@ulb.ac.be). Yvan Notay is Research Director of the Fonds de la Recherche Scientifique – FNRS.)

these preconditioners and develop an analysis of preconditioning schemes for K that depends only on the extremal eigenvalues

(1.3)    μ1 = λ_min(M_A^{-1} A),    μ2 = λ_max(M_A^{-1} A),
(1.4)    ν1 = λ_min(M_S^{-1} S),    ν2 = λ_max(M_S^{-1} S),

where λ_min (resp., λ_max) stands for the smallest (resp., largest) eigenvalue. Hence our results apply regardless of the application context as soon as estimates are available for these four parameters; see [5] and [33] for examples of derivation of such estimates in the contexts of Stokes and PDE-constrained optimization problems, respectively.

Our analysis applies to most indefinite preconditioners in block form, whose indefiniteness is tailored to compensate for the indefiniteness of the system matrix, in the sense that the preconditioned matrix has only eigenvalues with positive real part. This includes indefinite block diagonal preconditioners

(1.5)    M_d = [ M_A   0 ;  0   -M_S ],

block triangular preconditioners

(1.6)    M_t = [ M_A   B^T ;  0   -M_S ],

inexact (or preconditioned) Uzawa preconditioners

(1.7)    M_u = [ M_A   0 ;  B   -M_S ],

block approximate factorization preconditioners

(1.8)    M_f = [ I   0 ;  B M_A^{-1}   I ] [ M_A   0 ;  0   -M_S ] [ I   M_A^{-1} B^T ;  0   I ],

and further enhancements of these preconditioners based on symmetric block Gauss–Seidel-type iterations; see section 2. Note that the SIMPLE preconditioner (e.g., [30, 43]) is a particular case of the block approximate factorization preconditioner as defined above; see also [4] for further related variants. These preconditioners are sometimes seen as symmetrized variants of block triangular or inexact Uzawa preconditioners. This framework also describes some multigrid smoothers based on distributive relaxation; see [4] for a discussion and further references.

When M_A = A and M_S = S, it is known that all these preconditioners but M_d are such that the preconditioned matrix has all eigenvalues equal to 1 and minimal polynomial of degree at most 2 [4, 0]. On the other hand, with M_d, there are only three distinct eigenvalues when C = 0 [7]. However, using these ideal preconditioners requires exact solves with A and S, which is often impractical; just the computation of S can be prohibitive. Here we investigate the effect of using instead approximations M_A and M_S. We analyze how the eigenvalue distributions are affected by providing bounds, where bounds, for nonreal eigenvalues, have to be understood as combinations of inequalities proving their clustering in a confined region of the complex plane.
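To make the block structure of (1.5)–(1.8) concrete, the following small NumPy sketch (not part of the original paper; random test data) assembles K and the four preconditioners and checks the ideal case M_A = A, M_S = S referred to above: the triangular, Uzawa, and factorization variants then yield a preconditioned matrix with all eigenvalues equal to 1, while M_d yields the three values 1 and (1 ± i√3)/2 when C = 0.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 8, 3
    X = rng.standard_normal((n, n))
    A = X @ X.T + n * np.eye(n)                  # SPD top left block
    B = rng.standard_normal((m, n))              # full rank with probability 1
    C = np.zeros((m, m))
    K = np.block([[A, B.T], [B, -C]])            # saddle point matrix (1.1)

    S = C + B @ np.linalg.solve(A, B.T)          # negative Schur complement (1.2)
    MA, MS = A.copy(), S.copy()                  # "ideal" choices M_A = A, M_S = S

    Zn, Zm = np.zeros((n, m)), np.zeros((m, n))
    Md = np.block([[MA, Zn], [Zm, -MS]])                                      # (1.5)
    Mt = np.block([[MA, B.T], [Zm, -MS]])                                     # (1.6)
    Mu = np.block([[MA, Zn], [B, -MS]])                                       # (1.7)
    Mf = np.block([[MA, Zn], [B, -MS]]) @ \
         np.block([[np.eye(n), np.linalg.solve(MA, B.T)], [Zm, np.eye(m)]])  # (1.8)

    for name, M in [("Md", Md), ("Mt", Mt), ("Mu", Mu), ("Mf", Mf)]:
        eig = np.linalg.eigvals(np.linalg.solve(M, K))
        print(name, np.round(np.sort_complex(eig), 6))

With inexact M_A and M_S the spectra spread out; how far they spread is precisely what the bounds developed in section 4 quantify.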

There are very many works developing eigenvalue analyses for these types of preconditioners; see [5] for block diagonal preconditioners, [39] for block triangular preconditioners, [7, 8, 6, 44] for inexact Uzawa preconditioners, and [3, 44] for block approximate factorization preconditioners, to mention just a few. We refer the reader to [4] for many more references and historical remarks. Nevertheless, as far as we know, our bounds are more accurate than previous ones, with the exception of some inequalities in [39] for nonreal eigenvalues, which, combined with ours, allow us to further restrict the area where the eigenvalues are confined. Moreover, our analysis is truly unified, and we show, seemingly for the first time, that block triangular and inexact Uzawa preconditioners lead to identical eigenvalue distributions. We also establish a clear connection between these inexact Uzawa and block triangular preconditioners and symmetrized preconditioners as in (1.8), allowing us to discriminate cases where this symmetrization can be useful and cases where it is likely not cost effective.

Some previous analyses focus on the conditions needed to have the preconditioned matrix positive definite in a nonstandard inner product, and develop related conjugate gradient like methods; see, e.g., [7, 3, 44]. Here we offer a complementary viewpoint, giving estimates that vary continuously in function of the main parameters (1.3), (1.4), without any restriction on these parameters. Moreover, whereas we reproduce the condition μ1 ≥ 1 to have only real and positive eigenvalues with Uzawa [44] or block triangular [39] preconditioners, our analysis also reveals that scaling M_A to satisfy this condition often has an adverse effect on the clustering of the eigenvalues.

Note that there are several preconditioning techniques also based on approximations M_A and M_S that nevertheless do not fit with our analysis; this includes symmetric positive definite block diagonal preconditioners [5, 38], which are popular because they can be combined with MINRES [9], thus avoiding the restarting associated with GMRES [35] or GCR [3, 4]. Leaving aside restarting effects, definite and indefinite block diagonal preconditioners are found in [7] to be essentially equivalent, which we further confirm independently by showing a general relation between the eigenvalues associated with both preconditioners. Another approach that has connections with those investigated here is constraint preconditioning [9]:

[ M_A   B^T ;  B   -C ] = [ I   0 ;  B M_A^{-1}   I ] [ M_A   0 ;  0   -(C + B M_A^{-1} B^T) ] [ I   M_A^{-1} B^T ;  0   I ].

In fact, this corresponds to block approximate factorization preconditioning (1.8) with M_S = C + B M_A^{-1} B^T (for such M_S, there hold ν1 ≥ μ1 and ν2 ≤ μ2). Hence results in this paper can be applied to this preconditioner as well, but specific analyses that exploit the particular form of M_S are likely more powerful; see, e.g., [4, section 10], [36], and the references therein. Our analysis may, however, be useful when C + B M_A^{-1} B^T is replaced with something easier to invert (e.g., [6, 3]), the line between these inexact constraint preconditioners and block approximate factorization preconditioners being blurred.
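The identity underlying the last remark is easy to verify numerically: with M_S = C + B M_A^{-1} B^T, the block approximate factorization (1.8) reproduces the constraint preconditioner exactly. A minimal sketch (illustrative data, not from the paper):

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 6, 2
    X = rng.standard_normal((n, n)); A = X @ X.T + n * np.eye(n)
    B = rng.standard_normal((m, n))
    Y = rng.standard_normal((m, m)); C = Y @ Y.T            # C taken SPD here

    MA = np.diag(np.diag(A))                                # some SPD approximation of A
    MS = C + B @ np.linalg.solve(MA, B.T)                   # the choice M_S = C + B M_A^{-1} B^T

    Zn, Zm = np.zeros((n, m)), np.zeros((m, n))
    Mf = np.block([[np.eye(n), Zn], [B @ np.linalg.inv(MA), np.eye(m)]]) \
         @ np.block([[MA, Zn], [Zm, -MS]]) \
         @ np.block([[np.eye(n), np.linalg.solve(MA, B.T)], [Zm, np.eye(m)]])   # (1.8)
    Mc = np.block([[MA, B.T], [B, -C]])                     # constraint preconditioner

    print(np.allclose(Mf, Mc))                              # True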

The remainder of this paper is organized as follows. In section 2, we introduce some further variants of the preconditioners defined above. In section 3, we examine the relations that exist between the spectra associated with these different preconditioners, whereas, in section 4, we analyze the localization of the eigenvalues. These results are illustrated in section 5 on a typical example, namely the discrete Stokes problem. Concluding remarks are given in section 6. Peculiarities associated with singular K are discussed in the appendix.

2. Further variants of block preconditioners. We first introduce a variant of the block approximate factorization preconditioners, which we call block SGS because of its close connection with block symmetric Gauss–Seidel iterations. Let

(2.1)    M̃_A = M_A (2 M_A - A)^{-1} M_A ;

M̃_A is in fact the preconditioner for A corresponding to the combination of two stationary iterations with M_A, as seen from the relation

(2.2)    I - M̃_A^{-1} A = (I - M_A^{-1} A)^2 .

The block SGS preconditioner is then algebraically defined by

(2.3)    M_g = [ I   0 ;  B M_A^{-1}   I ] [ M̃_A   0 ;  0   -M_S ] [ I   M_A^{-1} B^T ;  0   I ].

The motivation is twofold. On the one hand, our analysis in the next section suggests that M_g can compare favorably with the block approximate factorization preconditioner (1.8). On the other hand, solving a system M_g u = r requires only a slight modification of the algorithm that solves a system with M_f, and the extra cost is limited to one additional multiplication with A. Indeed, letting u = (u_1, u_2) and r = (r_1, r_2), both solves are implemented with

1.  v_1 = M_A^{-1} r_1 ,
2.  u_2 = M_S^{-1} (B v_1 - r_2) ,
3.  u_1 = v_1 - M_A^{-1} B^T u_2                      for M_f ,
    u_1 = v_1 + M_A^{-1} (r_1 - A v_1 - B^T u_2)      for M_g

(a schematic implementation of both solves is sketched in the code example below). On the other hand, the other preconditioners can also be enhanced by using M̃_A instead of M_A, and, as will be seen, it is enlightening to explicitly include in our study the corresponding versions of block triangular and inexact Uzawa preconditioners, that is,

(2.4)    M̃_t = [ M̃_A   B^T ;  0   -M_S ]    and    M̃_u = [ M̃_A   0 ;  B   -M_S ].

In view of (2.2), these preconditioners represent at the algebraic level the operator used when either M_t or M_u is combined with an approximation of A based on two stationary inner iterations with M_A. The computational cost associated with M̃_t and M̃_u is in fact the same as that associated with M_g, except that one multiplication by either B (case M̃_t) or B^T (case M̃_u) is saved.

(Footnote: The equivalence between the algebraic definitions (1.8), (2.3) and this algorithm can be checked by observing that v_1, u_2, and u_1 as defined in this algorithm satisfy [ M_A   0 ;  B   -M_S ] (v_1, u_2) = (r_1, r_2) and [ F   F M_A^{-1} B^T ;  0   I ] (u_1, u_2) = (v_1, u_2), with F = I when M_f is used and, otherwise, F = (2 I - M_A^{-1} A)^{-1} = M_A^{-1} M̃_A.)
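The three-step solve above translates directly into code. The following sketch is a plain NumPy illustration (not the paper's implementation) that applies M_f^{-1} or M_g^{-1} to a residual, given routines that apply A, M_A^{-1}, and M_S^{-1}; as a sanity check, with M_A = A and M_S = S (and C = 0) both variants reproduce K^{-1} r.

    import numpy as np

    def apply_block_factorization_inverse(apply_A, solve_MA, solve_MS, B, r1, r2, variant="f"):
        # Steps 1-3 of the solve described above; variant "g" (block SGS) costs one
        # extra multiplication with A compared with variant "f".
        v1 = solve_MA(r1)                                    # 1. v1 = M_A^{-1} r1
        u2 = solve_MS(B @ v1 - r2)                           # 2. u2 = M_S^{-1} (B v1 - r2)
        if variant == "f":
            u1 = v1 - solve_MA(B.T @ u2)                     # 3. for M_f
        else:
            u1 = v1 + solve_MA(r1 - apply_A(v1) - B.T @ u2)  # 3. for M_g
        return u1, u2

    # sanity check with the ideal choices M_A = A, M_S = S and C = 0
    rng = np.random.default_rng(2)
    n, m = 9, 3
    X = rng.standard_normal((n, n)); A = X @ X.T + n * np.eye(n)
    B = rng.standard_normal((m, n))
    S = B @ np.linalg.solve(A, B.T)
    K = np.block([[A, B.T], [B, np.zeros((m, m))]])
    r = rng.standard_normal(n + m)
    for var in ("f", "g"):
        u1, u2 = apply_block_factorization_inverse(lambda x: A @ x,
                                                   lambda x: np.linalg.solve(A, x),
                                                   lambda x: np.linalg.solve(S, x),
                                                   B, r[:n], r[n:], var)
        print(var, np.allclose(np.concatenate([u1, u2]), np.linalg.solve(K, r)))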

Of course, using either M_g or M̃_t, M̃_u makes sense only if M̃_A is positive definite. This holds if and only if μ2 < 2, where μ2 = λ_max(M_A^{-1} A) has already been defined in (1.3). This is also the necessary and sufficient condition for having ρ_A < 1, where

(2.5)    ρ_A = ρ(I - M_A^{-1} A) = max( 1 - μ1 , μ2 - 1 )

is the spectral radius of the iteration matrix associated with M_A.

3. Relations between the preconditioners. The following theorem highlights the connections that exist between the spectra associated with the different preconditioners. The proof of statement 3 uses an approach similar to that followed in the proof of Theorem 6 in [39], which analyzes the eigenvalues associated with block triangular preconditioners. This approach, based on a sequence of similarity transformations, is extended here to all preconditioners introduced in sections 1 and 2 and will further be used in the proof of Theorem 4.3.

Theorem 3.1. Let

K = [ A   B^T ;  B   -C ]

be a matrix such that A is an n × n SPD matrix and C is an m × m symmetric nonnegative definite matrix with m ≤ n. Assume that B has rank m or that C is positive definite on the null space of B^T. Let the preconditioners M_d, M_t, M_u, M_f, M_g, M̃_t, and M̃_u be defined as in, respectively, (1.5), (1.6), (1.7), (1.8), (2.3), and (2.4), where M_A and M_S are SPD. Let ρ_A be defined by (2.5), and assume that ρ_A < 1 when one of M_f, M_g, M̃_t, or M̃_u is considered. Letting

(3.1)    M_+ = [ M_A   0 ;  0   M_S ],

1. the eigenvalues of M_d^{-1} K and those of M_+^{-1} K satisfy

(3.2)    max_{λ ∈ σ(M_d^{-1} K)} |λ|  ≤  max_{λ ∈ σ(M_+^{-1} K)} |λ| ,
(3.3)    min_{λ ∈ σ(M_d^{-1} K)} |λ|  ≥  min_{λ ∈ σ(M_+^{-1} K)} |λ| .

2. The matrices M_t^{-1} K and M_u^{-1} K have the same spectrum.

3. The matrices M_g^{-1} K, M̃_t^{-1} K, and M̃_u^{-1} K have the same spectrum.

Proof. The matrix M_d^{-1} K has the same eigenvalues as K̃ = M_+^{1/2} M_d^{-1} K M_+^{-1/2}. The largest of these eigenvalues in modulus is bounded above by the matrix norm ||K̃||, which is also equal to the largest singular value of K̃ [40, Theorem 5.3], and thus is further equal to the square root of the largest eigenvalue of K̃^T K̃ [40, Theorem 5.4]. Let

K̄ = M_+^{-1/2} K M_+^{-1/2}    and    J = [ I_n   0 ;  0   -I_m ].

(Footnote: In a preliminary draft of this paper, this approach was also used to prove statement 2; the much simpler argument given in the proof of Theorem 3.1 has been suggested independently by Artem Napov and two anonymous referees.)

6 48 YVN NOY Because K = J K, one has K K = K J J K = K K = K,andthesquarerootof the largest eigenvalue of K K is also the largest eigenvalue in modulus of K. his proves 3. since M+ K has the same eigenvalues as K. he inequality 3.3 can be proved by applying the same reasoning to K M d, whose largest eigenvalue in modulus is the inverse of the smallest eigenvalue in modulus of M d K : K M d has thesameeigenvaluesas K, whose norm is bounded above by the square root of the largest eigenvalue in modulus of K K = K, i.e., the inverse of the smallest eigenvalue in modulus of M+ K. o prove statement, observe that Mu K has the same spectrum as KMu, = Mu K = M t K. similar argument shows that Mt u K also have the same spectrum. However, more involved developments are needed to prove that this common spectrum further coincides with the spectrum of Mg K. hese developments are also needed to prove heorem 4.3 below. For this reason, we formulate them for all the preconditioners considered in this work, although only Mg K, M t K,andMu K are addressed by the remainder of this proof. hese developments require the assumption that there is no eigenvalue of M that is exactly equal to. his is, however, no loss of generality because if there is such an eigenvalue, we can make the proof for a slightly perturbed matrix ε B 3.4 K ε = B C which itself has the same spectrum as its transpose KMu K and M with 0 <ε<. hen, since the eigenvalues continuously depend on ε, the needed results for the original matrix are obtained by considering the limit for ε 0. Consider now the matrix In M In Z M = B. BY I m M I m etting 0 for M d,m t,m t, 3.5 Y = M for M u,m f,m g, M for M u, 0 for M d,m u,m u, Z = M for M t,m f,m g, M for M t, and 3.6 M = { M for M d,m t,m u,m f, M for M g,m t,m u where M is defined in., one sees that M can represent each of the preconditioners considered in this work. Now let X and Λ be such that X / M / X = Λ, with Λ diagonal and X X = I.Observethat Λ Y = X / Y / X, Λ Z = X / Z / X, Λ =X / M / X

7 BLOCK PRECONDIIONER FOR DDLE POIN PROBLEM 49 are also diagonal and related to Λ via 3.5, 3.6. In particular, we have 0 for M d, 3.7 Γ=Λ Y +Λ Z Λ Y Λ Z = Λ for M t,m u, Λ Λ for M f,m g,m t,m u, { Λ for M d,m t,m u,m f, 3.8 Λ = Λ Λ for M g,m t,m u. as 3.9 We then consider the preconditioned matrix M K. It has the same eigenvalues M = M M M / M B In Z B BY I m B C I m I Z B B I Y C + B Y + Z Y Z B In he last matrix in 3.9 is similar to X / M / X M M / X / I Z B / X B I Y C + B Y + Z Y Z B Λ I I Λ Z G = I G I Λ Y C + G ΛY +Λ Z Λ Y Λ Z G, wherewehaveset M / M / / 3.0 C = M CM / and G = M / B / X. Now let Δ +,Δ be nonnegative diagonal matrices such that, for all i n, Δ + ii =maxi Γ ii, 0, Δ ii =maxγ I ii, 0, where Γ is defined in 3.7. Note that this implies Δ + Δ = I Γ=I Λ Y I Λ Z. On the other hand, our assumption that M has no eigenvalue equal to implies that I Λ Y and I Λ Z are nonsingular. Further, Λ / exists because all entries in Λ are positive; see 3.8, remembering that Λ is the diagonal matrix with the eigenvalues of M on its diagonal, which are less than by assumption if M f, M g, M t,or M u is considered. Hence the preconditioned matrix M K is also similar to Δ+ +Δ Λ / I Λ Z Λ. I G I Λ Y 3. I I I Λ Z G I ΛY Λ/ Δ + Δ C + G ΓG I Λ = Λ/ Δ + +Δ G. / G Δ + Δ Λ C + G ΓG

Interestingly, the matrix resulting from the similarity transformations (the last display above) is the same for all preconditioners that share the same Λ and Γ (hence also the same Δ_+ and Δ_-). In view of (3.7), (3.8), this concludes the proof of statement 3.

Item 1 proves that the eigenvalue distribution associated with the positive definite block diagonal preconditioner M_+ cannot be qualitatively better than that associated with M_d. A tighter connection between both preconditioners is highlighted in [7], under the restrictive assumption that M_S is a multiple of the identity. See also section 5 for a further comparison of both preconditioners.

On the other hand, block triangular and inexact Uzawa preconditioners are both well-established techniques that until now have been analyzed independently of each other. In item 2, we prove that they lead to identical eigenvalue distributions; hence eigenvalue bounds proved for the former are valid for the latter and vice versa (a small numerical confirmation is given in the code sketch at the end of this discussion).

Finally, the relation between the block SGS preconditioner and M̃_t, M̃_u seems less important. However, recall that M̃_t and M̃_u are just M_t and M_u in which one uses a closer approximation for A, based on two stationary iterations with M_A. Item 3 of Theorem 3.1 shows that using the symmetrized preconditioner M_g produces exactly the same effect, at least where the eigenvalue distribution is concerned. When it could be more interesting to use M_g instead of M̃_t or M̃_u is discussed at the end of section 4 and in section 5.

4. Eigenvalue analysis. The matrix obtained at the end of the proof of Theorem 3.1 suggests that, at least in some cases (Δ_- = 0), the eigenvalue analysis can be reduced to that of a matrix of the form

(4.1)    K̂ = [ Â   B̂^T ;  -B̂   Ĉ ],

where Â is SPD and Ĉ is symmetric nonnegative definite. In fact, we shall see that this is true in all cases. Such matrices are nonnegative definite in the sense that their symmetric part is nonnegative definite; hence (see [5]) their eigenvalues have positive real part. Thus, if the preconditioned matrix is similar to a matrix of the form (4.1), one has gotten rid of the indefiniteness of the original matrix. Note, however, that this is at the expense of the loss of symmetry, meaning that a portion of the eigenvalues will in general be complex.

Of course, one does not need the preconditioners introduced in sections 1 and 2 to obtain a nonsymmetric but definite linear system. As noted in, e.g., [5], it suffices to rewrite the original system K u = b multiplying both sides to the left by

(4.2)    J = [ I   0 ;  0   -I ],

which can also be seen as a very basic form of the block diagonal preconditioner (1.5), with M_A = I and M_S = I. However, doing so will in general not change the magnitude of the eigenvalues by much; see item 1 of Theorem 3.1. Hence, small eigenvalues remain, entailing slow convergence of the iterative methods. The role of the preconditioners investigated here then appears more clearly: combine the basic transformation (4.2) that makes the preconditioned matrix similar to a definite one, with further effects that improve the clustering of the eigenvalues while moving them away from the origin of the complex plane.
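Before turning to the eigenvalue bounds, item 2 of Theorem 3.1 is easy to confirm numerically: even with deliberately crude approximations M_A and M_S, the block triangular and inexact Uzawa preconditioners produce the same spectrum. A small sketch (random data, assuming NumPy; not from the paper):

    import numpy as np

    rng = np.random.default_rng(3)
    n, m = 10, 4
    X = rng.standard_normal((n, n)); A = X @ X.T + n * np.eye(n)
    B = rng.standard_normal((m, n)); C = np.zeros((m, m))
    K = np.block([[A, B.T], [B, -C]])
    S = C + B @ np.linalg.solve(A, B.T)

    MA = np.diag(np.diag(A))                 # crude SPD approximation of A
    MS = 1.2 * np.diag(np.diag(S))           # crude SPD approximation of S

    Zn, Zm = np.zeros((n, m)), np.zeros((m, n))
    Mt = np.block([[MA, B.T], [Zm, -MS]])    # block triangular (1.6)
    Mu = np.block([[MA, Zn], [B, -MS]])      # inexact Uzawa (1.7)

    spec = lambda M: np.sort_complex(np.linalg.eigvals(np.linalg.solve(M, K)))
    print(np.allclose(spec(Mt), spec(Mu)))   # True: identical spectra (item 2)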

Now, to assess these effects, we need to be able to localize accurately the eigenvalues of matrices of the form (4.1). Our main tool in this respect is a proposition from [5], whose main results are recalled in Theorem 4.1 below; see (4.4), (4.5), and the upper bound in (4.3). However, on their own, these inequalities and those in [5] not reproduced here do not provide an accurate picture of the situation. In particular, they do not allow us to show that preconditioning can be successful in moving all eigenvalues away from the origin: the lower bound for real eigenvalues is min(λ_min(Â), λ_min(Ĉ)), which vanishes when Ĉ = 0. But the inverses of matrices of the form (4.1) have a similar saddle point structure. Hence further inequalities can be obtained by applying the same proposition of [5] to the inverse of the matrix at hand. This approach is exploited in Theorem 4.1, and leads to (4.6) and the lower bound in (4.3). Thus, Theorem 4.1 combines these new inequalities with the original ones, and it turns out that nothing more is needed to obtain a satisfactory localization of all the eigenvalues.

Theorem 4.1. Let K̂ be a matrix of the form (4.1), where Â is an n × n SPD matrix and Ĉ is an m × m symmetric nonnegative definite matrix with m ≤ n. Assume that B̂ has rank m or that Ĉ is positive definite on the null space of B̂^T. Let

Ĉ_S = Ĉ + B̂ Â^{-1} B̂^T

and, if Ĉ is positive definite,

Â_S = Â + B̂^T Ĉ^{-1} B̂ .

The real eigenvalues λ of K̂ satisfy

(4.3)    min( λ_min(Â), λ_min(Ĉ_S) )  ≤  λ  ≤  max( λ_max(Â), λ_max(Ĉ_S) ) ,

and the eigenvalues λ with nonzero imaginary part are such that

(4.4)    ( λ_min(Â) + λ_min(Ĉ) ) / 2  ≤  Re(λ)  ≤  ( λ_max(Â) + λ_max(Ĉ) ) / 2 ,

(4.5)    |Im(λ)|  ≤  ( λ_max(B̂ B̂^T) )^{1/2} ,

and

(4.6)    |λ - ζ|  ≤  ζ ,

where

(4.7)    ζ = λ_max(Â_S) λ_max(Ĉ_S) / ( λ_max(Â_S) + λ_max(Ĉ_S) )   if Ĉ is positive definite,
         ζ = λ_max(Ĉ_S)   otherwise.

Proof. Inequalities (4.4) and (4.5) and the upper bound in (4.3) just translate results from [5] in our notation. To prove the remaining inequalities, we first consider the case where Ĉ is positive definite. Let K = J K̂, where J is defined by (4.2). Because K is symmetric, its inverse is symmetric. Hence, since principal submatrices in K^{-1} are equal to the inverses of the Schur complements in K [, p. 93], one has

K^{-1} = [ Â_S^{-1}   W^T ;  W   -Ĉ_S^{-1} ] ,

10 5 YVN NOY where W need not be known explicitly to conduct the proof. Indeed, what matters is that K = K W J =  W has a structure that allows us to apply again Proposition. in [5]. For the real eigenvalues, this yields straightforwardly the lower bound in 4.3, using λ min  λ min Â. For the eigenvalues λ with nonzero imaginary part, this proves Re λ λ min  Ĉ + λ min he inequality 4.6 then follows because, for any complex number λ and real positive number ζ, λ ζ ζ holds if and only if Re λ ζ. If C is only semidefinite, we use a continuity argument: we apply the results just proved to  B Ĉ. B Ĉ + εi with ε>0. We then let ε 0. Using λ max Ĉ as upper bound on ζ, all quantities involved in the inequalities vary continuously with ε,aswellastheeigenvalues themselves, proving the required results. We are now ready to state heorem 4.3, which contains our main results in this section. For some cases M t and M u when μ>, we need to introduce additional parameters η and ν that depend on the following function: / 4.8 f μ, ν= 4 ν ν + + μ ν + μ,ν>0. It is a good idea to know how this function behaves before reading heorem 4.3. he following lemma is helpful in this respect. Lemma 4.. Let f μ, ν be defined by 4.8. For any μ and ν>0,there holds 4.9 max,ν f μ, ν ν + Proof. Forμ, one has +μ / max,ν+ μ / min,ν + μ / ν +, ν μ ν+. / ν =+ν 4 ν ν+ / + ν 4 ν μν+ =f μ, ν ν + + ν ν, μν+

11 BLOCK PRECONDIIONER FOR DDLE POIN PROBLEM 53 able Definitions of ξ, ξ, χ, χ, δ,andζ for the different preconditioners. ξ ξ χ χ M d min μ, ν max μ, ν } M u,m t μ min μ, ν max, ν } M u,m t μ > min μ,η ν max, ν M f min μ, ν max μ, ν } M g M u,m t min ρ, ν max, ν μ μ + ν [ ] µ if C =0 μ + ν μ + ν μ + ν minη, μ + ν μ + ν ρ μ + ν + ν ρ + ν M u,m t μ M u,m t μ < < μ M u,m t μ δ ζ M d ν μ ν M f } { ν μ μ if μ >, ν 4 ν otherwise +ν } { ν μ μ if μ >, ν 4 ν otherwise + ν } 0 not applicable ν max 4 7, μ μ if μ < 3 < μ, ν max μ μ, μ μ otherwise ν +ν μ } { M g νρ ρ if ρ <, ν M u,m t 4 ν otherwise +ν from which the lower bound and the bottom upper bound 4.9 are straightforwardly deduced. On the other hand, the top upper bound follows from / + ν 4 ν μν+ = μ / ν +μ ν + / μ / ν + μ ν +. heorem 4.3. Let the assumptions of heorem 3. hold, and let μ, μ, ν,and ν be defined by.3,.4. For each of the preconditioners, let ξ, ξ, χ, χ, δ,and ζ be defined as in able, where, when μ>, η = f μ, ν, ν = μfμ, ν, with f μ, ν being defined in 4.8. Letting stand for d, t, u, f, g, t,oru, the real eigenvalues λ of M K satisfy 4. ξ λ ξ,

12 54 YVN NOY whereas eigenvalues with nonzero imaginary part are possible only if δ>0,in which case they satisfy 4.3 χ Reλ χ, 4.4 Imλ δ, and 4.5 λ ζ ζ. Proof. he proof is in the continuation of the proof of heorem 3.. he main steps are as follows. We first rewrite the matrix 3. obtained at the end of the earlier proof in a form that has the structure seen in 4., i.e., that allows us to apply heorem 4.. he inequalities 4., 4.3, 4.4, 4.5 are then deduced from, respectively, 4.3, 4.4, 4.5, 4.6. he difficulty is in the analysis of the extremal eigenvalues of the blocks Â, Ĉ and related chur complements Â, Ĉ, which needs to be done carefully to obtain bounds as accurate as possible using no other parameter than μ, μ, ν, ν. hus all notation and definitions introduced in the proof of heorem 3. are valid here, and we also use the same continuity argument on the matrix 3.4 to handle the cases where one would have an eigenvalue of M exactly equal to. Observe in this respect that not only the eigenvalues, but also the bounds to be proved, vary continuously with ε, at least when ε is small enough to ensure that if μ = λ max M >, then ελ >, where λ is the smallest eigenvalue of M that is strictly larger than. Observing that, for λ μ, μ, one has ρ λ λ, we further define 0 for M d, 4.6 μ min =minγ ii = μ for M t,m u, i ρ for M f,m g,m t,m u, 0 for M d, 4.7 μ max =maxγ ii μ for M t,m u, i for M f,m g,m t,m u, whereas we observe that 3.7, 3.8 imply 4.8 Γ = Λ for M t,m u M g,m t,m u. We also note for later use that 3.0 implies GG = M / B B M / hence and 4.9 λ min C + GG = ν, λ max C + GG = ν. In the proof of heorem 3., we have seen that, for each of the considered preconditioners, M K has the same eigenvalue as the matrix 3.. o proceed we assume, without loss of generality, that the rows for which Δ + ii is positive are ordered first; i.e., Δ 0 Δ + =, Δ 0 =. Δ

13 BLOCK PRECONDIIONER FOR DDLE POIN PROBLEM 55 We may further partition Λ, Γ, and G accordingly: Γ Λ = Λ, Γ=, G = G G Λ Γ G, G = G. One then has Δ = I Γ and Δ =Γ I. his allows one to rewrite the matrix 3. as Λ Λ G Δ Λ/ G Δ Λ/ Λ/ Δ G Λ/ Δ G C + G Γ G + G Γ G. Hence we may apply heorem 4. with  = Λ Λ, Ĉ = Λ/ Δ G 0 G Δ Λ/ C + G Γ G + G, B =. Γ G G Δ Λ/ Of course, before applying heorem 4., we need to check that its assumptions are satisfied. For M d i.e., Γ = 0, entailing Γ =0,Δ = I,andthatΓ,Δ, G are trivial empty matrices, this clearly follows from the assumptions on B and C, which see 3.0 imply that either B / = Λ G has full rank or Ĉ = C is positive definite on its null space. Regarding all other preconditioners, we prove below see either 4.5 case μ max or 4.9 case μ max > a positive lower bound on the eigenvalues of Ĉ ; hence it is positive definite, and we need not discuss further the rank of B. Now heorem 4. is actually needed only if μ min <. Indeed, when μ min,δ and therefore Λ and G are trivial empty matrices, and the preconditioned matrix is in fact similar to an PD matrix. In view of 4.6, this happens only for M t and M u and when μ, proving that the eigenvalues are real as claimed in this case. o be complete, this also happens for other preconditioners except M d when ρ =0 i.e., M =, entailing δ = 0. In these cases, we have only to prove 4.. his is done below without assuming anything specific on Λ and G, i.e., including the case where these matrices are trivial as well. If μ min <, we have, recalling 4.8 and 4.9, λ max B B = λ max G Λ Δ G λ max GG max Λ Γ i ii ii μ if M = M d, max ν λ μ, μ λ λ if M = M f, 4.0 max λ λ μ min, λ otherwise. he function gλ =λ λ is increasing for λ</3, decreasing for /3 <λ<, and increasing for λ>. Hence, if μ < /3 < μ, the maximum in the interval μ, μ is max g/3,gμ ; otherwise, the maximum is always at one of the boundaries and thus equal to max gμ,gμ. On the other hand, the function hλ =λ λ has a unique maximum at λ =/. Hence its maximum over the interval μ min, is equal

14 56 YVN NOY to h/ = /4 if / belongs to this interval, and otherwise i.e., when μ min > / is always equal to h μ min. Using 4.0 and these considerations together with 4.6, the application of heorem 4. then yields 4.4. Hence we are left with the proof of 4., 4.3, 4.5, which requires us to bound the eigenvalues of Â, Ĉ and related chur complements. For λ max Â, we observe that if Λ = Γ, then all diagonal entries in Λ are less than or equal to, since they correspond to rows for which Γ ii does not exceed. Hence, with 4.8, 4. λ max  whereas, straightforwardly, { max Λ ii = μ if M = M d or M = M f, i otherwise, { μ for M d,m t,m u,m f, 4. λ min  min Λii = i ρ for M g,m t,m u. o analyze the chur complement Â, one first has to obtain it explicitly. One way is to consider the chur complement in 3., Λ I / +Δ + +Δ G C + G ΓG G Δ+ Δ Λ / = Λ / H Λ /, where the right-hand side defines the matrix H. Its inverse may be obtained by the herman Morrisson Woodbury formula: H = I Δ + +Δ G C + G ΓG + G Δ + Δ Δ + +Δ G G Δ+ Δ = I Δ + +Δ G C + GG G Δ+ Δ I Δ G = I C + GG Δ G G Δ G Δ. he top left block of Λ / H Λ / is the inverse of  ; hence,  / = Λ Λ Δ G C + GG G Δ Λ/. On the other hand, G C + GG G and G G C + GG have the same set of nonzero eigenvalues [7, Lemma.], and they are bounded by max v One then finds v G G v v C + GG v max v v G G v v G G + G G. v Â Λ I Δ = Λ Γ

15 BLOCK PRECONDIIONER FOR DDLE POIN PROBLEM 57 where, here and in the following, inequalities between matrices are to be understood in the nonnegative definite sense: Q R if and only if Q R is nonnegative definite. hus, Â Λ Γ, and hence recalling that ρ < μ< { μ if M = M f, 4.3 λ max  if Λ =Γ. We now consider Ĉ and Ĉ. We first consider the case where G is trivial, which happens if and only if μ max. One then obtains 4.4 λ max Ĉ = λ max C + G ΓG λ max C + GG = ν and μ max implying μ min 4.5 λ min Ĉ = λ min C + G ΓG μ min λ min C + GG = μ min ν. ince Ĉ = Λ G Δ Λ/ Λ/ one straightforwardly obtains, when G is trivial, Δ G C + G G + G Γ G, 4.6 λ max Ĉ = λmax C + GG = ν and 4.7 λ min Ĉ = λmin C + GG = ν. Note that μ max > is not possible for M d, M f, M g, M t, M u. Hence the above estimates are sufficient for these preconditioners, as well as for M t and M u when μ. One may indeed check that, for each of these cases, 4., 4.3, and 4.5 are proved by combining heorem 4. with the bounds in 4., 4., 4.3, 4.4, 4.5, 4.6, and 4.7. Regarding 4.3, we use ζ λ max Ĉ i.e., the bound for Ĉ semidefinite in the case of M d, where we have no valid upper bound on λ max Â. On the other hand, as noted above, with Md, one has Ĉ = C because Γ = 0, entailing Γ =0,Δ = I,andthatΓ,Δ, G are trivial empty matrices and therefore Ĉ =0whenC = 0, hence the improved bound for χ in this case, which is obtained by using λ max Ĉ = 0 instead of 4.4. Now it remains to prove 4., 4.3, and 4.5 for M t and M u in the case μ>. Observe that we then have Λ = Γ ; hence we may restrict the analysis to this situation. We first note that the matrix Γ Γ / Δ Δ Γ / Γ where each block is square with the same number of columns as G can be permuted to a block diagonal form with blocks: γ δγ / δγ /, γ

16 58 YVN NOY where γ =Γ ii, δ =Δ ii, and thus γ =+δ. It turns out that γ δγ / ν δγ / τ γ is nonnegative definite for τ equal to the smallest root of 4.8 νx ν +γx+ γ =0, which is nothing but f γ,ν. etting ν = ν and recalling the definition 4.0 of η,thefactthat γ μ implies f γ,ν f μ, ν =η. Hence, Γ Γ / Δ ν Δ Γ / η I. I +Δ I hen we find taking into account that η ; see 4.9 Ĉ = 0 C + G Γ G 0 C + G Γ G min η, μ ν I I + G + η ν I C + GG Γ Γ / Δ Δ Γ / G G Γ I G min η, μ ν I ; that is, 4.9 λ min Ĉ min η, μ ν. imilarly, one has Ĉ = 0 C + G G 0 C + G G η ν I I Γ + Γ / Δ G Δ Γ / + η ν I C + GG G G Γ I G η ν I ; that is, 4.30 λ min Ĉ η ν. he analysis of the largest eigenvalue of Ĉ is based on the same ideas: τ ν γ δγ / δγ / γ

17 BLOCK PRECONDIIONER FOR DDLE POIN PROBLEM 59 is nonnegative definite for τ equal to the largest root of 4.8, which is γ ν f γ,ν. etting ν = ν and recalling the definition 4. of ν,thefactthat γ μ implies γ ν f γ,ν μ ν f μ, ν = ν ν. Hence, Γ Γ / Δ Δ Γ / ν ν νi. I +Δ I hen we find taking into account that ν ν ; see 4.9 I I Γ Ĉ = C + G G + Γ / Δ G Δ Γ / Γ C + G G + ν ν νi G G νi ν ν C + GG G ν I; that is, 4.3 λ max Ĉ ν. ince λ max Ĉ λ maxĉ, we may also use λ max Ĉ ν. hen one may check that 4., 4.3, and 4.5 for M t and M u in the case μ> indeed follow from heorem 4. using this estimate and those in 4., 4., 4.3, 4.9, 4.30, and 4.3. How our bounds work is illustrated in Figure for two examples of preconditioners, using the values 4.3 μ =0.4, μ =, ν =0., ν = which come from the application studied in the next section and assuming C =0 so that the more favorable value of χ applies for the block diagonal preconditioner. he symbols and correspond to, respectively, ξ and ξ ; hence real eigenvalues are to be situated in between according to 4.. Regarding nonreal eigenvalues, the dashed vertical lines correspond to λ = χ left line and λ = χ right line, the dashed horizontal lines correspond to λ = ±iδ, and the dotted circle corresponds to λ = ζ = ζ ; hence, according to 4.3 and 4.4, the nonreal eigenvalues must lie in the box delimited by the four dashed lines but also, according to 4.5, within the disk delimited by the dotted circle. In summary, they must thus be in the shaded yellow region delimited by solid lines, understanding that horizontal lines close to the real axis are infinitesimally close to it, with only real eigenvalues actually being permitted in the area between them. he values in ables vary continuously in function of the four main parameters μ, μ, ν, ν, except possibly for M t and M u, where we have to distinguish different cases; however, if ν ν as one expects if M is properly scaled, then, since f,ν=max,νsee Lemma 4., one has η and ν ν for μ, showing that the bounds for M t and M u also vary continuously with μ. Moreover, independent of the condition ν ν,whenμ, μ, ν, ν, then, for all preconditioners but M d, ξ, ξ,χ, χ andδ 0, ζ. his means that

Fig. 1 (two panels: Block Diagonal M_d; Block Factorization M_f). Application of Theorem 4.3 with main parameters as in (4.32); the two marker symbols indicate ξ1 and ξ2; dotted circle: |λ - ζ| = ζ; dashed vertical lines: Re(λ) = χ1 and Re(λ) = χ2; dashed horizontal lines: Im(λ) = ±δ.

both real and nonreal eigenvalues are confined in a region which converges smoothly towards the single point 1 when M_A → A and M_S → S. For M_d, we then have ξ1, ξ2 → 1, whereas δ → 1, ζ → 1, χ1 → 1/2, and χ2 → 1 in general, but χ2 → 1/2 when C = 0. Hence all real eigenvalues converge towards 1, but nonreal eigenvalues do not: their real part lies in general between 1/2 and 1, converging in particular towards 1/2 when C = 0; on the other hand, their imaginary part may remain significant. This confirms the analysis in [7], where it is shown that if C = 0, only three distinct eigenvalues remain at the limit M_A = A and M_S = S: 1 and (1 ± i √3)/2; interestingly, these latter numbers are at the intersection of the lines Re(λ) = 1/2 (our dashed vertical lines, which coincide in this case) and the circle |λ - 1| = 1 (our dotted circle). It is also interesting to observe that the modulus of these three remaining distinct eigenvalues is equal to 1, whereas, in the same circumstances, only two eigenvalues remain associated with the positive definite block diagonal preconditioner (3.1) [7]; thus, at the limit of exact preconditioning of A and S, equality is attained in both relations (3.2), (3.3) from Theorem 3.1.

Regarding real eigenvalues, it is worth noting that, when using M_d, M_f, or M_t, M_u with μ2 ≤ 1, the bounds of Theorem 4.3 then reduce to

(4.33)    min( μ1, ν1 )  ≤  λ  ≤  max( μ2, ν2 ) ,

which is simple and appealing. With M_g, M̃_t, M̃_u, the corresponding result is

(4.34)    min( 1 - ρ_A^2 , ν1 )  ≤  λ  ≤  max( 1, ν2 ) ,

which requires only ρ_A < 1, i.e., μ2 < 2. When using M_t or M_u with μ2 > 1, our estimates for real eigenvalues are somewhat less favorable and indicate that scaling M_A to have the eigenvalues of M_A^{-1} A be greater than 1 may have an adverse effect on the clustering of the real eigenvalues. This is better seen in an example, so consider again the values (4.32), but now add a scaling parameter so that μ1 = 0.4 α and μ2 = α for some α. In Figure 2, we depict the

(Footnote: The code allowing one to reproduce the results reported in this figure is provided as supplementary material through the electronic version of the journal.)

19 BLOCK PRECONDIIONER FOR DDLE POIN PROBLEM 6 Lower bounds for real eigenvalues Upper bounds for real eigenvalues hm 4.3 [44, hms 4. & 4.3] [39, hm 6] worst case. [39, hm 6] best case Fig.. Upper and lower bounds for real eigenvalues with M t or M u as a function of α when μ = 0.4 α, μ = α, ν = 0., and ν = the legend in the left plot also applies to the right one; for the bounds from [39] we have a best and a worst case because these bounds are expressed as functions of the extremal eigenvalues of M C + BM B instead of ν, ν ; the best case is obtained by setting λ min M C + BM B = μν the largest possible value and λ maxm C + BM B = μ ν the smallest possible value, whereas the worst case corresponds to λ min M C + BM B = μ ν the smallest possible value and λ maxm C + BM B = μ ν the largest possible value. evolution of the lower and upper bounds for real eigenvalues; these plots also illustrate how η and ν vary with μ, since, in this example, ξ = η ν and ξ = ν. One sees that the η factor has only a limited impact on the lower bound, in agreement with the second upper bound 4.9 on f μ, ν, which is never worse than + ν =.. However, ν grows with μ, in fact also in agreement with the same upper bound, which yields ν μν + ν/ν +=α /. In section 5, we shall see an example where the real eigenvalues really do spread out when α increases, closely following our bounds. In Figure, we also compare our bounds with bounds appearing in papers by Zulehner [44] and imoncini [39], which analyze, respectively, inexact Uzawa and block triangular preconditioners, and contain the best previous estimates we are aware of; one sees that our analysis is significantly sharper than that of imoncini, and more general than that of Zulehner, which is effective only if α is large enough. Now, staying with M t and M u, it is well known see [39, 44] that scaling M to increase μ has on the contrary a welcome effect on the nonreal eigenvalues, which become forbidden when μ ; this is also confirmed by our analysis, since δ decreases as μ increases and vanishes for μ. Whereas previous works often focused on this and accordingly suggested selecting the scaling to enforce this condition, our analysis reveals that there is in fact a tradeoff between the clustering of real and nonreal eigenvalues. his will be further illustrated in the next section. We could also compare our bounds for nonreal eigenvalues with previous bounds. However, as seen in Figure, it is in fact more sensible to combine the different bounds than to discuss which one is the best: the more inequalities we have, the better we delimit the region that contains the eigenvalues. In particular, the bound obtained in [39, heorem ] for block triangular preconditioners can play a useful complementary role, and it is also appealing in its simplicity. For the sake of completeness, we recall this bound in the following theorem, noting that, by item of heorem 3., we

20 6 YVN NOY extend its scope of application to inexact Uzawa preconditioners. Moreover, applying this bound to M t allows its further extension to block G preconditioners, via item 3 of heorem 3.. heorem 4.4. Let the assumptions of heorems 3. and 4.3 hold. Eigenvalues λ of Mt K and Mu K with nonzero imaginary part are possible only if μ <, in which case they satisfy 4.35 λ μ. he eigenvalues λ of Mg K, Mt K,andMu K with nonzero imaginary part satisfy 4.36 λ ρ. We may further combine this result with 4.33 for M t, M u, and with 4.34 for M g, M t, M u. his straightforwardly yields the following corollary. Corollary 4.5. Let the assumptions of heorems 3. and 4.3 hold, and assume that ρ <,where If ρ = ρ I M =maxν, ν μ = λ max M, then 4.38 ρ I Mt K = ρ I Mu K max ρ,ρ. here holds 4.39 ρ I M g K = ρ I Mt K = ρ I Mu K max ρ,ρ. Let us stress that the assumption 4.37 is made for the sake of simplicity. If it is not satisfied, a bound on the spectral radius can still be obtained from 4. and On the other hand, with this result one sees even more clearly that symmetrized preconditioners like M g and, by extension, M f can be cost effective when the preconditioner for is better than that for, or of similar quality. On the contrary, when the preconditioner for is much better, the clustering of the spectrum essentially depends on the eigenvalues of M,andM g or M f can only bring a mitigated improvement to the block triangular preconditioners, so that the extra cost involved likely does not pay off. s the final remark in this section, rescaling M is of course also possible with preconditioners other than M t and M u. We do not discuss this explicitly because effects are moderate and easily predicted by inserting into able rescaled estimates for μ, μ. In particular, for M g, M t, M u, it is clear, at least from this theoretical viewpoint, that the best scaling is the one that minimizes ρ. It is also possible to rescale the whole operator M. In combination with M t or M u, this will have the same effect, already discussed above, as rescaling M when using M t or M u ;observe that the parameters in able for M g, M t,andm u in fact coincide with those for

21 BLOCK PRECONDIIONER FOR DDLE POIN PROBLEM 63 M t and M u when exchanging μ for ρ and μ for. Rescaling M with M g is more ambiguous. Letting α be the scaling parameter, one possibility is to consider I BM α M I M B I M I I M = α BM I α M I M his thus amounts to scaling the whole spectrum obtained with unscaled M and M, and inverse scaling applied to M. ince the convergence of the Krylov subspace method is independent of the global scaling of the preconditioner, this option is therefore equivalent to just applying the inverse scaling to M. 5. Example of application. In this section, we consider the typical example provided by the stationary tokes problem on the unit square Ω in two dimensions. hat is, finding the velocity vector v :Ω R and the kinematic pressure field p :Ω R satisfying I B. 5. Δu + p = f in Ω, u = 0 in Ω, where f represents a prescribed force. For the sake of simplicity we choose Dirichlet boundary conditions for the velocity. s a general rule, the discretization of this problems yields a linear system K u = b, whose coefficient matrix K has the form.. Here we consider more particularly finite difference discretization on a staggered grid, for which C =0. here is a technical difficulty appearing from the lack of boundary conditions for the pressure, which is determined only up to a constant. t the discrete level, this is reflected in the fact that B = 0,whereisthevector of all ones. Hence K is singular with null space spanned by 0. he case of rank deficient B with C e = 0 for all vectors e in the null space of B is analyzed in the appendix, in light of the results in [0, 8]. It turns out that right preconditioned GMRE or GCR can be used without special treatment as long as the system is compatible, which is guaranteed in the present case by the fact that the right-hand side is zero for all pressure unknowns. he convergence is indeed the same as that of GMRE or GCR applied to a regular matrix whose eigenvalues coincide with the nonzero eigenvalues of the original preconditioned matrix. Moreover, these eigenvalues satisfy the relations and bounds proved in heorems 3., 4.3, and 4.4 and Corollary 4.5, reading ν as the smallest nonzero eigenvalue of M the rank deficiency of B implying that is only semidefinite. Note that right preconditioning corresponds to the versions of GMRE and GCR that minimize the residual of the original linear system, and, regarding GCR, is equivalent to the standard preconditioning implementation in [4]. Now, for the stationary tokes problem, it is known that the chur complement = B B is spectrally equivalent to the identity when using finite difference approximations. Hence we may select M = ω I, and numerical computation indeed shows that 5. ν 0. ω and ν = ω,

where ν1 denotes the smallest nonzero eigenvalue of M_S^{-1} S. On the other hand, A is formed of two diagonal blocks, each of them being the five point finite difference approximation of the Laplace operator acting on one of the velocity components. Hence the conditioning of A depends on h, and a more sophisticated preconditioning approach is welcome, with multigrid methods being good candidates. For convenience, we selected the aggregation-based algebraic multigrid method AGMG from [6, 8]. Indeed, a black box code is available with a MATLAB interface [3]; hence no further tuning or coding is needed. For relatively small matrix sizes (in the present example, for the coarser grids considered), the procedure uses only two levels. Then it follows from the algebraic properties of the preconditioner that μ2 = 1 (see, e.g., [5, eq. 39]), whereas numerical computation reveals that μ1 ≈ 0.4. For larger matrices, the preconditioner is based on the same two level method, but inner coarse systems are solved iteratively, in fact with the same two level method again, which is thus used recursively. Because these inner solves are accelerated with the flexible conjugate gradient method [4], the so defined preconditioner varies slightly from one application to the next. Then, the above estimates still hold, but only approximately, and should be interpreted with care since the preconditioner is on the whole a nonlinear operator.

Once M_A and M_S have been chosen, all preconditioners introduced in sections 1 and 2 are properly defined. For h = 1/32 and ω = 1, we depict in Figure 3 the associated eigenvalue distribution. We also represent the limits provided by the theory. One sees that Theorem 4.3 accurately predicts the location of both real and nonreal eigenvalues, and one may also check the complementary role played by the bound from [39], as extended to other preconditioners in Theorem 4.4. One also sees the importance of the parameter ζ from (4.15) in controlling the imaginary extension of the eigenvalues: there are eigenvalues lying exactly on the line |λ - ζ| = ζ for all preconditioners but M_f, and the improvement observed going from block diagonal to block triangular preconditioning is due largely to the decrease of ζ, the bounds on the real part remaining roughly the same.

As already discussed, the scaling of M_A plays an important role for block triangular and Uzawa preconditioners. With μ2 ≤ 1, we have the appealing result of Corollary 4.5, but, on the other hand, if one rescales the preconditioner for A to have μ1 ≥ 1, all eigenvalues are real, which may also be attractive, allowing us to use conjugate gradient methods in nonstandard inner products [3, 44]. To investigate this, we rescaled the algebraic multigrid preconditioner by a factor α, entailing that μ1 ≈ 0.4 α and μ2 = α. The theory predicts that increasing α moves nonreal eigenvalues closer to the real axis (until the point where they are forbidden, for α ≥ 2.5) but at the same time allows the real eigenvalues to spread over the real axis (see Figure 2). This is illustrated in Figure 4. In the left column of figures, we proceed as for Figure 3, plotting the spectrum together with the limits provided by the theory. One sees that the bounds remain accurate in all considered situations.
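For readers who want to experiment along the same lines, the sketch below shows how the block triangular preconditioner with M_S = ω^{-1} I can be wrapped as an operator for a Krylov solver. It is only a schematic illustration under stated assumptions: SciPy's gmres is used in place of the right-preconditioned GMRES/GCR of the experiments, the matrices A, B, C and the routine solve_MA (here a sparse direct solve standing in for one AGMG cycle) are placeholders, and the staggered-grid Stokes discretization itself is not reproduced.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def solve_with_block_triangular(A, B, C, b, solve_MA, omega=1.0):
        # Preconditioned GMRES for K u = b with M_t = [[M_A, B^T], [0, -M_S]],
        # taking M_S = omega^{-1} I (the Stokes Schur complement is spectrally
        # equivalent to the identity); solve_MA applies M_A^{-1}.
        n, m = A.shape[0], B.shape[0]
        K = sp.bmat([[A, B.T], [B, -C]], format="csr")

        def apply_Mt_inv(r):
            u2 = -omega * r[n:]                  # u2 = -M_S^{-1} r2
            u1 = solve_MA(r[:n] - B.T @ u2)      # u1 = M_A^{-1} (r1 - B^T u2)
            return np.concatenate([u1, u2])

        M = spla.LinearOperator((n + m, n + m), matvec=apply_Mt_inv, dtype=float)
        return spla.gmres(K, b, M=M, restart=50)

    # Hypothetical usage (names and sizes illustrative only):
    #   T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(N, N))
    #   lap = sp.kron(sp.eye(N), T) + sp.kron(T, sp.eye(N))     # 5-point Laplacian
    #   A = sp.block_diag([lap, lap]).tocsr()                   # two velocity components
    #   solve_MA = spla.splu(A.tocsc()).solve                   # stand-in for one AMG cycle
    #   u, info = solve_with_block_triangular(A, B, sp.csr_matrix((m, m)), b, solve_MA)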

Fig. 3 (panels: Block Diagonal M_d; Block Triang. M_t & Uzawa M_u; Block Approx. Fact. M_f; Block SGS M_g). +: eigenvalues of the preconditioned matrix for h = 1/32 and ω = 1; dashed lines: limit of the region defined by the inequalities in Theorem 4.3 (horizontal lines close to the real axis indicate regions where in fact only real eigenvalues are permitted); dotted line: limit on nonreal eigenvalues provided by [39] (see Theorem 4.4).

In the right column of figures, we rescaled the spectrum to represent the situation that occurs when optimal scaling is applied to the preconditioner M_t or M_u, optimal meaning in such a way that the spectral radius ρ of the associated iteration matrix is minimized. We also graphically illustrate this spectral radius, plotting the circle of center 1 and radius ρ that contains all eigenvalues. In Table 2, we report the number of iterations actually needed to reduce the relative residual error by 10^{-6}, testing larger problem sizes and also different values of ω; results are not reported for M_g, M̃_t, and M̃_u with α = 2.5 because the basic condition ρ_A < 1 is then violated, implying that M̃_A is in fact not positive definite and is therefore no longer a sensible preconditioner for A. The block approximate

(Footnote: The code allowing one to reproduce the results reported in this table and in Table 3 is provided as supplementary material through the electronic version of the journal.)

Fig. 4 (panel pairs labeled by the value of α, unscaled on the left and optimally scaled on the right). Left: spectrum of the preconditioned matrix for block triangular and inexact Uzawa preconditioners (h = 1/32 and ω = 1). Right: rescaled spectrum.
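The qualitative behavior visible in Figures 3 and 4, with real eigenvalues trapped between ξ1 and ξ2, can be reproduced on any small test problem. The sketch below (random data, not the Stokes matrices of this section; it assumes NumPy and SciPy) checks the simple bound (4.33) for M_d and for M_t, with M_A scaled so that μ2 = 1.

    import numpy as np
    import scipy.linalg as sla

    rng = np.random.default_rng(5)
    n, m = 12, 5
    X = rng.standard_normal((n, n)); A = X @ X.T + n * np.eye(n)
    B = rng.standard_normal((m, n))
    K = np.block([[A, B.T], [B, np.zeros((m, m))]])
    S = B @ np.linalg.solve(A, B.T)

    D = np.diag(np.diag(A))
    MA = sla.eigh(A, D, eigvals_only=True)[-1] * D     # scaled so that mu2 = lambda_max(M_A^{-1} A) = 1
    MS = np.diag(np.diag(S))
    mu = sla.eigh(A, MA, eigvals_only=True)            # mu1 = mu[0], mu2 = mu[-1]
    nu = sla.eigh(S, MS, eigvals_only=True)            # nu1 = nu[0], nu2 = nu[-1]
    lo, hi = min(mu[0], nu[0]), max(mu[-1], nu[-1])    # bound (4.33)

    Zn, Zm = np.zeros((n, m)), np.zeros((m, n))
    for name, M in [("Md", np.block([[MA, Zn], [Zm, -MS]])),
                    ("Mt", np.block([[MA, B.T], [Zm, -MS]]))]:
        lam = np.linalg.eigvals(np.linalg.solve(M, K))
        real = np.sort(lam[np.abs(lam.imag) < 1e-10].real)
        print(name, lo - 1e-10 <= real[0], real[-1] <= hi + 1e-10)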

25 BLOCK PRECONDIIONER FOR DDLE POIN PROBLEM 67 able Number of iterations needed to reduce the relative residual error by 0 6 ; MINRE is used for the positive definite preconditioner M + from 3., andgcr5 for all other preconditioners, 5 meaning that the process is restarted every 5 iterations. ω = ω =.5 ω =4 α h =3 Block diag. + M Block diag. M d Block triang. M t Inexact Uzawa M u Block fact. M g Block G M f Block triang. M t Inexact Uzawa M u h = 5 Block diag. + M Block diag. M d Block triang. M t Inexact Uzawa M u Block fact. M g Block G M f Block triang. M t Inexact Uzawa M u factorization M f is still well defined when ρ >, although our analysis does not apply any longer in this case, which is reflected by the much larger number of iterations needed than with other values of α. One sees that the hierarchy of the preconditioners is as expected from the theory, with slight differences between the variants leading to the same eigenvalue distribution, to be explained by the many other features that influence the convergence, such as nonnormality effects [4, Chapters 5 and 6]. Further tests show that, according to the analysis in [7], the indefinite block diagonal preconditioner M d becomes as good as the positive definite preconditioner M + if the restart parameter is increased sufficiently; i.e., the advantage of M +, as expected from item of heorem 3., mainly comes from the global optimality of MINRE. he scalability of M g, M t,andm u reflects well that of M : solving a system with alone requires from 0 iterations for h =3twolevelvarianttofor h = 56, 5, 04 multilevel variable preconditioner. he number of iterations increases in a bigger proportion for the triangular preconditioners M t and M u, displaying their greater sensitivity to the quality of the used approximation for. his sensitivity is also expected from our analysis; compare 4.38 with 4.39 when μ, and see how fast η and ν may grow with μ otherwise. In able 3, we report the results obtained on finer meshes when fixing α =.5 andω = which is close to optimal in all cases. his allows us to further check the near optimality of all preconditioners except M d and perhaps M f. iming results suggest that this near optimality also holds with respect to time: with about 4 times more unknowns, the elapsed time is multiplied by a factor only slightly larger than 4. We also present some results obtained when defining M with a classical algebraic multigrid MG algorithm along the lines of the seminal works by Brandt, McCormick, and Ruge [9] and Ruge and tüben [34]. We selected the implementation


More information

Linear Algebra I. Ronald van Luijk, 2015

Linear Algebra I. Ronald van Luijk, 2015 Linear Algebra I Ronald van Luijk, 2015 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents Dependencies among sections 3 Chapter 1. Euclidean space: lines and hyperplanes 5 1.1. Definition

More information

Analysis of two-grid methods: The nonnormal case

Analysis of two-grid methods: The nonnormal case Analysis of two-grid methods: The nonnormal case Yvan Notay Service de Métrologie Nucléaire Université Libre de Bruxelles (C.P. 65/84) 5, Av. F.D. Roosevelt, B-5 Brussels, Belgium. email : ynotay@ulb.ac.be

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games

Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Stability of Feedback Solutions for Infinite Horizon Noncooperative Differential Games Alberto Bressan ) and Khai T. Nguyen ) *) Department of Mathematics, Penn State University **) Department of Mathematics,

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Short title: Total FETI. Corresponding author: Zdenek Dostal, VŠB-Technical University of Ostrava, 17 listopadu 15, CZ Ostrava, Czech Republic

Short title: Total FETI. Corresponding author: Zdenek Dostal, VŠB-Technical University of Ostrava, 17 listopadu 15, CZ Ostrava, Czech Republic Short title: Total FETI Corresponding author: Zdenek Dostal, VŠB-Technical University of Ostrava, 17 listopadu 15, CZ-70833 Ostrava, Czech Republic mail: zdenek.dostal@vsb.cz fax +420 596 919 597 phone

More information

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function

A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function A full-newton step infeasible interior-point algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interior-point algorithm with

More information

2 Two-Point Boundary Value Problems

2 Two-Point Boundary Value Problems 2 Two-Point Boundary Value Problems Another fundamental equation, in addition to the heat eq. and the wave eq., is Poisson s equation: n j=1 2 u x 2 j The unknown is the function u = u(x 1, x 2,..., x

More information

ITERATIVE METHODS FOR DOUBLE SADDLE POINT SYSTEMS

ITERATIVE METHODS FOR DOUBLE SADDLE POINT SYSTEMS IERAIVE MEHODS FOR DOULE SADDLE POIN SYSEMS FAEMEH PANJEH ALI EIK 1 AND MIHELE ENZI Abstract. We consider several iterative methods for solving a class of linear systems with double saddle point structure.

More information

Fast Iterative Solution of Saddle Point Problems

Fast Iterative Solution of Saddle Point Problems Michele Benzi Department of Mathematics and Computer Science Emory University Atlanta, GA Acknowledgments NSF (Computational Mathematics) Maxim Olshanskii (Mech-Math, Moscow State U.) Zhen Wang (PhD student,

More information

1. Introduction. In this work we consider the solution of finite-dimensional constrained optimization problems of the form

1. Introduction. In this work we consider the solution of finite-dimensional constrained optimization problems of the form MULTILEVEL ALGORITHMS FOR LARGE-SCALE INTERIOR POINT METHODS MICHELE BENZI, ELDAD HABER, AND LAUREN TARALLI Abstract. We develop and compare multilevel algorithms for solving constrained nonlinear variational

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 1: Direct Methods Dianne P. O Leary c 2008

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

Convergence of a Class of Stationary Iterative Methods for Saddle Point Problems

Convergence of a Class of Stationary Iterative Methods for Saddle Point Problems Convergence of a Class of Stationary Iterative Methods for Saddle Point Problems Yin Zhang 张寅 August, 2010 Abstract A unified convergence result is derived for an entire class of stationary iterative methods

More information

Linear Systems. Carlo Tomasi

Linear Systems. Carlo Tomasi Linear Systems Carlo Tomasi Section 1 characterizes the existence and multiplicity of the solutions of a linear system in terms of the four fundamental spaces associated with the system s matrix and of

More information

Tikhonov Regularization of Large Symmetric Problems

Tikhonov Regularization of Large Symmetric Problems NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 11 [Version: 2000/03/22 v1.0] Tihonov Regularization of Large Symmetric Problems D. Calvetti 1, L. Reichel 2 and A. Shuibi

More information

A Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers

A Robust Preconditioned Iterative Method for the Navier-Stokes Equations with High Reynolds Numbers Applied and Computational Mathematics 2017; 6(4): 202-207 http://www.sciencepublishinggroup.com/j/acm doi: 10.11648/j.acm.20170604.18 ISSN: 2328-5605 (Print); ISSN: 2328-5613 (Online) A Robust Preconditioned

More information

CLASSICAL ITERATIVE METHODS

CLASSICAL ITERATIVE METHODS CLASSICAL ITERATIVE METHODS LONG CHEN In this notes we discuss classic iterative methods on solving the linear operator equation (1) Au = f, posed on a finite dimensional Hilbert space V = R N equipped

More information

Aggregation-based algebraic multigrid

Aggregation-based algebraic multigrid Aggregation-based algebraic multigrid from theory to fast solvers Yvan Notay Université Libre de Bruxelles Service de Métrologie Nucléaire CEMRACS, Marseille, July 18, 2012 Supported by the Belgian FNRS

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture Week9 Vector Spaces 9. Opening Remarks 9.. Solvable or not solvable, that s the question Consider the picture (,) (,) p(χ) = γ + γ χ + γ χ (, ) depicting three points in R and a quadratic polynomial (polynomial

More information

Course Notes: Week 1

Course Notes: Week 1 Course Notes: Week 1 Math 270C: Applied Numerical Linear Algebra 1 Lecture 1: Introduction (3/28/11) We will focus on iterative methods for solving linear systems of equations (and some discussion of eigenvalues

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

9.1 Preconditioned Krylov Subspace Methods

9.1 Preconditioned Krylov Subspace Methods Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete

More information

CAAM 454/554: Stationary Iterative Methods

CAAM 454/554: Stationary Iterative Methods CAAM 454/554: Stationary Iterative Methods Yin Zhang (draft) CAAM, Rice University, Houston, TX 77005 2007, Revised 2010 Abstract Stationary iterative methods for solving systems of linear equations are

More information

Adaptive algebraic multigrid methods in lattice computations

Adaptive algebraic multigrid methods in lattice computations Adaptive algebraic multigrid methods in lattice computations Karsten Kahl Bergische Universität Wuppertal January 8, 2009 Acknowledgements Matthias Bolten, University of Wuppertal Achi Brandt, Weizmann

More information

ON AUGMENTED LAGRANGIAN METHODS FOR SADDLE-POINT LINEAR SYSTEMS WITH SINGULAR OR SEMIDEFINITE (1,1) BLOCKS * 1. Introduction

ON AUGMENTED LAGRANGIAN METHODS FOR SADDLE-POINT LINEAR SYSTEMS WITH SINGULAR OR SEMIDEFINITE (1,1) BLOCKS * 1. Introduction Journal of Computational Mathematics Vol.xx, No.x, 200x, 1 9. http://www.global-sci.org/jcm doi:10.4208/jcm.1401-cr7 ON AUGMENED LAGRANGIAN MEHODS FOR SADDLE-POIN LINEAR SYSEMS WIH SINGULAR OR SEMIDEFINIE

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information

Block preconditioners for saddle point systems arising from liquid crystal directors modeling

Block preconditioners for saddle point systems arising from liquid crystal directors modeling Noname manuscript No. (will be inserted by the editor) Block preconditioners for saddle point systems arising from liquid crystal directors modeling Fatemeh Panjeh Ali Beik Michele Benzi Received: date

More information

8.1 Bifurcations of Equilibria

8.1 Bifurcations of Equilibria 1 81 Bifurcations of Equilibria Bifurcation theory studies qualitative changes in solutions as a parameter varies In general one could study the bifurcation theory of ODEs PDEs integro-differential equations

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

Efficient Algorithms for Order Bases Computation

Efficient Algorithms for Order Bases Computation Efficient Algorithms for Order Bases Computation Wei Zhou and George Labahn Cheriton School of Computer Science University of Waterloo, Waterloo, Ontario, Canada Abstract In this paper we present two algorithms

More information

A strongly polynomial algorithm for linear systems having a binary solution

A strongly polynomial algorithm for linear systems having a binary solution A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th

More information

Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global

Deep Linear Networks with Arbitrary Loss: All Local Minima Are Global homas Laurent * 1 James H. von Brecht * 2 Abstract We consider deep linear networks with arbitrary convex differentiable loss. We provide a short and elementary proof of the fact that all local minima

More information

c 2004 Society for Industrial and Applied Mathematics

c 2004 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 6, No., pp. 377 389 c 004 Society for Industrial and Applied Mathematics SPECTRAL PROPERTIES OF THE HERMITIAN AND SKEW-HERMITIAN SPLITTING PRECONDITIONER FOR SADDLE POINT

More information

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294)

Conjugate gradient method. Descent method. Conjugate search direction. Conjugate Gradient Algorithm (294) Conjugate gradient method Descent method Hestenes, Stiefel 1952 For A N N SPD In exact arithmetic, solves in N steps In real arithmetic No guaranteed stopping Often converges in many fewer than N steps

More information

Department of Computer Science, University of Illinois at Urbana-Champaign

Department of Computer Science, University of Illinois at Urbana-Champaign Department of Computer Science, University of Illinois at Urbana-Champaign Probing for Schur Complements and Preconditioning Generalized Saddle-Point Problems Eric de Sturler, sturler@cs.uiuc.edu, http://www-faculty.cs.uiuc.edu/~sturler

More information

Where is matrix multiplication locally open?

Where is matrix multiplication locally open? Linear Algebra and its Applications 517 (2017) 167 176 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Where is matrix multiplication locally open?

More information

Tangent spaces, normals and extrema

Tangent spaces, normals and extrema Chapter 3 Tangent spaces, normals and extrema If S is a surface in 3-space, with a point a S where S looks smooth, i.e., without any fold or cusp or self-crossing, we can intuitively define the tangent

More information

Contents. Preface... xi. Introduction...

Contents. Preface... xi. Introduction... Contents Preface... xi Introduction... xv Chapter 1. Computer Architectures... 1 1.1. Different types of parallelism... 1 1.1.1. Overlap, concurrency and parallelism... 1 1.1.2. Temporal and spatial parallelism

More information

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system

In order to solve the linear system KL M N when K is nonsymmetric, we can solve the equivalent system !"#$% "&!#' (%)!#" *# %)%(! #! %)!#" +, %"!"#$ %*&%! $#&*! *# %)%! -. -/ 0 -. 12 "**3! * $!#%+,!2!#% 44" #% &#33 # 4"!#" "%! "5"#!!#6 -. - #% " 7% "3#!#3! - + 87&2! * $!#% 44" ) 3( $! # % %#!!#%+ 9332!

More information

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS U..B. Sci. Bull., Series A, Vol. 78, Iss. 4, 06 ISSN 3-707 ON A GENERAL CLASS OF RECONDIIONERS FOR NONSYMMERIC GENERALIZED SADDLE OIN ROBLE Fatemeh anjeh Ali BEIK his paper deals with applying a class

More information

Least Sparsity of p-norm based Optimization Problems with p > 1

Least Sparsity of p-norm based Optimization Problems with p > 1 Least Sparsity of p-norm based Optimization Problems with p > Jinglai Shen and Seyedahmad Mousavi Original version: July, 07; Revision: February, 08 Abstract Motivated by l p -optimization arising from

More information

ON THE GENERALIZED DETERIORATED POSITIVE SEMI-DEFINITE AND SKEW-HERMITIAN SPLITTING PRECONDITIONER *

ON THE GENERALIZED DETERIORATED POSITIVE SEMI-DEFINITE AND SKEW-HERMITIAN SPLITTING PRECONDITIONER * Journal of Computational Mathematics Vol.xx, No.x, 2x, 6. http://www.global-sci.org/jcm doi:?? ON THE GENERALIZED DETERIORATED POSITIVE SEMI-DEFINITE AND SKEW-HERMITIAN SPLITTING PRECONDITIONER * Davod

More information

Algebraic Multigrid as Solvers and as Preconditioner

Algebraic Multigrid as Solvers and as Preconditioner Ò Algebraic Multigrid as Solvers and as Preconditioner Domenico Lahaye domenico.lahaye@cs.kuleuven.ac.be http://www.cs.kuleuven.ac.be/ domenico/ Department of Computer Science Katholieke Universiteit Leuven

More information

The Bock iteration for the ODE estimation problem

The Bock iteration for the ODE estimation problem he Bock iteration for the ODE estimation problem M.R.Osborne Contents 1 Introduction 2 2 Introducing the Bock iteration 5 3 he ODE estimation problem 7 4 he Bock iteration for the smoothing problem 12

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 24: Preconditioning and Multigrid Solver Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 5 Preconditioning Motivation:

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

Efficient Solvers for the Navier Stokes Equations in Rotation Form

Efficient Solvers for the Navier Stokes Equations in Rotation Form Efficient Solvers for the Navier Stokes Equations in Rotation Form Computer Research Institute Seminar Purdue University March 4, 2005 Michele Benzi Emory University Atlanta, GA Thanks to: NSF (MPS/Computational

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

EXAMPLES OF CLASSICAL ITERATIVE METHODS

EXAMPLES OF CLASSICAL ITERATIVE METHODS EXAMPLES OF CLASSICAL ITERATIVE METHODS In these lecture notes we revisit a few classical fixpoint iterations for the solution of the linear systems of equations. We focus on the algebraic and algorithmic

More information

Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems

Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems Applied Mathematical Sciences, Vol. 10, 2016, no. 30, 1477-1488 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2016.6269 Performance Comparison of Relaxation Methods with Singular and Nonsingular

More information

Classical iterative methods for linear systems

Classical iterative methods for linear systems Classical iterative methods for linear systems Ed Bueler MATH 615 Numerical Analysis of Differential Equations 27 February 1 March, 2017 Ed Bueler (MATH 615 NADEs) Classical iterative methods for linear

More information

Topic 15 Notes Jeremy Orloff

Topic 15 Notes Jeremy Orloff Topic 5 Notes Jeremy Orloff 5 Transpose, Inverse, Determinant 5. Goals. Know the definition and be able to compute the inverse of any square matrix using row operations. 2. Know the properties of inverses.

More information

1 Computing with constraints

1 Computing with constraints Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)

More information

Solving Ax = b, an overview. Program

Solving Ax = b, an overview. Program Numerical Linear Algebra Improving iterative solvers: preconditioning, deflation, numerical software and parallelisation Gerard Sleijpen and Martin van Gijzen November 29, 27 Solving Ax = b, an overview

More information

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems

Topics. The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems Topics The CG Algorithm Algorithmic Options CG s Two Main Convergence Theorems What about non-spd systems? Methods requiring small history Methods requiring large history Summary of solvers 1 / 52 Conjugate

More information

Another algorithm for nonnegative matrices

Another algorithm for nonnegative matrices Linear Algebra and its Applications 365 (2003) 3 12 www.elsevier.com/locate/laa Another algorithm for nonnegative matrices Manfred J. Bauch University of Bayreuth, Institute of Mathematics, D-95440 Bayreuth,

More information

Constrained Minimization and Multigrid

Constrained Minimization and Multigrid Constrained Minimization and Multigrid C. Gräser (FU Berlin), R. Kornhuber (FU Berlin), and O. Sander (FU Berlin) Workshop on PDE Constrained Optimization Hamburg, March 27-29, 2008 Matheon Outline Successive

More information

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization

A Second Full-Newton Step O(n) Infeasible Interior-Point Algorithm for Linear Optimization A Second Full-Newton Step On Infeasible Interior-Point Algorithm for Linear Optimization H. Mansouri C. Roos August 1, 005 July 1, 005 Department of Electrical Engineering, Mathematics and Computer Science,

More information

A Randomized Algorithm for the Approximation of Matrices

A Randomized Algorithm for the Approximation of Matrices A Randomized Algorithm for the Approximation of Matrices Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert Technical Report YALEU/DCS/TR-36 June 29, 2006 Abstract Given an m n matrix A and a positive

More information

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS

A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM COMPLEMENTARITY PROBLEMS Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULL-NEWTON STEP INFEASIBLE-INTERIOR-POINT ALGORITHM FOR P (κ)-horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh

More information

MINIMAL NORMAL AND COMMUTING COMPLETIONS

MINIMAL NORMAL AND COMMUTING COMPLETIONS INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES Volume 4, Number 1, Pages 5 59 c 8 Institute for Scientific Computing and Information MINIMAL NORMAL AND COMMUTING COMPLETIONS DAVID P KIMSEY AND

More information

Lecture 24: Element-wise Sampling of Graphs and Linear Equation Solving. 22 Element-wise Sampling of Graphs and Linear Equation Solving

Lecture 24: Element-wise Sampling of Graphs and Linear Equation Solving. 22 Element-wise Sampling of Graphs and Linear Equation Solving Stat260/CS294: Randomized Algorithms for Matrices and Data Lecture 24-12/02/2013 Lecture 24: Element-wise Sampling of Graphs and Linear Equation Solving Lecturer: Michael Mahoney Scribe: Michael Mahoney

More information

THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS

THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS KEITH CONRAD. Introduction The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise

More information

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1 Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û

More information

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.

The amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A. AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative

More information