Numerische Mathematik


Numer. Math. (2002) 93
Digital Object Identifier (DOI) /s

Numerische Mathematik

Two new variants of nonlinear inexact Uzawa algorithms for saddle-point problems

Qiya Hu (1), Jun Zou (2)

(1) Institute of Computational Mathematics and Scientific Engineering Computing, Chinese Academy of Sciences, Beijing, People's Republic of China; e-mail: hqy@lsec.cc.ac.cn
(2) Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, People's Republic of China; e-mail: zou@math.cuhk.edu.hk

Received December 8, 1999 / Revised version received September 8, 2001 / Published online March 8, 2002 / (c) Springer-Verlag 2002

Summary. In this paper, we consider some nonlinear inexact Uzawa methods for iteratively solving linear saddle-point problems. By means of a new technique, we first give an essential improvement on the convergence results of Bramble-Pasciak-Vassilev for a known nonlinear inexact Uzawa algorithm. Then we propose two new algorithms, which can be viewed as combinations of the known nonlinear inexact Uzawa method with the classical steepest descent method and the conjugate gradient method respectively. The two new algorithms converge under very practical conditions and do not require any a priori estimates of the minimal and maximal eigenvalues of the preconditioned systems involved, including the preconditioned Schur complement. Numerical results of the algorithms, applied to the Stokes problem and to a purely algebraic linear system, are presented to show the efficiency of the algorithms.

Mathematics Subject Classification: 65F10, 65N20

1 Introduction

The purpose of this paper is to study some nonlinear inexact Uzawa algorithms for solving the following system of linear equations.

(The work of the first author was partially supported by a grant from The Institute of Mathematical Sciences, CUHK. The work of the second author was partially supported by Hong Kong RGC Grants CUHK 49/00P and CUHK 444/01P.)

334 Q. Hu, J. Zou

$$\text{(1.1)}\qquad \begin{pmatrix} A & B \\ B^t & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix},$$

where $A$ is a symmetric positive definite $n\times n$ matrix, and $B$ is an $n\times m$ matrix ($m \le n$). We assume that the $(n+m)\times(n+m)$ coefficient matrix
$$M = \begin{pmatrix} A & B \\ B^t & 0 \end{pmatrix}$$
is nonsingular, which is equivalent to the positive definiteness of the Schur complement matrix
$$\text{(1.2)}\qquad C = B^t A^{-1} B.$$
Linear systems such as (1.1) are called saddle-point problems. They may arise from finite element discretizations of Stokes equations and Maxwell equations [7,11,15], from mixed finite element formulations for second order elliptic problems [2,7], from Lagrange multiplier methods for optimization problems [12,16], and from parameter identification and domain decomposition problems [12,17] and [19]. In recent years there has been a rapidly increasing literature concerned with preconditioned iterative methods for solving indefinite systems of equations like (1.1); see [3,5,6,8,14,20-22]. In particular, inexact Uzawa-type algorithms have attracted wide attention; see [3,5,8,14,21] and the references therein. The common nice properties of these Uzawa-type algorithms are that they have minimal memory requirements and are easy to implement.

Let $\hat A$ and $\hat C$ be two positive definite matrices, which are assumed to be preconditioners of the matrices $A$ and $C = B^tA^{-1}B$ respectively, and let $R^l$ denote the usual $l$-dimensional Euclidean space. For any $l\times l$ positive definite matrix $G$, we use $\|x\|_G$ to denote the $G$-induced norm, namely $\|x\|_G = (Gx, x)^{1/2}$ for all $x \in R^l$. Then the standard inexact Uzawa algorithm can be described as follows (cf. [5] and [14]):

Algorithm 1.1 (inexact Uzawa). Given $x_0 \in R^n$ and $y_0 \in R^m$, the sequence $\{(x_i, y_i)\} \subset R^n \times R^m$ is defined, for $i = 1, 2, \ldots$, by
$$\text{(1.3)}\qquad x_{i+1} = x_i + \hat A^{-1}\big[\, f - (A x_i + B y_i) \,\big],$$
$$\text{(1.4)}\qquad y_{i+1} = y_i + \hat C^{-1}\big( B^t x_{i+1} - g \big).$$
There are some earlier versions of the above algorithm; see, e.g., [3] and [21].
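As a concrete illustration, the two half-steps (1.3)-(1.4) can be sketched in a few lines of NumPy. This is a hedged sketch, not the authors' code: the matrix sizes are arbitrary, and the exact solves standing in for the preconditioner actions $\hat A^{-1}$ and $\hat C^{-1}$ are illustrative only (in practice one applies cheap approximations).

```python
import numpy as np

def inexact_uzawa(A, B, f, g, Ahat_inv, Chat_inv, x0, y0, iters=50):
    """Algorithm 1.1: x_{i+1} = x_i + Ahat^{-1}[f - (A x_i + B y_i)],
    y_{i+1} = y_i + Chat^{-1}(B^t x_{i+1} - g)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = x + Ahat_inv(f - (A @ x + B @ y))   # (1.3)
        y = y + Chat_inv(B.T @ x - g)           # (1.4)
    return x, y

# Toy saddle-point system with a known solution (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
n, m = 6, 3
L = rng.standard_normal((n, n))
A = L @ L.T + n * np.eye(n)              # symmetric positive definite
B = rng.standard_normal((n, m))          # full column rank (generically)
C = B.T @ np.linalg.solve(A, B)          # Schur complement B^t A^{-1} B
x_true, y_true = rng.standard_normal(n), rng.standard_normal(m)
f, g = A @ x_true + B @ y_true, B.T @ x_true
x, y = inexact_uzawa(A, B, f, g,
                     lambda r: np.linalg.solve(A, r),   # Ahat = A (exact)
                     lambda r: np.linalg.solve(C, r),   # Chat = C (exact)
                     np.zeros(n), np.zeros(m))
```

With these exact preconditioner actions the iteration reduces to the exact Uzawa method and converges after two sweeps; with genuinely inexact $\hat A$, $\hat C$ the convergence questions discussed in the rest of the paper arise.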
The existing convergence results indicate that these algorithms are convergent under the assumption of some good knowledge of the spectra of the preconditioned matrices $\hat A^{-1}A$ and $\hat C^{-1}C$, or under some proper scalings of the

Two new variants of nonlinear inexact Uzawa algorithms 335

preconditioners $\hat A$ and $\hat C$. This preprocessing may not be easy to achieve in some applications. To avoid the estimate of the minimal generalized eigenvalues of $\hat A$ with respect to $A$, a practical way is to replace the inner iteration (1.3) of the standard inexact Uzawa algorithm by some nonlinear iteration, for example, the PCG iteration. The detailed convergence properties of the resulting algorithm can be found in [5] and [14]. However, some improvements on the convergence results achieved so far seem necessary (see the numerical examples given in [5] and Sect. 2 below). Moreover, a good knowledge of the spectrum of the preconditioned Schur complement is needed for a proper scaling of the preconditioner to guarantee the convergence of the global inexact Uzawa iteration; see Sect. 2 for the details. To avoid the estimate of the generalized eigenvalues of $\hat C$ with respect to $B^t\hat A^{-1}B$, the Uzawa-type algorithm proposed in [3] introduced a PCG algorithm as an inner iteration of (1.4). It was proved that this Uzawa-type algorithm converges under the assumptions that each inner iteration terminates with a certain accuracy and that the basic iteration for the preconditioned system $\hat A^{-1}A$ is convergent. The preconditioned minimal residual method studied in [20] and [22] is always convergent, but needs some good knowledge of the smallest eigenvalues of the matrices $\hat A^{-1}A$ and $\hat C^{-1}(B^t\hat A^{-1}B)$ to achieve a practical convergence rate. Without a good a priori estimate of the smallest eigenvalues of these two preconditioned matrices, the condition number of the (global) preconditioned system may still be very large even if the condition numbers of the matrices $\hat A^{-1}A$ and $\hat C^{-1}(B^t\hat A^{-1}B)$ are small (cf. [22]). In this case, the convergence of this iterative method may be slow (see Sect. 4). The purpose of this paper is twofold.
By means of a new technique, we first give an essential improvement on the convergence results of Bramble-Pasciak-Vassilev for a known nonlinear inexact Uzawa algorithm. Then we propose two new algorithms, which can be viewed as combinations of the known nonlinear inexact Uzawa method with the classical steepest descent method and the conjugate gradient method respectively. We will show that our algorithms are always convergent without any assumptions on the spectra of the preconditioners $\hat C$ and $\hat A$. The outline of the remainder of the paper is as follows. We give a new convergence result for a known nonlinear inexact Uzawa algorithm in Sect. 2, and present two new algorithms and analyse their convergence rates in Sects. 3 and 4. Finally, we apply the proposed new algorithms to the Stokes problem and to a linear system of purely algebraic equations in Sect. 5.

2 An improved convergence result for the known nonlinear inexact Uzawa algorithm

The main purpose of this section is to present an essential improvement on the convergence results of Bramble-Pasciak-Vassilev for a nonlinear inexact Uzawa algorithm. In this algorithm, the computation of the action of the inverse $A^{-1}$ in (1.3) is replaced with that of an approximation to $A^{-1}$ which results from applying a nonlinear iterative process for inverting $A$. Two examples of such approximations come from defining the approximate inverse by a PCG iteration, or by the operator which results from the application of a multigrid cycling algorithm with a nonlinear smoother (cf. [5]).

We now describe the nonlinear inexact Uzawa algorithm, in which the nonlinear approximate inverse is defined by a map $\Psi: R^n \to R^n$. For $\phi \in R^n$, $\Psi(\phi)$ is an approximation to the solution $\xi$ of
$$\text{(2.1)}\qquad A\xi = \phi.$$
The following assumption was made in [5] on the accuracy of the approximation:
$$\text{(2.2)}\qquad \|\Psi(\phi) - A^{-1}\phi\|_A \le \delta\, \|A^{-1}\phi\|_A \qquad \forall\,\phi \in R^n$$
for some $\delta \in (0, 1)$. This is a natural assumption and can be satisfied by the approximate inverse associated with the PCG iteration, and by the approximate inverse defined by one sweep of a multigrid algorithm with conjugate gradient smoothing (cf. [5]). The nonlinear inexact Uzawa algorithm is defined as follows:

Algorithm 2.1 (nonlinear inexact Uzawa). Given $x_0 \in R^n$ and $y_0 \in R^m$, the sequence $\{(x_i, y_i)\} \subset R^n \times R^m$ is defined, for $i = 1, 2, \ldots$, by
$$\text{(2.3)}\qquad x_{i+1} = x_i + \Psi\big( f - (Ax_i + By_i) \big),$$
$$\text{(2.4)}\qquad y_{i+1} = y_i + \hat C^{-1}\big( B^t x_{i+1} - g \big).$$

Under the assumption (2.2) and
$$\text{(2.5)}\qquad (1-\gamma)\,(\hat C y, y) \le (Cy, y) \le (\hat C y, y) \qquad \forall\, y \in R^m,$$
it was proved in [5] that Algorithm 2.1 converges if
$$\text{(2.6)}\qquad \delta < \frac{1-\gamma}{3-\gamma}.$$
And in this case the following estimate of the convergence rate holds:
$$\sqrt{\tfrac{2\delta}{1+\delta}}\,\|e^x_i\|_A + \|e^y_i\|_{\hat C} \;\le\; \bar\rho^{\,i}\, \Big( \sqrt{\tfrac{2\delta}{1+\delta}}\,\|e^x_0\|_A + \|e^y_0\|_{\hat C} \Big)$$

and
$$\|e^x_i\|_A \;\le\; (1+2\delta)\,\sqrt{\tfrac{1+\delta}{2\delta}}\; \bar\rho^{\,i}\, \Big( \sqrt{\tfrac{2\delta}{1+\delta}}\,\|e^x_0\|_A + \|e^y_0\|_{\hat C} \Big),$$
where $e^x_i$ and $e^y_i$ are the error functions
$$e^x_i = x - x_i, \qquad e^y_i = y - y_i,$$
and
$$\bar\rho = \frac{2\delta + \gamma + \sqrt{(2\delta+\gamma)^2 + 4\delta(1-\gamma)}}{2}.$$
Generally speaking, the convergence condition (2.6) is somehow not very satisfactory, as it depends on the preconditioning parameter $\gamma$. The condition says that the evaluation of $A^{-1}f_i$ for some $f_i$ required in (2.3) by the nonlinear iteration $\Psi$ should be more accurate if the preconditioning of the Schur complement becomes worse (i.e., $\gamma$ is close to 1). This seems not so reasonable. Indeed, we find, surprisingly, that such a requirement on $\delta$ can be made completely independent of $\gamma$: namely, the same results still hold when (2.6) is replaced by $\delta < 1/3$, as shown below. To demonstrate such an essentially improved result, we will first establish the convergence rate for the residual
$$f_i = f - (Ax_i + By_i) = Ae^x_i + Be^y_i,$$
instead of the traditionally used error vector $e^x_i$, as was done in [5]. This is an important observation which leads to our improved results. To do so, we define a new norm $|||\cdot|||$:
$$|||v||| = \big( \|v_1\|^2 + \|v_2\|_C^2 \big)^{1/2}, \qquad v = (v_1, v_2) \in R^n \times R^m.$$
With this norm we can show

Theorem 2.1 Under the assumptions (2.2) and (2.5), for $\delta < \frac13$ we have
$$\text{(2.7)}\qquad |||E_{i+1}||| \le \rho\, |||E_i|||, \qquad E_i = \big( \sqrt{2\delta}\, A^{-1/2} f_i,\; e^y_i \big) \in R^n \times R^m.$$
The convergence rate $\rho\,(<1)$ can be estimated by
$$\text{(2.8)}\qquad \rho \le \begin{cases} \delta + \sqrt{\delta^2 + \delta} & \text{for } 0 \le \gamma \le \frac{4\delta}{1+\delta}, \\[4pt] \dfrac{\gamma(1+\delta) - 2\delta + \sqrt{[\gamma(1+\delta)-2\delta]^2 + 4\delta}}{2} & \text{for } \frac{4\delta}{1+\delta} < \gamma < 1, \end{cases}$$
$$\text{(2.9)}\qquad \rho \le \begin{cases} \delta + \sqrt{\delta^2 + \delta} & \text{for } 0 \le \gamma \le \frac{4\delta}{1+\delta}, \\[4pt] 1 - \frac12 (1-\gamma)(1+\delta) & \text{for } \frac{4\delta}{1+\delta} < \gamma < 1. \end{cases}$$
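Assumption (2.2) is easy to probe numerically when $\Psi$ is taken to be a fixed number of plain CG steps for $A\xi = \phi$, one of the two examples of nonlinear approximate inverses mentioned above. The following sketch (matrix size, step count and sample count are all arbitrary choices, not from the paper) samples the ratio $\|\Psi(\phi) - A^{-1}\phi\|_A / \|A^{-1}\phi\|_A$, whose supremum plays the role of $\delta$ in Theorem 2.1:

```python
import numpy as np

def cg_steps(A, phi, k):
    """k steps of plain CG for A xi = phi starting from xi = 0: a nonlinear
    map Psi(phi) approximating A^{-1} phi, as assumed in (2.2)."""
    xi, r = np.zeros_like(phi), phi.astype(float).copy()
    p = r.copy()
    for _ in range(k):
        rr = r @ r
        if rr == 0.0:                 # already converged exactly
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        xi, r = xi + alpha * p, r - alpha * Ap
        p = r + ((r @ r) / rr) * p
    return xi

rng = np.random.default_rng(1)
n = 20
L = rng.standard_normal((n, n))
A = L @ L.T + n * np.eye(n)                      # SPD test matrix
A_norm = lambda v: np.sqrt(v @ (A @ v))
# sampled delta = max over random phi of ||Psi(phi)-A^{-1}phi||_A / ||A^{-1}phi||_A
delta = max(
    A_norm(cg_steps(A, phi, 5) - np.linalg.solve(A, phi))
    / A_norm(np.linalg.solve(A, phi))
    for phi in rng.standard_normal((30, n))
)
```

Since CG minimizes the $A$-norm of the error over a growing Krylov subspace, the sampled `delta` stays below 1, and with enough inner steps it drops below the threshold $1/3$ of Theorem 2.1.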

Proof. The inequality (2.8) was discovered and proved independently by Cheng [10] and the authors [18]. However, for the reader's convenience, we outline the main idea of the proof. Let $B^tA^{-1/2} = U(\Sigma_0\;\; 0)V^t$ be the singular value decomposition of the matrix $B^tA^{-1/2}$. Define $Q_0 = \Sigma_0 U^t \hat C^{-1} U \Sigma_0$ and
$$F = \begin{pmatrix} 2\delta\,(I+Q_0) & \sqrt{2\delta}\,Q_0 \\ \sqrt{2\delta}\,Q_0 & I - Q_0 \end{pmatrix}.$$
It can be verified directly that
$$\text{(2.10)}\qquad \|F\| \le \max_{\beta \in \sigma(Q_0)} \frac{\big|1 - \delta - \beta(1+\delta)\big| + \sqrt{\big(1-\delta-\beta(1+\delta)\big)^2 + 4\delta}}{2},$$
where $\sigma(Q_0)$ is the spectrum of the matrix $Q_0$. One can check that the matrix $Q_0$ has the same eigenvalues as the matrix $\hat C^{-1}C$. Thus we have $\sigma(Q_0) \subset [1-\gamma,\, 1]$ by (2.5). Then we can show $\rho \le \|F\|$, and (2.8) follows from (2.10).

We next show (2.9). When $\gamma > 4\delta/(1+\delta)$, we have
$$\gamma(1+\delta) - 2\delta > 2\delta > 0.$$
This, with the condition $\gamma < 1$, leads to
$$\gamma(1+\delta) - 2\delta + \sqrt{[\gamma(1+\delta)-2\delta]^2 + 4\delta} \;<\; \gamma(1+\delta) - 2\delta + \sqrt{(1-\delta)^2 + 4\delta} \;=\; \gamma(1+\delta) - 2\delta + 1 + \delta \;=\; 2 + (\gamma-1)(1+\delta).$$
(2.9) then follows from this relation and (2.8). In particular, when $\delta < \frac13$ we have $\rho < 1$.

Using Theorem 2.1, we can estimate the convergence rates of the errors $e^x_i$ and $e^y_i$:

Theorem 2.2 Under the assumptions (2.2) and (2.5), Algorithm 2.1 converges for any $\delta < \frac13$. Moreover, we have
$$\text{(2.11)}\qquad \|e^x_i\|_A \le \big( \sqrt{1+4\delta} + \rho \big)\, \rho^{\,i-1}\, |||E_0|||, \qquad i = 1, 2, \ldots$$
and
$$\text{(2.12)}\qquad \|e^y_i\|_C \le \rho^{\,i}\, |||E_0|||, \qquad i = 1, 2, \ldots.$$
Here the rate of convergence $\rho$ has the following bounds:
$$\text{(2.13)}\qquad \begin{cases} \rho \le \bar\rho - \frac{\gamma}{2} - \frac{\gamma^2}{6} & \text{for } 0 \le \gamma \le \frac{4\delta}{1+\delta}, \\[4pt] \rho \le \bar\rho - \frac23\,(\delta^2 + 2\delta) & \text{for } \frac{4\delta}{1+\delta} < \gamma < 1, \end{cases}$$
where $\bar\rho$ denotes the rate in the Bramble-Pasciak-Vassilev estimate recalled at the beginning of this section.

Proof. The estimate (2.12) is a direct consequence of (2.7). We next prove (2.11). To do so, we start with the derivation of a useful expression for $A^{-1/2}f_i$. Clearly, we can write
$$f_i = Ae^x_i + Be^y_i, \qquad B^t x_{i+1} - g = -B^t e^x_{i+1};$$
this with (2.3) and (2.4) leads to
$$A^{1/2}e^x_{i+1} = A^{1/2}\big( e^x_i - \Psi(f_i) \big) = \big( A^{-1/2}f_i - A^{1/2}\Psi(f_i) \big) - A^{-1/2}Be^y_i$$
and
$$\text{(2.14)}\qquad \begin{aligned} A^{-1/2}Be^y_{i+1} &= A^{-1/2}B\big( e^y_i + \hat C^{-1}B^te^x_{i+1} \big) \\ &= A^{-1/2}B\Big[ e^y_i + \hat C^{-1}B^tA^{-1/2}\Big( \big( A^{-1/2}f_i - A^{1/2}\Psi(f_i) \big) - A^{-1/2}Be^y_i \Big) \Big] \\ &= A^{-1/2}B\hat C^{-1}B^tA^{-1/2}\big( A^{-1/2}f_i - A^{1/2}\Psi(f_i) \big) + \big( I - A^{-1/2}B\hat C^{-1}B^tA^{-1/2} \big) A^{-1/2}Be^y_i. \end{aligned}$$
So we have
$$\text{(2.15)}\qquad \begin{aligned} A^{-1/2}f_{i+1} &= A^{1/2}e^x_{i+1} + A^{-1/2}Be^y_{i+1} \\ &= \big( I + A^{-1/2}B\hat C^{-1}B^tA^{-1/2} \big)\big( A^{-1/2}f_i - A^{1/2}\Psi(f_i) \big) - \big( A^{-1/2}B\hat C^{-1}B^tA^{-1/2} \big) A^{-1/2}Be^y_i. \end{aligned}$$
Because of (2.2), for any given natural number $i$ one can construct a symmetric and positive definite $n\times n$ matrix $Q_{Ai}$ such that $Q_{Ai}\Psi(f_i) = f_i$

and
$$\big\| I - Q_{Ai}^{-1/2} A\, Q_{Ai}^{-1/2} \big\| \le \delta;$$
see, for example, Lemma 9 of [3] for such a construction of $Q_{Ai}$. Thus we have
$$\text{(2.16)}\qquad \Psi(f_i) = Q_{Ai}^{-1} f_i$$
and
$$\text{(2.17)}\qquad \big\| I - A^{1/2} Q_{Ai}^{-1} A^{1/2} \big\| \le \delta.$$
By means of (2.14)-(2.16), we can further write
$$A^{-1/2}f_{i+1} = \big( I + A^{-1/2}B\hat C^{-1}B^tA^{-1/2} \big)\big( I - A^{1/2}Q_{Ai}^{-1}A^{1/2} \big) A^{-1/2}f_i - \big( A^{-1/2}B\hat C^{-1}B^tA^{-1/2} \big) A^{-1/2}Be^y_i,$$
or equivalently (replacing $i$ by $i-1$)
$$A^{-1/2}f_i = (I + Q_2)\, Q_{1i}\, A^{-1/2}f_{i-1} - Q_2\, A^{-1/2}Be^y_{i-1},$$
where
$$Q_{1i} = I - A^{1/2}Q_{A,i-1}^{-1}A^{1/2} \qquad \text{and} \qquad Q_2 = A^{-1/2}B\hat C^{-1}B^tA^{-1/2}.$$
Applying the arithmetic-geometric mean inequality yields
$$\text{(2.18)}\qquad \|A^{-1/2}f_i\|^2 \le (1+\eta)\, \big\| (I+Q_2)Q_{1i}A^{-1/2}f_{i-1} \big\|^2 + (1+\eta^{-1})\, \big\| Q_2 A^{-1/2}Be^y_{i-1} \big\|^2,$$
where $\eta$ is any positive number. One can easily see that the matrix $Q_2$ has the same positive eigenvalues as the matrix $\hat C^{-1}C$. Hence, the assumption (2.5) implies that the eigenvalues of $Q_2$ are in the interval $[0, 1]$; namely, we have
$$\|Q_2\| \le 1, \qquad \|I + Q_2\| \le 2.$$
This, with (2.18) and (2.17), leads to
$$\|A^{-1/2}f_i\|^2 \le (1+\eta)\,4\delta^2\, \|A^{-1/2}f_{i-1}\|^2 + (1+\eta^{-1})\, \|A^{-1/2}Be^y_{i-1}\|^2 = 2\delta(1+\eta)\, \big\|\sqrt{2\delta}\,A^{-1/2}f_{i-1}\big\|^2 + (1+\eta^{-1})\, \|e^y_{i-1}\|_C^2.$$

Taking $\eta = 1/(4\delta)$ above and using (2.7), we obtain
$$\|A^{-1/2}f_i\| \le \sqrt{1+4\delta}\;\, \rho^{\,i-1}\, |||E_0|||.$$
Now using the relation
$$A^{1/2}e^x_i = A^{-1/2}f_i - A^{-1/2}Be^y_i,$$
(2.11) follows immediately from (2.7) and the triangle inequality.

It remains to show (2.13). Let us first consider $0 \le \gamma \le 4\delta/(1+\delta)$. We have
$$\text{(2.19)}\qquad \begin{aligned} \bar\rho - \rho &\ge \delta + \frac{\gamma}{2} + \frac{\sqrt{4(\delta^2+\delta) + \gamma^2}}{2} - \big( \delta + \sqrt{\delta^2+\delta} \big) \\ &= \frac{\gamma}{2} + \frac{\sqrt{4(\delta^2+\delta)+\gamma^2} - \sqrt{4(\delta^2+\delta)}}{2} = \frac{\gamma}{2} + \frac{\gamma^2}{2\big( \sqrt{4(\delta^2+\delta)+\gamma^2} + \sqrt{4(\delta^2+\delta)} \big)}. \end{aligned}$$
Using $\delta < 1/3$ and $\gamma < 1$, we know
$$\sqrt{4(\delta^2+\delta)+\gamma^2} + \sqrt{4(\delta^2+\delta)} < 3.$$
This along with (2.19) yields
$$\text{(2.20)}\qquad \bar\rho - \rho > \frac{\gamma}{2} + \frac{\gamma^2}{6}.$$
Now consider the remaining case $4\delta/(1+\delta) < \gamma < 1$. We have
$$\text{(2.21)}\qquad \begin{aligned} \bar\rho - \rho &\ge \delta + \frac{\gamma}{2} + \frac{\sqrt{(2\delta)^2+\gamma^2+4\delta}}{2} - \frac{\gamma(1+\delta) - 2\delta + \sqrt{[\gamma(1+\delta)-2\delta]^2 + 4\delta}}{2} \\ &= \frac{\delta(4-\gamma)}{2} + \frac{(2\delta)^2 + \gamma^2 - [\gamma(1+\delta)-2\delta]^2}{2\big( \sqrt{(2\delta)^2+\gamma^2+4\delta} + \sqrt{[\gamma(1+\delta)-2\delta]^2+4\delta} \big)}. \end{aligned}$$
Let
$$f(\gamma) = (2\delta)^2 + \gamma^2 - [\gamma(1+\delta) - 2\delta]^2;$$
it is easy to see that
$$f(\gamma) = \gamma\delta\,\big[\, 4(1+\delta) - \gamma(2+\delta) \,\big]$$

and
$$f'(\gamma) = 2\delta\,\big[\, 2(1+\delta) - \gamma(2+\delta) \,\big] > 0 \qquad \text{for } \gamma < 1.$$
Thus, by the monotonicity of $f$,
$$\text{(2.22)}\qquad f(\gamma) > f\Big( \frac{4\delta}{1+\delta} \Big) = \frac{16\delta^2}{(1+\delta)^2}.$$
Moreover, using $\delta < 1/3$ and $\gamma < 1$ we have
$$\sqrt{(2\delta)^2+\gamma^2+4\delta} + \sqrt{[\gamma(1+\delta)-2\delta]^2 + 4\delta} < 3.$$
This, combined with (2.21)-(2.22), leads to
$$\bar\rho - \rho > \frac{3}{2}\,\delta + \frac{16\delta^2}{6(1+\delta)^2} > \frac{2}{3}\,(\delta^2 + 2\delta),$$
which completes the proof of (2.13).

Remark 2.1 The key points in the proof of Theorem 2.1 are the replacement of the error $e^x_i$ by the residual $f_i$ and the use of the norm $|||\cdot|||$. The $n\times n$ matrix $Q_{Ai}$ used in the proof can be viewed as a preconditioner of the matrix $A$ (which is just the preconditioner $\hat A$ in [3] and the preconditioner $Q_A$ in [5]). However, this preconditioner changes with the iteration number $i$. This makes the norm used in [3] (involving $\|\cdot\|_{\hat A}$) and the norm used in the proof of Theorem 3.1 of [5] (involving $\|\cdot\|_{Q_A}$ and $\|\cdot\|_A$) not applicable here for the analysis of the convergence of Algorithm 2.1, because the local contraction of errors with a changing norm cannot guarantee the global convergence of the algorithm (refer to Theorem 4.2, [5]).

Remark 2.2 The numerical results shown in [5] indicated that Algorithm 2.1 may still converge even if the condition (2.6) is not satisfied. This was seen as a surprising phenomenon (see [5]). Now we would like to illustrate this phenomenon a bit. It can be verified directly by (2.10) that $\|F\| < 1$ is equivalent to
$$\big| 1 - \delta - \beta(1+\delta) \big| < 1 - \delta, \qquad \forall\,\beta \in \sigma(Q_0).$$
Thus, $\|F\| < 1$ if and only if
$$\text{(2.23)}\qquad \beta < \frac{2(1-\delta)}{1+\delta}, \qquad \forall\,\beta \in \sigma(Q_0).$$
Under the assumption (2.5), we can only obtain $\beta_{\max} \le 1$. Here $\beta_{\max}$ denotes the maximal eigenvalue of the matrix $Q_0$. Solving the inequality

$$1 < \frac{2(1-\delta)}{1+\delta},$$
we get $\delta < \frac13$, which means that the convergence condition $\delta < \frac13$ is theoretically sharp when $\beta_{\max} = 1$. However, if the preconditioner $\hat C$ is not properly scaled, $\beta_{\max}$ may be bounded far away from one (namely, close to zero). In this case (e.g., see the example considered in [5]), the inequality (2.23) is valid even if $\delta$ is close to one. But the convergence can then be very slow (cf. [5]), because the assumption (2.5) holds only with $\gamma$ close to one (note that $0 < 1-\gamma \le \beta_{\max}$). These facts indicate that the afore-mentioned phenomenon is not too surprising.

3 New algorithm I: the nonlinear inexact Uzawa combined with the steepest descent method

In general, the crucial assumption (2.5) is not so easy to fulfill, as it requires a good estimate of the maximal eigenvalue of the preconditioned Schur complement $\hat C^{-1}C$ in order to achieve the desired proper scaling of $\hat C$. Without such a good estimate, this assumption is not met, or it is satisfied only with some $\gamma$ very close to 1, which may cause the divergence of the algorithm or result in very slow convergence. Obtaining such a good estimate of the eigenvalues is not very convenient or practical. In this and the next section, we propose two new algorithms to avoid such estimates.

Our first idea is to introduce a relaxation parameter $\tau_i$ in (2.4) such that the norm $\|\tau_i \hat C^{-1}g_i - C^{-1}g_i\|_C$ is minimized, where $g_i = B^t x_{i+1} - g$. If $g_i \ne 0$, a direct calculation gives
$$\bar\tau_i = \frac{(g_i, \hat C^{-1}g_i)}{\|\hat C^{-1}g_i\|_C^2} = \frac{(g_i, \hat C^{-1}g_i)}{(A^{-1}B\hat C^{-1}g_i,\; B\hat C^{-1}g_i)}.$$
Since the action of $A^{-1}$ is not allowed, $A^{-1}B\hat C^{-1}g_i$ will be replaced by $\Psi(B\hat C^{-1}g_i)$, which results in the following new algorithm:

Algorithm 3.1 (nonlinear inexact Uzawa-steepest descent). Given $x_0 \in R^n$ and $y_0 \in R^m$, the sequence $\{(x_i, y_i)\} \subset R^n \times R^m$ is defined, for $i = 1, 2, \ldots$, by

Step 1. Compute $f_i = f - (Ax_i + By_i)$ and $\Psi(f_i)$; update
$$\text{(3.1)}\qquad x_{i+1} = x_i + \Psi(f_i).$$
Step 2. Compute $g_i = B^t x_{i+1} - g$, $d_i = \hat C^{-1}g_i$ and
$$\text{(3.2)}\qquad \tau_i = \begin{cases} \dfrac{(g_i, d_i)}{(\Psi(Bd_i),\, Bd_i)} & \text{for } g_i \ne 0, \\[4pt] 1 & \text{for } g_i = 0. \end{cases}$$

Update
$$\text{(3.3)}\qquad y_{i+1} = y_i + \tau_i\, d_i.$$

Remark 3.1 It is clear that when both $f_i$ and $g_i$ vanish, the vectors $x_i$ and $y_i$ are the exact solution of the equation (1.1), and thus Algorithm 3.1 terminates.

For the sake of convenience, we introduce two parameters $\delta$ and $\delta_0$, which are the minimal positive numbers satisfying
$$\text{(3.4)}\qquad \|A^{-1}f_i - \Psi(f_i)\|_A \le \delta\, \|A^{-1}f_i\|_A,$$
$$\text{(3.5)}\qquad \|A^{-1}Bd_i - \Psi(Bd_i)\|_A \le \delta_0\, \|A^{-1}Bd_i\|_A.$$

Remark 3.2 Note that the assumptions (3.4) and (3.5) are in fact consequences of the assumption (2.2). So we can simply assume the condition (2.2) instead of (3.4)-(3.5), and all the results in this and the next section hold. We choose to formulate the above two conditions here so that we can see how the convergence rate of Algorithm 3.1 depends more explicitly on the accuracies of the two nonlinear inner iterations in (3.1) and (3.3): the evaluations of $A^{-1}f_i$ and $A^{-1}Bd_i$ by the nonlinear iteration $\Psi$ can be different.

Let
$$\text{(3.6)}\qquad \kappa = \mathrm{cond}(\hat C^{-1}C), \qquad \beta = 1 - \frac{4\kappa\,(1-2\delta_0)}{(1+\kappa)^2\,(1-\delta_0)^2};$$
then we have

Theorem 3.1 Assume that the approximation parameters $\delta$ and $\delta_0$ satisfy $\delta < \frac13$ and $\delta_0 < \frac12$; then Algorithm 3.1 converges. Moreover, we have
$$\text{(3.7)}\qquad \|e^x_i\|_A \le \big( \sqrt{1+4\delta} + \hat\rho \big)\, \hat\rho^{\,i-1}\, |||E_0|||, \qquad i = 1, 2, \ldots$$
and
$$\text{(3.8)}\qquad \|e^y_i\|_C \le \hat\rho^{\,i}\, |||E_0|||, \qquad i = 1, 2, \ldots.$$
Here the rate of convergence $\hat\rho\,(<1)$ can be estimated by
$$\text{(3.9)}\qquad \hat\rho \le \begin{cases} \delta + \sqrt{\delta^2+\delta} & \text{for } 0 < \frac{1+\sqrt\beta}{2} \le \frac{4\delta}{1+\delta}, \\[4pt] 1 - \frac14\big( 1-\sqrt\beta \big)(1+\delta) & \text{for } \frac{4\delta}{1+\delta} < \frac{1+\sqrt\beta}{2} < 1. \end{cases}$$
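To make Steps 1-2 concrete, here is a minimal sketch of Algorithm 3.1. It is a hedged illustration, not the authors' implementation: $\Psi$ is realized as a few CG steps (any map satisfying (3.4)-(3.5) would do), the toy matrices are arbitrary, and the preconditioner is deliberately misscaled as $\hat C = 10C$ to show that the relaxation parameter $\tau_i$ absorbs the scaling.

```python
import numpy as np

def cg_steps(A, phi, k):
    # inner nonlinear solver Psi: k plain CG steps for A xi = phi from zero
    xi, r = np.zeros_like(phi), phi.astype(float).copy()
    p = r.copy()
    for _ in range(k):
        rr = r @ r
        if rr == 0.0:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        xi, r = xi + alpha * p, r - alpha * Ap
        p = r + ((r @ r) / rr) * p
    return xi

def uzawa_sd(A, B, f, g, Psi, Chat_inv, x0, y0, iters=30):
    """Algorithm 3.1: nonlinear inexact Uzawa with steepest-descent relaxation."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = x + Psi(f - (A @ x + B @ y))            # Step 1, (3.1)
        gi = B.T @ x - g                            # Step 2
        di = Chat_inv(gi)
        Bd = B @ di
        s = Psi(Bd)                                 # replaces A^{-1} B d_i
        den = s @ Bd
        tau = (gi @ di) / den if den > 0.0 else 1.0     # (3.2)
        y = y + tau * di                            # (3.3)
    return x, y

rng = np.random.default_rng(2)
n, m = 6, 3
L = rng.standard_normal((n, n)); A = L @ L.T + n * np.eye(n)
B = rng.standard_normal((n, m))
C = B.T @ np.linalg.solve(A, B)
x_true, y_true = rng.standard_normal(n), rng.standard_normal(m)
f, g = A @ x_true + B @ y_true, B.T @ x_true
x, y = uzawa_sd(A, B, f, g, lambda v: cg_steps(A, v, 10),
                lambda r: np.linalg.solve(10 * C, r),    # badly scaled Chat
                np.zeros(n), np.zeros(m))
```

Despite the factor-10 misscaling of $\hat C$, the computed $\tau_i \approx 10$ compensates and the iterates converge, which is exactly the scaling-independence that Theorem 3.1 asserts.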

Remark 3.3 From Theorem 3.1 we know that the convergence of Algorithm 3.1 is independent of the spectrum of the preconditioned Schur complement $\hat C^{-1}C$, and the convergence rate of this new algorithm depends only on the condition number $\kappa$; it is not necessary to have a proper scaling of the preconditioner $\hat C$, nor a good a priori estimate of the maximal and minimal eigenvalues of $\hat C^{-1}C$. This seems to be an important advantage of the new algorithm over the existing nonlinear inexact Uzawa algorithms. The assumptions of Theorem 3.1 are natural and can be satisfied by the approximate inverse associated with the PCG iteration and by the approximate inverse defined by one sweep of a multigrid algorithm with a suitable smoother (cf. [3,5,24] and [23]).

For the proof of Theorem 3.1 we need the following lemma:

Lemma 3.1 Assume that the condition (3.5) holds with $\delta_0 < \frac12$. Then for any given natural number $i$ there is a symmetric and positive definite $m\times m$ matrix $Q_{Bi}$ such that
(i) $Q_{Bi}^{-1} g_i = \tau_i\, \hat C^{-1} g_i$;
(ii) all eigenvalues of the matrix $Q_{Bi}^{-1}C$ are in the interval $\big[ \frac{1-\sqrt\beta}{2},\; 1 \big]$.

Proof of Theorem 3.1. With Lemma 3.1, we can view the matrix $Q_{Bi}$ as the preconditioner $\hat C$ for the Schur complement $C$, which satisfies the condition (2.5) with $\gamma = \frac{1+\sqrt\beta}{2}$. Thus Theorem 3.1 is a direct consequence of Theorems 2.1-2.2. Here we have to make use of the important fact that the norms adopted in Theorems 2.1-2.2 are independent of the preconditioner $\hat C$, in comparison with [5].

Proof of Lemma 3.1. It is easy to see that
$$\text{(3.10)}\qquad \big( \Psi(B\hat C^{-1}g_i),\; B\hat C^{-1}g_i \big) = \|\hat C^{-1}g_i\|_C^2 + \big( \Psi(B\hat C^{-1}g_i) - A^{-1}(B\hat C^{-1}g_i),\; B\hat C^{-1}g_i \big).$$
Using the Cauchy inequality and (3.5) yields
$$\big| \big( \Psi(B\hat C^{-1}g_i) - A^{-1}(B\hat C^{-1}g_i),\; B\hat C^{-1}g_i \big) \big| \le \big\| \Psi(B\hat C^{-1}g_i) - A^{-1}(B\hat C^{-1}g_i) \big\|_A\, \big\| B\hat C^{-1}g_i \big\|_{A^{-1}} \le \delta_0\, \big\| A^{-1}(B\hat C^{-1}g_i) \big\|_A\, \big\| B\hat C^{-1}g_i \big\|_{A^{-1}} = \delta_0\, \|\hat C^{-1}g_i\|_C^2.$$
Substituting the above inequality into (3.10) leads to
$$(1-\delta_0)\, \|\hat C^{-1}g_i\|_C^2 \le \big( \Psi(B\hat C^{-1}g_i),\; B\hat C^{-1}g_i \big) \le (1+\delta_0)\, \|\hat C^{-1}g_i\|_C^2.$$

Thus,
$$\text{(3.11)}\qquad \frac{(g_i, \hat C^{-1}g_i)}{(1+\delta_0)\,\|\hat C^{-1}g_i\|_C^2} \le \tau_i \le \frac{(g_i, \hat C^{-1}g_i)}{(1-\delta_0)\,\|\hat C^{-1}g_i\|_C^2},$$
where the parameter $\tau_i$ is defined by
$$\tau_i = \frac{(g_i, \hat C^{-1}g_i)}{\big( \Psi(B\hat C^{-1}g_i),\; B\hat C^{-1}g_i \big)}.$$
For $\tau > 0$, let
$$F(\tau) = \big\| \tau \hat C^{-1}g_i - C^{-1}g_i \big\|_C^2.$$
Clearly,
$$\text{(3.12)}\qquad F(\tau) = \|C^{-1}g_i\|_C^2 - 2\tau\,(g_i, \hat C^{-1}g_i) + \tau^2\, \|\hat C^{-1}g_i\|_C^2.$$
It is easy to verify that $F(\tau)$ is an increasing function for
$$\tau \ge \tau_{0i} := \frac{(g_i, \hat C^{-1}g_i)}{\|\hat C^{-1}g_i\|_C^2}$$
and a decreasing function for $\tau \le \tau_{0i}$. Using this property and (3.11), we know
$$\text{(3.13)}\qquad F(\tau_i) \le \max\Big\{ F\Big( \frac{\tau_{0i}}{1+\delta_0} \Big),\; F\Big( \frac{\tau_{0i}}{1-\delta_0} \Big) \Big\}.$$
On the other hand, it follows from (3.12) that
$$F\Big( \frac{\tau_{0i}}{1+\delta_0} \Big) = \|C^{-1}g_i\|_C^2 - \frac{2(1+\delta_0)-1}{(1+\delta_0)^2}\, \frac{(g_i, \hat C^{-1}g_i)^2}{\|\hat C^{-1}g_i\|_C^2} = \Big\{ 1 - \frac{2(1+\delta_0)-1}{(1+\delta_0)^2}\, \frac{(g_i, \hat C^{-1}g_i)^2}{\|\hat C^{-1}g_i\|_C^2\; \|C^{-1}g_i\|_C^2} \Big\}\, \|C^{-1}g_i\|_C^2.$$

Let $\lambda_{\min}$ and $\lambda_{\max}$ denote respectively the minimal and maximal eigenvalues of the matrix $\hat C^{-1/2}C\hat C^{-1/2}$. Then using the well-known Kantorovich matrix-eigenvalue inequality, we obtain
$$\frac{(g_i, \hat C^{-1}g_i)^2}{\|\hat C^{-1}g_i\|_C^2\; \|C^{-1}g_i\|_C^2} = \frac{(\hat C^{-1/2}g_i,\, \hat C^{-1/2}g_i)^2}{\big( \hat C^{-1/2}C\hat C^{-1/2}(\hat C^{-1/2}g_i),\, \hat C^{-1/2}g_i \big)\, \big( (\hat C^{-1/2}C\hat C^{-1/2})^{-1}(\hat C^{-1/2}g_i),\, \hat C^{-1/2}g_i \big)} \ge \frac{4\lambda_{\min}\lambda_{\max}}{(\lambda_{\min}+\lambda_{\max})^2} = \frac{4\kappa}{(1+\kappa)^2}.$$
Hence
$$\text{(3.14)}\qquad F\Big( \frac{\tau_{0i}}{1+\delta_0} \Big) \le \Big\{ 1 - \frac{2(1+\delta_0)-1}{(1+\delta_0)^2}\, \frac{4\kappa}{(1+\kappa)^2} \Big\}\, \|C^{-1}g_i\|_C^2.$$
Similarly we can show (note that $\delta_0 < \frac12$)
$$\text{(3.15)}\qquad F\Big( \frac{\tau_{0i}}{1-\delta_0} \Big) \le \Big\{ 1 - \frac{2(1-\delta_0)-1}{(1-\delta_0)^2}\, \frac{4\kappa}{(1+\kappa)^2} \Big\}\, \|C^{-1}g_i\|_C^2.$$
Now by a direct calculation, we have
$$\frac{2(1+\delta_0)-1}{(1+\delta_0)^2} - \frac{2(1-\delta_0)-1}{(1-\delta_0)^2} = \delta_0^2\,\Big( \frac{1}{1-\delta_0} - \frac{1}{1+\delta_0} \Big)\Big( \frac{1}{1-\delta_0} + \frac{1}{1+\delta_0} \Big) = \frac{4\delta_0^3}{(1-\delta_0^2)^2} > 0,$$
namely,
$$\frac{2(1-\delta_0)-1}{(1-\delta_0)^2} < \frac{2(1+\delta_0)-1}{(1+\delta_0)^2},$$
which, together with (3.13)-(3.15), yields
$$F(\tau_i) \le \Big\{ 1 - \frac{2(1-\delta_0)-1}{(1-\delta_0)^2}\, \frac{4\kappa}{(1+\kappa)^2} \Big\}\, \|C^{-1}g_i\|_C^2 = \beta\, \|C^{-1}g_i\|_C^2.$$
Summarizing the above, we have proved
$$\big\| \tau_i \hat C^{-1}g_i - C^{-1}g_i \big\|_C^2 \le \beta\, \|C^{-1}g_i\|_C^2.$$

It is clear that $\beta < 1$ when $\delta_0 < \frac12$. Hence by Lemma 9 of [3] we know that there is a symmetric and positive definite $m\times m$ matrix $\hat Q_{Bi}$ such that
$$\hat Q_{Bi}^{-1} g_i = \tau_i\, \hat C^{-1} g_i \qquad \text{and} \qquad \big\| I - \hat Q_{Bi}^{-1/2} C\, \hat Q_{Bi}^{-1/2} \big\| \le \sqrt\beta.$$
Now $Q_{Bi} = 2\hat Q_{Bi}$ is the desired matrix, since the matrix $Q_{Bi}^{-1}C$ has the same eigenvalues as the matrix $Q_{Bi}^{-1/2}C\,Q_{Bi}^{-1/2}$.

4 New algorithm II: the nonlinear inexact Uzawa combined with the conjugate gradient method

In the last section, we proposed a nonlinear inexact Uzawa algorithm combined with the steepest descent method (Algorithm 3.1), which converges when a general preconditioner is used for the Schur complement system and a general nonlinear iteration is used for solving the system $Ax = b$ involved in the inner iteration. However, the steepest descent method converges with a reasonable rate only when a good preconditioner is available for the Schur complement system. This is the case when the saddle-point problem arises, for example, from the Stokes problem, from mixed finite element formulations of second order elliptic problems [2], or from domain decomposition methods with Lagrange multipliers [17]. Without such a good preconditioner the method may converge at a slow rate. This fact can be observed from the numerical experiments shown in Sect. 5.

In this section, we propose another new algorithm, which combines the nonlinear inexact Uzawa algorithm with the conjugate gradient (CG) method, in an effort to accelerate the nonlinear inexact Uzawa algorithm when a good preconditioner is not available for the Schur complement system. This is the case when the saddle-point problems arise from the Lagrange multiplier formulations for optimization problems [16] and for parameter identification [12,19]. In the following, we will first derive a new nonlinear inexact Uzawa algorithm and combine it with the CG method, then analyse the convergence of the new algorithm and compare its convergence rate with that of Algorithm 3.1, which uses the steepest descent method.
For a given sequence $\{f_i\} \subset R^n$, let $\{Q_{Ai}\}$ be the sequence of symmetric and positive definite $n\times n$ matrices satisfying (2.16) and (2.17), and set $H_i = B^t Q_{Ai}^{-1} B$. Clearly, $H_i$ is also a symmetric and positive definite $m\times m$

matrix. For a given sequence $\{g_i\} \subset R^m$, consider the linear system (with the index $i$ fixed)
$$\text{(4.1)}\qquad H_i z_i = g_i.$$
We intend to solve (4.1) for each fixed index $i$ by the PCG iteration with the preconditioner $\hat C$. This process finds a sequence of approximate solutions $\{\hat z^j_i\}$ as follows:

initialize: $\hat z^0_i = 0$, $\hat r^0_i = g_i$, $\hat p^0_i = \hat C^{-1} g_i$, $\hat h^0_i = 0$;
iterate for $j = 1, 2, \ldots$:
$$\hat z^j_i = \hat z^{j-1}_i + \hat\tau^j_i\, \hat p^{j-1}_i, \qquad \hat\tau^j_i = \frac{(\hat r^{j-1}_i,\, \hat p^{j-1}_i)}{(H_i \hat p^{j-1}_i,\, \hat p^{j-1}_i)},$$
$$\hat h^j_i = \hat h^{j-1}_i + \hat\tau^j_i\, H_i \hat p^{j-1}_i, \qquad \hat r^j_i = g_i - \hat h^j_i,$$
$$\hat p^j_i = \hat C^{-1}\hat r^j_i - \hat\theta^j_i\, \hat p^{j-1}_i, \qquad \hat\theta^j_i = \frac{(\hat C^{-1}\hat r^j_i,\; H_i \hat p^{j-1}_i)}{(H_i \hat p^{j-1}_i,\, \hat p^{j-1}_i)}.$$

Note that one has the relation $\hat h^j_i = H_i \hat z^j_i$ between the variables $\hat h^j_i$ and $\hat z^j_i$ above, and each action of $H_i$ involves the action of $Q_{Ai}^{-1}$. In general it is difficult to obtain the explicit form of the matrix $Q_{Ai}$. Thus, it is important and practical to use the inexact solver $\Psi(B\hat p^{j-1}_i)$ to replace $Q_{Ai}^{-1}B\hat p^{j-1}_i$. The resulting iteration can be formulated as follows:

initialize: $z^0_i = 0$, $r^0_i = g_i$, $p^0_i = \hat C^{-1}g_i$, $h^0_i = 0$;
iterate for $j = 1, 2, \ldots$:
$$z^j_i = z^{j-1}_i + \tau^j_i\, p^{j-1}_i, \qquad \tau^j_i = \frac{(r^{j-1}_i,\, p^{j-1}_i)}{\big( \Psi(Bp^{j-1}_i),\; Bp^{j-1}_i \big)},$$
$$h^j_i = h^{j-1}_i + \tau^j_i\, B^t \Psi(Bp^{j-1}_i), \qquad r^j_i = g_i - h^j_i,$$
$$p^j_i = \hat C^{-1}r^j_i - \theta^j_i\, p^{j-1}_i, \qquad \theta^j_i = \frac{\big( B\hat C^{-1}r^j_i,\; \Psi(Bp^{j-1}_i) \big)}{\big( \Psi(Bp^{j-1}_i),\; Bp^{j-1}_i \big)}.$$

For convenience, we use $\Phi_k(g_i)$ to denote the $k$th iterate $z^k_i$ generated by the above process. $\Phi_k$ defines a (nonlinear) mapping from $R^m$ to $R^m$, and $\Phi_k(g_i)$ can be regarded as an approximate solution of the system (4.1). Note that we may not have $\Psi(Bb) = Q_{Ai}^{-1}Bb$ for a vector $b \in R^m$ in general, so $\Phi_k(g_i)$ is not the true approximation generated by $k$ steps of the PCG iteration for (4.1). We assume that there is a positive number $\beta_k < 1$ such that
$$\text{(4.2)}\qquad \|\Phi_k(g_i) - z_i\|_{H_i} = \|\Phi_k(g_i) - H_i^{-1}g_i\|_{H_i} \le \beta_k\, \|H_i^{-1}g_i\|_{H_i}.$$
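The inexact inner recursion above translates almost line by line into code. The sketch below is a hedged illustration under assumptions not in the paper (Ψ is again a few CG steps, the toy matrices are arbitrary, and $\hat C = 10C$ is deliberately misscaled); `Phi_k` plays the role of $\Phi_k(g_i)$, and the outer loop applies the damped multiplier update $y_{i+1} = y_i + \frac{1-\delta}{2}\Phi_k(g_i)$ of Algorithm 4.1 introduced below, here with $\delta \approx 0$ because the inner solver is nearly exact.

```python
import numpy as np

def cg_steps(A, phi, k):
    # inner nonlinear solver Psi: k plain CG steps for A xi = phi from zero
    xi, r = np.zeros_like(phi), phi.astype(float).copy()
    p = r.copy()
    for _ in range(k):
        rr = r @ r
        if rr == 0.0:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        xi, r = xi + alpha * p, r - alpha * Ap
        p = r + ((r @ r) / rr) * p
    return xi

def Phi_k(gi, B, Psi, Chat_inv, k):
    """k steps of the inexact PCG recursion for H_i z = g_i,
    with Psi(B p) standing in for Q_Ai^{-1} B p."""
    z, h = np.zeros_like(gi), np.zeros_like(gi)
    r, p = gi.copy(), Chat_inv(gi)
    for _ in range(k):
        Bp = B @ p
        s = Psi(Bp)
        den = s @ Bp
        if den <= 0.0:                       # residual (numerically) zero
            break
        tau = (r @ p) / den
        z, h = z + tau * p, h + tau * (B.T @ s)
        r = gi - h
        pc = Chat_inv(r)
        p = pc - (((B @ pc) @ s) / den) * p  # theta-orthogonalization
    return z

def uzawa_pcg(A, B, f, g, Psi, Chat_inv, k, delta, x0, y0, iters=60):
    """Algorithm 4.1: multiplier update damped by (1 - delta)/2, cf. (4.8)."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x = x + Psi(f - (A @ x + B @ y))                          # (4.7)
        y = y + 0.5 * (1 - delta) * Phi_k(B.T @ x - g, B, Psi, Chat_inv, k)
    return x, y

rng = np.random.default_rng(3)
n, m = 6, 3
L = rng.standard_normal((n, n)); A = L @ L.T + n * np.eye(n)
B = rng.standard_normal((n, m))
C = B.T @ np.linalg.solve(A, B)
x_true, y_true = rng.standard_normal(n), rng.standard_normal(m)
f, g = A @ x_true + B @ y_true, B.T @ x_true
x, y = uzawa_pcg(A, B, f, g, lambda v: cg_steps(A, v, 10),
                 lambda r: np.linalg.solve(10 * C, r), 3, 0.0,
                 np.zeros(n), np.zeros(m))
```

The damping factor $1/2$ roughly halves the multiplier error per outer sweep in this setting; Remark 4.1 below notes that larger factors (even 1) are observed to work in practice.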

Then by Lemma 9 of [3], there is a symmetric and positive definite $m\times m$ matrix $\hat Q_{ki}$ such that $\hat Q_{ki}^{-1} g_i = \Phi_k(g_i)$ and
$$\big\| I - \hat Q_{ki}^{-1/2} H_i\, \hat Q_{ki}^{-1/2} \big\| \le \beta_k,$$
which implies
$$\text{(4.3)}\qquad \frac{1}{1+\beta_k}\,(H_i w, w) \le (\hat Q_{ki} w, w) \le \frac{1}{1-\beta_k}\,(H_i w, w) \qquad \forall\, w \in R^m.$$
On the other hand, it follows from (2.17) that
$$(1-\delta)\,(A^{-1}w, w) \le (Q_{Ai}^{-1}w, w) \le (1+\delta)\,(A^{-1}w, w) \qquad \forall\, w \in R^n.$$
This, together with (4.3), leads to
$$\text{(4.4)}\qquad \frac{1-\delta}{1+\beta_k}\,(Cw, w) \le (\hat Q_{ki} w, w) \le \frac{1+\delta}{1-\beta_k}\,(Cw, w).$$
Define $Q_{ki} = \frac{2}{1-\delta}\, \hat Q_{ki}$. It is easy to see that
$$\text{(4.5)}\qquad Q_{ki}^{-1} g_i = \frac{1-\delta}{2}\, \Phi_k(g_i)$$
and
$$\text{(4.6)}\qquad \frac{(1-\delta)(1-\beta_k)}{2(1+\delta)}\,(Q_{ki}w, w) \le (Cw, w) \le \frac{1+\beta_k}{2}\,(Q_{ki}w, w) \le (Q_{ki}w, w) \qquad \forall\, w \in R^m,$$
by using (4.4) and the fact that $\beta_k < 1$. Based on (4.5) and Algorithm 2.1, we propose the following new algorithm:

Algorithm 4.1 (nonlinear inexact Uzawa-PCG method). Given $x_0 \in R^n$ and $y_0 \in R^m$, the sequence $\{(x_i, y_i)\} \subset R^n \times R^m$ is defined, for $i = 1, 2, \ldots$, by

Step 1. Compute $f_i = f - (Ax_i + By_i)$ and $\Psi(f_i)$; update
$$\text{(4.7)}\qquad x_{i+1} = x_i + Q_{Ai}^{-1}f_i = x_i + \Psi(f_i).$$
Step 2. Compute $g_i = B^t x_{i+1} - g$ and $d_{ki} = \Phi_k(g_i)$; update
$$\text{(4.8)}\qquad y_{i+1} = y_i + Q_{ki}^{-1}g_i = y_i + \frac{1-\delta}{2}\, d_{ki}.$$
The inequality (4.6) indicates that the matrix $Q_{ki}$ satisfies the inequality (2.5) with $\gamma_k = \frac{1+3\delta+\beta_k(1-\delta)}{2(1+\delta)}$. Thus by Theorem 2.2 we obtain

Theorem 4.1 Assume that (2.2) holds for $\delta < \frac13$ and (4.2) holds for $\beta_k < 1$. Then Algorithm 4.1 converges. Moreover, we have
$$\text{(4.9)}\qquad \|e^x_i\|_A \le \big( \sqrt{1+4\delta} + \hat\rho_k \big)\,(\hat\rho_k)^{i-1}\, |||E_0|||, \qquad i = 1, 2, \ldots$$
and
$$\text{(4.10)}\qquad \|e^y_i\|_C \le (\hat\rho_k)^i\, |||E_0|||, \qquad i = 1, 2, \ldots,$$
and the convergence rate $\hat\rho_k\,(<1)$ can be estimated by
$$\text{(4.11)}\qquad \hat\rho_k \le \begin{cases} \delta + \sqrt{\delta^2+\delta} & \text{for } 0 \le \beta_k \le \frac{5\delta-1}{1-\delta}, \\[4pt] 1 - \frac14\,(1-\beta_k)(1-\delta) & \text{for } \frac{5\delta-1}{1-\delta} < \beta_k < 1. \end{cases}$$

Remark 4.1 One can easily see from (4.5)-(4.6) that Theorem 4.1 also holds if the factor $(1-\delta)/2$ in (4.8) is replaced by any number $\alpha \in (0, 1/3)$. In fact, the numerical results in Sect. 5 indicate that Algorithm 4.1 still converges when the factor $(1-\delta)/2$ is replaced by 1. So Algorithm 3.1 is a special case of Algorithm 4.1 with $k = 1$.

As noted in Remark 3.3, the first assumption of Theorem 4.1 holds when the mapping $\Psi$ is properly defined. However, it is difficult to verify the condition (4.2) in general. The following proposition gives a condition which guarantees (4.2) when all the $H_i$'s are taken to be the same:

Proposition 4.1 Assume that there is a symmetric and positive definite $n\times n$ matrix $Q_A$ such that $Q_A^{-1}\phi = \Psi(\phi)$ for any $\phi \in R^n$. Define $H = B^tQ_A^{-1}B$. Then the following inequality holds for some $\beta_k < 1$:
$$\text{(4.12)}\qquad \|\Phi_k(g_i) - H^{-1}g_i\|_H \le \beta_k\, \|H^{-1}g_i\|_H.$$

Proof. Under the assumption of Proposition 4.1, $\Phi_k(g_i)$ is just the approximate solution generated by $k$ steps of the PCG iteration for the linear system
$$\text{(4.13)}\qquad Hz_i = g_i.$$
Thus, the parameter $\beta_k$ satisfying the relation (4.12) can be estimated by (cf. [2])
$$\beta_k = \min\Big\{ 2\,\Big( \frac{\sqrt{\hat\kappa}-1}{\sqrt{\hat\kappa}+1} \Big)^{\!k},\;\; \frac{\hat\kappa-1}{\hat\kappa+1} \Big\} < 1 \qquad \text{with } \hat\kappa = \mathrm{cond}(\hat C^{-1}H).$$
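Under this reading of the bound in Proposition 4.1 (a hedged reconstruction of the garbled source formula, with $\hat\kappa$ a hypothetical value), the decay of $\beta_k$ toward zero — the property invoked in Proposition 4.2 below — can be tabulated directly:

```python
import math

def beta_k(kappa, k):
    """Reconstructed PCG error-reduction bound after k steps, for a
    preconditioned condition number kappa (cf. Proposition 4.1)."""
    s = math.sqrt(kappa)
    return min(2.0 * ((s - 1.0) / (s + 1.0)) ** k,
               (kappa - 1.0) / (kappa + 1.0))

# e.g. for kappa = 100 (an arbitrary illustrative value):
bounds = [beta_k(100.0, k) for k in (1, 5, 10, 20)]
```

For this $\hat\kappa$ the bounds decrease monotonically and stay below 1, consistent with $\beta_k < 1$ and $\lim_{k\to\infty}\beta_k = 0$.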

Remark 4.2 The assumption in Proposition 4.1 is satisfied in some applications. For example, it is true when the mapping $\Psi$ is defined by the preconditioned Richardson iteration, which includes the standard multigrid sweeps. To see this, for any $b \in R^n$, recall that the preconditioned Richardson iteration for solving the system
$$\text{(4.14)}\qquad Ax = b$$
with the preconditioner $\hat A$ is
$$x_{i+1} = x_i + \hat A^{-1}(b - Ax_i), \qquad i = 1, 2, \ldots.$$
If the eigenvalues of $\hat A^{-1}A$ lie in the range $0 < \lambda < 2$, the iteration converges. Taking $x_0 = 0$, we can easily see
$$x_{i+1} = x - (I - \hat A^{-1}A)^{i+1}x, \qquad \text{i.e.} \qquad x_{i+1} = \sum_{j=1}^{i+1} (-1)^{j-1}\, C^j_{i+1}\, (\hat A^{-1}A)^{j-1}\, \hat A^{-1} b.$$
This suggests defining $Q_A^{-1}$ as
$$Q_A^{-1} = \sum_{j=1}^{i+1} (-1)^{j-1}\, C^j_{i+1}\, (\hat A^{-1}A)^{j-1}\, \hat A^{-1} = \big( I - (I - \hat A^{-1}A)^{i+1} \big)\, A^{-1}.$$
We can easily verify that the matrix $Q_A$ is symmetric and positive definite, and satisfies
$$Q_A^{-1} b = x_{i+1} = \Psi(b) \qquad \forall\, b \in R^n.$$
Note that if $A$ is the stiffness matrix which arises from the finite element discretization of self-adjoint second order elliptic problems, then one can choose $\hat A^{-1}$ to be the operator corresponding to one V-cycle sweep of the multigrid method [5] or of the hierarchical basis multigrid method [3].

In the following we make some comparison between the convergence rates of Algorithm 3.1 and Algorithm 4.1.

Proposition 4.2 Assume that the parameters $\delta$, $\kappa$ and $\beta_k$ in (2.2), (3.6) and (4.2) satisfy
$$\delta < \tfrac15, \qquad \kappa = \mathrm{cond}(\hat C^{-1}C) > 2, \qquad \lim_{k\to\infty} \beta_k = 0,$$

respectively. Then Algorithm 4.1 converges faster than Algorithm 3.1 when $k$ is sufficiently large. But this may not be true for general $k$.

Proof. We first show that the parameter $\beta$ in (3.6) satisfies $\sqrt\beta > \frac13$ when $\kappa > 2$. It is easy to see that
$$\frac{1-2\delta_0}{(1-\delta_0)^2} \le 1,$$
and so
$$\beta \ge 1 - \frac{4\kappa}{(1+\kappa)^2} = \Big( \frac{\kappa-1}{\kappa+1} \Big)^2 > \frac19 \qquad \text{when } \kappa > 2.$$
Now let $\hat\rho$ and $\hat\rho_k$ be the convergence rates of Algorithm 3.1 and Algorithm 4.1 (see Theorems 3.1 and 4.1). Since $\sqrt\beta > \frac13$ and $\delta < \frac15$, it is easy to see
$$\frac{4\delta}{1+\delta} < \frac23 < \frac{1+\sqrt\beta}{2} < 1 \qquad \text{and} \qquad \frac{5\delta-1}{1-\delta} < 0 \le \beta_k.$$
Then by (3.9) and (4.11) we have
$$\hat\rho = 1 - \frac14\,\big( 1-\sqrt\beta \big)(1+\delta) > 1 - \frac{3}{10}\,\big( 1-\sqrt\beta \big)$$
and
$$\hat\rho_k \le 1 - \frac14\,(1-\beta_k)(1-\delta) < 1 - \frac15\,(1-\beta_k).$$
Therefore one has $\hat\rho_k < \hat\rho$ if
$$\frac15\,(1-\beta_k) \ge \frac{3}{10}\,\big( 1-\sqrt\beta \big),$$
that is,
$$\text{(4.15)}\qquad \beta_k \le \frac{3\sqrt\beta - 1}{2}.$$
It is clear that when $\sqrt\beta > \frac13$ the inequality (4.15) can be satisfied for large $k$.

It remains to show the second part of Proposition 4.2. From Theorem 2.1 and the proof of Lemma 3.1, we know that the convergence rate of Algorithm 4.1 is determined by the parameter $\hat\beta_k$ satisfying
$$\|\Phi_k(g_i) - C^{-1}g_i\|_C \le \hat\beta_k\, \|C^{-1}g_i\|_C, \qquad g_i \in R^m;$$

more accurately, it is determined by the value ‖Φ_k(g_i) − C^{-1}g_i‖_C, i.e., the error at the k-th inner iteration. Since the results generated by Algorithm 4.1 with k = 1 are the same as the ones generated by Algorithm 3.1, for our purpose it suffices to construct a counterexample to show that the following inequality does not hold:

(4.16) ‖Φ_2(g_i) − C^{-1}g_i‖_C < ‖Φ_1(g_i) − C^{-1}g_i‖_C.

In fact, we can show that (4.16) is not true when Ĉ = C = I. By the definition of Algorithm 4.1, we have

Φ_2(g_i) = z_i^2 = z_i^1 + τ_i^2 p_i^1,

thus

(4.17) ‖Φ_2(g_i) − C^{-1}g_i‖²_C = ‖z_i^2 − g_i‖² = ‖z_i^1 − g_i‖² + 2τ_i^2 (z_i^1 − g_i, p_i^1) + (τ_i^2)² ‖p_i^1‖².

By the basic property of the CG method, we know (r_i^1, p_i^0) = 0, which implies

(r_i^1, p_i^1) = (r_i^1, r_i^1 − θ_i^1 p_i^0) = ‖r_i^1‖² > 0,

and so τ_i^2 > 0 by the definition of τ_i^2. Now the desired conclusion follows if we can find a matrix H such that (z_i^1 − g_i, p_i^1) > 0. Indeed, if this is true, then it follows from (4.17) that

‖Φ_2(g_i) − C^{-1}g_i‖_C > ‖z_i^1 − g_i‖ = ‖Φ_1(g_i) − C^{-1}g_i‖_C,

which negates (4.16). It remains to find such a matrix H. By the definition of Algorithm 4.1,

z_i^1 = τ_i^1 g_i

and

p_i^1 = r_i^1 − θ_i^1 p_i^0 = g_i − τ_i^1 H g_i − θ_i^1 g_i.

Therefore we have

(4.18) (z_i^1 − g_i, p_i^1) = (τ_i^1 − 1)(g_i, (1 − θ_i^1) g_i − τ_i^1 H g_i) = (τ_i^1 − 1){(1 − θ_i^1)‖g_i‖² − τ_i^1 ‖g_i‖²_H} = θ_i^1 (1 − τ_i^1) ‖g_i‖²,

where we have used the fact that τ_i^1 = ‖g_i‖²/‖g_i‖²_H. By the definitions of τ_i^1 and θ_i^1, it is easy to verify that

(4.19) θ_i^1 (1 − τ_i^1) = (‖g_i‖² ‖H g_i‖² − ‖g_i‖⁴_H)(‖g_i‖² − ‖g_i‖²_H) / ‖g_i‖⁶_H.
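The phenomenon behind this counterexample can be reproduced numerically. The following minimal sketch uses the illustrative choices H = diag(1/2, 1, 1) and g = (1, 1, 0)^t (these specific values are assumptions for the demonstration, not taken from the text): two CG steps on Hz = g land farther from g than one step does, because CG converges to H^{-1}g rather than to g.

```python
import numpy as np

H = np.diag([0.5, 1.0, 1.0])   # SPD, not a multiple of I, first diagonal entry < 1
g = np.array([1.0, 1.0, 0.0])  # first component nonzero

# Plain CG on H z = g starting from z = 0 (here C = Chat = I, so ||.||_C = ||.||)
z = np.zeros(3)
r = g - H @ z
p = r.copy()
dists = []
for _ in range(2):
    tau = (r @ r) / (p @ (H @ p))      # CG steplength
    z = z + tau * p                    # iterate update
    r_new = r - tau * (H @ p)          # residual update
    p = r_new + (r_new @ r_new) / (r @ r) * p   # new search direction
    r = r_new
    dists.append(np.linalg.norm(z - g))

# one step:  z^1 = (4/3, 4/3, 0),  ||z^1 - g|| = sqrt(2)/3
# two steps: z^2 = H^{-1} g = (2, 1, 0),  ||z^2 - g|| = 1
assert np.isclose(dists[0], np.sqrt(2) / 3)
assert np.isclose(dists[1], 1.0)
assert dists[1] > dists[0]   # the second CG step increases the distance to g
```

This illustrates the claim in the proof: (4.16) can fail, so a larger inner-iteration count k does not automatically improve the quantity that drives the outer convergence rate.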

By the Cauchy–Schwarz inequality,

‖g_i‖² ‖H g_i‖² − ‖g_i‖⁴_H > 0

provided that H ≠ cI (c a constant). Without loss of generality, we assume that the first component of g_i is not zero, and then define H = diag(1/2, 1, ..., 1). It is clear that

‖g_i‖² − ‖g_i‖²_H > 0,

so we have from (4.19) that θ_i^1 (1 − τ_i^1) > 0. This, together with (4.18), leads to (z_i^1 − g_i, p_i^1) > 0.

5 Numerical experiments

In this section, we apply our newly proposed Algorithm 3.1 and Algorithm 4.1, and some other existing algorithms, to solve the two-dimensional Stokes problem and another system of purely algebraic equations.

Let Ω be the unit square domain in R², and L²_0(Ω) be the set of all square integrable functions over Ω with zero averages. Let H¹(Ω) be the usual Sobolev space of order one on Ω, and H¹_0(Ω) the space consisting of those functions in H¹(Ω) whose traces vanish on the boundary of Ω. Our first example is the generalized Stokes problem, whose variational formulation reads as follows:

(5.1)  (µ ∇u, ∇v) − (p, ∇·v) = (f, v), ∀ v ∈ H¹_0(Ω)²,
       (q, ∇·u) = (g, q), ∀ q ∈ L²_0(Ω),

where f ∈ L²(Ω)² and µ ∈ L^∞(Ω) with µ(x) ≥ c > 0 almost everywhere in Ω. We use one of the well-known conforming Taylor–Hood elements, which has been widely used in engineering, to solve the system (5.1). For any positive integer N, we divide Ω into N × N subsquares, and set h = 1/N. Such a triangulation of Ω, with its elements being all squares, is denoted by T_h. Let X_h ⊂ H¹_0(Ω)² and M_h ⊂ H¹(Ω) ∩ L²_0(Ω) be the usual continuous Q₂ and Q₁ finite element spaces defined on T_h respectively (cf. [7,13]). The total number of unknowns for this finite element pair is n + m = 2(2N − 1)² + [(N + 1)² − 1].

Table 1. Number of iterations (Algorithm 3.1): PCG and CG inner iterations, δ = 0.1 and δ = 0.2, for varying N.

The finite element approximation of the above Stokes system can be formulated as follows:

(5.2) (µ(x) ∇u_h, ∇v) − (p_h, ∇·v) = (f, v), ∀ v ∈ X_h,
(5.3) (q, ∇·u_h) = 0, ∀ q ∈ M_h.

It is known that the inf-sup condition is satisfied by the pair (X_h, M_h) (see [7]), thus the Schur complement matrix B^t A^{-1} B associated with the system (5.2)–(5.3) has a condition number independent of h. As in [6] and [22], we take the variable coefficient µ to be

µ = 1 + x₁x₂ + x₁²x₂²/2,

and note that the corresponding matrix A is block diagonal, with two copies of the stiffness matrix associated with the bilinear form (µ(x)∇·, ∇·) on the diagonal. Therefore, if µ = 1, the system can be solved by a fast Poisson solver. It is natural to choose this fast solver Â as the preconditioner of A. It is clear that

(5.4) 0.5 (Âz, z) ≤ (Az, z) ≤ 2.5 (Âz, z),

thus the matrix Â^{-1}A is well-conditioned, and so is the matrix B^t Â^{-1} B. In this case, we can choose for Algorithm 3.1 Ĉ = I and Ψ defined by the CG method (without preconditioning) or the PCG method with the preconditioner Â for A. For all the numerical results shown below, the inner nonlinear iteration is terminated when Ψ satisfies ‖φ − AΨ(φ)‖/‖φ‖ ≤ δ for the given vector φ ∈ R^n. The outer iteration of Algorithm 3.1 terminates when the relative residual

(5.5) ε = ‖Mv_i − b‖/‖Mv_0 − b‖,

with v_i = (x_i, y_i)^t, v_0 = (x_0, y_0)^t and b = (f, g)^t, drops below a fixed tolerance. The number of iterations of Algorithm 3.1 is listed in Table 1, where we can see that the rate of convergence is independent of the mesh size h. In comparison with Algorithm 3.1, we also solved the Stokes system by Algorithm 4.1 with the same preconditioners as above. The number of iterations for different k is listed in Table 1.
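The inner stopping rule ‖φ − AΨ(φ)‖/‖φ‖ ≤ δ described above can be realized by running CG until the relative residual falls below δ. A minimal sketch (plain CG; the SPD test matrix here is a hypothetical stand-in for the velocity block, not the paper's actual discretization):

```python
import numpy as np

def psi(A, phi, delta):
    """Approximate A^{-1} phi by CG, stopped once ||phi - A x|| / ||phi|| <= delta."""
    x = np.zeros_like(phi)
    r = phi - A @ x
    p = r.copy()
    phi_norm = np.linalg.norm(phi)
    while np.linalg.norm(r) / phi_norm > delta:
        tau = (r @ r) / (p @ (A @ p))            # CG steplength
        x = x + tau * p
        r_new = r - tau * (A @ p)                # residual update
        p = r_new + (r_new @ r_new) / (r @ r) * p  # new search direction
        r = r_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)   # SPD stand-in matrix
phi = rng.standard_normal(30)
x = psi(A, phi, delta=0.1)
assert np.linalg.norm(phi - A @ x) / np.linalg.norm(phi) <= 0.1
```

Since A is symmetric positive definite, CG terminates in finitely many steps, so the loop always exits; in practice only a few iterations are needed for δ = 0.1 or 0.2.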

Table 1. Number of iterations (Algorithm 4.1): PCG inner iteration, δ = 0.1, for varying N and k = 2, 5, 10, 20.

We can see from Table 1 that Algorithm 3.1 outperforms Algorithm 4.1 in this example, and Algorithm 3.1 shows a more stable and consistent convergence history than Algorithm 4.1. The results are reasonable, as Algorithm 3.1 should perform better than Algorithm 4.1 when a good preconditioner Ĉ is available for the Schur complement C. Indeed, the preconditioner Ĉ = h²I is spectrally equivalent to C (and the two algorithms are invariant under constant scalings). It can be verified by (5.4) that Ĉ = I satisfies the assumption (2.5) (cf. [6]). Thus we can also apply Algorithm 2.1 to solve the system (5.2)–(5.3). However, the resulting convergence is very slow for this example, because the parameter γ defined by (2.5) is close to one. For example, 19 iterations are needed to reach the same tolerance as in (5.5) when N = 8 and Ψ is defined by the CG iteration with δ = 0.1. We can also use the block diagonal preconditioner with diagonal blocks Â and Ĉ = h²I (spectrally equivalent to C) to solve the system (5.2)–(5.3) by the minimum residual method. Unfortunately, its convergence seems slow: for example, 65 iterations are needed to reach the same tolerance when N = 16.

Our second example is a system of purely algebraic equations. We define the matrices A = (a_ij)_{n×n} and B = (b_ij)_{n×m} (n ≥ m) in (1.1) as follows:

a_ij = i + 1 if i = j;  1 if |i − j| = 1;  0 otherwise;
b_ij = 15j if i = j + n − m;  0 otherwise.

The preconditioners Â = (â_ij)_{n×n} and Ĉ = (ĉ_ij)_{m×m} are defined by

â_ij = i if i = j;  0 otherwise;   ĉ_ij = i + 3 if i = j;  0 otherwise.

The right-hand side vectors f and g are defined such that the exact solutions x and y are the vectors with all components being 1.
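The second example is fully specified, so an Uzawa-type sweep on it can be sketched directly. The following is only a simplified illustration in the spirit of Algorithm 3.1, not the paper's exact algorithm: it uses a smaller (n, m) = (40, 30) instance, an exact inner solve in place of Ψ, and a preconditioned steepest-descent step for the y-update; the diagonal preconditioner Ĉ with entries i + 3 is taken as reconstructed above.

```python
import numpy as np

n, m = 40, 30   # smaller than the paper's (200, 150), same structure

# A: tridiagonal, a_ii = i + 1 (1-based), off-diagonal entries 1
A = (np.diag(np.arange(2.0, n + 2))
     + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))

# B: b_ij = 15 j for i = j + n - m (1-based), else 0
B = np.zeros((n, m))
for j in range(1, m + 1):
    B[j + n - m - 1, j - 1] = 15.0 * j

Chat = np.diag(np.arange(1.0, m + 1) + 3.0)   # diagonal Schur preconditioner (reconstructed)

x_exact, y_exact = np.ones(n), np.ones(m)     # right-hand sides chosen so that
f, g = A @ x_exact + B @ y_exact, B.T @ x_exact   # the exact solution is all ones

# Uzawa sweep: exact solve for the x-block, steepest-descent step for the y-block
x, y = np.zeros(n), np.zeros(m)
for _ in range(5000):
    x = np.linalg.solve(A, f - B @ y)         # x-update (exact inner solve for clarity)
    q = B.T @ x - g                           # residual of the Schur equation
    if np.linalg.norm(q) < 1e-12:
        break
    d = np.linalg.solve(Chat, q)              # preconditioned direction
    Sd = B.T @ np.linalg.solve(A, B @ d)      # Schur complement action S d = B^t A^{-1} B d
    y = y + (q @ d) / (d @ Sd) * d            # steepest-descent steplength

assert np.allclose(x, x_exact, atol=1e-6)
assert np.allclose(y, y_exact, atol=1e-6)
```

With exact inner solves this is preconditioned steepest descent on the SPD Schur system, so convergence is guaranteed; the point of the algorithms in the paper is to retain such behavior when the inner solves are inexact and no eigenvalue estimates are available.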
We solve the corresponding system (1.1) by Algorithm 3.1 with the preconditioner Ĉ and Ψ defined by N_c iterations of the CG method (without preconditioning) or of the PCG method with the preconditioner Â. Algorithm 3.1 terminates when the error ε is reduced below a fixed tolerance; the resulting number of iterations is given in Table 2. To compare Algorithm 3.1 with Algorithm 4.1 for this example, we solved the considered system again by Algorithm 4.1. The numerical experiments

indicate that both algorithms need almost the same number of iterations with the preconditioner Ĉ and Ψ defined as above, even when the factor (1 − δ) in (4.8) is replaced by 1. So the detailed numbers of iterations are omitted here. To further compare the performance of both algorithms in the case of a very bad preconditioner Ĉ, we take Ĉ = I, the worst preconditioner for C, namely without any preconditioning. Tables 3 and 4 below list the numbers of iterations in this case.

Table 2. Numbers of iterations (Algorithm 3.1), for (n, m) = (200, 150), (400, 300), (800, 600), with PCG (N_c = 2) and CG (N_c = 6) inner iterations.

Table 3. Numbers of iterations with PCG (N_c = 2) inner iteration and Ĉ = I: Algorithm 3.1 versus Algorithm 4.1 with k = 1, 2, 5, 10, 20, for (n, m) = (200, 150), (400, 300), (800, 600).

Table 4. Numbers of iterations with CG (N_c = 6) inner iteration and Ĉ = I: Algorithm 3.1 versus Algorithm 4.1 with k = 1, 2, 5, 10, 20, for (n, m) = (200, 150), (400, 300), (800, 600).

Tables 3 and 4 indicate that Algorithm 4.1 is more efficient in the case of a very bad preconditioner Ĉ for the Schur complement C. We also tried to use Algorithm 2.1 to solve the considered system, but the numerical experiments indicate that this algorithm diverges for this example. In fact, the assumption (2.5) is not satisfied with the given preconditioner Ĉ. Also note that it is not easy to achieve a good estimate on the minimal generalized eigenvalue of the preconditioner Ĉ with respect to the matrix B^t A^{-1} B in this case (unless the action of A^{-1} is available), so there seems to be no direct way to solve the considered system using Algorithm 2.1.

Acknowledgements. The authors wish to thank the anonymous referees for many constructive comments which led to a great improvement of the paper.

References

1. K. Arrow, L. Hurwicz, H. Uzawa: Studies in Linear and Nonlinear Programming. Stanford University Press, Stanford, CA, 1958
2. O. Axelsson: Numerical algorithms for indefinite problems. In: Elliptic Problem Solvers, Academic Press, New York, 1984
3. R. Bank, B. Welfert, H. Yserentant: A class of iterative methods for solving saddle point problems. Numer. Math. 56 (1990)
4. D. Braess: The convergence rate of a multigrid method with Gauss–Seidel relaxation for the Poisson equation. In: Multigrid Methods (W. Hackbusch, U. Trottenberg, eds.), Springer-Verlag, Berlin, 1982
5. J. Bramble, J. Pasciak, A. Vassilev: Analysis of the inexact Uzawa algorithm for saddle point problems. SIAM J. Numer. Anal. 34 (1997)
6. J. Bramble, J. Pasciak: A preconditioning technique for indefinite systems resulting from mixed approximations of elliptic problems. Math. Comp. 50, 1–18 (1988)
7. F. Brezzi, M. Fortin: Mixed and Hybrid Finite Element Methods. Springer-Verlag, New York, 1991
8. X. Chen: On preconditioned Uzawa methods and SOR methods for saddle-point problems. J. Comput. Appl. Math. 100, 207–224 (1998)
9. X. Chen: Global and superlinear convergence of inexact Uzawa methods for saddle point problems with nondifferentiable mappings. SIAM J. Numer. Anal. 35 (1998)
10. X.-L. Cheng: On the nonlinear inexact Uzawa algorithm for saddle-point problems. SIAM J. Numer. Anal. 37 (2000)
11. Z. Chen, Q. Du, J. Zou: Finite element methods with matching and non-matching meshes for Maxwell equations with discontinuous coefficients. SIAM J. Numer. Anal. 37 (2000)
12. Z. Chen, J. Zou: An augmented Lagrangian method for identifying discontinuous parameters in elliptic systems. SIAM J. Control Optim. 37 (1999)
13. P. Ciarlet: Basic error estimates for elliptic problems. In: Handbook of Numerical Analysis, Volume II (P. Ciarlet, J.-L. Lions, eds.), North-Holland, 1991
14. H. Elman, G. Golub: Inexact and preconditioned Uzawa algorithms for saddle point problems. SIAM J. Numer. Anal. 31 (1994)
15. V. Girault, P.-A. Raviart: Finite Element Methods for Navier–Stokes Equations. Springer-Verlag, Berlin, 1986
16. R. Glowinski, P. Le Tallec: Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. SIAM, Philadelphia, 1989
17. Q. Hu, G. Liang, J. Liu: Construction of a preconditioner for domain decomposition methods with polynomial Lagrangian multipliers. J. Comp. Math. 19 (2001)
18. Q. Hu, J. Zou: On nonlinear inexact Uzawa algorithms for saddle-point problems. Research Report, Department of Mathematics, The Chinese University of Hong Kong (1999)
19. Y. Keung, J. Zou: An efficient linear solver for nonlinear parameter identification problems. SIAM J. Sci. Comput. (2000)
20. A. Klawonn: An optimal preconditioner for a class of saddle point problems with a penalty term. SIAM J. Sci. Comput. 19 (1998)
21. W. Queck: The convergence factor of preconditioned algorithms of the Arrow–Hurwicz type. SIAM J. Numer. Anal. 26 (1989)
22. T. Rusten, R. Winther: A preconditioned iterative method for saddle point problems. SIAM J. Matrix Anal. Appl. 13 (1992)
23. R. Verfürth: The contraction number of a multigrid method. Z. angew. Math. Mech. 63, T386–T387 (1983)


More information

The Mixed Finite Element Multigrid Preconditioned Minimum Residual Method for Stokes Equations

The Mixed Finite Element Multigrid Preconditioned Minimum Residual Method for Stokes Equations The Mixed Finite Element Multigrid Preconditioned Minimum Residual Method for Stokes Equations K. Muzhinji, S. Shateyi, and S, S. Motsa 2 University of Venda, Department of Mathematics, P Bag X5050, Thohoyandou

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 15, pp. 94-105, 2003. Copyright 2003,. ISSN 1068-9613. ETNA MULTILEVEL PRECONDITIONERS FOR LAGRANGE MULTIPLIERS IN DOMAIN IMBEDDING JANNE MARTIKAINEN,

More information

Comparison of V-cycle Multigrid Method for Cell-centered Finite Difference on Triangular Meshes

Comparison of V-cycle Multigrid Method for Cell-centered Finite Difference on Triangular Meshes Comparison of V-cycle Multigrid Method for Cell-centered Finite Difference on Triangular Meshes Do Y. Kwak, 1 JunS.Lee 1 Department of Mathematics, KAIST, Taejon 305-701, Korea Department of Mathematics,

More information

Iterative Solvers in the Finite Element Solution of Transient Heat Conduction

Iterative Solvers in the Finite Element Solution of Transient Heat Conduction Iterative Solvers in the Finite Element Solution of Transient Heat Conduction Mile R. Vuji~i} PhD student Steve G.R. Brown Senior Lecturer Materials Research Centre School of Engineering University of

More information

On the accuracy of saddle point solvers

On the accuracy of saddle point solvers On the accuracy of saddle point solvers Miro Rozložník joint results with Valeria Simoncini and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic Seminar at

More information

A Finite Element Method for the Surface Stokes Problem

A Finite Element Method for the Surface Stokes Problem J A N U A R Y 2 0 1 8 P R E P R I N T 4 7 5 A Finite Element Method for the Surface Stokes Problem Maxim A. Olshanskii *, Annalisa Quaini, Arnold Reusken and Vladimir Yushutin Institut für Geometrie und

More information

Multigrid Based Optimal Shape and Topology Design in Magnetostatics

Multigrid Based Optimal Shape and Topology Design in Magnetostatics Multigrid Based Optimal Shape and Topology Design in Magnetostatics Dalibor Lukáš 1 Department of Applied Mathematics, VŠB Technical University of Ostrava, 17. listopadu 15, 708 33 Ostrava Poruba, Czech

More information

Preconditioners for the incompressible Navier Stokes equations

Preconditioners for the incompressible Navier Stokes equations Preconditioners for the incompressible Navier Stokes equations C. Vuik M. ur Rehman A. Segal Delft Institute of Applied Mathematics, TU Delft, The Netherlands SIAM Conference on Computational Science and

More information

9.1 Preconditioned Krylov Subspace Methods

9.1 Preconditioned Krylov Subspace Methods Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete

More information

IN this paper, we investigate spectral properties of block

IN this paper, we investigate spectral properties of block On the Eigenvalues Distribution of Preconditioned Block wo-by-two Matrix Mu-Zheng Zhu and a-e Qi Abstract he spectral properties of a class of block matrix are studied, which arise in the numercial solutions

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16

Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 XVI - 1 Contraction Methods for Convex Optimization and Monotone Variational Inequalities No.16 A slightly changed ADMM for convex optimization with three separable operators Bingsheng He Department of

More information

A Note on Inverse Iteration

A Note on Inverse Iteration A Note on Inverse Iteration Klaus Neymeyr Universität Rostock, Fachbereich Mathematik, Universitätsplatz 1, 18051 Rostock, Germany; SUMMARY Inverse iteration, if applied to a symmetric positive definite

More information

Dirichlet-Neumann and Neumann-Neumann Methods

Dirichlet-Neumann and Neumann-Neumann Methods Dirichlet-Neumann and Neumann-Neumann Methods Felix Kwok Hong Kong Baptist University Introductory Domain Decomposition Short Course DD25, Memorial University of Newfoundland July 22, 2018 Outline Methods

More information

Constrained Minimization and Multigrid

Constrained Minimization and Multigrid Constrained Minimization and Multigrid C. Gräser (FU Berlin), R. Kornhuber (FU Berlin), and O. Sander (FU Berlin) Workshop on PDE Constrained Optimization Hamburg, March 27-29, 2008 Matheon Outline Successive

More information

A mixed finite element approximation of the Stokes equations with the boundary condition of type (D+N)

A mixed finite element approximation of the Stokes equations with the boundary condition of type (D+N) wwwijmercom Vol2, Issue1, Jan-Feb 2012 pp-464-472 ISSN: 2249-6645 A mixed finite element approximation of the Stokes equations with the boundary condition of type (D+N) Jaouad El-Mekkaoui 1, Abdeslam Elakkad

More information

AMS subject classifications. Primary, 65N15, 65N30, 76D07; Secondary, 35B45, 35J50

AMS subject classifications. Primary, 65N15, 65N30, 76D07; Secondary, 35B45, 35J50 A SIMPLE FINITE ELEMENT METHOD FOR THE STOKES EQUATIONS LIN MU AND XIU YE Abstract. The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in

More information

Optimal multilevel preconditioning of strongly anisotropic problems.part II: non-conforming FEM. p. 1/36

Optimal multilevel preconditioning of strongly anisotropic problems.part II: non-conforming FEM. p. 1/36 Optimal multilevel preconditioning of strongly anisotropic problems. Part II: non-conforming FEM. Svetozar Margenov margenov@parallel.bas.bg Institute for Parallel Processing, Bulgarian Academy of Sciences,

More information

A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems

A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems Etereldes Gonçalves 1, Tarek P. Mathew 1, Markus Sarkis 1,2, and Christian E. Schaerer 1 1 Instituto de Matemática Pura

More information

Stabilization and Acceleration of Algebraic Multigrid Method

Stabilization and Acceleration of Algebraic Multigrid Method Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration

More information

Efficient Solvers for Stochastic Finite Element Saddle Point Problems

Efficient Solvers for Stochastic Finite Element Saddle Point Problems Efficient Solvers for Stochastic Finite Element Saddle Point Problems Catherine E. Powell c.powell@manchester.ac.uk School of Mathematics University of Manchester, UK Efficient Solvers for Stochastic Finite

More information

Continuous methods for numerical linear algebra problems

Continuous methods for numerical linear algebra problems Continuous methods for numerical linear algebra problems Li-Zhi Liao (http://www.math.hkbu.edu.hk/ liliao) Department of Mathematics Hong Kong Baptist University The First International Summer School on

More information

A Robust Approach to Minimizing H(div)-Dominated Functionals in an H 1 -Conforming Finite Element Space

A Robust Approach to Minimizing H(div)-Dominated Functionals in an H 1 -Conforming Finite Element Space NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 23 [Version: 2002/09/18 v1.02] A Robust Approach to Minimizing H(div)-Dominated Functionals in an H 1 -Conforming Finite

More information

Some definitions. Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization. A-inner product. Important facts

Some definitions. Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization. A-inner product. Important facts Some definitions Math 1080: Numerical Linear Algebra Chapter 5, Solving Ax = b by Optimization M. M. Sussman sussmanm@math.pitt.edu Office Hours: MW 1:45PM-2:45PM, Thack 622 A matrix A is SPD (Symmetric

More information

On deflation and singular symmetric positive semi-definite matrices

On deflation and singular symmetric positive semi-definite matrices Journal of Computational and Applied Mathematics 206 (2007) 603 614 www.elsevier.com/locate/cam On deflation and singular symmetric positive semi-definite matrices J.M. Tang, C. Vuik Faculty of Electrical

More information

Abstract. 1. Introduction

Abstract. 1. Introduction Journal of Computational Mathematics Vol.28, No.2, 2010, 273 288. http://www.global-sci.org/jcm doi:10.4208/jcm.2009.10-m2870 UNIFORM SUPERCONVERGENCE OF GALERKIN METHODS FOR SINGULARLY PERTURBED PROBLEMS

More information

Domain Decomposition Algorithms for Saddle Point Problems

Domain Decomposition Algorithms for Saddle Point Problems Contemporary Mathematics Volume 218, 1998 B 0-8218-0988-1-03007-7 Domain Decomposition Algorithms for Saddle Point Problems Luca F. Pavarino 1. Introduction In this paper, we introduce some domain decomposition

More information

Local Inexact Newton Multilevel FEM for Nonlinear Elliptic Problems

Local Inexact Newton Multilevel FEM for Nonlinear Elliptic Problems Konrad-Zuse-Zentrum für Informationstechnik Berlin Heilbronner Str. 10, D-10711 Berlin-Wilmersdorf Peter Deuflhard Martin Weiser Local Inexact Newton Multilevel FEM for Nonlinear Elliptic Problems Preprint

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012

On the Superlinear Convergence of MINRES. Valeria Simoncini and Daniel B. Szyld. Report January 2012 On the Superlinear Convergence of MINRES Valeria Simoncini and Daniel B. Szyld Report 12-01-11 January 2012 This report is available in the World Wide Web at http://www.math.temple.edu/~szyld 0 Chapter

More information

INTRODUCTION TO MULTIGRID METHODS

INTRODUCTION TO MULTIGRID METHODS INTRODUCTION TO MULTIGRID METHODS LONG CHEN 1. ALGEBRAIC EQUATION OF TWO POINT BOUNDARY VALUE PROBLEM We consider the discretization of Poisson equation in one dimension: (1) u = f, x (0, 1) u(0) = u(1)

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.

More information

Jae Heon Yun and Yu Du Han

Jae Heon Yun and Yu Du Han Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose

More information

Multigrid Method ZHONG-CI SHI. Institute of Computational Mathematics Chinese Academy of Sciences, Beijing, China. Joint work: Xuejun Xu

Multigrid Method ZHONG-CI SHI. Institute of Computational Mathematics Chinese Academy of Sciences, Beijing, China. Joint work: Xuejun Xu Multigrid Method ZHONG-CI SHI Institute of Computational Mathematics Chinese Academy of Sciences, Beijing, China Joint work: Xuejun Xu Outline Introduction Standard Cascadic Multigrid method(cmg) Economical

More information

SOME NONOVERLAPPING DOMAIN DECOMPOSITION METHODS

SOME NONOVERLAPPING DOMAIN DECOMPOSITION METHODS SIAM REV. c 1998 Society for Industrial and Applied Mathematics Vol. 40, No. 4, pp. 857 914, December 1998 004 SOME NONOVERLAPPING DOMAIN DECOMPOSITION METHODS JINCHAO XU AND JUN ZOU Abstract. The purpose

More information

Improved Newton s method with exact line searches to solve quadratic matrix equation

Improved Newton s method with exact line searches to solve quadratic matrix equation Journal of Computational and Applied Mathematics 222 (2008) 645 654 wwwelseviercom/locate/cam Improved Newton s method with exact line searches to solve quadratic matrix equation Jian-hui Long, Xi-yan

More information

The semi-convergence of GSI method for singular saddle point problems

The semi-convergence of GSI method for singular saddle point problems Bull. Math. Soc. Sci. Math. Roumanie Tome 57(05 No., 04, 93 00 The semi-convergence of GSI method for singular saddle point problems by Shu-Xin Miao Abstract Recently, Miao Wang considered the GSI method

More information

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization

Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Efficient Augmented Lagrangian-type Preconditioning for the Oseen Problem using Grad-Div Stabilization Timo Heister, Texas A&M University 2013-02-28 SIAM CSE 2 Setting Stationary, incompressible flow problems

More information

The Mixed Finite Element Multigrid Preconditioned MINRES Method for Stokes Equations

The Mixed Finite Element Multigrid Preconditioned MINRES Method for Stokes Equations Journal of Applied Fluid Mechanics, Vol. 9, No. 3, pp. 285-296, 206. Available online at www.jafmonline.net, ISSN 735-3572, EISSN 735-3645. DOI: 0.8869/acadpub.jafm.68.228.22805 The Mixed Finite Element

More information