Universität Augsburg
Institut für Mathematik

Xuejun Xu, Huangxin Chen, Ronald H.W. Hoppe

Local Multilevel Methods for Adaptive Nonconforming Finite Element Methods

Preprint Nr. 21/2009, 14. August 2009

Institut für Mathematik, Universitätsstraße, D-86135 Augsburg
http://www.math.uni-augsburg.de/
Imprint:
Publisher: Institut für Mathematik, Universität Augsburg, 86135 Augsburg
http://www.math.uni-augsburg.de/pages/de/forschung/preprints.shtml
Responsible (ViSdP): Ronald H.W. Hoppe, Institut für Mathematik, Universität Augsburg, 86135 Augsburg
Preprint: All rights remain with the authors. © 2009
LOCAL MULTILEVEL METHODS FOR ADAPTIVE NONCONFORMING FINITE ELEMENT METHODS

XUEJUN XU, HUANGXIN CHEN, AND RONALD H.W. HOPPE

Abstract. In this paper, we propose a local multilevel product algorithm and its additive version for linear systems arising from adaptive nonconforming finite element approximations of second order elliptic boundary value problems. The abstract Schwarz theory is applied to analyze the multilevel methods with Jacobi or Gauss-Seidel smoothers performed on local nodes on coarse meshes and on global nodes on the finest mesh. It is shown that the local multilevel methods are optimal, i.e., the convergence rate of the multilevel methods is independent of the mesh sizes and the number of mesh levels. Numerical experiments are given to confirm the theoretical results.

Introduction

Multigrid methods and other multilevel preconditioning methods for nonconforming finite elements have been studied by many researchers (cf. [4], [5], [6], [7], [14], [18], [19], [20], [21], [23], [26], [27], [28], [32], [34]). The BPX framework developed in [4] provides a unified convergence analysis for nonnested multigrid methods. Duan et al. [14] extended this result to general V-cycle nonnested multigrid methods, but only the case of full elliptic regularity was considered. Moreover, Brenner [7] established a framework for the nonconforming V-cycle multigrid method under less restrictive regularity assumptions. For multilevel preconditioning methods, Oswald developed a hierarchical basis multilevel method [19] and a BPX-type multilevel preconditioner [20] for nonconforming finite elements. Vassilevski and Wang [26] presented multilevel algorithms for nonconforming finite element methods and obtained a uniform convergence result without additional regularity beyond $H^1$.
Furthermore, Hoppe and Wohlmuth [15] considered multilevel preconditioned conjugate gradient methods for nonconforming P1 finite element approximations with respect to adaptively generated hierarchies of nonuniform meshes based on residual-type a posteriori error estimators. Recent studies (cf., e.g., [2], [10], [11], [17], [24]) indicate optimal convergence properties of adaptive conforming and nonconforming finite element methods. Therefore, in order to achieve an optimal numerical solution, it is imperative to study efficient iterative algorithms for the solution of linear systems arising from adaptive

2000 Mathematics Subject Classification. Primary 54C40, 14E20; Secondary 46E25, 20C20.
Key words and phrases. local multilevel methods, adaptive nonconforming finite element methods, convergence analysis, optimality.
The work of the first two authors was supported in part by the National Basic Research Program of China (Grant No. 2005CB321701) and the National Natural Science Foundation of China (Grant No. 10731060). The work of the third author was supported in part by NSF grants DMS-0707602, DMS-0810156, DMS-0811153, and DMS-0914788.
finite element methods (AFEM). Since the number of degrees of freedom $N$ per level may not grow exponentially with the number of mesh levels, as Mitchell has pointed out in [16] for adaptive conforming finite element methods, the number of operations used by multigrid methods with smoothers performed on all nodes can be as bad as $O(N^2)$, and a similar situation may also occur in the nonconforming case. For adaptive conforming finite element methods, Wu and Chen [29] have obtained uniform convergence of the multigrid V-cycle algorithm which performs Gauss-Seidel smoothing only on newly generated nodes and on those old nodes where the support of the associated nodal basis function has changed. To our knowledge, so far there does not exist an optimal multilevel method for nonconforming finite element methods on locally refined meshes. The reason is that the theoretical analysis of local multilevel methods is rather difficult. Indeed, there are two difficulties which need to be overcome. First, the Xu and Zikatanov identity [31], on which the proof in [29] depends, cannot be applied directly, because the multilevel spaces are nonnested in this situation. The second difficulty is how to establish the strengthened Cauchy-Schwarz inequality on nonnested multilevel spaces. In this paper, we construct a special prolongation operator from the coarse spaces to the finest space and obtain the key global strengthened Cauchy-Schwarz inequality. Two multilevel methods, a product and an additive version, are proposed. Applying the well-known Schwarz theory (cf. [25]), we show that local multilevel methods for adaptive nonconforming finite element methods are optimal, i.e., the convergence rate of the multilevel algorithms is independent of mesh sizes and the number of mesh levels. The remainder of this paper is organized as follows: In section 1, we introduce some notation and briefly review nonconforming P1 finite element methods.
Section 2 is concerned with condition number estimates for the linear systems arising from adaptive nonconforming finite element methods, obtained by the techniques presented by Bank and Scott in [1]. Section 3 is devoted to the derivation of a local multilevel product algorithm and its additive version. In section 4, we develop an abstract Schwarz theory based on three assumptions whose verification is carried out for local Jacobi and local Gauss-Seidel smoothers, respectively. Finally, in the last section we give some numerical experiments to confirm the theoretical analysis.

1. Notations and Preliminaries

Throughout this paper, we adopt standard notation from Lebesgue and Sobolev space theory (cf., e.g., [13]). In particular, we refer to $(\cdot,\cdot)$ as the inner product in $L^2(\Omega)$ and to $\|\cdot\|_{1,\Omega}$ as the norm in the Sobolev space $H^1(\Omega)$. We further write $A \lesssim B$ if $A \le CB$ with a positive constant $C$ depending only on the shape regularity of the meshes; $A \eqsim B$ stands for $A \lesssim B \lesssim A$. We consider elliptic boundary value problems in polyhedral domains $\Omega \subset \mathbb{R}^n$, $n \ge 2$. However, for the sake of simplicity the analysis of the local multilevel methods will be restricted to the 2D case. Given a bounded, polygonal domain $\Omega \subset \mathbb{R}^2$, we consider the following second order elliptic boundary value problem
(1.1)  $Lu := -\mathrm{div}(a(x)\nabla u) = f$ in $\Omega$,
(1.2)  $u = 0$ on $\partial\Omega$.
The choice of a homogeneous Dirichlet boundary condition is made for ease of presentation only. Similar results are valid for other types of boundary conditions and for equation (1.1) with lower order terms as well. We further assume that the data in (1.1) satisfy the following properties:
(a) $a(\cdot)$ is a measurable function and there exist constants $\beta_1 \ge \beta_0 > 0$ such that
(1.3)  $\beta_0 \le a(x) \le \beta_1$ for a.a. $x \in \Omega$;
(b) $f \in L^2(\Omega)$.
The weak formulation of (1.1) and (1.2) is to find $u \in V := H^1_0(\Omega)$ such that
(1.4)  $a(u,v) = (f,v)$ for all $v \in V$,
where the bilinear form $a : V \times V \to \mathbb{R}$ is given by
(1.5)  $a(u,v) = (a\nabla u, \nabla v)$, $u, v \in V$.
Since the bilinear form (1.5) is bounded and $V$-elliptic, the existence and uniqueness of the solution of (1.4) follows from the Lax-Milgram theorem. Throughout this paper, we work with families of shape regular meshes $\{\mathcal{T}_i,\ i = 0,1,\ldots,L\}$, where $\mathcal{T}_0$ is an intentionally chosen coarse initial triangulation and the others are obtained by adaptive procedures, refined by the newest vertex bisection algorithm. It has been proved that there exists a constant $\theta > 0$ such that
(1.6)  $\theta_T \ge \theta$, $T \in \mathcal{T}_i$, $i = 1,2,\ldots,L$,
where $\theta_T$ is the minimum angle of the element $T$. The set of edges of $\mathcal{T}_i$ is denoted by $\mathcal{E}_i$, and the sets of interior and boundary edges by $\mathcal{E}_i^0$ and $\mathcal{E}_i^{\partial\Omega}$, respectively. Correspondingly, let $\mathcal{M}_i$ denote the set of all midpoints of edges in $\mathcal{E}_i$ and $\mathcal{M}_i^0$ the set of midpoints of edges in $\mathcal{E}_i^0$. We refer to $\mathcal{N}_i$ as the set of interior nodes of $\mathcal{T}_i$. For any $E \in \mathcal{E}_i$, $h_{i,E}$ and $m_{i,E}$ denote the length and the midpoint of $E$. The patch $\omega_{i,E}$, $E \in \mathcal{E}_i^0$, is the union of the two elements in $\mathcal{T}_i$ sharing $E$. For any $T \in \mathcal{T}_i$, $h_{i,T}$ and $x_T$ stand for the diameter and the barycenter of $T$. We denote by $V_L$ the lowest order nonconforming Crouzeix-Raviart finite element space with respect to $\mathcal{T}_L$, i.e.,
$V_L = \{ v \in L^2(\Omega) \mid v|_T \in P_1(T),\ T \in \mathcal{T}_L,\ \textstyle\int_E [v]\,ds = 0,\ E \in \mathcal{E}_L \}.$
Here, $[v]_E$ refers to the jump of $v$ across $E \in \mathcal{E}_L^0$, while $[v]_E$ is taken to be the trace of $v$ on $E$ for $E \in \mathcal{E}_L^{\partial\Omega}$.
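The newest vertex bisection used to generate the hierarchy $\{\mathcal{T}_i\}$ above can be sketched as follows. This is only a minimal illustration (not the implementation behind the experiments of this paper): each triangle is stored as a vertex triple whose last entry is its newest vertex, so the refinement edge is the edge opposite to it.

```python
# Minimal sketch of newest vertex bisection (illustration only).
# A triangle is a tuple (a, b, c) of 2D points; c is the "newest" vertex,
# and the refinement edge is the edge (a, b) opposite to c.

def area(tri):
    (ax, ay), (bx, by), (cx, cy) = tri
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0

def bisect(tri):
    """Bisect the refinement edge; the midpoint becomes the newest vertex
    of both children. Repeated bisection produces only finitely many
    similarity classes, which is the mechanism behind the minimum angle
    bound (1.6)."""
    a, b, c = tri
    m = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    return (c, a, m), (b, c, m)

T = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))   # newest vertex: (0, 1)
t1, t2 = bisect(T)                         # two children of half the area
```

Each bisection halves the element area, and marking elements for bisection plus closure (bisecting neighbors to remove hanging nodes) yields the conforming, shape regular meshes $\mathcal{T}_i$ assumed above.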
Moreover, we define the conforming P1 finite element space by
$V_L^c = \{ v^c \in V \mid v^c|_T \in P_1(T),\ T \in \mathcal{T}_L \}.$
The nonconforming finite element approximation of (1.4) is to find $u_L \in V_L$ such that
(1.7)  $a_L(u_L, v_L) = (f, v_L)$ for all $v_L \in V_L$,
where $a_L(\cdot,\cdot)$ stands for the mesh-dependent bilinear form
(1.8)  $a_L(u_L, v_L) = \sum_{T \in \mathcal{T}_L} (a\nabla u_L, \nabla v_L)_{0,T}.$
Existence and uniqueness of the solution $u_L$ again follow from the Lax-Milgram theorem. In the sequel, we refer to $\|\cdot\|_{1,L}$ as the mesh-dependent energy norm
$\|v\|_{1,L}^2 = \sum_{T \in \mathcal{T}_L} |v|_{1,T}^2.$
For brevity, we will drop the subscript $L$ from some of the above quantities if no confusion is possible, e.g., we will write $h_T$ instead of $h_{L,T}$ and $a(\cdot,\cdot)$ instead of $a_L(\cdot,\cdot)$.

2. Condition number estimate

The computation of the solution $u_L$ of (1.7) requires the solution of a matrix equation with respect to a particular basis of $V_L$. Suppose that $\{\varphi_i,\ i = 1,\ldots,N\}$ is a given basis of $V_L$, where $N$ is the dimension of $V_L$, and define the matrix $A$ and the vector $F$ according to $A_{ij} := a(\varphi_i, \varphi_j)$ and $F_i := (f, \varphi_i)$, $i,j = 1,\ldots,N$. Then, equation (1.7) is equivalent to the linear algebraic system
(2.1)  $AX = F$,
where $u_L = \sum_{i=1}^N u_i \varphi_i$ and $X = (u_i)$. In this section, we do not restrict ourselves to the two-dimensional case, but consider domains $\Omega \subset \mathbb{R}^n$, $n \ge 2$. We will specify conditions on $V_L$ and the basis $\{\varphi_i,\ i = 1,\ldots,N\}$ that allow us to establish upper bounds for the condition number of $A$. We assume that $\mathcal{T}_L$ contains at most $\alpha_1 N$ elements, with $\alpha_1$ denoting a fixed constant. The following estimates hold true (cf., e.g., [13]):
(2.2)  $|v|_{1,T}^2 \lesssim h_T^{n-2} \|v\|_{L^\infty(T)}^2 \lesssim \|v\|_{L^{2n/(n-2)}(T)}^2$, $T \in \mathcal{T}_L$, $v \in V_L$, $n \ge 3$.
In the special case of two dimensions ($n = 2$), we supplement the latter estimate in (2.2) by the inequality
(2.3)  $\|v\|_{L^\infty(T)} \lesssim h_T^{-2/p} \|v\|_{L^p(T)}$, $T \in \mathcal{T}_L$, $v \in V_L$, $1 \le p \le \infty$.
Under the assumptions on the domain $\Omega$, there exists a continuous embedding $H^1(\Omega) \hookrightarrow L^p(\Omega)$. For $n \ge 3$, Sobolev's inequality
(2.4)  $\|v\|_{L^{2n/(n-2)}(\Omega)} \le C \|v\|_{1,\Omega}$, $v \in H^1(\Omega)$,
holds true. In two dimensions, we have the more explicit estimate (cf., e.g., [1])
(2.5)  $\|v\|_{L^p(\Omega)} \le C \sqrt{p}\, \|v\|_{1,\Omega}$, $v \in H^1(\Omega)$, $p < \infty$.
As far as the basis $\{\varphi_i,\ i = 1,\ldots,N\}$ of $V_L$ is concerned, we assume that it is a local basis:
(2.6)  $\max_{1 \le i \le N} \mathrm{card}\{ T \in \mathcal{T}_L : \mathrm{supp}(\varphi_i) \cap T \neq \emptyset \} \le \alpha_2.$
Finally, we impose a more important assumption with regard to the scaling of the basis:
(2.7)  $h_T^{n-2} \|v\|_{L^\infty(T)}^2 \lesssim \sum_{\mathrm{supp}(\varphi_i) \supset T} v_i^2 \lesssim h_T^{n-2} \|v\|_{L^\infty(T)}^2$, $T \in \mathcal{T}_L$,
where $v = \sum_{i=1}^N v_i \varphi_i$ and $(v_i)$ is arbitrary. For instance, if $\{\psi_i,\ i = 1,\ldots,N\}$ denote the Crouzeix-Raviart P1 nonconforming basis functions, we may define a new scaled basis $\{\varphi_i,\ i = 1,\ldots,N\}$ by $\varphi_i := h_i^{(2-n)/2} \psi_i$, where $h_i$ is the diameter of the support of $\psi_i$. Then, the new basis satisfies assumption (2.7). We impose the same assumption (2.7) on the conforming finite element basis, when utilized in the sequel. For the analysis of the condition number, we propose a prolongation operator from $V_L$ to $\widetilde V_{L+1}^c$, where $\widetilde V_{L+1}^c$ is the conforming finite element space based on $\widetilde{\mathcal{T}}_{L+1}$. Here, $\widetilde{\mathcal{T}}_{L+1}$ is an auxiliary triangulation, used only in the analysis, which is obtained from $\mathcal{T}_L$ by subdividing each $T \in \mathcal{T}_L$ into $2^n$ simplices by joining the midpoints of the edges. We refer to $T$ as an element in $\mathcal{T}_L$ with vertices $x_k$, $k = 1,\ldots,n+1$, and denote the midpoints of its edges by $m_1,\ldots,m_s$, where $s$ is the number of edges of $T$, e.g., $s = 3$ if $n = 2$. In case $n = 2$, the prolongation operator $I_{L+1} : V_L \to \widetilde V_{L+1}^c$ is defined by
$I_{L+1} v(m_l) = v(m_l)$, $l = 1,2,3$,  $I_{L+1} v(x_k) = \beta_k$, $k = 1,2,3$,
where $\beta_k$ is the average of the values of $v$ at $x_k$. Moreover, $I_{L+1} v(x_k) = 0$ if $x_k$ is located on the Dirichlet boundary. The stability of $I_{L+1}$ has been derived for the case that the finer triangulation is obtained by the above subdivision. In the AFEM procedures, however, we use the newest vertex bisection algorithm; the associated stability analysis of the operators $I_{i-1}^i$, $i = 1,\ldots,L$, will be given in the appendix of this paper. In the case $n \ge 3$, we define $I_{L+1} : V_L \to \widetilde V_{L+1}^c$ according to
$I_{L+1} v(m_l) = \alpha_l$, $l = 1,\ldots,s$,  $I_{L+1} v(x_k) = \beta_k$, $k = 1,\ldots,n+1$,
where $\alpha_l$ and $\beta_k$ are the averages of the values of $v$ at $m_l$ and $x_k$, respectively, and $I_{L+1} v(x_k) = 0$ or $I_{L+1} v(m_l) = 0$ if $x_k$ or $m_l$ is situated on the Dirichlet boundary. The associated stability analysis of $I_{L+1}$ can be obtained analogously.
We now give bounds on the condition number of the matrix $A := (a(\varphi_i, \varphi_j))$, where $\{\varphi_i,\ i = 1,\ldots,N\}$ is the scaled basis of $V_L$ satisfying the above assumptions. In the general case $n \ge 3$, we have the following result.

Theorem 2.1. Suppose that the nonconforming finite element space $V_L$ satisfies (2.2) and the basis $\{\varphi_i,\ i = 1,\ldots,N\}$ satisfies (2.6) and (2.7). Then, the $\ell^2$-condition number $K_2(A)$ of $A$ is bounded by
(2.8)  $K_2(A) \lesssim N^{2/n}.$

Proof. We set $v = \sum_{i=1}^N v_i \varphi_i$; then
$a(v,v) = X^t A X$, where $X = (v_i)$.
By a similar technique as in the proof of Theorem 4.1 in [1], we have $a(v,v) \lesssim X^t X$. On the other hand, we apply the prolongation operator $I_{L+1}$ to $v$ and set
$I_{L+1} v = \sum_{x_i \in \mathcal{N}_{L+1}(\widetilde{\mathcal{T}}_{L+1})} I_{L+1} v(x_i)\, \widetilde\varphi_{i,L+1},$
where $\{\widetilde\varphi_{i,L+1}\}$ is the conforming finite element basis of $\widetilde V_{L+1}^c$. By Hölder's inequality, Sobolev's inequality (2.4), and the stability of $I_{L+1}$, we derive the complementary inequality
$X^t X \lesssim \sum_{T \in \widetilde{\mathcal{T}}_{L+1}} \sum_{\mathrm{supp}(\widetilde\varphi_{i,L+1}) \supset T} |I_{L+1}v(x_i)|^2 \lesssim \sum_{T \in \widetilde{\mathcal{T}}_{L+1}} h_T^{n-2} \|I_{L+1}v\|_{L^\infty(T)}^2 \lesssim \sum_{T \in \widetilde{\mathcal{T}}_{L+1}} \|I_{L+1}v\|_{L^{2n/(n-2)}(T)}^2 \lesssim N^{2/n} \|I_{L+1}v\|_{L^{2n/(n-2)}(\Omega)}^2 \lesssim N^{2/n} \|I_{L+1}v\|_{1,\Omega}^2 \lesssim N^{2/n} \|v\|_{1,L}^2 \lesssim N^{2/n} a(v,v).$
Using the above estimates, we obtain
$N^{-2/n} X^t X \lesssim X^t A X \lesssim X^t X,$
which implies that $\lambda_{\min}(A) \gtrsim N^{-2/n}$ and $\lambda_{\max}(A) \lesssim 1$. Recalling $K_2(A) = \lambda_{\max}(A)/\lambda_{\min}(A)$, the above two estimates yield (2.8). □

In the special case $n = 2$, a similar result can be deduced as follows.

Theorem 2.2. Suppose that the nonconforming finite element space $V_L$ satisfies (2.2) and (2.3), and that the basis $\{\varphi_i,\ i = 1,\ldots,N\}$ satisfies (2.6) and (2.7). Then, the $\ell^2$-condition number $K_2(A)$ of $A$ is bounded by
(2.9)  $K_2(A) \lesssim N\,(1 + |\log(N h_{\min}^2(\mathcal{E}_L))|).$

Proof. As in the proof of the above theorem, it suffices to show that
(2.10)  $N^{-1}(1 + |\log(N h_{\min}^2(\mathcal{E}_L))|)^{-1}\, X^t X \lesssim X^t A X \lesssim X^t X.$
We set $v = \sum_{i=1}^N v_i \varphi_i$, $X = (v_i)$, and $a(v,v) = X^t A X$. Then, $a(v,v) \lesssim X^t X$ holds true as in Theorem 5.1 in [1]. As far as the lower bound in (2.10) is concerned, as in the proof of Theorem 2.1 we have for $p > 2$
$X^t X \lesssim \sum_{T \in \widetilde{\mathcal{T}}_{L+1}} \sum_{\mathrm{supp}(\widetilde\varphi_{i,L+1}) \supset T} |I_{L+1}v(x_i)|^2 \lesssim \sum_{T \in \widetilde{\mathcal{T}}_{L+1}} \|I_{L+1}v\|_{L^\infty(T)}^2 \lesssim \sum_{T \in \widetilde{\mathcal{T}}_{L+1}} h_T^{-4/p} \|I_{L+1}v\|_{L^p(T)}^2 \le \Big( \sum_{T \in \widetilde{\mathcal{T}}_{L+1}} h_T^{-4/(p-2)} \Big)^{(p-2)/p} \|I_{L+1}v\|_{L^p(\Omega)}^2 \lesssim \Big( \sum_{T \in \widetilde{\mathcal{T}}_{L+1}} h_T^{-4/(p-2)} \Big)^{(p-2)/p} p\, a(v,v) \lesssim N (N h_{\min}^2(\mathcal{E}_L))^{-2/p}\, p\, a(v,v).$
The special choice $p = \max\{2, |\log(N h_{\min}^2(\mathcal{E}_L))|\}$ allows us to conclude. □

For a fixed triangulation, the conforming P1 finite element space is contained in the nonconforming P1 finite element space. Hence, the sharpness of the bounds in Theorem 2.2 can be verified by the same example as in [1].
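The effect of the diagonal scaling (2.7) on locally refined meshes can be observed cheaply in a one-dimensional analogue (the theory above is stated for $n \ge 2$; this experiment, assuming NumPy, only illustrates the mechanism): on a mesh geometrically graded towards $x = 0$, the unscaled P1 stiffness matrix has condition number of order $h_{\min}^{-1}$, while rescaling each basis function by the square root of the diameter of its support removes the dependence on $h_{\min}$.

```python
import numpy as np

# 1D analogue of the scaling (2.7): phi_i = h_i^{1/2} psi_i, illustration only.
J = 15
nodes = np.concatenate(([0.0], 2.0 ** np.arange(-J, 1.0)))  # 0, 2^-J, ..., 1
h = np.diff(nodes)                                          # element sizes
n = len(nodes) - 2                                          # interior nodes

A = np.zeros((n, n))                                        # P1 stiffness of -u''
for i in range(n):
    A[i, i] = 1.0 / h[i] + 1.0 / h[i + 1]
    if i + 1 < n:
        A[i, i + 1] = A[i + 1, i] = -1.0 / h[i + 1]

D = np.diag(np.sqrt(h[:-1] + h[1:]))   # sqrt of diam(supp(psi_i))
A_scaled = D @ A @ D                   # stiffness in the scaled basis

cond_u = np.linalg.cond(A)             # grows like 1/h_min = 2^J
cond_s = np.linalg.cond(A_scaled)      # stays moderate
```

Note that for the 2D Crouzeix-Raviart basis the scaling exponent $(2-n)/2$ vanishes, so the standard basis already satisfies (2.7), and Theorem 2.2 gives the nearly linear bound (2.9).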
3. Local multilevel methods

The preceding section clearly shows that for the solution of a large scale problem the convergence of standard iterations such as Gauss-Seidel or CG will become very slow. This motivates the construction of more efficient iterative algorithms for the algebraic systems resulting from adaptive nonconforming finite element approximations. We will derive our local multilevel methods for adaptive nonconforming finite element discretizations based on the Crouzeix-Raviart elements. As a prerequisite, we again use the prolongation operator $I_{i-1}^i : V_{i-1} \to V_i$ defined as in section 2. Here, $\mathcal{T}_i$ represents a refinement of $\mathcal{T}_{i-1}$ by the newest vertex bisection algorithm, and $I_{i-1}^i$ assigns the values of $I_{i-1}^i v$ at the vertices of elements of level $i$, yielding a continuous piecewise linear function on $\mathcal{T}_i$. Since $I_{i-1}^i v$ is a continuous function in $V_i$, it naturally represents a function in the finest space $V_L$. Hence, the operator $I_{i-1}$ given by
$I_{i-1} v := I_{i-1}^i v$, $v \in V_{i-1}$,
defines an intergrid operator from $V_{i-1}$ to $V_L$. For $0 \le i \le L$, we define $A_i : V_i \to V_i$ by means of
$(A_i v, w) = a_i(v, w)$, $w \in V_i$.
We also define projections $P_i, P_i^0 : V_L \to V_i$ according to
$a_i(P_i v, w) = a(v, I_i w)$,  $(P_i^0 v, w) = (v, I_i w)$, $v \in V_L$, $w \in V_i$.
For any node $z \in \mathcal{N}_i$, we use the notation $\varphi_i^z$ to represent the associated nodal conforming basis function of $V_i^c$. Let $\widetilde{\mathcal{N}}_i^c$ be the set of new nodes and those old nodes where the support of the associated basis function has changed, i.e.,
$\widetilde{\mathcal{N}}_i^c = \{ z \in \mathcal{N}_i : z \in \mathcal{N}_i \setminus \mathcal{N}_{i-1} \text{ or } z \in \mathcal{N}_{i-1} \text{ but } \varphi_i^z \neq \varphi_{i-1}^z \}.$
Let $\widetilde{\mathcal{M}}_i$ represent the set of midpoints on which the local smoothers are performed:
$\widetilde{\mathcal{M}}_i := \{ m_{i,E} \in \mathcal{M}_i : m_{i,E} \in \mathcal{M}_i^0(\widehat{\mathcal{T}}_i) \}$, where $\widehat{\mathcal{T}}_i = \bigcup_{z \in \widetilde{\mathcal{N}}_i^c} \mathrm{supp}(\varphi_i^z)$.
For convenience, we set $\widetilde{\mathcal{M}}_i = \{ m_i^k,\ k = 1,\ldots,\tilde n_i \}$, where $\tilde n_i$ is the cardinality of $\widetilde{\mathcal{M}}_i$, and refer to $\varphi_i^k = \varphi_i^{m_i^k}$ as the Crouzeix-Raviart nonconforming finite element basis function associated with $m_i^k$.
Then, for $k = 1,\ldots,\tilde n_i$, let $P_i^k, Q_i^k : V_i \to V_i^k = \mathrm{span}\{\varphi_i^k\}$ be defined by
$a_i(P_i^k v, \varphi_i^k) = a_i(v, \varphi_i^k)$,  $(Q_i^k v, \varphi_i^k) = (v, \varphi_i^k)$, $v \in V_i$,
and let $A_i^k : V_i^k \to V_i^k$ be defined by
$(A_i^k v, \varphi_i^k) = a_i(v, \varphi_i^k)$, $v \in V_i^k$.
It is easy to see that the following relationship holds true:
(3.1)  $A_i^k P_i^k = Q_i^k A_i.$
We assume that the local smoothing operator $R_i : V_i \to V_i$ is nonnegative and either symmetric or nonsymmetric with respect to the inner product $(\cdot,\cdot)$. It will be precisely defined and further studied in section 4. For $i = 1,\ldots,L-1$, $R_i$ acts only on the local midpoints $\widetilde{\mathcal{M}}_i$ (we refer to Figure 1 for an illustration). On the coarsest level, we solve directly, i.e., $R_0 = A_0^{-1}$. On the finest level, $R_L$ is carried out on all
midpoints $\mathcal{M}_L^0$, i.e., $\tilde n_L = \#\mathcal{M}_L^0$. For simplicity, we set $A = A_L$ and denote by $I$ and $P_L$ the identity operator on the finest space $V_L$.

Figure 1. Coarse mesh (left), fine mesh (right) and illustration of $\widetilde{\mathcal{M}}_i$: the big nodes on the right refer to $\widetilde{\mathcal{N}}_i^c$, the small nodes refer to $\widetilde{\mathcal{M}}_i$, $i = 1,\ldots,L-1$.

We set
$S_i := I_i R_i A_i P_i$, $i = 0,1,\ldots,L$.
Now, we scale $S_i$ as follows:
(3.2)  $T_i := \mu_{L,i} S_i$, $i = 0,1,\ldots,L$,
where $\mu_{L,i} > 0$ is a parameter, independent of mesh sizes and mesh levels, chosen such that
$a(T_i v, T_i v) \le \omega_i\, a(T_i v, v)$, $v \in V_L$, $\omega_i < 2$.
We will drop the subscript $L$ from $\mu_{L,i}$ since no confusion is possible in the convergence analysis. With the sequence of operators $\{T_i,\ i = 0,1,\ldots,L\}$, we can now state the local multilevel algorithms for adaptive nonconforming finite element methods as follows.

Algorithm 3.1. Local multilevel product algorithm (LMPA).
Given an arbitrarily chosen initial iterate $u^0 \in V_L$, we seek $u^n \in V_L$ as follows:
(i) Let $v^0 = u^{n-1}$. For $i = 0,1,\ldots,L$, compute $v^{i+1}$ by
(3.3)  $v^{i+1} = v^i + T_i(u - v^i).$
(ii) Set $u^n = v^{L+1}$.

Algorithm 3.2. Local multilevel additive algorithm (LMAA).
Let $T = \sum_{i=0}^{L} T_i$ and let $u$ be the exact solution of (1.7). Find $\tilde u \in V_L$ such that
(3.4)  $T \tilde u = \tilde f$,
where $\tilde f = \sum_{i=0}^{L} T_i u$. In view of the operator equation $A_i P_i = P_i^0 A$, the function $\tilde f$ in (3.4), although formally defined in terms of the exact finite element solution $u$, can be computed directly from the data, and the same applies to the iteration (3.3).
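In matrix terms, one step of Algorithm 3.1 adds, level by level, the correction $\mu_i I_i R_i$ applied to the restricted residual (by $A_i P_i = P_i^0 A$, the term $T_i(u - v^i)$ is computable from the residual $F - Av^i$). The following is a hedged two-level algebraic sketch with nested P1 spaces, an exact coarse solve ($R_0 = A_0^{-1}$), and a damped Jacobi smoother on all fine nodes; the paper's actual setting is nonnested Crouzeix-Raviart spaces with smoothing on local midpoints only, so all names and parameters here are illustrative.

```python
import numpy as np

def laplace_1d(n):
    """P1 stiffness matrix of -u'' on a uniform mesh with n interior nodes."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) * (n + 1)

nf, nc = 15, 7                       # fine / coarse interior nodes (nf = 2*nc + 1)
A = laplace_1d(nf)
P = np.zeros((nf, nc))               # prolongation: linear interpolation
for j in range(nc):
    P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
A0 = P.T @ A @ P                     # Galerkin coarse operator

F = np.ones(nf)
u = np.linalg.solve(A, F)            # exact discrete solution (to measure the error)

v = np.zeros(nf)
for it in range(20):                 # one LMPA-style sweep per iteration
    r = F - A @ v
    v = v + P @ np.linalg.solve(A0, P.T @ r)   # level 0: R0 = A0^{-1}
    r = F - A @ v
    v = v + (2.0 / 3.0) * r / np.diag(A)       # finest level: damped Jacobi
err = np.linalg.norm(u - v)
```

The iterates contract uniformly in the number of unknowns, which is the algebraic counterpart of the level-independent bound established for the error operator $E$ in section 4.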
Obviously, (3.4) has a unique solution $\tilde u$ coinciding with the solution $u$ of (1.7). The conjugate gradient method can be used to solve the new problem if $T$ is symmetric. We can also apply the conjugate gradient method to the symmetric version of LMAA (SLMAA) by solving
$(T + T^*)\,\tilde u = 2\hat f$
instead of (3.4), where $\hat f = \sum_{i=0}^{L} \frac{T_i + T_i^*}{2}\, u$ and $T^*$ denotes the adjoint operator of $T$ with respect to the inner product $a(\cdot,\cdot)$.

4. Convergence theory

In this section, we provide an abstract theory concerned with the convergence of local multilevel methods for linear systems arising from adaptive nonconforming finite element methods. We will use the well-known Schwarz theory developed in [25], [30] and [35] to analyze the algorithms. Let $\{T_i,\ i = 0,1,\ldots,L\}$ be the sequence of operators from the finest space $V_L$ to itself introduced above. The abstract theory provides an estimate for the norm of the error operator
$E = (I - T_L) \cdots (I - T_1)(I - T_0),$
where $I$ is the identity operator on $V_L$. The convergence estimate for the algorithm LMPA is then obtained from the norm estimate for $E$. The abstract theory can be invoked due to the following assumptions.

(A1) Each operator $T_i$ is nonnegative with respect to the inner product $a(\cdot,\cdot)$, and there exists a positive constant $\omega_i < 2$, which depends on $\mu_i$, such that
$a(T_i v, T_i v) \le \omega_i\, a(T_i v, v)$, $v \in V_L$.

(A2) Stability: There exists a constant $K_0$ such that
$a(v,v) \le \frac{K_0}{\mu}\, a(Tv, v)$, $v \in V_L$, where $\mu = \min_{0 \le i \le L} \{\mu_i\}$.

(A3) Global strengthened Cauchy-Schwarz inequality: There exists a constant $K_1$ such that
$\sum_{i=0}^{L} \sum_{j=0}^{i-1} a(T_i v, T_j u) \le K_1 \Big( \sum_{i=0}^{L} a(T_i v, v) \Big)^{1/2} \Big( \sum_{j=0}^{L} a(T_j u, u) \Big)^{1/2}$, $v, u \in V_L$.

As in the proof of (4.1) in [33], it is easy to show that the following inequality holds true for the algorithms LMPA and LMAA with local smoothers chosen as Jacobi or Gauss-Seidel iterations (in particular, $K_2 = 1$ in the Jacobi case):
(4.1)  $\sum_{i=0}^{L} a(T_i v, u) \le K_2 \Big( \sum_{i=0}^{L} a(T_i v, v) \Big)^{1/2} \Big( \sum_{i=0}^{L} a(T_i u, u) \Big)^{1/2}$, $v, u \in V_L$.

Theorem 4.1.
Let assumptions A1-A3 be satisfied. Then, for Algorithm 3.1 the norm of the error operator $E$ can be bounded as follows (cf. [25], [30], [35]):
$a(Ev, Ev) \le \delta\, a(v,v)$, $v \in V_L$,
where $\delta = 1 - \dfrac{\mu(2-\omega)}{K_0 (K_1 + K_2)^2}$ and $\omega = \max_{0 \le i \le L} \{\omega_i\}$.
For the additive multilevel Algorithm 3.2, the following theorem provides a spectral estimate for the operator $T = \sum_{i=0}^{L} T_i$ when $T$ is symmetric with respect to the inner product $a(\cdot,\cdot)$.

Theorem 4.2. If $T$ is symmetric with respect to $a(\cdot,\cdot)$ and assumptions A1-A3 hold true, then we have (cf. [25], [30], [35])
$\frac{\mu}{K_0}\, a(v,v) \le a(Tv, v) \le (2K_1 + \omega)\, a(v,v)$, $v \in V_L$.

When $T$ is nonsymmetric with respect to $a(\cdot,\cdot)$, a similar analysis can be carried out for the spectral estimate of the symmetric part $\frac{T + T^*}{2}$.

Remark 4.1. It should be pointed out that the convergence result for LMPA and for the preconditioned conjugate gradient method based on LMAA depends on the parameter $\mu$, which will be observed in our numerical experiments: the convergence rate deteriorates for decreasing $\mu$.

Next, we will apply the above convergence theory to LMPA and LMAA by verifying assumptions A1-A3 for the adaptive nonconforming finite element method. There are two classes of smoothers $R_i$, Jacobi and Gauss-Seidel iterations, which will be considered separately.

4.1. Local Jacobi smoother. First, for $v \in V_L$ we consider the decomposition
(4.2)  $v = \sum_{i=0}^{L} v_i$,  $v_L = v - \tilde v$,  $v_i = (\Pi_i - \Pi_{i-1})\tilde v$, $i = 0,1,\ldots,L-1$,
where $\tilde v = \widetilde\Pi_{L-1} v$ is a local regularization of $v$ in $V_{L-1}^c$, e.g., by a Clément-type interpolation (cf. [9]), $\Pi_i : V_{L-1}^c \to V_i^c$ stands for the Scott-Zhang interpolation operator [22], and $\Pi_{-1} := 0$. The local Jacobi smoother is defined as an additive smoother (cf. [3]):
(4.3)  $R_i := \gamma \sum_{k=1}^{\tilde n_i} (A_i^k)^{-1} Q_i^k$,
where $\gamma$ is a suitably chosen positive scaling factor. Due to (3.1), we have
(4.4)  $T_0 = \mu_0 I_0 P_0$,  $T_i = \mu_i I_i R_i A_i P_i = \mu_i \gamma\, I_i \sum_{k=1}^{\tilde n_i} P_i^k P_i$, $i = 1,\ldots,L$.

4.1.1. Verification of assumption A1.

Lemma 4.1. Let $T_i$, $i \ge 0$, be defined by (4.4). Then, we have
$a(T_i v, T_i v) \le \omega_i\, a(T_i v, v)$, $v \in V_L$, $\omega_i < 2$.
Moreover, $T_i$ is symmetric and nonnegative on $V_L$. Therefore, assumption A1 is satisfied.

Proof.
Following (4.4), for $v, w \in V_L$ we deduce
$a(T_i v, w) = a(\mu_i I_i R_i A_i P_i v, w) = a_i(\mu_i R_i A_i P_i v, P_i w) = (\mu_i R_i A_i P_i v, A_i P_i w).$
In view of the definition of $R_i$ in (4.3), we easily see that $R_i$ is symmetric and nonnegative on $V_i$. Hence, $T_i$ is symmetric and nonnegative on $V_L$. It is easy to show that the stated result holds true for $T_0$. Indeed, we have
$a(T_0 v, T_0 v) \le \mu_0^2 C_0\, a_0(P_0 v, P_0 v) = \mu_0 C_0\, a(T_0 v, v).$
Let $\omega_0 = \mu_0 C_0$. We choose $\mu_0 < 2/C_0$ such that $\omega_0 < 2$. For $T_i$, $i \ge 1$, we set
$K_i^k = \{ P_i^m : \mathrm{supp}(I_i P_i^k v) \cap \mathrm{supp}(I_i P_i^m v) \neq \emptyset,\ v \in V_i \}$
and
$\gamma_{k,m} = 1$ if $\mathrm{supp}(I_i P_i^k v) \cap \mathrm{supp}(I_i P_i^m v) \neq \emptyset$, and $\gamma_{k,m} = 0$ otherwise.
The cardinality of $K_i^k$ is bounded by a constant depending only on the minimum angle $\theta$ in (1.6). For $v \in V_i$, $i = 1,\ldots,L$, Hölder's inequality implies
(4.5)  $\sum_{k,m=1}^{\tilde n_i} a(I_i P_i^k v, I_i P_i^m v) = \sum_{k,m=1}^{\tilde n_i} \gamma_{k,m}\, a(I_i P_i^k v, I_i P_i^m v) \le C_i \sum_{k=1}^{\tilde n_i} a(I_i P_i^k v, I_i P_i^k v).$
Taking advantage of the definition of $T_i$ in (4.4), (4.5), and the stability of $I_i$, for $v \in V_L$ we have
$a(T_i v, T_i v) = \mu_i^2 \gamma^2\, a\Big( \sum_{k} I_i P_i^k P_i v, \sum_{k} I_i P_i^k P_i v \Big) = \mu_i^2 \gamma^2 \sum_{k,m=1}^{\tilde n_i} a(I_i P_i^k P_i v, I_i P_i^m P_i v) \le \mu_i^2 \gamma^2 C_i \sum_{k} a(I_i P_i^k P_i v, I_i P_i^k P_i v) \le \mu_i^2 \gamma^2 C_0 C_i \sum_{k} a_i(P_i^k P_i v, P_i^k P_i v) = \mu_i^2 \gamma^2 C_0 C_i \sum_{k} a(I_i P_i^k P_i v, v) = \mu_i \gamma C_0 C_i\, a(T_i v, v).$
The proof is completed by setting $\omega_i = \mu_i \gamma C_0 C_i$ and choosing
(4.6)  $0 < \gamma < 1$ and $0 < \mu_i < \dfrac{2}{\gamma C_0 C_i}$
such that $\omega_i < 2$. □

We remark that, since $I_L$ is the identity, we may choose $\mu_L = 1$ and $0 < \gamma < 1$ such that $\omega_L = \gamma C_L < 2$.

4.1.2. Verification of assumption A2.

Lemma 4.2. Let $\{T_i,\ i = 0,1,\ldots,L\}$ be defined by (4.4). Then, there exists a constant $K_0$ such that
$a(v,v) \le \frac{K_0}{\mu}\, a(Tv, v)$, $v \in V_L$, $\mu = \min_{0 \le i \le L} \{\mu_i\}$.
Proof. Due to the decomposition of $v$ in (4.2) and $I_i v_i = v_i$, $i = 0,1,\ldots,L$, where $v_i$ is defined by (4.2), there holds
(4.7)  $a(v,v) = \sum_{i=0}^{L} a(I_i v_i, v) = \sum_{i=0}^{L} a_i(v_i, P_i v).$
For $i = 1,\ldots,L$, we have
(4.8)  $a_i(v_i, P_i v) = \sum_{k=1}^{\tilde n_i} a_i(v_i(m_i^k)\varphi_i^k, P_i v) = \sum_{k=1}^{\tilde n_i} a_i(v_i(m_i^k)\varphi_i^k, P_i^k P_i v) \le \sum_{k=1}^{\tilde n_i} a_i^{1/2}(v_i(m_i^k)\varphi_i^k, v_i(m_i^k)\varphi_i^k)\, a_i^{1/2}(P_i^k P_i v, P_i^k P_i v) \le \Big( \sum_{k=1}^{\tilde n_i} a_i(v_i(m_i^k)\varphi_i^k, v_i(m_i^k)\varphi_i^k) \Big)^{1/2} \Big( \sum_{k=1}^{\tilde n_i} a(I_i P_i^k P_i v, v) \Big)^{1/2}.$
Following (4.7), we deduce
(4.9)  $a(v,v) = \sum_{i=0}^{L} a_i(v_i, P_i v) \le \Big( a_0(v_0, v_0) + \sum_{i=1}^{L} \sum_{k=1}^{\tilde n_i} a_i(v_i(m_i^k)\varphi_i^k, v_i(m_i^k)\varphi_i^k) \Big)^{1/2} \Big( a(I_0 P_0 v, v) + \sum_{i=1}^{L} \sum_{k=1}^{\tilde n_i} a(I_i P_i^k P_i v, v) \Big)^{1/2}.$
Since $a_i(\varphi_i^k, \varphi_i^k) \eqsim 1$, we have $a_i(v_i(m_i^k)\varphi_i^k, v_i(m_i^k)\varphi_i^k) \eqsim v_i^2(m_i^k)$. We note that the following inequality can be derived similarly to Lemma 3.3 in [29]:
$\sum_{i=1}^{L-1} \sum_{k=1}^{\tilde n_i} v_i^2(m_i^k) \lesssim a(\tilde v, \tilde v) = a(\widetilde\Pi_{L-1} v, \widetilde\Pi_{L-1} v) \lesssim a(v,v).$
For the initial level, we have
$a_0(v_0, v_0) = a_0(\Pi_0 \tilde v, \Pi_0 \tilde v) \lesssim a(\tilde v, \tilde v) \lesssim a(v,v).$
For the finest level, there holds
$\sum_{k=1}^{\tilde n_L} v_L^2(m^k) \lesssim \sum_{k=1}^{\tilde n_L} (h^k)^{-2} \| v - \widetilde\Pi_{L-1} v \|_{L^2(\omega^k)}^2 \lesssim a(v,v),$
where $h^k = h_{L,E}$, $m^k \in E$, $E \in \mathcal{E}_L^0$. Hence, we have
(4.10)  $a_0(v_0, v_0) + \sum_{i=1}^{L} \sum_{k=1}^{\tilde n_i} v_i^2(m_i^k) \lesssim a(v,v).$
Combining the above inequalities, we conclude that there exists a constant $\widetilde K_0$ independent of mesh sizes and mesh levels such that
$a(v,v) \le \frac{\widetilde K_0}{\min_{0 \le i \le L}\{\mu_i\}} \Big( \mu_0\, a(I_0 P_0 v, v) + \sum_{i=1}^{L} \sum_{k=1}^{\tilde n_i} a(\mu_i I_i P_i^k P_i v, v) \Big) \le \frac{\widetilde K_0}{\mu\gamma} \sum_{i=0}^{L} a(T_i v, v) = \frac{\widetilde K_0}{\mu\gamma}\, a(Tv, v).$
We thus obtain the stated result by setting $K_0 = \widetilde K_0/\gamma$. □

4.1.3. Verification of assumption A3. As a prerequisite for the verification of assumption A3, we provide the following key lemma, which will be proved in the appendix.

Lemma 4.3. For $i = 1,\ldots,L$, let $\mathcal{T}_i$ be a refinement of $\mathcal{T}_{i-1}$ by the newest vertex bisection algorithm and denote by $\Omega_j^k$ the support of $I_j \varphi_j^k$. Then, for $m_j^k \in \widetilde{\mathcal{M}}_j$ we have
(4.11)  $\sum_{i=j+1}^{L} \sum_{\substack{m_i^l \in \widetilde{\mathcal{M}}_i \\ I_i\varphi_i^l \not\equiv 0 \text{ on } \mathcal{E}_{j+1}^k}} \Big( \frac{h_i^l}{h_j^k} \Big)^{3/2} \lesssim 1, \qquad \sum_{i=j+1}^{L} \sum_{\substack{m_i^l \in \widetilde{\mathcal{M}}_i \\ I_i\varphi_i^l \not\equiv 0 \text{ on } \Omega_j^k}} \Big( \frac{h_i^l}{h_j^k} \Big)^{3} \lesssim 1,$
where $\mathcal{E}_{j+1}^k = \mathcal{E}_{j+1}(\partial\Omega_j^k)$. Likewise, for $m_i^l \in \widetilde{\mathcal{M}}_i$,
(4.12)  $\sum_{j=1}^{i-1} \sum_{\substack{m_j^k \in \widetilde{\mathcal{M}}_j \\ I_i\varphi_i^l \not\equiv 0 \text{ on } \mathcal{E}_{j+1}^k}} \Big( \frac{h_i^l}{h_j^k} \Big)^{1/2} \lesssim 1, \qquad \sum_{j=1}^{i-1} \sum_{\substack{m_j^k \in \widetilde{\mathcal{M}}_j \\ I_i\varphi_i^l \not\equiv 0 \text{ on } \Omega_j^k}} \Big( \frac{h_i^l}{h_j^k} \Big)^{1/2} \lesssim 1.$

We are now in a position to verify assumption A3.

Lemma 4.4. There exists a constant $K_1$, independent of mesh sizes and mesh levels, such that assumption A3 holds true.

Proof. In view of (4.4), we have
$\sum_{i=1}^{L} \sum_{j=1}^{i-1} a(T_i v, T_j u) = \gamma^2 \sum_{j=1}^{L-1} \sum_{k=1}^{\tilde n_j} a\Big( \mu_j I_j P_j^k P_j u, \sum_{i=j+1}^{L} \sum_{l=1}^{\tilde n_i} \mu_i I_i P_i^l P_i v \Big).$
Setting $\omega = \sum_{i=j+1}^{L} \sum_{l=1}^{\tilde n_i} \mu_i I_i P_i^l P_i v$, we have
$a(\mu_j I_j P_j^k P_j u, \omega) = a_j(\mu_j P_j^k P_j u, P_j^k P_j \omega) \le a_j^{1/2}(\mu_j P_j^k P_j u, \mu_j P_j^k P_j u)\, a_j^{1/2}(P_j^k P_j \omega, P_j^k P_j \omega),$
whence
(4.13)  $\sum_{i=1}^{L} \sum_{j=1}^{i-1} a(T_i v, T_j u) \le \gamma^2 \Big( \sum_{j=1}^{L-1} \sum_{k=1}^{\tilde n_j} a_j(\mu_j P_j^k P_j u, \mu_j P_j^k P_j u) \Big)^{1/2} \Big( \sum_{j=1}^{L-1} \sum_{k=1}^{\tilde n_j} a_j(P_j^k P_j \omega, P_j^k P_j \omega) \Big)^{1/2}.$
In view of (4.6), it is obvious that
(4.14)  $\gamma\mu_L = \gamma < 1$, if we choose $\mu_L = 1$,
and there also holds
(4.15)  $\gamma\mu_j < \frac{2}{C_0 C_j} \lesssim 1$, $1 \le j \le L$.
Consequently,
$\sum_{j=1}^{L-1} \sum_{k=1}^{\tilde n_j} a_j(\mu_j P_j^k P_j u, \mu_j P_j^k P_j u) = \sum_{j=1}^{L-1} \mu_j \sum_{k=1}^{\tilde n_j} a(\mu_j I_j P_j^k P_j u, u) = \sum_{j=1}^{L-1} \frac{\mu_j}{\gamma}\, a(T_j u, u) \lesssim \frac{1}{\gamma} \sum_{j=1}^{L} a(T_j u, u).$
Next, it suffices to show that
(4.16)  $\gamma \sum_{j=1}^{L-1} \sum_{k=1}^{\tilde n_j} a_j(P_j^k P_j \omega, P_j^k P_j \omega) \lesssim \sum_{i=2}^{L} a(T_i v, v).$
Clearly, $a_j(\varphi_j^k, \varphi_j^k) \eqsim 1$. We note that
$P_j^k P_j I_i P_i^l P_i v = \frac{a_j(P_j I_i P_i^l P_i v, \varphi_j^k)}{a_j(\varphi_j^k, \varphi_j^k)}\, \varphi_j^k \eqsim a_j(P_j I_i P_i^l P_i v, \varphi_j^k)\, \varphi_j^k,$
which leads us to
$a_j(P_j^k P_j \omega, P_j^k P_j \omega) \lesssim \Big( \sum_{i=j+1}^{L} \sum_{l=1}^{\tilde n_i} a_j(P_j \mu_i I_i P_i^l P_i v, \varphi_j^k) \Big)^2.$
Similarly, $P_i^l P_i v \eqsim a_i(P_i v, \varphi_i^l)\, \varphi_i^l$. It follows that
$a_j(P_j \mu_i I_i P_i^l P_i v, \varphi_j^k) = a(\mu_i I_i P_i^l P_i v, I_j \varphi_j^k) = a_i(\mu_i P_i^l P_i v, P_i I_j \varphi_j^k) \eqsim a_i(a_i(\mu_i P_i v, \varphi_i^l)\varphi_i^l, P_i I_j \varphi_j^k) = a(I_i \varphi_i^l, I_j \varphi_j^k)\, a_i(\mu_i P_i v, \varphi_i^l).$
Since $I_j \varphi_j^k$ is conforming and piecewise linear on $\mathcal{T}_{j+1}|_{\Omega_j^k}$, integration by parts yields
$a(I_i \varphi_i^l, I_j \varphi_j^k) = \sum_{T \subset \Omega_j^k,\, T \in \mathcal{T}_{j+1}} \int_T a(x)\, \nabla I_i \varphi_i^l \cdot \nabla I_j \varphi_j^k = \sum_{T \subset \Omega_j^k,\, T \in \mathcal{T}_{j+1}} \int_{\partial T} a(x)\, \frac{\partial I_j \varphi_j^k}{\partial n}\, I_i \varphi_i^l - \sum_{T \subset \Omega_j^k,\, T \in \mathcal{T}_{j+1}} \int_T \nabla\cdot\big( a(x) \nabla I_j \varphi_j^k \big)\, I_i \varphi_i^l.$
We set $d_j^k = \max\{ h_{j+1,T} : T \subset \Omega_j^k,\ T \in \mathcal{T}_{j+1} \}$. By the minimum angle property (1.6) we have $d_j^k \eqsim h_j^k$; similarly, $d_i^l \eqsim h_i^l$. Observing (1.3), $|\nabla I_j \varphi_j^k| \lesssim (h_j^k)^{-1} \eqsim (d_j^k)^{-1}$, and using
(4.17)  $a_i(\mu_i P_i v, \varphi_i^l) = a_i(\mu_i P_i^l P_i v, \varphi_i^l) \le a_i^{1/2}(\mu_i P_i^l P_i v, \mu_i P_i^l P_i v)\, a_i^{1/2}(\varphi_i^l, \varphi_i^l) \lesssim a^{1/2}(\mu_i^2 I_i P_i^l P_i v, v),$
we deduce
(4.18)  $\sum_{m_i^l \in \widetilde{\mathcal{M}}_i} |a(I_i \varphi_i^l, I_j \varphi_j^k)|\, a_i(\mu_i P_i v, \varphi_i^l) \lesssim \sum_{\substack{m_i^l \in \widetilde{\mathcal{M}}_i \\ I_i\varphi_i^l \not\equiv 0 \text{ on } \mathcal{E}_{j+1}^k}} \frac{h_i^l}{h_j^k}\, a^{1/2}(\mu_i^2 I_i P_i^l P_i v, v) + \sum_{\substack{m_i^l \in \widetilde{\mathcal{M}}_i \\ I_i\varphi_i^l \not\equiv 0 \text{ on } \Omega_j^k}} \Big( \frac{h_i^l}{h_j^k} \Big)^2 a^{1/2}(\mu_i^2 I_i P_i^l P_i v, v).$
Hence, combining (4.11), (4.17) and (4.18), we have
(4.19)  $a_j(P_j^k P_j \omega, P_j^k P_j \omega) \lesssim \Big( \sum_{i=j+1}^{L} \sum_{\substack{m_i^l \in \widetilde{\mathcal{M}}_i \\ I_i\varphi_i^l \not\equiv 0 \text{ on } \mathcal{E}_{j+1}^k}} \frac{h_i^l}{h_j^k}\, a^{1/2}(\mu_i^2 I_i P_i^l P_i v, v) + \sum_{i=j+1}^{L} \sum_{\substack{m_i^l \in \widetilde{\mathcal{M}}_i \\ I_i\varphi_i^l \not\equiv 0 \text{ on } \Omega_j^k}} \Big( \frac{h_i^l}{h_j^k} \Big)^2 a^{1/2}(\mu_i^2 I_i P_i^l P_i v, v) \Big)^2$
$\lesssim \Big( \sum_{i=j+1}^{L} \sum_{\substack{m_i^l: \mathcal{E}_{j+1}^k}} \Big( \frac{h_i^l}{h_j^k} \Big)^{3/2} \Big) \Big( \sum_{i=j+1}^{L} \sum_{\substack{m_i^l: \mathcal{E}_{j+1}^k}} \Big( \frac{h_i^l}{h_j^k} \Big)^{1/2} a(\mu_i^2 I_i P_i^l P_i v, v) \Big) + \Big( \sum_{i=j+1}^{L} \sum_{\substack{m_i^l: \Omega_j^k}} \Big( \frac{h_i^l}{h_j^k} \Big)^{3} \Big) \Big( \sum_{i=j+1}^{L} \sum_{\substack{m_i^l: \Omega_j^k}} \frac{h_i^l}{h_j^k}\, a(\mu_i^2 I_i P_i^l P_i v, v) \Big)$
$\lesssim \sum_{i=j+1}^{L} \sum_{\substack{m_i^l \in \widetilde{\mathcal{M}}_i \\ I_i\varphi_i^l \not\equiv 0 \text{ on } \mathcal{E}_{j+1}^k}} \Big( \frac{h_i^l}{h_j^k} \Big)^{1/2} a(\mu_i^2 I_i P_i^l P_i v, v) + \sum_{i=j+1}^{L} \sum_{\substack{m_i^l \in \widetilde{\mathcal{M}}_i \\ I_i\varphi_i^l \not\equiv 0 \text{ on } \Omega_j^k}} \frac{h_i^l}{h_j^k}\, a(\mu_i^2 I_i P_i^l P_i v, v),$
where in the second step the Cauchy-Schwarz inequality has been applied to each of the two sums and in the last step (4.11) has been used.
We set $\delta(m_i^l, m_j^k) = 1$ if $I_i\varphi_i^l \not\equiv 0$ on $\mathcal{E}_{j+1}^k$ and $\delta(m_i^l, m_j^k) = 0$ otherwise, and $\tilde\delta(m_i^l, m_j^k) = 1$ if $I_i\varphi_i^l \not\equiv 0$ on $\Omega_j^k$ and $\tilde\delta(m_i^l, m_j^k) = 0$ otherwise. By (4.12) and (4.14), we obtain
(4.20)  $\gamma \sum_{j=1}^{L-1} \sum_{k=1}^{\tilde n_j} a_j(P_j^k P_j \omega, P_j^k P_j \omega)$
$\lesssim \gamma \sum_{j=1}^{L-1} \sum_{k=1}^{\tilde n_j} \sum_{i=j+1}^{L} \sum_{\substack{m_i^l \in \widetilde{\mathcal{M}}_i}} \Big( \frac{h_i^l}{h_j^k} \Big)^{1/2} \delta(m_i^l, m_j^k)\, a(\mu_i^2 I_i P_i^l P_i v, v) + \gamma \sum_{j=1}^{L-1} \sum_{k=1}^{\tilde n_j} \sum_{i=j+1}^{L} \sum_{\substack{m_i^l \in \widetilde{\mathcal{M}}_i}} \frac{h_i^l}{h_j^k}\, \tilde\delta(m_i^l, m_j^k)\, a(\mu_i^2 I_i P_i^l P_i v, v)$
$= \gamma \sum_{i=2}^{L} \sum_{m_i^l \in \widetilde{\mathcal{M}}_i} \Big( \sum_{j=1}^{i-1} \sum_{m_j^k \in \widetilde{\mathcal{M}}_j} \Big( \frac{h_i^l}{h_j^k} \Big)^{1/2} \delta(m_i^l, m_j^k) \Big)\, a(\mu_i^2 I_i P_i^l P_i v, v) + \gamma \sum_{i=2}^{L} \sum_{m_i^l \in \widetilde{\mathcal{M}}_i} \Big( \sum_{j=1}^{i-1} \sum_{m_j^k \in \widetilde{\mathcal{M}}_j} \frac{h_i^l}{h_j^k}\, \tilde\delta(m_i^l, m_j^k) \Big)\, a(\mu_i^2 I_i P_i^l P_i v, v)$
$\lesssim \sum_{i=2}^{L} \sum_{m_i^l \in \widetilde{\mathcal{M}}_i} \gamma\mu_i\, a(\mu_i I_i P_i^l P_i v, v) \lesssim \sum_{i=2}^{L} a(T_i v, v).$
Hence, (4.16) is verified. Combining (4.13)-(4.16), we obtain
(4.21)  $\sum_{i=1}^{L} \sum_{j=1}^{i-1} a(T_i v, T_j u) \lesssim \Big( \sum_{i=2}^{L} a(T_i v, v) \Big)^{1/2} \Big( \sum_{j=1}^{L} a(T_j u, u) \Big)^{1/2}.$
A similar analysis can be used to derive
(4.22)  $\sum_{i=1}^{L} a(T_i v, T_0 u) \lesssim \Big( \sum_{i=1}^{L} a(T_i v, v) \Big)^{1/2} a^{1/2}(T_0 u, u),$
which, together with (4.21), completes the proof of the lemma. □

4.2. Local Gauss-Seidel smoother. In this subsection, we verify assumptions A1-A3 for the multilevel methods with the local Gauss-Seidel smoother $R_i$ defined by
$R_i := (I - E_i^{\tilde n_i}) A_i^{-1},$
where $E_i^{\tilde n_i} = (I - P_i^{\tilde n_i}) \cdots (I - P_i^1)$. For brevity, we set $E_i := E_i^{\tilde n_i}$, since no confusion is possible. We have
(4.23)  $T_0 = \mu_0 I_0 P_0$,  $T_i = \mu_i I_i R_i A_i P_i = \mu_i I_i (I - E_i) P_i$, $i = 1,\ldots,L$.
The decomposition of $v$ is the same as in (4.2).
For $i = 1, \dots, L$, let $E_i^0 = I$ and $E_i^{k-1} := (I - P_i^{k-1}) \cdots (I - P_i^1)$, $k = 2, \dots, \tilde n_i$. It is easy to see that
\[
(4.24)\quad I - E_i = \sum_{k=1}^{\tilde n_i} P_i^k E_i^{k-1}.
\]
As in Lemma 4.5 in [33], there also holds
\[
(4.25)\quad a_i(P_i v, P_i u) - a_i(E_i P_i v, E_i P_i u) = \sum_{k=1}^{\tilde n_i} a_i(P_i^k E_i^{k-1} P_i v, E_i^{k-1} P_i u), \qquad \forall\, v, u \in V.
\]

4.2.1. Verification of assumption A1. We consider the case $i \geq 1$, since for $T_0$ assumption A1 has been verified in Lemma 4.1.

Lemma 4.5. Let $T_i$, $i \geq 1$, be defined by (4.23). Then, $T_i$ is nonnegative in $V$ and there holds
\[
a(T_i v, T_i v) \leq \omega_i\, a(T_i v, v), \qquad \forall\, v \in V, \quad \omega_i < 2.
\]

Proof. Due to (4.23) and (4.24) we have
\[
a(T_i v, T_i v) = \mu_i^2\, a(I_i(I - E_i)P_i v, I_i(I - E_i)P_i v) = \mu_i^2 \sum_{k,m=1}^{\tilde n_i} a(I_i P_i^k E_i^{k-1} P_i v, I_i P_i^m E_i^{m-1} P_i v).
\]
Using (4.25), the same techniques as in (4.5), and the stability of $I_i$, we obtain
\[
(4.26)\quad
\begin{aligned}
a(T_i v, T_i v) &\lesssim \mu_i^2 C_i \sum_{k=1}^{\tilde n_i} a(I_i P_i^k E_i^{k-1} P_i v, I_i P_i^k E_i^{k-1} P_i v) \\
&\leq \mu_i^2 C_0 C_i \sum_{k=1}^{\tilde n_i} a_i(P_i^k E_i^{k-1} P_i v, P_i^k E_i^{k-1} P_i v) \\
&= \mu_i^2 C_0 C_i \big(a_i(P_i v, P_i v) - a_i(E_i P_i v, E_i P_i v)\big) \\
&= \mu_i^2 C_0 C_i \big(a_i(P_i v, P_i v) - a_i((I - (I - E_i))P_i v, (I - (I - E_i))P_i v)\big) \\
&= \mu_i^2 C_0 C_i \big(2\, a_i((I - E_i)P_i v, P_i v) - a_i((I - E_i)P_i v, (I - E_i)P_i v)\big) \\
&\leq \mu_i^2 C_0 C_i \Big(2\, a_i((I - E_i)P_i v, P_i v) - \frac{1}{C_0}\, a(I_i(I - E_i)P_i v, I_i(I - E_i)P_i v)\Big) \\
&= 2\mu_i C_0 C_i\, a(T_i v, v) - C_i\, a(T_i v, T_i v),
\end{aligned}
\]
whence
\[
a(T_i v, T_i v) \leq \frac{2\mu_i C_0 C_i}{1 + C_i}\, a(T_i v, v).
\]
Obviously, the nonnegativeness of $T_i$ follows from the above inequality. Setting $\omega_i = \frac{2\mu_i C_0 C_i}{1 + C_i}$, and choosing $0 < \mu_i < \frac{1 + C_i}{2 C_0 C_i}$ such that $\omega_i < 2$, the lemma is proved. We remark that we can choose $\mu_L = 1$, since $I_L$ is the identity.
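The telescoping identity (4.25) can be checked numerically for $a$-orthogonal projections onto local subspaces. The sketch below uses a random SPD matrix in place of the stiffness form and drops the coarse-space projection $P_i$ for simplicity; matrix, seed, and index sets are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # SPD matrix playing the role of a_i(.,.)

def a(u, v):                                # energy inner product a(u, v) = u^T A v
    return u @ A @ v

def proj(idx):
    # a-orthogonal projection P_k onto span{e_j : j in idx}
    V = np.eye(n)[:, idx]
    return V @ np.linalg.solve(V.T @ A @ V, V.T @ A)

P = [proj(s) for s in ([0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9])]

v = rng.standard_normal(n)
Ev, rhs = v.copy(), 0.0
for Pk in P:                                # Ev carries E_{k-1} v through the sweep
    rhs += a(Pk @ Ev, Ev)                   # a(P_k E_{k-1} v, E_{k-1} v)
    Ev = Ev - Pk @ Ev                       # E_k v = (I - P_k) E_{k-1} v
lhs = a(v, v) - a(Ev, Ev)
assert abs(lhs - rhs) <= 1e-9 * abs(lhs)    # the two sides of (4.25) agree
```

The identity follows by telescoping, since for an $a$-orthogonal projection $a(P_k w, P_k w) = a(P_k w, w)$.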
4.2.2. Verification of assumption A2.

Lemma 4.6. Let $\{T_i,\ i = 0, 1, \dots, L\}$ be defined as in (4.23). There exists a constant $K_0$ such that
\[
a(v, v) \leq \frac{K_0}{\mu}\, a(T v, v), \qquad \forall\, v \in V, \quad \mu = \min_{0 \leq i \leq L} \{\mu_i\}.
\]

Proof. In view of the decomposition of $v$ in (4.2), we have $a(v, v) = \sum_{i=0}^{L} a_i(v_i, P_i v)$. For $i = 1, \dots, L$, we also have (cf. (4.8))
\[
a_i(v_i, P_i v) \leq \Big(\sum_{k=1}^{\tilde n_i} a_i(v_i(m_i^k)\varphi_i^k, v_i(m_i^k)\varphi_i^k)\Big)^{1/2} \Big(\sum_{k=1}^{\tilde n_i} a_i(P_i^k P_i v, P_i^k P_i v)\Big)^{1/2}.
\]
Since $I - E_i^{k-1} = \sum_{m=1}^{k-1} P_i^m E_i^{m-1}$, we deduce
\[
\begin{aligned}
a_i(P_i^k P_i v, P_i^k P_i v) &= a_i(P_i^k P_i v, P_i^k E_i^{k-1} P_i v) + \sum_{m=1}^{k-1} a_i(P_i^k P_i v, P_i^k P_i^m E_i^{m-1} P_i v) \\
&\leq \big(a_i(P_i^k P_i v, P_i^k P_i v)\big)^{1/2} \big(a_i(P_i^k E_i^{k-1} P_i v, E_i^{k-1} P_i v)\big)^{1/2} + \sum_{m=1}^{k-1} a_i(P_i^k P_i v, P_i^m E_i^{m-1} P_i v).
\end{aligned}
\]
Furthermore, using the same technique as in (4.5), we have
\[
\sum_{k,m=1}^{\tilde n_i} a_i(P_i^k P_i v, P_i^m E_i^{m-1} P_i v) \lesssim \Big(\sum_{k=1}^{\tilde n_i} a_i(P_i^k P_i v, P_i^k P_i v)\Big)^{1/2} \Big(\sum_{k=1}^{\tilde n_i} a_i(P_i^k E_i^{k-1} P_i v, E_i^{k-1} P_i v)\Big)^{1/2}.
\]
Then, it follows from (4.26) that
\[
\sum_{k=1}^{\tilde n_i} a_i(P_i^k P_i v, P_i^k P_i v) \lesssim \sum_{k=1}^{\tilde n_i} a_i(P_i^k E_i^{k-1} P_i v, E_i^{k-1} P_i v) \leq \frac{1}{\mu_i}\, a(T_i v, v).
\]
Hence,
\[
a_0(P_0 v, P_0 v) + \sum_{i=1}^{L} \sum_{k=1}^{\tilde n_i} a_i(P_i^k P_i v, P_i^k P_i v) \lesssim \sum_{i=0}^{L} \frac{1}{\mu_i}\, a(T_i v, v).
\]
Finally, similarly to the analysis of (4.9) and (4.10), we deduce that assumption A2 holds true.

4.2.3. Verification of assumption A3.

Lemma 4.7. There exists a constant $K_1$, independent of the mesh sizes and mesh levels, such that assumption A3 holds true for $\{T_i,\ i = 0, 1, \dots, L\}$ defined by (4.23).
Proof. We set $\xi_i = T_i v$. It follows from (4.23) that
\[
\begin{aligned}
\sum_{i=1}^{L}\sum_{j=1}^{i-1} a(T_i v, T_j u) &= \sum_{j=1}^{L}\sum_{i=j+1}^{L} a(\xi_i, \mu_j I_j(I - E_j)P_j u) = \sum_{j=1}^{L}\sum_{i=j+1}^{L} \mu_j\, a_j(P_j \xi_i, (I - E_j)P_j u) \\
&= \sum_{j=1}^{L}\sum_{i=j+1}^{L} \mu_j \sum_{k=1}^{\tilde n_j} a_j(P_j \xi_i, P_j^k E_j^{k-1} P_j u) = \sum_{j=1}^{L} \mu_j \sum_{k=1}^{\tilde n_j} a_j\Big(P_j^k \sum_{i=j+1}^{L} P_j \xi_i,\ P_j^k E_j^{k-1} P_j u\Big).
\end{aligned}
\]
Further, Hölder's inequality yields
\[
(4.27)\quad \sum_{i=1}^{L}\sum_{j=1}^{i-1} a(T_i v, T_j u) \leq \Big(\sum_{j=1}^{L}\sum_{k=1}^{\tilde n_j} \mu_j^2\, a_j(P_j^k E_j^{k-1} P_j u, E_j^{k-1} P_j u)\Big)^{1/2} \Big(\sum_{j=1}^{L}\sum_{k=1}^{\tilde n_j} a_j\Big(\sum_{i=j+1}^{L} P_j^k P_j \xi_i,\ \sum_{i=j+1}^{L} P_j^k P_j \xi_i\Big)\Big)^{1/2}.
\]
In view of the estimate (4.26) in Lemma 4.5 and $\mu_j < \frac{1+C_j}{2C_0 C_j} \leq 1$ for $j = 1, \dots, L$, we find
\[
(4.28)\quad \sum_{k=1}^{\tilde n_j} \mu_j^2\, a_j(P_j^k E_j^{k-1} P_j u, E_j^{k-1} P_j u) \leq 2\mu_j\, a(T_j u, u) - \frac{1}{C_0}\, a(T_j u, T_j u) \lesssim \mu_j\, a(T_j u, u) \lesssim a(T_j u, u),
\]
whence
\[
\sum_{j=1}^{L}\sum_{k=1}^{\tilde n_j} \mu_j^2\, a_j(P_j^k E_j^{k-1} P_j u, E_j^{k-1} P_j u) \lesssim \sum_{j=1}^{L} a(T_j u, u).
\]
Next, we show that
\[
(4.29)\quad \sum_{j=1}^{L}\sum_{k=1}^{\tilde n_j} a_j\Big(\sum_{i=j+1}^{L} P_j^k P_j \xi_i,\ \sum_{i=j+1}^{L} P_j^k P_j \xi_i\Big) \lesssim \sum_{i=2}^{L} a(T_i v, v).
\]
We note that $P_j^k P_j \xi_i = \frac{a_j(P_j\xi_i, \varphi_j^k)}{a_j(\varphi_j^k, \varphi_j^k)}\, \varphi_j^k \eqsim a_j(P_j\xi_i, \varphi_j^k)\, \varphi_j^k$, and similarly $P_i^l E_i^{l-1} P_i v \eqsim a_i(E_i^{l-1} P_i v, \varphi_i^l)\, \varphi_i^l$. Then, there holds
\[
a_j\Big(\sum_{i=j+1}^{L} P_j^k P_j \xi_i,\ \sum_{i=j+1}^{L} P_j^k P_j \xi_i\Big) \lesssim \Big(\sum_{i=j+1}^{L} a_j(P_j \xi_i, \varphi_j^k)\Big)^2.
\]
Moreover,
\[
\begin{aligned}
a_j(P_j \xi_i, \varphi_j^k) &= a_j(P_j\, \mu_i I_i(I - E_i)P_i v, \varphi_j^k) = \mu_i\, a(I_i(I - E_i)P_i v, I_j \varphi_j^k) \\
&= \mu_i \sum_{l=1}^{\tilde n_i} a_i(P_i^l E_i^{l-1} P_i v, P_i I_j \varphi_j^k) \eqsim \mu_i \sum_{l=1}^{\tilde n_i} a_i(\varphi_i^l, P_i I_j \varphi_j^k)\, a_i(E_i^{l-1} P_i v, \varphi_i^l) \\
&= \mu_i \sum_{l=1}^{\tilde n_i} a(I_i \varphi_i^l, I_j \varphi_j^k)\, a_i(E_i^{l-1} P_i v, \varphi_i^l).
\end{aligned}
\]
Similarly to the analysis of (4.20) in Lemma 4.4 for the Jacobi case, and due to Lemma 4.3, we have
\[
\begin{aligned}
\sum_{j=1}^{L}\sum_{k=1}^{\tilde n_j} \Big(\sum_{i=j+1}^{L} a_j(P_j\xi_i, \varphi_j^k)\Big)^2
&\lesssim \sum_{j=1}^{L}\sum_{k=1}^{\tilde n_j} \sum_{i=j+1}^{L} \sum_{\substack{m_i^l\in M_i,\\ I_i\varphi_i^l\not\equiv 0 \text{ on } E_{j+1}^k}} \Big(\frac{h_i^l}{h_j^k}\Big)^{1/2} \mu_i^2\, a_i(P_i^l E_i^{l-1} P_i v, E_i^{l-1} P_i v) \\
&\quad + \sum_{j=1}^{L}\sum_{k=1}^{\tilde n_j} \sum_{i=j+1}^{L} \sum_{\substack{m_i^l\in M_i,\\ I_i\varphi_i^l\not\equiv 0 \text{ on } \Omega_j^k}} \frac{h_i^l}{h_j^k}\, \mu_i^2\, a_i(P_i^l E_i^{l-1} P_i v, E_i^{l-1} P_i v) \\
&\lesssim \sum_{i=2}^{L} \mu_i^2 \sum_{m_i^l\in M_i} a_i(P_i^l E_i^{l-1} P_i v, E_i^{l-1} P_i v) \sum_{j=1}^{i-1}\sum_{m_j^k\in M_j} \Big(\frac{h_i^l}{h_j^k}\Big)^{1/2} \delta(m_i^l, m_j^k) \\
&\quad + \sum_{i=2}^{L} \mu_i^2 \sum_{m_i^l\in M_i} a_i(P_i^l E_i^{l-1} P_i v, E_i^{l-1} P_i v) \sum_{j=1}^{i-1}\sum_{m_j^k\in M_j} \frac{h_i^l}{h_j^k}\, \tilde\delta(m_i^l, m_j^k) \\
&\lesssim \sum_{i=2}^{L} \mu_i^2 \sum_{m_i^l\in M_i} a_i(P_i^l E_i^{l-1} P_i v, E_i^{l-1} P_i v)\,(1 + h_i^l) \lesssim \sum_{i=2}^{L} a(T_i v, v).
\end{aligned}
\]
Hence, (4.29) is verified. In view of (4.26), (4.27) and (4.29), it follows that
\[
(4.30)\quad \sum_{i=1}^{L}\sum_{j=1}^{i-1} a(T_i v, T_j u) \lesssim \Big(\sum_{i=1}^{L} a(T_i v, v)\Big)^{1/2} \Big(\sum_{j=1}^{L} a(T_j u, u)\Big)^{1/2}.
\]
We further deduce
\[
\sum_{i=1}^{L} a(T_i v, T_0 u) \lesssim \Big(\sum_{i=1}^{L} a(T_i v, v)\Big)^{1/2} a(T_0 u, u)^{1/2},
\]
which, together with (4.30), implies Lemma 4.7.

6. Numerical results

In this section, for selected test examples we present numerical results that illustrate the optimality of Algorithm 4.1 and Algorithm 4.2. The implementation is based on the FFW toolbox [8]. The local error estimators and the strategy MARK for the selection of elements and edges for refinement have been realized
as in the algorithm ANFEM II in [12]. In the following examples, both LMPA and LMAA are used as preconditioners for the conjugate gradient method, i.e., a symmetric version of LMPA (SLMPA) has been used in the computations. Likewise, a symmetric version of LMAA (SLMAA) is employed when the smoother is nonsymmetric; otherwise, LMAA is applied directly. The algorithms LMPA and LMAA require $O(N \log N)$ and $O(N)$ operations, respectively, where $N$ is the number of degrees of freedom (DOFs) (cf. [26]). The estimate (A.1) in the appendix indicates that the prolongation operator $I_i$ from $V_i$ to $V_L$ increases the energy by at most a constant factor $C_0$, which is essential in the convergence analysis of the local multilevel methods. We can weaken this influence by a well-chosen scaling number $\mu_{L,i}$ in (3.2). As seen from Theorem 4.1 and Theorem 4.2, the uniform convergence rate of LMPA, and of the conjugate gradient method preconditioned by LMAA, deteriorates for a decreasing scaling number $\mu = \min_{0 \leq i \leq L}\{\mu_{L,i}\}$. This behavior will be observed in Example 6.1 below. We always choose $\mu_{L,L} = 1$ in the computations. For the preconditioned conjugate gradient method, the iteration stops when
\[
\| r_i^0 - A_i r_i^n \|_{0,\Omega} \leq \epsilon\, \| r_i^0 \|_{0,\Omega}, \qquad \epsilon = 10^{-6},
\]
where $\{r_i^k : k = 1, 2, \dots\}$ stands for the sequence of iterative solutions of the residual equation $A_i x = r_i^0$.

Figure 2. Locally refined mesh with 2354 nodes at the 13-th refinement level (Example 6.1).

At the $i$-th level, let $u_i^0 = u_{i-1}$, $r_i^n = f_i - A_i u_i^n$, and set $\tilde\epsilon_0 = (r_i^0)^t B_i r_i^0$, $\tilde\epsilon_n = (r_i^n)^t B_i r_i^n$, where $B_i$ denotes the local multilevel iteration. The number of iteration steps required to achieve the desired accuracy is denoted by iter. We further denote by $\rho = (\tilde\epsilon_n / \tilde\epsilon_0)^{1/\mathrm{iter}}$ the average reduction factor.
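The stopping rule and the average reduction factor described above might be realized along the following lines. This is a sketch under stated assumptions, not the FFW implementation: the preconditioner `apply_B` is left abstract (the paper's $B_i$ is the local multilevel iteration), and the bookkeeping of $\tilde\epsilon_n$ via the PCG inner product $r^t B r$ is an assumption.

```python
import numpy as np

def pcg(A, b, apply_B, eps=1e-6, max_iter=500):
    """Preconditioned CG for A x = b; stops when ||b - A x_n|| <= eps ||b||.
    Returns the iterate, the iteration count `iter`, and the average
    reduction factor rho = (eps_n / eps_0)**(1/iter), measured via the
    quantities eps_n = r_n^t B r_n that PCG maintains anyway."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_B(r)
    p = z.copy()
    rz0 = r @ z                     # eps_0 = r_0^t B r_0
    rz, it = rz0, 0
    while np.linalg.norm(b - A @ x) > eps * np.linalg.norm(b) and it < max_iter:
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = apply_B(r)
        rz_new = r @ z              # eps_n = r_n^t B r_n
        p = z + (rz_new / rz) * p
        rz = rz_new
        it += 1
    rho = (rz / rz0) ** (1.0 / it) if it > 0 and rz > 0 else 0.0
    return x, it, rho
```

For an SPD preconditioner $B$ the quantities $r^t B r$ are positive, so $\rho$ is well defined whenever at least one step was taken.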
Example 6.1. On the L-shaped domain $\Omega = [-1,1] \times [-1,1] \setminus (0,1] \times [-1,0)$, we consider the following elliptic boundary value problem
\[
-\nabla \cdot (0.5\, \nabla u) + u = f(x,y) \ \text{ in } \Omega, \qquad u = g(x,y) \ \text{ on } \partial\Omega,
\]
where $f$ and $g$ are chosen such that $u(r,\theta) = r^{2/3} \sin(\tfrac{2}{3}\theta)$ is the exact solution (in polar coordinates).

Table 1. Number of iterations and average reduction factor $\rho$ on each level for the respective algorithms with scaling numbers $\mu_{L,i} = 0.8$, $0 \leq i \leq L-1$, and $\mu_{L,L} = 1$, $L \geq 1$. For the conjugate gradient method without preconditioning, only the number of iterations is given (Example 6.1).

Level     DOFs     CG    SLMPA-GS      SLMPA-Jacobi   SLMAA-GS      LMAA-Jacobi
                  iter   iter  rho     iter  rho      iter  rho     iter  rho
13        6831    206     9   0.2203   12   0.3184    34   0.6732    46   0.7475
14       11293    242    10   0.2408   12   0.3185    35   0.6783    47   0.7526
15       18121    310    10   0.2395   12   0.3179    35   0.6807    48   0.7567
16       30385    369    10   0.2412   12   0.3156    35   0.6833    49   0.7594
17       49825    458    10   0.2430   12   0.3141    36   0.6853    49   0.7614
18       80893    560    10   0.2400   12   0.3115    35   0.6852    49   0.7623
19      135060    700    10   0.2391   12   0.3079    35   0.6847    49   0.7624
20      219441    858    10   0.2405   12   0.3052    35   0.6838    50   0.7640
21      359337   1053    10   0.2375   12   0.3020    35   0.6844    50   0.7641
22      598091   1331    10   0.2353   12   0.2988    35   0.6845    49   0.7629
23      964580   1491    10   0.2356   12   0.2970    35   0.6848    50   0.7645
24     1592958   1715    10   0.2315   11   0.2873    35   0.6840    49   0.7631

Table 2. Average reduction factors $\rho$ (SLMPA-GS) for different scaling numbers (Example 6.1).
Scaling numbers: $\mu_{L,0} = \dots = \mu_{L,L-1} = \alpha$, $\mu_{L,L} = 1$.

Level   α=1.8    α=1.5    α=1      α=0.5    α=0.2    α=0.1
13      0.2448   0.2340   0.2196   0.2737   0.4100   0.5125
14      0.2462   0.2410   0.2394   0.2740   0.4162   0.5189
15      0.2479   0.2410   0.2393   0.2738   0.4234   0.5292
16      0.2507   0.2426   0.2410   0.2729   0.4228   0.5337
17      0.2508   0.2444   0.2426   0.2722   0.4194   0.5274
18      0.2488   0.2414   0.2397   0.2697   0.4163   0.5239
19      0.2484   0.2408   0.2387   0.2668   0.4148   0.5225
20      0.2482   0.2419   0.2400   0.2567   0.4088   0.5215
21      0.2490   0.2390   0.2370   0.2532   0.4067   0.5207
22      0.2666   0.2368   0.2347   0.2488   0.4023   0.5182
23      0.2666   0.2371   0.2351   0.2451   0.3979   0.5088
24      0.2678   0.2331   0.2310   0.2423   0.3949   0.5058

For ease of notation, we refer to SLMPA-GS, SLMAA-GS and SLMPA-Jacobi, LMAA-Jacobi as the conjugate gradient methods preconditioned by SLMPA and SLMAA with local Gauss-Seidel smoothing and local Jacobi smoothing, respectively. For the Jacobi iteration, the scaling factor is chosen as $\gamma = 0.8$.
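A local Jacobi smoothing step with scaling factor $\gamma$, applied only to the local nodes of a given level, might be sketched as follows; the dense-matrix setting and the index-set handling are assumptions for illustration.

```python
import numpy as np

def local_jacobi(A, r, local_idx, gamma=0.8):
    """One damped Jacobi correction restricted to the local nodes of a level:
    e = gamma * D^{-1} r on local_idx, zero elsewhere (sketch)."""
    e = np.zeros_like(r)
    d = np.diag(A)
    e[local_idx] = gamma * r[local_idx] / d[local_idx]
    return e
```

On adaptively refined meshes, `local_idx` would collect the newly created (or changed) nodes of the level, so the cost per level is proportional to the number of new degrees of freedom.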
Figure 3. CPU times for SLMPA-GS, SLMPA-Jacobi, SLMAA-GS, and LMAA-Jacobi (from left to right and top to bottom); each panel shows the CPU time of the iteration per level versus the number of degrees of freedom per level (Example 6.1).

At first, we choose $\mu_{L,i} = 0.8$ ($0 \leq i < L$) to illustrate the optimality of our algorithms. Figure 2 displays the locally refined mesh at the 13-th refinement level. As seen from Table 1, the number of iteration steps of the conjugate gradient method without preconditioning (CG) increases quickly with the mesh levels. However, for the algorithms SLMPA-GS, SLMPA-Jacobi, SLMAA-GS and LMAA-Jacobi we observe that the number of iteration steps and the average reduction factors are all bounded independently of the mesh sizes and the mesh levels. These results and Figure 3, displaying the CPU times (in seconds) for the respective algorithms, demonstrate the optimality of the algorithms and thus confirm the theoretical analysis.

Next, we choose different scaling numbers to illustrate how they influence the convergence behavior of the local multilevel methods. We only list the results for SLMPA-GS; a similar behavior can be observed for the other algorithms. We choose $\mu_{L,0} = \dots = \mu_{L,L-1} = \alpha$ and $\mu_{L,L} = 1$, and thus $\mu = \min\{\alpha, 1\}$. Table 2 shows that for a fixed $\alpha$, SLMPA-GS converges almost uniformly. The last four numbers of each row in Table 2 show that for a fixed level the average reduction factor of SLMPA-GS deteriorates for decreasing $\mu$.
If $\alpha \geq 1$, then $\mu = \min\{\alpha, 1\} = 1$, and the convergence rate also deteriorates as $\alpha$ increases. This is observed in the first three numbers of each row in Table 2. In particular, for $\mu = 1$ the convergence rate of SLMPA-GS deteriorates only with respect to $\omega_i$ (the spectral bound of $T_i$), which increases linearly with $\mu_{L,i}$.
Example 6.2. We consider Poisson's equation $-\Delta u = 1$ in $\Omega$ with Dirichlet boundary conditions on a domain with a crack, namely $\Omega = \{(x,y) : |x| + |y| \leq 1\} \setminus \{(x,y) : 0 \leq x \leq 1,\ y = 0\}$. The exact solution is $u(r,\theta) = r^{1/2}\sin(\theta/2) - \tfrac{1}{4} r^2$ (in polar coordinates). In this example, we choose $\mu_{L,L} = 1$ and $\mu_{L,i} = 1$ or $\mu_{L,i} = 0.8$ ($0 \leq i < L$, $L \geq 1$), respectively, for the local multilevel methods with local Gauss-Seidel smoothing and local Jacobi smoothing.

Figure 4. Locally refined mesh with 2461 nodes at the 24-th refinement level (Example 6.2).

Table 3. Number of iterations and average reduction factors $\rho$ on each level for the respective algorithms with scaling numbers $\mu_{L,L} = 1$ and $\mu_{L,i} = 1$ or $\mu_{L,i} = 0.8$, $0 \leq i \leq L-1$, $L \geq 1$. For the conjugate gradient method without preconditioning, only the number of iterations is given (Example 6.2).

Level     DOFs     CG    SLMPA-GS      SLMAA-GS      SLMPA-Jacobi   LMAA-Jacobi
                  iter   iter  rho     iter  rho     iter  rho      iter  rho
28       18206    287    11   0.2610   51   0.7720   14   0.3756    65   0.8154
30       29108    341    10   0.2555   50   0.7662   13   0.3507    65   0.8151
32       46105    417    10   0.2403   52   0.7745   14   0.3615    65   0.8161
34       73571    523    11   0.2628   55   0.7854   14   0.3773    70   0.8271
36      116866    634    10   0.2511   52   0.7768   13   0.3309    63   0.8105
38      184155    768    10   0.2340   52   0.7764   14   0.3601    67   0.8212
40      292148    942    10   0.2513   55   0.7880   14   0.3708    70   0.8286
42      462599   1168    10   0.2395   52   0.7765   12   0.3181    64   0.8141
44      727564   1404    10   0.2337   53   0.7808   13   0.3511    68   0.8243
46     1150917   1536    10   0.2435   54   0.7852   14   0.3615    70   0.8275

Figure 4 displays the locally refined mesh at the 24-th refinement level. The numbers in Table 3 and the CPU times (in seconds) displayed in Figure 5 show a similar behavior as in the previous example and thus also support the theoretical findings.
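As a quick sanity check of the stated exact solution of Example 6.2, a finite-difference evaluation of $-\Delta u$ at an interior point away from the crack should return a value close to $1$ (the singular part $r^{1/2}\sin(\theta/2)$ is harmonic, and $-\Delta(-\tfrac14 r^2) = 1$). The sample point and step size below are arbitrary choices.

```python
import math

def u(x, y):
    # exact solution of Example 6.2: r^{1/2} sin(theta/2) - r^2/4,
    # with theta in [0, 2*pi) measured from the crack {y = 0, 0 <= x <= 1}
    r = math.hypot(x, y)
    theta = math.atan2(y, x) % (2 * math.pi)
    return math.sqrt(r) * math.sin(theta / 2) - 0.25 * r * r

def neg_laplacian(f, x, y, h=1e-3):
    # five-point central-difference approximation of -Laplacian(f) at (x, y)
    return -(f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
             - 4 * f(x, y)) / (h * h)

# sample a point inside the crack domain, away from the crack and the boundary
print(neg_laplacian(u, -0.5, 0.3))   # close to 1
```

Note the branch of $\theta$ must be continuous across the stencil; any point off the positive $x$-axis works with the convention above.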
Figure 5. CPU times for SLMPA-GS, SLMAA-GS, SLMPA-Jacobi, and LMAA-Jacobi (from left to right and top to bottom); each panel shows the CPU time of the iteration per level versus the number of degrees of freedom per level (Example 6.2).

Appendix

In this appendix, we analyze the stability of $I_i$ and provide the proof of Lemma 4.3.

Proof of the stability result for the prolongation operator $I_i$. Let $\mathcal{T}_{i+1}$ be the refined triangulation obtained from $\mathcal{T}_i$ by the algorithm stated in Section 2 or by the newest vertex bisection. Then, there exist constants $\widetilde{C}_0$ and $C_0$ such that
\[
(A.1)\quad (I_i v, I_i v) \leq \widetilde{C}_0\, (v, v), \qquad a(I_i v, I_i v) \leq C_0\, a_i(v, v), \quad \forall\, v \in V_i.
\]
Since the analysis for the first bisection algorithm has been carried out in [26], we only give the proof for the refinement by the newest vertex bisection. The first inequality in (A.1) is trivial, since $I_i v$ is defined by local averaging; it suffices to derive the second one. The origin of the vertices of $T \in \mathcal{T}_{i+1}$ gives rise to four cases, depending on whether a vertex of $T$ is the midpoint of an edge or a node in $\mathcal{T}_i$. In particular, let $m$, $n$ denote the number of vertices of $T \in \mathcal{T}_{i+1}$ representing midpoints or nodes in $\mathcal{T}_i$, respectively. Setting $S = \{(m,n) : m + n = 3,\ m, n \in \{0, 1, 2, 3\}\}$, we have $\#S = 4$. We only consider one of the possible cases: the vertices of $T \in \mathcal{T}_{i+1}$ are all nodes in $\mathcal{T}_i$, i.e., $T$ is not refined in the transition from $\mathcal{T}_i$ to $\mathcal{T}_{i+1}$; e.g., $T_2 \in \mathcal{T}_{i+1}$ is also $K_2 \in \mathcal{T}_i$ in Figure 6. A similar analysis can be carried out in all other cases.
Figure 6. The left figure illustrates a local grid from $\mathcal{T}_i$; the right one displays its refinement as part of $\mathcal{T}_{i+1}$.

Note that $a(I_i v, I_i v)|_{T_2}$ can be bounded by
\[
(A.2)\quad C\big((I_i v(x_1) - I_i v(x_2))^2 + (I_i v(x_1) - I_i v(x_3))^2\big)
\]
for some constant $C$. We recall that $I_i v(x_i)$ is the average of $v$ at $x_i$ over the triangles $K_l$, $l = 1, \dots, M_{x_i}$, where $M_{x_i}$ is the number of triangles containing $x_i$. Hence, the first term of (A.2) can be written as the square of
\[
(A.3)\quad \frac{1}{M_{x_1}} \sum_{l=1}^{M_{x_1}} \big(v|_{K_l}(x_1) - v(m_1)\big) + \frac{1}{M_{x_2}} \sum_{s=1}^{M_{x_2}} \big(v(m_1) - v|_{K_s}(x_2)\big).
\]
A similar result can be obtained for the second term of (A.2). Since
\[
v|_{K_l}(x_1) - v(m_1) = v|_{K_l}(x_1) - v(m_l) + \sum_{j=1}^{l-1} \big(v(m_{j+1}) - v(m_j)\big),
\]
it suffices to find a constant $C$ such that the first term of (A.3) can be bounded via
\[
(A.4)\quad \sum_{l=1}^{M_{x_1}} \big(v|_{K_l}(x_1) - v(m_1)\big)^2 \leq C\, a_i(v, v)|_K,
\]
where $K = \bigcup_{l=1}^{M_{x_1}} K_l$. The same analysis can be carried out for the second term of (A.3). Following (A.2)-(A.4), we get
\[
(A.5)\quad a(I_i v, I_i v)|_{T_2} \leq C\, a_i(v, v)|_{\widetilde{T}_2}
\]
with some constant $C$, where $\widetilde{T}_2$ is a patch of triangles in $\mathcal{T}_i$ containing the vertices of $T_2$. For $T \in \mathcal{T}_{i+1}$ with $\partial T \cap \partial\Omega \neq \emptyset$, let us assume $\partial T_4 \cap \partial\Omega \neq \emptyset$. Then, $a(I_i v, I_i v)|_{T_4}$ can be bounded by
\[
(A.6)\quad C\big((I_i v(m_3) - I_i v(x_4))^2 + (I_i v(m_3) - I_i v(x_5))^2\big) = 2C\big(v(m_3) - v(m_7)\big)^2.
\]
Combining (A.5) and (A.6) and summing over all $T \in \mathcal{T}_{i+1}$ completes the proof.

Proof of Lemma 4.3. The proof is similar to that of Lemma 3.2 in [29]. We only prove the first estimates in (4.11) and (4.12); the second estimates in (4.11) and (4.12) can be obtained similarly.