MULTILEVEL PRECONDITIONERS FOR NONSELFADJOINT OR INDEFINITE ORTHOGONAL SPLINE COLLOCATION PROBLEMS


RAKHIM AITBAYEV

Abstract. Efficient numerical algorithms are developed and analyzed that implement symmetric multilevel preconditioners for the solution of an orthogonal spline collocation (OSC) discretization of a Dirichlet boundary value problem with a nonselfadjoint or an indefinite operator. The OSC solution is sought in the Hermite space of piecewise bicubic polynomials. It is proved that the proposed additive and multiplicative preconditioners are uniformly spectrally equivalent to the operator of the normal OSC equation. The preconditioners are used with the preconditioned conjugate gradient method, and numerical results are presented that demonstrate their efficiency.

Key words: Orthogonal spline collocation, multilevel methods, preconditioner, nonselfadjoint or indefinite operator, elliptic boundary value problem.

AMS subject classification. 65N35, 65N55, 65F10

1. Introduction. Let Ω be the unit square (0, 1) × (0, 1) with boundary ∂Ω, and let x = (x_1, x_2). We consider the Dirichlet boundary value problem (BVP)

(1.1)    Lu = f in Ω,    u = 0 on ∂Ω,

where

(1.2)    Lu(x) = Σ_{i,j=1}^2 a_{ij}(x) u_{x_i x_j}(x) + Σ_{i=1}^2 b_i(x) u_{x_i}(x) + c(x) u(x).

With respect to BVP (1.1), we make the following assumptions. The functions {a_{ij}}_{i,j=1}^2, {b_i}_{i=1}^2, c, and f are sufficiently smooth, and a_{12} = a_{21}. The differential operator L satisfies the uniform ellipticity condition; that is, there is ν > 0 such that

(1.3)    Σ_{i,j=1}^2 a_{ij}(x) η_i η_j ≥ ν(η_1^2 + η_2^2),    x ∈ Ω̄,    (η_1, η_2) ∈ R^2.

For any f ∈ L^2(Ω), BVP (1.1) has a unique solution u(x) in

(1.4)    H^2_0(Ω) = {v ∈ H^2(Ω) : v = 0 on ∂Ω},

where L^2(Ω) and H^2(Ω) are the standard Sobolev spaces.
We approximate BVP (1.1) by an orthogonal spline collocation (OSC) scheme, in which the discrete solution is sought in the Hermite space of piecewise bicubic polynomials and satisfies the differential equation exactly at a special set of collocation points. The primary advantages of the OSC method are as follows: the cost of forming the linear system of algebraic equations is low, higher order finite elements are relatively easy to apply, the OSC solution possesses optimal order error estimates [7], and the solution exhibits the so-called superconvergence property [8, 15]. The matrix of a linear system resulting from the OSC discretization is sparse and can be expressed as a sum of tensor products of so-called almost block diagonal matrices [2].

Department of Mathematics, New Mexico Institute of Mining and Technology, Socorro, New Mexico 87801, U.S.A., aitbayev@nmt.edu

The solution of the OSC equations by banded Gaussian elimination requires O(N^2) arithmetic operations, where N is the number of unknowns [22, 23, 34]. If the differential operator is separable and the partition is uniform in one direction, then the OSC problem can be solved by a fast direct algorithm at a cost of O(N log N) [10]. Classical iterative methods, such as Jacobi, Gauss-Seidel, and SOR, for the OSC solution of Poisson's equation on a uniform partition were studied in [21, 26, 37]. ADI methods for solving OSC problems with separable operators were investigated in [5, 14, 18]. Modern techniques for the solution of OSC equations are underdeveloped in comparison with those for finite element Galerkin or finite difference methods. Only a few optimal cost algorithms have been proposed to solve the OSC discretization of selfadjoint positive definite BVPs. In [19], the multigrid method was applied to the OSC problem and compared with a multigrid finite difference method. The author concluded that the proposed multigrid OSC method is less efficient than a multigrid finite difference method. A domain decomposition based fast solver for the OSC discretization of the Dirichlet problem for Poisson's equation, which requires O(N log log N) arithmetic operations, was developed in [6]. In [13], multigrid methods were developed and analyzed for quadratic spline collocation equations. Numerical results were presented indicating that a multigrid iteration is an efficient solver for the quadratic spline collocation equations. It is well known that the primary issue in the efficient application of an iterative algorithm for solving a BVP is the construction of a good preconditioner. In [24], the authors studied preconditioning of a nonselfadjoint or an indefinite OSC operator by a finite element operator and investigated H^1 condition numbers and the distribution of singular values of the preconditioned matrices.
Additive and multiplicative multilevel preconditioners were proposed in [9] for the iterative solution of the OSC discretization of a selfadjoint positive definite Dirichlet BVP. It was proved that the preconditioners are uniformly spectrally equivalent to the OSC operator corresponding to a BVP with the Laplacian and require O(N) arithmetic operations. An efficient two-level domain decomposition type edge preconditioner that requires O(N) arithmetic operations was proposed in [29]. The preconditioner is applied with the GMRES method, and the number of iterations is independent of the partition stepsize h. Numerical techniques developed for selfadjoint positive definite BVPs are usually inefficient, or even fail, when applied to nonselfadjoint or indefinite BVPs; hence, special, more sophisticated methods are required to obtain the solution [4, 11, 12, 27, 28]. A fast direct preconditioning algorithm for the solution of the normal OSC equation approximating nonselfadjoint or indefinite BVPs was proposed in [1]. In this work, we develop an additive and a multiplicative multilevel preconditioner for the computation of the solution of the normal OSC equation. Results and algorithms presented in this paper are closely related to those in [1, 7, 9, 31, 32, 39, 40]. To prove uniform spectral equivalence of our preconditioners, we use the approach described in [31] and [32], based on the equivalence of a norm of a certain Besov space with the Sobolev H^2-norm. We note that the approach used in [39] and [40] requires higher regularity of the solution of BVP (1.1). Our main conclusion from this work is that the general framework of multilevel methods can be applied to the OSC discretization of BVPs to construct efficient preconditioners. Rather general nonselfadjoint or indefinite OSC problems can be preconditioned quite well by the proposed multilevel OSC preconditioners.

The outline of this article is as follows. We introduce our notation and define the OSC problem in §2. In §3, we present auxiliary facts that are used to prove the main results of this work. In §4, we define an additive and a multiplicative OSC preconditioner and prove that they are uniformly spectrally equivalent to the operator of the normal OSC equation. In §5, we introduce the matrix-vector form of the OSC problem in the standard Hermite finite element basis and obtain recurrence relations for the computation of the OSC approximations and other quantities required by the multilevel algorithms. In §6, we describe implementations of the additive and the multiplicative preconditioners, and in §7, we present numerical results of the application of the preconditioners with the preconditioned conjugate gradient (PCG) method to solve test problems.

2. OSC problem. In this section, we introduce our notation and define the OSC problem. Throughout this paper, C, C_1, and C_2 ≥ C_1 denote generic positive constants independent of the partition stepsize, the number of partition levels, and other variables in the expressions where the constants appear. By ||·||_{L^2(Ω)} and ||·||_{H^2(Ω)}, we denote the standard Sobolev norms.

Construction of nested spaces. We set π_0 = Ω and, for integer K > 0, we construct a sequence of partitions {π_k}_{k=0}^K by subdividing each rectangular element of partition π_{k−1} into four congruent rectangular elements of partition π_k. Let h_k = 2^{−k} denote the stepsize of partition π_k. In what follows, if not stated otherwise, the index variable k takes all values in {0, 1, ..., K}. We note that there are K + 1 partition levels, and the integer K is an important parameter in our analysis. Let V_k be the vector space of Hermite piecewise bicubic polynomials on π_k that vanish on ∂Ω, which has dimension N_k = 4^{k+1} (see Chapter 3 in [35]). The sequence of vector spaces {V_k} is nested as follows:

(2.1)    V_0 ⊂ V_1 ⊂ ... ⊂ V_K ⊂ H^2_0(Ω).
We denote h = h_K, N = N_K, π_h = π_K, and V_h = V_K.

Original OSC equation. Let G_h be the set of nodes of the two-dimensional composite Gaussian quadrature on partition π_h with 4 nodes in each element of π_h. In the OSC discretization of BVP (1.1), we seek u_h ∈ V_h that satisfies the OSC equations

(2.2)    Lu_h(ξ) = f(ξ),    ξ ∈ G_h,

where the differential operator L is defined in (1.2). Existence and uniqueness of a solution and a convergence analysis of problem (2.2) are given in [7]. The OSC problem (2.2) can be written as the operator equation

(2.3)    L_h u_h = f_h,

where the OSC operator L_h from V_h into V_h and the vector f_h ∈ V_h are defined by

(2.4)    (L_h v)(ξ) = Lv(ξ), ξ ∈ G_h, for any v ∈ V_h,    f_h(ξ) = f(ξ), ξ ∈ G_h.

Both L_h and f_h are well-defined since a function in V_h is uniquely determined by its values at G_h (see Lemma 5.1 in [33]). We call (2.3) the original OSC equation.
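As a concrete illustration (our own sketch, not the paper's code), the node set G_h for the uniform 2^K × 2^K partition can be generated directly: each element contributes the tensor product of the two 1D Gauss points of its sides. The function names below are ours.

```python
import numpy as np

def gauss_nodes_1d(K):
    """Two Gauss points in each subinterval of the uniform partition of [0,1]."""
    h = 2.0**(-K)
    mids = (np.arange(2**K) + 0.5) * h
    off = h / (2.0 * np.sqrt(3.0))        # offset of the 2-point Gauss rule
    return np.sort(np.concatenate([mids - off, mids + off]))

def gauss_nodes_2d(K):
    """The collocation set G_h: 4 nodes per rectangular element."""
    eta = gauss_nodes_1d(K)
    X1, X2 = np.meshgrid(eta, eta, indexing="ij")
    return np.column_stack([X1.ravel(), X2.ravel()])

G = gauss_nodes_2d(2)        # K = 2: 4 x 4 elements, 64 collocation nodes
assert G.shape == (64, 2)
```

Note that the number of nodes, 4 · 4^K, matches the dimension N = N_K = 4^{K+1} of V_h, which is why a function in V_h is determined by its values at G_h.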

Normal OSC equation. The vector space V_h is a Hilbert space with the inner product

(2.5)    (v, w)_h = (h^2/4) Σ_{ξ∈G_h} v(ξ) w(ξ),

which corresponds to the composite Gaussian quadrature on π_h. Let L_h^* be the adjoint of L_h with respect to the inner product (·,·)_h. Applying L_h^* to (2.3), we obtain the normal OSC equation

(2.6)    L_h^* L_h u_h = L_h^* f_h.

We introduce the bilinear form

(2.7)    a_h(w, v) = (L_h^* L_h w, v)_h,    w, v ∈ V_h,

and consider the following variational form of problem (2.6): find u_h ∈ V_h that satisfies

(2.8)    a_h(u_h, v) = (L_h^* f_h, v)_h for all v ∈ V_h.

Problems (2.8) and (2.2) are equivalent; hence, problem (2.8) has a unique solution. In this work, we develop and analyze multilevel preconditioners for the iterative solution of (2.8).

Space decompositions. In what follows, we write Σ_k and Σ_{k,i} for Σ_{k=0}^K and Σ_{k=0}^K Σ_{i=1}^{N_k}, respectively, where N_k is the dimension of V_k. Let {ψ_i^k}_{i=1}^{N_k} be a finite element basis of V_k that satisfies

(2.9)    C_1 h_k^{1−|α|} ≤ ||∂^α ψ_i^k||_{L^2(Ω)} ≤ C_2 h_k^{1−|α|},    |α| ≤ 2,

where α = (α_1, α_2) is a multi-index (see Theorem 5.7 in [3]). Let

(2.10)    V_i^k = span(ψ_i^k),    1 ≤ i ≤ N_k,

be one-dimensional subspaces of V_k. Based on (2.1), we note that Σ_k V_k = V_h and Σ_{k,i} V_i^k = V_h, and we consider the following two space decompositions. For v ∈ V_h, let

V_1(v) = { {v_k} : v = Σ_k v_k,  v_k ∈ V_k,  0 ≤ k ≤ K },
V_2(v) = { {v_{ki}} : v = Σ_{k,i} v_{ki},  v_{ki} ∈ V_i^k,  1 ≤ i ≤ N_k,  0 ≤ k ≤ K }.

We call an element of V_1(v) and of V_2(v) a representation of v. The sets V_1(v) and V_2(v) will be used to define auxiliary equivalent norms on V_h.
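The discrete inner product (2.5) is simply the composite 2 × 2 Gauss quadrature on π_h applied to the product vw. A short sketch (ours, with hypothetical helper names) checks the expected exactness of the underlying rule for cubics in each variable:

```python
import numpy as np

def inner_h(f, g, K):
    """Discrete inner product (2.5): scaled sum of f*g over the Gauss nodes."""
    h = 2.0**(-K)
    mids = (np.arange(2**K) + 0.5) * h
    off = h / (2.0 * np.sqrt(3.0))
    eta = np.concatenate([mids - off, mids + off])   # 1D Gauss points
    X1, X2 = np.meshgrid(eta, eta)
    return (h * h / 4.0) * np.sum(f(X1, X2) * g(X1, X2))

# The 2-point Gauss rule is exact for cubics, so (x^3, y^3)_h equals
# the exact integral of x^3 * y^3 over the unit square, namely 1/16.
val = inner_h(lambda x, y: x**3, lambda x, y: y**3, K=3)
assert abs(val - 1.0 / 16.0) < 1e-12
```

This exactness is what makes (·,·)_h an inner product on V_h consistent with the L^2 inner product of piecewise bicubic functions.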

3. Auxiliary results. In this section, we present auxiliary facts that are used to prove the main results of this work. The following inequalities are proved in Theorem 3.1 in [1].

Lemma 3.1. For h sufficiently small,

(3.1)    C_1 ||v||^2_{H^2(Ω)} ≤ a_h(v, v) ≤ C_2 ||v||^2_{H^2(Ω)},    v ∈ V_h.

Remark. In what follows, by sufficiently small h, we mean values of h for which the statement of Lemma 3.1 holds. It follows from (3.1) and (2.7) that, for h sufficiently small, the bilinear form a_h(·,·) is an inner product on V_h.

The following is the estimate (8.24) in Chapter 3 of [25].

Lemma 3.2.

(3.2)    ||v||_{H^2(Ω)} ≤ C ||Δv||_{L^2(Ω)},    v ∈ H^2_0(Ω).

Inequalities (3.1), (3.2), and

(3.3)    ||Δv||^2_{L^2(Ω)} ≤ C ∫_Ω (v_{x_1x_1}^2 + v_{x_2x_2}^2) dx,    v ∈ H^2_0(Ω),

imply that, for h sufficiently small,

(3.4)    C_1 ||Δv||^2_{L^2(Ω)} ≤ a_h(v, v) ≤ C_2 ||Δv||^2_{L^2(Ω)},    v ∈ V_h.

Lemma 3.3. Let v ∈ V_k, and let

(3.5)    v = Σ_{i=1}^{N_k} c_i ψ_i^k  and  c_v = (c_1, ..., c_{N_k})^t.

Then,

(3.6)    C_1 h_k |c_v| ≤ ||v||_{L^2(Ω)} ≤ C_2 h_k |c_v|,    v ∈ V_k,

where |·| is the 2-norm on R^{N_k}.

Proof. Using (3.5), we obtain

||v||^2_{L^2(Ω)} = ∫_Ω ( Σ_{i=1}^{N_k} c_i ψ_i^k )( Σ_{j=1}^{N_k} c_j ψ_j^k ) dx = Σ_{i,j=1}^{N_k} c_i c_j ∫_Ω ψ_i^k ψ_j^k dx.

The inequalities in (3.6) follow from the last identity and the fact that the eigenvalues of the mass matrix corresponding to the finite element basis {ψ_i^k}_{i=1}^{N_k} belong to the interval [C_1 h_k^2, C_2 h_k^2] (see (5.103) in [3]).

Let

(3.7)    ||v||_{*,h} = inf_{V_1(v)} ( Σ_{k=0}^K 2^{4k} ||v_k||^2_{L^2(Ω)} )^{1/2},    v ∈ V_h,

where inf_{V_1(v)} denotes the infimum with respect to all representations {v_k} in V_1(v). The following important statement is similar to Corollary 2.1 in [32] or Lemma 2 in [31].

Lemma 3.4. The norms ||·||_{H^2(Ω)} and ||·||_{*,h} are uniformly equivalent on V_h; that is,

(3.8)    C_1 ||v||_{H^2(Ω)} ≤ ||v||_{*,h} ≤ C_2 ||v||_{H^2(Ω)},    v ∈ V_h.

Proof. First, we prove the second inequality in (3.8). It is known that the Besov space B^2_{2,2}(Ω) coincides, up to equivalent norms, with the Sobolev space H^2(Ω) (see Part (b) of Theorem and (3) in [38]). Since H^2_0(Ω) is a closed subspace of H^2(Ω), it is a closed subspace of B^2_{2,2}(Ω). We note that functions in H^2_0(Ω) are continuous on Ω̄; therefore, for any v ∈ H^2_0(Ω), v(x) = 0 for all x ∈ ∂Ω; in particular, the trace of v ∈ H^2_0(Ω) is continuous. In a natural way, we extend the definition of {V_k} for k > K. It follows from Theorem 5.1 in [16] that

||v||_{A^2_{2,2}} = ( ||v||^2_{L^2(Ω)} + Σ_{k=0}^∞ ( 2^{2k} inf_{z∈V_k} ||v − z||_{L^2(Ω)} )^2 )^{1/2}

is a norm on H^2_0(Ω) equivalent to the standard norm in B^2_{2,2}(Ω) (see [30] for a definition of the approximation space A^2_{2,2}). Therefore, using the equivalence of the norms ||·||_{H^2(Ω)} and ||·||_{A^2_{2,2}} on H^2_0(Ω) and the fact that V_h ⊂ H^2_0(Ω), we get

(3.9)    ||v||_{A^2_{2,2}} ≤ C ||v||_{H^2(Ω)},    v ∈ V_h.

Let us prove

(3.10)    ||v||_{*,h} ≤ C ||v||_{A^2_{2,2}},    v ∈ V_h.

We note that

(3.11)    ||v||_{A^2_{2,2}} = ( ||v||^2_{L^2(Ω)} + Σ_{k=0}^{K−1} ( 2^{2k} inf_{z∈V_k} ||v − z||_{L^2(Ω)} )^2 )^{1/2},    v ∈ V_h,

since

inf_{z∈V_k} ||v − z||_{L^2(Ω)} = 0,    k ≥ K,    v ∈ V_h.

Take any v ∈ V_h and set v_0 = z_0 and v_k = z_k − z_{k−1} for 1 ≤ k ≤ K, where z_k is the orthogonal L^2-projection of v onto V_k. Note that v = z_K = Σ_{k=0}^K v_k. Using ||v_0||_{L^2(Ω)} ≤ ||v||_{L^2(Ω)}, the triangle inequality, the definition of {z_k}, and (3.11), we obtain

Σ_{k=0}^K 2^{4k} ||v_k||^2_{L^2(Ω)} = ||v_0||^2_{L^2(Ω)} + Σ_{k=1}^K 2^{4k} ||z_k − v + v − z_{k−1}||^2_{L^2(Ω)}
    ≤ ||v||^2_{L^2(Ω)} + C Σ_{k=0}^{K−1} 2^{4k} ||v − z_k||^2_{L^2(Ω)}
    = ||v||^2_{L^2(Ω)} + C Σ_{k=0}^{K−1} ( 2^{2k} inf_{z∈V_k} ||v − z||_{L^2(Ω)} )^2 ≤ C ||v||^2_{A^2_{2,2}}.

Taking the infimum over V_1(v), we get (3.10). Inequalities (3.10) and (3.9) imply the second inequality in (3.8).

Let us prove the first inequality in (3.8). Take any v ∈ V_h and let {v_k} ∈ V_1(v). Using the strengthened Cauchy-Schwarz inequality

∫_Ω Δv_k Δv_l dx ≤ C 2^{−|k−l|/2} (h_k h_l)^{−2} ||v_k||_{L^2(Ω)} ||v_l||_{L^2(Ω)},    v_k ∈ V_k,    v_l ∈ V_l,

(see Lemma 5.1 in [40]) and the fact that the spectral radius of the matrix B = (b_{kl}) with entries b_{kl} = 2^{−|k−l|/2}, 0 ≤ k, l ≤ K, is bounded by the maximum norm ||B||_∞, we get

||Δv||^2_{L^2(Ω)} = ∫_Ω ( Σ_k Δv_k )( Σ_l Δv_l ) dx = Σ_{k,l=0}^K ∫_Ω Δv_k Δv_l dx
    ≤ C Σ_{k,l=0}^K 2^{−|k−l|/2} (h_k h_l)^{−2} ||v_k||_{L^2(Ω)} ||v_l||_{L^2(Ω)} ≤ C Σ_{k=0}^K 2^{4k} ||v_k||^2_{L^2(Ω)}.

From the last estimate, using (3.2) and taking the infimum over V_1(v), we obtain the first inequality in (3.8).

To finish the proof of the lemma, we establish that ||·||_{*,h} is a norm on V_h. It follows from the inequalities in (3.8) that ||v||_{*,h} ≥ 0 for any v ∈ V_h, and ||v||_{*,h} = 0 if and only if v = 0. Let us show that ||cv||_{*,h} = |c| ||v||_{*,h} for any v ∈ V_h and c ∈ R. Since the case c = 0 is trivial, assume c ≠ 0. Using the fact that the infimum over V_1(cv) equals the infimum over V_1(v), we obtain

||cv||^2_{*,h} = inf_{{w_k}∈V_1(cv)} Σ_{k=0}^K 2^{4k} ||w_k||^2_{L^2(Ω)} = inf_{{v_k}∈V_1(v)} Σ_{k=0}^K 2^{4k} ||c v_k||^2_{L^2(Ω)} = c^2 ||v||^2_{*,h},

which implies the required identity. To prove the triangle inequality for ||·||_{*,h}, using the triangle inequality for the L^2-norm and the Minkowski inequality, we obtain, for any v and w in V_h,

||v + w||^2_{*,h} = inf_{{z_k}∈V_1(v+w)} Σ_{k=0}^K 2^{4k} ||z_k||^2_{L^2(Ω)} ≤ inf_{V_1(v)} inf_{V_1(w)} Σ_{k=0}^K 2^{4k} ||v_k + w_k||^2_{L^2(Ω)}
    ≤ inf_{V_1(v)} inf_{V_1(w)} Σ_{k=0}^K 2^{4k} ( ||v_k||_{L^2(Ω)} + ||w_k||_{L^2(Ω)} )^2 ≤ ( ||v||_{*,h} + ||w||_{*,h} )^2.

Thus, ||·||_{*,h} is a norm on V_h which is equivalent to the H^2-norm.

Remark. The result of Lemma 3.4 is analogous to those formulated in [32, Corollary 2.1] and [31, Lemma 2], where relations similar to (3.8) were proved first for Sobolev spaces with equivalent norms involving infinite series, and then the versions for finite dimensional subspaces were obtained.
Our proof of Lemma 3.4 is somewhat different since it is based on the representation (3.11).

Let

(3.12)    ||v||_{Σ,*} = inf_{V_2(v)} ( Σ_{k,i} ||Δv_{ki}||^2_{L^2(Ω)} )^{1/2},    v ∈ V_h.

Lemma 3.5.

(3.13)    C_1 ||v||_{H^2(Ω)} ≤ ||v||_{Σ,*} ≤ C_2 ||v||_{H^2(Ω)},    v ∈ V_h.

Proof. We call nonnegative quantities A(h) and B(h) uniformly equivalent with respect to h, and write A(h) ≍ B(h), if C_1 B(h) ≤ A(h) ≤ C_2 B(h). Our proof consists in establishing the following sequence of equivalence relations:

(3.14)    ||v||^2_{Σ,*} ≍ inf_{V_2(v)} Σ_{k,i} 2^{4k} ||v_{ki}||^2_{L^2(Ω)} ≍ ||v||^2_{*,h} ≍ ||v||^2_{H^2(Ω)}.

The last equivalence relation in (3.14) is stated and proved in Lemma 3.4. Let us prove the first equivalence relation in (3.14). Take any v ∈ V_h and consider a representation {v_{ki}} ∈ V_2(v). Using (2.10), (3.3), (3.2), and (2.9) with |α| = 2 and α = (0, 0), we obtain

C_1 ||Δv_{ki}||^2_{L^2(Ω)} ≤ 2^{4k} ||v_{ki}||^2_{L^2(Ω)} ≤ C_2 ||Δv_{ki}||^2_{L^2(Ω)}.

Summing these inequalities with respect to k and i and taking the infimum over V_2(v), we obtain the first equivalence relation in (3.14). We now prove the second equivalence relation in (3.14):

(3.15)    C_1 inf_{V_2(v)} Σ_{k,i} 2^{4k} ||v_{ki}||^2_{L^2(Ω)} ≤ ||v||^2_{*,h} ≤ C_2 inf_{V_2(v)} Σ_{k,i} 2^{4k} ||v_{ki}||^2_{L^2(Ω)},    v ∈ V_h.

Take any v ∈ V_h. Using the uniqueness of the representation

v_k = Σ_{i=1}^{N_k} v_{ki},    v_{ki} ∈ V_i^k,    1 ≤ i ≤ N_k,

for v_k ∈ V_k, we define injection mappings V_1(v) → V_2(v) and V_2(v) → V_1(v) by v_k = Σ_i v_{ki}. The Schroeder-Bernstein theorem implies that there is a bijection V_1(v) ↔ V_2(v). Consider any representation {v_k} ∈ V_1(v) and the representation {v_{ki}} ∈ V_2(v) given by the bijection V_1(v) → V_2(v). Let c_{ki} be such that v_{ki} = c_{ki} ψ_i^k. Using (2.9) with α = (0, 0), we have

C_1 c_{ki}^2 h_k^2 ≤ ||v_{ki}||^2_{L^2(Ω)} ≤ C_2 c_{ki}^2 h_k^2.

Summing the last inequalities with respect to i and using (3.6), we obtain

(3.16)    C_1 Σ_{i=1}^{N_k} ||v_{ki}||^2_{L^2(Ω)} ≤ ||v_k||^2_{L^2(Ω)} ≤ C_2 Σ_{i=1}^{N_k} ||v_{ki}||^2_{L^2(Ω)}.

Multiplying (3.16) by 2^{4k}, summing for k = 0, 1, ..., K, taking the infimum over V_1(v), and using (3.7), we obtain

(3.17)    C_1 inf_{V_1(v)} Σ_{k,i} 2^{4k} ||v_{ki}||^2_{L^2(Ω)} ≤ ||v||^2_{*,h} ≤ C_2 inf_{V_1(v)} Σ_{k,i} 2^{4k} ||v_{ki}||^2_{L^2(Ω)}.

Since there is a bijection V_1(v) ↔ V_2(v), the infimum over V_1(v) is equal to the infimum over V_2(v); hence, (3.17) implies (3.15).

4. Uniform spectral equivalence. In this section, we define an additive and a multiplicative OSC preconditioner and prove that they are uniformly spectrally equivalent to the OSC operator L_h^* L_h.

Additive preconditioner. For 0 ≤ k ≤ K and 1 ≤ i ≤ N_k, let T_i^k be the linear operator from V_h into V_i^k defined as follows: for any w ∈ V_h, T_i^k w satisfies

(4.1)    a_h(T_i^k w, v) = a_h(w, v) for all v ∈ V_i^k.

The following is our main result for the additive preconditioner.

Theorem 4.1. Assume that h is sufficiently small. The linear operator

(4.2)    T_A = Σ_{k,i} T_i^k

is selfadjoint positive definite on V_h in the inner product a_h(·,·). The linear operator

(4.3)    B_A = L_h^* L_h T_A^{−1}

is selfadjoint positive definite on V_h in the inner product (·,·)_h, and

(4.4)    C_1 (B_A v, v)_h ≤ (L_h^* L_h v, v)_h ≤ C_2 (B_A v, v)_h,    v ∈ V_h.

Proof. Let

(4.5)    ||v||^2_{Σ,a_h} = inf_{V_2(v)} Σ_{k,i} a_h(v_{ki}, v_{ki}),    v ∈ V_h.

First, let us prove the equivalence relation

(4.6)    C_1 a_h(v, v) ≤ ||v||^2_{Σ,a_h} ≤ C_2 a_h(v, v),    v ∈ V_h.

Using (3.4), (3.12), and (4.5), we obtain the inequalities

C_1 ||v||_{Σ,*} ≤ ||v||_{Σ,a_h} ≤ C_2 ||v||_{Σ,*},    v ∈ V_h,

which, by Lemma 3.5, imply

C_1 ||v||_{H^2(Ω)} ≤ ||v||_{Σ,a_h} ≤ C_2 ||v||_{H^2(Ω)},    v ∈ V_h.

The last inequalities and Lemma 3.1 imply (4.6). We note that the second inequality in (4.6) is one of the key assumptions in the abstract theory of Schwarz methods (see Assumption 1 in Section 5.2 of [36]). Using (4.2), (4.1), the second inequality in (4.6), and (3.1), we obtain

a_h(T_A v, v) ≥ C a_h(v, v) ≥ C ||v||^2_{H^2(Ω)},    v ∈ V_h,

(see Theorem 1 in [17]). Therefore, the operator T_A is positive definite. The operators T_i^k are selfadjoint since a_h(·,·) is a symmetric bilinear form; hence, T_A is selfadjoint (see Lemma 2 in Section 5.2 of [36]). Thus, the operator T_A^{−1} is selfadjoint positive definite. It follows from (4.3), (2.7), and Lemma 1 in §5.2 of [36] that

(B_A v, v)_h = a_h(T_A^{−1} v, v) = ||v||^2_{Σ,a_h},    v ∈ V_h.

The last relation, along with (4.6) and (2.7), gives (4.4).

Multiplicative preconditioner. Let

(4.7)    T_M = I_h − [ ∏_{k=K}^{0} ∏_{i=1}^{N_k} (I_h − T_i^k) ] [ ∏_{k=0}^{K} ∏_{i=N_k}^{1} (I_h − T_i^k) ],

where I_h is the identity operator on V_h and the two bracketed products apply the factors in mutually reversed orders.

Theorem 4.2. Assume that h is sufficiently small. The linear operator T_M is selfadjoint positive definite on V_h in the inner product a_h(·,·). The linear operator

(4.8)    B_M = L_h^* L_h T_M^{−1}

is selfadjoint positive definite on V_h in the inner product (·,·)_h, and

(4.9)    C_1 (B_M v, v)_h ≤ (L_h^* L_h v, v)_h ≤ C_2 (B_M v, v)_h,    v ∈ V_h.

Proof. Since the operators {T_i^k} are selfadjoint on V_h with respect to the inner product a_h(·,·), and the two products in (4.7) apply the factors in reversed orders, it is easy to see that T_M is also a selfadjoint operator. Hence, B_M is selfadjoint in the inner product (·,·)_h. The inequalities in (4.9) follow from Lemma 4 in §5.2 of [36] with ω = 1, the second inequality in (4.6), and Lemma 6.1 in [40] formulated for {V_k}.

Remark. Using the multigrid terminology, we note that the multiplicative preconditioner B_M corresponds to the V-cycle multigrid algorithm with the Gauss-Seidel smoother.

Iterative method. Since the operator of equation (2.6) is selfadjoint positive definite, we can use our multilevel preconditioners with the PCG algorithm to compute the OSC solution (see the PCG algorithm in [20]).
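For readers who want to experiment, a generic textbook PCG iteration with a preconditioner solve and the relative-residual stopping rule used later in §7 can be sketched as follows. This is our own minimal sketch on a random SPD stand-in with a Jacobi-type preconditioner, not the OSC operators or the paper's implementation.

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients; stops on the relative residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# toy SPD system; apply_Minv plays the role of the preconditioner solve
rng = np.random.default_rng(3)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
b = rng.standard_normal(20)
x, its = pcg(A, b, lambda r: r / np.diag(A))
assert np.linalg.norm(A @ x - b) <= 1e-10 * np.linalg.norm(b)
```

In the paper's setting, `apply_Minv` would be the additive (Fig. 6.1) or multiplicative (Fig. 6.2) preconditioner applied to a residual vector.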
Let λ_{min,h} and λ_{max,h} be, respectively, the smallest and the largest eigenvalues of the preconditioned operator Ã_h = M_h^{−1/2} L_h^* L_h M_h^{−1/2}, where M_h is a preconditioner. It is well known that the convergence rate of PCG is bounded from above by (√κ_h − 1)/(√κ_h + 1),

where κ_h = λ_{max,h}/λ_{min,h} is the spectral condition number of Ã_h (see [20]). It follows from Theorems 4.1 and 4.2 that

(4.10)    κ_h ≤ C_2/C_1 < ∞ as h → 0.

The estimate (4.10) implies that it takes O(|ln ε|) iterations of the PCG algorithm with the multilevel OSC preconditioners to approximate the solution of (2.6) with tolerance ε; that is, the number of iterations is bounded by a constant independent of h and K.

5. OSC matrix-vector representation. In this section, we introduce the matrix-vector representation of the OSC problem in the standard Hermite finite element basis and obtain recurrence relations for the computation of the OSC approximations and other required quantities.

Representation of Hermite piecewise cubic polynomials. Let V_k^1 denote the vector space of Hermite piecewise cubic polynomials vanishing at t = 0 and t = 1 and corresponding to the partition {t_i = i/2^k}_{i=0}^{2^k} of the interval [0, 1]. The dimension of V_k^1 is M_k = 2^{k+1}. Let

(5.1)    Φ_k = {s_0^k, v_1^k, s_1^k, ..., v_{2^k−1}^k, s_{2^k−1}^k, s_{2^k}^k} ≡ {φ_i^k}_{i=1}^{M_k}

be the standard basis of V_k^1 consisting of the nodal value and nodal slope basis functions v_i^k(t) and s_i^k(t), respectively, defined, for 0 ≤ i ≤ 2^k, by

(5.2)    v_i^k(t_j) = δ_{ij},    (v_i^k)'(t_j) = 0,    0 ≤ j ≤ 2^k,
         s_i^k(t_j) = 0,    (s_i^k)'(t_j) = h_k^{−1} δ_{ij},    0 ≤ j ≤ 2^k,

where δ_{ij} is the Kronecker delta. We note that the value basis functions v_0^k and v_{2^k}^k corresponding to the boundary points t = 0 and t = 1 are not included in Φ_k. The value and the slope basis functions in Φ_k corresponding to an interior partition node are uniquely expressed as linear combinations of five basis functions in Φ_{k+1} as follows:

(5.3)    v_i^k = (1/2) v_{2i−1}^{k+1} + (3/4) s_{2i−1}^{k+1} + v_{2i}^{k+1} + (1/2) v_{2i+1}^{k+1} − (3/4) s_{2i+1}^{k+1},
         s_i^k = −(1/8) v_{2i−1}^{k+1} − (1/8) s_{2i−1}^{k+1} + (1/2) s_{2i}^{k+1} + (1/8) v_{2i+1}^{k+1} − (1/8) s_{2i+1}^{k+1}.

Let

(5.4)    P = [ 8  0 ;  0  4 ],    Q = [ 4  −1 ;  6  −1 ],    R = [ 4  1 ;  −6  −1 ]

be the 2 × 2 matrices corresponding to the representation (5.3). Let P_k^1 be the 2M_k × M_k matrix obtained from the (2M_k + 2) × (M_k + 2) matrix

(5.5)    (1/8) [ P  O  O  ⋯  O
                 R  Q  O  ⋯  O
                 O  P  O  ⋯  O
                 O  R  Q  ⋯  O
                 ⋮            ⋱
                 O  ⋯      O  P ],

where O is the 2 × 2 zero matrix, by removing the first and the next-to-last rows and columns. For v ∈ V_k^1, let [v]_{H,k} denote the vector representation of v in the basis Φ_k. It follows from (5.3), (5.4), and (5.5) that

(5.6)    [v]_{H,k+1} = P_k^1 [v]_{H,k},    v ∈ V_k^1,    k ∈ {0, 1, 2, ..., K − 1}.

Representation of Hermite piecewise bicubic polynomials. We note that V_k = V_k^1 ⊗ V_k^1, where the symbol ⊗ denotes a vector space tensor product. Set

(5.7)    ψ_{M_k(i−1)+j}^k(x) = φ_i^k(x_1) φ_j^k(x_2) for 1 ≤ i, j ≤ M_k,

where the basis functions φ_i^k, 1 ≤ i ≤ M_k, are defined by (5.1) and (5.2). The set Ψ_k = {ψ_j^k}_{j=1}^{N_k}, where N_k = M_k^2 = 4^{k+1}, is the standard basis for V_k in the standard ordering. It follows from (5.3) and (5.7) that a basis function in Ψ_k corresponding to an interior partition node is uniquely expressed as a linear combination of 25 basis functions in Ψ_{k+1}. Let [v]_{H,k} denote the vector representation of v ∈ V_k in the basis Ψ_k, and let [v]_H = [v]_{H,K} for v ∈ V_h. It is obvious that

(5.8)    [ψ_j^k]_{H,k} = e_j^k,    1 ≤ j ≤ N_k,

where e_j^k is the j-th standard basis vector in R^{N_k}. We set

(5.9)    P_k = P_k^1 ⊗ P_k^1 ∈ R^{N_{k+1} × N_k},

where P_k^1 is the one-dimensional interpolation matrix in (5.6) and the symbol ⊗ now denotes the matrix tensor product. The matrix P_k corresponds to the piecewise bicubic Hermite interpolation in V_{k+1}, and we call P_k the interpolation matrix from level k to level k + 1. It follows from (5.6), (5.7), and (5.9) that

(5.10)    [v]_{H,k+1} = P_k [v]_{H,k} for 0 ≤ k ≤ K − 1 and v ∈ V_k.

Applying formula (5.10) recurrently, we obtain

(5.11)    [v]_H = P_{K−1} ⋯ P_k [v]_{H,k},    v ∈ V_k.

In particular, replacing v in (5.11) by ψ_j^k and using (5.8), we get

(5.12)    [ψ_j^k]_H = P_{K−1} ⋯ P_k e_j^k,    1 ≤ j ≤ N_k.

Matrix-vector form of the OSC problem. Assume that the set of Gauss points G_h = {ξ_i}_{i=1}^N is ordered, and let, for any function v(x) continuous on Ω̄,

[v]_G = [v(ξ_1), ..., v(ξ_N)]^t.

From (2.5), we have

(5.13)    (v, w)_h = (h^2/4) [w]_G^t [v]_G,    v, w ∈ V_h.
Let [L_h] be the OSC matrix of size N × N corresponding to the differential operator L in (1.2), with entries Lψ_j^K(ξ_i), where i is the row index. For a continuous function g(x), let

D(g) = diag(g(ξ_1), ..., g(ξ_N))

be a diagonal matrix. We note that

[L_h] = D(a_{11})(Â ⊗ B̂) + 2D(a_{12})(Ĉ ⊗ Ĉ) + D(a_{22})(B̂ ⊗ Â)
      + D(b_1)(Ĉ ⊗ B̂) + D(b_2)(B̂ ⊗ Ĉ) + D(c)(B̂ ⊗ B̂),

and the matrices Â, Ĉ, and B̂ have the following almost block diagonal structure:

[ W̃  Z  O  O  ⋯  O
  Õ  W  Z  O  ⋯  O
  Õ  O  W  Z  ⋯  O
  ⋮               ⋱
  Õ  O  ⋯     W  Z̃ ] ∈ R^{M_K × M_K},

where W, Z, O and W̃, Z̃, Õ are, respectively, 2 × 2 and 2 × 1 blocks; O and Õ are zero matrices. The (i, j)-entries of the matrices Â, Ĉ, and B̂ are (φ_j^K)''(η_i), (φ_j^K)'(η_i), and φ_j^K(η_i), respectively, where {η_j}_{j=1}^{M_K} is the set of Gauss points in the interval [0, 1] corresponding to partition π_h. It is easy to see that

(5.14)    [L_h v]_G = [L_h][v]_H,    v ∈ V_h.

Using (2.7), (5.13), and (5.14), we obtain, for any v and w in V_h,

(5.15)    a_h(v, w) = (L_h v, L_h w)_h = (h^2/4)[L_h w]_G^t [L_h v]_G = (h^2/4)[w]_H^t [L_h]^t [L_h][v]_H.

Let A_k = (a_{ij}^k) be the N_k × N_k matrix with entries

(5.16)    a_{ij}^k = (4/h^2) a_h(ψ_j^k, ψ_i^k),

and let A = A_K. From (5.16), using (5.15), we get

(5.17)    a_{ij}^k = [ψ_i^k]_H^t [L_h]^t [L_h][ψ_j^k]_H.

From (5.17) for k = K, using (5.8), we obtain

(5.18)    A = [L_h]^t [L_h].

Similarly, using (5.13), (5.14), the relation [f]_G = [f_h]_G, and (5.8), we obtain

(4/h^2)(L_h^* f_h, ψ_i^K)_h = (4/h^2)(f_h, L_h ψ_i^K)_h = [f_h]_G^t [L_h ψ_i^K]_G = [f]_G^t [L_h][ψ_i^K]_H = (e_i^K)^t [L_h]^t [f]_G.

Thus, the variational OSC equation (2.8) has the matrix-vector form

(5.19)    A [u_h]_H = [L_h]^t [f]_G.

6. Implementation. In this section, we describe implementations of the additive and the multiplicative OSC preconditioners.

Additive preconditioner. Let us describe the computation of w = B_A^{−1} v for v ∈ V_h, where B_A is defined by (4.3). Let

(6.1)    w_k = Σ_{i=1}^{N_k} w_{ki},    where w_{ki} = T_i^k (L_h^* L_h)^{−1} v.

Using (4.3), (4.2), and (6.1), we get

(6.2)    w = T_A (L_h^* L_h)^{−1} v = ( Σ_{k,i} T_i^k )(L_h^* L_h)^{−1} v = Σ_{k,i} w_{ki} = Σ_k w_k.

Thus, to compute w, we need to compute and sum w_k for k = 0, 1, ..., K. Using (4.1) and (2.7), we obtain, for 1 ≤ i ≤ N_k,

(6.3)    a_h(w_{ki}, ψ_i^k) = a_h(T_i^k (L_h^* L_h)^{−1} v, ψ_i^k) = a_h((L_h^* L_h)^{−1} v, ψ_i^k) = (v, ψ_i^k)_h.

Let

(6.4)    w_{ki} = c_{ki} ψ_i^k,    c_{ki} ∈ R,
(6.5)    [v]_k = (4/h^2) [(v, ψ_1^k)_h, ..., (v, ψ_{N_k}^k)_h]^t ∈ R^{N_k}.

Substituting (6.4) into (6.3) and using (6.5) and (5.16), we rewrite (6.3) in the form

(6.6)    diag(A_k) w̄_k = [v]_k,

where w̄_k = (c_{k1}, ..., c_{kN_k})^t = [w_k]_{H,k} and diag(A_k) = diag(a_{11}^k, ..., a_{N_kN_k}^k). Let [I_h] be the N × N matrix with entries ψ_j^K(ξ_i), where i is the row index. The matrix [I_h] is nonsingular and maps [v]_H to [v]_G; that is,

(6.7)    [v]_G = [I_h][v]_H,    v ∈ V_h.

For [v]_k defined by (6.5), let us show that

(6.8)    [v]_K = [I_h]^t [v]_G,
(6.9)    [v]_k = P_k^t [v]_{k+1} for k = K − 1, K − 2, ..., 0,

where the interpolation matrix P_k is defined by (5.9) and (5.5). Using (6.5), (5.13), and (6.7), we obtain

(6.10)    (e_j^k)^t [v]_k = (4/h^2)(v, ψ_j^k)_h = [ψ_j^k]_G^t [v]_G = [ψ_j^k]_H^t [I_h]^t [v]_G,    1 ≤ j ≤ N_k.

Relation (6.8) follows from (6.10) with k = K and (5.8). Using (6.10), (5.12), and (6.8), we obtain

[v]_k = P_k^t ⋯ P_{K−1}^t [v]_K,

which implies (6.9). The multiplication by P_k^t is carried out using the factored representation

P_k^t = ((P_k^1)^t ⊗ I)(I ⊗ (P_k^1)^t),

where I denotes an identity matrix of the appropriate size in each factor. The matrices {A_k} can be precomputed using the recurrence formula

(6.11)    A_k = P_k^t A_{k+1} P_k for k = K − 1, K − 2, ..., 0,

which follows from (5.17), (5.12), and (5.18). Finally, to compute w = Σ_k w_k, that is, [w]_H, we implement

w̄_{k+1} ← w̄_{k+1} + P_k w̄_k,    k = 0, 1, ..., K − 1.
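The descent-ascent sweep just described (restrict the scaled inner products by P_k^t, solve a diagonal system on every level, then interpolate and accumulate) can be sketched compactly. The hierarchy below is a hypothetical toy with random interpolation matrices and constant diagonals, not the OSC matrices; the function name is ours.

```python
import numpy as np

def additive_apply(vK, P, diagA):
    """Additive preconditioner sweep in the style of Fig. 6.1.

    P[k] interpolates level k -> k+1; diagA[k] is diag(A_k) as a vector."""
    K = len(P)
    v = [None] * (K + 1)
    v[K] = vK
    for k in range(K - 1, -1, -1):
        v[k] = P[k].T @ v[k + 1]                 # [v]_k = P_k^t [v]_{k+1}, (6.9)
    w = [v[k] / diagA[k] for k in range(K + 1)]  # diag(A_k) w_k = [v]_k, (6.6)
    for k in range(K):
        w[k + 1] += P[k] @ w[k]                  # accumulate P_k w_k
    return w[K]

# toy nested hierarchy with level sizes 2 -> 4 -> 8
rng = np.random.default_rng(1)
P = [rng.standard_normal((4, 2)), rng.standard_normal((8, 4))]
diagA = [np.full(2, 2.0), np.full(4, 3.0), np.full(8, 4.0)]
vK = rng.standard_normal(8)
out = additive_apply(vK, P, diagA)
assert out.shape == (8,)
```

With the factored Kronecker form of P_k^t, each level costs O(N_k) operations, so the whole sweep is O(N), consistent with the O(h^{−2}) = O(4^K) cost stated for the additive algorithm.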

input: K, [v]_K, {diag(A_k)}_{k=0}^K
output: [w]_H
v̄_K ← [v]_K
for k = K, K−1, ..., 0
    if (k < K) v̄_k = P_k^t v̄_{k+1} end
    solve diag(A_k) w̄_k = v̄_k
end
for k = 0, 1, ..., K−1
    w̄_{k+1} ← w̄_{k+1} + P_k w̄_k
end
[w]_H ← w̄_K

Fig. 6.1. Additive preconditioning algorithm.

The additive preconditioning algorithm is presented in Figure 6.1. It is easy to see that the computational cost of the additive algorithm is O(h^{−2}) = O(4^K).

Multiplicative preconditioner. We now consider the computation of w = B_M^{−1} v for v ∈ V_h, where B_M is defined in (4.8). Let

(6.12)    u = (L_h^* L_h)^{−1} v.

Using (4.8) and (6.12), we get

w = B_M^{−1} v = T_M (L_h^* L_h)^{−1} v = T_M u,

which, by (4.7), implies

(6.13)    u − w = [ ∏_{k=K}^{0} ∏_{i=1}^{N_k} (I_h − T_i^k) ] [ ∏_{k=0}^{K} ∏_{i=N_k}^{1} (I_h − T_i^k) ] u.

Let S be an ordered set of pairs (k, i), with the ordering corresponding to that of the factors in (6.13) from right to left. Setting y = u − w, we see that u − w can be computed by

y ← u;    y ← (I_h − T_i^k) y for (k, i) ∈ S,

which is equivalent to

(6.14)    w ← 0;    w ← w + T_i^k (u − w) for (k, i) ∈ S.

Let us develop an efficient implementation of the algorithm in (6.14). Using (4.1), (2.7), and (6.12), we obtain

(6.15)    a_h(T_i^k (u − w), ψ_i^k) = a_h(u − w, ψ_i^k) = (v, ψ_i^k)_h − a_h(w, ψ_i^k).

Substituting T_i^k (u − w) = c_{ki} ψ_i^k into (6.15) and (6.14), we get

(6.16)    c_{ki} = g_{ki}(w)/a_{ii}^k,
(6.17)    w ← w + c_{ki} ψ_i^k,

where a_{ii}^k is defined in (5.16), and

(6.18)    g_{ki}(w) = (4/h^2) [ (v, ψ_i^k)_h − a_h(w, ψ_i^k) ],    1 ≤ i ≤ N_k.

Let g_k(w) = (g_{k1}(w), ..., g_{kN_k}(w))^t. We note that, each time the value of w is changed by (6.17), all entries of the vector g_k(w) should be updated by (6.18), and such a computation requires a matrix-vector product. It is more efficient to use the recurrent assignment

(6.19)    g_k(w) ← g_k(w) − c_{ki} (a_{1i}^k, ..., a_{N_k i}^k)^t,

which is obtained by multiplying (6.17) by ψ_j^k in the a_h(·,·) inner product and subtracting (v, ψ_j^k)_h from both sides of the resulting assignment. For k = K − 1, K − 2, ..., 0, let w_k be the value of w after implementing

w ← w + T_i^l (u − w),    i = 1, ..., N_l,    l = K, K − 1, ..., k + 1,

and let

(6.20)    ḡ_k = g_k(w_k).

We note that (6.16) followed by (6.19) for i = 1, ..., N_k is the column-oriented algorithm for solving a lower triangular linear system L_k w̄_k = ḡ_k, where the matrix L_k contains the lower triangular part of A_k and w̄_k = (c_{k1}, ..., c_{kN_k})^t. Let us show that the vectors {ḡ_k} can be computed by the recurrence formula

(6.21)    ḡ_{k−1} = P_{k−1}^t ( ḡ_k − A_k w̄_k ).

Using (6.19), the definition of w_{k−1}, and (6.20), we obtain

(6.22)    g_k(w_{k−1}) = ḡ_k − A_k w̄_k.

Applying (6.18), (6.5), (5.15), and (5.12), we get

(6.23)    g_k(w) = [v]_k − P_k^t ⋯ P_{K−1}^t [L_h]^t [L_h][w]_H.

From (6.23) with k replaced by k − 1 and (6.9), we have

g_{k−1}(w) = P_{k−1}^t ( [v]_k − P_k^t ⋯ P_{K−1}^t [L_h]^t [L_h][w]_H ) = P_{k−1}^t g_k(w),

which, by (6.22), implies (6.21). In the ascent phase, for k = 0, ..., K, an upper triangular linear system with the matrix L_k^t is solved. The algorithm implementing the multiplicative preconditioning is given in Figure 6.2. It is easy to see that the cost of the multiplicative algorithm is O(h^{−2}) = O(4^K).

7. Numerical results. In this section, we present numerical results for solving test problems by the PCG method with the multilevel OSC preconditioners developed in this work. For a chosen exact solution u(x) of BVP (1.1), we set f = Lu.
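The column-oriented sweep (6.16)-(6.19) is exactly forward substitution with the lower triangular part of A_k, which the following short sketch (ours) makes explicit and checks against a dense triangular solve:

```python
import numpy as np

def column_forward_solve(A, g):
    """Column-oriented Gauss-Seidel sweep in the style of (6.16)-(6.19):
    each c_i is a diagonal solve, after which g is updated by column i of A.
    The result solves L c = g with L the lower triangular part of A."""
    g = g.copy()
    n = len(g)
    c = np.zeros(n)
    for i in range(n):
        c[i] = g[i] / A[i, i]        # (6.16)
        g -= c[i] * A[:, i]          # (6.19); only the rows below i matter
    return c

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
b = rng.standard_normal(5)
assert np.allclose(column_forward_solve(A, b), np.linalg.solve(np.tril(A), b))
```

The update of the entries above the diagonal in (6.19) is harmless because those entries of g are never read again, which is why the sweep coincides with a triangular solve.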
PCG iterations are stopped when the relative residual norm, that is, the ratio of the 2-norm of the residual of equation (5.19) to the 2-norm of the right-hand side, becomes less than a tolerance ε. The spectral condition number κ_h of the preconditioned OSC operator satisfies (4.10); that is, κ_h < C_2/C_1 as h → 0. To demonstrate this fact numerically, we compute quantities that approximate κ_h on a sequence of nested partitions using the PCG iteration parameters and present these approximations in the following tables.
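A standard way to approximate κ_h from the PCG iteration parameters is to assemble the Lanczos tridiagonal matrix from the CG coefficients α_j and β_j and take the ratio of its extreme eigenvalues. The sketch below illustrates the idea on a one-dimensional model matrix with the identity as preconditioner; it is an assumption-laden illustration, not the implementation used in the paper.

```python
import numpy as np

def pcg_condition_estimate(A, M_inv, b, tol=1e-10, maxit=200):
    # plain PCG; the coefficients alpha_j, beta_j define a Lanczos
    # tridiagonal matrix T whose extreme eigenvalues approximate those
    # of the preconditioned operator
    x = np.zeros_like(b)
    r = b.astype(float).copy()
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    alphas, betas = [], []
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        alphas.append(alpha)
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        beta = rz_new / rz
        betas.append(beta)
        p = z + beta * p
        rz = rz_new
    # Lanczos tridiagonal assembled from the CG coefficients:
    # T[j,j] = 1/alpha_j + beta_{j-1}/alpha_{j-1}, off-diag sqrt(beta_j)/alpha_j
    m = len(alphas)
    T = np.zeros((m, m))
    for j in range(m):
        T[j, j] = 1.0 / alphas[j] + (betas[j - 1] / alphas[j - 1] if j > 0 else 0.0)
        if j < m - 1:
            T[j, j + 1] = T[j + 1, j] = np.sqrt(betas[j]) / alphas[j]
    ev = np.linalg.eigvalsh(T)
    return x, ev[-1] / ev[0]

n = 50
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
x, kappa = pcg_condition_estimate(A, lambda r: r, np.ones(n))
```

With a spectrally equivalent preconditioner in place of the identity, the same estimate would remain bounded as the partition is refined, which is what the tables below demonstrate.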

input: K, [v]_K, {A_k}_{k=0}^K
output: [w]_H

g_K ← [v]_K
for k = K, ..., 0
    solve L_k w_k = g_k
    if (k > 0) g_{k-1} ← P_{k-1}^t (g_k - A_k w_k) end
end
for k = 0, ..., K
    if (k > 0) w_k ← w_k + P_{k-1} w_{k-1} end
    solve L_k^t w'_k = g_k - A_k w_k
    w_k ← w_k + w'_k
end
[w]_H ← w_K

Fig. 6.2. Multiplicative preconditioning algorithm.

Table 7.1
Comparison of multilevel preconditioners for solving the normal and the original OSC equations by PCG (L = Δ, ε = ).

            Normal OSC equation                 Original OSC equation
        Additive        Multiplicative      Additive        Multiplicative
  h     κ_h   Iter.     κ_h   Iter.         κ_h   Iter.     κ_h   Iter.
  [The numerical entries for the five partition levels did not survive the transcription.]

The approximations to κ_h are reported under κ_h; under Iter., we report the numbers of iterations needed to reduce the relative residual norm below the specified tolerance.

In the first set of experiments, we compare our preconditioners with those proposed in [9], where an additive and a multiplicative multilevel preconditioner were developed to solve the original OSC equation (2.3) for a selfadjoint L. As in [9], we take L = Δ, the Laplacian, and u(x) = 10 x_1^2 (1 - x_1) x_2^2 (1 - x_2). Since u(x) is a bicubic polynomial, it is also the solution of the OSC problem; hence, the discretization error is zero. The numerical results are presented in Table 7.1, and they indicate that, for L = Δ, PCG with multilevel preconditioners for the original OSC equation is slightly more efficient than that for the normal OSC equation. For smaller values of h, the approximations to κ_h and the numbers of iterations are relatively small and change insignificantly.

In the next set of experiments, we solve the same problems as in [1], where the PCG algorithm was tested with a direct solver preconditioner. The operator L in (1.2) is taken with the coefficients

(7.1)    a_11(x) = e^{x_1 x_2},            b_1(x) = x_2 e^{x_1 x_2} + β_1 cos[π(x_1 + x_2)],
         a_12(x) = α/(1 + x_1 + x_2),      b_2(x) = x_1 e^{x_1 x_2} + β_2 sin(2π x_1 x_2),
         a_22(x) = e^{x_1 x_2},            c(x) = γ[1 + 1/(1 + x_1 + x_2)],

Table 7.2
Approximations to the spectral condition number κ_h and PCG iteration numbers for the variable-coefficient L (ε = ). Top part: multiplicative preconditioner; bottom part: additive preconditioner.

        P1              P2              P3              P4
  h     κ_h   Iter.     κ_h   Iter.     κ_h   Iter.     κ_h   Iter.
  [Only the iteration counts reproduced from [1] survive the transcription: (26, 46, 34, 68), (30, 51, 38, 75), (33, 54, 40, 81), and (34, 55, 42, 84) for P1-P4 on four successively refined partitions; the remaining entries are lost.]

where α, β_1, β_2, and γ are parameters, and the exact solution of BVP (1.1) is set to u(x) = e^{x_1 + x_2} x_1 x_2 (1 - x_1)(1 - x_2). Using PCG with the multiplicative preconditioner, we solve the following four test problems, corresponding to a differential operator L that is:

P1  selfadjoint negative definite, α = β_1 = β_2 = γ = 0;
P2  selfadjoint indefinite, α = β_1 = β_2 = 0 and γ = 100;
P3  nonselfadjoint indefinite, β_2 = 100 and α = β_1 = γ = 0;
P4  nonselfadjoint indefinite, α = 0.5, β_1 = 10, β_2 = γ = 50.

In the top part of Table 7.2, we report results obtained with the multiplicative preconditioner and, in parentheses, we reproduce the results reported in [1]. For all four problems, the numbers of iterations increase slowly with decreasing h. It takes the fewest iterations to solve the selfadjoint negative definite problem P1 and the most to solve P2, the most indefinite problem among P1-P4. For P1, the approximations to the spectral ratio κ_h are much smaller than those for P2-P4, for which the approximations to κ_h are about the same at smaller values of h. It is interesting to note that κ_h increases monotonically for P1 and decreases for P2-P4 as h decreases. The numbers of iterations for P1 and P4 favor the multilevel multiplicative preconditioner over the direct solver preconditioner developed in [1], although the preconditioner in [1] produces smaller numbers of iterations for P2 and P3.
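The coefficients (7.1) and the parameter sets P1-P4 translate directly into code; the sketch below is such a transcription, with illustrative Python names. For P1, the first-order coefficients satisfy b_1 = ∂a_11/∂x_1 and b_2 = ∂a_22/∂x_2, which is what makes that operator selfadjoint.

```python
import numpy as np

def coefficients(alpha, beta1, beta2, gamma):
    # coefficient functions of the operator L with the data (7.1); x = (x1, x2)
    a11 = lambda x1, x2: np.exp(x1 * x2)
    a12 = lambda x1, x2: alpha / (1.0 + x1 + x2)
    a22 = lambda x1, x2: np.exp(x1 * x2)
    b1 = lambda x1, x2: x2 * np.exp(x1 * x2) + beta1 * np.cos(np.pi * (x1 + x2))
    b2 = lambda x1, x2: x1 * np.exp(x1 * x2) + beta2 * np.sin(2.0 * np.pi * x1 * x2)
    c = lambda x1, x2: gamma * (1.0 + 1.0 / (1.0 + x1 + x2))
    return a11, a12, a22, b1, b2, c

# parameter sets of the four test problems
params = {
    "P1": dict(alpha=0.0, beta1=0.0, beta2=0.0, gamma=0.0),
    "P2": dict(alpha=0.0, beta1=0.0, beta2=0.0, gamma=100.0),
    "P3": dict(alpha=0.0, beta1=0.0, beta2=100.0, gamma=0.0),
    "P4": dict(alpha=0.5, beta1=10.0, beta2=50.0, gamma=50.0),
}

# exact solution of the experiments; it vanishes on the boundary of the unit square
u = lambda x1, x2: np.exp(x1 + x2) * x1 * x2 * (1.0 - x1) * (1.0 - x2)

# for P1 (all parameters zero), b1 = d(a11)/dx1 and b2 = d(a22)/dx2
a11, a12, a22, b1, b2, c = coefficients(**params["P1"])
```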
Results for the additive preconditioner are presented in the bottom part of Table 7.2, and they suggest that, for the tested problems, the additive preconditioner is less efficient, in terms of the number of iterations, than the multiplicative preconditioner. The approximations to κ_h computed with the additive preconditioner are approximately 15 times larger than those computed with the multiplicative preconditioner, and this difference is reflected in the larger numbers of iterations for the additive preconditioner. This result is intuitively expected based on known properties of Jacobi and Gauss-Seidel smoothers for finite-difference operators. In Figure 7.1, we display plots of the residual curves for h = 1/256. We observe monotone convergence only for the selfadjoint negative definite problem P1; for P2-P4, the residual curves lie relatively close to each other.
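The Jacobi versus Gauss-Seidel intuition invoked above can be quantified on a model one-dimensional finite-difference Laplacian, for which the Gauss-Seidel iteration matrix has spectral radius equal to the square of the Jacobi one; this sketch is an illustration, not part of the paper's experiments.

```python
import numpy as np

n = 30
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
D = np.diag(np.diag(A))      # Jacobi splitting matrix
L = np.tril(A)               # Gauss-Seidel splitting matrix

# error-propagation (iteration) matrices I - M^{-1} A for the two splittings
G_jacobi = np.eye(n) - np.linalg.solve(D, A)
G_gs = np.eye(n) - np.linalg.solve(L, A)

rho_j = float(np.max(np.abs(np.linalg.eigvals(G_jacobi))))
rho_gs = float(np.max(np.abs(np.linalg.eigvals(G_gs))))
```

For this consistently ordered matrix, rho_gs = rho_j**2, the classical analogue of the additive-versus-multiplicative gap observed in Table 7.2.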

Fig. 7.1. Logarithmic plots of the relative residual norm vs. iteration number for P1-P4 (h = 1/256). Top figure: multiplicative preconditioner; bottom figure: additive preconditioner.

Table 7.3
Approximations to the spectral condition number κ_h and PCG iteration numbers for the Helmholtz equation (ε = ).

        k² =            k² =            k² =            k² = 1400
  h     κ_h   Iter.     κ_h   Iter.     κ_h   Iter.     κ_h   Iter.
  [The numerical entries, and three of the four values of k², did not survive the transcription.]

In the last set of experiments, we solve the Helmholtz equation Δu + k²u = f for several values of k². The numerical results were obtained using the multiplicative preconditioner, and they are presented in Table 7.3. We see that the approximations to κ_h change insignificantly when h is decreased for all chosen values of k². For k² = 1000, the approximations to κ_h are very large; however, for k² = 1400 they are the smallest. The approximations to κ_h are large because of small values of the smallest eigenvalue λ_{min,h} of the preconditioned operator. In Figure 7.2, we plot the eigenvalues of the OSC matrix [L_h] with h = 1/32 (N = 4,096) for the Helmholtz equation with k² = 1400. We note that the eigenvalues are widely spread over the complex plane. In Figure 7.3, we plot the eigenvalues of the symmetric matrix [I_h]^t [L_h], which are the eigenvalues of the OSC operator L_h. We note that L_h has large numbers of both positive and negative eigenvalues.

Acknowledgment. The author wishes to thank Prof. Bernard Bialecki for his assistance during the preparation of this paper and the referees for their valuable comments.
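The indefiniteness seen in Figure 7.3 can be illustrated with a finite-difference stand-in, not the OSC discretization: the eigenvalues of the 5-point discrete Laplacian on the unit square are known in closed form, and shifting them by k² = 1400 with h = 1/32 produces eigenvalues of both signs.

```python
import numpy as np

# 1D Dirichlet eigenvalues of the second-difference operator are
# mu_i = -(4/h^2) sin^2(i*pi*h/2); the 2D spectrum is mu_i + mu_j,
# and adding k^2 shows the indefiniteness of Delta_h + k^2 I
ksq = 1400.0
h = 1.0 / 32.0
n = 31                                                 # interior points per direction
i = np.arange(1, n + 1)
mu = (2.0 / h ** 2) * (np.cos(np.pi * h * i) - 1.0)    # 1D eigenvalues (all negative)
lam = mu[:, None] + mu[None, :] + ksq                  # spectrum of Delta_h + k^2 I
n_pos = int(np.sum(lam > 0.0))
n_neg = int(np.sum(lam < 0.0))
```

Both counts are substantial, mirroring the observation that L_h has large numbers of positive and negative eigenvalues.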

Fig. 7.2. Eigenvalues of the OSC matrix [L_h] for the Helmholtz equation on the complex plane (k² = 1400, h = 1/32).

Fig. 7.3. Eigenvalues of the OSC operator L_h for the Helmholtz equation plotted against their ordering numbers (k² = 1400, h = 1/32).

REFERENCES

[1] R. Aitbayev and B. Bialecki, A preconditioned conjugate gradient method for nonselfadjoint or indefinite orthogonal spline collocation problems, SIAM J. Numer. Anal., 41 (2003).
[2] U. Ascher, S. Pruess, and R. D. Russell, On spline basis selection for solving differential equations, SIAM J. Numer. Anal., 20 (1983).
[3] O. Axelsson and V. A. Barker, Finite Element Solution of Boundary Value Problems: Theory and Computation, SIAM, Philadelphia.
[4] R. E. Bank, A comparison of two multilevel iterative methods for nonsymmetric and indefinite elliptic finite-element equations, SIAM J. Numer. Anal., 18 (1981).
[5] B. Bialecki, An alternating direction implicit method for orthogonal spline collocation linear systems, Numer. Math., 59 (1991).
[6] B. Bialecki, A fast domain decomposition Poisson solver on a rectangle for Hermite bicubic orthogonal spline collocation, SIAM J. Numer. Anal., 30 (1993).
[7] B. Bialecki, Convergence analysis of orthogonal spline collocation for elliptic boundary value problems, SIAM J. Numer. Anal., 35 (1998).
[8] B. Bialecki, Superconvergence of the orthogonal spline collocation solution of Poisson's equation, Numer. Methods Partial Differential Equations, 15 (1999).
[9] B. Bialecki and M. Dryja, Multilevel additive and multiplicative methods for orthogonal spline collocation problems, Numer. Math., 77 (1997).
[10] B. Bialecki, G. Fairweather, and K. Bennett, Fast direct solvers for piecewise Hermite bicubic orthogonal spline collocation equations, SIAM J. Numer. Anal., 29 (1992).
[11] J. H. Bramble, D. Y. Kwak, and J. E. Pasciak, Uniform convergence of multigrid V-cycle iterations for indefinite and nonsymmetric problems, SIAM J. Numer. Anal., 31 (1994).
[12] A. Brandt and I. Livshits, Wave-ray multigrid method for standing wave equations, ETNA, 6 (1997).
[13] C. Christara and B. F. Smith, Multigrid and multilevel methods for quadratic spline collocation, BIT, 37 (1997).
[14] K. D. Cooper and P. M. Prenter, Alternating direction collocation for separable elliptic partial differential equations, SIAM J. Numer. Anal., 28 (1991).
[15] C. de Boor and B. Swartz, Collocation at Gaussian points, SIAM J. Numer. Anal., 10 (1973).
[16] R. DeVore and V. Popov, Interpolation of Besov spaces, Trans. Amer. Math. Soc., 305 (1988).
[17] M. Dryja and O. B. Widlund, Schwarz methods of Neumann-Neumann type for three-dimensional elliptic finite-element problems, Comm. Pure Appl. Math., 48 (1995).
[18] W. R. Dyksen, Tensor product generalized ADI methods for separable elliptic problems, SIAM J. Numer. Anal., 24 (1987).
[19] J. Gary, The multigrid iteration applied to the collocation method, SIAM J. Numer. Anal., 18 (1981).
[20] W. Hackbusch, Iterative Solution of Large Sparse Systems of Equations, Springer-Verlag, New York.
[21] A. Hadjidimos, E. N. Houstis, J. R. Rice, and E. Vavalis, Modified successive overrelaxation (MSOR) and equivalent 2-step iterative methods for collocation matrices, J. Comput. Appl. Math., 42 (1992).
[22] E. N. Houstis, W. F. Mitchell, and J. R. Rice, Algorithms INTCOL and HERMCOL: Collocation on rectangular domains with bicubic Hermite polynomials, ACM Trans. Math. Software, 11 (1985).
[23] E. N. Houstis, W. F. Mitchell, and J. R. Rice, Collocation software for second-order partial differential equations, ACM Trans. Math. Software, 11 (1985).
[24] S. D. Kim and S. V. Parter, Preconditioning cubic spline collocation discretizations of elliptic problems, Numer. Math., 72 (1995).
[25] O. A. Ladyzhenskaja and N. N. Ural'ceva, Linear and Quasilinear Elliptic Equations, Academic Press, New York.
[26] Y.-L. Lai, A. Hadjidimos, E. N. Houstis, and J. R. Rice, On the iterative solution of Hermite collocation equations, SIAM J. Matrix Anal. Appl., 16 (1995).
[27] J. Mandel, Multigrid convergence for nonsymmetric, indefinite variational problems and one smoothing step, Appl. Math. Comput., 19 (1986).
[28] T. A. Manteuffel and S. V. Parter, Preconditioning and boundary conditions, SIAM J. Numer. Anal., 27 (1990).
[29] G. Mateescu, C. J. Ribbens, and L. T. Watson, A domain decomposition preconditioner for Hermite collocation problems, Numer. Methods Partial Differential Equations, 19 (2003).
[30] P. Oswald, On function spaces related to finite element approximation theory, Zeitschrift für Analysis und ihre Anwendungen, 9 (1990).
[31] P. Oswald, On discrete norm estimates related to multilevel preconditioners in the finite element method, in Proc. Int. Conf. Constructive Theory of Functions, Varna 1991, K. G. Ivanov, P. Petrushev, and B. Sendov, eds., Publ. House Bulgarian Academy of Sciences, Sofia, 1992.
[32] P. Oswald, Multilevel preconditioners for discretizations of the biharmonic equation by rectangular finite elements, Numer. Linear Algebra Appl., 2 (1995).
[33] P. Percell and M. F. Wheeler, A C¹ finite element collocation method for elliptic equations, SIAM J. Numer. Anal., 17 (1980).
[34] J. R. Rice and R. F. Boisvert, Solving Elliptic Problems using ELLPACK, Springer-Verlag, New York.
[35] M. H. Schultz, Spline Analysis, Prentice-Hall, Englewood Cliffs, NJ.
[36] B. F. Smith, P. E. Bjørstad, and W. D. Gropp, Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations, Cambridge University Press, New York.
[37] W. Sun, Block iterative algorithms for solving Hermite bicubic collocation equations, SIAM J. Numer. Anal., 33 (1996).
[38] H. Triebel, Interpolation Theory, Function Spaces, Differential Operators, North-Holland, New York.
[39] X. Zhang, Multilevel Schwarz methods, Numer. Math., 63 (1992).
[40] X. Zhang, Multilevel Schwarz methods for the biharmonic Dirichlet problem, SIAM J. Sci. Comput., 15 (1994).


More information

Local flux mimetic finite difference methods

Local flux mimetic finite difference methods Local flux mimetic finite difference methods Konstantin Lipnikov Mikhail Shashkov Ivan Yotov November 4, 2005 Abstract We develop a local flux mimetic finite difference method for second order elliptic

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

Lab 1: Iterative Methods for Solving Linear Systems

Lab 1: Iterative Methods for Solving Linear Systems Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as

More information

EXISTENCE AND UNIQUENESS FOR A TWO-POINT INTERFACE BOUNDARY VALUE PROBLEM

EXISTENCE AND UNIQUENESS FOR A TWO-POINT INTERFACE BOUNDARY VALUE PROBLEM Electronic Journal of Differential Equations, Vol. 2013 (2013), No. 242, pp. 1 12. ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu EXISTENCE AND

More information

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012.

Math Introduction to Numerical Analysis - Class Notes. Fernando Guevara Vasquez. Version Date: January 17, 2012. Math 5620 - Introduction to Numerical Analysis - Class Notes Fernando Guevara Vasquez Version 1990. Date: January 17, 2012. 3 Contents 1. Disclaimer 4 Chapter 1. Iterative methods for solving linear systems

More information

Optimal multilevel preconditioning of strongly anisotropic problems.part II: non-conforming FEM. p. 1/36

Optimal multilevel preconditioning of strongly anisotropic problems.part II: non-conforming FEM. p. 1/36 Optimal multilevel preconditioning of strongly anisotropic problems. Part II: non-conforming FEM. Svetozar Margenov margenov@parallel.bas.bg Institute for Parallel Processing, Bulgarian Academy of Sciences,

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Spectral element agglomerate AMGe

Spectral element agglomerate AMGe Spectral element agglomerate AMGe T. Chartier 1, R. Falgout 2, V. E. Henson 2, J. E. Jones 4, T. A. Manteuffel 3, S. F. McCormick 3, J. W. Ruge 3, and P. S. Vassilevski 2 1 Department of Mathematics, Davidson

More information

AN ITERATIVE METHOD WITH ERROR ESTIMATORS

AN ITERATIVE METHOD WITH ERROR ESTIMATORS AN ITERATIVE METHOD WITH ERROR ESTIMATORS D. CALVETTI, S. MORIGI, L. REICHEL, AND F. SGALLARI Abstract. Iterative methods for the solution of linear systems of equations produce a sequence of approximate

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Iteration basics Notes for 2016-11-07 An iterative solver for Ax = b is produces a sequence of approximations x (k) x. We always stop after finitely many steps, based on some convergence criterion, e.g.

More information

INTRODUCTION TO FINITE ELEMENT METHODS

INTRODUCTION TO FINITE ELEMENT METHODS INTRODUCTION TO FINITE ELEMENT METHODS LONG CHEN Finite element methods are based on the variational formulation of partial differential equations which only need to compute the gradient of a function.

More information

APPLIED NUMERICAL LINEAR ALGEBRA

APPLIED NUMERICAL LINEAR ALGEBRA APPLIED NUMERICAL LINEAR ALGEBRA James W. Demmel University of California Berkeley, California Society for Industrial and Applied Mathematics Philadelphia Contents Preface 1 Introduction 1 1.1 Basic Notation

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 36, pp. 39-53, 009-010. Copyright 009,. ISSN 1068-9613. ETNA P-REGULAR SPLITTING ITERATIVE METHODS FOR NON-HERMITIAN POSITIVE DEFINITE LINEAR SYSTEMS

More information

Boundary Value Problems and Iterative Methods for Linear Systems

Boundary Value Problems and Iterative Methods for Linear Systems Boundary Value Problems and Iterative Methods for Linear Systems 1. Equilibrium Problems 1.1. Abstract setting We want to find a displacement u V. Here V is a complete vector space with a norm v V. In

More information

arxiv: v2 [math.na] 5 Jan 2015

arxiv: v2 [math.na] 5 Jan 2015 A SADDLE POINT NUMERICAL METHOD FOR HELMHOLT EQUATIONS RUSSELL B. RICHINS arxiv:1301.4435v2 [math.na] 5 Jan 2015 Abstract. In a previous work, the author and D.C. Dobson proposed a numerical method for

More information

Non-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error

Non-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error on-stationary extremal eigenvalue approximations in iterative solutions of linear systems and estimators for relative error Divya Anand Subba and Murugesan Venatapathi* Supercomputer Education and Research

More information

Nodal O(h 4 )-superconvergence of piecewise trilinear FE approximations

Nodal O(h 4 )-superconvergence of piecewise trilinear FE approximations Preprint, Institute of Mathematics, AS CR, Prague. 2007-12-12 INSTITTE of MATHEMATICS Academy of Sciences Czech Republic Nodal O(h 4 )-superconvergence of piecewise trilinear FE approximations Antti Hannukainen

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

9.1 Preconditioned Krylov Subspace Methods

9.1 Preconditioned Krylov Subspace Methods Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete

More information

XIAO-CHUAN CAI AND MAKSYMILIAN DRYJA. strongly elliptic equations discretized by the nite element methods.

XIAO-CHUAN CAI AND MAKSYMILIAN DRYJA. strongly elliptic equations discretized by the nite element methods. Contemporary Mathematics Volume 00, 0000 Domain Decomposition Methods for Monotone Nonlinear Elliptic Problems XIAO-CHUAN CAI AND MAKSYMILIAN DRYJA Abstract. In this paper, we study several overlapping

More information

Kasetsart University Workshop. Multigrid methods: An introduction

Kasetsart University Workshop. Multigrid methods: An introduction Kasetsart University Workshop Multigrid methods: An introduction Dr. Anand Pardhanani Mathematics Department Earlham College Richmond, Indiana USA pardhan@earlham.edu A copy of these slides is available

More information

NUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING

NUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING NUMERICAL COMPUTATION IN SCIENCE AND ENGINEERING C. Pozrikidis University of California, San Diego New York Oxford OXFORD UNIVERSITY PRESS 1998 CONTENTS Preface ix Pseudocode Language Commands xi 1 Numerical

More information

Numerical Solutions to Partial Differential Equations

Numerical Solutions to Partial Differential Equations Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University Nonconformity and the Consistency Error First Strang Lemma Abstract Error Estimate

More information

Parallel Iterative Methods for Sparse Linear Systems. H. Martin Bücker Lehrstuhl für Hochleistungsrechnen

Parallel Iterative Methods for Sparse Linear Systems. H. Martin Bücker Lehrstuhl für Hochleistungsrechnen Parallel Iterative Methods for Sparse Linear Systems Lehrstuhl für Hochleistungsrechnen www.sc.rwth-aachen.de RWTH Aachen Large and Sparse Small and Dense Outline Problem with Direct Methods Iterative

More information

An Error Analysis and the Mesh Independence Principle for a Nonlinear Collocation Problem

An Error Analysis and the Mesh Independence Principle for a Nonlinear Collocation Problem An Error Analysis and te Mes Independence Principle for a Nonlinear Collocation Problem Rakim Aitbayev Department of Matematics, New Mexico Institute of Mining and Tecnology, Socorro, New Mexico 87801

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Comparative Analysis of Mesh Generators and MIC(0) Preconditioning of FEM Elasticity Systems

Comparative Analysis of Mesh Generators and MIC(0) Preconditioning of FEM Elasticity Systems Comparative Analysis of Mesh Generators and MIC(0) Preconditioning of FEM Elasticity Systems Nikola Kosturski and Svetozar Margenov Institute for Parallel Processing, Bulgarian Academy of Sciences Abstract.

More information

A Finite Element Method Using Singular Functions for Poisson Equations: Mixed Boundary Conditions

A Finite Element Method Using Singular Functions for Poisson Equations: Mixed Boundary Conditions A Finite Element Method Using Singular Functions for Poisson Equations: Mixed Boundary Conditions Zhiqiang Cai Seokchan Kim Sangdong Kim Sooryun Kong Abstract In [7], we proposed a new finite element method

More information

ON APPROXIMATION OF LAPLACIAN EIGENPROBLEM OVER A REGULAR HEXAGON WITH ZERO BOUNDARY CONDITIONS 1) 1. Introduction

ON APPROXIMATION OF LAPLACIAN EIGENPROBLEM OVER A REGULAR HEXAGON WITH ZERO BOUNDARY CONDITIONS 1) 1. Introduction Journal of Computational Mathematics, Vol., No., 4, 75 86. ON APPROXIMATION OF LAPLACIAN EIGENPROBLEM OVER A REGULAR HEXAGON WITH ZERO BOUNDARY CONDITIONS ) Jia-chang Sun (Parallel Computing Division,

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis. Volume 11, pp. 1-24, 2000. Copyright 2000,. ISSN 1068-9613. ETNA NEUMANN NEUMANN METHODS FOR VECTOR FIELD PROBLEMS ANDREA TOSELLI Abstract. In this paper,

More information

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method

Key words. preconditioned conjugate gradient method, saddle point problems, optimal control of PDEs, control and state constraints, multigrid method PRECONDITIONED CONJUGATE GRADIENT METHOD FOR OPTIMAL CONTROL PROBLEMS WITH CONTROL AND STATE CONSTRAINTS ROLAND HERZOG AND EKKEHARD SACHS Abstract. Optimality systems and their linearizations arising in

More information

A Nonoverlapping Subdomain Algorithm with Lagrange Multipliers and its Object Oriented Implementation for Interface Problems

A Nonoverlapping Subdomain Algorithm with Lagrange Multipliers and its Object Oriented Implementation for Interface Problems Contemporary Mathematics Volume 8, 998 B 0-88-0988--03030- A Nonoverlapping Subdomain Algorithm with Lagrange Multipliers and its Object Oriented Implementation for Interface Problems Daoqi Yang. Introduction

More information

2.29 Numerical Fluid Mechanics Spring 2015 Lecture 9

2.29 Numerical Fluid Mechanics Spring 2015 Lecture 9 Spring 2015 Lecture 9 REVIEW Lecture 8: Direct Methods for solving (linear) algebraic equations Gauss Elimination LU decomposition/factorization Error Analysis for Linear Systems and Condition Numbers

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Chapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs

Chapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs Chapter Two: Numerical Methods for Elliptic PDEs Finite Difference Methods for Elliptic PDEs.. Finite difference scheme. We consider a simple example u := subject to Dirichlet boundary conditions ( ) u

More information

Numerische Mathematik

Numerische Mathematik Numer. Math. (1996) 75: 59 77 Numerische Mathematik c Springer-Verlag 1996 Electronic Edition A preconditioner for the h-p version of the finite element method in two dimensions Benqi Guo 1, and Weiming

More information

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation

A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation A Domain Decomposition Based Jacobi-Davidson Algorithm for Quantum Dot Simulation Tao Zhao 1, Feng-Nan Hwang 2 and Xiao-Chuan Cai 3 Abstract In this paper, we develop an overlapping domain decomposition

More information

9. Iterative Methods for Large Linear Systems

9. Iterative Methods for Large Linear Systems EE507 - Computational Techniques for EE Jitkomut Songsiri 9. Iterative Methods for Large Linear Systems introduction splitting method Jacobi method Gauss-Seidel method successive overrelaxation (SOR) 9-1

More information

Iterative methods for positive definite linear systems with a complex shift

Iterative methods for positive definite linear systems with a complex shift Iterative methods for positive definite linear systems with a complex shift William McLean, University of New South Wales Vidar Thomée, Chalmers University November 4, 2011 Outline 1. Numerical solution

More information

Universität Stuttgart

Universität Stuttgart Universität Stuttgart Multilevel Additive Schwarz Preconditioner For Nonconforming Mortar Finite Element Methods Masymilian Dryja, Andreas Gantner, Olof B. Widlund, Barbara I. Wohlmuth Berichte aus dem

More information

PIECEWISE LINEAR FINITE ELEMENT METHODS ARE NOT LOCALIZED

PIECEWISE LINEAR FINITE ELEMENT METHODS ARE NOT LOCALIZED PIECEWISE LINEAR FINITE ELEMENT METHODS ARE NOT LOCALIZED ALAN DEMLOW Abstract. Recent results of Schatz show that standard Galerkin finite element methods employing piecewise polynomial elements of degree

More information

A SHORT NOTE COMPARING MULTIGRID AND DOMAIN DECOMPOSITION FOR PROTEIN MODELING EQUATIONS

A SHORT NOTE COMPARING MULTIGRID AND DOMAIN DECOMPOSITION FOR PROTEIN MODELING EQUATIONS A SHORT NOTE COMPARING MULTIGRID AND DOMAIN DECOMPOSITION FOR PROTEIN MODELING EQUATIONS MICHAEL HOLST AND FAISAL SAIED Abstract. We consider multigrid and domain decomposition methods for the numerical

More information