Konrad-Zuse-Zentrum für Informationstechnik Berlin, Heilbronner Str. 10, D-10711 Berlin-Wilmersdorf


Konrad-Zuse-Zentrum für Informationstechnik Berlin, Heilbronner Str. 10, D-10711 Berlin-Wilmersdorf. Folkmar A. Bornemann: On the Convergence of Cascadic Iterations for Elliptic Problems. SC 94-8 (März 1994)


ON THE CONVERGENCE OF CASCADIC ITERATIONS FOR ELLIPTIC PROBLEMS

FOLKMAR A. BORNEMANN*

Abstract. We consider nested iterations in which the multigrid method is replaced by some simple basic iteration procedure, and call them cascadic iterations. They were introduced by Deuflhard, who used the conjugate gradient method as basic iteration (CCG method). He demonstrated by numerical experiments that the CCG method works within a few iterations if the linear systems on coarser triangulations are solved accurately enough. Shaidurov subsequently proved multigrid complexity for the CCG method in the case of $H^2$-regular two-dimensional problems with quasi-uniform triangulations. We show that his result still holds true for a large class of smoothing iterations as basic iteration procedure in the case of two- and three-dimensional $H^{1+\alpha}$-regular problems. Moreover, we show how to use cascadic iterations in adaptive codes and give in particular a new termination criterion for the CCG method.

Key Words. Finite element approximation, cascadic iteration, nested iteration, smoothing iteration, conjugate gradient method, adaptive triangulations

AMS(MOS) subject classification. 65F10, 65N30, 65N55

Introduction. Let $\Omega \subset \mathbf{R}^d$ be a polygonal Lipschitz domain. We consider an elliptic Dirichlet problem on $\Omega$ in the weak formulation:

$u \in H^1_0(\Omega): \qquad a(u,v) = (f,v)_{L^2} \quad \forall v \in H^1_0(\Omega).$

Here $f \in H^{-1}(\Omega)$ and $a(\cdot,\cdot)$ is assumed to be an $H^1_0(\Omega)$-elliptic symmetric bilinear form. The induced energy norm will be denoted by

$\|u\|_a^2 = a(u,u) \qquad \forall u \in H^1_0(\Omega).$

Given a nested family of triangulations $(\mathcal{T}_j)_{j=0,\ldots,\ell}$, the spaces of linear finite elements are

$X_j = \{u \in C(\bar\Omega) : u|_T \in P_1(T)\ \forall T \in \mathcal{T}_j,\ u|_{\partial\Omega} = 0\},$

where $P_1(T)$ denotes the linear functions on the triangle $T$. We have

$X_0 \subset X_1 \subset \ldots \subset X_\ell \subset H^1_0(\Omega).$

The finite element approximations are given by

$u_j \in X_j: \qquad a(u_j, v_j) = (f, v_j)_{L^2} \quad \forall v_j \in X_j.$

*Fachbereich Mathematik, Freie Universität Berlin, Germany, and Konrad-Zuse-Zentrum Berlin, Heilbronner Str. 10, 10711 Berlin, Germany. bornemann@sc.zib-berlin.de

For fine meshes $\mathcal{T}_\ell$ the direct computation of $u_\ell$ is a prohibitively expensive computational task, so one uses iterative methods. With the choice of some basic iterative procedure $I$, the following cascadic iteration makes use of the nested structure of the spaces $X_j$:

(1) (i) $u^*_0 = u_0$;  (ii) $j = 1, \ldots, \ell$: $u^*_j = I_{j,m_j} u^*_{j-1}$.

Here $I_{j,m_j}$ denotes $m_j$ steps of the basic iteration applied on level $j$. This kind of iteration is known in the literature under different names, depending on the choice of the basic iteration and the parameters $m_j$:

Nested iteration: the basic iteration is a multigrid cycle, the parameters $m_j$ are chosen a priori as a small constant number of iterations, cf. Hackbusch [5].

Cascade: the basic iteration is a multilevel preconditioned cg-method, the $m_j$ are chosen a posteriori due to certain termination criteria. This method was named and invented by Deuflhard, Leinen and Yserentant [4].

CCG iteration: the basic iteration is a plain cg-method, the $m_j$ are chosen a posteriori. CCG stands for cascadic conjugate gradient method and was introduced by Deuflhard [3].

We call a cascadic iteration optimal for level $\ell$ if we obtain the accuracy

$\|u_\ell - u^*_\ell\|_a \le c\, \|u - u_\ell\|_a,$

i.e. if the iteration error is comparable to the approximation error, and if we obtain multigrid complexity,

amount of work $= O(n_\ell)$, where $n_\ell = \dim X_\ell$.

The optimality of the nested iteration and of Cascade is well known [5, 4], at least for certain situations. The optimality of the CCG method has been demonstrated by several numerical examples [3]. This has been considered rather astonishing, since only a plain basic iteration is used. Shaidurov [10] has recently shown for $H^2$-regular problems and quasi-uniform triangulations in two dimensions that the CCG method is optimal for a certain choice of the parameters $m_j$. Since he shows, in essence, that the cg-method has some smoothing properties, we were led to consider rather general smoothers as basic iterations.
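The structure of (1) — solve exactly on the coarsest level, then prolong and apply a few smoothing steps on each finer level — can be sketched in a few lines. The following Python code is not from the paper: it treats the 1D model problem $-u'' = 1$ with linear finite elements on nested uniform grids, uses damped Jacobi as the basic iteration, and grows the counts $m_j$ geometrically toward the coarse levels; the choices $\omega = 0.5$, $m = 4$, and growth factor $\beta = 4$ are illustrative assumptions, and the 1D setting merely keeps the sketch short (the complexity statements below concern $d = 2, 3$).

```python
import numpy as np

def fe_system(level):
    """P1 finite elements for -u'' = 1 on (0,1), u(0) = u(1) = 0."""
    n = 2 ** (level + 1) - 1                 # number of interior nodes
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h  # stiffness matrix
    b = h * np.ones(n)                       # load vector for f = 1
    return A, b

def prolong(u_coarse):
    """Linear interpolation to the next (nested, uniformly refined) grid."""
    u = np.zeros(2 * len(u_coarse) + 1)
    u[1::2] = u_coarse                       # coarse nodes are kept
    ext = np.concatenate(([0.0], u_coarse, [0.0]))
    u[0::2] = 0.5 * (ext[:-1] + ext[1:])     # new midpoints: averages
    return u

def jacobi(A, b, u, m, omega=0.5):
    """m steps of damped Jacobi, a model smoother in the sense of (4)."""
    d = np.diag(A)
    for _ in range(m):
        u = u + omega * (b - A @ u) / d
    return u

def cascadic(levels, m=4, beta=4.0):
    """Cascadic iteration (1): exact solve on level 0, then prolong and
    smooth; the counts m_j grow geometrically toward the coarse levels."""
    A, b = fe_system(0)
    u = np.linalg.solve(A, b)
    for j in range(1, levels + 1):
        A, b = fe_system(j)
        m_j = int(np.ceil(m * beta ** (levels - j)))
        u = jacobi(A, b, prolong(u), m_j)
    return u

L = 6
u_star = cascadic(L)                         # cascadic iterate u*_l
A, b = fe_system(L)
u_fe = np.linalg.solve(A, b)                 # discrete solution u_l
e = u_star - u_fe
energy_err = np.sqrt(e @ A @ e)              # iteration error in ||.||_a
```

Because the coarse systems are tiny, the many iterations spent there are cheap; the point of the analysis below is to quantify when such a geometric distribution of work yields an iteration error comparable to the discretization error at near-optimal cost.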
The main result of Section 1 is as follows: For $H^{1+\alpha}$-regular problems with $0 < \alpha \le 1$ and quasi-uniform triangulations it is possible to choose the parameters $m_j$ such that

for $d = 3$: the cascadic iteration is optimal for level $\ell$;

for $d = 2$: the cascadic iteration is accurate and has complexity $O(n_\ell \log n_\ell)$.

This result holds for a large class of smoothing iterations. In Section 2 we show how Shaidurov's result fits into our frame. Finally, in Section 3 we show for adaptive grids how to choose the $m_j$ a posteriori by certain termination criteria. Under some heuristically motivated assumptions on the adaptive grids we are able to show optimality for the cg-method as basic iteration.

Remark. With respect to the CCG method, we decided to call the algorithm (1) cascadic iteration. Since the interesting case here is the use of plain basic iterations, we had to choose a name different from nested iteration.

1. General smoothers as basic iteration. In this and the following section we consider quasi-uniform triangulations with meshsize parameter

$c^{-1}\, 2^{-j} \le h_j = \max_{T \in \mathcal{T}_j} \operatorname{diam} T \le c\, 2^{-j}.$

In the following we use the symbol $c$ for any positive constant which depends only on the bilinear form $a(\cdot,\cdot)$, on $\Omega$, and on the shape regularity as well as the quasi-uniformity of the triangulations. All other dependencies will be explicitly indicated. A general assumption on the elliptic problem will be $H^{1+\alpha}$-regularity for some $0 < \alpha \le 1$, i.e.,

$\|u\|_{H^{1+\alpha}} \le c\, \|f\|_{H^{\alpha-1}} \qquad \forall f \in H^{\alpha-1}(\Omega).$

The approximation error of the finite element method is then given in the energy norm as

(2) $\|u - u_j\|_a \le c\, h_j^\alpha\, \|f\|_{H^{\alpha-1}}, \qquad j = 0, \ldots, \ell,$

cf. [6, Lemma 8.4.9]. By the well-known Aubin-Nitsche lemma and an interpolation argument one gets the approximation property

(3) $\|u_j - u_{j-1}\|_{H^{1-\alpha}} \le c\, h_j^\alpha\, \|u_j - u_{j-1}\|_a, \qquad j = 1, \ldots, \ell,$

cf. [6, Theorem 8.4.14]. We consider the following type of basic iterations for the finite element problem on level $j$, started with $u^0_j \in X_j$:

$u_j - I_{j,m} u^0_j = S_{j,m}(u_j - u^0_j)$

with a linear mapping $S_{j,m}: X_j \to X_j$ for the error propagation. We call the basic iteration a smoother if it obeys the smoothing property

(4) (i) $\|S_{j,m} v_j\|_a \le c\, \dfrac{h_j^{-1}}{m^\gamma}\, \|v_j\|_{L^2}$ for all $v_j \in X_j$;  (ii) $\|S_{j,m} v_j\|_a \le \|v_j\|_a,$

with a parameter $0 < \gamma \le 1$. As is shown in [5], the symmetric Gauss-Seidel, the SSOR, and the damped Jacobi iteration are smoothers in the sense of (4) with parameter $\gamma = 1/2$.

Lemma 1.1. A smoother in the sense of (4) fulfills

$\|S_{j,m} v_j\|_a \le c\, \dfrac{h_j^{-\alpha}}{m^{\alpha\gamma}}\, \|v_j\|_{H^{1-\alpha}} \qquad \forall v_j \in X_j.$

Proof. This can be shown by a usual interpolation argument, using discrete Sobolev norms like those introduced in [1] and their equivalence to the fractional-order Sobolev norms.

We are now able to state and prove the main convergence estimate for the cascadic iteration (1).

Theorem 1.2. The error of the cascadic iteration with a smoother as basic iteration can be estimated by

$\|u_\ell - u^*_\ell\|_a \le c \sum_{j=1}^{\ell} \frac{1}{m_j^{\alpha\gamma}}\, \|u_j - u_{j-1}\|_a \le c \sum_{j=1}^{\ell} \frac{h_j^\alpha}{m_j^{\alpha\gamma}}\, \|f\|_{H^{\alpha-1}}.$

Proof. For $j = 1, \ldots, \ell$ we get, by the linearity of $S_{j,m_j}$,

$\|u_j - u^*_j\|_a = \|S_{j,m_j}(u_j - u^*_{j-1})\|_a \le \|S_{j,m_j}(u_j - u_{j-1})\|_a + \|S_{j,m_j}(u_{j-1} - u^*_{j-1})\|_a.$

The first term can be estimated by Lemma 1.1 and the approximation property (3):

$\|S_{j,m_j}(u_j - u_{j-1})\|_a \le c\, \frac{h_j^{-\alpha}}{m_j^{\alpha\gamma}}\, \|u_j - u_{j-1}\|_{H^{1-\alpha}} \le \frac{c}{m_j^{\alpha\gamma}}\, \|u_j - u_{j-1}\|_a.$

If we estimate the second term by property (4)(ii) of a smoother, we get

$\|u_j - u^*_j\|_a \le \frac{c}{m_j^{\alpha\gamma}}\, \|u_j - u_{j-1}\|_a + \|u_{j-1} - u^*_{j-1}\|_a.$

Using $u^*_0 = u_0$ we get by induction

$\|u_\ell - u^*_\ell\|_a \le c \sum_{j=1}^{\ell} \frac{1}{m_j^{\alpha\gamma}}\, \|u_j - u_{j-1}\|_a.$

Galerkin orthogonality gives

$\|u_j - u_{j-1}\|_a \le \|u - u_{j-1}\|_a,$

so that the error estimate (2) yields the second assertion.

We choose $m_j$ as the smallest integer such that

(5) $m_j \ge m\, 2^{(d\gamma+1)(\ell-j)/(2\gamma)}.$

The integer $m = m_\ell$ is therefore the number of iterations on level $\ell$. With this choice the cascadic iteration can be shown to be optimal.

Lemma 1.3. Let the number of iterations on level $j$ be given by (5). The cascadic iteration yields the error

$\|u_\ell - u^*_\ell\|_a \le c(\gamma)\, \frac{h_\ell^\alpha}{m^{\alpha\gamma}}\, \|f\|_{H^{\alpha-1}}$

if $\gamma > 1/d$, and

$\|u_\ell - u^*_\ell\|_a \le c\, \frac{h_\ell^\alpha}{m^{\alpha/d}}\, (1 + |\log h_\ell|)\, \|f\|_{H^{\alpha-1}}$

if $\gamma = 1/d$.

Proof. By $2^{-j}/c \le h_j \le c\, 2^{-j}$ we get

$\sum_{j=1}^{\ell} \frac{h_j^\alpha}{m_j^{\alpha\gamma}} \le \frac{c}{m^{\alpha\gamma}}\, 2^{-\alpha(d\gamma+1)\ell/2} \sum_{j=1}^{\ell} 2^{\alpha(d\gamma-1)j/2}.$

If $\gamma > 1/d$ this is a geometric sum, which can be estimated by

$\sum_{j=1}^{\ell} \frac{h_j^\alpha}{m_j^{\alpha\gamma}} \le \frac{c(\gamma)}{m^{\alpha\gamma}}\, 2^{-\alpha(d\gamma+1)\ell/2}\, 2^{\alpha(d\gamma-1)\ell/2} = \frac{c(\gamma)}{m^{\alpha\gamma}}\, 2^{-\alpha\ell} \le \frac{c(\gamma)}{m^{\alpha\gamma}}\, h_\ell^\alpha.$

In the case $\gamma = 1/d$ the sum is equal to $\ell$, such that

$\sum_{j=1}^{\ell} \frac{h_j^\alpha}{m_j^{\alpha/d}} \le \frac{c}{m^{\alpha/d}}\, 2^{-\alpha\ell}\, \ell \le \frac{c}{m^{\alpha/d}}\, h_\ell^\alpha\, (1 + |\log h_\ell|).$

Theorem 1.2 yields the assertion.

The complexity of the method is given by the following

Lemma 1.4. Let the number of iterations on level $j$ be given by (5). If $\gamma > 1/d$ we get

$\sum_{j=1}^{\ell} m_j n_j \le c(\gamma)\, m\, n_\ell;$

if $\gamma = 1/d$,

$\sum_{j=1}^{\ell} m_j n_j \le c\, m\, n_\ell\, (1 + \log n_\ell).$

Proof. We have

$2^{dj}/c \le n_j = \dim X_j \le c\, 2^{dj}.$

Therefore we get

$\sum_{j=1}^{\ell} m_j n_j \le c\, m\, 2^{(d\gamma+1)\ell/(2\gamma)} \sum_{j=1}^{\ell} 2^{(d\gamma-1)j/(2\gamma)}.$

If $\gamma > 1/d$ this is a geometric sum, which can be estimated by

$\sum_{j=1}^{\ell} m_j n_j \le c(\gamma)\, m\, 2^{(d\gamma+1)\ell/(2\gamma)}\, 2^{(d\gamma-1)\ell/(2\gamma)} = c(\gamma)\, m\, 2^{d\ell} \le c(\gamma)\, m\, n_\ell.$

In the case $\gamma = 1/d$ the sum is equal to $\ell$, such that

$\sum_{j=1}^{\ell} m_j n_j \le c\, m\, 2^{d\ell}\, \ell \le c\, m\, n_\ell\, (1 + \log n_\ell).$

In order that also in the case $\gamma = 1/d$ the cascadic iteration has an iteration error like the discretization error (2), we choose a special number of final iterations.

Lemma 1.5. Let $\gamma = 1/d$. If we choose the number of iterations $m$ on level $\ell$ as the smallest integer with

$m \ge (1 + |\log h_\ell|)^{d/\alpha},$

we get for the error of the cascadic iteration

$\|u_\ell - u^*_\ell\|_a \le c\, h_\ell^\alpha\, \|f\|_{H^{\alpha-1}},$

and as complexity

$\sum_{j=1}^{\ell} m_j n_j \le c\, n_\ell\, (1 + \log n_\ell)^{1 + d/\alpha}.$

Proof. By observing that $c_0\, \ell \le 1 + |\log h_\ell| \le c_1\, (1 + \log n_\ell) \le c_2\, \ell$, the assertion is clear by Lemma 1.3 and Lemma 1.4.

Our results show that the cascadic iteration with a plain Gauss-Seidel, SSOR, or damped Jacobi iteration (all with $\gamma = 1/2$) as basic iteration is optimal for $d = 3$, and accurate with complexity $O(n_\ell \log n_\ell)$ for $d = 2$.

2. Conjugate gradient method as basic iteration. When using the conjugate gradient method as the basic iteration we have to tackle a problem: the result $I_{j,m} u^0_j$ of $m$ steps of the cg-method is not linear in the starting value $u^0_j$. Thus it seems that our frame up to now does not cover the cg-method. However, there is a remedy, which uses results on the cg-method well known in the Russian literature [7, 9].

We have to fix some notation. Let $\langle\cdot,\cdot\rangle$ be the Euclidean scalar product of the nodal basis in the finite element space $X_j$; the induced norm will be denoted by

$|v_j|^2 = \langle v_j, v_j\rangle \qquad \forall v_j \in X_j.$

We define the linear operator $A_j: X_j \to X_j$ by

$\langle A_j v_j, w_j\rangle = a(v_j, w_j) \qquad \forall v_j, w_j \in X_j,$

which is represented in the nodal basis by the usual stiffness matrix. The error of the cg-method applied to the stiffness matrix can be expressed by

$\|u_j - I_{j,m} u^0_j\|_a = \min_{p \in P_m,\ p(0)=1} \|p(A_j)(u_j - u^0_j)\|_a.$

Here $P_m$ denotes the set of polynomials $p$ with $\deg p \le m$.
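This minimization characterization of the cg error is easy to probe numerically: for any polynomial $p \in P_m$ with $p(0) = 1$, the energy error after $m$ cg steps must not exceed $\|p(A)(u - u^0)\|_a$. The following Python sketch (not from the paper; the matrix, sizes, and root choices are arbitrary test assumptions) checks this majorization for random polynomials in the factored form $p(x) = \prod_k (1 - x/\mu_k)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def cg(A, b, x, m):
    """m steps of the plain conjugate gradient method."""
    r = b - A @ x
    p = r.copy()
    for _ in range(m):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return x

def energy_norm(A, e):
    """Energy norm ||e||_a = sqrt(e^T A e)."""
    return np.sqrt(e @ A @ e)

# Random SPD test matrix with spectrum in [0.1, 10] (an assumption).
n, m = 50, 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(rng.uniform(0.1, 10.0, n)) @ Q.T
x_true = rng.standard_normal(n)
b = A @ x_true

e0 = x_true - np.zeros(n)                 # error of the start x0 = 0
err_cg = energy_norm(A, x_true - cg(A, b, np.zeros(n), m))

# Energy error of the linear iteration e -> p(A) e for polynomials
# p(x) = prod_k (1 - x/mu_k), which satisfy p(0) = 1 by construction.
poly_errs = []
for _ in range(20):
    e = e0.copy()
    for mu in rng.uniform(0.1, 10.0, m):  # hypothetical root choices
        e = e - (A @ e) / mu              # multiply by (I - A/mu)
    poly_errs.append(energy_norm(A, e))
```

Since the minimizing polynomial depends on the starting value, the remedy developed next is to fix one particular polynomial of Chebyshev type whose induced linear operator majorizes the cg error and is itself a smoother.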

The idea is now to find some polynomial $q_{j,m} \in P_m$ with $q_{j,m}(0) = 1$ such that $S_{j,m} = q_{j,m}(A_j)$ defines a smoother in the sense of (4). Since the error in energy of the cg-method is then majorized by this linear smoother, the results of Section 1 are immediately valid for the cg-method. The choice of $q_{j,m}$ depends on the following solution of a polynomial minimization problem.

Lemma 2.1. Let $\lambda > 0$. The Chebyshev polynomial $T_{2m+1}$ has the representation

$T_{2m+1}(x) = (-1)^m\, (2m+1)\, x\, p_{m,\lambda}(\lambda x^2)$

with a unique $p_{m,\lambda} \in P_m$ and $p_{m,\lambda}(0) = 1$. The polynomial $p_{m,\lambda}$ solves the minimization problem

$\max_{x \in [0,\lambda]} \sqrt{x}\, |p(x)| = \min!$

over all polynomials $p \in P_m$ which are normalized by $p(0) = 1$. The minimal value is given by

$\max_{x \in [0,\lambda]} \sqrt{x}\, |p_{m,\lambda}(x)| = \frac{\sqrt{\lambda}}{2m+1}.$

Moreover, we have

$\max_{x \in [0,\lambda]} |p_{m,\lambda}(x)| = 1.$

A proof may be found in the book of Shaidurov [9]. However, Shaidurov represents $p_{m,\lambda}$ by the expression

$p_{m,\lambda}(x) = \prod_{k=1}^{m}\left(1 - \frac{x}{\lambda_k}\right), \qquad \lambda_k = \lambda\, \cos^2\!\big((2k-1)\pi/(2(2m+1))\big).$

As a fairly easy consequence, Shaidurov [10] proves the following

Theorem 2.2. The linear operator

$S_{j,m} = p_{m,\lambda_j}(A_j), \qquad \lambda_j = \lambda_{\max}(A_j),$

satisfies

(i) $\|S_{j,m} v_j\|_a \le \dfrac{\sqrt{\lambda_j}}{2m+1}\, |v_j|$ for all $v_j \in X_j$;  (ii) $\|S_{j,m} v_j\|_a \le \|v_j\|_a.$

A little finite element theory shows that we have found a majorizing smoother for the cg-method.

Corollary 2.3. The linear operator

$S_{j,m} = p_{m,\lambda_j}(A_j), \qquad \lambda_j = \lambda_{\max}(A_j),$

defines a smoother in the sense of (4) with parameter $\gamma = 1$.

Proof. The usual inverse inequality shows that the maximum eigenvalue of the stiffness matrix can be estimated by

$\lambda_j \le c\, h_j^{d-2},$

cf. [6, Theorem 8.8.6]. The Euclidean norm with respect to the nodal basis is related to the $L^2$-norm by

$\frac{1}{c}\, h_j^d\, |v_j|^2 \le \|v_j\|^2_{L^2} \le c\, h_j^d\, |v_j|^2,$

cf. [6, Theorem 8.8.1]. Thus Theorem 2.2 gives

$\|S_{j,m} v_j\|_a \le c\, \frac{h_j^{(d-2)/2}}{2m+1}\, |v_j| \le c\, \frac{h_j^{-1}}{2m+1}\, \|v_j\|_{L^2},$

i.e., (4)(i) with $\gamma = 1$. Property (4)(ii) was already stated in Theorem 2.2.

With the help of this majorizing smoother it is immediately clear that Theorem 1.2, Lemma 1.3, Lemma 1.4, and Lemma 1.5 remain valid for the cascadic iteration with the cg-method as basic iteration, in short the CCG method of Deuflhard [3]. In particular, the CCG method is optimal for $d = 2, 3$.

3. Adaptive control of the CCG method. In this section we develop an adaptive control of the CCG method which is based on our theoretical considerations and some additional assumptions on the family of triangulations.

For adaptive triangulations we drop the assumption of quasi-uniformity. All constants in the sequel will not depend on the quasi-uniformity, but only on the shape regularity of the triangulations. In order to bound the maximum eigenvalue of the involved matrix, we compute the stiffness matrix $A_j$ with respect to the scaled nodal basis

$h_{j,i}^{(2-d)/2}\, \varphi_{j,i}, \qquad i = 1, \ldots, n_j,$

where $\{\varphi_{j,i}\}_i$ is the usual nodal basis of $X_j$ and $h_{j,i} = \operatorname{diam} \operatorname{supp} \varphi_{j,i}$. In this section we denote by $\langle\cdot,\cdot\rangle$ the Euclidean scalar product with respect to this scaled nodal basis. Xu [11] has shown the equivalence of norms

$\frac{1}{c}\, |v_j|^2 \le \sum_{T \in \mathcal{T}_j} h_T^{-2}\, \|v_j\|^2_{L^2(T)} \le c\, |v_j|^2, \qquad h_T = \operatorname{diam} T,$

and the bound

$\lambda_j = \lambda_{\max}(A_j) \le c$

for the maximum eigenvalue of $A_j$. Hence we get for the majorizing smoothing iteration of the conjugate gradient method, as defined in Section 2,

$\|S_{j,m} v_j\|_a \le \frac{c}{2m+1} \left( \sum_{T \in \mathcal{T}_j} h_T^{-2}\, \|v_j\|^2_{L^2(T)} \right)^{1/2}.$

In order to turn this into a starting point for a theorem like Theorem 1.2, we make the following two assumptions on the family of triangulations:

(6) (i) $h_T^{-2}\, \|u_j - u_{j-1}\|^2_{L^2(T)} \le c\, \|u_j - u_{j-1}\|^2_{H^1(T)} \quad \forall T \in \mathcal{T}_j$;  (ii) $\|u - u_j\|_a \le c\, n_j^{-1/d}\, \|f\|_{L^2}.$

This is heuristically justified for adaptive triangulations. The first assumption (i) means that the finite element correction is locally of high frequency with respect to the finer triangulation. In other words, the refinement resolves changes, but not more. Thus it is a statement of the efficiency of a triangulation. Note that quasi-uniform triangulations do not accomplish assumption (ii) for problems which are not $H^2$-regular. The second assumption is a statement of optimal accuracy, which is justified by results of nonlinear approximation theory like [8]. The same proof as for Theorem 1.2 gives the following

Lemma 3.1. Assumption (6) implies for the error of the cascadic conjugate gradient iteration

$\|u_\ell - u^*_\ell\|_a \le c \sum_{j=1}^{\ell} \frac{1}{m_j}\, \|u_j - u_{j-1}\|_a \le c \sum_{j=1}^{\ell} \frac{1}{m_j\, n_j^{1/d}}\, \|f\|_{L^2}.$

We can now extend Lemma 1.3 and Lemma 1.4 to the adaptive case. Here we have to use additionally that the sequence of numbers of unknowns belongs to some kind of geometric progression:

$1 < \kappa_0 \le \frac{n_{j+1}}{n_j} \le \kappa_1, \qquad j = 0, 1, \ldots$

With the choice of the iteration numbers $m_j$ as smallest integers for which

(7) $m_j \ge m \left(\frac{n_\ell}{n_j}\right)^{(d+1)/(2d)},$

we get for $d > 1$ under assumption (6) the final error

$\|u_\ell - u^*_\ell\|_a \le \frac{c}{m}\, n_\ell^{-1/d}\, \|f\|_{L^2}$

and the complexity

$\sum_{j=1}^{\ell} m_j n_j \le c\, m\, n_\ell.$

However, in an adaptive algorithm we do not know at level $j$ the number $n_\ell$ of nodal points at the final level. So far our iteration is not implementable. But with slight changes we can still operate with it. We define the final level $\ell$ as the one for which the approximation error is below some user-given tolerance TOL. Hence assumption (6) gives us the relation

$\frac{\|u - u_j\|_a}{\mathrm{TOL}} \approx \left(\frac{n_\ell}{n_j}\right)^{1/d},$

which leads us to replace (7) by the smallest integer $m_j$ with

(8) $m_j \ge m \left(\frac{\|u - u_j\|_a}{\mathrm{TOL}}\right)^{(d+1)/2}.$

The actual approximation error $\|u - u_j\|_a$ is also not known, so we replace this expression by

(9) $\|u - u_j\|_a \approx \epsilon_{j-1} \left(\frac{n_{j-1}}{n_j}\right)^{1/d}$

for some estimate $\epsilon_{j-1}$ of the previous approximation error $\|u - u_{j-1}\|_a$, cf. [2]. This algorithm is nearest to the a priori choice of the parameters $m_j$. In practice, the basic iteration can be accurate enough much earlier than stated in theory. The crucial relation we used for the algorithm was

$\|u_j - u^*_j\|_a \le \frac{c}{m_j}\, \|u_j - u_{j-1}\|_a + \|u_{j-1} - u^*_{j-1}\|_a,$

so we turn it into a termination criterion for the basic iteration by using (8). We terminate the iteration when

(10) $\|u_j - u^*_j\|_a \le \theta \left[ \left(\frac{\mathrm{TOL}}{\|u - u_j\|_a}\right)^{(d+1)/2} \|u - u_j\|_a + \|u_{j-1} - u^*_{j-1}\|_a \right].$

Here $0 < \theta < 1$ is some safety factor. Of course, we replace $\|u - u_j\|_a$ by the estimate (9), and $\|u_j - u^*_j\|_a$, $\|u_{j-1} - u^*_{j-1}\|_a$ by some estimate of the iteration error. Despite the fact that our termination criterion is different from the original one of Deuflhard [3], we get comparable numerical results for the CCG method. They share the essential feature that the iteration has to be more accurate on coarser triangulations. However, the basis for (10) seems to be a sound combination of theory and heuristics.

Example. We applied the adaptive CCG method with $\mathrm{TOL} = 10^{-4}$ to the elliptic problem

$-\Delta u = 0, \qquad u|_{\Gamma_1} = 1, \quad u|_{\Gamma_2} = 0, \quad \partial_n u|_{\Gamma_3} = 0$

on a domain $\Omega$ which is a unit square with a slit $\Sigma$,

$\Sigma = \{x \in \mathbf{R}^2 : x_1 \le 1\} \cap \{x \in \mathbf{R}^2 : |x_2| \le 0.03\, x_1\}.$

The boundary pieces are

$\Gamma_1 = \{x \in \partial\Omega : x_1 = 1,\ x_2 \ge 0.03\}, \qquad \Gamma_2 = \{x \in \partial\Omega : x_1 = 1,\ x_2 \le -0.03\},$

and $\Gamma_3 = \partial\Omega \setminus (\Gamma_1 \cup \Gamma_2)$. Figure 1 shows a typical triangulation and isolines of the solution. In Figure 2 we have drawn the number of iterations $m_j$ which were used for the corresponding number of unknowns $n_j$. We compared three different implementations:

CCG1: the CCG method with termination criterion (10).

Fig. 1. Typical adaptive triangulation with isolines of the solution

CCG2: the CCG method with Deuflhard's termination criterion [3].

CSGS: an adaptive cascadic iteration using the symmetric Gauss-Seidel iteration as basic iteration. We implemented the termination criterion (10) in this case also.

We observe that the CCG1 and CCG2 implementations are comparable; CCG1 needs one adaptive step less in order to achieve the tolerance TOL. The performance of the CSGS implementation is a little bit smoother and gives completely satisfactory results. In fact, if one looks at the real CPU time needed for the achieved accuracy $\|u - u^*_j\|_a$, one gets nearly indistinguishable curves for all three implementations!

Acknowledgement. The author thanks Rudi Beck for prompt computational assistance.

REFERENCES

[1] R. E. Bank and T. F. Dupont. An optimal order process for solving elliptic finite element equations. Math. Comp. 36, 967-975 (1981).

[2] F. A. Bornemann, B. Erdmann, and R. Kornhuber. A posteriori error estimates for elliptic problems in two and three space dimensions. SIAM J. Numer. Anal. (submitted).

[3] P. Deuflhard. Cascadic conjugate gradient methods for elliptic partial differential equations. Algorithm and numerical results. In D. Keyes and J. Xu, editors,

Fig. 2. Number of iterations vs. number of unknowns

"Proceedings of the 7th International Conference on Domain Decomposition Methods 1993". AMS, Providence (1994).

[4] P. Deuflhard, P. Leinen, and H. Yserentant. Concepts of an adaptive hierarchical finite element code. IMPACT Comput. Sci. Engrg. 1, 3-35 (1989).

[5] W. Hackbusch. "Multi-Grid Methods and Applications". Springer-Verlag, Berlin, Heidelberg, New York (1985).

[6] W. Hackbusch. "Theory and Numerical Treatment of Elliptic Differential Equations". Springer-Verlag, Berlin, Heidelberg, New York (1992).

[7] V. P. Il'in. Some estimates for conjugate gradient methods. USSR Comput. Math. and Math. Phys. 16, 22-30 (1976).

[8] P. Oswald. On function spaces related to finite element approximation theory. Z. Anal. Anwend. 9, 43-64 (1990).

[9] V. V. Shaidurov. "Multigrid Methods for Finite Elements". Kluwer Academic Publishers, Dordrecht, Boston, London (1994).

[10] V. V. Shaidurov. "Some estimates of the rate of convergence for the cascadic conjugate-gradient method". Otto-von-Guericke-Universität, Magdeburg (1994). Preprint.

[11] J. Xu. "Theory of Multilevel Methods". Department of Mathematics, Pennsylvania State University, University Park (1989). Report No. AM 48.