
Technische Universität Chemnitz-Zwickau
DFG-Forschergruppe "SPC"
Fakultät für Mathematik

Sergej A. Ivanov and Vadim G. Korneev

On the preconditioning in the domain decomposition technique for the p-version finite element method. Part I

Abstract. The p-version finite element method for the second order elliptic equation in an arbitrary sufficiently smooth domain is studied in the framework of the DD (domain decomposition) method. Two types of square reference elements are used, with products of the integrated Legendre polynomials as coordinate functions. We consider estimates for the condition numbers, the preconditioning of the problems arising on subdomains and of the Schur complement, and the derivation of the DD preconditioner. As the result we obtain a DD preconditioner to which corresponds a generalized condition number of order $O(\log p)$. The paper consists of two parts. Part I contains some preliminary results for the 1D case, condition number estimates, and some inequalities for the 2D reference element.

The work was supported by a grant from the International Science Foundation, by a grant under the program "Universities of Russia", and by the German Academic Exchange Service (DAAD).

Preprint-Reihe der Chemnitzer DFG-Forschergruppe "Scientific Parallel Computing", SPC 95 35, December 1995

Authors' addresses:
Dr. Sergej A. Ivanov / Prof. Dr. Vadim G. Korneev
St. Petersburg State University
Bibliotechnaya sq., St. Petersburg, 198904, Russia
e-mail: isa@niimm.spb.su, korn@niimm.spb.su

Introduction

The p-version finite element method for second order elliptic equations has recently been investigated intensively, both theoretically and by means of numerical experiments, with a view to developing more efficient solution procedures. We may refer to the works of I. Babuska et al. [1] and L. F. Pavarino [2], in which DD methods for the p-version with nonoverlapping and with overlapping subdomains were analyzed in this respect. Results of series of numerical experiments and applications may be found, for instance, in J. Mandel [3], G. F. Carey and E. Barragy [4], P. F. Fisher [5], C. H. Amon [6] and others. However, the answers to many questions are still unclear. They concern, in particular, the conditioning of the systems of algebraic equations for the whole problem and for the problems arising on subdomains and, in this connection, the choice of the basis for the reference element and the methods of solution of the problems on the subdomains. It is worth noting that numerical experiments (see, for instance, [1, 3]) showed that the most time consuming operation, even in the 2D case, turned out to be the solution of the systems of algebraic equations on the subdomains. The Schur complement preconditioning in DD methods without overlapping, and the construction of cheap prolongation operators in different finite element spaces from the interface boundary onto the whole domain, also demand further study.

In this paper we analyze some of the mentioned problems for the p-version applied to the 2D second order elliptic partial differential equation in an arbitrary sufficiently smooth domain. It is assumed that one of two reference elements, or both, are used for defining the finite element space. The basis of one is $\{\hat L_{i,j}(x) = \hat L_i(x_1)\hat L_j(x_2),\ 0 \le i,j \le p\}$, where $\hat L_0$, $\hat L_1$ are the usual "nodal" linear functions and $\hat L_k = \beta_k \tilde L_k$ for $k \ge 2$, with $\beta_k$ a normalizing multiplier and $\tilde L_k$ the integral of the Legendre polynomial $L_{k-1}$ of degree $k-1$. The basis of the other consists of the $\hat L_{i,j}(x)$ with $2 \le i,j$, $i+j \le p$, with $i = 0,1$, $j = 2,3,\dots,p$, with $i = 2,3,\dots,p$, $j = 0,1$, and with $i,j = 0,1$. A special partition of the domain into quadrangles, which are curvilinear near the boundary, and special mappings of the reference element onto these quadrangles allow us to obtain a finite element method in which the boundary and the first (homogeneous Dirichlet) boundary condition are taken into account exactly. The mappings are chosen to satisfy conditions which we call the generalized conditions of quasiuniformity and which are sufficient to guarantee the same orders of convergence and condition numbers as in the case of a uniform square mesh of finite elements. In the same manner the finite element p-version with piecewise polynomial approximation of the boundary may be obtained and considered from the point of view of its numerical solution and convergence.

The condition numbers of the stiffness and mass matrices of the reference elements are estimated in their order with respect to p. To obtain the first we assume that the vertices of the element are fixed. These estimates lead to estimates of the condition numbers of the global finite element stiffness and mass matrices in terms of the mesh size h, the degree p and log p. Also some energy equivalence inequalities useful for the construction of DD preconditioners are proved. If the bilinear form corresponds to the Laplace operator, then the stiffness matrix of the reference element, which is denoted by $A_1$, has a rather simple form. In particular, in the rows corresponding to $\hat L_{i,j}$, $2 \le i,j \le p$, it contains only five nonzero coefficients.
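The two families of shape-function indices just described can be made concrete by a short enumeration. The following minimal Python sketch (not contained in the preprint; the helper names are ad hoc) lists the index set of the tensor-product element and of the reduced element and compares their dimensions.

def tensor_indices(p):
    # full tensor-product basis: L^_i(x1) L^_j(x2), 0 <= i, j <= p
    return [(i, j) for i in range(p + 1) for j in range(p + 1)]

def reduced_indices(p):
    # internal functions with 2 <= i, j and i + j <= p,
    # side functions (one index in {0,1}, the other 2..p),
    # and the four vertex functions (i, j in {0,1})
    internal = [(i, j) for i in range(2, p + 1) for j in range(2, p + 1) if i + j <= p]
    sides = [(i, j) for i in (0, 1) for j in range(2, p + 1)] \
          + [(i, j) for i in range(2, p + 1) for j in (0, 1)]
    vertices = [(i, j) for i in (0, 1) for j in (0, 1)]
    return internal + sides + vertices

for p in (4, 8, 16):
    print(p, len(tensor_indices(p)), len(reduced_indices(p)))
    # (p+1)^2 functions for the tensor-product element versus
    # roughly p^2/2 + 4p for the reduced one

For p = 8, for example, the counts are 81 and 47.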
The conditions of generalized quasiuniformity and the simplicity of the matrix $A_1$ allow us to conclude that it can serve as a good preconditioner for the stiffness matrices of general curvilinear finite elements in the case of more general elliptic equations. Considering $A_1$ as the stiffness matrix of each element, we assemble a positive definite matrix $\Lambda_{p,h}$, which is a

good preconditioner for the global stiffness matrix, providing a generalized condition number $O(1)$. It turns out that the nested dissection method of A. George [7] is applicable for solving the system $\Lambda_{p,h} x = y$ with a computational cost of $O(N^{1.5})$, where $N$ is the dimension of $\Lambda_{p,h}$. More efficient solution techniques demand further study.

In DD solvers for the p-version, the domains of single elements or of clusters of elements are usually taken as the subdomains of the decomposition. For simplicity we assume here the former. The matrix $A_1$, as has been noted, is a good preconditioner for the element stiffness matrices and is much simpler than $\Lambda_{p,h}$, since it corresponds to the square. Thus, the solution of the Dirichlet problems on the subdomains will not be difficult. For the Schur complement preconditioning we suggest a matrix which provides a generalized condition number $O(\log p)$. It is represented as the product of a diagonal and a triangular matrix and is therefore convenient for use in computations. The resulting DD p-version conjugate gradient method demands $O(\log p \cdot \log \varepsilon^{-1})$ iterations, $\varepsilon$ being the required accuracy. In the general situation the main time consuming operation will be the multiplication by the global stiffness matrix, which has $O(R p^4)$ nonzero entries, $R$ being the number of elements. But, for instance, in the case of the Laplace equation the number of nonzero entries is $O(R p^2)$.

The article is organized as follows. In Sec. 1 estimates are given for the 1D case, which develop the study in [8]. The estimates for the condition numbers of the stiffness and mass matrices are derived in Sec. 2. In Sec. 3 the Schur complement preconditioner is obtained for the 2D reference element and the related generalized condition number is estimated. The finite element method for the second order elliptic first boundary value problem in an arbitrary sufficiently smooth domain is described in Sec. 4, where estimates of the condition numbers of the global finite element matrices are also given. The DD preconditioner is constructed in Sec. 5. Part I contains Sec. 1 and Sec. 2.

Let us describe some notations used in both parts of the paper. $I := (-1,1)$, $I^\star := (0,\pi)$, $\Box := I \times I$; the notation $I$ with subscripts is also used for identity matrices. $\mathcal P_{p,x}$, $\mathcal P^{(p)}_x$ and $\mathcal P^{[p]}_x$ are spaces of polynomials: $\mathcal P_{p,x}$ of total degree not higher than $p$, $\mathcal P^{(p)}_x$ of degree not higher than $p$ in each variable, and $\mathcal P^{[p]}_x$ the space containing $\mathcal P_{p,x}$ together with the polynomials of first degree in one variable and of degree $p$ in the other. $\hat E$, $\hat E'$ are the reference elements, $E_r$ is some finite element, and $H(\hat E)$, $H(\hat E')$, $H(E_r)$ are the spaces generated by the corresponding elements.
$$D^q_x v := \partial^{|q|} v/\partial x_1^{q_1}\partial x_2^{q_2}, \qquad q = (q_1,q_2),\quad q_1,q_2 \ge 0,\quad |q| = q_1 + q_2.$$
$(\cdot,\cdot)_\Omega$ and $\|\cdot\| = \|\cdot\|_{0,\Omega}$ are the scalar product and the norm in $L_2(\Omega)$; $|\cdot|_{k,\Omega}$, $\|\cdot\|_{k,\Omega}$ are the seminorm and the norm in the Sobolev space $W^k_2(\Omega)$, i.e.
$$|v|^2_{k,\Omega} = \sum_{|q|=k}\int_\Omega (D^q_x v)^2\,dx, \qquad \|v\|^2_{k,\Omega} = \|v\|^2_{0,\Omega} + \sum_{l=1}^{k}|v|^2_{l,\Omega}.$$
$\overset{\circ}{W}{}^1_2(\Omega)$ is the subspace of $W^1_2(\Omega)$ of functions having zero traces on $\partial\Omega$. $\|\cdot\|_{1/2,I}$ and $\|\cdot\|_{1/2,I,0}$ are the norms in the space $W^{1/2}_2(I)$ and in its subspace of functions having zero values at the endpoints of $I$. For $I = (a,b)$ these norms are defined by the expressions

$$\|v\|^2_{1/2,I} := \int_a^b\!\!\int_a^b \frac{(v(x)-v(y))^2}{(x-y)^2}\,dx\,dy,$$
$$\|v\|^2_{1/2,I,0} := \|v\|^2_{1/2,I} + \int_a^b \frac{v^2(x)}{x-a}\,dx + \int_a^b \frac{v^2(x)}{b-x}\,dx.$$
The norm $\|\cdot\|_{1/2,\gamma_i}$, where $\gamma_i$ is a side of $\Box$, is defined analogously to $\|\cdot\|_{1/2,I}$. For instance, for a side $\gamma_i$ lying on a line $x_1 = \mathrm{const}$ we have
$$\|v\|^2_{1/2,\gamma_i} := \int\!\!\int \frac{(v(x_1,t)-v(x_1,\tau))^2}{(t-\tau)^2}\,dt\,d\tau.$$
Also we need the norm
$$\|u\|^2_{1/2,\partial\Box} = \sum_{i=1}^{4}\|u\|^2_{1/2,\gamma_i} + \sum_{i=1}^{4}\int \frac{(u_{j(i)}(t) - u_{l(i)}(t))^2}{|t|}\,dt,$$
where $u_{j(i)}$ denotes the restriction of $u$ to the side $\gamma_{j(i)}$, $t$ is the distance to $v_i$, and $v_i$ is the vertex of $\Box$ common to the sides $\gamma_{j(i)}$ and $\gamma_{l(i)}$. $A^+$ is the pseudoinverse of a matrix $A$; $\lambda(A)$, $\lambda_{\min}(A)$, $\lambda_{\max}(A)$ are an eigenvalue, the minimal nonzero eigenvalue and the maximal eigenvalue of a symmetric nonnegative matrix $A$. The signs $\preceq$, $\succeq$, $\asymp$ denote one-sided and two-sided inequalities which hold with some omitted absolute constants.

1 One-dimensional finite element with the integrated Legendre polynomials as shape functions

Let us consider the system of functions on $I$
$$\tilde M = \Big\{\tilde L_i \ \Big|\ \tilde L_0 = L_0,\ \tilde L_1 = L_1,\ \tilde L_j = \int_{-1}^{x} L_{j-1}(s)\,ds,\ j = 2,3,\dots,2N\Big\},$$
containing the first two Legendre polynomials, i.e. the constant $L_0$ and the linear $L_1 = x$, and the first integrals of the Legendre polynomials. As is known,
$$\|L_i\|^2 = 2/(2i+1), \qquad \tilde L_j(x) = \frac{1}{2j-1}\,\big[L_j(x) - L_{j-2}(x)\big]. \tag{1.1}$$
The number of functions in $\tilde M$ is taken equal to $2N+1$ only for convenience; the considerations and results are not changed when $i = 0,1,\dots,2N+1$. In the following it is convenient to use another normalization and sometimes to divide $\tilde M$ into two subsystems corresponding to even and odd indices. Let us set $\hat L_i = \tilde L_i/\|\tilde L_i\|$, so that $\|\hat L_i\| = 1$, and
$$M = \{\hat L_i,\ i = 0,1,\dots,2N\}, \qquad \bar M = \{\bar L_i,\ i = 0,1,\dots,2N\} = \bar M_+ \cup \bar M_-,$$
$$\bar M_+ = \{\bar L_i = \hat L_{2i},\ i = 0,1,\dots,N\}, \qquad \bar M_- = \{\bar L_i = \hat L_{2(i-N)-1},\ i = N+1,\dots,2N\}. \tag{1.2}$$
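The identity (1.1) is easy to verify numerically. The following small Python sketch (ours, for illustration only; it assumes numpy.polynomial.legendre is available) constructs $\tilde L_j$ as the antiderivative of $L_{j-1}$ vanishing at $x = -1$ and checks (1.1) for a few values of $j$.

import numpy as np
from numpy.polynomial import legendre as leg

def legendre_coeffs(k):
    # coefficient vector, in the Legendre basis, representing L_k
    c = np.zeros(k + 1)
    c[k] = 1.0
    return c

def integrated_legendre(j):
    # Legendre-basis coefficients of L~_j = int_{-1}^x L_{j-1}(s) ds
    c = leg.legint(legendre_coeffs(j - 1))   # one antiderivative
    c[0] -= leg.legval(-1.0, c)              # fix the constant so that L~_j(-1) = 0
    return c

x = np.linspace(-1.0, 1.0, 7)
for j in range(2, 9):
    lhs = leg.legval(x, integrated_legendre(j))
    rhs = (leg.legval(x, legendre_coeffs(j)) - leg.legval(x, legendre_coeffs(j - 2))) / (2 * j - 1)
    assert np.allclose(lhs, rhs)             # identity (1.1)
# ||L_k||^2 = 2/(2k+1) on I = (-1, 1) is used below for the normalization (1.3)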

Explicitly,
$$\hat L_0 = \frac{L_0}{\sqrt 2}, \qquad \hat L_1 = \sqrt{\tfrac32}\,L_1, \qquad \hat L_j = \beta_j \tilde L_j = \frac{\beta_j}{2j-1}\,[L_j - L_{j-2}], \quad j \ge 2, \qquad \beta_j = \frac{1}{2}\sqrt{(2j-3)(2j-1)(2j+1)}. \tag{1.3}$$
The systems $M$, $\bar M$ differ only in the ordering of their elements. The expression
$$f = \sum_{i=0}^{2N} \bar b_i \bar L_i$$
and the quadratic forms $(f,f)_I$, $(f',f')_I$ generate matrices
$$(f,f)_I = \langle \bar K_0 \bar b, \bar b\rangle, \qquad (f',f')_I = \langle \bar K_1 \bar b, \bar b\rangle,$$
where $\langle\cdot,\cdot\rangle$ is the scalar product in $R^{2N+1}$ and $\bar b = \{\bar b_i\}$ is the vector of the coefficients $\bar b_i$. It is easy to note that
$$\bar K_0 = \begin{pmatrix}\bar K_{0,+} & 0\\ 0 & \bar K_{0,-}\end{pmatrix}, \qquad \bar K_1 = \begin{pmatrix}\bar K_{1,+} & 0\\ 0 & \bar K_{1,-}\end{pmatrix},$$
the matrices $\bar K_{1,+}$, $\bar K_{1,-}$ being diagonal and the matrices $\bar K_{0,+}$, $\bar K_{0,-}$ tridiagonal. By $K_0$, $K_1$ we denote the same matrices $\bar K_0$, $\bar K_1$ but with the ordering of rows and columns corresponding to the representation
$$f = \sum_{i=0}^{2N} \hat b_i \hat L_i.$$
The aim of this section is

Lemma 1.1. The nonzero eigenvalues of $K_0$, $K_1$ satisfy the relations
$$\lambda_{\min}(K_1) = \tfrac52, \qquad \lambda_{\max}(K_1) = \tfrac12(4N-3)(4N+1), \qquad \lambda_{\min}(K_0) \asymp N^{-2}, \qquad \lambda_{\max}(K_0) \asymp 1. \tag{1.4}$$

Proof. These estimates are evident except for $\lambda_{\min}(K_0)$. Indeed, taking into account (1.1)-(1.3), we see that
$$K_1 = \mathrm{diag}\Big[\,0,\ 3,\ \tfrac52,\ \dots,\ \tfrac{(2i-3)(2i+1)}{2},\ \dots,\ \tfrac{(4N-3)(4N+1)}{2}\,\Big], \tag{1.5}$$
and thus the relations (1.4) for $K_1$ hold. As is shown below, the matrix $K_0$ has diagonal predominance for any $N < \infty$. The matrix $K_0$ can be represented in the form $K_0 = D_0 B_0 D_0$ with the diagonal matrix $D_0 = (\mathrm{diag}\,B_0)^{-1/2}$, where $B_0$ is the "mass" matrix in the basis
$$\{L_0,\ L_1,\ (L_0 - L_2),\ \dots,\ (L_{i-2} - L_i),\ \dots,\ (L_{2N-2} - L_{2N})\}:$$

the entries of $B_0$ in the row corresponding to $(L_{i-2}-L_i)$ are
$$-\frac{2}{2i-3}, \qquad \frac{2}{2i-3}+\frac{2}{2i+1}, \qquad -\frac{2}{2i+1}$$
(the couplings with $(L_{i-4}-L_{i-2})$ and $(L_i-L_{i+2})$ and the diagonal entry), the first two rows, corresponding to $L_0$ and $L_1$, contain the entries $2,\ 2$ and $\tfrac23,\ \tfrac23$, and the last diagonal entry is $\tfrac{2}{4N-3}+\tfrac{2}{4N+1}$. Consequently, as the result, $K_0$ has unit diagonal, and the moduli of its off-diagonal entries, $c_i := |(\hat L_i,\hat L_{i+2})|$, are
$$c_i = \frac12\left(\frac{(2i-3)(2i+5)}{(2i-1)(2i+3)}\right)^{1/2} = \frac12\left(1-\frac{12}{4i^2+4i-3}\right)^{1/2}.$$
Easy calculations lead to the inequality
$$\tfrac12 - c_i \succeq \frac{1}{(i+1)^{2}}, \qquad i \ge 4,$$
from which, together with a consideration of the first four rows of $K_0$, we obtain
$$\lambda_{\min}(K_0) \succeq N^{-2}. \tag{1.6}$$
It is easy to note that the matrix, which is denoted below by $\Delta_0$ and is obtained from $K_0$ by making in each row the diagonal coefficient equal to the sum of the moduli of the off-diagonal coefficients, retains positive definiteness. Since this is meaningful for computations, we shall also estimate the bounds of the spectrum of $\Delta_0$. It is sufficient to consider only $\bar K_{0,+}$ and the corresponding part $\Delta_{0,+}$ of $\Delta_0$, because for $\bar K_{0,-}$ the considerations are the same. Thus, let us put $\bar K_{0,+} = \Delta_{0,+} + D_{0,+}$, where $D_{0,+}$ is a diagonal matrix and $\Delta_{0,+}$ has the form
$$\Delta_{0,+} = \begin{pmatrix}
\frac{1}{h_1} & -\frac{1}{h_1} & & & \\
-\frac{1}{h_1} & \frac{1}{h_1}+\frac{1}{h_2} & -\frac{1}{h_2} & & \\
 & \ddots & \ddots & \ddots & \\
 & & -\frac{1}{h_{i-1}} & \frac{1}{h_{i-1}}+\frac{1}{h_i} & -\frac{1}{h_i} \\
 & & & -\frac{1}{h_{N}} & \frac{1}{h_{N}}+\frac{1}{h_{N+1}}
\end{pmatrix},$$
with $h_i = 1/c_i$. As $\bar K_{0,+}$ has diagonal predominance, all diagonal elements of $D_{0,+}$ are positive. The above expression for $\Delta_{0,+}$ is the same as that of the finite element matrix generated in the case of linear finite elements with the nodes
$$x^{(0)} = 0, \qquad x^{(i)} = h_1 + h_2 + \dots + h_i, \qquad x^{(N+1)} = b,$$

by the bilinear form $(u',v')_{(0,b)}$ with the first boundary condition at $x = b$. This means positive definiteness of $\Delta_{0,+}$ for any $N < \infty$. Consequently, $\lambda_{\min}(\bar K_{0,+}) \ge \lambda_{\min}(\Delta_{0,+}) + \lambda_{\min}(D_{0,+}) > 0$ if $N < \infty$. For the estimation of $\lambda_{\min}(\Delta_{0,+})$ one may use the relation $\lambda_{\min}(\Delta_{0,+}) = 1/\lambda_{\max}(\Delta_{0,+}^{-1})$, since $\Delta_{0,+}^{-1}$ may be written out explicitly, see [9], on the basis of the above mentioned analogy with the finite element matrix. Namely, we have
$$\Delta_{0,+}^{-1} = \{\delta^{(i,j)}\}, \qquad \delta^{(i,j)} = K(x^{(i)},x^{(j)}), \qquad K(x,t) = \begin{cases} b - x, & x \ge t,\\ b - t, & t \ge x.\end{cases}$$
For $\lambda_{\max}(\Delta_{0,+}^{-1})$ we can use the Frobenius estimate by the maximal sum of the moduli of the coefficients in each row. Taking into account that the sum of $K(x,x^{(j)})$ over $j$ is equivalent to the trapezoidal quadrature, which at $x = x^{(i)}$ is applied to the corresponding integral, we get
$$\lambda_{\max}(\Delta_{0,+}^{-1}) \preceq \max_i \sum_j K(x^{(i)},x^{(j)}) \preceq \Big[\max_{k}\frac{2}{h_k+h_{k+1}}\Big]\,\max_{0\le x\le b}\int_0^b K(x,t)\,dt \preceq 8N^2. \tag{1.7}$$
In (1.7) it is also taken into account that the numbers $c_i$ are uniformly bounded from below by a positive absolute constant and from above by $1/2$. Thus the minimal eigenvalues of $\Delta_{0,+}$ and $D_{0,+}$ are estimated from below with the same order.

Let us show now that the estimate (1.6) for $\lambda_{\min}(K_0)$ is exact in the order. For a vector of the form
$$b_+ = (0,\dots,0,\,b_l,\,b_{l+1},\dots,b_N)^T, \qquad l = \lfloor N/2\rfloor, \qquad b_+ \in R^{N+1}, \tag{1.8}$$
we have
$$b_+^T D_{0,+}\, b_+ \preceq \frac{4}{N^2}\, b_+^T b_+. \tag{1.9}$$
On the other hand, for the symmetric positive definite $D_{0,+}$, $\Delta_{0,+}$ it is true that
$$b_+^T \bar K_{0,+}\, b_+ = b_+^T (D_{0,+} + \Delta_{0,+})\, b_+ \preceq b_+^T\Big(\frac{4}{N^2}\,I + \Delta_{0,+}\Big) b_+. \tag{1.10}$$
Now we put in $b_+$ from (1.8) $b_j = 1$, $j = l, l+1,\dots,N$, and turn again to the explicit form of $\Delta_{0,+}^{-1}$. For such $b_+$ we obtain
$$b_+^T \Delta_{0,+}^{-1}\, b_+ = \sum_{l\le i,j\le N} K(x^{(i)},x^{(j)}) \succeq \Big[\min_{l<i<N}\frac{2}{h_{i-1}+h_i}\Big]^2 \int_{b/2}^{b}\!\!\int_{b/2}^{b} K(x,t)\,dt\,dx.$$
The integral is equal to $b^3/48$, and $c_i = 0.5 - O(N^{-2})$ for $l \le i \le N$. Therefore for sufficiently large $N$ the estimate $b_+^T \Delta_{0,+}^{-1} b_+ \ge c\,N^3$ holds, and since $b_+^T b_+ = N - l + 1 \le 0.5N + 1$, then
$$b_+^T \Delta_{0,+}^{-1}\, b_+ \ge c\,N^2\, b_+^T b_+$$

and $c$ is an absolute constant. From this inequality and formulas (1.9), (1.10) it follows that $\lambda_{\min}(\bar K_{0,+}) \preceq N^{-2}$, and therefore the estimate (1.6) for $\lambda_{\min}(K_0)$ is exact in the order. We have proved (1.4) for $\lambda_{\min}(K_0)$. The last estimate in (1.4) is evidently valid, since the sum of the moduli of the coefficients in each row is less than 2 and the diagonal coefficients are equal to 1. The lemma is proved.

In the finite element method the functions
$$\hat L_0 = \tfrac12(1+x), \qquad \hat L_1 = \tfrac12(1-x) \tag{1.11}$$
are used instead of the first two functions of (1.3). In the following we understand $M$, $\bar M$ as such bases. It is evident that the estimates for the bounds of the nonzero spectra of $K_0$, $K_1$ are retained in the order. Owing to the uniform boundedness of the numbers $c_i$ from below and from above, the matrix $\Delta_{0,+}$ is equivalent in spectrum to the matrix
$$\Delta^{(1)} = \begin{pmatrix}
1 & -1 & & & \\
-1 & 2 & -1 & & \\
 & \ddots & \ddots & \ddots & \\
 & & -1 & 2 & -1\\
 & & & -1 & 2
\end{pmatrix}. \tag{1.12}$$
The same is true for $\Delta_{0,-}$, under the agreement that the same notation $\Delta^{(1)}$ is used for matrices (1.12) of different dimensions. In other words,
$$b_+^T \Delta_{0,+}\, b_+ \asymp b_+^T \Delta^{(1)} b_+, \quad b_+ \in R^{N+1}; \qquad b_-^T \Delta_{0,-}\, b_- \asymp b_-^T \Delta^{(1)} b_-, \quad b_- \in R^{N}. \tag{1.13}$$

2 Square element with the hierarchical shape functions produced by the integrated Legendre polynomials

By the reference element $\hat E = \hat E\{\Box;\ \hat L_{i,j},\ 0\le i,j\le p\}$, $p = 2N$, we shall mean the square $\Box := I\times I$ with the system $M = \{\hat L_{i,j},\ 0\le i,j\le p\}$ of functions specified on it,
$$\hat L_{i,j}(x) = \hat L_i(x_1)\,\hat L_j(x_2). \tag{2.1}$$
The space spanned by them is the space $H(\hat E) = \mathcal P^{(p)}_x$ of polynomials
$$\hat u(x) = \sum u_{i,j}\,\hat L_{i,j}(x), \qquad 0\le i,j\le p, \tag{2.2}$$
containing all polynomials of degree not higher than $p$ in each of the variables $x_1$, $x_2$. We shall consider the matrices $A_0$, $A_1$ of the bilinear forms
$$(\hat u,\hat v)_\Box = u_p^T A_0 v_p, \qquad a_\Box(\hat u,\hat v) = \int_\Box \nabla\hat u\cdot\nabla\hat v\,dx = u_p^T A_1 v_p, \qquad u_p, v_p \leftrightarrow \hat u, \hat v, \tag{2.3}$$
where the relations $u_p\leftrightarrow\hat u$, $v_p\leftrightarrow\hat v,\dots$ are understood as the isomorphism between the polynomials $\hat u,\hat v\in \mathcal P^{(p)}_x$ and the vectors $u_p = \{u_{i,j}\}$, $v_p=\{v_{i,j}\},\dots$ from $R^{(p+1)^2}$ of the coefficients of their

representations in the basis $\{\hat L_{i,j}\}$, see (2.2). The matrices $A_0$, $A_1$ are the mass and stiffness matrices of the reference element, the energy being defined by the Dirichlet integral.

For what follows it is convenient to subdivide the system $M$ into the subsystems $M_I$, $M_{II}$, $M_{III}$ containing, correspondingly, the so-called internal, side and vertex shape functions:
$$M_I = \{\hat L_{i,j},\ 2\le i,j\le p\},$$
$$M_{II} = \{\hat L_{i,j},\ i=0,1,\ j=2,3,\dots,p \ \text{ or }\ i=2,3,\dots,p,\ j=0,1\},$$
$$M_{III} = \{\hat L_{i,j},\ i=0,1 \text{ and } j=0,1\}.$$
By the internal stiffness matrix we mean the matrix $A_{1,1}$ generated by the set $M_I$. Evidently it corresponds to the first boundary condition on $\partial\Box$, since all functions of $M_I$ vanish on $\partial\Box$ and there are no such functions in $M_{II}$, $M_{III}$.

Lemma 2.1. The following estimates are valid:
$$\lambda_{\min}(A_0) \asymp N^{-4},\qquad \lambda_{\max}(A_0)\asymp 1, \qquad \lambda_{\min}(A_{1,1})\asymp 1,\qquad \lambda_{\max}(A_{1,1})\asymp N^2. \tag{2.4}$$

Proof. The matrices $A_0$, $A_{1,1}$ are represented by means of Kronecker products:
$$A_0 = K_0\otimes K_0, \qquad A_{1,1} = K_{1,1}\otimes K_{0,1} + K_{0,1}\otimes K_{1,1},$$
where $K_{0,1}$, $K_{1,1}$ are the matrices obtained from $K_0$, $K_1$ by crossing out the first two rows and columns, and $K_0$, $K_1$ are the matrices described in Sec. 1. According to the properties of the Kronecker product, $\{\lambda_{m,n}(A\otimes B)\} = \{\lambda_m(A)\,\lambda_n(B)\}$, and consequently the estimates for $\lambda_{\min}$ from below and for $\lambda_{\max}$ from above follow directly from (1.4). However, the estimate $\lambda_{\min}(A_{1,1}) \succeq N^{-2}$ obtained in this way is rough, and to obtain the estimates given in (2.4) other means are necessary. Let us use the representation $K_{0,1} = \Delta_{0,1} + D_{0,1}$, which is analogous to the representation $\bar K_{0,+} = \Delta_{0,+} + D_{0,+}$. Then
$$\lambda_{\min}(A_{1,1}) = \lambda_{\min}\big(K_{1,1}\otimes(\Delta_{0,1}+D_{0,1}) + (\Delta_{0,1}+D_{0,1})\otimes K_{1,1}\big) \ge \lambda_{\min}\big(K_{1,1}\otimes D_{0,1} + D_{0,1}\otimes K_{1,1}\big).$$
The matrices $K_{1,1}$, $D_{0,1}$ are diagonal and their entries in the $i$-th row have the orders $i^2$ and $(i+1)^{-2}$, respectively. Consequently,
$$\lambda_{\min}(A_{1,1}) \succeq \inf_{i,j}\Big[\frac{i^2}{(j+1)^2} + \frac{j^2}{(i+1)^2}\Big] \succeq 1, \qquad 2\le i,j\le p, \tag{2.5}$$
and the estimate for $\lambda_{\min}(A_{1,1})$ of the lemma is also valid. In order to obtain the estimate from above, let us consider in $R^{(p-1)^2}$ vectors $w_p$ of the form $w_p = a\otimes b$, where $a = \{a_i\}\in R^{p-1}$ and $b = \{b_i\}\in R^{p-1}$. We can write
$$\inf_{u_p\in R^{(p-1)^2}}\frac{u_p^T A_{1,1} u_p}{u_p^T u_p} \le \inf_{w_p}\frac{w_p^T A_{1,1} w_p}{w_p^T w_p} = \inf_{a,b}\Big[\frac{a^T K_{1,1} a}{a^T a}\,\frac{b^T K_{0,1} b}{b^T b} + \frac{a^T K_{0,1} a}{a^T a}\,\frac{b^T K_{1,1} b}{b^T b}\Big] \preceq \inf_{a}\frac{a^T K_{1,1} a}{a^T a}\;\sup_{b}\frac{b^T K_{0,1} b}{b^T b},$$
from which and from Lemma 1.1 the estimate for $\lambda_{\min}(A_{1,1})$ from above follows. The matrix $K_{0,1}$ contains unities on the diagonal and $K_{1,1}$ has diagonal coefficients of order up to $N^2$, which makes valid the estimates for $\lambda_{\max}$ from below. According to the mentioned property of the Kronecker product, $\lambda_{\min}(A_0) = (\lambda_{\min}(K_0))^2$, and the estimates for $\lambda_{\min}(K_0)$ are known. The lemma is proved.
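The Kronecker-product structure used in this proof is easy to reproduce numerically. The sketch below (again ours, not the authors' code; the quadrature-based assembly is only one possible way to build $K_0$, $K_1$) forms the 1D matrices in the basis (1.3), (1.11), assembles $A_0$ and $A_1$, and checks that an internal row of $A_1$ contains only five nonzero entries, as stated in the Introduction.

import numpy as np
from numpy.polynomial import legendre as leg

def shape_coeffs(p):
    # Legendre-basis coefficients of L^_0 = (1+x)/2, L^_1 = (1-x)/2, see (1.11),
    # and of the L2-normalized integrated Legendre functions L^_j, j >= 2, see (1.3)
    funcs = [np.array([0.5, 0.5]), np.array([0.5, -0.5])]
    for j in range(2, p + 1):
        c = np.zeros(j + 1)
        c[j], c[j - 2] = 1.0, -1.0
        c /= (2 * j - 1)                                           # L~_j, identity (1.1)
        c /= np.sqrt(4.0 / ((2*j - 3) * (2*j - 1) * (2*j + 1)))    # ||L^_j|| = 1
        funcs.append(c)
    return funcs

def one_dim_matrices(p):
    xs, ws = leg.leggauss(p + 2)                                   # exact for degree 2p + 3
    F = shape_coeffs(p)
    V  = np.array([leg.legval(xs, c) for c in F])                  # values at quadrature nodes
    dV = np.array([leg.legval(xs, leg.legder(c)) for c in F])      # derivatives
    return (V * ws) @ V.T, (dV * ws) @ dV.T                        # K_0 (mass), K_1 (stiffness)

p = 8
K0, K1 = one_dim_matrices(p)
A0 = np.kron(K0, K0)                                               # 2D mass matrix
A1 = np.kron(K1, K0) + np.kron(K0, K1)                             # 2D stiffness (Laplacian)
# eigenvalues of a Kronecker product are products of the 1D eigenvalues,
# so for the SPD mass matrix cond(A0) = cond(K0)^2
print(np.linalg.cond(K0) ** 2, np.linalg.cond(A0))
row = A1[4 * (p + 1) + 4]                                          # row of the internal function L^_{4,4}
print(np.count_nonzero(np.abs(row) > 1e-12))                       # prints 5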

Remark 2.1. Numerical experiments with some sets of shape functions have shown a growth of the condition number of the local stiffness matrix as $O(p^3)$; see, for instance, the works cited in the Introduction. For the shape functions chosen here, according to the above lemma (see also Lemma 1.1), this condition number is $O(p^2)$.

Now we consider the matrix $A_{1,2}$, which is obtained from $A_1$ by crossing out the four rows and four columns corresponding to the vertex functions $\hat L_{i,j}$, $i,j = 0,1$.

Lemma 2.2. The following estimates are true:
$$\lambda_{\min}(A_{1,2}) \asymp 1, \qquad \lambda_{\max}(A_{1,2}) \asymp N^2.$$

Proof. The estimates for $\lambda_{\max}(A_{1,2})$ are proved in the same way as for $A_{1,1}$. In order to simplify the argument when estimating $\lambda_{\min}(A_{1,2})$, instead of $A_{1,2}$ we consider the matrix $A_1$ on the space of vectors $u_p \in U_{IS}$, $u_p = \{u_{i,j}\}$, $i,j = 0,1,\dots,p$, having the four zero components $u_{i,j} = 0$ for $i,j = 0,1$. The matrix $A_1$ is defined by the Kronecker product
$$A_1 = K_1\otimes K_0 + K_0\otimes K_1,$$
where $K_0$, $K_1$ correspond to the system $\hat M = \{\hat L_i,\ i = 0,1,\dots,p\}$ with $\hat L_0$, $\hat L_1$ from (1.11), $K_1$ thus having the form
$$K_1 = \mathrm{diag}\Big[\begin{pmatrix}\tfrac12 & -\tfrac12\\ -\tfrac12 & \tfrac12\end{pmatrix},\ \tfrac52,\ \dots,\ \tfrac{(2i-3)(2i+1)}{2},\ \dots,\ \tfrac{(2p-3)(2p+1)}{2}\Big].$$
For the matrices $A_1$ and
$$\bar A_1 = K_{1,1}\otimes K_0 + K_0\otimes K_{1,1}$$
we have $\bar A_1 \preceq A_1$, where in the above relation $K_{1,1}$ is understood as the matrix obtained from $K_1$ by replacing its first two rows and columns by zeros. Indeed, the matrix $\bar A_1$ is nonnegative, since $K_{1,1}$ and $K_0$ are nonnegative. Further, $K_1\otimes K_0$ and $K_{1,1}\otimes K_0$ differ only in the first $2(p+1)\times 2(p+1)$ diagonal block, which is
$$\tfrac12\begin{pmatrix} K_0 & -K_0\\ -K_0 & K_0\end{pmatrix}$$
in $K_1\otimes K_0$ and zero in $K_{1,1}\otimes K_0$. Thus $K_{1,1}\otimes K_0 \preceq K_1\otimes K_0$. If we change the ordering of rows and columns, replacing $i$ by $j$ and vice versa, we see that there is only the same difference between $K_0\otimes K_1$ and $K_0\otimes K_{1,1}$. Consequently $\bar A_1 \preceq A_1$. It is easy to note that the zero rows and columns of $K_{1,1}\otimes K_0$ and $K_0\otimes K_{1,1}$ have only four zero rows and columns in common, and they correspond to $i,j = 0,1$. All the remaining rows and columns are nonzero. More exactly, with respect to rows we have: zero rows in $K_{1,1}\otimes K_0$ and nonzero rows in $K_0\otimes K_{1,1}$ for $i = 0,1$, $j = 2,3,\dots,p$; nonzero rows in $K_{1,1}\otimes K_0$ and zero rows in $K_0\otimes K_{1,1}$ for $i = 2,3,\dots,p$, $j = 0,1$; and nonzero rows in both $K_{1,1}\otimes K_0$ and $K_0\otimes K_{1,1}$ for $i,j = 2,3,\dots,p$.

The submatrix of $K_{1,1}\otimes K_0$ with entries corresponding to $i = 2,3,\dots,p$, $j = 0,1,\dots,p$ and the submatrix of $K_0\otimes K_{1,1}$ with entries corresponding to $i = 0,1,\dots,p$, $j = 2,3,\dots,p$ are tridiagonal (in the odd-even ordering of Sec. 1). The diagonal dominance of these two submatrices is estimated exactly in the same manner as for $K_{1,1}\otimes K_{0,1}$ and $K_{0,1}\otimes K_{1,1}$. Denoting by $d_{i,j}(A)$ the diagonal dominance in the row "$i,j$" of a matrix $A$, we obtain as the result
$$d_{i,j}(\bar A_1) \succeq (1-\delta_{0,i}-\delta_{1,i})\,\frac{i^2}{(j+1)^2} + (1-\delta_{0,j}-\delta_{1,j})\,\frac{j^2}{(i+1)^2}, \tag{2.6}$$
where $\delta_{k,l}$ is the Kronecker delta and $i$, $j$ are not equal to $0$ or $1$ simultaneously. The lemma is proved.

Let us give several additional inequalities related to the preconditioning of $A_1$.

Lemma 2.3. Let $\hat u\in H(\hat E)$ be represented in the form $\hat u = \hat u_I + \hat u_{II} + \hat u_{III}$, where $\hat u_L\in\mathrm{span}\,M_L$, $L = I, II, III$, and let
$$\hat a(v,w) = \int_\Box \nabla v\cdot\nabla w\,dx.$$
Then
$$\frac{c_1}{1+\log p}\,\big[\hat a(\hat u_I+\hat u_{II},\hat u_I+\hat u_{II}) + \hat a(\hat u_{III},\hat u_{III})\big] \le \hat a(\hat u,\hat u) \le 3\,\big[\hat a(\hat u_I+\hat u_{II},\hat u_I+\hat u_{II}) + \hat a(\hat u_{III},\hat u_{III})\big], \tag{2.7}$$
$$\frac{c_2}{p+\log p}\sum_{L=I,II,III}\hat a(\hat u_L,\hat u_L) \le \hat a(\hat u,\hat u) \le 3\sum_{L=I,II,III}\hat a(\hat u_L,\hat u_L), \tag{2.8}$$
with absolute constants $c_1$, $c_2$.

Proof. The right inequalities are Cauchy inequalities. The left inequality in (2.7) has in fact been proved in [1] and is based on the following result, which is also needed below.

Theorem 2.1 (I. Babuska et al. [1]). For any polynomial $u(x)\in\mathcal P_{p,x}$ and $x\in\bar I$
$$|u(x)|^2 \le c_3\,(1+\log p)\,\|u\|^2_{1,I} \tag{2.9}$$
with an absolute constant $c_3$. This estimate cannot be asymptotically improved, i.e. there is a constant $c$ such that for each $p$ there exists $v_p\in\mathcal P_{p,x}$ with $\|v_p\|_{1,I}\le c$ and $|v_p(-1)|^2 \succeq \log p$.

Now we have
$$\hat a(\hat u_I+\hat u_{II},\hat u_I+\hat u_{II}) = \hat a(\hat u-\hat u_{III},\hat u-\hat u_{III}) \le 2\,\big(\hat a(\hat u,\hat u) + \hat a(\hat u_{III},\hat u_{III})\big),$$
$$\hat a(\hat u_I+\hat u_{II},\hat u_I+\hat u_{II}) + \hat a(\hat u_{III},\hat u_{III}) \le 2\,\hat a(\hat u,\hat u) + 3\,\hat a(\hat u_{III},\hat u_{III}). \tag{2.10}$$
Since $\hat u_{III}$ is bilinear, its maximum is attained at one of the vertices $x = (\pm1,\pm1)$ of $\Box$, and at these points $\hat u_{III} = \hat u(x)$. Consequently $\hat a(\hat u_{III},\hat u_{III}) \le c_4\max|\hat u(x)|^2$ over $x = (\pm1,\pm1)$, and application of (2.9) and the trace theorem gives
$$\hat a(\hat u_{III},\hat u_{III}) \le c_4 c_3\,(1+\log p)\,\|\hat u\|^2_{1,\gamma_i} \le c_5 c_4 c_3\,(1+\log p)\,\|\hat u\|^2_{1,\Box},$$

where $\gamma_j$, $j = 1,2,3,4$, are the sides of $\Box$, and above $\gamma_i$ is the side adjacent to the vertex at which $\hat u_{III}$ attains its maximum. Now Bramble-Hilbert lemma type arguments allow us to write
$$\hat a(\hat u_{III},\hat u_{III}) \le c\,(1+\log p)\,|\hat u|^2_{1,\Box} \le c\,(1+\log p)\,\hat a(\hat u,\hat u) \tag{2.11}$$
with an absolute constant $c$. Combining this with (2.10), we obtain the left part of (2.7).

In order to obtain the left inequality in (2.8) we use the Markov type inequality
$$|u|^2_{1,I} \le c\,p\,\|u\|^2_{1/2,I} \qquad \text{for any } u\in\mathcal P_{p,x}, \tag{2.12}$$
which is obtained in the following way. Setting $x = \cos\theta$, we represent $\tilde u(\theta) := u(x)$, $I^\star := (0,\pi)$, by the sum
$$\tilde u(\theta) = \sum_{k=0}^{p} b_k\cos k\theta.$$
We also can write
$$|\tilde u|^2_{1,I^\star} \asymp \sum_{k=0}^{p} k^2 b_k^2 \preceq p\sum_{k=0}^{p}\sqrt{1+k^2}\;b_k^2 \preceq c\,p\,\|\tilde u\|^2_{1/2,I^\star} \preceq c\,c_1\,p\,\|u\|^2_{1/2,I},$$
where in the last step we used the equivalence of the norms $\|u\|_{1/2,I}$ and $\|\tilde u\|_{1/2,I^\star}$, which has been established in [1].

It is easy to see that $\hat u|_{\partial\Box} = \hat u_{II} + \hat u_{III}$ and
$$\hat a(\hat u_{II}+\hat u_{III},\hat u_{II}+\hat u_{III}) \preceq \sum_{i=1}^{4}\|\hat u\|^2_{1,\gamma_i}.$$
Thus, applying the Cauchy inequality, (2.11), (2.12), and the trace theorem, we have
$$\hat a(\hat u_{II},\hat u_{II}) \le 2\Big(\hat a(\hat u_{III},\hat u_{III}) + c\sum_{i=1}^{4}\|\hat u\|^2_{1,\gamma_i}\Big) \le c\Big((1+\log p)\,\|\hat u\|^2_{1,\Box} + p\sum_{i=1}^{4}\|\hat u\|^2_{1/2,\gamma_i}\Big) \le c\,(\log p + p)\,\|\hat u\|^2_{1,\Box}.$$
The use of Bramble-Hilbert lemma type arguments gives
$$\hat a(\hat u_{II},\hat u_{II}) \le c\,(\log p + p)\,\hat a(\hat u,\hat u). \tag{2.13}$$
Besides, (2.11), (2.13) allow us to obtain
$$\hat a(\hat u_I,\hat u_I) \le 3\,\big(\hat a(\hat u,\hat u) + \hat a(\hat u_{II},\hat u_{II}) + \hat a(\hat u_{III},\hat u_{III})\big) \le c\,(\log p + p)\,\hat a(\hat u,\hat u). \tag{2.14}$$
From (2.11), (2.13), and (2.14) the left inequality in (2.8) follows. The lemma is proved.

Remark 2.2. In the p-version it is sometimes more efficient to use a reference element whose polynomial space is the minimal polynomial space containing, for a given $p$, the space $\mathcal P_{p,x}$ and allowing the compatibility conditions to be satisfied. For the chosen type of coordinate functions this assumes the use of the reference element $\hat E' = \hat E'\{\Box;\ \hat L_{i,j}\in M'_p\}$ with
$$M'_p = M_{I,p}\cup M_{II}\cup M_{III}, \qquad M_{I,p} := \{\hat L_{i,j},\ 2\le i,j,\ i+j\le p\}.$$
Thus, for a fixed $p$ the set $M'_p$ differs from $M$ only in the subset $M_{I,p}\ne M_I$ of the internal functions, while the sets of the side functions and of the vertex functions coincide. The

space of polynomials $H(\hat E') = \mathrm{span}\,M'_p$, which will also be denoted by $\mathcal P^{[p]}_x$, corresponds to the element $\hat E'$. The reasons for $\hat E'$ to be more effective in comparison with $\hat E$, especially if we enlarge $p$ in the course of the solution of one problem, are known from the literature. The obtained results are retained for the element $\hat E'$. Lemma 2.1 is retained in particular due to the inclusion
$$M^{(\lfloor p/2\rfloor)} \subset M'_p \subset M^{(p)},$$
where for a given $p$ the notation $M^{(p)}$ is used instead of $M$. Lemma 2.3 is also valid for the corresponding matrices generated by the reference element $\hat E'$, because in its proof the concrete form of the internal functions and of the space spanned by them was not important.

Remark 2.3. Let us suppose that we have orthogonalized the set of the side functions to the set of the internal functions, i.e. instead of $M_{II}$ we use $M_{(II)}$ such that
$$\hat a(\hat u_{II},\hat u_I) = 0 \quad \text{for any } \hat u_I\in\mathrm{span}\,M_I \ (\text{or any } \hat u_I\in\mathrm{span}\,M_{I,p}) \text{ and any } \hat u_{II}\in\mathrm{span}\,M_{(II)}. \tag{2.15}$$
Then
$$\hat a(\hat u_{II},\hat u_{II}) \le c\,(1+\log p)\,\hat a(\hat u,\hat u).$$
Indeed, taking into account (2.11), (2.15), we can write
$$|\hat u_{II}|^2_{1,\Box} \le |\hat u_{II}+\hat u_I|^2_{1,\Box} = |\hat u-\hat u_{III}|^2_{1,\Box} \le 2\,\big(|\hat u|^2_{1,\Box} + |\hat u_{III}|^2_{1,\Box}\big) \le c\,(1+\log p)\,|\hat u|^2_{1,\Box}.$$
Instead of (2.8) we get
$$\frac{c}{1+\log p}\sum_{L=I,II,III}\hat a(\hat u_L,\hat u_L) \le \hat a(\hat u,\hat u) \le 3\sum_{L=I,II,III}\hat a(\hat u_L,\hat u_L).$$

References

1. Babuska I., Craig A., Mandel J., Pitkaranta J. Efficient preconditioning for the p-version finite element method in two dimensions. SIAM J. Numer. Anal. 28 (1991), no. 3, pp. 624-661.
2. Pavarino L. F. Additive Schwarz methods for the p-version finite element method. Numer. Math. 66 (1994), no. 4, pp. 493-515.
3. Mandel J. Iterative solvers by substructuring for the p-version finite element method. Comput. Methods Appl. Mech. Engrg. 80 (1990), pp. 117-128.
4. Carey G. F., Barragy E. Basis function selection and preconditioning high degree finite element and spectral methods. BIT 29 (1989), no. 4, pp. 794-804.
5. Fisher P. F. Spectral element solution for the Navier-Stokes equations on high performance distributed memory parallel processors. In: Parallel and Vector Computations in Heat Transfer, HTD Vol. 33, eds. J. D. Georgiadis and J. Y. Murphy.
6. Amon C. H. Spectral element-Fourier approximation for the Navier-Stokes equations: some aspects of parallel implementation. In: Parallel and Vector Computations in Heat Transfer, HTD Vol. 33, eds. J. D. Georgiadis and J. Y. Murphy.
7. George A., Liu J. W.-H. Computer Solution of Large Sparse Positive Definite Systems. Prentice-Hall, Englewood Cliffs, New Jersey, 1981.
8. Bahlmann D., Korneev V. G. Comparison of high order accuracy finite element methods: nodal points selection and preconditioning. Preprint Nr. 37/7, Technische Universität Chemnitz, 1993.
9. Korneev V. G. On the relation between finite differences and quadratures from the Green function for the ordinary second order differential operator. Vestnik Leningradskogo Universiteta, no. 7, pp. 36-44 (in Russian).
10. Korneev V. G. On the construction of variational-difference schemes of high orders of accuracy. Vestnik Leningradskogo Universiteta, no. 9 (in Russian).
11. Korneev V. G. Finite Element Methods of High Orders of Accuracy. Izdatel'stvo Leningradskogo Universiteta, 1977 (in Russian).
12. Korneev V. G. Iterative methods for the solution of finite element algebraic equation systems. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki 17 (1977), no. 5, pp. 3-33 (in Russian).
13. Ivanov S. A., Korneev V. G. Selection of the high order coordinate functions and preconditioning in the frame of the domain decomposition method. Izvestiya Vysshikh Uchebnykh Zavedeniy (1995), no. 4 (395), pp. 6-8 (in Russian).
14. Nepomnyaschikh S. V. Method of splitting into subspaces for solving elliptic boundary value problems in complex-form domains. Sov. J. Numer. Anal. Math. Modelling 6 (1991), pp. 5-68.

8. Bahlmann D., Korneev V.G. Comparison of high order accuracy nite element methods: nodal points selection and preconditioning. Preprint Nr.37/7.{Chemnitz, Technishe Universitat Chemnitz, Jg., 993. 9. Korneev V.G. On the relation between nite dierences and quadratures from Green function for the ordinary second order dierential operator. Vestnik Leningradskogo Universiteta (97), no.7, pp.36{44 (in Russian).. Korneev V.G. On the construction of the varitional{dierence shemes of the high orders of accuracy. Vestnik Leningradskogo Universiteta (97), no.9, pp.8{4 (in Russian).. Korneev V.G. Finite element methods of the high orders of accuracy Izdatel`stvo Leningradskogo Universiteta 97, 5p. (in Russian).. Korneev V.G. Iterative methods for solution of nite element algebraic equations systems. Zhurnal vychislitel`noy matematiki i matematicheskoi ziki (977),V.7, no. 5, pp.3{33 (in Russian). 3. Ivanov S.A. and Korneev V.G. Selection of the high order coordinate functions and preconditioning in the frame of the domain decompozition method Izvestiya Vyshikh Uchebnykh Zavedeniy (995),no.4 (395), pp.6{8 (in Russian). 4. Nepomnyaschikh S.V. Method of splitting into subspaces for solving elliptic boundary value problems in complex{form domains. Sov. J.Numer. Anal.Math. Modelling (99). V.6, no. (), pp. 5{68. 5