A Spectral Method for Elliptic Equations: The Neumann Problem


Kendall Atkinson
Departments of Mathematics & Computer Science, The University of Iowa

David Chien, Olaf Hansen
Department of Mathematics, California State University San Marcos

July 6, 2009

Abstract. Let $\Omega$ be an open, simply connected, and bounded region in $\mathbb{R}^d$, $d \ge 2$, and assume its boundary $\partial\Omega$ is smooth. Consider solving an elliptic partial differential equation $-\Delta u + \gamma u = f$ over $\Omega$ with a Neumann boundary condition. The problem is converted to an equivalent elliptic problem over the unit ball $B^d$, and then a spectral Galerkin method is used to create a sequence of multivariate polynomials $u_n$ of degree $\le n$ that converges to $u$. The transformation from $\Omega$ to $B^d$ requires a special analytical calculation for its implementation. With sufficiently smooth problem parameters, the method is shown to be rapidly convergent. For $u \in C^\infty(\overline{\Omega})$ and assuming $\partial\Omega$ is a $C^\infty$ boundary, the convergence of $\|u - u_n\|_{H^1}$ to zero is faster than any power of $1/n$. Numerical examples in $\mathbb{R}^2$ and $\mathbb{R}^3$ show experimentally an exponential rate of convergence.

1 INTRODUCTION

Consider solving the Neumann problem for Poisson's equation:

$$-\Delta u(s) + \gamma(s) u(s) = f(s), \quad s \in \Omega, \qquad (1)$$

$$\frac{\partial u(s)}{\partial n_s} = g(s), \quad s \in \partial\Omega. \qquad (2)$$

Assume $\Omega$ is an open, simply connected, and bounded region in $\mathbb{R}^d$, $d \ge 2$, and assume that its boundary $\partial\Omega$ is several times continuously differentiable. Similarly, assume the functions $\gamma(s)$ and $f(s)$ are several times continuously differentiable over $\overline{\Omega}$, and assume that $g(s)$ is several times continuously differentiable over the boundary $\partial\Omega$.

There is a rich literature on spectral methods for solving partial differential equations. From the more recent literature, we cite [7], [8], [9], and [15]. Their bibliographies contain references to earlier papers on spectral methods. The present paper is a continuation of the work in [3], in which a spectral method is given for a general elliptic equation with a Dirichlet boundary condition. Our approach is somewhat different from the standard approaches. We convert the partial differential equation to an equivalent problem on the unit disk or unit ball, and in the process we are required to work with a more complicated equation. Our approach is reminiscent of the use of conformal mappings for planar problems. Conformal mappings can be used with our approach when working on planar problems, although having a conformal mapping is not necessary.

In §2 we assume that (1)-(2) is uniquely solvable, and we present a spectral Galerkin method for its solution. In §3 we extend the method to the problem with $\gamma(s) \equiv 0$ in $\Omega$. The problem is no longer uniquely solvable, and we extend our spectral method to this case. The implementation of the method is discussed in §4, and it is illustrated in §5.

2 A spectral method for the uniquely solvable case

We assume the Neumann problem (1)-(2) is uniquely solvable. This is true, for example, if

$$\gamma(s) \ge c_\gamma > 0, \quad s \in \Omega, \qquad (3)$$

for some constant $c_\gamma$. For functions $u \in H^2(\Omega)$, $v \in H^1(\Omega)$,

$$\int_\Omega v(s)\left[-\Delta u(s) + \gamma(s)u(s)\right] ds = \int_\Omega \left[\nabla u(s) \cdot \nabla v(s) + \gamma(s)u(s)v(s)\right] ds - \int_{\partial\Omega} v(s)\,\frac{\partial u(s)}{\partial n_s}\, ds. \qquad (4)$$

Introduce the bilinear functional

$$\mathcal{A}(v_1, v_2) = \int_\Omega \left[\nabla v_1(s) \cdot \nabla v_2(s) + \gamma(s)v_1(s)v_2(s)\right] ds. \qquad (5)$$

The variational form of the Neumann problem (1)-(2) is as follows: find $u$ such that

$$\mathcal{A}(u, v) = \ell_1(v) + \ell_2(v), \quad \forall v \in H^1(\Omega), \qquad (6)$$

with the linear functionals defined by

$$\ell_1(v) = \int_\Omega v(s)f(s)\, ds, \qquad (7)$$

$$\ell_2(v) = \int_{\partial\Omega} v(s)g(s)\, ds. \qquad (8)$$

The norms we use for $\ell_1$ and $\ell_2$ are the standard operator norms when regarding $\ell_1$ and $\ell_2$ as linear functionals on $H^1(\Omega)$. The functional $\ell_1$ is easily bounded on $H^1(\Omega)$:

$$|\ell_1(v)| \le \|f\|_{L^2}\, \|v\|_{L^2} \le \|f\|_{L^2}\, \|v\|_{H^1}. \qquad (9)$$

Ordinarily, we will use $\|v\|_1$ in place of $\|v\|_{H^1}$. The functional $\ell_2$ is also bounded (at least for bounded domains $\Omega$). To show this, begin by noting that the trace map $\tau : H^1(\Omega) \to H^{1/2}(\partial\Omega)$ is continuous [13, Th. 3.37] and the imbedding $\iota : H^{1/2}(\partial\Omega) \hookrightarrow L^2(\partial\Omega)$ is compact [13, Th. 3.27]. If we further denote by $l_g$ the continuous mapping

$$l_g : u \mapsto \int_{\partial\Omega} u(s)g(s)\, ds, \quad u \in L^2(\partial\Omega),$$

then we see $\ell_2 = l_g \circ \iota \circ \tau$, and therefore $\ell_2$ is bounded. It is straightforward to show $\mathcal{A}$ is bounded:

$$|\mathcal{A}(v, w)| \le c_A \|v\|_1 \|w\|_1, \quad c_A = \max\{1, \|\gamma\|_\infty\}. \qquad (10)$$

In addition, we assume $\mathcal{A}$ is strongly elliptic on $H^1(\Omega)$:

$$\mathcal{A}(v, v) \ge c_e \|v\|_1^2, \quad v \in H^1(\Omega), \qquad (11)$$

with some $c_e > 0$. This follows ordinarily from showing the unique solvability of the Neumann problem (1)-(2). If (3) is satisfied, then we can satisfy (11) with $c_e = \min\{1, c_\gamma\}$. Under our assumptions on $\mathcal{A}$, including the strong ellipticity in (11), the Lax-Milgram Theorem implies the existence of a unique solution $u$ to (6) with

$$\|u\|_1 \le \frac{1}{c_e}\left[\|\ell_1\| + \|\ell_2\|\right]. \qquad (12)$$

Our spectral method is defined using polynomial approximations over the open unit ball in $\mathbb{R}^d$, call it $B^d$. Introduce a change of variables $\Phi : \overline{B^d} \to \overline{\Omega}$, a twice-differentiable mapping that is one-to-one and onto, and let $\Psi = \Phi^{-1} : \overline{\Omega} \to \overline{B^d}$. [We comment later on the creation of $\Phi$ for cases in which only the boundary mapping $\varphi : \partial B^d \to \partial\Omega$ is known.] For $v \in L^2(\Omega)$, let

$$\tilde{v}(x) = v(\Phi(x)), \quad x \in \overline{B^d} \subseteq \mathbb{R}^d,$$

and conversely,

$$v(s) = \tilde{v}(\Psi(s)), \quad s \in \overline{\Omega} \subseteq \mathbb{R}^d.$$

Assuming $v \in H^1(\Omega)$, we can show

$$\nabla_x \tilde{v}(x) = J(x)^T \nabla_s v(s), \quad s = \Phi(x),$$

with $J(x)$ the Jacobian matrix of $\Phi$ over the unit ball $B^d$,

$$J(x) \equiv (D\Phi)(x) = \left[\frac{\partial \Phi_i(x)}{\partial x_j}\right]_{i,j=1}^d, \quad x \in \overline{B^d}.$$

Similarly,

$$\nabla_s v(s) = K(s)^T \nabla_x \tilde{v}(x),$$

with $K(s)$ the Jacobian matrix of $\Psi$ over $\overline{\Omega}$. Also, with $x = \Psi(s)$,

$$K(\Phi(x)) = J(x)^{-1}. \qquad (13)$$

Using the change of variables $s = \Phi(x)$, the formula (5) converts to

$$\mathcal{A}(v_1, v_2) = \int_{B^d} \left\{\left[K(\Phi(x))^T \nabla_x \tilde{v}_1(x)\right]^T \left[K(\Phi(x))^T \nabla_x \tilde{v}_2(x)\right] + \gamma(\Phi(x))\,\tilde{v}_1(x)\tilde{v}_2(x)\right\} \left|\det J(x)\right| dx$$

$$= \int_{B^d} \left\{\left[J(x)^{-T} \nabla_x \tilde{v}_1(x)\right]^T \left[J(x)^{-T} \nabla_x \tilde{v}_2(x)\right] + \tilde{\gamma}(x)\,\tilde{v}_1(x)\tilde{v}_2(x)\right\} \left|\det J(x)\right| dx$$

$$= \int_{B^d} \left\{\nabla_x \tilde{v}_1(x)^T A(x)\, \nabla_x \tilde{v}_2(x) + \tilde{\gamma}(x)\,\tilde{v}_1(x)\tilde{v}_2(x)\right\} \left|\det J(x)\right| dx \equiv \tilde{\mathcal{A}}(\tilde{v}_1, \tilde{v}_2) \qquad (14)$$

with $\tilde{\gamma}(x) = \gamma(\Phi(x))$ and

$$A(x) = J(x)^{-1} J(x)^{-T}.$$

We can also introduce analogues of $\ell_1$ and $\ell_2$ following a change of variables, calling them $\tilde{\ell}_1$ and $\tilde{\ell}_2$ and defined on $H^1(B^d)$. For example,

$$\tilde{\ell}_1(\tilde{v}) = \int_{B^d} \tilde{v}(x)\, f(\Phi(x)) \left|\det J(x)\right| dx.$$

We can then convert (6) to an equivalent problem over $H^1(B^d)$. The variational problem becomes

$$\tilde{\mathcal{A}}(\tilde{u}, \tilde{v}) = \tilde{\ell}_1(\tilde{v}) + \tilde{\ell}_2(\tilde{v}), \quad \forall \tilde{v} \in H^1(B^d). \qquad (15)$$

The assumptions and results in (6)-(12) extend to this new problem on $H^1(B^d)$. The strong ellipticity condition (11) becomes

$$\tilde{\mathcal{A}}(\tilde{v}, \tilde{v}) \ge \tilde{c}_e \|\tilde{v}\|_1^2, \quad \tilde{v} \in H^1(B^d), \qquad (16)$$

$$\tilde{c}_e = c_e\, \frac{\min_{x \in \overline{B^d}} \left|\det J(x)\right|}{\max\left\{1,\ \max_{x \in \overline{B^d}} \|J(x)\|^2\right\}},$$

where kj(x)k denotes the operator matrix -norm of J(x) for R d. Also, A e (ev; ew) ec A kevk k ewk ; ec A = max jdet [J(x)]j max max ka(x)k ; kk : x d x d For the nite dimensional problem, we want to use the approximating subspace n d n. We want to nd eu n n such that ea (eu n ; ev) = è (ev) + è (ev) ; 8ev n : (7) The Lax-Milgram Theorem (cf. [4, 8.3], [5,.7]) implies the existence of u n for all n. For the error in this Galerkin method, Cea s Lemma (cf. [4, p. 365], [5, p. 6]) implies the convergence of u n to u, and moreover, keu eu n k ec A ec e inf keu evk : (8) ev n It remains to bound the best approximation error on the right side of this inequality. Ragozin [4] gives bounds on the rate of convergence of best polynomial approximation over the unit ball, and these results are extended in [6] to simultaneous approximation of a function and some of its lower order derivatives. Assume eu C m+ d. Using [6, Theorem ], we have with inf keu evk ev n! u;m+ () = sup jj=m+ c(u; m) n m! u;m+ sup jd eu (x) jx yj n D eu (y)j! : (9) The notation D eu (x) is standard derivative notation with a multi-integer. In particular, for = ( ; : : : ; d ), D eu (x) = @jj eu (x ; : : : ; x d ) @x : @x d When (9) is combined with (8), we see that our solutions eu n converge faster than any power of =n provided eu C d. 3 A spectral method for u = f Consider the Neumann problem for Poisson s equation: d u = f(s); s () @u(s) @n s = g(s); s @ () 5

As a reference for this problem, see [5, §5.2]. As earlier in (4), we have for functions $u \in H^2(\Omega)$, $v \in H^1(\Omega)$,

$$-\int_\Omega v(s)\,\Delta u(s)\, ds = \int_\Omega \nabla u(s) \cdot \nabla v(s)\, ds - \int_{\partial\Omega} v(s)\,\frac{\partial u(s)}{\partial n_s}\, ds. \qquad (22)$$

If this Neumann problem (20)-(21) is solvable, then its solution is not unique: any constant added to a solution gives another solution. In addition, if (20)-(21) is solvable, then

$$\int_\Omega v(s)f(s)\, ds = \int_\Omega \nabla u(s) \cdot \nabla v(s)\, ds - \int_{\partial\Omega} v(s)g(s)\, ds. \qquad (23)$$

Choosing $v(s) \equiv 1$, we obtain

$$\int_\Omega f(s)\, ds = -\int_{\partial\Omega} g(s)\, ds. \qquad (24)$$

This is a necessary and sufficient condition on the functions $f$ and $g$ in order that (20)-(21) be solvable. With this constraint, the Neumann problem is solvable. To deal with the non-unique solvability, we look for a solution $u$ satisfying

$$\int_\Omega u(s)\, ds = 0. \qquad (25)$$

Introduce the bilinear functional

$$\mathcal{A}(v_1, v_2) = \int_\Omega \nabla v_1(s) \cdot \nabla v_2(s)\, ds \qquad (26)$$

and the function space

$$V = \left\{v \in H^1(\Omega) : \int_\Omega v(s)\, ds = 0\right\}.$$

$\mathcal{A}$ is bounded:

$$|\mathcal{A}(v, w)| \le \|v\|_1 \|w\|_1, \quad \forall v, w \in V.$$

From [5, Prop. 5.3.2], $\mathcal{A}(\cdot, \cdot)$ is strongly elliptic on $V$, satisfying

$$\mathcal{A}(v, v) \ge c_e \|v\|_1^2, \quad v \in V, \qquad (27)$$

for some $c_e > 0$. The variational form of the Neumann problem (20)-(21) is as follows: find $u \in V$ such that

$$\mathcal{A}(u, v) = \ell_1(v) + \ell_2(v), \quad \forall v \in V, \qquad (28)$$

with $\ell_1$ and $\ell_2$ defined as in (7)-(8). As before, the Lax-Milgram Theorem implies the existence of a unique solution $u$ to (28) with

$$\|u\|_1 \le \frac{1}{c_e}\left[\|\ell_1\| + \|\ell_2\|\right].$$
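The solvability condition (24) is easy to verify numerically on a simple example. The following sketch is our own illustration, not from the paper: it takes $\Omega$ to be the unit disk and $u(s,t) = s^2$, so that $f = -\Delta u = -2$ and $g = \partial u/\partial n = 2s\,n_1$, and checks that $\int_\Omega f\, ds = -\int_{\partial\Omega} g\, ds$.

```python
import numpy as np

# Solvability check for the pure Neumann problem on the unit disk:
# u(s, t) = s^2  =>  f = -Laplace(u) = -2,  g = du/dn = 2 s * n_1.
nr, nt = 200, 400
r = (np.arange(nr) + 0.5) / nr            # midpoint rule nodes in the radius
theta = 2 * np.pi * np.arange(nt) / nt    # trapezoidal rule nodes in the angle
R, T = np.meshgrid(r, theta, indexing="ij")

# Interior integral of f over the disk (area element r dr dtheta)
int_f = np.sum(-2.0 * R) * (1.0 / nr) * (2 * np.pi / nt)

# Boundary integral of g: on the unit circle, n = (s, t) and g = 2 cos(theta)^2
int_g = np.sum(2.0 * np.cos(theta) ** 2) * (2 * np.pi / nt)

# Condition (24): the two integrals must cancel (here, -2*pi and +2*pi)
assert abs(int_f + int_g) < 1e-10
```

Both quadrature rules used here are exact for the integrands in question, so the cancellation holds to machine precision.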

As in the preceding section, we transform the problem from being defined over $\Omega$ to being defined over $B^d$. Most of the arguments are repeated, and we have

$$\tilde{\mathcal{A}}(\tilde{v}_1, \tilde{v}_2) = \int_{B^d} \nabla_x \tilde{v}_1(x)^T A(x)\, \nabla_x \tilde{v}_2(x) \left|\det J(x)\right| dx.$$

The condition (25) becomes

$$\int_{B^d} \tilde{v}(x) \left|\det J(x)\right| dx = 0. \qquad (29)$$

We introduce the space

$$\tilde{V} = \left\{\tilde{v} \in H^1(B^d) : \int_{B^d} \tilde{v}(x) \left|\det J(x)\right| dx = 0\right\}.$$

The Neumann problem now has the reformulation: find $\tilde{u} \in \tilde{V}$ such that

$$\tilde{\mathcal{A}}(\tilde{u}, \tilde{v}) = \tilde{\ell}_1(\tilde{v}) + \tilde{\ell}_2(\tilde{v}), \quad \forall \tilde{v} \in \tilde{V}. \qquad (30)$$

For the finite dimensional approximating problem, we use

$$\tilde{V}_n = \tilde{V} \cap \Pi_n. \qquad (31)$$

Then we want to find $\tilde{u}_n \in \tilde{V}_n$ such that

$$\tilde{\mathcal{A}}(\tilde{u}_n, \tilde{v}) = \tilde{\ell}_1(\tilde{v}) + \tilde{\ell}_2(\tilde{v}), \quad \forall \tilde{v} \in \tilde{V}_n. \qquad (32)$$

We can invoke the standard results of the Lax-Milgram Theorem and Cea's Lemma to obtain the existence of a unique solution $\tilde{u}_n$, and moreover,

$$\|\tilde{u} - \tilde{u}_n\|_1 \le c \inf_{\tilde{v} \in \tilde{V}_n} \|\tilde{u} - \tilde{v}\|_1 \qquad (33)$$

for some $c > 0$. A modification of the argument that led to (19) can be used to obtain a similar result for (33). First, however, we discuss the practical problem of choosing a basis for $\tilde{V}_n$.

3.1 Constructing a basis for $\tilde{V}_n$

Let $\{\varphi_j : 1 \le j \le N_n^d\}$ denote a basis for $\Pi_n$ (usually we choose $\{\varphi_j\}$ to be an orthogonal family in the norm of $L^2(B^d)$). We assume that $\varphi_1(x)$ is a nonzero constant function. Introduce the new basis elements

$$\hat{\varphi}_j = \varphi_j - \frac{C_j}{C}, \quad C_j = \int_{B^d} \varphi_j(x) \left|\det J(x)\right| dx, \quad 2 \le j \le N_n^d, \qquad (34)$$

$$C = \int_{B^d} \left|\det J(x)\right| dx = \|\det J\|_{L^1(B^d)}. \qquad (35)$$
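The construction (34)-(35) can be sketched numerically in a few lines. In the sketch below, the positive weight $|\det J|$ and the monomials standing in for the orthogonal basis are made up for illustration only:

```python
import numpy as np

# Quadrature grid on the unit disk: midpoint rule in r, trapezoidal in theta
nr, nt = 120, 240
r = (np.arange(nr) + 0.5) / nr
theta = 2 * np.pi * np.arange(nt) / nt
R, T = np.meshgrid(r, theta, indexing="ij")
X, Y = R * np.cos(T), R * np.sin(T)
W = R * (1.0 / nr) * (2 * np.pi / nt)   # weights for the area element r dr dtheta

detJ = 1.0 + 0.5 * X                     # hypothetical positive weight |det J(x)|
C = np.sum(detJ * W)                     # the constant C of (35)

phis = [np.ones_like(X), X, X * Y]       # phi_1 constant, then two sample monomials

def phi_hat(j):
    # (34): subtract the weighted mean so that phi_hat_j lies in V~_n
    Cj = np.sum(phis[j] * detJ * W)
    return phis[j] - Cj / C

for j in (1, 2):
    # each corrected basis function has zero |det J|-weighted mean
    assert abs(np.sum(phi_hat(j) * detJ * W)) < 1e-10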

Then

$$\int_{B^d} \hat{\varphi}_j(x) \left|\det J(x)\right| dx = \int_{B^d} \varphi_j(x) \left|\det J(x)\right| dx - \frac{C_j}{C} \int_{B^d} \left|\det J(x)\right| dx = C_j - C_j = 0.$$

Thus $\{\hat{\varphi}_j : 2 \le j \le N_n^d\}$ is a basis of $\tilde{V}_n$, and we can use it for our Galerkin procedure in (32).

3.2 The rate of convergence of $\tilde{u}_n$

Now we estimate $\inf_{v \in \tilde{V}_n} \|\tilde{u} - v\|_1$; see (33). Recalling (34), we consider the linear mapping $P : L^2(B^d) \to L^2(B^d)$ given by

$$(P\tilde{u})(x) = \tilde{u}(x) - \frac{1}{C} \int_{B^d} \left|\det J(y)\right| \tilde{u}(y)\, dy, \quad C = \|\det J\|_{L^1(B^d)};$$

see (35). The mapping $P$ is a projection:

$$(P^2\tilde{u})(x) = (P\tilde{u})(x) - \frac{1}{C} \int_{B^d} \left|\det J(y)\right| (P\tilde{u})(y)\, dy$$

$$= \tilde{u}(x) - \frac{1}{C} \int_{B^d} \left|\det J(y)\right| \tilde{u}(y)\, dy - \frac{1}{C} \int_{B^d} \left|\det J(y)\right| \tilde{u}(y)\, dy + \frac{1}{C^2} \underbrace{\int_{B^d} \left|\det J(y)\right| dy}_{=\,C} \int_{B^d} \left|\det J(z)\right| \tilde{u}(z)\, dz$$

$$= \tilde{u}(x) - \frac{1}{C} \int_{B^d} \left|\det J(y)\right| \tilde{u}(y)\, dy = (P\tilde{u})(x).$$

So $P^2 = P$, and $P$ is a projection. Its $L^2$ norm is bounded as follows:

$$\|P\tilde{u}\|_{L^2} = \left\|\tilde{u} - \frac{1}{C} \int_{B^d} \left|\det J(y)\right| \tilde{u}(y)\, dy\right\|_{L^2} \le \|\tilde{u}\|_{L^2} + \frac{1}{C} \left|\int_{B^d} \left|\det J(y)\right| \tilde{u}(y)\, dy\right| \|1\|_{L^2(B^d)}$$

$$\le \|\tilde{u}\|_{L^2} + \frac{\mathrm{vol}(B^d)^{1/2}}{C}\, \|\det J\|_{L^2}\, \|\tilde{u}\|_{L^2} \quad \text{(Cauchy-Schwarz)}$$

$$= \left(1 + \frac{\mathrm{vol}(B^d)^{1/2}\, \|\det J\|_{L^2}}{C}\right) \|\tilde{u}\|_{L^2} \equiv c_P \|\tilde{u}\|_{L^2},$$

which shows $\|P\|_{L^2 \to L^2} \le c_P$; and $\tilde{V} = P(H^1(B^d))$, see (29). For $\tilde{u} \in H^1(B^d)$ we also have $P\tilde{u} \in H^1(B^d)$, and here we again estimate the norm of $P$:

$$\|P\tilde{u}\|_{H^1}^2 = \|P\tilde{u}\|_{L^2}^2 + \|\nabla(P\tilde{u})\|_{L^2}^2 \le c_P^2 \|\tilde{u}\|_{L^2}^2 + \|\nabla\tilde{u}\|_{L^2}^2,$$

since $\nabla(P\tilde{u}) = \nabla\tilde{u}$. Furthermore $c_P \ge 1$, so

$$\|P\tilde{u}\|_{H^1}^2 \le c_P^2 \|\tilde{u}\|_{L^2}^2 + c_P^2 \|\nabla\tilde{u}\|_{L^2}^2 = c_P^2 \|\tilde{u}\|_{H^1}^2,$$

and we have also $\|P\|_{H^1 \to H^1} \le c_P$. For $\tilde{u} \in \tilde{V} = P(H^1(B^d))$ we can now estimate the minimal approximation error:

$$\min_{\tilde{p} \in \tilde{V}_n} \|\tilde{u} - \tilde{p}\|_{H^1} = \min_{p \in \Pi_n} \|P\tilde{u} - Pp\|_{H^1} \quad \text{($P$ is a projection, $\tilde{u} \in \mathrm{image}(P)$, and $\tilde{V}_n = P(\Pi_n)$)}$$

$$\le \|P\|_{H^1 \to H^1} \min_{p \in \Pi_n} \|\tilde{u} - p\|_{H^1} \le c_P \min_{p \in \Pi_n} \|\tilde{u} - p\|_{H^1},$$

and now we can apply the results from [6].

4 Implementation

Consider the implementation of the Galerkin method of §2 for the Neumann problem (1)-(2) over $\Omega$ by means of the reformulation (15) over the unit ball $B^d$. We are to find the function $\tilde{u}_n \in \Pi_n$ satisfying (17). To do so, we begin by

selecting a basis for $\Pi_n$, denoting it by $\{\varphi_1, \ldots, \varphi_N\}$, with $N \equiv N_n = \dim \Pi_n$. Generally we use a basis that is orthonormal in the norm of $L^2(B^d)$. It would probably be better to use a basis that is orthonormal in the norm of $H^1(B^d)$; for example, see [18]. We seek

$$\tilde{u}_n(x) = \sum_{k=1}^N \alpha_k \varphi_k(x). \qquad (36)$$

Then (17) is equivalent to

$$\sum_{k=1}^N \alpha_k \int_{B^d} \left[\sum_{i,j=1}^d a_{i,j}(x)\, \frac{\partial \varphi_k(x)}{\partial x_j}\, \frac{\partial \varphi_\ell(x)}{\partial x_i} + \tilde{\gamma}(x)\varphi_k(x)\varphi_\ell(x)\right] \left|\det J(x)\right| dx$$

$$= \int_{B^d} f(\Phi(x))\, \varphi_\ell(x) \left|\det J(x)\right| dx + \int_{\partial B^d} g(\Phi(x))\, \varphi_\ell(x) \left|J_{\mathrm{bdy}}(x)\right| dx, \quad \ell = 1, \ldots, N. \qquad (37)$$

The function $|J_{\mathrm{bdy}}(x)|$ arises from the transformation of an integral over $\partial\Omega$ to one over $\partial B^d$, associated with the change from $\ell_2$ to $\tilde{\ell}_2$ as discussed preceding (15). For example, in the planar case the boundary $\partial\Omega$ is often represented as a mapping

$$\chi(\theta) = (\chi_1(\theta), \chi_2(\theta)), \quad 0 \le \theta \le 2\pi.$$

In that case, $|J_{\mathrm{bdy}}|$ is simply $|\chi'(\theta)|$, and the associated integral is

$$\int_0^{2\pi} g(\chi(\theta))\, \varphi_\ell(\chi(\theta))\, |\chi'(\theta)|\, d\theta.$$

In (37) we need to calculate the orthonormal polynomials and their first partial derivatives, and we also need to approximate the integrals in the linear system. For an introduction to the topic of multivariate orthogonal polynomials, see Dunkl and Xu [10] and Xu [17]. For multivariate quadrature over the unit ball in $\mathbb{R}^d$, see Stroud [16].

For the Neumann problem (20)-(21) of §3, the implementation is basically the same. The basis $\{\varphi_1, \ldots, \varphi_N\}$ is modified as in (34), with the constant $C$ of (35) approximated using the quadrature in (42), given below.

4.1 The planar case

The dimension of $\Pi_n$ is

$$N_n = \frac{(n+1)(n+2)}{2}. \qquad (38)$$

For notation, we replace $x$ with $(x, y)$. How do we choose the orthonormal basis $\{\varphi_\ell(x, y)\}_{\ell=1}^{N_n}$ for $\Pi_n$? Unlike the situation for the single variable case, there are many possible orthonormal bases over $\overline{B^2} = \overline{D}$, the unit disk in $\mathbb{R}^2$. We have

chosen one that is particularly convenient for our computations. These are the "ridge polynomials" introduced by Logan and Shepp [12] for solving an image reconstruction problem. We summarize here the results needed for our work.

Let

$$V_n = \{P \in \Pi_n : (P, Q) = 0 \ \ \forall Q \in \Pi_{n-1}\},$$

the polynomials of degree $n$ that are orthogonal to all elements of $\Pi_{n-1}$. Then the dimension of $V_n$ is $n + 1$; moreover,

$$\Pi_n = V_0 \oplus V_1 \oplus \cdots \oplus V_n. \qquad (39)$$

It is standard to construct orthonormal bases of each $V_n$ and to then combine them to form an orthonormal basis of $\Pi_n$ using the latter decomposition. As an orthonormal basis of $V_n$ we use

$$\varphi_{n,k}(x, y) = \frac{1}{\sqrt{\pi}}\, U_n(x \cos(kh) + y \sin(kh)), \quad (x, y) \in \overline{D}, \quad h = \frac{\pi}{n+1}, \qquad (40)$$

for $k = 0, 1, \ldots, n$. The function $U_n$ is the Chebyshev polynomial of the second kind of degree $n$:

$$U_n(t) = \frac{\sin((n+1)\theta)}{\sin\theta}, \quad t = \cos\theta, \quad -1 \le t \le 1, \quad n = 0, 1, \ldots. \qquad (41)$$

The family $\{\varphi_{n,k}\}_{k=0}^n$ is an orthonormal basis of $V_n$. As a basis of $\Pi_n$, we order $\{\varphi_{n,k}\}$ lexicographically based on the ordering in (40) and (39):

$$\{\varphi_\ell\}_{\ell=1}^{N_n} = \{\varphi_{0,0},\, \varphi_{1,0},\, \varphi_{1,1},\, \varphi_{2,0},\, \ldots,\, \varphi_{n,0},\, \ldots,\, \varphi_{n,n}\}.$$

To calculate the first order partial derivatives of $\varphi_{n,k}(x, y)$, we need $U_n'(t)$. The values of $U_n(t)$ and $U_n'(t)$ are evaluated using the standard triple recursion relations

$$U_{n+1}(t) = 2t\, U_n(t) - U_{n-1}(t),$$

$$U_{n+1}'(t) = 2U_n(t) + 2t\, U_n'(t) - U_{n-1}'(t).$$

For the numerical approximation of the integrals in (37), which are over $\overline{B^2}$, the unit disk, we use the formula

$$\int_{\overline{B^2}} g(x, y)\, dx\, dy \approx \frac{2\pi}{2q+1} \sum_{l=0}^{q} \sum_{m=0}^{2q} \omega_l\, r_l\, \hat{g}\!\left(r_l, \frac{2\pi m}{2q+1}\right), \qquad (42)$$

where $\hat{g}(r, \theta) \equiv g(r\cos\theta, r\sin\theta)$. Here the numbers $\omega_l$ are the weights of the $(q+1)$-point Gauss-Legendre quadrature formula on $[0, 1]$, with nodes $r_l$. Note that

$$\int_0^1 p(x)\, dx = \sum_{l=0}^q p(r_l)\, \omega_l$$

for all single-variable polynomials $p(x)$ with $\deg(p) \le 2q + 1$. The formula (42) uses the trapezoidal rule with $2q+1$ subdivisions for the integration over $\overline{B^2}$ in the azimuthal variable. This quadrature is exact for all polynomials $g \in \Pi_{2q}$. This formula is also the basis of the hyperinterpolation formula discussed in [11].
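The pieces above (the recursion for $U_n$, the basis (40), and the quadrature (42)) fit together in a few lines of code. The sketch below checks the orthonormality of two ridge polynomials; for degree $n$ the integrand has degree $2n$, so (42) with $q \ge n$ integrates it exactly:

```python
import numpy as np

def U(n, t):
    # Chebyshev polynomials of the second kind via the triple recursion
    u_prev, u = np.ones_like(t), 2.0 * t
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * t * u - u_prev
    return u

def ridge(n, k, x, y):
    # Ridge polynomial basis (40) of V_n on the unit disk
    h = np.pi / (n + 1)
    return U(n, x * np.cos(k * h) + y * np.sin(k * h)) / np.sqrt(np.pi)

def disk_quad(g, q):
    # Quadrature (42): Gauss-Legendre in r on [0, 1], trapezoidal in theta
    x, w = np.polynomial.legendre.leggauss(q + 1)
    r, w = (x + 1) / 2, w / 2
    theta = 2 * np.pi * np.arange(2 * q + 1) / (2 * q + 1)
    Rg, Tg = np.meshgrid(r, theta, indexing="ij")
    vals = g(Rg * np.cos(Tg), Rg * np.sin(Tg))
    return (2 * np.pi / (2 * q + 1)) * np.sum(w[:, None] * Rg * vals)

# Orthonormality of two ridge polynomials of degree n = 3, using q = 5
n = 3
assert abs(disk_quad(lambda x, y: ridge(n, 0, x, y) ** 2, 5) - 1.0) < 1e-12
assert abs(disk_quad(lambda x, y: ridge(n, 0, x, y) * ridge(n, 1, x, y), 5)) < 1e-12
```

The same `U` recursion extends directly to $U_n'$ for the derivative values needed in (37).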

4.2 The three dimensional case

In the three dimensional case the dimension of $\Pi_n$ is given by

$$N_n = \binom{n+3}{3}$$

and we choose the following orthonormal polynomials on the unit ball:

$$\varphi_{m,j,\beta}(x) = c_{m,j}\, p_j^{(0,\, m-2j+1/2)}(2\|x\|^2 - 1)\, S_{\beta, m-2j}(x) = c_{m,j}\, \|x\|^{m-2j}\, p_j^{(0,\, m-2j+1/2)}(2\|x\|^2 - 1)\, S_{\beta, m-2j}\!\left(\frac{x}{\|x\|}\right), \qquad (43)$$

$$j = 0, \ldots, \lfloor m/2 \rfloor, \quad \beta = 0, 1, \ldots, 2(m - 2j), \quad m = 0, 1, \ldots, n.$$

The constants $c_{m,j}$ are given by

$$c_{m,j} = 2^{5/4 + m/2 - j},$$

and the functions $p_j^{(0,\, m-2j+1/2)}$ are the normalized Jacobi polynomials. The functions $S_{\beta, m-2j}$ are spherical harmonic functions, and they are orthonormal on the sphere $S^2 \subseteq \mathbb{R}^3$. See [1, 3] for the definition of these functions. In [3] one also finds the quadrature methods which we use to approximate the integrals over $B_1(0)$ in (14) and (15). The functional $\tilde{\ell}_2$ in (15) is given by

$$\tilde{\ell}_2(\tilde{v}) = \int_0^{2\pi}\!\! \int_0^{\pi} g(\Phi(\sigma(1, \theta, \phi)))\, \left\|(\Phi \circ \sigma)_\theta(1, \theta, \phi) \times (\Phi \circ \sigma)_\phi(1, \theta, \phi)\right\|\, \tilde{v}(\sigma(1, \theta, \phi))\, d\theta\, d\phi, \qquad (44)$$

where

$$\sigma(\rho, \theta, \phi) := \rho\, (\sin\theta \cos\phi,\ \sin\theta \sin\phi,\ \cos\theta) \qquad (45)$$

is the usual transformation between spherical and Cartesian coordinates, and the subscripts $\theta$, $\phi$ denote the partial derivatives. For the numerical approximation of the integral in (44) we use trapezoidal rules in the $\phi$ direction and Gauss-Legendre formulas in the $\theta$ direction.

5 Numerical examples

The construction of our examples is very similar to that given in [3] for the Dirichlet problem. Our first two transformations have been so chosen that we can invert the mapping $\Phi$ explicitly, to be able to better construct our test examples. This is not needed when applying the method, but it simplifies the construction of our test cases. Given $\Phi$, we need to calculate analytically the matrix

$$A(x) = J(x)^{-1} J(x)^{-T}. \qquad (46)$$
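Given a concrete $\Phi$, the matrix (46) can also be evaluated pointwise with a few lines of linear algebra. The sketch below uses a hypothetical smooth planar mapping (an illustration only, not one of the examples below) and checks that $A(x)$ is symmetric positive definite wherever $J(x)$ is nonsingular:

```python
import numpy as np

def jacobian(x, y, a=0.3):
    # Analytic Jacobian J = D(Phi) of the hypothetical mapping
    # Phi(x, y) = (x + a*x*y, y - a*x**2), used for illustration only
    return np.array([[1.0 + a * y, a * x],
                     [-2.0 * a * x, 1.0]])

def coefficient_matrix(x, y, a=0.3):
    """Return A(x) = J^{-1} J^{-T} of (46) and the weight |det J(x)|."""
    J = jacobian(x, y, a)
    Jinv = np.linalg.inv(J)
    return Jinv @ Jinv.T, abs(np.linalg.det(J))

A, w = coefficient_matrix(0.4, -0.2)
assert np.allclose(A, A.T)                          # A(x) is symmetric
assert np.all(np.linalg.eigvalsh(A) > 0) and w > 0  # ... and positive definite
```

The symmetry and positive definiteness confirmed here are exactly what the strong ellipticity (16) of the transformed bilinear form relies on.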

Figure 1: Images of (47), with $a = 0.5$, for lines of constant radius and constant azimuth on the unit disk.

5.1 The planar case

For our variables, we replace a point $x \in B^d$ with $(x, y)$, and we replace a point $s \in \Omega$ with $(s, t)$. Define the mapping $\Phi : \overline{B^2} \to \overline{\Omega}$, $(s, t) = \Phi(x, y)$, by

$$s = x - y + ax^2, \quad t = x + y, \qquad (47)$$

with $0 < a < 1$. It can be shown that $\Phi$ is a one-to-one mapping from the unit disk $\overline{B^2}$. In particular, the inverse mapping $\Psi : \overline{\Omega} \to \overline{B^2}$ is given by

$$x = \frac{1}{a}\left[-1 + \sqrt{1 + a(s+t)}\right], \quad y = \frac{1}{a}\left[1 + at - \sqrt{1 + a(s+t)}\right]. \qquad (48)$$

In Figure 1, we give the images in $\overline{\Omega}$ of the circles $r = j/10$, $j = 1, \ldots, 10$, and of equally spaced azimuthal lines on the unit disk. The following information is needed when implementing the transformation from $-\Delta u + \gamma u = f$ on $\Omega$ to a new equation on $\overline{B^2}$:

$$D\Phi = J(x, y) = \begin{pmatrix} 1 + 2ax & -1 \\ 1 & 1 \end{pmatrix}, \quad \det(J) = 2(1 + ax),$$

Figure 2: The function $u(s, t)$ of (50).

$$J(x, y)^{-1} = \frac{1}{2(1 + ax)} \begin{pmatrix} 1 & 1 \\ -1 & 1 + 2ax \end{pmatrix},$$

$$A = J^{-1} J^{-T} = \frac{1}{2(1 + ax)^2} \begin{pmatrix} 1 & ax \\ ax & 2a^2x^2 + 2ax + 1 \end{pmatrix}.$$

The latter are the coefficients needed to define $\tilde{\mathcal{A}}$ in (14). We give numerical results for solving the equation

$$-\Delta u(s, t) + e^{s-t}\, u(s, t) = f(s, t), \quad (s, t) \in \Omega. \qquad (49)$$

As a test case, we choose

$$u(s, t) = e^s \cos(\pi t), \quad (s, t) \in \overline{\Omega}. \qquad (50)$$

The solution is pictured in Figure 2. To find $f(s, t)$, we use (49) and (50). We use the domain parameter $a = 0.5$, with $\overline{\Omega}$ pictured in Figure 1. Numerical results are given in Table 1 for even values of $n$. The integrations in (37) were performed with (42); the integration parameter $q$ ranged from 10 to 30. We give the condition numbers of the linear system (37) as produced in Matlab. To calculate the error, we evaluate the numerical solution and the error on the grid

$$(x_{i,j}, y_{i,j}) = (r_i \cos\theta_j,\ r_i \sin\theta_j)$$

with equally spaced values $r_i \in [0, 1]$ and $\theta_j \in [0, 2\pi]$.
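The mapping (47), its inverse (48), and the determinant formula above can be verified numerically. The following sketch checks $\Psi(\Phi(x, y)) = (x, y)$ on random points of the unit disk and compares $\det J = 2(1 + ax)$ against a finite-difference Jacobian:

```python
import numpy as np

a = 0.5  # domain parameter used in the numerical example

def Phi(x, y):
    # The mapping (47): (s, t) = (x - y + a x^2, x + y)
    return x - y + a * x**2, x + y

def Psi(s, t):
    # The inverse mapping (48)
    root = np.sqrt(1.0 + a * (s + t))
    return (-1.0 + root) / a, (1.0 + a * t - root) / a

# Check Psi(Phi(x, y)) = (x, y) on random points of the unit disk
rng = np.random.default_rng(0)
r, th = rng.random(100), 2 * np.pi * rng.random(100)
x, y = r * np.cos(th), r * np.sin(th)
s, t = Phi(x, y)
xb, yb = Psi(s, t)
assert np.allclose(xb, x) and np.allclose(yb, y)

# det J = 2 (1 + a x) matches a finite-difference Jacobian determinant
eps = 1e-6
ds_dx = (np.array(Phi(x + eps, y)) - np.array(Phi(x - eps, y))) / (2 * eps)
ds_dy = (np.array(Phi(x, y + eps)) - np.array(Phi(x, y - eps))) / (2 * eps)
det_fd = ds_dx[0] * ds_dy[1] - ds_dy[0] * ds_dx[1]
assert np.allclose(det_fd, 2 * (1 + a * x), atol=1e-6)
```

Since $1 + a(s+t) = (1 + ax)^2$ for points coming from the disk, the square root in (48) is well defined and the inversion is exact.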

Table 1: Maximum errors $\|u - u_n\|_\infty$ in the Galerkin solution $u_n$ and condition numbers of the linear system (37), for even degrees $n$ (columns: $n$, $N_n$, $\|u - u_n\|_\infty$, cond).

Figure 3: Errors from Table 1.

The results are shown graphically in Figure 3. The use of a semi-log scale demonstrates the exponential convergence of the method as the degree increases. To examine experimentally the behaviour of the condition numbers for the linear system (37), we have graphed the condition numbers from Table 1 in Figure 4. Note that we are graphing $N_n$ vs. the condition number of the associated linear system. The graph seems to indicate that the condition number of the system (37) is directly proportional to the square of the order of the system, with the order given in (38).

Figure 4: Condition numbers from Table 1.

For the Poisson equation

$$-\Delta u(s, t) = f(s, t), \quad (s, t) \in \Omega,$$

with the same true solution as in (50), we use the numerical method given in §3. The numerical results are comparable. For example, with $n = 20$ we obtain $\|u - u_n\|_\infty = 9.9 \times 10^{-8}$, with a condition number of comparable size.

5.2 The three dimensional case

To illustrate that the proposed spectral method converges rapidly, we first use a simple test example. We choose the linear transformation

$$s := \Phi(x) = \begin{pmatrix} x_1 - 3x_2 \\ x_1 + x_2 \\ x_1 + x_2 + x_3 \end{pmatrix},$$

so that $B_1(0)$ is transformed to an ellipsoid $\Omega$; see Figure 5. For this transformation, $D\Phi$ and $J = \det(D\Phi)$ are constant functions. For a test solution, we use the function

$$u(s) = s_1\, e^{s_2} \sin(s_3), \qquad (51)$$

which is analytic in each variable. Table 2 shows the errors and the development of the condition numbers for the solution of (20)-(21) on $\Omega$. The associated graphs for the errors and condition numbers are shown in Figures 6 and 7, respectively. The graph of the error is consistent with exponential convergence, and the condition number seems to have a growth proportional to the square of the number of degrees of freedom $N_n$.

Next we study domains which are star shaped with respect to the origin,

$$\Omega = \{x \in \mathbb{R}^3 \mid x = \sigma(\rho, \theta, \phi),\ 0 \le \rho < R(\theta, \phi)\}. \qquad (52)$$

Figure 5: The boundary of $\Omega$.

See (45) for the definition of $\sigma$, and $R : S^2 \to (0, \infty)$ is assumed to be a $C^\infty$ function. In this case we can construct arbitrarily smooth and invertible mappings $\Phi : \overline{B_1(0)} \to \overline{\Omega}$, as we will show now. First we define a function $t : [0, 1] \to [0, 1]$,

$$t(\rho) := \begin{cases} 0, & 0 \le \rho \le 1/2, \\ (2\rho - 1)^{\tilde{s}}, & 1/2 < \rho \le 1; \end{cases} \qquad (53)$$

the parameter $\tilde{s} \in \mathbb{N}$ determines the smoothness of $t \in C^{\tilde{s}-1}[0, 1]$. For the following we will assume that $R(\theta, \phi) > 1$ for all $\theta$ and $\phi$; this can be arranged by an appropriate scaling of the problem. With the help of $t$ we define the function $\tilde{R}$, which is monotone increasing from $0$ to $R(\theta, \phi)$ on $[0, 1]$ and equal to the identity on $[0, 1/2]$:

$$\tilde{R}(\rho, \theta, \phi) := t(\rho)\, R(\theta, \phi) + (1 - t(\rho))\,\rho.$$

Because

$$\frac{\partial}{\partial \rho} \tilde{R}(\rho, \theta, \phi) = t'(\rho)\left(R(\theta, \phi) - \rho\right) + (1 - t(\rho)) > 0, \quad \rho \in [0, 1],$$

the function $\tilde{R}$ is an invertible function of $\rho$, of class $C^{\tilde{s}-1}$. The transformation $\Phi : \overline{B_1(0)} \to \overline{\Omega}$ is defined by

$$\Phi(x) := \sigma(\tilde{R}(\rho, \theta, \phi), \theta, \phi), \quad x = \sigma(\rho, \theta, \phi) \in \overline{B_1(0)}.$$

The properties of $\tilde{R}$ imply that $\Phi$ is equal to the identity on $B_{1/2}(0)$, and the outside shell $B_1(0) \setminus B_{1/2}(0)$ is deformed by $\Phi$ to cover $\Omega \setminus B_{1/2}(0)$.

Table 2: Maximum errors $\|u - u_n\|_\infty$ in the Galerkin solution $u_n$ and condition numbers for the first three dimensional example (columns: $n$, $N_n$, $\|u - u_n\|_\infty$, cond, for $n = 1, \ldots, 16$).

Figure 6: Errors from Table 2.

For a test surface, we use

$$R(\theta, \phi) = 2 + \frac{3}{4} \cos(2\phi) \sin^2(\theta)\left(7\cos^2(\theta) - 1\right) \qquad (54)$$

and $\tilde{s} = 5$; see Figures 8-9 for pictures of $\partial\Omega$. For our test example, we use $u$ from (51). The term $\cos(2\phi)\sin^2(\theta)(7\cos^2(\theta) - 1)$ is a spherical harmonic function, which shows $R \in C^\infty(S^2)$, and the factor $3/4$ is used to guarantee $R > 1$. For the transformation we get $\Phi \in C^4(\overline{B_1(0)})$, so we expect a convergence of order $O(n^{-4})$. Our spectral method will now approximate $\tilde{u}$ on the unit ball, a function which varies much more than the one in our first example. We also note that one might ask why we do not further increase $\tilde{s}$ (see (53)) to get a better order of convergence. It is possible to do this, but the price one pays is in larger derivatives of $\tilde{u}$, and this may result in larger errors for the range of $n$ values where we actually calculate the approximation. The search for an optimal $\tilde{s}$ is a problem on its own, but it also depends on the solution $u$. So we have chosen $\tilde{s} = 5$ in order to demonstrate our method, showing that the qualitative behaviour of the error is the same as in our earlier examples.

Figure 7: Condition numbers from Table 2.

The results of our calculations are given in Table 3, and the associated graphs of the errors and condition numbers are shown in Figures 10 and 11, respectively. The graph in Figure 11 shows that the condition numbers of the systems grow more slowly than in our first example, but again the condition numbers appear to be proportional to $N_n^2$. The graph of the error in Figure 10 again resembles a line, and this implies exponential convergence; but the line has a much smaller slope than in the first example, so that the error is only reduced to about $0.01$ when we use degree 16. What we expect is a convergence of order $O(n^{-4})$, but the graph does not reveal this behavior in the range of $n$ values we have used. Rather, the convergence appears to be exponential. In the future we plan on repeating this numerical example with an improved extension of the boundary mapping given in (54).

Figure 8: A view of $\partial\Omega$.

When given a mapping $\varphi : \partial B^d \to \partial\Omega$, it is often nontrivial to find an extension $\Phi : \overline{B^d} \to \overline{\Omega}$ with $\Phi|_{\partial B^d} = \varphi$ and with other needed properties. For example, consider a star-like region whose boundary surface $\partial\Omega$ is given by $\rho = R(\theta, \phi)$ with $R : S^2 \to (0, \infty)$. It might seem natural to use

$$\Phi(x) := \sigma(\rho\, R(\theta, \phi), \theta, \phi), \quad x = \sigma(\rho, \theta, \phi).$$

However, such a function is in general not continuously differentiable at $\rho = 0$. We are exploring this general problem, looking at ways of producing $\Phi$ with the properties that are needed for implementing our spectral method.

ACKNOWLEDGEMENTS. The authors would like to thank Professor Weimin Han for his careful proofreading of the manuscript.

References

[1] M. Abramowitz, I. A. Stegun, Handbook of Mathematical Functions, Dover Publications, New York, 1965.

[2] K. Atkinson, An Introduction to Numerical Analysis, 2nd ed., John Wiley, New York, 1989.

Figure 9: Another view of $\partial\Omega$.

[3] K. Atkinson, D. Chien, and O. Hansen, A spectral method for elliptic equations: the Dirichlet problem, Advances in Computational Mathematics, DOI: 10.1007/s10444-009-9125-8, to appear.

[4] K. Atkinson and W. Han, Theoretical Numerical Analysis: A Functional Analysis Framework, 2nd ed., Springer-Verlag, New York, 2005.

[5] S. Brenner and L. Scott, The Mathematical Theory of Finite Element Methods, Springer-Verlag, 1994.

[6] T. Bagby, L. Bos, and N. Levenberg, Multivariate simultaneous approximation, Constructive Approximation 18 (2002), 569-577.

Table 3: Maximum errors $\|u - u_n\|_\infty$ in the Galerkin solution $u_n$ and condition numbers for the star-shaped domain (columns: $n$, $N_n$, $\|u - u_n\|_\infty$, cond, for $n = 1, \ldots, 16$).

Figure 10: Errors from Table 3.

[7] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. Zang, Spectral Methods in Fluid Dynamics, Springer-Verlag, 1988.

[8] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. Zang, Spectral Methods: Fundamentals in Single Domains, Springer-Verlag, 2006.

[9] E. Doha and W. Abd-Elhameed, Efficient spectral-Galerkin algorithms for direct solution of second-order equations using ultraspherical polynomials, SIAM J. Sci. Comput. 24 (2002), 548-571.

[10] C. Dunkl and Y. Xu, Orthogonal Polynomials of Several Variables, Cambridge University Press, Cambridge, 2001.

[11] O. Hansen, K. Atkinson, and D. Chien, On the norm of the hyperinterpolation operator on the unit disk and its use for the solution of the nonlinear Poisson equation, IMA J. Numerical Analysis 29 (2009), 257-283, DOI: 10.1093/imanum/drm052.

[12] B. Logan and L. Shepp, Optimal reconstruction of a function from its projections, Duke Mathematical Journal 42 (1975), 645-659.

[13] W. McLean, Strongly Elliptic Systems and Boundary Integral Equations, Cambridge University Press, 2000.

[14] D. Ragozin, Constructive polynomial approximation on spheres and projective spaces, Trans. Amer. Math. Soc. 162 (1971), 157-170.

Figure 11: Condition numbers from Table 3.

[15] J. Shen and L. Wang, Analysis of a spectral-Galerkin approximation to the Helmholtz equation in exterior domains, SIAM J. Numer. Anal. 45 (2007), 1954-1978.

[16] A. Stroud, Approximate Calculation of Multiple Integrals, Prentice-Hall, Englewood Cliffs, NJ, 1971.

[17] Yuan Xu, Lecture notes on orthogonal polynomials of several variables, in Advances in the Theory of Special Functions and Orthogonal Polynomials, Nova Science Publishers, 2004, 135-188.

[18] Yuan Xu, A family of Sobolev orthogonal polynomials on the unit ball, J. Approx. Theory 138 (2006), 232-241.