A Spectral Method for Elliptic Equations: The Neumann Problem


Kendall Atkinson
Departments of Mathematics & Computer Science, The University of Iowa

Olaf Hansen, David Chien
Department of Mathematics, California State University San Marcos

Abstract. Let Ω be an open, simply connected, and bounded region in R^d, d ≥ 2, and assume its boundary ∂Ω is smooth. Consider solving the elliptic partial differential equation −Δu + γu = f over Ω with a Neumann boundary condition. The problem is converted to an equivalent elliptic problem over the unit ball, and then a spectral method is given that uses a special polynomial basis. In the case that the Neumann problem is uniquely solvable, and with sufficiently smooth problem parameters, the method is shown to have very rapid convergence. Numerical examples illustrate exponential convergence.

1 INTRODUCTION

Consider solving the Neumann problem for the following elliptic equation:

$$-\Delta u + \gamma(s)u = f(s), \qquad s = (s_1, \dots, s_d)^T \in \Omega, \tag{1}$$
$$\frac{\partial u(s)}{\partial n_s} = g(s), \qquad s \in \partial\Omega. \tag{2}$$

Assume Ω is an open, simply connected, and bounded region in R^d, d ≥ 2, and that its boundary ∂Ω is several times continuously differentiable. Similarly, assume the functions γ(s) and f(s) are several times continuously differentiable over Ω̄, and that g(s) is several times continuously differentiable over the boundary ∂Ω. The full power of the method can be seen when the boundary and the solution are well behaved.

There is a rich literature on spectral methods for solving partial differential equations. From the more recent literature, we cite [5], [6], [7], [14], [15]. Their

bibliographies contain references to earlier papers on spectral methods. The present paper is a continuation of the work in [1], in which a spectral method is given for a general elliptic equation with a Dirichlet boundary condition. Our approach is somewhat different from the standard approaches. We convert the partial differential equation to an equivalent problem on the unit disk or unit ball, and in the process we are required to work with a more complicated equation. Our approach is reminiscent of the use of conformal mappings for planar problems. Conformal mappings can be used with our approach when working on planar problems, although having a conformal mapping is not necessary.

In §2 we assume that (1)-(2) is uniquely solvable, and we present a spectral Galerkin method for its solution. In §3 we extend the method to the problem with γ(s) ≡ 0 in Ω. The problem is then no longer uniquely solvable, and we extend our spectral method to this case. The implementation of the method is discussed in §4, and it is illustrated in §5.

2 A spectral method for the uniquely solvable case

We assume the Neumann problem (1)-(2) is uniquely solvable. This is true, for example, if

$$\gamma(s) \ge c_\gamma > 0, \qquad s \in \Omega, \tag{3}$$

for some constant c_γ > 0. For functions u ∈ H²(Ω) and v ∈ H¹(Ω),

$$\int_\Omega v(s)\left[-\Delta u(s) + \gamma(s)u(s)\right]ds = \int_\Omega \left[\nabla u(s)\cdot\nabla v(s) + \gamma(s)u(s)v(s)\right]ds - \int_{\partial\Omega} v(s)\,\frac{\partial u(s)}{\partial n_s}\,ds. \tag{4}$$

Introduce the bilinear functional

$$\mathcal{A}(v_1, v_2) = \int_\Omega \left[\nabla v_1(s)\cdot\nabla v_2(s) + \gamma(s)v_1(s)v_2(s)\right]ds. \tag{5}$$

The variational form of the Neumann problem (1)-(2) is as follows: find u such that

$$\mathcal{A}(u, v) = \ell_1(v) + \ell_2(v), \qquad v \in H^1(\Omega), \tag{6}$$

with the linear functionals defined by

$$\ell_1(v) = \int_\Omega v(s)f(s)\,ds, \tag{7}$$
$$\ell_2(v) = \int_{\partial\Omega} v(s)g(s)\,ds. \tag{8}$$

The norms we use for ℓ₁ and ℓ₂ are the standard operator norms when regarding ℓ₁ and ℓ₂ as linear functionals on H¹(Ω). The functional ℓ₁ is bounded easily on H¹(Ω):

$$|\ell_1(v)| \le \|f\|_{L^2}\,\|v\|_{L^2} \le \|f\|_{L^2}\,\|v\|_{H^1}. \tag{9}$$

As is common, we often write ‖v‖ in place of ‖v‖_{H¹}; ‖v‖ should not be confused with ‖v‖_{L²}. The functional ℓ₂ is also bounded (at least for bounded domains Ω). To show this, begin by noting that the restriction map ρ : H¹(Ω) → H^{1/2}(∂Ω) is continuous [11, Th. 3.37] and the embedding ι : H^{1/2}(∂Ω) → L²(∂Ω) is compact [11, Th. 3.27]. If we further denote by ℓ_g the continuous mapping

$$\ell_g : u \mapsto \int_{\partial\Omega} u(s)g(s)\,ds, \qquad u \in L^2(\partial\Omega),$$

then we see that ℓ₂ = ℓ_g ∘ ι ∘ ρ, and therefore ℓ₂ is bounded.

It is straightforward to show that A is bounded,

$$|\mathcal{A}(v, w)| \le c_A\,\|v\|\,\|w\|, \qquad c_A = \max\{1, \|\gamma\|_\infty\}. \tag{10}$$

In addition, we assume A is strongly elliptic on H¹(Ω),

$$\mathcal{A}(v, v) \ge c_e\,\|v\|^2, \qquad v \in H^1(\Omega), \tag{11}$$

with some c_e > 0. This follows ordinarily from showing the unique solvability of the Neumann problem (1)-(2). If (3) is satisfied, then we can satisfy (11) with c_e = min{1, c_γ}.

Under our assumptions on A, including the strong ellipticity in (11), the Lax-Milgram Theorem implies the existence of a unique solution u to (6) with

$$\|u\| \le \frac{1}{c_e}\left[\,\|\ell_1\| + \|\ell_2\|\,\right]. \tag{12}$$

Our spectral method is defined using polynomial approximations over the open unit ball in R^d, call it B_d. Introduce a change of variables

$$s = \Phi(x), \qquad x = (x_1, \dots, x_d)^T \in \bar B_d,$$

with Φ a twice-differentiable mapping that is one-to-one from B̄_d onto Ω̄, and let Ψ = Φ^{−1} : Ω̄ → B̄_d. [We comment later on the creation of Φ for cases in which only the boundary mapping φ : ∂B_d → ∂Ω is known.] For v ∈ L²(Ω), let

$$\tilde v(x) = v(\Phi(x)), \qquad x \in \bar B_d \subseteq \mathbb{R}^d,$$

and conversely,

$$v(s) = \tilde v(\Psi(s)), \qquad s \in \bar\Omega \subseteq \mathbb{R}^d.$$

Assuming v ∈ H¹(Ω), we can show

$$\nabla_x \tilde v(x) = J(x)^T \nabla_s v(s), \qquad s = \Phi(x),$$

with J(x) the Jacobian matrix for Φ over the unit ball B_d,

$$J(x) \equiv (D\Phi)(x) = \left[\frac{\partial \Phi_i(x)}{\partial x_j}\right]_{i,j=1}^d, \qquad x \in \bar B_d.$$

Similarly,

$$\nabla_s v(s) = K(s)^T \nabla_x \tilde v(x), \qquad x = \Psi(s),$$

with K(s) the Jacobian matrix for Ψ over Ω̄. Also,

$$K(\Phi(x)) = J(x)^{-1}. \tag{13}$$

Using the change of variables s = Φ(x), the formula (5) converts to

$$\begin{aligned}\mathcal{A}(v_1, v_2) &= \int_{B_d}\Big\{\big[K(\Phi(x))^T\nabla_x\tilde v_1(x)\big]^T\big[K(\Phi(x))^T\nabla_x\tilde v_2(x)\big] + \gamma(\Phi(x))\,v_1(\Phi(x))\,v_2(\Phi(x))\Big\}\det[J(x)]\,dx\\
&= \int_{B_d}\Big\{\big[J(x)^{-T}\nabla_x\tilde v_1(x)\big]^T\big[J(x)^{-T}\nabla_x\tilde v_2(x)\big] + \tilde\gamma(x)\,\tilde v_1(x)\,\tilde v_2(x)\Big\}\det[J(x)]\,dx\\
&= \int_{B_d}\Big\{\nabla_x\tilde v_1(x)^T A(x)\,\nabla_x\tilde v_2(x) + \tilde\gamma(x)\,\tilde v_1(x)\,\tilde v_2(x)\Big\}\det[J(x)]\,dx\\
&\equiv \tilde{\mathcal{A}}(\tilde v_1, \tilde v_2)\end{aligned} \tag{14}$$

with

$$A(x) = J(x)^{-1}J(x)^{-T}.$$

We can also introduce analogues of ℓ₁ and ℓ₂ following the change of variables, calling them ℓ̃₁ and ℓ̃₂ and defined on H¹(B_d). For example,

$$\tilde\ell_1(\tilde v) = \int_{B_d}\tilde v(x)\,f(\Phi(x))\det[J(x)]\,dx.$$

We can then convert (6) to an equivalent problem over H¹(B_d). The variational problem becomes

$$\tilde{\mathcal{A}}(\tilde u, \tilde v) = \tilde\ell_1(\tilde v) + \tilde\ell_2(\tilde v), \qquad \tilde v \in H^1(B_d). \tag{15}$$

The assumptions and results in (6)-(12) extend to this new problem on H¹(B_d). The strong ellipticity condition (11) becomes

$$\tilde{\mathcal{A}}(\tilde v, \tilde v) \ge \tilde c_e\,\|\tilde v\|^2, \qquad \tilde v \in H^1(B_d), \tag{16}$$

$$\tilde c_e = c_e\,\frac{\min_{x\in\bar B_d}\det J(x)}{\max\left\{1,\;\max_{x\in\bar B_d}\|J(x)\|^2\right\}}, \tag{17}$$

where ‖J(x)‖ denotes the operator matrix 2-norm of J(x) on R^d. Also,

$$|\tilde{\mathcal{A}}(\tilde v, \tilde w)| \le \tilde c_A\,\|\tilde v\|\,\|\tilde w\|, \qquad \tilde c_A = \max_{x\in\bar B_d}\det[J(x)]\cdot\max\left\{\max_{x\in\bar B_d}\|A(x)\|,\;\|\tilde\gamma\|_\infty\right\}.$$

For the finite dimensional problem, we use the approximating subspace Π_n ≡ Π_n^d, the polynomials of degree ≤ n in d variables. We want to find ũ_n ∈ Π_n such that

$$\tilde{\mathcal{A}}(\tilde u_n, \tilde v) = \tilde\ell_1(\tilde v) + \tilde\ell_2(\tilde v), \qquad \tilde v \in \Pi_n. \tag{18}$$

The Lax-Milgram Theorem (cf. [2, §8.3], [3, §2.7]) implies the existence of ũ_n for all n. For the error in this Galerkin method, Céa's Lemma (cf. [2, p. 365], [3]) implies the convergence of ũ_n to ũ, and moreover,

$$\|\tilde u - \tilde u_n\| \le \frac{\tilde c_A}{\tilde c_e}\,\inf_{\tilde v\in\Pi_n}\|\tilde u - \tilde v\|. \tag{19}$$

It remains to bound the best approximation error on the right side of this inequality. Ragozin [13] gives bounds on the rate of convergence of best polynomial approximation over the unit ball, and these results are extended in [4] to the simultaneous approximation of a function and some of its lower order derivatives. Assume ũ ∈ C^{m+1}(B̄_d). Using [4, Theorem 1], we have

$$\inf_{\tilde v\in\Pi_n}\|\tilde u - \tilde v\| \le \frac{c(\tilde u, m)}{n^m}\,\omega_{\tilde u, m+1}\!\left(\frac1n\right) \tag{20}$$

with

$$\omega_{\tilde u, m+1}(\delta) = \sup_{|\alpha| = m+1}\;\sup_{|x - y|\le\delta}\left|D^\alpha\tilde u(x) - D^\alpha\tilde u(y)\right|.$$

The notation D^α ũ(x) is standard derivative notation with α a multi-integer: for α = (α₁, ..., α_d),

$$D^\alpha\tilde u(x) = \frac{\partial^{|\alpha|}\tilde u(x_1, \dots, x_d)}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}.$$

When (20) is combined with (19), we see that our solutions ũ_n converge to ũ faster than any power of 1/n provided ũ ∈ C^∞(B̄_d). Although we prove convergence only in the H¹ norm, our numerical examples use the C(B̄_d) norm, for convenience. We note that the space C(B̄_d) is not embedded in H¹(B_d).

From the definition of c̃_e in (17) and its use in (19), it seems desirable to maximize the quantity

$$\min_{x\in\bar B_d}\det J(x) \tag{21}$$

rather than just asking that it be nonzero. It seems likely that the smaller the value of c̃_e, the greater the ill-conditioning in the transformed problem (18). There appears to be little known in general about choosing a transformation Φ so as to maximize the size of c̃_e or the quantity in (21). Later in the paper we comment further on the creation of a mapping Φ.
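The change of variables above only needs J(x), det J(x), and A(x) = J(x)^{-1}J(x)^{-T} at quadrature points. A minimal sketch (not the paper's code) of producing these quantities for a user-supplied planar map Φ, with the Jacobian approximated by central differences:

```python
import math

def jacobian(Phi, x, y, h=1e-6):
    """2x2 Jacobian J[i][j] = dPhi_i/dx_j at (x, y), by central differences."""
    s1p, t1p = Phi(x + h, y); s1m, t1m = Phi(x - h, y)
    s2p, t2p = Phi(x, y + h); s2m, t2m = Phi(x, y - h)
    return [[(s1p - s1m) / (2*h), (s2p - s2m) / (2*h)],
            [(t1p - t1m) / (2*h), (t2p - t2m) / (2*h)]]

def det2(J):
    return J[0][0]*J[1][1] - J[0][1]*J[1][0]

def transform_matrix(J):
    """A = J^{-1} J^{-T}, the coefficient matrix of the transformed form (14)."""
    d = det2(J)
    Jinv = [[ J[1][1]/d, -J[0][1]/d],
            [-J[1][0]/d,  J[0][0]/d]]
    # A[i][j] = sum_k Jinv[i][k] * Jinv[j][k]  (symmetric by construction)
    return [[sum(Jinv[i][k]*Jinv[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

For instance, with the planar test map Φ(x, y) = (x − y + a x², x + y) used later in the paper, det J(x, y) = 2(1 + ax), which the finite-difference Jacobian reproduces to high accuracy.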

3 A spectral method for −Δu = f

Consider the Neumann problem for Poisson's equation:

$$-\Delta u = f(s), \qquad s \in \Omega, \tag{22}$$
$$\frac{\partial u(s)}{\partial n_s} = g(s), \qquad s \in \partial\Omega. \tag{23}$$

As a reference for this problem, see [3, §5.2]. As earlier in (4), we have for functions u ∈ H²(Ω), v ∈ H¹(Ω),

$$-\int_\Omega v(s)\,\Delta u(s)\,ds = \int_\Omega \nabla u(s)\cdot\nabla v(s)\,ds - \int_{\partial\Omega} v(s)\,\frac{\partial u(s)}{\partial n_s}\,ds. \tag{24}$$

If this Neumann problem (22)-(23) is solvable, then its solution is not unique: any constant added to a solution gives another solution. In addition, if (22)-(23) is solvable, then

$$\int_\Omega v(s)f(s)\,ds = \int_\Omega \nabla u(s)\cdot\nabla v(s)\,ds - \int_{\partial\Omega} v(s)g(s)\,ds. \tag{25}$$

Choosing v ≡ 1, we obtain

$$\int_\Omega f(s)\,ds = -\int_{\partial\Omega} g(s)\,ds. \tag{26}$$

This is a necessary and sufficient condition on the functions f and g in order that (22)-(23) be solvable. With this constraint, the Neumann problem is solvable. To deal with the non-unique solvability, we look for the solution u satisfying

$$\int_\Omega u(s)\,ds = 0. \tag{27}$$

Introduce the bilinear functional

$$\mathcal{A}(v_1, v_2) = \int_\Omega \nabla v_1(s)\cdot\nabla v_2(s)\,ds \tag{28}$$

and the function space

$$V = \left\{v \in H^1(\Omega) : \int_\Omega v(s)\,ds = 0\right\}. \tag{29}$$

A is bounded,

$$|\mathcal{A}(v, w)| \le \|v\|\,\|w\|, \qquad v, w \in V.$$

From [3, Prop. 5.3.2], A(·, ·) is strongly elliptic on V, satisfying

$$\mathcal{A}(v, v) \ge c_e\,\|v\|^2, \qquad v \in V,$$

for some c_e > 0. The variational form of the Neumann problem (22)-(23) is as follows: find u ∈ V such that

$$\mathcal{A}(u, v) = \ell_1(v) + \ell_2(v), \qquad v \in V, \tag{30}$$

with ℓ₁ and ℓ₂ defined as in (7)-(8). As before, the Lax-Milgram Theorem implies the existence of a unique solution u to (30) with

$$\|u\| \le \frac{1}{c_e}\left[\,\|\ell_1\| + \|\ell_2\|\,\right].$$

As in the preceding section, we transform the problem from being defined over Ω to being defined over B_d. Most of the arguments are repeated, and we have

$$\tilde{\mathcal{A}}(\tilde v_1, \tilde v_2) = \int_{B_d}\nabla_x\tilde v_1(x)^T A(x)\,\nabla_x\tilde v_2(x)\,\det[J(x)]\,dx.$$

The condition (27) becomes

$$\int_{B_d}\tilde v(x)\det[J(x)]\,dx = 0. \tag{31}$$

We introduce the space

$$\tilde V = \left\{\tilde v \in H^1(B_d) : \int_{B_d}\tilde v(x)\det[J(x)]\,dx = 0\right\}.$$

The Neumann problem now has the reformulation: find ũ ∈ Ṽ such that

$$\tilde{\mathcal{A}}(\tilde u, \tilde v) = \tilde\ell_1(\tilde v) + \tilde\ell_2(\tilde v), \qquad \tilde v \in \tilde V. \tag{32}$$

For the finite dimensional approximating problem, we use

$$\tilde V_n = \tilde V \cap \Pi_n. \tag{33}$$

Then we want to find ũ_n ∈ Ṽ_n such that

$$\tilde{\mathcal{A}}(\tilde u_n, \tilde v) = \tilde\ell_1(\tilde v) + \tilde\ell_2(\tilde v), \qquad \tilde v \in \tilde V_n. \tag{34}$$

We can invoke the standard results of the Lax-Milgram Theorem and Céa's Lemma to obtain the existence of a unique solution ũ_n, and moreover,

$$\|\tilde u - \tilde u_n\| \le c\,\inf_{v\in\tilde V_n}\|\tilde u - v\| \tag{35}$$

for some c > 0. A modification of the argument that led to (20) can be used to obtain a similar result for (35). First, however, we discuss the practical problem of choosing a basis for Ṽ_n.
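The basis construction described in the next subsection amounts to subtracting a weighted mean from each basis function so that the integral constraint defining Ṽ holds. A hedged numerical sketch (the quadrature rule and det J below are stand-ins, not the paper's choices):

```python
import math

def disk_rule(nr=60, nt=64):
    """Simple product rule on the unit disk: midpoint in r, trapezoid in theta."""
    pts, wts = [], []
    for i in range(nr):
        r = (i + 0.5) / nr
        for k in range(nt):
            th = 2*math.pi*k/nt
            pts.append((r*math.cos(th), r*math.sin(th)))
            wts.append(r * (1.0/nr) * (2*math.pi/nt))   # r dr dtheta
    return pts, wts

def corrected_basis(basis, detJ, pts, wts):
    """Subtract the weighted mean so each returned function f satisfies
    int f(x) detJ(x) dx = 0; the constant basis function phi_1 drops out."""
    C = sum(w*detJ(x, y) for (x, y), w in zip(pts, wts))       # the constant C
    out = []
    for phi in basis[1:]:
        m = sum(w*phi(x, y)*detJ(x, y) for (x, y), w in zip(pts, wts))
        out.append(lambda x, y, phi=phi, c=m/C: phi(x, y) - c)
    return out
```

By construction the constrained integral of each corrected function vanishes exactly (up to rounding), whatever quadrature rule is used, since the same rule defines the subtracted mean.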

3.1 Constructing a basis for Ṽ_n

Let {φ_j : 1 ≤ j ≤ N_n^d} denote a basis for Π_n (usually we choose {φ_j} to be an orthogonal family in the norm of L²(B_d)). We assume that φ₁(x) is a nonzero constant function. Introduce the new basis elements

$$\tilde\varphi_j = \varphi_j - \frac{1}{C}\int_{B_d}\varphi_j(x)\det[J(x)]\,dx, \qquad 2 \le j \le N_n^d, \tag{36}$$

with

$$C = \int_{B_d}\det[J(x)]\,dx = \|\det[J]\|_{L^1}. \tag{37}$$

Then φ̃₁ = 0, and

$$\int_{B_d}\tilde\varphi_j(x)\det[J(x)]\,dx = \int_{B_d}\varphi_j(x)\det[J(x)]\,dx - \frac{1}{C}\left[\int_{B_d}\varphi_j(x)\det[J(x)]\,dx\right]\left[\int_{B_d}\det[J(x)]\,dx\right] = 0.$$

Thus {φ̃_j : 2 ≤ j ≤ N_n^d} is a basis of Ṽ_n, and we can use it for our Galerkin procedure in (34). Although we choose the basis {φ_j} to be orthogonal in L²(B_d), the resulting basis {φ̃_j} is quite unlikely to be orthogonal, especially in the norm of H¹(B_d). Nonetheless, we want a basis that is linearly independent in a fairly strong way, to minimize ill-conditioning in the linear system associated with (34); and choosing {φ_j} orthogonal in L²(B_d) seems to help a great deal, at least when observed experimentally. We do not have any theoretical results on the size of the condition numbers associated with the linear system obtained from (34). These types of results have been investigated in the past for variational methods (e.g., see Mikhlin [12]), although those results do not appear to apply directly to our method. This is a subject for further investigation.

3.2 The rate of convergence of ũ_n

Now we estimate inf_{v∈Ṽ_n} ‖ũ − v‖; see (35). Recalling (36), we consider the linear mapping P : L²(B_d) → L²(B_d) given by

$$(P\tilde u)(x) = \tilde u(x) - \frac{1}{C}\int_{B_d}\det[J(y)]\,\tilde u(y)\,dy, \qquad C = \|\det[J]\|_{L^1};$$

see (37). The mapping P is a projection:

$$\begin{aligned}(P^2\tilde u)(x) &= (P\tilde u)(x) - \frac{1}{C}\int_{B_d}\det[J(y)]\,(P\tilde u)(y)\,dy\\
&= \tilde u(x) - \frac{1}{C}\int_{B_d}\det[J(y)]\,\tilde u(y)\,dy - \frac{1}{C}\int_{B_d}\det[J(y)]\left(\tilde u(y) - \frac{1}{C}\int_{B_d}\det[J(z)]\,\tilde u(z)\,dz\right)dy\\
&= \tilde u(x) - \frac{2}{C}\int_{B_d}\det[J(y)]\,\tilde u(y)\,dy + \frac{1}{C^2}\int_{B_d}\det[J(y)]\,dy\int_{B_d}\det[J(z)]\,\tilde u(z)\,dz\\
&= \tilde u(x) - \frac{1}{C}\int_{B_d}\det[J(y)]\,\tilde u(y)\,dy = (P\tilde u)(x).\end{aligned}$$

So P² = P and P is a projection; as a consequence, ‖P‖_{L²→L²} ≥ 1. Also, using the Cauchy-Schwarz inequality and the fact that the subtracted term is a constant function on B_d,

$$\|P\tilde u\|_{L^2} \le \|\tilde u\|_{L^2} + \frac{1}{C}\,\|\mathbf{1}\|_{L^2(B_d)}\left|\int_{B_d}\det[J(y)]\,\tilde u(y)\,dy\right| \le \left(1 + \sqrt{\frac{\pi^{d/2}}{\Gamma(1 + \frac d2)}}\;\frac{\|\det[J]\|_{L^2}}{\|\det[J]\|_{L^1}}\right)\|\tilde u\|_{L^2} \equiv c_P\,\|\tilde u\|_{L^2},$$

which shows ‖P‖_{L²→L²} ≤ c_P; and Ṽ = P(H¹(B_d)), see (31). For ũ ∈ H¹(B_d) we also have Pũ ∈ H¹(B_d), and here we again estimate the norm of P:

$$\|P\tilde u\|_{H^1}^2 = \|P\tilde u\|_{L^2}^2 + \|\nabla(P\tilde u)\|_{L^2}^2 \le c_P^2\,\|\tilde u\|_{L^2}^2 + \|\nabla\tilde u\|_{L^2}^2,$$

since ∇(Pũ) = ∇ũ. Furthermore c_P ≥ 1, so

$$\|P\tilde u\|_{H^1}^2 \le c_P^2\,\|\tilde u\|_{L^2}^2 + c_P^2\,\|\nabla\tilde u\|_{L^2}^2 = c_P^2\,\|\tilde u\|_{H^1}^2, \qquad \|P\tilde u\|_{H^1} \le c_P\,\|\tilde u\|_{H^1},$$

and we also have ‖P‖_{H¹→H¹} ≤ c_P. For ũ ∈ Ṽ = P(H¹(B_d)) we can now estimate the minimal approximation error:

$$\min_{p\in\tilde V_n}\|\tilde u - p\|_{H^1} = \min_{p\in\tilde V_n}\|P\tilde u - p\|_{H^1} \qquad\text{(P is a projection and ũ ∈ image(P))}$$
$$= \min_{p\in\Pi_n}\|P\tilde u - Pp\|_{H^1} \qquad\text{(because Ṽ_n = P(Π_n))}$$
$$\le \|P\|_{H^1\to H^1}\,\min_{p\in\Pi_n}\|\tilde u - p\|_{H^1} \le c_P\,\min_{p\in\Pi_n}\|\tilde u - p\|_{H^1},$$

and now we can apply the results from [4].

4 Implementation

Consider the implementation of the Galerkin method of §2 for the Neumann problem (1)-(2) over Ω by means of the reformulation (15) over the unit ball B_d. We are to find the function ũ_n ∈ Π_n satisfying (18). To do so, we begin by selecting a basis for Π_n, denoting it by {φ₁, ..., φ_N}, with N ≡ N_n ≡ N_n^d = dim Π_n, where the specific meaning should be clear from the context. Generally we use a basis that is orthonormal in the norm of L²(B_d). It would probably be better to use a basis that is orthonormal in the norm of H¹(B_d); for example, see [8]. We seek

$$\tilde u_n(x) = \sum_{k=1}^N \alpha_k\,\varphi_k(x). \tag{38}$$

Then (18) is equivalent to

$$\sum_{k=1}^N \alpha_k\int_{B_d}\left\{\sum_{i,j=1}^d a_{i,j}(x)\,\frac{\partial\varphi_k(x)}{\partial x_j}\,\frac{\partial\varphi_l(x)}{\partial x_i} + \tilde\gamma(x)\,\varphi_k(x)\,\varphi_l(x)\right\}\det[J(x)]\,dx$$
$$= \int_{B_d}\tilde f(x)\,\varphi_l(x)\det[J(x)]\,dx + \int_{\partial B_d}\tilde g(x)\,\varphi_l(x)\,J_{bdy}(x)\,dx, \qquad l = 1, \dots, N. \tag{39}$$

The function J_{bdy}(x) arises from the transformation of an integral over ∂Ω to one over ∂B_d, associated with the change from ℓ₂ to ℓ̃₂ as discussed preceding (15). For example, in one variable the boundary ∂Ω is often represented as a mapping

$$\chi(\theta) = (\chi_1(\theta), \chi_2(\theta)), \qquad 0 \le \theta \le 2\pi.$$

In that case, J_{bdy} is simply |χ′(θ)| and the associated integral is

$$\int_0^{2\pi}\tilde g(\chi(\theta))\,\varphi_l(\chi(\theta))\,|\chi'(\theta)|\,d\theta.$$
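The left-hand side of (39) is a stiffness-plus-mass matrix evaluated by quadrature. A minimal sketch of the assembly loop in the planar case, under stated assumptions (the coefficient callables `Amat`, `gamma`, `detJ` and the simple polar product rule are illustrative stand-ins, not the paper's implementation):

```python
import math

def polar_rule(nr=40, nt=48):
    """Midpoint-in-r, uniform-in-theta product rule on the unit disk."""
    pts, wts = [], []
    for i in range(nr):
        r = (i + 0.5)/nr
        for k in range(nt):
            th = 2*math.pi*k/nt
            pts.append((r*math.cos(th), r*math.sin(th)))
            wts.append(r/nr * 2*math.pi/nt)
    return pts, wts

def assemble(vals, grads, Amat, gamma, detJ, pts, wts):
    """M[l][k] = int_B { grad(phi_k) . A grad(phi_l) + gamma phi_k phi_l } detJ dx,
    the interior part of the Galerkin system (39), d = 2."""
    N = len(vals)
    M = [[0.0]*N for _ in range(N)]
    for (x, y), w in zip(pts, wts):
        a = Amat(x, y); dj = detJ(x, y); g = gamma(x, y)
        vv = [v(x, y) for v in vals]
        gv = [gr(x, y) for gr in grads]
        for l in range(N):
            for k in range(N):
                diff = sum(a[i][j]*gv[k][j]*gv[l][i]
                           for i in range(2) for j in range(2))
                M[l][k] += w*dj*(diff + g*vv[k]*vv[l])
    return M
```

As a sanity check, with the identity transformation (A = I, det J = 1) and γ = 0, the diagonal entry for the basis function φ(x, y) = x is the area of the disk, π.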

In (39) we need to calculate the orthonormal polynomials and their first partial derivatives; and we also need to approximate the integrals in the linear system. For an introduction to the topic of multivariate orthogonal polynomials, see Dunkl and Xu [8] and Xu [17]. For multivariate quadrature over the unit ball in R^d, see Stroud [16].

For the Neumann problem (22)-(23) of §3, the implementation is basically the same. The basis {φ₁, ..., φ_N} is modified as in (36), with the constant C of (37) approximated using the quadrature in (44), given below.

4.1 The planar case

The dimension of Π_n is

$$N_n^2 = \frac{(n+1)(n+2)}{2}. \tag{40}$$

For notation, we replace x with (x, y). How do we choose the orthonormal basis {φ_l(x, y)}_{l=1}^N for Π_n? Unlike the situation in the single variable case, there are many possible orthonormal bases over B₂ = D, the unit disk in R². We have chosen one that is particularly convenient for our computations: the "ridge polynomials" introduced by Logan and Shepp [10] for solving an image reconstruction problem. We summarize here the results needed for our work.

Let

$$\mathcal{V}_n = \{P \in \Pi_n : (P, Q) = 0 \;\;\forall Q \in \Pi_{n-1}\},$$

the polynomials of degree n that are orthogonal to all elements of Π_{n−1}. Then the dimension of V_n is n + 1; moreover,

$$\Pi_n = \mathcal{V}_0 \oplus \mathcal{V}_1 \oplus \cdots \oplus \mathcal{V}_n. \tag{41}$$

It is standard to construct orthonormal bases of each V_n and to then combine them to form an orthonormal basis of Π_n using the decomposition in (41). As an orthonormal basis of V_n we use

$$\tilde\varphi_{n,k}(x, y) = \frac{1}{\sqrt\pi}\,U_n(x\cos(kh) + y\sin(kh)), \qquad (x, y) \in D, \quad h = \frac{\pi}{n+1}, \tag{42}$$

for k = 0, 1, ..., n. The function U_n is the Chebyshev polynomial of the second kind of degree n:

$$U_n(t) = \frac{\sin((n+1)\theta)}{\sin\theta}, \qquad t = \cos\theta, \quad -1 \le t \le 1, \quad n = 0, 1, \dots \tag{43}$$

The family {φ̃_{n,k}}_{k=0}^n is an orthonormal basis of V_n. As a basis of Π_n, we order the {φ̃_{n,k}} lexicographically based on the orderings in (41) and (42):

$$\{\varphi_l\}_{l=1}^N = \left\{\tilde\varphi_{0,0},\;\tilde\varphi_{1,0},\;\tilde\varphi_{1,1},\;\tilde\varphi_{2,0},\;\dots,\;\tilde\varphi_{n,0},\;\dots,\;\tilde\varphi_{n,n}\right\}.$$

To calculate the first order partial derivatives of φ̃_{n,k}(x, y), we need U_n′(t).
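A hedged sketch (not the paper's code) of evaluating the ridge polynomial (42) and its gradient, using the classical recursions U_{n+1}(t) = 2tU_n(t) − U_{n−1}(t) and U′_{n+1}(t) = 2U_n(t) + 2tU′_n(t) − U′_{n−1}(t):

```python
import math

def cheb_U_all(n, t):
    """Return lists [U_0(t), ..., U_n(t)] and [U_0'(t), ..., U_n'(t)]."""
    U, dU = [1.0], [0.0]
    if n >= 1:
        U.append(2.0*t); dU.append(2.0)
    for k in range(1, n):
        U.append(2*t*U[k] - U[k-1])
        dU.append(2*U[k] + 2*t*dU[k] - dU[k-1])
    return U, dU

def ridge_poly(n, k, x, y):
    """phi~_{n,k}(x, y) of (42) and its gradient; the chain rule gives
    grad = U_n'(t) * (cos kh, sin kh) / sqrt(pi), with t = x cos kh + y sin kh."""
    h = math.pi/(n + 1)
    c, s = math.cos(k*h), math.sin(k*h)
    t = x*c + y*s
    U, dU = cheb_U_all(n, t)
    f = U[n]/math.sqrt(math.pi)
    df = dU[n]/math.sqrt(math.pi)
    return f, (df*c, df*s)
```

A quick consistency check: U₃(t) = 8t³ − 4t and U₃′(t) = 24t² − 4, and the returned gradient agrees with a finite-difference derivative.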
The values of U_n(t) and U_n′(t) are evaluated using the standard triple recursion

relations

$$U_{n+1}(t) = 2tU_n(t) - U_{n-1}(t),$$
$$U'_{n+1}(t) = 2U_n(t) + 2tU'_n(t) - U'_{n-1}(t).$$

For the numerical approximation of the integrals in (39), which are over B₂ (the unit disk), we use the formula

$$\int_{B_2} g(x, y)\,dx\,dy \approx \sum_{l=0}^{q}\sum_{m=0}^{2q} g\!\left(r_l\cos\frac{2\pi m}{2q+1},\; r_l\sin\frac{2\pi m}{2q+1}\right)\omega_l\,\frac{2\pi}{2q+1}\,r_l. \tag{44}$$

Here the numbers ω_l are the weights of the (q+1)-point Gauss-Legendre quadrature formula on [0, 1], with nodes r_l:

$$\int_0^1 p(x)\,dx = \sum_{l=0}^{q} p(r_l)\,\omega_l$$

for all single-variable polynomials p(x) with deg(p) ≤ 2q + 1. The formula (44) uses the trapezoidal rule with 2q + 1 subdivisions for the integration over B₂ in the azimuthal variable. This quadrature is exact for all polynomials g ∈ Π_{2q}. This formula is also the basis of the hyperinterpolation formula discussed in [9].

4.2 The three dimensional case

In the three dimensional case the dimension of Π_n is given by

$$N_n^3 = \binom{n+3}{3},$$

and we choose the following orthonormal polynomials on the unit ball:

$$\varphi_{m,j,\beta}(x) = c_{m,j}\,p_j^{(0,\,m-2j+\frac12)}(2\|x\|^2 - 1)\,S_{\beta,m-2j}(x) = c_{m,j}\,\|x\|^{m-2j}\,p_j^{(0,\,m-2j+\frac12)}(2\|x\|^2 - 1)\,S_{\beta,m-2j}\!\left(\frac{x}{\|x\|}\right), \tag{45}$$

$$j = 0, \dots, \lfloor m/2\rfloor, \qquad \beta = 0, 1, \dots, 2(m - 2j), \qquad m = 0, 1, \dots, n.$$

The constants c_{m,j} are normalization constants, and the functions p_j^{(0, m−2j+1/2)} are the normalized Jacobi polynomials. The functions S_{β,m−2j} are spherical harmonic functions, and they are orthonormal on the sphere S² ⊂ R³. See [8, 17] for the definition of these functions. In [9] one also finds the quadrature methods which we use to approximate the integrals over B₃ in (14) and (15). The functional ℓ̃₂ in (15) is given by

$$\tilde\ell_2(\tilde v) = \int_0^{\pi}\!\!\int_0^{2\pi} g(\Phi(\Upsilon(1, \theta, \phi)))\,\left\|(\Phi\circ\Upsilon)_\theta(1, \theta, \phi)\times(\Phi\circ\Upsilon)_\phi(1, \theta, \phi)\right\|\,\tilde v(\Upsilon(1, \theta, \phi))\,d\phi\,d\theta \tag{46}$$

Figure 1: Images of (49), with a = 0.5, for lines of constant radius and constant azimuth on the unit disk.

where

$$\Upsilon(\rho, \theta, \phi) := \rho\,(\sin\theta\cos\phi,\; \sin\theta\sin\phi,\; \cos\theta) \tag{47}$$

is the usual transformation between spherical and Cartesian coordinates, and the subscripts θ and φ denote the corresponding partial derivatives. For the numerical approximation of the integral in (46) we use trapezoidal rules in the φ direction and Gauss-Legendre formulas in the θ direction.

5 Numerical examples

The construction of our examples is very similar to that given in [1] for the Dirichlet problem. Our first two transformations Φ have been chosen so that we can invert the mapping Φ explicitly, in order to better construct our test examples. This is not needed when applying the method; it merely simplifies the construction of our test cases. Given Φ, we need to calculate analytically the matrix J(x), from which we can then calculate

$$A(x) = J(x)^{-1}J(x)^{-T}. \tag{48}$$

5.1 The planar case

For our convenience, we replace a point x ∈ B₂ with (x, y), and we replace a point s ∈ Ω with (s, t). Define the mapping Φ : B̄₂ → Ω̄ by (s, t) = Φ(x, y),

$$s = x - y + ax^2, \qquad t = x + y, \tag{49}$$

with 0 < a < 1. It can be shown that Φ is a one-to-one mapping from the unit disk B₂. In particular, the inverse mapping Ψ : Ω̄ → B̄₂ is given by

$$x = \frac1a\left[-1 + \sqrt{1 + a(s + t)}\,\right], \qquad y = \frac1a\left[(at + 1) - \sqrt{1 + a(s + t)}\,\right]. \tag{50}$$

In Figure 1, we give the images in Ω of lines of constant radius and lines of constant azimuth on the unit disk. The following information is needed when implementing the transformation from −Δu + γu = f on Ω to a new equation on B₂:

$$D\Phi = J(x, y) = \begin{pmatrix}1 + 2ax & -1\\ 1 & 1\end{pmatrix}, \qquad \det(J) = 2(1 + ax),$$

$$J(x)^{-1} = \frac{1}{2(1 + ax)}\begin{pmatrix}1 & 1\\ -1 & 1 + 2ax\end{pmatrix},$$

$$A = J(x)^{-1}J(x)^{-T} = \frac{1}{4(1 + ax)^2}\begin{pmatrix}2 & 2ax\\ 2ax & 2 + 4ax + 4a^2x^2\end{pmatrix}.$$

The latter are the coefficients needed to define Ã in (14).

We give numerical results for solving the equation

$$-\Delta u(s, t) + e^{s-t}\,u(s, t) = f(s, t), \qquad (s, t) \in \Omega. \tag{51}$$

As a test case, we choose

$$u(s, t) = e^s\cos(\pi t), \qquad (s, t) \in \bar\Omega. \tag{52}$$

The solution is pictured in Figure 2. To find f(s, t), we use (51) and (52). We use the domain parameter a = 0.5, with Ω pictured in Figure 1. Numerical results are given in Table 1 for even values of n. The integrations in (39) were performed with (44), with the integration parameter q ranging from 10 to 30. We give the condition numbers of the linear system (39) as produced in MATLAB. To calculate the error, we evaluate the numerical solution ũ_n and the analytic solution ũ on the grid

$$\Phi(x_{i,j}, y_{i,j}) = \Phi(r_i\cos\theta_j,\; r_i\sin\theta_j)$$

for an even spacing of the points (r_i, θ_j) over the unit disk.
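The mapping (49) and its inverse (50) can be sanity-checked directly; the sketch below writes the second component of the inverse in the equivalent form y = t − x, which follows from t = x + y:

```python
import math

def Phi(x, y, a=0.5):
    """The planar test mapping (49)."""
    return (x - y + a*x*x, x + y)

def Psi(s, t, a=0.5):
    """Inverse mapping (50): solve a x^2 + 2x - (s + t) = 0 for x, then y = t - x."""
    x = (-1.0 + math.sqrt(1.0 + a*(s + t)))/a
    return (x, t - x)

def detJ(x, y, a=0.5):
    # J = [[1 + 2ax, -1], [1, 1]], so det J = 2(1 + ax)
    return 2.0*(1.0 + a*x)
```

For example, Psi(*Phi(0.3, -0.4)) recovers (0.3, -0.4) up to rounding, and detJ matches the closed form 2(1 + ax) stated above.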

Figure 2: The function u(s, t) of (52).

Table 1: Maximum errors ‖u − u_n‖_∞ in the Galerkin solution u_n, with the condition numbers of the linear system (39), for even degrees n = 2, ..., 24 (N_n = 6, ..., 325).

The results are shown graphically in Figure 3. The use of a semi-log scale demonstrates the exponential convergence of the method as the degree increases. For simplicity, we have chosen to use the uniform norm ‖u − u_n‖_∞ rather than the Sobolev norm ‖u − u_n‖_{H¹(Ω)} that arises in the theoretical error bounds of (19)-(20).

To examine experimentally the behaviour of the condition numbers for the linear system (39), we have presented the condition numbers from Table 1 in Figure 4. Note that we are graphing N_n² versus the condition number of the associated linear system. The graph seems to indicate that the condition number of the system (39) is directly proportional to the square of the order of the system, with the order given in (40).

For the Poisson equation

$$-\Delta u(s, t) = f(s, t), \qquad (s, t) \in \Omega,$$

Figure 3: Errors from Table 1.

with the same true solution as in (52), we use the numerical method given in §3. The numerical results are comparable. For example, with n = 20 we obtain ‖u − u_n‖_∞ = 9.9 × 10⁻⁸, and the condition number is approximately 4980.

5.2 The three dimensional case

To illustrate that the proposed spectral method converges rapidly, we first use a simple test example: we choose a linear transformation s := Φ₁(x) = Bx, with B a fixed nonsingular 3 × 3 matrix, so that the unit ball B₃(0, 1) is transformed into an ellipsoid Ω₁; see Figure 5. For this transformation, DΦ₁ and J = det(DΦ₁) are constant functions. For a test solution, we use the function

$$u(s) = s_1\,e^{s_2}\sin(s_3), \tag{53}$$

which is analytic in each variable. Table 2 shows the errors and the development of the condition numbers for the solution of (1) on Ω₁. The associated graphs for the errors and condition numbers are shown in Figures 6 and 7, respectively. The graph of the error is consistent with exponential convergence, and the condition number seems to have a growth proportional to the square of the number of degrees of freedom N_n.

Figure 4: Condition numbers from Table 1.

Next we study domains Ω which are star-shaped with respect to the origin,

$$\Omega = \{x \in \mathbb{R}^3 : x = \Upsilon(\rho, \theta, \phi),\; 0 \le \rho < R(\theta, \phi)\}. \tag{54}$$

See (47) for the definition of Υ; R : S² → (0, ∞) is assumed to be a C^∞ function. In this case we can construct arbitrarily smooth and invertible mappings Φ₂ : B̄₃(0, 1) → Ω̄, as we now show. First we define a function t : [0, 1] → [0, 1],

$$t(\rho) := \begin{cases}0, & 0 \le \rho \le \frac12,\\ (2\rho - 1)^{e_s}, & \frac12 < \rho \le 1;\end{cases} \tag{55}$$

the parameter e_s ∈ N determines the smoothness class of t. For the following we will assume that R(θ, φ) > 1 for all θ and φ; this follows after an

Table 2: Maximum errors in the Galerkin solution u_n, with condition numbers, for degrees n = 1, ..., 16 (N_n = 4, ..., 969).

Figure 5: The boundary of Ω₁.

appropriate scaling of the problem. With the help of t we define the function R̃, which is monotone increasing from 0 to R(θ, φ) on [0, 1] and equal to the identity on [0, 1/2]:

$$\tilde R(\rho, \theta, \phi) := t(\rho)\,R(\theta, \phi) + (1 - t(\rho))\,\rho.$$

Because

$$\frac{\partial}{\partial\rho}\tilde R(\rho, \theta, \phi) = t'(\rho)\left(R(\theta, \phi) - \rho\right) + (1 - t(\rho)) > 0, \qquad \rho \in [0, 1],$$

the function R̃ is an invertible function of ρ. The transformation Φ₂ : B̄₃(0, 1) → Ω̄ is defined by

$$\Phi_2(x) := \Upsilon(\tilde R(\rho, \theta, \phi), \theta, \phi), \qquad x = \Upsilon(\rho, \theta, \phi) \in \bar B_3(0, 1).$$

The properties of R̃ imply that Φ₂ is equal to the identity on B₃(0, 1/2), and the outside shell B₃(0, 1) \ B₃(0, 1/2) is deformed by Φ₂ to cover Ω̄ \ B₃(0, 1/2). For a test surface, we use

$$R(\theta, \phi) = 2 + \tfrac34\cos(\phi)\sin(\theta)\cos(\theta)\left(7\cos^2(\theta) - 3\right) \tag{56}$$

and e_s = 5; see Figures 8 and 9 for pictures of Ω. For our test example, we use u from (53).
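A hedged sketch of the star-shaped transformation Φ₂. The transcription garbles the explicit cutoff in (55); the simple choice t(ρ) = (2ρ − 1)^{e_s} on (1/2, 1] used below is our assumption, chosen only because it has the stated properties (zero on [0, 1/2], t(1) = 1, smoothness controlled by e_s):

```python
import math

ES = 5  # smoothness parameter e_s used in the paper's example

def t_blend(rho, es=ES):
    """Assumed cutoff of type (55): 0 on [0, 1/2], (2 rho - 1)^es on (1/2, 1]."""
    return 0.0 if rho <= 0.5 else (2.0*rho - 1.0)**es

def phi2(p, R, es=ES):
    """Map a point p in the closed unit ball into Omega via R~ and Phi_2.
    R is a callable giving the boundary radius for a unit direction vector."""
    x, y, z = p
    rho = math.sqrt(x*x + y*y + z*z)
    if rho < 1e-14:
        return (0.0, 0.0, 0.0)
    u = (x/rho, y/rho, z/rho)                 # direction on S^2
    t = t_blend(rho, es)
    r_new = t*R(u) + (1.0 - t)*rho            # R~(rho, theta, phi)
    return (r_new*u[0], r_new*u[1], r_new*u[2])
```

With R ≡ 2 (a sphere of radius 2, R > 1 as required), phi2 acts as the identity inside radius 1/2 and sends the unit sphere onto the sphere of radius 2, illustrating the deformation of the outer shell only.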

Figure 6: Errors from Table 2.

The term cos(φ) sin(θ) cos(θ)(7 cos²(θ) − 3) is a spherical harmonic function, which shows R ∈ C^∞(S²), and the factor 3/4 is used to guarantee R > 1. For the transformation Φ₂ we get Φ₂ ∈ C⁴(B̄₃(0, 1)), so we expect a convergence of order O(n⁻⁴). Our spectral method will now approximate u ∘ Φ₂ on the unit ball, which varies much more than the function in our first example. We also note that one might ask why we do not further increase e_s (see (55)) to get a better order of convergence. It is possible to do this, but the price one pays is larger derivatives of u ∘ Φ₂, and this may result in larger errors for the range of n values where we actually calculate the approximation. The search for an optimal e_s is a problem of its own, and it also depends on the solution u. So we have chosen e_s = 5 in order to demonstrate our method, showing that the qualitative behaviour of the error is the same as in our earlier examples.

The results of our calculations are given in Table 3, and the associated graphs of the errors and condition numbers are shown in Figures 10 and 11, respectively. The graph in Figure 11 shows that the condition numbers of the systems grow more slowly than in our first example, but again the condition numbers appear to be proportional to N_n^{3/2}. The graph of the error in Figure 10 again resembles a line, and this implies exponential convergence; but the line has a much smaller slope than in the first example, so that the error is only reduced to about 10⁻² when we use degree 16. What we expect is a convergence of order O(n⁻⁴), but the graph does not reveal this behavior in the range of n values we have used; rather, the convergence appears to be exponential. In the future we plan to repeat this numerical example with an improved extension Φ₂ of the boundary mapping given in (56).

Figure 7: Condition numbers from Table 2

Remark. When given a mapping ϕ : S² → ∂Ω, it is often nontrivial to find an extension Φ : B̄_1(0) → Ω̄ with Φ|_{S²} = ϕ and with other needed properties. For example, consider a star-like region Ω whose boundary surface ∂Ω is given by ρ = R(θ, φ) with R : S² → (0, ∞). It might seem natural to use

    Φ(ρ, θ, φ) = ρR(θ, φ),   0 ≤ ρ ≤ 1, 0 ≤ θ ≤ π, 0 ≤ φ ≤ 2π.

However, such a function Φ is usually not continuously differentiable at ρ = 0. We are exploring this general problem, looking at ways of producing Φ with the properties that are needed for implementing our spectral method. The choice of the transformation Φ also affects the conditioning of the transformed problem, as was noted earlier in this paper. Current research on producing Φ includes work on maximizing the minimum value of det J(x, y), to help reduce any possible ill-conditioning in the transformed problems. These results will appear in a future paper.

ACKNOWLEDGEMENTS. The authors would like to thank Professor Weimin Han for his careful proofreading of the manuscript. We would also like to thank the two anonymous referees for their suggestions.
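The differentiability failure noted in the remark is easy to observe numerically. The sketch below uses a non-constant radius function (the test surface of the previous example serves as an illustration) and computes one-sided derivatives of the naive radial extension at the origin: along a direction d the limit of Φ(hd)/h is R(d)d, which depends on d, so no single Jacobian can exist at 0 and Φ is not C¹ there.

```python
import numpy as np

# illustrative non-constant star-shaped radius function on S^2
def R(x):
    x = x / np.linalg.norm(x)
    theta = np.arccos(np.clip(x[2], -1.0, 1.0))
    phi = np.arctan2(x[1], x[0])
    return 2.0 + 0.75 * np.cos(2.0 * phi) * np.sin(theta)**2 \
        * (7.0 * np.cos(theta)**2 - 1.0)

def Phi(x):
    # naive extension: scale each ray by R(direction); Phi(0) = 0
    return R(x) * x if np.linalg.norm(x) > 0.0 else np.zeros(3)

# one-sided directional derivatives at 0: Phi(h d)/h -> R(d) d as h -> 0.
# A differentiable Phi would need one matrix A with A d = R(d) d for every
# direction d, forcing R to be constant; here the factors differ.
h = 1e-8
d1 = np.array([1.0, 0.0, 0.0])
d2 = np.array([0.0, 0.0, 1.0])
g1 = Phi(h * d1) / h      # ~ R(d1) * d1
g2 = Phi(h * d2) / h      # ~ R(d2) * d2
print(np.linalg.norm(g1), np.linalg.norm(g2))   # different radial factors
```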

Figure 8: A view of Ω

Table 3: Maximum errors in the Galerkin solution u_n. (Columns: degree n, number of unknowns N_n, error ‖u − u_n‖_∞, and condition number, for n = 1, …, 16; in particular N_16 = 969.)

References

[1] K. Atkinson, D. Chien, and O. Hansen, A Spectral Method for Elliptic Equations: The Dirichlet Problem, Advances in Computational Mathematics, DOI: 10.1007/s10444-009-9125-8, to appear.

[2] K. Atkinson and W. Han, Theoretical Numerical Analysis: A Functional Analysis Framework, 2nd ed., Springer-Verlag, New York, 2005.

[3] S. Brenner and L. Scott, The Mathematical Theory of Finite Element Methods, Springer-Verlag, New York, 1994.

Figure 9: Another view of Ω

[4] T. Bagby, L. Bos, and N. Levenberg, Multivariate simultaneous approximation, Constructive Approximation, 18 (2002), pp. 569-577.

[5] C. Canuto, A. Quarteroni, M.Y. Hussaini, and T. Zang, Spectral Methods in Fluid Dynamics, Springer-Verlag, New York, 1988.

[6] C. Canuto, A. Quarteroni, M.Y. Hussaini, and T. Zang, Spectral Methods - Fundamentals in Single Domains, Springer-Verlag, New York, 2006.

[7] E. Doha and W. Abd-Elhameed, Efficient spectral-Galerkin algorithms for direct solution of second-order equations using ultraspherical polynomials, SIAM J. Sci. Comput. 24 (2002), 548-571.

[8] C. Dunkl and Y. Xu, Orthogonal Polynomials of Several Variables, Cambridge Univ. Press, Cambridge, 2001.

[9] O. Hansen, K. Atkinson, and D. Chien, On the norm of the hyperinterpolation operator on the unit disk and its use for the solution of the nonlinear Poisson equation, IMA J. Numerical Analysis, 29 (2009), pp. 257-283, DOI: 10.1093/imanum/drm052.

[10] B. Logan and L. Shepp, Optimal reconstruction of a function from its projections, Duke Mathematical Journal, 42 (1975), 645-659.

[11] W. McLean, Strongly Elliptic Systems and Boundary Integral Equations, Cambridge Univ. Press, 2000.

Figure 10: Errors from Table 3

[12] S. Mikhlin, The Numerical Performance of Variational Methods, Noordhoff Pub., Groningen, 1971.

[13] D. Ragozin, Constructive polynomial approximation on spheres and projective spaces, Trans. Amer. Math. Soc. 162 (1971), 157-170.

[14] J. Shen and T. Tang, Spectral and High-Order Methods with Applications, Science Press, Beijing, 2006.

[15] J. Shen and L. Wang, Analysis of a spectral-Galerkin approximation to the Helmholtz equation in exterior domains, SIAM J. Numer. Anal. 45 (2007), 1954-1978.

[16] A. Stroud, Approximate Calculation of Multiple Integrals, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1971.

[17] Yuan Xu, Lecture notes on orthogonal polynomials of several variables, in Advances in the Theory of Special Functions and Orthogonal Polynomials, Nova Science Publishers, 2004, 135-188.

[18] Yuan Xu, A family of Sobolev orthogonal polynomials on the unit ball, J. Approx. Theory 138 (2006), 232-241.

Figure 11: Condition numbers from Table 3