Key words. Equispaced nodes, scattered data, spectral methods, Runge phenomenon, analytic functions


A MAPPED POLYNOMIAL METHOD FOR HIGH-ACCURACY APPROXIMATIONS ON ARBITRARY GRIDS

BEN ADCOCK AND RODRIGO B. PLATTE

Abstract. The focus of this paper is the approximation of analytic functions on compact intervals from their pointwise values on arbitrary grids. We introduce a new method for this problem based on mapped polynomial approximation. By careful selection of the mapping parameter, we ensure both high accuracy of the approximation and an asymptotically optimal scaling of the polynomial degree with the grid spacing. As we explain, efficient implementation of this method can be achieved using Nonuniform Fast Fourier Transforms (NFFTs). Numerical results demonstrate the efficiency and accuracy of this approach.

Key words. Equispaced nodes, scattered data, spectral methods, Runge phenomenon, analytic functions

AMS subject classifications. 65D5, 65D05, 65T50

1. Introduction. Let $f : [-1,1] \to \mathbb{C}$ be an analytic function and $-1 \le z_0 < \cdots < z_M \le 1$ a fixed grid of $M+1$ points. In this paper, we consider the problem of approximating $f$ from the grid values $f(z_m)$, $m = 0, \ldots, M$. A classical means of doing this is to interpolate $f$ using a polynomial of degree $M$. However, the famous Runge phenomenon illustrates the pitfalls of such an approach. In the case of equispaced grids, for example, the corresponding interpolants diverge exponentially fast as $M \to \infty$ for any function with complex singularities lying sufficiently close to the interval $[-1,1]$. Moreover, the approximation is ill-conditioned, and so one sees divergence of the interpolants in finite precision, even for entire functions. Nor is this phenomenon isolated to equispaced data. It is well known that to avoid a Runge-type phenomenon the data should cluster at the endpoints according to a Chebyshev distribution. In other words, polynomial interpolants are generally inadvisable for all but very special grids.
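The equispaced divergence described above is easy to reproduce. The following Python sketch is illustrative only, not code from this paper; the test function (Runge's classical example $1/(1+25x^2)$) and the degree are our own choices. It interpolates on equispaced and on Chebyshev nodes and compares the uniform errors:

```python
# Illustration of the Runge phenomenon: degree-M interpolation of
# f(x) = 1/(1 + 25 x^2) on equispaced nodes diverges near the endpoints,
# while Chebyshev nodes remain accurate.
import numpy as np

def interp_error(nodes, deg):
    f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
    # Interpolate in the Chebyshev basis for numerical stability.
    coeffs = np.polynomial.chebyshev.chebfit(nodes, f(nodes), deg)
    xx = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(np.polynomial.chebyshev.chebval(xx, coeffs) - f(xx)))

M = 20
equi = np.linspace(-1.0, 1.0, M + 1)
cheb = np.cos(np.pi * np.arange(M + 1) / M)   # Chebyshev (extreme) points

err_equi = interp_error(equi, M)   # large, and grows with M
err_cheb = interp_error(cheb, M)   # small, decays geometrically with M
print(err_equi, err_cheb)
```

Already at degree 20 the equispaced interpolant has an $O(10)$ error near the endpoints, whereas the Chebyshev interpolant is accurate to a few digits, consistent with the discussion above.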
Department of Mathematics, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada (ben_adcock@sfu.ca). School of Mathematical and Statistical Sciences, Arizona State University, P.O. Box 87804, Tempe, AZ, USA (rbp@asu.edu).

One possible way to overcome the Runge phenomenon is to reduce the polynomial degree to, say, $N < M$, and perform an overdetermined (weighted) least-squares fit of the data. In the case of equispaced grids, provided $N$ is chosen sufficiently small in comparison to $M$, this leads to a stable and convergent approximation [7]. However, due to a result of Coppersmith & Rivlin [3] (see also [3]), it is known that $N$ can grow no faster than $O(\sqrt{M})$ as $M \to \infty$ if stability and convergence are to be maintained. Thus, the effective convergence rate of the approximation, determined by the size of $N$, is greatly lessened. Although the best approximation of an analytic function in $P_N$ is exponentially accurate in $N$, this scaling translates into only root-exponential accuracy in the number of data points $M$. More generally, for arbitrary grids with maximal separation $h$, we can expect only root-exponential convergence in $1/h$ as $h \to 0$. The severity of this scaling is due to the behaviour of derivatives of polynomials, and specifically the fact that $\|p'\|_\infty$ grows maximally like $N^2 \|p\|_\infty$ for a polynomial $p \in P_N$ (this is commonly known as Markov's inequality; see [6, Theorem 5.1.8],

for example). On the other hand, trigonometric polynomials possess derivatives that grow at most linearly in $N$. Hence a trigonometric polynomial least-squares approximation permits a linear scaling of $N$ with $M$ (or, more generally, $1/h$) whilst maintaining stability. Unfortunately, trigonometric polynomials are a poor means of approximating nonperiodic analytic functions. Unless $f$ happens to also be periodic, there is no uniform convergence as $N \to \infty$, and one witnesses the undesirable Gibbs phenomenon near the interval endpoints.

The purpose of this paper is to introduce a weighted least-squares method for approximating analytic functions from their values on equispaced or scattered points that combines the good features of both algebraic and trigonometric polynomial approximations. For appropriate parameter choices, the method we introduce has a linear scaling of $N$ with $M$ (or, in general, $1/h$), much as with trigonometric polynomial approximation, but delivers high accuracy reminiscent of polynomial approximation. Moreover, the method is practical, simple, and can be implemented efficiently in $O(M \log M)$ operations using Nonuniform Fast Fourier Transforms (NFFTs) [7, 8, 0, 3, 35].

1.1. Mapped polynomial approximations. Our approximation method is based on mapped algebraic polynomials. The corresponding approximation space

(1.1)  $P_N^\alpha = \{ p \circ m_\alpha : p \in P_N \}$

consists of algebraic polynomials in a mapped variable $y = m_\alpha(x)$, where $m_\alpha : [-1,1] \to [-1,1]$ is a particular one-parameter family of mappings indexed by a parameter $0 \le \alpha \le 1$. When $\alpha = 0$, $P_N^\alpha$ coincides with the space $P_N$ of algebraic polynomials of degree $N$, and when $\alpha = 1$ it consists of functions closely related to trigonometric polynomials. By selecting $\alpha$ sufficiently close to one, it is therefore expected that one can retain the good approximation properties of the $\alpha = 0$ case, whilst also improving its severe scaling of $N$ with $M$ (respectively $1/h$). The approximation space (1.1) is not new.
Mapped polynomial methods have been used extensively in the context of numerical quadrature and spectral methods for PDEs. Here the mapping $m_\alpha$ is used to overcome the severe time-step requirements of standard Chebyshev spectral methods, or to improve the poor resolution properties of Chebyshev grids [8, 4, 6]. The most widely used such map is due to Kosloff and Tal-Ezer [6]. However, other maps have also been considered, including most recently in the work of Hale & Trefethen [4]. Note that the situation considered in such applications is, roughly speaking, the reverse of ours. Therein the mapping is used to distribute a Chebyshev grid more evenly over the domain, and hence improve the time-step restriction. Conversely, in our setting we consider a fixed, but arbitrary, grid of data points, which we map to a grid that is closer to a Chebyshev distribution. The motivation for doing this is to suppress the maximal polynomial derivatives, and correspondingly improve the scaling of $N$ with $M$ required for stability. Thus, an interesting conclusion of this paper is that mappings are not just useful in applications such as spectral methods and numerical quadrature; they are also useful in the reconstruction problem of approximating analytic functions to high accuracy from arbitrary grids.

Note that it is not our aim in this paper to compare different mappings. We shall use the mapping due to Kosloff and Tal-Ezer [6] throughout due to its simplicity. We note, however, that other mappings may provide some advantages. For a discussion of this issue in relation to spectral methods and numerical quadrature, see [4].

In the context of spectral methods, the choice of the mapping parameter $\alpha$ has also been the subject of extensive debate. See [, 8, 4, 5, 6, 4, 33, 34] and references therein. There are two standard approaches. In the first, $0 < \alpha < 1$ is fixed and close to one; in the second, $\alpha = \alpha_N \to 1$ as $N \to \infty$. As we will discuss later, approximations from the space $P_N^\alpha$ converge geometrically in the first case. In the second case, the standard approach is to introduce a finite maximal accuracy $\epsilon$ (typically on the order of machine precision [6]), and choose $\alpha_N$ so that the error of the approximation is on the order of $\epsilon$ for large $N$. Approximations from the space $P_N^{\alpha_N}$ no longer converge classically (i.e. down to zero in exact arithmetic as $N \to \infty$), but in practice high accuracy is expected by taking $\epsilon$ on the order of machine precision. From the point of view of spectral methods, the advantage of the second approach is that it delivers asymptotically optimal time-step and resolution properties, on the order of those of Fourier spectral methods.

1.2. Our contributions. After introducing the method in §2 and discussing its efficient implementation using NFFTs, we devote the remainder of the paper to the key issue of how to choose the parameter $N$ (the size of the approximation space) in relation to $M$ (or, in general, $h$) for the various choices of $\alpha$ (the mapping parameter). We first show that for fixed $\alpha$ one cannot improve the asymptotic scaling of $N$ with $M$ beyond that of the $\alpha = 0$ case, i.e. $N = O(\sqrt{M})$; the only possible improvement is in the constant. Conversely, if $\alpha = \alpha_N \to 1$ in an appropriate way, we show that stability is guaranteed with a linear scaling of $N$ with $M$ and with an explicit constant (Theorem 4.2). Whilst classical convergence is forfeited, high accuracy is guaranteed by an appropriate choice of $\epsilon$. The proofs of these results follow from the derivation of a Markov inequality for $P_N^\alpha$ which is uniform in both $N$ and $\alpha$ (Theorem 4.1).
Notably, the constant in this inequality is explicit and not overly large for practical choices of parameters, which is an interesting virtue of our analysis.

A summary of our main results is given in Table 1. Note that the results are stated for equispaced data only, the primary example we use throughout this paper. However, they can easily be recast in terms of general scattered grids by replacing $M$ with $1/h$. Table 1 also includes some terminology for convergence that will be used throughout this paper. In particular, we say that a sequence $a_n$ converges geometrically if $a_n = O(\rho^{-n})$ for large $n$ for some $\rho > 1$. We say the convergence is subgeometric with index $0 < r < 1$ if $a_n = O(\rho^{-n^r})$. When $r = 1/2$ we also refer to this convergence as root-exponential. Finally, we say the sequence converges algebraically with index $k$ if $a_n = O(n^{-k})$ as $n \to \infty$.

1.3. Methods for function approximation from equispaced data. Many methods have been developed for the approximation of analytic functions from equispaced data. For an extensive list, see [0, 30] and references therein. A recent result [30] states that no method for this problem can be both stable and exponentially convergent. In fact, the best possible convergence rate of a stable method is root-exponential in the number of equispaced points $M$. Such a convergence rate is achieved by polynomial least-squares, for example. However, in practice (as we shall see later in our numerical results) polynomial least-squares tends to give poor results. On the other hand, when $\alpha = \alpha_N$ is varied appropriately (see the fifth line of Table 1), the mapped polynomial method we introduce in this paper achieves high accuracy and numerical stability. Yet the aforementioned theorem is not avoided, since classical convergence is sacrificed for finite accuracy.

Table 1: Summary of our main results for equispaced data, where $h = 2/M$.

| $\alpha$ | Conv. rate in $N$ | Scaling of $N$ with $M$ | Conv. rate in $M$ |
|---|---|---|---|
| $0$ | geometric | $O(\sqrt{M})$ | root exp. |
| $0 < \alpha < 1$ fixed | geometric | $O(\sqrt{M})$ | root exp. |
| $1$ | algebraic, index $1$ | $O(M)$ | algebraic, index $1$ |
| $1 - \alpha_N \sim \alpha_0 N^{-\sigma}$ | subgeo., index $1-\sigma$ | $O(M^{1/(2-\sigma)})$ | subgeo., index $(1-\sigma)/(2-\sigma)$ |
| $\alpha_N \sim 1 + \frac{2\log\epsilon}{N\pi}$ | not applicable | $O(M)$ | not convergent |

In the fourth and fifth rows, the notation denotes the behaviour of $\alpha = \alpha_N$ as $N \to \infty$. In the fourth row, $0 < \sigma < 1$ and $\alpha_0 > 0$ are fixed numbers. In the fifth row, $\epsilon > 0$ is a fixed number, chosen sufficiently close to machine epsilon to give high accuracy. In the second row, the geometric rate of convergence is limited by the mapping $m_\alpha$ (see Theorem 3.3). Although the fifth choice is not guaranteed to be convergent, for large $N$ we expect the error to be proportional to $\epsilon$.

As discussed in [0, 30], a number of other methods for equispaced function approximation also offer high accuracy and stability in practice. In §7 of this paper, we compare mapped polynomial methods with two well-known schemes for approximation on arbitrary grids: discrete polynomial least-squares and cubic splines. A thorough comparison with other methods for the approximation of analytic functions from equispaced data, including rational approximation [], Fourier extension [5, 4, 9, ], and windowed Fourier [8], will be presented in [9].

2. The mapped polynomial method.

2.1. Preliminaries. Throughout this paper, we denote by $L^2_w(-1,1)$ the space of functions which are square integrable with respect to a weight function $w(x)$. The corresponding inner product is written $\langle \cdot, \cdot \rangle_w$ and the norm $\| \cdot \|_w$. The space $L^\infty(-1,1)$ consists of those functions that are bounded a.e. on $[-1,1]$, equipped with the norm $\| \cdot \|_\infty$.

The mapping used in this paper, due to Kosloff and Tal-Ezer [6], is

(2.1)  $m_\alpha(x) = \frac{\sin(\alpha \pi x/2)}{\sin(\alpha \pi/2)}, \quad x \in [-1,1], \quad \alpha \in (0,1]$.

For completeness, we define

$m_0(x) = \lim_{\alpha \to 0^+} m_\alpha(x) = x$.

Observe that $m_\alpha$ is a bijection of $[-1,1]$ onto itself, with $m_\alpha(-1) = -1$ and $m_\alpha(1) = 1$.
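The map and its inverse take only a few lines to implement. The following sketch is ours, not the authors' code; it checks the bijection and endpoint properties noted above:

```python
# The Kosloff--Tal-Ezer map m_alpha and its inverse, with m_0 the identity.
import numpy as np

def m(alpha, x):
    x = np.asarray(x, dtype=float)
    if alpha == 0.0:                      # limiting case m_0(x) = x
        return x
    return np.sin(alpha * np.pi * x / 2.0) / np.sin(alpha * np.pi / 2.0)

def m_inv(alpha, y):
    y = np.asarray(y, dtype=float)
    if alpha == 0.0:
        return y
    return (2.0 / (alpha * np.pi)) * np.arcsin(np.sin(alpha * np.pi / 2.0) * y)

x = np.linspace(-1.0, 1.0, 101)
for alpha in (0.0, 0.5, 0.9, 1.0):
    assert np.allclose(m_inv(alpha, m(alpha, x)), x)   # bijection of [-1,1]
    assert abs(m(alpha, -1.0) + 1.0) < 1e-15 and abs(m(alpha, 1.0) - 1.0) < 1e-15
```

For $\alpha$ close to one the map pushes equispaced points toward a Chebyshev-like distribution in the $y$ variable, which is the mechanism exploited throughout the paper.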
Throughout, we write $y = m_\alpha(x) \in [-1,1]$ for the variable in the mapped domain. Note that

$x = m_\alpha^{-1}(y) = \frac{2}{\alpha\pi} \sin^{-1}\big( \sin(\alpha\pi/2)\, y \big)$,

and also that

$\frac{dy}{dx} = m_\alpha'(x) = \frac{\alpha\pi \cos(\alpha\pi x/2)}{2 \sin(\alpha\pi/2)} = \frac{\alpha\pi}{2\beta} \sqrt{1 - \beta^2 y^2}$,

where

(2.2)  $\beta = \sin(\alpha\pi/2)$.

We shall also write

(2.3)  $f(x) = g_\alpha(y), \qquad g_\alpha = f \circ m_\alpha^{-1}$,

for the image of $f$ under the mapping $m_\alpha$.

We next define the approximation space we use in this paper. Let

(2.4)  $P_N^\alpha = \{ p \circ m_\alpha : p \in P_N \}$

be the space of mapped polynomials in the variable $x$ of degree at most $N$. Observe that $P_N^0 = P_N$ is the space of algebraic polynomials of degree at most $N$. For computational purposes, it is also necessary to have a basis for $P_N^\alpha$. Let

(2.5)  $\phi_n(x) = c_n T_n(m_\alpha(x)), \quad n = 0, 1, 2, \ldots$,

where $T_n(y) \in P_n$ is the $n$th Chebyshev polynomial of the first kind in $y$ and the normalization factor $c_n$ is given by $\sqrt{1/\pi}$ if $n = 0$ and $\sqrt{2/\pi}$ otherwise.

Lemma 2.1. The functions $\{\phi_n\}_{n=0}^\infty$ form an orthonormal basis for the weighted space $L^2_{w_\alpha}(-1,1)$ with weight

$w_\alpha(x) = \frac{\alpha\pi \cos(\alpha\pi x/2)}{2 \sqrt{\sin^2(\alpha\pi/2) - \sin^2(\alpha\pi x/2)}}$.

Proof. The functions $c_n T_n(y)$ are orthonormal with respect to the Chebyshev weight $1/\sqrt{1-y^2}$. Using the substitution $y = m_\alpha(x)$, we have

$\int_{-1}^{1} \phi_n(x) \phi_m(x) w_\alpha(x)\,dx = \int_{-1}^{1} c_n c_m T_n(y) T_m(y)\, \frac{2\beta\, w_\alpha(m_\alpha^{-1}(y))}{\alpha\pi \sqrt{1-\beta^2 y^2}}\,dy$,

where $\beta$ is as in (2.2). Thus, for orthonormality, we require that

$\frac{2\beta\, w_\alpha(m_\alpha^{-1}(y))}{\alpha\pi \sqrt{1-\beta^2 y^2}} = \frac{1}{\sqrt{1-y^2}}$.

Substituting $y = m_\alpha(x)$ now gives the result.

Note that one could in theory use any orthonormal polynomial basis for $P_N$ in order to construct a basis of $P_N^\alpha$. We use Chebyshev polynomials for their computational efficiency (see §2.3). But it is in principle possible to use any other polynomial basis, for example Gegenbauer polynomials, which have been used widely for other problems in scientific computing (see [9, , 3] and references therein).

2.2. The method. Having introduced the approximation space $P_N^\alpha$, we next formulate the mapped polynomial method. To this end, let

$-1 \le z_0 < z_1 < \cdots < z_M \le 1$

be an ordered set of $M+1$ data points, where $M \ge N$, and write $Z = \{z_m\}_{m=0}^M$. Define the maximal spacing $h > 0$ by

(2.6)  $h = \max_{n=-1,\ldots,M} \{ z_{n+1} - z_n \}$,

where $z_{-1} = -1$ and $z_{M+1} = 1$. We will use equispaced grids as our primary example throughout this paper. In this case, we set

(2.7)  $z_m = -1 + \frac{2m}{M}, \quad m = 0, \ldots, M$,

and therefore $h = 2/M$.

Given the data $\{f(z_m)\}_{m=0}^M$ and the approximation space $P_N^\alpha$, we construct the approximation to $f$ by a weighted least-squares fit:

(2.8)  $F_{N,Z}^\alpha(f) = \operatorname{argmin}_{p_\alpha \in P_N^\alpha} \sum_{m=0}^{M} \mu_m \big| f(z_m) - p_\alpha(z_m) \big|^2$.

Here the weights $\mu_n > 0$ are trapezoidal quadrature weights corresponding to $Z$:

(2.9)  $\mu_n = \int_{m_\alpha(z_{n-1})}^{m_\alpha(z_{n+1})} \frac{dy}{2\sqrt{1-y^2}} = \frac{1}{2}\Big( \sin^{-1}\big(m_\alpha(z_{n+1})\big) - \sin^{-1}\big(m_\alpha(z_{n-1})\big) \Big), \quad n = 0, \ldots, M$.

We make this choice over simpler strategies because it avoids conditioning issues if $z_0$ or $z_M$ are close to their respective endpoints. Note that, given $Z$ and the weights $\mu_n$, the parameters $\alpha$ and $N$ are both chosen by the user; how to choose them is the key issue we consider in §3 and §4.

The approximation $F_{N,Z}^\alpha(f)$ is defined in the physical $x$-domain. Let

$F_{N,Z}^\alpha(f)(x) = G_{N,Z}^\alpha(g_\alpha)(y)$

be its image in the $y$-domain, where $g_\alpha$ is given by (2.3). Note that $G_{N,Z}^\alpha(g_\alpha)$ can also be defined as a polynomial least-squares fit:

$G_{N,Z}^\alpha(g_\alpha) = \operatorname{argmin}_{p \in P_N} \sum_{m=0}^{M} \mu_m \big| g_\alpha(m_\alpha(z_m)) - p(m_\alpha(z_m)) \big|^2$.

At this stage it is convenient to introduce the discrete inner product

$\langle f, g \rangle_Z = \sum_{m=0}^{M} \mu_m f(z_m) \overline{g(z_m)}, \quad f, g \in L^\infty(-1,1)$,

and write $\| \cdot \|_Z$ for the corresponding discrete semi-norm. We also define the discrete uniform semi-norm

$\| f \|_{Z,\infty} = \max_{m=0,\ldots,M} |f(z_m)|, \quad f \in L^\infty(-1,1)$.

Finally, since $F_{N,Z}^\alpha(f)$ is defined as a least-squares fit of the data, it is the solution of the corresponding normal equations. Written in variational form, one sees that $F_{N,Z}^\alpha(f)$ is the solution to the problem

(2.10)  find $\tilde{f} \in P_N^\alpha$ such that $\langle \tilde{f}, p_\alpha \rangle_Z = \langle f, p_\alpha \rangle_Z$ for all $p_\alpha \in P_N^\alpha$.

We shall use this observation later when analyzing the method.
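The weighted least-squares fit and quadrature weights defined above can be sketched with dense linear algebra. The following is our own illustrative implementation, not the authors' code: the helper name `mapped_ls_fit` and all parameter values are our choices, and the paper's fast NFFT/LSQR solver (described later) is replaced by a direct least-squares call.

```python
import numpy as np

def mapped_ls_fit(f, z, N, alpha):
    """Weighted least-squares fit (2.8) from samples f(z) in the mapped
    Chebyshev basis (2.5).  Dense O(M N^2) sketch; the paper solves the
    same system with LSQR and NFFTs."""
    beta = np.sin(alpha * np.pi / 2.0)
    ma = lambda x: np.sin(alpha * np.pi * np.asarray(x) / 2.0) / beta  # map (2.1)
    # Trapezoidal-type weights (2.9), with ghost points z_{-1} = -1, z_{M+1} = 1.
    ze = np.concatenate(([-1.0], z, [1.0]))
    t = np.arcsin(np.clip(ma(ze), -1.0, 1.0))
    mu = 0.5 * (t[2:] - t[:-2])
    # Basis phi_n(z) = c_n T_n(m_alpha(z)), via T_n(y) = cos(n arccos y).
    c = np.full(N + 1, np.sqrt(2.0 / np.pi)); c[0] = np.sqrt(1.0 / np.pi)
    V = np.cos(np.outer(np.arccos(np.clip(ma(z), -1.0, 1.0)), np.arange(N + 1))) * c
    w = np.sqrt(mu)
    a, *_ = np.linalg.lstsq(V * w[:, None], w * f(z), rcond=None)
    return lambda x: np.cos(np.outer(np.arccos(np.clip(ma(x), -1.0, 1.0)),
                                     np.arange(N + 1))) @ (c * a)

# Example: Runge's function from 401 equispaced samples, N = 100, and alpha
# chosen by the arctan rule of Section 2.4 with eps = 1e-14.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
z = np.linspace(-1.0, 1.0, 401)
eps, N = 1e-14, 100
alpha = (4.0 / np.pi) * np.arctan(eps ** (1.0 / N))
F = mapped_ls_fit(f, z, N, alpha)
xx = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(F(xx) - f(xx)))
```

With these (arbitrary but representative) parameters the uniform error is many orders of magnitude smaller than what unmapped polynomial least-squares of the same degree can achieve stably on equispaced data.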

2.3. Computation of the approximation. Let $\phi_n$ be the basis from (2.5). If

$F_{N,Z}^\alpha(f) = \sum_{n=0}^{N} a_n \phi_n$

for unknown coefficients $a_n \in \mathbb{C}$, then the weighted least-squares problem (2.8) is equivalent to the algebraic least-squares problem

(2.11)  $A a \approx b$,

where $A \in \mathbb{C}^{(M+1) \times (N+1)}$ has $(m,n)$th entry $\sqrt{\mu_m}\, \phi_n(z_m)$, $a = (a_0, \ldots, a_N)^\top$, $b = (b_0, \ldots, b_M)^\top$ and $b_m = \sqrt{\mu_m} f(z_m)$. Computation of the approximation $F_{N,Z}^\alpha(f)$ can be carried out using a standard solver such as conjugate gradients, with the number of iterations being proportional to the square root of the condition number $\kappa(A)$ of $A$. Let $\sigma_{\max}$ and $\sigma_{\min}$ be the maximal and minimal singular values of $A$ respectively, so that $\kappa(A) = \sigma_{\max}/\sigma_{\min}$. We note the following:

Lemma 2.2. Let $y_n = m_\alpha(z_n)$ for $n = 0, \ldots, M$. Then

$(\sigma_{\max})^2 = \max\Big\{ \sum_{n=0}^{M} \mu_n |p(y_n)|^2 : p \in P_N,\ \|p\|_w = 1 \Big\}$,

$(\sigma_{\min})^2 = \min\Big\{ \sum_{n=0}^{M} \mu_n |p(y_n)|^2 : p \in P_N,\ \|p\|_w = 1 \Big\}$,

where $w(y) = 1/\sqrt{1-y^2}$ is the Chebyshev weight.

Proof. Let $a = (a_0, \ldots, a_N)^\top$, $f = \sum_{n=0}^N a_n \phi_n \in P_N^\alpha$ and $p = g_\alpha \in P_N$ be as in (2.3). Note that $p$ is a sum of orthonormal Chebyshev polynomials in $y$, and therefore

$\sum_{n=0}^{N} |a_n|^2 = \| p \|_w^2$.

On the other hand,

$\| A a \|^2 = \sum_{n=0}^{M} \mu_n |f(z_n)|^2 = \sum_{n=0}^{M} \mu_n |p(y_n)|^2$.

The result now follows immediately.

In §5 we will use this lemma to analyze the conditioning of $A$ and show that, in practice, $\kappa(A)$ remains bounded for appropriate choices of the parameters $\alpha$ and $N$. Using this, in §6 we will describe the fast implementation of the approximation in $O(M \log M)$ time using NFFTs.

2.4. Parameter choices. We now define the various parameter choices we consider in this paper:
(i) $\alpha = 0$;
(ii) $0 < \alpha < 1$ fixed;
(iii) $\alpha = 1$;
(iv) $\alpha = \alpha_N \to 1$ with $1 - \alpha_N \sim \alpha_0/N^\sigma$ as $N \to \infty$, where $\alpha_0 > 0$ and $0 < \sigma < 1$;
(v) $\alpha = \alpha_N = \frac{4}{\pi} \arctan\big( \epsilon^{1/N} \big)$, where $\epsilon > 0$ is small.
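The last of these rules is easy to compute and check numerically. The sketch below is ours, not the paper's code; it verifies that choice (v) equilibrates the geometric factor discussed later, i.e. that $\cot(\alpha_N\pi/4)^{-N} = \epsilon$ by construction, and that $\alpha_N \to 1$ as $N$ grows:

```python
import numpy as np

def alpha_rule_v(N, eps):
    # Choice (v): alpha_N = (4/pi) * arctan(eps^(1/N)).
    return (4.0 / np.pi) * np.arctan(eps ** (1.0 / N))

eps = 1e-14
for N in (20, 100, 500):
    a = alpha_rule_v(N, eps)
    assert 0.0 < a < 1.0
    # By construction tan(a*pi/4) = eps^(1/N), so cot(a*pi/4)^(-N) = eps.
    assert np.isclose(np.tan(a * np.pi / 4.0) ** N, eps, rtol=1e-6)

# Asymptotic behaviour: alpha_N = 1 + 2 log(eps) / (N pi) + O(1/N^2).
N = 500
assert abs(alpha_rule_v(N, eps) - (1.0 + 2.0 * np.log(eps) / (N * np.pi))) < 1e-4
```

Note that for small $N$ (here $N = 20$ with $\epsilon = 10^{-14}$) the rule produces $\alpha_N$ well below one; only for $N \gg 2|\log\epsilon|/\pi$ does $\alpha_N$ approach its limiting linear-in-$1/N$ behaviour.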

Note that in case (i), $P_N^\alpha = P_N$ and therefore $F_{N,Z}^0$ is just an algebraic polynomial least-squares fit. In case (iii) we shall see in §3.2.2 that $P_N^1$ is similar to the space of trigonometric polynomials, and has correspondingly poor approximation properties. Choices (iv) and (v) involve varying $\alpha$ with $N$. The particular choice of $\alpha_N$ in (v), originally due to Kosloff and Tal-Ezer [6], will be explained in §3.2.4.

3. Analysis of the mapped polynomial method. Having introduced the method, we will now analyze its stability and convergence properties and how they depend on the mapping parameter and the oversampling rate.

3.1. Stability and convergence. We first define the condition number of the approximation. Since $F_{N,Z}^\alpha$ is linear, its $L^\infty$ condition number is given by

$\kappa = \kappa_{N,Z}^\alpha = \sup_{f \in L^\infty(-1,1),\, \|f\|_{Z,\infty} \ne 0} \bigg\{ \frac{\| F_{N,Z}^\alpha(f) \|_\infty}{\| f \|_{Z,\infty}} \bigg\}$.

It transpires that $\kappa$ is a little difficult to analyze in practice. Thus we work with the smaller quantity

$\tilde{\kappa} = \tilde{\kappa}_{N,Z}^\alpha = \sup_{p_\alpha \in P_N^\alpha,\, p_\alpha \ne 0} \bigg\{ \frac{\| p_\alpha \|_\infty}{\| p_\alpha \|_{Z,\infty}} \bigg\} = \sup_{p_\alpha \in P_N^\alpha,\, \|p_\alpha\|_{Z,\infty} \ne 0} \bigg\{ \frac{\| p_\alpha \|_\infty}{\| p_\alpha \|_{Z,\infty}} \bigg\}$.

Note that the second equality follows from $|Z| = M + 1 \ge N + 1$. Indeed, since $P_N^\alpha$ consists of mapped polynomials, $p_\alpha = 0$ if and only if $\| p_\alpha \|_{Z,\infty} = 0$. The following lemma relates $\tilde{\kappa}$ to the condition number $\kappa$:

Lemma 3.1. We have $\tilde{\kappa} \le \kappa \le \sqrt{\sigma}\, \tilde{\kappa}$, where

$\sigma = \pi \Big/ \min_{n=0,\ldots,M} \{ \mu_n \}$.

Proof. Since $M \ge N$ by assumption, and since the points $z_0, \ldots, z_M$ are distinct, the matrix $A$ has full column rank. Hence $F_{N,Z}^\alpha(f)$ exists uniquely for any $f \in L^\infty(-1,1)$. The operator $F_{N,Z}^\alpha$ is also a projection onto $P_N^\alpha$. Therefore

$\kappa \ge \sup_{p_\alpha \in P_N^\alpha,\, \|p_\alpha\|_{Z,\infty} \ne 0} \bigg\{ \frac{\| F_{N,Z}^\alpha(p_\alpha) \|_\infty}{\| p_\alpha \|_{Z,\infty}} \bigg\} = \sup_{p_\alpha \in P_N^\alpha,\, \|p_\alpha\|_{Z,\infty} \ne 0} \bigg\{ \frac{\| p_\alpha \|_\infty}{\| p_\alpha \|_{Z,\infty}} \bigg\} = \tilde{\kappa}$,

which gives the lower bound. For the upper bound, we first note that

$\| F_{N,Z}^\alpha(f) \|_\infty \le \tilde{\kappa}\, \| F_{N,Z}^\alpha(f) \|_{Z,\infty}$.

We now use the variational form (2.10). Setting $p_\alpha = \tilde{f} = F_{N,Z}^\alpha(f)$ and using the Cauchy–Schwarz inequality for the discrete inner product $\langle \cdot, \cdot \rangle_Z$, we find that

$\| F_{N,Z}^\alpha(f) \|_Z \le \| f \|_Z$.

Fig. 1. Numerically estimated condition numbers $\tilde{\kappa}$ for equispaced nodes with $M = N$ and two values of the mapping parameter, $\alpha = 0.5$ and $0.9$. The solid lines present the values of $\tilde{\kappa}$, while the dashed lines are the bounds in Lemma 3.1.

Thus

$\| F_{N,Z}^\alpha(f) \|_{Z,\infty} \le \frac{\| F_{N,Z}^\alpha(f) \|_Z}{\sqrt{\min_{n=0,\ldots,M} \{\mu_n\}}} \le \frac{\| f \|_Z}{\sqrt{\min_{n=0,\ldots,M} \{\mu_n\}}} \le \sqrt{\frac{\sum_{n=0}^{M} \mu_n}{\min_{n=0,\ldots,M} \{\mu_n\}}}\, \| f \|_{Z,\infty}$.

Observe that

$\sum_{n=0}^{M} \mu_n = \frac{1}{2}\Big( \pi + \sin^{-1}\big(m_\alpha(z_M)\big) - \sin^{-1}\big(m_\alpha(z_0)\big) \Big) \le \int_{-1}^{1} \frac{dy}{\sqrt{1-y^2}} = \pi$,

where the equality with $\pi$ holds when $z_0 = -1$ and $z_M = 1$. Hence

$\| F_{N,Z}^\alpha(f) \|_\infty \le \tilde{\kappa} \sqrt{\sigma}\, \| f \|_{Z,\infty}$,

and the first result now follows immediately.

Note that $\tilde{\kappa}$, and therefore $\kappa$, determines the condition number of the approximation $F_{N,Z}^\alpha$. Figure 1 shows how tight the bounds in Lemma 3.1 are for equispaced nodes. In this figure, $M = N$ was used for $\alpha = 0.5$ and $\alpha = 0.9$. Notice in particular that $\tilde{\kappa}$ is very close to $\kappa$.

We next consider the error of the approximation:

Theorem 3.2. We have

$\| f - F_{N,Z}^\alpha(f) \|_\infty \le \big( 1 + \sqrt{\sigma}\, \tilde{\kappa} \big) E_N^\alpha(f), \qquad E_N^\alpha(f) = \inf_{p_\alpha \in P_N^\alpha} \| f - p_\alpha \|_\infty$.

Proof. For any $p_\alpha \in P_N^\alpha$,

$\| f - F_{N,Z}^\alpha(f) \|_\infty \le \| f - p_\alpha \|_\infty + \| F_{N,Z}^\alpha(f - p_\alpha) \|_\infty \le \| f - p_\alpha \|_\infty + \kappa \| f - p_\alpha \|_{Z,\infty}$.

We now use Lemma 3.1.

This result shows that the error of the mapped polynomial approximation decouples into a term $\tilde{\kappa}$ depending on the data and a term $E_N^\alpha(f)$ that is independent of the data and depends only on the parameters $\alpha$ and $N$. Clearly, our interest lies in

choosing $\alpha$ and $N$ such that $E_N^\alpha(f)$ is as small as possible. But this must be balanced against the fact that the best choices for minimizing $E_N^\alpha(f)$ may lead to a large condition number $\tilde{\kappa}$. We dedicate §4 to the issue of balancing these parameters, based on an estimate for $\tilde{\kappa}$ which we derive there. In order to do so, however, it is first necessary to consider the behaviour of the best approximation error $E_N^\alpha(f)$.

3.2. Behaviour of the best approximation error. We now consider the decay rate of $E_N^\alpha(f)$ with respect to $N$ for the five choices of $\alpha$ introduced in §2.4.

3.2.1. The case of fixed $0 \le \alpha < 1$. We shall focus on analytic functions. Let

$B(\rho) = \Big\{ \tfrac{1}{2}\big( \rho e^{i\theta} + \rho^{-1} e^{-i\theta} \big) : \theta \in [-\pi, \pi) \Big\} \subseteq \mathbb{C}$

be the usual Bernstein ellipse in the complex $y$-plane with index $\rho$, and write $D_\alpha(\rho) \subseteq \mathbb{C}$ for the image of $B(\rho)$ in the complex $x$-plane under the inverse mapping $x = m_\alpha^{-1}(y)$. Then we have the following:

Theorem 3.3. Let $N \in \mathbb{N}$ and $0 \le \alpha < 1$ be given, and suppose that $f$ is analytic in $D_\alpha(\rho^*)$ for some $\rho^* > 1$ and continuous on its boundary. Then

$E_N^\alpha(f) \le \frac{2 c_\alpha(f)}{\rho_1 - 1}\, \rho_1^{-N}, \qquad c_\alpha(f) = \max_{z \in D_\alpha(\rho^*)} |f(z)|$,

where

$\rho_1 = \min\big\{ \cot(\alpha\pi/4),\, \rho^* \big\}$ for $0 < \alpha < 1$, and $\rho_1 = \rho^*$ for $\alpha = 0$.

Proof. Note that

(3.1)  $E_N^\alpha(f) = \inf_{p_\alpha \in P_N^\alpha} \| f - p_\alpha \|_\infty = \inf_{p \in P_N} \| g_\alpha - p \|_\infty$,

where $g_\alpha = f \circ m_\alpha^{-1}$. The mapping $x = m_\alpha^{-1}(y)$ has singularities at $y = \pm 1/\sin(\alpha\pi/2)$. Note that if $\rho$ satisfies $\frac{1}{2}(\rho + \rho^{-1}) = 1/\sin(\alpha\pi/2)$, then $\rho = \cot(\alpha\pi/4)$. Since $f$ is analytic in $D_\alpha(\rho^*)$, the function $g_\alpha$ is therefore analytic in the Bernstein ellipse $B(\rho_1)$. The standard Bernstein estimate for best uniform approximation by polynomials (see [37, Chpt. 8], for example) now gives the result.

Note that when $\alpha = 0$, i.e. when $P_N^\alpha = P_N$, we have $\rho_1 = \rho^*$ and we recover the usual result for polynomial approximation. For all other values of $\alpha$, the rate of geometric convergence of $E_N^\alpha(f)$ is limited to at most $\cot(\alpha\pi/4)$ by the singularity introduced by the inverse mapping $m_\alpha^{-1}$.
Nevertheless, for any fixed $0 \le \alpha < 1$ the convergence rate remains geometric, in contrast to the cases described next.

3.2.2. The case $\alpha = 1$. We first require the following result:

Lemma 3.4. For even $N$, the space $P_N^\alpha$ defined by (2.4) has the equivalent expression

$P_N^\alpha = \bigg\{ \sum_{n=0}^{N/2} a_n \cos(\alpha n \pi x) + \sum_{n=1}^{N/2} b_n \sin\big(\alpha (n - \tfrac{1}{2}) \pi x\big) : a_n, b_n \in \mathbb{C} \bigg\}$.

The set $\{\cos(\alpha n \pi x)\}_{n=0}^{\infty} \cup \{\sin(\alpha(n-\frac{1}{2})\pi x)\}_{n=1}^{\infty}$ is precisely the orthogonal basis of eigenfunctions of the Laplace operator on $[-1/\alpha, 1/\alpha]$ subject to homogeneous Neumann boundary conditions.

Proof. By [36, Lem. ], one has

$\cos(\alpha n \pi x) = (-1)^n T_{2n}\big( \sin(\tfrac{\alpha\pi x}{2}) \big) = (-1)^n T_{2n}\big( \sin(\tfrac{\alpha\pi}{2})\, m_\alpha(x) \big)$,

and also

$\sin\big(\alpha(n - \tfrac{1}{2})\pi x\big) = (-1)^{n+1} T_{2n-1}\big( \sin(\tfrac{\alpha\pi x}{2}) \big) = (-1)^{n+1} T_{2n-1}\big( \sin(\tfrac{\alpha\pi}{2})\, m_\alpha(x) \big)$.

The functions $\{T_{2n}(\sin(\frac{\alpha\pi}{2}) m_\alpha(x))\}_{n=0}^{N/2} \cup \{T_{2n-1}(\sin(\frac{\alpha\pi}{2}) m_\alpha(x))\}_{n=1}^{N/2}$ form a basis for $P_N^\alpha$. Hence the result follows.

This lemma shows that $P_N^1$ is closely related to the space of trigonometric polynomials (the case of odd $N$ is similar, and hence is omitted). Indeed, if the factor $(n - \frac{1}{2})$ were replaced by $n$, then $P_N^1$ would be precisely the space of trigonometric polynomials on $[-1,1]$. For a comparison of trigonometric polynomial approximation and approximation with Laplace–Neumann eigenfunctions we refer to [, 3]. One difference is that Laplace–Neumann approximations converge uniformly, whereas trigonometric polynomial approximations do not. Specifically, in [] it was shown that $E_N^1(f) \to 0$ as $N \to \infty$ whenever $f \in H^1(-1,1)$, where $H^1(-1,1)$ denotes the standard Sobolev space of order one. However, the convergence rate is limited to $O(N^{-1})$ unless $f$ obeys specific endpoint conditions, analogous to periodicity conditions in trigonometric polynomial approximation. Hence, in general, the choice $\alpha = 1$ results in lower orders of approximation (specifically, algebraic with index one), even for analytic $f$.

3.2.3. The case $1 - \alpha \sim \alpha_0/N^\sigma$ as $N \to \infty$. Suppose now that

(3.2)  $\alpha = \alpha_N, \qquad 1 - \alpha_N \sim \alpha_0/N^\sigma, \quad N \to \infty$,

where $\alpha_0 > 0$ and $0 < \sigma < 1$ are fixed. To estimate the convergence rate in this case, we use Theorem 3.3. Observe that

$\cot(\alpha_N \pi/4) \sim 1 + \frac{\alpha_0 \pi}{2 N^\sigma}, \quad N \to \infty$,

and therefore

$\rho_1^{N} \sim \Big( 1 + \frac{\alpha_0 \pi}{2 N^\sigma} \Big)^{N} \sim \big( \exp(\alpha_0 \pi/2) \big)^{N^{1-\sigma}}, \quad N \to \infty$.

Hence for $0 < \sigma < 1$ we have subgeometric convergence with index $1 - \sigma$. Note that when $\sigma = 1$, there is no decay.

3.2.4. The case $\alpha = \frac{4}{\pi} \arctan(\epsilon^{1/N})$. This final choice of $\alpha$ is motivated by Theorem 3.3.
Let $\epsilon > 0$ be a fixed, user-controlled tolerance. Since the error is proportional to $(\cot(\alpha\pi/4))^{-N}$, the choice

(3.3)  $\alpha = \alpha_N = \frac{4}{\pi} \arctan\big( \epsilon^{1/N} \big)$

gives

$\big( \cot(\alpha_N \pi/4) \big)^{-N} = \epsilon$.

Hence, provided $\epsilon$ is chosen sufficiently small (e.g. on the order of machine precision), we expect a small approximation error, even though rapid classical convergence of $E_N^\alpha(f)$ down to zero is no longer guaranteed. Observe that

(3.4)  $\alpha_N = 1 + \frac{2 \log \epsilon}{N \pi} + O(N^{-2}), \quad N \to \infty$,

in this case. Hence, in the previous notation, we have $\sigma = 1$ and $\alpha_0 = 2|\log\epsilon|/\pi$.

4. Parameter choices. As discussed in the previous section, given a set of data $Z$ with maximal spacing $h$, one must choose $N$ and $\alpha$ in such a way as to keep the condition number $\tilde{\kappa}$ small, whilst at the same time minimizing the approximation error $E_N^\alpha(f)$. In this section we address this issue.

4.1. A Markov inequality for $P_N^\alpha$. We first require the following:

Theorem 4.1. We have

(4.1)  $\| (p_\alpha)' \|_\infty \le C_N^\alpha\, \| p_\alpha \|_\infty, \quad \forall p_\alpha \in P_N^\alpha$,

where

(4.2)  $C_N^\alpha = \frac{\alpha\pi}{2} \big( N + \cot(\alpha\pi/2)\, N^2 \big)$.

Recall that the classical Markov inequality for algebraic polynomials states that

(4.3)  $\| p' \|_\infty \le N^2 \| p \|_\infty, \quad \forall p \in P_N$,

where the constant $N^2$ is sharp; see [6, Theorem 5.1.8], for example. Conversely, for trigonometric polynomials, Bernstein's inequality [6, Theorem 5.1.4] gives

(4.4)  $\| p' \|_\infty \le \frac{N\pi}{2} \| p \|_\infty, \quad \forall p \in T_N$,

where $T_N = \big\{ \sum_{n=-N/2}^{N/2} a_n e^{i n \pi x} : a_n \in \mathbb{C} \big\}$ is the space of trigonometric polynomials. Theorem 4.1 gives a Markov inequality for the spaces $P_N^\alpha$, $0 < \alpha < 1$. Note that the bound (4.2) reduces to (4.3) in the limit $\alpha \to 0^+$. Similarly, it reduces to (4.4) when $\alpha = 1$, except, of course, that $P_N^1$ is not the space $T_N$ of trigonometric polynomials but the space of Laplace–Neumann eigenfunctions on $[-1,1]$ (see Lemma 3.4). Nevertheless, one can show that $P_N^1$ satisfies the same Bernstein inequality (4.4) as $T_N$ [3]. In other words, the general Markov inequality (4.1) is sharp in the extreme cases $\alpha = 0$ and $\alpha = 1$.

Proof. Recall that $p_\alpha(x) = p(m_\alpha(x)) = p(y)$, where $p \in P_N$. We have $\| p_\alpha \|_\infty = \| p \|_\infty$ and

$(p_\alpha)'(x) = p'(y)\, m_\alpha'(x) = \frac{\alpha\pi}{2\beta}\, p'(y) \sqrt{1 - \beta^2 y^2}$,

where $\beta$ is as in (2.2). Thus

(4.5)  $\| (p_\alpha)' \|_\infty = \frac{\alpha\pi}{2\beta} \max_{-1 \le y \le 1} |p'(y)| \sqrt{1 - \beta^2 y^2}$.
We now recall the following inequality for algebraic polynomials, due to Bernstein (see, for example, [6, Theorem 5.1.7]):

(4.6)  $|p'(y)| \sqrt{1 - y^2} \le N \| p \|_\infty, \quad \forall p \in P_N, \ -1 \le y \le 1$.

Now consider $|p'(y)| \sqrt{1 - \beta^2 y^2}$. Let $0 < \tau < 1$ and suppose first that $|y| \le \tau$. Then

$|p'(y)| \sqrt{1 - \beta^2 y^2} = \sqrt{\frac{1 - \beta^2 y^2}{1 - y^2}}\, |p'(y)| \sqrt{1 - y^2} = \sqrt{\beta^2 + \frac{1 - \beta^2}{1 - y^2}}\, |p'(y)| \sqrt{1 - y^2}$.

Thus, by (4.6) and the assumption on $y$,

$|p'(y)| \sqrt{1 - \beta^2 y^2} \le \sqrt{\beta^2 + \frac{1 - \beta^2}{1 - \tau^2}}\, N \| p \|_\infty, \quad |y| \le \tau$.

Now suppose that $\tau \le |y| \le 1$. Then, by Markov's inequality (4.3),

$|p'(y)| \sqrt{1 - \beta^2 y^2} \le \sqrt{1 - \beta^2 \tau^2}\, N^2 \| p \|_\infty, \quad \tau \le |y| \le 1$.

Combining this with the previous estimate, we obtain

$\max_{-1 \le y \le 1} |p'(y)| \sqrt{1 - \beta^2 y^2} \le \max\Big\{ \sqrt{\beta^2 + \tfrac{1-\beta^2}{1-\tau^2}}\, N,\ \sqrt{1 - \beta^2\tau^2}\, N^2 \Big\} \| p \|_\infty$.

We now set $\tau^2 = 1 - 1/N^2$, whence both terms are bounded by $\big( \beta N + \sqrt{1-\beta^2}\, N^2 \big) \| p \|_\infty$, substitute into (4.5), and use the definition of $\beta$ to deduce the result.

The Markov inequality (4.1) gives the following estimate for the condition number:

Theorem 4.2. The condition number $\tilde{\kappa}_{N,Z}^\alpha \le \big( 1 - h C_N^\alpha/2 \big)^{-1}$ whenever $h C_N^\alpha < 2$, where $C_N^\alpha$ is as in (4.2). In particular, suppose that $c \ge 1$ is fixed. Then $\tilde{\kappa}_{N,Z}^\alpha \le c$ whenever $N$ and $\alpha$ satisfy

(4.7)  $N + \cot(\alpha\pi/2)\, N^2 \le \frac{4(1 - 1/c)}{\alpha\pi h}$.

Proof. Let $x \in [-1,1]$. Then there exists an $m \in \{0, \ldots, M\}$ such that $|x - z_m| \le h/2$. By the mean value theorem,

$|p_\alpha(x)| \le |p_\alpha(z_m)| + |x - z_m|\, \| (p_\alpha)' \|_\infty \le \| p_\alpha \|_{Z,\infty} + \frac{h C_N^\alpha}{2} \| p_\alpha \|_\infty$.

Taking the supremum over $x \in [-1,1]$ and rearranging now gives

$\| p_\alpha \|_\infty \le \big( 1 - h C_N^\alpha/2 \big)^{-1} \| p_\alpha \|_{Z,\infty}$.

Since this holds for all $p_\alpha \in P_N^\alpha$, we obtain the first result. For (4.7), we note that $\tilde{\kappa}_{N,Z}^\alpha \le c$ provided

$C_N^\alpha \le \frac{2(1 - 1/c)}{h}$.

Substituting the expression (4.2) for $C_N^\alpha$ now gives (4.7).
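The Markov inequality (4.1) with the constant of (4.2) is easy to probe numerically. The sketch below is ours, not the paper's code: it draws random mapped polynomials and checks that the ratio of sup-norms of derivative to function never exceeds $C_N^\alpha = \frac{\alpha\pi}{2}(N + \cot(\alpha\pi/2)N^2)$, using the chain rule $(p_\alpha)' = p'(y)\, m_\alpha'(x)$ from the proof above.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 4001)

for alpha in (0.3, 0.7, 0.95):
    beta = np.sin(alpha * np.pi / 2.0)
    y = np.sin(alpha * np.pi * x / 2.0) / beta               # y = m_alpha(x)
    dydx = (alpha * np.pi / (2.0 * beta)) * np.cos(alpha * np.pi * x / 2.0)
    for N in (5, 20, 50):
        # Constant of the Markov inequality (4.2).
        C = (alpha * np.pi / 2.0) * (N + N**2 / np.tan(alpha * np.pi / 2.0))
        for _ in range(5):
            coef = rng.standard_normal(N + 1)                # random p in P_N
            p = np.polynomial.chebyshev.chebval(y, coef)
            dp = np.polynomial.chebyshev.chebval(
                y, np.polynomial.chebyshev.chebder(coef))
            ratio = np.max(np.abs(dp * dydx)) / np.max(np.abs(p))
            assert ratio <= C
```

Random coefficient vectors are of course far from extremal, so the observed ratios sit well below $C_N^\alpha$; near-extremal behaviour is attained by $T_N(m_\alpha(x))$ itself.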

4.2. The choice of $N$ and $\alpha$. With Theorem 4.2 in hand, we may now consider how to select the parameter $N$ for the choices of $\alpha$ listed in §2.4.

4.2.1. The case $\alpha = 0$. Recall that $P_N^0$ is the space $P_N$ of polynomials of degree $N$. Hence, taking the limit $\alpha \to 0^+$ in (4.7) (where $\frac{\alpha\pi}{2}\cot(\alpha\pi/2) \to 1$), the sufficient condition for a bounded condition number reduces to

$N^2 \le \frac{2(1 - 1/c)}{h}$,

i.e. $N = O(1/\sqrt{h})$ as $h \to 0$. Since $E_N^0(f)$ decays geometrically fast in $N$, this translates into root-exponential convergence in $1/h$ as $h \to 0$. In other words, although algebraic polynomial approximations have good intrinsic approximation properties, they result in severe scalings of $N$ with $h$, leading to a less than desirable effective convergence rate in terms of $h$.

4.2.2. The case $\alpha = 1$. At the other extreme, when $\alpha = 1$, condition (4.7) reads

$N \le \frac{4(1 - 1/c)}{\pi h}$.

Hence a bounded condition number is ensured with a linear scaling $N = O(1/h)$. However, as discussed in §3.2.2, the best approximation error $E_N^1(f)$ decays only very slowly in this case. Whilst setting $\alpha = 1$ overcomes the unpleasant scaling of the $\alpha = 0$ case, it destroys the beneficial approximation properties.

4.2.3. The case of fixed $0 < \alpha < 1$. In this case, (4.7) results in the sufficient condition $N = O(1/\sqrt{h})$ as $h \to 0$. Although the constant improves as $\alpha$ gets closer to one, this is the same asymptotic scaling as in the case $\alpha = 0$, and it leads to the same root-exponential decay of the error in terms of $h$.

As discussed in §1.3, a result of [30] states that no stable algorithm for approximating functions from equispaced data can converge better than root-exponentially fast in the number of points $M$. This result means that the sufficient condition $N = O(1/\sqrt{h})$ derived for the cases $0 \le \alpha < 1$, which is equivalent to $N = O(\sqrt{M})$ for equispaced data, is not just sufficient but also necessary. If $N$ were to scale more rapidly with $M$, then the approximation would necessarily be ill-conditioned.

4.2.4. The case $1 - \alpha \sim \alpha_0/N^\sigma$ as $N \to \infty$. As in (3.2), suppose that $\alpha = \alpha_N$ with $1 - \alpha_N \sim \alpha_0/N^\sigma$ as $N \to \infty$, where $0 < \sigma < 1$ and $\alpha_0 > 0$.
Then for large $N$ the left-hand side of (4.7) behaves like

$N + \cot(\alpha_N \pi/2)\, N^2 \sim \frac{\alpha_0 \pi}{2} N^{2-\sigma}$,

and therefore (4.7) results in the condition

$N \le \Big( \frac{8(1 - 1/c)}{\alpha_0 \pi^2 h} \Big)^{1/(2-\sigma)} \big( 1 + o(1) \big), \quad N \to \infty$.

In other words, we require $N = O(h^{-1/(2-\sigma)})$ as $h \to 0$. Thus, by taking $\sigma$ close to one, we reduce the scaling to almost linear in $1/h$. But recall that the decay rate of $E_N^{\alpha_N}(f)$ in this case is subgeometric with index $1 - \sigma$. This means that

$\| f - F_{N,Z}^{\alpha_N}(f) \|_\infty = O\Big( c^{-(1/h)^{(1-\sigma)/(2-\sigma)}} \Big), \quad h \to 0$,

for some $c > 1$, whenever $N = O(h^{-1/(2-\sigma)})$ and $\alpha_N$ is as in (3.2) with $0 < \sigma < 1$. In other words, the effective convergence rate is subgeometric in $1/h$ with index $(1-\sigma)/(2-\sigma)$, and this index drops to zero as $\sigma$ approaches one.

4.2.5. The case $\alpha = \frac{4}{\pi} \arctan(\epsilon^{1/N})$. Suppose now that $\alpha_N$ is given by (3.3) for some $\epsilon > 0$. Due to (3.4), we find that (4.7) gives

$N \le \frac{4(1 - 1/c)}{\pi \big( 1 + |\log \epsilon| \big) h} + o(h^{-1}), \quad h \to 0$.

Hence a linear scaling $N = O(1/h)$ suffices in this case, much as in the case $\alpha = 1$. However, unlike that case, we expect high accuracy from this approach, provided $\epsilon$ is sufficiently small. Note that this does not contradict the aforementioned impossibility theorem [30], since this choice of $\alpha_N$ does not lead to high-order classical convergence down to zero, but only down to approximately $\epsilon$.

5. The condition number of the matrix $A$. In the previous section, we demonstrated stability and accuracy, provided $\alpha$ and $N$ scale in the appropriate manner with $h$. Yet, as discussed in §2.3, it is important that the condition number of the matrix $A$ also remains bounded as $h \to 0$ for the same choices of $\alpha$ and $N$. If it does, the number of conjugate gradient iterations required to compute the approximation is $O(1)$, irrespective of the problem size. We now show that this is indeed the case.

Theorem 5.1. Suppose that $h \le 1/2$. Then the condition number of the matrix $A$ satisfies

$\kappa(A) \le \sqrt{ \frac{1 + \Theta(\alpha, N, h)}{1 - \Theta(\alpha, N, h)} }$,

provided $\Theta(\alpha, N, h) < 1$, where, for any $0 < \delta < N$,

(5.1)  $\Theta(\alpha, N, h) \le c \Big( \sqrt{Nh} + N \sqrt{h(1-\alpha)} + h N^2 (1-\alpha) + \delta + \frac{hN}{\beta \sqrt{1 - \delta^2/N^2}} \Big)$,

for some constant $c > 0$ independent of $\delta$, $\alpha$, $N$ and $h$, and where $\beta = \sin(\alpha\pi/2)$. The proof of this theorem is given in the appendix.

Corollary 5.2. Consider the following three cases: (i) $0 \le \alpha < 1$ fixed; (ii) $\alpha = 1$; (iii) $\alpha = \alpha_N \to 1$ as $N \to \infty$ with $1 - \alpha_N \sim \alpha_0/N^\sigma$ for $0 < \sigma \le 1$ and $\alpha_0 > 0$. For each $\epsilon > 0$, there exists a $c_0(\epsilon) > 0$ such that

$\kappa(A) \le \sqrt{ \frac{1 + \epsilon}{1 - \epsilon} }$,

provided $N \le c_0(\epsilon)\, h^{-\gamma}$, where $\gamma$ satisfies (i) $\gamma = 1/2$, (ii) $\gamma = 1$, (iii) $\gamma = 1/(2-\sigma)$.

Proof. By Theorem 5.1, it suffices to provide conditions under which Θ(α, N, h) is bounded away from 1.

Suppose first that α = 1. Then (5.1) gives

    Θ(1, N, h) ≤ c ( Nh + hN/δ + hNδ + δ ).

Setting δ = √(Nh) gives

    Θ(1, N, h) ≤ c ( √(Nh) + √N h ),

for some constant c independent of N and h. Hence, taking Nh sufficiently small ensures that Θ(1, N, h) is bounded away from one.

Next suppose that 0 ≤ α < 1 is fixed. Then (5.1) reduces to

    Θ(α, N, h) ≤ c ( N²h + h²N⁴/δ + δ ).

Setting δ = hN² now gives Θ(α, N, h) ≤ c ( N²h + hN² ), as required.

Finally, consider the case 1 − α_N ~ α₀/N^σ for fixed α₀ > 0 and 0 < σ ≤ 1. Note that sin(α_N π/2) → 1 as N → ∞, uniformly in h, and also that

    N √(h(1 − α_N)) ≤ √(α₀ h N^{2−σ}),    hN²(1 − α_N) ≤ α₀ h N^{2−σ},

and

    hN β(1 − δ²/N²) = O( h N^{2−σ} ),   N → ∞, h → 0.

It follows that the right-hand side of (5.1) is small provided N = O(h^{−1/(2−σ)}), as required.

Comparing this with §4, we note that the same scalings of N with h that ensure stability and accuracy of the approximation also ensure good conditioning of the linear system. Unfortunately, unlike in §4, we have no explicit values for the constant in this case. Although careful bookkeeping in the proof would give such a constant, it would likely be woefully pessimistic. However, we note that A is at least invertible for any choice of N, α and h, provided the number of points M ≥ N. Moreover, good conditioning of A can easily be checked numerically.

6. Fast implementation. The matrix A of the mapped polynomial method can be written as A = W F C, where W = diag( √μ₀, ..., √μ_M ) ∈ C^{(M+1)×(M+1)} and C = diag( √(1/π), √(2/π), ..., √(2/π) ) ∈ C^{(N+1)×(N+1)} are diagonal, and

    F = { cos( n cos^{−1}(m_α(z_m)) ) }_{m=0,n=0}^{M,N} ∈ C^{(M+1)×(N+1)}.

Our goal is to find the coefficient vector a that minimizes ‖Aa − b‖, where the entries of the vector b are given by b_m = √μ_m f(z_m). In our computations, the points z_m are either equispaced or scattered on the interval [−1, 1].
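To make the structure of A concrete, here is a small dense-matrix sketch (our illustration, not the paper's implementation; the map m_α(z) = sin(απz/2)/sin(απ/2) is reconstructed from the text, and the NFFT/NFCT machinery is deliberately omitted):

```python
import math

def m_alpha(z, alpha):
    # Mapped node: m_alpha(z) = sin(alpha*pi*z/2) / sin(alpha*pi/2)
    return math.sin(alpha * math.pi * z / 2.0) / math.sin(alpha * math.pi / 2.0)

def build_F(zs, N, alpha):
    # F[m][n] = cos(n * arccos(m_alpha(z_m))) = T_n(m_alpha(z_m)),
    # i.e. an (M+1) x (N+1) mapped Chebyshev Vandermonde-type matrix.
    rows = []
    for z in zs:
        y = max(-1.0, min(1.0, m_alpha(z, alpha)))  # clamp rounding noise
        t = math.acos(y)
        rows.append([math.cos(n * t) for n in range(N + 1)])
    return rows

M, N, alpha = 20, 10, 0.9
zs = [-1.0 + 2.0 * m / M for m in range(M + 1)]  # equispaced sample points
F = build_F(zs, N, alpha)
print(len(F), len(F[0]))  # (M+1) rows, (N+1) columns
```

Scaling the rows by √μ_m and the columns by the normalization constants √(1/π), √(2/π), ... then yields A = WFC; for large M one would replace the explicit matrix by NFCT-based matrix-vector products, as described in the text.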

The operations a ↦ Fa and b ↦ F*b can both be evaluated in O(M log M) operations using an NFFT package. In our implementation we use the NFFT library developed at the Mathematical Institute of the University of Lübeck [5, 3]. This library also includes the nonuniform fast cosine transform (NFCT), which implements the transformation performed by F. In order to find the expansion coefficients, we solve the least-squares problem with the LSQR algorithm [7]. In our experiments we use MATLAB's LSQR function with tolerance 10^{−14} and the NFCT with the default parameters provided in the NFFT library. The number of iterations required by the solver is proportional to the square root of the condition number of A. Hence, as long as A remains well-conditioned, the computational cost for finding the expansion coefficients is approximately O(M log M) operations. The performance of the NFFT implementation is demonstrated in the next section.

7. Numerical examples. In this section we present numerical results for approximations on [−1, 1]. Figure 2 shows the condition numbers (Lebesgue constants) κ^α_N for equispaced nodes for several values of M and four choices of the least-squares aspect ratio. As expected, when M/N = 1, κ^α_N is too large for practical computations. The condition number improves significantly as the oversampling rate is increased. The bottom-left panel of Fig. 2 indicates that κ^α_N is approximately 10³ if α = 1 + 2 log(10^{−14})/(Nπ), which is how we chose α in all computations for the remainder of this paper.

Figure 3 confirms our results from §5 that the choice α = 1 + 2 log(ɛ)/(Nπ) leads to stable computations when the least-squares process is computed using mapped Chebyshev polynomials as the approximation basis. Notice that κ(A) grows at a sub-algebraic rate and that for M/N = 2 it remains roughly 10³ for practical values of M. This indicates that, in double precision, oversampling by a factor of 2 is sufficient if the desired accuracy is roughly 10^{−13}.
We present the error in the approximation of analytic functions in Figure 4. In all four cases the mapped polynomial approximations were obtained with M = 2N. The norm of the error ‖f − F^α_{N,Z}(f)‖ was estimated on a fine grid of 4000 uniformly distributed points. Notice that the method is particularly accurate for the function f₁(x) = 1/(1 + 100x²). In fact, geometric convergence can be observed on this plot. By contrast, polynomial interpolation of f₁ is known to diverge, with the error growing exponentially fast near the ends of the interval. For reference, the errors for cubic splines (not-a-knot) and polynomial least-squares are also included. For stability, the degree of the polynomial least-squares approximation must satisfy N = O(√M) [13]. In our numerical examples, N = 4√M was used. The superior convergence of the mapped approximations can also be observed for the functions f₂(x) = 1/(1 + 6 sin²(7x)) and f₃(x) = sin(100x), which is entire but highly oscillatory. In the latter case, the convergence of the mapped polynomial scheme starts with a resolution of approximately 4.4 points per wavelength. The error then sharply drops several orders of magnitude and then slowly asymptotes to about 10^{−10}. We point out that the condition number of f₃ is 100, and hence a couple of digits of accuracy are expected to be lost (in comparison to the other error plots) to rounding errors. The function f₄(x) = √(1.01 + x) has a singularity near x = −1. In this case, the error decay for approximation on equispaced nodes is significantly slower. The effective rate seems to be sub-geometric for all values of M used in Fig. 4. Moreover, the polynomial least-squares is slightly more accurate than the mapped approximation. In contrast to the Runge function f₁, which has poles near x = 0, f₄ is most difficult

Fig. 2. Numerically estimated condition numbers κ^α_N for several values of α and N. The colormap shows log₁₀(κ^α_N). Four least-squares aspect ratios (number of points / approximation degree) are considered: 1, 1.5, 2, and 2.5. The solid lines represent the curves α = 1 + 2 log(ɛ)/(Nπ), with ɛ = 10^{−4}, 10^{−10}, and 10^{−14}.

Fig. 3. Condition number of the least-squares matrix A for several values of M (number of equispaced data points) and three least-squares aspect ratios M/N. The mapping parameter was chosen so that α = 1 + 2 log(10^{−14})/(Nπ).

Fig. 4. Error in the approximation of four functions sampled at M equispaced points on [−1, 1]. Polynomial least-squares and cubic spline approximations are included for reference. The mapping parameter was chosen so that α = 1 + 2 log(10^{−14})/(Nπ) and M = 2N.

to resolve near one of the endpoints, where only one-sided information is available.

Figure 5 presents the error in the approximation of f₁ and f₄ from scattered data. In this computation the data points were chosen as perturbations of equispaced nodes. More precisely,

    z_m = δ_m + (−1 + 2m/M),   m = 1, ..., M − 1,   z₀ = −1,   z_M = 1,

where the perturbations δ_m were drawn uniformly from the open interval (−1/M, 1/M). The error decay in this case is in good agreement with the equispaced case, as expected.

The numerical results presented in Fig. 4 and Fig. 5 were computed using the NFFT implementation described in §6. Fig. 6 presents the elapsed time required to approximate f₅(x) = 1/(1 + 100 sin²(30x)) on a MacBook Pro laptop (3.06 GHz Intel Core 2 Duo). The number of iterations used by LSQR is also reported in Fig. 6 (right panel). Notice that the number of iterations remains low even when the number of points is more than 10⁴. For reference, the elapsed time for the computation of the least-squares approximation using MATLAB's backslash (which uses a Householder QR factorization) is also included. When M = 10⁴, the LSQR iteration using NFFTs is roughly a thousand times faster than the matrix QR direct solver.
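The scattered grids just described can be generated as follows (a sketch we add for concreteness; the helper name and seed are ours):

```python
import random

def perturbed_grid(M, seed=0):
    """Nodes z_0 = -1, z_M = 1 and, for 0 < m < M,
    z_m = (-1 + 2m/M) + delta_m with delta_m uniform in (-1/M, 1/M)."""
    rng = random.Random(seed)
    zs = [-1.0]
    for m in range(1, M):
        zs.append(-1.0 + 2.0 * m / M + rng.uniform(-1.0 / M, 1.0 / M))
    zs.append(1.0)
    return zs

zs = perturbed_grid(100)
print(len(zs), min(zs), max(zs))
```

Since each interior node moves by less than half the equispaced gap 2/M, the nodes remain distinct and ordered, and the maximal spacing h is at most 4/M.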

Fig. 5. Error in the approximation of two functions sampled at M uniformly scattered points on [−1, 1]. Polynomial least-squares and cubic spline approximations are included for reference.

Fig. 6. Left: Elapsed time required to approximate f₅(x) = 1/(1 + 100 sin²(30x)) using NFFTs and matrix computations. Dashed lines correspond to O(M³), O(M²) and O(M). Right: Number of iterations used by LSQR.

8. Conclusions. The purpose of this paper was to introduce an efficient method for high-accuracy approximation from scattered grids based on mapped polynomials. Through a judicious choice of the parameter α, the method has an asymptotically optimal scaling of the dimension of the approximation space N with the maximal spacing h. While this parameter choice forfeits convergence down to zero, high accuracy is expected if the quantity ɛ is chosen close to machine epsilon. Efficient implementation of the method is achieved using NFFTs.

There are a number of topics we have not addressed in this paper. The first concerns the behaviour of the best approximation error E^α_N(f) for the parameter choice (3.3). Although this choice was derived so as to ensure the error bound of Theorem 3.3 is roughly ɛ for large N, this says nothing about how fast the error decreases in practice. The numerical experiments of the previous section suggest that the error decreases rapidly, at least initially, when it is orders of magnitude bigger than ɛ. But how does one make precise mathematical statements to this effect when, after all, the approximation is not guaranteed to converge? It turns out that this can be done, but it is beyond the scope of this paper. We will report the details in a

future work.

Second, we have only presented our method in one dimension. For functions defined on hypercubes, an extension to higher dimensions using a tensor product of the one-dimensional mapping is conceptually straightforward. We expect that much of the analysis, in particular the various scalings derived in §4, will also carry over to this setting. However, there may be more efficient, non-tensor-product approaches which warrant further investigation.

Other topics for investigation include the choice of mapping m_α. We have used (2.1) throughout; however, there are other possibilities. See [14] for an overview. We leave the question of the best choice of mapping for future work. Somewhat related to this is the optimal choice of α. Here we have used the choice (3.3) due to Kosloff & Tal-Ezer. However, other strategies may bring further benefits.

Acknowledgements. We thank Nick Trefethen and Nick Hale for fruitful discussions on using mapped methods for equispaced approximation. Much of this paper was developed during the authors' visits to the American Institute of Mathematics (AIM). We thank AIM for providing a stimulating research environment. RP was supported in part by AFOSR FA and NSF-DMS5639. BA was supported in part by the Alfred P. Sloan Foundation and NSERC.

Appendix A. Proof of Theorem 5.1. To prove this theorem, we need several preliminary observations. First, let y_n = m_α(z_n), n = 0, ..., M. Then

(A.1)    y_{n+1} − y_n ≤ ‖m′_α‖_∞ h ≤ ( απ / (2 sin(απ/2)) ) h.

Since

    (απ/2) / sin(απ/2) ≤ π/2,   0 ≤ α ≤ 1,

we find that

(A.2)    y_{n+1} − y_n ≤ πh/2,   n = 0, ..., M − 1.

Unfortunately, this shall not be sufficient to prove the theorem, since it does not describe how the points y_n cluster near the endpoints. To this end, we have the following:

Lemma A.1. Suppose that y_n ≤ 0 and that y ∈ [y_n, y_{n+1}]. Then

    |y − y_n| ≤ (πh/2) β(y_n),    |y − y_n| ≤ (πh/2) ( πh + β(y_{n+1}) ),

where β(y) = √(1 − y² sin²(απ/2)). Conversely, if y_{n+1} ≥ 0 and y ∈ [y_n, y_{n+1}], then we have

    |y − y_n| ≤ (πh/2) β(y_{n+1}),    |y − y_n| ≤ (πh/2) ( πh + β(y_n) ).
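The spacing bound (A.2) is easy to sanity-check numerically. The sketch below (our check, not part of the original argument; the map m_α(z) = sin(απz/2)/sin(απ/2) is reconstructed from the text) maps an equispaced grid and compares the maximal mapped spacing with πh/2:

```python
import math

def m_alpha(z, alpha):
    # m_alpha(z) = sin(alpha*pi*z/2) / sin(alpha*pi/2)
    return math.sin(alpha * math.pi * z / 2.0) / math.sin(alpha * math.pi / 2.0)

def max_mapped_spacing(M, alpha):
    # Largest gap y_{n+1} - y_n for y_n = m_alpha(z_n), z_n equispaced, h = 2/M.
    zs = [-1.0 + 2.0 * m / M for m in range(M + 1)]
    ys = [m_alpha(z, alpha) for z in zs]
    return max(b - a for a, b in zip(ys, ys[1:]))

M = 200
h = 2.0 / M
for alpha in (0.25, 0.5, 0.9, 0.999):
    gap = max_mapped_spacing(M, alpha)
    print(alpha, gap, math.pi * h / 2.0, gap <= math.pi * h / 2.0)
```

The bound is attained only in the limit α → 1, where the map degenerates towards the identity-free Fourier-like scaling; for α bounded away from 1 the observed gaps are strictly smaller.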

Proof. Let y = m_α(z) and y_n = m_α(z_n). Then, for y_n ≤ 0, or equivalently z_n ≤ 0,

    |y − y_n| = | sin(απz/2) − sin(απz_n/2) | / sin(απ/2)
             = ( (απ/2) / sin(απ/2) ) |z − z_n| cos(απξ/2),   ξ ∈ [z_n, z],
             ≤ ( (απ/2) / sin(απ/2) ) |z − z_n| cos(απz_n/2)
             ≤ ( (απ/2) / sin(απ/2) ) h √(1 − sin²(απz_n/2))
             = ( απ / (2 sin(απ/2)) ) h β(y_n).

To deduce the first inequality, we use (A.1). For the second, we first note that it suffices to take y = y_{n+1}, and then let z = y_{n+1} − y_n. Then, by the first inequality,

    z² ≤ (π²h²/4) β²(y_n) = (π²h²/4) ( β²(y_{n+1}) + sin²(απ/2) ( y_{n+1}² − y_n² ) ) ≤ (π²h²/4) ( β²(y_{n+1}) + 2 sin²(απ/2) z ).

A simple exercise gives that if z² ≤ cz + d with c, d ≥ 0, then z ≤ c + √d. Hence we obtain

    z ≤ π²h²/2 + (πh/2) β(y_{n+1}),

as required. The case of y_{n+1} ≥ 0 is similar.

We are now ready to prove Theorem 5.1. Throughout the proof, we shall use the notation a ≲ b to mean that there exists a constant c > 0 independent of N, α, Z and p ∈ P_N such that a ≤ cb. Let y_n = m_α(z_n). By Lemma 2.1, we wish to find constants c₁, c₂ > 0 such that

(A.3)    c₁ ‖p‖²_w ≤ Σ_{n=0}^{M} μ_n |p(y_n)|² ≤ c₂ ‖p‖²_w,   p ∈ P_N,

where w(y) = 1/√(1 − y²), in which case the condition number κ(A) ≤ √(c₂/c₁). We now note the following. The weights satisfy μ_n = (1/2)( μ^l_n + μ^r_n ), where

    μ^l_n = ∫_{y_{n−1}}^{y_n} w(y) dy,    μ^r_n = ∫_{y_n}^{y_{n+1}} w(y) dy.

Thus the theorem holds provided (A.3) holds with μ_n replaced by μ^l_n and μ^r_n. By symmetry, it suffices to prove the result for μ^l_n only. That is, we need only show that

    c₁ ‖p‖²_w ≤ Σ_{n=0}^{M} μ^l_n |p(y_n)|² ≤ c₂ ‖p‖²_w,   p ∈ P_N.

Define the function χ(y) = Σ_{n=0}^{M−1} p(y_n) I_{[y_n, y_{n+1})}(y). By definition of μ_n, we have that

    Σ_{n=0}^{M} μ^l_n |p(y_n)|² = ∫ |χ(y)|² w(y) dy = ‖χ‖²_w.

Since

(A.4)    ‖p‖_w − ‖p − χ‖_w ≤ ‖χ‖_w ≤ ‖p‖_w + ‖p − χ‖_w,    ‖p‖²_w = ∫_{−1}^{1} |p(y)|² / √(1 − y²) dy,

it suffices to estimate ‖p − χ‖_w. We have

(A.5)    ‖p − χ‖²_w = Σ_{n=0}^{M−1} ∫_{y_n}^{y_{n+1}} |p(y) − p(y_n)|² w(y) dy + ∫_{−1}^{y_0} |p(y)|² w(y) dy = Σ_n J_n + I.

We now note the following inequality:

(A.6)    ‖p‖_∞ ≤ √( (2N + 1)/π ) ‖p‖_w,   p ∈ P_N.

This follows immediately by expanding p in normalized Chebyshev polynomials c_n T_n, where c₀ = √(1/π) and c_n = √(2/π) otherwise, and using the Cauchy–Schwarz inequality. In particular, this gives

    I = ∫_{−1}^{y_0} |p(y)|² w(y) dy ≤ ‖p‖²_∞ ∫_{−1}^{y_0} w(y) dy ≲ N √(1 + y_0) ‖p‖²_w.

Note that 1 + y_0 = |y_0 − m_α(−1)|. Hence, by Lemma A.1,

    1 + y_0 ≲ h² + h β(−1) = h² + h cos(απ/2) ≲ h² + h(1 − α),

since cos(απ/2) ≤ π(1 − α)/2 for 0 ≤ α ≤ 1. Thus

(A.7)    I ≲ ( Nh + N √(h(1 − α)) ) ‖p‖²_w.

We now focus on the other terms of (A.5). Consider the integral J_n:

    J_n = ∫_{y_n}^{y_{n+1}} | ∫_{y_n}^{y} p′(t) dt |² w(y) dy
        ≤ ∫_{y_n}^{y_{n+1}} ( ∫_{y_n}^{y} |p′(t)|² √(1 − t²) dt ) ( ∫_{y_n}^{y} dt / √(1 − t²) ) w(y) dy
(A.8)   ≤ ( ∫_{y_n}^{y_{n+1}} dy / √(1 − y²) )² ∫_{y_n}^{y_{n+1}} |p′(t)|² √(1 − t²) dt.

We wish to estimate the first integral. To do so, let 0 < ɛ < 1 be a parameter (whose

value we choose later). Suppose first that y_n ≤ 0. By Lemma A.1,

    ∫_{y_n}^{y_{n+1}} dy / √(1 − y²) ≤ ( y_{n+1} − y_n ) / √(1 − y_{n+1}²)
        ≲ ( h² + h β(y_{n+1}) ) / √(1 − y_{n+1}²)
        ≲ h² / √ɛ + h + h (1 − α) / √ɛ
        ≲ ( h / √ɛ ) ( h + √ɛ + (1 − α) ).

Hence

(A.9)    Σ_{n: y_n ≤ 0, y_{n+1} < 1 − ɛ} J_n ≲ N² ( h²/ɛ + h² + h(1 − α)/ɛ ) ‖p‖²_w.

Note that the second step is due to the inequality ‖p′‖_{1/w} ≲ N ‖p‖_w (see, for example, [6, (5.1.5)]). Near-identical arguments also give

(A.10)   Σ_{n: y_{n+1} ≥ 0, y_n > −1 + ɛ} J_n ≲ N² ( h²/ɛ + h² + h(1 − α)/ɛ ) ‖p‖²_w.

Now consider terms J_n with y_n ≤ 0 and y_{n+1} ≥ 1 − ɛ. Then

    J_n ≤ ‖p‖²_∞ ∫_{y_n}^{y_{n+1}} w(y) dy,

and therefore we get that

    Σ_{n: y_n ≤ 0, y_{n+1} ≥ 1 − ɛ} J_n ≲ N √(1 − y_n) ‖p‖²_w.

By Lemma A.1,

    1 − y_n = (1 − y_{n+1}) + (y_{n+1} − y_n) ≤ ɛ + y_{n+1} − y_n ≲ ɛ + h² + h β(1 − ɛ).

Therefore

(A.11)   Σ_{n: y_n ≤ 0, y_{n+1} ≥ 1 − ɛ} J_n ≲ N √( ɛ + h² + h β(1 − ɛ) ) ‖p‖²_w.

A similar estimate holds for the case y_{n+1} ≥ 0, y_n ≤ −1 + ɛ. Finally, let n₀ be such that y_{n₀} ≤ 0 and y_{n₀+1} > 0. Without loss of generality, suppose that |y_{n₀}| ≤ y_{n₀+1}, and therefore |y_{n₀}| ≤ hπ/2. Hence

(A.12)   ∫_{y_{n₀}}^{y_{n₀+1}} |p(y)|² w(y) dy ≤ ‖p‖²_∞ h w(y_{n₀}) ≲ ( Nh / √(1 − h²π²/4) ) ‖p‖²_w ≲ Nh ‖p‖²_w,

since h < 1/2 < 2/π. With this to hand, we now substitute (A.7), (A.9), (A.10), (A.11) and (A.12) into (A.5) to get

    ‖χ − p‖²_w ≲ [ Nh + N √(h(1 − α)) + N² ( h²/ɛ + h² + h(1 − α)/ɛ ) + N √( ɛ + h² + h β(1 − ɛ) ) + Nh ] ‖p‖²_w.

The result now follows by setting ɛ = δ²/N².

REFERENCES

[1] M. R. Abril-Raymundo and B. García-Archilla, Approximation properties of a mapped Chebyshev method, Appl. Numer. Math., 32 (2000).
[2] B. Adcock, Univariate modified Fourier methods for second order boundary value problems, BIT, 49 (2009).
[3] B. Adcock, Modified Fourier expansions: theory, construction and applications, PhD thesis, University of Cambridge, 2010.
[4] B. Adcock and D. Huybrechs, On the resolution power of Fourier extensions for oscillatory functions, J. Comput. Appl. Math., 260 (2014).
[5] B. Adcock, D. Huybrechs, and J. Martín-Vaquero, On the numerical stability of Fourier extensions, Found. Comput. Math., 14 (2014).
[6] P. Borwein and T. Erdélyi, Polynomials and Polynomial Inequalities, Springer-Verlag, New York, 1995.
[7] J. P. Boyd and F. Xu, Divergence (Runge phenomenon) for least-squares polynomial approximation on an equispaced grid and mock-Chebyshev subset interpolation, Appl. Math. Comput., 210 (2009).
[8] J. P. Boyd, Chebyshev and Fourier Spectral Methods, Springer-Verlag, 1989.
[9] J. P. Boyd, A comparison of numerical algorithms for Fourier extension of the first, second, and third kinds, J. Comput. Phys., 178 (2002).
[10] J. P. Boyd and J. R. Ong, Exponentially-convergent strategies for defeating the Runge phenomenon for the approximation of non-periodic functions. I. Single-interval schemes, Commun. Comput. Phys., 5 (2009).
[11] O. P. Bruno, Y. Han, and M. M. Pohlman, Accurate, high-order representation of complex three-dimensional surfaces via Fourier continuation analysis, J. Comput. Phys., 227 (2007).
[12] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral Methods: Fundamentals in Single Domains, Springer, 2006.
[13] D. Coppersmith and T. Rivlin, The growth of polynomials bounded at equally spaced points, SIAM J. Math. Anal., 23 (1992).
[14] B. Costa, W.-S. Don, and A. Simas, Spectral convergence of mapped Chebyshev methods, tech. report, Brown University, 2003.
[15] B. Costa, W.-S. Don, and A. Simas, Spatial resolution properties of mapped spectral Chebyshev methods, in Recent Progress in Scientific Computing: Proceedings of SCPDE05, 2007.
[16] W.-S. Don and A. Solomonoff, Accuracy enhancement for higher derivatives using Chebyshev collocation and a mapping technique, SIAM J. Sci. Comput., 18 (1997).


More information

Compression on the digital unit sphere

Compression on the digital unit sphere 16th Conference on Applied Mathematics, Univ. of Central Oklahoma, Electronic Journal of Differential Equations, Conf. 07, 001, pp. 1 4. ISSN: 107-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

FINITE ABELIAN GROUPS Amin Witno

FINITE ABELIAN GROUPS Amin Witno WON Series in Discrete Mathematics and Modern Algebra Volume 7 FINITE ABELIAN GROUPS Amin Witno Abstract We detail the proof of the fundamental theorem of finite abelian groups, which states that every

More information

Scientific Computing

Scientific Computing 2301678 Scientific Computing Chapter 2 Interpolation and Approximation Paisan Nakmahachalasint Paisan.N@chula.ac.th Chapter 2 Interpolation and Approximation p. 1/66 Contents 1. Polynomial interpolation

More information

We consider the problem of finding a polynomial that interpolates a given set of values:

We consider the problem of finding a polynomial that interpolates a given set of values: Chapter 5 Interpolation 5. Polynomial Interpolation We consider the problem of finding a polynomial that interpolates a given set of values: x x 0 x... x n y y 0 y... y n where the x i are all distinct.

More information

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure?

3 (Due ). Let A X consist of points (x, y) such that either x or y is a rational number. Is A measurable? What is its Lebesgue measure? MA 645-4A (Real Analysis), Dr. Chernov Homework assignment 1 (Due ). Show that the open disk x 2 + y 2 < 1 is a countable union of planar elementary sets. Show that the closed disk x 2 + y 2 1 is a countable

More information

Polynomial Approximation: The Fourier System

Polynomial Approximation: The Fourier System Polynomial Approximation: The Fourier System Charles B. I. Chilaka CASA Seminar 17th October, 2007 Outline 1 Introduction and problem formulation 2 The continuous Fourier expansion 3 The discrete Fourier

More information

2014:05 Incremental Greedy Algorithm and its Applications in Numerical Integration. V. Temlyakov

2014:05 Incremental Greedy Algorithm and its Applications in Numerical Integration. V. Temlyakov INTERDISCIPLINARY MATHEMATICS INSTITUTE 2014:05 Incremental Greedy Algorithm and its Applications in Numerical Integration V. Temlyakov IMI PREPRINT SERIES COLLEGE OF ARTS AND SCIENCES UNIVERSITY OF SOUTH

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information

Complex Analysis Qualifying Exam Solutions

Complex Analysis Qualifying Exam Solutions Complex Analysis Qualifying Exam Solutions May, 04 Part.. Let log z be the principal branch of the logarithm defined on G = {z C z (, 0]}. Show that if t > 0, then the equation log z = t has exactly one

More information

MATH 102 INTRODUCTION TO MATHEMATICAL ANALYSIS. 1. Some Fundamentals

MATH 102 INTRODUCTION TO MATHEMATICAL ANALYSIS. 1. Some Fundamentals MATH 02 INTRODUCTION TO MATHEMATICAL ANALYSIS Properties of Real Numbers Some Fundamentals The whole course will be based entirely on the study of sequence of numbers and functions defined on the real

More information

On rational approximation of algebraic functions. Julius Borcea. Rikard Bøgvad & Boris Shapiro

On rational approximation of algebraic functions. Julius Borcea. Rikard Bøgvad & Boris Shapiro On rational approximation of algebraic functions http://arxiv.org/abs/math.ca/0409353 Julius Borcea joint work with Rikard Bøgvad & Boris Shapiro 1. Padé approximation: short overview 2. A scheme of rational

More information

Chapter 3: Root Finding. September 26, 2005

Chapter 3: Root Finding. September 26, 2005 Chapter 3: Root Finding September 26, 2005 Outline 1 Root Finding 2 3.1 The Bisection Method 3 3.2 Newton s Method: Derivation and Examples 4 3.3 How To Stop Newton s Method 5 3.4 Application: Division

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 9 Initial Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information

Cambridge University Press The Mathematics of Signal Processing Steven B. Damelin and Willard Miller Excerpt More information Introduction Consider a linear system y = Φx where Φ can be taken as an m n matrix acting on Euclidean space or more generally, a linear operator on a Hilbert space. We call the vector x a signal or input,

More information

Math212a1413 The Lebesgue integral.

Math212a1413 The Lebesgue integral. Math212a1413 The Lebesgue integral. October 28, 2014 Simple functions. In what follows, (X, F, m) is a space with a σ-field of sets, and m a measure on F. The purpose of today s lecture is to develop the

More information

On the simplest expression of the perturbed Moore Penrose metric generalized inverse

On the simplest expression of the perturbed Moore Penrose metric generalized inverse Annals of the University of Bucharest (mathematical series) 4 (LXII) (2013), 433 446 On the simplest expression of the perturbed Moore Penrose metric generalized inverse Jianbing Cao and Yifeng Xue Communicated

More information

Measure and Integration: Solutions of CW2

Measure and Integration: Solutions of CW2 Measure and Integration: s of CW2 Fall 206 [G. Holzegel] December 9, 206 Problem of Sheet 5 a) Left (f n ) and (g n ) be sequences of integrable functions with f n (x) f (x) and g n (x) g (x) for almost

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1.

A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE. 1. A LOWER BOUND ON BLOWUP RATES FOR THE 3D INCOMPRESSIBLE EULER EQUATION AND A SINGLE EXPONENTIAL BEALE-KATO-MAJDA ESTIMATE THOMAS CHEN AND NATAŠA PAVLOVIĆ Abstract. We prove a Beale-Kato-Majda criterion

More information

APPROXIMATING CONTINUOUS FUNCTIONS: WEIERSTRASS, BERNSTEIN, AND RUNGE

APPROXIMATING CONTINUOUS FUNCTIONS: WEIERSTRASS, BERNSTEIN, AND RUNGE APPROXIMATING CONTINUOUS FUNCTIONS: WEIERSTRASS, BERNSTEIN, AND RUNGE WILLIE WAI-YEUNG WONG. Introduction This set of notes is meant to describe some aspects of polynomial approximations to continuous

More information

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control Chun-Hua Guo Dedicated to Peter Lancaster on the occasion of his 70th birthday We consider iterative methods for finding the

More information

Taylor series. Chapter Introduction From geometric series to Taylor polynomials

Taylor series. Chapter Introduction From geometric series to Taylor polynomials Chapter 2 Taylor series 2. Introduction The topic of this chapter is find approximations of functions in terms of power series, also called Taylor series. Such series can be described informally as infinite

More information

Sobolev spaces. May 18

Sobolev spaces. May 18 Sobolev spaces May 18 2015 1 Weak derivatives The purpose of these notes is to give a very basic introduction to Sobolev spaces. More extensive treatments can e.g. be found in the classical references

More information

Recall that any inner product space V has an associated norm defined by

Recall that any inner product space V has an associated norm defined by Hilbert Spaces Recall that any inner product space V has an associated norm defined by v = v v. Thus an inner product space can be viewed as a special kind of normed vector space. In particular every inner

More information

On Riesz-Fischer sequences and lower frame bounds

On Riesz-Fischer sequences and lower frame bounds On Riesz-Fischer sequences and lower frame bounds P. Casazza, O. Christensen, S. Li, A. Lindner Abstract We investigate the consequences of the lower frame condition and the lower Riesz basis condition

More information

Vectors in Function Spaces

Vectors in Function Spaces Jim Lambers MAT 66 Spring Semester 15-16 Lecture 18 Notes These notes correspond to Section 6.3 in the text. Vectors in Function Spaces We begin with some necessary terminology. A vector space V, also

More information

Class Meeting # 2: The Diffusion (aka Heat) Equation

Class Meeting # 2: The Diffusion (aka Heat) Equation MATH 8.52 COURSE NOTES - CLASS MEETING # 2 8.52 Introduction to PDEs, Fall 20 Professor: Jared Speck Class Meeting # 2: The Diffusion (aka Heat) Equation The heat equation for a function u(, x (.0.). Introduction

More information

Trace Class Operators and Lidskii s Theorem

Trace Class Operators and Lidskii s Theorem Trace Class Operators and Lidskii s Theorem Tom Phelan Semester 2 2009 1 Introduction The purpose of this paper is to provide the reader with a self-contained derivation of the celebrated Lidskii Trace

More information

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory

Part V. 17 Introduction: What are measures and why measurable sets. Lebesgue Integration Theory Part V 7 Introduction: What are measures and why measurable sets Lebesgue Integration Theory Definition 7. (Preliminary). A measure on a set is a function :2 [ ] such that. () = 2. If { } = is a finite

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

APPROXIMATING THE INVERSE FRAME OPERATOR FROM LOCALIZED FRAMES

APPROXIMATING THE INVERSE FRAME OPERATOR FROM LOCALIZED FRAMES APPROXIMATING THE INVERSE FRAME OPERATOR FROM LOCALIZED FRAMES GUOHUI SONG AND ANNE GELB Abstract. This investigation seeks to establish the practicality of numerical frame approximations. Specifically,

More information

Math 118B Solutions. Charles Martin. March 6, d i (x i, y i ) + d i (y i, z i ) = d(x, y) + d(y, z). i=1

Math 118B Solutions. Charles Martin. March 6, d i (x i, y i ) + d i (y i, z i ) = d(x, y) + d(y, z). i=1 Math 8B Solutions Charles Martin March 6, Homework Problems. Let (X i, d i ), i n, be finitely many metric spaces. Construct a metric on the product space X = X X n. Proof. Denote points in X as x = (x,

More information

Inner product spaces. Layers of structure:

Inner product spaces. Layers of structure: Inner product spaces Layers of structure: vector space normed linear space inner product space The abstract definition of an inner product, which we will see very shortly, is simple (and by itself is pretty

More information

Application of modified Leja sequences to polynomial interpolation

Application of modified Leja sequences to polynomial interpolation Special Issue for the years of the Padua points, Volume 8 5 Pages 66 74 Application of modified Leja sequences to polynomial interpolation L. P. Bos a M. Caliari a Abstract We discuss several applications

More information

A Fast Spherical Filter with Uniform Resolution

A Fast Spherical Filter with Uniform Resolution JOURNAL OF COMPUTATIONAL PHYSICS 136, 580 584 (1997) ARTICLE NO. CP975782 A Fast Spherical Filter with Uniform Resolution Rüdiger Jakob-Chien*, and Bradley K. Alpert *Department of Computer Science & Engineering,

More information

(1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define

(1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define Homework, Real Analysis I, Fall, 2010. (1) Consider the space S consisting of all continuous real-valued functions on the closed interval [0, 1]. For f, g S, define ρ(f, g) = 1 0 f(x) g(x) dx. Show that

More information

On Reparametrization and the Gibbs Sampler

On Reparametrization and the Gibbs Sampler On Reparametrization and the Gibbs Sampler Jorge Carlos Román Department of Mathematics Vanderbilt University James P. Hobert Department of Statistics University of Florida March 2014 Brett Presnell Department

More information

YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL

YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Journal of Comput. & Applied Mathematics 139(2001), 197 213 DIRECT APPROACH TO CALCULUS OF VARIATIONS VIA NEWTON-RAPHSON METHOD YURI LEVIN, MIKHAIL NEDIAK, AND ADI BEN-ISRAEL Abstract. Consider m functions

More information

Fourth Week: Lectures 10-12

Fourth Week: Lectures 10-12 Fourth Week: Lectures 10-12 Lecture 10 The fact that a power series p of positive radius of convergence defines a function inside its disc of convergence via substitution is something that we cannot ignore

More information

The discrete and fast Fourier transforms

The discrete and fast Fourier transforms The discrete and fast Fourier transforms Marcel Oliver Revised April 7, 1 1 Fourier series We begin by recalling the familiar definition of the Fourier series. For a periodic function u: [, π] C, we define

More information

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E,

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E, Tel Aviv University, 26 Analysis-III 9 9 Improper integral 9a Introduction....................... 9 9b Positive integrands................... 9c Special functions gamma and beta......... 4 9d Change of

More information

1462. Jacobi pseudo-spectral Galerkin method for second kind Volterra integro-differential equations with a weakly singular kernel

1462. Jacobi pseudo-spectral Galerkin method for second kind Volterra integro-differential equations with a weakly singular kernel 1462. Jacobi pseudo-spectral Galerkin method for second kind Volterra integro-differential equations with a weakly singular kernel Xiaoyong Zhang 1, Junlin Li 2 1 Shanghai Maritime University, Shanghai,

More information

Real Analysis Problems

Real Analysis Problems Real Analysis Problems Cristian E. Gutiérrez September 14, 29 1 1 CONTINUITY 1 Continuity Problem 1.1 Let r n be the sequence of rational numbers and Prove that f(x) = 1. f is continuous on the irrationals.

More information

A new class of highly accurate differentiation schemes based on the prolate spheroidal wave functions

A new class of highly accurate differentiation schemes based on the prolate spheroidal wave functions We introduce a new class of numerical differentiation schemes constructed for the efficient solution of time-dependent PDEs that arise in wave phenomena. The schemes are constructed via the prolate spheroidal

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

MATH31011/MATH41011/MATH61011: FOURIER ANALYSIS AND LEBESGUE INTEGRATION. Chapter 2: Countability and Cantor Sets

MATH31011/MATH41011/MATH61011: FOURIER ANALYSIS AND LEBESGUE INTEGRATION. Chapter 2: Countability and Cantor Sets MATH31011/MATH41011/MATH61011: FOURIER ANALYSIS AND LEBESGUE INTEGRATION Chapter 2: Countability and Cantor Sets Countable and Uncountable Sets The concept of countability will be important in this course

More information

Math 307 Learning Goals

Math 307 Learning Goals Math 307 Learning Goals May 14, 2018 Chapter 1 Linear Equations 1.1 Solving Linear Equations Write a system of linear equations using matrix notation. Use Gaussian elimination to bring a system of linear

More information

Simple Examples on Rectangular Domains

Simple Examples on Rectangular Domains 84 Chapter 5 Simple Examples on Rectangular Domains In this chapter we consider simple elliptic boundary value problems in rectangular domains in R 2 or R 3 ; our prototype example is the Poisson equation

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Complex Analysis, Stein and Shakarchi Meromorphic Functions and the Logarithm

Complex Analysis, Stein and Shakarchi Meromorphic Functions and the Logarithm Complex Analysis, Stein and Shakarchi Chapter 3 Meromorphic Functions and the Logarithm Yung-Hsiang Huang 217.11.5 Exercises 1. From the identity sin πz = eiπz e iπz 2i, it s easy to show its zeros are

More information

MATHS 730 FC Lecture Notes March 5, Introduction

MATHS 730 FC Lecture Notes March 5, Introduction 1 INTRODUCTION MATHS 730 FC Lecture Notes March 5, 2014 1 Introduction Definition. If A, B are sets and there exists a bijection A B, they have the same cardinality, which we write as A, #A. If there exists

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information