MATH 590: Meshfree Methods

Chapter 14: The Power Function and Native Space Error Estimates
Greg Fasshauer, Department of Applied Mathematics, Illinois Institute of Technology
Fall 2010, fasshauer@iit.edu

Outline
1 Fill Distance and Approximation Orders
2 Lagrange Form of the Interpolant and Cardinal Basis Functions
3 The Power Function
4 Generic Error Estimates for Functions in N_K(Ω)
5 Error Estimates in Terms of the Fill Distance

Fill Distance and Approximation Orders


Goal: to provide error estimates for scattered data interpolation with strictly (conditionally) positive definite functions.

We will provide most of the details for the strictly positive definite case, and only mention the extension to the conditionally positive definite case at the end.

In their final form we will want our estimates to depend on some measure of the data distribution. The measure usually used in approximation theory is the so-called fill distance, already introduced in Chapter 2:

$h = h_{X,\Omega} = \sup_{x \in \Omega} \min_{x_j \in X} \|x - x_j\|_2.$
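The sup–min in this definition can be approximated numerically by maximizing over a dense candidate grid in Ω. A minimal NumPy sketch (Python rather than the course's MATLAB; the helper name `fill_distance` is ours, not from the slides):

```python
import numpy as np

def fill_distance(X, candidates):
    """Approximate h_{X,Omega} = sup_{x in Omega} min_{x_j in X} ||x - x_j||_2
    by replacing the sup over Omega with a max over a dense candidate grid."""
    # distances between every candidate point and every data site
    d = np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=2)
    return d.min(axis=1).max()

# 1D sanity check: 5 equally spaced points in [0, 1] have spacing 1/4,
# so the largest data-free interval around a midpoint has radius 1/8
X = np.linspace(0, 1, 5).reshape(-1, 1)
omega = np.linspace(0, 1, 2001).reshape(-1, 1)
print(fill_distance(X, omega))  # → 0.125
```

The same routine works for scattered 2D sites; only the candidate grid changes.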


The fill distance indicates how well the data fill out the domain Ω: it is the radius of the largest empty ball that can be placed among the data locations.

Convergence: We will be interested in whether the error $\|f - P^{(h)}f\|$ tends to zero as $h \to 0$, and if so, how fast. Here $\{P^{(h)}\}_h$ denotes a sequence of interpolation (or, more generally, projection) operators that vary with the fill distance $h$.

Remark: Most error bounds will focus on this worst-case setting. Some errors will instead be measured in the $L_2$-norm, i.e., average-case errors, or in other $L_p$-norms.


Example: Let $P^{(h)}$ denote interpolation to data given at $(2^n+1)^s$, $n = 1, 2, \dots$, equally spaced points in the unit cube in $\mathbb{R}^s$, so that

$h = \frac{1}{\sqrt[s]{(2^n+1)^s} - 1} = 2^{-n}.$

The definition of the fill distance also covers scattered data such as sets of Halton points. In fact, since Halton points are quasi-uniformly distributed (see Appendix A), we can assume $h \approx 2^{-n}$ for a set of $(2^n+1)^s$ Halton points in $\mathbb{R}^s$.

Remark: These relations explain the specific sizes of the point sets we used in earlier examples.


Since we want to employ the machinery of reproducing kernel Hilbert spaces presented in the previous chapter, we will concentrate on error estimates for functions $f \in \mathcal{N}_K$. In the next chapter we will also mention some more general estimates.


We measure the speed of convergence to zero in terms of approximation order. We say that the approximation operator $P^{(h)}$ has $L_p$-approximation order $k$ if

$\|f - P^{(h)}f\|_p = \mathcal{O}(h^k)$ for $h \to 0$.

Moreover, if we can also show that $\|f - P^{(h)}f\|_p \neq o(h^k)$, then $P^{(h)}$ has exact $L_p$-approximation order $k$.

Remark: We will concentrate mostly on the case $p = \infty$ (i.e., pointwise estimates), but approximation order in other norms can also be studied.
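In practice the order $k$ can be read off from computed errors: if $e(h) \approx C h^k$, then $\log(e(h_1)/e(h_2)) / \log(h_1/h_2) \approx k$. A small Python sketch with synthetic error data (the numbers are hypothetical, chosen only to illustrate the computation, and the helper name is ours):

```python
import numpy as np

def observed_order(hs, errors):
    """Estimate k from successive error pairs: e(h)/e(h') ~ (h/h')^k."""
    hs, errors = np.asarray(hs, float), np.asarray(errors, float)
    return np.log(errors[:-1] / errors[1:]) / np.log(hs[:-1] / hs[1:])

# synthetic errors behaving exactly like e(h) = 4 h^3, so k = 3
hs = [0.2, 0.1, 0.05, 0.025]
errs = [4.0 * h**3 for h in hs]
print(observed_order(hs, errs))  # each entry ≈ 3.0
```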

Remark: In order to keep the following discussion as transparent as possible we will restrict ourselves to strictly positive definite functions. With (considerably) more technical details the following can also be formulated for strictly conditionally positive definite functions (see [Wendland (2005a)] for details).

Lagrange Form of the Interpolant and Cardinal Basis Functions


The key idea for the following discussion is to express the interpolant in Lagrange form, i.e., using so-called cardinal basis functions. For radial basis function approximation this idea is due to [Wu and Schaback (1993)].

In the previous chapters we established that, for any strictly positive definite function $\Phi$, the linear system $Ac = y$ with $A_{ij} = \Phi(x_i - x_j)$, $i, j = 1, \dots, N$, $c = [c_1, \dots, c_N]^T$, and $y = [f(x_1), \dots, f(x_N)]^T$ has a unique solution.


In the following we will consider the more general situation where $K$ is a strictly positive definite kernel, i.e., the entries of $A$ are given by $A_{ij} = K(x_i, x_j)$. The uniqueness result holds in this case as well.

In order to obtain the cardinal basis functions $u_j^*$, $j = 1, \dots, N$, with the property $u_j^*(x_i) = \delta_{ij}$, i.e.,

$u_j^*(x_i) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases}$

we consider the linear system

$A u^*(x) = b(x), \qquad (1)$

where the matrix $A$ is as above (and therefore invertible), $u^* = [u_1^*, \dots, u_N^*]^T$, and $b = [K(\cdot, x_1), \dots, K(\cdot, x_N)]^T$.


Existence of Cardinal Functions

Theorem: Suppose $K$ is a strictly positive definite kernel on $\mathbb{R}^s \times \mathbb{R}^s$. Then, for any distinct points $x_1, \dots, x_N$, there exist functions $u_j^* \in \operatorname{span}\{K(\cdot, x_j),\ j = 1, \dots, N\}$ such that $u_j^*(x_i) = \delta_{ij}$. They are determined pointwise by solving the linear system (1), i.e., $A u^*(x) = b(x)$.

Therefore, if we know the cardinal functions, we can write the interpolant $\mathcal{P}f$ to $f$ at $x_1, \dots, x_N$ in the cardinal form

$\mathcal{P}f(x) = \sum_{j=1}^N f(x_j)\, u_j^*(x), \qquad x \in \mathbb{R}^s.$
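The construction above — solve (1) for $u^*(x)$, then evaluate $\mathcal{P}f(x) = \sum_j f(x_j) u_j^*(x)$ — can be sketched in a few lines. This NumPy version (Python rather than the course's MATLAB; all names are ours) uses a Gaussian kernel on a small 1D point set and checks both the delta property and agreement with the standard form of the interpolant:

```python
import numpy as np

eps = 5.0  # shape parameter, as in the slides' Gaussian examples
K = lambda x, z: np.exp(-(eps * np.subtract.outer(x, z)) ** 2)  # Gaussian kernel

x_data = np.linspace(0, 1, 9)   # data sites x_1, ..., x_N
A = K(x_data, x_data)           # A_ij = K(x_i, x_j)

def cardinal(x):
    """Evaluate u^*(x) by solving A u^*(x) = b(x), with b_j(x) = K(x, x_j); system (1)."""
    return np.linalg.solve(A, K(x, x_data))

# delta property u_j^*(x_i) = delta_ij
U = np.array([cardinal(xi) for xi in x_data])
print(np.allclose(U, np.eye(len(x_data)), atol=1e-6))  # True

# cardinal form Pf(x) = sum_j f(x_j) u_j^*(x) agrees with the
# standard form sum_j c_j K(x, x_j), where c = A^{-1} y
y = np.sin(2 * np.pi * x_data)
x = 0.37
print(np.isclose(y @ cardinal(x), np.linalg.solve(A, y) @ K(x, x_data)))  # True
```

Note that each evaluation point requires one solve with the same matrix $A$, which is why the MATLAB program below batches all evaluation points as right-hand sides.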


Remark: Cardinal functions do not depend on the data values of the interpolation problem. They do, however, depend heavily on the data locations (see the plots on the following slides). Once the data sites are fixed and the basic function is chosen with an appropriate shape parameter (whose optimal value will depend on the data sites and values), the cardinal functions are determined by the linear system (1).

Example: Gaussian Cardinal Functions
Figure: Cardinal functions for Gaussian interpolation (with ε = 5) on 81 uniformly gridded points in [0, 1]². Centered at an edge point (left) and at an interior point (right).

Figure: Cardinal functions for Gaussian interpolation (with ε = 5) on 81 tensor-product Chebyshev points in [0, 1]². Centered at an edge point (left) and at an interior point (right).

Figure: Cardinal functions for Gaussian interpolation (with ε = 5) on 81 Halton points in [0, 1]². Centered at an edge point (left) and at an interior point (right).

Example: Multiquadric Cardinal Functions
Figure: Cardinal functions for multiquadric interpolation (with ε = 5) on 81 Halton points in [0, 1]². Centered at an edge point (left) and at an interior point (right).


Remark: Basic functions that grow with increasing distance from the center point (such as multiquadrics) are sometimes criticized as counter-intuitive for scattered data approximation. The plots above show that the associated cardinal functions are just as localized as those for the Gaussian basic functions, and thus the function space spanned by multiquadrics is a good local space.

Program (RBFCardinalFunction.m)

rbf = @(e,r) exp(-(e*r).^2); ep = 5;
N = 81; dsites = CreatePoints(N,2,'u');
ctrs = dsites;
neval = 40; M = neval^2;
epoints = CreatePoints(M,2,'u');
DM_data = DistanceMatrix(dsites,ctrs);
IM = rbf(ep,DM_data);        % transpose of usual eval matrix
DM_B = DistanceMatrix(ctrs,epoints);
B = rbf(ep,DM_B);            % many right-hand sides for (1)
cardfuns = IM\B;             % one cardinal function per row
xe = reshape(epoints(:,1),neval,neval);
ye = reshape(epoints(:,2),neval,neval);
CFplot = surf(xe,ye,reshape(cardfuns(50,:),neval,neval));
set(CFplot,'FaceColor','interp','EdgeColor','none')
colormap autumn; view([145 45]); camlight
lighting gouraud

Note that the code is different from the book's version and avoids loops.

The Power Function


Another important ingredient needed for our error estimates is the so-called power function. To this end, we consider a domain $\Omega \subseteq \mathbb{R}^s$. Then for any strictly positive definite kernel $K \in C(\Omega \times \Omega)$, any set of distinct points $X = \{x_1, \dots, x_N\} \subseteq \Omega$, and an arbitrary vector $u \in \mathbb{R}^N$, we define the quadratic form

$Q(u) = K(x,x) - 2\sum_{j=1}^N u_j K(x,x_j) + \sum_{i=1}^N \sum_{j=1}^N u_i u_j K(x_i,x_j).$

Definition: Suppose $\Omega \subseteq \mathbb{R}^s$ and $K \in C(\Omega \times \Omega)$ is strictly positive definite. For any distinct points $X = \{x_1, \dots, x_N\} \subseteq \Omega$ the power function $P_{K,X}$ is defined pointwise by

$[P_{K,X}(x)]^2 = Q(u^*(x)),$

where $u^*$ is the vector of cardinal functions studied above.


Using the definition of the native space inner product and norm from the previous chapter we can rewrite the quadratic form $Q(u)$ as

$Q(u) = K(x,x) - 2\sum_{j=1}^N u_j K(x,x_j) + \sum_{i=1}^N \sum_{j=1}^N u_i u_j K(x_i,x_j)$

$= \langle K(\cdot,x), K(\cdot,x)\rangle_{\mathcal{N}_K(\Omega)} - 2\sum_{j=1}^N u_j \langle K(\cdot,x), K(\cdot,x_j)\rangle_{\mathcal{N}_K(\Omega)} + \sum_{i=1}^N \sum_{j=1}^N u_i u_j \langle K(\cdot,x_i), K(\cdot,x_j)\rangle_{\mathcal{N}_K(\Omega)}$

$= \Big\langle K(\cdot,x) - \sum_{j=1}^N u_j K(\cdot,x_j),\; K(\cdot,x) - \sum_{j=1}^N u_j K(\cdot,x_j) \Big\rangle_{\mathcal{N}_K(\Omega)}$

$= \Big\| K(\cdot,x) - \sum_{j=1}^N u_j K(\cdot,x_j) \Big\|_{\mathcal{N}_K(\Omega)}^2. \qquad (2)$


Remark: The name power function was chosen by [Schaback (1993)] based on its connection to the power function of a statistical decision function (originally introduced in [Neyman and Pearson (1936)]). In the paper [Wu and Schaback (1993)] the power function was referred to as the kriging function; this terminology comes from geostatistics (see, e.g., [Myers (1992)]). In the statistics literature, the power function is known as the kriging variance (see, e.g., [Berlinet and Thomas-Agnan (2004), Matheron (1965), Stein (1999)]).


Using the linear system notation employed earlier, i.e., $A_{ij} = K(x_i,x_j)$, $i,j = 1,\dots,N$, $u = [u_1,\dots,u_N]^T$, and $b = [K(\cdot,x_1),\dots,K(\cdot,x_N)]^T$, we note that we can also rewrite the quadratic form $Q(u)$ as

$Q(u) = K(x,x) - 2\sum_{j=1}^N u_j K(x,x_j) + \sum_{i=1}^N \sum_{j=1}^N u_i u_j K(x_i,x_j) = K(x,x) - 2u^T b(x) + u^T A u. \qquad (3)$

This suggests two alternative representations of the power function. Using the matrix-vector notation for $Q(u)$, the power function is given as

$P_{K,X}(x) = \sqrt{Q(u^*(x))} = \sqrt{K(x,x) - 2(u^*(x))^T b(x) + (u^*(x))^T A u^*(x)}.$


However, by the definition of the cardinal functions, $A u^*(x) = b(x)$, and therefore we have the two new variants

$P_{K,X}(x) = \sqrt{K(x,x) - 2(u^*(x))^T b(x) + (u^*(x))^T A u^*(x)}$
$= \sqrt{K(x,x) - (u^*(x))^T b(x)}$
$= \sqrt{K(x,x) - (u^*(x))^T A u^*(x)}.$

Remark: These formulas can be used for the numerical evaluation of the power function at $x$. To this end one first has to find the values of the cardinal functions $u^*(x)$ by solving the system $A u^*(x) = b(x)$. This results in

$P_{K,X}(x) = \sqrt{K(x,x) - (b(x))^T A^{-1} b(x)}. \qquad (4)$
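Formula (4) and the equivalence of the three variants can be checked numerically. A NumPy sketch for the Gaussian kernel, for which $K(x,x) = 1$ (Python rather than the course's MATLAB; the helper names are ours):

```python
import numpy as np

eps = 5.0
K = lambda x, z: np.exp(-(eps * np.subtract.outer(x, z)) ** 2)  # Gaussian: K(x,x) = 1

x_data = np.linspace(0, 1, 7)
A = K(x_data, x_data)

def power_function(x):
    """P_{K,X}(x) via formula (4): sqrt(K(x,x) - b(x)^T A^{-1} b(x))."""
    b = K(x, x_data)
    return np.sqrt(max(1.0 - b @ np.linalg.solve(A, b), 0.0))  # clip tiny round-off

# the three variants agree: with u^* = A^{-1} b,
#   K - 2 u*^T b + u*^T A u*  =  K - u*^T b  =  K - u*^T A u*
x = 0.3
b = K(x, x_data)
u = np.linalg.solve(A, b)
v1 = 1.0 - 2 * u @ b + u @ A @ u
v2 = 1.0 - u @ b
v3 = 1.0 - u @ A @ u
print(np.allclose([v1, v3], v2))  # True
print(power_function(x))
```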

Example: Gaussian Power Function
Figure: Data sites and power function for Gaussian interpolant with ε = 6 based on N = 81 uniformly gridded points in [0, 1]².

Figure: Data sites and power function for Gaussian interpolant with ε = 6 based on N = 81 tensor-product Chebyshev points in [0, 1]².

Figure: Data sites and power function for Gaussian interpolant with ε = 6 based on N = 81 Halton points in [0, 1]².


Remark.
- Dependence of the power function on the data locations is clearly visible.
- This connection was used in [De Marchi et al. (2005)] to iteratively obtain an optimal set of data locations that is independent of the data values.
- Since $A$ is a positive definite matrix whenever $K$ is a strictly positive definite kernel, the power function satisfies the bounds
$$0 \le [P_{K,X}(x)]^2 = K(x,x) - (u^*(x))^T A\, u^*(x) \le K(x,x).$$
- At this point the power function is mostly a theoretical tool that helps us better understand error estimates, since it lets us decouple the effects due to the data function $f$ from those due to the kernel $K$ and the data locations $X$ (see the following theorem).
- The power function is defined in an analogous way for strictly conditionally positive definite functions.
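The data-independent character of such point selection can be illustrated with a minimal greedy sketch in the spirit of [De Marchi et al. (2005)]: repeatedly add the candidate where the current power function is largest. The Gaussian kernel, shape parameter, and all names below are illustrative assumptions, not the authors' algorithm verbatim.

```python
import numpy as np

def gaussian_kernel(x, y, ep=4.0):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-(ep ** 2) * d2)

def p_greedy(candidates, n_points, ep=4.0):
    """Greedy point selection: at each step add the candidate maximizing the
    current power function. Only K and the geometry enter -- no data values."""
    X = [candidates[0]]                             # arbitrary starting point
    for _ in range(n_points - 1):
        Xa = np.array(X)
        A = gaussian_kernel(Xa, Xa, ep)
        B = gaussian_kernel(candidates, Xa, ep)
        P2 = 1.0 - np.sum(B * np.linalg.solve(A, B.T).T, axis=1)
        X.append(candidates[np.argmax(P2)])         # P vanishes at chosen points
    return np.array(X)

cand = np.random.default_rng(1).random((500, 2))
X = p_greedy(cand, 10)                              # 10 well-spread points
```

Because the power function is zero at already-selected sites, each step necessarily picks a new, well-separated point.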

Outline

1. Fill Distance and Approximation Orders
2. Lagrange Form of the Interpolant and Cardinal Basis Functions
3. The Power Function
4. Generic Error Estimates for Functions in $\mathcal{N}_K(\Omega)$ (current section)
5. Error Estimates in Terms of the Fill Distance

Generic Error Estimates for Functions in $\mathcal{N}_K(\Omega)$

Now we can give a first generic error estimate.

Theorem. Let $\Omega \subseteq \mathbb{R}^s$, let $K \in C(\Omega \times \Omega)$ be strictly positive definite, and suppose that the points $X = \{x_1, \ldots, x_N\}$ are distinct. Denote the interpolant to $f \in \mathcal{N}_K(\Omega)$ on $X$ by $\mathcal{P}f$. Then for every $x \in \Omega$
$$|f(x) - \mathcal{P}f(x)| \le P_{K,X}(x)\, \|f\|_{\mathcal{N}_K(\Omega)}.$$
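The bound can be checked numerically for $f$ in the span of kernel translates, since for $f = \sum_j c_j K(\cdot, y_j)$ the native space norm is computable as $\|f\|_{\mathcal{N}_K(\Omega)}^2 = c^T K(Y,Y)\, c$. The kernel choice ($\varepsilon = 6$ Gaussian), point counts, and names below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, ep=6.0):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-(ep ** 2) * d2)

rng = np.random.default_rng(2)
Y = rng.random((12, 2))
c = rng.standard_normal(12)
f = lambda x: gaussian_kernel(x, Y) @ c               # f = sum_j c_j K(., y_j)
fnorm = np.sqrt(c @ gaussian_kernel(Y, Y) @ c)        # ||f||_{N_K(Omega)}

g = np.linspace(0.0, 1.0, 7)
X = np.array([[a, b] for a in g for b in g])          # 49 gridded data sites
A = gaussian_kernel(X, X)
coef = np.linalg.solve(A, f(X))                       # interpolant P f

xe = rng.random((300, 2))
err = np.abs(f(xe) - gaussian_kernel(xe, X) @ coef)   # |f(x) - P f(x)|
B = gaussian_kernel(xe, X)
P2 = 1.0 - np.sum(B * np.linalg.solve(A, B.T).T, axis=1)
P = np.sqrt(np.maximum(P2, 0.0))                      # power function values
# theorem: err <= P * fnorm at every evaluation point
```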

Proof. Since $f$ is assumed to lie in the native space of $K$, the reproducing property of $K$ yields
$$f(x) = \langle f, K(\cdot, x)\rangle_{\mathcal{N}_K(\Omega)}.$$
We express the interpolant in its cardinal form and apply the reproducing property of $K$. This gives us
$$\mathcal{P}f(x) = \sum_{j=1}^N f(x_j)\, u_j^*(x) = \sum_{j=1}^N u_j^*(x)\, \langle f, K(\cdot, x_j)\rangle_{\mathcal{N}_K(\Omega)} = \Big\langle f, \sum_{j=1}^N u_j^*(x) K(\cdot, x_j) \Big\rangle_{\mathcal{N}_K(\Omega)}.$$

Proof (cont.) Now all that remains to be done is to combine the two formulas just derived and apply the Cauchy-Schwarz inequality. Thus,
$$|f(x) - \mathcal{P}f(x)| = \Big| \Big\langle f,\; K(\cdot, x) - \sum_{j=1}^N u_j^*(x) K(\cdot, x_j) \Big\rangle_{\mathcal{N}_K(\Omega)} \Big| \le \|f\|_{\mathcal{N}_K(\Omega)} \Big\| K(\cdot, x) - \sum_{j=1}^N u_j^*(x) K(\cdot, x_j) \Big\|_{\mathcal{N}_K(\Omega)} = \|f\|_{\mathcal{N}_K(\Omega)}\, P_{K,X}(x),$$
where we have used the representation (2) of the quadratic form $Q(u^*(x))$ and the definition of the power function. $\square$

One of the main benefits of the above theorem is that we are now able to estimate the interpolation error by considering two independent phenomena:
- the smoothness of the data (measured in terms of the native space norm of $f$, which is independent of the data locations but does depend on $K$), and
- the distribution of the data (measured in terms of the power function, which is independent of the actual data values).

Remark.
- This is analogous to the standard error estimate for polynomial interpolation cited in most numerical analysis texts.
- Effects due to the use of any specific kernel $K$ (or basic function in the translation-invariant or radial case) are felt in both terms, since the native space norm of $f$ also varies with $K$. In particular, changing a possible shape parameter $\varepsilon$ will have an effect on both terms in the error bound.

Outline

1. Fill Distance and Approximation Orders
2. Lagrange Form of the Interpolant and Cardinal Basis Functions
3. The Power Function
4. Generic Error Estimates for Functions in $\mathcal{N}_K(\Omega)$
5. Error Estimates in Terms of the Fill Distance (current section)

Error Estimates in Terms of the Fill Distance

The next steps are
- to refine this error bound by expressing the influence of the data locations in terms of the fill distance, and
- to specialize the bound to various choices of kernels $K$.

Remark. The most common strategy for obtaining error bounds in numerical analysis is to take advantage of the polynomial precision of a method (at least locally), and then to apply a Taylor expansion.
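Recall that the fill distance is $h_{X,\Omega} = \sup_{x \in \Omega} \min_{x_j \in X} \|x - x_j\|_2$, the radius of the largest data-site-free ball in $\Omega$. A rough numerical sketch (the dense-grid approximation and all names are illustrative assumptions):

```python
import numpy as np

def fill_distance(X, omega_pts):
    """Approximate h_{X,Omega} = sup_{x in Omega} min_j ||x - x_j||_2 by
    maximizing over a dense point cloud omega_pts covering Omega."""
    d = np.sqrt(np.sum((omega_pts[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    return d.min(axis=1).max()     # nearest-site distance, worst case over Omega

dense = np.linspace(0.0, 1.0, 101)
omega = np.array([[a, b] for a in dense for b in dense])   # fine grid on [0,1]^2
g = np.linspace(0.0, 1.0, 9)
X = np.array([[a, b] for a in g for b in g])               # 81 uniform sites, spacing 1/8
h = fill_distance(X, omega)       # exact value is half the cell diagonal, sqrt(2)/16
```

The dense-grid maximum always underestimates the true supremum slightly; refining the evaluation grid tightens the approximation.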

With this in mind we observe

Theorem. Let $\Omega \subseteq \mathbb{R}^s$, and suppose $K \in C(\Omega \times \Omega)$ is strictly positive definite. Let $X = \{x_1, \ldots, x_N\}$ be a set of distinct points in $\Omega$, and define the quadratic form $Q(u)$ as in (2). The minimum of $Q(u)$ is attained at the vector $u = u^*(x)$ of values of the cardinal functions, i.e.,
$$Q(u^*(x)) \le Q(u) \quad \text{for all } u \in \mathbb{R}^N.$$

Proof. We showed above (see (3)) that
$$Q(u) = K(x,x) - 2 u^T b(x) + u^T A u.$$
The minimum of this quadratic form is given by the solution of the linear system $A u = b(x)$. This, however, yields the cardinal functions $u = u^*(x)$. $\square$
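A quick numerical spot-check of this minimization property, under the illustrative assumption of a Gaussian kernel (so $K(x,x) = 1$) on a small grid of data sites:

```python
import numpy as np

def gaussian_kernel(x, y, ep=4.0):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-(ep ** 2) * d2)

g = np.linspace(0.0, 1.0, 5)
X = np.array([[a, b] for a in g for b in g])        # 25 gridded data sites
x = np.array([[0.3, 0.7]])                          # evaluation point

A = gaussian_kernel(X, X)
b = gaussian_kernel(x, X).ravel()                   # b(x) = (K(x, x_1), ..., K(x, x_N))
Q = lambda u: 1.0 - 2.0 * (u @ b) + u @ A @ u       # quadratic form (3) with K(x,x) = 1
ustar = np.linalg.solve(A, b)                       # cardinal values: A u = b(x)

# Q(ustar) <= Q(u) for every u; compare against random trial vectors
trials = np.random.default_rng(3).standard_normal((100, 25))
```

Since $Q(u) - Q(u^*) = (u - u^*)^T A (u - u^*) \ge 0$ for positive definite $A$, no trial vector can beat $u^*$.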

In the proof below we will use a special coefficient vector $\tilde{u}$ which provides the polynomial precision needed for the proof of the refined error estimate. Its existence is guaranteed by the following theorem on local polynomial reproduction proved in [Wendland (2005a)]. The theorem requires

Definition. A region $\Omega \subseteq \mathbb{R}^s$ satisfies an interior cone condition if there exist an angle $\theta \in (0, \pi/2)$ and a radius $r > 0$ such that for every $x \in \Omega$ there exists a unit vector $\xi(x)$ such that the cone
$$C = \{x + \lambda y : y \in \mathbb{R}^s,\ \|y\|_2 = 1,\ y^T \xi(x) \ge \cos\theta,\ \lambda \in [0, r]\}$$
is contained in $\Omega$.

The interior cone condition imposes a certain regularity on the domain $\Omega$. In fact, a domain that satisfies this condition contains balls of a controllable radius. In particular, this will be important when bounding the remainders of the Taylor expansions below. For more details see [Wendland (2005a)].

Existence of an approximation scheme with local polynomial precision is guaranteed by the following theorem.

Theorem. Suppose $\Omega \subseteq \mathbb{R}^s$ is bounded and satisfies an interior cone condition, and let $\ell$ be a non-negative integer. Then there exist positive constants $h_0$, $c_1$, and $c_2$ such that for all $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ with $h_{X,\Omega} \le h_0$ and every $x \in \Omega$ there exist numbers $\tilde{u}_1(x), \ldots, \tilde{u}_N(x)$ with
(1) $\displaystyle\sum_{j=1}^N \tilde{u}_j(x)\, p(x_j) = p(x)$ for all polynomials $p \in \Pi_\ell^s$,
(2) $\displaystyle\sum_{j=1}^N |\tilde{u}_j(x)| \le c_1$,
(3) $\tilde{u}_j(x) = 0$ if $\|x - x_j\|_2 > c_2\, h_{X,\Omega}$.

Property (1) yields the polynomial precision, and property (3) shows that the scheme is local. The bound in property (2) is essential for controlling the growth of error estimates. The quantity on the left-hand side of (2) is known as the Lebesgue constant at $x$.
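The quantity in property (2) can be examined numerically. The analogous quantity for the kernel interpolant is $\sum_j |u_j^*(x)|$ with the cardinal values obtained from $A u^*(x) = b(x)$; note the $\tilde{u}_j$ of the theorem form a different, compactly supported scheme, so the sketch below (Gaussian kernel, uniform grid, all parameters illustrative assumptions) is only an analogue.

```python
import numpy as np

def gaussian_kernel(x, y, ep=6.0):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-(ep ** 2) * d2)

g = np.linspace(0.0, 1.0, 9)
X = np.array([[a, b] for a in g for b in g])       # 81 gridded data sites
A = gaussian_kernel(X, X)

xe = np.random.default_rng(4).random((500, 2))
U = np.linalg.solve(A, gaussian_kernel(xe, X).T)   # column m: cardinal values u*(x_m)
lebesgue = np.abs(U).sum(axis=0)                   # sum_j |u_j*(x)| at each x
```

At the data sites themselves this sum is exactly 1, since the cardinal functions satisfy $u_j^*(x_i) = \delta_{ij}$.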

In the following theorem and its proof we will make repeated use of multi-index notation and multivariate Taylor expansions. For $\beta = (\beta_1, \ldots, \beta_s) \in \mathbb{N}_0^s$ with $|\beta| = \sum_{i=1}^s \beta_i$ we define the differential operator $D^\beta$ as
$$D^\beta = \frac{\partial^{|\beta|}}{(\partial x_1)^{\beta_1} \cdots (\partial x_s)^{\beta_s}}.$$
The notation $D_2^\beta K(w, \cdot)$ used below indicates that the operator is applied to $K(w, \cdot)$ viewed as a function of its second variable.
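As a concrete instance of this notation (a small added example, not from the slides): in $s = 2$ dimensions the multi-index $\beta = (2, 1)$ has $|\beta| = 3$ and
$$D^{(2,1)} = \frac{\partial^3}{(\partial x_1)^2\, \partial x_2},$$
so $D_2^{(2,1)} K(w, z)$ differentiates $K(w, z)$ twice in the first component of $z$ and once in its second component.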

The multivariate Taylor expansion of the function $K(w, \cdot)$ centered at $w$ is given by
$$K(w, z) = \sum_{|\beta| < 2k} \frac{D_2^\beta K(w, w)}{\beta!} (z - w)^\beta + R(w, z)$$
with remainder
$$R(w, z) = \sum_{|\beta| = 2k} \frac{D_2^\beta K(w, \xi_{w,z})}{\beta!} (z - w)^\beta,$$
where $\xi_{w,z}$ lies somewhere on the line segment connecting $w$ and $z$.

Our earlier generic error estimate can now be formulated in terms of the fill distance.

Theorem. Suppose $\Omega \subseteq \mathbb{R}^s$ is bounded and satisfies an interior cone condition. Suppose $K \in C^{2k}(\Omega \times \Omega)$ is symmetric and strictly positive definite. Denote the interpolant to $f \in \mathcal{N}_K(\Omega)$ on the set $X$ by $\mathcal{P}f$. Then there exist positive constants $h_0$ and $C$ (independent of $x$, $f$ and $K$) such that
$$|f(x) - \mathcal{P}f(x)| \le C\, h_{X,\Omega}^k \sqrt{C_K(x)}\, \|f\|_{\mathcal{N}_K(\Omega)},$$
provided $h_{X,\Omega} \le h_0$. Here
$$C_K(x) = \max_{|\beta| = 2k}\ \max_{w,z \in \Omega \cap B(x,\, c_2 h_{X,\Omega})} \big| D_2^\beta K(w, z) \big|$$
with $B(x, c_2 h_{X,\Omega})$ denoting the ball of radius $c_2 h_{X,\Omega}$ centered at $x$.
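The $\mathcal{O}(h_{X,\Omega}^k)$ behavior can be observed numerically. Since the Gaussian lies in $C^{2k}$ for every $k$, the error should drop rapidly as the fill distance is halved; the test function, shape parameter, and grids below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, ep=6.0):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-(ep ** 2) * d2)

f = lambda x: np.cos(2.0 * np.pi * x[:, 0]) * x[:, 1]   # smooth test function
xe = np.random.default_rng(5).random((400, 2))          # evaluation points

errs = []
for n in (5, 9, 17):                  # grid spacings 1/4, 1/8, 1/16: h halves each time
    g = np.linspace(0.0, 1.0, n)
    X = np.array([[a, b] for a in g for b in g])
    coef = np.linalg.solve(gaussian_kernel(X, X), f(X))
    errs.append(float(np.abs(f(xe) - gaussian_kernel(xe, X) @ coef).max()))
# errs decreases markedly as h_{X,Omega} shrinks
```

In exact arithmetic the decrease continues indefinitely for infinitely smooth kernels; in floating point the growing condition number of $A$ eventually imposes an error floor.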

Proof. By the generic error estimate in terms of the power function we know
$$|f(x) - \mathcal{P}f(x)| \le P_{K,X}(x)\, \|f\|_{\mathcal{N}_K(\Omega)}.$$
Therefore, we now derive the bound $P_{K,X}(x) \le C\, h_{X,\Omega}^k \sqrt{C_K(x)}$ for the power function in terms of the fill distance. We know that the power function is defined by $[P_{K,X}(x)]^2 = Q(u^*(x))$. Moreover, we know that the quadratic form $Q(u)$ is minimized by $u = u^*(x)$. Therefore, any other coefficient vector $u$ will yield an upper bound on the power function.

We take $u = \tilde{u}(x)$ from the theorem guaranteeing existence of a local polynomial reproduction, so that we are ensured to have polynomial precision of degree $\ell = 2k - 1$. For this specific choice of coefficients we have
$$[P_{K,X}(x)]^2 \le Q(\tilde{u}) = K(x,x) - 2\sum_j \tilde{u}_j K(x, x_j) + \sum_i \sum_j \tilde{u}_i \tilde{u}_j K(x_i, x_j),$$
where the sums are over those indices $j$ with $\tilde{u}_j \ne 0$. Now we apply the Taylor expansion centered at $x$ to $K(x, \cdot)$ and centered at $x_i$ to $K(x_i, \cdot)$, and evaluate both functions at $x_j$. This yields
$$Q(\tilde{u}) = K(x,x) - 2\sum_j \tilde{u}_j \Big[ \sum_{|\beta| < 2k} \frac{D_2^\beta K(x, x)}{\beta!} (x_j - x)^\beta + R(x, x_j) \Big] + \sum_i \sum_j \tilde{u}_i \tilde{u}_j \Big[ \sum_{|\beta| < 2k} \frac{D_2^\beta K(x_i, x_i)}{\beta!} (x_j - x_i)^\beta + R(x_i, x_j) \Big].$$

Next, we identify $p(z) = (z - x)^\beta$, so that $p(x) = 0$ unless $\beta = 0$. Therefore the polynomial precision property of the coefficient vector $\tilde{u}$ simplifies this expression to
$$Q(\tilde{u}) = K(x,x) - 2K(x,x) - 2\sum_j \tilde{u}_j R(x, x_j) + \sum_i \tilde{u}_i \sum_{|\beta| < 2k} \frac{D_2^\beta K(x_i, x_i)}{\beta!} (x - x_i)^\beta + \sum_i \sum_j \tilde{u}_i \tilde{u}_j R(x_i, x_j). \qquad (5)$$
Now we can apply the Taylor expansion again and make the observation that
$$\sum_{|\beta| < 2k} \frac{D_2^\beta K(x_i, x_i)}{\beta!} (x - x_i)^\beta = K(x_i, x) - R(x_i, x). \qquad (6)$$

If we use (6) and rearrange the terms in (5) we get
$$Q(\tilde{u}) = -K(x,x) - \sum_j \tilde{u}_j \Big[ 2R(x, x_j) - \sum_i \tilde{u}_i R(x_i, x_j) \Big] + \sum_i \tilde{u}_i \big[ K(x_i, x) - R(x_i, x) \big]. \qquad (7)$$
One final Taylor expansion we need is (using the symmetry of $K$)
$$K(x_i, x) = K(x, x_i) = \sum_{|\beta| < 2k} \frac{D_2^\beta K(x, x)}{\beta!} (x_i - x)^\beta + R(x, x_i). \qquad (8)$$
If we insert (8) into (7) and once more take advantage of the polynomial precision property of the coefficient vector $\tilde{u}$ we are left with
$$Q(\tilde{u}) = -\sum_j \tilde{u}_j \Big[ R(x, x_j) + R(x_j, x) - \sum_i \tilde{u}_i R(x_i, x_j) \Big].$$

From the previous slide we have
$$Q(\tilde{u}) = -\sum_j \tilde{u}_j \Big[ R(x, x_j) + R(x_j, x) - \sum_i \tilde{u}_i R(x_i, x_j) \Big].$$
Now the theorem on local polynomial reproduction allows us to bound $\sum_j |\tilde{u}_j| \le c_1$. Moreover, $\|x - x_j\|_2 \le c_2\, h_{X,\Omega}$ and $\|x_i - x_j\|_2 \le 2 c_2\, h_{X,\Omega}$.