MATH 590: Meshfree Methods


Chapter 14: The Power Function and Native Space Error Estimates
Greg Fasshauer
Department of Applied Mathematics, Illinois Institute of Technology
Fall 2010

Outline
1 Fill Distance and Approximation Orders
2 Lagrange Form of the Interpolant and Cardinal Basis Functions
3 The Power Function
4 Generic Error Estimates for Functions in N_K(Ω)
5 Error Estimates in Terms of the Fill Distance


Fill Distance and Approximation Orders

Goal: provide error estimates for scattered data interpolation with strictly (conditionally) positive definite functions. We will provide most of the details for the strictly positive definite case, and only mention the extension to the conditionally positive definite case at the end.

In their final form we will want our estimates to depend on some measure of the data distribution. The measure usually used in approximation theory is the so-called fill distance, already introduced in Chapter 2:

h = h_{X,Ω} = sup_{x ∈ Ω} min_{x_j ∈ X} ‖x − x_j‖_2.
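As a quick numerical illustration (not part of the slides), the fill distance can be approximated by brute force: maximize, over a dense set of test points covering Ω, the distance to the nearest data site. A minimal Python sketch with a hypothetical 1D point set:

```python
import math

def fill_distance(data_sites, test_points):
    """Approximate h_{X,Omega} = sup_{x in Omega} min_{x_j in X} ||x - x_j||_2
    by maximizing over a dense set of test points covering Omega."""
    return max(
        min(math.dist(x, xj) for xj in data_sites)
        for x in test_points
    )

# 1D example on Omega = [0, 1] with X = {0, 0.25, 0.5, 1}:
# the largest data-free interval is (0.5, 1), so the largest
# empty ball sits at x = 0.75 with radius 0.25.
X = [(0.0,), (0.25,), (0.5,), (1.0,)]
grid = [(i / 1000,) for i in range(1001)]
h = fill_distance(X, grid)
print(h)  # -> 0.25
```

For scattered points in higher dimensions the same brute-force maximization works; only the test-point grid grows.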

The fill distance indicates how well the data fill out the domain Ω. It is the radius of the largest empty ball that can be placed among the data locations.

Convergence: We will be interested in whether the error ‖f − P^{(h)}f‖ tends to zero as h → 0, and if so, how fast. Here {P^{(h)}}_h represents a sequence of interpolation (or, more generally, projection) operators that vary with the fill distance h.

Remark: Most error bounds will focus on this worst-case setting. Errors can also be measured in the L_2-norm (i.e., average case errors) or other L_p-norms.

Example: Let P^{(h)} denote interpolation to data given at (2^n + 1)^s, n = 1, 2, ..., equally spaced points in the unit cube in R^s, so that

h = 1 / (((2^n + 1)^s)^{1/s} − 1) = 2^{−n}.

The definition of the fill distance also covers scattered data such as sets of Halton points. In fact, since Halton points are quasi-uniformly distributed (see Appendix A), we can assume h ≈ 2^{−n} for a set of (2^n + 1)^s Halton points in R^s.

Remark: These relations explain the specific sizes of the point sets we used in earlier examples.

Since we want to employ the machinery of reproducing kernel Hilbert spaces presented in the previous chapter, we will concentrate on error estimates for functions f ∈ N_K. In the next chapter we will also mention some more general estimates.

We measure the speed of convergence to zero in terms of approximation order. We say that the approximation operator P^{(h)} has L_p-approximation order k if

‖f − P^{(h)}f‖_p = O(h^k) for h → 0.

Moreover, if we can also show that ‖f − P^{(h)}f‖_p ≠ o(h^k), then P^{(h)} has exact L_p-approximation order k.

Remark: We will concentrate mostly on the case p = ∞ (i.e., pointwise estimates), but approximation order in other norms can also be studied.
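In practice the approximation order k is often estimated empirically: measure the error at two fill distances and take the logarithmic ratio. A small Python sketch; the cubic error decay here is synthetic, assumed purely for illustration:

```python
import math

def observed_order(h1, e1, h2, e2):
    """Estimate k in e(h) = O(h^k) from errors measured at two fill distances:
    k ~ log(e1/e2) / log(h1/h2)."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# synthetic errors decaying like e(h) = 2 h^3, with h halved
# as for nested uniform grids (h = 2^{-n})
h1, h2 = 2.0 ** -3, 2.0 ** -4
k = observed_order(h1, 2 * h1 ** 3, h2, 2 * h2 ** 3)
print(k)  # -> 3.0 (up to round-off)
```

Halving h between refinement levels (as in the grid example above) makes the denominator log 2 and the estimate easy to read off.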

Remark: In order to keep the following discussion as transparent as possible we will restrict ourselves to strictly positive definite functions. With (considerably) more technical details the following can also be formulated for strictly conditionally positive definite functions (see [Wendland (2005a)] for details).


Lagrange Form of the Interpolant and Cardinal Basis Functions

The key idea for the following discussion is to express the interpolant in Lagrange form, i.e., using so-called cardinal basis functions. For radial basis function approximation this idea is due to [Wu and Schaback (1993)].

In the previous chapters we established that, for any strictly positive definite function Φ, the linear system Ac = y with A_{ij} = Φ(x_i − x_j), i, j = 1, ..., N, c = [c_1, ..., c_N]^T, and y = [f(x_1), ..., f(x_N)]^T has a unique solution.

In the following we will consider the more general situation where K is a strictly positive definite kernel, i.e., the entries of A are given by A_{ij} = K(x_i, x_j). The uniqueness result holds in this case also.

In order to obtain the cardinal basis functions u_j^*, j = 1, ..., N, with the property u_j^*(x_i) = δ_{ij}, i.e.,

u_j^*(x_i) = 1 if i = j, and 0 if i ≠ j,

we consider the linear system

A u^*(x) = b(x),    (1)

where the matrix A is as above (and therefore invertible), u^* = [u_1^*, ..., u_N^*]^T, and b = [K(·, x_1), ..., K(·, x_N)]^T.

Existence of Cardinal Functions

Theorem: Suppose K is a strictly positive definite kernel on R^s × R^s. Then, for any distinct points x_1, ..., x_N, there exist functions u_j^* ∈ span{K(·, x_j), j = 1, ..., N} such that u_j^*(x_i) = δ_{ij}. They are determined pointwise by solving the linear system (1), i.e., A u^*(x) = b(x).

Therefore, if we know the cardinal functions, we can write the interpolant P f to f at x_1, ..., x_N in the cardinal form

P f(x) = Σ_{j=1}^N f(x_j) u_j^*(x),    x ∈ R^s.
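The construction can be checked numerically: at each evaluation point x, solving system (1) yields the cardinal function values. The Python sketch below uses a Gaussian kernel on a small hypothetical 1D point set, with plain Gaussian elimination as the solver to stay self-contained:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            m = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= m * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

eps = 2.0
K = lambda x, y: math.exp(-(eps * (x - y)) ** 2)   # Gaussian kernel
X = [0.0, 0.3, 0.6, 1.0]                           # hypothetical data sites
A = [[K(xi, xj) for xj in X] for xi in X]

def cardinals(x):
    """Values u_1^*(x), ..., u_N^*(x) from A u^*(x) = b(x), system (1)."""
    return gauss_solve(A, [K(x, xj) for xj in X])

# cardinality: u_j^*(x_i) = delta_ij (up to round-off)
u = cardinals(X[1])
print(max(abs(uj - (1.0 if j == 1 else 0.0)) for j, uj in enumerate(u)) < 1e-8)  # -> True

# the cardinal form of the interpolant reproduces the data
f = math.sin
Pf = lambda x: sum(f(xj) * uj for xj, uj in zip(X, cardinals(x)))
print(abs(Pf(0.6) - f(0.6)) < 1e-10)  # -> True
```

Note that, as in the theorem, no interpolation coefficients for f ever appear: the cardinal values depend only on K and the data sites.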

Remark: Cardinal functions do not depend on the data values of the interpolation problem. They do depend heavily on the data locations (see plots on the following slides). Once the data sites are fixed and the basic function is chosen with an appropriate shape parameter (whose optimal value will depend on the data sites and values), the cardinal functions are determined by the linear system (1).

Example: Gaussian Cardinal Functions
Figure: Cardinal functions for Gaussian interpolation (with ε = 5) on 81 uniformly gridded points in [0, 1]^2. Centered at an edge point (left) and at an interior point (right).

Figure: Cardinal functions for Gaussian interpolation (with ε = 5) on 81 tensor-product Chebyshev points in [0, 1]^2. Centered at an edge point (left) and at an interior point (right).

Figure: Cardinal functions for Gaussian interpolation (with ε = 5) on 81 Halton points in [0, 1]^2. Centered at an edge point (left) and at an interior point (right).

Example: Multiquadric Cardinal Functions
Figure: Cardinal functions for multiquadric interpolation (with ε = 5) on 81 Halton points in [0, 1]^2. Centered at an edge point (left) and at an interior point (right).

Remark: Basic functions that grow with increasing distance from the center point (such as multiquadrics) are sometimes criticized as counter-intuitive for scattered data approximation. The plots above show that the associated cardinal functions are just as localized as those for the Gaussian basic function, and thus the function space spanned by multiquadrics is a good local space.

Program (RBFCardinalFunction.m)

rbf = @(e,r) exp(-(e*r).^2); ep = 5;
N = 81; dsites = CreatePoints(N,2,'u');
ctrs = dsites;
neval = 40; M = neval^2;
epoints = CreatePoints(M,2,'u');
DM_data = DistanceMatrix(dsites,ctrs);
IM = rbf(ep,DM_data);    % transpose of usual eval matrix
DM_B = DistanceMatrix(ctrs,epoints);
B = rbf(ep,DM_B);        % many right-hand sides for (1)
cardfuns = IM\B;         % one cardinal function per row
xe = reshape(epoints(:,1),neval,neval);
ye = reshape(epoints(:,2),neval,neval);
CFplot = surf(xe,ye,reshape(cardfuns(50,:),neval,neval));
set(CFplot,'FaceColor','interp','EdgeColor','none')
colormap autumn; view([145 45]); camlight
lighting gouraud

Note that the code is different from the book and avoids loops.


The Power Function

Another important ingredient needed for our error estimates is the so-called power function. To this end, we consider a domain Ω ⊆ R^s. Then for any strictly positive definite kernel K ∈ C(Ω × Ω), any set of distinct points X = {x_1, ..., x_N} ⊆ Ω, and an arbitrary vector u ∈ R^N, we define the quadratic form

Q(u) = K(x, x) − 2 Σ_{j=1}^N u_j K(x, x_j) + Σ_{i=1}^N Σ_{j=1}^N u_i u_j K(x_i, x_j).

Definition: Suppose Ω ⊆ R^s and K ∈ C(Ω × Ω) is strictly positive definite. For any distinct points X = {x_1, ..., x_N} ⊆ Ω the power function P_{K,X} is defined pointwise by

[P_{K,X}(x)]^2 = Q(u^*(x)),

where u^* is the vector of cardinal functions studied above.

Using the definition of the native space inner product and norm from the previous chapter we can rewrite the quadratic form Q(u) as

Q(u) = K(x, x) − 2 Σ_{j=1}^N u_j K(x, x_j) + Σ_{i=1}^N Σ_{j=1}^N u_i u_j K(x_i, x_j)
     = ⟨K(·, x), K(·, x)⟩_{N_K(Ω)} − 2 Σ_{j=1}^N u_j ⟨K(·, x), K(·, x_j)⟩_{N_K(Ω)} + Σ_{i=1}^N Σ_{j=1}^N u_i u_j ⟨K(·, x_i), K(·, x_j)⟩_{N_K(Ω)}
     = ⟨K(·, x) − Σ_{j=1}^N u_j K(·, x_j), K(·, x) − Σ_{j=1}^N u_j K(·, x_j)⟩_{N_K(Ω)}
     = ‖K(·, x) − Σ_{j=1}^N u_j K(·, x_j)‖^2_{N_K(Ω)}.    (2)

Remark: The name power function was chosen by [Schaback (1993)] based on its connection to the power function of a statistical decision function (originally introduced in [Neyman and Pearson (1936)]). In the paper [Wu and Schaback (1993)] the power function was referred to as the kriging function; this terminology comes from geostatistics (see, e.g., [Myers (1992)]). In the statistics literature, the power function is known as the kriging variance (see, e.g., [Berlinet and Thomas-Agnan (2004), Matheron (1965), Stein (1999)]).

Using the linear system notation employed earlier, i.e., A_{ij} = K(x_i, x_j), i, j = 1, ..., N, u = [u_1, ..., u_N]^T, and b = [K(·, x_1), ..., K(·, x_N)]^T, we note that we can also rewrite the quadratic form Q(u) as

Q(u) = K(x, x) − 2 Σ_{j=1}^N u_j K(x, x_j) + Σ_{i=1}^N Σ_{j=1}^N u_i u_j K(x_i, x_j)
     = K(x, x) − 2 u^T b(x) + u^T A u.    (3)

This suggests two alternative representations of the power function. Using the matrix-vector notation for Q(u), the power function is given as

[P_{K,X}(x)]^2 = Q(u^*(x)) = K(x, x) − 2 (u^*(x))^T b(x) + (u^*(x))^T A u^*(x).

However, by the definition of the cardinal functions, A u^*(x) = b(x), and therefore we have the two new variants

[P_{K,X}(x)]^2 = K(x, x) − 2 (u^*(x))^T b(x) + (u^*(x))^T A u^*(x)
             = K(x, x) − (u^*(x))^T b(x)
             = K(x, x) − (u^*(x))^T A u^*(x).

Remark: These formulas can be used for the numerical evaluation of the power function at x. To this end one first has to find the values of the cardinal functions u^*(x) by solving the system A u^*(x) = b(x). This results in

[P_{K,X}(x)]^2 = K(x, x) − (b(x))^T A^{−1} b(x).    (4)
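Formula (4) translates directly into code. A Python sketch with the same hypothetical Gaussian-kernel setup as before (plain Gaussian elimination as the solver to stay self-contained); it checks that the squared power function vanishes at the data sites and stays between 0 and K(x, x) elsewhere:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            m = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= m * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

eps = 2.0
K = lambda x, y: math.exp(-(eps * (x - y)) ** 2)   # Gaussian kernel
X = [0.0, 0.3, 0.6, 1.0]                           # hypothetical data sites
A = [[K(xi, xj) for xj in X] for xi in X]

def power_sq(x):
    """[P_{K,X}(x)]^2 = K(x,x) - b(x)^T A^{-1} b(x), i.e. formula (4)."""
    b = [K(x, xj) for xj in X]
    u = gauss_solve(A, b)          # u^*(x), since A u^*(x) = b(x)
    return K(x, x) - sum(ui * bi for ui, bi in zip(u, b))

print(abs(power_sq(0.3)) < 1e-10)        # -> True: zero at a data site
print(0.0 < power_sq(0.45) < K(0.45, 0.45))  # -> True: positive in between
```

Interpolation is exact at the data sites, so the power function, which bounds the pointwise error, must vanish there; this is exactly what the computation shows.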

Example: Gaussian Power Function
Figure: Data sites and power function for Gaussian interpolant with ε = 6 based on N = 81 uniformly gridded points in [0, 1]^2.

Figure: Data sites and power function for Gaussian interpolant with ε = 6 based on N = 81 tensor-product Chebyshev points in [0, 1]^2.

Figure: Data sites and power function for Gaussian interpolant with ε = 6 based on N = 81 Halton points in [0, 1]^2.

Remark: The dependence of the power function on the data locations is clearly visible. This connection was used in [De Marchi et al. (2005)] to iteratively obtain an optimal set of data locations that is independent of the data values.

Since A is a positive definite matrix whenever K is a strictly positive definite kernel, we see that the power function satisfies the bounds

0 ≤ [P_{K,X}(x)]^2 = K(x, x) − (u^*(x))^T A u^*(x) ≤ K(x, x).

At this point the power function is mostly a theoretical tool that helps us better understand error estimates, since it lets us decouple the effects due to the data function f from those due to the kernel K and the data locations X (see the following theorem). The power function is defined in an analogous way for strictly conditionally positive definite functions.


Generic Error Estimates for Functions in N_K(Ω)

Now we can give a first generic error estimate.

Theorem: Let Ω ⊆ R^s, let K ∈ C(Ω × Ω) be strictly positive definite, and suppose that the points X = {x_1, ..., x_N} are distinct. Denote the interpolant to f ∈ N_K(Ω) on X by P f. Then for every x ∈ Ω

|f(x) − P f(x)| ≤ P_{K,X}(x) ‖f‖_{N_K(Ω)}.
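The bound can be verified numerically for functions whose native space norm is known in closed form: for f = K(·, z) the reproducing property gives ‖f‖_{N_K(Ω)} = √K(z, z). A Python sketch under the same hypothetical Gaussian setup as before (plain Gaussian elimination as solver; point sets are illustrative, not from the slides):

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            m = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= m * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

eps = 2.0
K = lambda x, y: math.exp(-(eps * (x - y)) ** 2)   # Gaussian kernel
X = [0.0, 0.3, 0.6, 1.0]                           # hypothetical data sites
A = [[K(xi, xj) for xj in X] for xi in X]

z = 0.45
f = lambda x: K(x, z)                  # f = K(., z) lies in N_K(Omega)
f_norm = math.sqrt(K(z, z))            # reproducing property: ||f|| = sqrt(K(z,z))

def check_bound(x):
    """Check |f(x) - Pf(x)| <= P_{K,X}(x) ||f||_{N_K}."""
    b = [K(x, xj) for xj in X]
    u = gauss_solve(A, b)                               # cardinal values u^*(x)
    Pf = sum(f(xj) * uj for xj, uj in zip(X, u))        # cardinal form
    P = math.sqrt(max(K(x, x) - sum(ui * bi for ui, bi in zip(u, b)), 0.0))
    return abs(f(x) - Pf) <= P * f_norm + 1e-12

print(all(check_bound(x) for x in [0.1, 0.45, 0.8]))  # -> True
```

The error and the power function are computed from the same solve of system (1), which makes the decoupling in the theorem concrete: ‖f‖ carries the data function, P_{K,X} carries the kernel and the sites.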

Proof: Since f is assumed to lie in the native space of K, the reproducing property of K yields

f(x) = ⟨f, K(·, x)⟩_{N_K(Ω)}.

We express the interpolant in its cardinal form and apply the reproducing property of K. This gives us

P f(x) = Σ_{j=1}^N f(x_j) u_j^*(x)
       = Σ_{j=1}^N u_j^*(x) ⟨f, K(·, x_j)⟩_{N_K(Ω)}
       = ⟨f, Σ_{j=1}^N u_j^*(x) K(·, x_j)⟩_{N_K(Ω)}.

Proof (cont.): Now all that remains to be done is to combine the two formulas just derived and apply the Cauchy-Schwarz inequality. Thus,

|f(x) − P f(x)| = |⟨f, K(·, x) − Σ_{j=1}^N u_j^*(x) K(·, x_j)⟩_{N_K(Ω)}|
               ≤ ‖f‖_{N_K(Ω)} ‖K(·, x) − Σ_{j=1}^N u_j^*(x) K(·, x_j)‖_{N_K(Ω)}
               = ‖f‖_{N_K(Ω)} P_{K,X}(x),

where we have used the representation (2) of the quadratic form Q(u^*(x)) and the definition of the power function.

One of the main benefits of the above theorem is that we are now able to estimate the interpolation error by considering two independent phenomena:
the smoothness of the data (measured in terms of the native space norm of $f$, which is independent of the data locations but does depend on $K$),
and the distribution of the data (measured in terms of the power function, which is independent of the actual data values).

Remark. This is analogous to the standard error estimate for polynomial interpolation cited in most numerical analysis texts. Effects due to the use of any specific kernel $K$ (or basic function in the translation-invariant or radial case) are felt in both terms, since the native space norm of $f$ also varies with $K$. In particular, changing a possible shape parameter $\varepsilon$ will have an effect on both terms in the error bound.

Outline
1 Fill Distance and Approximation Orders
2 Lagrange Form of the Interpolant and Cardinal Basis Functions
3 The Power Function
4 Generic Error Estimates for Functions in $\mathcal{N}_K(\Omega)$
5 Error Estimates in Terms of the Fill Distance

Error Estimates in Terms of the Fill Distance

The next steps are to refine this error bound by expressing the influence of the data locations in terms of the fill distance, and to specialize the bound to various choices of kernels $K$.

Remark. The most common strategy for obtaining error bounds in numerical analysis is to take advantage of the polynomial precision of a method (at least locally), and then to apply a Taylor expansion.

With this in mind we observe

Theorem. Let $\Omega \subseteq \mathbb{R}^s$, and suppose $K \in C(\Omega \times \Omega)$ is strictly positive definite. Let $X = \{x_1, \ldots, x_N\}$ be a set of distinct points in $\Omega$, and define the quadratic form $Q(u)$ as in (2). The minimum of $Q(u)$ is attained for the vector $u = u^*(x)$ of values of the cardinal functions, i.e.,
$$Q(u^*(x)) \le Q(u) \quad \text{for all } u \in \mathbb{R}^N.$$

Proof. We showed above (see (3)) that
$$Q(u) = K(x,x) - 2\,u^T b(x) + u^T A u.$$
The minimum of this quadratic form is given by the solution of the linear system $A u = b(x)$. This, however, yields the cardinal functions, $u = u^*(x)$.
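This minimization property can be checked numerically (a hedged sketch, not from the slides): since $A$ is positive definite, $Q(u) = Q(u^*) + (u - u^*)^T A (u - u^*) \ge Q(u^*)$, so no perturbation of the cardinal vector can decrease $Q$. Kernel, shape parameter, and points below are illustrative assumptions.

```python
import math, random

# Hedged check: the cardinal vector u*(x) = A^{-1} b(x) minimizes
# Q(u) = K(x,x) - 2 u^T b(x) + u^T A u, because A is positive definite.

def kernel(x, y, eps=2.0):
    return math.exp(-(eps * (x - y)) ** 2)

def solve(A, b):
    # Dense Gaussian elimination with partial pivoting (A is small).
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        sol[r] = (M[r][n] - sum(M[r][k] * sol[k] for k in range(r + 1, n))) / M[r][r]
    return sol

X = [0.0, 0.25, 0.5, 0.75, 1.0]   # assumed data sites
A = [[kernel(xi, xj) for xj in X] for xi in X]
x = 0.3                           # assumed evaluation point
b = [kernel(x, xj) for xj in X]

def Q(u):
    quad = sum(u[i] * u[j] * A[i][j]
               for i in range(len(u)) for j in range(len(u)))
    return kernel(x, x) - 2.0 * sum(ui * bi for ui, bi in zip(u, b)) + quad

u_star = solve(A, b)
random.seed(0)
worse = min(Q([ui + random.uniform(-0.1, 0.1) for ui in u_star])
            for _ in range(100))
print(Q(u_star) <= worse)   # True: no perturbed vector beats u*(x)
```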

In the proof below we will use a special coefficient vector $\tilde{u}$ which provides the polynomial precision desired for the proof of the refined error estimate. Its existence is guaranteed by the following theorem on local polynomial reproduction proved in [Wendland (2005a)]. The theorem requires

Definition. A region $\Omega \subseteq \mathbb{R}^s$ satisfies an interior cone condition if there exist an angle $\theta \in (0, \pi/2)$ and a radius $r > 0$ such that for every $x \in \Omega$ there exists a unit vector $\xi(x)$ such that the cone
$$C = \{x + \lambda y : y \in \mathbb{R}^s,\ \|y\|_2 = 1,\ y^T \xi(x) \ge \cos\theta,\ \lambda \in [0, r]\}$$
is contained in $\Omega$.

The interior cone condition imposes a certain regularity on the domain $\Omega$. In fact, a domain that satisfies this condition contains balls of a controllable radius. In particular, this will be important when bounding the remainders of the Taylor expansions below. For more details see [Wendland (2005a)]. Existence of an approximation scheme with local polynomial precision is guaranteed by the following theorem.

Theorem. Suppose $\Omega \subseteq \mathbb{R}^s$ is bounded and satisfies an interior cone condition, and let $\ell$ be a non-negative integer. Then there exist positive constants $h_0$, $c_1$, and $c_2$ such that for all $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ with $h_{X,\Omega} \le h_0$ and every $x \in \Omega$ there exist numbers $\tilde{u}_1(x), \ldots, \tilde{u}_N(x)$ with
(1) $\displaystyle\sum_{j=1}^N \tilde{u}_j(x)\, p(x_j) = p(x)$ for all polynomials $p \in \Pi^s_\ell$,
(2) $\displaystyle\sum_{j=1}^N |\tilde{u}_j(x)| \le c_1$,
(3) $\tilde{u}_j(x) = 0$ if $\|x - x_j\|_2 > c_2 h_{X,\Omega}$.

Property (1) yields the polynomial precision, and property (3) shows that the scheme is local. The bound in property (2) is essential for controlling the growth of the error estimates. The quantity on the left-hand side of (2) is known as the Lebesgue constant at $x$.
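A concrete low-dimensional instance may help (a hedged sketch, not from the slides): in 1D with $\ell = 1$, the piecewise-linear "hat" weights built from the two nodes bracketing $x$ satisfy all three properties, with $c_1 = 1$ and, on a grid of fill distance $h$, support radius $c_2 h = 2h$. The grid and test polynomial are illustrative assumptions.

```python
# Hedged 1D illustration (s = 1, l = 1) of local polynomial reproduction:
# hat-function weights reproduce linear polynomials exactly, have absolute
# sum 1, and vanish away from x.

X = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]  # fill distance h = 0.05

def local_weights(x, X):
    # u~_j(x): nonzero only for the two nodes bracketing x.
    u = [0.0] * len(X)
    j = max(i for i in range(len(X) - 1) if X[i] <= x)
    xl, xr = X[j], X[j + 1]
    u[j] = (xr - x) / (xr - xl)
    u[j + 1] = (x - xl) / (xr - xl)
    return u

x = 0.237
u = local_weights(x, X)
p = lambda t: 2.0 - 3.0 * t          # an arbitrary linear polynomial

reproduced = sum(uj * p(xj) for uj, xj in zip(u, X))
print(abs(reproduced - p(x)) < 1e-12)          # property (1): True
print(sum(abs(uj) for uj in u))                # property (2): ~1.0, so c_1 = 1
print(all(uj == 0.0 for uj, xj in zip(u, X)
          if abs(x - xj) > 0.1))               # property (3) with c_2 h = 0.1: True
```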

In the following theorem and its proof we will make repeated use of multi-index notation and multivariate Taylor expansions. For $\beta = (\beta_1, \ldots, \beta_s) \in \mathbb{N}_0^s$ with $|\beta| = \sum_{i=1}^s \beta_i$ we define the differential operator $D^\beta$ as
$$D^\beta = \frac{\partial^{|\beta|}}{\partial x_1^{\beta_1} \cdots \partial x_s^{\beta_s}}.$$
The notation $D_2^\beta K(w, \cdot)$ used below indicates that the operator is applied to $K(w, \cdot)$ viewed as a function of the second variable.

The multivariate Taylor expansion of the function $K(w, \cdot)$ centered at $w$ is given by
$$K(w,z) = \sum_{|\beta| < 2k} \frac{D_2^\beta K(w,w)}{\beta!}\,(z - w)^\beta + R(w,z)$$
with remainder
$$R(w,z) = \sum_{|\beta| = 2k} \frac{D_2^\beta K(w, \xi_{w,z})}{\beta!}\,(z - w)^\beta,$$
where $\xi_{w,z}$ lies somewhere on the line segment connecting $w$ and $z$.
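The $O(\|z - w\|^{2k})$ size of this remainder is what will drive the $h^k$ rate below. A hedged numerical sanity check (not from the slides), using the assumed 1D kernel $K(w,z) = e^{-(z-w)^2}$ and $k = 1$:

```python
import math

# Hedged check: for K(w, z) = exp(-(z - w)^2) and k = 1, expanding K(w, .)
# about z = w with terms |beta| < 2 leaves a remainder R(w, z) = O(|z - w|^2).

def K(w, z):
    return math.exp(-(z - w) ** 2)

def d2K(w, z):
    # derivative of K(w, .) with respect to the second variable
    return -2.0 * (z - w) * K(w, z)

def remainder(w, z):
    # R(w, z) = K(w, z) - [K(w, w) + D2 K(w, w) (z - w)]
    return K(w, z) - (K(w, w) + d2K(w, w) * (z - w))

w = 0.4
r1 = abs(remainder(w, w + 0.1))
r2 = abs(remainder(w, w + 0.05))
print(r1 / 0.1 ** 2)    # both ratios stay close to 1,
print(r2 / 0.05 ** 2)   # confirming R = O(h^2) as h shrinks
```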

Our earlier generic error estimate can now be formulated in terms of the fill distance.

Theorem. Suppose $\Omega \subseteq \mathbb{R}^s$ is bounded and satisfies an interior cone condition. Suppose $K \in C^{2k}(\Omega \times \Omega)$ is symmetric and strictly positive definite. Denote the interpolant to $f \in \mathcal{N}_K(\Omega)$ on the set $X$ by $\mathcal{P}f$. Then there exist positive constants $h_0$ and $C$ (independent of $x$, $f$ and $K$) such that
$$|f(x) - \mathcal{P}f(x)| \le C\, h_{X,\Omega}^k\, \sqrt{C_K(x)}\, \|f\|_{\mathcal{N}_K(\Omega)},$$
provided $h_{X,\Omega} \le h_0$. Here
$$C_K(x) = \max_{|\beta| = 2k}\ \max_{w,z \in \Omega \cap B(x,\, c_2 h_{X,\Omega})} \big|D_2^\beta K(w,z)\big|$$
with $B(x, c_2 h_{X,\Omega})$ denoting the ball of radius $c_2 h_{X,\Omega}$ centered at $x$.
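Before the proof, a hedged empirical illustration (not from the slides): interpolating a smooth function with a Gaussian kernel on increasingly dense point sets, the maximum error falls as the fill distance shrinks. The kernel, shape parameter, and test function are illustrative assumptions, and no attempt is made to verify the precise $O(h^k)$ rate.

```python
import math

# Hedged illustration: max interpolation error shrinks as the fill
# distance h_{X,Omega} of equispaced sites on [0,1] is halved.

def kernel(x, y, eps=3.0):
    return math.exp(-(eps * (x - y)) ** 2)

def solve(A, b):
    # Dense Gaussian elimination with partial pivoting (A is small).
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        sol[r] = (M[r][n] - sum(M[r][k] * sol[k] for k in range(r + 1, n))) / M[r][r]
    return sol

def max_error(n):
    X = [i / (n - 1) for i in range(n)]            # equispaced sites
    f = lambda t: math.sin(2.0 * math.pi * t)      # assumed test function
    A = [[kernel(xi, xj) for xj in X] for xi in X]
    c = solve(A, [f(xj) for xj in X])              # expansion coefficients
    Pf = lambda t: sum(cj * kernel(t, xj) for cj, xj in zip(c, X))
    return max(abs(f(t) - Pf(t)) for t in (i / 200.0 for i in range(201)))

e5, e9 = max_error(5), max_error(9)
print(e5, e9)   # the error on the denser set is smaller
```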

Proof. By the generic error estimate in terms of the power function we know
$$|f(x) - \mathcal{P}f(x)| \le P_{K,X}(x)\, \|f\|_{\mathcal{N}_K(\Omega)}.$$
Therefore, we now derive the bound $P_{K,X}(x) \le C\, h_{X,\Omega}^k\, \sqrt{C_K(x)}$ for the power function in terms of the fill distance. We know that the power function is defined by $[P_{K,X}(x)]^2 = Q(u^*(x))$. Moreover, we know that the quadratic form $Q(u)$ is minimized by $u = u^*(x)$. Therefore, any other coefficient vector $u$ will yield an upper bound on the power function.

We take $u = \tilde{u}(x)$ from the theorem guaranteeing existence of a local polynomial reproduction, so that we are ensured polynomial precision of degree $\ell = 2k - 1$. For this specific choice of coefficients we have
$$[P_{K,X}(x)]^2 \le Q(\tilde{u}) = K(x,x) - 2\sum_j \tilde{u}_j K(x, x_j) + \sum_i \sum_j \tilde{u}_i \tilde{u}_j K(x_i, x_j),$$
where the sums are over those indices $j$ with $\tilde{u}_j \neq 0$. Now we apply the Taylor expansion centered at $x$ to $K(x, \cdot)$ and centered at $x_i$ to $K(x_i, \cdot)$, and evaluate both functions at $x_j$. This yields
$$Q(\tilde{u}) = K(x,x) - 2\sum_j \tilde{u}_j \left[ \sum_{|\beta| < 2k} \frac{D_2^\beta K(x,x)}{\beta!}\,(x_j - x)^\beta + R(x, x_j) \right] + \sum_i \sum_j \tilde{u}_i \tilde{u}_j \left[ \sum_{|\beta| < 2k} \frac{D_2^\beta K(x_i, x_i)}{\beta!}\,(x_j - x_i)^\beta + R(x_i, x_j) \right].$$

Next, we identify $p(z) = (z - x)^\beta$, so that $p(x) = 0$ unless $\beta = 0$. Therefore the polynomial precision property of the coefficient vector $\tilde{u}$ simplifies this expression to
$$Q(\tilde{u}) = K(x,x) - 2K(x,x) - 2\sum_j \tilde{u}_j R(x, x_j) + \sum_i \tilde{u}_i \sum_{|\beta| < 2k} \frac{D_2^\beta K(x_i, x_i)}{\beta!}\,(x - x_i)^\beta + \sum_i \sum_j \tilde{u}_i \tilde{u}_j R(x_i, x_j). \tag{5}$$
Now we can apply the Taylor expansion again and make the observation that
$$\sum_{|\beta| < 2k} \frac{D_2^\beta K(x_i, x_i)}{\beta!}\,(x - x_i)^\beta = K(x_i, x) - R(x_i, x). \tag{6}$$

If we use (6) and rearrange the terms in (5) we get
$$Q(\tilde{u}) = -K(x,x) - \sum_j \tilde{u}_j \Big[ 2R(x, x_j) - \sum_i \tilde{u}_i R(x_i, x_j) \Big] + \sum_i \tilde{u}_i \big[ K(x_i, x) - R(x_i, x) \big]. \tag{7}$$
One final Taylor expansion we need is (using the symmetry of $K$)
$$K(x_i, x) = K(x, x_i) = \sum_{|\beta| < 2k} \frac{D_2^\beta K(x,x)}{\beta!}\,(x_i - x)^\beta + R(x, x_i). \tag{8}$$
If we insert (8) into (7) and once more take advantage of the polynomial precision property of the coefficient vector $\tilde{u}$, we are left with
$$Q(\tilde{u}) = -\sum_j \tilde{u}_j \Big[ R(x, x_j) + R(x_j, x) - \sum_i \tilde{u}_i R(x_i, x_j) \Big].$$

From the previous slide we have
$$Q(\tilde{u}) = -\sum_j \tilde{u}_j \Big[ R(x, x_j) + R(x_j, x) - \sum_i \tilde{u}_i R(x_i, x_j) \Big].$$
Now the theorem on local polynomial reproduction allows us to bound $\sum_j |\tilde{u}_j| \le c_1$. Moreover, $\|x - x_j\|_2 \le c_2 h_{X,\Omega}$ and $\|x_i - x_j\|_2 \le 2 c_2 h_{X,\Omega}$.
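The transcription breaks off here; a hedged sketch of how these pieces combine in the standard argument (following [Wendland (2005a)], not shown on the slides above): since all points involved lie in $B(x, c_2 h_{X,\Omega})$ and $|(z - w)^\beta| \le \|z - w\|_2^{|\beta|}$, each remainder term satisfies
$$|R(w,z)| \le \sum_{|\beta| = 2k} \frac{|D_2^\beta K(w, \xi_{w,z})|}{\beta!}\, \|z - w\|_2^{2k} \le c\, C_K(x)\, (2 c_2 h_{X,\Omega})^{2k}.$$
Together with $\sum_j |\tilde{u}_j| \le c_1$ this gives $Q(\tilde{u}) \le \tilde{C}\, C_K(x)\, h_{X,\Omega}^{2k}$, and taking square roots yields the claimed bound $P_{K,X}(x) \le C\, h_{X,\Omega}^k\, \sqrt{C_K(x)}$.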


More information

INVERSE AND SATURATION THEOREMS FOR RADIAL BASIS FUNCTION INTERPOLATION

INVERSE AND SATURATION THEOREMS FOR RADIAL BASIS FUNCTION INTERPOLATION MATHEMATICS OF COMPUTATION Volume 71, Number 238, Pages 669 681 S 0025-5718(01)01383-7 Article electronically published on November 28, 2001 INVERSE AND SATURATION THEOREMS FOR RADIAL BASIS FUNCTION INTERPOLATION

More information

On interpolation by radial polynomials C. de Boor Happy 60th and beyond, Charlie!

On interpolation by radial polynomials C. de Boor Happy 60th and beyond, Charlie! On interpolation by radial polynomials C. de Boor Happy 60th and beyond, Charlie! Abstract A lemma of Micchelli s, concerning radial polynomials and weighted sums of point evaluations, is shown to hold

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 4: The Connection to Kriging Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2014 fasshauer@iit.edu MATH 590 Chapter 4 1 Outline

More information

Optimal Polynomial Admissible Meshes on the Closure of C 1,1 Bounded Domains

Optimal Polynomial Admissible Meshes on the Closure of C 1,1 Bounded Domains Optimal Polynomial Admissible Meshes on the Closure of C 1,1 Bounded Domains Constructive Theory of Functions Sozopol, June 9-15, 2013 F. Piazzon, joint work with M. Vianello Department of Mathematics.

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Problem Set 6: Solutions Math 201A: Fall a n x n,

Problem Set 6: Solutions Math 201A: Fall a n x n, Problem Set 6: Solutions Math 201A: Fall 2016 Problem 1. Is (x n ) n=0 a Schauder basis of C([0, 1])? No. If f(x) = a n x n, n=0 where the series converges uniformly on [0, 1], then f has a power series

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 43: RBF-PS Methods in MATLAB Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter 43 1 Outline

More information

Regularization in Reproducing Kernel Banach Spaces

Regularization in Reproducing Kernel Banach Spaces .... Regularization in Reproducing Kernel Banach Spaces Guohui Song School of Mathematical and Statistical Sciences Arizona State University Comp Math Seminar, September 16, 2010 Joint work with Dr. Fred

More information

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions

Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions Chapter 3 Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions 3.1 Scattered Data Interpolation with Polynomial Precision Sometimes the assumption on the

More information

ON DIMENSION-INDEPENDENT RATES OF CONVERGENCE FOR FUNCTION APPROXIMATION WITH GAUSSIAN KERNELS

ON DIMENSION-INDEPENDENT RATES OF CONVERGENCE FOR FUNCTION APPROXIMATION WITH GAUSSIAN KERNELS ON DIMENSION-INDEPENDENT RATES OF CONVERGENCE FOR FUNCTION APPROXIMATION WITH GAUSSIAN KERNELS GREGORY E. FASSHAUER, FRED J. HICKERNELL, AND HENRYK WOŹNIAKOWSKI Abstract. This article studies the problem

More information

MA 3021: Numerical Analysis I Numerical Differentiation and Integration

MA 3021: Numerical Analysis I Numerical Differentiation and Integration MA 3021: Numerical Analysis I Numerical Differentiation and Integration Suh-Yuh Yang ( 楊肅煜 ) Department of Mathematics, National Central University Jhongli District, Taoyuan City 32001, Taiwan syyang@math.ncu.edu.tw

More information

STAT 200C: High-dimensional Statistics

STAT 200C: High-dimensional Statistics STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 57 Table of Contents 1 Sparse linear models Basis Pursuit and restricted null space property Sufficient conditions for RNS 2 / 57

More information

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. 29 May :45 11:45

Two hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. 29 May :45 11:45 Two hours MATH20602 To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER NUMERICAL ANALYSIS 1 29 May 2015 9:45 11:45 Answer THREE of the FOUR questions. If more

More information

Reproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto

Reproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto Reproducing Kernel Hilbert Spaces 9.520 Class 03, 15 February 2006 Andrea Caponnetto About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert

More information

Kernel Methods. Machine Learning A W VO

Kernel Methods. Machine Learning A W VO Kernel Methods Machine Learning A 708.063 07W VO Outline 1. Dual representation 2. The kernel concept 3. Properties of kernels 4. Examples of kernel machines Kernel PCA Support vector regression (Relevance

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Boundary Value Problems and Iterative Methods for Linear Systems

Boundary Value Problems and Iterative Methods for Linear Systems Boundary Value Problems and Iterative Methods for Linear Systems 1. Equilibrium Problems 1.1. Abstract setting We want to find a displacement u V. Here V is a complete vector space with a norm v V. In

More information

Radial Basis Functions I

Radial Basis Functions I Radial Basis Functions I Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 14, 2008 Today Reformulation of natural cubic spline interpolation Scattered

More information

Dual Bases and Discrete Reproducing Kernels: A Unified Framework for RBF and MLS Approximation

Dual Bases and Discrete Reproducing Kernels: A Unified Framework for RBF and MLS Approximation Dual Bases and Discrete Reproducing Kernels: A Unified Framework for RBF and MLS Approimation G. E. Fasshauer Abstract Moving least squares (MLS) and radial basis function (RBF) methods play a central

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

MAT 419 Lecture Notes Transcribed by Eowyn Cenek 6/1/2012

MAT 419 Lecture Notes Transcribed by Eowyn Cenek 6/1/2012 (Homework 1: Chapter 1: Exercises 1-7, 9, 11, 19, due Monday June 11th See also the course website for lectures, assignments, etc) Note: today s lecture is primarily about definitions Lots of definitions

More information

Approximation of High-Dimensional Rank One Tensors

Approximation of High-Dimensional Rank One Tensors Approximation of High-Dimensional Rank One Tensors Markus Bachmayr, Wolfgang Dahmen, Ronald DeVore, and Lars Grasedyck March 14, 2013 Abstract Many real world problems are high-dimensional in that their

More information

Analysis II: The Implicit and Inverse Function Theorems

Analysis II: The Implicit and Inverse Function Theorems Analysis II: The Implicit and Inverse Function Theorems Jesse Ratzkin November 17, 2009 Let f : R n R m be C 1. When is the zero set Z = {x R n : f(x) = 0} the graph of another function? When is Z nicely

More information

Interpolation by Basis Functions of Different Scales and Shapes

Interpolation by Basis Functions of Different Scales and Shapes Interpolation by Basis Functions of Different Scales and Shapes M. Bozzini, L. Lenarduzzi, M. Rossini and R. Schaback Abstract Under very mild additional assumptions, translates of conditionally positive

More information

Introduction to Proofs

Introduction to Proofs Real Analysis Preview May 2014 Properties of R n Recall Oftentimes in multivariable calculus, we looked at properties of vectors in R n. If we were given vectors x =< x 1, x 2,, x n > and y =< y1, y 2,,

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

Discrete Projection Methods for Integral Equations

Discrete Projection Methods for Integral Equations SUB Gttttingen 7 208 427 244 98 A 5141 Discrete Projection Methods for Integral Equations M.A. Golberg & C.S. Chen TM Computational Mechanics Publications Southampton UK and Boston USA Contents Sources

More information

A new stable basis for radial basis function interpolation

A new stable basis for radial basis function interpolation A new stable basis for radial basis function interpolation Stefano De Marchi and Gabriele Santin Department of Mathematics University of Padua (Italy) Abstract It is well-known that radial basis function

More information

Approximation theory

Approximation theory Approximation theory Xiaojing Ye, Math & Stat, Georgia State University Spring 2019 Numerical Analysis II Xiaojing Ye, Math & Stat, Georgia State University 1 1 1.3 6 8.8 2 3.5 7 10.1 Least 3squares 4.2

More information

B. Appendix B. Topological vector spaces

B. Appendix B. Topological vector spaces B.1 B. Appendix B. Topological vector spaces B.1. Fréchet spaces. In this appendix we go through the definition of Fréchet spaces and their inductive limits, such as they are used for definitions of function

More information

ASYMPTOTICALLY EXACT A POSTERIORI ESTIMATORS FOR THE POINTWISE GRADIENT ERROR ON EACH ELEMENT IN IRREGULAR MESHES. PART II: THE PIECEWISE LINEAR CASE

ASYMPTOTICALLY EXACT A POSTERIORI ESTIMATORS FOR THE POINTWISE GRADIENT ERROR ON EACH ELEMENT IN IRREGULAR MESHES. PART II: THE PIECEWISE LINEAR CASE MATEMATICS OF COMPUTATION Volume 73, Number 246, Pages 517 523 S 0025-5718(0301570-9 Article electronically published on June 17, 2003 ASYMPTOTICALLY EXACT A POSTERIORI ESTIMATORS FOR TE POINTWISE GRADIENT

More information

MATH 350: Introduction to Computational Mathematics

MATH 350: Introduction to Computational Mathematics MATH 350: Introduction to Computational Mathematics Chapter V: Least Squares Problems Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2011 fasshauer@iit.edu MATH

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

PARTIAL DIFFERENTIAL EQUATIONS. Lecturer: D.M.A. Stuart MT 2007

PARTIAL DIFFERENTIAL EQUATIONS. Lecturer: D.M.A. Stuart MT 2007 PARTIAL DIFFERENTIAL EQUATIONS Lecturer: D.M.A. Stuart MT 2007 In addition to the sets of lecture notes written by previous lecturers ([1, 2]) the books [4, 7] are very good for the PDE topics in the course.

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Dynamic programming using radial basis functions and Shepard approximations

Dynamic programming using radial basis functions and Shepard approximations Dynamic programming using radial basis functions and Shepard approximations Oliver Junge, Alex Schreiber Fakultät für Mathematik Technische Universität München Workshop on algorithms for dynamical systems

More information

MATH 532: Linear Algebra

MATH 532: Linear Algebra MATH 532: Linear Algebra Chapter 5: Norms, Inner Products and Orthogonality Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2015 fasshauer@iit.edu MATH 532 1 Outline

More information

Greedy Kernel Techniques with Applications to Machine Learning

Greedy Kernel Techniques with Applications to Machine Learning Greedy Kernel Techniques with Applications to Machine Learning Robert Schaback Jochen Werner Göttingen University Institut für Numerische und Angewandte Mathematik http://www.num.math.uni-goettingen.de/schaback

More information

FIXED POINT ITERATIONS

FIXED POINT ITERATIONS FIXED POINT ITERATIONS MARKUS GRASMAIR 1. Fixed Point Iteration for Non-linear Equations Our goal is the solution of an equation (1) F (x) = 0, where F : R n R n is a continuous vector valued mapping in

More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Kernel Methods Varun Chandola Computer Science & Engineering State University of New York at Buffalo Buffalo, NY, USA chandola@buffalo.edu Chandola@UB CSE 474/574 1 / 21

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

Support Vector Machines

Support Vector Machines Wien, June, 2010 Paul Hofmarcher, Stefan Theussl, WU Wien Hofmarcher/Theussl SVM 1/21 Linear Separable Separating Hyperplanes Non-Linear Separable Soft-Margin Hyperplanes Hofmarcher/Theussl SVM 2/21 (SVM)

More information

Laplace s Equation. Chapter Mean Value Formulas

Laplace s Equation. Chapter Mean Value Formulas Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic

More information

Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Distributional Operators

Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Distributional Operators Noname manuscript No. (will be inserted by the editor) Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Distributional Operators Gregory E. Fasshauer Qi Ye Abstract

More information

Numerical Analysis Preliminary Exam 10.00am 1.00pm, January 19, 2018

Numerical Analysis Preliminary Exam 10.00am 1.00pm, January 19, 2018 Numerical Analysis Preliminary Exam 0.00am.00pm, January 9, 208 Instructions. You have three hours to complete this exam. Submit solutions to four (and no more) of the following six problems. Please start

More information

MATH 350: Introduction to Computational Mathematics

MATH 350: Introduction to Computational Mathematics MATH 350: Introduction to Computational Mathematics Chapter IV: Locating Roots of Equations Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2011 fasshauer@iit.edu

More information

Numerical Solutions to Partial Differential Equations

Numerical Solutions to Partial Differential Equations Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University Nonconformity and the Consistency Error First Strang Lemma Abstract Error Estimate

More information

EECS 598: Statistical Learning Theory, Winter 2014 Topic 11. Kernels

EECS 598: Statistical Learning Theory, Winter 2014 Topic 11. Kernels EECS 598: Statistical Learning Theory, Winter 2014 Topic 11 Kernels Lecturer: Clayton Scott Scribe: Jun Guo, Soumik Chatterjee Disclaimer: These notes have not been subjected to the usual scrutiny reserved

More information

MA 102 (Multivariable Calculus)

MA 102 (Multivariable Calculus) MA 102 (Multivariable Calculus) Rupam Barman and Shreemayee Bora Department of Mathematics IIT Guwahati Outline of the Course Two Topics: Multivariable Calculus Will be taught as the first part of the

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

Numerical Integration in Meshfree Methods

Numerical Integration in Meshfree Methods Numerical Integration in Meshfree Methods Pravin Madhavan New College University of Oxford A thesis submitted for the degree of Master of Science in Mathematical Modelling and Scientific Computing Trinity

More information

Computational Aspects of Radial Basis Function Approximation

Computational Aspects of Radial Basis Function Approximation Working title: Topics in Multivariate Approximation and Interpolation 1 K. Jetter et al., Editors c 2005 Elsevier B.V. All rights reserved Computational Aspects of Radial Basis Function Approximation Holger

More information