MATH 590: Meshfree Methods
Chapter 33: Adaptive Iteration
Greg Fasshauer (fasshauer@iit.edu)
Department of Applied Mathematics, Illinois Institute of Technology
Fall 2010
Outline
1 A Greedy Adaptive Algorithm
2 The Faul-Powell Algorithm
The two adaptive algorithms discussed in this chapter both yield an approximate solution to the RBF interpolation problem. The algorithms have some similarity with some of the omitted material from Chapters 21, 31 and 32. The contents of this chapter are based mostly on the papers [Faul and Powell (1999), Faul and Powell (2000), Schaback and Wendland (2000a), Schaback and Wendland (2000b)] and the book [Wendland (2005a)]. As always, we concentrate on systems for strictly positive definite kernels (variations for strictly conditionally positive definite kernels also exist).
A Greedy Adaptive Algorithm
One of the central ingredients is the use of the native space inner product discussed in Chapter 13. As always, we assume that our data sites are $X = \{x_1, \ldots, x_N\}$. We also consider a second set $Y \subseteq X$. Let $P^Y f$ be the interpolant to $f$ on $Y \subseteq X$. Then the first orthogonality lemma from Chapter 18 (with $g = f$) yields

$$\langle f - P^Y f, \, P^Y f \rangle_{\mathcal{N}_K(\Omega)} = 0.$$

This leads to the energy split (see Chapter 18)

$$\|f\|^2_{\mathcal{N}_K(\Omega)} = \|f - P^Y f\|^2_{\mathcal{N}_K(\Omega)} + \|P^Y f\|^2_{\mathcal{N}_K(\Omega)}.$$
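Both the orthogonality relation and the energy split can be checked numerically. The following Python sketch (not part of the lecture's MATLAB codes; the Gaussian kernel, point sets, and shape parameter are illustrative assumptions) takes an $f$ from the native space, forms $P^Y f$ on a subset $Y \subset X$, and evaluates the native-space quantities via the reproducing property $\langle K(\cdot,x), K(\cdot,y)\rangle_{\mathcal{N}_K(\Omega)} = K(x,y)$:

```python
import numpy as np

# Gaussian kernel matrix; the shape parameter ep is an illustrative choice.
def K(x, y, ep=4.0):
    return np.exp(-(ep * (x[:, None] - y[None, :])) ** 2)

X = np.linspace(0, 1, 9)   # data sites X
Y = X[::2]                 # a subset Y of X

# Take f from the native space itself: f = sum_j c_j K(., x_j).
rng = np.random.default_rng(0)
c = rng.standard_normal(len(X))

# Interpolant P^Y f: solve K(Y,Y) b = f(Y), where f(Y) = K(Y,X) c.
b = np.linalg.solve(K(Y, Y), K(Y, X) @ c)

# Native-space inner product of sum_i a_i K(., z_i) and sum_j d_j K(., w_j)
# is a^T K(z,w) d, by the reproducing property.
def ip(a, z, d, w):
    return a @ K(z, w) @ d

# The residual f - P^Y f has centers X and Y with coefficients c and -b.
zr, ar = np.concatenate([X, Y]), np.concatenate([c, -b])

orth = ip(ar, zr, b, Y)                                      # <f - P^Y f, P^Y f>
split = ip(c, X, c, X) - ip(ar, zr, ar, zr) - ip(b, Y, b, Y)
print(orth, split)  # both ~ 0 up to rounding
```

Up to rounding, `orth` vanishes because the residual vanishes on $Y$, and `split` vanishes because it equals $2\,\langle f - P^Y f, P^Y f\rangle$.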
We now consider an iteration on residuals. We pretend to start with our desired interpolant $r_0 = P^X f$ on the entire set $X$. We also pick an appropriate sequence of sets $Y_k \subseteq X$, $k = 0, 1, \ldots$ (we will discuss some possible heuristics for choosing these sets later). Then we iteratively define the residual functions

$$r_{k+1} = r_k - P^{Y_k} r_k, \qquad k = 0, 1, \ldots. \tag{1}$$

Remark: In the actual algorithm below we will only deal with discrete vectors. Thus the vector $r_0$ will be given by the data values (since $P^X f$ is supposed to interpolate $f$ on $X$).
Now the energy splitting identity with $f = r_k$ gives us

$$\|r_k\|^2_{\mathcal{N}_K(\Omega)} = \|r_k - P^{Y_k} r_k\|^2_{\mathcal{N}_K(\Omega)} + \|P^{Y_k} r_k\|^2_{\mathcal{N}_K(\Omega)} \tag{2}$$

or, using the iteration formula (1),

$$\|r_k\|^2_{\mathcal{N}_K(\Omega)} = \|r_{k+1}\|^2_{\mathcal{N}_K(\Omega)} + \|r_k - r_{k+1}\|^2_{\mathcal{N}_K(\Omega)}. \tag{3}$$
We have the following telescoping sum for the partial sums of the norms of the residual updates $P^{Y_k} r_k$:

$$\sum_{k=0}^{M} \|P^{Y_k} r_k\|^2_{\mathcal{N}_K(\Omega)} \overset{(1)}{=} \sum_{k=0}^{M} \|r_k - r_{k+1}\|^2_{\mathcal{N}_K(\Omega)} \overset{(3)}{=} \sum_{k=0}^{M} \left\{ \|r_k\|^2_{\mathcal{N}_K(\Omega)} - \|r_{k+1}\|^2_{\mathcal{N}_K(\Omega)} \right\} = \|r_0\|^2_{\mathcal{N}_K(\Omega)} - \|r_{M+1}\|^2_{\mathcal{N}_K(\Omega)} \le \|r_0\|^2_{\mathcal{N}_K(\Omega)}.$$

Remark: This estimate shows that the sequence of partial sums is monotone increasing and bounded, and therefore convergent even for a poor choice of the sets $Y_k$.
If we can show that the residuals $r_k$ converge to zero, then we would have that the iteratively computed approximation

$$u_{M+1} = \sum_{k=0}^{M} P^{Y_k} r_k = \sum_{k=0}^{M} (r_k - r_{k+1}) = r_0 - r_{M+1} \tag{4}$$

converges to the original interpolant $r_0 = P^X f$.

Remark: The omitted chapters contain iterative methods by which we approximate the interpolant by iterating an approximation method on the full data set. Here we are approximating the interpolant by iterating an interpolation method on adaptively chosen subsets of the data.
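The identity (4) is purely algebraic: the accumulated updates always satisfy $u = r_0 - r$, whether or not the residuals tend to zero. A small Python sketch (illustrative Gaussian kernel and cycling subsets $Y_k$, not taken from the lecture's codes) makes this visible on discrete data:

```python
import numpy as np

# Illustrative Gaussian kernel on 1D points (an assumption for this sketch).
def gauss(x, y, ep=8.0):
    return np.exp(-(ep * (x[:, None] - y[None, :])) ** 2)

X = np.linspace(0, 1, 20)
f = np.sin(2 * np.pi * X)   # data values; on X these are the values of r_0

r = f.copy()
u = np.zeros_like(f)
for k in range(40):
    idx = np.arange(k % 4, len(X), 4)   # cycling 5-point subsets Y_k
    b = np.linalg.solve(gauss(X[idx], X[idx]), r[idx])
    update = gauss(X, X[idx]) @ b       # values of P^{Y_k} r_k on X
    r = r - update                      # residual iteration (1)
    u = u + update                      # partial sum of the updates
# identity (4): u_{M+1} = r_0 - r_{M+1}, independent of convergence
print(np.max(np.abs(u - (f - r))), np.max(np.abs(r)))
```

The first printed number is zero up to rounding for any choice of the $Y_k$; only the second (the residual itself) depends on choosing the subsets well.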
Remark: The present method also has some similarities with the (omitted) multilevel algorithms of Chapter 32. However, here we compute the interpolant $P^X f$ on the set $X$ based on a single kernel $K$. In Chapter 32, the final interpolant is given as the result of using the spaces $\sum_{k=1}^{M} \mathcal{N}_{K_k}(\Omega)$, where $K_k$ is an appropriately scaled version of the kernel $K$. Moreover, the goal in Chapter 32 is to approximate $f$, not $P^X f$.
To prove convergence of the residual iteration, we assume that we can find sets of points $Y_k$ such that at step $k$ at least some fixed percentage of the energy of the residual is picked up by its interpolant, i.e.,

$$\|P^{Y_k} r_k\|^2_{\mathcal{N}_K(\Omega)} \ge \gamma \|r_k\|^2_{\mathcal{N}_K(\Omega)} \tag{5}$$

with some fixed $\gamma \in (0, 1]$. Then (3) and the iteration formula (1) imply

$$\|r_{k+1}\|^2_{\mathcal{N}_K(\Omega)} = \|r_k\|^2_{\mathcal{N}_K(\Omega)} - \|P^{Y_k} r_k\|^2_{\mathcal{N}_K(\Omega)},$$

and therefore

$$\|r_{k+1}\|^2_{\mathcal{N}_K(\Omega)} \le \|r_k\|^2_{\mathcal{N}_K(\Omega)} - \gamma \|r_k\|^2_{\mathcal{N}_K(\Omega)} = (1 - \gamma) \|r_k\|^2_{\mathcal{N}_K(\Omega)}.$$
Applying the bound $\|r_{k+1}\|^2_{\mathcal{N}_K(\Omega)} \le (1 - \gamma) \|r_k\|^2_{\mathcal{N}_K(\Omega)}$ recursively yields

Theorem: If the choice of sets $Y_k$ satisfies $\|P^{Y_k} r_k\|^2_{\mathcal{N}_K(\Omega)} \ge \gamma \|r_k\|^2_{\mathcal{N}_K(\Omega)}$, then the residual iteration (see (4))

$$u_M = \sum_{k=0}^{M-1} P^{Y_k} r_k = r_0 - r_M, \qquad r_{k+1} = r_k - P^{Y_k} r_k, \quad k = 0, 1, \ldots$$

converges linearly in the native space norm. After $M$ steps of iterative refinement there is an error bound

$$\|P^X f - u_M\|^2_{\mathcal{N}_K(\Omega)} = \|r_0 - u_M\|^2_{\mathcal{N}_K(\Omega)} = \|r_M\|^2_{\mathcal{N}_K(\Omega)} \le (1 - \gamma)^M \|r_0\|^2_{\mathcal{N}_K(\Omega)}.$$
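The energy identity behind this theorem can be verified in floating point. In the Python sketch below (illustrative Gaussian kernel; the cycling choice of $Y_k$ is arbitrary and not from the lecture), the residual is kept as a coefficient vector in the span of the $K(\cdot, x_i)$, so its squared native norm is the quadratic form $a^\top A a$, and each step's norm drop equals the energy $\|P^{Y_k} r_k\|^2$ picked up by the partial interpolant:

```python
import numpy as np

# Illustrative Gaussian kernel; A is both the interpolation (Gram) matrix and
# the matrix of native-space inner products of the basis functions K(., x_i).
def gauss(x, y, ep=6.0):
    return np.exp(-(ep * (x[:, None] - y[None, :])) ** 2)

X = np.linspace(0, 1, 12)
A = gauss(X, X)

rng = np.random.default_rng(0)
coef = rng.standard_normal(len(X))   # residual r_0 = sum_j coef_j K(., x_j)

def nsq(a):                          # squared native norm of sum_i a_i K(., x_i)
    return a @ A @ a

norms, pickups = [nsq(coef)], []
for k in range(9):
    Yidx = np.array([k % 3, 4 + k % 3, 8 + k % 3])   # arbitrary cycling Y_k
    rY = A[Yidx, :] @ coef                           # residual values r_k(Y_k)
    b = np.linalg.solve(A[np.ix_(Yidx, Yidx)], rY)   # local interpolant coeffs
    pickups.append(b @ rY)                           # = ||P^{Y_k} r_k||^2
    coef = coef.copy()
    coef[Yidx] -= b                                  # r_{k+1} = r_k - P^{Y_k} r_k
    norms.append(nsq(coef))
print(norms[0], norms[-1])   # the native norms decrease monotonically
```

Each difference `norms[k] - norms[k+1]` matches `pickups[k]` up to rounding, which is exactly identity (2)/(3).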
Remark: This theorem has various limitations. The norm involves the kernel $K$, which makes it difficult to find sets $Y_k$ that satisfy (5). Moreover, the native space norm of the initial residual $r_0$ is not known. A way around these problems is to use an equivalent discrete norm on the set $X$.
Schaback and Wendland establish an estimate of the form

$$\|r_0 - u_M\|_X \le \frac{C}{c} \left(1 - \delta \frac{c^2}{C^2}\right)^{M/2} \|r_0\|_X,$$

where $c$ and $C$ are constants expressing the norm equivalence, i.e.,

$$c \|u\|_X \le \|u\|_{\mathcal{N}_K(\Omega)} \le C \|u\|_X \quad \text{for any } u \in \mathcal{N}_K(\Omega),$$

and where $\delta$ is a constant analogous to $\gamma$ (but based on use of the discrete norm $\|\cdot\|_X$ in (5)). In fact, any discrete $\ell_p$ norm on $X$ can be used. In the implementation below we will use the maximum norm.
In [Schaback and Wendland (2000b)] a basic version of this algorithm, in which the sets $Y_k$ consist of a single point, is described and tested. The resulting approximation yields the best $M$-term approximation to the interpolant.

Remark: This idea is related to the concepts of greedy approximation algorithms (see, e.g., [Temlyakov (1998)]) and sparse approximation (see, e.g., [Girosi (1998)]).
If the set $Y_k$ consists of only a single point $y_k$, then the partial interpolant $P^{Y_k} r_k$ is particularly simple:

$$P^{Y_k} r_k = \beta K(\cdot, y_k) \quad \text{with} \quad \beta = \frac{r_k(y_k)}{K(y_k, y_k)}.$$

This follows immediately from the usual RBF expansion (which consists of only one term here) and the interpolation condition $P^{Y_k} r_k(y_k) = r_k(y_k)$.
The point $y_k$ is picked to be the point in $X$ where the residual is largest, i.e., $|r_k(y_k)| = \|r_k\|_\infty$. This choice of set $Y_k$ certainly satisfies the constraint (5):

$$\|P^{Y_k} r_k\|^2_{\mathcal{N}_K(\Omega)} = \|\beta K(\cdot, y_k)\|^2_{\mathcal{N}_K(\Omega)} = \left(\frac{r_k(y_k)}{K(y_k, y_k)}\right)^2 \|K(\cdot, y_k)\|^2_{\mathcal{N}_K(\Omega)} \ge \gamma \|r_k\|^2_{\mathcal{N}_K(\Omega)}, \qquad 0 < \gamma \le 1.$$

Here we require $K(\cdot, y_k) \le K(y_k, y_k)$, which is certainly true for positive definite translation invariant kernels (cf. Chapter 3). However, in general we only know that $K(x, y)^2 \le K(x, x) K(y, y)$ (see [Berlinet and Thomas-Agnan (2004)]). The interpolation problem is (approximately) solved without having to invert any linear systems.
Algorithm (Greedy one-point)
Input: data locations $X$, associated values of $f$, tolerance tol $> 0$
Set initial residual $r_0|_X = (P^X f)|_X = f|_X$, initialize $u_0 = 0$, $e = \infty$, $k = 0$
Choose a starting point $y_k \in X$
While $e >$ tol do
    Set $\beta = \dfrac{r_k(y_k)}{K(y_k, y_k)}$
    For $i = 1, \ldots, N$ do
        $r_{k+1}(x_i) = r_k(x_i) - \beta K(x_i, y_k)$
        $u_{k+1}(x_i) = u_k(x_i) + \beta K(x_i, y_k)$
    end
    Find $e = \max_X |r_{k+1}|$ and the point $y_{k+1}$ where it occurs
    Increment $k = k + 1$
end
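The pseudocode above can be sketched directly. The following hedged one-dimensional Python version is only an illustration (the lecture's own implementation below is the 2D MATLAB program; the kernel, data, and parameters here are assumptions):

```python
import numpy as np

# Illustrative 1D analogue of the greedy one-point algorithm.
def gauss(x, y, ep=15.0):
    return np.exp(-(ep * (x[:, None] - y[None, :])) ** 2)

X = np.linspace(0, 1, 100)
fvals = np.sin(2 * np.pi * X) + 0.5 * X   # data values; r_0 on X equals f on X

res = fvals.copy()
u = np.zeros_like(res)
tol, kmax = 1e-5, 1000
ykidx = len(X) // 2                       # starting point y_0
e, k = np.inf, 0
while e > tol and k < kmax:
    yk = X[ykidx:ykidx + 1]
    beta = res[ykidx] / gauss(yk, yk)[0, 0]   # beta = r_k(y_k) / K(y_k, y_k)
    col = gauss(X, yk)[:, 0]                  # K(x_i, y_k) for all i
    res = res - beta * col                    # update residual
    u = u + beta * col                        # update approximation
    ykidx = int(np.argmax(np.abs(res)))       # next greedy point
    e = np.abs(res[ykidx])
    k += 1
print(k, e)
```

Note that no linear system is ever solved; each step costs one kernel column evaluation, exactly as in the pseudocode.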
Remark: It is important to realize that in our MATLAB implementation we never actually compute the initial residual $r_0 = P^X f$. All we require are the values of $r_0$ on the grid $X$ of data sites. However, since $P^X f|_X = f|_X$, the values $r_0(x_i)$ are given by the interpolation data $f(x_i)$ (see line 5 of the code). Moreover, since the sets $Y_k$ are subsets of $X$, the value $r_k(y_k)$ required to determine $\beta$ is actually one of the current residual values (see line 10 of the code).
Remark: We use DistanceMatrix together with rbf to compute both $K(y_k, y_k)$ (on lines 9 and 10) and the values $K(x_i, y_k)$ needed for the updates of the residual $r_{k+1}$ and the approximation $u_{k+1}$ on lines 11–15. Note that the matrices DM_data, IM, DM_res, RM, DM_eval, EM are only column vectors (or scalars) since only one center, $y_k$, is involved.
Remark: The algorithm demands that we compute the residuals $r_k$ on the data sites. The partial approximants $u_k$ to the interpolant, on the other hand, can be evaluated anywhere. If we did this at the data sites, then we would be required to use a plotting routine that differs from our usual one (such as trisurf built on a triangulation of the data sites obtained with the help of delaunayn). We instead follow the same procedure as in all of our other programs, i.e., we evaluate $u_k$ on a grid of equally spaced points. This has been implemented on lines 13–15 of the program. Note that the updating procedure has been vectorized in MATLAB, allowing us to avoid the for-loop over $i$ in the algorithm.
Program (RBFGreedyOnePoint2D.m)
  1  rbf = @(e,r) exp(-(e*r).^2); ep = 5.5;
  2  N = 16641; dsites = CreatePoints(N,2,'h');
  3  neval = 40; epoints = CreatePoints(neval^2,2,'u');
  4  tol = 1e-5; kmax = 1000;
  5  res = testfunctionsD(dsites); u = 0;
  6  k = 1; maxres(k) = 999999;  % any value > tol to enter the loop
  7  ykidx = (N+1)/2; yk(k,:) = dsites(ykidx,:);
  8  while (maxres(k) > tol && k < kmax)
  9     DM_data = DistanceMatrix(yk(k,:),yk(k,:));
 10     IM = rbf(ep,DM_data); beta = res(ykidx)/IM;
 11     DM_res = DistanceMatrix(dsites,yk(k,:));
 12     RM = rbf(ep,DM_res);
 13     DM_eval = DistanceMatrix(epoints,yk(k,:));
 14     EM = rbf(ep,DM_eval);
 15     res = res - beta*RM; u = u + beta*EM;
 16     [maxres(k+1), ykidx] = max(abs(res));
 17     yk(k+1,:) = dsites(ykidx,:); k = k + 1;
 18  end
 19  exact = testfunctionsD(epoints);
 20  rms_err = norm(u-exact)/neval
To illustrate the greedy one-point algorithm we perform two experiments. Both tests use data obtained by sampling Franke's function at $N = 16641$ Halton points in $[0,1]^2$. Test 1 is based on Gaussians; Test 2 uses inverse multiquadrics. For both tests we use the same shape parameter $\varepsilon = 5.5$.
Figure: 1000 selected points and residual for the greedy one-point algorithm with Gaussian RBFs and N = 16641 data points.
Figure: Fits of Franke's function for the greedy one-point algorithm with Gaussian RBFs and N = 16641 data points. Top left to bottom right: 1 point, 2 points, 4 points, final fit with 1000 points.
Remark: In order to obtain our approximate interpolants we used a tolerance of $10^{-5}$ along with an additional upper limit of kmax = 1000 on the number of iterations. For both tests the algorithm uses up all 1000 iterations, so the final maximum residual maxres still lies above the tolerance for both Gaussians and inverse MQs. In both cases several multiple point selections occurred. Contrary to interpolation problems based on the solution of a linear system, multiple point selections do not pose a problem here.
Figure: 1000 selected points and residual for the greedy one-point algorithm with IMQ RBFs and N = 16641 data points.
Figure: Fits of Franke's function for the greedy one-point algorithm with IMQ RBFs and N = 16641 data points. Top left to bottom right: 1 point, 2 points, 4 points, final fit with 1000 points.
Remark: We note that the inverse multiquadrics have a more global influence than the Gaussians (for the same shape parameter). This effect is clearly evident in the first few approximations to the interpolants in the figures. From the last figure we see that the greedy algorithm enforces interpolation of the data only on the most recent set $Y_k$ (i.e., for the one-point algorithm studied here, only at a single point). If one wants to maintain the interpolation achieved in previous iterations, then the sets $Y_k$ should be nested. This, however, would have a significant effect on the execution time of the algorithm, since the matrices at each step would increase in size.
Remark: One advantage of this very simple algorithm is that no linear systems need to be solved. This allows us to approximate the interpolants for large data sets, even for globally supported kernels, and also for small values of $\varepsilon$ (and therefore an associated ill-conditioned interpolation matrix). One should not expect too much in this case, however, as the results in the following figure show, where we used a value of $\varepsilon = 0.1$ for the shape parameter. A lot of smoothing occurs, so the convergence to the RBF interpolant is very slow.
Figure: 1000 selected points (only 20 of them distinct) and fit of Franke's function for the greedy one-point algorithm with flat Gaussian RBFs ($\varepsilon = 0.1$) and N = 16641 data points.
Remark: In the pseudocode of the algorithm no matrix-vector multiplications are required. However, MATLAB allows for a vectorization of the for-loop, which does result in two matrix-vector multiplications. For practical situations, e.g., for smooth kernels and densely distributed points in $X$, the convergence can be rather slow. The simple greedy algorithm described above is extended in [Schaback and Wendland (2000b)] to a version that adaptively uses kernels of varying scales.
The Faul-Powell Algorithm
102 The Faul-Powell Algorithm Another iterative algorithm was suggested in [Faul and Powell (1999), Faul and Powell (2000)]. From our earlier discussions we know that it is possible to express a kernel interpolant in terms of cardinal functions uj, j = 1,..., N, i.e., P f (x) = N f (x j )uj (x). j=1 The basic idea of the Faul-Powell algorithm is to use approximate cardinal functions Ψ j instead. Of course, this will only give an approximate value for the interpolant, and therefore an iteration on the residuals is suggested to improve the accuracy of this approximation. fasshauer@iit.edu MATH 590 Chapter 33 34
The approximate cardinal functions Ψ_j, j = 1, ..., N, are determined as linear combinations of the basis functions K(·, x_l) for the interpolant, i.e.,

    Ψ_j = Σ_{l ∈ L_j} b_{jl} K(·, x_l),    (6)

where L_j is an index set consisting of n (n ≈ 50) indices that are used to determine the approximate cardinal function.

Example
The n nearest neighbors of x_j will usually do.

Remark
The basic philosophy of this algorithm is very similar to that of the omitted fixed level iteration of Chapter 31, where approximate MLS generating functions were used as approximate cardinal functions.
The Faul-Powell algorithm can be interpreted as a Krylov subspace method.
Remark
In general, the choice of index sets allows much freedom, and this is the reason why we include the algorithm in this chapter on adaptive iterative methods.
As pointed out at the end of this section, there is a certain duality between the Faul-Powell algorithm and the greedy algorithm of the previous section.
For every j = 1, ..., N, the coefficients b_{jl} are found as the solution of the (relatively small) n × n linear system

    Ψ_j(x_i) = δ_{ij},   i ∈ L_j.    (7)

These approximate cardinal functions are computed in a pre-processing step.

In its simplest form the residual iteration can be formulated as

    u^{(0)}(x) = Σ_{j=1}^{N} f(x_j) Ψ_j(x),
    u^{(k+1)}(x) = u^{(k)}(x) + Σ_{j=1}^{N} [f(x_j) - u^{(k)}(x_j)] Ψ_j(x),   k = 0, 1, ....
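Once the values Ψ_j(x_i) are tabulated, each step of this residual iteration is a single matrix-vector product. The following NumPy sketch (again Python instead of MATLAB; the 1D data, the C^0 Matérn kernel, and the neighbor count are illustrative choices, and convergence is not guaranteed for arbitrary kernels and index sets) applies it to scattered 1D data, where nearest-neighbor index sets make the approximate cardinal functions quite accurate:

```python
import numpy as np

def kernel(r, eps=3.0):
    return np.exp(-eps * r)        # C^0 Matern kernel

rng = np.random.default_rng(1)
X = rng.random((40, 1))                        # scattered 1D data sites
f = np.sin(2 * np.pi * X[:, 0])                # data values

D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
K = kernel(D)

# tabulate Psi[i, j] = Psi_j(x_i) from n nearest neighbors, cf. (6) and (7)
n, N = 12, len(X)
Psi = np.zeros((N, N))
for j in range(N):
    L = np.argsort(D[j])[:n]
    b = np.linalg.solve(K[np.ix_(L, L)], (L == j).astype(float))
    Psi[:, j] = K[:, L] @ b

# residual iteration: u^{(k+1)} = u^{(k)} + sum_j [f(x_j) - u^{(k)}(x_j)] Psi_j
u = Psi @ f                                    # u^{(0)}
for k in range(25):
    u = u + Psi @ (f - u)

print(np.max(np.abs(f - u)))                   # max residual, close to 0 here
```

On the data sites the iteration reads r^{(k+1)} = (I - Ψ) r^{(k)}, so it converges exactly when the approximate cardinal matrix Ψ is close enough to the identity that the spectral radius of I - Ψ is below one.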
Instead of adding the contribution of all approximate cardinal functions at the same time, this is done in a three-step process in the Faul-Powell algorithm.

To this end, we choose index sets L_j, j = 1, ..., N - n, such that

    L_j ⊆ {j, j + 1, ..., N},

while making sure that j ∈ L_j.

Remark
If one wants to use this algorithm to approximate the interpolant based on conditionally positive definite kernels of order m, then one needs to ensure that the corresponding centers form an (m - 1)-unisolvent set and append a polynomial to the local expansion (6).
Step 1
We define u_0^{(k)} = u^{(k)}, and then iterate

    u_j^{(k)} = u_{j-1}^{(k)} + θ_j^{(k)} Ψ_j,   j = 1, ..., N - n,    (8)

with

    θ_j^{(k)} = ⟨P_f - u_{j-1}^{(k)}, Ψ_j⟩_{N_K(Ω)} / ⟨Ψ_j, Ψ_j⟩_{N_K(Ω)}.    (9)

Remark
The stepsize θ_j^{(k)} is chosen so that the native space best approximation to the residual P_f - u_{j-1}^{(k)} from the space spanned by the approximate cardinal function Ψ_j is added.
Step 1 (cont.)
Using the representation Ψ_j = Σ_{l ∈ L_j} b_{jl} K(·, x_l), the reproducing kernel property of K, and the (local) cardinality property Ψ_j(x_i) = δ_{ij}, i ∈ L_j, we can calculate the denominator of (9) as

    ⟨Ψ_j, Ψ_j⟩_{N_K(Ω)} = ⟨Ψ_j, Σ_{l ∈ L_j} b_{jl} K(·, x_l)⟩_{N_K(Ω)}
                        = Σ_{l ∈ L_j} b_{jl} ⟨Ψ_j, K(·, x_l)⟩_{N_K(Ω)}
                        = Σ_{l ∈ L_j} b_{jl} Ψ_j(x_l)
                        = b_{jj},

since we have j ∈ L_j by construction of the index set L_j.
Step 1 (cont.)
Similarly, we get for the numerator

    ⟨P_f - u_{j-1}^{(k)}, Ψ_j⟩_{N_K(Ω)} = ⟨P_f - u_{j-1}^{(k)}, Σ_{l ∈ L_j} b_{jl} K(·, x_l)⟩_{N_K(Ω)}
                                        = Σ_{l ∈ L_j} b_{jl} ⟨P_f - u_{j-1}^{(k)}, K(·, x_l)⟩_{N_K(Ω)}
                                        = Σ_{l ∈ L_j} b_{jl} (P_f - u_{j-1}^{(k)})(x_l)
                                        = Σ_{l ∈ L_j} b_{jl} (f(x_l) - u_{j-1}^{(k)}(x_l)).

Therefore (8) and (9) can be written as

    u_j^{(k)} = u_{j-1}^{(k)} + (Ψ_j / b_{jj}) Σ_{l ∈ L_j} b_{jl} (f(x_l) - u_{j-1}^{(k)}(x_l)),   j = 1, ..., N - n.
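These two identities are easy to check numerically. The sketch below (Python/NumPy; the kernel, the point set, and the intermediate iterate u are placeholder choices) uses the fact that for two finite kernel expansions with coefficient vectors c and d the native space inner product is c^T K d, where K is the kernel matrix:

```python
import numpy as np

def kernel(r, eps=2.0):
    return np.exp(-eps * r)          # C^0 Matern kernel, strictly positive definite

rng = np.random.default_rng(2)
X = rng.random((25, 2))
f = np.cos(X[:, 0] + X[:, 1])

D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
K = kernel(D)

# native space inner product of two kernel expansions with coefficient
# vectors c and d:  <sum_i c_i K(., x_i), sum_l d_l K(., x_l)> = c^T K d
ip = lambda c, d: c @ K @ d

# one approximate cardinal function Psi_j from n nearest neighbors
j, n = 3, 8
L = np.argsort(D[j])[:n]
b = np.zeros(len(X))
b[L] = np.linalg.solve(K[np.ix_(L, L)], (L == j).astype(float))

c_interp = np.linalg.solve(K, f)         # coefficients of the interpolant P_f
c_u = 0.1 * rng.standard_normal(len(X))  # coefficients of some iterate u
u_at_X = K @ c_u                         # its values at the data sites

# denominator of (9): <Psi_j, Psi_j> = b_jj
print(np.isclose(ip(b, b), b[j]))                                  # True
# numerator of (9): <P_f - u, Psi_j> = sum_l b_jl (f(x_l) - u(x_l))
print(np.isclose(ip(c_interp - c_u, b), b[L] @ (f - u_at_X)[L]))   # True
```

This is the computational payoff of the derivation: the stepsize θ_j^{(k)} needs only the residual values at the n points in L_j and the single number b_{jj}, with no native space inner products evaluated explicitly.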
Step 2
Next we interpolate the residual on the remaining n points (collected via the index set L*).
Thus, we find a function v^{(k)} in span{K(·, x_j) : j ∈ L*} such that

    v^{(k)}(x_i) = f(x_i) - u_{N-n}^{(k)}(x_i),   i ∈ L*,

and the approximation is updated, i.e.,

    u^{(k+1)} = u_{N-n}^{(k)} + v^{(k)}.
Step 3
Finally, the residuals are updated, i.e.,

    r_i^{(k+1)} = f(x_i) - u^{(k+1)}(x_i),   i = 1, ..., N.    (10)

Remark
The outer iteration (on k) is now repeated until the largest of these residuals is small enough.
Algorithm (Pre-processing step)
  Choose n
  For 1 ≤ j ≤ N - n do
    Determine the index set L_j
    Find the coefficients b_{jl} of the approximate cardinal function Ψ_j by solving Ψ_j(x_i) = δ_{ij}, i ∈ L_j
  end
Algorithm (Faul-Powell)
  Input: data locations X, associated values of f, tolerance tol > 0
  Perform pre-processing step
  Initialize: k = 0, u_0^{(k)} = 0, r_i^{(k)} = f(x_i), i = 1, ..., N, e = max_{i=1,...,N} |r_i^{(k)}|
  While e > tol do
    Update u_j^{(k)} = u_{j-1}^{(k)} + (Ψ_j / b_{jj}) Σ_{l ∈ L_j} b_{jl} (f(x_l) - u_{j-1}^{(k)}(x_l)), 1 ≤ j ≤ N - n
    Solve the interpolation problem v^{(k)}(x_i) = f(x_i) - u_{N-n}^{(k)}(x_i), i ∈ L*
    Update the approximation u_0^{(k+1)} = u_{N-n}^{(k)} + v^{(k)}
    Compute new residuals r_i^{(k+1)} = f(x_i) - u_0^{(k+1)}(x_i), i = 1, ..., N
    Set new value for e = max_{i=1,...,N} |r_i^{(k+1)}|
    Increment k = k + 1
  end
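Putting the pre-processing step and Steps 1-3 together, the whole iteration can be sketched compactly in NumPy (Python rather than MATLAB; the kernel, the nearest-neighbor construction of L_j, and the choice of L* as the last n points are illustrative assumptions, not the authors' exact implementation):

```python
import numpy as np

def kernel(r, eps=2.0):
    return np.exp(-eps * r)        # strictly positive definite Matern kernel

def faul_powell(X, f, n=10, tol=1e-8, maxit=200):
    N = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    K = kernel(D)

    # pre-processing: L_j subset of {j, ..., N-1} with j in L_j, plus the
    # coefficients b_{jl} of the approximate cardinal functions Psi_j
    Lsets, coeffs = [], []
    for j in range(N - n):
        cand = np.arange(j, N)
        L = cand[np.argsort(D[j, cand])[:n]]     # n nearest among candidates
        b = np.linalg.solve(K[np.ix_(L, L)], (L == j).astype(float))
        Lsets.append(L)
        coeffs.append(b)
    Lstar = np.arange(N - n, N)                  # remaining n points

    u = np.zeros(N)                              # iterate at the data sites
    for k in range(maxit):
        # Step 1: sweep through the approximate cardinal functions
        for j in range(N - n):
            L, b = Lsets[j], coeffs[j]
            theta = b @ (f - u)[L] / b[0]        # b[0] = b_jj (j is first in L)
            u = u + theta * (K[:, L] @ b)        # add theta * Psi_j
        # Step 2: interpolate the residual exactly on the points in L*
        c = np.linalg.solve(K[np.ix_(Lstar, Lstar)], (f - u)[Lstar])
        u = u + K[:, Lstar] @ c
        # Step 3: update the residuals and test for convergence
        e = np.max(np.abs(f - u))
        if e <= tol:
            break
    return u, e

rng = np.random.default_rng(3)
X = rng.random((60, 2))
f = np.sin(np.pi * X[:, 0]) * X[:, 1]
u, e = faul_powell(X, f)
print(e)                                        # final maximum residual
```

Note that each Step 1 update is an orthogonal projection in the native space, so the native space norm of the error cannot increase from sweep to sweep; this is the mechanism behind the convergence proof mentioned below.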
Remark
Faul and Powell prove that this algorithm converges to the solution of the original interpolation problem.
One needs to make sure that the residuals are evaluated efficiently by using, e.g., a fast multipole expansion, fast Fourier transform, or compactly supported kernels.
Remark
In its most basic form the Krylov subspace algorithm of Faul and Powell can also be explained as a dual approach to the greedy residual iteration algorithm of Schaback and Wendland.
Instead of defining appropriate sets of points Y_k, in the Faul and Powell algorithm one picks certain subspaces U_k of the native space.
In particular, if U_k is the one-dimensional space U_k = span{Ψ_k} (where Ψ_k is a local approximation to the cardinal function) we get the Schaback-Wendland algorithm described above.
For more details see [Schaback and Wendland (2000b)].
Implementation of this algorithm is omitted.
References

Berlinet, A. and Thomas-Agnan, C. (2004). Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer, Dordrecht.
Buhmann, M. D. (2003). Radial Basis Functions: Theory and Implementations. Cambridge University Press.
Fasshauer, G. E. (2007). Meshfree Approximation Methods with MATLAB. World Scientific Publishers.
Faul, A. C. and Powell, M. J. D. (1999). Proof of convergence of an iterative technique for thin plate spline interpolation in two dimensions. Adv. Comput. Math. 11.
Faul, A. C. and Powell, M. J. D. (2000). Krylov subspace methods for radial basis function interpolation. In Numerical Analysis 1999 (Dundee), Chapman & Hall/CRC (Boca Raton, FL).
Girosi, F. (1998). An equivalence between sparse approximation and support vector machines. Neural Computation 10.
Higham, D. J. and Higham, N. J. (2005). MATLAB Guide. SIAM (2nd ed.), Philadelphia.
Iske, A. (2004). Multiresolution Methods in Scattered Data Modelling. Lecture Notes in Computational Science and Engineering 37, Springer Verlag (Berlin).
Schaback, R. and Wendland, H. (2000a). Numerical techniques based on radial basis functions. In Curve and Surface Fitting: Saint-Malo 1999, A. Cohen, C. Rabut, and L. L. Schumaker (eds.), Vanderbilt University Press (Nashville, TN).
Schaback, R. and Wendland, H. (2000b). Adaptive greedy techniques for approximate solution of large RBF systems. Numer. Algorithms 24.
Temlyakov, V. N. (1998). The best m-term approximation and greedy algorithms. Adv. in Comp. Math. 8.
Wahba, G. (1990). Spline Models for Observational Data. CBMS-NSF Regional Conference Series in Applied Mathematics 59, SIAM (Philadelphia).
Wendland, H. (2005a). Scattered Data Approximation. Cambridge University Press (Cambridge).
More informationINTEGRATION BY RBF OVER THE SPHERE
INTEGRATION BY RBF OER THE SPHERE ALISE SOMMARIA AND ROBERT S. WOMERSLEY Abstract. In this paper we consider numerical integration over the sphere by radial basis functions (RBF). After a brief introduction
More informationThe Conjugate Gradient Method
The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large
More informationConsistency Estimates for gfd Methods and Selection of Sets of Influence
Consistency Estimates for gfd Methods and Selection of Sets of Influence Oleg Davydov University of Giessen, Germany Localized Kernel-Based Meshless Methods for PDEs ICERM / Brown University 7 11 August
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Accuracy and Optimality of RKHS Methods Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2014 fasshauer@iit.edu MATH 590 1 Outline 1 Introduction
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 4: The Connection to Kriging Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2014 fasshauer@iit.edu MATH 590 Chapter 4 1 Outline
More informationDISCRETE CDF 9/7 WAVELET TRANSFORM FOR FINITE-LENGTH SIGNALS
DISCRETE CDF 9/7 WAVELET TRANSFORM FOR FINITE-LENGTH SIGNALS D. Černá, V. Finěk Department of Mathematics and Didactics of Mathematics, Technical University in Liberec Abstract Wavelets and a discrete
More informationCONVERGENCE RATES OF COMPACTLY SUPPORTED RADIAL BASIS FUNCTION REGULARIZATION
1 CONVERGENCE RATES OF COMPACTLY SUPPORTED RADIAL BASIS FUNCTION REGULARIZATION Yi Lin and Ming Yuan University of Wisconsin-Madison and Georgia Institute of Technology Abstract: Regularization with radial
More informationApplications of Polyspline Wavelets to Astronomical Image Analysis
VIRTUAL OBSERVATORY: Plate Content Digitization, Archive Mining & Image Sequence Processing edited by M. Tsvetkov, V. Golev, F. Murtagh, and R. Molina, Heron Press, Sofia, 25 Applications of Polyspline
More informationReproducing Kernel Hilbert Spaces
Reproducing Kernel Hilbert Spaces Lorenzo Rosasco 9.520 Class 03 February 11, 2009 About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods The Connection to Green s Kernels Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2014 fasshauer@iit.edu MATH 590 1 Outline 1 Introduction
More informationStable Parameterization Schemes for Gaussians
Stable Parameterization Schemes for Gaussians Michael McCourt Department of Mathematical and Statistical Sciences University of Colorado Denver ICOSAHOM 2014 Salt Lake City June 24, 2014 michael.mccourt@ucdenver.edu
More informationA numerical study of a technique for shifting eigenvalues of radial basis function differentiation matrices
A numerical study of a technique for shifting eigenvalues of radial basis function differentiation matrices Scott A. Sarra Marshall University and Alfa R.H. Heryudono University of Massachusetts Dartmouth
More informationSlide05 Haykin Chapter 5: Radial-Basis Function Networks
Slide5 Haykin Chapter 5: Radial-Basis Function Networks CPSC 636-6 Instructor: Yoonsuck Choe Spring Learning in MLP Supervised learning in multilayer perceptrons: Recursive technique of stochastic approximation,
More informationStability and Lebesgue constants in RBF interpolation
constants in RBF Stefano 1 1 Dept. of Computer Science, University of Verona http://www.sci.univr.it/~demarchi Göttingen, 20 September 2008 Good Outline Good Good Stability is very important in numerical
More informationRational Krylov methods for linear and nonlinear eigenvalue problems
Rational Krylov methods for linear and nonlinear eigenvalue problems Mele Giampaolo mele@mail.dm.unipi.it University of Pisa 7 March 2014 Outline Arnoldi (and its variants) for linear eigenproblems Rational
More informationAN ELEMENTARY PROOF OF THE OPTIMAL RECOVERY OF THE THIN PLATE SPLINE RADIAL BASIS FUNCTION
J. KSIAM Vol.19, No.4, 409 416, 2015 http://dx.doi.org/10.12941/jksiam.2015.19.409 AN ELEMENTARY PROOF OF THE OPTIMAL RECOVERY OF THE THIN PLATE SPLINE RADIAL BASIS FUNCTION MORAN KIM 1 AND CHOHONG MIN
More informationMultiscale RBF collocation for solving PDEs on spheres
Multiscale RBF collocation for solving PDEs on spheres Q. T. Le Gia I. H. Sloan H. Wendland Received: date / Accepted: date Abstract In this paper, we discuss multiscale radial basis function collocation
More informationNeural Networks Lecture 4: Radial Bases Function Networks
Neural Networks Lecture 4: Radial Bases Function Networks H.A Talebi Farzaneh Abdollahi Department of Electrical Engineering Amirkabir University of Technology Winter 2011. A. Talebi, Farzaneh Abdollahi
More informationAtmospheric Dynamics with Polyharmonic Spline RBFs
Photos placed in horizontal position with even amount of white space between photos and header Atmospheric Dynamics with Polyharmonic Spline RBFs Greg Barnett Sandia National Laboratories is a multimission
More informationKey words. Radial basis function, scattered data interpolation, hierarchical matrices, datasparse approximation, adaptive cross approximation
HIERARCHICAL MATRIX APPROXIMATION FOR KERNEL-BASED SCATTERED DATA INTERPOLATION ARMIN ISKE, SABINE LE BORNE, AND MICHAEL WENDE Abstract. Scattered data interpolation by radial kernel functions leads to
More informationD. Shepard, Shepard functions, late 1960s (application, surface modelling)
Chapter 1 Introduction 1.1 History and Outline Originally, the motivation for the basic meshfree approximation methods (radial basis functions and moving least squares methods) came from applications in
More informationReproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto
Reproducing Kernel Hilbert Spaces 9.520 Class 03, 15 February 2006 Andrea Caponnetto About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert
More informationPositive Definite Kernels: Opportunities and Challenges
Positive Definite Kernels: Opportunities and Challenges Michael McCourt Department of Mathematical and Statistical Sciences University of Colorado, Denver CUNY Mathematics Seminar CUNY Graduate College
More informationSolutions and Notes to Selected Problems In: Numerical Optimzation by Jorge Nocedal and Stephen J. Wright.
Solutions and Notes to Selected Problems In: Numerical Optimzation by Jorge Nocedal and Stephen J. Wright. John L. Weatherwax July 7, 2010 wax@alum.mit.edu 1 Chapter 5 (Conjugate Gradient Methods) Notes
More informationA NOTE ON MATRIX REFINEMENT EQUATIONS. Abstract. Renement equations involving matrix masks are receiving a lot of attention these days.
A NOTE ON MATRI REFINEMENT EQUATIONS THOMAS A. HOGAN y Abstract. Renement equations involving matrix masks are receiving a lot of attention these days. They can play a central role in the study of renable
More informationMIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications. Class 19: Data Representation by Design
MIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications Class 19: Data Representation by Design What is data representation? Let X be a data-space X M (M) F (M) X A data representation
More informationBiorthogonal Spline Type Wavelets
PERGAMON Computers and Mathematics with Applications 0 (00 1 0 www.elsevier.com/locate/camwa Biorthogonal Spline Type Wavelets Tian-Xiao He Department of Mathematics and Computer Science Illinois Wesleyan
More informationComplexity and regularization issues in kernel-based learning
Complexity and regularization issues in kernel-based learning Marcello Sanguineti Department of Communications, Computer, and System Sciences (DIST) University of Genoa - Via Opera Pia 13, 16145 Genova,
More informationKarhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques
Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,
More informationMaryam Pazouki ½, Robert Schaback
ÓÖ Ã ÖÒ Ð ËÔ Maryam Pazouki ½, Robert Schaback ÁÒ Ø ØÙØ Ö ÆÙÑ Ö ÙÒ Ò Û Ò Ø Å Ø Ñ Ø ÍÒ Ú Ö ØØ ØØ Ò Ò ÄÓØÞ ØÖ ½ ¹½ ¼ ØØ Ò Ò ÖÑ ÒÝ Abstract Since it is well known [4] that standard bases of kernel translates
More informationA Posteriori Error Bounds for Meshless Methods
A Posteriori Error Bounds for Meshless Methods Abstract R. Schaback, Göttingen 1 We show how to provide safe a posteriori error bounds for numerical solutions of well-posed operator equations using kernel
More informationMATH 350: Introduction to Computational Mathematics
MATH 350: Introduction to Computational Mathematics Chapter IV: Locating Roots of Equations Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2011 fasshauer@iit.edu
More informationMATH 350: Introduction to Computational Mathematics
MATH 350: Introduction to Computational Mathematics Chapter V: Least Squares Problems Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2011 fasshauer@iit.edu MATH
More informationA New Trust Region Algorithm Using Radial Basis Function Models
A New Trust Region Algorithm Using Radial Basis Function Models Seppo Pulkkinen University of Turku Department of Mathematics July 14, 2010 Outline 1 Introduction 2 Background Taylor series approximations
More informationReproducing Kernel Hilbert Spaces
9.520: Statistical Learning Theory and Applications February 10th, 2010 Reproducing Kernel Hilbert Spaces Lecturer: Lorenzo Rosasco Scribe: Greg Durrett 1 Introduction In the previous two lectures, we
More information5.6 Nonparametric Logistic Regression
5.6 onparametric Logistic Regression Dmitri Dranishnikov University of Florida Statistical Learning onparametric Logistic Regression onparametric? Doesnt mean that there are no parameters. Just means that
More informationKrylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms
Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms 2. First Results and Algorithms Michael Eiermann Institut für Numerische Mathematik und Optimierung Technische
More informationRBF-FD Approximation to Solve Poisson Equation in 3D
RBF-FD Approximation to Solve Poisson Equation in 3D Jagadeeswaran.R March 14, 2014 1 / 28 Overview Problem Setup Generalized finite difference method. Uses numerical differentiations generated by Gaussian
More informationOn the Numerical Evaluation of Fractional Sobolev Norms. Carsten Burstedde. no. 268
On the Numerical Evaluation of Fractional Sobolev Norms Carsten Burstedde no. 268 Diese Arbeit ist mit Unterstützung des von der Deutschen Forschungsgemeinschaft getragenen Sonderforschungsbereiches 611
More informationFitting Linear Statistical Models to Data by Least Squares I: Introduction
Fitting Linear Statistical Models to Data by Least Squares I: Introduction Brian R. Hunt and C. David Levermore University of Maryland, College Park Math 420: Mathematical Modeling February 5, 2014 version
More information