Stability and Lebesgue constants in RBF interpolation
1 Lebesgue constants in RBF interpolation. Stefano De Marchi, Dept. of Computer Science, University of Verona. Göttingen, 20 September 2008
2 Outline
3 Stability is very important in numerical analysis: it is desirable in numerical computations and depends on the accuracy of the algorithms [4, Higham's book]. In polynomial interpolation, the stability of the process can be measured by the so-called Lebesgue constant, i.e. the norm of the projection operator from C(K) (equipped with the uniform norm) onto P_n(K) ⊂ C(K), K ⊂ R^n, which also estimates the interpolation error. The Lebesgue constant depends on the interpolation points via the fundamental Lagrange (or cardinal) polynomials.
4 Our approach. 1. Near-optimal points [DeM. RSMT03; DeM., Schaback, Wendland AiCM05]. 2. Cardinal function bounds [DeM., Schaback AiCM08]. 3. Lebesgue constant estimates and growth [DeM., Schaback AiCM08; Bos, DeM. EJA08 (1d)].
5 Notation. X = {x_1, ..., x_N} ⊂ Ω ⊂ R^d, distinct data sites; {f_1, ..., f_N}, data values; Φ : Ω × Ω → R a symmetric positive definite kernel. The RBF interpolant is

s_{f,X} := Σ_{j=1}^N α_j Φ(·, x_j).  (1)

Letting V_X = span{Φ(·, x) : x ∈ X}, s_{f,X} can be written in terms of the cardinal functions u_j ∈ V_X, u_j(x_k) = δ_{jk}, i.e.

s_{f,X} = Σ_{j=1}^N f(x_j) u_j.  (2)
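As a minimal numerical sketch of (1)-(2) (not part of the original slides): the coefficients α_j come from solving the linear system with the kernel matrix A_{kj} = Φ(x_k, x_j). A 1-d Gaussian kernel is assumed here purely for illustration; all names are illustrative.

```python
import numpy as np

def gauss_kernel(x, y, eps=2.0):
    # Phi(x, y) = exp(-(eps |x - y|)^2), a symmetric positive definite kernel
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

def rbf_interpolant(X, f, kernel):
    # Solve A alpha = f with A_{kj} = Phi(x_k, x_j) for the coefficients in (1)
    alpha = np.linalg.solve(kernel(X, X), f)
    return lambda x: kernel(x, X) @ alpha

# The interpolant reproduces the data at the sites, as (2) requires
X = np.linspace(-1.0, 1.0, 7)
f = np.sin(np.pi * X)
s = rbf_interpolant(X, f, gauss_kernel)
assert np.allclose(s(X), f, atol=1e-6)
```

The same solve also yields the cardinal functions: u_j(x) is the j-th entry of A^{-1} k(x) with k(x)_j = Φ(x, x_j).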
6 Error estimates. Take V_Ω := span{Φ(·, x) : x ∈ Ω}, on which Φ is the reproducing kernel; clos(V_Ω) = N_Φ(Ω), the native Hilbert space of Φ. For f ∈ N_Φ(Ω), using (2), the reproducing kernel property of Φ on V_Ω, and the Cauchy-Schwarz inequality, we get

|f(x) − s_{f,X}(x)| ≤ P_{Φ,X}(x) ‖f‖_Φ,  (3)

where P_{Φ,X} is the power function.
7 A power function expression. Letting det(A_{Φ,X}(y_1, ..., y_N)) = det(Φ(y_i, x_j))_{1≤i,j≤N}, then

u_k(x) = det(A_{Φ,X}(x_1, ..., x_{k−1}, x, x_{k+1}, ..., x_N)) / det(A_{Φ,X}(x_1, ..., x_N)).  (4)

Letting u_j(x), 0 ≤ j ≤ N, with u_0(x) := −1 and x_0 = x, then

P²_{Φ,X}(x) = uᵀ A_{Φ,Y} u,  (5)

where uᵀ = (−1, u_1(x), ..., u_N(x)) and Y = X ∪ {x}.
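Expanding the quadratic form (5) gives the algebraically equivalent and computationally cheaper expression P²_{Φ,X}(x) = Φ(x,x) − k(x)ᵀ A_{Φ,X}^{-1} k(x), with k(x)_j = Φ(x, x_j). A hedged numpy sketch (a 1-d Gaussian kernel, with Φ(x,x) = 1, is assumed for illustration):

```python
import numpy as np

def gauss_kernel(x, y, eps=2.0):
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

def power_function(xs, X, kernel):
    # P^2(x) = Phi(x,x) - k(x)^T A^{-1} k(x); for the Gaussian, Phi(x,x) = 1.
    A = kernel(X, X)
    K = kernel(xs, X)                                # K[i, j] = Phi(xs_i, x_j)
    P2 = 1.0 - np.sum(K * np.linalg.solve(A, K.T).T, axis=1)
    return np.sqrt(np.maximum(P2, 0.0))              # clip roundoff below zero

X = np.linspace(-1.0, 1.0, 7)
# P vanishes at the data sites and is positive in between
assert np.allclose(power_function(X, X, gauss_kernel), 0.0, atol=1e-6)
assert power_function(np.array([0.15]), X, gauss_kernel)[0] > 0.0
```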
8 The problem. Are there any good points for approximating all functions in the native space?
9 Our approach. 1. Power function estimates. 2. Geometric arguments.
10 Literature. A. Beyer: Optimale Centerverteilung bei Interpolation mit radialen Basisfunktionen. Diplomarbeit, Universität Göttingen. He considered numerical aspects of the problem. L. P. Bos and U. Maier: On the asymptotics of points which maximize determinants of the form det(g(|x_i − x_j|)), in Advances in Multivariate Approximation (Berlin, 1999). They investigated Fekete-type points for univariate RBFs, proving that if g is such that g′(0) ≠ 0, then the points that maximize the Vandermonde determinant are asymptotically equidistributed.
11 Literature. A. Iske: Optimal distribution of centers for radial basis function methods. Tech. Rep. M0004, Technische Universität München. He studied admissible sets of points by varying the centers for stability and quality of approximation by RBF, proving that uniformly distributed points give better results. He also provided a bound for the so-called uniformity: ρ_{X,Ω} bounded by 2(d + 1)/d, d = space dimension. R. Platte and T. A. Driscoll: Polynomials and potential theory for Gaussian RBF interpolation, SINUM (2005). They used potential theory to find near-optimal points for Gaussians in 1d.
12 Main result. Idea: a data set giving good approximation for all f ∈ N_Φ(Ω) should leave no large holes in Ω. Assume Φ translation invariant, integrable, and that its Fourier transform decays at infinity with rate β > d/2.

Theorem [DeM., Schaback & Wendland, AiCM 2005]. For every α > β there exists a constant M_α > 0 with the following property: if ε > 0 and X = {x_1, ..., x_N} ⊂ Ω are given such that

‖f − s_{f,X}‖_{L_∞(Ω)} ≤ ε ‖f‖_Φ for all f ∈ W_2^β(R^d),  (6)

then the fill distance of X satisfies

h_{X,Ω} ≤ M_α ε^{1/(α − d/2)}.  (7)
13 Remarks. 1. The error can be bounded by

‖f − s_{f,X}‖_{L_∞(Ω)} ≤ C h_{X,Ω}^{β − d/2} ‖f‖_{W_2^β(R^d)}.  (8)

2. M_α → ∞ as α → β, so from (8) we cannot get h_{X,Ω}^{β − d/2} ≤ C ε, but we can get as close as we like. 3. The proof does not work for Gaussians (there are no compactly supported functions in the native space of the Gaussians).
14 To remedy this, we made the additional assumption that X is already quasi-uniform, i.e. h_{X,Ω} ≈ q_{X,Ω}. As a consequence, P_{Φ,X}(x) ≤ ε. The result follows from the lower bounds on P_{Φ,X} (cf. [Schaback AiCM95], where they are given in terms of q_X). Quasi-uniformity brings us back to bounds in terms of h_{X,Ω}. Observation: optimally distributed data sites are sets that cannot leave a large region of Ω without centers, i.e. h_{X,Ω} is sufficiently small.
15 On computing near-optimal points. We studied two algorithms: 1. the Greedy Algorithm (GA); 2. the Geometric Greedy Algorithm (GGA).
16 The Greedy Algorithm (GA). At each step we determine a point where the power function attains its maximum w.r.t. the preceding set. Starting step: X_1 = {x_1}, x_1 ∈ Ω arbitrary. Iteration step: X_j = X_{j−1} ∪ {x_j} with P_{Φ,X_{j−1}}(x_j) = ‖P_{Φ,X_{j−1}}‖_{L_∞(Ω)}.
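The two steps above can be sketched directly on a discretized Ω (a hedged illustration, assuming a 1-d Gaussian kernel and taking the first candidate as the arbitrary starting point):

```python
import numpy as np

def gauss_kernel(x, y, eps=2.0):
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

def power2(xs, X, kernel):
    # squared power function P^2 = Phi(x,x) - k(x)^T A^{-1} k(x) (Phi(x,x)=1 here)
    K = kernel(xs, X)
    return 1.0 - np.sum(K * np.linalg.solve(kernel(X, X), K.T).T, axis=1)

def greedy_points(candidates, n, kernel):
    X = candidates[:1].copy()                 # starting step: X_1 = {x_1}
    for _ in range(n - 1):
        # iteration step: add the candidate maximizing the power function
        j = int(np.argmax(power2(candidates, X, kernel)))
        X = np.append(X, candidates[j])
    return X

cand = np.linspace(-1.0, 1.0, 201)
X = greedy_points(cand, 5, gauss_kernel)
# starting at -1, the power function is maximal at the opposite end first
assert X[0] == -1.0 and X[1] == 1.0
assert len(np.unique(X)) == 5                 # five distinct sites
```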
17 The Geometric Greedy Algorithm (GGA). This algorithm works quite well for subsets of Ω of cardinality n with small h_{X,Ω} and large q_X. The points are computed independently of the kernel Φ. Starting step: X_0 = ∅, and define dist(x, ∅) := A, A > diam(Ω). Iteration step: given X_n ⊂ Ω, |X_n| = n, pick x_{n+1} ∈ Ω \ X_n s.t. dist(x_{n+1}, X_n) = max_{x ∈ Ω\X_n} dist(x, X_n); then form X_{n+1} := X_n ∪ {x_{n+1}}.
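The GGA is farthest-point sampling, so it needs only distances, never the kernel. A minimal sketch on a discretized domain (the first candidate plays the role of the initial arbitrary pick, an assumption of this illustration):

```python
import numpy as np

def geometric_greedy(candidates, n):
    # Farthest-point sampling: repeatedly add the candidate farthest
    # (in Euclidean distance) from the current set X_n.
    idx = [0]                                  # initial arbitrary point
    d = np.linalg.norm(candidates - candidates[0], axis=1)
    for _ in range(n - 1):
        i = int(np.argmax(d))                  # x_{n+1} = argmax dist(x, X_n)
        idx.append(i)
        # update the distance of every candidate to the enlarged set
        d = np.minimum(d, np.linalg.norm(candidates - candidates[i], axis=1))
    return candidates[idx]

cand = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
X = geometric_greedy(cand, 3)
# starting from 0, the farthest point is 1, then the midpoint 0.5
assert X[1, 0] == 1.0 and abs(X[2, 0] - 0.5) < 1e-12
```

Replacing the Euclidean norm by any metric µ yields the metric-independent variant mentioned on the Leja-sequence slide.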
18 Remarks on convergence. Practical experiments show that the GA fills the currently largest hole in the data points close to the center of the hole and converges at least like ‖P_j‖_{L_∞(Ω)} ≤ C j^{−1/d}, C > 0. Defining the separation distance for X_j as q_j = (1/2) min_{x≠y ∈ X_j} ‖x − y‖_2 and the fill distance as h_j = max_{x∈Ω} min_{y∈X_j} ‖x − y‖_2, we can prove that

(1/2) h_j ≤ q_j ≤ h_j, j ≥ 2,

i.e. the GGA produces quasi-uniformly distributed points in the Euclidean metric.
19 Connections with discrete Leja sequences. Let Ω_N be a discretization of a compact domain Ω ⊂ R², and let x_0 be arbitrarily chosen in Ω. The points

x_n = argmax_{x ∈ Ω_N \ {x_0,...,x_{n−1}}} min_{0≤k≤n−1} ‖x − x_k‖_2

form a Leja sequence on Ω. Hence the construction technique of the GGA is conceptually similar to finding Leja sequences: both maximize a function of distances. The construction of the GGA is independent of the Euclidean metric: if µ is any metric on Ω, the GGA produces points asymptotically equidistributed in that metric. In [Caliari, DeM., Vianello AMC2005] the GGA was used with the Dubiner metric on the square.
20 How good are the point sets computed by the GA and GGA? We could check these quantities: Interpolation error. Uniformity: in particular, the GGA maximizes ρ_{X,Ω} = q_X / h_{X,Ω}, since it works well with subsets Ω_n ⊂ Ω with large q_X and small h_{X,Ω}. Lebesgue constant:

Λ_N := max_{x∈Ω} λ_N(x) = max_{x∈Ω} Σ_{k=1}^N |u_k(x)|.
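The Lebesgue constant above is straightforward to estimate on a fine evaluation grid: the cardinal functions are the entries of A^{-1} k(x). A hedged numpy sketch (1-d Gaussian kernel assumed for illustration):

```python
import numpy as np

def gauss_kernel(x, y, eps=2.0):
    return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

def lebesgue_constant(X, grid, kernel):
    # U[i, j] = u_j(grid_i), since u_j(x) = (A^{-1} k(x))_j;
    # Lambda_N = max over the grid of lambda_N(x) = sum_j |u_j(x)|
    U = np.linalg.solve(kernel(X, X), kernel(grid, X).T).T
    return np.abs(U).sum(axis=1).max()

X = np.linspace(-1.0, 1.0, 7)
grid = np.union1d(np.linspace(-1.0, 1.0, 401), X)   # make sure the sites are on the grid
lam = lebesgue_constant(X, grid, gauss_kernel)
# at each data site lambda_N(x_k) = 1 exactly, so Lambda_N >= 1
assert lam >= 1.0 - 1e-6
```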
21 Experiments: details. 1. We considered a discretization of Ω = [−1,1]² with random points. 2. The GA was run until ‖P_{X,Ω}‖ ≤ η, for η a chosen threshold. 3. The GGA, thanks to the connection with Leja extremal sequences, was run once and for all: we extracted 406 points from random points on Ω = [−1,1]², 406 = dim(Π_27(R²)).
22 GA: Gaussian. Gaussian with scale 1, 48 points, stopping threshold η. The error in the right-hand figure is ‖P_N‖²_{L_∞(Ω)}, which decays as a function of the number N of data points; the decay rate is given by the regression line in the figure. [Figure: error vs. N, log-log scale.]
23 GA: Wendland C². Wendland function with scale 15, N = 100 points, to depress the power function down to the stopping threshold. The error decays like N^{−1.9}, as determined by the regression line in the figure. [Figure: error vs. N, log-log scale.]
24 GGA: Gaussian. Error decay when the Gaussian power function is evaluated on the data supplied by the geometric greedy method up to |X| = 48. The final error is larger by a factor of 4, and the estimated decrease of the error is slower. [Figure: error vs. N, log-log scale.]
25 GGA: Wendland. The error factor is only 1.4 times bigger, while the estimated decay order is similar. [Figure: error vs. N, log-log scale.]
26 Gaussian. Below: 65 points for the Gaussian with scale 1. Left: their separation distances; right: the (+) are the points computed with the GA with η = 2.0e−7, while the (*) are those computed with the GGA. [Figure: separation distances and point sets, GA vs. GGA.]
27 Inverse multiquadrics. Below: 90 points for the IM with scale 1. Left: their separation distances; right: the (+) are the points computed with the GA with η = 2.0e−5, while the (*) are those computed with the GGA. [Figure: separation distances and point sets, GA vs. GGA.]
28 Wendland. Below: 80 points for Wendland's RBF with scale 1. Left: their separation distances; right: the (+) are the points computed with the GA with η = 1.0e−1, while the (*) are those computed with the GGA. [Figure: separation distances and point sets, GA vs. GGA.]
29 Lebesgue constants for the near-optimal points for the Gaussian. Left: the growth of the data-dependent Lebesgue constant; right: the growth of the data-independent one. [Figure: Lebesgue constants for the Gaussian.]
30 Lebesgue constants for the near-optimal points for the inverse multiquadrics. Left: the growth of the data-dependent Lebesgue constant; right: the growth of the data-independent one. [Figure: Lebesgue constants for the inverse multiquadrics.]
31 Lebesgue constants for the near-optimal points for Wendland's RBF. Left: the growth of the data-dependent Lebesgue constant; right: the growth of the data-independent one. [Figure: Lebesgue constants for Wendland's RBF.]
32 A comparison of Lebesgue constant growth for points on the square: RND (random points), EUC (data-independent points), DUB (Dubiner points). [Figure: Lebesgue constants vs. degree n, with fits RND ≈ (2.3)^n, EUC ≈ 4.0·(2.3)^n, DUB ≈ 0.4·n³.]
33 Test functions: f_1(x, y) = exp(−8x² − 8y²) and f_2(x, y) = x² + y² − xy, on Ω = [−1, 1]². [Tables: errors in the L_2-norm for interpolation by the Gaussian, by Wendland's function, and by the inverse multiquadrics, on the point sets G-G65, GGA-G65, G-W80, GGA-W80, G-IMQ90, GGA-IMQ90 (rows f_1 and f_2); errors > 1.0 are flagged.]
34 Remarks. 1. The GGA is independent of the kernel and generates asymptotically equidistributed optimal sequences. It is still inferior to the GA, which takes the power function into account. 2. The set computed by the GGA is such that h_{X_n,Ω} = max_{x∈Ω} min_{y∈X_n} ‖x − y‖_2. In [Caliari, DeM, Vianello 2005] we proved that these points are quasi-uniform in the Dubiner metric and connected to Leja sequences. 3. So far, we have no proof that the GGA generates a sequence with h_n ≤ C n^{−1/d}, as required by asymptotic optimality. 4. We could look for data-dependent adaptive strategies for reconstruction of functions from spans of translates of kernels, using techniques known from learning theory and applying optimization techniques for data selection (proposed by Robert... not yet implemented!).
35 Initial ideas. Given the recovery process f ↦ s_{f,X}, where s_{f,X} = Σ_{j=1}^N f(x_j) u_j for some u_j : Ω ⊆ R^d → R, we look for bounds of the form

‖s_{f,X}‖_{L_∞(Ω)} ≤ C(X) ‖f‖_{ℓ_∞(X)}.  (9)

C(X), the stability constant, can be bounded from below as

C(X) ≥ ‖ Σ_{j=1}^N |u_j| ‖_{L_∞(Ω)},  (10)

i.e. by the Lebesgue constant Λ_X := max_{x∈Ω} Σ_{j=1}^N |u_j(x)|.
36 Remarks on polynomial interpolation. 1. Looking for upper bounds for C(X) and/or Λ_X is a classical problem. In recovery by polynomials, upper bounds for the Lebesgue constant exist, leading to the problem of finding near-optimal points. 2. For polynomial interpolation, near-optimal sets X of cardinality N have Λ_X behaving like log(N) in 1D and like log²(N) in 2D on the square. An important set of near-optimal points in the square for polynomial interpolation are the Padua points [Bos, Caliari, DeM, Vianello, Xu JAT06, NM07].
37 Padua points. [Figure: (left) Padua points for n = 13 and the generating curve; (right) Padua points for a larger degree.]
38 Padua points. [Figure: (left) Morrow-Patterson (MP), Extended Morrow-Patterson (EMP) and Padua (PD) points for n = 8; (right) Lebesgue constant growth vs. degree n, with fits MP ≈ (0.7n + 1.0)², EMP ≈ (0.4n + 0.9)², PD ≈ (2/π log(n+1) + 1.1)².]
39 Stability bounds for multivariate kernel-based recovery processes are missing. How can we proceed to derive them?
40 Recovering by kernels. Given a (positive definite) kernel Φ : Ω × Ω → R, construct

s_{f,X} := Σ_{j=1}^N α_j Φ(·, x_j)  (11)

from V_X := span{Φ(·, x) : x ∈ X} of translates of Φ so that

f(x_k) = s_{f,X}(x_k), 1 ≤ k ≤ N,  (12)

with kernel matrix A_{Φ,X} = (Φ(x_k, x_j)), 1 ≤ j, k ≤ N.
41 The matrix A_{Φ,X}. 1. Unfortunately, the kernel matrix has a bad condition number if the data locations come close to each other, i.e. if q_X is small. 2. Then the coefficients of the representation (11) get very large even if the data values f(x_k) are small, and simple linear solvers will fail. Users often report that the final function s_{f,X} of (11) behaves nicely in spite of the large coefficients, and stable solvers (for instance Riley's algorithm) lead to useful results even for unreasonably large condition numbers [Fasshauer's talk]. 3. The interpolant can be stably calculated (in the sense of (9)), while the coefficients in the basis supplied by the Φ(·, x_j) are unstable.
42 Error estimates and (in)stability. 1. h_{X,Ω} and q_X are used for standard error and stability estimates for multivariate interpolants. The inequality q_X ≤ h_{X,Ω} holds in most cases. 2. If points of X nearly coalesce, q_X can be much smaller than h_{X,Ω}, causing instability of the standard solution process. Point sets X are called quasi-uniform with uniformity constant γ > 1 if the inequality

(1/γ) q_X ≤ h_{X,Ω} ≤ γ q_X

holds.
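The two quantities above are easy to compute on a discretized domain: the fill distance is the largest hole, the separation distance half the smallest pairwise gap. A short sketch (illustrative names; the domain is represented by a fine grid):

```python
import numpy as np

def fill_distance(X, omega):
    # h_{X,Omega} = max over (discretized) Omega of the distance to the nearest x_j
    D = np.linalg.norm(omega[:, None, :] - X[None, :, :], axis=-1)
    return D.min(axis=1).max()

def separation_distance(X):
    # q_X = (1/2) min_{j != k} |x_j - x_k|
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return 0.5 * D[np.triu_indices(len(X), 1)].min()

X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)          # equispaced, spacing 0.25
omega = np.linspace(0.0, 1.0, 1001).reshape(-1, 1)
h, q = fill_distance(X, omega), separation_distance(X)
# for equispaced points both equal half the spacing, so uniformity gamma = 1
assert abs(q - 0.125) < 1e-12 and abs(h - 0.125) < 1e-3
assert q <= h + 1e-12                                 # q_X <= h_{X,Omega}
```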
43 Kernels and Fourier transforms. To generate interpolants, we allow (conditionally) positive definite translation-invariant kernels

Φ(x, y) = K(x − y) for all x, y ∈ R^d, K : R^d → R,

which are reproducing in their native Hilbert space N_Φ, assumed to be norm-equivalent to some Sobolev space W_2^τ(Ω) with τ > d/2. The kernel then has a Fourier transform satisfying

0 < c (1 + ‖ω‖²_2)^{−τ} ≤ K̂(ω) ≤ C (1 + ‖ω‖²_2)^{−τ}  (13)

at infinity. This includes polyharmonic splines, thin-plate splines, the Sobolev/Matérn kernel, and Wendland's compactly supported kernels.
44 Theorem 1. Theorem. The classical Lebesgue constant for interpolation with Φ on N = |X| data locations in a bounded Ω ⊆ R^d has a bound of the form

Λ_X ≤ C √N (h_{X,Ω} / q_X)^{τ − d/2}.  (14)

For quasi-uniform sets, with uniformity bounded by γ, this simplifies to Λ_X ≤ C √N. Each single cardinal function is bounded by

‖u_j‖_{L_∞(Ω)} ≤ C (h_{X,Ω} / q_X)^{τ − d/2},  (15)

which, in the quasi-uniform case, simplifies to ‖u_j‖_{L_∞(Ω)} ≤ C.
45 Corollary. Corollary. Interpolation on sufficiently many quasi-uniformly distributed data points is stable in the sense of

‖s_{f,X}‖_{L_∞(Ω)} ≤ C (‖f‖_{ℓ_∞(X)} + ‖f‖_{ℓ_2(X)})  (16)

and

‖s_{f,X}‖_{L_2(Ω)} ≤ C h_{X,Ω}^{d/2} ‖f‖_{ℓ_2(X)},  (17)

with a constant C independent of X. In the right-hand side of (17), ℓ_2 is a properly scaled discrete version of the L_2 norm. Proofs were done by resorting to classical error estimates. An alternative proof, based on a sampling inequality [Rieger, Wendland NM05], was proposed in [Schaback, DeM. RR59-08, UniVR].
46 Proof sketch. 1. Bound u_j. Let Ψ ∈ C^∞ be a bump function with support in the unit ball and such that Ψ(0) = 1, ‖Ψ‖_{L_∞} = 1. The translated bump Ψ((· − x_j)/q_X) takes the values δ_{jk} at the data sites, so u_j = I_X Ψ((· − x_j)/q_X), and standard error estimates ([Corol., Wendland's book]) give

‖u_j‖_{L_∞(Ω)} ≤ ‖Ψ((· − x_j)/q_X)‖_{L_∞(Ω)} + ‖I_X Ψ((· − x_j)/q_X) − Ψ((· − x_j)/q_X)‖_{L_∞(Ω)} ≤ 1 + C h_{X,Ω}^{τ − d/2} ‖Ψ((· − x_j)/q_X)‖_N.  (18)

2. Estimate the native space norm of Ψ(·/q_X), getting

‖Ψ(·/q_X)‖²_N ≤ C_1 q_X^{d − 2τ} ‖Ψ‖²_{W_2^τ}.

Thus the estimates easily follow.
47 Proof sketch (continued). Finally, for the Lebesgue constant, observe that p_{f,X}(x) = Σ_{j=1}^N f(x_j) Ψ((x − x_j)/q_X). Then

‖I_X p_{f,X}‖_{L_∞(Ω)} ≤ ‖p_{f,X}‖_{L_∞(Ω)} + ‖I_X p_{f,X} − p_{f,X}‖_{L_∞(Ω)}.

First, ‖p_{f,X}‖_{L_∞(Ω)} ≤ ‖f‖_{ℓ_∞(X)}, since p_{f,X} is a sum of functions with non-overlapping supports. Second, ‖I_X p_{f,X} − p_{f,X}‖_{L_∞(Ω)} ≤ C h_{X,Ω}^{τ − d/2} ‖p_{f,X}‖_N. It then remains to estimate ‖p_{f,X}‖_N. For τ ∈ N, we have

‖p_{f,X}‖_N ≤ C q_X^{d/2 − τ} ‖Ψ‖_{W_2^τ} (Σ_{j=1}^N f(x_j)²)^{1/2} ≤ C q_X^{d/2 − τ} ‖Ψ‖_{W_2^τ} √N ‖f‖_{ℓ_∞(X)}.
48 Kernels. 1. Matérn/Sobolev kernel (finite smoothness, positive definite): Φ(r) = (r/c)^ν K_ν(r/c), of order ν, where K_ν is the modified Bessel function of the second kind. Examples were done with ν = 1.5 at scales c = 20, 320. Schaback calls them Sobolev splines. 2. Gauss kernel (infinite smoothness, positive definite): Φ(r) = e^{−νr²}, ν > 0. Examples with ν = 1 at scales c = 0.1, 0.2, 0.4.
49 Lebesgue constants against n. [Figure: Lebesgue constants against n, interior and corner, for the Matérn/Sobolev kernel (left) and the Gauss kernel (right).]
50 Lebesgue functions. [Figure: Lagrange basis function on 225 data points, Gaussian kernel with scale 0.1 (left) and scale 0.2 (right). See how the scaling influences the Lagrange basis.]
51 Lebesgue functions. [Figure: Gauss kernel with scale 0.4: Lebesgue function on 225 regular points. The maximum of the Lebesgue function is attained near the corners for large scales, while the behavior in the interior is as stable as for kernels with limited smoothness.]
52 Lebesgue functions. [Figure: Matérn/Sobolev kernel with scale 320: Lebesgue function on 225 scattered points (left) and on 225 equidistributed points (right).]
53 Lagrange basis functions. [Figure: Matérn/Sobolev kernel with scale 320: Lagrange basis function (left) on 225 random points (right).]
54 Lagrange basis functions. [Figure: Lagrange basis function (left) and Lebesgue function (right) for 168 scattered data points on the circle, Gaussian kernel with scale 0.4.]
55 Lagrange basis functions. [Figure: data points for the previous figure.]
56 Lebesgue constants. Here we collect some computed Lebesgue constants on a grid of centers consisting of 225 points on [−1, 1]². The constants were computed on a finer grid made of 7225 points. Matérn and Wendland were scaled by 10, IMQ and Gaussian by 0.2. [Table: columns Matérn, W2, IMQ, GA; the first row contains the maxima of the Lebesgue functions, the second the estimated Lebesgue constants.] The estimates use the Lebesgue function computed by the formula [Wendland's book, p. 208]

1 + Σ_{j=1}^N (u_j(x))² ≤ P²_{Φ,X}(x) / λ_min(A_{Φ,X∪{x}}), x ∉ X,

in a neighborhood of the point that maximizes the classical Lebesgue constant.
57 Remarks on the finite smoothness case. 1. In all examples, our bounds on the Lebesgue constants are confirmed. 2. In all experiments, the Lebesgue constants seem to be uniformly bounded. 3. The maximum of the Lebesgue function is attained in the interior.
58 Remarks on the infinite smoothness case. Things are more or less specular: 1. The Lebesgue constants do not seem to be uniformly bounded. 2. In all experiments, the Lebesgue function attains its maximum near the corners (for large scales). 3. The limit for large scales is called the flat limit, which corresponds to the Lagrange basis functions for polynomial interpolation (see the Larsson and Fornberg talks, [Driscoll, Fornberg 2002], [Schaback 2005], ...).
59 A possible solution. Schaback, in a recent paper with S. Müller [Müller, Schaback JAT08], studied a Newton basis for overcoming the ill-conditioning of linear systems in RBF interpolation. The basis is orthogonal in the native space in which the kernel is reproducing, and is more stable.
60 The case φ(x) = |x|. This is based on the work [Bos, DeM. EJA2008]. Sites x_1 < x_2 < ... < x_n belong to some interval [a, b]. Interpolation problem (correct, i.e. uniquely solvable): find coefficients a_j ∈ R with

Σ_{j=1}^n a_j |x_k − x_j| = y_k,  (19)

for function values y_k, in two ways: 1. solve the linear system with the Vandermonde-like matrix; 2. give formulas for the associated cardinal functions u_j.
61 Formulas for the cardinal functions. The cardinal functions are the classical hat functions, given as follows. When x_j is an interior point,

u_j(x) = 0 if x ≤ x_{j−1}; (x − x_{j−1})/(x_j − x_{j−1}) if x_{j−1} < x ≤ x_j; (x_{j+1} − x)/(x_{j+1} − x_j) if x_j < x ≤ x_{j+1}; 0 if x > x_{j+1}; for 2 ≤ j ≤ n−1,  (20)

while for the boundary points x_1 and x_n,

u_1(x) = (x_2 − x)/(x_2 − x_1) if x_1 ≤ x ≤ x_2, 0 if x > x_2;  (21)

u_n(x) = 0 if x ≤ x_{n−1}, (x − x_{n−1})/(x_n − x_{n−1}) if x_{n−1} < x ≤ x_n.  (22)
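The interior hat (20) is a one-line piecewise definition, and its three-translate form (stated on the next slide as (23)) can be checked numerically. A sketch, reusing for illustration the node set [1, 2, 3.5, 6, 7.5] that appears later in the talk's figures:

```python
import numpy as np

def hat(x, nodes, j):
    # Interior cardinal function u_j of (20): the classical hat function
    xm, xj, xp = nodes[j - 1], nodes[j], nodes[j + 1]
    return np.where(x <= xm, 0.0,
           np.where(x <= xj, (x - xm) / (xj - xm),
           np.where(x <= xp, (xp - x) / (xp - xj), 0.0)))

nodes = np.array([1.0, 2.0, 3.5, 6.0, 7.5])
# cardinality: u_j(x_k) = delta_{jk} at the interior nodes
for j in (1, 2, 3):
    vals = hat(nodes, nodes, j)
    assert vals[j] == 1.0 and np.allclose(np.delete(vals, j), 0.0)

# the same hat as a combination of three translates of |x - x_k|
xs = np.linspace(0.0, 8.5, 171)
xm, xj, xp = nodes[1], nodes[2], nodes[3]
combo = (np.abs(xs - xm) / (2 * (xj - xm))
         - (xp - xm) / (2 * (xp - xj) * (xj - xm)) * np.abs(xs - xj)
         + np.abs(xs - xp) / (2 * (xp - xj)))
assert np.allclose(combo, hat(xs, nodes, 2))
```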
62 Formulas for the cardinal functions. These hat functions have another interesting property: u_j is a combination of just 3 translates, |x − x_{j−1}|, |x − x_j| and |x − x_{j+1}|. This also holds for u_1 and u_n, identifying x_0 = x_n and x_{n+1} = x_1. For instance, for x_j an interior point,

u_j(x) = |x − x_{j−1}| / (2(x_j − x_{j−1})) − (x_{j+1} − x_{j−1}) |x − x_j| / (2(x_{j+1} − x_j)(x_j − x_{j−1})) + |x − x_{j+1}| / (2(x_{j+1} − x_j)), 2 ≤ j ≤ n−1.  (23)

Remark: (23) is defined for all x ∈ R, but is identically zero outside [x_{j−1}, x_{j+1}]. The boundary points x_1 and x_n are again slightly different.
63 The case φ″(x) = λ² φ(x). Assume λ ∈ C. We proved: 1. u_j is still a combination of 3 consecutive translates of φ(|x|) and has support in [x_{j−1}, x_{j+1}]; 2. uniqueness of this class of functions. λ = 0 gives essentially φ(x) = |x|, so assume λ ≠ 0. Hence

φ(x) = a e^{λx} + b e^{−λx}  (24)

for some a, b ∈ C.
64 Formulas for the cardinal functions. Observe that the interpolation problem for functions of the form

s(x) = Σ_{j=1}^n a_j φ(|x − x_j|)  (25)

is correct provided b ≠ a and a e^{λx_n} ≠ ±b e^{−λx_1}.

Theorem. For φ(x) of the form (24) we have

det([φ(|x_i − x_j|)]_{1≤i,j≤n}) = (b − a)^{n−2} e^{−2λ Σ_{j=1}^n x_j} Π_{j=1}^{n−1} (e^{2λx_{j+1}} − e^{2λx_j}) · (b² e^{2λx_1} − a² e^{2λx_n}).
65 Formulas for the cardinal functions. Proposition. For φ(x) of the form (24), with a, b such that the interpolation problem is correct, we have for 2 ≤ j ≤ n−1

u_j(x) = A_1 φ(|x − x_{j−1}|) + A_2 φ(|x − x_j|) + A_3 φ(|x − x_{j+1}|),

where

A_1 = e^{λ(x_{j−1} + x_j)} / ((e^{2λx_j} − e^{2λx_{j−1}})(b − a)),

A_2 = −(e^{2λx_{j+1}} − e^{2λx_{j−1}}) e^{2λx_j} / ((e^{2λx_{j+1}} − e^{2λx_j})(e^{2λx_j} − e^{2λx_{j−1}})(b − a)),

A_3 = e^{λ(x_j + x_{j+1})} / ((e^{2λx_{j+1}} − e^{2λx_j})(b − a)).
66 Formulas for the cardinal functions. These u_j are identically zero outside the interval [x_{j−1}, x_{j+1}]. Similar formulas hold for u_1 and u_n.
67 Cardinal functions. [Figure: cardinal functions u_1, ..., u_5 for the nodes [1, 2, 3.5, 6, 7.5], a = 2, b = 3, and λ = 1 (left), λ = i (right).]
68 Uniqueness of the class. Theorem. Suppose that φ : R_+ → R is analytic. Suppose further that for any x_1 < x_2 < ... < x_n, the cardinal functions for interpolation of the form (25) can be given as linear combinations of three consecutive translates, i.e., there exist constants α_j, β_j and γ_j such that

u_j(x) = α_j φ(|x − x_{j−1}|) + β_j φ(|x − x_j|) + γ_j φ(|x − x_{j+1}|), 2 ≤ j ≤ n−1.

Suppose further that u_j has support in the interval [x_{j−1}, x_{j+1}]. Then there exists a λ ∈ C such that φ″(x) = λ² φ(x).
69 Formulas for the cardinal functions. Theorem. Suppose that x_1 < x_2 < ... < x_n and that φ(x) = a e^{λx} + b e^{−λx} is such that the interpolation problem is correct. Then, independently of the values of a and b,

u_j(x) = e^{λ(x_j − x)} (e^{2λx} − e^{2λx_{j−1}}) / (e^{2λx_j} − e^{2λx_{j−1}}) if x ∈ [x_{j−1}, x_j]; e^{λ(x_j − x)} (e^{2λx} − e^{2λx_{j+1}}) / (e^{2λx_j} − e^{2λx_{j+1}}) if x ∈ [x_j, x_{j+1}]; 0 otherwise; for 2 ≤ j ≤ n−1,

and similarly for u_1, u_n.
70 The Lebesgue constant. Since the u_j are positive functions:

Proposition. Suppose that x_1 < x_2 < ... < x_n and that φ(x) = a e^{λx} + b e^{−λx}, for λ ∈ R, is such that the interpolation problem is correct. Then, independently of the values of a and b,

Σ_{j=1}^n u_j(x) = (e^{λx} + e^{λ(x_j + x_{j+1} − x)}) / (e^{λx_j} + e^{λx_{j+1}}), x ∈ [x_j, x_{j+1}].

In particular, max_{x_1 ≤ x ≤ x_n} Σ_{j=1}^n u_j(x) = 1.
71 The case of λ complex. Consider λ = i with a = −i/2 and b = i/2, so that g(x) = sin(x). If we make the restriction x_n − x_1 < π, one can prove that the interpolation problem is correct. It follows that u_j(x) ≥ 0 on [x_1, x_n] with x_n − x_1 < π, and

Σ_{j=1}^n u_j(x) = cos(x − (x_j + x_{j+1})/2) / cos((x_{j+1} − x_j)/2), x ∈ [x_j, x_{j+1}].

The maximum is clearly attained at the midpoint x = (x_j + x_{j+1})/2, at which

Σ_{j=1}^n u_j(x) = 1 / cos((x_{j+1} − x_j)/2).
72 The case of λ complex. Hence

Λ_n := max_{x_1 ≤ x ≤ x_n} Σ_{j=1}^n u_j(x) = max_{1≤j≤n−1} 1 / cos((x_{j+1} − x_j)/2) = 1 / cos(max_{1≤j≤n−1} (x_{j+1} − x_j)/2).  (26)
73 The case of λ complex. Theorem. Suppose that φ(x) = sin(x). Then, among all distributions of a = x_1 < x_2 < ... < x_n = b in the interval [a, b] with b − a < π, the one for which Λ_n is uniquely minimized is the equally spaced one, i.e., x_j = a + (j − 1)(b − a)/(n − 1), 1 ≤ j ≤ n.
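Formula (26) makes the theorem easy to illustrate numerically: Λ_n depends only on the largest gap, which equispacing minimizes for fixed endpoints. A minimal check (node sets chosen only for illustration):

```python
import numpy as np

def lebesgue_sin(nodes):
    # Lambda_n = 1 / cos( max_j (x_{j+1} - x_j) / 2 ), formula (26)
    return 1.0 / np.cos(np.diff(nodes).max() / 2.0)

a, b, n = 0.0, 2.0, 5                      # b - a = 2 < pi, so the problem is correct
eq = np.linspace(a, b, n)                  # equally spaced: every gap is 0.5
ne = np.array([0.0, 0.2, 0.5, 1.1, 2.0])   # same endpoints, uneven spacing (max gap 0.9)
assert lebesgue_sin(eq) < lebesgue_sin(ne) # equispacing gives the smaller Lambda_n
assert abs(lebesgue_sin(eq) - 1.0 / np.cos(0.25)) < 1e-12
```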
74 The case of λ complex. [Figure: Lebesgue functions for λ = i with equally spaced points (left) and non-equally spaced points (right).]
75 Work to do. 1. In 1d, we studied the case φ(x) = |x|³ and discovered, for nearly equidistributed point sets, a behavior similar to that of periodic cubic splines (?) [F. Schurer, Indag. Math. 30 (1968)], giving a uniform bound on Λ_n. More investigations are then necessary! 2. Study better the behavior of the cardinal functions u_j: why do they concentrate around x_j and decay at infinity? 3. Efficient computations (to overcome ill-conditioning and instability), in the spirit of Nick Trefethen's "10 digits, 5 seconds, and 1 page" definition!
76 Important references.
Bos L. and De Marchi S., Univariate radial basis functions with compact support cardinal functions, East J. Approx. 14(1) (2008).
de Boor C. and Ron A., The least solution for the polynomial interpolation problem, Math. Z.
Driscoll T. and Fornberg B., Interpolation in the limit of increasingly flat radial basis functions, Comput. Math. Appl. (2002).
Higham N. J., Accuracy and Stability of Numerical Algorithms, SIAM, Philadelphia, 1996.
Jokar S. and Mehri B., The Lebesgue function for multivariate interpolation by RBF, Appl. Math. Comput. 187(1) (2007).
Müller S. and Schaback R., A Newton basis for kernel spaces, to appear in J. Approx. Theory (2008); available at R. Schaback's home page.
Wendland H., Scattered Data Approximation, Cambridge Monographs on Applied and Computational Mathematics, Vol. 17.
Rieger C. and Wendland H., Approximate interpolation with applications to selecting smoothing parameters, Numer. Math. (2005).
77 DWCAA09: first announcement. 2nd Dolomites Workshop on Constructive Approximation and Applications, Alba di Canazei, 3-9 Sept. 2009. Keynote speakers (confirmed so far!): Carl de Boor, Robert Schaback, Nick Trefethen, Holger Wendland, Yuan Xu. Sessions on: polynomial and rational approximation (org.: J. Carnicer, A. Cuyt); approximation by radial bases (org.: A. Iske, J. Levesley); quadrature and cubature (org.: B. Bojanov, E. Venturino); approximation in linear algebra (org.: C. Brezinski, M. Eiermann).
78 Thank you for your attention!
Interpolation by Basis Functions of Different Scales and Shapes M. Bozzini, L. Lenarduzzi, M. Rossini and R. Schaback Abstract Under very mild additional assumptions, translates of conditionally positive
More informationLeast Squares Approximation
Chapter 6 Least Squares Approximation As we saw in Chapter 5 we can interpret radial basis function interpolation as a constrained optimization problem. We now take this point of view again, but start
More informationOrthogonality of hat functions in Sobolev spaces
1 Orthogonality of hat functions in Sobolev spaces Ulrich Reif Technische Universität Darmstadt A Strobl, September 18, 27 2 3 Outline: Recap: quasi interpolation Recap: orthogonality of uniform B-splines
More informationRKHS, Mercer s theorem, Unbounded domains, Frames and Wavelets Class 22, 2004 Tomaso Poggio and Sayan Mukherjee
RKHS, Mercer s theorem, Unbounded domains, Frames and Wavelets 9.520 Class 22, 2004 Tomaso Poggio and Sayan Mukherjee About this class Goal To introduce an alternate perspective of RKHS via integral operators
More informationLocal radial basis function approximation on the sphere
Local radial basis function approximation on the sphere Kerstin Hesse and Q. T. Le Gia School of Mathematics and Statistics, The University of New South Wales, Sydney NSW 05, Australia April 30, 007 Abstract
More informationEECS 598: Statistical Learning Theory, Winter 2014 Topic 11. Kernels
EECS 598: Statistical Learning Theory, Winter 2014 Topic 11 Kernels Lecturer: Clayton Scott Scribe: Jun Guo, Soumik Chatterjee Disclaimer: These notes have not been subjected to the usual scrutiny reserved
More information1. Introduction. A radial basis function (RBF) interpolant of multivariate data (x k, y k ), k = 1, 2,..., n takes the form
A NEW CLASS OF OSCILLATORY RADIAL BASIS FUNCTIONS BENGT FORNBERG, ELISABETH LARSSON, AND GRADY WRIGHT Abstract Radial basis functions RBFs form a primary tool for multivariate interpolation, and they are
More informationOverview of normed linear spaces
20 Chapter 2 Overview of normed linear spaces Starting from this chapter, we begin examining linear spaces with at least one extra structure (topology or geometry). We assume linearity; this is a natural
More informationYour first day at work MATH 806 (Fall 2015)
Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies
More informationApplication of modified Leja sequences to polynomial interpolation
Special Issue for the years of the Padua points, Volume 8 5 Pages 66 74 Application of modified Leja sequences to polynomial interpolation L. P. Bos a M. Caliari a Abstract We discuss several applications
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 7: Conditionally Positive Definite Functions Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter
More informationRBF Collocation Methods and Pseudospectral Methods
RBF Collocation Methods and Pseudospectral Methods G. E. Fasshauer Draft: November 19, 24 Abstract We show how the collocation framework that is prevalent in the radial basis function literature can be
More informationD. Shepard, Shepard functions, late 1960s (application, surface modelling)
Chapter 1 Introduction 1.1 History and Outline Originally, the motivation for the basic meshfree approximation methods (radial basis functions and moving least squares methods) came from applications in
More informationToward Approximate Moving Least Squares Approximation with Irregularly Spaced Centers
Toward Approximate Moving Least Squares Approximation with Irregularly Spaced Centers Gregory E. Fasshauer Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 6066, U.S.A.
More informationc 2005 Society for Industrial and Applied Mathematics
SIAM J. UMER. AAL. Vol. 43, o. 2, pp. 75 766 c 25 Society for Industrial and Applied Mathematics POLYOMIALS AD POTETIAL THEORY FOR GAUSSIA RADIAL BASIS FUCTIO ITERPOLATIO RODRIGO B. PLATTE AD TOBI A. DRISCOLL
More informationKernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt.
SINGAPORE SHANGHAI Vol TAIPEI - Interdisciplinary Mathematical Sciences 19 Kernel-based Approximation Methods using MATLAB Gregory Fasshauer Illinois Institute of Technology, USA Michael McCourt University
More informationReproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Distributional Operators
Noname manuscript No. (will be inserted by the editor) Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Distributional Operators Gregory E. Fasshauer Qi Ye Abstract
More informationAN ELEMENTARY PROOF OF THE OPTIMAL RECOVERY OF THE THIN PLATE SPLINE RADIAL BASIS FUNCTION
J. KSIAM Vol.19, No.4, 409 416, 2015 http://dx.doi.org/10.12941/jksiam.2015.19.409 AN ELEMENTARY PROOF OF THE OPTIMAL RECOVERY OF THE THIN PLATE SPLINE RADIAL BASIS FUNCTION MORAN KIM 1 AND CHOHONG MIN
More informationYour first day at work MATH 806 (Fall 2015)
Your first day at work MATH 806 (Fall 2015) 1. Let X be a set (with no particular algebraic structure). A function d : X X R is called a metric on X (and then X is called a metric space) when d satisfies
More informationScattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions
Chapter 3 Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions 3.1 Scattered Data Interpolation with Polynomial Precision Sometimes the assumption on the
More informationBivariate Lagrange interpolation at the Padua points: The generating curve approach
Journal of Approximation Theory 43 (6) 5 5 www.elsevier.com/locate/jat Bivariate Lagrange interpolation at the Padua points: The generating curve approach Len Bos a, Marco Caliari b, Stefano De Marchi
More informationTheoretical and computational aspects of multivariate interpolation with increasingly flat radial basis functions
Theoretical and computational aspects of multivariate interpolation with increasingly flat radial basis functions Elisabeth Larsson Bengt Fornberg June 0, 003 Abstract Multivariate interpolation of smooth
More information3. Some tools for the analysis of sequential strategies based on a Gaussian process prior
3. Some tools for the analysis of sequential strategies based on a Gaussian process prior E. Vazquez Computer experiments June 21-22, 2010, Paris 21 / 34 Function approximation with a Gaussian prior Aim:
More informationInverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 2007 Technische Universiteit Eindh ove n University of Technology
Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 27 Introduction Fredholm first kind integral equation of convolution type in one space dimension: g(x) = 1 k(x x )f(x
More informationA scaling perspective on accuracy and convergence in RBF approximations
A scaling perspective on accuracy and convergence in RBF approximations Elisabeth Larsson With: Bengt Fornberg, Erik Lehto, and Natasha Flyer Division of Scientific Computing Department of Information
More informationOn the Lebesgue constant of barycentric rational interpolation at equidistant nodes
On the Lebesgue constant of barycentric rational interpolation at equidistant nodes by Len Bos, Stefano De Marchi, Kai Hormann and Georges Klein Report No. 0- May 0 Université de Fribourg (Suisse Département
More informationSlide05 Haykin Chapter 5: Radial-Basis Function Networks
Slide5 Haykin Chapter 5: Radial-Basis Function Networks CPSC 636-6 Instructor: Yoonsuck Choe Spring Learning in MLP Supervised learning in multilayer perceptrons: Recursive technique of stochastic approximation,
More informationMA2501 Numerical Methods Spring 2015
Norwegian University of Science and Technology Department of Mathematics MA5 Numerical Methods Spring 5 Solutions to exercise set 9 Find approximate values of the following integrals using the adaptive
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More informationMINIMAL DEGREE UNIVARIATE PIECEWISE POLYNOMIALS WITH PRESCRIBED SOBOLEV REGULARITY
MINIMAL DEGREE UNIVARIATE PIECEWISE POLYNOMIALS WITH PRESCRIBED SOBOLEV REGULARITY Amal Al-Rashdan & Michael J. Johnson* Department of Mathematics Kuwait University P.O. Box: 5969 Safat 136 Kuwait yohnson1963@hotmail.com*
More informationNotions such as convergent sequence and Cauchy sequence make sense for any metric space. Convergent Sequences are Cauchy
Banach Spaces These notes provide an introduction to Banach spaces, which are complete normed vector spaces. For the purposes of these notes, all vector spaces are assumed to be over the real numbers.
More informationSuperconvergence of Kernel-Based Interpolation
Superconvergence of Kernel-Based Interpolation Robert Schaback Georg-August-Universität Göttingen, Institut für Numerische und Angewandte Mathematik, Lotzestraße 16-18, D-37083 Göttingen, Germany Abstract
More informationChapter 4: Interpolation and Approximation. October 28, 2005
Chapter 4: Interpolation and Approximation October 28, 2005 Outline 1 2.4 Linear Interpolation 2 4.1 Lagrange Interpolation 3 4.2 Newton Interpolation and Divided Differences 4 4.3 Interpolation Error
More informationGreedy Kernel Techniques with Applications to Machine Learning
Greedy Kernel Techniques with Applications to Machine Learning Robert Schaback Jochen Werner Göttingen University Institut für Numerische und Angewandte Mathematik http://www.num.math.uni-goettingen.de/schaback
More informationInterpolation. 1. Judd, K. Numerical Methods in Economics, Cambridge: MIT Press. Chapter
Key References: Interpolation 1. Judd, K. Numerical Methods in Economics, Cambridge: MIT Press. Chapter 6. 2. Press, W. et. al. Numerical Recipes in C, Cambridge: Cambridge University Press. Chapter 3
More informationReproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Differential Operators
Reproducing Kernels of Generalized Sobolev Spaces via a Green Function Approach with Differential Operators Qi Ye Abstract In this paper we introduce a generalization of the classical L 2 ( )-based Sobolev
More informationGreen s Functions: Taking Another Look at Kernel Approximation, Radial Basis Functions and Splines
Green s Functions: Taking Another Look at Kernel Approximation, Radial Basis Functions and Splines Gregory E. Fasshauer Abstract The theories for radial basis functions (RBFs) as well as piecewise polynomial
More informationSolving the Laplace Equation by Meshless Collocation Using Harmonic Kernels
Solving the Laplace Equation by Meshless Collocation Using Harmonic Kernels Robert Schaback February 13, 2008 Abstract We present a meshless technique which can be seen as an alternative to the Method
More informationOptimal Polynomial Admissible Meshes on the Closure of C 1,1 Bounded Domains
Optimal Polynomial Admissible Meshes on the Closure of C 1,1 Bounded Domains Constructive Theory of Functions Sozopol, June 9-15, 2013 F. Piazzon, joint work with M. Vianello Department of Mathematics.
More informationReproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto
Reproducing Kernel Hilbert Spaces 9.520 Class 03, 15 February 2006 Andrea Caponnetto About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert
More informationConstructing optimal polynomial meshes on planar starlike domains
Constructing optimal polynomial meshes on planar starlike domains F. Piazzon and M. Vianello 1 Dept. of Mathematics, University of Padova (Italy) October 13, 2014 Abstract We construct polynomial norming
More informationMeshfree Approximation Methods with MATLAB
Interdisciplinary Mathematical Sc Meshfree Approximation Methods with MATLAB Gregory E. Fasshauer Illinois Institute of Technology, USA Y f? World Scientific NEW JERSEY LONDON SINGAPORE BEIJING SHANGHAI
More informationMultiscale RBF collocation for solving PDEs on spheres
Multiscale RBF collocation for solving PDEs on spheres Q. T. Le Gia I. H. Sloan H. Wendland Received: date / Accepted: date Abstract In this paper, we discuss multiscale radial basis function collocation
More informationError formulas for divided difference expansions and numerical differentiation
Error formulas for divided difference expansions and numerical differentiation Michael S. Floater Abstract: We derive an expression for the remainder in divided difference expansions and use it to give
More information(0, 0), (1, ), (2, ), (3, ), (4, ), (5, ), (6, ).
1 Interpolation: The method of constructing new data points within the range of a finite set of known data points That is if (x i, y i ), i = 1, N are known, with y i the dependent variable and x i [x
More information(Kernels +) Support Vector Machines
(Kernels +) Support Vector Machines Machine Learning Torsten Möller Reading Chapter 5 of Machine Learning An Algorithmic Perspective by Marsland Chapter 6+7 of Pattern Recognition and Machine Learning
More informationApproximation of Multivariate Functions
Approximation of Multivariate Functions Vladimir Ya. Lin and Allan Pinkus Abstract. We discuss one approach to the problem of approximating functions of many variables which is truly multivariate in character.
More informationSolutions: Problem Set 4 Math 201B, Winter 2007
Solutions: Problem Set 4 Math 2B, Winter 27 Problem. (a Define f : by { x /2 if < x
More informationGaussian interval quadrature rule for exponential weights
Gaussian interval quadrature rule for exponential weights Aleksandar S. Cvetković, a, Gradimir V. Milovanović b a Department of Mathematics, Faculty of Mechanical Engineering, University of Belgrade, Kraljice
More informationBoundary Integral Equations on the Sphere with Radial Basis Functions: Error Analysis
Boundary Integral Equations on the Sphere with Radial Basis Functions: Error Analysis T. Tran Q. T. Le Gia I. H. Sloan E. P. Stephan Abstract Radial basis functions are used to define approximate solutions
More informationInterpolation of Spatial Data - A Stochastic or a Deterministic Problem?
Interpolation of Spatial Data - A Stochastic or a Deterministic Problem? M. Scheuerer, R. Schaback and M. Schlather 18 November 2011 Abstract Interpolation of spatial data is a very general mathematical
More informationDubiner distance and stability of Lebesgue constants
Dubiner distance and stability of Lebesgue constants Marco Vianello 1 March 15, 2018 Abstract Some elementary inequalities, based on the non-elementary notion of Dubiner distance, show that norming sets
More informationLecture 10 Polynomial interpolation
Lecture 10 Polynomial interpolation Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn
More informationRadial Basis Functions I
Radial Basis Functions I Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 14, 2008 Today Reformulation of natural cubic spline interpolation Scattered
More informationNumerical cubature on scattered data by radial basis functions
Numerical cubature on scattered data by radial basis functions A. Sommariva, Sydney, and M. Vianello, Padua September 5, 25 Abstract We study cubature formulas on relatively small scattered samples in
More informationRecent progress on boundary effects in kernel approximation
Recent progress on boundary effects in kernel approximation Thomas Hangelbroek University of Hawaii at Manoa FoCM 2014 work supported by: NSF DMS-1413726 Boundary effects and error estimates Radial basis
More informationREAL AND COMPLEX ANALYSIS
REAL AND COMPLE ANALYSIS Third Edition Walter Rudin Professor of Mathematics University of Wisconsin, Madison Version 1.1 No rights reserved. Any part of this work can be reproduced or transmitted in any
More informationKernel Method: Data Analysis with Positive Definite Kernels
Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University
More informationViewed From Cubic Splines. further clarication and improvement. This can be described by applying the
Radial Basis Functions Viewed From Cubic Splines Robert Schaback Abstract. In the context of radial basis function interpolation, the construction of native spaces and the techniques for proving error
More informationScattered Data Approximation of Noisy Data via Iterated Moving Least Squares
Scattered Data Approximation o Noisy Data via Iterated Moving Least Squares Gregory E. Fasshauer and Jack G. Zhang Abstract. In this paper we ocus on two methods or multivariate approximation problems
More informationNumerical Integration in Meshfree Methods
Numerical Integration in Meshfree Methods Pravin Madhavan New College University of Oxford A thesis submitted for the degree of Master of Science in Mathematical Modelling and Scientific Computing Trinity
More informationRegularization in Reproducing Kernel Banach Spaces
.... Regularization in Reproducing Kernel Banach Spaces Guohui Song School of Mathematical and Statistical Sciences Arizona State University Comp Math Seminar, September 16, 2010 Joint work with Dr. Fred
More informationSplines which are piecewise solutions of polyharmonic equation
Splines which are piecewise solutions of polyharmonic equation Ognyan Kounchev March 25, 2006 Abstract This paper appeared in Proceedings of the Conference Curves and Surfaces, Chamonix, 1993 1 Introduction
More informationApplied and Computational Harmonic Analysis
Appl Comput Harmon Anal 32 (2012) 401 412 Contents lists available at ScienceDirect Applied and Computational Harmonic Analysis wwwelseviercom/locate/acha Multiscale approximation for functions in arbitrary
More informationMultiscale Frame-based Kernels for Image Registration
Multiscale Frame-based Kernels for Image Registration Ming Zhen, Tan National University of Singapore 22 July, 16 Ming Zhen, Tan (National University of Singapore) Multiscale Frame-based Kernels for Image
More informationCIS 520: Machine Learning Oct 09, Kernel Methods
CIS 520: Machine Learning Oct 09, 207 Kernel Methods Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture They may or may not cover all the material discussed
More informationA. Iske RADIAL BASIS FUNCTIONS: BASICS, ADVANCED TOPICS AND MESHFREE METHODS FOR TRANSPORT PROBLEMS
Rend. Sem. Mat. Univ. Pol. Torino Vol. 61, 3 (23) Splines and Radial Functions A. Iske RADIAL BASIS FUNCTIONS: BASICS, ADVANCED TOPICS AND MESHFREE METHODS FOR TRANSPORT PROBLEMS Abstract. This invited
More information