Radial basis functions topics in slides
Radial basis functions — topics in slides
Stefano De Marchi
Department of Mathematics "Tullio Levi-Civita", University of Padova
Napoli, 22nd March 2018
Outline
1. From splines to RBF
2. Error estimates, conditioning and stability; strategies for controlling errors
3. Stable basis approaches; the WSVD basis
4. Rescaled, rational RBF and some applications
From cubic splines to RBF
Cubic splines

Let $S_3(X) = \{s \in C^2[a,b] : s|_{[x_i,x_{i+1}]} \in \mathbb{P}_3,\ 1 \le i \le N-1,\ s|_{[a,x_1]}, s|_{[x_N,b]} \in \mathbb{P}_1\}$ be the space of natural cubic splines. $S_3(X)$ has the basis of truncated powers $(\cdot - x_j)_+^3$, $1 \le j \le N$, plus an arbitrary basis of $\mathbb{P}_3(\mathbb{R})$:

$s(x) = \sum_{j=1}^N a_j (x - x_j)_+^3 + \sum_{j=0}^3 b_j x^j, \quad x \in [a,b].$  (1)

Using the identity $x_+^3 = (|x|^3 + x^3)/2$ and the fact that $s|_{[a,x_1]}, s|_{[x_N,b]} \in \mathbb{P}_1$:

every natural cubic spline $s$ has the representation

$s(x) = \sum_{j=1}^N a_j\, \varphi(|x - x_j|) + p(x), \quad x \in \mathbb{R},$

where $\varphi(r) = r^3$, $r \ge 0$, and $p \in \mathbb{P}_1(\mathbb{R})$, subject to

$\sum_{j=1}^N a_j = \sum_{j=1}^N a_j x_j = 0.$  (2)
Extension to $\mathbb{R}^d$

Going to $\mathbb{R}^d$ is straightforward; radiality becomes more evident:

$s(x) = \sum_{j=1}^N a_j\, \varphi(\|x - x_j\|_2) + p(x), \quad x \in \mathbb{R}^d,$  (3)

where $\varphi : [0,\infty) \to \mathbb{R}$ is a fixed univariate function and $p \in \mathbb{P}_{l-1}(\mathbb{R}^d)$ is a low-degree $d$-variate polynomial. The additional conditions in (2) become

$\sum_{j=1}^N a_j\, q(x_j) = 0 \quad \text{for all } q \in \mathbb{P}_{l-1}(\mathbb{R}^d).$  (4)

Obs: we can omit the side conditions (4) on the coefficients (as when using B-splines) and consider approximants of the form $s(x) = \sum_{j=1}^N a_j \varphi(\|x - x_j\|_2)$, and we may also omit the 2-norm subscript.
Scattered data fitting: I

Setting: given data sites $X = \{x_j,\ j = 1,\dots,N\}$, with $x_j \in \Omega \subseteq \mathbb{R}^d$, and values $Y = \{y_j \in \mathbb{R},\ j = 1,\dots,N\}$ ($y_j = f(x_j)$), find a (continuous) function $P_f \in \operatorname{span}\{\varphi(\|\cdot - x_i\|),\ x_i \in X\}$ s.t.

$P_f = \sum_{j=1}^N \alpha_j\, \varphi(\|\cdot - x_j\|), \qquad (P_f)|_X = Y \quad \text{(interpolation)}.$

Figure: data points, data values and data function.

Obs: $P_f = \sum_{j=1}^N u_j f_j$, with cardinal functions $u_j(x_i) = \delta_{ji}$.
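The interpolation condition above amounts to one linear system. A minimal sketch in Python/NumPy, with a Gaussian basis and a synthetic test function (the shape parameter and data are illustrative choices, not from the slides):

```python
import numpy as np

def gauss(r, eps=8.0):
    # Gaussian radial function phi(r) = exp(-(eps*r)^2); eps is illustrative
    return np.exp(-(eps * r) ** 2)

X = np.linspace(0, 1, 11)          # data sites x_1, ..., x_N
y = np.sin(2 * np.pi * X)          # data values y_j = f(x_j)

# interpolation matrix A_ij = phi(|x_i - x_j|) and coefficients alpha
A = gauss(np.abs(X[:, None] - X[None, :]))
alpha = np.linalg.solve(A, y)

# evaluate P_f on a fine grid; at the nodes it reproduces the data
xe = np.linspace(0, 1, 201)
Pf = gauss(np.abs(xe[:, None] - X[None, :])) @ alpha
resid = float(np.max(np.abs(A @ alpha - y)))
```

The residual `resid` confirms that $(P_f)|_X = Y$ up to roundoff.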
Scattered data fitting: II

Problem: for which functions $\varphi : [0,\infty) \to \mathbb{R}$ does it hold that, for all $d$, all $N \in \mathbb{N}$ and all pairwise distinct $x_1,\dots,x_N \in \mathbb{R}^d$, $\det(A_{\varphi,X}) \neq 0$? That is, when is the matrix $A_{\varphi,X} := (\varphi(\|x_i - x_j\|_2))_{1 \le i,j \le N}$ invertible?

Answer: the functions $\varphi$ should be positive definite, or conditionally positive definite of some order.
General kernel-based approximation

$\varphi$ conditionally positive definite (CPD) of order $l$, or strictly positive definite (SPD), and radial.

Globally supported:
- Gaussian $C^\infty$ (GA): $\varphi(r) = e^{-\varepsilon^2 r^2}$, $l = 0$
- Generalized multiquadrics $C^\infty$ (GM): $\varphi(r) = (1 + r^2/\varepsilon^2)^{3/2}$, $l = 2$

Locally supported:
- Wendland $C^2$ (W2): $\varphi(r) = (1 - \varepsilon r)_+^4 (4\varepsilon r + 1)$, $l = 0$
- Buhmann $C^2$ (B2): $\varphi(r) = 2r^4 \log r - 7/2\, r^4 + 16/3\, r^3 - 2r^2 + 1/6$, $l = 0$

We often consider $\varphi(\varepsilon\, \cdot)$, with $\varepsilon$ called the shape parameter.

Kernel notation: $K(x,y)\ (= K_\varepsilon(x,y)) = \Phi_\varepsilon(x - y) = \varphi(\varepsilon \|x - y\|_2)$.

Native space $\mathcal{N}_K(\Omega)$ (where $K$ is the reproducing kernel); finite subspace $\mathcal{N}_K(X) = \operatorname{span}\{K(\cdot, x) : x \in X\} \subseteq \mathcal{N}_K(\Omega)$.
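The SPD entries of the table translate directly into code. A sketch of the Gaussian, generalized multiquadric and Wendland $C^2$ radial functions (vectorized over `r`; function names are illustrative):

```python
import numpy as np

def gaussian(r, eps):
    # GA: C-infinity, strictly positive definite (l = 0)
    return np.exp(-(eps * r) ** 2)

def gen_multiquadric(r, eps):
    # GM: C-infinity, conditionally positive definite of order l = 2
    return (1.0 + (r / eps) ** 2) ** 1.5

def wendland_c2(r, eps):
    # W2: compactly supported on [0, 1/eps], strictly positive definite
    return np.maximum(1.0 - eps * r, 0.0) ** 4 * (4.0 * eps * r + 1.0)
```

The truncation `np.maximum(..., 0.0)` implements the $(\,\cdot\,)_+$ of the Wendland function, so it vanishes identically outside its support.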
Error estimates, conditioning and stability
Separation distance, fill distance and power function

$q_X := \frac{1}{2} \min_{i \neq j} \|x_i - x_j\|_2$  (separation distance)

$h_{X,\Omega} := \sup_{x \in \Omega} \min_{x_j \in X} \|x - x_j\|_2$  (fill distance)

$P_{\Phi_\varepsilon,X}(x) := \sqrt{\Phi_\varepsilon(0) - (u(x))^T A\, u(x)}$  (power function)

with $u$ the vector of cardinal functions.

Figure: the fill distance of 25 Halton points.

Figure: power function for the Gaussian kernel with $\varepsilon = 6$ on a grid of 81 uniform, Chebyshev and Halton points, respectively.
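Both geometric quantities are easy to estimate numerically. A sketch with random sites in the unit square (the sites are synthetic, and the fill distance is approximated by minimizing over a fine evaluation grid rather than over all of $\Omega$):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((25, 2))            # 25 scattered sites in Omega = [0, 1]^2

# pairwise distances; inf on the diagonal so the min skips i = j
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
qX = 0.5 * np.min(D + np.diag(np.full(len(X), np.inf)))   # separation distance

# fill distance, estimated on a fine evaluation grid over Omega
g = np.linspace(0, 1, 60)
G = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
hX = np.max(np.min(np.linalg.norm(G[:, None, :] - X[None, :, :], axis=-1),
                   axis=1))
```

For random (non-optimal) sites one expects $q_X$ well below $h_{X,\Omega}$, which is exactly the imbalance behind the trade-off principle discussed below.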
Pointwise error estimates

Theorem. Let $\Omega \subseteq \mathbb{R}^d$ and $K \in C(\Omega \times \Omega)$ be PD on $\mathbb{R}^d$. Let $X = \{x_1,\dots,x_N\}$ be a set of distinct points. Take a function $f \in \mathcal{N}_K(\Omega)$ and denote by $P_f$ its interpolant on $X$. Then, for every $x \in \Omega$,

$|f(x) - P_f(x)| \le P_{\Phi_\varepsilon,X}(x)\, \|f\|_{\mathcal{N}_K(\Omega)}.$  (5)

Theorem. Let $\Omega \subseteq \mathbb{R}^d$ and $K \in C^{2\kappa}(\Omega \times \Omega)$ be symmetric and positive definite, and $X = \{x_1,\dots,x_N\}$ a set of distinct points. Consider $f \in \mathcal{N}_K(\Omega)$ and its interpolant $P_f$ on $X$. Then there exist positive constants $h_0$ and $C$ (independent of $x$, $f$ and $\Phi$), with $h_{X,\Omega} \le h_0$, such that

$|f(x) - P_f(x)| \le C\, h_{X,\Omega}^{\kappa}\, C_K(x)\, \|f\|_{\mathcal{N}_K(\Omega)},$  (6)

where $C_K(x) = \max_{|\beta| = 2\kappa}\ \max_{w,z \in \Omega \cap B(x,\, c_2 h_{X,\Omega})} |D_2^{\beta} \Phi(w,z)|.$
Reducing the interpolation error

Obs: the choice of the shape parameter $\varepsilon$ that gives the smallest (possible) interpolation error is crucial.
- Trial and error
- Power function minimization
- Leave-one-out cross validation (LOOCV)

Trial-and-error strategy: interpolation of the 1-d sinc function with the Gaussian for $\varepsilon \in [0, 20]$, taking 100 values of $\varepsilon$ and different sets of equispaced data points.
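The trial-and-error strategy is just a scan over a grid of shape parameters. A sketch of the idea (the node count, the $\varepsilon$ range, and the use of NumPy's normalized `np.sinc` are illustrative choices, not the slides' exact setup):

```python
import numpy as np

def gauss(r, eps):
    return np.exp(-(eps * r) ** 2)

# 1-d sinc-type test function on [-5, 5] (np.sinc is the normalized sinc)
X = np.linspace(-5, 5, 15)
f = np.sinc(X)
xe = np.linspace(-5, 5, 301)
fe = np.sinc(xe)

# scan shape parameters and record the maximum error of each interpolant
eps_grid = np.linspace(0.5, 10, 40)
errs = []
for eps in eps_grid:
    A = gauss(np.abs(X[:, None] - X[None, :]), eps)
    alpha = np.linalg.solve(A, f)
    Pf = gauss(np.abs(xe[:, None] - X[None, :]), eps) @ alpha
    errs.append(float(np.max(np.abs(Pf - fe))))
best_eps = float(eps_grid[int(np.argmin(errs))])
```

Plotting `errs` against `eps_grid` reproduces the familiar U-shaped error curve: instability for small $\varepsilon$, loss of accuracy for large $\varepsilon$.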
Trade-off principle

Consider the errors (RMSE and MAXERR) and the condition number $\mu_2$ of $A = (\Phi_\varepsilon(\|x_i - x_j\|))_{i,j=1}^N$.

Figure: RMSE, MAXERR and condition number $\mu_2$, for 30 values of $\varepsilon \in [0.1, 20]$, for interpolation of the Franke function on a grid of Chebyshev points.

Trade-off or uncertainty principle [Schaback 1995]:
- accuracy vs stability
- accuracy vs efficiency
- accuracy and stability vs problem size
Accuracy vs stability
- The error bounds for non-stationary interpolation with infinitely smooth basis functions show that the error decreases (exponentially) as the fill distance decreases.
- For well-distributed data, a decrease in the fill distance also implies a decrease of the separation distance.
- But the condition estimates then imply that the condition number of $A$ grows exponentially.
- This leads to numerical instabilities which make it virtually impossible to obtain the highly accurate results promised by the theoretical error bounds.
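The exponential growth of the condition number is easy to observe experimentally. A sketch that refines equispaced points for a fixed Gaussian shape parameter (the values of $\varepsilon$ and $N$ are illustrative):

```python
import numpy as np

def gauss(r, eps=5.0):
    return np.exp(-(eps * r) ** 2)

# refine equispaced points: as the fill distance shrinks, cond(A) blows up
conds = []
for N in (5, 10, 15, 20):
    X = np.linspace(0, 1, N)
    A = gauss(np.abs(X[:, None] - X[None, :]))
    conds.append(float(np.linalg.cond(A)))
```

Each halving of the fill distance multiplies `conds` by orders of magnitude, while the theoretical error bound only promises a corresponding gain in accuracy that roundoff then destroys.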
Stable basis approaches
Problem setting and questions

Obs: the standard (data-dependent) basis of translates of $\mathcal{N}_K(X)$ is unstable and not flexible.

Question 1: is it possible to find a better basis $\mathcal{U}$ of $\mathcal{N}_K(X)$?

Question 2: how can we embed information about $K$ and $\Omega$ in the basis $\mathcal{U}$?

Question 3: can we extract $\tilde{\mathcal{U}} \subset \mathcal{U}$ s.t. $\tilde{s}_f$ is as good as $s_f$?
The natural basis

The natural (data-independent) basis for Hilbert spaces (Mercer's theorem, 1909). Let $K$ be a continuous, positive definite kernel on a bounded $\Omega$. Then $K$ has an eigenfunction expansion with non-negative coefficients (the eigenvalues) s.t.

$K(x,y) = \sum_{j \ge 0} \lambda_j\, \varphi_j(x) \varphi_j(y), \quad x, y \in \Omega.$

Moreover,

$\lambda_j \varphi_j(x) = \int_\Omega K(x,y)\, \varphi_j(y)\, dy =: T[\varphi_j](x), \quad x \in \Omega,\ j \ge 0,$

- $\{\varphi_j\}_{j > 0}$ orthonormal in $\mathcal{N}_K(\Omega)$;
- $\{\varphi_j\}_{j > 0}$ orthogonal in $L_2(\Omega)$, with $\|\varphi_j\|_{L_2(\Omega)}^2 = \lambda_j$, $j \ge 0$;
- $\sum_{j > 0} \lambda_j = K(0,0)\, |\Omega|$ (the operator is of trace class).

Notice: the functions $\varphi_j$ are known explicitly only in very few cases [Fasshauer, McCourt 2012, for the Gaussian RBF].
Notation
- $T_X = \{K(\cdot, x_i),\ x_i \in X\}$: the standard basis of translates;
- $\mathcal{U} = \{u_i \in \mathcal{N}_K(\Omega),\ i = 1,\dots,N\}$: another basis s.t. $\operatorname{span}(\mathcal{U}) = \operatorname{span}(T_X)$.

Change of basis [Pazouki, Schaback 2011]. Let $A_{ij} = K(x_i, x_j) \in \mathbb{R}^{N \times N}$. Any other basis $\mathcal{U}$ arises from a factorization

$A\, C_U = V_U \quad \text{or} \quad A = V_U\, C_U^{-1},$

where $V_U = (u_j(x_i))_{1 \le i,j \le N}$ and $C_U$ is the change-of-basis matrix.
- Each $\mathcal{N}_K(\Omega)$-orthonormal basis $\mathcal{U}$ arises from a symmetric decomposition $A = B^T B$ with $B = C_U^{-1}$, $V_U = B^T = (C_U^{-1})^T$.
- Each $\ell_2(X)$-orthonormal basis $\mathcal{U}$ arises from a decomposition $A = Q\, B$ with $Q = V_U$, $Q^T Q = I$, $B = C_U^{-1} = Q^T A$.

Notice: the best bases in terms of stability are the $\mathcal{N}_K(\Omega)$-orthonormal ones!

Q1: yes, we can!
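Both factorizations above can be produced with standard matrix decompositions. A sketch: Cholesky gives a symmetric decomposition $A = B^T B$ (hence an $\mathcal{N}_K(\Omega)$-orthonormal basis), QR gives $A = QB$ with $Q^T Q = I$ (hence an $\ell_2(X)$-orthonormal one); the kernel and nodes are illustrative:

```python
import numpy as np

def gauss(r, eps=6.0):
    return np.exp(-(eps * r) ** 2)

X = np.linspace(0, 1, 8)
A = gauss(np.abs(X[:, None] - X[None, :]))      # A_ij = K(x_i, x_j)

# N_K(Omega)-orthonormal basis: A = B^T B via Cholesky (A = L L^T, B = L^T),
# so the values of the new basis at the nodes are V_U = B^T = L
L = np.linalg.cholesky(A)
V_nat = L
recon_nat = V_nat @ L.T                          # V_U C_U^{-1} = A

# l2(X)-orthonormal basis: A = Q B with Q^T Q = I, Q = V_U, B = Q^T A
Q, B = np.linalg.qr(A)
```

Both reconstructions recover $A$, which is exactly the condition $A = V_U C_U^{-1}$ of the change-of-basis framework.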
WSVD basis: construction

Q2: how to embed information on $K$ and $\Omega$ in $\mathcal{U}$?

Symmetric Nyström method [Atkinson, Han 2001]. Idea: discretize the natural basis (Mercer's theorem) by a convergent cubature rule $(X, W)$, with nodes $X = \{x_j\}_{j=1}^N \subset \Omega$ and positive weights $W = \{w_j\}_{j=1}^N$:

$\lambda_j \varphi_j(x_i) = \int_\Omega K(x_i, y)\, \varphi_j(y)\, dy, \quad i = 1,\dots,N,\ j > 0;$

applying the cubature rule,

$\lambda_j \varphi_j(x_i) \approx \sum_{h=1}^N K(x_i, x_h)\, \varphi_j(x_h)\, w_h, \quad i, j = 1,\dots,N.$  (7)

Letting $W = \operatorname{diag}(w_j)$, we reduce to solving the eigenproblem

$\lambda v = (A\, W)\, v.$  (8)
WSVD basis: construction (cont.)

Rewrite (7) (using the fact that the weights are positive) as

$\lambda_j (\sqrt{w_i}\, \varphi_j(x_i)) = \sum_{h=1}^N (\sqrt{w_i}\, K(x_i, x_h)\, \sqrt{w_h}) (\sqrt{w_h}\, \varphi_j(x_h)), \quad i, j = 1,\dots,N,$

and then consider the corresponding scaled eigenvalue problem

$\lambda\, (\sqrt{W} v) = (\sqrt{W}\, A\, \sqrt{W})(\sqrt{W} v),$  (9)

which is equivalent to the previous one, now involving the symmetric and positive definite matrix

$A_W := \sqrt{W}\, A\, \sqrt{W}.$  (10)
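A sketch of the construction on $\Omega = [0,1]$, using the trapezoidal rule as cubature (so $\sum_i w_i = |\Omega| = 1$); the kernel and node count are illustrative:

```python
import numpy as np

def gauss(r, eps=6.0):
    return np.exp(-(eps * r) ** 2)

# trapezoidal cubature rule on Omega = [0, 1]: sum(w) = |Omega| = 1
N = 20
X = np.linspace(0, 1, N)
w = np.full(N, 1.0 / (N - 1))
w[0] *= 0.5
w[-1] *= 0.5

A = gauss(np.abs(X[:, None] - X[None, :]))
sqW = np.sqrt(w)
AW = (sqW[:, None] * A) * sqW[None, :]           # A_W = sqrt(W) A sqrt(W)

# A_W is symmetric PD, so its SVD A_W = Q diag(sig2) Q^T diagonalizes it;
# sig2 are the sigma_j^2 of the WSVD basis
Q, sig2, _ = np.linalg.svd(AW)
VU = (Q * np.sqrt(sig2)[None, :]) / sqW[:, None]  # V_U = W^{-1/2} Q Sigma
recon = ((Q * sig2[None, :]) @ Q.T) / (sqW[:, None] * sqW[None, :])
```

Two checks follow directly from the theory: $\sum_j \sigma_j^2 = \operatorname{tr}(A_W) = K(0,0)\,|\Omega| = 1$ here, and the columns of $V_U$ are $\ell_2^w(X)$-orthogonal with squared norms $\sigma_j^2$.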
WSVD basis: definition

$\{\lambda_j, \varphi_j\}_{j > 0}$ are then approximated by the eigenvalues/eigenvectors of $A_W$. This matrix is normal, so a singular value decomposition of $A_W$ is a unitary diagonalization.

Definition. A weighted SVD basis $\mathcal{U}$ is a basis for $\mathcal{N}_K(X)$ s.t.

$V_U = \sqrt{W}^{-1}\, Q\, \Sigma, \qquad C_U = \sqrt{W}\, Q\, \Sigma^{-1};$

since $A = V_U C_U^{-1}$, $Q\, \Sigma^2\, Q^T$ is the SVD of $A_W$. Here $\Sigma_{jj} = \sigma_j$, $j = 1,\dots,N$, and $\sigma_1^2 \ge \dots \ge \sigma_N^2 > 0$ are the singular values of $A_W$.
WSVD basis: properties

This basis is in fact an approximation of the natural one (provided $w_i > 0$, $\sum_{i=1}^N w_i = |\Omega|$).

Properties of the new basis $\mathcal{U}$ (cf. [De Marchi, Santin 2013]):
- $T_N[u_j](x) = \sigma_j\, u_j(x)$, $1 \le j \le N$, $x \in \Omega$;
- $\mathcal{N}_K(\Omega)$-orthonormal;
- $\ell_2^w(X)$-orthogonal, with $\|u_j\|_{\ell_2^w(X)}^2 = \sigma_j^2$, $u_j \in \mathcal{U}$;
- $\sum_{j=1}^N \sigma_j^2 = K(0,0)\, |\Omega|$.
WSVD basis: approximation

Interpolant: $s_f(x) = \sum_{j=1}^N (f, u_j)_K\, u_j(x)$, $x \in \Omega$.

WDLS: $s_f^M := \operatorname{argmin}\{ \|f - g\|_{\ell_2^w(X)} : g \in \operatorname{span}\{u_1,\dots,u_M\} \}$.

Weighted discrete least squares as truncation. Let $f \in \mathcal{N}_K(\Omega)$, $1 \le M \le N$. Then, for $x \in \Omega$,

$s_f^M(x) = \sum_{j=1}^M \frac{(f, u_j)_{\ell_2^w(X)}}{(u_j, u_j)_{\ell_2^w(X)}}\, u_j(x) = \sum_{j=1}^M \frac{(f, u_j)_{\ell_2^w(X)}}{\sigma_j^2}\, u_j(x) = \sum_{j=1}^M (f, u_j)_K\, u_j(x).$

Q3: can we extract $\tilde{\mathcal{U}} \subset \mathcal{U}$ s.t. $\tilde{s}_f$ is as good as $s_f$? Yes, we can: take $\tilde{\mathcal{U}} = \{u_1,\dots,u_M\}$.
WSVD basis: approximation II

Expanding $s_f^M$ in terms of the data values gives pseudo-cardinal functions $\ell_i$:

$s_f^M(x) = \sum_{i=1}^N f(x_i)\, \ell_i(x), \qquad \ell_i(x) = w_i \sum_{j=1}^M \frac{u_j(x_i)}{\sigma_j^2}\, u_j(x).$

Generalized power function and Lebesgue constant. If $f \in \mathcal{N}_K(\Omega)$,

$|f(x) - s_f^M(x)| \le P_{K,X}^{(M)}(x)\, \|f\|_{\mathcal{N}_K(\Omega)}, \quad x \in \Omega,$

where

$\big[ P_{K,X}^{(M)}(x) \big]^2 = K(0,0) - \sum_{j=1}^M [u_j(x)]^2$  (generalized PF),

and

$\|s_f^M\|_\infty \le \Lambda_X\, \|f\|_X,$

where $\Lambda_X = \max_{x \in \Omega} \sum_{i=1}^N |\ell_i(x)|$ is the pseudo-Lebesgue constant.
WSVD basis: sub-basis

How can we extract $\tilde{\mathcal{U}} \subset \mathcal{U}$ s.t. $\tilde{s}_f$ is as good as $s_f$?
- Recall that $\|u_j\|_{\ell_2^w(X)}^2 = \sigma_j^2 \to 0$;
- we can choose $M$ s.t. $\sigma_{M+1}^2 < \tau$;
- we don't need $u_j$ for $j > M$.

Figure: Gaussian kernel, $\sigma_j^2$ on equispaced points of the square, for decreasing shape parameters ($\varepsilon = 9$, $\varepsilon = 4$, ...).
WSVD basis: example

Table: optimal $M$ for different kernels (Gaussian, IMQ, Matérn) and shape parameters ($\varepsilon = 1, 4, 9$), i.e. the indexes such that the weighted least-squares approximant $s_f^M$ provides the best approximation of the function $f(x,y) = \cos(20(x + y))$ on the disk with center $C = (1/2, 1/2)$ and radius $R = 1/2$.
WSVD basis: example (cont.)

Figure: Franke's test function on the lens, IMQ kernel, $\varepsilon = 1$, and RMSE. Left: complete basis. Right: truncation with $\sigma_{M+1}^2 < \tau$.

Problem: we have to compute the whole basis before truncating!
Solution: Krylov methods, approximate SVD [Novati, Russo 2013].
New fast basis [De Marchi, Santin, BIT 2015]: a comparison

Table (columns: $N$, $M$, RMSE (new), RMSE (WSVD), time (new), time (WSVD)): comparison of the WSVD basis and the new basis. Computational time in seconds and corresponding RMSE for interpolation of a function given by a sum of Gaussians by GRBF, restricted to $N = 15^2, 23^2, 31^2, 39^2$ equally spaced points on $[-1, 1]^2$.

Notice: for each $N$ we computed the optimal $M$ using the new algorithm with a fixed tolerance $\tau$.
Rescaled, rational RBF and some applications
Rescaled RBF interpolation

Classical interpolant: $P_f(x) = \sum_{k=1}^N \alpha_k\, K(x, x_k)$, $x \in \Omega$, $x_k \in X$. [Hardy and Göpfert 1975] used multiquadrics $K(x,y) = \sqrt{1 + \epsilon^2 \|x - y\|^2}$.

Rescaled interpolant:

$\hat{P}_f(x) = \frac{P_f(x)}{P_g(x)} = \frac{\sum_{k=1}^N \alpha_k\, K(x, x_k)}{\sum_{k=1}^N \beta_k\, K(x, x_k)},$

where $P_g$ is the kernel interpolant of $g(x) = 1$, $x \in \Omega$.

Obs:
- Rescaled localized RBF (RL-RBF) based on CSRBF were introduced in [Deparis et al., SISC 2014]: they are smoother even for small radii of the support.
- In [De Marchi et al. 2017] it is shown that this is a Shepard PU method: exactness on constants.
- Linear convergence of localized rescaled interpolants is still an open problem [De Marchi and Wendland, draft 2017].
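The rescaled interpolant needs only one extra solve with the same matrix, for the constant function $g \equiv 1$. A sketch echoing the W2 setup of the example below (nodes and shape parameter as on that slide; the evaluation grid is an illustrative choice):

```python
import numpy as np

def w2(r, eps=5.0):
    # Wendland C2 with shape parameter eps (support radius 1/eps)
    return np.maximum(1.0 - eps * r, 0.0) ** 4 * (4.0 * eps * r + 1.0)

X = np.array([0.0, 1 / 6, 1 / 3, 0.5, 2 / 3, 5 / 6, 1.0])
f = X.copy()                                   # f(x) = x

A = w2(np.abs(X[:, None] - X[None, :]))
alpha = np.linalg.solve(A, f)                  # classical interpolant of f
beta = np.linalg.solve(A, np.ones_like(f))     # interpolant of g(x) = 1

xe = np.linspace(0, 1, 101)
E = w2(np.abs(xe[:, None] - X[None, :]))
P_hat = (E @ alpha) / (E @ beta)               # rescaled interpolant
```

At the nodes the denominator equals 1 and the numerator equals $f$, so interpolation is preserved; and for constant data the numerator is a multiple of the denominator, which is the exactness-on-constants property mentioned above.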
Example

Take $f(x) = x$ on $[0, 1]$ using W2 at the point set $X = \{0, 1/6, 1/3, 1/2, 2/3, 5/6, 1\}$, $\varepsilon = 5$ ($\epsilon = 1/r$).

Figure: (left) the interpolants and (right) the absolute error.

For more results see [Deparis et al. 2014, Idda's master thesis 2015].
RBF-PU interpolation

Let $\Omega = \bigcup_{j=1}^s \Omega_j$ (the subdomains can overlap), such that each $x \in \Omega$ belongs to a limited number $s_0 < s$ of subdomains. Then we consider non-negative weight functions $W_j$ on $\Omega_j$ s.t. $\sum_{j=1}^s W_j = 1$ is a partition of unity. A possible choice is

$W_j(x) = \frac{\tilde{W}_j(x)}{\sum_{k=1}^s \tilde{W}_k(x)}, \quad j = 1,\dots,s,$  (11)

where the $\tilde{W}_j$ are compactly supported functions on $\Omega_j$.

RBF-PU interpolant:

$I(x) = \sum_{j=1}^s R_j(x)\, W_j(x), \quad x \in \Omega,$  (12)

where $R_j$ is an RBF interpolant on $\Omega_j$, i.e. $R_j(x) = \sum_{i=1}^{N_j} \alpha_i^j\, K(x, x_i^j)$, with $N_j = |\Omega_j \cap X|$ nodes $x_k^j \in X$, $k = 1,\dots,N_j$.
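The Shepard normalization (11) is a one-liner once the compactly supported bumps are evaluated. A sketch with a hypothetical 1-d covering (centers, radius, and the Wendland-type bump are illustrative choices):

```python
import numpy as np

def bump(r):
    # compactly supported Wendland C2 bump used to build the Wtilde_j
    return np.maximum(1.0 - r, 0.0) ** 4 * (4.0 * r + 1.0)

# hypothetical covering of [0, 1] by 5 overlapping subdomains of radius rho
centers = np.linspace(0, 1, 5)
rho = 0.35

x = np.linspace(0, 1, 201)
Wt = bump(np.abs(x[:, None] - centers[None, :]) / rho)   # Wtilde_j(x)
W = Wt / Wt.sum(axis=1, keepdims=True)                   # Shepard weights (11)
```

Since the subdomains cover $[0,1]$, the denominator never vanishes, and the rows of `W` sum to one: exactly the partition-of-unity property needed in (12).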
Rescaling gives a Shepard method

We start by writing the interpolant of a function $f \in \mathcal{N}_K$ using the $N$ cardinal functions $u_j(x_i) = \delta_{ij}$:

$P_f = \sum_{j=1}^N f(x_j)\, u_j,$

so that for $g \equiv 1$ we get

$P_g = \sum_{j=1}^N u_j.$

The rescaled interpolant is then

$\hat{P}_f = \frac{\sum_{j=1}^N f(x_j)\, u_j}{\sum_{k=1}^N u_k} = \sum_{j=1}^N f(x_j)\, \frac{u_j}{\sum_{k=1}^N u_k} =: \sum_{j=1}^N f(x_j)\, \hat{u}_j,$

where we introduced the (new) cardinal functions $\hat{u}_j := u_j / \sum_{k=1}^N u_k$.

Corollary. The rescaled interpolation method is a Shepard-PU method, where the weight functions are defined as $\hat{u}_j = u_j / (\sum_{k=1}^N u_k)$, $\{u_j\}_j$ being the cardinal basis of $\operatorname{span}\{K(\cdot, x),\ x \in X\}$.
Variably scaled kernels (VSK)

Definition. Let $\psi : \mathbb{R}^d \to (0, \infty)$ be a given scale function. A variably scaled kernel (VSK) $K_\psi$ on $\mathbb{R}^d$ is

$K_\psi(x, y) := K((x, \psi(x)), (y, \psi(y))), \quad x, y \in \mathbb{R}^d,$

where $K$ is a kernel on $\mathbb{R}^{d+1}$.

Given $X$ and scale functions $\psi_j : \mathbb{R}^d \to (0, \infty)$, $j = 1,\dots,s$, the RVSK-PU is

$I_\psi(x) = \sum_{j=1}^s R_{\psi_j}(x)\, W_j(x), \quad x \in \Omega,$  (13)

with $R_{\psi_j}(x)$ the rescaled VSK interpolant on $\Omega_j$.

Obs: if $\Phi$ is radial,

$(A_{\psi_j})_{ik} = \Phi\Big( \sqrt{ \|x_i^j - x_k^j\|_2^2 + (\psi_j(x_i^j) - \psi_j(x_k^j))^2 } \Big), \quad i, k = 1,\dots,N_j.$
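The Obs above says a VSK matrix is just the ordinary kernel matrix of the augmented points $(x, \psi(x)) \in \mathbb{R}^{d+1}$. A sketch for $d = 1$ (the Gaussian kernel and the scale function $\psi$ are illustrative choices):

```python
import numpy as np

def gauss(r, eps=3.0):
    return np.exp(-(eps * r) ** 2)

# illustrative scale function psi : R -> (0, inf)
psi = lambda x: 0.5 + x ** 2

X = np.linspace(0, 1, 10)

# squared distances between augmented points (x, psi(x)) in R^(d+1), d = 1
r2 = (X[:, None] - X[None, :]) ** 2 \
     + (psi(X)[:, None] - psi(X)[None, :]) ** 2
A_psi = gauss(np.sqrt(r2))
```

Because the augmented points are pairwise distinct whenever the $x_i$ are, `A_psi` inherits symmetry and positive definiteness from the kernel on $\mathbb{R}^{d+1}$.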
Rational RBF (RRBF): definition

$R(x) = \frac{R^{(1)}(x)}{R^{(2)}(x)} = \frac{\sum_{k=1}^N \alpha_k\, K(x, x_k)}{\sum_{k=1}^N \beta_k\, K(x, x_k)}$

[Jakobsson et al. 2009, Sarra and Bai 2017] ⇒ RRBFs approximate well data with steep gradients or discontinuities [rational with PU + VSK in De Marchi et al. 2017].
Rational RBF: learning from rational functions

$d = 1$, polynomial case:

$r(x) = \frac{p_1(x)}{p_2(x)} = \frac{a_m x^m + \dots + a_0}{x^n + b_{n-1} x^{n-1} + \dots + b_0},$

with $M = m + n + 1$ unknowns (Padé approximation). If $M < N$, to get the coefficients we may solve the LS problem

$\min_{p_1 \in \Pi_m^1,\ p_2 \in \Pi_n^1} \sum_{k=1}^N |f(x_k) - r(x_k)|^2.$

RBF case. Let $X_m = \{x_k, \dots, x_{k+m-1}\}$, $X_n = \{x_j, \dots, x_{j+n-1}\} \subseteq X$ be non-empty, such that $m + n \le N$:

$R(x) = \frac{R^{(1)}(x)}{R^{(2)}(x)} = \frac{\sum_{i_1=k}^{k+m-1} \alpha_{i_1}\, K(x, x_{i_1})}{\sum_{i_2=j}^{j+n-1} \beta_{i_2}\, K(x, x_{i_2})},$  (14)

provided $R^{(2)}(x) \neq 0$ for all $x \in \Omega$.
Rational RBF: finding the coefficients

[Jakobsson et al. 2009] show that this is equivalent to solving the generalized eigenvalue problem

$\Sigma q = \lambda \Theta q, \quad \text{with} \quad \Sigma = \frac{1}{\|f\|_2^2}\, D^T A^{-1} D + A^{-1} \quad \text{and} \quad \Theta = \frac{1}{\|f\|_2^2}\, D^T D + I_N,$

where $I_N$ is the identity matrix.

$q$ is the eigenvector associated with the smallest eigenvalue!
New class of rational RBF [Buhmann, De Marchi, Perracchione 2018]

$\hat{P}_f(x) = \frac{\sum_{i=1}^N \alpha_i\, K(x, x_i) + \sum_{m=1}^L \gamma_m\, p_m(x)}{\sum_{k=1}^N \beta_k\, \bar{K}(x, x_k)} =: \frac{P_g(x)}{P_h(x)}$  (15)

Ratio of a CPD kernel $K$ of order $l$ and an associated PD kernel $\bar{K}$ ⇒ two kernel matrices, $\Phi_K$ and $\Phi_{\bar{K}}$.

Obs:
1. Once we know the function values $P_h(x_i) = h_i$, $i = 1,\dots,N$, we can construct $P_g$ so that it interpolates $g = (f_1 h_1, \dots, f_N h_N)^T$. Hence $\hat{P}_f$ interpolates the given function values $f$ at the nodes $X_N$.
2. If $K$ is PD, we fix $\bar{K} = K$, so that we deal with the same kernel matrix for both numerator and denominator.
3. We can prove well-posedness, find cardinal functions and give a stability analysis.
Lecture notes on RBF

"Lectures on Radial Basis Functions" by Stefano De Marchi and Emma Perracchione, Department of Mathematics "Tullio Levi-Civita", University of Padua (Italy), dated February 16:
1. Learning from splines
2. Positive definite functions
3. Conditionally positive definite functions
4. Error estimates
5. Stable bases for RBFs
6. Rational RBFs
7. The partition of unity method
8. Collocation method via RBF
9. Financial applications
10. Exercises
Other references

R. Cavoretto, S. De Marchi, A. De Rossi, E. Perracchione, G. Santin, Partition of unity interpolation using stable kernel-based techniques, Appl. Numer. Math. 116 (2017).
R. Cavoretto, A. De Rossi, E. Perracchione, Optimal selection of local approximants in RBF-PU interpolation, to appear in J. Sci. Comput. (2017).
S. De Marchi, G. Santin, Fast computation of orthonormal basis for RBF spaces through Krylov space methods, BIT 55 (2015).
S. De Marchi, A. Idda, G. Santin, A rescaled method for RBF approximation, Springer Proceedings in Mathematics and Statistics, Vol. 201 (2017).
S. De Marchi, E. Perracchione, A. Martínez Calomardo, M. Rossini, RBF-based partition of unity method for elliptic PDEs: adaptivity and stability issues via VSKs, submitted to J. Sci. Comput.
S. De Marchi, E. Perracchione, A. Martínez Calomardo, Rational radial basis function interpolation, submitted to J. Comput. Appl. Math.
M. Buhmann, S. De Marchi, E. Perracchione, Analysis of a new class of rational RBF expansions, submitted to IMA J. Numer. Anal.
E. Perracchione, Rational RBF-based partition of unity method for efficiently and accurately approximating 3D objects, arXiv preprint, to appear in Comput. Appl. Math.
Thanks for your attention (grazie per la vostra attenzione!), and thanks to R.IT.A.
More information1. Introduction. A radial basis function (RBF) interpolant of multivariate data (x k, y k ), k = 1, 2,..., n takes the form
A NEW CLASS OF OSCILLATORY RADIAL BASIS FUNCTIONS BENGT FORNBERG, ELISABETH LARSSON, AND GRADY WRIGHT Abstract Radial basis functions RBFs form a primary tool for multivariate interpolation, and they are
More informationPART IV Spectral Methods
PART IV Spectral Methods Additional References: R. Peyret, Spectral methods for incompressible viscous flow, Springer (2002), B. Mercier, An introduction to the numerical analysis of spectral methods,
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 6: Scattered Data Interpolation with Polynomial Precision Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu
More informationRKHS, Mercer s theorem, Unbounded domains, Frames and Wavelets Class 22, 2004 Tomaso Poggio and Sayan Mukherjee
RKHS, Mercer s theorem, Unbounded domains, Frames and Wavelets 9.520 Class 22, 2004 Tomaso Poggio and Sayan Mukherjee About this class Goal To introduce an alternate perspective of RKHS via integral operators
More informationScattered Data Approximation of Noisy Data via Iterated Moving Least Squares
Scattered Data Approximation o Noisy Data via Iterated Moving Least Squares Gregory E. Fasshauer and Jack G. Zhang Abstract. In this paper we ocus on two methods or multivariate approximation problems
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationA scaling perspective on accuracy and convergence in RBF approximations
A scaling perspective on accuracy and convergence in RBF approximations Elisabeth Larsson With: Bengt Fornberg, Erik Lehto, and Natasha Flyer Division of Scientific Computing Department of Information
More informationCIS 520: Machine Learning Oct 09, Kernel Methods
CIS 520: Machine Learning Oct 09, 207 Kernel Methods Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture They may or may not cover all the material discussed
More informationFall TMA4145 Linear Methods. Exercise set Given the matrix 1 2
Norwegian University of Science and Technology Department of Mathematical Sciences TMA445 Linear Methods Fall 07 Exercise set Please justify your answers! The most important part is how you arrive at an
More informationDIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION
Meshless Methods in Science and Engineering - An International Conference Porto, 22 DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION Robert Schaback Institut für Numerische und Angewandte Mathematik (NAM)
More informationINVERSE AND SATURATION THEOREMS FOR RADIAL BASIS FUNCTION INTERPOLATION
MATHEMATICS OF COMPUTATION Volume 71, Number 238, Pages 669 681 S 0025-5718(01)01383-7 Article electronically published on November 28, 2001 INVERSE AND SATURATION THEOREMS FOR RADIAL BASIS FUNCTION INTERPOLATION
More informationKernel Principal Component Analysis
Kernel Principal Component Analysis Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr
More informationReal Variables # 10 : Hilbert Spaces II
randon ehring Real Variables # 0 : Hilbert Spaces II Exercise 20 For any sequence {f n } in H with f n = for all n, there exists f H and a subsequence {f nk } such that for all g H, one has lim (f n k,
More informationSPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS
SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint
More informationGreen s Functions: Taking Another Look at Kernel Approximation, Radial Basis Functions and Splines
Green s Functions: Taking Another Look at Kernel Approximation, Radial Basis Functions and Splines Gregory E. Fasshauer Abstract The theories for radial basis functions (RBFs) as well as piecewise polynomial
More informationThe Laplacian PDF Distance: A Cost Function for Clustering in a Kernel Feature Space
The Laplacian PDF Distance: A Cost Function for Clustering in a Kernel Feature Space Robert Jenssen, Deniz Erdogmus 2, Jose Principe 2, Torbjørn Eltoft Department of Physics, University of Tromsø, Norway
More informationScalable kernel methods and their use in black-box optimization
with derivatives Scalable kernel methods and their use in black-box optimization David Eriksson Center for Applied Mathematics Cornell University dme65@cornell.edu November 9, 2018 1 2 3 4 1/37 with derivatives
More informationSeparation of Variables in Linear PDE: One-Dimensional Problems
Separation of Variables in Linear PDE: One-Dimensional Problems Now we apply the theory of Hilbert spaces to linear differential equations with partial derivatives (PDE). We start with a particular example,
More informationStable Parameterization Schemes for Gaussians
Stable Parameterization Schemes for Gaussians Michael McCourt Department of Mathematical and Statistical Sciences University of Colorado Denver ICOSAHOM 2014 Salt Lake City June 24, 2014 michael.mccourt@ucdenver.edu
More informationReproducing Kernel Hilbert Spaces
Reproducing Kernel Hilbert Spaces Lorenzo Rosasco 9.520 Class 03 February 11, 2009 About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert
More informationApproximation theory
Approximation theory Xiaojing Ye, Math & Stat, Georgia State University Spring 2019 Numerical Analysis II Xiaojing Ye, Math & Stat, Georgia State University 1 1 1.3 6 8.8 2 3.5 7 10.1 Least 3squares 4.2
More information3.1 Interpolation and the Lagrange Polynomial
MATH 4073 Chapter 3 Interpolation and Polynomial Approximation Fall 2003 1 Consider a sample x x 0 x 1 x n y y 0 y 1 y n. Can we get a function out of discrete data above that gives a reasonable estimate
More informationNumerical Methods - Numerical Linear Algebra
Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear
More informationRBF Collocation Methods and Pseudospectral Methods
RBF Collocation Methods and Pseudospectral Methods G. E. Fasshauer Draft: November 19, 24 Abstract We show how the collocation framework that is prevalent in the radial basis function literature can be
More informationNumerical solution of nonlinear sine-gordon equation with local RBF-based finite difference collocation method
Numerical solution of nonlinear sine-gordon equation with local RBF-based finite difference collocation method Y. Azari Keywords: Local RBF-based finite difference (LRBF-FD), Global RBF collocation, sine-gordon
More informationNumerical cubature on scattered data by radial basis functions
Numerical cubature on scattered data by radial basis functions A. Sommariva, Sydney, and M. Vianello, Padua September 5, 25 Abstract We study cubature formulas on relatively small scattered samples in
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationLinear Algebra in Actuarial Science: Slides to the lecture
Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations
More informationEECS 598: Statistical Learning Theory, Winter 2014 Topic 11. Kernels
EECS 598: Statistical Learning Theory, Winter 2014 Topic 11 Kernels Lecturer: Clayton Scott Scribe: Jun Guo, Soumik Chatterjee Disclaimer: These notes have not been subjected to the usual scrutiny reserved
More informationA Posteriori Error Bounds for Meshless Methods
A Posteriori Error Bounds for Meshless Methods Abstract R. Schaback, Göttingen 1 We show how to provide safe a posteriori error bounds for numerical solutions of well-posed operator equations using kernel
More informationMP 472 Quantum Information and Computation
MP 472 Quantum Information and Computation http://www.thphys.may.ie/staff/jvala/mp472.htm Outline Open quantum systems The density operator ensemble of quantum states general properties the reduced density
More informationA RADIAL BASIS FUNCTION METHOD FOR THE SHALLOW WATER EQUATIONS ON A SPHERE
A RADIAL BASIS FUNCTION METHOD FOR THE SHALLOW WATER EQUATIONS ON A SPHERE NATASHA FLYER AND GRADY B. WRIGHT Abstract. The paper derives the first known numerical shallow water model on the sphere using
More informationElements of Positive Definite Kernel and Reproducing Kernel Hilbert Space
Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space Statistical Inference with Reproducing Kernel Hilbert Space Kenji Fukumizu Institute of Statistical Mathematics, ROIS Department
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 43: RBF-PS Methods in MATLAB Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter 43 1 Outline
More informationOn Multivariate Newton Interpolation at Discrete Leja Points
On Multivariate Newton Interpolation at Discrete Leja Points L. Bos 1, S. De Marchi 2, A. Sommariva 2, M. Vianello 2 September 25, 2011 Abstract The basic LU factorization with row pivoting, applied to
More informationStabilization and Acceleration of Algebraic Multigrid Method
Stabilization and Acceleration of Algebraic Multigrid Method Recursive Projection Algorithm A. Jemcov J.P. Maruszewski Fluent Inc. October 24, 2006 Outline 1 Need for Algorithm Stabilization and Acceleration
More informationEcon Slides from Lecture 7
Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for
More informationToward Approximate Moving Least Squares Approximation with Irregularly Spaced Centers
Toward Approximate Moving Least Squares Approximation with Irregularly Spaced Centers Gregory E. Fasshauer Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 6066, U.S.A.
More informationRecent Results for Moving Least Squares Approximation
Recent Results for Moving Least Squares Approximation Gregory E. Fasshauer and Jack G. Zhang Abstract. We describe two experiments recently conducted with the approximate moving least squares (MLS) approximation
More informationUnsupervised Learning Techniques Class 07, 1 March 2006 Andrea Caponnetto
Unsupervised Learning Techniques 9.520 Class 07, 1 March 2006 Andrea Caponnetto About this class Goal To introduce some methods for unsupervised learning: Gaussian Mixtures, K-Means, ISOMAP, HLLE, Laplacian
More informationCOMP 558 lecture 18 Nov. 15, 2010
Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to
More informationFunctional Analysis Review
Outline 9.520: Statistical Learning Theory and Applications February 8, 2010 Outline 1 2 3 4 Vector Space Outline A vector space is a set V with binary operations +: V V V and : R V V such that for all
More informationReproducing Kernel Hilbert Spaces
Reproducing Kernel Hilbert Spaces Lorenzo Rosasco 9.520 Class 03 February 9, 2011 About this class Goal In this class we continue our journey in the world of RKHS. We discuss the Mercer theorem which gives
More informationRadial basis function approximation
Radial basis function approximation PhD student course in Approximation Theory Elisabeth Larsson 2017-09-18 E. Larsson, 2017-09-18 (1 : 77) Global RBF approximation Stable evaluation methods RBF partition
More informationThe following definition is fundamental.
1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic
More informationPreliminary Examination, Numerical Analysis, August 2016
Preliminary Examination, Numerical Analysis, August 2016 Instructions: This exam is closed books and notes. The time allowed is three hours and you need to work on any three out of questions 1-4 and any
More informationWavelets in Scattering Calculations
Wavelets in Scattering Calculations W. P., Brian M. Kessler, Gerald L. Payne polyzou@uiowa.edu The University of Iowa Wavelets in Scattering Calculations p.1/43 What are Wavelets? Orthonormal basis functions.
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 7: Conditionally Positive Definite Functions Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter
More informationMidterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015
Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic
More informationMATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL
MATH 3 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MAIN TOPICS FOR THE FINAL EXAM:. Vectors. Dot product. Cross product. Geometric applications. 2. Row reduction. Null space, column space, row space, left
More information1 Math 241A-B Homework Problem List for F2015 and W2016
1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let
More information11 a 12 a 21 a 11 a 22 a 12 a 21. (C.11) A = The determinant of a product of two matrices is given by AB = A B 1 1 = (C.13) and similarly.
C PROPERTIES OF MATRICES 697 to whether the permutation i 1 i 2 i N is even or odd, respectively Note that I =1 Thus, for a 2 2 matrix, the determinant takes the form A = a 11 a 12 = a a 21 a 11 a 22 a
More informationj=1 u 1jv 1j. 1/ 2 Lemma 1. An orthogonal set of vectors must be linearly independent.
Lecture Notes: Orthogonal and Symmetric Matrices Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Orthogonal Matrix Definition. Let u = [u
More informationRBF kernel method and its applications to clinical data
Proceedings of Kernel-based Methods and Function Approximation 216, Volume 9 216 Pages 13 18 RBF kernel method and its applications to clinical data Emma Perracchione a Ilaria Stura b Abstract In this
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 40: Symmetric RBF Collocation in MATLAB Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter
More informationMaryam Pazouki ½, Robert Schaback
ÓÖ Ã ÖÒ Ð ËÔ Maryam Pazouki ½, Robert Schaback ÁÒ Ø ØÙØ Ö ÆÙÑ Ö ÙÒ Ò Û Ò Ø Å Ø Ñ Ø ÍÒ Ú Ö ØØ ØØ Ò Ò ÄÓØÞ ØÖ ½ ¹½ ¼ ØØ Ò Ò ÖÑ ÒÝ Abstract Since it is well known [4] that standard bases of kernel translates
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 39: Non-Symmetric RBF Collocation in MATLAB Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter
More informationD. Shepard, Shepard functions, late 1960s (application, surface modelling)
Chapter 1 Introduction 1.1 History and Outline Originally, the motivation for the basic meshfree approximation methods (radial basis functions and moving least squares methods) came from applications in
More informationMath 307 Learning Goals. March 23, 2010
Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent
More informationThe Kernel Trick. Robert M. Haralick. Computer Science, Graduate Center City University of New York
The Kernel Trick Robert M. Haralick Computer Science, Graduate Center City University of New York Outline SVM Classification < (x 1, c 1 ),..., (x Z, c Z ) > is the training data c 1,..., c Z { 1, 1} specifies
More informationFinal Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson
Final Exam, Linear Algebra, Fall, 2003, W. Stephen Wilson Name: TA Name and section: NO CALCULATORS, SHOW ALL WORK, NO OTHER PAPERS ON DESK. There is very little actual work to be done on this exam if
More information