Consistency Estimates for gfd Methods and Selection of Sets of Influence
Oleg Davydov, University of Giessen, Germany
Localized Kernel-Based Meshless Methods for PDEs
ICERM / Brown University, 7-11 August 2017
Outline
1 Generalized Finite Difference Methods
2 Error of Polynomial and Kernel Numerical Differentiation
  - Numerical Differentiation
  - Polynomial Formulas
  - Kernel-Based Formulas
  - Least Squares Formulas
3 Selection of Sets of Influence
4 Conclusion
Generalized Finite Difference Methods

Model problem: Poisson equation with Dirichlet boundary conditions,
  Δu = f on Ω,   u|_∂Ω = g.

Localized numerical differentiation (Ξ ⊂ Ω̄ a discrete set, with small sets of influence Ξ_i ⊂ Ξ):
  Δu(ξ_i) ≈ Σ_{ξ_j ∈ Ξ_i} w_{i,j} u(ξ_j)   for all ξ_i ∈ Ξ \ ∂Ω.

Find a discrete approximate solution û defined on Ξ such that
  Σ_{ξ_j ∈ Ξ_i} w_{i,j} û(ξ_j) = f(ξ_i)   for ξ_i ∈ Ξ \ ∂Ω,
  û(ξ_i) = g(ξ_i)                          for ξ_i ∈ ∂Ω.

Sparse system matrix [w_{i,j}]_{ξ_i, ξ_j ∈ Ξ \ ∂Ω}.
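The scheme above can be sketched end to end on a regular grid, where the set of influence of each interior node consists of its four grid neighbours with the classical five-point weights. This is a minimal illustration in NumPy/SciPy, not code from the talk; the test problem u(x,y) = sin(πx)sin(πy) is our own choice:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

# Minimal GFD sketch: Laplace u = f on the unit square with Dirichlet data,
# using the five-point star as the local numerical differentiation formula.
u_exact = lambda x, y: np.sin(np.pi * x) * np.sin(np.pi * y)
f_rhs = lambda x, y: -2 * np.pi**2 * u_exact(x, y)   # Laplacian of u_exact

m = 20                              # grid intervals per direction
h = 1.0 / m
n = (m + 1)**2                      # total number of nodes
k = lambda i, j: i * (m + 1) + j    # node numbering

A = lil_matrix((n, n))
b = np.zeros(n)
for i in range(m + 1):
    for j in range(m + 1):
        row = k(i, j)
        if 0 < i < m and 0 < j < m:
            # interior node: weights w_{i,j} of the five-point star
            A[row, row] = -4.0 / h**2
            for p, q in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                A[row, k(p, q)] = 1.0 / h**2
            b[row] = f_rhs(i * h, j * h)
        else:
            # boundary node: enforce the Dirichlet data directly
            A[row, row] = 1.0
            b[row] = u_exact(i * h, j * h)

u_hat = spsolve(A.tocsr(), b)
err = max(abs(u_hat[k(i, j)] - u_exact(i * h, j * h))
          for i in range(m + 1) for j in range(m + 1))
print(err)
```

Bounded stability combined with the O(h²) consistency of the star yields an O(h²) solution error; for m = 20 the maximum nodal error is on the order of 10⁻³.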
Generalized Finite Difference Methods

Sets of influence: select Ξ_i ⊂ Ξ for each ξ_i ∈ Ξ \ ∂Ω, with ξ_i ∈ Ξ_i.
Ξ_i is the "star" or set of influence of ξ_i.
Generalized Finite Difference Methods

Consistency and Stability ⇒ Convergence:
  ‖û − u‖_Ξ ≤ S ‖[ Δu(ξ_i) − Σ_{ξ_j ∈ Ξ_i} w_{i,j} u(ξ_j) ]_{ξ_i ∈ Ξ\∂Ω}‖,
  (solution error on the left, consistency error on the right)
  S := ‖[w_{i,j}]^{-1}_{ξ_i,ξ_j ∈ Ξ\∂Ω}‖   (stability constant),
where ‖·‖ is a vector norm, e.g. ‖·‖_∞ (max) or the quadratic mean (rms), and for matrices the corresponding operator norm ‖·‖_∞ or ‖·‖_2.

If S is bounded, then the convergence order for a sequence of discretizations Ξ_n is determined by the consistency error
  Δu(ξ_i) − Σ_{ξ_j ∈ Ξ_i} w_{i,j} u(ξ_j)   (numerical differentiation error).
Numerical Differentiation

Given a finite set of points X = {x_1, ..., x_N} ⊂ R^d and function values f_j = f(x_j), we want to approximate the values Df(z) at arbitrary points z, where D is a linear differential operator
  Df(z) = Σ_{α ∈ Z^d_+, |α| ≤ k} a_α(z) ∂^α f(z),
k is the order of D, |α| := α_1 + ... + α_d, a_α(z) ∈ R.
Numerical Differentiation: Approximation Approach

Df(z) ≈ Dp(z), where p is an approximation of f, e.g.
- least squares fit from a finite dimensional space P
- partition of unity interpolant
- moving least squares fit
- RBF / kernel interpolant

If p = Σ_{i=1}^m a_i φ_i and the coefficients a_i depend linearly on the data f(x_j), i.e. a = A f_X, then
  p = φ^T a = φ^T A f_X,   Dp(z) = Dφ(z)^T A f_X = Σ_j w_j f(x_j),
with w^T := Dφ(z)^T A. This leads to a numerical differentiation formula
  Df(z) ≈ Σ_{j=1}^N w_j f(x_j),   w: a weight vector.
Numerical Differentiation: Exactness Approach

Require exactness of the numerical differentiation formula for all elements of a space P:
  Dp(z) = Σ_{j=1}^N w_j p(x_j)   for all p ∈ P.   Notation: w ∈ D|_P.
E.g., exactness for polynomials of a certain order q: P = Π^d_q, the space of polynomials of total degree < q in d variables (polynomial numerical differentiation).

Example: five point star (exact for Π^2_4), with x_1, x_2, x_3, x_4 the four neighbours of x_0 at distance h in the coordinate directions:
  Δu(x_0) ≈ (1/h²) ( u(x_1) + u(x_2) + u(x_3) + u(x_4) − 4u(x_0) ).
Numerical Differentiation: Exactness Approach

A classical method for computing weights w ∈ D|_{Π^d_q} is truncation of the Taylor expansion of the local error f − p near z (as in the Finite Difference Method).
Instead, we can view
  Dp(z) = Σ_{j=1}^N w_j p(x_j)   for all p ∈ Π^d_q
as an underdetermined linear system with respect to w, and pick solutions with desired properties.
Similar to quadrature rules (Gauss formulas), there are special point sets that admit weights with particularly high exactness order for a given N (five point star).
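The exactness conditions can be assembled directly as a linear system in w: one row per monomial, one column per point. A small NumPy sketch (our own illustration, not code from the talk) that recovers the five-point star for D = Δ and q = 4:

```python
import numpy as np

def laplacian_weights(X, z, q):
    """Solve sum_j w_j p(x_j) = (Delta p)(z) for all monomials p of total
    degree < q, via least squares (any solution of the consistent system)."""
    X = np.asarray(X, float)
    z = np.asarray(z, float)
    exps = [(a, b) for a in range(q) for b in range(q) if a + b < q]
    # one row per monomial (x-z)^a (y-z)^b, one column per point
    V = np.array([[(x[0] - z[0])**a * (x[1] - z[1])**b for x in X]
                  for a, b in exps])
    # (Delta p)(z) is nonzero only for the monomials (x-z)^2 and (y-z)^2
    rhs = np.array([2.0 if (a, b) in ((2, 0), (0, 2)) else 0.0
                    for a, b in exps])
    w, *_ = np.linalg.lstsq(V, rhs, rcond=None)
    return w

h = 0.1
X = [(0, 0), (h, 0), (-h, 0), (0, h), (0, -h)]
w = laplacian_weights(X, (0, 0), q=4)
print(w * h**2)   # approximately [-4, 1, 1, 1, 1]: the classical star
```

On these five points the formula of exactness order 4 is unique, so the solver reproduces the classical weights.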
Numerical Differentiation

Joint work with Robert Schaback:
- O. Davydov and R. Schaback, Error bounds for kernel-based numerical differentiation, Numer. Math., 132 (2016).
- O. Davydov and R. Schaback, Minimal numerical differentiation formulas, preprint, arXiv.
- O. Davydov and R. Schaback, Optimal stencils in Sobolev spaces, preprint, arXiv.
Polynomial Formulas: General Error Bound

Theorem. If w is exact for polynomials of order q > k (the order of D), then
  |Df(z) − Σ_{j=1}^N w_j f(x_j)| ≤ |f|_{q,Ω} Σ_{j=1}^N |w_j| ‖x_j − z‖_2^q,
where |f|_{q,Ω} := ( (1/q!) Σ_{|α|=q} (1/α!) ‖∂^α f‖²_{C(Ω)} )^{1/2}.

Proof. Let R(x) := f(x) − T_{q,z}f(x) be the remainder of the Taylor polynomial of order q. Recall the integral representation
  R(x) = q Σ_{|α|=q} ((x − z)^α / α!) ∫_0^1 (1 − t)^{q−1} ∂^α f(z + t(x − z)) dt.
Since q > k, it follows that DR(z) = 0. Moreover,
  |R(x_j)| ≤ Σ_{|α|=q} (|(x_j − z)^α| / α!) ‖∂^α f‖_{C(Ω)}
          ≤ ( Σ_{|α|=q} (x_j − z)^{2α} / α! )^{1/2} ( Σ_{|α|=q} (1/α!) ‖∂^α f‖²_{C(Ω)} )^{1/2}
          = ‖x_j − z‖_2^q |f|_{q,Ω},
using the Cauchy-Schwarz inequality and Σ_{|α|=q} (x − z)^{2α}/α! = ‖x − z‖_2^{2q}/q!.
Hence, by polynomial exactness,
  |Df(z) − Σ_{j=1}^N w_j f(x_j)| = |DR(z) − Σ_{j=1}^N w_j R(x_j)| ≤ Σ_{j=1}^N |w_j| |R(x_j)| ≤ |f|_{q,Ω} Σ_{j=1}^N |w_j| ‖x_j − z‖_2^q.
Polynomial Formulas: General Error Bound

If w is exact for polynomials of order q > k, the bound gives in particular an error estimate in terms of the Lebesgue (stability) constant ‖w‖_1 := Σ_{j=1}^N |w_j|:
  |Df(z) − Σ_j w_j f(x_j)| ≤ |f|_{q,Ω} ‖w‖_1 h^q_{z,X},
where h_{z,X} := max_{1≤j≤N} ‖x_j − z‖_2 is the radius of the set of influence.
Applicable in particular to polyharmonic formulas.
Polynomial Formulas: General Error Bound

The best bound is obtained if Σ_j |w_j| ‖x_j − z‖_2^q is minimized over all weights w satisfying the exactness condition
  Dp(z) = Σ_{j=1}^N w_j p(x_j),   p ∈ Π^d_q   (w ∈ D|_{Π^d_q}).
We call these ℓ_{1,q}-minimal weights.
Polynomial Formulas: ℓ_{1,µ}-minimal weights

An ℓ_{1,µ}-minimal (µ ≥ 0) weight vector w* satisfies
  Σ_{j=1}^N |w*_j| ‖x_j − z‖_2^µ = inf { Σ_{j=1}^N |w_j| ‖x_j − z‖_2^µ : w ∈ R^N, w ∈ D|_{Π^d_q} }.

- ℓ_{1,µ}-minimal weights can be found by linear programming (e.g. the simplex algorithm if N is small).
- µ = 0: formulas with optimal stability constant Σ_j |w_j|.
- Our error bound suggests the choice µ = q.
- w* is sparse in the sense that the number of nonzero w*_j does not exceed dim Π^d_q.
- Considered by Seibold (2006) for D = Δ under the additional condition of positivity, which restricts exactness to q ≤ 4.
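The ℓ_1-type objective becomes a linear program after the standard split w = w⁺ − w⁻ with w⁺, w⁻ ≥ 0. A SciPy sketch for D = Δ (our own illustration; the candidate point set is arbitrary):

```python
import numpy as np
from scipy.optimize import linprog

def l1_mu_minimal_weights(X, z, q, mu):
    """Minimize sum_j |w_j| ||x_j - z||^mu subject to exactness of order q
    for the Laplacian, as an LP in (w_plus, w_minus) >= 0."""
    X = np.asarray(X, float)
    z = np.asarray(z, float)
    exps = [(a, b) for a in range(q) for b in range(q) if a + b < q]
    V = np.array([[(x[0] - z[0])**a * (x[1] - z[1])**b for x in X]
                  for a, b in exps])
    rhs = np.array([2.0 if (a, b) in ((2, 0), (0, 2)) else 0.0
                    for a, b in exps])
    cost = np.linalg.norm(X - z, axis=1) ** mu   # |w_j| weighted by radius^mu
    res = linprog(np.concatenate([cost, cost]),
                  A_eq=np.hstack([V, -V]), b_eq=rhs,
                  bounds=[(0, None)] * (2 * len(X)))
    assert res.success
    n = len(X)
    return res.x[:n] - res.x[n:]

h = 0.1
star = [(0, 0), (h, 0), (-h, 0), (0, h), (0, -h)]
ring = [(0.3 * np.cos(t), 0.3 * np.sin(t))
        for t in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
w = l1_mu_minimal_weights(star + ring, (0, 0), q=4, mu=4)
print(np.count_nonzero(np.abs(w) > 1e-7))   # at most dim Pi^2_4 = 10 nonzeros
```

A basic (vertex) LP solution has at most as many nonzeros as equality constraints, which is exactly the sparsity statement above.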
Polynomial Formulas: ℓ_{1,µ}-minimal weights

Influence of µ on the location of the nonzero weights w*_j.
[Figure: ℓ_{1,µ}-minimal weights of exactness order q = 7 computed for the Laplacian at the origin from the data at 150 points, for (a) µ = 0, (b) µ = q = 7, (c) µ = 15; the locations of the 28 points x_j with w*_j ≠ 0 are shown.]
Polynomial Formulas: ℓ_{1,µ}-minimal weights

Scalability: ℓ_{1,µ}-minimal formulas are scalable in the sense that the weight vector w can be computed by scaling (z, X) to (0, Y) in the unit circle via y_j = h^{-1}_{z,X} (x_j − z), obtaining the weight vector v for the mapped differential operator, and scaling back by w_j = h^{-k}_{z,X} v_j. This allows stable computation of these formulas for any small radius h_{z,X} by upscaling preconditioning.
Any scalable differentiation formulas admit error bounds of the type C h^{s−k}_{z,X} for sufficiently smooth functions f, where C depends on the Lebesgue constant of the upscaled formula v.
Polynomial Formulas: Growth Function

Duality:
  inf { Σ_{i=1}^N |w_i| ‖x_i − z‖_2^q : w ∈ D|_{Π^d_q} }
    = sup { Dp(z) : p ∈ Π^d_q, |p(x_i)| ≤ ‖x_i − z‖_2^q for all i } =: ρ_{q,D}(z, X).
A special case of Fenchel's duality theorem, but it can also be proved directly using extension of linear functionals. We call ρ_{q,D}(z, X) the growth function.
More generally, for any seminorm ‖·‖ on R^N,
  inf { ‖w‖ : w ∈ D|_{Π^d_q} } = sup { Dp(z) : p ∈ Π^d_q, ‖p|_X‖ ≤ 1 } =: ρ_{q,D}(z, X, ‖·‖).
Polynomial Formulas: Growth Function

Theorem. For any ℓ_{1,q}-minimal formula,
  |Df(z) − Σ_{j=1}^N w_j f(x_j)| ≤ ρ_{q,D}(z, X) |f|_{q,Ω}.
As we will see, similar estimates involving ρ_{q,D}(z, X) hold for kernel methods as well!
Polynomial Formulas: Growth Function

Default behavior of the growth function
  ρ_{q,D}(z, X) := sup { Dp(z) : p ∈ Π^d_q, |p(x_i)| ≤ ‖x_i − z‖_2^q for all i },   h_{z,X} := max_{1≤j≤N} ‖z − x_j‖_2.
If X is a good set for Π^d_q (a "norming set"), then
  max_{‖x − z‖_2 ≤ h_{z,X}/2} |p(x)| ≤ C max_i |p(x_i)| ≤ C h^q_{z,X},
hence |Dp(z)| ≤ C h^{q−k}_{z,X} and ρ_{q,D}(z, X) ≤ C h^{q−k}_{z,X}, so that we get an error bound of order h^{q−k}_{z,X}:
  |Df(z) − Σ_j w_j f(x_j)| ≤ C h^{q−k}_{z,X} |f|_{q,Ω}.
X does not have to be a norming set. Example: five point star.
Polynomial Formulas: Growth Function

Example: five point stencil for the Laplace operator in 2D,
  Δu(z) ≈ Σ_{i=1}^5 w_i u(x_i),   {x_1, ..., x_5} = X_h = {z, z ± (h,0), z ± (0,h)} ⊂ Ω.
The classical FD formula with weights w_2 = w_3 = w_4 = w_5 = 1/h², w_1 = −4/h² is exact for Π^2_4. It is the only formula on X_h with this exactness order, hence it is ℓ_{1,4}-minimal.
It is easy to show that ρ_{4,Δ}(z, X_h) = 4h². Hence
  |Δf(z) − Σ_{i=1}^5 w_i f(x_i)| ≤ 4h² |f|_{4,Ω},
similar to classical error estimates for the five point stencil.
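The value ρ_{4,Δ}(z, X_h) = 4h² can be verified numerically: by its definition, the growth function is the optimal value of a linear program in the coefficients of p. A SciPy sketch (our own verification, not code from the talk):

```python
import numpy as np
from scipy.optimize import linprog

# Growth function on the five-point star as an LP in the coefficients of p:
# maximize (Delta p)(0) subject to |p(x_i)| <= ||x_i - z||_2^q, p in Pi^2_4.
h, q = 0.1, 4
X = np.array([(0, 0), (h, 0), (-h, 0), (0, h), (0, -h)], float)
exps = [(a, b) for a in range(q) for b in range(q) if a + b < q]
M = np.array([[x**a * y**b for a, b in exps] for x, y in X])  # p(x_i) rows
r = np.linalg.norm(X, axis=1) ** q                            # constraint bounds
# (Delta p)(0) = 2 c_{20} + 2 c_{02}; linprog minimizes, so negate
obj = -np.array([2.0 if (a, b) in ((2, 0), (0, 2)) else 0.0 for a, b in exps])
res = linprog(obj, A_ub=np.vstack([M, -M]), b_ub=np.concatenate([r, r]),
              bounds=[(None, None)] * len(exps))
rho = -res.fun
print(rho)   # approximately 4 * h**2 = 0.04
```

The maximizer here is p(x, y) = h²(x² + y²): it vanishes at z, equals h⁴ at the four arms, and has Δp(0) = 4h².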
Polynomial Formulas: Growth Function

A lower bound for well separated centers.
Theorem. Given z and X, let γ ≥ 1 be such that ‖x_j − z‖_2 ≤ γ dist(x_j, X \ {x_j}), j = 1, ..., N. For any w and q > k there exists a function f ∈ C^∞(R^d) such that
  |Df(z) − Σ_{j=1}^N w_j f(x_j)| ≥ C |f|_{q,Ω} Σ_{j=1}^N |w_j| ‖x_j − z‖_2^q,
where C depends only on q, k, N, d and γ. In particular, if w is exact for polynomials of order q, then
  |Df(z) − Σ_j w_j f(x_j)| ≥ C ρ_{q,D}(z, X) |f|_{q,Ω}.
Kernel-Based Formulas

Let K : R^d × R^d → R be a symmetric kernel, conditionally positive definite (cpd) of order s ≥ 0 on R^d (positive definite when s = 0). Π^d_s: polynomials of order s, with basis p_1, ..., p_M, M = dim Π^d_s.
For a Π^d_s-unisolvent X, the kernel interpolant of the form
  r_{X,K,f} = Σ_{j=1}^N a_j K(·, x_j) + Σ_{j=1}^M b_j p_j,   a_j, b_j ∈ R,
is uniquely determined from the linear system
  r_{X,K,f}(x_k) = Σ_{j=1}^N a_j K(x_k, x_j) + Σ_{j=1}^M b_j p_j(x_k) = f_k,   1 ≤ k ≤ N,
  Σ_{j=1}^N a_j p_i(x_j) = 0,   1 ≤ i ≤ M.
Kernel-Based Formulas

Examples. K(x, y) = φ(‖x − y‖), where φ : R_+ → R is then a radial basis function (RBF).
s = 0: any φ with positive Fourier transform of Φ(x) = φ(‖x‖):
  - Gaussian φ(r) = e^{−r²}
  - inverse quadratic 1/(1 + r²)
  - inverse multiquadric 1/√(1 + r²)
  - Matérn kernel K_ν(r) r^ν, ν > 0 (K_ν: modified Bessel function of the second kind)
s ≥ 1: multiquadric √(1 + r²)
s ≥ ⌊ν/2⌋ + 1: polyharmonic / thin plate spline r^ν {log r}
K(εx, εy) are also cpd kernels (ε > 0: shape parameter).
Kernel-Based Formulas: Optimal Recovery

r_{X,K,f} depends linearly on the data f_j = f(x_j):
  r_{X,K,f}(z) = Σ_{j=1}^N w*_j f(x_j),   w*_j ∈ R,
where w*_j = w*_j(z) depends on the evaluation point z ∈ R^d.
The weights w* = {w*_j}_{j=1}^N provide optimal recovery of f(z) for f in the native space F_K associated with K:
  inf_{w ∈ R^N, w exact on Π^d_s} sup_{‖f‖_{F_K} ≤ 1} |f(z) − Σ_j w_j f(x_j)| = sup_{‖f‖_{F_K} ≤ 1} |f(z) − Σ_j w*_j f(x_j)|,
where exactness for polynomials in Π^d_s is an empty condition if s = 0.
Kernel-Based Formulas: "Native Space" F_K

If K is positive definite, then F_K is just the reproducing kernel Hilbert space associated with K; in the cpd case it is a generalization of it (a semi-Hilbert space).
In the translation-invariant case K(x, y) = Φ(x − y) on R^d,
  F_K = { f ∈ L₂(R^d) : ‖f‖_{F_K} := ‖ f̂ / Φ̂^{1/2} ‖_{L₂(R^d)} < ∞ }   (ˆ: Fourier transform).
Matérn kernel K(x, y) = K_ν(‖x − y‖) ‖x − y‖^ν: Φ̂(ω) = c_{ν,d} (1 + ‖ω‖²)^{−ν−d/2}, hence ‖f‖_{F_K} = c_{ν,d} ‖f‖_{H^{ν+d/2}(R^d)} (Sobolev space).
Polyharmonics: ‖f‖_{F_K} equivalent to a Sobolev seminorm.
C^∞ kernels: spaces of infinitely differentiable functions.
Kernel-Based Formulas

A kernel-based numerical differentiation formula is obtained by applying D to the kernel interpolant (approximation approach):
  Df(z) ≈ D r_{X,K,f}(z) = Σ_{j=1}^N w*_j f(x_j).
The weights w*_j can be calculated by solving the system
  Σ_{j=1}^N w*_j K(x_k, x_j) + Σ_{j=1}^M c_j p_j(x_k) = [D K(·, x_k)](z),   1 ≤ k ≤ N,
  Σ_{j=1}^N w*_j p_i(x_j) = D p_i(z),   1 ≤ i ≤ M.
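For a positive definite kernel (s = 0) the polynomial block disappears and the weights solve a single symmetric linear system. A sketch with the Gaussian kernel at shape parameter ε = 1 (our own illustration, not code from the talk):

```python
import numpy as np

def gauss_rbf_fd_laplacian(X, z):
    """RBF-FD weights for the Laplacian with K(x,y) = exp(-||x-y||^2) in d=2:
    solve [K(x_k, x_j)] w = [Delta_x K(x, x_k)](z)  (s = 0, no poly block)."""
    X = np.asarray(X, float)
    z = np.asarray(z, float)
    d2 = lambda P, Q: ((P[:, None, :] - Q[None, :, :])**2).sum(-1)
    A = np.exp(-d2(X, X))                  # kernel matrix [K(x_k, x_j)]
    r2 = ((X - z)**2).sum(-1)
    # Delta_x exp(-||x-y||^2) = (4||x-y||^2 - 2d) exp(-||x-y||^2), d = 2
    b = (4 * r2 - 4) * np.exp(-r2)
    return np.linalg.solve(A, b)

h = 0.2
X = np.array([(0, 0), (h, 0), (-h, 0), (0, h), (0, -h)], float)
w = gauss_rbf_fd_laplacian(X, np.zeros(2))
fX = X[:, 0]**2 + X[:, 1]**2               # f(x,y) = x^2 + y^2, Delta f = 4
print(w @ fX)                              # close to 4
```

Unlike the polynomial stencil, the Gaussian weights are not exactly polynomially reproducing at s = 0, so the result is only close to 4; the deviation shrinks with h and with the shape parameter.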
Kernel-Based Formulas: Optimal Recovery

Kernel-based weights w* = {w*_j}_{j=1}^N provide optimal recovery of Df(z) from f(x_j), j = 1, ..., N, for f ∈ F_K:
  inf_{w ∈ R^N, w ∈ D|_{Π^d_s}} sup_{‖f‖_{F_K} ≤ 1} |Df(z) − Σ_j w_j f(x_j)| = sup_{‖f‖_{F_K} ≤ 1} |Df(z) − Σ_j w*_j f(x_j)|,
where F_K is the native space of K and w ∈ D|_{Π^d_s} denotes exactness of the numerical differentiation formula for polynomials in Π^d_s.
For example, the formula obtained with the Matérn kernel K(x, y) = K_ν(‖x − y‖) ‖x − y‖^ν, ν > 0 (s = 0), gives the best possible estimate of Df(z) if we only know that f belongs to the Sobolev space F_K = H^{ν+d/2}(R^d).
Kernel-Based Formulas: Optimal Recovery

The optimal recovery error
  P_X(z) := sup_{‖f‖_{F_K} ≤ 1} |Df(z) − Σ_j w*_j f(x_j)|
is called the power function. With ε_w f := Df(z) − Σ_j w_j f(x_j), it can be evaluated as
  P_X(z)² = ε^x_{w*} ε^y_{w*} K(x, y),
and can be used to optimize the choice of the local set X.
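Expanding ε^x_{w*} ε^y_{w*} K gives the computable expression P² = D^x D^y K(z, z) − 2 wᵀb + wᵀAw, where A and b are the matrix and right-hand side of the weight system. A Gaussian-kernel sketch on the five-point star (our own illustration; the value D^x D^y K(z, z) = 32 is the Gaussian-specific constant for the Laplacian in d = 2):

```python
import numpy as np

# Power function of the Gaussian RBF-FD Laplacian stencil (d = 2):
# P^2 = DxDyK(z,z) - 2 w.b + w.A.w, which reduces to DxDyK - w.b since Aw = b.
h = 0.2
X = np.array([(0, 0), (h, 0), (-h, 0), (0, h), (0, -h)], float)
z = np.zeros(2)
d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
A = np.exp(-d2)                        # [K(x_k, x_j)]
r2 = ((X - z)**2).sum(-1)
b = (4 * r2 - 4) * np.exp(-r2)         # [Delta_x K(x, x_k)](z), d = 2
w = np.linalg.solve(A, b)
# Delta_x Delta_y exp(-||x-y||^2) at x = y: apply the radial Laplacian twice
# to e^{-s}, s = ||x-y||^2; in d = 2 the value is 32.
DDK = 32.0
P2 = DDK - 2 * (w @ b) + w @ A @ w
print(P2)   # small and nonnegative
```

Small P² certifies that this stencil is near-optimal for the Laplacian among all formulas using these five nodes, for f in the Gaussian native space.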
Kernel-Based Formulas: Error Bounds

Theorem. For any q ≥ max{s, k + 1},
  |Df(z) − D r_{X,K,f}(z)| ≤ ρ_{q,D}(z, X) C_{K,q} ‖f‖_{F_K},   f ∈ F_K,
as soon as ∂^{α,β} K(x, y) ∈ C(Ω × Ω) for |α|, |β| ≤ q, where ρ_{q,D}(z, X) is the growth function and
  C_{K,q} := (1/q!) ( Σ_{|α|=|β|=q} (q over α)(q over β) ‖∂^{α,β} K‖²_{C(Ω×Ω)} )^{1/4} < ∞.

- Compare with the optimal error bound of polynomial approximation:
  |Df(z) − Σ_{j=1}^N w_j f(x_j)| ≤ ρ_{q,D}(z, X) |f|_{q,Ω}.
- Robustness: prior knowledge of the approximation order attainable on X is not needed, since the estimate holds for all q.
- The growth function ρ_{q,D}(z, X) can be evaluated on repeated patterns to get estimates without unknown constants. E.g., ρ_{4,Δ}(z, X) = 4h² for the five point star, leading to the estimate
  |Δf(z) − Δ r_{X_h,K,f}(z)| ≤ 4h² C_{K,4} ‖f‖_{F_K}
if the kernel K is sufficiently smooth.
Kernel-Based Formulas: Polyharmonic Formulas

Polyharmonic formulas with φ(r) = r^ν {log r} and s ≥ ⌊ν/2⌋ + 1:
- If m := (ν + d)/2 is an integer and s ≤ m, they provide optimal recovery in the Beppo-Levi space BL_m(R^d), the semi-Hilbert space generated by the m-th order Sobolev seminorm, among all formulas with polynomial exactness order s, and admit error bounds of the type C_1 h^{ν/2 − k}_{z,X} for f ∈ BL_m(R^d).
- They are scalable and can therefore be stably computed by upscaling preconditioning for any small radius h_{z,X}. The constant C_1 depends on the power function of the upscaled formula v.
- As any scalable differentiation formulas of exactness order s, they also admit error bounds of the type C_2 h^{s−k}_{z,X} for sufficiently smooth f, where C_2 depends on the Lebesgue constant of v.
- The robust kernel-based estimates are, however, not applicable.
Least Squares Formulas: Discrete Least Squares

Let X = {x_1, ..., x_N} be unisolvent for Π^d_q (N ≥ dim Π^d_q). The weighted least squares polynomial L^θ_{X,q} f ∈ Π^d_q is uniquely defined by the condition
  ‖(L^θ_{X,q} f − f)|_X‖_{2,θ} = min { ‖(p − f)|_X‖_{2,θ} : p ∈ Π^d_q },
where ‖v‖_{2,θ} := ( Σ_{j=1}^N θ_j v_j² )^{1/2},   θ = [θ_1, ..., θ_N]^T,   θ_j > 0.
- Exact for polynomials: L^θ_{X,q} p = p for all p ∈ Π^d_q.
- Numerical differentiation: Df(z) ≈ D L^θ_{X,q} f(z) = Σ_{j=1}^N w^{2,θ}_j f(x_j).
- The weights w^{2,θ}_j are scalable.
Least Squares Formulas: Dual Formulation

The weight vector w^{2,θ} of Df(z) ≈ D L^θ_{X,q} f(z) = Σ_j w^{2,θ}_j f(x_j) solves the quadratic minimization problem
  ‖w^{2,θ}‖²_{2,θ^{-1}} = inf { ‖w‖²_{2,θ^{-1}} : w ∈ R^N, w ∈ D|_{Π^d_q} },
where θ^{-1} := [θ_1^{-1}, ..., θ_N^{-1}]^T and ‖w‖_{2,θ^{-1}} = ( Σ_j w_j² / θ_j )^{1/2}.
It follows that
  ‖w^{2,θ}‖_{2,θ^{-1}} = sup { Dp(z) : p ∈ Π^d_q, ‖p|_X‖_{2,θ} ≤ 1 } =: ρ_{q,D}(z, X, ‖·‖_{2,θ^{-1}}),
a generalized growth function.
Least Squares Formulas

Theorem.
  |Df(z) − D L^θ_{X,q} f(z)| ≤ ρ_{q,D}(z, X, ‖·‖_{2,θ^{-1}}) ( Σ_{j=1}^N θ_j ‖x_j − z‖_2^{2q} )^{1/2} |f|_{q,Ω}.
In particular, for θ_j = ‖x_j − z‖_2^{−2q},
  |Df(z) − D L^q_{X,q} f(z)| ≤ √N ρ_{q,D}(z, X, 2) |f|_{q,Ω},
where
  ρ_{q,D}(z, X, 2) = ‖w^{2,q}‖_{2,q} := ( Σ_{j=1}^N (w^{2,q}_j)² ‖x_j − z‖_2^{2q} )^{1/2}.
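The dual quadratic program has the closed-form solution w = Θ Vᵀ (V Θ Vᵀ)^{-1} rhs, obtained from the Lagrange conditions, where V is the monomial matrix, Θ = diag(θ), and rhs collects the values Dp(z). A sketch for D = Δ (our own illustration; flooring the radius at h so that θ_j = ‖x_j − z‖^{−2q} stays finite at the centre node is our own normalization, not from the talk):

```python
import numpy as np

def ls_diff_weights(X, z, q, theta):
    """Weights of D L^theta_{X,q} f(z) for D = Laplacian in 2D: minimize
    sum_j w_j^2 / theta_j subject to exactness on Pi^2_q, via
    w = Theta V^T (V Theta V^T)^{-1} rhs (X must be Pi^2_q-unisolvent)."""
    X = np.asarray(X, float)
    z = np.asarray(z, float)
    exps = [(a, b) for a in range(q) for b in range(q) if a + b < q]
    V = np.array([[(x[0] - z[0])**a * (x[1] - z[1])**b for x in X]
                  for a, b in exps])
    rhs = np.array([2.0 if (a, b) in ((2, 0), (0, 2)) else 0.0
                    for a, b in exps])
    lam = np.linalg.solve((V * theta) @ V.T, rhs)   # Lagrange multipliers
    return theta * (V.T @ lam)

h, q = 0.1, 3
X = np.array([(i * h, j * h) for i in (-1, 0, 1) for j in (-1, 0, 1)], float)
z = np.zeros(2)
r = np.maximum(np.linalg.norm(X - z, axis=1), h)   # floored radius (assumption)
w = ls_diff_weights(X, z, q, theta=r ** (-2 * q))
fX = X[:, 0]**2 + X[:, 1]**2
print(w @ fX)   # equals Delta f(z) = 4 up to rounding, since f is in Pi^2_3
```

Exactness on Π²_3 holds for any admissible θ; the choice of θ only changes which of the many exact weight vectors is selected.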
Least Squares Formulas

Connection between ρ_{q,D}(z, X) and ρ_{q,D}(z, X, 2). We have
  ρ_{q,D}(z, X, 2) ≤ ρ_{q,D}(z, X) ≤ √N ρ_{q,D}(z, X, 2).
- This implies for the least squares formulas with θ_j = ‖x_j − z‖_2^{−2q} an error bound in terms of ρ_{q,D}(z, X):
  |Df(z) − D L^q_{X,q} f(z)| ≤ √N ρ_{q,D}(z, X) |f|_{q,Ω},
which is only by a factor √N worse than the error bound for the ℓ_{1,q}-minimal formula.
- We can estimate ρ_{q,D}(z, X) with the help of ρ_{q,D}(z, X, 2), which is cheaper to compute by quadratic minimization or orthogonal decompositions instead of ℓ_1 minimization.
Selection of Sets of Influence

Sets of influence: select Ξ_i ⊂ Ξ for each ξ_i ∈ Ξ \ ∂Ω, with ξ_i ∈ Ξ_i.
Ξ_i is the "star" or set of influence of ξ_i.
Selection of Sets of Influence

Selection is non-trivial on non-uniform points, especially near the domain's boundary.
[Figure: (a) adaptive points, (b) 6 nearest points, (c) better selection.]
Points in (a) are obtained by DistMesh (Persson & Strang, 2004) using a theoretically justified (Wahlbin, 1991) density function.
Selection of Sets of Influence: Geometric Selection

Geometric selection for the Laplacian: choose points in a rather uniform way around ξ_i.
Four quadrant criterion (Liszka & Orkisz, 1980). (Image from Liszka, Duarte & Tworzydlo, 1996.)
Selection of Sets of Influence: Geometric Selection

Choose n = 6 points around ξ_i as close as possible to the vertices of an equilateral hexagon (D. & Dang, 2011; Dang, D. & Hoang, 2017): discrete optimization.
- Successful for low order methods (O(h²) for the Poisson equation).
- n = 6 gives a fair comparison to linear FEM, where the rows of the system matrix have 7 nonzeros on average.
- Too complicated to extend to higher order gfd methods.
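The hexagon idea can be sketched with a greedy matching, which is simpler than (and not equivalent to) the discrete optimization used in the cited papers; this toy version is our own assumption:

```python
import numpy as np

def hexagon_select(cands, center, radius):
    """Greedy sketch (not the discrete optimization of D. & Dang, 2011):
    match each vertex of an ideal hexagon around `center` with the nearest
    still-available candidate point."""
    cands = np.asarray(cands, float)
    ang = np.arange(6) * np.pi / 3
    targets = center + radius * np.stack([np.cos(ang), np.sin(ang)], axis=1)
    chosen, avail = [], list(range(len(cands)))
    for t in targets:
        j = min(avail, key=lambda i: np.linalg.norm(cands[i] - t))
        chosen.append(j)
        avail.remove(j)
    return chosen

rng = np.random.default_rng(0)
hexa = [(0.1 * np.cos(k * np.pi / 3), 0.1 * np.sin(k * np.pi / 3))
        for k in range(6)]
decoys = list(0.4 * rng.standard_normal((8, 2)))
sel = hexagon_select(hexa + decoys, np.zeros(2), 0.1)
print(sorted(sel))   # the six exact hexagon vertices: [0, 1, 2, 3, 4, 5]
```

When an exact hexagon is present among the candidates it is always picked (each vertex is at distance zero from its target); on irregular node sets the greedy matching is only a heuristic.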
Selection of Sets of Influence: Selection via Polynomial Formulas

- For a given approximation order, smaller sets of influence are preferred since they lead to sparser system matrices.
- This makes a case for ℓ_{1,µ}-minimal formulas.
- It is possible to employ ℓ_{1,µ}-minimal formulas just as a method to select sets of influence, and compute the more robust kernel-based weights on these sets (Bayona, Moscoso & Kindelan, 2011: for Seibold's positive minimal formulas).
- New idea: least squares thresholding.
Selection of Sets of Influence: LS Thresholding

Least squares thresholding: compute a least squares numerical differentiation formula, and pick the positions of the n largest weights.
Example: compare (a) the 6 nearest points, (b) the 6 positions of nonzero ℓ_{1,3}-minimal weights of exactness order 3, (c) the positions of the n = 6 largest weights of a least squares formula of the same exactness order.
[Figure: (a) 6 nearest points, (b) ℓ_{1,3}-minimal, (c) LS thresholding.]
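The thresholding step can be sketched as follows (our own illustration; the weighted least squares formula uses θ_j = ‖x_j − z‖^{−2q} with the radius floored at the nearest-neighbour distance, an assumed normalization not specified in the talk):

```python
import numpy as np

def ls_threshold_select(X, z, q, n):
    """Least squares thresholding sketch: compute a weighted least squares
    differentiation formula for the Laplacian on all candidate points and
    keep the positions of the n largest weights |w_j|."""
    X = np.asarray(X, float)
    z = np.asarray(z, float)
    exps = [(a, b) for a in range(q) for b in range(q) if a + b < q]
    V = np.array([[(x[0] - z[0])**a * (x[1] - z[1])**b for x in X]
                  for a, b in exps])
    rhs = np.array([2.0 if (a, b) in ((2, 0), (0, 2)) else 0.0
                    for a, b in exps])
    r = np.linalg.norm(X - z, axis=1)
    r = np.maximum(r, r[r > 0].min())     # floor zero radius (assumption)
    theta = r ** (-2 * q)                 # makes weights near z cheap
    lam = np.linalg.solve((V * theta) @ V.T, rhs)
    w = theta * (V.T @ lam)
    return np.argsort(-np.abs(w))[:n]

h = 0.1
near = [(0, 0), (h, 0), (-h, 0), (0, h), (0, -h)]
far = [(np.cos(t), np.sin(t))
       for t in np.linspace(0, 2 * np.pi, 10, endpoint=False)]
sel = ls_threshold_select(near + far, np.zeros(2), q=3, n=5)
print(sorted(int(i) for i in sel))   # the five near points win: [0, 1, 2, 3, 4]
```

Because the radius-weighted cost makes weight on near points cheap, the optimizer concentrates the formula there, and thresholding recovers a compact set of influence at the cost of one small dense solve.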
Selection of Sets of Influence: LS Thresholding

Test problem: Dirichlet problem for the Helmholtz equation
  Δu − u/(α + r)⁴ = f,   r = √(x² + y²),
in the domain Ω = (0,1)². The right-hand side and the boundary conditions are chosen such that the exact solution is sin(1/(α + r)), where α = 1/(50π).
[Figure: exact solution.]
Selection of Sets of Influence: LS Thresholding

Adaptive nodes from a FEM triangulation by PDE Toolbox.
[Figure: adaptive node set.]
Selection of Sets of Influence: LS Thresholding

RMS errors of FEM and RBF-FD solutions with Gauss-QR and different selection methods for 6 neighbours.
[Figure: RMS error on interior nodes (Y-axis) vs number of interior nodes (X-axis), comparing FEM, geometric selection, LS thresholding, and nearest-point selection.]
Conclusion

- Polynomial and kernel-based numerical differentiation share similar error bounds that split into factors responsible for the smoothness of the data (e.g. a Sobolev norm) and for the geometry of the nodes (Lebesgue constant, growth function).
- The growth function can be efficiently estimated by least squares methods. It may be useful for node generation and for selection of sets of influence with prescribed consistency orders of generalized finite difference methods.
- Sparse sets of influence can be found with the help of ℓ_{1,µ}-minimal polynomial formulas, and more efficiently by least squares thresholding.
More informationStability of Kernel Based Interpolation
Stability of Kernel Based Interpolation Stefano De Marchi Department of Computer Science, University of Verona (Italy) Robert Schaback Institut für Numerische und Angewandte Mathematik, University of Göttingen
More informationMATH 590: Meshfree Methods
MATH 590: Meshfree Methods Chapter 33: Adaptive Iteration Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter 33 1 Outline 1 A
More informationDIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION
Meshless Methods in Science and Engineering - An International Conference Porto, 22 DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION Robert Schaback Institut für Numerische und Angewandte Mathematik (NAM)
More informationA. Iske RADIAL BASIS FUNCTIONS: BASICS, ADVANCED TOPICS AND MESHFREE METHODS FOR TRANSPORT PROBLEMS
Rend. Sem. Mat. Univ. Pol. Torino Vol. 61, 3 (23) Splines and Radial Functions A. Iske RADIAL BASIS FUNCTIONS: BASICS, ADVANCED TOPICS AND MESHFREE METHODS FOR TRANSPORT PROBLEMS Abstract. This invited
More informationD. Shepard, Shepard functions, late 1960s (application, surface modelling)
Chapter 1 Introduction 1.1 History and Outline Originally, the motivation for the basic meshfree approximation methods (radial basis functions and moving least squares methods) came from applications in
More informationScattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions
Chapter 3 Scattered Data Interpolation with Polynomial Precision and Conditionally Positive Definite Functions 3.1 Scattered Data Interpolation with Polynomial Precision Sometimes the assumption on the
More informationA Posteriori Error Bounds for Meshless Methods
A Posteriori Error Bounds for Meshless Methods Abstract R. Schaback, Göttingen 1 We show how to provide safe a posteriori error bounds for numerical solutions of well-posed operator equations using kernel
More informationNo. 579 November 2017
No. 579 November 27 Numerical study of the RBF-FD level set based method for partial differential equations on evolving-in-time surfaces A. Sokolov, O. Davydov, S. Turek ISSN: 29-767 Numerical study of
More informationRadial basis functions topics in slides
Radial basis functions topics in 40 +1 slides Stefano De Marchi Department of Mathematics Tullio Levi-Civita University of Padova Napoli, 22nd March 2018 Outline 1 From splines to RBF 2 Error estimates,
More informationNumerical solution of nonlinear sine-gordon equation with local RBF-based finite difference collocation method
Numerical solution of nonlinear sine-gordon equation with local RBF-based finite difference collocation method Y. Azari Keywords: Local RBF-based finite difference (LRBF-FD), Global RBF collocation, sine-gordon
More informationMultivariate Interpolation with Increasingly Flat Radial Basis Functions of Finite Smoothness
Multivariate Interpolation with Increasingly Flat Radial Basis Functions of Finite Smoothness Guohui Song John Riddle Gregory E. Fasshauer Fred J. Hickernell Abstract In this paper, we consider multivariate
More informationKernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt.
SINGAPORE SHANGHAI Vol TAIPEI - Interdisciplinary Mathematical Sciences 19 Kernel-based Approximation Methods using MATLAB Gregory Fasshauer Illinois Institute of Technology, USA Michael McCourt University
More informationScientific Computing I
Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Neckel Winter 2013/2014 Module 8: An Introduction to Finite Element Methods, Winter 2013/2014 1 Part I: Introduction to
More informationMATH 205C: STATIONARY PHASE LEMMA
MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)
More informationMATH 126 FINAL EXAM. Name:
MATH 126 FINAL EXAM Name: Exam policies: Closed book, closed notes, no external resources, individual work. Please write your name on the exam and on each page you detach. Unless stated otherwise, you
More informationEXPOSITORY NOTES ON DISTRIBUTION THEORY, FALL 2018
EXPOSITORY NOTES ON DISTRIBUTION THEORY, FALL 2018 While these notes are under construction, I expect there will be many typos. The main reference for this is volume 1 of Hörmander, The analysis of liner
More informationLeast Squares Approximation
Chapter 6 Least Squares Approximation As we saw in Chapter 5 we can interpret radial basis function interpolation as a constrained optimization problem. We now take this point of view again, but start
More informationChapter 5 A priori error estimates for nonconforming finite element approximations 5.1 Strang s first lemma
Chapter 5 A priori error estimates for nonconforming finite element approximations 51 Strang s first lemma We consider the variational equation (51 a(u, v = l(v, v V H 1 (Ω, and assume that the conditions
More informationInterpolation of Spatial Data - A Stochastic or a Deterministic Problem?
Interpolation of Spatial Data - A Stochastic or a Deterministic Problem? M. Scheuerer, R. Schaback and M. Schlather 18 November 2011 Abstract Interpolation of spatial data is a very general mathematical
More informationRadial basis function partition of unity methods for PDEs
Radial basis function partition of unity methods for PDEs Elisabeth Larsson, Scientific Computing, Uppsala University Credit goes to a number of collaborators Alfa Ali Alison Lina Victor Igor Heryudono
More informationOrthogonality of hat functions in Sobolev spaces
1 Orthogonality of hat functions in Sobolev spaces Ulrich Reif Technische Universität Darmstadt A Strobl, September 18, 27 2 3 Outline: Recap: quasi interpolation Recap: orthogonality of uniform B-splines
More informationFast Numerical Methods for Stochastic Computations
Fast AreviewbyDongbinXiu May 16 th,2013 Outline Motivation 1 Motivation 2 3 4 5 Example: Burgers Equation Let us consider the Burger s equation: u t + uu x = νu xx, x [ 1, 1] u( 1) =1 u(1) = 1 Example:
More informationi=1 α i. Given an m-times continuously
1 Fundamentals 1.1 Classification and characteristics Let Ω R d, d N, d 2, be an open set and α = (α 1,, α d ) T N d 0, N 0 := N {0}, a multiindex with α := d i=1 α i. Given an m-times continuously differentiable
More informationPositive Definite Kernels: Opportunities and Challenges
Positive Definite Kernels: Opportunities and Challenges Michael McCourt Department of Mathematical and Statistical Sciences University of Colorado, Denver CUNY Mathematics Seminar CUNY Graduate College
More informationMcGill University Department of Mathematics and Statistics. Ph.D. preliminary examination, PART A. PURE AND APPLIED MATHEMATICS Paper BETA
McGill University Department of Mathematics and Statistics Ph.D. preliminary examination, PART A PURE AND APPLIED MATHEMATICS Paper BETA 17 August, 2018 1:00 p.m. - 5:00 p.m. INSTRUCTIONS: (i) This paper
More informationINTRODUCTION TO FINITE ELEMENT METHODS
INTRODUCTION TO FINITE ELEMENT METHODS LONG CHEN Finite element methods are based on the variational formulation of partial differential equations which only need to compute the gradient of a function.
More information4 Riesz Kernels. Since the functions k i (ξ) = ξ i. are bounded functions it is clear that R
4 Riesz Kernels. A natural generalization of the Hilbert transform to higher dimension is mutiplication of the Fourier Transform by homogeneous functions of degree 0, the simplest ones being R i f(ξ) =
More informationScalable kernel methods and their use in black-box optimization
with derivatives Scalable kernel methods and their use in black-box optimization David Eriksson Center for Applied Mathematics Cornell University dme65@cornell.edu November 9, 2018 1 2 3 4 1/37 with derivatives
More informationError Analysis of Nodal Meshless Methods
Error Analysis of Nodal Meshless Methods Robert Schaback Abstract There are many application papers that solve elliptic boundary value problems by meshless methods, and they use various forms of generalized
More informationA orthonormal basis for Radial Basis Function approximation
A orthonormal basis for Radial Basis Function approximation 9th ISAAC Congress Krakow, August 5-9, 2013 Gabriele Santin, joint work with S. De Marchi Department of Mathematics. Doctoral School in Mathematical
More informationLehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V
Part I: Introduction to Finite Element Methods Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Necel Winter 4/5 The Model Problem FEM Main Ingredients Wea Forms and Wea
More informationComputational Aspects of Radial Basis Function Approximation
Working title: Topics in Multivariate Approximation and Interpolation 1 K. Jetter et al., Editors c 2005 Elsevier B.V. All rights reserved Computational Aspects of Radial Basis Function Approximation Holger
More informationRadial Basis Functions I
Radial Basis Functions I Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo November 14, 2008 Today Reformulation of natural cubic spline interpolation Scattered
More informationUNIVERSITY OF MANITOBA
Question Points Score INSTRUCTIONS TO STUDENTS: This is a 6 hour examination. No extra time will be given. No texts, notes, or other aids are permitted. There are no calculators, cellphones or electronic
More information1 Math 241A-B Homework Problem List for F2015 and W2016
1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let
More informationStability and Lebesgue constants in RBF interpolation
constants in RBF Stefano 1 1 Dept. of Computer Science, University of Verona http://www.sci.univr.it/~demarchi Göttingen, 20 September 2008 Good Outline Good Good Stability is very important in numerical
More informationKernel Method: Data Analysis with Positive Definite Kernels
Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University
More informationINVERSE AND SATURATION THEOREMS FOR RADIAL BASIS FUNCTION INTERPOLATION
MATHEMATICS OF COMPUTATION Volume 71, Number 238, Pages 669 681 S 0025-5718(01)01383-7 Article electronically published on November 28, 2001 INVERSE AND SATURATION THEOREMS FOR RADIAL BASIS FUNCTION INTERPOLATION
More informationyou expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form
Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)
More informationPDEs in Image Processing, Tutorials
PDEs in Image Processing, Tutorials Markus Grasmair Vienna, Winter Term 2010 2011 Direct Methods Let X be a topological space and R: X R {+ } some functional. following definitions: The mapping R is lower
More informationOn Laplace Finite Marchi Fasulo Transform Of Generalized Functions. A.M.Mahajan
On Laplace Finite Marchi Fasulo Transform Of Generalized Functions A.M.Mahajan Department of Mathematics, Walchand College of Arts and Science, Solapur-43 006 email:-ammahajan9@gmail.com Abstract : In
More informationTheoretical and computational aspects of multivariate interpolation with increasingly flat radial basis functions
Theoretical and computational aspects of multivariate interpolation with increasingly flat radial basis functions Elisabeth Larsson Bengt Fornberg June 0, 003 Abstract Multivariate interpolation of smooth
More informationLocal radial basis function approximation on the sphere
Local radial basis function approximation on the sphere Kerstin Hesse and Q. T. Le Gia School of Mathematics and Statistics, The University of New South Wales, Sydney NSW 05, Australia April 30, 007 Abstract
More informationSome lecture notes for Math 6050E: PDEs, Fall 2016
Some lecture notes for Math 65E: PDEs, Fall 216 Tianling Jin December 1, 216 1 Variational methods We discuss an example of the use of variational methods in obtaining existence of solutions. Theorem 1.1.
More informationThe continuity method
The continuity method The method of continuity is used in conjunction with a priori estimates to prove the existence of suitably regular solutions to elliptic partial differential equations. One crucial
More informationApplied/Numerical Analysis Qualifying Exam
Applied/Numerical Analysis Qualifying Exam August 9, 212 Cover Sheet Applied Analysis Part Policy on misprints: The qualifying exam committee tries to proofread exams as carefully as possible. Nevertheless,
More informationNational Taiwan University
National Taiwan University Meshless Methods for Scientific Computing (Advisor: C.S. Chen, D.L. oung) Final Project Department: Mechanical Engineering Student: Kai-Nung Cheng SID: D9956 Date: Jan. 8 The
More informationAN ELEMENTARY PROOF OF THE OPTIMAL RECOVERY OF THE THIN PLATE SPLINE RADIAL BASIS FUNCTION
J. KSIAM Vol.19, No.4, 409 416, 2015 http://dx.doi.org/10.12941/jksiam.2015.19.409 AN ELEMENTARY PROOF OF THE OPTIMAL RECOVERY OF THE THIN PLATE SPLINE RADIAL BASIS FUNCTION MORAN KIM 1 AND CHOHONG MIN
More informationPARTIAL DIFFERENTIAL EQUATIONS. Lecturer: D.M.A. Stuart MT 2007
PARTIAL DIFFERENTIAL EQUATIONS Lecturer: D.M.A. Stuart MT 2007 In addition to the sets of lecture notes written by previous lecturers ([1, 2]) the books [4, 7] are very good for the PDE topics in the course.
More informationPreliminary Examination in Numerical Analysis
Department of Applied Mathematics Preliminary Examination in Numerical Analysis August 7, 06, 0 am pm. Submit solutions to four (and no more) of the following six problems. Show all your work, and justify
More informationRecent developments in the dual receiprocity method using compactly supported radial basis functions
Recent developments in the dual receiprocity method using compactly supported radial basis functions C.S. Chen, M.A. Golberg and R.A. Schaback 3 Department of Mathematical Sciences University of Nevada,
More informationx x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)
Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)
More informationRegularization in Reproducing Kernel Banach Spaces
.... Regularization in Reproducing Kernel Banach Spaces Guohui Song School of Mathematical and Statistical Sciences Arizona State University Comp Math Seminar, September 16, 2010 Joint work with Dr. Fred
More informationON APPROXIMATION OF LAPLACIAN EIGENPROBLEM OVER A REGULAR HEXAGON WITH ZERO BOUNDARY CONDITIONS 1) 1. Introduction
Journal of Computational Mathematics, Vol., No., 4, 75 86. ON APPROXIMATION OF LAPLACIAN EIGENPROBLEM OVER A REGULAR HEXAGON WITH ZERO BOUNDARY CONDITIONS ) Jia-chang Sun (Parallel Computing Division,
More informationMath The Laplacian. 1 Green s Identities, Fundamental Solution
Math. 209 The Laplacian Green s Identities, Fundamental Solution Let be a bounded open set in R n, n 2, with smooth boundary. The fact that the boundary is smooth means that at each point x the external
More informationMultigrid finite element methods on semi-structured triangular grids
XXI Congreso de Ecuaciones Diferenciales y Aplicaciones XI Congreso de Matemática Aplicada Ciudad Real, -5 septiembre 009 (pp. 8) Multigrid finite element methods on semi-structured triangular grids F.J.
More informationA High-Order Galerkin Solver for the Poisson Problem on the Surface of the Cubed Sphere
A High-Order Galerkin Solver for the Poisson Problem on the Surface of the Cubed Sphere Michael Levy University of Colorado at Boulder Department of Applied Mathematics August 10, 2007 Outline 1 Background
More informationFunctional Analysis I
Functional Analysis I Course Notes by Stefan Richter Transcribed and Annotated by Gregory Zitelli Polar Decomposition Definition. An operator W B(H) is called a partial isometry if W x = X for all x (ker
More informationChapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs
Chapter Two: Numerical Methods for Elliptic PDEs Finite Difference Methods for Elliptic PDEs.. Finite difference scheme. We consider a simple example u := subject to Dirichlet boundary conditions ( ) u
More informationNumerical Integration in Meshfree Methods
Numerical Integration in Meshfree Methods Pravin Madhavan New College University of Oxford A thesis submitted for the degree of Master of Science in Mathematical Modelling and Scientific Computing Trinity
More information1. Introduction. A radial basis function (RBF) interpolant of multivariate data (x k, y k ), k = 1, 2,..., n takes the form
A NEW CLASS OF OSCILLATORY RADIAL BASIS FUNCTIONS BENGT FORNBERG, ELISABETH LARSSON, AND GRADY WRIGHT Abstract Radial basis functions RBFs form a primary tool for multivariate interpolation, and they are
More informationCIS 520: Machine Learning Oct 09, Kernel Methods
CIS 520: Machine Learning Oct 09, 207 Kernel Methods Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture They may or may not cover all the material discussed
More informationAdaptive Piecewise Polynomial Estimation via Trend Filtering
Adaptive Piecewise Polynomial Estimation via Trend Filtering Liubo Li, ShanShan Tu The Ohio State University li.2201@osu.edu, tu.162@osu.edu October 1, 2015 Liubo Li, ShanShan Tu (OSU) Trend Filtering
More informationNumerical Solutions to Partial Differential Equations
Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University Nonconformity and the Consistency Error First Strang Lemma Abstract Error Estimate
More informationNumerical Analysis of Differential Equations Numerical Solution of Elliptic Boundary Value
Numerical Analysis of Differential Equations 188 5 Numerical Solution of Elliptic Boundary Value Problems 5 Numerical Solution of Elliptic Boundary Value Problems TU Bergakademie Freiberg, SS 2012 Numerical
More informationWe denote the space of distributions on Ω by D ( Ω) 2.
Sep. 1 0, 008 Distributions Distributions are generalized functions. Some familiarity with the theory of distributions helps understanding of various function spaces which play important roles in the study
More informationPDEs, part 1: Introduction and elliptic PDEs
PDEs, part 1: Introduction and elliptic PDEs Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Partial di erential equations The solution depends on several variables,
More informationSpotlight on Laplace s Equation
16 Spotlight on Laplace s Equation Reference: Sections 1.1,1.2, and 1.5. Laplace s equation is the undriven, linear, second-order PDE 2 u = (1) We defined diffusivity on page 587. where 2 is the Laplacian
More informationA scaling perspective on accuracy and convergence in RBF approximations
A scaling perspective on accuracy and convergence in RBF approximations Elisabeth Larsson With: Bengt Fornberg, Erik Lehto, and Natasha Flyer Division of Scientific Computing Department of Information
More informationLaplace s Equation. Chapter Mean Value Formulas
Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic
More informationThe Overlapped Radial Basis Function-Finite Difference (RBF-FD) Method: A Generalization of RBF-FD
The Overlapped Radial Basis Function-Finite Difference (RBF-FD) Method: A Generalization of RBF-FD Varun Shankar arxiv:1606.03135v3 [math.na] 2 Jan 2017 Abstract Department of Mathematics, University of
More information10-701/ Recitation : Kernels
10-701/15-781 Recitation : Kernels Manojit Nandi February 27, 2014 Outline Mathematical Theory Banach Space and Hilbert Spaces Kernels Commonly Used Kernels Kernel Theory One Weird Kernel Trick Representer
More information13. Fourier transforms
(December 16, 2017) 13. Fourier transforms Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2017-18/13 Fourier transforms.pdf]
More informationLinear Independence of Finite Gabor Systems
Linear Independence of Finite Gabor Systems 1 Linear Independence of Finite Gabor Systems School of Mathematics Korea Institute for Advanced Study Linear Independence of Finite Gabor Systems 2 Short trip
More informationOptimal Stencils in Sobolev Spaces Oleg Davydov 1 and Robert Schaback 2 Draft of November 14, 2016
1 INTRODUCTION 1 Optimal Stencils in Sobolev Spaces Oleg Davydov 1 and Robert Schaback 2 Draft of November 14, 2016 Abstract: This paper proves that the approximation of pointwise derivatives of order
More information12. Cholesky factorization
L. Vandenberghe ECE133A (Winter 2018) 12. Cholesky factorization positive definite matrices examples Cholesky factorization complex positive definite matrices kernel methods 12-1 Definitions a symmetric
More informationNumerical Solutions to Partial Differential Equations
Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University Numerical Methods for Partial Differential Equations Finite Difference Methods
More informationEfficient Solvers for Stochastic Finite Element Saddle Point Problems
Efficient Solvers for Stochastic Finite Element Saddle Point Problems Catherine E. Powell c.powell@manchester.ac.uk School of Mathematics University of Manchester, UK Efficient Solvers for Stochastic Finite
More informationThe purpose of this lecture is to present a few applications of conformal mappings in problems which arise in physics and engineering.
Lecture 16 Applications of Conformal Mapping MATH-GA 451.001 Complex Variables The purpose of this lecture is to present a few applications of conformal mappings in problems which arise in physics and
More informationUniversität des Saarlandes. Fachrichtung 6.1 Mathematik
Universität des Saarlandes Fachrichtung 6.1 Mathematik Preprint Nr. 362 An iterative method for EIT involving only solutions of Poisson equations. I: Mesh-free forward solver Thorsten Hohage and Sergej
More informationVelocity averaging a general framework
Outline Velocity averaging a general framework Martin Lazar BCAM ERC-NUMERIWAVES Seminar May 15, 2013 Joint work with D. Mitrović, University of Montenegro, Montenegro Outline Outline 1 2 L p, p >= 2 setting
More informationOn the remainder term of Gauss Radau quadratures for analytic functions
Journal of Computational and Applied Mathematics 218 2008) 281 289 www.elsevier.com/locate/cam On the remainder term of Gauss Radau quadratures for analytic functions Gradimir V. Milovanović a,1, Miodrag
More informationFourier transforms, I
(November 28, 2016) Fourier transforms, I Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/notes 2016-17/Fourier transforms I.pdf]
More informationIn this chapter we study elliptical PDEs. That is, PDEs of the form. 2 u = lots,
Chapter 8 Elliptic PDEs In this chapter we study elliptical PDEs. That is, PDEs of the form 2 u = lots, where lots means lower-order terms (u x, u y,..., u, f). Here are some ways to think about the physical
More informationTD M1 EDP 2018 no 2 Elliptic equations: regularity, maximum principle
TD M EDP 08 no Elliptic equations: regularity, maximum principle Estimates in the sup-norm I Let be an open bounded subset of R d of class C. Let A = (a ij ) be a symmetric matrix of functions of class
More information