Interpolation by Basis Functions of Different Scales and Shapes


M. Bozzini, L. Lenarduzzi, M. Rossini and R. Schaback

Abstract. Under very mild additional assumptions, translates of conditionally positive definite radial basis functions allow unique interpolation to scattered multivariate data, because the interpolation matrices have a symmetric and positive definite dominant part. In many applications, the data density varies locally, and then the translates should get different scalings that match the local data density. Furthermore, if there is a local anisotropy in the data, the radial basis functions should be distorted into functions with ellipsoids as level sets. In such cases, the symmetry and the definiteness of the matrices are lost. However, this paper provides sufficient conditions for the unique solvability of such interpolation processes. The basic technique is a matrix perturbation argument combined with the Ball-Narcowich-Ward stability results.

Keywords: conditionally positive definite radial basis functions, shape parameters, scaling, solvability.

Classification: 41A05, 41A15, 41A17, 41A27, 41A30, 41A40, 41A63, 65D10

*) Supported by a research stay sponsored by ??? at the University of Milan Bicocca

1 Introduction

Let Ω ⊆ ℝ^d be a compact set, and let us denote the space of d-variate polynomials of order not exceeding m by ℙ^d_m. We shall study multivariate interpolation by conditionally positive definite radial functions φ : [0, ∞) → ℝ of order m ≥ 0. This means that for all possible choices of sets X = {x_1, ..., x_N} ⊆ Ω of N distinct points, the quadratic form induced by the N × N matrix

    A = (φ(‖x_j − x_k‖₂))_{1 ≤ j,k ≤ N}                                    (1)

is positive definite on the subspace

    V := { α ∈ ℝ^N : Σ_{j=1}^N α_j p(x_j) = 0 for all p ∈ ℙ^d_m }.

Note that m = 0 implies V = ℝ^N because of ℙ^d_0 = {0}, and then the matrix A in (1) is positive definite. The most prominent examples of conditionally positive definite radial basis functions of order m on ℝ^d are

    φ(r) = (−1)^⌈β/2⌉ r^β,                β > 0, β ∉ 2ℕ₀,   m ≥ ⌈β/2⌉
    φ(r) = (−1)^(k+1) r^(2k) log(r),      k ∈ ℕ,            m ≥ k + 1
    φ(r) = (c² + r²)^(β/2),               β < 0,            m ≥ 0
    φ(r) = (−1)^⌈β/2⌉ (c² + r²)^(β/2),    β > 0, β ∉ 2ℕ₀,   m ≥ ⌈β/2⌉
    φ(r) = e^(−αr²),                      α > 0,            m ≥ 0
    φ(r) = (1 − r)⁴₊ (1 + 4r),                              m ≥ 0, d ≤ 3.

See e.g. [17] for a comprehensive derivation of the properties of these functions.

It is customary to scale a radial basis function φ by going over to φ(·/δ) with a positive value δ that is roughly proportional to the distance between neighbouring data locations. In particular, for the Wendland function φ(r) = (1 − r)⁴₊(1 + 4r) the scaled function has support [0, δ]. From now on, we use

    A = (φ(‖x_j − x_k‖₂/δ))_{1 ≤ j,k ≤ N}

instead of (1). Interpolation of real values f_1, ..., f_N on a set X = {x_1, ..., x_N} of N distinct scattered points of Ω by such a scaled function φ(·/δ) is done by solving the (N + Q) × (N + Q) system

    A α + P β = f
    Pᵀ α      = 0                                                          (2)

where Q = dim ℙ^d_m and P = (p_i(x_j))_{1 ≤ j ≤ N, 1 ≤ i ≤ Q} for a basis p_1, ..., p_Q of ℙ^d_m. In fact, if the additional assumption

    rank(P) = Q ≤ N                                                        (3)

holds, then the system (2) is uniquely solvable. The resulting interpolant has the form

    s(x) = Σ_{j=1}^N α_j φ(‖x_j − x‖₂/δ) + Σ_{i=1}^Q β_i p_i(x)            (4)

with the additional condition α ∈ V.

In many applications it is desirable not to use the same scale δ in all terms of (4). In fact, if x_j lies in some local cluster of points, one would rather use φ(‖x_j − x‖₂/δ_j) for a small positive δ_j that is adapted to the local data density near x_j. This approach was suggested by various authors, see e.g. [11, 5, 7, 9]. In particular, Buhmann and Micchelli [4] considered monotonic scalings for multiquadrics and claimed that they improve the interpolation. Fasshauer [6] observed that varying scales of multiquadrics worked well in accelerating the convergence of his multilevel method. Carlson and Foley [5] studied the variation of a constant multiquadric scaling for the interpolation of various test functions. They concluded that test functions with large curvature, such as the surface of a sphere, require large scales, while those with considerable variation require smaller scales. Based upon their results, Hon and Kansa [10] conjectured that the scaling of multiquadrics should be proportional to the local radius of curvature. Galperin and Zheng [8] suggested optimizing local scaling factors along with the data points. There is some need for data-dependent strategies for an optimal choice of data locations and scales.

Most of the above papers use multiquadrics of different scales for solving partial differential equations. For interpolation, recent numerical methods can be found in [3]. So far, there are no known conditions for solvability of interpolation problems at different scales, but this paper will be a first step. Numerical evidence suggests that singular systems can occur [17].
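As a concrete illustration of the augmented system (2) and the interpolant (4), here is a minimal numerical sketch (my own, not code from the paper), using the multiquadric φ(r) = (c² + r²)^(1/2) of order m = 1 with a constant polynomial part; the helper names `mq`, `fit` and `s` are of course hypothetical.

```python
import numpy as np

def mq(r, c=1.0):
    # multiquadric (c^2 + r^2)^(1/2), conditionally positive definite of order m = 1
    return np.sqrt(c * c + r * r)

def fit(X, f, delta=1.0):
    """Solve the augmented (N+Q) x (N+Q) system (2) with Q = 1 (constants)."""
    N = X.shape[0]
    R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    A = mq(R / delta)                                          # scaled matrix A
    P = np.ones((N, 1))                                        # basis of the constants
    M = np.block([[A, P], [P.T, np.zeros((1, 1))]])
    sol = np.linalg.solve(M, np.concatenate([f, [0.0]]))
    return sol[:N], sol[N:]                                    # alpha, beta

def s(x, X, alpha, beta, delta=1.0):
    """Evaluate the interpolant (4) at a single point x."""
    return alpha @ mq(np.linalg.norm(X - x, axis=1) / delta) + beta[0]
```

For distinct points and rank(P) = Q, the augmented matrix is nonsingular, so the sketch reproduces the data f at the nodes up to roundoff.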
If we introduce the nonsymmetric matrix

    Ã = (φ(‖x_j − x_k‖₂/δ_j))_{1 ≤ j,k ≤ N},

the system (2) is perturbed into

    Ã α̃ + P β̃ = f
    Pᵀ α̃       = 0.                                                        (5)

This paper will provide sufficient conditions for nonsingularity of (5). Sections 6 and 7 will treat the perturbation of shape instead of scale.
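The loss of symmetry in (5) is easy to see numerically. The following sketch (my own illustration; the per-centre scales δ_j are chosen arbitrarily, and the helper name `perturbed_fit` is hypothetical) assembles Ã row by row and solves the perturbed system.

```python
import numpy as np

def mq(r):
    # unscaled multiquadric (1 + r^2)^(1/2)
    return np.sqrt(1.0 + r * r)

def perturbed_fit(X, f, deltas):
    """Solve the perturbed system (5): row j of A~ uses its own scale delta_j,
    so A~ = (phi(||x_j - x_k||/delta_j)) is nonsymmetric in general."""
    N = X.shape[0]
    R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    At = mq(R / np.asarray(deltas)[:, None])        # delta_j varies along rows
    P = np.ones((N, 1))
    M = np.block([[At, P], [P.T, np.zeros((1, 1))]])
    sol = np.linalg.solve(M, np.concatenate([f, [0.0]]))
    return sol[:N], sol[N:], At
```

If all δ_j coincide, Ã equals the symmetric A; the conditions derived in this paper quantify how far the δ_j may spread before solvability can be lost.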

2 Basic Argument

We shall modify a classical perturbation argument from Numerical Analysis. Let T : V × ℝ^Q → V × ℝ^Q be the linear mapping defined via

    T := ( A    P )⁻¹ ( Ã    P ),    T (α̃, β̃) = (α, β),   α ∈ V.           (6)
         ( Pᵀ   0 )   ( Pᵀ   0 )

The final goal is to derive sufficient conditions for the invertibility of T, which proves solvability of (5). Note that T is just a perturbation of the identity. A crucial tool will consist of inequalities of the form

    γᵀ A γ ≥ λ ‖γ‖₂²                                                       (7)

for all γ ∈ V and fixed positive numbers λ that will still depend on properties of φ and X. Our source for such inequalities will be [16], which in turn is based on the fundamental work started by Ball, Narcowich, and Ward [1, 2, 12, 13, 14]. We use (2) and (5) to get

    A α + P β = Ã α̃ + P β̃

and, multiplying by α̃ᵀ (the terms α̃ᵀ P β and α̃ᵀ P β̃ vanish because α̃ ∈ V),

    α̃ᵀ A α = α̃ᵀ Ã α̃
           = α̃ᵀ (Ã − A) α̃ + α̃ᵀ A α̃
           ≥ (λ − ‖Ã − A‖₂) ‖α̃‖₂²

with λ from (7). If we can make sure that

    ‖Ã − A‖₂ < λ                                                           (8)

holds, we get

    ‖α̃‖₂ ≤ ‖A α‖₂ / (λ − ‖Ã − A‖₂).

Furthermore, if we look at

    P β̃ = A α + P β − Ã α̃

and use the property (3), we see that we can bound ‖(α̃, β̃)‖ above in terms of ‖(α, β)‖. But this means that the mapping T of (6) is invertible under the assumption (8), and thus we have proven

Theorem 2.1. The system (5) for perturbed interpolation is solvable if the perturbation Ã of the standard matrix A is bounded by (8), where the constant λ comes from (7).

Sections 3 to 6 of the paper will center around the evaluation of (8) and (7) in various situations.
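The criterion (8) can be checked numerically. The sketch below (my own illustration, with an arbitrary 3 × 3 grid and the sign-normalized multiquadric −(1 + r²)^(1/2), whose quadratic form is positive definite on V) computes the lower bound λ of (7) as the smallest eigenvalue of A restricted to V, and compares it with ‖Ã − A‖₂ for tiny per-centre scale perturbations.

```python
import numpy as np

phi = lambda r: -np.sqrt(1.0 + r * r)         # multiquadric with CPD sign, order m = 1

g = np.linspace(0.0, 1.0, 3)
X = np.array([[a, b] for a in g for b in g])  # 3 x 3 grid, separation distance 0.5
N = len(X)
R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)

delta = 1.0
A = phi(R / delta)
P = np.ones((N, 1))                           # constants: V = {alpha : sum alpha_j = 0}
Z = np.linalg.svd(P)[0][:, 1:]                # orthonormal basis of V
lam = np.linalg.eigvalsh(Z.T @ A @ Z).min()   # lambda in (7), restricted to V

rng = np.random.default_rng(3)
deltas = delta + 1e-8 * rng.random(N)         # tiny per-centre scale perturbations
At = phi(R / deltas[:, None])
pert = np.linalg.norm(At - A, 2)              # left-hand side of (8)
```

For perturbations this small, pert stays far below lam, so (8) holds and the perturbed system remains solvable; enlarging the perturbations until pert reaches lam indicates how much scale variation the criterion tolerates.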

3 Matrix Perturbations by Scaling

We first concentrate on the left-hand side of (8), and for convenience we use

    ‖B‖₂ ≤ n · max_{1 ≤ j,k ≤ n} |b_jk|

for any n × n matrix B. To avoid complications, we restrict the variable scalings to δ_j ≥ κδ with a small positive constant κ < 1. Then the absolute values of the entries of Ã − A have the bounds

    |φ(‖x_j − x_k‖₂/δ_j) − φ(‖x_j − x_k‖₂/δ)| ≤ |φ'(ξ)| ‖x_j − x_k‖₂ |δ − δ_j| / (κδ²)

with

    0 ≤ ξ ≤ max_{1 ≤ j,k ≤ N} ‖x_j − x_k‖₂ / (κδ) ≤ ‖X‖ / (κδ),

if φ is continuously differentiable. Here, we have used the shorthand notation ‖X‖ := diam(X) for the L₂ diameter of X. With

    C(δ) := (‖X‖ / (κδ²)) · max_{ξ ∈ [0, ‖X‖/(κδ)]} |φ'(ξ)|                (9)

we have

    ‖Ã − A‖₂ ≤ C(δ) N max_{1 ≤ j ≤ N} |δ − δ_j|

and (8) takes the form

    max_{1 ≤ j ≤ N} |δ − δ_j| < λ / (C(δ) N),                             (10)

which leaves us to work on λ.

4 Lower Bounds on Eigenvalues

Clearly, the value λ of (7) is a lower bound on the eigenvalues of the quadratic form induced by the matrix A when restricted to V. If the function φ were not scaled, the value λ would take the form G(q_X) with the separation distance

    q := q_X := min_{1 ≤ j < k ≤ N} ‖x_j − x_k‖₂

in practically every situation [16]. The function G still depends on φ, Ω and the space dimension d, but not on any other property of the data set X, in particular not on the cardinality of X.

We now need a scaled version of this approach. The basic trick is to replace x_j by y_j := x_j/δ to define a data set Y. Then the new q_Y is q_X/δ, and we can take λ = G(q_X/δ), because the scaled matrix A defined for the data X coincides with the unscaled matrix defined for Y. Altogether we get

Theorem 4.1. If φ is a continuously differentiable radial basis function, and if the scalings δ_j are perturbations of δ that satisfy

    max_{1 ≤ j ≤ N} |δ − δ_j| < G(q_X/δ) / (C(δ) N),                      (11)

with C from (9) and G from [16], then the general system (5) is uniquely solvable.

5 Examples for Scalings

It does not make any sense to scale the powers φ(r) = (−1)^⌈β/2⌉ r^β. Similarly, the thin-plate splines φ(r) = (−1)^(k+1) r^(2k) log(r) need not be scaled, because their orders of conditional positive definiteness allow the scale factors to be absorbed into the additional polynomials.

Let us start with unscaled multiquadrics

    φ(r) = (−1)^⌈β/2⌉ (1 + r²)^(β/2),   β > 0, β ∉ 2ℕ₀, m ≥ ⌈β/2⌉.

From [16] we take

    G(q_X) = c(β, d) q_X^β exp(−12.76 d / q_X)

and get the sufficient condition

    max_{1 ≤ j ≤ N} |δ − δ_j| < c(β, d) q_X^β exp(−12.76 d δ / q_X) / (δ^β C(δ) N)

for the scaled system (5) to be solvable. In the standard case β = 1 we have |φ'| ≤ 1 and can take C(δ) = ‖X‖/(κδ²). Note that one can keep δ/q_X small by a proper choice of the basic scaling. This is necessary anyway, because the stability estimates, which are quite realistic due to [15], blow up like exp(12.76 d δ/q_X). The constant c(β, d) is explicitly given in [16], and the method of that paper will also give the corresponding constants for all the other examples to follow. We refrain from repeating those constants here.

The above general scaling is not the standard one for multiquadrics. In fact, most applications will use φ_c(r) = (−1)^⌈β/2⌉ (c² + r²)^(β/2) and perturb this function into φ_{c_j} for positive values c_j ≈ c. Due to φ_c(r) = c^β φ(r/c) we have additional factors that spoil the argument of Section 3. Clearly, the appropriate value of λ is

    c^β G(q_X/c) = c(β, d) q_X^β exp(−12.76 d c / q_X),

and the differences in the matrix entries are bounded by

    |(c_j² + r²)^(β/2) − (c² + r²)^(β/2)| = |β| c̄ (c̄² + r²)^((β−2)/2) |c_j − c|
                                          ≤ |β| (c̄² + r²)^((β−1)/2) |c_j − c|

for some c̄ between c and c_j. In the most interesting case β = 1 we thus get the sufficient condition

    max_{1 ≤ j ≤ N} |c − c_j| < (c(β, d) q_X / N) exp(−12.76 d c / q_X)

for solvability of the scaled system.

For cases where |φ'(r)| attains its maximum on [0, ∞), e.g. for the Gaussian, the Wendland functions or the inverse multiquadrics, we can bound the derivative of the scaled function via

    |d/dr φ(r/δ)| ≤ (1/δ) max_{t ≥ 0} |φ'(t)|

and just have to find the maximum absolute value of the derivative of the unscaled function. For the Gaussian e^(−t²) this is √(2/e), and from [16] we get the criterion

    max_{1 ≤ j ≤ N} |δ − δ_j| < (c κ δ^(d+2) / (√(2/e) N q_X^d ‖X‖)) exp(−40.71 d² δ² / q_X²).

In the case of inverse multiquadrics φ(r) = (1 + r²)^(β/2), β < 0, the absolute value of the unscaled derivative can be bounded by

    max_{t ≥ 0} |φ'(t)| = |β| (1 − β)^(−1/2) ((2 − β)/(1 − β))^((β−2)/2).

Then we get

    max_{1 ≤ j ≤ N} |δ − δ_j| < (c(β, d) κ δ² / (max_{t ≥ 0}|φ'(t)| · N ‖X‖)) (q_X/δ)^β exp(−12.76 d δ / q_X).

Consider the Wendland functions φ_{l,k}(r), k = 1, 2, 3 with l = ⌊d/2⌋ + k + 1, defined as in [16]:

    φ_{l,1}(r) = (1 − r)^(l+1)_+ [1 + r(l + 1)]
    φ_{l,2}(r) = (1 − r)^(l+2)_+ [3 + r(3l + 6) + r²(l² + 4l + 3)]
    φ_{l,3}(r) = (1 − r)^(l+3)_+ [15 + r(15l + 45) + r²(6l² + 36l + 45) + r³(l³ + 9l² + 23l + 15)].

We can easily compute the derivatives of the unscaled versions and bound them by calculating their maximum absolute value. In particular,

    φ'_{l,1}(r) = −(l + 1)(l + 2) r (1 − r)^l_+ ,   so   |φ'_{l,1}(r)| ≤ (l + 1)(l + 2).

The next case is

    φ'_{l,2}(r) = −(l + 3)(l + 4) r (1 − r)^(l+1)_+ [(l + 1)r + 1].

Now, since (l + 1)r + 1 ≤ (1 + r)^(l+1), we get

    |φ'_{l,2}(r)| ≤ (l + 3)(l + 4) r (1 − r²)^(l+1)_+ ≤ (l + 3)(l + 4).

Finally,

    φ'_{l,3}(r) = −3(l + 5)(l + 6) r (1 − r)^(l+2)_+ [(l²/3 + 4l/3 + 1)r² + (l + 2)r + 1].

It is easy to see that (l²/3 + 4l/3 + 1)r² + (l + 2)r + 1 ≤ (1 + r)^(l+2), and then

    |φ'_{l,3}(r)| ≤ 3(l + 5)(l + 6) r (1 − r²)^(l+2)_+ ≤ 3(l + 5)(l + 6).

If we write these bounds as

    |φ'_{l,k}(r)| ≤ ε_{lk},

we get

    max_{1 ≤ j ≤ N} |δ − δ_j| < (c(l, k) κ δ² / (ε_{lk} N ‖X‖)) (q_X/δ)^(2k+1),

where c(l, k) is available from [18].

6 Matrix Perturbations by Shape

We now want to replace φ(‖x − x_j‖₂) by a function with ellipsoidal contours. To this end, we define ψ(t) := φ(√t) and introduce for each j a d-variate positive definite quadratic form via a symmetric positive definite d × d matrix Q_j. Its eigenvalues will be denoted by λ_{jl}, 1 ≤ l ≤ d, 1 ≤ j ≤ N, and we replace φ(‖x − x_j‖₂) by ψ((x − x_j)ᵀ Q_j (x − x_j)). This coincides with φ(‖x − x_j‖₂) if Q_j is the identity matrix.

The basic perturbation argument will be the same as for the scaling case, but we now have to use a different technique to bound the perturbations of the matrix entries. In fact,

    |ψ((x − x_j)ᵀ Q_j (x − x_j)) − ψ((x − x_j)ᵀ I (x − x_j))| ≤ |ψ'(τ_j)| |(x − x_j)ᵀ (Q_j − I)(x − x_j)|

with some value τ_j between (x − x_j)ᵀ Q_j (x − x_j) and (x − x_j)ᵀ I (x − x_j). To get a handle on these, we restrict ourselves to matrices Q_j with eigenvalues satisfying

    κ ≤ λ_{jl} ≤ 1/κ                                                      (12)

for some fixed κ ∈ (0, 1). Thus we have

    |ψ((x_j − x_k)ᵀ Q_j (x_j − x_k)) − ψ((x_j − x_k)ᵀ I (x_j − x_k))| ≤ D(q_X, κ) ‖X‖² max_{1 ≤ l ≤ d} |λ_{jl} − 1|

with

    D(q_X) := max_{t ∈ [κ q_X², ‖X‖²/κ]} |ψ'(t)|.                         (13)

Note that we have to be careful around the origin, because we are working with ψ instead of φ, and in the classical thin-plate case there is no differentiability of ψ(t) = ½ t log t at zero. But we can safely omit the case j = k from the above

discussion. Note that we cannot combine the scale and shape cases, because the latter needs bounds on the derivative of ψ = φ(√·) instead of φ.

Applying the results of Sections 2 and 4, we get

Theorem 6.1. If ψ(t) := φ(√t) is continuously differentiable on [κ q_X², ‖X‖²/κ], and if the quadratic forms Q_j at x_j satisfy (12) and

    max_{1 ≤ j ≤ N} max_{1 ≤ l ≤ d} |λ_{jl} − 1| < G(q_X) / (D(q_X) N ‖X‖²),    (14)

with D from (13) and G from [16], then the general system for the shape perturbation is uniquely solvable.

7 Examples for Shape Perturbation

The various cases of (14) can be treated by picking the correct G function from [16] as in Section 5, but we still have to look at the function D(q_X). The results have to be inserted into (14).

Let us start with multiquadrics (−1)^⌈β/2⌉ (1 + r²)^(β/2) again. The function ψ is ψ(t) = (−1)^⌈β/2⌉ (1 + t)^(β/2), and for calculating D(q_X) we get

    D(q_X) = (|β|/2) (1 + κ q_X²)^((β−2)/2),    β < 2,
    D(q_X) = (β/2) (1 + ‖X‖²/κ)^((β−2)/2),      β > 2.

This argument also works for inverse multiquadrics (β < 0).

Thin-plate splines φ(r) = (−1)^(k+1) r^(2k) log(r) with k ∈ ℕ and k > 1 lead to

    D(q_X) = ½ κ^(1−k) ‖X‖^(2k−2) (1 + k log(‖X‖²/κ)),

while the classical case k = 1 yields

    D(q_X) = ½ (1 + max( log(‖X‖²/κ), |log(κ q_X²)| )).

The monomial case φ(r) = (−1)^⌈β/2⌉ r^β has

    D(q_X) = (β/2) (κ q_X²)^((β−2)/2),   β < 2,
    D(q_X) = (β/2) (‖X‖²/κ)^((β−2)/2),   β > 2.

For the Gaussian φ(r) = exp(−α r²) we take

    D(q_X) = α exp(−α κ q_X²) < α.

Finally, let us look at the Wendland functions. Setting ψ_{lk}(t) = φ_{lk}(√t), we find

    D(q_X) ≤ c_k (l + 2k − 1)(l + 2k)

for k = 1, 2, 3 with c_1 = c_2 = 1/2, c_3 = 3/2 by direct differentiation.

Acknowledgement

The authors thank Ed Kansa and H. Wendland for valuable comments on a first draft of the paper.

References

[1] K. Ball. Eigenvalues of Euclidean distance matrices. Journal of Approximation Theory, 68:74-82, 1992.
[2] K. Ball, N. Sivakumar, and J.D. Ward. On the sensitivity of radial basis interpolation to minimal data separation distance. Constructive Approximation, 8:401-426, 1992.
[3] M. Bozzini, L. Lenarduzzi, and ???. A forward method for a concise description of a function. Manuscript, 2000.
[4] M.D. Buhmann and C.A. Micchelli. Multiquadric interpolation improved. Comput. Math. Appl., 24:21-25, 1992.
[5] R.E. Carlson and T.A. Foley. The parameter R² in multiquadric interpolation. Comput. Math. Appl., 21:29-42, 1991.
[6] G. Fasshauer. On smoothing for multilevel approximation with radial basis functions. In C.K. Chui and L.L. Schumaker, editors, Approximation Theory IX, vol. 2: Computational Aspects, Surface Fitting and Multiresolution Methods, pages 55-62. Vanderbilt University Press, Nashville, TN, 1998.
[7] R. Franke, H. Hagen, and G.M. Nielson. Least squares surface approximation to scattered data using multiquadric functions. Advances in Computational Mathematics, 2:81-99, 1994.
[8] E.A. Galperin and Q. Zheng. Solution and control of PDE via global optimization methods. Comput. Math. Appl., 25:103-118, 1993.
[9] R.E. Hagan and E.J. Kansa. Studies of the r parameter in the multiquadric function applied to ground water pumping. J. Appl. Sci. Comp., 1:266-281, 1994.
[10] Y.C. Hon and E.J. Kansa. Circumventing the ill-conditioning problem with multiquadric radial basis functions: applications to elliptic partial differential equations. Comput. Math. Appl., 39:123-137, 2000.

[11] E.J. Kansa. Multiquadrics - a scattered data approximation scheme with applications to computational fluid-dynamics - I: Surface approximations and partial derivative estimates. Comput. Math. Appl., 19:127-145, 1990.
[12] F.J. Narcowich, N. Sivakumar, and J.D. Ward. On the sensitivity of radial basis interpolation with respect to minimal data separation distance. Technical Report 240, Department of Mathematics, Texas A&M University, 1991.
[13] F.J. Narcowich, N. Sivakumar, and J.D. Ward. On condition numbers associated with radial-function interpolation. Journal of Mathematical Analysis and Applications, 186:457-485, 1994.
[14] F.J. Narcowich and J.D. Ward. Norm estimates for the inverses of a general class of scattered-data radial-function interpolation matrices. Journal of Approximation Theory, 69:84-109, 1992.
[15] R. Schaback. Lower bounds for norms of inverses of interpolation matrices for radial basis functions. Journal of Approximation Theory, 79(2):287-306, 1994.
[16] R. Schaback. Error estimates and condition numbers for radial basis function interpolation. Advances in Computational Mathematics, 3:251-264, 1995.
[17] R. Schaback and H. Wendland. Characterization and construction of radial basis functions. In N. Dyn, D. Leviatan, and D. Levin, editors, Eilat Proceedings. Cambridge University Press, 2000.
[18] H. Wendland. Konstruktion und Untersuchung radialer Basisfunktionen mit kompaktem Träger. Dissertation, Universität Göttingen, 1996.