Scattered Data Approximation of Noisy Data via Iterated Moving Least Squares


Gregory E. Fasshauer and Jack G. Zhang

Abstract. In this paper we focus on two methods for multivariate approximation problems with non-uniformly distributed noisy data. The new approach proposed here is an iterated approximate moving least squares method. We compare our method to ridge regression, which filters out noise by using a smoothing parameter. Our goal is to find an optimal number of iterations for the iterative method and an optimal smoothing parameter for ridge regression so that the corresponding approximants do not exactly interpolate the given data but are reasonably close. For both approaches we implement variants of leave-one-out cross-validation in order to find these optimal values. The shape parameter for the basis functions is also optimized in our algorithms.

1. Introduction

In multivariate data fitting problems we are usually given data (x_j, f_j), j = 1, ..., N, with distinct x_j ∈ R^s and f_j ∈ R, and we want to find a (continuous) function P_f : R^s → R such that

    P_f(x_j) = f_j,   j = 1, ..., N.   (1)

For classical interpolation methods we assume P_f to be a linear combination of a set of basis functions φ_j, i.e.,

    P_f(x) = Σ_{j=1}^N c_j φ_j(x).   (2)

The coefficients c_j are determined by satisfying the constraint (1). To guarantee existence of a unique set of c_j for arbitrary distinct x_j, it is known that, when s > 1 and N > 1, the φ_j must be x_j-dependent.

In the literature, the φ_j are frequently chosen to be basis functions generated by shifts of a single strictly positive definite basic function φ, i.e.,

    φ_j(x) = φ(x − x_j).   (3)

Correspondingly, (2) is now rewritten as

    P_f(x) = Σ_{j=1}^N c_j φ(x − x_j).   (4)

To express the problem in matrix-vector form we let c = [c_1, ..., c_N]^T, f = [f_1, ..., f_N]^T, and Φ_ij = φ(x_i − x_j). Then, enforcing the interpolation constraint (1) with a P_f of the form (4) leads to

    Φ c = f.   (5)

The fact that φ is assumed to be strictly positive definite guarantees that the interpolation matrix Φ is invertible. Therefore, c = Φ^{−1} f. Note that, in this classical setup, there is no restriction on the distribution of the data sites x_j except for being pairwise distinct. However, the distribution of the x_j is an important issue in the so-called approximate moving least squares (AMLS) approximation method. More details will be given later.

The definition of the basic function φ often involves a multiplicative parameter ε, e.g., the Gaussian φ(x) = e^{−ε²‖x‖²}. This shape parameter can be used to control the flatness of φ, and finding a good value of ε is a major issue in data approximation (see, e.g., the recent paper [3]). Throughout the rest of this paper we frequently omit ε in our notation, but the reader should be aware that some quantities, e.g., φ, Φ and c, may be ε-dependent.

The function P_f based on (5) exactly interpolates the given data. When N is large, solving this (often dense and ill-conditioned) linear system can be rather time-consuming. In many practical applications the data values f_j will contain a certain amount of inaccuracy such as experimental noise or computational error (e.g., if the data come from some other previous computation). This suggests that we may not want to invest too much effort into obtaining a solution that exactly interpolates the data. Usually one attempts to overcome these difficulties by giving up some of the exactness of the solution function. For more details on scattered data approximation by positive definite functions we refer the reader to either [1] or [7].
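As an illustration of the interpolation system (5), the following minimal NumPy sketch assembles the Gaussian kernel matrix and solves for the coefficients. It is our own illustration, not the authors' code, and the function names (gaussian_kernel_matrix, rbf_interpolant) are hypothetical.

import numpy as np

def gaussian_kernel_matrix(X, Y, eps):
    # Phi[i, j] = exp(-eps^2 * ||X[i] - Y[j]||^2), cf. (3) with the Gaussian basic function
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-eps ** 2 * d2)

def rbf_interpolant(X, f, eps):
    # Solve Phi c = f, cf. (5); Phi is symmetric positive definite for distinct centers
    Phi = gaussian_kernel_matrix(X, X, eps)
    c = np.linalg.solve(Phi, f)
    return lambda Xe: gaussian_kernel_matrix(Xe, X, eps) @ c   # evaluate P_f, cf. (4)

# tiny usage example on 1D data
X = np.linspace(0, 1, 11).reshape(-1, 1)
f = np.sin(2 * np.pi * X[:, 0])
P = rbf_interpolant(X, f, eps=3.0)
print(P(X)[:3], f[:3])   # P_f(x_j) = f_j up to round-off

For large N this direct solve becomes expensive and ill-conditioned, which is precisely the motivation for the iterative approach discussed next.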

2. Interpolation by Iterated AMLS Approximation

In the standard approach to moving least squares (MLS) approximation one takes a local (usually polynomial) approximation space and determines the point-wise approximation as the solution of a locally weighted norm minimization problem. This and many more details of MLS and AMLS approximation are described in [1]. The dimension of this polynomial space is usually far smaller than the data size. This polynomial approach can be equivalently interpreted as a quasi-interpolation approach, namely the Backus-Gilbert method. Similarly to the local polynomial approach, the Backus-Gilbert method avoids solving a full-size linear system. However, its generating functions are determined point-wise, that is, unless a polynomial space of low order is used, evaluation of the Backus-Gilbert solution at each point still requires the solution of a linear system whose size is based on the dimension of the polynomial space. The advantage of the Backus-Gilbert formulation is that it suggests an extension to a continuous approximant whose evaluation no longer requires the solution of any linear systems.

2.1. From the Backus-Gilbert Method to AMLS Approximation

Assume a quasi-interpolant Q_f : R^s → R of the form

    Q_f(x) = Σ_{j=1}^N f_j φ(x − x_j).   (6)

It is known that if φ is a cardinal function, then Q_f interpolates the given data and minimizes the point-wise error. In the Backus-Gilbert formulation of MLS approximation we do not intend to use a cardinal φ, but instead we determine the φ(x − x_j) at a fixed x via the following setup. Assume

    φ(x − x_j) = w(x − x_j) q(x − x_j),   q ∈ Π_d^s,   (7)

with a positive weight function w : R^s → R and a polynomial q of degree at most d in s variables which satisfies the point-wise conditions

    Σ_{j=1}^N (x − x_j)^α φ(x − x_j) = δ_{α,0},   0 ≤ |α| ≤ d.   (8)

It will become apparent later why we use the same notation φ for the MLS generating function as we used to define the interpolant P_f in (4). However, at this point the Backus-Gilbert method itself does not require φ to be strictly positive definite.

Condition (8) represents a set of discrete moment conditions. These moment conditions and the form (7) of the generating function can be derived from the Gram matrix associated with the standard MLS approximation. For d > 1 it is quite complicated to obtain an explicit analytic formulation for φ. However, if the x_j are (coordinate-wise) uniformly distributed, then (8) may be approximately converted to an integral form via construction of a Riemann sum, i.e.,

    ∫_{R^s} x̃^α φ(x̃) dx̃ = δ_{α,0},   0 ≤ |α| ≤ d.   (9)

In analogy to (8), condition (9) now represents a set of continuous moment conditions. These conditions are the key ingredient of AMLS approximation, a method closely related to the general approximate approximation paradigm first suggested by Maz'ya in the early 1990s (see, e.g., [5] and references therein). The advantage of the continuous formulation (9) is that we can now explicitly construct the basic function φ. In fact, it is clear that any appropriately normalized φ will satisfy (9) for d = 0. Condition (9) also implies a polynomial reproduction via function convolution, i.e.,

    ∫_{R^s} x̃^α φ(x − x̃) dx̃ = ∫_{R^s} (x − x̃)^α φ(x̃) dx̃ = x^α,   0 ≤ |α| ≤ d.   (10)

Consequently,

    p(x) = ∫_{R^s} p(x̃) φ(x − x̃) dx̃,   p ∈ Π_d^s.   (11)

This reproduction of degree-d polynomials is what ensures good approximation properties of the scheme. Again, in the case of a uniform center distribution, the polynomial reproduction (11) can be approximately converted to a discrete form, i.e.,

    p(x) ≈ Σ_{j=1}^N p(x_j) φ(x − x_j),   p ∈ Π_d^s.   (12)

This interpretation leads us to believe that the (discrete) AMLS approximation method should also inherit properties of function convolution. In particular, convolution is known to be a smoothing operation when it does not exactly reconstruct the convolved function. Consequently, we expect Q_f to carry some high-level smoothing effects. A small quasi-interpolation sketch for the case d = 0 is given below.
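The following sketch is our own illustration (not taken from the paper) of the d = 0 quasi-interpolant (6) on uniformly spaced 1D data. The normalization factor ε/√π is an assumption chosen so that, for spacing h, the discrete version of the d = 0 moment condition Σ_j φ(x − x_j) ≈ 1 holds away from the boundary.

import numpy as np

def amls_quasi_interpolant(x, f, eps, h):
    # d = 0 quasi-interpolant (6) with a normalized Gaussian generating function:
    # phi(u) = (eps/sqrt(pi)) * exp(-eps^2 * u^2 / h^2), chosen so that for uniformly
    # spaced centers with spacing h, sum_j phi(x - x_j) is approximately 1.
    def Q(xe):
        d = xe[:, None] - x[None, :]
        Phi = (eps / np.sqrt(np.pi)) * np.exp(-(eps ** 2) * d ** 2 / h ** 2)
        return Phi @ f
    return Q

# usage: smooth noisy samples of a test function on uniformly spaced points
x = np.linspace(0, 1, 101)
h = x[1] - x[0]
rng = np.random.default_rng(0)
f = np.sin(2 * np.pi * x) + 0.03 * rng.standard_normal(x.size)
Q = amls_quasi_interpolant(x, f, eps=1.0, h=h)
print(np.max(np.abs(Q(x) - np.sin(2 * np.pi * x))))   # smooth, but only an approximation

No linear system is solved here; the price is that Q_f only approximates, rather than interpolates, the data, which is exactly the smoothing behavior exploited in the remainder of the paper.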

2.2. Iterated AMLS Approximation

We now choose a φ that not only meets the continuous moment conditions but is also strictly positive definite. With such a φ, the quasi-interpolant Q_f may in fact be pushed as close to the interpolant P_f as possible via an iterative process initialized with Q_f. Details of this approach are presented in [2]. A basic algorithm is summarized below:

    Q_f^(0)(x) = Σ_{j=1}^N f_j φ(x − x_j),   (13)

    Q_f^(n+1)(x) = Q_f^(n)(x) + Σ_{j=1}^N [f_j − Q_f^(n)(x_j)] φ(x − x_j),   (14)

with residual functions

    R^(n)(x) = Q_f^(n+1)(x) − Q_f^(n)(x) = Σ_{j=1}^N [f_j − Q_f^(n)(x_j)] φ(x − x_j).   (15)

According to the results presented in [2], if the matrix with entries Φ_ij = φ(x_i − x_j) satisfies ‖Φ‖₂ < 1, then Q_f^(n) → P_f, i.e., R^(n)(x) → 0 as n → ∞. This iterative process will eventually reach the exact interpolant simply by reducing the residuals at the data points during each iteration. Note that the residuals themselves are also quasi-interpolants of the same construction as the initial Q_f^(0).

We now remind the reader of the data site distribution issue. AMLS approximation in general is best formulated for a uniform data distribution. However, since the iterated Q_f^(n) is guaranteed to arrive at the interpolant P_f, it is not necessary to ask for evenly spaced data in the construction of Q_f^(0) or of the residual function R^(n) in each iteration.

The initial AMLS approximant will usually be smooth but rather far away from the exact true function. As the iteration proceeds, the residuals decrease, but noise is also being picked up (if there is any). Thus, one might expect that during the iteration there is a moment when the approximant is relatively ideal, meaning it is smooth and also close to the true function. This optimal moment is known to be problem-dependent and also φ-dependent. We will come back to this discussion in detail later.
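Before listing concrete basic functions, here is a minimal sketch of the residual iteration (13)-(15), written in terms of the expansion coefficients of Q_f^(n). It is our own illustration, not the authors' implementation; the ad-hoc factor scale is ours and is used only to enforce ‖Φ‖₂ < 1 for this demonstration.

import numpy as np

def gaussian_matrix(X, Y, eps, h):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-(eps / h) ** 2 * d2)

def iterated_amls_coefficients(Phi, f, n_iter):
    # Writing Q_f^(n)(x) = sum_j c_j^(n) phi(x - x_j), the iteration (13)-(15) becomes
    # c^(0) = f, c^(n+1) = c^(n) + (f - Phi c^(n)),
    # i.e. the truncated Neumann series c^(n) = sum_{i=0..n} (I - Phi)^i f, cf. (22).
    c = f.copy()
    for _ in range(n_iter):
        c = c + (f - Phi @ c)
    return c

# usage with a scaled-down Gaussian matrix so that ||Phi||_2 < 1 (needed for convergence)
X = np.random.default_rng(1).random((200, 2))
f = np.sin(2 * np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
Phi = gaussian_matrix(X, X, eps=2.0, h=1.0)
scale = 0.9 / np.linalg.norm(Phi, 2)
c10 = iterated_amls_coefficients(scale * Phi, f, n_iter=10)
print(np.linalg.norm((scale * Phi) @ c10 - f))   # residual at the data sites shrinks with n

Note that each iteration costs only a matrix-vector product; no linear system is solved, which is the main computational advantage over the interpolation system (5).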

To end this section, let us make sure that there are some basic functions φ available for this iterative AMLS method. For example, any normalized strictly positive definite function will do for d = 0, e.g., inverse multiquadrics

    φ(x) = (1 + ‖x‖²)^{−β},   β > 0, for all s,

or Wendland's compactly supported functions for specific s. Normalized Laguerre-Gaussians are available for arbitrary d and s (see [2]), e.g.,

    φ(x) = e^{−‖x‖²} for d = 0 and all s,
    φ(x) = (3/2 − x²) e^{−x²} for d = 1 and s = 1.

3. Ridge Regression for Noise Filtering

A well-known approach to dealing with noisy data is ridge regression or smoothing spline approximation (see, e.g., the seminal paper [4] by Kimeldorf and Wahba). The basic idea is to give up some accuracy and then obtain a solution via a minimization process that balances a trade-off between smoothness and closeness to the data. If we again assume the solution P_f to be of the form (4) (but not subject to constraint (1)), then, using a smoothing parameter γ > 0, the coefficient vector c is determined by

    min_c  c^T Φ c + γ Σ_{j=1}^N (P_f(x_j) − f_j)².   (16)

Here the quadratic form c^T Φ c is an indicator of the smoothness of P_f and, clearly, Σ_{j=1}^N (P_f(x_j) − f_j)² measures the closeness to the data. It can be shown that solving (16) leads to the linear system

    (Φ + (1/γ) I) c = f.   (17)

Note that this is just a regularized version of the interpolation system (5). It is clear that (17) will have a unique solution provided φ is strictly positive definite, since then the matrix Φ + (1/γ) I is also positive definite for γ > 0. The function P_f found by solving (17) will no longer exactly fit the given data simply because of the use of γ. The parameter γ balances the trade-off between closeness of fit and smoothness. Obviously, a large γ makes P_f fit the data more closely. In the limiting case, Φ + (1/γ) I → Φ as γ → ∞, which recovers the standard interpolation formulation given by (5). We will use cross-validation to get a good value of the smoothing parameter. A different approach was suggested in the recent paper [8].
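A minimal sketch of the regularized solve (17) with a Gaussian basic function follows. It is our own illustration under the assumptions stated in its comments; the function name ridge_regression_coefficients is hypothetical.

import numpy as np

def ridge_regression_coefficients(X, f, eps, gamma):
    # Solve (Phi + (1/gamma) I) c = f, cf. (17), with Phi_ij = exp(-eps^2 ||x_i - x_j||^2)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-eps ** 2 * d2)
    c = np.linalg.solve(Phi + np.eye(len(X)) / gamma, f)
    return Phi, c

# usage on noisy samples; larger gamma -> closer to interpolation, smaller gamma -> smoother fit
rng = np.random.default_rng(2)
X = rng.random((100, 2))
f = np.exp(-((X[:, 0] - 0.5) ** 2 + (X[:, 1] - 0.5) ** 2)) + 0.03 * rng.standard_normal(100)
Phi, c = ridge_regression_coefficients(X, f, eps=3.0, gamma=50.0)
print(np.linalg.norm(Phi @ c - f))   # nonzero: the fit no longer interpolates the noisy data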

4. Leave-One-Out Cross-Validation

As mentioned earlier, the optimal number of iterations for iterated AMLS approximation and the optimal smoothing parameter γ for ridge regression are φ-dependent. For a given problem, in addition to the optimal smoothing parameter γ used in ridge regression and the optimal number of iterations for iterated AMLS approximation, we seek to find an optimal shape parameter ε for the basic function φ in both methods. Indeed, an optimal ε is also required for standard interpolation. In order to find these optimal values we use a variation of the so-called leave-one-out cross-validation (LOOCV) algorithm. We now describe LOOCV along with its implementation.

4.1. LOOCV for Ridge Regression

For k = 1, ..., N, denote

    x^[k] = [x_1, ..., x_{k−1}, x_{k+1}, ..., x_N]^T,   f^[k] = [f_1, ..., f_{k−1}, f_{k+1}, ..., f_N]^T,

and, correspondingly,

    Φ_ij^[k] = φ(ε(x_i^[k] − x_j^[k])),   c^[k] = [c_1^[k], ..., c_{N−1}^[k]]^T.

We do this to indicate that in forming the approximation we do not take the whole data set into consideration, but leave out the k-th element. Hence, all involved quantities are now of size N − 1 instead of N. We implement the ridge regression method to find c^[k], that is,

    c^[k] = (Φ^[k] + (1/γ) I)^{−1} f^[k].   (18)

Correspondingly, the ridge regression approximation for leaving out the k-th data value becomes

    P_f^[k](x) = Σ_{j=1}^{N−1} c_j^[k] φ(ε(x − x_j^[k])).   (19)

In order to obtain a cost function for the optimization of γ (and ε) we compute the residual at the left-out data point x_k, i.e.,

    e_k = f_k − P_f^[k](x_k),   (20)

and denote e = [e_1, ..., e_N]^T. Judging from the formulation presented thus far, this optimization demands a large amount of computation, especially when the set of γ candidates and the set of ε candidates are large. We adapt a formula given by Rippa for standard RBF interpolation that significantly reduces the computation (see [6]). We denote Φ̃ = Φ + (1/γ) I, where Φ is the interpolation matrix based on the whole data set. Following Rippa's analysis in [6] one can show that the e_k given by (20) can actually be computed as

    e_k = (Φ̃^{−1} f)_k / (Φ̃^{−1})_{kk}.   (21)

For each pair of γ and ε, (20) requires the computation of a matrix inverse and function evaluations for every different k, whereas with (21) we need to compute and store the inverse of the full-size matrix only once for all k = 1, ..., N.
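The following sketch (ours, with hypothetical names) evaluates the LOOCV cost ‖e‖ via Rippa's formula (21) for a small grid of (ε, γ) candidates, so that only one matrix inverse is needed per parameter pair instead of N leave-one-out solves.

import numpy as np

def loocv_cost_ridge(X, f, eps, gamma):
    # Rippa-type LOOCV cost for ridge regression, cf. (21):
    # e_k = (A^{-1} f)_k / (A^{-1})_kk with A = Phi + (1/gamma) I; returns ||e||
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    A = np.exp(-eps ** 2 * d2) + np.eye(len(X)) / gamma
    Ainv = np.linalg.inv(A)
    e = (Ainv @ f) / np.diag(Ainv)
    return np.linalg.norm(e)

# usage: brute-force search over a small candidate grid
rng = np.random.default_rng(3)
X = rng.random((80, 2))
f = np.sin(np.pi * X[:, 0]) * np.sin(np.pi * X[:, 1]) + 0.03 * rng.standard_normal(80)
grid = [(eps, gamma) for eps in (1.0, 2.0, 4.0) for gamma in (10.0, 100.0, 1000.0)]
best = min(grid, key=lambda p: loocv_cost_ridge(X, f, *p))
print("best (eps, gamma):", best)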

4.2. LOOCV for Iterated AMLS Approximation

As mentioned earlier, the iterated AMLS approximant carries a strong smoothing property which we will use for noise filtering purposes. Again, we employ a variant of LOOCV to find an optimal shape parameter ε and an optimal number of iterations. An interpretation of the iterated AMLS method is that the iterated approximant Q_f^(n) approximates the interpolant P_f by representing the inverse of the interpolation matrix Φ by a truncated Neumann series (see [2]), i.e.,

    Φ^{−1} ≈ Σ_{i=0}^n (I − Φ)^i.   (22)

To optimize Q_f^(n) with respect to n and ε using LOOCV, we again define a cost function based on Rippa's formula (21), which was originally derived for the interpolation process. This leads to the following procedure. For a fixed ε we formulate a cost function: during each iteration, apply an approximate version of Rippa's formula based on (22), i.e.,

    e_k^(n) = (Σ_{i=0}^n (I − Φ)^i f)_k / (Σ_{i=0}^n (I − Φ)^i)_{kk},   (23)

and denote e^(n) = [e_1^(n), ..., e_N^(n)]^T. If ‖e^(n) − e^(n−1)‖ falls below a specified tolerance or the maximal iteration number is reached, we stop the iteration and store e(ε) = e^(n). An optimal ε is then obtained by minimizing ‖e(ε)‖ with respect to ε.

Since the matrix I − Φ is non-singular (see [2]) we can perform an eigen-decomposition, i.e., I − Φ = X Λ X^{−1}, so that the cost of updating (23) during the iteration can be reduced to O(N²) from the O(N³) that would be required by direct matrix multiplication.
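Below is a small sketch (ours, with assumed names) of the cost function (23) for choosing the number of iterations n at a fixed ε. For simplicity it accumulates the Neumann series directly rather than using the eigen-decomposition speed-up described above, and it assumes Φ has already been scaled so that ‖Φ‖₂ < 1.

import numpy as np

def loocv_errors_iterated_amls(Phi, f, max_iter=50, tol=1e-4):
    # Accumulate S_n = sum_{i=0..n} (I - Phi)^i and evaluate the LOOCV vector (23):
    # e_k^(n) = (S_n f)_k / (S_n)_kk; stop when successive cost vectors stop changing.
    N = len(f)
    I = np.eye(N)
    T = I - Phi          # iteration matrix of the Neumann series (22)
    P = I.copy()         # current power (I - Phi)^i
    S = I.copy()         # partial sum S_n
    e_prev = (S @ f) / np.diag(S)
    for n in range(1, max_iter + 1):
        P = P @ T
        S = S + P
        e = (S @ f) / np.diag(S)
        if np.linalg.norm(e - e_prev) < tol:
            break
        e_prev = e
    return e, n

# usage: Phi scaled so that its 2-norm is below 1, as in the iterated AMLS sketch above
rng = np.random.default_rng(4)
X = rng.random((120, 2))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
Phi = np.exp(-4.0 * d2)
Phi = 0.9 * Phi / np.linalg.norm(Phi, 2)
f = np.cos(np.pi * X[:, 0]) + 0.03 * rng.standard_normal(120)
e, n_opt = loocv_errors_iterated_amls(Phi, f)
print("stopped after", n_opt, "iterations; LOOCV cost:", np.linalg.norm(e))

An outer loop over candidate values of ε, minimizing the returned cost, then yields the optimized shape parameter in the same spirit as the ridge regression search above.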

5. Discussion of Numerical Results

In our numerical experiments we use Gaussian basic functions. All computations are performed on the unit square [0, 1]². The scaling of the basic functions is achieved by using two parameters: the shape parameter ε defines the general shape, and a second parameter h that reflects the data spacing is used to mimic stationary approximation. Thus, we end up using expansions such as

    P_f(x) = Σ_{j=1}^N c_j e^{−(ε²/h²)‖x − x_j‖²}   or   Q_f(x) = Σ_{j=1}^N f_j e^{−(ε²/h²)‖x − x_j‖²}.

Test data are taken from a modified Franke function. In Figure 1 we compare the convergence behavior of Gaussian RBF interpolation to that of the iterated AMLS algorithm for a problem without noise. The interpolation error is represented by the highly oscillatory curve caused by the associated numerical instabilities. The other two curves represent the errors for iterated AMLS approximation: the dash-dotted curve is for the initial iterate Q_f^(0) and the solid one corresponds to Q_f^(10). When ε is small the iterated AMLS method successfully overcomes the numerical difficulties associated with the interpolant since the residual iteration acts as a smoothing operation. The graph on the right is a zoom-in of the left graph for a small ε range.

[Fig. 1. RMS-errors vs. ε (curves for P_f, Q_f^(0) and Q_f^(10)) and illustration of instabilities; 33 × 33 Halton points.]

Results based on data with 3% noise added are summarized in Table 1. In addition to the two methods discussed in this paper we include results for an iterated Shepard approximation. Clearly, simple RBF interpolation (with a fixed ε) is not recommended for noisy data. All three methods produce similar error drops. However, searching for optimal parameters for ridge regression is considerably more time-consuming than for the two iterative methods, which both do not require solutions of linear systems during the iterations. We note that for the two iterative methods the initial approximant based on an optimal ε is flat and does not approximate the data very well. However, convergence is rather fast during early iterations, and then slows down once the approximant starts feeding on the noise. We hope to detect this point, at which the noise is optimally filtered, with the LOOCV algorithm. In summary, both iterative methods produce reliably smooth results in significantly less time than the ridge regression method, and therefore perform quite well in this noise filtering application.

              N = 9       25          81          289         1089
  AMLS
    RMSerr    4.80e-3     1.53e-3     6.42e-4     4.39e-4     2.48e-4
    ε         1.479865    1.268158    0.911530    0.652600    0.468866
    no. iter. 7           6           6           4           3
    time      0.2         0.4         1.0         5.7         254
  Shepard
    RMSerr    5.65e-3     1.96e-3     8.21e-4     5.10e-4     2.73e-4
    ε         2.194212    1.338775    0.895188    0.656272    0.468866
    no. iter. 7           7           7           5           3
    time      0.2         0.4         2.1         7.0         225
  Ridge
    RMSerr    3.54e-3     1.62e-3     7.20e-4     4.57e-4     2.50e-4
    ε         2.083918    0.930143    0.704802    0.382683    0.181895
    γ         100.0       100.0       47.324909   26.614484   29.753487
    time      0.3         1.2         1.1         21.3        672

Tab. 1. Comparison of different methods using Gaussians for noisy data sampled at Halton points.

References

1. Fasshauer, G. E., Meshfree Approximation Methods with MATLAB, World Scientific Publishers, Singapore, to appear.
2. Fasshauer, G. E., and J. G. Zhang, Iterated approximate moving least squares approximation, Proceedings of Meshless Methods (Lisbon 2005), Springer, in press.
3. Fornberg, B., and J. Zuev, The Runge phenomenon and spatially variable shape parameters in RBF interpolation, preprint, 2006.
4. Kimeldorf, G., and G. Wahba, Some results on Tchebycheffian spline functions, J. Math. Anal. Appl. 33 (1971), 82-95.
5. Maz'ya, V., and G. Schmidt, On quasi-interpolation with nonuniformly distributed centers on domains and manifolds, J. Approx. Theory 110 (2001), 125-145.
6. Rippa, S., An algorithm for selecting a good value for the parameter c in radial basis function interpolation, Adv. Comput. Math. 11 (1999), 193-210.
7. Wendland, H., Scattered Data Approximation, Cambridge University Press, Cambridge, 2004.
8. Wendland, H., and C. Rieger, Approximate interpolation with applications to selecting smoothing parameters, Numer. Math. 101 (2005), 729-748.

Gregory E. Fasshauer and Jack G. Zhang
Illinois Institute of Technology
Chicago, IL
fasshauer@iit.edu   http://www.math.iit.edu/~fass
zhangou@iit.edu   http://www.iit.edu/~zhangou