MATH 590: Meshfree Methods
Chapter 34: Improving the Condition Number of the Interpolation Matrix
Greg Fasshauer, Department of Applied Mathematics, Illinois Institute of Technology
Fall 2010

Outline
1 Preconditioning: Two Simple Examples
2 Early Preconditioners
3 Preconditioned GMRES via Approximate Cardinal Functions
4 Change of Basis
5 Effect of the Better Basis on the Condition Number
6 Effect of the Better Basis on the Accuracy of the Interpolant

In Chapter 16 we noted that the system matrices arising in scattered data interpolation with radial basis functions tend to become very ill-conditioned as the minimal separation distance q_X between the data sites x_1, ..., x_N is reduced. Therefore it is natural to devise strategies to prevent such instabilities by preconditioning the system, or by finding a better basis for the approximation space we are using.

The preconditioning approach is standard procedure in numerical linear algebra. In fact we can use any of the well-established methods (such as preconditioned conjugate gradient iteration) to improve the stability and convergence of the interpolation systems that arise for strictly positive definite functions.

Example: The sparse systems that arise in (multilevel) interpolation with compactly supported radial basis functions can be solved efficiently with the preconditioned conjugate gradient method.

The second approach to improving the condition number of the interpolation system, i.e., the idea of using a more stable basis, is well known from univariate polynomial and spline interpolation.

The Lagrange basis functions for univariate polynomial interpolation are the ideal basis for stably solving the interpolation equations since the resulting interpolation matrix is the identity matrix (which is much better conditioned than, e.g., the Vandermonde matrix stemming from a monomial basis).

B-splines give rise to diagonally dominant, sparse system matrices which are much easier to deal with than the matrices resulting from a truncated power basis representation of the spline interpolant.

Both of these examples are studied in great detail in standard numerical analysis texts (see, e.g., [Kincaid and Cheney (2002)]) or in the literature on splines (see, e.g., [Schumaker (1981)]). We discuss an analogous approach for RBFs below.

Before we describe any of the specialized preconditioning procedures for radial basis function interpolation matrices we give two examples presented in the early RBF paper [Jackson (1989)] to illustrate the effects of and motivation for preconditioning in the context of radial basis functions.

1 Preconditioning: Two Simple Examples

Example: Let s = 1 and consider interpolation based on ϕ(r) = r with no polynomial terms added. As data sites we choose X = {1, 2, ..., 10}. This leads to the system matrix

    A = [ 0  1  2  3 ... 9
          1  0  1  2 ... 8
          2  1  0  1 ... 7
          3  2  1  0 ... 6
          :  :  :  :     :
          9  8  7  6 ... 0 ]

with l2-condition number cond(A) ≈ 67.

Example (cont.): Instead of solving the linear system Ac = y, where y = [y_1, ..., y_10]^T ∈ R^10 is a vector of given data values, we can find a suitable matrix B to pre-multiply both sides of the equation such that the system is simpler to solve. Ideally, the new system matrix BA should be the identity matrix, i.e., B should be an approximate inverse of A. Once we've found an appropriate matrix B, we must now solve the linear system BAc = By. The matrix B is usually referred to as the (left) preconditioner of the linear system.

Example (cont.): For the matrix A above we can choose a preconditioner B as

    B = [  1    0    0    0  ...   0    0
          1/2  -1   1/2   0  ...   0    0
           0   1/2  -1   1/2 ...   0    0
           :    :    :    :        :    :
           0    0    0    0  ...  -1   1/2
           0    0    0    0  ...   0    1  ]

This leads to the following preconditioned system matrix

    BA = [ 0  1  2 ... 8  9
           0  1  0 ... 0  0
           0  0  1 ... 0  0
           :  :  :     :  :
           0  0  0 ... 1  0
           9  8  7 ... 1  0 ]

in the system BAc = By.

Example (cont.): Note that BA is almost an identity matrix. One can easily check that now cond(BA) ≈ 45.

The motivation for this choice of B is the following. The function ϕ(r) = r, or Φ(x) = |x|, is a fundamental solution of the Laplacian (= d²/dx² in the one-dimensional case), i.e.,

    Δ Φ(x) = (d²/dx²) |x| = 2 δ_0(x),

where δ_0 is the Dirac delta function centered at zero. Thus, B is chosen as a discretization of (one half of) the Laplacian with special choices at the endpoints of the data set.
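To make this concrete, here is a minimal MATLAB sketch (our own, not part of the original slides) that builds the distance matrix A for X = {1, ..., 10} and the second-difference preconditioner B above, and checks the two condition numbers; it assumes a MATLAB version with implicit expansion (R2016b or later).

    N = 10;
    x = (1:N)';                 % data sites X = {1,...,10}
    A = abs(x - x');            % A_jk = |x_j - x_k|, since phi(r) = r
    B = eye(N);                 % identity rows at the two endpoints
    for j = 2:N-1               % interior rows: (1/2)*[1 -2 1] second difference
        B(j,j-1:j+1) = [1/2, -1, 1/2];
    end
    BA = B*A;                   % almost the identity matrix
    disp([cond(A), cond(BA)])   % the slides report approx. 67 and 45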

Example: For non-uniformly distributed data we can use a different discretization of the Laplacian for each row of B. To see this, let s = 1, X = {1, 3/2, 5/2, 4, 9/2}, and again consider interpolation with the radial function ϕ(r) = r. Then

    A = [  0   1/2  3/2   3   7/2
          1/2   0    1   5/2   3
          3/2   1    0   3/2   2
           3   5/2  3/2   0   1/2
          7/2   3    2   1/2   0  ]

with cond(A) ≈ 18.15.

Example (cont.): If we choose

    B = [ 1    0     0     0    0
          1   -3/2  1/2    0    0
          0   1/2  -5/6   1/3   0
          0    0    1/3  -4/3   1
          0    0     0     0    1 ]

based on second-order differences at the points in X, then the preconditioned system to be solved becomes

    [  0   1/2  3/2   3   7/2
       0    1    0    0    0
       0    0    1    0    0
       0    0    0    1    0
      7/2   3    2   1/2   0  ] c = By.

Example (cont.): Once more, this system is almost trivial to solve and has an improved condition number of cond(BA) ≈ 8.94.
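The non-uniform case can be reproduced in the same way. The following sketch (again our own, under the same assumptions as above) assembles the row-wise difference weights 1/(2h_-), -(1/(2h_-) + 1/(2h_+)), 1/(2h_+) for the spacings h_- = x_j - x_{j-1} and h_+ = x_{j+1} - x_j, which yields exactly the matrix B shown above.

    x = [1; 3/2; 5/2; 4; 9/2];  % non-uniform data sites
    N = numel(x);
    A = abs(x - x');
    B = eye(N);
    for j = 2:N-1
        hm = x(j) - x(j-1);     % spacing to the left neighbor
        hp = x(j+1) - x(j);     % spacing to the right neighbor
        B(j,j-1:j+1) = [1/(2*hm), -(1/(2*hm) + 1/(2*hp)), 1/(2*hp)];
    end
    BA = B*A;                   % rows 2,...,N-1 become unit vectors
    disp([cond(A), cond(BA)])   % the slides report approx. 18.15 and 8.94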

2 Early Preconditioners

Ill-conditioning of the interpolation matrices was identified as a serious problem very early, and Nira Dyn along with some of her co-workers (see, e.g., [Dyn (1987), Dyn (1989), Dyn and Levin (1983), Dyn et al. (1986)]) provided some of the first preconditioning strategies tailored especially to RBF interpolants. For the following discussion we consider the general interpolation problem that includes polynomial reproduction (see Chapter 6).

We have to solve the following system of linear equations

    [ A    P ] [ c ]   [ y ]
    [ P^T  O ] [ d ] = [ 0 ]        (1)

with A_jk = ϕ(||x_j − x_k||), j, k = 1, ..., N, P_jl = p_l(x_j), j = 1, ..., N, l = 1, ..., M, c = [c_1, ..., c_N]^T, d = [d_1, ..., d_M]^T, y = [y_1, ..., y_N]^T, O an M × M zero matrix, and 0 a zero vector of length M with M = dim Π^s_{m−1}. Here ϕ should be strictly conditionally positive definite of order m and radial on R^s, and the set X = {x_1, ..., x_N} should be (m − 1)-unisolvent.
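As an illustration (a sketch of our own, not from the slides), the following MATLAB code assembles and solves system (1) for thin plate splines in R^2, where m = 2, the polynomial part is linear, and M = 3; the point set dsites and the data y are toy choices.

    dsites = rand(50,2);  N = size(dsites,1);        % toy centers in [0,1]^2
    y = sin(pi*dsites(:,1)).*dsites(:,2);            % toy data values
    DM = sqrt((dsites(:,1)-dsites(:,1)').^2 + ...    % pairwise distance matrix
              (dsites(:,2)-dsites(:,2)').^2);
    A = DM.^2.*log(DM + (DM==0));                    % TPS phi(r) = r^2 log r (0 on diagonal)
    P = [ones(N,1), dsites];                         % linear polynomial basis {1, x, y}
    M = size(P,2);
    sol = [A, P; P', zeros(M)] \ [y; zeros(M,1)];    % block system (1)
    c = sol(1:N);  d = sol(N+1:end);                 % RBF and polynomial coefficients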

The preconditioning scheme proposed by Dyn and her co-workers is a generalization of the simple differencing scheme discussed above. It is motivated by the fact that the polyharmonic splines (i.e., thin plate splines and radial powers)

    ϕ(r) = r^{2k−s} log r,  s even,
    ϕ(r) = r^{2k−s},        s odd,        2k > s,

are fundamental solutions of the k-th iterated Laplacian in R^s, i.e.,

    Δ^k ϕ(||x||) = c δ_0(x),

where δ_0 is the Dirac delta function centered at the origin, and c is an appropriate constant.

Remark: For the (inverse) multiquadrics ϕ(r) = (1 + r²)^{±1/2}, which are also discussed in the papers mentioned above, application of the Laplacian yields a similar limiting behavior, i.e.,

    lim_{r→∞} Δ^k ϕ(r) = 0,

and for r → 0, Δ^k ϕ(r) ≫ 1.

One now wants to discretize the Laplacian on the (irregular) mesh given by the (scattered) data sites in X. To this end the authors of [Dyn et al. (1986)] suggest the following procedure for the case of scattered data interpolation over R^2.

1 Start with a triangulation of the set X, e.g., the Delaunay triangulation will do. This triangulation can be visualized as follows.
  1 Begin with the points in X and construct their Dirichlet tessellation or Voronoi diagram. The Dirichlet tile of a particular point x is that subset of points in R^2 which are closer to x than to any other point in X. The green lines in the figure denote the Dirichlet tessellation for the set of 25 Halton points (circles) in [0, 1]^2.
  2 Construct the Delaunay triangulation, which is the dual of the Dirichlet tessellation, i.e., connect all strong neighbors in the Dirichlet tessellation, i.e., points whose tiles share a common edge. The blue lines in the figure denote the corresponding Delaunay triangulation of the 25 Halton points.

Figure: Dirichlet tessellation (green lines) and corresponding Delaunay triangulation (blue lines) of 25 Halton points (red circles).

The figure was created in MATLAB (CreatePoints is from the book's accompanying toolbox) using the commands

    dsites = CreatePoints(25,2,'h');
    tes = delaunayn(dsites);
    triplot(tes,dsites(:,1),dsites(:,2),'b-')
    hold on
    [vx, vy] = voronoi(dsites(:,1),dsites(:,2),tes);
    plot(dsites(:,1),dsites(:,2),'ro',vx,vy,'g-')
    axis([0 1 0 1])

2 Discretize the Laplacian on this triangulation. In order to also take into account the boundary points, Dyn, Levin and Rippa instead use a discretization of an iterated Green's formula which has the space Π^2_{m−1} as its null space. The necessary partial derivatives are then approximated on the triangulation using certain sets of vertices of the triangulation (three points for first-order partials, six for second-order).

The discretization described above yields the matrix B = (b_ji), j, i = 1, ..., N, as the preconditioning matrix in a way analogous to the previous section. We now obtain

    (BA)_jk = Σ_{i=1}^N b_ji ϕ(||x_i − x_k||) ≈ Δ^m ϕ(||· − x_k||)(x_j),   j, k = 1, ..., N.   (2)

This matrix has the property that the entries close to the diagonal are large compared to those away from the diagonal, which decay to zero as the distance between the two points involved goes to infinity.

Since the construction of B (in step 2 above) ensures that part of the preconditioned block matrix vanishes, namely BP = O, one must now solve the non-square system

    [ B  O ] [ A    P ] [ c ]   [ B  O ] [ y ]
    [ O  I ] [ P^T  O ] [ d ] = [ O  I ] [ 0 ]

which reduces to

    [ BA  ]       [ By ]
    [ P^T ] c  =  [  0 ].

Remark: The square system BAc = By is singular. However, it is shown in [Dyn et al. (1986)] that the additional constraints P^T c = 0 guarantee existence of a unique solution. The coefficients d in the original expansion of the interpolant Pf can be obtained by solving Pd = y − Ac, i.e., by fitting the polynomial part of the expansion to the residual y − Ac.
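In MATLAB the non-square preconditioned system can be handled, e.g., by a least-squares solve of the stacked equations, followed by the residual fit for d described in the remark; this is our own sketch and assumes A, P, B, and y from a setup like the one sketched for system (1).

    c = [B*A; P'] \ [B*y; zeros(size(P,2),1)];   % enforces BAc = By together with P'c = 0
    d = P \ (y - A*c);                           % fit polynomial part to the residual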

This approach leads to localized basis functions Ψ that are linear combinations of the original basis functions ϕ. More precisely,

    Ψ_j(x) = Σ_{i=1}^N b_ji ϕ(||x − x_i||) ≈ Δ^m ϕ(||· − x_j||)(x),   (3)

where the coefficients b_ji are determined via the discretization described above.

Remark: The localized basis functions Ψ_j, j = 1, ..., N, can be viewed as an alternative (better conditioned) basis for the approximation space spanned by the functions Φ_j = ϕ(||· − x_j||). We will come back to this idea below.

In [Dyn et al. (1986)] the authors describe how the preconditioned matrices can be used efficiently in conjunction with various iterative schemes such as Chebyshev iteration or a version of the conjugate gradient method. They also mention smoothing of noisy data, or low-pass filtering, as other applications for this preconditioning scheme.

    ϕ     N    Grid I orig.   Grid I precond.   Grid II orig.   Grid II precond.
    TPS    49      1181            4.3              1885              3.4
          121      6764            5.1             12633              3.9
    MQ     49      7274           69.2             17059            222.8
          121     10556          126.0            107333            576.0

Table: Condition numbers without and with preconditioning for TPS and MQ on different data sets from [Dyn et al. (1986)]. The shape parameter ε for the multiquadrics was chosen to be the reciprocal of the average mesh size. A linear term was added for thin plate splines, and a constant for multiquadrics.

Remark: The most dramatic improvement is achieved for thin plate splines. This is to be expected since the method described above is tailored to these functions. For multiquadrics an application of the Laplacian does not yield the delta function, but for values of r close to zero gives just relatively large values.

Another early preconditioning strategy was suggested in [Powell (1994)]. Powell uses Householder transformations to convert the matrix of the interpolation system

    [ A    P ] [ c ]   [ y ]
    [ P^T  O ] [ d ] = [ 0 ]

to a symmetric positive definite matrix, and then uses the conjugate gradient method. However, Powell reports that this method is not particularly effective for large thin plate spline interpolation problems in R^2.

In [Baxter (1992), Baxter (2002)] preconditioned conjugate gradient methods for solving the interpolation problem are discussed in the case when Gaussians or multiquadrics are used on a regular grid. The resulting matrices are Toeplitz matrices, and a large body of literature exists for dealing with matrices having this special structure (see, e.g., [Chan and Strang (1989)]).

3 Preconditioned GMRES via Approximate Cardinal Functions

More recently, Beatson, Cherrie and Mouat [Beatson et al. (1999)] proposed a preconditioner for the iterative solution of radial basis function interpolation systems in conjunction with the GMRES method of [Saad and Schultz (1986)]. The GMRES method is a general purpose iterative solver that can be applied to nonsymmetric (nondefinite) systems. For fast convergence the matrix should be preconditioned such that its eigenvalues are clustered around one and away from the origin.

Obviously, if the basis functions were cardinal, then the matrix would be the identity matrix with all its eigenvalues equal to one. Therefore, the GMRES method would converge in a single iteration. Consequently, the preconditioning strategy of [Beatson et al. (1999)] for the GMRES method is to obtain a preconditioning matrix B that is close to the inverse of A.

Since it is too expensive to find the true cardinal basis (this would involve at least as much work as solving the interpolation problem), the idea pursued in [Beatson et al. (1999)] (and suggested earlier in [Beatson et al. (1996), Beatson and Powell (1993)]) is to find approximate cardinal functions similar to the functions Ψ_j in the previous subsection. Now, however, there is also an emphasis on efficiency, i.e., we want local approximate cardinal functions (cf. the use of approximate cardinal functions in the Faul-Powell algorithm). Several different strategies for the construction of these approximate cardinal functions were suggested in [Beatson et al. (1999)]. We will now explain the basic idea.

Given the centers x_1, ..., x_N for the basis functions in the RBF interpolant

    Pf(x) = Σ_{j=1}^N c_j ϕ(||x − x_j||),

the j-th approximate cardinal function is given as a linear combination of the basis functions Φ_i = ϕ(||· − x_i||), where i runs over (some subset of) {1, ..., N}, i.e.,

    Ψ_j = Σ_{i=1}^N b_ji ϕ(||· − x_i||) + p_j.   (4)

Here p_j is a polynomial in Π^s_{m−1} that is used only in the conditionally positive definite case, and the coefficients b_ji satisfy the usual conditions

    Σ_{i=1}^N b_ji p(x_i) = 0   for all p ∈ Π^s_{m−1}.   (5)

The key feature in designing the approximate cardinal functions is to have only a few, n ≪ N, of the coefficients in (4) nonzero. In that case the functions Ψ_j are found by solving small n × n linear systems, which is much more efficient than dealing with the original N × N system. For example, in [Beatson et al. (1999)] the authors use n ≈ 50 for problems involving up to 10000 centers.

The resulting preconditioned system is of the same form as before, i.e., we now have to solve the preconditioned problem (BA)c = By, where the entries of the matrix BA are just Ψ_j(x_k), j, k = 1, ..., N.

The simplest strategy for determining the coefficients b_ji:
select the n nearest neighbors of x_j,
find b_ji by solving the (local) cardinal interpolation problem

    Ψ_j(x_i) = δ_ij,   i = 1, ..., n,

subject to the moment constraint (5) listed above. Here δ_ij is the Kronecker delta, so that Ψ_j is one at x_j and zero at all of the neighboring centers x_i. A sketch of this construction follows.
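The following sketch is our own construction, for a strictly positive definite ϕ (here a Gaussian), so that no polynomial part and no moment constraint (5) are needed; it builds a sparse approximate-inverse preconditioner from such local cardinal problems and feeds it to MATLAB's gmres. The centers dsites, the data y, and the shape parameter ep are assumed given.

    ep = 3;  rbf = @(r) exp(-(ep*r).^2);          % Gaussian (ep is an assumed value)
    N = size(dsites,1);  n = min(50,N);           % n nearest neighbors, n << N
    D = sqrt(max(sum(dsites.^2,2) + sum(dsites.^2,2)' ...
            - 2*(dsites*dsites'), 0));            % pairwise distance matrix
    A = rbf(D);
    [~, idx] = sort(D, 2);                        % idx(j,1:n): neighbors of x_j (x_j first)
    rows = zeros(N*n,1); cols = rows; vals = rows;
    for j = 1:N
        nb = idx(j,1:n);                          % local center set
        b = A(nb,nb) \ double(nb(:) == j);        % local cardinal problem Psi_j(x_i) = delta_ij
        k = (j-1)*n + (1:n);
        rows(k) = j;  cols(k) = nb;  vals(k) = b;
    end
    B = sparse(rows, cols, vals, N, N);           % sparse left preconditioner
    c = gmres(B*A, B*y, [], 1e-10, 200);          % preconditioned GMRES iteration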

Remark: This basic strategy is improved by adding so-called special points that are distributed (very sparsely) throughout the domain (for example near corners of the domain, or at other significant locations).

    ϕ      N     unprecond.    local precond.   local precond. w/special
    TPS    289   4.005e+006     1.464e+003       5.721e+000
          1089   2.753e+008     6.359e+005       1.818e+002
          4225   2.605e+009     2.381e+006       1.040e+006
    MQ     289   1.506e+008     3.185e+003       2.639e+002
          1089   2.154e+009     8.125e+005       5.234e+004
          4225   3.734e+010     1.390e+007       4.071e+004

Table: l2-condition numbers without and with preconditioning for TPS and MQ at randomly distributed points in [0, 1]^2 from [Beatson et al. (1999)]. Local precond.: uses the n = 50 nearest neighbors to determine the approximate cardinal functions. W/special: uses the 41 nearest neighbors plus nine special points placed uniformly in the unit square.

The effect of the preconditioning on the performance of the GMRES algorithm is, e.g.,
a reduction from 103 to 8 iterations for the 289 point data set for thin plate splines,
a reduction from 145 iterations to 11 for multiquadrics.

Remark: An extension of the ideas of [Beatson et al. (1999)] to linear systems arising in the collocation solution of partial differential equations (see Chapter 38) was explored in Mouat's Ph.D. thesis [Mouat (2001)] and also in the recent paper [Ling and Kansa (2005)].

4 Change of Basis

Another approach to obtaining a better conditioned interpolation system is to work with a different basis for the approximation space. While this idea is implicitly addressed in the preconditioning strategies discussed above, we will now make it our primary goal to find a better conditioned basis for the RBF approximation space.

Example: Univariate piecewise linear splines and natural cubic splines can be interpreted as radial basis functions, and we know that B-splines form stable bases for those spaces. Therefore, it should be possible to generalize this idea to other RBFs.

The process of finding a better basis for conditionally positive definite RBFs is closely connected to finding the reproducing kernel of the associated native space. Since we did not elaborate on the construction of native spaces for conditionally positive definite functions earlier, we will now present the relevant formulas without going into any further details.

In particular, for polyharmonic splines we will be able to find a basis that is in a certain sense homogeneous. Therefore the condition number of the related interpolation matrix will depend only on the number N of data points, but not on their separation distance (cf. the discussion in Chapter 16). This approach was suggested by Beatson, Light and Billings [Beatson et al. (2000)], and has its roots in [Sibson and Stone (1991)].

Let Φ be a strictly conditionally positive definite kernel of order m, and X = {x_1, ..., x_N} ⊆ Ω ⊆ R^s be an (m − 1)-unisolvent set of centers. Then the reproducing kernel for the native space N_Φ(Ω) is given by

    K(x, y) = Φ(x, y) − Σ_{k=1}^M p_k(x) Φ(x_k, y) − Σ_{l=1}^M p_l(y) Φ(x, x_l)
              + Σ_{k=1}^M Σ_{l=1}^M p_k(x) p_l(y) Φ(x_k, x_l) + Σ_{l=1}^M p_l(x) p_l(y),   (6)

where the points {x_1, ..., x_M} comprise an (m − 1)-unisolvent subset of X and the polynomials p_k, k = 1, ..., M, form a cardinal basis for Π^s_{m−1} on this subset, whose dimension is M = (s+m−1 choose m−1), i.e.,

    p_l(x_k) = δ_kl,   k, l = 1, ..., M.
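As a concrete illustration (our own sketch, not from the slides), the following MATLAB code realizes (6) for thin plate splines in R^2 with m = 2: the cardinal linear polynomials p_1, p_2, p_3 are obtained by inverting a 3 × 3 Vandermonde-type matrix. The unisolvent subset Xm is hard-coded as a toy triangle here; for the native-space theory it should be chosen among the data sites.

    tps = @(r2) 0.5*r2.*log(r2 + (r2==0));        % r^2 log r, written in terms of r^2
    Phi = @(X,Y) tps((X(:,1)-Y(:,1)').^2 + (X(:,2)-Y(:,2)').^2);
    Xm = [0 0; 1 0; 0 1];                         % (m-1)-unisolvent subset {x_1,x_2,x_3}
    Pm = @(X) [ones(size(X,1),1), X];             % linear polynomial basis {1, x, y}
    C = Pm(Xm) \ eye(3);                          % cardinal coefficients: p_l(x_k) = delta_kl
    pc = @(X) Pm(X)*C;                            % rows [p_1(x), p_2(x), p_3(x)]
    K = @(X,Y) Phi(X,Y) - pc(X)*Phi(Xm,Y) - Phi(X,Xm)*pc(Y)' ...
             + pc(X)*Phi(Xm,Xm)*pc(Y)' + pc(X)*pc(Y)';      % kernel (6)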

Remark: This formulation of the reproducing kernel for the conditionally positive definite case also appears in the statistics literature in the context of kriging (see, e.g., [Berlinet and Thomas-Agnan (2004)]). In that context the kernel K is a covariance kernel associated with the generalized covariance Φ. These two kernels give rise to the kriging equations and dual kriging equations, respectively.

An immediate consequence of having found the reproducing kernel K is that we can express the RBF interpolant to values of some function f given on X in the form

    Pf(x) = Σ_{j=1}^N c_j K(x, x_j),   x ∈ R^s.

Note that the kernel K used here is a strictly positive definite kernel (since it is a reproducing kernel) with built-in polynomial precision.
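With the kernel sketch above, interpolation in the new basis is then (again our own illustration, assuming dsites and y as before):

    Kmat = K(dsites, dsites);        % strictly positive definite system matrix
    c = Kmat \ y;                    % interpolation coefficients
    Pf = @(X) K(X, dsites)*c;        % evaluate the interpolant at new points X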