MATH 590: Meshfree Methods


MATH 590: Meshfree Methods. Chapter 34: Improving the Condition Number of the Interpolation Matrix. Greg Fasshauer, Department of Applied Mathematics, Illinois Institute of Technology, Fall 2010.

Outline
1 Preconditioning: Two Simple Examples
2 Early Preconditioners
3 Preconditioned GMRES via Approximate Cardinal Functions
4 Change of Basis
5 Effect of the Better Basis on the Condition Number
6 Effect of the Better Basis on the Accuracy of the Interpolant


In Chapter 16 we noted that the system matrices arising in scattered data interpolation with radial basis functions tend to become very ill-conditioned as the minimal separation distance q_X between the data sites x_1, ..., x_N is reduced. Therefore it is natural to devise strategies to prevent such instabilities, either by preconditioning the system or by finding a better basis for the approximation space we are using.


The preconditioning approach is standard procedure in numerical linear algebra. In fact, we can use any of the well-established methods (such as preconditioned conjugate gradient iteration) to improve the stability and convergence of the interpolation systems that arise for strictly positive definite functions.

Example: The sparse systems that arise in (multilevel) interpolation with compactly supported radial basis functions can be solved efficiently with the preconditioned conjugate gradient method.


The second approach to improving the condition number of the interpolation system, i.e., the idea of using a more stable basis, is well known from univariate polynomial and spline interpolation. The Lagrange basis functions for univariate polynomial interpolation are the ideal basis for stably solving the interpolation equations, since the resulting interpolation matrix is the identity matrix (which is much better conditioned than, e.g., the Vandermonde matrix stemming from a monomial basis). B-splines give rise to diagonally dominant, sparse system matrices which are much easier to deal with than the matrices resulting from a truncated power basis representation of the spline interpolant. Both of these examples are studied in great detail in standard numerical analysis texts (see, e.g., [Kincaid and Cheney (2002)]) or in the literature on splines (see, e.g., [Schumaker (1981)]). We discuss an analogous approach for RBFs below.

Before we describe any of the specialized preconditioning procedures for radial basis function interpolation matrices, we give two examples presented in the early RBF paper [Jackson (1989)] to illustrate the effects of and motivation for preconditioning in the context of radial basis functions.



Preconditioning: Two Simple Examples

Example: Let s = 1 and consider interpolation based on ϕ(r) = r with no polynomial terms added. As data sites we choose X = {1, 2, ..., 10}. This leads to the system matrix A with entries A_jk = |j - k|, j, k = 1, ..., 10, whose l2-condition number is cond(A) ≈ 67.
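This matrix is easy to reproduce numerically. The following sketch (in Python/NumPy rather than the course's MATLAB) assembles A from the setup above and checks its condition number.

```python
import numpy as np

# Distance matrix for phi(r) = r with data sites X = {1, ..., 10}:
# A_jk = |x_j - x_k| = |j - k|.
x = np.arange(1, 11, dtype=float)
A = np.abs(x[:, None] - x[None, :])

cond_A = np.linalg.cond(A)  # l2-condition number, approximately 67
print(f"cond(A) = {cond_A:.2f}")
```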


Example (cont.): Instead of solving the linear system Ac = y, where y = [y_1, ..., y_10]^T ∈ R^10 is a vector of given data values, we can find a suitable matrix B to pre-multiply both sides of the equation such that the system is simpler to solve. Ideally, the new system matrix BA should be the identity matrix, i.e., B should be an approximate inverse of A. Once we've found an appropriate matrix B, we must now solve the linear system BAc = By. The matrix B is usually referred to as the (left) preconditioner of the linear system.


Example (cont.): For the matrix A above we can choose a suitable preconditioner B (entries omitted here). This leads to the preconditioned system matrix BA in the system BAc = By.


Example (cont.): Note that BA is almost an identity matrix. One can easily check that now cond(BA) ≈ 45. The motivation for this choice of B is the following. The function ϕ(r) = r, or Φ(x) = |x|, is a fundamental solution of the Laplacian (= d^2/dx^2 in the one-dimensional case), i.e.,

Δ Φ(x) = (d^2/dx^2) |x| = 2 δ_0(x),

where δ_0 is the Dirac delta function centered at zero. Thus, B is chosen as a discretization of the Laplacian with special choices at the endpoints of the data set.
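The construction can be checked numerically. The sketch below uses one plausible reconstruction of B (interior rows are scaled second differences, endpoint rows are identity rows; the matrix on the original slide may differ in its endpoint choices). With this choice the interior rows of BA come out as exact identity rows, and the condition number drops to roughly 45.

```python
import numpy as np

x = np.arange(1, 11, dtype=float)
N = len(x)
A = np.abs(x[:, None] - x[None, :])

# One plausible discretization of (1/2) d^2/dx^2 on the interior points,
# with identity rows at the two endpoints (an assumption, not necessarily
# the exact matrix on the original slide).
B = np.zeros((N, N))
B[0, 0] = 1.0
B[-1, -1] = 1.0
for j in range(1, N - 1):
    B[j, j - 1 : j + 2] = [0.5, -1.0, 0.5]

BA = B @ A
# Interior rows of BA are exactly identity rows, since
# |x_{j-1}-x_k| - 2|x_j-x_k| + |x_{j+1}-x_k| = 2*delta_jk on this grid.
print(f"cond(A)  = {np.linalg.cond(A):.1f}")
print(f"cond(BA) = {np.linalg.cond(BA):.1f}")
```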


Example: For non-uniformly distributed data we can use a different discretization of the Laplacian for each row of B. To see this, let s = 1, X = {1, 3/2, 5/2, 4, 9/2}, and again consider interpolation with the radial function ϕ(r) = r. Then we obtain the system matrix A with entries A_jk = |x_j - x_k| (the displayed matrix and the value of cond(A) are omitted here).


Example (cont.): If we choose B based on second-order backward differences of the points in X (entries omitted here), then the preconditioned system to be solved becomes BAc = By.

Example (cont.): Once more, this system is almost trivial to solve, and the condition number cond(BA) is much improved.
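The same numerical check works for the non-uniform example. The B below uses second divided differences adapted to the spacing of X, with identity rows at the endpoints; this is one plausible reading of the slide's difference construction, not necessarily the exact matrix shown there.

```python
import numpy as np

# Non-uniform data sites and phi(r) = r.
x = np.array([1.0, 1.5, 2.5, 4.0, 4.5])
N = len(x)
A = np.abs(x[:, None] - x[None, :])

# Plausible preconditioner: interior rows are scaled second divided
# differences (they annihilate linear functions and pick out the kink
# of |. - x_j| at x_j); identity rows at the endpoints.
B = np.zeros((N, N))
B[0, 0] = 1.0
B[-1, -1] = 1.0
for j in range(1, N - 1):
    hl = x[j] - x[j - 1]          # left spacing
    hr = x[j + 1] - x[j]          # right spacing
    B[j, j - 1] = 1.0 / (2.0 * hl)
    B[j, j + 1] = 1.0 / (2.0 * hr)
    B[j, j] = -(B[j, j - 1] + B[j, j + 1])

BA = B @ A                        # interior rows are identity rows
print(f"cond(A)  = {np.linalg.cond(A):.2f}")
print(f"cond(BA) = {np.linalg.cond(BA):.2f}")
```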



Early Preconditioners

Ill-conditioning of the interpolation matrices was identified as a serious problem very early, and Nira Dyn along with some of her co-workers (see, e.g., [Dyn (1987), Dyn (1989), Dyn and Levin (1983), Dyn et al. (1986)]) provided some of the first preconditioning strategies tailored especially to RBF interpolants. For the following discussion we consider the general interpolation problem that includes polynomial reproduction (see Chapter 6).

We have to solve the following system of linear equations

[ A   P ] [ c ]   [ y ]
[ P^T O ] [ d ] = [ 0 ],        (1)

with A_jk = ϕ(||x_j - x_k||), j, k = 1, ..., N, P_jl = p_l(x_j), j = 1, ..., N, l = 1, ..., M, c = [c_1, ..., c_N]^T, d = [d_1, ..., d_M]^T, y = [y_1, ..., y_N]^T, O an M x M zero matrix, and 0 a zero vector of length M with M = dim Π^s_{m-1}. Here ϕ should be strictly conditionally positive definite of order m and radial on R^s, and the set X = {x_1, ..., x_N} should be (m-1)-unisolvent.
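System (1) is straightforward to assemble and solve directly for a small example. The sketch below uses s = 1 and ϕ(r) = r with a constant polynomial term (m = 1, so M = 1 and P is a column of ones); the data values y come from a hypothetical test function.

```python
import numpy as np

# Assemble and solve system (1) for s = 1, phi(r) = r, m = 1.
x = np.linspace(0.0, 1.0, 9)
N = len(x)
y = np.sin(2 * np.pi * x)            # hypothetical data values

A = np.abs(x[:, None] - x[None, :])  # A_jk = phi(|x_j - x_k|)
P = np.ones((N, 1))                  # constant polynomial basis, M = 1

K = np.block([[A, P], [P.T, np.zeros((1, 1))]])
rhs = np.concatenate([y, [0.0]])
sol = np.linalg.solve(K, rhs)
c, d = sol[:N], sol[N:]

# The interpolant reproduces the data, and P^T c = 0 holds by the last row.
s_at_nodes = A @ c + P @ d
print(np.max(np.abs(s_at_nodes - y)), (P.T @ c).item())
```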


The preconditioning scheme proposed by Dyn and her co-workers is a generalization of the simple differencing scheme discussed above. It is motivated by the fact that the polyharmonic splines (i.e., thin plate splines and radial powers)

ϕ(r) = r^(2k-s) log r,   s even,
ϕ(r) = r^(2k-s),         s odd,     2k > s,

are fundamental solutions of the k-th iterated Laplacian in R^s, i.e.,

Δ^k ϕ(||x||) = c δ_0(x),

where δ_0 is the Dirac delta function centered at the origin, and c is an appropriate constant.
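The piecewise definition translates directly into code. The helper below is an illustration (it is not part of the course toolbox) that evaluates ϕ for given k and space dimension s.

```python
import numpy as np

# Polyharmonic spline phi for given k and dimension s (requires 2k > s):
#   s even: phi(r) = r^(2k-s) * log r
#   s odd : phi(r) = r^(2k-s)
def polyharmonic(r, k, s):
    r = np.asarray(r, dtype=float)
    p = 2 * k - s
    if s % 2 == 0:
        # r^p log r, with the removable singularity at r = 0 set to 0
        return np.where(r > 0, r**p * np.log(np.maximum(r, 1e-300)), 0.0)
    return r**p

# Thin plate spline: k = 2, s = 2 gives r^2 log r.
print(polyharmonic(np.e, 2, 2))
```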


Remark: For the (inverse) multiquadrics ϕ(r) = (1 + r^2)^(±1/2), which are also discussed in the papers mentioned above, application of the Laplacian yields a similar limiting behavior, i.e.,

lim_{r→∞} Δ^k ϕ(r) = 0,

while for r close to 0 we have Δ^k ϕ(r) >> 1.


One now wants to discretize the Laplacian on the (irregular) mesh given by the (scattered) data sites in X. To this end the authors of [Dyn et al. (1986)] suggest the following procedure for the case of scattered data interpolation over R^2.
1 Start with a triangulation of the set X, e.g., the Delaunay triangulation will do. This triangulation can be visualized as follows.
  1 Begin with the points in X and construct their Dirichlet tessellation or Voronoi diagram. The Dirichlet tile of a particular point x is that subset of points in R^2 which are closer to x than to any other point in X. The green lines in the figure denote the Dirichlet tessellation for the set of 25 Halton points (circles) in [0, 1]^2.
  2 Construct the Delaunay triangulation, which is the dual of the Dirichlet tessellation, i.e., connect all strong neighbors in the Dirichlet tessellation, that is, points whose tiles share a common edge. The blue lines in the figure denote the corresponding Delaunay triangulation of the 25 Halton points.


Figure: Dirichlet tessellation (green lines) and corresponding Delaunay triangulation (blue lines) of 25 Halton points (red circles). The figure was created in MATLAB using the commands

dsites = CreatePoints(25,2,'h');
tes = delaunayn(dsites);
triplot(tes,dsites(:,1),dsites(:,2),'b-')
hold on
[vx, vy] = voronoi(dsites(:,1),dsites(:,2),tes);
plot(dsites(:,1),dsites(:,2),'ro',vx,vy,'g-')
axis([0 1 0 1])
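A rough Python analogue of this snippet, for readers without the course's MATLAB toolbox, can be built with scipy.spatial; the small halton helper below stands in for CreatePoints.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

# Radical-inverse (van der Corput) sequence; pairing bases 2 and 3
# gives 2D Halton points in (0, 1)^2.
def halton(n, base):
    out = np.empty(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        out[i] = r
    return out

dsites = np.column_stack([halton(25, 2), halton(25, 3)])
tri = Delaunay(dsites)   # Delaunay triangulation (dual of the Voronoi diagram)
vor = Voronoi(dsites)    # Dirichlet tessellation / Voronoi diagram

print(tri.simplices.shape, vor.vertices.shape)
```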


2 Discretize the Laplacian on this triangulation. In order to also take into account the boundary points, Dyn, Levin and Rippa instead use a discretization of an iterated Green's formula which has the space Π^2_{m-1} as its null space. The necessary partial derivatives are then approximated on the triangulation using certain sets of vertices of the triangulation (three points for first-order partials, six for second-order).


The discretization described above yields the matrix B = (b_ji), j, i = 1, ..., N, as the preconditioning matrix in a way analogous to the previous section. We now obtain

(BA)_jk = Σ_{i=1}^N b_ji ϕ(||x_i - x_k||) ≈ Δ^m ϕ(||· - x_k||)(x_j),   j, k = 1, ..., N.   (2)

This matrix has the property that the entries close to the diagonal are large compared to those away from the diagonal, which decay to zero as the distance between the two points involved goes to infinity.


Since the construction of B (in step 2 above) ensures that part of the preconditioned block matrix vanishes, namely BP = O, one must now solve the non-square system

[ B O ] [ A   P ] [ c ]   [ B O ] [ y ]
[ O I ] [ P^T O ] [ d ] = [ O I ] [ 0 ]

which reduces to

[ BA  ]       [ By ]
[ P^T ] c  =  [ 0  ].

Remark: The square system BAc = By is singular. However, it is shown in [Dyn et al. (1986)] that the additional constraints P^T c = 0 guarantee existence of a unique solution. The coefficients d in the original expansion of the interpolant P_f can be obtained by solving Pd = y - Ac, i.e., by fitting the polynomial part of the expansion to the residual y - Ac.
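The two-stage solve (stacked non-square system for c, then Pd = y - Ac for d) can be sketched as follows. The difference matrix B here is only a simple stand-in satisfying BP = O (every row sums to zero), not the Green's-formula discretization of [Dyn et al. (1986)], and the data values are hypothetical.

```python
import numpy as np

# Solve [BA; P^T] c = [By; 0], then P d = y - A c,
# for s = 1, phi(r) = r, m = 1 (constant term, P = column of ones).
x = np.linspace(0.0, 1.0, 9)
N = len(x)
y = np.cos(3 * x)                    # hypothetical data values
A = np.abs(x[:, None] - x[None, :])
P = np.ones((N, 1))

# Stand-in B: second differences in the interior, one-sided first
# differences at the ends; all rows sum to zero, so B P = O.
B = np.zeros((N, N))
B[0, :2] = [-1.0, 1.0]
B[-1, -2:] = [-1.0, 1.0]
for j in range(1, N - 1):
    B[j, j - 1 : j + 2] = [0.5, -1.0, 0.5]
assert np.allclose(B @ P, 0.0)

# Stacked non-square system, solved in the least squares sense.
M = np.vstack([B @ A, P.T])
rhs = np.concatenate([B @ y, [0.0]])
c, *_ = np.linalg.lstsq(M, rhs, rcond=None)

# Fit the polynomial part to the residual y - A c.
d, *_ = np.linalg.lstsq(P, y - A @ c, rcond=None)

resid = np.max(np.abs(A @ c + P @ d - y))
print(f"max interpolation error at the nodes: {resid:.2e}")
```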


This approach leads to localized basis functions Ψ that are linear combinations of the original basis functions ϕ. More precisely,

Ψ_j(x) = Σ_{i=1}^N b_ji ϕ(||x - x_i||) ≈ Δ^m ϕ(||· - x_j||)(x),   (3)

where the coefficients b_ji are determined via the discretization described above.

Remark: The localized basis functions Ψ_j, j = 1, ..., N, can be viewed as an alternative (better conditioned) basis for the approximation space spanned by the functions Φ_j = ϕ(||· - x_j||). We will come back to this idea below.


In [Dyn et al. (1986)] the authors describe how the preconditioned matrices can be used efficiently in conjunction with various iterative schemes such as Chebyshev iteration or a version of the conjugate gradient method. They also mention smoothing of noisy data or low-pass filtering as other applications for this preconditioning scheme.

ϕ     N    Grid I orig.   Grid I precond.   Grid II orig.   Grid II precond.
TPS   …    …              …                 …               …
MQ    …    …              …                 …               …

Table: Condition numbers without and with preconditioning for TPS and MQ on different data sets from [Dyn et al. (1986)]. The shape parameter ε for the multiquadrics was chosen to be the reciprocal of the average mesh size. A linear term was added for thin plate splines, and a constant for multiquadrics.


Remark: The most dramatic improvement is achieved for thin plate splines. This is to be expected, since the method described above is tailored to these functions. For multiquadrics, an application of the Laplacian does not yield the delta function, but for values of r close to zero gives just relatively large values.


Another early preconditioning strategy was suggested in [Powell (1994)]. Powell uses Householder transformations to convert the matrix of the interpolation system (1) to a symmetric positive definite matrix, and then uses the conjugate gradient method. However, Powell reports that this method is not particularly effective for large thin plate spline interpolation problems in R^2.


In [Baxter (1992), Baxter (2002)] preconditioned conjugate gradient methods for solving the interpolation problem are discussed in the case when Gaussians or multiquadrics are used on a regular grid. The resulting matrices are Toeplitz matrices, and a large body of literature exists for dealing with matrices having this special structure (see, e.g., [Chan and Strang (1989)]).
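The Toeplitz structure is easy to verify: on a regular grid the Gaussian matrix entries depend only on the index difference j - k, so every diagonal is constant. The shape parameter below is a hypothetical choice.

```python
import numpy as np

# Gaussian interpolation matrix on a regular grid:
# A_jk = exp(-eps^2 (x_j - x_k)^2) depends only on j - k, hence Toeplitz.
eps = 3.0                      # hypothetical shape parameter
x = np.linspace(0.0, 1.0, 20)
A = np.exp(-((eps * (x[:, None] - x[None, :])) ** 2))

# Check that every diagonal of A is constant.
for offset in range(-19, 20):
    diag = np.diagonal(A, offset)
    assert np.allclose(diag, diag[0])
print("A is Toeplitz; its first row and column determine the whole matrix")
```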



83 Preconditioned GMRES via Approximate Cardinal Functions More recently, Beatson, Cherrie and Mouat [Beatson et al. (1999)] proposed a preconditioner for the iterative solution of radial basis function interpolation systems in conjunction with the GMRES method of [Saad and Schultz (1986)]. The GMRES method is a general purpose iterative solver that can be applied to nonsymmetric (nondefinite) systems. For fast convergence the matrix should be preconditioned such that its eigenvalues are clustered around one and away from the origin. Obviously, if the basis functions were cardinal, then the matrix would be the identity matrix with all its eigenvalues equal to one. Therefore, the GMRES method would converge in a single iteration. Consequently, the preconditioning strategy of [Beatson et al. (1999)] for the GMRES method is to obtain a preconditioning matrix B that is close to the inverse of A. fasshauer@iit.edu MATH 590 Chapter 34 32

Since it is too expensive to find the true cardinal basis (this would involve at least as much work as solving the interpolation problem), the idea pursued in [Beatson et al. (1999)] (and suggested earlier in [Beatson et al. (1996), Beatson and Powell (1993)]) is to find approximate cardinal functions similar to the functions $\Psi_j$ in the previous subsection.

Now, however, there is also an emphasis on efficiency, i.e., we want local approximate cardinal functions (cf. the use of approximate cardinal functions in the Faul–Powell algorithm).

Several different strategies for the construction of these approximate cardinal functions were suggested in [Beatson et al. (1999)]. We will now explain the basic idea.

Given the centers $x_1, \ldots, x_N$ for the basis functions in the RBF interpolant
$$\mathcal{P}_f(x) = \sum_{j=1}^{N} c_j\, \varphi(\|x - x_j\|),$$
the $j$-th approximate cardinal function is given as a linear combination of the basis functions $\Phi_i = \varphi(\|\cdot - x_i\|)$, where $i$ runs over (some subset of) $\{1, \ldots, N\}$, i.e.,
$$\Psi_j = \sum_{i=1}^{N} b_{ji}\, \varphi(\|\cdot - x_i\|) + p_j. \qquad (4)$$
Here $p_j$ is a polynomial in $\Pi^s_{m-1}$ that is used only in the conditionally positive definite case, and the coefficients $b_{ji}$ satisfy the usual conditions
$$\sum_{i=1}^{N} b_{ji}\, p(x_i) = 0 \quad \text{for all } p \in \Pi^s_{m-1}. \qquad (5)$$

The key feature in designing the approximate cardinal functions is to allow only a few, $n \ll N$, of the coefficients in (4) to be nonzero. In that case the functions $\Psi_j$ are found by solving small $n \times n$ linear systems, which is much more efficient than dealing with the original $N \times N$ system. For example, in [Beatson et al. (1999)] the authors use $n \approx 50$ regardless of the (much larger) total number $N$ of centers.

The resulting preconditioned system is of the same form as before, i.e., we now have to solve the preconditioned problem
$$(BA)\,c = By,$$
where the entries of the matrix $BA$ are just $\Psi_j(x_k)$, $j, k = 1, \ldots, N$.

The simplest strategy for determining the coefficients $b_{ji}$ is to select the $n$ nearest neighbors of $x_j$ and to find the $b_{ji}$ by solving the (local) cardinal interpolation problem
$$\Psi_j(x_i) = \delta_{ij}, \qquad i = 1, \ldots, n,$$
subject to the moment constraint (5) listed above. Here $\delta_{ij}$ is the Kronecker delta, so that $\Psi_j$ is one at $x_j$ and zero at all of the neighboring centers $x_i$.
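The nearest-neighbor construction just described can be sketched in a few lines. The following is a minimal Python illustration (not the authors' code): it uses a strictly positive definite kernel, so the polynomial term in (4) and the moment constraints (5) drop out, and all concrete choices (the exponential kernel, the random point set, n = 30) are made up for the example.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import gmres
from scipy.spatial import cKDTree

def local_preconditioner(X, phi, n):
    """Row j of B holds the coefficients b_ji of the j-th approximate cardinal
    function, supported only on the n nearest neighbors of x_j (cf. (4))."""
    N = len(X)
    tree = cKDTree(X)
    B = lil_matrix((N, N))
    for j in range(N):
        _, idx = tree.query(X[j], k=n)      # local center set (includes x_j itself)
        A_loc = phi(np.linalg.norm(X[idx][:, None] - X[idx][None, :], axis=-1))
        rhs = (idx == j).astype(float)      # Kronecker delta: Psi_j(x_i) = delta_ij
        B[j, idx] = np.linalg.solve(A_loc, rhs)   # small n-by-n local solve
    return csr_matrix(B)

rng = np.random.default_rng(0)
X = rng.random((200, 2))                    # scattered centers in [0, 1]^2
phi = lambda r: np.exp(-3.0 * r)            # a strictly positive definite RBF
A = phi(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]       # data values

B = local_preconditioner(X, phi, n=30)      # B is close to A^{-1}
c, info = gmres(B @ A, B @ y)               # preconditioned system (BA)c = By
assert info == 0                            # GMRES converged
```

Since each row of B is obtained from an n-by-n solve, building the preconditioner costs O(N n^3) instead of the O(N^3) needed to invert A itself.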

Remark: This basic strategy is improved by adding so-called special points that are distributed (very sparsely) throughout the domain (for example near corners of the domain, or at other significant locations).

Table: $\ell_2$-condition numbers without and with preconditioning for TPS and MQ at randomly distributed points in $[0, 1]^2$, from [Beatson et al. (1999)].
local precond.: uses the n = 50 nearest neighbors to determine the approximate cardinal functions.
local precond. w/special: uses the 41 nearest neighbors plus nine special points placed uniformly in the unit square.

The effect of the preconditioning on the performance of the GMRES algorithm is, e.g.,
a reduction from 103 to 8 iterations for the 289-point data set for thin plate splines,
a reduction from 145 iterations to 11 for multiquadrics.

Remark: An extension of the ideas of [Beatson et al. (1999)] to linear systems arising in the collocation solution of partial differential equations (see Chapter 38) was explored in Mouat's Ph.D. thesis [Mouat (2001)] and also in the recent paper [Ling and Kansa (2005)].

Change of Basis

Another approach to obtaining a better conditioned interpolation system is to work with a different basis for the approximation space. While this idea is implicitly addressed in the preconditioning strategies discussed above, we will now make it our primary goal to find a better conditioned basis for the RBF approximation space.

Example: Univariate piecewise linear splines and natural cubic splines can be interpreted as radial basis functions, and we know that B-splines form stable bases for those spaces. Therefore, it should be possible to generalize this idea to other RBFs.

The process of finding a better basis for conditionally positive definite RBFs is closely connected to finding the reproducing kernel of the associated native space. Since we did not elaborate on the construction of native spaces for conditionally positive definite functions earlier, we will now present the relevant formulas without going into any further details.

In particular, for polyharmonic splines we will be able to find a basis that is in a certain sense homogeneous. Therefore the condition number of the related interpolation matrix will depend only on the number N of data points, but not on their separation distance (cf. the discussion in Chapter 16).

This approach was suggested by Beatson, Light and Billings [Beatson et al. (2000)], and has its roots in [Sibson and Stone (1991)].

Let $\Phi$ be a strictly conditionally positive definite kernel of order $m$, and let $X = \{x_1, \ldots, x_N\} \subseteq \Omega \subseteq \mathbb{R}^s$ be an $(m-1)$-unisolvent set of centers. Then the reproducing kernel for the native space $\mathcal{N}_\Phi(\Omega)$ is given by
$$K(x, y) = \Phi(x, y) - \sum_{k=1}^{M} p_k(x)\,\Phi(x_k, y) - \sum_{l=1}^{M} p_l(y)\,\Phi(x, x_l) + \sum_{k=1}^{M}\sum_{l=1}^{M} p_k(x)\,p_l(y)\,\Phi(x_k, x_l) + \sum_{l=1}^{M} p_l(x)\,p_l(y), \qquad (6)$$
where the points $\{x_1, \ldots, x_M\}$ comprise an $(m-1)$-unisolvent subset of $X$ and the polynomials $p_k$, $k = 1, \ldots, M$, form a cardinal basis for $\Pi^s_{m-1}$ on this subset, whose dimension is $M = \binom{s+m-1}{m-1}$, i.e.,
$$p_l(x_k) = \delta_{k,l}, \qquad k, l = 1, \ldots, M.$$
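To make (6) concrete, here is a small Python sketch (an illustration, not code from the references) for the thin plate spline $\Phi(x,y) = \|x-y\|^2 \log\|x-y\|$, which is strictly conditionally positive definite of order m = 2 in $\mathbb{R}^2$, so M = 3 and the cardinal basis of $\Pi^2_1$ on three non-collinear points is given by the barycentric coordinates of those points. The unisolvent subset and the random centers are arbitrary choices for the example.

```python
import numpy as np

def tps(r):
    # thin plate spline, with the removable singularity at r = 0 set to 0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r**2 * np.log(r), 0.0)

def Phi(x, y):
    return tps(np.linalg.norm(x[:, None] - y[None, :], axis=-1))

# (m-1)-unisolvent subset {x_1, x_2, x_3}: three non-collinear points
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
T = np.vstack([np.ones(3), V.T])          # barycentric coordinate system

def p(x):
    """p(x)[i, k] = p_k(x_i): cardinal basis of Pi^2_1, i.e. p_k(x_l) = delta_kl."""
    return np.linalg.solve(T, np.vstack([np.ones(len(x)), x.T])).T

def K(x, y):
    # reproducing kernel (6) of the native space
    px, py = p(x), p(y)
    return (Phi(x, y) - px @ Phi(V, y) - Phi(x, V) @ py.T
            + px @ Phi(V, V) @ py.T + px @ py.T)

rng = np.random.default_rng(1)
X = rng.random((50, 2))                   # centers
A = K(X, X)                               # interpolation matrix in the new basis
assert np.allclose(A, A.T) and np.linalg.eigvalsh(A).min() > 0
```

The final assertion checks numerically that the matrix built from K is symmetric positive definite, so no polynomial block needs to be appended to the interpolation system.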

Remark: This formulation of the reproducing kernel for the conditionally positive definite case also appears in the statistics literature in the context of kriging (see, e.g., [Berlinet and Thomas-Agnan (2004)]). In that context the kernel K is a covariance kernel associated with the generalized covariance $\Phi$. These two kernels give rise to the kriging equations and dual kriging equations, respectively.

An immediate consequence of having found the reproducing kernel $K$ is that we can express the RBF interpolant to values of some function $f$ given on $X$ in the form
$$\mathcal{P}_f(x) = \sum_{j=1}^{N} c_j\, K(x, x_j), \qquad x \in \mathbb{R}^s.$$
Note that the kernel $K$ used here is a strictly positive definite kernel (since it is a reproducing kernel) with built-in polynomial precision.
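As a sanity check of this statement, consider the simplest conditionally positive definite setting m = 1, for instance $\Phi(x,y) = -\|x-y\|$, which is strictly CPD of order 1 in every dimension. Then M = 1, $p_1 \equiv 1$, and (6) collapses to $K(x,y) = \Phi(x,y) - \Phi(x_1,y) - \Phi(x,x_1) + \Phi(x_1,x_1) + 1$. A short sketch (the point set and test data are illustrative choices):

```python
import numpy as np

Phi = lambda x, y: -np.linalg.norm(x[:, None] - y[None, :], axis=-1)

rng = np.random.default_rng(2)
X = rng.random((40, 2))                      # centers x_1, ..., x_N
x1 = X[:1]                                   # a 0-unisolvent subset is any single point

# K(x, y) = Phi(x, y) - Phi(x_1, y) - Phi(x, x_1) + Phi(x_1, x_1) + 1
K = Phi(X, X) - Phi(x1, X) - Phi(X, x1) + Phi(x1, x1) + 1.0
assert np.linalg.eigvalsh(K).min() > 0       # K is strictly positive definite

f_vals = np.cos(X[:, 0]) + X[:, 1] ** 2      # data to interpolate
c = np.linalg.solve(K, f_vals)               # coefficients of P_f = sum_j c_j K(., x_j)
assert np.allclose(K @ c, f_vals)            # P_f interpolates the data
```

No side conditions on the coefficients are needed here, in contrast to interpolation with the conditionally positive definite kernel $\Phi$ itself.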


More information

AMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1.

AMS Mathematics Subject Classification : 65F10,65F50. Key words and phrases: ILUS factorization, preconditioning, Schur complement, 1. J. Appl. Math. & Computing Vol. 15(2004), No. 1, pp. 299-312 BILUS: A BLOCK VERSION OF ILUS FACTORIZATION DAVOD KHOJASTEH SALKUYEH AND FAEZEH TOUTOUNIAN Abstract. ILUS factorization has many desirable

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 24: Preconditioning and Multigrid Solver Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 5 Preconditioning Motivation:

More information

Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White

Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White Introduction to Simulation - Lecture 2 Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White Thanks to Deepak Ramaswamy, Michal Rewienski, and Karen Veroy Outline Reminder about

More information

Lecture 18 Classical Iterative Methods

Lecture 18 Classical Iterative Methods Lecture 18 Classical Iterative Methods MIT 18.335J / 6.337J Introduction to Numerical Methods Per-Olof Persson November 14, 2006 1 Iterative Methods for Linear Systems Direct methods for solving Ax = b,

More information

Numerical Methods I Non-Square and Sparse Linear Systems

Numerical Methods I Non-Square and Sparse Linear Systems Numerical Methods I Non-Square and Sparse Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 25th, 2014 A. Donev (Courant

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

Preface to the Second Edition. Preface to the First Edition

Preface to the Second Edition. Preface to the First Edition n page v Preface to the Second Edition Preface to the First Edition xiii xvii 1 Background in Linear Algebra 1 1.1 Matrices................................. 1 1.2 Square Matrices and Eigenvalues....................

More information

Numerical analysis of heat conduction problems on irregular domains by means of a collocation meshless method

Numerical analysis of heat conduction problems on irregular domains by means of a collocation meshless method Journal of Physics: Conference Series PAPER OPEN ACCESS Numerical analysis of heat conduction problems on irregular domains by means of a collocation meshless method To cite this article: R Zamolo and

More information

Lectures 9-10: Polynomial and piecewise polynomial interpolation

Lectures 9-10: Polynomial and piecewise polynomial interpolation Lectures 9-1: Polynomial and piecewise polynomial interpolation Let f be a function, which is only known at the nodes x 1, x,, x n, ie, all we know about the function f are its values y j = f(x j ), j

More information

RKHS, Mercer s theorem, Unbounded domains, Frames and Wavelets Class 22, 2004 Tomaso Poggio and Sayan Mukherjee

RKHS, Mercer s theorem, Unbounded domains, Frames and Wavelets Class 22, 2004 Tomaso Poggio and Sayan Mukherjee RKHS, Mercer s theorem, Unbounded domains, Frames and Wavelets 9.520 Class 22, 2004 Tomaso Poggio and Sayan Mukherjee About this class Goal To introduce an alternate perspective of RKHS via integral operators

More information

Chapter 7 Iterative Techniques in Matrix Algebra

Chapter 7 Iterative Techniques in Matrix Algebra Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition

More information

Numerical analysis of heat conduction problems on 3D general-shaped domains by means of a RBF Collocation Meshless Method

Numerical analysis of heat conduction problems on 3D general-shaped domains by means of a RBF Collocation Meshless Method Journal of Physics: Conference Series PAPER OPEN ACCESS Numerical analysis of heat conduction problems on 3D general-shaped domains by means of a RBF Collocation Meshless Method To cite this article: R

More information

We wish to solve a system of N simultaneous linear algebraic equations for the N unknowns x 1, x 2,...,x N, that are expressed in the general form

We wish to solve a system of N simultaneous linear algebraic equations for the N unknowns x 1, x 2,...,x N, that are expressed in the general form Linear algebra This chapter discusses the solution of sets of linear algebraic equations and defines basic vector/matrix operations The focus is upon elimination methods such as Gaussian elimination, and

More information

MATH 350: Introduction to Computational Mathematics

MATH 350: Introduction to Computational Mathematics MATH 350: Introduction to Computational Mathematics Chapter V: Least Squares Problems Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2011 fasshauer@iit.edu MATH

More information

Linear Systems and Matrices

Linear Systems and Matrices Department of Mathematics The Chinese University of Hong Kong 1 System of m linear equations in n unknowns (linear system) a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.......

More information

Solving Boundary Value Problems (with Gaussians)

Solving Boundary Value Problems (with Gaussians) What is a boundary value problem? Solving Boundary Value Problems (with Gaussians) Definition A differential equation with constraints on the boundary Michael McCourt Division Argonne National Laboratory

More information

Green s Functions: Taking Another Look at Kernel Approximation, Radial Basis Functions and Splines

Green s Functions: Taking Another Look at Kernel Approximation, Radial Basis Functions and Splines Green s Functions: Taking Another Look at Kernel Approximation, Radial Basis Functions and Splines Gregory E. Fasshauer Abstract The theories for radial basis functions (RBFs) as well as piecewise polynomial

More information

Classical iterative methods for linear systems

Classical iterative methods for linear systems Classical iterative methods for linear systems Ed Bueler MATH 615 Numerical Analysis of Differential Equations 27 February 1 March, 2017 Ed Bueler (MATH 615 NADEs) Classical iterative methods for linear

More information

On the Preconditioning of the Block Tridiagonal Linear System of Equations

On the Preconditioning of the Block Tridiagonal Linear System of Equations On the Preconditioning of the Block Tridiagonal Linear System of Equations Davod Khojasteh Salkuyeh Department of Mathematics, University of Mohaghegh Ardabili, PO Box 179, Ardabil, Iran E-mail: khojaste@umaacir

More information

Fast Structured Spectral Methods

Fast Structured Spectral Methods Spectral methods HSS structures Fast algorithms Conclusion Fast Structured Spectral Methods Yingwei Wang Department of Mathematics, Purdue University Joint work with Prof Jie Shen and Prof Jianlin Xia

More information

Scattered Data Interpolation with Wavelet Trees

Scattered Data Interpolation with Wavelet Trees Scattered Data Interpolation with Wavelet Trees Christophe P. Bernard, Stéphane G. Mallat and Jean-Jacques Slotine Abstract. This paper describes a new result on a wavelet scheme for scattered data interpolation

More information

Preconditioning Techniques Analysis for CG Method

Preconditioning Techniques Analysis for CG Method Preconditioning Techniques Analysis for CG Method Huaguang Song Department of Computer Science University of California, Davis hso@ucdavis.edu Abstract Matrix computation issue for solve linear system

More information

Key words. Radial basis function, scattered data interpolation, hierarchical matrices, datasparse approximation, adaptive cross approximation

Key words. Radial basis function, scattered data interpolation, hierarchical matrices, datasparse approximation, adaptive cross approximation HIERARCHICAL MATRIX APPROXIMATION FOR KERNEL-BASED SCATTERED DATA INTERPOLATION ARMIN ISKE, SABINE LE BORNE, AND MICHAEL WENDE Abstract. Scattered data interpolation by radial kernel functions leads to

More information

Numerical solution of nonlinear sine-gordon equation with local RBF-based finite difference collocation method

Numerical solution of nonlinear sine-gordon equation with local RBF-based finite difference collocation method Numerical solution of nonlinear sine-gordon equation with local RBF-based finite difference collocation method Y. Azari Keywords: Local RBF-based finite difference (LRBF-FD), Global RBF collocation, sine-gordon

More information

Numerische Mathematik

Numerische Mathematik Numer. Math. (997) 76: 479 488 Numerische Mathematik c Springer-Verlag 997 Electronic Edition Exponential decay of C cubic splines vanishing at two symmetric points in each knot interval Sang Dong Kim,,

More information

7.4 The Saddle Point Stokes Problem

7.4 The Saddle Point Stokes Problem 346 CHAPTER 7. APPLIED FOURIER ANALYSIS 7.4 The Saddle Point Stokes Problem So far the matrix C has been diagonal no trouble to invert. This section jumps to a fluid flow problem that is still linear (simpler

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Overlapping Schwarz preconditioners for Fekete spectral elements

Overlapping Schwarz preconditioners for Fekete spectral elements Overlapping Schwarz preconditioners for Fekete spectral elements R. Pasquetti 1, L. F. Pavarino 2, F. Rapetti 1, and E. Zampieri 2 1 Laboratoire J.-A. Dieudonné, CNRS & Université de Nice et Sophia-Antipolis,

More information

Linear Algebra Practice Problems

Linear Algebra Practice Problems Math 7, Professor Ramras Linear Algebra Practice Problems () Consider the following system of linear equations in the variables x, y, and z, in which the constants a and b are real numbers. x y + z = a

More information

Fast Iterative Solution of Saddle Point Problems

Fast Iterative Solution of Saddle Point Problems Michele Benzi Department of Mathematics and Computer Science Emory University Atlanta, GA Acknowledgments NSF (Computational Mathematics) Maxim Olshanskii (Mech-Math, Moscow State U.) Zhen Wang (PhD student,

More information

MA2501 Numerical Methods Spring 2015

MA2501 Numerical Methods Spring 2015 Norwegian University of Science and Technology Department of Mathematics MA5 Numerical Methods Spring 5 Solutions to exercise set 9 Find approximate values of the following integrals using the adaptive

More information

Stability constants for kernel-based interpolation processes

Stability constants for kernel-based interpolation processes Dipartimento di Informatica Università degli Studi di Verona Rapporto di ricerca Research report 59 Stability constants for kernel-based interpolation processes Stefano De Marchi Robert Schaback Dipartimento

More information

1. Introduction Let f(x), x 2 R d, be a real function of d variables, and let the values f(x i ), i = 1; 2; : : : ; n, be given, where the points x i,

1. Introduction Let f(x), x 2 R d, be a real function of d variables, and let the values f(x i ), i = 1; 2; : : : ; n, be given, where the points x i, DAMTP 2001/NA11 Radial basis function methods for interpolation to functions of many variables 1 M.J.D. Powell Abstract: A review of interpolation to values of a function f(x), x 2 R d, by radial basis

More information

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES 48 Arnoldi Iteration, Krylov Subspaces and GMRES We start with the problem of using a similarity transformation to convert an n n matrix A to upper Hessenberg form H, ie, A = QHQ, (30) with an appropriate

More information

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc.

Lecture 11: CMSC 878R/AMSC698R. Iterative Methods An introduction. Outline. Inverse, LU decomposition, Cholesky, SVD, etc. Lecture 11: CMSC 878R/AMSC698R Iterative Methods An introduction Outline Direct Solution of Linear Systems Inverse, LU decomposition, Cholesky, SVD, etc. Iterative methods for linear systems Why? Matrix

More information

1. Introduction. A radial basis function (RBF) interpolant of multivariate data (x k, y k ), k = 1, 2,..., n takes the form

1. Introduction. A radial basis function (RBF) interpolant of multivariate data (x k, y k ), k = 1, 2,..., n takes the form A NEW CLASS OF OSCILLATORY RADIAL BASIS FUNCTIONS BENGT FORNBERG, ELISABETH LARSSON, AND GRADY WRIGHT Abstract Radial basis functions RBFs form a primary tool for multivariate interpolation, and they are

More information

DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION

DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION Meshless Methods in Science and Engineering - An International Conference Porto, 22 DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION Robert Schaback Institut für Numerische und Angewandte Mathematik (NAM)

More information

Numerical Analysis Comprehensive Exam Questions

Numerical Analysis Comprehensive Exam Questions Numerical Analysis Comprehensive Exam Questions 1. Let f(x) = (x α) m g(x) where m is an integer and g(x) C (R), g(α). Write down the Newton s method for finding the root α of f(x), and study the order

More information

9.1 Preconditioned Krylov Subspace Methods

9.1 Preconditioned Krylov Subspace Methods Chapter 9 PRECONDITIONING 9.1 Preconditioned Krylov Subspace Methods 9.2 Preconditioned Conjugate Gradient 9.3 Preconditioned Generalized Minimal Residual 9.4 Relaxation Method Preconditioners 9.5 Incomplete

More information

Iterative methods for Linear System

Iterative methods for Linear System Iterative methods for Linear System JASS 2009 Student: Rishi Patil Advisor: Prof. Thomas Huckle Outline Basics: Matrices and their properties Eigenvalues, Condition Number Iterative Methods Direct and

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84

An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84 An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84 Introduction Almost all numerical methods for solving PDEs will at some point be reduced to solving A

More information

Computational Aspects of Radial Basis Function Approximation

Computational Aspects of Radial Basis Function Approximation Working title: Topics in Multivariate Approximation and Interpolation 1 K. Jetter et al., Editors c 2005 Elsevier B.V. All rights reserved Computational Aspects of Radial Basis Function Approximation Holger

More information

Iterative schemes for the solution of systems of equations arising from the DRM in multidomains

Iterative schemes for the solution of systems of equations arising from the DRM in multidomains CHAPTER 7 Iterative schemes for the solution of systems of equations arising from the DRM in multidomains M.I. Portapila 1 &H.Power 2 1 Wessex Institute of Technology, UK. 2 Department of Mechanical Engineering,

More information

D. Shepard, Shepard functions, late 1960s (application, surface modelling)

D. Shepard, Shepard functions, late 1960s (application, surface modelling) Chapter 1 Introduction 1.1 History and Outline Originally, the motivation for the basic meshfree approximation methods (radial basis functions and moving least squares methods) came from applications in

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche

Lecture Notes for Inf-Mat 3350/4350, Tom Lyche Lecture Notes for Inf-Mat 3350/4350, 2007 Tom Lyche August 5, 2007 2 Contents Preface vii I A Review of Linear Algebra 1 1 Introduction 3 1.1 Notation............................... 3 2 Vectors 5 2.1 Vector

More information

A. Iske RADIAL BASIS FUNCTIONS: BASICS, ADVANCED TOPICS AND MESHFREE METHODS FOR TRANSPORT PROBLEMS

A. Iske RADIAL BASIS FUNCTIONS: BASICS, ADVANCED TOPICS AND MESHFREE METHODS FOR TRANSPORT PROBLEMS Rend. Sem. Mat. Univ. Pol. Torino Vol. 61, 3 (23) Splines and Radial Functions A. Iske RADIAL BASIS FUNCTIONS: BASICS, ADVANCED TOPICS AND MESHFREE METHODS FOR TRANSPORT PROBLEMS Abstract. This invited

More information