Compact Matérn Kernels and Piecewise Polynomial Splines Viewed From a Hilbert-Schmidt Perspective


Noname manuscript No. (will be inserted by the editor)

Compact Matérn Kernels and Piecewise Polynomial Splines Viewed From a Hilbert-Schmidt Perspective

Roberto Cavoretto · Gregory E. Fasshauer · Michael McCourt

Received: date / Accepted: date

Abstract Differential operators and associated boundary conditions have been used to define interpolating splines, such as L-splines, since the mid 1960s. This work adapts that approach to define kernels which can be used to perform equivalent interpolation. While a closed form of these kernels is generally unavailable, their eigenexpansions can be determined by solving the Sturm-Liouville eigenvalue problem arising from the given differential operator and boundary conditions, allowing for computation with the kernels using the RBF-QR technique. Numerical results involving these methods show convergence orders commensurate with L-spline theory.

Keywords Positive definite kernel · Stable interpolation · Spline

Mathematics Subject Classification (2000) 65D05 · 65D07 · 41A15

1 Introduction

Kernel-based methods are popular tools for problems of interpolation and differential equations [3], statistics [36], machine learning [30, 37], and other fields. Their popularity stems from their inherently meshfree nature, providing an avenue to potentially avoid the curse of dimensionality [4, 28]. Many different kernels exist, and several factors may be considered when choosing the appropriate kernel for a given application, including the numerical stability of computations involving that kernel.

R. Cavoretto
Department of Mathematics G. Peano, University of Turin, Turin, Italy
roberto.cavoretto@unito.it

G. E. Fasshauer
Department of Applied Mathematics, Illinois Institute of Technology, Chicago, USA
fasshauer@iit.edu

M. McCourt
Department of Mathematical and Statistical Sciences, University of Colorado, Denver, USA
mccomic@mcs.anl.gov

The presence of free parameters influencing the shape and smoothness of kernels allows for high levels of accuracy, but may also introduce ill-conditioning into the problem. For many very smooth kernels, or shape parameter values of increasingly flat kernels, the computational error may prevent realization of the theoretically optimal accuracy. This problem was analyzed in, e.g., [6,, 24, 3, 35], and techniques to circumvent it in certain circumstances were discussed in, e.g., [8, 20]. In [5], a technique was developed using Hilbert-Schmidt theory [32] to create an eigenfunction expansion of the Gaussian in R^d which is devoid of the standard ill-conditioning. This change of basis approach was first used in the pioneering RBF-QR work [9]. Once increasingly flat Gaussian interpolants could be stably computed, the polynomial limit, as predicted in [, 23, 24, 3], could be numerically confirmed in arbitrary dimensions. Therefore, Gaussians can always produce at least the same accuracy as polynomials, because the polynomial result can be obtained in the limit.

This research studies the use of Sturm-Liouville eigenvalue theory to define kernels that can achieve piecewise polynomial accuracy as the free parameter ε tends to zero. This parameter plays a similar locality role as the shape parameter in the Gaussian setting. In addition to ε, there is also a free parameter β which determines the smoothness of the kernels and the resulting interpolant. We have chosen the name compact Matérn for these kernels, because the differential operator used to generate them can also be used to generate the full-space Matérn kernels, albeit with different boundary conditions. We show that the choice of boundary conditions, and their validity for the functions we want to interpolate, plays a significant role in the convergence order of these kernels. The Sturm-Liouville eigenvalue problem uniquely specifies the eigenvalues and eigenfunctions of these compact Matérn
kernels, and by applying Hilbert-Schmidt theory we can define each kernel through its Mercer series. Unfortunately, their closed form is unavailable for most values of ε and β, which renders traditional interpolation through collocation techniques impossible, because the kernel cannot be directly evaluated. Although a truncated Mercer series can be used to approximate kernel values, recent research developing RBF-QR methods presents a preferable alternative. We show how these compact Matérn interpolants can be computed and evaluated without the closed form of the kernel.

The rest of this section recalls some of the relevant spline literature, and reviews the connection between Hilbert-Schmidt and Sturm-Liouville eigenvalue theory. In Section 2 we derive the compact Matérn kernel eigenexpansion and also find its closed form under special circumstances. In Section 3 we derive the RBF-QR technique and consider appropriate series lengths for different parameter values of the compact Matérn kernel. In Section 4 we perform numerical experiments to study compact Matérn interpolation on functions with various levels of smoothness, as well as for the solution of a simple two-point boundary value problem. We conclude the paper in Section 5 and suggest future research.

1.1 Review of Generalized Splines

The theory of generalized splines (see, e.g., [3]) and L-splines (see, e.g., [33, 38]) states that the real-valued function s is an L-spline of order m > 0 on the domain [0, 1] with a partition π : 0 = x_1 < x_2 < ... < x_{N−1} < x_N = 1, if

1. s ∈ C^{2m}([0, 1] \ π) and D^{2m} s ∈ L_2([0, 1] \ π),
2. there exists a linear differential operator P such that (P*Ps)(x) = 0 for all x ∈ [0, 1] \ π, and
3. s satisfies the smoothness conditions

lim_{x→x_i^−} D^k s(x) = lim_{x→x_i^+} D^k s(x),  k = 0, 1, ..., 2m − 2,  i = 2, ..., N − 1.

Above, D denotes the usual univariate derivative d/dx and P* is the formal adjoint such that if

P = a_0 + a_1 D + ... + a_m D^m = Σ_{k=0}^{m} a_k D^k,

then

P* = a_0 − a_1 D + ... + (−1)^m a_m D^m = Σ_{k=0}^{m} (−1)^k a_k D^k.

The coefficients a_k are, in general, smooth functions. However, we will limit the discussion to constant coefficients, since that will suffice for our compact Matérn functions. As stated before, we are concerned only with interpolatory splines, i.e., we assume that interpolation conditions are specified at the data sites x_1, ..., x_N. More precisely, given a sufficiently smooth function f to be interpolated, an interpolatory L-spline must satisfy these interpolation conditions, i.e., s(x_i) = f(x_i), i = 1, ..., N.

In addition to the factors above, L-splines are also defined by their boundary conditions. The paper [33] gives four possible types of interpolatory L-splines (we are simplifying notation here, as [33] also covers interpolation conditions with varying multiplicities via a so-called incidence vector):

1. Matching derivative values: D^k s = D^k f at {0, 1} for 1 ≤ k ≤ m − 1,
2. Fixed spline: Σ_{k=0}^{j} (−1)^k a_{m−j+k} D^k (Ps) = 0 at {0, 1} for 0 ≤ j ≤ m − 2,
3. Flexible spline: Σ_{k=0}^{j} (−1)^k a_{m−j+k} D^k (Pf − Ps) = 0 at {0, 1} for 0 ≤ j ≤ m − 2,
4. Periodic: if (D^k f)(0) = (D^k f)(1) then (D^k s)(0) = (D^k s)(1) for 0 ≤ k ≤ 2m − 1.

This fourth condition requires f ∈ C^{2m}[0, 1], while the other conditions apply as long as f ∈ C^m[0, 1]. Demanding this higher level of smoothness from f in general allows for the interpolating spline as well as its derivatives to converge at the rate

‖D^j (f − s)‖_2 ≤ ζ |π|^{2m−j} ‖P*Pf‖_2,  j = 0, 1, ..., 2m,  (1.1)

for a sufficiently fine partition π and a constant ζ which may depend on m and j but not on π. The fill distance |π| is the size of the largest component in the partition π; equivalently, this is the largest gap between interpolation points. This convergence

rate was proved in [33, Theorem 9], although it is only accurate for L-splines of type 1, 3 or 4. Moreover, in [38] the operator P = D^m is discussed in more detail, and with several additional types of boundary conditions. We recall only the result for even-order derivatives. Simplified to this case, [38, Lemma 8.3] states that if f ∈ H^{2m}[0, 1] and the boundary conditions are such that all even-order derivatives of s up to order 2m − 2 match those of f, then the bound (1.1) also holds. This L-spline theory is mentioned here because the interpolants produced by compact Matérn kernels share many properties. We discuss this further in Section 2.3.

1.2 Review of Hilbert-Schmidt and Sturm-Liouville Eigenvalue Theory

Given a domain Ω, a weight function ρ : Ω → R^+ and a continuous, square integrable, positive definite kernel K : Ω × Ω → R, a Hilbert-Schmidt integral operator T_K is defined as

T_K f = ∫_Ω K(·, z) f(z) ρ(z) dz.

This operator maps functions in L_2(Ω, ρ) to H_K, the reproducing kernel Hilbert space induced by K. From Hilbert-Schmidt theory [32] (or, equivalently, Mercer's theorem [27]) we know that K has a representation of the form

K(x, z) = Σ_{n=1}^∞ λ_n ϕ_n(x) ϕ_n(z),  x, z ∈ Ω.  (1.2)

The eigenvalues λ_n and eigenfunctions ϕ_n are defined by T_K ϕ = λϕ. We know λ_n > 0 because K is positive definite, and the eigenfunctions are orthogonal with respect to the L_2(Ω, ρ) inner product. In [9, Chap. V] the argument is made that the Hilbert-Schmidt integral eigenvalue problem is inverse to the self-adjoint regular Sturm-Liouville eigenvalue problem

Lϕ = (1/λ) ϕρ,  (1.3)

where the differential operator L is defined so that K is its Green's kernel, i.e., LK(x, z) = δ(x − z). Suitable boundary conditions (in the integral formulation reflected in the definition of H_K, the range of T_K) must also be imposed to uniquely define K through L, but in doing so, the eigenvalues and eigenfunctions of (1.3) can be used in (1.2) to define K. This approach is used in Section 2 to find the eigenexpansion of the compact Matérn kernels.
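As a small numerical sketch of this inverse relationship (the evaluation point, eigenfunction index, and quadrature resolution below are arbitrary illustrative choices): the classical Brownian bridge kernel K(x, z) = min(x, z) − xz, which reappears in Section 2 as the β = 1, ε = 0 compact Matérn kernel, is the Green's kernel of L = −D² on [0, 1] with zero boundary values, and applying T_K to the sine eigenfunctions should simply scale them by (nπ)^{−2}:

```python
import math

def K(x, z):
    # Green's kernel of L = -D^2 on [0,1] with zero boundary conditions
    return min(x, z) - x * z

def phi(n, x):
    # L2-orthonormal eigenfunctions on [0,1] (rho = 1)
    return math.sqrt(2.0) * math.sin(n * math.pi * x)

def T_K(f, x, m=4000):
    # midpoint-rule approximation of (T_K f)(x) = int_0^1 K(x, z) f(z) dz
    h = 1.0 / m
    return h * sum(K(x, (j + 0.5) * h) * f((j + 0.5) * h) for j in range(m))

n, x = 3, 0.37
lam = (n * math.pi) ** (-2)   # predicted eigenvalue of the integral operator
err = abs(T_K(lambda z: phi(n, z), x) - lam * phi(n, x))
print(err)                    # only quadrature error remains
```

The residual shrinks as the quadrature is refined, illustrating that the differential eigenvalue (nπ)² of L is the reciprocal of the integral-operator eigenvalue.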

2 A Kernel Representation for Splines, and its Associated Series Expansion

The connection between the Hilbert-Schmidt and Sturm-Liouville eigenvalue problems reviewed in the previous section allows us to view the Mercer series as a generalized Fourier series. We now consider the following iterated modified Helmholtz differential operator of order 2β,

L_{β,ε} = (−D^2 + ε^2 I)^β,  β ∈ N \ {0},  ε ≥ 0,  (2.1)

where D is the first order derivative operator as above, I is the identity operator (i.e., Iϕ = ϕ), and ε acts as a shape parameter (or tension parameter). In this paper we will focus on the domain Ω = [0, 1], although more general domains can be addressed with an appropriate transformation. We define the Sturm-Liouville eigenvalue problem

L_{β,ε} ϕ(x) = (1/λ) ϕ(x),  x ∈ [0, 1],

with boundary conditions conveniently chosen to be of the form

ϕ^{(2j)}(0) = ϕ^{(2j)}(1) = 0,  j = 0, ..., β − 1.  (2.2)

Note that the weight function chosen for this problem is ρ ≡ 1. For this ODE boundary value problem one can easily check that the corresponding eigenvalues and (normalized) eigenfunctions are given by

λ_n = (n^2 π^2 + ε^2)^{−β},  ϕ_n(x) = √2 sin(nπx),  n = 1, 2, ...,  (2.3)

so that the generalized Fourier series of the compact Matérn kernels K_{β,ε} is

K_{β,ε}(x, z) = Σ_{n=1}^∞ λ_n ϕ_n(x) ϕ_n(z) = Σ_{n=1}^∞ (n^2 π^2 + ε^2)^{−β} 2 sin(nπx) sin(nπz).  (2.4)

As expected, the eigenfunctions are L_2(Ω) orthonormal, and because all the eigenvalues are positive, we know that the compact Matérn kernels are positive definite. The name compact Matérn was chosen because L_{β,ε} can be used to generate the traditional full-space Matérn kernels, but our kernels exist only on the compact domain [0, 1]. While (2.4) accurately defines the kernel for any fixed ε and β, this infinite series representation may not seem as satisfying in comparison to the clean (closed) form representation available for the full-space Matérn kernels. Among other things, we will show below that a compact Matérn kernel, given only by its series expansion, can be used just as effectively as a closed form kernel, provided we combine it with the RBF-QR method. Having said this, the rest of this section considers some special cases for which closed forms of the kernel are known.
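The series itself is directly computable. The sketch below (truncation length and test points are arbitrary choices) evaluates a truncated version of the series above and compares it, for ε = 0 with β = 1 and β = 2, against the piecewise linear and piecewise cubic closed forms derived in the next subsection:

```python
import math

def compact_matern(x, z, beta, eps, M=3000):
    # truncated Mercer/Fourier sine series for K_{beta,eps}
    return sum((n * n * math.pi**2 + eps**2) ** (-beta)
               * 2.0 * math.sin(n * math.pi * x) * math.sin(n * math.pi * z)
               for n in range(1, M + 1))

x, z = 0.3, 0.7
# beta = 1, eps = 0: series should reproduce min(x,z) - xz
err1 = abs(compact_matern(x, z, 1, 0.0) - (min(x, z) - x * z))
# beta = 2, eps = 0: series should reproduce the piecewise cubic (branch for x <= z)
cubic = -x * (1.0 - z) * (x**2 + z**2 - 2.0 * z) / 6.0
err2 = abs(compact_matern(x, z, 2, 0.0) - cubic)
print(err1, err2)   # truncation error only; smaller for larger beta
```

The faster eigenvalue decay for β = 2 makes the second error far smaller than the first at the same truncation length, foreshadowing the truncation discussion in Section 3.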

2.1 Compact Matérn kernels for ε = 0: the piecewise polynomial spline case

If we restrict the differential operator in (2.1) to the case ε = 0, the eigenvalues and eigenfunctions from (2.3) become

λ_n = (nπ)^{−2β},  ϕ_n(x) = √2 sin(nπx),  n = 1, 2, ....

Using these specific eigenvalues and eigenfunctions, a closed form for the Mercer series expansion of the kernel K_{β,0} can be obtained with the help of a computer algebra system such as Mathematica [40]:

K_{β,0}(x, z) = 1/(2π^{2β}) [ Li_{2β}(e^{iπ(x−z)}) + Li_{2β}(e^{−iπ(x−z)}) − ( Li_{2β}(e^{iπ(x+z)}) + Li_{2β}(e^{−iπ(x+z)}) ) ].  (2.5)

Here Li_n(z) = Σ_{k=1}^∞ z^k / k^n, |z| < 1, is known as the polylogarithm of order n. To see how this leads to piecewise polynomials, we use the identity

Li_n(z) + (−1)^n Li_n(1/z) = −(2πi)^n / n! B_n( log z / (2πi) ),  n = 1, 2, ...,  (2.6)

which holds provided 0 ≤ Im(log z) < 2π (see [25, Eqn 792], where one must note the missing exponent n which is correct in the more general formula [25, Eqn 790]). Here the B_n are the Bernoulli polynomials of degree n. They can be expressed as (see, e.g., [10, Eq 2467] and [29])

B_n(x) = Σ_{k=0}^{n} 1/(k + 1) Σ_{j=0}^{k} (−1)^j (k choose j) (x + j)^n,  (2.7)

so that the first few Bernoulli polynomials are B_0(x) = 1, B_1(x) = x − 1/2, B_2(x) = x^2 − x + 1/6.

Using (2.6) with the substitution z → e^{iπ(x−z)} we can combine the first two polylogarithms in (2.5) into a single Bernoulli polynomial of even degree 2β, i.e.,

Li_{2β}(e^{iπ(x−z)}) + Li_{2β}(e^{−iπ(x−z)}) = Li_{2β}(e^{iπ(x−z)}) + (−1)^{2β} Li_{2β}(e^{−iπ(x−z)}) = (−1)^{β+1} (2π)^{2β}/(2β)! B_{2β}( (x − z)/2 ).  (2.8)

Here we rely on the assumption that x ≥ z so that the hypothesis for (2.6) is satisfied. An analogous symmetric formula arises when z ≥ x. The second group of polylogarithms in (2.5) can be treated similarly, i.e.,

Li_{2β}(e^{iπ(x+z)}) + Li_{2β}(e^{−iπ(x+z)}) = (−1)^{β+1} (2π)^{2β}/(2β)! B_{2β}( (x + z)/2 ).  (2.9)

Inserting (2.8) and (2.9) into (2.5) then produces the desired closed form representation of the kernel K_{β,0} for the case 0 ≤ z ≤ x ≤ 1, i.e.,

K_{β,0}(x, z) = (−1)^{β+1}/(2π^{2β}) · (2π)^{2β}/(2β)! [ B_{2β}( (x − z)/2 ) − B_{2β}( (x + z)/2 ) ]
= (−1)^{β+1} 2^{2β−1}/(2β)! [ B_{2β}( (x − z)/2 ) − B_{2β}( (x + z)/2 ) ].

Using absolute values to include the other half, we obtain the symmetric formula

K_{β,0}(x, z) = (−1)^{β+1} 2^{2β−1}/(2β)! [ B_{2β}( |x − z|/2 ) − B_{2β}( (x + z)/2 ) ],  (2.10)

which is valid for any 0 ≤ x, z ≤ 1. Since it is apparent from (2.7) that the leading-order terms in x cancel, the kernel K_{β,0} is indeed a piecewise polynomial of odd degree 2β − 1. For β = 1, 2 this results in piecewise linear polynomials

K_{1,0}(x, z) = min(x, z) − xz = { x − xz, 0 ≤ x ≤ z;  z − xz, z ≤ x ≤ 1, }

and piecewise cubic polynomials

K_{2,0}(x, z) = { −(1/6) x (1 − z) (x^2 + z^2 − 2z), 0 ≤ x ≤ z;  −(1/6) (1 − x) z (x^2 + z^2 − 2x), z ≤ x ≤ 1. }

For a fixed z, K_{3,0}(x, z) gives piecewise quintic polynomials in x, and so on. Thus, (2.10) allows us to compute the piecewise polynomial kernel K_{β,0} satisfying the boundary conditions (2.2) in closed form for any β. The sub-family of compact Matérn kernels with ε = 0 coincides with the special class of interpolatory L-splines with even-order derivative boundary conditions mentioned in Section 1.1, provided those derivatives are zero also for the function f generating the data.

2.2 Compact Matérn kernels for ε > 0

By allowing ε to vary beyond 0, the resulting kernels K_{β,ε} are no longer piecewise polynomials. This can be useful when attempting to approximate certain functions (as we show in Section 4), but it also renders the closed form (2.10) useless, requiring the derivation of a new formula. Unfortunately, as of this writing, the authors are unaware of any such closed form appropriate for all β when ε > 0. Individual K_{β,ε} closed forms can be derived by solving the Green's function boundary value problem

L_{β,ε} K_{β,ε}(x, z) = δ(x − z),  x ∈ (0, 1),
K^{(2j)}_{β,ε}(x, z) = 0,  x ∈ {0, 1},  j ∈ {0, ..., β − 1},

where z ∈ (0, 1) is fixed, the operator L_{β,ε} is applied to the first argument of K, δ is the Dirac delta function, and K^{(j)} denotes the jth derivative with respect to the first argument of K. This is equivalent to the boundary value problem

L_{β,ε} K_{β,ε}(x, z) = 0,  x ∈ (0, 1) \ {z},
K^{(2j)}_{β,ε}(x, z) = 0,  x ∈ {0, 1},  j ∈ {0, 1, ..., β − 1},
lim_{x→z^−} K^{(j)}_{β,ε}(x, z) = lim_{x→z^+} K^{(j)}_{β,ε}(x, z),  j ∈ {0, 1, ..., 2β − 2},
lim_{x→z^−} K^{(2β−1)}_{β,ε}(x, z) = lim_{x→z^+} K^{(2β−1)}_{β,ε}(x, z) + (−1)^{β+1},

which can be solved for any fixed β using standard tools from classical differential equations. When β = 1, the closed form solution is succinct,

K_{1,ε}(x, z) = sinh(ε min(x, z)) sinh(ε (1 − max(x, z))) / (ε sinh(ε))
= { sinh(εx) sinh(ε(1 − z)) / (ε sinh(ε)), 0 ≤ x ≤ z;  sinh(εz) sinh(ε(1 − x)) / (ε sinh(ε)), z ≤ x ≤ 1, }

and this function can obviously be expressed in terms of a (Fourier) sine series as shown in (2.4). A closed form expression for K_{2,ε} was found recently by two summer REU students. It is quite a bit more complicated than β = 1,

K_{2,ε}(x, z) = e^{−ε(x+z)} / ( 4ε^3 (e^{2ε} − 1)^2 ) [ e^{2ε} (2ε − ε(x + z) − 1) + e^{4ε} (ε(x + z) + 1)
+ e^{2ε(1+x+z)} (ε(x + z) − 2ε − 1) + e^{2ε(x+z)} (ε(x + z) − 1)
+ e^{2ε(2+min(x,z))} (−ε|x − z| − 1) + e^{2ε max(x,z)} (1 − ε|x − z|)
+ e^{2ε(1+min(x,z))} (1 − 2ε + ε|x − z|) + e^{2ε(1+max(x,z))} (1 + 2ε − ε|x − z|) ],  (2.11)

but is found by similar means. For β > 2 and ε > 0, closed forms of K_{β,ε} should be derivable, but as of right now they are unknown. If the jump in complexity from K_{1,ε} to K_{2,ε} is indicative, writing the closed form for higher β values will quickly become impractical, even if they can be found using the same straightforward techniques. Fortunately, as we discuss in Section 3, it is actually preferable to work with the series form of K_{β,ε} from (2.4) rather than the closed form.

2.3 Compact Matérn kernels and L-splines

The compact Matérn kernels K_{β,ε} are obtained as the Green's kernels of the operator L_{β,ε} from (2.1) with the boundary conditions (2.2). This means that

L_{β,ε} K_{β,ε}(x, z) = 0,  z ∈ (0, 1) fixed,

for all x ∈ (0, 1) except x = z, and

lim_{x→z^−} K^{(j)}_{β,ε}(x, z) = lim_{x→z^+} K^{(j)}_{β,ε}(x, z),  j ∈ {0, 1, ..., 2β − 2},

as discussed in Section 2.2. To match the notation of the spline theory to the compact Matérn setting we identify m = β (and k = j). As a Green's kernel, K_{β,ε} ∈ C^{2β−2}[0, 1] ∩ H^{2β−1}[0, 1] (see [2, 6]). A linear combination of these kernels,

s(x) = Σ_{k=1}^{N} c_k K_{β,ε}(x, z_k),

is therefore also in C^{2β−2}[0, 1], and s satisfies the first condition of an L-spline. We know that L_{β,ε} s = 0 for all x ∈ [0, 1] except z_k, 1 ≤ k ≤ N. If L can be decomposed as P*P, then s is an L-spline. We can write

L_{β,ε} = (−D^2 + ε^2 I)^β = (−D + εI)^β (D + εI)^β,

thus if we allow P = (D + εI)^β, we see that L_{β,ε} = P*P. For this decomposition, the coefficients a_k in the discussion of L-splines are given by a_k = (β choose k) ε^{β−k}. Note that this decomposition is not unique. For example, in the case β = 3 we could decompose L_{3,ε} as

L_{3,ε} = (−D^2 + ε^2 I)^3 = (−D + εI)^3 (D + εI)^3
= (−D^3 + 3εD^2 − 3ε^2 D + ε^3 I)(D^3 + 3εD^2 + 3ε^2 D + ε^3 I)
= (−D^3 − εD^2 + ε^2 D + ε^3 I)(D^3 − εD^2 − ε^2 D + ε^3 I),

so that we could have P = D^3 + 3εD^2 + 3ε^2 D + ε^3 I or P = D^3 − εD^2 − ε^2 D + ε^3 I. In [6, 7] a decomposition of L_{β,ε} was introduced using vector differential operators P and an appropriate vector adjoint, i.e., L_{β,ε} = P*P. The vector formulation allows for potentially even more decompositions of L_{β,ε}.

Although s satisfies the basic L-spline criteria, it does not appear to satisfy any of the possible boundary conditions described in Section 1.1 that are needed for the general L-spline case. These specific forms of boundary conditions seem to be needed to satisfy the integral condition

∫_0^1 [P(f − s)(x)]^2 dx = ∫_0^1 (f − s)(x) (P*Pf)(x) dx.

This condition is met under L-spline types 1, 3 and 4, and it is needed to prove (1.1), as discussed in [38]. The integral condition is also satisfied for even-derivative boundary conditions provided the special differential operator P = D^m is used, i.e., for our piecewise polynomial K_{β,0} kernels. Therefore, the results of [38] apply to our family of compact Matérn kernels in the special case ε = 0. Although we

know that the piecewise polynomial compact Matérn kernels K_{β,0} are L-splines, we do not believe that compact Matérn kernels with positive ε satisfy any of the boundary conditions covered in [33, 38]. Despite our inability to demonstrate that compact Matérn interpolants are subject to the same properties as L-splines, we still conjecture that their convergence order follows (1.1), assuming the function of interest satisfies the necessary smoothness and homogeneity conditions. Numerical experiments are presented in Section 4 to support this conjecture.

3 Applying the RBF-QR Technique to Compact Matérn Series

The discussion in Section 2 focused on the derivation of eigenvalues and eigenfunctions of the compact Matérn kernel by solving a Sturm-Liouville eigenvalue problem with specific boundary conditions. These eigenvalues and eigenfunctions provide a representation of the kernel as a Hilbert-Schmidt series

K_{β,ε}(x, z) = Σ_{n=1}^∞ λ_n ϕ_n(x) ϕ_n(z).

In general, a free parameter ε ≥ 0 present in the iterated modified Helmholtz operator (2.1) generates a family of kernels which have different shapes. This is in addition to the free parameter β ∈ N \ {0} which determines the smoothness of the kernels; the order β kernel has 2β − 2 smooth derivatives. As noted in (2.10), there is a closed form expression available for all β when ε = 0, so for the piecewise polynomial spline case we can directly evaluate the kernel. Section 2.2 provides the closed form of the kernel for ε > 0 when β = 1 or 2, but presently those are the only closed form non-polynomial compact Matérn kernels available. This raises immediate doubts as to the viability of this method for interpolation, because without the kernel values we are unable to populate and solve the linear system arising from the interpolation conditions.

3.1 Truncating the Hilbert-Schmidt series

One option would be to approximate the value of the kernel by truncating the Hilbert-Schmidt series. Because the eigenvalues of the series decay monotonically, and the eigenfunctions are bounded above by √2, at some point all the remaining terms in the series contribute a negligible amount to the summation and can be ignored. We choose our truncation point M by first picking a tolerance σ_tol and requiring that the ratio of the smallest and largest eigenvalues be less than the tolerance. When written out explicitly, the ratio of two eigenvalues,

λ_M / λ_N < σ_tol,  M > N,

can be solved for the necessary M:

M_tol = ⌈ (1/π) √( σ_tol^{−1/β} (N^2 π^2 + ε^2) − ε^2 ) ⌉.  (3.1)
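A sketch of this truncation rule (the tolerance, parameters, and test point below are arbitrary illustrative choices): compute M from the rearranged eigenvalue ratio, then confirm both that the ratio meets the tolerance and that truncating the series there closely matches the β = 1 closed form from Section 2.2.

```python
import math

def lam(n, beta, eps):
    return (n * n * math.pi**2 + eps**2) ** (-beta)

def M_tol(N, beta, eps, sig_tol):
    # smallest M with lambda_M / lambda_N < sig_tol
    val = sig_tol ** (-1.0 / beta) * (N**2 * math.pi**2 + eps**2) - eps**2
    return math.ceil(math.sqrt(val) / math.pi)

beta, eps, N = 1, 2.0, 1
M = M_tol(N, beta, eps, 1e-9)
ratio = lam(M, beta, eps) / lam(N, beta, eps)

x, z = 0.25, 0.6
series = sum(lam(n, beta, eps) * 2.0 * math.sin(n * math.pi * x)
             * math.sin(n * math.pi * z) for n in range(1, M + 1))
closed = (math.sinh(eps * min(x, z)) * math.sinh(eps * (1.0 - max(x, z)))
          / (eps * math.sinh(eps)))
print(M, ratio, abs(series - closed))
```

The ceiling guarantees the ratio condition by construction; the remaining series error is controlled by the tail of the eigenvalue sequence.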

Table 1 shows truncation values for various parameterizations, when N is set to 1. For example, the kernel K_{3,0} can be represented with precision σ_tol = 10^{−15} if we use a truncation length M_tol = 1000. For any desired precision σ_tol, a smoother kernel (i.e., greater β) can be represented accurately with a shorter series. The same is true as we decrease ε for a kernel of fixed smoothness.

Table 1: Value of the truncation length M_tol required to reach certain precisions σ_tol for series generated by various combinations of ε and β. The precision column is the ratio of the last and first eigenvalues, λ_M/λ_1. [The tabulated values were lost in transcription.]

In some sense, computing the kernel values with a series rather than a closed form is entirely appropriate: even evaluating the Gaussian kernel could be done with a series expansion. Note that this remark follows naturally from research which has shown that B-splines tend to Gaussians as the smoothness index β tends to infinity (see, e.g., [5, 4, 7] and [8, Chapter 8]). Computing a value of the compact Matérn kernel with the series would be no different than evaluating Γ(x) or erf(x). However, in using this approach we ignore the special structure of this eigenfunction series, which is not present in a standard Taylor series. Exploiting this structure allows for interpolation without directly computing the kernel values. This is accomplished by using the RBF-QR technique [5, 9].

3.2 An infinite matrix decomposition of K based on the Hilbert-Schmidt series

RBF-QR can be applied to any positive definite kernel once it is written as a Hilbert-Schmidt series. In matrix terminology, a single kernel value can be written as a weighted inner product involving the diagonal matrix containing all the eigenvalues,

K(x, z) = Σ_{n=1}^∞ ϕ_n(x) λ_n ϕ_n(z) = ( ϕ_1(x) ⋯ ϕ_N(x) ⋯ ) diag( λ_1, ..., λ_N, ... ) ( ϕ_1(z), ..., ϕ_N(z), ... )^T.

This can be more compactly written if we introduce the notation

φ(x) = ( ϕ_1(x), ..., ϕ_N(x), ... )^T,  Λ = diag( λ_1, ..., λ_N, ... ),

so that a kernel value is K(x, z) = φ(x)^T Λ φ(z). The scattered data interpolation problem presents N distinct data locations x_k and associated function values y_k = f(x_k), 1 ≤ k ≤ N, and asks for an approximation s which is equal to f at the data locations. To solve this with kernels, we assume that the solution is written as

s(x) = Σ_{k=1}^{N} c_k K(x, x_k).

Demanding equality to the function values at the data locations produces the linear system

[ K(x_1, x_1) ⋯ K(x_1, x_N); ⋮ ⋱ ⋮; K(x_N, x_1) ⋯ K(x_N, x_N) ] [ c_1; ⋮; c_N ] = [ y_1; ⋮; y_N ],

or, more succinctly,

Kc = y.  (3.2)

In this vector notation, values of s can be evaluated as

s(x) = k(x)^T c = k(x)^T K^{−1} y.  (3.3)

Each row of the interpolation matrix K can be written using the eigenfunction expansion:

k(x)^T = ( K(x, x_1) ⋯ K(x, x_N) ) = ( φ(x)^T Λ φ(x_1) ⋯ φ(x)^T Λ φ(x_N) ) = φ(x)^T Λ ( φ(x_1) ⋯ φ(x_N) ).  (3.4)

The entire K matrix can be formed by stacking such rows,

K = [ k(x_1)^T; ⋮; k(x_N)^T ] = [ φ(x_1)^T; ⋮; φ(x_N)^T ] Λ ( φ(x_1) ⋯ φ(x_N) ),

which can be simplified somewhat by defining Φ^T = ( φ(x_1) ⋯ φ(x_N) ), or, more directly, (Φ)_{ij} = ϕ_j(x_i), and writing

K = ΦΛΦ^T.  (3.5)
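In code, this factorization reads as follows (a sketch; the truncation length and node locations are arbitrary choices, and the β = 1, ε = 0 entries can be checked against min(x_i, x_j) − x_i x_j):

```python
import math

def phi(n, x):
    return math.sqrt(2.0) * math.sin(n * math.pi * x)

def lam(n, beta, eps):
    return (n * n * math.pi**2 + eps**2) ** (-beta)

def kernel_matrix(pts, beta, eps, M=400):
    # K = Phi Lambda Phi^T with the series truncated at M terms, (Phi)_ij = phi_j(x_i)
    Phi = [[phi(j, x) for j in range(1, M + 1)] for x in pts]
    return [[sum(Phi[i][k] * lam(k + 1, beta, eps) * Phi[j][k] for k in range(M))
             for j in range(len(pts))] for i in range(len(pts))]

pts = [0.2, 0.5, 0.8]
Kmat = kernel_matrix(pts, beta=1, eps=0.0)
err = abs(Kmat[0][1] - (min(0.2, 0.5) - 0.2 * 0.5))   # truncation error only
sym = abs(Kmat[0][1] - Kmat[1][0])                    # symmetry of the kernel matrix
print(err, sym)
```

In a vectorized environment the triple product would be formed with matrix-matrix multiplies, which is exactly the level 3 BLAS advantage mentioned below.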

3.3 Obtaining a new basis via RBF-QR and the Hilbert-Schmidt SVD

At this point, all we have done is rewrite the kernel interpolation matrix as the product of three larger matrices. Although this helps us evaluate the kernel function using only the series expansion, the result of this process is no different than simply finding each kernel element individually as

K = [ Σ_k λ_k ϕ_k(x_1)ϕ_k(x_1) ⋯ Σ_k λ_k ϕ_k(x_1)ϕ_k(x_N); ⋮ ⋱ ⋮; Σ_k λ_k ϕ_k(x_N)ϕ_k(x_1) ⋯ Σ_k λ_k ϕ_k(x_N)ϕ_k(x_N) ].

From a performance standpoint, computing the interpolation matrix with (3.5) is likely more efficient because it involves level 3 BLAS operations (matrix-matrix products) rather than level 1 BLAS operations (vector inner products φ(x)^T Λ φ(z)). See [2] for a review of this idea.

We can improve on the matrix structure by considering blocks of Φ and Λ. Define

Φ = ( Φ_1 Φ_2 ),  Λ = ( Λ_1 0; 0 Λ_2 ),  where Φ_1, Λ_1 ∈ R^{N×N}, Φ_2 ∈ R^{N×∞}, Λ_2 ∈ R^{∞×∞}.

Starting with (3.4), we can write

k(x)^T = φ(x)^T Λ Φ^T = φ(x)^T ( Λ_1 Φ_1^T; Λ_2 Φ_2^T ) = φ(x)^T ( I_N; Λ_2 Φ_2^T Φ_1^{−T} Λ_1^{−1} ) Λ_1 Φ_1^T,

where I_N ∈ R^{N×N} is the N × N identity matrix. To clean this up further, we define a new function

ψ(x)^T = φ(x)^T ( I_N; Λ_2 Φ_2^T Φ_1^{−T} Λ_1^{−1} )

so that k(x)^T = ψ(x)^T Λ_1 Φ_1^T. Replacing the rows of K with this expression yields

K = [ k(x_1)^T; ⋮; k(x_N)^T ] = [ ψ(x_1)^T; ⋮; ψ(x_N)^T ] Λ_1 Φ_1^T = Ψ Λ_1 Φ_1^T,

where Ψ is defined analogously to K. We call this factorization the Hilbert-Schmidt SVD because the diagonal matrix in the middle is filled with the first N Hilbert-Schmidt eigenvalues. The matrices Ψ and Φ_1 are not orthogonal, as in a standard SVD, but they are generated by the L_2(Ω, ρ) orthogonal eigenfunctions.
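The identity k(x)^T = ψ(x)^T Λ_1 Φ_1^T can be verified directly. The sketch below (truncation length, nodes, and parameters are arbitrary; a dense solve stands in for the QR step of the next subsection) builds the correction block and checks that the two expressions for a kernel row agree:

```python
import math

def mat_solve(A, B):
    # Gauss-Jordan elimination with partial pivoting: returns X with A X = B
    n, m = len(A), len(B[0])
    aug = [A[i][:] + B[i][:] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(n):
            if r != c:
                f = aug[r][c] / aug[c][c]
                aug[r] = [aug[r][k] - f * aug[c][k] for k in range(n + m)]
    return [[aug[i][n + j] / aug[i][i] for j in range(m)] for i in range(n)]

beta, eps, N, M = 2, 1.0, 5, 60
pts = [(i + 1.0) / (N + 1) for i in range(N)]
lam = [(n * n * math.pi**2 + eps**2) ** (-beta) for n in range(1, M + 1)]

def phi_row(x):
    return [math.sqrt(2.0) * math.sin(n * math.pi * x) for n in range(1, M + 1)]

Phi = [phi_row(x) for x in pts]
Phi1 = [row[:N] for row in Phi]              # leading N x N block
Phi2 = [row[N:] for row in Phi]              # remaining N x (M - N) block
V = mat_solve(Phi1, Phi2)                    # V = Phi1^{-1} Phi2
# correction block Lambda2 Phi2^T Phi1^{-T} Lambda1^{-1} = Lambda2 V^T Lambda1^{-1}
C = [[lam[N + r] * V[j][r] / lam[j] for j in range(N)] for r in range(M - N)]

def psi_row(x):
    p = phi_row(x)
    return [p[j] + sum(p[N + r] * C[r][j] for r in range(M - N)) for j in range(N)]

x = 0.33
p = phi_row(x)
k_direct = [sum(lam[n] * p[n] * Phi[i][n] for n in range(M)) for i in range(N)]
ps = psi_row(x)
k_hs = [sum(ps[j] * lam[j] * Phi1[i][j] for j in range(N)) for i in range(N)]
print(max(abs(a - b) for a, b in zip(k_direct, k_hs)))   # agreement to roundoff
```

The correction entries are scaled by ratios λ_{N+r}/λ_j < 1, which is where the conditioning benefit of the factorization originates.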

To see the benefit of using this peculiar factorization of K, we can study the structure of our interpolant (3.3),

s(x) = k(x)^T K^{−1} y = ψ(x)^T Λ_1 Φ_1^T Φ_1^{−T} Λ_1^{−1} Ψ^{−1} y = ψ(x)^T Ψ^{−1} y.

This remaining expression suggests that we have developed a mechanism for evaluating our interpolant s in the basis ψ rather than its standard basis k. The system of interest now is Ψb = y, rather than (3.2) as it would be in the standard basis.

3.4 The QR in RBF-QR and other practical considerations

The use of the term RBF-QR in designing this new basis comes from the computation of the Φ_2^T Φ_1^{−T} term in the data-dependent correction. Often it is preferable to first perform a QR factorization on the (presumably now M-length) Φ matrix to produce

Φ = ( Φ_1 Φ_2 ) = Q ( R_1 R_2 ) = ( QR_1 QR_2 ),

which allows for Φ_2^T Φ_1^{−T} = R_2^T R_1^{−T}. This is a more stable computation than LU factorization, which may be preferable depending on the scale of the various eigenfunctions. The compact Matérn eigenfunctions are all on the same scale (because they are all sines), but other kernels which have used RBF-QR have found the extra stability useful [9].

In the above derivation we have assumed that Φ_1 and Λ_1 are invertible, which should be obvious for Λ_1 since it is a diagonal matrix with positive eigenvalues on the diagonal. The inverse of Φ_1 exists because the eigenfunctions ϕ_k, 1 ≤ k ≤ N, are linearly independent, and Φ_1 would be the interpolation matrix associated with interpolation at the N distinct data locations. Because that interpolant must be unique, the matrix Φ_1 must be invertible. Note that while this is straightforward for this set of eigenfunctions on this domain, other kernels may require more care.

Section 3.1 discussed possible truncation values for the Mercer series representation of K, yet none of this ψ discussion has addressed the need for a finite, length-M series. The infinite nature of this problem is tucked neatly into the definition of ψ, but it can be brought to light by writing the definition explicitly as

ψ(x)^T = ( ϕ_1(x) ⋯ ϕ_N(x) ) + ( ϕ_{N+1}(x) ⋯ ϕ_M(x) ⋯ ) Λ_2 Φ_2^T Φ_1^{−T} Λ_1^{−1}

by expanding out φ(x). Here we can see that the kth element in our ψ basis is the kth eigenfunction, plus a correction in the form of a linear combination of all the eigenfunctions with index greater than N. This infinite length correction (recall both Λ_2 and Φ_2 have infinite dimension) is guaranteed to make a finite contribution because the original Mercer series was uniformly convergent. Using (3.1) will allow us to determine at what point we can stop considering higher order eigenfunctions.
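Putting the pieces together (again a sketch with arbitrary parameters, a dense solve in place of QR, and a hypothetical data function of our choosing): interpolating through the new basis, s(x) = ψ(x)^T Ψ^{−1} y, produces the same interpolant as the standard route k(x)^T K^{−1} y.

```python
import math

def mat_solve(A, B):
    # Gauss-Jordan elimination with partial pivoting: returns X with A X = B
    n, m = len(A), len(B[0])
    aug = [A[i][:] + B[i][:] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(n):
            if r != c:
                f = aug[r][c] / aug[c][c]
                aug[r] = [aug[r][k] - f * aug[c][k] for k in range(n + m)]
    return [[aug[i][n + j] / aug[i][i] for j in range(m)] for i in range(n)]

beta, eps, N, M = 2, 1.0, 6, 80
pts = [(i + 1.0) / (N + 1) for i in range(N)]
lam = [(n * n * math.pi**2 + eps**2) ** (-beta) for n in range(1, M + 1)]

def phi_row(x):
    return [math.sqrt(2.0) * math.sin(n * math.pi * x) for n in range(1, M + 1)]

Phi = [phi_row(x) for x in pts]
Phi1 = [row[:N] for row in Phi]
Phi2 = [row[N:] for row in Phi]
V = mat_solve(Phi1, Phi2)
C = [[lam[N + r] * V[j][r] / lam[j] for j in range(N)] for r in range(M - N)]

def psi_row(x):                      # new basis functions
    p = phi_row(x)
    return [p[j] + sum(p[N + r] * C[r][j] for r in range(M - N)) for j in range(N)]

def k_row(x):                        # truncated standard basis
    p = phi_row(x)
    return [sum(lam[n] * p[n] * Phi[i][n] for n in range(M)) for i in range(N)]

f = lambda t: t * (1.0 - t)          # sample data function (an arbitrary homogeneous choice)
y = [[f(t)] for t in pts]
c = mat_solve([k_row(t) for t in pts], y)    # standard route:  K c = y
b = mat_solve([psi_row(t) for t in pts], y)  # new basis route: Psi b = y

x = 0.375                            # off-node evaluation point
s_std = sum(k_row(x)[i] * c[i][0] for i in range(N))
s_hs = sum(psi_row(x)[j] * b[j][0] for j in range(N))
print(abs(s_std - s_hs))             # the two routes agree to roundoff
```

For this modest β the standard system is still well conditioned; the payoff of the ψ route appears for larger β, where Λ decays quickly and K^{−1} becomes numerically treacherous.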

A significant benefit to this HS-SVD approach is a more stable interpolation system. Traditionally, inverting K could introduce unacceptable amounts of instability because of its ill-conditioning. This was discussed in the context of RBF-QR applied to Gaussians in [5]. The HS-SVD allows us to isolate the ill-conditioning primarily (although not entirely) in the Λ_1 factor and resolve the Λ_2 Φ_2^T Φ_1^{−T} Λ_1^{−1} terms safely. This effect will become apparent in Section 4 for larger values of β.

4 Numerical Experiments

The smoothness of the compact Matérn kernels is controlled by the free parameter β, such that a compact Matérn interpolant has 2β − 2 smooth derivatives. As was discussed in Section 2.3, we conjecture that the convergence order of interpolants based on these kernels, with respect to the fill distance |π| introduced in Section 1, should be O(|π|^{2β}). For simplicity, we always use N evenly spaced points in [0, 1], meaning that |π| ~ 1/N. The experiments in this section demonstrate that compact Matérn interpolants converge at order O(N^{−2β}) for appropriately smooth and homogeneous input functions f.

In addition to the smoothness parameter β, we have also introduced a shape parameter ε which influences the peakedness of the compact Matérn kernels. It is known that interpolation performed with infinitely smooth kernels, equipped with a shape parameter ε, will yield polynomial interpolation as they approach their ε → 0 flat limit (see, e.g., [, 24, 3]). Because ε can take positive values, these methods have the potential to achieve accuracy superior to polynomials at essentially the same computational cost. A similar situation occurs for the compact Matérn functions: the presence of the free parameter ε allows compact Matérn interpolation to surpass piecewise polynomial interpolation for some interpolation problems. This section studies the significance of this ε parameter, as well as the smoothness parameter β, in some interpolation and boundary value problem settings. We describe several settings and demonstrate when the ε and β flexibility is valuable in improving the solution accuracy.

For examples dealing with the order of convergence as a function of β, we compute the error of our interpolant s to the function f using the L_2 norm, error(s) = ‖s − f‖_2, where the norm is approximated on 400 evenly spaced points in [0, 1]. The L_∞ norm is used in Section 4.2 only, where we are studying the effect of varying ε for a fixed β.

4.1 Interpolation for functions satisfying homogeneous boundary conditions

In the derivation of the compact Matérn and piecewise polynomial spline kernels in Section 2, we imposed boundary conditions (2.2) to ensure unique Green's functions. However, in doing so, we have restricted the set of functions which can be recovered to arbitrary precision on [0, 1] to functions which satisfy these

boundary conditions. This section studies the interpolation properties of these kernels on suitably homogeneous functions.

Fig. 1: Sample plots of the G_n function from (4.1) with n = 1, 3, 5, 7. As with all G_n in this section, γ = 0.1567. The vertical dashed lines are at x = γ and x = 1 − γ, and indicate the points beyond which the function is 0.

The first function that we consider is related to a function from [22] and was specifically designed to study the effect of error near the boundaries. This family of functions,

G_n(x) = (1/2 − γ)^{−2n} (x − γ)_+^n (1 − γ − x)_+^n,  x ∈ [0, 1],   (4.1)

has all its derivatives at x = 0 and x = 1 equal to 0, meaning that it definitely satisfies (2.2). The parameter γ ∈ (0, 1/2) causes the function to be identically zero outside of the interval (γ, 1 − γ), with discontinuous n-th derivatives at x ∈ {γ, 1 − γ}. By manipulating n we can consider functions that both satisfy and violate the necessary smoothness conditions of the underlying Green's kernel differential operator (2.1). For these experiments we will fix γ = 0.1567 rather arbitrarily, but with the intention of keeping the points of discontinuity away from any data point. G_n is plotted in Figure 1 for some values of n.

4.1.1 Example 1: Convergence orders

Our first example will study the order of convergence of these kernel methods for a fixed ε = 1 but different β values. We choose our G_n smoothness parameter to be n = 6 and perform compact Matérn interpolation with β = 1, . . . , 5. For β ≤ 3, the necessary smoothness conditions are satisfied, but for β > 3, the interpolating functions assume a higher level of smoothness than the function G_6 provides. Figure 2a shows the expected improvement in convergence order until the smoothness assumption is violated (when β = 4); at that point, subsequent increases in β yield negligible convergence improvements. These results support our earlier conjecture that functions which satisfy the 2β homogeneity conditions (2.2) and have at least 2β smooth derivatives can be
interpolated using order β compact Matérn basis functions with an O(N^{−2β}) order of convergence. The table in Figure 2b shows the results of lines of best fit on the log

scale graph from Figure 2a under the assumption that the error of the interpolant follows

error(s) = C N^{−2β},  i.e.,  log(error(s)) = log C − 2β log N.

The exponent column should contain the values 2β if this error formula is valid. This is the case when β ≤ 3, but for larger β the convergence order deteriorates.

Fig. 2: Error results for compact Matérn interpolation on the function G_6 from (4.1): (a) convergence order plot; (b) convergence order table. As β increases, the order of convergence increases, until the interpolating basis functions reach the smoothness of G_6. Beyond that point (which occurs at β = 3) we encounter saturation, and there is no value in increasing β further. The N input points were evenly spaced in [0, 1], as were the 400 points at which the error was computed.

4.1.2 Example 2: Piecewise polynomial splines as limits of compact Matérn interpolants and the suboptimality of splines

Earlier, we claimed that the kernels associated with (2.1) for ε = 0 could be used to produce piecewise polynomial splines with appropriate boundary conditions. We now demonstrate that fact experimentally, and also show that optimal accuracy of the compact Matérn kernels may be achieved for ε > 0. Figure 3 studies the use of β = 1 compact Matérn kernels to approximate G_1 sampled at evenly spaced points. This confirms that the compact Matérns are converging to piecewise linear splines as ε → 0, and the table in Figure 3b displays the ε_opt value which produces the optimal error. For this β = 1 example, the optimal error for compact Matérn interpolation is only marginally better than the spline result. This is not always the case, as is demonstrated for a β = 2 interpolant in Figure 4. We consider a function

Ĝ_2(x) = G_2(x) exp(−36(x − 0.4)^2),   (4.2)

where G_2 was defined in (4.1), using the parameter γ = 0.1567. This function satisfies the necessary smoothness and convergence criteria, but is concentrated

around x = 0.4 rather than x = 0.5.

Fig. 3: Error results for compact Matérn interpolation on the function G_1 from (4.1): (a) error as a function of ε; (b) ε parameterization for optimal error. The effect of ε is studied for various N values, and each time an optimal ε, denoted ε_opt, produces a smaller error than the spline which occurs in the ε → 0 limit. In the table on the right, the ε_opt value and the ratio of the optimal error to the spline error are displayed. The N input points were evenly spaced in [0, 1], as were the 400 points at which the error was computed. The spline results were computed with a piecewise linear spline.

In Figure 4a, we see clear benefits to using ε > 0 to perform interpolation, but not for all N values.

Fig. 4: Error results for compact Matérn interpolation on the function Ĝ_2 from (4.2): (a) error as a function of ε; (b) ε parameterization for optimal error. The effect of ε is studied for various N values, and for smaller N an optimal ε, denoted ε_opt, produces a smaller error than the spline which occurs in the ε → 0 limit. In the table on the right, the ε_opt value and the ratio of the optimal error to the spline error are displayed. For N = 48, the optimal error seems to occur for ε = 0. The N input points were evenly spaced in [0, 1], as were the 400 points at which the error was computed. The spline results were computed with a natural cubic spline.

It seems that for large enough N, this function is best fit with the interpolating natural cubic spline, but when the function is only partially known, the additional flexibility of ε ≠ 0 provides a benefit.
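The ε sweeps behind Figures 3 and 4 can be sketched in a few lines. Since evaluating the compact Matérn kernels themselves requires the HS-SVD machinery, the snippet below is a minimal illustration rather than the paper's code: it substitutes the closed-form C^0 Matérn kernel K(x, z) = exp(−ε|x − z|) as a stand-in and uses the family G_n from (4.1); the function and helper names are ours.

```python
import numpy as np

def G(x, n=1, gamma=0.1567):
    """Test function (4.1): zero outside (gamma, 1 - gamma), with
    discontinuous n-th derivatives at x = gamma and x = 1 - gamma."""
    return (0.5 - gamma) ** (-2 * n) \
        * np.maximum(x - gamma, 0.0) ** n * np.maximum(1.0 - gamma - x, 0.0) ** n

def l2_error(eps, N, f, n_eval=400):
    """Interpolate f at N evenly spaced points with the stand-in kernel
    K(x, z) = exp(-eps*|x - z|) and return the discrete L2 error."""
    x = np.linspace(0.0, 1.0, N)        # data sites
    xe = np.linspace(0.0, 1.0, n_eval)  # evaluation grid for the error norm
    K = np.exp(-eps * np.abs(x[:, None] - x[None, :]))
    c = np.linalg.solve(K, f(x))        # interpolation coefficients
    s = np.exp(-eps * np.abs(xe[:, None] - x[None, :])) @ c
    return np.sqrt(np.mean((s - f(xe)) ** 2))

# Sweep eps for a fixed N, mimicking the horizontal axis of Figure 3a.
errors = {eps: l2_error(eps, N=24, f=G) for eps in (0.1, 1.0, 10.0)}
```

Repeating the sweep for several N and recording the minimizing ε mimics the ε_opt tables in Figures 3b and 4b.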

4.1.3 Summary for functions satisfying homogeneous boundary conditions

The results of this section suggest two points:
- The order of the compact Matérn interpolation scheme is O(N^{−2β}) for sufficiently smooth and appropriately homogeneous functions.
- The ε → 0 limit of the compact Matérn interpolant is the interpolating piecewise polynomial spline of degree 2β − 1 with boundary conditions specified by (2.2). This observation is in agreement with the theoretical results of [35] on interpolation in the flat limit for kernels with finite smoothness.

4.2 Interpolating functions which do not satisfy the boundary conditions

In the previous section, Figure 2 studied the effect of choosing β greater than the requisite level of smoothness of the underlying function. This is relevant because we may not know a priori the appropriate smoothness when interpolating functions with limited smoothness. Likewise, many functions of interest do not satisfy the boundary conditions (2.2), and in this section we study the effect of violating those boundary conditions on the resulting interpolant. One function which satisfies the β = 1 and β = 2 conditions is the polynomial

p(x) = x − 2x^3 + x^4.   (4.3)

This function violates the β = 3 conditions because p^{(4)}(0) ≠ 0, but all higher derivatives are 0, so only the β = 3 condition is violated. Figure 5 studies convergence using different K_{β,1} kernels, and shows that there is stagnation beyond β = 2.

Fig. 5: Error results for compact Matérn interpolation on the function p from (4.3): (a) convergence order plot; (b) convergence order table. The expected order of convergence is observed for β = 1 and β = 2, until the β = 3 boundary conditions are violated. For β = 3, the order is the same as for β = 2, and even though the β = 4 boundary conditions are satisfied, the β = 4 interpolant converges no more quickly than β = 2 because of the β = 3 barrier. The N input points were evenly spaced in [0, 1], as were the 400 points at which the error was
computed. Only ε = 1 was considered here.
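The boundary-condition bookkeeping for p can be checked mechanically. The sketch below is a hedged illustration: based on the paper's description of the conditions as homogeneity of even derivatives at 0 and 1, it assumes the β = k condition from (2.2) requires the derivative of order 2(k − 1) to vanish at both endpoints; the helper name is ours, not the paper's.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# p(x) = x - 2x^3 + x^4, coefficients from low to high degree.
p = np.array([0.0, 1.0, 0.0, -2.0, 1.0])

def condition_holds(coeffs, k, tol=1e-12):
    """The 'beta = k' homogeneity condition, read here as: the derivative
    of order 2(k - 1) vanishes at x = 0 and x = 1."""
    d = P.polyder(coeffs, 2 * (k - 1)) if k > 1 else coeffs
    return abs(P.polyval(0.0, d)) < tol and abs(P.polyval(1.0, d)) < tol

# beta = 1, 2 hold; beta = 3 fails since p''''(0) = 24; beta = 4 holds again.
print([condition_holds(p, k) for k in (1, 2, 3, 4)])  # [True, True, False, True]
```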

Up until β = 2, we observe the expected order of convergence, namely O(N^{−2β}). Beyond the first nonhomogeneous boundary condition at β = 3, there is no improvement in convergence; this happens despite the fact that all other boundary conditions for β ≠ 3 are satisfied. Higher order kernels make no progress, much in the same way that increasing β beyond the appropriate smoothness was ineffective in Figure 2.

4.3 Stable interpolation with the Hilbert-Schmidt SVD

Section 3.4 introduced the idea that the ψ basis is more stable than the standard basis k. Numerical evidence of this for Gaussians was presented in [15]. Here we show in an experiment that for small values of ε, the interpolation matrix K evaluated using the Mercer series representation (3.5) becomes unreasonably ill-conditioned even for small N. The function we choose to interpolate is f(x) = sin(2πx), which satisfies all boundary conditions for any β value. This conditioning problem would still appear even for functions f which do not satisfy any boundary conditions, but the choice of this function f allows us to eliminate nonhomogeneity as a potential problem here. We perform interpolation with only N = 10 points, and compare the use of the ψ and k bases in Figure 6.

Fig. 6: Even with only N = 10 points, the ill-conditioning from the standard basis prevents stable interpolation for ε → 0. In contrast, the stable basis has no such stability problem for small ε, allowing it to reach the best accuracy possible subject to machine precision. For this example, β = 8 was used.

For suitably large values of ε, the two interpolation methods overlap, but as ε → 0 the standard basis loses accuracy and stagnates as the condition of the K matrix overwhelms the solution. By using the Hilbert-Schmidt SVD and computing the interpolant using Ψ instead, we are able to safely interpolate for all values of

ε. This new basis provides a stable mechanism for evaluating piecewise polynomial splines, of arbitrary degree, for which a Hilbert-Schmidt series expansion is available. Although this eigenexpansion is only applicable for the piecewise polynomial splines presented in Section 2, this technique could be used to stably evaluate splines generated by other differential operators and boundary conditions. In this context it serves as a useful complement to the theory of B-splines (see, e.g., [34]), which also allow for stable evaluation.

4.4 2-pt BVP

In the same way that we can compute interpolants using these compact Matérn kernels, it is also possible to solve boundary value problems. The use of RBF-QR to solve boundary value problems involving Gaussians was discussed in [26], and a similar collocation approach is presented below. This is prefaced by the assumption that the solution to the boundary value problem satisfies (2.2) for whatever β is chosen to solve the problem. As discussed in Section 4.2, failing to satisfy those conditions will prevent the desired order of convergence when interpolating, and will likely cause a similar problem here. To ensure that we meet at least the minimum homogeneity condition, we will restrict ourselves to boundary value problems of the form

Lu(x) = f(x),  x ∈ (0, 1),   (4.4a)
u(x) = 0,  x = 0 or x = 1,   (4.4b)

where L is a linear, second-order differential operator. Boundary value problems of this form have solutions which obviously satisfy the β = 1 boundary condition, and because second derivatives appear, the solution must also have at least two smooth derivatives. This demands compact Matérn kernels with β ≥ 2. Assuming that appropriate β and ε values have been chosen, the BVP solution is found much as the interpolation solution is: we assume that our solution takes the standard form

s(x) = Σ_{k=1}^{N} c_k K(x, x_k),

where the x_k ∈ (0, 1) are the kernel centers. Our kernel centers will always be evenly distributed in (0, 1), though that is
not required. Plugging this solution into (4.4a) and applying linearity yields the ODE

Σ_{k=1}^{N} c_k LK(x, x_k) = f(x),  x ∈ (0, 1),

where the operator L is being applied to the first argument of the kernel K. To determine the c_k values, we demand equality at the collocation points. While the collocation points and kernel centers can be different, we will assume they are the

same as we did in the interpolation setting. This will produce an N × N linear system

[LK(x_1, x_1) ... LK(x_1, x_N)] [c_1]   [f(x_1)]
[     :       ...      :      ] [ : ] = [   :  ]   (4.5)
[LK(x_N, x_1) ... LK(x_N, x_N)] [c_N]   [f(x_N)]

which can be inverted to determine the c_k values. Normally, we would need to also incorporate boundary conditions into the collocation problem in order to find a unique solution. In this 1D, second-order BVP setting, this would necessitate two additional kernel centers and, in turn, two additional collocation points at the boundaries. When using the compact Matérn kernels, however, there is no need to consider the boundary conditions (4.4b) explicitly, because they will automatically be satisfied by the homogeneity conditions (2.2). If we were to append two kernel centers on the boundaries, x_{N+1} = 0 and x_{N+2} = 1, to our solution, the resulting system would be

[LK(x_1, x_1) ... LK(x_1, x_N)  LK(x_1, 0)  LK(x_1, 1)] [c_1    ]   [f(x_1)]
[     :       ...      :             :           :    ] [ :     ]   [  :   ]
[LK(x_N, x_1) ... LK(x_N, x_N)  LK(x_N, 0)  LK(x_N, 1)] [c_N    ] = [f(x_N)]   (4.6)
[ K(0, x_1)   ...  K(0, x_N)     K(0, 0)     K(0, 1)  ] [c_{N+1}]   [  0   ]
[ K(1, x_1)   ...  K(1, x_N)     K(1, 0)     K(1, 1)  ] [c_{N+2}]   [  0   ]

Substituting in K(0, ·) = K(1, ·) = 0, as required by the β = 1 homogeneity conditions, causes the bottom two rows to be all zeros. That system is underdetermined, and if we fix c_{N+1} = c_{N+2} = 0 the remaining coefficients can be found by solving the original system (4.5). Note that if the BVP boundary conditions (4.4b) were not homogeneous, then the bottom two values in the right-hand side vector of (4.6) would not be 0, and the system would not be consistent. This is a manifestation of the fact that these compact Matérn kernels will only converge if the β = 1 homogeneity conditions are satisfied.

Our example is the two-point boundary value problem

u''(x) = e^x − 1 + x(1 − e),  x ∈ (0, 1),
u(x) = 0,  x = 0 or x = 1,

which has the true solution

u(x) = e^x − 1 + (1/6)(x^3 − 3x^2 + 8x − e(x^3 + 5x)).   (4.7)

This function has derivatives of all orders, and satisfies the β = 1 and β = 2 homogeneity conditions. We fixed ε = 1, and studied the error at 100 evenly spaced points in (0, 1) as the
number of kernels N increased. The results are presented in Figure 7. The first obvious difference is that no β = 1 case is present: because the β = 1 compact Matérn function has no smooth derivatives, it would be inappropriate for this boundary value problem. The BVP solution seems to have an error like

error(s) = C N^{−2β+2},

which is two degrees worse than the interpolation setting. This is confirmed numerically in Figure 7b when a least squares fit is performed on the errors. This loss of order is in line with the L-spline error bound (1.1) interpreted in the case of derivative approximations.

Fig. 7: Error results for compact Matérn collocation on the BVP with solution u from (4.7): (a) convergence order plot; (b) convergence order table. The order of convergence for the collocation method is two degrees lower than for the equivalent interpolation methods, likely brought on by the presence of derivatives in the linear system. As was observed in previous examples, the convergence stops accelerating when β is chosen beyond the homogeneity attained by the true solution, which in this case happens for β > 2. The N collocation points were evenly spaced in [0, 1], as were the 100 points at which the error was computed. We fixed ε = 1.

5 Conclusions and Future Work

Compact Matérn kernels are developed by studying the eigenvalues and eigenfunctions of the iterated modified Helmholtz differential operator applied on the interval [0, 1]. The kernels are defined by using the eigenvalues and eigenfunctions in a Mercer series. Boundary conditions enforcing homogeneity of even derivatives at 0 and 1 differentiate these kernels from the traditional full-space Matérn kernels. Free parameters ε and β control the locality and smoothness of these kernels, and in the ε → 0 flat limit they can be used to produce piecewise polynomial splines. Despite the infinite length of the Mercer series, the RBF-QR method can be used to perform stable interpolation involving the compact Matérn kernels even as ε → 0.

Numerical results show convergence of the interpolant at an order roughly twice the smoothness of the kernel, which is a result predicted for L-splines, but not directly applicable to compact Matérn interpolation. This convergence behavior only applies to functions which satisfy the aforementioned
homogeneity conditions at the boundary; failure to comply impedes the improvement otherwise commensurate with a smoother kernel.

Future work must address the transition to higher dimensions, likely through the use of tensor product kernels, and the implications of the boundary conditions in

higher dimensions. The presence of both a locality and a smoothness parameter adds another layer of complexity to the common problem of choosing the right kernel parametrization. Future work should study statistical or analytic methods for optimally choosing ε and β simultaneously. During that process, one may also consider the interpretation of fractional β: in the operator sense (2.1) a fractional β seems nonsensical, but the final series computations can handle non-integer β without difficulty. Some work has studied the distribution of error on [0, 1] for compact Matérn interpolants, but more work will help understand the effect of violating the boundary conditions, and perhaps even provide a mechanism to circumvent them without losing the higher convergence order. Furthermore, different boundary conditions may be preferable for certain problems, so the impact of other boundary conditions must be studied. It is, of course, also possible to define alternative families of kernels using the same differential operators we have used, but with different boundary conditions. For example, a family of periodic kernels is discussed in [39, Chapter 1].

Acknowledgements We would like to thank Casey Bylund and William Mayner for their derivation of formula (2.1). Major parts of this research were performed while the first author visited IIT, supported by the University of Turin and the Piedmont Region through the project Alta Formazione. The second author acknowledges support from the National Science Foundation via grant DMS-1115392.

References

1. Ahlberg, J.H., Nilson, E.N., Walsh, J.L.: Fundamental properties of generalized splines. Proc. Natl. Acad. Sci. USA 52(6) (1964)
2. Ahlberg, J.H., Nilson, E.N., Walsh, J.L.: Convergence properties of generalized splines. Proc. Natl. Acad. Sci. USA 54(2) (1965)
3. Ahlberg, J.H., Nilson, E.N., Walsh, J.L.: The Theory of Splines and Their Applications. Mathematics in Science and Engineering. Academic Press (1967)
4. Allasia, G., Cavoretto, R., De Rossi, A.:
Numerical integration on multivariate scattered data by Lobachevsky splines. Int. J. Comput. Math., to appear
5. Allasia, G., Cavoretto, R., De Rossi, A.: Lobachevsky spline functions and interpolation to scattered data. Comput. Appl. Math. 32(1), 71-87 (2013)
6. de Boor, C.: On interpolation by radial polynomials. Adv. Comput. Math. 24(1-4) (2006)
7. Brinks, R.: On the convergence of derivatives of B-splines to derivatives of the Gaussian function. Comput. Appl. Math. 27(1) (2008)
8. Cavoretto, R.: Meshfree approximation methods, algorithms and applications. PhD thesis, University of Turin (2010)
9. Courant, R., Hilbert, D.: Methods of Mathematical Physics, Vol. 1. Wiley (1953). Reprinted
10. NIST Digital Library of Mathematical Functions. Release 1.0.5 of (2012). Online companion to [29]
11. Driscoll, T.A., Fornberg, B.: Interpolation in the limit of increasingly flat radial basis functions. Comput. Math. Appl. 43(3-5) (2002)
12. Duffy, D.G.: Green's Functions with Applications, 1st edn. Chapman and Hall/CRC (2001)
13. Fasshauer, G.E.: Meshfree Approximation Methods with MATLAB. World Scientific Publishing Co., Inc., River Edge, NJ, USA (2007)
14. Fasshauer, G.E., Hickernell, F.J., Woźniakowski, H.: On dimension-independent rates of convergence for function approximation with Gaussian kernels. SIAM J. Numer. Anal. 50(1) (2012)
15. Fasshauer, G.E., McCourt, M.: Stable evaluation of Gaussian RBF interpolants. SIAM J. Sci. Comput. 34(2), A737-A762 (2012)


APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Recent Results for Moving Least Squares Approximation

Recent Results for Moving Least Squares Approximation Recent Results for Moving Least Squares Approximation Gregory E. Fasshauer and Jack G. Zhang Abstract. We describe two experiments recently conducted with the approximate moving least squares (MLS) approximation

More information

arxiv: v1 [physics.comp-ph] 28 Jan 2008

arxiv: v1 [physics.comp-ph] 28 Jan 2008 A new method for studying the vibration of non-homogeneous membranes arxiv:0801.4369v1 [physics.comp-ph] 28 Jan 2008 Abstract Paolo Amore Facultad de Ciencias, Universidad de Colima, Bernal Díaz del Castillo

More information

Stability of Kernel Based Interpolation

Stability of Kernel Based Interpolation Stability of Kernel Based Interpolation Stefano De Marchi Department of Computer Science, University of Verona (Italy) Robert Schaback Institut für Numerische und Angewandte Mathematik, University of Göttingen

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 6: Scattered Data Interpolation with Polynomial Precision Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu

More information

We consider the problem of finding a polynomial that interpolates a given set of values:

We consider the problem of finding a polynomial that interpolates a given set of values: Chapter 5 Interpolation 5. Polynomial Interpolation We consider the problem of finding a polynomial that interpolates a given set of values: x x 0 x... x n y y 0 y... y n where the x i are all distinct.

More information

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space.

Hilbert Spaces. Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Hilbert Spaces Hilbert space is a vector space with some extra structure. We start with formal (axiomatic) definition of a vector space. Vector Space. Vector space, ν, over the field of complex numbers,

More information

Kernels A Machine Learning Overview

Kernels A Machine Learning Overview Kernels A Machine Learning Overview S.V.N. Vishy Vishwanathan vishy@axiom.anu.edu.au National ICT of Australia and Australian National University Thanks to Alex Smola, Stéphane Canu, Mike Jordan and Peter

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 7 Interpolation Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Vectors in Function Spaces

Vectors in Function Spaces Jim Lambers MAT 66 Spring Semester 15-16 Lecture 18 Notes These notes correspond to Section 6.3 in the text. Vectors in Function Spaces We begin with some necessary terminology. A vector space V, also

More information

A Reverse Technique for Lumping High Dimensional Model Representation Method

A Reverse Technique for Lumping High Dimensional Model Representation Method A Reverse Technique for Lumping High Dimensional Model Representation Method M. ALPER TUNGA Bahçeşehir University Department of Software Engineering Beşiktaş, 34349, İstanbul TURKEY TÜRKİYE) alper.tunga@bahcesehir.edu.tr

More information

Math 307 Learning Goals. March 23, 2010

Math 307 Learning Goals. March 23, 2010 Math 307 Learning Goals March 23, 2010 Course Description The course presents core concepts of linear algebra by focusing on applications in Science and Engineering. Examples of applications from recent

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

Chapter 4: Interpolation and Approximation. October 28, 2005

Chapter 4: Interpolation and Approximation. October 28, 2005 Chapter 4: Interpolation and Approximation October 28, 2005 Outline 1 2.4 Linear Interpolation 2 4.1 Lagrange Interpolation 3 4.2 Newton Interpolation and Divided Differences 4 4.3 Interpolation Error

More information

Approximate Solution of BVPs for 4th-Order IDEs by Using RKHS Method

Approximate Solution of BVPs for 4th-Order IDEs by Using RKHS Method Applied Mathematical Sciences, Vol. 6, 01, no. 50, 453-464 Approximate Solution of BVPs for 4th-Order IDEs by Using RKHS Method Mohammed Al-Smadi Mathematics Department, Al-Qassim University, Saudi Arabia

More information

(L, t) = 0, t > 0. (iii)

(L, t) = 0, t > 0. (iii) . Sturm Liouville Boundary Value Problems 69 where E is Young s modulus and ρ is the mass per unit volume. If the end x = isfixed, then the boundary condition there is u(, t) =, t >. (ii) Suppose that

More information

A orthonormal basis for Radial Basis Function approximation

A orthonormal basis for Radial Basis Function approximation A orthonormal basis for Radial Basis Function approximation 9th ISAAC Congress Krakow, August 5-9, 2013 Gabriele Santin, joint work with S. De Marchi Department of Mathematics. Doctoral School in Mathematical

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer. Lecture 5, January 27, 2006

Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer. Lecture 5, January 27, 2006 Chem 3502/4502 Physical Chemistry II (Quantum Mechanics) 3 Credits Fall Semester 2006 Christopher J. Cramer Lecture 5, January 27, 2006 Solved Homework (Homework for grading is also due today) We are told

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Mathematics for Engineers. Numerical mathematics

Mathematics for Engineers. Numerical mathematics Mathematics for Engineers Numerical mathematics Integers Determine the largest representable integer with the intmax command. intmax ans = int32 2147483647 2147483647+1 ans = 2.1475e+09 Remark The set

More information

Eigenvectors and Hermitian Operators

Eigenvectors and Hermitian Operators 7 71 Eigenvalues and Eigenvectors Basic Definitions Let L be a linear operator on some given vector space V A scalar λ and a nonzero vector v are referred to, respectively, as an eigenvalue and corresponding

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Linear Least-Squares Data Fitting

Linear Least-Squares Data Fitting CHAPTER 6 Linear Least-Squares Data Fitting 61 Introduction Recall that in chapter 3 we were discussing linear systems of equations, written in shorthand in the form Ax = b In chapter 3, we just considered

More information

Chapter 9. Non-Parametric Density Function Estimation

Chapter 9. Non-Parametric Density Function Estimation 9-1 Density Estimation Version 1.2 Chapter 9 Non-Parametric Density Function Estimation 9.1. Introduction We have discussed several estimation techniques: method of moments, maximum likelihood, and least

More information

Radial basis function partition of unity methods for PDEs

Radial basis function partition of unity methods for PDEs Radial basis function partition of unity methods for PDEs Elisabeth Larsson, Scientific Computing, Uppsala University Credit goes to a number of collaborators Alfa Ali Alison Lina Victor Igor Heryudono

More information

DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION

DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION Meshless Methods in Science and Engineering - An International Conference Porto, 22 DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION Robert Schaback Institut für Numerische und Angewandte Mathematik (NAM)

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Chapter 9. Non-Parametric Density Function Estimation

Chapter 9. Non-Parametric Density Function Estimation 9-1 Density Estimation Version 1.1 Chapter 9 Non-Parametric Density Function Estimation 9.1. Introduction We have discussed several estimation techniques: method of moments, maximum likelihood, and least

More information

Gaussian Random Fields

Gaussian Random Fields Gaussian Random Fields Mini-Course by Prof. Voijkan Jaksic Vincent Larochelle, Alexandre Tomberg May 9, 009 Review Defnition.. Let, F, P ) be a probability space. Random variables {X,..., X n } are called

More information

INTRODUCTION TO COMPUTATIONAL MATHEMATICS

INTRODUCTION TO COMPUTATIONAL MATHEMATICS INTRODUCTION TO COMPUTATIONAL MATHEMATICS Course Notes for CM 271 / AMATH 341 / CS 371 Fall 2007 Instructor: Prof. Justin Wan School of Computer Science University of Waterloo Course notes by Prof. Hans

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

Math 4263 Homework Set 1

Math 4263 Homework Set 1 Homework Set 1 1. Solve the following PDE/BVP 2. Solve the following PDE/BVP 2u t + 3u x = 0 u (x, 0) = sin (x) u x + e x u y = 0 u (0, y) = y 2 3. (a) Find the curves γ : t (x (t), y (t)) such that that

More information

G-SPLINE INTERPOLATION FOR APPROXIMATING THE SOLUTION OF FRACTIONAL DIFFERENTIAL EQUATIONS USING LINEAR MULTI- STEP METHODS

G-SPLINE INTERPOLATION FOR APPROXIMATING THE SOLUTION OF FRACTIONAL DIFFERENTIAL EQUATIONS USING LINEAR MULTI- STEP METHODS Journal of Al-Nahrain University Vol.0(), December, 00, pp.8- Science G-SPLINE INTERPOLATION FOR APPROXIMATING THE SOLUTION OF FRACTIONAL DIFFERENTIAL EQUATIONS USING LINEAR MULTI- STEP METHODS Osama H.

More information

Sufficient conditions for functions to form Riesz bases in L 2 and applications to nonlinear boundary-value problems

Sufficient conditions for functions to form Riesz bases in L 2 and applications to nonlinear boundary-value problems Electronic Journal of Differential Equations, Vol. 200(200), No. 74, pp. 0. ISSN: 072-669. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu ftp ejde.math.swt.edu (login: ftp) Sufficient conditions

More information

Preliminary Examination in Numerical Analysis

Preliminary Examination in Numerical Analysis Department of Applied Mathematics Preliminary Examination in Numerical Analysis August 7, 06, 0 am pm. Submit solutions to four (and no more) of the following six problems. Show all your work, and justify

More information

22 : Hilbert Space Embeddings of Distributions

22 : Hilbert Space Embeddings of Distributions 10-708: Probabilistic Graphical Models 10-708, Spring 2014 22 : Hilbert Space Embeddings of Distributions Lecturer: Eric P. Xing Scribes: Sujay Kumar Jauhar and Zhiguang Huo 1 Introduction and Motivation

More information

Decomposition of Riesz frames and wavelets into a finite union of linearly independent sets

Decomposition of Riesz frames and wavelets into a finite union of linearly independent sets Decomposition of Riesz frames and wavelets into a finite union of linearly independent sets Ole Christensen, Alexander M. Lindner Abstract We characterize Riesz frames and prove that every Riesz frame

More information

Lectures 9-10: Polynomial and piecewise polynomial interpolation

Lectures 9-10: Polynomial and piecewise polynomial interpolation Lectures 9-1: Polynomial and piecewise polynomial interpolation Let f be a function, which is only known at the nodes x 1, x,, x n, ie, all we know about the function f are its values y j = f(x j ), j

More information

Spanning and Independence Properties of Finite Frames

Spanning and Independence Properties of Finite Frames Chapter 1 Spanning and Independence Properties of Finite Frames Peter G. Casazza and Darrin Speegle Abstract The fundamental notion of frame theory is redundancy. It is this property which makes frames

More information

On Riesz-Fischer sequences and lower frame bounds

On Riesz-Fischer sequences and lower frame bounds On Riesz-Fischer sequences and lower frame bounds P. Casazza, O. Christensen, S. Li, A. Lindner Abstract We investigate the consequences of the lower frame condition and the lower Riesz basis condition

More information

An Analysis of Five Numerical Methods for Approximating Certain Hypergeometric Functions in Domains within their Radii of Convergence

An Analysis of Five Numerical Methods for Approximating Certain Hypergeometric Functions in Domains within their Radii of Convergence An Analysis of Five Numerical Methods for Approximating Certain Hypergeometric Functions in Domains within their Radii of Convergence John Pearson MSc Special Topic Abstract Numerical approximations of

More information

2 Two-Point Boundary Value Problems

2 Two-Point Boundary Value Problems 2 Two-Point Boundary Value Problems Another fundamental equation, in addition to the heat eq. and the wave eq., is Poisson s equation: n j=1 2 u x 2 j The unknown is the function u = u(x 1, x 2,..., x

More information

Nonlinear Stationary Subdivision

Nonlinear Stationary Subdivision Nonlinear Stationary Subdivision Michael S. Floater SINTEF P. O. Box 4 Blindern, 034 Oslo, Norway E-mail: michael.floater@math.sintef.no Charles A. Micchelli IBM Corporation T.J. Watson Research Center

More information

Stability constants for kernel-based interpolation processes

Stability constants for kernel-based interpolation processes Dipartimento di Informatica Università degli Studi di Verona Rapporto di ricerca Research report 59 Stability constants for kernel-based interpolation processes Stefano De Marchi Robert Schaback Dipartimento

More information

arxiv: v1 [math.co] 3 Nov 2014

arxiv: v1 [math.co] 3 Nov 2014 SPARSE MATRICES DESCRIBING ITERATIONS OF INTEGER-VALUED FUNCTIONS BERND C. KELLNER arxiv:1411.0590v1 [math.co] 3 Nov 014 Abstract. We consider iterations of integer-valued functions φ, which have no fixed

More information

Numerical integration and differentiation. Unit IV. Numerical Integration and Differentiation. Plan of attack. Numerical integration.

Numerical integration and differentiation. Unit IV. Numerical Integration and Differentiation. Plan of attack. Numerical integration. Unit IV Numerical Integration and Differentiation Numerical integration and differentiation quadrature classical formulas for equally spaced nodes improper integrals Gaussian quadrature and orthogonal

More information

Scientific Computing

Scientific Computing 2301678 Scientific Computing Chapter 2 Interpolation and Approximation Paisan Nakmahachalasint Paisan.N@chula.ac.th Chapter 2 Interpolation and Approximation p. 1/66 Contents 1. Polynomial interpolation

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

A new class of highly accurate differentiation schemes based on the prolate spheroidal wave functions

A new class of highly accurate differentiation schemes based on the prolate spheroidal wave functions We introduce a new class of numerical differentiation schemes constructed for the efficient solution of time-dependent PDEs that arise in wave phenomena. The schemes are constructed via the prolate spheroidal

More information

Backward Error Estimation

Backward Error Estimation Backward Error Estimation S. Chandrasekaran E. Gomez Y. Karant K. E. Schubert Abstract Estimation of unknowns in the presence of noise and uncertainty is an active area of study, because no method handles

More information

SVMs: nonlinearity through kernels

SVMs: nonlinearity through kernels Non-separable data e-8. Support Vector Machines 8.. The Optimal Hyperplane Consider the following two datasets: SVMs: nonlinearity through kernels ER Chapter 3.4, e-8 (a) Few noisy data. (b) Nonlinearly

More information

POINT VALUES AND NORMALIZATION OF TWO-DIRECTION MULTIWAVELETS AND THEIR DERIVATIVES

POINT VALUES AND NORMALIZATION OF TWO-DIRECTION MULTIWAVELETS AND THEIR DERIVATIVES November 1, 1 POINT VALUES AND NORMALIZATION OF TWO-DIRECTION MULTIWAVELETS AND THEIR DERIVATIVES FRITZ KEINERT AND SOON-GEOL KWON,1 Abstract Two-direction multiscaling functions φ and two-direction multiwavelets

More information

Properly forking formulas in Urysohn spaces

Properly forking formulas in Urysohn spaces Properly forking formulas in Urysohn spaces Gabriel Conant University of Notre Dame gconant@nd.edu March 4, 2016 Abstract In this informal note, we demonstrate the existence of forking and nondividing

More information

Vector Space Concepts

Vector Space Concepts Vector Space Concepts ECE 174 Introduction to Linear & Nonlinear Optimization Ken Kreutz-Delgado ECE Department, UC San Diego Ken Kreutz-Delgado (UC San Diego) ECE 174 Fall 2016 1 / 25 Vector Space Theory

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 7: Conditionally Positive Definite Functions Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590 Chapter

More information

9.2 Support Vector Machines 159

9.2 Support Vector Machines 159 9.2 Support Vector Machines 159 9.2.3 Kernel Methods We have all the tools together now to make an exciting step. Let us summarize our findings. We are interested in regularized estimation problems of

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

4 Power Series Solutions: Frobenius Method

4 Power Series Solutions: Frobenius Method 4 Power Series Solutions: Frobenius Method Now the ODE adventure takes us to series solutions for ODEs, a technique A & W, that is often viable, valuable and informative. These can be readily applied Sec.

More information

Meshfree Approximation Methods with MATLAB

Meshfree Approximation Methods with MATLAB Interdisciplinary Mathematical Sc Meshfree Approximation Methods with MATLAB Gregory E. Fasshauer Illinois Institute of Technology, USA Y f? World Scientific NEW JERSEY LONDON SINGAPORE BEIJING SHANGHAI

More information