
HIERARCHICAL MATRIX APPROXIMATION FOR KERNEL-BASED SCATTERED DATA INTERPOLATION

ARMIN ISKE, SABINE LE BORNE, AND MICHAEL WENDE

Abstract. Scattered data interpolation by radial kernel functions leads to linear equation systems with large, fully populated, ill-conditioned interpolation matrices. A successful iterative solution of such a system requires an efficient matrix-vector multiplication as well as an efficient preconditioner. While multipole approaches provide a fast matrix-vector multiplication, they avoid the explicit set-up of the system matrix, which hinders the construction of preconditioners such as approximate inverses or factorizations that typically require the explicit system matrix. In this paper, we propose an approach that allows both an efficient matrix-vector multiplication and an explicit matrix representation which can then be used to construct a preconditioner. In particular, the interpolation matrix will be represented in hierarchical matrix format, and several approaches for the blockwise low-rank approximation are proposed and compared, of both analytical nature (separable expansions) and algebraic nature (adaptive cross approximation).

Key words. Radial basis function, scattered data interpolation, hierarchical matrices, data-sparse approximation, adaptive cross approximation

AMS subject classifications. 65D05, 41A05, 41A58, 41A63, 65F30, 15A23

1. Introduction. Radial basis functions (RBF), or radial kernel functions, are powerful tools for meshfree interpolation from multivariate scattered data [5, 8, 9, 10, 14, 15]. To explain RBF interpolation only very briefly, assume we are given function values f_X = (f(x_1), ..., f(x_N))^T ∈ R^N sampled from a target function f : R^d → R, for d ≥ 1, at a set X = {x_1, ..., x_N} ⊂ R^d of pairwise distinct interpolation points. According to the RBF reconstruction scheme, an interpolant s : R^d → R to f is required to be of the form

(1)    s(x) = Σ_{j=1}^N c_j Φ(x, x_j) + p(x)    for x ∈ R^d,

where Φ is a radially symmetric kernel function, i.e., Φ(x, y) = φ(‖x − y‖_2) for a radial kernel φ : [0, ∞) → R, and where ‖·‖_2 is the Euclidean norm on R^d. Moreover, p in (1) is assumed to be a d-variate real-valued polynomial, whose degree m − 1 is determined by the order m ∈ N_0 of the kernel Φ. To be more precise, we require that Φ is conditionally positive definite of order m on R^d, Φ ∈ CPD(m), i.e., for any point set X = {x_1, ..., x_N} ⊂ R^d of size N = #X, the kernel matrix

(2)    B_{Φ,X} = (Φ(x_i, x_j))_{1 ≤ i, j ≤ N} ∈ R^{N,N}

is positive definite on the linear subspace

(3)    L_X = { c = (c_1, ..., c_N)^T ∈ R^N : Σ_{j=1}^N c_j p(x_j) = 0 for all p ∈ π^d_{m−1} } ⊂ R^N,

Affiliations: University of Hamburg, Bundesstr. 55, Hamburg, Germany (iske@math.uni-hamburg.de); Hamburg University of Technology, Am Schwarzenberg-Campus 3, Hamburg, Germany (leborne@tuhh.de, michael.wende@tuhh.de).

where

π^d_{m−1} := { p(x) = Σ_{|n|_1 < m} c_n x^n : n ∈ N_0^d, c_n ∈ R }

is the space of all d-variate polynomials of degree at most m − 1. For m = 0 we have L_X = R^N, in which case Φ is positive definite on R^d, Φ ∈ PD. We remark that for Φ ∈ CPD(m) the interpolation problem s|_X = f_X has a unique solution s of the form (1), under the constraint c = (c_1, ..., c_N)^T ∈ L_X and under the (rather weak) assumption that the interpolation points X are π^d_{m−1}-unisolvent, i.e., any polynomial in π^d_{m−1} can uniquely be reconstructed from its values on X. In fact, the interpolation problem s|_X = f_X leads us, for a fixed basis {p_1, ..., p_Q} of π^d_{m−1}, to the linear system

(4)    Σ_{j=1}^N c_j Φ(x_i, x_j) + Σ_{l=1}^Q b_l p_l(x_i) = f(x_i)   for i = 1, ..., N,
       Σ_{j=1}^N c_j p_i(x_j) = 0   for i = 1, ..., Q,

with unknown coefficients c = (c_1, ..., c_N)^T ∈ R^N for the major part of s in (1) and coefficients b = (b_1, ..., b_Q)^T ∈ R^Q for its polynomial part. We can rewrite the system (4) in matrix form as

(5)    [ B_{Φ,X}   P_X ] [ c ]   =   [ f_X ]
       [ P_X^T     0   ] [ b ]       [ 0   ],

where the polynomial matrix

P_X = (p_l(x_i))_{1 ≤ i ≤ N; 1 ≤ l ≤ Q} ∈ R^{N,Q}

is injective, due to the assumed π^d_{m−1}-unisolvence of the interpolation points X.

The goal of this paper is the approximation of the interpolation matrix B_{Φ,X} in (5) in a way that requires almost optimal storage, allows a matrix-vector multiplication of almost optimal complexity, and can be used for a subsequent (approximate) matrix factorization to construct a preconditioner. We will, however, discuss neither the condition number of the interpolation matrix (cf. [7] for a very recent account of this problem), nor the actual construction of a preconditioner, nor the system's iterative solution.

The remainder of this paper is organized as follows: In Section 2, we introduce the interpolation problem using radial kernels, the key ideas of hierarchical matrices, and the connection between these two topics. In Section 3, we describe (geometric and algebraic) hierarchical clustering algorithms for meshless methods which yield the hierarchical block structure of an H-matrix. In Sections 4 and 5, we show various approaches to fill the matrix blocks with low-rank approximations in factored form. These approaches include analytical methods based on separable expansions of the kernel function, either tailored to a particular kernel function or kernel-independent such as interpolation or Taylor expansion, as well as an algebraic approach by the name of adaptive cross approximation. Section 6 provides a numerical illustration and comparison of the discussed methods, and finally Section 7 concludes the paper with a summary and an outlook to future work.
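To make the linear system (5) concrete, the following small sketch (our own illustration, not part of the paper; all variable names and parameter values are assumptions) assembles the dense blocks B_{Φ,X} and P_X for a conditionally positive definite kernel of order m = 1, namely the multiquadric introduced in Subsection 2.1, and solves the resulting saddle-point system directly with NumPy.

import numpy as np

# Illustrative sketch only: assemble and solve the saddle-point system (5)
# for the multiquadric kernel (order m = 1, so the polynomial part is a
# constant and Q = 1). All names and parameter values are our own choices.
rng = np.random.default_rng(0)
N, d, alpha = 200, 2, 0.5
X = rng.random((N, d))                              # pairwise distinct data sites
f = np.sin(2.0 * np.pi * X[:, 0]) * X[:, 1]         # sampled target values f_X

R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
B = np.sqrt(1.0 + (R / alpha) ** 2)                 # B_{Phi,X} for the multiquadric
P = np.ones((N, 1))                                 # P_X for the basis {1} of the constants

A = np.block([[B, P], [P.T, np.zeros((1, 1))]])     # system matrix of (5)
rhs = np.concatenate([f, np.zeros(1)])
coeff = np.linalg.solve(A, rhs)
c, b = coeff[:N], coeff[N:]                         # kernel and polynomial coefficients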

Fig. 1. Generating functions φ_{GAU,α}, φ_{IMQ,α}, φ_{MQ,α}, φ_{TPS,α} for several values of the shape parameter α.

2. Preliminaries.

2.1. Scattered data interpolation using radial kernels. According to the formulation of the interpolation problem in (4) and the resulting matrix form in (5), we consider working with radially symmetric (multivariate) kernel functions Φ : R^d × R^d → R, (x, y) ↦ Φ(x, y) := φ(‖x − y‖_2), generated by a (univariate) function φ : R_+ → R. We will focus on four radial kernels φ = φ_{Z,α} with scaling parameter α ∈ R_+ and Z ∈ {GAU, IMQ, MQ, TPS},

φ_{GAU,α}(r) = exp(−(r/α)^2)            (Gaussian, GAU),
φ_{IMQ,α}(r) = (1 + (r/α)^2)^{−1/2}     (Inverse Multiquadric, IMQ),
φ_{MQ,α}(r)  = (1 + (r/α)^2)^{1/2}      (Multiquadric, MQ),
φ_{TPS,α}(r) = (r/α)^2 log(r/α)         (Thin Plate Spline, TPS),

as shown in Figure 1. The positive definite kernels φ_{GAU,α} ∈ PD and φ_{IMQ,α} ∈ PD generate symmetric positive definite interpolation matrices B := B_{Φ,X} in (2), regardless of the choice of (pairwise distinct) data sites x_i. The kernels φ_{MQ,α} ∈ CPD(1) and φ_{TPS,α} ∈ CPD(2), however, are conditionally positive definite of order m = 1 (for MQ) and m = 2 (for TPS), so that the symmetric kernel matrix B := B_{Φ,X} is positive definite on the linear subspace L_X in (3) for π^d_{m−1}-unisolvent interpolation points X. A common feature of the four resulting interpolation matrices B is that they are dense and, moreover, their condition numbers depend on the shape parameter α > 0, deteriorating with increasing α, which flattens the generating functions φ_{Z,α}, see Figure 1.
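As a small illustration (our own sketch, not the paper's code; all function names are our own choices), the four generating functions and the assembly of a dense kernel block (φ(‖x_i − y_j‖_2))_{i,j} can be written as follows.

import numpy as np

# Illustrative sketch: the four generating functions phi_{Z,alpha} of
# Subsection 2.1 and the assembly of a dense kernel block.
def phi_gau(r, alpha):                      # Gaussian, positive definite
    return np.exp(-(r / alpha) ** 2)

def phi_imq(r, alpha):                      # inverse multiquadric, positive definite
    return (1.0 + (r / alpha) ** 2) ** (-0.5)

def phi_mq(r, alpha):                       # multiquadric, conditionally PD of order 1
    return (1.0 + (r / alpha) ** 2) ** 0.5

def phi_tps(r, alpha):                      # thin plate spline, conditionally PD of order 2
    s = np.asarray(r, dtype=float) / alpha
    return np.where(s > 0.0, s * s * np.log(np.where(s > 0.0, s, 1.0)), 0.0)

def kernel_block(phi, X, Y, alpha):
    """Dense block (phi(||x_i - y_j||_2)) for point sets X (n x d), Y (m x d)."""
    R = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return phi(R, alpha)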

In the following subsection, we will introduce the technique of hierarchical (H-) matrices, which will offer a (highly accurate) approximation B_H ≈ B to the interpolation matrix B that features an (almost) linear complexity for storage and matrix-vector multiplication and can be used for a subsequent matrix factorization.

2.2. H-matrices. An H-matrix is, in short, a matrix whose block index set has been hierarchically partitioned and whose resulting matrix blocks are given in factored form whenever the rank of such a matrix block is (significantly) smaller than its size (i.e., the minimum of its numbers of rows and columns). We will now provide brief formal definitions of these concepts; for more detailed discussions we refer to the textbooks [2, 12] and the survey [13].

Let I, J ⊂ N be (finite) row/column index sets, and let B = (b_{ij})_{i ∈ I, j ∈ J} ∈ R^{I×J} = R^{#I,#J}. Here, #I denotes the number of elements (indices) of the set I. In the following definition, a partition of I is a set of nonempty, disjoint subsets whose union is the entire set I, whereas a hierarchical partitioning will be defined as a certain set of several nested partitions, and finally, an H-block partition is a partition consisting of subsets of a hierarchical partitioning. An actual construction of an H-block partition as defined here will follow in Section 3.

Definition 1 (hierarchical partitioning, H-block partition). The sequence P^l_I = {I^l_1, ..., I^l_{p_l}}, l = 0, ..., L, of partitions of an index set I is called a hierarchical partitioning of I of depth L if it holds that
  - P^0_I = {I} (root),
  - I^l_k = ⋃_{i ∈ I_sub} I^{l+1}_i for a subset I_sub ⊂ {1, ..., p_{l+1}} (hierarchical nestedness, i.e., an index set on level l is the union of index sets on the finer level l + 1).
Given hierarchical partitionings P^l_I, P^l_J of index sets I, J with respective depths L_I, L_J, the sequence P^l_{I×J} = {b^l_1, ..., b^l_{q_l}}, l = 0, ..., L, with blocks b^l_k ⊂ I × J is called a hierarchical block partitioning of I × J of depth L := min{L_I, L_J} based on the hierarchical (index) partitionings P^l_I, P^l_J if it holds that
  - P^l_{I×J} is a hierarchical partitioning of I × J,
  - for every b^l_k there exist σ ∈ P^l_I, τ ∈ P^l_J such that b^l_k = σ × τ.
Finally, a partition P^H_{I×J} = {b_1, ..., b_n} of I × J is called an H-block partition if b_i ∈ ⋃_{l ∈ {1,...,L}} P^l_{I×J} for all i = 1, ..., n.

In an H-matrix, some of its matrix blocks are represented in factored form, others as a full matrix. The distinction is based on an admissibility condition defined next.

Definition 2 (admissibility condition). Let I, J be two index sets, and let P_{I×J} be a partition of I × J. An admissibility condition is a function Adm : P_{I×J} → {true, false}. We call b ∈ P_{I×J} admissible if Adm(b) = true and inadmissible otherwise.

Definition 3 (H-matrix). Let P^H_{I×J} be an H-block partition of I × J, let Adm be an admissibility condition, and let k ∈ N. Then, the set of H-matrices (with respect to P^H_{I×J}, Adm and k) is defined as

H(P^H_{I×J}, k, Adm) := { B ∈ R^{I×J} : rank(B|_b) ≤ k for all admissible blocks b ∈ P^H_{I×J} }.

We call a matrix block B|_b admissible if b ∈ P^H_{I×J} is admissible. Admissible matrix blocks can be represented in factored form, i.e., B|_b = U V^T for b = σ × τ, U ∈ R^{#σ, k̃}, V ∈ R^{#τ, k̃}, for some k̃ ≤ k.
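To illustrate how the factored blocks of Definition 3 are used in practice, the following sketch (our own, with an assumed block-list data layout that is not taken from the paper) performs the matrix-vector product of an H-matrix stored as a collection of blocks, each either dense or given as a factored pair (U, V).

import numpy as np

# Sketch (our own data layout): an H-matrix stored as a list of blocks
# (sigma, tau, data), where data is either a dense array or a factored pair
# (U, V) with B|_b ~ U V^T, and the resulting matrix-vector product.
def h_matvec(blocks, x):
    y = np.zeros_like(x, dtype=float)
    for sigma, tau, data in blocks:            # sigma, tau: integer index arrays
        if isinstance(data, tuple):            # admissible block, data = (U, V)
            U, V = data
            y[sigma] += U @ (V.T @ x[tau])     # cost O(k (#sigma + #tau))
        else:                                  # inadmissible (small) block, dense
            y[sigma] += data @ x[tau]
    return y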

The savings in storage and cost for matrix-vector multiplication of an H-matrix B_H compared to a full matrix B originate in the factored low-rank representations of admissible matrix blocks. The actual amount of savings, i.e., the hopefully reduced, almost linear complexity estimate O(kN log N), depends on the particular H-block partition and admissibility condition used in the H-matrix construction. It is fairly straightforward to see that large, admissible matrix blocks are favorable when it comes to storage and computational complexity. The following subsection will motivate why the scattered data interpolation matrices of Subsection 2.1 fit into the framework of hierarchical matrices introduced in this subsection.

2.3. Basic connection: low rank approximation of matrix blocks. The explicit construction of an interpolation matrix in H-matrix format is simply an extension of the well-known multipole approach used for the fast matrix-vector multiplication with (scattered data) interpolation matrices B (see [1] and references therein for a more comprehensive account of this subject). If the underlying kernel function Φ(x, y) allows for a separable expansion

(6)    Φ(x, y) = Σ_{n=1}^k u_n(x) v_n(y)

for k ∈ N on a subset Q_x × Q_y ⊂ R^d × R^d, then the matrix block

(b_{ij})_{x_i ∈ Q_x, x_j ∈ Q_y} = (Φ(x_i, x_j))_{x_i ∈ Q_x, x_j ∈ Q_y} = (φ(‖x_i − x_j‖_2))_{x_i ∈ Q_x, x_j ∈ Q_y}

of the interpolation matrix B has rank at most k, independent of its number of rows (number of data sites x_i in Q_x) and columns (number of data sites x_j in Q_y). Typically, Φ(x, y) is not separable but can be approximated by a separable expansion

Φ(x, y) ≈ Φ_k(x, y) := Σ_{n=1}^k u_n(x) v_n(y),

which yields a low rank approximation

(b_{ij})_{x_i ∈ Q_x, x_j ∈ Q_y} ≈ (Φ_k(x_i, x_j))_{x_i ∈ Q_x, x_j ∈ Q_y},

once again with its rank bounded by k independently of the size of the block. The (entrywise) error between the exact matrix block and its low rank approximation is bounded by the approximation error ‖Φ − Φ_k‖_{∞, Q_x × Q_y}. If it is known that Φ(x, y) allows for a separable expansion, or rather, if error bounds of an expansion are known, this yields an upper bound for the rank of the respective matrix block. If the expansion is explicitly available, it can be used for the construction of the low rank factors of the matrix blocks, i.e.,

(7)    (Φ_k(x_i, x_j))_{x_i ∈ Q_x, x_j ∈ Q_y} = U V^T

with U = (u_{ij})_{x_i ∈ Q_x, j ∈ {1,...,k}}, u_{ij} = u_j(x_i), and V = (v_{ij})_{x_i ∈ Q_y, j ∈ {1,...,k}}, v_{ij} = v_j(x_i).
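The following sketch (again our own illustration, not the authors' code; all names are assumptions) builds the low-rank factors U, V of (7) from generic expansion functions u_n, v_n, and demonstrates it for a truncated separable expansion of the unscaled Gaussian in one dimension.

import math
import numpy as np

# Sketch (assumed names): build the low-rank factors U, V of (7) from
# expansion functions u_n, v_n evaluated at the data sites of a block.
def low_rank_block(u_funcs, v_funcs, Xsigma, Xtau):
    U = np.column_stack([u(Xsigma) for u in u_funcs])    # (#sigma) x k
    V = np.column_stack([v(Xtau) for v in v_funcs])      # (#tau)  x k
    return U, V                                          # block ~ U @ V.T

# Example in d = 1 for the unscaled Gaussian: exp(-(x-y)^2) =
# exp(-x^2) exp(-y^2) exp(2xy), truncating the series of exp(2xy) after k terms.
k = 8
u_funcs = [lambda x, n=n: np.exp(-x ** 2) * (2.0 * x) ** n / math.factorial(n) for n in range(k)]
v_funcs = [lambda y, n=n: np.exp(-y ** 2) * y ** n for n in range(k)]

Xs = np.linspace(0.0, 0.4, 50)          # data sites in Q_x
Xt = np.linspace(2.0, 2.5, 60)          # data sites in Q_y, well separated from Q_x
U, V = low_rank_block(u_funcs, v_funcs, Xs, Xt)
exact = np.exp(-(Xs[:, None] - Xt[None, :]) ** 2)
err = np.max(np.abs(exact - U @ V.T))   # bounded by the expansion error on Q_x x Q_y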

The kernel functions Φ(x, y) = φ(‖x − y‖_2) which we introduced in Subsection 2.1 have in common that they (unfortunately) are not separable, but they allow for exponentially convergent separable approximations as long as x and y are well separated, i.e., their distance r = ‖x − y‖_2 is large enough, as formalized in the following definition.

Definition 4 (standard admissibility condition). Let {x_i = (x_{i,1}, ..., x_{i,d}) ∈ R^d : i ∈ I} be a set of data sites for an index set I := {1, ..., N} (N ∈ N). For a subset σ ⊂ I, we define the bounding box Q_σ as the smallest axis-parallel box containing all data sites {x_i : i ∈ σ}, i.e.,

(8)    Q_σ := [a_1, b_1] × ... × [a_d, b_d]   with   a_j := min_{i ∈ σ} x_{i,j},  b_j := max_{i ∈ σ} x_{i,j}   (j ∈ {1, ..., d}).

Then, given a partition P_{I×I} and an η ∈ R_+, the standard admissibility condition is defined by

(9)    Adm : P_{I×I} → {true, false},   Adm(σ × τ) = true  if  min{diam(Q_σ), diam(Q_τ)} ≤ η dist(Q_σ, Q_τ),  and false otherwise,

where the diameter and distance of bounding boxes are computed with respect to the Euclidean norm.

The following Section 3 introduces a (hierarchical clustering) algorithm to construct an H-block partition suitable for an H-matrix approximation of the interpolation matrix B. Details on the construction and convergence of separable expansions on admissible matrix blocks of this H-block partition will follow in Section 4.

3. Construction of an H-block partition for scattered point sets. The hierarchical partitioning of the index set I (see Definition 1) associated with the data sites {x_i} may be constructed through recursive (geometric) bisection. Beginning with the root I^0_0 := I, an index set I^l_k (containing more than n_min ∈ N indices, i.e., #I^l_k > n_min) is partitioned into sets I^{l+1}_{2k}, I^{l+1}_{2k+1} as follows:
  1. Determine the spatial direction along which the index set I^l_k will be divided: r := argmax_{j ∈ {1,...,d}} {b_j − a_j}, where a_j, b_j define the bounding box Q_{I^l_k} of I^l_k in (8).
  2. Set mid_r := (a_r + b_r)/2.
  3. Set I^{l+1}_{2k} := {i ∈ I^l_k : x_{i,r} < mid_r}, I^{l+1}_{2k+1} := I^l_k \ I^{l+1}_{2k}.
The index sets I^l_k are commonly associated with the nodes of a binary tree with root I^0_0 = I, children I^1_0, I^1_1, grandchildren I^2_0, I^2_1, I^2_2, I^2_3, and so on. The resulting hierarchical partitioning of I and the (standard) admissibility condition (see Definition 4) are used in the following construction of a hierarchical block partitioning. Beginning with the root I^0_0 × I^0_0, only blocks I^l_k × I^l_{k'} that are inadmissible and not smaller than n_min are recursively partitioned, i.e., if

Adm(I^l_k × I^l_{k'}) = false   and   min{#I^l_k, #I^l_{k'}} > n_min,

then I^l_k × I^l_{k'} is partitioned into {I^{l+1}_{2k} × I^{l+1}_{2k'}, I^{l+1}_{2k+1} × I^{l+1}_{2k'}, I^{l+1}_{2k} × I^{l+1}_{2k'+1}, I^{l+1}_{2k+1} × I^{l+1}_{2k'+1}}. A small code sketch of this clustering and of the blockwise refinement is given below.
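The following compact sketch of the recursive bisection and of the blockwise refinement just described is our own code; the tree layout, the function names and the choice of n_min are assumptions, not the authors' implementation.

import numpy as np

# Sketch of the geometric bisection clustering of Section 3 and of the
# standard admissibility condition (9).
def bounding_box(X, idx):
    pts = X[idx]
    return pts.min(axis=0), pts.max(axis=0)              # (a, b) of Q_sigma

def split(X, idx, n_min):
    """Recursive bisection along the longest box edge; returns a nested tree."""
    if len(idx) <= n_min:
        return {"idx": idx, "children": []}
    a, b = bounding_box(X, idx)
    r = int(np.argmax(b - a))                             # direction of largest extent
    mid = 0.5 * (a[r] + b[r])
    left = idx[X[idx, r] < mid]
    right = idx[X[idx, r] >= mid]
    return {"idx": idx, "children": [split(X, left, n_min), split(X, right, n_min)]}

def admissible(X, sigma, tau, eta):
    (a1, b1), (a2, b2) = bounding_box(X, sigma), bounding_box(X, tau)
    diam = min(np.linalg.norm(b1 - a1), np.linalg.norm(b2 - a2))
    dist = np.linalg.norm(np.maximum(0.0, np.maximum(a1 - b2, a2 - b1)))
    return diam <= eta * dist                             # condition (9)

def block_partition(X, s, t, eta, n_min, blocks):
    """Recursively refine inadmissible blocks sigma x tau (cf. Section 3)."""
    sigma, tau = s["idx"], t["idx"]
    adm = admissible(X, sigma, tau, eta)
    if adm or min(len(sigma), len(tau)) <= n_min or not s["children"] or not t["children"]:
        blocks.append((sigma, tau, adm))                  # leaf block: admissible or small
        return
    for sc in s["children"]:
        for tc in t["children"]:
            block_partition(X, sc, tc, eta, n_min, blocks)

# usage: tree = split(X, np.arange(len(X)), n_min=32)
#        blocks = []; block_partition(X, tree, tree, eta=1.0, n_min=32, blocks=blocks)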

Finally, an H-block partition P_{I×I} is obtained as the set of all blocks I^l_k × I^l_{k'} that have not been further partitioned. By construction, these blocks must either be admissible or small (with respect to n_min). In order to be able to associate the index blocks in the H-block partition P_{I×I} with a matrix block of consecutive rows and columns, we define a permutation ind2dof : I → I (index-to-dof) that satisfies ind2dof(i) < ind2dof(j) for any two indices i, j that have been assigned to disjoint index sets i ∈ I^{l+1}_{2k} and j ∈ I^{l+1}_{2k+1} in step 3 of the index partitioning described above. Such a permutation yields a relabeling of data sites and a resulting row and column permutation of the interpolation matrix.

4. Low rank approximation through separable expansions. In this part we will discuss three analytical methods to construct separable expansions Φ_k of Φ: (1) polynomial interpolation (Section 4.1), (2) Taylor expansion (Section 4.2), and (3) a special expansion designed for multiquadric basis functions (Section 4.5). In all cases, Φ(x, y) will be considered as a function of x ∈ Q_x for a fixed y ∈ Q_y. The length k of the expansions is determined by a parameter m ∈ N_0 such that k = 0 when m = 0. In d > 1 dimensions the rank k increases rapidly with m.

4.1. Polynomial interpolation. Given a multiindex n = (n_1, ..., n_d) ∈ N_0^d and x ∈ R^d, we define

|n|_1 = n_1 + ... + n_d,   |n|_∞ = max{n_1, ..., n_d},   n! = n_1! ··· n_d!,   x^n = x_1^{n_1} ··· x_d^{n_d},   ∂^n Φ(x) = ∂^{n_1}_{x_1} ··· ∂^{n_d}_{x_d} Φ(x).

The space of polynomials of degree less than m ∈ N_0 per spatial dimension,

Π^d_{m−1} := { p(x) = Σ_{|n|_∞ < m} c_n x^n : n ∈ N_0^d, c_n ∈ R },

has dimension dim(Π^d_{m−1}) = m^d. We will use a Lagrange basis of Π^d_{m−1} consisting of polynomials L_n constructed as follows: Let x̃_n = (x̃_{n_1,1}, ..., x̃_{n_d,d}) ∈ R^d, n = (n_1, ..., n_d) ∈ {1, ..., m}^d, be m^d points on a tensor grid that is obtained through transformation of the zeros of the Chebyshev polynomial T_m of degree m to the intervals [a_i, b_i] of the bounding box Q_σ = [a_1, b_1] × ... × [a_d, b_d] in (8), i.e.,

x̃_{i,j} = (a_i + b_i)/2 + ((b_i − a_i)/2) cos((2j − 1)π/(2m)),   i ∈ {1, ..., d}, j ∈ {1, ..., m}.

The univariate Lagrange polynomials in [a_i, b_i] are given by

L_{i,j}(x) = Π_{l ∈ {1,...,m}\{j}} (x − x̃_{i,l}) / (x̃_{i,j} − x̃_{i,l}),   i ∈ {1, ..., d}, j ∈ {1, ..., m},

and their multivariate counterparts are obtained by multiplication, L_n(x) = L_{1,n_1}(x_1) ··· L_{d,n_d}(x_d), and satisfy L_n(x̃_l) = δ_{n,l} for all l ∈ {1, ..., m}^d. A separable approximation Φ_k(x, y) of Φ(x, y) is given by the polynomial in x which interpolates Φ(x, y) at the tensor grid points x̃_n, i.e., Φ_k(x̃_n, y) = Φ(x̃_n, y)

8 8 A. Iske, S. Le Borne, M. Wende for all n < m. The interpolating polynomial Φ k (x, y) can be represented using the Lagrange basis, (10) Φ k (x, y) = L n (x) Φ( x n, y), the representation rank is k = m d. n <m }{{} u n(x) } {{ } v n(y) 4.2. Taylor expansion. A Taylor expansion of Φ(x, y) with respect to x about a point x 0 Q σ yields the separable expansion (11) Φ k (x, y) = n 1<m (x x 0 ) n x n Φ(x 0, y). } n! {{}}{{} v u n(y) n(x) We choose the center of the expansion x 0 to be the midpoint of the cuboid Q σ. The rank of such an expansion is k = #{n N d 0 n 1 < m} = ( ) m+d 1 d. In order to facilitate the implementation of a separable approximation through Taylor expansion, we now derive recursion formulas for the multivariate derivatives x n Φ(x, y), n N d 0. In order to simplify notation, we define z := x y as well as Φ(z) := φ( z 2 ) and write n Φ(z) instead of x n Φ(x, y) in view of y being constant and x n Φ(x, y) = x n φ( x y 2 ) = z n φ( z 2 ) z:=x y. The C (0, )-functions ψ GAU,α (r) = e r α 2, ψ IMQ,α (r) = (1 + r ) 1 2 α 2, (12) ψ MQ,α (r) = (1 + r α 2 ) 1 2, ψ TPS,α (r) = r 2α 2 log r α 2 all satisfy ψ Z,α (r 2 ) = φ Z,α (r) for Z {GAU, IMQ, MQ, TPS}. In fact, according to Schoenberg (see [15, Theorem 7.13]) there exists for every positive definite radial function φ PD an associated completely monotone function ψ C (0, ), ψ 0, satisfying ψ(r 2 ) = φ(r). We remark that Schoenberg s characterization has been extended to conditionally positive definite radial functions φ CPD(m) by Micchelli (see [15, Theorem 8.19]). Based on ψ, we now define Ψ n (z) := ψ (n) ( z 2 2), n N 0, where ψ (n) = n ψ denotes the n-th derivative of ψ : R + R. Theorem 5. Let n N d 0 be decomposed into n = 2l + m with l = n/2 N d 0 and m {0, 1} d. Then the multivariate derivative n Φ has a representation in the form (13) n Φ(z) = z n n d z d Φ(z) = a (n1) 1... a (n d) d z 2+m Ψ l m 1 (z) l with coefficients {a (n) n N 0, {0,..., n/2 }}. These can be arranged in rows n = 0, 1,... and computed recursively, starting from a (0) 0 = 1. Row 2l + 1 can be computed from row 2l through (14) a (2l+1) a (2l+1) l = 2a (2l) = 2a (2l) l + 2( + 1)a (2l) +1, = 0,..., l 1,

9 Hierarchical matrix approximation for kernel-based scattered data interpolation 9 and row 2l can be computed from row 2l 1 through (15) a (2l) l a (2l) = 2a (2l 1) l 1, = 2a (2l 1) 1 + (2 + 1)a (2l 1), = l 1,..., 1, a (2l) 0 = a (2l 1) 0. Proof. We prove the representation (13) beginning with the observation that Φ = Ψ 0 and the relationship (16) zi Ψ n (z) = 2z i Ψ n+1 (z). Repeated application of (16) will eventually result in (13). In order to derive explicit recursion formulas, we will begin with the one-dimensional case where z = z R is a scalar. In this case, the first few derivatives are computed to be (17) 0 Φ(z) = Ψ 0 (z), 1 Φ(z) = 2 z Ψ 1 (z), 2 Φ(z) = 2 Ψ 1 (z) + 4 z 2 Ψ 2 (z), 3 Φ(z) = 12 z Ψ 2 (z) + 8 z 3 Ψ 3 (z), 4 Φ(z) = 12 Ψ 2 (z) + 48 z 2 Ψ 3 (z) + 16 z 4 Ψ 4 (z). In order to derive a general formula for all n Φ(z), we distinguish between even and odd n and write n as n = 2l+m with l N 0 and m {0, 1}. Denoting the coefficients in the right hand side of n Φ(z) in (17) by a (n) 0,..., a(n) l suggests that there holds l (18a) 2l Φ(z) = a (2l) z 2 Ψ l+ (z) for n = 2l, (18b) and 2l+1 Φ(z) = l a (2l+1) z 2+1 Ψ l++1 (z) for n = 2l + 1. The difference between (18a) and (18b) is minor and merely consists of adding an m {0, 1} in appropriate places, i.e., using n = 2l + m with m {0, 1}, (18a) and (18b) are combined into (19) n Φ(z) = l a (n) z 2+m Ψ l++m (z) for n N 0. We have provided the even and odd cases separately because this allows one to confirm (18) by checking that differentiating (18a) once produces (18b) and vice versa (with l replaced by l + 1). In this way we also see how the coefficients appearing in (17) can be computed one row at time as stated in (14), (15). For the multivariate form of (13), i.e., for z R d, we use (19) for every coordinate and obtain n Φ(z) = n1 z 1 = l 1 1=0 = l n d z d Φ(z) l d a (n1) d =0 1 a (n d) d z 21+m1 1 a (n d) z 2+m Ψ l m 1 (z). a (n1) d 1 z 2 d+m d Ψ l1+ 1+m 1+ +l d + d +m d (z) d

10 10 A. Iske, S. Le Borne, M. Wende Remark 6. The monomial factors appearing in (19) have the following connection to the Hermite polynomials which satisfy the recursion formulas H n (z) = ( 1) n e z2 n e z2 H 1 (z) := 0, H 0 (z) = 1, H n+1 (z) = 2zH n (z) 2nH n 1 (z), H n(z) = 2nH n 1 (z). For the Gaussian Φ(z) = e z2, there holds Ψ l++m (z) = ( 1) l++m e z2 so that (19) turns into n Φ(z) = l z 2+m ( 1) l++m e z2. Multiplication of both sides with ( 1) n e z2 leads to H n (z) = ( 1) n a(n) l a (n) z 2+m ( 1) l++m = l ( 1) l+ a (n) z 2+m. Hence, besides the sign, the coefficients a (n) coincide with the coefficients of the Hermite polynomials H n which can be computed through the above recursion formulas Asymptotic smoothness. A convergence result for the separable approximations based on polynomial interpolation and Taylor expansion (Subsection 4.4) follows if the kernel Φ is asymptotically smooth. In [4], a similar is obtained for the ACA approach that will be introduced in Section 5. Definition 7. The kernel Φ : R d R is called asymptotically smooth if there exist constants c, β, γ, s R such that for all n N d 0 n Φ(z) c n β 1 γ n 1 n! z n 1 s 2. According to [12, Theorem E.8], the kernel Φ(z) = φ( z 2 ) is asymptotically smooth if the corresponding radial kernel function φ is asymptotically smooth. In principle, this provides a way to check the asymptotic smoothness of Φ by computing derivatives of φ analytically. This leads to quite lengthy expressions, however. In the following, we will show that φ(r) = ψ(r 2 ) is asymptotically smooth if ψ is asymptotically smooth. This will enable us to confirm for Z {GAU, IMQ, MQ, TPS} that φ Z is asymptotically smooth by computing derivatives of ψ Z, which is rather easy. Theorem 8. The asymptotic smoothness of ψ : R + R implies the asymptotic smoothness of φ(r) = ψ(r 2 ). If c 2, β 2, γ 2, s 2 R are the constants in the asymptotic smoothness inequality for ψ, then φ satisfies the asymptotic smoothness inequality with constants c = c 2, β = β 2, γ = 4γ 2, s = 2s 2. Proof. We begin by scaling the coefficients a (n) set of coefficients â (n) in (13) to produce a transformed = (l + m + )! 2 (l+m+) a (n), (n = 2l + m, m {0, 1}). can still be computed recur- The scaling factors were chosen such that these new â (n) sively. Instead of (14) we now have (20) â (2l+1) =(l )â (2l) â (2l+1) l = (2l + 1)â (2l) l +2( + 1)â (2l) +1, = 0,..., l 1,

11 Hierarchical matrix approximation for kernel-based scattered data interpolation 11 and (15) translates into (21) â (2l) l = (2l)â (2l 1) â (2l) l 1, =(l + )â (2l 1) 1 â (2l) 0 = â (2l 1) 0. +(2 + 1)â (2l 1), = l 1,..., 1, According to (20) and (21), the transformed coefficients are still integers. Based on these formulas we are now going to derive an upper bound for (22) l a (n) (l + m + )! = l 2 l+m+ â (n) 2 n l â (n). In the sum to the far right of (22) each â (n) by a factor n. Since in this way each â (n 1) is the sum of one or two â (n 1) weighted is used at most twice, we can replace the sum over row n with the sum over row n 1 multiplied with 2n. Starting from (22) we apply this step n times until we arrive at row 0 where a (0) 0 = 1: (23) l= n 2 a (n) (l + m + )! 2 n 2n n 1 2 â (n 1) 2 n (2n)(2(n 1)) n 2 2 â (n 2) 2 2n n! Using the derivative from (13) and the inequality (23), we calculate φ (n) (r) l l c 2 n β2 γ n 2 a (n) r 2+m ψ (l+m+) (r 2 ) a (n) r 2+m c 2 (l + m + ) β2 γ l+m+ 2 (l + m + )! r 2(l+m+) 2s2 l c 2 n β2 γ n 2 2 2n n! r n 2s2 cn β γ n n! r n s, a (n) (l + m + )! r 2l m 2s2 implying that φ is asymptotically smooth with the stated constants.

12 12 A. Iske, S. Le Borne, M. Wende The derivatives of the functions ψ Z := ψ Z,α=1 in (12) are given by ψ (k) GAU (r) = ( 1)k exp( r), k 1 ψ (k) IMQ (r) = ( 0.5 ) (1 + r) 0.5 k, ψ (1) TPS (r) = 0.5 log(r) + 0.5, ψ(k) ψ (k) MQ (r) = TPS (r) = k 1 (0.5 ) (1 + r) 0.5 k. k ( ) r (k 1), k 2, All of these functions are asymptotically smooth with constants listed in Table 1. For α > 0 we can choose β 2, γ 2 and s 2 in the same way when c 2 is replaced with α s c 2. This is because ψ (n) α (r) = 1 α n ψ(n) (r/α) 1 α n c 2 n β2 γ n 2 (r/α) n s2 (α s c 2 ) n β2 γ n 2 r n s2. =1 Table 1 Constants of asymptotic smoothness. c 2 β 2 γ 2 s 2 ψ G ψ I ψ T ψ M Convergence of polynomial interpolation and Taylor expansion. Error estimates for polynomial interpolation and Taylor expansion can be expressed involving derivatives of Φ. Convergence Φ k (x, y) Φ(x, y) can be proved if Φ is asymptotically smooth and if the standard admissibility condition (Definition 4) is satisfied with η chosen sufficiently small. In this case, the speed of convergence will be exponential. Definition 9 (Exponential convergence [12, Definiton 4.5]). The separable expansion (6) is called exponentially convergent if there are constants c 0 0, c 1 > 0 and β > 0 such that for all x Q σ, y Q τ. Φ(x, y) Φ k (x, y) c 0 exp( c 1 k β ) Remark 10. The constant β will often be β = 1/d in R d. According to [12, Theorems 4.17,4.22] polynomial interpolation and Taylor expansion require η < 1/γ where γ comes from Definition 7.

4.5. Specialized expansions. Polynomial interpolation and Taylor expansion appear in the H-matrix literature as methods which can be used whenever the kernel is asymptotically smooth. Besides these generic methods, there exist separable expansions tailored to particular kernels. One example of a radial kernel function admitting such an expansion is the thin plate spline, as described in [15, Proposition 15.2]. These expansions are often used in conjunction with fast multipole methods but can also be used for H-matrices. We will compare the generic approximation methods to one of these specialized expansions, developed for generalized multiquadrics (GMQ) in [6]. The radial kernel functions considered here are

φ_{GMQ,α}(r) = (1 + (r/α)^2)^{p/2}   for α > 0 and odd p ∈ Z.

Note that for p = ±1 this yields the kernels φ_{MQ,α} and φ_{IMQ,α}. The separable approximation detailed in [6] is of the form

(24)    Φ(x, y) ≈ Φ_k(x, y) = Σ_{l=0}^{m−1} Σ_{|n|_1 = l} u^{(l)}_n(x) v^{(l)}_n(y),

where m ∈ N_0 controls the length of the expansion. The part depending on x is

u^{(l)}_n(x) = x^n / (α^p ‖x‖_2^{2l−p}).

For the part depending on y there holds a recurrence which uses the coefficients

a_l = (2l − p − 2)/l,   b_l = (p − l + 2)/l.

Denoting by e_i the i-th unit vector in R^d, the computation of v^{(l)}_n(y) then proceeds as

(25)    v^{(0)}_0(y) = 1,   v^{(1)}_{e_i}(y) = −p y_i,
        v^{(l)}_n(y) = a_l Σ_{i=1}^d y_i v^{(l−1)}_{n−e_i}(y) + b_l (‖y‖_2^2 + α^2) Σ_{i=1}^d v^{(l−2)}_{n−2e_i}(y),   l ≥ 2.

In the notation v^{(l)}_n from [6] it is always the case that l = |n|_1. The superscript is not really necessary, but it is helpful in understanding the definition since it emphasizes the level-wise computation in (25). The number of terms in (24) depends on m in the same way as for the Taylor expansion, i.e., the representation rank is k = binom(m + d − 1, d). According to [6, Lemma 3.1], the convergence of Φ_k(x, y) to Φ(x, y) now requires

(26)    ‖x‖_2^2 > ‖y‖_2^2 + α^2,

which is also termed as x lying in the farfield of y. It is important to note that we can use a shifted version of (26) where x and y are replaced by x − z and y − z for some z ∈ R^d. Convergence will then be guaranteed if the same shift is applied to (24) as well. This is possible since we have the translation invariance Φ(x, y) = Φ(x − z, y − z). In the context of H-matrices we have to derive separable expansions not for individual pairs x and y but for all points from the cuboids Q_σ and Q_τ. Therefore, we need

to formulate the admissibility condition in terms of Q_σ and Q_τ. The admissibility of Q_σ and Q_τ should imply that all x ∈ Q_σ are in the farfield of all y ∈ Q_τ. This is the case if we use

(27)    sqrt((diam(Q_τ)/2)^2 + α^2) < η dist(Q_σ, y_0)

as an admissibility condition, where the shift y_0 is taken as the centre of Q_τ and η ≤ 1 is used to sharpen the condition to obtain faster convergence. In the right hand side of (27) we can also replace dist(Q_σ, y_0) by the smaller value dist(Q_σ, Q_τ). Referring again to [6, Lemma 3.1], the error of the expansion (24) satisfies

(28)    |Φ(x, y) − Φ_k(x, y)| ≤ C η^m              for p > 0,
        |Φ(x, y) − Φ_k(x, y)| ≤ C m^{|p|+1} η^m    for p < 0,

where C is a constant depending on η, α, diam(Q_τ), diam(Q_σ) and p, but not on m. The error bound (28) corresponds to an exponential speed of convergence.

5. Low rank approximation through cross approximation. Let A ∈ R^{m,n}. The cross approximation algorithm may be viewed as a prematurely aborted Gaussian elimination; it computes a factored approximation A ≈ UV^T using thin rectangular matrices U, V. In particular, the entire matrix A is not required as input but only access to individual entries a_{ij} (typically O(k) rows and columns of A, where k is the number of columns in U, V). The computation of a_{ij} typically consists of the evaluation of the kernel function itself and does not require any separable expansions, as was the case in the construction of the low rank approximations described in Section 4.

Algorithm 1 Cross approximation
Input: method to access individual matrix entries a_{ij} of A ∈ R^{m,n}
Output: factors U = (u_1 ... u_k) ∈ R^{m,k}, V = (v_1 ... v_k) ∈ R^{n,k} that yield A ≈ UV^T
1: l = 1.
2: while stopping criterion not satisfied do
3:    Choose an (arbitrary) pivot element a_{i_l j_l} ≠ 0.
4:    Compute the i_l-th row a_{i_l,:} and the j_l-th column a_{:,j_l} of A.
5:    Set u_l := a_{:,j_l} and v_l := (a_{i_l j_l})^{−1} a_{i_l,:}^T.
6:    Compute A ← A − u_l v_l^T.
7:    l = l + 1.
8: end while

The generic version of this procedure is provided as Algorithm 1. Line 6 might be considered its most important one: it can be shown that subtraction of the outer product u_l v_l^T reduces the rank of A by one. In a practical implementation, however, this update is not computed for the entire matrix A (which would imply complexity O(nm)). Instead, all updates are only computed for the row and column selected in line 4, i.e.,

a_{i_l,:} ← a_{i_l,:} − u_{i_l,1} v_1^T − ... − u_{i_l,l} v_l^T = a_{i_l,:} − u_{i_l,1:l} (v_1 ... v_l)^T,
a_{:,j_l} ← a_{:,j_l} − u_1 v_{j_l,1} − ... − u_l v_{j_l,l} = a_{:,j_l} − (u_1 ... u_l) (v_{j_l,1:l})^T.

This reduces the algorithm's complexity to O(k^2(n + m)).
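The following sketch (our own, not the authors' implementation) realizes Algorithm 1 with the updates restricted to the selected rows and columns, using the partial pivoting and the relative stopping criterion described in the following two subsections; entry(i, j) stands for the on-demand kernel evaluation a_{ij}, and all names and default values are assumptions.

import numpy as np

# Sketch of Algorithm 1 with partial pivoting (Subsection 5.1) and the
# relative stopping criterion (30) (Subsection 5.2).
def aca(entry, m, n, eps=1e-6, k_max=50, seed=None):
    rng = np.random.default_rng(seed)
    U, V = [], []
    j = int(rng.integers(n))                          # first column index chosen at random
    ref = None
    for _ in range(k_max):
        u = np.array([entry(i, j) for i in range(m)], dtype=float)
        for ul, vl in zip(U, V):                      # subtract previous rank-1 terms (column)
            u -= ul * vl[j]
        i = int(np.argmax(np.abs(u)))                 # pivot row i_l
        if u[i] == 0.0:
            break                                     # residual column vanishes
        v = np.array([entry(i, jj) for jj in range(n)], dtype=float) / u[i]
        for ul, vl in zip(U, V):                      # subtract previous rank-1 terms (row)
            v -= vl * (ul[i] / u[i])
        U.append(u)
        V.append(v)
        norm = np.linalg.norm(u) * np.linalg.norm(v)
        ref = norm if ref is None else ref
        if norm <= eps * ref:                         # relative criterion (30)
            break
        j = int(np.argmax(np.abs(v)))                 # next pivot column j_{l+1}
    if not U:
        return np.zeros((m, 0)), np.zeros((n, 0))
    return np.column_stack(U), np.column_stack(V)     # A ~ U @ V.T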

The computed factorisation A ≈ UV^T will depend on the selection of the pivot element (line 3) as well as the stopping criterion (line 2). These two topics will be discussed in the following two subsections.

5.1. Pivot selection. The pivot element in line 3 must be nonzero and preferably not even close to zero. In our implementation we select the first column index j_1 ∈ {1, ..., n} at random, which enables us to compute u_1. All the remaining row and column indices are determined by

i_l = argmax_{i=1,...,m} |(u_l)_i|,  l = 1, ..., k,   and   j_l = argmax_{j=1,...,n} |(v_{l−1})_j|,  l = 2, ..., k.

This strategy from [12] always uses the newest available vector u_l or v_l to compute the next pivot. Since in this paper we consider interpolation matrices where each matrix row and column is associated with a particular data site, one could use this information to select pivots a priori and without the need to determine maximum row or column entries. In particular, one might interpret the selected pivots as a coarser set of data sites, and in order to retain as much information as possible, the data sites corresponding to the pivot rows and columns should be selected along the lines of typical coarsening algorithms. Such a strategy is also supported by the following representation of the factorization computed in Algorithm 1: If I_r = {i_1, ..., i_k} and I_c = {j_1, ..., j_k} collect the respective row and column indices of the rows and columns selected as pivots in line 3 of Algorithm 1, then the computed matrix factors U, V satisfy

(29)    UV^T = A_{{1,...,m} × I_c} (A_{I_r × I_c})^{−1} A_{I_r × {1,...,n}},

with the three factors belonging to R^{m,k}, R^{k,k}, and R^{k,n}, respectively. Hence, pivots should be selected attempting to obtain a well-conditioned A_{I_r × I_c}. The following theorem ensures that there exists a quasioptimal rank-k approximant in the form of (29).

Theorem 11 ([11]). Let A ∈ R^{m,n} and k ∈ N with k ≤ rank(A). Let subsets I_r ⊂ {1, ..., m}, I_c ⊂ {1, ..., n} with #I_r = #I_c = k be chosen such that

|det A_{I_r × I_c}| = max{ |det A_{I'_r × I'_c}| : I'_r ⊂ {1, ..., m}, I'_c ⊂ {1, ..., n}, #I'_r = #I'_c = k }.

Using the maximum matrix entry norm ‖A‖_C := max{ |a_{ij}| : 1 ≤ i ≤ m, 1 ≤ j ≤ n }, it holds that (29) is quasioptimal; in particular,

‖A − A_{{1,...,m} × I_c} (A_{I_r × I_c})^{−1} A_{I_r × {1,...,n}}‖_C ≤ (k + 1) ‖A − R_best‖_2,

where R_best is the best rank-k approximation to A (with respect to ‖·‖_2).

5.2. Stopping criterion. In line 2 of Algorithm 1 the number of steps k to perform is determined by a stopping criterion. The simplest choice would be to decide on the rank k in advance, which can be useful if we know that this is sufficient to achieve a desired accuracy or if we want to limit the amount of storage. Cross approximation with a fixed rank k will be referred to as CA from here on. Strict error estimates for A ≈ UV^T with ACA-computed U, V require assumptions on the underlying kernel function generating the matrix data of A. Here we review (and later use in the numerical tests) a heuristic stopping criterion from the literature (e.g., [12, Section 9.4.3]). It is based on the following assumptions.

Assumption 12. Let R_l := U_l V_l^T denote the ACA approximation of a matrix A after l steps of Algorithm 1. We assume that
  1. ‖A‖_2 ≈ ‖R_1‖_2,
  2. ‖A − R_l‖_2 ≤ ‖A − R_{l−1}‖_2,
  3. ‖A − R_{l−1}‖_2 ≈ ‖R_l − R_{l−1}‖_2.

If the conditions of Assumption 12 are satisfied, we conclude that (again using the notation R_l := U_l V_l^T)

‖A − R_l‖_2 ≤ ‖A − R_{l−1}‖_2 ≈ ‖R_l − R_{l−1}‖_2 = ‖u_l v_l^T‖_2 = ‖u_l‖_2 ‖v_l‖_2.

Given a tolerance ɛ, this yields the heuristic stopping criteria

(30)    ‖u_l‖_2 ‖v_l‖_2 ≤ ɛ   (absolute),      ‖u_l‖_2 ‖v_l‖_2 / (‖u_1‖_2 ‖v_1‖_2) ≤ ɛ   (relative).

We refer to Algorithm 1 implemented with a stopping criterion as in (30) as adaptive cross approximation (ACA).

5.3. H-matrix approximation with constraints. If the generic ACA Algorithm 1 (or, as a matter of fact, a separable approximation of Section 4) is applied to obtain factorizations of all admissible blocks of an H-matrix, symmetry might be lost, i.e., a symmetric interpolation matrix B will be approximated by a non-symmetric H-matrix B_H. However, this problem is easily removed by computing only the upper triangular part of an H-matrix B_H and using its transpose for the respective blocks in the lower triangular part. While preservation of symmetry poses no problem, the situation is different for positive (semi-)definiteness. If B is positive definite, then B_H will eventually be positive definite, since B_H → B as ɛ → 0 when B_H is constructed using an error tolerance ɛ for the low rank approximations in its admissible blocks. However, a small enough ɛ might be impractical for the application at hand, especially if B's smallest eigenvalue is close to zero. If eigenvectors v of small eigenvalues of B are known, those could be used to construct an H-matrix B_H with blockwise constraints, enforcing B_H v = Bv [3]. Alternatively, along the lines of regularization, one could simply add a small constant to the diagonal of B_H, albeit sacrificing approximation accuracy for the sake of positive definiteness. In the end, it needs to be decided whether it is worth pursuing positive definiteness when it comes at the expense of accuracy and/or computational efficiency.

6. Numerical results. We implemented and compared the construction of H-matrices with low rank approximations based on
  - polynomial interpolation (INT), see Section 4.1, (10),
  - Taylor expansion (TAY), see Section 4.2, (11),
  - cross approximation with fixed rank (CA) or with stopping criterion (30) (ACA), see Section 5, Algorithm 1, and
  - the expansion for generalized multiquadrics (GMQ), see Section 4.5, (24),
where we measure the accuracy of B_H as an approximation for B in terms of the relative error in the Frobenius norm, i.e., ‖B − B_H‖_F / ‖B‖_F. Since we want to include the GMQ expansion (Section 4.5) in our comparison, several of the tests involve the inverse multiquadric φ_IMQ as a radial basis function. We illustrate the convergence speed in Section 6.1.1 and the memory and computational time in Section 6.1.2. In Section 6.2 we demonstrate that the generic approximation methods INT, TAY and CA can be applied successfully to

the radial kernels introduced in Section 2. The approximation accuracy of CA can be seen to be close to optimal regardless of φ and d. The scaling behaviour of ACA for increasing values of N is treated in Section 6.3. We again measure storage requirements and computational time and find that the claim of an almost linear computational complexity is supported by our results. In principle, our programs work for any spatial dimension. Due to the excessive growth of the representation rank, however, it was not feasible to use any other value except d = 1, 2, 3. Even in this range, our available resources did not allow us to obtain approximations with an accuracy in the order of machine precision in all cases.

6.1. Inverse multiquadric. In our tests for φ_IMQ, we use N = 10,000 points and a shape factor α = N^{−1/d}. We choose different values of η in the admissibility conditions (9) and (27) in order to study its effect on the speed of convergence. These values were η = 1.0, 0.5, 0.25 for CA, INT and TAY, and η = 0.5, 0.25, 0.125 for the GMQ expansion.

Fig. 2. N = 10,000, inverse multiquadric φ_IMQ, α = N^{−1/d}. Left: GMQ expansion (Section 4.5). Right: cross approximation at fixed rank (CA).

Fig. 3. N = 10,000, inverse multiquadric φ_IMQ, α = N^{−1/d}. Left: interpolation (INT). Right: Taylor expansion (TAY).

6.1.1. Accuracy. The accuracy depending on the transformed rank k^{1/d} is shown in Figures 2 (GMQ, CA) and 3 (INT, TAY). All the approximation methods show exponential convergence in the sense of Definition 9, where smaller values of η accelerate the error decay. Best results are obtained for CA, followed by INT, TAY and the specialized GMQ expansion, in this order.

Fig. 4. Amount of memory required to store B_H for N = 10,000, φ = φ_{IMQ,α}, α = N^{−1/d}, d = 1, 2, 3. The gray vertical line shows the storage of the full matrix (800 MB).

6.1.2. Storage and computational time. Storage requirements and computational time depend not only on the rank but also on the H-block partition, which is the same for the approaches ACA, INT and TAY but differs for the specialized GMQ expansion in view of the different admissibility condition. The required storage for different spatial dimensions d = 1, 2, 3 is shown in Figure 4. GMQ and CA require the smallest amount of storage, whereas INT and TAY consume much more memory. The full matrix B requires about 800 MB of memory in this case; this is included in the plots for d = 2, 3 as a vertical gray line. When using INT or TAY and a high accuracy is desired, an advantage in using an H-approximation B_H is only given for d = 1. For d = 2, 3, these methods require an unacceptable amount of memory. This conclusion, however, depends on the size N of the matrix. As N increases, the storage of the full matrix grows like N^2, whereas the storage of the H-matrix B_H is of order O(N log N), also for INT and TAY. Regarding the choice of η, the results are mixed. For CA, the smallest amount of memory is obtained with the largest value of η = 1. For GMQ, a

larger value of η is advantageous in d = 1, but the situation is reversed for d = 2, 3. This is in contrast to what was seen in Section 6.1.1 and reflects the fact that the amount of storage required does not only depend on the speed of convergence of the separable expansion but also on the admissibility condition.

Fig. 5. Time required to compute B_H for N = 10,000, φ = φ_{IMQ,α}, α = N^{−1/d}, for d = 1, 2, 3.

The computational time required to set up B_H depending on the approximation accuracy is shown in Figure 5. For the computational time, similar observations as for the memory consumption can be made. The CA and GMQ approaches are in general faster than INT and TAY.

6.2. Radial kernel functions and dimensions. In order to show that the generic approximation methods are applicable to all radial kernels and dimensions, we measure the approximation error for every combination of d = 1, 2, 3 and the four example functions φ_GAU, φ_IMQ, φ_TPS, φ_MQ. The remaining parameters were chosen as N = 4900, η = 1, α = 1. Besides cross approximation, interpolation and Taylor expansion, we also included the best approximation through low-rank blocks based on the singular value decomposition. In this approach, titled SVD in the diagrams, the approximation of admissible blocks of B_H is such that the columns of U and V from (7) are the left and right singular vectors belonging to the largest singular values of the block. This procedure minimizes the error and serves as a reference for the other methods. The results for d = 1, 2, 3 dimensions are presented in Figures 6 to 8. CA with a fixed rank shows almost optimal performance across all φ and d that we considered.

Fig. 6. Comparison of basis functions (φ_GAU, φ_IMQ, φ_TPS, φ_MQ) and approximation methods (CA, SVD, INT, TAY) for N = 4900 and d = 1.

Fig. 7. Comparison of basis functions and approximation methods for d = 2.

Fig. 8. Comparison of basis functions and approximation methods for d = 3.

6.3. Dependence on matrix size N. We next investigate the influence of the number of points N on the storage and computational time requirements to set up B_H. The construction of B_H is based on ACA using a relative stopping criterion with ɛ = 10^{−1}, ..., 10^{−12} (see Section 5.2). The tests were carried out for d = 2, the Gaussian φ_GAU, a fixed shape factor α = 1, and up to N = 160,000 points. The amount of memory required to store B_H as well as the full matrix B is shown in Figure 9. The results for the size of B_H are compatible with our assumption of an almost linear scaling in terms of N. For a fixed value of N, the amount of memory is proportional to log(1/ɛ), corresponding to an exponential speed of convergence of the separable expansion. To give an impression of how the amount of storage relates to a sparse matrix, we note that B_H set up with ɛ = 10^{−3} consumes about as much memory as a sparse matrix with an average of about 700 non-zero entries per row would. The time required to compute B_H is shown in Figure 10. Again, we observe an almost linear dependence on N.

7. Conclusion. Scattered data interpolation using radial kernels requires the solution of a set of linear equations with a system matrix (5) which is large, dense and often highly ill-conditioned. In this article, we described and compared various efficient ways to approximate the matrix block B in (5) by an H-matrix B_H with almost linear O(N log N) complexity. The H-matrix B_H allows a fast matrix-vector multiplication suitable for iterative solution algorithms. In addition, H-arithmetic is available to compute approximate matrix factorizations which may then serve as preconditioners. The number of columns Q = dim(π^d_{m−1}) of the block P_X in (5) corresponding to the polynomial

Fig. 9. Storage requirement of B_H for d = 2, α = 1, φ = φ_GAU, depending on N, for ACA with ɛ = 10^{−1}, ..., 10^{−12}. The memory required for the full matrix B is included for reference.

Fig. 10. Computational time required to build B_H for d = 2, α = 1, φ = φ_GAU, depending on N.

part is considered to be fixed and small compared to the number of points N. Setting up this block as a full matrix therefore only contributes a linear complexity O(N). We compared generic approximation methods suitable for general (sufficiently smooth) radial kernels and a separable expansion tailored to multiquadric basis functions [6]. While a Taylor expansion can be expressed generically in the form (11), its implementation would require significant effort since the multivariate derivatives have to be computed for each radial kernel. We produced an implementation which is easily adapted to a wide range of radial kernels φ since it only requires a formula for the univariate derivatives ψ^{(n)} of ψ(r) = φ(√r), which we provided for the four radial kernels considered in this paper.
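To illustrate the last point, the following small sketch (our own, for the unscaled case α = 1; function names are assumptions) gives the univariate derivatives ψ^{(n)} for the Gaussian and the inverse multiquadric, as used by the Taylor-expansion approach.

import math

# Sketch: n-th derivatives of psi(r) = phi(sqrt(r)) for alpha = 1, obtained
# by repeated differentiation.
def psi_gau_deriv(n, r):
    # psi_GAU(r) = exp(-r)  =>  psi^(n)(r) = (-1)^n exp(-r)
    return (-1.0) ** n * math.exp(-r)

def psi_imq_deriv(n, r):
    # psi_IMQ(r) = (1 + r)^(-1/2)  =>  psi^(n)(r) = prod_{j<n} (-1/2 - j) * (1 + r)^(-1/2 - n)
    coef = 1.0
    for j in range(n):
        coef *= (-0.5 - j)
    return coef * (1.0 + r) ** (-0.5 - n)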

Our numerical tests show that all the considered approximation methods exhibit an exponential convergence with respect to the separation rank. The cross approximation algorithm outperformed the analytical expansions considered here, whose rank k increases rapidly (in jumps) when the parameter m in the expansion is incremented. In view of the large constants involved in the complexity estimates, the H-matrix approximation B_H of B becomes advantageous only if a certain number N of data sites is exceeded. While we restricted this paper to the construction of an approximation B_H of B, the (iterative) solution of (5), including the construction of H-preconditioners, will be subject of future work.

REFERENCES

[1] R. Beatson and L. Greengard: A short course on fast multipole methods. Wavelets, Multilevel Methods and Elliptic PDEs 1 (1997).
[2] M. Bebendorf: Hierarchical Matrices. A Means to Efficiently Solve Elliptic Boundary Value Problems. Lecture Notes in Computational Science and Engineering, vol. 63, Springer.
[3] M. Bebendorf, M. Bollhöfer, and M. Bratsch: Hierarchical matrix approximation with blockwise constraints. BIT Numerical Mathematics 53 (2013).
[4] M. Bebendorf and R. Venn: Constructing nested bases approximations from the entries of non-local operators. Numerische Mathematik 121 (2012).
[5] M.D. Buhmann: Radial Basis Functions. Cambridge University Press, Cambridge, UK.
[6] J. Cherrie, R. Beatson, and G. Newsam: Fast evaluation of radial basis functions: Methods for generalized multiquadrics in R^n. SIAM J. Sci. Comp. 23 (2002).
[7] B. Diederichs and A. Iske: Improved estimates for condition numbers of radial basis function interpolation matrices. Preprint, Hamburger Beiträge zur Angewandten Mathematik, University of Hamburg.
[8] G. Fasshauer: Meshfree Approximation Methods with Matlab. World Scientific, Singapore.
[9] G. Fasshauer and M. McCourt: Kernel-based Approximation Methods using MATLAB. Interdisciplinary Mathematical Sciences, vol. 19, World Scientific.
[10] B. Fornberg and N. Flyer: A Primer on Radial Basis Functions with Applications to the Geosciences. CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 87, SIAM.
[11] S. Goreinov and E. Tyrtyshnikov: The maximal-volume concept in approximation by low-rank matrices. Contemporary Mathematics 280 (2001).
[12] W. Hackbusch: Hierarchical Matrices: Algorithms and Analysis. Springer Series in Computational Mathematics, vol. 49, Springer.
[13] W. Hackbusch: Survey on the technique of hierarchical matrices. Vietnam Journal of Mathematics 44 (2016).
[14] A. Iske: Multiresolution Methods in Scattered Data Modelling. Lecture Notes in Computational Science and Engineering, vol. 37, Springer.
[15] H. Wendland: Scattered Data Approximation. Cambridge Monographs on Applied and Computational Mathematics, vol. 17, Cambridge University Press, Cambridge, UK, 2005.


CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently

More information

Fast Multipole Methods: Fundamentals & Applications. Ramani Duraiswami Nail A. Gumerov

Fast Multipole Methods: Fundamentals & Applications. Ramani Duraiswami Nail A. Gumerov Fast Multipole Methods: Fundamentals & Applications Ramani Duraiswami Nail A. Gumerov Week 1. Introduction. What are multipole methods and what is this course about. Problems from physics, mathematics,

More information

Meshfree Approximation Methods with MATLAB

Meshfree Approximation Methods with MATLAB Interdisciplinary Mathematical Sc Meshfree Approximation Methods with MATLAB Gregory E. Fasshauer Illinois Institute of Technology, USA Y f? World Scientific NEW JERSEY LONDON SINGAPORE BEIJING SHANGHAI

More information

Fraction-free Row Reduction of Matrices of Skew Polynomials

Fraction-free Row Reduction of Matrices of Skew Polynomials Fraction-free Row Reduction of Matrices of Skew Polynomials Bernhard Beckermann Laboratoire d Analyse Numérique et d Optimisation Université des Sciences et Technologies de Lille France bbecker@ano.univ-lille1.fr

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Hierarchical Matrices. Jon Cockayne April 18, 2017

Hierarchical Matrices. Jon Cockayne April 18, 2017 Hierarchical Matrices Jon Cockayne April 18, 2017 1 Sources Introduction to Hierarchical Matrices with Applications [Börm et al., 2003] 2 Sources Introduction to Hierarchical Matrices with Applications

More information

Class notes: Approximation

Class notes: Approximation Class notes: Approximation Introduction Vector spaces, linear independence, subspace The goal of Numerical Analysis is to compute approximations We want to approximate eg numbers in R or C vectors in R

More information

A new stable basis for radial basis function interpolation

A new stable basis for radial basis function interpolation A new stable basis for radial basis function interpolation Stefano De Marchi and Gabriele Santin Department of Mathematics University of Padua (Italy) Abstract It is well-known that radial basis function

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

APPROXIMATING GAUSSIAN PROCESSES

APPROXIMATING GAUSSIAN PROCESSES 1 / 23 APPROXIMATING GAUSSIAN PROCESSES WITH H 2 -MATRICES Steffen Börm 1 Jochen Garcke 2 1 Christian-Albrechts-Universität zu Kiel 2 Universität Bonn and Fraunhofer SCAI 2 / 23 OUTLINE 1 GAUSSIAN PROCESSES

More information

Low Rank Approximation Lecture 3. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL

Low Rank Approximation Lecture 3. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL Low Rank Approximation Lecture 3 Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL daniel.kressner@epfl.ch 1 Sampling based approximation Aim: Obtain rank-r approximation

More information

CHAPTER 11. A Revision. 1. The Computers and Numbers therein

CHAPTER 11. A Revision. 1. The Computers and Numbers therein CHAPTER A Revision. The Computers and Numbers therein Traditional computer science begins with a finite alphabet. By stringing elements of the alphabet one after another, one obtains strings. A set of

More information

LECTURE NOTE #11 PROF. ALAN YUILLE

LECTURE NOTE #11 PROF. ALAN YUILLE LECTURE NOTE #11 PROF. ALAN YUILLE 1. NonLinear Dimension Reduction Spectral Methods. The basic idea is to assume that the data lies on a manifold/surface in D-dimensional space, see figure (1) Perform

More information

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems

Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Using Laplacian Eigenvalues and Eigenvectors in the Analysis of Frequency Assignment Problems Jan van den Heuvel and Snežana Pejić Department of Mathematics London School of Economics Houghton Street,

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost

More information

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Recent Results for Moving Least Squares Approximation

Recent Results for Moving Least Squares Approximation Recent Results for Moving Least Squares Approximation Gregory E. Fasshauer and Jack G. Zhang Abstract. We describe two experiments recently conducted with the approximate moving least squares (MLS) approximation

More information

Lecture 9: Numerical Linear Algebra Primer (February 11st)

Lecture 9: Numerical Linear Algebra Primer (February 11st) 10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

MATH 590: Meshfree Methods

MATH 590: Meshfree Methods MATH 590: Meshfree Methods Chapter 2: Radial Basis Function Interpolation in MATLAB Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Fall 2010 fasshauer@iit.edu MATH 590

More information

Fast matrix algebra for dense matrices with rank-deficient off-diagonal blocks

Fast matrix algebra for dense matrices with rank-deficient off-diagonal blocks CHAPTER 2 Fast matrix algebra for dense matrices with rank-deficient off-diagonal blocks Chapter summary: The chapter describes techniques for rapidly performing algebraic operations on dense matrices

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Process Model Formulation and Solution, 3E4

Process Model Formulation and Solution, 3E4 Process Model Formulation and Solution, 3E4 Section B: Linear Algebraic Equations Instructor: Kevin Dunn dunnkg@mcmasterca Department of Chemical Engineering Course notes: Dr Benoît Chachuat 06 October

More information

Application of modified Leja sequences to polynomial interpolation

Application of modified Leja sequences to polynomial interpolation Special Issue for the years of the Padua points, Volume 8 5 Pages 66 74 Application of modified Leja sequences to polynomial interpolation L. P. Bos a M. Caliari a Abstract We discuss several applications

More information

Dense LU factorization and its error analysis

Dense LU factorization and its error analysis Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,

More information

Institute for Computational Mathematics Hong Kong Baptist University

Institute for Computational Mathematics Hong Kong Baptist University Institute for Computational Mathematics Hong Kong Baptist University ICM Research Report 08-0 How to find a good submatrix S. A. Goreinov, I. V. Oseledets, D. V. Savostyanov, E. E. Tyrtyshnikov, N. L.

More information

A Randomized Algorithm for the Approximation of Matrices

A Randomized Algorithm for the Approximation of Matrices A Randomized Algorithm for the Approximation of Matrices Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert Technical Report YALEU/DCS/TR-36 June 29, 2006 Abstract Given an m n matrix A and a positive

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

1. Introduction. A radial basis function (RBF) interpolant of multivariate data (x k, y k ), k = 1, 2,..., n takes the form

1. Introduction. A radial basis function (RBF) interpolant of multivariate data (x k, y k ), k = 1, 2,..., n takes the form A NEW CLASS OF OSCILLATORY RADIAL BASIS FUNCTIONS BENGT FORNBERG, ELISABETH LARSSON, AND GRADY WRIGHT Abstract Radial basis functions RBFs form a primary tool for multivariate interpolation, and they are

More information

14.2 QR Factorization with Column Pivoting

14.2 QR Factorization with Column Pivoting page 531 Chapter 14 Special Topics Background Material Needed Vector and Matrix Norms (Section 25) Rounding Errors in Basic Floating Point Operations (Section 33 37) Forward Elimination and Back Substitution

More information

Statistical Geometry Processing Winter Semester 2011/2012

Statistical Geometry Processing Winter Semester 2011/2012 Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Calculating determinants for larger matrices

Calculating determinants for larger matrices Day 26 Calculating determinants for larger matrices We now proceed to define det A for n n matrices A As before, we are looking for a function of A that satisfies the product formula det(ab) = det A det

More information

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013. The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

More information

AN ELEMENTARY PROOF OF THE OPTIMAL RECOVERY OF THE THIN PLATE SPLINE RADIAL BASIS FUNCTION

AN ELEMENTARY PROOF OF THE OPTIMAL RECOVERY OF THE THIN PLATE SPLINE RADIAL BASIS FUNCTION J. KSIAM Vol.19, No.4, 409 416, 2015 http://dx.doi.org/10.12941/jksiam.2015.19.409 AN ELEMENTARY PROOF OF THE OPTIMAL RECOVERY OF THE THIN PLATE SPLINE RADIAL BASIS FUNCTION MORAN KIM 1 AND CHOHONG MIN

More information

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form

you expect to encounter difficulties when trying to solve A x = b? 4. A composite quadrature rule has error associated with it in the following form Qualifying exam for numerical analysis (Spring 2017) Show your work for full credit. If you are unable to solve some part, attempt the subsequent parts. 1. Consider the following finite difference: f (0)

More information

Image Registration Lecture 2: Vectors and Matrices

Image Registration Lecture 2: Vectors and Matrices Image Registration Lecture 2: Vectors and Matrices Prof. Charlene Tsai Lecture Overview Vectors Matrices Basics Orthogonal matrices Singular Value Decomposition (SVD) 2 1 Preliminary Comments Some of this

More information

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3

ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any

More information

Lecture Note 2: The Gaussian Elimination and LU Decomposition

Lecture Note 2: The Gaussian Elimination and LU Decomposition MATH 5330: Computational Methods of Linear Algebra Lecture Note 2: The Gaussian Elimination and LU Decomposition The Gaussian elimination Xianyi Zeng Department of Mathematical Sciences, UTEP The method

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2018/19 Part 4: Iterative Methods PD

More information

Intrinsic products and factorizations of matrices

Intrinsic products and factorizations of matrices Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 5 3 www.elsevier.com/locate/laa Intrinsic products and factorizations of matrices Miroslav Fiedler Academy of Sciences

More information

3 (Maths) Linear Algebra

3 (Maths) Linear Algebra 3 (Maths) Linear Algebra References: Simon and Blume, chapters 6 to 11, 16 and 23; Pemberton and Rau, chapters 11 to 13 and 25; Sundaram, sections 1.3 and 1.5. The methods and concepts of linear algebra

More information

EXISTENCE VERIFICATION FOR SINGULAR ZEROS OF REAL NONLINEAR SYSTEMS

EXISTENCE VERIFICATION FOR SINGULAR ZEROS OF REAL NONLINEAR SYSTEMS EXISTENCE VERIFICATION FOR SINGULAR ZEROS OF REAL NONLINEAR SYSTEMS JIANWEI DIAN AND R BAKER KEARFOTT Abstract Traditional computational fixed point theorems, such as the Kantorovich theorem (made rigorous

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 19: Computing the SVD; Sparse Linear Systems Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Fast Multipole BEM for Structural Acoustics Simulation

Fast Multipole BEM for Structural Acoustics Simulation Fast Boundary Element Methods in Industrial Applications Fast Multipole BEM for Structural Acoustics Simulation Matthias Fischer and Lothar Gaul Institut A für Mechanik, Universität Stuttgart, Germany

More information

Introduction to the Tensor Train Decomposition and Its Applications in Machine Learning

Introduction to the Tensor Train Decomposition and Its Applications in Machine Learning Introduction to the Tensor Train Decomposition and Its Applications in Machine Learning Anton Rodomanov Higher School of Economics, Russia Bayesian methods research group (http://bayesgroup.ru) 14 March

More information

arxiv: v1 [math.co] 3 Nov 2014

arxiv: v1 [math.co] 3 Nov 2014 SPARSE MATRICES DESCRIBING ITERATIONS OF INTEGER-VALUED FUNCTIONS BERND C. KELLNER arxiv:1411.0590v1 [math.co] 3 Nov 014 Abstract. We consider iterations of integer-valued functions φ, which have no fixed

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods

The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods by Hae-Soo Oh Department of Mathematics, University of North Carolina at Charlotte, Charlotte, NC 28223 June

More information

Numerical Analysis: Solving Systems of Linear Equations

Numerical Analysis: Solving Systems of Linear Equations Numerical Analysis: Solving Systems of Linear Equations Mirko Navara http://cmpfelkcvutcz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office

More information

A Review of Matrix Analysis

A Review of Matrix Analysis Matrix Notation Part Matrix Operations Matrices are simply rectangular arrays of quantities Each quantity in the array is called an element of the matrix and an element can be either a numerical value

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

MATRICES. a m,1 a m,n A =

MATRICES. a m,1 a m,n A = MATRICES Matrices are rectangular arrays of real or complex numbers With them, we define arithmetic operations that are generalizations of those for real and complex numbers The general form a matrix of

More information

Positive Definite Kernels: Opportunities and Challenges

Positive Definite Kernels: Opportunities and Challenges Positive Definite Kernels: Opportunities and Challenges Michael McCourt Department of Mathematical and Statistical Sciences University of Colorado, Denver CUNY Mathematics Seminar CUNY Graduate College

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Gerald Recktenwald Portland State University Mechanical Engineering Department gerry@me.pdx.edu These slides are a supplement to the book Numerical Methods with Matlab: Implementations

More information

DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION

DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION Meshless Methods in Science and Engineering - An International Conference Porto, 22 DIRECT ERROR BOUNDS FOR SYMMETRIC RBF COLLOCATION Robert Schaback Institut für Numerische und Angewandte Mathematik (NAM)

More information

Gaussian Elimination and Back Substitution

Gaussian Elimination and Back Substitution Jim Lambers MAT 610 Summer Session 2009-10 Lecture 4 Notes These notes correspond to Sections 31 and 32 in the text Gaussian Elimination and Back Substitution The basic idea behind methods for solving

More information

Shortest paths with negative lengths

Shortest paths with negative lengths Chapter 8 Shortest paths with negative lengths In this chapter we give a linear-space, nearly linear-time algorithm that, given a directed planar graph G with real positive and negative lengths, but no

More information

Solving an Elliptic PDE Eigenvalue Problem via Automated Multi-Level Substructuring and Hierarchical Matrices

Solving an Elliptic PDE Eigenvalue Problem via Automated Multi-Level Substructuring and Hierarchical Matrices Solving an Elliptic PDE Eigenvalue Problem via Automated Multi-Level Substructuring and Hierarchical Matrices Peter Gerds and Lars Grasedyck Bericht Nr. 30 März 2014 Key words: automated multi-level substructuring,

More information