Max-Planck-Institut für Mathematik in den Naturwissenschaften Leipzig

Approximation of integral operators by $H^2$-matrices with adaptive bases
(revised version: December 2004)

by Steffen Börm

Preprint no.:


Approximation of integral operators by $H^2$-matrices with adaptive bases

Steffen Börm, Leipzig

December 10, 2004

$H^2$-matrices can be used to construct efficient approximations of discretized integral operators. The $H^2$-matrix approximation can be constructed efficiently by interpolation, Taylor or multipole expansion of the integral kernel function, but the resulting representation requires a large amount of storage. In order to improve the efficiency, local Schur decompositions can be used to eliminate redundant functions from an original approximation, which leads to a significant reduction of storage requirements and algorithmic complexity.

1 Introduction

We consider an integral operator
$$\mathcal{K}[u](x) = \int_\Omega \kappa(x,y)\,u(y)\,dy \qquad (1)$$
defined by a domain or manifold $\Omega \subseteq \mathbb{R}^d$ and a kernel function $\kappa : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$. Discretizing $\mathcal{K}$ by Galerkin's method with finite element basis functions $(\varphi_i)_{i \in I}$ leads to a matrix $K \in \mathbb{R}^{I \times I}$ given by
$$K_{ij} = \langle \varphi_i, \mathcal{K}[\varphi_j] \rangle_{L^2} = \int_\Omega \varphi_i(x) \int_\Omega \kappa(x,y)\,\varphi_j(y)\,dy\,dx.$$
In many applications the support of $\kappa$ is a superset of $\Omega \times \Omega$, so the matrix $K$ will be dense. Treating $K$ directly therefore requires $O(n^2)$ units of storage, where $n := \#I$ is the number of degrees of freedom. Since the quadratic complexity of this simple approach is not acceptable if the problem dimension becomes large, a variety of alternative representations have been introduced: If the kernel function is asymptotically smooth (cf. [7]), it can be approximated by panel-clustering (cf. [14, 17, 16]) or multipole expansion (cf. [15, 11, 12]) methods. A similar effect can be achieved by replacing the finite element basis functions $(\varphi_i)_{i \in I}$ by wavelet-like functions (cf. [8]).
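The quadratic cost of the dense representation can be made concrete with a small numerical sketch. The following Python fragment (not part of the original paper; the one-dimensional panel mesh, the midpoint quadrature and the logarithmic kernel are illustrative assumptions) assembles the full Galerkin matrix for piecewise-constant basis functions and shows that its storage grows like $n^2$.

```python
import numpy as np

def assemble_dense_galerkin(n, kernel):
    """Toy assembly of K_ij = int int phi_i(x) kappa(x,y) phi_j(y) dy dx for
    piecewise-constant basis functions on n equal panels of [0, 1], using a
    single midpoint quadrature point per panel (illustration only)."""
    h = 1.0 / n
    mid = (np.arange(n) + 0.5) * h                # panel midpoints
    X, Y = np.meshgrid(mid, mid, indexing="ij")
    return h * h * kernel(X, Y)                   # dense n x n matrix: O(n^2) storage

kernel = lambda x, y: np.log(np.abs(x - y) + 1e-3)   # hypothetical smooth toy kernel
K = assemble_dense_galerkin(200, kernel)
print(K.shape, K.nbytes)                          # storage grows quadratically with n
```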

The algorithm presented here creates an approximation of the kernel function by means of a panel-clustering algorithm based on interpolation [9, 3]. The expansion systems corresponding to interpolation contain a certain degree of redundancy that has to be eliminated in order to improve the efficiency. A simple way of doing this is to orthogonalize the discrete expansion systems [4]. These techniques take into account the effect of the discretization, but not the influence of the discrete operator that is being approximated. An alternative approach is the algebraic approximation algorithm presented in [2], which constructs locally optimal discrete expansion systems for arbitrary operators. If the operator is already approximated by a suitable panel-clustering approach, the algorithm can take advantage of this more efficient representation in order to reach linear complexity in time and storage requirements.

2 $H^2$-matrix approximation by interpolation

We will now introduce the basic notations for $H^2$-matrices and demonstrate how interpolation can be used to construct an approximation of an integral operator (cf. [3]).

2.1 Local interpolation of the kernel function

In order to be able to approximate the kernel function $\kappa$ by interpolation, we have to ensure that $\kappa$ restricted to the domain of interpolation is sufficiently smooth. For asymptotically smooth kernel functions, i.e., if there are $c_0 \in \mathbb{R}_{>0}$, $g \in \mathbb{N}_0$ such that
$$|\partial_x^\alpha \partial_y^\beta \kappa(x,y)| \lesssim c_0^{|\alpha|+|\beta|}\,(\alpha+\beta)!\;\|x-y\|^{-(g+|\alpha|+|\beta|)} \qquad (2)$$
holds for all $\alpha, \beta \in \mathbb{N}_0^d$, a simple criterion can be found: Let $B_t, B_s \subseteq \mathbb{R}^d$ be axis-parallel bounding boxes and let $\mathcal{I}^t_m$ and $\mathcal{I}^s_m$ be stable $m$-th order interpolation operators on $B_t$ and $B_s$, respectively. If the admissibility condition
$$\operatorname{diam}(B_t \times B_s) \le \eta \operatorname{dist}(B_t, B_s) \qquad (3)$$
holds, then an error estimate of the form
$$\|\kappa - (\mathcal{I}^t_m \otimes \mathcal{I}^s_m)[\kappa]\|_{\infty, B_t \times B_s} \lesssim \frac{1}{\operatorname{dist}(B_t,B_s)^g}\left(\frac{c_2\eta}{c_2\eta + c_1}\right)^m \qquad (4)$$
can be proven for stable interpolation schemes (like Chebyshev interpolation, cf. [5]), so the interpolation converges exponentially if the order $m$ is increased.

Remark 1 In many applications, derivatives of the kernel function $\kappa$ are used instead of $\kappa$, e.g., when treating the double layer potential operator. Using techniques similar to those used in [6, Lemma 6.1], we can prove that the derivatives of $\kappa$ and its interpolant also satisfy an estimate similar to (4) and therefore will also converge exponentially.
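The exponential convergence in (4) can be observed directly in a minimal one-dimensional experiment. The sketch below (assumptions: intervals instead of boxes, the kernel $1/|x-y|$, and Chebyshev nodes of the first kind; none of this is prescribed by the paper) builds the separable approximant $\sum_{\nu,\mu}\kappa(x^t_\nu,x^s_\mu)L^t_\nu(x)L^s_\mu(y)$ for an admissible pair of intervals and prints the maximal error for increasing order $m$.

```python
import numpy as np

def cheb_points(a, b, m):
    """Chebyshev points of order m (m + 1 nodes) transformed to [a, b]."""
    j = np.arange(m + 1)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * j + 1) * np.pi / (2 * (m + 1)))

def lagrange_eval(nodes, x):
    """Matrix L with L[i, nu] = L_nu(x_i), the nu-th Lagrange polynomial at x_i."""
    L = np.ones((x.size, nodes.size))
    for nu, xnu in enumerate(nodes):
        for mu, xmu in enumerate(nodes):
            if mu != nu:
                L[:, nu] *= (x - xmu) / (xnu - xmu)
    return L

Bt, Bs = (0.0, 1.0), (2.0, 3.0)                  # admissible pair: dist(B_t, B_s) = 1
kappa = lambda x, y: 1.0 / np.abs(x - y)         # asymptotically smooth toy kernel
xs, ys = np.linspace(*Bt, 200), np.linspace(*Bs, 200)
exact = kappa(xs[:, None], ys[None, :])

for m in (2, 4, 6, 8):
    xt, yt = cheb_points(*Bt, m), cheb_points(*Bs, m)
    S = kappa(xt[:, None], yt[None, :])          # S_{nu,mu} = kappa(x^t_nu, x^s_mu), cf. (8)
    approx = lagrange_eval(xt, xs) @ S @ lagrange_eval(yt, ys).T
    print(m, np.abs(exact - approx).max())       # error decays exponentially in m
```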

The idea of most panel-clustering and multipole techniques is to split the domain $\Omega \times \Omega$ into a hierarchy of subdomains that satisfy the admissibility condition (3) and a small remainder of subdomains that still contain the singularity, but can be handled by a sparse matrix. On the admissible domains, a separable approximation is used to derive a more efficient representation.

In order to construct the admissible subdomains efficiently, we introduce a hierarchy of subdomains corresponding to subsets of the index set $I$: Let $T_I$ be a labeled tree with root $r$, i.e., a tree in which each node $t \in T_I$ is associated with a label $\hat t$. $T_I$ is a cluster tree if the following conditions hold:

- The root is labeled with $I$, i.e., $\hat r = I$.
- If $t \in T_I$ has sons, then the labels of the sons form a partition of the label of the father, i.e., $\hat t = \bigcup\{\hat s : s \in \operatorname{sons}(t)\}$.
- For all $t \in T_I$, we have $\#\operatorname{sons}(t) \ne 1$ and $\#\hat t > 0$.

The nodes $t \in T_I$ of a cluster tree $T_I$ are called clusters, and the level of a cluster $t \in T_I$ is defined inductively by $\operatorname{level}(r) = 0$ and $\operatorname{level}(t') = \operatorname{level}(t) + 1$ for $t' \in \operatorname{sons}(t)$. The number of all clusters is denoted by $c := \#T_I$ and satisfies $c \le 2n - 1$. Typically, we will assume that the leaves of $T_I$ correspond to small subsets of $I$, i.e., that there is a constant $C_{\mathrm{leaf}} \in \mathbb{N}$ such that $\#\hat t \le C_{\mathrm{leaf}}$ holds for all leaves of $T_I$.

Using the cluster tree, we can construct the desired partition: For each cluster $t$, we fix a bounding box $B_t \subseteq \mathbb{R}^d$, i.e., an axis-parallel box satisfying $\operatorname{supp}\varphi_i \subseteq B_t$ for all $i \in \hat t$. For a given pair $(t,s) \in T_I \times T_I$, we proceed as follows: If $B_t$ and $B_s$ satisfy the admissibility condition (3), we add $(t,s)$ to the set $P_{\mathrm{far}}$. If $t$ and $s$ are leaves, we add $(t,s)$ to the set $P_{\mathrm{near}}$. Otherwise, we repeat the procedure for pairs formed by the sons of $t$ and $s$ (if only one of the clusters has no sons, we use the cluster itself instead). Starting with $P_{\mathrm{far}} = P_{\mathrm{near}} = \emptyset$ and $(t,s) = (r,r)$, we get a partition $P = P_{\mathrm{far}} \cup P_{\mathrm{near}}$ of $I \times I$ such that all pairs in $P_{\mathrm{far}}$ correspond to admissible subdomains and all pairs in $P_{\mathrm{near}}$ correspond to leaf clusters. In standard situations, it can be shown that $\#P \in O(n)$ holds (cf. [10]).
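A minimal sketch of this recursive construction of $P_{\mathrm{far}}$ and $P_{\mathrm{near}}$ is given below (again Python and not from the paper; clusters are assumed to be identified by hashable keys, with their bounding boxes and son lists stored in dictionaries, which is only one possible data layout).

```python
import numpy as np

def diam(box):
    lo, hi = box
    return np.linalg.norm(hi - lo)

def dist(box_t, box_s):
    """Distance of two axis-parallel boxes, each given as a pair (lower, upper) of corner arrays."""
    (lo_t, hi_t), (lo_s, hi_s) = box_t, box_s
    gap = np.maximum(0.0, np.maximum(lo_t - hi_s, lo_s - hi_t))
    return np.linalg.norm(gap)

def admissible(box_t, box_s, eta):
    # condition (3); diam(B_t x B_s)^2 = diam(B_t)^2 + diam(B_s)^2 for the product box
    return np.hypot(diam(box_t), diam(box_s)) <= eta * dist(box_t, box_s)

def build_partition(t, s, box, sons, eta, P_far, P_near):
    """Recursive construction of the partition P = P_far + P_near described above;
    start the recursion with the pair (root, root) and empty lists."""
    if admissible(box[t], box[s], eta):
        P_far.append((t, s))
    elif not sons[t] and not sons[s]:
        P_near.append((t, s))
    else:
        for t1 in (sons[t] or [t]):          # if a cluster has no sons, keep it unchanged
            for s1 in (sons[s] or [s]):
                build_partition(t1, s1, box, sons, eta, P_far, P_near)
```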

2.2 Approximation of the matrix

The approximation of the matrix $K$ can now be constructed block by block: For each $(t,s) \in P_{\mathrm{far}}$, we interpolate $\kappa$ on $B_t \times B_s$ and set
$$K^{t,s}_{ij} := \begin{cases} \int_\Omega \varphi_i(x) \int_\Omega (\mathcal{I}^t_m \otimes \mathcal{I}^s_m)[\kappa](x,y)\,\varphi_j(y)\,dy\,dx & \text{if } i \in \hat t,\ j \in \hat s,\\ 0 & \text{otherwise} \end{cases}$$
for all $i,j \in I$. Since $(t,s) \in P_{\mathrm{far}}$, we know that $K^{t,s}_{ij}$ will converge rapidly to $K_{ij}$ if the interpolation order $m$ is increased.

All nearfield entries are collected in a matrix $\tilde K \in \mathbb{R}^{I \times I}$,
$$\tilde K_{ij} := \begin{cases} K_{ij} & \text{if there is a pair } (t,s) \in P_{\mathrm{near}} \text{ with } i \in \hat t,\ j \in \hat s,\\ 0 & \text{otherwise} \end{cases}$$
for all $i,j \in I$. Since $P = P_{\mathrm{far}} \cup P_{\mathrm{near}}$ describes a partition of $I \times I$, the matrix
$$\hat K := \tilde K + \sum_{(t,s) \in P_{\mathrm{far}}} K^{t,s} \qquad (5)$$
is an approximation of $K$. We can handle $\tilde K$ efficiently since it is a sparse matrix. The farfield matrices $K^{t,s}$ require a different storage format that we will introduce now.

For all $t \in T_I$, we denote the interpolation points in $B_t$ by $(x^t_\nu)_{\nu=1}^k$ and the corresponding Lagrange polynomials by $(L^t_\nu)_{\nu=1}^k$. For $i \in \hat t$, $j \in \hat s$, we find
$$K^{t,s}_{ij} = \int_\Omega \varphi_i(x) \int_\Omega \sum_{\nu=1}^k \sum_{\mu=1}^k \kappa(x^t_\nu, x^s_\mu)\,L^t_\nu(x)\,L^s_\mu(y)\,\varphi_j(y)\,dy\,dx = \sum_{\nu=1}^k \sum_{\mu=1}^k \kappa(x^t_\nu, x^s_\mu) \int_\Omega \varphi_i(x) L^t_\nu(x)\,dx \int_\Omega \varphi_j(y) L^s_\mu(y)\,dy,$$
so we can store $K^{t,s}$ in the factorized form
$$K^{t,s} = V^t S^{t,s} (W^s)^\top \qquad (6)$$
with the row cluster basis matrix $V^t \in \mathbb{R}^{I \times k}$, the column cluster basis matrix $W^s \in \mathbb{R}^{I \times k}$ and the coefficient matrix $S^{t,s} \in \mathbb{R}^{k \times k}$ defined by
$$V^t_{i\nu} = W^t_{i\nu} := \begin{cases} \int_\Omega \varphi_i(x) L^t_\nu(x)\,dx & \text{if } i \in \hat t,\\ 0 & \text{otherwise}, \end{cases} \qquad (7)$$
$$S^{t,s}_{\nu\mu} := \kappa(x^t_\nu, x^s_\mu). \qquad (8)$$
In the case of integral operators of the form (1), the row and column cluster basis matrices $V^t$ and $W^t$ are identical. In more general applications they may differ, e.g., if the operator is based on derivatives of $\kappa$. The factorized representation of $K^{t,s}$ requires only $(\#\hat t + \#\hat s + k)k$ units of storage.
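The gain of the factorized form (6) is easy to quantify. The following sketch (random matrices standing in for $V^t$, $S^{t,s}$ and $W^s$; the dimensions are made up for illustration) compares the storage of a dense block with the $(\#\hat t + \#\hat s + k)k$ units of the factorized representation and applies the block to a vector without ever forming it.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, ns, k = 500, 400, 16                     # toy sizes for one admissible block (t, s)
V_t = rng.standard_normal((nt, k))           # row cluster basis restricted to t_hat, cf. (7)
W_s = rng.standard_normal((ns, k))           # column cluster basis restricted to s_hat
S_ts = rng.standard_normal((k, k))           # coefficient matrix, cf. (8)

def block_matvec(x):
    """Apply K^{t,s} = V^t S^{t,s} (W^s)^T to x without forming the block, cf. (6)."""
    return V_t @ (S_ts @ (W_s.T @ x))

print("dense storage:     ", nt * ns)
print("factorized storage:", (nt + ns + k) * k)

x = rng.standard_normal(ns)
print(np.linalg.norm(block_matvec(x) - (V_t @ S_ts @ W_s.T) @ x))   # both agree up to rounding
```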

2.3 Nested cluster bases

We call the families $V = (V^t)_{t \in T_I}$ and $W = (W^s)_{s \in T_I}$ of row and column cluster basis matrices row and column cluster bases. Storing all $V^t$ as dense matrices leads to a storage complexity of $O(nkp)$, where $p$ is the depth of the cluster tree. Usually, $p \sim \log n$ will hold, so the storage requirements will not scale linearly in $n$. We will now introduce an alternative representation of $V^t$ that allows us to reach the optimal order of complexity.

Let $t \in T_I$ be a cluster with $\operatorname{sons}(t) \ne \emptyset$, and let $t' \in \operatorname{sons}(t)$. The interpolation operator $\mathcal{I}^{t'}_m$ is a projection into the space of $m$-th order polynomials. Since any Lagrange polynomial $L^t_\mu$ for the cluster $t$ is in this space, we have
$$L^t_\mu = \mathcal{I}^{t'}_m[L^t_\mu] = \sum_{\nu=1}^k L^t_\mu(x^{t'}_\nu)\,L^{t'}_\nu,$$
i.e., we can express each Lagrange polynomial of the father cluster $t$ in terms of the Lagrange polynomials of the son cluster $t'$. We introduce the transfer matrix $T^{t'} \in \mathbb{R}^{k \times k}$ by setting
$$T^{t'}_{\nu\mu} = L^t_\mu(x^{t'}_\nu) \qquad (9)$$
and notice that
$$L^t_\mu = \sum_{\nu=1}^k T^{t'}_{\nu\mu}\,L^{t'}_\nu$$
implies
$$V^t_{i\nu} = \sum_{\mu=1}^k V^{t'}_{i\mu}\,T^{t'}_{\mu\nu} = (V^{t'} T^{t'})_{i\nu}$$
for all $i \in \hat t' \subseteq \hat t$. Since the index sets corresponding to the sons of $t$ are a disjoint partition of $\hat t$, we can sum over all sons in order to get
$$V^t = \sum_{t' \in \operatorname{sons}(t)} V^{t'} T^{t'}. \qquad (10)$$
This relation between father and son clusters implies that we have to store the cluster basis matrices $V^t$ only for leaves $t \in T_I$ of the cluster tree and can use the small $k \times k$ transfer matrices $T^{t'}$ to describe the bases for all other clusters. This alternative representation of the matrix $\hat K$ requires $O(ck^2 + nk)$ units of storage. By truncating the cluster tree, we can ensure $c \lesssim n/k$ and reach a complexity of $O(nk)$.

An approximation of the form (5) with blocks defined by (6) is called a uniform $\mathcal{H}$-matrix. If the cluster bases are nested, it is called an $H^2$-matrix [13, 2, 3].

Since we are using $d$-dimensional interpolation of order $m$, we require a rank of $k \sim m^d$. In many applications, this is not optimal: If the kernel function is the Newton kernel $\kappa(x,y) = 1/(4\pi\|x-y\|)$ in $\mathbb{R}^3$, a multipole expansion [15, 12] requires only $k \sim m^2$ spherical harmonics instead of $O(m^3)$ polynomials. In order to improve the efficiency, we will now remove redundant functions from the expansion system, i.e., we will use interpolation only as an initial guess and reduce $k$ by algebraic methods while keeping the good approximation properties and general applicability.
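The nested representation is what makes fast algorithms possible: only the leaf matrices and the $k \times k$ transfer matrices are stored, and quantities like $(V^t)^\top x$ for all clusters are accumulated bottom-up through (10). The sketch below (a hypothetical Cluster record and a toy two-leaf tree, not the paper's data structures) implements this forward transformation and checks it against the explicitly assembled basis of the father.

```python
import numpy as np

class Cluster:
    """Minimal cluster record: index set, list of sons, and either a leaf basis V
    (for leaves) or a transfer matrix T relating the cluster to its father, cf. (10)."""
    def __init__(self, idx, sons=(), V=None, T=None):
        self.idx, self.sons, self.V, self.T = idx, list(sons), V, T

def forward(t, x, xhat):
    """Upward pass: xhat[id(t)] = (V^t)^T x for all clusters of the subtree rooted at t,
    using (V^t)^T x = sum_{t' in sons(t)} (T^{t'})^T (V^{t'})^T x from (10)."""
    if not t.sons:
        xhat[id(t)] = t.V.T @ x[t.idx]
    else:
        acc = 0
        for t1 in t.sons:
            forward(t1, x, xhat)
            acc = acc + t1.T.T @ xhat[id(t1)]
        xhat[id(t)] = acc
    return xhat

# toy check on a two-leaf tree with rank k = 3
rng = np.random.default_rng(1)
k = 3
leaf1 = Cluster(np.arange(0, 4), V=rng.standard_normal((4, k)), T=rng.standard_normal((k, k)))
leaf2 = Cluster(np.arange(4, 8), V=rng.standard_normal((4, k)), T=rng.standard_normal((k, k)))
root = Cluster(np.arange(0, 8), sons=[leaf1, leaf2])
x = rng.standard_normal(8)
xhat = forward(root, x, {})
V_root = np.zeros((8, k))
V_root[leaf1.idx] = leaf1.V @ leaf1.T          # explicit V^root = sum V^{t'} T^{t'}, cf. (10)
V_root[leaf2.idx] = leaf2.V @ leaf2.T
print(np.linalg.norm(xhat[id(root)] - V_root.T @ x))
```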

3 Orthogonalization

A simple approach to reducing redundancy is to compare the dimension of the range of the cluster basis matrix $V^t$ with the number of columns of $V^t$, i.e., the number of expansion functions involved. If the dimension is lower than the number of columns, the expansion system obviously contains superfluous functions that can be eliminated without changing the quality of the approximation. Seen from this point of view, an orthogonal cluster basis matrix, i.e., a matrix satisfying $(V^t)^\top V^t = I$, is optimal: Since the columns are pairwise perpendicular, no column can be eliminated without changing the range of $V^t$.

3.1 Leaf clusters

A viable strategy for eliminating redundant functions from the expansion system is to perform a Gram-Schmidt orthogonalization: We are looking for a matrix $Z^t \in \mathbb{R}^{k \times k_t}$ such that the new cluster basis matrix
$$\tilde V^t := V^t Z^t$$
is orthogonal, i.e., satisfies the equation
$$(\tilde V^t)^\top \tilde V^t = (V^t Z^t)^\top V^t Z^t = I \in \mathbb{R}^{k_t \times k_t}.$$
Using $G^t = (V^t)^\top V^t$, this equation takes the form
$$(Z^t)^\top G^t Z^t = I. \qquad (11)$$
Different methods can be used to find a suitable $Z^t$: One possibility is to compute a rank-revealing Cholesky decomposition of $G^t$, which would be equivalent to the classical Gram-Schmidt procedure. In order to be able to detect redundant functions in a more direct fashion, we will use the Schur decomposition $G^t = P D P^\top$ of $G^t$ instead, i.e., we will compute an orthogonal matrix $P$ containing the eigenvectors of $G^t$ and a diagonal matrix $D = \operatorname{diag}\{\lambda_1,\dots,\lambda_k\}$ containing the corresponding eigenvalues $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_k \ge 0$. We fix a rank $k_t \in \{1,\dots,k\}$ such that $\lambda_i > 0$ holds for all $i \in \{1,\dots,k_t\}$, define the matrix $\hat D \in \mathbb{R}^{k \times k_t}$ by $\hat D_{ij} := \delta_{ij} / \sqrt{\lambda_i}$ and set $Z^t := P \hat D$. Since
$$(Z^t)^\top G^t Z^t = \hat D^\top P^\top P D P^\top P \hat D = \hat D^\top D \hat D = I$$
holds, we have found a suitable solution $Z^t$ of problem (11).

Creating the matrix $G^t$ requires $O(k^2 \#\hat t)$ operations, solving the symmetric eigenvalue problem takes $O(k^3)$ operations, and $Z^t$ can be constructed in $O(k^2 k_t)$ operations, so the total complexity is bounded by $O(k^2(k + k_t + \#\hat t))$.

Any choice of $k_t \in \{0,\dots,\operatorname{rank}(V^t)\}$ will give us a matrix $Z^t$ satisfying (11). Using $k_t < \operatorname{rank}(V^t)$ implies that the range of $\tilde V^t = V^t Z^t$ will be a proper subspace of the range of $V^t$, so the quality of the approximation may be reduced.
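A minimal sketch of this leaf orthogonalization (Python, with numpy's symmetric eigensolver playing the role of the Schur decomposition and a simple relative threshold standing in for the choice of $k_t$; both are assumptions, not the paper's prescription) is:

```python
import numpy as np

def orthogonalize_leaf(V_t, tol=1e-12):
    """Find Z^t with (V^t Z^t)^T (V^t Z^t) = I via the eigendecomposition of
    G^t = (V^t)^T V^t, cf. (11); k_t is chosen by a relative eigenvalue threshold."""
    G = V_t.T @ V_t
    lam, P = np.linalg.eigh(G)                 # ascending eigenvalues
    lam, P = lam[::-1], P[:, ::-1]             # reorder: lam[0] >= lam[1] >= ...
    k_t = int(np.sum(lam > tol * lam[0]))
    Z = P[:, :k_t] / np.sqrt(lam[:k_t])        # Z^t = P Dhat with Dhat_ii = 1 / sqrt(lambda_i)
    return Z, lam

rng = np.random.default_rng(2)
V_t = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 16))   # 16 columns, rank 10
Z, lam = orthogonalize_leaf(V_t)
V_new = V_t @ Z
print(V_new.shape, np.linalg.norm(V_new.T @ V_new - np.eye(Z.shape[1])))
```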

Since we are using the Schur decomposition to construct the approximation, we have all eigenvalues of $G^t$ at our disposal and can derive the following error estimate:

Lemma 2 (Local truncation error) We have
$$\inf\{\|V^t - \tilde V^t R\|_F^2 : R \in \mathbb{R}^{k_t \times k}\} = \|V^t - \tilde V^t (\tilde V^t)^\top V^t\|_F^2 = \epsilon^t_F := \sum_{i=k_t+1}^k \lambda_i \quad\text{and}$$
$$\inf\{\|V^t - \tilde V^t R\|_2^2 : R \in \mathbb{R}^{k_t \times k}\} = \|V^t - \tilde V^t (\tilde V^t)^\top V^t\|_2^2 = \epsilon^t_2 := \lambda_{k_t+1}.$$

Proof. Since $\tilde V^t$ is orthogonal, $\tilde V^t (\tilde V^t)^\top$ is an orthogonal projection in both the Euclidean and the Frobenius norm, so the best approximation of $V^t$ is given by $\tilde V^t (\tilde V^t)^\top V^t$. By definition of $Z^t$, we have
$$V^t - \tilde V^t (\tilde V^t)^\top V^t = V^t \bigl(I - Z^t (Z^t)^\top (V^t)^\top V^t\bigr) = V^t \bigl(I - Z^t (Z^t)^\top G^t\bigr) = V^t \bigl(I - P \hat D \hat D^\top P^\top P D P^\top\bigr) = V^t P (I - \hat D \hat D^\top D) P^\top.$$
A simple computation reveals
$$(\hat D \hat D^\top D)_{ij} = \begin{cases} \delta_{ij} & \text{if } i \le k_t,\\ 0 & \text{otherwise}, \end{cases}$$
so we find
$$\|V^t - \tilde V^t (\tilde V^t)^\top V^t\|_F^2 = \|V^t P (I - \hat D \hat D^\top D) P^\top\|_F^2 = \operatorname{trace}\bigl((I - \hat D \hat D^\top D)^\top P^\top (V^t)^\top V^t P (I - \hat D \hat D^\top D)\bigr) = \operatorname{trace}(\operatorname{diag}\{0,\dots,0,\lambda_{k_t+1},\dots,\lambda_k\}) = \sum_{i=k_t+1}^k \lambda_i.$$
The estimate for the operator norm can be derived by replacing the trace operator by the spectral radius, since $\varrho(X^\top X) = \|X\|_2^2$.

In practical applications, it is sometimes a good idea to choose the original rank $k$ to be larger than $\#\hat t$. In this case, the rank of $G^t$ would be bounded by $m := \min\{k, \#\hat t\}$ and computing the Schur decomposition of $G^t$ would give us at least $k - m > 0$ vanishing eigenvalues that are irrelevant to the orthogonalization process. In order to avoid this, we can use Householder transformations to compute a decomposition $V^t|_{\hat t \times k} = LQ$ of $V^t|_{\hat t \times k}$ into a lower triangular (with respect to an arbitrary, but fixed, ordering of $\hat t$) matrix $L \in \mathbb{R}^{\hat t \times k}$ and a unitary matrix $Q \in \mathbb{R}^{k \times k}$ that can be represented by a product of $m$ elementary reflectors. Since $L$ is lower triangular, only its first $m$ columns will differ from zero, so only the upper left $m \times m$ block of the transformed Gram matrix $Q G^t Q^\top = L^\top L$ will contain non-zero entries. In order to find the relevant eigenvalues and eigenvectors of $G^t$, we can therefore transform by $Q$, compute the Schur decomposition of the upper $m \times m$ block and reverse the transformation. Computing the LQ decomposition requires $O(km^2)$ operations, solving the reduced eigenvalue problem requires $O(m^3)$ operations, transforming the $k_t$ relevant eigenvectors requires $O(mkk_t)$ operations and creating $Z^t$ requires $O(mkk_t)$ operations, so we can construct the matrix $Z^t$ by a total of $O(m^2 k + m^3 + m k k_t) \subseteq O(m^2(k+m))$ operations, which is significantly better than the original $O(k^2 \#\hat t + k^3 + k^2 k_t)$ if $k$ is larger than $\#\hat t$.
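The reduction for the case $k > \#\hat t$ can be sketched in the same spirit; below, the LQ decomposition is obtained from numpy's QR factorization of the transposed matrix (an implementation convenience, not the Householder routine the paper refers to).

```python
import numpy as np

def orthogonalize_leaf_lq(V_t, tol=1e-12):
    """Leaf orthogonalization for k > #t_hat: reduce the k-dimensional eigenvalue problem
    to an m-dimensional one, m = min(k, #t_hat), via an LQ decomposition of V^t."""
    Q, R = np.linalg.qr(V_t.T)                 # V_t.T = Q R, hence V_t = R.T Q.T (an LQ form)
    M = R @ R.T                                # m x m block L^T L of the transformed Gram matrix
    lam, U = np.linalg.eigh(M)
    lam, U = lam[::-1], U[:, ::-1]
    k_t = int(np.sum(lam > tol * lam[0]))
    return (Q @ U[:, :k_t]) / np.sqrt(lam[:k_t])   # transform the relevant eigenvectors back

rng = np.random.default_rng(3)
V_t = rng.standard_normal((12, 64))            # #t_hat = 12 is much smaller than k = 64
Z = orthogonalize_leaf_lq(V_t)
V_new = V_t @ Z
print(np.linalg.norm(V_new.T @ V_new - np.eye(Z.shape[1])))
```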

3.2 Non-leaf clusters

Applying the Gram-Schmidt procedure directly to all clusters $V^t$ will yield orthogonal cluster bases $\tilde V^t$, but these bases will no longer be nested. Since this property is very important for the efficiency of the algorithm, we will now modify the construction in order to ensure that the orthogonalized cluster bases are again nested.

We assume that orthogonal cluster bases $\tilde V^{t'}$ have already been constructed for all $t' \in \operatorname{sons}(t)$ (e.g., by a recursive procedure). A nested cluster basis is characterized by (10), so we have to be able to find transfer matrices $\tilde T^{t'}$ for all $t' \in \operatorname{sons}(t)$ such that
$$\tilde V^t = \sum_{t' \in \operatorname{sons}(t)} \tilde V^{t'} \tilde T^{t'}$$
holds. This implies that the range of $\tilde V^t$ has to be a subspace of
$$\mathcal{V}^t := \sum_{t' \in \operatorname{sons}(t)} \operatorname{range} \tilde V^{t'} \subseteq \mathbb{R}^I.$$
Therefore, we replace $V^t$ by the best approximation satisfying this condition, i.e., by its orthogonal projection
$$\hat V^t := \sum_{t' \in \operatorname{sons}(t)} \tilde V^{t'} (\tilde V^{t'})^\top V^t. \qquad (12)$$
Since $V$ is already nested, we can use (10) in order to get
$$\hat V^t = \sum_{t' \in \operatorname{sons}(t)} \tilde V^{t'} (\tilde V^{t'})^\top V^{t'} T^{t'} = \sum_{t' \in \operatorname{sons}(t)} \tilde V^{t'} P^{t'} T^{t'},$$
where the matrices $P^{t'} := (\tilde V^{t'})^\top V^{t'}$ describe the transformation from the original to the new bases. For any matrix $Z^t \in \mathbb{R}^{k \times k_t}$, a cluster basis defined by $\tilde V^t = \hat V^t Z^t$ will still satisfy
$$\tilde V^t = \hat V^t Z^t = \sum_{t' \in \operatorname{sons}(t)} \tilde V^{t'} P^{t'} T^{t'} Z^t = \sum_{t' \in \operatorname{sons}(t)} \tilde V^{t'} \tilde T^{t'} \quad\text{with}\quad \tilde T^{t'} := P^{t'} T^{t'} Z^t. \qquad (13)$$
This means that we can proceed as in the case of leaf clusters and choose a matrix $Z^t$ satisfying
$$(Z^t)^\top \hat G^t Z^t = I \qquad (14)$$
for the modified Gram matrix
$$\hat G^t = \Bigl(\sum_{t' \in \operatorname{sons}(t)} \tilde V^{t'} P^{t'} T^{t'}\Bigr)^{\!\top} \Bigl(\sum_{t' \in \operatorname{sons}(t)} \tilde V^{t'} P^{t'} T^{t'}\Bigr) = \sum_{t' \in \operatorname{sons}(t)} (P^{t'} T^{t'})^\top (\tilde V^{t'})^\top \tilde V^{t'} (P^{t'} T^{t'}) = \sum_{t' \in \operatorname{sons}(t)} (P^{t'} T^{t'})^\top (P^{t'} T^{t'}).$$
This matrix $Z^t$ defines the transfer matrices of the orthogonal nested cluster basis we are looking for by equation (13).

In order to form $\hat G^t$, we need the matrices $P^{t'} = (\tilde V^{t'})^\top V^{t'}$. For leaf clusters, they can be constructed directly. For non-leaf clusters, we can use the nested structure of both original and new cluster bases to derive the following update equation:
$$P^t = (\tilde V^t)^\top V^t = \sum_{t' \in \operatorname{sons}(t)} (\tilde V^{t'} \tilde T^{t'})^\top V^{t'} T^{t'} = \sum_{t' \in \operatorname{sons}(t)} (\tilde T^{t'})^\top P^{t'} T^{t'}. \qquad (15)$$
For leaf clusters, we applied an LQ-decomposition in the case $\#\hat t < k$ in order to reduce the complexity. We can do the same in the case of non-leaf clusters: Let $s := \#\operatorname{sons}(t)$ and let $\operatorname{sons}(t) = \{t_1,\dots,t_s\}$. Then $\hat G^t$ can be written in the form
$$\hat G^t = (X^t)^\top X^t \quad\text{with}\quad X^t := \begin{pmatrix} P^{t_1} T^{t_1} \\ \vdots \\ P^{t_s} T^{t_s} \end{pmatrix}. \qquad (16)$$
The auxiliary matrix $X^t$ has $q := k_{t_1} + \dots + k_{t_s}$ rows and $k$ columns. If $q < k$ holds, we can again use Householder transformations to compute an LQ-decomposition $X^t = LQ$ of $X^t$ and solve an $m$-dimensional eigenvalue problem with $m := \min\{q, k\}$ instead of a $k$-dimensional one. We can compute $\hat G^t$ in $O(qk^2)$, find the matrix $Z^t$ in $O(m^2 k + m^3 + m k k_t) \subseteq O(m^2(k+m))$, and construct $P^t$ in $O(q(k+k_t)k) \subseteq O(qk^2)$ operations, so the total complexity for orthogonalizing $V^t$ is in $O(qk^2 + m^2(k+m))$.
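A compact recursive sketch of the whole nested orthogonalization (Sections 3.1 and 3.2) is given below; it reuses the Cluster records and the eigenvalue-based truncation from the previous sketches and is, again, only an illustration of the structure of the algorithm, not the paper's implementation.

```python
import numpy as np

def truncate_eig(G, tol=1e-12):
    """Return Z with Z^T G Z = I for the dominant eigenvalues of the Gram matrix G."""
    lam, P = np.linalg.eigh(G)
    lam, P = lam[::-1], P[:, ::-1]
    k_t = int(np.sum(lam > tol * lam[0]))
    return P[:, :k_t] / np.sqrt(lam[:k_t])

def orthogonalize(t, tol=1e-12):
    """Recursive orthogonalization: stores the new leaf bases in t.V_new and the new transfer
    matrices in t.T_new, and returns P^t = (V~^t)^T V^t (cf. (13) and (15))."""
    if not t.sons:
        Z = truncate_eig(t.V.T @ t.V, tol)     # leaf cluster: G^t = (V^t)^T V^t
        t.V_new = t.V @ Z
        return t.V_new.T @ t.V                 # P^t for the father
    blocks = [orthogonalize(t1, tol) @ t1.T for t1 in t.sons]   # rows P^{t'} T^{t'} of X^t, cf. (16)
    X = np.vstack(blocks)
    Z = truncate_eig(X.T @ X, tol)             # modified Gram matrix G^hat = (X^t)^T X^t
    P_t = 0
    for t1, B in zip(t.sons, blocks):
        t1.T_new = B @ Z                       # new transfer matrices T~^{t'} = P^{t'} T^{t'} Z^t, cf. (13)
        P_t = P_t + t1.T_new.T @ B             # P^t = sum (T~^{t'})^T P^{t'} T^{t'}, cf. (15)
    return P_t
```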

Lemma 3 (Local truncation error) Let $\lambda_1 \ge \dots \ge \lambda_k \ge 0$ be the eigenvalues of $\hat G^t$. Then we have
$$\inf\{\|\hat V^t - \tilde V^t R\|_F^2 : R \in \mathbb{R}^{k_t \times k}\} = \|\hat V^t - \tilde V^t (\tilde V^t)^\top \hat V^t\|_F^2 = \epsilon^t_F := \sum_{i=k_t+1}^k \lambda_i \quad\text{and}$$
$$\inf\{\|\hat V^t - \tilde V^t R\|_2^2 : R \in \mathbb{R}^{k_t \times k}\} = \|\hat V^t - \tilde V^t (\tilde V^t)^\top \hat V^t\|_2^2 = \epsilon^t_2 := \lambda_{k_t+1}.$$
Proof. Apply Lemma 2 to $\hat V^t$.

3.3 Global properties

We define the set of descendants of a cluster $t \in T_I$ by
$$\operatorname{sons}^*(t) := \{s \in T_I : \hat s \subseteq \hat t\}$$

and the corresponding transfer matrices by
$$T^{s,t} := \begin{cases} I & \text{if } s = t,\\ T^{s,t'}\,T^{t'} & \text{if } t' \in \operatorname{sons}(t) \text{ with } \hat s \subseteq \hat t' \end{cases}$$
for all $s \in \operatorname{sons}^*(t)$. Due to the definition of $\operatorname{sons}^*(t)$ and the cluster tree $T_I$, $T^{s,t}$ is well-defined. We can combine the local estimates to get a global bound for the truncation error:

Theorem 4 (Truncation error) We have
$$\|V^t - \tilde V^t (\tilde V^t)^\top V^t\|_F^2 \le \sum_{s \in \operatorname{sons}^*(t)} \epsilon^s_F\,\|T^{s,t}\|_2^2 \qquad (17)$$
and
$$\|V^t - \tilde V^t (\tilde V^t)^\top V^t\|_2^2 \le \sum_{s \in \operatorname{sons}^*(t)} \epsilon^s_2\,\|T^{s,t}\|_2^2 \qquad (18)$$
for all $t \in T_I$. Here, $\epsilon^s_F$, $\epsilon^s_2$ are defined as in Lemma 2 for leaf clusters and as in Lemma 3 for non-leaf clusters.

Proof. The proof is split into three parts: We start by proving that the local error in a cluster is perpendicular to all of its ancestors, then we represent the global error in terms of the local errors, and we conclude by demonstrating that we can apply Pythagoras' equality in order to get a bound for the global error.

Step 1: We denote the local errors by
$$E^t := \begin{cases} V^t - \tilde V^t (\tilde V^t)^\top V^t & \text{if } \operatorname{sons}(t) = \emptyset,\\ \hat V^t - \tilde V^t (\tilde V^t)^\top \hat V^t & \text{otherwise} \end{cases}$$
for all $t \in T_I$. We will start by proving that
$$\langle \tilde V^t x, E^s y \rangle = 0 \qquad (19)$$
holds for all $s, t \in T_I$ with $\hat s \subseteq \hat t$ and all $x \in \mathbb{R}^{k_t}$, $y \in \mathbb{R}^k$. We do this by induction over $\operatorname{level}(s) - \operatorname{level}(t) = n \in \mathbb{N}_0$. The case $\operatorname{level}(s) = \operatorname{level}(t)$ is trivial, since it implies $s = t$.

Let $n \in \mathbb{N}_0$. We assume that (19) holds for all $s, t \in T_I$ with $\operatorname{level}(s) - \operatorname{level}(t) = n$ and $\hat s \subseteq \hat t$. Let $s, t \in T_I$ with $\operatorname{level}(s) - \operatorname{level}(t) = n+1$ and $\hat s \subseteq \hat t$. The assumption implies $\operatorname{sons}(t) \ne \emptyset$, so there is a $t' \in \operatorname{sons}(t)$ with $\hat s \subseteq \hat t'$, and $\operatorname{level}(s) - \operatorname{level}(t') = n$. We apply the induction assumption to get
$$\langle \tilde V^t x, E^s y \rangle = \Bigl\langle \sum_{t'' \in \operatorname{sons}(t)} \tilde V^{t''} \tilde T^{t''} x,\ E^s y \Bigr\rangle = \sum_{t'' \in \operatorname{sons}(t)} \langle \tilde V^{t''} \tilde T^{t''} x, E^s y \rangle = 0,$$
since the terms with $\hat t'' \cap \hat s = \emptyset$ vanish due to the disjoint supports and the remaining term is covered by the induction assumption; this completes the induction.

Step 2: We denote the global errors by
$$\hat E^t := V^t - \tilde V^t (\tilde V^t)^\top V^t.$$

By definition, we have $\hat E^t = E^t$ for all $t \in T_I$ with $\operatorname{sons}(t) = \emptyset$, i.e., for all leaf clusters. For non-leaf clusters, we can use
$$(\tilde V^t)^\top V^t = (Z^t)^\top (\hat V^t)^\top V^t = (Z^t)^\top \sum_{t' \in \operatorname{sons}(t)} (P^{t'} T^{t'})^\top (\tilde V^{t'})^\top V^{t'} T^{t'} = (Z^t)^\top \sum_{t' \in \operatorname{sons}(t)} (P^{t'} T^{t'})^\top P^{t'} T^{t'} = (\tilde V^t)^\top \hat V^t$$
in order to prove
$$\hat E^t = V^t - \hat V^t + \hat V^t - \tilde V^t (\tilde V^t)^\top V^t = \sum_{t' \in \operatorname{sons}(t)} \bigl(V^{t'} - \tilde V^{t'} (\tilde V^{t'})^\top V^{t'}\bigr) T^{t'} + \hat V^t - \tilde V^t (\tilde V^t)^\top \hat V^t = \sum_{t' \in \operatorname{sons}(t)} \hat E^{t'} T^{t'} + E^t,$$
so the recurrence relation
$$\hat E^t = \begin{cases} E^t & \text{if } \operatorname{sons}(t) = \emptyset,\\ E^t + \sum_{t' \in \operatorname{sons}(t)} \hat E^{t'} T^{t'} & \text{otherwise} \end{cases}$$
holds for all $t \in T_I$. A simple induction using the definitions of $T^{s,t}$ and $\hat E^t$ yields
$$\hat E^t = \sum_{s \in \operatorname{sons}^*(t)} E^s T^{s,t} \qquad (20)$$
for all $t \in T_I$.

Step 3: In order to be able to apply Pythagoras' equality to the norm of (20), we have to establish that the ranges of the terms $E^s T^{s,t}$ appearing in this sum are orthogonal, i.e., that
$$\langle E^{s_1} x, E^{s_2} y \rangle = 0 \qquad (21)$$
holds for all $s_1, s_2 \in \operatorname{sons}^*(t)$ with $s_1 \ne s_2$ and all $x, y \in \mathbb{R}^k$.

If $\hat s_1 \cap \hat s_2 = \emptyset$ holds, this follows directly from the definition of the matrices defining $E^{s_1}$ and $E^{s_2}$: For all $i \in \hat s_1$, we have $i \notin \hat s_2$, so the $i$-th row of $E^{s_2}$ vanishes, and therefore the entire inner product also vanishes.

If $\hat s_1 \cap \hat s_2 \ne \emptyset$ holds, we have either $s_1 \in \operatorname{sons}^*(s_2)$ or $s_2 \in \operatorname{sons}^*(s_1)$. Since both cases are similar, we consider only the latter one. Our assumptions imply $\operatorname{sons}(s_1) \ne \emptyset$, i.e.,
$$E^{s_1} x = \hat V^{s_1} x - \tilde V^{s_1} (\tilde V^{s_1})^\top \hat V^{s_1} x = \hat V^{s_1} x - \hat V^{s_1} Z^{s_1} (\tilde V^{s_1})^\top \hat V^{s_1} x = \hat V^{s_1} \bigl(I - Z^{s_1} (\tilde V^{s_1})^\top \hat V^{s_1}\bigr) x.$$
Now we can apply (19) to prove (21), and Pythagoras' equality yields
$$\|\hat E^t\|_F^2 = \sum_{s \in \operatorname{sons}^*(t)} \|E^s T^{s,t}\|_F^2 \quad\text{and}\quad \|\hat E^t\|_2^2 \le \sum_{s \in \operatorname{sons}^*(t)} \|E^s T^{s,t}\|_2^2.$$

Using the submultiplicativity of the norms and the error bounds from Lemma 2 and Lemma 3 concludes the proof.

In order to apply this general error estimate to our case, we need a bound for the operator norm of the matrices $T^{s,t}$. By a simple induction based on the definition (9), we find
$$T^{s,t}_{\nu\mu} = L^t_\mu(x^s_\nu).$$
If the interpolation operator is stable with stability constant $\Lambda$, we have $\|L^t_\nu\|_\infty \le \Lambda$, i.e., $|T^{s,t}_{ij}| \le \Lambda$ for all $i, j \in \{1,\dots,k\}$. This implies $\|T^{s,t}\|_2^2 \le k^2 \Lambda^2$.

Theorem 5 (Matrix error) Let $V = (V^t)_{t \in T_I}$ and $W = (W^s)_{s \in T_I}$ be cluster bases and let $\tilde V = (\tilde V^t)_{t \in T_I}$ and $\tilde W = (\tilde W^s)_{s \in T_I}$ be orthogonal cluster bases satisfying
$$\|V^t - \tilde V^t (\tilde V^t)^\top V^t\|_F^2 \le \epsilon_t \quad\text{and}\quad \|W^s - \tilde W^s (\tilde W^s)^\top W^s\|_F^2 \le \epsilon_s$$
for all $t, s \in T_I$. For each $(t,s) \in P_{\mathrm{far}}$, we define
$$\tilde S^{t,s} := (\tilde V^t)^\top V^t S^{t,s} (W^s)^\top \tilde W^s.$$
Then we get the following error bound:
$$\|V^t S^{t,s} (W^s)^\top - \tilde V^t \tilde S^{t,s} (\tilde W^s)^\top\|_F^2 \le \epsilon_t\,\|S^{t,s} (W^s)^\top\|_2^2 + \epsilon_s\,\|V^t S^{t,s}\|_2^2.$$
Proof. By Pythagoras' equality, we have
$$\|V^t S^{t,s} (W^s)^\top - \tilde V^t \tilde S^{t,s} (\tilde W^s)^\top\|_F^2 = \|V^t S^{t,s} (W^s)^\top - \tilde V^t \bigl((\tilde V^t)^\top V^t S^{t,s} (W^s)^\top \tilde W^s\bigr) (\tilde W^s)^\top\|_F^2$$
$$= \|V^t S^{t,s} (W^s)^\top - \tilde V^t (\tilde V^t)^\top V^t S^{t,s} (W^s)^\top\|_F^2 + \|\tilde V^t (\tilde V^t)^\top V^t S^{t,s} (W^s)^\top - \tilde V^t (\tilde V^t)^\top V^t S^{t,s} (W^s)^\top \tilde W^s (\tilde W^s)^\top\|_F^2$$
$$\le \|V^t - \tilde V^t (\tilde V^t)^\top V^t\|_F^2\,\|S^{t,s} (W^s)^\top\|_2^2 + \|\tilde V^t (\tilde V^t)^\top V^t S^{t,s}\|_2^2\,\|(W^s)^\top - (W^s)^\top \tilde W^s (\tilde W^s)^\top\|_F^2 \le \epsilon_t\,\|S^{t,s} (W^s)^\top\|_2^2 + \|V^t S^{t,s}\|_2^2\,\epsilon_s.$$

This general error estimate implies that we need bounds for $\|S^{t,s} (W^s)^\top\|_2$ and $\|V^t S^{t,s}\|_2$ in order to find error bounds for the approximated matrix. We will consider only the second case, since the first follows directly due to symmetry. For each of the matrix entries of $V^t S^{t,s}$ with $(t,s) \in P_{\mathrm{far}}$, we get
$$(V^t S^{t,s})_{i\mu} = \sum_{\nu=1}^k V^t_{i\nu} S^{t,s}_{\nu\mu} = \sum_{\nu=1}^k \int_\Omega \varphi_i(x) L^t_\nu(x)\,dx\;\kappa(x^t_\nu, x^s_\mu) = \int_\Omega \varphi_i(x)\,\mathcal{I}^t_m[\kappa(\cdot, x^s_\mu)](x)\,dx \le \|\varphi_i\|_{L^1}\,\|\mathcal{I}^t_m[\kappa(\cdot, x^s_\mu)]\|_{L^\infty(B_t)} \le \Lambda\,\|\varphi_i\|_{L^1}\,\|\kappa(\cdot, x^s_\mu)\|_{L^\infty(B_t)}.$$

Since $\kappa$ is asymptotically smooth (cf. (2)) and since $(t,s)$ is admissible, we find
$$\|\kappa(\cdot, x^s_\mu)\|_{L^\infty(B_t)} \lesssim \operatorname{dist}(B_t, B_s)^{-g} \lesssim \operatorname{diam}(B_t)^{-g}.$$
For quasi-regular grids with grid parameter $h$, we can expect $h \lesssim \operatorname{diam}(B_t)$ and $\|\varphi_i\|_{L^1} \lesssim h^{d_\Omega}$, where $d_\Omega$ is the dimension of the sub-manifold $\Omega$, so combining the estimates gives us
$$|(V^t S^{t,s})_{i\mu}| \lesssim \Lambda\,h^{d_\Omega - g}$$
and therefore $\|V^t S^{t,s}\|_2^2 \lesssim \Lambda^2 h^{2(d_\Omega - g)}\,k\,\#\hat t$.

Lemma 6 (Complexity) For a general cluster tree with $c$ clusters, the orthogonalization algorithm requires $O(ck^3) \subseteq O(nk^3)$ operations.

Proof. Using the LQ decomposition, we can perform all operations for a leaf cluster in $O(m^2(k+m))$ and for a non-leaf cluster in $O(k^2 k_t + m^2(k+m))$ operations. Since $m \le k$ and $k_t \le k$ hold, we have
$$\sum_{t \in T_I} \bigl(k^2 k_t + m^2(k+m)\bigr) \le \sum_{t \in T_I} \bigl(k^2 k_t + 2k^3\bigr) \le \sum_{t \in T_I} 3k^3 = 3ck^3$$
and get the desired bound for the total complexity.

Remark 7 For balanced cluster trees, the above complexity estimate can be improved. In order to keep the argument simple, we assume that $n = 2^p$ holds and that $T_I$ is a balanced binary tree. This implies that $\#\hat t = 2^{p - \operatorname{level}(t)}$ holds for all $t \in T_I$. Then the improved bound of $O(nk^2(\log_2 k + 1))$ for the complexity can be derived as follows: We split the set of clusters into
$$T_{\mathrm{large}} := \{t \in T_I : \#\hat t \ge k\} \quad\text{and}\quad T_{\mathrm{small}} := \{t \in T_I : \#\hat t < k\}$$
and define
$$l^* := \min\{l : 2^{p-l} < k\}.$$
For all $t \in T_{\mathrm{large}}$, we have $2^{p - \operatorname{level}(t)} = \#\hat t \ge k$, i.e., $\operatorname{level}(t) < l^*$. The minimality of $l^*$ implies $2^{p - (l^*-1)} \ge k$ and therefore
$$\#T_{\mathrm{large}} \le 2^{l^*} = 2 \cdot 2^{l^*-1} \le 2\,\frac{2^p}{k} = 2n/k.$$
The orthogonalization requires not more than $O(k^3)$ operations for each cluster $t \in T_{\mathrm{large}}$, so summing over all of these clusters gives us a complexity of $O(nk^2)$.

Now let us consider the clusters in $T_{\mathrm{small}}$. For each $t$ in this set, we have $2^{p - \operatorname{level}(t)} = \#\hat t < k$, i.e., $\operatorname{level}(t) \ge l^*$. By definition, we have $m = \min\{k, \#\hat t\}$, so we need not more than $O(m^2(k+m)) \subseteq O(k^2 \#\hat t)$ operations for leaf clusters and not more than $O(m^2(k+m) + (k_{t_1} + k_{t_2})k^2) \subseteq O(k^2 \#\hat t)$ operations for non-leaf clusters.

Due to
$$\sum_{t \in T_{\mathrm{small}}} k^2 \#\hat t \le k^2 \sum_{l=l^*}^p \sum_{\{t\,:\,\operatorname{level}(t) = l\}} \#\hat t = k^2 \sum_{l=l^*}^p n = k^2 n (p - l^* + 1) \le k^2 n (\log_2(k) + 1),$$
the total complexity for all clusters in $T_{\mathrm{small}}$ is in $O(nk^2(\log_2(k)+1))$. Combining the estimates for $T_{\mathrm{large}}$ and $T_{\mathrm{small}}$, we get the desired bound for the complexity of the entire algorithm.

Remark 8 In [18], a similar approach is used in the context of multipole expansions and wavelet bases: The matrix $V^t$ is created by discretizing harmonic polynomials and then orthogonalized by using a singular value decomposition.

4 Recompression

The orthogonalization process uses only the cluster basis and transfer matrices. These matrices depend only on the discretization and the geometry, but not on the kernel function. This means that we cannot expect the process to reach the efficiency of multipole methods, since these use expansions that are designed for a specific type of kernel function. If we aim for optimal cluster bases, we therefore have to take the kernel function into account.

After polynomial approximation, all the information on the kernel function is contained in the coefficient matrices $S^{t,s}$ (cf. (8)), and we can use this information to construct better cluster bases. Our approach is based on the algorithm presented in [2]. By modifying the computation of the Gram matrices used in the eigenvalue problems, we can derive a variant that requires only $O(ck^3)$ operations.

4.1 Efficient computation of Gram matrices

The fundamental step in the adaptive algorithm is the efficient computation of Gram matrices describing the interaction of the basis vectors. We apply this algorithm to an $H^2$-matrix approximation $\hat K$ that can be constructed by interpolation or similar techniques.

For the adaptive algorithm, we will need restrictions of matrices to subsets. We introduce the notation
$$(A|_{\hat t \times \hat s})_{ij} := \begin{cases} A_{ij} & \text{if } i \in \hat t,\ j \in \hat s,\\ 0 & \text{otherwise} \end{cases}$$
for matrices $A \in \mathbb{R}^{I \times I}$ and clusters $t, s \in T_I$. The basis construction algorithm from [2] requires the cluster Gram matrix
$$G^t := \sum_{s \in P^{t+}_{\mathrm{far}}} \hat K|_{\hat t \times \hat s}\,(\hat K|_{\hat t \times \hat s})^\top \qquad (22)$$
for
$$P^{t+}_{\mathrm{far}} := \{s \in T_I : \text{there exists } t^+ \in T_I \text{ with } \hat t \subseteq \hat t^+ \text{ and } (t^+, s) \in P_{\mathrm{far}}\}$$
(compare [2, eq. (5.4)]).

The basic idea for building $G^t$ efficiently is to split $P^{t+}_{\mathrm{far}}$ into tree levels and use the recursive representation
$$G^t = \begin{cases} \displaystyle\sum_{s \in P^t_{\mathrm{far}}} \hat K|_{\hat t \times \hat s}\,(\hat K|_{\hat t \times \hat s})^\top + G^{t^+}|_{\hat t \times \hat t} & \text{if } t \text{ has a father } t^+ \in T_I,\\[1ex] \displaystyle\sum_{s \in P^t_{\mathrm{far}}} \hat K|_{\hat t \times \hat s}\,(\hat K|_{\hat t \times \hat s})^\top & \text{otherwise}, \end{cases}$$
with $P^t_{\mathrm{far}} := \{s \in T_I : (t,s) \in P_{\mathrm{far}}\}$. Using the special structure (6) of the terms in the sum, we get
$$\hat K|_{\hat t \times \hat s}\,(\hat K|_{\hat t \times \hat s})^\top = V^t S^{t,s} (W^s)^\top W^s (S^{t,s})^\top (V^t)^\top.$$
Since the cluster basis $V$ is nested, we have
$$\bigl(V^{t^+} X (V^{t^+})^\top\bigr)|_{\hat t \times \hat t} = V^t T^t X (T^t)^\top (V^t)^\top$$
for any matrix $X \in \mathbb{R}^{k \times k}$, so a simple induction shows that
$$G^t = V^t C^t (V^t)^\top \qquad (23)$$
holds for all $t \in T_I$, where $C^t$ is given by
$$C^t := \begin{cases} \displaystyle\sum_{s \in P^t_{\mathrm{far}}} S^{t,s} (W^s)^\top W^s (S^{t,s})^\top + T^t C^{t^+} (T^t)^\top & \text{if } t \text{ has a father } t^+ \in T_I,\\[1ex] \displaystyle\sum_{s \in P^t_{\mathrm{far}}} S^{t,s} (W^s)^\top W^s (S^{t,s})^\top & \text{otherwise}. \end{cases} \qquad (24)$$
We can use the nested structure of the cluster basis $W$ in order to prepare the auxiliary matrices $Y^s := (W^s)^\top W^s$ for all $s \in T_I$ by a recursive procedure in $O(ck^3)$ operations. Using $Y^s$, the matrices $C^t$ for all $t \in T_I$ can be computed in $O(ck^3)$ operations.
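These two recursions translate almost literally into code. The sketch below (Python; all containers are assumed to be dictionaries keyed by clusters, and row_blocks[t] is assumed to hold the list $P^t_{\mathrm{far}}$; these are illustrative choices, not the paper's data structures) prepares $Y^s$ bottom-up and $C^t$ top-down as in (24).

```python
import numpy as np

def compute_Y(s, sons, W, T, Y):
    """Bottom-up computation of the auxiliary matrices Y^s = (W^s)^T W^s:
    leaves use the stored basis, non-leaf clusters use the transfer matrices of their sons."""
    if not sons[s]:
        Y[s] = W[s].T @ W[s]
    else:
        Y[s] = sum(T[s1].T @ compute_Y(s1, sons, W, T, Y) @ T[s1] for s1 in sons[s])
    return Y[s]

def compute_C(t, k, sons, row_blocks, S, T, Y, C, C_father=None):
    """Top-down computation of C^t via the recursion (24), so that G^t = V^t C^t (V^t)^T."""
    C[t] = np.zeros((k, k))
    for s in row_blocks.get(t, ()):            # P^t_far = {s : (t, s) in P_far}
        C[t] += S[t, s] @ Y[s] @ S[t, s].T     # S^{t,s} (W^s)^T W^s (S^{t,s})^T
    if C_father is not None:
        C[t] += T[t] @ C_father @ T[t].T       # contribution inherited from the father cluster
    for t1 in sons[t]:
        compute_C(t1, k, sons, row_blocks, S, T, Y, C, C[t])
```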

4.2 Construction of adaptive bases

According to [2], the optimal cluster basis matrix $\tilde V^t \in \mathbb{R}^{\hat t \times k_t}$ for a leaf cluster $t \in T_I$ is orthogonal and maximizes the quantity
$$\sum_{s \in P^{t+}_{\mathrm{far}}} \|(\tilde V^t)^\top \hat K|_{\hat t \times \hat s}\|_F^2,$$
and the solution of this maximization problem can be constructed by using an orthogonal basis of the eigenvectors corresponding to the $k_t$ largest eigenvalues of the matrix $G^t$ defined in (22). Due to (23), $G^t$ can be computed in $O((k + \#\hat t)\,k\,\#\hat t)$ operations, and the eigenvalue problem can be solved in $O(\#\hat t^3)$ operations. Equation (23) implies that the rank of $G^t$ cannot exceed $m := \min\{k, \#\hat t\}$.

If $t \in T_I$ is not a leaf cluster, we again have to ensure that the new cluster basis is nested. As in the case of orthogonalization, we do this by projecting $V^t$ into the space spanned by the cluster bases of the sons of $t$, i.e., by replacing $V^t$ by its orthogonal projection $\hat V^t$ defined in (12). This leads to a modified Gram matrix
$$\hat G^t := \hat V^t C^t (\hat V^t)^\top.$$
This matrix differs in one important point from the one used in the orthogonalization procedure: Since $\hat V^t$ appears outside of the product, we would have to solve an eigenvalue problem of dimension $\#\hat t$. This would be too expensive for large clusters, so we have to find a way of reducing the dimension.

For $\operatorname{sons}(t) = \{t_1,\dots,t_s\}$, we can split the projected matrix $\hat V^t$ into an orthogonal rectangular matrix and a remainder with $q := k_{t_1} + \dots + k_{t_s}$ rows:
$$\hat V^t = \sum_{t' \in \operatorname{sons}(t)} \tilde V^{t'} P^{t'} T^{t'} = \begin{pmatrix} \tilde V^{t_1} & \dots & \tilde V^{t_s} \end{pmatrix} \begin{pmatrix} P^{t_1} T^{t_1} \\ \vdots \\ P^{t_s} T^{t_s} \end{pmatrix}.$$
If we set $Q^t := \begin{pmatrix} \tilde V^{t_1} & \dots & \tilde V^{t_s} \end{pmatrix}$ and recall the definition of $X^t$ in (16), we can rewrite $\hat G^t$ in the form
$$\hat G^t = Q^t X^t C^t (X^t)^\top (Q^t)^\top.$$
Since the matrix $Q^t$ is orthogonal, the non-zero eigenvalues of the matrices $X^t C^t (X^t)^\top$ and $\hat G^t$ are identical, and the eigenvectors of the latter can be computed from those of the former by applying $Q^t$. This means that we can solve the eigenvalue problem by only $O(q^3)$ operations instead of the $O((\#\hat t)^3)$ operations of the direct approach.

We construct the orthogonal matrix $Z^t \in \mathbb{R}^{q \times k_t}$ from an orthogonal eigenvector basis corresponding to the $k_t$ largest eigenvalues of $X^t C^t (X^t)^\top$. Then $\tilde V^t = Q^t Z^t$ contains the eigenvectors of $\hat G^t$ corresponding to its $k_t$ largest eigenvalues. Since $t$ is not a leaf cluster, we have to compute the transfer matrices $\tilde T^{t'}$ satisfying
$$\tilde V^t = \sum_{t' \in \operatorname{sons}(t)} \tilde V^{t'} \tilde T^{t'} = \begin{pmatrix} \tilde V^{t_1} & \dots & \tilde V^{t_s} \end{pmatrix} \begin{pmatrix} \tilde T^{t_1} \\ \vdots \\ \tilde T^{t_s} \end{pmatrix} = Q^t \begin{pmatrix} \tilde T^{t_1} \\ \vdots \\ \tilde T^{t_s} \end{pmatrix}.$$
Due to the definition of $\tilde V^t$ and the orthogonality of $Q^t$, this implies
$$\begin{pmatrix} \tilde T^{t_1} \\ \vdots \\ \tilde T^{t_s} \end{pmatrix} = (Q^t)^\top \tilde V^t = (Q^t)^\top Q^t Z^t = Z^t,$$
and we can extract the transfer matrices directly from $Z^t$.
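For a single non-leaf cluster, the construction just described amounts to a few lines; the following sketch (hypothetical argument layout: lists of the sons' matrices $P^{t_i}$ and $T^{t_i}$, the matrix $C^t$, and a prescribed rank $k_t$) returns the new transfer matrices read off the row blocks of $Z^t$.

```python
import numpy as np

def adaptive_nonleaf_basis(P_sons, T_sons, C_t, k_t):
    """Adaptive basis for a non-leaf cluster t: diagonalize the small matrix X^t C^t (X^t)^T
    and return the new transfer matrices T~^{t_i}; the new basis is V~^t = sum_i V~^{t_i} T~^{t_i}."""
    X = np.vstack([P @ T for P, T in zip(P_sons, T_sons)])   # X^t as in (16), q rows
    M = X @ C_t @ X.T                                        # q x q instead of #t_hat x #t_hat
    lam, U = np.linalg.eigh(M)
    Z = U[:, ::-1][:, :k_t]          # orthonormal eigenvectors for the k_t largest eigenvalues
    sizes = [P.shape[0] for P in P_sons]
    return np.split(Z, np.cumsum(sizes)[:-1], axis=0)        # row blocks of Z^t = the T~^{t_i}
```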

4.3 Complete algorithm

We have seen that we can compute adaptive row cluster bases efficiently if the matrices $C^t$ have been prepared in advance. In order to do this efficiently, we need the additional auxiliary matrices $Y^s := (W^s)^\top W^s$. Since $W$ is nested, we have
$$Y^s = \begin{cases} \displaystyle\sum_{s' \in \operatorname{sons}(s)} (T^{s'})^\top Y^{s'} T^{s'} & \text{if } \operatorname{sons}(s) \ne \emptyset,\\[1ex] (W^s)^\top W^s & \text{otherwise}, \end{cases}$$
so all matrices $Y^s$ can be computed by a recursive procedure in $O(ck^3)$ operations. Using the recursive definition (24), we can compute all matrices $C^t$ in $O(ck^3)$ operations.

Lemma 9 (Complexity) Let $C_{\mathrm{leaf}}, C_{\mathrm{sp}}, C_{\mathrm{sons}} \in \mathbb{N}$ be constants such that $\#\hat t \le C_{\mathrm{leaf}}$ holds for all leaf clusters $t \in T_I$, $\#\operatorname{sons}(t) \le C_{\mathrm{sons}}$ holds for all non-leaf clusters $t \in T_I$, and
$$\#\{s : (t,s) \in P_{\mathrm{far}}\} \le C_{\mathrm{sp}} \quad\text{and}\quad \#\{s : (s,t) \in P_{\mathrm{far}}\} \le C_{\mathrm{sp}} \qquad (25)$$
holds for all clusters $t \in T_I$. Then the adaptive construction of a row or a column cluster basis requires $O(ck^3)$ operations.

Proof. We start by noticing that column cluster bases can be computed by applying our algorithm to $\hat K^\top$ instead of $\hat K$, so we only have to consider the case of row cluster bases. Due to (25), the construction of $C^t$ can be accomplished in $O(k^3)$ operations by using equation (24). Computing all matrices $C^t$ recursively requires $O(ck^3)$ operations. Forming the matrices $X^t$ requires $O(k^2 k_t)$ operations for each cluster, leading to a total of $O(ck^3)$ operations. Using $C_{\mathrm{sons}}$ and $C_{\mathrm{leaf}}$, we can show that each eigenvalue problem can be solved in $O(k^3)$ operations, giving us again a total of $O(ck^3)$. Using the recursive equation (15), the matrices $P^t$ can also be computed in $O(ck^3)$ operations, which concludes the proof.

In our case, the error analysis of [2] takes the following form:

Theorem 10 (Truncation error) For leaf or non-leaf clusters $t \in T_I$, denote the eigenvalues of $G^t$ or $\hat G^t$, respectively, by $\lambda^t_1 \ge \dots \ge \lambda^t_m$ and set
$$\epsilon^t_F := \sum_{i=k_t+1}^m \lambda^t_i.$$
The matrix $\check K \in \mathbb{R}^{I \times I}$ defined by
$$\check K|_{\hat t \times \hat s} := \begin{cases} \tilde V^t \tilde S^{t,s} (W^s)^\top & \text{if } (t,s) \in P_{\mathrm{far}},\\ \hat K|_{\hat t \times \hat s} & \text{otherwise} \end{cases}$$
with $\tilde S^{t,s} := (\tilde V^t)^\top V^t S^{t,s} = P^t S^{t,s}$ for $(t,s) \in P_{\mathrm{far}}$ satisfies
$$\|\hat K - \check K\|_F^2 \le \sum_{t \in T_I} \epsilon^t_F.$$

Proof. See [2, Remark 5.1, Lemma 7.1].

Of course, a similar estimate holds for the column cluster bases, so the approximation error for orthogonalized row and column bases can be bounded by means of [2, Lemma 5.2].

Remark 11 (Adaptivity) We can control the approximation error by choosing $k_t$ appropriately. This has to be done carefully, since we have to balance three sources of errors: the discretization error of the Galerkin method, the error introduced by interpolating the kernel function, and the approximation error of the algebraic compression. If the discretization error is large, choosing a high-order interpolation scheme is not a good idea. And if the interpolation order is low, using a high rank in the recompression algorithm will only approximate the numerical garbage introduced by the inaccurate interpolation.

Remark 12 (Variable rank) For special kernel functions, it is possible to choose the order of the interpolation for each box $B_t$ depending on the size of $B_t$. If the size of the box and the order are carefully balanced, the $H^2$-matrix constructed by interpolation will require only $O(n)$ units of storage and the matrix-vector multiplication can be accomplished in $O(n)$ operations ([16, 17, 5]). We can apply both the orthogonalization and the recompression algorithm presented here to these modified approximations, and a careful analysis shows that they will also require only $O(n)$ operations.

5 Numerical experiments

We test the algorithms by applying them to the trace of the classical double layer potential operator
$$\mathcal{K}_{\mathrm{DLP}}[u](x) := \int_\Gamma u(y)\,\frac{\partial}{\partial n(y)}\,\frac{1}{4\pi\|x-y\|}\,dy \quad\text{for } x \in \Gamma,$$
discretized by piecewise constant or linear basis functions on a quasi-regular triangulation of the unit sphere $\Gamma_S := \{x : \|x\|_2 = 1\}$ or the unit cube $\Gamma_C := \{x : \|x\|_\infty = 1\}$. The matrices of the approximation are defined by
$$V^t_{i\nu} := \int_\Gamma \varphi_i(x) L^t_\nu(x)\,dx, \qquad W^s_{j\mu} := \int_\Gamma \varphi_j(y)\,\frac{\partial}{\partial n(y)} L^s_\mu(y)\,dy, \qquad S^{t,s}_{\nu\mu} := \frac{1}{4\pi\|x^t_\nu - x^s_\mu\|},$$
where the Lagrange polynomials $L^t_\nu$ correspond to tensor Chebyshev interpolation on the minimal axis-parallel box $B_t$ containing the supports of all piecewise constant basis functions $\varphi_i$ with $i \in \hat t$. We construct the cluster tree by recursive bisection, stopping if a cluster contains not more than 32 degrees of freedom, and use the simple admissibility condition
$$\max\{\operatorname{diam}(B_t), \operatorname{diam}(B_s)\} \le \eta \operatorname{dist}(B_t, B_s)$$
with $\eta = 2$ instead of (3) to create the block partition.

In order to reduce the storage requirements, the original $H^2$-matrix approximation constructed by interpolation is not stored, but created in temporary storage whenever the orthogonalization or recompression algorithms require it. All computations were performed by HLib [1] on UltraSPARC IIIcu processors running at 900 MHz.

In the first example, we apply the orthogonalization algorithm to an intermediate approximation constructed by cubic interpolation. The results of the experiment are reported in Table 1. The first and second columns give the time for the construction of the approximation: the first column contains the total time in seconds, the second contains the time per degree of freedom in milliseconds. The third and fourth columns give the amount of storage needed, again the total in MB and per degree of freedom in KB. The fifth column contains the time in seconds required for one matrix-vector multiplication, and the sixth column gives the relative approximation error in the operator norm. We can see that the relative error is bounded by $10^{-3}$ and that time and storage requirements grow linearly in the number of degrees of freedom.

[Table 1: Orthogonalization of cluster bases on the sphere $\Gamma_S$ with cubic interpolation. Columns: $n$, Build (s), Bld/$n$ (ms), Memory (MB), Mem/$n$ (KB), MVM (s), Error.]

The second example is the recompression algorithm. Comparing the results in Table 2 to those of the orthogonalization procedure, we see that the storage requirements and the time per matrix-vector multiplication are significantly reduced, while the time for building the approximation is only slightly increased. The storage requirements can be reduced further by increasing the admissibility parameter $\eta$. In order to reach a sufficient accuracy despite the resulting weaker admissibility condition, we would have to increase the interpolation order, and this implies a higher computational complexity.

[Table 2: Recompression of cluster bases on the sphere $\Gamma_S$ with cubic interpolation and recompression tolerance $\epsilon = 10^{-3}$. Columns as in Table 1.]

There are applications that require an approximation error below the value of $10^{-3}$ used in Table 2. In order to reach the range of $10^{-4}$, we have to use quartic instead of cubic

interpolation (i.e., five interpolation points per coordinate direction) and reduce the tolerance for the recompression algorithm to $10^{-4}$. The results are listed in Table 3: The computation time is increased significantly, since each coefficient matrix now has $5^6 = 15625$ entries instead of $4^6 = 4096$, but the storage requirements grow only moderately. The relative approximation error is close to the desired value of $10^{-4}$; increasing the interpolation order further would reduce it below this mark.

[Table 3: Recompression of cluster bases on the sphere $\Gamma_S$ with quartic interpolation and recompression tolerance $\epsilon = 10^{-4}$. Columns as in Table 1.]

Experiments carried out on the unit sphere $\Gamma_S$ do not provide us with information about the behaviour of our method for domains with edges. In order to verify that the recompression algorithm can cope with this type of domain, we consider the unit cube $\Gamma_C$. Table 4 shows that we can reach a relative error below $10^{-3}$ by using quartic interpolation. Compared to the unit sphere, the storage requirements are reduced, but the quartic interpolation leads to a 50% increase of the time required for building the approximation.

[Table 4: Recompression of cluster bases on the cube $\Gamma_C$ with quartic interpolation and recompression tolerance $\epsilon = 10^{-3}$. Columns as in Table 1.]

References

[1] S. Börm and L. Grasedyck, HLib, a library for $\mathcal{H}$- and $H^2$-matrices. Available at

[2] S. Börm and W. Hackbusch, Data-sparse approximation by adaptive $H^2$-matrices, Computing, 69 (2002).

[3] S. Börm and W. Hackbusch, $H^2$-matrix approximation of integral operators by interpolation, Applied Numerical Mathematics, 43 (2002).

[4] S. Börm and W. Hackbusch, Approximation of boundary element operators by adaptive $H^2$-matrices, Foundations of Computational Mathematics, 312 (2004).

[5] S. Börm, M. Löhndorf, and J. M. Melenk, Approximation of integral operators by variable-order interpolation, Tech. Rep. 82, Max Planck Institute for Mathematics in the Sciences. To appear in Numerische Mathematik.

[6] S. Börm and S. A. Sauter, BEM with linear complexity for the classical boundary integral operators, Tech. Rep. 15, Institute of Mathematics, University of Zurich. To appear in Mathematics of Computation.

[7] A. Brandt and A. A. Lubrecht, Multilevel matrix multiplication and fast solution of integral equations, J. Comput. Phys., 90 (1990).

[8] W. Dahmen and R. Schneider, Wavelets on manifolds I: Construction and domain decomposition, SIAM Journal on Mathematical Analysis, 31 (1999).

[9] K. Giebermann, Multilevel approximation of boundary integral operators, Computing, 67 (2001).

[10] L. Grasedyck and W. Hackbusch, Construction and arithmetics of $\mathcal{H}$-matrices, Computing, 70 (2003).

[11] L. Greengard and V. Rokhlin, A fast algorithm for particle simulations, Journal of Computational Physics, 73 (1987).

[12] L. Greengard and V. Rokhlin, A new version of the fast multipole method for the Laplace equation in three dimensions, in Acta Numerica 1997, Cambridge University Press, 1997.

[13] W. Hackbusch, B. Khoromskij, and S. Sauter, On $H^2$-matrices, in Lectures on Applied Mathematics, H. Bungartz, R. Hoppe, and C. Zenger, eds., Springer-Verlag, Berlin, 2000.

[14] W. Hackbusch and Z. P. Nowak, On the fast matrix multiplication in the boundary element method by panel clustering, Numerische Mathematik, 54 (1989).

[15] V. Rokhlin, Rapid solution of integral equations of classical potential theory, Journal of Computational Physics, 60 (1985).

[16] S. Sauter, Variable order panel clustering (extended version), Tech. Rep. 52, Max-Planck-Institut für Mathematik, Leipzig, Germany.

[17] S. Sauter, Variable order panel clustering, Computing, 64 (2000).

[18] J. Tausch and J. White, Multiscale bases for the sparse representation of boundary integral operators on complex geometries, SIAM J. Sci. Comput., 24 (2003).

Steffen Börm
Max-Planck-Institut für Mathematik in den Naturwissenschaften
Inselstraße 22-26
04103 Leipzig
Germany


Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

An Adaptive Hierarchical Matrix on Point Iterative Poisson Solver

An Adaptive Hierarchical Matrix on Point Iterative Poisson Solver Malaysian Journal of Mathematical Sciences 10(3): 369 382 (2016) MALAYSIAN JOURNAL OF MATHEMATICAL SCIENCES Journal homepage: http://einspem.upm.edu.my/journal An Adaptive Hierarchical Matrix on Point

More information

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion:

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion: tifica Lecture 1: Computation of Karhunen-Loeve Expansion: Alexander Litvinenko http://sri-uq.kaust.edu.sa/ Stochastic PDEs We consider div(κ(x, ω) u) = f (x, ω) in G, u = 0 on G, with stochastic coefficients

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

An H-LU Based Direct Finite Element Solver Accelerated by Nested Dissection for Large-scale Modeling of ICs and Packages

An H-LU Based Direct Finite Element Solver Accelerated by Nested Dissection for Large-scale Modeling of ICs and Packages PIERS ONLINE, VOL. 6, NO. 7, 2010 679 An H-LU Based Direct Finite Element Solver Accelerated by Nested Dissection for Large-scale Modeling of ICs and Packages Haixin Liu and Dan Jiao School of Electrical

More information

Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices

Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices Jaehyun Park June 1 2016 Abstract We consider the problem of writing an arbitrary symmetric matrix as

More information

1. Structured representation of high-order tensors revisited. 2. Multi-linear algebra (MLA) with Kronecker-product data.

1. Structured representation of high-order tensors revisited. 2. Multi-linear algebra (MLA) with Kronecker-product data. Lect. 4. Toward MLA in tensor-product formats B. Khoromskij, Leipzig 2007(L4) 1 Contents of Lecture 4 1. Structured representation of high-order tensors revisited. - Tucker model. - Canonical (PARAFAC)

More information

Parallel Numerical Algorithms

Parallel Numerical Algorithms Parallel Numerical Algorithms Chapter 6 Matrix Models Section 6.2 Low Rank Approximation Edgar Solomonik Department of Computer Science University of Illinois at Urbana-Champaign CS 554 / CSE 512 Edgar

More information

This can be accomplished by left matrix multiplication as follows: I

This can be accomplished by left matrix multiplication as follows: I 1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method

More information

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017 ACM/CMS 17 Linear Analysis & Applications Fall 217 Assignment 2: PDEs and Finite Element Methods Due: 7th November 217 For this assignment the following MATLAB code will be required: Introduction http://wwwmdunloporg/cms17/assignment2zip

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

5 Compact linear operators

5 Compact linear operators 5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Simple Examples on Rectangular Domains

Simple Examples on Rectangular Domains 84 Chapter 5 Simple Examples on Rectangular Domains In this chapter we consider simple elliptic boundary value problems in rectangular domains in R 2 or R 3 ; our prototype example is the Poisson equation

More information

Numerical Methods for Solving Large Scale Eigenvalue Problems

Numerical Methods for Solving Large Scale Eigenvalue Problems Peter Arbenz Computer Science Department, ETH Zürich E-mail: arbenz@inf.ethz.ch arge scale eigenvalue problems, Lecture 2, February 28, 2018 1/46 Numerical Methods for Solving Large Scale Eigenvalue Problems

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

COMPARED to other computational electromagnetic

COMPARED to other computational electromagnetic IEEE TRANSACTIONS ON MICROWAVE THEORY AND TECHNIQUES, VOL. 58, NO. 12, DECEMBER 2010 3697 Existence of -Matrix Representations of the Inverse Finite-Element Matrix of Electrodynamic Problems and -Based

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.

The value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N. Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class

More information

Part IB - Easter Term 2003 Numerical Analysis I

Part IB - Easter Term 2003 Numerical Analysis I Part IB - Easter Term 2003 Numerical Analysis I 1. Course description Here is an approximative content of the course 1. LU factorization Introduction. Gaussian elimination. LU factorization. Pivoting.

More information

Math 407: Linear Optimization

Math 407: Linear Optimization Math 407: Linear Optimization Lecture 16: The Linear Least Squares Problem II Math Dept, University of Washington February 28, 2018 Lecture 16: The Linear Least Squares Problem II (Math Dept, University

More information

M.A. Botchev. September 5, 2014

M.A. Botchev. September 5, 2014 Rome-Moscow school of Matrix Methods and Applied Linear Algebra 2014 A short introduction to Krylov subspaces for linear systems, matrix functions and inexact Newton methods. Plan and exercises. M.A. Botchev

More information

Teil 2: Hierarchische Matrizen

Teil 2: Hierarchische Matrizen Teil 2: Hierarchische Matrizen Mario Bebendorf, Universität Leipzig bebendorf@math.uni-leipzig.de Bebendorf, Mario (Univ. Leipzig) Kompaktkurs lineare GS und H-Matrizen 24. 2. April 2 1 / 1 1 1 2 1 2 2

More information

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background

Lecture notes on Quantum Computing. Chapter 1 Mathematical Background Lecture notes on Quantum Computing Chapter 1 Mathematical Background Vector states of a quantum system with n physical states are represented by unique vectors in C n, the set of n 1 column vectors 1 For

More information

Matrix decompositions

Matrix decompositions Matrix decompositions Zdeněk Dvořák May 19, 2015 Lemma 1 (Schur decomposition). If A is a symmetric real matrix, then there exists an orthogonal matrix Q and a diagonal matrix D such that A = QDQ T. The

More information

PART II : Least-Squares Approximation

PART II : Least-Squares Approximation PART II : Least-Squares Approximation Basic theory Let U be an inner product space. Let V be a subspace of U. For any g U, we look for a least-squares approximation of g in the subspace V min f V f g 2,

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Two Results About The Matrix Exponential

Two Results About The Matrix Exponential Two Results About The Matrix Exponential Hongguo Xu Abstract Two results about the matrix exponential are given. One is to characterize the matrices A which satisfy e A e AH = e AH e A, another is about

More information

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix

Scientific Computing with Case Studies SIAM Press, Lecture Notes for Unit VII Sparse Matrix Scientific Computing with Case Studies SIAM Press, 2009 http://www.cs.umd.edu/users/oleary/sccswebpage Lecture Notes for Unit VII Sparse Matrix Computations Part 1: Direct Methods Dianne P. O Leary c 2008

More information

Krylov Subspace Methods that Are Based on the Minimization of the Residual

Krylov Subspace Methods that Are Based on the Minimization of the Residual Chapter 5 Krylov Subspace Methods that Are Based on the Minimization of the Residual Remark 51 Goal he goal of these methods consists in determining x k x 0 +K k r 0,A such that the corresponding Euclidean

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation

CS 450 Numerical Analysis. Chapter 8: Numerical Integration and Differentiation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS

SPRING 2006 PRELIMINARY EXAMINATION SOLUTIONS SPRING 006 PRELIMINARY EXAMINATION SOLUTIONS 1A. Let G be the subgroup of the free abelian group Z 4 consisting of all integer vectors (x, y, z, w) such that x + 3y + 5z + 7w = 0. (a) Determine a linearly

More information

4. Multi-linear algebra (MLA) with Kronecker-product data.

4. Multi-linear algebra (MLA) with Kronecker-product data. ect. 3. Tensor-product interpolation. Introduction to MLA. B. Khoromskij, Leipzig 2007(L3) 1 Contents of Lecture 3 1. Best polynomial approximation. 2. Error bound for tensor-product interpolants. - Polynomial

More information

5.3 The Power Method Approximation of the Eigenvalue of Largest Module

5.3 The Power Method Approximation of the Eigenvalue of Largest Module 192 5 Approximation of Eigenvalues and Eigenvectors 5.3 The Power Method The power method is very good at approximating the extremal eigenvalues of the matrix, that is, the eigenvalues having largest and

More information

Generalized eigenvector - Wikipedia, the free encyclopedia

Generalized eigenvector - Wikipedia, the free encyclopedia 1 of 30 18/03/2013 20:00 Generalized eigenvector From Wikipedia, the free encyclopedia In linear algebra, for a matrix A, there may not always exist a full set of linearly independent eigenvectors that

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

L. Levaggi A. Tabacco WAVELETS ON THE INTERVAL AND RELATED TOPICS

L. Levaggi A. Tabacco WAVELETS ON THE INTERVAL AND RELATED TOPICS Rend. Sem. Mat. Univ. Pol. Torino Vol. 57, 1999) L. Levaggi A. Tabacco WAVELETS ON THE INTERVAL AND RELATED TOPICS Abstract. We use an abstract framework to obtain a multilevel decomposition of a variety

More information

This property turns out to be a general property of eigenvectors of a symmetric A that correspond to distinct eigenvalues as we shall see later.

This property turns out to be a general property of eigenvectors of a symmetric A that correspond to distinct eigenvalues as we shall see later. 34 To obtain an eigenvector x 2 0 2 for l 2 = 0, define: B 2 A - l 2 I 2 = È 1, 1, 1 Î 1-0 È 1, 0, 0 Î 1 = È 1, 1, 1 Î 1. To transform B 2 into an upper triangular matrix, subtract the first row of B 2

More information

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by 1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How

More information

Math 113 Final Exam: Solutions

Math 113 Final Exam: Solutions Math 113 Final Exam: Solutions Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P 2 (R) denote the real vector space of polynomials of degree 2. Consider the following inner product on P

More information