Karhunen-Loève Approximation of Random Fields by Generalized Fast Multipole Methods


Karhunen-Loève Approximation of Random Fields by Generalized Fast Multipole Methods

C. Schwab and R.A. Todor

Research Report No. 2006-01
January 2006

Seminar für Angewandte Mathematik
Eidgenössische Technische Hochschule
CH-8092 Zürich
Switzerland

Karhunen-Loève Approximation of Random Fields by Generalized Fast Multipole Methods

C. Schwab and R.A. Todor

Seminar für Angewandte Mathematik, Eidgenössische Technische Hochschule, CH-8092 Zürich, Switzerland

Research Report No. 2006-01, January 2006

Abstract

KL approximation of a possibly instationary random field $a(\omega,x) \in L^2(\Omega,dP;L^\infty(D))$ subject to prescribed mean field $E_a(x) = \int_\Omega a(\omega,x)\,dP(\omega)$ and covariance $V_a(x,x') = \int_\Omega (a(\omega,x)-E_a(x))(a(\omega,x')-E_a(x'))\,dP(\omega)$ in a polyhedral domain $D \subset \mathbb{R}^d$ is analyzed. We show how, for stationary covariances $V_a(x,x') = g_a(|x-x'|)$ with $g_a(z)$ analytic outside of $z=0$, an $M$-term approximate KL-expansion $a_M(\omega,x)$ of $a(\omega,x)$ can be computed in log-linear complexity. The approach applies in arbitrary domains $D$ and for nonseparable covariances $C_a$. It involves Galerkin approximation of the KL eigenvalue problem by discontinuous finite elements of degree $p$ on a quasiuniform, possibly unstructured mesh of width $h$ in $D$, plus a generalized Fast Multipole accelerated Krylov eigensolver. The approximate KL-expansion $a_M(x,\omega)$ of $a(x,\omega)$ has accuracy $O(\exp(-bM^{1/d}))$ if $g_a$ is analytic at $z=0$ and accuracy $O(M^{-k/d})$ if $g_a$ is $C^k$ at zero. It is obtained in $O(MN(\log N)^b)$ operations, where $N = O(h^{-d})$.

1 Introduction

Accurate numerical prediction of properties for mass-produced specimens requires accounting for uncertainties in their components. Testing procedures (destructive or nondestructive) that are limited to sample specimens allow only the collection of statistical data on mass-manufactured components. They can, in general, not ascertain material properties for any particular specimen. This mandates statistical modelling of, for example, spatially inhomogeneous material and solution characteristics in Finite Element simulations. To this end, we employ a probability space $(\Omega,\Sigma,P)$ and assume that the material property of interest is a spatially inhomogeneous random field, i.e. a $P$-measurable map $a(\cdot,\omega):\Omega\to L^\infty(D)$. To be able to speak about means and variances of $a(x,\omega)$ we assume

$$a \in L^2(\Omega, dP; L^\infty(D)). \qquad (1.1)$$

Random fields (1.1) arise in two broad classes of applications:

1. Measurements $M_N = \{a_j(x) : j = 1,\dots,N\}$ of $a$ can be considered as realizations of $N$ independent random variables $\{a_j(\cdot,\omega)\}_{j=1}^N$ distributed identically to the underlying random field $a(x,\omega)$. For example, in digitized light microscopy or computer tomography, samples $a(\cdot,\omega_j)$ correspond to (pixel or voxel) datasets that are too large as input data for continuum mechanical simulations. Besides accounting for measurement uncertainty, statistical modelling of $a(x,\omega)$ serves the purpose of data reduction, i.e. the description of the random field's statistics in terms of a finite, preferably small, number of parameters.

2. In subsurface flow and reservoir simulation, there is only one specimen with given, deterministic $a(x)$. Information about $a(x)$ can only be gathered at a few points, so that here $M_N = \{a(x_j) : j = 1,\dots,N\}$, and the uncertainty in the spatial variation of $a(x)$ between the $x_j$ is modelled by statistical procedures (see, e.g., [2, 3, 3, 6] for more on this). The uncertainty in $a(x)$ is once more described by a random field (1.1).

Given a random diffusion coefficient $a(x,\omega)$, prediction of, say, a concentration $u(x,\omega)$ requires the solution of a stochastic partial differential equation such as

$$f(x) + \operatorname{div}(a(x,\omega)\,\nabla u(x,\omega)) = 0 \quad \text{in } D. \qquad (1.2)$$

The simplest approach to the solution of (1.2) is Monte-Carlo (MC) simulation. Here, samples $a(x,\omega_j)$ of $a(x,\omega)$ with prescribed statistical properties are generated and, for each sample, a deterministic problem (1.2) is solved for $u(x,\omega_j)$. From a sufficiently large set of solution samples, moments of the random solution $u(x,\omega)$ can be estimated.

An alternative to MC simulation is the Stochastic Galerkin Method. Proposed originally by N. Wiener [26], one selects a basis in (and thereby introduces coordinates into)

4 L 2 (Ω, dp ). Then a(x, ω) is approximated by separating deterministic and stochastic variables, i.e. by M a M (x, ω) = φ m (x)y m (ω) (.3) m= where Y m : Ω Ω m are suitably chosen random variables with ranges Ω m R and probability measures π m (dy m ) = π m (y m )dy m. The random solution u(x, ω) L 2 (Ω, dp ; V ) (with V denoting a suitable Hilbert space of finite energy solutions) of (.2) is projected onto the finite dimensional space (Π M m= Ω m, Σ M, P M ) where Σ M is the σ-algebra of Borel subsets of Π M m= Ω m and P M = π (dy )... π M (dy M ). The computational efficiency of the stochastic Galerkin approach strongly depends on judicious selection of the coordinates Y m in L 2 (Ω, dp ) ([8, 5]). N. Wiener and many investigators afterwards (e.g. [7, ] and the references there) proposed so-called chaos expansions of u(x, ω) in Hermite polynomials of Gaussian random variables Y m which are orthogonal with respect to the Gaussian probability density. They are dense in L 2 (Ω, dp ) but not problem-adapted. Analogous polynomial systems which are orthogonal with respect to more general probability measures were proposed recently [22, 27]. Due to the high cost of the stochastic Galerkin FEM for large M ([5, 8]), considerable computational work could (and should) be spent on finding optimal (with respect to an error measure for a a M ) separated approximations (.3) of a. In this paper, we address this issue under assumption (.), if a a M is measured in L 2 (Ω, dp ; L 2 (D)). In this case, an approximation (.3) with certain optimality properties is obtained by truncating the Karhunen-Loéve (KL) expansion of a(x, ω). To define the KL-expansion, we assume that the known information on a(x, ω) includes mean field and two-point correlation, i.e. that E a (x) := a(x, ω) dp (ω) and C a (x, x ) := a(x, ω)a(x, ω) dp (ω) (.4) Ω are known. An equivalent assumption is that the mean field E a and its covariance V a are known, since by definition, Ω V a (x, x ) := C a (x, x ) E a (x)e a (x ). (.5) Due to (.), the 2-point correlation of a(x, ω) is well-defined and belongs to L (D D). Associate with V a a compact, self-adjoint operator V a : L 2 (D) L 2 (D) via (V a u)(x) = V a (x, x )u(x )dx (.6) x D and denote by (λ m, φ m (x)) m= the sequence of its eigenpairs, with λ λ 2... λ m and with φ m (x) constituting an orthonormal basis of L 2 (D). Then the KL expansion of the random field (.) takes the form a(x, ω) = E a (x) + λm φ m (x)x m (ω) (.7) m= 2

where the $X_m(\omega)$ are centered (mean zero), pairwise uncorrelated random variables on probability spaces $(\Omega_m,\Sigma_m,P_m)_{m\in\mathbb{N}}$. They relate to $a(x,\omega)$ via

$$X_m(\omega) = \frac{1}{\sqrt{\lambda_m}}\int_D \big(a(x,\omega)-E_a(x)\big)\,\phi_m(x)\,dx, \quad m = 1,2,\dots \qquad (1.8)$$

Note that, in order to allow a proper parametrization of the uncertainty space (independence of the coordinates), the family $(X_m)_{m\ge1}$ of random variables in the Karhunen-Loève expansion is often assumed to be independent. This need not be the case for an arbitrary random field $a$, so that the independence assumption could in fact introduce an additional data representation error.

We compute a KL-approximation of $a(x,\omega)$ based on $M_N$ by truncation of (1.7) after $M$ terms and by Galerkin approximation of the first $M$ KL-eigenpairs, for a given covariance kernel $V_a$: if $(\lambda^h_m,\phi^h_m(x))_{m=1}^M$ denote approximate eigenpairs of $\mathcal{V}_a$ in (1.6) based on a one-parameter family of subspaces $S_h\subset L^2(D)$, the corresponding approximate KL-expansion $a^h_M(x,\omega)$ of the random field $a(x,\omega)$, based on $M_N$, is given by

$$a^h_M(x,\omega) = E_a(x) + \sum_{m=1}^{M}\sqrt{\lambda^h_m}\,\phi^h_m(x)\,X^h_m(\omega), \qquad (1.9)$$

where the laws $\pi^h_m$ of the random variables $X^h_m:\Omega\to\mathbb{R}$ can be determined from the experiments $M_N$ and from $(\lambda^h_m,\phi^h_m(x))_{m=1}^M$ by a maximum likelihood estimate

$$\min_{Y^h_m(\omega)}\;\sum_{a(x)\in M_N}\Big\| a(x) - \Big( E_a(x) + \sum_{m=1}^{M}\sqrt{\lambda^h_m}\,\phi^h_m(x)\,Y^h_m(\omega)\Big)\Big\|^2_{L^2(\Omega\times D)}. \qquad (1.10)$$

The laws $\pi^h_m$ of the $Y^h_m$ are constrained such that the product measures $P^M(\omega):=\prod_{m=1}^M\pi^h_m(\omega)$ are probability measures which approach, as $M\to\infty$ and $h\to0$, the law of $a(x,\omega)$, i.e. the measure $P$.

Since the decay of $a-a^h_M$ with $a^h_M$ in (1.9) as $M\to\infty$ is essential for the complexity of the stochastic Galerkin approximation of (1.2), we analyze the decay of the $\lambda_m$ and the regularity of the $\phi_m(x)$ in dependence on the smoothness of $V_a(x,x')$. We show that the $M$-term KL truncation error behaves, asymptotically as $M\to\infty$, like the best approximation error of $a(x,\omega)$ from the usual FE-spaces in $D$. However, for any finite and, in particular, small value of $M$, the KL-expansion is the best possible $M$-term approximation of $a(x,\omega)$ in $L^2(\Omega\times D)$.

The efficient computation of approximate Karhunen-Loève expansions (1.9) of a possibly nonstationary random field $a(x,\omega)$ in a polyhedral domain $D\subset\mathbb{R}^d$, for given mean field $E_a(x)$ and subject to an arbitrary, prescribed spatial covariance function $V_a(x,x')$, is the purpose of the present paper. Our approach is based on a Ritz-Galerkin approximation of the KL-eigenvalue problem

$$\mathcal{V}_a\phi = \lambda\phi. \qquad (1.11)$$

Since the covariance operator $\mathcal{V}_a$ in (1.6) is nonlocal, the matrix of its Galerkin discretization is fully populated.
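The truncated expansion (1.9) can be illustrated numerically. The following is a minimal sketch, not the paper's method: it approximates the leading KL eigenpairs of a covariance kernel on a 1D grid by a simple Nyström-type discretization and synthesizes sample fields; the mean field, the covariance, the grid, and the Gaussian choice for the coefficients $X_m$ are illustrative assumptions.

```python
# Minimal sketch of a truncated KL synthesis (1.9) on a 1D grid. Illustrative only.
import numpy as np

def kl_eigenpairs(cov, x, M):
    h = x[1] - x[0]
    lam, vecs = np.linalg.eigh(h * cov(x[:, None], x[None, :]))   # midpoint-rule operator
    idx = np.argsort(lam)[::-1][:M]
    return np.maximum(lam[idx], 0.0), vecs[:, idx] / np.sqrt(h)   # L2-normalized modes

x = np.linspace(0.0, 1.0, 300)
E_a = np.full_like(x, 2.0)                                        # mean field E_a(x)
cov = lambda s, t: 0.25 * np.exp(-(s - t) ** 2 / 0.1 ** 2)        # example Gaussian covariance
M = 20
lam, phi = kl_eigenpairs(cov, x, M)

rng = np.random.default_rng(0)
X = rng.standard_normal((M, 5))                                   # 5 samples of uncorrelated X_m
a_M = E_a[:, None] + phi @ (np.sqrt(lam)[:, None] * X)            # a_M(x, omega) per sample
print(a_M.shape, a_M.min(), a_M.max())
```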

We approximate the leading $M$ eigenpairs of $\mathcal{V}_a$ by a Krylov subspace iteration [9] requiring only matrix-vector multiplies. The fully populated moment matrix of the discretized covariance $\mathcal{V}_a$ need never be formed explicitly, since an approximate matrix-vector multiplication can be realized in linear complexity by a generalized Fast Multipole Method (gfmm) in $D$. This algorithm extends the Greengard-Rokhlin method (e.g. [2] and the references there), which was developed and highly optimized for the Coulomb potential, to most covariances $V_a(x,x')$ used in statistical modelling of spatially inhomogeneous random fields (e.g. [2, 3]), trading generality for efficiency by replacing spherical harmonic expansions with tensorized polynomial interpolants, but retaining fast shift operations.

To ensure well-posedness of (1.2), we assume in addition to (1.1) that $a\in L^\infty(D\times\Omega)$ is strictly positive, with lower and upper bounds $a_-$ and $a_+$, respectively, which satisfy

$$0 < a_- \le a(x,\omega) \le a_+ < \infty, \quad \lambda\times P\text{-a.e. } (x,\omega)\in D\times\Omega. \qquad (1.12)$$

Note that the existence of $C_a$ and $V_a$ is ensured by (1.1) and by (1.12).

The outline of the paper is as follows: in Section 2, we present formal definitions and properties of the KL-expansion of random fields. We then give estimates on the rate of decay of the KL eigenvalues $\lambda_m$. These estimates are crucial in determining the approximation rate of truncated, $M$-term KL-expansions. In Section 3, we discuss the Galerkin approximation of truncated $M$-term KL-expansions, for a given kernel $V_a(x,x')$. Section 4 addresses the generalized Fast Multipole Method, gives algorithmic details such as kernel interpolation and the realization of the shift operations, and establishes exponential convergence of the multipole error with respect to the expansion order.

2 Karhunen-Loève (KL) Expansion

We introduce the definitions and basic properties of KL expansions of random fields $a(x,\omega)$ which are second order, i.e. which satisfy (1.1). The basic reference is [7], Chap. XI. We analyze the rate of decay of the KL eigenvalues and the regularity of the KL eigenfunctions.

2.1 Correlation, KL Expansion

Let $(H_1,\langle\cdot,\cdot\rangle_{H_1})$, $(H_2,\langle\cdot,\cdot\rangle_{H_2})$ and $(S,\langle\cdot,\cdot\rangle_S)$ be separable Hilbert spaces. If $(s_m)_{m\in\Lambda}$ is an orthonormal basis (ONB) in $S$ ($\Lambda$ is either finite or $\mathbb{N}$), any element $f\in H_1\otimes S$ can be uniquely represented as a convergent series

$$f = \sum_{m\in\Lambda} f_m\otimes s_m. \qquad (2.1)$$

It is then easy to prove

Proposition 2.1. The mapping

$$(H_1\otimes S)\times(H_2\otimes S)\ni(f,g)\;\mapsto\;C_{fg}:=\sum_{m\in\Lambda}f_m\otimes g_m\in H_1\otimes H_2 \qquad (2.2)$$

is well-defined, bilinear and bounded with norm $1$, and it does not depend on the choice of the basis $(s_m)_{m\in\Lambda}$ in $S$.

Based on Proposition 2.1 we give the following

Definition 2.2. For $f\in H_1\otimes S$ and $g\in H_2\otimes S$, we call $C_{fg}\in H_1\otimes H_2$ defined in Proposition 2.1 the correlation of the pair $(f,g)$.

If $H_1 = H_2 = H$ and the corresponding scalar products also coincide, we show next that the set $\{C_f := C_{ff}\mid f\in H\otimes S\}$ of all correlation kernels is in one-to-one correspondence with a certain class of operators on $H$. We denote by $B_{sym}(H)$ the space of symmetric bounded linear operators in a Hilbert space $(H,\langle\cdot,\cdot\rangle_H)$, while for $p>0$, $B^c_{sym,p}(H)$ will be the space of symmetric compact linear operators in $H$ whose eigenvalue sequence belongs to $\ell^p$. The operators in $B^c_{sym,1}(H)$ are termed trace class.

Theorem 2.3. If $(H,\langle\cdot,\cdot\rangle_H)$ and $(S,\langle\cdot,\cdot\rangle_S)$ are separable Hilbert spaces of the same dimension and $(s_m)_{m\in\Lambda}$ is an ONB in $S$, the correlations of elements in $H\otimes S$ are in a one-to-one correspondence with the positive definite trace class operators in $H$, via

$$f = \sum_{m\in\Lambda}f_m\otimes s_m\;\longmapsto\;C_f = \sum_{m\in\Lambda}f_m\otimes f_m, \qquad C_f : H\ni x\mapsto\sum_{m\in\Lambda}\langle f_m,x\rangle_H\,f_m\in H. \qquad (2.3)$$

Proof. Obviously, the operator $C_f$ defined on the r.h.s. of (2.3) is compact, as a norm limit of the finite rank operators obtained by truncating the series. The positivity of $C_f$ is also clear, so it remains to check that its trace is finite. Choosing an ONB $(e_m)_{m\in\Lambda}$ in $H$, we have

$$\operatorname{Tr}C_f = \sum_{m\in\Lambda}\langle C_fe_m,e_m\rangle_H = \sum_{m\in\Lambda}\sum_{n\in\Lambda}|\langle f_n,e_m\rangle_H|^2 = \sum_{m\in\Lambda}\|f_m\|^2_H = \|f\|^2_{H\otimes S} < \infty. \qquad (2.4)$$

The mapping (2.3) is therefore well-defined. From the identity

$$\langle C_fx,y\rangle_H = \langle C_f,x\otimes y\rangle_{H\otimes H}, \quad x,y\in H, \qquad (2.5)$$

it follows that the definition of $C_f$ does not depend on the basis $(s_m)_{m\in\Lambda}$ and that the mapping (2.3) is injective.

We check the surjectivity of (2.3). To this end, let $C$ be a positive definite trace class operator in $H$. $C$ is in particular compact and has an eigenpair sequence $(\lambda_m,\phi_m)_{m\in\Lambda}$,

$$C\phi_m = \lambda_m\phi_m, \quad m\in\Lambda. \qquad (2.6)$$

The eigenvalues $(\lambda_m)_{m\in\Lambda}$ have finite multiplicity, their sequence is nonincreasing and may accumulate only at $0$. Moreover, the trace class condition reads

$$\sum_{m\in\Lambda}\lambda_m < \infty. \qquad (2.7)$$

Then the series

$$\sum_{m\in\Lambda}\sqrt{\lambda_m}\,\phi_m\otimes s_m \qquad (2.8)$$

converges, due to (2.7), to an element $f\in H\otimes S$ for which we clearly have

$$C_f = \sum_{m\in\Lambda}\lambda_m\,\phi_m\otimes\phi_m. \qquad (2.9)$$

From (2.5), (2.6), (2.9) it follows that $C_f$ has the same spectral decomposition as $C$, i.e. $C_f = C$.

As a consequence of Theorem 2.3 it is easily seen that

Corollary 2.4. Let $(H,\langle\cdot,\cdot\rangle_H)$ be a separable Hilbert space and $C\in H\otimes H$ a correlation kernel. Then, in terms of the spectral decomposition (2.6) of $C\in B(H)$ defined as in (2.5), $C$ can be represented as

$$C = \sum_{m\in\Lambda}\lambda_m\,\phi_m\otimes\phi_m. \qquad (2.10)$$

The description of all the elements in $H\otimes S$ with a given correlation kernel is achieved in the following result.

Theorem 2.5. Consider separable Hilbert spaces $(H,\langle\cdot,\cdot\rangle_H)$, $(S,\langle\cdot,\cdot\rangle_S)$ and a correlation kernel $C\in H\otimes H$, together with its representation (2.10). Then $f\in H\otimes S$ satisfies $C_f = C$ iff there exists an orthonormal family $(X_m)_{m\in\Lambda}\subset S$ such that

$$f = \sum_{m\in\Lambda}\sqrt{\lambda_m}\,\phi_m\otimes X_m. \qquad (2.11)$$

Proof. The 'if' part follows by the arguments used to conclude the proof of Theorem 2.3, after completing the family $(X_m)_{m\in\Lambda}$ to an ONB. Conversely, if $C_f = C$, then we expand

$$f = \sum_{m\in\Lambda}\phi_m\otimes Y_m, \qquad (2.12)$$

with $(Y_m)_{m\in\Lambda}\subset S$, from which it follows via Proposition 2.1

$$C_f = \sum_{m,m'\in\Lambda}\langle Y_m,Y_{m'}\rangle_S\,\phi_m\otimes\phi_{m'}. \qquad (2.13)$$

Comparing (2.13) and (2.10), it follows that

$$\langle Y_m,Y_{m'}\rangle_S = \lambda_m\,\delta_{mm'} \qquad (2.14)$$

and (2.11) holds with $X_m := Y_m/\sqrt{\lambda_m}$.

Definition 2.6. The expansion (2.11) of $f$ in terms of the spectral decomposition of $C_f$ is called the Karhunen-Loève expansion of $f$.

Partial sums of the KL expansion of $f\in H\otimes S$ are optimal approximations of $f$ in subspaces of $H\otimes S$ which are finite dimensional in the first argument.

Theorem 2.7. If $f\in H\otimes S$ has the KL expansion (2.11), then for any $M\in\mathbb{N}$ it holds

$$\inf_{\substack{U\subset H\\ \dim U=M}}\|f-P_{U\otimes S}f\|^2_{H\otimes S} = \sum_{m\ge M+1}\lambda_m, \qquad (2.15)$$

with the infimum attained only for $U = \operatorname{Span}\{\phi_1,\phi_2,\dots,\phi_M\}$ (and the minimizer $g$ consequently the $M$-th truncate of (2.11)).

Proof. It is clear that equality holds for $U = \operatorname{Span}\{\phi_1,\phi_2,\dots,\phi_M\}$, since $P_{U\otimes S}f$ is then the $M$-th truncate of (2.11). It is also clear that (2.15) holds for $M=0$. It suffices therefore to prove that the infimum in (2.15) cannot be smaller than the r.h.s. of (2.15). We argue by induction on $M$. Note first that w.l.o.g. we can assume the family $(X_m)_{m\in\Lambda}$ to be an ONB of $S$. Consider now $U\subset H$ of dimension $M$ and $g\in U\otimes S$. $g$ can be written as

$$g = \sum_{m\in\Lambda}u_m\otimes X_m, \qquad (2.16)$$

with $u_m\in U$. Then we have

$$\|f-g\|^2 = \sum_{m\in\Lambda}\|\sqrt{\lambda_m}\,\phi_m - u_m\|^2. \qquad (2.17)$$

Define now $N\in\mathbb{N}$ to be the largest integer such that for $W:=\operatorname{Span}\{u_1,u_2,\dots,u_N\}$ it holds $\dim W = M-1$. Clearly $N\ge M-1$, and from (2.17) we deduce

$$\begin{aligned}
\|f-g\|^2 &= \sum_{m\le N}\|\sqrt{\lambda_m}\,\phi_m - u_m\|^2 + \sum_{m\ge N+1}\|\sqrt{\lambda_m}\,\phi_m - u_m\|^2\\
&= \sum_{m\in\Lambda}\|\sqrt{\lambda_m}\,\phi_m - P_Wu_m\|^2 - \sum_{m\ge N+1}\big(\|\sqrt{\lambda_m}\,\phi_m - P_Wu_m\|^2 - \|\sqrt{\lambda_m}\,\phi_m - u_m\|^2\big)\\
&\ge \sum_{m\in\Lambda}\|\sqrt{\lambda_m}\,\phi_m - P_Wu_m\|^2 - \sum_{m\ge N+1}\|P_{U\ominus W}(\sqrt{\lambda_m}\,\phi_m - P_Wu_m)\|^2\\
&= \sum_{m\in\Lambda}\|\sqrt{\lambda_m}\,\phi_m - P_Wu_m\|^2 - \sum_{m\ge N+1}\lambda_m\,\|P_{U\ominus W}\phi_m\|^2\\
&\ge \sum_{m\in\Lambda}\|\sqrt{\lambda_m}\,\phi_m - P_Wu_m\|^2 - \lambda_M,
\end{aligned} \qquad (2.18)$$

since $U\ominus W$ is one-dimensional and $N\ge M-1$. $W$ being $(M-1)$-dimensional, (2.17) and (2.18) show that

$$\|f-g\|^2 \ge \inf_{\substack{U'\subset H\\ \dim U'=M-1}}\;\inf_{h\in U'\otimes S}\|f-h\|^2 \;-\; \lambda_M. \qquad (2.19)$$

Taking the infimum over $g$ and $U$ in (2.19), the conclusion follows by induction on $M$.

As an example, let us now specialize Theorem 2.5 by choosing, as in Section 1, $H := L^2(D)$ and $S := L^2(\Omega,dP)$, and obtain (see also [7])

Proposition 2.8. If $a\in L^2(D\times\Omega)$, then there exists a sequence of random variables $(X_m)_{m\ge1}\subset L^2(\Omega,dP)$ satisfying

$$\int_\Omega X_m(\omega)\,dP(\omega) = 0, \qquad \int_\Omega X_n(\omega)X_m(\omega)\,dP(\omega) = \delta_{nm}, \quad n,m\ge1, \qquad (2.20)$$

and such that the random field $a$ can be expanded in $L^2(D\times\Omega)$ as

$$a(x,\omega) = E_a(x) + \sum_{m=1}^{\infty}\sqrt{\lambda_m}\,\phi_m(x)\,X_m(\omega), \qquad (2.21)$$

where $(\lambda_m,\phi_m)_{m\ge1}$ is the eigenpair sequence of the Carleman operator

$$\mathcal{V}_a : L^2(D)\to L^2(D), \qquad (\mathcal{V}_a\phi)(x) := \int_D V_a(x,x')\,\phi(x')\,dx', \quad \phi\in L^2(D), \qquad (2.22)$$

and $V_a := C_a - E_a\otimes E_a$. Moreover,

$$X_m(\omega) := \frac{1}{\sqrt{\lambda_m}}\int_D\big(a(x,\omega)-E_a(x)\big)\,\phi_m(x)\,dx, \quad m\ge1. \qquad (2.23)$$

Definition 2.9. The r.h.s. of (2.21) is called the Karhunen-Loève expansion (KL expansion for short) of the random field $a\in L^2(D\times\Omega)$.

Let us briefly discuss the optimality result Theorem 2.7 in this context. The (mean square) approximability of $a(x,\omega)$ by its KL truncates via Theorem 2.7 relies on knowledge of the exact eigenfunctions (which are not available in practice), and on modelling the random variables given by (2.23) (achievable by testing the measurements $M_N$ against the eigenfunctions, assumed to be known). Thus, in general, the optimal approximations of the random field $a(x,\omega)$ cannot be found, unless the covariance kernel and the domain $D$ have simple structure (as, e.g., when $V_a$ is separable and $D$ of tensor product type; cf. e.g. [] for examples). However, nearly optimal approximations can be constructed using computed (necessarily inexact) eigenfunctions of $\mathcal{V}_a$, that is, exact eigenfunctions of $P_U\mathcal{V}_aP_U$, where $U\subset L^2(D)$ is a finite element space and $P_U$ is a suitable (quasi-)interpolation operator. We formulate the corresponding result in the abstract setting of Theorem 2.7.

Theorem 2.10. If $f\in H\otimes S$ and $U$ is a closed subspace of $H$, we denote by $(\lambda_m,\phi_m)_{m\ge1}$ and $(\lambda_{U,m},\phi_{U,m})_{m\ge1}$ the eigenpair sequences of $C_f$ and $P_UC_fP_U$, respectively. For any $M\in\mathbb{N}$ we define $W_M := \operatorname{span}\{\phi_{U,m}\mid m\le M\}$. Then $\dim W_M = M$ and

$$\|f - P_{W_M\otimes S}f\|^2_{H\otimes S} = \sum_{m=1}^{M}(\lambda_m - \lambda_{U,m}) + \sum_{m=M+1}^{\infty}\lambda_m. \qquad (2.24)$$

Proof. Denote further by $(\lambda_{W_M^\perp,m})_{m\ge1}$ the eigenvalue sequence of $P_{W_M^\perp}C_fP_{W_M^\perp}$, where $W_M^\perp$ is the orthogonal complement of $W_M$ in $H$. We have successively

$$\|f-P_{W_M\otimes S}f\|^2_{H\otimes S} = \|P_{W_M^\perp\otimes S}f\|^2_{H\otimes S} = \sum_{m\ge1}\lambda_{W_M^\perp,m} = \operatorname{Tr}\big(P_{W_M^\perp}C_fP_{W_M^\perp}\big) = \operatorname{Tr}C_f - \operatorname{Tr}\big(P_{W_M}C_fP_{W_M}\big) = \sum_{m\ge1}\lambda_m - \sum_{m=1}^{M}\lambda_{U,m}.$$

Definition 2.11. If $f\in H\otimes S$, $U$ is a closed subspace of $H$ and $M\in\mathbb{N}\cup\{\infty\}$, we denote by $(\lambda_{U,m},\phi_{U,m})_{m\ge1}$ the eigenpair sequence of $P_UC_fP_U\in B^c_{sym,1}(U)$ and by $W_M$ the space spanned by $(\phi_{U,m})_{m\le M}$. Then there exists an orthonormal family $(X_{U,m})_{m\le M}\subset S$ such that

$$P_{W_M\otimes S}f = \sum_{m\le M}\sqrt{\lambda_{U,m}}\,\phi_{U,m}\otimes X_{U,m}, \qquad (2.25)$$

and we call the expansion (2.25) the $(U,M)$-quasi Karhunen-Loève (KL) expansion of $f$.

Note that the $(H,\infty)$-quasi KL expansion coincides with the KL expansion of $f$ constructed in Theorem 2.5.

Remark 2.12. In our applications below, the subspace $U$ will be a Finite Element (FE) subspace of dimension $N = \dim(U) < \infty$, much larger than $M$. The projection (2.25) can be understood as a projection of $f$ onto the principal components of $U$. If no eigensolver is available, the random field $a(x,\omega)$ can still be approximated by an expansion separating the deterministic and stochastic parts, obtained by testing $a(x,\omega)$ against a basis of an arbitrary space $U\subset L^2(D)$. The accuracy of this approximation is just as high as that of the $U\times U$ Galerkin approximation of $\mathcal{V}_a$. We further note that, although weaker than the methods presented in Theorems 2.7 and 2.10, (2.25) can become (e.g. in the case of an analytic covariance $V_a$) asymptotically optimal, i.e. as $M\to\infty$.

Proposition 2.13. Consider separable Hilbert spaces $H$, $S$ and a nested, dense sequence $(U_m)_{m\ge1}$ of closed subspaces of $H$. If $f\in H\otimes S$, define for any $m\ge1$

$$\varepsilon_m := \|C_f - P_{U_m\otimes U_m}C_f\|_{H\otimes H}. \qquad (2.26)$$

Then for any $M\ge1$ it holds

$$\|f - P_{U_M\otimes S}f\|^2_{H\otimes S} \le \sum_{m=M}^{\infty}(\dim U_{m+1} - \dim U_m)^{1/2}\,\varepsilon_m. \qquad (2.27)$$

Proof. For $m\ge1$ let us denote by $N_m$ the dimension of $U_m$ and choose an ONB $(\phi_m)_{m\ge1}$ in $H$ such that $U_m = \operatorname{span}\{\phi_1,\phi_2,\dots,\phi_{N_m}\}$ for any $m\ge1$. Then there exists a family $(X_m)_{m\ge1}\subset S$ such that

$$f = \sum_{m\ge1}\phi_m\otimes X_m, \qquad C_f = \sum_{m,m'\ge1}\langle X_m,X_{m'}\rangle_S\,\phi_m\otimes\phi_{m'}. \qquad (2.28)$$

Clearly then,

$$\|f - P_{U_M\otimes S}f\|^2_{H\otimes S} = \Big\|\sum_{m>N_M}\phi_m\otimes X_m\Big\|^2_{H\otimes S} = \sum_{m>N_M}\|X_m\|^2_S. \qquad (2.29)$$

But, using (2.28),

$$\sum_{m>N_M}\|X_m\|^4_S \le \sum_{\max\{m,m'\}>N_M}|\langle X_m,X_{m'}\rangle_S|^2 = \|C_f - P_{U_M\otimes U_M}C_f\|^2_{H\otimes H} = \varepsilon_M^2,$$

from which we deduce via the Cauchy-Schwarz inequality, for any $M$,

$$\sum_{N_M<m\le N_{M+1}}\|X_m\|^2_S \le (N_{M+1}-N_M)^{1/2}\Big(\sum_{N_M<m\le N_{M+1}}\|X_m\|^4_S\Big)^{1/2} \le (N_{M+1}-N_M)^{1/2}\,\varepsilon_M. \qquad (2.30)$$

The conclusion follows by inserting (2.30) (applied for all indices $\ge M$) in (2.29).

Since we are interested in the KL expansion of stochastic coefficients $a(x,\omega)$ of partial differential equations such as the diffusion equation (1.2), a sufficient condition for $P$-a.s., pointwise a.e. in $D$ convergence of (2.21) is of interest.

Proposition 2.14. The KL series (2.21) converges $P$-a.s. in $L^\infty(D)$ if

$$\sum_{m=1}^{\infty}\lambda_m\,(\log m)^2\,\|\phi_m\|^2_{L^\infty(D)}\,\operatorname{Var}(X_m) < \infty. \qquad (2.31)$$

Proof. Follows from [7], Theorem 36.B (i), applied to $\sqrt{\lambda_m}\,\|\phi_m\|_{L^\infty(D)}\,X_m(\omega)$, if we note that the $X_m$ in (2.21) satisfy (2.20), i.e. have mean zero and are pairwise uncorrelated.

2.2 KL Eigenvalue Decay

From Proposition 2.14, the pointwise convergence of the KL expansion (2.21) is related to eigenvalue decay and to pointwise eigenfunction bounds. We now prove such decay rates for the KL eigenvalues in terms of the regularity of the covariance kernel $V_a$ (pointwise eigenfunction bounds will be considered in the next section).

Definition 2.15. A correlation function $V_a : D\times D\to\mathbb{R}$ is said to be piecewise analytic/smooth/$H^{p,q}$ on $D\times D$ ($p,q\in[0,\infty[$) if there exists a partition $\mathcal{D}=\{D_j\}_{j=1}^{J}$ of $D$ into a finite sequence of simplices $D_j$ and a finite family $\mathcal{G}=\{G_j\}_{j=1}^{J}$ of open sets in $\mathbb{R}^d$ such that

$$\overline{D} = \bigcup_{j=1}^{J}\overline{D_j}, \qquad D_j\subset G_j, \quad 1\le j\le J, \qquad (2.32)$$

and such that $V_a|_{D_j\times D_{j'}}$ has an extension to $G_j\times G_{j'}$ which is analytic in $G_j\times G_{j'}$ / is smooth in $G_j\times G_{j'}$ / is in $H^{p,q}(G_j\times G_{j'}) := H^p(G_j)\otimes H^q(G_{j'})$, for any pair $(j,j')$. We denote by $A_{\mathcal{D},\mathcal{G}}(D^2)$ / $C^\infty_{\mathcal{D},\mathcal{G}}(D^2)$ / $H^{p,q}_{\mathcal{D},\mathcal{G}}(D^2)$ the corresponding regularity spaces. Similarly we introduce spaces of piecewise regular functions defined on $D$, which we denote by $A_{\mathcal{D},\mathcal{G}}(D)$ / $C^\infty_{\mathcal{D},\mathcal{G}}(D)$ / $H^{p}_{\mathcal{D},\mathcal{G}}(D)$.

To prove decay estimates for the eigenvalues of a piecewise analytic kernel (in the sense of Definition 2.15) we need the following auxiliary result.

Lemma 2.16. Let $(H,\langle\cdot,\cdot\rangle)$ be a Hilbert space and $C\in B(H)$ a symmetric, nonnegative and compact operator whose eigenpair sequence is denoted by $(\lambda_m,\phi_m)_{m\ge1}$. If $m\in\mathbb{N}$ and $C_m\in B(H)$ is an operator of rank at most $m$, then it holds

$$\lambda_{m+1} \le \|C - C_m\|_{B(H)}. \qquad (2.33)$$

Proof. A straightforward application of the minimax principle:

$$\lambda_{m+1} = \min_{\substack{V\subset H\\ \dim V\le m}}\;\max_{\substack{\phi\perp V\\ \|\phi\|_H=1}}\langle C\phi,\phi\rangle \le \max_{\substack{\phi\perp\operatorname{Ran}C_m\\ \|\phi\|_H=1}}\langle C\phi,\phi\rangle = \max_{\substack{\phi\perp\operatorname{Ran}C_m\\ \|\phi\|_H=1}}\langle(C-C_m)\phi,\phi\rangle \le \|C-C_m\|_{B(H)}.$$

We require approximation properties of the FE-spaces $S^p_h(D)$. By $H^k(\mathcal{D})$ we denote the functions which belong to $H^k(D_j)$ for every $D_j\in\mathcal{D}$. Then we have

Proposition 2.17. Let $S^p_h(\mathcal{D})$ denote the space of discontinuous, piecewise polynomial functions of total degree $p$ on a quasiuniform triangulation $\mathcal{M}_h$ of mesh width $h$, subordinate to the partition $\mathcal{D}$ of $D$, and denote by $N = \dim S^p_h(\mathcal{D})$ its dimension. Denote by $P_h : L^2(D)\to S^p_h$ the $L^2(D)$ projection. Then for every $\varphi\in H^k(\mathcal{D})$ it holds, as $h/(p+1)\to0$,

$$\|\varphi - P_h\varphi\|_{L^2(D)} \le C(k)\,(h/(p+1))^{\min\{p+1,k\}} \le C(k)\,N^{-\min\{p+1,k\}/d}, \qquad (2.34)$$

and, if $\varphi$ is analytic in each $D_j$, $D_j\in\mathcal{D}$, there are $b,C>0$ such that, as $p\to\infty$ on a fixed triangulation $\mathcal{M}$ of $D$ subordinate to $\mathcal{D}$,

$$\|\varphi - P_h\varphi\|_{L^2(D)} \le C\exp(-bp) \le C\exp(-bN^{1/d}). \qquad (2.35)$$

2.2.1 Piecewise Analytic Covariance

For piecewise analytic kernels in the sense of Definition 2.15 we have

Proposition 2.18. Let $V\in L^2(D\times D)$ be a symmetric kernel defining the compact and nonnegative integral operator

$$\mathcal{V} : L^2(D)\to L^2(D), \qquad (\mathcal{V}u)(x) = \int_D V(x,x')\,u(x')\,dx'. \qquad (2.36)$$

If $V$ is piecewise analytic on $D\times D$ and $(\lambda_m)_{m\ge1}$ is the eigenvalue sequence of its associated operator (2.36), then there exist constants $c_{1,V},c_{2,V}>0$ depending only on $V$ such that

$$0\le\lambda_m\le c_{1,V}\,e^{-c_{2,V}\,m^{1/d}}, \quad m\ge1. \qquad (2.37)$$

Proof. Under the assumption of piecewise analyticity of the kernel $V(x,x')$ (in the sense of Definition 2.15), the operator $\mathcal{V}$ maps $L^2(D)$ into the set of piecewise analytic functions. Define $\mathcal{V}_m := P_h\mathcal{V}$. Then $N := \operatorname{rank}\mathcal{V}_m = O((p+1)^d)$ and, by Proposition 2.17, (2.35), we get

$$\|\mathcal{V} - \mathcal{V}_m\|_{B(H)} \le C\exp(-bp) \le C\exp(-bN^{1/d}).$$

Lemma 2.16 then implies (2.37).

One is often interested in Gaussian covariance kernels of the form

$$V_a(x,x') := \sigma^2\exp\big(-|x-x'|^2/(\gamma^2\Lambda^2)\big), \quad (x,x')\in D\times D, \qquad (2.38)$$

where $\sigma,\gamma>0$ are real parameters and $\Lambda$ is the diameter of the domain $D$. Note that $\sigma$ and $\gamma$ are in this case referred to as the standard deviation and the correlation length of $a$, respectively. Since this kernel admits an analytic continuation to the whole complex space $\mathbb{C}^d$, the eigenvalue decay is in this case even faster than in (2.37).

Proposition 2.19. If $a\in L^2(D\times\Omega)$ and $V_a$ is given by (2.38), then for the eigenvalue sequence $(\lambda_m)_{m\ge1}$ of $\mathcal{V}_a$ it holds

$$0\le\lambda_m\le\sigma^2\,\frac{(1/\gamma)^{m^{1/d}+2}}{\Gamma(0.5\,m^{1/d})}, \quad m\ge1, \qquad (2.39)$$

where $\Gamma$ is the gamma function interpolating the factorial.

Proof. A repetition of the argument in the proof of Proposition 2.18, using as approximations for $\mathcal{V}_a$ the integral operators given by the Taylor truncates of $V_a$.

Remark 2.20. Note that the decay estimates (2.37), (2.39) are subexponential in dimension $d>1$. The exponent $1/d$ accounts for higher multiplicity of the eigenvalues in the presence of symmetries in $V_a$ and $D$ and cannot be removed, in general.

2.2.2 Finitely Differentiable Covariance

So far, we assumed that the kernel function $V(x,x')$ is piecewise analytic, which implied exponential decay of the KL-eigenvalues. If the requirement of analyticity of $V(x,x')$ is weakened to finite Sobolev regularity, only algebraic decay holds true.

Proposition 2.21. Let $D\subset\mathbb{R}^d$ be a bounded domain and $V\in L^2(D\times D)$ the symmetric kernel of the compact, nonnegative integral operator

$$\mathcal{V} : L^2(D)\to L^2(D), \qquad (\mathcal{V}u)(x) = \int_D V(x,x')\,u(x')\,dx'. \qquad (2.40)$$

If $V$ is piecewise $H^{k,0} := H^k\otimes L^2$ on $D\times D$ with $k>0$ and $(\lambda_m)_{m\ge1}$ denotes the eigenvalue sequence of $\mathcal{V}$, there exists a constant $c_{3,V}>0$ such that

$$0\le\lambda_m\le c_{3,V}\,m^{-k/d}, \quad m\ge1. \qquad (2.41)$$

Proof. Analogous to Proposition 2.18, using (2.34) in place of (2.35).

Corollary 2.22. Let $D\subset\mathbb{R}^d$ be a bounded domain and $V\in L^2(D\times D)$ a symmetric kernel defining a compact, nonnegative integral operator via (2.40). If $V$ is piecewise smooth on $D\times D$ and $(\lambda_m)_{m\ge1}$ denotes the eigenvalue sequence of its associated operator $\mathcal{V}$, then for any $s>0$ there exists a constant $c_{V,s}>0$ such that

$$0\le\lambda_m\le c_{V,s}\,m^{-s}, \quad m\ge1. \qquad (2.42)$$

2.3 KL Eigenfunction Regularity

The regularity of the covariance kernel implies corresponding regularity of the KL eigenfunctions. The following result follows immediately from the KL-eigenvalue problem (1.11).

Proposition 2.23. Assume that the covariance kernel $V_a(x,x')$ is piecewise analytic / is $C^p$ on the partition $\mathcal{D}$ of $D$ in the sense of Definition 2.15. Then the KL eigenfunctions $\phi_m(x)$ are analytic / $C^p$ in every $D_j\in\mathcal{D}$.

Pointwise control of KL eigenfunctions is important to establish pointwise convergence of the KL expansion, i.e. convergence in $L^2(\Omega,dP;L^\infty(D))$. This in turn is important to ensure the ellipticity and stable solvability of problems like (1.2) when $a(x,\omega)$ is replaced by a finite KL-approximation.

Theorem 2.24. For $D\subset\mathbb{R}^d$ a bounded domain and $V$ piecewise smooth on $D\times D$, such that the domains $D_j$ in Definition 2.15 all have the uniform cone property, we denote by $(\lambda_m,\phi_m)_{m\ge1}$ the eigenpair sequence of the associated integral operator $\mathcal{V}$ via (2.36), with $\|\phi_m\|_{L^2(D)}=1$, $m\ge1$. Then for any $s>0$ and any multiindex $\alpha\in\mathbb{N}^d$ there exists $c_{V,s,\alpha}>0$ such that

$$\|\partial^\alpha\phi_m\|_{L^\infty(D_j)} \le c_{V,s,\alpha}\,\lambda_m^{-s}, \quad m\ge1,\ 1\le j\le J. \qquad (2.43)$$
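The qualitative difference between Propositions 2.18/2.19 and Proposition 2.21 is easy to observe numerically. The following minimal illustration (not from the paper) compares the eigenvalue decay of an analytic Gaussian kernel with that of a merely Lipschitz exponential kernel on $D=(0,1)$, using a simple midpoint-rule (Nyström) discretization; all parameter values are illustrative.

```python
# Minimal illustration of eigenvalue decay: analytic vs. C^0 covariance kernel on (0,1).
import numpy as np

def kernel_eigs(cov, n=600):
    x = (np.arange(n) + 0.5) / n                   # midpoints of a uniform mesh
    C = cov(x[:, None], x[None, :])
    lam = np.linalg.eigvalsh(C / n)                # midpoint-rule (Nystroem) eigenvalues
    return np.sort(lam)[::-1]

gamma = 0.5
gauss = lambda x, y: np.exp(-(x - y) ** 2 / gamma ** 2)   # analytic kernel
expo  = lambda x, y: np.exp(-np.abs(x - y) / gamma)        # C^0 kernel, kink on the diagonal

lg, le = kernel_eigs(gauss), kernel_eigs(expo)
for m in (1, 5, 10, 20, 40):
    print(m, lg[m - 1], le[m - 1])   # Gaussian eigenvalues drop exponentially, exponential kernel only algebraically
```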

Remark 2.25. Under the regularity assumptions of Theorem 2.24, the estimate (2.43) is optimal in the sense that for any $\alpha$ it fails to hold with $s=0$. This can be seen, for instance, on $D:=]0,1[$ by taking $V := \sum_{m\ge1}\lambda_m\,\phi_m\otimes\phi_m$, where $\lambda_m := e^{-m}$ and $\phi_m(x) := m\,\phi(m^2x-m)$, $x\in]0,1[$, $m\ge1$, with $\phi\in C^\infty_0(]0,1[)$ satisfying $\|\phi\|_{L^2(]0,1[)}=1$.

Remark 2.26. Further assumptions (like stationarity of $a(x,\omega)$) lead to uniform $L^\infty$ boundedness of the eigenfunctions, but not of their derivatives.

For the proofs of the results presented in this section we refer the reader to the Appendix and to [24].

3 Approximate KL Expansion

In order to use the (truncated) KL expansion (2.21) in practice, we must be able to compute efficiently and accurately approximations to the first $M$ KL-eigenpairs in arbitrary domains $D$. In one dimension, for particular kernels, explicit eigenfunctions are known (see, e.g., []). These can be used to obtain explicit eigenpairs also for multidimensional tensor product domains $D$, if the covariance $V_a(x,x')$ is separable. This is often the case in subsurface flow problems, where $D$ is a box and the covariance kernel $V_a$ is Gaussian as in (2.38). To deal with random coefficients in arbitrary geometries, however, an efficient numerical approximation of the eigenpairs of the covariance operator (2.36) is an essential step in the efficient numerical solution of problem (1.2).

3.1 Galerkin Discretization of the KL Eigenvalue Problem

Let $h\in\mathcal{H}$ (it will be either the mesh width $h$ or the polynomial degree $p$) be a discretization parameter and let $S_h\subset L^2(D)$ denote the corresponding finite element space. The variational formulation of the eigenvalue problem reads, in discretized form: find $(\lambda_{h,m},\phi_{h,m})_{m\ge1}\subset\mathbb{R}\times S_h$ such that

$$\int_D\int_D V_a(x,x')\,\phi_{h,m}(x')\,\psi(x)\,dx'\,dx = \lambda_{h,m}\int_D\phi_{h,m}(x)\,\psi(x)\,dx \quad \forall\,\psi\in S_h. \qquad (3.1)$$

(3.1) shows that the sequence $(\lambda_{h,m},\phi_{h,m})_{m\ge1}$ is nothing but the eigenpair sequence of the compact, self-adjoint operator $P_h\mathcal{V}_aP_h$ in $L^2(D)$. Theorem 2.10 then implies (with $U = S_h$) that an important contribution to the error introduced by a quasi-KL expansion of the random field $a(x,\omega)$ comes from the difference between the traces of the continuous and the discretized operator. In order to control the trace discretization error we make the following

Assumption 3.1. The eigenpair sequence $(\lambda_m,\phi_m)_{m\ge1}$ of the integral operator $\mathcal{V}$ has the property that for any $s>0$ there exists $c_{V,S,s}>0$ such that

$$\|\phi_m - P_h\phi_m\|_{L^2(D)} \le c_{V,S,s}\,\lambda_m^{-s}\,\Phi(h), \quad m\ge1,\ h\in\mathcal{H}, \qquad (3.2)$$

where the function $\Phi : \mathcal{H}\to\mathbb{R}_{\ge0}$ describes the convergence rate of the finite element spaces $S := (S_h)_{h\in\mathcal{H}}$.

Later we will see that Assumption 3.1 is satisfied in the case of a piecewise regular kernel for both the $h$- and the $p$-version finite element methods. Based on Assumption 3.1 we prove the main result of this section.

Theorem 3.2. If $V\in L^2(D\times D)$ is piecewise smooth on $D\times D$, defining a nonnegative self-adjoint operator $\mathcal{V}$ via (2.36) such that Assumption 3.1 is satisfied, there exists a constant $c_{V,S}>0$ such that

$$0 \le \operatorname{Tr}\mathcal{V} - \operatorname{Tr}P_h\mathcal{V}P_h \le c_{V,S}\,\Phi(h)^2, \quad h\in\mathcal{H}. \qquad (3.3)$$

Proof. Fix $h\in\mathcal{H}$. From the minimax principle we immediately deduce $\lambda_{h,m}\le\lambda_m$, $m\ge1$, so that

$$\operatorname{Tr}P_h\mathcal{V}P_h \le \operatorname{Tr}\mathcal{V}. \qquad (3.4)$$

Further, the obvious identity $\mathcal{V} - P_h\mathcal{V}P_h = (I-P_h)\mathcal{V} + \mathcal{V}(I-P_h) - (I-P_h)\mathcal{V}(I-P_h)$, together with the fact that $\mathcal{V}$ is nonnegative, ensures that (with $H := L^2(D)$)

$$\langle(\mathcal{V}-P_h\mathcal{V}P_h)u,u\rangle_H \le 2\,\langle\mathcal{V}u,(I-P_h)u\rangle_H, \quad u\in H. \qquad (3.5)$$

Using (3.5) and Assumption 3.1 it follows that

$$\operatorname{Tr}\mathcal{V} - \operatorname{Tr}P_h\mathcal{V}P_h = \sum_{m\ge1}\langle(\mathcal{V}-P_h\mathcal{V}P_h)\phi_m,\phi_m\rangle_H \le 2\sum_{m\ge1}\langle\mathcal{V}\phi_m,(I-P_h)\phi_m\rangle_H \le 2\sum_{m\ge1}\lambda_m\,\|(I-P_h)\phi_m\|^2_H \le c\,\Phi(h)^2\sum_{m\ge1}\lambda_m^{1-2s}. \qquad (3.6)$$

By Corollary 2.22 the series on the r.h.s. of (3.6) converges, which concludes the proof.

3.2 Convergence of the Discretized KL Expansion

From Theorems 2.10 and 3.2 we immediately deduce

Proposition 3.3. Consider $a\in L^2(D\times\Omega)$ such that $V_a\in L^2(D\times D)$ is piecewise smooth on $D\times D$, defining a nonnegative self-adjoint operator $\mathcal{V}_a$ via (2.36). If Assumption 3.1 holds, then there exists a constant $c_{V,S}>0$ such that for any $h\in\mathcal{H}$ and $M\ge1$ the $(S_h,M)$-quasi KL expansion of $a$, defined by

$$a_{h,M}(x,\omega) = E_a(x) + \sum_{m=1}^{M}\sqrt{\lambda_{h,m}}\,\phi_{h,m}(x)\,X_{h,m}(\omega), \qquad (3.7)$$

satisfies

$$\|a - a_{h,M}\|^2_{L^2(D\times\Omega)} \le c_{V,S}\,\Phi(h)^2 + \sum_{m=M+1}^{\infty}\lambda_m, \quad h\in\mathcal{H}. \qquad (3.8)$$

In applications to partial differential equations with random coefficients, it is necessary to ensure that the approximate, truncated KL expansion satisfies sign conditions pointwise in $x\in D$ and $P$-a.s. in $\omega\in\Omega$. Using the pointwise eigenfunction bounds of Theorem 2.24, it is possible to establish pointwise convergence of the KL expansion.

Assumption 3.4. The family $(X_m)_{m=1}^{\infty}$ of random variables in the KL expansion (1.7) is uniformly bounded, i.e. there exists $c_X>0$ such that

$$\|X_m\|_{L^\infty(\Omega,dP)} \le c_X, \quad m\ge1. \qquad (3.9)$$

We then have

Theorem 3.5. If $V_a$ is piecewise analytic on $D\times D$ and if Assumption 3.4 holds, then the KL expansion (1.7) converges pointwise exponentially. More precisely, there exist constants $C_1>0$ and $c_2>0$ such that for all $s>0$ as in Theorem 2.24 and for all $M\ge1$ it holds

$$\|a - a_M\|_{L^\infty(D\times\Omega)} \le C_1\exp\big(-c_2(1/2-s)\,M^{1/d}\big). \qquad (3.10)$$

4 Fast Multipole Covariance Approximation

Computation of an approximate KL-expansion requires the numerical solution of the matrix eigenproblem corresponding to (3.1), i.e. of

$$\mathbf{V}\phi = \lambda\,\mathbf{M}\phi. \qquad (4.1)$$

If $S^p_h(D) = \operatorname{span}\{b_i(x) : i=1,\dots,N\}$, we have

$$\mathbf{V}_{ii'} = \int_D\int_D b_i(x)\,V_a(x,x')\,b_{i'}(x')\,dx\,dx', \qquad \mathbf{M}_{ii'} = \int_D b_i(x)\,b_{i'}(x)\,dx. \qquad (4.2)$$

Both matrices $\mathbf{V}$ and $\mathbf{M}$ are symmetric and positive definite, with $\mathbf{M}$ being diagonal if we choose as basis of $S^p_h$ polynomials which are $L^2(D)$-orthogonal and supported on the elements $\pi\in\mathcal{M}_h$. The size $N$ of the KL eigenproblem (4.1) can be as large as $10^6$, and dense eigensolvers are not applicable. We compute the KL eigenpairs corresponding to the largest eigenvalues of (4.1) by an iterative Krylov subspace eigensolver [9] which requires only matrix-vector multiplies.
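The structure of (4.1)-(4.2) and of the Krylov step can be illustrated with a minimal sketch, not the paper's implementation: piecewise-constant basis functions on a uniform 1D mesh with one-point (midpoint) quadrature give a dense $\mathbf{V}$ and a diagonal $\mathbf{M}=h\mathbf{I}$, and the Lanczos iteration only ever calls a matrix-vector product, which is exactly where the gfmm would be plugged in. Domain, covariance and mesh size are illustrative assumptions.

```python
# Minimal sketch of (4.1)-(4.2): P0 Galerkin matrices on a 1D mesh, matrix-free Lanczos.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 1000                                     # number of elements = matrix dimension N
h = 1.0 / n
mid = (np.arange(n) + 0.5) * h               # element midpoints in D = (0, 1)
cov = lambda x, y: np.exp(-np.abs(x - y) / 0.1)

V = h * h * cov(mid[:, None], mid[None, :])  # V_ii' ~ h^2 V_a(x_i, x_i'); M = h * I

def matvec(u):
    # Dense product as a stand-in; the paper replaces this step by the gfmm.
    return V @ u / h                         # applies M^{-1} V, since M = h * I

A = LinearOperator((n, n), matvec=matvec, dtype=float)
lam, phi = eigsh(A, k=12, which='LM')        # leading Galerkin KL eigenvalues/eigenvectors
print(np.sort(lam)[::-1][:5])
```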

If $a(x,\omega)$ is stationary, its covariance is translation invariant, i.e. $V_a(x,x') = V_a(x-x')$. If, moreover, $D$ is an axiparallel cube and the triangulation $\mathcal{M}_h$ is uniform and axiparallel, Fast Fourier techniques can be used to realize $x\mapsto\mathbf{V}x$ in $O(N)$ operations. For polyhedra in dimension $d=3$ and unstructured meshes $\mathcal{M}_h$, Fourier techniques can no longer be applied. We show that in certain situations, for large $N$, the matrix-vector multiplication $\phi\mapsto\mathbf{V}\phi$ can be done approximately in $O(N\log N)$ operations and memory using a generalized fast multipole method (gfmm) applicable to general, piecewise analytic correlation kernels $V_a(x,x')$. It generalizes the Greengard-Rokhlin method for the Coulomb potential. Using this cluster approximation of the far field yields a perturbed matrix $\widetilde{\mathbf{V}}$ and, consequently, perturbed KL-eigenpairs. We estimate the error due to clustering the far field and show that an expansion order $m = O(|\log h|)$ is sufficient to preserve the consistency of the Galerkin approximation of the eigenpairs.

4.1 Covariance Kernel Expansions

Assumption 4.1. Given a separation constant $\eta<1$, a kernel function $V_a : D\times D\to\mathbb{C}$ and an index set $I$, for all $x_0,y_0\in D$, $x_0\ne y_0$, and all expansion orders $m\in\mathbb{N}$ there exists a degenerate covariance kernel $V_a^m$,

$$V_a(x,y) \approx V_a^m(x,y;x_0,y_0) := \sum_{(\mu,\nu)\in I_m}\kappa_{(\mu,\nu)}(x_0,y_0)\,X_\mu(x;x_0)\,Y_\nu(y;y_0), \qquad (4.3)$$

for some $I_m\subset I\times I$, such that for all $x,y\in D$ satisfying

$$\|y-y_0\| + \|x-x_0\| \le \eta\,\|y_0-x_0\|, \qquad (4.4)$$

the error in (4.3) is bounded by

$$|V_a(x,y) - V_a^m(x,y;x_0,y_0)| \le \frac{\Phi(m;\eta,V)}{\|y_0-x_0\|^s}, \qquad (4.5)$$

with the convergence rate $\Phi(m;\eta,V)$ exponentially decreasing with respect to the expansion order $m$ and with $s\ge0$ denoting the singularity order at $x=y$.

Note that we admit covariance kernels $V_a$ which are singular on the diagonal $x=y$; only $s=0$ is required due to (1.1), while the following assertions are actually valid for $s>d$.

The purpose of the expansion (4.3) is to decouple the source points $y$ from the field points $x$. The simplest example of such a decoupling is Taylor expansion. If the random field $a(x,\omega)$ is stationary, the covariance $V_a$ is translation invariant:

$$V_a(x,y) = V_a(y-x). \qquad (4.6)$$

We expand $V_a(y-x)$ formally into a Taylor series centered at $y_0-x_0$ with $x_0,y_0\in\mathbb{R}^d$:

$$V_a(y-x) = \sum_{(\nu,\mu)\in\mathbb{N}^d\times\mathbb{N}^d}(D^{\mu+\nu}V_a)(y_0-x_0)\,\frac{(x_0-x)^\mu}{\mu!}\,\frac{(y-y_0)^\nu}{\nu!}.$$

With this we get an approximation (4.3) where

$$I := \mathbb{N}^d, \quad I_m := \{(\mu,\nu)\in I\times I : |\mu+\nu|<m\}, \quad \kappa_{(\mu,\nu)}(x_0,y_0) := (D^{\mu+\nu}V_a)(y_0-x_0),$$
$$X_\mu(x;x_0) := \frac{(x_0-x)^\mu}{\mu!}, \qquad Y_\nu(y;y_0) := \frac{(y-y_0)^\nu}{\nu!}. \qquad (4.7)$$

Applying the binomial formula, the expansion (4.3) can be shifted from $x_0$ to $x_1$ and from $y_0$ to $y_1$ by

$$X_\mu(x;x_0) = \frac{(x_0-x)^\mu}{\mu!} = \sum_{\substack{\nu\in\mathbb{N}^d\\ \nu\le\mu}}\frac{(x_0-x_1)^{\mu-\nu}}{(\mu-\nu)!}\,X_\nu(x;x_1), \qquad
Y_\nu(y;y_0) = \frac{(y-y_0)^\nu}{\nu!} = \sum_{\substack{\mu\in\mathbb{N}^d\\ \mu\le\nu}}\frac{(y_1-y_0)^{\nu-\mu}}{(\nu-\mu)!}\,Y_\mu(y;y_1). \qquad (4.8)$$

The Taylor expansion coefficients have to be calculated by differentiation of the covariance kernel $V_a$. To avoid this, we interpolate $V_a(x,y)$ by Čebyšev polynomials, so that the kernel only has to be evaluated at $O(m^d)$ different points and no derivatives come into play.

Let $I := [-1,1]$, $m\in\mathbb{N}$ and $T_\mu(x) = \cos(\mu\arccos(x))$, $\mu\in\mathbb{Z}$, denote the Čebyšev polynomials of the first kind. For any function $f$ defined on $I$, we consider the formal Čebyšev expansion

$$f(x) = \sum_{\mu\in\mathbb{Z}}\hat f(\mu)\,T_\mu(x), \qquad \hat f(\mu) := \frac{1}{\pi}\int_{-1}^{1}\frac{f(\xi)\,T_\mu(\xi)}{\sqrt{1-\xi^2}}\,d\xi, \qquad (4.9)$$

and the Čebyšev interpolant

$$f_m(x) := \sum_{\substack{\mu\in\mathbb{Z}\\ |\mu|<m}}\tilde f_\mu\,T_\mu(x), \qquad \tilde f_\mu := \frac{1}{m}\sum_{0\le i<m}f(x_i)\,T_\mu(x_i), \qquad (4.10)$$

where we assume $f$ to be known at the $m$ Čebyšev points $x_i := \cos((i+1/2)\pi/m)\in I$, i.e., the $m$ roots of $T_m$. Employing tensor products,

$$T_\mu(x) := \prod_{1\le i\le d}T_{\mu_i}(x_i), \quad \mu\in\mathbb{Z}^d,\ x\in I^d, \qquad (4.11)$$

we extend expansion and interpolation to the $d$-dimensional case, with $x_i$ denoting the $i$-th component of $x\in I^d$. This yields

$$f(x) = \sum_{\mu\in\mathbb{Z}^d}\hat f(\mu)\,T_\mu(x), \qquad \hat f(\mu) := \frac{1}{\pi^d}\int_{I^d}\frac{f(\xi)\,T_\mu(\xi)}{\prod_{1\le i\le d}\sqrt{1-(\xi_i)^2}}\,d\xi, \qquad (4.12)$$

and

$$f_m(x) := \sum_{\substack{\mu\in\mathbb{Z}^d\\ \|\mu\|_\infty<m}}\tilde f_\mu\,T_\mu(x), \qquad \tilde f_\mu := \frac{1}{m^d}\sum_{\substack{\nu\in\mathbb{N}^d\\ \nu_i<m}}f(x_\nu)\,T_\mu(x_\nu), \qquad (4.13)$$

where $x_\nu := (x_{\nu_i})_{1\le i\le d}\in I^d$.
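The one-dimensional interpolation (4.10) is a few lines of code. The following minimal sketch (the kernel slice g and the parameter values are illustrative, not from the paper) computes the coefficients at the Čebyšev roots and checks that the interpolation error decays rapidly with the order $m$ for a smooth kernel.

```python
# Minimal sketch of the Cebysev interpolation (4.10) on [-1, 1].
import numpy as np

def cheb_interp_coeffs(f, m):
    i = np.arange(m)
    xi = np.cos((i + 0.5) * np.pi / m)                 # roots of T_m
    mu = np.arange(m)
    T = np.cos(np.outer(mu, np.arccos(xi)))            # T_mu(x_i)
    return (f(xi) * T).sum(axis=1) / m                  # coefficients for mu in Z, |mu| < m

def cheb_eval(coeffs, x):
    m = len(coeffs)
    T = np.cos(np.outer(np.arange(m), np.arccos(x)))
    w = np.ones(m); w[1:] = 2.0                         # T_{-mu} = T_mu doubles mu >= 1
    return (w * coeffs) @ T

g = lambda z: np.exp(-(z + 1.5) ** 2)                   # smooth kernel slice on [-1, 1]
xs = np.linspace(-1, 1, 1001)
for m in (4, 8, 16):
    c = cheb_interp_coeffs(g, m)
    print(m, np.max(np.abs(cheb_eval(c, xs) - g(xs)))) # error decays exponentially in m
```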

Remark 4.2. Note that, since $T_{-\mu_i}(x) = T_{\mu_i}(x)$,

$$\sum_{\substack{\mu\in\mathbb{Z}^d\\ \|\mu\|_\infty<m}}\tilde f_\mu\,T_\mu(x) = \sum_{\substack{\mu\in\mathbb{N}^d\\ \|\mu\|_\infty<m}}2^{d-\delta_\mu}\,\tilde f_\mu\,T_\mu(x),$$

where $\delta_\mu$ denotes the number of vanishing components of $\mu$.

For the Fast Multipole algorithms, we require the following connection between interpolation and expansion from [2]: suppose $f : I^d\to\mathbb{R}$ is a continuous function that admits a Čebyšev expansion. Then interpolation (4.13) and expansion (4.12) are related by

$$\tilde f_\mu = \sum_{\zeta\in\mathbb{Z}^d}(-1)^{|\zeta|}\,\hat f(2m\zeta+\mu) \qquad (4.14)$$

for all $\mu\in\mathbb{Z}^d$ with $\|\mu\|_\infty<m$. In particular, the Čebyšev interpolant of $T_\nu$, $\nu\in\mathbb{Z}^d$, is given by $(-1)^{|(\nu-\mu)/(2m)|}\,T_\mu$, where $\nu\equiv\mu\ \mathrm{mod}\ 2m$.

In the following result from [2], sufficient conditions on $V_a(x,x')$ are given for the validity of Assumption 4.1 with exponential convergence with respect to the expansion order $m$.

Theorem 4.3. Let $0<\eta<1$ be a sufficiently small separation parameter, $s\ge0$ a singularity order, and $V_a : D\times D\to\mathbb{R}$ a kernel function of the form

$$V_a(x,y) := \frac{K(y-x)}{\|y-x\|^s}, \qquad (4.15)$$

where $K$ admits an analytic extension into $\mathbb{C}^d\setminus\{0\}$. Suppose $\chi$ denotes, for any $x_0,y_0\in D$, $x_0\ne y_0$, the affine transformation

$$\chi : \mathbb{R}^d\to\mathbb{R}^d, \qquad \chi(\xi) := \eta\,\|y_0-x_0\|\,\xi + y_0 - x_0. \qquad (4.16)$$

Then the approximation of the stationary covariance $V_a$ given by the Čebyšev interpolant of $f(\,\cdot\,;x_0,y_0) := (K\circ\chi)\,\|\chi\|^{-s}$ on $I^d$,

$$V_a^m(x,y;x_0,y_0) := \sum_{\substack{\mu\in\mathbb{Z}^d\\ \|\mu\|_\infty<m}}\tilde f_\mu(x_0,y_0)\,T_\mu\big(\chi^{-1}(y-x)\big), \qquad (4.17)$$

satisfies the error bound (4.5) with $\Phi(m;\eta,V) = C\exp(-b(\eta)\,m)$. In addition, $V_a^m$ admits the representation (4.3) with

$$I_m := \{(\mu,\nu)\in\mathbb{N}^d\times\mathbb{N}^d : |\mu+\nu|<m\}, \qquad \kappa_{(\mu,\nu)}(x_0,y_0) := (\mu+\nu)!\,c_{\mu+\nu}(x_0,y_0), \qquad (4.18)$$

where the $c_\mu$, $\mu\in\mathbb{N}^d$, are the coefficients of the interpolation polynomial defined by

$$\sum_{\substack{\mu\in\mathbb{Z}^d\\ \|\mu\|_\infty<m}}\tilde f_\mu(x_0,y_0)\,T_\mu\Big(\frac{z}{\eta\,\|y_0-x_0\|}\Big) = \sum_{\substack{\mu\in\mathbb{N}^d\\ |\mu|<m}}c_\mu(x_0,y_0)\,z^\mu. \qquad (4.19)$$

Remark 4.4. If the kernel $V_a$ belongs to $C^k$, approximation properties of the Čebyšev polynomials and the $L^\infty$ stability bounds for the interpolation operator at the Čebyšev points imply that $V_a^m$ admits the representation (4.3) with $I_m$ as in (4.18), but only with the algebraic convergence rate $\Phi(m;\eta,V) = C\,m^{-k}$.

Remark 4.5. Theorem 4.3 implies exponential convergence of the Čebyšev-interpolated covariance $V_a^m$ for covariance kernels (4.15) which possibly are singular on the diagonal $x=y$ with singularity order $s>d/2$. An important example is

$$V_a(x,x') = C\exp(-\gamma\,\|x-x'\|^\delta), \quad 0<\delta\le2. \qquad (4.20)$$

For $\delta=2$, the kernel is analytic and Galerkin approximations to the KL eigenpairs converge exponentially as $p\to\infty$, requiring therefore only a small, dense matrix $\mathbf{V}$; in the gfmm, the (mandatory!) choice $m=p$ will then yield no gain in complexity over a dense matrix approach. For $\delta<2$, such kernels exhibit only very low regularity in the sense of Definition 2.15. Therefore, the Galerkin approximation (3.1) will use subspaces $S^p_h(D)$ of low, fixed polynomial degree $p$, and convergence by $h$-refinement. The resulting low, algebraic convergence rates mean that very large matrix eigenvalue problems (4.1) must be solved to obtain accurate KL eigenpairs. The exponential error bound $\exp(-bm)$ of the FMM implies in this case that $m$ grows polynomially in $\log N$. The gfmm reduces, in this case, the work of a matrix-vector multiply from $N^2$ to $N$ times a polylogarithmic factor in $N$.

To compute (4.18) we interpolate the kernel function by Čebyšev polynomials,

$$V_a^m(x,y;x_0,y_0) = \sum_{\substack{\mu\in\mathbb{Z}^d\\ \|\mu\|_\infty<m}}\tilde f_\mu(x_0,y_0)\,T_\mu\big(\chi^{-1}(y-x)\big), \qquad (4.21)$$

where the $m^d$ expansion coefficients are given by

$$\tilde f_\mu(x_0,y_0) = \frac{1}{m^d}\sum_{\substack{\nu\in\mathbb{N}^d\\ \nu_i<m}}(K\circ\chi)(x_\nu)\,\|\chi(x_\nu)\|^{-s}\,T_\mu(x_\nu). \qquad (4.22)$$

Expanding the interpolant in a Taylor series we get

$$V_a^m(x,y;x_0,y_0) = \sum_{\substack{\zeta\in\mathbb{Z}^d\\ \|\zeta\|_\infty<m}}\tilde f_\zeta(x_0,y_0)\,T_\zeta\big(\chi^{-1}(y-x)\big) = \sum_{\substack{\zeta\in\mathbb{Z}^d\\ \|\zeta\|_\infty<m}}\tilde f_\zeta(x_0,y_0)\,T_\zeta\Big(\frac{y-x-y_0+x_0}{\eta\,\|y_0-x_0\|}\Big)$$
$$= \sum_{\substack{\zeta\in\mathbb{N}^d\\ |\zeta|<m}}c_\zeta(x_0,y_0)\,(y-x-y_0+x_0)^\zeta = \sum_{\substack{(\nu,\mu)\in\mathbb{N}^d\times\mathbb{N}^d\\ |\mu+\nu|<m}}\frac{(x_0-x)^\mu}{\mu!}\,\frac{(y-y_0)^\nu}{\nu!}\,(\mu+\nu)!\,c_{\mu+\nu}(x_0,y_0).$$
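The conversion (4.19) from Čebyšev to monomial coefficients can be carried out with standard polynomial utilities. The following 1D sketch is illustrative only (the kernel slice g and the scale rho stand in for $K\circ\chi$ and $\eta\|y_0-x_0\|$); it builds the interpolation coefficients (4.22), converts them with NumPy's Chebyshev-to-power-basis routine, rescales, and checks the result against the kernel.

```python
# Minimal 1D sketch of the coefficient conversion (4.19) using numpy's cheb2poly.
import numpy as np
from numpy.polynomial import chebyshev as C

rho = 0.4                                    # stands for eta * ||y0 - x0||
g = lambda z: np.exp(-(z + 1.0))             # smooth kernel slice in z = y - x - (y0 - x0)

m = 12
t = np.cos((np.arange(m) + 0.5) * np.pi / m)                      # Chebyshev roots, t = z / rho
coef = np.array([(g(rho * t) * np.cos(mu * np.arccos(t))).sum() / m for mu in range(m)])
coef[1:] *= 2.0                              # convert Z-sum convention to standard coefficients

c_t = C.cheb2poly(coef)                      # monomial coefficients in t
c_z = c_t / rho ** np.arange(m)              # rescale: interpolant = sum_k c_z[k] * z^k

z = np.linspace(-rho, rho, 7)
print(np.max(np.abs(np.polyval(c_z[::-1], z) - g(z))))            # interpolation error is tiny
```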

Remark 4.6. The coefficients $\tilde f_\mu(x_0,y_0)$ in (4.19)-(4.22) require $O(m^d)$ kernel evaluations at the Čebyšev points of order $m$.

4.2 Cluster Expansions

Assumption 4.1 provides an approximation of the covariance kernel which is in general not valid for all $(x,y)\in D\times D$. In order to define a global approximation on $D\times D$, a collection of local approximations is used, where each local approximation is associated with an appropriate block of a given partition of $D\times D$. We call the blocks clusters and the combination of local approximations a cluster expansion. More precisely, let $\mathcal{P}(D)$ denote the set of all subsets of $D$,

$$\check r_A := \inf_{x\in\mathbb{R}^d}\,\sup_{y\in A}\|y-x\|$$

the Čebyšev radius of a set $A\subset\mathbb{R}^d$, and $\check c_A\in\mathbb{R}^d$, with $\check r_A = \sup_{y\in A}\|y-\check c_A\|$, its Čebyšev center.

Definition 4.7. Suppose $\mathcal{C}\subset\mathcal{P}(D)\times\mathcal{P}(D)$ is a finite partition of $D\times D$ which is subordinate to $\mathcal{D}$ in Definition 2.15, and let $0<\eta<1$ be a separation constant. An element $(\sigma,\tau)\in\mathcal{C}$ is called an $\eta$-cluster iff

$$\check r_\sigma + \check r_\tau \le \eta\,\|\check c_\sigma - \check c_\tau\|. \qquad (4.23)$$

The set of all $\eta$-clusters in $\mathcal{C}$,

$$\mathcal{F} := \mathcal{F}(\mathcal{C},\eta) = \{(\sigma,\tau)\in\mathcal{C} : (\sigma,\tau)\ \text{is an}\ \eta\text{-cluster}\}, \qquad (4.24)$$

is called the far field of grain $\eta$, and its complement $\mathcal{N} := \mathcal{N}(\mathcal{C},\eta) = \mathcal{C}\setminus\mathcal{F}(\mathcal{C},\eta)$ the near field of grain $\eta$. Moreover, let the kernel $V_a(x,y)$ satisfy Assumption 4.1. Then

$$V_a^m(x,y) := \begin{cases} V_a^m(x,y;\check c_\sigma,\check c_\tau) & \text{if } (x,y)\in\sigma\times\tau \text{ and } (\sigma,\tau)\in\mathcal{F},\\ V_a(x,y) & \text{otherwise}, \end{cases} \qquad (4.25)$$

for all $(x,y)\in D\times D$, $x\ne y$, defines a cluster expansion of the correlation kernel $V_a(x,y)$.

Replacing the covariance kernel $V_a(x,y)$ in the definition of the matrix $\mathbf{V}$ in (4.1) by its cluster expansion $V_a^m$ introduces an approximate matrix $\widetilde{\mathbf{V}}$:

$$(\widetilde{\mathbf{V}})_{ij} = \int_D\int_D V_a^m(x,y)\,b_i(x)\,b_j(y)\,dy\,dx. \qquad (4.26)$$

The matrix $\widetilde{\mathbf{V}}$ may be decomposed according to

$$\widetilde{\mathbf{V}} = \mathbf{N} + \sum_{(\sigma,\tau)\in\mathcal{F}}\mathbf{X}_\sigma^T\,\mathbf{F}_{\sigma\tau}\,\mathbf{Y}_\tau \qquad (4.27)$$

with

$$(\mathbf{N})_{i,j} := \sum_{(\sigma,\tau)\in\mathcal{N}}\int_\sigma\int_\tau V_a(x,y)\,b_i(x)\,b_j(y)\,dy\,dx, \qquad (\mathbf{F}_{\sigma\tau})_{\mu,\nu} := \kappa_{(\mu,\nu)}(\check c_\sigma,\check c_\tau),$$
$$(\mathbf{X}_\sigma)_{\mu,i} := \int_\sigma X_\mu(x;\check c_\sigma)\,b_i(x)\,dx, \qquad (\mathbf{Y}_\tau)_{\nu,j} := \int_\tau Y_\nu(y;\check c_\tau)\,b_j(y)\,dy,$$

for $(\sigma,\tau)\in\mathcal{F}$ and $(\mu,\nu)\in I_m$.
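For a single admissible pair the low-rank structure behind (4.27) is easy to demonstrate. The following 1D sketch (illustrative only; it uses the Lagrange form of Čebyšev interpolation on point clusters, i.e. the collocation variant of Remark 4.10, rather than the paper's coefficient form) checks the admissibility condition (4.23) and compares the degenerate block to the exact kernel block.

```python
# Minimal sketch of a far-field (admissible) block in factored, rank-m form.
import numpy as np

def cheb_nodes_and_basis(center, radius, m, pts):
    """Chebyshev nodes of order m on [center-radius, center+radius] and the
    Lagrange basis evaluated at pts (rows: nodes, cols: pts)."""
    nodes = center + radius * np.cos((np.arange(m) + 0.5) * np.pi / m)
    L = np.ones((m, len(pts)))
    for k in range(m):
        for l in range(m):
            if l != k:
                L[k] *= (pts - nodes[l]) / (nodes[k] - nodes[l])
    return nodes, L

cov = lambda x, y: np.exp(-np.abs(x - y) / 0.3)
xs = np.linspace(0.0, 0.2, 50)               # cluster sigma
ys = np.linspace(0.8, 1.0, 60)               # cluster tau
c_s, r_s = xs.mean(), (xs.max() - xs.min()) / 2
c_t, r_t = ys.mean(), (ys.max() - ys.min()) / 2
assert r_s + r_t <= 0.5 * abs(c_s - c_t)     # admissibility (4.23) with eta = 0.5

m = 6
xn, Lx = cheb_nodes_and_basis(c_s, r_s, m, xs)
yn, Ly = cheb_nodes_and_basis(c_t, r_t, m, ys)
F = cov(xn[:, None], yn[None, :])            # m x m coupling matrix (kernel at node pairs)
block_lr = Lx.T @ F @ Ly                     # rank-m approximation of the block
print(np.max(np.abs(block_lr - cov(xs[:, None], ys[None, :]))))   # small already for moderate m
```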

In (4.27), the matrix $\mathbf{N}$ represents the near field part of $\widetilde{\mathbf{V}}$, whereas the sum of matrices describes the influence of the far field. If the partition $\mathcal{C}$ is chosen as discussed in the next section, $\mathbf{N}$ is a sparse matrix. In addition, the matrix-vector multiplication related to the far field part can be evaluated in essentially linear complexity.

Remark 4.8. The matrices $\mathbf{F}_{\sigma\tau}$ are never formed explicitly. Typically, their entries $(\mathbf{F}_{\sigma\tau})_{\mu\nu}$ only depend on $\mu+\nu$ with $|\mu+\nu|<m$. Therefore, $O(m^d)$ rather than $O(m^{2d})$ expansion coefficients have to be evaluated and stored.

Remark 4.9. The expansions in the previous section preserve the symmetry of the kernel $V_a$, i.e. $V_a^m(x,y) = V_a^m(y,x)$. If, in addition, the given partition $\mathcal{C}$ exhibits symmetry, i.e. $(\sigma,\tau)\in\mathcal{C}\Rightarrow(\tau,\sigma)\in\mathcal{C}$, then $\widetilde{\mathbf{V}}$ is symmetric.

Remark 4.10. (Collocation) For continuous covariances $V_a(x,x')$, we may allow in the definitions (4.2), (4.26) of $\mathbf{V}_{ij}$ and $\widetilde{\mathbf{V}}_{ij}$, respectively, the choice $b_i(x) = \delta(x_i)$, the Dirac delta function at $x_i$, the barycenter of element $\pi_i\in\mathcal{M}_h$. Then all integrals in (4.2) and (4.27) become point evaluations and, with the choice $\mathbf{M} = \mathbf{I}$, the matrix eigenvalue problem (4.1) becomes a collocation approximation of (1.11). The components of $\phi$ correspond to point values of piecewise constant approximations of the eigenfunctions.

4.3 Cluster Algorithm

Equation (4.27) specifies an approximation $\widetilde{\mathbf{V}}$ of the matrix $\mathbf{V}$ which allows one to control the approximation error (see Section 4.4). To reduce the complexity of assembly and storage of $\widetilde{\mathbf{V}}$, we must choose the partition $\mathcal{C}$ appropriately. An efficient way, which also provides the desired complexity reduction from $N^2$ to $O(N\log N)$, is to start with a recursive hierarchical decomposition of the mesh $\mathcal{M}_h$, represented by a tree $T := (V,E)$. An algorithm similar to Algorithm 4.1 might be used to generate such a decomposition, i.e. $T := \mathrm{tree}(\mathcal{M}_h)$.

Algorithm 4.1
  (V, E) := tree(A)
  if |A| < c then
    return ({A}, ∅);
  else
    (A_0, A_1) := split(A);
    (V_0, E_0) := tree(A_0);
    (V_1, E_1) := tree(A_1);
    return (V_0 ∪ V_1 ∪ {A}, E_0 ∪ E_1 ∪ {(A, A_0), (A, A_1)});

The function split(A) bisects a set of elements $A$ into two disjoint sets $A_0$ and $A_1$ such that the Čebyšev radius of both sets is reduced. This, for example, could be achieved by splitting the bounding box of $A$ along its longest side and distributing the elements with respect to the two parts.
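A concrete version of this recursion is short. The sketch below is illustrative, not the paper's code: 1D points stand in for element barycenters, the leaf size is arbitrary, and a median split is used as the split rule (one of several choices; the text above suggests bisecting the bounding box along its longest side).

```python
# Minimal Python sketch of a cluster tree in the spirit of Algorithm 4.1.
import numpy as np

class Node:
    def __init__(self, idx, children=()):
        self.idx = idx                # indices of the elements (points) in this cluster
        self.children = children

def build_tree(points, idx=None, leaf_size=16):
    """Recursive bisection of a 1D point cloud (median split along the coordinate)."""
    if idx is None:
        idx = np.arange(len(points))
    if len(idx) <= leaf_size:
        return Node(idx)
    mid = np.median(points[idx])
    left, right = idx[points[idx] <= mid], idx[points[idx] > mid]
    if len(left) == 0 or len(right) == 0:        # guard against degenerate splits
        return Node(idx)
    return Node(idx, (build_tree(points, left, leaf_size),
                      build_tree(points, right, leaf_size)))

pts = np.sort(np.random.default_rng(1).random(500))
root = build_tree(pts)
print(len(root.children), len(root.idx))
```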

Given a hierarchical decomposition $T$ of $\mathcal{M}_h$, it is straightforward to construct a partition $\mathcal{C}$ as outlined in Algorithm 4.2. The decomposition $T$ serves two purposes during the construction process: (i) it defines the pool $V$ of subsets available for the construction of clusters and (ii) it defines the sets $\mathrm{children}(A) := \{A'\in V : (A,A')\in E\}$. Calling $\mathrm{partition}(\mathcal{M}_h,\mathcal{M}_h)$ generates a partition $\mathcal{C}$ by specifying its far field $\mathcal{F}$ and near field $\mathcal{N}$. The resulting partition is symmetric in the sense of Remark 4.9. Note that in Algorithm 4.2, $\mathcal{N}$ and $\mathcal{F}$ are given in terms of sets of elements $\pi$ of the triangulation $\mathcal{M}_h$ of $D$.

Algorithm 4.2
  (N, F) := partition(A, B)
  if (∪_{π∈A} π, ∪_{π∈B} π) is an η-cluster then
    return (∅, {(A, B)});
  else
    A' := children(A); B' := children(B);
    return
      ∪_{a∈A', b∈B'} partition(a, b)   if A' ≠ ∅ and B' ≠ ∅ and |A| = |B|,
      ∪_{a∈A'} partition(a, B)          if A' ≠ ∅ and (|A| > |B| or B' = ∅),
      ∪_{b∈B'} partition(A, b)          if (|A| < |B| or A' = ∅) and B' ≠ ∅,
      ({(A, B)}, ∅)                     otherwise;

The approximate matrix-vector product $\tilde v = \widetilde{\mathbf{V}}u$ is evaluated in five steps: (i) compute $v^{\mathcal{N}} := \mathbf{N}u$, (ii) for all $\tau$ compute $u^\tau := \mathbf{Y}_\tau u$, (iii) for all $\sigma$ compute $v^\sigma := \sum_{(\sigma,\tau)\in\mathcal{F}}\mathbf{F}_{\sigma\tau}u^\tau$, (iv) compute $v^{\mathcal{F}} := \sum_\sigma\mathbf{X}_\sigma^Tv^\sigma$, (v) compute $\tilde v = v^{\mathcal{N}} + v^{\mathcal{F}}$.

Steps (ii) and (iv) can be accelerated using the hierarchical decomposition, as it is possible to represent the matrices $\mathbf{X}_\sigma$ and $\mathbf{Y}_\tau$ by the corresponding matrices related to the children of $\sigma$ and $\tau$:

$$\mathbf{X}_\sigma = \sum_{\sigma'\in\mathrm{children}(\sigma)}\mathbf{C}_{\sigma\sigma'}\,\mathbf{X}_{\sigma'}, \qquad (4.28)$$

$$\mathbf{Y}_\tau = \sum_{\tau'\in\mathrm{children}(\tau)}\mathbf{D}_{\tau\tau'}\,\mathbf{Y}_{\tau'}, \qquad (4.29)$$

where the matrices $\mathbf{C}_{\sigma\sigma'}$ and $\mathbf{D}_{\tau\tau'}$ represent so-called shift operators, as e.g. in (4.8). These relationships are exploited in Algorithms 4.3 and 4.4, which replace steps (ii) and (iv) above by scatter($\mathcal{M}_h$, u) and $v^{\mathcal{F}}$ := gather($\mathcal{M}_h$, 0). Note that only the matrices $\mathbf{Y}_\tau$ and $\mathbf{X}_\sigma$, where $\tau$ and $\sigma$ are leaves of the decomposition, are necessary. By Remark 4.8, these matrices contain $O(m^d)$ entries.

Algorithm 4.3
  u^τ := scatter(A, u)
  τ := ∪_{π∈A} π; A' := children(A);
  if A' = ∅ then
    u^τ := Y_τ u;
  else
    u^τ := Σ_{a∈A'} D_{ττ'} scatter(a, u) with τ' := ∪_{π∈a} π;
  return u^τ;

Algorithm 4.4
  gather(A, w_A)
  σ := ∪_{π∈A} π; A' := children(A);
  if A' = ∅ then
    return X_σ^T (v^σ + w_A);
  else
    return Σ_{a∈A'} gather(a, C_{σσ'}^T (v^σ + w_A)) with σ' := ∪_{π∈a} π;

4.4 Cluster Error

By construction, the local error bound (4.5) remains valid for a cluster expansion:

Theorem 4.11. If the covariance $V_a(x,x')$ satisfies Assumption 4.1, there are $C,C_1>0$, independent of $x,y\in D$ and of $m$, such that for all $(x,y)\in D_j\times D_{j'}$, $x\ne y$, and all $m\ge1$ it holds

$$|V_a(x,y) - V_a^m(x,y)| \le C\,(C_1\eta)^m\,|V_a(x,y)|. \qquad (4.30)$$

The replacement (4.30) of $V_a(x,y)$ by the global cluster expansion $V_a^m$ in the far field introduces an approximate bilinear form via

$$\widetilde V^m(u,v) = \int_D\int_D v(x)\,V_a^m(x,x')\,u(x')\,dx'\,dx, \quad u,v\in L^2(D). \qquad (4.31)$$

We associate the perturbed form $\widetilde V^m(\cdot,\cdot)$ with the matrix $\widetilde{\mathbf{V}}$ and approximate eigenpairs via the perturbed variational problem:

$$\tilde\phi_h\in S^p_h:\qquad \widetilde V^m(\tilde\phi_h,v) = \tilde\lambda\,(\tilde\phi_h,v) \quad \forall\,v\in S^p_h(D). \qquad (4.32)$$

Our purpose is to estimate the error $\|\phi_m - \tilde\phi_{m,h}\|_{L^2(D)}$. To apply the classical theory [8], we estimate the error in the Carleman operators corresponding to the cluster expansions:

Theorem 4.12. Let a cluster approximation $V_a^m$ of the two-point correlation $V_a$ be given which satisfies Assumption 4.1 with sufficiently small $\eta$. Then the corresponding covariance operators $\mathcal{V}_a$ and $\mathcal{V}_a^m$ satisfy, for all $m\in\mathbb{N}$ and for any $0<\eta<1$, the error bounds

$$\|\mathcal{V}_a - \mathcal{V}_a^m\|_{L^2(D)\to L^2(D)} \le C\,(C_1\eta)^m\,|D|^2, \qquad (4.33)$$

$$\|\mathcal{V}_a - \mathcal{V}_a^m\|_{L^\infty(D)\to L^\infty(D)} \le C\,(C_1\eta)^m. \qquad (4.34)$$

Proof. To show (4.33) we estimate, for arbitrary $f\in L^2(D)$,

$$\|\mathcal{V}_af - \mathcal{V}_a^mf\|^2_{L^2(D)} = \int_{x\in D}\Big(\int_{y\in D}\big(V_a(x,y)-V_a^m(x,y)\big)\,f(y)\,dy\Big)^2dx \le \|V_a - V_a^m\|^2_{L^\infty(D\times D)}\,|D|^2\,\|f\|^2_{L^2(D)},$$

and refer to (4.30). The proof of (4.34) is analogous.

As an immediate consequence, we obtain

Corollary 4.13. If Assumption 4.1 holds, the family $\{\mathcal{V}_a^m\}_{m\in\mathbb{N}}$ of operators converges, as $m\to\infty$, in $B(L^2(D))$ and $B(L^\infty(D))$, with rate $\Phi(m;\eta,V) = \exp(-b(\eta)m)$, to $\mathcal{V}_a$. The approximate covariance operators $\{\mathcal{V}_a^m\}_{m\in\mathbb{N}}$ are, in particular, collectively compact in these spaces.

Corollary 4.14. Let $P_h : L^2(D)\to S^p_h(D)$ denote the $L^2(D)$ projection. If Assumption 4.1 holds, the family $\{P_h\mathcal{V}_a^mP_h\}_{m\in\mathbb{N}}$ of operators obtained from the cluster approximation with expansion order $m$ converges, as $m\to\infty$ and $h\to0$, in $B(L^2(D))$ and $B(L^\infty(D))$ to $\mathcal{V}_a$. The approximate covariance operators $\{P_h\mathcal{V}_a^mP_h\}_{m\in\mathbb{N}}$ are, in particular, collectively compact approximations in the sense of [4] of $\mathcal{V}_a$ in these spaces.

Corollary 4.14 implies that the abstract error analysis of [8] is applicable to assess the impact of the cluster approximation on the approximate KL eigenpairs obtained from $P_h\mathcal{V}_a^mP_h$.

5 Conclusion

Computation of approximate KL expansions in general domains $D\subset\mathbb{R}^d$ for a given covariance function $V_a(x,x')$ with gfmm-accelerated matrix-vector multiplication is only advised if i) the accuracy of the Galerkin eigenpairs of $P_h\mathcal{V}_aP_h$ is preserved, and ii) the complexity of the cluster-approximated matrix-vector multiplication with $P_h\mathcal{V}_a^mP_h$, with the order $m$ chosen to satisfy i), is substantially lower than for the full matrix-vector multiplication.

In the case that $V_a$ is piecewise analytic in $D\times D$ and a $p$-FEM is used to approximate the KL-eigenfunctions, the gfmm will not yield a significant speedup, since the expansion order $m$ must grow proportionally to the polynomial degree $p$ of the FE subspace in order for the cluster error to match the exponential gfmm convergence rate.

In the case that $V_a$ belongs to $C^k$ in the sense of Definition 2.15, the convergence rate of the Galerkin-approximated KL eigenpairs is at best algebraic, but so is the

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques

Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion:

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion: tifica Lecture 1: Computation of Karhunen-Loeve Expansion: Alexander Litvinenko http://sri-uq.kaust.edu.sa/ Stochastic PDEs We consider div(κ(x, ω) u) = f (x, ω) in G, u = 0 on G, with stochastic coefficients

More information

Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs

Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs Implementation of Sparse Wavelet-Galerkin FEM for Stochastic PDEs Roman Andreev ETH ZÜRICH / 29 JAN 29 TOC of the Talk Motivation & Set-Up Model Problem Stochastic Galerkin FEM Conclusions & Outlook Motivation

More information

Multilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland

Multilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland Multilevel accelerated quadrature for elliptic PDEs with random diffusion Mathematisches Institut Universität Basel Switzerland Overview Computation of the Karhunen-Loéve expansion Elliptic PDE with uniformly

More information

Fast evaluation of boundary integral operators arising from an eddy current problem. MPI MIS Leipzig

Fast evaluation of boundary integral operators arising from an eddy current problem. MPI MIS Leipzig 1 Fast evaluation of boundary integral operators arising from an eddy current problem Steffen Börm MPI MIS Leipzig Jörg Ostrowski ETH Zürich Induction heating 2 Induction heating: Time harmonic eddy current

More information

Chapter 8 Integral Operators

Chapter 8 Integral Operators Chapter 8 Integral Operators In our development of metrics, norms, inner products, and operator theory in Chapters 1 7 we only tangentially considered topics that involved the use of Lebesgue measure,

More information

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49

REAL ANALYSIS II HOMEWORK 3. Conway, Page 49 REAL ANALYSIS II HOMEWORK 3 CİHAN BAHRAN Conway, Page 49 3. Let K and k be as in Proposition 4.7 and suppose that k(x, y) k(y, x). Show that K is self-adjoint and if {µ n } are the eigenvalues of K, each

More information

Commutative Banach algebras 79

Commutative Banach algebras 79 8. Commutative Banach algebras In this chapter, we analyze commutative Banach algebras in greater detail. So we always assume that xy = yx for all x, y A here. Definition 8.1. Let A be a (commutative)

More information

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS

SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS SPECTRAL PROPERTIES OF THE LAPLACIAN ON BOUNDED DOMAINS TSOGTGEREL GANTUMUR Abstract. After establishing discrete spectra for a large class of elliptic operators, we present some fundamental spectral properties

More information

Scientific Computing I

Scientific Computing I Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Neckel Winter 2013/2014 Module 8: An Introduction to Finite Element Methods, Winter 2013/2014 1 Part I: Introduction to

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

1 Continuity Classes C m (Ω)

1 Continuity Classes C m (Ω) 0.1 Norms 0.1 Norms A norm on a linear space X is a function : X R with the properties: Positive Definite: x 0 x X (nonnegative) x = 0 x = 0 (strictly positive) λx = λ x x X, λ C(homogeneous) x + y x +

More information

Principal Component Analysis

Principal Component Analysis Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

H 2 -matrices with adaptive bases

H 2 -matrices with adaptive bases 1 H 2 -matrices with adaptive bases Steffen Börm MPI für Mathematik in den Naturwissenschaften Inselstraße 22 26, 04103 Leipzig http://www.mis.mpg.de/ Problem 2 Goal: Treat certain large dense matrices

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define

HILBERT SPACES AND THE RADON-NIKODYM THEOREM. where the bar in the first equation denotes complex conjugation. In either case, for any x V define HILBERT SPACES AND THE RADON-NIKODYM THEOREM STEVEN P. LALLEY 1. DEFINITIONS Definition 1. A real inner product space is a real vector space V together with a symmetric, bilinear, positive-definite mapping,

More information

Foundations of the stochastic Galerkin method

Foundations of the stochastic Galerkin method Foundations of the stochastic Galerkin method Claude Jeffrey Gittelson ETH Zurich, Seminar for Applied Mathematics Pro*oc Workshop 2009 in isentis Stochastic diffusion equation R d Lipschitz, for ω Ω,

More information

Hierarchical Parallel Solution of Stochastic Systems

Hierarchical Parallel Solution of Stochastic Systems Hierarchical Parallel Solution of Stochastic Systems Second M.I.T. Conference on Computational Fluid and Solid Mechanics Contents: Simple Model of Stochastic Flow Stochastic Galerkin Scheme Resulting Equations

More information

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1

Scientific Computing WS 2018/2019. Lecture 15. Jürgen Fuhrmann Lecture 15 Slide 1 Scientific Computing WS 2018/2019 Lecture 15 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 15 Slide 1 Lecture 15 Slide 2 Problems with strong formulation Writing the PDE with divergence and gradient

More information

CONVERGENCE THEORY. G. ALLAIRE CMAP, Ecole Polytechnique. 1. Maximum principle. 2. Oscillating test function. 3. Two-scale convergence

CONVERGENCE THEORY. G. ALLAIRE CMAP, Ecole Polytechnique. 1. Maximum principle. 2. Oscillating test function. 3. Two-scale convergence 1 CONVERGENCE THEOR G. ALLAIRE CMAP, Ecole Polytechnique 1. Maximum principle 2. Oscillating test function 3. Two-scale convergence 4. Application to homogenization 5. General theory H-convergence) 6.

More information

JUHA KINNUNEN. Harmonic Analysis

JUHA KINNUNEN. Harmonic Analysis JUHA KINNUNEN Harmonic Analysis Department of Mathematics and Systems Analysis, Aalto University 27 Contents Calderón-Zygmund decomposition. Dyadic subcubes of a cube.........................2 Dyadic cubes

More information

Second Order Elliptic PDE

Second Order Elliptic PDE Second Order Elliptic PDE T. Muthukumar tmk@iitk.ac.in December 16, 2014 Contents 1 A Quick Introduction to PDE 1 2 Classification of Second Order PDE 3 3 Linear Second Order Elliptic Operators 4 4 Periodic

More information

Collocation based high dimensional model representation for stochastic partial differential equations

Collocation based high dimensional model representation for stochastic partial differential equations Collocation based high dimensional model representation for stochastic partial differential equations S Adhikari 1 1 Swansea University, UK ECCM 2010: IV European Conference on Computational Mechanics,

More information

i=1 α i. Given an m-times continuously

i=1 α i. Given an m-times continuously 1 Fundamentals 1.1 Classification and characteristics Let Ω R d, d N, d 2, be an open set and α = (α 1,, α d ) T N d 0, N 0 := N {0}, a multiindex with α := d i=1 α i. Given an m-times continuously differentiable

More information

A Randomized Algorithm for the Approximation of Matrices

A Randomized Algorithm for the Approximation of Matrices A Randomized Algorithm for the Approximation of Matrices Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert Technical Report YALEU/DCS/TR-36 June 29, 2006 Abstract Given an m n matrix A and a positive

More information

Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space

Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space Elements of Positive Definite Kernel and Reproducing Kernel Hilbert Space Statistical Inference with Reproducing Kernel Hilbert Space Kenji Fukumizu Institute of Statistical Mathematics, ROIS Department

More information

INTRODUCTION TO FINITE ELEMENT METHODS

INTRODUCTION TO FINITE ELEMENT METHODS INTRODUCTION TO FINITE ELEMENT METHODS LONG CHEN Finite element methods are based on the variational formulation of partial differential equations which only need to compute the gradient of a function.

More information

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005

University of Houston, Department of Mathematics Numerical Analysis, Fall 2005 4 Interpolation 4.1 Polynomial interpolation Problem: LetP n (I), n ln, I := [a,b] lr, be the linear space of polynomials of degree n on I, P n (I) := { p n : I lr p n (x) = n i=0 a i x i, a i lr, 0 i

More information

Integral Jensen inequality

Integral Jensen inequality Integral Jensen inequality Let us consider a convex set R d, and a convex function f : (, + ]. For any x,..., x n and λ,..., λ n with n λ i =, we have () f( n λ ix i ) n λ if(x i ). For a R d, let δ a

More information

LECTURE 5: THE METHOD OF STATIONARY PHASE

LECTURE 5: THE METHOD OF STATIONARY PHASE LECTURE 5: THE METHOD OF STATIONARY PHASE Some notions.. A crash course on Fourier transform For j =,, n, j = x j. D j = i j. For any multi-index α = (α,, α n ) N n. α = α + + α n. α! = α! α n!. x α =

More information

Traces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains

Traces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains Traces, extensions and co-normal derivatives for elliptic systems on Lipschitz domains Sergey E. Mikhailov Brunel University West London, Department of Mathematics, Uxbridge, UB8 3PH, UK J. Math. Analysis

More information

CHAPTER VIII HILBERT SPACES

CHAPTER VIII HILBERT SPACES CHAPTER VIII HILBERT SPACES DEFINITION Let X and Y be two complex vector spaces. A map T : X Y is called a conjugate-linear transformation if it is a reallinear transformation from X into Y, and if T (λx)

More information

Schwarz Preconditioner for the Stochastic Finite Element Method

Schwarz Preconditioner for the Stochastic Finite Element Method Schwarz Preconditioner for the Stochastic Finite Element Method Waad Subber 1 and Sébastien Loisel 2 Preprint submitted to DD22 conference 1 Introduction The intrusive polynomial chaos approach for uncertainty

More information

The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods

The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods The Closed Form Reproducing Polynomial Particle Shape Functions for Meshfree Particle Methods by Hae-Soo Oh Department of Mathematics, University of North Carolina at Charlotte, Charlotte, NC 28223 June

More information

Solving the Stochastic Steady-State Diffusion Problem Using Multigrid

Solving the Stochastic Steady-State Diffusion Problem Using Multigrid Solving the Stochastic Steady-State Diffusion Problem Using Multigrid Tengfei Su Applied Mathematics and Scientific Computing Advisor: Howard Elman Department of Computer Science Sept. 29, 2015 Tengfei

More information

BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET

BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET BOUNDARY VALUE PROBLEMS ON A HALF SIERPINSKI GASKET WEILIN LI AND ROBERT S. STRICHARTZ Abstract. We study boundary value problems for the Laplacian on a domain Ω consisting of the left half of the Sierpinski

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Miranda Holmes-Cerfon Applied Stochastic Analysis, Spring 2015 Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility),

More information

Lehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V

Lehrstuhl Informatik V. Lehrstuhl Informatik V. 1. solve weak form of PDE to reduce regularity properties. Lehrstuhl Informatik V Part I: Introduction to Finite Element Methods Scientific Computing I Module 8: An Introduction to Finite Element Methods Tobias Necel Winter 4/5 The Model Problem FEM Main Ingredients Wea Forms and Wea

More information

Appendix A Functional Analysis

Appendix A Functional Analysis Appendix A Functional Analysis A.1 Metric Spaces, Banach Spaces, and Hilbert Spaces Definition A.1. Metric space. Let X be a set. A map d : X X R is called metric on X if for all x,y,z X it is i) d(x,y)

More information

Topics in Harmonic Analysis Lecture 1: The Fourier transform

Topics in Harmonic Analysis Lecture 1: The Fourier transform Topics in Harmonic Analysis Lecture 1: The Fourier transform Po-Lam Yung The Chinese University of Hong Kong Outline Fourier series on T: L 2 theory Convolutions The Dirichlet and Fejer kernels Pointwise

More information

Approximation by Conditionally Positive Definite Functions with Finitely Many Centers

Approximation by Conditionally Positive Definite Functions with Finitely Many Centers Approximation by Conditionally Positive Definite Functions with Finitely Many Centers Jungho Yoon Abstract. The theory of interpolation by using conditionally positive definite function provides optimal

More information

Problem Set 6: Solutions Math 201A: Fall a n x n,

Problem Set 6: Solutions Math 201A: Fall a n x n, Problem Set 6: Solutions Math 201A: Fall 2016 Problem 1. Is (x n ) n=0 a Schauder basis of C([0, 1])? No. If f(x) = a n x n, n=0 where the series converges uniformly on [0, 1], then f has a power series

More information

A Sharpened Hausdorff-Young Inequality

A Sharpened Hausdorff-Young Inequality A Sharpened Hausdorff-Young Inequality Michael Christ University of California, Berkeley IPAM Workshop Kakeya Problem, Restriction Problem, Sum-Product Theory and perhaps more May 5, 2014 Hausdorff-Young

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Recurrence Relations and Fast Algorithms

Recurrence Relations and Fast Algorithms Recurrence Relations and Fast Algorithms Mark Tygert Research Report YALEU/DCS/RR-343 December 29, 2005 Abstract We construct fast algorithms for decomposing into and reconstructing from linear combinations

More information

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT

SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT SPECTRAL THEOREM FOR SYMMETRIC OPERATORS WITH COMPACT RESOLVENT Abstract. These are the letcure notes prepared for the workshop on Functional Analysis and Operator Algebras to be held at NIT-Karnataka,

More information

Linear Algebra. Session 12

Linear Algebra. Session 12 Linear Algebra. Session 12 Dr. Marco A Roque Sol 08/01/2017 Example 12.1 Find the constant function that is the least squares fit to the following data x 0 1 2 3 f(x) 1 0 1 2 Solution c = 1 c = 0 f (x)

More information

A spectral method for Schrödinger equations with smooth confinement potentials

A spectral method for Schrödinger equations with smooth confinement potentials A spectral method for Schrödinger equations with smooth confinement potentials Jerry Gagelman Harry Yserentant August 27, 2011; to appear in Numerische Mathematik Abstract We study the expansion of the

More information

4 Hilbert spaces. The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan

4 Hilbert spaces. The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan The proof of the Hilbert basis theorem is not mathematics, it is theology. Camille Jordan Wir müssen wissen, wir werden wissen. David Hilbert We now continue to study a special class of Banach spaces,

More information

Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations

Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations Global Maxwellians over All Space and Their Relation to Conserved Quantites of Classical Kinetic Equations C. David Levermore Department of Mathematics and Institute for Physical Science and Technology

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Fast Numerical Methods for Stochastic Computations

Fast Numerical Methods for Stochastic Computations Fast AreviewbyDongbinXiu May 16 th,2013 Outline Motivation 1 Motivation 2 3 4 5 Example: Burgers Equation Let us consider the Burger s equation: u t + uu x = νu xx, x [ 1, 1] u( 1) =1 u(1) = 1 Example:

More information

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t))

at time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t)) Notations In this chapter we investigate infinite systems of interacting particles subject to Newtonian dynamics Each particle is characterized by its position an velocity x i t, v i t R d R d at time

More information

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS

SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS SPECTRAL THEOREM FOR COMPACT SELF-ADJOINT OPERATORS G. RAMESH Contents Introduction 1 1. Bounded Operators 1 1.3. Examples 3 2. Compact Operators 5 2.1. Properties 6 3. The Spectral Theorem 9 3.3. Self-adjoint

More information

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017

ACM/CMS 107 Linear Analysis & Applications Fall 2017 Assignment 2: PDEs and Finite Element Methods Due: 7th November 2017 ACM/CMS 17 Linear Analysis & Applications Fall 217 Assignment 2: PDEs and Finite Element Methods Due: 7th November 217 For this assignment the following MATLAB code will be required: Introduction http://wwwmdunloporg/cms17/assignment2zip

More information

Solving the steady state diffusion equation with uncertainty Final Presentation

Solving the steady state diffusion equation with uncertainty Final Presentation Solving the steady state diffusion equation with uncertainty Final Presentation Virginia Forstall vhfors@gmail.com Advisor: Howard Elman elman@cs.umd.edu Department of Computer Science May 6, 2012 Problem

More information

Efficient Solvers for Stochastic Finite Element Saddle Point Problems

Efficient Solvers for Stochastic Finite Element Saddle Point Problems Efficient Solvers for Stochastic Finite Element Saddle Point Problems Catherine E. Powell c.powell@manchester.ac.uk School of Mathematics University of Manchester, UK Efficient Solvers for Stochastic Finite

More information

FINITE-DIMENSIONAL LINEAR ALGEBRA

FINITE-DIMENSIONAL LINEAR ALGEBRA DISCRETE MATHEMATICS AND ITS APPLICATIONS Series Editor KENNETH H ROSEN FINITE-DIMENSIONAL LINEAR ALGEBRA Mark S Gockenbach Michigan Technological University Houghton, USA CRC Press Taylor & Francis Croup

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

High order Galerkin appoximations for parametric second order elliptic partial differential equations

High order Galerkin appoximations for parametric second order elliptic partial differential equations Research Collection Report High order Galerkin appoximations for parametric second order elliptic partial differential equations Author(s): Nistor, Victor; Schwab, Christoph Publication Date: 2012 Permanent

More information

Stochastic Spectral Approaches to Bayesian Inference

Stochastic Spectral Approaches to Bayesian Inference Stochastic Spectral Approaches to Bayesian Inference Prof. Nathan L. Gibson Department of Mathematics Applied Mathematics and Computation Seminar March 4, 2011 Prof. Gibson (OSU) Spectral Approaches to

More information

Chapter 3 Conforming Finite Element Methods 3.1 Foundations Ritz-Galerkin Method

Chapter 3 Conforming Finite Element Methods 3.1 Foundations Ritz-Galerkin Method Chapter 3 Conforming Finite Element Methods 3.1 Foundations 3.1.1 Ritz-Galerkin Method Let V be a Hilbert space, a(, ) : V V lr a bounded, V-elliptic bilinear form and l : V lr a bounded linear functional.

More information

Optimal multilevel preconditioning of strongly anisotropic problems.part II: non-conforming FEM. p. 1/36

Optimal multilevel preconditioning of strongly anisotropic problems.part II: non-conforming FEM. p. 1/36 Optimal multilevel preconditioning of strongly anisotropic problems. Part II: non-conforming FEM. Svetozar Margenov margenov@parallel.bas.bg Institute for Parallel Processing, Bulgarian Academy of Sciences,

More information

TOEPLITZ OPERATORS. Toeplitz studied infinite matrices with NW-SE diagonals constant. f e C :

TOEPLITZ OPERATORS. Toeplitz studied infinite matrices with NW-SE diagonals constant. f e C : TOEPLITZ OPERATORS EFTON PARK 1. Introduction to Toeplitz Operators Otto Toeplitz lived from 1881-1940 in Goettingen, and it was pretty rough there, so he eventually went to Palestine and eventually contracted

More information

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction

NONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques

More information

Preconditioned space-time boundary element methods for the heat equation

Preconditioned space-time boundary element methods for the heat equation W I S S E N T E C H N I K L E I D E N S C H A F T Preconditioned space-time boundary element methods for the heat equation S. Dohr and O. Steinbach Institut für Numerische Mathematik Space-Time Methods

More information

arxiv: v5 [math.na] 16 Nov 2017

arxiv: v5 [math.na] 16 Nov 2017 RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem

More information

Statistical Convergence of Kernel CCA

Statistical Convergence of Kernel CCA Statistical Convergence of Kernel CCA Kenji Fukumizu Institute of Statistical Mathematics Tokyo 106-8569 Japan fukumizu@ism.ac.jp Francis R. Bach Centre de Morphologie Mathematique Ecole des Mines de Paris,

More information

Some Background Material

Some Background Material Chapter 1 Some Background Material In the first chapter, we present a quick review of elementary - but important - material as a way of dipping our toes in the water. This chapter also introduces important

More information

Fourier Transform & Sobolev Spaces

Fourier Transform & Sobolev Spaces Fourier Transform & Sobolev Spaces Michael Reiter, Arthur Schuster Summer Term 2008 Abstract We introduce the concept of weak derivative that allows us to define new interesting Hilbert spaces the Sobolev

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

Sampling and Low-Rank Tensor Approximations

Sampling and Low-Rank Tensor Approximations Sampling and Low-Rank Tensor Approximations Hermann G. Matthies Alexander Litvinenko, Tarek A. El-Moshely +, Brunswick, Germany + MIT, Cambridge, MA, USA wire@tu-bs.de http://www.wire.tu-bs.de $Id: 2_Sydney-MCQMC.tex,v.3

More information

Numerical methods for the discretization of random fields by means of the Karhunen Loève expansion

Numerical methods for the discretization of random fields by means of the Karhunen Loève expansion Numerical methods for the discretization of random fields by means of the Karhunen Loève expansion Wolfgang Betz, Iason Papaioannou, Daniel Straub Engineering Risk Analysis Group, Technische Universität

More information

Research Article Multiresolution Analysis for Stochastic Finite Element Problems with Wavelet-Based Karhunen-Loève Expansion

Research Article Multiresolution Analysis for Stochastic Finite Element Problems with Wavelet-Based Karhunen-Loève Expansion Mathematical Problems in Engineering Volume 2012, Article ID 215109, 15 pages doi:10.1155/2012/215109 Research Article Multiresolution Analysis for Stochastic Finite Element Problems with Wavelet-Based

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Mathematical Methods for Physics and Engineering

Mathematical Methods for Physics and Engineering Mathematical Methods for Physics and Engineering Lecture notes for PDEs Sergei V. Shabanov Department of Mathematics, University of Florida, Gainesville, FL 32611 USA CHAPTER 1 The integration theory

More information

Hamburger Beiträge zur Angewandten Mathematik

Hamburger Beiträge zur Angewandten Mathematik Hamburger Beiträge zur Angewandten Mathematik Numerical analysis of a control and state constrained elliptic control problem with piecewise constant control approximations Klaus Deckelnick and Michael

More information

Topics in Harmonic Analysis Lecture 6: Pseudodifferential calculus and almost orthogonality

Topics in Harmonic Analysis Lecture 6: Pseudodifferential calculus and almost orthogonality Topics in Harmonic Analysis Lecture 6: Pseudodifferential calculus and almost orthogonality Po-Lam Yung The Chinese University of Hong Kong Introduction While multiplier operators are very useful in studying

More information

Adaptive Collocation with Kernel Density Estimation

Adaptive Collocation with Kernel Density Estimation Examples of with Kernel Density Estimation Howard C. Elman Department of Computer Science University of Maryland at College Park Christopher W. Miller Applied Mathematics and Scientific Computing Program

More information

1 Directional Derivatives and Differentiability

1 Directional Derivatives and Differentiability Wednesday, January 18, 2012 1 Directional Derivatives and Differentiability Let E R N, let f : E R and let x 0 E. Given a direction v R N, let L be the line through x 0 in the direction v, that is, L :=

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Hierarchical Matrices. Jon Cockayne April 18, 2017

Hierarchical Matrices. Jon Cockayne April 18, 2017 Hierarchical Matrices Jon Cockayne April 18, 2017 1 Sources Introduction to Hierarchical Matrices with Applications [Börm et al., 2003] 2 Sources Introduction to Hierarchical Matrices with Applications

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9

MAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9 MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended

More information

Chapter 5 A priori error estimates for nonconforming finite element approximations 5.1 Strang s first lemma

Chapter 5 A priori error estimates for nonconforming finite element approximations 5.1 Strang s first lemma Chapter 5 A priori error estimates for nonconforming finite element approximations 51 Strang s first lemma We consider the variational equation (51 a(u, v = l(v, v V H 1 (Ω, and assume that the conditions

More information

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...

Functional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability... Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Compact operators on Banach spaces

Compact operators on Banach spaces Compact operators on Banach spaces Jordan Bell jordan.bell@gmail.com Department of Mathematics, University of Toronto November 12, 2017 1 Introduction In this note I prove several things about compact

More information

L. Levaggi A. Tabacco WAVELETS ON THE INTERVAL AND RELATED TOPICS

L. Levaggi A. Tabacco WAVELETS ON THE INTERVAL AND RELATED TOPICS Rend. Sem. Mat. Univ. Pol. Torino Vol. 57, 1999) L. Levaggi A. Tabacco WAVELETS ON THE INTERVAL AND RELATED TOPICS Abstract. We use an abstract framework to obtain a multilevel decomposition of a variety

More information

TD 1: Hilbert Spaces and Applications

TD 1: Hilbert Spaces and Applications Université Paris-Dauphine Functional Analysis and PDEs Master MMD-MA 2017/2018 Generalities TD 1: Hilbert Spaces and Applications Exercise 1 (Generalized Parallelogram law). Let (H,, ) be a Hilbert space.

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents

MATH MEASURE THEORY AND FOURIER ANALYSIS. Contents MATH 3969 - MEASURE THEORY AND FOURIER ANALYSIS ANDREW TULLOCH Contents 1. Measure Theory 2 1.1. Properties of Measures 3 1.2. Constructing σ-algebras and measures 3 1.3. Properties of the Lebesgue measure

More information

Wiener Measure and Brownian Motion

Wiener Measure and Brownian Motion Chapter 16 Wiener Measure and Brownian Motion Diffusion of particles is a product of their apparently random motion. The density u(t, x) of diffusing particles satisfies the diffusion equation (16.1) u

More information

Chap. 3. Controlled Systems, Controllability

Chap. 3. Controlled Systems, Controllability Chap. 3. Controlled Systems, Controllability 1. Controllability of Linear Systems 1.1. Kalman s Criterion Consider the linear system ẋ = Ax + Bu where x R n : state vector and u R m : input vector. A :

More information