Chapter 2 Spectral Expansions


In this chapter, we discuss fundamental and practical aspects of spectral expansions of random model data and of model solutions. We focus on a specific class of random processes in $L^2$ (see Appendix A) and seek Fourier-like expansions that converge with respect to the norm associated with the corresponding inner product. To clarify the discussion, we first briefly introduce the adopted notation; see Appendix A for additional details.

Let $(\Theta, \Sigma, P)$ be a probability space and $\theta \in \Theta$ a random event. We denote by $L^2(\Theta, P)$ the space of second-order random variables defined on $(\Theta, \Sigma, P)$, equipped with the inner product $\langle \cdot, \cdot \rangle$ and associated norm $\|\cdot\|$:
$$\langle U, V \rangle = \int_\Theta U(\theta) V(\theta)\, dP(\theta) = E[UV] \quad \forall U, V \in L^2(\Theta, P), \qquad U \in L^2(\Theta, P) \iff \langle U, U \rangle = \|U\|^2 < \infty, \qquad (2.1)$$
where $E[\cdot]$ is the expectation operator.

We consider $\mathbb{R}$-valued stochastic processes, indexed by $x \in \Omega \subset \mathbb{R}^d$, $d \ge 1$:
$$U : (x, \theta) \mapsto U(x, \theta) \in \mathbb{R},$$
where for any fixed $x$ the function $U(x, \cdot)$ is a random variable. We shall consider second-order stochastic processes:
$$U(x, \cdot) \in L^2(\Theta, P) \quad \forall x \in \Omega. \qquad (2.2)$$
Conversely, for a fixed event $\theta$, the function $U(\cdot, \theta)$ is called a realization of the stochastic process. We will assume that the realizations $U(\cdot, \theta)$ are almost surely in the Hilbert space $L^2(\Omega)$. We denote by $(\cdot, \cdot)$ and $\|\cdot\|_\Omega$ the inner product and norm on this space; specifically,
$$(u, v) \equiv \int_\Omega u(x) v(x)\, dx \quad \forall u, v \in L^2(\Omega), \qquad u \in L^2(\Omega) \iff \|u\|_\Omega^2 = (u, u) < \infty. \qquad (2.3)$$

O.P. Le Maître, O.M. Knio, Spectral Methods for Uncertainty Quantification, Scientific Computation, Springer Science+Business Media B.V.

The assumption $U \in L^2(\Omega)$ almost surely means that
$$P\big(\{\theta : (U(\cdot,\theta), U(\cdot,\theta)) < \infty\}\big) = 1. \qquad (2.4)$$

In Sect. 2.1, we focus our attention on the Karhunen-Loève (KL) representation of the stochastic process $U$. This classical approach essentially amounts to a bi-orthogonal decomposition based on the eigenfunctions obtained through analysis of its correlation function. Basic results pertaining to these decompositions are first stated; the implementation of KL decompositions is then outlined through specific examples. Section 2.2 discusses Polynomial Chaos (PC) representations of random variables. We start by introducing the classical concept of Homogeneous Chaos and the associated one-dimensional and multi-dimensional Hermite basis expansions. The convergence of these expansions is briefly discussed. These concepts are extended in Sect. 2.3, which deals with generalized PC expansions based on different families of orthogonal polynomials, as well as with situations involving dependent random variables. These expansions are then extended to random vectors and stochastic processes in Sect. 2.4. Finally, an elementary road map for the application of spectral representations is provided in Sect. 2.5.

2.1 Karhunen-Loève Expansion

The Karhunen-Loève (KL) decomposition is well known, and its use is widespread in many disciplines, including mechanics, medicine, signal analysis, biology, physics, and finance. It is also known as the proper orthogonal decomposition (POD) and, in the finite-dimensional case, as principal component analysis (PCA). It was proposed independently by different authors in the 1940s [106, 108, 136]. As further discussed below, the KL decomposition essentially involves the representation of a stochastic process according to a spectral decomposition of its correlation function.

2.1.1 Problem Formulation

Consider a stochastic process $U$,
$$U : \Omega \times \Theta \to \mathbb{R}, \qquad U \in L^2(\Omega) \otimes L^2(\Theta, P), \qquad (2.5)$$
for bounded $\Omega$.
Without loss of generality, we restrict ourselves to centered stochastic processes,
$$E[U(x, \cdot)] = \int_\Theta U(x, \theta)\, dP(\theta) = 0 \quad \forall x \in \Omega, \qquad (2.6)$$
and assume that $U$ is continuous in the mean-square sense:
$$\lim_{x' \to x} \| U(x', \cdot) - U(x, \cdot) \|^2 = 0 \quad \forall x \in \Omega. \qquad (2.7)$$
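As a concrete illustration of the mean-square continuity condition (2.7), one may sample a zero-mean Gaussian process with the exponential covariance used later in this chapter and verify that $E[(U(x',\cdot)-U(x,\cdot))^2] \to 0$ as $x' \to x$. The following Monte Carlo sketch is ours (the sample points, sample size, and tolerances are arbitrary choices):

```python
import numpy as np

# Gaussian process with exponential covariance C(x, x') = exp(-|x - x'|),
# sampled at a reference point x[0] and at points approaching it.
rng = np.random.default_rng(1)
x = np.array([0.3, 0.8, 0.35, 0.305, 0.3005])        # x[0] is the reference
C = np.exp(-np.abs(x[:, None] - x[None, :]))          # covariance matrix
Lc = np.linalg.cholesky(C + 1e-12 * np.eye(len(x)))   # factor for sampling
U = Lc @ rng.standard_normal((len(x), 100_000))       # realizations at x

# mean-square increment E[(U(x') - U(x0))^2]; the exact value for this
# covariance is 2 * (1 - exp(-|x' - x0|)), which vanishes as x' -> x0
msd = np.mean((U[1:] - U[0]) ** 2, axis=1)
exact = 2.0 * (1.0 - np.exp(-np.abs(x[1:] - x[0])))
```

The empirical mean-square increments shrink as the sample points approach the reference point, consistent with (2.7).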

The autocorrelation function of the process,
$$C_{UU} : \Omega \times \Omega \to \mathbb{R}, \qquad (2.8)$$
is given by
$$C_{UU}(x, x') = E\big[ U(x, \cdot) U(x', \cdot) \big] \quad \forall x, x' \in \Omega. \qquad (2.9)$$
Under the assumptions above, it can be shown that $C_{UU}$ is continuous on $\Omega \times \Omega$ and that
$$\int_\Omega \int_\Omega C_{UU}(x, x')^2\, dx\, dx' < +\infty. \qquad (2.10)$$
Therefore, the linear operator based on the so-called correlation kernel $K$, defined by
$$(Kv, w) = \int_\Omega \int_\Omega C_{UU}(x, x') v(x) w(x')\, dx\, dx', \qquad (2.11)$$
is a symmetric, semi-positive, Hilbert-Schmidt operator on $H = L^2(\Omega, \mathbb{R})$ equipped with the inner product $(\cdot, \cdot)$. We then have the following relevant results [13, 43, 214]:

- the kernel $K$ has real eigenvalues $\lambda_i$;
- the eigenvalues $\lambda_i$ are non-negative and can be arranged in decreasing order, $\lambda_1 \ge \lambda_2 \ge \cdots \ge 0$;
- the eigenvalues of $K$ are countable and are such that $\sum_{j \ge 1} \lambda_j^2 < +\infty$;
- for each eigenvalue, there exists a finite number of linearly independent eigenvectors;
- the collection of eigenvectors $\{u_i, i \ge 1\}$ constitutes an orthogonal basis of $H$; furthermore, the eigenvectors may be normalized so that $(u_i, u_j) = \delta_{ij}$.

The kernel $K$ thus has the spectral decomposition
$$K(x, x') = \sum_{i \ge 1} \lambda_i u_i(x) u_i(x'). \qquad (2.12)$$
In fact, the eigenvalues and eigenvectors of $K$ are the solutions of a Fredholm equation of the second kind:
$$\int_\Omega K(x, x') u_i(x')\, dx' = \lambda_i u_i(x). \qquad (2.13)$$

The KL decomposition of the stochastic process $U$ is consequently given by
$$U(x, \theta) = \sum_{i \ge 1} \sqrt{\lambda_i}\, u_i(x)\, \eta_i(\theta), \qquad (2.14)$$
where the random variables $\eta_i(\theta)$ are given by
$$\eta_i(\theta) = \frac{1}{\sqrt{\lambda_i}} \big( U(\cdot, \theta), u_i \big). \qquad (2.15)$$
It is immediate to verify that the random variables $\eta_i$ have zero mean, unit variance, and are mutually uncorrelated:
$$E[\eta_i] = 0, \qquad E[\eta_i \eta_j] = \delta_{ij}.$$
The random variables are thus orthogonal; however, they are generally not independent (except, in particular, in the case of Gaussian processes). As for the correlation kernel, the correlation function can also be expressed in terms of the eigenvalues and eigenfunctions, namely
$$C_{UU}(x, x') = E\big[ U(x, \cdot) U(x', \cdot) \big] = E\Big[ \Big( \sum_{i \ge 1} \sqrt{\lambda_i}\, u_i(x)\, \eta_i \Big) \Big( \sum_{j \ge 1} \sqrt{\lambda_j}\, u_j(x')\, \eta_j \Big) \Big] = \sum_{i \ge 1} \sum_{j \ge 1} \sqrt{\lambda_i \lambda_j}\, u_i(x) u_j(x')\, E[\eta_i \eta_j] = \sum_{i \ge 1} \lambda_i u_i(x) u_i(x').$$

2.1.2 Properties of KL Expansions

We will assume in the following that the eigenvalues are arranged in decreasing order, i.e. $\lambda_1 \ge \lambda_2 \ge \cdots$. The KL expansion is optimal in the mean-square sense; that is, when truncated after a finite number $N_{KL}$ of terms, the resulting approximation $\hat{U}$ minimizes the mean-square error [136, 138]:
$$\epsilon^2_{N_{KL}} \equiv E\big[ \| U(x, \cdot) - \hat{U}(x, \cdot) \|^2 \big] = \sum_{i, j > N_{KL}} \sqrt{\lambda_i \lambda_j}\, (u_i, u_j)\, E[\eta_i \eta_j] = \sum_{i, j > N_{KL}} \sqrt{\lambda_i \lambda_j}\, \delta_{ij}\, \delta_{ij} = \sum_{i > N_{KL}} \lambda_i.$$

In other words, no other approximation of $U$ by a series of $N_{KL}$ terms results in a smaller mean-square error. Formally, the KL decomposition provides an optimal representation of a process $U$ satisfying the assumptions in Sect. 2.1.1, using a series expansion involving a finite number of random variables:
$$U(x, \theta) \approx \hat{U}(x, \theta) = \sum_{i=1}^{N_{KL}} \sqrt{\lambda_i}\, u_i(x)\, \eta_i(\theta). \qquad (2.16)$$
The mean-square truncation error decreases monotonically with the number of terms retained in the expansion, at a rate that depends on the decay of the spectrum of $K$. The higher the rate of spectral decay, the smaller the number of terms needed in the expansion. Specifically, the number of terms needed to achieve a specified error threshold depends on the correlation function of the process: the more correlated the process, the smaller the number of terms needed to achieve the desired threshold. Conversely, if the process is poorly correlated, a larger number of terms is needed. In the limit where $U$ corresponds to a white noise, i.e. $C_{UU}(x, x') \propto \delta_0(x - x')$, an infinite number of terms would be necessary, as further discussed below. Finally, one can also show (see for instance [90]) that the KL decomposition of $K$ based on its eigenfunctions is the only expansion that results in orthogonal random variables.

2.1.3 Practical Determination

We now provide examples of the solution of (2.13) for $\Omega = [0, 1]$.

Rational Spectra

Suppose that the process $U$ has a known rational spectrum of the form
$$S(f) = \frac{N(f^2)}{D(f^2)}, \qquad (2.17)$$
where $N$ and $D$ are polynomial functions of the frequency, $f$. In the case of a stationary process, the Fredholm equation becomes
$$\int_\Omega u_j(x') \int \exp\big( \mathrm{i}(x - x') f \big)\, \frac{N(f^2)}{D(f^2)}\, df\, dx' = \lambda_j u_j(x), \qquad (2.18)$$
where in this equation $\mathrm{i}^2 = -1$. Equation (2.18) can be recast as a second-order differential equation for the eigenfunction $u_i(x)$ [90]:
$$\lambda_i\, D\!\left[ \frac{d^2}{dx^2} \right] u_i(x) = N\!\left[ \frac{d^2}{dx^2} \right] u_i(x), \qquad (2.19)$$

which must be solved for each of the eigenvalues, $\lambda_i$. Analytical solutions of (2.19) exist for some classical spectra [251]. One of the known solutions concerns the exponential kernel
$$K(x, x') = \sigma_U^2 \exp\big( -|x - x'| / b \big), \qquad (2.20)$$
which features in the study of first-order Markov processes. The (dimensional) parameter $b > 0$ is the correlation length (or correlation time), and $\sigma_U^2$ is the process variance. In this case, $N$ and $D$ are given by [221]
$$N(f) = 2b, \qquad D(f) = 1 + b^2 f^2, \qquad (2.21)$$
and the analytical solution of (2.19) is given by [90]
$$u_i(x) = \begin{cases} \dfrac{\cos[\omega_i (x - 1/2)]}{\sqrt{\frac{1}{2} + \frac{\sin(\omega_i)}{2\omega_i}}} & \text{if } i \text{ is even}, \\[2mm] \dfrac{\sin[\omega_i (x - 1/2)]}{\sqrt{\frac{1}{2} - \frac{\sin(\omega_i)}{2\omega_i}}} & \text{if } i \text{ is odd}, \end{cases} \qquad (2.22)$$
and
$$\lambda_i = \sigma_U^2\, \frac{2b}{1 + (\omega_i b)^2}, \qquad (2.23)$$
where the $\omega_i$ are the (ordered) positive roots of the characteristic equation
$$\big[ 1 - b\omega \tan(\omega/2) \big] \big[ b\omega + \tan(\omega/2) \big] = 0. \qquad (2.24)$$

In Fig. 2.1, we plot the first 10 eigenvalues and eigenfunctions for the exponential kernel with $b = 1$, $\sigma_U = 1$, and $\Omega = [0, 1]$. One observes that the eigenfunctions $u_i(x)$ exhibit oscillations whose frequencies increase with increasing index $i$.

Fig. 2.1 Karhunen-Loève decomposition of the exponential kernel on [0, 1], with correlation length $b = 1$. Left: first ten eigenfunctions, $\sqrt{\lambda_i}\, u_i(x)$. Right: spectrum of eigenvalues, $\lambda_i$. Adapted from [128]
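The roots $\omega_i$ of (2.24) are easily bracketed numerically: the two factors of the characteristic equation alternate, each contributing one root per interval $(k\pi, (k+1)\pi)$. A short sketch of the computation of the $\omega_i$ and of the eigenvalues (2.23) is given below; the function name and tolerances are ours, not the book's:

```python
import numpy as np
from scipy.optimize import brentq

def exp_kernel_kl(b=1.0, sigma2=1.0, n=10):
    """First n KL eigenvalues (2.23) of K(x,x') = sigma2*exp(-|x-x'|/b)
    on [0, 1], via the roots omega_i of the characteristic equation (2.24).
    The two factors of (2.24) alternate, one root per interval of length pi."""
    eps = 1e-9
    omega = []
    for k in range(n):
        lo, hi = k * np.pi + eps, (k + 1) * np.pi - eps
        if k % 2 == 0:
            f = lambda w: 1.0 - b * w * np.tan(w / 2)   # first factor
        else:
            f = lambda w: b * w + np.tan(w / 2)          # second factor
        omega.append(brentq(f, lo, hi))                  # bracketed root
    omega = np.asarray(omega)
    lam = sigma2 * 2.0 * b / (1.0 + (omega * b) ** 2)    # eigenvalues (2.23)
    return omega, lam

om, lam = exp_kernel_kl(b=1.0, n=10)
```

For $b = 1$ this reproduces the decreasing spectrum plotted in Fig. 2.1; the partial sum of the eigenvalues approaches the total variance $\int_0^1 C_{UU}(x,x)\,dx = \sigma_U^2$.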

Fig. 2.2 The spectrum of the exponential kernel for different values of the correlation length, $b$. Left: $b \in [0.1, 1]$. Right: $b \in [1, 10]$. Note the logarithmic scale in the plots

Table 2.1 $E^2_{\sigma_U}$ and $E^\infty_{\sigma_U}$ for $N_{KL} = 4, 6, 10, 20, 40$, with $b = 1/2$, 1, and 2. [Numerical entries not recoverable from the source]

Table 2.2 $E^2_K$ and $E^\infty_K$ for $N_{KL} = 4, 6, 10, 20, 40$, with $b = 1/2$, 1, and 2. [Numerical entries not recoverable from the source]

As previously mentioned, the number of terms in the KL expansion needed for an adequate approximation of the stochastic process will be large if the spectrum decays slowly. To illustrate this effect, Fig. 2.2 depicts the dependence of the first 20 eigenvalues on the correlation length, $b$. In practice, the KL expansion must be truncated and the truncation error must be carefully evaluated. Tables 2.1 and 2.2 provide the $L^2$ and $L^\infty$ norms of the errors incurred in the approximation of the correlation kernel and of the local standard

deviation $\sigma_U$ for $x \in \Omega$. These norms are defined according to
$$E^p_K(N_{KL}) = \left[ \int_\Omega \int_\Omega \Big| K(x, x') - \sum_{i=1}^{N_{KL}} \lambda_i u_i(x) u_i(x') \Big|^p\, dx\, dx' \right]^{1/p}, \qquad (2.25)$$
$$E^p_{\sigma_U}(N_{KL}) = \left[ \int_0^1 \Big| \sigma_U - \Big( \sum_{i=1}^{N_{KL}} \lambda_i u_i^2(x) \Big)^{1/2} \Big|^p\, dx \right]^{1/p}. \qquad (2.26)$$
In the present example, $\sigma_U^2 = C_{UU}(x, x) = 1$. Table 2.1 provides $E^2_{\sigma_U}$ and $E^\infty_{\sigma_U}$ for different values of $N_{KL}$, with $b = 1/2$, 1, and 2. Table 2.2 provides the corresponding values of $E^2_K$ and $E^\infty_K$. The results show that with $N_{KL}$ fixed, the truncation error behaves approximately as $1/b$. With a fixed value of $b$, $E^2_{\sigma_U}$ and $E^\infty_K$ decay as $N_{KL}^{-1}$, whereas a more rapid decay is observed for $E^\infty_{\sigma_U}$ and $E^2_K$.

To better appreciate the effect of truncation, we plot in Fig. 2.3 the correlation function $C_{\hat{U}\hat{U}}(x, x')$ of $\hat{U}$ on $[0, 1] \times [0, 1]$, for $b = 1$ and using $N_{KL} = 6$, as well as the difference with the exact correlation function $C_{UU}$. One observes that the error peaks for $x \approx x'$, and that it exhibits an oscillatory decay away from the axis $x = x'$. One can thus conclude that the truncation essentially affects small-scale correlations. This kind of behavior is not unique to rational spectra, and is generally observed for other types of kernels as well. Consequently, one can generalize the present observation by noting that if the autocorrelation function decays slowly with respect to the size of the domain, then the KL expansion converges rapidly, and a small number of terms need be retained in the series without incurring significant error.

Non-rational Spectra

In the case of non-rational spectra, we do not have a systematic means of analytically decomposing the correlation kernel. For certain kernels, however, it is still possible to transform the Fredholm integral equation into a differential equation by means of differentiation, but analytical solutions are available only in particular cases. Notable examples are the triangular kernel [90] and band-limited white noise [208].
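Since the mean-square truncation error equals the tail eigenvalue sum $\sum_{i > N_{KL}} \lambda_i$ (Sect. 2.1.2), a truncation level for a prescribed relative error can be selected directly from the spectrum. A grid-based sketch of this selection rule, with illustrative function names and tolerances of our choosing:

```python
import numpy as np

def kl_truncation_level(kernel, tol=0.05, n=500):
    """Smallest N_KL such that the tail eigenvalue sum of the kernel on
    [0, 1] is below tol times the total variance; simple grid-based sketch."""
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h
    lam = np.linalg.eigvalsh(kernel(x[:, None], x[None, :]) * h)[::-1]
    lam = np.clip(lam, 0.0, None)        # remove tiny negative round-off
    tail = lam.sum() - np.cumsum(lam)    # error after keeping k modes
    return int(np.argmax(tail < tol * lam.sum())) + 1

expo = lambda s, t: np.exp(-np.abs(s - t))   # exponential kernel, b = 1
```

For the exponential kernel with $b = 1$, two modes already capture about 88% of the variance, so a 15% relative tolerance is met with $N_{KL} = 2$, while tighter tolerances require more terms.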
The differential-equation approach above is difficult to generalize to multidimensional domains, in particular when the latter do not have a simple geometry [209]. In these situations, it is necessary to rely on numerical solution methods, which have a wider scope of applicability, including multidimensional domains, non-stationary processes, etc.

Numerical Resolution

Numerical methods for the solution of the eigenvalue problem generally fall into two broad categories, namely those relying on a variational formulation of (2.13) and those relying on Galerkin-type approximations. We shall focus on the latter approach.

Fig. 2.3 Top: approximated correlation function corresponding to $\hat{U}$, using $N_{KL} = 6$. Bottom: difference $C_{UU}(x, x') - C_{\hat{U}\hat{U}}(x, x')$. The correlation scale is $b = 1$ and the process variance $\sigma_U^2 = 1$

Let $h_i(x)$, $i = 1, \ldots, N_x$, be a set of basis functions of a Hilbert space $V \subset L^2(\Omega)$. In this space, the eigenfunction $u_i(x)$ can be approximated as
$$u_i(x) \approx \sum_{k=1}^{N_x} d_k^{(i)} h_k(x).$$
Substituting this approximation into (2.13), we get
$$\epsilon_{N_x}(x) = \sum_{k=1}^{N_x} d_k^{(i)} \left[ \int_\Omega K(x, x') h_k(x')\, dx' - \lambda_i h_k(x) \right]. \qquad (2.27)$$

We now seek the coefficients $d_k^{(i)}$ such that the approximation error $\epsilon_{N_x}(x)$ is orthogonal to the subspace spanned by the $h_k$'s; in other words, $(h_k, \epsilon_{N_x}) = 0$, $\forall k \in \{1, \ldots, N_x\}$. Enforcement of this constraint results in a system of equations of the form
$$\sum_{k=1}^{N_x} d_k^{(i)} \left[ \int_\Omega \int_\Omega K(x, x') h_k(x') h_j(x)\, dx\, dx' - \lambda_i \int_\Omega h_k(x) h_j(x)\, dx \right] = 0, \qquad (2.28)$$
for $j = 1, 2, \ldots, N_x$. This system may be expressed in matrix form; omitting the index $i$ of the eigenfunction, we have
$$\big( [K]_{jk} - \lambda [M]_{jk} \big)\, d_k = 0. \qquad (2.29)$$
Thus, we need to solve a generalized eigenvalue problem with stiffness matrix
$$[K]_{jk} = \int_\Omega \int_\Omega K(x, x') h_k(x') h_j(x)\, dx\, dx' \in \mathbb{R}^{N_x \times N_x}, \qquad (2.30)$$
and mass matrix
$$[M]_{jk} = \int_\Omega h_k(x) h_j(x)\, dx \in \mathbb{R}^{N_x \times N_x}. \qquad (2.31)$$
Note that these matrices inherit the properties of the correlation kernel; in particular, $[K]$ and $[M]$ are symmetric and positive semi-definite. Efficient numerical libraries can be used to solve such eigenvalue problems; examples in the open domain include LAPACK and ARPACK.

One of the challenges in the numerical solution of the eigenvalue problem is the cost of the computations. In the case where the correlation function decays rapidly with $|x - x'|$, the eigenfunctions are generally highly oscillatory, and a large number of basis functions $h_i(x)$ may consequently be needed to properly represent these functions on $\Omega$. Accordingly, the dimension of the matrices $[K]$ and $[M]$ is also large, which directly contributes to the cost of the computations. If, on the other hand, the decay of the correlation function is slow (strongly correlated process), then a small basis may be used, but the stiffness and mass matrices generally tend to be full. This limits the efficiency of iterative solvers, which is generally higher when the system is sparse. Not surprisingly, the development of efficient eigenvalue solvers (such as multipole expansions) continues to be the subject of focused research efforts, particularly for 3D domains.
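The generalized eigenproblem (2.29) can be sketched with a piecewise-constant basis on a uniform grid, for which $[M]$ is diagonal and the double integral (2.30) can be approximated by the midpoint rule. This is a minimal illustration of ours, not production code:

```python
import numpy as np
from scipy.linalg import eigh

def kl_galerkin(kernel, n=400, a=0.0, b=1.0):
    """Solve ([K] - lam [M]) d = 0, cf. (2.29), for the KL modes on [a, b],
    using piecewise-constant basis functions h_k on n uniform cells; the
    double integral (2.30) is approximated by the midpoint rule."""
    h = (b - a) / n
    x = a + (np.arange(n) + 0.5) * h              # cell midpoints
    K = kernel(x[:, None], x[None, :]) * h * h    # stiffness matrix [K]
    M = h * np.eye(n)                             # mass matrix [M] (diagonal)
    lam, d = eigh(K, M)                           # symmetric generalized solver
    idx = np.argsort(lam)[::-1]                   # decreasing eigenvalues
    return lam[idx], d[:, idx], x

# cross-check against the exponential kernel with b = 1, whose largest
# eigenvalue from (2.23)-(2.24) is approximately 0.7389
lam, d, x = kl_galerkin(lambda s, t: np.exp(-np.abs(s - t)))
```

The computed eigenvalues are non-negative and their sum matches the trace $\int_0^1 K(x, x)\,dx = 1$, as expected for a semi-positive kernel.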

2.1.4 Gaussian Processes

As discussed above, the KL decomposition is in fact a representation of a stochastic process in terms of the eigenfunctions of its correlation kernel. Note that there are infinitely many second-order processes sharing the same correlation kernel, which consequently admit expansions on the same set of eigenfunctions. What distinguishes processes having the same autocorrelation kernel is the joint probability law of the random variables $\eta_i$. By construction, these random variables have zero mean, are mutually orthogonal, and have unit variance.

In many cases, the random process is (or can be assumed to be) Gaussian, leading to significant simplifications. Indeed, the KL expansion of a Gaussian process involves random variables which are not only uncorrelated but independent. In particular, a Gaussian process $U$ has the truncated expansion
$$\hat{U}(x, \theta) = \sum_{i=1}^{N_{KL}} \sqrt{\lambda_i}\, u_i(x)\, \xi_i(\theta), \qquad (2.32)$$
where the $\xi_i$ are independent, centered, normalized Gaussian random variables:
$$E[\xi_i] = 0, \qquad E[\xi_i^2] = 1, \qquad E[\xi_i \xi_j] = 0 \ \text{ for } i \ne j. \qquad (2.33)$$
In addition, the joint probability density of the $\xi_i$ factorizes:
$$p_\xi(y_1, \ldots, y_{N_{KL}}) = \prod_{i=1}^{N_{KL}} \frac{1}{\sqrt{2\pi}} \exp\big( -y_i^2/2 \big). \qquad (2.34)$$
The factorized form of the joint probability density of the $\xi_i$ greatly simplifies its sampling for the simulation of the process.

It should be emphasized that, although $\hat{U}(x, \cdot) \in L^2(\Theta, P)$, Gaussian processes are not bounded. In some cases this is problematic, particularly when KL expansions are used to represent bounded physical processes, for instance temperature or diffusivity fields. In these situations, one wishes to construct approximations $\hat{U}$ of $U$ that are physically meaningful for any truncation level. Several studies are being directed towards the construction of admissible truncated KL expansions; see [218] for recent results.
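Because the $\xi_i$ in (2.32) are independent standard Gaussians, realizations of the truncated process are obtained by drawing $N_{KL}$ i.i.d. normal samples and summing the scaled modes. The following discrete (grid-based) sketch, with parameters of our choosing, illustrates this and checks that the sample covariance reproduces the rank-$N_{KL}$ approximation of the kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 51)
C = np.exp(-np.abs(x[:, None] - x[None, :]))   # exponential kernel, b = 1
lam, v = np.linalg.eigh(C)                      # discrete KL modes of the grid covariance
lam, v = lam[::-1], v[:, ::-1]                  # decreasing order
n_kl = 6
A = v[:, :n_kl] * np.sqrt(lam[:n_kl])           # sqrt(lam_i) * u_i, column-wise
xi = rng.standard_normal((n_kl, 40_000))        # independent N(0,1) coordinates
U = A @ xi                                      # truncated-KL realizations, cf. (2.32)
C_hat = A @ A.T                                 # covariance of the truncated process
```

With six modes, the truncation retains most of the variance, and the empirical covariance of the samples agrees with $\hat{C} = A A^T$ up to Monte Carlo noise.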
For brevity, such constructions are not discussed further in this monograph; we simply mention that they can be achieved through the approximation of dependent $\eta_i$ by means of polynomial chaos expansions.

Remark

Let us first point out that, in order to implement the KL decomposition, one must know or be able to determine the correlation function of the process to be represented. Therefore, KL expansions are particularly useful when one is representing random model data, which may generally be amenable to analysis and/or subjected

to experimental observations and diagnostics. In predictive contexts, on the other hand, the KL formalism may not be as powerful, since the probability laws of the random process, not yet determined, are generally not known a priori. Specifically, the correlation function of the solution of a stochastic model problem is generally one of the quantities that one seeks to determine, and is consequently not known or specified. Thus, alternative means of representing stochastic processes are needed, though the KL formalism may still be used to guide the definition of essential ingredients of these alternative representations.

Essentially, the KL decomposition is a series expansion involving deterministic functions (namely, the eigenfunctions $u_i$) and random variables (the $\eta_i$). The deterministic functions are fixed by the form of the autocorrelation kernel, whereas the joint probability law of the $\eta_i$'s remains unknown in the absence of information other than the second-order properties of the process. Specifically, one can only ascertain that the random variables have zero mean, unit variance, and are mutually orthogonal. One can envision modifying the structure of the KL expansion by relaxing its bi-orthogonal character, so as to offer more flexibility. In particular, one can envision prescribing a priori the functional form of the random coefficients in the expansion, for instance as polynomials of independent random variables with given distribution, and then seeking the (not necessarily orthogonal) deterministic functions that minimize a given error norm, much in the same fashion that spectral approximations are used in the solution of deterministic PDEs. Such an approach corresponds precisely to the PC decompositions which we discuss next.

2.2 Polynomial Chaos Expansion

In this section, we discuss the PC expansion of a random variable. Let us first introduce some definitions and notations.
We consider $\mathbb{R}$-valued random variables $U$ defined on a probability space $(\Theta, \Sigma, P)$,
$$U : \Theta \to \mathbb{R}, \qquad (2.35)$$
and denote by $L^2(\Theta, P)$ the set of second-order random variables. Let $\{\xi_i\}_{i=1}^{\infty}$ be a sequence of centered, normalized, mutually orthogonal Gaussian variables. Let $\hat{\Gamma}_p$ denote the space of polynomials in $\{\xi_i\}_{i=1}^{\infty}$ of degree less than or equal to $p$; let $\Gamma_p$ denote the set of polynomials that belong to $\hat{\Gamma}_p$ and are orthogonal to $\hat{\Gamma}_{p-1}$; and let $\bar{\Gamma}_p$ denote the space spanned by $\Gamma_p$. We have:
$$\hat{\Gamma}_p = \hat{\Gamma}_{p-1} \oplus \bar{\Gamma}_p, \qquad L^2(\Theta, P) = \bigoplus_{i=0}^{\infty} \bar{\Gamma}_i. \qquad (2.36)$$
The subspace $\bar{\Gamma}_p$ of $L^2(\Theta, P)$ is called the $p$-th Homogeneous Chaos, whereas $\Gamma_p$ is called the Polynomial Chaos of order

$p$. Thus, the Polynomial Chaos of order $p$ consists of all polynomials of order $p$, involving all possible combinations of the random variables $\{\xi_i\}_{i=1}^{\infty}$. Note that since random variables are functions, the polynomial chaoses are functions of functions, and are thus regarded as functionals.

Each second-order random variable $U \in L^2(\Theta, P)$ admits a PC representation of the form [25]:
$$U(\theta) = u_0 \Gamma_0 + \sum_{i_1=1}^{\infty} u_{i_1} \Gamma_1(\xi_{i_1}(\theta)) + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} u_{i_1 i_2} \Gamma_2(\xi_{i_1}(\theta), \xi_{i_2}(\theta)) + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} \sum_{i_3=1}^{i_2} u_{i_1 i_2 i_3} \Gamma_3(\xi_{i_1}(\theta), \xi_{i_2}(\theta), \xi_{i_3}(\theta)) + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} \sum_{i_3=1}^{i_2} \sum_{i_4=1}^{i_3} u_{i_1 i_2 i_3 i_4} \Gamma_4(\xi_{i_1}(\theta), \xi_{i_2}(\theta), \xi_{i_3}(\theta), \xi_{i_4}(\theta)) + \cdots. \qquad (2.37)$$
This representation is convergent in the mean-square sense:
$$\lim_{p \to \infty} E\left[ \left( u_0 \Gamma_0 + \cdots + \sum_{i_1=1}^{\infty} \cdots \sum_{i_p=1}^{i_{p-1}} u_{i_1 \ldots i_p} \Gamma_p(\xi_{i_1}, \ldots, \xi_{i_p}) - U \right)^2 \right] = 0. \qquad (2.38)$$
By construction, chaos polynomials whose order is greater than $p = 0$ have vanishing expectation:
$$E[\Gamma_{p>0}] = 0. \qquad (2.39)$$
Also, all polynomials are mutually orthogonal with respect to the Gaussian measure associated with the random variables $\{\xi_i\}_{i=1}^{\infty}$. In fact, one can express the expectation of $U$ either in the original space or in the Gaussian space spanned by $\{\xi_i\}_{i=1}^{\infty}$:
$$E[U] = \int_\Theta U(\theta)\, dP(\theta), \qquad (2.40)$$
$$E[U] = \int U(y)\, p_\xi(y)\, dy \equiv \langle U \rangle, \qquad (2.41)$$

where $U(\xi)$ is understood as the PC representation of $U(\theta)$, and $p_\xi$ stands for the Gaussian probability density function:
$$p_\xi(y) = \prod_{i=1}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\big( -y_i^2/2 \big). \qquad (2.42)$$
In the following, we use the brackets $\langle \cdot \rangle$ to make clear that the expectation is measured with respect to the probability distribution of the random variables used in the expansion.³

³ We will see later that these random variables are not necessarily Gaussian.

Classically, in order to facilitate the manipulation of the PC expansion, one relies on a one-to-one correspondence between the chaoses $\Gamma(\cdot)$ and new functionals $\Psi(\cdot)$. This results in a more compact expression of the expansion of the random variable:
$$U(\xi) = \sum_{k=0}^{\infty} u_k \Psi_k(\xi), \qquad \xi = \{\xi_1, \xi_2, \ldots\}, \qquad (2.43)$$
where the deterministic expansion coefficients $u_k$ are simply called the PC coefficients. The convention $\Psi_0 = \Gamma_0$ is often adopted, and it is further assumed that the correspondence is set such that the $\Psi_i$'s are ordered with increasing polynomial order.

2.2.1 Polynomial Chaos System

The construction outlined above involves an infinite collection $\{\xi_i\}$ of normalized, uncorrelated Gaussian random variables. In practice, particularly for computational purposes, it is necessary to restrict the representation to a finite number of random variables, which leads to PC expansions of finite dimension. Specifically, the PC of dimension $N$ and order $p$ is the subspace of $\bar{\Gamma}_p$ generated by the elements of $\Gamma_p$ that involve only the $N$ random variables $\xi_1, \ldots, \xi_N$. For finite dimensions, the infinite sums in (2.37) are replaced by finite sums over the $N$ dimensions. For instance, for an expansion in two dimensions ($N = 2$), (2.37) becomes:
$$U = u_0 \Gamma_0 + \sum_{i_1=1}^{2} u_{i_1} \Gamma_1(\xi_{i_1}) + \sum_{i_1=1}^{2} \sum_{i_2=1}^{i_1} u_{i_1 i_2} \Gamma_2(\xi_{i_1}, \xi_{i_2}) + \sum_{i_1=1}^{2} \sum_{i_2=1}^{i_1} \sum_{i_3=1}^{i_2} u_{i_1 i_2 i_3} \Gamma_3(\xi_{i_1}, \xi_{i_2}, \xi_{i_3}) + \sum_{i_1=1}^{2} \sum_{i_2=1}^{i_1} \sum_{i_3=1}^{i_2} \sum_{i_4=1}^{i_3} u_{i_1 i_2 i_3 i_4} \Gamma_4(\xi_{i_1}, \xi_{i_2}, \xi_{i_3}, \xi_{i_4}) + \cdots \qquad (2.44)$$

or, alternatively,
$$U = u_0 \Gamma_0 + u_1 \Gamma_1(\xi_1) + u_2 \Gamma_1(\xi_2) + u_{11} \Gamma_2(\xi_1, \xi_1) + u_{21} \Gamma_2(\xi_2, \xi_1) + u_{22} \Gamma_2(\xi_2, \xi_2) + u_{111} \Gamma_3(\xi_1, \xi_1, \xi_1) + u_{211} \Gamma_3(\xi_2, \xi_1, \xi_1) + u_{221} \Gamma_3(\xi_2, \xi_2, \xi_1) + u_{222} \Gamma_3(\xi_2, \xi_2, \xi_2) + u_{1111} \Gamma_4(\xi_1, \xi_1, \xi_1, \xi_1) + \cdots. \qquad (2.45)$$

One-Dimensional PC Basis

One simple way of constructing the $N$-dimensional PC is to follow a partial tensorization of the 1D polynomials. Thus, we first focus on polynomials of a single random variable, $\xi$. Recall that the chaos polynomials are orthogonal, and that the probability density of $\xi$ is given by:
$$p_\xi(y) = \frac{1}{\sqrt{2\pi}} \exp\big[ -y^2/2 \big]. \qquad (2.46)$$
We denote by $\psi_p(\xi)$ the 1D chaos of order $p$. Following the same convention as before, the polynomial of degree 0 is $\psi_0(\xi) = 1$. The orthogonality condition can be expressed as:
$$E[\psi_i \psi_j] = \int_\Theta \psi_i(\xi(\theta)) \psi_j(\xi(\theta))\, dP(\theta) = \langle \psi_i, \psi_j \rangle = \int_{\mathbb{R}} \psi_i(y) \psi_j(y)\, p_\xi(y)\, dy = \delta_{ij} \langle \psi_i^2 \rangle,$$
with the polynomials (conventionally) normalized so that $\langle \psi_k^2 \rangle = k!$. The 1D polynomials thus defined, which are mutually orthogonal with respect to the Gaussian measure, constitute a well-known family, namely the Hermite polynomials [1]. The first seven Hermite polynomials, $\psi_0, \ldots, \psi_6$, are given in (B.22-B.28); they are plotted in Fig. 2.4 for $\xi \in [-3, 3]$.

Multidimensional PC Basis

We now proceed to the $N$-dimensional case, and seek to construct $\Gamma_p$ starting from the 1D Hermite polynomials, $\psi_q$. We will denote $\xi = \{\xi_1, \ldots, \xi_N\}$. Since these random variables are independent, the probability density of $\xi$ is given by:
$$p_\xi(y) = \prod_{i=1}^{N} p_\xi(y_i). \qquad (2.47)$$
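The normalization $\langle \psi_k^2 \rangle = k!$ can be verified numerically with Gauss quadrature for the weight $e^{-y^2/2}$ (NumPy's probabilists' Hermite module), and the same quadrature yields PC coefficients by projection, $u_k = \langle U \psi_k \rangle / k!$. For $U = \exp(\xi)$, the coefficients are known in closed form, $u_k = e^{1/2}/k!$, which follows from the Hermite generating function; a sketch of ours:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite He_k

y, w = He.hermegauss(40)            # nodes/weights for the weight exp(-y^2/2)
w = w / np.sqrt(2.0 * np.pi)        # normalize to the Gaussian density (2.46)

def psi(k, t):
    """1D Hermite chaos psi_k = He_k evaluated at t."""
    return He.hermeval(t, np.eye(k + 1)[k])

# orthogonality and normalization: <psi_i psi_j> = delta_ij * i!
gram = np.array([[w @ (psi(i, y) * psi(j, y)) for j in range(6)]
                 for i in range(6)])

# PC coefficients of U = exp(xi) by projection: u_k = <U psi_k> / k!
u = np.array([(w @ (np.exp(y) * psi(k, y))) / math.factorial(k)
              for k in range(6)])
```

The rapid factorial decay of the coefficients of $\exp(\xi)$ illustrates the spectral convergence of Hermite expansions for smooth functions of a Gaussian variable.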

Fig. 2.4 One-dimensional Hermite polynomials, $\psi_p(\xi)$, for $p = 0, \ldots, 6$

Let $\gamma$ denote the multi-index $\gamma = \{\gamma_1, \ldots, \gamma_N\}$, and let $\lambda(p)$ denote the following set of multi-indices:
$$\lambda(p) = \left\{ \gamma : \sum_{i=1}^{N} \gamma_i = p \right\}. \qquad (2.48)$$
Following these definitions, one constructs the $p$-th order polynomial chaos according to:
$$\Gamma_p = \left\{ \prod_{i=1}^{N} \psi_{\gamma_i}(\xi_i) : \gamma \in \lambda(p) \right\}. \qquad (2.49)$$
Thus, for the 2D case, the Hermite expansion can be expressed as
$$U = u_0 \psi_0 + u_1 \psi_1(\xi_1) + u_2 \psi_1(\xi_2) + u_{11} \psi_2(\xi_1) + u_{21} \psi_1(\xi_2) \psi_1(\xi_1) + u_{22} \psi_2(\xi_2) + u_{111} \psi_3(\xi_1) + u_{211} \psi_1(\xi_2) \psi_2(\xi_1) + u_{221} \psi_2(\xi_2) \psi_1(\xi_1) + u_{222} \psi_3(\xi_2) + u_{1111} \psi_4(\xi_1) + \cdots. \qquad (2.50)$$
The above expression can be recast in the following, more compact, form:
$$U = \sum_{k=0}^{\infty} u_k \Psi_k(\xi_1, \xi_2). \qquad (2.51)$$
The first 2D polynomial chaoses are plotted in Figs. 2.5 and 2.6.
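The index sets $\lambda(p)$ of (2.48) are straightforward to enumerate, and summing their sizes over the orders $0, \ldots, p$ recovers the binomial count $(N+p)!/(N!\,p!)$ of the truncated basis. A small sketch (the helper name is ours):

```python
from math import comb  # used to cross-check the counts

def multi_indices(N, p):
    """Enumerate the set lambda(p) of (2.48): all multi-indices gamma of
    length N with gamma_1 + ... + gamma_N = p."""
    if N == 1:
        return [(p,)]
    return [(g,) + rest
            for g in range(p + 1)
            for rest in multi_indices(N - 1, p - g)]

# each gamma in lambda(p) corresponds to one basis function
#   Psi_gamma(xi) = psi_{gamma_1}(xi_1) * ... * psi_{gamma_N}(xi_N),
# and the total number of gamma over orders 0..p is comb(N + p, p)
```

For example, in two dimensions $\lambda(3)$ contains the four indices $(0,3)$, $(1,2)$, $(2,1)$, $(3,0)$, matching the four third-order terms of (2.50).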

Fig. 2.5 Two-dimensional ($N = 2$) Hermite chaoses of order 0, 1, and 2

2.2.2 Truncated PC Expansion

In the following, we adopt the condensed notation of the expansion of the random variable $U$, specifically
$$U = \sum_{k=0}^{\infty} u_k \Psi_k(\xi). \qquad (2.52)$$
As mentioned earlier, it is necessary to conduct computations with a finite number, $N$, of random variables, $\xi_i$, $i = 1, \ldots, N$. One also needs to truncate the PC expansion at order $p$, so that the expansion is also finite. The number of terms retained in the expansion, after the double truncation at $N$ dimensions and order $p$, is given

by [90]:
$$P + 1 = \frac{(N + p)!}{N!\, p!}. \qquad (2.53)$$
The dependence of $(P + 1)$ on $N$ and $p$ is illustrated in Fig. 2.7. Table 2.3 provides the values of $(P + 1)$ for $p$ and $N$ in the interval $[1, 6]$.

Fig. 2.6 Two-dimensional ($N = 2$) Hermite chaoses of order 3

Table 2.3 Number of terms $(P + 1)$ in the $N$-dimensional PC expansion truncated at order $p$

p\N    1     2     3     4     5     6
1      2     3     4     5     6     7
2      3     6    10    15    21    28
3      4    10    20    35    56    84
4      5    15    35    70   126   210
5      6    21    56   126   252   462
6      7    28    84   210   462   924

The truncated expansion of a random variable $U$ can consequently be expressed as:
$$U = \sum_{k=0}^{P} u_k \Psi_k(\xi) + \epsilon(N, p), \qquad (2.54)$$
where the truncation error depends on both $N$ and $p$. This error is itself a random variable. The truncated expansion converges in the mean-square sense as $N$ and $p$

go to infinity [25], i.e.
$$\lim_{N, p \to \infty} \langle \epsilon^2(N, p) \rangle = 0. \qquad (2.55)$$

Fig. 2.7 Number of terms in the PC expansion plotted against the order, $p$, and the number of dimensions, $N$

In light of the dependence of $P$ on the order and the number of random variables, the PC representation will be computationally efficient when small values of $N$ and $p$ are sufficient for an accurate representation of $U$, in other words when $\langle \epsilon^2(N, p) \rangle \to 0$ rapidly with $N$ and $p$. As we shall see later, in practice $N$ is governed by the number and structure of the sources of uncertainty, whereas the order $p$ needed to achieve a given error threshold is governed by the random variable $U$ that one seeks to represent, particularly by its probability law.

2.3 Generalized Polynomial Chaos

As further discussed in Sect. 2.5, the number $N$ of random variables $\xi_i$ in the PC expansion is fixed by the parametrization of the uncertain model under scrutiny. Therefore, we will essentially be concerned here with the convergence of the expansion with the PC order $p$.

2.3.1 Independent Random Variables

We temporarily restrict the discussion to the case of expansions involving a single random variable, $\xi$. As we have just observed, the rate of convergence with $p$ of the expansion of $U$ depends on the distribution of the random variable that one seeks to represent. It is thus natural to explore whether there exist other families of orthogonal polynomials that lead to a smaller representation error for the same number of terms in the expansion. One can immediately note that if $U(\theta)$ is a Gaussian random variable, then the Hermite PC basis is optimal, since an expansion with $p = 1$ provides an exact representation. This remark can in fact be generalized, namely to assert that the

optimal polynomial expansion is the one constructed using the measure corresponding to the probability law of the random variable that we seek to represent. However, for the class of problems of interest here, particularly for model-based predictions, the probability law of the model solution to be determined is generally not known a priori. It is therefore not possible to construct, a priori, the optimal orthogonal family. Nonetheless, since one knows a priori the probability law of the data uncertainties that one wishes to propagate, it may be useful to utilize, if possible, the measure associated with these uncertainties to generate the PC basis. The resulting basis will then at least be optimal with respect to the representation of the uncertain model data. However, there is generally no guarantee of optimality concerning the uncertain model solution, except possibly in isolated situations, such as the highly idealized case of Gaussian input data and a model whose output is linear with regard to the input data.

Following the remarks above, when the uncertain data correspond to a random variable uniformly distributed over a given finite interval, we choose the basis generated for random variables $\xi_i$ that are uniformly distributed on $[-1, 1]$, which leads to the Legendre polynomials [1] (see Appendix B). The first seven Legendre polynomials are plotted in Fig. 2.8.

Fig. 2.8 One-dimensional Legendre polynomials of order $p = 0, \ldots, 6$

From a broader perspective, one notes that the Legendre polynomials are members of the Jacobi family of polynomials, which covers the set of beta probability laws [246]. Xiu and Karniadakis [246] have in fact shown that for a large number of common probability laws, the corresponding families of polynomials are determined using the Askey scheme [6]. Selected probability laws (measures) and the corresponding orthogonal polynomial sets are given in Table 2.4.
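For a uniform random variable $\xi$ on $[-1, 1]$, the Legendre polynomials play the role that the Hermite basis plays in the Gaussian case: they are orthogonal under the uniform density $1/2$, with $\langle P_k^2 \rangle = 1/(2k+1)$ in the standard (non-normalized) convention. A quick numerical check with NumPy (normalization conventions vary between references):

```python
import numpy as np
from numpy.polynomial import legendre as Le

y, w = Le.leggauss(12)       # Gauss-Legendre nodes/weights on [-1, 1]

def P(k, t):
    """Legendre polynomial P_k evaluated at t."""
    return Le.legval(t, np.eye(k + 1)[k])

def inner(i, j):
    """<P_i, P_j> under the uniform density p(y) = 1/2 on [-1, 1]."""
    return 0.5 * (w @ (P(i, y) * P(j, y)))
```

With 12 quadrature nodes, the rule is exact for polynomial integrands up to degree 23, so the orthogonality relations are reproduced to machine precision.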
Note that the PC decomposition can also be accomplished on the basis of discrete random variables (RVs). Furthermore, in the case of measures for which one does not readily dispose of an orthogonal family of polynomials, it is generally possible to rely on a numerical construction of the PC basis, following a Gram-Schmidt orthogonalization process [220]. We also note that the expression of the multidimensional PCs as products of one-dimensional polynomials, through multi-index constructions for instance (see Appendix C), offers a natural extension to the case where the random variables ξ_i are associated with different measures. Such constructions may be particularly useful for the propagation of multiple uncertainties in complex models, where the various ξ_i's may be associated with different sources of uncertainty having different probability laws. In addition, this approach lends itself to a modular implementation and to automatic basis construction.

Table 2.4 Families of probability laws and corresponding families of orthogonal polynomials

  Distribution         Polynomials ψ_k(ξ)   Support of ξ
  Continuous RV:
    Gaussian           Hermite              (−∞, ∞)
    Gamma (γ)          Laguerre             [0, ∞)
    Beta (β)           Jacobi               [a, b]
    Uniform            Legendre             [a, b]
  Discrete RV:
    Poisson            Charlier             {0, 1, 2, ...}
    Binomial           Krawtchouk           {0, 1, ..., n}
    Negative binomial  Meixner              {0, 1, 2, ...}
    Hypergeometric     Hahn                 {0, 1, ..., n}

Chaos Expansions

Although the original (Hermite) PC expansion and the generalized PC rely on global polynomials, the approach is not limited to polynomial bases. In fact, we will see later in Chap. 8 that using piecewise polynomial functions can greatly improve the convergence properties of the stochastic expansion in certain complex situations. Strictly speaking, the term Polynomial Chaos expansion should be used when (2.43) involves polynomial functionals Ψ_k, and Chaos expansion for other types of functionals. In this monograph, we shall often rely on the terminology Wiener-Hermite, Wiener-Legendre, etc., for PC expansions using Hermite, Legendre, ... polynomials, and use the generic term stochastic spectral expansion to designate any type of expansion (including piecewise polynomials) of the form in (2.43).

Dependent Random Variables

So far, we have restricted ourselves to situations where the random variables ξ_i are independent, such that their joint density has product form. We have seen that for such a structure of the probability law, the N-dimensional PC basis can be constructed by tensorizing one-dimensional bases. In some situations, expansion in terms of independent random variables may not be possible. This may be the case when considering the representation of complex model data.
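The numerical Gram-Schmidt construction of a PC basis mentioned above can be sketched as follows. The target density here is a hypothetical triangular density p(y) = 2y on [0, 1] (the density, quadrature order, and basis size are illustrative choices, not taken from the text); monomials are orthogonalized against the inner product weighted by p:

```python
import numpy as np

# Quadrature for the measure p(y) = 2y on [0, 1]: map Gauss-Legendre
# nodes from [-1, 1] to [0, 1] and fold the density into the weights.
x, w = np.polynomial.legendre.leggauss(40)
y = 0.5 * (x + 1.0)
wy = 0.5 * w * 2.0 * y          # dy-Jacobian (0.5) times p(y) = 2y

def dot(f, g):
    # <f, g>_p = integral of f(y) g(y) p(y) dy
    return np.sum(wy * f(y) * g(y))

# Classical Gram-Schmidt on the monomials 1, y, y^2, ...
basis = []
for n in range(5):
    c = np.zeros(n + 1)
    c[n] = 1.0
    pn = np.polynomial.Polynomial(c)
    for q in basis:
        pn = pn - dot(pn, q) * q     # remove projections on earlier members
    pn = pn / np.sqrt(dot(pn, pn))   # normalize
    basis.append(pn)

# The resulting family is orthonormal with respect to p.
G = np.array([[dot(p, q) for q in basis] for p in basis])
assert np.allclose(G, np.eye(5))
```

For higher orders or less well-behaved measures, a numerically stabilized variant (modified Gram-Schmidt, or a three-term recurrence construction) would be preferable; this sketch is only meant to illustrate the principle.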
The general situation corresponds to a set ξ = {ξ_1, ..., ξ_N} of N random variables with given joint density p_ξ. We denote the support of the joint density, i.e. the set outside which p_ξ(y) = 0. A construction method of orthonormal Chaos bases for general probability laws was introduced in [218]. The construction is performed in two steps.

First, a set of N one-dimensional generalized PC bases is determined in relation with the marginal densities of the ξ_i. Let us denote p_i the marginal density of the i-th random variable. We recall (see Appendix A) that the marginal density p_i is obtained by integration of the joint density along all its dimensions but the i-th one:

    p_i(y) = ∫ dy_1 ⋯ dy_{i−1} dy_{i+1} ⋯ dy_N  p_ξ(y_1, ..., y_N).    (2.56)

For convenience, we assume orthonormal (Hilbert) bases with regard to the p_i, and denote {φ_p^{(i)}} the corresponding sets of polynomials satisfying

    ⟨φ_p^{(i)}, φ_{p'}^{(i)}⟩_{p_i} = ∫ φ_p^{(i)}(y) φ_{p'}^{(i)}(y) p_i(y) dy = δ_{pp'}.    (2.57)

Second, the N-dimensional Chaos basis is constructed. It consists of the set of random functionals Ψ_γ(ξ), where γ = {γ_1, ..., γ_N} is a multi-index in ℕ^N. The functionals Ψ_γ are defined for ξ as

    Ψ_γ(ξ) = [ p_1(ξ_1) ⋯ p_N(ξ_N) / p_ξ(ξ) ]^{1/2} φ_{γ_1}^{(1)}(ξ_1) ⋯ φ_{γ_N}^{(N)}(ξ_N).    (2.58)

It is immediate to show that {Ψ_γ} is an orthonormal set for the measure p_ξ. Indeed,

    ⟨Ψ_γ, Ψ_β⟩ = ∫ Ψ_γ(y) Ψ_β(y) p_ξ(y) dy
               = ∫ ( φ_{γ_1}^{(1)}(y_1) ⋯ φ_{γ_N}^{(N)}(y_N) ) ( φ_{β_1}^{(1)}(y_1) ⋯ φ_{β_N}^{(N)}(y_N) ) p_1(y_1) ⋯ p_N(y_N) dy_1 ⋯ dy_N
               = ∏_{i=1}^{N} ⟨φ_{γ_i}^{(i)}, φ_{β_i}^{(i)}⟩_{p_i} = δ_{γβ}.    (2.59)

We observe that the definition in (2.58) yields bases which are not polynomial for general joint densities, although the one-dimensional functionals φ_j^{(i)} may be. As a result, analytical manipulations of the functionals are quite complex or even impossible, and appropriate numerical procedures are needed.

Furthermore, the construction of Chaos bases for general probability laws requires the complete description of p_ξ at every point of its support. In fact, in many situations the

explicit form of the joint density is unknown. This is the case when the ξ_i are related to physical quantities, as in the parametrization of stochastic model data. One may then only have a sample set of realizations of the data, from physical measurements for instance, and it is necessary to estimate the probability law of ξ through identification or optimization procedures.

2.4 Spectral Expansions of Stochastic Quantities

Random Variable

Let U be a second-order R-valued random variable defined on a probability space (Θ, Σ, P), and let

    U = Σ_{k=0}^∞ u_k Ψ_k(ξ)    (2.60)

be its expansion on the orthogonal PC basis {Ψ_0, Ψ_1, ...}, where we assume an indexation such that Ψ_0 = 1. We see immediately that the expectation of U is given by

    E[U] = ⟨U(ξ)⟩ = ⟨Ψ_0 U(ξ)⟩ = Σ_{k=0}^∞ u_k ⟨Ψ_0, Ψ_k⟩ = u_0,    (2.61)

by virtue of the orthogonality of the basis. Therefore, from the indexation convention, the coefficient u_0 is in fact the mean of the random variable U. Further, from the definition of the variance σ_U² of the random variable, we obtain

    σ_U² = E[(U − E[U])²] = E[( Σ_{k=1}^∞ u_k Ψ_k )²] = Σ_{k,l=1}^∞ u_k u_l ⟨Ψ_k, Ψ_l⟩ = Σ_{k=1}^∞ u_k² ⟨Ψ_k²⟩.    (2.62)

In other words, the variance of U is given as a weighted sum of its squared PC coefficients. Similar expressions can be derived for the higher-order moments of U in terms of its PC coefficients; however, the higher-order moments do not have expressions as simple as those of the first two. Alternatively, the statistics of the random variable can be estimated by means of sampling strategies: realizations U(θ) can be obtained by sampling ξ according to its density p_ξ, followed by the evaluation of the PC series at the sample points ξ(θ).
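Both routes can be illustrated with a small sketch. The coefficients below define a hypothetical one-dimensional Wiener-Hermite expansion (they are illustrative, not from the text); the mean u_0 and the weighted sum of squared coefficients are compared to sample statistics of the evaluated series, using ⟨Ψ_k²⟩ = k! for the probabilists' Hermite polynomials He_k:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

# Hypothetical PC coefficients u_k of U = sum_k u_k He_k(xi), xi ~ N(0, 1)
u = np.array([1.0, 0.5, 0.1, 0.02])
norms = np.array([math.factorial(k) for k in range(4)], dtype=float)  # <Psi_k^2> = k!

mean = u[0]                           # (2.61): the mean is the 0-th coefficient
var = np.sum(u[1:] ** 2 * norms[1:])  # (2.62): weighted sum of squared coefficients

# Sampling estimate: draw xi and evaluate the PC series at the samples
rng = np.random.default_rng(0)
xi = rng.standard_normal(500_000)
U = H.hermeval(xi, u)                 # evaluates sum_k u_k He_k(xi)

assert abs(U.mean() - mean) < 0.01
assert abs(U.var() - var) < 0.01
```

The same sampled values U could be passed to a histogram or kernel density estimator to approximate the density and distribution function of U, which is the sampling use emphasized in the text.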
We shall rely heavily on such sampling procedures to estimate densities, cumulative distribution functions, probabilities, etc. Note that, in the context of the analysis of model data uncertainty, the sampling scheme for a model output known from its PC expansion is substantially

simpler and more efficient than the full evaluation of the model realizations as in MC methods. Also in the same context, if the (independent) random variables ξ_i used for the expansion of U can be related to physical sources of uncertainty, one has immediate access to the second-order characterization of the impact of the different uncertainty sources, for instance through the ANOVA (analysis of variance) of U. An example of the ANOVA for a stochastic elliptic model is presented in a later chapter.

Random Vectors

The PC expansion of a random variable can be immediately extended to the representation of second-order R^d-valued random vectors

    U : Θ → R^d.    (2.63)

Denoting U_i the i-th component of the random vector, its PC expansion on the truncated basis is

    U_i ≈ Σ_{k=0}^P (u_i)_k Ψ_k(ξ).    (2.64)

The random vector expansion can be recast in the vector form

    U = Σ_{k=0}^P u_k Ψ_k(ξ),    (2.65)

where u_k = ((u_1)_k ⋯ (u_d)_k)^t ∈ R^d contains the k-th PC coefficients of the random vector components. The vector u_k will be called the k-th stochastic mode of the random vector U. Clearly, u_0 is the mean of the random vector. Further, two components U_i and U_j are orthogonal if and only if

    Σ_{k=0}^P (u_i)_k (u_j)_k ⟨Ψ_k²⟩ = 0,    (2.66)

with the same condition for uncorrelated components but with the sum index starting at k = 1. In fact, the correlation and covariance matrices of the vector U can be respectively expressed as

    r = Σ_{k=0}^P u_k u_k^t ⟨Ψ_k²⟩,    c = Σ_{k=1}^P u_k u_k^t ⟨Ψ_k²⟩.    (2.67)

Note that a simple construction of PC expansions for random vectors with independent components consists in using different random variables for the independent components.
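The matrix formulas in (2.67) can be checked on a small example. The stochastic modes below describe a hypothetical two-component random vector expanded on a one-dimensional Wiener-Hermite basis (all numbers are illustrative assumptions); the correlation and covariance matrices built from the modes are compared to Monte Carlo estimates:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

# Hypothetical stochastic modes u_k in R^2, k = 0..3; u[0] is the mean vector
u = np.array([[1.0, 2.0],
              [0.5, -0.3],
              [0.1, 0.2],
              [0.02, 0.0]])
norms = np.array([math.factorial(k) for k in range(4)], dtype=float)  # <Psi_k^2> = k!

# Covariance c sums modes k >= 1; correlation r also includes the k = 0 term
cov = sum(norms[k] * np.outer(u[k], u[k]) for k in range(1, 4))
corr = cov + np.outer(u[0], u[0])

# Monte Carlo check against realizations of the truncated series
rng = np.random.default_rng(1)
xi = rng.standard_normal(400_000)
psi = np.array([H.hermeval(xi, np.eye(4)[k]) for k in range(4)])  # He_k(xi)
U = u.T @ psi                                   # realizations, shape (2, nsamples)

assert np.allclose(np.cov(U, bias=True), cov, atol=0.02)
assert np.allclose((U @ U.T) / xi.size, corr, atol=0.02)
```

The off-diagonal entry of c directly tests the correlation condition (2.66): the two components are uncorrelated exactly when that entry vanishes.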

Stochastic Processes

The PC expansion can be immediately extended to a second-order stochastic process U,

    U : Ω × Θ → R,    (2.68)

by letting the deterministic coefficients depend on the index x, namely

    U(x, ξ) ≈ Σ_{k=0}^P u_k(x) Ψ_k(ξ).    (2.69)

Consistently with the case of random vectors, the deterministic functions u_k(x) will be called the stochastic modes of the process. The expansion (2.69) is obtained by considering U(x, ·) as a random variable in L²(Θ, P) for any x, such that its expansion coefficients on the PC basis are also indexed by x. Again, from the convention Ψ_0 = 1 we have u_0(x) = E[U(x, ·)]. Also, due to the orthogonality of the PC basis, the k-th stochastic mode in the expansion of U is given by

    ⟨U(x, ·), Ψ_k⟩ = ⟨ Σ_{l=0}^P u_l(x) Ψ_l, Ψ_k ⟩ = Σ_{l=0}^P u_l(x) ⟨Ψ_l, Ψ_k⟩ = u_k(x) ⟨Ψ_k²⟩.    (2.70)

This shows that the mode u_k(x) is, up to a suitable normalization factor, given by the correlation between U and Ψ_k. Due to the orthogonality of the PC basis, one immediately observes that the correlation function of U can be expressed in terms of its expansion modes, according to

    R_{UU}(x, x') = ⟨U(x, ·) U(x', ·)⟩
                  = ⟨ ( Σ_{k=0}^P u_k(x) Ψ_k ) ( Σ_{l=0}^P u_l(x') Ψ_l ) ⟩
                  = Σ_{k=0}^P Σ_{l=0}^P u_k(x) u_l(x') ⟨Ψ_k Ψ_l⟩
                  = Σ_{k=0}^P u_k(x) u_k(x') ⟨Ψ_k²⟩.    (2.71)

Note that the knowledge of the correlation function is not sufficient to uniquely determine the set of coefficients u_k(x) appearing in the expansion of U. This illustrates the limitation of a characterization of U based only on its second-order properties, and indicates that additional information is needed to define the stochastic modes.
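Formula (2.71) is easy to exercise numerically. The modes below define a hypothetical three-mode Wiener-Hermite process on a one-dimensional grid (the mode shapes are illustrative assumptions); the correlation function assembled from the modes is compared to a Monte Carlo estimate from sampled realizations:

```python
import numpy as np
from numpy.polynomial import hermite_e as H

x = np.linspace(0.0, 1.0, 21)

# Hypothetical stochastic modes u_k(x), k = 0, 1, 2
modes = np.array([np.sin(np.pi * x),        # u_0(x): mean field
                  0.5 * np.cos(np.pi * x),  # u_1(x)
                  0.1 * x * (1.0 - x)])     # u_2(x)
norms = np.array([1.0, 1.0, 2.0])           # <He_k^2> = k! for k = 0, 1, 2

# (2.71): R(x, x') = sum_k u_k(x) u_k(x') <Psi_k^2>
R = sum(norms[k] * np.outer(modes[k], modes[k]) for k in range(3))

# Monte Carlo check by sampling realizations of the truncated series
rng = np.random.default_rng(2)
xi = rng.standard_normal(200_000)
psi = np.array([H.hermeval(xi, np.eye(3)[k]) for k in range(3)])  # He_k(xi)
U = modes.T @ psi                            # realizations, shape (21, nsamples)
R_mc = (U @ U.T) / xi.size

assert np.max(np.abs(R_mc - R)) < 0.02
```

Note that any sign flip of a mode, or any rotation mixing modes of equal norm, leaves R unchanged; this is the numerical face of the remark that the correlation function alone cannot determine the u_k(x).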

Conversely, this reflects the larger amount of information contained in a PC expansion, which clearly transcends the second-order characteristics. Comparison of the KL expansion of a stochastic process in (2.14) with the PC expansion in (2.69) leads to additional observations. First, whereas the stochastic coefficients (η_k and Ψ_k) are in both cases orthogonal, the stochastic modes u_k of the PC expansion of U are not orthogonal, unlike their counterparts in the KL decomposition. Indeed, since the modes u_k(x) in (2.69) are generally not orthogonal, we cannot determine the random functionals Ψ_k through (U(·, ξ), u_k). Nonetheless, the two expansions can still be related. To this end, we write the KL expansion as

    U(x, θ) = Σ_l √λ_l u_l^{(KL)}(x) η_l(θ),    (u_k^{(KL)}, u_l^{(KL)}) = δ_{kl},    E[η_l η_{l'}] = δ_{ll'},    (2.72)

where the superscript (KL) has been added to the KL modes to avoid confusion with the PC modes of U. Now, because the random coefficients in the KL expansion are second-order random variables, they have a convergent PC expansion:

    η_l(θ) = Σ_k (η_l)_k Ψ_k(ξ(θ)).    (2.73)

Inserting these expansions into the KL decomposition and rearranging the terms results in

    U(x, θ) = Σ_k [ Σ_l √λ_l (η_l)_k u_l^{(KL)}(x) ] Ψ_k(ξ(θ)),    (2.74)

which is equivalent to (2.69) if we define

    u_k(x) = Σ_l √λ_l (η_l)_k u_l^{(KL)}(x).    (2.75)

Further, this simple manipulation shows that

    (u_k, u_{k'}) = Σ_l Σ_{l'} √(λ_l λ_{l'}) (η_l)_k (η_{l'})_{k'} (u_l^{(KL)}, u_{l'}^{(KL)}) = Σ_l λ_l (η_l)_k (η_l)_{k'},    (2.76)

which is generally non-zero for k ≠ k'. A noticeable exception occurs in the case of Gaussian processes, for which η_l ~ N(0, 1). In this case, the η_l being independent, we can construct a straightforward first-order Wiener-Hermite expansion for them simply by using ξ_l = η_l. For an appropriate indexation of the PC basis, we then have η_l(θ) = ξ_l(θ) = Ψ_l(ξ), where ξ = {ξ_1, ξ_2, ...}.
For this particular expansion of the η_l, the stochastic modes of the PC expansion of U and the KL modes are simply related by

    u_k(x) = √λ_k u_k^{(KL)}(x).    (2.77)
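Relation (2.77) can be observed numerically for a discretized Gaussian process. The sketch below (covariance kernel, correlation length, and grid are illustrative assumptions) takes the discrete KL decomposition as an eigendecomposition of the covariance matrix, synthesizes realizations, and recovers the PC modes by projection u_k(x) = E[U(x, ·) ξ_k], since here Ψ_k = ξ_k and ⟨Ψ_k²⟩ = 1:

```python
import numpy as np

# Discrete KL of a zero-mean Gaussian process with a (hypothetical)
# exponential covariance C(x, x') = exp(-|x - x'| / 0.5) on [0, 1]
x = np.linspace(0.0, 1.0, 30)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.5)
lam, V = np.linalg.eigh(C)              # eigenpairs in ascending order
lam, V = lam[::-1], V[:, ::-1]          # sort descending: KL ordering

# Synthesize realizations U = sum_l sqrt(lam_l) u_l^KL(x) xi_l, xi_l ~ N(0, 1)
rng = np.random.default_rng(3)
m = 150_000
xi = rng.standard_normal((30, m))
U = V @ (np.sqrt(lam)[:, None] * xi)

# Projection recovers the PC modes; by (2.77) they equal sqrt(lam_k) u_k^KL
for k in range(3):
    u_k = (U * xi[k]).mean(axis=1)      # Monte Carlo estimate of E[U xi_k]
    assert np.allclose(u_k, np.sqrt(lam[k]) * V[:, k], atol=0.03)
```

Each recovered PC mode is thus the corresponding KL mode scaled by the square root of its eigenvalue, as (2.77) predicts; for a non-Gaussian process the projection would instead return the mixtures of KL modes given by (2.75).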

Such a simple relation between the stochastic PC modes and the KL modes does not exist for general (non-Gaussian) processes, for which the second-order properties of U, which determine the KL modes u^{(KL)}, do not suffice.

2.5 Application to Uncertainty Quantification Problems

The stochastic spectral expansion of a random quantity (random variable, vector or stochastic process) provides a convenient representation in view of its characterization (extraction of mean and moments, analysis of correlations, density estimates, measure of the probability of events, local and global sensitivity analysis, ...). All this information is available at low computational cost, provided that the coefficients of the representation of the relevant quantities are known. Therefore, stochastic expansions will have practical utility only if one disposes of methods allowing efficient determination of the associated coefficients (scalars, vectors, functions, or fields, as the case may be). We distinguish here between two types of problems.

In the first case, one disposes of information regarding a random quantity and seeks to construct a spectral expansion to represent it. The type of information available may greatly differ from one application to another; it may include the complete probability law (for instance its density, joint density or full set of finite-dimensional distributions, etc.) or simply a (sometimes coarse) sample set of realizations. To construct the spectral representation, one thus needs to formulate an optimization problem to determine the expansion coefficients. The actual form of the optimization problem depends on the information available. For example, one approach may be based on minimizing the distance between the characteristics of some actual quantities (e.g. moments, densities, etc.) and those of the corresponding approximation.
The definition of the distance may also differ from one approach to another, leading to optimization problems of very different natures. A central question regarding the spectral representation of random quantities concerns the rate of convergence, which in the context of PC expansions depends both on the expansion order p and on the number N of random variables ξ_i. This is particularly delicate when one has only a sample set of realizations, since in this case it is not possible to relate an individual realization of the random quantity to a specific value of the random vector ξ. Prescribing this relation a priori may be an option, but over-fitting issues may occur when one refines the expansion bases. Clearly, optimization techniques with regularization properties are needed here. Let us just mention algorithms based on Bayesian inference and maximum entropy principles, which appear to offer the required properties in terms of robustness and convergence [45, 55, 56, 217, 219]. These aspects of stochastic spectral approximation are at the center of many current investigations, and significant advances are expected in the coming years.

The second type of problem, which will be of central concern in this monograph, consists in the propagation of uncertainty in some model input data, D, which

are assumed to be already parametrized using a finite set of random variables ξ. Knowing the density of the random vector ξ, the goal of the uncertainty propagation problem is to determine the stochastic expansion of the model solution, say U(ξ), induced by the stochastic data D(ξ). In the following chapters, we shall discuss two broad classes of methods that can be used to address this goal: non-intrusive methods and spectral Galerkin methods.


More information

Dynamic response of structures with uncertain properties

Dynamic response of structures with uncertain properties Dynamic response of structures with uncertain properties S. Adhikari 1 1 Chair of Aerospace Engineering, College of Engineering, Swansea University, Bay Campus, Fabian Way, Swansea, SA1 8EN, UK International

More information

Modeling with Itô Stochastic Differential Equations

Modeling with Itô Stochastic Differential Equations Modeling with Itô Stochastic Differential Equations 2.4-2.6 E. Allen presentation by T. Perälä 27.0.2009 Postgraduate seminar on applied mathematics 2009 Outline Hilbert Space of Stochastic Processes (

More information

PC EXPANSION FOR GLOBAL SENSITIVITY ANALYSIS OF NON-SMOOTH FUNCTIONALS OF UNCERTAIN STOCHASTIC DIFFERENTIAL EQUATIONS SOLUTIONS

PC EXPANSION FOR GLOBAL SENSITIVITY ANALYSIS OF NON-SMOOTH FUNCTIONALS OF UNCERTAIN STOCHASTIC DIFFERENTIAL EQUATIONS SOLUTIONS PC EXPANSION FOR GLOBAL SENSITIVITY ANALYSIS OF NON-SMOOTH FUNCTIONALS OF UNCERTAIN STOCHASTIC DIFFERENTIAL EQUATIONS SOLUTIONS M. Navarro, O.P. Le Maître,2, O.M. Knio,3 mariaisabel.navarrojimenez@kaust.edu.sa

More information

Estimating functional uncertainty using polynomial chaos and adjoint equations

Estimating functional uncertainty using polynomial chaos and adjoint equations 0. Estimating functional uncertainty using polynomial chaos and adjoint equations February 24, 2011 1 Florida State University, Tallahassee, Florida, Usa 2 Moscow Institute of Physics and Technology, Moscow,

More information

MATH3383. Quantum Mechanics. Appendix D: Hermite Equation; Orthogonal Polynomials

MATH3383. Quantum Mechanics. Appendix D: Hermite Equation; Orthogonal Polynomials MATH3383. Quantum Mechanics. Appendix D: Hermite Equation; Orthogonal Polynomials. Hermite Equation In the study of the eigenvalue problem of the Hamiltonian for the quantum harmonic oscillator we have

More information

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion:

Lecture 1: Center for Uncertainty Quantification. Alexander Litvinenko. Computation of Karhunen-Loeve Expansion: tifica Lecture 1: Computation of Karhunen-Loeve Expansion: Alexander Litvinenko http://sri-uq.kaust.edu.sa/ Stochastic PDEs We consider div(κ(x, ω) u) = f (x, ω) in G, u = 0 on G, with stochastic coefficients

More information

Fast Numerical Methods for Stochastic Computations: A Review

Fast Numerical Methods for Stochastic Computations: A Review COMMUNICATIONS IN COMPUTATIONAL PHYSICS Vol. 5, No. 2-4, pp. 242-272 Commun. Comput. Phys. February 2009 REVIEW ARTICLE Fast Numerical Methods for Stochastic Computations: A Review Dongbin Xiu Department

More information

Polynomial chaos expansions for sensitivity analysis

Polynomial chaos expansions for sensitivity analysis c DEPARTMENT OF CIVIL, ENVIRONMENTAL AND GEOMATIC ENGINEERING CHAIR OF RISK, SAFETY & UNCERTAINTY QUANTIFICATION Polynomial chaos expansions for sensitivity analysis B. Sudret Chair of Risk, Safety & Uncertainty

More information

PART II : Least-Squares Approximation

PART II : Least-Squares Approximation PART II : Least-Squares Approximation Basic theory Let U be an inner product space. Let V be a subspace of U. For any g U, we look for a least-squares approximation of g in the subspace V min f V f g 2,

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

Multilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland

Multilevel accelerated quadrature for elliptic PDEs with random diffusion. Helmut Harbrecht Mathematisches Institut Universität Basel Switzerland Multilevel accelerated quadrature for elliptic PDEs with random diffusion Mathematisches Institut Universität Basel Switzerland Overview Computation of the Karhunen-Loéve expansion Elliptic PDE with uniformly

More information

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space.

Chapter 1. Preliminaries. The purpose of this chapter is to provide some basic background information. Linear Space. Hilbert Space. Chapter 1 Preliminaries The purpose of this chapter is to provide some basic background information. Linear Space Hilbert Space Basic Principles 1 2 Preliminaries Linear Space The notion of linear space

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Miranda Holmes-Cerfon Applied Stochastic Analysis, Spring 2015 Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility),

More information

Introduction to Computational Stochastic Differential Equations

Introduction to Computational Stochastic Differential Equations Introduction to Computational Stochastic Differential Equations Gabriel J. Lord Catherine E. Powell Tony Shardlow Preface Techniques for solving many of the differential equations traditionally used by

More information

MODEL REDUCTION BASED ON PROPER GENERALIZED DECOMPOSITION FOR THE STOCHASTIC STEADY INCOMPRESSIBLE NAVIER STOKES EQUATIONS

MODEL REDUCTION BASED ON PROPER GENERALIZED DECOMPOSITION FOR THE STOCHASTIC STEADY INCOMPRESSIBLE NAVIER STOKES EQUATIONS MODEL REDUCTION BASED ON PROPER GENERALIZED DECOMPOSITION FOR THE STOCHASTIC STEADY INCOMPRESSIBLE NAVIER STOKES EQUATIONS L. TAMELLINI, O. LE MAÎTRE, AND A. NOUY Abstract. In this paper we consider a

More information

PART IV Spectral Methods

PART IV Spectral Methods PART IV Spectral Methods Additional References: R. Peyret, Spectral methods for incompressible viscous flow, Springer (2002), B. Mercier, An introduction to the numerical analysis of spectral methods,

More information

A Note on Hilbertian Elliptically Contoured Distributions

A Note on Hilbertian Elliptically Contoured Distributions A Note on Hilbertian Elliptically Contoured Distributions Yehua Li Department of Statistics, University of Georgia, Athens, GA 30602, USA Abstract. In this paper, we discuss elliptically contoured distribution

More information

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product

Finite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 7 Interpolation Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted

More information

Uncertainty Evolution In Stochastic Dynamic Models Using Polynomial Chaos

Uncertainty Evolution In Stochastic Dynamic Models Using Polynomial Chaos Noname manuscript No. (will be inserted by the editor) Uncertainty Evolution In Stochastic Dynamic Models Using Polynomial Chaos Umamaheswara Konda Puneet Singla Tarunraj Singh Peter Scott Received: date

More information

Time-dependent Karhunen-Loève type decomposition methods for SPDEs

Time-dependent Karhunen-Loève type decomposition methods for SPDEs Time-dependent Karhunen-Loève type decomposition methods for SPDEs by Minseok Choi B.S., Seoul National University; Seoul, 2002 M.S., Seoul National University; Seoul, 2007 A dissertation submitted in

More information

A reduced-order stochastic finite element analysis for structures with uncertainties

A reduced-order stochastic finite element analysis for structures with uncertainties A reduced-order stochastic finite element analysis for structures with uncertainties Ji Yang 1, Béatrice Faverjon 1,2, Herwig Peters 1, icole Kessissoglou 1 1 School of Mechanical and Manufacturing Engineering,

More information

Local and Global Sensitivity Analysis

Local and Global Sensitivity Analysis Omar 1,2 1 Duke University Department of Mechanical Engineering & Materials Science omar.knio@duke.edu 2 KAUST Division of Computer, Electrical, Mathematical Science & Engineering omar.knio@kaust.edu.sa

More information

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Dongbin Xiu Department of Mathematics, Purdue University Support: AFOSR FA955-8-1-353 (Computational Math) SF CAREER DMS-64535

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

Lecture 3: Review of Linear Algebra

Lecture 3: Review of Linear Algebra ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak, scribe: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters,

More information

CONVERGENCE THEORY. G. ALLAIRE CMAP, Ecole Polytechnique. 1. Maximum principle. 2. Oscillating test function. 3. Two-scale convergence

CONVERGENCE THEORY. G. ALLAIRE CMAP, Ecole Polytechnique. 1. Maximum principle. 2. Oscillating test function. 3. Two-scale convergence 1 CONVERGENCE THEOR G. ALLAIRE CMAP, Ecole Polytechnique 1. Maximum principle 2. Oscillating test function 3. Two-scale convergence 4. Application to homogenization 5. General theory H-convergence) 6.

More information

SELECTED TOPICS CLASS FINAL REPORT

SELECTED TOPICS CLASS FINAL REPORT SELECTED TOPICS CLASS FINAL REPORT WENJU ZHAO In this report, I have tested the stochastic Navier-Stokes equations with different kind of noise, when i am doing this project, I have encountered several

More information

which arises when we compute the orthogonal projection of a vector y in a subspace with an orthogonal basis. Hence assume that P y = A ij = x j, x i

which arises when we compute the orthogonal projection of a vector y in a subspace with an orthogonal basis. Hence assume that P y = A ij = x j, x i MODULE 6 Topics: Gram-Schmidt orthogonalization process We begin by observing that if the vectors {x j } N are mutually orthogonal in an inner product space V then they are necessarily linearly independent.

More information

Multiscale stochastic preconditioners in non-intrusive spectral projection

Multiscale stochastic preconditioners in non-intrusive spectral projection Multiscale stochastic preconditioners in non-intrusive spectral projection Alen Alenxanderian a, Olivier P. Le Maître b, Habib N. Najm c, Mohamed Iskandarani d, Omar M. Knio a, a Department of Mechanical

More information

Statistical signal processing

Statistical signal processing Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

THE PROBLEMS FOR THE SECOND TEST FOR BRIEF SOLUTIONS

THE PROBLEMS FOR THE SECOND TEST FOR BRIEF SOLUTIONS THE PROBLEMS FOR THE SECOND TEST FOR 18.102 BRIEF SOLUTIONS RICHARD MELROSE Question.1 Show that a subset of a separable Hilbert space is compact if and only if it is closed and bounded and has the property

More information

EFFICIENT SHAPE OPTIMIZATION USING POLYNOMIAL CHAOS EXPANSION AND LOCAL SENSITIVITIES

EFFICIENT SHAPE OPTIMIZATION USING POLYNOMIAL CHAOS EXPANSION AND LOCAL SENSITIVITIES 9 th ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability EFFICIENT SHAPE OPTIMIZATION USING POLYNOMIAL CHAOS EXPANSION AND LOCAL SENSITIVITIES Nam H. Kim and Haoyu Wang University

More information

Inner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that. (1) (2) (3) x x > 0 for x 0.

Inner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that. (1) (2) (3) x x > 0 for x 0. Inner Product Spaces An inner product on a complex linear space X is a function x y from X X C such that (1) () () (4) x 1 + x y = x 1 y + x y y x = x y x αy = α x y x x > 0 for x 0 Consequently, (5) (6)

More information

Second-Order Inference for Gaussian Random Curves

Second-Order Inference for Gaussian Random Curves Second-Order Inference for Gaussian Random Curves With Application to DNA Minicircles Victor Panaretos David Kraus John Maddocks Ecole Polytechnique Fédérale de Lausanne Panaretos, Kraus, Maddocks (EPFL)

More information

Stochastic Dimension Reduction

Stochastic Dimension Reduction Stochastic Dimension Reduction Roger Ghanem University of Southern California Los Angeles, CA, USA Computational and Theoretical Challenges in Interdisciplinary Predictive Modeling Over Random Fields 12th

More information

The Conjugate Gradient Method

The Conjugate Gradient Method The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large

More information

Sampling and Low-Rank Tensor Approximations

Sampling and Low-Rank Tensor Approximations Sampling and Low-Rank Tensor Approximations Hermann G. Matthies Alexander Litvinenko, Tarek A. El-Moshely +, Brunswick, Germany + MIT, Cambridge, MA, USA wire@tu-bs.de http://www.wire.tu-bs.de $Id: 2_Sydney-MCQMC.tex,v.3

More information

This ODE arises in many physical systems that we shall investigate. + ( + 1)u = 0. (λ + s)x λ + s + ( + 1) a λ. (s + 1)(s + 2) a 0

This ODE arises in many physical systems that we shall investigate. + ( + 1)u = 0. (λ + s)x λ + s + ( + 1) a λ. (s + 1)(s + 2) a 0 Legendre equation This ODE arises in many physical systems that we shall investigate We choose We then have Substitution gives ( x 2 ) d 2 u du 2x 2 dx dx + ( + )u u x s a λ x λ a du dx λ a λ (λ + s)x

More information

The Helically Reduced Wave Equation as a Symmetric Positive System

The Helically Reduced Wave Equation as a Symmetric Positive System Utah State University DigitalCommons@USU All Physics Faculty Publications Physics 2003 The Helically Reduced Wave Equation as a Symmetric Positive System Charles G. Torre Utah State University Follow this

More information

A Statistical Look at Spectral Graph Analysis. Deep Mukhopadhyay

A Statistical Look at Spectral Graph Analysis. Deep Mukhopadhyay A Statistical Look at Spectral Graph Analysis Deep Mukhopadhyay Department of Statistics, Temple University Office: Speakman 335 deep@temple.edu http://sites.temple.edu/deepstat/ Graph Signal Processing

More information

Numerical Approximation of Stochastic Elliptic Partial Differential Equations

Numerical Approximation of Stochastic Elliptic Partial Differential Equations Numerical Approximation of Stochastic Elliptic Partial Differential Equations Hermann G. Matthies, Andreas Keese Institut für Wissenschaftliches Rechnen Technische Universität Braunschweig wire@tu-bs.de

More information

Page 404. Lecture 22: Simple Harmonic Oscillator: Energy Basis Date Given: 2008/11/19 Date Revised: 2008/11/19

Page 404. Lecture 22: Simple Harmonic Oscillator: Energy Basis Date Given: 2008/11/19 Date Revised: 2008/11/19 Page 404 Lecture : Simple Harmonic Oscillator: Energy Basis Date Given: 008/11/19 Date Revised: 008/11/19 Coordinate Basis Section 6. The One-Dimensional Simple Harmonic Oscillator: Coordinate Basis Page

More information

The Framework of Quantum Mechanics

The Framework of Quantum Mechanics The Framework of Quantum Mechanics We now use the mathematical formalism covered in the last lecture to describe the theory of quantum mechanics. In the first section we outline four axioms that lie at

More information

Review and problem list for Applied Math I

Review and problem list for Applied Math I Review and problem list for Applied Math I (This is a first version of a serious review sheet; it may contain errors and it certainly omits a number of topic which were covered in the course. Let me know

More information

Principal Component Analysis

Principal Component Analysis Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used

More information

Introduction to Signal Spaces

Introduction to Signal Spaces Introduction to Signal Spaces Selin Aviyente Department of Electrical and Computer Engineering Michigan State University January 12, 2010 Motivation Outline 1 Motivation 2 Vector Space 3 Inner Product

More information