
UNCERTAINTY ASSESSMENT USING STOCHASTIC REDUCED BASIS METHOD FOR FLOW IN POROUS MEDIA

A REPORT SUBMITTED TO THE DEPARTMENT OF ENERGY RESOURCES ENGINEERING OF STANFORD UNIVERSITY IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

By Hamid Bazargan

March 2009

© Copyright by Hamid Bazargan 2009. All Rights Reserved.

I certify that I have read this report and that in my opinion it is fully adequate, in scope and in quality, as partial fulfillment of the degree of Master of Science in Petroleum Engineering.

Hamdi Tchelepi, Principal Advisor

Abstract

We apply a hybrid formulation combining stochastic reduced basis methods with polynomial chaos expansions, introduced recently by Nair [1], to solve the linearized stochastic partial differential equation governing single-phase flow in porous media. We use a generalization of stochastic reduced basis projection schemes to non-Gaussian uncertainty models. The Karhunen-Loeve expansion is used to model the input log-permeability field; for non-Gaussian input we employ a Polynomial Chaos expansion to model the nonlinearity in terms of Hermite polynomials. For the pressure equation, we employ basis vectors spanning the preconditioned stochastic Krylov subspace, which efficiently reduces the dimension of the solution space. A Galerkin projection scheme is then used to estimate the coefficients of the reduced basis approximation. We present a detailed comparison between high-resolution Monte Carlo simulation and the Stochastic Reduced Basis Method (SRBM). We also study the difference between predictions obtained using SRBM, low-order Statistical Moment Equations (SME), and a Probabilistic Collocation Method (PCM). Natural formations with high permeability variability and large spatial correlation scales are of great interest. Consequently, we examine SRBM for systems with a variance of log-permeability, σ²_lnK, from 0.1 to 3 and correlation scales (normalized by domain length) of 0.05 to 0.5. To avoid issues related to statistical convergence and resolution level, we used 9000 highly detailed permeability realizations for Monte Carlo Simulation (MCS). We show that SRBM gives results reasonably close to MCS using a small number of Krylov subspace basis vectors at lower computational cost.

Contents

Abstract
List of Figures
1 Introduction
2 Representation of a Stochastic Process
  2.1 Introduction
  2.2 Karhunen-Loeve expansion
  2.3 Homogeneous Chaos expansion
3 Representation of the Solution Process
  3.1 Introduction
  3.2 Polynomial Chaos Projection schemes
  3.3 Krylov Subspace Projection scheme
4 Stochastic Reduced Basis Method
  4.1 Introduction
  4.2 Stochastic Finite Element analysis
  4.3 Stochastic Krylov subspace
  4.4 Petrov-Galerkin projection scheme
  4.5 Post-Processing

5 Numerical Studies
  5.1 Introduction
  5.2 Governing Equation
  5.3 Numerical Implementation
    K-Based Pressure Equation
    Y-Based Pressure Equation
  5.4 Monte Carlo Simulation
  5.5 Statistical Moment Equations
  5.6 Stochastic Reduced Basis Method
  5.7 Probabilistic Collocation Method
  5.8 Results and Discussion
    Stochastic Reduced Basis Method vs. Statistical Moment Equations
    Stochastic Reduced Basis Method vs. Probabilistic Collocation Method
6 Concluding Remarks

List of Figures

5.1 The computational and permeability grids in the domain.
5.2 Defining permeability of nodal points as the discrete form of the continuous permeability field.
5.3 Energy map of the eigenvalues for a variance of log-permeability σ²_Y = 1 and a fixed correlation length.
5.4 The effect of increasing the number of eigenvalues on pressure variance prediction along the x direction at y = L/2, using SRBM for uniform mean flow with variance of one.
5.5 Comparing SRBM results on pressure variance with Monte Carlo simulation and SME along the x direction at y = 0.5, for uniform mean flow with σ²_lnK = 1 and different correlation lengths: (a) λ_Y/L = 0.1, (b) λ_Y/L = 0.2, (c) λ_Y/L = 0.4, (d) λ_Y/L = 0.5.
5.6 Energy map of the eigenvalues for a variance of log-permeability, σ²_Y, of 3 and a fixed correlation length.
5.7 The effect of increasing the number of eigenvalues on pressure variance prediction along the x direction at y = L/2, using SRBM for uniform-mean flow with σ²_Y = 3.

5.8 Comparing SRBM results on pressure variance with Monte Carlo simulation and SME along the x direction at y = 0.5, for uniform mean flow with σ²_lnK = 3 and different correlation lengths: (a) λ_Y/L = 0.1, (b) λ_Y/L = 0.2, (c) λ_Y/L = 0.4, (d) λ_Y/L = 0.5.
5.9 Comparing SRBM results on pressure variance with Monte Carlo simulation and PCM along the x direction, for uniform mean flow with σ²_lnK = 1 and different correlation lengths: (a) λ_Y/L = 0.1, (b).
5.10 Comparing SRBM results on pressure variance with Monte Carlo simulation and PCM along the x direction, for uniform mean flow with σ²_lnK = 3 and different correlation lengths: (a) λ_Y/L = 0.1, (b).

Chapter 1

Introduction

Typical reservoir simulators can handle only a limited number of simulation cells. The exact number varies considerably depending on the type of simulation to be performed (dead oil, black oil, compositional) and the available computer hardware. In practice, simulations with over 10^6 cells are still not performed routinely. Geological characterization models, by contrast, typically contain many more cells. These models, which are referred to as geostatistical models, represent geological variation on very fine scales vertically, though their areal resolution is usually coarse. A considerable amount of uncertainty is associated with the geological model. Therefore, we need to analyze the uncertainty associated with predictions obtained from flow simulations. Nearly every aspect of reservoir characterization contains some degree of uncertainty; hence, the predictions are statistical. The uncertainty in reservoir performance can be gauged by simulating a number of different geological realizations, a procedure referred to as Monte Carlo simulation. Thousands of such simulations may be required to cover the range of parameter variations. Even though significant advances have been made in improving the efficiency of Monte Carlo Simulation (MCS), the associated computational cost can be prohibitive for typical reservoir simulations. Perturbation methods offer computationally efficient alternatives to MCS and have been widely applied to predict the first two statistical moments of the pressure [2]. However, the

major drawback of such low-order approximations is that they lose accuracy when the permeability field is highly heterogeneous. An alternative approach, the Spectral Stochastic Finite Element Method (SSFEM), was introduced by Ghanem and Spanos [3]. Their method is based on approximating both the input field (permeability) and the solution process (pressure) by Polynomial Chaos expansions, then applying a Galerkin projection scheme to compute the undetermined coefficients in the Hermite polynomial representation of the pressure. In other words, the Galerkin projection scheme is used to convert the original Stochastic Partial Differential Equation (SPDE) into a set of coupled deterministic PDEs. It has been shown that, unlike perturbation-based methods, good accuracy can be achieved even when the variability level is high. However, Polynomial Chaos expansions of high order with many random variables may be necessary to obtain accurate results when the input field is highly variable. More recently, Nair et al. [4] introduced a Stochastic Reduced Basis Method (SRBM), which entails choosing a set of basis vectors based on information from the input field. In this approach, the pressure response is approximated using vectors spanning the preconditioned stochastic Krylov subspace. They demonstrated that SRBM can be much faster than SSFEM, while providing results of comparable or better accuracy for linear stochastic PDEs. The computational cost of SRBM depends primarily on the number of basis vectors and, to a lesser degree, on the number of random variables. In their original work, Nair et al. [4] studied SRBM for a Gaussian uncertainty model, but they later generalized SRBM to non-Gaussian uncertainty models. To assess the uncertainty associated with predictions of the pressure field, we employ SRBM in conjunction with a Karhunen-Loeve expansion to represent the input permeability field.
Incomplete knowledge of the permeability field is assumed to be the only source of uncertainty. We map out the behavior of the statistical moments of pressure over a wide range of the parameter space. We consider systems with a variance of log-permeability, σ²_Y, from 0.1 to 3, and correlation lengths (normalized by the domain length) from 0.1 to 0.5. We compare the

results with MCS and discuss the efficiency and accuracy of the hybrid SRBM algorithm. We find that, using a small number of Krylov subspace basis vectors, SRBM is able to predict the statistical moments of pressure accurately. We also compare SRBM with the Probabilistic Collocation Method (PCM) studied by Zhang and Li [5]. Their method is similar to SRBM except for the projection scheme. In SRBM, a Krylov subspace basis is used to span the solution space, whereas in PCM the basis functions are chosen as Dirac delta functions centered on different points, called collocation points. We show that both methods give accurate results for one-dimensional flow problems. However, in two dimensions, when a higher-order PC expansion is used, the number of collocation points increases exponentially and SRBM becomes more efficient.

This report is organized as follows. First, we review the basic concepts involved in SRBM, starting from different ways of representing a stochastic process. Then, projection schemes for representing the response process are discussed. Following that, we present the hybrid formulation combining SRBM with PC expansions for solving the Y-based pressure equation, and we perform post-processing of the results [5]. Finally, we investigate the numerical implementation of SRBM for solving both 1D and 2D pressure equations with different flow scenarios.

Chapter 2

Representation of a Stochastic Process

2.1 Introduction

Similar to the deterministic finite-element method, whereby functions are represented by a set of parameters consisting of the values of the function and its derivatives at the nodal points, the problem in the stochastic case is to represent a random process as an indexed set of random variables [3]. The process can then be approximated as closely as desired by restricting the index to a dense set. For a random field, specifically the permeability field in our problem, we are given the mean and autocorrelation function of the random process. In the context of finite element analysis of the system, the permeability field should be represented by random variables that are chosen to coincide with some local average of the process over each element. This should take into account the information we have about the field, such as the mean, variance and covariance. As expected, the results depend to a large extent on the averaging method and the number of random variables used to approximate the field.

As an alternative to the heuristic arguments associated with the local averaging approach,

an analogous representation for random processes can be formulated (Parzen, 1959), which has been derived for a class of second-order processes. Perhaps the most important result in the spectral representation of random processes can be stated as

k(x, θ) = ∫ g(x) dμ(θ),   (2.1)

where x denotes spatial position, θ is a random variable, and k(x, θ) is a stochastic process whose covariance function R_kk(x_1, x_2) admits the decomposition

R_kk(x_1, x_2) = ∫∫ g(x_1) g(x_2) ⟨dμ_1(θ) dμ_2(θ)⟩.   (2.2)

Here g(x) is a deterministic function and dμ(θ) is an orthogonal stochastic measure [6]. An important specialization of the spectral decomposition occurs when the process k(x, θ) is Wide Sense Stationary (WSS). Wide Sense Stationary processes are those whose covariance functions are functions only of the separation x_1 − x_2, rather than of x_1 and x_2 separately. In this case, equation (2.1) can be shown to reduce to

k(x, θ) = ∫_{−∞}^{+∞} e^{i x^T ω} dμ(ω, θ),   (2.3)

and

R_kk(x_1, x_2) = ∫_{−∞}^{+∞} e^{i (x_1 − x_2)^T ω} S(ω) dω,   (2.4)

where T denotes the transpose and S(ω) is the spectral density of the stationary process [3]. The preceding representation had a strong impact on the subsequent development of the theory of random processes. However, all of these representations involve differentials

of random functions, and are therefore set in an infinite-dimensional space, which is not suitable for computational algorithms [3]; as the size of the domain increases, this representation ceases to be computationally useful. An alternative formulation of the spectral representation, and the one used extensively here, is the Karhunen-Loeve expansion, whereby a random process k(x, θ) is expanded in terms of a set of orthogonal random variables in the form

k(x, θ) = Σ_{i=1}^∞ μ_i(θ) g_i(x),   (2.5)

where the g_i(x) are deterministic functions and the μ_i(θ) are a set of orthogonal random variables. We will derive the g_i based on the information we have about the permeability field, namely its mean and two-point covariance function. This representation of a random process is often called a discrete representation. In the deterministic case, discretization of a domain has a physical meaning; a discrete representation of a stochastic process, by contrast, provides a way to approximate a random field with some known random variables (e.g., with uniform distributions). A drawback of the KL expansion of a random field is its dependence on the covariance function. The KL expansion cannot be used to represent the solution process, since the covariance function of the solution and the corresponding eigenfunctions are not known. However, an analogous formulation can be derived that allows for the representation of nonlinear functions of the orthogonal stochastic measures dμ(θ) [3]. It was Wiener (1958) who first developed what is known as the Homogeneous Chaos expansion. He used independent identically distributed (iid) Gaussian random variables as a basis for representing a stochastic process. Unlike the KL expansion, the Homogeneous Chaos expansion does not need the covariance function of the random field. Hence, it can be used to represent solution processes.
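As a minimal numerical sketch of the spectral representation (2.3)-(2.4), the following code (an illustration added here; the correlation length, frequency cutoff, and sample counts are arbitrary choices, not values from this study) discretizes the spectral measure of a 1D exponential covariance C(τ) = exp(−|τ|/β), whose spectral density is S(ω) = (β/π)/(1 + β²ω²), and checks that realizations built from a finite set of Gaussian random variables reproduce the target covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.5                                  # correlation length of C(tau) = exp(-|tau|/beta)
omega = np.linspace(0.0, 200.0, 4000)       # truncated, discretized one-sided frequency axis
domega = omega[1] - omega[0]
S = (beta / np.pi) / (1.0 + (beta * omega) ** 2)    # spectral density of the WSS process

# Discretized analogue of eq. (2.3): k(x) = sum_j sqrt(2 S_j dw) (A_j cos(w_j x) + B_j sin(w_j x))
x = np.linspace(0.0, 1.0, 21)
amp = np.sqrt(2.0 * S * domega)
A = rng.standard_normal((2000, omega.size))
B = rng.standard_normal((2000, omega.size))
phase = np.outer(x, omega)                  # (n_x, n_omega)
k = A @ (amp * np.cos(phase)).T + B @ (amp * np.sin(phase)).T   # realizations, (2000, n_x)

emp_var = k[:, 0].var()                     # should approach C(0) = 1
emp_cov = np.mean(k[:, 0] * k[:, -1])       # should approach exp(-1/beta), eq. (2.4)
```

Up to truncation of the frequency axis and sampling noise, the empirical variance and lag-1 covariance match the target exponential covariance, which is exactly the content of eq. (2.4).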
The concept of the Homogeneous Chaos expansion is similar to the expansion of deterministic functions in Fourier series. To solve a partial differential equation in the deterministic

case, we write the solution in terms of its Fourier series and solve for the corresponding coefficients. The same idea applies in the stochastic case: we represent the solution process in terms of orthogonal random variables and compute their coefficients in the chaos expansion. Once these coefficients are computed, it is straightforward to post-process the results in order to represent the solutions.

2.2 Karhunen-Loeve expansion

In order to apply a finite-element method to problems where one or more of the physical quantities is modeled as a random field, we need to discretize the random field by an enumerable set of random variables. Various discretization techniques are available in the literature for approximating random fields, including the mid-point method, the shape function method, optimal linear estimation, the weighted integral method, orthogonal series expansions and Karhunen-Loeve (KL) expansions [7,8,9]. In the KL approach the random field is expanded in a Fourier-type series as follows:

k(x, θ) = Σ_{n=0}^∞ √λ_n ξ_n(θ) f_n(x),   (2.6)

where the ξ_n(θ) are a set of random variables to be determined and the f_n(x) are an orthogonal set of deterministic functions. In the following, we review the derivation of λ_n and f_n(x) based on the covariance function of a random field [3].

Let K(x, θ) be a random process that is a function of the position vector x, defined over the domain D, with θ belonging to the space of random events Ω. Let ⟨K(x, θ)⟩ denote the expected value of the process and C(x_1, x_2) denote its covariance function. By definition, the covariance function is bounded, symmetric, and positive definite. Thus it has the spectral decomposition [10]:

C(x_1, x_2) = Σ_{n=0}^∞ λ_n f_n(x_1) f_n(x_2),   (2.7)

where λ_n and f_n are, respectively, the eigenvalues and eigenfunctions of the covariance kernel. They are solutions of the integral equation

∫_D C(x_1, x_2) f_n(x_1) dx_1 = λ_n f_n(x_2).   (2.8)

Due to the symmetry and positive definiteness of the covariance kernel (Loeve, 1977), its eigenfunctions are orthogonal and form a complete set. They can be normalized according to the criterion

∫_D f_n(x) f_m(x) dx = δ_nm,   (2.9)

where δ_nm is the Kronecker delta. Let K̃(x, θ) = K(x, θ) − ⟨K(x, θ)⟩ be the mean-removed process, with zero mean and covariance function C(x_1, x_2). The mean-removed process K̃(x, θ) can be expanded in terms of the eigenfunctions f_n(x) as

K̃(x, θ) = Σ_{n=0}^∞ √λ_n ξ_n(θ) f_n(x).   (2.10)

Second-order properties of the random variables ξ_n(θ) can be found by multiplying both sides of this equation by K̃(x_2, θ) and taking expectations:

C(x_1, x_2) = ⟨K̃(x_1, θ) K̃(x_2, θ)⟩ = Σ_{n=0}^∞ Σ_{m=0}^∞ √(λ_n λ_m) ⟨ξ_n(θ) ξ_m(θ)⟩ f_n(x_1) f_m(x_2).   (2.11)

Then, multiplying both sides by f_k(x_2), integrating over the domain D, and making use of the orthogonality of the eigenfunctions yields

∫_D C(x_1, x_2) f_k(x_2) dx_2 = λ_k f_k(x_1) = Σ_{n=0}^∞ √(λ_n λ_k) ⟨ξ_n(θ) ξ_k(θ)⟩ f_n(x_1).   (2.12)

Multiplying once more by f_l(x_1) and integrating over D gives

λ_k ∫_D f_k(x_1) f_l(x_1) dx_1 = Σ_{n=0}^∞ √(λ_n λ_k) ⟨ξ_n(θ) ξ_k(θ)⟩ δ_nl = √(λ_l λ_k) ⟨ξ_l(θ) ξ_k(θ)⟩.   (2.13)

From the orthogonality of the eigenfunctions we know ∫_D f_k(x_1) f_l(x_1) dx_1 = δ_kl, so λ_k δ_kl = √(λ_l λ_k) ⟨ξ_l(θ) ξ_k(θ)⟩, which implies that the random variables are orthonormal, ⟨ξ_k(θ) ξ_l(θ)⟩ = δ_kl. Thus the random process K(x, θ) can be written as

K(x, θ) = ⟨K(x, θ)⟩ + Σ_{n=0}^∞ √λ_n ξ_n(θ) f_n(x),   (2.14)

where the ξ_n(θ) are any set of random variables with the properties

⟨ξ_k(θ)⟩ = 0,   (2.15)

⟨ξ_k(θ) ξ_l(θ)⟩ = δ_kl.   (2.16)

In order to employ the discrete representation of the random field computationally, we truncate the expansion at the M-th term to arrive at the finite-dimensional approximation

K(x, θ) ≈ ⟨K(x, θ)⟩ + Σ_{n=0}^M √λ_n ξ_n(θ) f_n(x).   (2.17)

If the eigenvalues of the covariance function C(x_1, x_2) decay rapidly, then only a few terms are required to obtain an accurate representation of the random field. In the limiting case, when the correlation length of the random field tends to zero, the number of terms, M, grows quickly toward infinity. That is, the correlation length of the random

field dictates the number of terms, M, required to ensure an accurate finite-dimensional representation.

The Karhunen-Loeve expansion of a process is derived from the analytical properties of its covariance function. These properties are independent of the stochastic nature of the process involved, which allows the expansion to be applied to a wide range of processes, including nonstationary and multidimensional ones [3]. The usefulness of the Karhunen-Loeve expansion hinges on the ability to solve integral equations of the form

∫_D C(x_1, x_2) f_n(x_1) dx_1 = λ_n f_n(x_2),   (2.18)

where C(x_1, x_2) is an auto-covariance function. For stationary processes, the Markovian property implies that the effect of the distant past on the present is negligible, and the integral equation becomes

∫_D C(x_1 − x_2) f_n(x_1) dx_1 = λ_n f_n(x_2).   (2.19)

For all the problems considered here, a stationary process is assumed, with the following two-dimensional exponential-type covariance function representing the variability of the random medium:

C(Δx, Δy) = exp[ −( (x_1 − x_2)²/λ_x² + (y_1 − y_2)²/λ_y² ) ],   (2.20)

where λ_x and λ_y are the correlation lengths of the random field in the x and y directions. Unfortunately, there is no analytical solution of the integral equation for this kernel, and we need to solve it numerically. In one dimension, however, the covariance corresponds to a first-order Markovian process:

C(Δx) = exp( −|x_1 − x_2| / β ).   (2.21)
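Even when no analytical solution of (2.19) is available, the KL eigenpairs are easy to obtain numerically. The sketch below (an illustration added here; the grid size, correlation length, and 95% energy threshold are arbitrary choices, not values from this study) discretizes the 1D exponential kernel (2.21) on a uniform grid (a simple Nystrom scheme), extracts the eigenpairs of eq. (2.19), picks the truncation level M from the eigenvalue energy, and draws truncated KL realizations as in eq. (2.17):

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
beta = 0.3                                          # correlation length
C = np.exp(-np.abs(x[:, None] - x[None, :]) / beta) # exponential covariance, eq. (2.21)
w = 1.0 / n                                         # uniform quadrature weight

# Discrete analogue of the integral equation (2.19):
lam, f = np.linalg.eigh(C * w)
lam, f = lam[::-1], f[:, ::-1]                      # sort eigenvalues in descending order
f = f / np.sqrt(w)                                  # normalize eigenfunctions as in eq. (2.9)

# Number of KL terms needed to capture 95% of the total variance:
energy = np.cumsum(lam) / np.sum(lam)
M = int(np.searchsorted(energy, 0.95)) + 1

# Truncated KL realizations, eq. (2.17), with standard normal xi_n:
rng = np.random.default_rng(1)
xi = rng.standard_normal((5000, M))
K = xi @ (np.sqrt(lam[:M])[:, None] * f[:, :M].T)   # (5000, n) realizations
```

The cumulative eigenvalue "energy" is exactly the quantity plotted in the energy maps of Chapter 5: the faster it approaches one, the fewer terms M are needed, and shrinking β pushes M up, as discussed above.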

We assume the process is defined on the one-dimensional interval [−a, a]. In this case the eigenvalues and eigenfunctions of the covariance function are the solutions of the following integral equation, which is analytically solvable:

∫_{−a}^{a} e^{−c|x_1 − x_2|} f(x_2) dx_2 = λ f(x_1),   (2.22)

where c = 1/β. Differentiating this equation twice with respect to x_1 and rearranging gives

λ f''(x) = (c²λ − 2c) f(x).   (2.23)

This is an ordinary second-order differential equation and can be solved analytically to obtain the eigenfunctions.

2.3 Homogeneous Chaos expansion

It is clear from the preceding discussion that the implementation of the Karhunen-Loeve expansion requires the covariance function of the process being modeled. Hence it cannot be used for the solution process, whose covariance function is not known. An alternative expansion that circumvents this problem is needed. Such an expansion could involve a basis of known random functions with deterministic coefficients, to be found by minimizing some norm of the error resulting from a finite representation [10]. The idea of the Polynomial Chaos (PC) representation of stochastic processes was introduced by Wiener as a generalization of Fourier series expansions [10]. More specifically, Wiener used multidimensional Hermite polynomials as basis functions for representing stochastic processes. The basic idea is to project the process under consideration onto a stochastic subspace spanned by a complete set of orthogonal random polynomials. To illustrate the construction of a PC expansion, we review its derivation [3,10]:

Let φ_i(θ), i = 1, 2, ..., denote a set of polynomials that form an orthogonal basis in L²(θ). Then a general second-order stochastic process (i.e., a random process with finite variance) h(θ) can be represented as

h(θ) = c_0 Φ_0 + Σ_{i_1=1}^∞ c_{i_1} Φ_1(ξ_{i_1}(θ))
     + Σ_{i_1=1}^∞ Σ_{i_2=1}^{i_1} c_{i_1 i_2} Φ_2(ξ_{i_1}(θ), ξ_{i_2}(θ))
     + Σ_{i_1=1}^∞ Σ_{i_2=1}^{i_1} Σ_{i_3=1}^{i_2} c_{i_1 i_2 i_3} Φ_3(ξ_{i_1}(θ), ξ_{i_2}(θ), ξ_{i_3}(θ)) + ...,   (2.24)

where Φ_p(ξ_{i_1}, ξ_{i_2}, ..., ξ_{i_p}) denotes the generalized polynomial chaos of order p, which is a tensor product of one-dimensional polynomial basis functions φ_i, i = 1, 2, ..., p. In the original work of Wiener, Φ_p was chosen to be a multidimensional Hermite polynomial in a set of uncorrelated Gaussian random variables ξ_{i_1}, ξ_{i_2}, ..., ξ_{i_p} with zero mean and unit variance. The general expression for the Hermite chaos of order p can be written as [10]

Φ_p(ξ_{i_1}, ξ_{i_2}, ..., ξ_{i_p}) = (−1)^p e^{½ ξᵀξ} ∂^p/(∂ξ_{i_1} ∂ξ_{i_2} ... ∂ξ_{i_p}) [e^{−½ ξᵀξ}].   (2.25)

For example, if Hermite polynomials are used as basis functions, a second-order, two-dimensional PC expansion of h(θ) can be written as

h(θ) = h_0 + h_1 ξ_1 + h_2 ξ_2 + h_3 (ξ_1² − 1) + h_4 ξ_1 ξ_2 + h_5 (ξ_2² − 1).   (2.26)

It can be seen from this equation that the first term of the PC expansion represents the mean value of h(θ), because ξ_1 and ξ_2 are uncorrelated Gaussian random variables with zero mean and unit variance. Another point worth noting is that the number of terms in the expansion grows very quickly with the dimension of ξ and the order of the expansion. For notational convenience, Equation (2.24) can be written as

h(θ) = Σ_{i=0}^∞ h_i ϕ_i(ξ),   (2.27)

where there is a one-to-one correspondence between the functions Φ_p(ξ_{i_1}, ξ_{i_2}, ..., ξ_{i_p}) and ϕ_i(ξ). Note also that ϕ_0 = 1 and ⟨ϕ_i⟩ = 0 for i > 0. Since the ϕ_i(ξ), i = 0, 1, 2, ..., form an orthogonal basis in L²(θ), we have

⟨ϕ_i(ξ) ϕ_j(ξ)⟩ = ⟨ϕ_i²(ξ)⟩ δ_ij,   (2.28)

where δ_ij is the Kronecker delta and ⟨·⟩ is the ensemble average operation, i.e.,

⟨f(ξ) g(ξ)⟩ = ∫ f(ξ) g(ξ) W(ξ) dξ,   (2.29)

where W(ξ) is the weight function corresponding to the PC basis. The weight function is chosen to correspond to the distribution of the elements of ξ. For example, when Hermite polynomials are used as basis functions, the weight function is the M-dimensional normal density

W(ξ) = (2π)^{−M/2} e^{−½ ξᵀξ}.   (2.30)

Cameron and Martin [11] derived the following theorem, which guarantees that the Hermite chaos expansion converges in a mean-square sense for any second-order stochastic process as the number of terms is increased.

Theorem. The Hermite chaos expansion of any functional h(ξ) in L²(θ) converges in the L²(θ) sense to h(ξ).

This means that if h(ξ) is a second-order stochastic process, i.e.,

∫ h(ξ)² W(ξ) dξ < ∞,   (2.31)

then

∫ [ h(ξ) − Σ_{i=0}^m h_i φ_i(ξ) ]² W(ξ) dξ → 0 as m → ∞,   (2.32)

where h_i is the Fourier-Hermite coefficient

h_i = ∫ h(ξ) φ_i(ξ) W(ξ) dξ.   (2.33)

It is worth noting that the convergence rate of a PC expansion is faster than exponential [12]. Further, it has been shown that the error in the expansion decays as O(1/(p+1)!), where p is the highest-order Hermite polynomial used in the basis [12]. For example, for a one-dimensional PC expansion, we have the following convergence estimate:

‖ h(ξ) − Σ_{i=0}^m h_i φ_i(ξ) ‖ ≤ C/(m+1)! ‖ ∂^{m+1}h/∂ξ^{m+1} ‖,   (2.34)

where C is a constant. This estimate can be extended to the case where multidimensional Hermite polynomials are used as basis functions. Hence, the polynomial chaos expansion can be used to approximate a random process accurately.
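A small self-check illustrates the Cameron-Martin theorem and the factorial decay in (2.34). The example below is an illustration added here, not part of the original study: it expands h(ξ) = e^ξ, whose Fourier-Hermite coefficients with respect to the probabilists' Hermite polynomials He_i are known in closed form, h_i = e^{1/2}/i! (dividing the projection by ⟨He_i²⟩ = i!, since the He_i are orthogonal but not orthonormal), and tracks the mean-square truncation error:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Probabilists' Gauss-Hermite rule; weights renormalized to the N(0,1) density W(xi):
nodes, wts = He.hermegauss(40)
wts = wts / np.sqrt(2.0 * np.pi)

h = np.exp(nodes)                           # the second-order process h(xi) = exp(xi)
m = 8
coeffs = []
for i in range(m):
    c = np.zeros(i + 1); c[i] = 1.0
    phi = He.hermeval(nodes, c)             # probabilists' Hermite polynomial He_i(xi)
    # Projection h_i = <h He_i> / <He_i^2>, with <He_i^2> = i!
    coeffs.append(float(np.sum(wts * h * phi)) / math.factorial(i))

# Mean-square truncation error E[(h - sum_{i<=p} h_i He_i)^2] = e^2 - e * sum_{i<=p} 1/i!
err = [math.e ** 2 - math.e * sum(1.0 / math.factorial(i) for i in range(p + 1))
       for p in range(m)]
```

The quadrature-computed coefficients reproduce e^{1/2}/i! to high accuracy, and the error list shrinks factorially with the truncation order, as the estimate (2.34) predicts.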

Chapter 3

Representation of the Solution Process

3.1 Introduction

In this chapter we consider different projection schemes for the solution process. The main focus of these projection schemes is the development of numerical techniques for solving the stochastic PDE governing single-phase flow in a heterogeneous porous medium. Over the last decade, much progress has been made in combining well-known spatial discretization techniques, such as finite elements, finite differences, spectral methods and boundary elements, with random field discretization techniques [13,14]. In particular, there is a wide body of work dealing with the application of the finite element method (FEM) in conjunction with random field discretization techniques to solve stochastic PDEs; see, for example, the research monograph by Ghanem and Spanos [3]. By combining conventional spatial discretization schemes with random field discretization techniques, it becomes possible to arrive at a finite-dimensional approximation of a stochastic PDE as a system of ordinary differential equations (ODEs) with random coefficients. The system of ODEs can be converted into a system of random algebraic equations

using a temporal discretization scheme or a frequency-domain transformation [3,13,14]. In the case of steady-state stochastic PDEs, discretization in space and in the random dimension leads directly to a system of random algebraic equations [12]. Hence, efficient numerical schemes for solving random ODEs and algebraic equations are essential tools for solving problems described by stochastic PDEs. Stochastic subspace projection schemes can be considered an extension of existing numerical schemes for solving deterministic algebraic and differential equations. Before delving into the details of stochastic projection schemes, we first outline the essential ideas used in these formulations [12].

To illustrate the basic ideas used in functional expansion approaches, consider the continuous stochastic operator problem

L(x; θ) u(x; θ) = f(x; θ),   (3.1)

where L(x; θ) is a stochastic differential operator, i.e., a randomly parameterized differential operator. For example, L(x; θ) = ξ(θ) ∇², where ∇² is the Laplacian operator and ξ(θ) is a random coefficient. In the pressure equation that governs flow in porous media, the random coefficient is the permeability field, L(x, y; θ) = K(x, y, θ) (∂²/∂x² + ∂²/∂y²). Here f(x; θ) is a random function and u(x; θ) is the random solution process whose statistics are to be computed. As stated earlier, we use the symbol θ to indicate the dependence of a quantity on the random dimension. The main idea used in functional expansion techniques is to decompose the solution process u(x; θ) into separable deterministic and stochastic components, namely

u(x; θ) ≈ ũ(x; θ) = Σ_{i=1}^m u_i(x) φ_i(ξ(θ)),   (3.2)

where u_i(x) ∈ R is an undetermined deterministic function and φ_i(ξ(θ)) is a known

stochastic basis function. From now on, we shall not explicitly indicate the dependence of the random variable ξ on the random dimension θ. It can be seen from Equation (3.2) that, once the stochastic basis functions are chosen, the solution process simplifies to the computation of the functions u_i(x), i = 1, 2, ..., m. Substituting Equation (3.2) into the governing equation leads to the stochastic residual error function, defined as

ε(x; θ) = L(x; θ) Σ_{i=1}^m u_i(x) φ_i(ξ(θ)) − f(x; θ).   (3.3)

Equations governing the undetermined functions can now be derived by employing a projection scheme along the random dimension θ. Consider the case where the Galerkin projection scheme is employed, in which the stochastic residual error is orthogonalized with respect to the approximating space spanned by the φ_i(ξ) [15,16]. In other words, the inner product of ε(x; θ) with each basis function is set to zero, i.e.,

Σ_{i=1}^m ⟨φ_j(ξ) L(x; θ) φ_i(ξ)⟩ u_i(x) − ⟨φ_j(ξ) f(x; θ)⟩ = 0,   j = 1, 2, ..., m,   (3.4)

where ⟨·⟩ denotes the inner product in the Hilbert space of random variables, i.e.,

⟨f(θ) g(θ)⟩ = ∫ f(θ) g(θ) dΓ(θ).   (3.5)

⟨φ_j(ξ) L(x; θ) φ_i(ξ)⟩ is a deterministic differential operator, and ⟨φ_j(ξ) f(x; θ)⟩ is a deterministic function; therefore, Equation (3.4) represents a set of m coupled deterministic operator problems that govern the u_i(x). Hence, by applying a functional expansion of the form given in Equation (3.2) in conjunction with the Galerkin projection scheme, we arrive at a system of coupled deterministic operator problems. This increases the dimensionality of the problem of computing the u_i(x). Since Equation (3.4) is deterministic, it can be

solved readily using conventional numerical techniques, such as FEM, and the u_i(x) can be computed. Subsequently, it becomes possible to approximate the complete statistics of u(x, θ) efficiently in a postprocessing phase using Equation (3.2). In summary, the functional expansion approach for solving stochastic operator problems involves four steps: (i) selection of a suitable set of stochastic basis functions φ_i(ξ); (ii) application of a projection scheme along the random dimension θ to arrive at a system of coupled equations; (iii) numerical solution of the coupled system of deterministic equations to compute the undetermined functions u_i(x); and (iv) postprocessing to compute the statistics of interest.

An alternative approach can be formulated by first spatially discretizing the continuous governing Equation (3.1) using any conventional technique such as FEM [12]. This semidiscretization procedure involves treating the discrete nodal values of the field variable u(x, θ) as random variables, and ultimately leads to a system of random algebraic equations of the form

R(u(ξ); ξ) = 0.   (3.6)

Here u(ξ) ∈ Rⁿ denotes a random vector composed of the values of the discretized field variable, whose statistics are to be determined. It is feasible to apply the functional expansion approach outlined earlier to approximate the random vector u(ξ), expanded as

û(ξ) = Σ_{i=1}^m u_i φ_i(ξ),   (3.7)

where u_i ∈ Rⁿ, i = 1, 2, ..., m, are vectors of undetermined coefficients. These unknown vectors can be computed by substituting Equation (3.7) into Equation (3.6) and applying a Galerkin projection scheme along the random dimension θ. This results in the following system of deterministic algebraic equations with increased dimensionality (mn unknowns, in comparison to Equation (3.6), which has only n unknowns):

⟨φ_j(ξ) R(Σ_{i=1}^{m} u_i φ_i(ξ); ξ)⟩ = 0, j = 1, 2, ..., m. (3.8)

It can be shown that the system of algebraic equations (Eq. (3.8)) is equivalent to a spatially discretized version of Equation (3.4). In other words, the functional expansion approach can be applied either to the continuous form of the stochastic partial differential equation, or to its spatially discretized version [12]. However, since we are dealing with a discretized problem where it is desired to approximate the random vector u(ξ), and not the random function u(x; θ), it appears more natural to approximate u(ξ) using a few known basis vectors. The random vector u(ξ) can then be expanded as

û(ξ) = α_1 ψ_1(ξ) + α_2 ψ_2(ξ) + ... + α_m ψ_m(ξ) = Ψ(ξ)α, (3.9)

where Ψ(ξ) = [ψ_1(ξ), ψ_2(ξ), ..., ψ_m(ξ)] ∈ R^{n×m} denotes a matrix of known stochastic basis vectors, and α = [α_1, α_2, ..., α_m]^T ∈ R^m is a vector of undetermined coefficients [17]. Note that the above representation has only m unknowns, whereas Equation (3.7) has a total of mn unknowns. The main idea used here is to employ a rich set of problem-dependent stochastic basis vectors, which ensures that accurate approximations can be obtained for m ≪ n. The idea of stochastic reduced basis representations was introduced recently by Nair et al. [4] in the context of solving large-scale linear random algebraic systems of equations obtained from semidiscretization of stochastic PDEs. In contrast to the functional expansion scheme, where PC basis functions are typically used, Nair et al. introduced a preconditioned stochastic Krylov subspace which spans the solution space [2]. Hence the solution process is represented using basis vectors spanning the preconditioned stochastic Krylov subspace [2]. Substituting Equation (3.9) into Equation (3.6) and applying the Galerkin projection scheme, we arrive at the following reduced-order deterministic system of equations

⟨ψ_j^*(ξ) R(Ψ(ξ)α; ξ)⟩ = 0, j = 1, 2, ..., m, (3.10)

to be solved for the m unknown coefficients α_1, α_2, ..., α_m. In Equation (3.10) the superscript * denotes the complex conjugate transpose of a vector or matrix. Comparing Equation (3.8) with Equation (3.10), it can be observed that a key advantage of the stochastic reduced basis representation is that the undetermined quantities can be computed much more efficiently than in the functional expansion approach outlined earlier. This is because, in the stochastic reduced basis approach, the application of the Galerkin projection scheme leads to a reduced-order system of only m equations, rather than the mn-dimensional coupled system of Equation (3.8). Clearly, the success of this reduced basis approach hinges critically on the choice of the stochastic basis vectors. As we shall show later, using basis vectors spanning the preconditioned stochastic Krylov subspace, highly accurate results can be obtained with only a few basis vectors [12].

3.2 Polynomial Chaos Projection Schemes

In this section, we discuss the ideas behind the functional expansion as a projection scheme, which comprises a weak Galerkin projection scheme in conjunction with a PC expansion of the response process [12,16]. As stated earlier, we can also apply a stochastic projection scheme directly to the continuous form of the governing equations. Consider the case when the governing equation of a problem leads to a system of linear random algebraic equations. In the PC projection scheme of Ghanem and Spanos [3], the random nodal displacements are first expanded using a set of multidimensional Hermite polynomials; this results in the following expansion for the response process

u(ξ) = Σ_{i=0}^{P−1} u_i ϕ_i(ξ), (3.11)

where u_i ∈ R^n, i = 0, 1, 2, ..., P−1 are a set of vectors formed from the undetermined coefficients in the PC expansion for each nodal displacement, and {ϕ_i(ξ)} is a set of orthogonal Hermite polynomials in ξ. The number of terms in the expansion, P, is given by

P = Σ_{k=0}^{p} (M + k − 1)! / (k! (M − 1)!), (3.12)

where p is the order of the PC expansion, i.e., the highest order of the set of Hermite polynomials ϕ_i(ξ) [12]. Substitution of the PC expansion for u(ξ) into the governing random algebraic equations gives

(Σ_{i=0}^{M} K_i ξ_i)(Σ_{j=0}^{P−1} u_j ϕ_j(ξ)) = f. (3.13)

We assumed here that the input random process, K, depends linearly on ξ (with ξ_0 ≡ 1). For the case of nonlinear dependence of K on ξ, the expressions should be written in terms of Hermite polynomials, but the procedure is quite similar to the linear case. As shown by Ghanem and Spanos [3], the undetermined terms in the PC expansion can be uniquely computed by imposing the Galerkin condition, which involves orthogonalizing the stochastic residual error to the approximation subspace as shown below

⟨ε(ξ), ϕ_k(ξ)⟩ = 0, k = 0, 1, 2, ..., P−1, (3.14)

where the stochastic residual error vector ε(ξ) ∈ R^n is given by

ε(ξ) = (Σ_{i=0}^{M} K_i ξ_i)(Σ_{j=0}^{P−1} u_j ϕ_j(ξ)) − f. (3.15)

Substituting Equation (3.15) into (3.14), we get the following system of deterministic equations

Σ_{i=0}^{M} Σ_{j=0}^{P−1} K_i u_j ⟨ξ_i ϕ_j ϕ_k⟩ = ⟨ϕ_k⟩ f, k = 0, 1, 2, ..., P−1. (3.16)

The above equation can be rewritten in a more compact form as

Σ_{j=0}^{P−1} K_jk u_j = f_k, k = 0, 1, 2, ..., P−1, (3.17)

where

K_jk = Σ_{i=0}^{M} K_i ⟨ξ_i ϕ_j ϕ_k⟩ ∈ R^{n×n}, (3.18)

and

f_k = ⟨ϕ_k⟩ f ∈ R^n. (3.19)

The expectation operations in these equations can be readily carried out using properties of the Hermite chaos expansion. Assembling the above equations over the subscripts j and k, we arrive at a system of linear algebraic equations of the form K̃ũ = f̃, where K̃ ∈ R^{nP×nP} and ũ, f̃ ∈ R^{nP} [12]. After solving Equation (3.16) and substituting the result into Equation (3.11), we arrive at an explicit expression for the response process. This enables the statistics of the solution, as well as other response quantities of interest, to be computed efficiently in a post-processing phase.

3.3 Krylov Subspace Projection Scheme

It was noted that highly accurate approximations of the response process can be computed using a set of basis vectors spanning the preconditioned stochastic Krylov subspace introduced by Nair et al. [17]. This approach is essentially a stochastic generalization of Krylov

subspace methods in the numerical linear algebra literature, which have been applied widely to solve large-scale deterministic linear algebraic equations. In the context of linear random algebraic equations, the main idea is to approximate the response process, u(ξ), using basis vectors spanning the stochastic Krylov subspace defined as

K_m(K(ξ), f) = span{f, K(ξ)f, K(ξ)²f, ..., K(ξ)^{m−1} f}. (3.20)

This representation of the response can be justified by the following theorem by Nair et al. [4], which establishes the optimality of the stochastic Krylov subspace for solving the linear form of the governing equation.

Theorem. If the minimal random polynomial of a nonsingular random square matrix K(ξ) has degree m, then the solution of K(ξ)u(ξ) = f lies in the stochastic Krylov subspace K_m(K(ξ), f).

The degree of the minimal polynomial, m, of a random matrix depends on the distribution of its eigenvalues. More specifically, the number of basis vectors required to compute an accurate approximation depends on the degree of overlap of the pdfs (probability density functions) of the eigenvalues of the coefficient matrix K(ξ). To ensure a good approximation using a small number of basis vectors, it is preferable to use a preconditioner. In other words, we multiply both sides of the governing equation by the preconditioner P ∈ R^{n×n} to arrive at the following system of equations

(Σ_{i=0}^{M} P K_i ξ_i) u(ξ) = P f. (3.21)

The key idea here is to choose a matrix P such that the pdfs of the eigenvalues of the random matrix P Σ_{i=0}^{M} K_i ξ_i have a high degree of overlap. The inverse of the mean matrix, ⟨K(ξ)⟩^{−1} = K_0^{−1}, is used as the

preconditioner. This choice is motivated by the observation that K_0^{−1} K(ξ) will numerically behave like a matrix with a small number of distinct eigenvalues, particularly when the coefficients of variation of ξ_i, i = 1, 2, ..., M are small. In theory, convergence can be guaranteed as long as the preconditioner is invertible; the preconditioner suggested here, however, significantly accelerates the rate of convergence. A stochastic reduced basis representation of the response process can be written as

û(ξ) = α_1 ψ_1(ξ) + α_2 ψ_2(ξ) + ... + α_m ψ_m(ξ) = Ψ(ξ)α, (3.22)

where Ψ(ξ) = [ψ_1(ξ), ψ_2(ξ), ..., ψ_m(ξ)] ∈ R^{n×m} is a matrix of basis vectors spanning the preconditioned stochastic Krylov subspace K_m(K_0^{−1} K(ξ), K_0^{−1} f) and α = {α_1, α_2, ..., α_m}^T ∈ R^m is a vector of undetermined coefficients. As presented in detail in the next chapter, the numerical studies by Nair and Keane [4] suggest that highly accurate results can be obtained using only the first three basis vectors spanning the preconditioned stochastic Krylov subspace. These first three basis vectors can be written as

ψ_1(ξ) = K_0^{−1} f, (3.23)
ψ_2(ξ) = K_0^{−1} K(ξ) ψ_1(ξ), (3.24)
ψ_3(ξ) = K_0^{−1} K(ξ) ψ_2(ξ), (3.25)

and since K(ξ) = K_0 + Σ_{i=1}^{M} ξ_i K_i, the basis vectors can be compactly written as

ψ_1(ξ) = u_0, (3.26)
ψ_2(ξ) = Σ_{i=1}^{M} d_i ξ_i, (3.27)
ψ_3(ξ) = Σ_{i=1}^{M} Σ_{j=1}^{M} e_ij ξ_i ξ_j, (3.28)

where u_0 = K_0^{−1} f, d_i = K_0^{−1} K_i u_0, and e_ij = K_0^{−1} K_i d_j. It can be seen from the above expressions that the basis vectors are random polynomials which can be written as explicit functions of ξ, and that their coefficient vectors can be computed recursively. Because of this recursive representation, the basis vectors can be computed efficiently given the factored form of the preconditioner K_0^{−1}, which is readily available as a byproduct of the deterministic analysis of the problem. When the input random field, K, is a nonlinear function of ξ, the basis vectors become nonlinear functions of ξ and can be written as

ψ_2(ξ) = Σ_{i=1}^{N} d_i ϕ_i(ξ), (3.29)
ψ_3(ξ) = Σ_{i=1}^{N} Σ_{j=1}^{N} e_ij ϕ_i(ξ) ϕ_j(ξ), (3.30)

where N is the number of terms retained in the PC decomposition of K(ξ). To compute the vector of undetermined coefficients, α, we can use (weak) Galerkin projection schemes. The stochastic residual error vector is

ε(ξ) = (Σ_{i=0}^{M} K_i ξ_i) Ψ(ξ)α − f ∈ R^n. (3.31)

If we restrict our attention to self-adjoint stochastic PDEs, the matrices K_i, i = 0, 1, 2, ..., M are guaranteed to be symmetric and positive definite. Hence the undetermined coefficients can be computed by enforcing the Galerkin condition

⟨(Σ_{i=0}^{M} K_i ξ_i) Ψ(ξ)α − f, ψ_j(ξ)⟩ = 0, j = 1, 2, ..., m. (3.32)

This weak Galerkin scheme requires that the stochastic residual error vector, ε(ξ), be made

orthogonal to the approximating subspace spanned by Ψ(ξ). Therefore, the Galerkin condition is also referred to as an orthogonal projection scheme. Application of the weak Galerkin condition results in the following reduced-order m × m deterministic system of equations for α [12]

[Σ_{i=0}^{M} ⟨ξ_i Ψ*(ξ) K_i Ψ(ξ)⟩] α = ⟨Ψ*(ξ) f⟩. (3.33)

Since explicit expressions for the stochastic basis vectors are available, the expectation operations required to compute the elements of the reduced-order system in the above equation can be readily performed.
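As a concrete illustration of Equation (3.33), the following numpy sketch assembles and solves the reduced m × m system for a toy problem with K(ξ) = K_0 + Σ_i ξ_i K_i. All matrix data is arbitrary SPD test input, not from the thesis, and the expectations are estimated by plain Monte Carlo averaging for transparency; closed-form Gaussian moments could be used instead, as the text suggests.

```python
import numpy as np

# Sketch of the reduced-order Galerkin system (3.33) for a toy random
# matrix K(xi) = K0 + sum_i xi_i K_i. Expectations over xi are estimated
# by sampling; all matrices are arbitrary SPD test data.
rng = np.random.default_rng(2)
n, M, m = 8, 2, 3                  # unknowns, random variables, basis size
B = rng.standard_normal((n, n))
K0 = B @ B.T + n * np.eye(n)       # mean matrix (SPD)
Ki = [0.3 * np.diag(rng.random(n)) for _ in range(M)]
f = rng.standard_normal(n)
K0inv = np.linalg.inv(K0)

def basis(xi):
    """Psi(xi): first m preconditioned Krylov vectors, plus K(xi) itself."""
    K = K0 + sum(x * Kmat for x, Kmat in zip(xi, Ki))
    cols = [K0inv @ f]                     # psi_1 = K0^{-1} f
    for _ in range(m - 1):                 # psi_{k+1} = K0^{-1} K(xi) psi_k
        cols.append(K0inv @ (K @ cols[-1]))
    return np.column_stack(cols), K

samples = rng.standard_normal((4000, M))
A = np.zeros((m, m)); b = np.zeros(m)
for xi in samples:                 # accumulate E[Psi^T K Psi] and E[Psi^T f]
    Psi, K = basis(xi)
    A += Psi.T @ K @ Psi
    b += Psi.T @ f
alpha = np.linalg.solve(A / len(samples), b / len(samples))

# Compare the SRBM mean response with direct Monte Carlo on the same samples
u_srbm = np.mean([basis(xi)[0] @ alpha for xi in samples], axis=0)
u_mc = np.mean([np.linalg.solve(basis(xi)[1], f) for xi in samples], axis=0)
print(np.max(np.abs(u_srbm - u_mc)))   # small: 3 basis vectors suffice here
```

For this mildly random K(ξ), three Krylov basis vectors already reproduce the Monte Carlo mean to well below sampling noise, consistent with the claim that m ≪ n can be accurate.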

Chapter 4 Stochastic Reduced Basis Method

4.1 Introduction

In the previous chapters, we reviewed basic methods for representing a stochastic process, and then different projection schemes, which essentially provide basis functions that build a space for the solution process. In this chapter, we review the Stochastic Reduced Basis Method (SRBM), introduced by Nair [1,17], for solving linear random algebraic equations arising from the discretization of stochastic partial differential equations (PDEs). The numerical implementation for the specific problem of the pressure equation, where the permeability field is assumed to be the only source of uncertainty, is presented in the next chapter. Here, we focus on the general picture for any well-defined stochastic PDE. We first outline the basic steps involved in the construction of a generalized PC expansion and the semi-discretization of the stochastic PDE. Then, we discuss the PC approach for reformulating the stochastic basis vectors spanning the preconditioned Krylov subspace to approximate the response process. Next, we use the Galerkin, or Petrov-Galerkin, projection scheme for estimating the undetermined coefficients. Finally, we describe the post-processing procedure for the solution process to determine the first and second moments of the solution; higher moments can be computed in a similar manner.

4.2 Stochastic Finite Element Analysis

To illustrate the basic steps involved in the spatial discretization of a stochastic PDE using a finite element formulation, let us consider a simple 2D problem, where the input random field can be written as

D(x; ω) = h(x; ω) D_0, (4.1)

where h(x; ω) : Ω × R² → R represents a random field, and D_0 is the deterministic part of the input field. The random field h(x; ω) needs to be discretized and expanded as a finite set of random variables in order to perform the computations. As introduced in Chapter 2, we use the KL expansion to semi-discretize the field, though several other methods have been implemented in the literature. The Karhunen-Loeve expansion of the random field h(x; ω) can be written as

h(x; ω) = ⟨h(x; ω)⟩ + Σ_{i=1}^{∞} √λ_i θ_i h_i(x), (4.2)

where λ_i and h_i(x) are, respectively, the eigenvalues and eigenfunctions of the correlation function of the random field, and θ_i, i = 1, 2, ..., are a set of uncorrelated random variables, usually a Gaussian set as stated in Chapter 2. Normally, the major challenge with the KL expansion is to find these eigenfunctions, and several methods have been proposed in the literature [18,19,20,21]. Truncating Equation (4.2) at the Nth term gives a finite-dimensional approximation of the random field as

h(x; ω) ≈ ⟨h(x; ω)⟩ + Σ_{i=1}^{N} √λ_i θ_i h_i(x). (4.3)

Substituting Equation (4.3) into Equation (4.1) gives the input in terms of random variables. Now the spatial domain can be readily discretized into a number of elements by a finite element method.
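The truncated KL expansion (4.3) has a simple discrete analogue in which the covariance kernel is replaced by the covariance matrix on a grid, so the eigenfunctions become eigenvectors. A numpy sketch with an exponential covariance model (the grid, variance, and correlation length are illustrative choices, not values from the thesis):

```python
import numpy as np

# Discrete (matrix) version of the KL expansion (4.2)-(4.3): eigendecompose
# the covariance matrix of an exponential covariance model on a 1D grid.
n, sigma2, ell = 200, 1.0, 0.2          # grid points, variance, corr. length
x = np.linspace(0.0, 1.0, n)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

lam, H = np.linalg.eigh(C)              # eigenpairs of the covariance matrix
lam, H = lam[::-1], H[:, ::-1]          # eigh is ascending; sort descending

# Pick N so the retained modes capture 95% of the variance (the trace)
energy = np.cumsum(lam) / np.sum(lam)
N = int(np.searchsorted(energy, 0.95) + 1)
print(N)                                # number of modes kept

# One realization of the truncated field, Eq. (4.3), with Gaussian theta_i
rng = np.random.default_rng(3)
theta = rng.standard_normal(N)
h = H[:, :N] @ (np.sqrt(lam[:N]) * theta)
```

The eigenvalue decay, and hence the required N, depends strongly on the correlation length: larger ℓ concentrates the variance in fewer modes.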

This sets the stage for the application of the finite element method (FEM) to spatially discretize the governing equation. The starting point of the FEM is the weak form of the governing equation, which is obtained by multiplying the governing equation by a test function and integrating by parts. Subsequently, the domain is divided into a number of elements, and the field variable is approximated within each element using a set of shape functions as

ũ(x) = Σ_{i=1}^{n_e} u_i(θ) N_i(x), (4.4)

where N_i denotes the ith shape function, and u_i(θ) can be interpreted as a generalized field variable. Substituting this approximation into the weak form of the governing equation, we arrive at the following system of equations:

K(θ) u(θ) = f, (4.5)

where u(θ) ∈ R^n is the random response vector, and f ∈ R^n is assumed to be deterministic for the sake of simplicity. For a stochastic f, a procedure similar to the one used for the left-hand side applies, with f expanded in terms of Hermite polynomials. For the Gaussian model, the matrix K(θ) can be written in the following form

K(θ) = K_0^g + Σ_{i=1}^{N} K_i^g θ_i, (4.6)

where θ_1, θ_2, ..., θ_N are uncorrelated Gaussian random variables, K_0^g ∈ R^{n×n} is the mean stiffness matrix, and K_i^g ∈ R^{n×n} are weighted stiffness matrices. Gaussian models are commonly used in the probabilistic analysis of engineering problems, primarily due to their simplicity. However, in some cases it may be preferable to use other processes, such as a log-normal process, to represent the distribution of the quantity of interest. This is particularly the case when the

quantity under consideration is always positive, such as the permeability, K, in the pressure equation. These non-Gaussian models can be written in a form similar to the Gaussian one; the only difference is that the θ's are replaced by Hermite polynomials, namely,

K(θ) = Σ_{i=0}^{P_1−1} K_i^l Γ_i(θ). (4.7)

It is worth noting that the Gaussian process can be considered as a special case of the above representation in which only the zeroth- and first-order terms in the PC expansion are retained, so that Σ_{i=1}^{N} K_i^l Γ_i(θ) = Σ_{i=1}^{N} K_i^l θ_i. This is because any linear combination of Gaussian random variables is still Gaussian. Here, the θ's are uncorrelated Gaussian random variables, so Σ_{i=1}^{N} K_i^l θ_i is a Gaussian random variable which is orthogonal to the higher-order terms. Hence, for a Gaussian process, only the zeroth- and first-order terms in the PC expansion are present.
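In the scalar case, the Hermite coefficients in a representation like (4.7) are known in closed form for a log-normal input: if K = exp(aθ) with θ ~ N(0, 1), the coefficient of the ith probabilists' Hermite polynomial He_i is exp(a²/2) aⁱ/i!. A quadrature check of this standard identity (an illustrative aside, not a computation from the thesis):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval, hermegauss

# PC (Hermite) coefficients of a scalar lognormal K = exp(a*theta),
# theta ~ N(0,1): k_i = <K He_i> / <He_i^2> = exp(a^2/2) * a^i / i!,
# verified here with Gauss-Hermite quadrature.
a = 0.5
t, w = hermegauss(40)
w = w / w.sum()                          # weights now average over N(0,1)

for i in range(5):
    c = np.zeros(i + 1); c[i] = 1.0
    He_i = hermeval(t, c)                # probabilists' Hermite He_i(t)
    k_i = np.sum(w * np.exp(a * t) * He_i) / factorial(i)   # <He_i^2> = i!
    exact = np.exp(a**2 / 2) * a**i / factorial(i)
    print(i, k_i, exact)                 # the two columns agree
```

The fast decay of the coefficients with i is what makes a truncated Hermite representation of a log-normal permeability practical.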

4.3 Stochastic Krylov Subspace

Substitution of Equation (4.7) into the linear algebraic equation K(θ)u(θ) = f gives:

[K_0 + Σ_{i=1}^{P_1−1} K_i^l Γ_i(θ)] u(θ) = f, (4.8)

where K_0 is the mean stiffness matrix and the K_i^l are weighted stiffness matrices. P_1 depends on the order of the KL expansion used to discretize the field, as well as on the order of the PC decomposition of the non-Gaussian random field. When the right-hand side of the equation is also stochastic, the equation takes the following form:

[K_0 + Σ_{i=1}^{P_1−1} K_i^l Γ_i(θ)] u(θ) = f_0 + Σ_{i=1}^{h_1−1} f_i^l Γ_i(θ), (4.9)

where h_1 depends on the order of the PC expansion used to approximate the source term, f. We stated in Chapter 3 that the solution process belongs to the Krylov subspace defined as

K_m(K(θ), f) = span{f, K(θ)f, K(θ)²f, ..., K(θ)^{m−1} f}. (4.10)

The stochastic reduced basis approximation of the solution process can be written as:

û(θ) = ξ_0 ψ_0(θ) + ξ_1 ψ_1(θ) + ... + ξ_m ψ_m(θ) = Ψ(θ)ξ, (4.11)

where Ψ(θ) = [ψ_0(θ), ψ_1(θ), ..., ψ_m(θ)] ∈ R^{n×(m+1)} is a matrix of basis vectors spanning the preconditioned stochastic Krylov subspace K_m(K_0^{−1} K(θ), K_0^{−1} f), and ξ = {ξ_0, ξ_1, ..., ξ_m}^T ∈ R^{m+1} is a vector of undetermined coefficients. The number of basis vectors required to compute an accurate approximation depends on the degree of overlap of the PDFs of the eigenvalues of the coefficient matrix K(θ) [4]. To ensure good approximations using a small number of basis vectors, it is preferable to use a preconditioner. As explained in Chapter 3, the idea is to transform the coefficient matrix into a new matrix whose eigenvalue PDFs tend to have a high degree of overlap. We use the inverse of the mean stiffness matrix, ⟨K(θ)⟩^{−1} = K_0^{−1}, as the preconditioner. This choice is motivated by the observation that K_0^{−1} K(θ) behaves numerically like a matrix with a small number of distinct eigenvalues. In theory, convergence can be guaranteed as long as the preconditioner is invertible; in practice, this preconditioner accelerates convergence significantly, and it is possible to obtain high accuracy using a small number of basis vectors. The first three vectors spanning the preconditioned stochastic Krylov subspace can be written as:

ψ_0(θ) = K_0^{−1} f,
ψ_1(θ) = K_0^{−1} K(θ) ψ_0(θ),
ψ_2(θ) = K_0^{−1} K(θ) ψ_1(θ). (4.12)

For Gaussian random fields, where K(θ) = K_0 + Σ_{i=1}^{N} θ_i K_i, the basis vectors can be compactly written as

ψ_0(θ) = u_0,
ψ_1(θ) = Σ_{i=1}^{N} d_i θ_i,
ψ_2(θ) = Σ_{i=1}^{N} Σ_{j=1}^{N} e_ij θ_i θ_j, (4.13)

where u_0 = K_0^{−1} f, d_i = K_0^{−1} K_i u_0, and e_ij = K_0^{−1} K_i d_j. To implement higher-order projection schemes, it is not computationally efficient to use the representation given in Equations (4.12) and (4.13); for example, the fourth and fifth basis vectors are

ψ_3(θ) = Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_{k=1}^{N} f_ijk θ_i θ_j θ_k,
ψ_4(θ) = Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_{k=1}^{N} Σ_{l=1}^{N} g_ijkl θ_i θ_j θ_k θ_l, (4.14)

where f_ijk = K_0^{−1} K_i e_jk ∈ R^n and g_ijkl = K_0^{−1} K_i f_jkl ∈ R^n. Note that these basis vectors are for the Gaussian uncertainty model. For the non-Gaussian model, constructing the basis vectors becomes even more demanding, and an alternative approach should be used [1]. The idea is to use the fact that ψ_k = (K_0^{−1} K)^k K_0^{−1} f, so that each basis vector can be computed recursively as ψ_{k+1} = K_0^{−1} K(θ) ψ_k. Hence it is reasonable to apply PC expansions to arrive at a numerically more tractable algorithm [17]. The first basis vector, ψ_0(θ) = K_0^{−1} f, can be rewritten as:

ψ_0(θ) = Σ_{i=0}^{P_2−1} ψ_i^0 Γ_i, (4.15)

where P_2 depends on the order of the PC expansion used for the decomposition, and ψ_i^0 is zero for all i except i = 0, for which it equals K_0^{−1} f. Since higher-order basis vectors require more terms for better approximations, we use PC expansions of higher order to minimize truncation errors; making use of the recursive representation of the basis vectors, each basis vector should be computed using a PC expansion that is one order higher than the preceding one. For simplicity of notation, we assume that the number of terms used to approximate all the vectors is P_2, noting that for the higher-order basis vectors we need to choose a sufficiently large P_2 to reduce the error [1]. For the second basis vector, ψ_1(θ) = K_0^{−1} K(θ) ψ_0(θ), we have:

ψ_1(θ) = Σ_{i=0}^{P_2−1} ψ_i^1 Γ_i, (4.16)

where ψ_i^1 = K_0^{−1} K_i u_0 ∈ R^n. At this stage, we employ the recursive representation and write the higher-order basis vectors as functions of random variables. The third basis vector, ψ_2(θ) = K_0^{−1} K(θ) ψ_1(θ), can be written as:

ψ_2(θ) = K_0^{−1} Σ_{i=0}^{P_2−1} Σ_{j=0}^{P_2−1} K_i ψ_j^1 Γ_i Γ_j. (4.17)

The objective is to represent the above expression, which involves products of two Hermite polynomials, as

ψ_2(θ) = Σ_{i=0}^{P_2−1} ψ_i^2 Γ_i. (4.18)

Equating the right-hand sides of Equations (4.17) and (4.18), and multiplying both sides by Γ_k, we arrive at:

Σ_{i=0}^{P_2−1} ψ_i^2 Γ_i Γ_k = K_0^{−1} Σ_{i=0}^{P_2−1} Σ_{j=0}^{P_2−1} K_i ψ_j^1 Γ_i Γ_j Γ_k. (4.19)

Taking the expectation of both sides and making use of the orthogonality property of the PC basis gives the following expression for ψ_k^2:

ψ_k^2 = K_0^{−1} Σ_{i=0}^{P_2−1} Σ_{j=0}^{P_2−1} K_i ψ_j^1 D_ijk / ⟨Γ_k²⟩, (4.20)

where D_ijk = ⟨Γ_i Γ_j Γ_k⟩. The idea is the same for the higher basis orders, noting that the D_ijk can be computed beforehand to speed up the computation. The general expression for the deterministic vectors ψ_k^{m+1}, m > 0, can be written as

ψ_k^{m+1} = K_0^{−1} Σ_{i=0}^{P_2−1} Σ_{j=0}^{P_2−1} K_i ψ_j^m D_ijk / ⟨Γ_k²⟩, (4.21)

where the corresponding (m+1)th basis vector is given by

ψ^{m+1}(θ) = Σ_{i=0}^{P_2−1} ψ_i^{m+1} Γ_i. (4.22)

In this way, we acquire a set of stochastic basis vectors Ψ̂(θ) = [ψ̂_0(θ), ψ̂_1(θ), ..., ψ̂_m(θ)] ∈ R^{n×(m+1)}. These are not exactly the same as those defined earlier in Equation (4.11), because of the PC decomposition. However, if PC expansions of the appropriate order are used in the construction of the basis, the basis vectors span the preconditioned Krylov subspace.
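The triple products D_ijk = ⟨Γ_i Γ_j Γ_k⟩ used in Equations (4.20) and (4.21) can indeed be tabulated beforehand. For a single Gaussian variable and probabilists' Hermite polynomials they have a known closed form, which the following sketch (illustrative, not from the thesis) checks against Gauss-Hermite quadrature:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval, hermegauss

# Precompute D_ijk = <Gamma_i Gamma_j Gamma_k> for 1D probabilists'
# Hermite polynomials, and check against the known closed form:
# <He_i He_j He_k> = i! j! k! / ((s-i)!(s-j)!(s-k)!) when i+j+k = 2s is
# even and s >= max(i,j,k), and 0 otherwise.
t, w = hermegauss(30)
w = w / w.sum()                              # average over N(0,1)

def He(i, x):
    c = np.zeros(i + 1); c[i] = 1.0
    return hermeval(x, c)

def D_quad(i, j, k):
    return float(np.sum(w * He(i, t) * He(j, t) * He(k, t)))

def D_exact(i, j, k):
    s2 = i + j + k
    if s2 % 2 or max(i, j, k) > s2 // 2:
        return 0.0
    s = s2 // 2
    return (factorial(i) * factorial(j) * factorial(k)
            / (factorial(s - i) * factorial(s - j) * factorial(s - k)))

P = 5
D = np.array([[[D_quad(i, j, k) for k in range(P)]
               for j in range(P)] for i in range(P)])
print(D[1, 1, 2])  # He_1*He_1 projected onto He_2: equals 2.0
```

Since the D_ijk depend only on the chosen PC basis, this table can be computed once and reused for every basis vector and every right-hand side.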

Of course, the computational complexity involved in constructing the PC expansion of the basis vectors grows as the number of random variables increases [1]. Finally, we can write the solution process in stochastic reduced basis form as

û(θ) = ξ̂_0 ψ̂_0(θ) + ξ̂_1 ψ̂_1(θ) + ... + ξ̂_m ψ̂_m(θ) = Ψ̂(θ)ξ̂, (4.23)

where ξ̂ = {ξ̂_0, ξ̂_1, ..., ξ̂_m}^T ∈ R^{m+1} is a vector of undetermined coefficients.

4.4 Petrov-Galerkin Projection Scheme

Substituting Equation (4.23) into Equation (4.8) gives:

(Σ_{i=0}^{P_1−1} K_i Γ_i(θ)) Ψ̂(θ)ξ̂ = f. (4.24)

To estimate the undetermined coefficients, we apply a projection scheme. As long as K(θ) is a positive definite matrix, the Petrov-Galerkin projection scheme works. The stochastic residual error can be written as

ε(θ) = (Σ_{i=0}^{P_1−1} K_i Γ_i) Ψ̂(θ)ξ̂ − f ∈ R^n. (4.25)

As mentioned in Chapter 3, the residual error is made orthogonal to all the basis vectors, so we have:

⟨(Σ_{i=0}^{P_1−1} K_i Γ_i) Ψ̂(θ)ξ̂ − f, ψ̂_j(θ)⟩ = 0, j = 0, 1, ..., m. (4.26)

By imposing the above orthogonality condition, a reduced-order system of (m+1) × (m+1) deterministic linear algebraic equations is obtained, namely,

[Σ_{i=0}^{P_1−1} ⟨Γ_i Ψ̂^T(θ) K_i Ψ̂(θ)⟩] ξ̂ = ⟨Ψ̂^T(θ) f⟩. (4.27)

Using the PC representation of the basis vectors, we can write the solution process as

û(θ) = [Σ_{i=0}^{P_2−1} Π_i Γ_i] ξ̂, (4.28)

where Π_i = [ψ_i^0, ψ_i^1, ..., ψ_i^m] ∈ R^{n×(m+1)} is a deterministic coefficient matrix. Plugging this expression into Equation (4.27) gives:

⟨[Σ_{i=0}^{P_2−1} Π_i^T Γ_i][Σ_{j=0}^{P_1−1} K_j Γ_j][Σ_{k=0}^{P_2−1} Π_k Γ_k]⟩ ξ̂ = ⟨Σ_{i=0}^{P_2−1} Π_i^T Γ_i⟩ f. (4.29)

Simplifying the above equation, substituting D_ijk = ⟨Γ_i Γ_j Γ_k⟩, and applying the facts that ⟨Γ_i⟩ = 0 for i > 0 and Γ_0 = 1, we arrive at the following equation:

(Σ_{i=0}^{P_2−1} Σ_{j=0}^{P_1−1} Σ_{k=0}^{P_2−1} Π_i^T K_j Π_k D_ijk) ξ̂ = Π_0^T f. (4.30)

By solving this equation, we compute the undetermined coefficients, and then perform post-processing to compute the solution statistics.

4.5 Post-Processing

It is straightforward to determine the various statistics of the solution process, because the final expression is in terms of the PC basis functions. Taking the expectation of both sides of Equation (4.28) gives the mean of the solution process, namely,

⟨û⟩ = ⟨[Σ_{i=0}^{P_2−1} Π_i Γ_i] ξ̂⟩ = [Σ_{i=0}^{P_2−1} Π_i ⟨Γ_i⟩] ξ̂ = Π_0 ξ̂. (4.31)

For the covariance matrix, we have:

u_cov = ⟨(û(θ) − ⟨û⟩)(û(θ) − ⟨û⟩)^T⟩
      = Σ_{i=0}^{P_2−1} Σ_{j=0}^{P_2−1} Π_i ξ̂ ξ̂^T Π_j^T ⟨Γ_i Γ_j⟩ − Π_0 ξ̂ ξ̂^T Π_0^T
      = Σ_{i=0}^{P_2−1} ⟨Γ_i²⟩ Π_i ξ̂ ξ̂^T Π_i^T − Π_0 ξ̂ ξ̂^T Π_0^T. (4.32)

To study the convergence of the method, we need to compute the norm of the residual error ε(θ). Since ε(θ) is a random function, it can be decomposed using a PC expansion

ε(θ) = (Σ_{i=0}^{P_1−1} K_i Γ_i)(Σ_{j=0}^{P_2−1} Π_j Γ_j) ξ̂ − f = Σ_{i=0}^{P_3−1} ε_i Γ_i, (4.33)

where P_3 is the order of the PC expansion used for the decomposition of the residual error, which should be greater than the order used to decompose the stochastic basis vectors, in order to minimize truncation errors. Multiplying the preceding equation by Γ_k and taking the expectation gives

ε_k = [Σ_{i=0}^{P_1−1} Σ_{j=0}^{P_2−1} K_i Π_j ξ̂ D_ijk − f ⟨Γ_k⟩] / ⟨Γ_k²⟩. (4.34)

So the final expression for the residual error is

ε(θ) = Σ_{k=0}^{P_3−1} [ (Σ_{i=0}^{P_1−1} Σ_{j=0}^{P_2−1} K_i Π_j ξ̂ D_ijk − f ⟨Γ_k⟩) / ⟨Γ_k²⟩ ] Γ_k, (4.35)

and we can express the expected squared norm of the residual error as

⟨‖ε(θ)‖²⟩ = ⟨ε(θ)^T ε(θ)⟩ = Σ_{i=0}^{P_3−1} Σ_{j=0}^{P_3−1} ε_i^T ε_j ⟨Γ_i Γ_j⟩. (4.36)
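In the scalar-variable case, the post-processing formulas (4.31) and (4.32) reduce to simple sums over the PC coefficient vectors, which can be checked against sampling. In this sketch the vectors v_i stand in for the products Π_i ξ̂ and are arbitrary test data, not values from the thesis:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

# Post-processing sketch for Eqs. (4.31)-(4.32), one Gaussian variable:
# response u(theta) = sum_i v_i Gamma_i(theta), Gamma_i = He_i (orders 0..2).
rng = np.random.default_rng(4)
n, P2 = 3, 3
v = [rng.standard_normal(n) for _ in range(P2)]

mean = v[0]                                    # <Gamma_0> = 1, <Gamma_i> = 0
cov = sum(factorial(i) * np.outer(v[i], v[i])  # <Gamma_i^2> = i!
          for i in range(1, P2))

# Monte Carlo check of both moments
theta = rng.standard_normal(200_000)
U = sum(np.outer(hermeval(theta, np.eye(P2)[i]), v[i]) for i in range(P2))
print(np.max(np.abs(U.mean(axis=0) - mean)))            # ~ sampling noise
print(np.max(np.abs(np.cov(U, rowvar=False) - cov)))    # ~ sampling noise
```

The point of the formulas is that, unlike Monte Carlo, the moments come directly from the stored coefficient vectors with no sampling at all.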

The L² norm gives an estimate of how many basis vectors should be used to obtain an arbitrarily accurate approximation of the solution.
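To make the preceding machinery concrete, the following toy assembles and solves the fully coupled PC Galerkin system, i.e., the nP × nP system whose growth motivates SRBM, for a single Gaussian variable. All matrices are arbitrary SPD test data, not from the thesis, and the expectations are evaluated by Gauss-Hermite quadrature:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermegauss

# Toy coupled PC Galerkin system for a single Gaussian variable xi:
# K(xi) = K0 + xi*K1, response expanded in He_0..He_{P-1}.
rng = np.random.default_rng(0)
n, P = 4, 4                       # spatial unknowns, PC terms
A = rng.standard_normal((n, n))
K0 = A @ A.T + n * np.eye(n)      # SPD mean matrix
K1 = 0.1 * np.eye(n)              # small perturbation keeps K(xi) SPD
f = np.ones(n)

x, w = hermegauss(20)             # nodes/weights for E[g(xi)], xi ~ N(0,1)
w = w / w.sum()
def He(i, t):                     # probabilists' Hermite polynomial He_i
    c = np.zeros(i + 1); c[i] = 1.0
    return hermeval(t, c)

# Assemble the nP x nP block system (cf. Eqs. (3.17)-(3.19))
Kbig = np.zeros((n * P, n * P)); fbig = np.zeros(n * P)
for k in range(P):
    fbig[k*n:(k+1)*n] = np.sum(w * He(k, x)) * f        # <phi_k> f
    for j in range(P):
        Ejk  = np.sum(w * He(j, x) * He(k, x))          # <phi_j phi_k>
        Exjk = np.sum(w * x * He(j, x) * He(k, x))      # <xi phi_j phi_k>
        Kbig[k*n:(k+1)*n, j*n:(j+1)*n] = K0 * Ejk + K1 * Exjk
u = np.linalg.solve(Kbig, fbig)
mean_pc = u[:n]                   # coefficient of He_0 = mean response

# Monte Carlo reference for the mean
samples = rng.standard_normal(20000)
mean_mc = np.mean([np.linalg.solve(K0 + s * K1, f) for s in samples], axis=0)
print(np.max(np.abs(mean_pc - mean_mc)))   # small, at Monte Carlo noise level
```

Even for this tiny problem the coupled system is P times larger than the deterministic one; SRBM replaces it with an (m+1) × (m+1) system such as (4.30).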

Chapter 5 Numerical Studies

5.1 Introduction

Methods for computing predictions of flow behavior in natural porous media and estimating the associated uncertainty have been reported in the literature [22,23,24]. The main source of uncertainty is the spatial variability and correlation structure of the formation properties, for which only very sparse data are available [25,26,27]. In stochastic reservoir simulation, we honor the available data and try to model a reservoir based on these measured quantities. Deterministic mathematical formulations for flow continue to serve as the basis for stochastic reservoir simulation, with the difference that the coefficients and variables of the governing PDE are stochastic processes in space. Under steady-state conditions, pressure is a stochastic process defined over the domain. We expect our simulator to predict the average pressure and the amount of deviation from its mean. The variance, σ², provides us with a measure of variation from the mean. From Chebyshev's inequality we know

Pr(|P − ⟨P⟩| ≥ γσ) ≤ γ^{−2}; (5.1)

e.g., for γ = 3, the probability that the pressure lies within ±3σ of its mean is at least 1 − 1/9 ≈ 89 percent. However, to estimate accurately the probability density function (PDF) of a

random process, in addition to its mean and variance, we need to compute higher moments [28]. Monte Carlo Simulation (MCS) is the primary tool used in the oil industry to quantify the uncertainty in the flow due to uncertainty in the reservoir description [29]. It is a statistical method that samples the distribution using a large number of realizations of the random process. In MCS, one solves the deterministic PDE for each realization, and then post-processes the results to obtain the statistical moments of interest. Typically, large numbers of realizations are needed to achieve statistical convergence due to the high variability of the reservoir properties [29]; this is one of the major drawbacks of MCS. Several alternative approaches have been introduced in the literature: Statistical Moment Equations (SME) [2,37], Stochastic Collocation Methods (SCM) [30,31,32], the Probabilistic Collocation Method (PCM) [5,33], and Polynomial Chaos Expansions (PCE) [3]. In SME, higher-order terms are dropped in the process of deriving first- and second-order approximations of the solution process. As the variance and correlation scale of the permeability increase, the higher-order terms play a more significant role [2,34,35]. The validity range of the perturbation-based SME approach is limited to small values of the expansion parameter, usually the variance of log-permeability σ²_lnK, and small correlation scales [36,37]. The main strength of the SME method is its speed, providing a direct approach for quantifying the uncertainty associated with both flow and transport in heterogeneous porous media. However, its major drawback is that it is not applicable for high variances or large correlation lengths, which is the case for many practical oil fields. Surveys of natural formations indicate that the level of variability and the spatial correlation scales of permeability span a wide range [35,36,38,39].
It can be observed from these surveys that high levels of permeability variability (σ²_lnK > 1) are common in practice, and these cannot be predicted well with SME. In the Stochastic Collocation Method, the output random field is approximated by

Lagrange polynomial interpolation in probability space, which is based on deriving an uncoupled system of equations, just as in MCS, to solve for the function values at selected positions. The choice of the collocation points leads to a variety of collocation methods, including tensor product [40], Smolyak [41], Stroud 2 or 3, cubature [42], and adaptive Stroud methods [43]. One of the main benefits of stochastic collocation methods is that they are amenable to parallelization; however, similar to SME, they work best for small variances. Another stochastic collocation method, proposed by Tatang [44], is the Probabilistic Collocation Method (PCM). In PCM, a polynomial chaos expansion is used to approximate the solution process, and a collocation method is used to determine the coefficients of the polynomial chaos expansion by solving for the output random field at different sets of collocation points. Both PCM and SCM are computationally efficient, but, similar to SME, they fail to give a good approximation when the variability of the input field, such as the permeability, increases [2,5]. To account for highly variable permeability fields, which is the case for most practical applications, an alternative approach, namely the Spectral Stochastic Finite Element Method (SSFEM), has been used extensively [15,16,19,45]. This method employs a polynomial chaos expansion for the stochastic space of random outputs. After truncation, the SSFEM formulation fits into the framework of traditional spectral methods [41,45]. However, similar to spectral methods, one must solve a set of coupled equations for the deterministic coefficients of the polynomial chaos expansion, which increases the computational effort when the number of coefficients is large. To reduce the computational cost, the Stochastic Reduced Basis Method (SRBM) was introduced by Nair [1]. The SRBM projects the solution process onto a stochastic Krylov subspace.
The solution lies in the Krylov subspace, and with a few basis vectors we can build the solution [1,12]. This chapter is organized as follows: we first present the equations governing single-phase flow in porous media. Then, we employ a numerical discretization of the field for the Y-based form of the pressure equation, in which the pressure equation is written in terms of the log-permeability. Next, we review MCS and SME, and in the last section we compare

the SRBM numerical solutions with MCS and SME for different scenarios of variance and correlation length. The results show the efficiency and accuracy of SRBM in solving the elliptic partial differential equation for pressure.

5.2 Governing Equation

We consider the case of incompressible single-phase flow in a heterogeneous porous medium. From the continuity equation and Darcy's law, we write the equation governing the pressure in the domain as:

\[ \frac{\partial}{\partial x}\left[K(x,y)\,\frac{\partial P}{\partial x}\right] + \frac{\partial}{\partial y}\left[K(x,y)\,\frac{\partial P}{\partial y}\right] = 0. \tag{5.2} \]

Here K(x, y) denotes the spatially variable permeability field, and P(x, y) is the pressure. K(x, y) is assumed to be a random process in space; hence, the pressure is also random. If we account for sink/source terms on the right-hand side, the pressure would be a random process in time as well [46]. Expanding the derivatives of Equation (5.2) and defining Y(x, y) = ln K(x, y), we obtain a pressure equation in terms of log-permeability:

\[ \frac{\partial^2 P}{\partial x^2} + \frac{\partial^2 P}{\partial y^2} + \frac{\partial Y}{\partial x}\frac{\partial P}{\partial x} + \frac{\partial Y}{\partial y}\frac{\partial P}{\partial y} = 0. \tag{5.3} \]

We refer to Equations (5.2) and (5.3) as the continuous forms of the K-based and Y-based equations for pressure. Monte Carlo Simulation (MCS) is often used to find the statistical moments of pressure when permeability is the source of uncertainty, and MCS results often serve as a reference solution for the statistical moments of pressure. In MCS, the statistical results are obtained from an ensemble of solutions, each of which is computed on a single highly resolved realization of the permeability distribution. Using a random field generator, such as GSLIB [46] or HydroGen [47], realizations of the permeability can be generated such that they share a

common correlation structure (mean, variance, and two-point covariance). Here, we consider the simple case where the boundary conditions are

\[ \frac{\partial P}{\partial y}\bigg|_{y=0,\,L} = 0, \qquad P(0, y) = P_0, \qquad P(L, y) = P_1, \tag{5.4} \]

where L is the domain length, and P_0 and P_1 are the fixed inlet and outlet pressures, which are our Dirichlet boundary conditions. We assume no-flow boundary conditions at y = 0 and y = L [2,46,48]. The problem is now well-defined, and we are ready to perform the numerical computation of the statistical moments of pressure.

5.3 Numerical Implementation

To solve Equation (5.2) numerically, we employ a point-distributed grid [2,46]; the domain is discretized using N_i and N_j nodes in the respective x and y directions. The total number of nodes is M = N_i N_j.

5.3.1 K-Based Pressure Equation

For the K-based pressure equation, we apply a central finite-difference approximation on uniformly spaced grids, namely,

\[ \left(\frac{\partial}{\partial x} K \frac{\partial P}{\partial x}\right)_{i,j} = \frac{K_{i+1/2,j}\,(P_{i+1,j} - P_{i,j}) - K_{i-1/2,j}\,(P_{i,j} - P_{i-1,j})}{\Delta x^2}, \]
\[ \left(\frac{\partial}{\partial y} K \frac{\partial P}{\partial y}\right)_{i,j} = \frac{K_{i,j+1/2}\,(P_{i,j+1} - P_{i,j}) - K_{i,j-1/2}\,(P_{i,j} - P_{i,j-1})}{\Delta y^2}, \tag{5.5} \]

where K_{i+1/2,j}, K_{i-1/2,j}, K_{i,j+1/2}, and K_{i,j-1/2} are interfacial permeabilities, depicted in Figure (5.1). For interior nodes, or gridblocks, Equation (5.2) can be written as:

Figure 5.1: The computational and permeability grids in the domain.

\[ K_{i+1/2,j}P_{i+1,j} + K_{i-1/2,j}P_{i-1,j} + K_{i,j+1/2}P_{i,j+1} + K_{i,j-1/2}P_{i,j-1} - \left(K_{i+1/2,j} + K_{i-1/2,j} + K_{i,j+1/2} + K_{i,j-1/2}\right)P_{i,j} = 0. \tag{5.6} \]

Application of reflection at the no-flow boundary of the point-distributed grid [40], at y = 0 for example, yields:

\[ K_{i+1/2,j}P_{i+1,j} + K_{i-1/2,j}P_{i-1,j} + 2K_{i,j+1/2}P_{i,j+1} - \left(K_{i+1/2,j} + K_{i-1/2,j} + 2K_{i,j+1/2}\right)P_{i,j} = 0. \tag{5.7} \]

For the other no-flow boundary condition, we can write a similar equation. We write all the equations in matrix form as

\[ \mathbf{K}\mathbf{P} = 0, \tag{5.8} \]

where K is the M × M permeability (transmissibility) matrix, and P is the pressure vector of M elements. This equation is the discretized form of the K-based pressure equation (5.2). The boundary conditions are usually absorbed into the coefficient matrix. Internal sources and sinks may appear as source terms on the right-hand side, and the pressure equation becomes

\[ \mathbf{K}\mathbf{P} = \mathbf{f}, \tag{5.9} \]

where f is a vector of M elements accounting for boundary conditions and sink/source terms.

5.3.2 Y-Based Pressure Equation

For the Y-based pressure equation, we again apply a finite-difference approximation to Equation (5.3):

\[ \left(\frac{\partial^2 P}{\partial x^2}\right)_{i,j} = \frac{P_{i+1,j} - 2P_{i,j} + P_{i-1,j}}{\Delta x^2}, \qquad \left(\frac{\partial^2 P}{\partial y^2}\right)_{i,j} = \frac{P_{i,j+1} - 2P_{i,j} + P_{i,j-1}}{\Delta y^2}, \]
\[ \left(\frac{\partial Y}{\partial x}\frac{\partial P}{\partial x}\right)_{i,j} = \frac{1}{2\Delta x^2}\left(Y_{i+1/2,j} - Y_{i-1/2,j}\right)\left(P_{i+1,j} - P_{i-1,j}\right), \]
\[ \left(\frac{\partial Y}{\partial y}\frac{\partial P}{\partial y}\right)_{i,j} = \frac{1}{2\Delta y^2}\left(Y_{i,j+1/2} - Y_{i,j-1/2}\right)\left(P_{i,j+1} - P_{i,j-1}\right), \tag{5.10} \]

where the interfacial values of Y = ln K are used consistently. Plugging the discretized form of the derivatives into Equation (5.3) for the interior nodes gives the following equation:

\[ \left(P_{i+1,j} + P_{i-1,j} + P_{i,j+1} + P_{i,j-1} - 4P_{i,j}\right) + \tfrac{1}{2}\left(Y_{i+1/2,j} - Y_{i-1/2,j}\right)\left(P_{i+1,j} - P_{i-1,j}\right) + \tfrac{1}{2}\left(Y_{i,j+1/2} - Y_{i,j-1/2}\right)\left(P_{i,j+1} - P_{i,j-1}\right) = 0, \tag{5.11} \]

and for the no-flow boundary condition, the effect of reflection on the point-distributed grid at y = 0 leads to

\[ \left(P_{i+1,j} + P_{i-1,j} + 2P_{i,j+1} - 4P_{i,j}\right) + \tfrac{1}{2}\left(Y_{i+1/2,j} - Y_{i-1/2,j}\right)\left(P_{i+1,j} - P_{i-1,j}\right) = 0. \tag{5.12} \]

This is similar to the interior-node equation, except that the last term is dropped, since P_{i,j+1} - P_{i,j-1} = 0 under the no-flow boundary condition. Again, we can write all the equations in a compact matrix form as

\[ \mathbf{Y}\mathbf{P} = 0, \tag{5.13} \]

where Y is the M × M log-permeability matrix and P is the pressure vector of M elements. This equation will be referred to as the discretized Y-based pressure equation. Since the input field is the log-permeability, which in common practice is assumed to be Gaussian, it is preferable to use the Y-based pressure equation for our moment equations and SRBM. To account for source/sink terms, we have the following matrix form of the equation:

\[ \mathbf{Y}\mathbf{P} = \mathbf{f}, \tag{5.14} \]

where f is a vector of M elements of sink/source terms divided by transmissibility in the Y-based pressure equation.

5.4 Monte Carlo Simulation

The basic steps involved in Monte Carlo simulation are: (1) define a domain of possible inputs, (2) generate a large number of permeability realizations, (3) perform a deterministic computation using the K-based pressure equation for each realization, and (4) post-process the results of the individual computations and compute the statistical moments of interest. For the first step, we assume the permeability is log-normally distributed and second-order stationary in space, such that the mean log-permeability is constant and its covariance depends on the

relative distance of two points rather than their actual locations. For MCS, we solve the discretized K-based pressure equation

\[ \mathbf{K}\mathbf{P} = \mathbf{f}. \tag{5.15} \]

Since K is log-normally distributed, Y = ln K has a normal distribution. Here we use the exponential covariance model to describe the correlation structure of the log-permeability field,

\[ C_Y(\Delta x, \Delta y) = \sigma_Y^2 \exp\left(-\sqrt{\frac{(x_1 - x_2)^2}{\lambda_x^2} + \frac{(y_1 - y_2)^2}{\lambda_y^2}}\right), \tag{5.16} \]

where r = (Δx, Δy) is the separation vector between two points, σ_Y² is the variance of the log-permeability, and λ_x and λ_y are the correlation lengths of the log-permeability in the respective x and y directions. This model of the covariance was used by Zhang et al. [2]. Hence Y is Gaussian with mean ⟨Y⟩ and stationary covariance C_Y(r), and it is straightforward to generate realizations Y^i. In order to eliminate the impact of harmonic averaging, which is usually employed to obtain an interface value that ensures flux continuity in the discrete form of the pressure equation, the permeability at the interfaces is generated directly for the purpose of Monte Carlo simulation. Moreover, if we generated log-normal permeability at the cell centers, the interfacial permeabilities would no longer be log-normal, which would make all the equations nonlinear. It is worth noting that when we assume that our continuous Y field is Gaussian (K is log-normal), then, by a special property of Gaussian processes, every subset Y_1, Y_2, ..., Y_m of the field is also jointly Gaussian. If we pick the Y's at the grid interfaces, they are also Gaussian. For other types of input processes, however, this does not necessarily hold.
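As an illustration of the MCS workflow described in this section — sample Gaussian interfacial Y directly, exponentiate to K, solve the K-based pressure equation, and accumulate moments — the sketch below implements a deliberately simplified one-dimensional analogue with unit grid spacing. All function names are illustrative, not the thesis code; the study itself uses a 2-D point-distributed grid and 9000 realizations.

```python
import numpy as np

def exp_cov(x, sigma2, lam):
    # 1-D analogue of the exponential covariance model, Eq. (5.16)
    return sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lam)

def solve_pressure_1d(K_iface, P0, P1):
    # K-based pressure equation on a 1-D point-distributed grid, cf. Eq. (5.6):
    # K_{i+1/2}(P_{i+1} - P_i) - K_{i-1/2}(P_i - P_{i-1}) = 0 at interior nodes
    M = K_iface.size + 1
    A = np.zeros((M, M)); f = np.zeros(M)
    A[0, 0] = 1.0; f[0] = P0            # Dirichlet inlet pressure
    A[-1, -1] = 1.0; f[-1] = P1         # Dirichlet outlet pressure
    for i in range(1, M - 1):
        ke, kw = K_iface[i], K_iface[i - 1]
        A[i, i + 1] = ke; A[i, i - 1] = kw; A[i, i] = -(ke + kw)
    return np.linalg.solve(A, f)

def mcs_moments(x_iface, sigma2, lam, P0, P1, n_real, rng):
    # Sample interfacial Y (zero-mean Gaussian) by Cholesky factorization of C_Y,
    # exponentiate to K = e^Y, solve each realization, then apply Eqs. (5.18)-(5.19)
    C = exp_cov(x_iface, sigma2, lam)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(x_iface.size))  # jitter keeps C SPD
    P = np.empty((x_iface.size + 1, n_real))
    for r in range(n_real):
        Y = L @ rng.standard_normal(x_iface.size)
        P[:, r] = solve_pressure_1d(np.exp(Y), P0, P1)
    mean = P.sum(axis=1) / n_real                    # Eq. (5.18)
    CP = P @ P.T / n_real - np.outer(mean, mean)     # Eq. (5.19)
    return mean, np.diag(CP), CP
```

The pressure variance returned by this sketch vanishes at the Dirichlet boundaries and peaks in the interior, mirroring the behavior of the 2-D results discussed later in the chapter.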

Although the Y-based and K-based pressure equations are identical in continuous form, the results obtained from their discrete forms can be sensitive to the details of the discretization and the properties of the permeability field [2]. The Y-based pressure equation may lead to computational difficulties when applied to a discontinuous permeability distribution in Monte Carlo simulation. The matrix Y may not be diagonally dominant, due to the way we approximated the terms involving the first derivative, Y_{i+1/2,j} - Y_{i-1/2,j}. This problem becomes especially pronounced when the permeability variation is high (σ²_lnK > 1). Thus, Monte Carlo simulation using the Y-based pressure equation is not practical, and we should transform each sample Y^i to its corresponding permeability data K^i = e^{Y^i}, and then use the K-based pressure equation for the simulation. However, to remove any discrepancy between the Y- and K-based solutions, we apply a flux-continuous discretization scheme to the Y-based equations so that they are completely consistent with the K-based Monte Carlo simulation [29]. For each realization, we have a matrix of interfacial log-permeability Y^i, so K^i = e^{Y^i}, and we can solve the discretized K-based pressure equation to obtain P^i, namely,

\[ \mathbf{K}^i \mathbf{P}^i = \mathbf{f}^i. \tag{5.17} \]

Then we can calculate the pressure mean and variance as follows:

\[ \langle P \rangle = \frac{1}{N}\sum_{i=1}^{N} P^i, \qquad \mathrm{Var}(P) = \langle P^2 \rangle - \langle P \rangle^2 = \frac{1}{N}\sum_{i=1}^{N} (P^i)^2 - \left(\frac{1}{N}\sum_{i=1}^{N} P^i\right)^2, \tag{5.18} \]

where P is the pressure vector and N is the total number of realizations. In a similar way, the covariance matrix can be expanded as

\[ (C_P)_{l,m} = \mathrm{Cov}(P_l, P_m) = E\left[(P_l - \langle P_l \rangle)(P_m - \langle P_m \rangle)\right] = E[P_l P_m] - 2\langle P_l \rangle \langle P_m \rangle + \langle P_l \rangle \langle P_m \rangle = E[P_l P_m] - \langle P_l \rangle \langle P_m \rangle = \frac{1}{N}\sum_{i=1}^{N} P_l^i P_m^i - \frac{1}{N^2}\left(\sum_{i=1}^{N} P_l^i\right)\left(\sum_{i=1}^{N} P_m^i\right). \tag{5.19} \]

We can compute the higher moments in the same way. The remaining question for Monte Carlo simulation, which is of great importance, concerns the statistical convergence of the computed moments. The number of realizations, N, necessary to achieve convergence depends on the variability level of the permeability and the correlation lengths. Typically, for a heterogeneous field, MCS needs thousands of realizations to achieve convergence, by which we mean the statistical convergence of the moments of both permeability and pressure. Zhang et al. [2] have addressed this convergence issue in detail for different scenarios of variance and correlation length. Their analysis shows that for the case we study here, 9000 realizations lead to converged solutions, so we use N = 9000. The need for such a large number of realizations to obtain reliable statistical moments, especially second moments of pressure and other flow-related quantities, is a huge hurdle for the practical use of Monte Carlo simulation. Here, we use high-resolution MCS to have a nearly exact solution that can be compared with the other methods, especially SRBM [1].

5.5 Statistical Moment Equations

The statistical moment equation approach used by Li et al. [2] serves as an alternative for solving the elliptic pressure equation. In their study, both the K-based and Y-based pressure equations were solved by an SME method, and the results show a better approximation of pressure when using the Y-based pressure equation. In this section we present a summary of the Y-based stochastic moment equations.

As for any random variable, we can write for Y and P:

\[ Y = \langle Y \rangle + Y', \quad \langle Y' \rangle = 0; \qquad P = \langle P \rangle + P', \quad \langle P' \rangle = 0, \tag{5.20} \]

where ⟨·⟩ represents ensemble averaging. Substituting these expressions for Y and P into the discretized Y-based pressure equation and averaging gives:

\[ \langle \mathbf{Y} \rangle \langle \mathbf{P} \rangle + \langle \mathbf{Y}' \mathbf{P}' \rangle = 0. \tag{5.21} \]

By subtracting the above equation from our original Y-based pressure equation, YP = 0, we have

\[ \langle \mathbf{Y} \rangle \mathbf{P}' + \mathbf{Y}' \langle \mathbf{P} \rangle + \mathbf{Y}'\mathbf{P}' - \langle \mathbf{Y}'\mathbf{P}' \rangle = 0, \tag{5.22} \]

where ⟨Y⟩ and Y′ are the matrices representing the discretization of the mean and perturbation of log-permeability, respectively; ⟨P⟩ and P′ are the vectors of the mean and perturbation of pressure. To obtain the equation for C_YP, we multiply the above equation by Y′ at a different location and take the expectation, which gives

\[ \langle \mathbf{Y} \rangle C_{YP} + \langle \mathbf{P} \rangle C_{YY} + \langle \mathbf{Y}'\mathbf{Y}'\mathbf{P}' \rangle = 0, \tag{5.23} \]

where C_YP = ⟨Y′P′⟩ is the cross-covariance of log-permeability and pressure, and C_YY is the matrix representing the discretized derivative of the covariance of Y with every point in the domain. In a similar way, we can multiply the equation by P′ at a different location and average to reach

\[ \langle \mathbf{Y} \rangle C_{PP} + \langle \mathbf{P} \rangle C_{PY} + \langle \mathbf{P}'\mathbf{Y}'\mathbf{P}' \rangle = 0. \tag{5.24} \]

Here C_PY is the matrix of the discretized derivative of the cross-covariance of P and Y. The last two equations are the discrete moment equations of pressure that correspond to the following equations in continuous differential form:

\[ \frac{\partial^2 C_P(x,\chi)}{\partial x_i^2} + \frac{\partial C_P(x,\chi)}{\partial x_i}\frac{\partial \langle Y(x) \rangle}{\partial x_i} + \frac{\partial C_{PY}(x,\chi)}{\partial x_i}\frac{\partial \langle P(x) \rangle}{\partial x_i} + \left\langle P'(\chi)\,\frac{\partial Y'(x)}{\partial x_i}\frac{\partial P'(x)}{\partial x_i} \right\rangle = 0, \tag{5.25} \]

and

\[ \frac{\partial^2 C_{YP}(x,\chi)}{\partial \chi_i^2} + \frac{\partial C_{YP}(x,\chi)}{\partial \chi_i}\frac{\partial \langle Y(\chi) \rangle}{\partial \chi_i} + \frac{\partial C_{YY}(x,\chi)}{\partial \chi_i}\frac{\partial \langle P(\chi) \rangle}{\partial \chi_i} + \left\langle Y'(x)\,\frac{\partial Y'(\chi)}{\partial \chi_i}\frac{\partial P'(\chi)}{\partial \chi_i} \right\rangle = 0, \tag{5.26} \]

where C_YP(x,χ) = ⟨Y′(x)P′(χ)⟩ and C_PP(x,χ) = ⟨P′(x)P′(χ)⟩. Here x represents any spatial location in the two-dimensional domain, and χ is a reference point. In the above equations, summation over the number of dimensions is implied. Comparison between the continuous and discrete forms shows that ⟨Y′Y′P′⟩ is the discrete form of ⟨Y′(x) ∂Y′(χ)/∂χ_i ∂P′(χ)/∂χ_i⟩ and ⟨P′Y′P′⟩ is the discrete form of ⟨P′(χ) ∂Y′(x)/∂x_i ∂P′(x)/∂x_i⟩. The approximation used in the statistical moment equations is to neglect the higher-order moments. For example, in the derived equations governing the second moments, we drop the terms that depend on the third moments, leading to

\[ \frac{\partial^2 C_P(x,\chi)}{\partial x_i^2} + \frac{\partial C_P(x,\chi)}{\partial x_i}\frac{\partial \langle Y(x) \rangle}{\partial x_i} + \frac{\partial C_{PY}(x,\chi)}{\partial x_i}\frac{\partial \langle P(x) \rangle}{\partial x_i} = 0, \tag{5.27} \]

and

\[ \frac{\partial^2 C_{YP}(x,\chi)}{\partial \chi_i^2} + \frac{\partial C_{YP}(x,\chi)}{\partial \chi_i}\frac{\partial \langle Y(\chi) \rangle}{\partial \chi_i} + \frac{\partial C_{YY}(x,\chi)}{\partial \chi_i}\frac{\partial \langle P(\chi) \rangle}{\partial \chi_i} = 0. \tag{5.28} \]

If we do the same for the discretized version of these equations, we arrive at the following system, which describes the low-order statistical moment equations that are solved numerically:

\[ \langle \mathbf{Y} \rangle C_{YP} + \langle \mathbf{P} \rangle C_{YY} = 0, \qquad \langle \mathbf{Y} \rangle C_{PP} + \langle \mathbf{P} \rangle C_{PY} = 0. \tag{5.29} \]

The above equations can be solved to approximate the moments of the pressure process in space. The main routine of the statistical moment equation method for solving the elliptic pressure equation is as follows: (1) Solve for ⟨P⟩ in the equation ⟨Y⟩⟨P⟩ + ⟨Y′P′⟩ = 0. Since we do not know the term ⟨Y′P′⟩, we assume here that it is zero, and come back to update it within an iterative algorithm. (2) The coupled equations (5.29) contain two equations and two unknowns, C_YP and C_PP. We can easily solve this system and compute them, noting that C_PY is the transpose of C_YP. (3) Usually, there is no need to update the mean equation many times, but we may return to step (1) and solve again for ⟨P⟩, knowing that ⟨Y′P′⟩ = C_YP. This new ⟨P⟩ undergoes the same procedure as in step (2) to update C_YP and C_PP, and so on. Convergence is usually obtained in a few iterations. As we will discuss later, the accuracy of the SME method for small variance and rather large correlation length is fairly good, but as the reservoir becomes more heterogeneous, the errors in the computed moments increase, since the higher-order terms that we ignored play a more important role.

5.6 Stochastic Reduced Basis Method

In this section we implement the SRBM algorithms [1,17] for our particular problem. Here, we apply SRBM to the Y-based pressure equation, as it is faster than SRBM for the K-based

pressure equation. There is actually a subtle point behind this fact: since we assume K is a lognormal random process, the semi-discretized form would be nonlinear with respect to θ, so we would have to use a polynomial chaos expansion of higher order to represent K. However, Y is a Gaussian random process, and only first-order terms are retained in its polynomial chaos expansion. Hence, we apply SRBM to the Y-based pressure equation and approximate the response process. The first step is to discretize the random field Y(x; ω) using a Karhunen-Loeve (KL) expansion as follows:

\[ Y(x, y) = \langle Y(x, y) \rangle + \sum_{i=1}^{N} \theta_i \Phi_i(x), \tag{5.30} \]

where ⟨Y(x, y)⟩ is the mean of the random field, and the θ_i are uncorrelated random variables with zero mean and unit variance. Since they are uncorrelated Gaussian random variables, they are also independent; this is due to a special property of Gaussian random variables. Also, Φ_i(x) = √λ_i h_i(x), where λ_i and h_i(x) are the i-th eigenvalue and eigenfunction of the integral problem

\[ \int_D C(x_1, x_2)\, h_i(x_1)\, dx_1 = \lambda_i\, h_i(x_2), \tag{5.31} \]

where C(x_1, x_2) denotes the autocovariance function of Y(x; ω), which is assumed to be

\[ C_Y(\Delta x, \Delta y) = \sigma_Y^2 \exp\left(-\sqrt{\frac{(x_1 - x_2)^2}{\lambda_x^2} + \frac{(y_1 - y_2)^2}{\lambda_y^2}}\right). \tag{5.32} \]

As mentioned in Chapter 2, for this special covariance function the integral eigenvalue problem cannot be solved analytically, unlike the 1-D problem. However, a numerical method can easily be devised to solve it. An alternative approach is to first discretize the continuous equation spatially using any conventional technique, such as a Finite Element Method (FEM).

This semi-discretization procedure essentially involves treating the discrete nodal values of the field variable Y(x; ω) as random variables. Here, we choose the nodal points on the interfaces of our grid blocks. The discrete random process Y(x; ω), comprised of the interfacial log-permeabilities, is depicted in Fig. (5.2).

Figure 5.2: Defining permeability of nodal points as the discrete form of the continuous permeability field.

Let H be the total number of interfacial random variables; then for each d, 1 ≤ d ≤ H, we can write the Karhunen-Loeve expansion as

\[ Y_d(\theta) = \langle Y_d \rangle + \sum_{k=1}^{M} \theta_k \Phi_d^k, \tag{5.33} \]

where ⟨Y_d⟩ is the mean of the random field at location d, the θ_k are uncorrelated Gaussian random variables, and Φ_d^k = √λ^k h_d^k, where λ^k denotes the k-th eigenvalue and h_d^k is the d-th element of the k-th eigenvector of the autocovariance matrix C_Y(H, H), so that

\[ C_Y h = \lambda h. \tag{5.34} \]
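The discrete KL construction of Eqs. (5.33)–(5.34) amounts to an eigendecomposition of the H × H autocovariance matrix, with each mode scaled by the square root of its eigenvalue. A minimal sketch (the function name and 1-D test covariance are illustrative, not the thesis code):

```python
import numpy as np

def kl_modes(C, p1):
    """Return the p1 dominant discrete KL modes of covariance matrix C,
    with Phi[:, k] = sqrt(lambda_k) h_k as in Eqs. (5.33)-(5.34)."""
    lam, H = np.linalg.eigh(C)             # eigh returns ascending eigenvalues
    order = np.argsort(lam)[::-1][:p1]     # keep the p1 largest
    lam, H = lam[order], H[:, order]
    Phi = H * np.sqrt(np.clip(lam, 0.0, None))
    return Phi, lam
```

Retaining all H modes reproduces the covariance exactly, since Phi Phi^T = H diag(lambda) H^T = C; truncating to p1 < H keeps the dominant energy, which is the basis of the selection criterion discussed below.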

From all of the eigenvalues, we choose a subset with P_1 elements, {λ^1, λ^2, λ^3, ..., λ^{P_1}}, which captures most of the energy of C_Y. We can write the Karhunen-Loeve expansion with the d-th element replaced by its corresponding (i, j) location; that is,

\[ Y_{(i,j)}(\theta) = \langle Y_{(i,j)} \rangle + \sum_{k=1}^{P_1} \theta_k \Phi_{(i,j)}^k. \tag{5.35} \]

This equation is our approximation of the input random field based on the spectral decomposition. Plugging this KL representation of Y_{(i,j)} into the discretized Y-based pressure equation for interior nodes gives

\[ \left(P_{i+1,j} + P_{i-1,j} + P_{i,j+1} + P_{i,j-1} - 4P_{i,j}\right) + \tfrac{1}{2}\left(\langle Y_{i+1/2,j} \rangle - \langle Y_{i-1/2,j} \rangle + \sum_{k=1}^{P_1} \theta_k\left(\Phi_{i+1/2,j}^k - \Phi_{i-1/2,j}^k\right)\right)\left(P_{i+1,j} - P_{i-1,j}\right) + \tfrac{1}{2}\left(\langle Y_{i,j+1/2} \rangle - \langle Y_{i,j-1/2} \rangle + \sum_{k=1}^{P_1} \theta_k\left(\Phi_{i,j+1/2}^k - \Phi_{i,j-1/2}^k\right)\right)\left(P_{i,j+1} - P_{i,j-1}\right) = 0. \tag{5.36} \]

The no-flow boundary condition at y = 0 is accounted for as follows:

\[ \left(P_{i+1,j} + P_{i-1,j} + 2P_{i,j+1} - 4P_{i,j}\right) + \tfrac{1}{2}\left(\langle Y_{i+1/2,j} \rangle - \langle Y_{i-1/2,j} \rangle + \sum_{k=1}^{P_1} \theta_k\left(\Phi_{i+1/2,j}^k - \Phi_{i-1/2,j}^k\right)\right)\left(P_{i+1,j} - P_{i-1,j}\right) = 0. \tag{5.37} \]

Finally, the resulting equations can be compactly written in matrix form as

\[ \left(\mathbf{Y}_0 + \sum_{k=1}^{P_1} \mathbf{Y}_k \theta_k\right)\mathbf{P} = \mathbf{f}. \tag{5.38} \]

We now have a linear random algebraic system of equations, which can be solved via SRBM by projecting onto a Krylov subspace. We apply SRBM to this equation and solve for pressure. The post-processing procedure to obtain the pressure moments is presented in the last chapter.
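To illustrate the projection behind Eq. (5.38): for a fixed sample of θ, the preconditioned Krylov basis can be generated by the recursion u_0 = Y_0^{-1} f, u_j = Y_0^{-1} ΔY u_{j-1} with ΔY = Σ_k θ_k Y_k, followed by a Galerkin solve of a small reduced system. This is a simplified deterministic analogue of the stochastic algorithm of Chapter 3, in which the basis vectors are themselves random; function names are illustrative.

```python
import numpy as np

def srbm_sample_solution(Y0, Yks, f, theta, m):
    # Perturbation of the mean operator at this sample of theta
    dY = sum(t * Yk for t, Yk in zip(theta, Yks))
    # Preconditioned Krylov basis: u0 = Y0^{-1} f, u_j = Y0^{-1} dY u_{j-1}
    u = np.linalg.solve(Y0, f)
    basis = [u]
    for _ in range(m - 1):
        u = np.linalg.solve(Y0, dY @ u)
        basis.append(u)
    B = np.column_stack(basis)                  # M x m reduced basis
    A = Y0 + dY
    # Galerkin projection: solve the small m x m system for the coefficients
    alpha = np.linalg.solve(B.T @ A @ B, B.T @ f)
    return B @ alpha
```

When m equals the full dimension and the basis has full rank, the projected solution reproduces the direct solve; the efficiency of SRBM comes from the observation that a few basis vectors already capture the solution well.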

5.7 Probabilistic Collocation Method

Another method that is also based on a spectral decomposition of the field is the Probabilistic Collocation Method (PCM) in conjunction with the KL expansion, introduced by D. Zhang [33]. The algorithm is quite similar to SRBM except for the projection scheme. In SRBM, one uses a Krylov subspace basis, which can be shown to span the solution process [17]. In PCM, however, the test functions are chosen as Dirac delta functions centered on different points, called collocation points. To briefly explain the method, assume that the pressure equation has the form

\[ \left(\mathbf{Y}_0 + \sum_{k=1}^{P_1} \mathbf{Y}_k \Gamma_k(\theta)\right)\mathbf{P} = \mathbf{f}, \tag{5.39} \]

where the pressure can be written as

\[ \mathbf{P} = \sum_{i=1}^{P} \Gamma_i(\theta)\,\alpha_i, \tag{5.40} \]

where the Γ_i(θ) form the basis for the solution space (see Chapter 3). To find the coefficients α_i, we apply a Galerkin projection scheme:

\[ \left\langle \left[\left(\mathbf{Y}_0 + \sum_{k=1}^{P_1} \mathbf{Y}_k \Gamma_k(\theta)\right)\sum_{i=1}^{P} \Gamma_i(\theta)\,\alpha_i - \mathbf{f}\right] \Psi_j(\theta) \right\rangle = 0, \qquad j = 1, 2, \ldots, P. \tag{5.41} \]

In PCM, the test functions are Dirac functions of the form

\[ \Psi_j(\theta) = \delta(\theta - \theta_j), \tag{5.42} \]

where θ_j is a particular set of points (selected with a certain algorithm [5]) out of the random vector θ. The elements of θ_j are called collocation points. The Galerkin form of Equation (5.41) is then given by [5,16]:

\[ \left(\mathbf{Y}_0 + \sum_{k=1}^{P_1} \mathbf{Y}_k \Gamma_k(\theta_j)\right)\sum_{i=1}^{P} \Gamma_i(\theta_j)\,\alpha_i = \mathbf{f}, \qquad j = 1, 2, \ldots, P, \tag{5.43} \]

which results in a set of independent equations, evaluated at the given sets of collocation points θ_j, j = 1, 2, ..., P. The number of collocation points equals the number of basis functions and also the number of independent equations that must be solved for the pressure coefficients. The algorithm commonly used for selecting the collocation points is somewhat similar to that for selecting integration points in Gaussian quadrature [5]. The performance of PCM depends strongly on the choice of collocation points. One particular scheme is to select the collocation points for a given order of polynomials from the roots of the next-higher-order orthogonal polynomial for each uncertain parameter [33]. If the polynomial order is d, then the number of available collocation points is (d + 1)^{P_1}, which is always larger than the number of collocation points needed. For example, the third-order Hermite polynomial is H_3(ξ) = ξ³ - 3ξ, and the three roots of this polynomial are 0, √3, and -√3. The P sets of collocation points are chosen from all combinations of these roots. For example, in order to solve for a two-dimensional second-order polynomial chaos expansion, the possible collocation points are (0,0), (√3,0), (-√3,0), (0,√3), (0,-√3), (√3,√3), (√3,-√3), (-√3,√3), and (-√3,-√3). In fact, as the number of degrees of freedom increases, the number of available collocation points increases exponentially. For example, for the case of P_1 = 6, the number of collocation points required for second- and third-order expansions is 28 and 84, respectively.
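The candidate collocation points described above can be enumerated directly. The sketch below uses NumPy's probabilists' Hermite (He) Gauss nodes, which for degree d + 1 are exactly the roots of He_{d+1} (e.g., 0 and ±√3 for He_3); the function name is illustrative.

```python
import numpy as np
from itertools import product

def candidate_points(order, n_vars):
    # Roots of the probabilists' Hermite polynomial one degree above the
    # expansion order; candidates are all combinations over the n_vars
    # random variables, giving (order + 1) ** n_vars points in total.
    roots = np.polynomial.hermite_e.hermegauss(order + 1)[0]
    return list(product(roots, repeat=n_vars))
```

For a two-variable second-order expansion this yields the nine points listed above; for P_1 = 6 it yields the 3^6 = 729 and 4^6 candidate sets discussed next.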
Since the collocation points are selected from combinations of the roots of Hermite polynomials one order higher, the numbers of collocation points available for second- and third-order expansions are 3^6 = 729 and 4^6 = 4096, respectively. There are three roots for a third-order Hermite polynomial (used for obtaining collocation points for a second-order expansion), and there are six variables; hence, the number of possible collocation points is 3^6, and similarly 4^6 for a third-order expansion. As the number of inputs and the order of expansion increase, the

number of available collocation points increases exponentially. In principle, any choice of collocation points from the available ones should give adequate estimates of the polynomial chaos expansion coefficients. However, different combinations of collocation points may result in substantially different estimates of the output PDF; this poses the problem of the optimal selection of collocation points. Furthermore, in a computational setting, some of the collocation points could be outside the range of algorithmic or numerical applicability of the model, and results at these points cannot be obtained [49].

5.8 Results and Discussion

In this section, we compare the results of SRBM simulation with existing methods. First, we compare the SRBM results with the Statistical Moment Equations (SME), which are of a totally different nature: SRBM entails a spectral decomposition of the field, while the SME method is based on a perturbation approach. Second, we compare the results of SRBM with PCM. Both SRBM and PCM decompose the input field with a Karhunen-Loeve expansion into a number of random variables; the only difference between them is the way the solution process is expressed. SRBM uses a preconditioned Krylov subspace to represent the solution process, and we build our space using a polynomial chaos expansion, as explained in Chapter 3. PCM, however, chooses a number of collocation points in the domain and selects a finite-dimensional set of candidate solutions that satisfy the given equation at the collocation points. In the first subsection, we examine the idea of spectral decomposition used in SRBM, and in the second we compare the preconditioned Krylov subspace with other projection schemes.

5.8.1 Stochastic Reduced Basis Method vs. Statistical Moment Equations

Before we delve into a detailed comparison of SRBM and SME, we discuss issues related to the degrees of freedom in SRBM. As noted in Chapter 3, SRBM uses a Karhunen-Loeve expansion to decompose the permeability field and a Krylov subspace as the projection scheme, which entails a recursive algorithm to build the space from polynomial chaos expansions. We have four parameters (degrees of freedom) in SRBM: (1) The number of random variables in the Karhunen-Loeve expansion: the number of random variables representing the KL expansion, which is essentially the number of retained eigenvalues, plays an important role in our problem. (2) The degree of the polynomial chaos expansion representing the KL expansion: for problems where the stiffness matrix has a nonlinear dependence on the input random field, we need to apply a PC expansion, as discussed in Chapter 2. The number of terms is taken equal to the number of random variables, but we are free to choose the degree of this polynomial [1]. (3) The number of Krylov subspace basis vectors used to project the solution process. (4) The degree of the polynomial chaos expansion representing the Krylov subspace basis: in the previous chapter we constructed the basis of the Krylov subspace using a recursive algorithm that employs a polynomial chaos expansion; the number of terms retained in this expansion adds another degree of freedom. Obviously, if we increase the number of eigenvalues in the KL expansion and the number of Krylov subspace basis vectors, we get closer to the exact solution, but at the cost of computational efficiency. The trick is to choose reasonable values of these four degrees of freedom for each problem.
In our problem, we let the inlet pressure be P_0 = 5 and the outlet pressure P_1 = 0; the permeability is log-normal with zero-mean log-permeability, E{Y} = 0, and the domain is [0, 1] × [0, 1],

which is discretized using square cells. The other two boundary conditions are no-flow at y = 0 and y = 1. In the following study, we discuss different variances (σ²_lnK) and correlation lengths of the log-permeability (λ = λ_x = λ_y). MCS and SME simulations are straightforward to run, but for the stochastic reduced basis method we need to decide on the degrees of freedom mentioned earlier. Due to the Gaussian nature of the log-permeability, it is reasonable to take the degree of the polynomial chaos expansion for the Karhunen-Loeve expansion to be one, because all the θ's are Gaussian, and any linear combination of Gaussian random variables is still Gaussian. For the number of Krylov subspace basis vectors, we examine two cases, namely 4 and 5, and in the case of very high variances we use 6 vectors. The degree of the polynomial expansion for the Krylov subspace basis is 1 for low-variability and 2 for high-variability input (permeability) fields. The only remaining challenge is to decide on the number of eigenvalues to retain in the KL expansion. One measure often used is to express them in terms of energy, namely,

\[ E_s = \frac{\sum_{i=1}^{s} \lambda_i}{\sum_{i=1}^{N} \lambda_i}. \tag{5.44} \]

If the retained eigenvalues capture a reasonable amount of the energy of the heterogeneous permeability field, we expect reasonable approximations of the output. However, in highly variable permeability fields, some small eigenvalues, which are usually ignored by the energy measure, may play an important role. A better way of selecting the eigenvalues is a frequency measure, which analyzes the frequency and amplitude of the associated eigenvectors; this is beyond the scope of this study. Here, we increase the number of eigenvalues up to twenty at most, and we analyze how the results compare with the exact (i.e., MCS) solution.
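Eq. (5.44) translates directly into a rule for the number of retained modes; a small helper (the name is illustrative):

```python
import numpy as np

def n_modes_for_energy(lam, frac):
    """Smallest s such that E_s = sum(lam[:s]) / sum(lam) >= frac, Eq. (5.44)."""
    lam = np.sort(np.asarray(lam))[::-1]     # sort eigenvalues descending
    E = np.cumsum(lam) / lam.sum()           # cumulative energy fractions E_s
    return int(np.searchsorted(E, frac) + 1)
```

For example, with eigenvalues 4, 3, 2, 1 the cumulative energy fractions are 0.4, 0.7, 0.9, and 1.0, so an 80% energy target requires three modes.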
We start with the case where the variance of log-permeability is unity (σ²_Y = 1) and the dimensionless correlation length is 0.1 (λ = 0.1). We use an SRBM of order 5 and a PC

expansion of order one to represent the basis of the subspace. Figure 5.3 shows the map of eigenvalues in terms of their energy over the total energy.

Figure 5.3: Energy map of the eigenvalues for a variance of log-permeability σ²_Y = 1 and a correlation length of 0.1.

Here, if we choose the ten largest eigenvalues based on the energy measure, we retain about 80% of the energy of the covariance, which is reasonable. In Fig. 5.4, we compare the results of SRBM for different numbers of eigenvalues. Increasing the number of eigenvalues leads to a better match with MCS, but obviously the computational cost increases. For our problem, however, choosing between six and ten eigenvalues gives a result within 10% of the exact (i.e., MCS) solution. In each simulation, we compute the residual error from Equation (4.36). If it is not close enough to zero, we either increase the number of SRBM projection basis vectors or increase the order of the PC expansion representing them. For most cases, four basis vectors and a first-order PC expansion work well, which makes SRBM accurate yet efficient. In Figure 5.5, we compare SRBM with SME for other correlation lengths (λ = 0.2, 0.4, 0.5) when the variance of log-permeability is one (σ²_Y = 1). Note that the maximum pressure

Figure 5.4: The effect of increasing the number of eigenvalues on the pressure-variance prediction along the x direction at y = L/2, using SRBM for uniform mean flow with a variance of one.

variance is significantly smaller than the variance level of the log-permeability. This attenuation of the statistical moments of the dependent variable relative to the input variance is an important characteristic of the flow problem [2] and deserves detailed analysis for the heterogeneity models and flow settings of practical interest [50]. Examination of the MCS results in Fig. 5.5 indicates that for a domain of a given size, the overall level of pressure variance increases significantly as the correlation length increases from 0.1 to 0.2. Increasing the correlation scale further (see Fig. 5.6) does not appear to change the overall response significantly. The results in Fig. 5.5, where σ²_Y = 1 and λ_Y/L = 0.1, suggest that the prediction of the pressure variance obtained using SRBM with eighteen eigenvalues (N_λ = 18) and five basis vectors (N_b = 5) leads to an almost perfect match with the results obtained

Figure 5.5: Comparison of SRBM pressure-variance results with Monte Carlo simulation and SME along the x direction at y = 0.5, for uniform mean flow with σ²_lnK = 1 and different correlation lengths: (a) λ_Y/L = 0.1, (b) λ_Y/L = 0.2, (c) λ_Y/L = 0.4, (d) λ_Y/L = 0.5.

using MCS. However, it also shows that the discrepancy between MCS and SRBM (N_λ = 18, N_b = 5) increases slightly as the correlation length increases. For large correlation scales, the discrepancy between MCS and SRBM (N_λ = 18, N_b = 5) remains below 10%. Nonetheless, if we increase the number of eigenvalues, for example with SRBM (N_λ = 25, N_b = 5), we can get a result with less than 5% error compared with MCS. Alternatively, as explained earlier in this chapter, we can improve the SRBM results by using a second-order PC expansion for the Krylov subspace basis. Examination of the MCS results, which we take as the reference, shown in the preceding figures, indicates

72 CHAPTER 5. NUMERICAL STUDIES 64 that for a particular correlation length, the variance of pressure throughout the domain increases with σy 2. The variance of pressure is a measure of the uncertainty associated with predictions of the pressure distribution in the area of interest. These results imply that as the input variance, σy 2, increases, the level of uncertainty in the obtained response, pressure in this case, also increases. Figure 5.6: Energy map of the eigenvalues for a variance of log-permeability, σy 2, of 3 and a correlation length of 0.1 Intuitively, the energy map of eigenvalues can explain this increase in the pressure variance. Fig. 5.6 shows the energy map of the eigenvalues where the variance of log-permeability is three (σ 2 Y = 3) and the correlation length is 0.1. Comparing this figure with Fig 5.3 indicates that for a certain of amount of E s, the number of retained eigenvalues need to be increased as the variance increases. Fig. 5.6 reveals that to capture around 80% of the total energy, we should retain more than one hundred eigenvalues. This is not feasible. Instead we work with a lower amount of captured energy from the permeability field and accept more discrepancy from the exact solution. We do the same analysis we did for the case σ 2 lnk = 1 and study the effect of choosing different numbers of eigenvalues on the obtained pressure variance, when the
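The eigenvalue-selection trade-off discussed above — retaining the smallest number of Karhunen-Loève modes that captures a target energy fraction E_s — can be sketched numerically. This is an illustrative sketch, not code from this report: it assumes a 1-D exponential covariance on a unit domain and approximates the KL eigenvalues by eigendecomposition of the discretized covariance matrix; the function names (`kl_energy_map`, `modes_for_energy`) and grid parameters are hypothetical.

```python
import numpy as np

def kl_energy_map(corr_len, n_grid=200, domain=1.0):
    """KL eigenvalues of a 1-D exponential covariance C(x, x') = exp(-|x - x'|/corr_len),
    approximated by eigendecomposition of the covariance matrix on a uniform grid
    (a simple Nystrom-type discretization with equal quadrature weights)."""
    x = np.linspace(0.0, domain, n_grid)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    w = domain / n_grid                          # quadrature weight per grid cell
    eigvals = np.linalg.eigvalsh(w * C)[::-1]    # eigvalsh returns ascending; reverse
    return np.clip(eigvals, 0.0, None)           # clip tiny negative round-off values

def modes_for_energy(eigvals, target=0.8):
    """Smallest number of retained eigenvalues capturing `target` of the total energy E_s."""
    frac = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(frac, target) + 1)

# Shorter correlation scales spread the energy over many more modes, so many more
# eigenvalues must be retained to reach the same captured-energy level.
n_long = modes_for_energy(kl_energy_map(corr_len=0.5), target=0.8)
n_short = modes_for_energy(kl_energy_map(corr_len=0.05), target=0.8)
```

Running the sketch shows `n_short` far exceeding `n_long`, mirroring the chapter's observation that flatter eigenvalue spectra force either a large N_λ or acceptance of a lower captured-energy level E_s.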
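The below-10% and below-5% statements above presuppose a discrepancy metric between the SRBM and MCS variance profiles. The report does not spell the metric out; one plausible choice, sketched here with synthetic stand-in profiles (all names and data below are illustrative, not from the report), normalizes the maximum pointwise difference by the peak reference variance.

```python
import numpy as np

def max_relative_discrepancy(var_ref, var_approx):
    """Largest pointwise difference between an approximate variance profile and a
    reference (e.g. MCS) profile, normalized by the peak reference variance. The
    peak normalization avoids dividing by near-zero variances at the boundaries."""
    var_ref = np.asarray(var_ref, dtype=float)
    var_approx = np.asarray(var_approx, dtype=float)
    return float(np.max(np.abs(var_approx - var_ref)) / np.max(var_ref))

# Synthetic illustration: a reference profile and a uniformly biased approximation.
x = np.linspace(0.0, 1.0, 50)
var_mcs = x * (1.0 - x)           # stand-in for an MCS pressure-variance profile
var_srbm = 1.05 * var_mcs         # stand-in for an SRBM estimate running 5% high
err = max_relative_discrepancy(var_mcs, var_srbm)
```

With these synthetic inputs `err` evaluates to about 0.05, i.e. within the 10% band quoted for SRBM (N_λ = 18, N_b = 5) at large correlation scales.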


More information

Microstructurally-Informed Random Field Description: Case Study on Chaotic Masonry

Microstructurally-Informed Random Field Description: Case Study on Chaotic Masonry Microstructurally-Informed Random Field Description: Case Study on Chaotic Masonry M. Lombardo 1 J. Zeman 2 M. Šejnoha 2,3 1 Civil and Building Engineering Loughborough University 2 Department of Mechanics

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Dinesh Kumar, Mehrdad Raisee and Chris Lacor

Dinesh Kumar, Mehrdad Raisee and Chris Lacor Dinesh Kumar, Mehrdad Raisee and Chris Lacor Fluid Mechanics and Thermodynamics Research Group Vrije Universiteit Brussel, BELGIUM dkumar@vub.ac.be; m_raisee@yahoo.com; chris.lacor@vub.ac.be October, 2014

More information

Notes Regarding the Karhunen Loève Expansion

Notes Regarding the Karhunen Loève Expansion Notes Regarding the Karhunen Loève Expansion 1 Properties of the Karhunen Loève Expansion As noted in Section 5.3 of [3], the Karhunen Loève expansion for a correlated second-order random field α(x, ω),

More information

Kernel Principal Component Analysis

Kernel Principal Component Analysis Kernel Principal Component Analysis Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr

More information

EXAM MATHEMATICAL METHODS OF PHYSICS. TRACK ANALYSIS (Chapters I-V). Thursday, June 7th,

EXAM MATHEMATICAL METHODS OF PHYSICS. TRACK ANALYSIS (Chapters I-V). Thursday, June 7th, EXAM MATHEMATICAL METHODS OF PHYSICS TRACK ANALYSIS (Chapters I-V) Thursday, June 7th, 1-13 Students who are entitled to a lighter version of the exam may skip problems 1, 8-11 and 16 Consider the differential

More information

Dimensionality reduction of parameter-dependent problems through proper orthogonal decomposition

Dimensionality reduction of parameter-dependent problems through proper orthogonal decomposition MATHICSE Mathematics Institute of Computational Science and Engineering School of Basic Sciences - Section of Mathematics MATHICSE Technical Report Nr. 01.2016 January 2016 (New 25.05.2016) Dimensionality

More information

Linear Algebra in Hilbert Space

Linear Algebra in Hilbert Space Physics 342 Lecture 16 Linear Algebra in Hilbert Space Lecture 16 Physics 342 Quantum Mechanics I Monday, March 1st, 2010 We have seen the importance of the plane wave solutions to the potentialfree Schrödinger

More information

EFFICIENT SHAPE OPTIMIZATION USING POLYNOMIAL CHAOS EXPANSION AND LOCAL SENSITIVITIES

EFFICIENT SHAPE OPTIMIZATION USING POLYNOMIAL CHAOS EXPANSION AND LOCAL SENSITIVITIES 9 th ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability EFFICIENT SHAPE OPTIMIZATION USING POLYNOMIAL CHAOS EXPANSION AND LOCAL SENSITIVITIES Nam H. Kim and Haoyu Wang University

More information

Probabilistic collocation and Lagrangian sampling for tracer transport in randomly heterogeneous porous media

Probabilistic collocation and Lagrangian sampling for tracer transport in randomly heterogeneous porous media Eidgenössische Technische Hochschule Zürich Ecole polytechnique fédérale de Zurich Politecnico federale di Zurigo Swiss Federal Institute of Technology Zurich Probabilistic collocation and Lagrangian sampling

More information

UNIVERSITY OF CALIFORNIA, SAN DIEGO. An Empirical Chaos Expansion Method for Uncertainty Quantification

UNIVERSITY OF CALIFORNIA, SAN DIEGO. An Empirical Chaos Expansion Method for Uncertainty Quantification UNIVERSITY OF CALIFORNIA, SAN DIEGO An Empirical Chaos Expansion Method for Uncertainty Quantification A Dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy

More information