Comput. Methods Appl. Mech. Engrg. 198 (2009) 985–995

Contents lists available at ScienceDirect. Comput. Methods Appl. Mech. Engrg. Journal homepage: www.elsevier.com/locate/cma

Solving elliptic problems with non-Gaussian spatially-dependent random coefficients

Xiaoliang Wan, George Em Karniadakis *
Division of Applied Mathematics, Brown University, Providence, RI 02912, USA
* Corresponding author. E-mail address: gk@dam.brown.edu (G.E. Karniadakis).

Article history: Received 7 March 2008; received in revised form 9 September 2008; accepted 3 December 2008; available online 24 January 2009.

Keywords: Polynomial chaos method; Monte Carlo method; Elliptic partial differential equation; Karhunen–Loève expansion; Uncertainty quantification

Abstract. We propose a simple and effective numerical procedure for solving elliptic problems with non-Gaussian random coefficients, assuming that samples of the non-Gaussian random inputs are available from a statistical model. Given a correlation function, the Karhunen–Loève (K–L) expansion is employed to reduce the dimensionality of the random inputs. Using the kernel density estimation technique, we obtain the marginal probability density functions (PDFs) of the random variables in the K–L expansion, based on which we define an auxiliary joint PDF. We then implement the generalized polynomial chaos (gPC) method via a collocation projection according to the auxiliary joint PDF. Based on the observation that the solution has an analytic extension in the parametric space, we ensure that the polynomial interpolation achieves point-wise convergence in the parametric space regardless of the PDF, where the energy norm is employed in the physical space. Hence, we can sample the gPC solution using the joint PDF instead of the auxiliary one to obtain the correct statistics. We also implement Monte Carlo methods to further refine the statistics, using the gPC solution for variance reduction. Numerical results are presented to demonstrate the efficiency of the proposed approach.

0045-7825/$ - see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.cma.2008.12.039

1. Introduction

Uncertainty quantification has recently received a lot of attention, with many classical deterministic mathematical models being reformulated in the stochastic sense. For example, physical parameters and boundary/initial conditions can be modeled by a random process instead of a deterministic function in the mean sense. In engineering applications, random inputs are often assumed to be Gaussian, both for simplicity and by virtue of the central limit theorem. However, the Gaussian assumption is not always valid, since observation data exhibit distinct non-Gaussian characteristics in many cases. Thus, the simulation of non-Gaussian processes is of great practical importance. Although simulations of Gaussian processes are well established, simulations of non-Gaussian processes are still limited and actively under development. The main difficulty lies in the characterization of the random processes: unlike Gaussian processes, which are determined solely by their first- and second-order probabilistic characteristics, non-Gaussian random processes require the entire family of joint distributions, which is never available in practice.

At present, simulations of non-Gaussian processes are mainly based on memoryless nonlinear transforms of the standard Gaussian process, due to the analytical tractability and availability of Gaussian simulation methods (spectral representation [1], Karhunen–Loève expansion [2], wavelet expansion [3], Fourier–wavelet expansion [4], etc.). Recent attempts utilize the Hermite polynomial chaos method [5,6] by expressing a non-Gaussian process $R(t)$ as
\[
R(t) = \sum_{i=0}^{\infty} a_i H_i(G(t)),
\]
where $H_i$ is the Hermite polynomial of degree $i$ and $G(t)$ is a standard one-dimensional stationary Gaussian process. In [7], the Karhunen–Loève expansion was employed to simulate non-Gaussian processes. All these simulation methods can be coupled directly with Monte Carlo methods to simulate the random response of the system.
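As a concrete illustration of the Hermite-chaos representation above, the following minimal sketch computes the coefficients $a_i$ of the memoryless transform $R = F^{-1}(\Phi(G))$ of a standard Gaussian variable by Gauss–Hermite quadrature. The lognormal target marginal and all function names are assumptions made for illustration, not part of the original paper.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from scipy.stats import norm, lognorm
from math import factorial

def hermite_chaos_coeffs(target_ppf, order, nquad=64):
    """Coefficients a_i of R = sum_i a_i He_i(G) for the memoryless
    transform R = F^{-1}(Phi(G)) of a standard Gaussian G."""
    x, w = hermegauss(nquad)          # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)      # normalize to the standard normal density
    r = target_ppf(norm.cdf(x))       # values of R at the quadrature nodes
    coeffs = []
    for i in range(order + 1):
        e_i = np.zeros(i + 1); e_i[i] = 1.0
        He_i = hermeval(x, e_i)                              # He_i at the nodes
        coeffs.append(np.sum(w * r * He_i) / factorial(i))   # E[R He_i]/E[He_i^2]
    return np.array(coeffs)

# Example: lognormal marginal (an illustrative choice, not from the paper)
a = hermite_chaos_coeffs(lambda p: lognorm.ppf(p, s=0.5), order=6)
print(a)
```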

However, due to their low convergence rate, brute-force Monte Carlo methods are usually not affordable in large-scale simulations. Thus, acceleration techniques for Monte Carlo methods and non-statistical approaches have been developed. One such non-statistical approach is polynomial chaos, which is a spectral expansion of a given random function with respect to a multi-dimensional random variable. In the literature, polynomial chaos is often coupled with the K–L expansion of the random inputs (see [8–15] and references therein). If the random inputs can be expressed by a standard Gaussian process, the polynomial chaos (Hermite-chaos) method can be employed directly to capture the uncertainty propagation, since the random variables in the K–L expansion are mutually independent. However, the following situations can significantly weaken the effectiveness of polynomial chaos methods for non-Gaussian random inputs:

(i) A strongly non-Gaussian process $R(t)$ may require a high-order polynomial chaos expansion with respect to $G(t)$. If $R(t)$ represents the random input, a high-order polynomial chaos expansion of the solution field may also be necessary for convergence, which may not be possible due to the limitation of high dimensionality and complexity.

(ii) When the K–L expansion is employed to simulate the non-Gaussian random process, polynomial chaos or its extensions [9,10,13,15] cannot be used directly, since the random variables in the K–L expansion are uncorrelated but not independent. If independence is assumed, the joint probability density between the random variables cannot be captured.

In the present work, we study an elliptic problem with a random coefficient, which is a typical stochastic model in many physical applications, such as porous media, ground-water systems, etc. In particular, we re-examine the polynomial chaos methods coupled with the K–L expansion; however, we take into account the joint probability density of the uncorrelated random variables in the K–L expansion. We treat the polynomial chaos method as a high-dimensional approximation approach and take advantage of the property that the solution has an analytic extension in the parametric space. We observe that the polynomial chaos interpolation provides $L^\infty$ convergence in the parametric space regardless of the joint PDF. Thus, the polynomial chaos solution can actually be regarded as a statistical model, based on which we can sample the joint PDF to obtain the desired statistics. Accordingly, we sample a known function instead of the original stochastic PDE, which results in substantial computational savings. Furthermore, we can also use the polynomial chaos solution as a control variate for variance reduction when sampling the original stochastic PDE, to efficiently refine the obtained statistics.

This paper is organized as follows: we first present some necessary techniques and analysis results for the model problem given in the next section. We then describe our methodology in Section 3. In Section 4, we analyze the convergence behavior of our methodology. A numerical study is given in Section 5, and we conclude in Section 6 with a brief discussion.

2. Model problem

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of $\Omega$ and $P$ is a proper probability measure. Let $D$ be a bounded, connected, open subset of $\mathbb{R}^d$ $(d = 1, 2, 3)$ with a Lipschitz continuous boundary $\partial D$. We consider the following stochastic elliptic problem as a model problem: find a stochastic function $u: \Omega \times D \to \mathbb{R}$ such that almost surely (a.s.)
\[
-\nabla \cdot \big( a(x,\omega) \nabla u(x,\omega) \big) = f(x) \ \ \text{on } D, \qquad u(x,\omega) = 0 \ \ \text{on } \partial D, \tag{1}
\]
where $f(x)$ is assumed to be a deterministic function for simplicity, and $a(x,\omega)$ is a second-order random process satisfying the following strong ellipticity condition.

Assumption 2.1 (Strong ellipticity condition). Let $a(x,\omega) \in L^\infty(D \times \Omega)$ be strictly positive with lower and upper bounds $a_{\min}$ and $a_{\max}$, respectively,
\[
0 < a_{\min} < a_{\max} \quad \text{and} \quad \Pr\big( a(x,\omega) \in [a_{\min}, a_{\max}]\ \forall x \in D \big) = 1. \tag{2}
\]

Remark 2.2. The strong ellipticity condition can be relaxed: it suffices that there exists a positive number $a_{\min} > 0$ such that $a(x,\omega) > a_{\min}$ almost surely [16]. In this work, we use the strong ellipticity assumption for the discussion of our methodology and present some numerical experiments for the more general case.

2.1. High-dimensional deterministic problem

In practice, the most commonly used (non-Gaussian) random processes are second-order stationary processes with a known correlation function $K_a(x_1, x_2) = \mathbb{E}[(a(x_1,\omega) - \mathbb{E}[a](x_1))(a(x_2,\omega) - \mathbb{E}[a](x_2))]$. Based on the correlation function $K_a$, we can implement the K–L expansion in the form
\[
a(x,\omega) = \mathbb{E}[a](x) + \sum_{i=1}^{\infty} \sqrt{\lambda_i}\, Y_i\, \psi_i(x), \tag{3}
\]
where $\{Y_i\}_{i=1}^{\infty}$ is a set of uncorrelated random variables with zero mean and unit variance, and $\{(\lambda_i, \psi_i(x))\}_{i=1}^{\infty}$ is a set of eigenvalue–eigenfunction pairs satisfying
\[
\int_D K_a(x_1, x_2)\, \psi_i(x_2)\, dx_2 = \lambda_i \psi_i(x_1), \tag{4a}
\]
\[
\int_D \psi_i(x)\, \psi_j(x)\, dx = \delta_{ij}, \tag{4b}
\]
where $\delta_{ij}$ is the Kronecker symbol. Furthermore, the random variables $Y_i$ satisfy
\[
Y_i = \frac{1}{\sqrt{\lambda_i}} \int_D \big( a(x,\omega) - \mathbb{E}[a](x) \big)\, \psi_i(x)\, dx. \tag{5}
\]
The eigenvalue problem (4), in general, does not have analytical solutions, which means that numerical approximation is usually necessary [17–19]. We note here that the $Y_i$ are uncorrelated, but not necessarily independent, for general (non-Gaussian) random inputs.

For numerical implementation, we need to truncate the K–L expansion at $M$ terms,
\[
a_M(x,\omega) = \mathbb{E}[a](x) + \sum_{i=1}^{M} \sqrt{\lambda_i}\, Y_i\, \psi_i(x), \tag{6}
\]
according to the $L^2$ convergence
\[
\mathbb{E}\!\left[ \int_D \big( a(x,\omega) - a_M(x,\omega) \big)^2\, dx \right] = \sum_{i > M} \lambda_i \to 0 \quad \text{as } M \to \infty. \tag{7}
\]
It is known that the K–L expansion is optimal in the $L^2$ sense. We rewrite problem (1) as
\[
-\nabla \cdot \big( a_M(x, Y) \nabla u(x, Y) \big) = f(x) \ \ \text{on } D, \qquad u(x, Y) = 0 \ \ \text{on } \partial D, \tag{8}
\]
where $Y = (Y_1, \ldots, Y_M)$ is a multi-dimensional random variable. The solution $u(x, Y)$ is then determined by a finite number of random variables $Y_i$ according to the Doob–Dynkin lemma [20]. Due to the strong ellipticity condition, $Y$ must be bounded. From Eq. (5) we have
\[
|Y_i| \le \frac{a_{\max} - a_{\min}}{\sqrt{\lambda_i}} \int_D |\psi_i(x)|\, dx \le \frac{(a_{\max} - a_{\min})\, \mathrm{Vol}(D)^{1/2}}{\sqrt{\lambda_i}}.
\]
Without loss of generality, we can assume that $Y \in C = \prod_{i=1}^{M} C_i$, where $C \subset \mathbb{R}^M$ is compact. We denote $r_i = \sqrt{\lambda_i}\, \|\psi_i(x)\|_{L^\infty(D)}$, $i = 1, \ldots, M$. It is obvious that the value of $M$ is determined by the decay rate of the eigenvalues $\lambda_i$, which, in general, relies on the regularity of the correlation function $K_a(x_1, x_2)$. For more discussion of the decay rate of the $\lambda_i$ we refer to [13].
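A minimal numerical sketch of the truncated K–L expansion (6) and of sampling the $Y_i$ via Eq. (5): the covariance is discretized on a uniform grid and its eigenpairs computed directly (a simple Nyström-type approximation). The exponential covariance used here and all helper names are illustrative assumptions, not part of the paper.

```python
import numpy as np

def kl_eigenpairs(cov, grid, M):
    """Discretize the covariance kernel on `grid` and return the M largest
    eigenvalues and grid-sampled eigenfunctions, normalized in L2(D)."""
    h = grid[1] - grid[0]                       # uniform spacing, D = (0, 1)
    K = cov(grid[:, None], grid[None, :])       # covariance matrix K(x_i, x_j)
    lam, vec = np.linalg.eigh(K * h)            # Nystrom approximation of Eq. (4a)
    idx = np.argsort(lam)[::-1][:M]
    lam, vec = lam[idx], vec[:, idx]
    psi = vec / np.sqrt(h)                      # so that int psi_i^2 dx ~ 1
    return lam, psi

def sample_Y(a_samples, a_mean, lam, psi, h):
    """Approximate Eq. (5): Y_i = (1/sqrt(lam_i)) int (a - E[a]) psi_i dx,
    for each realization of the coefficient given on the same grid."""
    fluct = a_samples - a_mean                  # shape (n_samples, n_grid)
    return (fluct @ psi) * h / np.sqrt(lam)     # shape (n_samples, M)

# Illustrative exponential covariance, unit variance, correlation length 0.2
x = np.linspace(0.0, 1.0, 200)
lam, psi = kl_eigenpairs(lambda s, t: np.exp(-np.abs(s - t) / 0.2), x, M=6)
print(lam)
```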

2.2. Density estimation

In this section, we present algorithms for the density estimation, which are necessary for our methodology in Section 3. In practice, it is impossible to obtain the joint PDF of the $Y_i$; however, we can approximate the marginal PDFs $\rho_i$ of the $Y_i$ efficiently using density estimation approaches. Here we employ the kernel density estimator [21]. Given $N_\rho$ realizations of the random field $a(x,\omega)$, we can obtain a set $\{Y_i^{(j)}\}_{j=1}^{N_\rho}$ of samples of $Y_i$ using Eq. (5). The kernel density estimate $\tilde\rho_i$ of $\rho_i$ then takes the form
\[
\tilde\rho_i(y_i) = \frac{1}{N_\rho h} \sum_{j=1}^{N_\rho} K_d\!\left( \frac{y_i - Y_i^{(j)}}{h} \right), \tag{9}
\]
where $h > 0$ is the bandwidth, acting as a tuning parameter, and $K_d(y)$ is a prescribed kernel function satisfying
\[
K_d(y) \ge 0, \qquad \int_{\mathbb{R}} K_d(y)\, dy = 1.
\]
The most widely used kernel is the Gaussian kernel $K_d(y) = (2\pi)^{-1/2} e^{-y^2/2}$. In this case, the kernel density estimate can be written as
\[
\tilde\rho_i(y_i) = \frac{1}{N_\rho h \sqrt{2\pi}} \sum_{j=1}^{N_\rho} e^{-(y_i - Y_i^{(j)})^2 / 2h^2}. \tag{10}
\]
The bandwidth $h$ is a scaling factor which determines the quality of the approximate marginal PDF $\tilde\rho_i$. To this end, we choose the optimal bandwidth $h$ that minimizes the asymptotic integrated mean square error (AIMSE). The AIMSE represents the distance between the two density functions and can be written as
\[
\mathrm{AIMSE}(\tilde\rho_i, \rho_i) = \frac{1}{N_\rho h} \int_{\mathbb{R}} K_d^2(y)\, dy + \frac{h^4}{4} \left( \int_{\mathbb{R}} y^2 K_d(y)\, dy \right)^{\!2} \int_{\mathbb{R}} \big( \rho_i'' \big)^2\, dy,
\]
with $\rho_i''$ being the second-order derivative of $\rho_i$; see [22,23] and the references therein for more details about numerical algorithms for non-parametric density estimation. The best rate of convergence of the AIMSE of the kernel density estimate is of order $O(N_\rho^{-4/5})$ [22,23].
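A minimal sketch of the Gaussian-kernel estimate (10); Silverman's rule of thumb is used here as a stand-in for the AIMSE-optimal bandwidth, which is an assumption made for illustration only.

```python
import numpy as np

def gaussian_kde_1d(samples, bandwidth=None):
    """Return a callable Gaussian kernel density estimate, Eq. (10)."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if bandwidth is None:
        # Silverman's rule of thumb, a common proxy for the AIMSE-optimal h
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-1.0 / 5.0)
    def pdf(y):
        y = np.atleast_1d(np.asarray(y, dtype=float))
        z = (y[:, None] - samples[None, :]) / bandwidth
        return np.exp(-0.5 * z**2).sum(axis=1) / (n * bandwidth * np.sqrt(2.0 * np.pi))
    return pdf

# Example: estimate the marginal PDF of samples of Y_1
rho1 = gaussian_kde_1d(np.random.randn(1000) * 0.9)
print(rho1([0.0, 1.0]))
```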
2.3. $L^\infty$ approximation of $u(x, y)$

In this section, we briefly discuss the gPC method via a collocation projection, or gPC interpolation for short. The regularity of the solution $u(x, y)$ in the parametric space $y \in C$ is given by the following lemma [16].

Lemma 2.3. Let $C_i^* = \prod_{j=1, j \ne i}^{M} C_j$ and $y_i^* \in C_i^*$, $i = 1, \ldots, M$. Let $\gamma_i = r_i / a_{\min}$. The solution $u(x, y_i, y_i^*; \omega)$ admits an analytic extension $u(x, z, y_i^*)$, $z \in \mathbb{C}$, in the region of the complex plane
\[
R(C_i; \tau_i) \equiv \{ z \in \mathbb{C} : \mathrm{dist}(z, C_i) \le \tau_i \}
\]
with $0 < \tau_i < 1/(2\gamma_i)$. Furthermore,
\[
\max_{z \in R(C_i; \tau_i)} \| u(z) \|_{H_0^1(D)} \le C(\tau_i, \gamma_i, a_{\min}),
\]
where $C$ is a function of $\tau_i$, $\gamma_i$ and $a_{\min}$.

Due to this analyticity, we are interested in the point-wise convergence of polynomial interpolation in the parametric space. In particular, we will examine gPC interpolation based on sparse grids. In this section we assume that the random variables $Y_i$ are independent for the discussion of gPC interpolation and return to this issue in Section 3. Let $P_p(C)$ denote the polynomials on $C$ of total degree up to $p$. We define an approximation space for the model problem (8) as $P_p(C) \otimes H_0^1(D)$, where for simplicity we do not include the errors from the physical discretization. An interpolation operator based on sparse grids can be defined via
\[
I_p v = v \qquad \forall v \in P_p(C) \otimes H_0^1(D). \tag{11}
\]
We note that $P_p(C)$ is usually not the same as the polynomial space that polynomial interpolation on sparse grids can reproduce exactly. If we use $A(M+s, M)$ to indicate the tensor-product formulas given by the Smolyak algorithm, where $s$ indicates the level of sparseness, polynomial interpolation based on $A(M+s, M)$ will be exact for polynomials in $P_s(C)$ and some other polynomials of degree larger than $s$ [24]. However, in this work we will focus on the part $P_s(C)$.

Given a set of sparse grid points $\{y_i\}_{i=1}^{n}$ on $C$, we compute $u(x, y_i)$ by solving the deterministic elliptic equation
\[
-\nabla \cdot \big( a_M(x, y_i) \nabla u(x, y_i) \big) = f(x), \tag{12}
\]
where the number $n$ of interpolation points is large enough to determine exactly polynomials up to order $p$ on $C$. When necessary, we project the gPC interpolation results onto a basis of $P_p(C)$ through the discrete integration formula based on sparse grids. We note that if $A(M+p, M)$ is based on Gauss abscissas, the corresponding discrete integration formula has a degree of exactness $2p+1$ for $p < 2M$ [25], i.e., the discrete integration formula will be exact for polynomials of order up to $2p+1$. The point-wise error of the gPC interpolation via sparse grids can then be expressed as
\[
\| u - \tilde u_p \|_{L^\infty(C; H_0^1(D))} \le (\Lambda_n + 1) \inf_{w \in P_p(C) \otimes H_0^1(D)} \| u - w \|_{L^\infty(C; H_0^1(D))}, \tag{13}
\]
where $\tilde u_p = I_p u$ and $\Lambda_n$ is the Lebesgue constant associated with the collocation points $\{y_i\}_{i=1}^{n}$. It is obvious that the convergence is controlled by two factors: the Lebesgue constant and the best approximation given by the polynomial space $P_p(C)$. Due to the existence of an analytic extension in the parametric space, we know that the best approximation has a fast (exponential) point-wise convergence [26]. However, the Lebesgue constant is only well understood for certain choices of collocation points, e.g., Jacobi nodes. For a one-dimensional interpolation formula, the optimal order of the Lebesgue constant is $\log(m+1)$, with $m$ being the polynomial order [24]. Polynomial interpolation on sparse grids was studied in [27], where the sparse grids were based on the extrema of the Chebyshev polynomials. Although the authors considered functions in a Sobolev-type space, the derivations can be applied directly to analytic functions, due to the special tensor-product structure of the Smolyak algorithm, to obtain an estimate of point-wise convergence. Regarding the stochastic collocation finite element method for the model problem (8), the $L^2(C; H_0^1(D))$ approximation was studied in [16,28], and the $L^\infty(C; H_0^1(D))$ approximation was analyzed in [29], where the sparse collocation points were generated by a full tensor product with anisotropic Chebyshev nodes in each random dimension, and the random variables $Y_i$ were assumed to be mutually independent. In this work, we will employ sparse grids based on Gauss abscissas for an arbitrary PDF, which in general is a Jacobi-like function (see Fig. 4 in Section 5). Since we know that it is possible to use the Jacobi nodes to obtain the optimal order $\log(m+1)$ of the Lebesgue constant through the additional-points method [24], we expect a fast point-wise convergence of the right-hand side of Eq. (13). In this work, we simply use the Gauss abscissas for a Jacobi-like PDF, and numerical experiments show that the corresponding sparse grids work well for the gPC interpolation with respect to the $L^\infty(C; H_0^1(D))$ norm.

Remark 2.4. In the polynomial interpolation, the PDF of $Y$ does not affect the norm $\|\cdot\|_{L^\infty(C; H_0^1(D))}$ directly, unlike the $L^2$ norm, which means that the point-wise convergence is valid for any joint PDF of the $Y_i$. However, we note that the PDF of $Y$ can be implicitly related to the choice of the grid points $\{y_i\}$, which will affect the Lebesgue constant.
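The exponential point-wise convergence for analytic functions that underlies Eq. (13) can be observed with a few lines of code: the sketch below interpolates a one-dimensional analytic function at Chebyshev–Gauss nodes and reports the sampled $L^\infty$ error. The test function is an arbitrary illustrative choice, not from the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda y: 1.0 / (2.0 + y)          # analytic on [-1, 1], illustrative choice
y_fine = np.linspace(-1.0, 1.0, 2001)

for p in range(2, 16, 2):
    # Chebyshev-Gauss nodes (roots of T_{p+1})
    nodes = np.cos(np.pi * (2 * np.arange(p + 1) + 1) / (2 * (p + 1)))
    # Interpolating polynomial of degree p through the nodes, Chebyshev basis
    interp = C.Chebyshev.fit(nodes, f(nodes), deg=p, domain=[-1.0, 1.0])
    err = np.max(np.abs(f(y_fine) - interp(y_fine)))   # sampled L-infinity error
    print(f"p = {p:2d}   max error = {err:.3e}")
```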

Remark 2.5. For the Galerkin projection, the $L^\infty(C; H_0^1(D))$ convergence of the polynomial approximation should also be valid, due to the analytic extension in the parametric space. To study the $L^\infty$ convergence, the standard technique is to transfer the problem to the approximation of Green's functions of the adjoint problem. To the best of our knowledge, no regularity study of such Green's functions is available yet. In this work, we will present some numerical experiments based on Galerkin projection results.

Based on the point-wise convergence in the parametric space, we are now ready to describe our methodology for elliptic problems with general (non-Gaussian) random inputs.

3. Proposed methodology

The model problem (8) has been widely studied (see [9,12,13,16,28,30–32] and references therein) using the polynomial chaos method, where the uncorrelated random variables in the K–L expansion of $a(x,\omega)$ are either approximated by a set of independent Gaussian random variables or simply assumed to be mutually independent. In this work, we take advantage of the point-wise convergence of polynomial interpolation in the parametric space to develop a simple and effective methodology.

3.1. Approximation procedure

Let $\tilde a(x,\omega) = S a(x,\omega)$ denote a simulator of $a(x,\omega)$. For example, $S$ can be a statistical model or a numerical simulation algorithm for the non-Gaussian random process $a(x,\omega)$ [5–7]. In other words, we can obtain samples of the random field $a(x,\omega)$ from the simulator $S$, based on which we can implement a direct Monte Carlo method. Since brute-force Monte Carlo methods are usually unaffordable, especially for large-scale simulations, we shall try to accelerate them by employing polynomial chaos. Since the K–L expansion is valid for any second-order stationary or non-stationary random process, we use it to reduce the number of random dimensions, i.e.,
\[
\tilde a_M(x,\omega) = \mathbb{E}[\tilde a](x) + \sum_{i=1}^{M} \sqrt{\lambda_i}\, Y_i\, \psi_i(x), \tag{14}
\]
where the random variable $Y_i$ is defined as
\[
Y_i = \frac{1}{\sqrt{\lambda_i}} \int_D \big( \tilde a(x,\omega) - \mathbb{E}[\tilde a](x) \big)\, \psi_i(x)\, dx. \tag{15}
\]
The marginal PDF $\rho_i$ of $Y_i$ can then be approximated by density estimation (see Section 2.2) using Eq. (15). Based on the approximate marginal PDFs $\tilde\rho_i$, we subsequently define an auxiliary PDF $\tilde\rho = \prod_{i=1}^{M} \tilde\rho_i$. In other words, independence is assumed between the $Y_i$ although they are only mutually uncorrelated. The reason for defining $\tilde\rho$ is to construct orthogonal (generalized) polynomial chaos bases $\{\phi_\alpha\}$ and the corresponding Gauss quadrature points for $Y = (Y_1, \ldots, Y_M) \in C$ with respect to the marginal PDFs $\tilde\rho_i$, based on which we generate sparse grids in the parametric space. Here $\alpha = (\alpha_1, \ldots, \alpha_M) \in \mathbb{N}^M$ is a multi-index, $\{\phi_\alpha\}$ is a set of orthogonal polynomials with respect to the auxiliary PDF $\tilde\rho$, where $\phi_\alpha = \prod_{i=1}^{M} \phi_{\alpha_i}$, and $\{\phi_{\alpha_i}\}$ is a set of orthogonal polynomials with respect to the marginal PDF $\tilde\rho_i$. Several issues must be clarified before we can obtain the orthogonal polynomial chaos basis $\{\phi_\alpha\}$:

(1) The image of $Y$ is usually hard to obtain in practice. However, the marginal PDFs $\rho_i$ usually decay fast as $|y_i|$ becomes large. We therefore truncate the image of $Y_i$ at a point where $\tilde\rho_i$ is relatively small and obtain $Y \in \tilde C \subset C$. Thus, we neglect $Y \in C \setminus \tilde C$, since the probability $\Pr(Y \in C \setminus \tilde C)$ is very small.

(2) To obtain orthogonal polynomials corresponding to the marginal PDF $\rho_i$, we need to construct them numerically [15]. The efficiency of algorithms for numerical orthogonality is mainly affected by the smoothness of the marginal PDF $\rho_i$ [33]. Thus, we need an algorithm which gives a smooth approximation of $\rho_i$. We noted in Section 2.2 that the smoothness of the approximate marginal PDF $\tilde\rho_i$ is inherited from the smoothness of the kernel $K_d$. If the Gaussian kernel is employed, we have $\tilde\rho_i \in C^\infty(C_i)$; thus $\tilde\rho_i$ is suitable for the algorithms of numerical orthogonality. From Eq. (9), we can see that the cost of evaluating $\tilde\rho_i$ at a given point is determined by the number $N_\rho$ of samples. If $N_\rho$ is large, the cost of numerical orthogonality is large. However, we note that $\tilde\rho_i$ is nothing more than a sum of a series of Gaussian functions, which implies that the fast Gauss transform (FGT) [34,19] can be employed to accelerate the calculation. In fact, numerical experiments show that the cost of the numerical orthogonality is very small. In general, the interpolation error of $I_p u$ is not very sensitive to the accuracy of $\tilde\rho_i$; see Section 5 for a convergence study.

Using the polynomial chaos basis $\{\phi_\alpha\}$ with order $|\alpha| = \sum_{i=1}^{M} \alpha_i \le p$, we can model the solution field as
\[
\tilde u_p(x, Y) = \sum_{|\alpha| \le p} \tilde u_\alpha(x)\, \phi_\alpha(Y), \tag{16}
\]
where the coefficients $\tilde u_\alpha$ can be obtained from the interpolation results $u(x, y_i)$,
\[
\tilde u_\alpha(x) = \sum_{i=1}^{n} u(x, y_i)\, \phi_\alpha(y_i)\, w_i, \tag{17}
\]
with $w_i$ being the integration weight associated with the grid point $y_i$. We note here that the discrete Galerkin projection is based on the auxiliary PDF $\tilde\rho$ instead of the joint PDF of $Y$.
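A minimal sketch of issue (2) and of the discrete projection (17) in one dimension: orthonormal polynomials with respect to an estimated marginal PDF are built by Gram–Schmidt on the monomials, with all integrals evaluated by a fixed quadrature rule on the truncated image. The quadrature choice and helper names are illustrative assumptions; the paper itself points to the Stieltjes-type procedures in [33].

```python
import numpy as np

def orthonormal_polys(pdf, a, b, degree, nquad=200):
    """Monomial-basis coefficients of polynomials orthonormal w.r.t. the
    weight `pdf` on [a, b], built by (modified) Gram-Schmidt with quadrature."""
    t, w = np.polynomial.legendre.leggauss(nquad)
    y = 0.5 * (b - a) * t + 0.5 * (b + a)            # map nodes to [a, b]
    w = 0.5 * (b - a) * w * pdf(y)                   # weight includes the PDF
    V = np.vander(y, degree + 1, increasing=True)    # monomials at the nodes
    Q = []
    for k in range(degree + 1):
        c = np.zeros(degree + 1); c[k] = 1.0         # start from y**k
        for q in Q:                                  # remove lower-order components
            c -= np.sum(w * (V @ c) * (V @ q)) * q
        c /= np.sqrt(np.sum(w * (V @ c) ** 2))       # normalize
        Q.append(c)
    return np.array(Q)                               # row k = coefficients of phi_k

def project(u_at_nodes, phi_at_nodes, quad_weights):
    """Discrete projection, Eq. (17): u_alpha = sum_i u(y_i) phi_alpha(y_i) w_i."""
    return phi_at_nodes.T @ (quad_weights * u_at_nodes)
```

For low polynomial orders the monomial Gram–Schmidt above is adequate; for higher orders a three-term-recurrence (Stieltjes) construction is the numerically stable choice.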

3.2. Post-processing procedure

Since the polynomial approximation $\tilde u_p(x, Y)$ is based on the auxiliary PDF $\tilde\rho$ while all the statistics of $u(x, Y)$ are with respect to the joint PDF $\rho$, we must handle the approximation $\tilde u_p(x, Y)$ carefully to obtain the correct statistics. Since the gPC interpolation $\tilde u_p(x, Y)$ converges point-wise in the parametric space, the statistics of $\tilde u_p(x, Y)$ with respect to the joint PDF of $Y$ converge to the statistics of $u(x, Y)$. Based on this observation, we propose two post-processing models:

(i) gPC predictor model: although we do not have the explicit form of the joint PDF, we can sample it using the simulator $S$. We first obtain a sample field of $\tilde a(x,\omega)$ from the simulator $S$, and then compute samples of $Y_i$ using Eq. (15). The moments of $u(x, Y)$ can then be obtained as
\[
\mathbb{E}[u^m(x, \tilde a(x))] \approx \frac{1}{N_{mc}} \sum_{i=1}^{N_{mc}} B_u^m(x, Y^{(i)}), \tag{18}
\]
where
\[
B_u(x, Y^{(i)}) = \begin{cases} \tilde u_p(x, Y^{(i)}) & \text{if } Y^{(i)} \in \tilde C, \\ 0 & \text{otherwise}, \end{cases}
\]
and $N_{mc}$ is the number of realizations. In other words, we sample the polynomial chaos solution $\tilde u_p(x, Y)$ instead of the stochastic elliptic PDE. If the polynomial chaos solution provides a good point-wise approximation, we obtain a good speed-up, since sampling the stochastic PDE directly is much more expensive.

(ii) gPC predictor–corrector model: furthermore, using the polynomial chaos solution as a control variate for variance reduction, we can refine the results obtained from the first model. We implement the Monte Carlo method to sample the stochastic PDE in the following way:
\[
\mathbb{E}[u^m(x, \tilde a(x))] = \mathbb{E}[\tilde u_p^m(x, Y)] + \frac{1}{N_{mc}} \sum_{i=1}^{N_{mc}} \big[ u^m(x, Y^{(i)}) - B_u^m(x, Y^{(i)}) \big], \tag{19}
\]
where $\mathbb{E}[\tilde u_p^m(x, Y)]$ is computed with respect to the joint PDF instead of the auxiliary PDF, and $B_u(x, Y^{(i)})$ is defined as in Eq. (18). Since we have the explicit form of $\tilde u_p(x, Y)$, we can use a large number of samples to compute $\mathbb{E}[\tilde u_p^m(x, Y)]$ accurately. If $\tilde u_p^m(x, Y)$ is a good approximation of $u^m(x, Y)$, the random variable $u^m(x, Y) - B_u^m(x, Y)$ will have a small variance, resulting in a good variance reduction for the Monte Carlo method; see Section 4 for the error analysis. In other words, the value of $N_{mc}$ can be reduced significantly to achieve convergence.

We now summarize our algorithm:

1: Compute the marginal PDFs $\tilde\rho_i$ of the uncorrelated random variables $Y_i$ in the K–L expansion by sampling the simulator $S$ of the random inputs.
2: Generate an orthogonal polynomial chaos basis for each marginal PDF $\tilde\rho_i$.
3: Construct the multi-dimensional orthogonal polynomial chaos basis for the auxiliary PDF $\tilde\rho = \prod_{i=1}^{M} \tilde\rho_i$.
4: Implement the polynomial chaos method via the collocation or Galerkin projection to obtain the approximations $u(x, y_i)$ on $\{y_i\}_{i=1}^{n} \subset C$ based on the auxiliary PDF $\tilde\rho$.
5: Project the interpolation results from step 4 onto the polynomial chaos basis $\{\phi_\alpha\}$ to obtain the chaos expansion $\tilde u_p(x, Y)$.
6: Compute statistics by sampling the polynomial chaos approximation $\tilde u_p(x, Y)$ as in Eq. (18), or use it as a control variate for variance reduction as in Eq. (19).

4. Error analysis of the proposed methodology

In this section we present a general error analysis for the methodology proposed in Section 3. We first list the main error sources:

(1) Truncation of the marginal PDFs. In the numerical implementation, we cannot obtain the image $C$ of $Y$ exactly. Instead we use a truncated version $\tilde C \subset C$.
(2) Approximation of $\tilde u_p(x, Y) = I_p u$ based on the auxiliary PDF.

Lemma 4.1. We assume that
\[
\int_{\tilde C^c} \| u \|_{H_0^1(D)}^2\, \rho(y)\, dy \le \epsilon_1^2 \quad \text{and} \quad \| u(x, y) - \tilde u_p(x, y) \|_{L^\infty(\tilde C; H_0^1(D))} \le \epsilon_2,
\]
where $\tilde C^c = C \setminus \tilde C$ and $\rho(y)$ is the joint PDF of $Y$. Then we have
\[
\big| \mathbb{E}[\| u \|_{H_0^1(D)}] - \mathbb{E}[\| \tilde u_p \|_{H_0^1(D)}] \big| \le \epsilon_1 + \epsilon_2. \tag{20}
\]

Proof. Here $\tilde u_p$ is understood to vanish outside $\tilde C$ (cf. the definition of $B_u$). Then
\[
\big| \mathbb{E}[\|u\|_{H_0^1(D)}] - \mathbb{E}[\|\tilde u_p\|_{H_0^1(D)}] \big|
\le \mathbb{E}\big[ \big| \|u\|_{H_0^1(D)} - \|\tilde u_p\|_{H_0^1(D)} \big| \big]
\le \int_{\tilde C^c} \|u\|_{H_0^1(D)}\, \rho(y)\, dy + \int_{\tilde C} \|u - \tilde u_p\|_{H_0^1(D)}\, \rho(y)\, dy
\]
\[
\le \left( \int_{\tilde C^c} \|u\|_{H_0^1(D)}^2\, \rho(y)\, dy \right)^{\!1/2} \left( \int_{\tilde C^c} \rho(y)\, dy \right)^{\!1/2} \ \ \text{(Cauchy–Schwarz inequality)} \ + \ \epsilon_2 \int_{\tilde C} \rho(y)\, dy
\le \epsilon_1 + \epsilon_2. \qquad \square
\]
It is seen that $\epsilon_1$ is due to the truncation of the image of $Y$. The error of $\tilde u_p$ with respect to the joint PDF $\rho(y)$ is dominated by the $L^\infty$ error $\epsilon_2$ of the approximation with respect to the auxiliary PDF.

Remark 4.2. The assumption $\int_{\tilde C^c} \|u\|_{H_0^1(D)}^2\, \rho(y)\, dy \le \epsilon_1^2$ is reasonable since $u(x, Y) \in L^2(C; H_0^1(D))$ [31].

Lemma 4.3. Let
\[
u_H = \mathbb{E}[\|\tilde u_p\|_{H_0^1(D)}] + \frac{1}{N_{mc}} \sum_{i=1}^{N_{mc}} \Big[ \|u\|_{H_0^1(D)}(Y^{(i)}) - \|B_u\|_{H_0^1(D)}(Y^{(i)}) \Big],
\]
where the $Y^{(i)}$ are samples from the simulator of the random inputs and $B_u(x, Y^{(i)})$ is defined as in Eq. (18). We have the convergence estimate
\[
\big| \mathbb{E}[\|u\|_{H_0^1(D)}] - u_H \big| \sim O\!\left( \sqrt{\epsilon_1^2 + \epsilon_2^2}\; N_{mc}^{-1/2} \right). \tag{21}
\]

Proof. We examine the variance of $d_u = \|u\|_{H_0^1(D)} - \|B_u\|_{H_0^1(D)}$:
\[
\mathrm{Var}(d_u) = \mathbb{E}[d_u^2] - (\mathbb{E}[d_u])^2 \le \mathbb{E}\big[ (\|u\|_{H_0^1(D)} - \|B_u\|_{H_0^1(D)})^2 \big]
\le \int_{\tilde C^c} \|u\|_{H_0^1(D)}^2\, \rho(y)\, dy + \int_{\tilde C} \|u - \tilde u_p\|_{H_0^1(D)}^2\, \rho(y)\, dy \le \epsilon_1^2 + \epsilon_2^2.
\]
Thus, $u_H = \mathbb{E}[\|\tilde u_p\|_{H_0^1(D)}] + \mathbb{E}[\|u\|_{H_0^1(D)}] - \mathbb{E}[\|\tilde u_p\|_{H_0^1(D)}] + O\big( \sqrt{\epsilon_1^2 + \epsilon_2^2}\, N_{mc}^{-1/2} \big)$, where we use the convergence rate $N_{mc}^{-1/2}$ of Monte Carlo methods. $\square$

Remark 4.4. From Lemma 4.3 we see that the efficiency of the polynomial chaos solution for variance reduction is determined by the errors $\epsilon_1$ and $\epsilon_2$. In Lemma 4.3 we assume that $\mathbb{E}[\|\tilde u_p\|_{H_0^1(D)}]$ is known exactly. In practice, we may only be able to obtain $\mathbb{E}[\|\tilde u_p\|_{H_0^1(D)}]$ by sampling the random inputs. However, since $\tilde u_p$ is an explicit polynomial, the desired accuracy can be reached much faster than by sampling the stochastic PDE.

Remark 4.5. One option to reduce the errors $\epsilon_1$ and $\epsilon_2$ is the multi-element extension of polynomial chaos methods [14,15], where the parametric space is decomposed for extra h-type convergence. In fact, by doing the decomposition, the domain of analytic extension is also enlarged; see [35] for more details.
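A minimal sketch of the predictor–corrector estimator (19) for the mean ($m = 1$), using a cheap surrogate as a control variate: the toy "exact" response and surrogate below are illustrative stand-ins for the PDE solve and the gPC solution, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def u_exact(y):            # stand-in for an expensive solve of the stochastic PDE
    return np.exp(0.3 * y) / (2.0 + y)

def u_surrogate(y):        # stand-in for the gPC solution (a low-order polynomial)
    return 0.5 - 0.05 * y + 0.07 * y**2

# E[u_surrogate] w.r.t. the joint PDF, computed cheaply with many surrogate samples
y_big = rng.normal(size=200_000)
mean_surrogate = u_surrogate(y_big).mean()

# Predictor-corrector, Eq. (19): a few expensive samples correct the surrogate mean
y_mc = rng.normal(size=200)
estimate = mean_surrogate + np.mean(u_exact(y_mc) - u_surrogate(y_mc))

print(estimate, u_exact(rng.normal(size=200_000)).mean())  # compare with brute force
```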

5. Numerical studies

5.1. Algebraic model

We first investigate the convergence of the proposed methodology using a simple algebraic problem
\[
\big( c + \sigma(\xi_1 + \xi_2) \big)\, u = 1, \tag{22}
\]
where $c$ and $\sigma$ are positive numbers, and $\xi_i$, $i = 1, 2$, are two uncorrelated random variables. For solvability, we assume that $\xi_i \in [-1, 1]$ and $c + \sigma(\xi_1 + \xi_2) > 0$.

For simplicity, we assume that $\xi_2 = \xi_2(\xi_1)$ is a function of $\xi_1$. Then the fact that $\xi_1$ and $\xi_2$ are uncorrelated implies orthogonality between $\xi_1$ and $\xi_2$ with respect to the PDF $f(\xi_1)$ of $\xi_1$; in other words, $\xi_2$ can be expressed as an orthogonal polynomial of $\xi_1$. We note that $\xi_2$ is dependent on $\xi_1$. In the phase space, $(\xi_1, \xi_2)$ is located on the curve $\xi_2 = \xi_2(\xi_1)$; however, if we use the auxiliary PDF $\tilde\rho(\xi_1, \xi_2)$, $(\xi_1, \xi_2)$ lives on $[-1, 1]^2$. This can be regarded as the worst case for our methodology, since we need to approximate a one-dimensional problem using two-dimensional polynomials. It is obvious that convergence on $[-1, 1]^2$ in the $L^2$ norm is not enough for convergence on the curve $\xi_2(\xi_1)$; a stronger metric, such as the $L^\infty$ norm, is needed to measure the convergence of polynomial chaos.

Following the methodology proposed in Section 3, we first sample $\xi_1$ and $\xi_2$ to obtain the density estimation. Based on the estimated marginal PDFs we construct orthogonal polynomials and implement the gPC method. Finally, we compute the statistics using the joint PDF; in other words, we sample $\xi_1$, since we know that $\xi_2$ is a function of $\xi_1$.

Let $\xi_1$ be a Beta random variable with distribution Beta(1/2, 1/2). We let $\xi_2 = 2\xi_1^2 - 1$. It is easy to verify that $\xi_2$ also has a Beta(1/2, 1/2) distribution. We have two choices for constructing the polynomial chaos basis corresponding to the auxiliary joint PDF:

(i) Construct the two-dimensional polynomial chaos basis using the one-dimensional Chebyshev polynomials, since both marginal PDFs of $\xi_1$ and $\xi_2$ are Beta(1/2, 1/2).

(ii) Estimate the marginal PDFs of $\xi_1$ and $\xi_2$, and then construct the orthogonal polynomial chaos basis numerically.

Both cases are investigated. In Fig. 1 we show the convergence of the mean and standard deviation for approach (i). All statistics are computed with respect to the joint PDF $\delta(\xi_2 - \xi_2(\xi_1))\, \rho_{(1/2,1/2)}(\xi_1)$, where $\rho_{(1/2,1/2)}(\cdot)$ is the PDF of the Beta(1/2, 1/2) distribution. It can be seen that the overall convergence rate is, in fact, exponential. If we use the solutions from approach (ii) and the joint PDF $\delta(\xi_2 - \xi_2(\xi_1))\, \rho_{(1/2,1/2)}(\xi_1)$ to compute the desired statistics, we also obtain fast (exponential) convergence, as shown in Fig. 2. It can be seen that the convergence behavior is similar for the two sample sizes $N_\rho$ used for the density estimation. In Fig. 3, we present some estimated marginal PDFs and demonstrate the sensitivity of the convergence with respect to the sample size for the density estimation. It is observed that the estimated marginal PDFs may be quite different; however, the corresponding approximation errors can be of the same order. In other words, the convergence is not sensitive to the sample size for the density estimation. The reason is that the solution of the algebraic model is analytic with respect to $\xi_1$ and $\xi_2$, and the Taylor expansion converges in the $L^\infty$ norm.

Fig. 1. Convergence of the mean and standard deviation for the algebraic model. Chebyshev polynomials are used to construct the polynomial chaos basis corresponding to the auxiliary PDF.

Fig. 2. Convergence of the mean and standard deviation for the algebraic model. Estimated marginal PDFs are used for the numerical orthogonality. Left and right: two different sample sizes $N_\rho$ for the density estimation.

Fig. 3. Left: PDF of Beta(1/2, 1/2) and estimated PDFs obtained with different numbers of samples. Right: sensitivity of the numerical convergence with respect to the sample size $N_\rho$ in the density estimation.
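The algebraic model lends itself to a compact end-to-end sketch of the procedure of approach (i): build an orthonormal Chebyshev-type basis, fit the gPC surrogate of $u = 1/(c + \sigma(\xi_1 + \xi_2))$ by discrete projection on a tensor Gauss grid for the auxiliary PDF, and then evaluate statistics by sampling the joint PDF (i.e., sampling $\xi_1$ only). The parameter values $c = 3$, $\sigma = 1$ are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebgauss, chebval

c, sigma, p = 3.0, 1.0, 6                      # illustrative parameters and gPC order
u = lambda x1, x2: 1.0 / (c + sigma * (x1 + x2))

# Tensor Chebyshev-Gauss grid for the auxiliary PDF (both marginals Beta(1/2,1/2))
t, w = chebgauss(p + 1)                        # weight 1/sqrt(1 - t^2), sum(w) = pi
w = w / np.pi                                  # normalize to the Beta(1/2,1/2) density
X1, X2 = np.meshgrid(t, t, indexing="ij")
W = np.outer(w, w)
U = u(X1, X2)

def T(k, x):                                   # orthonormal Chebyshev polynomials
    e = np.zeros(k + 1); e[k] = 1.0
    return chebval(x, e) * (1.0 if k == 0 else np.sqrt(2.0))

# Discrete projection (Eq. (17)) onto the total-degree-p basis
coeffs = {(i, j): np.sum(W * U * T(i, X1) * T(j, X2))
          for i in range(p + 1) for j in range(p + 1 - i)}

def u_gpc(x1, x2):                             # the gPC surrogate
    return sum(cij * T(i, x1) * T(j, x2) for (i, j), cij in coeffs.items())

# Statistics w.r.t. the joint PDF: xi_1 ~ Beta(1/2,1/2) on [-1,1], xi_2 = 2 xi_1^2 - 1
xi1 = np.cos(np.pi * np.random.rand(100_000))
xi2 = 2.0 * xi1**2 - 1.0
print("gPC mean:", u_gpc(xi1, xi2).mean(), "  sampled exact mean:", u(xi1, xi2).mean())
```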
5.2. One-dimensional elliptic model

In this section we study the performance of the proposed methodology numerically using the one-dimensional elliptic problem
\[
-\frac{d}{dx}\!\left( a(x,\omega)\, \frac{du(x,\omega)}{dx} \right) = 1, \quad x \in (0, 1), \qquad u(x,\omega) = 0, \quad x = 0, 1, \tag{23}
\]
where the random process $a(x,\omega)$ takes the form
\[
a(x,\omega) = e^{G(x,\omega)} \tag{24}
\]
and $G(x,\omega)$ is a Gaussian random process with zero mean satisfying an exponential kernel.

5.2.1. The Karhunen–Loève expansion of $a(x,\omega)$

Let $K_G(x_1, x_2)$ denote the covariance kernel of $G(x)$, with the form
\[
K_G(x_1, x_2) = \sigma^2 e^{-|x_1 - x_2|/l}, \tag{25}
\]
where $\sigma$ is a constant and $l$ the correlation length. We know that the K–L expansion of $G(x,\omega)$ is
\[
G(x,\omega) = \sigma \sum_{i=1}^{\infty} \sqrt{\lambda_{G,i}}\, h_{G,i}(x)\, \xi_i, \tag{26}
\]
where $\{\lambda_{G,i}, h_{G,i}\}_{i=1}^{\infty}$ are eigenpairs and $\{\xi_i\}$ is a set of independent random variables with zero mean and unit variance.

We obtain the following statistics of $a(x,\omega)$:

Mean: $\bar a(x) = \mathbb{E}[a] = e^{\sigma^2/2}$.
Variance: $\mathrm{Var}(a) = \mathbb{E}[(a - \bar a)^2] = e^{2\sigma^2} - e^{\sigma^2}$.
Correlation function:
\[
K_a(x_1, x_2) = \frac{\mathbb{E}[(a(x_1,\omega) - \bar a(x_1))(a(x_2,\omega) - \bar a(x_2))]}{\mathrm{Var}(a)} = \frac{e^{\sigma^2 e^{-|x_1 - x_2|/l}} - 1}{e^{\sigma^2} - 1}.
\]
The K–L expansion of $a(x,\omega)$ takes the form
\[
a_M(x,\omega) = \bar a(x) + \sigma_a \sum_{i=1}^{M} \sqrt{\lambda_{a,i}}\, h_{a,i}(x)\, Y_i. \tag{27}
\]
We note that the $Y_i$ are not independent.

5.2.2. Numerical orthogonality

For the underlying Gaussian random process, we let $\sigma = 0.3$ and $l = 5$. Using Eq. (15), we can obtain the marginal PDFs of the random variables $Y_i$ in the K–L expansion of the random process $a(x,\omega)$. In Fig. 4, the estimated marginal PDFs of the $Y_i$ are shown; for comparison, we include the normal distribution. It can be seen that the marginal PDFs of the $Y_i$ are different from the normal distribution. There is an apparent bias in the marginal PDF of $Y_1$ towards the negative direction. We note that all the estimated PDFs are smooth. In Fig. 5 we present some orthogonal polynomials with respect to the marginal PDFs of $Y_1$ and $Y_2$. It is seen that the nonsymmetric structure in the marginal PDFs is clearly reflected in the orthogonal polynomials, in particular the fifth-order polynomials.

5.2.3. Convergence of the proposed strategy

Due to the absence of exact solutions, we use the solutions given by Monte Carlo simulations as reference solutions. Let $\tilde u_p(x, Y)$ be the polynomial chaos approximation of $u(x,\omega)$ based on the auxiliary PDF $\tilde\rho$. For this example, we compute $\tilde u_p(x, Y)$ through the gPC method via the Galerkin projection. Let $N_\rho$ indicate the number of samples for the marginal density estimation and $N_{mc}$ the number of realizations for Monte Carlo simulation. We approximate the statistics of the solution using the following four methods:

(i) Implement Monte Carlo simulations with $10^5$ realizations.
(ii) Sample $\tilde u_p(x, Y)$ with respect to the auxiliary PDF $\tilde\rho$. We note that this method is only correct when the random variables in the K–L expansion of $a(x,\omega)$ are independent; otherwise, there is a model error in the random inputs.
(iii) Sample $\tilde u_p(x, Y)$ with respect to the joint PDF $\rho$ using the gPC predictor model, see Eq. (18).
(iv) Implement Monte Carlo simulations using the gPC predictor–corrector model, see Eq. (19).

Fig. 4. Estimated marginal PDFs using different numbers of sample points. Right: PDFs of $Y_i$, $i = 1, 2, 3$.
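Methods (i)–(iv) above all rely on repeated deterministic solves of the two-point boundary value problem (23) for given coefficient realizations. A minimal second-order finite-difference sketch of one such solve is given below; the uniform grid and the coefficient callable are illustrative stand-ins for the actual discretization and a K–L realization.

```python
import numpy as np

def solve_1d_elliptic(a_func, n=200):
    """Solve -(a(x) u')' = 1 on (0,1), u(0)=u(1)=0, with central differences.
    `a_func` maps x-values to coefficient values of one realization."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    a_half = a_func(0.5 * (x[:-1] + x[1:]))       # coefficient at cell midpoints
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = (a_half[i] + a_half[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -a_half[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -a_half[i + 1] / h**2
    u_inner = np.linalg.solve(A, np.ones(n - 1))  # right-hand side f = 1
    return x, np.concatenate(([0.0], u_inner, [0.0]))

# Example: one lognormal-like realization (illustrative, not an actual K-L sample)
x, u = solve_1d_elliptic(lambda s: np.exp(0.3 * np.sin(2 * np.pi * s)))
print(u.max())
```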

Fig. 5. Orthogonal polynomials of order $p = 2, \ldots, 5$ with respect to the estimated marginal PDFs. Left: $\phi_{1,i}(y_1)$. Right: $\phi_{2,i}(y_2)$.

For the underlying Gaussian random process, we let $\sigma = 0.2$ and $l = 5$. For this case, the eigenvalues decay fast and we keep the first five eigenvalues for the K–L expansion of the log-normal process $a(x,\omega)$. We use a 20-term K–L expansion to approximate the underlying Gaussian process $G(x,\omega)$ when Monte Carlo simulations are needed. In Fig. 6 we show the mean and standard deviation of $u(x,\omega)$ given by the first three methods. It is seen that the results are close to each other, which implies that the correlation between the $Y_i$ has a small effect on the statistics up to second order, due to the Gaussian-like marginal PDFs. We subsequently investigate the convergence of methods (iii) and (iv). Let $Q_{\mathrm{ref}}(x)$ be a reference function and $Q(x)$ an approximation. We define the difference between $Q$ and $Q_{\mathrm{ref}}$ as
\[
\epsilon(x) = \frac{|Q(x) - Q_{\mathrm{ref}}(x)|}{\max_{0 \le x \le 1} |Q_{\mathrm{ref}}(x)|}. \tag{28}
\]
We use the results given by Monte Carlo simulations with $10^5$ realizations as a reference. In Fig. 7 we show the convergence behavior with respect to $\epsilon$ for different polynomial orders $p$ and sample sizes $N_\rho$ used in the kernel density estimation. For the mean, it is clear that $\epsilon$ becomes smaller as the sample size $N_\rho$ for the density estimation increases. Similar behavior is observed for the standard deviation.

Fig. 6. Mean and standard deviation of $u(x,\omega)$ given by different methods, using a five-term K–L expansion of $a(x,\omega)$. Left: mean. Right: standard deviation.

Fig. 7. Difference of statistics between the proposed strategy and Monte Carlo simulations, using a five-term K–L expansion of $a(x,\omega)$. Left: mean. Right: standard deviation.

However, the error of the standard deviation given by $N_\rho = 10^3$ is almost the same as that given by $N_\rho = 10^4$ when $p = 2$. If we then increase the polynomial order to $p = 3$, we obtain a smaller $\epsilon$, which implies that the error from polynomial chaos is dominant for $p = 2$ and $N_\rho = 10^3, 10^4$. Thus, the proposed strategy converges to the correct results as $N_\rho$ and $p$ increase. In contrast to the algebraic model, we see that the sample size for the density estimation has a noticeable influence on the convergence. This is because the long tails of the PDFs are approximated better if a larger number of samples is used; in other words, the error $\epsilon_1$ in Lemma 4.1 is reduced. However, when the number of random dimensions is large, it is not efficient to increase the polynomial order. Thus, it is necessary to consider method (iv). We compare the results from methods (iii) and (iv) in Fig. 8. It is seen that a small number of realizations of the stochastic elliptic problem can significantly improve the convergence, which implies that the polynomial chaos solution based on the auxiliary PDF provides a good prediction for variance reduction; in other words, the factor $\sqrt{\epsilon_1^2 + \epsilon_2^2}$ in Lemma 4.3 is small.

5.3. Two-dimensional elliptic model

Let $G_i(x,\omega)$, $i = 1, 2, \ldots, 2m$, be independent Gaussian random fields with zero mean and unit variance, where $m$ is a positive integer. Each Gaussian random field has the same correlation function $K_G(x_1, x_2) = K_G(|x_1 - x_2|)$. We consider the following non-negative random field:
\[
R_C(x,\omega) = \frac{1}{2} \sum_{i=1}^{2m} G_i^2(x,\omega). \tag{29}
\]
It can be verified that, for a given $x$, the marginal distribution of $R_C(x,\omega)$ is a gamma distribution with mean $m$. Therefore, $R_C(x,\omega)$ can be called a homogeneous Gamma random field. We summarize the statistics of $R_C(x,\omega)$ as follows:

Mean: $\bar R_C = \mathbb{E}[R_C] = m$.
Variance: $\mathrm{Var}(R_C) = \mathbb{E}[(R_C - \bar R_C)^2] = m$.
Correlation function:
\[
K_{R_C}(x_1, x_2) = \frac{\mathbb{E}[(R_C(x_1,\omega) - \bar R_C(x_1))(R_C(x_2,\omega) - \bar R_C(x_2))]}{\mathrm{Var}(R_C)} = K_G^2(x_1, x_2).
\]
The derivation of the correlation function is given in Appendix A. We see that the correlation function of $R_C(x,\omega)$ is independent of $m$. We consider the following non-Gaussian coefficient for the model problem (1):
\[
a(x,\omega) = 1 + \sigma R_C(x,\omega). \tag{30}
\]
Based on the K–L expansions of the underlying Gaussian fields $G_i(x,\omega)$, $a(x,\omega)$ can be approximated as
\[
a(x,\omega) \approx \hat a(x, \{\xi_{i,j}\}) = 1 + \frac{\sigma}{2} \sum_{i=1}^{2m} \left( \sum_{j=1}^{M} \sqrt{\lambda_{G,j}}\, h_{G,j}(x)\, \xi_{i,j} \right)^{\!2}, \tag{31}
\]
where the $\xi_{i,j}$ are independent standard Gaussian random variables. To employ the standard procedure of polynomial chaos methods, we would need to project the approximate Gamma field $\hat a(x, \{\xi_{i,j}\})$ onto Hermite-chaos. However, we notice several issues related to the efficiency of polynomial chaos methods: (1) the number of random variables is $2mM$, which increases fast with respect to $m$; (2) second-order Hermite-chaos is necessary to represent the random inputs, which implies that high-order Hermite-chaos may be necessary to model the solution.

In contrast to the standard procedure, we start from the correlation function of $R_C(x,\omega)$, which has a simple relation with the correlation function $K_G(x_1, x_2)$ and is independent of the number of underlying Gaussian random fields. We assume that $K_G(x_1, x_2)$ is a Gaussian kernel,
\[
K_G(x_1, x_2) = e^{-|x_1 - x_2|^2 / l}. \tag{32}
\]
Then the correlation function of $R_C(x,\omega)$ is
\[
K_{R_C}(x_1, x_2) = e^{-|x_1 - x_2|^2 / (l/2)}. \tag{33}
\]
It is seen that the correlation function of $R_C(x,\omega)$ is also Gaussian, with half the correlation length of $K_G(x_1, x_2)$.
The K–L expansion of $a(x,\omega)$ can be expressed as
\[
a(x,\omega) = 1 + \sigma\!\left( \mathbb{E}[R_C] + \mathrm{std}(R_C) \sum_{i=1}^{\infty} \sqrt{\lambda_i}\, h_i(x)\, Y_i \right)
= 1 + \sigma m + \sigma m^{1/2} \sum_{i=1}^{\infty} \sqrt{\lambda_i}\, h_i(x)\, Y_i, \tag{34}
\]
where $\mathrm{std}(R_C)$ is the standard deviation of $R_C(x)$, $\{\lambda_i, h_i(x)\}_{i=1}^{\infty}$ is a set of eigenpairs of $K_{R_C}(x_1, x_2)$, and $\{Y_i\}$ is a set of mutually uncorrelated random variables with zero mean and unit variance. Given a realization of $a(x,\omega)$, a sample of $Y_i$ can be obtained as
\[
Y_i = \frac{1}{\sigma m^{1/2} \sqrt{\lambda_i}} \int_D \big( a(x) - (1 + \sigma m) \big)\, h_i(x)\, dx. \tag{35}
\]

Fig. 8. Accelerating Monte Carlo simulation using the polynomial chaos approximation for variance reduction.

Fig. 9. Eigenvalues of Gaussian covariance kernels with correlation lengths $l = 1$ and $l = 0.5$.
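A minimal sketch of generating realizations of the Gamma field (29) from independent Gaussian fields and extracting samples of the $Y_i$ via Eq. (35), written on a 1D grid for brevity; the grid, correlation length and all helper names are illustrative assumptions (the paper itself works on the 2D domain $(0,1)^2$).

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 128)
h = x[1] - x[0]
m, sigma, l, M = 2, 0.2, 1.0, 10

# Gaussian covariance K_G and the covariance of the Gamma field, K_RC = K_G^2
K_G = np.exp(-np.subtract.outer(x, x) ** 2 / l)
K_RC = K_G ** 2

def gaussian_field_samples(K, n):
    """Draw n zero-mean Gaussian fields with covariance matrix K."""
    lam, vec = np.linalg.eigh(K)
    L = vec * np.sqrt(np.clip(lam, 0.0, None))       # K = L L^T
    return rng.standard_normal((n, K.shape[0])) @ L.T

# Realizations of R_C (Eq. (29)) and of the coefficient a = 1 + sigma R_C (Eq. (30))
n_samples = 2000
G = gaussian_field_samples(K_G, n_samples * 2 * m).reshape(n_samples, 2 * m, -1)
R_C = 0.5 * np.sum(G ** 2, axis=1)
a = 1.0 + sigma * R_C

# Eigenpairs of K_RC and samples of Y_i via Eq. (35)
lam, vec = np.linalg.eigh(K_RC * h)
idx = np.argsort(lam)[::-1][:M]
lam, psi = lam[idx], vec[:, idx] / np.sqrt(h)
Y = ((a - (1.0 + sigma * m)) @ psi) * h / (sigma * np.sqrt(m) * np.sqrt(lam))
print(Y.mean(axis=0), Y.var(axis=0))    # should be close to 0 and 1, respectively
```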

In the numerical approximation, we obtain realizations of $a(x,\omega)$ through a set of independent Gaussian random variables using Eq. (31); for good accuracy, we use a larger number of K–L terms for the underlying Gaussian fields in this step. For this example, we consider the gPC method via a collocation projection. Let the physical domain be $D = (0, 1)^2$, $f(x) = 1$, and $\sigma = 0.2$. Let the correlation length be $l = 1$ for the underlying Gaussian random fields, which yields a correlation length of $l/2 = 0.5$ for the corresponding Gamma random field. In Fig. 9 we show the largest eigenvalues of Gaussian kernels with $l = 1$ and $l = 0.5$. It can be seen that we need $M = 4$ Gaussian random variables to keep 90% of the energy in the K–L expansion of $G_i(x,\omega)$. Consider $m = 2$. If the standard procedure is employed, we need $2mM$ independent Gaussian random variables to approximate the Gamma random field; for the gPC interpolation, millions of grid points would then be needed for non-nested sparse grids with sparseness level 3 based on the one-dimensional Gauss quadrature formula. The choice of sparseness level 3 reflects the fact that a second-order Hermite-chaos approximation is needed for the random inputs. However, if we implement the K–L expansion directly with respect to the correlation function of $R_C(x,\omega)$, only 8 mutually uncorrelated random variables are necessary to keep 99% of the energy. In contrast to the standard procedure, the dimension of the random inputs is dramatically reduced. In this work, we keep the first 10 eigenvalues of $K_{R_C}(x_1, x_2)$. For the gPC interpolation, we then need 486 grid points for the non-nested sparse grids with sparseness level 2 based on the one-dimensional Gauss quadrature formula. We use sparseness level 2 because the K–L expansion is a first-order polynomial with respect to the random variables.

In Figs. 10 and 11 we show the mean and the standard deviation along the center line $x_1 = 0.5$. The statistics are computed by both direct Monte Carlo simulation and the gPC predictor model (see Eq. (18)). For the Monte Carlo simulations, we sample the stochastic elliptic PDE directly; for the gPC predictor model, we first sample the stochastic elliptic PDE 486 times (the number of sparse grid points) to obtain the approximate gPC solution $\tilde u_p(x, Y)$, which is a second-order polynomial with respect to the random variables (see Eq. (16)), and then sample $\tilde u_p(x, Y)$ instead of the stochastic elliptic PDE to compute the statistics. It can be seen that sampling the gPC solution $\tilde u_p(x, Y)$ provides an accuracy comparable to that given by sampling the stochastic PDE directly with the same large number of realizations. However, the numerical cost is significantly different. The main numerical cost of both strategies comes from sampling the stochastic PDE. For comparable accuracy, we only need to solve 486 deterministic PDEs for the gPC predictor model, in contrast to solving a very large number of deterministic PDEs for the direct Monte Carlo method, which results in a speed-up of about two orders of magnitude. Since the gPC predictor model yields accurate statistics, we know that the predictor–corrector model (see Eq. (19)) will be effective in further refining the results, where the gPC solution is used as a control variate for variance reduction.

Fig. 10. Mean of $u(x,\omega)$ along the center line $x_1 = 0.5$. Left: mean. Right: close-up view.

Fig. 11. Standard deviation of $u(x,\omega)$ along the center line $x_1 = 0.5$. Left: standard deviation. Right: close-up view.

6. Discussion

In this work, we have proposed and investigated a numerical methodology for elliptic problems with general (non-Gaussian) random inputs.

We note that the methodology presented is a general one, not restricted to the model problem used here. The idea is based on the observation that the polynomial chaos method provides an approximate model of the original stochastic PDE, while the Monte Carlo method only needs the outputs. Thus, we can feed the Monte Carlo method with the polynomial chaos solution. The Karhunen–Loève expansion, which is valid for any second-order random process, was employed to reduce the dimensionality of the random inputs. To maintain the joint PDF of the random variables in the Karhunen–Loève expansion, we considered a high-dimensional interpolation problem in the parametric space. By noting the analyticity of the solution and the $L^\infty$ convergence of polynomial interpolation in the parametric space, we implemented the polynomial chaos approximation based on an auxiliary PDF. In the post-processing stage, we considered two models. In the gPC predictor model, we computed all desired statistics by sampling the polynomial chaos solution with respect to the joint PDF instead of the auxiliary one. In the gPC predictor–corrector model, we used Monte Carlo methods to refine the statistics given by the gPC predictor model, where the polynomial chaos solution served as a control variate for variance reduction to efficiently accelerate the convergence of the Monte Carlo method.

Acknowledgements

This work is supported by DOE, NSF, AFOSR and ONR.

Appendix A. Derivation of the correlation function of $R_C(x)$

Let $X_1 = R_C(x_1,\omega)$ and $X_2 = R_C(x_2,\omega)$. We first look at the generating function $g(\theta_1, \theta_2) = \mathbb{E}[e^{\theta_1 X_1 + \theta_2 X_2}]$. Let $Y_{i,1} = G_i(x_1,\omega)$ and $Y_{i,2} = G_i(x_2,\omega)$. We know that $Y_{i,1}$ and $Y_{i,2}$ are standard normal random variables with covariance $K_G(|x_1 - x_2|)$. It is easy to see that
\[
g(\theta_1, \theta_2) = \mathbb{E}[e^{\theta_1 X_1 + \theta_2 X_2}]
= \prod_{i=1}^{2m} \mathbb{E}\!\left[ e^{\frac{1}{2}\theta_1 G_i^2(x_1) + \frac{1}{2}\theta_2 G_i^2(x_2)} \right]
= \prod_{i=1}^{2m} \frac{1}{2\pi\sqrt{1 - K_G^2}} \int_{\mathbb{R}^2} e^{\frac{1}{2}\theta_1 y_{i,1}^2 + \frac{1}{2}\theta_2 y_{i,2}^2}\, e^{-\frac{1}{2(1 - K_G^2)}\left( y_{i,1}^2 + y_{i,2}^2 - 2 K_G y_{i,1} y_{i,2} \right)}\, dy_{i,1}\, dy_{i,2}
\]
\[
= \prod_{i=1}^{2m} \big( (1 - \theta_1)(1 - \theta_2) - \theta_1 \theta_2 K_G^2 \big)^{-1/2}
= \big( (1 - \theta_1)(1 - \theta_2) - \theta_1 \theta_2 K_G^2 \big)^{-m}. \tag{A.1}
\]
We are now ready to compute the correlation function of $R_C(x,\omega)$:
\[
K_{R_C}(x_1, x_2) = \frac{\mathbb{E}[(R_C(x_1) - \mathbb{E}[R_C](x_1))(R_C(x_2) - \mathbb{E}[R_C](x_2))]}{\sigma_{R_C}(x_1)\, \sigma_{R_C}(x_2)}
= \frac{\mathbb{E}[(X_1 - m)(X_2 - m)]}{m}
= \frac{\mathbb{E}[X_1 X_2] - m^2}{m}
= \frac{1}{m}\left( \frac{\partial^2 g}{\partial \theta_1 \partial \theta_2}\bigg|_{\theta_1 = \theta_2 = 0} - m^2 \right)
= K_G^2(|x_1 - x_2|). \tag{A.2}
\]

References

[1] M. Shinozuka, G. Deodatis, Simulation of stochastic processes by spectral representation, Appl. Mech. Rev. 44 (4) (1991) 191–203.
[2] M. Loève, Probability Theory, fourth ed., Springer-Verlag, New York, 1977.
[3] B.A. Zeldin, P.D. Spanos, Random field simulation using wavelet bases, ASME J. Appl. Mech. 63 (4) (1996) 946–952.
[4] F.W. Elliott, D.J. Horntrop, A.J. Majda, A Fourier–wavelet Monte Carlo method for fractal random fields, J. Comput. Phys. 132 (1997) 384–408.
[5] S. Sakamoto, R.G. Ghanem, Polynomial chaos decomposition for simulation of non-Gaussian non-stationary stochastic processes, ASCE J. Engrg. Mech. 128 (2) (2002) 190–201.
[6] B. Puig, J.L. Akian, Non-Gaussian simulation using Hermite polynomial expansion and maximum entropy principle, Probab. Engrg. Mech. 19 (2004) 293–305.
[7] K.K. Phoon, S.P. Huang, S.T. Quek, Simulation of strongly non-Gaussian processes using Karhunen–Loève expansion, Probab. Engrg. Mech. 20 (2005) 188–198.
[8] R.G. Ghanem, P. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York, 1991.
[9] M.K. Deb, I. Babuška, J.T. Oden, Solution of stochastic partial differential equations using Galerkin finite element techniques, Comput. Methods Appl. Mech. Engrg. 190 (2001) 6359–6372.
[10] D. Xiu, G.E. Karniadakis, The Wiener–Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput. 24 (2) (2002) 619–644.
[11] O.P. Le Maître, H.N. Najm, R.G. Ghanem, O.M. Knio, Uncertainty propagation using Wiener–Haar expansions, J. Comput. Phys. 197 (2004) 28–57.
[12] H.G. Matthies, A. Keese, Galerkin methods for linear and nonlinear elliptic stochastic partial differential equations, Comput. Methods Appl. Mech. Engrg. 194 (12–16) (2005) 1295–1331.
[13] P. Frauenfelder, C. Schwab, R.A. Todor, Finite elements for elliptic problems with stochastic coefficients, Comput. Methods Appl. Mech. Engrg. 194 (2005) 205–228.
[14] X. Wan, G.E. Karniadakis, An adaptive multi-element generalized polynomial chaos method for stochastic differential equations, J. Comput. Phys. 209 (2) (2005) 617–642.
[15] X. Wan, G.E. Karniadakis, Multi-element generalized polynomial chaos for arbitrary probability measures, SIAM J. Sci. Comput. 28 (3) (2006) 901–928.
[16] I. Babuška, F. Nobile, R. Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM J. Numer. Anal. 45 (3) (2007) 1005–1034.
[17] K.E. Atkinson, The Numerical Solution of Integral Equations of the Second Kind, Cambridge University Press, 1997.
[18] C. Schwab, R.A. Todor, Karhunen–Loève approximation of random fields by generalized fast multipole methods, J. Comput. Phys. 217 (1) (2006) 100–122.
[19] X. Wan, G.E. Karniadakis, A sharp error estimate of the fast Gauss transform, J. Comput. Phys. 219 (1) (2006) 7–12.
[20] B. Øksendal, Stochastic Differential Equations, Springer-Verlag, 1998.
[21] E. Parzen, On estimation of a probability density function and mode, Ann. Math. Stat. 33 (3) (1962) 1065–1076.
[22] A.J. Izenman, Recent developments in nonparametric density estimation, J. Amer. Stat. Assoc. 86 (413) (1991) 205–224.
[23] M.P. Wand, M.C. Jones, Kernel Smoothing, Chapman and Hall, 1995.
[24] G. Mastroianni, D. Occorsio, Optimal systems of nodes for Lagrange interpolation on bounded intervals. A survey, J. Comput. Appl. Math. 134 (2001) 325–341.
[25] E. Novak, K. Ritter, Simple cubature formulas with high polynomial exactness, Constr. Approx. 15 (1999) 499–522.
[26] R.A. DeVore, G.G. Lorentz, Constructive Approximation, Springer-Verlag, Berlin, 1993.
[27] V. Barthelmann, E. Novak, K. Ritter, High dimensional polynomial interpolation on sparse grids, Adv. Comput. Math. 12 (2000) 273–288.
[28] F. Nobile, R. Tempone, C.G. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data, SIAM J. Numer. Anal. 46 (5) (2008) 2309–2345.
[29] R.A. Todor, C. Schwab, Convergence rates for sparse chaos approximations of elliptic problems with stochastic coefficients, IMA J. Numer. Anal. 27 (2) (2007) 232–261.
[30] R.G. Ghanem, Stochastic finite elements for heterogeneous media with multiple random non-Gaussian properties, ASCE J. Engrg. Mech. 125 (1) (1999) 26–40.
[31] I. Babuška, R. Tempone, G.E. Zouraris, Galerkin finite element approximations of stochastic elliptic differential equations, SIAM J. Numer. Anal. 42 (2) (2004) 800–825.
[32] X. Wan, G.E. Karniadakis, Error control in multi-element generalized polynomial chaos method for elliptic problems with random coefficients, Commun. Comput. Phys. 5 (2–4) (2009) 793–820.
[33] W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Stat. Comput. 3 (3) (1982) 289–317.
[34] L. Greengard, J. Strain, The fast Gauss transform, SIAM J. Sci. Stat. Comput. 12 (1) (1991) 79–94.
[35] J. Foo, X. Wan, G.E. Karniadakis, The multi-element probabilistic collocation method: error analysis and simulation, J. Comput. Phys. 227 (22) (2008) 9572–9595.