Fast Numerical Methods for Stochastic Computations: A Review, by Dongbin Xiu. May 16th, 2013
Outline

1. Motivation
2. Governing equations and probabilistic framework
3. gPC basis and approximations
4. Stochastic Galerkin method
5. Stochastic collocation methods
Example: Burgers' equation

Let us consider Burgers' equation:
$$u_t + u u_x = \nu u_{xx}, \qquad x \in [-1, 1], \qquad u(-1) = 1, \quad u(1) = -1.$$
It has an exact steady-state solution:
$$u(x) = -A \tanh\left[\frac{A}{2\nu}(x - z)\right],$$
where the amplitude $A$ and the location $z$ of the transition layer are fixed by the boundary conditions. Now perturb the left boundary condition to $u(-1) = 1 + \delta$: the solution keeps the same form, but $A$ and $z$ change.
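To make the sensitivity concrete: when the layer sits away from the left boundary, the left condition forces $A \approx 1 + \delta$ (the tanh is saturated there), and the right condition then gives $z$ in closed form. A minimal Python sketch of this asymptotic argument (an illustration of our own, not from the slides, using the value $\nu = 0.05$ that appears later):

```python
import numpy as np

nu = 0.05  # viscosity, matching the example that follows

def layer_location(delta):
    # Left BC: -A tanh(A(-1 - z)/(2 nu)) = 1 + delta. With the layer away
    # from x = -1 the tanh saturates at -1, so A ~ 1 + delta.
    A = 1.0 + delta
    # Right BC: -A tanh(A(1 - z)/(2 nu)) = -1  =>  z = 1 - (2 nu / A) atanh(1/A)
    return 1.0 - (2.0 * nu / A) * np.arctanh(1.0 / A)

for delta in (0.1, 0.01, 0.001):
    print(f"delta = {delta:6.3f}  ->  z = {layer_location(delta):.3f}")
# Roughly z = 0.86, 0.74, 0.62: an O(0.01) boundary perturbation
# moves the transition layer an O(1) distance ("supersensitivity").
```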
Techniques

1. Monte Carlo and sampling methods. Generate independent realizations of the random inputs from the prescribed PDF and extract statistical information (a minimal sampling sketch follows this list). Straightforward to apply, but a large number of executions is needed.
2. Perturbation methods. Expand the random fields in a Taylor series around their mean and truncate at a given order. Limited to a small number of uncertainties; the systems of equations become very complicated beyond second order.
3. Moment equations. Compute moments of the random solution directly from averages of the original governing equations. Suffers from the closure problem: the equation for each moment involves higher moments.
4. Generalized polynomial chaos (gPC). Express the stochastic solution as an expansion in orthogonal polynomials of the input random parameters. Fast convergence when the solution depends smoothly on the random parameters.
5. Operator-based methods. Manipulate the stochastic operators in the governing equations (Neumann expansion, weighted integral method, ...). Limited to small uncertainties and static problems, and dependent on the specific operator.
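As an illustration of item 1, here is a minimal Monte Carlo sketch for the Burgers example, with the cheap asymptotic layer-location formula from the earlier sketch standing in for a full deterministic solver (so the statistics are indicative only; the seed and sample size are arbitrary choices):

```python
import numpy as np

nu = 0.05
rng = np.random.default_rng(0)                 # fixed seed, an arbitrary choice
delta = rng.uniform(0.0, 0.1, size=10_000)     # random input: delta ~ U(0, 0.1)

# One deterministic "solve" per realization; the asymptotic formula
# replaces the true unsteady Burgers solver.
A = 1.0 + delta
z = 1.0 - (2.0 * nu / A) * np.arctanh(1.0 / A)

print(f"mean(z) = {z.mean():.3f},  std(z) = {z.std():.3f}")
```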
Example: Burgers' equation (II)

Let us consider Burgers' equation with an uncertain boundary condition:
$$u_t + u u_x = 0.05\, u_{xx}, \qquad x \in [-1, 1], \qquad u(-1) = 1 + \delta, \quad \delta \sim U(0, 0.1), \qquad u(1) = -1.$$

Monte Carlo method with $n$ realizations vs. a fourth-order gPC expansion, for the location $z$ of the transition layer:

              n = 100   n = 1000   n = 2000   n = 5000   n = 10000   gPC
    mean(z)    0.819     0.814      0.815      0.814      0.814      0.814
    sigma_z    0.387     0.418      0.417      0.417      0.414      0.414

Perturbation method of order $k$ vs. a fourth-order gPC expansion:

              k = 1   k = 2   k = 3   k = 4   gPC
    mean(z)   0.823   0.824   0.824   0.824   0.814
    sigma_z   0.349   0.349   0.328   0.328   0.414

Monte Carlo needs far more computations to reach the same accuracy as gPC (which costs the equivalent of five deterministic simulations), and the perturbation methods do not even seem to converge.
Governing equations and probabilistic framework

Let us consider
$$L(x, u; y) = 0 \ \text{in } D, \qquad B(x, u; y) = 0 \ \text{on } \partial D,$$
where:
- $L$ is a differential operator;
- $B$ is a boundary operator (Dirichlet, Neumann, ...);
- $x = (x_1, \ldots, x_d) \in D \subset \mathbb{R}^d$ are the spatial coordinates;
- $y = (y_1, \ldots, y_N) \in \mathbb{R}^N$ are the parameters of interest: random and mutually independent, defined on $(\Omega, \mathcal{A}, P)$. They can be physical parameters of the system, continuous random processes on the boundary, random initial conditions, ...

We are interested in a set of quantities of interest (QoI), called observables:
$$g = (g_1, \ldots, g_K) = G(u) \in \mathbb{R}^K.$$

Let $\rho_i : \Gamma_i \to \mathbb{R}^+$ be the probability density function (PDF) of $y_i$ and
$$\rho(y) = \prod_{i=1}^{N} \rho_i(y_i)$$
the joint PDF of $y$, with support $\Gamma = \prod_{i=1}^{N} \Gamma_i$.
gPC basis and approximations (I)

One-dimensional orthogonal polynomial spaces in $\Gamma_i$:
$$W_{i, d_i} := \left\{ v : \Gamma_i \to \mathbb{R} \,\middle|\, v \in \mathrm{span}\{\phi_m(y_i)\}_{m=0}^{d_i} \right\}, \qquad i = 1, \ldots, N,$$
where
$$\int_{\Gamma_i} \rho_i(y_i)\, \phi_m(y_i)\, \phi_n(y_i)\, dy_i = h_m^2\, \delta_{mn} \quad \text{and} \quad h_m^2 = \int_{\Gamma_i} \rho_i\, \phi_m^2\, dy_i.$$

The $N$-dimensional orthogonal polynomial space in $\Gamma$ is
$$W_N^P := \bigoplus_{|d| \le P} \bigotimes_{i=1}^{N} W_{i, d_i},$$
where $d = (d_1, \ldots, d_N) \in \mathbb{N}_0^N$ and $|d| = d_1 + \cdots + d_N$. The orthonormal polynomials are constructed as
$$\Phi_m(y) = \phi_{m_1}(y_1) \cdots \phi_{m_N}(y_N), \qquad m_1 + \cdots + m_N \le P.$$
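The dimension of $W_N^P$ grows quickly with $N$ and $P$. A small Python sketch (illustrative, not from the slides) that enumerates the total-degree multi-index set and checks its cardinality $M = \binom{N+P}{N}$:

```python
import math
from itertools import product

def multi_indices(N, P):
    """All multi-indices (m1, ..., mN) with total degree m1 + ... + mN <= P."""
    return [m for m in product(range(P + 1), repeat=N) if sum(m) <= P]

N, P = 3, 4
idx = multi_indices(N, P)
M = math.comb(N + P, N)
assert len(idx) == M   # dimension of the total-degree space
print(M)               # 35
```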
gPC basis and approximations (II)

Examples (the Wiener-Askey correspondence):

                 Distribution         gPC basis polynomials   Support
    Continuous   Gaussian             Hermite                 (-inf, inf)
                 Gamma                Laguerre                [0, inf)
                 Beta                 Jacobi                  [a, b]
                 Uniform              Legendre                [a, b]
    Discrete     Poisson              Charlier                {0, 1, 2, ...}
                 Binomial             Krawtchouk              {0, 1, ..., N}
                 Negative binomial    Meixner                 {0, 1, 2, ...}
                 Hypergeometric       Hahn                    {0, 1, ..., N}

The $P$-th order gPC approximation of $u$ is
$$u_N^P(x, y) = \sum_{m=1}^{M} \hat u_m(x)\, \Phi_m(y), \qquad M = \binom{N + P}{N},$$
where
$$\hat u_m(x) = \int u(x, y)\, \Phi_m(y)\, \rho(y)\, dy = E[u(x, y)\, \Phi_m(y)], \qquad 1 \le m \le M.$$
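As a one-dimensional ($N = 1$, uniform/Legendre) illustration of this projection, here is a short Python sketch; the stand-in "solution" $u(y) = e^y$ and the quadrature order are arbitrary choices, not from the slides, and the mean/variance extraction at the end anticipates the formulas on the next slide:

```python
import numpy as np
from numpy.polynomial import legendre

# y ~ U(-1, 1), rho = 1/2; orthonormal Legendre basis: sqrt(2m+1) * P_m
def gpc_coeffs(f, P, Q=64):
    nodes, weights = legendre.leggauss(Q)        # Gauss-Legendre rule on [-1, 1]
    coeffs = np.empty(P + 1)
    for m in range(P + 1):
        Pm = legendre.legval(nodes, np.eye(P + 1)[m])   # evaluate P_m
        # u_hat_m = E[f * Phi_m] = (1/2) * integral of f(y) sqrt(2m+1) P_m(y) dy
        coeffs[m] = 0.5 * np.sum(weights * f(nodes) * np.sqrt(2 * m + 1) * Pm)
    return coeffs

u = lambda y: np.exp(y)            # stand-in "solution" u(y)
c = gpc_coeffs(u, P=4)
mean, var = c[0], np.sum(c[1:] ** 2)
print(mean, var)   # ~1.1752 (= sinh 1) and ~0.432, close to the exact values
```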
Statistical information

From the gPC coefficients we can compute, for instance, the following statistical information:

Mean:
$$E[u](x) \approx E[u_N^P] = \int \left( \sum_{m=1}^{M} \hat u_m(x)\, \Phi_m(y) \right) \rho(y)\, dy = \hat u_1(x).$$

Covariance:
$$\mathrm{Cov}[u](x_1, x_2) \approx E\Big[ \big( u_N^P(x_1, y) - E[u_N^P(x_1, y)] \big) \big( u_N^P(x_2, y) - E[u_N^P(x_2, y)] \big) \Big] = \sum_{m=2}^{M} \hat u_m(x_1)\, \hat u_m(x_2).$$

Variance:
$$\mathrm{Var}[u](x) \approx E\Big[ \big( u_N^P(x, y) - E[u_N^P(x, y)] \big)^2 \Big] = \sum_{m=2}^{M} \hat u_m^2(x).$$

Sensitivity coefficients:
$$\frac{\partial u}{\partial y_j} \approx \int \sum_{m=1}^{M} \hat u_m(x)\, \frac{\partial \Phi_m(y)}{\partial y_j}\, \rho(y)\, dy, \qquad j = 1, \ldots, N.$$
Stochastic Galerkin method

We approximate $u_N^P$ by
$$v_N^P(x, y) = \sum_{m=1}^{M} \hat v_m(x)\, \Phi_m(y)$$
such that
$$\int L(x, v_N^P; y)\, w(y)\, \rho(y)\, dy = 0 \ \text{in } D, \qquad \int B(x, v_N^P; y)\, w(y)\, \rho(y)\, dy = 0 \ \text{on } \partial D,$$
for all $w \in W_N^P$.

The resulting equations are a coupled system of $M$ deterministic PDEs for $\{\hat v_m\}$.
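To show the coupling concretely, here is a minimal stochastic Galerkin sketch for the scalar random ODE $u'(t) = -k(y)\,u$, $u(0) = 1$, with $k(y) = \bar k + \sigma y$ and $y \sim U(-1, 1)$ (a toy problem of our own choosing, not from the slides; parameter values and time step are arbitrary):

```python
import numpy as np
from numpy.polynomial import legendre

kbar, sigma, P = 1.0, 0.5, 5
nodes, weights = legendre.leggauss(2 * P + 2)

# Orthonormal Legendre basis sqrt(2m+1) P_m evaluated at the quadrature nodes
Phi = np.array([np.sqrt(2 * m + 1) * legendre.legval(nodes, np.eye(P + 1)[m])
                for m in range(P + 1)])

# Galerkin coupling matrix A_ij = E[k(y) Phi_i Phi_j]   (rho = 1/2 on [-1, 1])
k = kbar + sigma * nodes
A = 0.5 * (Phi * (weights * k)) @ Phi.T

# Projecting u' = -k u onto each Phi_i gives the coupled system uhat' = -A uhat;
# march it with explicit Euler.
uhat = np.zeros(P + 1); uhat[0] = 1.0      # deterministic initial condition u(0) = 1
dt, T = 1e-3, 1.0
for _ in range(int(T / dt)):
    uhat = uhat - dt * (A @ uhat)

mean, var = uhat[0], np.sum(uhat[1:] ** 2)
print(mean, var)   # roughly 0.383 and 0.012, cf. exact E and Var of exp(-k(y) T)
```

The $M$ coefficient equations must be solved together through the matrix $E[k\,\Phi_i \Phi_j]$, which is why the Galerkin approach is called intrusive.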
Stochastic collocation methods

Lagrange interpolation approach: let $\Theta_N = \{y^{(i)}\}_{i=1}^{Q} \subset \Gamma$ be a set of nodes. Then
$$u(x, y) \approx I u(x, y) = \sum_{k=1}^{Q} \tilde u_k(x)\, L_k(y), \qquad x \in D,$$
where $L_i(y^{(j)}) = \delta_{ij}$ and $\tilde u_k(x) = u(x, y^{(k)})$, $1 \le i, j, k \le Q$.

Pseudo-spectral approach: let $\Theta_N = \{y^{(j)}, \alpha^{(j)}\}_{j=1}^{Q} \subset \Gamma$ be a set of nodes and weights. Then
$$w_N^P(x, y) = \sum_{m=1}^{M} \hat w_m(x)\, \Phi_m(y), \qquad \text{with} \quad \hat w_m(x) = \sum_{j=1}^{Q} u(x, y^{(j)})\, \Phi_m(y^{(j)})\, \alpha^{(j)}.$$

In both cases we only have to solve $Q$ uncoupled deterministic problems, one for each node $y^{(k)}$:
$$L(x, \tilde u_k; y^{(k)}) = 0 \ \text{in } D, \qquad B(x, \tilde u_k; y^{(k)}) = 0 \ \text{on } \partial D.$$
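A minimal non-intrusive pseudo-spectral sketch for the same toy ODE as in the Galerkin example (again our own illustration; the known exact solution $e^{-k(y)T}$ stands in for a deterministic solver, and the weights $\alpha^{(j)}$ are Gauss-Legendre weights scaled by $\rho = 1/2$):

```python
import numpy as np
from numpy.polynomial import legendre

kbar, sigma, P, T = 1.0, 0.5, 5, 1.0
nodes, alpha = legendre.leggauss(P + 1)          # Q = P + 1 collocation nodes

# One independent deterministic "solve" per node (here: the exact solution)
u_nodes = np.exp(-(kbar + sigma * nodes) * T)

# Project the nodal solutions onto the orthonormal Legendre basis
what = np.array([0.5 * np.sum(alpha * u_nodes *
                              np.sqrt(2 * m + 1) * legendre.legval(nodes, np.eye(P + 1)[m]))
                 for m in range(P + 1)])

print(what[0], np.sum(what[1:] ** 2))   # mean and variance, cf. the Galerkin sketch
```

The $M$ coefficients come from $Q$ independent deterministic solves, which is why collocation is called non-intrusive.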
Points selection

The choice of nodes is straightforward in one-dimensional ($N = 1$) problems, where Gauss quadratures are usually the optimal choice. But what about large dimensions ($N \gg 1$)? The main options are (see the sketch after this list):

- Tensor products of one-dimensional nodes.
- Sparse grids: subsets of the full tensor product, based on the Smolyak algorithm.
- Cubature rules.
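A small sketch of the first option, a full tensor grid built from a one-dimensional Gauss-Legendre rule, mainly to show how the node count $q^N$ blows up with dimension (the rule size $q = 5$ is an arbitrary choice; not from the slides):

```python
import numpy as np
from itertools import product
from numpy.polynomial import legendre

def tensor_grid(q, N):
    """Full tensor product of a q-point 1D Gauss-Legendre rule in N dimensions."""
    x1, w1 = legendre.leggauss(q)
    nodes = np.array(list(product(x1, repeat=N)))                # shape (q**N, N)
    weights = np.prod(np.array(list(product(w1, repeat=N))), axis=1)
    return nodes, weights

nodes, weights = tensor_grid(5, 2)
print(nodes.shape)            # (25, 2)
for N in (2, 5, 10):
    print(N, 5 ** N)          # 25, 3125, 9765625: the curse of dimensionality
```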
Thanks for your attention!