Fast Numerical Methods for Stochastic Computations


Fast Numerical Methods for Stochastic Computations: a review by Dongbin Xiu. May 16th, 2013.

Outline

1. Motivation
2. Governing equations and probabilistic framework
3. gPC basis and approximations
4. Stochastic Galerkin method
5. Stochastic collocation methods

Example: Burgers' Equation

Let us consider Burgers' equation with a perturbed left boundary condition:

    u_t + u u_x = ν u_xx,  x ∈ [−1, 1],
    u(−1) = 1 + δ,
    u(1) = −1.

It has an exact steady-state solution:

    u(x) = −A tanh( (A / 2ν) (x − z) ),

where the constants A and z are fixed by the boundary conditions.
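The supersensitivity of this example can be checked numerically: imposing the two boundary conditions on the exact profile gives a 2x2 nonlinear system for (A, z). The sketch below is an illustration, not part of the original slides; it solves that system with a basic Newton iteration, assuming ν = 0.05 (the value used later in the talk) and an assumed initial guess, and shows that a 1% boundary perturbation moves the transition location z by an O(1) amount.

```python
import numpy as np

nu = 0.05  # viscosity, as on the later slide

def residual(p, delta):
    """Boundary-condition residuals for the profile u(x) = -A tanh(A/(2 nu) (x - z))."""
    A, z = p
    return np.array([
        A * np.tanh(A / (2 * nu) * (1 + z)) - (1 + delta),  # u(-1) = 1 + delta
        A * np.tanh(A / (2 * nu) * (1 - z)) - 1.0,          # u(+1) = -1
    ])

def solve(delta, p0=(1.0, 0.8)):
    """Newton iteration with a forward-difference Jacobian; p0 is an assumed initial guess."""
    p = np.array(p0)
    for _ in range(50):
        f = residual(p, delta)
        if np.abs(f).max() < 1e-12:
            break
        J = np.empty((2, 2))
        for j in range(2):
            dp = np.zeros(2)
            dp[j] = 1e-7
            J[:, j] = (residual(p + dp, delta) - f) / 1e-7
        p = p - np.linalg.solve(J, f)
    return p

# By symmetry z = 0 for delta = 0, yet a 1% perturbation moves z to about 0.74
for delta in (0.01, 0.05):
    A, z = solve(delta)
    print(delta, z)
```

The δ = 0 case is deliberately avoided in the solve: there the residual is numerically flat in z, which is exactly the ill-conditioning the example is meant to expose.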

Techniques

1. Monte Carlo and sampling methods: generate independent realizations of the random inputs from the prescribed PDF and extract statistical information. Straightforward to apply, but a large number of executions is needed.
2. Perturbation methods: expand (Taylor) the random fields around their mean and truncate at a given order. Limited to a small number of uncertainties; the systems of equations become complicated beyond 2nd order.
3. Moment equations: compute moments of the random solution directly from averages of the original governing equations. Closure problem: higher moments are needed for the derivation of each moment.
4. Generalized polynomial chaos (gPC): express stochastic solutions as orthogonal polynomials of the input random parameters. Fast convergence when the solution depends smoothly on the random parameters.
5. Operator-based methods: manipulate the stochastic operators in the governing equations (Neumann expansion, weighted integral method, ...). Limited to small uncertainties and static problems; dependent on the operator.
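A minimal sketch of technique 1, using a hypothetical smooth quantity of interest g(δ) chosen for illustration (not the Burgers QoI): the root-mean-square error of the Monte Carlo mean estimate decays like n^(-1/2), which is why so many executions are needed.

```python
import numpy as np

rng = np.random.default_rng(42)

def g(delta):
    """Hypothetical smooth quantity of interest of one random input delta ~ U(0, 0.1)."""
    return np.tanh(10 * delta)

exact = np.log(np.cosh(1.0))  # E[g], computed analytically for this toy g

def rms_error(n, reps=200):
    """RMS error of the n-sample Monte Carlo estimate of E[g], over repeated experiments."""
    samples = rng.uniform(0.0, 0.1, size=(reps, n))
    estimates = g(samples).mean(axis=1)
    return np.sqrt(np.mean((estimates - exact) ** 2))

e100, e10000 = rms_error(100), rms_error(10_000)
print(e100 / e10000)  # close to 10, i.e. error ~ n**-0.5
```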

Example: Burgers' Equation (II)

Let us consider Burgers' equation with an uncertain boundary condition:

    u_t + u u_x = 0.05 u_xx,  x ∈ [−1, 1],
    u(−1) = 1 + δ,  δ ~ U(0, 0.1),
    u(1) = −1.

Monte Carlo with n realizations vs. a fourth-order gPC expansion:

              n = 100   n = 1000   n = 2000   n = 5000   n = 10000   gPC
    mean z    0.819     0.814      0.815      0.814      0.814       0.814
    σ_z       0.387     0.418      0.417      0.417      0.414       0.414

Perturbation method of order k vs. a fourth-order gPC expansion:

              k = 1   k = 2   k = 3   k = 4   gPC
    mean z    0.823   0.824   0.824   0.824   0.814
    σ_z       0.349   0.349   0.328   0.328   0.414

Monte Carlo needs far more computation to reach the same accuracy as gPC (the gPC solution costs the equivalent of five deterministic simulations), while the perturbation method does not even appear to converge.

Governing equations and probabilistic framework

Let us consider

    L(x, u; y) = 0  in D,
    B(x, u; y) = 0  on ∂D,

where
- L is a differential operator,
- B is a boundary operator (Dirichlet, Neumann, ...),
- x = (x_1, ..., x_d) ∈ D ⊂ R^d are the spatial coordinates,
- y = (y_1, ..., y_N) ∈ R^N are the parameters of interest: random and mutually independent, defined on (Ω, A, P). They can be physical parameters of the system, continuous random processes on the boundary, random initial conditions, ...

We are interested in a set of quantities of interest (QoI), called observables:

    g = (g_1, ..., g_K) = G(u) ∈ R^K.

Let ρ_i : Γ_i → R^+ be the probability density function (PDF) of y_i and

    ρ(y) = ∏_{i=1}^N ρ_i(y_i)

the joint PDF of y, with support Γ = ∏_{i=1}^N Γ_i.

gPC basis and approximations (I)

One-dimensional orthogonal polynomial spaces on Γ_i:

    W_{i,d_i} := { v : Γ_i → R : v ∈ span{ φ_m(y_i) }_{m=0}^{d_i} },  i = 1, ..., N,

where

    ∫_{Γ_i} ρ_i(y_i) φ_m(y_i) φ_n(y_i) dy_i = h_m^2 δ_mn  and  h_m^2 = ∫_{Γ_i} ρ_i φ_m^2 dy_i.

N-dimensional orthogonal polynomial space on Γ:

    W_N^P := ⊕_{|d| ≤ P} ( W_{1,d_1} ⊗ ... ⊗ W_{N,d_N} ),

where d = (d_1, ..., d_N) ∈ N_0^N and |d| = d_1 + ... + d_N. The orthonormal polynomials are constructed as

    Φ_m(y) = φ_{m_1}(y_1) ... φ_{m_N}(y_N),  m_1 + ... + m_N ≤ P.

gPC basis and approximations (II)

Examples:

    Distribution            gPC basis polynomials   Support
    Continuous:
      Gaussian              Hermite                 (−∞, ∞)
      Gamma                 Laguerre                [0, ∞)
      Beta                  Jacobi                  [a, b]
      Uniform               Legendre                [a, b]
    Discrete:
      Poisson               Charlier                {0, 1, 2, ...}
      Binomial              Krawtchouk              {0, 1, ..., N}
      Negative binomial     Meixner                 {0, 1, 2, ...}
      Hypergeometric        Hahn                    {0, 1, ..., N}

The Pth-order gPC approximation of u is

    u_N^P(x, y) = Σ_{m=1}^M û_m(x) Φ_m(y),  M = (N + P choose N),

where

    û_m(x) = ∫ u(x, y) Φ_m(y) ρ(y) dy = E[u(x, y) Φ_m(y)],  1 ≤ m ≤ M.
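As a sanity check of the uniform/Legendre pairing in the table, the sketch below (an illustration with numpy, not from the slides) builds the orthonormal basis φ_m = sqrt(2m+1) P_m for the density ρ = 1/2 on [−1, 1] and verifies ∫ φ_m φ_n ρ dy = δ_mn by Gauss-Legendre quadrature.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Gauss-Legendre rule on [-1, 1]; exact for polynomials of degree <= 2q - 1
q = 10
nodes, weights = L.leggauss(q)
rho = 0.5  # PDF of U(-1, 1)

def phi(m, y):
    """Orthonormal Legendre polynomial of degree m w.r.t. the density rho."""
    c = np.zeros(m + 1)
    c[m] = 1.0
    return np.sqrt(2 * m + 1) * L.legval(y, c)

# Gram matrix <phi_m, phi_n> = integral of phi_m phi_n rho, computed by quadrature
G = np.array([[np.sum(weights * rho * phi(m, nodes) * phi(n, nodes))
               for n in range(5)] for m in range(5)])
err = np.max(np.abs(G - np.eye(5)))
print(err)  # at machine-precision level
```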

Statistical information

From the gPC coefficients we can compute, for instance:

Mean:

    E[u](x) ≈ E[u_N^P] = ∫ ( Σ_{m=1}^M û_m(x) Φ_m(y) ) ρ(y) dy = û_1(x)

Covariance:

    Cov[u](x_1, x_2) ≈ E[ (u_N^P(x_1, y) − E[u_N^P(x_1, y)]) (u_N^P(x_2, y) − E[u_N^P(x_2, y)]) ] = Σ_{m=2}^M û_m(x_1) û_m(x_2)

Variance:

    Var[u](x) ≈ E[ (u_N^P(x, y) − E[u_N^P(x, y)])^2 ] = Σ_{m=2}^M û_m(x)^2

Sensitivity coefficients:

    E[∂u/∂y_j] ≈ ∫ ( Σ_{m=1}^M û_m(x) ∂Φ_m(y)/∂y_j ) ρ(y) dy,  j = 1, ..., N
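These formulas are easy to exercise on a toy problem. Assuming u(y) = exp(y) with y ~ U(−1, 1) as a stand-in "solution" (not from the slides; coefficients are indexed from 0 in the code rather than from 1), the sketch computes the gPC coefficients by quadrature, then reads the mean off the first coefficient and the variance off the sum of squares of the rest. The exact values are sinh(1) and sinh(2)/2 − sinh(1)^2.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Stand-in "solution" u(y) = exp(y) with y ~ U(-1, 1)
P = 8
nodes, w = L.leggauss(20)
rho = 0.5

def phi(m, y):
    """Orthonormal Legendre polynomial of degree m for the density rho = 1/2."""
    c = np.zeros(m + 1)
    c[m] = 1.0
    return np.sqrt(2 * m + 1) * L.legval(y, c)

# gPC coefficients u_hat_m = E[u(y) Phi_m(y)], computed by quadrature
u_hat = np.array([np.sum(w * rho * np.exp(nodes) * phi(m, nodes)) for m in range(P + 1)])

mean = u_hat[0]               # first coefficient
var = np.sum(u_hat[1:] ** 2)  # sum of squares of the remaining coefficients
print(mean, var)  # exact: sinh(1) and sinh(2)/2 - sinh(1)**2
```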

Stochastic Galerkin method

We approximate u_N^P by

    v_N^P(x, y) = Σ_{m=1}^M v̂_m(x) Φ_m(y)

such that

    ∫ L(x, v_N^P; y) w(y) ρ(y) dy = 0  in D,
    ∫ B(x, v_N^P; y) w(y) ρ(y) dy = 0  on ∂D,

for all w ∈ W_N^P.

The resulting equations are a coupled system of M deterministic PDEs for {v̂_m}.
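For an algebraic model problem the coupled Galerkin system is small enough to write out directly. Assuming the stand-in problem (2 + y) u = 1 with y ~ U(−1, 1) (an illustration, not from the slides), testing against each basis polynomial Φ_n gives the coupled linear system Σ_m E[(2 + y) Φ_m Φ_n] v̂_m = E[Φ_n]; the mean of the Galerkin solution should approach E[1/(2 + y)] = (1/2) ln 3.

```python
import numpy as np
from numpy.polynomial import legendre as L

P = 10   # gPC order
q = 40   # quadrature nodes
nodes, w = L.leggauss(q)
rho = 0.5  # PDF of U(-1, 1)

def phi(m, y):
    """Orthonormal Legendre polynomial of degree m for the density rho = 1/2."""
    c = np.zeros(m + 1)
    c[m] = 1.0
    return np.sqrt(2 * m + 1) * L.legval(y, c)

B = np.array([phi(m, nodes) for m in range(P + 1)])  # basis evaluated at the nodes

# Coupled Galerkin matrix A_nm = E[(2 + y) Phi_m Phi_n], assembled by quadrature
A = (B * (w * rho * (2 + nodes))) @ B.T
b = (B * (w * rho)) @ np.ones(q)  # right-hand side E[1 * Phi_n]
u_hat = np.linalg.solve(A, b)     # all coefficients are coupled through A

print(abs(u_hat[0] - 0.5 * np.log(3.0)))  # error in the mean
```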

Collocation methods

Lagrange interpolation approach: let Θ_N = { y^(i) }_{i=1}^Q ⊂ Γ be a set of nodes. Then

    u(x, y) ≈ Iu(x, y) = Σ_{k=1}^Q ũ_k(x) L_k(y),  x ∈ D,

where L_i(y^(j)) = δ_ij and ũ_k(x) = u(x, y^(k)), 1 ≤ i, j, k ≤ Q.

Pseudo-spectral approach: let Θ_N = { y^(j), α^(j) }_{j=1}^Q ⊂ Γ be a set of nodes and weights. Then

    w_N^P(x, y) = Σ_{m=1}^M ŵ_m(x) Φ_m(y),  with  ŵ_m(x) = Σ_{j=1}^Q u(x, y^(j)) Φ_m(y^(j)) α^(j).

In both cases, we have to solve Q uncoupled deterministic problems, one for each node y^(k):

    L(x, ũ_k; y^(k)) = 0  in D,
    B(x, ũ_k; y^(k)) = 0  on ∂D.
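The pseudo-spectral approach is non-intrusive: it only evaluates a deterministic solver at the nodes. Assuming the black-box "solver" u(y) = 1/(2 + y) with y ~ U(−1, 1) (a stand-in for illustration), the sketch computes ŵ_m = Σ_j u(y^(j)) Φ_m(y^(j)) α^(j) with Gauss-Legendre nodes and recovers the mean (1/2) ln 3 and the variance 1/3 − ((1/2) ln 3)^2.

```python
import numpy as np
from numpy.polynomial import legendre as L

def solver(y):
    """Black-box deterministic solver evaluated at one node (a stand-in)."""
    return 1.0 / (2.0 + y)

P = 10
q = 40
nodes, w = L.leggauss(q)
alpha = 0.5 * w  # quadrature weights absorbing the density rho = 1/2

def phi(m, y):
    """Orthonormal Legendre polynomial of degree m for the density rho = 1/2."""
    c = np.zeros(m + 1)
    c[m] = 1.0
    return np.sqrt(2 * m + 1) * L.legval(y, c)

# Non-intrusive projection: Q uncoupled solver calls, then weighted sums
w_hat = np.array([np.sum(solver(nodes) * phi(m, nodes) * alpha) for m in range(P + 1)])

mean = w_hat[0]
var = np.sum(w_hat[1:] ** 2)
print(mean, var)  # exact: 0.5*ln(3) and 1/3 - (0.5*ln(3))**2
```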

Points selection

In one-dimensional problems (N = 1) the choice is straightforward: Gauss quadrature nodes are usually the optimal choice. But what about large dimensions (N ≫ 1)?
- Tensor products of one-dimensional nodes
- Sparse grids: subsets of the full tensor product, based on the Smolyak algorithm
- Cubature rules
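The gap between tensor products and sparse grids can be quantified by counting nodes. The sketch below (an illustration; the slides do not fix a rule) builds the Smolyak point set from nested Clenshaw-Curtis rules, one standard choice, and compares its size with the full tensor grid of the same 1D resolution.

```python
import numpy as np
from itertools import product

def cc_nodes(i):
    """Nested Clenshaw-Curtis nodes on [-1, 1] at level i >= 1."""
    if i == 1:
        return np.array([0.0])
    m = 2 ** (i - 1) + 1
    return np.cos(np.pi * np.arange(m) / (m - 1))

def smolyak_points(N, w):
    """Unique nodes of the Smolyak sparse grid of level w in N dimensions."""
    pts = set()
    # union of tensor grids over multi-indices i with |i| <= N + w
    for idx in product(range(1, w + 2), repeat=N):
        if sum(idx) <= N + w:
            for p in product(*(cc_nodes(i) for i in idx)):
                pts.add(tuple(np.round(p, 12)))  # rounding merges nested duplicates
    return pts

N, w = 4, 3
sparse = len(smolyak_points(N, w))
full = (2 ** w + 1) ** N  # tensor grid using the finest 1D rule in every dimension
print(sparse, full)       # the sparse grid is far smaller than the tensor grid
```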

Thanks for your attention!