Introduction to Uncertainty Quantification in Computational Science Handout #3

Introduction to Uncertainty Quantification in Computational Science Handout #3 Gianluca Iaccarino Department of Mechanical Engineering Stanford University June 29 - July 1, 2009 Scuola di Dottorato di Ricerca in Ingegneria Industriale Dottorato di Ricerca in Ingegneria Aerospaziale, Navale e della Qualita Universita di Napoli Federico II Based on the Lecture Notes for ME470 prepared with Dr. A. Doostan

Uncertainty Propagation Using Stochastic Collocation

Uncertainty Quantification Computational Framework

Consider a generic computational model with inputs $y \in \mathbb{R}^d$ (d large). How do we handle the uncertainties?

1. Data assimilation: characterize uncertainties in the inputs
2. Uncertainty propagation: perform simulations accounting for the identified uncertainties
3. Certification: establish acceptance criteria for predictions

Sampling Methods

Sample the random input parameter vector according to its probability distribution. Perform a sequence of independent simulations. Compute statistics of the quantity of interest.

Now we will introduce an alternative way of computing the output statistics without random sampling!

Numerical Integration

The basic idea is to use advanced numerical integration techniques. Recall numerical quadrature. Problem: compute the integral of f(x) over the interval [a, b].

Numerical integration

Express integrals as a finite, weighted sum:

$\int_a^b f(\xi)\,d\xi \approx \sum_{i=1}^{N} w_i\, f(\xi_i)$

Examples: midpoint, trapezoidal, Simpson rules. Remark: all use equispaced abscissas $\xi_i \in [a, b]$ (Newton-Cotes).

We can do better: Gauss quadrature rules. Observation: we can always fit a polynomial of degree N-1 to a set of N points (N = 2: line, N = 3: parabola, etc.). By carefully choosing the abscissas and weights $(\xi_i, w_i)$, we can evaluate the integral exactly if $f(\xi)$ is a polynomial of degree 2N-1.

What are the abscissas $\xi_i$ and the weights $w_i$?
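The degree-of-exactness claim is easy to check numerically. A minimal sketch using NumPy's built-in Gauss-Legendre rule; the `leggauss` call and the test integrand $x^6$ are illustrative choices, not part of the original slides:

```python
import numpy as np

def gauss_legendre_integrate(f, n):
    """Integrate f over [-1, 1] with an n-point Gauss-Legendre rule."""
    xi, w = np.polynomial.legendre.leggauss(n)  # abscissas and weights
    return np.sum(w * f(xi))

# An N-point rule integrates polynomials up to degree 2N-1 exactly.
# The integral of x^6 over [-1, 1] is 2/7.
exact = 2.0 / 7.0
approx_4pt = gauss_legendre_integrate(lambda x: x**6, 4)  # 2*4-1 = 7 >= 6: exact
approx_3pt = gauss_legendre_integrate(lambda x: x**6, 3)  # 2*3-1 = 5 <  6: not exact
```

The 4-point rule reproduces the integral to machine precision, while the 3-point rule does not, exactly as the 2N-1 rule predicts.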

Numerical quadrature

$\int_a^b f(\xi)\,d\xi \approx \sum_{i=1}^{N} w_i\, f(\xi_i)$

The abscissas are the roots of orthogonal polynomials (Legendre):

$\int_{-1}^{1} p_j(x)\,p_i(x)\,dx = C_i\,\delta_{ij}$

Abscissas: impose $p_N(\xi) = 0 \Rightarrow \xi_1, \ldots, \xi_N$

The weights are the integrals of the Lagrange interpolating polynomials passing through the abscissas:

$w_i = \int_{-1}^{1} L_{i,N}(\xi)\,d\xi \quad\text{with}\quad L_{i,N}(\xi) = \prod_{k=1,\,k\neq i}^{N} \frac{\xi - \xi_k}{\xi_i - \xi_k}$

Legendre-Gauss quadrature

Legendre Polynomials

Three-term recurrence: $(n+1)\,P_{n+1}(\xi) = (2n+1)\,\xi\,P_n(\xi) - n\,P_{n-1}(\xi)$

Orthogonality: $\int_{-1}^{1} P_j(x)\,P_i(x)\,dx = \frac{2}{2i+1}\,\delta_{ij}$

Legendre-Gauss quadrature

Both the abscissas and the weights are tabulated and can be computed in several ways. For example:

  function I = gauss(f,n)               % (n+1)-pt Gauss quadrature
  beta = .5./sqrt(1-(2*(1:n)).^(-2));   % 3-term recurrence coeffs
  T = diag(beta,1) + diag(beta,-1);     % Jacobi matrix
  [V,D] = eig(T);                       % eigenvalue decomposition
  x = diag(D); [x,i] = sort(x);         % nodes (= Legendre points)
  w = 2*V(1,i).^2;                      % weights
  I = w*feval(f,x);                     % the integral

The command gauss(@cos,6) yields 1.68294196961579, which is correct to double precision [Trefethen, 2008].
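For readers without MATLAB, the same Jacobi-matrix (Golub-Welsch) construction can be sketched in Python with NumPy; the function name `gauss` mirrors the snippet above:

```python
import numpy as np

def gauss(f, n):
    """(n+1)-point Gauss-Legendre quadrature of f on [-1, 1]
    via the Jacobi-matrix eigenvalue method (Golub-Welsch)."""
    k = np.arange(1, n + 1)
    beta = 0.5 / np.sqrt(1.0 - (2.0 * k) ** -2.0)   # 3-term recurrence coeffs
    T = np.diag(beta, 1) + np.diag(beta, -1)        # Jacobi matrix
    x, V = np.linalg.eigh(T)                        # nodes = eigenvalues (sorted)
    w = 2.0 * V[0, :] ** 2                          # weights from eigenvectors
    return w @ f(x)                                 # the integral

I = gauss(np.cos, 6)   # integral of cos over [-1, 1] = 2*sin(1)
```

`eigh` already returns the eigenvalues in ascending order, so no explicit sort is needed.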

Quadrature for Uncertainty Analysis Stochastic Collocation

What does quadrature have to do with uncertainty?

Assume y is a uniform random variable describing the uncertainty in the input and Q is the quantity of interest. The mean of Q is

$\bar{Q} = \int_{-1}^{1} Q(y)\,f_y(y)\,dy = \frac{1}{2}\int_{-1}^{1} Q(y)\,dy \quad\text{if } y \sim U(-1,1)$

Similarly for the variance, etc.

Moments of a quantity of interest are integrals in the probability space defined by the uncertain variables!

Stochastic Collocation

For an input variable that is U(-1,1):

Generate N values of the parameter, $y_k$, k = 1, ..., N: the abscissas (the zeros of the Legendre polynomial $P_N$).

Perform N simulations at the selected abscissas and obtain $Q(y_k)$.

Compute statistics as weighted sums (the weights are integrals of the Lagrange polynomials through the abscissas, with the 1/2 pdf factor absorbed):

$\bar{Q} = \frac{1}{2}\int_{-1}^{1} Q(y)\,dy = \sum_{k=1}^{N} Q(y_k)\,w_k$

No randomness is involved, but convergence is exponential (cf. MC)!
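The three-step recipe above can be sketched as follows; the quantity of interest `Q` is a hypothetical stand-in for an expensive simulation, and the 1/2 pdf factor of U(-1,1) is folded into the weights:

```python
import numpy as np

def collocation_mean(Q, n):
    """Mean of Q(y) for y ~ U(-1,1) by Gauss-Legendre stochastic collocation."""
    y, w = np.polynomial.legendre.leggauss(n)   # abscissas = zeros of P_n
    Qk = np.array([Q(yk) for yk in y])          # n "simulations" at the abscissas
    return np.sum(0.5 * w * Qk)                 # weighted sum; 0.5 is the U(-1,1) pdf

# Hypothetical quantity of interest (stands in for a solver run):
Q = lambda y: np.exp(y)
mean_est = collocation_mean(Q, 8)
# Exact mean: (1/2) * (e - 1/e) = sinh(1)
```

Even n = 8 evaluations reproduce the exact mean to near machine precision for this smooth integrand, illustrating the exponential convergence claimed above.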

Beyond Uniform rvs Gaussian rvs

What do we do if the input variables are not uniformly distributed?

As you probably know, numerical quadrature is more than just Legendre-Gauss quadrature! Consider y distributed as N(0,1); we can build polynomials orthogonal w.r.t. a Gaussian measure:

$\int_{-\infty}^{\infty} p_j(x)\,p_i(x)\,e^{-x^2}\,dx = C_i\,\delta_{ij}$

Hermite polynomials play the same role for normal r.v.s as Legendre polynomials do for uniform r.v.s!
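A sketch of Hermite-Gauss quadrature for expectations under N(0,1), assuming NumPy. Note that NumPy's `hermgauss` uses the weight $e^{-x^2}$ shown above, so a $\sqrt{2}$ change of variables maps it to the standard Gaussian measure:

```python
import numpy as np

def gaussian_expectation(g, n):
    """E[g(Y)] for Y ~ N(0,1) via n-point Gauss-Hermite quadrature.
    hermgauss targets the weight exp(-x^2); substituting y = sqrt(2)*x
    and dividing by sqrt(pi) converts it to the N(0,1) measure."""
    x, w = np.polynomial.hermite.hermgauss(n)
    return np.sum(w * g(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

m2 = gaussian_expectation(lambda y: y**2, 5)   # second moment of N(0,1): 1
m4 = gaussian_expectation(lambda y: y**4, 5)   # fourth moment of N(0,1): 3
```

A 5-point rule is exact for polynomials up to degree 9, so both moments come out exactly.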

Hermite-Gauss quadrature

Hermite Polynomials

Three-term recurrence: $H_{n+1}(\xi) = 2\xi\,H_n(\xi) - 2n\,H_{n-1}(\xi)$

Orthogonality: $\int_{-\infty}^{\infty} H_j(x)\,H_i(x)\,e^{-x^2}\,dx = 2^i\, i!\,\sqrt{\pi}\,\delta_{ij}$

Stochastic Collocation Summary of the quadrature rules

We can use Legendre or Hermite polynomials; can we do even more?

Table: Some polynomials in the Askey family

  Distribution   pdf                            Polynomials   Weight             Support
  Uniform        1/2                            Legendre      1                  [-1, 1]
  Gaussian       (1/sqrt(2π)) e^(-x²/2)         Hermite       e^(-x²/2)          (-∞, ∞)
  Exponential    e^(-x)                         Laguerre      e^(-x)             [0, ∞)
  Beta           (1-x)^α (1+x)^β / B(α,β)       Jacobi        (1-x)^α (1+x)^β    [-1, 1]

What if the random variables are not distributed according to any of the above?

1. Szegő (1939). Orthogonal Polynomials. American Mathematical Society.
2. Schoutens (2000). Stochastic Processes and Orthogonal Polynomials. Springer.
3. Gram-Schmidt procedure
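The Gram-Schmidt route (item 3) can be sketched numerically: build polynomials orthogonal with respect to an arbitrary pdf, evaluating the inner products by quadrature. The uniform-pdf check against monic Legendre polynomials is an illustrative test, not from the slides:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def gram_schmidt_polys(pdf, degree, n_quad=200):
    """Monic polynomials orthogonal w.r.t. `pdf` on [-1, 1], built by
    Gram-Schmidt; inner products computed with Gauss-Legendre quadrature.
    Coefficient arrays are in ascending-power order."""
    x, w = np.polynomial.legendre.leggauss(n_quad)
    w = w * pdf(x)                                # quadrature against the pdf
    inner = lambda a, b: np.sum(w * P.polyval(x, a) * P.polyval(x, b))
    polys = []
    for d in range(degree + 1):
        p = np.zeros(d + 1); p[-1] = 1.0          # start from the monomial x^d
        for q in polys:                           # subtract projections
            p[: len(q)] -= (inner(p, q) / inner(q, q)) * q
        polys.append(p)
    return polys

# For the uniform pdf 1/2 this must reproduce the monic Legendre polynomials:
polys = gram_schmidt_polys(lambda x: 0.5 * np.ones_like(x), 3)
# monic P2 = x^2 - 1/3, monic P3 = x^3 - (3/5) x
```

Swapping in any other positive pdf on the support yields the matching orthogonal family (e.g. the Jacobi polynomials for a Beta weight).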

Nested rules The choice of abscissas

The Gauss quadrature rules introduce different abscissas for each order N considered. They are not nested: there is no reuse of the computed solutions from lower-order quadrature.

Two extensions are possible:
Gauss-Kronrod rules
Clenshaw-Curtis rules: express the integrand using Chebyshev polynomials (lower polynomial exactness)

Figure: 32 abscissas in [-1, 1]

Clenshaw-Curtis vs. Gauss Figure: Black: Gauss, White: CC [Trefethen, 2008]

Why Nested rules?

Assume you have a budget of N = 9 computations. With 9 Gauss abscissas (Legendre), we obtain an estimate of the statistics of the solution that would be exact if the solution were a polynomial of degree 2N - 1 = 17.

With Clenshaw-Curtis we again obtain an estimate (only exact for polynomials of degree about N). On the other hand, with the same computations (N = 9 abscissas) we can also estimate the solution statistics corresponding to N = 5 and N = 3, which provides an error estimate.
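The nesting property is easy to verify: the Clenshaw-Curtis abscissas are the Chebyshev extreme points $\cos(\pi k/(n-1))$, and the 9-point set contains both the 5-point and the 3-point sets. A minimal sketch, assuming this standard node definition:

```python
import numpy as np

def cc_nodes(n):
    """n Clenshaw-Curtis abscissas on [-1, 1]: the Chebyshev extreme points."""
    return np.cos(np.pi * np.arange(n) / (n - 1))

n9 = cc_nodes(9)
n5 = cc_nodes(5)
n3 = cc_nodes(3)

# Every 5-point and 3-point abscissa already appears among the 9 points,
# so the coarser estimates come for free once the 9 runs are done.
nested_5_in_9 = all(np.isclose(p, n9).any() for p in n5)
nested_3_in_9 = all(np.isclose(p, n9).any() for p in n3)
```

Running the same check on Gauss-Legendre nodes (`leggauss`) shows the opposite: the node sets for different N are disjoint except for the origin.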

Multi-dimensional rules Tensor Product

The extension of the previous 1D rules (Gauss or CC) is straightforward: the abscissas are tensor products of the 1D quadrature points, and the weights are products of the 1D weights.

The number of function evaluations increases as $N^d$: the curse of dimensionality.
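A sketch of the tensor-product construction with itertools, showing the $N^d$ growth (the 5-point 1D rule is an arbitrary illustrative choice):

```python
import itertools
import numpy as np

def tensor_grid(rule_1d, d):
    """d-dimensional tensor-product quadrature from a 1D rule (nodes, weights)."""
    x, w = rule_1d
    pts = np.array(list(itertools.product(x, repeat=d)))        # all node tuples
    wts = np.prod(list(itertools.product(w, repeat=d)), axis=1) # product weights
    return pts, wts

rule = np.polynomial.legendre.leggauss(5)
counts = []
for d in (1, 2, 3, 4):
    pts, wts = tensor_grid(rule, d)
    counts.append(len(pts))
# counts grows as 5^d; the weights still sum to the volume 2^d of [-1,1]^d.
```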

Multi-dimensional rules Sparse Grids - Smolyak Grids

Smolyak's idea is to sparsify the construction of quadrature grids.

Figure: Sequence of grids used in 2D by a nested rule

Multi-dimensional rules Sparse Grids - Smolyak Grids

The nominal accuracy can be preserved with far fewer points.

Figure: From Tensor grid to Sparse grid in 2D

The trick is to use a linear combination of tensor products to build the actual sparse grid.

Sparse Grids Stochastic Collocation

Table: Number of abscissas for N = 5 in each dimension

                  d = 2   d = 3   d = 4   d = 5   d = 6   d = 7
  Tensor Gauss    25      125     625     3125    15625   78125
  Smolyak Gauss   17      31      49      71      97      127
  Smolyak CC      13      25      41      61      85      113

Stochastic collocation

Consider the integral (with $a_i$ and $b_i \sim U(0,1)$) [Webster et al. 2006]:

$\int \exp\!\left[-\sum_{i=1}^{N} a_i^2\,(y_i - b_i)^2\right] dy$

Sparse Grids Summary

A fundamental advance in the development of stochastic collocation approaches in multiple dimensions. Becoming more and more popular.

Not perfect:
Not straightforward to construct (implementation errors are easy to make)
Can lead to negative weights
Does not solve the curse of dimensionality, although it is better than tensor grids

Uncertainty Propagation Using Response Surfaces

Response surface

Stochastic collocation methods allow us to compute moments of the solution using far fewer function evaluations than MC. In some cases we still want to build the PDF, or evaluate a reliability constraint. One possibility is to build an interpolant through the collocation points, for example a Lagrange interpolating polynomial, and then to sample it using MC.

The interpolant is called a response surface, as it represents the approximate dependency of the solution on the input parameters.
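A minimal sketch of the response-surface idea: interpolate a hypothetical quantity of interest through 9 collocation points, then sample the cheap surrogate by Monte Carlo (the integrand `Q`, the degree, and the sample sizes are all illustrative choices):

```python
import numpy as np

# Hypothetical quantity of interest, standing in for an expensive solver:
Q = lambda y: np.tanh(2.0 * y)

# Collocation: evaluate the "solver" only at the 9 Gauss-Legendre abscissas.
nodes, _ = np.polynomial.legendre.leggauss(9)
Q_nodes = Q(nodes)

# Response surface: the degree-8 polynomial through the collocation points
# (the global-polynomial form of the Lagrange interpolant).
coef = np.polynomial.polynomial.polyfit(nodes, Q_nodes, 8)
surrogate = lambda y: np.polynomial.polynomial.polyval(y, coef)

# The cheap surrogate can now be sampled by plain Monte Carlo to estimate
# the PDF of Q (e.g. via a histogram) or a reliability constraint.
rng = np.random.default_rng(0)
y_mc = rng.uniform(-1.0, 1.0, 100_000)
q_mc = surrogate(y_mc)
mean_mc = q_mc.mean()
```

The 100,000 surrogate evaluations cost almost nothing, while only 9 "solver" runs were needed.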

Stochastic Collocation Example Figure: Unstructured grid used in the present computations showing the strong clustering at the cylinder surface to capture the boundary layer. The dark region is the actual computational domain: periodic conditions are used on the upper and lower boundary.

Stochastic Collocation Example Figure: Temperature distribution for the baseline case.

Stochastic Collocation Example Figure: Streamlines for the baseline case.

Stochastic Collocation: Example (a) GL rule on sparse grid. (b) GL rule on tensor grid (c) CC rule on sparse grid (d) CC rule on tensor grid Figure: Expectation of the x_1-velocity along the domain centerline (x_2 = 0).

Stochastic Collocation: Example (a) GL rule on sparse grid. (b) GL rule on tensor grid (c) CC rule on sparse grid (d) CC rule on tensor grid Figure: Expectation of the temperature on the cylinder surface.

Stochastic Collocation: Example (a) GL rule on sparse-grid. (b) GL rule on tensor grid (c) CC rule on sparse-grid (d) CC rule on tensor grid Figure: Variance of the temperature on the cylinder surface.

Stochastic Collocation: Example (a) GL rule on sparse-grid. (b) GL rule on tensor grid (c) CC rule on sparse-grid (d) CC rule on tensor grid Figure: Probability density function of the stochastic temperature at the stagnation point.

Stochastic Collocation: Example (a) GL rule on sparse-grid. (b) GL rule on tensor grid (c) CC rule on sparse-grid (d) CC rule on tensor grid Figure: Probability density function of the x_1-velocity in the wake at the point (2, 0).

Stochastic Galerkin Approach for Hyperbolic Systems

Polynomial Chaos Stochastic Galerkin Approach

The unknown is expressed as a spectral expansion in "one" random variable $\xi \in \Omega$ (assumed to be Gaussian):

$u(x,t,\xi) = \sum_{i=0}^{\infty} u_i(x,t)\,\psi_i(\xi)$

The $\psi_i(\xi)$ are Hermite polynomials and form a complete set of orthogonal basis functions.

Orthogonal Polynomials:

$\langle \psi_n \psi_m \rangle = \int_\Omega \psi_n(\xi)\,\psi_m(\xi)\,w(\xi)\,d\xi = h_n\,\delta_{n,m}$

$E[\psi_0] = \int_\Omega \psi_0(\xi)\,w(\xi)\,d\xi = 1$

$E[\psi_k] = \int_\Omega \psi_k(\xi)\,w(\xi)\,d\xi = 0, \quad k > 0$

where $w(\xi)$ is the pdf of $\xi$ and the $h_n$ are non-zero constants.

Polynomial Chaos Stochastic Galerkin Approach

Given $u(x,t,\xi) = \sum_{i=0}^{\infty} u_i(x,t)\,\psi_i(\xi)$ we can compute the moments.

Expectation of u:

$E[u] = \int_\Omega u\,w(\xi)\,d\xi = \sum_{i=0}^{\infty} u_i \int_\Omega \psi_i(\xi)\,w(\xi)\,d\xi = u_0$

Variance of u:

$Var[u] = E[u^2] - (E[u])^2 = \sum_{i=0}^{\infty} u_i^2 \int_\Omega \psi_i^2\,w(\xi)\,d\xi - u_0^2 = \sum_{i=1}^{\infty} u_i^2\,\langle \psi_i^2 \rangle$
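These two formulas can be checked by brute force: take a hypothetical set of PC coefficients, compute the mean and variance from the coefficients alone, and compare against direct sampling of the expansion (probabilists' Hermite polynomials, for which $\langle \psi_i^2 \rangle = i!$):

```python
import math
import numpy as np

def psi(i, xi):
    """Probabilists' Hermite polynomial He_i (psi_0=1, psi_1=xi, psi_2=xi^2-1, ...)."""
    c = np.zeros(i + 1); c[i] = 1.0
    return np.polynomial.hermite_e.hermeval(xi, c)

# A hypothetical set of PC coefficients u_i:
u = np.array([1.5, 0.8, 0.3, -0.1])

# Moments from the coefficients alone, using <psi_i^2> = i!:
mean_pc = u[0]
var_pc = sum(u[i] ** 2 * math.factorial(i) for i in range(1, len(u)))

# Cross-check by sampling xi ~ N(0,1) and evaluating the expansion:
rng = np.random.default_rng(1)
xi = rng.standard_normal(2_000_000)
u_samples = sum(u[i] * psi(i, xi) for i in range(len(u)))
```

The sampled mean and variance agree with `mean_pc` and `var_pc` to within Monte Carlo noise.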

Polynomial Chaos 1D Linear Convection Equations

Consider the 1D linear convection equation

$u_t + c\,u_x = 0, \quad 0 \le x \le 1$

The exact solution is $u(x,t) = u_{initial}(x - ct)$.

Assume the uncertainty is characterized by one parameter; let it be a Gaussian random variable $\xi \in \Omega$.

Consider a (truncated) spectral expansion of the solution in the random space:

$u(x,t,\xi) = \sum_{i=0}^{\infty} u_i(x,t)\,\psi_i(\xi) \approx \sum_{i=0}^{P} u_i(x,t)\,\psi_i(\xi)$

where the $\psi_i(\xi)$ are (1D) Hermite polynomials ($\psi_0 = 1$; $\psi_1 = \xi$; $\psi_2 = \xi^2 - 1$; $\psi_3 = \xi^3 - 3\xi$; etc.)

Polynomial Chaos 1D Linear Convection Equations - uncertainty in the initial condition

Assume $u_{initial}(x, t=0, \xi) = g(\xi)\cos(x)$. The exact solution is $u(x,t,\xi) = g(\xi)\cos(x - ct)$.

Plug the truncated expansion into the original PDE:

$\sum_{i=0}^{P} \frac{\partial u_i}{\partial t}\,\psi_i(\xi) + c\sum_{i=0}^{P} \frac{\partial u_i}{\partial x}\,\psi_i(\xi) = 0$

Multiply by $\psi_k(\xi)$ and integrate over the probability space $\Omega$ (Galerkin Projection):

$\sum_{i=0}^{P} \frac{\partial u_i}{\partial t}\,\langle \psi_i \psi_k \rangle + c\sum_{i=0}^{P} \frac{\partial u_i}{\partial x}\,\langle \psi_i \psi_k \rangle = 0 \quad\text{for } k = 0, 1, \ldots, P.$

Polynomial Chaos 1D Linear Convection Equations - uncertainty in the initial condition

We have

$\sum_{i=0}^{P} \frac{\partial u_i}{\partial t}\,\langle \psi_i \psi_k \rangle + c\sum_{i=0}^{P} \frac{\partial u_i}{\partial x}\,\langle \psi_i \psi_k \rangle = 0 \quad\text{for } k = 0, 1, \ldots, P.$

The orthogonality property $\langle \psi_i \psi_k \rangle = \delta_{ik}\,h_k$ implies

$\frac{\partial u_0}{\partial t} + c\,\frac{\partial u_0}{\partial x} = 0$
$\quad\vdots$
$\frac{\partial u_P}{\partial t} + c\,\frac{\partial u_P}{\partial x} = 0$

We obtain a system of P + 1 uncoupled & deterministic equations.

Polynomial Chaos 1D Linear Convection Equations - uncertainty in the initial condition

Initial conditions for the $u_0, \ldots, u_P$ equations are obtained by projection of the initial condition:

$u_k(x, t=0) = \frac{\langle u_{initial}(x, t=0, \xi),\, \psi_k \rangle}{\langle \psi_k^2 \rangle} = \frac{\langle g(\xi),\, \psi_k \rangle}{\langle \psi_k^2 \rangle}\,\cos(x), \quad k = 0, \ldots, P$

The procedure is simply an approximation of $g(\xi)$ on the polynomial basis $\psi(\xi)$.
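The projection can be sketched with Gauss-Hermite quadrature; the amplitude g(ξ) below is a hypothetical polynomial example whose exact PC coefficients are known for comparison:

```python
import math
import numpy as np

def he(i, x):
    """Probabilists' Hermite polynomial He_i evaluated at x."""
    c = np.zeros(i + 1); c[i] = 1.0
    return np.polynomial.hermite_e.hermeval(x, c)

def project(g, P, n_quad=40):
    """Coefficients g_k = <g, psi_k> / <psi_k^2> w.r.t. the standard Gaussian
    measure, via Gauss-Hermite quadrature (weight exp(-x^2), hence the
    sqrt(2) substitution and the sqrt(pi) normalization)."""
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    xi = np.sqrt(2.0) * x
    gk = []
    for k in range(P + 1):
        num = np.sum(w * g(xi) * he(k, xi)) / np.sqrt(np.pi)   # <g, psi_k>
        gk.append(num / math.factorial(k))                     # / <psi_k^2> = k!
    return np.array(gk)

# Hypothetical amplitude g(xi) = 1 + 0.1*xi + 0.05*xi^2. Since xi^2 = psi_2 + 1,
# the exact coefficients are [1.05, 0.1, 0.05, 0, ...].
g = lambda xi: 1.0 + 0.1 * xi + 0.05 * xi ** 2
gk = project(g, 3)
```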

Polynomial Chaos 1D Linear Convection Equations - uncertainty in the transport velocity

Assume $c = h(\xi)$. The exact solution is $u(x,t,\xi) = \cos(x - h(\xi)\,t)$.

Plug the truncated expansion into the original PDE:

$\sum_{i=0}^{P} \frac{\partial u_i}{\partial t}\,\psi_i(\xi) + h(\xi)\sum_{i=0}^{P} \frac{\partial u_i}{\partial x}\,\psi_i(\xi) = 0$

Multiply by $\psi_k(\xi)$ and integrate over the probability space $\Omega$ (Galerkin Projection):

$\sum_{i=0}^{P} \frac{\partial u_i}{\partial t}\,\langle \psi_i \psi_k \rangle + \sum_{i=0}^{P} \frac{\partial u_i}{\partial x}\,\langle h(\xi)\,\psi_i \psi_k \rangle = 0 \quad\text{for } k = 0, 1, \ldots, P.$

Polynomial Chaos 1D Linear Convection Equations - uncertainty in the transport velocity

If we expand $h(\xi) = \sum_{j=0}^{P_h} h_j\,\psi_j(\xi)$, the system of equations becomes

$\sum_{i=0}^{P} \frac{\partial u_i}{\partial t}\,\langle \psi_i \psi_k \rangle + \sum_{i=0}^{P} \frac{\partial u_i}{\partial x} \sum_{j=0}^{P_h} h_j\,\langle \psi_j \psi_i \psi_k \rangle = 0 \quad\text{for } k = 0, 1, \ldots, P.$

We obtain a system of P + 1 coupled & deterministic equations. This is a much tougher problem and leads to the long-time integration issue [Gottlieb & Xiu, 2008].

Polynomial Chaos 1D Burgers Equations

Consider the 1D Burgers equation

$u_t + u\,u_x = 0, \quad 0 \le x \le 1$

Assume the uncertainty is characterized by one parameter; let it be a Gaussian random variable $\xi \in \Omega$.

Consider a (truncated) spectral expansion of the solution in the random space:

$u(x,t,\xi) = \sum_{i=0}^{\infty} u_i(x,t)\,\psi_i(\xi) \approx \sum_{i=0}^{P} u_i(x,t)\,\psi_i(\xi)$

where the $\psi_i(\xi)$ are (1D) Hermite polynomials ($\psi_0 = 1$; $\psi_1 = \xi$; $\psi_2 = \xi^2 - 1$; $\psi_3 = \xi^3 - 3\xi$; etc.)

Polynomial Chaos 1D Burgers Equations

Plug in the governing equation:

$\sum_{i=0}^{P} \frac{\partial u_i}{\partial t}\,\psi_i(\xi) + \Big(\sum_{j=0}^{P} u_j\,\psi_j(\xi)\Big)\Big(\sum_{i=0}^{P} \frac{\partial u_i}{\partial x}\,\psi_i(\xi)\Big) = 0$

Multiply by $\psi_k(\xi)$ and integrate over the probability space $\Omega$ (Galerkin Projection):

$\sum_{i=0}^{P} \frac{\partial u_i}{\partial t}\,\langle \psi_i \psi_k \rangle + \sum_{i=0}^{P}\sum_{j=0}^{P} u_j\,\frac{\partial u_i}{\partial x}\,\langle \psi_i \psi_j \psi_k \rangle = 0 \quad\text{for } k = 0, 1, \ldots, P.$

We obtain a system of P + 1 coupled & deterministic equations (independently of the type of uncertainty).

Polynomial Chaos 1D Burgers Equations

PC expansion for the Burgers equations:

$\sum_{i=0}^{P} \frac{\partial u_i}{\partial t}\,\langle \psi_i \psi_k \rangle + \sum_{i=0}^{P}\sum_{j=0}^{P} u_j\,\frac{\partial u_i}{\partial x}\,\langle \psi_i \psi_j \psi_k \rangle = 0 \quad\text{for } k = 0, 1, \ldots, P.$

The double/triple products are numbers:

$\langle \psi_i \psi_j \rangle = \delta_{ij}\, i!$

$\langle \psi_i \psi_j \psi_k \rangle = \begin{cases} 0 & \text{if } i+j+k \text{ is odd or } \max(i,j,k) > s \\ \dfrac{i!\,j!\,k!}{(s-i)!\,(s-j)!\,(s-k)!} & \text{otherwise} \end{cases}$

where $s = (i+j+k)/2$.
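The closed form can be cross-checked against direct quadrature under the standard Gaussian measure; a sketch assuming probabilists' Hermite polynomials (NumPy's `hermeval` from the `hermite_e` module):

```python
import math
import numpy as np

def triple(i, j, k):
    """<psi_i psi_j psi_k> for probabilists' Hermite polynomials under the
    standard Gaussian measure (closed form from the slide)."""
    if (i + j + k) % 2 == 1:
        return 0.0
    s = (i + j + k) // 2
    if max(i, j, k) > s:
        return 0.0
    return (math.factorial(i) * math.factorial(j) * math.factorial(k)
            / (math.factorial(s - i) * math.factorial(s - j) * math.factorial(s - k)))

def triple_quad(i, j, k, n=60):
    """Same quantity by Gauss-Hermite quadrature (weight exp(-x^2) is
    mapped to N(0,1) by the sqrt(2) substitution)."""
    x, w = np.polynomial.hermite.hermgauss(n)
    xi = np.sqrt(2.0) * x
    def he(m):
        c = np.zeros(m + 1); c[m] = 1.0
        return np.polynomial.hermite_e.hermeval(xi, c)
    return np.sum(w * he(i) * he(j) * he(k)) / np.sqrt(np.pi)

checks = [(triple(i, j, k), triple_quad(i, j, k))
          for i in range(4) for j in range(4) for k in range(4)]
```

For example, $\langle \psi_1 \psi_1 \psi_2 \rangle = E[\xi^2(\xi^2-1)] = 3 - 1 = 2$, matching the formula with s = 2.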

Polynomial Chaos 1D Burgers Equations

PC expansion for the Burgers equations, P = 1:

$\frac{\partial u_0}{\partial t} + u_0\,\frac{\partial u_0}{\partial x} + u_1\,\frac{\partial u_1}{\partial x} = 0$
$\frac{\partial u_1}{\partial t} + u_1\,\frac{\partial u_0}{\partial x} + u_0\,\frac{\partial u_1}{\partial x} = 0$

PC expansion for the Burgers equations, P = 2:

$\frac{\partial u_0}{\partial t} + u_0\,\frac{\partial u_0}{\partial x} + u_1\,\frac{\partial u_1}{\partial x} + 2u_2\,\frac{\partial u_2}{\partial x} = 0$
$\frac{\partial u_1}{\partial t} + u_1\,\frac{\partial u_0}{\partial x} + (u_0 + 2u_2)\,\frac{\partial u_1}{\partial x} + 2u_1\,\frac{\partial u_2}{\partial x} = 0$
$\frac{\partial u_2}{\partial t} + u_2\,\frac{\partial u_0}{\partial x} + u_1\,\frac{\partial u_1}{\partial x} + (u_0 + 4u_2)\,\frac{\partial u_2}{\partial x} = 0$

Simple example 1D Viscous Burgers

Governing equation; note the modified convective flux:

$\frac{\partial}{\partial x}\!\left[\frac{u\,(1-u)}{2}\right] = \mu\,\frac{\partial^2 u}{\partial x^2}$

Exact solution:

$u(x) = \frac{1}{2}\left[1 + \tanh\!\left(\frac{x}{4\mu}\right)\right]$

Assume uncertainty in the viscosity: a Gaussian r.v. with $E[\mu] = 0.25$ and $Var[\mu] = 0.0025$.

Monte Carlo Sampling 1D Viscous Burgers Expectation of the solution: Computed solution (32 points) Exact solution

Monte Carlo Sampling 1D Viscous Burgers Variance of the solution: Computed solution (32 points) Exact solution

Polynomial Chaos 1D Viscous Burgers Statistics of the solution: Expectation Variance

Polynomial Chaos 1D Viscous Burgers

Polynomial chaos modes of the solution (P = 3): Coefficients 0 and 1; Coefficients 2 and 3.

Mode 0 is the mean (as expected). Mode 1 is dominant with respect to the others ($u_1^2$ closely approximates the variance).

Polynomial Chaos Observations

Let's consider the PC expansion with P = 1:

$u(x,t,\xi) = \sum_{i=0}^{\infty} u_i(x,t)\,\psi_i(\xi) \approx \sum_{i=0}^{1} u_i(x,t)\,\psi_i(\xi) = u_0 + u_1\,\xi$

The output is a linear function of the input uncertainty!

Remark: $\xi \sim N(0,1) \Rightarrow u \sim N(u_0, u_1^2)$. As a consequence

$f_u = \frac{1}{\sqrt{2\pi}\,u_1}\,e^{-(u-u_0)^2 / 2u_1^2}$

independently of the governing PDE!

Polynomial Chaos 1D Heat Conduction

$\frac{\partial}{\partial x}\!\left(a(x,\xi)\,\frac{\partial u(x,\xi)}{\partial x}\right) = f, \quad x \in D = (0,1)$
$u(0,\xi) = u(1,\xi) = 0$

where the diffusivity $a(x,\xi)$ is a random field over D with $0 < a_{min} < a(x,\xi) < a_{max}$.

How do we define the conductivity? We need data!

Polynomial Chaos 1D Heat Conduction

We need to represent a random field to describe the conductivity $a(x,\xi)$. One option is to use a polynomial-chaos type representation:

$a(x,\xi) = \sum_{k=0}^{P_a} a_k(x)\,\psi_k$

For $P_a$ sufficiently high we can represent the random process.

Figure: Microstructures in a metal (From Professor H. Fujii, JWRI)

Polynomial Chaos 1D Heat Conduction

Plug in the expressions for the (unknown) solution $u(x,\xi) = \sum_{k=0}^{P} u_k(x)\,\psi_k$ and the conductivity $a(x,\xi) = \sum_{k=0}^{P_a} a_k(x)\,\psi_k$, and perform the Galerkin projection:

$\left\langle \frac{\partial}{\partial x}\!\left(\sum_{k=0}^{P_a} a_k(x)\,\psi_k\;\frac{\partial}{\partial x}\sum_{j=0}^{P} u_j(x)\,\psi_j\right) - f,\;\psi_i \right\rangle = 0, \quad i = 0, 1, \ldots, P \qquad (1)$

Polynomial Chaos 1D Heat Conduction
Since $\langle\psi_i\psi_j\rangle = \delta_{ij}\langle\psi_i\psi_i\rangle$, $\psi_0 = 1$ and $\langle\psi_i\rangle = 0$ for $i = 1, \dots, P$:
$$-\sum_{j=0}^{P}\frac{\partial}{\partial x}\left(\sum_{k=0}^{P_a} a_k(x)\,C_{ijk}\,\frac{\partial u_j(x)}{\partial x}\right) = f, \quad i = 0$$
$$-\sum_{j=0}^{P}\frac{\partial}{\partial x}\left(\sum_{k=0}^{P_a} a_k(x)\,C_{ijk}\,\frac{\partial u_j(x)}{\partial x}\right) = 0, \quad i = 1, \dots, P$$
$$u_i(0) = u_i(1) = 0$$
with $C_{ijk} = \langle\psi_i\psi_j\psi_k\rangle$. This is a system of P + 1 coupled elliptic PDEs.
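The triple products $C_{ijk}$ can be precomputed once. A sketch for probabilists' Hermite polynomials (Gaussian $\xi$), using Gauss-Hermite quadrature from NumPy; P = 3 is an arbitrary illustrative choice.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

P = 3
# Gauss-Hermite nodes/weights for weight e^{-x^2/2}; rescale so the
# weights compute expectations against the standard normal density
nodes, weights = hermegauss(2 * P + 2)
weights = weights / np.sqrt(2 * np.pi)

def psi(i, x):
    """Probabilists' Hermite polynomial He_i evaluated at x."""
    return hermeval(x, np.eye(i + 1)[i])

# C_ijk = <psi_i psi_j psi_k> by quadrature (exact here: the integrand
# is a polynomial of degree at most 3P < 2 * (2P + 2) - 1)
C = np.zeros((P + 1, P + 1, P + 1))
for i in range(P + 1):
    for j in range(P + 1):
        for k in range(P + 1):
            C[i, j, k] = np.sum(weights * psi(i, nodes) * psi(j, nodes) * psi(k, nodes))
```

For Hermite polynomials $C_{ijk}$ also has a closed form, $i!\,j!\,k!/((s-i)!(s-j)!(s-k)!)$ with $2s = i+j+k$ even and the triangle inequality satisfied (zero otherwise), which makes a handy check of the quadrature.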

Polynomial Chaos 1D Heat Conduction
The coupling is due to the randomness in the diffusion term and to the fact that, in general, $C_{ijk}$ is not proportional to $\delta_{ij}$ for every $k$. Due to the orthogonality of the basis, some terms cancel out.
Notice the similarity between the terms corresponding to indices $i$, $j$ and the deterministic version of the original PDE, i.e. the case where the diffusivity is just a function of the spatial coordinate $x$. However, due to the coupling, one cannot use deterministic codes without modifying them accordingly. This is why the stochastic Galerkin scheme is called an intrusive technique.
The space discretization of the coupled system can be done with any suitable scheme, e.g. finite elements, finite differences, etc.

Polynomial Chaos 1D Heat Conduction
$$\begin{pmatrix} K_{0,0} & K_{0,1} & \cdots & K_{0,P} \\ K_{1,0} & K_{1,1} & \cdots & K_{1,P} \\ \vdots & & \ddots & \vdots \\ K_{P,0} & K_{P,1} & \cdots & K_{P,P} \end{pmatrix} \begin{pmatrix} u_0 \\ u_1 \\ \vdots \\ u_P \end{pmatrix} = \begin{pmatrix} f_0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
The left-hand-side (l.h.s.) matrix is block symmetric.
The l.h.s. matrix is symmetric if the space discretization is symmetric.
If the number of degrees of freedom for the space discretization of each $u_i(x)$ is N, then each block $K_{i,j}$ has size $N \times N$, and thus the l.h.s. matrix has size $N(P+1) \times N(P+1)$.
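To make the block structure concrete, here is a minimal finite-difference sketch of the assembled system, assuming Gaussian $\xi$ with a Hermite basis, constant $f = 1$, and a made-up linear diffusivity $a(x,\xi) = a_0(x) + a_1(x)\xi$; the modes, P, and N are illustrative choices, not values from the lecture.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Illustrative choices: P = 2 solution modes, linear diffusivity (P_a = 1)
P, Pa, N = 2, 1, 50
h = 1.0 / (N + 1)
a_modes = [lambda x: 1.0 + 0 * x, lambda x: 0.2 + 0 * x]   # a_0(x), a_1(x)

# triple products C_ijk = <psi_i psi_j psi_k>, probabilists' Hermite
qn, qw = hermegauss(20)
qw = qw / np.sqrt(2 * np.pi)
psi = lambda i, x: hermeval(x, np.eye(i + 1)[i])
C = np.array([[[np.sum(qw * psi(i, qn) * psi(j, qn) * psi(k, qn))
                for k in range(Pa + 1)]
               for j in range(P + 1)]
              for i in range(P + 1)])

def fd_operator(a):
    """N x N finite-difference matrix for -(a u')' with zero Dirichlet BCs."""
    x = np.linspace(h, 1 - h, N)
    ap, am = a(x + h / 2), a(x - h / 2)            # coefficient at half points
    A = np.diag(ap + am) - np.diag(ap[:-1], 1) - np.diag(am[1:], -1)
    return A / h**2

# assemble the (P+1) x (P+1) block system: K_{i,j} = sum_k C_ijk A(a_k)
K = np.zeros(((P + 1) * N, (P + 1) * N))
for i in range(P + 1):
    for j in range(P + 1):
        for k in range(Pa + 1):
            if abs(C[i, j, k]) > 1e-12:
                K[i*N:(i+1)*N, j*N:(j+1)*N] += C[i, j, k] * fd_operator(a_modes[k])

rhs = np.zeros((P + 1) * N)
rhs[:N] = 1.0                                    # <f psi_i> = delta_{i0} for f = 1
u = np.linalg.solve(K, rhs).reshape(P + 1, N)    # u[0] is the mean field
```

The coupled modes come out of a single solve: u[0] approximates the mean of the solution, and the higher modes carry the variance contributions, exactly as in the block system above.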

Polynomial Chaos 1D Heat Conduction
When $a(x, y)$ is linear w.r.t. $y$, the l.h.s. matrix is sparse ($C_{ijk} = \langle\psi_i\psi_j\psi_k\rangle$ is sparse). Figure: sparsity patterns for $d = 4$: $p = 1$, nnz = 13; $p = 2$, nnz = 55; $p = 3$, nnz = 155.
When $a(x, y)$ is non-linear w.r.t. $y$ and $P_a \geq 2p$, the l.h.s. matrix is full (easy to show).
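The effect can be checked directly from the triple products in a single random dimension: with a linear diffusivity only $C_{ij0}$ and $C_{ij1}$ enter, giving a tridiagonal block-coupling pattern, while a diffusivity of order $2P$ couples every pair of modes. The orders below are illustrative.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

P = 3
nodes, w = hermegauss(20)
w = w / np.sqrt(2 * np.pi)
psi = lambda i: hermeval(nodes, np.eye(i + 1)[i])   # probabilists' Hermite He_i

def coupling(Pa):
    """Boolean mask: blocks (i, j) coupled through some C_ijk with k <= Pa."""
    M = np.zeros((P + 1, P + 1), dtype=bool)
    for i in range(P + 1):
        for j in range(P + 1):
            M[i, j] = any(abs(np.sum(w * psi(i) * psi(j) * psi(k))) > 1e-8
                          for k in range(Pa + 1))
    return M

print(coupling(1).sum(), coupling(2 * P).sum())   # 10 vs 16: sparse vs full
```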

Polynomial Chaos Navier-Stokes Equations
Consider the NS equations for an incompressible fluid:
$$\frac{\partial u_i}{\partial x_i} = 0$$
$$\frac{\partial u_i}{\partial t} + u_j\frac{\partial u_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \mu\frac{\partial^2 u_i}{\partial x_j \partial x_j}$$
Assuming that the uncertainty is represented with one uncertain variable $\xi$, the usual polynomial chaos expansions read
$$u_i(x,t,\xi) = \sum_{j=0}^{P} u_i^{(j)}(x,t)\,\psi_j(\xi), \qquad p(x,t,\xi) = \sum_{j=0}^{P} p^{(j)}(x,t)\,\psi_j(\xi)$$

Polynomial Chaos Navier-Stokes Equations
The PC expansion for the velocity plugged into the continuity equation ($\partial u_i/\partial x_i = 0$) gives
$$\frac{\partial u_i^{(k)}}{\partial x_i} = 0, \quad k = 0, \dots, P$$
The momentum equation instead becomes (for each mode $k$)
$$\frac{\partial u_i^{(k)}}{\partial t} + \sum_{m=0}^{P}\sum_{n=0}^{P}\frac{\langle\psi_m\psi_n\psi_k\rangle}{\langle\psi_k\psi_k\rangle}\,u_j^{(m)}\frac{\partial u_i^{(n)}}{\partial x_j} = -\frac{1}{\rho}\frac{\partial p^{(k)}}{\partial x_i} + \mu\frac{\partial^2 u_i^{(k)}}{\partial x_j \partial x_j}$$
We obtain P + 1 equations for the velocity-mode vectors and P + 1 constraints. This is not dissimilar from the deterministic system: it can be solved by projection and results in a coupled system of 3(P + 1) momentum-like equations with P + 1 constraints.

Summary of Probabilistic Uncertainty Propagation Methods

Probabilistic Approaches to UQ
Define/characterize the input uncertainty in terms of a set of d random variables.
Propagation methodologies:
Sampling-based techniques: MC, LHS
Quadrature-based methods: SC
Galerkin-based approaches: PC
Compute statistics of the output of interest.
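For a scalar output of interest with one Gaussian input, the sampling and quadrature routes can be compared on a toy closed-form case; the function g and the sample/node counts below are illustrative.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

g = lambda xi: np.exp(xi)      # toy output of interest, xi ~ N(0, 1)
exact = np.exp(0.5)            # E[e^xi] = e^{1/2}

# Sampling-based (MC): average over random draws
mc = g(np.random.default_rng(0).standard_normal(100_000)).mean()

# Quadrature-based (SC-style): weighted sum over Gauss-Hermite nodes
qn, qw = hermegauss(10)
quad = np.sum(qw * g(qn)) / np.sqrt(2 * np.pi)

print(mc, quad, exact)   # the 10-node quadrature is already far more accurate
```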

Probabilistic Approaches to UQ Computing Statistics
Sampling methods compute statistics and the pdf by interrogating the parameter space. SC and PC are radically different, because the moments are computed explicitly:
SC: from quadrature rules
PC: by construction, from the assumed solution representation
How do we compute the pdf in SC and PC? We interpret both SC and PC as approximate (surrogate) representations of the solution that depend continuously on the input random variables.

Probabilistic Approaches to UQ Computing pdfs
We interpret both SC and PC as approximate (surrogate) representations of the solution that depend continuously on the input random variables.
PC: it is obvious that
$$u(x,t,\xi) \approx u_G^P = \sum_{i=0}^{P} u_i(x,t)\,\psi_i(\xi)$$
so we can sample (MC) $u_G^P(x,t,\xi)$ directly.
SC: Gaussian quadrature rules are weighted sums where the weights are integrals of Lagrange interpolating polynomials, therefore we can interpret SC as
$$u(x,t,\xi) \approx u_C^M = \sum_{i=0}^{M} u(x,t,\xi_i)\,L_{i,P}(\xi)$$
so we can sample (MC) $u_C^M(x,t,\xi)$ directly.
Note: sampling $u$, $u_G$ and $u_C$ is NOT in general equivalent!
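Both surrogates can be built and then sampled with plain MC. A sketch for a hypothetical scalar response $u(\xi) = e^{0.3\xi}$ at a fixed (x, t), with illustrative orders; the Hermite basis assumes Gaussian $\xi$.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

u = lambda xi: np.exp(0.3 * xi)        # hypothetical response at fixed (x, t)

# SC surrogate u_C: Lagrange interpolant through Gauss-Hermite nodes
M = 8
nodes, _ = hermegauss(M + 1)
coef = np.polyfit(nodes, u(nodes), M)  # degree-M interpolating polynomial
u_C = lambda xi: np.polyval(coef, xi)

# PC surrogate u_G: Galerkin projection onto probabilists' Hermite polynomials
P = 8
qn, qw = hermegauss(40)
qw = qw / np.sqrt(2 * np.pi)           # normalize to N(0, 1) expectations
psi = lambda i, x: hermeval(x, np.eye(i + 1)[i])
u_hat = [np.sum(qw * u(qn) * psi(i, qn)) / math.factorial(i)  # <psi_i^2> = i!
         for i in range(P + 1)]
u_G = lambda xi: sum(c * psi(i, xi) for i, c in enumerate(u_hat))

# sample both surrogates directly with MC
xi = np.random.default_rng(1).standard_normal(200_000)
print(u_C(xi).mean(), u_G(xi).mean())  # both ≈ E[exp(0.3 xi)] = exp(0.045)
```

Both sample means land close to the exact value, but the two surrogates are different polynomials, so their sample pdfs need not coincide with each other or with the pdf of the true response.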