Sparse polynomial chaos expansions in engineering applications


Department of Civil, Environmental and Geomatic Engineering, Chair of Risk, Safety & Uncertainty Quantification
Sparse polynomial chaos expansions in engineering applications
B. Sudret, G. Blatman (EDF R&D, France)
UQ Workshop, May 16th, 2013

Global framework for uncertainty quantification
Step A: model(s) of the system and assessment criteria (computational model)
Step B: quantification of the sources of uncertainty (random variables)
Step C: uncertainty propagation (distribution, mean, standard deviation, probability of failure)
Step C': sensitivity analysis
B.S., Uncertainty propagation and sensitivity analysis in mechanical models, Habilitation thesis, 2007.
B. Sudret (Chair of Risk & Safety), Sparse PC expansions, May 16th, 2013


Specificity of industrial problems
Uncertainty quantification arrives on top of well-defined simulation procedures (legacy codes). The computational models are complex: coupled problems (thermo-mechanics), plasticity, large strains, contact, buckling, etc. A single simulation is already costly (e.g. several hours). Engineers focus on so-called quantities of interest, e.g. maximum displacement, average stress, etc.

Specificity of industrial problems (cont'd)
The input variables modelling aleatory uncertainty are often non-Gaussian. The size of the input random vector is typically 10-100. UQ procedures shall be sufficiently general to be applied with little adaptation to a variety of problems. Hence the need for non-intrusive and parsimonious methods for uncertainty quantification.

Outline
- Sparse truncation schemes; computation of the coefficients; error estimation and validation
- Scalar response quantities; vector response quantities
- 1D diffusion problem; bending beam


Spectral approach (Soize & Ghanem, 2004)
The input parameters are modelled by a random vector X over a probability space (\Omega, \mathcal{F}, \mathbb{P}) such that \mathbb{P}(dx) = f_X(x)\,dx. The response random vector Y = \mathcal{M}(X) is considered as an element of L^2(\Omega, \mathcal{F}, \mathbb{P}). A basis of multivariate orthogonal polynomials is built up with respect to the input PDF (assuming independent components). The response random vector is then completely determined by its coordinates in this basis:

Y = \sum_{\alpha \in \mathbb{N}^M} y_\alpha \Psi_\alpha(X), \qquad \Psi_\alpha(x) = \prod_{i=1}^{M} P^{(i)}_{\alpha_i}(x_i)

where the y_\alpha are the coefficients (coordinates) to be computed and the \Psi_\alpha form the multivariate polynomial basis.

The curse of dimensionality
Common truncation scheme: all the multivariate polynomials up to a prescribed total degree p are selected. The number of unknown coefficients grows polynomially in both M and p:

P = \binom{M + p}{p}

Full expansions are not tractable when M \ge 10. Solution: sparse truncation schemes, based on the sparsity-of-effects principle, for the construction of the PC expansion.
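The growth of P is easy to check; a quick standard-library sketch:

```python
from math import comb

def basis_size(M: int, p: int) -> int:
    """Number of multivariate polynomials of total degree <= p in M variables."""
    return comb(M + p, p)

# the full basis quickly becomes intractable as M grows
for M in (2, 10, 50):
    print(M, [basis_size(M, p) for p in (2, 3, 4)])
```

For M = 50 and p = 4 the full basis already has 316,251 terms, far beyond what a few hundred model runs can identify.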

Why are sparse representations relevant? (Cohen, Schwab et al., 2010-13; Doostan & Owhadi, 2011)
Sparsity-of-effects principle: in usual problems, only low-order interactions between the input variables are relevant, so one shall select PC approximations built on low-rank monomials.
Degree of a multi-index \alpha (total degree of the polynomial \Psi_\alpha): \|\alpha\|_1 = \sum_{i=1}^{M} \alpha_i
Rank of a multi-index \alpha (number of active variables of \Psi_\alpha, i.e. non-zero terms of \alpha): \|\alpha\|_0 = \sum_{i=1}^{M} 1_{\{\alpha_i > 0\}}

Hyperbolic truncation schemes
Two selection techniques:
Low-rank index sets: \mathcal{A}^{M,p,j} = \{\alpha \in \mathbb{N}^M : \|\alpha\|_1 \le p,\ \|\alpha\|_0 \le j\}
Hyperbolic sets: \mathcal{A}^{M,p}_q = \{\alpha \in \mathbb{N}^M : \|\alpha\|_q \le p\}, where \|\alpha\|_q = \left(\sum_{i=1}^{M} \alpha_i^q\right)^{1/q}, \ 0 < q < 1
NB: the common truncation scheme (all polynomials of maximal total degree p) corresponds to q = 1.
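For small M the hyperbolic sets can be enumerated by brute force; a sketch of mine (illustrative only, not an efficient construction), where q = 1 recovers the full total-degree basis:

```python
from itertools import product

def hyperbolic_index_set(M, p, q):
    """All alpha in N^M with ||alpha||_q = (sum_i alpha_i^q)^(1/q) <= p (brute force)."""
    return [a for a in product(range(p + 1), repeat=M)
            if sum(ai ** q for ai in a) ** (1.0 / q) <= p + 1e-12]

full = hyperbolic_index_set(5, 3, 1.0)    # q = 1: usual total-degree truncation
sparse = hyperbolic_index_set(5, 3, 0.5)  # q = 0.5: hyperbolic truncation
print(len(full), len(sparse))
```

For M = 5 and p = 3, the full basis contains 56 terms while the q = 0.5 hyperbolic set keeps only 16: the constant and the univariate monomials of degree 1 to 3, all interaction terms being discarded.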

Hyperbolic truncation schemes (cont'd)
The hyperbolic norm primarily selects the high-degree polynomials in one single variable, and then the polynomials involving few interactions.


Indices of sparsity
Consider the common truncation (\mathcal{A}^{M,p}), the hyperbolic truncation (\mathcal{A}^{M,p}_q) and the sparse truncation (\mathcal{A}).
Effect of the hyperbolic truncation scheme: IS_1 = \mathrm{card}\,\mathcal{A}^{M,p}_q / \mathrm{card}\,\mathcal{A}^{M,p}
Effect of the adaptive selection algorithm: IS_2 = \mathrm{card}\,\mathcal{A} / \mathrm{card}\,\mathcal{A}^{M,p}


Various methods for computing the coefficients
Intrusive approaches:
- Historical approaches: projection of the equation residuals in the Galerkin sense (Ghanem et al.; Le Maître et al.; Babuska, Tempone et al.; Karniadakis et al., etc.)
- Proper generalized decompositions (Nouy et al., 2007-10)
Non-intrusive approaches consider the computational model \mathcal{M} as a black box. They rely upon a design of numerical experiments, i.e. an n-sample \mathcal{X} = \{x^{(i)} \in D_X, i = 1, \dots, n\} of the input parameters. Different classes of methods are available:
- projection, by simulation or quadrature (Matthies & Keese, 2005; Le Maître et al.)
- stochastic collocation (Xiu, 2007-09; Nobile, Tempone et al., 2008; Ma & Zabaras, 2009)
- regression (Berveiller et al., 2006; Blatman & S., 2008-11)

Statistical approach: regression (Choi et al., 2004; Berveiller et al., 2006)
Principle: the exact (infinite) series expansion is considered as the sum of a truncated series and a residual:

Y = \mathcal{M}(X) = \sum_{j=0}^{P-1} y_j \Psi_j(X) + \varepsilon_P \equiv \mathsf{Y}^T \Psi(X) + \varepsilon_P

where \mathsf{Y} = \{y_0, \dots, y_{P-1}\}^T and \Psi(x) = \{\Psi_0(x), \dots, \Psi_{P-1}(x)\}^T.

Regression: continuous solution
Mean-square minimization: the coefficients are gathered into a vector \hat{\mathsf{Y}} and computed by minimizing the mean-square error:

\hat{\mathsf{Y}} = \arg\min_{\mathsf{Y}} \ \mathbb{E}\left[\left(\mathsf{Y}^T \Psi(X) - \mathcal{M}(X)\right)^2\right]

Analytical solution (continuous case): the least-square minimization problem may be solved analytically:

\hat{\mathsf{Y}} = \mathbb{E}\left[\mathcal{M}(X)\, \Psi(X)\right]

The regression solution is identical to the projection solution, due to the orthogonality properties of the regressors.

Regression: discretized solution
Resolution: an estimate of the mean-square error (sample average) is minimized:

\hat{\mathsf{Y}}_{\mathrm{reg}} = \arg\min_{\mathsf{Y}} \ \hat{\mathbb{E}}\left[\left(\mathsf{Y}^T \Psi(X) - \mathcal{M}(X)\right)^2\right] = \arg\min_{\mathsf{Y}} \ \frac{1}{n} \sum_{i=1}^{n} \left(\mathsf{Y}^T \Psi(x^{(i)}) - \mathcal{M}(x^{(i)})\right)^2

Regression: discretized solution (cont'd)
1. Select an experimental design \mathcal{X} = \{x^{(1)}, \dots, x^{(n)}\}^T that covers at best the domain of variation of the parameters.
2. Evaluate the model response for each sample (exactly as in Monte Carlo simulation): \mathcal{M} = \{\mathcal{M}(x^{(1)}), \dots, \mathcal{M}(x^{(n)})\}^T
3. Compute the experimental matrix: A_{ij} = \Psi_j(x^{(i)}), \quad i = 1, \dots, n; \ j = 0, \dots, P-1
4. Solve the least-square minimization problem: \hat{\mathsf{Y}} = (A^T A)^{-1} A^T \mathcal{M}
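These four steps can be sketched on a one-dimensional toy problem (the model function, design size and degree below are illustrative choices of mine, not the talk's example); `numpy.polynomial.hermite_e` provides the probabilists' Hermite polynomials, orthogonal with respect to the standard Gaussian weight:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)

def model(x):
    # stand-in "computational model" with a single standard Gaussian input
    return np.exp(0.3 * x) + 0.5 * x ** 2

# step 1: experimental design (plain Monte Carlo); step 2: model evaluations
n, P = 200, 6
x = rng.standard_normal(n)
y = model(x)

# step 3: experimental matrix A_ij = Psi_j(x_i), with Psi_j = He_j
A = np.stack([hermeval(x, np.eye(P)[j]) for j in range(P)], axis=1)

# step 4: least-square solution (a stable solver instead of the normal equations)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# the surrogate is then cheap to evaluate anywhere
x_val = rng.standard_normal(1000)
mse = np.mean((hermeval(x_val, coeffs) - model(x_val)) ** 2)
```

For this smooth model a degree-5 expansion already yields a validation mean-square error orders of magnitude below the response variance.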

Choice of the experimental design
Random designs: Monte Carlo samples obtained by standard random generators; Latin Hypercube designs, which are both random and space-filling; quasi-random sequences (e.g. the Sobol' sequence).
[Figure: example two-dimensional designs, Monte Carlo vs. Latin Hypercube Sampling vs. Sobol' sequence]
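The three design types can be generated and compared with `scipy.stats.qmc` (a tool choice of mine, not of the talk); the centered discrepancy is one common measure of space filling:

```python
import numpy as np
from scipy.stats import qmc

dim, n = 2, 128  # n a power of 2, for balanced Sobol' sampling

mc = np.random.default_rng(0).random((n, dim))            # Monte Carlo
lhs = qmc.LatinHypercube(d=dim, seed=0).random(n)         # Latin Hypercube
sob = qmc.Sobol(d=dim, scramble=True, seed=0).random(n)   # Sobol' sequence

# smaller discrepancy indicates more uniform coverage of [0,1]^dim
for name, pts in (("MC", mc), ("LHS", lhs), ("Sobol", sob)):
    print(name, qmc.discrepancy(pts))
```

The LHS construction guarantees exactly one point per one-dimensional stratum, which is what makes such designs attractive for moderate sample sizes.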


Validation of the surrogate model
The truncated series expansions are convergent in the mean-square sense. However, one does not know in advance where to truncate (this is problem-dependent). Usually the series is truncated according to the maximal total degree of the polynomials, say p = 2, 3, 4, etc. Recent research deals with the development of error estimates: adaptive integration in the projection approach, cross-validation in the regression approach.

Error estimators: coefficient of determination
The regression technique is based on the minimization of the mean-square error. The generalization error is defined as:

E_{\mathrm{gen}} = \mathbb{E}\left[\left(\mathcal{M}(X) - \mathcal{M}^{PC}(X)\right)^2\right]

It may be estimated by the empirical error, using the already computed response quantities:

E_{\mathrm{emp}} = \frac{1}{n} \sum_{i=1}^{n} \left(\mathcal{M}(x^{(i)}) - \mathcal{M}^{PC}(x^{(i)})\right)^2

The coefficient of determination R^2 is often used as an error estimator:

R^2 = 1 - \frac{E_{\mathrm{emp}}}{\hat{\mathbb{V}}[Y]}, \qquad \hat{\mathbb{V}}[Y] = \frac{1}{n} \sum_{i=1}^{n} \left(\mathcal{M}(x^{(i)}) - \bar{Y}\right)^2

However, this error estimator leads to overfitting.

Overfitting: illustration of the Runge effect
When the number of unknown coefficients of a polynomial regression model equals the number of experimental design points, one gets an interpolating approximation: the empirical error is zero, whereas the approximation gets worse and worse.

Error estimators: leave-one-out cross-validation
Principle: in statistical learning theory, cross-validation consists in splitting the experimental design into two parts, namely a training set (which is used to build the model) and a validation set. The leave-one-out technique consists in using each point of the experimental design in turn as a single validation point for the meta-model built from the remaining n - 1 points. Thus n different meta-models are built, and the errors made on the left-out points are mean-square averaged.

Cross-validation: implementation
For each x^{(i)}, a polynomial chaos expansion \mathcal{M}^{PC\setminus i}(\cdot) is built using the experimental design \mathcal{X} \setminus x^{(i)} = \{x^{(j)}, j = 1, \dots, n, \ j \ne i\}.
The predicted residual is computed at the point x^{(i)}:

\Delta_i = \mathcal{M}^{PC\setminus i}(x^{(i)}) - \mathcal{M}(x^{(i)})

The PRESS coefficient (predicted residual sum of squares) is evaluated:

\mathrm{PRESS} = \sum_{i=1}^{n} \Delta_i^2

The leave-one-out error and the related Q^2 error estimator are computed:

E_{LOO} = \frac{1}{n} \sum_{i=1}^{n} \Delta_i^2, \qquad Q^2 = 1 - \frac{E_{LOO}}{\hat{\mathbb{V}}[Y]}

Cross-validation: conclusions
The Q^2 estimator is robust for characterizing the accuracy of meta-models. In the case of polynomial chaos expansions, one does not need to explicitly compute the n different meta-models. In some cases, Q^2 may be too optimistic about the generalization error; heuristic modified versions allow one to be conservative (Chapelle et al., 2002):

Q^{2*} = 1 - T(P, n)\, \frac{\mathrm{PRESS}/n}{\mathrm{Var}[\mathcal{Y}]}, \qquad T(P, n) = \frac{1 + \mathrm{tr}\left(C_{\mathrm{emp}}^{-1}\right)/n}{1 - P/n}, \qquad C_{\mathrm{emp}} = \frac{1}{n} \Psi^T \Psi
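The remark that the n meta-models need not be recomputed follows from the classical leave-one-out identity for linear least squares: \Delta_i = (\hat{y}_i - y_i)/(1 - h_i), with h_i the i-th diagonal term of the hat matrix A(A^T A)^{-1} A^T. A sketch on synthetic data (my toy setup, not the talk's example), checking the identity against explicit refits:

```python
import numpy as np

rng = np.random.default_rng(1)
n, P = 30, 4
A = rng.standard_normal((n, P))            # experimental matrix Psi_j(x_i)
y = A @ rng.standard_normal(P) + 0.1 * rng.standard_normal(n)

# fast route: LOO residuals from a single fit, via the hat-matrix diagonal
H = A @ np.linalg.solve(A.T @ A, A.T)      # hat matrix A (A^T A)^{-1} A^T
delta_fast = (H @ y - y) / (1.0 - np.diag(H))

# brute force: n explicit refits, leaving one point out each time
delta_slow = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    c, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
    delta_slow[i] = A[i] @ c - y[i]

press = np.sum(delta_fast ** 2)
q2 = 1.0 - (press / n) / y.var()
print(np.allclose(delta_fast, delta_slow), q2)
```

The two routes agree to machine precision, which is why the LOO error of a PC regression costs no more than a single least-square fit.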


Forward adaptive algorithm (Blatman & S., CRAS, 2008; Blatman & S., Prob. Eng. Mech., 2010)
Principles:
Adaptivity in the PC basis: build up the polynomial chaos basis from scratch, i.e. starting with a single constant term, using an iterative algorithm. Add basis functions (taken from a pre-selected truncated set) one by one, and decide whether they are kept or discarded. Monitor the PC error using cross-validation (the Q^2 estimate) and stop when a target value Q^2_{\mathrm{tgt}} is attained.
Adaptivity in the experimental design: in each step, the PC coefficients are computed by regression. Nested experimental designs (e.g. quasi-random numbers, nested LHS) are used in order to keep the regression problem well-posed.

Basis-and-design adaptive algorithm: scheme
[Figure: flowchart of the basis-and-design adaptive algorithm]

Variable selection methods
The forward adaptive algorithm is based on least-square regression and variable selection; it requires that the size of the experimental design be larger than the number of unknowns. Recent techniques have been developed to solve regression problems with many predictors by regularization. The Least Angle Regression (LAR) algorithm is an efficient approach:
- A set \mathcal{A} of candidate basis functions is pre-selected, e.g. using the hyperbolic truncation scheme.
- A family of sparse models is built, containing respectively 1, 2, ..., card \mathcal{A} terms.
- Among these models, the best one is retained by applying cross-validation techniques.


Least angle regression (Efron et al., 2004; implementation: Guigue et al., 2006)
Illustration with a 3-dimensional set of regressors:
1. The algorithm is initialized with \hat{f}_0 = 0; the residual is R = y. The most correlated regressor is X_1.
2. A move in the direction X_1 is carried out so that the residual y - \gamma_1 X_1 becomes equicorrelated with X_1 and X_2. The 1-term sparse approximation of y is \hat{f}_1.
3. A move is then jointly made in the directions X_1 and X_2 until the residual becomes equicorrelated with a new regressor. This gives the 2-term sparse approximation.

Least angle regression: path of solutions
A path of solutions is obtained, containing 1, 2, ..., min(n, card \mathcal{A}) terms. A Q^2 estimate of the accuracy of each solution is evaluated by cross-validation, and the best result is kept.
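The path-then-select procedure can be sketched with scikit-learn's `lars_path` (a stand-in of mine for the implementation cited in the talk) on a synthetic sparse model; a hold-out set replaces cross-validation for brevity:

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(2)
n, P = 100, 10
Psi = rng.standard_normal((n, P))        # candidate regressors
beta = np.zeros(P)
beta[[0, 3, 7]] = [2.0, -1.5, 1.0]       # the true model is sparse
y = Psi @ beta + 0.05 * rng.standard_normal(n)

# LAR returns a whole path of increasingly rich sparse models
alphas, active, coefs = lars_path(Psi, y, method="lar")

# model selection: pick the path member with the smallest hold-out error
Psi_val = rng.standard_normal((200, P))
errs = [np.mean((Psi_val @ coefs[:, j] - Psi_val @ beta) ** 2)
        for j in range(coefs.shape[1])]
best = int(np.argmin(errs))
print("retained terms:", np.flatnonzero(coefs[:, best]))
```

The selected model recovers the three truly active regressors; spurious terms entering later in the path are rejected because they degrade the validation error.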

Basis-and-design adaptive LAR (Blatman, 2009; Blatman & S., J. Comp. Phys., 2011)
[Figure: flowchart of the basis-and-design adaptive LAR algorithm]


Rationale for using PCA
Motivation: the adaptive LAR approach mentioned previously is suitable for a scalar model response. In the case of a vector-valued response of dimension K, LAR should be applied to each response component. When the vector response is a discretized field (e.g. a displacement field in finite element analysis), its components are usually strongly correlated.
Solution: principal component analysis (PCA) exploits the (possibly significant) correlation between the K components in order to recast the model response in terms of a small number of uncorrelated variables. Eventually, LAR can be applied successively to K' << K uncorrelated components.

Principal component analysis of the output vector: spectral decomposition
Consider a second-order random vector Y = \mathcal{M}(X) of size K and its covariance matrix:

\mathrm{Cov}[Y] = V \Lambda V^T

where \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_K) gathers the eigenvalues and V = (v_1, \dots, v_K) the eigenvectors. The random response Y is represented in the basis of eigenvectors:

Y = \mathbb{E}[Y] + \sum_{k=1}^{K} A_k v_k, \qquad A_k = v_k^T \left(Y - \mathbb{E}[Y]\right)

Principal component analysis of the output vector: reduced representation
Only the K' largest eigenvalues are retained, i.e.

K' = \min\left\{ Q \in \{1, \dots, K\} : \ \sum_{i=1}^{Q} \lambda_i \,/\, \mathrm{trace}(\mathrm{Cov}[Y]) \ \ge\ 1 - \varepsilon_{PCA} \right\}

Polynomial chaos expansion of the components:

Y^{(K')} = \mathbb{E}[Y] + \sum_{k=1}^{K'} A_k v_k

where the components may be represented by PC expansions:

A_k = \sum_{\alpha \in \mathbb{N}^M} a_\alpha^{(k)} \Psi_\alpha(X), \qquad k = 1, \dots, K'

Sample-based principal component analysis
Evaluation of the model: \mathcal{X} = \{x^{(1)}, \dots, x^{(n)}\}, \ \mathcal{Y} = \{y^{(1)}, \dots, y^{(n)}\}, where y^{(i)} = \mathcal{M}(x^{(i)}).
Sample mean and sample covariance matrix:

\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y^{(i)} = \{\bar{y}_1, \dots, \bar{y}_K\}^T, \qquad S_{k,l} = \frac{1}{n-1} \sum_{i=1}^{n} \left(y_k^{(i)} - \bar{y}_k\right)\left(y_l^{(i)} - \bar{y}_l\right), \quad k, l = 1, \dots, K

Eigenvalue decomposition:

S \stackrel{\mathrm{def}}{=} \widehat{\mathrm{Cov}}[Y] = W L W^T

PC expansion of the reduced variables
The K' largest eigenvalues of S are retained:

\hat{Y}^{(K')} = \bar{y} + \sum_{k=1}^{K'} A_k w_k

Samples of the scalar variables A_k = w_k^T (Y - \bar{y}) are computed by:

\mathcal{A}_k = \left(\mathcal{Y} - \bar{y}\right) w_k, \qquad k = 1, \dots, K'

Sparse PC expansions of these variables are computed from the respective experimental designs (\mathcal{X}, \mathcal{A}_k):

\hat{A}_k = \sum_{\alpha \in \mathcal{A}^{(k)}} a_\alpha^{(k)} \Psi_\alpha(X)

The final representation of the output vector reads:

\hat{Y} = \bar{y} + \sum_{k=1}^{K'} \left( \sum_{\alpha \in \mathcal{A}^{(k)}} a_\alpha^{(k)} \Psi_\alpha(X) \right) w_k
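The sample-based PCA step can be sketched as follows (the vector-valued toy model is mine; the per-component sparse PC fit is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 300, 40

# toy vector-valued model: a strongly correlated K-dimensional output field
x = rng.standard_normal(n)
grid = np.linspace(0.0, 1.0, K)
Y = np.outer(np.sin(x), grid) + 0.5 * np.outer(x ** 2, grid ** 2)

ybar = Y.mean(axis=0)
S = np.cov(Y, rowvar=False)          # sample covariance, 1/(n-1) convention
L, W = np.linalg.eigh(S)             # eigh returns ascending eigenvalues
L, W = L[::-1], W[:, ::-1]           # sort descending

# retain the smallest K' capturing a fraction 1 - eps_PCA of the variance
eps_pca = 1e-6
ratios = np.cumsum(L) / np.trace(S)
K_red = int(np.searchsorted(ratios, 1.0 - eps_pca) + 1)

# reduced, uncorrelated variables A_k = (Y - ybar) w_k, one scalar per sample
A = (Y - ybar) @ W[:, :K_red]
print(K_red)
```

Each column of A is then a scalar quantity of interest to which the adaptive LAR machinery of the previous section applies unchanged.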

Error estimation (Blatman & S., 2013, in prep.)
Truncation of the principal component representation:

\varepsilon_1 = \sum_{k=K'+1}^{K} l_k \,/\, \mathrm{trace}(S)

Mean-square error of the PC expansions of the components:

\varepsilon_2 = \sum_{k=1}^{K'} \varepsilon_{LOO}^{(k)}

A global upper bound of the mean-square error may be derived; both sources of error may be managed independently.


One-dimensional diffusion equation (Ernst et al., 2012)
Problem statement: consider a 1D stochastic PDE with random diffusion coefficient E(x, \omega):

\left[E(x, \omega)\, u'(x, \omega)\right]' + f(x) = 0, \quad x \in [0, 1], \qquad u(0) = 0, \quad (E\, u')(1) = F

E(x, \omega) is a lognormal stationary random field with mean value \mu_E = 10 and standard deviation \sigma_E = 3:

E(x, \omega) = \exp\left[\lambda_E + \zeta_E\, g(x, \omega)\right], \qquad g(x, \omega) \sim \mathcal{N}(0, 1), \qquad \mathrm{Cov}\left[g(x),\, g(x')\right] = e^{-|x - x'|/\ell}

Forcing terms: f(x) = \mathrm{const} = 0.5, F = 1; correlation length \ell = 1/3.

Karhunen-Loève expansion of the Gaussian random field

g(x, \omega) = \sum_{k=1}^{\infty} \sqrt{l_k}\, \varphi_k(x)\, \xi_k(\omega)

Eigenvalues and eigenfunctions are obtained by solving the Fredholm equation:

\int_0^1 e^{-|x - x'|/\ell}\, \varphi_k(x)\, dx = l_k\, \varphi_k(x')

Truncation: M = 62 terms are retained, such that \sum_{k=1}^{M} l_k \,/\, \sum_{k=1}^{\infty} l_k \ge 0.99.
[Figure: realizations of E(x, \omega)]
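The Fredholm eigenproblem can be approximated by eigendecomposition of the covariance matrix on a regular grid (a simple Nyström-type discretization of mine, not necessarily the solver used in the talk); the lognormal parameters \lambda_E, \zeta_E follow from \mu_E = 10, \sigma_E = 3:

```python
import numpy as np

ell, N = 1.0 / 3.0, 400
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]

# exponential covariance kernel of the underlying Gaussian field g
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Nystrom approximation of the Fredholm problem: eigenpairs of C * h
vals, vecs = np.linalg.eigh(C * h)
vals, vecs = vals[::-1], vecs[:, ::-1]

# number of terms keeping 99% of the variance (the talk reports M = 62)
M = int(np.searchsorted(np.cumsum(vals) / np.sum(vals), 0.99) + 1)

# one realization of g, then of the lognormal field E = exp(lambda + zeta g)
rng = np.random.default_rng(0)
xi = rng.standard_normal(M)
g = (vecs[:, :M] / np.sqrt(h)) @ (np.sqrt(vals[:M]) * xi)
mu_E, sigma_E = 10.0, 3.0
zeta = np.sqrt(np.log(1.0 + (sigma_E / mu_E) ** 2))
lam = np.log(mu_E) - 0.5 * zeta ** 2
E = np.exp(lam + zeta * g)
```

The slow eigenvalue decay of the exponential kernel is what forces the large truncation order (around 62 terms for 99% of the variance at \ell = 1/3), i.e. a 62-dimensional input random vector.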

Analytical deterministic solution (Ernst et al., 2012)
For a given realization E(x, \omega_0):

u(x, \omega_0) = \int_0^x \frac{F + f\,(1 - y)}{E(y, \omega_0)}\, dy

Using the truncated expansion:

u_M(x, \omega_0) = \int_0^x \left(F + f\,(1 - y)\right) \exp\left[-\left(\lambda_E + \zeta_E\, \Phi(y)^T \Xi_0\right)\right] dy

where \Xi_0 = \{\xi_1(\omega_0), \dots, \xi_M(\omega_0)\}^T is a realization of a standard Gaussian random vector of size M and \Phi(y) = \{\sqrt{l_1}\,\varphi_1(y), \dots, \sqrt{l_M}\,\varphi_M(y)\}^T.
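As a sanity check, the closed-form solution can be integrated numerically; for a constant field E(x) = E_0 the integral is elementary (my algebra from the formula above), u(x) = [F x + f (x - x^2/2)] / E_0, giving u(1) = 0.125 with E_0 = 10, f = 0.5, F = 1:

```python
import numpy as np

f, F = 0.5, 1.0

def u_exact(x, E0):
    """Closed-form solution for a constant diffusion coefficient E0."""
    return (F * x + f * (x - 0.5 * x ** 2)) / E0

def u_quad(x, E_field, n_quad=2000):
    """u(x) = int_0^x (F + f (1 - y)) / E(y) dy, by the midpoint rule."""
    y = (np.arange(n_quad) + 0.5) * x / n_quad
    return np.sum((F + f * (1.0 - y)) / E_field(y)) * x / n_quad

# midpoint quadrature reproduces the analytical result for constant E = 10
print(u_quad(1.0, lambda y: 10.0 * np.ones_like(y)))  # ~ 0.125
```

The same quadrature routine accepts any realization of the lognormal field as `E_field`, which is how reference sample solutions are produced.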

Comparison of solutions: finite element analysis

The deterministic diffusion problem is solved using linear 2-node finite elements (200 elements over $[0,1]$).

[Figure: $u(x)$ for $x \in [0,1]$, finite element vs. direct integration]

Reference vs. PC stochastic solution: the reference solution is obtained using 50,000 samples and direct numerical integration; the PCA-gPC approach is based on an LHS experimental design (500 points).
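The deterministic finite element solver mentioned above is straightforward in 1D. The sketch below (hypothetical function name `solve_fe`, piecewise-constant diffusion per element assumed) assembles the linear 2-node element system for $-(E\,u')' = f$ with $u(0)=0$ and the Neumann condition $(E\,u')(1) = F$:

```python
import numpy as np

def solve_fe(E_elem, f=0.5, F=1.0):
    """Linear 2-node finite elements for -(E u')' = f on [0,1],
    with u(0) = 0 and (E u')(1) = F. E_elem holds one diffusion value
    per element (piecewise-constant approximation of the field)."""
    n = len(E_elem)                 # number of elements, uniform mesh
    h = 1.0 / n
    K = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for e in range(n):
        ke = E_elem[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K[e:e + 2, e:e + 2] += ke
        b[e:e + 2] += f * h / 2.0   # consistent load for constant f
    b[-1] += F                      # Neumann contribution at x = 1
    u = np.zeros(n + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], b[1:])   # impose u(0) = 0
    return u

# sanity check against the closed-form solution for constant E = 10
u = solve_fe(np.full(200, 10.0))
print(u[-1])   # u(1) = (F + f/2) / 10 = 0.125
```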

Mean value and standard deviation of the solution

[Figure: mean value field $\mathrm{E}\left[u(x)\right]$ and standard deviation field $\sqrt{\mathrm{Var}\left[u(x)\right]}$ for $x \in [0,1]$, Monte Carlo vs. PCA-gPC with 1 to 4 components]

The convergence of the standard deviation only requires a few components.

Eigenmodes of the solution field

[Figure: modes 1 to 5 of the solution field for $x \in [0,1]$]

The solution field is decomposed onto the first eigenvectors of the covariance matrix of the computed realizations:
$\hat{u}(x,\omega) = \sum_{i=1}^{N_{\mathrm{DOF}}} U_i(\omega)\, N_i(x)$
$U(\omega) = \bar{U} + \sum_{k=1}^{K} \Big( \sum_{\alpha \in \mathcal{A}^{(k)}} a_\alpha^{(k)}\, \Psi_\alpha(\xi) \Big)\, w_k$
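The empirical PCA step behind this decomposition can be sketched as follows (hypothetical function name; the PC expansion of the retained coordinates $c_k$, which the slide denotes $\sum_\alpha a_\alpha^{(k)} \Psi_\alpha(\xi)$, is a separate regression step not shown here):

```python
import numpy as np

def pca_of_realizations(U_samples, K=4):
    """Empirical PCA of solution realizations. U_samples has shape
    (n_samples, n_dof). Returns the mean U_bar, the K largest eigenvalues,
    the corresponding eigenvectors w_k (columns), and the per-sample
    coordinates c_k, so that U ~ U_bar + sum_k c_k w_k."""
    U_bar = U_samples.mean(axis=0)
    Uc = U_samples - U_bar
    cov = Uc.T @ Uc / (len(U_samples) - 1)    # empirical covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending order
    order = np.argsort(eigvals)[::-1][:K]     # keep K largest
    w = eigvecs[:, order]
    coords = Uc @ w
    return U_bar, eigvals[order], w, coords
```

With `K` equal to the number of degrees of freedom the reconstruction `U_bar + coords @ w.T` recovers the samples exactly; truncating `K` discards the variance quantified by $\varepsilon_1$.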

Convergence (Blatman & Sudret, Proc. ICOSSAR 2013)

[Figure: cross-validation error estimates ($10^0$ down to $10^{-4}$) vs. number of components (1 to 9)]

- $\varepsilon$: global $L^2$-error estimate
- $\varepsilon_1$: PCA error (reduced number of principal components)
- $\varepsilon_2$: PC expansion (meta-model) error

Outline
1. 1D diffusion problem
2. Bending beam

Elastic bending beam (Blatman, 2009; Blatman & Sudret, J. Comput. Phys., 2011)

Problem statement. Input random field: the spatially varying Young's modulus is modelled by a stationary lognormal random field with $\mu_E = 210{,}000$ MPa and $CV_E = 20\%$:
$E(x,\omega) = \exp\left[\lambda_E + \zeta_E\, g(x,\omega)\right]$, $g(x,\omega) \sim \mathcal{N}(0,1)$, $\mathrm{Cov}\left[g(x),\, g(x')\right] = e^{-|x-x'|/l}$ ($l = 0.5\ \mathrm{m} = L/6$)

Quantity of interest: maximal deflection at midspan.

Karhunen-Loève expansion of the random field
- Analytical solution of the Fredholm integral equation
- 100 terms are retained for a discretization accuracy of 1%

Beam deflection

                           Reference       LAR, ε_tgt = 0.01   LAR, ε_tgt = 0.001
Mean (mm)                  2.83            2.81                2.83
Standard deviation (mm)    0.37            0.36                0.36
Skewness                   [0.38; 0.52]    0.30                0.41
Kurtosis                   [3.12; 3.70]    3.09                3.30
Number of FE runs          10,000          200                 1,200
Error estimate             -               10^-3               4·10^-4
PC degree                  -               3                   6
Number of PC terms         -               10                  246
Index of sparsity IS_1     -               2·10^-3             3·10^-6
Index of sparsity IS_2     -               7%                  4%

A PC expansion of maximal degree $p = 6$ in 100 dimensions contains $P = \binom{106}{6} = 1{,}705{,}904{,}746$ terms; the hyperbolic truncature reduces this to 5,118 terms, of which LAR retains 246.
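The basis-size figures quoted above can be reproduced by counting multi-indices. The sketch below computes the full total-degree count $\binom{M+p}{p}$ and the cardinality of a hyperbolic ("q-norm") index set; the value of $q$ behind the 5,118-term set is not stated on the slide, so `q = 0.5` is only a representative choice:

```python
from math import comb, factorial
from collections import Counter

def n_full_basis(M, p):
    """Cardinality of the full total-degree-p PC basis in M dimensions:
    C(M + p, p)."""
    return comb(M + p, p)

def n_hyperbolic(M, p, q=0.5):
    """Cardinality of the hyperbolic index set
    {alpha in N^M : (sum_i alpha_i**q)**(1/q) <= p},
    by enumerating the sorted positive degrees (feasible since p is small)."""
    def parts(max_val, budget):
        # non-increasing tuples of positive degrees a with sum(a**q) <= budget
        yield ()
        for a in range(1, max_val + 1):
            if a**q <= budget + 1e-12:
                for rest in parts(a, budget - a**q):
                    yield (a,) + rest
    total = 0
    for t in parts(p, float(p)**q):
        r = len(t)                  # interaction order (number of nonzeros)
        if r > M:
            continue
        perms = factorial(r)        # distinct orderings of the degrees in t
        for c in Counter(t).values():
            perms //= factorial(c)
        total += comb(M, r) * perms # times the choice of active dimensions
    return total

print(n_full_basis(100, 6))   # 1,705,904,746 terms, as quoted on the slide
```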

Beam deflection: sensitivity analysis, Sobol' indices (Sudret, RESS, 2008)

Only modes #1 and #3 impact the maximal deflection (modes #2 and #4 are antisymmetric).

Another analysis using only 3 terms in the KL expansion (and N = 40 finite element runs) provides the same second moments within 1% accuracy ($L^2$-error of $2 \cdot 10^{-4}$).

Conclusions

- Polynomial chaos expansions suffer from the curse of dimensionality when classical Galerkin / stochastic collocation techniques are used.
- Hyperbolic truncation schemes allow one to select a priori approximation spaces of lower dimensionality that are consistent with the sparsity-of-effects principle.
- Sparse expansions may be obtained by casting the computation of the PC coefficients as a regression and variable-selection problem (Least Angle Regression).
- Although stochastic problems (e.g. involving random fields) may appear high-dimensional, the effective dimension of the quantities of interest is usually much smaller.

Thank you very much for your attention!
