Gaussian process regression for sensitivity analysis
1 Gaussian process regression for sensitivity analysis. GPSS Workshop on UQ, Sheffield, September 2016. Nicolas Durrande, Mines St-Étienne. GPSS workshop on UQ, GPs for sensitivity analysis, 1 / 43
2 Outline
- Introduction
- FANOVA (i.e. HDMR, i.e. the Sobol-Hoeffding representation)
- Polynomial chaos
- Gaussian process regression
- Sensitivity analysis
- Conclusion
3 We assume we are interested in a function f : R^d → R with d ≥ 2. We want to get some understanding of the structure of f:
- What is the effect of each input variable on the output?
- Do some variables have more influence than others?
- Do some variables interact with one another?
The talk is illustrated on the following test function:
f : [0, 1]^6 → R, x ↦ 10 sin(π x_1 x_2) + 20 (x_3 − 0.5)² + 10 x_4 + 5 x_5
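The test function can be written down in a few lines. This is a minimal sketch: the exact coefficients are an assumption (the slide text is garbled, and the function matches the well-known Friedman function), and only the first five inputs are active.

```python
import numpy as np

def f(x):
    """Test function f: [0,1]^6 -> R (Friedman-type; x_6 is inert)."""
    x = np.atleast_2d(x)
    return (10 * np.sin(np.pi * x[:, 0] * x[:, 1])
            + 20 * (x[:, 2] - 0.5) ** 2
            + 10 * x[:, 3]
            + 5 * x[:, 4])

# 100 uniform samples over [0,1]^6, as used for the scatter plots
rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 6))
y = f(X)
```

Note that x_6 has no effect at all, which the decompositions later in the talk should recover.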
4 A first thing one can do is to plot the output against each input. For 100 samples uniformly distributed over [0, 1]^6 we get:
[Figure: scatter plots of f(x) against x_1, ..., x_6]
5 In a similar fashion, we can fix all variables except one. In the graph below, all non-plotted variables are set to a fixed reference value:
[Figure: one-dimensional slices of f along x_1, ..., x_6]
6 In order to get an insight into the interactions between variables, we can look at the influence of changing the reference value:
[Figure: two panels of one-dimensional slices of f, with curves for reference values x_2 ∈ {0, 0.2, 0.4, 0.5, 0.7, 0.9}]
7 FANOVA, i.e. HDMR, i.e. the Sobol-Hoeffding representation
8 One common tool for analysing the structure of f is its FANOVA representation:
f(x) = f_0 + Σ_{i=1}^d f_i(x_i) + Σ_{i<j} f_{i,j}(x_i, x_j) + ... + f_{1,...,d}(x)
This decomposition is such that:
- f_0 accounts for the constant term
- all the f_I are zero mean (for I ≠ ∅)
- f_1 accounts for all the signal that can be explained by x_1 alone
- more generally, ∫ f_I(x) dx_i = 0 for all i ∈ I
In other words, this decomposition is such that all the terms are orthogonal in L².
9 The expressions of the f_I are:
f_0 = ∫ f(x) dx
f_i(x_i) = ∫ f(x) dx_{−i} − f_0
f_{i,j}(x_i, x_j) = ∫ f(x) dx_{−{i,j}} − f_i(x_i) − f_j(x_j) − f_0
where dx_{−I} denotes integration over all variables except those in I. It can also be interesting to look at the total effect of some inputs:
f̄_1(x_1) = ∫ f(x) dx_{−1}
f̄_{1,2}(x_1, x_2) = ∫ f(x) dx_{−{1,2}}
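These integrals can be approximated by plain Monte Carlo. A minimal sketch, assuming uniform inputs on [0,1]^6 and the Friedman-type test function from the slides (whose exact coefficients are an assumption):

```python
import numpy as np

def f(x):  # test function from the slides (coefficients reconstructed)
    return (10 * np.sin(np.pi * x[..., 0] * x[..., 1])
            + 20 * (x[..., 2] - 0.5) ** 2 + 10 * x[..., 3] + 5 * x[..., 4])

rng = np.random.default_rng(1)
n = 20_000
U = rng.uniform(size=(n, 6))

f0 = f(U).mean()                       # f_0 = ∫ f(x) dx

def main_effect(i, t, U=U, f0=f0):
    """f_i(t) = ∫ f(x) dx_{-i} - f_0, with common random numbers."""
    V = U.copy()
    V[:, i] = t                        # freeze input i at value t
    return f(V).mean() - f0

grid = np.linspace(0, 1, 21)
f1 = np.array([main_effect(0, t) for t in grid])
```

By construction f_1 should integrate to (approximately) zero over the grid, which is a cheap sanity check on the implementation.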
10 On the previous example we obtain:
[Figure: estimated main effects f_i(x_i) for x_1, ..., x_6]
11 We can also look at second-order interactions:
[Figure: surfaces of the interaction terms f_{1,2}(x_1, x_2) and f_{1,3}(x_1, x_3)]
12 The total effect of (x_1, x_2) is thus
f̄_{1,2}(x_1, x_2) = f_0 + f_1(x_1) + f_2(x_2) + f_{1,2}(x_1, x_2)
13 In practical applications f is not analytical, so the above method requires numerical computation of the integrals. If there is a cost associated with each evaluation of f, surrogate models are useful. Some models are naturally easy to interpret, for example
m(x) = β_0 + β_1 x_1 + β_2 x_2
but it soon becomes trickier:
m(x) = β_0 + β_1 x_1 + β_2 x_2 + β_{1,2} x_1 x_2
14 In GPR, the mean can be seen either as a linear combination of the observations,
m(x) = α^T F,
or as a linear combination of the kernel evaluated at the design points X:
m(x) = k(x, X) β
[Figure: squared exponential basis functions k(x, X) and the resulting conditional mean interpolating Z(X) = F]
The basis functions have a local influence, which makes the interpretation difficult.
15 Polynomial chaos
16 The principle of polynomial chaos is to project f onto a basis of orthonormal polynomials.
One dimension: for x ∈ R, the h_i are polynomials of order i. Starting from the constant function, the following ones can be obtained by Gram-Schmidt orthonormalisation:
h_0(x) = 1 / ‖1‖
h_1(x) = (x − ⟨x, h_0⟩ h_0) / ‖x − ⟨x, h_0⟩ h_0‖
h_2(x) = (x² − ⟨x², h_0⟩ h_0 − ⟨x², h_1⟩ h_1) / ‖x² − ⟨x², h_0⟩ h_0 − ⟨x², h_1⟩ h_1‖
d dimensions: in R^d, the basis is obtained by tensor products of one-dimensional bases. For example, if d = 2:
h_00(x) = 1 × 1
h_10(x) = h_1(x_1) × 1
h_01(x) = 1 × h_1(x_2)
h_11(x) = h_1(x_1) × h_1(x_2)
h_20(x) = h_2(x_1) × 1
...
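The Gram-Schmidt construction above is easy to carry out numerically. A minimal sketch, assuming the uniform probability measure on [−1, 1] (so the result should match the normalised Legendre polynomials):

```python
import numpy as np
from numpy.polynomial import Polynomial

def inner(p, q):
    """<p, q> = E[p(X) q(X)] for X ~ Uniform[-1, 1], computed exactly."""
    r = (p * q).integ()
    return 0.5 * (r(1) - r(-1))

def gram_schmidt(max_degree):
    """Orthonormalise the monomials 1, x, x^2, ... w.r.t. inner()."""
    hs = []
    for k in range(max_degree + 1):
        p = Polynomial([0] * k + [1])          # the monomial x^k
        for h in hs:
            p = p - inner(p, h) * h            # remove projections on earlier h_i
        hs.append(p / np.sqrt(inner(p, p)))    # normalise
    return hs

hs = gram_schmidt(3)
```

Swapping `inner` for an expectation under another measure (e.g. a Gaussian one, via quadrature) yields the corresponding orthonormal family instead.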
17 The orthonormal basis depends on the measure over the input space D. A uniform measure over D = [−1, 1] gives the (normalised) Legendre basis:
h_0(x) = 1
h_1(x) = √3 x
h_2(x) = (√5/2) (3x² − 1)
h_3(x) = (√7/2) (5x³ − 3x)
h_4(x) = (3/8) (35x⁴ − 30x² + 3)
...
A standard Gaussian measure over R gives the (normalised) Hermite basis:
h_0(x) = 1
h_1(x) = x
h_2(x) = (x² − 1)/√2
h_3(x) = (x³ − 3x)/√6
h_4(x) = (x⁴ − 6x² + 3)/(2√6)
...
18 Legendre basis in 1D
[Figure: the first Legendre basis functions, degrees 0 to 4]
19 Legendre basis in 2D
[Figure: surfaces of the tensorised basis functions h_00, h_10, h_01, h_11, h_20, h_21]
20 If we consider a linear regression model based on polynomial chaos basis functions,
m(x) = Σ_{I ∈ {0,...,p}^d} β_I h_I(x),
the FANOVA representation of m is straightforward. For example in 2D:
m_0 = ∫ m(x) dx_1 dx_2 = ∫ β_00 h_00(x) dx_1 dx_2 = β_00
m_1(x_1) = ∫ m(x) dx_2 − m_0 = ∫ (β_00 h_00(x) + β_10 h_10(x) + β_20 h_20(x)) dx_2 − m_0 = β_10 h_1(x_1) + β_20 h_2(x_1)
m_{1,2}(x) = ... = β_11 h_11(x) + β_12 h_12(x) + β_21 h_21(x) + β_22 h_22(x)
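The regression and the resulting FANOVA split can be sketched in a few lines. This is an illustrative example, not the slides' own experiment: the target g, the design, and the degree-2 tensor basis on [−1, 1]² are all assumptions.

```python
import numpy as np

# Orthonormal Legendre polynomials up to degree 2 (uniform measure on [-1, 1])
h = [lambda x: np.ones_like(x),
     lambda x: np.sqrt(3.0) * x,
     lambda x: np.sqrt(5.0) / 2 * (3 * x**2 - 1)]

def g(x):                       # toy function to approximate
    return x[:, 0] + x[:, 0] * x[:, 1]

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(400, 2))

idx = [(a, b) for a in range(3) for b in range(3)]   # tensorised multi-indices
H = np.column_stack([h[a](X[:, 0]) * h[b](X[:, 1]) for a, b in idx])
beta, *_ = np.linalg.lstsq(H, g(X), rcond=None)      # least-squares fit
coef = dict(zip(idx, beta))

m0 = coef[(0, 0)]                                    # constant term
def m1(x1):                                          # main effect of x_1
    return coef[(1, 0)] * h[1](x1) + coef[(2, 0)] * h[2](x1)
```

Since g lies in the span of the basis, the coefficients are recovered exactly; in general the fit is only an approximation and the truncation degree p matters.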
21 We obtain on the motivating example:
[Figure: main effects of the polynomial chaos model for x_1, ..., x_6]
22 The same figure, without cheating:
[Figure: main effects of the polynomial chaos model for x_1, ..., x_6]
23 Gaussian process regression
24 A first idea is to consider ANOVA kernels [Stitson et al., 1997]:
k(x, y) = Π_{i=1}^d (1 + k_i(x_i, y_i))
        = 1 + Σ_{i=1}^d k_i(x_i, y_i) + Σ_{i<j} k_i(x_i, y_i) k_j(x_j, y_j) + ... + Π_{i=1}^d k_i(x_i, y_i)
(constant, additive part, 2nd-order interactions, ..., full interaction). The associated GP is
Z(x) = Z_0 + Σ_{i=1}^d Z_i(x_i) + Σ_{i<j} Z_{i,j}(x_i, x_j) + ... + Z_{1...d}(x)
However, the Z_I do not satisfy ∫ Z_I(x) dx_i = 0.
25 If we build a GPR model based on this kernel, we obtain:
m(x) = k(x, X) k(X, X)^{−1} F
     = (1 + Σ_{i=1}^d k_i(x_i, X_i) + Σ_{i<j} k_i(x_i, X_i) k_j(x_j, X_j) + ...) k(X, X)^{−1} F
     = 1^T k(X, X)^{−1} F + Σ_{i=1}^d k_i(x_i, X_i) k(X, X)^{−1} F + Σ_{i<j} k_i(x_i, X_i) k_j(x_j, X_j) k(X, X)^{−1} F + ...
where the second and third groups of terms are the m_i(x_i) and m_{i,j}(x_i, x_j). As previously, the m_I do not satisfy ∫ m_I(x) dx_i = 0.
26 Samples with zero integrals. We are interested in building a GP such that the integrals of the samples are exactly zero. Let us consider the associated conditional GP:
Z_0 =(law) Z | ∫ Z(s) ds = 0
Let μ_0(x) = E(Z(x) | ∫ Z(s) ds = 0) denote the conditional mean and k_0(x, x') = cov(Z(x), Z(x') | ∫ Z(s) ds = 0) the conditional covariance. The usual Gaussian conditioning formulas give
μ_0(x) = ∫ k(x, s) ds (∫∫ k(s, t) ds dt)^{−1} × 0
k_0(x, x') = k(x, x') − ∫ k(x, s) ds (∫∫ k(s, t) ds dt)^{−1} ∫ k(x', t) dt
27 Samples from Z_0 have the required property:
μ_0(x) = 0
k_0(x, y) = k(x, y) − (∫ k(x, s) ds ∫ k(y, s) ds) / (∫∫ k(s, t) ds dt)
[Figure: samples from Z and from the zero-integral process Z_0]
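The kernel k_0 can be built numerically when the integrals have no closed form. A minimal sketch on [0, 1], assuming a squared exponential base kernel with an arbitrary lengthscale and a simple quadrature rule:

```python
import numpy as np

ell = 0.2
def k(x, y):
    """1D squared exponential kernel, vectorised over both arguments."""
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * ell ** 2))

s = np.linspace(0, 1, 201)           # quadrature nodes on [0, 1]
w = np.full_like(s, 1 / len(s))      # equal quadrature weights

def k0(x, y):
    """Zero-integral kernel k_0(x, y) = k(x, y) - I(x) I(y) / II."""
    Ix = k(x, s) @ w                 # approximates ∫ k(x, s) ds
    Iy = k(y, s) @ w                 # approximates ∫ k(y, s) ds
    II = w @ k(s, s) @ w             # approximates ∫∫ k(s, t) ds dt
    return k(x, y) - np.outer(Ix, Iy) / II

x = np.linspace(0, 1, 11)
K0 = k0(x, x)
```

With a consistent quadrature rule, ∫ k_0(x, s) ds vanishes exactly (not just approximately) under that same rule, which makes this easy to test.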
28 These one-dimensional kernels are of great importance for creating ANOVA kernels dedicated to sensitivity analysis:
k_SA(x, y) = Π_{i=1}^d (1 + k_0(x_i, y_i))
           = 1 + Σ_{i=1}^d k_0(x_i, y_i) + Σ_{i<j} k_0(x_i, y_i) k_0(x_j, y_j) + ... + Π_{i=1}^d k_0(x_i, y_i)
(constant, additive part, 2nd-order interactions, ..., full interaction). The associated GP naturally writes
Z_SA(x) = Z_0 + Σ_{i=1}^d Z_i(x_i) + Σ_{i<j} Z_{i,j}(x_i, x_j) + ... + Z_{1...d}(x)
Now, the Z_I do satisfy ∫ Z_I(x) dx_i = 0.
29 We get the following decomposition of a sample:
[Figure: a 2D sample Z decomposed into its terms Z_00, Z_10, Z_01, Z_11]
30 Furthermore, the GPR model inherits these properties. 2D example:
k(x, y) = Π_{i=1}^2 (1 + k_0(x_i, y_i)) = 1 + k_0(x_1, y_1) + k_0(x_2, y_2) + k_0(x_1, y_1) k_0(x_2, y_2)
The mean writes
m(x) = (1 + k_0(x_1, X_1) + k_0(x_2, X_2) + k_0(x_1, X_1) k_0(x_2, X_2)) k(X, X)^{−1} F = m_0 + m_1(x_1) + m_2(x_2) + m_{1,2}(x)
These terms correspond to the FANOVA representation of m. The sub-models are conditional expectations:
m_I(x) = E(Z_I(x) | Z(X) = F)
We can thus associate a predictive covariance to each sub-model!
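This 2D decomposition of the mean can be sketched directly. The data-generating function, design size, lengthscale, and jitter below are all illustrative assumptions; the exact identity m = m_0 + m_1 + m_2 + m_12 holds by construction of the expansion.

```python
import numpy as np

ell = 0.3
s = np.linspace(0, 1, 201); w = np.full_like(s, 1 / len(s))

def k(x, y):   # 1D squared exponential kernel
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * ell ** 2))

def k0(x, y):  # 1D zero-integral kernel built by quadrature
    Ix, Iy, II = k(x, s) @ w, k(y, s) @ w, w @ k(s, s) @ w
    return k(x, y) - np.outer(Ix, Iy) / II

rng = np.random.default_rng(3)
X = rng.uniform(size=(30, 2))
F = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2       # toy observations

KSA = (1 + k0(X[:, 0], X[:, 0])) * (1 + k0(X[:, 1], X[:, 1]))
alpha = np.linalg.solve(KSA + 1e-8 * np.eye(30), F)  # k(X,X)^{-1} F with jitter

def decompose(x):
    """Split m(x) into m_0, m_1, m_2, m_12 at test points x (m, 2)."""
    a, b = k0(x[:, 0], X[:, 0]), k0(x[:, 1], X[:, 1])
    m0 = np.sum(alpha) * np.ones(len(x))             # 1^T alpha
    return m0, a @ alpha, b @ alpha, (a * b) @ alpha

xt = rng.uniform(size=(5, 2))
m0, m1, m2, m12 = decompose(xt)
m_full = ((1 + k0(xt[:, 0], X[:, 0])) * (1 + k0(xt[:, 1], X[:, 1]))) @ alpha
```

The zero-integral property of k_0 carries over to the sub-models: m_1 integrates to zero under the same quadrature rule.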
31 We obtain on the motivating example:
[Figure: main-effect sub-models m_i with predictive intervals for x_1, ..., x_6]
32 We obtain on the motivating example:
[Figure]
33 Sensitivity analysis
34 The principle of sensitivity analysis is to quantify how much each input, or group of inputs, influences the output. One distinguishes:
- local sensitivity analysis
- global sensitivity analysis
The probabilistic framework has proven very fruitful: if we introduce randomness in the inputs, how random is the output? Hereafter, we focus on variance-based global sensitivity analysis.
35 Let X be the random vector representing our uncertainty on the inputs. We assume its probability distribution factorises (i.e. the X_i are independent):
μ(x) = μ_1(x_1) μ_2(x_2) ⋯ μ_d(x_d)
This factorisation of μ allows us to use it in the FANOVA representation of f:
f(x) = f_0 + Σ_{i=1}^d f_i(x_i) + Σ_{i<j} f_{i,j}(x_i, x_j) + ... + f_{1,...,d}(x)
where the f_I are orthogonal for μ. We now plug X into this expression:
f(X) = f_0 + Σ_{i=1}^d f_i(X_i) + Σ_{i<j} f_{i,j}(X_i, X_j) + ... + f_{1,...,d}(X)
and we get interesting results...
36 The f_I(X_I) are centred and uncorrelated:
E(f_I(X_I)) = ∫ f_I(x_I) dμ(x) = 0 (for I ≠ ∅)
cov(f_I(X_I), f_J(X_J)) = E(f_I(X_I) f_J(X_J)) = ∫ f_I(x_I) f_J(x_J) dμ(x) = 0 (for I ≠ J)
As a consequence, we get
var(f(X)) = Σ_{i=1}^d var(f_i(X_i)) + Σ_{i<j} var(f_{i,j}(X_i, X_j)) + ... + var(f_{1,...,d}(X))
The Sobol indices are defined as
S_I = var(f_I(X_I)) / var(f(X))
These indices lie in [0, 1], and their sum is 1.
37 In practice, these indices can be computed using Monte Carlo methods, but this requires many evaluations of f. Another approach is to use surrogate models: if the model is well chosen, the computational cost is almost free!
- Polynomial chaos
- GPR with k_SA kernels
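The Monte Carlo route can be sketched with the pick-freeze (Saltelli) scheme. This is an illustrative example on a toy additive function with known indices, not the slides' own experiment; the estimator used is the Saltelli (2010) first-order formula.

```python
import numpy as np

def g(x):
    return x[:, 0] + 2 * x[:, 1]   # Var = 1/12 + 4/12, so S_1 = 0.2, S_2 = 0.8

rng = np.random.default_rng(4)
n, d = 50_000, 2
A = rng.uniform(size=(n, d))       # two independent sample blocks
B = rng.uniform(size=(n, d))
fA, fB = g(A), g(B)
varY = fA.var()

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # A with column i "frozen" from B
    # First-order index: E[f(B) (f(AB_i) - f(A))] / Var(f)
    S.append(np.mean(fB * (g(ABi) - fA)) / varY)
```

Each first-order index costs one extra block of n evaluations, which is exactly why the "lots of observations" remark above matters for expensive simulators.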
38 Polynomial chaos. In the case of polynomial chaos, the Sobol indices are given by sums of squares of the β coefficients:
D_i = Var_X[E(H(X)β | X_i)] = Var_X[H_i(X_i) β_i] = ‖β_i‖²
S_i = D_i / Σ_k D_k
where H_i collects the basis functions that depend on x_i only, and β_i the corresponding coefficients. For more details, see the work of Bruno Sudret.
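Once a chaos expansion is fitted, computing all the indices is a matter of bookkeeping: group the squared coefficients by the set of variables each basis function depends on. A minimal sketch (the coefficient values are made up for illustration):

```python
def sobol_from_pc(coef):
    """Sobol indices from PC coefficients.

    coef: dict mapping multi-indices (tuples of degrees) to beta values.
    Returns a dict mapping variable subsets to their Sobol index.
    """
    D = {}
    for alpha, b in coef.items():
        I = tuple(i for i, a in enumerate(alpha) if a > 0)  # active variables
        if I:                                               # skip the constant term
            D[I] = D.get(I, 0.0) + b ** 2
    total = sum(D.values())                                 # Var of the PC model
    return {I: DI / total for I, DI in D.items()}

S = sobol_from_pc({(0, 0): 1.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 1.0})
```

By construction the returned indices sum to one, matching the property stated earlier for the exact decomposition.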
39 GPR with the k_SA kernel. The sensitivity indices can be obtained analytically:
S_I = var(m_I(X_I)) / var(m(X)) = [F^T K^{−1} (⊙_{i∈I} Γ_i) K^{−1} F] / [F^T K^{−1} (Π_{i=1}^d (1_{n×n} + Γ_i) − 1_{n×n}) K^{−1} F]
where Γ_i is the n×n matrix Γ_i = ∫_{D_i} k_0(s_i, X_i) k_0(s_i, X_i)^T ds_i, ⊙ denotes the entry-wise (Hadamard) product, and the product over i is also taken entry-wise.
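The Γ matrices are one-dimensional integrals and are cheap to compute by quadrature. A 2D sketch under assumed settings (squared exponential base kernel, arbitrary design and observations); a built-in consistency check is that the indices of the mean sum to one, since the Hadamard products telescope:

```python
import numpy as np

ell = 0.3
s = np.linspace(0, 1, 201); w = np.full_like(s, 1 / len(s))
def k(x, y):
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * ell ** 2))
def k0(x, y):
    Ix, Iy, II = k(x, s) @ w, k(y, s) @ w, w @ k(s, s) @ w
    return k(x, y) - np.outer(Ix, Iy) / II

rng = np.random.default_rng(5)
X = rng.uniform(size=(20, 2))
F = np.sin(2 * np.pi * X[:, 0]) * X[:, 1]

K = (1 + k0(X[:, 0], X[:, 0])) * (1 + k0(X[:, 1], X[:, 1])) + 1e-8 * np.eye(20)
alpha = np.linalg.solve(K, F)                 # K^{-1} F

# Gamma_i = ∫ k0(s, X_i) k0(s, X_i)^T ds, by quadrature
G = [(k0(s, X[:, i]).T * w) @ k0(s, X[:, i]) for i in range(2)]

denom = alpha @ ((((1 + G[0]) * (1 + G[1])) - 1) @ alpha)
S1 = alpha @ (G[0] @ alpha) / denom
S2 = alpha @ (G[1] @ alpha) / denom
S12 = alpha @ ((G[0] * G[1]) @ alpha) / denom
```

Here `*` on matrices is NumPy's entry-wise product, matching the ⊙ of the formula.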
40 The computation of Sobol indices on the mean gives:
[Figure: estimated indices S_1, ..., S_6 and S_12 for two models, compared with the truth]
For this test function, 50 observations are enough!
41 Conclusion
42 Sensitivity analysis:
- There are interesting tools to get an insight into what is happening inside high-dimensional functions
- The effective dimensionality can be much smaller than d
- Monte Carlo or model-based approaches
Some modelling tips:
- What is the purpose of the model?
- GPR models are not necessarily black boxes...
- It is possible to include fancy observations in GPR (integrals, derivatives, ...)
This talk unfortunately focused on sensitivity analysis of the mean... the proper way is to perform SA on the conditional sample paths. See the work of Marrel, Iooss et al. (SAMO).
43 Using appropriate kernels, the computation of Sobol indices on the samples gives:
D_i = Var_X[E(Z(X) | X_i)] = Var_X[Z_i(X_i)]
We can easily sample from this distribution:
[Figure: prior distributions of D_1, D_2 and D_12]
Similarly, we can sample from the posterior to get an uncertainty measure on the indices.
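One way to see what "indices on the samples" means is to draw one sample of a 2D GP on a grid and decompose it empirically. This sketch uses a plain separable squared exponential kernel and the discrete ANOVA decomposition (all settings are arbitrary), rather than the k_SA construction from the slides:

```python
import numpy as np

g = 15
t = np.linspace(0, 1, g)
x1, x2 = np.meshgrid(t, t, indexing="ij")
P = np.column_stack([x1.ravel(), x2.ravel()])

ell = 0.3
def k(a, b):
    return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * ell ** 2))
K = k(P[:, 0], P[:, 0]) * k(P[:, 1], P[:, 1]) + 1e-6 * np.eye(g * g)

rng = np.random.default_rng(6)
Z = (np.linalg.cholesky(K) @ rng.standard_normal(g * g)).reshape(g, g)

# Discrete FANOVA decomposition of the sampled surface
z0 = Z.mean()                               # constant part
z1 = Z.mean(axis=1) - z0                    # Z_1(x_1): average over x_2
z2 = Z.mean(axis=0) - z0                    # Z_2(x_2): average over x_1
z12 = Z - z0 - z1[:, None] - z2[None, :]    # interaction remainder

D1, D2, D12 = z1.var(), z2.var(), z12.var() # variance contributions
```

Repeating this over many prior (or posterior) samples gives an empirical distribution for each D_i, which is the idea behind the histograms on this slide.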
More informationSENSITIVITY ANALYSIS IN NUMERICAL SIMULATION OF MULTIPHASE FLOW FOR CO 2 STORAGE IN SALINE AQUIFERS USING THE PROBABILISTIC COLLOCATION APPROACH
XIX International Conference on Water Resources CMWR 2012 University of Illinois at Urbana-Champaign June 17-22,2012 SENSITIVITY ANALYSIS IN NUMERICAL SIMULATION OF MULTIPHASE FLOW FOR CO 2 STORAGE IN
More informationSTA414/2104 Statistical Methods for Machine Learning II
STA414/2104 Statistical Methods for Machine Learning II Murat A. Erdogdu & David Duvenaud Department of Computer Science Department of Statistical Sciences Lecture 3 Slide credits: Russ Salakhutdinov Announcements
More informationPRINCIPAL COMPONENTS ANALYSIS
121 CHAPTER 11 PRINCIPAL COMPONENTS ANALYSIS We now have the tools necessary to discuss one of the most important concepts in mathematical statistics: Principal Components Analysis (PCA). PCA involves
More informationIntroduction to Gaussian Processes
Introduction to Gaussian Processes 1 Objectives to express prior knowledge/beliefs about model outputs using Gaussian process (GP) to sample functions from the probability measure defined by GP to build
More informationIntroduction to Gaussian Processes
Introduction to Gaussian Processes Iain Murray murray@cs.toronto.edu CSC255, Introduction to Machine Learning, Fall 28 Dept. Computer Science, University of Toronto The problem Learn scalar function of
More informationFitting Linear Statistical Models to Data by Least Squares II: Weighted
Fitting Linear Statistical Models to Data by Least Squares II: Weighted Brian R. Hunt and C. David Levermore University of Maryland, College Park Math 420: Mathematical Modeling April 21, 2014 version
More informationLinear Algebra, Summer 2011, pt. 3
Linear Algebra, Summer 011, pt. 3 September 0, 011 Contents 1 Orthogonality. 1 1.1 The length of a vector....................... 1. Orthogonal vectors......................... 3 1.3 Orthogonal Subspaces.......................
More informationSparse polynomial chaos expansions as a machine learning regression technique
Research Collection Other Conference Item Sparse polynomial chaos expansions as a machine learning regression technique Author(s): Sudret, Bruno; Marelli, Stefano; Lataniotis, Christos Publication Date:
More informationLinear Regression. In this problem sheet, we consider the problem of linear regression with p predictors and one intercept,
Linear Regression In this problem sheet, we consider the problem of linear regression with p predictors and one intercept, y = Xβ + ɛ, where y t = (y 1,..., y n ) is the column vector of target values,
More informationIntroduction to Uncertainty Quantification in Computational Science Handout #3
Introduction to Uncertainty Quantification in Computational Science Handout #3 Gianluca Iaccarino Department of Mechanical Engineering Stanford University June 29 - July 1, 2009 Scuola di Dottorato di
More informationComputer Vision Group Prof. Daniel Cremers. 4. Gaussian Processes - Regression
Group Prof. Daniel Cremers 4. Gaussian Processes - Regression Definition (Rep.) Definition: A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution.
More informationMulti-Element Probabilistic Collocation Method in High Dimensions
Multi-Element Probabilistic Collocation Method in High Dimensions Jasmine Foo and George Em Karniadakis Division of Applied Mathematics, Brown University, Providence, RI 02912 USA Abstract We combine multi-element
More informationA comparison of polynomial chaos and Gaussian process emulation for uncertainty quantification in computer experiments
University of Exeter Department of Mathematics A comparison of polynomial chaos and Gaussian process emulation for uncertainty quantification in computer experiments Nathan Edward Owen May 2017 Supervised
More informationMultifidelity Information Fusion Algorithms for High- Dimensional Systems and Massive Data sets
Multifidelity Information Fusion Algorithms for High- Dimensional Systems and Massive Data sets The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story
More informationThere are two things that are particularly nice about the first basis
Orthogonality and the Gram-Schmidt Process In Chapter 4, we spent a great deal of time studying the problem of finding a basis for a vector space We know that a basis for a vector space can potentially
More informationADVANCED MACHINE LEARNING ADVANCED MACHINE LEARNING. Non-linear regression techniques Part - II
1 Non-linear regression techniques Part - II Regression Algorithms in this Course Support Vector Machine Relevance Vector Machine Support vector regression Boosting random projections Relevance vector
More information1 Last time: least-squares problems
MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that
More informationMath Real Analysis II
Math 4 - Real Analysis II Solutions to Homework due May Recall that a function f is called even if f( x) = f(x) and called odd if f( x) = f(x) for all x. We saw that these classes of functions had a particularly
More informationChapter 6. Eigenvalues. Josef Leydold Mathematical Methods WS 2018/19 6 Eigenvalues 1 / 45
Chapter 6 Eigenvalues Josef Leydold Mathematical Methods WS 2018/19 6 Eigenvalues 1 / 45 Closed Leontief Model In a closed Leontief input-output-model consumption and production coincide, i.e. V x = x
More informationIntroduction to SVM and RVM
Introduction to SVM and RVM Machine Learning Seminar HUS HVL UIB Yushu Li, UIB Overview Support vector machine SVM First introduced by Vapnik, et al. 1992 Several literature and wide applications Relevance
More informationExponential tail inequalities for eigenvalues of random matrices
Exponential tail inequalities for eigenvalues of random matrices M. Ledoux Institut de Mathématiques de Toulouse, France exponential tail inequalities classical theme in probability and statistics quantify
More informationOn Shapley value for measuring importance. of dependent inputs
Shapley value & variable importance 1 On Shapley value for measuring importance of dependent inputs Art B. Owen, Stanford joint with Clémentine Prieur, Grenoble Based on forthcoming JUQ article (2017).
More informationMAT Linear Algebra Collection of sample exams
MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system
More informationKernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt.
SINGAPORE SHANGHAI Vol TAIPEI - Interdisciplinary Mathematical Sciences 19 Kernel-based Approximation Methods using MATLAB Gregory Fasshauer Illinois Institute of Technology, USA Michael McCourt University
More informationStat 535 C - Statistical Computing & Monte Carlo Methods. Lecture 15-7th March Arnaud Doucet
Stat 535 C - Statistical Computing & Monte Carlo Methods Lecture 15-7th March 2006 Arnaud Doucet Email: arnaud@cs.ubc.ca 1 1.1 Outline Mixture and composition of kernels. Hybrid algorithms. Examples Overview
More informationGaussian processes and bayesian optimization Stanisław Jastrzębski. kudkudak.github.io kudkudak
Gaussian processes and bayesian optimization Stanisław Jastrzębski kudkudak.github.io kudkudak Plan Goal: talk about modern hyperparameter optimization algorithms Bayes reminder: equivalent linear regression
More informationData-driven Kriging models based on FANOVA-decomposition
Data-driven Kriging models based on FANOVA-decomposition Thomas Muehlenstaedt, Olivier Roustant, Laurent Carraro, Sonja Kuhnt To cite this version: Thomas Muehlenstaedt, Olivier Roustant, Laurent Carraro,
More informationFrom Gram-Charlier Series to Orthogonal Polynomials
From Gram-Charlier Series to Orthogonal Polynomials HC Eggers University of Stellenbosch Workshop on Particle Correlations and Fluctuations 2008 Kraków Earlier work on Gram-Charlier Series (GCS) Work by
More information