Local and Global Sensitivity Analysis


1 Omar Knio (1,2)
(1) Duke University, Department of Mechanical Engineering & Materials Science, omar.knio@duke.edu
(2) KAUST, Division of Computer, Electrical, Mathematical Science & Engineering, omar.knio@kaust.edu.sa

2 Motivation
In this section we will mainly be interested in the following question, fairly straightforward to state but, as it turns out, delicate and important: given $f(x, y)$, does $f$ vary more with $x$ or with $y$?
Why is this important? Planning; decision analysis and support; designing experiments or field campaigns; etc.
For the discussion that follows, assume we have a minimum of information, specifying:
- the shape of $f$ (equation, table, spreadsheet, simulation code, etc.)
- the range of $x$ and $y$
- optionally, a design point, say $(x_0, y_0)$.

3 L^2 functions over unit hypercubes
Let $L^2(U^d)$ be the space of real-valued square-integrable functions over the $d$-dimensional unit hypercube $U^d$:
\[
f : x \in U^d \mapsto f(x) \in \mathbb{R}, \qquad
f \in L^2(U^d) \iff \int_{U^d} f(x)^2 \, dx < \infty .
\]
$L^2(U^d)$ is equipped with the inner product $\langle \cdot, \cdot \rangle$,
\[
\forall f, g \in L^2(U^d), \quad \langle f, g \rangle := \int_{U^d} f(x)\, g(x)\, dx,
\]
and the norm $\| \cdot \|_2$,
\[
\forall f \in L^2(U^d), \quad \| f \|_2 := \langle f, f \rangle^{1/2} .
\]
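
As a quick numerical illustration (added here, not part of the slides), the inner product and norm above can be approximated by Monte Carlo sampling, since the Lebesgue measure on $U^d$ coincides with the uniform probability measure; the functions f and g below are arbitrary illustrative choices:

```python
import numpy as np

# Monte Carlo approximation of the L2(U^d) inner product and norm.
# f and g are illustrative functions, not taken from the slides.
d, N = 3, 200_000
rng = np.random.default_rng(0)
x = rng.uniform(size=(N, d))                # uniform samples in U^d

f = lambda x: np.sin(np.pi * x[:, 0]) * x[:, 1]
g = lambda x: x[:, 0] + x[:, 2] ** 2

inner_fg = np.mean(f(x) * g(x))             # <f, g>  = int_{U^d} f g dx
norm_f = np.sqrt(np.mean(f(x) ** 2))        # ||f||_2 = <f, f>^(1/2)
print(f"<f,g> ~ {inner_fg:.4f}, ||f||_2 ~ {norm_f:.4f}")
```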

4 L^2 functions over unit hypercubes (continued)
NB: with $L^2(U^d)$ defined as above, all subsequent developments extend immediately to product-type situations, where $x \in A = A_1 \times \cdots \times A_d \subseteq \mathbb{R}^d$, and to weighted spaces $L^2(A, \rho)$ with
\[
\rho : x \in A \mapsto \rho(x) \ge 0, \qquad \rho(x) = \rho_1(x_1) \cdots \rho_d(x_d)
\]
(e.g., $\rho$ is the pdf of a random vector $x$ with mutually independent components).

5 Ensemble notations
Let $D = \{1, 2, \ldots, d\}$. Given $i \subseteq D$, we denote by $\bar{i} := D \setminus i$ its complement set in $D$, so that
\[
i \cup \bar{i} = D, \qquad i \cap \bar{i} = \emptyset .
\]
For instance, $i = \{1, 2\}$ gives $\bar{i} = \{3, \ldots, d\}$, and $i = D$ gives $\bar{i} = \emptyset$.

6 Ensemble notations (continued)
Given $x = (x_1, \ldots, x_d)$, we denote by $x_i$ the vector whose components are the $x_j$ with $j \in i$, that is,
\[
D \supseteq i = \{i_1, \ldots, i_{|i|}\} \;\Rightarrow\; x_i = (x_{i_1}, \ldots, x_{i_{|i|}}),
\]
where $|i| := \operatorname{Card}(i)$. For instance,
\[
\int_{U^{|i|}} f(x)\, dx_i = \int_{U^{|i|}} f(x_1, \ldots, x_d) \prod_{j \in i} dx_j,
\qquad
\int_{U^{d - |i|}} f(x)\, dx_{\bar{i}} = \int_{U^{d - |i|}} f(x_1, \ldots, x_d) \prod_{j \notin i} dx_j .
\]
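
To make the $dx_i$ / $dx_{\bar{i}}$ notation concrete, here is a small Monte Carlo sketch (added, not from the slides) that integrates an illustrative function over the complementary variables $x_{\bar{i}}$, leaving a function of $x_i$ evaluated at one chosen point:

```python
import numpy as np

# Integrate f over the complementary variables x_ibar by Monte Carlo,
# which leaves a function of x_i alone (evaluated here at a single point).
# The model f and the point x_i = (0.3, 0.7) are illustrative choices.
d = 4
i = [0, 1]                                     # subset i = {1, 2} in the slides' 1-based notation
ibar = [k for k in range(d) if k not in i]     # complement set

def f(x):
    return x[:, 0] * x[:, 1] + np.exp(x[:, 2]) + x[:, 3] ** 2

rng = np.random.default_rng(1)
N = 200_000
x = np.empty((N, d))
x[:, i] = [0.3, 0.7]                           # freeze x_i
x[:, ibar] = rng.uniform(size=(N, len(ibar)))  # sample x_ibar uniformly on U^{d-|i|}

partial = np.mean(f(x))   # ~ int_{U^{d-|i|}} f(x) dx_ibar, evaluated at x_i = (0.3, 0.7)
print(f"partial integral at x_i = (0.3, 0.7): {partial:.4f}")
```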

7 Sobol-Hoeffding decomposition
Any $f \in L^2(U^d)$ has a unique hierarchical orthogonal decomposition of the form
\[
f(x) = f(x_1, \ldots, x_d) = f_0
+ \sum_{i=1}^{d} f_i(x_i)
+ \sum_{i=1}^{d} \sum_{j=i+1}^{d} f_{i,j}(x_i, x_j)
+ \sum_{i=1}^{d} \sum_{j=i+1}^{d} \sum_{k=j+1}^{d} f_{i,j,k}(x_i, x_j, x_k)
+ \cdots + f_{1,\ldots,d}(x_1, \ldots, x_d).
\]
Hierarchical: first-order functionals ($f_i$) → second-order functionals ($f_{i,j}$) → third-order functionals ($f_{i,j,k}$) → … → the $d$-th order functional ($f_{1,\ldots,d}$).
Using the ensemble notations, this is a decomposition into a sum of $2^d$ functionals:
\[
f(x) = \sum_{i \subseteq D} f_i(x_i).
\]

8 Sobol-Hoeffding decomposition (continued)
Orthogonal: the functionals of the S-H decomposition $f(x) = \sum_{i \subseteq D} f_i(x_i)$ verify the orthogonality relations
\[
\int_{U} f_i(x_i)\, dx_j = 0, \quad \forall i \subseteq D,\ j \in i,
\qquad
\int_{U^d} f_i(x_i)\, f_j(x_j)\, dx = \langle f_i, f_j \rangle = 0, \quad \forall i, j \subseteq D,\ i \neq j .
\]
It follows that the functionals can be constructed hierarchically:
\[
f_\emptyset = \int_{U^d} f(x)\, dx = \langle f \rangle,
\qquad
f_{\{i\}} = \int_{U^{d-1}} f(x)\, dx_{D \setminus \{i\}} - f_\emptyset = \langle f \rangle_{D \setminus \{i\}} - f_\emptyset, \quad i \in D,
\]
\[
f_i = \int_{U^{d - |i|}} f(x)\, dx_{\bar{i}} - \sum_{j \subsetneq i} f_j = \langle f \rangle_{\bar{i}} - \sum_{j \subsetneq i} f_j, \quad i \subseteq D,\ |i| \ge 2,
\]
where $\langle f \rangle_{j}$ denotes the integral of $f$ over the variables $x_j$.

9 Example
Consider a 3D example: $Y = f(X_1, X_2, X_3)$. Then the S-H decomposition yields
\[
\begin{aligned}
f_\emptyset &= E[Y],\\
f_1(X_1) &= E[Y \mid X_1] - f_\emptyset, \qquad
f_2(X_2) = E[Y \mid X_2] - f_\emptyset, \qquad
f_3(X_3) = E[Y \mid X_3] - f_\emptyset,\\
f_{12}(X_1, X_2) &= E[Y \mid X_1, X_2] - f_1 - f_2 - f_\emptyset,\\
f_{13}(X_1, X_3) &= E[Y \mid X_1, X_3] - f_1 - f_3 - f_\emptyset,\\
f_{23}(X_2, X_3) &= E[Y \mid X_2, X_3] - f_2 - f_3 - f_\emptyset,\\
f_{123}(X_1, X_2, X_3) &= E[Y \mid X_1, X_2, X_3] - f_{12} - f_{13} - f_{23} - f_1 - f_2 - f_3 - f_\emptyset\\
&= f(X_1, X_2, X_3) - f_{12} - f_{13} - f_{23} - f_1 - f_2 - f_3 - f_\emptyset .
\end{aligned}
\]
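
As a concrete worked instance of this construction (an added illustration, not taken from the slides), take the two-dimensional function $f(X_1, X_2) = X_1 X_2$ with $X_1, X_2$ independent and uniform on $[0,1]$:
\[
f_\emptyset = E[Y] = \tfrac14, \qquad
f_1(X_1) = E[Y \mid X_1] - f_\emptyset = \tfrac{X_1}{2} - \tfrac14, \qquad
f_2(X_2) = \tfrac{X_2}{2} - \tfrac14,
\]
\[
f_{12}(X_1, X_2) = X_1 X_2 - f_1 - f_2 - f_\emptyset = \bigl(X_1 - \tfrac12\bigr)\bigl(X_2 - \tfrac12\bigr).
\]
Each functional integrates to zero with respect to each of its arguments, consistent with the orthogonality relations of slide 8.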

10 Parametric sensitivity analysis
Consider $x$ as a set of $d$ independent random parameters uniformly distributed on $U^d$, and $f(x)$ a model output depending on these random parameters. It is assumed that $f$ is a second-order random variable: $f \in L^2(U^d)$. Thus $f$ has a unique S-H decomposition
\[
f(x) = \sum_{i \subseteq D} f_i(x_i).
\]
Further, the integrals of $f$ with respect to $x_{\bar{i}}$ are, in this context, conditional expectations,
\[
E[f \mid x_i] = \int_{U^{d - |i|}} f(x)\, dx_{\bar{i}} = g(x_i), \quad \forall i \subseteq D,
\]
so the S-H decomposition follows the hierarchical structure
\[
f_\emptyset = E[f], \qquad
f_{\{i\}} = E[f \mid x_{\{i\}}] - E[f], \quad i \in D, \qquad
f_i = E[f \mid x_i] - \sum_{j \subsetneq i} f_j, \quad i \subseteq D,\ |i| \ge 2 .
\]

11 Variance decomposition
Because of the orthogonality of the S-H decomposition, the variance $V[f]$ of the model output can be decomposed as
\[
V[f] = \sum_{\substack{i \subseteq D \\ i \neq \emptyset}} V[f_i], \qquad V[f_i] = \langle f_i, f_i \rangle .
\]
$V[f_i]$ is interpreted as the contribution to the total variance $V[f]$ of the interaction between the parameters $x_j$, $j \in i$. The S-H decomposition thus provides a rich means of analyzing the respective contributions of individual parameters, or sets of parameters, to the model-output variability. However, since there are $2^d - 1$ such contributions, one needs more "abstract" characterizations.
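
Continuing the toy example $f(X_1, X_2) = X_1 X_2$ introduced after slide 9 (added illustration):
\[
V[f_1] = V[f_2] = \frac{V[X_1]}{4} = \frac{1}{48}, \qquad
V[f_{12}] = V[X_1]\, V[X_2] = \frac{1}{144}, \qquad
V[f] = \frac{1}{48} + \frac{1}{48} + \frac{1}{144} = \frac{7}{144},
\]
which agrees with the direct computation $V[X_1 X_2] = E[X_1^2]E[X_2^2] - (E[X_1]E[X_2])^2 = \tfrac19 - \tfrac1{16} = \tfrac{7}{144}$.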

12 Sensitivity indices
To facilitate the hierarchization of the respective influences of the parameters, the partial variances $V[f_i]$ are normalized by $V[f]$ to obtain the sensitivity indices:
\[
S_i(f) = \frac{V[f_i]}{V[f]} \le 1, \qquad \sum_{\substack{i \subseteq D \\ i \neq \emptyset}} S_i(f) = 1 .
\]
The order of the sensitivity index $S_i$ is equal to $|i| = \operatorname{Card}(i)$.
First-order sensitivity indices. The $d$ first-order indices $S_{\{i\}}$, $i \in D$, characterize the fraction of the variance due to the parameter $x_i$ only, i.e. without any interaction with the others. Therefore,
\[
1 - \sum_{i=1}^{d} S_{\{i\}}(f) \ \ge\ 0
\]
measures globally the effect on the variability of all interactions between parameters. If $\sum_{i=1}^{d} S_{\{i\}} = 1$, the model is said to be additive, because its S-H decomposition reduces to
\[
f(x_1, \ldots, x_d) = f_0 + \sum_{i=1}^{d} f_i(x_i),
\]
and the impact of the parameters can be studied separately.
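
For the same toy example $f(X_1, X_2) = X_1 X_2$ (added illustration), the sensitivity indices are
\[
S_{\{1\}} = S_{\{2\}} = \frac{1/48}{7/144} = \frac37, \qquad
S_{\{1,2\}} = \frac{1/144}{7/144} = \frac17, \qquad
S_{\{1\}} + S_{\{2\}} + S_{\{1,2\}} = 1,
\]
so $1 - \sum_i S_{\{i\}} = \tfrac17$ of the variance comes from the interaction: this model is not additive.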

13 Sensitivity indices (continued)
Total sensitivity indices. The first-order SI $S_{\{i\}}$ measures the variability due to parameter $x_i$ alone. The total SI $T_{\{i\}}$ measures the variability due to parameter $x_i$, including all its interactions with the other parameters:
\[
T_{\{i\}} := \sum_{\substack{j \subseteq D \\ j \ni i}} S_j \ \ge\ S_{\{i\}} .
\]
Important point: for $x_i$ to be deemed non-important (non-influential) on the model output, $S_{\{i\}}$ and $T_{\{i\}}$ have to be negligible. Observe that $\sum_{i \in D} T_{\{i\}} \ge 1$; the excess over 1 characterizes the presence of interactions in the model output.
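
In practice, first-order and total indices are often estimated by Monte Carlo "pick-and-freeze" sampling rather than by building the S-H functionals explicitly. The sketch below uses the standard Saltelli and Jansen estimators (a technique not covered on these slides) on the toy model $f(x_1, x_2) = x_1 x_2$, whose exact values are $S_{\{i\}} = 3/7 \approx 0.43$ and $T_{\{i\}} = 4/7 \approx 0.57$:

```python
import numpy as np

# Pick-and-freeze Monte Carlo estimation of first-order (Saltelli) and
# total (Jansen) Sobol indices for the toy model f(x1, x2) = x1 * x2.
def f(x):
    return x[:, 0] * x[:, 1]

rng = np.random.default_rng(42)
N, d = 100_000, 2
A = rng.uniform(size=(N, d))          # first independent sample matrix
B = rng.uniform(size=(N, d))          # second independent sample matrix
yA, yB = f(A), f(B)
V = np.var(np.concatenate([yA, yB]))  # estimate of the total variance V[f]

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # A with column i taken from B
    yABi = f(ABi)
    S_i = np.mean(yB * (yABi - yA)) / V         # first-order index estimator
    T_i = 0.5 * np.mean((yA - yABi) ** 2) / V   # total index estimator
    print(f"S_{i+1} ~ {S_i:.3f} (exact 3/7), T_{i+1} ~ {T_i:.3f} (exact 4/7)")
```

The estimators require only $(d + 2)N$ model evaluations in total, which is why they are the workhorse when the model is a black box.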


15 Sensitivity indices for groups of parameters
In many uncertainty problems, the set of uncertain parameters can be naturally grouped into subsets according to the process each parameter accounts for: for instance, boundary conditions (BC), material properties ($\varphi$), and external forcing ($F$), with $D$ the union of these distinct subsets:
\[
D = D_{BC} \cup D_{\varphi} \cup D_{F}.
\]
The notions of first-order and total sensitivity indices extend to characterize the influence of such subsets of parameters. For instance,
\[
S_{D_\varphi} = \sum_{i \subseteq D_\varphi} S_i
\]
measures the fraction of variance induced by the material uncertainty alone, while
\[
T_{D_F} = \sum_{i \cap D_F \neq \emptyset} S_i
\]
measures the fraction of variance due to the external-forcing uncertainty and all its interactions.
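
For instance, with $d = 3$ and the group $u = \{1, 2\}$ (a generic added illustration):
\[
S_u = S_{\{1\}} + S_{\{2\}} + S_{\{1,2\}}, \qquad
T_u = \sum_{i \cap u \neq \emptyset} S_i = 1 - S_{\{3\}},
\]
since $\{3\}$ is the only non-empty subset of $D$ that does not intersect $u$.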

16 Recap
For a group of parameters $u \subseteq D$, with sensitivity index $S_u$ and total sensitivity index $T_u$ as defined above:
- It is easy to verify that $S_u + T_{\bar{u}} = S_{\bar{u}} + T_u = 1$, since every non-empty $i \subseteq D$ satisfies either $i \subseteq u$ or $i \cap \bar{u} \neq \emptyset$.
- It immediately follows that $\sum_i S_{\{i\}} \le 1$ while $\sum_i T_{\{i\}} \ge 1$.
- $S_u$ high means that $X_u$ has a substantial contribution to $V(Y)$.
- $T_u$ low means that $X_u$ has a weak contribution to $V(Y)$ (and so it is a candidate to be dropped).

17 S-H decomposition from PC expansions
Consider the model output $f : \xi \in \mathbb{R}^d \mapsto f(\xi) \in \mathbb{R}$, where $\xi = (\xi_1, \ldots, \xi_d)$ are independent real-valued random variables with joint probability density function
\[
p_\xi(x_1, \ldots, x_d) = \prod_{i=1}^{d} p_i(x_i).
\]
Let $\{\Psi_\alpha\}$ be the set of $d$-variate orthogonal polynomials,
\[
\Psi_\alpha(\xi) = \prod_{i=1}^{d} \psi^{(i)}_{\alpha_i}(\xi_i),
\]
with $\{\psi^{(i)}_l\}_{l \ge 0}$ the univariate polynomials mutually orthogonal with respect to the density $p_i$. If $f \in L^2(\mathbb{R}^d, p_\xi)$, it has a convergent PC expansion
\[
f(\xi) = \lim_{N_o \to \infty} \sum_{|\alpha| \le N_o} \Psi_\alpha(\xi)\, f_\alpha, \qquad |\alpha| = \sum_{i=1}^{d} \alpha_i .
\]
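
For example, if the $\xi_i$ are uniform the $\psi^{(i)}_k$ are Legendre polynomials, and if they are standard Gaussian the $\psi^{(i)}_k$ are Hermite polynomials (the Wiener-Askey correspondence). A common truncation keeps the total-degree multi-index set
\[
A = \{\alpha \in \mathbb{N}_0^d : |\alpha| \le N_o\}, \qquad
\operatorname{Card}(A) = \binom{d + N_o}{N_o}.
\]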

18 S-H decomposition from PC expansions (continued)
Given the truncated PC expansion of $f$,
\[
\hat{f}(\xi) = \sum_{\alpha \in A} \Psi_\alpha(\xi)\, f_\alpha,
\]
one can readily obtain the PC approximation of the S-H functionals through
\[
\hat{f}_i(\xi_i) = \sum_{\alpha \in A(i)} \Psi_\alpha(\xi_i)\, f_\alpha,
\]
where the multi-index set $A(i) \subsetneq A$ is given by
\[
A(i) = \{\alpha \in A;\ \alpha_j > 0 \text{ for } j \in i,\ \alpha_j = 0 \text{ for } j \notin i\}.
\]
For the sensitivity indices it follows that
\[
S_i(\hat{f}) = \frac{\sum_{\alpha \in A(i)} f_\alpha^2 \langle \Psi_\alpha, \Psi_\alpha \rangle}{\sum_{\alpha \in A,\ \alpha \neq 0} f_\alpha^2 \langle \Psi_\alpha, \Psi_\alpha \rangle},
\qquad
T_{\{i\}}(\hat{f}) = \frac{\sum_{\alpha \in T(i)} f_\alpha^2 \langle \Psi_\alpha, \Psi_\alpha \rangle}{\sum_{\alpha \in A,\ \alpha \neq 0} f_\alpha^2 \langle \Psi_\alpha, \Psi_\alpha \rangle},
\]
where $T(i) = \{\alpha \in A;\ \alpha_j > 0 \text{ for some } j \in i\}$.
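
A minimal sketch of this computation (added here, not from the slides): fit a total-degree Legendre PC expansion by least-squares regression to a hypothetical toy model with $\xi_1, \xi_2$ uniform on $[-1,1]$, then form the Sobol indices from the PC coefficients via the index sets $A(i)$ and $T(i)$. For $f(\xi) = \xi_1 + 2\xi_2 + \xi_1 \xi_2$ the exact values are $S_1 = 3/16$, $T_1 = 1/4$, $S_2 = 3/4$, $T_2 = 13/16$.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Regression-based Legendre PC expansion of a toy model with xi uniform on
# [-1,1]^2, followed by Sobol indices computed from the PC coefficients.
# The model and truncation are illustrative choices, not from the slides.
def model(xi):
    return xi[:, 0] + 2.0 * xi[:, 1] + xi[:, 0] * xi[:, 1]

d, No = 2, 2
alphas = [(a, b) for a in range(No + 1) for b in range(No + 1) if a + b <= No]

rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, size=(2000, d))
y = model(xi)

def eval_Psi(xi, alpha):
    # Psi_alpha(xi) = prod_i P_{alpha_i}(xi_i), with P_k the Legendre polynomials
    out = np.ones(xi.shape[0])
    for k, a in enumerate(alpha):
        c = np.zeros(a + 1)
        c[a] = 1.0
        out *= leg.legval(xi[:, k], c)
    return out

Psi = np.column_stack([eval_Psi(xi, a) for a in alphas])
f_alpha, *_ = np.linalg.lstsq(Psi, y, rcond=None)      # PC coefficients by regression

# <Psi_alpha, Psi_alpha> = prod_i 1/(2*alpha_i + 1) for the uniform density on [-1,1]^d
norms = np.array([np.prod([1.0 / (2 * a + 1) for a in alpha]) for alpha in alphas])
contrib = f_alpha**2 * norms
V = sum(c for c, alpha in zip(contrib, alphas) if any(alpha))   # exclude the mean term

for k in range(d):
    A_k = [c for c, alpha in zip(contrib, alphas)
           if alpha[k] > 0 and sum(alpha) == alpha[k]]          # alpha_j = 0 for j != k
    T_k = [c for c, alpha in zip(contrib, alphas) if alpha[k] > 0]
    print(f"S_{k+1} ~ {sum(A_k)/V:.3f}, T_{k+1} ~ {sum(T_k)/V:.3f}")
# expected: S_1 ~ 0.188, T_1 ~ 0.250, S_2 ~ 0.750, T_2 ~ 0.813
```

Since this toy model lies exactly in the span of the degree-2 basis, the regression recovers the coefficients essentially exactly and the indices match the analytic values; for a general model the result inherits the truncation and sampling error of the PC fit.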
