Sensitivity Analysis and Variable Screening

Chapter 7
Sensitivity Analysis and Variable Screening

7.1 Introduction

This chapter discusses sensitivity analysis and the related topic of variable screening. The set-up is as follows. A vector of inputs x = (x_1, ..., x_d) is given which potentially affects a response function y(x) = y(x_1, ..., x_d). Sensitivity analysis (SA) seeks to quantify how variation in y(x) can be apportioned to the inputs x_1, ..., x_d and to the interactions among these inputs. Variable screening is more decision oriented in that it seeks simply to determine, for each input, whether that input is active or not. However, the two notions are related, and variable screening procedures use some form of SA to assess the activity of each candidate input. Hence SA will be described first and then, using SA tools, two approaches to variable screening will be presented.

To fix ideas concerning SA, consider the function

    y(x_1, x_2) = x_1 + x_2    (7.1.1)

with domain (x_1, x_2) ∈ (0, 1) × (0, 2). One form of SA is based on examining the local change in y(x) as x_1 or x_2 increases by a small amount starting from (x_1, x_2). This change can be determined from the partial derivatives of y(·) with respect to x_1 and x_2; in this example,

    ∂y(x_1, x_2)/∂x_1 = 1 = ∂y(x_1, x_2)/∂x_2,    (7.1.2)

so that we can assert that small changes in the inputs parallel to the x_1 or the x_2 axis, starting from any input, have the same effect on y(·).

A more global assessment of the sensitivity of y(x) with respect to any component x_i, i = 1, ..., d, examines the change in y(x) as x_i ranges over its domain for fixed values of the remaining inputs. In the case of (7.1.1), for fixed x_1 it is easy to see that the range of y(x_1, x_2) as x_2 varies over (0, 2) is

    2 = max_{x_2} y(x_1, x_2) − min_{x_2} y(x_1, x_2) = y(x_1, 2) − y(x_1, 0),

which is twice as large as

    1 = max_{x_1} y(x_1, x_2) − min_{x_1} y(x_1, x_2),

the range of y(x_1, x_2) over x_1 for any fixed x_2. Thus this second method of assessing sensitivity concludes that y(x_1, x_2) is twice as sensitive to x_2 as to x_1.

This example illustrates two of the approaches that have been used to assess the influence of inputs on a given output. Local sensitivity analysis measures the slope of the tangent to y(x) at x in the direction of a given input axis j, fixing the remaining inputs. Global sensitivity analysis measures the change in y(x) as one (or more) of the inputs varies over its entire domain while the remaining inputs are fixed. As the example above shows, the two criteria can lead to different conclusions about the sensitivity of y(x) to its inputs.

When it is determined that certain inputs have relatively little effect on the output, we can set these inputs to nominal values and so reduce the dimensionality of the problem, allowing a more exhaustive investigation of a predictive model with a fixed budget of runs.

Sensitivity analysis is also useful for identifying interactions between variables. When interactions do not exist, the effect of any given input is the same regardless of the values of the other inputs; in this case, the relationship between the output and the inputs is said to be additive and is readily understandable (Figure 7.1, left panel). When interactions exist, the effects of some inputs on the output will depend on the values of other inputs (Figure 7.1, right panel).

Fig. 7.1 Left panel: a function y(x_1, x_2) with no x_1 × x_2 interaction, plotted against x_1 for x_2 = 1 and x_2 = 2; right panel: a function y(x_1, x_2) having an x_1 × x_2 interaction, plotted against x_1 for x_2 = 1 and x_2 = 2.

This chapter is but an introduction to the large and growing literature on sensitivity analysis. For a comprehensive discussion of local and global sensitivity measures and their estimation from training data, the reader should consult the two book-length treatments of global sensitivity analysis, Saltelli et al (2000, 2004). Section 7.7.1 describes alternative approaches (and references) to sensitivity analysis. The remainder of this chapter will emphasize methods of quantifying the global sensitivity of a code with respect to each of its inputs and of estimating these sensitivity indices. It will also describe a companion method of visualizing the sensitivity of a code to each input based on elementary effects. An efficient class of designs called one-at-a-time designs will be introduced for estimating elementary effects.
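The two viewpoints can be illustrated numerically. Below is a minimal Python sketch, with function and variable names of our own choosing, that contrasts the derivative-based local measure (7.1.2) with the range-based global measure for the function (7.1.1).

import numpy as np

def y(x1, x2):
    # The toy function (7.1.1) on the domain (0, 1) x (0, 2).
    return x1 + x2

# Local SA: finite-difference approximations to the partial derivatives at a
# base point; both equal 1, so the local view ranks the two inputs equally.
x0, h = np.array([0.5, 1.0]), 1e-6
d_dx1 = (y(x0[0] + h, x0[1]) - y(*x0)) / h
d_dx2 = (y(x0[0], x0[1] + h) - y(*x0)) / h

# Global SA: range of y as each input sweeps its own domain with the other
# input held fixed; the x2 sweep produces twice the range of the x1 sweep.
grid = np.linspace(0.0, 1.0, 201)
range_x1 = y(grid, x0[1]).max() - y(grid, x0[1]).min()                # = 1
range_x2 = y(x0[0], 2.0 * grid).max() - y(x0[0], 2.0 * grid).min()    # = 2

print(d_dx1, d_dx2, range_x1, range_x2)

Both finite-difference slopes equal 1, while the domain sweeps reproduce the 2-versus-1 range comparison above.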

7.2 Classical Approaches to Sensitivity Analysis

7.2.1 Sensitivity Analysis Based on Scatterplots and Correlations

Possibly the simplest approach to sensitivity analysis uses familiar graphical and numerical tools. A scatterplot of each input versus the output of the code provides a visual assessment of the marginal effect of each input on the output. The product moment correlations between each input and the output indicate the extent to which there is a linear association between the output and that input. Scatterplots are generally more informative than correlations because nonlinear relationships can be seen in plots, whereas correlations only indicate the presence of straight-line relationships.

As an example, Figure 1.7 on page 11 plots the failure depth of pockets punched into sheet metal (the output) versus clearance and versus fillet radius, two characteristics of the machine tool used to form the pockets. The scatterplot of failure depth versus clearance shows an increasing trend, suggesting that failure depth is sensitive to clearance. However, in the scatterplot of failure depth versus fillet radius, no trend appears to be present, suggesting that failure depth may not be sensitive to fillet radius.

One limitation of marginal scatterplots is that they do not allow assessment of possible interaction effects. Three graphical methods that can be used to explore two-factor interaction effects are: three-dimensional plots of the output versus pairs of inputs; two-dimensional plots that use different plotting symbols to represent the (possibly grouped) values of a second input; and a series of two-dimensional plots, each of whose panels uses only the data corresponding to a (possibly grouped) value of the second input. The latter are called trellis plots. Graphical displays that allow one to investigate three-way and higher interactions are possible but typically require some form of dynamic ability to morph the figure and experience in interpretation (see, e.g., Barton (1999)).

7.2.2 Sensitivity Analysis Based on Regression Modeling

Regression analysis provides another sensitivity analysis methodology that builds on familiar tools. The method below is most effective when the design is orthogonal or nearly orthogonal and a first-order linear model in the inputs x_1, ..., x_d (nearly) explains the majority of the variability in the output.
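As a concrete sketch of the scatterplot-and-correlation approach, suppose the n simulator runs are stored as an n × d input matrix X with output vector yout (hypothetical names; the stand-in "simulator" below is for illustration only):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 3))                  # stand-in design: n = 50 runs, d = 3 inputs
yout = 2.0 * X[:, 0] + np.sin(6.0 * X[:, 1])   # stand-in simulator output

# Product moment correlation between each input and the output. A small
# correlation (input 2 here) can mask a strong nonlinear relationship,
# which is why the marginal scatterplots are drawn as well.
for j in range(X.shape[1]):
    r = np.corrcoef(X[:, j], yout)[0, 1]
    print(f"input {j + 1}: correlation with output = {r:.3f}")

fig, axes = plt.subplots(1, X.shape[1], figsize=(9, 3))
for j, ax in enumerate(axes):
    ax.scatter(X[:, j], yout, s=10)
    ax.set_xlabel(f"x{j + 1}")
plt.show()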

The regression approach to sensitivity analysis first standardizes the output y(x) and all the inputs x_1, ..., x_d. If n runs of the simulator code have been made, each variable is standardized by subtracting that variable's mean and dividing the difference by the sample standard deviation. For example, fix an input j, 1 ≤ j ≤ d, and let x_{1,j}, ..., x_{n,j} denote the values of this variable for the n runs. Let \bar{x}_j denote the mean of x_{1,j}, ..., x_{n,j} and s_j their standard deviation. The standardized value of x_{i,j} is defined to be

    \tilde{x}_{i,j} = ( x_{i,j} − \bar{x}_j ) / s_j,  1 ≤ i ≤ n.

In a similar fashion, standardize the output values, yielding \tilde{y}_i, 1 ≤ i ≤ n. Now fit the first-order regression model

    \tilde{y} = β_0 + β_1 \tilde{x}_1 + ··· + β_d \tilde{x}_d    (7.2.1)

to the standardized variables. The regression coefficients in (7.2.1) are called the standardized regression coefficients (SRCs); β_j measures the change in \tilde{y} due to a one standard deviation change in input j. Because all variables have been placed on a common scale, the magnitudes of the estimated SRCs indicate the relative sensitivity of the output to each input. The output is judged most sensitive to those inputs whose SRCs are largest in absolute value.

The validity of the method depends on the overall fit of the regression model, as indicated by standard goodness-of-fit measures such as the coefficient of determination R². If the overall fit is poor, the SRCs do not reflect the effect of the inputs on the output. In addition, regression-based methods are most effective when the input design is orthogonal or at least space-filling, so that changes in the output due to one input cannot be masked by changes in another.

Input                     Est. β_i in (7.2.1)
Clearance                 0.875
Fillet Radius             0.249
Punch Plan View Radius    0.232
Width                     0.0937
Length                    0.0681
Lock Bead Distance        0.0171

Table 7.1 Estimated SRCs for the fitted standardized model (7.2.1).

Example 1.4 [Continued] Recall the data introduced in Section 1.2 that described the failure depth for a computational model of the operation of punching symmetric rectangular pockets in automobile steel sheets. Table 7.1 lists the estimated regression coefficients for model (7.2.1). This analysis is likely to be reasonable because the R² associated with the fitted model is 0.9273. The estimated regression coefficients suggest that the output is most sensitive to Clearance and then, roughly equally so, to the two inputs Fillet Radius and Punch Plan View Radius. The other inputs are of lesser importance.
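A minimal sketch of fitting (7.2.1) by least squares on standardized data; the helper name and the stand-in data are ours:

import numpy as np

def standardized_regression_coefficients(X, yout):
    # Standardize each column of the n x d design X and the output, fit the
    # first-order model (7.2.1) by least squares, and return the d estimated
    # SRCs together with the R^2 of the fit (the intercept is ~0 after
    # standardization).
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    ys = (yout - yout.mean()) / yout.std(ddof=1)
    A = np.column_stack([np.ones(len(ys)), Xs])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    fitted = A @ coef
    r2 = 1.0 - np.sum((ys - fitted) ** 2) / np.sum(ys ** 2)
    return coef[1:], r2

rng = np.random.default_rng(2)
X = rng.uniform(size=(60, 4))
yout = 3.0 * X[:, 0] + X[:, 1] + 0.1 * X[:, 2] + 0.05 * rng.normal(size=60)
srcs, r2 = standardized_regression_coefficients(X, yout)
print(np.round(srcs, 3), round(r2, 4))  # large |SRC| flags the active inputs

Replacing X and yout by their column ranks (e.g., via scipy.stats.rankdata) before the call gives the rank-regression variant discussed below.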

Example 1.3 [Continued] Recall the data introduced in Section 1.2 of the computed time for a fire to reach five feet above a fire source. The inputs affecting this time were the room area, room height, heat loss fraction, and height of the fire source above the room. Figure 3.3 on page 74 shows the marginal relationship between the output and each input based on a 40-point Sobol´ design. From these plots, it appears that the output is most sensitive to room area and not very sensitive to the remaining inputs. We perform a sensitivity analysis of the output function based on the 40-point training data for this example.

Input            Est. β_i in (7.2.1)
Heat Loss Frac.  0.1283
Fire Height      0.5347
Room Height      0.3426
Room Area        0.966

Table 7.2 Estimated SRCs for the fitted standardized model (7.2.1).

The fitted model (7.2.1) has R² = 0.98, suggesting that the output is highly linear in the four inputs and that the regression approach to sensitivity analysis is likely to be accurate. Table 7.2 lists the regression coefficients for this model. These values suggest that the single most important input is Room Area, followed by Fire Height, Room Height, and lastly by Heat Loss Fraction.

There are a number of variants on regression-based methods. Partial correlation coefficients (PCCs) between the output and the inputs can be used to assess sensitivity. PCCs measure the strength of the linear relationship between the output and a given input after adjusting for any linear effects of the other inputs. The relative sizes of the PCCs are used to assess the sensitivity of the output to the inputs. As for SRCs, the same two circumstances will compromise the validity of PCCs: if the overall fit of the model is poor, or if there is a high degree of collinearity among the predictors, PCCs need not provide accurate information about the sensitivity of the output to the inputs.

A third variant of the regression approach uses rank transforms of both the inputs and the outputs. The rank transformation is carried out as follows. Suppose that a variable has N values; assign rank 1 to the lowest value, rank 2 to the next lowest, and rank N to the largest value, using the average rank for ties. Then fit a first-order regression model to the transformed data. The estimated standardized regression coefficients or partial correlations are used to assess the sensitivity of the output to the inputs. Once again, if the overall fit of the first-order regression model is poor or collinearity masks the effects of one or more inputs, the standardized regression coefficients or partial correlations need not adequately describe the sensitivity of the output to the inputs. In practice, it has been observed that fitted regression models based on rank-transformed data often have higher R² values than those based on standardized data. This phenomenon can occur because the rank transformation can remove (certain) nonlinearities present in the original data. Thus, when monotone (but nonlinear) trends are present, there are some advantages to conducting a sensitivity analysis using the rank-transformed data. However, when one uses the rank-transformed data, one must keep in mind that the resulting sensitivity measures give information on the sensitivity of the rank-transformed output to the rank-transformed inputs rather than on the original variables.

A method that takes explicit account of the statistical significance of the estimated regression coefficients is a stepwise regression algorithm applied to the standardized inputs. For example, if forward stepwise regression is used, the first variable entered would be considered the most influential input, the second variable entered would be considered the second most influential input, and so on. As is usual in stepwise regression, one continues until the amount of variation explained by adding further variables is not considered meaningful according to some criterion selected by the user. Criteria such as the mean squared error, the F-statistic for testing whether the added variable significantly improves the model, Akaike's Information Criterion (AIC), the coefficient of determination R², or the adjusted R² can be used to determine when to stop the stepwise regression. For more on stepwise regression, see any standard text on regression, for example Draper and Smith (1981).

Whether one uses the standardized or the rank-transformed data, fitting first-order models gives no information about possible interactions or about non-monotone effects of the variables. If one has reason to believe that interactions are present, or that the relation between the output and some of the inputs is nonlinear and non-monotone, these regression methods will not give reliable information about sensitivities. One may wish to consider fitting higher-order models, such as a second-order response surface, to the output. Such a model allows one to explore second-order (quadratic) effects of inputs and two-factor interaction (cross-product) effects. For more on response surface methods, see Box and Draper (1987).

7.3 Sensitivity Analysis Based on Elementary Effects

The elementary effects (EEs) of a function y(x) = y(x_1, ..., x_d) having d inputs measure the sensitivity of y(x) to each x_j by directly measuring the change in y(x) when x_j alone is altered. From a geometric viewpoint, EEs are the slopes of secant lines parallel to each of the input axes. In symbols, given j ∈ {1, ..., d}, the j-th EE of y(x) at distance Δ is

    d_j(x) = [ y(x_1, ..., x_{j−1}, x_j + Δ, x_{j+1}, ..., x_d) − y(x) ] / Δ.    (7.3.1)

Specifically, d_j(x) is the slope of the secant line connecting y(x) and y(x + Δ e_j), where e_j = (0, 0, ..., 1, 0, ..., 0) is the j-th unit vector. For small Δ, d_j(x) is a numerical approximation to the j-th partial derivative of y(x) with respect to x_j evaluated at x and hence is a local sensitivity measure. However, in most of the literature, EEs are evaluated for large Δ at a widely sampled set of inputs x and hence are global sensitivity measures, measuring the (normalized) overall change in the output as each input moves parallel to its axis.
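A minimal sketch of (7.3.1), computing an elementary effect from two function evaluations; the test function below is an illustrative choice of ours, not from the text:

import numpy as np

def elementary_effect(y, x, j, delta):
    # d_j(x) of (7.3.1): slope of the secant line in the direction of
    # input j (0-based index) with step delta.
    x = np.asarray(x, dtype=float)
    x_step = x.copy()
    x_step[j] += delta
    return (y(x_step) - y(x)) / delta

def y_toy(x):
    # Illustrative test function: linear in x1, quadratic in x2,
    # with an x1*x2 interaction.
    return x[0] + 2.0 * x[1] ** 2 + x[0] * x[1]

print(elementary_effect(y_toy, [0.2, 0.5], 0, 0.3))  # = 1 + x2 = 1.5 for any delta
print(elementary_effect(y_toy, [0.2, 0.5], 1, 0.3))  # = 2(2*x2 + delta) + x1 = 2.8

Note that the EE of the linear-plus-interaction input depends on the other input's value, and the EE of the quadratic input depends on both the base point and delta, foreshadowing the interpretation developed in Example 7.1.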

Example 7.1. To gain intuition about the interpretation of EEs, consider the following simple analytic output function

    y(x) = 1.0 + 1.5 x_2 + 1.5 x_3 + 0.6 x_4 + 1.7 x_4² + 0.7 x_5 + 0.8 x_6 + 0.5 x_5 x_6    (7.3.2)

of d = 6 inputs, where x = (x_1, x_2, x_3, x_4, x_5, x_6) and x_j ∈ [0, 1] for j = 1, ..., 6. Notice that y(x) is functionally independent of x_1, is linear in x_2 and x_3, is nonlinear in x_4, and contains an interaction in x_5 and x_6. Note that if the range of x_3 were modified to be [0, 2], then the larger range of x_3 compared with x_2 would mean that any reasonable assessment of global sensitivity of y(x) to the inputs should conclude that x_3 is more active than x_2. Straightforward algebra gives the value of y(x + Δ e_j) − y(x), j = 1, ..., 6, and hence the EEs of y(x) can be calculated exactly as

1. d_1(x) = 0,
2. d_2(x) = 1.5 = d_3(x),
3. d_4(x) = 0.6 + 1.7 Δ + 3.4 x_4,
4. d_5(x) = 0.7 + 0.5 x_6, and d_6(x) = 0.8 + 0.5 x_5.

The EEs for this example are interpreted as follows. The EE of the totally inactive variable x_1 is zero because y(x) is functionally independent of x_1. The EEs of the additive linear terms x_2 and x_3 are the same non-zero constant, 1.5, and hence (7.3.2) is judged to be equally sensitive to x_2 and x_3. In general, the EEs of additive linear terms are local sensitivity measures; from the global viewpoint, which assesses the sensitivity of y(x) to each input by the change in the output as the input moves over its range, (7.3.2) would be more sensitive to x_3 than to x_2 if x_3 had the larger range [0, 2]. The EE of the quadratic term x_4 depends on both the starting x_4 and Δ; hence for fixed Δ, d_4(x) will vary with x_4 alone. Lastly, for the interacting inputs x_5 and x_6, the EE d_5(x) depends on x_6 while d_6(x) depends on x_5.

EEs can be used as an exploratory data analysis tool as follows. Suppose that each d_j(x), j = 1, ..., d, has been computed for r vectors, say x_1^j, ..., x_r^j. Let \bar{d}_j and S_j denote the sample mean and sample standard deviation, respectively, of d_j(x_1^j), ..., d_j(x_r^j). Then, as Morris (1991) states, an input x_j having a small \bar{d}_j and small S_j is non-influential; an input with a large \bar{d}_j and small S_j has a strong linear effect on y(x); and an input with a large S_j (and either a large or small \bar{d}_j) either has a nonlinear effect in x_j or has strong interactions with other inputs.

Example 7.1 [Continued] To illustrate, consider (7.3.2) in Example 7.1. Suppose that y(x) is evaluated for each row of Table 7.3. Table 7.3 contains five blocks of seven rows; each block begins with the tour's randomly chosen starting point. The difference of the y(x) values computed from consecutive pairs of rows within each block provides one d_j(x) value. Note that every such pair of rows differs by ±Δ = ±0.3 in a single input location. In all, these output function evaluations provide r = 5 evaluations of each d_j(x), at five different x. The construction of the input table, the design of these runs, will be discussed in the subsequent paragraphs. Here, only the interpretation of the plot is considered.

Fig. 7.2 Plot of the \bar{d}_j and S_j values for the EEs computed for the function y(x) in (7.3.2) using the design in Table 7.3.

Figure 7.2 plots the \bar{d}_j and S_j values from the individual EEs computed in the previous paragraph. Because 0 = S_1 = S_2 = S_3, the plot shows that d_1(x), d_2(x), and d_3(x) are each constant with values 0.0, 1.5, and 1.5, respectively (as the theoretical calculations showed). Hence these (\bar{d}_j, S_j) points are interpreted as saying that y(x) is functionally independent of x_1 and contains an additive linear term in each of x_2 and x_3. Because S_j > 0 for j = 4, 5, and 6, the corresponding d_j(x) values vary with x. Hence one can conclude that either y(x) is not linear in x_j or that x_j interacts with the other inputs.

How should one select runs in order to efficiently estimate the EEs of y(x)? By definition, each d_j(x) requires two function evaluations. Hence, at first impulse, one might think a total of 2r function evaluations per input would be required to estimate r EEs. Morris (1991) proposed a more efficient one-at-a-time (OAT) sampling design for estimating the EEs when the input region is rectangular. This method is particularly useful for providing a sensitivity analysis of an expensive black-box computer simulator. To introduce the method, consider evaluating a function y(x) = y(x_1, x_2, x_3, x_4, x_5) having input domain [0, 1]^5 at each of the six (input) rows listed in Table 7.4.
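Before examining Table 7.4 in detail, here is a sketch of the OAT idea: it builds random tours of the kind just described and computes the \bar{d}_j and S_j statistics for the function (7.3.2). The grid of starting points and the tour generator are simplified assumptions of ours, not the exact construction behind Table 7.3.

import numpy as np

rng = np.random.default_rng(3)

def y732(x):
    # The function (7.3.2) of Example 7.1.
    return (1.0 + 1.5 * x[1] + 1.5 * x[2] + 0.6 * x[3] + 1.7 * x[3] ** 2
            + 0.7 * x[4] + 0.8 * x[5] + 0.5 * x[4] * x[5])

def random_tour(d, delta, grid):
    # One OAT tour: d + 1 rows in [0, 1]^d; successive rows differ by
    # +/- delta in a single input, the inputs visited in random order.
    x = rng.choice(grid, size=d).astype(float)
    rows = [x.copy()]
    for j in rng.permutation(d):
        x[j] += delta if x[j] + delta <= 1.0 else -delta  # stay inside [0, 1]
        rows.append(x.copy())
    return np.array(rows)

d, delta, r = 6, 0.3, 5
grid = np.arange(0.0, 0.71, 0.05)  # starting grid keeps every tour row in [0, 1]
effects = np.zeros((r, d))
for t in range(r):
    tour = random_tour(d, delta, grid)
    for i in range(d):
        j = int(np.nonzero(tour[i + 1] != tour[i])[0][0])  # input changed at step i
        effects[t, j] = (y732(tour[i + 1]) - y732(tour[i])) / (tour[i + 1, j] - tour[i, j])

print("d-bar:", np.round(effects.mean(axis=0), 3))
print("S:    ", np.round(effects.std(axis=0, ddof=1), 3))

Each tour yields one EE per input from only d + 1 = 7 runs, rather than the 2d = 12 runs of the naive scheme.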

Run   x_1   x_2   x_3   x_4   x_5
1     0.8   0.7   1.0   0.7   0.7
2     0.5   0.7   1.0   0.7   0.7
3     0.5   1.0   1.0   0.7   0.7
4     0.5   1.0   0.7   0.7   0.7
5     0.5   1.0   0.7   1.0   0.7
6     0.5   1.0   0.7   1.0   0.4

Table 7.4 Six input vectors at which a function of five inputs (x_1, x_2, x_3, x_4, x_5) is to be evaluated.

The y(x) evaluations at Runs 1 and 2 can be used to compute d_1((0.8, 0.7, 1.0, 0.7, 0.7)) for Δ = 0.3. Similarly, the y(x) differences for the succeeding consecutive pairs of rows provide, in order, estimates of d_2(x), d_3(x), d_4(x), and d_5(x) for different x, but each using Δ = 0.3. Such a set of rows is termed a tour of the input space.

For a given distance Δ, OAT designs provide a set of (d + 1) × d tours, with each tour starting at a randomly selected point of the input space and having successive rows that differ in a single input by ±Δ. Each tour of the Morris (1991) OAT design is a (d + 1) × d matrix whose rows are valid input vectors for y(·); the input domain is taken to be [0, 1]^5 for this example. Tours are determined by a starting vector x, a permutation of (1, 2, ..., d) that specifies the input to be modified in successive pairs of rows, the choice of a Δ > 0, and a d × 1 vector (±1, ..., ±1) of directional movements for Δ in the successive pairs of rows. For example, the first tour in Table 7.4 uses the starting vector x = (0.8, 0.7, 1.0, 0.7, 0.7), alters the inputs in succeeding rows in the order (1, 2, 3, 4, 5) with Δ = 0.3 and signs (−1, +1, −1, +1, −1). Each row of the tour is a valid input of the function because each row is an element of [0, 1]^5.

In examples where the design consists of multiple tours, Morris (1991) selects the magnitude of Δ to be 30% of the common range of each variable, takes a random permutation of (1, ..., d), makes a random selection of the directional sign to be associated with each input, and selects the starting x randomly from a gridding of the input space made in such a way that all rows of a selected tour are valid inputs.

x_1   x_2   x_3   x_4   x_5   x_6
0.50  0.60  0.50  0.40  0.50  0.35
0.80  0.60  0.50  0.40  0.50  0.35
0.80  0.90  0.50  0.40  0.50  0.35
0.80  0.90  0.80  0.40  0.50  0.35
0.80  0.90  0.80  0.10  0.50  0.35
0.80  0.90  0.80  0.10  0.20  0.35
0.80  0.90  0.80  0.10  0.20  0.05

0.85  0.30  0.90  0.65  0.30  0.40
0.55  0.30  0.90  0.65  0.30  0.40
0.55  0.00  0.90  0.65  0.30  0.40
0.55  0.00  0.60  0.65  0.30  0.40
0.55  0.00  0.60  0.95  0.30  0.40
0.55  0.00  0.60  0.95  0.60  0.40
0.55  0.00  0.60  0.95  0.60  0.10

0.65  0.00  0.35  0.75  0.45  0.60
0.35  0.00  0.35  0.75  0.45  0.60
0.35  0.30  0.35  0.75  0.45  0.60
0.35  0.30  0.05  0.75  0.45  0.60
0.35  0.30  0.05  0.45  0.45  0.60
0.35  0.30  0.05  0.45  0.75  0.60
0.35  0.30  0.05  0.45  0.75  0.90

1.00  0.90  0.05  0.35  0.05  0.40
1.00  0.60  0.05  0.35  0.05  0.40
1.00  0.60  0.35  0.35  0.05  0.40
1.00  0.60  0.35  0.05  0.05  0.40
1.00  0.60  0.35  0.05  0.35  0.40
1.00  0.60  0.35  0.05  0.35  0.10
0.70  0.60  0.35  0.05  0.35  0.10

0.40  0.35  0.60  0.00  0.35  0.60
0.10  0.35  0.60  0.00  0.35  0.60
0.10  0.05  0.60  0.00  0.35  0.60
0.10  0.05  0.90  0.00  0.35  0.60
0.10  0.05  0.90  0.30  0.35  0.60
0.10  0.05  0.90  0.30  0.05  0.60
0.10  0.05  0.90  0.30  0.05  0.30

Table 7.3 An OAT design with r = 5 complete tours for a d = 6 input function with domain [0, 1]^6 and Δ = 0.3. The first row of each block of seven rows is the starting point of a tour.

Example 1.5 [Continued] We conclude with an application of EE construction for the analytic borehole example whose formula, inputs, and ranges are repeated in (7.3.3) and Table 7.5 for convenience:

    y(x) = 2π T_u (H_u − H_l) / { ln(r/r_w) [ 1 + 2 L T_u / ( ln(r/r_w) r_w² K_w ) + T_u / T_l ] }.    (7.3.3)

Notation  Input                                       Units   Range
r         Radius of influence                         m       [100, 50000]
r_w       Radius of the borehole                      m       [0.05, 0.15]
T_u       Transmissivity of the upper aquifer         m²/yr   [63070, 115600]
H_u       Potentiometric head of the upper aquifer    m       [990, 1110]
T_l       Transmissivity of the lower aquifer         m²/yr   [63.1, 116]
H_l       Potentiometric head of the lower aquifer    m       [700, 820]
L         Length of the borehole                      m       [1120, 1680]
K_w       Hydraulic conductivity of borehole          m/yr    [9855, 12045]

Table 7.5 Inputs and ranges of the borehole function (7.3.3).

A Morris OAT design with r = 5 tours based on random starting points was constructed, where Δ was selected to be approximately 40% of the range of each input. The OAT design is listed in Table 7.6, with the first row of each block of nine runs being the starting point of a tour.
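A sketch of the borehole function (7.3.3); the function name and argument order are our own:

import numpy as np

def borehole(r, rw, Tu, Hu, Tl, Hl, L, Kw):
    # Water flow rate (m^3/yr) through a borehole, equation (7.3.3).
    log_ratio = np.log(r / rw)
    denom = log_ratio * (1.0 + 2.0 * L * Tu / (log_ratio * rw ** 2 * Kw) + Tu / Tl)
    return 2.0 * np.pi * Tu * (Hu - Hl) / denom

# Evaluate at the starting point of the first tour in Table 7.6:
print(borehole(r=14483, rw=0.098, Tu=103714, Hu=1032.4,
               Tl=109.37, Hl=764.2, L=1660.8, Kw=9899.2))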

r        r_w    T_u     T_l     H_u     H_l    L       K_w
14483    0.098  103714  109.37  1032.4  764.2  1660.8   9899.2
14483    0.098   82702  109.37  1032.4  764.2  1660.8   9899.2
14483    0.098   82702   88.214 1032.4  764.2  1660.8   9899.2
14483    0.098   82702   88.214 1032.4  812.2  1660.8   9899.2
14483    0.098   82702   88.214 1080.4  812.2  1660.8   9899.2
14483    0.098   82702   88.214 1080.4  812.2  1436.8   9899.2
34123    0.098   82702   88.214 1080.4  812.2  1436.8   9899.2
34123    0.098   82702   88.214 1080.4  812.2  1436.8  10775
34123    0.058   82702   88.214 1080.4  812.2  1436.8  10775

 2083.8  0.144   87266  112.05  1004.5  755.3  1419.8  11771
 2083.8  0.104   87266  112.05  1004.5  755.3  1419.8  11771
 2083.8  0.104   66254  112.05  1004.5  755.3  1419.8  11771
 2083.8  0.104   66254   90.886 1004.5  755.3  1419.8  11771
 2083.8  0.104   66254   90.886 1004.5  755.3  1643.8  11771
 2083.8  0.104   66254   90.886 1004.5  707.3  1643.8  11771
 2083.8  0.104   66254   90.886 1052.5  707.3  1643.8  11771
 2083.8  0.104   66254   90.886 1052.5  707.3  1643.8  10895
21724    0.104   66254   90.886 1052.5  707.3  1643.8  10895

12003    0.116  105837  100.82  1041.6  747.3  1329.3  11638
12003    0.116  105837  100.82  1041.6  747.3  1329.3  10762
12003    0.076  105837  100.82  1041.6  747.3  1329.3  10762
12003    0.076  105837  100.82  1041.6  747.3  1553.3  10762
12003    0.076  105837  100.82   993.64 747.3  1553.3  10762
12003    0.076  105837  100.82   993.64 795.3  1553.3  10762
31643    0.076  105837  100.82   993.64 795.3  1553.3  10762
31643    0.076  105837   79.665  993.64 795.3  1553.3  10762
31643    0.076   84825   79.665  993.64 795.3  1553.3  10762

28171    0.11    81111   91.96  1060.3  735.2  1148.3  10275
 8531.3  0.11    81111   91.96  1060.3  735.2  1148.3  10275
 8531.3  0.11    81111  113.11  1060.3  735.2  1148.3  10275
 8531.3  0.11    81111  113.11  1060.3  735.2  1372.3  10275
 8531.3  0.11    81111  113.11  1108.3  735.2  1372.3  10275
 8531.3  0.11    81111  113.11  1108.3  783.2  1372.3  10275
 8531.3  0.11    81111  113.11  1108.3  783.2  1372.3  11151
 8531.3  0.11   102123  113.11  1108.3  783.2  1372.3  11151
 8531.3  0.07   102123  113.11  1108.3  783.2  1372.3  11151

35115    0.098   90980   64.703 1055.5  726.7  1238.8  11527
35115    0.098   69968   64.703 1055.5  726.7  1238.8  11527
35115    0.098   69968   64.703 1103.5  726.7  1238.8  11527
35115    0.058   69968   64.703 1103.5  726.7  1238.8  11527
35115    0.058   69968   64.703 1103.5  726.7  1238.8  10651
15475    0.058   69968   64.703 1103.5  726.7  1238.8  10651
15475    0.058   69968   64.703 1103.5  774.7  1238.8  10651
15475    0.058   69968   85.863 1103.5  774.7  1238.8  10651
15475    0.058   69968   85.863 1103.5  774.7  1462.8  10651

Table 7.6 A Morris OAT design for the borehole function (7.3.3) with r = 5 tours and d = 8 inputs. The first row of each block of nine runs is the starting point of a tour.

The \bar{d}_k and S_k values computed from this design, k = 1, ..., 8, are listed in Table 7.7. From the signs of the EEs we conclude that the annual flow rate increases greatly in r_w, increases at a slower rate in H_u, and decreases in H_l and L. The plot of the \bar{d}_k and S_k values is shown in Figure 7.3.

   Input   \bar{d}_k    S_k
1  r       −1.8e−6      1.81e−6
2  r_w     1436.5       275.25
3  T_u     4.55e−9      3.15e−9
4  T_l     2.58e−3      2.3257e−3
5  H_u     0.222        0.68
6  H_l     −0.184       0.79
7  L       −0.041       0.23
8  K_w     6.81e−3      2.5132e−3

Table 7.7 The \bar{d}_k and S_k values for the borehole function based on the OAT design in Table 7.6.

Fig. 7.3 Plot of the (\bar{d}_k, S_k) values for the borehole function based on the OAT design in Table 7.6, for the seven inputs other than r_w (whose values lie far outside the plotting region).

A number of enhancements have been proposed to the basic Morris (1991) OAT design. Campolongo et al (2007) suggest ways of making OAT designs more space-filling. They propose selecting the desired number, r, of tours to maximize a heuristic distance criterion between all pairs of tours. Their R package sensitivity implements this criterion; it was used to construct the design in Table 7.3. Pujol (2009) proposed a method of constructing OAT designs whose projections onto subsets of the input space are not collapsing. This is important if y(x) depends only on a subset of active inputs. For example, suppose that y(x) depends (primarily) on x_j for j ∈ {1, 3, 5} and the other x_l are inactive. If multiple x inputs from the selected EE design have common x_j for j ∈ {1, 3, 5}, then the y(x) evaluations at these x would produce (essentially) the same output and hence little information about the input-output relationship. Campolongo et al (2011) introduced an OAT design that spreads the starting points using a Sobol´ sequence and uses a different Δ for each pair of input vectors.

Finally, Sun et al (2014) introduced an OAT design that can be used for non-rectangular input regions, together with an alternative method to Campolongo et al (2007) for spreading the secant lines of the design over the input space.

7.4 Global Sensitivity Analysis

Often, one of the first tasks in the analysis of simulator output y(x) is the rough assessment of the sensitivity of y(x) to each input x_k. In a combined physical system/simulator setting, the analogous issue is the determination of the sensitivity of the mean response of the physical system to each input. Sobol´ (1990), Welch et al (1992), and Sobol´ (1993) introduced plotting methods to make such an assessment: main effect plots and joint effect plots. They also defined various numerical sensitivity indices (SIs) to make such assessments. This section will define these effect plots and a functional ANOVA decomposition of the output y(x) that is used to define global SIs.

More formal variable screening methods have been developed for computer simulators by Linkletter et al (2006) and Moon et al (2012). The methodology in both papers assumes training data are available to construct a Gaussian process emulator of the output at arbitrary input sites (with Gaussian correlation function). Linkletter et al (2006) use the (posterior distribution of the) estimated process correlation for each input to assess its impact on y(x), while Moon et al (2012) calculate each input's total effect index for the same purpose.

To ease the notational burden, from this point on assume that y(x) has a hyperrectangular input domain, which is taken to be [0, 1]^d. If the input domain of y(x) is ∏_{j=1}^d [a_j, b_j], one should apply the methods below to the function

    y*(x_1, ..., x_d) = y( a_1 + x_1 (b_1 − a_1), ..., a_d + x_d (b_d − a_d) ).

When the input domain is not a hyperrectangle, there are several papers that consider analogs of the effect function definitions and sensitivity indices defined below; however, this topic is an area of active research as of the writing of this book (Loeppky et al (2011)).

Section 7.4.1 introduces (uncentered) main effect and joint effect functions for a given y(x), which are weighted y(x) averages. Then Section 7.4.2 describes an ANOVA-like expansion of y(x) in terms of centered and orthogonalized versions of the main and joint effect functions. Section 7.4.3 defines sensitivity indices for individual inputs and groups of inputs in terms of the variability of these functional ANOVA components. Section 7.5 provides methods for estimating these plots and indices based on a set of y(x) runs. The emphasis in Section 7.5 will be on methods for simulators having expensive code runs, so that limited amounts of training data are available.
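A minimal sketch of the rescaling y*; the wrapper name is ours, and the example assumes the borehole function defined in the earlier sketch, with ranges from Table 7.5:

import numpy as np

def to_unit_cube(y, lower, upper):
    # Return y*(x) = y(a + x*(b - a)): the version of y whose
    # hyperrectangular domain prod_j [a_j, b_j] is mapped onto [0, 1]^d.
    a, b = np.asarray(lower, float), np.asarray(upper, float)
    return lambda x: y(a + np.asarray(x, float) * (b - a))

# Ranges from Table 7.5, in the argument order (r, r_w, T_u, H_u, T_l, H_l, L, K_w)
# of the borehole sketch above.
lower = [100, 0.05, 63070, 990, 63.1, 700, 1120, 9855]
upper = [50000, 0.15, 115600, 1110, 116, 820, 1680, 12045]
y_star = to_unit_cube(lambda x: borehole(*x), lower, upper)
print(y_star(np.full(8, 0.5)))  # borehole evaluated at the center of its domain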

7.4.1 Main Effect and Joint Effect Functions

Given a weight function of the form w(x) = ∏_{k=1}^d g_k(x_k), where g_k(·) is a probability density function on [0, 1], denote the overall mean of y(·) with respect to w(x) by

    y_0 ≡ ∫ y(x_1, ..., x_d) ∏_{k=1}^d g_k(x_k) dx_k.

Thus y_0 can be interpreted as the expectation E[ y(X) ], where X = (X_1, ..., X_d) has joint probability density ∏_{k=1}^d g_k(x_k). The weighted mean is useful, for example, in scientific settings where the output depends on inputs that are not known exactly but whose uncertainty can be specified by independent input distributions. If the uncertainty in X can only be specified by a weight function w(x) that has non-independent input components, then analogs of some of the results below still hold; settings where the results hold for general w(x) will be noted in this section. For notational simplicity, the development below assumes that the weight function corresponds to independent and identically distributed U(0, 1) components, so that

    y_0 ≡ ∫ y(x_1, ..., x_d) ∏_{k=1}^d dx_k.    (7.4.1)

Similarly, the k-th main effect function of y(x), k = 1, ..., d, is defined to be

    u_k(x_k) = ∫ y(x_1, ..., x_d) ∏_{l≠k} dx_l = E[ y(X) | X_k = x_k ],    (7.4.2)

which is the average y(x) value when x_k is fixed. The expectation notation uses the fact that the components of X are independent.

The idea of averaging y(x) when a single input is fixed can be extended to fixing multiple inputs. Select a non-empty subset Q of {1, ..., d} for which the complement Q̄ = {1, ..., d} \ Q is also non-empty (so that the integral below averages over at least one variable). Let x_Q denote the vector of x_k components with k ∈ Q, arranged in order of increasing k. Define the joint effect function of y(x_1, ..., x_d) when x_Q is fixed to be

    u_Q(x_Q) = ∫ y(x_1, ..., x_d) ∏_{l∉Q} dx_l = E[ y(X) | X_Q = x_Q ],    (7.4.3)

which is the average y(x_1, ..., x_d) value when the components of x_Q are held constant. For completeness, set u_{12...d}(x_1, ..., x_d) ≡ y(x_1, ..., x_d) (when Q = {1, ..., d}).

For any Q ⊂ {1, ..., d} it is straightforward to see that the mean of the joint effect function u_Q(X_Q) with respect to all its arguments X_Q is y_0, i.e.,

    E[ u_Q(X_Q) ] = ∫_0^1 u_Q(x_Q) dx_Q = y_0.    (7.4.4)

The mean of u_Q(x_Q) with respect to any collection of x_i whose indices i form a proper subset of Q will vary and, in particular, need not be zero. Hence the u_Q(x_Q) are sometimes called uncentered effect functions. In contrast, the ANOVA-type centered versions of the u_Q(x_Q) that are defined in the next subsection have zero mean with respect to any set of x_i with i ∈ Q. Effect functions u_Q(x_Q) can be defined for non-independent weight functions by the conditional expectation operation in (7.4.3). However, their interpretation and usefulness are more limited than in the independence case.

Example 7.2. Suppose y(x_1, x_2) = 2x_1 + x_2 is defined on [0, 1]². Then the overall mean and u_Q effect functions are

    y_0 = ∫_0^1 ∫_0^1 (2x_1 + x_2) dx_2 dx_1 = 1.5,
    u_1(x_1) = ∫_0^1 (2x_1 + x_2) dx_2 = 0.5 + 2x_1,
    u_2(x_2) = ∫_0^1 (2x_1 + x_2) dx_1 = 1.0 + x_2,

and u_12(x_1, x_2) = y(x_1, x_2) = 2x_1 + x_2. Illustrating the fact (7.4.4) that y_0 is the mean of every u_Q(X_Q) with respect to X_Q, it is simple to calculate that

    ∫_0^1 u_1(x_1) dx_1 = ∫_0^1 u_2(x_2) dx_2 = ∫_0^1 ∫_0^1 u_12(x_1, x_2) dx_1 dx_2 = 1.5.

As shown in Figure 7.4, plots of the main effect functions provide accurate information about how this simple function behaves in x_1 and x_2.

Fig. 7.4 Plots of the main effect functions u_1(x_1) and u_2(x_2) for y(x_1, x_2) = 2x_1 + x_2.
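A minimal Monte Carlo sketch of (7.4.2) and (7.4.4) for Example 7.2; the estimator and its names are our own choices:

import numpy as np

rng = np.random.default_rng(4)

def main_effect(y, k, xk, d, n=100_000):
    # Monte Carlo estimate of u_k(x_k) = E[y(X) | X_k = x_k] in (7.4.2):
    # average y over U(0,1) draws of the other d-1 inputs with X_k pinned.
    X = rng.uniform(size=(n, d))
    X[:, k] = xk
    return y(X).mean()

y_ex = lambda X: 2.0 * X[:, 0] + X[:, 1]      # Example 7.2
print(main_effect(y_ex, 0, 0.25, d=2))        # u_1(0.25) = 0.5 + 2(0.25) = 1.0
print(main_effect(y_ex, 1, 0.25, d=2))        # u_2(0.25) = 1.0 + 0.25 = 1.25
# Averaging u_1 over random x_1 recovers y_0 = 1.5, property (7.4.4):
print(np.mean([main_effect(y_ex, 0, x, d=2, n=20_000) for x in rng.uniform(size=50)]))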

Example 7.3. The so-called g-function with d inputs is defined to be

    y(x_1, ..., x_d) = ∏_{k=1}^d ( |4x_k − 2| + c_k ) / (1 + c_k),    (7.4.5)

where x ∈ [0, 1]^d and c = (c_1, ..., c_d) has non-negative components (Saltelli and Sobol´ (1995)). Note that y(x) is a product of functions of each input and does not involve stand-alone linear terms in any input. As the definition of the effect function states, and as this example is meant to emphasize, u_i(x_i) contains the contributions of x_i no matter whether x_i appears as a stand-alone term or as part of an interaction. The graph of

    q(x) = ( |4x − 2| + c ) / (1 + c)    (7.4.6)

over x ∈ [0, 1] forms a pair of line segments that are symmetric about x = 1/2, with minimum value c/(1 + c) = q(1/2) and maximum value (2 + c)/(1 + c) = q(0) = q(1). Thus it is straightforward to calculate that

    ∫_0^1 ( |4x − 2| + c ) / (1 + c) dx = 1    (7.4.7)

for every c, because this integral is the sum of the areas of two identical trapezoids (triangles when c = 0). The parameter c determines how active x is in the associated q(x). Figure 7.5 illustrates how the level of activity of q(x) depends on c; the three q(x) with c ∈ {0, 5, 25} are plotted. The smaller the value of c, the more active is x.

Fig. 7.5 The function ( |4x − 2| + c )/(1 + c) for c = 0 (solid line), c = 5 (dotted line), and c = 25 (dashed line).
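A sketch of the g-function (7.4.5) with a Monte Carlo check that the overall mean is 1, which follows from (7.4.7) and the product form:

import numpy as np

def g_function(X, c):
    # g-function (7.4.5): X is n x d with rows in [0,1]^d, c has d
    # non-negative entries; a small c_k makes input k active.
    X, c = np.asarray(X, float), np.asarray(c, float)
    return np.prod((np.abs(4.0 * X - 2.0) + c) / (1.0 + c), axis=1)

rng = np.random.default_rng(5)
X = rng.uniform(size=(200_000, 2))
print(g_function(X, c=[0.25, 10.0]).mean())  # ~ y_0 = 1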

Returning to the g-function with an arbitrary number of inputs, (7.4.5), and arbitrary vector of parameters c = (c_1, ..., c_d), the main and joint effect functions of y(x) are simple to calculate using (7.4.7). The overall mean is

    y_0 = ∫ y(x_1, ..., x_d) ∏_{k=1}^d dx_k = ∏_{k=1}^d ∫_0^1 ( |4x_k − 2| + c_k )/(1 + c_k) dx_k = 1.0.

The k-th main effect function is

    u_k(x_k) = ( |4x_k − 2| + c_k )/(1 + c_k) × 1 = ( |4x_k − 2| + c_k )/(1 + c_k),  k = 1, ..., d.

Thus the main effect plots of the individual inputs have essentially the symmetric form shown in Figure 7.5. In a similar way, given a non-empty subset Q of {1, ..., d},

    u_Q(x_Q) = ∏_{k∈Q} ( |4x_k − 2| + c_k )/(1 + c_k).    (7.4.8)

For example,

    u_12(x_1, x_2) = ( |4x_1 − 2| + c_1 )/(1 + c_1) × ( |4x_2 − 2| + c_2 )/(1 + c_2),    (7.4.9)

which is plotted in Figure 7.6 for c_1 = 0.25 and c_2 = 10.0. Clearly this function shows that x_1, the input associated with the smaller c_i, is more active than x_2. Visualizations of higher-order effect functions, while more difficult to display effectively, can also be made.

Fig. 7.6 Joint effect function (7.4.9) for c_1 = 0.25 and c_2 = 10.0.

In general, plots of the main effect functions (x_i, u_i(x_i)) and joint effect functions ((x_i, x_j), u_ij(x_i, x_j)) can be used to provide a rough visual understanding of the change in the averaged y(x) with respect to each single input or pair of inputs. Section 7.5 will describe methods of estimating the u_Q(x_Q) based on a set of training data obtained from the output function.

7.4.2 A Functional ANOVA Decomposition

The uncentered u_Q(x_Q) describe average changes in y(x); u_Q(x_Q) values are on the same scale and in the same range as y(x). The sensitivity indices that we shall define shortly assess the variability of (a centered version of) the u_Q(x_Q). Viewed with this objective in mind, the (uncentered) joint effect functions have an important defect that limits their usefulness for constructing sensitivity indices. Namely, when viewed as functions of the random inputs X_i, different effect functions are, in general, correlated. For example, if X_1 and X_2 are independent U(0, 1) random variables, cov(u_1(X_1), u_2(X_2)) need not equal 0.

Thus Sobol´ (1990) and Sobol´ (1993) advocated the use of a functional ANOVA-like decomposition of y(x) that modifies the uncentered joint effect functions to produce uncorrelated and zero-mean versions of the u_Q(x_Q) (see also Hoeffding (1948)). These centered effect functions will be used to define sensitivity indices. Specifically, Sobol´ (1993) proposed use of the decomposition of y(x),

    y(x) = y_0 + Σ_{k=1}^d y_k(x_k) + Σ_{1≤k<j≤d} y_kj(x_k, x_j) + ··· + y_{1,2,...,d}(x_1, ..., x_d),    (7.4.10)

where x_1, ..., x_d are the elements of x. The components of (7.4.10) are defined as follows. For any fixed k, k = 1, ..., d, define

    y_k(x_k) = u_k(x_k) − y_0 = ∫ y(x) ∏_{l≠k} dx_l − y_0    (7.4.11)

to be the centered main effect function of input x_k. Similarly, for any fixed (k, j), 1 ≤ k < j ≤ d, define

    y_kj(x_k, x_j) = u_kj(x_k, x_j) − y_k(x_k) − y_j(x_j) − y_0
                  = ∫ y(x) ∏_{l≠k,j} dx_l − y_k(x_k) − y_j(x_j) − y_0    (7.4.12)

to be the centered interaction effect of x_k and x_j. Higher-order interaction terms are defined in a recursive manner; if Q is a non-empty subset of {1, ..., d},

    y_Q(x_Q) = u_Q(x_Q) − Σ_E y_E(x_E) − y_0,    (7.4.13)

where the sum is over all non-empty proper subsets E of Q (E ⊂ Q is proper provided E ≠ Q). For example, if y(x) has three or more inputs,

    y_123(x_1, x_2, x_3) = u_123(x_1, x_2, x_3) − y_12(x_1, x_2) − y_13(x_1, x_3) − y_23(x_2, x_3)
                           − y_1(x_1) − y_2(x_2) − y_3(x_3) − y_0.

In particular,

    y_{1,2,...,d}(x_1, x_2, ..., x_d) = u_{1,2,...,d}(x_1, x_2, ..., x_d) − Σ_E y_E(x_E) − y_0
                                      = y(x_1, x_2, ..., x_d) − Σ_E y_E(x_E) − y_0,

where the sum is over all non-empty proper subsets E of {1, ..., d}. This final equation is (a rearrangement of) the decomposition (7.4.10).

Notice that even if the effect functions are y(x) averages defined in terms of an arbitrary weight function, (7.4.10) still holds by construction. However, if the weight function is defined by mutually independent input component distributions, Section 7.7 shows that the centered effect functions have two properties that make them extremely useful for defining sensitivity indices. First, each y_Q(x_Q), Q ⊂ {1, ..., d}, has zero mean when averaged over any collection of x_i whose indices i form a subset of Q. Suppose that Q = {i_1, ..., i_s} and i_k ∈ Q; then

    E[ y_{i_1,...,i_s}(x_{i_1}, ..., X_{i_k}, ..., x_{i_s}) ] = ∫_0^1 y_{i_1,...,i_s}(x_{i_1}, ..., x_{i_s}) dx_{i_k} = 0    (7.4.14)

for any fixed values x_{i_l}, i_l ∈ Q \ {i_k}. The second property of the y_Q(x_Q) is that they are orthogonal, meaning that for any (i_1, ..., i_s) ≠ (j_1, ..., j_t),

    Cov( y_{i_1,...,i_s}(X_{i_1}, ..., X_{i_s}), y_{j_1,...,j_t}(X_{j_1}, ..., X_{j_t}) )
        = ∫ y_{i_1,...,i_s}(x_{i_1}, ..., x_{i_s}) y_{j_1,...,j_t}(x_{j_1}, ..., x_{j_t}) ∏_l dx_l = 0,    (7.4.15)

where the product in (7.4.15) is over all l ∈ {i_1, ..., i_s} ∪ {j_1, ..., j_t}.

Example 7.2 [Continued] Using y_0 and the u_Q(·) effect functions calculated previously, algebra gives

    y_1(x_1) = u_1(x_1) − y_0 = 0.5 + 2x_1 − 1.5 = −1 + 2x_1,
    y_2(x_2) = u_2(x_2) − y_0 = 1.0 + x_2 − 1.5 = −0.5 + x_2,
    y_12(x_1, x_2) = u_12(x_1, x_2) − y_1(x_1) − y_2(x_2) − y_0 = 0.
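A small numerical check, with a Monte Carlo estimator of our own choosing, of the zero-mean property (7.4.14) and the orthogonality property (7.4.15) for Example 7.2:

import numpy as np

rng = np.random.default_rng(6)
n = 500_000
X1, X2 = rng.uniform(size=n), rng.uniform(size=n)

# Centered effects of Example 7.2: y(x1, x2) = 2*x1 + x2 on [0, 1]^2.
y1 = -1.0 + 2.0 * X1
y2 = -0.5 + X2
y12 = (2.0 * X1 + X2) - y1 - y2 - 1.5          # identically 0 for this function

print(y1.mean(), y2.mean())                    # ~0: property (7.4.14)
print(np.mean(y1 * y2))                        # ~0: orthogonality (7.4.15)
print(np.mean(y1 * y12), np.mean(y2 * y12))    # exactly 0 since y12 = 0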

The equation y_12(x_1, x_2) = 0 states that there is no interaction between x_1 and x_2. It is straightforward to verify the properties (7.4.14) and (7.4.15) for this example because

    E[ y_1(X_1) ] = ∫_0^1 (−1 + 2x_1) dx_1 = 0 = ∫_0^1 (−0.5 + x_2) dx_2 = E[ y_2(X_2) ],
    E[ y_12(X_1, x_2) ] = 0 = E[ y_12(x_1, X_2) ],    (7.4.16)

and, for example, Cov( y_1(X_1), y_12(X_1, X_2) ) = 0.

Example 7.3 [Continued] Recalling formula (7.4.8), the centered main effect function is

    y_k(x_k) = u_k(x_k) − y_0 = ( |4x_k − 2| + c_k )/(1 + c_k) − 1 = ( |4x_k − 2| − 1 )/(1 + c_k)

for 1 ≤ k ≤ d, while

    y_kj(x_k, x_j) = u_kj(x_k, x_j) − y_k(x_k) − y_j(x_j) − y_0
        = ( |4x_k − 2| + c_k )/(1 + c_k) × ( |4x_j − 2| + c_j )/(1 + c_j)
          − ( |4x_k − 2| − 1 )/(1 + c_k) − ( |4x_j − 2| − 1 )/(1 + c_j) − 1

for 1 ≤ k < j ≤ d.

When X_1, ..., X_d are mutually independent, the centered effects can be used to partition the variance of y(X) into components of variance that define global sensitivity indices. As always, we take X_1, ..., X_d to have independent U(0, 1) distributions. Recalling that y_0 = E[ y(X) ], the total variance, V, of y(X) is

    V ≡ E[ (y(X) − y_0)² ] = E[ y²(X) ] − y_0².

Recalling that for any subset Q ⊂ {1, ..., d}, y_Q(X_Q) has mean zero,

    V_Q ≡ Var( y_Q(X_Q) ) = E[ y_Q²(X_Q) ]

denotes the variance of the term y_Q(X_Q) in (7.4.10). Using (7.4.10) and the fact that the covariances of different y_Q(X_Q) terms are zero, we calculate

    V = E[ (y(X) − y_0)² ]
      = E[ ( Σ_{k=1}^d y_k(X_k) + Σ_{k<j} y_kj(X_k, X_j) + ··· + y_{1,2,...,d}(X_1, ..., X_d) )² ]
      = Σ_{k=1}^d E[ y_k²(X_k) ] + Σ_{k<j} E[ y_kj²(X_k, X_j) ] + ··· + E[ y²_{1,2,...,d}(X_1, ..., X_d) ]
        + Σ E[ y_E(X_E) y_{E′}(X_{E′}) ],    (7.4.17)

where the final sum in (7.4.17) is over all non-empty subsets E and E′ of {1, ..., d} for which E ≠ E′. Thus

    V = Σ_{k=1}^d E[ y_k²(X_k) ] + Σ_{k<j} E[ y_kj²(X_k, X_j) ] + ··· + E[ y²_{1,2,...,d}(X_1, ..., X_d) ]    (7.4.18)
      = Σ_{k=1}^d V_k + Σ_{k<j} V_kj + ··· + V_{1,2,...,d}.    (7.4.19)

We reiterate that the sum (7.4.19) requires that the covariances of the different y_Q(X_Q) terms be zero, which was a consequence of the independence of the individual input distributions.

The functional decomposition (7.4.10) can, in a more formal way, be modified to result in the classical ANOVA decomposition of a model with d quantitative factors. Suppose that, instead of X having a uniform distribution over [0, 1]^d, the input space is regarded as the discrete set of points {0, 1/(n−1), ..., (n−2)/(n−1), 1}^d, with each coordinate having a discrete uniform distribution over these n values. This would arise, for example, if the inputs formed an n^d factorial with n levels for each factor, coded 0, 1/(n−1), ..., (n−2)/(n−1), 1. Replacing each integral over [0, 1] that defines a term in (7.4.10) by an average over the n discrete values, y_0 becomes ȳ, the overall mean of all the y(·); the {y_i} become the usual ANOVA estimates of main effects in a complete factorial; the {y_ij} become the usual ANOVA estimates of two-factor interactions in a complete factorial; and so on. Finally, it is clear that V in (7.4.19) is the mean-corrected sum of squares of all the y(·), V_i is the sum of squares for the i-th factor, and so forth. Thus the decomposition (7.4.19) is the usual ANOVA decomposition into sums of squares for main effects and higher-way interactions.

7.4.3 Global Sensitivity Indices

For any subset Q ⊂ {1, ..., d}, define the sensitivity index (SI) of y(x) with respect to the set of inputs x_k, k ∈ Q, to be

    S_Q = V_Q / V.

The sensitivity index corresponding to Q = {k} is called the first-order or main effect sensitivity index of input k and is denoted by S_k; intuitively, S_k measures the proportion of the variation V that is due to input x_k alone. The sensitivity index corresponding to Q = {k, j}, 1 ≤ k < j ≤ d, is called the second-order or two-factor interaction sensitivity index of inputs k and j and is denoted by S_kj; S_kj measures the proportion of V that is due to the joint effects of the inputs x_k and x_j. Higher-order sensitivity indices are defined analogously. From (7.4.19) the sensitivity indices satisfy

    Σ_{k=1}^d S_k + Σ_{1≤k<j≤d} S_kj + ··· + S_{1,2,...,d} = 1.

We illustrate these definitions and the interpretations of SIs with examples.

Example 7.2 [Continued] For y(x_1, x_2) = 2x_1 + x_2, the y_1(x_1), y_2(x_2), and y_12(x_1, x_2) calculated previously give

    V = Var( y(X_1, X_2) ) = Var( 2X_1 + X_2 ) = 4/12 + 1/12 = 5/12,
    V_1 = Var( y_1(X_1) ) = Var( −1 + 2X_1 ) = 4/12,
    V_2 = Var( y_2(X_2) ) = Var( −0.5 + X_2 ) = 1/12,
    V_12 = Var( y_12(X_1, X_2) ) = Var(0) = 0,

so that V = V_1 + V_2 + V_12 and

    S_1 = (4/12)/(5/12) = 0.8,  S_2 = (1/12)/(5/12) = 0.2,  and  S_12 = 0.0.

The interpretation of these values coincides with our intuition about y(x_1, x_2): x_1 is more important than x_2, while there is no interaction between x_1 and x_2. The only deviation from our intuition is that, based on the functional relationship, the reader might have assessed x_1 to be twice as important as x_2, whereas the variance computations used by global sensitivity indices rely on the fact that Var(2X_1) = 4 Var(X_1).

Before considering additional examples, we use the S_Q sensitivity indices to define the so-called total sensitivity index (TSI) of y(x) with respect to a given input x_k, 1 ≤ k ≤ d; T_k is meant to include the interactions of x_k with all other inputs. The total sensitivity index of input k is defined to be the sum of all the sensitivity indices involving the k-th input; in symbols,

    T_k = S_k + Σ_{j<k} S_jk + Σ_{j>k} S_kj + ··· + S_{1,2,...,d}.    (7.4.20)

For example, when d = 3,

    T_2 = S_2 + S_12 + S_23 + S_123.    (7.4.21)
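For later reference, here is a sketch of one standard Monte Carlo scheme for estimating {S_k} and {T_k} directly from simulator runs (Sobol´/Saltelli-style "pick-and-freeze" estimators, our choice rather than anything prescribed in the text; the total-effect estimator anticipates the identity T_k = (V − V^u_{−k})/V derived below):

import numpy as np

rng = np.random.default_rng(7)

def pick_and_freeze(y, d, n=200_000):
    # Estimate S_k and T_k from two independent U(0,1)^d samples A and B.
    A, B = rng.uniform(size=(n, d)), rng.uniform(size=(n, d))
    yA, yB = y(A), y(B)
    V = np.var(np.concatenate([yA, yB]), ddof=1)
    S, T = np.empty(d), np.empty(d)
    for k in range(d):
        ABk = A.copy()
        ABk[:, k] = B[:, k]                         # resample input k only
        yABk = y(ABk)
        S[k] = np.mean(yB * (yABk - yA)) / V        # first-order index S_k
        T[k] = 0.5 * np.mean((yA - yABk) ** 2) / V  # total effect index T_k
    return S, T

S, T = pick_and_freeze(lambda X: 2.0 * X[:, 0] + X[:, 1], d=2)
print(np.round(S, 3), np.round(T, 3))  # ~ (0.8, 0.2); no interaction, so T_k ~ S_k

For Example 7.2 the estimates reproduce S_1 = 0.8 and S_2 = 0.2, and, since there is no interaction, T_k ≈ S_k.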

By construction, T_k ≥ S_k, k = 1, ..., d. The difference T_k − S_k measures the influence of x_k due to its interactions with the other variables. From (7.4.20), the calculation of T_k appears to require the determination of a total of Σ_{j=0}^{d−1} C(d−1, j) variances V_Q. However, there is at least one method of making this computation more efficient, which we describe next.

For arbitrary Q ⊂ {1, ..., d}, define

    V_Q^u = Var( u_Q(X_Q) ) = Var( E{ y(X) | X_Q } )    (7.4.22)

to be the variance of the uncentered effect. The quantity V_Q^u can be interpreted as the average reduction in uncertainty in y(X) obtained by fixing x_Q, because

    V_Q^u = Var( y(X) ) − E{ Var( y(X) | X_Q ) }.

Consider two special cases of the uncentered effect function variances. From (7.4.11), the variance of the uncentered effect function u_k(x_k) of the input x_k is

    V_k^u = Var( y_k(X_k) + y_0 ) = V_k.    (7.4.23)

Thus the main effect sensitivity index of input x_k, S_k, can also be computed as

    S_k = V_k^u / V.    (7.4.24)

Using (7.4.12) and the orthogonality property (7.4.15), the variance of the uncentered effect u_kj(x_k, x_j) is

    V_kj^u = Var( y_k(X_k) + y_j(X_j) + y_kj(X_k, X_j) + y_0 ) = V_k + V_j + V_kj.    (7.4.25)

Equation (7.4.25) contains both the variances of the main effects and the variance of the interaction effect of inputs x_k and x_j. Thus V_Q^u ≠ V_Q when Q contains more than one input.

Equation (7.4.25) can be extended to arbitrary Q. We illustrate the usefulness of this extension by developing a formula for T_k, where k is a fixed integer, k = 1, ..., d. When used as a subscript, let −k be shorthand for the set {1, ..., d} \ {k}; for example, X_{−k} denotes the vector of all components of X except X_k. Then

    V_{−k}^u = Var( u_{−k}(X_{−k}) )
             = Var( y_1(X_1) + y_2(X_2) + ··· + y_{k−1}(X_{k−1}) + y_{k+1}(X_{k+1}) + ··· + y_d(X_d)
                    + ··· + y_{1,2,...,k−1,k+1,...,d}(X_{−k}) + y_0 )
             = Var( Σ_{Q: k∉Q} y_Q(X_Q) )
             = Σ_{Q: k∉Q} V_Q.    (7.4.26)

Equation (7.4.26) is the sum of all V_Q components in the variance decomposition (7.4.19) whose index sets do not contain k. Thus V − V_{−k}^u is the sum of all V_Q components that do involve the input x_k. Hence T_k can be expressed as

    T_k = ( V − V_{−k}^u ) / V.    (7.4.27)

Thus, if one is interested in estimating the d main effect SIs and the d total effect SIs, {S_k} and {T_k}, only 2d uncentered effect variances (7.4.22) need be computed.

Example 7.3 [Continued] Recall the g-function (7.4.5) with d arguments,

    y(x) = ∏_{k=1}^d ( |4x_k − 2| + c_k )/(1 + c_k).    (7.4.28)

We calculate its associated S_k and T_k values. We use the fact that if X ~ U(0, 1), then

    Var( |4X − 2| ) = 16 Var( |X − 1/2| ) = 16 { E[ (X − 1/2)² ] − ( E[ |X − 1/2| ] )² } = 16 { 1/12 − 1/16 } = 1/3.

Hence the total variance of y(X) is

    V = Var( y(X) )
      = Var( ∏_{l=1}^d ( |4X_l − 2| + c_l )/(1 + c_l) )
      = E[ ∏_{l=1}^d ( ( |4X_l − 2| + c_l )/(1 + c_l) )² ] − 1.0
      = ∏_{l=1}^d [ Var( ( |4X_l − 2| + c_l )/(1 + c_l) ) + 1 ] − 1.0
      = ∏_{l=1}^d [ 1/( 3 (1 + c_l)² ) + 1 ] − 1.0.    (7.4.29)

For k = 1, ..., d, the numerator of S_k is the variance of the centered main effect function y_k(X_k),

    V_k = Var( y_k(X_k) ) = Var( ( |4X_k − 2| + c_k )/(1 + c_k) ) = 1/( 3 (1 + c_k)² ),    (7.4.30)

and hence

    S_k = V_k / V = (7.4.30)/(7.4.29).    (7.4.31)

In a similar fashion, for fixed k = 1, ..., d, the uncentered effect function

    u_{−k}(x_{−k}) = ∏_{l≠k} ( |4x_l − 2| + c_l )/(1 + c_l)

has variance

    V_{−k}^u = Var( u_{−k}(X_{−k}) ) = ∏_{l≠k} [ 1/( 3 (1 + c_l)² ) + 1 ] − 1,

using algebra similar to that in the derivation of V. Hence, after some simplification,

    T_k = ( V − V_{−k}^u ) / V = [ ( 1 + 3 (1 + c_k)² )^{−1} ∏_{l=1}^d ( 1/( 3 (1 + c_l)² ) + 1 ) ] / [ ∏_{l=1}^d ( 1/( 3 (1 + c_l)² ) + 1 ) − 1 ].    (7.4.32)

As a specific example, consider the d = 2 case illustrated in Figure 7.6, where c_1 = 0.25 and c_2 = 10.0. Calculation of (7.4.31) and (7.4.32) gives the values in Table 7.8.

k   S_k      T_k
1   0.9846   0.9873
2   0.0127   0.0154

Table 7.8 Main effect and total effect sensitivity indices for the function (7.4.28) when d = 2 and (c_1, c_2) = (0.25, 10.0).

The S_k and T_k in Table 7.8 are interpreted as saying that (1) x_1 is a far more active input than x_2, and (2) there is virtually no interaction between x_1 and x_2, because S_12 = T_1 − S_1 = T_2 − S_2 = 0.0027. Hence Figure 7.6, the joint effect function of x_1 and x_2, essentially shows the function y(x_1, x_2) for this d = 2 example. The plot visually verifies that (1) and (2) are nearly correct. For each x_2 ∈ [0, 1], {y(x_1, x_2) : 0 ≤ x_1 ≤ 1} has a v-shaped profile that is nearly independent of x_2; for each x_1 ∈ [0, 1], {y(x_1, x_2) : 0 ≤ x_2 ≤ 1} is nearly a horizontal line with height depending on x_1.

Example 7.4. This section concludes with an example that shows both the strength and weakness of trying to summarize the behavior of a potentially complicated function by (several) real numbers. Consider