Theory of Screening Procedures to Identify Robust Product Designs Using Fractional Factorial Experiments


Theory of Screening Procedures to Identify Robust Product Designs Using Fractional Factorial Experiments

Guohua Pan
Biostatistics and Statistical Reporting, Novartis Pharmaceuticals Corporation, East Hanover, NJ 07936

Thomas J. Santner
Department of Statistics, Ohio State University, Columbus, OH 43210

Abstract

In many quality improvement experiments, there are one or more control factors that can be used to determine a final product design (or manufacturing process), and one or more environmental factors that vary under field (or manufacturing) conditions. This paper considers applications where the product design or process design is considered to be seriously flawed if its performance is inferior at any level of the environmental factor; for example, if a particular prosthetic heart valve design has poor fluid flow characteristics at certain pulse rates, then a manufacturer will not want to put this design into production. In such a situation, the appropriate measure of a product's quality is its worst performance over the levels of the environmental factor. This paper develops theory for a class of subset selection procedures that identify product designs which maximize the worst-case performance over the environmental conditions for general combined-array experiments. The proofs of all results can be found in the on-line companion technical report located at http://www.stat.ohio-state.edu/~tjs.

Keywords and phrases: Combined array, Inner array, Minimax approach, Outer array, Product array, Quality improvement, Response model, Robust process design, Screening, Simulation, Subset selection, Variance reduction.

1 Introduction

In some quality improvement experiments, two types of factors affect the performance of a product or process design. The first type are control factors, variables that can be manipulated by the manufacturer; control factors are sometimes called engineering factors or, in process improvement applications, manufacturing factors. The second type are noise factors, variables that represent either different environmental conditions or, in process improvement applications, (uncontrollable) variability in component parts or raw materials. Because a process can be viewed as the process designer's product, hereafter we refer only to the product design problem.

In some applications, the quality of a product design is measured by its worst performance under the different environments. This criterion is natural in situations where a low response at any level of the noise factors can have potentially serious consequences. Engineering designs for auto tires, seat belts, or heart valves that fail catastrophically under rare, though non-negligible, sets of operating conditions must be identified early in the product design cycle.

The effect of the control and noise factors on the performance of a product design is commonly investigated at the product development stage using fractional factorial (combined-array) experiments involving both control and noise factors. Based on the data from such a (fractional factorial) experiment, the product developer wishes to screen out undesirable product designs whose worst-case performances are inferior. This screening goal is equivalent to selecting product designs whose worst-case performance is among the best; we term this selection goal the maximin criterion. Indeed, the best product design, as identified by the maximin criterion, can be viewed as a robust design in that it uses the larger-the-better approach advocated by Taguchi.

The subset selection methodology of Gupta (1956, 1965) is a screening methodology originally developed for balanced one-way layouts. In the simplest case, a screening procedure identifies a set of treatments (or treatment combinations) containing the treatment (treatment combination) with the greatest mean response based on a one-factor (multi-factor) experiment. Bechhofer, Santner, and Goldsman (1995) provide screening procedures for many goals and models in physical and simulation experiments. This paper extends the Gupta methodology to the area of quality improvement by proposing a method for selecting a subset of treatments containing the product design having the best performance, in the maximin sense described above, based on a fractional factorial experiment.

Previously, Pan and Santner (1998) considered the maximin criterion for complete experiments conducted under a variety of randomization restrictions. Santner and Pan (1997) discussed a pilot case study involving a 2^{5-1} experiment with three control factors and two noise factors. This paper provides a general screening theory based on the maximin criterion for arbitrary

combined-array experiments and any analysis-of-variance mean model. Section 2 presents the basic model, screening goal, confidence requirement, and proposed subset selection procedure. Section 3 discusses the calculation of the critical values required to implement the proposed procedure using the least favorable configuration method. In those cases where the least favorable configuration method is not applicable, Section 4 provides an alternative method for determining conservative critical values based on the calculation of a certain lower bound for the confidence requirement; that section also provides a method for checking whether the lower bound is sharp (and hence whether the corresponding critical value is sharp). Several examples are given throughout the paper to illustrate the proposed methods. The proofs of all results are given in the Appendix of Pan and Santner (2003).

2 Model, Goal, and Procedure

Suppose that the mean of a response that characterizes the output quality depends on p + q control factors and r + s noise factors. In our notation, p and r denote the numbers of control and noise factors, respectively, that interact with each other in the model for the mean, while q is the number of control factors having no interactions with noise factors and s is the number of noise factors having no interactions with control factors. Of special practical importance is the frequently occurring case in which all factors are at two levels; however, nothing in the development below requires this assumption. We introduce the following notation to distinguish these two types of control and noise factors:

    Notation              Definition
    C^I_1, ..., C^I_p     Control factors that interact with noise factors
    C^∅_1, ..., C^∅_q     Control factors that do not interact with noise factors
    N^I_1, ..., N^I_r     Noise factors that interact with control factors
    N^∅_1, ..., N^∅_s     Noise factors that do not interact with control factors

The superscripts I and ∅ are mnemonic: the I in C^I_i indicates that this control factor interacts with one or more noise factors, while the symbol ∅ is meant to suggest the negation, i.e., C^∅_i does not interact with any noise factors. A parallel interpretation holds for the notations N^I_i and N^∅_i.

Example 1  Box and Jones (1992) discuss a taste-testing experiment that is typical of those used in the food industry to evaluate recipes. The experiment involves five factors that (potentially) affect the taste of a cake recipe; each factor is used at two levels.

Three of the factors (S = shortening, F = flour, and E = egg powder) are control factors that are varied by the cake manufacturer. The remaining two factors (T = baking temperature and Z = baking time) are noise factors because they represent deviations from the cake-baking directions: the temperature controls of ovens can be biased, and consumers both underbake and overbake cake mixes.

As an illustration of the general notation introduced below, let µ_{sfe,tz} denote the mean taste level when S, F, E, T, and Z are at levels s, f, e, t, and z, respectively, where each lower-case index is either 0 or 1. The factor level is set equal to zero (unity) when the factor is at its low (high) level. Suppose the model

    µ_{sfe,tz} = m_0 + S_s + F_f + E_e + T_t + Z_z + (ST)_{st}    (2.1)

holds for all 2^5 = 32 (sfe, tz) factor combinations. A comma is used to separate the list of indices for the control factors (sfe) from the indices for the noise factors (tz). As usual, the terms S_s, F_f, E_e, T_t, and Z_z are the shortening, flour, egg, temperature, and time main effects, respectively. For Model (2.1), there is one control factor, S, that interacts with one noise factor, T; thus p = 1 = r, C^I_1 = S, and N^I_1 = T. The two control factors F and E do not interact with any noise factors, while the single noise factor Z does not interact with any control factors; thus q = 2, s = 1, C^∅_1 = F, C^∅_2 = E, and N^∅_1 = Z.

Returning to the discussion of the general case, let i^I = (i^I_1, ..., i^I_p) denote the 1 × p vector of indices of the levels of those control factors each of which interacts with one or more noise factors, and let j^I = (j^I_1, ..., j^I_r) denote the 1 × r vector of indices of the levels of those noise factors each of which interacts with one or more control factors. Similarly, let i^∅ = (i^∅_1, ..., i^∅_q) denote the 1 × q vector of indices of the levels of the non-interacting control factors and j^∅ = (j^∅_1, ..., j^∅_s) the 1 × s vector of indices of the levels of the non-interacting noise factors. Let I^I, I^∅, J^I, and J^∅ denote the (cross-product) range sets for the indices i^I, i^∅, j^I, and j^∅, respectively. For example, in the case of an experiment with each factor at two levels, I^I = {0, 1}^p, I^∅ = {0, 1}^q, J^I = {0, 1}^r, and J^∅ = {0, 1}^s. In general we assume that level 0 represents the lowest level of each factor; the highest level can depend on the factor, although in many cases we assume two-level experiments, in which case (1_p, 1_q) denotes the vector of highest levels of the control factors.

The selection procedure described below assumes that only a fractional factorial experiment need be conducted (subject to the restriction that all means are estimable). Therefore the indices of the observed treatment combinations form a subset of the Cartesian product I^I × I^∅ × J^I × J^∅. Throughout, this paper uses the notation |S| to denote the cardinality of a generic set S.

Let Y_{i,j} and µ_{i,j} denote the response and its mean when the control factors are at level i and the noise factors are at level j; here i = (i^I, i^∅) ∈ I and j = (j^I, j^∅) ∈ J, where I = I^I × I^∅ and J = J^I × J^∅. We first describe the goal of the screening problem in terms of the means {µ_{i,j}} and then introduce the ANOVA model that describes the {µ_{i,j}} in terms of main effects and interactions.

For each combination i of the control factors, we determine the worst mean performance of the response over the levels of the noise factors, i.e.,

    ξ_i = min_j µ_{i,j},

where, for notational simplicity, the minimum is taken over the range j ∈ J. In words, ξ_i is the smallest mean response for the product design described by control factor combination i. We denote the ordered ξ_i corresponding to the |I| product designs by

    ξ_[1] ≤ ··· ≤ ξ_[|I|].    (2.2)

The design associated with ξ_[|I|] = max_i ξ_i is defined to be the best product design; this design is robust to the unknown levels of the noise factors in a maximin sense. Our goal is to use the data from a suitable (complete or fractional factorial) experiment to select a subset of the product designs so that the following confidence requirement is satisfied.

Confidence Requirement: Given α with 0 < α < 1, we require that

    P_µ{CS} ≥ 1 − α    (2.3)

for all µ satisfying the mean Model (2.4)–(2.5) defined below, where CS denotes the event that the selected subset includes the control factor combination associated with ξ_[|I|], the best product design.

Example 1 (Continued)  The quality of the recipe (S, F, E) = (s, f, e) is specified by

    ξ_sfe = min{µ_{sfe,00}, µ_{sfe,01}, µ_{sfe,10}, µ_{sfe,11}},

where µ_{sfe,tz} satisfies (2.1). Our goal is to identify a subset of the eight possible recipes that contains the best (S, F, E) combination, i.e., the combination associated with ξ_[8] = max_{sfe} ξ_sfe.

Because we focus on the behavior of the µ_{i,j} for each fixed i, we regard µ = (µ_{i,j}) as an |I| × |J| matrix, with each row of µ corresponding to a single setting of the p + q control factors and each column corresponding to a single setting of the r + s noise factors.

Several of the proofs of the least favorable configuration results in the companion technical report, Pan and Santner (2003), use Kronecker product formulas for µ that assume the rows and columns of µ are arranged as follows. First, arrange the components of i corresponding to i^I in lexicographic order; then cross each fixed i^I with the set of i^∅, where the latter are also arranged in lexicographic order. The columns are ordered similarly as the cross product of the j^I and j^∅, each arranged lexicographically. We illustrate the notation using our prototype Example 1.

Example 1 (Continued)  Recall that this experiment has a total of three control factors, of which one, S, is interacting, and two noise factors, of which one, T, is interacting. Thus the first four rows of µ have S = 0 and the remaining four rows have S = 1. The four rows with S = 0 are ordered lexicographically in the values of the non-interacting control factors F and E, and similarly for the rows with S = 1 (see the right margin below):

        (T, Z) = (0,0)      (0,1)       (1,0)       (1,1)       (S, F, E)
    µ = [ µ_{000,00}  µ_{000,01}  µ_{000,10}  µ_{000,11} ]   (0, 0, 0)
        [ µ_{001,00}  µ_{001,01}  µ_{001,10}  µ_{001,11} ]   (0, 0, 1)
        [ µ_{010,00}  µ_{010,01}  µ_{010,10}  µ_{010,11} ]   (0, 1, 0)
        [ µ_{011,00}  µ_{011,01}  µ_{011,10}  µ_{011,11} ]   (0, 1, 1)
        [ µ_{100,00}  µ_{100,01}  µ_{100,10}  µ_{100,11} ]   (1, 0, 0)
        [ µ_{101,00}  µ_{101,01}  µ_{101,10}  µ_{101,11} ]   (1, 0, 1)
        [ µ_{110,00}  µ_{110,01}  µ_{110,10}  µ_{110,11} ]   (1, 1, 0)
        [ µ_{111,00}  µ_{111,01}  µ_{111,10}  µ_{111,11} ]   (1, 1, 1)

The first two columns of µ have T = 0 and the last two columns have T = 1. The pair of columns corresponding to each fixed T value is ordered so that the first has Z = 0 and the second has Z = 1. The entire µ matrix is 8 × 4 = 2^{1+2} × 2^{1+1}.

The procedure below assumes that an ANOVA-type model

    µ_{i,j} = m_CN(i^I, j^I) + m_C(i^∅) + m_N(j^∅)    (2.4)

for the means is known and that this model holds for all (i, j) ∈ I × J. The three components of the model,

    m_CN(i^I, j^I) = m_0 + Σ_{Q_CN} (C^I_{i_1} C^I_{i_2} ··· C^I_{i_p} N^I_{j_1} N^I_{j_2} ··· N^I_{j_r})_{i^I, j^I},
    m_C(i^∅) = Σ_{Q_C} (C^∅_{i_1} C^∅_{i_2} ··· C^∅_{i_q})_{i^∅},  and    (2.5)
    m_N(j^∅) = Σ_{Q_N} (N^∅_{j_1} N^∅_{j_2} ··· N^∅_{j_s})_{j^∅},

consist of the main effects and interactions that correspond to the interacting control and noise factors, to the non-interacting control factors, and to the non-interacting noise factors, respectively. The index set Q_CN identifies these quantities among the interacting control and noise factors, while Q_C and Q_N are defined similarly for the non-interacting control factors and noise factors, respectively. We place the overall mean, m_0, with the control × noise interaction terms. Also, each term (C^∅_{i_1} ··· )_{i^∅} of m_C(i^∅), for example, depends only on those components of i^∅ corresponding to the factors appearing in the term, where 1 ≤ i_1 < i_2 < ··· ≤ q; a similar convention applies to the terms of m_N(j^∅) and m_CN(i^I, j^I). For example, in a model with p = 3 interacting control factors and r = 2 interacting noise factors, suppose m_CN(i^I, j^I) contains the interaction term (C^I_1 C^I_2 N^I_1)_{i^I, j^I}. Then i^I = (i^I_1, i^I_2, i^I_3) and j^I = (j^I_1, j^I_2), and the term (C^I_1 C^I_2 N^I_1)_{i^I, j^I} depends only on the indices i^I_1 and i^I_2 in i^I and on j^I_1 in j^I.

Lastly, we again emphasize that the screening procedure stated below assumes that the Model (2.4)–(2.5) is known. If the model is identified from the same data that are used to implement the screening procedure, then the confidence level of the procedure will be affected, presumably negatively. Of course, the same phenomenon occurs in other settings where data are used for multiple purposes. This paper does not analyze the multiple-stage procedure that first identifies the model and then chooses the critical value, although such a study would be of interest.

Example 1 (Continued)  From (2.1), S and T are the interacting control and noise factors, respectively. The indices are i^I = (s), i^∅ = (f, e), j^I = (t), and j^∅ = (z). Thus, in the general notation of (2.4) applied to Model (2.1), we have m_CN(s, t) = m_0 + S_s + T_t + (ST)_{st}, m_C(f, e) = F_f + E_e, and m_N(z) = Z_z.

We assume that the experimental data satisfy

    Y_{i,j} = µ_{i,j} + ε_{i,j}  for each (i, j) ∈ D,    (2.6)

where D ⊆ I × J is the set of factor combinations at which data were observed. The |D| measurement errors ε_{i,j} are independent N(0, σ²_ε) random variables. Terms

not in the Model (2.4)–(2.5) are assumed to be zero. As illustrated in the following example, µ_D denotes the |I| × |J| partially filled matrix obtained by deleting all the entries of µ whose indices do not belong to D; Y_D and ε_D denote the corresponding matrices of observations Y_{i,j} and errors ε_{i,j}, respectively.

Example 1 (Continued)  The data used by Santner and Pan (1997) in conjunction with Model (2.1) came from a 2^{5−1} experiment having defining contrast I = SFETZ. The observed taste scores are listed in Table 1.

    S  F  E  T  Z     Y
    0  0  0  1  0    1.6
    0  0  0  0  1    1.2
    0  1  0  0  0    2.2
    0  1  0  1  1    6.5
    1  0  0  0  0    1.3
    1  0  0  1  1    1.7
    1  1  0  1  0    3.5
    1  1  0  0  1    3.8
    0  0  1  0  0    1.6
    0  0  1  1  1    4.4
    0  1  1  1  0    6.1
    0  1  1  0  1    4.9
    1  0  1  1  0    2.4
    1  0  1  0  1    2.6
    1  1  1  0  0    5.2
    1  1  1  1  1    6.0

Table 1: A 2^{5−1} Resolution V design with I = SFETZ for the cake-tasting experiment.

The data have the structure

    Y_D = µ_D + ε_D,    (2.7)

where Y_D is the partially filled 8 × 4 matrix

          (T, Z) = (0,0)      (0,1)       (1,0)       (1,1)      (S, F, E)
          [ ·           Y_{000,01}  Y_{000,10}  ·          ]   (0, 0, 0)
          [ Y_{001,00}  ·           ·           Y_{001,11} ]   (0, 0, 1)
          [ Y_{010,00}  ·           ·           Y_{010,11} ]   (0, 1, 0)
          [ ·           Y_{011,01}  Y_{011,10}  ·          ]   (0, 1, 1)
          [ Y_{100,00}  ·           ·           Y_{100,11} ]   (1, 0, 0)
          [ ·           Y_{101,01}  Y_{101,10}  ·          ]   (1, 0, 1)
          [ ·           Y_{110,01}  Y_{110,10}  ·          ]   (1, 1, 0)
          [ Y_{111,00}  ·           ·           Y_{111,11} ]   (1, 1, 1)

and µ_D and ε_D are the identically patterned partially filled matrices of the means µ_{sfe,tz} and errors ε_{sfe,tz}. This notation emphasizes that observations are made only at the design points in D. The 16 observations collected in the experiment provide a ν = 9 (= 16 − 7) degrees of freedom chi-square estimator of σ²_ε based on Model (2.1).

The selection procedure we propose uses the ordinary least squares (OLS) estimator of µ based on Y_D and the Model (2.4)–(2.5).

For each (i, j) ∈ I × J, let µ̂_{i,j} denote the OLS estimator of µ_{i,j}. For each product design i ∈ I, estimate ξ_i by

    ξ̂_i = min_j { µ̂_{i,j} }.

Let ξ̂_[1] ≤ ··· ≤ ξ̂_[|I|] denote the ordered ξ̂_i. In addition, we assume that there is an estimator S² of σ²_ε for which νS²/σ²_ε ~ χ²_ν and that S² is independent of the OLS estimator of µ_D. Ordinarily, such a chi-square estimator is available when the number of observations, |D|, is larger than the number of parameters estimated in the Model (2.4)–(2.5).

We propose the following procedure to select a subset of the levels of the control factors containing the most robust control factor combination.

Procedure G: Select control factor combination i if and only if

    ξ̂_i ≥ ξ̂_[|I|] − h S,

where h is a cutoff value to be determined in the next section.

Example 1 (Continued)  As described in the next section, we calculate h = 2.29 for Procedure G to attain confidence level 100(1 − α)% = 95%. Table 2 lists the fitted means µ̂_{sfe,tz} and the associated ξ̂_sfe based on Model (2.1) and the data in Table 1; the estimate of σ_ε is S = 0.4989. Thus for Procedure G, ξ̂_[8] − h S = 4.8 − 1.14 = 3.66 and the selected confidence subset is {(0, 1, 1), (1, 1, 1)}. Therefore, the data from this experiment show that, with at least 95% confidence, the best recipe uses F and E at their high levels, while the level of S is not determinable. Equivalently, the results of applying the statistical screening procedure can be interpreted as stating that the recipes {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 0), (1, 0, 1)} need not be investigated further.

    (s, f, e)   (t, z) = (0,0)   (0,1)   (1,0)   (1,1)   ξ̂_sfe
    (0, 0, 0)            0.0     0.9     2.2     3.1     0.0
    (0, 0, 1)            1.4     2.3     3.6     4.5     1.4
    (0, 1, 0)            2.6     3.6     4.8     5.7     2.6
    (0, 1, 1)            4.1     5.0     6.3     7.2     4.1
    (1, 0, 0)            0.7     1.6     0.9     1.8     0.7
    (1, 0, 1)            2.2     3.1     2.3     3.2     2.2
    (1, 1, 0)            3.4     4.3     3.6     4.5     3.4
    (1, 1, 1)            4.8     5.7     5.0     5.9     4.8

Table 2: OLS estimators µ̂_{sfe,tz} of the 32 treatment means for all recipe × baking-condition combinations in Example 1 under Model (2.1) using the data in Table 1, and the corresponding estimated row minima ξ̂_sfe.
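To make these calculations concrete, the following minimal sketch (Python with numpy; the code and names are illustrative, not from the paper) fits Model (2.1) to the Table 1 data by ordinary least squares and applies Procedure G with the critical value h = 2.29 derived in Section 3. The fitted means do not depend on the choice of factor coding, so they should agree with Table 2 up to rounding.

```python
import numpy as np
from itertools import product

# Table 1: columns S, F, E, T, Z, Y from the 2^(5-1) fraction with I = SFETZ.
data = np.array([
    [0,0,0,1,0,1.6], [0,0,0,0,1,1.2], [0,1,0,0,0,2.2], [0,1,0,1,1,6.5],
    [1,0,0,0,0,1.3], [1,0,0,1,1,1.7], [1,1,0,1,0,3.5], [1,1,0,0,1,3.8],
    [0,0,1,0,0,1.6], [0,0,1,1,1,4.4], [0,1,1,1,0,6.1], [0,1,1,0,1,4.9],
    [1,0,1,1,0,2.4], [1,0,1,0,1,2.6], [1,1,1,0,0,5.2], [1,1,1,1,1,6.0]])

def model_matrix(levels):
    # +/-1 effects coding of Model (2.1): intercept, S, F, E, T, Z, and ST.
    s, f, e, t, z = (2.0 * levels[:, k] - 1.0 for k in range(5))
    return np.column_stack([np.ones(len(s)), s, f, e, t, z, s * t])

X, y = model_matrix(data[:, :5]), data[:, 5]
beta = np.linalg.lstsq(X, y, rcond=None)[0]

nu = len(y) - X.shape[1]                     # 16 - 7 = 9 degrees of freedom
S = np.sqrt(np.sum((y - X @ beta) ** 2) / nu)

# Fitted means for all 32 (sfe, tz) combinations and the row minima xi-hat.
grid = np.array(list(product([0, 1], repeat=5)))    # lexicographic (s,f,e,t,z)
mu_hat = (model_matrix(grid) @ beta).reshape(8, 4)  # rows (s,f,e), cols (t,z)
xi_hat = mu_hat.min(axis=1)

h = 2.29                                      # critical value from Section 3
for (s, f, e), xi in zip(grid[::4, :3], xi_hat):
    if xi >= xi_hat.max() - h * S:
        print((s, f, e), "selected")
```

With these data the sketch should reproduce the selected subset {(0, 1, 1), (1, 1, 1)} reported above.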

For simplicity, we assume below that there is one replicate of the design D. This is the most common case in quality improvement experimentation. However, the theory and methods developed in this paper extend straightforwardly to situations where replicates of the design D are observed: one simply views the sample means of the replicates as unreplicated observations from the design D and applies the results of this paper.

3 Determination of h Based on the Least Favorable Configuration

Sections 3 and 4 discuss two methods that can be applied to determine the critical value h of Procedure G. The least favorable configuration (LFC) method discussed in this section searches for a configuration of means µ at which the probability of correct selection is minimal (a least favorable configuration); the value of h is computed by setting the probability of correct selection at the LFC equal to 1 − α. The LFC method provides the smallest h for which min_µ P_µ{CS} ≥ 1 − α. The method described in Section 4 finds a lower bound (LB) for the probability of correct selection that depends on h and sets this lower bound equal to 1 − α. The LB method is more widely applicable than the LFC method but provides a conservative h; in some situations, the h produced by the LB method is identical to that given by the LFC method.

To achieve (2.3), we must have inf_µ P_µ{CS} ≥ 1 − α, where the infimum is over µ satisfying Model (2.4)–(2.5). In some screening problems, it is possible to find a configuration of means µ_LFC that satisfies the model and for which P_{µ_LFC}{CS} = inf_µ P_µ{CS}; such a µ_LFC is called a least favorable configuration of means. In this problem we will find a sequence of parameter configurations {µ_k}, each satisfying the Model (2.4)–(2.5), for which lim_{k→∞} P_{µ_k}{CS} = inf_µ P_µ{CS}. Informally, we refer to lim_{k→∞} µ_k, when this limit exists, as a least favorable configuration of means. As will be shown below, when the LFC exists, it is characterized by the model for the interacting control and noise factors,

    m_CN(i^I, j^I) = m_0 + Σ_{Q_CN} (C^I_{i_1} C^I_{i_2} ··· C^I_{i_p} N^I_{j_1} N^I_{j_2} ··· N^I_{j_r})_{i^I, j^I}.    (3.1)

Following the convention used to arrange the rows and columns of the µ matrix, we let M = (m_CN(i^I, j^I)) be an |I^I| × |J^I| matrix with each row of M corresponding to a single setting of the p interacting control factors and each column corresponding to a single setting of the r interacting noise factors, the elements of i^I and j^I being ordered lexicographically.

Example 1 (Continued)  For the taste-testing experiment, µ_{sfe,tz} = m_0 + S_s + F_f + E_e + T_t + Z_z + (ST)_{st}, and so m_CN(s, t) = m_0 + S_s + T_t + (ST)_{st}, for s, t ∈ {0, 1}, is the linear combination of the main effects and interactions between the interacting control factor S at level s and the interacting noise factor T at level t. The M matrix in this example is

    [ m_CN(0, 0)  m_CN(0, 1) ]
    [ m_CN(1, 0)  m_CN(1, 1) ]

with the rows corresponding to the levels of the control factor S and the columns to the levels of the noise factor T.

Theorem 3.1 provides the LFC of means for two-level experiments in which exactly one control factor interacts with the noise factors. It (and later results) uses the following notation: J_{m×n} denotes an m × n matrix of 1's, 1_v denotes a row vector of v ones, 0_v denotes a row vector of v zeros, and ⊗ denotes the Kronecker product.

Theorem 3.1  Assume that the Model (2.4)–(2.5) has exactly one control factor that interacts with the noise factor(s) and that the data Y_D come from an orthogonal two-level (fractional) factorial experiment for which all non-zero main effects and interactions are estimable and free of non-zero confounding factors. If there exists a sequence of 2 × 2^r matrices M_k for which

    lim_{k→∞} M_k = [ 0  +∞  ···  +∞ ]
                    [ 0   0  ···   0 ],    (3.2)

then the 2^{1+q} × 2^{r+s} matrix

    µ_LFC = lim_{k→∞} M_k ⊗ J_{2^q × 2^s} = [ 0  +∞  ···  +∞ ] ⊗ J_{2^q × 2^s}    (3.3)
                                            [ 0   0  ···   0 ]

is a least favorable configuration. Let h_k be the solution of the equation

    P_{µ_k}{ ξ̂_{1,1_q} ≥ max_i ξ̂_i − h_k S } = 1 − α,    (3.4)

where µ_k = M_k ⊗ J_{2^q × 2^s}; then h = lim_{k→∞} h_k is the critical value for Procedure G.

In practice, simulation is the simplest method of obtaining h. Draw independent random variables of the form Y_D having means given by the components of µ_k = M_k ⊗ J_{2^q × 2^s} and unit variances; here M_k is as in Theorem 3.1 with entries k that are at least 100 (100 is effectively +∞). Also draw V ~ χ²_ν, construct the estimators ξ̂_i for i ∈ I, and form

    T = √ν ( max_i ξ̂_i − ξ̂_{1,1_q} ) / √V.

Then h_k (and thus h) can be estimated as the 100(1 − α)% sample percentile of many such draws of T. Note that one can always take σ²_ε = 1 in the simulation because the estimator S² of σ²_ε has the property that νS²/σ²_ε follows a chi-square distribution with ν degrees of freedom, and the random variable (max_i ξ̂_i − ξ̂_{1,1_q})/S is equivalent to T.

Example 1 (Continued)  We can take

    M_k = [ 0  k ]
          [ 0  0 ]

in Theorem 3.1 because the model m_CN(s, t) = m_0 + S_s + T_t + (ST)_{st} spans ℝ^{2×2}. Thus µ_k = M_k ⊗ J_{4×2} is the 8 × 4 matrix whose first four rows (those with S = 0) each equal (0, 0, k, k) and whose last four rows (those with S = 1) are zero; as k → ∞ this converges to the LFC, whose first four rows are (0, 0, +∞, +∞) and whose last four rows are zero.

The configuration identified above can be explained intuitively as follows. In this problem, as in many subset selection problems, the LFC for µ makes all of the associated ξ_i parameters equal (to zero). The LFC designates the last row, with levels (1, 1, 1), as the best control combination. The µ configuration minimizing the probability of the event [ξ̂_111 ≥ ξ̂_sfe − hS for all (s, f, e) ≠ (1, 1, 1)] occurs when ξ̂_111 is stochastically small and the remaining ξ̂_sfe are stochastically large. Recall that each ξ̂_sfe is a minimum of four estimated means. The estimator ξ̂_111 is stochastically small because it is the minimum of four µ̂_{111,tz}, each of which has zero mean. The LFC makes four of the remaining ξ̂_sfe stochastically large by effectively eliminating two of the µ̂_{sfe,tz} from each minimum calculation, namely by making their means large (+∞). However, the restrictions of the µ model allow only four of the seven rows corresponding to inferior control factor combinations to be based on two µ̂_{sfe,tz}; the model constraints force the remaining three ξ̂_sfe to be the minimum of four means, as is ξ̂_111.

To compute h, generate independent and identically distributed (iid) standard normal random variables Y_1, ..., Y_16 and form the partially filled matrix of observations

          (t, z) = (0,0)   (0,1)   (1,0)       (1,1)        (s, f, e)
    Y_D = [ ·      Y_1     100 + Y_2   ·         ]   (0, 0, 0)
          [ Y_3    ·       ·           100 + Y_4 ]   (0, 0, 1)
          [ Y_5    ·       ·           100 + Y_6 ]   (0, 1, 0)
          [ ·      Y_7     100 + Y_8   ·         ]   (0, 1, 1)
          [ Y_9    ·       ·           Y_10      ]   (1, 0, 0)
          [ ·      Y_11    Y_12        ·         ]   (1, 0, 1)
          [ ·      Y_13    Y_14        ·         ]   (1, 1, 0)
          [ Y_15   ·       ·           Y_16      ]   (1, 1, 1)

corresponding to k = 100 in M_k. Also generate a chi-square random variable V with 9 degrees of freedom and set

    T = √9 ( max_{sfe} ξ̂_sfe − ξ̂_111 ) / √V.

The 100(1 − α)% sample percentile of many such T's is an estimate of h, i.e., h satisfies P{T ≤ h} = 1 − α. This implementation leads to h = 2.29 as the critical value required to obtain a 95% confidence level procedure based on the Box and Jones experimental design.
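The simulation just described is easy to implement. The sketch below (Python with numpy; not from the paper, and the seed and replication count are arbitrary choices) estimates h for Example 1 by drawing Y_D under µ_k with k = 100, refitting Model (2.1), and taking the 95th sample percentile of T.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# The 16 runs of the 2^(5-1) fraction with I = SFETZ (the Table 1 runs) are
# exactly the (s,f,e,t,z) combinations with an odd number of ones.
grid = np.array(list(product([0, 1], repeat=5)))
design = grid[grid.sum(axis=1) % 2 == 1]

def model_matrix(levels):
    s, f, e, t, z = (2.0 * levels[:, k] - 1.0 for k in range(5))
    return np.column_stack([np.ones(len(s)), s, f, e, t, z, s * t])

P = np.linalg.pinv(model_matrix(design))   # maps observations to OLS coefficients
Xg = model_matrix(grid)

# LFC means under M_k with k = 100: mean 100 exactly when S = 0 and T = 1.
mu_k = 100.0 * (1 - design[:, 0]) * design[:, 3]

nu, nrep = 9, 200_000
T = np.empty(nrep)
for b in range(nrep):
    y = mu_k + rng.standard_normal(16)            # sigma_eps = 1 w.l.o.g.
    xi = (Xg @ (P @ y)).reshape(8, 4).min(axis=1) # row minima over (t, z)
    T[b] = np.sqrt(nu) * (xi.max() - xi[-1]) / np.sqrt(rng.chisquare(nu))

print(np.quantile(T, 0.95))   # should be close to the reported h = 2.29
```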

The following example illustrates a case in which the model for m_CN(i^I, j^I) is restricted and does not span ℝ^{2^p × 2^r}, as it did in Example 1.

Example 2  Consider a six-factor experiment in which each factor is at level 0 or 1. Suppose that there are a total of three control factors and three noise factors, with one control factor interacting with two noise factors, two control factors that do not interact with any noise factors, and one noise factor that does not interact with any control factors. The notation identifying these factors is:

    Index     Name           Type
    i_1       C^I_1          Interacting
    i_2, i_3  C^∅_1, C^∅_2   Non-interacting
    j_1, j_2  N^I_1, N^I_2   Interacting
    j_3       N^∅_1          Non-interacting

Suppose that the mean model is

    µ_{i_1 i_2 i_3, j_1 j_2 j_3} = m_CN(i_1, j_1 j_2) + m_C(i_2 i_3) + m_N(j_3),    (3.5)

where m_N(j_3) = (N^∅_1)_{j_3}, m_C(i_2 i_3) = (C^∅_1)_{i_2} + (C^∅_2)_{i_3} + (C^∅_1 C^∅_2)_{i_2 i_3}, and

    m_CN(i_1, j_1 j_2) = m_0 + (C^I_1)_{i_1} + (N^I_1)_{j_1} + (N^I_2)_{j_2} + (C^I_1 N^I_1)_{i_1, j_1} + (C^I_1 N^I_2)_{i_1, j_2},    (3.6)

and each of i_1, i_2, i_3, j_1, j_2, and j_3 is 0 or 1.

The slight change in the subscript notation is used to emphasize the identity of the control and noise indices. Clearly (3.6) does not span ℝ^{2×2²}, but it can be shown that the 2 × 4 matrix M = (m_CN(i_1, j_1 j_2)) satisfies Model (3.6) if and only if M has the form

    [ β_0 + β^I_1   β_0 + β^I_1 + γ^I_2 + δ^I_12   β_0 + β^I_1 + γ^I_1 + δ^I_11   β_0 + β^I_1 + γ^I_1 + γ^I_2 + δ^I_11 + δ^I_12 ]
    [ β_0           β_0 + γ^I_2                    β_0 + γ^I_1                    β_0 + γ^I_1 + γ^I_2                           ]

where each coefficient is an arbitrary real number. Using this representation, it is easy to identify the sequence M_k required in Theorem 3.1. Force the first column and last row of M_k to be zero by setting 0 = β^I_1 = β_0 = γ^I_1 = γ^I_2; then set δ^I_11 = k = δ^I_12. The resulting matrix is

    M_k = [ 0  k  k  2k ]
          [ 0  0  0  0  ].

Thus µ_k = M_k ⊗ J_{4×2} is the 8 × 8 matrix whose first four rows each equal (0, 0, k, k, k, k, 2k, 2k) and whose last four rows are zero; as k → ∞ this converges to the configuration whose first four rows are (0, 0, +∞, +∞, +∞, +∞, +∞, +∞) and whose last four rows are zero, which yields an LFC. As above, an estimate of h can be obtained by solving Equation (3.4) via Monte Carlo simulation.
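The Kronecker-product structure of µ_k is easy to verify numerically. The short sketch below (Python with numpy; illustrative, not from the paper) builds µ_k = M_k ⊗ J_{4×2} for Example 2 with k = 100 standing in for +∞.

```python
import numpy as np

k = 100.0
M_k = np.array([[0.0, k, k, 2 * k],     # row C1^I = 0
                [0.0, 0.0, 0.0, 0.0]])  # row C1^I = 1; cols (N1^I, N2^I) lexicographic

# q = 2 non-interacting control factors and s = 1 non-interacting noise factor,
# so J is the 4 x 2 matrix of ones and mu_k is the 8 x 8 matrix of means.
mu_k = np.kron(M_k, np.ones((4, 2)))
print(mu_k)
# Rows with C1^I = 0 equal (0, 0, k, k, k, k, 2k, 2k); rows with C1^I = 1 are
# zero.  Every row minimum is 0, and each non-best row keeps only its two
# cells with (N1^I, N2^I) = (0, 0) at zero, the other six means being inflated,
# exactly as in the displayed LFC.
```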

The following example shows a model for which the condition (3.2)–(3.3) of Theorem 3.1 is not satisfied. For such situations, the lower bound method discussed in the next section can be used.

Example 3  Consider a three-factor experiment with each factor at level 0 or 1. Suppose that there is one control factor C^I_1 and two noise factors N^I_1 and N^I_2. For simplicity, suppose further that there are no non-interacting control or noise factors and that the model for the mean of Y_{i_1, j_1 j_2} is

    µ_{i_1, j_1 j_2} = m_0 + (C^I_1)_{i_1} + (N^I_1)_{j_1} + (C^I_1 N^I_1)_{i_1, j_1} + (C^I_1 N^I_1 N^I_2)_{i_1, j_1 j_2},

where each of i_1, j_1, and j_2 is 0 or 1. Using the technique of Example 2, it can be shown that the M_k required in Theorem 3.1 does not exist. When there is more than one interacting control factor, the lower bound method in Subsection 4.1 can be used to determine a (possibly conservative) cutoff value, and then the method of Subsection 4.2 can be used to determine whether the resulting critical value is exact.

We conclude this section by describing a systematic technique for determining whether the condition (3.2) of Theorem 3.1 is satisfied in any particular application. The technique involves solving a linear programming (LP) problem determined by the model m_CN(i^I, j^I). Recall that the canonical form of an LP in the n decision variables x = (x_1, ..., x_n) is

    max c′x   subject to   Ax ≤ b,  x ≥ 0,

where c, A, and b are given n × 1, m × n, and m × 1 arrays.

The LP problem for determining whether (3.2) is satisfied is formulated as follows. Let Z ("Zero") denote those (i^I, j^I) combinations corresponding to elements in the first column or last row of the matrix M = (m_CN(i^I, j^I)) introduced previously. We wish to determine whether there exists a sequence of model matrices, each satisfying (3.1), whose elements satisfy m_CN(i^I, j^I) → +∞ for all (i^I, j^I) ∉ Z and m_CN(i^I, j^I) = 0 for all (i^I, j^I) ∈ Z. To solve this problem, we introduce an auxiliary scalar variable w whose role is to be a lower bound on the elements we wish to simultaneously drive to +∞. Then we solve

    max w    (3.7)
    s.t.  m_CN(i^I, j^I) = 0   for all (i^I, j^I) ∈ Z,
          w ≤ m_CN(i^I, j^I)   for all (i^I, j^I) ∉ Z,
          Σ_{i^I ∈ I^I, j^I ∈ J^I} (C^I_{i_1} C^I_{i_2} ··· C^I_{i_p} N^I_{j_1} N^I_{j_2} ··· N^I_{j_r})_{i^I, j^I} = 0   for each term in Q_CN,

where m_CN(i^I, j^I) is shorthand for the sum m_0 + Σ_{Q_CN} (C^I_{i_1} C^I_{i_2} ··· C^I_{i_p} N^I_{j_1} N^I_{j_2} ··· N^I_{j_r})_{i^I, j^I}. The decision variables of the LP are w, the sets of effects {(C^I_{i_1} ··· N^I_{j_r})_{i^I, j^I}} for each term in Q_CN, and m_0. The first set of constraints forces the first column and last row of M to be zero, the second ensures that w is the minimum of the m_CN(i^I, j^I) for (i^I, j^I) ∉ Z, and the third set consists of the usual main effect and interaction identifiability constraints. The condition of Theorem 3.1 is satisfied exactly when this LP is unbounded; the required sequence is obtained by letting w → +∞.
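As an illustration of this LP formulation, the sketch below (Python with scipy; not from the paper) checks condition (3.2) for the 2 × 2 model of Example 1. For brevity the sum-to-zero identifiability constraints are substituted directly into the parametrization rather than imposed as the third constraint set of (3.7). An unbounded LP (status 3 from scipy's HiGHS solver) indicates that the sequence M_k exists.

```python
import numpy as np
from scipy.optimize import linprog

# Example 1: m_CN(s,t) = m0 + S_s + T_t + (ST)_st with sum-to-zero effects, so
# the free parameters are x = (m0, S1, T1, ST11, w), with S0 = -S1, T0 = -T1,
# ST00 = ST11, and ST01 = ST10 = -ST11.  Coefficient rows of M[s,t] in x:
M = {(0, 0): [1, -1, -1,  1, 0],
     (0, 1): [1, -1,  1, -1, 0],
     (1, 0): [1,  1, -1, -1, 0],
     (1, 1): [1,  1,  1,  1, 0]}

Z = [(0, 0), (1, 0), (1, 1)]               # first column and last row of M
A_eq = np.array([M[st] for st in Z])       # these entries are forced to zero
b_eq = np.zeros(len(Z))

# w <= M[s,t] for (s,t) not in Z, written as w - M[s,t] <= 0.
w_row = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
A_ub = np.array([w_row - np.array(M[(0, 1)], dtype=float)])
b_ub = np.zeros(1)

res = linprog(c=[0, 0, 0, 0, -1],          # maximize w == minimize -w
              A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * 5)
print(res.status)   # 3 ("unbounded"): the required sequence M_k exists
```

Per the discussion of Example 3, the analogous LP for that model has a bounded optimum, consistent with the nonexistence of M_k there.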

4 Determination of h Based on a Lower Bound for the Probability of Correct Selection

Procedure G requires the computation of the smallest h satisfying min_µ P_µ{CS} = 1 − α, where the minimum is over all µ satisfying the mean Model (2.4)–(2.5). An exact determination of this h is possible when one knows a µ_LFC that guarantees P_{µ_LFC}{CS} = min_µ P_µ{CS}. Section 3 provides one set of sufficient conditions under which such an LFC can be found. This section provides a lower bound (LB) for min_µ P_µ{CS} that can be applied in more general settings when the sufficient condition is not satisfied. Setting the lower bound equal to 1 − α gives a conservative estimate of h. Of course, if the lower bound is sharp, as is often the case for two-level experiments, then the resulting h is the exact critical value required; in this case, Theorem 3.1 and the LB method yield the same critical value.

4.1 A Lower Bound for the Probability of Correct Selection

The lower bound developed below is based on the fact that P_{µ_LFC}{CS} is equal to the probability of an event related to CS that is calculated under the homogeneity configuration in which all treatment means are equal to zero. We use Example 1 to illustrate this viewpoint and then generalize the idea.

Example 1 (Continued)  In Section 3 we showed that h is the limit of the sequence {h_k}, where h_k is implicitly defined by

    P_{µ_k}{ ξ̂_111 ≥ ξ̂_sfe − h_k S for all s, f, e ∈ {0, 1} } = 1 − α,

with µ_k = M_k ⊗ J_{4×2} and M_k = [ 0 k ; 0 0 ]. Note that the product design (S, F, E) = (1, 1, 1) is labeled as best in the [CS] event. As k → ∞, P_{µ_k}{CS} → P_{µ_LFC}{CS}, where µ_LFC is the Kronecker product (3.3).

Suppose that µ_0 is an 8 × 4 matrix of zero means. Consider 16 observations with indices as shown in the matrix equation (2.7) but having zero means and unit variances, and compute the 12 estimated means µ̂_{sfe,tz} shown in (4.2) from these observations using the linear model (2.1). It can be shown that

    P_{µ_LFC}{CS} = P_{µ_0}{ min{µ̂_{111,00}, µ̂_{111,10}} ≥ µ̂_{000,00} − hS,
                            min{µ̂_{111,00}, µ̂_{111,10}} ≥ µ̂_{001,00} − hS,
                            min{µ̂_{111,00}, µ̂_{111,10}} ≥ µ̂_{010,00} − hS,
                            min{µ̂_{111,00}, µ̂_{111,10}} ≥ µ̂_{011,00} − hS,
                            min{µ̂_{111,00}, µ̂_{111,10}} ≥ min{µ̂_{100,00}, µ̂_{100,10}} − hS,
                            min{µ̂_{111,00}, µ̂_{111,10}} ≥ min{µ̂_{101,00}, µ̂_{101,10}} − hS,
                            min{µ̂_{111,00}, µ̂_{111,10}} ≥ min{µ̂_{110,00}, µ̂_{110,10}} − hS }    (4.1)

where the right-hand side of (4.1) involves the row minima of the reduced matrix of means

           (T, Z) = (0,0)       (0,1)   (1,0)        (1,1)     (S, F, E)
    µ̂_r = [ µ̂_{000,00}    ·      ·            ·  ]   (0, 0, 0)
          [ µ̂_{001,00}    ·      ·            ·  ]   (0, 0, 1)
          [ µ̂_{010,00}    ·      ·            ·  ]   (0, 1, 0)
          [ µ̂_{011,00}    ·      ·            ·  ]   (0, 1, 1)
          [ µ̂_{100,00}    ·      µ̂_{100,10}   ·  ]   (1, 0, 0)
          [ µ̂_{101,00}    ·      µ̂_{101,10}   ·  ]   (1, 0, 1)
          [ µ̂_{110,00}    ·      µ̂_{110,10}   ·  ]   (1, 1, 0)
          [ µ̂_{111,00}    ·      µ̂_{111,10}   ·  ]   (1, 1, 1)    (4.2)

and row (1, 1, 1) designates the best recipe. Note that the event [CS] itself requires the entire 8 × 4 matrix of the µ̂_{sfe,tz}.

To provide some intuition for the equality (4.1), we note first that µ̂_{0fe,1z} → +∞ almost surely as k → ∞ under µ_k; hence it suffices to ignore the upper-right 4 × 2 submatrix of estimated means when calculating the row minima. That the remaining eliminated means can be ignored follows from the representation µ̂_{sfe,tz} = m̂_CN(st) + m̂_C(fe) + m̂_N(z) for the estimated means, where the three terms are the OLS estimators of the model parts. Using this representation, it is straightforward to show that, for any (f, e) ≠ (1, 1), the following are equivalent:

    ξ̂_111 ≥ ξ̂_1fe
    ⟺ min{µ̂_{111,00}, µ̂_{111,10}, µ̂_{111,01}, µ̂_{111,11}} ≥ min{µ̂_{1fe,00}, µ̂_{1fe,10}, µ̂_{1fe,01}, µ̂_{1fe,11}}
    ⟺ m̂_C(11) + m̂_N(0) + min{m̂_CN(10), m̂_CN(11)} ≥ m̂_C(fe) + m̂_N(0) + min{m̂_CN(10), m̂_CN(11)}
    ⟺ min{µ̂_{111,00}, µ̂_{111,10}} ≥ min{µ̂_{1fe,00}, µ̂_{1fe,10}}.

A similar argument shows that, for any (f, e), ξ̂_111 ≥ ξ̂_0fe if and only if min{µ̂_{111,00}, µ̂_{111,10}} ≥ µ̂_{0fe,00}.

The estimators µ̂_{sfe,tz} appearing in the matrix µ̂_r can be viewed as the result of applying the following (general) process to the complete 8 × 4 matrix of the µ̂_{sfe,tz}. First, the µ̂_{sfe,tz} at the high level of the non-interacting noise factors are eliminated; in this case, z = 1 is the high level of the non-interacting noise factor, and so both columns with z = 1 are eliminated.

Second, among the remaining µ̂_{sfe,tz}, those estimated means having the same levels of the interacting control factors as the best product design are retained in the reduced matrix µ̂_r. In this example, the best product design is at control level S = 1, so all eight µ̂_{sfe,tz} with s = 1, i.e., the last four rows of µ̂_r, are retained. Third, for each combination of interacting control factor levels that differs from that of the best product design, i.e., s = 0 in this example, we choose one level of the interacting noise factors. In this case T is the interacting noise factor and, for each f, e ∈ {0, 1}, µ̂_r retains µ̂_{0fe,00} in the choice between µ̂_{0fe,00} and µ̂_{0fe,10}; thus the first four rows of µ̂_r contain only their first element.

In general, the final step can be viewed as the specification of a function j^I(·) with values in J^I, defined as follows. Fix a vector I = (I^I, I^∅) ∈ I^I × I^∅ that is designated as the set of control factor levels corresponding to the best product design. For each vector i^I of non-best interacting control factor levels, i.e., i^I ∈ I^I with i^I ≠ I^I, let j^I(i^I) ∈ J^I specify a set of levels of the interacting noise factors. In the taste-test case of Example 1, {i^I ∈ I^I : i^I ≠ I^I} = {0} is the only non-best level of the control factor; the possible levels of the interacting noise variable are J^I = {0, 1}, and the matrix µ̂_r in (4.2) corresponds to j^I(0) = 0. With this notation, we denote the right-hand-side probability of (4.1) by P{1, 1_2, j^I(0) = 0}. The probability P{1, 1_2, j^I(0) = 0} is determined by the levels of both the interacting and non-interacting control factors for the product design (1, 1_2) and by the function j^I(0) = 0.

This observation is now extended to a general setting. Recall that I = (I^I, I^∅) denotes an arbitrary specification of control factor levels (that should be thought of as the best product design in the probability calculation of (4.3)). Assume that for every interacting control factor combination i^I ∈ I^I with i^I ≠ I^I, an interacting noise factor combination j^I(i^I) ∈ J^I has been selected. Define

    P{I^I, I^∅, j^I(·)} = P_{µ_0}{ min_{j^I}{µ̂_{I^I, I^∅, j^I, 0_s}} ≥ µ̂_{i^I, i^∅, j^I(i^I), 0_s} − hS  for i^I ≠ I^I, i^∅ ∈ I^∅;
                                  min_{j^I}{µ̂_{I^I, I^∅, j^I, 0_s}} ≥ min_{j^I}{µ̂_{I^I, i^∅, j^I, 0_s}} − hS  for i^∅ ∈ I^∅ },    (4.3)

where all three minima are over j^I ∈ J^I. As seen in Example 1, such a probability is a lower bound for P_µ{CS}. We emphasize that the probability P{I^I, I^∅, j^I(·)} involves only those µ̂_{i,j} in which each of the s non-interacting noise factors is set at its low level, i.e., j^∅ = 0_s. The next theorem provides a lower bound for the probability of correct selection in terms of the set of the P{I^I, I^∅, j^I(·)}; the bound is valid for any mean model using any complete or fractional factorial design.

Theorem 4.1  Consider an experiment involving factors with arbitrary numbers of levels and having mean matrix µ = (µ_{i,j}) that satisfies (2.4)–(2.6). Suppose that each element of µ is estimable, independent of non-zero confounding factors. Then

    P_µ{CS} ≥ min_{(I^I, I^∅)} min_{j^I(·)} P{I^I, I^∅, j^I(·)},    (4.4)

where the first minimum is over (I^I, I^∅) ∈ I^I × I^∅ and the second is over all functions j^I(·).

In principle, one can set the lower bound on the right-hand side of (4.4) equal to 1 − α and solve for the (best possible) constant, call it h_LB. The value h_LB is a conservative critical value for Procedure G when the lower bound is not sharp and is exact when the lower bound is sharp. In general, there are a large number of different combinations of I^I ∈ I^I, I^∅ ∈ I^∅, and functions j^I(·) to be considered in the minima calculations of (4.4). However, for the important case of orthogonal two-level fractional factorial or two-level full factorial experiments, both common in screening studies, the right-hand side of Equation (4.4) is particularly simple.

Corollary 4.2  For an orthogonal two-level (full or fractional) factorial experiment,

    min_{j^I(·)} P{I^I, I^∅, j^I(·)}    (4.5)

is constant over I^I ∈ {0, 1}^p and I^∅ ∈ {0, 1}^q. Thus min_{j^I(·)} P{1_p, 1_q, j^I(·)} is a lower bound for P_µ{CS}.

Under the hypotheses of Corollary 4.2, one must solve P{1_p, 1_q, j^I(·)} = 1 − α for every j^I(·) to obtain the best possible (although possibly conservative) solution h_LB to Equation (4.5). In practice there are often only a small number of functions j^I(·) involved in the minimum of Equation (4.5), and this brute-force approach is feasible.

Example 1 (Continued)  Section 3 presented a lower bound based on the function j^I(0) = 0 for obtaining the critical value h for this example.

Now we examine the alternative lower bound given by Corollary 4.2 corresponding to j^I(0) = 1. The corresponding reduced matrix is

           (T, Z) = (0,0)       (0,1)   (1,0)        (1,1)     (S, F, E)
    µ̂_r = [ ·             ·      µ̂_{000,10}   ·  ]   (0, 0, 0)
          [ ·             ·      µ̂_{001,10}   ·  ]   (0, 0, 1)
          [ ·             ·      µ̂_{010,10}   ·  ]   (0, 1, 0)
          [ ·             ·      µ̂_{011,10}   ·  ]   (0, 1, 1)
          [ µ̂_{100,00}    ·      µ̂_{100,10}   ·  ]   (1, 0, 0)
          [ µ̂_{101,00}    ·      µ̂_{101,10}   ·  ]   (1, 0, 1)
          [ µ̂_{110,00}    ·      µ̂_{110,10}   ·  ]   (1, 1, 0)
          [ µ̂_{111,00}    ·      µ̂_{111,10}   ·  ]   (1, 1, 1)    (4.6)

The probability of correctly selecting the last row of this matrix based on the row minima, using a yardstick of length hS, is

    P{1, 1_2, j^I(0) = 1} = P_{µ_0}{ min{µ̂_{111,00}, µ̂_{111,10}} ≥ µ̂_{000,10} − hS,
                                    min{µ̂_{111,00}, µ̂_{111,10}} ≥ µ̂_{001,10} − hS,
                                    min{µ̂_{111,00}, µ̂_{111,10}} ≥ µ̂_{010,10} − hS,
                                    min{µ̂_{111,00}, µ̂_{111,10}} ≥ µ̂_{011,10} − hS,
                                    min{µ̂_{111,00}, µ̂_{111,10}} ≥ min{µ̂_{100,00}, µ̂_{100,10}} − hS,
                                    min{µ̂_{111,00}, µ̂_{111,10}} ≥ min{µ̂_{101,00}, µ̂_{101,10}} − hS,
                                    min{µ̂_{111,00}, µ̂_{111,10}} ≥ min{µ̂_{110,00}, µ̂_{110,10}} − hS,
                                    min{µ̂_{111,00}, µ̂_{111,10}} ≥ min{µ̂_{111,00}, µ̂_{111,10}} − hS }.

By Corollary 4.2, the smaller of P{1, 1_2, j^I(0) = 0} and P{1, 1_2, j^I(0) = 1} is a lower bound for the probability of correct selection. However, the next corollary shows that these two probabilities are equal, and thus both give the same lower bound. That the lower bound is sharp follows from Theorem 4.4 in Subsection 4.2 and will be discussed there.

The following example illustrates a case where Corollary 4.2 provides an effective way to find a critical value.

Example 4  Consider a five-factor experiment with each factor at level 0 or 1. Suppose that there are three control factors and two noise factors: two control factors interact with the same noise factor, the third control factor does not interact with any noise factor, and the other noise factor does not interact with any control factor. Our notation for this situation is:

    Index     Name           Type
    i_1, i_2  C^I_1, C^I_2   Interacting
    i_3       C^∅_1          Non-interacting
    j_1       N^I_1          Interacting
    j_2       N^∅_1          Non-interacting

Suppose that a 2^{5−1} fractional factorial experiment is conducted that yields the data

          (N^I_1, N^∅_1) = (0,0)   (0,1)       (1,0)       (1,1)      (C^I_1, C^I_2, C^∅_1)
    Y_D = [ ·           Y_{000,01}  Y_{000,10}  ·          ]   (0, 0, 0)
          [ Y_{001,00}  ·           ·           Y_{001,11} ]   (0, 0, 1)
          [ Y_{010,00}  ·           ·           Y_{010,11} ]   (0, 1, 0)
          [ ·           Y_{011,01}  Y_{011,10}  ·          ]   (0, 1, 1)
          [ Y_{100,00}  ·           ·           Y_{100,11} ]   (1, 0, 0)
          [ ·           Y_{101,01}  Y_{101,10}  ·          ]   (1, 0, 1)
          [ ·           Y_{110,01}  Y_{110,10}  ·          ]   (1, 1, 0)
          [ Y_{111,00}  ·           ·           Y_{111,11} ]   (1, 1, 1)

and a 10 (= 16 − 6) degrees of freedom estimator of σ²_ε. Assume the model

    µ_{i_1 i_2 i_3, j_1 j_2} = m_CN(i_1 i_2, j_1) + m_C(i_3) + m_N(j_2)    (4.7)

for the mean of Y_{i_1 i_2 i_3, j_1 j_2}, where m_N(j_2) = (N^∅_1)_{j_2}, m_C(i_3) = (C^∅_1)_{i_3},

    m_CN(i_1 i_2, j_1) = m_0 + (C^I_1)_{i_1} + (C^I_2)_{i_2} + (N^I_1)_{j_1} + (C^I_1 C^I_2)_{i_1 i_2} + (C^I_1 N^I_1)_{i_1, j_1} + (C^I_2 N^I_1)_{i_2, j_1},

and each of i_1, i_2, i_3, j_1, and j_2 is 0 or 1.

By Corollary 4.2, a lower bound for the minimum of P_µ{CS} over all µ satisfying (4.7) is given by the minimum of P{1_2, 1, j^I(·)} over the 8 possible functions {j^I(i_1, i_2)} defined on {(0, 0), (0, 1), (1, 0)}. Here the last row of µ, that with i_1 = i_2 = i_3 = 1, is labeled as best. For example, in Step 3, the function j^I(i_1, i_2) ≡ 0 always uses the estimator µ̂_{i_1 i_2 i_3, 00}, at the low level j_1 = 0 of the interacting noise factor N^I_1, rather than the estimator at the high level, µ̂_{i_1 i_2 i_3, 10}. The reduced matrix of the µ̂_{i_1 i_2 i_3, j_1 j_2} associated with j^I(·) ≡ 0 is

          (N^I_1, N^∅_1) = (0,0)   (0,1)   (1,0)        (1,1)   (C^I_1, C^I_2, C^∅_1)
    µ̂_r = [ µ̂_{000,00}   ·      ·            ·  ]   (0, 0, 0)
          [ µ̂_{001,00}   ·      ·            ·  ]   (0, 0, 1)
          [ µ̂_{010,00}   ·      ·            ·  ]   (0, 1, 0)
          [ µ̂_{011,00}   ·      ·            ·  ]   (0, 1, 1)
          [ µ̂_{100,00}   ·      ·            ·  ]   (1, 0, 0)
          [ µ̂_{101,00}   ·      ·            ·  ]   (1, 0, 1)
          [ µ̂_{110,00}   ·      µ̂_{110,10}   ·  ]   (1, 1, 0)
          [ µ̂_{111,00}   ·      µ̂_{111,10}   ·  ]   (1, 1, 1)

The complete list of eight µ̂_r matrices corresponding to the 8 different functions j^I(·) has the following structure: each row (i_1, i_2, i_3) with (i_1, i_2) ≠ (1, 1) contains the single entry µ̂_{i_1 i_2 i_3, j^I(i_1, i_2)0}, while the rows (1, 1, 0) and (1, 1, 1) always contain the two entries µ̂_{i_1 i_2 i_3, 00} and µ̂_{i_1 i_2 i_3, 10}. The matrix displayed above corresponds to j^I(·) ≡ 0; the remaining seven matrices are obtained by moving the retained entries of the rows with (i_1, i_2) ≠ (1, 1) to the columns specified by the values j^I(0, 0), j^I(0, 1), and j^I(1, 0).

To obtain a conservative critical value for Procedure G, one can solve P{1_2, 1, j^I(·)} = 1 − α for h_LB for each of the 8 functions j^I(·); the maximum of these h_LB values is a conservative critical value for Procedure G. As an example of solving P{1_2, 1, j^I(·)} = 1 − α, consider j^I(·) ≡ 0. The expression P{1_2, 1, j^I(·) ≡ 0} is the probability, under the zero-mean homogeneity configuration µ = (0)_{8×4} with σ_ε = 1.0, of selecting the last row as the best product design based on the reduced set of the µ̂_{i_1 i_2 i_3, j_1 j_2} corresponding to j^I(·) ≡ 0. The solution h_LB of P{1_2, 1, j^I(·) ≡ 0} = 1 − α is the 100(1 − α)% percentile of the random variable

    T = √10 [ max{ µ̂_{000,00}, µ̂_{001,00}, µ̂_{010,00}, µ̂_{011,00}, µ̂_{100,00}, µ̂_{101,00},
                   min{µ̂_{110,00}, µ̂_{110,10}}, min{µ̂_{111,00}, µ̂_{111,10}} }
              − min{µ̂_{111,00}, µ̂_{111,10}} ] / √V,

where V ~ χ²_10. The critical values corresponding to the other functions j^I(·) are computed similarly. Lastly, we mention that it is straightforward to show that the four µ̂_r matrices with j^I(1, 0) = 1 can be obtained from the four with j^I(1, 0) = 0 by relabeling the high and low levels of the factor N^I_1; in terms of T, the two sets are equivalent. Therefore, we need only compute the h_LB corresponding to the four µ̂_r matrices with j^I(1, 0) = 0.
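As with the LFC method, this percentile is readily estimated by Monte Carlo. The sketch below (Python with numpy; illustrative and not from the paper, with an arbitrary replication count and ν = 10 taken from the text) estimates h_LB for j^I(·) ≡ 0 at the 95% level.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# The 2^(5-1) fraction of Example 4: the 16 index vectors (i1,i2,i3,j1,j2)
# with an odd number of ones, matching the displayed Y_D.
grid = np.array(list(product([0, 1], repeat=5)))
design = grid[grid.sum(axis=1) % 2 == 1]

def model_matrix(lv):
    # Effects coding of Model (4.7): m0, C1I, C2I, N1I, C1I*C2I, C1I*N1I,
    # C2I*N1I, C1 (non-interacting), N1 (non-interacting).
    c1, c2, c3, n1, n2 = (2.0 * lv[:, k] - 1.0 for k in range(5))
    return np.column_stack([np.ones(len(c1)), c1, c2, n1,
                            c1 * c2, c1 * n1, c2 * n1, c3, n2])

P = np.linalg.pinv(model_matrix(design))
Xg = model_matrix(grid)

nu, nrep = 10, 200_000          # nu = 10 degrees of freedom, as in the text
T = np.empty(nrep)
for b in range(nrep):
    y = rng.standard_normal(16)              # homogeneity: mu = 0, sigma = 1
    mu_hat = (Xg @ (P @ y)).reshape(8, 4)    # rows (i1,i2,i3), cols (j1,j2)
    # Reduced rows for j^I(.) == 0: rows with (i1,i2) != (1,1) keep only the
    # (j1,j2) = (0,0) cell; rows (1,1,0) and (1,1,1) keep (0,0) and (1,0).
    stat = mu_hat[:, 0].copy()
    stat[6:] = mu_hat[6:, [0, 2]].min(axis=1)
    T[b] = np.sqrt(nu) * (stat.max() - stat[-1]) / np.sqrt(rng.chisquare(nu))

print(np.quantile(T, 0.95))                  # estimate of h_LB
```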

The next corollary allows the determination of the minimum in (4.5) of Corollary 4.2 (and hence of the lower bound in Theorem 4.1) for the special case of a single interacting control factor.

Corollary 4.3  For an orthogonal two-level (full or fractional) factorial experiment in which only one control factor, C^I_1, interacts with the r noise factors, i.e., I^I = {0, 1}, I^∅ = {0, 1}^q, and J^I = {0, 1}^r,

    P{I^I, I^∅, j^I(·)} = P{1, 1_q, j̃^I(·)}

for any j^I(·): {0} → {0, 1}^r, where j̃^I(0) ≡ 0_r.

Suppose, in addition, that there is a single interacting noise factor; then there are but two possible functions j^I(·), namely j^I(0) = 0 and j^I(0) = 1, and Corollary 4.3 states that they yield equal probabilities. In particular, as applied to Example 1, Corollary 4.3 guarantees that the two probabilities P{1, 1_2, j^I(0) = 0} and P{1, 1_2, j^I(0) = 1} are equal, and hence the lower bound of Theorem 4.1 is equal to P{1, 1_2, j^I(0) = 0}, say. Furthermore, as the discussion following Equation (4.1) shows, this lower bound is sharp because it is equal to min_µ P_µ{CS}. Next we address the general topic of determining when the lower bound (4.4) is sharp.

4.2 Checking Whether the Lower Bound (4.4) is Sharp

This subsection provides a method to check whether the lower bound of Theorem 4.1 is sharp. First, we describe how to check whether a given P{I^I, I^∅, j^I(·)} corresponds to the probability of correct selection at a certain parameter configuration. The conclusion depends only on the identity of the interacting control and noise factors and on the model m_CN that relates these factors.

Recall that M = (m_CN(i^I, j^I)) is an |I^I| × |J^I| matrix ordered lexicographically in both the elements of i^I and of j^I. Fix a vector of indices I = (I^I, I^∅), which is regarded as the best product design in the probability calculation below. For the given I^I and a given function j^I(·) from {i^I ∈ I^I : i^I ≠ I^I} to J^I, let M_k(I^I, j^I(·)) denote any |I^I| × |J^I| matrix, if one exists, such that

    M_k(I^I, j^I(·)) satisfies model m_CN(i^I, j^I),    (4.8)
    M_k(I^I, j^I) = 0 for all j^I ∈ J^I,    (4.9)
    M_k(i^I, j^I(i^I)) = 0 for all i^I ≠ I^I, and    (4.10)
    M_k(i^I, j^I) ≥ k for all i^I ≠ I^I and j^I ≠ j^I(i^I).    (4.11)

In practice, k is a large positive number, say k ≥ 100.

In words, row I^I of M_k(I^I, j^I(·)) consists of zeroes; for any other row i^I ≠ I^I, the entry in column j^I(i^I) is zero and all other entries in that same row are at least k (while the entire matrix must satisfy model m_CN(i^I, j^I)). Given M_k(I^I, j^I(·)), let

    µ_k(I^I, I^∅, j^I(·)) = M_k(I^I, j^I(·)) ⊗ J_{|I^∅| × |J^∅|}

denote the corresponding matrix of means for a (conceptual) full factorial experiment in the control and noise factors. It is straightforward to check that the mean matrix µ_k(I^I, I^∅, j^I(·)) satisfies m_C(i^∅) = 0 and m_N(j^∅) = 0. Notice that ξ_i = 0 for all rows of µ_k(I^I, I^∅, j^I(·)), and hence the product designs corresponding to these rows have equal ξ_i. However, intuition suggests that row I^I will tend to have a smaller estimated ξ_i than the other ξ_i = 0 rows because all of its entries are (tied at) zero, whereas each other row has exactly one zero entry, its remaining components being large (of magnitude k). Theorem 4.4 states that the probability that Procedure G correctly identifies the row (I^I, I^∅) as the best product design under µ_k(I^I, I^∅, j^I(·)) approaches P{I^I, I^∅, j^I(·)} as k increases.

Theorem 4.4  Suppose that there exists a sequence {M_k(I^I, j^I(·))}_{k=1}^∞ satisfying (4.8)–(4.11), and let µ_k(I^I, I^∅, j^I(·)) = M_k(I^I, j^I(·)) ⊗ J_{|I^∅| × |J^∅|}. Then

    P{I^I, I^∅, j^I(·)} = lim_{k→∞} P_{µ_k(I^I, I^∅, j^I(·))}{ ξ̂_{I^I, I^∅} ≥ max_{i^I, i^∅} ξ̂_{i^I, i^∅} − hS }.    (4.12)

Roughly speaking, Theorem 4.4 states that if

    µ(I^I, I^∅, j^I(·)) ≡ lim_{k→∞} µ_k(I^I, I^∅, j^I(·)) = lim_{k→∞} M_k(I^I, j^I(·)) ⊗ J_{|I^∅| × |J^∅|}

exists, then it is a parameter configuration for which the probability of correctly identifying (I^I, I^∅) as the best control factor combination equals P{I^I, I^∅, j^I(·)}. The following corollary provides a sufficient condition for the lower bound in Theorem 4.1 to be sharp.

Corollary 4.5  Suppose that the combination I^I, I^∅, and j^I(·) yields the smallest value of P{I^I, I^∅, j^I(·)}, i.e.,

    P{I^I, I^∅, j^I(·)} = min_{(I^I, I^∅)} min_{j^I(·)} P{I^I, I^∅, j^I(·)}.

Suppose, in addition, that there exists a sequence {M_k}_{k=1}^∞ corresponding to (I^I, j^I(·)) satisfying (4.8)–(4.11). Then

    µ(I^I, I^∅, j^I(·)) = lim_{k→∞} M_k ⊗ J_{|I^∅| × |J^∅|}

is a least favorable configuration and

    P{I^I, I^∅, j^I(·)} = min_µ P_µ{CS}.