Topics in Experimental Design


Topics in Experimental Design

Ronald Christensen
Professor of Statistics
Department of Mathematics and Statistics
University of New Mexico

Copyright © 2016

Springer


Preface

An extremely useful concept in experimental design is the use of treatments that have factorial structure. For example, if you are interested in two things, say the effect of alcohol and the effect of sleeping pills, rather than performing separate experiments on each, you can incorporate both factors into the treatments of a single experiment. Briefly, suppose we are interested in two levels of alcohol, say, a_0 (no alcohol) and a_1 (a standard dose of alcohol), and we are also interested in two levels of sleeping pills, say, s_0 (no sleeping pills) and s_1 (a standard dose of sleeping pills). A factorial treatment structure involves forming 4 treatments: a_0 s_0 (no alcohol, no sleeping pills), a_0 s_1 (no alcohol, sleeping pills), a_1 s_0 (alcohol, no sleeping pills), a_1 s_1 (alcohol and sleeping pills). There are two benefits to doing this. First, if there is no interaction, i.e., if the effect of alcohol does not depend on whether or not the subject has taken sleeping pills, you can learn as much from running one experiment using factorial treatment structure on, say, 20 people as you can from two separate experiments, one for alcohol and one for sleeping pills, each involving 20 people. This is a 50% savings in the number of observations needed. Second, if interaction exists, i.e., if the effect of alcohol depends on the amount of sleeping pills a person has taken, you can study that interaction in an experiment with factorial treatment structure but you cannot study interaction in an experiment that was solely devoted to looking at the effects of alcohol. An experiment such as this, involving two factors each at two levels, is referred to as a 2 × 2 = 2^2 factorial treatment structure. Note that 4 is the number of treatments we end up with. If we had 3 levels of sleeping pills, it would be a 2 × 3 factorial, thus giving 6 treatments. If we had three levels of both alcohol and sleeping pills it would be a 3 × 3 = 3^2.
If we had three factors, say, alcohol, sleeping pills, and benzedrine, each at 2 levels, we would have a 2 × 2 × 2 = 2^3 structure. If each factor were at 3 levels, say, none of the drug, a standard dose, and twice the standard dose, and we made up our treatments by taking every combination of the levels of each factor, we would get 3 × 3 × 3 = 3^3 = 27 treatments. See Christensen (1996b, Chapter 11 or 2015, Chapter ?) for more on factorial treatments. Mostly this monograph consists of things that got left out of other books.

Ronald Christensen
Albuquerque, New Mexico
November 12, 2016

BMDP Statistical Software is located at 1440 Sepulveda Boulevard, Los Angeles, CA 90025. MINITAB is a registered trademark of Minitab, Inc., 3081 Enterprise Drive, State College, PA 16801. MSUSTAT is marketed by the Research and Development Institute Inc., Montana State University, Bozeman, MT, Attn: R.E. Lund.

Contents

Preface

1 Screening Designs
   1.1 Introduction
   1.2 Designs for Two Levels
       Blocking
       Hare's Full model
       Construction of Hadamard Matrices
   1.3 Designs for Three Levels
       Definitive Screening Designs
       Some Linear Model Theory
   Weighing out of conference
   Mixed Levels
   Experiment on Variability
   Notes

2 Confounding and Fractional Replication: 2^f Factorial Systems
   Confounding
   Fractional replication
   Analysis of unreplicated experiments
   More on graphical analysis
   Augmenting designs for factors at two levels
   Exercises

3 p^f Factorial Treatment Structures
   3^f Factorials
   Column Space Considerations
   Confounding and Fractional Replication
       Confounding
       Fractional Replication
       Experiment on Variability

   3.4 Analysis of a Confounded
       The Expanded ANOVA Table
       Interaction Contrasts
       A DSD
   f Factorials
   Further Extensions
       Mixtures of Prime Powers
       Powers of Primes

4 Response Surface Maximization
   Approximating Response Functions
   First-Order Models and Steepest Ascent
   Fitting Quadratic Models
   Interpreting Quadratic Response Functions

5 Recovery of Interblock Information in BIB Designs
   Estimation
   Model Testing
   Contrasts
   Alternative Inferential Procedures
   Estimation of Variance Components

References

Index

Chapter 1
Screening Designs

Three fundamental ideas in experimental design are replication, blocking, and factorial treatment structure. Replication is required so that we have a measure of the variability of the observations. Blocking is used to reduce experimental variability. Blocking has inspired a number of standard designs including randomized complete blocks, Latin squares, balanced incomplete blocks, Youden squares, balanced lattice designs, and partially balanced incomplete blocks. The point of using factorial treatment structures is not only that they allow one to look very efficiently at the main effects of the factors but also that they allow examination of interactions between the factors. In this chapter we examine designs for looking efficiently at main effects without observing the entire set of factorial treatments. We will see that blocking can be accomplished by treating blocks as additional factors in the experiment.

Replication tends to get short shrift in screening designs. It largely consists of pretending that interaction does not exist and using estimates of the nonexistent interaction to estimate variability. In Chapter 2 we will look at some methods for analyzing data without replications.

In general, I contend that if interaction exists, it is the only thing worth looking at. In general, if interaction exists, main effects have no useful meaning. In particular, you could have an interaction between two factors, say alcohol and sleeping pills, where responses are low when you use both alcohol and sleeping pills or use neither alcohol nor sleeping pills but responses are high when you use either one but not the other. In such a case, neither the main effect for alcohol nor the main effect for sleeping pills will look important, because their average effects are unimportant, despite the importance of knowing the exact combination of alcohol and sleeping pills used.
Screening designs are based on the hope that these sorts of interactions will not occur, cf. Christensen et al. (2010, Subsection 7.4.7). It can be expensive to collect enough data to explore all of the possible interactions. If you want to save money, you have to give something up. Obviously, you should give up the things that you think are unlikely to be important. In the case of screening designs, that means giving up the ability to look at any interactions. As designs get more sophisticated, such as those discussed in Chapters 2 and 3, one can look at main effects and some interactions with relatively few observations.

1.1 Introduction

When the treatments have factorial structure, screening designs can be set up to examine all of the factors' main effects efficiently but without the cost and trouble of examining all of the factorial treatments. Often, especially in industrial experiments, there are so many factors that using a complete factorial structure becomes prohibitive because there are just too many treatments to consider. For example, suppose we have 8 factors each at two levels. The number of factor combinations (treatments) is 2^8 = 256. That is a lot of treatments, especially if you plan to perform replications in order to estimate error. If you only want to estimate the 8 factorial main effects, in theory you can do that with as few as 9 observations. Nine observations means 9 degrees of freedom, which can be allocated as one for each factor's main effect and one for fitting the grand mean (intercept).

EXAMPLE. Consider the experiment reported by Hare (1988) with five factors each at two levels for 2^5 = 32 factor combinations (treatments). The issue is excessive variability in the taste of a dry soup mix. The source of variability was identified as a particular component of the mix called the intermix, containing flavorful ingredients such as salt and vegetable oil. Intermix is made in a large mixer. Factor A is the number of ports for adding vegetable oil to the mixer. This was set at either 1 (a_0) or 3 (a_1). Factor B is the temperature of the mixer. The mixer can be cooled by circulating water through the mixer jacket (b_0) or the mixer can be used at room temperature (b_1). Factor C is the mixing time, 60 seconds (c_0) or 80 seconds (c_1). Factor D is the size of the intermix batch, either 1500 pounds (d_0) or 2000 pounds (d_1). Factor E is the delay between making the intermix and using it in the final soup mix. The delay is either 1 day (e_0) or 7 days (e_1).
Table 1.1 contains a list of the 16 treatments that were actually examined out of the 32 possible treatments. The order in which the treatments were run was randomized and they are listed in that order. Batch number 7 contains the standard operating conditions. Because the issue in Hare's experiment is variability, the data collection is complicated. For each of the 16 batches of intermix, the original data are groups of 5 samples taken every 15 minutes throughout a day of processing. Thus each batch yields data for a balanced one-way analysis of variance with N = 5. The data actually analyzed are derived from the ANOVAs on the different batches. There are two sources of variability in the original observations, the variability within a group of 5 samples and variability that occurs between 15 minute intervals. From the analysis of variance data, the within group variability is estimated with the MSE and summarized as the estimated capability standard deviation s_c = √MSE. The process standard deviation is defined as the standard deviation of an individual observation. The standard deviation of an observation incorporates both the between group and the within group sources of variability. The estimated process standard deviation is taken as

Table 1.1 Hare's intermix variability data.

Batch  Treatment                    s_c  s_p
  1    a_0 b_0 c_0 d_1 e_1  de
  2    a_1 b_0 c_1 d_1 e_1  acde
  3    a_1 b_1 c_0 d_0 e_0  ab
  4    a_1 b_0 c_1 d_0 e_0  ac
  5    a_0 b_1 c_0 d_0 e_1  be
  6    a_0 b_0 c_1 d_0 e_1  ce
  7    a_0 b_1 c_0 d_1 e_0  bd
  8    a_1 b_1 c_1 d_1 e_0  abcd
  9    a_0 b_1 c_1 d_1 e_1  bcde
 10    a_1 b_1 c_0 d_1 e_1  abde
 11    a_0 b_0 c_1 d_1 e_0  cd
 12    a_0 b_0 c_0 d_0 e_0  (1)
 13    a_1 b_0 c_0 d_0 e_1  ae
 14    a_1 b_1 c_1 d_0 e_1  abce
 15    a_1 b_0 c_0 d_1 e_0  ad
 16    a_0 b_1 c_1 d_0 e_0  bc

s_p = √( MSE + (MSTrts − MSE)/5 ),

where the 5 is the number of samples taken at each time, cf. Christensen (1996, Subsection or 2016, Subsection ). These two statistics, s_c and s_p, are available from every batch of soup mix prepared and provide the data as reported in Table 1.1 for analyzing batches.

As alluded to earlier, with 5 factors each at two levels, in theory we could get estimates of all the treatment main effects from as few as 6 observations. Hare's experiment uses 16 observations for reasons that will be examined in the next chapter. In practice, the smallest number of observations for examining 5 factors each at two levels that has nice properties is 8. We will return to this issue at the end of the continuation of the example.

There are many methods in common use for identifying the treatment levels. For experiments like Hare's, with 5 factors each at two levels, we have denoted the treatments using lower case letters and subscripts. The lower case letters really just provide an ordering for the factor subscripts so we know that, say, the third subscript corresponds to the third factor, C. Given the ordering, the subscripts contain all the information about treatments. In other words, a 16 × 5 matrix of 0s and 1s identifies the treatments. Another convenient method of subscripting is to replace the 0s with −1s, and that is sometimes reduced to reporting just plus and minus signs.
Yet another way of identifying treatments is to write down only the treatment letters that have a subscript of 1 (and not write down the subscripts). This last method also appears in Table 1.1.
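To make the derived data concrete, here is a minimal Python sketch of how s_c and s_p arise from one batch's one-way ANOVA. The group values are made up for illustration; only the N = 5 group structure and the formulas for s_c and s_p come from the text.

```python
import numpy as np

# Hypothetical batch: g = 4 groups of N = 5 samples taken at 15-minute
# intervals (made-up numbers, not Hare's actual data).
groups = np.array([
    [3.1, 2.9, 3.4, 3.0, 3.2],
    [3.6, 3.8, 3.5, 3.9, 3.7],
    [2.8, 3.0, 2.7, 3.1, 2.9],
    [3.3, 3.2, 3.5, 3.4, 3.1],
])
g, N = groups.shape
group_means = groups.mean(axis=1)
grand_mean = groups.mean()

# Balanced one-way ANOVA mean squares.
MSTrts = N * ((group_means - grand_mean) ** 2).sum() / (g - 1)
MSE = ((groups - group_means[:, None]) ** 2).sum() / (g * (N - 1))

# Capability and process standard deviations.
s_c = np.sqrt(MSE)
s_p = np.sqrt(MSE + (MSTrts - MSE) / N)
print(s_c, s_p)
```

Each of the 16 batches would yield one such (s_c, s_p) pair, and those pairs form the responses analyzed below.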

A screening design focuses on main effects. We can get the main effect information out of the data by fitting the model Y = Xβ + e where X is the matrix of subscript values together with an initial column of 1s.

EXAMPLE CONTINUED. For Hare's experiment, Y is one of the last two columns of Table 1.1 and X has a first column of 1s and then the rest of X consists of the treatment subscripts from Table 1.1, i.e.,

    [1 0 0 0 1 1]
    [1 1 0 1 1 1]
    [1 1 1 0 0 0]
    [1 1 0 1 0 0]
    [1 0 1 0 0 1]
    [1 0 0 1 0 1]
    [1 0 1 0 1 0]
X = [1 1 1 1 1 0]
    [1 0 1 1 1 1]
    [1 1 1 0 1 1]
    [1 0 0 1 1 0]
    [1 0 0 0 0 0]
    [1 1 0 0 0 1]
    [1 1 1 1 0 1]
    [1 1 0 0 1 0]
    [1 0 1 1 0 0]

An equivalent but alternative method of writing the model matrix replaces the subscript 0 with the subscript −1,

     [1 −1 −1 −1  1  1]
     [1  1 −1  1  1  1]
     [1  1  1 −1 −1 −1]
     [1  1 −1  1 −1 −1]
     [1 −1  1 −1 −1  1]
     [1 −1 −1  1 −1  1]
     [1 −1  1 −1  1 −1]
X̃ =  [1  1  1  1  1 −1]
     [1 −1  1  1  1  1]
     [1  1  1 −1  1  1]
     [1 −1 −1  1  1 −1]
     [1 −1 −1 −1 −1 −1]
     [1  1 −1 −1 −1  1]
     [1  1  1  1 −1  1]
     [1  1 −1 −1  1 −1]
     [1 −1  1  1 −1 −1]   (1)

in a model Y = X̃γ + e. The matrix X̃ has the useful mathematical property that X̃′X̃ = 16 I_6, which makes the linear model easy to analyze. In particular, the estimate of γ is γ̂ = (1/16)X̃′Y. For analyzing the s_p data, fitting either the X or the X̃ model gives equivalent results. Fitting the X̃ model gives

Analysis of Variance for s_p
Source          df  SS  MS  F  P
Regression       5
Residual Error  10
Total           15

Table of Coefficients for s_p
Predictor  γ̂_k     SE(γ̂_k)  t  P
Constant
A
B
C
D
E          −0.235

Fitting the X model gives estimates and standard errors, other than the intercept, that are twice as large but gives the same t statistics and P values. Specifically, for k = 1,...,5, β̂_k = 2γ̂_k and SE(β̂_k) = 2 SE(γ̂_k). For, say, factor E, the estimated change in going from the low treatment level e_0 to the high treatment level e_1 is 2(−0.235) = −0.47, with a standard error of twice SE(γ̂_E).

Again, regardless of the model, we can divide the SSReg into one degree of freedom for each main effect.

Source  df  SS
A        1
B        1
C        1
D        1
E        1

These sums of squares, divided by the MSE, are equal to the squares of the t statistics from the Table of Coefficients. Because of the special structure of X̃, unlike standard regression problems, neither the sums of squares nor the t statistics change if you drop any other main effects out of the model. From this analysis, factor E, the time waited before using the intermix, has a much larger sum of squares and a much larger t statistic than any of the other factors, so it would seem to be the most important factor. The design Hare used allows examination of not only the main effects but also of all the two-factor interactions, and we will see in the next chapter that two-factor interactions are important in these data.
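The orthogonality X̃′X̃ = 16 I_6 and the doubling relationship between the two fits are easy to verify numerically. A sketch, using the Table 1.1 subscripts but a randomly generated stand-in for Y (Hare's s_p values are not reproduced here):

```python
import numpy as np

# 0/1 subscripts for factors A,B,C,D,E in the Table 1.1 run order.
subs = np.array([
    [0,0,0,1,1],[1,0,1,1,1],[1,1,0,0,0],[1,0,1,0,0],
    [0,1,0,0,1],[0,0,1,0,1],[0,1,0,1,0],[1,1,1,1,0],
    [0,1,1,1,1],[1,1,0,1,1],[0,0,1,1,0],[0,0,0,0,0],
    [1,0,0,0,1],[1,1,1,0,1],[1,0,0,1,0],[0,1,1,0,0],
])
n = subs.shape[0]
X = np.column_stack([np.ones(n), subs])          # 0/1 coding
Xt = np.column_stack([np.ones(n), 2*subs - 1])   # -1/1 coding

# The -1/1 design is orthogonal, so least squares is just Xt'Y/n.
assert np.allclose(Xt.T @ Xt, n * np.eye(6))

Y = np.random.default_rng(0).normal(size=n)      # stand-in response
gamma_hat = Xt.T @ Y / n
beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]

# Slopes double when the subscripts change from 0/1 to -1/1.
assert np.allclose(beta_hat[1:], 2 * gamma_hat[1:])
```

The two models span the same column space, so the fitted values agree exactly; only the parameterization changes.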

We mentioned earlier that, in theory, estimating all the main effects in a 2^5 factorial treatment structure requires only 6 observations and that a good practical design can be obtained using only 8. The last 5 columns of the following model matrix determine one good design for evaluating just the five main effects:

     [1 −1 −1 −1  1 −1]
     [1  1 −1 −1  1  1]
     [1 −1  1 −1 −1  1]
X̃ =  [1  1  1 −1 −1 −1]
     [1 −1 −1  1 −1  1]
     [1  1 −1  1 −1 −1]
     [1 −1  1  1  1 −1]
     [1  1  1  1  1  1]   (2)

In the next section we will examine where such designs originate.

1.2 Designs for Two Levels

For experiments having all factors at two levels, Plackett and Burman (1946) proposed using normalized Hadamard matrices to define screening designs, i.e., groups of factorial treatments that provide nice estimates of the main effects for all factors. A Hadamard matrix H is an n × n square matrix that consists of the numbers ±1 for which n^{−1/2}H is an orthonormal (more often called orthogonal) matrix. In other words, H′H = HH′ = nI. Recall that permuting either the rows or columns of an orthonormal matrix gives another orthonormal matrix. The number of rows n in a Hadamard matrix needs to be 1, 2, or a multiple of 4, and even then it is not clear that Hadamard matrices always exist. One Hadamard matrix of order 8 is

    [1 −1 −1 −1  1 −1  1  1]
    [1  1 −1 −1  1  1 −1 −1]
    [1 −1  1 −1 −1  1 −1  1]
H = [1  1  1 −1 −1 −1  1 −1]
    [1 −1 −1  1 −1  1  1 −1]
    [1  1 −1  1 −1 −1 −1  1]
    [1 −1  1  1  1 −1 −1 −1]
    [1  1  1  1  1  1  1  1]   (1)

One Hadamard matrix of order 12 is

obtainable from Paley's construction discussed at the end of this section.

A normalized Hadamard matrix has the form H = [J, T], where J is a column of 1s. The submatrix T, or any subset of its columns, can be used to define treatments (treatment subscripts) for analysis of variance problems involving many factors each at two levels in which our interest lies only in main effects. For example, if we have f factors each at two levels, an (f + 1)-dimensional normalized Hadamard matrix H, if it exists, determines a set of treatments whose observation allows us to estimate all f of the main effects. Randomly associate each of the f factors with one of the f columns in T. T provides the subscripts associated with each treatment to be observed. Indeed, the Hadamard matrix becomes X̃ in the linear model for main effects, Y = X̃γ + e. This is a smallest design that allows us to estimate the grand mean and all f of the factor main effects. But remember, Hadamard matrices do not exist for all values f + 1. Except for the trivial case of f = 1, Hadamard matrices only exist when f + 1 is a multiple of 4.

More often we have f factors and choose n > f + 1 as the size of the normalized Hadamard matrix H. Again, excluding the initial column of 1s, randomly associate each factor with one of the remaining n − 1 columns of H. From the Hadamard matrix, extract the matrix X̃ = [J, T] that consists of the column of 1s followed by (in any convenient order) the columns associated with the factors. Because H′H = nI_n, we have X̃′X̃ = nI_{f+1} and the perpendicular projection operator onto C(X̃) is M = (1/n)X̃X̃′. T provides the subscripts associated with the treatments to be observed. Assuming no interactions, the model Y = X̃γ + e involves n observations, provides n − r(X̃) = n − (f + 1) = n − f − 1 degrees of freedom for estimating the error, and provides estimates of the f main effects and the intercept. If we take n ≫ f + 1, we should be able to do much more than merely examine main effects, e.g., examine at least some interactions.
But in general, it is difficult to know what more we can do, i.e., what interactions we can look at. In the next chapter, we will examine in detail the special case where n is a power of 2. In the special case, we can keep careful track of which interactions can be estimated and which cannot.
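One concrete way to generate Hadamard matrices whose order is a power of 2 is the Kronecker (Sylvester) construction. A hedged Python sketch; this produces *a* Hadamard matrix of each size, not necessarily the particular matrices displayed in this chapter:

```python
import numpy as np

def sylvester_hadamard(k):
    """Return a Hadamard matrix of order 2**k by repeated Kronecker products."""
    H = np.array([[1]])
    H2 = np.array([[1, 1], [1, -1]])
    for _ in range(k):
        H = np.kron(H, H2)
    return H

H8 = sylvester_hadamard(3)
n = H8.shape[0]
# Hadamard property: H'H = HH' = nI.
assert np.array_equal(H8.T @ H8, n * np.eye(n, dtype=int))
assert np.array_equal(H8 @ H8.T, n * np.eye(n, dtype=int))
```

Because the first column of the result is all 1s, the matrix is already normalized, so its last n − 1 columns can serve as the subscript matrix T.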

EXAMPLE. In Hare's example, the matrix X̃ in equation (1.1.1) defined the model matrix for the main-effects linear model and its last five columns defined the treatments used. X̃ consists of the first 6 columns of a 16 × 16 normalized Hadamard matrix H. At the end of the previous section we mentioned that, if examining main effects was the only goal, Hare could have gotten by with examining only the 8 factor combinations defined by the last five columns of X̃ in equation (1.1.2). That matrix X̃ consists of the first six columns of the Hadamard matrix in equation (1.2.1). If Hare had 7 factors each at two levels, a smallest (orthogonal) design for obtaining all main effects takes X̃ equal to the Hadamard matrix in (1.2.1) [or some other normalized Hadamard matrix of the same size]. Alternatively, Hare could have stuck with 16 treatments and used, say, columns 2 through 8 of H to define the factor combinations. That would be a perfectly good design for looking only at main effects. But Hare was also interested in two-factor interactions and the 7th and 8th columns of H happen to be associated with the AB and AC interactions. (More on this later.) Using the 7th and 8th columns to help define treatments for 7 factors would mean losing the ability to estimate the AB and AC interactions: estimating these interactions is something that Hare could do with only 5 factors but something that typically is not a priority in a screening design.

Permuting the rows of a Hadamard matrix gives another Hadamard matrix which is, for our purposes, equivalent to the first. The rows define the treatments we want, and permuting the rows does not change the collection of treatments. Also, permuting the rows does not change that X̃′X̃ = nI_{f+1}. We just have to make sure that we apply the same permutation to the rows of Y. We could also permute the columns of either H or X̃, as long as we remember what factor is associated with each column.
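The permutation claims are easy to check numerically. A small sketch using a Sylvester-constructed order-8 Hadamard matrix (an assumption for illustration; any Hadamard matrix of that order would do):

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])
H = np.kron(np.kron(H2, H2), H2)     # an order-8 Hadamard matrix
n = H.shape[0]

rng = np.random.default_rng(1)
Hr = H[rng.permutation(n), :]        # permute the rows (reorder treatments)
Hc = Hr[:, rng.permutation(n)]       # then permute the columns

# Either permutation preserves the Hadamard property.
for M in (Hr, Hc):
    assert np.array_equal(M.T @ M, n * np.eye(n, dtype=int))
    assert np.array_equal(M @ M.T, n * np.eye(n, dtype=int))
```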

Blocking

Returning to Hare's experiment with 5 factors and 16 observations (factor combinations), suppose Hare had wanted to run the experiment in four blocks of size 4. The first 6 columns of H define the intercept and treatments; any other two columns of H could be used to define the four blocks of size 4. Let's use the last two columns to define blocks. The last two columns define pairs of numbers (1,1), (1,−1), (−1,1), (−1,−1) that will define the blocks. Delete the columns that we are not using and rearrange the rows of H so that the rows sharing a pair of numbers in the last two columns are grouped together. We can read off the blocking structure from the rearranged matrix. Block one consists of the treatments (with ±1 subscripts) corresponding to rows of H in which the last two columns are (1,1). Changing the −1 subscripts back to 0s, the treatments in block one are

a_1 b_0 c_1 d_1 e_1    a_1 b_1 c_0 d_0 e_0
a_0 b_0 c_0 d_0 e_0    a_0 b_1 c_1 d_1 e_1

In the second block the treatment subscripts correspond to rows of H where the last two columns are (1,−1). The third block has (−1,1), and the last block has (−1,−1). The model matrix for the main-effects-with-blocking model can be taken to have a first column of 1s, second through fourth columns accounting for blocks, and last five columns accounting for factors A, B, C, D, E. The order of listing the treatments has changed from Table 1.1, so the row order of listing the variability measures s_c and s_p would also need to change. The physical act of blocking would almost certainly change the data from that observed in Table 1.1, but if the data from the blocked experiment were the same as those reported, the estimates and sums of squares for the main

effects would also remain the same. Blocking should change the estimate of error. The whole point of blocking is to isolate substantial effects due to blocks and remove them from the error that would have applied without blocking. If we wanted two blocks of size 8, we would have used only one column (not previously used for treatments or the intercept) of the Hadamard matrix to define blocks. If we wanted 8 blocks of size 2, we would have used three (not previously used) columns to define blocks.

Now let's examine blocking with f = 5 factors and n = 8 factor combinations. In this example, the last two columns of the Hadamard matrix in equation (1.2.1) are used to define 4 blocks of size 2. Rearranging the rows of (1.2.1) so that rows agreeing in their last two columns are grouped gives the four blocks of size 2. Unfortunately, using these blocks will lose us the ability to look at the main effect for factor D because D is at the same level in every block. There are 8 observations, so 8 degrees of freedom. There are 4 degrees of freedom for the blocks and the intercept, which leaves only 4 degrees of freedom for estimating effects, but we have 5 effects to estimate, so we must lose something. Again, a major virtue of the approach in Chapter 2 is that it allows us to keep track of such things.

Hare's Full model

The Hadamard matrix H was (implicitly) used by Hare to determine a group of 16 treatments to examine from a 2^5 factorial structure; treatments that provide a clean

analysis of main effects. From our discussion in this chapter, H could have been used to define a design and a main-effects model for up to 15 factors. The normalized Hadamard matrix H was actually constructed using the methods of Chapter 2, which allows us to identify each of the 10 columns of the matrix not used in the main-effects model with a particular two-factor interaction. Fitting the linear model Y = Hδ + e fits the data perfectly, leaving 0 degrees of freedom for error. The trick, in using this model, is identifying what effect each column represents. The first 6 columns correspond to the intercept and the factor main effects; the other 10 columns are two-factor interactions, obtained by multiplying two main-effects columns elementwise. In other words, column 7 is the AB interaction column because it is obtained from multiplying column 2 (factor A) times column 3 (factor B) elementwise. In the first row, columns 2 and 3 take the values −1 and −1, so column 7 is (−1)(−1) = 1. In the second row, columns 2 and 3 take the values 1 and −1, so column 7 is (1)(−1) = −1. In the last row, columns 2 and 3 are −1 and 1, so column 7 is (−1)(1) = −1. There are 10 distinct pairs of main effects, hence 10 two-factor interactions. Many ANOVA and regression programs have this method, or an equivalent process, automated. Incidentally, in the next chapter we will see that the AB interaction effect is indistinguishable from the CDE interaction effect. Note that column 7 (AB) is −1 times the elementwise product of columns 4, 5, and 6, the columns associated with C, D, and E.

Few computer programs allow fitting models with 0 dfe, so we deleted the last column of H before fitting the model. The last column corresponds to the DE interaction.

Analysis of Variance for s_p
Source          df  SS  MS  F  P
Regression      14
Residual Error   1
Total           15

The sums of squares can be broken down into 15 individual terms associated with main effects and two-factor interactions.
These numbers are just the last 15 elements of the vector H′Y, squared and divided by n = 16. (We have ignored the contribution from the intercept.) Again, the Error term is labeled as DE.

Source  df  SS    Source  df  SS    Source  df  SS
A        1        AB       1        BD       1
B        1        AC       1        BE       1
C        1        AD       1        CD       1
D        1        AE       1        CE       1
E        1        BC       1        DE       1

In the next chapter we will explore these associations. In particular, we will see that all of these terms are indistinguishable from higher-order interaction terms. In the main-effects-only model, we noted that E looked to be the most important effect. While that remains true in this more expansive model, the sum of squares for BE is of a similar size to that of E and the sum of squares for DE is not inconsiderable.
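The elementwise products behind these interaction columns, and the aliasing of AB with CDE, can be checked directly from the Table 1.1 subscripts. A Python sketch:

```python
import numpy as np

# 0/1 subscripts for A,B,C,D,E in the Table 1.1 run order.
subs = np.array([
    [0,0,0,1,1],[1,0,1,1,1],[1,1,0,0,0],[1,0,1,0,0],
    [0,1,0,0,1],[0,0,1,0,1],[0,1,0,1,0],[1,1,1,1,0],
    [0,1,1,1,1],[1,1,0,1,1],[0,0,1,1,0],[0,0,0,0,0],
    [1,0,0,0,1],[1,1,1,0,1],[1,0,0,1,0],[0,1,1,0,0],
])
Z = 2 * subs - 1                   # -1/1 main-effect columns

# Two-factor interaction columns are elementwise products of main-effect
# columns; e.g. the AB column is A*B.
AB = Z[:, 0] * Z[:, 1]
CDE = Z[:, 2] * Z[:, 3] * Z[:, 4]

# In this half fraction the AB column is -1 times the CDE column, so the
# two effects cannot be distinguished from these 16 runs.
assert np.array_equal(AB, -CDE)
```

The same computation for any other pair of main-effect columns generates the remaining nine interaction columns.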

Construction of Hadamard Matrices

Hadamard matrices have very nice design properties but you have to be able to find them. In practice, you just look them up. But there are a variety of ways to construct Hadamards. For an n × n Hadamard matrix to exist, n has to be 1, 2, or a multiple of 4. For n = 1, H = ±1. For n = 2, some Hadamards are

[1  1]   [1 −1]   [ 1 1]   [−1 1]
[1 −1],  [1  1],  [−1 1],  [ 1 1].

Given any two Hadamard matrices, it is easy to see that their Kronecker product is Hadamard:

[H_1 ⊗ H_2][H_1 ⊗ H_2]′ = [H_1 ⊗ H_2][H_1′ ⊗ H_2′] = [H_1 H_1′ ⊗ H_2 H_2′] = [n_1 I_{n_1} ⊗ n_2 I_{n_2}] = n_1 n_2 I_{n_1 n_2}.

Paley's method of constructing Hadamards uses Jacobsthal matrices. A q × q Jacobsthal matrix Q has 0s on the diagonal, ±1s elsewhere, and has the properties (a) QQ′ = qI_q − J_q J_q′ and (b) QJ_q = 0. From (a), q^{−1}QQ′ is the ppo onto C(J_q)⊥, so the property J_q′Q = 0 must be true. Clearly, if Q is Jacobsthal, Q′ is also. If q mod 4 = 3, Q will be skew symmetric, i.e., Q′ = −Q. In that case, an n = q + 1 dimensional Hadamard matrix can be obtained from

H = [1    J_q′   ]
    [J_q  Q − I_q]

or, equivalently,

H = I_n + [ 0    J_q′]  =  [ 1    J_q′   ]
          [−J_q  Q   ]     [−J_q  Q + I_q].

If q mod 4 = 1, Q will be symmetric and Hadamards of dimension n = 2(q + 1) can be constructed by replacing the elements of

[0    J_q′]
[J_q  Q   ]

by certain 2-dimensional Hadamards. Finally, you can construct Hadamards from conference matrices. (In the next section we will find a different use for conference matrices.) A q × q conference matrix C has 0s on the diagonal, ±1s elsewhere, and C′C = (q − 1)I. Examples of conference matrices are

C_1 = [0 1]    C_2 = [ 0 1]
      [1 0],         [−1 0],

and

C_3 = [0  1  1  1  1  1]
      [1  0  1 −1 −1  1]
      [1  1  0  1 −1 −1]
      [1 −1  1  0  1 −1]
      [1 −1 −1  1  0  1]
      [1  1 −1 −1  1  0].

C_2 is skew symmetric and C_3 is symmetric. If you think about what it takes for the columns of a conference matrix to be orthogonal, it is pretty easy to see that the dimension q of a conference matrix has to be an even number. Skew symmetric conference matrices have q a multiple of 4, with the other even numbers giving symmetric conference matrices, if they exist. For example, conference matrices are not known to exist for q = 22, 34. If C is skew symmetric, a Hadamard matrix is H = I + C. If C is symmetric,

H = [C + I  C − I ]
    [C − I  −C − I]

is Hadamard. We are not going to consider how one goes about constructing either Jacobsthal or conference matrices.

1.3 Designs for Three Levels

Screening designs are used as a first step in identifying important factors. They rely on the hope that any interactions that exist will not mask the important main effects. Suppose we have f factors each at 3 levels. As before, we use capital letters to denote factors and small letters to denote levels. With three levels, we have two useful subscripting options for identifying levels. We can identify the levels of factor A as a_0, a_1, a_2 or as a_{−1}, a_0, a_1. The first subscripting option is used in Chapter 3 (because it facilitates modular arithmetic). The second option is used here. Screening designs focus on main effects, but now a main effect has 2 degrees of freedom. In this section, we will assume that the three levels are quantitative, which means that we can associate the 2 main effect degrees of freedom with one linear term and one quadratic term. Moreover, we assume that the levels are equally spaced, so that the actual levels might just as well be the subscripts −1, 0, 1. Looking at the linear effect involves comparing the treatments with the −1 subscript to treatments with the 1 subscript. Often the linear effect is considered more important than the quadratic effect.
In a comparable two-level screening design, the linear effects are the main effects. We will consider designs that focus on the linear effect but also retain the ability to estimate the quadratic effect. In Chapter 3 we will discuss fractional replications of 3^f factorial structures. It is often possible to create designs that allow independent estimation of all main

effects that involve observing 3^2 = 9, or 3^3 = 27, or 3^4 = 81 factor combinations. The designs in the next subsection offer more flexibility in terms of numbers of observations but lose the independence of quadratic effects.

Definitive Screening Designs

As discussed in the previous section, a q × q conference matrix C has 0s on the diagonal, ±1s elsewhere, and C′C = (q − 1)I. As such, every row of a conference matrix can be identified with a factor combination. Jones and Nachtsheim (2011) introduced definitive screening designs (DSDs) that allow one to examine both the linear and quadratic main effects efficiently. The treatments are defined by the matrix

T = [ C ]
    [−C ]
    [ 0 ],

where 0 indicates a 1 × q row vector of 0s. Again, the rows of T consist of the subscripts for the factor combinations to be observed. The number of observations is obviously n = 2q + 1. A linear-main-effects-only model is Y = [J, T]β + e. A model with all main effects is Y = [J, T, T²]γ + e wherein, if T ≡ [t_ij]_{n×q}, then T² ≡ [t²_ij]. The designs involve relatively few treatment levels with the subscript 0, so the linear main effects are estimated with more precision than the quadratic main effects. The definitive screening design for q factors has n = 2q + 1 observations with n effects to estimate, i.e., an intercept, q linear main effects, and q quadratic main effects. So the main-effects model is a saturated model with 0 degrees of freedom for Error. As discussed earlier, conference matrices have to have an even numbered dimension, so the available sizes are n = 5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 49, ..., i.e., one more than a multiple of 4. You might expect n = 45 in this list but recall that there is no conference matrix (known) for q = 22. One way to avoid having a saturated main-effects model is to use a DSD based on a larger order conference matrix.
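A DSD is easy to assemble once a conference matrix is in hand. A Python sketch using a symmetric 6 × 6 conference matrix from the Paley construction (the specific matrix is an assumption; any 6 × 6 conference matrix would serve):

```python
import numpy as np

# A symmetric 6x6 conference matrix (Paley construction, quadratic
# residues mod 5): 0s on the diagonal, ±1s elsewhere.
C = np.array([
    [ 0,  1,  1,  1,  1,  1],
    [ 1,  0,  1, -1, -1,  1],
    [ 1,  1,  0,  1, -1, -1],
    [ 1, -1,  1,  0,  1, -1],
    [ 1, -1, -1,  1,  0,  1],
    [ 1,  1, -1, -1,  1,  0],
])
q = C.shape[0]
assert np.array_equal(C.T @ C, (q - 1) * np.eye(q, dtype=int))

# DSD runs: the rows of C, their negatives, and one center run of 0s.
T = np.vstack([C, -C, np.zeros((1, q), dtype=int)])
n = T.shape[0]                                     # n = 2q + 1 = 13

# Saturated main-effects model: intercept, linear, and quadratic terms.
X = np.column_stack([np.ones(n), T, T**2])
assert np.linalg.matrix_rank(X) == 1 + 2 * q       # all 13 effects estimable
```

With q = 6 factors the model is saturated; dropping factors (columns of C), as in the example below, frees degrees of freedom for error.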
For f = 5 factors, a DSD can be based on the 6-dimensional conference matrix given earlier, deleting its fourth column.

Stacking the reduced matrix over its negative and a row of 0s gives a 13 × 5 matrix T of treatment subscripts. This design will provide 2 degrees of freedom for error, or for looking at interactions.

Some Linear Model Theory

Let T denote the matrix of treatment subscripts −1, 0, 1 for a design with f factors at 3 levels. Write T in terms of its columns, rows, and elements as

T = [T_1, ..., T_f] = [t_1′ ; ... ; t_n′] = [t_ij].

Also define T² to be the matrix consisting of the squares of the elements in T, i.e., T² ≡ [t²_ij]. Note that, because all entries are −1, 0, or 1, any column of T², say T_k², has the property that J′T_k² = T_k′T_k, and this equals n minus the number of 0s in T_k. The main effects linear model associated with this design is

Y = [J, T, T²] (δ_0, δ_1′, δ_2′)′ + e,

where δ_0 is a scalar with δ_1 and δ_2 being f vectors. After centering T², we get the equivalent linear model

Y = [J, T, (I − (1/n)JJ′)T²] (β_0, β_1′, β_2′)′ + e.   (1)

To estimate all of the parameters we need the columns of the model matrix to be linearly independent. Linear independence requires $n \geq 2f + 1$ but can break down even if that condition is satisfied. In particular, to estimate all of the quadratic effects, we need $T$ to satisfy the following properties. (i) There must be at least one 0 in every column of $T$. (ii) For every ordered pair $(j,k)$, $j,k = 1,\ldots,f$, $j \neq k$, there exists an $i$ such that $t_{ij} = 0$ and $t_{ik} \neq 0$. If (i) fails, $T_k^2 = J$ for any $T_k$ that does not contain a 0. If (ii) fails for both $(j,k)$ and $(k,j)$, $T_j^2 = T_k^2$, so we cannot estimate both quadratic main effects.

Similar to Jones and Nachtsheim, let the $q \times f$ matrix $W$ define a set of $q$ treatments and then define $T$ as
$$
T = \begin{bmatrix} W \\ -W \\ 0' \end{bmatrix}
$$
so that $n = 2q + 1 \geq 2f + 1$, which is the number of mean parameters to be estimated. The row of 0s in $T$ ensures that property (i) is satisfied. These designs seem most appropriate as screening designs when $q$ is close to $f$.

The nice analysis properties of these designs follow from the fact that the submatrices of the model matrix in (1) are all orthogonal. In particular,
$$
\left[J, T, \left(I - \tfrac{1}{n}JJ'\right)T^2\right]' \left[J, T, \left(I - \tfrac{1}{n}JJ'\right)T^2\right]
= \begin{bmatrix} n & 0 & 0 \\ 0 & 2W'W & 0 \\ 0 & 0 & T^{2\prime}\left(I - \tfrac{1}{n}JJ'\right)T^2 \end{bmatrix}. \qquad (2)
$$
To have a truly orthogonal design, we need the matrix in (2) to be diagonal, not just block diagonal. Nonetheless, the block diagonal structure facilitates finding the matrix inverse, which is really more important to the analysis. Clearly, $J'J = n$. From the definition of $T$ in terms of $W$, clearly $J'T = 0_{1 \times f}$ and
$$
T'T = [W', -W', 0] \begin{bmatrix} W \\ -W \\ 0' \end{bmatrix} = 2W'W.
$$
It is also immediate that $J'(I - \frac{1}{n}JJ')T^2 = 0$. The more difficult task is to show that $T'(I - \frac{1}{n}JJ')T^2 = T'T^2 = 0$. In fact, we show that the linear main effects are orthogonal, not only to the square terms, but also to linear-by-linear interaction terms. Let $(T_k T_j)$ denote the vector that results from multiplying the vectors $T_k$ and $T_j$ elementwise. In particular, $(T_k T_k) = T_k^2$.
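The block-diagonal form of (2) can be verified numerically. Below, $W$ is taken as the first $f = 4$ columns of an order-6 conference matrix, an arbitrary illustrative choice (the text's matrices may differ).

```python
import numpy as np

# W: f = 4 columns of a q = 6 conference matrix (an arbitrary
# illustrative choice; the text's matrices may differ).
C = np.array([
    [ 0,  1,  1,  1,  1,  1],
    [ 1,  0,  1, -1, -1,  1],
    [ 1,  1,  0,  1, -1, -1],
    [ 1, -1,  1,  0,  1, -1],
    [ 1, -1, -1,  1,  0,  1],
    [ 1,  1, -1, -1,  1,  0],
])
q, f = 6, 4
W = C[:, :f]
T = np.vstack([W, -W, np.zeros((1, f), dtype=int)])
n = 2 * q + 1

J = np.ones((n, 1))
center = np.eye(n) - np.ones((n, n)) / n
T2c = center @ T**2

# Gram matrix of the model matrix in (1).
X = np.hstack([J, T, T2c])
G = X.T @ X
# Off-diagonal blocks vanish: J'T = 0, J'(centered T^2) = 0,
# and T'(centered T^2) = 0.
assert np.allclose(G[0, 1:], 0)
assert np.allclose(G[1:1 + f, 1 + f:], 0)
# Diagonal blocks: n, 2W'W, and the quadratic cross-product block.
assert np.isclose(G[0, 0], n)
assert np.allclose(G[1:1 + f, 1:1 + f], 2 * W.T @ W)
```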
To estimate the linear-by-linear interaction term between factors $k$ and $j$, one simply includes the vector $(T_k T_j)$ as a column of the model matrix. Orthogonality follows from the fact that

$$
T_s'(T_k T_j) = \sum_{i=1}^{n} t_{is}t_{ik}t_{ij}
= \sum_{i=1}^{q} t_{is}t_{ik}t_{ij} + \sum_{i=q+1}^{2q} t_{is}t_{ik}t_{ij}
= \sum_{i=1}^{q} t_{is}t_{ik}t_{ij} + \sum_{i=1}^{q} (-t_{is})(-t_{ik})(-t_{ij})
= \sum_{i=1}^{q} t_{is}t_{ik}t_{ij} - \sum_{i=1}^{q} t_{is}t_{ik}t_{ij} = 0,
$$
because rows $q+1, \ldots, 2q$ of $T$ are the negatives of rows $1, \ldots, q$ and the final row of 0s contributes nothing. A similar argument shows that $T_s'(T_k^2 T_j^2) = 0$, so the linear main effect terms are also orthogonal to the quadratic-by-quadratic interaction terms. Without additional conditions on $W$, the linear main effects do not appear to be orthogonal to linear-by-quadratic or quadratic-by-linear interactions.

Typically we place additional conditions on $W$. In particular, a $q \times q$ weighing matrix $\tilde{W}$ takes values $-1, 0, 1$ and has $\tilde{W}'\tilde{W} = wI$ for some $w$. Hadamard matrices and conference matrices are both weighing matrices. Choose $f$ columns from $\tilde{W}$ to define a $q \times f$ submatrix $W$. Any such submatrix has $W'W = wI_f$ which, when substituted into equation (2), shows the linear main effects to be orthogonal. We do not need to identify the original weighing matrix $\tilde{W}$, as long as we have the submatrix $W$.

If $\tilde{W}$ is a conference matrix, condition (ii) for estimability of quadratic effects holds. For conference matrices
$$
T^{2\prime}\left(I - \frac{1}{n}JJ'\right)T^2 = 2\left[I + \frac{q-4}{2q+1} J_{f \times f}\right].
$$
Unless $q = 4$ so that $n = 9$, the quadratic effects will not be orthogonal. This seems to be the only (useful) case in which a DSD is also a $3^f$ fractional replication as discussed in Chapter 3. Since $(1/f)J_{f \times f}$ is a ppo, $2[I + \frac{q-4}{2q+1} J_{f \times f}]$ is easily inverted using Christensen (2011, Proposition ...), i.e., for a ppo $P$,
$$
[aI + bP]^{-1} = \frac{1}{a}\left[I - \frac{b}{a+b}P\right].
$$
The actual numbers that result from doing this do not strike me as particularly interesting other than that the correlations are nonzero and identical among the estimates of the quadratic coefficients. Jones and Nachtsheim (2011) indicate that the common correlation is $f^{-1}$ but I have not been able to reproduce that result. Choices for $W$ other than conference matrices are less promising.
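Both the displayed conference-matrix formula and the ppo inversion rule can be checked numerically; the sketch below again uses an assumed order-6 conference matrix with $f = q = 6$.

```python
import numpy as np

# Full conference-matrix DSD with f = q = 6 (illustrative order-6
# conference matrix; not necessarily the one in the text).
C = np.array([
    [ 0,  1,  1,  1,  1,  1],
    [ 1,  0,  1, -1, -1,  1],
    [ 1,  1,  0,  1, -1, -1],
    [ 1, -1,  1,  0,  1, -1],
    [ 1, -1, -1,  1,  0,  1],
    [ 1,  1, -1, -1,  1,  0],
])
q = f = 6
n = 2 * q + 1
T = np.vstack([C, -C, np.zeros((1, q), dtype=int)])
center = np.eye(n) - np.ones((n, n)) / n
M = (T**2).T @ center @ (T**2)

# Displayed formula: M = 2[I + (q-4)/(2q+1) J_{f x f}].
Jff = np.ones((f, f))
assert np.allclose(M, 2 * (np.eye(f) + (q - 4) / (2 * q + 1) * Jff))

# Write M = aI + bP with ppo P = (1/f)J_{f x f}.
P = Jff / f
a = 2.0
b = 2 * f * (q - 4) / (2 * q + 1)
assert np.allclose(a * np.eye(f) + b * P, M)

# The inversion rule: [aI + bP]^{-1} = (1/a)[I - b/(a+b) P].
Minv = (1 / a) * (np.eye(f) - b / (a + b) * P)
assert np.allclose(Minv @ M, np.eye(f))
```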
If $W$ is a Hadamard matrix, (ii) does not hold, as is shown in the next subsection. In fact, with a Hadamard matrix we would get only 1 degree of freedom for estimating all $f$ of the quadratic effects. In the next subsection, we also construct a weighing matrix $W$, with two 0s in each row and column, that aliases pairs of quadratic effects.

Weighing out of conference

For general weighing matrices,
$$
T^{2\prime}\left(I - \frac{1}{n}JJ'\right)T^2 = T^{2\prime}T^2 - \frac{(2w)^2}{2q+1}J_{f \times f}.
$$
The result for conference matrices was given earlier. Hadamard matrices are weighing matrices but give
$$
T^2 = \begin{bmatrix} J_{2q \times f} \\ 0' \end{bmatrix}, \qquad
\left(I - \frac{1}{n}JJ'\right)T^2 = \begin{bmatrix} \frac{1}{n} J_{2q \times f} \\ -\frac{n-1}{n} J_{1 \times f} \end{bmatrix},
$$
both of which are rank 1 matrices, hence our earlier comment about only 1 degree of freedom for estimating quadratic effects, and
$$
T^{2\prime}\left(I - \frac{1}{n}JJ'\right)T^2 = \frac{n-1}{n} J_{f \times f}.
$$
Next we look at a weighing matrix for which $w = q - 2$ and $T^{2\prime}(I - \frac{1}{n}JJ')T^2$ is again singular and incapable of estimating all of the quadratic main effects. Consider the skew-symmetric weighing matrix $W_1 = C_2 \otimes H_0$, where $C_2$ is the skew-symmetric conference matrix given near the end of Section 2 and
$$
H_0 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}
$$
is a symmetric Hadamard matrix. The problem is, if we square the elements of $W_1$ to get $W_1^2 \equiv [w_{1ij}^2]$, unlike a conference matrix, consecutive pairs of columns become identical. Thus we would not be able to tell apart the quadratic terms for factors A and B, or for factors C and D, etc. In particular, using $W_1$ as $W$ in our design $T$ causes a violation of condition (ii) for estimating quadratic effects, with $T^2$ not having full column rank and $T^{2\prime}(I - \frac{1}{n}JJ')T^2$ being singular. We could solve this problem by using every other column of $W_1$ in $W$, but at that point we might just as well save ourselves some observations and just use $C_2$ to define the design.
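The aliasing can be seen in the smallest version of the construction, using the $2 \times 2$ skew-symmetric conference matrix (the text's $C_2$ may be larger, but the phenomenon is identical):

```python
import numpy as np

# Smallest version of the construction: a 2 x 2 skew-symmetric conference
# matrix Kroneckered with the symmetric Hadamard matrix H_0.
C2 = np.array([[0, 1],
               [-1, 0]])
H0 = np.array([[1, 1],
               [1, -1]])
W1 = np.kron(C2, H0)
q = W1.shape[0]

# W1 is a weighing matrix with two 0s per row and column, so w = q - 2.
w = q - 2
assert np.array_equal(W1.T @ W1, w * np.eye(q))

# Squaring the elements makes consecutive pairs of columns identical,
# which aliases the corresponding pairs of quadratic effects.
W1sq = W1**2
assert np.array_equal(W1sq[:, 0], W1sq[:, 1])
assert np.array_equal(W1sq[:, 2], W1sq[:, 3])

# Consequently T^2'(I - (1/n)JJ')T^2 is singular for T built from W1.
n = 2 * q + 1
T = np.vstack([W1, -W1, np.zeros((1, q), dtype=int)])
center = np.eye(n) - np.ones((n, n)) / n
M = (T**2).T @ center @ (T**2)
assert np.linalg.matrix_rank(M) < q
```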

The multiplier $w$ associated with a weighing matrix needs to be $q$ minus the common number of 0s in each row. (The $j$th diagonal element of $WW'$ is the number of nonzero elements in the $j$th row.) Mathematically, Hadamard matrices are weighing matrices with $w = q$ and conference matrices are weighing matrices with $w = q - 1$. Conference matrices $C$ are required to have 0s down the diagonal. Permuting the rows of $C$ gives a weighing matrix with one 0 in each row and column. For design purposes, a conference matrix and a weighing matrix with one 0 per row are equivalent. What is relevant to the design is the number of 0s in each row (or column) of $W$. As with Hadamard and conference matrices, there are serious mathematical questions about when weighing matrices exist, cf. Koukouvinos and Seberry (1997). In particular, it would be interesting to know if any weighing matrices exist with $w < q - 1$ that satisfy property (ii) and allow estimation of all quadratic effects. While perhaps not terribly practical, it would be of theoretical interest to examine the relationship between a possible DSD with $n = 81 = 2(40) + 1$ and the $n = 81 = 3^4$ fractional replication associated with Chapter 3.

1.4 Mixed Levels

Probably the most famous mixed-level design is Taguchi's $L_{18}$ for one factor at 2 levels and 3 factors at three levels. I have no idea how this was constructed. I've wondered if they just pulled 9 observations out of a $3^3$ or somehow doubled a $3^2$. There is a little discussion of mixed levels in Chapter 3.

Let $T_1$ define a design for factors at 3 levels and $T_2$ define a design for factors at 2 levels; then a mixed-level design is formed by crossing them, pairing every row of $T_1$ with every row of $T_2$. This is what Taguchi did in the next example.

Experiment on Variability

Byrne and Taguchi (1989) and Lucas (1994) considered an experiment on the force $y$, measured in pounds, needed to pull tubing from a connector. Large values of $y$ are good. The controllable factors in the experiment are as follows.

A: Interference (Low, Medium, High)
B: Wall Thickness (Thin, Medium, Thick)
C: Ins. Depth (Shallow, Medium, Deep)
D: Percent Adhesive (Low, Medium, High)

The data are given in Table 1.2. The design is a fractional factorial involving four factors each at three levels. Specifically, this is a 1/9th rep of a $3^4$ design. With $3^4 = 81$ factor combinations and only 9 observations, we must have a 1/9th rep. Fractional replications for $3^n$ factorials are discussed in detail in Chapter 3. For now, it suffices to note that the design allows one to estimate all of the main effects, but only the main effects; it assumes the absence of any interactions. This is characteristic of the designs typically suggested by Taguchi. He was only interested in main effects involving the control factors. The $9 \times 4$ treatment matrix $T_1$ lists the subscripts of the control-factor combinations. Taguchi had his favorite choices of $W$, but I am not aware that they have better properties than other choices that are available from Chapter 3.

There are also three factors at two levels that can only be controlled with great effort.

E: Condition Time (24hr, 120hr)
F: Condition Temp. (72°F, 150°F)
G: Condition R. H. (25%, 75%)
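The crossed-array construction described above can be sketched in code. The array `T1` below is a standard 9-run orthogonal array for four 3-level factors, used as a stand-in for the text's $T_1$ (the book may list the runs differently), and `T2` is the full $2^3$ factorial for the noise factors.

```python
import numpy as np

# Control array: a standard 9-run orthogonal array for four 3-level
# factors, a stand-in for the text's T_1. Subscripts are 0, 1, 2.
T1 = np.array([[0, 0, 0, 0],
               [0, 1, 1, 1],
               [0, 2, 2, 2],
               [1, 0, 1, 2],
               [1, 1, 2, 0],
               [1, 2, 0, 1],
               [2, 0, 2, 1],
               [2, 1, 0, 2],
               [2, 2, 1, 0]])

# Noise array: the full 2^3 factorial for factors E, F, G.
T2 = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)])

# Crossed design: every noise run paired with every control run,
# T = [J_8 kron T1, T2 kron J_9], giving 8 * 9 = 72 rows and 7 factors.
T = np.hstack([np.kron(np.ones((8, 1), dtype=int), T1),
               np.kron(T2, np.ones((9, 1), dtype=int))])
assert T.shape == (72, 7)

# Each control-factor column is balanced: every level appears 24 times.
for col in range(4):
    assert all(np.sum(T[:, col] == lev) == 24 for lev in (0, 1, 2))
```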

The data with all of the factors identified are presented in Table 1.2. One approach to analyzing these data is to ignore the information about the noise factors and treat the observations associated with the noise factors as replications. Instead, we make use of our knowledge of the levels of the noise factors.

Table 1.2 Taguchi design. Each run specifies the control factors $a, b, c, d$; the eight observation columns correspond to the noise-factor combinations $e_0f_0g_0$, $e_0f_0g_1$, $e_0f_1g_0$, $e_0f_1g_1$, $e_1f_0g_0$, $e_1f_0g_1$, $e_1f_1g_0$, $e_1f_1g_1$. (The numerical entries are not reproduced here.)

The $2^3$ design being used corresponds to three columns from an 8 dimensional Hadamard matrix, giving the $8 \times 3$ matrix $T_2$. This design $T_2$ is not a screening design; it contains all the factor combinations. The overall design is
$$
T = [J_8 \otimes T_1, \; T_2 \otimes J_9].
$$

1.5 Notes

I spent a long time unsuccessfully trying to figure out, on my own, why Plackett-Burman designs were reasonable. It was only after Chris Nachtsheim gave an excellent talk at UNM in 2017 on definitive screening designs that I was inspired to explore the basis of these designs and write this chapter. I found some of the course notes on Bill Cherowitzo's webpage useful (math.ucdenver.edu/~wcherowi), and a useful book is Stinson (2003).

math.ucdenver.edu/~wcherowi/courses/m6406/m6406f.html
math.ucdenver.edu/~wcherowi/courses/m6023/m6023f.html


CS264: Beyond Worst-Case Analysis Lecture #15: Topic Modeling and Nonnegative Matrix Factorization CS264: Beyond Worst-Case Analysis Lecture #15: Topic Modeling and Nonnegative Matrix Factorization Tim Roughgarden February 28, 2017 1 Preamble This lecture fulfills a promise made back in Lecture #1,

More information

2 Systems of Linear Equations

2 Systems of Linear Equations 2 Systems of Linear Equations A system of equations of the form or is called a system of linear equations. x + 2y = 7 2x y = 4 5p 6q + r = 4 2p + 3q 5r = 7 6p q + 4r = 2 Definition. An equation involving

More information

Quantum Mechanics-I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras. Lecture - 21 Square-Integrable Functions

Quantum Mechanics-I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras. Lecture - 21 Square-Integrable Functions Quantum Mechanics-I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras Lecture - 21 Square-Integrable Functions (Refer Slide Time: 00:06) (Refer Slide Time: 00:14) We

More information

= 5 2 and = 13 2 and = (1) = 10 2 and = 15 2 and = 25 2

= 5 2 and = 13 2 and = (1) = 10 2 and = 15 2 and = 25 2 BEGINNING ALGEBRAIC NUMBER THEORY Fermat s Last Theorem is one of the most famous problems in mathematics. Its origin can be traced back to the work of the Greek mathematician Diophantus (third century

More information

Appendix A: Matrices

Appendix A: Matrices Appendix A: Matrices A matrix is a rectangular array of numbers Such arrays have rows and columns The numbers of rows and columns are referred to as the dimensions of a matrix A matrix with, say, 5 rows

More information

Quantum Mechanics- I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras

Quantum Mechanics- I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras Quantum Mechanics- I Prof. Dr. S. Lakshmi Bala Department of Physics Indian Institute of Technology, Madras Lecture - 6 Postulates of Quantum Mechanics II (Refer Slide Time: 00:07) In my last lecture,

More information

DOE Wizard Screening Designs

DOE Wizard Screening Designs DOE Wizard Screening Designs Revised: 10/10/2017 Summary... 1 Example... 2 Design Creation... 3 Design Properties... 13 Saving the Design File... 16 Analyzing the Results... 17 Statistical Model... 18

More information

Strategy of Experimentation III

Strategy of Experimentation III LECTURE 3 Strategy of Experimentation III Comments: Homework 1. Design Resolution A design is of resolution R if no p factor effect is confounded with any other effect containing less than R p factors.

More information

Descriptive Statistics (And a little bit on rounding and significant digits)

Descriptive Statistics (And a little bit on rounding and significant digits) Descriptive Statistics (And a little bit on rounding and significant digits) Now that we know what our data look like, we d like to be able to describe it numerically. In other words, how can we represent

More information

Determinants: Uniqueness and more

Determinants: Uniqueness and more Math 5327 Spring 2018 Determinants: Uniqueness and more Uniqueness The main theorem we are after: Theorem 1 The determinant of and n n matrix A is the unique n-linear, alternating function from F n n to

More information

- a value calculated or derived from the data.

- a value calculated or derived from the data. Descriptive statistics: Note: I'm assuming you know some basics. If you don't, please read chapter 1 on your own. It's pretty easy material, and it gives you a good background as to why we need statistics.

More information

AS and A Level Physics Cambridge University Press Tackling the examination. Tackling the examination

AS and A Level Physics Cambridge University Press Tackling the examination. Tackling the examination Tackling the examination You have done all your revision and now you are in the examination room. This is your chance to show off your knowledge. Keep calm, take a few deep breaths, and try to remember

More information

Is economic freedom related to economic growth?

Is economic freedom related to economic growth? Is economic freedom related to economic growth? It is an article of faith among supporters of capitalism: economic freedom leads to economic growth. The publication Economic Freedom of the World: 2003

More information

Solutions to Exercises

Solutions to Exercises 1 c Atkinson et al 2007, Optimum Experimental Designs, with SAS Solutions to Exercises 1. and 2. Certainly, the solutions to these questions will be different for every reader. Examples of the techniques

More information

Introduction to Algebra: The First Week

Introduction to Algebra: The First Week Introduction to Algebra: The First Week Background: According to the thermostat on the wall, the temperature in the classroom right now is 72 degrees Fahrenheit. I want to write to my friend in Europe,

More information

Finite Mathematics : A Business Approach

Finite Mathematics : A Business Approach Finite Mathematics : A Business Approach Dr. Brian Travers and Prof. James Lampes Second Edition Cover Art by Stephanie Oxenford Additional Editing by John Gambino Contents What You Should Already Know

More information

Linear Independence Reading: Lay 1.7

Linear Independence Reading: Lay 1.7 Linear Independence Reading: Lay 17 September 11, 213 In this section, we discuss the concept of linear dependence and independence I am going to introduce the definitions and then work some examples and

More information

20g g g Analyze the residuals from this experiment and comment on the model adequacy.

20g g g Analyze the residuals from this experiment and comment on the model adequacy. 3.4. A computer ANOVA output is shown below. Fill in the blanks. You may give bounds on the P-value. One-way ANOVA Source DF SS MS F P Factor 3 36.15??? Error??? Total 19 196.04 3.11. A pharmaceutical

More information

At the start of the term, we saw the following formula for computing the sum of the first n integers:

At the start of the term, we saw the following formula for computing the sum of the first n integers: Chapter 11 Induction This chapter covers mathematical induction. 11.1 Introduction to induction At the start of the term, we saw the following formula for computing the sum of the first n integers: Claim

More information

2 k, 2 k r and 2 k-p Factorial Designs

2 k, 2 k r and 2 k-p Factorial Designs 2 k, 2 k r and 2 k-p Factorial Designs 1 Types of Experimental Designs! Full Factorial Design: " Uses all possible combinations of all levels of all factors. n=3*2*2=12 Too costly! 2 Types of Experimental

More information

Select/ Special Topics in Atomic Physics Prof. P. C. Deshmukh Department of Physics Indian Institute of Technology, Madras

Select/ Special Topics in Atomic Physics Prof. P. C. Deshmukh Department of Physics Indian Institute of Technology, Madras Select/ Special Topics in Atomic Physics Prof. P. C. Deshmukh Department of Physics Indian Institute of Technology, Madras Lecture No. # 06 Angular Momentum in Quantum Mechanics Greetings, we will begin

More information

Definitive Screening Designs with Added Two-Level Categorical Factors *

Definitive Screening Designs with Added Two-Level Categorical Factors * Definitive Screening Designs with Added Two-Level Categorical Factors * BRADLEY JONES SAS Institute, Cary, NC 27513 CHRISTOPHER J NACHTSHEIM Carlson School of Management, University of Minnesota, Minneapolis,

More information

Regression, Part I. - In correlation, it would be irrelevant if we changed the axes on our graph.

Regression, Part I. - In correlation, it would be irrelevant if we changed the axes on our graph. Regression, Part I I. Difference from correlation. II. Basic idea: A) Correlation describes the relationship between two variables, where neither is independent or a predictor. - In correlation, it would

More information

Factorial designs (Chapter 5 in the book)

Factorial designs (Chapter 5 in the book) Factorial designs (Chapter 5 in the book) Ex: We are interested in what affects ph in a liquide. ph is the response variable Choose the factors that affect amount of soda air flow... Choose the number

More information

Keppel, G. & Wickens, T. D. Design and Analysis Chapter 4: Analytical Comparisons Among Treatment Means

Keppel, G. & Wickens, T. D. Design and Analysis Chapter 4: Analytical Comparisons Among Treatment Means Keppel, G. & Wickens, T. D. Design and Analysis Chapter 4: Analytical Comparisons Among Treatment Means 4.1 The Need for Analytical Comparisons...the between-groups sum of squares averages the differences

More information

Review for Exam Find all a for which the following linear system has no solutions, one solution, and infinitely many solutions.

Review for Exam Find all a for which the following linear system has no solutions, one solution, and infinitely many solutions. Review for Exam. Find all a for which the following linear system has no solutions, one solution, and infinitely many solutions. x + y z = 2 x + 2y + z = 3 x + y + (a 2 5)z = a 2 The augmented matrix for

More information

1.1.1 Algebraic Operations

1.1.1 Algebraic Operations 1.1.1 Algebraic Operations We need to learn how our basic algebraic operations interact. When confronted with many operations, we follow the order of operations: Parentheses Exponentials Multiplication

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Roberto s Notes on Linear Algebra Chapter 0: Eigenvalues and diagonalization Section Eigenvalues and eigenvectors What you need to know already: Basic properties of linear transformations. Linear systems

More information

Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative

Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Commons Attribution-NonCommercial-ShareAlike.0 license. 201,

More information

(Refer Slide Time: 1:13)

(Refer Slide Time: 1:13) Linear Algebra By Professor K. C. Sivakumar Department of Mathematics Indian Institute of Technology, Madras Lecture 6 Elementary Matrices, Homogeneous Equaions and Non-homogeneous Equations See the next

More information