
30. Detecting Drifts Versus Shifts in the Process Mean

In studying the performance of control charts, most of our attention is directed towards describing what will happen on the chart following a sustained shift in the process parameter. This is done largely for convenience, because such performance studies must start somewhere, and because a sustained shift is certainly a likely scenario. However, a drifting process parameter is also a likely possibility. Aerne, Champ, and Rigdon (1991) have studied several control charting schemes when the process mean drifts according to a linear trend. Their study encompasses the Shewhart control chart, the Shewhart chart with supplementary runs rules, the EWMA control chart, and the Cusum. They design the charts so that the in-control ARL is 465. Some of the previous studies of control charts with drifting means did not do this, and different charts have different values of ARL₀, thereby making it difficult to draw conclusions about chart performance. See Aerne, Champ, and Rigdon (1991) for references and further details. They report that, in general, Cusum and EWMA charts perform better in detecting trends than does the Shewhart control chart. For small to moderate trends, both of these charts are significantly better than the Shewhart chart with and without runs rules. There is not much difference in performance between the Cusum and the EWMA.

31. Run Sum and Zone Control Charts

The run sum control chart was introduced by Roberts (1966), and has been studied further by Reynolds (1971) and Champ and Rigdon (1997). For a run sum chart for the sample mean, the procedure divides the possible values of $\bar{x}$ into regions on either side of the center line of the control chart. If $\mu_0$ is the center line and $\sigma_0$ is the process standard deviation, then the regions above the center line, say, are defined as

$$ \left[\,\mu_0 + A_i\,\sigma_0/\sqrt{n},\; \mu_0 + A_{i+1}\,\sigma_0/\sqrt{n}\,\right), \qquad i = 0, 1, 2, \ldots, a, $$

for $0 = A_0 < A_1 < A_2 < \cdots < A_a < A_{a+1} = \infty$, where the constants $A_i$ are determined by the user.
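As a concrete illustration, the index of the region containing a subgroup average (including the mirrored regions below the center line, reported with a negative sign) can be computed as follows. This is only a sketch: the function name and the zone-chart-style boundary constants in the example are illustrative assumptions, not from the text.

```python
import bisect

def region_index(xbar, mu0, sigma0, n, A):
    """Signed index of the run-sum region containing a subgroup average.

    A holds the user-chosen constants 0 = A_0 < A_1 < ... < A_a; region i
    above the center line is [mu0 + A_i*sigma0/sqrt(n),
    mu0 + A_{i+1}*sigma0/sqrt(n)), with A_{a+1} taken as infinity.
    Regions below the center line mirror these and get a negative sign.
    """
    z = (xbar - mu0) / (sigma0 / n ** 0.5)   # standardized distance from CL
    i = bisect.bisect_right(A, abs(z)) - 1   # largest i with A_i <= |z|
    return i if z >= 0 else -i

# Zone-chart style boundaries at one, two, and three sigma
A = [0.0, 1.0, 2.0, 3.0]
print(region_index(10.5, mu0=10.0, sigma0=1.0, n=4, A=A))   # prints 1
```

Accumulating the scores attached to these region indices, and resetting when the average crosses the center line, gives the run sum procedure described next.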
A similar set of regions is defined below the center line. A score is assigned to each region, say $s_i$ for the ith region above the center line and $s_{-i}$ for the ith region below the center line. The score $s_i$ is nonnegative, while the score $s_{-i}$ is nonpositive. The run sum chart operates by observing the region in which each subgroup average falls and accumulating the scores for those regions. The cumulative score begins at zero. The charting procedure continues until either the cumulative score reaches or exceeds a positive upper limit or falls at or below a negative lower limit, in which case an out-of-control signal is generated, or until the subgroup average falls on the other side of the center line, in which case the scoring starts over, with the cumulative score set according to the current value of $\bar{x}$. Jaehn (1987) discusses a special case of the run sum control chart, usually called the zone control chart. In the zone control chart, there are only three regions on either side of the center line, corresponding to one-, two-, and three-sigma intervals (as in the Western Electric rules), and the zone scores are often taken as 1, 2, 4, and 8 (this is the value

assigned to a point outside the three-sigma limits, and it is also the total score that triggers an alarm). Davis, Homer, and Woodall (1990) studied the performance of the zone control chart and recommended the zone scores 0, 2, 4, and 8 (or, equivalently, 0, 1, 2, and 4). Champ and Rigdon (1997) use a Markov chain approach to study the average run length properties of several versions of the run sum control chart. They observe that the run sum control chart can be designed so that it has the same in-control ARL as a Shewhart chart with supplementary runs rules and better ARL performance than the Shewhart chart with runs rules in detecting small or moderate sized shifts. Their results are consistent with those of Davis, Homer, and Woodall (1990). Jin and Davis (1991) give a FORTRAN computer program for finding the ARLs of the zone control chart. Champ and Rigdon (1997) also compare the zone control chart to the Cusum and EWMA control charts. They observe that by using a sufficient number of regions, the zone control chart can be made competitive with the Cusum and EWMA, so it could be a viable alternative to these charts.

32. More About Adaptive Control Charts

Section 9-5 of the text discusses adaptive control charts; that is, control charts on which either the sample size or the sampling interval, or both, are changed periodically depending on the current value of the sample statistic. Some authors refer to these schemes as variable sample size (VSS) or variable sampling interval (VSI) control charts. A procedure that changes both parameters would be called a VSS/SI control chart. The successful application of these types of charts requires some flexibility on the part of the organization using them, in that occasionally larger than usual samples will be taken, or a sample will be taken sooner than routinely scheduled. However, the adaptive schemes offer real advantages in improving control chart performance.
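A minimal sketch of the VSI idea, choosing the next sampling interval from the current value of the standardized statistic, follows. The warning-limit parameterization and all names here are illustrative assumptions, not the textbook's notation.

```python
def next_interval(xbar, mu0, sigma_xbar, w, L, t_long, t_short):
    """Two-state VSI rule (sketch): sample again after the long interval
    when the standardized statistic lies in the inner zone (|z| <= w),
    after the short interval in the outer zone (w < |z| <= L), and
    signal when |z| > L.  w is a warning limit and L the control limit,
    both in units of the standard deviation of xbar.
    """
    z = abs(xbar - mu0) / sigma_xbar
    if z > L:
        return None               # out-of-control signal
    return t_long if z <= w else t_short

# Inner zone within one sigma of the center line, 3-sigma control limits
print(next_interval(10.4, 10.0, 0.5, w=1.0, L=3.0, t_long=2.0, t_short=0.25))
```

Here a point comfortably near the center line earns the long interval; a point in the outer zone triggers the short one.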
The textbook illustrates a two-state or two-zone system; that is, the control chart has an inner zone in which the smaller sample size (or longer time between samples) is used, and an outer zone in which the larger sample size (or shorter time between samples) is used. The book presents an example involving an $\bar{x}$ chart demonstrating that an improvement of at least 50% in ATS performance is possible if the sampling interval can be adapted using a two-state system. An obvious question concerns the number of states: are two states optimal, or can even better results be obtained by designing a system with more than two states? Several authors have examined this question. Runger and Montgomery (1993) have shown that for the VSI control chart two states are optimal if one is considering the initial-state or zero-state performance of the control chart (that is, the process is out of control when the control chart is started up). However, if one considers steady-state performance (the process shifts after the control chart has been in operation for a long time), then a VSI control chart with more than two states will be optimal. These authors show that a well-designed two-state VSI control chart will perform nearly as well as the optimal chart, so that in practical use, there is little to be gained in operational performance by using more than two states. Zimmer, Montgomery, and Runger (1998)

consider the VSS control chart and show that two states are not optimal, although the performance improvements when using more than two states are modest, and mostly occur when the interest is in detecting small process shifts. Zimmer, Montgomery, and Runger (2000) summarize the performance of numerous adaptive control chart schemes, and offer some practical guidelines for their use. They observe that, in general, performance improves more quickly from adapting the sample size than from adapting the sampling interval. Tagaras (1998) also gives a nice literature review of the major work in the field up through about 1998. Baxley (1995) gives an interesting account of using VSI control charts in nylon manufacturing. Park and Reynolds (1994) have presented an economic model of the VSS control chart, and Prabhu, Montgomery, and Runger (1997) have investigated economic-statistical design of VSS/SI control charting schemes.

33. Multivariate Cusum Control Charts

In Chapter 10 the multivariate EWMA (or MEWMA) control chart is presented as a relatively straightforward extension of the univariate EWMA. It was noted that several authors have developed multivariate extensions of the Cusum. Crosier (1988) proposed two multivariate Cusum procedures. The one with the best ARL performance is based on the statistic

$$ C_i = \left[ (\mathbf{S}_{i-1} + \mathbf{X}_i)'\,\Sigma^{-1}\,(\mathbf{S}_{i-1} + \mathbf{X}_i) \right]^{1/2} $$

where

$$ \mathbf{S}_i = \begin{cases} \mathbf{0}, & \text{if } C_i \le k \\ (\mathbf{S}_{i-1} + \mathbf{X}_i)\left(1 - k/C_i\right), & \text{if } C_i > k \end{cases} $$

with $\mathbf{S}_0 = \mathbf{0}$ and $k > 0$. An out-of-control signal is generated when

$$ Y_i = \left( \mathbf{S}_i'\,\Sigma^{-1}\,\mathbf{S}_i \right)^{1/2} > H $$

where $k$ and $H$ are the reference value and decision interval for the procedure, respectively. Two different forms of the multivariate Cusum were proposed by Pignatiello and Runger (1990). Their best-performing control chart is based on the following vectors of cumulative sums:

$$ \mathbf{D}_i = \sum_{j=i-l_i+1}^{i} \mathbf{X}_j \qquad\text{and}\qquad MC_i = \max\left\{ 0,\; \left( \mathbf{D}_i'\,\Sigma^{-1}\,\mathbf{D}_i \right)^{1/2} - k\,l_i \right\} $$

where $k > 0$, $l_i = l_{i-1} + 1$ if $MC_{i-1} > 0$, and $l_i = 1$ otherwise. An out-of-control signal is generated if $MC_i > H$.
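Crosier's recursion is straightforward to code directly. The sketch below assumes the observation vectors are expressed as deviations from the in-control mean (handled here by subtracting mu0); the function name and arguments are illustrative.

```python
import numpy as np

def crosier_mcusum(X, mu0, Sigma_inv, k, H):
    """Crosier's multivariate Cusum, following the recursion above.

    X: (m, p) array of observation vectors; mu0: in-control mean vector;
    Sigma_inv: inverse of the covariance matrix; k > 0: reference value;
    H: decision interval.  Returns the index of the first observation
    for which Y_i > H, or None if the chart never signals.
    """
    S = np.zeros(len(mu0))
    for i, x in enumerate(X):
        v = S + (x - mu0)                     # S_{i-1} + X_i
        C = np.sqrt(v @ Sigma_inv @ v)
        S = np.zeros_like(S) if C <= k else v * (1.0 - k / C)
        Y = np.sqrt(S @ Sigma_inv @ S)
        if Y > H:
            return i
    return None

# A persistent one-sigma shift in both variables signals quickly
X = np.ones((10, 2))
print(crosier_mcusum(X, mu0=np.zeros(2), Sigma_inv=np.eye(2), k=0.5, H=2.0))
```

With these illustrative settings the chart signals on the third observation (index 2), since the shrunken cumulative vector grows steadily under a sustained shift.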

Both of these multivariate Cusums have better ARL performance than the Hotelling T² or the chi-square control chart. However, the MEWMA has very similar ARL performance to both of these multivariate Cusums and is much easier to implement in practice, so it should be preferred.

34. Guidelines for Planning Experiments

Coleman and Montgomery (1993) present a discussion of methodology and some guide sheets useful in the pre-experimental planning phases of designing and conducting an industrial experiment. The guide sheets are particularly appropriate for complex, high-payoff or high-consequence experiments involving (possibly) many factors or other issues that need careful consideration and (possibly) many responses. They are most likely to be useful in the earliest stages of experimentation with a process or system. Coleman and Montgomery suggest that the guide sheets work most effectively when they are filled out by a team of experimenters, including engineers and scientists with specialized process knowledge, operators and technicians, managers, and (if available) individuals with specialized training and experience in designing experiments. The sheets are intended to encourage discussion and resolution of technical and logistical issues before the experiment is actually conducted. Coleman and Montgomery give an example involving the manufacture of impellers, used in a jet turbine engine, on a CNC machine. To achieve the desired performance objectives, it is necessary to produce parts with blade profiles that closely match the engineering specifications. The objective of the experiment was to study the effect of different tool vendors and machine set-up parameters on the dimensional variability of the parts produced by the CNC machines. The master guide sheet is shown in Table 34-1 below. It contains information useful in filling out the individual sheets for a particular experiment. Writing the objective of the experiment is usually harder than it appears.
Objectives should be unbiased, specific, measurable, and of practical consequence. To be unbiased, the experimenters must encourage participation by knowledgeable and interested people with diverse perspectives. It is all too easy to design a very narrow experiment to prove a pet theory. To be specific and measurable, the objectives should be detailed enough and stated so that it is clear when they have been met. To be of practical consequence, there should be something that will be done differently as a result of the experiment, such as a new set of operating conditions for the process, a new material source, or perhaps a new experiment to be conducted. All interested parties should agree that the proper objectives have been set. The relevant background should contain information from previous experiments, if any; observational data that may have been collected routinely by process operating personnel; field quality or reliability data; knowledge based on physical laws or theories; and expert opinion. This information helps quantify what new knowledge could be gained by the present experiment and motivates discussion by all team members. Table 34-2 shows the beginning of the guide sheet for the CNC-machining experiment.

Response variables come to mind easily for most experimenters. When there is a choice, one should select continuous responses, because generally binary and ordinal data carry much less information, and continuous responses measured on a well-defined numerical scale are typically easier to analyze. On the other hand, there are many situations where a count of defectives, a proportion, or even a subjective ranking must be used as a response. Measurement precision is an important aspect of selecting the response variables in an experiment. Ensuring that the measurement process is in a state of statistical control is highly desirable. That is, ideally there is a well-established system for ensuring both accuracy and precision of the measurement methods to be used. The amount of error in measurement imparted by the gauges used should be understood. If the gauge error is large relative to the change in the response variable that is important to detect, then the experimenter will want to know this before conducting the experiment. Sometimes repeat measurements can be made on each experimental unit or test specimen to reduce the impact of measurement error. For example, when measuring the number average molecular weight of a polymer with a gel permeation chromatograph (GPC), each sample can be tested several times and the average of those molecular weight readings reported as the observation for that sample. When measurement precision is unacceptable, a measurement systems capability study may be performed to attempt to improve the system. These studies are often fairly complicated designed experiments. Chapter 7 presents an example of a factorial experiment used to study the capability of a measurement system. The impeller involved in this experiment is shown in Figure 34-1. Table 34-3 lists the information about the response variables. Notice that there are three response variables of interest here.
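The variance-reduction effect of averaging repeat readings, as in the GPC example above, is easy to demonstrate by simulation. The variance components chosen below (part-to-part variance 1.0, gauge variance 0.64) are illustrative assumptions, not values from the text.

```python
import random
import statistics

random.seed(1)
SIGMA_PART, SIGMA_GAUGE = 1.0, 0.8   # illustrative standard deviations

def observed_value(m):
    """One reported observation: the average of m gauge readings on a part."""
    true = random.gauss(0.0, SIGMA_PART)
    readings = [true + random.gauss(0.0, SIGMA_GAUGE) for _ in range(m)]
    return statistics.mean(readings)

# Observed variance is sigma_part^2 + sigma_gauge^2 / m, so averaging
# m = 4 readings cuts the gauge contribution from 0.64 to 0.16.
v1 = statistics.variance([observed_value(1) for _ in range(20000)])
v4 = statistics.variance([observed_value(4) for _ in range(20000)])
print(round(v1, 2), round(v4, 2))   # close to 1.64 and 1.16
```

Averaging does not remove gauge bias, of course; it only shrinks the random component of measurement error.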
As with response variables, most experimenters can easily generate a list of candidate design factors to be studied in the experiment. Coleman and Montgomery call these control variables. We often call them controllable variables, design factors, or process variables in the text. Control variables can be continuous or categorical (discrete). The ability of the experimenters to measure and set these factors is important. Generally, small errors in the ability to set, hold, or measure the levels of control variables are of relatively little consequence. Sometimes when the measurement or setting error is large, a numerical control variable such as temperature will have to be treated as a categorical control variable (low or high temperature). Alternatively, there are errors-in-variables statistical models that can be employed, although their use is beyond the scope of this book. Information about the control variables for the CNC-machining example is shown in Table 34-4. Held-constant factors are control variables whose effects are not of interest in this experiment. The worksheets can force meaningful discussion about which factors are adequately controlled, and whether any potentially important factors (for purposes of the present experiment) have inadvertently been held constant when they should have been included as control variables. Sometimes subject-matter experts will elect to hold too many factors constant and as a result fail to identify useful new information. Often this information is in the form of interactions among process variables.

Table 34-1. Master Guide Sheet. This guide can be used to help plan and design an experiment. It serves as a checklist to improve experimentation and ensures that results are not corrupted for lack of careful planning. Note that it may not be possible to answer all questions completely. If convenient, use supplementary sheets for topics.

1. Experimenter's Name and Organization:
   Brief Title of Experiment:
2. Objectives of the experiment (should be unbiased, specific, measurable, and of practical consequence):
3. Relevant background on response and control variables: (a) theoretical relationships; (b) expert knowledge/experience; (c) previous experiments. Where does this experiment fit into the study of the process or system?:
4. List: (a) each response variable, (b) the normal response variable level at which the process runs, the distribution or range of normal operation, (c) the precision or range to which it can be measured (and how):
5. List: (a) each control variable, (b) the normal control variable level at which the process is run, and the distribution or range of normal operation, (c) the precision (σ) or range to which it can be set (for the experiment, not ordinary plant operations) and the precision to which it can be measured, (d) the proposed control variable settings, and (e) the predicted effect (at least qualitative) that the settings will have on each response variable:
6. List: (a) each factor to be "held constant" in the experiment, (b) its desired level and allowable σ or range of variation, (c) the precision or range to which it can be measured (and how), (d) how it can be controlled, and (e) its expected impact, if any, on each of the responses:
7. List: (a) each nuisance factor (perhaps time-varying), (b) measurement precision, (c) strategy (e.g., blocking, randomization, or selection), and (d) anticipated effect:
8. List and label known or suspected interactions:
9.
List restrictions on the experiment, e.g., ease of changing control variables, methods of data acquisition, materials, duration, number of runs, type of experimental unit (need for a split-plot design), illegal or irrelevant experimental regions, limits to randomization, run order, cost of changing a control variable setting, etc.:
10. Give current design preferences, if any, and reasons for preference, including blocking and randomization:
11. If possible, propose analysis and presentation techniques, e.g., plots, ANOVA, regression, t tests, etc.:
12. Who will be responsible for the coordination of the experiment?
13. Should trial runs be conducted? Why / why not?

Table 34-2. Beginning of Guide Sheet for CNC-Machining Study.

1. Experimenter's Name and Organization: John Smith, Process Eng. Group
   Brief Title of Experiment: CNC Machining Study
2. Objectives of the experiment (should be unbiased, specific, measurable, and of practical consequence): For machined titanium forgings, quantify the effects of tool vendor; shifts in a-axis, x-axis, y-axis, and z-axis; spindle speed; fixture height; feed rate; and spindle position on the average and variability in blade profile for class X impellers, such as shown in Figure 34-1.
3. Relevant background on response and control variables: (a) theoretical relationships; (b) expert knowledge/experience; (c) previous experiments. Where does this experiment fit into the study of the process or system? (a) Because of tool geometry, x-axis shifts would be expected to produce thinner blades, an undesirable characteristic of the airfoil. (b) This family of parts has been produced for over 10 years; historical experience indicates that externally reground tools do not perform as well as those from the internal vendor (our own regrind operation). (c) Smith (1987) observed in an internal process engineering study that current spindle speeds and feed rates work well in producing parts that are at the nominal profile required by the engineering drawings, but no study was done of the sensitivity to variations in set-up parameters. Results of this experiment will be used to determine machine set-up parameters for impeller machining. A robust process is desirable; that is, on-target and low-variability performance regardless of which tool vendor is used.

Figure 34-1. Jet engine impeller (side view). The z-axis is vertical, x-axis is horizontal, y-axis is into the page. 1 = height of wheel, 2 = diameter of wheel, 3 = inducer blade height, 4 = exducer blade height, 5 = z height of blade.

Table 34-3. Response Variables

| Response variable (units) | Normal operating level and range | Measurement precision, accuracy; how known? | Relationship of response variable to objective |
|---|---|---|---|
| Blade profile (inches) | Nominal (target) ±1 × 10⁻³ inches to ±2 × 10⁻³ inches at all points | σ = 1 × 10⁻⁵ inches, from a coordinate measurement machine capability study | Estimate mean absolute difference from target and standard deviation |
| Surface finish | Smooth to rough (requiring hand finish) | Visual criterion (compare to standards) | Should be as smooth as possible |
| Surface defect count | Typically 0 to 10 | Visual criterion (compare to standards) | Must not be excessive in number or magnitude |

Table 34-4. Control Variables

| Control variable (units) | Normal level and range | Measurement precision and setting error; how known? | Proposed settings, based on predicted effects | Predicted effects (for various responses) |
|---|---|---|---|---|
| x-axis shift* (inches) |  | .001 inches (experience) | 0, .015 inches | Difference |
| y-axis shift* (inches) |  | .001 inches (experience) | 0, .015 inches | Difference |
| z-axis shift* (inches) |  | .001 inches (experience) | ? | Difference |
| Tool vendor | Internal, external | - | Internal, external | External is more variable |
| a-axis shift* (degrees) |  | .001 degrees (guess) | 0, .030 degrees | Unknown |
| Spindle speed (% of nominal) |  | 1% (indicator on control panel) | 90%, 110% | None? |
| Fixture height (inches) |  | .002 inches (guess) | 0, .015 inches | Unknown |
| Feed rate (% of nominal) |  | 1% (indicator on control panel) | 90%, 110% | None? |

*The x, y, and z axes are used to refer to the part and the CNC machine. The a axis refers only to the machine.

In the CNC experiment, this worksheet helped the experimenters recognize that the machine had to be fully warmed up before cutting any blade forgings. The actual procedure used was to mount the forged blanks on the machine and run a 30-minute cycle without the cutting tool engaged. This allowed all machine parts and the lubricant to

reach normal, steady-state operating temperature. The use of a typical (i.e., mid-level) operator and the use of one lot of forgings were decisions made for experimental insurance. Table 34-5 shows the held-constant factors for the CNC-machining experiment.

Table 34-5. Held-Constant Factors

| Factor (units) | Desired experimental level and allowable range | Measurement precision; how known? | How to control (in experiment) | Anticipated effects |
|---|---|---|---|---|
| Type of cutting fluid | Standard type | Not sure, but thought to be adequate | Use one type | None |
| Temperature of cutting fluid (°F) | 100 °F when machine is warmed up | 1-2 °F (estimate) | Do runs after machine has reached 100 °F | None |
| Operator | Several operators normally work in the process | - | Use one "mid-level" operator | None |
| Titanium forgings | Material properties may vary from unit to unit | Precision of lab tests unknown | Use one lot (or block on forging lot, only if necessary) | Slight |

Nuisance factors are variables that probably have some effect on the response, but which are of little or no interest to the experimenter. They differ from held-constant factors in that they either cannot be held entirely constant, or they cannot be controlled at all. For example, if two lots of forgings were required to run the experiment, then the potential lot-to-lot differences in the material would be a nuisance variable that could not be held entirely constant. In a chemical process we often cannot control the viscosity (say) of the incoming material feed stream; it may vary almost continuously over time. In these cases, nuisance variables must be considered in either the design or the analysis of the experiment. If a nuisance variable can be controlled, then we can use a design technique called blocking to eliminate its effect. If the nuisance variable cannot be controlled but it can be measured, then we can reduce its effect by an analysis technique called the analysis of covariance. Montgomery (1997) gives an introduction to the analysis of covariance.
Table 34-6 shows the nuisance variables identified in the CNC-machining experiment. In this experiment, the only nuisance factor thought to have potentially serious effects was the machine spindle. The machine has four spindles, and ultimately a decision was made to run the experiment in four blocks. The other factors were held constant at levels below which problems might be encountered.

Table 34-6. Nuisance Factors

| Nuisance factor (units) | Measurement precision; how known? | Strategy (e.g., randomization, blocking, etc.) | Anticipated effects |
|---|---|---|---|
| Viscosity of cutting fluid | Standard viscosity | Measure viscosity at start and end | None to slight |
| Ambient temperature (°F) | 1-2 °F by room thermometer (estimate) | Make runs below 80 °F | Slight, unless very hot weather |
| Spindle | ? | Block or randomize on machine spindle | Spindle-to-spindle variation could be large |
| Vibration of machine during operation | ? | Do not move heavy objects in CNC machine shop | Severe vibration can introduce variation within an impeller |

Coleman and Montgomery also found it useful to introduce an interaction sheet. The concept of interactions among process variables is not an intuitive one, even to well-trained engineers and scientists. Now it is clearly unrealistic to think that the experimenters can identify all of the important interactions at the outset of the planning process. In most situations, the experimenters really don't know which main effects are likely to be important, so asking them to make decisions about interactions is impractical. However, sometimes the statistically-trained team members can use this as an opportunity to teach others about the interaction phenomenon. When more is known about the process, it might be possible to use the worksheet to motivate questions such as "are there certain interactions that must be estimated?" Table 34-7 shows the results of this exercise for the CNC-machining example.

Table 34-7. Interactions

| Control variable | y shift | z shift | Vendor | a shift | Speed | Height | Feed |
|---|---|---|---|---|---|---|---|
| x shift | P |  |  |  |  |  |  |
| y shift |  | - | P |  |  |  |  |
| z shift |  |  | - | - | P |  |  |
| Vendor |  |  |  | P |  |  |  |
| a shift |  |  |  |  |  |  |  |
| Speed |  |  |  |  |  |  | F,D |
| Height |  |  |  |  |  |  |  |

NOTE: Response variables are P = profile difference, F = surface finish, and D = surface defects.

Two final points: First, an experimenter without a coordinator will probably fail.
Furthermore, if something can go wrong, it probably will, so the coordinator will actually have a significant responsibility for checking to ensure that the experiment is being conducted as planned. Second, concerning trial runs, this is often a very good idea, particularly if this is the first in a series of experiments, or if the experiment has high

significance or impact. A trial run can consist of a center point in a factorial or a small part of the experiment, perhaps one of the blocks. Since many experiments often involve people and machines doing something they have not done before, practice is a good idea. Another reason for trial runs is that we can use them to get an estimate of the magnitude of experimental error. If the experimental error is much larger than anticipated, then this may indicate the need for redesigning a significant part of the experiment. Trial runs are also a good opportunity to ensure that measurement and data-acquisition or collection systems are operating as anticipated. Most experimenters never regret performing trial runs.

Blank Guide Sheets from Coleman and Montgomery (1993)

Response Variables

| response variable (units) | normal operating level & range | meas. precision, accuracy; how known? | relationship of response variable to objective |
|---|---|---|---|
|  |  |  |  |

Control Variables

| control variable (units) | normal level & range | meas. precision & setting error; how known? | proposed settings, based on predicted effects | predicted effects (for various responses) |
|---|---|---|---|---|
|  |  |  |  |  |

Held-Constant Factors

| factor (units) | desired experimental level & allowable range | measurement precision; how known? | how to control (in experiment) | anticipated effects |
|---|---|---|---|---|
|  |  |  |  |  |

Nuisance Factors

| nuisance factor (units) | measurement precision; how known? | strategy (e.g., randomization, blocking, etc.) | anticipated effects |
|---|---|---|---|
|  |  |  |  |

Interactions: a blank triangular grid of control variables, as in Table 34-7.

Other Graphical Aids for Planning Experiments

In addition to the tables in Coleman and Montgomery's Technometrics paper, there are a number of useful graphical aids to pre-experimental planning. Perhaps the first person to suggest graphical methods for planning an experiment was Andrews (1964), who proposed a schematic diagram of the system much like Figure 1-1 in the textbook, with

inputs, experimental variables, and responses all clearly labeled. These diagrams can be very helpful in focusing attention on the broad aspects of the problem. Barton (1997, 1998, 1999) has discussed a number of useful graphical aids in planning experiments. He suggests using IDEF0 diagrams to identify and classify variables. IDEF0 stands for Integrated Computer-Aided Manufacturing Identification Language, Level 0. The U.S. Air Force developed it to represent the subroutines and functions of complex computer software systems. The IDEF0 diagram is a block diagram that resembles Figure 1-1 in the textbook. IDEF0 diagrams are hierarchical; that is, the process or system can be decomposed into a series of process steps or systems and represented as a sequence of lower-level boxes drawn within the main block diagram. Barton also suggests that cause-and-effect diagrams can be useful in identifying and classifying variables in an experimental design problem. These diagrams are very useful in organizing and conducting brainstorming or other problem-solving meetings in which process variables and their potential role in the experiment are discussed and decided. Both of these techniques can be very helpful in uncovering intermediate variables. These are variables that are often confused with the directly adjustable process variables. For example, the burning rate of a rocket propellant may be affected by the presence of voids in the propellant material. However, the voids are the result of mixing techniques, curing temperature, and other process variables, and so the experimenter cannot directly control the voids themselves. Some other useful papers on planning experiments include Bishop, Petersen, and Trayser (1982), Hahn (1977, 1984), and Hunter (1977).

35.
More About Expected Mean Squares in the Analysis of Variance

The Two-Factor Fixed-Effects Model

In Section 12-4 we describe the two-factor factorial experiment and present the analysis of variance for the fixed-effects case. On page 585 we observe that dividing the main effect and interaction mean squares by the mean square for error forms the proper test statistics. Examining the expected mean squares can verify this. Consider the two-factor fixed-effects model

$$ y_{ijk} = \mu + \tau_i + \beta_j + (\tau\beta)_{ij} + \varepsilon_{ijk}, \qquad i = 1, 2, \ldots, a;\; j = 1, 2, \ldots, b;\; k = 1, 2, \ldots, n, $$

given as Equation (12-2) in the textbook. It is relatively easy to develop the expected mean squares from direct application of the expectation operator. For an illustration, consider finding the expected value for one of the main effect mean squares, say

$$ E(MS_A) = E\!\left( \frac{SS_A}{a-1} \right) = \frac{1}{a-1}\, E(SS_A), $$

where $SS_A$ is the sum of squares for the row factor. Since

$$ SS_A = \frac{1}{bn} \sum_{i=1}^{a} y_{i..}^2 - \frac{y_{...}^2}{abn}, \qquad E(SS_A) = \frac{1}{bn} \sum_{i=1}^{a} E(y_{i..}^2) - \frac{1}{abn}\, E(y_{...}^2). $$

Recall that $\tau_\cdot = 0$, $\beta_\cdot = 0$, $(\tau\beta)_{\cdot j} = 0$, $(\tau\beta)_{i\cdot} = 0$, and $(\tau\beta)_{\cdot\cdot} = 0$, where the dot subscript implies summation over that subscript. Now

$$ y_{i..} = \sum_{j=1}^{b} \sum_{k=1}^{n} y_{ijk} = bn\mu + bn\tau_i + n\beta_\cdot + n(\tau\beta)_{i\cdot} + \varepsilon_{i..} = bn\mu + bn\tau_i + \varepsilon_{i..} $$

and

$$ \frac{1}{bn} \sum_{i=1}^{a} E(y_{i..}^2) = \frac{1}{bn}\left[ a(bn)^2\mu^2 + (bn)^2 \sum_{i=1}^{a} \tau_i^2 + abn\sigma^2 \right] = abn\mu^2 + bn\sum_{i=1}^{a}\tau_i^2 + a\sigma^2, $$

since the cross-product terms have expectation zero. Furthermore, we can easily show that

$$ y_{...} = abn\mu + \varepsilon_{...} $$

so

$$ \frac{1}{abn}\, E(y_{...}^2) = \frac{1}{abn}\, E\!\left[ (abn\mu + \varepsilon_{...})^2 \right] = \frac{1}{abn}\left[ (abn)^2\mu^2 + abn\sigma^2 \right] = abn\mu^2 + \sigma^2. $$

Therefore

$$ E(MS_A) = \frac{1}{a-1}\, E(SS_A) = \frac{1}{a-1}\left[ abn\mu^2 + bn\sum_{i=1}^{a}\tau_i^2 + a\sigma^2 - abn\mu^2 - \sigma^2 \right] = \frac{1}{a-1}\left[ (a-1)\sigma^2 + bn\sum_{i=1}^{a}\tau_i^2 \right] = \sigma^2 + \frac{bn\sum_{i=1}^{a}\tau_i^2}{a-1}. $$

The other expected mean squares are derived similarly. In general, for the fixed-effects model, the expected values of the mean squares for main effects and interaction are equal to the error variance plus a term involving the corresponding fixed effect. The fixed-effect term will be zero if the treatment means are zero or if the interaction effects are negligible. The expected value of the error mean square is $\sigma^2$, so the ratio of a model term mean square to the error mean square results in a one-sided upper-tail test. The use of the F-distribution as the reference distribution follows from the normality assumption on the response variable.

The Random Effects Model

In Section 7-6 we discuss briefly the use of analysis of variance methods for measurement systems capability studies. The two-factor factorial random effects model is assumed to be appropriate for the problem. The model is

$$ y_{ijk} = \mu + \tau_i + \beta_j + (\tau\beta)_{ij} + \varepsilon_{ijk}, \qquad i = 1, 2, \ldots, a;\; j = 1, 2, \ldots, b;\; k = 1, 2, \ldots, n, $$

as given in Equation (7-3) in the textbook. We list the expected mean squares for this model in Equation (7-6), but do not formally develop them. It is relatively easy to develop the expected mean squares from direct application of the expectation operator. For example, consider finding

$$ E(MS_A) = E\!\left( \frac{SS_A}{a-1} \right) = \frac{1}{a-1}\, E(SS_A), $$

where $SS_A$ is the sum of squares for the row factor. Recall that the model components $\tau_i$, $\beta_j$, and $(\tau\beta)_{ij}$ are normally and independently distributed with means zero and variances $\sigma_\tau^2$, $\sigma_\beta^2$, and $\sigma_{\tau\beta}^2$, respectively. The sum of squares and its expectation are defined as

$$SS_A=\frac{1}{bn}\sum_{i=1}^{a}y_{i..}^2-\frac{y_{...}^2}{abn}$$

$$E(SS_A)=\frac{1}{bn}\sum_{i=1}^{a}E(y_{i..}^2)-\frac{1}{abn}E(y_{...}^2)$$

Now

$$y_{i..}=\sum_{j=1}^{b}\sum_{k=1}^{n}y_{ijk}=bn\mu+bn\tau_i+n\beta_{.}+n(\tau\beta)_{i.}+\epsilon_{i..}$$

and, since all the random components are independent with zero means,

$$\frac{1}{bn}\sum_{i=1}^{a}E(y_{i..}^2)=\frac{1}{bn}\sum_{i=1}^{a}E\left[bn\mu+bn\tau_i+n\beta_{.}+n(\tau\beta)_{i.}+\epsilon_{i..}\right]^2=abn\mu^2+abn\sigma_\tau^2+an\sigma_\beta^2+an\sigma_{\tau\beta}^2+a\sigma^2$$

Furthermore, we can show that

$$y_{...}=abn\mu+bn\tau_{.}+an\beta_{.}+n(\tau\beta)_{..}+\epsilon_{...}$$

so the second term in the expected value of $SS_A$ becomes

$$\frac{1}{abn}E(y_{...}^2)=\frac{1}{abn}\left[(abn)^2\mu^2+a(bn)^2\sigma_\tau^2+b(an)^2\sigma_\beta^2+abn^2\sigma_{\tau\beta}^2+abn\sigma^2\right]=abn\mu^2+bn\sigma_\tau^2+an\sigma_\beta^2+n\sigma_{\tau\beta}^2+\sigma^2$$

We can now collect the components of the expected value of the sum of squares for factor A and find the expected mean square as follows:

$$E(MS_A)=E\left[\frac{SS_A}{a-1}\right]=\frac{1}{a-1}\left[\frac{1}{bn}\sum_{i=1}^{a}E(y_{i..}^2)-\frac{1}{abn}E(y_{...}^2)\right]=\frac{1}{a-1}\left[(a-1)\sigma^2+(a-1)n\sigma_{\tau\beta}^2+(a-1)bn\sigma_\tau^2\right]=\sigma^2+n\sigma_{\tau\beta}^2+bn\sigma_\tau^2$$

This agrees with the first result in Equation (7-6).

There are situations where only a specific set of operators performs the measurements in a gage R & R study, so we cannot think of the operators as having been selected at random from a large population. In that case an ANOVA model involving parts chosen at random and fixed operator effects would be appropriate. This is a mixed model ANOVA. For details of using mixed models in measurement systems capability

studies, see the discussion in Montgomery (1997) and the excellent paper by Dolezal, Burdick, and Birch (1998).

36. Blocking in Designed Experiments

In many experimental problems it is necessary to design the experiment so that variability arising from nuisance factors can be controlled. As an example, consider the grocery bag paper tensile strength experiment described in Section 3-5. Recall that the runs must be conducted in a pilot plant. Suppose that each run takes about two hours to complete, so that at most four runs can be made on a single day. Now it is certainly possible that pilot plant operations may not be completely consistent from day to day, due to variations in environmental conditions, materials, operation of the test equipment for making the tensile strength measurements, changes in operating personnel, and so forth. All of these sources of variability can be combined into a single source of nuisance variability called time.

A simple method can be used to keep the variability associated with a nuisance variable from impacting experimental results: on each day (or, in general, at each possible level of the nuisance variable), test all treatments or factor levels of interest. In our example, this would consist of testing all four hardwood concentrations of interest on a single day. On each day, the four tests are conducted in random order. This type of experimental design is called a randomized complete block design, or RCBD. In the RCBD, the block size must be large enough to hold all the treatments. If this condition is not satisfied, then an incomplete block design must be used. These incomplete block designs are discussed in some experimental design textbooks; for example, see Montgomery (1997). In general, for a RCBD with a treatments we run a complete replicate of these treatments in each of b blocks. The order in which the runs are made in each block is completely random.
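The within-block randomization just described is easy to script. The following is a hypothetical sketch (the function name and illustrative concentration levels are ours, not from the text): every treatment appears exactly once in each block, and the run order is shuffled independently within each block.

```python
import random

def rcbd_run_order(treatments, n_blocks, seed=None):
    """Generate a run plan for a randomized complete block design:
    each block contains every treatment exactly once, in an order
    randomized independently of the other blocks."""
    rng = random.Random(seed)
    plan = []
    for block in range(1, n_blocks + 1):
        order = list(treatments)
        rng.shuffle(order)  # randomization is restricted to within this block
        plan.extend((block, treatment) for treatment in order)
    return plan

# Illustrative values: four hardwood concentrations tested on each of six days
plan = rcbd_run_order(treatments=[5, 10, 15, 20], n_blocks=6, seed=42)
```

Each `(block, treatment)` pair in `plan` is one two-hour pilot-plant run, so this plan calls for 6 × 4 = 24 runs in all.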
The statistical model for the RCBD is

$$y_{ij}=\mu+\tau_i+\beta_j+\epsilon_{ij},\qquad i=1,2,\ldots,a;\;\; j=1,2,\ldots,b$$

where $\mu$ is an overall mean, $\tau_i$ is the ith treatment effect, $\beta_j$ is the jth block effect, and $\epsilon_{ij}$ is a random error term, taken to be NID(0, $\sigma^2$). We will think of the treatments as fixed factors. Defining the treatment and block effects as deviations from an overall mean leads to the test on equality of the treatment means being equivalent to a test of the hypotheses

$$H_0:\tau_1=\tau_2=\cdots=\tau_a=0\qquad\text{versus}\qquad H_1:\tau_i\neq 0\ \text{for at least one } i$$

The analysis of variance (ANOVA) from Section 3-5 can be adapted to the analysis of the RCBD. The fundamental ANOVA equality becomes

$$\sum_{i=1}^{a}\sum_{j=1}^{b}(y_{ij}-\bar{y}_{..})^2=b\sum_{i=1}^{a}(\bar{y}_{i.}-\bar{y}_{..})^2+a\sum_{j=1}^{b}(\bar{y}_{.j}-\bar{y}_{..})^2+\sum_{i=1}^{a}\sum_{j=1}^{b}(y_{ij}-\bar{y}_{i.}-\bar{y}_{.j}+\bar{y}_{..})^2$$

or

$$SS_T=SS_{\text{Factor}}+SS_{\text{Blocks}}+SS_E$$

The numbers of degrees of freedom associated with these sums of squares are

$$ab-1=(a-1)+(b-1)+(a-1)(b-1)$$

The null hypothesis of no difference in factor level or treatment means is tested by the statistic

$$F_0=\frac{SS_{\text{Factor}}/(a-1)}{SS_E/[(a-1)(b-1)]}=\frac{MS_{\text{Factor}}}{MS_E}$$

To illustrate the procedure, reconsider the tensile strength experiment data in Table 3-6, and suppose that the experiment was run as a RCBD. The columns of Table 3-6 (currently labeled "Observations") would now be labeled blocks or days. Minitab will perform the RCBD analysis. The output follows.

ANOVA: strength versus concentration, day (or blocks)

Factor  Type   Levels  Values
conc    fixed
day     fixed

Analysis of Variance for strength

Source  DF  SS  MS  F  P
conc
day
Error
Total

Notice that the concentration factor is significant; that is, there is evidence that changing the hardwood concentration affects the mean strength. The F-ratio for blocks (days) is small, suggesting that the variability associated with the blocks was small. There are some technical problems associated with statistical testing of block effects; see the discussion in Montgomery (1997, Chapter 5).

The blocking principle can be extended to experiments with more complex treatment structures. For example, in Section 12-5.5 we observe that in a replicated factorial experiment, each replicate can be run in a single block. Thus a nuisance factor can be accommodated in a factorial experiment. As an illustration, consider the router experiment (Example 12-6 in the text). Suppose that each of the replicates was run on a single printed circuit board. Considering boards (or replicates) as blocks, we can analyze this experiment as a factorial in four blocks. The Minitab analysis follows. Notice that both main effects and the interaction are important. There is also some indication that the block effects are significant.

Fractional Factorial Fit: Vibration versus A, B

Estimated Effects and Coefficients for Vibration (coded units)

Term      Effect  Coef  SE Coef  T  P
Constant
Block
A
B
A*B

Analysis of Variance for Vibration (coded units)

Source              DF  Seq SS  Adj SS  Adj MS  F  P
Blocks
Main Effects
2-Way Interactions
Residual Error
Total

37. Using a t-test for Detecting Curvature

In the textbook we discuss the addition of center points to a $2^k$ factorial design. This is a very useful idea, as it allows an estimate of pure error to be obtained even though the factorial design points are not replicated, and it permits the experimenter to obtain an assessment of model adequacy with respect to certain second-order terms. Specifically, we present an F-test for the hypotheses

$$H_0:\beta_{11}+\beta_{22}+\cdots+\beta_{kk}=0$$
$$H_1:\beta_{11}+\beta_{22}+\cdots+\beta_{kk}\neq 0$$

An equivalent t-statistic can also be employed to test these hypotheses. Some computer software programs report the t-test instead of (or in addition to) the F-test. It is not difficult to develop the t-test and to show that it is equivalent to the F-test.

Suppose that the appropriate model for the response is a complete quadratic polynomial and that the experimenter has conducted an unreplicated full $2^k$ factorial design with $n_F$ design points plus $n_C$ center points. Let $\bar{y}_F$ and $\bar{y}_C$ represent the averages of the responses at the factorial and center points, respectively. Also let $\hat{\sigma}^2$ be the estimate of the variance obtained using the center points. It is easy to show that

$$E(\bar{y}_F)=\frac{1}{n_F}\left(n_F\beta_0+n_F\beta_{11}+n_F\beta_{22}+\cdots+n_F\beta_{kk}\right)=\beta_0+\beta_{11}+\beta_{22}+\cdots+\beta_{kk}$$

and

$$E(\bar{y}_C)=\frac{1}{n_C}\left(n_C\beta_0\right)=\beta_0$$

Therefore,

$$E(\bar{y}_F-\bar{y}_C)=\beta_{11}+\beta_{22}+\cdots+\beta_{kk}$$

and so we see that the difference in averages $\bar{y}_F-\bar{y}_C$ is an unbiased estimator of the sum of the pure quadratic model parameters. Now the variance of $\bar{y}_F-\bar{y}_C$ is

$$V(\bar{y}_F-\bar{y}_C)=\sigma^2\left(\frac{1}{n_F}+\frac{1}{n_C}\right)$$

Consequently, a test of the above hypotheses can be conducted using the statistic

$$t_0=\frac{\bar{y}_F-\bar{y}_C}{\sqrt{\hat{\sigma}^2\left(\dfrac{1}{n_F}+\dfrac{1}{n_C}\right)}}$$

which under the null hypothesis follows a t distribution with $n_C-1$ degrees of freedom. We would reject the null hypothesis (that is, conclude there is no pure quadratic curvature) if $|t_0|>t_{\alpha/2,\,n_C-1}$.

This t-test is equivalent to the F-test given in the book. To see this, square the t-statistic above:

$$t_0^2=\frac{(\bar{y}_F-\bar{y}_C)^2}{\hat{\sigma}^2\left(\dfrac{1}{n_F}+\dfrac{1}{n_C}\right)}=\frac{n_Fn_C(\bar{y}_F-\bar{y}_C)^2}{(n_F+n_C)\hat{\sigma}^2}$$

This ratio is identical to the F-test presented in the textbook. Furthermore, we know that the square of a t random variable with (say) v degrees of freedom is an F random variable with 1 numerator and v denominator degrees of freedom, so the t-test for pure quadratic effects is indeed equivalent to the F-test.

38. Response Surface Designs

Example 13-2 introduces the central composite design (CCD), perhaps the most widely used design for fitting the second-order response surface model. The CCD is very attractive for several reasons: (1) it requires fewer runs than some of its competitors, such as the $3^k$ factorial; (2) it can be built up from a first-order design (the $2^k$) by adding the axial runs; and (3) the design has some nice properties, such as the rotatability property discussed in the text. The factorial runs in the CCD are important in estimating the first-order (or main) effects in the model as well as the interaction or cross-product terms. The axial runs contribute toward estimation of the pure quadratic terms, plus they also contribute to estimation of the main effects. The center points contribute to estimation of the pure quadratic terms. The CCD can also be run in blocks, with the factorial portion of the design plus center

points forming one block and the axial runs plus some additional center points forming the second block. For other blocking strategies involving CCDs, refer to Montgomery (1997) or Myers and Montgomery (1995).

In addition to the CCD, there are some other designs that can be useful for fitting a second-order response surface model. Some of these designs have been created as alternatives to overcome possible objections to the CCD. A fairly common criticism is that the CCD requires five levels of each design factor, while the minimum number of levels required to fit a second-order model is three. It is easy to modify the CCD to contain only three levels of each factor by setting the axial distance $\alpha=1$. This places the axial runs in the center of each face of the cube, resulting in a design called the face-centered cube. The Box-Behnken design is also a second-order design with all factors at three levels. In this design, the runs are located at the centers of the edges of the cube and not at the corners.

Another criticism of the CCD is that although the designs are not large, they are far from minimal. For example, a CCD in k = 4 design factors requires 16 factorial runs and 8 axial runs, plus $n_C\geq 1$ center points. This results in a design with at least 25 runs, while the second-order model in k = 4 design factors has only 15 parameters. Obviously, there are situations where it would be desirable to reduce the number of required runs. One approach is to use a fractional factorial in the cube. However, the fraction must be either of resolution V, or of resolution III* (main effects aliased with two-factor interactions, but no two-factor interactions aliased with each other). A resolution IV design cannot be used, because it results in two-factor interactions aliased with each other, so the cross-product terms in the second-order model could not all be estimated. A small composite design is a CCD with a resolution III* fraction in the cube.
For the k = 4 design factor example, this would involve setting the generator D = AB for the one-half fraction (the standard resolution IV half-fraction uses D = ABC). This results in a small composite design with a minimum of 17 runs. Hybrid designs are another type of small response surface design that is in many ways superior to the small composite design. These and other types of response surface designs are supported by several statistics software packages. These designs are discussed extensively in Myers and Montgomery (1995).

39. Fitting Regression Models by Least Squares

Regression models are used extensively in quality and process improvement, often to fit an empirical model to data collected on a process from a designed experiment. In Chapter 13, for example, we illustrate fitting second-order response surface models for process optimization studies. Fitting regression models by the method of least squares is straightforward. Suppose that we have a sample of n observations on a response variable y and a collection of k < n regressor or predictor variables $x_1, x_2, \ldots, x_k$. The model we wish to fit is

$$y_i=\beta_0+\beta_1x_{i1}+\beta_2x_{i2}+\cdots+\beta_kx_{ik}+\epsilon_i,\qquad i=1,2,\ldots,n$$
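As a numerical preview of fitting this model, here is a minimal sketch with made-up, noise-free data (so the fitted coefficients should recover the generating ones essentially exactly); NumPy is assumed:

```python
import numpy as np

# Made-up data generated without noise from y = 2 + 3*x1 - 1*x2
rng = np.random.default_rng(0)
n = 12
x = rng.uniform(0.0, 10.0, size=(n, 2))
y = 2.0 + 3.0 * x[:, 0] - 1.0 * x[:, 1]

# Model matrix with a leading column of ones for the intercept beta_0
X = np.column_stack([np.ones(n), x])

# Solve the least squares problem: minimize || y - X beta ||^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # close to [2., 3., -1.]
```

In practice one would of course have noise in the data and would follow the fit with residual analysis and inference; the point here is only the mechanics of building X and solving for the coefficient vector.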

The least squares fit selects the model parameters so as to minimize the sum of the squares of the model errors. Least squares is presented most compactly if the model is expressed in matrix notation:

$$\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}$$

The least squares function is

$$L=(\mathbf{y}-\mathbf{X}\boldsymbol{\beta})'(\mathbf{y}-\mathbf{X}\boldsymbol{\beta})=\mathbf{y}'\mathbf{y}-2\boldsymbol{\beta}'\mathbf{X}'\mathbf{y}+\boldsymbol{\beta}'\mathbf{X}'\mathbf{X}\boldsymbol{\beta}$$

Now

$$\frac{\partial L}{\partial\boldsymbol{\beta}}\bigg|_{\hat{\boldsymbol{\beta}}}=-2\mathbf{X}'\mathbf{y}+2\mathbf{X}'\mathbf{X}\hat{\boldsymbol{\beta}}=\mathbf{0}$$

and the least squares normal equations are

$$\mathbf{X}'\mathbf{X}\hat{\boldsymbol{\beta}}=\mathbf{X}'\mathbf{y}$$

Therefore, the least squares estimator is

$$\hat{\boldsymbol{\beta}}=(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y}$$

assuming that the columns of the X matrix are linearly independent so that the $(\mathbf{X}'\mathbf{X})^{-1}$ matrix exists. Most statistics software packages have extensive regression model-fitting and inference procedures. We have illustrated a few of these procedures in the textbook.

40. Taguchi's Contributions to Quality Engineering

In Chapters 12 and 13 we emphasize the importance of using designed experiments for product and process improvement. Today, many engineers and scientists are exposed to the principles of statistically designed experiments as part of their formal technical education. However, for many years the principles of experimental design (and statistical methods in general) were not as widely used as they are today. In the early 1980s, Genichi Taguchi, a Japanese engineer, introduced his approach to using experimental design for

1. Designing products or processes so that they are robust to environmental conditions.
2. Designing/developing products so that they are robust to component variation.
3. Minimizing variation around a target value.

Taguchi called this the robust parameter design problem. In Chapter 13 we extend the idea somewhat to include not only robust product design but process robustness studies.

Taguchi has certainly defined meaningful engineering problems, and the philosophy that he recommends is sound. However, he advocated some novel methods of statistical data analysis and some approaches to the design of experiments that the process of peer review revealed to be unnecessarily complicated, inefficient, and sometimes ineffective. In this section, we will briefly overview Taguchi's philosophy regarding quality engineering and experimental design. We will present some examples of his approach to robust parameter design, and we will use these examples to highlight the problems with his technical methods. It is possible to combine his sound engineering concepts with more efficient and effective experimental design and analysis based on response surface methods, as we did in the process robustness study examples in Chapter 13.

The Taguchi Philosophy

Taguchi advocates a philosophy of quality engineering that is broadly applicable. He considers three stages in product (or process) development: system design, parameter design, and tolerance design. In system design, the engineer uses scientific and engineering principles to determine the basic system configuration. For example, if we wish to measure an unknown resistance, we may use our knowledge of electrical circuits to determine that the basic system should be configured as a Wheatstone bridge. If we are designing a process to assemble printed circuit boards, we will determine the need for specific types of axial insertion machines, surface-mount placement machines, flow solder machines, and so forth.

In the parameter design stage, the specific values for the system parameters are determined. This would involve choosing the nominal resistor and power supply values for the Wheatstone bridge, the number and type of component placement machines for the printed circuit board assembly process, and so forth.
Usually, the objective is to specify these nominal parameter values such that the variability transmitted from uncontrollable (or noise) variables is minimized. Tolerance design is used to determine the best tolerances for the parameters. For example, in the Wheatstone bridge, tolerance design methods would reveal which components in the design are most sensitive and where the tolerances should be set. If a component does not have much effect on the performance of the circuit, it can be specified with a wide tolerance.

Taguchi recommends that statistical experimental design methods be employed to assist in this process, particularly during parameter design and tolerance design. We will focus on parameter design. Experimental design methods can be used to find a best product or process design, where by "best" we mean a product or process that is robust, or insensitive, to uncontrollable factors that will influence the product or process once it is in routine operation. The notion of robust design is not new. Engineers have always tried to design products so that they will work well under uncontrollable conditions; for example, commercial transport aircraft fly about as well in a thunderstorm as they do in clear air. Taguchi deserves recognition for realizing that experimental design can be used as a formal part of the engineering design process to help accomplish this objective.

A key component of Taguchi's philosophy is the reduction of variability. Generally, each product or process performance characteristic will have a target, or nominal, value. The objective is to reduce the variability around this target value. Taguchi models the departures that may occur from this target value with a loss function. The loss refers to the cost that is incurred by society when the consumer uses a product whose quality characteristics differ from the nominal. The concept of societal loss is a departure from traditional thinking. Taguchi imposes a quadratic loss function of the form

$$L(y)=k(y-T)^2$$

shown in Figure 40-1 below. Clearly this type of function will penalize even small departures of y from the target T. Again, this is a departure from traditional thinking, which usually attaches penalties only to cases where y is outside of the upper and lower specifications (say y > USL or y < LSL in Figure 40-1). However, the Taguchi philosophy regarding reduction of variability and the emphasis on minimizing costs is entirely consistent with the continuous improvement philosophy of Deming and Juran.

In summary, Taguchi's philosophy involves three central ideas:

1. Products and processes should be designed so that they are robust to external sources of variability.
2. Experimental design methods are an engineering tool to help accomplish this objective.
3. Operation on-target is more important than conformance to specifications.

Figure 40-1. Taguchi's Quadratic Loss Function

These are sound concepts, and their value should be readily apparent. Furthermore, as we have seen in the textbook, experimental design methods can play a major role in translating these ideas into practice. We now turn to a discussion of the specific methods that Taguchi recommends for applying his concepts in practice. As we will see, his approach to experimental design and data analysis can be improved.
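The difference between the quadratic loss and traditional specification-based thinking is easy to see numerically. In the sketch below, the target, specification limits, and loss coefficient are all hypothetical:

```python
def taguchi_loss(y, target, k=1.0):
    """Quadratic loss L(y) = k*(y - T)^2: any departure from the
    target incurs a loss that grows quadratically with the departure."""
    return k * (y - target) ** 2

def spec_based_loss(y, lsl, usl, scrap_cost=1.0):
    """Traditional view: zero loss anywhere within [LSL, USL],
    a fixed scrap/rework cost outside the specifications."""
    return 0.0 if lsl <= y <= usl else scrap_cost

# Hypothetical target T = 10 with specifications 10 +/- 2
T, LSL, USL = 10.0, 8.0, 12.0
for y in (10.0, 11.9, 12.1):
    print(y, taguchi_loss(y, T), spec_based_loss(y, LSL, USL))
```

A unit at y = 11.9 is "in spec" and costs nothing under the traditional view, yet it carries nearly the full quadratic loss; this is exactly the penalty on small departures from T noted above.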

Taguchi's Technical Methods: An Example

We will use a connector pull-off force example to illustrate Taguchi's technical methods. For more information about the problem, refer to the original article ("The Taguchi Approach to Parameter Design," by D. M. Byrne and S. Taguchi, Quality Progress, December 1987, pp. 19-26). The experiment involves finding a method to assemble an elastomeric connector to a nylon tube that would deliver the required pull-off performance to be suitable for use in an automotive engine application. The specific objective of the experiment is to maximize the pull-off force. Four controllable factors and three uncontrollable noise factors were identified. These factors are shown in Table 40-1 below. We want to find the levels of the controllable factors that are least influenced by the noise factors and that provide the maximum pull-off force. Notice that although the noise factors are not controllable during routine operations, they can be controlled for the purposes of a test. Each controllable factor is tested at three levels, and each noise factor is tested at two levels.

In the Taguchi parameter design methodology, one experimental design is selected for the controllable factors and another experimental design is selected for the noise factors. These designs are shown in Table 40-2. Taguchi refers to these designs as orthogonal arrays, and represents the factor levels with integers 1, 2, and 3. In this case the designs selected are just a standard $2^3$ factorial and a $3^{4-2}$ fractional factorial; Taguchi calls these the L8 and L9 orthogonal arrays, respectively. The two designs are combined as shown in Table 40-3 below. This is called a crossed or product array design, composed of the inner array containing the controllable factors and the outer array containing the noise factors. Literally, each of the 9 runs from the inner array is tested across the 8 runs from the outer array, for a total sample size of 72 runs.
The observed pull-off force is reported in Table 40-3.

Table 40-1. Factors and Levels for the Taguchi Parameter Design Example

Controllable Factors                           Levels
A = Interference                               Low, Medium, High
B = Connector wall thickness                   Thin, Medium, Thick
C = Insertion depth                            Shallow, Medium, Deep
D = Percent adhesive in connector pre-dip      Low, Medium, High

Uncontrollable Factors                         Levels
E = Conditioning time                          24 h, 120 h
F = Conditioning temperature                   72°F, 150°F
G = Conditioning relative humidity             25%, 75%
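The crossed (inner × outer) array structure, and the run count it implies, can be sketched generically; the run labels below are placeholders, not the actual L9 and L8 level settings of Table 40-2:

```python
from itertools import product

# Generic crossed-array sketch: every inner-array (controllable-factor)
# run is tested at every outer-array (noise-factor) run.
inner_runs = [f"inner-{i}" for i in range(1, 10)]  # 9 runs, as in an L9
outer_runs = [f"outer-{j}" for j in range(1, 9)]   # 8 runs, as in an L8

crossed = list(product(inner_runs, outer_runs))
print(len(crossed))  # 9 * 8 = 72 individual tests
```

The multiplicative run count is the practical cost of the crossed-array approach: the 8 noise-array observations collected for each inner-array run are what feed the per-run mean and signal-to-noise summaries.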

Table 40-2. Designs for the Controllable and Uncontrollable Factors

(a) L9 Orthogonal Array for the Controllable Factors

Run  A  B  C  D

(b) L8 Orthogonal Array for the Uncontrollable Factors

Run  E  F  E×F  G  E×G  F×G  e

Table 40-3. Parameter Design with Both Inner and Outer Arrays

Outer Array (L8): E, F, G
Inner Array (L9): Run, A, B, C, D    Responses: $\bar{y}$, $SN_L$

Data Analysis and Conclusions

The data from this experiment may now be analyzed. Taguchi recommends analyzing the mean response for each run in the inner array (see Table 40-3), and he also suggests analyzing variation using an appropriately chosen signal-to-noise ratio (SN). These signal-to-noise ratios are derived from the quadratic loss function, and three of them are considered to be "standard" and widely applicable. They are defined as follows:

1. Nominal the best:

$$SN_T=10\log\left(\frac{\bar{y}^2}{S^2}\right)$$

2. Larger the better:

$$SN_L=-10\log\left(\frac{1}{n}\sum_{i=1}^{n}\frac{1}{y_i^2}\right)$$

3. Smaller the better:

$$SN_S=-10\log\left(\frac{1}{n}\sum_{i=1}^{n}y_i^2\right)$$

Notice that these SN ratios are expressed on a decibel scale. We would use $SN_T$ if the objective is to reduce variability around a specific target, $SN_L$ if the system is optimized when the response is as large as possible, and $SN_S$ if the system is optimized when the response is as small as possible. Factor levels that maximize the appropriate SN ratio are optimal. In this problem, we would use $SN_L$ because the objective is to maximize the pull-off force. The last two columns of Table 40-3 contain the $\bar{y}$ and $SN_L$ values for each of the nine inner-array runs.

Taguchi-oriented practitioners often use the analysis of variance to determine the factors that influence $\bar{y}$ and the factors that influence the signal-to-noise ratio. They also employ graphs of the "marginal means" of each factor, such as the ones shown in Figures 40-2 and 40-3. The usual approach is to examine the graphs and "pick the winner." In this case, factors A and C have larger effects than do B and D. In terms of maximizing $SN_L$ we would select A = Medium, C = Deep, B = Medium, and D = Low. In terms of maximizing the average pull-off force $\bar{y}$, we would choose A = Medium, C = Medium, B = Medium, and D = Low. Notice that there is almost no difference between C = Medium and C = Deep. The implication is that this choice of levels will maximize the mean pull-off force and reduce variability in the pull-off force.

Taguchi advocates claim that the use of the SN ratio generally eliminates the need for examining specific interactions between the controllable and noise factors, although sometimes looking at these interactions improves process understanding. The authors of this study found that the AG and DE interactions were large. Analysis of these interactions, shown in Figure 40-4, suggests that A = Medium is best.
(It gives the highest pull-off force, and its slope close to zero indicates that choosing A = Medium minimizes the effect of relative humidity.) The analysis also suggests that D = Low gives the highest pull-off force regardless of the conditioning time.
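Computing the larger-the-better ratio for a single inner-array run is straightforward. The sketch below uses made-up pull-off force observations, not the Byrne-Taguchi data:

```python
import math

def sn_larger_the_better(y):
    """Taguchi larger-the-better signal-to-noise ratio, in decibels:
    SN_L = -10 * log10( (1/n) * sum(1 / y_i^2) ).
    Large, consistent responses produce a large SN_L."""
    n = len(y)
    return -10.0 * math.log10(sum(1.0 / yi ** 2 for yi in y) / n)

# Made-up pull-off forces (lb) over the 8 outer-array runs of one inner-array run
forces = [19.1, 20.0, 19.6, 19.6, 19.9, 16.9, 9.5, 15.6]
print(round(sn_larger_the_better(forces), 2))
```

Because the ratio averages reciprocal squared responses, a single low observation (here the 9.5) pulls $SN_L$ down sharply; this is how the statistic rewards both a high mean and low variability.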

Figure 40-2. The Effects of Controllable Factors on Each Response

Figure 40-3. The Effects of Controllable Factors on the Signal-to-Noise Ratio

When cost and other factors were taken into account, the experimenters in this example finally decided to use A = Medium, B = Thin, C = Medium, and D = Low. (B = Thin was much less expensive than B = Medium, and C = Medium was felt to give slightly less variability than C = Deep.) Since this combination was not a run in the original nine inner-array trials, five additional tests were made at this set of conditions as a confirmation experiment. For this confirmation experiment, the levels used on the noise variables were E = Low, F = Low, and G = Low. The authors report that good results were obtained from the confirmation test.

Critique of Taguchi's Experimental Strategy and Designs

The advocates of Taguchi's approach to parameter design utilize the orthogonal array designs, two of which (the L8 and the L9) were presented in the foregoing example. There are other orthogonal arrays: the L4, L12, L16, L18, and L27. These designs were not developed by Taguchi; for example, the L8 is a $2^{7-4}_{III}$ fractional factorial, the L9 is a $3^{4-2}_{III}$ fractional factorial, the L12 is a Plackett-Burman design, the L16 is a $2^{15-11}_{III}$ fractional factorial, and so on. Box, Bisgaard, and Fung (1988) trace the origin of these designs. Some of these designs have very complex alias structures. In particular, the L12 and all of

the designs that use three-level factors will involve partial aliasing of two-factor interactions with main effects. If any two-factor interactions are large, this may lead to a situation in which the experimenter does not get the correct answer.

Figure 40-4. The AG and DE Interactions

Taguchi argues that we do not need to consider two-factor interactions explicitly. He claims that it is possible to eliminate these interactions either by correctly specifying the response and design factors or by using a sliding setting approach to choose factor levels. As an example of the latter approach, consider the two factors pressure and temperature. Varying these factors independently will probably produce an interaction. However, if the temperature levels are chosen contingent on the pressure levels, then the interaction effect can be minimized. In practice, these two approaches are usually difficult to implement unless we have an unusually high level of process knowledge. The lack of provision for adequately dealing with potential interactions between the controllable process factors is a major weakness of the Taguchi approach to parameter design.

Instead of designing the experiment to investigate potential interactions, Taguchi prefers to use three-level factors to estimate curvature. For example, in the inner and outer array design used by Byrne and Taguchi, all four controllable factors were run at three levels. Let $x_1, x_2, x_3$, and $x_4$ represent the controllable factors and let $z_1, z_2$, and $z_3$ represent the three noise factors. Recall that the noise factors were run at two levels in a complete factorial design. The design they used allows us to fit the following model:

$$y=\beta_0+\sum_{j=1}^{4}\beta_jx_j+\sum_{j=1}^{4}\beta_{jj}x_j^2+\sum_{j=1}^{3}\gamma_jz_j+\sum_{i<j}\sum_{j=2}^{3}\gamma_{ij}z_iz_j+\sum_{i=1}^{4}\sum_{j=1}^{3}\delta_{ij}x_iz_j+\epsilon$$

Notice that we can fit the linear and quadratic effects of the controllable factors but not their two-factor interactions (which are aliased with the main effects). We can also fit the


More information

1 Least Squares Estimation - multiple regression.

1 Least Squares Estimation - multiple regression. Introduction to multiple regression. Fall 2010 1 Least Squares Estimation - multiple regression. Let y = {y 1,, y n } be a n 1 vector of dependent variable observations. Let β = {β 0, β 1 } be the 2 1

More information

Practical Statistics for the Analytical Scientist Table of Contents

Practical Statistics for the Analytical Scientist Table of Contents Practical Statistics for the Analytical Scientist Table of Contents Chapter 1 Introduction - Choosing the Correct Statistics 1.1 Introduction 1.2 Choosing the Right Statistical Procedures 1.2.1 Planning

More information

7. Response Surface Methodology (Ch.10. Regression Modeling Ch. 11. Response Surface Methodology)

7. Response Surface Methodology (Ch.10. Regression Modeling Ch. 11. Response Surface Methodology) 7. Response Surface Methodology (Ch.10. Regression Modeling Ch. 11. Response Surface Methodology) Hae-Jin Choi School of Mechanical Engineering, Chung-Ang University 1 Introduction Response surface methodology,

More information

CONTROL charts are widely used in production processes

CONTROL charts are widely used in production processes 214 IEEE TRANSACTIONS ON SEMICONDUCTOR MANUFACTURING, VOL. 12, NO. 2, MAY 1999 Control Charts for Random and Fixed Components of Variation in the Case of Fixed Wafer Locations and Measurement Positions

More information

Contents. 2 2 factorial design 4

Contents. 2 2 factorial design 4 Contents TAMS38 - Lecture 10 Response surface methodology Lecturer: Zhenxia Liu Department of Mathematics - Mathematical Statistics 12 December, 2017 2 2 factorial design Polynomial Regression model First

More information

THE DETECTION OF SHIFTS IN AUTOCORRELATED PROCESSES WITH MR AND EWMA CHARTS

THE DETECTION OF SHIFTS IN AUTOCORRELATED PROCESSES WITH MR AND EWMA CHARTS THE DETECTION OF SHIFTS IN AUTOCORRELATED PROCESSES WITH MR AND EWMA CHARTS Karin Kandananond, kandananond@hotmail.com Faculty of Industrial Technology, Rajabhat University Valaya-Alongkorn, Prathumthani,

More information

A Unified Approach to Uncertainty for Quality Improvement

A Unified Approach to Uncertainty for Quality Improvement A Unified Approach to Uncertainty for Quality Improvement J E Muelaner 1, M Chappell 2, P S Keogh 1 1 Department of Mechanical Engineering, University of Bath, UK 2 MCS, Cam, Gloucester, UK Abstract To

More information

DESAIN EKSPERIMEN BLOCKING FACTORS. Semester Genap 2017/2018 Jurusan Teknik Industri Universitas Brawijaya

DESAIN EKSPERIMEN BLOCKING FACTORS. Semester Genap 2017/2018 Jurusan Teknik Industri Universitas Brawijaya DESAIN EKSPERIMEN BLOCKING FACTORS Semester Genap Jurusan Teknik Industri Universitas Brawijaya Outline The Randomized Complete Block Design The Latin Square Design The Graeco-Latin Square Design Balanced

More information

Machine Learning, Fall 2009: Midterm

Machine Learning, Fall 2009: Midterm 10-601 Machine Learning, Fall 009: Midterm Monday, November nd hours 1. Personal info: Name: Andrew account: E-mail address:. You are permitted two pages of notes and a calculator. Please turn off all

More information

y response variable x 1, x 2,, x k -- a set of explanatory variables

y response variable x 1, x 2,, x k -- a set of explanatory variables 11. Multiple Regression and Correlation y response variable x 1, x 2,, x k -- a set of explanatory variables In this chapter, all variables are assumed to be quantitative. Chapters 12-14 show how to incorporate

More information

Review of Statistics 101

Review of Statistics 101 Review of Statistics 101 We review some important themes from the course 1. Introduction Statistics- Set of methods for collecting/analyzing data (the art and science of learning from data). Provides methods

More information

Probability Methods in Civil Engineering Prof. Dr. Rajib Maity Department of Civil Engineering Indian Institution of Technology, Kharagpur

Probability Methods in Civil Engineering Prof. Dr. Rajib Maity Department of Civil Engineering Indian Institution of Technology, Kharagpur Probability Methods in Civil Engineering Prof. Dr. Rajib Maity Department of Civil Engineering Indian Institution of Technology, Kharagpur Lecture No. # 36 Sampling Distribution and Parameter Estimation

More information

Written Exam (2 hours)

Written Exam (2 hours) M. Müller Applied Analysis of Variance and Experimental Design Summer 2015 Written Exam (2 hours) General remarks: Open book exam. Switch off your mobile phone! Do not stay too long on a part where you

More information

An Investigation of Combinations of Multivariate Shewhart and MEWMA Control Charts for Monitoring the Mean Vector and Covariance Matrix

An Investigation of Combinations of Multivariate Shewhart and MEWMA Control Charts for Monitoring the Mean Vector and Covariance Matrix Technical Report Number 08-1 Department of Statistics Virginia Polytechnic Institute and State University, Blacksburg, Virginia January, 008 An Investigation of Combinations of Multivariate Shewhart and

More information

An overview of applied econometrics

An overview of applied econometrics An overview of applied econometrics Jo Thori Lind September 4, 2011 1 Introduction This note is intended as a brief overview of what is necessary to read and understand journal articles with empirical

More information

Chapter 7: Simple linear regression

Chapter 7: Simple linear regression The absolute movement of the ground and buildings during an earthquake is small even in major earthquakes. The damage that a building suffers depends not upon its displacement, but upon the acceleration.

More information

Applied Regression Analysis

Applied Regression Analysis Applied Regression Analysis Lecture 2 January 27, 2005 Lecture #2-1/27/2005 Slide 1 of 46 Today s Lecture Simple linear regression. Partitioning the sum of squares. Tests of significance.. Regression diagnostics

More information

The simple linear regression model discussed in Chapter 13 was written as

The simple linear regression model discussed in Chapter 13 was written as 1519T_c14 03/27/2006 07:28 AM Page 614 Chapter Jose Luis Pelaez Inc/Blend Images/Getty Images, Inc./Getty Images, Inc. 14 Multiple Regression 14.1 Multiple Regression Analysis 14.2 Assumptions of the Multiple

More information

Chapter 14. Multiple Regression Models. Multiple Regression Models. Multiple Regression Models

Chapter 14. Multiple Regression Models. Multiple Regression Models. Multiple Regression Models Chapter 14 Multiple Regression Models 1 Multiple Regression Models A general additive multiple regression model, which relates a dependent variable y to k predictor variables,,, is given by the model equation

More information

Chemical Engineering: 4C3/6C3 Statistics for Engineering McMaster University: Final examination

Chemical Engineering: 4C3/6C3 Statistics for Engineering McMaster University: Final examination Chemical Engineering: 4C3/6C3 Statistics for Engineering McMaster University: Final examination Duration of exam: 3 hours Instructor: Kevin Dunn 07 April 2012 dunnkg@mcmaster.ca This exam paper has 8 pages

More information

IE 316 Exam 1 Fall 2011

IE 316 Exam 1 Fall 2011 IE 316 Exam 1 Fall 2011 I have neither given nor received unauthorized assistance on this exam. Name Signed Date Name Printed 1 1. Suppose the actual diameters x in a batch of steel cylinders are normally

More information

Module 03 Lecture 14 Inferential Statistics ANOVA and TOI

Module 03 Lecture 14 Inferential Statistics ANOVA and TOI Introduction of Data Analytics Prof. Nandan Sudarsanam and Prof. B Ravindran Department of Management Studies and Department of Computer Science and Engineering Indian Institute of Technology, Madras Module

More information

Contents. Preface to Second Edition Preface to First Edition Abbreviations PART I PRINCIPLES OF STATISTICAL THINKING AND ANALYSIS 1

Contents. Preface to Second Edition Preface to First Edition Abbreviations PART I PRINCIPLES OF STATISTICAL THINKING AND ANALYSIS 1 Contents Preface to Second Edition Preface to First Edition Abbreviations xv xvii xix PART I PRINCIPLES OF STATISTICAL THINKING AND ANALYSIS 1 1 The Role of Statistical Methods in Modern Industry and Services

More information

An Introduction to Design of Experiments

An Introduction to Design of Experiments An Introduction to Design of Experiments Douglas C. Montgomery Regents Professor of Industrial Engineering and Statistics ASU Foundation Professor of Engineering Arizona State University Bradley Jones

More information

Regression Analysis IV... More MLR and Model Building

Regression Analysis IV... More MLR and Model Building Regression Analysis IV... More MLR and Model Building This session finishes up presenting the formal methods of inference based on the MLR model and then begins discussion of "model building" (use of regression

More information

In the previous chapter, we learned how to use the method of least-squares

In the previous chapter, we learned how to use the method of least-squares 03-Kahane-45364.qxd 11/9/2007 4:40 PM Page 37 3 Model Performance and Evaluation In the previous chapter, we learned how to use the method of least-squares to find a line that best fits a scatter of points.

More information

Chapter 3 Multiple Regression Complete Example

Chapter 3 Multiple Regression Complete Example Department of Quantitative Methods & Information Systems ECON 504 Chapter 3 Multiple Regression Complete Example Spring 2013 Dr. Mohammad Zainal Review Goals After completing this lecture, you should be

More information

Module B1: Multivariate Process Control

Module B1: Multivariate Process Control Module B1: Multivariate Process Control Prof. Fugee Tsung Hong Kong University of Science and Technology Quality Lab: http://qlab.ielm.ust.hk I. Multivariate Shewhart chart WHY MULTIVARIATE PROCESS CONTROL

More information

Trendlines Simple Linear Regression Multiple Linear Regression Systematic Model Building Practical Issues

Trendlines Simple Linear Regression Multiple Linear Regression Systematic Model Building Practical Issues Trendlines Simple Linear Regression Multiple Linear Regression Systematic Model Building Practical Issues Overfitting Categorical Variables Interaction Terms Non-linear Terms Linear Logarithmic y = a +

More information

Multiple Linear Regression

Multiple Linear Regression Andrew Lonardelli December 20, 2013 Multiple Linear Regression 1 Table Of Contents Introduction: p.3 Multiple Linear Regression Model: p.3 Least Squares Estimation of the Parameters: p.4-5 The matrix approach

More information

Experimental designs for multiple responses with different models

Experimental designs for multiple responses with different models Graduate Theses and Dissertations Graduate College 2015 Experimental designs for multiple responses with different models Wilmina Mary Marget Iowa State University Follow this and additional works at:

More information

Do not copy, post, or distribute

Do not copy, post, or distribute 14 CORRELATION ANALYSIS AND LINEAR REGRESSION Assessing the Covariability of Two Quantitative Properties 14.0 LEARNING OBJECTIVES In this chapter, we discuss two related techniques for assessing a possible

More information

20g g g Analyze the residuals from this experiment and comment on the model adequacy.

20g g g Analyze the residuals from this experiment and comment on the model adequacy. 3.4. A computer ANOVA output is shown below. Fill in the blanks. You may give bounds on the P-value. One-way ANOVA Source DF SS MS F P Factor 3 36.15??? Error??? Total 19 196.04 3.11. A pharmaceutical

More information

A Power Analysis of Variable Deletion Within the MEWMA Control Chart Statistic

A Power Analysis of Variable Deletion Within the MEWMA Control Chart Statistic A Power Analysis of Variable Deletion Within the MEWMA Control Chart Statistic Jay R. Schaffer & Shawn VandenHul University of Northern Colorado McKee Greeley, CO 869 jay.schaffer@unco.edu gathen9@hotmail.com

More information

Regression, part II. I. What does it all mean? A) Notice that so far all we ve done is math.

Regression, part II. I. What does it all mean? A) Notice that so far all we ve done is math. Regression, part II I. What does it all mean? A) Notice that so far all we ve done is math. 1) One can calculate the Least Squares Regression Line for anything, regardless of any assumptions. 2) But, if

More information

Ch 2: Simple Linear Regression

Ch 2: Simple Linear Regression Ch 2: Simple Linear Regression 1. Simple Linear Regression Model A simple regression model with a single regressor x is y = β 0 + β 1 x + ɛ, where we assume that the error ɛ is independent random component

More information

Mathematics for Economics MA course

Mathematics for Economics MA course Mathematics for Economics MA course Simple Linear Regression Dr. Seetha Bandara Simple Regression Simple linear regression is a statistical method that allows us to summarize and study relationships between

More information

Chapter 13 Experiments with Random Factors Solutions

Chapter 13 Experiments with Random Factors Solutions Solutions from Montgomery, D. C. (01) Design and Analysis of Experiments, Wiley, NY Chapter 13 Experiments with Random Factors Solutions 13.. An article by Hoof and Berman ( Statistical Analysis of Power

More information

OPTIMIZATION OF FIRST ORDER MODELS

OPTIMIZATION OF FIRST ORDER MODELS Chapter 2 OPTIMIZATION OF FIRST ORDER MODELS One should not multiply explanations and causes unless it is strictly necessary William of Bakersville in Umberto Eco s In the Name of the Rose 1 In Response

More information

IE 316 Exam 1 Fall 2011

IE 316 Exam 1 Fall 2011 IE 316 Exam 1 Fall 2011 I have neither given nor received unauthorized assistance on this exam. Name Signed Date Name Printed 1 1. Suppose the actual diameters x in a batch of steel cylinders are normally

More information

Lecture 2: Linear Models. Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011

Lecture 2: Linear Models. Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011 Lecture 2: Linear Models Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011 1 Quick Review of the Major Points The general linear model can be written as y = X! + e y = vector

More information

Hypothesis Testing hypothesis testing approach

Hypothesis Testing hypothesis testing approach Hypothesis Testing In this case, we d be trying to form an inference about that neighborhood: Do people there shop more often those people who are members of the larger population To ascertain this, we

More information

arxiv: v1 [stat.me] 14 Jan 2019

arxiv: v1 [stat.me] 14 Jan 2019 arxiv:1901.04443v1 [stat.me] 14 Jan 2019 An Approach to Statistical Process Control that is New, Nonparametric, Simple, and Powerful W.J. Conover, Texas Tech University, Lubbock, Texas V. G. Tercero-Gómez,Tecnológico

More information

Design of Engineering Experiments Chapter 5 Introduction to Factorials

Design of Engineering Experiments Chapter 5 Introduction to Factorials Design of Engineering Experiments Chapter 5 Introduction to Factorials Text reference, Chapter 5 page 170 General principles of factorial experiments The two-factor factorial with fixed effects The ANOVA

More information

Six Sigma Black Belt Study Guides

Six Sigma Black Belt Study Guides Six Sigma Black Belt Study Guides 1 www.pmtutor.org Powered by POeT Solvers Limited. Analyze Correlation and Regression Analysis 2 www.pmtutor.org Powered by POeT Solvers Limited. Variables and relationships

More information

BRIDGE CIRCUITS EXPERIMENT 5: DC AND AC BRIDGE CIRCUITS 10/2/13

BRIDGE CIRCUITS EXPERIMENT 5: DC AND AC BRIDGE CIRCUITS 10/2/13 EXPERIMENT 5: DC AND AC BRIDGE CIRCUITS 0//3 This experiment demonstrates the use of the Wheatstone Bridge for precise resistance measurements and the use of error propagation to determine the uncertainty

More information

Open book and notes. 120 minutes. Covers Chapters 8 through 14 of Montgomery and Runger (fourth edition).

Open book and notes. 120 minutes. Covers Chapters 8 through 14 of Montgomery and Runger (fourth edition). IE 330 Seat # Open book and notes 10 minutes Covers Chapters 8 through 14 of Montgomery and Runger (fourth edition) Cover page and eight pages of exam No calculator ( points) I have, or will, complete

More information

Statistical Inference with Regression Analysis

Statistical Inference with Regression Analysis Introductory Applied Econometrics EEP/IAS 118 Spring 2015 Steven Buck Lecture #13 Statistical Inference with Regression Analysis Next we turn to calculating confidence intervals and hypothesis testing

More information

Unit 27 One-Way Analysis of Variance

Unit 27 One-Way Analysis of Variance Unit 27 One-Way Analysis of Variance Objectives: To perform the hypothesis test in a one-way analysis of variance for comparing more than two population means Recall that a two sample t test is applied

More information

Design of Experiments

Design of Experiments Design of Experiments D R. S H A S H A N K S H E K H A R M S E, I I T K A N P U R F E B 19 TH 2 0 1 6 T E Q I P ( I I T K A N P U R ) Data Analysis 2 Draw Conclusions Ask a Question Analyze data What to

More information

Unit 22: Sampling Distributions

Unit 22: Sampling Distributions Unit 22: Sampling Distributions Summary of Video If we know an entire population, then we can compute population parameters such as the population mean or standard deviation. However, we generally don

More information

MATH602: APPLIED STATISTICS

MATH602: APPLIED STATISTICS MATH602: APPLIED STATISTICS Dr. Srinivas R. Chakravarthy Department of Science and Mathematics KETTERING UNIVERSITY Flint, MI 48504-4898 Lecture 10 1 FRACTIONAL FACTORIAL DESIGNS Complete factorial designs

More information

Directionally Sensitive Multivariate Statistical Process Control Methods

Directionally Sensitive Multivariate Statistical Process Control Methods Directionally Sensitive Multivariate Statistical Process Control Methods Ronald D. Fricker, Jr. Naval Postgraduate School October 5, 2005 Abstract In this paper we develop two directionally sensitive statistical

More information

Discrete Distributions

Discrete Distributions Discrete Distributions STA 281 Fall 2011 1 Introduction Previously we defined a random variable to be an experiment with numerical outcomes. Often different random variables are related in that they have

More information

The Matrix Algebra of Sample Statistics

The Matrix Algebra of Sample Statistics The Matrix Algebra of Sample Statistics James H. Steiger Department of Psychology and Human Development Vanderbilt University James H. Steiger (Vanderbilt University) The Matrix Algebra of Sample Statistics

More information

Chapter 10: Statistical Quality Control

Chapter 10: Statistical Quality Control Chapter 10: Statistical Quality Control 1 Introduction As the marketplace for industrial goods has become more global, manufacturers have realized that quality and reliability of their products must be

More information

Addition of Center Points to a 2 k Designs Section 6-6 page 271

Addition of Center Points to a 2 k Designs Section 6-6 page 271 to a 2 k Designs Section 6-6 page 271 Based on the idea of replicating some of the runs in a factorial design 2 level designs assume linearity. If interaction terms are added to model some curvature results

More information

PHY 123 Lab 1 - Error and Uncertainty and the Simple Pendulum

PHY 123 Lab 1 - Error and Uncertainty and the Simple Pendulum To print higher-resolution math symbols, click the Hi-Res Fonts for Printing button on the jsmath control panel. PHY 13 Lab 1 - Error and Uncertainty and the Simple Pendulum Important: You need to print

More information

Introduction to Uncertainty and Treatment of Data

Introduction to Uncertainty and Treatment of Data Introduction to Uncertainty and Treatment of Data Introduction The purpose of this experiment is to familiarize the student with some of the instruments used in making measurements in the physics laboratory,

More information

Process Characterization Using Response Surface Methodology

Process Characterization Using Response Surface Methodology Process Characterization Using Response Surface Methodology A Senior Project Presented to The Faculty of the Statistics Department California Polytechnic State University, San Luis Obispo In Partial Fulfillment

More information

CHAPTER 4 EXPERIMENTAL DESIGN. 4.1 Introduction. Experimentation plays an important role in new product design, manufacturing

CHAPTER 4 EXPERIMENTAL DESIGN. 4.1 Introduction. Experimentation plays an important role in new product design, manufacturing CHAPTER 4 EXPERIMENTAL DESIGN 4.1 Introduction Experimentation plays an important role in new product design, manufacturing process development and process improvement. The objective in all cases may be

More information

POL 681 Lecture Notes: Statistical Interactions

POL 681 Lecture Notes: Statistical Interactions POL 681 Lecture Notes: Statistical Interactions 1 Preliminaries To this point, the linear models we have considered have all been interpreted in terms of additive relationships. That is, the relationship

More information

Confidence Intervals, Testing and ANOVA Summary

Confidence Intervals, Testing and ANOVA Summary Confidence Intervals, Testing and ANOVA Summary 1 One Sample Tests 1.1 One Sample z test: Mean (σ known) Let X 1,, X n a r.s. from N(µ, σ) or n > 30. Let The test statistic is H 0 : µ = µ 0. z = x µ 0

More information

2010 Stat-Ease, Inc. Dual Response Surface Methods (RSM) to Make Processes More Robust* Presented by Mark J. Anderson (

2010 Stat-Ease, Inc. Dual Response Surface Methods (RSM) to Make Processes More Robust* Presented by Mark J. Anderson ( Dual Response Surface Methods (RSM) to Make Processes More Robust* *Posted at www.statease.com/webinar.html Presented by Mark J. Anderson (Email: Mark@StatEase.com ) Timer by Hank Anderson July 2008 Webinar

More information

CHAPTER EIGHT Linear Regression

CHAPTER EIGHT Linear Regression 7 CHAPTER EIGHT Linear Regression 8. Scatter Diagram Example 8. A chemical engineer is investigating the effect of process operating temperature ( x ) on product yield ( y ). The study results in the following

More information

9 Correlation and Regression

9 Correlation and Regression 9 Correlation and Regression SW, Chapter 12. Suppose we select n = 10 persons from the population of college seniors who plan to take the MCAT exam. Each takes the test, is coached, and then retakes the

More information

Non-parametric Hypothesis Testing

Non-parametric Hypothesis Testing Non-parametric Hypothesis Testing Procedures Hypothesis Testing General Procedure for Hypothesis Tests 1. Identify the parameter of interest.. Formulate the null hypothesis, H 0. 3. Specify an appropriate

More information

DIAGNOSIS OF BIVARIATE PROCESS VARIATION USING AN INTEGRATED MSPC-ANN SCHEME

DIAGNOSIS OF BIVARIATE PROCESS VARIATION USING AN INTEGRATED MSPC-ANN SCHEME DIAGNOSIS OF BIVARIATE PROCESS VARIATION USING AN INTEGRATED MSPC-ANN SCHEME Ibrahim Masood, Rasheed Majeed Ali, Nurul Adlihisam Mohd Solihin and Adel Muhsin Elewe Faculty of Mechanical and Manufacturing

More information

Design of Engineering Experiments Part 2 Basic Statistical Concepts Simple comparative experiments

Design of Engineering Experiments Part 2 Basic Statistical Concepts Simple comparative experiments Design of Engineering Experiments Part 2 Basic Statistical Concepts Simple comparative experiments The hypothesis testing framework The two-sample t-test Checking assumptions, validity Comparing more that

More information

Group comparison test for independent samples

Group comparison test for independent samples Group comparison test for independent samples The purpose of the Analysis of Variance (ANOVA) is to test for significant differences between means. Supposing that: samples come from normal populations

More information

Chemometrics Unit 4 Response Surface Methodology

Chemometrics Unit 4 Response Surface Methodology Chemometrics Unit 4 Response Surface Methodology Chemometrics Unit 4. Response Surface Methodology In Unit 3 the first two phases of experimental design - definition and screening - were discussed. In

More information

Introduction. Chapter 1

Introduction. Chapter 1 Chapter 1 Introduction In this book we will be concerned with supervised learning, which is the problem of learning input-output mappings from empirical data (the training dataset). Depending on the characteristics

More information

Confirmation Sample Control Charts

Confirmation Sample Control Charts Confirmation Sample Control Charts Stefan H. Steiner Dept. of Statistics and Actuarial Sciences University of Waterloo Waterloo, NL 3G1 Canada Control charts such as X and R charts are widely used in industry

More information

401 Review. 6. Power analysis for one/two-sample hypothesis tests and for correlation analysis.

401 Review. 6. Power analysis for one/two-sample hypothesis tests and for correlation analysis. 401 Review Major topics of the course 1. Univariate analysis 2. Bivariate analysis 3. Simple linear regression 4. Linear algebra 5. Multiple regression analysis Major analysis methods 1. Graphical analysis

More information

Multiple comparisons - subsequent inferences for two-way ANOVA

Multiple comparisons - subsequent inferences for two-way ANOVA 1 Multiple comparisons - subsequent inferences for two-way ANOVA the kinds of inferences to be made after the F tests of a two-way ANOVA depend on the results if none of the F tests lead to rejection of

More information

THE ROYAL STATISTICAL SOCIETY HIGHER CERTIFICATE

THE ROYAL STATISTICAL SOCIETY HIGHER CERTIFICATE THE ROYAL STATISTICAL SOCIETY 004 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE PAPER II STATISTICAL METHODS The Society provides these solutions to assist candidates preparing for the examinations in future

More information

DISTRIBUTIONS USED IN STATISTICAL WORK

DISTRIBUTIONS USED IN STATISTICAL WORK DISTRIBUTIONS USED IN STATISTICAL WORK In one of the classic introductory statistics books used in Education and Psychology (Glass and Stanley, 1970, Prentice-Hall) there was an excellent chapter on different

More information

, (1) e i = ˆσ 1 h ii. c 2016, Jeffrey S. Simonoff 1

, (1) e i = ˆσ 1 h ii. c 2016, Jeffrey S. Simonoff 1 Regression diagnostics As is true of all statistical methodologies, linear regression analysis can be a very effective way to model data, as along as the assumptions being made are true. For the regression

More information

Statistics Boot Camp. Dr. Stephanie Lane Institute for Defense Analyses DATAWorks 2018

Statistics Boot Camp. Dr. Stephanie Lane Institute for Defense Analyses DATAWorks 2018 Statistics Boot Camp Dr. Stephanie Lane Institute for Defense Analyses DATAWorks 2018 March 21, 2018 Outline of boot camp Summarizing and simplifying data Point and interval estimation Foundations of statistical

More information

Sleep data, two drugs Ch13.xls

Sleep data, two drugs Ch13.xls Model Based Statistics in Biology. Part IV. The General Linear Mixed Model.. Chapter 13.3 Fixed*Random Effects (Paired t-test) ReCap. Part I (Chapters 1,2,3,4), Part II (Ch 5, 6, 7) ReCap Part III (Ch

More information

Analysis of Variance (ANOVA)

Analysis of Variance (ANOVA) Analysis of Variance ANOVA) Compare several means Radu Trîmbiţaş 1 Analysis of Variance for a One-Way Layout 1.1 One-way ANOVA Analysis of Variance for a One-Way Layout procedure for one-way layout Suppose

More information