SUCCESSFUL MEASUREMENT SYSTEM PRACTICES A COMPARATIVE STUDY - AN EMPIRICAL SURVEY


Dr. Chandrashekar Vishwanathan, Head Faculty, CXO and Chief Mentor, The School of Continuous Improvement, Ulhasnagar, Mumbai, India. vish.growth@gmail.com
Dr. T S Mohanchandralal, Former Director, Maharishi Institute of Management, Bangalore. laltsmc@rediffmail.com

Abstract: A production system generates excess variation in the dimensions of a product. This excess variation is a defect, and it must be eliminated or kept within the tolerance limits. Before defects can be eliminated, the measurement system error must itself be kept within the tolerance band, so that the measurement system does not mislead the operators who measure the dimensions of a product. Measurement System Analysis quantifies the measurement error through the examination of equipment variation, operator variation, part variation, and operator-by-part variation. When an operator measures a quality characteristic several times using the same equipment and in the same measuring environment, the variation in the measurements is attributed to the measuring equipment; this is called repeatability. When several operators measure a part multiple times, the differences in their results are termed reproducibility. Total Gauge R&R is the estimate of the combined variation from repeatability and reproducibility. An efficient measurement system must keep the measurement error as small as possible, to ensure the production of non-defective parts. There are five approaches to the Gauge R&R study, and each approach has its own merits and limitations. The X-Bar-Range method and the AIAG approach, though simple in terms of computation, are based on standard deviation and not on variance in the strict sense of the term; they erroneously impose an additive property on the measurement system, and they never measure the interaction variation. On the other hand, the ANOVA approach, the Evaluating the Measurement Process (EMP) approach, and Donald Wheeler's approach to the Honest Gauge R&R study are based on the percentage contribution of variance, which is both linear and additive, whereas standard deviation is nonlinear and non-additive. Hence the distorted percentages based on standard deviations are misleading. As rightly pointed out by Donald Wheeler, they violate the fundamental principles of mathematics and are therefore not suitable for meaningful interpretation. Donald Wheeler, a celebrated quality specialist, has devised procedures for calculating additional metrics based on variance, making the Gauge R&R study more rigorous and more meaningful. Hence Wheeler's approach to the Gauge R&R study has an edge over the four other approaches in providing reliable estimates of repeatability, reproducibility, and part variation for meaningful interpretation. This paper examines all five approaches in depth using live data collected from a company, and finally shows the superiority of Wheeler's approach over the rest in the Gauge R&R study.

Key Words: Repeatability, Reproducibility, Standard deviation, Variance, ANOVA, Range, Number of distinct categories, X-bar chart, R chart, Analysis of Main Effects, Crossover capabilities, Trigonometric functions, Intra-Class Correlation, Effective Measurement Increment, Class monitors, and Watershed specifications.

1. Introduction
Production is process-centric. A process involves technology and a host of heterogeneous inputs at various levels.
When everything in the production environment is in apple-pie order, the optimized production process stays in its natural state. But over a period of time, on account of changes in the production environment, and even after the standardization of inputs and operations and preventive and breakdown maintenance, the production system generates variations in the dimensions of a product in excess of the prescribed standard measure that comes from the VOC and VOE. This excess variation or imperfection is technically defined as a defect, and it is the product of special and random causes. Special causes cause defects; they are inimical to quality, and hence must be eliminated once and for all from the production process.

2. Methodology
To start with, the features of the measurement system and the preliminaries that need to be checked before conducting the study are highlighted. An appropriate test is chosen to find out whether the sampled data follow a normal distribution, after probing the features of various non-parametric tests and the characteristics of the data. When tolerance limits are not available from the VOC/VOE, practitioners often consider ±3σ or ±4σ from the mean as specification limits. Here, after ascertaining that the sampled data follow a normal distribution, the tolerance limits are determined from the spread of the original data. Since 96% of the data are located between −1σ and +2σ from the mean of all the 48 measurements, these bounds are taken as a proxy for the specification limits. To increase the coverage level of the total measurement variation to 99.73%, 6.0 is used as a multiplier in the calculations instead of 5.15. Descriptive statistics and six sigma tools such as control charts, box plots, the Pareto diagram, and the probability plot are used at appropriate places to elicit and display the hidden characteristics of the data. The computational procedures are demonstrated, and the results of the five approaches are examined in the light of the AIAG and Wheeler guidelines. This empirical survey also highlights the comprehensive nature of the Wheeler approach to the Gauge R&R study.

3. Measurement System Analysis (MSA)
AIAG defines a measurement system as a collection of gauges, standards, operations, methods, fixtures, software, personnel, environment, and assumptions used to quantify a unit of measure, or the complete process used to obtain measurements 3. MSA quantifies the measurement error through the examination of equipment variation, operator variation, part variation, and operator-by-part variation. The measurement system may be affected by bias, stability, linearity, repeatability, and reproducibility. Bias is an assumed shift in value for measurements of equal magnitude; it is the difference between the mean of repeated measurements and the reference value. Linearity is how accurate the measurements are across the expected range of the measurements. Stability measures the variation when the same operator measures the same unit using the same gage over an extended period of time. Repeatability measures the variation due to equipment error, and reproducibility measures the variability due to operators, their unique measurement styles, setups, and other environmental factors associated with the measurement system. The measurement system focuses on precision and not accuracy. Accuracy refers to the degree of proximity of a measured value to a standard value. Precision is the degree to which repeated measurements agree with each other. They are independent of each other.

4. Sources of Variation
The components of variation can be expressed by an identity equation: σt² = σp² + σs² + σms², where σt² is the total variance, σp² is the process variance, σs² is the sampling variance, and σms² is the measurement system variance.
Their relationship is linear and additive in terms of variance, but not in terms of standard deviation. Since the sampling variance is usually part of the process variance, the total variance can be defined as the sum of the process variance and the measurement system variance, i.e., σt² = σp² + σms². The standard deviations combine like the sides of a right triangle: when σp² > σms², the process component has the larger impact on σt², and likewise, when σms² > σp², the measurement system component has the greater impact 4. Between process variance and measurement system variance, the latter is more fundamental. Therefore, to reduce the total variation one has to minimize the percentage of variance contributed by σms². In a Gauge R&R study, σms² can be split into two components, σms² = σ²Repe + σ²Repro, and their estimates determine the acceptability of the gage 5. Repeatability and reproducibility together constitute the measurement error, or noise. A reliable measurement system must minimize this noise to the maximum extent possible.
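To make the additivity concrete, the short sketch below combines hypothetical variance components and shows that the percentage contributions add to 100 only on the variance scale, not on the standard deviation scale. All numeric values here are illustrative assumptions, not the study data.

import math

# Hypothetical variance components (in squared measurement units).
var_repeatability = 0.0015    # equipment (test-retest) variance, assumed
var_reproducibility = 0.0012  # operator-related variance, assumed
var_part = 0.1100             # part-to-part variance, assumed

var_ms = var_repeatability + var_reproducibility   # sigma_ms^2
var_total = var_part + var_ms                       # sigma_t^2 = sigma_p^2 + sigma_ms^2

# Percent contributions on the variance scale are additive and sum to 100.
for name, v in [("repeatability", var_repeatability),
                ("reproducibility", var_reproducibility),
                ("part-to-part", var_part)]:
    print(f"{name:16s}: {100 * v / var_total:6.2f} % of total variance")

# Standard deviations do NOT add: the total SD is the root of the sum of squares.
sd_total = math.sqrt(var_total)
print("sd_total =", round(sd_total, 4),
      "while sd_part + sd_ms =", round(math.sqrt(var_part) + math.sqrt(var_ms), 4))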

5. Repeatability vs. Reproducibility
When an operator measures a quality characteristic several times in the same measuring environment, the differences are attributed to the measuring equipment. This is called Equipment Variation (EV) in Gauge R&R studies or "Within" variation in ANOVA studies. It is also termed within-operator variability 6; Wheeler calls it test-retest error. When an operator measures multiple parts multiple times, the differences in results are attributable to the variations in the parts and the variations in the equipment. When several operators measure the same parts in a randomized order, each several times, the differences in all the results can be traced to (1) equipment variation, (2) part variation, (3) operator variation, and (4) the interaction variation (operators by parts). These variations are termed reproducibility; it can also be called between-operator variation. In practice, operators perform fewer replications on more parts than vice versa 7. Total Gauge R&R is the estimate of the combined variation from repeatability and reproducibility 8.

6. Planning for the Gauge R&R Study
The process operators need to ensure that the experiment is set up so that homogeneous parts are available for the study. For example, in a company producing nails from 8 different machines, measurement systems analysis should be conducted on nails produced from one machine, or from a group of machines having similar characteristics, and not from all machines. It should be noted here that Measurement Systems Analysis is conducted to understand variation in the measurement system and not variation amongst the machines. Thus, one should always consider parts produced under homogeneous conditions for Measurement Systems Analysis. Normality is another consideration which practitioners weigh before conducting Measurement Systems Analysis. This practice should be avoided at all costs, because normality testing is a procedure conducted on existing or historical data. Historical data may be fraught with data collection or measurement system errors, which may skew the normality results.

7. PART-1: X-BAR-RANGE GRR STUDY
When the measurement error exceeds the tolerance band, the measuring device will frequently include bad parts and exclude good ones. Hence MSA must precede all other subsequent analyses. The X-Bar-Range method offers a quick approximation of measurement variability and indicates whether a detailed GRR study is warranted. This method can detect unacceptable measurement variation about 80% of the time when two appraisers each measure 5 parts twice. The X-Bar-Range chart is appropriate when the subgroup size is 8 or less; when it is 9 or more, the X-Bar-S chart is the better choice. It has been estimated that 98% of all processes can be effectively represented by using either the XmR charts or the X-Bar and R charts 0. The X-Bar chart and the range chart together determine the stability of a process. Since the control limits on the mean chart are derived from the average range, if the range chart is out of control, then the control limits on the mean chart may wrongly indicate an out-of-control condition or fail to detect such a situation. The consistency of the R chart is therefore a precondition for a meaningful interpretation of the mean chart. In the R chart shown below, all 16 range points are well within the control limits, and hence there is consistency in the measurement process.
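A minimal sketch of that consistency check is given below: it computes the range-chart upper control limit UCL_R = D4 × R-bar for subgroups of three trials and flags any subgroup range that exceeds it. The subgroup ranges used here are placeholders with the study's structure, not the actual readings.

# Consistency check for the range chart (subgroup size = 3 trials per operator-part).
D4_FOR_N3 = 2.574            # standard control-chart constant for subgroups of 3
D3_FOR_N3 = 0.0              # lower-limit constant is 0 for subgroups of 3

def range_chart_limits(subgroup_ranges):
    r_bar = sum(subgroup_ranges) / len(subgroup_ranges)
    return r_bar, D3_FOR_N3 * r_bar, D4_FOR_N3 * r_bar

# Placeholder ranges for the 16 operator-by-part subgroups (illustrative only).
ranges = [0.05, 0.07, 0.06, 0.08, 0.05, 0.09, 0.06, 0.07,
          0.06, 0.05, 0.08, 0.07, 0.06, 0.09, 0.05, 0.06]

r_bar, lcl, ucl = range_chart_limits(ranges)
out_of_control = [i + 1 for i, r in enumerate(ranges) if r > ucl or r < lcl]
print(f"R-bar = {r_bar:.4f}, UCL_R = {ucl:.4f}, out-of-control subgroups: {out_of_control}")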
The range chart displays the variation in the range measurements of each subgroup. The X-bar chart plots the means of the subgroups; it compares the part-to-part variation to the repeatability component. The center line represents the overall mean of all the measurements. The upper and lower control limits are based on the number of measurements in each average and on the repeatability estimate. The mean chart consists of the average measurement of each part. Since this chart plots the data generated by the operators, one can check here the consistency of each operator. It is good if this chart shows more variation between part averages than is expected from repeatability variation alone. Many data points scattered above and below the control limits indicate that part-to-part variation is much greater than equipment variation. The R chart plots points that represent the difference between the largest and the smallest measurements of each part for each operator. The range is 0 when the measurements are identical. After plotting all the range points of the operators, one can compare the consistency of each of them. If an operator is not consistent in measuring the parts, points on the R chart will fall outside the control limits. That means special causes are present, and they should be identified and removed.

If the measurements by the operators are consistent, all the points will be within the control limits. From Charts A&B it is clear that in the R charts of the two operators all eight points are within the control limits. In the case of operator A, all eight points are closer to the central line than those of operator B. Using the 48 data points given in Table-7, the following vital metrics are computed for analysis, interpretation, and comparison.

Computation of the Vital Metrics of the X-Bar-Range Study
The average range of the two operators is R-double-bar = 0.0655. The difference between the maximum and the minimum operator averages is x-diff = 0.05, and the range of the part means is Rp = 0.98.
EV = K1 × R-double-bar, where K1 depends on the number of trials; for 3 trials K1 is 0.5908. Hence EV = 0.5908 × 0.0655 = 0.0387.
AV = sqrt[(x-diff × K2)² − EV²/(n × r)], where K2 depends on the number of operators; for 2 operators it is 0.7071. Hence AV = sqrt[(0.05 × 0.7071)² − 0.0387²/(8 × 3)] = 0.0345.
GRR = sqrt(EV² + AV²) = sqrt(0.0387² + 0.0345²) = 0.0518.
PV = Rp × K3, where K3 depends on the number of parts; for 8 parts K3 is 0.3375. Hence PV = 0.98 × 0.3375 = 0.3308.
TV = sqrt(GRR² + PV²) = sqrt(0.0518² + 0.3308²) = 0.3348.

Table-4: GRR Based on the X-Bar and Range Method
Source | SD | Study Variation (6 SD) | % Study Variation (%SV) | Variance | % Contribution of Variance
EV | 0.0387 | 0.2322 | 11.56 | 0.0015 | 1.34
AV | 0.0345 | 0.2070 | 10.30 | 0.0012 | 1.06
Gage R&R | 0.0518 | 0.3108 | 15.47 | 0.0027 | 2.39
PV | 0.3308 | 1.9848 | 98.81 | 0.1094 | 97.61
TV | 0.3348 | 2.0088 | 100.00 | 0.1121 | 100.00*
*GRR (R&R) excluded from the column total.

The number of distinct categories (ndc) = 1.41 × (PV/GRR) = 1.41 × (0.3308/0.0518) = 9.0. The integer value is 9, and hence the gage can distinguish all the 8 parts. The Precision-to-Tolerance Ratio (P/T) = (6 × σGRR)/(USL − LSL), where USL is 3.54 and LSL is 2.55, so the tolerance is 0.99; P/T = (6 × 0.0518)/0.99 = 31.4%.
%EV = (EV/TV) × 100 = (0.0387/0.3348) × 100 = 11.56%.
%AV = (AV/TV) × 100 = (0.0345/0.3348) × 100 = 10.30%.
%GRR = (GRR/TV) × 100 = (0.0518/0.3348) × 100 = 15.47%.
%PV = (PV/TV) × 100 = (0.3308/0.3348) × 100 = 98.81%.
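The arithmetic above can be scripted directly. The sketch below reproduces the X-Bar-Range calculations from the summary statistics quoted in the text (average range, operator difference, and part range); the K1, K2, and K3 values are the standard published constants for 3 trials, 2 operators, and 8 parts.

import math

# Summary statistics taken from the study text.
r_double_bar = 0.0655   # average of the operators' average ranges
x_diff = 0.05           # max minus min operator average
r_part = 0.98           # range of the part means
n_parts, n_ops, n_trials = 8, 2, 3
USL, LSL = 3.54, 2.55

# Standard X-Bar-Range GRR constants.
K1 = 0.5908             # 3 trials
K2 = 0.7071             # 2 operators
K3 = 0.3375             # 8 parts

EV = K1 * r_double_bar
AV = math.sqrt(max((x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials), 0.0))
GRR = math.sqrt(EV ** 2 + AV ** 2)
PV = K3 * r_part
TV = math.sqrt(GRR ** 2 + PV ** 2)

for name, value in [("EV", EV), ("AV", AV), ("GRR", GRR), ("PV", PV), ("TV", TV)]:
    print(f"{name:4s} sd={value:.4f}  %SV={100 * value / TV:6.2f}  %var={100 * (value / TV) ** 2:6.2f}")

ndc = 1.41 * PV / GRR
p_to_t = 6 * GRR / (USL - LSL)
print(f"ndc = {ndc:.1f}, P/T = {100 * p_to_t:.1f}%")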

Interpretation
Generally, an acceptable gage will have lower values for repeatability and reproducibility and higher values for parts, as seen in the above table. Higher part variation speaks of the capability of the measuring device to recognize the minute differences between parts. Here part variation consumes about 99% of the total variation, and the rest accounts for the measurement error. The ndc is 9, and it also confirms the high differentiating capability of the measuring device. As shown in the mean charts A&B, all the data points lie outside the control limits, which shows the gauge is sensitive enough to detect the minute differences between parts. Thus it can serve as a visual presentation of the P/T ratio. A low P/T ratio suggests that the impact of the measurement system on the product quality variation is small, while a larger P/T ratio indicates that too much measurement system variation is showing up relative to the specifications. When this ratio is greater than 30%, the measurement tool will not be effective enough to determine whether a part is good or bad. The highest goal for industries at present is to maintain the P/T ratio below 0.1. The secondary goal is to maintain the P/T ratio below 0.3, and P/T ratios over 0.5 indicate ineffective measurement systems 1. Here the P/T ratio is 31%, which is 1% greater than the critical value, and hence the measurement system variation may have a marginal impact on product quality variation. Here repeatability is greater than reproducibility, though the difference is marginal. Even then, this may call for gage recalibration, proper maintenance of the measurement device, or sometimes its redesign. In extreme cases replacement with a better device may be necessary. The noise (GRR) here is 0.0518; its %SV is 15.47, and its % contribution of variance is 2.39. As per the AIAG guidelines, if the gage R&R %SV is between 10% and 30%, and if the % contribution of variance is between 1% and 9%, the acceptability of the measurement system depends on the cost of the gage, the cost of repair, or other factors. Of course customer approval is vital. Here the measurement system needs further improvement. Recalibration of the gage, rationalization of the measurement procedures, motivation of the operators, etc., can be effective in making further improvements in the measurement system.

Evaluation
In the above table the %SV of all sources does not add up to 100%; it is about 121%! Hence engineers could never figure out exactly what the final numbers in a gauge R&R study represent. The AIAG manual states that because standard deviation is expressed in the same units as the process data, it is helpful to compute other metrics, such as Study Variation (6 SD), % Tolerance (if specification limits are included), and % Process (if a historical SD is included). But when the % contribution of variance, which is both linear and additive, is compared to its counterpart based on the nonlinear and non-additive standard deviation, the distorted percentages are obvious. Donald Wheeler, a celebrated quality specialist, has devised procedures for calculating additional metrics based on variance, making the Gauge R&R study more meaningful. Hence Wheeler's approach to the Gauge R&R study has an edge over the other approaches in providing reliable estimates for meaningful interpretation.

8. PART-2: AIAG Approach to the Gauge R&R Study
The Automotive Industry Action Group (AIAG) of the American Society for Quality popularized the Gauge R&R study.
The easy availability of comprehensive data on the various kinds of processes found in the auto industry, together with software support, made this approach a de-facto standard for quality practitioners across multiple industries. The AIAG approach took its birth in the 1960s and, since 1990, has witnessed many generational changes and several serious revisions, but its fundamentals still remain the same. This approach starts with the computation of the upper range limit (R-bar × D4) for the given dataset, to find out whether the average range of any subgroup exceeds it. This can happen when there is inconsistency in the measurement process due to special causes, and it will infect all subsequent computations. If any average range value exceeds the upper range limit, the underlying special causes should be eliminated before proceeding further. Here the upper range limit is 0.0656 × 2.57 = 0.169, and all 16 range points are well within the control limits; hence there is consistency in the measurement process and the gage R&R study can be launched. As shown below, the data are rearranged into three subgroups with eight parts in each group, and their averages and ranges are computed to calculate the variations of the components. Generally, differences between subgroups may be greater than the differences within the subgroups. Part-to-part, operator-to-operator, and operator-by-part variations are reflected in the between-subgroup differences, while test-retest variation is reflected in the within-subgroup differences. However, here both types of differences are within narrow limits.

Computation of the Variations of the Components
1. The upper range limit is 0.169, and all three subgroup ranges are below it.
2. Estimated repeatability: EV = R-bar/d2 = 0.0656/1.693 = 0.0387.
3. Estimated reproducibility: AV = sqrt[(Ro/d2*)² − (o/(n × o × r)) × EV²] = sqrt[(0.055/1.41)² − (2/48) × 0.0387²] = 0.0390.
4. Combined R&R: GRR = sqrt(EV² + AV²) = 0.0549.
5. Estimated product standard deviation: PV = part range/d2* = 0.3318.
6. Estimated total standard deviation: TV = sqrt(EV² + AV² + PV²) = 0.3363.

Table-5 presents the rearranged data: the three subgroups (SG1-SG3), the readings for parts P1-P8 in each subgroup, and the subgroup averages and ranges.

The above components expressed as percentages of TV:
7. %EV = (0.0387/0.3363) × 100 = 11.51%.
8. %AV = (0.0390/0.3363) × 100 = 11.60%.
9. %GRR = (0.0549/0.3363) × 100 = 16.55%.
10. %PV = (0.3318/0.3363) × 100 = 98.66%.

The components expressed as percentages of the specified tolerance:
11. %EV = 6 × (0.0387/0.99) × 100 = 23.45%.
12. %AV = 6 × (0.0390/0.99) × 100 = 23.64%.
13. %GRR = 6 × (0.0549/0.99) × 100 = 33.27%.

Here each estimate is multiplied by 6 (and by 100 to express it as a percentage) and divided by the specified tolerance. The specified tolerance is the permissible limit of variation; it is the difference between the upper and lower specification limits. Generally, these limits come from the VOC and VOE. When this information is not available, some practitioners consider +3 SD and −3 SD, or +4 SD and −4 SD, from the mean as specification limits. Since the aim here is the minimization of variation by improving the measurement process, it is better to reduce the standard deviation and move closer to the mean in several stages until the final measures are well within the VOC and VOE.

Table-6: GRR Based on the AIAG Approach
Source | Estimated Measure in SD | As % of TV | Variance | % Contribution of Variance
EV (Repeatability) | 0.0387 | 11.51 | 0.0015 | 1.32
AV (Reproducibility) | 0.0390 | 11.60 | 0.0015 | 1.34
GRR (R&R) | 0.0549 | 16.55 | 0.0030 | 2.65
PV | 0.3318 | 98.66 | 0.1101 | 97.34
TV | 0.3363 | 100.00 | 0.1131 | 100.00*
*GRR (R&R) excluded from the column total.

Here the following procedure is followed to arrive at a proxy for the specification limits. The grand mean of the dataset is computed as 2.88 units and the standard deviation as 0.33 units. Since 96% of the data (46 out of 48) are concentrated between −1σ and +2σ from the grand mean, the lower specification limit is set as 2.55 and the upper specification limit as 3.54. This tightened specification is maintained in the interest of rigorous analysis. The specified tolerance here is 0.99 (3.54 − 2.55). Here repeatability and reproducibility are nearly equal: they individually consume about 12% of the study variation and about 1.33% contribution of variance each. The combined R&R consumes about 16% of the study variation, while it is 2.65% in terms of contribution of variance. Part variation consumes the rest of the contribution of variance, making the total 100. The number of distinct categories (ndc) here is 8 [1.41 × (0.3318/0.0549)]. The AIAG manual considers only the integer value and declares 5 or greater to be good. Here the measurement system is capable of minutely distinguishing all the 8 parts.
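Because the paper's central criticism is that percentages of study variation (based on standard deviations) do not add up while percentages of variance do, the short sketch below makes that comparison explicit using the AIAG estimates quoted above (EV = 0.0387, AV = 0.0390, PV = 0.3318).

import math

# AIAG component estimates quoted in the text (standard deviations).
EV, AV, PV = 0.0387, 0.0390, 0.3318
TV = math.sqrt(EV ** 2 + AV ** 2 + PV ** 2)

components = {"EV": EV, "AV": AV, "PV": PV}

# Percent of study variation (SD ratio) versus percent contribution of variance.
pct_sv = {k: 100 * v / TV for k, v in components.items()}
pct_var = {k: 100 * (v / TV) ** 2 for k, v in components.items()}

for k in components:
    print(f"{k}: %SV = {pct_sv[k]:6.2f}, %variance = {pct_var[k]:6.2f}")
print("sum of %SV       =", round(sum(pct_sv.values()), 2), "(exceeds 100)")
print("sum of %variance =", round(sum(pct_var.values()), 2), "(adds to 100)")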

When the parts have different orientations, it is quite natural that part variation would be high, as in the present case. As per the AIAG norms, when the % study variation lies between 10% and 30% and the % contribution of variance lies between 1% and 9%, then, just as was found in the X-Bar-Range approach, the acceptability of the measurement system depends on the approval of the customer, the cost of the measuring device, the cost of repair, or other factors. However, when the three percentages of study variation are added up, the total reaches 121.77%! It laughs at common sense. When a trigonometric function is mistaken for a mere mathematical proportion, it cannot be otherwise. Hence one is left with no alternative except to look for a more refined method for the Gauge R&R study that is compatible with the basic rules of mathematics and permits meaningful interpretation.

9. PART-3: ANOVA GRR Study
Analysis of Variance (ANOVA) examines the components of total variation in order to know their statistical significance in impacting the total variation. The X-Bar and Range method and the AIAG approach compute the components of variation as percentages based on standard deviation, whereas ANOVA is based on the mathematically sound variance, and hence their results differ. ANOVA has many attractive features. Besides providing more accurate and precise estimates of variation, ANOVA can accommodate more parts and more operators in the study. Unlike the other methods, ANOVA has the ability to quantify the interaction between parts and operators. It allows hypothesis testing and the computation of confidence intervals on key metrics. ANOVA results are therefore more meaningful for drawing the right conclusions. However, ANOVA can be extremely sensitive to round-off error; different software gives different results, and when the black-box approaches disagree, it is hard to find the correct results without performing the required computation 3.

Tables-1A and 1B in the Appendix have the results of the 48 trials. The operators measure the parts, picking them up randomly. There are 48 results with 2.88 as the grand mean (x-double-bar). The readings of operator A are above 3 for the 1st and 8th parts in all three trials; the rest of the readings are close to 3. The trial averages vary between 2.85 and 2.87, and they are very close to 2.86, the operator's overall mean. Their coefficients of variation narrowly vary between 11.2% and 11.67%. The readings of operator B are above 3 for the 1st, 5th, and 8th parts in all three trials, and they are below 3 for the rest of the readings. The trial averages vary between 2.89 and 2.93, and their coefficients of variation vary between 12.18% and 13.44%; these averages are close to the operator's overall average of 2.91. The overall coefficient of variation for operator A is 10.99%, while it is 12.3% for operator B. Notwithstanding these variations within narrow limits, all the readings and their statistical measures exhibit a high degree of parallelism.

Computing the Components of Variation for ANOVA 4
The variance of a set of values can be computed using the equation σ² = Σ(xi − x-bar)²/(n − 1). The total sum of squares decomposes as SS_T = SS_O + SS_P + SS_O*P + SS_E. SS_T is the sum of the squared deviations of each result from the grand average (x-double-bar); here the grand average is 2.88 (138.4/48). SS_T is computed as SS_T = Σi Σj Σm (x_ijm − x-double-bar)², where k is the number of operators (2), n is the number of parts (8), r is the number of trials (3), and x_ijm is the reading of the i-th operator measuring the j-th part on the m-th trial. To obtain SS_T, one must subtract the grand mean from each individual reading and square the outcome; for example, (x111 − x-double-bar)² = 0.349 for the first reading of operator A in Table-1A. This operation is done for all the trials and the results are added up to obtain SS_T; Table-2 in the Appendix shows the calculations. The degrees of freedom associated with SS_T are (nkr − 1), i.e., (48 − 1) = 47.

Table-7 gives the basic statistics (mean, SD, CV%, range, and IQR) for each of the three trials. The trial averages and their standard deviations are almost the same. The coefficient of variation (CV) is 11.5% for the third trial, and it is almost the same for the other two trials. The variation is relatively low for the third trial, as revealed by its range (0.97). The slightly higher IQR for the second trial may be due to the presence of more variation in the middle 50% of that trial.
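A minimal sketch of the total-sum-of-squares step is shown below for the 2-operator × 8-part × 3-trial layout; the data array is a random placeholder with the study's shape and rough location and spread, not the actual readings.

import numpy as np

rng = np.random.default_rng(0)
# Placeholder readings shaped like the study: 2 operators x 8 parts x 3 trials.
x = rng.normal(loc=2.88, scale=0.33, size=(2, 8, 3))

grand_mean = x.mean()
ss_total = ((x - grand_mean) ** 2).sum()
df_total = x.size - 1                      # nkr - 1 = 47

print(f"grand mean = {grand_mean:.3f}, SS_T = {ss_total:.4f}, df = {df_total}")

# Per-trial basic statistics (mean, SD, CV%, range), as in Table-7.
for m in range(x.shape[2]):
    trial = x[:, :, m].ravel()
    cv = 100 * trial.std(ddof=1) / trial.mean()
    print(f"trial {m + 1}: mean={trial.mean():.3f} sd={trial.std(ddof=1):.3f} "
          f"cv={cv:.2f}% range={trial.max() - trial.min():.3f}")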

Operator Sum of Squares (SS_O) is obtained from SS_O = Σi n r (x-bar_i − x-double-bar)², where n × r is the number of results for operator i, so there are 24 results for each operator, and x-bar_i is the average of the 24 results of operator i. There are two such averages, one for each operator: 2.858 is the mean for operator A and 2.909 for operator B. They are almost equal. The CV of the latter is about 12% while it is about 11% for the former, so there may be a slightly higher degree of variation for operator B. Since the overall average is calculated, one degree of freedom is lost; hence the df is (2 − 1) = 1. Table-3 in the Appendix has all the calculations.

Parts Sum of Squares (SS_P) and df: SS_P = Σj k r (x-bar_j − x-double-bar)², where k × r is the number of results for part j and x-bar_j is the average of the 6 results for that part. The original data are rearranged part-wise, and since there are 8 parts, there are 8 part averages; Table-4 in the Appendix shows the calculations. The part mean is 3.5 for the first part, 3.3 for the eighth, and 3.0 for the fifth. The means of the second and fourth parts, and of the sixth and seventh parts, vary between 2.8 and 2.6. The mean of the third part is the least, at 2.5. That these part averages vary between 3.5 and 2.5 may be due more to the heterogeneity in the readings of operator B. SS_P is based on the differences between the part averages and the overall mean; each difference is squared and the results are added up. There are eight parts, and since the overall average is calculated, one degree of freedom is lost; hence 7 df are associated with the parts sum of squares.

Equipment Sum of Squares (SS_E) and df: The equipment sum of squares is computed from the deviations of the three trials for a given operator and a given part from the mean for that part and operator: SS_E = Σi Σj Σm (x_ijm − x-bar_ij)². The calculations are shown in Table-5. Two part averages are above 3 (the 1st and 8th parts) and the rest are below 3 for operator A; likewise two part averages exceed 3 (the 1st and 5th parts) and the rest are below it for operator B. These averages are close to their respective readings, and hence their squared deviations are small, suggesting variation within narrow limits. SS_E examines the variation of the 3 trials about their mean. An average is computed for each set of 3 trials, so 1 df is lost for each set (or r − 1); here 16 df are lost in all. Therefore, the df associated with the equipment variation is nk(r − 1) = 32.

The sum of squares for Operator by Part (SS_O*P) is the difference between the total sum of squares and the sum of the squares for Part, Operator, and Equipment.

Table-7A: ANOVA Full Model (With Interaction), with columns df, SS, MS, F, and p-value for Part (P), Operator (O), Operator by Part (O*P), Equipment (E), and Total. The F-ratios are F_P = MS_P/MS_O*P = 0.7310/0.0036, F_O = MS_O/MS_O*P = 0.031/0.0036, and F_O*P = MS_O*P/MS_E = 0.0036/0.0017.

Table-7B: ANOVA Reduced Model (Without Interaction), with the same columns for Part (P), Operator (O), Equipment (E), and Total. Here the F-ratios are F_P = MS_P/MS_E = 0.7310/0.0020 and F_O = MS_O/MS_E = 0.031/0.0020.

Interpretation
The low F-value coupled with the p-value for the operator-by-part variation suggests that the null hypothesis of zero interaction variation can be retained. When the p-value exceeds 0.05, the null hypothesis cannot be rejected. It implies that the operators do not interact with the parts, and the interaction variation is not statistically significant; in other words, the observed variation may be due to random causes. The high degree of parallelism found in the distribution of the data generated by the two operators and in their various statistical measures, as shown in the control charts A&B below, corresponds to the non-significant F-value for the interaction effect in the ANOVA table above. The F-value coupled with the 0.01 p-value for operators points out that the operator effect is detectable in the measurement system. It also suggests that the two operators may not be measuring the parts in the same way, and it is not possible to say which operator is out of line with the other. The range of the coefficient of variation for operator A is just 0.45, while it is 1.26 for operator B (Table-1B), about 2.8 times greater. It is a pointer to the fact that the measurement variation of operator B is relatively greater. If the operators measure the same parts differently, then there is a serious issue with the measurement process that needs to be addressed on a priority basis before the measurement system wreaks havoc in subsequent operations. Multiple reasons can be attributed to appraiser inconsistency: when a new system is implemented, the appraisers may find it difficult to follow the measurement procedures properly, may not be able to align the product correctly before taking a measurement, may have a problem reading the fine marks on the gage, may not have clear instructions on which dimension of the part should be measured, may be the product of ineffective training, or might have been trained in different methods. The p-value is 0 for part variation, which is less than 0.05; hence the null hypothesis of zero part variation is rejected. In other words, its impact on the total variation is statistically significant. The large F-value for parts coupled with the zero p-value suggests that the new test system developed by the company has great potential for detecting part-to-part variation over and above the measurement error. These interpretations are equally valid for the reduced model.

Computing Expected Mean Squares
The Expected Mean Squares (EMS) give the variance of each source of variation. Repeatability is directly related to MS_E.
1. σ²Repe = MS_E = 0.0017, since EMS_E = σ²Repe. It is the repeatability portion of the Gage R&R study.
2. σ²O*P = (MS_O*P − σ²Repe)/r = (0.0036 − 0.0017)/3, since EMS_O*P = σ²Repe + rσ²O*P = MS_O*P.
3. σ²P = (MS_P − MS_O*P)/(kr) = (0.7310 − 0.0036)/6, since EMS_P = σ²Repe + rσ²O*P + krσ²P = MS_P.
4. σ²O = (MS_O − MS_O*P)/(nr) = (0.031 − 0.0036)/24, since EMS_O = σ²Repe + rσ²O*P + nrσ²O = MS_O.
5. TV = σ²Repe + σ²O + σ²O*P + σ²P.
6. ndc = 1.41 × (0.3481/0.0539) = 9.1 (full model).
7. ndc = 1.41 × (0.3486/0.0566) = 8.7 (reduced model).

Computing the Components for the Reduced Model: Since the interaction effect is not statistically significant, there is no need to compute the interaction variation separately; hence another ANOVA table is prepared without the operator-by-part term.
1. Estimated repeatability variance = MSW, where MSW is the Mean Square Within (the pooled error mean square).
2. Estimated reproducibility variance = (MS_O − MSW)/(n × r) = (MS_O − MSW)/24.
3. Estimated product (part) variance = (MS_P − MSW)/(k × r) = (MS_P − MSW)/6.
4. Estimated SD of product measurement = the square root of the sum of these variance components.

Table-8-A: ANOVA Gage R&R, Full Model, reporting for each source (Repeatability, Reproducibility, GRR, Operator by Part, Parts, and Total TV) the estimated variance, % contribution of variance, estimated SD, 6 SD (study variation), and % study variation. GRR is excluded from the column totals; ndc = 9.
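The expected-mean-squares algebra above translates directly into code. The sketch below estimates the variance components and the ndc from the mean squares quoted in the text (MS_P = 0.7310, MS_O = 0.031, MS_O*P = 0.0036, MS_E = 0.0017) for the 2-operator, 8-part, 3-trial layout; the mean squares are taken as given rather than recomputed from the raw data, so the printed figures are only as precise as those inputs.

import math

# Mean squares quoted in the full-model ANOVA table.
MS_P, MS_O, MS_OP, MS_E = 0.7310, 0.031, 0.0036, 0.0017
n_parts, n_ops, n_trials = 8, 2, 3

# Expected-mean-squares solutions for a crossed random-effects gauge study.
var_repeat = MS_E
var_interaction = max((MS_OP - MS_E) / n_trials, 0.0)
var_operator = max((MS_O - MS_OP) / (n_parts * n_trials), 0.0)
var_part = max((MS_P - MS_OP) / (n_ops * n_trials), 0.0)

var_grr = var_repeat + var_operator          # R&R with the interaction reported separately
var_total = var_repeat + var_operator + var_interaction + var_part

for name, v in [("repeatability", var_repeat), ("reproducibility", var_operator),
                ("operator by part", var_interaction), ("parts", var_part)]:
    print(f"{name:17s} variance = {v:.5f}  %contribution = {100 * v / var_total:6.2f}")

ndc = 1.41 * math.sqrt(var_part / var_grr)
print("ndc =", round(ndc, 1))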

Table-8-B: ANOVA Gage R&R, Reduced Model, reporting the same quantities (estimated variance, % contribution of variance, estimated SD, 6 SD study variation, and % study variation) for Repeatability, Reproducibility, GRR, Parts, and Total TV. GRR is excluded from the column totals; ndc = 8.

Operators use the % variances of the various components, or their % study variations, to fix a poor measurement system. The combined R&R constitutes the measurement error that adds uncertainty to the dataset. Zero reproducibility is essentially possible only when all the appraisers measure the same average value and no one differs from the others. A high repeatability relative to reproducibility indicates the need for an improved version of the gage; preventive maintenance is always better than breakdown maintenance, and sometimes gages with additional features may be a good choice. If the operators measure parts consistently, the difference between the highest and lowest measurements will be small. A higher reproducibility relative to repeatability calls for gage recalibration and/or retraining of the operators for skill upgradation; sometimes fixtures may be required for the operators to use the gage consistently. Here repeatability (0.0017) exceeds reproducibility (0.0012), hence a better gage or focused maintenance may bring down the noise. Here ndc is 9 [1.41 × (0.3481/0.0539)], hence all eight parts of the product are distinguishable by the measurement system. The high ndc coupled with the high part variance suggests that the gage system is sensitive enough to distinguish one part from another. Tables-8 A&B above show that Gage R&R consumes about 16% of the study variation, or a 2.33%/2.57% contribution of variance (full/reduced model). Hence the measurement system is said to be good. The AIAG Gage R&R acceptance criteria suggest that when %R&R is below 10% the measurement system is acceptable, and such a system is recommended when parts have to be classified distinctly and/or when tightened process control is needed. Out of the 2.33%, 1.36% is from repeatability; this variation stems from the gage. Another 0.96% is from reproducibility; it is the effect of the operators on the measurement system. Operator by part accounts for 0.5% of the total variation, which suggests that the interaction effect is not posing any issue. About 97.2% of the contribution to the total variance comes from part variation, and the rest from the measurement error. This high process-inherent variation coupled with the high ndc (9) suggests that the measurement system is sensitive enough to clearly distinguish the 8 parts. Despite the superiority of the variance analysis, analysis based on standard deviation attracts practitioners mainly because the standard deviation uses the same unit of measurement as the part measurements and the tolerance, and so allows meaningful comparisons. Comparing the measurement system variation with the tolerance is often informative: when tolerance limits are introduced, the computed % tolerance compares measurement system variation to the specifications. If the measurement study aims at process improvement, the % variance is a better estimate of measurement precision; if the measurement system appraises parts relative to specifications, % tolerance is the more appropriate metric. In a good measurement system, the largest component of variation must be part-to-part variation. When the percentages of total variation based on SD are examined, the same trend as the % contribution based on variance is reflected here too, but the magnitudes exhibit greater differences. For example, repeatability based on SD is about 12% and reproducibility about 10%, while their counterparts in % variance are 1.36% and 0.96% respectively. Likewise, GRR based on SD is about 15% while it is just 2.33% based on variance. The part variations are very close to each other. Thus the high percentages of the components, and the fact that the % totals based on standard deviation are incompatible with 100%, lay greater emphasis on the utility of variance.

10. PART-4: Evaluating the Measurement Process (EMP) Approach
This approach, which is associated with Donald Wheeler and Lyday, immediately places the data on average and range charts in order to unfold what is happening in the data. The average and range method focuses on the predictability, or otherwise, of a process based on the behavior of the data, while the EMP approach explores the possibility of detecting part-to-part differences regardless of the uncertainty introduced by the measurement error.

This paradigm shift can change the way one looks at the average and range chart and interprets the results. EMP is also compatible with different data structures and different data collection schemes 5. The process behavior charts shed light on the nature of the distribution of the data in and around the lower and upper control limits, both for the within-subgroup variation and for the between-subgroup differences. When more points lie outside the control limits of the mean chart, this signals opportunities for improving the measurement process; it is also a pointer to the fact that the part-to-part variation is much greater than the equipment variation. Deviations on the range chart, by contrast, signal inconsistency in the measurement process. If the system is not in a state of statistical control, the underlying special causes must be eliminated before preparing the mean chart and initiating the MSA. This check for consistency is wanting in ANOVA, which assumes prima facie that the measurement process is in a state of statistical control and proceeds further; hence ANOVA results are dependable only when the production process is stable. There is a high degree of parallelism in the readings of the two operators, as shown in the two sets of mean and R charts. There are two subgroups here, and each has eight parts. All the data points in both mean charts lie outside their control limits, and they show up operator-to-operator as well as part-to-part differences. The variation within each subgroup displays the test-retest error. The distribution of the data within the control limits of the R charts confirms consistency in the measurements.

Control Charts for Operator A: Operator A measures 8 parts, each three times, and the X-bar and Range charts are plotted.
Control Charts for Operator B: Operator B measures 8 parts, each three times, and the X-bar and Range charts are plotted.

Analysis of Main Effects (ANOME)
In order to see whether the two operators have almost the same mean value for their readings, the Analysis of Main Effects (ANOME) is attempted with a scaling factor for a 5% alpha-level. Using x-double-bar = 2.89, x-bar_A = 2.86, x-bar_B = 2.91, R-bar = 1.05, and a scaling factor of 0.59, the upper and lower decision limits are computed. Central Line (CL) = 2.89; UDL = 2.89 + (0.59 × 1.05) = 3.51, and LDL = 2.89 − (0.59 × 1.05) = 2.27. Since the averages of the two operators lie well within these limits, the averages do not significantly differ from each other in 95 out of 100 cases; the subtle differences between the averages may be due to random causes. Hence it is concluded that the measurement error is not impacted by the interaction effect.
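A small sketch of the ANOME computation follows, using the values quoted above (grand average 2.89, operator averages 2.86 and 2.91, R-bar = 1.05, scaling factor 0.59); it simply builds the decision limits and checks whether each operator average falls inside them.

# ANOME decision limits for comparing operator averages (values from the text).
grand_average = 2.89
operator_averages = {"A": 2.86, "B": 2.91}
r_bar = 1.05
scaling_factor = 0.59      # scaling factor used in the study for a 5% alpha-level

udl = grand_average + scaling_factor * r_bar   # upper decision limit
ldl = grand_average - scaling_factor * r_bar   # lower decision limit

print(f"decision limits: [{ldl:.2f}, {udl:.2f}]")
for operator, average in operator_averages.items():
    if ldl <= average <= udl:
        verdict = "within limits: no detectable operator effect"
    else:
        verdict = "outside limits: operator effect detected"
    print(f"operator {operator}: {average:.2f} -> {verdict}")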

This interpretation falls in line with the result of ANOVA, where it was shown that the interaction effect is not statistically significant.

Computing the Components for the EMP Approach
1. EMP repeatability = R-bar/d2 = 0.0655/1.693 = 0.0387.
2. EMP reproducibility = sqrt{[operator range/d2*]² − [(k/(nkr)) × repeatability²]} = sqrt{(0.055/1.41)² − (2/48) × 0.0387²}.
3. EMP product variation = sqrt{[part range/d2*]² − [(n/(nkr)) × repeatability²]} = sqrt{(part range/2.963)² − (8/48) × 0.0387²}, where the part range is the difference between the maximum and minimum of the part means.
4. EMP total variation = sqrt(repeatability² + reproducibility² + product variation²).

Table: GRR Based on the EMP Approach, reporting for each source (EMP repeatability, EMP reproducibility, EMP product variation, and EMP total variation) the SD, the study variation (6 SD), the % study variation, and the % contribution of variance; GRR (R&R) is excluded from the column totals.

The results of the GRR study based on the X-bar and Range method and those of the EMP approach are almost identical in terms of standard deviation. Repeatability and reproducibility consume about 12% and 10% of the total variation respectively, and part variation consumes about 99% of the total variation; but then the EMP TV adds up to about 121%! Therefore one has to consider the results based on variance. Here the measurement error in terms of variance accounts for 2.68% of the variation. Hence the acceptability of this measurement system can be decided on the basis of cost factors, the tolerance limits, and the other factors associated with them.

11. PART-5: Wheeler Approach - Pricking the Bubbles
Donald Wheeler, a prominent name in the science of quality management, contends that the AIAG approach violates the basics of mathematics: trigonometric functions are interpreted as mere proportions when they are not proportions, and the nonlinear and non-additive standard deviations are subjected to many disallowed operations, without any rational basis, to generate other metrics. Thus, after pricking the bubbles, he suggests that his Honest Gauge R&R Study, which is based on variance and capable of generating all the vital metrics for meaningful interpretation, should replace the AIAG approach. Before launching this study, as usual, the upper range limit for the chosen dataset must be computed to check that none of the subgroup ranges exceeds it. This test is needed to keep the data free of underlying special causes, so that the production process is under statistical control and the dataset is ready for further operations.

Computation of the Variations of the Components
Step 1. Here the three subgroup ranges are below the upper range limit; hence the variances of all the components can be computed straight away as shown below.
Step 2. Estimated repeatability variance component: σ²pe = (R-bar/d2)² = (0.0655/1.693)² = 0.0015.
Step 3. Estimated reproducibility variance component: σ²o = (Ro/d2*)² − (o/(n × o × r)) × σ²pe = (0.055/1.41)² − (2/48) × 0.0015 = 0.0015.
Step 4. Estimated combined R&R variance component: σ²e = σ²pe + σ²o = 0.0030.
Step 5. Estimated part variance component: σ²p = (Rp/d2*)² = (0.98/2.963)² = 0.1094.
Step 6. Estimated total variance: σ²x = σ²p + σ²e = 0.1094 + 0.0030 = 0.1124.

Estimated Proportions of the above Components as Fractions of the Total Variance
Step 7. Repeatability proportion = σ²pe/σ²x = 0.0015/0.1124 = 0.013.
Step 8. Reproducibility proportion = σ²o/σ²x = 0.0015/0.1124 = 0.013.

Step 9. Combined R&R proportion = σ²e/σ²x = 0.0030/0.1124 = 0.027.
Step 10. The part-variance proportion, the Intra-Class Correlation (ICC), is σ²p/σ²x = 0.1094/0.1124 = 0.97. The ICC determines whether the measurement system is a 1st, 2nd, 3rd, or 4th class monitor. When the measurement error is 0, the ICC statistic is unity. A good measurement system, with an ICC better than 99%, has a measurement error that consumes less than 1% of the total variation; marginal systems, with the ICC in the range 99%-91%, have a measurement error between 1% and 9% of the total variation; and a measurement system is termed bad when, with the ICC below 91%, the measurement error consumes more than 9% of the total variation.

GRR Based on Wheeler's Approach
Source | Estimated Variance Component | % Contribution of Variance
EV (Repeatability) | 0.0015 | 1.33
AV (Reproducibility) | 0.0015 | 1.33
GRR (R&R) | 0.0030 | 2.67
PV | 0.1094 | 97.33
TV | 0.1124 | 100.00

Step 11. The attenuation of the production process signal is computed using the formula 1 − √ICC. When the ICC is between 1.00 and 0.80, the measurement system is a First Class Monitor of the production process. That means several things: (a) the strength of the process signals is reduced by less than 10%; (b) there is a better than 99% chance for the measurement system to detect a point beyond the control limits; and (c) the measurement system can track process improvements up to Cp80 while it remains a First Class Monitor. Here the ICC is 0.97, the process signal attenuation is 1.35% (1 − √0.97), and Cp80 is 1.21. Therefore there is a 99% chance for the measurement system to detect a point above or below the control limits, and it can track process improvements up to 1.21. As a Second Class Monitor the measurement system can track process improvements up to Cp50 = 3.0, and as a Third Class Monitor up to Cp20 = 3.81. When the ICC is below 0.20, the reduction of the process signals is more than 55%, the chance of detecting a shift in the production process vanishes rapidly, and the crossover capability cannot track any process improvement. The crossover capabilities are computed by dividing the specified tolerance (USL − LSL) by 6σRep and the appropriate crossover factor; here Cp80 = 1.21, Cp50 = 3.0, and Cp20 = 3.81. The production process signals and the measurement system signals are always linked together, and their intensities move in opposite directions. When the ICC is between 1.00 and 0.80, signals coming from the production process are weakened by less than 10%, while the strength of the measurement system signals is attenuated by more than 55%. When the ICC lies between 0.80 and 0.50, the production process signals are attenuated by between 10% and 30%, while the signals from the measurement system are weakened by between 55% and 30%. When the ICC lies between 0.50 and 0.20, the production process signals are attenuated by between 30% and 55%, while the signals from the measurement system are attenuated only from 30% to 10%. When the ICC is less than 0.20, the signals from the production process are attenuated by more than 55%, and the ability to detect process signals rapidly disappears as the measurement error completely dominates the observations; here the measurement system is on the ropes and should only be used in desperation 6.

Step 12. The Probable Error (PE) is simply 0.675 times the standard deviation of the test-retest error. It is the median error of a measurement and defines its effective discreteness. Here the PE of a single measurement is 0.675 × 0.0387 = 0.026, which implies that 50% of the time a single measurement will err by more than 0.026 units and the other 50% of the time by less than 0.026 units.

Step 13. The Smallest Effective Measurement Increment (SEMI) is 0.2 PE and the Largest Effective Measurement Increment (LEMI) is 2 PE. Here SEMI = 0.2 × 0.026 = 0.005 and LEMI = 2 × 0.026 = 0.05, so the effective measurement increment should lie between 0.005 and 0.05. In this study the basic data are recorded to the nearest 0.01 unit; hence the measurement increment is half of 0.01, i.e., 0.005, which falls within these bounds.

Step 14. The specification limits here, as said before, are 2.55 and 3.54 units, and their difference, 0.99, is termed the specified tolerance. The measurement system will always undergo incremental changes over a period of time, and the traditional specification limits cannot accommodate them; watershed specifications include such incremental changes.
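The Wheeler-style summary metrics above can be scripted as below, using the variance components derived in Steps 2-6 (repeatability 0.0015, reproducibility 0.0015, part variance 0.1094) and the specification limits 2.55 and 3.54. The class-monitor cut-offs are Wheeler's ICC bands quoted in the text (0.80, 0.50, 0.20), and the 0.675 factor for the probable error is the standard conversion from the test-retest standard deviation.

import math

# Variance components from the Honest Gauge R&R steps (study values as quoted).
var_repeat, var_reprod, var_part = 0.0015, 0.0015, 0.1094
var_rr = var_repeat + var_reprod
var_total = var_part + var_rr
USL, LSL = 3.54, 2.55

icc = var_part / var_total                      # Intra-Class Correlation
signal_attenuation = 1 - math.sqrt(icc)         # loss of process-signal strength

def monitor_class(icc_value):
    # Wheeler's class-monitor bands for the ICC statistic.
    if icc_value >= 0.8:
        return "First Class Monitor"
    if icc_value >= 0.5:
        return "Second Class Monitor"
    if icc_value >= 0.2:
        return "Third Class Monitor"
    return "Fourth Class Monitor"

probable_error = 0.675 * math.sqrt(var_repeat)  # median error of a single measurement
semi, lemi = 0.2 * probable_error, 2.0 * probable_error

print(f"ICC = {icc:.3f} -> {monitor_class(icc)}")
print(f"process signal attenuation = {100 * signal_attenuation:.2f}%")
print(f"probable error = {probable_error:.4f}; effective increment between {semi:.4f} and {lemi:.4f}")
print(f"specified tolerance = {USL - LSL:.2f}")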


An inferential procedure to use sample data to understand a population Procedures

An inferential procedure to use sample data to understand a population Procedures Hypothesis Test An inferential procedure to use sample data to understand a population Procedures Hypotheses, the alpha value, the critical region (z-scores), statistics, conclusion Two types of errors

More information

Measurement Systems Analysis January 2015 Meeting. Steve Cox

Measurement Systems Analysis January 2015 Meeting. Steve Cox Measurement Systems Analysis January 2015 Meeting Steve Cox Steve Cox Currently retired 33 Years with 3M Mostly quality related: 37 total in Quality ASQ Certified Quality Engineer Certified Black Belt

More information

Descriptive Statistics

Descriptive Statistics Descriptive Statistics Once an experiment is carried out and the results are measured, the researcher has to decide whether the results of the treatments are different. This would be easy if the results

More information

Criteria of Determining the P/T Upper Limits of GR&R in MSA

Criteria of Determining the P/T Upper Limits of GR&R in MSA Quality & Quantity 8) 4:3 33 Springer 7 DOI.7/s3-6-933-7 Criteria of Determining the P/T Upper Limits of GR&R in MSA K. S. CHEN,C.H.WU and S. C. CHEN Institute of Production System Engineering & Management,

More information

Chapter 8 Statistical Quality Control, 7th Edition by Douglas C. Montgomery. Copyright (c) 2013 John Wiley & Sons, Inc.

Chapter 8 Statistical Quality Control, 7th Edition by Douglas C. Montgomery. Copyright (c) 2013 John Wiley & Sons, Inc. 1 Learning Objectives Chapter 8 Statistical Quality Control, 7th Edition by Douglas C. Montgomery. 2 Process Capability Natural tolerance limits are defined as follows: Chapter 8 Statistical Quality Control,

More information

Measurement System Analysis

Measurement System Analysis Definition: Measurement System Analysis investigations are the basic requirement for carrying out Capability Studies. They are intended to ensure that the used measuring equipment is suitable. Note: With

More information

T.I.H.E. IT 233 Statistics and Probability: Sem. 1: 2013 ESTIMATION AND HYPOTHESIS TESTING OF TWO POPULATIONS

T.I.H.E. IT 233 Statistics and Probability: Sem. 1: 2013 ESTIMATION AND HYPOTHESIS TESTING OF TWO POPULATIONS ESTIMATION AND HYPOTHESIS TESTING OF TWO POPULATIONS In our work on hypothesis testing, we used the value of a sample statistic to challenge an accepted value of a population parameter. We focused only

More information

Analysis of Variance and Co-variance. By Manza Ramesh

Analysis of Variance and Co-variance. By Manza Ramesh Analysis of Variance and Co-variance By Manza Ramesh Contents Analysis of Variance (ANOVA) What is ANOVA? The Basic Principle of ANOVA ANOVA Technique Setting up Analysis of Variance Table Short-cut Method

More information

Chapter 4. Displaying and Summarizing. Quantitative Data

Chapter 4. Displaying and Summarizing. Quantitative Data STAT 141 Introduction to Statistics Chapter 4 Displaying and Summarizing Quantitative Data Bin Zou (bzou@ualberta.ca) STAT 141 University of Alberta Winter 2015 1 / 31 4.1 Histograms 1 We divide the range

More information

Mathematical Notation Math Introduction to Applied Statistics

Mathematical Notation Math Introduction to Applied Statistics Mathematical Notation Math 113 - Introduction to Applied Statistics Name : Use Word or WordPerfect to recreate the following documents. Each article is worth 10 points and should be emailed to the instructor

More information

Business Statistics. Lecture 10: Course Review

Business Statistics. Lecture 10: Course Review Business Statistics Lecture 10: Course Review 1 Descriptive Statistics for Continuous Data Numerical Summaries Location: mean, median Spread or variability: variance, standard deviation, range, percentiles,

More information

Background to Statistics

Background to Statistics FACT SHEET Background to Statistics Introduction Statistics include a broad range of methods for manipulating, presenting and interpreting data. Professional scientists of all kinds need to be proficient

More information

Protocol for the design, conducts and interpretation of collaborative studies (Resolution Oeno 6/2000)

Protocol for the design, conducts and interpretation of collaborative studies (Resolution Oeno 6/2000) Protocol for the design, conducts and interpretation of collaborative studies (Resolution Oeno 6/2000) INTRODUCTION After a number of meetings and workshops, a group of representatives from 27 organizations

More information

University of Huddersfield Repository

University of Huddersfield Repository University of Huddersfield Repository Ding, Hao, Qi, Qunfen, Scott, Paul J. and Jiang, Xiang An ANOVA method of evaluating the specification uncertainty in roughness measurement Original Citation Ding,

More information

Morehouse. Edward Lane, Morehouse Instrument Company 1742 Sixth Ave York, PA PH: web: sales:

Morehouse. Edward Lane, Morehouse Instrument Company 1742 Sixth Ave York, PA PH: web:  sales: Morehouse 1 Morehouse Edward Lane, Morehouse Instrument Company 1742 Sixth Ave York, PA 17403 PH: 717-843-0081 web: www.mhforce.com sales: edlane@mhforce.com 2 This presentation will cover the calibration

More information

Hypothesis Tests and Estimation for Population Variances. Copyright 2014 Pearson Education, Inc.

Hypothesis Tests and Estimation for Population Variances. Copyright 2014 Pearson Education, Inc. Hypothesis Tests and Estimation for Population Variances 11-1 Learning Outcomes Outcome 1. Formulate and carry out hypothesis tests for a single population variance. Outcome 2. Develop and interpret confidence

More information

Keppel, G. & Wickens, T.D. Design and Analysis Chapter 2: Sources of Variability and Sums of Squares

Keppel, G. & Wickens, T.D. Design and Analysis Chapter 2: Sources of Variability and Sums of Squares Keppel, G. & Wickens, T.D. Design and Analysis Chapter 2: Sources of Variability and Sums of Squares K&W introduce the notion of a simple experiment with two conditions. Note that the raw data (p. 16)

More information

Statistical Process Control

Statistical Process Control Chapter 3 Statistical Process Control 3.1 Introduction Operations managers are responsible for developing and maintaining the production processes that deliver quality products and services. Once the production

More information

Selection of Variable Selecting the right variable for a control chart means understanding the difference between discrete and continuous data.

Selection of Variable Selecting the right variable for a control chart means understanding the difference between discrete and continuous data. Statistical Process Control, or SPC, is a collection of tools that allow a Quality Engineer to ensure that their process is in control, using statistics. Benefit of SPC The primary benefit of a control

More information

Uncertainty Analysis of Experimental Data and Dimensional Measurements

Uncertainty Analysis of Experimental Data and Dimensional Measurements Uncertainty Analysis of Experimental Data and Dimensional Measurements Introduction The primary objective of this experiment is to introduce analysis of measurement uncertainty and experimental error.

More information

Business Analytics and Data Mining Modeling Using R Prof. Gaurav Dixit Department of Management Studies Indian Institute of Technology, Roorkee

Business Analytics and Data Mining Modeling Using R Prof. Gaurav Dixit Department of Management Studies Indian Institute of Technology, Roorkee Business Analytics and Data Mining Modeling Using R Prof. Gaurav Dixit Department of Management Studies Indian Institute of Technology, Roorkee Lecture - 04 Basic Statistics Part-1 (Refer Slide Time: 00:33)

More information

AP Final Review II Exploring Data (20% 30%)

AP Final Review II Exploring Data (20% 30%) AP Final Review II Exploring Data (20% 30%) Quantitative vs Categorical Variables Quantitative variables are numerical values for which arithmetic operations such as means make sense. It is usually a measure

More information

CHAPTER 4 VARIABILITY ANALYSES. Chapter 3 introduced the mode, median, and mean as tools for summarizing the

CHAPTER 4 VARIABILITY ANALYSES. Chapter 3 introduced the mode, median, and mean as tools for summarizing the CHAPTER 4 VARIABILITY ANALYSES Chapter 3 introduced the mode, median, and mean as tools for summarizing the information provided in an distribution of data. Measures of central tendency are often useful

More information

Take the measurement of a person's height as an example. Assuming that her height has been determined to be 5' 8", how accurate is our result?

Take the measurement of a person's height as an example. Assuming that her height has been determined to be 5' 8, how accurate is our result? Error Analysis Introduction The knowledge we have of the physical world is obtained by doing experiments and making measurements. It is important to understand how to express such data and how to analyze

More information

June 12 to 15, 2011 San Diego, CA. for Wafer Test. Lance Milner. Intel Fab 12, Chandler, AZ

June 12 to 15, 2011 San Diego, CA. for Wafer Test. Lance Milner. Intel Fab 12, Chandler, AZ June 12 to 15, 2011 San Diego, CA Statistical Analysis Fundamentals for Wafer Test Lance Milner Intel Fab 12, Chandler, AZ Why Statistics? In God we trust. All others bring data. W Edwards Deming (1900

More information

6 th Grade Math. Full Curriculum Book. Sample file. A+ Interactive Math (by A+ TutorSoft, Inc.)

6 th Grade Math. Full Curriculum Book. Sample file. A+ Interactive Math (by A+ TutorSoft, Inc.) 6 th Grade Math Full Curriculum Book Release 7 A+ Interactive Math (by A+ TutorSoft, Inc.) Email: info@aplustutorsoft.com www.aplustutorsoft.com Page 3 of 518 Copyright 2014 A+ TutorSoft Inc., All Rights

More information

Orthogonal, Planned and Unplanned Comparisons

Orthogonal, Planned and Unplanned Comparisons This is a chapter excerpt from Guilford Publications. Data Analysis for Experimental Design, by Richard Gonzalez Copyright 2008. 8 Orthogonal, Planned and Unplanned Comparisons 8.1 Introduction In this

More information

Statistics Boot Camp. Dr. Stephanie Lane Institute for Defense Analyses DATAWorks 2018

Statistics Boot Camp. Dr. Stephanie Lane Institute for Defense Analyses DATAWorks 2018 Statistics Boot Camp Dr. Stephanie Lane Institute for Defense Analyses DATAWorks 2018 March 21, 2018 Outline of boot camp Summarizing and simplifying data Point and interval estimation Foundations of statistical

More information

BIOL Biometry LAB 6 - SINGLE FACTOR ANOVA and MULTIPLE COMPARISON PROCEDURES

BIOL Biometry LAB 6 - SINGLE FACTOR ANOVA and MULTIPLE COMPARISON PROCEDURES BIOL 458 - Biometry LAB 6 - SINGLE FACTOR ANOVA and MULTIPLE COMPARISON PROCEDURES PART 1: INTRODUCTION TO ANOVA Purpose of ANOVA Analysis of Variance (ANOVA) is an extremely useful statistical method

More information

IndyASQ Workshop September 12, Measure for Six Sigma and Beyond

IndyASQ Workshop September 12, Measure for Six Sigma and Beyond IndyASQ Workshop September 12, 2007 Measure for Six Sigma and Beyond 1 Introductions Tom Pearson 317-507 507-53585358 tpcindy@insightbb.com MS OR Old #7 Golf Guy Innovator Dog Lover Tomorrowist Entrepreneur

More information

Descriptive Statistics-I. Dr Mahmoud Alhussami

Descriptive Statistics-I. Dr Mahmoud Alhussami Descriptive Statistics-I Dr Mahmoud Alhussami Biostatistics What is the biostatistics? A branch of applied math. that deals with collecting, organizing and interpreting data using well-defined procedures.

More information

Procedure 4. The Use of Precision Statistics

Procedure 4. The Use of Precision Statistics Issue 3 9th March 04 Procedure 4 The Use of Precision Statistics. Scope.... Definitions... 3. CEC Precision Statistics... 3 4. Measurement of a Single Product... 4 5. Conformance with Specifications...

More information

Measurement Uncertainty: A practical guide to understanding what your results really mean.

Measurement Uncertainty: A practical guide to understanding what your results really mean. Measurement Uncertainty: A practical guide to understanding what your results really mean. Overview General Factors Influencing Data Variability Measurement Uncertainty as an Indicator of Data Variability

More information

13: Additional ANOVA Topics. Post hoc Comparisons

13: Additional ANOVA Topics. Post hoc Comparisons 13: Additional ANOVA Topics Post hoc Comparisons ANOVA Assumptions Assessing Group Variances When Distributional Assumptions are Severely Violated Post hoc Comparisons In the prior chapter we used ANOVA

More information

Review of Statistics 101

Review of Statistics 101 Review of Statistics 101 We review some important themes from the course 1. Introduction Statistics- Set of methods for collecting/analyzing data (the art and science of learning from data). Provides methods

More information

Effective January 2008 All indicators in Standard / 11

Effective January 2008 All indicators in Standard / 11 Scientific Inquiry 8-1 The student will demonstrate an understanding of technological design and scientific inquiry, including process skills, mathematical thinking, controlled investigative design and

More information

Intraclass Correlations in One-Factor Studies

Intraclass Correlations in One-Factor Studies CHAPTER Intraclass Correlations in One-Factor Studies OBJECTIVE The objective of this chapter is to present methods and techniques for calculating the intraclass correlation coefficient and associated

More information

An extended summary of the NCGR/Berkeley Double-Blind Test of Astrology undertaken by Shawn Carlson and published in 1985

An extended summary of the NCGR/Berkeley Double-Blind Test of Astrology undertaken by Shawn Carlson and published in 1985 From: http://www.astrodivination.com/moa/ncgrberk.htm An extended summary of the NCGR/Berkeley Double-Blind Test of Astrology undertaken by Shawn Carlson and published in 1985 Introduction Under the heading

More information

UNIT 3 CONCEPT OF DISPERSION

UNIT 3 CONCEPT OF DISPERSION UNIT 3 CONCEPT OF DISPERSION Structure 3.0 Introduction 3.1 Objectives 3.2 Concept of Dispersion 3.2.1 Functions of Dispersion 3.2.2 Measures of Dispersion 3.2.3 Meaning of Dispersion 3.2.4 Absolute Dispersion

More information

Measurements and Data Analysis

Measurements and Data Analysis Measurements and Data Analysis 1 Introduction The central point in experimental physical science is the measurement of physical quantities. Experience has shown that all measurements, no matter how carefully

More information

Density Temp vs Ratio. temp

Density Temp vs Ratio. temp Temp Ratio Density 0.00 0.02 0.04 0.06 0.08 0.10 0.12 Density 0.0 0.2 0.4 0.6 0.8 1.0 1. (a) 170 175 180 185 temp 1.0 1.5 2.0 2.5 3.0 ratio The histogram shows that the temperature measures have two peaks,

More information

An Analysis of College Algebra Exam Scores December 14, James D Jones Math Section 01

An Analysis of College Algebra Exam Scores December 14, James D Jones Math Section 01 An Analysis of College Algebra Exam s December, 000 James D Jones Math - Section 0 An Analysis of College Algebra Exam s Introduction Students often complain about a test being too difficult. Are there

More information

TOPIC: Descriptive Statistics Single Variable

TOPIC: Descriptive Statistics Single Variable TOPIC: Descriptive Statistics Single Variable I. Numerical data summary measurements A. Measures of Location. Measures of central tendency Mean; Median; Mode. Quantiles - measures of noncentral tendency

More information

1 Measurement Uncertainties

1 Measurement Uncertainties 1 Measurement Uncertainties (Adapted stolen, really from work by Amin Jaziri) 1.1 Introduction No measurement can be perfectly certain. No measuring device is infinitely sensitive or infinitely precise.

More information

Introduction to Uncertainty and Treatment of Data

Introduction to Uncertainty and Treatment of Data Introduction to Uncertainty and Treatment of Data Introduction The purpose of this experiment is to familiarize the student with some of the instruments used in making measurements in the physics laboratory,

More information

Techniques for Improving Process and Product Quality in the Wood Products Industry: An Overview of Statistical Process Control

Techniques for Improving Process and Product Quality in the Wood Products Industry: An Overview of Statistical Process Control 1 Techniques for Improving Process and Product Quality in the Wood Products Industry: An Overview of Statistical Process Control Scott Leavengood Oregon State University Extension Service The goal: $ 2

More information

Rational Subgrouping. The conceptual foundation of process behavior charts

Rational Subgrouping. The conceptual foundation of process behavior charts Quality Digest Daily, June 1, 21 Manuscript 282 The conceptual foundation of process behavior charts Donald J. Wheeler While the computations for a process behavior chart are completely general and very

More information

Gage repeatability & reproducibility (R&R) studies are widely used to assess measurement system

Gage repeatability & reproducibility (R&R) studies are widely used to assess measurement system QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL Qual. Reliab. Engng. Int. 2008; 24:99 106 Published online 19 June 2007 in Wiley InterScience (www.interscience.wiley.com)..870 Research Some Relationships

More information

2008 Winton. Statistical Testing of RNGs

2008 Winton. Statistical Testing of RNGs 1 Statistical Testing of RNGs Criteria for Randomness For a sequence of numbers to be considered a sequence of randomly acquired numbers, it must have two basic statistical properties: Uniformly distributed

More information

STATISTICS AND PRINTING: APPLICATIONS OF SPC AND DOE TO THE WEB OFFSET PRINTING INDUSTRY. A Project. Presented. to the Faculty of

STATISTICS AND PRINTING: APPLICATIONS OF SPC AND DOE TO THE WEB OFFSET PRINTING INDUSTRY. A Project. Presented. to the Faculty of STATISTICS AND PRINTING: APPLICATIONS OF SPC AND DOE TO THE WEB OFFSET PRINTING INDUSTRY A Project Presented to the Faculty of California State University, Dominguez Hills In Partial Fulfillment of the

More information

Glossary. The ISI glossary of statistical terms provides definitions in a number of different languages:

Glossary. The ISI glossary of statistical terms provides definitions in a number of different languages: Glossary The ISI glossary of statistical terms provides definitions in a number of different languages: http://isi.cbs.nl/glossary/index.htm Adjusted r 2 Adjusted R squared measures the proportion of the

More information

Section II: Assessing Chart Performance. (Jim Benneyan)

Section II: Assessing Chart Performance. (Jim Benneyan) Section II: Assessing Chart Performance (Jim Benneyan) 1 Learning Objectives Understand concepts of chart performance Two types of errors o Type 1: Call an in-control process out-of-control o Type 2: Call

More information

Measurement: The Basics

Measurement: The Basics I. Introduction Measurement: The Basics Physics is first and foremost an experimental science, meaning that its accumulated body of knowledge is due to the meticulous experiments performed by teams of

More information

Unit 27 One-Way Analysis of Variance

Unit 27 One-Way Analysis of Variance Unit 27 One-Way Analysis of Variance Objectives: To perform the hypothesis test in a one-way analysis of variance for comparing more than two population means Recall that a two sample t test is applied

More information

psychological statistics

psychological statistics psychological statistics B Sc. Counselling Psychology 011 Admission onwards III SEMESTER COMPLEMENTARY COURSE UNIVERSITY OF CALICUT SCHOOL OF DISTANCE EDUCATION CALICUT UNIVERSITY.P.O., MALAPPURAM, KERALA,

More information

AUTOMATED TEMPLATE MATCHING METHOD FOR NMIS AT THE Y-12 NATIONAL SECURITY COMPLEX

AUTOMATED TEMPLATE MATCHING METHOD FOR NMIS AT THE Y-12 NATIONAL SECURITY COMPLEX AUTOMATED TEMPLATE MATCHING METHOD FOR NMIS AT THE Y-1 NATIONAL SECURITY COMPLEX J. A. Mullens, J. K. Mattingly, L. G. Chiang, R. B. Oberer, J. T. Mihalczo ABSTRACT This paper describes a template matching

More information

2012 Assessment Report. Mathematics with Calculus Level 3 Statistics and Modelling Level 3

2012 Assessment Report. Mathematics with Calculus Level 3 Statistics and Modelling Level 3 National Certificate of Educational Achievement 2012 Assessment Report Mathematics with Calculus Level 3 Statistics and Modelling Level 3 90635 Differentiate functions and use derivatives to solve problems

More information

What is Statistics? Statistics is the science of understanding data and of making decisions in the face of variability and uncertainty.

What is Statistics? Statistics is the science of understanding data and of making decisions in the face of variability and uncertainty. What is Statistics? Statistics is the science of understanding data and of making decisions in the face of variability and uncertainty. Statistics is a field of study concerned with the data collection,

More information

Introduction to Design of Experiments

Introduction to Design of Experiments Introduction to Design of Experiments Jean-Marc Vincent and Arnaud Legrand Laboratory ID-IMAG MESCAL Project Universities of Grenoble {Jean-Marc.Vincent,Arnaud.Legrand}@imag.fr November 20, 2011 J.-M.

More information

Quality Digest Daily, April 2, 2013 Manuscript 254. Consistency Charts. SPC for measurement systems. Donald J. Wheeler

Quality Digest Daily, April 2, 2013 Manuscript 254. Consistency Charts. SPC for measurement systems. Donald J. Wheeler Quality Digest Daily, April 2, 2013 Manuscript 254 SPC for measurement systems Donald J. Wheeler What happens when we measure the same thing and get different values? How can we ever use such a measurement

More information

Topic 1. Definitions

Topic 1. Definitions S Topic. Definitions. Scalar A scalar is a number. 2. Vector A vector is a column of numbers. 3. Linear combination A scalar times a vector plus a scalar times a vector, plus a scalar times a vector...

More information

Part 01 - Notes: Identifying Significant Figures

Part 01 - Notes: Identifying Significant Figures Part 01 - Notes: Identifying Significant Figures Objectives: Identify the number of significant figures in a measurement. Compare relative uncertainties of different measurements. Relate measurement precision

More information

Using SPSS for One Way Analysis of Variance

Using SPSS for One Way Analysis of Variance Using SPSS for One Way Analysis of Variance This tutorial will show you how to use SPSS version 12 to perform a one-way, between- subjects analysis of variance and related post-hoc tests. This tutorial

More information

Probability Methods in Civil Engineering Prof. Dr. Rajib Maity Department of Civil Engineering Indian Institution of Technology, Kharagpur

Probability Methods in Civil Engineering Prof. Dr. Rajib Maity Department of Civil Engineering Indian Institution of Technology, Kharagpur Probability Methods in Civil Engineering Prof. Dr. Rajib Maity Department of Civil Engineering Indian Institution of Technology, Kharagpur Lecture No. # 36 Sampling Distribution and Parameter Estimation

More information

Interpret Standard Deviation. Outlier Rule. Describe the Distribution OR Compare the Distributions. Linear Transformations SOCS. Interpret a z score

Interpret Standard Deviation. Outlier Rule. Describe the Distribution OR Compare the Distributions. Linear Transformations SOCS. Interpret a z score Interpret Standard Deviation Outlier Rule Linear Transformations Describe the Distribution OR Compare the Distributions SOCS Using Normalcdf and Invnorm (Calculator Tips) Interpret a z score What is an

More information

AP Statistics Cumulative AP Exam Study Guide

AP Statistics Cumulative AP Exam Study Guide AP Statistics Cumulative AP Eam Study Guide Chapters & 3 - Graphs Statistics the science of collecting, analyzing, and drawing conclusions from data. Descriptive methods of organizing and summarizing statistics

More information