
Economic Quality Control, © Heldermann Verlag, ISSN 0940-5151, Vol. 16 (2001), No. 1, 49-63

Statistical Process Control Methods from the Viewpoint of Industrial Application

Constantin Anghel

Abstract: Statistical process control is a major part of industrial statistics and consists not only of control charting, but also of capability analysis, design of experiments and other statistical techniques. This paper reviews and comments on some available techniques for the univariate as well as the multivariate case from the viewpoint of industrial application.

1 Distribution of a Quality Characteristic

Many statistical process control activities assume that the quality characteristic under study exhibits a stable performance, or in other words that the process is stable and capable. The stability of a process can be defined as the stability of the underlying probability distribution over time, and very often this can be described as the stability of the distribution parameters over time. Only if the stability assumption is met by the process is the calculation of capability indices meaningful and usable in practice for evaluating process performance. Mathematically, capability indices can of course always be calculated, but for an unstable process they have no real significance, because the assignable causes acting on the process have not been identified. Thus, a correct identification of the type of probability distribution is not sufficient without the assurance that it is stable over time. If the process is not stable, the probability distribution of the quality characteristic may vary from time point to time point. In such a case a mixture of probability distributions could serve as a model, and the appearance of such a mixture can therefore be taken as an indication of an unstable process.

Each quality characteristic has a proper nature defined by physical and technical conditions. For instance, a meaningful flatness characteristic is a parameter naturally bounded below by zero. In such a case the normal distribution cannot describe reality sufficiently well and a different model has to be selected. One of many possibilities is the folded normal distribution [1], [8]. Of course, any model should always be justified by experiments, i.e. by samples. If a sample leads to rejection of the hypothesized model, the situation must be analyzed again with respect to various features such as the resolution of the measurement gauge, the appearance of mixtures, etc. Table 1 gives some recommendations for the selection of an appropriate model for some frequently relevant quality characteristics (see also [1]).

A decision for a specific model to describe the random variations exhibited by a quality characteristic should be based on any available information about the physical and technical conditions on the one hand, and on a sufficiently large sample on the other hand, collected under ideal conditions (one operator, one supplier, optimal resolution of the gauge).

Subsequent samples of the quality characteristic should be used to verify the hypothesis that the initial distribution has not changed. If the null hypothesis is rejected, an explanation has to be found by additional analysis.

Table 1: Recommended Models for Various Quality Characteristics

  Characteristic                Model
  Geometric dimensions:
    straightness                folded normal
    parallelism                 folded normal
    evenness                    folded normal
    rectangularity              folded normal
    round form                  folded normal
    inclination (angularity)    folded normal
    cylinder form               folded normal
    position                    eccentric Rayleigh
    line form                   folded normal
    coaxiality (concentricity)  eccentric Rayleigh
    surface form                folded normal
    symmetry                    folded normal
    flatness                    folded normal
    circularity                 a) form: folded normal
                                b) position: eccentric Rayleigh
  life length                   Weibull, Hjorth
  resistance                    normal
  voltage                       normal
  capacitance                   normal
  pressure                      normal
  viscosity                     normal

These models should be looked upon as a first and preliminary attempt which, in any case, must be justified or discarded by a subsequent analysis of the situation at hand.

2 Data Collection and the Resolution of the Gauge

Any data collection is the result of a measurement process and, therefore, the gauge is of high importance, beginning with its resolution and its capability. A resolution below 2% of the tolerance is considered consistent with industrial practice. Above this level the measurement values are assigned to classes by the measurement procedure and the random character of the sample is partly lost.

If we take samples of size two, the expectation of the range is 1.128 times the process standard deviation, i.e. the expected distance between two randomly selected measurements is about $1.13\sigma$, where $\sigma$ denotes the process standard deviation. Whenever the difference between two successive measurements is less than the resolution of the gauge, the difference is recorded as zero. Thus, a coarser resolution yields a larger number of zeros and, consequently, a too small value of the average range (process variability), leading to action limits for an $\bar{X}$-chart as well as for an $R$-chart which are too tight (Figure 1). The consequences are too many false alarms and, often, the wrong decision that the process is not stable [13].
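The effect of a coarse gauge resolution on the estimated variability can be illustrated with a short simulation. The sketch below is not taken from the paper; the process level and spread are illustrative assumptions, while $d_2 = 1.128$ and $D_4 = 3.267$ are the usual Shewhart constants for subgroups of size two.

```python
import numpy as np

rng = np.random.default_rng(1)

# A stable process; the true sigma is chosen so that the gauge resolution is
# coarse relative to the process spread (illustrative values, not the paper's data).
true_sigma = 0.008
resolution = 0.01          # gauge resolution: reported values snap to this grid
d2, D4 = 1.128, 3.267      # Shewhart constants for subgroups of size n = 2

x = rng.normal(30.0, true_sigma, size=(50, 2))      # 50 samples of size 2
x_rounded = np.round(x / resolution) * resolution   # what the gauge reports

for label, data in [("exact values", x), ("rounded values", x_rounded)]:
    r_bar = np.abs(data[:, 0] - data[:, 1]).mean()  # average range
    print(f"{label:15s}  R-bar = {r_bar:.4f}  "
          f"sigma-hat = {r_bar / d2:.4f}  R-chart UCL = {D4 * r_bar:.4f}")
```

With the rounded values many subgroup ranges collapse to zero, so the average range, the estimated standard deviation $\bar{R}/d_2$ and the control limits derived from them typically come out too small, which is exactly the effect described above.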

Example 1

Consider a quality characteristic with tolerance interval $[L, U] = [28.7, 31.3]$, which is evaluated by a gauge with resolution $r = 0.01$ (about 4% of the length of the tolerance interval). 50 samples of size $n = 2$ are taken; the results are given in Table 2 and Figure 1.

Table 2: Observed values: 50 samples of size n = 2.

  observed value x    frequency
  29.98                 1
  29.99                 3
  30.00                28
  30.01                41
  30.02                23
  30.03                 2
  30.04                 2

Figure 1: X̄-chart and R-chart for the 50 samples of size n = 2.

The uncertainty of the measurement result determines the gauge capability, which is often specified by four times the measurement standard deviation, $4\sigma$. Clearly, the gauge capability affects the determination of the actual value of the process capability $C_p$. If the ratio of the measurement uncertainty ($4\sigma$) to the tolerance $U - L$, with $L$ the lower and $U$ the upper tolerance limit, is 25%, then a measured process capability (with measurement variability added to process variability) of 1.33 corresponds to an actual index (without measurement variability) of 1.55, and a measured value of 1.67 corresponds to an actual value of 2.2 (Figure 2).

Figure 2: The effect of measurement variability (uncertainty) on the determination of process capability $C_p$.

Conversely, for a given actual process capability (without measurement variability) of 1.33, the capability with added measurement variability ($4\sigma/(U-L) = 25\%$) reaches only 1.2, and an actual value of 1.67 is decreased to 1.41 (Figure 3).

Figure 3: The effect of measurement variability (uncertainty) on the verification of a given process capability $C_p$.

Thus, information about gauge capability is very important in order to avoid false decisions with respect to process capability and with respect to releasing false alarms.
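Under the usual assumption that measurement error is independent of the process and that variances add, these figures can be reproduced approximately with a few lines of code. The sketch below is not from the paper; it simply applies $\sigma_{total}^2 = \sigma_{process}^2 + \sigma_{gauge}^2$ with $4\sigma_{gauge}/(U-L) = 25\%$.

```python
import numpy as np

def measured_cp(actual_cp, uncertainty_ratio):
    """Cp computed from the total spread (process + gauge),
    given the actual Cp and the ratio 4*sigma_gauge / (U - L)."""
    tol = 1.0                                   # work per unit of tolerance U - L
    sigma_proc = tol / (6.0 * actual_cp)
    sigma_gauge = uncertainty_ratio * tol / 4.0
    return tol / (6.0 * np.hypot(sigma_proc, sigma_gauge))

def actual_cp(measured_cp_value, uncertainty_ratio):
    """Inverse direction: actual Cp after removing the gauge contribution."""
    tol = 1.0
    sigma_total = tol / (6.0 * measured_cp_value)
    sigma_gauge = uncertainty_ratio * tol / 4.0
    sigma_proc = np.sqrt(sigma_total**2 - sigma_gauge**2)
    return tol / (6.0 * sigma_proc)

print(measured_cp(1.33, 0.25), measured_cp(1.67, 0.25))  # about 1.19 and 1.42 (Figure 3)
print(actual_cp(1.33, 0.25), actual_cp(1.67, 0.25))      # about 1.53 and 2.14 (Figure 2)
```

The forward direction reproduces the 1.2 and 1.41 of Figure 3 almost exactly; the inverse direction gives roughly 1.53 and 2.1, close to the 1.55 and 2.2 quoted above, the small differences presumably coming from rounding in the figures.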

3 Capability Indices

As mentioned above, capability indices only make sense for a stable process. Of course, it is possible to apply the definitions and calculate these indices even if the distribution of the relevant quality characteristic changes in time. However, only a stable process justifies predictions based on the stated capability index: using the knowledge of what happened today for predicting what we expect tomorrow makes sense only if the situation does not change. From the viewpoint of application it is desirable to define capability indices that are independent of the specific type of distribution, allowing the comparison of two or more processes with different underlying distributions.

The most widely used capability indices for the normal distribution are the following:

$$ C_p = \frac{U - L}{6\sigma} \qquad (1) $$

$$ C_{pk} = \frac{1}{3}\,\min\!\left(\frac{U - \mu}{\sigma},\ \frac{\mu - L}{\sigma}\right) \qquad (2) $$

where $\mu$ is the process mean and $\sigma^2$ the process variance. Introducing the probability $p_U$ of exceeding the upper specification limit $U$ and the probability $p_L$ of not reaching the lower specification limit $L$,

$$ p_U = 1 - \Phi\!\left(\frac{U - \mu}{\sigma}\right), \qquad p_L = \Phi\!\left(\frac{L - \mu}{\sigma}\right) \qquad (3) $$

another representation of the capability indices is obtained:

$$ C_p = \frac{1}{6}\left(\Phi^{-1}(1 - p_U) - \Phi^{-1}(p_L)\right) \qquad (4) $$

$$ C_{pk} = \frac{1}{3}\,\min\!\left(\Phi^{-1}(1 - p_U),\ -\Phi^{-1}(p_L)\right) = \frac{1}{3}\,\min\!\left(\Phi^{-1}(1 - p_U),\ \Phi^{-1}(1 - p_L)\right) = \frac{1}{3}\,\Phi^{-1}(1 - p) \qquad (5) $$

where $\Phi^{-1}$ denotes the inverse distribution function of the standard normal distribution and $p = \max(p_U, p_L)$.

The representations (4) and (5) also suggest how to define process capability indices in the general case of an arbitrary distribution function $F(x)$. In the general case the probability of exceeding the upper specification limit, denoted by $p_U^{(F)}$, and the probability of not reaching the lower specification limit, denoted by $p_L^{(F)}$, are given by

$$ p_U^{(F)} = 1 - F(U), \qquad p_L^{(F)} = F(L) \qquad (6) $$

Replacing $p_U$ and $p_L$ in (4) and (5) by the expressions (6) yields the following formulas for $C_p$ and $C_{pk}$ in the general case.

$$ C_p = \frac{1}{6}\left(\Phi^{-1}\!\left(1 - p_U^{(F)}\right) - \Phi^{-1}\!\left(p_L^{(F)}\right)\right) \qquad (7) $$

$$ C_{pk} = \frac{1}{3}\,\min\!\left(\Phi^{-1}\!\left(1 - p_U^{(F)}\right),\ -\Phi^{-1}\!\left(p_L^{(F)}\right)\right) = \frac{1}{3}\,\Phi^{-1}(1 - p) \qquad (8) $$

We will call (8) the classical definition of $C_{pk}$, as it defines process capability by means of the maximum nonconformance probability $\max(p_U, p_L)$ on the one or the other side of the specification interval.

In the relevant literature (e.g. [5]) a different proposal for defining $C_p$ and $C_{pk}$ in the general case is made. The definition is based on the following representation of the denominator of (1), which is valid under the normal model:

$$ 6\sigma = (3\sigma + \mu) - (-3\sigma + \mu) = q_{0.99865} - q_{0.00135} \qquad (9) $$

where $q_{0.99865} = \mu + 3\sigma$ is the 0.99865-quantile and $q_{0.00135} = \mu - 3\sigma$ is the 0.00135-quantile of the normal distribution $N(\mu, \sigma^2)$. Replacing the denominators in (1) and (2) by the expressions obtained in (9), we arrive at the following formulas for the general case:

$$ C_p^* = \frac{U - L}{q_{0.99865}^{(F)} - q_{0.00135}^{(F)}} \qquad (10) $$

$$ C_{pk}^* = \min\!\left(\frac{U - \mu}{q_{0.99865}^{(F)} - \mu},\ \frac{\mu - L}{\mu - q_{0.00135}^{(F)}}\right) \qquad (11) $$

where $q_{0.99865}^{(F)}$ and $q_{0.00135}^{(F)}$ are the corresponding quantiles of an arbitrary distribution function $F(x)$. Clearly, in the normal case the two capability indices (11) and (8) are equivalent. However, this is not true in the general case, as illustrated by the following example.

Example 2

Consider a quality characteristic $X$ with specification limits $L = 1.0$ and $U = 15.2$. Moreover, let $X$ follow a three-parameter log-normal distribution [3] given by its density function

$$ f_X(x) = \frac{1}{\sigma (x - \theta)\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{\ln(x-\theta) - \mu}{\sigma}\right)^2}, \qquad x > \theta \qquad (12) $$

with $\theta = -7.5$, $\mu = 2.56$, $\sigma = 0.27$. The expectation of $X$ is given by

$$ E[X] = \theta + e^{\mu + \frac{\sigma^2}{2}} \qquad (13) $$

and, utilizing the fact that $\ln(X - \theta) \sim N(\mu, \sigma^2)$, the $p$-quantile $q_p^{(F)}$ of the log-normal distribution may be determined by

$$ q_p^{(F)} = \theta + e^{q_p \sigma + \mu} \qquad (14) $$

where $q_p$ is the $p$-quantile of the standard normal distribution. Thus, we obtain:

Table 3: Distribution characteristics of X

  characteristic        actual value
  process mean          E[X] = 5.916
  0.99865-quantile      q_0.99865 = 21.579
  0.00135-quantile      q_0.00135 = -1.745

With the values given in Table 3 we obtain

$$ \frac{U - E[X]}{q_{0.99865} - E[X]} = 0.592, \qquad \frac{E[X] - L}{E[X] - q_{0.00135}} = 0.642 \qquad (15) $$

and thus

$$ C_{pk}^* = 0.592 \qquad (16) $$

where (16) refers to process quality with respect to exceeding the upper specification limit $U$.

Next, the classical definition of $C_{pk}$, as given by (8), is used for calculating the process capability. To this end the probability $p_U$ of exceeding the upper specification limit and the probability $p_L$ of not reaching the lower specification limit have to be determined:

$$ p_U = 1 - F_X(U) = 1 - \Phi\!\left(\frac{\ln(U-\theta) - \mu}{\sigma}\right) = 1 - \Phi(2.08) = 0.01876 \qquad (17) $$

$$ p_L = F_X(L) = \Phi\!\left(\frac{\ln(L-\theta) - \mu}{\sigma}\right) = \Phi(-1.56) = 0.05938 \qquad (18) $$

The probability of not reaching the lower specification limit is more than three times the probability of exceeding the upper specification limit. Thus, process capability with respect to meeting the upper specification is much better than process capability with respect to meeting the lower specification, and this fact should be reflected by any meaningful capability index. Unfortunately, as shown by (15), $C_{pk}^*$ violates this requirement. For the classical capability index $C_{pk}$ we obtain

$$ \Phi^{-1}(1 - p_U) = \Phi^{-1}(0.98124) = 2.08 \qquad (19) $$

$$ \Phi^{-1}(1 - p_L) = \Phi^{-1}(0.94062) = 1.56 \qquad (20) $$

and therefore

$$ C_{pk} = 0.520 \qquad (21) $$

which refers to the process capability of meeting the lower specification limit $L$.
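Example 2 is easy to reproduce numerically. The short sketch below is not part of the paper; it represents the shifted log-normal in scipy's parameterization, i.e. $\ln(X-\theta) \sim N(\mu,\sigma^2)$ with `loc` $= \theta$ and `scale` $= e^{\mu}$, and recovers the quantities in Table 3 as well as the two competing indices.

```python
import numpy as np
from scipy.stats import norm, lognorm

theta, mu, sigma = -7.5, 2.56, 0.27       # parameters of Example 2
L, U = 1.0, 15.2                          # specification limits

X = lognorm(s=sigma, loc=theta, scale=np.exp(mu))   # ln(X - theta) ~ N(mu, sigma^2)

# quantile-based index C*_pk of eq. (11)
q_hi, q_lo, m = X.ppf(0.99865), X.ppf(0.00135), X.mean()
cpk_star = min((U - m) / (q_hi - m), (m - L) / (m - q_lo))

# classical index C_pk of eq. (8), via the nonconformance probabilities
p_U, p_L = X.sf(U), X.cdf(L)
cpk_classical = min(norm.ppf(1 - p_U), norm.ppf(1 - p_L)) / 3.0

print(m, q_hi, q_lo)              # about 5.92, 21.58, -1.74  (Table 3)
print(p_U, p_L)                   # about 0.019 and 0.059     (eqs. 17, 18)
print(cpk_star, cpk_classical)    # about 0.59 and 0.52       (eqs. 16, 21)
```

The quantile-based index reports the better (upper) side of the specification interval, while the classical index correctly reports the worse (lower) side.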

It is generally accepted in industry that the requirement for the classical $C_{pk}$ in the normal case should be $C_{pk} \ge 1.33$, with the meaning that on average not more than 31.7 ppm of nonconforming items are produced on either side of the specification interval. Using $C_{pk}^*$ with the requirement $C_{pk}^* \ge 1.33$ in the case of an exponentially distributed quality characteristic may lead to an out-of-specification proportion of 212 ppm on one side of the specification interval, which is about seven times the tolerated number. Therefore, $C_{pk}^*$ should not be used in the non-normal case.

4 Median Versus Expectation

For non-normal distributions it may be more appropriate to use the median rather than the expectation for controlling the process location, since the expectation of a non-normal distribution does not, in contrast to the normal case (50%), correspond to a quantile of fixed order. Figure 4 shows the action lines and the operation of X̄-charts for 500 simulated samples of size $n = 3$ from an exponential distribution with parameter 0.3, with the 99% action limits determined once according to a Shewhart chart and once based on the exact distribution of the sample mean. For either chart a number of false alarms occur, considerably more for the Shewhart chart.

Figure 4: Performance of X̄-charts for controlling the process mean in the exponential case.

Figure 5 shows the operation of a median chart for the same simulated samples. As can be seen, there is no false alarm for the median chart.

Figure 5: Performance of a median chart for controlling the process mean in the exponential case.

Schneider et al. [11] pointed out that for the traditional three-sigma control limits (Shewhart) the probability of a false alarm is 0.27% for the normal distribution, but 1.8% for an exponential distribution with $\mu = 1$ (Figure 6).

Figure 6: False alarm probability using three-sigma control limits in the normal and the exponential case.

During process control with control charts it is not necessary to check the distribution and to recalculate capability indices as long as the control charts do not indicate a change. The distribution as well as the capability indices determined at the latest process capability study of the output characteristic remain valid until an alarm is released, the process is analyzed and a new process capability study is carried out.
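The false alarm figures quoted from Schneider et al. [11] follow directly from the two distributions. A minimal check, not part of the paper, assuming the three-sigma limits $\mu \pm 3\sigma$ are applied to individual observations and the exponential distribution has mean 1 (so $\sigma = 1$ and the lower limit is negative, hence irrelevant):

```python
from scipy.stats import norm, expon

# normal case: P(|X - mu| > 3*sigma)
p_normal = norm.sf(3) + norm.cdf(-3)     # about 0.0027  (0.27%)

# exponential case with mean 1: sigma = 1, UCL = mu + 3*sigma = 4,
# LCL = -2 lies below zero, so only the upper side can signal
p_expon = expon(scale=1.0).sf(4.0)       # exp(-4), about 0.018  (1.8%)

print(p_normal, p_expon)
```

An in-control exponential process thus releases false alarms at roughly seven times the rate assumed under normality, which illustrates why normal-based limits are misleading for skewed data.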

5 Multivariate Process Analysis

In industrial practice product quality is generally determined not by one characteristic alone but by a vector of several characteristics, which may be correlated. For such situations it is generally not appropriate to decide about each variable separately, to use separate quality charts, or to calculate capability indices for each characteristic by means of the marginal distributions, thereby neglecting the correlations. Therefore, multivariate techniques are necessary which take the correlations between quality characteristics into account when calculating quality charts, tolerances (see e.g. [6]) and capability indices.

Figures 7 and 8 illustrate independent control of two variables with respect to location and variability, compared with a joint control (Figure 9) using quality charts for the multivariate mean and variability. The better performance of the joint charts is evident.

Figure 7: Independent control of the locations of the two quality characteristics X, Y.

Figure 8: Independent control of the variability of the two quality characteristics X, Y.

Figure 9: Joint control of the location of (X, Y) and the variability of (X, Y).

5.1 Reduction of a Multivariate to a Univariate Case

For some problems it is possible to reduce the multivariate case to a one-dimensional one, so that the established one-dimensional capability indices can be used or the corresponding one-dimensional control charts can be implemented for process control.

Example 3

Consider the location of the center of a hole, defined by its two-dimensional coordinates (X, Y). Assume that X and Y are independent and normally distributed random variables with the same variance $\sigma^2$, and let the deviations of X and Y from given targets $x_0$ and $y_0$ be the quality characteristics. Setting $x_0 = 0$ and $y_0 = 0$ leaves the two-dimensional quality characteristic (X, Y). The pair (X, Y) has a two-dimensional normal distribution with density function

$$ f_{(X,Y)}(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{1}{2\sigma^2}(x^2 + y^2)} \qquad (22) $$

The deviation from target determines the quality of the product. Thus, instead of (X, Y) one can introduce the radial deviation Z as a single quality characteristic, defined by

$$ Z = \sqrt{X^2 + Y^2} \qquad (23) $$

Of course, the random variable Z is not normally distributed but follows a so-called Rayleigh distribution with density function $f_Z$ [2]:

$$ f_Z(z) = \frac{z}{\sigma^2}\, e^{-\frac{z^2}{2\sigma^2}}, \qquad z \ge 0 \qquad (24) $$

By means of Z the two-dimensional problem is transferred to an equivalent one-dimensional one. A process capability analysis based on Z by means of the normal distribution yields inconsistent results [9]. Moreover, because of the equivalence, either the two-dimensional or the one-dimensional characteristic can be used for a capability analysis, with no preference for one over the other [7]. A graphical illustration of the problem is presented in Figure 10.

Figure 10: Illustration of the quality characteristics (X, Y), $f_{(X,Y)}$ and $f_Z$.

5.2 Multivariate Quality Analysis

Let $\vec X = (X_1, \ldots, X_m)$ be an m-dimensional quality characteristic. The multi-dimensional tolerance region for $\vec X$ is often given by a hypercube or an ellipsoid. The first step of the quality analysis consists of calculating the nonconformance probability p (see for instance [10]):

$$ p = 1 - \int \cdots \int_{\text{tol. region}} f_{\vec X}(x_1, \ldots, x_m)\, dx_1 \cdots dx_m \qquad (25) $$
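For a normally distributed $\vec X$ and a rectangular tolerance region, the integral in (25) can be evaluated with standard software or estimated by simulation. The sketch below is not from the paper; the mean vector, covariance matrix and tolerance limits are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

# illustrative bivariate normal quality characteristic
mean = np.array([0.2, 0.3])
cov = np.array([[1.00, 0.25],
                [0.25, 0.25]])
L1, L2 = -3.2, -1.6            # lower tolerance limits
U1, U2 = 3.2, 1.6              # upper tolerance limits

mvn = multivariate_normal(mean, cov)

# probability mass inside the rectangle via inclusion-exclusion on the CDF
inside = (mvn.cdf([U1, U2]) - mvn.cdf([L1, U2])
          - mvn.cdf([U1, L2]) + mvn.cdf([L1, L2]))
p = 1.0 - inside               # nonconformance probability of eq. (25)

# the same quantity by crude Monte Carlo integration
rng = np.random.default_rng(0)
x = rng.multivariate_normal(mean, cov, size=200_000)
p_mc = 1.0 - np.all((x >= [L1, L2]) & (x <= [U1, U2]), axis=1).mean()

print(p, p_mc)
```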

Next, a tolerance ellipsoid $E_T$ is introduced, i.e. an ellipsoid centered at the target value $\vec t = (t_1, \ldots, t_m)$ representing an event which occurs with probability $1 - p$. $E_T$ is defined as the solution of the following quadratic equation:

$$ (\vec x - \vec t)^T \Sigma^{-1} (\vec x - \vec t) = \chi^2_{1-p;\,m-1} \qquad (26) $$

where $\chi^2_{1-p;\,m-1}$ is the $(1-p)$-quantile of the $\chi^2$-distribution with $m-1$ degrees of freedom. For $m = 1$, i.e. the one-dimensional case, and the normal distribution, the tolerance ellipsoid introduced above reduces to the given tolerance interval:

$$ \left[\Phi^{-1}(p_L),\ \Phi^{-1}(1 - p_U)\right]\sigma = [L,\ U] \qquad (27) $$

The relation between the tolerance region and the tolerance ellipsoid is illustrated for two dimensions in Figure 11.

Figure 11: Tolerance region and tolerance ellipsoid in a two-dimensional case.

The tolerance ellipsoid enables a straightforward generalization of capability indices to the multivariate case:

$$ C_p = \frac{\text{volume of tolerance ellipsoid}}{\text{volume of 99.73\%-ellipsoid}} \qquad (28) $$

where the 99.73%-ellipsoid is defined as the solution of

$$ (\vec x - \vec t)^T \Sigma^{-1} (\vec x - \vec t) = \chi^2_{0.9973;\,m-1} \qquad (29) $$

Taam et al. [12] use a different definition instead of (28). They introduce, instead of the tolerance ellipsoid, a modified tolerance ellipsoid, which is the ellipsoid within the tolerance region that is centered at the target and has maximum volume. Clearly, the modified tolerance ellipsoid is independent of the underlying multivariate distribution and, thus, in particular independent of the correlation coefficients.

$$ C_p(\mathrm{Taam}) = \frac{\text{volume of modified tolerance ellipsoid}}{\text{volume of 99.73\%-ellipsoid}} \qquad (30) $$
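Both (28) and (30) are ratios of ellipsoid volumes. Since the volume of the ellipsoid $\{(\vec x - \vec t)^T\Sigma^{-1}(\vec x - \vec t) \le c\}$ is proportional to $|\Sigma|^{1/2} c^{m/2}$, the determinant cancels in (28) and the index depends only on the two $\chi^2$ quantiles. The sketch below is not from the paper; the covariance matrix and the value of $p$ are illustrative assumptions, and the degrees of freedom follow (26) and (29) as printed.

```python
import numpy as np
from scipy.stats import chi2
from scipy.special import gammaln

def ellipsoid_volume(c, cov):
    """Volume of {x : (x - t)' Sigma^{-1} (x - t) <= c} in m dimensions."""
    m = cov.shape[0]
    unit_ball = np.exp(0.5 * m * np.log(np.pi) - gammaln(0.5 * m + 1))
    return unit_ball * np.sqrt(np.linalg.det(cov)) * c ** (0.5 * m)

def cp_multivariate(p, cov, df):
    """Volume-ratio index of eq. (28); |Sigma| cancels, only the quantiles matter."""
    return (ellipsoid_volume(chi2.ppf(1 - p, df), cov)
            / ellipsoid_volume(chi2.ppf(0.9973, df), cov))

# illustrative two-dimensional case with nonconformance probability p = 0.001
cov = np.array([[1.00, 0.25],
                [0.25, 0.25]])
print(cp_multivariate(0.001, cov, df=cov.shape[0] - 1))
```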

If $C_p(\mathrm{Taam}) = 1$, then the modified tolerance ellipsoid contains 99.73% of the possible process outcomes and the tolerance region itself in general more than 99.73%, which is not in line with the definition of a $C_p$-index.

Let $\vec U$ be the vector of the upper specification limits. The second index, which takes into account the actual value $E[\vec X]$ of the multi-dimensional location parameter, is denoted by $C_{pk}$ and $C_{pm}$, respectively:

$$ C_{pk} = (1 - k)\, C_p \quad \text{where} \quad k = \sqrt{\frac{(E[\vec X] - \vec t)^T (E[\vec X] - \vec t)}{(\vec U - \vec t)^T (\vec U - \vec t)}} \qquad (31) $$

$$ C_{pm} = \frac{C_p(\mathrm{Taam})}{\sqrt{1 + (E[\vec X] - \vec t)^T \Sigma^{-1} (E[\vec X] - \vec t)}} \qquad (32) $$

where it is assumed here that the target value $\vec t$ is the center of the tolerance region.

Example 4

Consider a two-dimensional quality characteristic distributed according to a bivariate normal distribution, with the following input parameters:

$$ \vec t = \begin{pmatrix} 0.0 \\ 0.0 \end{pmatrix}, \qquad \vec L = \begin{pmatrix} -3.2 \\ -1.6 \end{pmatrix}, \qquad \vec U = \begin{pmatrix} 3.2 \\ 1.6 \end{pmatrix} \qquad (33) $$

$$ E[\vec X] = \begin{pmatrix} 0.2 \\ 0.3 \end{pmatrix}, \qquad V[\vec X] = \begin{pmatrix} 1.00 & 0.5\rho \\ 0.5\rho & 0.25 \end{pmatrix} \qquad (34) $$

The joint density function is given by

$$ f_{(X,Y)}(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\, \exp\!\left(-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x - 0.2}{1.0}\right)^2 - 2\rho\left(\frac{x - 0.2}{1.0}\right)\!\left(\frac{y - 0.3}{0.5}\right) + \left(\frac{y - 0.3}{0.5}\right)^2\right]\right) \qquad (35) $$

In Figure 12 the capability indices $C_p$ and $C_{pk}$ as well as $C_p(\mathrm{Taam})$ and $C_{pm}$ are displayed as functions of the correlation coefficient $\rho$.

Figure 12: $C_p$ and $C_{pk}$ as well as $C_p(\mathrm{Taam})$ and $C_{pm}$ as functions of $\rho$.

6 Conclusions

In many industrial instances product quality depends on a multitude of dependent characteristics and, therefore, there is an urgent need for appropriate multivariate models and methods. One problem is to define suitable tolerance regions that take into account the correlation structure among the variables. The problem of tolerance regions is closely connected to the problem of deriving process capability indices. Another problem refers to process monitoring for detecting changes in the process distribution. Assuming the normal model might lead to inferior methods and wrong decisions.

References

[1] Anghel, C., Hausberger, H., Streinz, W. (1992): Unsymmetriegrößen erster und zweiter Art richtig auswerten (Teil 1). QZ 37, 755-758.

[2] Anghel, C., Hausberger, H., Streinz, W. (1993): Unsymmetriegrößen erster und zweiter Art richtig auswerten (Teil 2). QZ 38, 37-40.

[3] Anghel, C. (1998): Die Prozesszentrierung richtig beurteilen. QZ 43, 1088-1092.

[4] Castagliola, P. (1996): Evaluation of non-normal process capability indices using Burr's distribution. Quality Engineering 4, 587-593.

[5] Clements, J.A. (1989): Process capability calculations for non-normal distributions. Quality Progress, Sept., 95-96.

[6] Collani, E. v. and Killmann, F. (2001): A note on the convolution of uniform and related distributions and their use in quality control. Economic Quality Control 16, ...

[7] Dietrich, E., Schulze, A. (1996): Fähigkeitsbeurteilung bei Positionstoleranzen. QZ 41, 812-814.

[8] Leone, F.C., Nelson, L.S. and Nottingham, R.B. (1961): The folded normal distribution. Technometrics 3, 543-550.

[9] Littig, S.J. and Pollock, S.M. (1992): Capability measurements for multivariate processes: Definitions and an example for a gear carrier. Technical Report 92-42, Dept. of Industrial and Operations Engineering, University of Michigan, Ann Arbor.

[10] Hamilton, D.C. and Lesperance, M.L. (1995): A comparison of methods for univariate and multivariate acceptance sampling by variables. Technometrics 37, 329-339.

[11] Schneider, H., Kasperski, W.J., Ledford, T. and Kraushaar, W. (1995/96): Control charts for skewed and censored data. Quality Engineering 8, 263-274.

[12] Taam, W., Subbaiah, P. and Liddy, J.W. (1993): A note on multivariate capability indices. Journal of Applied Statistics 20, 339-351.

[13] Wheeler, D.J., Chambers, D.S. (1989): Understanding Statistical Process Control. Statistical Process Controls, Inc.

Dr. C. Anghel
Quality Department
BMW AG
D-84130 Dingolfing
Germany