Process Characterization Using Response Surface Methodology

A Senior Project
Presented to
The Faculty of the Statistics Department
California Polytechnic State University, San Luis Obispo

In Partial Fulfillment of the Requirements for the Degree
Bachelor of Science, Statistics

By Katherine A. Eng
June 2013

© 2013 Katherine A. Eng

Table of Contents

Introduction
Background
Experimentation
    Experimental Design
    Data Collection
Results
Recommendations
Further Study
References
Acknowledgements

Introduction

A Santa Barbara County engineering firm proposed a collaboration with the Cal Poly Statistics Department to investigate the sources of variability in a certain measurement process, understand the normal operating characteristics of the machine, reduce variability in machine measurements, establish process monitoring and control for the system, and verify the utility of the proposed process control through designed experimentation. The proposed process control will be reviewed for potential integration into the operating specification system, and the two students involved in the project (one statistics major and one materials engineering major) have a unique opportunity to gain valuable experience applying their statistical and engineering knowledge to a real-world problem prior to graduation.

This project is ongoing and may be available for future Cal Poly student involvement, as further experimentation is necessary to devise, implement, and verify statistical process control measures. Participation in such a collaboration requires complete discretion from all contributing Cal Poly members, as all involved parties are under nondisclosure agreements. Proprietary information may not be revealed or released to anyone not directly involved in the project.

Background

All measurement systems have intrinsic variability. Measurement variability is often characterized into two different sources:

1. Repeatability, also called precision variability or equipment variability, is a measure of the dispersion of measurement results when all measurements are made under the same conditions (e.g., same appraiser, same equipment, same production environment, same time period).

2. Reproducibility is a measure of the dispersion of measurement results when the measurement conditions change (e.g., different appraisers, different machines, or different measurement conditions).

The measurement process studied was found to have both repeatability and reproducibility variability in all three measurement outcomes (coded y1, y2, and y3). Three years' worth of testing data have been meticulously recorded by the engineering firm for all tests conducted on a single material type manufactured by one vendor, and a variance component analysis of the data collected on that specimen type indicates that large portions of the observed variability are due to day-to-day effects and test-to-test effects nested within day.

Further analysis of the historical data shows that trends exist in each of the three measurements. Measurements on y1 (Figure 1) and y2 (Figure 2) are regularly overestimated compared to the vendor claims (horizontal line), and measurements on y3 (Figure 3) are regularly underestimated compared to vendor claims. In addition, there are mean shifts in measurement readings from the first specimen measured to the second specimen measured, and so on. Therefore, measurement readings do not appear to be independent of one another.
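The firm's variance component analysis itself is not reproduced in this report. As a hedged sketch of how such an analysis might look, the snippet below fits a random-intercept model (day as the random grouping factor) to simulated data using Python's statsmodels; the column names (y1, day) and all numeric values are invented stand-ins, not the firm's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in for the firm's data: several tests nested within each
# day, with day-to-day and test-within-day variance components (made up).
n_days, tests_per_day = 30, 5
day_effect = rng.normal(0, 2.0, n_days)          # true day-to-day SD = 2.0
day = np.repeat(np.arange(n_days), tests_per_day)
df = pd.DataFrame({"day": day,
                   "y1": 100 + day_effect[day] + rng.normal(0, 1.0, day.size)})

# Random-intercept model: y1 = mu + (day effect) + (test-within-day error).
fit = smf.mixedlm("y1 ~ 1", df, groups=df["day"]).fit()
print("day-to-day variance estimate:     ", float(fit.cov_re.iloc[0, 0]))
print("test-within-day variance estimate:", fit.scale)
```

Large estimated day-to-day variance relative to the within-day scale is the pattern described above for the historical data.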

Figure 1 Mean and range plot illustrating positive bias of y1 measurements compared to vendor claims

Figure 2 Mean and range plot illustrating positive bias of y2 measurements compared to vendor claims

Figure 3 Mean and range plot illustrating negative bias of y3 measurements compared to vendor claims

Experimentation

There are known sources of variability behind these erratic measurements; however, no studies have been conducted to empirically quantify the effects of these sources on the measurement outcomes. The factors investigated in this experiment were chosen to address how controllable parameters influence the measurement outcomes.

Experimental Design

The approved experimental design investigated three factors:

1. Sample holder type
2. Rest time
3. Air flow rate into the machine (air pressure)

Sample holder type is a binary categorical variable; rest time is the duration of machine inactivity between sample tests; and air flow rate is set by an air pressure gradient across a flowmeter. Two levels of holder type (A and B), five levels of rest time, and five levels of air flow were to be tested. The coded levels of the two quantitative factors are shown in Figure 4.

Factor       Coded levels
Rest Time    -√2 = -1.414,  -1,  0,  1,  √2 = 1.414
Pressure     -√2 = -1.414,  -1,  0,  1,  √2 = 1.414

Figure 4 Coded experimental factors and levels
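The actual machine settings behind these coded values are proprietary, so only coded units appear in this report. For illustration only, the standard coding transform maps an actual setting to coded units via the design's center point and half-range; the rest-time numbers below are hypothetical:

```python
def to_coded(actual, center, half_range):
    """Standard coding transform: (actual - center) / half_range."""
    return (actual - center) / half_range

# Hypothetical rest-time scale: center 5 minutes, half-range 2 minutes.
# The five coded CCD levels would then correspond to these actual settings:
for coded in (-1.414, -1, 0, 1, 1.414):
    print(coded, "->", 5 + coded * 2, "minutes")
```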

A central composite design (CCD) is an efficient design that can effectively and confidently predict measurement outcomes at most rest time and pressure combinations within the design space (Figure 5). From the data collected in a CCD, heat release measurements can be predicted from the two quantitative variables in a regression-like fashion using response surface methodology.

Every quantitative treatment combination incorporated into a central composite design is run once, except for the (0, 0) treatment combination (the center runs). Apart from the center runs, this is an unreplicated design; replicating the center-run treatment combination provides an internal estimate of error for the model. The number of center runs that should be conducted is discussed later in this section.

There are three kinds of design points in a central composite design: 1) center runs; 2) factorial runs; and 3) axial runs. The factorial points are the (±1, ±1) treatment combinations. When only factorial runs are conducted, only linear effects and interaction terms can be estimated. Adding center runs allows curvature in the response surface to be detected, and adding axial points, the points coded 0 in one factor and ±√2 in the other, allows a full second-order model to be fit (Figure 6). The magnitude and sign of the second-order coefficients determine the shape of the predicted response surface (Figure 7).

Figure 5 CCD experimental design space

Central composite designs have several desirable properties, including rotatability, sphericity, and stability. When a design is rotatable, the variance of the predicted values (the prediction variance) depends only on a treatment combination's distance from the center of the design space. The prediction variance at any treatment combination is the product of the MSE and that treatment combination's leverage value (Figure 8), so all treatment combinations the same distance from the center have the same prediction variance (Figure 9). To achieve rotatability for this two-factor (k = 2) CCD, the axial distance should be α = √2. Since the design region is spherical (every design point other than the center lies at the same distance, √2, from the center), there is low prediction variance within that spherical region.
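To make the design's geometry concrete, here is a minimal sketch (not the project's actual run sheet) that builds the coded point list for a rotatable k = 2 CCD in NumPy, with the number of center runs left as a parameter:

```python
import numpy as np

def ccd_2factor(n_center=4):
    """Coded design points for a rotatable k = 2 central composite design."""
    factorial = [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # corner points
    a = np.sqrt(2)                                     # axial distance alpha
    axial = [(-a, 0), (a, 0), (0, -a), (0, a)]
    return np.array(factorial + axial + [(0.0, 0.0)] * n_center)

design = ccd_2factor(n_center=4)
print(design.shape)  # (12, 2): 4 factorial + 4 axial + 4 center runs
# Every non-center point sits the same distance sqrt(2) from the origin,
# which is what makes the design spherical:
print(np.round(np.linalg.norm(design[:8], axis=1), 3))  # all 1.414
```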

To make these prediction variances stable throughout the entire design region, an appropriate number of center runs should be conducted. When there are too few center runs (e.g., 2), the prediction variance is not uniform throughout the design region, but with a sufficient number of center runs (e.g., 4), the prediction variance is fairly uniform (Figure 10). To accommodate the binary categorical variable (holder type), a central composite design is run for each holder type, so that two response surfaces, one per holder type, are fit for each of the three measurement outcomes.

Figure 6 A second-order model: y = β0 + β1x1 + β2x2 + β11x1² + β22x2² + β12x1x2 + ε

Figure 7 Second-order response surface shapes (Walker)

Figure 8 The prediction variance formula, Var(ŷ(x0)) = MSE · x0'(X'X)⁻¹x0 (Walker)

Figure 9 Scaled variance contour plot for a k = 2 CCD, α = √2, nc = 5 (Myers)

Figure 10 Prediction variance as a function of the number of center runs (Walker)
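The effect shown in Figure 10 can be reproduced numerically. The self-contained sketch below (same hypothetical CCD layout as above) evaluates the scaled prediction variance N · x0'(X'X)⁻¹x0 of the full second-order model at points increasingly far from the design center, for 2 versus 4 center runs:

```python
import numpy as np

def ccd_2factor(n_center):
    """Coded points for a rotatable k = 2 CCD (same layout as the sketch above)."""
    a = np.sqrt(2)
    pts = [(-1, -1), (-1, 1), (1, -1), (1, 1),
           (-a, 0), (a, 0), (0, -a), (0, a)] + [(0.0, 0.0)] * n_center
    return np.array(pts)

def model_matrix(points):
    """Second-order model matrix: intercept, linear, quadratic, interaction."""
    x1, x2 = points[:, 0], points[:, 1]
    return np.column_stack([np.ones(len(points)), x1, x2, x1**2, x2**2, x1 * x2])

def scaled_pred_var(design, radius):
    """Scaled prediction variance N * x0'(X'X)^-1 x0 at `radius` from the center."""
    X = model_matrix(design)
    x0 = model_matrix(np.array([[radius, 0.0]]))
    return len(design) * (x0 @ np.linalg.inv(X.T @ X) @ x0.T).item()

# With 2 center runs, variance at the center is inflated relative to the
# mid-region; adding center runs flattens the profile near the center.
for n_center in (2, 4):
    d = ccd_2factor(n_center)
    print(n_center, [round(scaled_pred_var(d, r), 2) for r in (0.0, 0.5, 1.0, 1.414)])
```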

Data Collection

On the day of data collection, 25 April 2013, it was discovered that the measurement apparatus was not capable of running the two higher levels of air pressure. Unfortunately, testing had already begun when this limitation was discovered. A CCD could no longer be run, since the CCD factor levels are chosen precisely to obtain the design properties described in the previous section. Data were instead collected using new pressure and rest time treatment combinations chosen on site (Figure 11).

Figure 11 The altered design space

Results

The variability in the sample holder level A data is lower than in the sample holder level B data, and statistically significant response surface models were constructed for y1 (Figure 12) and y2 (Figure 13) when using sample holders at level A. The y1 model is a second-order model, with significant quadratic effects for both rest time and pressure and a significant interaction effect. The y2 model is a first-order model, in which only pressure has a significant linear effect. No significant results were found for the y3 measurement, or for any measurement outcome using level B sample holders.

Figure 12 Second-order response surface for y1, using sample holders at level A

Figure 13 First-order response surface for y2, using sample holders at level A
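The fitted models themselves are proprietary. As a hedged sketch of the fitting step, the snippet below fits a full second-order surface to simulated coded data with statsmodels' formula API and locates the fitted surface's stationary point; every coefficient and value here is invented, not the project's y1 model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Invented coded data (not the project's measurements). Pressure stays at or
# below 0 in coded units, mirroring the machine's limit on high pressure.
n = 40
rest = rng.uniform(-1.414, 1.414, n)
pressure = rng.uniform(-1.414, 0, n)
y1 = (50 + 2.0 * rest - 3.0 * pressure + 1.5 * rest**2
      + 0.8 * pressure**2 + 2.2 * rest * pressure + rng.normal(0, 1, n))
df = pd.DataFrame({"rest": rest, "pressure": pressure, "y1": y1})

# Full second-order (quadratic) response surface model.
fit = smf.ols("y1 ~ rest * pressure + I(rest ** 2) + I(pressure ** 2)", df).fit()
print(fit.summary())

# Stationary point of the fitted surface: solve b + 2 B x = 0 for x.
b = fit.params[["rest", "pressure"]].to_numpy()
B = np.array([[fit.params["I(rest ** 2)"], fit.params["rest:pressure"] / 2],
              [fit.params["rest:pressure"] / 2, fit.params["I(pressure ** 2)"]]])
print("stationary point (coded units):", np.linalg.solve(-2 * B, b))
```

Whether the stationary point is a minimum, maximum, or saddle depends on the signs of the eigenvalues of B, which is the "magnitude and sign of the second-order coefficients" point made in the Experimental Design section.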

Recommendations

Based on these response surface models, the Cal Poly team made the following recommendations to the engineering firm: 1) use the identified factor-level settings that optimize y1 and y2 measurements to match vendor-supplied data; and 2) use level A sample holders to reduce measurement variability in y1 and y2. Implementing these recommendations will decrease variability and diminish biases in the measurement outcomes.

Further Study

Additional experimentation needs to be conducted to observe day-to-day variability in the measurement outcomes. Exploring higher levels of air pressure would also be warranted, since the machine was not capable of running high air pressure levels at the time of experimentation.

References

Myers, Raymond H., and Douglas C. Montgomery. Response Surface Methodology: Process and Product Optimization Using Designed Experiments. New York: Wiley, 1995. Print.

Walker, John. "Response Surface Methodology Part 2." May 2013. Lecture.

Acknowledgements

With thanks to Cal Poly Materials Engineering professor Dr. Trevor Harding and Materials Engineering major Kyle Klitzner for their collaboration, input, and continued commitment to the project; to the professional engineers and technicians who welcomed us into their facilities, and for their time, energy, and dedication; and to my senior project advisor extraordinaire, Dr. Karen McGaughey.