Introduction to Design of Experiments

Jean-Marc Vincent and Arnaud Legrand
Laboratory ID-IMAG, MESCAL Project, Universities of Grenoble
{Jean-Marc.Vincent,Arnaud.Legrand}@imag.fr
November 20, 2011

Continuous random variable
A random variable (or stochastic variable) is, roughly speaking, a variable whose value results from a measurement. Such a variable makes it possible to model uncertainty resulting from incomplete information or imprecise measurements.
Formally, (Ω, F, P) is a probability space where:
Ω, the sample space, is the set of all possible outcomes (e.g., {1, 2, 3, 4, 5, 6});
F is the set of events, where an event is a set containing zero or more outcomes (e.g., the event of getting an odd number, {1, 3, 5});
the probability measure P : F → [0, 1] is a function returning an event's probability.
Since many computer science experiments are based on time measurements, we focus on continuous variables X : Ω → R.

Probability Distribution
A probability distribution (a.k.a. probability density function or p.d.f.) is used to describe the probabilities of different values occurring. A random variable X has density f, where f is a non-negative and integrable function, if:
P[a ≤ X ≤ b] = ∫_a^b f(x) dx
[Figure: example density curve over the range 0 to 20.]

Expected value
When one speaks of the expected price, expected height, etc., one means the expected value of a random variable that is a price, a height, etc.
E[X] = x_1 p_1 + x_2 p_2 + ... + x_k p_k = ∫ x f(x) dx
The expected value of X is the average value of X. It is not the most probable value. The mean is one aspect of the distribution of X; the median or the mode are other interesting aspects.
The variance is a measure of how far the values of a random variable are spread out from each other. If a random variable X has the expected value (mean) µ = E[X], then the variance of X is given by:
Var(X) = E[(X − µ)²] = ∫ (x − µ)² f(x) dx

How to estimate the Expected value?
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results.
Unfortunately, if you repeat the estimation, you may get a different value, since X is a random variable...
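As a concrete illustration of this estimation step, here is a minimal Python sketch (not part of the original slides; the measurement function and sample size are hypothetical stand-ins) that draws repeated measurements and computes their sample mean and variance:

import random
import statistics

def measure_runtime():
    # Hypothetical stand-in for one timed experiment: an exponentially
    # distributed runtime with true mean 5.0 seconds.
    return random.expovariate(1 / 5.0)

# Repeat the measurement n times and summarize.
n = 50
observations = [measure_runtime() for _ in range(n)]

sample_mean = statistics.mean(observations)           # estimate of E[X]
sample_variance = statistics.variance(observations)   # unbiased estimate of Var(X)

print(f"sample mean     = {sample_mean:.3f}")
print(f"sample variance = {sample_variance:.3f}")
# Running this again gives slightly different numbers: the estimator
# itself is a random variable, which is exactly the point of the slide.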

Central Limit Theorem
Let {X_1, X_2, ..., X_n} be a random sample of size n (i.e., a sequence of independent and identically distributed random variables with expected value µ and variance σ²). The sample average of these random variables is:
S_n = (1/n)(X_1 + ... + X_n)
S_n is a random variable too. For large n, the distribution of S_n is approximately normal with mean µ and variance σ²/n:
S_n ≈ N(µ, σ²/n)

The Normal Distribution
[Figure: densities φ_{µ,σ²}(x) for (µ = 0, σ² = 0.2), (µ = 0, σ² = 1.0), (µ = 0, σ² = 5.0), and (µ = 2, σ² = 0.5).]
The smaller the variance, the more spiky the distribution.

The Normal Distribution
[Figure: standard normal density with the areas per band within 1, 2, and 3 standard deviations of the mean (34.1%, 13.6%, 2.1%, 0.1%).]
The smaller the variance, the more spiky the distribution.
Dark blue is less than one standard deviation from the mean. For the normal distribution, this accounts for about 68% of the set.
Two standard deviations from the mean (medium and dark blue) account for about 95%.
Three standard deviations (light, medium, and dark blue) account for about 99.7%.

CLT Illustration
Start with an arbitrary distribution and compute the distribution of S_n for increasing values of n.
[Figure: distribution of S_n for n = 1, 2, 3, 4, 8, 16, 32.]
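The same illustration can be reproduced numerically. A minimal sketch (assuming an exponential starting distribution, which is not specified in the slide) simulating the distribution of S_n for increasing n:

import random
import statistics

def sample_mean(n):
    # Average of n i.i.d. draws from a skewed (exponential) distribution.
    return statistics.mean(random.expovariate(1.0) for _ in range(n))

# For each n, draw many sample means and look at their spread: the
# distribution tightens around the true mean (1.0) and becomes
# increasingly bell-shaped as n grows.
for n in (1, 2, 4, 8, 16, 32):
    means = [sample_mean(n) for _ in range(10_000)]
    print(f"n = {n:2d}: mean of S_n = {statistics.mean(means):.3f}, "
          f"std of S_n = {statistics.stdev(means):.3f}")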

CLT consequence: confidence interval
[Figure: normal density with ±1σ, ±2σ, ±3σ bands around µ; sample means plotted with their intervals.]
When n is large:
P(µ ∈ [S_n − 2σ/√n, S_n + 2σ/√n]) = P(S_n ∈ [µ − 2σ/√n, µ + 2σ/√n]) ≈ 95%
There is a 95% chance that the true mean lies within 2σ/√n of the sample mean.
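A minimal sketch of this confidence-interval computation (the measurement values below are hypothetical placeholders):

import statistics

# Hypothetical measurements of, e.g., a program's runtime in seconds.
observations = [0.53, 0.49, 0.55, 0.51, 0.50, 0.56, 0.52, 0.54,
                0.48, 0.53, 0.51, 0.55, 0.50, 0.52, 0.54, 0.49]

n = len(observations)
mean = statistics.mean(observations)
# The true σ is unknown, so we plug in the sample standard deviation
# (fine for large n; see the later slide on Student's t for small samples).
std = statistics.stdev(observations)
half_width = 2 * std / n ** 0.5

print(f"mean = {mean:.3f}, 95% CI = [{mean - half_width:.3f}, {mean + half_width:.3f}]")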

Comparing Two Alternatives
Assume you have evaluated two scheduling heuristics A and B on n different DAGs.
[Figure: makespan of A and B with confidence intervals.]
The two 95% confidence intervals do not overlap ⇒ P(µ_A < µ_B) > 90%.

Comparing Two Alternatives
Assume you have evaluated two scheduling heuristics A and B on n different DAGs.
[Figure: makespan of A and B with overlapping confidence intervals.]
The two 95% confidence intervals do overlap ⇒ ?? Reduce the C.I.?

Comparing Two Alternatives
Assume you have evaluated two scheduling heuristics A and B on n different DAGs.
[Figure: makespan of A and B with 70% confidence intervals.]
The two 70% confidence intervals do not overlap ⇒ P(µ_A < µ_B) > 49%. Let's do more experiments instead.

Comparing Two Alternatives
Assume you have evaluated two scheduling heuristics A and B on n different DAGs.
[Figure: makespan of A and B with confidence intervals.]
The width of the confidence interval is proportional to σ/√n. Halving the C.I. requires 4 times more experiments! Try to reduce variance if you can...

Comparing Two Alternatives with Blocking
C.I.s overlap because variance is large. Some DAGs have an intrinsically longer makespan than others, hence a large Var(A) and Var(B).
[Figure: makespan of A, B, and of the paired differences B − A.]
The previous test estimates µ_A and µ_B independently. E[A] < E[B] ⟺ E[A − B] < 0. In the previous evaluation, the same DAG is used for measuring A_i and B_i, hence we can focus on the differences B_i − A_i. Since Var(B − A) is much smaller than Var(A) and Var(B), we can conclude that µ_A < µ_B with 95% confidence.
Relying on such common points is called blocking and enables us to reduce variance.
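A minimal sketch of this paired (blocked) comparison, with hypothetical per-DAG makespans:

import statistics

# Hypothetical makespans of heuristics A and B on the same 10 DAGs.
makespan_A = [120, 340, 95, 410, 215, 180, 505, 260, 150, 390]
makespan_B = [131, 352, 99, 430, 224, 191, 520, 271, 158, 404]

# Unpaired view: huge spread because the DAGs differ a lot from each other.
print("std(A) =", round(statistics.stdev(makespan_A), 1))
print("std(B) =", round(statistics.stdev(makespan_B), 1))

# Blocked view: look at the per-DAG differences B - A instead.
diff = [b - a for a, b in zip(makespan_A, makespan_B)]
n = len(diff)
mean_d = statistics.mean(diff)
half_width = 2 * statistics.stdev(diff) / n ** 0.5

print(f"mean(B - A) = {mean_d:.1f}, 95% CI = "
      f"[{mean_d - half_width:.1f}, {mean_d + half_width:.1f}]")
# If the whole interval lies above 0, we can conclude µ_A < µ_B.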

How Many Replicates?
The CLT says that when n goes large, the sample mean is normally distributed. The CLT uses σ = √Var(X), but we only have the sample variance, not the true variance.
[Figure: sample variance as a function of sample size.]
Q: How many replicates?
A1: How many can you afford?
A2: 30... Rule of thumb: a sample of 30 or more is a big sample, but a sample of 30 or less is a small one (doesn't always work). With fewer than 30, you need to make the C.I. wider using, e.g., Student's t distribution.
Once you have a first C.I. with 30 samples, you can estimate how many samples will be required to answer your question. If that number is too large, then either try to reduce the variance (or the scope of your experiments), or simply explain that the two alternatives are hardly distinguishable...
Running the right number of experiments enables you to reach conclusions more quickly and hence to test other hypotheses.
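A minimal sketch of both points, with hypothetical pilot data: a t-based interval for a small sample, and an estimate of the number of replicates needed for a target precision (the target half-width is a made-up figure):

import statistics
from scipy.stats import t

# Hypothetical pilot sample of 12 measurements.
pilot = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7, 5.5, 5.0, 5.1, 4.9]
n = len(pilot)
mean = statistics.mean(pilot)
s = statistics.stdev(pilot)

# Small sample: widen the interval using Student's t instead of the
# normal factor 2.
t_crit = t.ppf(0.975, df=n - 1)
half_width = t_crit * s / n ** 0.5
print(f"95% CI = [{mean - half_width:.2f}, {mean + half_width:.2f}]")

# How many replicates would give a target half-width of 0.1?
target = 0.1
n_needed = (2 * s / target) ** 2   # from half-width ≈ 2σ/√n
print(f"roughly {n_needed:.0f} replicates needed")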

Key Hypotheses
The hypotheses of the CLT are very weak. Yet, to qualify as replicates, the repeated measurements:
must be independent (take care of warm-up);
must not be part of a time series (the system behavior may temporarily change);
must not come from the same place (the machine may have a problem);
must be of appropriate spatial scale.
Perform graphical checks.

Simple Graphical Check
Fixed Location: if the fixed location assumption holds, then the run sequence plot will be flat and non-drifting.
Fixed Variation: if the fixed variation assumption holds, then the vertical spread in the run sequence plot will be approximately the same over the entire horizontal axis.
Independence: if the randomness assumption holds, then the lag plot will be structureless and random.
Fixed Distribution: if the fixed distribution assumption holds, in particular if the fixed normal distribution holds, then the histogram will be bell-shaped, and the normal probability plot will be linear.

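These four checks can be produced with a few lines of plotting code. A minimal sketch (matplotlib-based, with simulated data standing in for real measurements):

import random
import matplotlib.pyplot as plt
from scipy import stats

# Simulated measurements standing in for a real run sequence.
data = [random.gauss(10.0, 0.5) for _ in range(200)]

fig, axes = plt.subplots(2, 2, figsize=(8, 6))

# Run sequence plot: should be flat, non-drifting, with constant spread.
axes[0, 0].plot(data, marker=".", linestyle="none")
axes[0, 0].set_title("Run sequence plot")

# Lag plot: x_i vs x_{i+1} should look structureless if runs are independent.
axes[0, 1].plot(data[:-1], data[1:], marker=".", linestyle="none")
axes[0, 1].set_title("Lag plot")

# Histogram: should be roughly bell-shaped for a normal sample.
axes[1, 0].hist(data, bins=20)
axes[1, 0].set_title("Histogram")

# Normal probability (Q-Q) plot: should be roughly linear.
stats.probplot(data, dist="norm", plot=axes[1, 1])
axes[1, 1].set_title("Normal probability plot")

plt.tight_layout()
plt.show()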

Comparing Two Alternatives (Blocking + Randomization)
When comparing A and B for different settings, doing A, A, A, A, A, A and then B, B, B, B, B, B is a bad idea. It is better to do A, B, A, B, A, B, A, B, ....
Even better, randomize your run order: flip a coin for each configuration and start with A on heads and with B on tails... A, B, B, A, B, A, A, B, .... With such a design, you will even be able to check whether being the first alternative to run changes something or not.
Each configuration you test should be run on different machines. You should record as much information as you can on how the experiments were performed (http://expo.gforge.inria.fr/).
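A minimal sketch of generating such a randomized run order (the configuration names are hypothetical):

import random

configurations = ["dag01", "dag02", "dag03", "dag04", "dag05", "dag06"]

run_order = []
for config in configurations:
    # Flip a coin per configuration to decide which alternative runs first.
    first, second = ("A", "B") if random.random() < 0.5 else ("B", "A")
    run_order.append((config, first))
    run_order.append((config, second))

for config, heuristic in run_order:
    print(f"run heuristic {heuristic} on {config}")
    # ... launch the experiment and record the result, the machine,
    # the software versions, the date, etc.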

Experimental Design
There are two key concepts: replication and randomization. You replicate to increase reliability. You randomize to reduce bias. If you replicate thoroughly and randomize properly, you will not go far wrong.
Other important issues: parsimony, pseudo-replication, experimental vs. observational data.
It doesn't matter if you cannot do your own advanced statistical analysis. If you designed your experiments properly, you may be able to find somebody to help you with the statistics. If your experiments are not properly designed, then no matter how good you are at statistics, your experimental effort will have been wasted. No amount of high-powered statistical analysis can turn a bad experiment into a good one.

Parsimony
The principle of parsimony is attributed to the 14th-century English philosopher William of Occam: given a set of equally good explanations for a given phenomenon, the correct explanation is the simplest explanation.
Models should have as few parameters as possible.
Linear models should be preferred to non-linear models.
Models should be pared down until they are minimal adequate.
This means a variable should be retained in the model only if it causes a significant increase in deviance when removed from the current model. "A model should be as simple as possible. But no simpler." (A. Einstein)

Replication vs. Pseudo-replication
Measuring the same configuration several times is not replication. It's pseudo-replication and may be biased. Instead, test other configurations (with a good randomization).
In case of pseudo-replication, here is what you can do:
average away the pseudo-replication and carry out your statistical analysis on the means;
carry out separate analyses for each time period;
use proper time series analysis.
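A minimal sketch of the first option, averaging repeated measurements of each configuration before analysis (the data layout and values are hypothetical):

import statistics
from collections import defaultdict

# Hypothetical raw measurements: several repeated runs per configuration.
raw = [
    ("config1", 12.1), ("config1", 12.4), ("config1", 11.9),
    ("config2", 15.2), ("config2", 15.6), ("config2", 15.1),
    ("config3", 13.8), ("config3", 14.0), ("config3", 13.7),
]

# Average away the pseudo-replication: one value per configuration.
by_config = defaultdict(list)
for config, value in raw:
    by_config[config].append(value)

means = {config: statistics.mean(values) for config, values in by_config.items()}
print(means)
# The statistical analysis (CIs, tests, regression, ...) is then carried
# out on these per-configuration means, not on the raw repeated runs.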

Experimental data vs. Observational data
You need a good blend of observation, theory and experiments. Many scientific experiments appear to be carried out with no hypothesis in mind at all, but simply to see what happens. This may be OK in the early stages, but drawing conclusions from such observations is difficult (large number of equally plausible explanations; without a testable prediction, no experimental ingenuity; ...).
Strong inference. Essential steps: (1) formulate a clear hypothesis; (2) devise an acceptable test.
Weak inference. It would be silly to disregard all observational data that do not come from designed experiments. Often, they are the only data we have (e.g., the trace of a system). But we need to keep the limitations of such data in mind. It is possible to use them to derive hypotheses but not to test hypotheses.

Design of Experiments: Goal
Computer scientists tend to either:
vary one parameter at a time and use a very fine sampling of the parameter range, or
run thousands of experiments for a week, varying a lot of parameters, and then try to get something out of it.
Most of the time, they (1) don't know how to analyze the results and (2) realize something went wrong and everything needs to be done again. These two flaws come from poor training and from the fact that C.S. experiments are almost free and very fast to conduct.
Most strategies of experimentation have been designed to:
provide sound answers despite all the randomness and uncontrollable factors;
maximize the amount of information provided by a given set of experiments;
reduce as much as possible the number of experiments to perform to answer a given question under a given level of confidence.

Design of Experiments: Select the problem to study
Clearly define the kind of system to study, the kind of phenomenon to observe (state or evolution of state through time), and the kind of study to conduct (descriptive, exploratory, prediction, hypothesis testing, ...).
For example, the set of experiments to perform when studying the stabilization of a peer-to-peer algorithm under high churn is completely different from the ones to perform when trying to assess the superiority of a scheduling algorithm over another on a wide variety of platforms. It would also be completely different from the experiments to perform when trying to model the response time of a Web server under a workload close to the server's saturation.
This first step makes it possible to decide which kind of design should be used.

Design of Experiments: Define the set of relevant responses
[Figure: black-box model of the system, with controllable factors x_1, ..., x_p and uncontrollable factors z_1, ..., z_q acting on the system, inputs on the left, and an output y.]
The system under study is generally modeled through a black-box model. In our case, the response could be the makespan of a scheduling algorithm, the amount of messages exchanged in a peer-to-peer system, the convergence time of a distributed algorithm, the average length of a random walk, ...
Some of these metrics are simple while others are the result of a complex aggregation of measurements. Many such responses should thus generally be recorded so as to check their correctness.

Design of Experiments: Determine the set of relevant factors or variables
[Figure: the same black-box model; some of the variables (x_1, ..., x_p) are controllable whereas others (z_1, ..., z_q) are uncontrollable.]
In our case, typical controllable variables could be the heuristic used (e.g., FIFO, HEFT, ...) or one of their parameters (e.g., an allowed replication factor, the time-to-live of peer-to-peer requests, ...), the size of the platform or its degree of heterogeneity, ....
In the case of computer simulations, randomness should be controlled and it should thus be possible to completely remove uncontrollable factors. Yet, it may be relevant to consider some factors to be uncontrollable and to feed them with an external source of randomness.

Design of Experiments: Typical case studies
The typical case studies defined in the first step could include:
determining which variables are most influential on the response y (factorial designs, screening designs). This makes it possible to distinguish between primary factors, whose influence on the response should be modeled, and secondary factors, whose impact should be averaged out. It also makes it possible to determine whether some factors interact in the response;
deriving an analytical model of the response y as a function of the primary factors x. This model can then be used to determine where to set the primary factors x so that the response y is always close to a desired value or is minimized/maximized (analysis of variance, regression models, response surface methodology, ...);
determining where to set the primary factors x so that variability in the response y is small;
determining where to set the primary factors x so that the effect of the uncontrollable variables z_1, ..., z_q is minimized (robust designs, Taguchi designs).

Linear Regression
Y = a + bX + ε
Y is the response variable; X is a continuous explanatory variable; a is the intercept; b is the slope; ε is some noise.
When there are 2 explanatory variables:
Y = a + b^(1) X^(1) + b^(2) X^(2) + b^(1,2) X^(1) X^(2) + ε
ε is generally assumed to be independent of the X^(k), hence this needs to be checked once the regression is done.
Even if your phenomenon is not linear, the linear model helps for initial investigations (as a first crude approximation). You should always wonder whether there is a way of looking at your problem in which it is linear.
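A minimal sketch of fitting such a model with an interaction term by ordinary least squares (the data are simulated; numpy's lstsq stands in for whatever statistical package you prefer):

import numpy as np

rng = np.random.default_rng(42)

# Simulated data: Y = 2 + 1.5*X1 - 0.8*X2 + 0.5*X1*X2 + noise.
n = 200
x1 = rng.uniform(0, 10, n)
x2 = rng.uniform(0, 10, n)
y = 2 + 1.5 * x1 - 0.8 * x2 + 0.5 * x1 * x2 + rng.normal(0, 1, n)

# Design matrix with intercept, main effects, and interaction term.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated a, b(1), b(2), b(1,2):", np.round(coef, 2))

# Check the residuals: they should show no structure against the X(k).
residuals = y - X @ coef
print("residual std:", round(residuals.std(), 2))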

2-level factorial design
Decide a low and a high value for every factor.
[Figure: response vs. factor level (Low, High), comparing the true factor effect with the estimate of the factor effect.]
Test every (2^p) combination of high and low values, possibly replicating for each combination. By varying everything, we can detect interactions right away.
Standard way of analyzing this: ANOVA (ANalysis Of VAriance)
enables one to discriminate real effects from noise;
enables one to prove that some parameters have little influence and can be randomized over (possibly with a more elaborate model);
enables one to easily know how to change the factor range when performing the steepest ascent method.
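A minimal sketch of a 2-level full factorial design with two hypothetical factors and a simple main-effect/interaction estimate (a full ANOVA would normally be delegated to a statistical package):

import itertools
import statistics
import random

def run_experiment(cache_size, num_threads):
    # Hypothetical system under study: the response depends on both
    # factors plus an interaction, with measurement noise.
    base = 100 - 10 * cache_size - 5 * num_threads + 2 * cache_size * num_threads
    return base + random.gauss(0, 1)

levels = {"cache_size": (-1, +1), "num_threads": (-1, +1)}  # coded low/high
replicates = 5

results = {}
for combo in itertools.product(*levels.values()):
    results[combo] = [run_experiment(*combo) for _ in range(replicates)]

means = {combo: statistics.mean(vals) for combo, vals in results.items()}

# Effect of a factor = mean response at its high level minus at its low level.
effect_cache = statistics.mean(m for (c, t), m in means.items() if c == +1) - \
               statistics.mean(m for (c, t), m in means.items() if c == -1)
effect_threads = statistics.mean(m for (c, t), m in means.items() if t == +1) - \
                 statistics.mean(m for (c, t), m in means.items() if t == -1)
interaction = statistics.mean(m for (c, t), m in means.items() if c * t == +1) - \
              statistics.mean(m for (c, t), m in means.items() if c * t == -1)

print(f"cache effect = {effect_cache:.1f}, thread effect = {effect_threads:.1f}, "
      f"interaction = {interaction:.1f}")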