
Chapter 8: Hypothesis Testing

In the previous chapters we used experimental data to estimate parameters. Here we will use data to test hypotheses. A typical example is to test whether the data are compatible with a theoretical prediction, or to choose among different hypotheses the one that best represents the data.

8.1 Hypotheses and test statistics

Let's begin by defining some terminology that we will need in the following. The goal of a statistical test is to make a statement about how well the observed data stand in agreement with (accept) or not (reject) given predicted probabilities, i.e. a hypothesis. The hypothesis under test is usually called the null hypothesis, H_0. The alternative hypothesis, if there is one, is usually called H_1; if there are several alternative hypotheses they are labelled H_1, H_2, ... A hypothesis is called simple if the p.d.f. of the random variable under test is completely specified (e.g. the data are drawn from a Gaussian p.d.f. with specified mean and width), or composite if at least one of the parameters is not specified (e.g. the data are drawn from a Poisson distribution with mean greater than 3).

In order to say in a quantitative way what it means to test a hypothesis, we need to build a function of the measured variables ~x, called a test statistic t(~x). If we build it in a clever way, the test statistic will be distributed differently depending on the hypothesis under test: g(t(~x)|H_0) or g(t(~x)|H_1). This pedantic notation is used here to stress that the test statistic is a function of the data, and that it is the distribution of the test statistic values that differs under the different hypotheses (the lighter notation g(t|H_i) will be used from now on). Comparing the value of the test statistic computed on the actual data with the values obtained under the different hypotheses, we can quantitatively state the level of agreement. That is the general idea; the way this is implemented in practice will be explained in the next sections.

The test statistic can be any function of the data: it can be a multidimensional vector ~t(~x) or a single real number t(~x); even the data themselves {~x} can be used as a test statistic. Collapsing all the information about the data into a single meaningful variable is particularly helpful in visualizing the test statistic and the separation between the two hypotheses. There is no general rule for the choice of the test statistic; the specific choice depends on the particular case at hand. Different test statistics will in general give different results, and it is up to the physicist to decide which is the most appropriate for the specific problem.

Example: To better understand the terminology we can use a specific example based on particle identification. The average specific ionization dE/dx of two charged particles with the same speed passing through matter will be different depending on their masses (see Fig. 8.1.1). Because of this dependence, dE/dx can be used as a particle identification tool to distinguish particle types. For example, the ionization of electrons with momenta in the range of a few GeV tends to be larger than that of pions in the same momentum range. If we want to distinguish an electron from a pion in a given momentum bin, we can use the specific ionization itself as test statistic, t(~x) = dE/dx. This is a typical case where the data themselves are used as test statistic. The test statistic will then be distributed differently under the two following hypotheses (see Fig. 8.1.1, right):

    null hypothesis:        g(t|H_0) = P(dE/dx | e±)
    alternative hypothesis: g(t|H_1) = P(dE/dx | π±)

Figure 8.1.1: Left: the specific ionization for some particle types (pions in green, electrons in red; other particle species are shown with different colors). Right: the projections of the left plot onto the y-axis, i.e. the measured specific ionization for pions and electrons.

Example: When testing data for the presence of a signal, we define the null hypothesis as the background-only hypothesis and the alternative hypothesis as the signal+background hypothesis.

Example: Fig. 8.1.2 shows the cross section σ(e+e− → W+W−(γ)) measured by the L3 collaboration at different centre-of-mass energies. In this case the test statistic is the cross section as a function of energy. The measured values are compared with different theoretical models (different hypotheses). We have not yet explained how to quantitatively accept or reject a hypothesis, but already at a naive level we can see that the data clearly prefer one of the models.

Figure 8.1.2: Analysis of the cross section of e+e− → W+W−(γ) as a function of the centre-of-mass energy (L3 detector at LEP).

The p.d.f. describing the test statistic under a certain hypothesis, g(~t|H), is usually built from a data set that has precisely the characteristics associated with that hypothesis. In the particle identification example discussed above, the data used to build the p.d.f. for the two hypotheses were pure samples of electrons and pure samples of pions. For example, you can get a pure sample of electrons by selecting tracks from photon conversions γ → e+e−, and a pure sample of pions from the self-tagging decays of charmed mesons, D*+ → π+ D0 with D0 → K− π+ (and D*− → π− D̄0 with D̄0 → K+ π−); self-tagging means that by knowing the charge of the pion in the first decay you can unambiguously assign the pion/kaon hypothesis to the positively/negatively charged track of the second decay. In other cases the p.d.f. are built from dedicated measurements (e.g. a test beam¹) or from Monte Carlo simulations.

¹ In a test beam, a beam of particles is prepared in a well-defined condition (particle type, energy, etc.) and is typically used to test a device under development. This configuration inverts the typical experimental situation, where a device with known properties is used to characterize particles in a beam or from collisions.

8.2 Significance, power, consistency and bias

In order to accept or reject a null hypothesis we partition the space of test statistic values into a critical (rejection) region and its complement, the acceptance region (see Fig. 8.2.3), such that there is a small probability, assuming H_0 to be correct, to observe data with a test statistic in the critical region. The value of the test statistic chosen to define the two regions is called the decision boundary, t_cut. If the value of the test statistic computed on the data sample under test falls in the rejection region, the null hypothesis is discarded; otherwise it is accepted (or, more precisely, not rejected).

Figure 8.2.3: A test statistic distribution (in red), with an acceptance region t ≤ t_cut and a rejection region t > t_cut.

Given a test statistic, some parameters are usually defined when sizing the rejection region. The first one is the significance level α of the test (see Fig. 8.2.4). It is defined as the integral of the null hypothesis p.d.f. above the decision boundary:

    \alpha = \int_{t_{cut}}^{\infty} g(t|H_0)\, dt    (8.2.1)

The probability α can be read as the probability to reject H_0 even though H_0 is in reality correct. This is called an error of the first kind. If we have an alternative hypothesis H_1, an error of the second kind occurs when H_0 is accepted but the correct hypothesis is in reality the alternative one, H_1. The integral of the alternative hypothesis p.d.f. below t_cut,

    \beta = \int_{-\infty}^{t_{cut}} g(t|H_1)\, dt ,    (8.2.2)

is the probability of an error of the second kind, and 1 − β is called the power of the test to discriminate against the alternative hypothesis H_1 (see Fig. 8.2.4). A good test has both α and β small, which is equivalent to saying high significance and high power; this means that H_0 and H_1 are well separated.

Figure 8.2.4: Illustration of the acceptance and rejection regions both for the hypothesis H_0 (on the left hand side) and for the alternative H_1 (on the right hand side), under the same choice of decision boundary.
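As a purely illustrative numerical sketch (not part of the original notes), α and β from Eqs. (8.2.1) and (8.2.2) can be computed directly from the Gaussian c.d.f. if one assumes Gaussian test-statistic distributions under the two hypotheses; the means, widths and cut below are invented for the example:

    # Illustrative sketch only: alpha and beta for a one-sided cut t > t_cut,
    # assuming Gaussian test-statistic distributions under H0 and H1.
    from scipy.stats import norm

    mu0, sigma0 = 0.0, 1.0    # g(t|H0), assumed
    mu1, sigma1 = 3.0, 1.0    # g(t|H1), assumed
    t_cut = 1.64              # decision boundary, assumed

    alpha = norm.sf(t_cut, loc=mu0, scale=sigma0)   # Eq. (8.2.1): P(t > t_cut | H0)
    beta  = norm.cdf(t_cut, loc=mu1, scale=sigma1)  # Eq. (8.2.2): P(t < t_cut | H1)
    print(f"alpha = {alpha:.3f}, beta = {beta:.3f}, power = {1 - beta:.3f}")

Moving t_cut to larger values decreases α but increases β, which is exactly the trade-off discussed below.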

Fig. 8.2.5 summarizes the different ways of mistakenly interpreting the data in terms of errors of the first and second kind. While errors of the first kind can be controlled by choosing α sufficiently small, errors of the second kind, which depend on the separation between the two hypotheses, are not as easily controllable. In HEP searches we typically speak of evidence when α ≤ 1.3 × 10⁻³ and of discovery when α ≤ 3 × 10⁻⁷ (corresponding to the one-sided Gaussian tail probabilities beyond 3σ and 5σ, respectively); these numbers are purely conventional and have no deep scientific ground. They are defined this way to set a high threshold for such important claims about the observation of new phenomena.

Figure 8.2.5: Example of errors of the first and second kind (Wikipedia).

Example: Consider a machine BM1 which is used for bonding wires of Si-detector modules. The produced detectors had a scrap rate of P_0 = 0.2. This machine BM1 should be replaced by a newer bonding machine BM2 if (and only if) the new machine can produce detector modules with a lower scrap rate P. In a test run we produce n = 30 modules with the new machine. To verify P < P_0 statistically, we use the hypothesis test discussed above. Define the two hypotheses H_0 and H_1 as:

    H_0: P ≥ 0.2 ;    H_1: P < 0.2 .    (8.2.3)

We choose α = 0.05 and take as test statistic t the number of malfunctioning detector modules. This quantity is distributed according to a binomial distribution with the total number of produced modules n = 30 and a probability P. The rejection region for H_0 is constructed from

    \sum_{i=0}^{n_c} \binom{n}{i}\, P_0^{\,i}\, (1 - P_0)^{n-i} < \alpha .    (8.2.4)

Here the critical value is denoted by n_c; it is the maximal number of malfunctioning modules produced by BM2 which still implies a rejection of H_0 at the chosen confidence level. Going through the calculation we find that for n_c = 2 the value of the sum is still just below 0.05. Thus the rejection region for H_0 is K = {0, 1, 2}. This means that if we find two or fewer malfunctioning modules produced by BM2 we will replace BM1 by the new machine BM2; if there are 3 or more malfunctioning detector modules, the old bonding machine BM1 should be preferred.
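A minimal sketch of this calculation, scanning Eq. (8.2.4) for the critical value n_c with scipy's binomial c.d.f. (the numbers are those of the example above):

    from scipy.stats import binom

    n, P0, alpha = 30, 0.2, 0.05

    n_c = -1
    while binom.cdf(n_c + 1, n, P0) < alpha:   # Eq. (8.2.4): P(X <= n_c | H0) < alpha
        n_c += 1

    print("critical value n_c    =", n_c)                    # -> 2
    print("P(X <= n_c | P = 0.2) =", binom.cdf(n_c, n, P0))  # -> ~0.044, just below 0.05
    # Rejection region K = {0, 1, 2}: two or fewer bad modules reject H0.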

Once the test statistic is defined there is a trade-off between α and β: the smaller you make α, the larger β becomes; it is up to the experimenter to decide what is acceptable and what is not.

Example: Suppose we want to distinguish Kp elastic scattering events from inelastic scattering events where a π⁰ is produced: H_0: Kp → Kp ; H_1: Kp → Kp π⁰. The detector used for this experiment is a spectrometer capable of measuring the momenta of all the charged particles (K, p) but blind to neutral particles (π⁰). The test statistic considered is the missing mass M, defined as the difference between the initial and the final visible mass. The true value of the missing mass is M = 0 under the null hypothesis H_0 (no π⁰ produced) and M = m_{π⁰} = 135 MeV/c² under the alternative hypothesis H_1 (a π⁰ is produced). The critical region can be defined as M > M_c. The value of M_c depends on the significance and power we want to obtain (see Fig. 8.2.6): a high value of M_c corresponds to a high significance at the expense of the power, while a low value of M_c results in a high power but low significance.

Figure 8.2.6: Top: the p.d.f. of the test statistic M under the null hypothesis of elastic scattering H_0, centred at M = 0; bottom: the p.d.f. of the test statistic under the alternative hypothesis of inelastic scattering H_1, centred at M = m_{π⁰}. M_c defines the critical region.

Some caution is necessary when using α. Suppose you have 20 researchers looking for a new phenomenon which in reality does not exist. Their H_0 hypothesis is that what they see is only background. One of them is liable to reject H_0 with α = 5%, while the other 19 will not. This is part of the game, and therefore, before rushing to publication, that researcher should balance the claim against what the others do not see. That is the main reason why, anytime there is a discovery claim, we always need the result to be corroborated by independent measurements. We will come back to this point when we talk about the look-elsewhere effect.

Example: Let us use again the example of the electron/pion separation. As already shown, the specific ionization dE/dx of a charged particle can be used as a test statistic to distinguish particle types, for example electrons (e) from pions (π) (see Fig. 8.1.1). The selection efficiency is defined as the probability for a particle to pass the selection cut:

    \epsilon_e   = \int_{-\infty}^{t_{cut}} g(t|e)\, dt   = 1 - \alpha
    \epsilon_\pi = \int_{-\infty}^{t_{cut}} g(t|\pi)\, dt = \beta    (8.2.5)

By moving the value of t_cut you change the composition of your sample: a looser cut gives a larger electron efficiency but also a larger contamination from pions, and vice versa. In general, one can set a value of t_cut, select a sample, and work out the number of electrons N_e present in the initial sample (before the requirement t < t_cut). The number of accepted particles in the sample is

    N_{acc} = \epsilon_e N_e + \epsilon_\pi N_\pi = \epsilon_e N_e + \epsilon_\pi (N_{tot} - N_e) ,    (8.2.6)

which gives

    N_e = \frac{N_{acc} - \epsilon_\pi N_{tot}}{\epsilon_e - \epsilon_\pi} .    (8.2.7)

From this one immediately notices that N_e can only be calculated if ε_e ≠ ε_π, i.e. N_e can only be extracted if there is any separation power at all. If there are systematic uncertainties on ε_e or ε_π, these will translate into an uncertainty on N_e. One should try to select the critical region t_cut such that the total uncertainty on N_e is as small as possible.

Up to now we have only used the p.d.f. describing the probability that an electron/pion gives a certain amount of ionization; using Bayes' theorem we can invert the problem and ask what is the probability that a particle releasing a given ionization signal t is an electron or a pion:

    h(e|t)   = \frac{a_e\, g(t|e)}{a_e\, g(t|e) + a_\pi\, g(t|\pi)}    (8.2.8)

    h(\pi|t) = \frac{a_\pi\, g(t|\pi)}{a_e\, g(t|e) + a_\pi\, g(t|\pi)}    (8.2.9)

where a_e and a_π = 1 − a_e are the prior probabilities for the electron and pion hypotheses. So to give the probability that a particle is an electron (or better, the degree of belief that a given particle with a measured t is an electron) one needs to know the prior probabilities for all possible hypotheses as well as their p.d.f.

The other side of the problem is to estimate the purity p_e of the sample of candidates passing the requirement t < t_cut:

    p_e = \frac{\text{number of electrons with } t < t_{cut}}{\text{number of particles with } t < t_{cut}}    (8.2.10)

        = \frac{\int_{-\infty}^{t_{cut}} a_e\, g(t|e)\, dt}{\int_{-\infty}^{t_{cut}} \left[ a_e\, g(t|e) + (1 - a_e)\, g(t|\pi) \right] dt}    (8.2.11)

        = \frac{a_e\, \epsilon_e\, N_{tot}}{N_{acc}}    (8.2.12)
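The following sketch works through Eqs. (8.2.7) and (8.2.12) with invented efficiencies and sample sizes (none of these numbers come from the notes); it simply shows how the electron yield and the purity follow from the accepted count once ε_e and ε_π are known:

    # Illustrative numbers only (assumed, not from the notes).
    eps_e, eps_pi = 0.90, 0.05    # electron / pion efficiency of the cut
    N_tot, N_acc  = 10000, 2300   # candidates before / after the cut

    # Eq. (8.2.7): invert N_acc = eps_e*N_e + eps_pi*(N_tot - N_e); needs eps_e != eps_pi.
    N_e = (N_acc - eps_pi * N_tot) / (eps_e - eps_pi)

    # Eq. (8.2.12): purity of the accepted sample, with prior a_e = N_e / N_tot.
    a_e = N_e / N_tot
    purity = a_e * eps_e * N_tot / N_acc

    print(f"estimated electrons N_e = {N_e:.0f}, purity = {purity:.2f}")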

In high energy physics a parallel nomenclature has developed over time to express the same concepts we have encountered in this section. Typically we call:

- background efficiency = probability to accept background = α = probability of a type-I error;
- signal efficiency = power of the test = 1 − β, where β is the probability of a type-II error;
- purity = probability for an event to be signal, once we have accepted it as signal.

8.3 Is there a signal?

A typical application of hypothesis testing in high energy physics is to test for the presence of a signal in the data. The easiest case is represented by counting experiments. In this type of experiment the detector is used to count the number of events satisfying some selection criteria (slang: "cut-and-count"). The number of events expected under the background-only hypothesis is compared with the measured number, and the signal would typically appear as an excess over the expected background.²

² The signal does not always appear as an excess of events: in neutrino disappearance experiments, for example, the signal is given by a deficit of events.

Let n be the number of observed events, the sum of some signal and some background events, n = n_s + n_b. Each of the two components can be treated as a Poisson variable with mean ν_s (signal) and ν_b (background), and so the total ν = ν_s + ν_b is also a Poisson variable. The probability to observe n events is:

    f(n; \nu_s, \nu_b) = \frac{(\nu_s + \nu_b)^n}{n!}\, e^{-(\nu_s + \nu_b)}    (8.3.13)

Suppose you measure n_obs events. To quantify our degree of confidence in the discovery of a new phenomenon, i.e. ν_s ≠ 0, we can compute how likely it is to find n_obs events or more from background alone:

    P(n \ge n_{obs}) = \sum_{n = n_{obs}}^{\infty} f(n; \nu_s = 0, \nu_b)
                     = 1 - \sum_{n=0}^{n_{obs}-1} f(n; \nu_s = 0, \nu_b)
                     = 1 - \sum_{n=0}^{n_{obs}-1} \frac{\nu_b^{\,n}}{n!}\, e^{-\nu_b} .    (8.3.14)

For example, if we expect ν_b = 0.5 background events and we observe n_obs = 5, then the p-value from (8.3.14) is 1.7 × 10⁻⁴. This is not the probability of the hypothesis ν_s = 0. It is rather the probability, under the assumption ν_s = 0, of obtaining as many events as observed or more.

Often the result of a measurement is given as the estimated number of events plus or minus one standard deviation. Since the standard deviation of a Poisson variable is equal to the square root of its mean, from the previous example we would quote 5 ± √5 for our estimate of ν, i.e., after subtracting the expected background, 4.5 ± 2.2 for our estimate of ν_s. This is very misleading: it is only about two standard deviations from zero, giving the impression that ν_s is not very incompatible with zero, but we have seen from the p-value that this is not the case. The subtlety is that we need to ask for the probability that a Poisson variable of mean ν_b fluctuates up to n_obs or higher, not for the probability that a Gaussian variable with mean n_obs fluctuates down to ν_b or lower.
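A short sketch reproducing the p-value of Eq. (8.3.14) for this counting example (and for the shifted background discussed next):

    from scipy.stats import norm, poisson

    n_obs = 5
    for nu_b in (0.5, 0.8):
        p = poisson.sf(n_obs - 1, nu_b)   # P(n >= n_obs) under the background-only hypothesis
        z = norm.isf(p)                   # equivalent one-sided Gaussian significance
        print(f"nu_b = {nu_b}: p = {p:.1e}  (about {z:.1f} sigma)")
    # -> 1.7e-4 for nu_b = 0.5 and 1.4e-3 for nu_b = 0.8, as quoted in the text.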

Another important point is that usually ν_b is known only with some uncertainty. If we set ν_b = 0.8 rather than 0.5, the p-value increases by almost an order of magnitude, to 1.4 × 10⁻³. It is therefore crucial to quantify the systematic uncertainty on the background when evaluating the significance of a new effect.

In other types of searches the signal reveals itself as a resonance, i.e. an excess of data in a localized region of a mass spectrum (slang: "bump hunt"), or as an excess of events in the tail of a distribution. Two examples are shown in Fig. 8.3.7. In these cases the signal is extracted from the background using a fit (more on this will be developed in the next sections): on top of the number of expected events, we also use the information about the shape.

Figure 8.3.7: Left: Higgs boson search; the data are well described by the background-only hypothesis. Right: search for an excess of events at high missing transverse energy.

8.4 Neyman-Pearson lemma

We have not yet addressed the choice of t_cut. The only thing we know so far is that it affects the efficiency and the purity of the sample under study. Ideally we want to set the desired efficiency and, for that value, obtain the best possible purity. Take the case of a simple hypothesis H_0 and a simple alternative hypothesis H_1 (e.g. the typical situation of signal and background). The Neyman-Pearson lemma states that the acceptance region giving the highest power (i.e. the highest purity) for a given significance level is the region of ~t-space such that

    \frac{g(\vec{t}\,|H_0)}{g(\vec{t}\,|H_1)} > c ,    (8.4.15)

where c is the knob we can tune to achieve the desired efficiency, and g(~t|H_i) is the distribution of ~t under the hypothesis H_i.
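The lemma can be illustrated with a small toy (all parameters below are assumptions made up for the sketch, not part of the notes): for two simple Gaussian hypotheses, the likelihood ratio is used as test statistic, the cut c is fixed by the desired size α under H_0, and the power is then read off under H_1:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    x0 = rng.normal(0.0, 1.0, size=100_000)   # pseudo-data under H0 (assumed model)
    x1 = rng.normal(1.5, 1.2, size=100_000)   # pseudo-data under H1 (assumed model)

    def log_r(x):
        # log likelihood ratio log[g(x|H0)/g(x|H1)]; any monotonic function of r works
        return norm.logpdf(x, 0.0, 1.0) - norm.logpdf(x, 1.5, 1.2)

    alpha = 0.05
    c = np.quantile(log_r(x0), alpha)   # reject H0 when log r < c, size alpha under H0
    power = np.mean(log_r(x1) < c)      # probability to reject H0 when H1 is true
    print(f"cut on log r = {c:.2f}, power at alpha = {alpha}: {power:.2f}")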

Basically what the lemma says is that the function r, defined as

    r = \frac{g(\vec{t}\,|H_0)}{g(\vec{t}\,|H_1)} ,

reduces the problem to a one-dimensional one and gives the best purity for a fixed efficiency. The function r is called the likelihood ratio for the simple hypotheses H_0 and H_1 (in the likelihood the data are fixed; the hypothesis is the variable). The corresponding acceptance region is given by r > c. Any monotonic function of r will do equally well and will lead to the same test.

The main drawback of the Neyman-Pearson lemma is that it is valid if and only if both H_0 and H_1 are simple hypotheses (and that is pretty rare). Even in those cases, in order to determine c one needs to know g(t|H_0) and g(t|H_1). These must be determined from Monte Carlo simulations (or data-driven techniques) for both hypotheses. The resulting p.d.f. is represented by a multidimensional histogram, and this can cause trouble when the dimensionality of the problem is high: say we have M bins for each of the n components of ~t, then the total number of bins is M^n, i.e. M^n parameters must be determined from Monte Carlo or data. A way to address this problem is to use multivariate techniques, as we will see in the chapter on multivariate analysis.

8.5 Goodness of Fit

A typical application of hypothesis testing is the goodness of fit: quantifying how well the null hypothesis H_0 (a function f(x)) describes a sample of data, without any specific reference to an alternative hypothesis. The test statistic has to be constructed such that it reflects the level of agreement between the observed data and the predictions of H_0 (i.e. the values of f(x)). The p-value is the probability, under the assumption of H_0, to observe data with equal or lesser compatibility with H_0, relative to the data we actually got. N.B.: it is not the probability that H_0 is true! For a frequentist the probability of a hypothesis is not even defined: probabilities are defined on the data. For a Bayesian the probability of the hypothesis is a different thing, and it is defined through Bayes' theorem using the prior probability of the hypothesis.

8.5.1 The χ²-test

We have already encountered the χ² as a goodness-of-fit test in Sec. 7.5. The χ²-test is by far the most commonly used goodness-of-fit test. Its first application is to a set of measurements x_i and y_i, where the x_i are supposed to be exact (or at least to have negligible uncertainty) and the y_i are known with an uncertainty σ_i. We want to test a function f(x) which we believe gives (predicts) the correct value of y_i for each value of x_i; to do so we define the χ² as:

    \chi^2 = \sum_{i=1}^{N} \frac{[y_i - f(x_i)]^2}{\sigma_i^2} .    (8.5.16)

If the uncertainties on the y_i measurements are correlated, the above formula becomes (in the lighter matrix notation, see Sec. 7.3):

    \chi^2 = (\vec{y} - \vec{f}\,)^{T}\, V^{-1}\, (\vec{y} - \vec{f}\,)    (8.5.17)

where V is the covariance matrix. A function that correctly describes the data will give small differences between the values predicted by the function f and the measurements y_i. These differences reflect the statistical uncertainties on the measurements, so for N measurements the χ² should be roughly N. Recalling the p.d.f. of the χ² distribution,

    P(\chi^2; N) = \frac{1}{2^{N/2}\, \Gamma(N/2)}\, (\chi^2)^{N/2 - 1}\, e^{-\chi^2/2}    (8.5.18)

(whose expectation value is N, so that χ²/N ≈ 1), we can base our decision boundary on the goodness of fit by defining the p-value

    p = \mathrm{Prob}(\chi^2; N) = \int_{\chi^2}^{\infty} P(\chi'^2; N)\, d\chi'^2 ,    (8.5.19)

which is called the χ² probability. This expression gives the probability that the function describing the N measured data points yields a χ² as large as or larger than the one obtained from our measurement.

Example: Suppose you compute a χ² of 20 for N = 5 points. The naive reaction is that the function is a very poor model of the data (20/5 = 4). To quantify that, we compute the χ² probability ∫_{20}^{∞} P(χ²; 5) dχ². In ROOT you can compute this as TMath::Prob(20,5) = 0.0012. The probability is indeed very small and the H_0 hypothesis should be discarded.

You have to be careful when using the χ² probability to take decisions. If the χ² is large, giving a very small χ² probability, it could be either that the function f is a bad representation of the data or that the uncertainties are underestimated. On the other hand, if you obtain a very small value of the χ², the function cannot be blamed, so you might have overestimated the uncertainties. It is up to you to interpret correctly the meaning of the χ² probability. A very useful tool for this purpose is the pull distribution (see Chapter 6), where each entry is defined as (measured − predicted)/uncertainty = (y_i − f(x_i))/σ_i. If everything is done correctly (i.e. the model is correct and the uncertainties are computed correctly) the pulls will follow a normal distribution centred at 0 with width 1. If the pull distribution is not centred at 0 (bias), the model is incorrect; if it has a width larger than 1, either the uncertainties are underestimated or the model is wrong; if it has a width smaller than 1, the uncertainties are overestimated.
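The same number can be obtained outside ROOT, e.g. with scipy (a one-line sketch of Eq. (8.5.19)):

    from scipy.stats import chi2

    p = chi2.sf(20.0, 5)                  # P(chi2 >= 20 for N = 5), same as TMath::Prob(20, 5)
    print(f"chi2 probability = {p:.4f}")  # -> ~0.0012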

8.5.2 Degrees of freedom for a χ²-test on fitted data

The χ² probability developed above only works if you are given a set of data points and a function (model). If the function comes out of a fit to the same data then, by construction, you will get a χ² which is smaller than expected, because you fitted the parameters of the function precisely in order to minimize it. This problem turns out to be very easy to treat: you just need to change the number of degrees of freedom in the computation. For example, suppose you have N points and you fitted m parameters of your function to minimize the sum; then all you have to do to compute the χ² probability is to reduce the number of degrees of freedom to n = N − m.

Example: You have a set of 20 points; using a straight line as the function f(x) you get χ² = 36.3, while using a parabola you get χ² = 20.1. The straight line has 2 free parameters (slope and intercept), so the number of d.o.f. of the problem is 20 − 2 = 18; the χ² probability is TMath::Prob(36.3,18) = 0.0065, which makes the hypothesis that the data are described by a straight line improbable. If you now fit with a parabola (3 parameters, 17 d.o.f.) you get TMath::Prob(20.1,17) = 0.27, which means that you cannot reject the hypothesis that the data are distributed according to a parabolic shape.
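A sketch of the same bookkeeping in scipy, using the χ² values quoted in the example and the reduced numbers of degrees of freedom:

    from scipy.stats import chi2

    n_points = 20
    for model, chisq, n_par in [("straight line", 36.3, 2), ("parabola", 20.1, 3)]:
        ndof = n_points - n_par
        print(f"{model:13s}: chi2 = {chisq:5.1f}, ndof = {ndof}, "
              f"p = {chi2.sf(chisq, ndof):.4f}")
    # -> p ~ 0.0065 for the straight line (rejected), p ~ 0.27 for the parabola (not rejected).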

Notes on the χ²-test:

- For a large number of degrees of freedom, the distribution of √(2χ²) can be approximated by a Gaussian with mean √(2n − 1) and standard deviation 1. When in the past the integrals were extracted from tables this was a neat trick; it is still a useful simplification when the χ² is used in some explicit calculation.

- The χ²-test can also be used as a goodness-of-fit test for binned data. The number of events in bin i (i = 1, 2, ..., n) is y_i, with bin i having mean value x_i, so the predicted number of events is f(x_i). The uncertainties are given by Poisson statistics in each bin (√f(x_i)) and the χ² is

      \chi^2 = \sum_{i=1}^{n} \frac{[y_i - f(x_i)]^2}{f(x_i)} ,    (8.5.20)

  where the number of degrees of freedom is given by the number of bins minus the number of fitted parameters (do not forget the overall normalization of the model when counting the fitted parameters).

- When binning data, you should try to have enough entries per bin that the computation of the χ² is actually meaningful; as a rule of thumb you should have at least 5 entries per bin.

- Most of the results for binned data are only true asymptotically, e.g. the normal limit of the multinomial p.d.f. or the asymptotic χ² distribution of −2 ln λ.

8.5.3 Run test

The χ² collapses into one number the level of agreement between a hypothesis and a set of data. There are cases where behind a good χ² hides in reality a very poor agreement between the data and the model. Consider the situation illustrated in Fig. 8.5.8: the data points are fitted by a straight line which clearly does not describe the data adequately; nevertheless, in this example, the χ² is 12.0 and thus χ²/n = 1. In cases such as this one the run test provides important extra information.

Figure 8.5.8: Example for the application of the run test. The dashed line is the hypothesized fit (a straight line), whereas the crosses are the actual data.

The run test works like this: every time the measured data point lies Above the function, we write an A in a sequence, and every time the data point lies Below the function, we write a B. If the data are distributed according to the hypothesized function, they should fluctuate up and down, creating very short sequences of A's and B's (runs). The sequence in the picture reads AAABBBBBBAAA, making only three runs and possibly pointing to a poor description of the data.

The probability of the A's and B's giving a particular number of runs can be calculated. Suppose there are N_A points above and N_B points below, with N = N_A + N_B. The total number of possible arrangements without repetition is given by (see Chapter 2):

    \binom{N}{N_A} = \frac{N!}{N_A!\, N_B!}    (8.5.21)

and this will be our denominator. For the numerator, suppose that r is even and the sequence starts with an A. There are then N_A A-points split into r/2 groups, separated by divisions occupied by B's; the positions of the r/2 − 1 internal dividing lines can be chosen among the N_A − 1 gaps between the A-points, giving \binom{N_A - 1}{r/2 - 1} different A arrangements. The same argument can be made for the B's. So we find for the probability of r runs:

    P_r = 2\, \frac{\binom{N_A - 1}{r/2 - 1} \binom{N_B - 1}{r/2 - 1}}{\binom{N}{N_A}} ,    (8.5.22)

where the extra factor of 2 is there because we chose to start with an A and we could equally have started with a B. When r is odd you get:

    P_r = \frac{\binom{N_A - 1}{(r-3)/2} \binom{N_B - 1}{(r-1)/2} + \binom{N_A - 1}{(r-1)/2} \binom{N_B - 1}{(r-3)/2}}{\binom{N}{N_A}}    (8.5.23)

These are the probabilities to get r runs in a sequence of A's and B's. It can be shown that

    \langle r \rangle = 1 + \frac{2\, N_A N_B}{N}    (8.5.24)

    V(r) = \frac{2\, N_A N_B\, (2\, N_A N_B - N)}{N^2\, (N - 1)}    (8.5.25)

In the example above the number of expected runs is ⟨r⟩ = 1 + 2·6·6/12 = 7 with σ = 1.65. The deviation between the expected and the observed number of runs is 7 − 3 = 4, which amounts to 2.4 standard deviations and is significant at the 1% level for a one-sided test. Thus the straight-line fit can be rejected despite the (far too) good χ² value. The run test does not substitute the χ² test; it is in a sense complementary: the χ² test ignores the signs of the fluctuations, while the run test looks only at them.
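A sketch implementing Eqs. (8.5.22)-(8.5.25) for the configuration of this example (N_A = N_B = 6, three observed runs); in addition to the Gaussian approximation used above, it gives the exact tail probability P(r ≤ 3):

    from math import comb, sqrt

    N_A, N_B, r_obs = 6, 6, 3
    N = N_A + N_B

    def p_runs(r, na, nb):
        # probability of exactly r runs, Eqs. (8.5.22) (r even) and (8.5.23) (r odd)
        denom = comb(na + nb, na)
        if r % 2 == 0:
            k = r // 2
            return 2 * comb(na - 1, k - 1) * comb(nb - 1, k - 1) / denom
        k = (r - 1) // 2
        return (comb(na - 1, k - 1) * comb(nb - 1, k)
                + comb(na - 1, k) * comb(nb - 1, k - 1)) / denom

    mean_r = 1 + 2 * N_A * N_B / N                                          # Eq. (8.5.24) -> 7
    sigma_r = sqrt(2 * N_A * N_B * (2 * N_A * N_B - N) / (N**2 * (N - 1)))  # Eq. (8.5.25) -> 1.65
    p_tail = sum(p_runs(r, N_A, N_B) for r in range(2, r_obs + 1))          # exact P(r <= 3)
    print(f"<r> = {mean_r:.0f}, sigma = {sigma_r:.2f}, P(r <= {r_obs}) = {p_tail:.3f}")

The exact tail probability, about 1.3%, is consistent with the roughly 1% obtained from the Gaussian approximation above.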

8.5.4 Unbinned tests

Unbinned tests are used when the binning procedure would result in too large a loss of information (e.g. when the data set is small). They are all based on the comparison of the cumulative distribution function (c.d.f.) F(x) of the model f(x) under some hypothesis H_0 with the c.d.f. of the data. To define a c.d.f. on data we first define an order statistic, i.e. a rule to order the data³, and then define on it the empirical cumulative distribution function (e.c.d.f.):

    S_n(x) = \begin{cases} 0, & x < x_1 \\ r/n, & x_r \le x < x_{r+1} \\ 1, & x_n \le x \end{cases}    (8.5.26)

This is just the fraction of events not exceeding x (a staircase function rising from 0 to 1); see Fig. 8.5.10.

³ In one dimension the ordering is trivial (ascending or descending); in n dimensions it is arbitrary: you have to choose a convention and map it onto a one-dimensional sequence.

The first unbinned test we describe is the Smirnov-Cramér-von Mises test. We define a measure of the distance between S_n(x) and F(x) as:

    W^2 = \int_{-\infty}^{+\infty} \left[ S_n(x) - F(x) \right]^2 dF(x)    (8.5.27)

(in general dF(x) can be replaced by any non-decreasing weight function). Inserting the explicit expression of S_n(x) into this definition we get:

    n W^2 = \frac{1}{12 n} + \sum_{i=1}^{n} \left( F(x_i) - \frac{2i - 1}{2n} \right)^2    (8.5.28)

From the asymptotic distribution of nW² the critical regions can be computed; frequently used test sizes are given in Fig. 8.5.9 (the asymptotic distribution is reached remarkably rapidly: in that table the asymptotic limit applies already for n ≥ 3).

The Kolmogorov-Smirnov test follows the same idea of comparing the model c.d.f. with the data e.c.d.f., but it defines a different metric for the distance between the two. The test statistic is d := D √N, where D is the maximal vertical difference between S_n(x) and F(x) (see Fig. 8.5.10):

    D := \max_x \left| S_n(x) - F(x) \right|

The hypothesis H_0 corresponding to the function f(x) is rejected if d is larger than a given critical value. The corresponding probabilities P(d ≤ t_0) can be obtained in ROOT from TMath::KolmogorovProb(t0).

Figure 8.5.9: Rejection regions for the Smirnov-Cramér-von Mises test for some typical test sizes.

Figure 8.5.10: Example of a c.d.f. and an e.c.d.f. The arrow indicates the largest distance D used by the Kolmogorov-Smirnov test.

Table 8.5.1: Critical values t_0 of the Kolmogorov-Smirnov statistic for various significances.

    significance α   99%    95%    50%    32%    5%     1%     0.1%
    P(d ≤ t_0)        1%     5%    50%    68%    95%    99%    99.9%
    t_0              0.44   0.50   0.83   0.96   1.36   1.63   1.95

The Kolmogorov-Smirnov test can also be used to test whether two data sets have been drawn from the same parent distribution. Take the two histograms corresponding to the data to be compared and normalize them (such that the cumulative distributions plateau at 1). Then compare the e.c.d.f. of the two histograms and compute the maximum distance as before (in ROOT use h1->KolmogorovTest(h2)).
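An equivalent sketch with scipy (the ROOT counterparts quoted above are TMath::KolmogorovProb and TH1::KolmogorovTest); the toy samples and their parameters are invented for illustration:

    import numpy as np
    from scipy.stats import kstest, ks_2samp, norm

    rng = np.random.default_rng(2)
    x = rng.normal(0.0, 1.0, size=200)   # sample to be tested against H0 (assumed)
    y = rng.normal(0.3, 1.0, size=150)   # second sample, slightly shifted (assumed)

    # one-sample test: e.c.d.f. of x against the c.d.f. of a unit Gaussian (H0)
    d, p = kstest(x, norm.cdf)
    print(f"one-sample KS: D = {d:.3f}, p-value = {p:.3f}")

    # two-sample test: are x and y drawn from the same parent distribution?
    d2, p2 = ks_2samp(x, y)
    print(f"two-sample KS: D = {d2:.3f}, p-value = {p2:.3f}")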

Notes on the Kolmogorov-Smirnov test:

- the test is more sensitive to departures of the data from the median of H_0 than to departures in the width (i.e. more sensitive to the core than to the tails of the distributions);

- the test becomes meaningless if the H_0 p.d.f. is a fit to the data. This is because there is no equivalent of the reduction of the number of degrees of freedom as in the χ²-test, so it cannot be corrected for.

8.6 Two-sample problem

In this section we look at the problem of telling whether two samples are compatible with each other, i.e. whether both are drawn from the same parent distribution. The complication is clearly that even if they are compatible they will exhibit differences coming from statistical fluctuations. In the following we examine some typical examples of two-sample problems.

8.6.1 Two Gaussians, known σ

Suppose you have two random variables X and Y distributed as Gaussians of known width. Typical situations are two measurements taken with the same device of known resolution, or two samples taken under different conditions where the variances of the parent distributions are known (you have the two means ⟨X⟩ and ⟨Y⟩ and the uncertainties on the means σ_x/√N_x and σ_y/√N_y). The problem is equivalent to checking whether X − Y is compatible with 0. The variance of X − Y is V(X − Y) = σ_x² + σ_y², and so the problem boils down to asking how many standard deviations the difference (X − Y)/√(σ_x² + σ_y²) is from 0.

More generally, what you are doing is defining a test statistic (⟨X⟩ − μ_0)/(σ/√N) (in the previous case μ_0 = 0) and a double-sided rejection region. This means that you choose the significance α of your test and set as rejection region the two (symmetric) tails delimited by the quantiles u_{α/2} and u_{1−α/2} of the corresponding Gaussian:

    \int_{-\infty}^{u_{\alpha/2}} G(x; \mu_0, \sigma)\, dx = \int_{u_{1-\alpha/2}}^{\infty} G(x; \mu_0, \sigma)\, dx = \frac{\alpha}{2}    (8.6.29)

If the measured difference ends up in the rejection region (either of the two tails), the two samples are to be considered different. You can also decide to test whether X > Y (or Y > X). In this case the test statistic is (⟨X⟩ − μ_0)/(σ/√N) and the rejection region becomes single-sided, (u_{1−α}, ∞) (or (−∞, u_α)).

8.6.2 Two Gaussians, unknown σ

The problem is like the previous one, comparing two Gaussian distributions with means ⟨X⟩ and ⟨Y⟩, but this time you do not know the parent standard deviations. All you can do is estimate them from the samples at hand:

    s_x^2 = \frac{\sum_i (x_i - \langle x \rangle)^2}{N_x - 1} ; \qquad s_y^2 = \frac{\sum_i (y_i - \langle y \rangle)^2}{N_y - 1} .    (8.6.30)

Because we are using the estimated standard deviations, we have to use the Student's t distribution to test the significance and not the Gaussian p.d.f. as in the previous case (see the section on the Student's t distribution). We therefore build a variable which is the ratio between a Gaussian and a χ². The expression

    \frac{\langle x \rangle - \langle y \rangle}{\sqrt{\sigma_x^2/N_x + \sigma_y^2/N_y}}    (8.6.31)

is, under the null hypothesis that the two distributions have the same mean, a Gaussian variable centred at zero with standard deviation one. The sum

    \frac{(N_x - 1)\, s_x^2}{\sigma_x^2} + \frac{(N_y - 1)\, s_y^2}{\sigma_y^2}    (8.6.32)

is a χ² with N_x + N_y − 2 d.o.f. If we take the (properly normalized) ratio of the two, assuming that the unknown parent standard deviations are equal (σ_x = σ_y) so that they cancel in the ratio, we get a t-distributed variable:

    t = \frac{\langle x \rangle - \langle y \rangle}{S\, \sqrt{1/N_x + 1/N_y}}    (8.6.33)

where

    S^2 = \frac{(N_x - 1)\, s_x^2 + (N_y - 1)\, s_y^2}{N_x + N_y - 2}    (8.6.34)

(S is called the pooled estimate of the standard deviation, as it is the combined estimate from the two samples, appropriately weighted; the term S √(1/N_x + 1/N_y) is analogous to the standard error on the mean σ/√N that is used when σ is known). The variable t is distributed as a Student's t with N_x + N_y − 2 d.o.f. With this variable we can now use the same testing procedure (double- or single-sided rejection regions) as in Sec. 8.6.1, substituting the c.d.f. of the Gaussian with the c.d.f. of the Student's t.

8.6.3 F-test

The F-test is used to test whether the variances of two samples of size n_1 and n_2, respectively, are compatible. Because the true variances are not known, the sample variances V_1 and V_2 are used to build the ratio F = V_1/V_2. Recalling the definition of the sample variance, we can write:

    F = \frac{V_1}{V_2} = \frac{\frac{1}{n_1 - 1} \sum_{i=1}^{n_1} (x_i - \bar{x}_1)^2}{\frac{1}{n_2 - 1} \sum_{i=1}^{n_2} (x_i - \bar{x}_2)^2}    (8.6.35)

(by convention the bigger sample variance is put in the numerator, so that F ≥ 1). Intuitively the ratio will be close to 1 if the two variances are similar, while it will take a large value if they are not. When you divide a sample variance by the true variance σ², you obtain (up to the factor n − 1) a random variable distributed as a χ² with n − 1 d.o.f. Given that F is the ratio of two such variables, the σ² cancels (assuming equal true variances) and we are left with the ratio of two χ² distributions with f_1 = n_1 − 1 d.o.f. for the numerator and f_2 = n_2 − 1 d.o.f. for the denominator. The variable F then follows the F-distribution with f_1 and f_2 degrees of freedom:

    P(F) = \frac{\Gamma\!\left(\frac{f_1 + f_2}{2}\right)}{\Gamma\!\left(\frac{f_1}{2}\right) \Gamma\!\left(\frac{f_2}{2}\right)}\, \sqrt{f_1^{\,f_1} f_2^{\,f_2}}\; \frac{F^{\,f_1/2 - 1}}{(f_2 + f_1 F)^{(f_1 + f_2)/2}}    (8.6.36)

For large samples, the variable

    Z = \frac{1}{2} \ln F    (8.6.37)

converges to a Gaussian distribution with mean ½ (1/f_2 − 1/f_1) and variance ½ (1/f_1 + 1/f_2). In ROOT the F-distribution is available, for example, as ROOT::Math::fdistribution_pdf.
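A sketch of the F-test with scipy (toy samples with assumed sizes and widths; scipy.stats.f plays the role of the ROOT function mentioned above):

    import numpy as np
    from scipy.stats import f

    rng = np.random.default_rng(3)
    x1 = rng.normal(0.0, 1.2, size=25)   # assumed toy sample 1
    x2 = rng.normal(0.0, 1.0, size=30)   # assumed toy sample 2

    V1, V2 = np.var(x1, ddof=1), np.var(x2, ddof=1)   # sample variances
    F = max(V1, V2) / min(V1, V2)                     # larger variance on top, so F >= 1
    f1 = (len(x1) if V1 >= V2 else len(x2)) - 1       # numerator d.o.f.
    f2 = (len(x2) if V1 >= V2 else len(x1)) - 1       # denominator d.o.f.

    p_one_sided = f.sf(F, f1, f2)                     # P(F' >= F) if the true variances are equal
    print(f"F = {F:.2f}, one-sided p-value = {p_one_sided:.3f}")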

Example: Background model for the H → γγ search. The collected diphoton events are divided into several categories (based on resolution and S/B, to optimize the analysis sensitivity). Once a functional form for the background is chosen (e.g. a polynomial), the number of degrees of freedom of that model (e.g. the order of the polynomial) can be chosen using an F-test: the main idea is to gradually increase the number of d.o.f. until no significant decrease in the residual variance is observed.

Table 8.6.1: Summary of the hypothesis tests for the two-sample problem (x̄_i: arithmetic mean of sample i; S_i²: sample variance of sample i; u_p and t_{f;p} denote the p-quantiles of the Gaussian and of the Student's t with f d.o.f.).

Comparison of two normal distributions with means μ_1, μ_2 and known σ_1, σ_2, with σ_d := √(σ_1²/n_1 + σ_2²/n_2):

    H_0          H_1          test statistic          rejection region
    μ_1 ≤ μ_2    μ_1 > μ_2    (x̄_1 − x̄_2)/σ_d        (u_{1−α}, ∞)
    μ_1 ≥ μ_2    μ_1 < μ_2    (x̄_1 − x̄_2)/σ_d        (−∞, u_α)
    μ_1 = μ_2    μ_1 ≠ μ_2    (x̄_1 − x̄_2)/σ_d        (u_{1−α/2}, ∞) and (−∞, u_{α/2})

Comparison of μ_1 and μ_2 with σ unknown (but supposed equal), with S_d := √[((n_1−1)S_1² + (n_2−1)S_2²)/(n_1+n_2−2)] · √((n_1+n_2)/(n_1 n_2)) and f = n_1 + n_2 − 2 (β is calculated from the non-central t-distribution):

    μ_1 ≤ μ_2    μ_1 > μ_2    (x̄_1 − x̄_2)/S_d        (t_{f;1−α}, ∞)
    μ_1 ≥ μ_2    μ_1 < μ_2    (x̄_1 − x̄_2)/S_d        (−∞, t_{f;α})
    μ_1 = μ_2    μ_1 ≠ μ_2    (x̄_1 − x̄_2)/S_d        (t_{f;1−α/2}, ∞) and (−∞, t_{f;α/2})

F-test: hypotheses about σ_1 and σ_2 of two normal distributions, with N_i = n_i − 1:

    σ_1 ≤ σ_2    σ_1 > σ_2    S_1²/S_2²               A = (F_{N_1;N_2;1−α}, ∞)
    σ_1 ≥ σ_2    σ_1 < σ_2    S_1²/S_2²               A = (0, F_{N_1;N_2;α})
    σ_1 = σ_2    σ_1 ≠ σ_2    S_1²/S_2²               A_1 and A_2 (two-sided, α/2 in each tail)
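A sketch of the pooled two-sample t-test from the middle block of the table (Eqs. (8.6.33)-(8.6.34)), cross-checked against scipy's built-in version; the toy samples are invented for illustration:

    import numpy as np
    from scipy.stats import t, ttest_ind

    rng = np.random.default_rng(4)
    x = rng.normal(10.0, 2.0, size=12)   # assumed toy sample 1
    y = rng.normal(11.0, 2.0, size=15)   # assumed toy sample 2

    nx, ny = len(x), len(y)
    sx2, sy2 = np.var(x, ddof=1), np.var(y, ddof=1)
    S2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)                # Eq. (8.6.34)
    t_stat = (np.mean(x) - np.mean(y)) / np.sqrt(S2 * (1 / nx + 1 / ny))  # Eq. (8.6.33)
    p_two_sided = 2 * t.sf(abs(t_stat), nx + ny - 2)

    print(f"by hand: t = {t_stat:.3f}, p = {p_two_sided:.3f}")
    print("scipy  :", ttest_ind(x, y, equal_var=True))                    # should agree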

8.6.4 Matched and correlated samples

In the previous sections we have seen how to compare two samples under different hypotheses. The tests are more discriminating the smaller the variances are. Correlations between the two samples can be exploited to reduce the variance. Take as test statistic the sum of the pairwise differences

    \sum_i (x_i - y_i)    (8.6.38)

where each data point of the first sample is paired with a corresponding one in the second sample. The variance of each difference is:

    V(x - y) = \sigma_x^2 + \sigma_y^2 - 2\rho\, \sigma_x \sigma_y    (8.6.39)

If the two samples are positively correlated (ρ > 0) the variance is reduced, and this makes the test more discriminating.

Example: A consumer magazine is testing a widget claimed to increase fuel economy. The data on seven cars are reported in Fig. 8.6.1. Is there evidence for any improvement? If you ignore the matching, the means are 38.6 ± 3.0 and 35.6 ± 2.3 m.p.g. for the samples with and without the widget: the improvement of 3 m.p.g. is within the statistical uncertainties. Now look instead at the car-by-car differences. Their average is 3.0. The estimated standard deviation s is 3.6, so the error on the estimated average is 3.6/√7 = 1.36, and t = 3.0/1.36 = 2.2. This is significant at the 5% level using Student's t (one-tailed test, 6 degrees of freedom, t_crit = 1.943).

Figure 8.6.1: Data from the seven cars.
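A sketch of the matched-pairs test using only the summary numbers quoted above (mean difference 3.0 m.p.g., standard deviation of the differences 3.6, seven cars):

    from math import sqrt
    from scipy.stats import t

    n, mean_diff, s_diff = 7, 3.0, 3.6
    t_stat = mean_diff / (s_diff / sqrt(n))   # -> ~2.2
    p_one_sided = t.sf(t_stat, n - 1)         # one-tailed test with 6 degrees of freedom
    t_crit = t.ppf(0.95, n - 1)               # -> 1.943
    print(f"t = {t_stat:.2f}, one-sided p = {p_one_sided:.3f}, t_crit(5%) = {t_crit:.3f}")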

8.6.5 The most general test

As we have already said, the more precisely the test can be formulated, the more discriminating it will be. The most general two-sample test makes no assumptions at all about the two distributions; it just asks whether the two are the same. You can apply an unbinned test like the Kolmogorov-Smirnov (as explained in Sec. 8.5.4) by ordering the two samples and computing the maximal distance between the two e.c.d.f. Or you can order both samples together and then apply a run test: if the two samples are drawn from the same parent distribution there will be many very short runs; if on the other hand the two samples come from different parent distributions, you will have long runs from both samples. This test should only be tried if the number of points in sample A is similar to the number in sample B.

Example: Two samples A and B from the same parent distribution will give something like AABBABABAABBAABABAABBBA. Two samples from two narrow distributions with different means will give something like AAAAAAAAAABBBBBBBBBBBBB.

8.7 References

G. Cowan, Statistical Data Analysis, Ch. 4
R. Barlow, A Guide to the Use of Statistical Methods in the Physical Sciences, Ch. 8
W. Metzger, Statistical Methods in Data Analysis, Ch. 10