G. Cowan, Lectures on Statistical Data Analysis: Lecture 2

Outline
1. Probability (90 min.): definition, Bayes' theorem, probability densities and their properties, catalogue of pdfs, Monte Carlo
2. Statistical tests (90 min.): general concepts, test statistics, multivariate methods, goodness-of-fit tests
3. Parameter estimation (90 min.): general concepts, maximum likelihood, variance of estimators, least squares
4. Interval estimation (60 min.): setting limits
5. Further topics (60 min.): systematic errors, MCMC

The data stream
The experiment records a mixture of events of different types, each with different numbers of particles, kinematic properties, ... We are usually interested in events of a single type, in a search to see if they exist at all and/or to identify them for further study.

Statistical tests (in a particle physics context)
Suppose the result of a measurement for an individual event is a collection of numbers x = (x_1, ..., x_n): x_1 = number of muons, x_2 = mean p_t of jets, x_3 = missing energy, ... The vector x follows some n-dimensional joint pdf f(x|H), which depends on the type of event produced, i.e., on which reaction took place. For each reaction we consider, we have a hypothesis for the pdf of x, e.g., f(x|H0), f(x|H1), etc. Often H0 is called the signal hypothesis (the event type we want); H1, H2, ... are background hypotheses.

Selecting events
Suppose we have a data sample with two kinds of events, corresponding to hypotheses H0 and H1, and we want to select those of type H0. Each event is a point in x-space. What 'decision boundary' should we use to accept/reject events as belonging to event type H0? Perhaps select events with 'cuts', e.g., x_i < c_i and x_j < c_j.
[Figure: events of types H0 and H1 in the (x_i, x_j) plane; the rectangular cut region is accepted.]

Other ways to select events
Or maybe use some other sort of decision boundary, linear or nonlinear:
[Figure: the same sample with a linear and a nonlinear decision boundary; the region containing mostly H0 events is accepted.]
How can we do this in an 'optimal' way?

Test statistics
Construct a 'test statistic' of lower dimension (e.g. a scalar), t(x_1, ..., x_n). The goal is to compactify the data without losing the ability to discriminate between hypotheses. We can work out the pdfs g(t|H0), g(t|H1), ... The decision boundary is now a single 'cut' on t. This effectively divides the sample space into two regions, where we either accept H0 (acceptance region) or reject it (critical region).

Significance level and power of a test
Probability to reject H0 if it is true (error of the 1st kind):
  α = P(t in critical region | H0)   (significance level).
Probability to accept H0 if H1 is true (error of the 2nd kind):
  β = P(t in acceptance region | H1)   (1 − β = power).

Efficiency of event selection
Signal efficiency, i.e., the probability to accept an event which is signal:
  ε_s = P(accept event | s) = ∫_acc g(t|s) dt.
Background efficiency, i.e., the probability to accept a background event:
  ε_b = P(accept event | b) = ∫_acc g(t|b) dt.
Expected number of signal events: s = σ_s ε_s L.
Expected number of background events: b = σ_b ε_b L.
σ_s, σ_b = signal, background cross sections; L = integrated luminosity.
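
For illustration (hypothetical numbers, not from the lecture): with σ_s = 1 pb, ε_s = 0.5, σ_b = 100 pb, ε_b = 0.01 and L = 10⁴ pb⁻¹, one expects s = 1 × 0.5 × 10⁴ = 5000 signal events over b = 100 × 0.01 × 10⁴ = 10000 background events.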

Purity of event selection
Suppose there is only one background type b; the overall fractions of signal and background events are π_s and π_b (prior probabilities). Suppose we select events with t < t_cut. What is the 'purity' of our selected sample? Here purity means the probability to be signal given that the event was accepted. Using Bayes' theorem we find
  P(s | t < t_cut) = ε_s π_s / (ε_s π_s + ε_b π_b).
So the purity depends on the prior probabilities as well as on the signal and background efficiencies.
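
A minimal numerical sketch of this formula (all distributions and numbers are illustrative assumptions, not from the lecture): the signal and background pdfs of t are taken as unit Gaussians, and the efficiencies are the probabilities to fall below the cut.

```python
from scipy.stats import norm

pi_s, pi_b = 0.01, 0.99   # assumed prior fractions of signal and background
t_cut = 0.0

# Efficiencies: eps_s = P(t < t_cut | s), eps_b = P(t < t_cut | b)
eps_s = norm.cdf(t_cut, loc=-1.0, scale=1.0)   # signal t peaks at -1 (assumed)
eps_b = norm.cdf(t_cut, loc=+1.0, scale=1.0)   # background t peaks at +1 (assumed)

# Bayes' theorem: purity = P(s | t < t_cut)
purity = eps_s * pi_s / (eps_s * pi_s + eps_b * pi_b)
print(f"eps_s = {eps_s:.3f}, eps_b = {eps_b:.3f}, purity = {purity:.3f}")
```

Even with ε_s ≈ 0.84 and ε_b ≈ 0.16 here, the purity comes out around 5%, because the signal prior is small.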

Constructing a test statistic
How can we select events in an optimal way? The Neyman-Pearson lemma (proof in Brandt, Ch. 8) states: to get the lowest ε_b for a given ε_s (highest power for a given significance level), choose the acceptance region such that
  f(x|H0) / f(x|H1) > c,
where c is a constant which determines ε_s. Equivalently, the optimal scalar test statistic is the likelihood ratio
  t(x) = f(x|H0) / f(x|H1).
N.B. any monotonic function of this is just as good.
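
A sketch of the likelihood-ratio statistic for a case where both pdfs are known explicitly (two 2-D Gaussian hypotheses; the means and covariances are made-up values):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical, fully specified pdfs f(x|H0) and f(x|H1)
f_H0 = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 1.0]])
f_H1 = multivariate_normal(mean=[2.0, 1.0], cov=[[1.5, 0.0], [0.0, 1.5]])

def t(x):
    """Neyman-Pearson test statistic; accept H0 where t(x) > c."""
    return f_H0.pdf(x) / f_H1.pdf(x)

print(f"t([0.3, -0.2]) = {t(np.array([0.3, -0.2])):.3f}")
```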

Purity vs. efficiency: optimal trade-off
Consider selecting n events: the expected numbers are s from signal and b from background, with n ~ Poisson(s + b). Suppose b is known and the goal is to estimate s with minimum relative statistical error. Take as estimator ŝ = n − b. The variance of a Poisson variable equals its mean, therefore
  V[ŝ] = V[n] = s + b,  so  σ_ŝ / s = √(s + b) / s.
So we should maximize s / √(s + b), which is equivalent to maximizing the product of signal efficiency × purity.
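
A sketch of the trade-off (assumed Gaussian t distributions and made-up event totals): scan the cut and pick the value that maximizes the figure of merit s/√(s+b).

```python
import numpy as np
from scipy.stats import norm

S_tot, B_tot = 100.0, 10000.0      # assumed expected events before any cut
t_cut = np.linspace(-3.0, 3.0, 601)

s = S_tot * norm.cdf(t_cut, loc=-1.0, scale=1.0)   # signal passing t < t_cut
b = B_tot * norm.cdf(t_cut, loc=+1.0, scale=1.0)   # background passing t < t_cut

fom = s / np.sqrt(s + b)           # figure of merit s / sqrt(s + b)
i = np.argmax(fom)
print(f"best cut: t < {t_cut[i]:.2f}  (s = {s[i]:.1f}, b = {b[i]:.1f}, "
      f"s/sqrt(s+b) = {fom[i]:.2f})")
```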

Why Neyman-Pearson doesn't always help
The problem is that we usually don't have explicit formulae for the pdfs f(x|H0), f(x|H1). Instead we may have Monte Carlo models for the signal and background processes, so we can produce simulated data and enter each event into an n-dimensional histogram. Use e.g. M bins for each of the n dimensions, a total of M^n cells. But n is potentially large, giving a prohibitively large number of cells to populate with Monte Carlo data. Compromise: make an Ansatz for the form of the test statistic with fewer parameters; determine them (e.g. using MC) to give the best discrimination between signal and background.

Linear test statistic
Ansatz: t(x) = Σ_i a_i x_i. Choose the parameters a_1, ..., a_n so that the pdfs g(t|s), g(t|b) have maximum 'separation': we want a large distance between the mean values τ_s, τ_b and small widths Σ_s, Σ_b.
[Figure: g(t|s) and g(t|b) as two overlapping peaks along the t axis, with means τ_s, τ_b and widths Σ_s, Σ_b.]
Fisher: maximize
  J(a) = (τ_s − τ_b)² / (Σ_s² + Σ_b²).

Determining coefficients for maximum separation
We have τ_k = E[t | H_k] = aᵀμ_k and Σ_k² = V[t | H_k] = aᵀV_k a, where (μ_k)_i = E[x_i | H_k] and (V_k)_ij = cov[x_i, x_j | H_k] for each hypothesis k = s, b. In terms of the means and covariances of x this becomes
  J(a) = (aᵀ(μ_s − μ_b))² / (aᵀ(V_s + V_b) a).

Determining the coefficients (2)
The numerator of J(a) is (τ_s − τ_b)² = aᵀB a, with the 'between classes' matrix B_ij = (μ_s − μ_b)_i (μ_s − μ_b)_j, and the denominator is Σ_s² + Σ_b² = aᵀW a, with the 'within classes' matrix W = V_s + V_b. So we maximize
  J(a) = aᵀB a / aᵀW a.

Fisher discriminant
Setting ∂J/∂a_i = 0 gives Fisher's linear discriminant function,
  t(x) = aᵀx  with  a ∝ W⁻¹(μ_s − μ_b).
This corresponds to a linear decision boundary.
[Figure: events of the two types separated by a straight-line boundary; the H0 side is accepted.]
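
A sketch of the coefficient formula a ∝ W⁻¹(μ_s − μ_b) on simulated toy samples (all distributions are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x_sig = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=5000)
x_bkg = rng.multivariate_normal([2, 1], [[1.5, 0.0], [0.0, 1.5]], size=5000)

mu_s, mu_b = x_sig.mean(axis=0), x_bkg.mean(axis=0)
W = np.cov(x_sig, rowvar=False) + np.cov(x_bkg, rowvar=False)  # within-class matrix
a = np.linalg.solve(W, mu_s - mu_b)                            # a = W^-1 (mu_s - mu_b)

t_sig, t_bkg = x_sig @ a, x_bkg @ a   # Fisher discriminant per event
print(f"<t>_s = {t_sig.mean():.3f}, <t>_b = {t_bkg.mean():.3f}")
```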

Fisher discriminant for Gaussian data
Suppose f(x|H_k) is multivariate Gaussian with mean values μ_0, μ_1 and the same covariance matrix V_0 = V_1 = V for both hypotheses. For this case we can show that the Fisher discriminant is equivalent to using the likelihood ratio, and thus gives the maximum purity for a given efficiency. For non-Gaussian data this no longer holds, but the linear discriminant function may be the simplest practical solution. One can also try to transform the data so as to better approximate a Gaussian before constructing the Fisher discriminant.

Nonlinear test statistics
The optimal decision boundary may not be a hyperplane; then a nonlinear test statistic is needed. Multivariate statistical methods are a Big Industry: neural networks, support vector machines, kernel density methods, ...
[Figure: events of the two types separated by a curved decision boundary; the H0 side is accepted.]
Particle physics can benefit from progress in machine learning.

Introduction to neural networks
Used in neurobiology, pattern recognition, financial forecasting, ... Here, neural nets are just a type of test statistic. Suppose we take t(x) to have the form
  t(x) = s(a_0 + Σ_i a_i x_i),
where s is the logistic sigmoid. This is called the single-layer perceptron. Since s(·) is monotonic, this is equivalent to a linear t(x).

The activation function
The activation function s(t) is often taken to be the logistic sigmoid:
  s(t) = 1 / (1 + e^(−t)).

The multi-layer perceptron
Generalize from one layer to the multilayer perceptron. The values of the nodes in the intermediate (hidden) layer are
  h_i(x) = s(w_i0^(1) + Σ_j w_ij^(1) x_j),
and the network output is given by
  t(x) = s(w_10^(2) + Σ_j w_1j^(2) h_j(x)),
where the w are the weights (connection strengths).

Neural network discussion
Easy to generalize to an arbitrary number of layers. Feed-forward net: the values of a node depend only on earlier layers, usually only on the preceding layer (the 'network architecture'). More nodes bring the neural net closer to the optimal t(x), but more parameters need to be determined. The parameters are usually determined by minimizing an error function, e.g.
  E = E[(t(x) − t^(0))² | H0] + E[(t(x) − t^(1))² | H1],
where t^(0), t^(1) are target values, e.g., 0 and 1 for the logistic sigmoid. The expectation values are replaced by averages over the training data (e.g. MC). In general training can be difficult; standard software is available.
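
A training sketch using scikit-learn as the 'standard software' (toy Gaussian samples; the architecture and all numbers are assumptions, and this library minimizes a log-loss rather than the quadratic error function above):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
x_sig = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=2000)
x_bkg = rng.multivariate_normal([2, 1], [[1.5, 0.0], [0.0, 1.5]], size=2000)
X = np.vstack([x_sig, x_bkg])
y = np.concatenate([np.ones(2000), np.zeros(2000)])  # targets: 1 = signal, 0 = background

# One hidden layer of 5 logistic-sigmoid nodes
net = MLPClassifier(hidden_layer_sizes=(5,), activation="logistic",
                    max_iter=2000, random_state=0).fit(X, y)

t = net.predict_proba(X)[:, 1]   # network output, used as the test statistic
print(f"<t>_signal = {t[y == 1].mean():.2f}, <t>_background = {t[y == 0].mean():.2f}")
```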

Neural network example from LEP II
Signal: e⁺e⁻ → W⁺W⁻ (often 4 well separated hadron jets).
Background: e⁺e⁻ → qq̄gg (4 less well separated hadron jets).
Input variables based on jet structure, event shape, ...: none by itself gives much separation. The neural network output does better. (Garrido, Juste and Martinez, ALEPH 96-144)

Neural network discussion (2)
Why not use all of the available input variables? Fewer inputs mean fewer parameters to be adjusted, so the parameters are better determined for finite training data. Some inputs may be highly correlated: drop all but one. Some inputs may contain little or no discriminating power between the hypotheses: drop them. The NN exploits higher moments (nonlinear features) of the joint pdf f(x|H), but these may not be well modeled in the training data. Better to have a simpler t(x) where you can understand what it's doing.

Neural network discussion (3)
Recall that the purpose of the statistical test is usually to select objects for further study; e.g. select WW events, then measure their properties (e.g. particle multiplicity). Need to avoid input variables that are correlated with the properties of the selected objects that you want to study. (Not always easy; correlations may be poorly known.)
Some NN references:
L. Lönnblad et al., Comp. Phys. Comm. 70 (1992) 167;
C. Peterson et al., Comp. Phys. Comm. 81 (1994) 185;
C.M. Bishop, Neural Networks for Pattern Recognition, OUP (1995);
J. Hertz et al., Introduction to the Theory of Neural Computation, Addison-Wesley, New York (1991).

Testing goodness-of-fit
Suppose hypothesis H predicts the pdf f(x|H) for a set of observations x = (x_1, ..., x_n). We observe a single point in this space: x_obs. What can we say about the validity of H in light of the data? Decide what part of the data space represents less compatibility with H than does the point x_obs. (Not unique!)
[Figure: data space with x_obs marked, divided into regions 'less compatible with H' and 'more compatible with H'.]

p-values
Express goodness-of-fit by giving the p-value for H:
  p = probability, under the assumption of H, to observe data with equal or lesser compatibility with H relative to the data we got.
This is not the probability that H is true! In frequentist statistics we don't talk about P(H) (unless H represents a repeatable observation). In Bayesian statistics we do; use Bayes' theorem to obtain
  P(H | x) = P(x | H) π(H) / ∫ P(x | H′) π(H′) dH′,
where π(H) is the prior probability for H. For now, stick with the frequentist approach; the result is a p-value, regrettably easy to misinterpret as P(H).

p-value example: testing whether a coin is 'fair'
The probability to observe n heads in N coin tosses is binomial:
  P(n; p, N) = [N! / (n!(N − n)!)] pⁿ (1 − p)^(N−n).
Hypothesis H: the coin is fair (p = 0.5). Suppose we toss the coin N = 20 times and get n = 17 heads. The region of data space with equal or lesser compatibility with H relative to n = 17 is: n = 17, 18, 19, 20, 0, 1, 2, 3. Adding up the probabilities for these values gives
  p = P(n ≥ 17) + P(n ≤ 3) = 0.0026,
i.e. p = 0.0026 is the probability of obtaining such a bizarre result (or more so) 'by chance', under the assumption of H.
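
The slide's number can be reproduced with scipy:

```python
from scipy.stats import binom

N, p0 = 20, 0.5
p_value = binom.sf(16, N, p0) + binom.cdf(3, N, p0)  # P(n >= 17) + P(n <= 3)
print(f"p = {p_value:.4f}")   # 0.0026
```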

The significance of an observed signal
Suppose we observe n events; these can consist of n_b events from known processes (background) and n_s events from a new process (signal). If n_s, n_b are Poisson r.v.s with means s, b, then n = n_s + n_b is also Poisson with mean s + b:
  P(n; s, b) = (s + b)ⁿ e^(−(s+b)) / n!.
Suppose b = 0.5, and we observe n_obs = 5. Should we claim evidence for a new discovery? Give the p-value for the hypothesis s = 0:
  p = P(n ≥ 5; b = 0.5, s = 0) = 1.7 × 10⁻⁴.
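
Again easy to check with scipy:

```python
from scipy.stats import poisson

p_value = poisson.sf(4, mu=0.5)   # P(n >= 5) for n ~ Poisson(0.5)
print(f"p = {p_value:.1e}")       # 1.7e-04
```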

The significance of a peak
Suppose we measure a value x for each event and find:
[Figure: histogram of x; each bin's expected content is shown as a dashed line, with an excess in two adjacent bins.]
Each bin (observed) is a Poisson r.v., with means given by the dashed lines. In the two bins with the peak, 11 entries are found with b = 3.2 expected. The p-value for the s = 0 hypothesis is
  p = P(n ≥ 11; b = 3.2) = 5.0 × 10⁻⁴.
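
And the corresponding check:

```python
from scipy.stats import poisson

p_value = poisson.sf(10, mu=3.2)  # P(n >= 11) for n ~ Poisson(3.2)
print(f"p = {p_value:.1e}")       # 5.0e-04
```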

The significance of a peak (2)
But... did we know where to look for the peak? → give P(n ≥ 11) in any 2 adjacent bins.
Is the observed width consistent with the expected x resolution? → take the x window several times the expected resolution.
How many bins × distributions have we looked at? → look at a thousand of them and you'll find a 10⁻³ effect.
Did we adjust the cuts to enhance the peak? → freeze the cuts and repeat the analysis with new data.
How about the bins to the sides of the peak... (too low!)
Should we publish????

Making a discovery
Often one computes the p-value of the 'background only' hypothesis H0 using a test variable related to a characteristic of the signal.
p-value = probability to see data as incompatible with H0, or more so, relative to the data observed. This requires a definition of 'incompatible with H0'.
HEP folklore: claim discovery if the p-value is equivalent to a 5σ fluctuation of a Gaussian variable (one-sided), i.e., p = 2.9 × 10⁻⁷. The actual p-value at which a discovery becomes believable will depend on the signal in question (subjective).
Why not do a Bayesian analysis? We usually don't know how to assign meaningful prior probabilities.

Pearson's χ² statistic
Test statistic for comparing observed data (n_i, independent) to predicted mean values ν_i:
  χ² = Σ_{i=1}^{N} (n_i − ν_i)² / σ_i².
For n_i ~ Poisson(ν_i) we have V[n_i] = ν_i, so this becomes
  χ² = Σ_{i=1}^{N} (n_i − ν_i)² / ν_i   (Pearson's χ² statistic).
χ² = sum of squares of the deviations of the ith measurement from the ith prediction, using σ_i as the 'yardstick' for the comparison.

Pearson's χ² test
If the n_i are Gaussian with means ν_i and std. devs. σ_i, i.e., n_i ~ N(ν_i, σ_i²), then Pearson's χ² will follow the χ² pdf (here for χ² = z):
  f(z; N) = z^(N/2 − 1) e^(−z/2) / (2^(N/2) Γ(N/2)).
If the n_i are Poisson with ν_i >> 1 (in practice OK for ν_i > 5), then the Poisson distribution becomes Gaussian and therefore Pearson's χ² statistic here as well follows the χ² pdf. The χ² value obtained from the data then gives the p-value:
  p = ∫_{χ²}^{∞} f(z; N) dz.
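
A sketch of the whole test on assumed example counts (the data and predictions are made up; scipy's chi2.sf evaluates the integral above):

```python
import numpy as np
from scipy.stats import chi2

n = np.array([12, 9, 4, 7, 15])              # observed counts (illustrative)
nu = np.array([10.0, 10.0, 5.0, 8.0, 12.0])  # predicted means (illustrative)

chi2_obs = np.sum((n - nu) ** 2 / nu)        # Pearson's chi-square statistic
p_value = chi2.sf(chi2_obs, df=len(n))       # upper tail of the chi-square pdf
print(f"chi2 = {chi2_obs:.2f} for N = {len(n)} dof, p = {p_value:.2f}")
```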

The χ² per degree of freedom
Recall that for the chi-square pdf with N degrees of freedom, E[z] = N. This makes sense: if the hypothesized ν_i are right, the rms deviation of n_i from ν_i is σ_i, so each term in the sum contributes ~ 1. One often sees χ²/N reported as a measure of goodness-of-fit. But... it is better to give χ² and N separately. Consider, e.g., χ² = 15 with N = 10 versus χ² = 150 with N = 100: both have χ²/N = 1.5, but the p-values are 0.13 and 9.0 × 10⁻⁴ respectively. I.e. for N large, even a χ² per dof only a bit greater than one can imply a small p-value, i.e., poor goodness-of-fit.
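
This is quick to verify:

```python
from scipy.stats import chi2

# Same chi2/N, very different p-values as N grows
for chi2_obs, N in [(15.0, 10), (150.0, 100), (1500.0, 1000)]:
    print(f"chi2/N = {chi2_obs / N:.1f}, N = {N:4d}, p = {chi2.sf(chi2_obs, N):.1e}")
```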

Pearson's χ² with multinomial data
If n_tot = Σ_i n_i is fixed, then we might model n_i ~ binomial with p_i = ν_i / n_tot, i.e., (n_1, ..., n_N) ~ multinomial. In this case we can take Pearson's χ² statistic to be
  χ² = Σ_{i=1}^{N} (n_i − n_tot p_i)² / (n_tot p_i).
If all p_i n_tot >> 1, then this will follow the chi-square pdf for N − 1 degrees of freedom.

Example of a χ² test
[Figure: histogram of x with data points and the hypothesized pdf.]
This gives χ² = 29.8 for N = 20 dof. Now we need to find the p-value, but... many bins have few (or no) entries, so here we do not expect χ² to follow the chi-square pdf.

Using MC to find the distribution of the χ² statistic
The Pearson χ² statistic still reflects the level of agreement between data and prediction, i.e., it is still a 'valid' test statistic. To find its sampling distribution, simulate the data with a Monte Carlo program:
[Figure: distribution of χ² from the simulated samples, with the observed value χ² = 29.8 marked.]
Here the data sample was simulated 10⁶ times. The fraction of times we find χ² > 29.8 gives the p-value: p = 0.11. If we had used the chi-square pdf we would have found p = 0.073.
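
A sketch of the procedure (the bin predictions ν_i are invented; the lecture's example used 10⁶ simulations, reduced here to keep memory modest):

```python
import numpy as np

rng = np.random.default_rng(3)
nu = np.array([2.0, 1.5, 1.0, 0.8, 0.5] * 4)   # 20 bins with low counts (assumed)

n_obs = rng.poisson(nu)                        # stand-in for the observed data
chi2_obs = np.sum((n_obs - nu) ** 2 / nu)

# Simulate the experiment many times under the hypothesis
n_mc = rng.poisson(nu, size=(100_000, len(nu)))
chi2_mc = np.sum((n_mc - nu) ** 2 / nu, axis=1)

p_value = np.mean(chi2_mc > chi2_obs)          # fraction with chi2 above observed
print(f"chi2_obs = {chi2_obs:.1f}, MC p-value = {p_value:.3f}")
```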

Wrapping up lecture 2
Main ideas of statistical tests and related issues for HEP:
Discriminate between event types (hypotheses); determine selection efficiency, sample purity, etc.
Some methods for constructing a test statistic: linear (Fisher discriminant); nonlinear (neural networks).
Goodness-of-fit tests: the p-value (not the same as P(H0)!); χ² = Σ (data − prediction)² / variance. Often χ² follows the chi-square pdf → use it to get the p-value.
Next we turn to parameter estimation.