Statistical Methods in Particle Physics


Statistical Methods in Particle Physics, Lecture 10. December 17, 2012. Silvia Masciocchi, GSI Darmstadt. Winter Semester 2012/13.

Method of least squares

The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e. sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in solving every single equation. The most important application is in data fitting: the best fit in the least-squares sense minimizes the sum of squared residuals, a residual being the difference between an observed value and the fitted value provided by a model. Least squares corresponds to the maximum-likelihood criterion if the experimental errors have a normal distribution, and can also be derived as a method-of-moments estimator.

A bit of history

Carl Friedrich Gauss is credited with developing the fundamentals of least-squares analysis in 1795, at the age of eighteen; Legendre, however, was the first to publish the method. An early demonstration of the strength of Gauss's method came when it was used to predict the future location of the newly discovered asteroid Ceres. On January 1, 1801, the Italian astronomer Giuseppe Piazzi discovered Ceres and was able to track its path for 40 days before it was lost in the glare of the Sun. Based on these data, astronomers wanted to determine the location of Ceres after it emerged from behind the Sun without solving Kepler's complicated nonlinear equations of planetary motion. The only predictions that successfully allowed the Hungarian astronomer Franz Xaver von Zach to relocate Ceres were those performed by the 24-year-old Gauss using least-squares analysis.

Data fitting

The method of least squares is often used to fit a function through a set of measured points, by minimizing, in the least-squares sense, the distances between the points and the fitted function. But we need to define precisely the conditions under which the method can be applied.

Connection with the ML - 1

In many situations a measured value y can be regarded as a Gaussian random variable centered about the quantity's true value λ. This follows from the central limit theorem as long as the total error (i.e. the deviation from the true value) is the sum of a large number of small contributions.

Extend this to N measurements: N measured values y_i, related to another variable x_i which is assumed to be known without error. The y_i are N independent Gaussian random variables; each y_i has a different unknown mean λ_i and a different but known variance σ_i².

Connection with the ML - 2

The N measurements y_i can be equivalently regarded as a single measurement of an N-dimensional random vector, for which the joint pdf is the product of N Gaussians:

\[ g(y_1,\ldots,y_N;\,\lambda_1,\ldots,\lambda_N,\,\sigma_1,\ldots,\sigma_N) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\!\left( -\frac{(y_i-\lambda_i)^2}{2\sigma_i^2} \right) \]

Suppose further that the true value is given as a function of x: λ = λ(x;θ), which depends on unknown parameters θ = (θ_1, θ_2, ..., θ_m). The aim of the method of least squares is to estimate the parameters θ. In addition, the method allows for a simple evaluation of the goodness-of-fit of the hypothesized function.

Connection with the ML - 3

Illustration: x_1, ..., x_N are known; the y_i are Gaussian random variables with E[y_i] = λ_i = λ(x_i;θ) and known variances V[y_i] = σ_i². GOAL: estimate the parameters θ, i.e. fit the curve through the points.

Connection with the ML - 4

We take the logarithm of the joint pdf and drop the additive terms that do not depend on the parameters θ. We get the log-likelihood function

\[ \log L(\theta) = -\frac{1}{2} \sum_{i=1}^{N} \frac{(y_i - \lambda(x_i;\theta))^2}{\sigma_i^2} \]

This is maximized by finding the values of the parameters θ that minimize the quantity

\[ \chi^2(\theta) = \sum_{i=1}^{N} \frac{(y_i - \lambda(x_i;\theta))^2}{\sigma_i^2} , \]

namely the quadratic sum of the differences between measured and hypothesized values, weighted by the inverse of the variances. This is the basis of the method of least squares.
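As a minimal illustration (not part of the lecture; the data points, errors and straight-line model below are invented), the χ²(θ) above can be coded directly and minimized numerically:

```python
# Sketch: least-squares fit of a straight line to independent Gaussian data.
# Data, errors and model are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # x_i, assumed known exactly
y = np.array([1.7, 2.3, 3.1, 3.4, 4.2])      # measured y_i
sigma = np.array([0.2, 0.2, 0.3, 0.2, 0.3])  # known standard deviations sigma_i

def lam(x, theta):
    """Hypothesized function lambda(x; theta) = theta0 + theta1 * x."""
    return theta[0] + theta[1] * x

def chi2(theta):
    """chi^2(theta) = sum_i (y_i - lambda(x_i; theta))^2 / sigma_i^2."""
    r = (y - lam(x, theta)) / sigma
    return np.sum(r**2)

result = minimize(chi2, x0=[0.0, 1.0])   # numerical minimization
print("LS estimates:", result.x)
print("chi2_min    :", result.fun)
```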

Definition of LS estimators

If the N measurements are not independent but described by an N-dimensional Gaussian pdf with known covariance matrix V but unknown mean values, the joint pdf is

\[ g(y_1,\ldots,y_N;\,\lambda_1,\ldots,\lambda_N,\,V) = \frac{1}{(2\pi)^{N/2}\,|V|^{1/2}} \exp\!\left( -\tfrac{1}{2}\,(y-\lambda)^T V^{-1} (y-\lambda) \right) . \]

The log-likelihood is obtained (again dropping terms not depending on the parameters) as

\[ \log L(\theta) = -\frac{1}{2} \sum_{i,j=1}^{N} (y_i - \lambda(x_i;\theta))\,(V^{-1})_{ij}\,(y_j - \lambda(x_j;\theta)) . \]

Definition of LS estimators - 2

The log-likelihood is maximized by minimizing the quantity

\[ \chi^2(\theta) = \sum_{i,j=1}^{N} (y_i - \lambda(x_i;\theta))\,(V^{-1})_{ij}\,(y_j - \lambda(x_j;\theta)) , \]

which reduces to the uncorrelated case above if the covariance matrix (and hence its inverse) is diagonal, i.e. the variables are independent.

The parameters which minimize the χ² are called the least squares (LS) estimators θ̂_1, ..., θ̂_m. The minimization is most often done numerically, for example with the program MINUIT (in ROOT).
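A sketch of the correlated case with invented numbers: the χ² is the quadratic form built with the inverse covariance matrix. In practice MINUIT (via ROOT, or the iminuit Python package) would typically do the minimization; plain scipy is used here only to keep the sketch self-contained.

```python
# Sketch: LS estimators for correlated measurements (illustrative numbers).
# chi^2(theta) = (y - lambda)^T V^{-1} (y - lambda)
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.0, 2.8, 4.1])

# Assumed covariance matrix V of the y_i (symmetric, positive definite)
V = np.array([[0.04, 0.01, 0.00, 0.00],
              [0.01, 0.04, 0.01, 0.00],
              [0.00, 0.01, 0.09, 0.01],
              [0.00, 0.00, 0.01, 0.09]])
Vinv = np.linalg.inv(V)

def lam(x, theta):
    return theta[0] + theta[1] * x          # linear model, for illustration

def chi2(theta):
    d = y - lam(x, theta)
    return d @ Vinv @ d                     # quadratic form with V^{-1}

result = minimize(chi2, x0=[0.0, 1.0])
print("LS estimates:", result.x, " chi2_min:", result.fun)
```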

Example of a LS fit (polynomial)

Fit a polynomial of order p: 0th order has 1 parameter, 1st order has 2 parameters, 4th order has 5 parameters.

Linear least squares fit

The LS method has particularly simple properties of the χ² and of the LS estimators in the case where λ(x;θ) is a linear function of the parameters θ:

\[ \lambda(x;\theta) = \sum_{j=1}^{m} a_j(x)\,\theta_j , \]

where the a_j(x) are ANY linearly independent functions of x (they only have to be linearly independent from each other, i.e. none can be expressed as a linear combination of the others; λ need not be linear in x). In this case the χ² is quadratic in the parameters θ, and it follows that the standard deviations of the estimators are obtained from

\[ \chi^2(\hat{\theta}_i \pm \hat{\sigma}_{\hat{\theta}_i}) = \chi^2_{\min} + 1 . \]
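The slide does not write out the closed-form solution; a sketch of the standard matrix form, with invented data: with design matrix A_ij = a_j(x_i), the LS estimators are θ̂ = (AᵀV⁻¹A)⁻¹AᵀV⁻¹ y and their covariance matrix is U = (AᵀV⁻¹A)⁻¹.

```python
# Sketch: closed-form linear least squares (matrix form, not spelled out on the slide).
# Model: lambda(x; theta) = sum_j theta_j a_j(x); here a_1(x) = 1, a_2(x) = x.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.7, 2.3, 3.1, 3.4, 4.2])
sigma = np.array([0.2, 0.2, 0.3, 0.2, 0.3])      # illustrative errors

A = np.column_stack([np.ones_like(x), x])        # design matrix A_ij = a_j(x_i)
Vinv = np.diag(1.0 / sigma**2)                   # inverse covariance (diagonal here)

U = np.linalg.inv(A.T @ Vinv @ A)                # covariance matrix of the estimators
theta_hat = U @ A.T @ Vinv @ y                   # LS estimators

print("theta_hat      :", theta_hat)
print("std. deviations:", np.sqrt(np.diag(U)))
```

The off-diagonal elements of U give the correlations between the estimators, i.e. the tilt of the error ellipse discussed below.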

Variance of least squares estimators

\[ \chi^2(\hat{\theta}_i \pm \hat{\sigma}_{\hat{\theta}_i}) = \chi^2_{\min} + 1 \]

This yields the contours in parameter space whose tangents are at θ̂_i ± σ̂_θ̂_i, corresponding to a one-standard-deviation departure from the LS estimates.

Example: polynomial fit of 0th order (1-parameter fit), λ(x;θ_0) = θ_0. One obtains

\[ \hat{\theta}_0 = \frac{\sum_{i=1}^{N} y_i/\sigma_i^2}{\sum_{i=1}^{N} 1/\sigma_i^2} , \qquad V[\hat{\theta}_0] = \frac{1}{\sum_{i=1}^{N} 1/\sigma_i^2} . \]
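A sketch of how the Δχ² = 1 rule is used in practice for the one-parameter case (data invented); since this fit is linear, the scan reproduces the analytic variance quoted above.

```python
# Sketch: variance of a one-parameter LS fit from the Delta-chi^2 = 1 rule.
# Illustrative data; model lambda(x; theta0) = theta0 (horizontal line).
import numpy as np

y = np.array([2.4, 2.7, 2.6, 2.9, 2.5])
sigma = np.array([0.3, 0.3, 0.3, 0.3, 0.3])

def chi2(theta0):
    return np.sum((y - theta0)**2 / sigma**2)

# Analytic solution for this linear, one-parameter case
theta_hat = np.sum(y / sigma**2) / np.sum(1.0 / sigma**2)
var_analytic = 1.0 / np.sum(1.0 / sigma**2)

# Numerical scan: find the interval where chi^2 <= chi2_min + 1
scan = np.linspace(theta_hat - 1.0, theta_hat + 1.0, 100001)
chi2_vals = np.array([chi2(t) for t in scan])
inside = scan[chi2_vals <= chi2(theta_hat) + 1.0]
sigma_from_scan = 0.5 * (inside.max() - inside.min())

print("theta_hat                   =", theta_hat)
print("sigma (analytic)            =", np.sqrt(var_analytic))
print("sigma (Delta chi2 = 1 scan) =", sigma_from_scan)
```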

Two parameter LS fit

1st order polynomial fit, 2 parameters (a line with non-zero slope). The contour χ² = χ²_min + 1 is an ellipse in the (θ_0, θ_1) plane: the tangent lines give the standard deviations, and the tilt angle of the ellipse reflects the correlation between the estimators (same as for ML).

Confidence region

Reminder:

\[ \chi^2(\hat{\theta}_i \pm \hat{\sigma}_{\hat{\theta}_i}) = \chi^2_{\min} + 1 \]

This yields the contours in parameter space whose tangents are at θ̂_i ± σ̂_θ̂_i, corresponding to a one-standard-deviation departure from the LS estimates.

If the function λ(x;θ) is NOT linear in the parameters, then the contour defined above is in general not elliptical, and one can no longer obtain the standard deviations from the tangents. It still defines a region in parameter space, however, which can be interpreted as a CONFIDENCE REGION, the size of which reflects the statistical uncertainty of the fitted parameters. Next lecture!!!

Five parameter LS fit

4th order polynomial fit, 5 parameters. The curve goes through all points, so χ²_min = 0 (number of parameters = number of points). The value of χ²_min reflects the level of agreement between data and hypothesis → use it as a goodness-of-fit test statistic.

Goodness-of-fit with least squares

The value of the χ² at its minimum is a measure of the level of agreement between the data and the fitted curve:

\[ \chi^2_{\min} = \sum_{i=1}^{N} \frac{(y_i - \lambda(x_i;\hat{\theta}))^2}{\sigma_i^2} \]

It can therefore be employed as a goodness-of-fit statistic to test the hypothesized functional form λ(x;θ). One can show that, if the hypothesis is correct, the statistic t = χ²_min follows the chi-square pdf

\[ f(t; n_d) = \frac{1}{2^{n_d/2}\,\Gamma(n_d/2)}\; t^{\,n_d/2 - 1}\, e^{-t/2} , \]

where the number of degrees of freedom is n_d = (number of data points) − (number of fitted parameters).

Goodness-of-fit with least squares - 2

The chi-square pdf has an expectation value equal to the number of degrees of freedom, so if χ²_min ≈ n_d, or χ²_min/n_d ≈ 1, the fit is 'good'. More precisely:

If χ²_min/n_d ≈ 1, all is as expected.

If χ²_min/n_d ≪ 1, the fit is better than expected given the size of the measurement errors. This is not bad in the sense of providing evidence against the hypothesis, but it is usually grounds to check whether the errors σ_i have been overestimated or are correlated.

If χ²_min/n_d ≫ 1, there is some reason to doubt the hypothesis.

Goodness-of-fit with least squares - 3

The chi-square pdf has an expectation value equal to the number of degrees of freedom, so if χ²_min ≈ n_d the fit is 'good'. More generally, find the p-value:

\[ p = \int_{\chi^2_{\min}}^{\infty} f(t; n_d)\, dt \]

This is the probability of obtaining a χ²_min as high as the one we got, or higher, if the hypothesis is correct.

E.g. for the previous example with the 1st order polynomial (straight line): χ²_min = 3.99, n_d = 5 − 2 = 3, p = 0.263. That is: if the true function λ(x) were a straight line and if the experiment were repeated many times, each time yielding values for the parameters and the χ², then one would expect...

Goodness-of-fit with least squares - 4

... the χ² values to be worse (i.e. higher) than the one actually obtained in 26.3% of the cases.

Whereas for the 0th order polynomial (horizontal line): χ²_min = 45.5, n_d = 5 − 1 = 4, p = 3.1 × 10⁻⁹. This hypothesis can be safely ruled out!!!
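Assuming scipy is available, the two quoted p-values can be checked directly with the chi-square survival function:

```python
# Sketch: p-values for the two fits quoted above, from the chi-square distribution.
from scipy.stats import chi2

print(chi2.sf(3.99, df=3))    # straight line:   ~0.263
print(chi2.sf(45.5, df=4))    # horizontal line: ~3.1e-9
```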

Goodness-of-fit vs statistical errors

Keep in mind the distinction between having SMALL STATISTICAL ERRORS and having a GOOD (small) χ². The statistical errors are related to the change in χ² when the parameters are varied away from their fitted values, and not to the absolute value of the χ² itself. The standard deviation of an estimator is a measure of how widely the estimates would be distributed if the experiment were repeated many times. If the functional form of the hypothesis is incorrect, however, then the estimate can still differ significantly from the true value. That is, if the form of the hypothesis is incorrect, a small standard deviation is not sufficient to imply a small uncertainty on the estimate of the parameter.

Goodness-of-fit vs statistical errors - 2

Example: horizontal line fit. First case (the 0th order fit shown earlier): θ̂_0 = 2.66 ± 0.13, χ²_min = 45.5. Now keep the x_i and the errors, but move the data points with respect to the previous example. New fit: θ̂_0 = 2.84 ± 0.13, χ²_min = 4.48. The variance is the same as before, but now the chi-square is good!!

Least squares with binned data

Suppose we have n observations of a random variable x, from which one makes a histogram with N bins. Let y_i be the number of entries in bin i, and f(x;θ) a hypothesized pdf for which one would like to estimate the parameters θ = (θ_1, ..., θ_m). The number of entries predicted in bin i is

\[ \lambda_i(\theta) = E[y_i] = n \int_{x_i^{\min}}^{x_i^{\max}} f(x;\theta)\, dx = n\, p_i(\theta) , \]

where x_i^min and x_i^max are the bin limits and p_i(θ) is the probability to have an entry in bin i.

Least squares with binned data - 2

The parameters θ are found by minimizing the quantity

\[ \chi^2(\theta) = \sum_{i=1}^{N} \frac{(y_i - \lambda_i(\theta))^2}{\sigma_i^2} , \]

where σ_i² = V[y_i] is here not known a priori. When the mean number of entries in each bin is small compared to the total number of entries, the contents of each bin are approximately Poisson distributed, so the variance is equal to the mean. In place of the true variance one takes either

σ_i² = λ_i(θ)  (LS method), or
σ_i² = y_i  (modified LS method, MLS).

MLS is sometimes easier computationally, but χ²_min no longer follows the chi-square pdf (or is undefined) if some bins have few or no entries.
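A sketch contrasting the two choices of variance for a binned fit. The histogram and the truncated-exponential pdf below are invented purely for illustration; the normalization is taken from the observed number of entries n, as on the slides above, and all bins are assumed to be non-empty (cf. the caveat about MLS).

```python
# Sketch: binned LS fit, comparing sigma_i^2 = lambda_i (LS) with sigma_i^2 = y_i (MLS).
# Toy histogram of a truncated exponential; only the slope parameter tau is fitted.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
xmax, nbins = 5.0, 10
data = rng.exponential(scale=1.5, size=2000)
data = data[data < xmax]                      # keep entries in the histogram range
y, edges = np.histogram(data, bins=nbins, range=(0.0, xmax))
n = y.sum()

def lam(tau):
    """Predicted bin contents lambda_i = n * p_i(tau) for a truncated exponential."""
    cdf = lambda x: 1.0 - np.exp(-x / tau)
    p = (cdf(edges[1:]) - cdf(edges[:-1])) / cdf(xmax)
    return n * p

def chi2_LS(tau):                             # sigma_i^2 = lambda_i(theta)
    L = lam(tau)
    return np.sum((y - L)**2 / L)

def chi2_MLS(tau):                            # sigma_i^2 = y_i (requires y_i > 0)
    return np.sum((y - lam(tau))**2 / y)

for name, f in [("LS ", chi2_LS), ("MLS", chi2_MLS)]:
    res = minimize_scalar(f, bounds=(0.1, 10.0), method="bounded")
    print(name, "tau_hat =", res.x, " chi2_min =", res.fun)
```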

LS with binned data: normalization

Note the detail: f(x;θ) is a pdf and is therefore normalized to 1; the function actually fitted to the histogram is λ_i(θ). Often, instead of using the observed total number of entries n to obtain λ_i (as on the previous slides), an additional adjustable parameter ν is introduced as a normalization factor. The predicted number of entries in bin i then becomes

\[ \lambda_i(\theta,\nu) = \nu \int_{x_i^{\min}}^{x_i^{\max}} f(x;\theta)\, dx = \nu\, p_i(\theta) , \]

and ν is fitted along with θ. This can lead to an incorrect estimate of the total number of entries.

LS with binned data: normalization - 2

ν̂ is a bad estimator for n! Take the formula on the previous slide, set the derivative of the χ² with respect to ν to zero, and one finds

\[ \hat{\nu}_{\mathrm{LS}} = n + \frac{\chi^2_{\min}}{2} , \qquad \hat{\nu}_{\mathrm{MLS}} = n - \chi^2_{\min} . \]

Since one expects a contribution to the χ² on the order of 1 per bin, the relative error in the estimated number of entries is typically N/(2n) too high (LS) or N/n too low (MLS). If one takes as a rule of thumb that each bin should have at least five entries, one could have an (unnecessary) error in the normalization of 10-20%. → This is a general reason to prefer maximum likelihood for histogram fitting.

LS with binned data: normalization - 3 (illustration; figure not reproduced)

Using LS to combine measurements

Combine measurements of the same quantity: suppose that a quantity of unknown true value λ has been measured N times (e.g. in N independent experiments), yielding independent values y_i and estimated errors (standard deviations) σ_i, for i = 1, ..., N. Since the true value is the same for all the measurements, λ is a constant, i.e. the function λ(x) is a constant, and the variable x does not actually appear in the problem.

y_i = result of measurement i, i = 1, ..., N
σ_i² = V[y_i], assumed known
λ = true value (plays the role of θ)

For uncorrelated y_i, minimize

\[ \chi^2(\lambda) = \sum_{i=1}^{N} \frac{(y_i - \lambda)^2}{\sigma_i^2} . \]

Using LS to combine measurements - 2

Set ∂χ²/∂λ = 0 and solve to get the LS estimator λ̂:

\[ \hat{\lambda} = \frac{\sum_{i=1}^{N} y_i/\sigma_i^2}{\sum_{j=1}^{N} 1/\sigma_j^2} \quad \text{(weighted average)} , \qquad V[\hat{\lambda}] = \frac{1}{\sum_{i=1}^{N} 1/\sigma_i^2} . \]

The variance of the weighted average is smaller than any of the variances of the individual measurements. Furthermore, if one of the measured y_i has a much smaller variance than the rest, then this measurement will dominate both the value and the variance of the weighted average.
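A minimal sketch of the weighted average (the three measurements and errors are invented):

```python
# Sketch: LS combination of independent measurements of the same quantity.
import numpy as np

y = np.array([10.2, 9.8, 10.5])        # illustrative measurements y_i
sigma = np.array([0.3, 0.4, 0.6])      # their known standard deviations

w = 1.0 / sigma**2
lam_hat = np.sum(w * y) / np.sum(w)    # weighted average
var_hat = 1.0 / np.sum(w)              # its variance

print("lambda_hat =", lam_hat, "+/-", np.sqrt(var_hat))
```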

Using LS to combine measurements - 3

Generalization to the case where the measurements are not independent: this would occur, for example, if they are based at least in part on the same data. With covariance matrix cov[y_i, y_j] = V_ij, minimize

\[ \chi^2(\lambda) = \sum_{i,j=1}^{N} (y_i - \lambda)\,(V^{-1})_{ij}\,(y_j - \lambda) . \]

The LS estimator has zero bias and minimum variance (Gauss-Markov theorem).

Example: averaging two correlated measurements

Suppose we have two measurements y_1 and y_2 with the covariance matrix

\[ V = \begin{pmatrix} \sigma_1^2 & \rho\,\sigma_1\sigma_2 \\ \rho\,\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix} , \]

where ρ = V_12/(σ_1 σ_2) is the correlation coefficient. The inverse covariance matrix is given by

\[ V^{-1} = \frac{1}{1-\rho^2} \begin{pmatrix} 1/\sigma_1^2 & -\rho/(\sigma_1\sigma_2) \\ -\rho/(\sigma_1\sigma_2) & 1/\sigma_2^2 \end{pmatrix} . \]

Example: averaging two correlated measurements - 2

Using the formulas from the previous slides, we determine the weighted average

\[ \hat{\lambda} = w\,y_1 + (1-w)\,y_2 , \qquad w = \frac{\sigma_2^2 - \rho\,\sigma_1\sigma_2}{\sigma_1^2 - 2\rho\,\sigma_1\sigma_2 + \sigma_2^2} . \]

The variance is found to be

\[ V[\hat{\lambda}] = \frac{(1-\rho^2)\,\sigma_1^2\sigma_2^2}{\sigma_1^2 - 2\rho\,\sigma_1\sigma_2 + \sigma_2^2} . \]

Example: averaging two correlated measurements - 3

The change in the inverse variance due to the second measurement y_2 is

\[ \frac{1}{V[\hat{\lambda}]} - \frac{1}{\sigma_1^2} = \frac{(\sigma_1 - \rho\,\sigma_2)^2}{(1-\rho^2)\,\sigma_1^2\sigma_2^2} \;\geq\; 0 , \]

which is to say that the second measurement always helps to decrease the variance, or at least never hurts.

If ρ > σ_1/σ_2, then 1 − w < 0 and the weighted average does not lie between y_1 and y_2!!! This cannot happen if the correlation is due to common data, but it is possible for a shared random effect. The result is very unreliable if, e.g., ρ, σ_1 or σ_2 are incorrect.
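A small numerical sketch of these formulas, with invented numbers chosen such that ρ > σ₁/σ₂:

```python
# Sketch: weighted average of two correlated measurements.
# Invented numbers satisfying rho > sigma1/sigma2, so 1 - w < 0 and the
# average lies outside the interval [y1, y2].
import numpy as np

y1, y2 = 10.0, 11.0
s1, s2 = 0.5, 1.0
rho = 0.8                                   # rho > s1/s2 = 0.5

w = (s2**2 - rho * s1 * s2) / (s1**2 + s2**2 - 2 * rho * s1 * s2)
lam_hat = w * y1 + (1 - w) * y2
var_hat = (1 - rho**2) * s1**2 * s2**2 / (s1**2 + s2**2 - 2 * rho * s1 * s2)

print("w =", w)                             # w > 1 here, i.e. 1 - w < 0
print("lambda_hat =", lam_hat, "+/-", np.sqrt(var_hat))
```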

Example

We measure the length of an object with two rulers made of different substances, so that the thermal expansion coefficients are different. Suppose both rulers have been calibrated to give accurate results at a temperature T_0, but at any other temperature a corrected estimate y of the true (unknown) length λ must be obtained using

\[ y_i = L_i + c_i\,(T - T_0) , \]

where i refers to ruler 1 or 2, L_i is the uncorrected measurement, c_i is the expansion coefficient, and T is the temperature, which must be measured. We treat the measured temperature as a random variable with standard deviation σ_T, and we assume that T is the same for the two measurements, i.e. they are carried out together. The uncorrected L_i are treated as random variables with standard deviations σ_{L_i}. Assume that c_i, σ_{L_i} and σ_T are known.

Example - Formulas

Goal: determine the weighted average of y_1 and y_2. We need their covariance matrix. The variances of the corrected measurements (diagonal terms) are

\[ \sigma_i^2 = V[y_i] = \sigma_{L_i}^2 + c_i^2\,\sigma_T^2 . \]

If the measurements are unbiased, then E[y_i] = λ and E[T] = T_true, the true (unknown) temperature, so that E[L_i] = λ − c_i (T_true − T_0), V[T] = σ_T², and cov[L_i, L_j] = δ_ij σ_{L_i}². Therefore the off-diagonal term is

\[ \mathrm{cov}[y_1, y_2] = V_{12} = E[y_1 y_2] - E[y_1]E[y_2] = c_1 c_2\,\sigma_T^2 . \]

Example - Formulas - 2

The correlation coefficient is

\[ \rho = \frac{V_{12}}{\sigma_1\sigma_2} = \frac{c_1 c_2\,\sigma_T^2}{\sqrt{(\sigma_{L_1}^2 + c_1^2\sigma_T^2)\,(\sigma_{L_2}^2 + c_2^2\sigma_T^2)}} . \]

The weighted average is thus

\[ \hat{\lambda} = \frac{\left(\sigma_{L_2}^2 + c_2(c_2-c_1)\sigma_T^2\right) y_1 + \left(\sigma_{L_1}^2 + c_1(c_1-c_2)\sigma_T^2\right) y_2}{\sigma_{L_1}^2 + \sigma_{L_2}^2 + (c_1-c_2)^2\,\sigma_T^2} . \]

If σ_T is very small, then ρ goes to 0 and λ̂ will lie between y_1 and y_2. If the σ_{L_i} are very small and σ_T is large, one of the weights becomes negative! In the extreme case ρ = 1, the weighted average becomes

\[ \hat{\lambda} = \frac{c_2\,y_1 - c_1\,y_2}{c_2 - c_1} . \]
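A numerical sketch of the ruler example (all values invented), showing that when σ_T dominates the average is pulled outside the interval [y₁, y₂], towards the crossing point discussed on the next slide:

```python
# Sketch: averaging the two temperature-corrected length measurements.
# All numerical values (c_i, sigma_Li, sigma_T, y_i) are invented for illustration.
import numpy as np

c1, c2 = 0.1, 0.2            # expansion coefficients of the two rulers
sL1, sL2 = 0.01, 0.01        # std. deviations of the raw length readings
sT = 1.0                     # std. deviation of the measured temperature
y1, y2 = 1.80, 1.85          # corrected measurements

# Covariance matrix of (y1, y2)
V = np.array([[sL1**2 + c1**2 * sT**2, c1 * c2 * sT**2],
              [c1 * c2 * sT**2,        sL2**2 + c2**2 * sT**2]])
rho = V[0, 1] / np.sqrt(V[0, 0] * V[1, 1])

s1, s2 = np.sqrt(V[0, 0]), np.sqrt(V[1, 1])
w = (s2**2 - rho * s1 * s2) / (s1**2 + s2**2 - 2 * rho * s1 * s2)
lam_hat = w * y1 + (1 - w) * y2

print("rho =", rho, " w =", w)
print("lambda_hat =", lam_hat)   # lies outside [y1, y2] when sigma_T dominates
```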

Example - Illustration

The diagonal bands represent the possible values of the corrected length y_i, given the measured lengths L_i, as a function of the temperature. Suppose the L_i are known very accurately, and yet y_1 and y_2 differ by quite a bit. THEN the only available explanation is that the true temperature must be different from the measured value T!!! The weighted average is thus pulled towards the point where the bands y_1(T) and y_2(T) cross.

Example - Comments

Always question the averaging of correlated quantities! The correlation here stems from the fact that the individual measurements are based in part on a common measurement, here the temperature, which is itself a random quantity. Averages of highly correlated quantities should be treated with caution, since a small error in the covariance matrix of the individual measurements can lead to a large error in the average, as well as to an incorrect estimate of its variance. This is particularly true if the magnitudes of the correlated uncertainties are overestimated.

In the example, if the implied correction to the temperature, ΔT = T̂ − T, turns out to be large compared to σ_T, then this means that our assumptions about the measurements are probably INCORRECT! This is reflected in a high χ²_min value and a small p-value.

Example - More comments

Here we would have to revise the model, perhaps re-evaluating all the standard deviations, or the relation between the measured and the corrected lengths. The correct conclusion in this example would be that, because of the large χ², the two measurements y_1 and y_2 are incompatible, and the average is therefore not reliable. (Comments about under- and over-estimation of errors: pulls.)

ALICE data analysis

January 14: ALICE master class at GSI.
11:00 first arrival at GSI (registration at the entrance), visit of GSI
12:00 lunch
13:00 introduction and analysis of ALICE data

Please send an email with the subject 'SMIPP at GSI', including your full name, personal ID number, and your arrival time.

The next lectures

January 7, 21, 28 + February 4:
Statistical errors, confidence intervals and limits
The Higgs case
Statistical and systematic errors
Unfolding of distributions

Don't forget to give me suggestions if you would like some topic to be discussed!!!!