- Branden Brown
- 6 years ago
Physics 48-0 Lab: An example of statistical error analysis in a coin flip experiment

1 Intro

This worksheet steps you through the reasoning behind how statistical errors in simple experimental measurements are often dealt with. The simple (gedanken) experiment chosen to illustrate the reasoning is a coin flip experiment. After flipping a coin 50 times and counting the number of heads that are obtained, one should be able to state, based on the experimental result, that the probability of obtaining heads in a single coin flip is x ± y with 95% confidence.

2 Error Analysis Theory

2.1 Obtaining the probability of a single coin flip being heads

Suppose we flip the coin N times. We record the number of heads and the number of tails: heads = h and tails = N − h. We would naively conclude that the probability of obtaining heads in a single coin flip is

    P = h/N.    (1)

However, because N is finite, this is not rigorously true. We need to assess with what confidence we would expect the outcome of the data being h for a given N. Let this probability be φ(N, h). If we assume that every coin flip is independent and has a probability p of giving heads (note the lower case), we find

    φ(p, N, h) = [N!/((N − h)! h!)] p^h (1 − p)^(N−h).    (2)

Suppose we make a change of variables from h to x through h = pN + p − 1/2 + x, where we are now treating h as a continuous variable. In the limit that N → ∞, the well known Stirling's formula

    N! ≈ √(2πN) N^N e^(−N)    (3)

can be used in Eq. (2) to obtain

    φ(p, N, pN + p − 1/2 + x) ≈ [1/(√(2π) σ)] e^(−x²/(2σ²)),    (4)

where

    σ²(p, N) = N p (1 − p)    (5)

characterizes the width. Hence, the most probable outcome of the experiment is h ≈ pN as expected, but deviations of order σ = √(N p(1 − p)) from this most probable value are not very improbable. Note that in terms of the original variable h, we can write

    φ(p, N, h) ≈ [1/(√(2π) σ)] e^(−(h − pN − p + 1/2)²/(2σ²)).    (6)

This functional form is called a Gaussian distribution, and this form being a good approximation is expected on general grounds based on a theorem called the central limit theorem.
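As a quick numerical sanity check of Eqs. (2) through (7), the exact binomial form and its Gaussian approximation are easy to tabulate. The sketch below is not part of the original worksheet; the values N = 50 and p = 0.6 are arbitrary illustrations.

```python
import math

def phi_exact(p, N, h):
    """Exact binomial probability of h heads in N flips, Eq. (2)."""
    return math.comb(N, h) * p**h * (1 - p)**(N - h)

def phi_gauss(p, N, h):
    """Gaussian approximation of Eq. (6), with sigma^2 = N p (1 - p) from Eq. (5)."""
    sigma2 = N * p * (1 - p)
    arg = (h - p * N - p + 0.5)**2 / (2 * sigma2)
    return math.exp(-arg) / math.sqrt(2 * math.pi * sigma2)

N, p = 50, 0.6                     # illustrative values
# the probabilities of Eq. (2) sum to one over h = 0..N
total = sum(phi_exact(p, N, h) for h in range(N + 1))
# near the peak h ~ pN the Gaussian approximation is good to a few percent
h_peak = round(p * N)
```

Evaluating both functions at `h_peak` shows agreement at the percent level even for a modest N = 50.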
Also, the small terms in the parentheses of the exponent, such as −p + 1/2, are not numerically important, and the peak of the distribution should really be regarded as being at h = pN. The fact that the probabilities sum up to one is expressed as

    Σ_{h=0}^{N} φ(p, N, h) = 1.    (7)
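The gedanken experiment can also be played on a computer: simulate many N-flip experiments and compare the observed frequency of a given h against φ(p, N, h). A minimal sketch, assuming an arbitrary true p = 0.6 and an arbitrary trial count:

```python
import math
import random

def phi(p, N, h):
    """Binomial prediction for h heads in N flips, Eq. (2)."""
    return math.comb(N, h) * p**h * (1 - p)**(N - h)

random.seed(1)                      # fixed seed for reproducibility
N, p, trials = 50, 0.6, 20000       # arbitrary illustration

counts = [0] * (N + 1)
for _ in range(trials):
    h = sum(random.random() < p for _ in range(N))   # one N-flip experiment
    counts[h] += 1

h_peak = round(p * N)
freq = counts[h_peak] / trials      # observed frequency of h = pN
```

The simulated frequency of the most probable outcome lands within the expected statistical scatter of the binomial prediction.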
In the approximate form of φ(N, h) (given by Eq. (4)), h has become a continuous variable. Hence, the conservation of probability becomes

    ∫_{−∞}^{∞} dh φ(p, N, h) = 1,    (8)

which means that we had to extend the h variable to unphysical negative values. However, this is not problematic since φ(p, N, 0) in the N → ∞ limit is negligible.

Experimentally, we would like to measure p. Since the data sample is finite, we can only obtain an estimate of the true p from the experiment. Hence, we will refer to the data-derived p as p(estimate). Suppose we assume that our experimental result corresponds to the most likely outcome. This is called the maximum likelihood method. To this end, we need a probability of p being the true probability given the data h(data). We turn to constructing this next.

2.2 Probability of p being true and the statistical uncertainty in the estimate of the probability

The probability function φ computed in the previous subsection is the probability of obtaining h heads given that the probability of obtaining heads is p. In this subsection, we would like to estimate the probability of p being the true probability, given that the data is h = h(data). The conditional probability given XXX is denoted as P(something happening | XXX); i.e.

    φ(p, N, h) dh = P(obtain h | p is the right answer).    (9)

Suppose p^(1) and p^(2) are two different probability value possibilities. According to what is known as Bayes' theorem, we have

    P(p^(1) is the right answer | obtain h) / P(p^(2) is the right answer | obtain h)
      = [P(p^(1) is the right answer) / P(p^(2) is the right answer)]
        × [P(obtain h | p^(1) is the right answer) / P(obtain h | p^(2) is the right answer)].    (10)

Suppose we assume

    P(p^(1) is the right answer) ≈ P(p^(2) is the right answer).    (11)

Then we find

    P(p^(1) is the right answer | obtain h) / P(p^(2) is the right answer | obtain h) ≈ φ(p^(1), N, h) / φ(p^(2), N, h),    (12)

or in other words

    P(p is the right answer | obtain h) ∝ φ(p, N, h) dp.    (13)

We can obtain the normalization using

    ∫_0^1 dp P(p is the right answer | obtain h) = 1.    (14)
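With the flat-prior assumption of Eq. (11), the normalized posterior of Eqs. (13) and (14) can be built directly on a grid in p. A sketch with illustrative data N = 50, h = 30 (not the worksheet's numbers):

```python
import math

def phi(p, N, h):
    """Likelihood of the data h given head probability p, Eq. (2)."""
    return math.comb(N, h) * p**h * (1 - p)**(N - h)

N, h = 50, 30                       # illustrative data
dp = 1e-4
grid = [i * dp for i in range(1, 10000)]          # p values in (0, 1)
norm = sum(phi(p, N, h) for p in grid) * dp       # normalization, Eq. (14)
posterior = [phi(p, N, h) / norm for p in grid]   # P(p is right | h), Eq. (13)

# the posterior integrates to one and peaks at p = h/N
i_best = max(range(len(grid)), key=lambda i: posterior[i])
p_best = grid[i_best]
```

The grid maximum sits at h/N, the naive estimate of Eq. (1), which is the maximum likelihood reasoning in action.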
Because φ(p, N, h) should drop off rapidly unless p is close to the p(estimate) obtained from data (see below), we should be able to approximate this as a Gaussian. In particular, we can Taylor expand the exponent of Eq. (6) to quadratic order in p (about the p* which satisfies f_h(p*) = 0) as

    φ(p, N, h) ≈ [1/(√(2π) σ(p, N))] e^(−f_h(p*) − (1/2) f_h''(p*)(p − p*)²),    (15)

where

    f_h(p) = (h − pN − p + 1/2)² / (2σ²(p, N)).    (16)
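The stationary point and curvature of f_h(p) in Eq. (16) can be cross-checked numerically. The closed form p* = (h + 1/2)/(N + 1) follows from setting the numerator of f_h to zero, and the large-N curvature N³/(h(N − h)) is the approximation quoted below; N = 50, h = 30 are illustrative.

```python
def f_h(p, N, h):
    """Exponent of Eq. (6): (h - pN - p + 1/2)^2 / (2 sigma^2(p, N)), Eq. (16)."""
    return (h - p * N - p + 0.5)**2 / (2 * N * p * (1 - p))

N, h = 50, 30                       # illustrative data
p_star = (h + 0.5) / (N + 1)        # zero of f_h: h - p(N + 1) + 1/2 = 0

# curvature at p* by a central second difference
eps = 1e-5
f2 = (f_h(p_star + eps, N, h) - 2 * f_h(p_star, N, h) + f_h(p_star - eps, N, h)) / eps**2
f2_exact = (N + 1)**2 / (N * p_star * (1 - p_star))   # analytic f_h''(p*)
f2_largeN = N**3 / (h * (N - h))                      # large-N approximation
```

The finite-difference curvature matches the analytic value, and the large-N form is already good to a few percent at N = 50.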
Figure 1: Comparison of the approximate formula Eq. (17) and the exact formula Eq. (2) for two choices of the parameters {N, h}. Can you guess which curve corresponds to which parameters? Note that this is not a normalized distribution and we must compute the normalization as discussed in the text. It is clear that the numerical approximation is excellent.

Since the dominant p dependence is in the exponent f_h(p), for the purposes of this exercise we can neglect the p dependence in the prefactor for simplicity and write

    φ(p, N, h) ≈ [1/(√(2π) σ(p*, N))] e^(−f_h(p*) − (1/2) f_h''(p*)(p − p*)²).    (17)

More explicitly, we find

    p* = (1 + 2h) / (2(1 + N)),    (18)

    f_h(p*) = 0,    (19)

    f_h''(p*) = 4(1 + N)⁴ / [N(1 + 2h)(2(N − h) + 1)] ≈ N³ / [h(N − h)].    (20)

Before moving on, let's check that the approximate formula Eq. (17) is a good approximation to the exact formula Eq. (2). This is done by plotting the two functions for two representative choices of {N, h} in Fig. 1. Finally, solving Eq. (14) for the normalization, we arrive at

    P(p is the right answer | obtain h) ≈ e^(−(1/2) f_h''(p*)(p − p*)²) dp / √(2π/f_h''(p*)).    (21)

First, we can use the maximum likelihood reasoning to estimate p based on the data. The most probable p given the data h = h(data) is

    p(estimate) = p*|_{h = h(data)} ≈ h(data)/N,    (22)

which is what we naively expected. Next, we can easily compute the answer to the question: What is the probability that |p(estimate) − p(true)| > Δp? It is approximately

    I ≡ P(the right answer is farther away from p(estimate) than Δp | obtain h(data))
      = P(p(estimate) + Δp + ɛ is the right answer | obtain h(data))
        + P(p(estimate) + Δp + 2ɛ is the right answer | obtain h(data)) + ...    (23)
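The comparison shown in Fig. 1 can be reproduced numerically. The sketch below (with illustrative N = 50, h = 30, not necessarily the figure's parameters) checks that the Gaussian form of Eq. (17) tracks the exact p dependence of Eq. (2) to within a few percent near p*:

```python
import math

def phi_exact(p, N, h):
    """Exact p dependence of the likelihood, Eq. (2)."""
    return math.comb(N, h) * p**h * (1 - p)**(N - h)

def phi_approx(p, N, h):
    """Gaussian-in-p form of Eq. (17): prefactor frozen at p*, and f_h(p*) = 0."""
    p_star = (1 + 2 * h) / (2 * (1 + N))                         # Eq. (18)
    f2 = 4 * (1 + N)**4 / (N * (1 + 2 * h) * (2 * (N - h) + 1))  # Eq. (20)
    sigma = math.sqrt(N * p_star * (1 - p_star))
    return math.exp(-0.5 * f2 * (p - p_star)**2) / (math.sqrt(2 * math.pi) * sigma)

N, h = 50, 30                       # illustrative parameters
p_star = (1 + 2 * h) / (2 * (1 + N))
# maximum relative deviation within roughly one 95% error bar of p*
worst = max(abs(phi_approx(p_star + 0.01 * k, N, h) - phi_exact(p_star + 0.01 * k, N, h))
            / phi_exact(p_star + 0.01 * k, N, h) for k in range(-8, 9))
```

Over this range the two curves differ by only a few percent, consistent with the figure's verdict that the approximation is excellent.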
This can be written using Eq. (21) as

    P(the right answer is farther away from p(estimate) than Δp | obtain h(data))
      ≈ √(f''_{h(data)}(p*)/(2π)) [ ∫_{−∞}^{p(estimate)−Δp} e^(−(1/2) f''_{h(data)}(p*)(p − p*)²) dp
        + ∫_{p(estimate)+Δp}^{∞} e^(−(1/2) f''_{h(data)}(p*)(p − p*)²) dp ],    (24)

where we use h(data) when evaluating p* in this expression. Using this, one can also answer the question: What is the Δp > 0 value for which I can make the statement that the true value of p is p(estimate) ± Δp with 95% confidence? Explicitly, this is accomplished by solving

    √(f''_{h(data)}(p*)/(2π)) [ ∫_{−∞}^{p(estimate)−Δp} e^(−(1/2) f''_{h(data)}(p*)(p − p*)²) dp
        + ∫_{p(estimate)+Δp}^{∞} e^(−(1/2) f''_{h(data)}(p*)(p − p*)²) dp ] = 0.05.    (25)

The Δp solving this equation is referred to as the 95% confidence error bar.

Note that quite generically, the width of the function e^(−(1/2) f''_{h(data)}(p*)(p − p*)²) is approximately

    1/√(f''_{h(data)}(p*)) ≈ √(h(data)(N − h(data))/N³),    (26)

and that the data typically would yield (based on Eq. (6))

    h(data) ≈ N p(true) [1 + O(√(p(true)(1 − p(true))/N) / p(true))].    (27)

This means that in the large N limit, the error Δp scales as

    Δp ∝ √(p(true)(1 − p(true))/N),    (28)

with

    lim_{N→∞} √(p(true)(1 − p(true))/N) = 0.    (29)

Hence, the error in the determination of p(true) using experimental data decreases like 1/√N. This is a typical property of experimental systems with Gaussian-like fluctuations. In the limit that N → ∞, the error in p vanishes, recovering the well known statement that an infinite set of measurements will yield the true probability.

2.3 More typical experimental situations

The coin flip situation is somewhat special because each of the data points (heads or tails) had only two outcomes with no really preferred value. Its main advantage was the fact that the probability functions for the observables were easy to compute. Most observables in physics experiments have a sharper preferred value within a continuous range of possible values. The probability function for each observation data point is typically approximately Gaussian, centered about an ideal value that you want to extract from the experiment.
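Returning to Eq. (25): it is the statement that the two Gaussian tails beyond p(estimate) ± Δp contain 5% of the probability, so it can be solved with the complementary error function, giving Δp = 1.96/√(f''_{h(data)}(p*)). A sketch (illustrative data again), which also checks the 1/√N scaling of Eq. (28):

```python
import math

def error_bar_95(N, h):
    """Solve Eq. (25): the two Gaussian tails beyond p* +/- dp hold 5% probability,
    i.e. erfc(dp * sqrt(f2 / 2)) = 0.05, equivalent to dp = 1.96 / sqrt(f2)."""
    p_star = (h + 0.5) / (N + 1)
    f2 = (N + 1)**2 / (N * p_star * (1 - p_star))   # f_h''(p*)
    lo, hi = 0.0, 1.0                               # bisect on the tail condition
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid * math.sqrt(f2 / 2)) > 0.05:
            lo = mid                                # tails too fat: widen dp
        else:
            hi = mid
    return 0.5 * (lo + hi)

dp_50 = error_bar_95(50, 30)        # illustrative data
dp_200 = error_bar_95(200, 120)     # same head fraction, four times the flips
```

Quadrupling N at a fixed head fraction halves the error bar, as the 1/√N scaling predicts.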
Hence, the likelihood function (the analog of φ(p, N, h)) will be a product of Gaussians. In fact, we can treat the extracted p(estimate) for a single experiment as a single data point and repeat the experiment
to obtain a set of data points, where each data point is of the form that is more typical of physics experimental observations. In fact, you have been working with this implicitly in the error propagation of Physics 47 labs. To avoid overwhelming you with material, we will discuss the mathematics of this later in future labs and now turn to exercises regarding the coin flip experiment. (Some related discussion is given in worked Exercise 4 below.) On the other hand, note that the general philosophy regarding error bars in the situation involving products of Gaussians is the same as in the coin flip experiment just described.

3 Exercises

1. Suppose after 50 coin flips, one obtains 3 heads. What is the estimate of the probability of obtaining heads in a single coin flip from this experiment?

2. Find the 95% confidence error bar associated with the result of the experiment above. [Hint: Note that

    [1/(√(2π) σ)] ∫_{x_m − 1.96σ}^{x_m + 1.96σ} e^(−(x − x_m)²/(2σ²)) dx ≈ 0.95.]

3. Suppose that at the same time as the experiment just described (which we will refer to as the first experiment) takes place, another coin flip experiment with a different coin (but the same procedure of flipping 50 times) takes place next to the first one. Suppose this second experiment numbers its 50 coin flip results sequentially, just like the first experiment. The second experiment luckily results in 3 heads as well. When the nth flip of the first experiment has heads that matches the heads of the nth flip of the second experiment, we say there is a heads coincidence. What is the predicted number of coin flip heads coincidences, assuming that every coin flip is independent, based on the experimentally measured probabilities?

4. Worked exercise explained by the TA: What is the 95% confidence error bar on this prediction?

ans: Our prediction in the last problem is based on estimating the true probabilities by the product p_1(estimate) p_2(estimate). We need to estimate an error on this product. The probability distribution for having p_1 and p_2 is
    [e^(−(1/2) f''_{h_1(data)}(p_1*)(p_1 − p_1*)²) dp_1 / √(2π/f''_{h_1(data)}(p_1*))] × [e^(−(1/2) f''_{h_2(data)}(p_2*)(p_2 − p_2*)²) dp_2 / √(2π/f''_{h_2(data)}(p_2*))].

When integrated over a particular (p_1, p_2) region centered about (p_1*, p_2*), this gives the probability that the right answer lies in that region. Note that when h_1(data) = h_2(data) (which simplifies the mathematics), we can write

    [f''_{h(data)}(p*)/(2π)] ∫_region e^(−(1/2) f''_{h(data)}(p*)[(p_1 − p_1*)² + (p_2 − p_2*)²]) dp_1 dp_2 = 0.95.

Note that when (p_1 − p_1*)² + (p_2 − p_2*)² = constant, the integrand has the same value. This is an equation for a circle centered about (p_1*, p_2*). Hence, we can define the radial coordinate centered about (p_1*, p_2*) through r² = (p_1 − p_1*)² + (p_2 − p_2*)² and write

    [f''_{h(data)}(p*)/(2π)] ∫_region e^(−(1/2) f''_{h(data)}(p*) r²) r dr dθ = 0.95.

Furthermore, let

    √(f''_{h(data)}(p*)) r = R.
Choosing a circular region of radius R, we find

    (1/(2π)) ∫_{circular region of radius R} e^(−R'²/2) R' dR' dθ = 1 − e^(−R²/2) = 0.95.

Hence, we find R = 2.45, which is equivalent to an error radius, in terms of r = √((p_1 − p_1*)² + (p_2 − p_2*)²), of

    r = 2.45 / √(f''_{h(data)}(p*)).

Since Δp = 1.96 / √(f''_{h(data)}(p(estimate))) in Exercise 2, we find r = 1.25 Δp. This means that we want the maximum and the minimum of

    Δ(p_1 p_2) ≈ p_2 Δp_1 + p_1 Δp_2

(where we have linearized the difference) with (Δp_1, Δp_2) lying on the circle

    r² = (Δp_1)² + (Δp_2)².

Hence, we want to extremize

    p_2 Δp_1 ± p_1 √(r² − (Δp_1)²)

by taking a derivative with respect to Δp_1. One finds

    Δ(p_1 p_2) = ± r √(p_1² + p_2²).

Dividing by p_1 p_2, we find

    Δ(p_1 p_2)/(p_1 p_2) = ± r √(1/p_1² + 1/p_2²) = ± √2 r/p = ± 1.77 Δp/p,    (30)

where the last two equalities use p_1 = p_2 = p. Thus, one concludes that the fractional error does not increase by a factor of 2 (which would have given Δ(p_1 p_2)/(p_1 p_2) = Δp_1/p_1 + Δp_2/p_2 = 2 Δp/p) even though one is multiplying two uncertain quantities p_1(estimate) p_2(estimate); it increases by a factor close to √2 (here 1.25 √2 ≈ 1.77). The answer to the present question of the 95% confidence error bar on the prediction is ±5.

Instead of going through this long, tedious reasoning, what people typically assume is that the random variable p_1 p_2 has a probability distribution which is Gaussian. In that case, one can calculate the width of the distribution by computing the expectation value of [Δ(p_1 p_2)]² after linearizing Δ(p_1 p_2) = p_2 Δp_1 + p_1 Δp_2:

    Σ_{p_1, p_2} P(p_1 and p_2 | data)(p_2 Δp_1 + p_1 Δp_2)² = Σ_{p_1, p_2} P(p_1 and p_2 | data)(p_2² (Δp_1)² + p_1² (Δp_2)² + 2 p_1 p_2 Δp_1 Δp_2).

The last term, which is a cross term, evaluates to zero when summed over p_1 and p_2. The (Δp_1)² and (Δp_2)² terms evaluate to the σ² values of the corresponding Gaussian functions (see e.g. Eq. (4)). Hence, one obtains the effective σ of the purported probability distribution of p_1 p_2:

    σ² = p_2² σ_1² + p_1² σ_2².
This implies that the fractional σ error (which is not a 95% confidence error bar) in the quantity of interest p_1 p_2 is

    σ/(p_1 p_2) = √(σ_1²/p_1² + σ_2²/p_2²).    (31)

If σ_1 = σ_2 and p_1 = p_2 as for our problem, then one concludes σ/(p_1 p_2) = √2 σ_1/p_1, suggestive of Eq. (30). The addition of fractional errors in quadrature as in Eq. (31) is referred to as adding errors in quadrature. The advantage of adding errors in quadrature is that the computation is utterly simple and gets you in the ballpark of the correct answer. The disadvantage is the loss of understanding of the assumptions in the approximations (for example, it is not true that p_1 p_2 will be rigorously Gaussian distributed, although it will typically have a similar shape).

5. What are some possible systematic errors associated with the single coin flip probability measurement and the coincidence experiment?
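The numbers appearing in the worked exercise can be recomputed in a few lines: the 95% radius R from 1 − e^(−R²/2) = 0.95, the ratio r/Δp = R/1.96, and the factor √2 · (R/1.96) ≈ 1.77 of Eq. (30), alongside the naive quadrature factor √2 of Eq. (31). A sketch:

```python
import math

# 2D Gaussian in the rescaled radius: probability inside radius R is 1 - exp(-R^2 / 2)
R = math.sqrt(-2 * math.log(0.05))   # solves 1 - e^(-R^2/2) = 0.95, giving R near 2.45
r_over_dp = R / 1.96                 # error radius r in units of the 1D 95% bar, ~1.25
factor = math.sqrt(2) * r_over_dp    # growth of the fractional error on p1*p2, Eq. (30)
quad = math.sqrt(2)                  # naive quadrature factor for the sigmas, Eq. (31)
```

As the text notes, the quadrature shortcut lands in the right ballpark (√2 ≈ 1.41 versus the more careful 1.77) at a fraction of the effort.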
More informationP (A B) P ((B C) A) P (B A) = P (B A) + P (C A) P (A) = P (B A) + P (C A) = Q(A) + Q(B).
Lectures 7-8 jacques@ucsdedu 41 Conditional Probability Let (Ω, F, P ) be a probability space Suppose that we have prior information which leads us to conclude that an event A F occurs Based on this information,
More informationHomework 2. Spring 2019 (Due Thursday February 7)
ECE 302: Probabilistic Methods in Electrical and Computer Engineering Spring 2019 Instructor: Prof. A. R. Reibman Homework 2 Spring 2019 (Due Thursday February 7) Homework is due on Thursday February 7
More informationcross section limit (95% CL) N = B = 10 N = B = 3 N = B = 0 number of experiments
L3 note 2633 Calculating Upper Limits with Poisson Statistics Thomas Hebbeker Humboldt University Berlin February 23, 2001 Often upper limits on cross sections and similar variables are calculated assuming
More informationDiscrete Probability and State Estimation
6.01, Spring Semester, 2008 Week 12 Course Notes 1 MASSACHVSETTS INSTITVTE OF TECHNOLOGY Department of Electrical Engineering and Computer Science 6.01 Introduction to EECS I Spring Semester, 2008 Week
More informationRadioactivity: Experimental Uncertainty
Lab 5 Radioactivity: Experimental Uncertainty In this lab you will learn about statistical distributions of random processes such as radioactive counts. You will also further analyze the gamma-ray absorption
More informationDamped harmonic motion
Damped harmonic motion March 3, 016 Harmonic motion is studied in the presence of a damping force proportional to the velocity. The complex method is introduced, and the different cases of under-damping,
More information4. Complex Oscillations
4. Complex Oscillations The most common use of complex numbers in physics is for analyzing oscillations and waves. We will illustrate this with a simple but crucially important model, the damped harmonic
More informationWave Equations Explicit Formulas In this lecture we derive the representation formulas for the wave equation in the whole space:
Nov. 07 Wave Equations Explicit Formulas In this lecture we derive the representation formulas for the wave equation in the whole space: u u t t u = 0, R n 0, ) ; u x, 0) = g x), u t x, 0) = h x). ) It
More informationStatistical methods for NLP Estimation
Statistical methods for NLP Estimation UNIVERSITY OF Richard Johansson January 29, 2015 why does the teacher care so much about the coin-tossing experiment? because it can model many situations: I pick
More informationd)p () = A = Z x x p b Particle in a Box 3. electron is in a -dimensional well with innitely high sides and width An Which of following statements mus
4 Spring 99 Problem Set Optional Problems Physics April, 999 Handout a) Show that (x; t) =Ae i(kx,!t) satises wave equation for a string: (x; t) @ = v @ (x; t) @t @x Show that same wave function (x; t)
More informationSUMS OF SQUARES WUSHI GOLDRING
SUMS OF SQUARES WUSHI GOLDRING 1. Introduction Here are some opening big questions to think about: Question 1. Which positive integers are sums of two squares? Question 2. Which positive integers are sums
More informationConditional Probability (cont'd)
Conditional Probability (cont'd) April 26, 2006 Conditional Probability (cont'd) Midterm Problems In a ten-question true-false exam, nd the probability that a student get a grade of 70 percent or better
More informationContents. 1 Introduction to Dynamics. 1.1 Examples of Dynamical Systems
Dynamics, Chaos, and Fractals (part 1): Introduction to Dynamics (by Evan Dummit, 2015, v. 1.07) Contents 1 Introduction to Dynamics 1 1.1 Examples of Dynamical Systems......................................
More informationthe probability of getting either heads or tails must be 1 (excluding the remote possibility of getting it to land on its edge).
Probability One of the most useful and intriguing aspects of quantum mechanics is the Heisenberg Uncertainty Principle. Before I get to it however, we need some initial comments on probability. Let s first
More informationMathematical Institute, University of Utrecht. The problem of estimating the mean of an observed Gaussian innite-dimensional vector
On Minimax Filtering over Ellipsoids Eduard N. Belitser and Boris Y. Levit Mathematical Institute, University of Utrecht Budapestlaan 6, 3584 CD Utrecht, The Netherlands The problem of estimating the mean
More informationLab 6 - ELECTRON CHARGE-TO-MASS RATIO
101 Name Date Partners OBJECTIVES OVERVIEW Lab 6 - ELECTRON CHARGE-TO-MASS RATIO To understand how electric and magnetic fields impact an electron beam To experimentally determine the electron charge-to-mass
More informationCHALLENGE! (0) = 5. Construct a polynomial with the following behavior at x = 0:
TAYLOR SERIES Construct a polynomial with the following behavior at x = 0: CHALLENGE! P( x) = a + ax+ ax + ax + ax 2 3 4 0 1 2 3 4 P(0) = 1 P (0) = 2 P (0) = 3 P (0) = 4 P (4) (0) = 5 Sounds hard right?
More informationMeasurements, Sig Figs and Graphing
Measurements, Sig Figs and Graphing Chem 1A Laboratory #1 Chemists as Control Freaks Precision: How close together Accuracy: How close to the true value Accurate Measurements g Knowledge Knowledge g Power
More informationSome Basic Concepts of Probability and Information Theory: Pt. 1
Some Basic Concepts of Probability and Information Theory: Pt. 1 PHYS 476Q - Southern Illinois University January 18, 2018 PHYS 476Q - Southern Illinois University Some Basic Concepts of Probability and
More information1. Introduction As is well known, the bosonic string can be described by the two-dimensional quantum gravity coupled with D scalar elds, where D denot
RIMS-1161 Proof of the Gauge Independence of the Conformal Anomaly of Bosonic String in the Sense of Kraemmer and Rebhan Mitsuo Abe a; 1 and Noboru Nakanishi b; 2 a Research Institute for Mathematical
More informationBANA 7046 Data Mining I Lecture 6. Other Data Mining Algorithms 1
BANA 7046 Data Mining I Lecture 6. Other Data Mining Algorithms 1 Shaobo Li University of Cincinnati 1 Partially based on Hastie, et al. (2009) ESL, and James, et al. (2013) ISLR Data Mining I Lecture
More informationA NEW ALGORITHM FOR ATTITUDE-INDEPENDENT MAGNETOMETER CALIBRATION. Introduction. Roberto Alonso and Malcolm D. Shuster
Flight Mechanics/Estimation Theory Symposium, NASA Goddard Space Flight Center, Greenbelt, Maryland, May 79, 994, pp. 53527 A NEW ALGORITHM FOR ATTITUDE-INDEPENDENT MAGNETOMETER CALIBRATION Roberto Alonso
More information`First Come, First Served' can be unstable! Thomas I. Seidman. Department of Mathematics and Statistics. University of Maryland Baltimore County
revision2: 9/4/'93 `First Come, First Served' can be unstable! Thomas I. Seidman Department of Mathematics and Statistics University of Maryland Baltimore County Baltimore, MD 21228, USA e-mail: hseidman@math.umbc.edui
More informationSome Statistics. V. Lindberg. May 16, 2007
Some Statistics V. Lindberg May 16, 2007 1 Go here for full details An excellent reference written by physicists with sample programs available is Data Reduction and Error Analysis for the Physical Sciences,
More information