PHYS 213

Uncertainty Analysis

Types of uncertainty

We will consider two types of uncertainty that affect our measured or calculated values: random uncertainty and systematic uncertainty. Random uncertainties, often called statistical uncertainties, are produced by unknown and unpredictable variations in the experimental situation. Systematic uncertainties are associated with a particular instrument or experimental technique and skew all measurements in the same way. The difference between random and systematic uncertainties is illustrated in Figure 1 below.

Figure 1. Systematic and random uncertainties in shots at a target.

In lab, you will learn and use experimental methods to minimize random uncertainties. Some random uncertainty will persist in our data, but statistical methods give a reliable estimate of its magnitude. Systematic uncertainties, however, are harder to detect and evaluate. For example, to check the true accuracy of a meter stick you would need to compare it to the length of the path traveled by light in a vacuum during a time interval of 1/299,792,458 of a second (the definition of the meter adopted by the 17th General Conference on Weights and Measures in 1983), which would be difficult.

Estimating uncertainties in measurements

Almost all direct measurements involve reading a scale (on a ruler, a clock, a voltmeter, etc.) or a digital display (digital multimeter, stopwatch, digital thermometer). In most of the experiments you conduct in this lab you will be asked to estimate the uncertainties in the quantities you measure, such as a length or a time interval. When reading a ruler or meter scale you can usually interpolate between the divisions on the scale. However, you cannot estimate the value of a measurement very accurately between two divisions, beyond saying that the point to which you are measuring is either less than or more than halfway between the divisions.
We will use this general rule in the lab: The error in a measurement is taken to be half of the smallest division on the scale being used to make the measurement.
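As a quick arithmetic sketch of this rule (the helper name is mine, not part of the handout):

```python
def reading_uncertainty(smallest_division):
    """Half of the smallest scale division, per the rule above."""
    return smallest_division / 2

print(reading_uncertainty(1.0))  # meter stick with 1 mm divisions -> 0.5 mm
print(reading_uncertainty(0.1))  # a dial with 0.1 s divisions (hypothetical) -> 0.05 s
```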
For example, the smallest division on a meter stick is 1 mm. Therefore, the uncertainty in a measurement x made with the meter stick is δx = 0.5 mm. The smallest division on the wall clock is 1 second, so if we were using the clock to measure a time interval t, the uncertainty in our measurement would be δt = 0.5 s.

Instruments with digital displays often have limited precision. Unless a digital meter is defective, it should display only significant digits. In the case of a digital meter, the uncertainty is often specified by the manufacturer and will be given in the lab handout.

Significant Digits

The significant digits (also referred to as significant figures) of a measured or calculated quantity are the meaningful digits in it, that is, the digits that are meaningful to the precision of the number. The number of significant figures in your final answer should be consistent with the precision of the measurements used to obtain it. It doesn't make sense to give too many significant figures (e.g., 9.5876493278 ± 0.677789789) or too few (10 ± 0.1).

There are conventions to be learned about significant digits:

- A non-zero digit is significant (i.e., 1, 2, 3, 4, 5, 6, 7, 8, 9).
- Zeros between non-zero digits are significant (e.g., 109 has three significant digits).
- Placeholder zeros are NOT significant. For example, the zeros in 0.000034 and 134,000,000 are not significant.
- Zeros at the end of decimal numbers are significant. For example, 2.00 has three significant digits and 0.050 has two significant digits.
- Exact numbers have an infinite number of significant digits. For example, 2.54 cm per inch (exact) and π (3.14159265358979…) are infinitely significant.
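The conventions above can be encoded in a short sketch (a toy helper of my own, assuming the number arrives as a string written exactly as in the examples; trailing zeros in integers are treated as placeholders, per the convention):

```python
def sig_figs(s):
    """Count significant digits in a number written as a string,
    following the handout's conventions (leading zeros and trailing
    integer zeros are treated as non-significant placeholders)."""
    s = s.strip().lstrip("+-").replace(",", "")
    if "." in s:
        digits = s.replace(".", "").lstrip("0")  # leading zeros are placeholders
    else:
        digits = s.lstrip("0").rstrip("0")       # trailing integer zeros too
    return len(digits)

print(sig_figs("109"))          # 3
print(sig_figs("0.000034"))     # 2
print(sig_figs("134,000,000"))  # 3
print(sig_figs("2.00"))         # 3
print(sig_figs("0.050"))        # 2
```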
There are also rules for the number of significant digits to carry through calculations when adding and subtracting, and when multiplying and dividing:

Significant digits when adding and subtracting: Consider all of the numbers being added or subtracted and determine the smallest number of digits behind the decimal point. This is the number of digits there should be behind the decimal point in the result.

e.g., 89.332 + 21.1 = 110.432 → 110.4

There are three digits behind the decimal point in 89.332, and one digit behind the decimal point in 21.1. Therefore, there should be one digit behind the decimal point in the final answer.

Significant digits when multiplying and dividing: Consider all of the numbers being multiplied or divided and determine which one has the smallest number of significant digits. The result should have the same number of significant digits as the number with the fewest.

e.g., 82.20 × 4.5039 = 370.22058 → 370.2

There are four significant digits in 82.20 and five in 4.5039. Therefore, the answer should have four significant digits.

EXCEPTION TO THIS RULE:

e.g., 4.63 × 3.81 = 17.6403 → 17.64

There are three significant digits in both 4.63 and 3.81. However, when a result has a 1 as its leading digit, the answer should have one additional significant digit. In this case, four digits instead of three are appropriate.

When doing a mixed calculation, the rules must be applied during the calculation process, not just at the end.

Absolute and relative uncertainty

When we measure a length x1 and state the measurement and its associated uncertainty δx1 (e.g., 1.24 m ± 0.0005 m), the uncertainty is called the absolute uncertainty in the measurement:

absolute uncertainty = δx

Now say we measure another length x2 using a different instrument and find that length to be 2.20 m ± 0.005 m. Next we want to add the two lengths together. Each measurement has an absolute uncertainty associated with it, so how do the uncertainties combine when we add the measurements? The answer is a general rule:

When adding or subtracting two values, the absolute uncertainty in the result is the sum of the absolute uncertainties of the values.

In equation form:

(x1 ± δx1) + (x2 ± δx2) = x3 ± δx3
x3 = x1 + x2
δx3 = δx1 + δx2

For our example, we find that x3 ± δx3 = 3.44 m ± 0.0055 m.

What if instead of adding the two lengths x1 and x2 we multiplied them? In this case, we need to determine the relative uncertainty in each measurement:

relative uncertainty = (absolute uncertainty)/(measured value) = δx/x

Therefore, the relative uncertainties in the two measurements are

relative uncertainty in x1 = δx1/x1
relative uncertainty in x2 = δx2/x2
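A quick numerical sketch of the sum rule and the relative uncertainties above, using the two example lengths (the function names are mine, not from the handout):

```python
def add_measurements(x1, dx1, x2, dx2):
    """Sum two measurements; absolute uncertainties add (handout's simple rule)."""
    return x1 + x2, dx1 + dx2

def relative_uncertainty(x, dx):
    """Relative uncertainty: dx / x."""
    return dx / x

x3, dx3 = add_measurements(1.24, 0.0005, 2.20, 0.005)
print(x3, dx3)                             # ≈ 3.44 m, ≈ 0.0055 m
print(relative_uncertainty(2.20, 0.005))   # ≈ 0.0023
```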
Note that if we multiply the relative uncertainty by 100, we obtain the percent uncertainty. For example, the relative uncertainty in x2 is 0.005 m / 2.20 m = 0.0023, or a percent uncertainty of 0.23%. The absolute uncertainty of the instrument we are using is the same whether we measure 2 meters or 100 meters. For a measurement of 100 meters, the percent uncertainty decreases drastically, meaning the larger our measurement, the smaller its relative uncertainty: 0.005 m / 100 m = 0.00005, or 0.005%.

Having the relative uncertainties, we state another general rule:

To determine the uncertainty in the product (or quotient) of x1 and x2, add the relative uncertainties.

In equation form:

(x1 ± δx1) × (x2 ± δx2) = x3 ± δx3
x3 = x1 × x2
δx3/x3 = δx1/x1 + δx2/x2

An experimentally determined value and an accepted value can also be compared by calculating the percent difference between them:

percent difference = |x_theory − x_exp| / x_theory × 100%

where x_theory is the theoretical value and x_exp is the experimental value. Note that the percent difference is unit-less.

The mean, or average

If we have reduced our sources of systematic uncertainty to the point of near elimination, leaving only random errors in our measurements, our best estimate of the quantity x we are trying to determine is the mean, or average, x̄ of our multiple measurements of the quantity:

x̄ = (Σ x_i) / N

where N is the number of trials, or measurements. An estimate of the average uncertainty in the individual measurements is the standard deviation. The deviation of x_i from x̄, d_i = x_i − x̄, tells us how much our i-th measurement differs from the average x̄. If the deviations are small, our measurements are close together and presumably precise.

The standard deviation of x

To characterize the average uncertainty, we calculate the standard deviation σ_x in the following way:
σ_x = √[ (1/(N−1)) Σ d_i² ] = √[ (1/(N−1)) Σ (x_i − x̄)² ]

Sometimes the equation for the standard deviation has only N in the denominator rather than N−1. The choice of one or the other is a mathematical subtlety not discussed here.

The standard deviation of the mean

The uncertainty in our best estimate of x is the standard deviation of the mean σ_x̄, given by

σ_x̄ = σ_x / √N

If there are no noticeable systematic uncertainties, the random component of our uncertainty δx_ran is simply the standard deviation of the mean:

δx_ran = σ_x̄

Estimating the total error

If you have some way to estimate the systematic component of the uncertainty, δx_sys, in a measurement, the total uncertainty is the quadrature sum of δx_ran and δx_sys:

δx_total = √[ (δx_ran)² + (δx_sys)² ]

General Rules for Propagation of Uncertainties

In lab, you will often work with equations involving more complicated operations than simple addition and subtraction or multiplication and division. While it is true that addition and subtraction involve absolute uncertainties and multiplication and division involve relative uncertainties, we will use more complete methods of determining the uncertainty than those presented above, because these methods estimate the uncertainty in our values more accurately and, often, with greater ease. A full mathematical treatment of the propagation of uncertainties is beyond the scope of this handout, so we merely state the results here.

The quantity we are trying to determine is denoted Q and its associated uncertainty δQ. The ratio δQ/Q is known as the fractional, or relative, uncertainty. To distinguish δQ from δQ/Q, δQ is called the absolute uncertainty.
A, B, and C will denote the quantities that are measured directly (note that there may be more or fewer than three measurements; the same rules apply regardless of how many measurements are combined in the calculation). The measurements A, B, and C are considered to be independent and subject to random errors only.

Rule #1

If Q = cA, where c is a constant (or a quantity with negligible fractional error), then
δQ/Q = δA/A, or δQ = c δA

Rule #2

If Q = cA^m, where m is some power (positive, negative, integer, or fraction), then

δQ/Q = |m| δA/A, or δQ = c m A^(m−1) δA

If Q depends on two or more quantities, then the following rules are useful:

Rule #3

If Q = A + B or Q = A − B (or Q = A ± B ± C ± D ± …), then

δQ = √[ (δA)² + (δB)² + … ]

Rule #4

If Q = cA^m B^n (or Q = cA^m B^n C^p), where m, n, and p are powers (positive, negative, integer, or fraction) and c is a constant, then

δQ/Q = √[ (m δA/A)² + (n δB/B)² + … ]
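The statistical formulas and propagation rules above can be sketched in plain Python (a minimal sketch; the function names and the example data are mine, not from the handout):

```python
import math

def mean(xs):
    """Best estimate: the average of the measurements."""
    return sum(xs) / len(xs)

def std_dev(xs):
    """Standard deviation with the N-1 denominator used in the handout."""
    xbar = mean(xs)
    return math.sqrt(sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1))

def std_dev_of_mean(xs):
    """Standard deviation of the mean: sigma_x / sqrt(N)."""
    return std_dev(xs) / math.sqrt(len(xs))

def total_uncertainty(dx_ran, dx_sys):
    """Quadrature sum of random and systematic components."""
    return math.sqrt(dx_ran ** 2 + dx_sys ** 2)

def percent_difference(x_theory, x_exp):
    """|x_theory - x_exp| / x_theory * 100%."""
    return abs(x_theory - x_exp) / x_theory * 100.0

# Rule #2: Q = c*A**m  ->  dQ/Q = |m| * dA/A
def rel_unc_power(A, dA, m):
    return abs(m) * dA / A

# Rule #3: Q = A + B (or A - B)  ->  dQ = sqrt(dA**2 + dB**2 + ...)
def abs_unc_sum(*dxs):
    return math.sqrt(sum(d ** 2 for d in dxs))

# Rule #4: Q = c * A**m * B**n  ->  dQ/Q = sqrt((m*dA/A)**2 + (n*dB/B)**2 + ...)
def rel_unc_product(terms):
    """terms: iterable of (value, uncertainty, power) tuples."""
    return math.sqrt(sum((p * dx / x) ** 2 for x, dx, p in terms))

# Example: five repeated (made-up) length measurements in meters
data = [1.21, 1.24, 1.22, 1.25, 1.23]
print(mean(data), std_dev_of_mean(data))  # ≈ 1.23, ≈ 0.0071
```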