PARADISE VALLEY COMMUNITY COLLEGE
PHYSICS 111 - COLLEGE PHYSICS I LABORATORY

PURPOSE OF THE LABORATORY:
The laboratory exercises are designed to accomplish two objectives. First, the exercises illustrate the application of various physical principles, with the aim of gaining a quantitative understanding of the relationships among physical quantities. Second, you will learn some of the techniques used in making physical measurements and the standard techniques used in handling data. Any measurement has inherent limitations which carry over into any conclusions based upon it. Consequently, it is necessary to understand these limitations and to make every effort to minimize experimental errors.

PROCEDURES:
The experiments will generally be performed by groups of two students. In most cases it is difficult for one person to carry out the required measurements without assistance. In some cases limitations of equipment may require teams of four students on each setup. Each student is required to maintain a laboratory notebook and take data for each experiment in this notebook. The notebooks are available in the campus bookstore and
in other outlets. The desired format is quarter-inch cross-section (graph) paper in a bound notebook. Record data in ink, not in pencil. No erasures should be made in the lab notebook, and no pages should ever be removed. If an error is made in recording data, simply make a note beside the data and continue. A standard rule of thumb is that you should write enough information in your notebook that, two years from now, you could pick it up and determine exactly what you did.

EXPERIMENTAL ERRORS:
Knowing the accuracy with which a measurement is made is just as important as the value of the measurement itself. Experimental errors are often divided into three classes: personal errors, systematic errors, and random errors.

Personal errors are those due to mistakes made by the experimenter, such as failure to read an instrument correctly or incorrect handling of the data. Such errors can be eliminated by careful attention to the measurement process and the development of good habits in data taking.

Systematic errors can be due to faulty calibration of a measurement tool and will usually show up as a bias in one direction from the true value. They may also arise from failure to account properly for some other factor which has an effect on the measurement, such as friction in a mechanical system, stray resistance or capacitance in an electrical system, or other factors.
It is important to have a good understanding of the experimental procedure and the equipment in order to minimize systematic errors.

Random errors are those which arise from unpredictable events and are characterized by a statistical distribution. They may be due to random fluctuations in parameters which affect measurements, such as voltage variations or temperature fluctuations. Although random errors can never be eliminated, it is possible to take them into account by exploiting their statistical nature.

STATISTICAL TREATMENT OF EXPERIMENTAL ERRORS:
When a large number of measurements is made of a quantity subject only to random errors, statistical theory indicates that the arithmetic mean of the measurements is the best indicator of the true value. The average of a set of measurements (also called the mean value) is the sum of the values divided by the number of measurements. Average values are usually indicated by a bar over the variable's symbol or by angle brackets:

\bar{x} = \langle x \rangle = \frac{x_1 + x_2 + x_3 + \cdots + x_N}{N} = \frac{1}{N} \sum_{i=1}^{N} x_i,

where N is the total number of measurements and the symbol \sum_{i=1}^{N} x_i indicates the sum of all the x_i from i = 1 to N. The mean is the best estimate of the value of the quantity under study. It is important to note that, in contrast to random errors, systematic errors cannot be eliminated or even
lessened by taking the average of a number of measurements, because the systematic error is always of the same magnitude and sign in a given set of measurements.

There are two questions that can be asked and answered quantitatively about a set of N measurements: a) How accurate is each individual measurement? and b) How accurate is the average, or mean, of all the measurements?

Since the true value, x_t, cannot be found, there is no way to calculate the true error of each individual measurement. One can, however, calculate the deviation of each measurement from the mean value:

\Delta x_1 = x_1 - \langle x \rangle
\Delta x_2 = x_2 - \langle x \rangle
\Delta x_3 = x_3 - \langle x \rangle
\vdots
\Delta x_N = x_N - \langle x \rangle

The average of the absolute values of the deviations from the mean is called the average deviation from the mean, and it is denoted by the Greek letter δ (delta) before the variable, i.e., δx. The absolute value of a number is its value irrespective of sign, so this average is obtained by adding all deviations from the mean, each taken with a positive sign. The absolute value of a number is indicated
by two vertical lines: |x| represents the absolute value of x. Therefore,

\delta x = \langle |\Delta x| \rangle = \frac{\sum_{i=1}^{N} |\Delta x_i|}{N}.

The average deviation from the mean is a measure of the average error in each of the measurements of x.

Calculating the Precision of the Mean:
We understand intuitively that the average \langle x \rangle of N measurements of the quantity x is a better estimate of the true value than any of the individual measurements. This is the same as saying that the error of the mean is smaller than the mean error of a single measurement. What exactly is meant by the "error of the mean" or the "average deviation of the mean"? How can one take the average deviation of the mean when there is only one mean? The answer is illustrated by the following: imagine that an experimenter performs a series of N measurements M times and obtains M different average values \langle x \rangle_1, \langle x \rangle_2, \ldots, \langle x \rangle_M for the quantity x. Statistical theory shows that the deviation of these averages from their overall mean, \delta \langle x \rangle, is related to the deviation \delta x of each measurement from the mean of the N measurements by:

\delta \langle x \rangle = \frac{\delta x}{\sqrt{N}}.

Recall that the difference between an individual measurement and the average is called the deviation of that measured value. The root-mean-square value of the deviations is referred to as the standard deviation from the mean and is denoted by the Greek letter σ (sigma). The standard deviation is calculated from the formula:

\sigma = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \langle x \rangle)^2}{N - 1}}.

For a normal distribution (i.e., a gaussian or bell-shaped distribution), statistical theory indicates that 68.3% of a group of measurements should fall within plus or minus σ of the average value, and that 95.5% should fall within 2σ on either side of the average value. The standard deviation from the mean is thus a measure of the precision of the measurements. Since 99.7% of all measurements should fall within three standard deviations of the average value, any measurement whose deviation exceeds three standard deviations is highly suspect and should be discarded.

The standard deviation of the mean, σ_m, for a group of N measurements is given by:

\sigma_m = \frac{\sigma}{\sqrt{N}}.

Normally, this is the value that should be quoted as the error in the mean value of a set of measurements.

RELATIVE ERROR AND PERCENTAGE ERROR:
Let \Delta a be the error in a measurement whose value is a. Then \Delta a / a is the relative error of the measurement, and
100 \times (\Delta a / a) is the percentage error. These terms are useful in laboratory work.

UNCERTAINTY ESTIMATE FOR A RESULT INVOLVING MEASUREMENTS OF SEVERAL INDEPENDENT QUANTITIES:
a) If the desired result is the SUM or DIFFERENCE of two measurements, the absolute uncertainties ADD. Let \Delta x and \Delta y be the errors in x and y, respectively. For the SUM, we have

z + \Delta z = (x + \Delta x) + (y + \Delta y) = (x + y) + (\Delta x + \Delta y),

and the relative error is

\frac{\Delta x + \Delta y}{x + y}.

Since the signs of \Delta x and \Delta y can be opposite, adding the absolute values gives a pessimistic estimate of the uncertainty. If the errors have a normal or gaussian distribution and are independent, they combine in quadrature, i.e., as the square root of the sum of the squares:

\Delta z = \sqrt{(\Delta x)^2 + (\Delta y)^2}.

For the DIFFERENCE of two measurements, we obtain a relative error of

\frac{\Delta x + \Delta y}{x - y},

which becomes very large if x is nearly equal to y. Hence avoid, if possible, designing an experiment in which one measures two large quantities and takes their difference to obtain the desired quantity.

b) If the desired result involves MULTIPLYING (or DIVIDING) measured quantities, then the relative uncertainty of the result is the sum of the relative
errors in each of the measured quantities.

Proof: Let

z = \frac{x_1 x_2 x_3 \cdots}{y_1 y_2 y_3 \cdots},

and hence

\ln(z) = \ln(x_1) + \ln(x_2) + \cdots - \ln(y_1) - \ln(y_2) - \cdots

Then find the differential d(\ln(z)):

d(\ln(z)) = \frac{dz}{z} = \frac{dx_1}{x_1} + \frac{dx_2}{x_2} + \frac{dx_3}{x_3} + \cdots - \frac{dy_1}{y_1} - \frac{dy_2}{y_2} - \frac{dy_3}{y_3} - \cdots

Consider finite differences \Delta z, etc., and note that the most pessimistic case corresponds to adding the absolute value of each term, since the \Delta x_i and \Delta y_j can be of either sign. So,

\frac{\Delta z}{z} = \sum_i \left| \frac{\Delta x_i}{x_i} \right| + \sum_j \left| \frac{\Delta y_j}{y_j} \right|.

Again, if the measurement errors are independent and have a gaussian distribution, the relative errors add in quadrature:

\frac{\Delta z}{z} = \left[ \sum_i \left( \frac{\Delta x_i}{x_i} \right)^2 + \sum_j \left( \frac{\Delta y_j}{y_j} \right)^2 \right]^{1/2}.

c) Corollary: If the desired result is a power of the measured quantity, the relative error in the result is the relative error in the measured quantity multiplied by the power. For example, if z = x^n, then

\frac{\Delta z}{z} = n \frac{\Delta x}{x}.

The above results also follow in more general form. Let R = f(x, y, z) be the functional relationship between
three measurements and the desired result. Differentiating R,

dR = \left( \frac{\partial f}{\partial x} \right) dx + \left( \frac{\partial f}{\partial y} \right) dy + \left( \frac{\partial f}{\partial z} \right) dz,

gives the uncertainty in R when the uncertainties dx, dy, and dz are known.

For example, consider the density of a solid cylinder. The relation is

\rho = \frac{m}{\pi r^2 L},

where m = mass, r = radius, and L = length are the three measured quantities, and ρ (Greek letter rho) is the density. Then

\frac{\partial \rho}{\partial m} = \frac{1}{\pi r^2 L}; \qquad \frac{\partial \rho}{\partial r} = -\frac{2m}{\pi r^3 L}; \qquad \frac{\partial \rho}{\partial L} = -\frac{m}{\pi r^2 L^2},

and so

d\rho = \left( \frac{1}{\pi r^2 L} \right) dm - \left( \frac{2m}{\pi r^3 L} \right) dr - \left( \frac{m}{\pi r^2 L^2} \right) dL.

To get the relative error, divide by \rho = \frac{m}{\pi r^2 L}. The result, if one drops the negative signs, is

\frac{d\rho}{\rho} = \frac{dm}{m} + \frac{2\, dr}{r} + \frac{dL}{L},

and represents a worst possible combination of errors. For small increments:

\frac{\Delta \rho}{\rho} = \frac{\Delta m}{m} + \frac{2\, \Delta r}{r} + \frac{\Delta L}{L},

and

\Delta \rho = \rho \left[ \frac{\Delta m}{m} + \frac{2\, \Delta r}{r} + \frac{\Delta L}{L} \right].
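As a quick numerical check of the worst-case result for the cylinder, the partial-derivative form and the relative-error form can be compared directly. This is a minimal Python sketch; the measurement values and uncertainties below are invented for illustration:

```python
import math

# Hypothetical measurements of a solid cylinder (values invented).
m, dm = 125.0, 0.5     # mass in grams, with absolute uncertainty
r, dr = 1.25, 0.01     # radius in cm
L, dL = 10.40, 0.02    # length in cm

rho = m / (math.pi * r**2 * L)

# Worst-case combination from the partial derivatives (signs dropped):
drho = (1 / (math.pi * r**2 * L)) * dm \
     + (2 * m / (math.pi * r**3 * L)) * dr \
     + (m / (math.pi * r**2 * L**2)) * dL

# Same result via the relative-error form: drho/rho = dm/m + 2 dr/r + dL/L
drho_rel = rho * (dm / m + 2 * dr / r + dL / L)
```

The two expressions agree, as the derivation requires, since the second is just the first divided and re-multiplied by ρ.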
Again, if the errors have a normal distribution, then

\frac{\Delta \rho}{\rho} = \left[ \left( \frac{\Delta m}{m} \right)^2 + \left( \frac{2\, \Delta r}{r} \right)^2 + \left( \frac{\Delta L}{L} \right)^2 \right]^{1/2}.
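The full chain described in this introduction, from the statistics of repeated measurements through propagation in quadrature, can be sketched in Python. All measurement values below are invented for illustration:

```python
import math

def mean_and_sigma(values):
    """Return the mean, the standard deviation (N - 1 in the
    denominator), and the standard deviation of the mean."""
    n = len(values)
    avg = sum(values) / n
    sigma = math.sqrt(sum((v - avg) ** 2 for v in values) / (n - 1))
    return avg, sigma, sigma / math.sqrt(n)

# Hypothetical repeated measurements of the cylinder's radius (cm):
radii = [1.24, 1.26, 1.25, 1.25, 1.24, 1.26]
r, sigma_r, dr = mean_and_sigma(radii)  # dr is sigma_m for the radius

# Single measurements with instrument uncertainties (invented):
m, dm = 125.0, 0.5     # grams
L, dL = 10.40, 0.02    # cm

rho = m / (math.pi * r**2 * L)

# Quadrature combination for independent gaussian errors:
rel = math.sqrt((dm / m)**2 + (2 * dr / r)**2 + (dL / L)**2)
drho = rho * rel
```

Note that the quadrature estimate `rel` is always smaller than the worst-case sum of the same relative errors, which is why it is the appropriate choice for independent, normally distributed errors.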