MEASUREMENTS AND ERRORS (OR EXPERIMENTAL UNCERTAINTIES)


Determination of Uncertainties in Measured Quantities

Physics is not only a theoretical but an experimental science; it depends on measured values of physical constants (e.g., the speed of light), and its laws are found or tested by experimental measurements. No measurement can be made with perfect exactness, and therefore providing an estimate of the reliability is as important as stating the numerical value and dimensions of a result. The degree of reliability, or of uncertainty, is shown by quoting a range, e.g., stating that a certain mass is 7.33 ± 0.04 gm, which implies a known probability (usually around 2/3 is selected) that the true value lies between 7.29 and 7.37 gm.

The more significance we attach to knowing a certain number, the more important is the conceivable existence of error in it. The harder we try to obtain a good measurement, the more evident is the existence of some error or uncertainty, as we push equipment to the limit and estimate readings between scale marks. The assumption underlying this Section is that we want the best possible measured value, and therefore we try hard to minimize errors and their effects.

Errors (uncertainties) in measurements are of two kinds. The first category is that of systematic errors, which tend individually to throw the results off in one direction; e.g., perhaps all readings on a voltmeter are 3% too high, or perhaps an instrument is used at a temperature of 25°C whereas its calibration was made at 20°C. Another source could be unconscious personal bias toward overestimating fractions between scale marks or toward slow reactions in timing. In principle, systematic errors can be detected and eliminated or corrected by calibrating equipment, comparing two or more identical pieces of apparatus, and so on. There is time for very little of this in the instructional laboratory. Good research experiments are marked by thoroughness and skill in removing the effects of systematic errors, and yet there are many historical examples of systematic errors remaining undetected for years. The ultimate safeguard is agreement in results from various independent measurements, made by different people with different equipment and methods.

The other category is that of random errors; these, no doubt, have physical causes, but causes beyond our ability to detect and remove. In contrast to systematic errors, random errors do not make the result too high or too low but blur its value and reduce its reliability. There are two subdivisions. (1) The quantity being measured has a definite value, but repeated readings disagree somewhat for one or more of several reasons, such as:

(a) slight fluctuations due to undetectable changes in temperature, surrounding magnetic field, and so on; (b) fluctuations in the performance of equipment; (c) personal errors (random, not biased) in estimating between scale marks. (2) The quantity being measured varies randomly, showing statistical fluctuations. An example is counting the number of radioactive-decay events in equal successive intervals of time. Each reading gives a perfectly definite integer, but not always the same one, because the original phenomenon is random.

The effect of random errors upon the reliability of a result can be minimized only by use of probability theory. Given subdivision (1) in the preceding paragraph, very often a large number of successive readings of the same quantity show a typical behavior. Many of the readings are close together, forming a peak of grouped values; smaller numbers of readings fall approximately equally on each side of the first group; still smaller numbers fall farther away on each side, and so on; beyond a certain distance on each side, few or no readings are found. The results can be displayed as a histogram, with blocks whose heights show the number of readings at each value of the quantity being measured. Figure 1 is a hypothetical case, with 16 readings (of the same quantity) ranging from 7.20 to 7.27.

[Fig. 1: histogram of the 16 readings, number of readings vs. reading, with the average value of all readings marked and the normal error curve sketched as a dashed line.]

Given a very large number of readings, the histogram usually approaches the normal-error distribution (or Gaussian distribution), sketched in dashed line in the Figure, a curve derived from probability theory.
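Such a histogram is easy to generate by machine. Here is a minimal sketch, assuming Python with numpy and matplotlib (none of this is part of the original manual; the simulated readings are purely illustrative):

import numpy as np
import matplotlib.pyplot as plt

# Simulate 16 repeated readings of a single quantity that scatter randomly
# about a "true" value (the numbers here are invented for illustration).
rng = np.random.default_rng(0)
readings = rng.normal(loc=7.23, scale=0.02, size=16)

# Histogram: blocks whose heights show the number of readings at each value
counts, edges, _ = plt.hist(readings, bins=8, edgecolor="black", alpha=0.6)

# Overlay a normal-error (Gaussian) curve with the sample mean and spread
mean, sigma = readings.mean(), readings.std(ddof=1)
x = np.linspace(readings.min() - 0.02, readings.max() + 0.02, 200)
pdf = np.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
plt.plot(x, pdf * counts.max() / pdf.max(), "--", label="normal error curve")

plt.xlabel("Reading")
plt.ylabel("Number of readings")
plt.legend()
plt.show()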

All rules for dealing with measurements in subdivision (1) are based upon properties of the normal-error curve (which has the form e^(−bx²), where x is a deviation* and b is a positive quantity that varies from one case to another). These procedures are followed even if the number of readings is not large, because they offer the best odds for success.

Rule 1: The best single value to report is the arithmetical average or mean of all readings of a quantity made during the experiment (this ideally corresponds to the most probable value, at the top of the error curve).

Rule 2: The ± designation of reliability, or limits of error, is either the average deviation of the mean (A.D.) or the standard deviation of the mean (S.D.), computed as shown below. If the readings fit the normal-error curve very closely, A.D. and S.D. are mutually proportional and either can be used, with slightly different odds on the true value lying in the specified ranges. In general, S.D. is preferable.

The way of finding A.D. and S.D. is best shown by an example. Call the readings R1, R2, etc., and let N (equal to 4 here) be the number of readings.

     Reading, R    Deviation d = R − R̄     |d|        d²
 1.    3.12           −0.030              0.030     0.0009
 2.    3.20           +0.050              0.050     0.0025
 3.    3.17           +0.020              0.020     0.0004
 4.    3.11           −0.040              0.040     0.0016
      -------                            -------    -------
      12.60/4                            0.140/4    0.0054/3

R̄ = 3.150  (mean, and most probable value)
a.d. = 0.035  (average deviation of individual readings)
s.d. = √(0.0054/3) = √0.0018 ≈ 0.04  (standard deviation of individual readings)
A.D. = a.d./√N = 0.018  (Average Deviation of the mean)
S.D. = s.d./√N ≈ 0.02  (Standard Deviation of the mean)

Best value for R:  R = R̄ ± ΔR = 3.150 ± 0.02  (using S.D.)

Dividing by N − 1 = 3 to get s.d., instead of by N to get a.d., is theoretically correct. For large N, it scarcely matters; as N → 1, the uncertainty properly approaches an indeterminately large amount.

* The term deviation here means the difference between a single reading of a quantity and the average value of several readings of that quantity.
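The same arithmetic is easy to check by machine. A minimal sketch, assuming Python (not part of the original manual):

readings = [3.12, 3.20, 3.17, 3.11]
N = len(readings)

mean = sum(readings) / N                          # R-bar = 3.150
devs = [r - mean for r in readings]               # deviations d
ad = sum(abs(d) for d in devs) / N                # a.d. = 0.035
sd = (sum(d**2 for d in devs) / (N - 1)) ** 0.5   # s.d. ≈ 0.042
AD = ad / N**0.5                                  # A.D. ≈ 0.018
SD = sd / N**0.5                                  # S.D. ≈ 0.021, quoted as 0.02

print(f"R = {mean:.3f} ± {SD:.2f}  (using S.D.)")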

The a.d. is the average discrepancy, without regard to sign, between any one reading R and the mean R̄; we expect R̄ to become more reliable as N increases. Use of the A.D. or S.D. (average, or standard, deviation of the mean), computed according to theory as a.d./√N or s.d./√N, takes this probable increase in reliability into account. If the readings closely fit a normal-error curve, the probability is about 0.68 (odds just over 2 to 1) that the true value lies within the range R̄ − S.D. to R̄ + S.D. If the A.D. is used, the probability is about 0.58. (All of this disregards the separate possibility of a systematic error.) You must state whether you are using A.D. or S.D.

The a.d., s.d., A.D., and S.D. are usually rounded off according to Rule 3: if a.d., etc., starts with 1 or 2 (disregarding zeros used to locate the decimal point), quote a.d., etc., to two figures; if it starts with 3, 4, ..., quote only that one figure. Thus 0.026, for example, is kept; 0.031 is rounded off to 0.03, and 0.037 to 0.04. The example above shows an occasional consequence of this rule: R̄ may sometimes be quoted to one more significant figure than is attained in any single reading; taking a number of readings gives some information about the next figure. Conceivably all N readings might be identical, but this is a most unusual coincidence, which would cease to hold if many more readings were taken; or else the readings have not been pushed to the limit of estimating fractions between scale marks, or the estimating has not been impartial.

The second kind of random errors, statistical fluctuations, can be described more briefly. If in our example of radioactive decays the count in each time interval is very large, it will usually follow a normal-error distribution quite well, and the preceding rules hold. The much more usual case is one of small counts; then a Poisson distribution is to be expected. Its s.d. is given by s.d. = √n̄, where n̄ is the average of all counts. The best value of n is then given by Rule 4: n = n̄ ± √(n̄/N), for statistical fluctuations. (A short numerical sketch of this rule appears at the end of this subsection.)

Occasionally one reading in a series of otherwise fairly well grouped readings may seem quite out of line, perhaps because of instrument failure, or incorrect reading or recording. Arbitrarily correcting such a reading is not legitimate, and yet retaining it can produce unreasonably large a.d., s.d., etc. There are several criteria, of varying complexity, for rejecting a bad reading R_B. A simple one is to calculate R̄ and a.d. with the suspicious R_B omitted; if the deviation of R_B from this new R̄ is then much larger than the a.d. (several times as large), rejecting R_B is allowable.

Fairly frequently, it is not practical to make multiple measurements of a parameter to determine the mean and S.D. or A.D. In that case, the experimenter must use his or her best judgement to state a value of the uncertainty of a single measurement, on the basis of instrumental and human limitations.
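As promised above, a brief numerical sketch of Rule 4, assuming Python (the counts are invented for illustration):

# Suppose 5 equal time intervals gave these radioactive-decay counts
counts = [21, 18, 25, 19, 22]
N = len(counts)

n_bar = sum(counts) / N        # average count, n-bar = 21.0
sd = n_bar ** 0.5              # s.d. of a single count = sqrt(n-bar) ≈ 4.6 (Poisson)
SD = (n_bar / N) ** 0.5        # uncertainty of the mean, sqrt(n-bar/N) ≈ 2.0

print(f"n = {n_bar:.1f} ± {SD:.1f}  (statistical fluctuations, Rule 4)")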

In every case, the range of uncertainty is expressed as the central value C of the measured quantity, plus or minus ΔC, i.e., C ± ΔC. The experimenter must specify whether ΔC is S.D., A.D., or the estimated uncertainty in a single measurement.

Propagation of Errors

Experimental determinations of a specific quantity, presented as C ± ΔC, are usually combined with measurements of other parameters to calculate a result. This result will also be uncertain by some amount, and so we must consider how each individual error propagates through the calculation and combines with the other errors to affect the final result. Probability theory leads to certain rules, which we quote here.

(a) Addition or Subtraction: When two or more different independent measured quantities C₁ ± ΔC₁, C₂ ± ΔC₂ are to be added or subtracted, in any combination, the uncertainty ΔF of the result F = C₁ ± C₂ is given by

Rule 5:  ΔF = [(ΔC₁)² + (ΔC₂)²]^(1/2)

This rule is plausible. If all measurements are independent, it is overly optimistic to expect cancellation of the several errors; on the other hand, adding all of them is unduly pessimistic. Hence the addition "at right angles" is a compromise, and the correct one according to probability theory.

(b) Multiplication or Division: In this case, we first compute the fractional uncertainty, ΔCᵢ/Cᵢ, for each of the separate factors. Then the fractional uncertainties combine exactly as in Rule 5, to give the fractional uncertainty ΔF/F in the calculated result F = C₁C₂ or C₁/C₂:

Rule 6:  ΔF/F = [(ΔC₁/C₁)² + (ΔC₂/C₂)²]^(1/2)

(c) Powers or Roots: Raising a single number C to a power m (integer or not) is closely akin to multiplication and division (consider the processes by use of logarithms). Therefore relative uncertainties are used, but each of these comes from the same C and m, and therefore they must all be added directly, yielding for F = C^m

Rule 7:  ΔF/F = m (ΔC/C)

(d) Other Calculations: In general, the proper rule for combining uncertainties can be found by use of calculus. Suppose, e.g., our measurements directly yield a mean angle θ and we want sin θ and its uncertainty. We need to know the rate at which sin θ varies with θ, which is given by the derivative. Since d(sin θ)/dθ = cos θ, we have, approximately, for the uncertainty Δθ: Δ(sin θ) = (cos θ)Δθ. Hence the value to quote for sin θ is sin θ ± (cos θ)(Δθ). Note: see the Appendix for more examples.

Remember: in every case, the experimenter must specify the source of each experimental Δ, whether it is S.D., A.D., or an estimate.
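A minimal sketch of Rules 5-7 and the derivative method, assuming Python (the numerical values are invented purely for illustration):

import math

# Two independent measured quantities, each as (central value, uncertainty)
C1, dC1 = 12.3, 0.2
C2, dC2 = 4.56, 0.05

# Rule 5: addition or subtraction -- absolute uncertainties add in quadrature
F_sum = C1 + C2
dF_sum = math.sqrt(dC1**2 + dC2**2)

# Rule 6: multiplication or division -- fractional uncertainties add in quadrature
F_prod = C1 * C2
dF_prod = F_prod * math.sqrt((dC1 / C1)**2 + (dC2 / C2)**2)

# Rule 7: power F = C^m -- fractional uncertainty is m times that of C
m = 3
F_pow = C1**m
dF_pow = F_pow * m * (dC1 / C1)

# (d) General case via the derivative: F = sin(theta), dF = cos(theta) * dtheta
theta, dtheta = math.radians(30.0), math.radians(0.5)   # dtheta must be in radians
F_sin = math.sin(theta)
dF_sin = math.cos(theta) * dtheta

print(f"sum:     {F_sum:.2f} ± {dF_sum:.2f}")
print(f"product: {F_prod:.1f} ± {dF_prod:.1f}")
print(f"power:   {F_pow:.0f} ± {dF_pow:.0f}")
print(f"sin θ:   {F_sin:.3f} ± {dF_sin:.3f}")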

Significant Figures

When taking and recording data, as in the imagined example above, the usual rule is that the reading is pushed to an estimate between the closest marks on the scale. If the readings 3.12, 3.20, 3.17, and 3.11 were made, say, in centimeters on a scale showing millimeters but no smaller divisions, the last figures recorded (2, 0, 7, 1) represent careful estimates between millimeter marks. All three figures are then said to be significant (note that the 0 in 3.20 is written because it has a meaning). If the numbers had been, for example, 0.00312, etc., the zeros before the 3 are not significant. They serve only to locate the decimal point and would be changed by such a trivial alteration as deciding to use a unit of measurement 10 times as big or as small.

As another example, consider the diameter of the earth, 8000 miles: are the three zeros significant? We cannot tell from the statement alone, without looking up more precise values. As a matter of fact, the zeros are not significant here; they merely locate the decimal point. Scientific notation is preferred, to remove such ambiguity. The number 0.00312 is, in this notation, written as 3.12 × 10⁻³ (3.12 divided by 10³); 8000 would be written as 8 × 10³. Only the figures written out in front are significant, and the powers of ten merely locate the decimal point. These rules allow us to make clear what figures are significant, in any small, medium, or large number. These procedures are, however, not usually used when the exponent of 10 is −1, 0, or +1. For example, the significant figures are obvious in 0.3352 (next paragraph), without changing it to 3.352 × 10⁻¹.

Calculations using significant figures must now be described. Suppose we measure two different lengths, with different instruments, getting 4.5 cm and 0.3352 cm, with every figure except the zero before the decimal point being significant (this zero, incidentally, is written just to aid the eye in noticing the decimal point). What is the sum of these two lengths? Simple arithmetic suggests 4.8352, but the indicated sum

  4.5xxx
+ 0.3352
  4.8yyy

shows that this is not reliable. The three x's after 4.5 represent unknown values, and the three y's in the sum are therefore unknown. The rule for addition or subtraction is simple: write the numbers with equal powers of ten in scientific notation and then throw away any figures (such as the rightmost 3, 5, and 2 in 0.3352) that have unknowns in their columns. In so doing, round off the last retained figure: 0.33 becomes 0.3, but 0.35, 0.36, etc., would become 0.4. The sum is then 4.5 + 0.3 = 4.8 cm.

Suppose instead we want to multiply these two lengths, to get an area. Simple arithmetic gives 1.5084, but it looks suspicious to get five figures with only two in the factor 4.5. By writing 4.5xx and multiplying it by 0.3352, you will quickly find the trouble: x's will come into most of the figures in 1.5084. Again the rule is simple: in multiplication or division, find the factor with the smallest number of significant figures (4.5, here), round off the other to the same number of figures (0.34), and carry out the calculation (to get, here, 1.53). Finally, round off the answer similarly. There is another part of the rule: when the first significant figure is 1 or 2, as in the answer here, keep one more significant figure than when the first figure is 3, 4, .... Hence our answer is 1.53, not 1.5. It is understood that the 3, the last figure, is an estimate, as is the 5 in 4.5. Taking a square or other root is a form of division, and so similar procedures are followed.

Electronic calculators know nothing about significant figures; they will give you results with digits filling out their displays. The user must beware and round off appropriately. Even though they are not measured numbers, we can round off such mathematical quantities as π, or trigonometric functions, in exactly the same way in specific contexts. Then π, for example, may be properly written as 3, 3.1, 3.14, 3.142, etc., depending on the data and calculations with which it is being used.

Following these rules may seem strange and bothersome at first, but there are good reasons for doing so. If the rules are not followed, the result is distorted and tells a scientific lie: the answer to 4.5 × 0.3352 is not 1.5084 when the factors are imperfectly measured numbers. The advanced student has to learn more complicated procedures. For example, in a very long chain of calculations, one or two extra, non-significant figures are carried to reduce the error caused by a great many roundings-off. However, what we have given here is quite accurate enough for elementary work.
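A small sketch that automates this rounding, assuming Python (the helper names round_sig and leading_digit are ours, not from the manual):

import math

def round_sig(x, n):
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - math.floor(math.log10(abs(x))))

def leading_digit(x):
    """First significant digit of x (e.g. 1 for 1.53, 3 for 0.34)."""
    return int(f"{abs(x):e}"[0])

a, b = 4.5, 0.3352           # measured lengths: 2 and 4 significant figures

# Multiplication rule: round the more precise factor to match the less precise
# one, then keep one extra figure if the result starts with 1 or 2.
b_rounded = round_sig(b, 2)  # 0.34
raw = a * b_rounded          # 1.53
figs = 3 if leading_digit(raw) in (1, 2) else 2
print(round_sig(raw, figs))  # -> 1.53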


Per Cent Error, Discrepancies and Deviations

Suppose that in the laboratory you have determined the acceleration of a freely falling body, using significant figures correctly, as 9.3 m/sec². The accepted value, to the same number of figures, is 9.8 m/sec². Your deviation with respect to the accepted value is 0.5 m/sec². It is often better to express this in per cent: (0.5/9.8)(100) = 5%. (Note that the known value, 9.8, is used in the denominator.) A value of 10.3 m/sec² would also represent a 5% error, in the opposite direction. Per cent errors are ordinarily not written to more than two significant figures, and often to only one if that figure exceeds 2.

Suppose instead that you do not know the true or accepted value, but by two different methods you have found 9.3 and 9.7 for some quantity. There is a discrepancy of 0.4 between the two values. The per cent discrepancy is (0.4/9.5)(100), or 4%. Note that the average of the two numbers appears in the denominator.

Experimental uncertainties can also be given in per cent. Going back to our earlier example of 3.150 ± 0.02, we can replace 0.02 by [0.02/3.15](100), or 0.6%. If all the numbers in the earlier example, 3.12, 3.20, etc., had occurred with the same factor of 10, such as 10⁴, we could equally well write the result as (3.150 ± 0.02) × 10⁴, (3.150 ± 0.6%) × 10⁴, 3.150 × 10⁴ ± 0.02 × 10⁴, or 3.150 × 10⁴ ± 0.6%.

Graphs

Graphs are used either to display results compactly and strikingly, or for certain purposes of calculation. A display graph might range from 2 or 3 inches square to a full page, depending on the amount of detail to be shown. Often it is effective to show several related display graphs in a single Figure. A graph for calculations should be at least a half page in size. The independent variable is plotted on the x or abscissa axis, the dependent variable on the y or ordinate axis. At least tenth-inch cross-section paper should be used for all graphs. For more precise work, a paper of good quality having highly accurate spacings should be used (for example, K & E 10 × 10 to the ½ in., or millimeter grid paper). Choose scales that are convenient (1, 2, 5, 10, etc., units per scale division); use most of each axis (the zero point need not always appear). Properly label both axes, as to data plotted and units used. For example, in plotting the speed of a body vs. time, the axes should be labeled as in Figure 2, and of course a numbered scale should appear on each axis. A title should also be given to the whole graph. Hard, sharp pencils or drawing instruments are to be used, not crayons, colored pencils, fountain pens, etc.

Show plotted points as small circles, or as crosses with horizontal and vertical arms. Any statistical uncertainty (S.D., etc.) in the data can be indicated quantitatively by the size of the crosses. Either a straight line or a smooth curve should be drawn among the points; it need not pass through any one of them, but should leave the points scattered fairly equally about it. Mathematical curve fitting (e.g., least-squares fitting) is used in professional work, but we do not require it in the introductory labs. In any case, it is good to develop an eye for the data.

[Fig. 2: a straight line drawn through plotted data, with two well-separated points (x₀, y₀) and (x₁, y₁) marked and the rise (y₁ − y₀) and run (x₁ − x₀) indicated.]

When a straight line satisfactorily represents the trend of the plotted points, an equation can be derived easily (see Fig. 2). The slope m of the plotted line is m = (y₁ − y₀)/(x₁ − x₀); x₀, x₁ and y₀, y₁ must be stated in their proper units, not in terms of distances on the graph paper. Also, the points (x₀, y₀) and (x₁, y₁) should be as far apart as possible, for the most reliable results. Then the equation of the straight line in Figure 2 is (y − y₀) = m(x − x₀), which is easily converted to y = mx + b.
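Although least-squares fitting is not required in the introductory labs, a computer produces the slope and intercept quickly. A minimal sketch, assuming Python with numpy (the speed-vs-time numbers are invented):

import numpy as np

# Speed of a body vs. time (invented data for illustration)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # time, sec
v = np.array([0.9, 3.1, 5.0, 6.8, 9.1])      # speed, m/sec

# Least-squares straight line v = m*t + b
m, b = np.polyfit(t, v, 1)
print(f"slope m = {m:.2f} m/sec^2, intercept b = {b:.2f} m/sec")

# The same slope "by eye" from two well-separated points (x0, y0) and (x1, y1)
m_two_point = (v[-1] - v[0]) / (t[-1] - t[0])
print(f"two-point slope = {m_two_point:.2f} m/sec^2")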

Ordinarily, plots other than straight lines cannot be judged accurately by eye. However, by replotting various functions of y and x it is often possible to get a straight line, and thence an equation representing the data. An example is the log-log plot discussed next.

Graphical Analysis by a Log-Log Plot

A very useful way of analyzing some laboratory data is given by the use of log-log graph paper. The grid of log-log paper is not linear; instead, the spacing along each axis is proportional to the logarithm of the number. As a result, certain kinds of data that would form curved lines if plotted on linear paper form straight lines on log-log paper. Since straight-line graphs are relatively simple to interpret in comparing data and theory, log-log plots can be the best plots to make, at least at first, while the data are being obtained in the laboratory.

Consider the function

y = Ax²     (1)

where A is a constant; this would obviously not present a straight-line graph if one were to use linear paper. If we rewrite the equation in the equivalent form

log(y) = log(A) + 2 log(x)     (2)

we see that, provided one were willing to take the trouble to calculate all the necessary logarithms, the data could be studied on a straight-line plot, where the variables were not x, y, and A, but rather log(x), log(y), and log(A). The line would have a slope of 2. This is obvious if we let log(x) = u, log(y) = v, and log(A) = c, and rewrite equation (2) as

v = c + 2u     (3)

But rather than calculating all the logarithms, if we simply plot the original equation (1) on log-log paper, the same straight-line result is obtained.

Another useful feature of log-log plotting is the way that the slope of the line is related to the exponent of the independent variable (x in the example). If it were not known that the variable y depended quadratically on x, the log-log plot would have demonstrated this quite effectively, by presenting a straight line with a slope of 2. In making estimates of the slope for this purpose, be sure that you do not simply read off differences in y and differences in x from the graph axes, as you can do when plotting on linear paper. Rather, you must use a separate measure (for example, a ruler) and divide a measured length in y by a measured length in x to obtain the slope. Convince yourself that this procedure is necessary by plotting a curved function, such as the example given, and then deducing the power dependence from the resulting graph.

Finding the value of a power in this way is not restricted to positive integer powers, of course. The same procedure can be applied to find fractional and negative powers. One drawback of log-log plotting is the restriction that the numbers plotted must always be greater than zero, since logarithms are not defined for numbers less than or equal to zero. So often, at least in elementary physics, do we deal with simple integer and fractional powers of either sign, that you may find the accompanying graphs useful. They can be used in the laboratory, for example, to match against a log-log plot of data, and so deduce a simple power-law dependence.
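The same analysis can also be done numerically. A minimal sketch, assuming Python with numpy (the data are generated artificially from y = 3x²):

import numpy as np

# Artificial data following y = A * x**n with A = 3 and n = 2
x = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 10.0])
y = 3.0 * x**2

# A straight-line fit to log(y) vs. log(x) gives slope = n and intercept = log(A)
n, logA = np.polyfit(np.log10(x), np.log10(y), 1)
print(f"power n = {n:.2f}, constant A = {10**logA:.2f}")   # -> n = 2.00, A = 3.00

# To see the straight line itself, plot the data on log-log axes, e.g.:
# import matplotlib.pyplot as plt
# plt.loglog(x, y, "o"); plt.xlabel("x"); plt.ylabel("y"); plt.show()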

[Accompanying graph (Brown University PHYS 0160): y = xⁿ plotted on log-log axes, with x and y running from 1 to 100, for n = 4, 3, 2, 1, 1/2, 1/3, and 1/4.]


[Accompanying graph (Brown University PHYS 0160): y = x⁻ⁿ plotted on log-log axes, with x from 1 to 100 and y from 0.01 to 1, for n = 1/4, 1/3, 1/2, 1, 2, 3, and 4.]