Treatment of Error in Experimental Measurements

All measurements contain error. An experiment is truly incomplete without an evaluation of the amount of error in the results. In this course, you will learn to use some common tools for analyzing experimental uncertainties. While the methods you will learn here are by no means the most thorough, they are sufficient for many applications, and they form the basis of almost all advanced methods.

A. Some Basic Language

Experimental errors can be generally classified as one of two types:

Random (indeterminate): These errors occur with a random magnitude and sign each time the experiment is executed. The appearance of random error in every measurement is fundamentally unavoidable. However, efforts can be made to minimize contributions from random errors.

Systematic (determinate): These errors occur with the same sign and approximately the same magnitude each time the experiment is executed. Systematic errors are called determinate errors because the experimenter should, in principle, be able to determine the source of these errors and therefore avoid them. In practice, the source(s) of systematic errors are sometimes difficult to identify.

Human errors, or mistakes, are a third type of error, which can be systematic or non-systematic, mathematical or procedural. Since errors due to mistakes tend to be larger than typical systematic or random errors, they are best spotted by comparing the values in a set of measurements; mistakes often stand out against the other, more accurate values (this is one reason why single measurements of a property should be avoided). Students learning to write scientifically often cite "human error" as the principal source of error in their experiment. While it may be true that you are the principal source of error, such a vague statement is not acceptable. In most cases, you should be proficient enough in the laboratory and careful enough in your calculations that mistakes do not become the major contributor to the error. However, if an accident occurs that does significantly influence the quality of your results, you should describe it and its effects on the results in detail. More often, the major source of error is random error, which can be quantified statistically (see below).

The quality of a particular measurement or set of measurements can be generally described using the terms precision and accuracy. A precise measurement is one that is highly reproducible, and thus has little associated random error. (Of course, we can only know that a measurement is reproducible if several measurements are obtained. However, the comparison is often made against some standard that was obtained from multiple measurements elsewhere, such as by the manufacturer of an instrument.) An accurate measurement is one whose value is close to the true value, and thus has little or no systematic error. Accuracy can apply to a single measurement or to the average of a set of measurements. Note that precision and accuracy are independent expressions of the quality of the measurement(s). For example, a set of imprecise measurements may still be quite accurate. Four possible combinations of precision and accuracy are illustrated in Figure 1.

Figure 1: Precision versus accuracy. The center of the target represents the true value of a property, and the bullets represent the measured values. (a) A set of measurements in which all measurements are both accurate and precise. (b) A set of imprecise measurements; the average is accurate, although each individual measurement is inaccurate. (c) A set of precise, but inaccurate, measurements. (d) A set of imprecise, inaccurate measurements.

B. Statistical Analysis of Random Errors

A good assessment of the error in any experiment ultimately comes down to judgement on the part of the experimentalist. In a well-designed and executed experiment, there will be very little systematic error, so the uncertainty in the measurement can be explained by assuming the errors arise in a random way. This is fortunate, because random errors can be analyzed statistically, while systematic errors cannot. In this course, we will make the assumption that the experiments are well designed and executed so that we can use these methods. However, you must always check this assumption after you have obtained your final results to be sure that it is valid (see Section D). Statistical methods also exist for identifying mistakes ("outliers"), although we will not cover them in this course. See SGN and/or Skoog for a good discussion of statistical error analyses.

1. An Infinite Number of Measurements: The Statistical Starting Point

Clearly the goal of making a measurement is to obtain the true value of the property that we are trying to measure. However, since all measurements have error, we are unlikely to measure the true value exactly. The more measurements we make, the more likely it is that we will obtain the correct value for the property of interest. While we will never actually work with an infinite number of measurements, it is useful to examine the errors associated with an infinite sample set, because the treatments for the more realistic finite sample sets are derived from this limiting case.

Regardless of the sample size, the probability of obtaining any given value as a result of making a measurement can be described by some distribution function. For an infinite sample set, the distribution of measured values (x) about the true value (µ) is described by a normalized Gaussian, or Normal, function:

    f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / 2\sigma^2}        (1)

The group of constants in front of the exponential term serves to normalize the function (a function is normalized if the integrated area under the curve is equal to one). This function is sometimes also called a bell function because the curve is bell-shaped (Figure 2). The width of the bell is determined by the constant σ, which is called the standard deviation.

For infinite sample sets, the standard deviation is exactly equal to the root-mean-square error,

    \sigma = \lim_{N\to\infty} \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2}        (2)

where N is the number of measurements in the set (infinity, in this case). The term root-mean-square (or RMS) arises often in statistics, and it means exactly that, in reverse order: the error (x_i − µ) is first squared, then averaged (to get the mean), and finally square-rooted. The RMS error is therefore a measure of the average magnitude of random error in the experiment. (One place you have almost certainly seen the term RMS in chemistry is in the Kinetic Molecular Theory of gases, where the RMS velocity of gas molecules dictates their average kinetic energy, among other things.)

Figure 2: The Normal (Gaussian) distribution function for an infinite sample set. The shaded region, between µ − σ and µ + σ, represents the 68.3% confidence interval.

If we have made an infinite number of measurements and if the error is indeed randomly distributed, then the mean of the distribution is exactly equal to the true value:

    \mu = \lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} x_i        (3)

This simple result allows us to evaluate the quality of any single measurement by identifying the probability that it lies within a certain range of the mean (true value). That range is usually expressed in units of σ. For example, we could ask, "What is the probability that a measured value lies within ±σ of the true value (i.e., that x = µ ± σ)?", which is equivalent to asking, "What is the probability that the error in a measurement is less than σ in magnitude (i.e., that |x − µ| < σ)?" We can find this probability by integrating the probability function in equation (1) over the corresponding range of values for x, i.e., from µ − σ to µ + σ. (This works only because the distribution function is normalized; the integral itself must be evaluated numerically.)

    P = \int_{\mu-\sigma}^{\mu+\sigma} f(x)\, dx = 0.683        (4)
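
The value in equation (4) is easy to check numerically. The following is a minimal sketch added to this discussion (it is not part of SGN) and assumes Python with NumPy and SciPy is available; it verifies that equation (1) is normalized and that the area between µ − σ and µ + σ is about 0.683, independent of the particular values chosen for µ and σ.

```python
import numpy as np
from scipy.integrate import quad

def gaussian(x, mu, sigma):
    """Normalized Gaussian distribution, equation (1)."""
    return np.exp(-(x - mu)**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

mu, sigma = 50.0, 10.0   # arbitrary example values

# Check normalization: the total area under f(x) should be 1.
area, _ = quad(gaussian, -np.inf, np.inf, args=(mu, sigma))

# Equation (4): probability that a measurement lies within +/- sigma of mu.
P, _ = quad(gaussian, mu - sigma, mu + sigma, args=(mu, sigma))

print(f"total area      = {area:.4f}")   # ~1.0000
print(f"P(mu +/- sigma) = {P:.4f}")      # ~0.6827
```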

The result is shown in Figure 2 as the shaded region. We can phrase the result of equation (4) in several ways:

- 68.3% of the measured values in an infinite sample set lie within ±σ of the true value.
- A given measurement in an infinite sample set has a 68.3% chance of having an error of less than σ in magnitude.
- The true value has a 68.3% chance of lying within ±σ of any measured value in an infinite sample set.

There are many other permutations of these statements, but they are all equivalent. Since we will usually be interested in representing our measurement as an approximation to the true value, we will pick the third statement, and thus express the quality of our measurements in the form:

    \mu = x \pm \sigma \qquad \text{(with 68.3\% probability)}        (5)

We now need to introduce some statistics terms. The value of (P × 100%) is termed the confidence level. In the above example, P = 0.683, so based on our measurement x, we feel that we know µ with 68.3% confidence; alternatively, we are 68.3% sure that our measured value is correct. The value of the error limit is called the confidence limit and is given the symbol λ_P (in this case, λ_P = σ). The range of values (x ± λ_P) is called the (P × 100%) confidence interval, and is where we expect the true value to lie (P × 100%) of the times that we make a measurement. (In this case, the 68.3% confidence interval lies between the limits (x + σ) and (x − σ).) While all of this language and formalism is a bit intimidating at first, it does allow us to make very specific statements about the error in our measurements and the corresponding faith that we have in our result. Most scientists are therefore willing to tolerate the somewhat cumbersome nature of statistical terminology.

We are certainly not limited to describing our results with only 68% confidence. We can increase our confidence in our result by increasing the size of the confidence interval. In general, the confidence interval is expressed in terms of Z standard deviations:

    \mu = x \pm Z\sigma        (6)

where the number Z is called the standard score. Values of the confidence level (P) as a function of the standard score (Z) for infinite sample sets are tabulated in most statistics books, and are available in SGN (p. 45). Notice that we, the experimenters, get to manipulate the size of the error that we report by changing the value of Z! However, there is a tradeoff: if we choose a smaller confidence limit (a smaller value of Z), we will be less sure that the true value lies within the reported confidence interval. That is, we are less sure that the value that we report is correct. In practice, rather than choose a Z, most scientists choose to work at a particular confidence level (P) and use the tables to determine the corresponding value of Z.
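
If a table of Z versus P is not handy, the standard score can also be obtained from the inverse of the cumulative Normal distribution. The short sketch below is an added illustration (not part of the manual) and assumes SciPy is available; it reproduces the familiar values Z = 1.00, 1.96, and 2.58 for 68.3%, 95%, and 99% confidence.

```python
from scipy.stats import norm

# For a two-sided confidence level P, a fraction (1 - P)/2 of the
# distribution lies in each tail, so Z is the (1 + P)/2 quantile.
for P in (0.683, 0.95, 0.99):
    Z = norm.ppf((1.0 + P) / 2.0)
    print(f"P = {P:.3f}  ->  Z = {Z:.2f}")
```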

In this course, we will use 95% as our standard confidence level, for which Z = 1.96. Thus, for infinite sample sets:

    \mu = x \pm 1.96\,\sigma \qquad (N = \infty,\ \text{95\% confidence})

2. A Large, Finite Number of Measurements

In reality we can never obtain an infinite number of measurements. We can, however, strive to make a large number of measurements and hope to approximate an infinite sample set. Of course, we cannot exactly determine how close we are to the true value if we make a finite number of measurements, no matter how careful we are. Ultimately, the error in the final result will depend both on the quality of the approximations that we are forced to make and on the quality of the measurements themselves.

If N is large enough (N > 10 is generally accepted as "large"), the distribution of measured values can be fairly well approximated by a Normal distribution:

    f(x) = \frac{1}{s\sqrt{2\pi}}\, e^{-(x-\bar{x})^2 / 2s^2}        (7)

where the true value is now approximated by the mean (average), x̄:

    \mu \approx \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i        (8)

and the width of the distribution (the standard deviation) is now approximated by s:

    \sigma \approx s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2}        (9)

According to equation (8), the average is the best approximation of the true value that we can obtain with a finite number of measurements. Since x̄ represents a combination of several measurements, it is reasonable to expect that x̄ carries less random error than a single measurement. This is of course why we take an average, and why we use the average as the approximation of the true value.

Whether or not the errors are truly random, repeated experiments will produce a distribution of means; if the errors are random, that distribution is centered on the true value. (If you are uncomfortable with this idea, imagine conducting an experiment in which you measure the value of a property a large number of times. The mean of this set of measurements, x̄, is an approximation of the true value, µ. Now you conduct the experiment again, measuring the value of the same property a large number of times. The mean of this second set of measurements will probably be slightly different from the mean of the first set, since the mean is only an approximation of the true value and thus carries error. If you imagine conducting the experiment a large number of times, you should be willing to believe that you would obtain a distribution of means.) This distribution of means turns out also to be described by an approximate Normal distribution function, g(x̄),

    g(\bar{x}) = \frac{1}{s_{\bar{x}}\sqrt{2\pi}}\, e^{-(\bar{x}-\mu)^2 / 2s_{\bar{x}}^2}        (10)

where the standard deviation of g(x̄) is called the standard deviation of the mean, s_x̄. The standard deviation of the mean characterizes the uncertainty in x̄ itself, and is therefore the appropriate measure of the error in our best estimate of µ, just as x̄ is a better approximation of µ than any single measurement x.
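
The "distribution of means" described above can be demonstrated with a short simulation. The sketch below is an added illustration (not part of the manual) and assumes Python with NumPy; it repeats a simulated N-measurement experiment many times and shows that the resulting means scatter about µ with a spread close to σ/√N, anticipating equation (11) below.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 50.0, 10.0   # parent (true) distribution parameters
N = 25                   # measurements per experiment
n_experiments = 10_000   # number of repeated experiments

# Each row is one simulated experiment of N measurements.
data = rng.normal(mu, sigma, size=(n_experiments, N))

means = data.mean(axis=1)   # one mean per experiment

print(f"spread of individual measurements: {data.std(ddof=1):.3f}")   # ~ sigma
print(f"spread of the means:               {means.std(ddof=1):.3f}")  # ~ sigma/sqrt(N)
print(f"sigma / sqrt(N):                   {sigma / np.sqrt(N):.3f}")
```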

The standard deviation of the mean is related to the standard deviation of f(x), s, through the square root of the number of measurements:

    \sigma_{\bar{x}} \approx s_{\bar{x}} = \frac{s}{\sqrt{N}}        (11)

Examples of exact and approximate Normal distribution functions are shown in Figure 3.

Figure 3: Normal distribution functions for an infinite sample set (line) and a large, finite sample set (bars).

There are several technical points to notice about equations (7)-(9):

- The divisor used in the approximate standard deviation, s (N − 1, eq. 9), differs from that in the exact standard deviation, σ (N, eq. 2). This number is called the number of degrees of freedom, which is equal to the number of independent variables. There are fewer degrees of freedom in the approximate case because one is used up in constructing x̄. The idea of degrees of freedom also arises in chemistry, physics, mathematics, and other fields.
- As N → ∞, x̄ → µ and s → σ. Therefore, if N is large enough, s is a good approximation of σ, and x̄ is a good approximation of µ. In this limit, 68.3% of all measurements will fall within ±s of x̄, just as in the infinite sample set case (s and s_x̄ are then said to be statistically meaningful).

In analogy to the infinite sample set case (equation 6), the confidence interval for a large, finite sample set is expressed using the approximations introduced in this section (µ ≈ x̄ and σ_x̄ ≈ s_x̄), so that:

    \mu \approx \bar{x} \pm Z\, s_{\bar{x}} \qquad (N > 10)        (12)

where the values of Z are the same as those for an infinite sample set. By using the mean rather than a single measurement x, we decrease the uncertainty in our reported result by a factor of √N. This is another reason why it is a good idea to make as many measurements of a single property as possible.

As we have said, in this course P = 95%, which means that Z = 1.96. Thus,

    \mu \approx \bar{x} \pm 1.96\, s_{\bar{x}} \qquad (N > 10,\ \text{95\% confidence})
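
Putting equations (8), (9), (11), and (12) together, the uncertainty in a large set of replicate measurements can be reported in a few lines of code. This is a sketch with invented example data (assuming Python with NumPy), not a prescribed procedure from the manual.

```python
import numpy as np

# Invented example data: 15 replicate measurements of some property.
x = np.array([10.12, 10.08, 10.15, 10.11, 10.09, 10.13, 10.10, 10.14,
              10.07, 10.12, 10.11, 10.16, 10.09, 10.13, 10.10])

N = len(x)
x_bar = x.mean()                 # equation (8)
s = x.std(ddof=1)                # equation (9): divide by N - 1
s_mean = s / np.sqrt(N)          # equation (11)

Z = 1.96                         # 95% confidence, large-N (Normal) limit
ci = Z * s_mean                  # equation (12): mu ~ x_bar +/- Z * s_mean

print(f"x_bar = {x_bar:.4f}")
print(f"s     = {s:.4f}")
print(f"s_x   = {s_mean:.4f}")
print(f"95% CI: {x_bar:.4f} +/- {ci:.4f}")
```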

3. A Small Number of Measurements

It is often impractical to obtain even 10 measurements of a single property. Unfortunately, when N < 10, the distribution of measured values is not well approximated by a Normal function. Instead, we must describe the distribution of measured values about the true value using another distribution function called the Student t function. The form of the Student t function is complex; for our purposes you need only know that it depends on the value of N. Fortunately, even though the distribution function is different, the form of the equation for the confidence interval is quite similar to that for a large sample set:

    \mu \approx \bar{x} \pm t_{P,\nu}\, s_{\bar{x}} \qquad (N < 10)        (13)

Just as before, the value of s_x̄ is calculated from equation (11). The difference here is that the standard score, Z, is replaced by the Student t value, t_{P,ν}, where P is the confidence level and ν = N − 1 is the number of degrees of freedom. At 95% confidence,

    \mu \approx \bar{x} \pm t_{0.95,\nu}\, s_{\bar{x}} \qquad (N < 10,\ \text{95\% confidence})

Values of t_{P,ν} are tabulated in Table 3 (p. 49) of SGN. As an example, the expression for the 95% confidence interval for a set with N = 4 would be:

    \mu \approx \bar{x} \pm t_{0.95,3}\, s_{\bar{x}} = \bar{x} \pm 3.18\, s_{\bar{x}}

Notice that the uncertainty for an N = 4 sample set is quite a bit larger than for the large (N > 10) sample set case. In fact, t_{P,ν} increases dramatically as N decreases, reaching the alarming value of 12.7 when N = 2 (at 95% confidence). We are once again motivated to make our sample set as large as possible! In the other extreme, t_{P,ν} → Z as N → ∞, and the Student t distribution reverts to the Gaussian distribution for infinite sample sets.
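
For a small sample set, the same calculation uses t_{P,ν} in place of Z, per equation (13). The sketch below is an added illustration assuming Python with NumPy and SciPy (the four data values are invented); it reproduces the N = 4 case above, where t_{0.95,3} ≈ 3.18.

```python
import numpy as np
from scipy.stats import t

# Invented example: four replicate measurements (N = 4, nu = 3).
x = np.array([2.31, 2.35, 2.29, 2.33])

N = len(x)
x_bar = x.mean()
s_mean = x.std(ddof=1) / np.sqrt(N)   # equations (9) and (11)

# Two-sided 95% Student t value for nu = N - 1 degrees of freedom.
t_95 = t.ppf(0.975, df=N - 1)         # ~3.18 for nu = 3

ci = t_95 * s_mean                    # equation (13)
print(f"t(0.95, nu={N - 1}) = {t_95:.2f}")
print(f"mu ~ {x_bar:.3f} +/- {ci:.3f}  (95% confidence)")
```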

4. Estimation of Error

Sometimes it is impractical to calculate even an approximate standard deviation. In these cases, we can only estimate the uncertainty associated with a measurement by non-statistical means. The most common approach is to estimate the random error associated with the measurement based either on a history of measurements (e.g., manufacturer specifications) or on observed random noise from an instrument. For example, the manufacturer may state that a balance is good to ±0.001 g. This estimated uncertainty can usually be treated as the 95% confidence interval. However, if the balance is in a very windy room and you observe fluctuations of several milligrams due to the wind, it would be more prudent to assign a correspondingly larger 95% confidence interval, depending on the size of the fluctuations that you observe. In estimating the error this way, you must be sure that all possible sources of error are accounted for. For example, if the compound that you are weighing tends to stick to the weigh paper, not all of the compound that is weighed will actually be used in the experiment, and this additional uncertainty must be evaluated (it might be negligible). Of course, if you make many measurements, you could estimate the error statistically.

In general, the bulk of the contribution to the overall uncertainty in a measurement will come from only a few sources. Because it is often difficult to analyze the relative importance of all errors while making measurements, it is good practice to write down estimated errors for all measurements as you obtain the data; this information can always be ignored later if it turns out to be negligible.

C. Propagation of Error

It is a rare thing indeed to be able to directly measure the value of a property of interest. More often, we must obtain its value indirectly from the measurement of some related property. Since the measurement has error, the value of the property of interest will also have error. We now turn to the task of propagating experimental uncertainties through calculations. We will examine two methods: Differential Error Analysis, which is most useful if we are calculating a value from an analytical expression (an equation), and Linear Least Squares Analysis, which is most useful if we are correlating two or more sets of data (i.e., plotting the data, although you may not need to actually make the plot).

1. Differential Error Analysis

Suppose that the value of some physical property, F, is related to the values of the measured properties x and y by a mathematical expression (the treatment here can easily be extended to more variables). The goal of differential error analysis is to determine the uncertainty in F by propagating the uncertainties in x and y through the mathematical expression. In doing so, we will assume that we have some measure of the uncertainties in x and y (either from statistics or by estimation), and that the uncertainties are uncorrelated (independent of each other).

Derivation of the DEA Equation: We begin by examining the total differential of F with respect to the measurement variables x and y:

    dF = \left(\frac{\partial F}{\partial x}\right) dx + \left(\frac{\partial F}{\partial y}\right) dy        (14)

The terms dF, dx, and dy represent infinitesimal changes in F, x, and y, respectively. The values of F, x, and y will vary due to experimental errors. We expect our experimental errors to be small, but probably not infinitesimally small. Therefore, we write each term as a Taylor expansion about the mean. For example,

    dx \approx (x - \bar{x}) + a(x - \bar{x})^2 + b(x - \bar{x})^3 + \cdots        (15)

Similar expressions can be written for dy and dF. The expression (x − x̄) is a sort of raw error in x (the deviation of x from the mean). Since our errors are (hopefully) small, the higher-order terms can be neglected (they are even smaller). Therefore, we can rewrite equation (14) as:

    (F - \bar{F}) \approx \left(\frac{\partial F}{\partial x}\right)(x - \bar{x}) + \left(\frac{\partial F}{\partial y}\right)(y - \bar{y})        (16)

Recall that the standard deviation is a root-mean-square error, and that an RMS expression is constructed by squaring, then averaging, then square-rooting.

Therefore, we first square equation (16),

    (F - \bar{F})^2 \approx \left(\frac{\partial F}{\partial x}\right)^2 (x - \bar{x})^2 + \left(\frac{\partial F}{\partial y}\right)^2 (y - \bar{y})^2 + 2\left(\frac{\partial F}{\partial x}\right)\left(\frac{\partial F}{\partial y}\right)(x - \bar{x})(y - \bar{y})        (17)

then construct a mean (an average). Assuming that both x and y have been measured N times,

    \frac{1}{N}\sum_{i=1}^{N}(F_i - \bar{F})^2 \approx \left(\frac{\partial F}{\partial x}\right)^2 \frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})^2 + \left(\frac{\partial F}{\partial y}\right)^2 \frac{1}{N}\sum_{i=1}^{N}(y_i - \bar{y})^2 + 2\left(\frac{\partial F}{\partial x}\right)\left(\frac{\partial F}{\partial y}\right)\frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})        (18)

There are two types of error terms in equation (18): squared terms (such as (x_i − x̄)²) and cross terms (such as (x_i − x̄)(y_i − ȳ)). Since the squared terms are always positive, they always contribute to the uncertainty in F. However, since the cross terms contain products of independent uncertainties, they will sometimes be positive and sometimes be negative (that is, for some measurements (x_i − x̄) will be negative while (y_i − ȳ) is positive, and vice versa). If N is large enough, the cross error terms should cancel, and the average of the cross terms should approach zero. Therefore, we will ignore the contributions from the cross terms in equation (18). Recognizing that the averaged squared error is equal to the squared standard deviation (the mean-square, or MS, error), we have

    s_{F}^2 = \left(\frac{\partial F}{\partial x}\right)^2 s_{\bar{x}}^2 + \left(\frac{\partial F}{\partial y}\right)^2 s_{\bar{y}}^2        (19)

where we have used the approximate standard deviations of the means because these are the best estimates of the uncertainties in x and y. We are almost there. The final step is to multiply the entire equation by the squared standard score (or t value) at the chosen confidence level in order to convert the squared standard deviations to squared confidence limits.

Differential Error Analysis Result:

    \lambda_{F}^2 = \left(\frac{\partial F}{\partial x}\right)^2 \lambda_{x}^2 + \left(\frac{\partial F}{\partial y}\right)^2 \lambda_{y}^2        (20)

Thus, the square of the uncertainty in F is governed by the squares of the confidence limits of the measured values x and y, and by the partial derivatives of F with respect to x and y. If we know the analytical expression for F in terms of x and y, then we can use equation (20) to determine the uncertainty in F. One benefit of differential error analysis is that the effect of each source of error on the final outcome can be evaluated independently. For example, if the uncertainty in y (λ_y) dominates the uncertainty in F, we might try to redesign the experiment so that y is measured with less error, while preserving our technique for measuring x.
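
As a concrete illustration of equation (20), suppose a quantity is computed as F = x/y from two measured values with known 95% confidence limits. The sketch below is an added illustration (assuming Python with NumPy; the expression and numbers are invented for the example); it evaluates the partial derivatives and combines the uncertainties in quadrature. The same pattern works for any analytical expression once the partials are written down.

```python
import numpy as np

def F(x, y):
    """Invented example expression: F = x / y."""
    return x / y

# Measured values and their 95% confidence limits (invented numbers).
x, lam_x = 12.6, 0.2
y, lam_y = 3.05, 0.04

# Partial derivatives of F = x/y, evaluated at the measured values.
dFdx = 1.0 / y
dFdy = -x / y**2

# Equation (20): combine the uncertainties in quadrature.
lam_F = np.sqrt((dFdx * lam_x)**2 + (dFdy * lam_y)**2)

print(f"F     = {F(x, y):.3f}")
print(f"lam_F = {lam_F:.3f}")
# The separate terms show which measurement dominates the error budget.
print(f"x term = {abs(dFdx * lam_x):.3f},  y term = {abs(dFdy * lam_y):.3f}")
```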

2. Least Squares Analysis

It is often more convenient to extract the values of some physical properties through a graphical analysis rather than from a direct calculation, particularly when the system is overdetermined (there are more sets of data than values to extract). In a least squares analysis, the idea is to find the best fit of the experimental data to a function that contains some number of adjustable parameters (one or more of which represent the properties of interest). In a linear least squares analysis, the function is a linear equation. Since the equation for a line has at most two adjustable parameters (equation (21)), we must have significantly more than two data points to perform a meaningful linear least squares analysis. We will use only the linear version in this course; SGN has a good treatment of least squares analyses that includes more complicated cases (pp. 70-73).

Derivation of a Least Squares Line

In a linear least squares analysis, the goal is to extract values for the properties α and β by obtaining the best fit of our experimental data (x, y) to the linear function

    y = \alpha + \beta x        (21)

where y is the dependent variable and x is the independent variable. Three assumptions are made in performing a linear least squares analysis:

- A linear relationship exists between the dependent and independent variables.
- The independent variable is assumed to be exact; all of the error is assumed to be random and is forced into the dependent variable.
- All measurements of the same quantity have equal uncertainties.

In general these are reasonable assumptions. However, if a very poor fit is obtained, it may be that x and y are not linearly related, or that one of the other assumptions is invalid.

The least squares approach is similar to the method we would use if we were fitting a line to a set of points by eye: we want the differences between the experimental values of y (the points) and the values of y calculated from equation (21) (the line) to be small. This amounts to adjusting the values of α and β so that the calculated and measured values of y are as close as possible. We will distinguish between measured values of a property and calculated values of a property by writing a hat (^) over the variable that is calculated from equation (21). We begin by defining a quantity called the residual, r_i, which represents the vertical (because all error resides in y) deviation of the measured value (no hat) from the fitted line (hats):

    r_i = y_i - \hat{y}_i = y_i - \left(\hat{\alpha} + \hat{\beta} x_i\right)        (22)

Keep in mind that, from the least squares point of view, the experiment has been completed, which means that the values of x and y are now fixed, whereas α̂ and β̂ are what we are trying to vary. Therefore, we will never have a calculated value of x, and we will always have calculated values of α and β. As in the differential error analysis treatment, we wish to work with errors that will not accidentally cancel.

In a least squares analysis, this comes in the form of the chi-square function, χ², which is another kind of mean-square error:

    \chi^2 = \sum_{i=1}^{N} r_i^2 = \sum_{i=1}^{N}\left[y_i - \left(\hat{\alpha} + \hat{\beta} x_i\right)\right]^2        (23)

As usual, N is the number of pairs of (x, y) measurements. To perform a least squares analysis, we minimize ("least") the chi-square ("squares"). The minimum value of χ² occurs where the first derivatives with respect to each adjustable parameter are zero. That is, to minimize χ², we require that

    \frac{\partial \chi^2}{\partial \hat{\alpha}} = -2\sum_{i=1}^{N}\left(y_i - \hat{\alpha} - \hat{\beta} x_i\right) = 0        (24a)

and

    \frac{\partial \chi^2}{\partial \hat{\beta}} = -2\sum_{i=1}^{N} x_i\left(y_i - \hat{\alpha} - \hat{\beta} x_i\right) = 0        (24b)

(Recall that the adjustable parameters α̂ and β̂ are the variables here.) Equations (24a) and (24b) can be solved simultaneously to yield expressions for the best values of α and β, which was our original goal:

LLS Result:

    \hat{\beta} = \frac{N\sum x_i y_i - \sum x_i \sum y_i}{N\sum x_i^2 - \left(\sum x_i\right)^2}
    \quad\text{or}\quad
    \hat{\beta} = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sum (x_i - \bar{x})^2}        (25a)

and

    \hat{\alpha} = \bar{y} - \hat{\beta}\,\bar{x}        (25b)

The two expressions in equation (25a) are equivalent; for spreadsheet analyses, the second form will be more convenient.

Uncertainties in the Least Squares Fit Parameters

A differential error analysis treatment must be performed on equations (25a, 25b) in order to obtain expressions for the uncertainties in the best-fit values of α and β. This is an unpleasant task at best. Only the results of this analysis are presented here, but you should understand where they come from and, in principle, how to get them. (Note that this treatment uses the first form in equation (25a) as a starting point.) For simplicity, we define the quantity D to be the denominator of equation (25a),

    D = N\sum x_i^2 - \left(\sum x_i\right)^2        (26)

The approximate standard deviations in the best-fit values of α and β are then given by:

Error in LLS Parameters:

    s_{\hat{\alpha}}^2 = \frac{\sum x_i^2 \sum r_i^2}{(N-2)\,D}        (27a)

    s_{\hat{\beta}}^2 = \frac{N \sum r_i^2}{(N-2)\,D}        (27b)

where (N − 2) is equal to the number of degrees of freedom in the fit. (Notice that if N = 2, then there are no degrees of freedom in the fit, so the fit is not overdetermined. The quality of the fit in this case cannot be statistically determined; two points exactly define a line.) The confidence intervals in the LLS parameters are calculated as usual, using ν = N − 2. For example, the 95% confidence limit for α̂ is given by:

    \lambda_{0.95,\hat{\alpha}} = t_{0.95,\nu}\, s_{\hat{\alpha}}        (28)

The percent uncertainties in α̂ and β̂ can be used to estimate the quality of the fit and the validity of the assumptions listed above. Quality in this sense is subjective; it is up to the scientist (you) to determine what is acceptable and what is not.
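
The whole linear least squares recipe, equations (25a) through (28), fits in a short script. The sketch below is an added illustration (assuming Python with NumPy and SciPy; the x, y data are invented); it uses the second form of equation (25a) for the slope, then computes the residuals, D, the parameter standard deviations from equations (27a, 27b), and the 95% confidence limits.

```python
import numpy as np
from scipy.stats import t

# Invented example data (N pairs of x, y measurements).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
N = len(x)

# Equations (25a, 25b): best-fit slope and intercept.
beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
alpha = y.mean() - beta * x.mean()

# Residuals, equation (22), and the denominator D, equation (26).
r = y - (alpha + beta * x)
D = N * np.sum(x**2) - np.sum(x)**2

# Equations (27a, 27b): standard deviations of the fit parameters.
s_alpha = np.sqrt(np.sum(x**2) * np.sum(r**2) / ((N - 2) * D))
s_beta = np.sqrt(N * np.sum(r**2) / ((N - 2) * D))

# Equation (28): 95% confidence limits with nu = N - 2 degrees of freedom.
t_95 = t.ppf(0.975, df=N - 2)
print(f"alpha = {alpha:.3f} +/- {t_95 * s_alpha:.3f}")
print(f"beta  = {beta:.3f} +/- {t_95 * s_beta:.3f}")
```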

D. Interpretation of Uncertainties

There is always a temptation to consider an error analysis complete once we know the size of the confidence interval for our property of interest. However, the interpretation of the error cannot be neglected. There are many ways to interpret errors, all of which depend on the application. Two common methods are comparison of the experimental result to a known value (e.g., a literature result) and comparison of the error to the experimental value itself. Clearly it is best to make both comparisons if possible.

1. Comparison to Known Results

If we have correctly placed the true value within our confidence interval, and if the known value is in fact correct (close to the true value), then the known value should fall within our experimental confidence interval. Therefore, one very good check for the presence of systematic error is a comparison of the experimental confidence interval with a known result. This amounts to identifying the accuracy of our experimental result, provided that we believe the known value to be a good estimate of the true value. Bear in mind, however, that systematic error is not the only possible reason why the known value might not fall within our confidence interval. Other possibilities include omission of a source of random error from our error analysis, and an unreliable known value. Be sure to consider all three options before suggesting a reason for disagreement with a known value or pronouncing an experiment accurate (or inaccurate).
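
The comparison described above amounts to a single test: does the accepted value lie inside the experimental confidence interval? A minimal sketch (an added illustration with invented numbers, assuming Python):

```python
# Experimental result (invented numbers): mean and 95% confidence limit.
x_bar, lam_95 = 8.21, 0.15
literature_value = 8.314          # accepted value from the literature

low, high = x_bar - lam_95, x_bar + lam_95
if low <= literature_value <= high:
    print("Known value lies within the 95% confidence interval.")
else:
    print("Known value lies outside the interval: suspect systematic error,")
    print("an omitted source of random error, or an unreliable known value.")
```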

2. Comparison to Experimental Results

If the random error is very small, then we expect the standard deviation to be small, which makes our confidence interval small as well. Therefore, we can identify the precision of the experiment by examining the size of the confidence interval. Typically this is done by calculating a percent error, for example:

    \%\ \text{error} = \frac{\lambda_P}{\bar{x}} \times 100\%        (29)

Again, bear in mind that you may have omitted a source of random error from your analysis. If the error is large ("large" must be defined by the experimenter), you should try to identify the source of the large error (differential error analysis can help you here) and come up with suggestions for reducing it.

References

Shoemaker, D. P., Garland, C. W., and Nibler, J. W., Experiments in Physical Chemistry, 6th Ed., McGraw-Hill, New York (1996), Chapters II and XXII. (This text is denoted SGN within this manual.)

Skoog, D. A., Principles of Instrumental Analysis, 3rd Ed. (or later), Saunders College Publishing, New York (1985).


More information

Chapter 4: An Introduction to Probability and Statistics

Chapter 4: An Introduction to Probability and Statistics Chapter 4: An Introduction to Probability and Statistics 4. Probability The simplest kinds of probabilities to understand are reflected in everyday ideas like these: (i) if you toss a coin, the probability

More information

19. TAYLOR SERIES AND TECHNIQUES

19. TAYLOR SERIES AND TECHNIQUES 19. TAYLOR SERIES AND TECHNIQUES Taylor polynomials can be generated for a given function through a certain linear combination of its derivatives. The idea is that we can approximate a function by a polynomial,

More information

INTRODUCTION TO ANALYSIS OF VARIANCE

INTRODUCTION TO ANALYSIS OF VARIANCE CHAPTER 22 INTRODUCTION TO ANALYSIS OF VARIANCE Chapter 18 on inferences about population means illustrated two hypothesis testing situations: for one population mean and for the difference between two

More information

2 Systems of Linear Equations

2 Systems of Linear Equations 2 Systems of Linear Equations A system of equations of the form or is called a system of linear equations. x + 2y = 7 2x y = 4 5p 6q + r = 4 2p + 3q 5r = 7 6p q + 4r = 2 Definition. An equation involving

More information

UNIT 10 Equations NC: Algebra 3c, 3d

UNIT 10 Equations NC: Algebra 3c, 3d UNIT 10 Equations NC: Algebra 3c, 3d St Ac Ex Sp TOPICS (Text and Practice Books) 10.1 Negative Numbers - - - 10. Arithmetic with Negative Numbers - - 10.3 Simplifying Expressions - 10.4 Simple Equations

More information

The Growth of Functions. A Practical Introduction with as Little Theory as possible

The Growth of Functions. A Practical Introduction with as Little Theory as possible The Growth of Functions A Practical Introduction with as Little Theory as possible Complexity of Algorithms (1) Before we talk about the growth of functions and the concept of order, let s discuss why

More information

Decimal Scientific Decimal Scientific

Decimal Scientific Decimal Scientific Experiment 00 - Numerical Review Name: 1. Scientific Notation Describing the universe requires some very big (and some very small) numbers. Such numbers are tough to write in long decimal notation, so

More information

Lecture 4: Training a Classifier

Lecture 4: Training a Classifier Lecture 4: Training a Classifier Roger Grosse 1 Introduction Now that we ve defined what binary classification is, let s actually train a classifier. We ll approach this problem in much the same way as

More information

Uncertainty: A Reading Guide and Self-Paced Tutorial

Uncertainty: A Reading Guide and Self-Paced Tutorial Uncertainty: A Reading Guide and Self-Paced Tutorial First, read the description of uncertainty at the Experimental Uncertainty Review link on the Physics 108 web page, up to and including Rule 6, making

More information

Last week we looked at limits generally, and at finding limits using substitution.

Last week we looked at limits generally, and at finding limits using substitution. Math 1314 ONLINE Week 4 Notes Lesson 4 Limits (continued) Last week we looked at limits generally, and at finding limits using substitution. Indeterminate Forms What do you do when substitution gives you

More information

1 Motivation for Instrumental Variable (IV) Regression

1 Motivation for Instrumental Variable (IV) Regression ECON 370: IV & 2SLS 1 Instrumental Variables Estimation and Two Stage Least Squares Econometric Methods, ECON 370 Let s get back to the thiking in terms of cross sectional (or pooled cross sectional) data

More information

8. TRANSFORMING TOOL #1 (the Addition Property of Equality)

8. TRANSFORMING TOOL #1 (the Addition Property of Equality) 8 TRANSFORMING TOOL #1 (the Addition Property of Equality) sentences that look different, but always have the same truth values What can you DO to a sentence that will make it LOOK different, but not change

More information

Check List - Data Analysis

Check List - Data Analysis Chem 360 Check List - Data Analysis Reference: Taylor, J. R. An Introduction to Error Analysis; University Science Books, 2nd Ed., (Oxford University Press): Mill Valley, CA,1997 Listed below are data

More information

Using Microsoft Excel

Using Microsoft Excel Using Microsoft Excel Objective: Students will gain familiarity with using Excel to record data, display data properly, use built-in formulae to do calculations, and plot and fit data with linear functions.

More information

Introduction to Algebra: The First Week

Introduction to Algebra: The First Week Introduction to Algebra: The First Week Background: According to the thermostat on the wall, the temperature in the classroom right now is 72 degrees Fahrenheit. I want to write to my friend in Europe,

More information

Course Project. Physics I with Lab

Course Project. Physics I with Lab COURSE OBJECTIVES 1. Explain the fundamental laws of physics in both written and equation form 2. Describe the principles of motion, force, and energy 3. Predict the motion and behavior of objects based

More information

Intersecting Two Lines, Part Two

Intersecting Two Lines, Part Two Module 1.5 Page 149 of 1390. Module 1.5: Intersecting Two Lines, Part Two In this module you will learn about two very common algebraic methods for intersecting two lines: the Substitution Method and the

More information