Rules for Means and Variances; Prediction
- Holly Daisy Heath
- 5 years ago
Chapter 7

Rules for Means and Variances; Prediction

7.1 Rules for Means and Variances

The material in this section is very technical and algebraic. And dry. But it is useful for understanding many of the methods we will learn later in this course.

We have random variables X_1, X_2, ..., X_n. Throughout this section, we will assume that these random variables are independent. Sometimes they will also be identically distributed, but we don't need identically distributed for our main result. (There is a similar result without independence too, but we won't need it.)

Let µ_i denote the mean of X_i and let σ²_i denote the variance of X_i. Let b_1, b_2, ..., b_n denote n numbers. Define

W = b_1 X_1 + b_2 X_2 + ··· + b_n X_n.

W is a linear combination of the X_i's. The main result is:

The mean of W is µ_W = Σ_{i=1}^{n} b_i µ_i. The variance of W is σ²_W = Σ_{i=1}^{n} b²_i σ²_i.

Special Cases

1. The i.i.d. case. If the sequence is i.i.d., then we can write µ = µ_i and σ² = σ²_i. In this case the mean of W is µ_W = (Σ_{i=1}^{n} b_i)µ and the variance of W is σ²_W = (Σ_{i=1}^{n} b²_i)σ².

2. Two independent random variables. If n = 2, then we usually call the random variables X and Y instead of X_1 and X_2. We get W = b_1 X + b_2 Y, which has mean µ_W = b_1 µ_X + b_2 µ_Y and variance σ²_W = b²_1 σ²_X + b²_2 σ²_Y.

3. Two i.i.d. random variables. Combining the notation of the previous two items, W = b_1 X + b_2 Y has mean µ_W = (b_1 + b_2)µ and variance σ²_W = (b²_1 + b²_2)σ². Especially important is the case W = X + Y, which has mean µ_W = 2µ and variance σ²_W = 2σ². Another important case is W = X − Y, which has mean µ_W = 0 and variance σ²_W = 2σ².
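The two rules above are easy to check numerically. Below is a small Monte Carlo sketch (my code, not part of the text; standard library only) for the case W = X − Y with X and Y i.i.d. rolls of a fair die, where µ = 3.5, σ² = 35/12, and the rules give µ_W = 0 and σ²_W = 2σ² ≈ 5.83.

```python
import random

random.seed(1)

# Two i.i.d. random variables: X and Y, each a roll of a fair die,
# so mu = 3.5 and sigma^2 = 35/12.
# W = X - Y has b1 = 1, b2 = -1, so the rules predict
#   mu_W      = (b1 + b2) * mu         = 0
#   sigma^2_W = (b1^2 + b2^2) * sigma^2 = 2 * (35/12) = 5.83...
runs = 200_000
w = [random.randint(1, 6) - random.randint(1, 6) for _ in range(runs)]

mean_w = sum(w) / runs
var_w = sum((wi - mean_w) ** 2 for wi in w) / runs
print(round(mean_w, 2), round(var_w, 2))  # close to 0 and 5.83
```

With 200,000 runs the sample mean and variance of W land very close to the theoretical values, which is all the rules claim.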
7.2 Predicting for Bernoulli Trials

"Predictions are tough, especially about the future." (Yogi Berra)

We plan to observe m Bernoulli trials (BT) and want to predict the total number of successes that we will get. Let Y denote the random variable and y the observed value of the total number of successes in the future m trials. Similar to estimation, we will learn about point and interval predictions.

7.2.1 When p is Known

We begin with point prediction of Y; we denote the point prediction by ŷ. We adopt the criterion that we want the probability of being correct to be as large as possible. Below is the result.

Calculate the mean of Y, which is mp. If mp is an integer, then it is the most probable value of Y and our prediction is ŷ = mp. Here are some examples.

- Suppose that m = 20 and p = 0.5. Then mp = 20(0.5) = 10 is an integer, so 10 is our point prediction of Y. With the help of our website calculator (details not given), we find that P(Y = 10) = 0.1762.
- Suppose that m = 200 and p = 0.5. Then mp = 200(0.5) = 100 is an integer, so 100 is our point prediction of Y. With the help of our website calculator, we find that P(Y = 100) ≈ 0.056.
- Suppose that m = 300 and p = 0.3. Then mp = 300(0.3) = 90 is an integer, so 90 is our point prediction of Y. With the help of our website calculator, we find that P(Y = 90) ≈ 0.050.

If mp is not an integer, then it can be shown that the most probable value of Y is one of the integers immediately on either side of mp. Just check them both. Here are some examples.

- Suppose that m = 20 and p = 0.42. Then mp = 20(0.42) = 8.4 is not an integer. The most likely value of Y is either 8 or 9. With the help of our website calculator, we find that P(Y = 8) ≈ 0.177 and P(Y = 9) ≈ 0.171. Thus, ŷ = 8.
- Suppose that m = 100 and p = 0.615. Then mp = 100(0.615) = 61.5 is not an integer. The most likely value of Y is either 61 or 62. With the help of our website calculator, we find that P(Y = 62) is very slightly larger than P(Y = 61) (each is about 0.08). Thus, ŷ = 62.

In each of the above examples we saw that the probability that a point prediction is correct is very small.
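The text's "website calculator" is not available here, but the same checks can be done with a short sketch using Python's standard library (my code, not the author's). The helper names `binom_pmf` and `point_prediction` are my own.

```python
from math import comb

def binom_pmf(y, m, p):
    """P(Y = y) for Y ~ Bin(m, p)."""
    return comb(m, y) * p**y * (1 - p) ** (m - y)

def point_prediction(m, p):
    """Most probable value of Y ~ Bin(m, p): mp when it is an
    integer, otherwise the better of the two flanking integers."""
    mp = m * p
    if mp == int(mp):
        return int(mp)
    lo, hi = int(mp), int(mp) + 1
    return lo if binom_pmf(lo, m, p) >= binom_pmf(hi, m, p) else hi

print(point_prediction(20, 0.5))     # 10
print(point_prediction(20, 0.42))    # 8
print(point_prediction(100, 0.615))  # 62
print(round(binom_pmf(10, 20, 0.5), 4))  # 0.1762 -- small, as the text notes
```

Note how even the best possible point prediction has a small probability of being exactly right, which is the motivation for the prediction intervals that follow.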
As a result, scientists usually prefer a prediction interval. It is possible to create a one-sided prediction interval, but we will consider only two-sided prediction intervals. We have two choices: using a standard normal curve (SNC) approximation or finding an exact interval. Even if you choose the exact interval, it is useful to begin with the SNC approximation.
The SNC approximation says to use the interval

mp ± z√(mpq),

where q = 1 − p and z is the same number that we used for the two-sided confidence interval for p. Here is an example. Suppose that m = 100 and p = 0.615. The SNC approximate 95% prediction interval is

61.5 ± 1.96√(100(0.615)(0.385)) = 61.5 ± 9.54 = [51.96, 71.04].

Now, it makes no sense to predict a fractional number of successes, so we round off the endpoints to get [52, 71]. We can use our binomial calculator to see whether the SNC approximation is any good. Doing this, we find that the exact probability that Y will be in the interval [52, 71] is approximately 0.96. If this answer had been importantly too small or too large, we could modify either or both endpoints to get a more desirable answer. The point is that the SNC approximation will either give us a good answer (as in this case) or help us find a good answer.

7.2.2 When p is Unknown

We now consider the situation in which p is unknown. We will begin with point prediction. The first problem is that we cannot achieve our criterion's goal: we cannot find the most probable value of Y. The most probable value, as we saw above, is at or near mp, but we don't know what p is. Thus, we adopt an ad hoc approach. We simply decide to use mp̂ as our point prediction, where p̂ is our point estimate of p. Of course, if mp̂ is not an integer, we need to round off to a nearby integer. Because this is ad hoc, I say just round to the nearest integer; if two integers are equally close, round to the one that is an even number.

But where did p̂ come from? Well, we need to add another ingredient to our procedure. We assume that we have past data from the CM that will generate the m future trials. We denote the past data as consisting of n trials which yielded x successes, giving p̂ = x/n as our point estimate of the unknown p. As I said above, our point prediction is mp̂ = mx/n. This answer is ad hoc; we use it because it seems sensible.
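Both steps, the SNC interval and the exact binomial check, fit in a few lines. This is my sketch, not the author's calculator; note that 61.5 − 9.54 = 51.96, so the rounded interval comes out [52, 71].

```python
from math import comb, sqrt

def binom_pmf(y, m, p):
    return comb(m, y) * p**y * (1 - p) ** (m - y)

m, p, z = 100, 0.615, 1.96

# SNC approximation: mp +/- z*sqrt(mpq), rounded to whole successes.
half = z * sqrt(m * p * (1 - p))
lo, hi = round(m * p - half), round(m * p + half)
print(lo, hi)  # 52 71

# Exact check with the binomial pmf.
exact = sum(binom_pmf(y, m, p) for y in range(lo, hi + 1))
print(round(exact, 3))  # a bit above the 0.95 target
```

If the exact probability had come out importantly too small or too large, the loop makes it easy to re-check after nudging either endpoint.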
This approach suggests taking our prediction interval for p known, mp ± z√(mpq), simply changing it to mp̂ ± z√(mp̂q̂), and saying, "It's ad hoc." We cannot do this! Why? Well, because we want an interval such that the probability is, for example, 95% that the interval will be correct; i.e., contain the eventual value of Y. It turns out (and you can show this with a simulation study if you want) that the interval mp̂ ± z√(mp̂q̂) makes too many mistakes; i.e., the probability that it will be correct is smaller than the target associated with the choice of z.

Instead of an ad hoc procedure that does not perform as desired, we directly derive an answer using the result of Section 7.1. We begin by writing

W = Y − mp̂ = Y − (m/n)X.

We now apply our result of Section 7.1 with X_1 = Y, X_2 = X, b_1 = 1 and b_2 = −(m/n).
We can calculate the mean and variance of W:

µ_W = µ_Y − (m/n)µ_X = mp − (m/n)np = 0,

and

σ²_W = σ²_Y + (m/n)²σ²_X = mpq + (m/n)²npq = mpq[1 + (m/n)].

Thus, we can standardize W:

Z = (W − µ_W)/σ_W = W/√(mpq[1 + (m/n)]) = (Y − mp̂)/√(mpq[1 + (m/n)]).

It can be shown that if m and n are both large and p is not too close to either 0 or 1, then probabilities for Z can be well approximated by the SNC. Slutsky's work (see Chapter 3) can also be applied here; the result is:

Z′ = (Y − mp̂)/√(mp̂q̂[1 + (m/n)]).

Again, if m and n are both large and p is not too close to either 0 or 1, probabilities for Z′ can be well approximated by the SNC. As a result, using the same algebra we had in Chapter 3 (the names have changed), we can expand Z′ and get the following prediction interval for Y:

mp̂ ± z√(mp̂q̂[1 + (m/n)]).    (7.1)

I will illustrate the use of this formula with real data from basketball. Years ago Katie Voigt, a very talented basketball player, took an independent study course from me. In exchange for my help, she graciously provided me with the results of 2000 three-point shots that she attempted: 100 shots every day for a total of 20 days. I will use some of Katie's data to illustrate the current method.

Katie plans to attempt m = 100 three-point shots. I am going to assume that Katie's shots are BT, but I don't know the numerical value of p. I have previous data for Katie in which she attempted n = 200 shots and achieved a total of x = 115 successes. This gives us p̂ = 115/200 = 0.575. The point prediction of Y is mp̂ = 100(0.575) = 57.5, which rounds to 58, an even number. Next, I will obtain the 95% prediction interval for Y:

57.5 ± 1.96√(100(0.575)(0.425)[1 + (100/200)]) = 57.5 ± 1.96(4.94)(1.225) = 57.5 ± 11.9 = [45.6, 69.4],

which I round to [46, 69]. This is a very wide interval. (Why do I say this? Well, in my opinion, a basketball player will believe that making 46 out of 100 is seriously different from making 69 out of 100.)
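Formula (7.1) is easy to package as a small function. The sketch below is mine (the function name is an assumption, not from the text) and reproduces Katie's interval before rounding.

```python
from math import sqrt

def prediction_interval_p_unknown(m, n, x, z=1.96):
    """Formula (7.1): m*phat +/- z*sqrt(m*phat*qhat*(1 + m/n)),
    where phat = x/n comes from the past data of n trials."""
    phat = x / n
    center = m * phat
    half = z * sqrt(m * phat * (1 - phat) * (1 + m / n))
    return center - half, center + half

# Katie's numbers: m = 100 future shots, n = 200 past shots, x = 115 makes.
lo, hi = prediction_interval_p_unknown(100, 200, 115)
print(round(lo, 1), round(hi, 1))  # 45.6 69.4
```

Rounding these endpoints to whole shots gives the [46, 69] interval of the text.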
A great thing about prediction (not shared by estimation or testing) is that we find out whether our answer is correct. It is no longer the case that "Only Nature knows!" In particular, when Katie attempted the m = 100 future shots, she obtained y = 66 successes. Our point prediction, 58, was too small, but our prediction interval is correct because it includes y = 66.

You should note that there is no exact solution to this problem. It is, indeed, an extremely complicated computation. Even a simulation study is a challenge. I will discuss how a simulation study is done and then show you how it works.
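Such a simulation study can be sketched in a few lines. The code below is mine, not the author's; it uses only Python's standard library, takes m = 100, n = 200 and a "Nature" value of p = 0.60 (the values used in the study reported next), and records how often interval (7.1) covers the simulated future count.

```python
import random
from math import sqrt

random.seed(371)

def binomial(n, p):
    """One draw from Bin(n, p) as a sum of n Bernoulli trials."""
    return sum(random.random() < p for _ in range(n))

m, n, p, z = 100, 200, 0.60, 1.96   # Nature knows p; the researcher does not
runs = 10_000
correct = 0
for _ in range(runs):
    x = binomial(n, p)               # stage 1: simulated past data
    phat = x / n
    half = z * sqrt(m * phat * (1 - phat) * (1 + m / n))
    y = binomial(m, p)               # stage 2: simulated future count
    if m * phat - half <= y <= m * phat + half:
        correct += 1

coverage = correct / runs
print(coverage)  # close to the nominal 0.95
```

The observed coverage comes out near 95%, matching the rate the author reports.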
In order to do a simulation study we must specify three numbers: m, n and p. (Even though the researcher does not know p, Nature, the great simulator, must know it.) And, as we shall see, it is a two-stage simulation. To be specific, I will simulate something similar to Katie's problem. I will take m = 100, n = 200 and p = 0.60. (Of course, I don't know Katie's p, but 0.60 seems not ridiculous.) A single run of the simulation study is as follows.

1. Simulate the value of X ∼ Bin(200, 0.60), our simulated previous data for Katie.
2. Use our value of X from step 1 to estimate p and then compute the 95% prediction interval for Y, remembering to use the interval for p unknown.
3. Simulate the value of Y ∼ Bin(100, 0.60) and see whether or not the interval from step 2 is correct.

I performed this simulation with 10,000 runs and obtained 9,493 (94.93%) correct prediction intervals! This is very close to the nominal (advertised) rate of 95% correct.

7.3 *Predicting for the Poisson (Optional)

We plan to observe the value of the random variable Y ∼ Poisson(θ).

7.3.1 When θ is Known

We begin with point prediction of Y. We adopt the criterion that we want the probability of being correct to be as large as possible. Below is the result.

If θ is an integer, then y = θ and y = θ − 1 are equally likely to occur, and either one is more likely than any other value of Y. For uniformity, we will use the even number of θ and θ − 1 as our point prediction. For example, if θ = 10, it can be shown by using our Poisson calculator that P(Y = 10) = P(Y = 9) ≈ 0.125. Our point prediction is ŷ = 10.

If θ is not an integer, then there is a unique most likely value of Y and it is ŷ = the greatest integer in θ. For example, if θ = 7.6, then the greatest integer in θ is 7. With the help of our calculator, P(Y = 7) ≈ 0.146 is larger than either P(Y = 6) ≈ 0.134 or P(Y = 8) ≈ 0.138. Indeed, y = 7 is the most likely of all values for Y; thus, ŷ = 7. If θ is very close to an integer, these results converge, as seen in the following example.
If θ is just below 8, for example, then P(Y = 7) is only slightly larger than P(Y = 8). As the above examples indicate, the probability that a point prediction will be correct is quite small. Thus, we want a prediction interval (PI) too.

We follow the approach taken earlier for BT: we calculate the PI based on the SNC approximation and then use our calculator to see whether our answer needs improvement. We begin with the facts that the mean and variance of Y are both equal to θ. Approximating the standardized Poisson by the SNC, we get the following PI:

θ ± z√θ.    (7.2)
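Formula (7.2) amounts to one square root and a rounding step. A minimal sketch (my code, not the text's calculator; the function name is an assumption):

```python
from math import sqrt

def poisson_pi(theta, z=1.96):
    """SNC approximate PI (7.2): theta +/- z*sqrt(theta),
    endpoints rounded to whole numbers of successes."""
    half = z * sqrt(theta)
    return round(theta - half), round(theta + half)

print(poisson_pi(200))  # (172, 228)
print(poisson_pi(5))    # (1, 9)
```

These two calls reproduce the intervals worked out in the examples that follow.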
Below are a couple of examples.

1. Suppose that θ = 200 and we want 95% probability of being correct. First, 95% gives z = 1.96 and our PI is

200 ± 1.96√200 = 200 ± 27.7.

After rounding, our PI is [172, 228]. With the help of our calculator, we find that for θ = 200, P(172 ≤ Y ≤ 228) ≈ 0.956. We note that the actual probability is somewhat larger than our target value, 0.95. Most days I would decide, "I don't care about the difference." But it's possible that I will think, "Perhaps I can make the interval narrower and still achieve 95%." (Why would I prefer a narrower interval?) With this in mind, I decide to shorten either end of the answer I have, [172, 228]. I get

P(173 ≤ Y ≤ 228) ≈ 0.952 and P(172 ≤ Y ≤ 227) ≈ 0.952.

With either shorter interval, I still achieve the target of 95%; I would use [173, 228] because it gives a very slightly higher chance of being correct, but there really is not much basis for choosing between these. Note that if we shorten this last interval further, either to [174, 228] or [173, 227], then (details not shown) the probability of being correct drops below 0.95.

2. Suppose that θ = 5 and we want 95% probability of being correct. As above, 95% gives z = 1.96 and our PI is

5 ± 1.96√5 = 5 ± 4.4.

After rounding, our PI is [1, 9]. With the help of our calculator, we find that for θ = 5,

P(1 ≤ Y ≤ 9) ≈ 0.961.

Moreover, P(1 ≤ Y ≤ 8) ≈ 0.925 and P(2 ≤ Y ≤ 9) ≈ 0.928, both of which fall below the target. Thus, [1, 9] is our answer.

7.3.2 When θ is Unknown

We have previous data on X ∼ Poisson(θ_1) with observed value x. We assume a known relationship between the parameters: θ = cθ_1, for some known number c. This is, perhaps, easiest to visualize for a Poisson Process. For example, suppose that we have a Poisson Process with unknown rate λ per hour. We define X to be the number of successes in a (past) observation of the Poisson Process for a known length of time t_1 hours. We define Y to
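The exact checks in these examples only require the Poisson pmf. A sketch standing in for the text's calculator (my code; the pmf is computed in log space with `math.lgamma` so that large θ does not overflow):

```python
from math import exp, lgamma, log

def pois_pmf(y, theta):
    """P(Y = y) for Y ~ Poisson(theta), via log space to avoid
    overflow of theta**y and factorial(y) for large theta."""
    return exp(-theta + y * log(theta) - lgamma(y + 1))

def pois_prob(lo, hi, theta):
    """P(lo <= Y <= hi) for Y ~ Poisson(theta)."""
    return sum(pois_pmf(y, theta) for y in range(lo, hi + 1))

# theta = 200: [172, 228] overshoots 95%, and either one-step
# shortening still achieves the target.
print(round(pois_prob(172, 228, 200), 3))
print(round(pois_prob(173, 228, 200), 3))
# theta = 5: [1, 9] achieves 95%, but shortening either end does not.
print(round(pois_prob(1, 9, 5), 3))
print(round(pois_prob(1, 8, 5), 3))
```

The same loop makes it painless to test any candidate endpoint adjustment.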
be the number of successes in a (future) observation of the Poisson Process for a known length of time t_2 hours. With this notation, θ_1 = t_1λ, θ = t_2λ and θ = (t_2/t_1)θ_1.

We begin with our ad hoc point prediction. The idea is that we estimate θ_1 by x, so our estimate of θ is cx. If cx is an integer, then it is my point prediction. If cx is not an integer, I round it to the nearest integer and that is my point prediction.

For the PI, we proceed as follows. Define W = Y − cX. Using the results of Section 7.1, the mean of W is

µ_W = µ_Y − cµ_X = θ − cθ_1 = θ − θ = 0,

and the variance of W is

σ²_W = σ²_Y + c²σ²_X = θ + c²θ_1 = θ + c²(θ/c) = θ(1 + c).

Thus, the standardized version of W is

Z = (Y − cX)/√(θ(1 + c)).

It can be shown that if θ_1 and θ = cθ_1 are both large, then probabilities for Z can be well approximated by using the SNC. Slutsky's work also applies here. In particular, let

Z′ = (Y − cX)/√(cX(1 + c)).

Again, if θ_1 and θ are both large, probabilities for Z′ can be well approximated by the SNC. We can invert the formula for Z′ to obtain the following PI for Y when θ is unknown:

cx ± z√(cx(1 + c)).    (7.3)

Below are a couple of examples of how to use this formula.

1. Paula wants to predict the number of successes observed during 100 hours of a Poisson Process with unknown rate. From past data, 40 hours yielded 120 successes. Find the point prediction and 90% PI for Paula.

Solution: The relationship between the parameter for the future, θ, and the parameter for the past data, θ_1, is θ = (100/40)θ_1 = 2.5θ_1. Thus, c = 2.5. For the point prediction, we calculate cx = 2.5(120) = 300. Thus, the ad hoc point prediction for Y is ŷ = 300. The goal of 90% probability of being correct gives z = 1.645. Thus, the PI is

cx ± z√(cx(1 + c)) = 300 ± 1.645√(300(1 + 2.5)) = 300 ± 53.3 = [247, 353],

after rounding.
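Formula (7.3) can be packaged the same way as before. This sketch (mine, not the author's; the function name is an assumption) reproduces Paula's interval, and also the interval for the similar example that follows.

```python
from math import sqrt

def poisson_pi_theta_unknown(c, x, z):
    """Formula (7.3): cx +/- z*sqrt(cx*(1 + c)), endpoints rounded,
    where x is the observed past count and theta = c*theta_1."""
    center = c * x
    half = z * sqrt(center * (1 + c))
    return round(center - half), round(center + half)

print(poisson_pi_theta_unknown(2.5, 120, 1.645))    # (247, 353)
print(poisson_pi_theta_unknown(0.25, 1200, 1.645))  # (268, 332)
```

Note how the second call, with ten times the past data, gives a markedly narrower interval around the same point prediction of 300.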
2. This example is similar to the previous one; its purpose is to examine the effect of having much more data for estimating the unknown θ. Mary wants to predict the number of successes observed during 100 hours of a Poisson Process with unknown rate. From past data, 400 hours yielded 1200 successes. Find the point prediction and 90% PI for Mary.

Solution: The relationship between the parameter for the future, θ, and the parameter for the past data, θ_1, is θ = (100/400)θ_1 = 0.25θ_1. Thus, c = 0.25. For the point prediction, we calculate cx = 0.25(1200) = 300. Thus, the ad hoc point prediction for Y is ŷ = 300. The goal of 90% probability of being correct gives z = 1.645. Thus, the PI is

cx ± z√(cx(1 + c)) = 300 ± 1.645√(300(1 + 0.25)) = 300 ± 31.9 = [268, 332],

after rounding.

As with BT with p unknown, there is no exact solution to this problem. It is, indeed, an extremely complicated computation. Even a simulation study is a challenge. I will discuss how a simulation study is done and then show you how it works.

In order to do a simulation study we must specify two numbers: c and θ_1. And, as we shall see, it is a two-stage simulation. To be specific, I will simulate something similar to the first of our above examples. I will take c = 2.5 and θ_1 = 120.

1. Simulate the value of X ∼ Poisson(120).
2. Use our value of X from step 1 to compute the 90% prediction interval for Y; remember to use the formula for θ unknown.
3. Simulate the value of Y ∼ Poisson(300) and see whether or not the interval from step 2 is correct.

I performed this simulation with 10,000 runs and obtained 9,003 (90.03%) correct prediction intervals! This is almost a perfect match to the target value of 90% correct.
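The two-stage simulation above can be sketched with the standard library alone. Everything here is my addition, not the author's program: the Poisson sampler builds each draw as a sum of Poisson(1) variables (Knuth's method underflows if applied directly at θ = 300), and I use fewer runs than the text's 10,000 for speed.

```python
import random
from math import exp, sqrt

random.seed(371)

def poisson1():
    """One draw from Poisson(1) by Knuth's multiplication method."""
    limit = exp(-1.0)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def poisson(lam):
    """One draw from Poisson(lam) for integer lam, as a sum of lam
    independent Poisson(1) draws (safe for large lam)."""
    return sum(poisson1() for _ in range(lam))

c, theta1, z = 2.5, 120, 1.645
theta = 300                 # = c * theta1, known to Nature only
runs = 5_000                # fewer than the text's 10,000, for speed
correct = 0
for _ in range(runs):
    x = poisson(theta1)     # stage 1: simulated past data
    center = c * x
    half = z * sqrt(center * (1 + c))
    y = poisson(theta)      # stage 2: simulated future count
    if center - half <= y <= center + half:
        correct += 1

coverage = correct / runs
print(coverage)  # close to the nominal 0.90
```

As in the text's study, the observed coverage lands very close to the 90% target.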
This document was written and copyrighted by Paul Dawkins. Use of this document and its online version is governed by the Terms and Conditions of Use located at. The online version of this document is
More informationPROBABILITY DISTRIBUTION
PROBABILITY DISTRIBUTION DEFINITION: If S is a sample space with a probability measure and x is a real valued function defined over the elements of S, then x is called a random variable. Types of Random
More informationPOISSON PROCESSES 1. THE LAW OF SMALL NUMBERS
POISSON PROCESSES 1. THE LAW OF SMALL NUMBERS 1.1. The Rutherford-Chadwick-Ellis Experiment. About 90 years ago Ernest Rutherford and his collaborators at the Cavendish Laboratory in Cambridge conducted
More informationChapter 5 Simplifying Formulas and Solving Equations
Chapter 5 Simplifying Formulas and Solving Equations Look at the geometry formula for Perimeter of a rectangle P = L W L W. Can this formula be written in a simpler way? If it is true, that we can simplify
More information1. A definition of Algebra: a branch of mathematics which describes basic arithmetic relations using variables.
MA30S PRECALC UNIT C: ALGEBRA CLASS NOTES. A definition of Algebra: a branch of mathematics which describes basic arithmetic relations using variables.. It is just a language. Instead of saying Jason is
More informationCalculus (Math 1A) Lecture 4
Calculus (Math 1A) Lecture 4 Vivek Shende August 30, 2017 Hello and welcome to class! Hello and welcome to class! Last time Hello and welcome to class! Last time We discussed shifting, stretching, and
More information1 Review of the dot product
Any typographical or other corrections about these notes are welcome. Review of the dot product The dot product on R n is an operation that takes two vectors and returns a number. It is defined by n u
More informationDiscrete Random Variables
Discrete Random Variables We have a probability space (S, Pr). A random variable is a function X : S V (X ) for some set V (X ). In this discussion, we must have V (X ) is the real numbers X induces a
More informationConditional distributions (discrete case)
Conditional distributions (discrete case) The basic idea behind conditional distributions is simple: Suppose (XY) is a jointly-distributed random vector with a discrete joint distribution. Then we can
More informationChapter 9: Interval Estimation and Confidence Sets Lecture 16: Confidence sets and credible sets
Chapter 9: Interval Estimation and Confidence Sets Lecture 16: Confidence sets and credible sets Confidence sets We consider a sample X from a population indexed by θ Θ R k. We are interested in ϑ, a vector-valued
More informationMathematics Tutorials. Arithmetic Tutorials Algebra I Tutorials Algebra II Tutorials Word Problems
Mathematics Tutorials These pages are intended to aide in the preparation for the Mathematics Placement test. They are not intended to be a substitute for any mathematics course. Arithmetic Tutorials Algebra
More informationPrealgebra. Edition 5
Prealgebra Edition 5 Prealgebra, Edition 5 2009, 2007, 2005, 2004, 2003 Michelle A. Wyatt (M A Wyatt) 2009, Edition 5 Michelle A. Wyatt, author Special thanks to Garry Knight for many suggestions for the
More informationChapter 3. Discrete Random Variables and Their Probability Distributions
Chapter 3. Discrete Random Variables and Their Probability Distributions 2.11 Definition of random variable 3.1 Definition of a discrete random variable 3.2 Probability distribution of a discrete random
More informationCalculus (Math 1A) Lecture 4
Calculus (Math 1A) Lecture 4 Vivek Shende August 31, 2017 Hello and welcome to class! Last time We discussed shifting, stretching, and composition. Today We finish discussing composition, then discuss
More informationChapter 3. Discrete Random Variables and Their Probability Distributions
Chapter 3. Discrete Random Variables and Their Probability Distributions 1 3.4-3 The Binomial random variable The Binomial random variable is related to binomial experiments (Def 3.6) 1. The experiment
More informationHi, my name is Dr. Ann Weaver of Argosy University. This WebEx is about something in statistics called z-
Hi, my name is Dr. Ann Weaver of Argosy University. This WebEx is about something in statistics called z- Scores. I have two purposes for this WebEx, one, I just want to show you how to use z-scores in
More informationSampling Distributions
Sampling Distributions In previous chapters we have discussed taking a single observation from a distribution. More accurately, we looked at the probability of a single variable taking a specific value
More informationThe Celsius temperature scale is based on the freezing point and the boiling point of water. 12 degrees Celsius below zero would be written as
Prealgebra, Chapter 2 - Integers, Introductory Algebra 2.1 Integers In the real world, numbers are used to represent real things, such as the height of a building, the cost of a car, the temperature of
More informationDefinition: Quadratic equation: A quadratic equation is an equation that could be written in the form ax 2 + bx + c = 0 where a is not zero.
We will see many ways to solve these familiar equations. College algebra Class notes Solving Quadratic Equations: Factoring, Square Root Method, Completing the Square, and the Quadratic Formula (section
More informationContinuous Random Variables. What continuous random variables are and how to use them. I can give a definition of a continuous random variable.
Continuous Random Variables Today we are learning... What continuous random variables are and how to use them. I will know if I have been successful if... I can give a definition of a continuous random
More informationV. Probability. by David M. Lane and Dan Osherson
V. Probability by David M. Lane and Dan Osherson Prerequisites none F.Introduction G.Basic Concepts I.Gamblers Fallacy Simulation K.Binomial Distribution L.Binomial Demonstration M.Base Rates Probability
More informationHow Much Evidence Should One Collect?
How Much Evidence Should One Collect? Remco Heesen October 10, 2013 Abstract This paper focuses on the question how much evidence one should collect before deciding on the truth-value of a proposition.
More informationA-Level Notes CORE 1
A-Level Notes CORE 1 Basic algebra Glossary Coefficient For example, in the expression x³ 3x² x + 4, the coefficient of x³ is, the coefficient of x² is 3, and the coefficient of x is 1. (The final 4 is
More informationNCERT solution for Integers-2
NCERT solution for Integers-2 1 Exercise 6.2 Question 1 Using the number line write the integer which is: (a) 3 more than 5 (b) 5 more than 5 (c) 6 less than 2 (d) 3 less than 2 More means moving right
More informationModule 8 Probability
Module 8 Probability Probability is an important part of modern mathematics and modern life, since so many things involve randomness. The ClassWiz is helpful for calculating probabilities, especially those
More informationLecture 4: Training a Classifier
Lecture 4: Training a Classifier Roger Grosse 1 Introduction Now that we ve defined what binary classification is, let s actually train a classifier. We ll approach this problem in much the same way as
More informationWe're in interested in Pr{three sixes when throwing a single dice 8 times}. => Y has a binomial distribution, or in official notation, Y ~ BIN(n,p).
Sampling distributions and estimation. 1) A brief review of distributions: We're in interested in Pr{three sixes when throwing a single dice 8 times}. => Y has a binomial distribution, or in official notation,
More informationPark School Mathematics Curriculum Book 1, Lesson 1: Defining New Symbols
Park School Mathematics Curriculum Book 1, Lesson 1: Defining New Symbols We re providing this lesson as a sample of the curriculum we use at the Park School of Baltimore in grades 9-11. If you d like
More informationSolving Quadratic & Higher Degree Equations
Chapter 9 Solving Quadratic & Higher Degree Equations Sec 1. Zero Product Property Back in the third grade students were taught when they multiplied a number by zero, the product would be zero. In algebra,
More informationMasters Comprehensive Examination Department of Statistics, University of Florida
Masters Comprehensive Examination Department of Statistics, University of Florida May 6, 003, 8:00 am - :00 noon Instructions: You have four hours to answer questions in this examination You must show
More information6.867 Machine Learning
6.867 Machine Learning Problem set 1 Due Thursday, September 19, in class What and how to turn in? Turn in short written answers to the questions explicitly stated, and when requested to explain or prove.
More informationLecture 21: October 19
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 21: October 19 21.1 Likelihood Ratio Test (LRT) To test composite versus composite hypotheses the general method is to use
More informationBayesian Estimation An Informal Introduction
Mary Parker, Bayesian Estimation An Informal Introduction page 1 of 8 Bayesian Estimation An Informal Introduction Example: I take a coin out of my pocket and I want to estimate the probability of heads
More informationLesson 3-2: Solving Linear Systems Algebraically
Yesterday we took our first look at solving a linear system. We learned that a linear system is two or more linear equations taken at the same time. Their solution is the point that all the lines have
More informationConfidence intervals
Confidence intervals We now want to take what we ve learned about sampling distributions and standard errors and construct confidence intervals. What are confidence intervals? Simply an interval for which
More informationLecture 4: September Reminder: convergence of sequences
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 4: September 6 In this lecture we discuss the convergence of random variables. At a high-level, our first few lectures focused
More informationMath Precalculus I University of Hawai i at Mānoa Spring
Math 135 - Precalculus I University of Hawai i at Mānoa Spring - 2014 Created for Math 135, Spring 2008 by Lukasz Grabarek and Michael Joyce Send comments and corrections to lukasz@math.hawaii.edu Contents
More information4 Branching Processes
4 Branching Processes Organise by generations: Discrete time. If P(no offspring) 0 there is a probability that the process will die out. Let X = number of offspring of an individual p(x) = P(X = x) = offspring
More information33. SOLVING LINEAR INEQUALITIES IN ONE VARIABLE
get the complete book: http://wwwonemathematicalcatorg/getfulltextfullbookhtm 33 SOLVING LINEAR INEQUALITIES IN ONE VARIABLE linear inequalities in one variable DEFINITION linear inequality in one variable
More informationAlgebra, Part I. x m x = n xm i x n = x m n = 1
Lesson 7 Algebra, Part I Rules and Definitions Rules Additive property of equality: If a, b, and c represent real numbers, and if a=b, then a + c = b + c. Also, c + a = c + b Multiplicative property of
More informationHypothesis testing I. - In particular, we are talking about statistical hypotheses. [get everyone s finger length!] n =
Hypothesis testing I I. What is hypothesis testing? [Note we re temporarily bouncing around in the book a lot! Things will settle down again in a week or so] - Exactly what it says. We develop a hypothesis,
More information