2.1 Independent, Identically Distributed Trials


Chapter 2 Trials

In Chapter 1 we considered the operation of a CM. Many, but not all, CMs can be operated more than once. For example, a coin can be tossed or a die cast many times. By contrast, the next NFL season will operate only once. In this chapter we consider repeated operations of a CM.

Let us return to the Blood type CM of Section 1. Previously, I described the situation as follows: A man with AB blood and a woman with AB blood will have a child. The outcome is the child's blood type. The sample space consists of A, B and AB. I stated in Chapter 1 that these three outcomes are not equally likely, but that the ELC is lurking in this problem. We get the ELC by viewing the problem somewhat differently, namely as two operations of a CM. The first operation is the selection of the allele that Dad gives to the child. The second operation is the selection of the allele that Mom gives to the child. The possible outcomes are A and B, and it seems reasonable to assume that these are equally likely. Consider the following display of the possibilities for the child's blood type.

                      Allele from Mom
    Allele from Dad     A      B
          A             A      AB
          B             AB     B

I am willing to make the following assumption: The allele contributed by Dad (Mom) has no influence on the allele contributed by Mom (Dad). Based on this assumption, and the earlier assumption of the ELC for each operation of the CM, I conclude that the four entries in the cells of the table above are equally likely. As a result, we have the following probabilities for the blood type of the child: P(A) = P(B) = 0.25 and P(AB) = 0.50. Here is another example. I cast a die twice and I am willing to make the following assumptions.

The number obtained on the first cast is equally likely to be 1, 2, 3, 4, 5 or 6. The number obtained on the second cast is equally likely to be 1, 2, 3, 4, 5 or 6. The number obtained on the first (second) cast has no influence on the number obtained on the second (first) cast.

The 36 possible ordered results of the two casts are displayed below, where, for example, (5,3) means that the first die landed 5 and the second die landed 3. This is different from (3,5).

    Number from              Number from second cast
    first cast        1      2      3      4      5      6
        1           (1,1)  (1,2)  (1,3)  (1,4)  (1,5)  (1,6)
        2           (2,1)  (2,2)  (2,3)  (2,4)  (2,5)  (2,6)
        3           (3,1)  (3,2)  (3,3)  (3,4)  (3,5)  (3,6)
        4           (4,1)  (4,2)  (4,3)  (4,4)  (4,5)  (4,6)
        5           (5,1)  (5,2)  (5,3)  (5,4)  (5,5)  (5,6)
        6           (6,1)  (6,2)  (6,3)  (6,4)  (6,5)  (6,6)

Just like in the blood type example, b/c of my assumptions, I conclude that these 36 possibilities are equally likely. We will do a number of calculations now. For ease of presentation, define X_1 to be the number obtained on the first cast of the die and let X_2 denote the number obtained on the second cast of the die. We call X_1 and X_2 random variables, which means that to each possible outcome of the CM they assign a number. Every random variable has a probability distribution, which is simply a listing of its possible values along with the probability of each value. Note that X_1 and X_2 have the same probability distribution; a fact we describe by saying that they are identically distributed. Below is the common probability distribution for X_1 and X_2.

    Value   Probability
      1        1/6
      2        1/6
      3        1/6
      4        1/6
      5        1/6
      6        1/6
    Total       1

As we have seen, X_1 and X_2 each has its own probability distribution (which happens to be the same). It is also useful to talk about their joint probability distribution, which is concerned with how they interact. I will illustrate this idea with a number of computations.
P(X_1 = 3 and X_2 = 4) = 1/36, b/c, by inspection, exactly one of the 36 ordered pairs in the earlier table has a 3 in the first position and a 4 in the second position.
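Computations like this one can be checked by brute-force enumeration of the 36 pairs. Here is a short sketch in Python (my choice of language; the text does not prescribe one). Notice that the joint probability 1/36 is the product of the two individual probabilities (1/6)(1/6), an observation that is developed below.

```python
from itertools import product

# Enumerate the 36 equally likely ordered pairs (X1, X2).
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    # Probability of an event: (number of favorable pairs) / 36.
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

joint = prob(lambda o: o == (3, 4))    # P(X1 = 3, X2 = 4)
first = prob(lambda o: o[0] == 3)      # P(X1 = 3)
second = prob(lambda o: o[1] == 4)     # P(X2 = 4)

print(joint == 1/36, first == 1/6, second == 1/6)  # True True True
```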

Before we proceed, I want to invoke my laziness again. It is too much bother to type, say, P(X_1 = 3 and X_2 = 4). It is much easier to type P(X_1 = 3, X_2 = 4). We will follow this practice; a comma within a probability statement represents the word and. For future reference, note that P(X_1 = 3, X_2 = 4) = 1/36, as does P(X_1 = 3)P(X_2 = 4). In words, the word and within a probability statement tells us to multiply.

Here is another example. P(X_1 ≤ 4, X_2 ≥ 4) = 12/36, b/c, as you can see from the table below, exactly 12 of the 36 pairs have the required property.

    X_1 \ X_2    1    2    3    4    5    6
        1                       X    X    X
        2                       X    X    X
        3                       X    X    X
        4                       X    X    X
        5
        6

Note again that P(X_1 ≤ 4, X_2 ≥ 4) = 12/36 gives the same answer as P(X_1 ≤ 4)P(X_2 ≥ 4) = (4/6)(3/6) = 12/36. This last property is called the multiplication rule for independent random variables. It is a very important result. It says that if we have two random variables that are independent, then we can compute joint probabilities by using individual probabilities. In simpler words, if we want to know the probability of X_1 doing something and X_2 doing something, then we can calculate two individual probabilities, one for X_1 and one for X_2, and then take the product of these two individual probabilities. As we shall see repeatedly in this class, the multiplication rule for independent random variables is a great labor saving device.

The above ideas for two casts of a die can be extended to any number of casts of a die. In particular, define X_1 to be the number obtained on the first cast of the die; X_2 to be the number obtained on the second cast of the die; X_3 to be the number obtained on the third cast of the die;

and so on; in general, X_k is the number obtained on the kth cast of the die. If we assume that the casts are independent (that is, that no outcome has any influence on another outcome), then we can use the multiplication rule to calculate probabilities. Some examples are below.

You might be familiar with the popular dice game Yahtzee. In this game, a player casts five dice. If all dice show the same number, then the player has achieved a Yahtzee. One of the first things you learn upon playing the game Yahtzee is that the event Yahtzee occurs only rarely. We will calculate its probability. Let's find the probability of throwing five 1's when casting five dice. We write this as:

    P(1, 1, 1, 1, 1) = (1/6)^5.

Now, the probability of a Yahtzee is:

    P(Y_1 or Y_2 or Y_3 or Y_4 or Y_5 or Y_6),

where Y_k means all five dice land with the side k facing up; in words, Y_k means a Yahtzee on the number k. Clearly all of the Y_k have the same probability. Thus, by Rule 3, the probability of a Yahtzee is:

    6(1/6)^5 = 1/1296 ≈ 0.0008.

There is a slicker way to calculate the probability of a Yahtzee. Imagine that you cast the dice one-at-a-time. (Don't play Yahtzee this way; it will really annoy your friends.) No matter how the first die lands, you can still get a Yahtzee. (An ESPN anchor might announce, "Ralph is on pace for a Yahtzee!") To obtain a Yahtzee, your last four dice must match your first one. The probability of this event is (1/6)^4 = 1/1296 ≈ 0.0008, as above.

Here is our general definition of independent and identically distributed trials, abbreviated i.i.d.: Random variables X_1, X_2, X_3, ... all have the same probability distribution and these random variables are independent. But how do we know we have independent random variables? This question is a bit tricky. Oftentimes, we simply assume it to be true. This is indeed what I did above when I assumed that the outcome of the first cast of the die has no influence on the outcome of the second cast.
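If you would rather check the Yahtzee calculation empirically, here is a sketch that simulates many rounds of five independent casts; the relative frequency of Yahtzees should land close to 1/1296 ≈ 0.0008. (The seed and the number of rounds are my arbitrary choices.)

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def cast_five():
    # Five independent casts of a balanced die.
    return [random.randint(1, 6) for _ in range(5)]

rounds = 200_000
yahtzees = sum(1 for _ in range(rounds) if len(set(cast_five())) == 1)

exact = 6 * (1 / 6) ** 5   # = (1/6)**4 = 1/1296
print(yahtzees / rounds, exact)
```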
In other problems, however, we will need to work from first principles to determine whether or not two random variables are independent. Also, when we try to apply these ideas to scientific problems that are more complex than tossing coins or casting dice, we will need to give careful consideration to whether or not independence makes sense scientifically. Finally, we will learn how to use data to investigate whether the assumption of independence is reasonable.

Finally (!), I am ready to answer the third of our three questions about probability, posed long ago on page 5 of Chapter 1. Namely, if I determine/state P(A) = 0.25, what does this mean? There is a famous result in probability theory that answers this question. Sort of. In a limited situation. It is called the Law of Large Numbers (LLN). I will try to explain it.

Let X_1 be a random variable and let A be some event whose occurrence is determined by the value of X_1. Let X_1, X_2, X_3, ..., X_n be independent and identically distributed trials. Clearly, for each of X_2, X_3, ..., X_n we can determine whether or not event A occurs. Then:

1. Count the number of times that A occurs in the first n trials.
2. Divide the frequency you obtained in step 1 by n, to obtain the relative frequency of occurrence of event A in n trials.

The LLN states that in the limit, as n becomes larger without bound, the relative frequency of occurrence of event A converges to P(A).

Here is an example. (This appears on page 59 of the text.) I programmed my computer to simulate 60,000 independent trials for casting a balanced die (ELC). The results are summarized in the table on page 59 of the text, which lists, for each of the six outcomes, its frequency, its relative frequency, its probability (1/6) and the difference between the last two. We can see that for each of the six possible outcomes, the relative frequency of occurrence of the outcome is very close to its probability, as predicted by the LLN. For any event larger than a single outcome, the relative frequency of the event will be the sum of the appropriate relative frequencies, which, from the table, will be close to its probability.

To summarize, when we have independent and identically distributed trials, the probability of an event is approximately equal to its long-run relative frequency. This result is used in both directions. If we know the probability, we can predict the long-run relative frequency of occurrence. If, however, we do not know the probability, we can approximate it by performing a large computer simulation and calculating the relative frequency of occurrence. This latter use is much more important to us and we will use it many times in this course.

One of the main users of the first application of the LLN is gambling casinos. I will give a brief example of this. An American roulette wheel has 38 slots, each slot with a number and a color.
For this example, I will focus on the color. Two slots are colored green, 18 are red and 18 are black. Red is a popular bet and the casino pays even money to a winner. If we assume that the ELC is appropriate for the roulette wheel, then the probability that a red bet wins is 18/38 = 0.4737. But a gambler is primarily concerned with his/her relative frequency of winning. Suppose that one gambler places a very large number, n, of one dollar bets on red. By the LLN, the relative frequency of winning bets will be very close to 0.4737 and the relative frequency of losing bets will be very close to 1 − 0.4737 = 0.5263. In simpler terms, in the long

run, for every $100 bet on red, the casino pays out 2(47.37) = 94.74 dollars, for a net profit of $5.26 for every $100 bet. As a side note, when a regular person goes to a casino, he/she can see that every table game has a range of allowable bets. For example, there might be a blackjack table that states that the minimum bet allowed is $5 and the maximum is $500. Well, a regular person likely pays no attention to the maximum, but it is very important to the casino. As a silly and extreme example, suppose Bill Gates walks into a casino and wants to place a $50 billion bet on red. No casino could/would accept the bet. Why?

2.2 Bernoulli Trials

In Section 1 we learned about i.i.d. trials. In this section, we study a very important special case of these, namely Bernoulli trials (BT). If each trial yields one of exactly two possible outcomes, then we have BT. B/c this is so important, I will be a bit redundant and explicitly present the assumptions of BT.

The Assumptions of Bernoulli Trials. There are three:

1. Each trial results in one of two possible outcomes, denoted success (S) or failure (F).
2. The probability of S remains constant from trial-to-trial and is denoted by p. Write q = 1 − p for the constant probability of F.
3. The trials are independent.

When we are doing arithmetic, it will be convenient to represent S by the number 1 and F by the number 0. One reason that BT are so important is that if we have BT, we can calculate probabilities of a great many events. Our first tool for calculation, of course, is the multiplication rule that we learned in Section 1. For example, suppose that we have n = 5 BT with p = 0.70. The probability that the BT yield four successes followed by a failure is:

    P(SSSSF) = ppppq = (0.70)^4(0.30) = 0.0720.

Our next tool is extremely powerful and very useful in science. It is the binomial probability distribution. Suppose that we plan to perform/observe n BT. Let X denote the total number of successes in the n trials.
The probability distribution of X is given by the following equation:

    P(X = x) = [n!/(x!(n − x)!)] p^x q^(n−x), for x = 0, 1, ..., n.    (2.1)

To use this formula, recall that n! is read "n-factorial" and is computed as follows: 1! = 1; 2! = 2(1) = 2; 3! = 3(2)(1) = 6; 4! = 4(3)(2)(1) = 24;

and so on. By special definition, 0! = 1. (Note to the extremely interested reader: If you want to see why Equation 2.1 is correct, go to the link to the Revised Student Study Guide on my webpage and read the More Mathematics sections of Chapters 2, 3 and 5.)

I will do an extended example to illustrate the use of Equation 2.1. Suppose that n = 5 and p = 0.60. I will derive the probability distribution for X.

    P(X = 0) = [5!/(0!5!)](0.60)^0(0.40)^5 = 0.0102
    P(X = 1) = [5!/(1!4!)](0.60)^1(0.40)^4 = 0.0768
    P(X = 2) = [5!/(2!3!)](0.60)^2(0.40)^3 = 0.2304
    P(X = 3) = [5!/(3!2!)](0.60)^3(0.40)^2 = 0.3456
    P(X = 4) = [5!/(4!1!)](0.60)^4(0.40)^1 = 0.2592
    P(X = 5) = [5!/(5!0!)](0.60)^5(0.40)^0 = 0.0778

You should check the above computations to make sure you are comfortable using Equation 2.1. Here are some guidelines for this class. If n ≤ 8, you should be able to evaluate Equation 2.1 by hand as I have done above for n = 5. For n ≥ 9, I recommend using a statistical software package on a computer. For example, the sampling distribution for X for n = 25 and p = 0.50 is presented on page 161 of the text. Equation 2.1 is called the binomial probability distribution with parameters n and p; it is denoted by Bin(n, p). With this notation, we see that my earlier by-hand effort was the Bin(5,0.60) and the table on page 161 of the text is the Bin(25,0.50).

Sadly, life is a bit more complicated than the above. In particular, a statistical software package should not be considered a panacea for the binomial. For example, if I direct my computer to calculate the Bin(n,0.50) for any n ≥ 1023, I get an error message; the computer program is smart enough to realize that it has messed up and its answer is nonsense; hence, it does not give me an answer. Why does this happen? Well, consider the computation of P(X = 1000) for the Bin(2000,0.50). This involves a really huge number (2000!) divided by the square of a really huge number (1000!), and then multiplied by a really, really small positive number ((0.50)^2000).
Unless the computer programmer exhibits incredible care in writing the code, the result will be an overflow or an underflow or both. But before we condemn the programmer for carelessness or bemoan the limits of the human mind, note the following. We do not need to evaluate Equation 2.1 for really large n's b/c there is a very easy way to obtain a good approximation to the exact answer. Sadly, it will take some time to motivate this approximate method. But without it, there really is no course, so we need to do it.
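As an aside, the overflow is a limitation of fixed-precision arithmetic, not of the mathematics. An environment with exact integer arithmetic can evaluate this particular case of Equation 2.1 without any overflow at all. Here is a sketch in Python (my choice of tool here; it is not the statistical package described above):

```python
from fractions import Fraction
from math import comb

# Exact P(X = 1000) for the Bin(2000, 0.50): Python's integers and
# Fractions never overflow, so 2000!/(1000!)^2 times (1/2)^2000 is
# computed exactly, then converted to a decimal at the very end.
exact = comb(2000, 1000) * Fraction(1, 2) ** 2000
print(float(exact))  # roughly 0.018
```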

Figures on pages 162 and 163 of the text are probability histograms for several binomial probability distributions. Here is how they are drawn. The method I am going to give you works only if the possible values of the random variable are equally spaced on the number line. This restriction will not be a problem in this course.

1. On a horizontal number line, mark all possible values of X: 0, 1, 2, ..., n. (Not all possibilities are shown in the pictures in the text.)
2. Determine the value of δ (Greek lower case delta) for the random variable of interest. The number δ is the distance between any two consecutive values of the random variable. For the binomial, δ = 1, which makes life much simpler.
3. Above each x = 0, 1, 2, ..., n, draw a rectangle, with its center at x and its height equal to P(X = x)/δ.

Of course, in the current case, δ = 1, so the height of each rectangle equals the probability of its center value. For a probability histogram (PH) the area of a rectangle equals the probability of its center value, b/c

    Area = Base × Height = δ × [P(X = x)/δ] = P(X = x).

A PH allows us to see a probability distribution. For example, for the pictures in the text, we can see that the binomial is symmetric for p = 0.50 and not symmetric for p ≠ 0.50. This is not an accident of the four pictures I chose to present; indeed, the binomial is symmetric if, and only if, p = 0.50. By the way, I am making the tacit assumption that 0 < p < 1; that is, that p is neither 1 nor 0. Trials for which successes are certain or impossible are not of interest to us. (The idea is that if p = 1, then P(X = n) = 1 and its PH will consist of only one rectangle. Being just one rectangle, it is symmetric.)

Here are some other facts about the binomial illustrated by the pictures in the text.

1. The PH for a binomial always has exactly one peak. The peak can be one or two rectangles wide, but never wider.
2. If np is an integer, then there is a one-rectangle wide peak located above np.
3.
If np is not an integer, then the peak will occur either at the integer immediately below or above np; or, in some cases, at both of these integers.
4. If you move away from the peak in either direction, the heights of the rectangles become shorter. (If the peak occurs at either 0 or n, this fact is true in the one direction away from the peak.)

It turns out to be very important to have a way to measure the center and spread of a PH. The center is measured by the center of gravity, defined as follows. Look at a PH. Imagine that the rectangles are made of a material of uniform mass and that the number line has no mass. Next, imagine placing a fulcrum in support of the number line. Now look at the PH for the Bin(100,0.50). If the fulcrum is placed at 50, then the PH will balance. B/c

of this, 50 is called the center of gravity of the Bin(100,0.50) PH. Similarly, for any binomial PH, there is a point along the number line that will make the PH balance; this point is the center of gravity. The center of gravity is also called the mean of the PH. The mean is denoted by the Greek letter mu: µ. It is an algebraic fact that for the binomial, µ = np.

It is difficult to explain our measure of spread, so I won't. There are two related measures of spread for a PH: the variance and the standard deviation. The variance is denoted by σ^2 and the standard deviation by σ. As you have probably noticed, the variance is simply the square of the standard deviation. Thus, we don't really need both of these measures, yet in some settings it is more convenient to think of the variance and in others it is more convenient to focus on the standard deviation. In any event, for the binomial, σ^2 = npq and σ = √(npq).

We needed to learn about the center and spread so that I can tell you about a very important operation, standardizing. Let X be any random variable with mean µ and standard deviation σ. Then the standardized version of X is denoted by Z and is given by the equation:

    Z = (X − µ)/σ.

Before I show you why standardizing is important and useful, let's do a simple example. Suppose that X ~ Bin(3,0.25). You can verify the following facts about X.

1. Its mean is µ = np = 3(0.25) = 0.75 and its standard deviation is σ = √(3(0.25)(0.75)) = √0.5625 = 0.75.
2. Its probability distribution is:

    P(X = 0) = 0.4219; P(X = 1) = 0.4219; P(X = 2) = 0.1406; and P(X = 3) = 0.0156.

3. The formula for Z is Z = (X − 0.75)/0.75. Thus, the possible values of Z are: −1, 1/3, 5/3 and 3. Thus, the probability distribution for Z is:

    P(Z = −1.00) = 0.4219; P(Z = 1/3) = 0.4219; P(Z = 5/3) = 0.1406; and P(Z = 3) = 0.0156.
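The Bin(3,0.25) computations can be reproduced in a few lines of code. Here is a sketch in Python that prints each value x, its standardized value z, and the probability that both x and z inherit:

```python
from math import comb, sqrt

n, p = 3, 0.25
q = 1 - p
mu = n * p               # mean: np = 0.75
sigma = sqrt(n * p * q)  # standard deviation: sqrt(npq) = 0.75

for x in range(n + 1):
    prob = comb(n, x) * p**x * q**(n - x)   # Equation 2.1
    z = (x - mu) / sigma                    # standardized value
    print(x, round(z, 4), round(prob, 4))
```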

As we can see from this example, converting from X to Z simply changes the values of the random variable; the probabilities for X are inherited by Z. Given that Z has a probability distribution, it has a PH. This is a bit trickier than the binomial with its δ = 1. As a result of standardizing, unless σ = √(npq) = 1, the δ for Z will not be one. In any event, the PH for Z can be drawn. In fact, I have drawn three such PH's for the standardized binomial on pages 164 and 165 of the text. Superimposed on these pictures is the snc. It is obvious visually, at least for the three situations pictured in the text, that the snc should provide a good approximation to binomial probabilities. I will illustrate the method for a number of examples and you will have homework to hone your skills.

Suppose that X ~ Bin(100,0.50). I want to calculate P(X ≥ 55). With the help of my computer, I know that this probability equals 0.1841. We will now approximate this probability using the snc. First, note that µ = 100(0.50) = 50 and σ = √(100(0.50)(0.50)) = 5. First look at the picture on page 162 of the text. The probability that we want is the area of the rectangle centered at 55 plus the areas of all rectangles to the right of it. The rectangle centered at 55 actually begins at 54.5; thus, we want the sum of all the areas beginning at 54.5 and going to the right. This picture-based conversion of 55 to 54.5 is called a continuity correction and it greatly improves the accuracy of the approximation. We proceed as follows:

    P(X ≥ 55) = P(X ≥ 54.5); this is the continuity correction. Next we standardize:

    P(X ≥ 54.5) = P((X − µ)/σ ≥ (54.5 − 50)/5) = P(Z ≥ 0.90).

Now we use the snc approximation to get:

    P(Z ≥ 0.90) ≈ the area under the snc to the right of 0.90, which equals 0.1841.

Our approximation is exact to four digits! Let us summarize and generalize what we have learned from this example. Suppose that X ~ Bin(n, p) and we want to use the snc to approximate P(X ≥ x), for some number x. Here is how we proceed.
Calculate µ = np, σ = √(npq) and z = (x − 0.5 − µ)/σ. Next,

    P(X ≥ x) = P(X ≥ x − 0.5) ≈ P(Z ≥ z),

which we approximate by finding the area under the snc to the right of z. Or even more briefly: to approximate P(X ≥ x), find the area under the snc to the right of z = (x − 0.5 − µ)/σ.

Let's try this out. Suppose that X ~ Bin(100,0.20) and we want to know P(X ≥ 17). We calculate µ = 100(0.20) = 20, σ = √(100(0.20)(0.80)) = 4 and z = (16.5 − 20)/4 = −0.875. The exact answer can be found with the help of my computer. The snc approximate answer is:

    0.8106, if I round z to −0.88;

    0.8078, if I round z to −0.87.

Either of these answers is quite good; one is sensational, but I would need to be psychic to choose it. Next, suppose that X ~ Bin(1600,0.50) and we want to know P(X ≥ 820). We calculate µ = 1600(0.50) = 800, σ = √(1600(0.50)(0.50)) = 20 and z = (819.5 − 800)/20 = 0.975. The exact answer is unavailable b/c of the limitations of my computer mentioned earlier. The snc approximate answer is:

    0.1635, if I round z to 0.98;
    0.1660, if I round z to 0.97.

Of course, we might be interested in calculating a probability other than P(X ≥ x). In fact, below is a fairly complete list of expressions that might interest us:

    P(X = x), P(X ≥ x), P(X > x), P(X < x), P(a ≤ X ≤ b), P(a < X ≤ b), P(a ≤ X < b), and P(a < X < b).

It would be too tedious (even for this class!) to show you how to handle each of these. Instead, I will do several examples. The basic idea is to behave like the mathematician in the kitchen, a story that was told in lecture. In short, faced with a question, change it into one or more questions that can be written as P(X ≥ x). Here are some examples. For these examples, the values of n, p, µ and σ are irrelevant for my goal; I just want to show you how to rewrite a question into the form we can handle.

1. P(X > 10) = P(X ≥ 11).
2. P(X < 20) = 1 − P(X ≥ 20).
3. P(X ≤ 25) = 1 − P(X ≥ 26).
4. P(X = 16) = P(X ≥ 16) − P(X ≥ 17).
5. P(10 < X < 20) = P(X ≥ 11) − P(X ≥ 20).
6. P(10 ≤ X < 20) = P(X ≥ 10) − P(X ≥ 20).
7. P(10 < X ≤ 20) = P(X ≥ 11) − P(X ≥ 21).
8. P(10 ≤ X ≤ 20) = P(X ≥ 10) − P(X ≥ 21).

You will have an opportunity to try these out when you do your homework.
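All of the machinery above fits in a few lines of code. The sketch below (in Python, my choice of language) computes an exact binomial right-tail probability by summing Equation 2.1, compares it to the snc approximation with the continuity correction for the two worked examples, and checks one of the rewriting rules. The area under the snc is obtained from the standard error function erf, using P(Z ≥ z) = 0.5(1 − erf(z/√2)).

```python
from math import comb, erf, sqrt

def binom_geq(x, n, p):
    # Exact P(X >= x) for X ~ Bin(n, p), summing Equation 2.1.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def snc_geq(x, n, p):
    # snc approximation of P(X >= x), with the continuity correction.
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    z = (x - 0.5 - mu) / sigma
    return 0.5 * (1 - erf(z / sqrt(2)))   # area under the snc right of z

# The two worked examples above: exact answer next to the approximation.
print(binom_geq(55, 100, 0.50), snc_geq(55, 100, 0.50))
print(binom_geq(17, 100, 0.20), snc_geq(17, 100, 0.20))

# Rewrite 4: P(X = 16) = P(X >= 16) - P(X >= 17), checked exactly
# for n = 100 and p = 0.20.
n, p = 100, 0.20
pmf16 = comb(n, 16) * p**16 * (1 - p)**84
assert abs(pmf16 - (binom_geq(16, n, p) - binom_geq(17, n, p))) < 1e-12
```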

2.3 Finite Populations

A finite population is a well-defined collection of individuals. Here are some examples:

    All students registered in this course.
    All persons who are currently registered to vote in Wisconsin.
    All persons who voted in Wisconsin in the 2008 presidential election.

Associated with each individual is a response. In this section we restrict attention to responses that have two possible values, called dichotomous responses. As earlier, one of the values is called a success (S or 1) and the other a failure (F or 0).

It is convenient to visualize a finite population as a box of cards. Each member of the population has a card in the box, called the population box, and on the member's card is its value of the response, 1 or 0. The total number of cards in the box marked 1 is denoted by s (for success) and the total number of cards marked 0 is denoted by f (for failure). The total number of cards in the box is denoted by N = s + f. Also, let p = s/N denote the proportion of the cards in the box marked 1 and q = f/N denote the proportion of the cards in the box marked 0. For example, one could have: s = 60 and f = 40, giving N = 100, p = 0.60 and q = 0.40. Clearly, there is a great deal of redundancy in these five numbers; statisticians prefer to focus on N and p. Knowledge of this pair allows one to determine the other three numbers. We refer to a population box as Box(N,p) to denote a box with N cards, of which Np cards are marked 1.

Consider the CM: Select one card at random from Box(N,p). After operating this CM, place the selected card back into the population box and repeat. Repeat this process n times. This operation is referred to as selecting n cards at random with replacement. Viewing each selection as a trial, we can see that we have BT:

1. Each trial results in one of two possible outcomes, denoted success (S) or failure (F).
2. The probability of S remains constant from trial-to-trial.
3. The trials are independent.
Thus, everything we have learned (the binomial sampling distribution) or will learn about BT is also true when one selects n cards at random with replacement from Box(N,p).
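This claim can be checked by simulation. Below is a sketch using the Box(100, 0.60) from the example above (the seed and the number of repetitions are my arbitrary choices). B/c each card is replaced before the next draw, the trials are BT with p = 0.60, so the average number of successes in n = 25 draws should be close to the binomial mean np = 15.

```python
import random

random.seed(2)  # fixed seed for reproducibility

# Box(100, 0.60): 60 cards marked 1 and 40 cards marked 0.
box = [1] * 60 + [0] * 40

def successes_in_draws(n):
    # Select n cards at random *with replacement*: every draw puts the
    # card back, so P(card marked 1) = 0.60 on each trial.
    return sum(random.choice(box) for _ in range(n))

n, reps = 25, 20_000
mean_successes = sum(successes_in_draws(n) for _ in range(reps)) / reps
print(mean_successes)  # should be close to np = 15
```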


Chapter 8: An Introduction to Probability and Statistics Course S3, 200 07 Chapter 8: An Introduction to Probability and Statistics This material is covered in the book: Erwin Kreyszig, Advanced Engineering Mathematics (9th edition) Chapter 24 (not including

More information

Chapter 2.5 Random Variables and Probability The Modern View (cont.)

Chapter 2.5 Random Variables and Probability The Modern View (cont.) Chapter 2.5 Random Variables and Probability The Modern View (cont.) I. Statistical Independence A crucially important idea in probability and statistics is the concept of statistical independence. Suppose

More information

At the start of the term, we saw the following formula for computing the sum of the first n integers:

At the start of the term, we saw the following formula for computing the sum of the first n integers: Chapter 11 Induction This chapter covers mathematical induction. 11.1 Introduction to induction At the start of the term, we saw the following formula for computing the sum of the first n integers: Claim

More information

Chapter 5 Simplifying Formulas and Solving Equations

Chapter 5 Simplifying Formulas and Solving Equations Chapter 5 Simplifying Formulas and Solving Equations Look at the geometry formula for Perimeter of a rectangle P = L W L W. Can this formula be written in a simpler way? If it is true, that we can simplify

More information

RANDOM WALKS IN ONE DIMENSION

RANDOM WALKS IN ONE DIMENSION RANDOM WALKS IN ONE DIMENSION STEVEN P. LALLEY 1. THE GAMBLER S RUIN PROBLEM 1.1. Statement of the problem. I have A dollars; my colleague Xinyi has B dollars. A cup of coffee at the Sacred Grounds in

More information

5 CORRELATION. Expectation of the Binomial Distribution I The Binomial distribution can be defined as: P(X = r) = p r q n r where p + q = 1 and 0 r n

5 CORRELATION. Expectation of the Binomial Distribution I The Binomial distribution can be defined as: P(X = r) = p r q n r where p + q = 1 and 0 r n 5 CORRELATION The covariance of two random variables gives some measure of their independence. A second way of assessing the measure of independence will be discussed shortly but first the expectation

More information

Chapter 5 Simplifying Formulas and Solving Equations

Chapter 5 Simplifying Formulas and Solving Equations Chapter 5 Simplifying Formulas and Solving Equations Look at the geometry formula for Perimeter of a rectangle P = L + W + L + W. Can this formula be written in a simpler way? If it is true, that we can

More information

Probability Experiments, Trials, Outcomes, Sample Spaces Example 1 Example 2

Probability Experiments, Trials, Outcomes, Sample Spaces Example 1 Example 2 Probability Probability is the study of uncertain events or outcomes. Games of chance that involve rolling dice or dealing cards are one obvious area of application. However, probability models underlie

More information

7.1 What is it and why should we care?

7.1 What is it and why should we care? Chapter 7 Probability In this section, we go over some simple concepts from probability theory. We integrate these with ideas from formal language theory in the next chapter. 7.1 What is it and why should

More information

Chapter 4a Probability Models

Chapter 4a Probability Models Chapter 4a Probability Models 4a.2 Probability models for a variable with a finite number of values 297 4a.1 Introduction Chapters 2 and 3 are concerned with data description (descriptive statistics) where

More information

2011 Pearson Education, Inc

2011 Pearson Education, Inc Statistics for Business and Economics Chapter 3 Probability Contents 1. Events, Sample Spaces, and Probability 2. Unions and Intersections 3. Complementary Events 4. The Additive Rule and Mutually Exclusive

More information

Solving Equations by Adding and Subtracting

Solving Equations by Adding and Subtracting SECTION 2.1 Solving Equations by Adding and Subtracting 2.1 OBJECTIVES 1. Determine whether a given number is a solution for an equation 2. Use the addition property to solve equations 3. Determine whether

More information

Expected Value II. 1 The Expected Number of Events that Happen

Expected Value II. 1 The Expected Number of Events that Happen 6.042/18.062J Mathematics for Computer Science December 5, 2006 Tom Leighton and Ronitt Rubinfeld Lecture Notes Expected Value II 1 The Expected Number of Events that Happen Last week we concluded by showing

More information

Probability and the Second Law of Thermodynamics

Probability and the Second Law of Thermodynamics Probability and the Second Law of Thermodynamics Stephen R. Addison January 24, 200 Introduction Over the next several class periods we will be reviewing the basic results of probability and relating probability

More information

1 What is the area model for multiplication?

1 What is the area model for multiplication? for multiplication represents a lovely way to view the distribution property the real number exhibit. This property is the link between addition and multiplication. 1 1 What is the area model for multiplication?

More information

MA 1125 Lecture 15 - The Standard Normal Distribution. Friday, October 6, Objectives: Introduce the standard normal distribution and table.

MA 1125 Lecture 15 - The Standard Normal Distribution. Friday, October 6, Objectives: Introduce the standard normal distribution and table. MA 1125 Lecture 15 - The Standard Normal Distribution Friday, October 6, 2017. Objectives: Introduce the standard normal distribution and table. 1. The Standard Normal Distribution We ve been looking at

More information

Module 8 Probability

Module 8 Probability Module 8 Probability Probability is an important part of modern mathematics and modern life, since so many things involve randomness. The ClassWiz is helpful for calculating probabilities, especially those

More information

Descriptive Statistics (And a little bit on rounding and significant digits)

Descriptive Statistics (And a little bit on rounding and significant digits) Descriptive Statistics (And a little bit on rounding and significant digits) Now that we know what our data look like, we d like to be able to describe it numerically. In other words, how can we represent

More information

STAT/SOC/CSSS 221 Statistical Concepts and Methods for the Social Sciences. Random Variables

STAT/SOC/CSSS 221 Statistical Concepts and Methods for the Social Sciences. Random Variables STAT/SOC/CSSS 221 Statistical Concepts and Methods for the Social Sciences Random Variables Christopher Adolph Department of Political Science and Center for Statistics and the Social Sciences University

More information

Chapter 2: Discrete Distributions. 2.1 Random Variables of the Discrete Type

Chapter 2: Discrete Distributions. 2.1 Random Variables of the Discrete Type Chapter 2: Discrete Distributions 2.1 Random Variables of the Discrete Type 2.2 Mathematical Expectation 2.3 Special Mathematical Expectations 2.4 Binomial Distribution 2.5 Negative Binomial Distribution

More information

Chapter 14. From Randomness to Probability. Copyright 2012, 2008, 2005 Pearson Education, Inc.

Chapter 14. From Randomness to Probability. Copyright 2012, 2008, 2005 Pearson Education, Inc. Chapter 14 From Randomness to Probability Copyright 2012, 2008, 2005 Pearson Education, Inc. Dealing with Random Phenomena A random phenomenon is a situation in which we know what outcomes could happen,

More information

Discrete Probability. Chemistry & Physics. Medicine

Discrete Probability. Chemistry & Physics. Medicine Discrete Probability The existence of gambling for many centuries is evidence of long-running interest in probability. But a good understanding of probability transcends mere gambling. The mathematics

More information

STA Module 4 Probability Concepts. Rev.F08 1

STA Module 4 Probability Concepts. Rev.F08 1 STA 2023 Module 4 Probability Concepts Rev.F08 1 Learning Objectives Upon completing this module, you should be able to: 1. Compute probabilities for experiments having equally likely outcomes. 2. Interpret

More information

Senior Math Circles November 19, 2008 Probability II

Senior Math Circles November 19, 2008 Probability II University of Waterloo Faculty of Mathematics Centre for Education in Mathematics and Computing Senior Math Circles November 9, 2008 Probability II Probability Counting There are many situations where

More information

Let us think of the situation as having a 50 sided fair die; any one number is equally likely to appear.

Let us think of the situation as having a 50 sided fair die; any one number is equally likely to appear. Probability_Homework Answers. Let the sample space consist of the integers through. {, 2, 3,, }. Consider the following events from that Sample Space. Event A: {a number is a multiple of 5 5, 0, 5,, }

More information

Conditional probabilities and graphical models

Conditional probabilities and graphical models Conditional probabilities and graphical models Thomas Mailund Bioinformatics Research Centre (BiRC), Aarhus University Probability theory allows us to describe uncertainty in the processes we model within

More information

Discrete Random Variables

Discrete Random Variables Discrete Random Variables An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan Introduction The markets can be thought of as a complex interaction of a large number of random processes,

More information

Expectation is linear. So far we saw that E(X + Y ) = E(X) + E(Y ). Let α R. Then,

Expectation is linear. So far we saw that E(X + Y ) = E(X) + E(Y ). Let α R. Then, Expectation is linear So far we saw that E(X + Y ) = E(X) + E(Y ). Let α R. Then, E(αX) = ω = ω (αx)(ω) Pr(ω) αx(ω) Pr(ω) = α ω X(ω) Pr(ω) = αe(x). Corollary. For α, β R, E(αX + βy ) = αe(x) + βe(y ).

More information

Uncertainty. Michael Peters December 27, 2013

Uncertainty. Michael Peters December 27, 2013 Uncertainty Michael Peters December 27, 20 Lotteries In many problems in economics, people are forced to make decisions without knowing exactly what the consequences will be. For example, when you buy

More information

Class 26: review for final exam 18.05, Spring 2014

Class 26: review for final exam 18.05, Spring 2014 Probability Class 26: review for final eam 8.05, Spring 204 Counting Sets Inclusion-eclusion principle Rule of product (multiplication rule) Permutation and combinations Basics Outcome, sample space, event

More information

A Event has occurred

A Event has occurred Statistics and probability: 1-1 1. Probability Event: a possible outcome or set of possible outcomes of an experiment or observation. Typically denoted by a capital letter: A, B etc. E.g. The result of

More information

Chapter 1 Review of Equations and Inequalities

Chapter 1 Review of Equations and Inequalities Chapter 1 Review of Equations and Inequalities Part I Review of Basic Equations Recall that an equation is an expression with an equal sign in the middle. Also recall that, if a question asks you to solve

More information

CISC 1100/1400 Structures of Comp. Sci./Discrete Structures Chapter 7 Probability. Outline. Terminology and background. Arthur G.

CISC 1100/1400 Structures of Comp. Sci./Discrete Structures Chapter 7 Probability. Outline. Terminology and background. Arthur G. CISC 1100/1400 Structures of Comp. Sci./Discrete Structures Chapter 7 Probability Arthur G. Werschulz Fordham University Department of Computer and Information Sciences Copyright Arthur G. Werschulz, 2017.

More information

Rules for Means and Variances; Prediction

Rules for Means and Variances; Prediction Chapter 7 Rules for Means and Variances; Prediction 7.1 Rules for Means and Variances The material in this section is very technical and algebraic. And dry. But it is useful for understanding many of the

More information

Lecture Slides. Elementary Statistics Eleventh Edition. by Mario F. Triola. and the Triola Statistics Series 4.1-1

Lecture Slides. Elementary Statistics Eleventh Edition. by Mario F. Triola. and the Triola Statistics Series 4.1-1 Lecture Slides Elementary Statistics Eleventh Edition and the Triola Statistics Series by Mario F. Triola 4.1-1 4-1 Review and Preview Chapter 4 Probability 4-2 Basic Concepts of Probability 4-3 Addition

More information

6.041/6.431 Spring 2009 Quiz 1 Wednesday, March 11, 7:30-9:30 PM. SOLUTIONS

6.041/6.431 Spring 2009 Quiz 1 Wednesday, March 11, 7:30-9:30 PM. SOLUTIONS 6.0/6.3 Spring 009 Quiz Wednesday, March, 7:30-9:30 PM. SOLUTIONS Name: Recitation Instructor: Question Part Score Out of 0 all 0 a 5 b c 5 d 5 e 5 f 5 3 a b c d 5 e 5 f 5 g 5 h 5 Total 00 Write your solutions

More information

Probability (10A) Young Won Lim 6/12/17

Probability (10A) Young Won Lim 6/12/17 Probability (10A) Copyright (c) 2017 Young W. Lim. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later

More information

STAT 345 Spring 2018 Homework 4 - Discrete Probability Distributions

STAT 345 Spring 2018 Homework 4 - Discrete Probability Distributions STAT 345 Spring 2018 Homework 4 - Discrete Probability Distributions Name: Please adhere to the homework rules as given in the Syllabus. 1. Coin Flipping. Timothy and Jimothy are playing a betting game.

More information

Name: 180A MIDTERM 2. (x + n)/2

Name: 180A MIDTERM 2. (x + n)/2 1. Recall the (somewhat strange) person from the first midterm who repeatedly flips a fair coin, taking a step forward when it lands head up and taking a step back when it lands tail up. Suppose this person

More information

What is Probability? Probability. Sample Spaces and Events. Simple Event

What is Probability? Probability. Sample Spaces and Events. Simple Event What is Probability? Probability Peter Lo Probability is the numerical measure of likelihood that the event will occur. Simple Event Joint Event Compound Event Lies between 0 & 1 Sum of events is 1 1.5

More information

Introduction to Probability 2017/18 Supplementary Problems

Introduction to Probability 2017/18 Supplementary Problems Introduction to Probability 2017/18 Supplementary Problems Problem 1: Let A and B denote two events with P(A B) 0. Show that P(A) 0 and P(B) 0. A A B implies P(A) P(A B) 0, hence P(A) 0. Similarly B A

More information

ACCESS TO SCIENCE, ENGINEERING AND AGRICULTURE: MATHEMATICS 2 MATH00040 SEMESTER / Probability

ACCESS TO SCIENCE, ENGINEERING AND AGRICULTURE: MATHEMATICS 2 MATH00040 SEMESTER / Probability ACCESS TO SCIENCE, ENGINEERING AND AGRICULTURE: MATHEMATICS 2 MATH00040 SEMESTER 2 2017/2018 DR. ANTHONY BROWN 5.1. Introduction to Probability. 5. Probability You are probably familiar with the elementary

More information

CS1800: Sequences & Sums. Professor Kevin Gold

CS1800: Sequences & Sums. Professor Kevin Gold CS1800: Sequences & Sums Professor Kevin Gold Moving Toward Analysis of Algorithms Today s tools help in the analysis of algorithms. We ll cover tools for deciding what equation best fits a sequence of

More information

Notes 6 Autumn Example (One die: part 1) One fair six-sided die is thrown. X is the number showing.

Notes 6 Autumn Example (One die: part 1) One fair six-sided die is thrown. X is the number showing. MAS 08 Probability I Notes Autumn 005 Random variables A probability space is a sample space S together with a probability function P which satisfies Kolmogorov s aioms. The Holy Roman Empire was, in the

More information

1.1.1 Algebraic Operations

1.1.1 Algebraic Operations 1.1.1 Algebraic Operations We need to learn how our basic algebraic operations interact. When confronted with many operations, we follow the order of operations: Parentheses Exponentials Multiplication

More information

Probability and Statistics

Probability and Statistics Probability and Statistics Kristel Van Steen, PhD 2 Montefiore Institute - Systems and Modeling GIGA - Bioinformatics ULg kristel.vansteen@ulg.ac.be CHAPTER 4: IT IS ALL ABOUT DATA 4a - 1 CHAPTER 4: IT

More information

GRE Quantitative Reasoning Practice Questions

GRE Quantitative Reasoning Practice Questions GRE Quantitative Reasoning Practice Questions y O x 7. The figure above shows the graph of the function f in the xy-plane. What is the value of f (f( ))? A B C 0 D E Explanation Note that to find f (f(

More information

37.3. The Poisson Distribution. Introduction. Prerequisites. Learning Outcomes

37.3. The Poisson Distribution. Introduction. Prerequisites. Learning Outcomes The Poisson Distribution 37.3 Introduction In this Section we introduce a probability model which can be used when the outcome of an experiment is a random variable taking on positive integer values and

More information

Math 381 Discrete Mathematical Modeling

Math 381 Discrete Mathematical Modeling Math 381 Discrete Mathematical Modeling Sean Griffin Today: -Projects -Central Limit Theorem -Markov Chains Handout Projects Deadlines: Project groups and project descriptions due w/ homework (Due 7/23)

More information

P(A) = Definitions. Overview. P - denotes a probability. A, B, and C - denote specific events. P (A) - Chapter 3 Probability

P(A) = Definitions. Overview. P - denotes a probability. A, B, and C - denote specific events. P (A) - Chapter 3 Probability Chapter 3 Probability Slide 1 Slide 2 3-1 Overview 3-2 Fundamentals 3-3 Addition Rule 3-4 Multiplication Rule: Basics 3-5 Multiplication Rule: Complements and Conditional Probability 3-6 Probabilities

More information

1 The Basic Counting Principles

1 The Basic Counting Principles 1 The Basic Counting Principles The Multiplication Rule If an operation consists of k steps and the first step can be performed in n 1 ways, the second step can be performed in n ways [regardless of how

More information

Discrete Distributions

Discrete Distributions A simplest example of random experiment is a coin-tossing, formally called Bernoulli trial. It happens to be the case that many useful distributions are built upon this simplest form of experiment, whose

More information

Probability Year 9. Terminology

Probability Year 9. Terminology Probability Year 9 Terminology Probability measures the chance something happens. Formally, we say it measures how likely is the outcome of an event. We write P(result) as a shorthand. An event is some

More information

P (E) = P (A 1 )P (A 2 )... P (A n ).

P (E) = P (A 1 )P (A 2 )... P (A n ). Lecture 9: Conditional probability II: breaking complex events into smaller events, methods to solve probability problems, Bayes rule, law of total probability, Bayes theorem Discrete Structures II (Summer

More information

Grades 7 & 8, Math Circles 10/11/12 October, Series & Polygonal Numbers

Grades 7 & 8, Math Circles 10/11/12 October, Series & Polygonal Numbers Faculty of Mathematics Waterloo, Ontario N2L G Centre for Education in Mathematics and Computing Introduction Grades 7 & 8, Math Circles 0//2 October, 207 Series & Polygonal Numbers Mathematicians are

More information

Grades 7 & 8, Math Circles 24/25/26 October, Probability

Grades 7 & 8, Math Circles 24/25/26 October, Probability Faculty of Mathematics Waterloo, Ontario NL 3G1 Centre for Education in Mathematics and Computing Grades 7 & 8, Math Circles 4/5/6 October, 017 Probability Introduction Probability is a measure of how

More information

Joint Probability Distributions and Random Samples (Devore Chapter Five)

Joint Probability Distributions and Random Samples (Devore Chapter Five) Joint Probability Distributions and Random Samples (Devore Chapter Five) 1016-345-01: Probability and Statistics for Engineers Spring 2013 Contents 1 Joint Probability Distributions 2 1.1 Two Discrete

More information

10.1. Randomness and Probability. Investigation: Flip a Coin EXAMPLE A CONDENSED LESSON

10.1. Randomness and Probability. Investigation: Flip a Coin EXAMPLE A CONDENSED LESSON CONDENSED LESSON 10.1 Randomness and Probability In this lesson you will simulate random processes find experimental probabilities based on the results of a large number of trials calculate theoretical

More information

Part 3: Parametric Models

Part 3: Parametric Models Part 3: Parametric Models Matthew Sperrin and Juhyun Park August 19, 2008 1 Introduction There are three main objectives to this section: 1. To introduce the concepts of probability and random variables.

More information

Unit 4 Patterns and Algebra

Unit 4 Patterns and Algebra Unit 4 Patterns and Algebra In this unit, students will solve equations with integer coefficients using a variety of methods, and apply their reasoning skills to find mistakes in solutions of these equations.

More information

STAT 201 Chapter 5. Probability

STAT 201 Chapter 5. Probability STAT 201 Chapter 5 Probability 1 2 Introduction to Probability Probability The way we quantify uncertainty. Subjective Probability A probability derived from an individual's personal judgment about whether

More information

Lecture 2 Binomial and Poisson Probability Distributions

Lecture 2 Binomial and Poisson Probability Distributions Binomial Probability Distribution Lecture 2 Binomial and Poisson Probability Distributions Consider a situation where there are only two possible outcomes (a Bernoulli trial) Example: flipping a coin James

More information

Chapter 5. The Goodness of Fit Test. 5.1 Dice, Computers and Genetics

Chapter 5. The Goodness of Fit Test. 5.1 Dice, Computers and Genetics Chapter 5 The Goodness of Fit Test 5.1 Dice, Computers and Genetics The CM of casting a die was introduced in Chapter 1. We assumed that the six possible outcomes of this CM are equally likely; i.e. we

More information

NIT #7 CORE ALGE COMMON IALS

NIT #7 CORE ALGE COMMON IALS UN NIT #7 ANSWER KEY POLYNOMIALS Lesson #1 Introduction too Polynomials Lesson # Multiplying Polynomials Lesson # Factoring Polynomials Lesson # Factoring Based on Conjugate Pairs Lesson #5 Factoring Trinomials

More information

STAT 285 Fall Assignment 1 Solutions

STAT 285 Fall Assignment 1 Solutions STAT 285 Fall 2014 Assignment 1 Solutions 1. An environmental agency sets a standard of 200 ppb for the concentration of cadmium in a lake. The concentration of cadmium in one lake is measured 17 times.

More information

Probability and Independence Terri Bittner, Ph.D.

Probability and Independence Terri Bittner, Ph.D. Probability and Independence Terri Bittner, Ph.D. The concept of independence is often confusing for students. This brief paper will cover the basics, and will explain the difference between independent

More information

This exam is closed book and closed notes. (You will have access to a copy of the Table of Common Distributions given in the back of the text.

This exam is closed book and closed notes. (You will have access to a copy of the Table of Common Distributions given in the back of the text. TEST #1 STA 5326 September 25, 2014 Name: Please read the following directions. DO NOT TURN THE PAGE UNTIL INSTRUCTED TO DO SO Directions This exam is closed book and closed notes. (You will have access

More information

*Karle Laska s Sections: There is no class tomorrow and Friday! Have a good weekend! Scores will be posted in Compass early Friday morning

*Karle Laska s Sections: There is no class tomorrow and Friday! Have a good weekend! Scores will be posted in Compass early Friday morning STATISTICS 100 EXAM 3 Spring 2016 PRINT NAME (Last name) (First name) *NETID CIRCLE SECTION: Laska MWF L1 Laska Tues/Thurs L2 Robin Tu Write answers in appropriate blanks. When no blanks are provided CIRCLE

More information

Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative

Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Section 0.6: Factoring from Precalculus Prerequisites a.k.a. Chapter 0 by Carl Stitz, PhD, and Jeff Zeager, PhD, is available under a Creative Commons Attribution-NonCommercial-ShareAlike.0 license. 201,

More information

the time it takes until a radioactive substance undergoes a decay

the time it takes until a radioactive substance undergoes a decay 1 Probabilities 1.1 Experiments with randomness Wewillusethetermexperimentinaverygeneralwaytorefertosomeprocess that produces a random outcome. Examples: (Ask class for some first) Here are some discrete

More information

Generating Function Notes , Fall 2005, Prof. Peter Shor

Generating Function Notes , Fall 2005, Prof. Peter Shor Counting Change Generating Function Notes 80, Fall 00, Prof Peter Shor In this lecture, I m going to talk about generating functions We ve already seen an example of generating functions Recall when we

More information

4. The Negative Binomial Distribution

4. The Negative Binomial Distribution 1 of 9 7/16/2009 6:40 AM Virtual Laboratories > 11. Bernoulli Trials > 1 2 3 4 5 6 4. The Negative Binomial Distribution Basic Theory Suppose again that our random experiment is to perform a sequence of

More information

Lecture 1. ABC of Probability

Lecture 1. ABC of Probability Math 408 - Mathematical Statistics Lecture 1. ABC of Probability January 16, 2013 Konstantin Zuev (USC) Math 408, Lecture 1 January 16, 2013 1 / 9 Agenda Sample Spaces Realizations, Events Axioms of Probability

More information

CS 125 Section #12 (More) Probability and Randomized Algorithms 11/24/14. For random numbers X which only take on nonnegative integer values, E(X) =

CS 125 Section #12 (More) Probability and Randomized Algorithms 11/24/14. For random numbers X which only take on nonnegative integer values, E(X) = CS 125 Section #12 (More) Probability and Randomized Algorithms 11/24/14 1 Probability First, recall a couple useful facts from last time about probability: Linearity of expectation: E(aX + by ) = ae(x)

More information

COMP6053 lecture: Sampling and the central limit theorem. Jason Noble,

COMP6053 lecture: Sampling and the central limit theorem. Jason Noble, COMP6053 lecture: Sampling and the central limit theorem Jason Noble, jn2@ecs.soton.ac.uk Populations: long-run distributions Two kinds of distributions: populations and samples. A population is the set

More information

Probability Year 10. Terminology

Probability Year 10. Terminology Probability Year 10 Terminology Probability measures the chance something happens. Formally, we say it measures how likely is the outcome of an event. We write P(result) as a shorthand. An event is some

More information

1. When applied to an affected person, the test comes up positive in 90% of cases, and negative in 10% (these are called false negatives ).

1. When applied to an affected person, the test comes up positive in 90% of cases, and negative in 10% (these are called false negatives ). CS 70 Discrete Mathematics for CS Spring 2006 Vazirani Lecture 8 Conditional Probability A pharmaceutical company is marketing a new test for a certain medical condition. According to clinical trials,

More information

Matrices, Row Reduction of Matrices

Matrices, Row Reduction of Matrices Matrices, Row Reduction of Matrices October 9, 014 1 Row Reduction and Echelon Forms In the previous section, we saw a procedure for solving systems of equations It is simple in that it consists of only

More information
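The equally-likely-outcomes reasoning above can be checked by exhaustive enumeration rather than by hand. Here is a minimal sketch in Python; the function name `blood_type` and all variable names are my own illustration, not from the text, and exact fractions are used so the probabilities come out as 1/4, 1/2, and 1/6 rather than rounded decimals.

```python
from itertools import product
from fractions import Fraction

# Blood type: Dad and Mom each contribute allele A or B, equally likely,
# and the two contributions are independent, giving 4 equally likely pairs.
alleles = ["A", "B"]
pairs = list(product(alleles, repeat=2))  # [('A','A'), ('A','B'), ('B','A'), ('B','B')]

def blood_type(dad, mom):
    # ('A','B') and ('B','A') both produce type AB; matching alleles give A or B.
    return "AB" if dad != mom else dad

counts = {}
for dad, mom in pairs:
    t = blood_type(dad, mom)
    counts[t] = counts.get(t, 0) + 1
probs = {t: Fraction(c, len(pairs)) for t, c in counts.items()}
# probs: {'A': 1/4, 'AB': 1/2, 'B': 1/4}, matching the table in the text.

# Two casts of a fair die: 36 equally likely ordered pairs (X1, X2).
outcomes = list(product(range(1, 7), repeat=2))
assert len(outcomes) == 36
# Marginal distribution of X1: each value occurs in 6 of the 36 pairs.
p_x1 = {v: Fraction(sum(1 for a, b in outcomes if a == v), 36) for v in range(1, 7)}
```

Enumerating the sample space this way makes the identically-distributed claim concrete: the same loop with `b == v` in place of `a == v` yields the identical distribution for X2.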