Discrete Probability Distributions


Ka-fu WONG
24 August 2007

Abstract: When there are many basic events, the probabilities of these events are better described by a probability function. Such a function can of course be visualized as a histogram or a table. Nevertheless, students are expected to get used to probability functions. Do not avoid them. Be prepared. More complicated probability functions will be introduced in the following chapter. We will try to illustrate the difficult concepts with many examples. However, there is no need to stop at these examples. Try to construct your own and discuss them with your TA and instructors.

With the basic knowledge of probability reviewed in the last chapter, we are equipped to discuss some commonly used statistical concepts and probability distributions. These concepts and probability distributions are often used by economists in economics and finance.

We live in an uncertain world; almost everything that will happen tomorrow is a random event. Examples are numerous: Will it rain tomorrow? Will the US central bank (i.e., the Federal Reserve) raise its target interest rate (i.e., the Federal Funds rate) in the next FOMC meeting? Will the price of my favorite stock go up tomorrow? By how much will the economy grow this year? How much time will it take for me to go to school tomorrow? Some of these events are numerical in nature. Those that are not numerical in nature can be coded into numerical values. Since the numerical values of these random events are not known at the time we ask these questions, each is a random variable.

Definition 1 (Random Variables): A random variable is a numerical value determined by the outcome of an experiment. A random variable is often denoted by a capital letter, e.g., X or Y.

Example 1 (Random variables): The following variable X is a random variable that maps numerical values to some events of weather.

X    event
1    rainy or cloudy
2    cloudy
3    cloudy or sunny
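Definition 1 can be made concrete with a few lines of code: a random variable is simply a rule assigning a number to each experimental outcome. The sketch below is illustrative only; the dictionary, the coding, and the sampling step are assumptions, not from the text.

```python
# A random variable maps outcomes of an experiment to numerical values.
# Hypothetical coding of the weather outcomes (rainy=1, cloudy=2, sunny=3).
import random

weather_code = {"rainy": 1, "cloudy": 2, "sunny": 3}

def realize(outcome):
    """Return the numerical value X assigned to an observed outcome."""
    return weather_code[outcome]

# Any one-to-one recoding (e.g. rainy=-1, cloudy=0, sunny=5) would work too.
assert realize("cloudy") == 2

# Simulating the experiment produces one realization of the random variable.
random.seed(0)
x = realize(random.choice(list(weather_code)))
assert x in (1, 2, 3)
```

Before the experiment is run, the value of X is unknown; after it is run, X is just a number. That is all "random variable" means.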

However, this type of random variable is not convenient to work with because the events are not mutually exclusive.¹ The following classification of events and mapping to the random variable is preferred:

X    event
1    rainy
2    cloudy
3    sunny

Note that these events are mutually exclusive and exhaustive. There are many different ways of mapping the value of X to the event. Nothing prevents us from adopting a different mapping such as

X     event
-1    rainy
0     cloudy
5     sunny

However, unless we have strong reasons, it is conventional to map X to consecutive positive integers (i.e., 1, 2, 3, 4, 5, ...) when the events are discrete.

How should we describe a random variable that has not yet been realized? We can list all the possible outcomes of the random variable and how likely each outcome is to happen.

Definition 2 (Probability distribution): A probability distribution is the listing of all possible outcomes of an experiment and the corresponding probabilities. These events should be mutually exclusive and exhaustive.

Once the probability distribution is known, we can compute the probability of composite events, such as Prob(A and B) and Prob(A or B), using the probability rules we learned in earlier chapters.

According to the numerical values that the random variable can take and their probabilities of occurrence, we classify random variables as discrete or continuous, and hence distinguish discrete probability distributions from continuous probability distributions.

¹ Recall that when events A and B are mutually exclusive, we have P(A or B) = P(A) + P(B) and P(A and B) = 0.
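The two requirements of Definition 2 (probabilities in [0, 1] over mutually exclusive, exhaustive outcomes, summing to one) can be checked mechanically. A minimal sketch, assuming a uniform three-outcome weather distribution purely for illustration:

```python
# A discrete probability distribution: each outcome gets a probability in
# [0, 1] and the probabilities sum to 1 (hypothetical uniform weather pmf).
pmf = {1: 1/3, 2: 1/3, 3: 1/3}   # 1 = rainy, 2 = cloudy, 3 = sunny

def is_valid_pmf(p, tol=1e-9):
    """Check that every probability lies in [0, 1] and the total is 1."""
    return all(0 <= v <= 1 for v in p.values()) and abs(sum(p.values()) - 1) < tol

def prob_or(p, a, b):
    """P(A or B) for mutually exclusive events A and B: P(A) + P(B)."""
    return sum(p[x] for x in set(a) | set(b))

assert is_valid_pmf(pmf)
assert abs(prob_or(pmf, {1}, {2}) - 2/3) < 1e-9   # P(rainy or cloudy)
```

Once the full listing is in hand, composite-event probabilities like P(A or B) reduce to sums over the relevant outcomes, exactly as the probability rules of the earlier chapters prescribe.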

Definition 3 (Discrete Probability Distributions): A discrete probability distribution describes the probability of a random variable that takes discrete values. The number of values (hence, outcomes) the random variable takes need not be finite but is generally countable (i.e., there can be countably infinite outcomes). A discrete probability distribution is also known as a probability mass function.

Example 2 (The twelve zodiac signs): Chinese astrology has 12 animals representing a 12-year cycle of the lunar calendar: Rat, Ox, Tiger, Rabbit, Dragon, Snake, Horse, Goat, Monkey, Rooster, Dog and Pig. If couples do not have a strong preference over the birth years of their children, the probability of a randomly drawn person being born in the year of the Dragon is the same as in other years.² That is,

Zodiac sign, X  Rat   Ox    Tiger Rabbit Dragon Snake Horse Goat  Monkey Rooster Dog   Pig
P(X)            1/12  1/12  1/12  1/12   1/12   1/12  1/12  1/12  1/12   1/12    1/12  1/12

And, when the zodiac signs are mapped to numerical values:

Zodiac sign  Rat   Ox    Tiger Rabbit Dragon Snake Horse Goat  Monkey Rooster Dog   Pig
X            1     2     3     4      5      6     7     8     9      10      11    12
P(X)         1/12  1/12  1/12  1/12   1/12   1/12  1/12  1/12  1/12   1/12    1/12  1/12

Example 3 (Household size in mainland China and Hong Kong): The following is the probability distribution of family household size in mainland China in 2005 (Source: Table 4-14, Family Households by Size and Region (2005)):

Household size, X    P(X)

² The Far Eastern Economic Review (24 June 1999) reported that 2000 (a Dragon year) was a banner year for births in China. It pointed out that many Chinese considered the combination of a Dragon year and the Millennium a double whammy of good fortune. Consequently, in mainland China, many couples planned to give their cherished only child a head start in life by getting pregnant in 1999, so their babies would be born in 2000, a Dragon year.

[Bar chart: P(X) against household size X = 1, 2, ..., >= 10]

Note that the household size, X, is generally a positive integer, i.e., {1, 2, 3, ...}. From the distribution, we can compute the probability that a randomly drawn family has a household size smaller than or equal to 3 as

P(X <= 3) = P(X = 1) + P(X = 2) + P(X = 3).

As a comparison, here is the probability distribution of family household size in Hong Kong in 2006 (Source: Table 161: Domestic households by household size and type of quarters, 2006):

Household size, X    P(X)

[Bar chart: P(X) against household size X]

The probability that a randomly drawn family has a household size smaller than or equal to 3 is P(X <= 3) = P(X = 1) + P(X = 2) + P(X = 3) = 0.638, which is not too different from that of

mainland China. Should we conclude that there is no difference between the two distributions? Probably not. Hong Kong actually has more households of size 1 and fewer of size 3 than mainland China.

As illustrated in the example above, a discrete probability distribution may be listed in a table, or shown in a bar chart with the x-axis giving the possible values of the random variable and the y-axis giving the corresponding probabilities. These two methods are not convenient when X can take many possible values. A better method is to use a function to relate the values the random variable can take, x, to their probabilities, P(X = x).

Example 4 (Probability function for discrete random variables): The following function describes the probability distribution of a random variable:

P(X) = X/10 for X = 1, 2, 3, 4.

The probabilities can be listed in a table

X       1     2     3     4
P(X)    0.1   0.2   0.3   0.4

and in a bar chart, as done in earlier examples.

[Bar chart: P(X) against X]

When the random variable can take many values, listing the probability distribution in a table

will prove difficult. An example would be P(X) = (1001 - X)/500500 for X = 1, 2, 3, ..., 1000.

Definition 4 (Continuous Probability Distributions): A continuous probability distribution can assume an infinite number of values within a given range or interval(s); it describes random variables that take continuous values.

Example 5 (Continuous Probability Distributions): Depending on the accuracy of measurement, the time it takes a student to travel to class can range from zero to two hours, i.e., X ∈ [0, 2].

A continuous probability distribution cannot be listed as in the discrete case because the random variable can take uncountably infinitely many possible continuous values. Note that in this example the outcomes generally lie within an interval and, at least theoretically, can take any number inside the interval. They need not be integers, nor need they be multiples of any number. However, in real life, no hairdresser quotes a price to six decimal places; prices are rounded to the nearest integer. Likewise, we do not normally report time to fractions of a second: measuring time to six decimal places is possible but costly. Thus, depending on the purpose, measurements of time are often rounded to the nearest second or minute. Of course, the measurement of time will need to be accurate to six decimal places in the launch of a space shuttle or in some science experiments. In short, in real-life applications, variables that are in principle continuous are often treated/approximated/recorded as discrete values.

1 Features of a Discrete Probability Distribution

Probability distributions may be classified according to the number of random variables they describe.

Number of random variables    Joint distribution
1                             Univariate probability distribution
2                             Bivariate probability distribution
3                             Trivariate probability distribution
n                             Multivariate probability distribution
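The point of Example 4 generalizes: once P(X = x) is given by a formula, the whole table can be recovered mechanically, no matter how long the support. A sketch, assuming a "probability proportional to x" family of which Example 4's P(X) = X/10 is the n = 4 member:

```python
# Tabulate a pmf given by a formula: P(X = x) proportional to x for x = 1..n.
from fractions import Fraction

def triangular_pmf(n):
    """P(X = x) = x / (1 + 2 + ... + n) for x = 1, ..., n."""
    total = n * (n + 1) // 2          # normalizing constant
    return {x: Fraction(x, total) for x in range(1, n + 1)}

pmf4 = triangular_pmf(4)              # Example 4: P(X) = X/10 for X = 1..4
assert pmf4[3] == Fraction(3, 10)
assert sum(pmf4.values()) == 1

pmf1000 = triangular_pmf(1000)        # far too long to tabulate by hand
assert sum(pmf1000.values()) == 1
```

Exact rational arithmetic (`Fraction`) is used here so that "the probabilities sum to exactly 1" can be asserted without floating-point tolerance.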

These distributions have similar characteristics. We will discuss these characteristics for the univariate and the bivariate distributions. The extension to the multivariate case is straightforward.

Theorem 1 (Characteristics of a Univariate Discrete Distribution): Let x_1, ..., x_N be the list of all possible outcomes (N of them) of a random variable X.

1. The probability of a particular outcome, P(x_i), must lie between 0 and 1, i.e., P(x_i) ∈ [0, 1].

2. The sum of the probabilities of all possible outcomes (exhaustive and mutually exclusive) must be 1. That is, P(x_1) + ... + P(x_N) = 1.

3. The outcomes are mutually exclusive. That is, for all i not equal to k,

P(x_i and x_k) = 0,
P(x_i or x_k) = P(x_i) + P(x_k).

Example 6 (Univariate probability distribution): Consider a random experiment in which a fair coin is tossed three times. Let x be the number of heads. Let H represent the outcome of a head and T the outcome of a tail. The possible outcomes for such an experiment are TTT, TTH, THT, THH, HTT, HTH, HHT, HHH. Thus the possible values of x (number of heads) are

x    outcomes           P(X = x)
0    TTT                1/8
1    TTH, THT, HTT      3/8
2    THH, HTH, HHT      3/8
3    HHH                1/8

From the definition of a random variable, X as defined in this experiment is a random variable.

Theorem 2 (Characteristics of a Bivariate Discrete Distribution): If X and Y are discrete random variables that take N and M possible values respectively, we may define their joint probability function as P_XY(x, y). Let (x_1, y_1), ..., (x_N, y_1), ..., (x_1, y_M), ..., (x_N, y_M) be the list of all possible outcomes (NM of them). Usually we put all these into a table like the following:

                 X
        x_1              x_2              ...   x_N              Total
y_1     P_XY(x_1, y_1)   P_XY(x_2, y_1)   ...   P_XY(x_N, y_1)   P_Y(y_1)
y_2     P_XY(x_1, y_2)   P_XY(x_2, y_2)   ...   P_XY(x_N, y_2)   P_Y(y_2)
...     ...              ...              ...   ...              ...
y_M     P_XY(x_1, y_M)   P_XY(x_2, y_M)   ...   P_XY(x_N, y_M)   P_Y(y_M)
Total   P_X(x_1)         P_X(x_2)         ...   P_X(x_N)         1

1. The probability of a particular outcome, P_XY(x, y), must lie between 0 and 1, i.e., P_XY(x, y) ∈ [0, 1].

2. The outcomes are mutually exclusive. That is, for all i ≠ k or j ≠ l,

P_XY((x_i, y_j) and (x_k, y_l)) = 0 and
P_XY((x_i, y_j) or (x_k, y_l)) = P_XY(x_i, y_j) + P_XY(x_k, y_l).

3. The marginal probability function of X is P_X(x) = Σ_{j=1}^{M} P_XY(x, y_j) = Σ_y P_XY(x, y). Note that the marginal probability function of X is used when we do not care about the values Y takes. Similarly, the marginal probability function of Y is P_Y(y) = Σ_{i=1}^{N} P_XY(x_i, y) = Σ_x P_XY(x, y).

4. The sum of all joint probabilities equals 1:

Σ_{i=1}^{N} Σ_{j=1}^{M} P_XY(x_i, y_j) = Σ_x Σ_y P_XY(x, y) = P_XY(x_1, y_1) + ... + P_XY(x_N, y_M) = 1.

5. The conditional probability function of X given Y:

P_{X|Y}(x|y) = P(X = x | Y = y)   if P(Y = y) > 0,
P_{X|Y}(x|y) = 0                  if P(Y = y) = 0.

For each fixed y this is a probability function for X, i.e., the conditional probability function is non-negative and

Σ_x P_{X|Y}(x|y) = 1.

By the definition of conditional probability,

P_{X|Y}(x|y) = P_XY(x, y) / P_Y(y).

E.g., P(HSI rises | Rainy) = 0.2/0.35. When X and Y are independent, P_{X|Y}(x|y) equals P_X(x). The relationship of these statements is better visualized in a table, as shown in the examples below.

Example 7 (Bivariate distribution I): A bag contains 4 white, 3 red and 5 blue balls. Two are chosen at random without replacement. Let X be the number of red balls chosen and let Y be the number of white balls chosen. X and Y both take possible values 0, 1, and 2. We find that:

p(0, 0) = P(X = 0, Y = 0) = P(both blue) = 10/66,
p(1, 0) = P(X = 1, Y = 0) = P(one red, one blue) = 15/66,
p(0, 1) = P(X = 0, Y = 1) = P(one white, one blue) = 20/66,
p(0, 2) = P(X = 0, Y = 2) = P(both white) = 6/66,
p(2, 0) = P(X = 2, Y = 0) = P(both red) = 3/66,
p(1, 1) = P(X = 1, Y = 1) = P(one red, one white) = 12/66.

The remaining combinations of X and Y are impossible. Hence the joint probability table will look like

        y = 0   y = 1   y = 2   Total
x = 0   10/66   20/66   6/66    36/66
x = 1   15/66   12/66   0       27/66
x = 2   3/66    0       0       3/66
Total   28/66   32/66   6/66    1

Example 8 (Bivariate distribution II): The hypothetical joint distribution of the movement of the Hang Seng Index (HSI) and weather is shown in the following table.

                  Rainy (y_1)   Not Rainy (y_2)
HSI falls (x_1)   0.15          0.30
HSI rises (x_2)   0.20          0.35

From the table, we can calculate the following quantities:

1. The unconditional probability of X, i.e., P(x), and the unconditional probability of Y, i.e., P(y).

           y_1    y_2    P(X = x)
x_1        0.15   0.30   0.45
x_2        0.20   0.35   0.55
P(Y = y)   0.35   0.65   1

2. The conditional probability of Y given X, i.e., P_{Y|X}(y|x), and the conditional probability of X given Y, i.e., P_{X|Y}(x|y).

P(X | y_1):  x_1: 0.15/0.35,  x_2: 0.20/0.35
P(X | y_2):  x_1: 0.30/0.65,  x_2: 0.35/0.65
P(Y | x_1):  y_1: 0.15/0.45,  y_2: 0.30/0.45
P(Y | x_2):  y_1: 0.20/0.55,  y_2: 0.35/0.55

3. Stock market movement is not independent of weather: P(x_1 | y_1) ≠ P(x_1).

Note that the numbers in this example are not based on empirical data. Nevertheless, it remains an interesting subject to check whether weather and stock movements in Hong Kong are correlated.
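The marginal and conditional calculations of Example 8 can be automated. The four joint probabilities below are reconstructed from the conditional ratios quoted in the example (e.g., P(HSI rises | Rainy) = 0.2/0.35), so treat them as an assumption rather than quoted data:

```python
# Marginal and conditional probabilities from a bivariate joint table.
joint = {
    ("falls", "rainy"): 0.15, ("falls", "dry"): 0.30,   # reconstructed values
    ("rises", "rainy"): 0.20, ("rises", "dry"): 0.35,
}

def marginal_x(j, x):
    """P_X(x) = sum over y of P_XY(x, y)."""
    return sum(p for (xi, _), p in j.items() if xi == x)

def marginal_y(j, y):
    """P_Y(y) = sum over x of P_XY(x, y)."""
    return sum(p for (_, yi), p in j.items() if yi == y)

def conditional_x_given_y(j, x, y):
    """P(X = x | Y = y) = P_XY(x, y) / P_Y(y)."""
    return j[(x, y)] / marginal_y(j, y)

assert abs(marginal_x(joint, "falls") - 0.45) < 1e-9
assert abs(conditional_x_given_y(joint, "rises", "rainy") - 0.2 / 0.35) < 1e-9
# Dependence: P(falls | rainy) differs from the unconditional P(falls).
assert conditional_x_given_y(joint, "falls", "rainy") != marginal_x(joint, "falls")
```

The last assertion is the independence check of item 3: if the conditional and unconditional probabilities coincided for every cell, weather and market movement would be independent.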

Evidence on the United States is mixed. See, for example, Hirshleifer and Shumway (2003) and Goetzmann and Zhu (2004).

Theorem 3 (Transformation of Random Variables): A transformation of random variable(s) results in a new random variable.

Example 9 (Transformation of random variables): If X and Y are random variables, the following variables Z are also random variables:

1. Z = 2X
2. Z = 3 + 2X
3. Z = X^2
4. Z = log(X)
5. Z = X + Y
6. Z = X^2 + Y^2
7. Z = XY

Of course, Z = 0 × X = 0 is not really random, but it is sometimes called a degenerate random variable.

Example 10 (A linear transformation of a random variable): Let a and b be constants, and X be a random variable. The random variable Z = a + bX will have a probability distribution similar to that of X.

1. Let X be the gender outcome of drawing a student in a class. The probability distribution is

x       1 (male)   2 (female)
P(x)

Let Z = 2 - X. The probability distribution of Z is

z       1 (male)   0 (female)
P(z)

In essence, we are just re-coding the gender variable to a different set of integers.

2. Let X be the quiz grade outcome of drawing a student in a class. The probability distribution is

x       1   2   3   4   5
P(x)

Suppose the professor decides to double the points awarded to all questions in the quiz, i.e., a linear transformation of the grades: Z = 2X. The probability distribution of Z is

z       2   4   6   8   10
P(z)

3. Let X be the age outcome of drawing a student in a class in 2004. The probability distribution is

x
P(x)

Suppose we want to display the result in year of birth; it is as if we are doing a linear transformation of the variable: Z = 2004 - X.

z
P(z)

In essence, a linear transformation may be viewed as a change of units, say from age to year of birth. It does not change the probability distribution.

4. Let X be the daily stock return (percentage change in stock price) of a company, which takes discrete values from -5% to 5%. The probability distribution is

x       -5%  -4%  -3%  -2%  -1%  0%  1%  2%  3%  4%  5%
P(x)

Suppose we invest $20,000 in the stock. What is the probability distribution of the value of our stock?

x       19,000  19,200  19,400  19,600  19,800  20,000  20,200  20,400  20,600  20,800  21,000
P(x)
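Example 10's message, that Z = a + bX merely relabels the values while keeping the probabilities, is easy to demonstrate. The quiz-score probabilities below are made up for illustration (the text does not list them):

```python
# Linear transformation of a discrete random variable: Z = a + b*X.
def transform(pmf, a, b):
    """Distribution of Z = a + b*X: relabel values, keep probabilities."""
    return {a + b * x: p for x, p in pmf.items()}

quiz = {1: 0.1, 2: 0.2, 3: 0.2, 4: 0.3, 5: 0.2}       # hypothetical P(x)
doubled = transform(quiz, 0, 2)                       # Z = 2X, doubled points
birth_year = transform({18: 0.5, 19: 0.5}, 2004, -1)  # Z = 2004 - X

assert doubled == {2: 0.1, 4: 0.2, 6: 0.2, 8: 0.3, 10: 0.2}
assert birth_year == {1986: 0.5, 1985: 0.5}
```

Each probability simply travels with its relabeled value, which is exactly why a linear transformation is "a change of units" rather than a new distribution.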

2 Expectations and expected values

The concept of expectation plays a central role in statistics and economics. Expectation (often known as the mean, or first moment) is a measure of the central location of the data. It is also known as the long-run average value of the random variable (i.e., the average of the outcomes of many repetitions of the experiment). The concept of expectation is widely used in macroeconomics (rational expectations), the study of uncertainty in microeconomics (expected utility), and the study of investment (expected portfolio returns).

Definition 5 (Expectation, mean, first moment): Let X be a random variable that takes N possible values {x_1, ..., x_N} with probability distribution {P(x_1), ..., P(x_N)}. The expectation of X is

E(X) = Σ_{i=1}^{N} x_i P(x_i).

The expectation, E(X), is often denoted by the Greek letter µ (pronounced "mu"). Thus, the expectation of a random variable is a weighted average of all the possible values of the random variable, weighted by its probability distribution.

Example 11 (Expected rate of return): Let X be the daily stock return (percentage change in stock price) of a company, which takes discrete values from -5% to 5%. The probability distribution is

x       -5%  -4%  -3%  -2%  -1%  0%  1%  2%  3%  4%  5%
P(x)

The expected rate of return of investing in the stock is 0%. Should we be surprised to find a zero expected rate of return for the probability distribution above? Not really, because the stock return is symmetrically distributed around zero.

Simulation 1 (Long-run average interpretation of expectation): Take the setup of the last example. We would like to verify that the expectation can be interpreted as a long-run average. Let X be the daily stock return (percentage change in stock price) of a company, which takes discrete values from -5% to 5%. The probability distribution is

x       -5%  -4%  -3%  -2%  -1%  0%  1%  2%  3%  4%  5%
P(x)

1. Randomly draw a stock return according to the above probability distribution, e.g., a 0.01 chance of drawing -5%.

2. Repeat drawing from the distribution n times. Compute the average of the stock returns of these n draws.

The following table reports the results of the simulations.

Number of draws (n)
Average return        0.200%   0.133%   ...

As expected, as the number of draws, n, increases, the average return of the stock approaches the theoretical expectation, i.e., 0%. Thus, the simulations confirm that the expected value has the interpretation of a long-run average of the random variable.

Definition 6 (Expectation of a random variable from a bivariate distribution): Suppose X and Y are jointly distributed random variables that take values {(x_1, y_1), ..., (x_N, y_M)} with probability distribution {P_XY(x_1, y_1), ..., P_XY(x_N, y_M)}. The expectation of X is

E(X) = Σ_{i=1}^{N} Σ_{j=1}^{M} x_i P_XY(x_i, y_j) = Σ_x Σ_y x P_XY(x, y).

Example 12 (Expectation of a random variable in a bivariate distribution): In order to better serve the elderly, providing them with better facilities to enrich their retired lives, a neighborhood committee conducted a census among the people older than 55 living in the neighborhood. Part of the result is summarized in the table below (due to the one-child policy, no family consists of more than 5 members):

                   Retiree presence, X
Family size, Y     1 = Yes    0 = No

According to this information, and assuming the structure is similar within the district, what is the expected family size for people older than 55 in the district?

E(Y) = Σ_{j=1}^{5} Σ_{i=1}^{2} y_j P_XY(x_i, y_j)
     = 1 × ( ) + 2 × ( ) + 3 × ( ) + 4 × ( ) + 5 × ( )
     = 3.16

Rounding the result to the nearest integer, a reasonable expectation of the family size is 3 for families containing at least one member older than 55. We can also compute the expected family size for families containing at least one retiree, i.e., X = 1:

E(Y | X = 1) = ( )/( ) = 3.39.

Example 13 (Sum of two random variables):

1. Consider tossing a fair coin once. Let a head be coded 1 and a tail 0. The probability distribution and the expected value of the random variable are

X       1     0
P(X)    0.5   0.5

E(X) = 1 × 0.5 + 0 × 0.5 = 0.5

2. Consider tossing a fair coin twice. Let a head be coded 1 and a tail 0. The joint probability distribution of the two random variables (X_1 the result of the first toss, X_2 the result of the second toss) and the expected values of the random variables are

          X_1 = 1   X_1 = 0
X_2 = 1   0.25      0.25
X_2 = 0   0.25      0.25

or

(X_1, X_2)    (1,1)   (1,0)   (0,1)   (0,0)
P(X_1, X_2)   0.25    0.25    0.25    0.25

E(X_1) = 1 × 0.5 + 0 × 0.5 = 0.5

E(X_2) = 1 × 0.5 + 0 × 0.5 = 0.5

The total number of heads in the two tosses is Z = X_1 + X_2. The probability distribution and the expected value of the random variable are

Z       0      1     2
P(Z)    0.25   0.5   0.25

E(Z) = 0 × 0.25 + 1 × 0.5 + 2 × 0.25 = 1

This example shows numerically that E(Z) = E(X_1) + E(X_2), i.e., 1 = 0.5 + 0.5. More generally, we can verify mathematically that E(X + Y) = E(X) + E(Y).

Example 14 (Expectation of the product of two random variables): The random variables X and Y are jointly distributed as

        Y = 1   Y = 2   Y = 3
X = 1   0.10    0.05    0.10
X = 4   0.35    0.40    0.00

Compute E(X), E(Y), and E(XY).

1. E(X) = (0.10 + 0.05 + 0.10) × 1 + (0.35 + 0.40 + 0.00) × 4 = 3.25
2. E(Y) = (0.10 + 0.35) × 1 + (0.05 + 0.40) × 2 + (0.10 + 0.00) × 3 = 1.65
3. To compute E(Z), first we need to make a probability distribution table of Z = XY.

Z       1      2      3      4      8      12
P(Z)    0.10   0.05   0.10   0.35   0.40   0

Hence E(Z) = E(XY) = 0.10(1) + 0.05(2) + 0.10(3) + 0.35(4) + 0.4(8) + 0(12) = 5.1.

In this example, it is not difficult to verify that E(XY) ≠ E(X)E(Y).

Simulation 2 (Expectation of the product of two random variables): Take the setup of the last example. We would like to verify that the expectation can be interpreted as a long-run average. The random variables X and Y are jointly distributed as

        Y = 1   Y = 2   Y = 3
X = 1   0.10    0.05    0.10
X = 4   0.35    0.40    0.00

It will be useful to rearrange the table as

(X, Y)     (1,1)   (1,2)   (1,3)   (4,1)   (4,2)   (4,3)
P(X, Y)    0.10    0.05    0.10    0.35    0.40    0

1. Randomly draw a combination of X and Y from the probability distribution, e.g., a 0.4 chance of drawing the pair X = 4 and Y = 2. Compute the product Z = XY.

2. Repeat drawing from the distribution 1000 times. Compute the average of X, the average of Y, and the average of Z over these 1000 draws.

The simulation results are shown in the following table, along with the theoretical expectations.

Number of draws (n)       Theoretical
Average X
Average Y
Average Z

It is not difficult to see that the averages approach their theoretical counterparts as n increases.

Definition 7 (Conditional expectation, conditional mean, conditional moment): For two random variables that are jointly distributed with a bivariate probability distribution, the conditional expectation or conditional mean E(X | Y = y_j) is computed by the formula

E(X | Y = y_j) = Σ_x x P_{X|Y}(x|y_j) = x_1 P_{X|Y}(x_1|y_j) + x_2 P_{X|Y}(x_2|y_j) + ... + x_N P_{X|Y}(x_N|y_j).

Sometimes we write µ_{X|Y=y_j} = E(X | Y = y_j). The unconditional expectation or mean of X is related to the conditional mean:

E(X) = Σ_y E(X | Y = y) P_Y(y) = E[E(X|Y)].
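The last identity in Definition 7, E(X) = E[E(X|Y)], can be checked numerically on the joint distribution used in Example 14 and Simulation 2. The cell probabilities below are the ones recovered from that example's arithmetic (e.g., a 0.4 chance of the pair X = 4, Y = 2), so treat them as reconstructed rather than quoted:

```python
# Law of iterated expectations: E(X) = sum over y of E(X|Y=y) * P_Y(y).
joint = {(1, 1): 0.10, (1, 2): 0.05, (1, 3): 0.10,
         (4, 1): 0.35, (4, 2): 0.40, (4, 3): 0.00}

def p_y(j, y):
    """Marginal probability P_Y(y)."""
    return sum(p for (_, yi), p in j.items() if yi == y)

def cond_mean_x(j, y):
    """E(X | Y = y) = sum_x x * P_XY(x, y) / P_Y(y)."""
    return sum(x * p for (x, yi), p in j.items() if yi == y) / p_y(j, y)

ex_direct = sum(x * p for (x, _), p in joint.items())
ex_iterated = sum(cond_mean_x(joint, y) * p_y(joint, y)
                  for y in {y for _, y in joint})

assert abs(ex_direct - 3.25) < 1e-9        # matches Example 14's E(X)
assert abs(ex_direct - ex_iterated) < 1e-9 # E(X) = E[E(X|Y)]
```

Averaging the conditional means E(X|Y = y), weighted by how likely each y is, recovers the unconditional mean exactly.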

Example 15 (Conditional Expectation): The random variables X and Y are jointly distributed as

        Y = 1   Y = 2   Y = 3
X = 1   0.10    0.05    0.10
X = 4   0.35    0.40    0.00

Compute the conditional expectation E(Y | X = 4). First, we need to find the conditional probabilities:

Y               1                        2                       3
P(Y | X = 4)    0.35/(0.35 + 0.4 + 0)    0.4/(0.35 + 0.4 + 0)    0/(0.35 + 0.4 + 0)

Using the formula in the definition, we have

E(Y | X = 4) = Σ_y y P_{Y|X}(y|x)
             = 1 × [0.35/(0.35 + 0.4 + 0)] + 2 × [0.4/(0.35 + 0.4 + 0)] + 3 × [0/(0.35 + 0.4 + 0)]
             = 1.53

Example 16 (I haven't found good real data yet.): [Still looking for a good example...]

Theorem 4 (Expectation of a linear transformed random variable): If a and b are constants and X is a random variable, then

1. E(a) = a
2. E(bX) = bE(X)
3. E(a + bX) = a + bE(X)

Proof: We will only show the most general case, E(a + bX) = a + bE(X).

E(a + bX) = Σ_x (a + bx)P(x)
          = (a + bx_1)P(x_1) + (a + bx_2)P(x_2) + ... + (a + bx_N)P(x_N)

          = aP(x_1) + bx_1 P(x_1) + aP(x_2) + bx_2 P(x_2) + ... + aP(x_N) + bx_N P(x_N)
          = a[P(x_1) + P(x_2) + ... + P(x_N)] + b[x_1 P(x_1) + x_2 P(x_2) + ... + x_N P(x_N)]
          = a + bE(X)

Example 17 (Expectations of linear transformed random variables):

1. Let X be the quiz grade outcome of drawing a student in a class. The probability distribution is

x       1   2   3   4   5
P(x)

E(X) = 3.1

Suppose the professor decides to double the points awarded to all questions in the quiz, i.e., a linear transformation of the grades: Z = 2X. The probability distribution of Z is

z       2   4   6   8   10
P(z)

E(Z) = 6.2

It is not difficult to verify that E(Z) = 2E(X).

2. Let X be the age outcome of drawing a student in a class in 2004. The probability distribution is

x
P(x)

Suppose we want to display the result in year of birth; it is as if we are doing a linear transformation of the variable: Z = 2004 - X.

z
P(z)

It is not difficult to verify that E(Z) = 2004 - E(X).

Definition 8 (Variance, or second central moment): Let X be a random variable that takes N possible values {x_1, ..., x_N} with probability distribution {P(x_1), ..., P(x_N)}. The variance of X is

V(X) = E[X - E(X)]^2 = Σ_{i=1}^{N} (x_i - E(X))^2 P(x_i).

The variance, V(X), is often denoted by the Greek letter σ^2 (pronounced "sigma squared"). Note that the variance of a random variable is the expectation of the squared deviation of the random variable from its mean. That is, if we define a transformed variable Z = [X - E(X)]^2, then V(X) = E(Z). Thus, we expect the properties of the variance to be similar to those of the expectation of a transformed variable.

Definition 9 (Variance of a jointly distributed random variable): Suppose X and Y are jointly distributed random variables that take values {(x_1, y_1), ..., (x_N, y_M)} with probability distribution {P_XY(x_1, y_1), ..., P_XY(x_N, y_M)}. The variance of X is

V(X) = Σ_{i=1}^{N} Σ_{j=1}^{M} [x_i - E(X)]^2 P_XY(x_i, y_j) = Σ_x Σ_y [x - E(X)]^2 P_XY(x, y).

Example 18 (Variance of a bivariate random variable): Suppose random variables X and Y are jointly distributed as follows:

         X = 1   X = 2
Y = 0
Y = 6
Y = 10

First, the expectation of X is

E(X) = 1 × P_X(1) + 2 × P_X(2) = 1 × 0.61 + 2 × 0.39 = 1.39

Hence we have

V(X) = Σ_{i=1}^{N} Σ_{j=1}^{M} [x_i - E(X)]^2 P_XY(x_i, y_j)
     = (1 - 1.39)^2 × 0.61 + (2 - 1.39)^2 × 0.39
     = 0.2379

Simulation 3 (Variance of two jointly distributed random variables): We want to test whether the variance of a group of randomly drawn observations is (approximately) equal to the theoretical value computed above. This requires a large number of observations, n. We use the same method as in the expectation simulations earlier.

1. Set up a given probability distribution as follows:

(X, Y)     (1,0)   (1,6)   (1,10)   (2,0)   (2,6)   (2,10)
P(X, Y)

2. Repeat drawing from the probability distribution n times. Compute the sample variance of the observations.

Below we show the results of our simulation. As expected, the sample variance of X is very close to the theoretical variance in the last example, i.e., 0.2379.

n       Var(X)   Var(Y)

Students are advised to calculate the theoretical value of V(Y) and check whether the simulation result approaches that value.

Definition 10 (Conditional Variance): For a bivariate probability distribution, the conditional variance V(X | Y = y_j) is computed by the formula

V(X | Y = y_j) = Σ_x (x - E(X | Y = y_j))^2 P_{X|Y}(x|y_j)
              = (x_1 - E(X | Y = y_j))^2 P_{X|Y}(x_1|y_j) + ... + (x_N - E(X | Y = y_j))^2 P_{X|Y}(x_N|y_j)

where E(X | Y = y_j) is the conditional expectation of X given Y = y_j and P_{X|Y}(x|y_j) is the conditional probability of X given Y = y_j.

Theorem 5 (Variance of a linear transformed random variable): If a and b are constants and X is a random variable, then

1. V(a) = 0
2. V(bX) = b^2 V(X)
3. V(a + bX) = b^2 V(X)

Proof: We will only show the most general case, V(a + bX) = b^2 V(X).

V(a + bX) = E[(a + bX) - (a + bE(X))]^2
          = E[bX - bE(X)]^2
          = E[b(X - E(X))]^2
          = b^2 E[(X - E(X))^2]
          = b^2 V(X)

Example 19 (Variances of linear transformed random variables):

1. Let X be the quiz grade outcome of drawing a student in a class. The probability distribution is

x       1   2   3   4   5
P(x)

E(X) = 3.1
V(X) = (1 - 3.1)^2 P(1) + (2 - 3.1)^2 P(2) + (3 - 3.1)^2 P(3) + (4 - 3.1)^2 P(4) + (5 - 3.1)^2 P(5) = 0.89

Suppose the professor decides to double the points awarded to all questions in the quiz, i.e., a linear transformation of the grades: Z = 2X. The probability distribution of Z is

z       2   4   6   8   10
P(z)

E(Z) = 6.2
V(Z) = (2 - 6.2)^2 P(2) + (4 - 6.2)^2 P(4) + (6 - 6.2)^2 P(6) + (8 - 6.2)^2 P(8) + (10 - 6.2)^2 P(10) = 3.56

It is not difficult to verify that V(Z) = 2^2 V(X) = 4V(X).

2. Let X be the age outcome of drawing a student in a class in 2004. The probability distribution is

x
P(x)

Suppose we want to display the result in year of birth; it is as if we are doing a linear transformation of the variable: Z = 2004 - X.

z
P(z)

It is not difficult to verify that V(Z) = (-1)^2 V(X) = V(X).

Definition 11 (Covariance): The covariance between two random variables X and Y measures how the two variables move together. It is defined as

C(X, Y) = E[(X - E(X))(Y - E(Y))] = Σ_x Σ_y (x - E(X))(y - E(Y)) P_XY(x, y).

Note that the covariance can be written as

C(X, Y) = E[(X - E(X))(Y - E(Y))]
        = E[XY - E(X)Y - XE(Y) + E(X)E(Y)]
        = E[XY] - E[E(X)Y] - E[XE(Y)] + E[E(X)E(Y)]
        = E[XY] - E(X)E(Y) - E(X)E(Y) + E(X)E(Y)
        = E[XY] - E(X)E(Y)

The concept of covariance is very important in the economics and finance literature. For instance, in finance, we are interested in how the stock return of a company varies with the market return (as in the Capital Asset Pricing Model, known as CAPM for short). Note that E(XY) may be computed following the steps illustrated in Example 14.

Example 20 (Covariance of two random variables): The random variables X and Y are jointly distributed as

        Y = 1   Y = 2   Y = 3
X = 1   0.10    0.05    0.10
X = 4   0.35    0.40    0.00

It will be useful to rearrange the table as

X          1      1      1      4      4      4
Y          1      2      3      1      2      3
P(X, Y)    0.10   0.05   0.10   0.35   0.40   0.00

From the table, we compute E(X) = 3.25, E(Y) = 1.65, and E(XY) = 5.1.

X   Y   X - E(X)   Y - E(Y)   (X - E(X))(Y - E(Y))   P(X, Y)
1   1   -2.25      -0.65      1.4625                 0.10
1   2   -2.25      0.35       -0.7875                0.05
1   3   -2.25      1.35       -3.0375                0.10
4   1   0.75       -0.65      -0.4875                0.35
4   2   0.75       0.35       0.2625                 0.40
4   3   0.75       1.35       1.0125                 0.00

There are two ways of computing the covariance.

1. Cov(X, Y) = E(XY) - E(X)E(Y) = 5.1 - (3.25)(1.65) = -0.2625

2. Cov(X, Y) = Σ_x Σ_y (x - E(X))(y - E(Y)) P_XY(x, y)
   = (1.4625)(0.1) + (-0.7875)(0.05) + (-3.0375)(0.1) + (-0.4875)(0.35) + (0.2625)(0.4) + (1.0125)(0)
   = -0.2625

Thus, we verify that both approaches to computing the covariance are equivalent.

Theorem 6 (Covariance of a linear transformed random variable): If a and b are constants and X is a random variable, then

1. C(a, b) = 0
2. C(a, bX) = 0
3. C(a + bX, Y) = bC(X, Y)

Proof: We will only show the most general case, C(a + bX, Y) = bC(X, Y).

C(a + bX, Y) = E{[(a + bX) - (a + bE(X))][Y - E(Y)]}
             = E{[bX - bE(X)][Y - E(Y)]}
             = E{[b(X - E(X))][Y - E(Y)]}
             = bE{[X - E(X)][Y - E(Y)]}
             = bC(X, Y)

Theorem 7 (Variance of a sum of random variables): If a and b are constants, and X and Y are random variables, then

1. V(X + Y) = V(X) + V(Y) + 2C(X, Y)
2. V(aX + bY) = a^2 V(X) + b^2 V(Y) + 2abC(X, Y)

Proof: We will only show the most general case, V(aX + bY) = a^2 V(X) + b^2 V(Y) + 2abC(X, Y).

V(aX + bY) = E[(aX + bY) - (aE(X) + bE(Y))]^2

           = E[aX - aE(X) + bY - bE(Y)]^2
           = E[a(X - E(X)) + b(Y - E(Y))]^2
           = E[a^2 (X - E(X))^2 + b^2 (Y - E(Y))^2 + 2ab(X - E(X))(Y - E(Y))]
           = a^2 E[(X - E(X))^2] + b^2 E[(Y - E(Y))^2] + 2abE[(X - E(X))(Y - E(Y))]
           = a^2 V(X) + b^2 V(Y) + 2abC(X, Y)

Example 21 (Variance of a sum of random variables): In a class of 52 students, the variances of the midterm and final exam scores are and respectively. The covariance of the two scores is . If the weight on the midterm is 0.4 and the weight on the final is 0.6, what is the variance of the overall score?

Mapping to the notation we used earlier, X can be labeled the midterm exam score, Y the final exam score, a = 0.4, and b = 0.6, so that Z = aX + bY is the overall score. With this mapping of notation, we can calculate the variance of the overall score using the formula

V(aX + bY) = a^2 V(X) + b^2 V(Y) + 2abC(X, Y) =

The strength of the dependence between X and Y is measured by the correlation coefficient.

Definition 12 (Correlation coefficient): The correlation coefficient between two random variables X and Y is

Corr(X, Y) = C(X, Y) / sqrt(V(X)V(Y)).

Note that Corr(X, Y) always lies between -1 and +1.

1. A correlation of -1 means that the two variables are perfectly negatively correlated: whenever X rises, Y falls.

2. A correlation of +1 means that the two variables are perfectly positively correlated: whenever X rises, Y also rises.

3. A correlation of 0 means that the two variables are not correlated.
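Definitions 11 and 12 chain together naturally in code. The sketch below uses the joint distribution recovered from Example 20's arithmetic (E(X) = 3.25, E(Y) = 1.65, E(XY) = 5.1); the cell probabilities are reconstructed, so treat them as an assumption:

```python
# Covariance and correlation from a joint discrete distribution.
from math import sqrt

joint = {(1, 1): 0.10, (1, 2): 0.05, (1, 3): 0.10,
         (4, 1): 0.35, (4, 2): 0.40, (4, 3): 0.00}

ex = sum(x * p for (x, _), p in joint.items())        # E(X)
ey = sum(y * p for (_, y), p in joint.items())        # E(Y)
exy = sum(x * y * p for (x, y), p in joint.items())   # E(XY)
cov = exy - ex * ey                                   # C(X,Y) = E(XY) - E(X)E(Y)

vx = sum((x - ex) ** 2 * p for (x, _), p in joint.items())   # V(X)
vy = sum((y - ey) ** 2 * p for (_, y), p in joint.items())   # V(Y)
corr = cov / sqrt(vx * vy)                            # Corr(X,Y), always in [-1, 1]

assert abs(ex - 3.25) < 1e-9 and abs(ey - 1.65) < 1e-9
assert abs(cov - (-0.2625)) < 1e-9                    # matches Example 20
assert -1 <= corr <= 1
```

Dividing the covariance by the product of the standard deviations strips away the units of X and Y, which is what pins the correlation into [-1, 1].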

Example 22 (Correlation of two exam scores): In a class of 52 students, the midterm and final exam scores have variances V(X) and V(Y) respectively, and covariance C(X, Y). What is the correlation between the two exam scores?

Mapping to the notation used earlier, let X be the midterm exam score and Y the final exam score. With this mapping, the correlation of the two exam scores is

Corr(X, Y) = C(X, Y) / √(V(X)V(Y))

The computed correlation is high: students' performance in different exams tends to be highly correlated.

Definition 13 (Moments of a random variable): The k-th moment is defined as the expectation of the k-th power of a random variable:

m_k = E(X^k)

Similarly, the k-th centralized moment is defined as:

m'_k = E[(X - E(X))^k]

It will become clear that the second centralized moment, m'_2 = E[(X - E(X))^2], is the variance of a random variable.

Definition 14 (Independence): Consider two random variables X and Y with joint probability P_XY(x, y), marginal probabilities P_X(x) and P_Y(y), and conditional probabilities P_X|Y(x|y) and P_Y|X(y|x).

1. They are said to be independent of each other if and only if P_XY(x, y) = P_X(x) P_Y(y) for all x and y. That is, X and Y are independent if each cell probability P_XY(x, y) is the product of the corresponding row and column totals.
2. X is said to be independent of Y if and only if P_X|Y(x|y) = P_X(x) for all x and y.
3. Y is said to be independent of X if and only if P_Y|X(y|x) = P_Y(y) for all x and y.

Theorem 8 (Consequence of independence): If X and Y are independent random variables, then

E(XY) = E(X)E(Y)

We caution that E(XY) = E(X)E(Y) need not imply that the random variables X and Y are independent, as the following example shows.

Example 23 (Consequence of independence): Let X and Y have the following joint probability distribution.

        y = 0   y = 1   y = 2   Total
x = 0    1/12    2/12      0     3/12
x = 1    2/12      0     4/12    6/12
x = 2    1/12    2/12      0     3/12
Total    4/12    4/12    4/12      1

Here E(XY) = 1 and E(X) = E(Y) = 1, so E(XY) = E(X)E(Y). However, X and Y are not independent. For instance, P(X = 1, Y = 2) = 4/12, which is not equal to P(X = 1) P(Y = 2) = (6/12)(4/12) = 2/12.

3 Binomial Probability Distribution

Example 24 (Tossing coins): Consider the probability distribution of the number of heads when different numbers of fair coins³ are tossed.

1. Toss one coin. The probability distribution for the number of heads is

³ A coin is fair if and only if P(head) = P(tail) = 1/2.
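Example 23 can be verified directly from its table. A Python sketch using exact fractions:

```python
from fractions import Fraction as F

# Joint pmf from Example 23 (entries in twelfths).
pmf = {(0, 0): F(1, 12), (0, 1): F(2, 12), (0, 2): F(0),
       (1, 0): F(2, 12), (1, 1): F(0),     (1, 2): F(4, 12),
       (2, 0): F(1, 12), (2, 1): F(2, 12), (2, 2): F(0)}

ex  = sum(x * p for (x, _), p in pmf.items())       # E(X) = 1
ey  = sum(y * p for (_, y), p in pmf.items())       # E(Y) = 1
exy = sum(x * y * p for (x, y), p in pmf.items())   # E(XY) = 1

px1 = sum(p for (x, _), p in pmf.items() if x == 1)   # P(X = 1) = 6/12
py2 = sum(p for (_, y), p in pmf.items() if y == 2)   # P(Y = 2) = 4/12

print(exy == ex * ey)             # True: E(XY) = E(X)E(Y)
print(pmf[(1, 2)] == px1 * py2)   # False: yet X and Y are not independent
```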

# of heads, X     0     1
P(X)            0.5   0.5

2. Toss two coins. The probability distribution for the number of heads is

# of heads, X     0     1     2
P(X)           0.25  0.50  0.25

3. Toss three coins. The probability distribution for the number of heads is

# of heads, X      0      1      2      3
P(X)           0.125  0.375  0.375  0.125

4. Toss four coins. The probability distribution for the number of heads is

# of heads, X       0      1      2      3       4
P(X)           0.0625   0.25  0.375   0.25  0.0625

As in the example, we could continue writing out tables for different numbers of coins tossed, and also for coins with different probabilities of heads. A more convenient way to summarize a probability distribution is with a mathematical formula.

Definition 15 (Binomial probability distribution): Consider n independent trials. In each trial there are only two possible outcomes, labeled success and failure. Let x be the number of observed successes in the n trials, and π be the probability of success on each trial. The probability of x successes in n trials is

P(x) = nCx π^x (1 - π)^(n-x)

The mean and variance are E(X) = nπ and V(X) = nπ(1 - π).

Example 25 (Binomial): After its return to mainland China in 1997, Hong Kong experienced the Asian Financial Crisis as well as SARS (Severe Acute Respiratory Syndrome).⁴ In the third

⁴ For an explanation of SARS, refer to, for instance, ...
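The coin-toss tables above are just Definition 15 with π = 0.5. A short sketch that regenerates them:

```python
from math import comb

def binom_pmf(x, n, pi):
    """P(x successes in n independent trials, success probability pi)."""
    return comb(n, x) * pi**x * (1 - pi)**(n - x)

# Reproduce the fair-coin tables for n = 1, 2, 3, 4 tosses.
for n in range(1, 5):
    print(n, [round(binom_pmf(x, n, 0.5), 4) for x in range(n + 1)])
```

For n = 3 this prints `[0.125, 0.375, 0.375, 0.125]`, matching the table.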

quarter of 2003 (when Hong Kong was experiencing SARS), the unemployment rate reached a historic high of 8.4%. Suppose we had drawn a sample of 14 workers in 2004; we would have:

1. The probability that exactly three are unemployed:
P(x = 3) = 14C3 (0.084)^3 (0.916)^11 = (364)(0.000593)(0.380935) ≈ 0.0822

2. The probability that at least three are unemployed:
P(x ≥ 3) = P(x = 3) + P(x = 4) + ... + P(x = 14) = 14C3 (0.084)^3 (0.916)^11 + ... + 14C14 (0.084)^14 (0.916)^0 ≈ 0.1073

3. The probability that at least one is unemployed:
P(x ≥ 1) = 1 - P(x = 0) = 1 - 14C0 (0.084)^0 (0.916)^14 = 1 - 0.2928 ≈ 0.7072

4. Mean: E(X) = nπ = 14(0.084) = 1.176

5. Variance: V(X) = nπ(1 - π) = (14)(0.084)(0.916) ≈ 1.0772

The following figure plots the binomial distribution when different numbers of workers are drawn (n = 10, 20, 30, 40, 50).
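The unemployment calculations can be reproduced in a few lines (`binom_pmf` is a helper defined here, not a library call):

```python
from math import comb

n, pi = 14, 0.084   # sample size, unemployment rate

def binom_pmf(x):
    """Binomial pmf for this example's n and pi."""
    return comb(n, x) * pi**x * (1 - pi)**(n - x)

p_exactly_3  = binom_pmf(3)                              # ~ 0.0822
p_at_least_3 = 1 - sum(binom_pmf(x) for x in range(3))   # ~ 0.1073
p_at_least_1 = 1 - binom_pmf(0)                          # ~ 0.7072
mean, var = n * pi, n * pi * (1 - pi)                    # 1.176, ~ 1.0772

print(round(p_exactly_3, 4), round(p_at_least_3, 4), round(p_at_least_1, 4))
```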

[Figure: Binomial distributions of the number of unemployed for n = 10, 20, 30, 40, 50.]

Note that the plot does not show x ≥ 10 because the probability of such x values is negligible in our case. Of course, theoretically, x can assume any integer from 0 up to n. We also note that the binomial distribution approaches a bell shape as n increases.

4 Hypergeometric Distribution

Suppose we draw n students at random, without replacement, from a class of N students. What is the probability that x of the n selected students are female? Let's label female as success and denote by X the number of female students in the sample. It looks like X should have a binomial probability distribution. That is WRONG! The random variable X has a binomial distribution only if the probability of drawing a female remains the same on every draw. In this example, the probabilities of drawing a female on the n draws are not all the same, because the population is finite.

Definition 16 (Finite population): A finite population is a population consisting of a fixed number of known individuals, objects, or measurements.

Example 26 (Changing probability in subsequent draws): From a bag containing 7 red chips and 5 blue chips, you select 2 chips one after the other, without replacement. The following tree diagram shows the combinations of outcomes and their probabilities.

First draw           Second draw
R1 (7/12)  --- R2 (6/11)
           \-- B2 (5/11)
B1 (5/12)  --- R2 (7/11)
           \-- B2 (4/11)

We can easily see that the probability of drawing a red chip changes between draws and depends on what has been drawn previously. Let R_i be the event of a red chip on the i-th draw (i = 1, 2), and B_i the event of a blue chip on the i-th draw (i = 1, 2).

1. On the first draw, the probability of drawing a red chip is P(R_1) = 7/12.
2. On the second draw, the probability of drawing a red chip depends on the outcome of the first draw:
(a) If the first draw is red, P(R_2 | R_1) = 6/11.
(b) If the first draw is blue, P(R_2 | B_1) = 7/11.

When the population is finite, the probability of drawing a success (an outcome with the characteristic of interest) changes over a sequence of trials. In this case, an assumption of the binomial is violated. To account for the change in probability across trials, we use the hypergeometric distribution instead of the binomial. The hypergeometric distribution has the following characteristics:

1. There are only 2 possible outcomes on each trial.
2. The probability of a success is not the same on each trial.
3. It results from a count of the number of successes in a fixed number of trials.

Definition 17 (Hypergeometric distribution): Consider n draws without replacement from a finite population. Each draw has two possible outcomes, labeled success and failure. Let x be the number of observed successes in the n draws, N be the size of the population, and S be the number of successes in the population. The probability of x successes in a sample of n observations is

P(x) = (SCx)(N-S C n-x) / (NCn)
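A quick check of the tree diagram's numbers, using exact fractions. It also illustrates a subtle point: unconditionally, the second draw is red with the same probability as the first; it is the conditional probabilities that change.

```python
from fractions import Fraction as F

# Bag with 7 red and 5 blue chips, two draws without replacement.
p_r1 = F(7, 12)              # P(R1)
p_r2_given_r1 = F(6, 11)     # P(R2 | R1)
p_r2_given_b1 = F(7, 11)     # P(R2 | B1)

# Law of total probability for the second draw.
p_r2 = p_r1 * p_r2_given_r1 + (1 - p_r1) * p_r2_given_b1
print(p_r2)   # 7/12, the same as the first draw
```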

The process that results in the hypergeometric distribution is very similar to the one that results in the binomial, and students of statistics often confuse the two. The general rule is to use the hypergeometric distribution to find the probability of a specified number of successes or failures when both of the following conditions are fulfilled:

1. The sample is selected from a finite population without replacement (recall that a criterion for the binomial distribution is that the probability of success remain the same from trial to trial). Note that if a sample is selected from a finite population with replacement, the probability does remain constant.

2. The size of the sample n is greater than 5% of the size of the population N. If the sample size n is small relative to the population size N, the change in probability across draws is so small that we can safely ignore it without affecting the calculated result.

The following figure shows some hypergeometric distributions, H(n, S/N, N), along with their corresponding binomial distributions, B(n, π = S/N). It shows that the hypergeometric and binomial distributions are similar for a given n and N, especially when S/N is small. Thus, when S/N is small, binomial distributions can be good approximations to hypergeometric distributions.

[Figure: H(20, 0.05, 100), H(20, 0.2, 100), and H(20, 0.5, 100), plotted against B(20, 0.05), B(20, 0.2), and B(20, 0.5) respectively.]

The next figure shows some hypergeometric distributions, H(n, S/N, N), with the same population proportion of successes (i.e., the same S/N) but different n. The distribution approaches a bell shape as n increases, and becomes smoother as the sample size n grows. The plot also shows that, given S/N, when N increases (i.e., n/N approaches zero), the hypergeometric
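The approximation claims can be checked numerically. The sketch below compares H(20, S/N = 0.2, N = 100) with B(20, 0.2); the helper functions are defined here for illustration:

```python
from math import comb

def hyper_pmf(x, n, S, N):
    """P(x successes in n draws without replacement; S successes among N)."""
    return comb(S, x) * comb(N - S, n - x) / comb(N, n)

def binom_pmf(x, n, pi):
    return comb(n, x) * pi**x * (1 - pi)**(n - x)

n, S, N = 20, 20, 100
h = [hyper_pmf(x, n, S, N) for x in range(n + 1)]
b = [binom_pmf(x, n, S / N) for x in range(n + 1)]

# Both pmfs sum to 1 and share the mean nS/N = 4; the pointwise gap is small.
max_gap = max(abs(hx - bx) for hx, bx in zip(h, b))
print(round(max_gap, 4))
```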

distribution approaches the binomial. Thus, when n/N is small, we can safely use the binomial distribution to approximate the hypergeometric distribution.

[Figure: H(5, 0.2, 100), H(10, 0.2, 100), H(20, 0.2, 100), H(30, 0.2, 100), H(40, 0.2, 100), and H(50, 0.2, 100).]

Example 27 (Hypergeometric): The National Air Safety Board has a list of 10 reported safety violations. Suppose only 4 of the reported violations are actual violations, and the Safety Board will only be able to investigate five of them. What is the probability that three of the five violations randomly selected for investigation are actual violations?

P(X = 3) = (4C3)(10-4 C 5-3) / (10C5) = (4C3)(6C2) / (10C5) = 4(15)/252 ≈ 0.2381

5 Poisson Probability Distribution

The binomial distribution becomes more skewed to the right (positively skewed) as the probability of success becomes smaller. The limiting form of the binomial distribution, where the probability of success π is small and n is large, is called the Poisson probability distribution.

Definition 18 (Poisson probability distribution): The Poisson distribution can be described mathematically using the formula

P(X = x) = µ^x e^(-µ) / x!
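Example 27's probability in one line of Python:

```python
from math import comb

# 10 reported violations, 4 actual; 5 investigated; P(exactly 3 actual).
p = comb(4, 3) * comb(6, 2) / comb(10, 5)   # = 60/252
print(round(p, 4))   # -> 0.2381
```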

where µ is the mean number of successes in a particular interval of time, e is the constant 2.71828..., and x is the number of successes. In binomial situations the mean number of successes µ can be determined as nπ, where n is the number of trials and π the probability of a success. The variance of the Poisson distribution is also equal to µ (= nπ).

Unlike the binomial and hypergeometric random variables, the Poisson random variable X generally has no specific upper limit. The Poisson probability distribution is always skewed to the right, and becomes more symmetric as µ gets large, as illustrated in the following plot.

[Figure: Poisson distributions for µ = 1, 2.5, 5, 10, 15, 20.]

Example 28 (Poisson probability distribution): According to the Shanghai Yearbooks from 2000 to 2006, there were many fire accidents every year, threatening citizens' property and even lives. The numbers of fire accidents from 1999 to 2005 are reported in the following table:

Year               1999  2000  2001  2002  2003  2004  2005
Number of fires     ...   ...   ...   ...   ...   ...   ...
Average per day     ...   ...   ...   ...   ...   ...   ...

Assuming that the average over all seven years can be used to predict future accidents, what is the probability that there are exactly 14 fires in Shanghai the next day?
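A sketch of the Poisson pmf. Although x has no upper limit, the probabilities sum to 1 and the mean is µ; the sums below are truncated at x = 99, far enough out for µ = 2.5 that the truncation error is negligible.

```python
from math import exp, factorial

def poisson_pmf(x, mu):
    """P(X = x) for a Poisson random variable with mean mu."""
    return mu**x * exp(-mu) / factorial(x)

mu = 2.5
total = sum(poisson_pmf(x, mu) for x in range(100))      # should be ~1
mean  = sum(x * poisson_pmf(x, mu) for x in range(100))  # should be ~mu
print(round(total, 6), round(mean, 6))   # -> 1.0 2.5
```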

Note that the number of accidents to happen the next day is better described as a Poisson random variable. We first compute µ, the average number of fires per day over these 7 years:

µ = (total number of fires over 1999-2005) / (number of days over 1999-2005)

The probability that there are exactly 14 fires in Shanghai the next day is then:

P(x = 14) = µ^14 e^(-µ) / 14!

[The data are available at the website of Shanghai Local Records.]

6 What distribution to use?

There is no need to memorize the formulas of these probability distributions, because in practice we can always find them on the internet, in Excel, or in a textbook, and computing a probability from a given distribution is a simple task. What the computer cannot do for us, however, is choose the distribution appropriate to a given situation. Here we review how to make that judgment.

Poisson considers the number of times an event occurs over an INTERVAL of TIME or SPACE, and there is no limit on the values a Poisson random variable can take. Thus, if we are considering a sample of 10 observations and are asked to compute the probability of having 6 successes, we should not use Poisson, because the maximum number of successes is limited to 10. It is only reasonable to consider binomial or hypergeometric.

Hypergeometric considers the number of successes in a sample when the probability of success varies across trials due to the without-replacement sampling strategy. To compute a hypergeometric probability, one needs to know N and S separately. Suppose we know that the probability of success is 0.3, and we are considering a sample of 10 observations and are asked to compute the probability of having 6 successes. We cannot use hypergeometric, because we do not have N and S separately. Instead, we have to use binomial, even though we are dealing with a finite population.

Let's check our understanding with the following examples.

Example 29 (Choice of distributions): In a shipment of 15 boxes of eggs, 5 are contaminated. If 4 boxes are inspected, what is the probability that exactly 1 is contaminated?

First, we recognize that this is not Poisson, because a fixed number of boxes is inspected (sample size = 4). Second, the sampling is without replacement: if we were to inspect four boxes for contamination, we would not want to sample with replacement. Third, both N (15 boxes) and S (5 contaminated) are given. Hence we use hypergeometric.

Example 30 (Choice of distributions): A research team is conducting a new medical survey to update the color-blindness rate among males in the area. They conduct the test by randomly choosing male passersby and asking each to take the medical test. What is the probability that exactly 3 examined males are color blind in a sample of 10? (In the team's report, the updated color-blindness rate among males is 7%.)

There are only three pieces of information given in the problem: π = 7%, n = 10, and x = 3. We can immediately exclude hypergeometric, because it requires N and S separately, and we are not given them. We can also reject Poisson, because Poisson requires the maximum number of successes to be unlimited, whereas here it is at most 10. It turns out that we should use binomial, even though a survey is usually conducted without replacement. This is because the population is assumed to be effectively infinite, so surveying without replacement makes no significant difference.

Example 31 (Choice of distributions): The Traffic Department of the City of Peiging classifies the traffic condition at a crossroad at peak hour as good if the traffic flow is higher than 40 vehicles/minute, and as crowded if it is less than 10 vehicles/minute. According to the data of the past months, the average traffic flow at peak hours on Chang'an Street is 30 vehicles/minute. What is the probability that the traffic condition is crowded at 17:43, definitely a peak-hour time?
(Do not try to compute the exact value; write out the formula only.)

The problem involves counting events over an interval of time, which is a very good hint to use Poisson. Going further, we find that the parameters needed are all available: µ = 30, and "crowded" corresponds to x ≤ 9 (fewer than 10 vehicles in the minute). The other information turns out to be unnecessary here. So we go ahead with Poisson:

P(crowded) = P(X ≤ 9) = Σ_{x=0}^{9} 30^x e^(-30) / x!
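The three worked choices can be sketched in a few lines. For Example 31, "crowded" is read as fewer than 10 vehicles in the minute, i.e. x ≤ 9; that reading is an interpretation, since the cutoff notation in the original solution is ambiguous.

```python
from math import comb, exp, factorial

# Example 29 (hypergeometric): 15 boxes, 5 contaminated, 4 inspected, exactly 1 bad.
p_hyper = comb(5, 1) * comb(10, 3) / comb(15, 4)   # 600/1365 ~ 0.4396

# Example 30 (binomial): n = 10, pi = 0.07, exactly 3 color blind.
p_binom = comb(10, 3) * 0.07**3 * 0.93**7          # ~ 0.0248

# Example 31 (Poisson): mu = 30 vehicles/minute, "crowded" means x <= 9.
p_poisson = sum(30**x * exp(-30) / factorial(x) for x in range(10))

print(round(p_hyper, 4), round(p_binom, 4), p_poisson)
```

The Poisson probability is tiny: with an average flow of 30 vehicles/minute, seeing fewer than 10 in a minute is essentially impossible.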


More information

CS 361: Probability & Statistics

CS 361: Probability & Statistics September 12, 2017 CS 361: Probability & Statistics Correlation Summary of what we proved We wanted a way of predicting y from x We chose to think in standard coordinates and to use a linear predictor

More information

Probability and Probability Distributions. Dr. Mohammed Alahmed

Probability and Probability Distributions. Dr. Mohammed Alahmed Probability and Probability Distributions 1 Probability and Probability Distributions Usually we want to do more with data than just describing them! We might want to test certain specific inferences about

More information

Chapter 4 : Discrete Random Variables

Chapter 4 : Discrete Random Variables STAT/MATH 394 A - PROBABILITY I UW Autumn Quarter 2015 Néhémy Lim Chapter 4 : Discrete Random Variables 1 Random variables Objectives of this section. To learn the formal definition of a random variable.

More information

Expected Value 7/7/2006

Expected Value 7/7/2006 Expected Value 7/7/2006 Definition Let X be a numerically-valued discrete random variable with sample space Ω and distribution function m(x). The expected value E(X) is defined by E(X) = x Ω x m(x), provided

More information

Random Variables. Statistics 110. Summer Copyright c 2006 by Mark E. Irwin

Random Variables. Statistics 110. Summer Copyright c 2006 by Mark E. Irwin Random Variables Statistics 110 Summer 2006 Copyright c 2006 by Mark E. Irwin Random Variables A Random Variable (RV) is a response of a random phenomenon which is numeric. Examples: 1. Roll a die twice

More information

Lecture 2: Review of Probability

Lecture 2: Review of Probability Lecture 2: Review of Probability Zheng Tian Contents 1 Random Variables and Probability Distributions 2 1.1 Defining probabilities and random variables..................... 2 1.2 Probability distributions................................

More information

Chapter 1: Revie of Calculus and Probability

Chapter 1: Revie of Calculus and Probability Chapter 1: Revie of Calculus and Probability Refer to Text Book: Operations Research: Applications and Algorithms By Wayne L. Winston,Ch. 12 Operations Research: An Introduction By Hamdi Taha, Ch. 12 OR441-Dr.Khalid

More information

2011 Pearson Education, Inc

2011 Pearson Education, Inc Statistics for Business and Economics Chapter 3 Probability Contents 1. Events, Sample Spaces, and Probability 2. Unions and Intersections 3. Complementary Events 4. The Additive Rule and Mutually Exclusive

More information

Probability Experiments, Trials, Outcomes, Sample Spaces Example 1 Example 2

Probability Experiments, Trials, Outcomes, Sample Spaces Example 1 Example 2 Probability Probability is the study of uncertain events or outcomes. Games of chance that involve rolling dice or dealing cards are one obvious area of application. However, probability models underlie

More information

Probability Year 10. Terminology

Probability Year 10. Terminology Probability Year 10 Terminology Probability measures the chance something happens. Formally, we say it measures how likely is the outcome of an event. We write P(result) as a shorthand. An event is some

More information

Chapter 3. Discrete Random Variables and Their Probability Distributions

Chapter 3. Discrete Random Variables and Their Probability Distributions Chapter 3. Discrete Random Variables and Their Probability Distributions 2.11 Definition of random variable 3.1 Definition of a discrete random variable 3.2 Probability distribution of a discrete random

More information

STA 2023 EXAM-2 Practice Problems. Ven Mudunuru. From Chapters 4, 5, & Partly 6. With SOLUTIONS

STA 2023 EXAM-2 Practice Problems. Ven Mudunuru. From Chapters 4, 5, & Partly 6. With SOLUTIONS STA 2023 EXAM-2 Practice Problems From Chapters 4, 5, & Partly 6 With SOLUTIONS Mudunuru, Venkateswara Rao STA 2023 Spring 2016 1 1. A committee of 5 persons is to be formed from 6 men and 4 women. What

More information

STAT/MA 416 Answers Homework 4 September 27, 2007 Solutions by Mark Daniel Ward PROBLEMS

STAT/MA 416 Answers Homework 4 September 27, 2007 Solutions by Mark Daniel Ward PROBLEMS STAT/MA 416 Answers Homework 4 September 27, 2007 Solutions by Mark Daniel Ward PROBLEMS 2. We ust examine the 36 possible products of two dice. We see that 1/36 for i = 1, 9, 16, 25, 36 2/36 for i = 2,

More information

Section 7.2 Homework Answers

Section 7.2 Homework Answers 25.5 30 Sample Mean P 0.1226 sum n b. The two z-scores are z 25 20(1.7) n 1.0 20 sum n 2.012 and z 30 20(1.7) n 1.0 0.894, 20 so the probability is approximately 0.1635 (0.1645 using Table A). P14. a.

More information

EECS 126 Probability and Random Processes University of California, Berkeley: Spring 2015 Abhay Parekh February 17, 2015.

EECS 126 Probability and Random Processes University of California, Berkeley: Spring 2015 Abhay Parekh February 17, 2015. EECS 126 Probability and Random Processes University of California, Berkeley: Spring 2015 Abhay Parekh February 17, 2015 Midterm Exam Last name First name SID Rules. You have 80 mins (5:10pm - 6:30pm)

More information

CHAPTER - 16 PROBABILITY Random Experiment : If an experiment has more than one possible out come and it is not possible to predict the outcome in advance then experiment is called random experiment. Sample

More information

Q1 Own your learning with flash cards.

Q1 Own your learning with flash cards. For this data set, find the mean, mode, median and inter-quartile range. 2, 5, 6, 4, 7, 4, 7, 2, 8, 9, 4, 11, 9, 9, 6 Q1 For this data set, find the sample variance and sample standard deviation. 89, 47,

More information

Part 3: Parametric Models

Part 3: Parametric Models Part 3: Parametric Models Matthew Sperrin and Juhyun Park August 19, 2008 1 Introduction There are three main objectives to this section: 1. To introduce the concepts of probability and random variables.

More information

ST 371 (V): Families of Discrete Distributions

ST 371 (V): Families of Discrete Distributions ST 371 (V): Families of Discrete Distributions Certain experiments and associated random variables can be grouped into families, where all random variables in the family share a certain structure and a

More information

1. If X has density. cx 3 e x ), 0 x < 0, otherwise. Find the value of c that makes f a probability density. f(x) =

1. If X has density. cx 3 e x ), 0 x < 0, otherwise. Find the value of c that makes f a probability density. f(x) = 1. If X has density f(x) = { cx 3 e x ), 0 x < 0, otherwise. Find the value of c that makes f a probability density. 2. Let X have density f(x) = { xe x, 0 < x < 0, otherwise. (a) Find P (X > 2). (b) Find

More information

Discrete Random Variables

Discrete Random Variables Discrete Random Variables An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2014 Introduction The markets can be thought of as a complex interaction of a large number of random

More information

1 Basic continuous random variable problems

1 Basic continuous random variable problems Name M362K Final Here are problems concerning material from Chapters 5 and 6. To review the other chapters, look over previous practice sheets for the two exams, previous quizzes, previous homeworks and

More information

324 Stat Lecture Notes (1) Probability

324 Stat Lecture Notes (1) Probability 324 Stat Lecture Notes 1 robability Chapter 2 of the book pg 35-71 1 Definitions: Sample Space: Is the set of all possible outcomes of a statistical experiment, which is denoted by the symbol S Notes:

More information

Introduction to Probability, Fall 2009

Introduction to Probability, Fall 2009 Introduction to Probability, Fall 2009 Math 30530 Review questions for exam 1 solutions 1. Let A, B and C be events. Some of the following statements are always true, and some are not. For those that are

More information

Question Bank In Mathematics Class IX (Term II)

Question Bank In Mathematics Class IX (Term II) Question Bank In Mathematics Class IX (Term II) PROBABILITY A. SUMMATIVE ASSESSMENT. PROBABILITY AN EXPERIMENTAL APPROACH. The science which measures the degree of uncertainty is called probability.. In

More information

HYPERGEOMETRIC and NEGATIVE HYPERGEOMETIC DISTRIBUTIONS

HYPERGEOMETRIC and NEGATIVE HYPERGEOMETIC DISTRIBUTIONS HYPERGEOMETRIC and NEGATIVE HYPERGEOMETIC DISTRIBUTIONS A The Hypergeometric Situation: Sampling without Replacement In the section on Bernoulli trials [top of page 3 of those notes], it was indicated

More information

Probability, Random Processes and Inference

Probability, Random Processes and Inference INSTITUTO POLITÉCNICO NACIONAL CENTRO DE INVESTIGACION EN COMPUTACION Laboratorio de Ciberseguridad Probability, Random Processes and Inference Dr. Ponciano Jorge Escamilla Ambrosio pescamilla@cic.ipn.mx

More information

Probability 5-4 The Multiplication Rules and Conditional Probability

Probability 5-4 The Multiplication Rules and Conditional Probability Outline Lecture 8 5-1 Introduction 5-2 Sample Spaces and 5-3 The Addition Rules for 5-4 The Multiplication Rules and Conditional 5-11 Introduction 5-11 Introduction as a general concept can be defined

More information

STAT 418: Probability and Stochastic Processes

STAT 418: Probability and Stochastic Processes STAT 418: Probability and Stochastic Processes Spring 2016; Homework Assignments Latest updated on April 29, 2016 HW1 (Due on Jan. 21) Chapter 1 Problems 1, 8, 9, 10, 11, 18, 19, 26, 28, 30 Theoretical

More information

To understand and analyze this test, we need to have the right model for the events. We need to identify an event and its probability.

To understand and analyze this test, we need to have the right model for the events. We need to identify an event and its probability. Probabilistic Models Example #1 A production lot of 10,000 parts is tested for defects. It is expected that a defective part occurs once in every 1,000 parts. A sample of 500 is tested, with 2 defective

More information

Bus 216: Business Statistics II Introduction Business statistics II is purely inferential or applied statistics.

Bus 216: Business Statistics II Introduction Business statistics II is purely inferential or applied statistics. Bus 216: Business Statistics II Introduction Business statistics II is purely inferential or applied statistics. Study Session 1 1. Random Variable A random variable is a variable that assumes numerical

More information

Probability. VCE Maths Methods - Unit 2 - Probability

Probability. VCE Maths Methods - Unit 2 - Probability Probability Probability Tree diagrams La ice diagrams Venn diagrams Karnough maps Probability tables Union & intersection rules Conditional probability Markov chains 1 Probability Probability is the mathematics

More information

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)

SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416) D. ARAPURA This is a summary of the essential material covered so far. The final will be cumulative. I ve also included some review problems

More information

We investigate the scientific validity of one aspect of the Chinese astrology: individuals

We investigate the scientific validity of one aspect of the Chinese astrology: individuals DO DRAGONS HAVE BETTER FATE? REVISITED USING THE U.S. DATA Dawit Senbet Department of Economics, University of Northern Colorado (Corresponding author) & Wei-Chiao Huang Department of Economics, Western

More information

6.2 Introduction to Probability. The Deal. Possible outcomes: STAT1010 Intro to probability. Definitions. Terms: What are the chances of?

6.2 Introduction to Probability. The Deal. Possible outcomes: STAT1010 Intro to probability. Definitions. Terms: What are the chances of? 6.2 Introduction to Probability Terms: What are the chances of?! Personal probability (subjective) " Based on feeling or opinion. " Gut reaction.! Empirical probability (evidence based) " Based on experience

More information

Tutorial for Lecture Course on Modelling and System Identification (MSI) Albert-Ludwigs-Universität Freiburg Winter Term

Tutorial for Lecture Course on Modelling and System Identification (MSI) Albert-Ludwigs-Universität Freiburg Winter Term Tutorial for Lecture Course on Modelling and System Identification (MSI) Albert-Ludwigs-Universität Freiburg Winter Term 2016-2017 Tutorial 3: Emergency Guide to Statistics Prof. Dr. Moritz Diehl, Robin

More information

STAT/SOC/CSSS 221 Statistical Concepts and Methods for the Social Sciences. Random Variables

STAT/SOC/CSSS 221 Statistical Concepts and Methods for the Social Sciences. Random Variables STAT/SOC/CSSS 221 Statistical Concepts and Methods for the Social Sciences Random Variables Christopher Adolph Department of Political Science and Center for Statistics and the Social Sciences University

More information

1 Basic continuous random variable problems

1 Basic continuous random variable problems Name M362K Final Here are problems concerning material from Chapters 5 and 6. To review the other chapters, look over previous practice sheets for the two exams, previous quizzes, previous homeworks and

More information

ECE 450 Lecture 2. Recall: Pr(A B) = Pr(A) + Pr(B) Pr(A B) in general = Pr(A) + Pr(B) if A and B are m.e. Lecture Overview

ECE 450 Lecture 2. Recall: Pr(A B) = Pr(A) + Pr(B) Pr(A B) in general = Pr(A) + Pr(B) if A and B are m.e. Lecture Overview ECE 450 Lecture 2 Recall: Pr(A B) = Pr(A) + Pr(B) Pr(A B) in general = Pr(A) + Pr(B) if A and B are m.e. Lecture Overview Conditional Probability, Pr(A B) Total Probability Bayes Theorem Independent Events

More information

Instructor Solution Manual. Probability and Statistics for Engineers and Scientists (4th Edition) Anthony Hayter

Instructor Solution Manual. Probability and Statistics for Engineers and Scientists (4th Edition) Anthony Hayter Instructor Solution Manual Probability and Statistics for Engineers and Scientists (4th Edition) Anthony Hayter 1 Instructor Solution Manual This instructor solution manual to accompany the fourth edition

More information

REPEATED TRIALS. p(e 1 ) p(e 2 )... p(e k )

REPEATED TRIALS. p(e 1 ) p(e 2 )... p(e k ) REPEATED TRIALS We first note a basic fact about probability and counting. Suppose E 1 and E 2 are independent events. For example, you could think of E 1 as the event of tossing two dice and getting a

More information

Properties of Summation Operator

Properties of Summation Operator Econ 325 Section 003/004 Notes on Variance, Covariance, and Summation Operator By Hiro Kasahara Properties of Summation Operator For a sequence of the values {x 1, x 2,..., x n, we write the sum of x 1,

More information

Probability: Why do we care? Lecture 2: Probability and Distributions. Classical Definition. What is Probability?

Probability: Why do we care? Lecture 2: Probability and Distributions. Classical Definition. What is Probability? Probability: Why do we care? Lecture 2: Probability and Distributions Sandy Eckel seckel@jhsph.edu 22 April 2008 Probability helps us by: Allowing us to translate scientific questions into mathematical

More information

CHAPTER 4 PROBABILITY AND PROBABILITY DISTRIBUTIONS

CHAPTER 4 PROBABILITY AND PROBABILITY DISTRIBUTIONS CHAPTER 4 PROBABILITY AND PROBABILITY DISTRIBUTIONS 4.2 Events and Sample Space De nition 1. An experiment is the process by which an observation (or measurement) is obtained Examples 1. 1: Tossing a pair

More information

MODULE 2 RANDOM VARIABLE AND ITS DISTRIBUTION LECTURES DISTRIBUTION FUNCTION AND ITS PROPERTIES

MODULE 2 RANDOM VARIABLE AND ITS DISTRIBUTION LECTURES DISTRIBUTION FUNCTION AND ITS PROPERTIES MODULE 2 RANDOM VARIABLE AND ITS DISTRIBUTION LECTURES 7-11 Topics 2.1 RANDOM VARIABLE 2.2 INDUCED PROBABILITY MEASURE 2.3 DISTRIBUTION FUNCTION AND ITS PROPERTIES 2.4 TYPES OF RANDOM VARIABLES: DISCRETE,

More information

Announcements. Lecture 5: Probability. Dangling threads from last week: Mean vs. median. Dangling threads from last week: Sampling bias

Announcements. Lecture 5: Probability. Dangling threads from last week: Mean vs. median. Dangling threads from last week: Sampling bias Recap Announcements Lecture 5: Statistics 101 Mine Çetinkaya-Rundel September 13, 2011 HW1 due TA hours Thursday - Sunday 4pm - 9pm at Old Chem 211A If you added the class last week please make sure to

More information

Math P (A 1 ) =.5, P (A 2 ) =.6, P (A 1 A 2 ) =.9r

Math P (A 1 ) =.5, P (A 2 ) =.6, P (A 1 A 2 ) =.9r Math 3070 1. Treibergs σιι First Midterm Exam Name: SAMPLE January 31, 2000 (1. Compute the sample mean x and sample standard deviation s for the January mean temperatures (in F for Seattle from 1900 to

More information