SUMMARY OF PROBABILITY CONCEPTS SO FAR (SUPPLEMENT FOR MA416)


D. ARAPURA

This is a summary of the essential material covered so far. The final will be cumulative. I've also included some review problems at about the level I would expect you to do.

1. Axioms for probability

A sample space consists of a set S of outcomes that we are interested in. A subset E ⊆ S is called an event. A probability measure on a sample space is an assignment of a number P(E) ∈ [0, 1], called the probability of E, subject to the following conditions:

(1) P(S) = 1.
(2) If E_1, E_2, ... are mutually exclusive events, i.e. E_i ∩ E_j = ∅ when i ≠ j, then P(∪ E_i) = Σ_i P(E_i).

Here E_1E_2 denotes E_1 ∩ E_2. One can deduce that P(E^c) = 1 − P(E). In particular, P(∅) = 1 − 1 = 0. Also, we have

    P(E ∪ F) = P(E) + P(F) − P(EF).

The simplest useful example of such a setup is when S has two elements, S = {Success, Failure}, with P(Success) = p ∈ [0, 1] and P(Failure) = 1 − p. In general, when S = {s_1, ..., s_n} is finite, the probability measure is an assignment of probabilities P(s_i) such that Σ P(s_i) = 1. The simplest case is when P(s_i) = 1/n, in which case we say that the outcomes are equally likely. Then

    P(E) = |E| / |S|.

So calculation of probability in this case becomes a counting problem.

Exercise 1. Two cards are dealt from a standard deck of 52. What's the probability of getting a pair (which means that both cards have the same value)?

The conditional probability of E given F is

    P(E | F) = P(EF) / P(F),

provided P(F) ≠ 0. The idea is that we know F occurs, so we replace the original sample space by F and adjust everything else.

Exercise 2. Two cards are dealt as above. What's the probability of getting a pair given that both cards are black (clubs or spades)?

Conditional probability comes up in Bayes' formula

    P(E) = P(E | F)P(F) + P(E | F^c)P(F^c).
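
To make the counting rule concrete, here is a minimal Python sketch (my addition, not part of the original notes) that builds the equally likely sample space for two dice and checks the identity P(E ∪ F) = P(E) + P(F) − P(EF); the two events are arbitrary choices:

    from fractions import Fraction
    from itertools import product

    # Equally likely sample space for rolling two dice.
    S = list(product(range(1, 7), repeat=2))

    def P(event):
        """P(E) = |E| / |S| when outcomes are equally likely."""
        return Fraction(len(event), len(S))

    E = {s for s in S if s[0] == 6}          # first die shows 6
    F = {s for s in S if s[0] + s[1] == 7}   # the sum is 7

    # Inclusion-exclusion: P(E u F) = P(E) + P(F) - P(EF).
    assert P(E.union(F)) == P(E) + P(F) - P(E.intersection(F))
    print(P(E), P(F), P(E.intersection(F)), P(E.union(F)))  # 1/6 1/6 1/36 11/36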

The following problem was from the first test.

Exercise 3. A farm produces two varieties of apples, 80% red and the rest yellow. 20% of red apples are not sweet, and 90% of the yellow are sweet. What's the probability that an apple coming from the farm is sweet?

Solution: We can find P(Sweet | Red) = .8 and P(Sweet | Red^c) = .9, so Bayes' formula says

    P(Sweet) = (.8)(.8) + (.9)(.2).

Two events E_1, E_2 are independent if P(E_1E_2) = P(E_1)P(E_2). This is equivalent to the statement about conditional probability

    P(E_1 | E_2) = P(E_1E_2) / P(E_2) = P(E_1).

This means that knowing E_2 has no effect on the value of P(E_1).

Exercise 4. Three coins are flipped. Let E be the event that we get exactly one tail among the first two flips, and let F be the event that we get exactly one tail among the last two flips. Show E and F are independent.

2. Random variables

A random variable X on a sample space is a numerical function on it. It should be thought of as a quantity that we want to measure, e.g., the number of heads, the sum of values of dice, the grade of a student... Typically we are interested in the probabilities of the events {s | X(s) = a} or {s | a ≤ X(s) ≤ b}, which we simply write as P(X = a) or P(a ≤ X ≤ b).

We distinguish between two types of random variables. X is discrete if it takes a finite or countable set of values x_0, x_1, .... In most cases the values are simply nonnegative integers, and we will assume this below. The opposite case is where X takes all values in some interval in R; then X is called continuous.

In the discrete case, much of the information we care about is given by the probability masses

    p_X(i) = P(X = i).

We usually write this as p(i) when X is understood. The axioms of probability imply that

    Σ_{s ∈ S} P(s) = Σ_{i=0}^∞ p(i) = 1.

Note that in practice this might be a finite sum. The expected value or expectation is the weighted average of the values of X:

(2.1)    E(X) = Σ_{s ∈ S} X(s) P(s).

This is also called the mean. By regrouping terms, the expectation can also be written as

    E(X) = Σ_{i=0}^∞ i p(i).

The variance

(2.2)    Var(X) = E[(X − E(X))^2] = E(X^2) − E(X)^2

measures the spread, in the sense that it is small if the values of X are concentrated near the mean. Chebyshev's inequality, discussed later, will give a more quantitative justification for this statement.
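
As a small illustration of these definitions (a Python sketch of mine, not from the notes), the pmf, mean, and variance of the number of heads in three fair coin flips can be tabulated straight from the sample space:

    from fractions import Fraction
    from itertools import product

    # pmf of X = number of heads in 3 fair coin flips, by counting outcomes.
    pmf = {}
    for flips in product("HT", repeat=3):
        x = flips.count("H")
        pmf[x] = pmf.get(x, 0) + Fraction(1, 8)

    mean = sum(i * p for i, p in pmf.items())        # E(X) = sum of i p(i)
    second = sum(i**2 * p for i, p in pmf.items())   # E(X^2)
    print(mean, second - mean**2)                    # 3/2 and 3/4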

Sometimes we use the standard deviation σ, which is √Var(X).

Exercise 5. Two dice are rolled. Let X be the sum of the two numbers on top. What's the expectation and the variance?

In the continuous case, instead of the mass function, we look at the cumulative distribution

    F_X(a) = F(a) = P(X ≤ a) = P({s | X(s) ≤ a})

and the probability density

    f_X(a) = f(a) = F'(a).

By the fundamental theorem of calculus,

    F(a) = ∫_{−∞}^a f(x) dx.

Note these functions satisfy: F is increasing, and

    lim_{a→∞} F(a) = ∫_{−∞}^∞ f(x) dx = 1.

The expectation is defined by

    E(X) = ∫_{−∞}^∞ x f(x) dx,

and the variance by the formula (2.2) above.

Exercise 6. Let

    f(x) = cx^2 if −1 ≤ x ≤ 1, and 0 otherwise.

Assuming this is a probability density, find c. What's the expectation and variance?

3. Binomial and other random variables

Here are some important examples.

(1) Suppose n independent experiments (or coin flips...) are performed, where the probability of success is p and of failure is 1 − p. Let X be the number of successes. Then

    p(i) = C(n, i) p^i (1 − p)^{n−i},

where C(n, i) = n!/(i!(n − i)!) is the binomial coefficient. X is called a binomial random variable with parameters (n, p). The binomial theorem gives

    (x + y)^n = Σ_{i=0}^n C(n, i) x^i y^{n−i}.

Setting x = p, y = 1 − p gives Σ p(i) = 1, as required. Differentiating (x + y)^n with respect to x, multiplying by x, and then setting x = p, y = 1 − p gives the expected value

    E(X) = Σ i p(i) = np.
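
The two binomial identities above (the pmf sums to 1, and E(X) = np) are easy to sanity-check numerically; here is a short sketch of mine using Python's math.comb for C(n, i), with n = 10 and p = 0.3 as arbitrary parameters:

    import math

    def binom_pmf(n, p, i):
        """p(i) = C(n, i) p^i (1 - p)^(n - i) for a binomial (n, p) variable."""
        return math.comb(n, i) * p**i * (1 - p)**(n - i)

    n, p = 10, 0.3
    total = sum(binom_pmf(n, p, i) for i in range(n + 1))     # should be 1
    mean = sum(i * binom_pmf(n, p, i) for i in range(n + 1))  # should be np
    print(total, mean, n * p)   # 1.0 3.0 3.0, up to floating-point error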

A similar trick allows us to compute the variance

    Var(X) = np(1 − p).

We will give a slicker derivation later.

(2) A Poisson random variable with parameter µ has probability mass

    p(i) = e^{−µ} µ^i / i!.

This is discrete, but takes infinitely many values. We have

    Σ_{i=0}^∞ p(i) = e^{−µ} Σ_{i=0}^∞ µ^i / i! = 1

by using the Taylor series for the exponential. We have

    E(X) = µ and Var(X) = µ.

Exercise 7. Check the first formula for E(X) by differentiating e^{λt} = Σ (λt)^i / i! and setting t = 1.

The Poisson distribution can be considered as a limiting case of the binomial distribution X_n with parameters (n, p) in the following sense. Setting µ = np to be the mean, p_{X_n}(i) is

    C(n, i) p^i (1 − p)^{n−i} = [n(n − 1)···(n − i + 1) / n^i] · (µ^i / i!) · (1 − µ/n)^{n−i} → e^{−µ} µ^i / i!

as n → ∞ while µ = np stays constant, since the first factor tends to 1 and (1 − µ/n)^{n−i} → e^{−µ} (a numerical illustration appears after example (4) below).

(3) A random variable X is uniformly distributed over an interval [a, b] if

    f(x) = 1/(b − a) if a ≤ x ≤ b, and 0 otherwise.

This is the continuous version of "outcomes are equally likely". We have

    E(X) = ∫ x f(x) dx = (1/(b − a)) ∫_a^b x dx = (a + b)/2

and

    Var(X) = (b − a)^2 / 12.

Exercise 8. Check the last formula.

(4) A random variable X is normally distributed with parameters µ and σ^2 if

    f(x) = (1/(σ√(2π))) e^{−(x−µ)^2 / 2σ^2}.

Some people call this the Gaussian distribution. Although this is more complicated than the others, it is one of the most important. The graph is the familiar bell curve with maximum at µ.
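
Returning to the Poisson limit above: it can be watched numerically. In this sketch (mine, with the arbitrary choices µ = 3 and i = 2), the binomial probabilities with p = µ/n drift toward e^{−µ} µ^i / i! as n grows:

    import math

    mu, i = 3.0, 2
    for n in [10, 100, 1000, 10000]:
        p = mu / n   # keep the mean np = mu fixed while n grows
        print(n, math.comb(n, i) * p**i * (1 - p)**(n - i))
    print("limit:", math.exp(-mu) * mu**i / math.factorial(i))   # about 0.2240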

The fact that

    ∫_{−∞}^∞ f(x) dx = 1

can be reduced, using the substitution u = (x − µ)/σ, to checking

    (1/√(2π)) ∫_{−∞}^∞ e^{−x^2/2} dx = 1.

However, the last integral requires a tricky calculation (rewrite its square as a double integral, and evaluate in polar coordinates); it cannot be done using just freshman calculus. We also have

    E(X) = µ and Var(X) = σ^2,

so that σ is the standard deviation. The first formula is easy to see: after doing the above substitution x → u, we can reduce to checking

    ∫_{−∞}^∞ x e^{−x^2/2} dx = 0.

But this is clear, because

    ∫_{−∞}^0 x e^{−x^2/2} dx and ∫_0^∞ x e^{−x^2/2} dx

cancel.

The normal distribution can also be viewed as a limiting case of a binomial distribution, but in a somewhat more complicated way than before. The precise statement is the DeMoivre-Laplace theorem: if X_n is binomial with parameters (n, p), set

    Y_n = (X_n − np) / √(np(1 − p));

then

    P(a ≤ Y_n ≤ b) → (1/√(2π)) ∫_a^b e^{−x^2/2} dx as n → ∞.

It says roughly that if you translate the binomial distribution so that the mean becomes zero, and then rescale it so the standard deviation is 1, it will converge to the normal distribution with µ = 0, σ = 1 as n → ∞. This is generalized by the central limit theorem given at the end of these notes.

We encountered some other examples in passing, such as the hypergeometric and exponential distributions. Although I won't expect you to know these for the exam, you might want to review them anyway.
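
The DeMoivre-Laplace theorem can also be checked numerically. The sketch below (my addition; p = 1/2 and the interval [−1, 1] are arbitrary choices) compares the exact binomial probability P(a ≤ Y_n ≤ b) with the normal integral, written via the error function:

    import math

    def Phi(z):
        """Standard normal CDF, expressed with the error function."""
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    p, a, b = 0.5, -1.0, 1.0
    for n in [50, 200, 1000]:
        mu, sigma = n * p, math.sqrt(n * p * (1 - p))
        exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
                    for k in range(n + 1)
                    if a <= (k - mu) / sigma <= b)
        # Each exact sum lands in the vicinity of Phi(1) - Phi(-1) = 0.6827.
        print(n, exact, Phi(b) - Phi(a))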

4. Joint distributions of 2 random variables

Given two random variables X and Y, we might ask whether there is some relationship between them. This leads us to look at joint distributions. In the discrete case, we consider the joint probability mass

    p_{XY}(i, j) = P(X = i, Y = j) = P({s | X(s) = i, Y(s) = j}).

Usually, we just write p(i, j). This determines the individual masses. For example,

    p_X(i) = Σ_{j=0}^∞ p(i, j).

X and Y are called independent random variables if X = i and Y = j are independent events, or equivalently

    p(i, j) = p_X(i) p_Y(j).

Roughly, this means that X gives no information about Y. By contrast, X and Y = 2X are clearly dependent. Defining the conditional probability mass as

    p_{X|Y}(i | j) = P(X = i | Y = j) = p(i, j) / p_Y(j),

we see that independence means that p_{X|Y}(i | j) = p_X(i).

Exercise 9. Roll two dice. Let X be the number on the first die, and let Y be the sum of the numbers on both dice. Compute p(i, j) and p_{X|Y}. Are X and Y independent?

In the continuous case, we use the joint probability density f(x, y), which satisfies

    P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_a^b ∫_c^d f(x, y) dy dx.

The individual densities are determined by

    f_X(x) = ∫_{−∞}^∞ f(x, y) dy,

etc. Independence means f(x, y) = f_X(x) f_Y(y), or equivalently that f_{X|Y}(x, y) = f_X(x), where the conditional probability density is

    f_{X|Y}(x, y) = f(x, y) / f_Y(y).

Exercise 10. Let X, Y have a joint density

    f(x, y) = c e^{−x−5y} if 0 < x < ∞ and y > 0, and 0 otherwise,

for suitable c. Find c. What's f_{X|Y}? Are X and Y independent?

The conditional expectation is

    E(X | Y = j) = Σ_i i p_{X|Y}(i | j)

in the discrete case, and

    E(X | Y = y) = ∫ x f_{X|Y}(x, y) dx

in the continuous case.

Exercise 11. Find E(X | Y = 1), where X and Y are defined in Exercise 10.
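
Marginals, conditional masses, and the independence test p(i, j) = p_X(i) p_Y(j) are all mechanical once the joint table is known. Here is a tiny Python sketch (mine; the joint pmf is made up purely for illustration):

    from fractions import Fraction as F

    # A made-up joint pmf p(i, j); the four values sum to 1.
    p = {(0, 0): F(1, 8), (0, 1): F(3, 8),
         (1, 0): F(1, 8), (1, 1): F(3, 8)}

    pX, pY = {}, {}
    for (i, j), v in p.items():
        pX[i] = pX.get(i, 0) + v   # p_X(i) = sum over j of p(i, j)
        pY[j] = pY.get(j, 0) + v   # p_Y(j) = sum over i of p(i, j)

    independent = all(p[i, j] == pX[i] * pY[j] for (i, j) in p)
    cond = {(i, j): p[i, j] / pY[j] for (i, j) in p}   # p_{X|Y}(i | j)
    print(independent, cond[(0, 1)], pX[0])            # True 1/2 1/2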

5. Functions of a random variable

Given a discrete random variable X and a function g from the set of nonnegative integers to itself,

    E(g(X)) = Σ_{s ∈ S} g(X(s)) P(s) = Σ_i g(i) p(i).

In the continuous case, we have similarly

    E(g(X)) = ∫_{−∞}^∞ g(x) f(x) dx.

Exercise 12. Let R be a uniformly distributed random variable in [0, 1]. What's the expected value of the area of a circle of radius R?

There is a version of this for two variables:

    E(g(X, Y)) = Σ_i Σ_j g(i, j) p(i, j)

and

    E(g(X, Y)) = ∫∫ g(x, y) f(x, y) dx dy.

Of special importance is the case g(X, Y) = aX + bY for constants a, b; then these formulas say

    E(aX + bY) = aE(X) + bE(Y).

Note that this also follows immediately from (2.1) in the discrete case. For example, we used this implicitly to go between the two formulas for variance. If we write µ = E(X) for the mean, then

    Var(X) = E((X − µ)^2) = E(X^2 − 2µX + µ^2) = E(X^2) − 2µE(X) + µ^2 = E(X^2) − E(X)^2.

For another example, consider a binomial random variable X which counts the number of successes in n independent trials with probability p of success. Let

    I_i = 1 if the ith trial is a success, and 0 otherwise.

One can see that X = I_1 + ... + I_n. We have E(I_i) = (0)P(I_i = 0) + (1)P(I_i = 1) = P(I_i = 1) = p. So we recover the fact that

    E(X) = p + ... + p = np.

Given a continuous random variable X and a decreasing or increasing differentiable function g(x), the density of Y = g(X) is given by

    f_Y(y) = f_X(g^{−1}(y)) |d/dy g^{−1}(y)| if g^{−1}(y) is defined, and 0 otherwise.

There are a couple of additional related things that we covered, and that you should be aware of; however, I won't expect you to know these for the exam.

(1) We found the density of X + Y (it's the convolution of the densities of X and Y).
(2) If Y_1 = g_1(X_1, X_2) and Y_2 = g_2(X_1, X_2), we have a transformation formula for the new joint density f_{Y_1,Y_2} in terms of the old f_{X_1,X_2}.
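
The change-of-variables formula above is easy to test by simulation. In this sketch of mine, X is uniform on (0, 1) and g(x) = x^2, so the formula predicts f_Y(y) = 1/(2√y) and hence F_Y(y) = √y on (0, 1); the empirical CDF of simulated values should agree:

    import math
    import random

    random.seed(0)
    n = 100_000
    ys = [random.random() ** 2 for _ in range(n)]   # Y = g(X) = X^2

    # Predicted: f_Y(y) = f_X(g^{-1}(y)) |d/dy g^{-1}(y)| = 1/(2 sqrt(y)),
    # so F_Y(y) = sqrt(y).  Compare with the empirical CDF at a few points.
    for y in [0.1, 0.25, 0.5, 0.9]:
        empirical = sum(v <= y for v in ys) / n
        print(y, empirical, math.sqrt(y))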

6. Covariance

The covariance of two random variables X and Y is

    Cov(X, Y) = E[(X − E(X))(Y − E(Y))]
              = E(XY) − E(X)E(Y) − E(X)E(Y) + E(X)E(Y)
              = E(XY) − E(X)E(Y).

To understand what this tells us, observe that given independent random variables X and Y,

    E(XY) = ∫∫ xy f(x, y) dx dy = ∫∫ xy f_X(x) f_Y(y) dx dy = E(X)E(Y).

Therefore Cov(X, Y) = 0 when X and Y are independent; equivalently, X and Y are dependent if Cov(X, Y) ≠ 0. If Cov(X, Y) = 0, then we say that X and Y are uncorrelated, but be aware that this does not imply that X and Y are independent (see the example after Exercise 13 below). The correlation is

    ρ(X, Y) = Cov(X, Y) / (σ_X σ_Y),

where σ_X, σ_Y are the standard deviations. We have −1 ≤ ρ(X, Y) ≤ 1.

Covariance can help with computing variance. We have that

    Var(X_1 + X_2 + ... + X_n) = Σ_{i,j} Cov(X_i, X_j) = Σ_{i=1}^n Var(X_i) + 2 Σ_{i<j} Cov(X_i, X_j).

Therefore

    Var(X_1 + X_2 + ... + X_n) = Σ_{i=1}^n Var(X_i)

if X_1, ..., X_n are pairwise independent. As an example, we can calculate the variance of a binomial random variable X with parameters n and p. As before, X = I_1 + ... + I_n, where I_i is 1 or 0 depending on whether the ith trial is a success or failure. Then

    Var(I_i) = E(I_i^2) − E(I_i)^2 = p − p^2 = p(1 − p).

We can see that I_i and I_j are independent when i ≠ j. Therefore

    Var(X) = Σ Var(I_i) = np(1 − p).

Exercise 13. A jar contains 3 red and 2 blue marbles. Draw 2 marbles with replacement (which means the first marble is returned to the jar before drawing the second). Let I_i = 1 if the ith marble is red and 0 otherwise, and let X = I_1 + I_2 = the number of reds drawn. Use independence of I_1 and I_2 to calculate Var(X) with the formula above. Now assume no replacement. Are I_1 and I_2 still independent? Calculate Var(X) in this case.
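
The warning that uncorrelated does not imply independent has a standard tiny example: X uniform on {−1, 0, 1} with Y = X^2. A quick Python check (my sketch, not from the notes):

    from fractions import Fraction as F

    pairs = [(-1, 1), (0, 0), (1, 1)]   # equally likely (x, y) = (x, x^2) outcomes
    P = F(1, 3)

    EX = sum(x * P for x, _ in pairs)
    EY = sum(y * P for _, y in pairs)
    EXY = sum(x * y * P for x, y in pairs)
    print(EXY - EX * EY)                # Cov(X, Y) = 0, so uncorrelated

    # Yet P(X = 0, Y = 0) = 1/3 while P(X = 0) P(Y = 0) = 1/9: not independent.
    print(F(1, 3), F(1, 3) * F(1, 3))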

7. Moment generating functions

Given a random variable X, the moment generating function is

    M(t) = M_X(t) = E(e^{tX}).

When X is continuous,

    M(t) = ∫_{−∞}^∞ e^{tx} f(x) dx.

Differentiating gives

    M'(t) = ∫ x e^{tx} f(x) dx,

and again

    M''(t) = ∫ x^2 e^{tx} f(x) dx.

Setting t = 0 gives

    E(X) = M'(0) and E(X^2) = M''(0).

The last expression is called the second moment. Both formulas are valid for discrete random variables also. We can use these formulas to (re)calculate the expectation and variance for various distributions.

(1) Let X be binomial with parameters (n, p). Then

    M(t) = Σ_i e^{ti} C(n, i) p^i (1 − p)^{n−i} = (pe^t + 1 − p)^n.

Then

    M'(t) = n(pe^t + 1 − p)^{n−1} pe^t
    M''(t) = n(n − 1)(pe^t + 1 − p)^{n−2} (pe^t)^2 + n(pe^t + 1 − p)^{n−1} pe^t,

so

    E(X) = np, E(X^2) = n(n − 1)p^2 + np, Var(X) = E(X^2) − E(X)^2 = np(1 − p).

(2) Let X be Poisson with parameter λ. Then

    M(t) = Σ_{i=0}^∞ e^{ti} e^{−λ} λ^i / i! = exp[λ(e^t − 1)].

So

    M'(t) = λe^t exp[λ(e^t − 1)]
    M''(t) = (λe^t)^2 exp[λ(e^t − 1)] + λe^t exp[λ(e^t − 1)].

Setting t = 0 gives

    E(X) = λ and E(X^2) = λ^2 + λ.

Therefore Var(X) = λ.
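
The differentiate-and-set-t-to-0 routine used in examples (1) and (2) can be delegated to a computer algebra system. A sketch of mine, assuming SymPy is available, recovering the binomial mean and variance from M(t) = (pe^t + 1 − p)^n:

    import sympy as sp

    t, p = sp.symbols('t p', positive=True)
    n = sp.Symbol('n', positive=True, integer=True)

    M = (p * sp.exp(t) + 1 - p) ** n      # binomial moment generating function
    EX = sp.diff(M, t).subs(t, 0)         # E(X) = M'(0)
    EX2 = sp.diff(M, t, 2).subs(t, 0)     # E(X^2) = M''(0)
    # Prints n*p and an expression equal to n*p*(1 - p).
    print(EX, sp.simplify(EX2 - EX**2))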

(3) Let X be normal with mean µ = 0 and variance σ^2 = 1. Then

    M(t) = (1/√(2π)) ∫_{−∞}^∞ e^{tx} e^{−x^2/2} dx = (1/√(2π)) ∫_{−∞}^∞ e^{tx − x^2/2} dx.

By completing the square and simplifying, we get

    M(t) = e^{t^2/2}.

So

    M'(t) = t e^{t^2/2}
    M''(t) = t^2 e^{t^2/2} + e^{t^2/2}.

Therefore

    E(X) = 0, E(X^2) = 1, Var(X) = 1.

Besides the above examples, moment generating functions are used in the proof of the central limit theorem given later.

8. Law of large numbers

This is sometimes called the law of averages in ordinary speech. Given a sequence of independent random variables X_1, X_2, ... with the same distribution, and therefore the same mean µ, set

    X̄_n = (X_1 + X_2 + ... + X_n) / n

to be the average of the first n of them. This is also called the sample average. The law of large numbers says roughly that X̄_n → µ as n → ∞. There are a couple of ways to make this mathematically precise.

THEOREM 8.1 (Weak law of large numbers). For any ε > 0, the probability

    P(|X̄_n − µ| ≥ ε) → 0

as n → ∞.

THEOREM 8.2 (Strong law of large numbers).

    P(lim_{n→∞} X̄_n = µ) = 1.

The weak law says the value of X̄_n is likely to be close to µ as n gets larger. The strong law actually says that the limit lim X̄_n is the constant function µ almost everywhere, i.e. the probability that it fails to equal µ is zero.

To understand what the strong law tells us, suppose that the same experiment is repeated n times, and these repetitions are independent. Let E be some event whose probability we are interested in finding. Set

    X_i = 1 if E occurs in the ith experiment, and 0 otherwise.

We used this sort of random variable before, and we have seen that E(X_i) = P(E). The strong law says that for large n,

    X̄_n = (1/n)(# of times E occurs after n trials)

gives a good approximation for P(E).
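
This frequency interpretation is easy to watch in simulation; in this sketch of mine, the running fraction of heads in simulated fair-coin flips drifts toward P(heads) = 1/2:

    import random

    random.seed(1)
    flips = [random.random() < 0.5 for _ in range(100_000)]   # True = heads

    for n in [10, 100, 1000, 10_000, 100_000]:
        xbar = sum(flips[:n]) / n    # sample average of the indicator variables
        print(n, xbar)               # drifts toward 0.5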

This justifies our intuitive belief that if we flip a coin many times, we should get heads half the time. Although the strong law is the more useful result, the weak law is a lot easier to prove. We do this below. The proof is based on:

THEOREM 8.3 (Markov's inequality). If X is a nonnegative random variable and a > 0, then

    P(X ≥ a) ≤ E(X)/a.

THEOREM 8.4 (Chebyshev's inequality). If X has finite mean µ and variance σ^2, then for any k > 0,

    P(|X − µ| ≥ k) ≤ σ^2/k^2.

To prove the weak law, assuming further that each X_i has finite variance σ^2, we note that

    E(X̄_n) = (1/n) E(X_1 + ... + X_n) = (1/n) Σ E(X_i) = µ

and

    Var(X̄_n) = (1/n^2) Var(X_1 + ... + X_n) = (1/n^2) Σ Var(X_i) = σ^2/n.

Applying Chebyshev to X̄_n with k = ε gives

(8.1)    P(|X̄_n − µ| ≥ ε) ≤ σ^2/(nε^2).

The right hand side goes to 0 as n → ∞.

The proof gives useful information: namely, it says how large we need to take n to get a desired bound. Let's say we have a coin that's biased, which means that the probability of heads p is not 1/2. To estimate p, we can flip it n times and use

    p ≈ (# heads)/n.

Suppose we wanted to be sure that our estimate is accurate to one decimal place with 90% confidence. Applying (8.1) with ε = 0.05 tells us we need

    σ^2/(nε^2) ≤ 0.1.

Of course σ^2 = p(1 − p) is not known, but we can see from calculus that it is maximized at p = 1/2, so that σ^2 ≤ 1/4. So we need

    n ≥ 0.25 / ((0.1)ε^2) = 1000.
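
Both the bound and its looseness are easy to compute. This sketch of mine evaluates the sample size from (8.1) and then the exact binomial failure probability for a fair coin, which comes out far below the guaranteed 0.1 (Chebyshev is crude but fully general):

    import math

    # Sample size from (8.1): sigma^2/(n eps^2) <= 0.1, worst case sigma^2 = 1/4.
    eps, delta = 0.05, 0.1
    n = math.ceil(0.25 / (delta * eps**2))
    print(n)   # 1000

    # Exact failure probability P(|X_bar - p| >= eps) for a fair coin, n = 1000.
    p, fail = 0.5, 0.0
    for k in range(n + 1):
        if abs(k / n - p) >= eps:
            fail += math.comb(n, k) * p**k * (1 - p)**(n - k)
    print(fail)   # about 0.0017, far below 0.1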

9. Central limit theorem

Suppose we have a sequence of independent random variables X_1, X_2, ... with the same distribution as before. Assume that the mean µ and variance σ^2 are finite. The law of large numbers says that

    (X_1 + X_2 + ... + X_n)/n ≈ µ

for large n. To get a sense of how good this is, subtract these to get

    (X_1 + X_2 + ... + X_n − nµ)/n.

We can see that the variance of this is σ^2/n, which goes to 0. Let us rescale this to

    Y_n = (X_1 + X_2 + ... + X_n − nµ) / (σ√n),

so that Y_n has variance 1 and mean 0 for all n.

THEOREM 9.1 (Central limit theorem). Y_n converges to a normal random variable with mean 0 and variance 1, i.e.

    lim_{n→∞} P(a ≤ Y_n ≤ b) = (1/√(2π)) ∫_a^b e^{−x^2/2} dx.

This, along with the law of large numbers, is the most important theorem in this class. When the X_n are binomial, this is just the DeMoivre-Laplace theorem, but the point is that the conclusion holds without knowing anything about the actual distribution. It explains why normal distributions occur so often in practice.

Exercise 14. Suppose that 20 dice are rolled, and X is the total (the sum of the numbers on each die). Estimate P(90 ≤ X ≤ 110) using the above theorem.

Solution: Let X_i be the number on the ith die. We have

    E(X_i) = Σ_{i=1}^6 i/6 = 7/2 and Var(X_i) = Σ_{i=1}^6 i^2/6 − (7/2)^2 = 35/12.

By the central limit theorem,

    P(90 ≤ X ≤ 110) = P(a ≤ (X − 70)/√(20 · 35/12) ≤ b) ≈ (1/√(2π)) ∫_a^b e^{−x^2/2} dx,

where a = (90 − 70)/√(700/12) and b = (110 − 70)/√(700/12). Note this integral is easy to compute using software, or using a calculator and tables.
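
Following the solution's suggestion to finish with software, here is a one-screen Python evaluation of the normal integral (a sketch of mine, using the error function):

    import math

    def Phi(z):
        """Standard normal CDF, written via the error function."""
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    mu = 20 * 3.5                      # E(X) = 70
    sigma = math.sqrt(20 * 35 / 12)    # standard deviation of the total, about 7.64
    a = (90 - mu) / sigma
    b = (110 - mu) / sigma
    print(Phi(b) - Phi(a))             # roughly 0.0044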
