Stat 134 Fall 2011: Notes on generating functions
Michael Lugo, October 2011

1 Definitions

Given a random variable $X$ which always takes on a nonnegative integer value, we define the probability generating function
$$f_X(z) = P(X=0) + P(X=1)z + P(X=2)z^2 + \cdots = \sum_{k \ge 0} P(X=k)z^k.$$
This infinite sum converges at least for $|z| \le 1$; in particular
$$f_X(1) = \sum_{k \ge 0} P(X=k) = 1.$$
But we won't usually worry about questions of convergence.

The nice thing about generating functions is that they let us take a whole sequence of numbers and consolidate them into a single function; as Herb Wilf puts it in his book generatingfunctionology, "A generating function is a clothesline on which we hang up a sequence of numbers for display." Then we can use the tools of calculus on that function. Since we know calculus, this is useful.

In particular, if we want to know $E(X)$, $E(X^2)$, and so on, we can find them by taking successive derivatives of $f_X$. Differentiating term by term gives
$$\frac{d}{dz}f_X(z) = \frac{d}{dz}\left[\sum_k P(X=k)z^k\right] = \sum_k P(X=k)\frac{d}{dz}z^k = \sum_k P(X=k)\,k z^{k-1},$$
and so
$$f_X'(1) = \sum_k P(X=k)\,k = E(X).$$
To find $E(X^2)$ is a bit harder. The obvious thing to do is differentiate twice, but this gives
$$\frac{d^2}{dz^2}f_X(z) = \frac{d^2}{dz^2}\left[\sum_k P(X=k)z^k\right] = \sum_k P(X=k)\frac{d^2}{dz^2}z^k = \sum_k P(X=k)\,k(k-1)z^{k-2}.$$
Therefore letting $z = 1$ gives
$$f_X''(1) = \sum_k P(X=k)\,k(k-1) = E(X(X-1)).$$
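To make the derivative formulas concrete, here is a minimal sympy sketch; it is my own illustration rather than part of the original notes, using a fair six-sided die as the test distribution:

```python
# Sketch only: check f'(1) = E(X) and f''(1) = E(X(X-1)) on a toy example.
import sympy as sp

z = sp.symbols('z')
# Fair die: P(X = k) = 1/6 for k = 1, ..., 6 (chosen purely for illustration).
f = sum(sp.Rational(1, 6) * z**k for k in range(1, 7))

EX = sp.diff(f, z).subs(z, 1)       # f'(1) = E(X) = 7/2
EXX1 = sp.diff(f, z, 2).subs(z, 1)  # f''(1) = E(X(X-1)) = 35/3
print(EX, EXX1)
```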
But we need not despair! We have
$$E(X^2) = E(X(X-1) + X) = E(X(X-1)) + E(X) = f_X''(1) + f_X'(1),$$
and if we recall that $\mathrm{Var}(X) = E(X^2) - E(X)^2$, we can derive the formula
$$\mathrm{Var}(X) = f_X''(1) + f_X'(1) - f_X'(1)^2.$$
We won't often have use for higher moments, but let's consider how we could find $E(X^3)$. Differentiating three times gives $f^{(3)}(1) = E(X(X-1)(X-2))$. It turns out that $x^3 = x(x-1)(x-2) + 3x(x-1) + x$, as you can easily verify. Thus we get the formula
$$E(X^3) = f^{(3)}(1) + 3f''(1) + f'(1).$$
This continues. In general
$$\sum_{k=0}^n \left\{{n \atop k}\right\}(x)_k = x^n,$$
where $(x)_k = x(x-1)\cdots(x-k+1)$ and $\left\{{n \atop k}\right\}$ denotes a "Stirling number of the second kind"; this is the number of ways to partition a set of $n$ objects into $k$ nonempty subsets, although for our purposes we can just think of these as the numbers that make this identity hold. Therefore
$$E(X^n) = \sum_{k=1}^n \left\{{n \atop k}\right\} f_X^{(k)}(1).$$
Incidentally, you might also write $f_X(z) = E(z^X)$, where the exponent is a random variable; this means that a lot of what we say here about discrete random variables can carry over to continuous random variables.

2 Finding some generating functions

We know some distributions, so let's find their generating functions and then use these to derive moments.

The Bernoulli distribution. Recall that the Bernoulli distribution with parameter $p$ has $P(X=0) = 1-p$, $P(X=1) = p$. Therefore the generating function is $(1-p)z^0 + pz^1$, or $pz + q$, where $q = 1-p$. This gives $f_X'(z) = p$, $f_X''(z) = 0$, and so we can use the formulas $E(X) = f_X'(1)$, $\mathrm{Var}(X) = f_X''(1) + f_X'(1) - f_X'(1)^2$ to get $E(X) = p$, $\mathrm{Var}(X) = 0 + p - p^2 = p(1-p) = pq$. But we already knew these.
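Before moving on, here is a quick check of the higher-moment machinery from the previous section; again my own sketch (sympy assumed available), reusing the fair-die test case:

```python
# Sketch only: verify E(X^3) = f'''(1) + 3 f''(1) + f'(1) on a fair die.
import sympy as sp

z = sp.symbols('z')
f = sum(sp.Rational(1, 6) * z**k for k in range(1, 7))   # fair-die PGF

d1 = sp.diff(f, z).subs(z, 1)
d2 = sp.diff(f, z, 2).subs(z, 1)
d3 = sp.diff(f, z, 3).subs(z, 1)

EX3_via_gf = d3 + 3*d2 + d1
EX3_direct = sum(sp.Rational(1, 6) * k**3 for k in range(1, 7))
assert EX3_via_gf == EX3_direct   # both equal 147/2
print(EX3_via_gf)
```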
The geometric distribution. This is the time until the first head if we flip a coin with probability $p$ of heads. This distribution has $P(X=k) = q^{k-1}p$ for $k \ge 1$, and so
$$f_X(z) = \sum_{k \ge 1} q^{k-1}pz^k = \sum_{k \ge 1} (qz)^{k-1}\,pz.$$
By the usual formula for the sum of a geometric series, we get $f_X(z) = pz/(1-qz)$. Differentiating once gives
$$f_X'(z) = \frac{(1-qz)p - pz\cdot(-q)}{(1-qz)^2} = p(1-qz)^{-2}, \qquad f_X'(1) = p(1-q)^{-2} = p/p^2 = 1/p,$$
and so $E(X) = 1/p$. Differentiating again gives
$$f_X''(z) = 2pq(1-qz)^{-3}, \qquad f_X''(1) = 2pq(1-q)^{-3} = 2pq/p^3 = 2q/p^2.$$
We can put this all together to get
$$\mathrm{Var}(X) = f_X''(1) + f_X'(1) - f_X'(1)^2 = \frac{2q}{p^2} + \frac{1}{p} - \frac{1}{p^2} = \frac{q}{p^2}.$$

The Poisson distribution. The Poisson random variable has $P(X=k) = e^{-\lambda}\lambda^k/k!$. Therefore we have
$$f_X(z) = \sum_k e^{-\lambda}\frac{\lambda^k}{k!}z^k = e^{-\lambda}\sum_k \frac{(\lambda z)^k}{k!}.$$
Recognizing the sum as the Taylor series for $e^{\lambda z}$, we get
$$f_X(z) = e^{-\lambda}e^{\lambda z} = e^{\lambda(z-1)}.$$
Therefore $f_X'(z) = \lambda e^{\lambda(z-1)}$ and $f_X''(z) = \lambda^2 e^{\lambda(z-1)}$. Thus $E(X) = \lambda$, $E(X(X-1)) = \lambda^2$, $E(X^2) = \lambda^2 + \lambda$, and $\mathrm{Var}(X) = E(X^2) - E(X)^2 = \lambda$. The mean and variance of the Poisson are both $\lambda$, which we already knew.
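As a check of this computation, here is a sketch of mine (not from the notes) in which sympy evaluates the exponential series and recovers both moments:

```python
# Sketch only: sympy recognizes the exponential series and recovers the moments.
import sympy as sp

z = sp.symbols('z')
lam = sp.symbols('lambda', positive=True)
k = sp.symbols('k', integer=True, nonnegative=True)

f = sp.exp(-lam) * sp.summation((lam*z)**k / sp.factorial(k), (k, 0, sp.oo))
print(sp.simplify(f))                  # exp(lambda*(z - 1))

EX = sp.diff(f, z).subs(z, 1)          # lambda
EXX1 = sp.diff(f, z, 2).subs(z, 1)     # lambda**2 = E(X(X-1))
print(sp.simplify(EX), sp.simplify(EXX1 + EX - EX**2))   # mean and variance: both lambda
```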
The uniform distribution. Somewhat surprisingly, it's difficult to use generating functions to get facts about the uniform distribution. The generating function of the uniform distribution on $0, 1, 2, \ldots, n-1$ is
$$f_X(z) = \sum_{k=0}^{n-1}\frac{1}{n}z^k = \frac{1}{n}\left(1 + z + \cdots + z^{n-1}\right) = \frac{1-z^n}{n(1-z)}.$$
If you differentiate this you get
$$f_X'(z) = \frac{1}{n}\cdot\frac{(n-1)z^n - nz^{n-1} + 1}{(1-z)^2}.$$
But how can we evaluate this at $z = 1$? It appears to be $0/0$. We take the limit as $z \to 1$:
$$E(X) = \frac{1}{n}\lim_{z \to 1}\frac{(n-1)z^n - nz^{n-1} + 1}{(1-z)^2}.$$
We then apply l'Hopital's rule twice to get
$$E(X) = \frac{1}{n}\lim_{z \to 1}\frac{(n-1)n(n-1)z^{n-2} - n(n-1)(n-2)z^{n-3}}{2}.$$
Plug in $z = 1$ and simplify to get $E(X) = (n-1)/2$. The variance can be found in the same way: we have
$$f_X''(z) = \frac{(n-1)(n-2)z^n - 2n(n-2)z^{n-1} + n(n-1)z^{n-2} - 2}{n(z-1)^3}$$
and we take the limit as $z \to 1$. We have to apply l'Hopital's rule three times to get
$$f_X''(1) = \frac{(n-1)(n-2)n(n-1)(n-2) - 2n(n-2)(n-1)(n-2)(n-3) + n(n-1)(n-2)(n-3)(n-4)}{6n},$$
and after much simplification this is $(n-1)(n-2)/3$. Finally
$$\mathrm{Var}(X) = \frac{(n-1)(n-2)}{3} + \frac{n-1}{2} - \left(\frac{n-1}{2}\right)^2 = \frac{n^2-1}{12}.$$
But there are easier ways to get this; see, for example, Pitman, Exercise 3.30.

The convolution formula

One nice thing about generating functions is that they play well with respect to multiplication. In particular, if $X$ and $Y$ are independent, and we have $S = X + Y$, then
$$f_S(z) = f_X(z)f_Y(z).$$
That is, the generating function of the sum is the product of the generating functions. To prove this fact, we show that the coefficient of $z^k$ in $f_S(z)$ is the same as that in $f_X(z)f_Y(z)$. The coefficient of $z^k$ in $f_X(z)f_Y(z)$ is $\sum_{j=0}^k P(X=j)\,P(Y=k-j)$. But since $X$ and $Y$ are independent, this is $\sum_{j=0}^k P(X=j,\,Y=k-j)$. Finally, the event $S = k$ can be broken down into disjoint events:
$$\{S=k\} = \{X=0, Y=k\} \cup \{X=1, Y=k-1\} \cup \{X=2, Y=k-2\} \cup \cdots \cup \{X=k, Y=0\},$$
and so $P(S=k)$ is the sum of the probabilities of these subevents. So the coefficients of $z^k$ in $f_S(z)$ and $f_X(z)f_Y(z)$ are the same for all $k$; thus the functions are the same.

Of course this can be extended to a sum of any finite number of random variables. In particular, if $S = X_1 + \cdots + X_n$ where $X_1, \ldots, X_n$ each have the distribution of $X$ and the $X_i$ are independent, then $f_S(z) = f_X(z)^n$.

Binomial distribution. A binomial$(n, p)$ random variable is the sum of $n$ Bernoulli$(p)$ random variables; therefore its generating function is $f_X(z) = (pz+q)^n$, the $n$th power of that of the Bernoulli. So we have
$$f_X'(z) = np(pz+q)^{n-1}, \qquad f_X''(z) = n(n-1)p^2(pz+q)^{n-2},$$
and in particular $f_X'(1) = np$, $f_X''(1) = n(n-1)p^2$. Thus $E(X) = np$ and
$$\mathrm{Var}(X) = n(n-1)p^2 + np - (np)^2 = n^2p^2 - np^2 + np - n^2p^2 = np - np^2 = np(1-p) = npq.$$
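The convolution formula is easy to test numerically. In this sketch of mine (numpy assumed available), numpy's convolve multiplies the generating functions coefficient by coefficient and confirms that $\mathrm{Bin}(4, p) + \mathrm{Bin}(6, p) = \mathrm{Bin}(10, p)$:

```python
# Sketch only: coefficients of f_X(z) f_Y(z) via convolution of the two PMFs.
from math import comb
import numpy as np

p, n1, n2 = 0.3, 4, 6   # arbitrary illustration parameters

pmf_X = np.array([comb(n1, k) * p**k * (1 - p)**(n1 - k) for k in range(n1 + 1)])
pmf_Y = np.array([comb(n2, k) * p**k * (1 - p)**(n2 - k) for k in range(n2 + 1)])

pmf_S = np.convolve(pmf_X, pmf_Y)   # coefficient sequence of f_X(z) * f_Y(z)
pmf_bin = np.array([comb(n1 + n2, k) * p**k * (1 - p)**(n1 + n2 - k)
                    for k in range(n1 + n2 + 1)])
print(np.allclose(pmf_S, pmf_bin))  # True: the sum is Bin(10, 0.3)
```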
Negative binomial distribution. Consider the waiting time until the $r$th success in a series of independent trials, each of which succeeds with probability $p$. The overall waiting time is the sum of $r$ waiting times, each of which is geometric with parameter $p$. The generating function of this waiting time $T$ is therefore
$$f_T(z) = \left(\frac{pz}{1-qz}\right)^r.$$
We can find the mean and the variance of $T$ from this. A useful trick is to note that $\frac{d}{dz}\log f_T(z) = f_T'(z)/f_T(z)$; this is known as logarithmic differentiation. Therefore we have
$$\log f_T(z) = r\left(\log p + \log z - \log(1-qz)\right),$$
and differentiating gives
$$\frac{f_T'(z)}{f_T(z)} = r\left(\frac{1}{z} + \frac{q}{1-qz}\right).$$
Letting $z = 1$ gives $f_T'(1)/f_T(1) = r(1 + q/(1-q))$; simplifying gives $f_T'(1)/f_T(1) = r/p$. Since $f_T(1) = 1$ we have $f_T'(1) = r/p$. Finding the variance of $T$ is left as an exercise. (This should be read as: I'm writing these notes on a Friday afternoon and have done enough calculus for one day.)

3 Alternative proofs of some facts

When is the sum of two binomials a binomial? I claimed in class that the sum of two independent binomials is a binomial only if they have the same success probability. We can prove this using generating functions. Let $X \sim \mathrm{Bin}(n_1, p_1)$ and $Y \sim \mathrm{Bin}(n_2, p_2)$. Then they have generating functions $(p_1 z + q_1)^{n_1}$ and $(p_2 z + q_2)^{n_2}$, respectively. The sum $S = X + Y$ has generating function
$$f_S(z) = (p_1 z + q_1)^{n_1}(p_2 z + q_2)^{n_2},$$
which is of the form $(pz+q)^n$ (necessarily with $n = n_1 + n_2$) only if $p_1 = p_2$. One way to see this is to note that $f_S(z)$ has real zeros at $z = -q_1/p_1$ and $z = -q_2/p_2$. If $p_1 \ne p_2$ then $f_S$ has two distinct real zeros, while $(pz+q)^n$ has only one.
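A sketch of mine illustrating the two-roots argument on concrete numbers, with $X \sim \mathrm{Bin}(2, 1/2)$ and $Y \sim \mathrm{Bin}(3, 1/4)$ chosen arbitrarily:

```python
# Sketch only: distinct real roots show Bin(2, 1/2) + Bin(3, 1/4) is not binomial.
import sympy as sp

z = sp.symbols('z')
half, quarter = sp.Rational(1, 2), sp.Rational(1, 4)
f_S = (half*z + half)**2 * (quarter*z + 3*quarter)**3

print(sp.roots(sp.expand(f_S), z))   # {-1: 2, -3: 3}; a binomial PGF (pz+q)^n
                                     # would have one root of multiplicity n
```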
Means and variances add. Say $X$ and $Y$ are independent random variables with $S = X + Y$. If $X, Y$ have generating functions, then we can show that $E(X+Y) = E(X) + E(Y)$ and $\mathrm{Var}(X+Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$ purely by calculus. In particular, we have $f_S(z) = f_X(z)f_Y(z)$ by the convolution formula above. Differentiating once gives
$$f_S'(z) = f_X'(z)f_Y(z) + f_X(z)f_Y'(z),$$
and if we let $z = 1$ we get
$$f_S'(1) = f_X'(1)f_Y(1) + f_X(1)f_Y'(1).$$
But since $f_X, f_Y$ are probability generating functions, $f_X(1) = f_Y(1) = 1$, and so we have $f_S'(1) = f_X'(1) + f_Y'(1)$. In terms of expectations this is just $E(S) = E(X) + E(Y)$.

Similarly, we have
$$f_S''(z) = f_X''(z)f_Y(z) + 2f_X'(z)f_Y'(z) + f_X(z)f_Y''(z),$$
and at $z = 1$ this becomes
$$f_S''(1) = f_X''(1) + 2f_X'(1)f_Y'(1) + f_Y''(1).$$
Combining this with the known expression for $f_S'(1)$ we get
$$\mathrm{Var}(S) = f_X''(1) + 2f_X'(1)f_Y'(1) + f_Y''(1) + f_X'(1) + f_Y'(1) - \left(f_X'(1) + f_Y'(1)\right)^2.$$
After some rearrangement this becomes
$$\left(f_X''(1) + f_X'(1) - f_X'(1)^2\right) + \left(f_Y''(1) + f_Y'(1) - f_Y'(1)^2\right),$$
and this is clearly $\mathrm{Var}(X) + \mathrm{Var}(Y)$.

Another proof of the square root law. If $S = X_1 + \cdots + X_n$, and all the $X_i$ are independent and have the distribution of $X$, then $f_S(z) = f_X(z)^n$. Differentiating both sides of this identity and substituting $z = 1$ gives
$$f_S'(1) = nf_X(1)^{n-1}f_X'(1) = nf_X'(1),$$
which has the probabilistic interpretation $E(S) = nE(X)$. Differentiating twice gives
$$f_S''(z) = n(n-1)f_X(z)^{n-2}f_X'(z)^2 + nf_X(z)^{n-1}f_X''(z),$$
and letting $z = 1$ gives
$$f_S''(1) = n(n-1)f_X'(1)^2 + nf_X''(1).$$
Therefore the variance of $S$ is
$$\mathrm{Var}(S) = f_S''(1) + f_S'(1) - f_S'(1)^2 = n(n-1)f_X'(1)^2 + nf_X''(1) + nf_X'(1) - n^2f_X'(1)^2.$$
After simplifying we get
$$\mathrm{Var}(S) = n\left(f_X''(1) + f_X'(1) - f_X'(1)^2\right) = n\,\mathrm{Var}(X),$$
and taking square roots gives $\mathrm{SD}(S) = \sqrt{n}\,\mathrm{SD}(X)$, the square root law.
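A quick Monte Carlo illustration of the square root law, my own sketch with arbitrarily chosen parameters:

```python
# Sketch only: Monte Carlo check that SD(S) = sqrt(n) * SD(X).
import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 0.25, 100, 200_000

X = rng.geometric(p, size=(trials, n))  # geometric(p) summands on {1, 2, ...}
S = X.sum(axis=1)

sd_X = np.sqrt((1 - p) / p**2)          # exact SD of one geometric(p) summand
print(S.std(), np.sqrt(n) * sd_X)       # both close to 34.64
```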
4 A couple of frivolous results

A power series identity from the negative binomial. Recall that for the waiting time until the $r$th success in independent trials with success probability $p$, we have
$$P(T_r = t) = \binom{t-1}{r-1}p^r q^{t-r}.$$
But we also know that the generating function of $T_r$ is $(pz/(1-qz))^r$. Therefore, the coefficient of $z^t$ in the Taylor expansion of $(pz/(1-qz))^r$ is $\binom{t-1}{r-1}p^r q^{t-r}$. We write this fact as
$$[z^t]\left(\frac{pz}{1-qz}\right)^r = \binom{t-1}{r-1}p^r q^{t-r},$$
where we use $[z^k]f(z)$ to stand for the coefficient of $z^k$ in the Taylor expansion of $f(z)$. Some simple manipulation gives
$$[z^t]\frac{p^r z^r}{(1-qz)^r} = \binom{t-1}{r-1}p^r q^{t-r} \quad\Longrightarrow\quad [z^{t-r}]\frac{1}{(1-qz)^r} = \binom{t-1}{r-1}q^{t-r},$$
and if we let $t = r + k$, we get
$$[z^k]\frac{1}{(1-qz)^r} = \binom{k+r-1}{r-1}q^k.$$
Finally, summing over $k$, we get
$$\frac{1}{(1-qz)^r} = \sum_k \binom{k+r-1}{r-1}q^k z^k.$$
For example, we have
$$\frac{1}{(1-qz)^3} = \binom{2}{2} + \binom{3}{2}(qz) + \binom{4}{2}(qz)^2 + \binom{5}{2}(qz)^3 + \cdots = 1 + 3(qz) + 6(qz)^2 + 10(qz)^3 + \cdots.$$
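sympy can expand $1/(1-qz)^r$ and compare coefficients against $\binom{k+r-1}{r-1}q^k$; a sketch of mine with $r = 3$ and $q$ left symbolic:

```python
# Sketch only: coefficients of 1/(1-qz)^r match binomial(k+r-1, r-1) q^k.
import sympy as sp

z, q = sp.symbols('z q')
r = 3
expansion = sp.series(1 / (1 - q*z)**r, z, 0, 5).removeO()

for k in range(5):
    coeff = expansion.coeff(z, k)
    expected = sp.binomial(k + r - 1, r - 1) * q**k
    assert sp.simplify(coeff - expected) == 0
print(sp.expand(expansion))   # 1 + 3*q*z + 6*q**2*z**2 + 10*q**3*z**3 + 15*q**4*z**4
```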
Remainders when flipping coins. You flip a coin, which comes up heads with probability $p$, $n$ times. The probability that the number of heads obtained is even is $(f_X(1) + f_X(-1))/2$, where $X$ is a binomial random variable; this was a homework problem. Since $f_X(z) = (pz+q)^n$, this works out to
$$\frac{(p+q)^n + (-p+q)^n}{2} = \frac{1 + (1-2p)^n}{2}.$$
So for a large number of coin flips (large $n$), this will be close to $1/2$; if $p = 1/2$ this will be exactly $1/2$.

But what if we want to know the probability that we obtain a number of heads which is a multiple of 4? This is not, in general, $1/4$. For example, if we flip four fair coins the probability of getting a multiple of 4 heads is $\left(\binom{4}{0} + \binom{4}{4}\right)/2^4 = 2/16$; if we flip five fair coins it's $\left(\binom{5}{0} + \binom{5}{4}\right)/2^5 = 6/32$. We can evaluate $f_X(z)$ at each of the fourth roots of unity to get
$$f_X(1) = p_0 + p_1 + p_2 + p_3 + p_4 + p_5 + \cdots$$
$$f_X(i) = p_0 + ip_1 - p_2 - ip_3 + p_4 + ip_5 - \cdots$$
$$f_X(-1) = p_0 - p_1 + p_2 - p_3 + p_4 - p_5 + \cdots$$
$$f_X(-i) = p_0 - ip_1 - p_2 + ip_3 + p_4 - ip_5 - \cdots$$
where $p_j = P(X = j)$. Adding all four of these together we get
$$f_X(1) + f_X(i) + f_X(-1) + f_X(-i) = 4(p_0 + p_4 + p_8 + \cdots),$$
and all the other $p_j$ cancel out. Therefore we have a formula good for any random variable,
$$P(X \text{ is divisible by } 4) = \frac{f_X(1) + f_X(i) + f_X(-1) + f_X(-i)}{4},$$
and in the case where $X \sim \mathrm{Bin}(n, 1/2)$ this is
$$P(X \text{ is divisible by } 4) = \frac{1 + \left(\frac{1+i}{2}\right)^n + 0^n + \left(\frac{1-i}{2}\right)^n}{4}.$$
Since $|(1+i)/2| < 1$, the second and fourth terms in the numerator go away for large $n$; if we flip a large number of coins, the probability that the number of heads is divisible by 4 goes to $1/4$ as $n \to \infty$.
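Finally, a small check of mine comparing the roots-of-unity formula with direct summation, for $X \sim \mathrm{Bin}(5, 1/2)$ as in the five-coin example above:

```python
# Sketch only: P(X divisible by 4) via fourth roots of unity vs. direct sum.
from math import comb

n = 5
f = lambda z: ((z + 1) / 2) ** n   # f_X(z) = (pz + q)^n with p = q = 1/2

via_roots = (f(1) + f(1j) + f(-1) + f(-1j)) / 4
direct = sum(comb(n, k) for k in range(0, n + 1, 4)) / 2**n
print(via_roots.real, direct)      # both 0.1875, i.e. 6/32
```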