Generating and Characteristic Functions


September 2013

Contents: Probability generating function · Moment generating function · Power series expansion · Characteristic function · Characteristic function and moments · Convolution and unicity · Inversion · Joint characteristic functions

Probability generating function

Let X be a nonnegative integer-valued random variable. The probability generating function of X is defined to be

    G_X(s) := E(s^X) = ∑_{k≥0} s^k P(X = k).

- If X takes a finite number of values, the above expression is a finite sum.
- Otherwise, it is a series that converges at least for s ∈ [-1, 1], and sometimes on a larger interval.

If X takes a finite number of values x_0, x_1, ..., x_n, then G_X(s) is a polynomial:

    G_X(s) = ∑_{k=0}^{n} s^k P(X = k) = P(X = 0) + P(X = 1) s + ··· + P(X = n) s^n.
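
As a quick illustration (not part of the original notes; the die example and the choice of Python with SymPy/NumPy/SciPy throughout these sketches are my own), here is a short sketch that builds the PGF of a fair six-sided die, a finite case, and checks that G_X(1) = 1.

    # Sketch: PGF of a fair die, G_X(s) = sum_k s^k P(X = k), and the check G_X(1) = 1.
    import sympy as sp

    s = sp.symbols('s')
    pmf = {k: sp.Rational(1, 6) for k in range(1, 7)}   # P(X = k) = 1/6, k = 1..6

    G_X = sum(p * s**k for k, p in pmf.items())         # a polynomial of degree 6
    print(sp.expand(G_X))
    print(G_X.subs(s, 1))                               # 1, since the probabilities sum to 1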

If X takes a countable number of values x_0, x_1, ..., x_k, ..., then

    G_X(s) = ∑_{k≥0} s^k P(X = k) = P(X = 0) + P(X = 1) s + ··· + P(X = k) s^k + ···

is a series that converges at least for |s| ≤ 1, because

    |∑_{k≥0} s^k P(X = k)| ≤ ∑_{k≥0} |s|^k P(X = k) ≤ ∑_{k≥0} P(X = k) = 1.

Example. Let X be a Bernoulli random variable, X ~ B(p), with P(X = 0) = q and P(X = 1) = p. We have

    G_X(s) = ∑_{k≥0} s^k P(X = k) = q + sp,   s ∈ R.

Example. Let X ~ Bin(n, p). Then P(X = k) = C(n, k) p^k q^{n-k}, k = 0, 1, ..., n, and

    G_X(s) = ∑_{k=0}^{n} s^k C(n, k) p^k q^{n-k} = ∑_{k=0}^{n} C(n, k) (sp)^k q^{n-k} = (q + sp)^n,   s ∈ R.

Example. Let X ~ Poiss(λ), so P(X = k) = e^{-λ} λ^k / k!, k = 0, 1, .... Then

    G_X(s) = ∑_{k≥0} s^k P(X = k) = e^{-λ} ∑_{k≥0} (sλ)^k / k! = e^{-λ} e^{sλ} = e^{λ(s-1)},   s ∈ R.
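
A small SymPy sketch (my own addition, with n fixed at 5) confirming the binomial closed form (q + sp)^n by expanding the defining sum:

    # Sketch: the Bin(n, p) PGF from its defining sum equals (q + sp)^n.
    import sympy as sp

    s, p = sp.symbols('s p', positive=True)
    q = 1 - p
    n = 5

    G_from_sum = sum(sp.binomial(n, k) * (s*p)**k * q**(n - k) for k in range(n + 1))
    print(sp.expand(G_from_sum - (q + s*p)**n))   # 0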

Example. X ~ Geom(p), with P(X = k) = q^{k-1} p, k = 1, 2, ..., 0 < p < 1. We have

    G_X(s) = ∑_{k≥1} s^k P(X = k) = ∑_{k≥1} s^k q^{k-1} p = sp ∑_{k≥1} (sq)^{k-1} = sp / (1 - sq),   |s| < 1/q.

Unicity

If two nonnegative, integer-valued random variables have the same generating function, then they follow the same probability law.

Theorem. Let X and Y be nonnegative integer-valued random variables such that G_X(s) = G_Y(s). Then P(X = k) = P(Y = k) for all k ≥ 0.

The result is a special case of the uniqueness theorem for power series.

Theorem (Convolution). If X and Y are independent random variables and Z = X + Y, then

    G_Z(s) = G_X(s) G_Y(s).

Proof: G_Z(s) = E(s^Z) = E(s^{X+Y}) = E(s^X s^Y) = E(s^X) E(s^Y) = G_X(s) G_Y(s).

Example. Let X ~ Bin(n, p), Y ~ Bin(m, p) be independent random variables and let Z = X + Y. We have

    G_Z(s) = G_X(s) G_Y(s) = (q + sp)^n (q + sp)^m = (q + sp)^{n+m}.

Observe that G_Z(s) is the probability generating function of a Bin(n + m, p) random variable. By the unicity theorem, X + Y ~ Bin(n + m, p).
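
A numeric check of this example (my own addition; n, m and p are arbitrary): convolving the two binomial pmfs should reproduce the Bin(n + m, p) pmf.

    # Sketch: pmf of X + Y for independent Bin(n, p) and Bin(m, p) vs. Bin(n + m, p).
    import numpy as np
    from scipy import stats

    n, m, p = 4, 6, 0.3
    pmf_X = stats.binom.pmf(np.arange(n + 1), n, p)
    pmf_Y = stats.binom.pmf(np.arange(m + 1), m, p)

    pmf_Z = np.convolve(pmf_X, pmf_Y)          # distribution of X + Y
    print(np.allclose(pmf_Z, stats.binom.pmf(np.arange(n + m + 1), n + m, p)))   # True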

More generally, let X_1, X_2, ..., X_n be independent, nonnegative, integer-valued random variables and set S_n = X_1 + X_2 + ··· + X_n. Then

    G_{S_n}(s) = ∏_{k=1}^{n} G_{X_k}(s).

A case of particular importance is:

Corollary. If, in addition, X_1, X_2, ..., X_n are equidistributed, with common probability generating function G_X(s), then

    G_{S_n}(s) = (G_X(s))^n.

Example: Negative binomial probability law

A biased coin such that P(heads) = p is repeatedly tossed until a total amount of k heads has been obtained. Let X be the number of tosses. Notice that

    X = X_1 + X_2 + ··· + X_k,

where X_i ~ Geom(p) is the number of tosses between the (i-1)-th and the i-th head. As X_1, ..., X_k are independent and identically distributed, we can apply the convolution theorem. Thus,

    G_X(s) = G_{X_1}(s) G_{X_2}(s) ··· G_{X_k}(s) = (G_{X_1}(s))^k = (sp / (1 - sq))^k,   |s| < 1/q.
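
A quick Monte Carlo check of this representation (my own addition; p, k and the sample size are arbitrary): summing k independent geometric waiting times and comparing one point probability with the negative binomial law derived next.

    # Sketch: sum of k iid Geom(p) tosses-until-head vs. the negative binomial pmf.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    k, p, N = 3, 0.4, 200_000

    tosses = rng.geometric(p, size=(N, k)).sum(axis=1)   # total tosses until the k-th head
    n = 7
    print(np.mean(tosses == n))                          # empirical P(X = 7)
    print(stats.nbinom.pmf(n - k, k, p))                 # exact value (scipy counts the n - k failures)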

Recall that if α ∈ R, then the Taylor series expansion about 0 of the function (1 + x)^α is, for x ∈ (-1, 1),

    (1 + x)^α = 1 + αx + (α(α-1)/2!) x^2 + ··· + (α(α-1)···(α-r+1)/r!) x^r + ···

We can write

    (1 + x)^α = ∑_{r≥0} C(α, r) x^r,   where   C(α, r) = α(α-1)···(α-r+1)/r!.

Consider the series expansion of G_X(s):

    G_X(s) = (sp)^k (1 - sq)^{-k} = (sp)^k ∑_{r=0}^{∞} C(-k, r) (-sq)^r,

where

    C(-k, r) = (-k)(-k-1)···(-k-r+1)/r! = (-1)^r C(k+r-1, r).

Therefore,

    G_X(s) = ∑_{r=0}^{∞} C(k+r-1, r) p^k q^r s^{k+r} = ∑_{n=k}^{∞} C(n-1, k-1) p^k q^{n-k} s^n.

Hence,

    P(X = n) = 0                            for n < k,
    P(X = n) = C(n-1, k-1) p^k q^{n-k}      for n = k, k+1, k+2, ...

This is the negative binomial probability law.

Properties

- G_X(0) = P(X = 0).
- G_X(1) = 1. We have

    G_X(1) = [∑_{k≥0} s^k P(X = k)]_{s=1} = ∑_{k≥0} P(X = k) = 1.
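
As a sanity check (my own addition), the pmf just derived can be compared with SciPy's negative binomial, which counts the n - k failures rather than the n trials:

    # Sketch: C(n-1, k-1) p^k q^{n-k} vs. scipy.stats.nbinom for a few values of n.
    from math import comb
    import numpy as np
    from scipy import stats

    k, p = 3, 0.4
    q = 1 - p
    for n in range(k, k + 6):
        pmf_formula = comb(n - 1, k - 1) * p**k * q**(n - k)
        pmf_scipy = stats.nbinom.pmf(n - k, k, p)
        print(n, np.isclose(pmf_formula, pmf_scipy))     # True for every n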

Properties

Proposition. Let R be the radius of convergence of G_X(s). If R > 1, then E(X) = G_X'(1). Indeed,

    G_X'(s) = d/ds ∑_{k≥0} s^k P(X = k) = ∑_{k≥1} k s^{k-1} P(X = k).

Hence,

    G_X'(1) = ∑_{k≥1} k P(X = k) = E(X).

More generally:

Proposition.
- E(X) = G_X'(1) = lim_{s→1^-} G_X'(s)
- E(X(X-1)···(X-k+1)) = G_X^{(k)}(1) = lim_{s→1^-} G_X^{(k)}(s)

Example. Let X ~ Bin(n, p).

    E(X) = G_X'(1) = d/ds (q + sp)^n |_{s=1} = np (q + sp)^{n-1} |_{s=1} = np (q + p)^{n-1} = np.

Example. Let X ~ Poiss(λ).

    E(X) = G_X'(1) = d/ds e^{λ(s-1)} |_{s=1} = λ e^{λ(s-1)} |_{s=1} = λ.

Analogously,

    E(X(X-1)) = G_X''(1) = λ^2 e^{λ(s-1)} |_{s=1} = λ^2.

Hence, E(X^2) = λ^2 + λ and Var(X) = E(X^2) - (E(X))^2 = λ.
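
The Poisson computation is easy to reproduce symbolically (my own sketch):

    # Sketch: E(X) and Var(X) for Poisson(lambda) from PGF derivatives at s = 1.
    import sympy as sp

    s, lam = sp.symbols('s lam', positive=True)
    G = sp.exp(lam*(s - 1))                       # Poisson PGF

    EX = sp.diff(G, s).subs(s, 1)                 # lambda
    EX2 = sp.diff(G, s, 2).subs(s, 1) + EX        # E(X^2) = E(X(X-1)) + E(X)
    print(EX, sp.simplify(EX2 - EX**2))           # lambda, lambda (mean and variance)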

Example. X ~ Geom(p).

    E(X) = G_X'(1) = d/ds [sp / (1 - sq)] |_{s=1} = p / (1 - sq)^2 |_{s=1} = 1/p.

Analogously,

    E(X(X-1)) = G_X''(1) = 2pq / (1 - sq)^3 |_{s=1} = 2q/p^2.

Therefore,

    E(X^2) = 2q/p^2 + 1/p,   Var(X) = E(X^2) - (E(X))^2 = q/p^2.

Example. Let X be a negative binomial random variable.

    E(X) = G_X'(1) = d/ds (sp / (1 - sq))^k |_{s=1} = k (sp / (1 - sq))^{k-1} · p / (1 - sq)^2 |_{s=1} = k/p.

This result can also be obtained from X = X_1 + ··· + X_k, with each X_i ~ Geom(p):

    E(X) = ∑_{i=1}^{k} E(X_i) = k/p.

Moment generating function

The moment generating function of a random variable X is defined as

    φ_X(t) := E(e^{tX}) = ∑_i e^{t x_i} P(X = x_i),   if X is discrete,
    φ_X(t) := E(e^{tX}) = ∫ e^{tx} f_X(x) dx,         if X is continuous,

provided that the sum or the integral converges.
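
A SymPy version of the geometric computation (my own sketch):

    # Sketch: mean and variance of Geom(p) on {1, 2, ...} from its PGF.
    import sympy as sp

    s, p = sp.symbols('s p', positive=True)
    q = 1 - p
    G = s*p / (1 - s*q)

    EX = sp.simplify(sp.diff(G, s).subs(s, 1))            # 1/p
    EXX1 = sp.simplify(sp.diff(G, s, 2).subs(s, 1))       # E(X(X-1)) = 2q/p^2
    Var = sp.simplify(EXX1 + EX - EX**2)                  # q/p^2
    print(EX, Var)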

Unicity

The moment generating function specifies uniquely the probability distribution.

Theorem. Let X and Y be random variables. If there exists h > 0 such that φ_X(t) = φ_Y(t) for |t| < h, then X and Y are identically distributed.

Example. Let X ~ Bin(n, p).

    φ_X(t) = ∑_{k=0}^{n} e^{tk} P(X = k) = ∑_{k=0}^{n} C(n, k) (p e^t)^k q^{n-k} = (q + p e^t)^n,   t ∈ R.

Example. Let X ~ Exp(µ).

    φ_X(t) = ∫ e^{tx} f_X(x) dx = ∫_0^∞ µ e^{-(µ-t)x} dx = µ / (µ - t),   t < µ.

For continuous random variables, φ_X(t) is related to the Laplace transform of f_X(x).

Example. Let X ~ Poiss(λ).

    φ_X(t) = ∑_{k≥0} e^{tk} P(X = k) = e^{-λ} ∑_{k≥0} (λ e^t)^k / k! = e^{-λ} e^{λ e^t} = e^{λ(e^t - 1)},   t ∈ R.
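
A numeric spot-check of the binomial MGF (my own sketch; n, p and the t values are arbitrary):

    # Sketch: E[e^{tX}] computed from the Bin(n, p) pmf vs. the closed form (q + p e^t)^n.
    import numpy as np
    from scipy import stats

    n, p = 7, 0.35
    q = 1 - p
    k = np.arange(n + 1)
    pmf = stats.binom.pmf(k, n, p)

    for t in (-1.0, 0.0, 0.5, 1.5):
        mgf_sum = np.sum(np.exp(t*k) * pmf)           # the definition
        mgf_closed = (q + p*np.exp(t))**n
        print(t, np.isclose(mgf_sum, mgf_closed))     # True for every t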

Example. Let Z ~ N(0, 1).

    φ_Z(t) = ∫ e^{tz} f_Z(z) dz = (1/√(2π)) ∫ e^{tz - z^2/2} dz
           = e^{t^2/2} (1/√(2π)) ∫ e^{-(z-t)^2/2} dz = e^{t^2/2},   t ∈ R,

because

    (1/√(2π)) ∫ e^{-(z-t)^2/2} dz = P(-∞ < N(t, 1) < ∞) = 1.

More generally, if X = σZ + m, then X ~ N(m, σ^2). We have

    φ_X(t) = E(e^{tX}) = E(e^{t(σZ + m)}) = e^{tm} E(e^{tσZ}) = e^{tm} φ_Z(σt) = e^{σ^2 t^2/2 + tm}.

Power series expansion

Notice that

    φ_X'(t) = d/dt E(e^{tX}) = E(d/dt e^{tX}) = E(X e^{tX}).

Therefore, φ_X'(0) = E(X). Analogously,

    φ_X''(t) = d/dt E(X e^{tX}) = E(X^2 e^{tX}),

thus φ_X''(0) = E(X^2).

For instance, if X ~ Exp(µ),

    φ_X(t) = µ / (µ - t),   t < µ,

and

    E(X) = φ_X'(0) = d/dt [µ / (µ - t)] |_{t=0} = µ / (µ - t)^2 |_{t=0} = 1/µ.

Analogously,

    E(X^2) = φ_X''(0) = d/dt [µ / (µ - t)^2] |_{t=0} = 2µ / (µ - t)^3 |_{t=0} = 2/µ^2.
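
The exponential example, reproduced symbolically (my own sketch):

    # Sketch: E(X) and E(X^2) for Exp(mu) from MGF derivatives at t = 0.
    import sympy as sp

    t, mu = sp.symbols('t mu', positive=True)
    phi = mu / (mu - t)                      # MGF of Exp(mu), valid for t < mu

    EX = sp.diff(phi, t).subs(t, 0)          # 1/mu
    EX2 = sp.diff(phi, t, 2).subs(t, 0)      # 2/mu^2
    print(sp.simplify(EX), sp.simplify(EX2))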

Power series expansion

More generally,

    φ_X(t) = E(e^{tX}) = E(1 + tX + (tX)^2/2! + ··· + (tX)^k/k! + ···)
           = 1 + E(X) t + E(X^2) t^2/2! + ··· + E(X^k) t^k/k! + ···

This is the power series expansion of φ_X(t),

    φ_X(t) = ∑_k φ_X^{(k)}(0) t^k / k!.

If φ_X(t) converges on some open interval containing the origin t = 0, then X has moments of any order,

    E(X^k) = φ_X^{(k)}(0),   and   φ_X(t) = ∑_k E(X^k) t^k / k!.

For instance, let X ~ Exp(µ). If |t| < µ we have

    φ_X(t) = µ / (µ - t) = 1 / (1 - t/µ) = 1 + t/µ + (t/µ)^2 + ···

Hence,

    E(X^n) / n! = 1/µ^n,

therefore E(X^n) = n!/µ^n.

The convolution theorem applies also to moment generating functions.

Theorem. Let X_1, X_2, ..., X_n be independent random variables and let S = X_1 + X_2 + ··· + X_n. Then,

    φ_S(t) = ∏_{k=1}^{n} φ_{X_k}(t).
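
Reading the moments of Exp(µ) off the Taylor coefficients, as in the slide (my own sketch):

    # Sketch: E(X^n) = n! * (coefficient of t^n in the MGF series) for Exp(mu).
    import sympy as sp

    t, mu = sp.symbols('t mu', positive=True)
    phi = mu / (mu - t)

    series = sp.series(phi, t, 0, 5).removeO()
    for n in range(5):
        moment = series.coeff(t, n) * sp.factorial(n)
        print(n, sp.simplify(moment))        # n!/mu^n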

Example. X ~ Poiss(λ_X), Y ~ Poiss(λ_Y), independent. Let Z = X + Y. We have

    φ_Z(t) = φ_X(t) φ_Y(t) = e^{λ_X (e^t - 1)} e^{λ_Y (e^t - 1)} = e^{(λ_X + λ_Y)(e^t - 1)}.

Hence, Z ~ Poiss(λ_X + λ_Y).

Example. X ~ N(m_X, σ_X^2), Y ~ N(m_Y, σ_Y^2), independent. Let Z = X + Y. We have

    φ_Z(t) = φ_X(t) φ_Y(t) = e^{σ_X^2 t^2/2 + t m_X} e^{σ_Y^2 t^2/2 + t m_Y} = e^{(σ_X^2 + σ_Y^2) t^2/2 + t(m_X + m_Y)}.

Therefore, Z ~ N(m_X + m_Y, σ_X^2 + σ_Y^2).

Characteristic function

The characteristic function of a random variable X is the complex-valued function of the real argument ω defined as

    M_X : R → C,   ω ↦ M_X(ω) := E(e^{iωX}) = E(cos(ωX)) + i E(sin(ωX)).
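
A numeric check of the Poisson example (my own sketch; λ_X = 2, λ_Y = 3 and the truncation point are arbitrary):

    # Sketch: pmf of X + Y for independent Poisson(2), Poisson(3) vs. Poisson(5).
    import numpy as np
    from scipy import stats

    lam_x, lam_y, N = 2.0, 3.0, 60              # truncate at N; the tail mass is negligible
    k = np.arange(N + 1)
    pmf_z = np.convolve(stats.poisson.pmf(k, lam_x), stats.poisson.pmf(k, lam_y))[:N + 1]
    print(np.allclose(pmf_z, stats.poisson.pmf(k, lam_x + lam_y)))   # True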

Therefore,

    M_X(ω) = ∑_k e^{iω x_k} P(X = x_k),   if X is discrete,
    M_X(ω) = ∫ e^{iωx} f_X(x) dx,         if X is continuous.

Properties

- The characteristic function exists for all ω and for all random variables.
- If X is continuous, M_X(ω) is the Fourier transform of f_X(x). (Notice the change of sign from the usual definition.)
- If X is discrete, M_X(ω) is related to Fourier series.
- |M_X(ω)| ≤ M_X(0) = 1 for all ω ∈ R:

    |M_X(ω)| = |E(e^{iωX})| ≤ E(|e^{iωX}|) = E(1) = 1.

  On the other hand, M_X(0) = E(e^{i·0·X}) = E(1) = 1.
- M_X(-ω) = M_X(ω)*, the complex conjugate:

    M_X(-ω) = E(e^{-iωX}) = E(cos(ωX)) - i E(sin(ωX)) = (E(cos(ωX)) + i E(sin(ωX)))* = M_X(ω)*.

- M_X(ω) is uniformly continuous in R.

Example. Let X ~ Binom(n, p). Then M_X(ω) = (p e^{iω} + q)^n.

Example. If X ~ Poiss(λ), then M_X(ω) = e^{λ(e^{iω} - 1)}.

Example. If X ~ N(m, σ^2), then M_X(ω) = e^{iωm - σ^2 ω^2/2}.
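
A quick numeric illustration of the binomial characteristic function and the bound |M_X(ω)| ≤ 1 (my own sketch):

    # Sketch: E[e^{i w X}] from the Bin(n, p) pmf vs. (p e^{iw} + q)^n, plus |M_X(w)| <= 1.
    import numpy as np
    from scipy import stats

    n, p = 6, 0.25
    k = np.arange(n + 1)
    pmf = stats.binom.pmf(k, n, p)

    for w in (0.0, 0.7, 2.0, np.pi):
        cf_sum = np.sum(np.exp(1j*w*k) * pmf)
        cf_closed = (p*np.exp(1j*w) + 1 - p)**n
        print(np.isclose(cf_sum, cf_closed), abs(cf_sum) <= 1 + 1e-12)   # True True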

Characteristic function and moments

If E(|X|^n) < ∞ for some n = 1, 2, ..., then

    M_X(ω) = ∑_{k=0}^{n} E(X^k) (iω)^k / k! + o(ω^n)   as ω → 0.

So,

    E(X^k) = M_X^{(k)}(0) / i^k   for k = 1, 2, ..., n.

In particular, if E(X) = 0 and Var(X) = σ^2, then

    M_X(ω) = 1 - σ^2 ω^2 / 2 + o(ω^2)   as ω → 0.

Indeed,

    M_X(ω) = E(e^{iωX}) = E(∑_k (iωX)^k / k!) = ∑_k i^k E(X^k) ω^k / k!.

But this is the Taylor series expansion of M_X(ω):

    M_X(ω) = ∑_k M_X^{(k)}(0) ω^k / k!.

Therefore,

    i^k E(X^k) = M_X^{(k)}(0).

Theorem. Let X_1, X_2, ..., X_n be independent random variables and let S = X_1 + X_2 + ··· + X_n. Then

    M_S(ω) = ∏_{k=1}^{n} M_{X_k}(ω).

Indeed,

    M_S(ω) = E(e^{iωS}) = E(e^{iω(X_1 + X_2 + ··· + X_n)}) = E(e^{iωX_1} e^{iωX_2} ··· e^{iωX_n})
           = E(e^{iωX_1}) E(e^{iωX_2}) ··· E(e^{iωX_n}) = M_{X_1}(ω) M_{X_2}(ω) ··· M_{X_n}(ω).
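
Moments from derivatives of the characteristic function, illustrated on the standard normal (my own sketch):

    # Sketch: E(X^k) = M_X^{(k)}(0) / i^k with M_Z(w) = e^{-w^2/2} for Z ~ N(0, 1).
    import sympy as sp

    w = sp.symbols('w', real=True)
    M = sp.exp(-w**2/2)

    for k in range(1, 5):
        moment = sp.diff(M, w, k).subs(w, 0) / sp.I**k
        print(k, sp.simplify(moment))     # 0, 1, 0, 3: the first four N(0,1) moments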

In the case n = 2 we have essentially the convolution theorem for Fourier transforms. If X and Y are continuous and independent random variables and Z = X + Y, then

    f_Z = f_X * f_Y.

This implies

    F(f_Z) = F(f_X) · F(f_Y),   that is,   M_Z(ω) = M_X(ω) M_Y(ω).

Unicity

Theorem. Let X have probability distribution function F_X and characteristic function M_X. Let F̃_X(x) = (F_X(x) + F_X(x^-)) / 2. Then

    F̃_X(b) - F̃_X(a) = lim_{T→∞} (1/2π) ∫_{-T}^{T} [(e^{-iaω} - e^{-ibω}) / (iω)] M_X(ω) dω.

M_X specifies uniquely the probability law of X: two random variables have the same characteristic function if and only if they have the same distribution function.

Inversion

Theorem (Inversion of the Fourier transform). Let X be a continuous r.v. with density f_X and characteristic function M_X. Then

    f_X(x) = (1/2π) ∫ e^{-iωx} M_X(ω) dω

at every point x at which f_X is differentiable.

In the discrete case, M_X(ω) is related to Fourier series. If X is an integer-valued random variable, then

    P(X = k) = (1/2π) ∫_{-π}^{π} e^{-iωk} M_X(ω) dω.
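
The discrete inversion formula can be checked numerically (my own sketch; Poisson(2) and the range of k are arbitrary):

    # Sketch: recovering P(X = k) for X ~ Poisson(2) from its characteristic function.
    import numpy as np
    from scipy import stats, integrate

    lam = 2.0
    M = lambda w: np.exp(lam*(np.exp(1j*w) - 1))          # Poisson characteristic function

    for k in range(5):
        integrand = lambda w, k=k: np.real(np.exp(-1j*w*k) * M(w))
        val, _ = integrate.quad(integrand, -np.pi, np.pi)
        print(k, np.isclose(val/(2*np.pi), stats.poisson.pmf(k, lam)))   # True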

Joint characteristic functions

The joint characteristic function of the random variables X_1, X_2, ..., X_n is defined to be

    M_X(ω_1, ω_2, ..., ω_n) = E(e^{i(ω_1 X_1 + ω_2 X_2 + ··· + ω_n X_n)}).

Using vectorial notation one can write

    ω = (ω_1, ω_2, ..., ω_n)^t,   X = (X_1, X_2, ..., X_n)^t,   and   M_X(ω) = E(e^{i ω^t X}).

Joint moments

The joint characteristic function allows us to calculate joint moments. For instance, given X, Y:

    m_{kl} = E(X^k Y^l) = (1 / i^{k+l}) ∂^{k+l} M_{XY}(ω_1, ω_2) / (∂ω_1^k ∂ω_2^l) |_{(ω_1, ω_2) = (0, 0)}.

Marginal characteristic functions

Marginal characteristic functions are easily derived from the joint characteristic function. For instance, given X, Y:

    M_X(ω) = E(e^{iωX}) = E(e^{i(ω_1 X + ω_2 Y)}) |_{ω_1 = ω, ω_2 = 0} = M_{XY}(ω, 0).

Analogously, M_Y(ω) = M_{XY}(0, ω).

Independent random variables

The random variables X_1, X_2, ..., X_n are independent if and only if

    M_X(ω_1, ω_2, ..., ω_n) = M_{X_1}(ω_1) M_{X_2}(ω_2) ··· M_{X_n}(ω_n).

If the random variables are independent, then

    M_X(ω_1, ω_2, ..., ω_n) = E(e^{i(ω_1 X_1 + ω_2 X_2 + ··· + ω_n X_n)}) = E(e^{iω_1 X_1} e^{iω_2 X_2} ··· e^{iω_n X_n})
                            = E(e^{iω_1 X_1}) E(e^{iω_2 X_2}) ··· E(e^{iω_n X_n}) = M_{X_1}(ω_1) M_{X_2}(ω_2) ··· M_{X_n}(ω_n).
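
A small numeric illustration of the factorisation criterion (my own sketch; two independent Bernoulli variables with arbitrary parameters):

    # Sketch: for independent X ~ Bern(0.3), Y ~ Bern(0.6), the joint characteristic
    # function factorises as M_XY(w1, w2) = M_X(w1) * M_Y(w2).
    import numpy as np
    from itertools import product

    px, py = 0.3, 0.6
    pmf = {(x, y): (px if x else 1 - px) * (py if y else 1 - py)
           for x, y in product((0, 1), repeat=2)}

    def joint_cf(w1, w2):
        return sum(p * np.exp(1j*(w1*x + w2*y)) for (x, y), p in pmf.items())

    w1, w2 = 0.8, -1.3
    print(np.isclose(joint_cf(w1, w2), joint_cf(w1, 0) * joint_cf(0, w2)))   # True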