Generating and Characteristic Functions

Outline: probability generating function; moment generating function; power series expansion; characteristic function; characteristic function and moments; convolution and unicity; inversion; joint characteristic functions.

Probability generating function

Let $X$ be a nonnegative integer-valued random variable. The probability generating function (pgf) of $X$ is defined to be
$$G_X(s) \equiv E(s^X) = \sum_{k \ge 0} s^k P(X = k).$$

- If $X$ takes a finite number of values, the above expression is a finite sum.
- Otherwise, it is a series that converges at least for $s \in [-1, 1]$, and sometimes in a larger interval.

If $X$ takes only the values $0, 1, \dots, n$, then $G_X(s)$ is a polynomial:
$$G_X(s) = \sum_{k=0}^{n} s^k P(X = k) = P(X=0) + P(X=1)\,s + \cdots + P(X=n)\,s^n.$$
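As a quick sanity check (not part of the original slides), the finite-sum case can be evaluated directly. A minimal Python sketch, with an arbitrary Bernoulli parameter chosen for illustration:

```python
def pgf(pmf, s):
    # G_X(s) = sum_k s^k P(X = k), where pmf[k] = P(X = k) for k = 0..n
    return sum(s**k * p_k for k, p_k in enumerate(pmf))

p = 0.3
bernoulli = [1 - p, p]                       # P(X=0) = q, P(X=1) = p
print(pgf(bernoulli, 1.0))                   # 1.0, since G_X(1) = 1
print(pgf(bernoulli, 0.5), 1 - p + 0.5 * p)  # matches q + sp
```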

If $X$ takes a countably infinite number of values $0, 1, \dots, k, \dots$, then
$$G_X(s) = \sum_{k \ge 0} s^k P(X=k) = P(X=0) + P(X=1)\,s + \cdots + P(X=k)\,s^k + \cdots$$
is a series that converges at least for $|s| \le 1$, because
$$\Big|\sum_{k \ge 0} s^k P(X=k)\Big| \le \sum_{k \ge 0} |s|^k P(X=k) \le \sum_{k \ge 0} P(X=k) = 1.$$

Example. Let $X$ be a Bernoulli random variable, $X \sim B(p)$, with $P(X=0) = q$ and $P(X=1) = p$. We have
$$G_X(s) = \sum_{k \ge 0} s^k P(X=k) = q + sp, \quad s \in \mathbb{R}.$$

Example. Let $X \sim \mathrm{Bin}(n,p)$. Then $P(X=k) = \binom{n}{k} p^k q^{n-k}$, $k = 0, 1, \dots, n$, and
$$G_X(s) = \sum_{k=0}^{n} \binom{n}{k} s^k p^k q^{n-k} = \sum_{k=0}^{n} \binom{n}{k} (sp)^k q^{n-k} = (q + sp)^n, \quad s \in \mathbb{R}.$$

Example. Let $X \sim \mathrm{Poiss}(\lambda)$, so $P(X=k) = e^{-\lambda} \frac{\lambda^k}{k!}$, $k = 0, 1, \dots$ Then
$$G_X(s) = \sum_{k \ge 0} s^k e^{-\lambda} \frac{\lambda^k}{k!} = e^{-\lambda} \sum_{k \ge 0} \frac{(s\lambda)^k}{k!} = e^{-\lambda} e^{s\lambda} = e^{\lambda(s-1)}, \quad s \in \mathbb{R}.$$
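To see the Poisson closed form emerge from the series, one can compare a truncated sum against $e^{\lambda(s-1)}$. A small sketch (standard library only; the parameter values are arbitrary):

```python
from math import exp, factorial

lam, s = 2.5, 0.7
series = sum(s**k * exp(-lam) * lam**k / factorial(k) for k in range(60))
closed = exp(lam * (s - 1))
print(series, closed)   # both ~0.47237, matching e^{lambda(s-1)}
```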

Example. Let $X \sim \mathrm{Geom}(p)$, so $P(X=k) = q^{k-1} p$, $k = 1, 2, \dots$, with $0 < p < 1$. We have
$$G_X(s) = \sum_{k \ge 1} s^k q^{k-1} p = sp \sum_{k \ge 1} (sq)^{k-1} = \frac{sp}{1 - sq}, \quad |s| < \frac{1}{q}.$$

Unicity

If two nonnegative, integer-valued random variables have the same generating function, then they follow the same probability law.

Theorem. Let $X$ and $Y$ be nonnegative integer-valued random variables such that $G_X(s) = G_Y(s)$. Then $P(X=k) = P(Y=k)$ for all $k \ge 0$.

The result is a special case of the uniqueness theorem for power series.

Theorem (Convolution). If $X$ and $Y$ are independent random variables and $Z = X + Y$, then
$$G_Z(s) = G_X(s)\, G_Y(s).$$
Proof: $G_Z(s) = E(s^Z) = E(s^{X+Y}) = E(s^X s^Y) = E(s^X)\,E(s^Y) = G_X(s)\,G_Y(s)$.

Example. Let $X \sim \mathrm{Bin}(n,p)$ and $Y \sim \mathrm{Bin}(m,p)$ be independent random variables and let $Z = X + Y$. We have
$$G_Z(s) = G_X(s)\,G_Y(s) = (q+sp)^n (q+sp)^m = (q+sp)^{n+m}.$$
Observe that $G_Z(s)$ is the probability generating function of a $\mathrm{Bin}(n+m, p)$ random variable. By the unicity theorem, $X + Y \sim \mathrm{Bin}(n+m, p)$.
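Multiplying two pgfs is the same as convolving their coefficient (probability) vectors, so the binomial example can be checked numerically. A sketch assuming NumPy and Python 3.8+ (`math.comb`); the parameters are arbitrary:

```python
import numpy as np
from math import comb

n, m, p = 4, 6, 0.3
q = 1 - p
pmf = lambda n: np.array([comb(n, k) * p**k * q**(n - k) for k in range(n + 1)])

# Product of pgfs = convolution of probability vectors.
pz = np.convolve(pmf(n), pmf(m))
print(np.allclose(pz, pmf(n + m)))   # True: X + Y ~ Bin(n+m, p)
```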

More generally:

Theorem. Let $X_1, X_2, \dots, X_n$ be independent, nonnegative, integer-valued random variables and set $S_n = X_1 + X_2 + \cdots + X_n$. Then
$$G_{S_n}(s) = \prod_{k=1}^{n} G_{X_k}(s).$$

A case of particular importance is:

Corollary. If, in addition, $X_1, X_2, \dots, X_n$ are equidistributed, with common probability generating function $G_X(s)$, then
$$G_{S_n}(s) = \big(G_X(s)\big)^n.$$

Example: negative binomial probability law

A biased coin with $P(\text{heads}) = p$ is repeatedly tossed until a total of $k$ heads has been obtained. Let $X$ be the number of tosses. Notice that
$$X = X_1 + X_2 + \cdots + X_k,$$
where $X_i \sim \mathrm{Geom}(p)$ is the number of tosses between the $(i-1)$-th and the $i$-th head. As $X_1, \dots, X_k$ are independent and identically distributed, we can apply the convolution theorem. Thus,
$$G_X(s) = G_{X_1}(s)\, G_{X_2}(s) \cdots G_{X_k}(s) = \big(G_{X_1}(s)\big)^k = \left(\frac{sp}{1-sq}\right)^k, \quad |s| < \frac{1}{q}.$$
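The representation of $X$ as a sum of $k$ independent geometric variables is easy to check by simulation. A sketch assuming NumPy (note that `Generator.geometric` is supported on $\{1, 2, \dots\}$, i.e., it counts trials up to and including the first success); $k$ and $p$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
k, p, N = 3, 0.4, 200_000

X = rng.geometric(p, size=(k, N)).sum(axis=0)   # tosses until the k-th head
print(X.mean(), k / p)                          # both ~7.5
```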

Example: negative binomial probability law

Recall that if $\alpha \in \mathbb{R}$, the Taylor series expansion about $0$ of the function $(1+x)^{\alpha}$ is, for $x \in (-1, 1)$,
$$(1+x)^{\alpha} = 1 + \alpha x + \frac{\alpha(\alpha-1)}{2!}\,x^2 + \cdots + \frac{\alpha(\alpha-1)\cdots(\alpha-r+1)}{r!}\,x^r + \cdots$$
We can write
$$(1+x)^{\alpha} = \sum_{r \ge 0} \binom{\alpha}{r} x^r, \quad \text{where} \quad \binom{\alpha}{r} = \frac{\alpha(\alpha-1)\cdots(\alpha-r+1)}{r!}.$$

Consider the series expansion of $G_X(s)$:
$$G_X(s) = (sp)^k (1-sq)^{-k} = (sp)^k \sum_{r=0}^{\infty} \binom{-k}{r} (-sq)^r,$$
where
$$\binom{-k}{r} = \frac{(-k)(-k-1)\cdots(-k-r+1)}{r!} = (-1)^r \binom{k+r-1}{r}.$$

Therefore,
$$G_X(s) = \sum_{r=0}^{\infty} \binom{k+r-1}{r} p^k q^r s^{k+r} = \sum_{n=k}^{\infty} \binom{n-1}{k-1} p^k q^{n-k} s^n.$$
Hence,
$$P(X=n) = \begin{cases} 0, & n < k \\[4pt] \dbinom{n-1}{k-1} p^k q^{n-k}, & n = k,\ k+1,\ k+2,\ \dots \end{cases}$$
This is the negative binomial probability law.

Properties

- $G_X(0) = P(X=0)$.
- $G_X(1) = 1$. Indeed,
$$G_X(1) = \left[\sum_{k \ge 0} s^k P(X=k)\right]_{s=1} = \sum_{k \ge 0} P(X=k) = 1.$$
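The coefficient identity can be verified symbolically by expanding $G_X(s)$ with SymPy and reading off the coefficients of $s^n$; a sketch with arbitrary $k$ and $p$:

```python
import sympy as sp

s = sp.symbols('s')
k, p = 3, sp.Rational(2, 5)
q = 1 - p
G = (s * p)**k * (1 - s * q)**(-k)

poly = sp.Poly(sp.series(G, s, 0, 8).removeO(), s)
coeffs = poly.all_coeffs()[::-1]                 # coeffs[n] = coefficient of s^n
for n in range(k, 8):
    # coefficient of s^n should equal C(n-1, k-1) p^k q^(n-k)
    print(n, coeffs[n], sp.binomial(n - 1, k - 1) * p**k * q**(n - k))
```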

Properties

Proposition. Let $R$ be the radius of convergence of $G_X(s)$. If $R > 1$, then
$$E(X) = G_X'(1).$$
Indeed,
$$G_X'(s) = \frac{d}{ds} \sum_{k \ge 0} s^k P(X=k) = \sum_{k \ge 1} k\, s^{k-1} P(X=k),$$
hence
$$G_X'(1) = \sum_{k \ge 1} k\, P(X=k) = E(X).$$

More generally:

Proposition.
- $E(X) = G_X'(1) \equiv \lim_{s \to 1^-} G_X'(s)$.
- $E\big(X(X-1)\cdots(X-k+1)\big) = G_X^{(k)}(1) \equiv \lim_{s \to 1^-} G_X^{(k)}(s)$.

Example. Let $X \sim \mathrm{Bin}(n,p)$. Then
$$E(X) = G_X'(1) = \frac{d}{ds}(q+sp)^n \Big|_{s=1} = np\,(q+sp)^{n-1}\Big|_{s=1} = np\,(q+p)^{n-1} = np.$$

Example. Let $X \sim \mathrm{Poiss}(\lambda)$. Analogously,
$$E(X) = G_X'(1) = \frac{d}{ds}\, e^{\lambda(s-1)} \Big|_{s=1} = \lambda\, e^{\lambda(s-1)}\Big|_{s=1} = \lambda,$$
$$E\big(X(X-1)\big) = G_X''(1) = \lambda^2 e^{\lambda(s-1)}\Big|_{s=1} = \lambda^2.$$
Hence $E(X^2) = \lambda^2 + \lambda$ and $\mathrm{Var}(X) = E(X^2) - (E(X))^2 = \lambda$.
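The Poisson computation can be reproduced symbolically; a brief sketch assuming SymPy:

```python
import sympy as sp

s, lam = sp.symbols('s lam', positive=True)
G = sp.exp(lam * (s - 1))                        # Poisson pgf

EX  = sp.diff(G, s).subs(s, 1)                   # G'(1) = lam
EXX = sp.diff(G, s, 2).subs(s, 1)                # G''(1) = E(X(X-1)) = lam**2
print(EX, EXX, sp.simplify(EXX + EX - EX**2))    # lam, lam**2, lam (= Var X)
```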

Example. Let $X \sim \mathrm{Geom}(p)$. Then
$$E(X) = G_X'(1) = \frac{d}{ds} \frac{sp}{1-sq} \Big|_{s=1} = \frac{p}{(1-sq)^2}\Big|_{s=1} = \frac{1}{p}.$$
Analogously,
$$E\big(X(X-1)\big) = G_X''(1) = \frac{2pq}{(1-sq)^3}\Big|_{s=1} = \frac{2q}{p^2}.$$
Therefore
$$E(X^2) = \frac{2q}{p^2} + \frac{1}{p}, \qquad \mathrm{Var}(X) = E(X^2) - (E(X))^2 = \frac{q}{p^2}.$$

Example. Let $X$ be a negative binomial random variable. Then
$$E(X) = G_X'(1) = \frac{d}{ds} \left(\frac{sp}{1-sq}\right)^k \Big|_{s=1} = k \left(\frac{sp}{1-sq}\right)^{k-1} \frac{p}{(1-sq)^2} \Big|_{s=1} = \frac{k}{p}.$$
This result can also be obtained from $X = X_1 + \cdots + X_k$, with each $X_i \sim \mathrm{Geom}(p)$:
$$E(X) = \sum_{i=1}^{k} E(X_i) = \frac{k}{p}.$$

Moment generating function

The moment generating function (mgf) of a random variable $X$ is defined as
$$\varphi_X(t) \equiv E\big(e^{tX}\big) = \begin{cases} \displaystyle\sum_i e^{t x_i}\, P(X = x_i), & \text{if } X \text{ is discrete} \\[6pt] \displaystyle\int_{-\infty}^{\infty} e^{tx} f_X(x)\, dx, & \text{if } X \text{ is continuous,} \end{cases}$$
provided that the sum or the integral converges.
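A quick simulation check of the geometric moments (not from the slides), assuming NumPy and an arbitrary $p$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.25
x = rng.geometric(p, size=500_000)
print(x.mean(), 1 / p)              # ~4.0 = 1/p
print(x.var(), (1 - p) / p**2)      # ~12.0 = q/p^2
```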

Unicity

The moment generating function specifies uniquely the probability distribution.

Theorem. Let $X$ and $Y$ be random variables. If there exists $h > 0$ such that $\varphi_X(t) = \varphi_Y(t)$ for $|t| < h$, then $X$ and $Y$ are identically distributed.

Example. Let $X \sim \mathrm{Bin}(n,p)$. Then
$$\varphi_X(t) = \sum_{k=0}^{n} e^{tk}\, P(X=k) = \sum_{k=0}^{n} \binom{n}{k} \big(pe^t\big)^k q^{n-k} = \big(q + pe^t\big)^n, \quad t \in \mathbb{R}.$$

Example. Let $X \sim \mathrm{Exp}(\mu)$. Then
$$\varphi_X(t) = \int_{-\infty}^{\infty} e^{tx} f_X(x)\, dx = \int_0^{\infty} \mu\, e^{-(\mu - t)x}\, dx = \frac{\mu}{\mu - t}, \quad t < \mu.$$
For continuous random variables, $\varphi_X(t)$ is related to the Laplace transform of $f_X(x)$.

Example. Let $X \sim \mathrm{Poiss}(\lambda)$. Then
$$\varphi_X(t) = \sum_{k \ge 0} e^{tk}\, P(X=k) = e^{-\lambda} \sum_{k \ge 0} \frac{(\lambda e^t)^k}{k!} = e^{-\lambda}\, e^{\lambda e^t} = e^{\lambda(e^t - 1)}, \quad t \in \mathbb{R}.$$
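The exponential mgf can be checked against the defining integral numerically; a sketch assuming SciPy, with arbitrary $\mu$ and $t < \mu$:

```python
import numpy as np
from scipy.integrate import quad

mu, t = 2.0, 0.5            # requires t < mu for convergence
val, _ = quad(lambda x: np.exp(t * x) * mu * np.exp(-mu * x), 0, np.inf)
print(val, mu / (mu - t))   # both ~1.3333
```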

Example. Let $Z \sim N(0,1)$. Then
$$\varphi_Z(t) = \int_{-\infty}^{\infty} e^{tz}\, \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}\, dz = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-(z^2 - 2tz)/2}\, dz = e^{t^2/2}\, \underbrace{\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-(z-t)^2/2}\, dz}_{=1} = e^{t^2/2}, \quad t \in \mathbb{R},$$
because
$$\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-(z-t)^2/2}\, dz = P\big(-\infty < N(t,1) < \infty\big) = 1.$$

More generally, if $X = \sigma Z + m$, then $X \sim N(m, \sigma^2)$. We have
$$\varphi_X(t) = E\big(e^{tX}\big) = E\big(e^{t(\sigma Z + m)}\big) = e^{tm}\, E\big(e^{t\sigma Z}\big) = e^{tm}\, \varphi_Z(\sigma t) = e^{\sigma^2 t^2/2 + tm}.$$

Power series expansion

Notice that
$$\varphi_X'(t) = \frac{d}{dt}\, E\big(e^{tX}\big) = E\Big(\frac{d}{dt}\, e^{tX}\Big) = E\big(X e^{tX}\big),$$
so $\varphi_X'(0) = E(X)$. Analogously,
$$\varphi_X''(t) = \frac{d}{dt}\, E\big(X e^{tX}\big) = E\big(X^2 e^{tX}\big),$$
so $\varphi_X''(0) = E(X^2)$.

For instance, if $X \sim \mathrm{Exp}(\mu)$, then $\varphi_X(t) = \frac{\mu}{\mu - t}$ for $t < \mu$, and
$$E(X) = \varphi_X'(0) = \frac{d}{dt}\, \frac{\mu}{\mu - t} \Big|_{t=0} = \frac{\mu}{(\mu - t)^2}\Big|_{t=0} = \frac{1}{\mu}.$$
Analogously,
$$E(X^2) = \varphi_X''(0) = \frac{2\mu}{(\mu - t)^3}\Big|_{t=0} = \frac{2}{\mu^2}.$$
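Differentiating the normal mgf at $t = 0$ recovers the familiar moments; a symbolic sketch assuming SymPy:

```python
import sympy as sp

t, m, sigma = sp.symbols('t m sigma', positive=True)
phi = sp.exp(sigma**2 * t**2 / 2 + t * m)        # mgf of N(m, sigma^2)

print(sp.diff(phi, t).subs(t, 0))                # E(X) = m
print(sp.expand(sp.diff(phi, t, 2).subs(t, 0)))  # E(X^2) = m^2 + sigma^2
```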

More generally,
$$\varphi_X(t) = E\big(e^{tX}\big) = E\Big(1 + tX + \frac{(tX)^2}{2!} + \cdots + \frac{(tX)^k}{k!} + \cdots\Big) = 1 + E(X)\,t + \frac{E(X^2)}{2!}\,t^2 + \cdots + \frac{E(X^k)}{k!}\,t^k + \cdots$$
This is the power series expansion of $\varphi_X(t)$:
$$\varphi_X(t) = \sum_{k \ge 0} \frac{\varphi_X^{(k)}(0)}{k!}\, t^k.$$

Theorem. If $\varphi_X(t)$ converges on some open interval containing the origin $t = 0$, then $X$ has moments of every order,
$$E\big(X^k\big) = \varphi_X^{(k)}(0),$$
and
$$\varphi_X(t) = \sum_{k \ge 0} \frac{E(X^k)}{k!}\, t^k.$$

For instance, let $X \sim \mathrm{Exp}(\mu)$. If $|t| < \mu$ we have
$$\varphi_X(t) = \frac{\mu}{\mu - t} = \frac{1}{1 - (t/\mu)} = 1 + \frac{t}{\mu} + \Big(\frac{t}{\mu}\Big)^2 + \cdots$$
Hence $\frac{E(X^n)}{n!} = \frac{1}{\mu^n}$, and therefore
$$E(X^n) = \frac{n!}{\mu^n}.$$

The convolution theorem applies also to moment generating functions.

Theorem. Let $X_1, X_2, \dots, X_n$ be independent random variables and let $S = X_1 + X_2 + \cdots + X_n$. Then
$$\varphi_S(t) = \prod_{k=1}^{n} \varphi_{X_k}(t).$$
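The moment formula $E(X^n) = n!/\mu^n$ can be confirmed by differentiating the mgf symbolically; a sketch assuming SymPy:

```python
import sympy as sp

t, mu = sp.symbols('t mu', positive=True)
phi = mu / (mu - t)                                          # mgf of Exp(mu)

for n in range(1, 5):
    moment = sp.diff(phi, t, n).subs(t, 0)                   # phi^(n)(0) = E(X^n)
    print(n, sp.simplify(moment - sp.factorial(n) / mu**n))  # 0 for every n
```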

Example. Let $X \sim \mathrm{Poiss}(\lambda_X)$ and $Y \sim \mathrm{Poiss}(\lambda_Y)$ be independent, and let $Z = X + Y$. We have
$$\varphi_Z(t) = \varphi_X(t)\, \varphi_Y(t) = e^{\lambda_X (e^t - 1)}\, e^{\lambda_Y (e^t - 1)} = e^{(\lambda_X + \lambda_Y)(e^t - 1)}.$$
Hence $Z \sim \mathrm{Poiss}(\lambda_X + \lambda_Y)$.

Example. Let $X \sim N(m_X, \sigma_X^2)$ and $Y \sim N(m_Y, \sigma_Y^2)$ be independent, and let $Z = X + Y$. We have
$$\varphi_Z(t) = \varphi_X(t)\, \varphi_Y(t) = e^{\sigma_X^2 t^2/2 + t m_X}\, e^{\sigma_Y^2 t^2/2 + t m_Y} = e^{(\sigma_X^2 + \sigma_Y^2)\, t^2/2 + t(m_X + m_Y)}.$$
Therefore $Z \sim N\big(m_X + m_Y,\ \sigma_X^2 + \sigma_Y^2\big)$.

Characteristic function

The characteristic function of a random variable $X$ is the complex-valued function of the real argument $\omega$ defined as
$$M_X : \mathbb{R} \to \mathbb{C}, \qquad \omega \mapsto M_X(\omega) \equiv E\big(e^{i\omega X}\big) = E\big(\cos(\omega X)\big) + i\, E\big(\sin(\omega X)\big).$$
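The closure of the Poisson family under independent sums is easy to see in simulation; a sketch assuming NumPy, with arbitrary rates:

```python
import numpy as np

rng = np.random.default_rng(2)
lx, ly, N = 1.5, 2.5, 300_000
z = rng.poisson(lx, N) + rng.poisson(ly, N)   # sum of independent Poissons
w = rng.poisson(lx + ly, N)                   # Poisson with summed rate
print(z.mean(), w.mean())                     # both ~4.0
print(z.var(),  w.var())                      # both ~4.0
```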

Therefore,
$$M_X(\omega) = \begin{cases} \displaystyle\sum_k e^{i\omega x_k}\, P(X = x_k), & \text{if } X \text{ is discrete} \\[6pt] \displaystyle\int_{-\infty}^{\infty} e^{i\omega x} f_X(x)\, dx, & \text{if } X \text{ is continuous.} \end{cases}$$

- The characteristic function exists for all $\omega$ and for all random variables.
- If $X$ is continuous, $M_X(\omega)$ is the Fourier transform of $f_X(x)$. (Notice the change of sign from the usual definition.)
- If $X$ is discrete, $M_X(\omega)$ is related to Fourier series.

Properties

- $|M_X(\omega)| \le M_X(0) = 1$ for all $\omega \in \mathbb{R}$. Indeed,
$$|M_X(\omega)| = \big|E\big(e^{i\omega X}\big)\big| \le E\big(|e^{i\omega X}|\big) = E(1) = 1,$$
and on the other hand, $M_X(0) = E\big(e^{i \cdot 0 \cdot X}\big) = E(1) = 1$.
- $M_X(-\omega) = \overline{M_X(\omega)}$. Indeed,
$$M_X(-\omega) = E\big(e^{-i\omega X}\big) = E\big(\cos(\omega X)\big) - i\,E\big(\sin(\omega X)\big) = \overline{E\big(\cos(\omega X)\big) + i\,E\big(\sin(\omega X)\big)} = \overline{M_X(\omega)}.$$
- $M_X(\omega)$ is uniformly continuous on $\mathbb{R}$.

Examples:
- If $X \sim \mathrm{Binom}(n,p)$, then $M_X(\omega) = \big(p\,e^{i\omega} + q\big)^n$.
- If $X \sim \mathrm{Poiss}(\lambda)$, then $M_X(\omega) = e^{\lambda(e^{i\omega} - 1)}$.
- If $X \sim N(m, \sigma^2)$, then $M_X(\omega) = e^{i\omega m - \sigma^2 \omega^2/2}$.
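The characteristic function is an expectation, so it can be estimated by a sample average $\frac{1}{N}\sum_j e^{i\omega x_j}$. A sketch assuming NumPy, comparing the empirical value for normal samples with the closed form above:

```python
import numpy as np

rng = np.random.default_rng(3)
m, sigma, w = 1.0, 2.0, 0.7
x = rng.normal(m, sigma, 500_000)

empirical = np.mean(np.exp(1j * w * x))            # sample average of e^{i w X}
closed = np.exp(1j * w * m - sigma**2 * w**2 / 2)  # N(m, sigma^2) cf
print(empirical, closed)                           # agree to a few decimals
```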

Characteristic function and moments

Theorem. If $E\big(|X|^n\big) < \infty$ for some $n = 1, 2, \dots$, then
$$M_X(\omega) = \sum_{k=0}^{n} \frac{E(X^k)}{k!}\, (i\omega)^k + o(\omega^n) \quad \text{as } \omega \to 0.$$
In particular, if $E(X) = 0$ and $\mathrm{Var}(X) = \sigma^2$, then
$$M_X(\omega) = 1 - \frac{\sigma^2 \omega^2}{2} + o(\omega^2) \quad \text{as } \omega \to 0.$$

So,
$$E(X^k) = \frac{M_X^{(k)}(0)}{i^k} \quad \text{for } k = 1, 2, \dots, n.$$
Indeed,
$$M_X(\omega) = E\big(e^{i\omega X}\big) = E\Big(\sum_k \frac{(i\omega X)^k}{k!}\Big) = \sum_k \frac{i^k\, E(X^k)}{k!}\, \omega^k.$$
But this is the Taylor series expansion of $M_X(\omega)$:
$$M_X(\omega) = \sum_k \frac{M_X^{(k)}(0)}{k!}\, \omega^k.$$
Therefore $i^k\, E(X^k) = M_X^{(k)}(0)$.

Theorem (Convolution). Let $X_1, X_2, \dots, X_n$ be independent random variables and let $S = X_1 + X_2 + \cdots + X_n$. Then
$$M_S(\omega) = \prod_{k=1}^{n} M_{X_k}(\omega).$$
Proof:
$$M_S(\omega) = E\big(e^{i\omega S}\big) = E\big(e^{i\omega(X_1 + X_2 + \cdots + X_n)}\big) = E\big(e^{i\omega X_1}\, e^{i\omega X_2} \cdots e^{i\omega X_n}\big) = E\big(e^{i\omega X_1}\big)\, E\big(e^{i\omega X_2}\big) \cdots E\big(e^{i\omega X_n}\big) = M_{X_1}(\omega)\, M_{X_2}(\omega) \cdots M_{X_n}(\omega).$$
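The small-$\omega$ expansion for a centered variable can also be seen empirically. A sketch assuming NumPy, using $X \sim U(-1,1)$, for which $E(X) = 0$ and $\mathrm{Var}(X) = 1/3$:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 400_000)                 # E(X) = 0, Var(X) = 1/3

for w in (0.2, 0.1, 0.05):
    M = np.mean(np.exp(1j * w * x))             # empirical characteristic function
    print(w, M.real, 1 - (1 / 3) * w**2 / 2)    # imaginary part is ~0
```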

In the case $n = 2$ we have essentially the convolution theorem for Fourier transforms. If $X$ and $Y$ are continuous and independent random variables and $Z = X + Y$, then
$$f_Z = f_X * f_Y.$$
This implies
$$\mathcal{F}(f_Z) = \mathcal{F}(f_X)\, \mathcal{F}(f_Y), \quad \text{that is,} \quad M_Z(\omega) = M_X(\omega)\, M_Y(\omega).$$

Unicity

Theorem. Let $X$ have probability distribution function $F_X$ and characteristic function $M_X$. Let $\bar{F}_X(x) = \big(F_X(x) + F_X(x^-)\big)/2$. Then
$$\bar{F}_X(b) - \bar{F}_X(a) = \lim_{T \to \infty} \frac{1}{2\pi} \int_{-T}^{T} \frac{e^{-ia\omega} - e^{-ib\omega}}{i\omega}\, M_X(\omega)\, d\omega.$$
Hence $M_X$ specifies uniquely the probability law of $X$: two random variables have the same characteristic function if and only if they have the same distribution function.

Inversion

Theorem (Inversion of the Fourier transform). Let $X$ be a continuous random variable with density $f_X$ and characteristic function $M_X$. Then
$$f_X(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega x}\, M_X(\omega)\, d\omega$$
at every point $x$ at which $f_X$ is differentiable.

In the discrete case, $M_X(\omega)$ is related to Fourier series. If $X$ is an integer-valued random variable, then
$$P(X = k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-i\omega k}\, M_X(\omega)\, d\omega.$$
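The discrete inversion formula can be evaluated numerically for a Poisson variable; a sketch assuming SciPy, recovering $P(X = 3)$ for $\lambda = 2$ (the imaginary part of the integrand integrates to zero by symmetry, so only the real part is integrated):

```python
import numpy as np
from math import exp, factorial, pi
from scipy.integrate import quad

lam, k = 2.0, 3
M = lambda w: np.exp(lam * (np.exp(1j * w) - 1))          # Poisson cf

re, _ = quad(lambda w: (np.exp(-1j * w * k) * M(w)).real, -pi, pi)
print(re / (2 * pi), exp(-lam) * lam**k / factorial(k))   # both ~0.1804
```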

Joint characteristic functions

The joint characteristic function of the random variables $X_1, X_2, \dots, X_n$ is defined to be
$$M_{\mathbf{X}}(\omega_1, \omega_2, \dots, \omega_n) = E\big(e^{i(\omega_1 X_1 + \omega_2 X_2 + \cdots + \omega_n X_n)}\big).$$
Using vector notation one can write $\boldsymbol{\omega} = (\omega_1, \omega_2, \dots, \omega_n)^t$, $\mathbf{X} = (X_1, X_2, \dots, X_n)^t$, and
$$M_{\mathbf{X}}(\boldsymbol{\omega}) = E\big(e^{i \boldsymbol{\omega}^t \mathbf{X}}\big).$$

Joint moments

The joint characteristic function allows us to calculate joint moments. For instance, given $X$ and $Y$:
$$m_{kl} = E\big(X^k Y^l\big) = \frac{1}{i^{k+l}}\, \frac{\partial^{k+l} M_{XY}(\omega_1, \omega_2)}{\partial \omega_1^k\, \partial \omega_2^l} \bigg|_{(\omega_1, \omega_2) = (0,0)}.$$

Marginal characteristic functions

Marginal characteristic functions are easily derived from the joint characteristic function. For instance, given $X$ and $Y$:
$$M_X(\omega) = E\big(e^{i\omega X}\big) = E\big(e^{i(\omega_1 X + \omega_2 Y)}\big)\Big|_{\omega_1 = \omega,\ \omega_2 = 0} = M_{XY}(\omega, 0).$$
Analogously, $M_Y(\omega) = M_{XY}(0, \omega)$.

Independent random variables

Theorem. The random variables $X_1, X_2, \dots, X_n$ are independent if and only if
$$M_{\mathbf{X}}(\omega_1, \omega_2, \dots, \omega_n) = M_{X_1}(\omega_1)\, M_{X_2}(\omega_2) \cdots M_{X_n}(\omega_n).$$
If the random variables are independent, then
$$M_{\mathbf{X}}(\omega_1, \dots, \omega_n) = E\big(e^{i(\omega_1 X_1 + \cdots + \omega_n X_n)}\big) = E\big(e^{i\omega_1 X_1}\big) \cdots E\big(e^{i\omega_n X_n}\big) = M_{X_1}(\omega_1) \cdots M_{X_n}(\omega_n).$$
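The factorization under independence can be checked by comparing the empirical joint characteristic function with the product of the empirical marginals; a sketch assuming NumPy, with independent exponential and normal samples:

```python
import numpy as np

rng = np.random.default_rng(5)
N, w1, w2 = 400_000, 0.6, -0.9
x = rng.exponential(1.0, N)
y = rng.normal(0.0, 1.0, N)                 # independent of x

joint = np.mean(np.exp(1j * (w1 * x + w2 * y)))
product = np.mean(np.exp(1j * w1 * x)) * np.mean(np.exp(1j * w2 * y))
print(joint, product)                       # agree to a few decimals
```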
