Random variables and transform methods


Chapter 1

Random variables and transform methods

1.1 Discrete random variables

Suppose X is a random variable whose range is {0, 1, ...} and set p_k = P(X = k) for k = 0, 1, ..., so that its mean and variance are given by

    E[X] = ∑_{k=0}^∞ k p_k,    (1.1)

    Var[X] = E[(X − E[X])²] = E[X²] − (E[X])² = ∑_{k=0}^∞ k² p_k − ( ∑_{k=0}^∞ k p_k )².    (1.2)

Here are some examples that are of interest:

Binomial distribution, denoted by Bin(n, p), which is the distribution of the number of successes in n Bernoulli trials (coin flips) with success probability p. Then

    P(X = k) = (n choose k) p^k (1 − p)^{n−k},   k ∈ {0, 1, ..., n},  p ∈ [0, 1],    (1.3)

and E[X] = np, Var[X] = np(1 − p).

Poisson distribution, denoted by Pois(λ), defined as

    P(X = k) = e^{−λ} λ^k / k!,   k ∈ {0, 1, ...},  λ > 0,    (1.4)

and E[X] = λ, Var[X] = λ.

Geometric distribution, denoted by Geo(p), defined as

    P(X = k) = (1 − p) p^k,   k ∈ {0, 1, ...},  p ∈ [0, 1),    (1.5)

and E[X] = p/(1 − p), Var[X] = p/(1 − p)².

The geometric distribution possesses the memoryless property. To see this, first note that

    P(X ≥ k) = ∑_{j=k}^∞ (1 − p) p^j = p^k.    (1.6)

Therefore

    P(X ≥ k + n | X ≥ n) = P(X ≥ k + n) / P(X ≥ n) = p^{k+n} / p^n = p^k = P(X ≥ k),    (1.7)

which does not depend on n.

Lemma 1.1.1. If X is non-negative integer valued then

    E[X] = ∑_{k=0}^∞ P(X > k).    (1.8)

Proof.

    ∑_{k=0}^∞ P(X > k) = ∑_{k=0}^∞ ∑_{j=k+1}^∞ P(X = j) = ∑_{j=1}^∞ ( ∑_{k=0}^{j−1} 1 ) P(X = j) = ∑_{j=1}^∞ j P(X = j) = E[X].    (1.9)

Example 1.1.2. Let X ~ Geo(p). Then,

    E[X] = ∑_{k=0}^∞ k P(X = k) = ∑_{k=0}^∞ k (1 − p) p^k = p(1 − p) ∑_{k=1}^∞ k p^{k−1}
         = p(1 − p) d/dp ∑_{k=0}^∞ p^k = p(1 − p) d/dp 1/(1 − p) = p/(1 − p).    (1.10)

Exercise 1.1.3. Let X ~ Geo(p) and show that E[X] = p/(1 − p) by using (1.6) and (1.8).

The probability generating function (pgf) of a random variable X with range {0, 1, ...} and probability distribution p_k = P(X = k) for k = 0, 1, ... is defined by

    g_X(z) = ∑_{k=0}^∞ p_k z^k.    (1.11)

Notice that g_X(z) = E[z^X], the expectation of the random variable z^X. The series in (1.11) is guaranteed to converge if |z| ≤ 1, because

    |g_X(z)| ≤ ∑_{k=0}^∞ p_k |z|^k ≤ ∑_{k=0}^∞ p_k = 1.    (1.12)

Exercise 1.1.4. Let X ~ Geo(p) and show that

    g_X(z) = (1 − p) / (1 − pz).    (1.13)
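To make (1.13) concrete, here is a small numerical sketch in Python (numpy assumed available; the values of p and z are illustrative and not taken from the text) that compares the truncated series (1.11) with the closed form:

    import numpy as np

    # Check g_X(z) = (1 - p)/(1 - p z) for X ~ Geo(p), i.e. P(X = k) = (1 - p) p^k.
    p, z = 0.3, 0.8
    k = np.arange(200)                          # truncation of the infinite sum
    series = np.sum((1 - p) * p**k * z**k)      # sum_k p_k z^k
    closed_form = (1 - p) / (1 - p * z)
    print(series, closed_form)                  # both approximately 0.9211

The truncation at k = 200 is harmless here because the terms p^k z^k decay geometrically.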

Exercise 1.1.5. Let X ~ Pois(λ) and show that

    g_X(z) = e^{λ(z−1)}.    (1.14)

Exercise 1.1.6. Let X ~ Bin(n, p) and show that

    g_X(z) = (1 − p + pz)^n.    (1.15)

A pgf uniquely defines the probability distribution, via the relation

    p_k = (1/k!) d^k/dz^k g_X(z) |_{z=0}.    (1.16)

One of the great advantages of using pgfs is that the moments are easy to determine. For example,

    E[X] = d/dz g_X(z) |_{z=1}.    (1.17)

More generally,

    E[X(X − 1)···(X − k + 1)] = d^k/dz^k g_X(z) |_{z=1}.    (1.18)

Example 1.1.7. For the geometric distribution, we get

    E[X] = d/dz (1 − p)/(1 − pz) |_{z=1} = p(1 − p)/(1 − pz)² |_{z=1} = p/(1 − p).    (1.19)

In a similar way, we get

    E[X²] = p(1 + p)/(1 − p)².    (1.20)

Example 1.1.8. For the Poisson distribution, we get

    E[X] = d/dz e^{λ(z−1)} |_{z=1} = λ.    (1.21)

In a similar way, we get

    E[X²] = λ² + λ.    (1.22)

Exercise 1.1.9. Prove that (1.16) and (1.18) are true, and use (1.18) to derive (1.20) and (1.22).

1.2 Continuous random variables

Suppose X is a random variable whose range is R, with distribution function F(t) = P(X ≤ t) and density

    f(u) = P(X ∈ [u, u + du]) / du = d/du F(u).    (1.23)
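The relations (1.16)–(1.18) can also be checked symbolically. The following Python sketch (sympy assumed available) recovers a probability and the first two moments of Pois(λ) from its pgf:

    import sympy as sp

    z, lam = sp.symbols('z lambda', positive=True)
    g = sp.exp(lam * (z - 1))                       # pgf of Pois(lambda), see (1.14)

    # (1.16): p_k = g^(k)(0)/k!  -- here P(X = 2) = e^{-lambda} lambda^2 / 2
    p2 = sp.diff(g, z, 2).subs(z, 0) / sp.factorial(2)
    print(sp.simplify(p2))                          # lambda**2*exp(-lambda)/2

    # (1.17)-(1.18): E[X] = g'(1) and E[X(X-1)] = g''(1), so E[X^2] = g''(1) + g'(1)
    EX = sp.diff(g, z, 1).subs(z, 1)
    EX2 = sp.diff(g, z, 2).subs(z, 1) + EX
    print(sp.simplify(EX), sp.simplify(EX2))        # lambda and lambda**2 + lambda

The last line reproduces (1.21) and (1.22).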

The mean and variance of X, when X ≥ 0, are given by

    E[X] = ∫_0^∞ u f(u) du,    (1.24)

    Var[X] = ∫_0^∞ u² f(u) du − ( ∫_0^∞ u f(u) du )².    (1.25)

Here are some continuous distributions that are of interest:

Uniform distribution, denoted by Uniform(a, b), for which

    f(u) = 1/(b − a) for a < u < b, and 0 elsewhere,    (1.26)

with mean (a + b)/2 and variance (b − a)²/12.

Normal distribution, denoted by N(µ, σ²), for which

    f(u) = 1/√(2πσ²) e^{−(1/2)((u−µ)/σ)²},   −∞ < u < ∞,    (1.27)

with mean µ and variance σ².

Gamma distribution, denoted by Gamma(α, β), for which

    f(u) = β^α/Γ(α) u^{α−1} e^{−βu},   u > 0,    (1.28)

with

    Γ(α) = ∫_0^∞ y^{α−1} e^{−y} dy    (1.29)

the gamma function. So E[X] = α/β and Var[X] = α/β².

Lemma 1.2.1. If X ≥ 0 is non-negative with bounded mean, we have

    E[X] = ∫_0^∞ (1 − F(t)) dt.    (1.30)

Proof.

    E[X] = ∫_{t=0}^∞ t f(t) dt = ∫_{t=0}^∞ f(t) [ ∫_{u=0}^t du ] dt = ∫_{u=0}^∞ ∫_{t=u}^∞ f(t) dt du = ∫_{u=0}^∞ (1 − F(u)) du.    (1.31)

The moment generating function (mgf) of a continuous random variable X is given by

    m_X(t) = E[e^{tX}] = ∫_{−∞}^∞ e^{tu} f(u) du.    (1.32)

For non-negative continuous random variables it is common practice to use the Laplace–Stieltjes transform (lst) instead of the mgf, which is defined by

    ϕ_X(s) = E[e^{−sX}] = ∫_0^∞ e^{−su} f(u) du,   Re s ≥ 0.    (1.33)

Notice that ϕ_X(s) = m_X(−s). The integral in (1.33) is guaranteed to converge for Re s ≥ 0, because

    |ϕ_X(s)| ≤ ∫_0^∞ |e^{−su}| f(u) du ≤ ∫_0^∞ f(u) du = 1.    (1.34)

Example 1.2.2. For the uniform distribution on [a, b], we get

    m_X(t) = ∫_a^b e^{tu}/(b − a) du = (e^{tb} − e^{ta})/(tb − ta).    (1.35)

In some cases you encounter a distribution function that has jumps, for example a continuous random variable X on [0, ∞) with a jump at zero, meaning P(X = 0) > 0. In that case, we get

    ϕ_X(s) = ∫_{0−}^∞ e^{−su} dP(X < u) = P(X = 0) + ∫_{0+}^∞ e^{−su} dP(X < u).    (1.36)

As for the pgf, one of the advantages of using an mgf or lst is that the moments are easy to determine. For example,

    E[X] = d/dt m_X(t) |_{t=0} = − d/ds ϕ_X(s) |_{s=0},    (1.37)

and more generally,

    E[X^k] = d^k/dt^k m_X(t) |_{t=0} = (−1)^k d^k/ds^k ϕ_X(s) |_{s=0}.    (1.38)

Here is another important property:

Property 1.2.3. Let X be a random variable with mgf m_X(t). Let a and b be real constants, and define the random variable Y as Y = aX + b. Then

    m_Y(t) = e^{bt} m_X(at).    (1.39)

Example 1.2.4. Let X ~ N(0, 1). Then,

    m_X(t) = e^{t²/2}.    (1.40)

To see this, we start from

    m_X(t) = ∫_{u=−∞}^∞ f_X(u) e^{tu} du = ∫_{u=−∞}^∞ 1/√(2π) e^{−u²/2} e^{tu} du = e^{t²/2} ∫_{u=−∞}^∞ 1/√(2π) e^{−(u−t)²/2} du,    (1.41)

where we have used that e^{−u²/2} e^{ut} = e^{−u²/2+ut} = e^{−(u−t)²/2} e^{t²/2}. Now for a normal random variable X ~ N(µ, σ²) we have

    f_X(u) = 1/(σ√(2π)) e^{−(1/2)((u−µ)/σ)²}   and   ∫_{u=−∞}^∞ f_X(u) du = 1.    (1.42)

Therefore, the integral on the right-hand side of (1.41) equals 1, and we get (1.40).

Exercise 1.2.5. Let X ~ N(µ, σ²). Show that

    m_X(t) = e^{µt + σ²t²/2}.    (1.43)

Solution. By definition,

    m_X(t) = ∫_{x=−∞}^∞ e^{xt} 1/(σ√(2π)) e^{−(1/2)((x−µ)/σ)²} dx.    (1.44)

Make the change of variables z = (x − µ)/σ. Then x = σz + µ and dx = σ dz. This gives

    m_X(t) = ∫_{z=−∞}^∞ e^{(σz+µ)t} 1/(σ√(2π)) e^{−z²/2} σ dz
           = e^{µt} ∫_{z=−∞}^∞ 1/√(2π) e^{−z²/2+σzt} dz
           = e^{µt} e^{σ²t²/2} ∫_{z=−∞}^∞ 1/√(2π) e^{−(z−σt)²/2} dz = e^{µt+σ²t²/2}.    (1.45)

Exercise 1.2.6. Let X ~ Gamma(α, β). Show that

    m_X(t) = ( β/(β − t) )^α.    (1.46)

Solution. We get

    m_X(t) = ∫_{u=0}^∞ β^α/Γ(α) u^{α−1} e^{−βu} e^{tu} du = β^α/Γ(α) ∫_{u=0}^∞ u^{α−1} e^{−u(β−t)} du.    (1.47)
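Here is a small numerical sketch in Python (numpy and scipy assumed available; the parameter values are illustrative) that checks (1.43) by computing the defining integral (1.44) directly:

    import numpy as np
    from scipy.integrate import quad

    # Check m_X(t) = exp(mu*t + sigma^2 t^2 / 2) for X ~ N(mu, sigma^2).
    mu, sigma, t = 1.5, 2.0, 0.3
    pdf = lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    mgf_num, _ = quad(lambda x: np.exp(t * x) * pdf(x), -np.inf, np.inf)
    mgf_closed = np.exp(mu * t + sigma ** 2 * t ** 2 / 2)
    print(mgf_num, mgf_closed)    # both approximately 1.878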

Make the change of variables x = u(β − t). Then u = x/(β − t) and du = dx/(β − t). We then find that

    m_X(t) = β^α/Γ(α) ∫_{x=0}^∞ ( x/(β − t) )^{α−1} e^{−x} dx/(β − t)
           = ( β/(β − t) )^α 1/Γ(α) ∫_{x=0}^∞ x^{α−1} e^{−x} dx = ( β/(β − t) )^α.    (1.48)

1.3 Exponential distribution

The exponential distribution, denoted by Exp(λ), is defined as

    F(t) = 1 − e^{−λt} for t ≥ 0, and 0 for t < 0,    (1.49)

and

    f(t) = λ e^{−λt} for t > 0, and 0 for t < 0,    (1.50)

so that E[X] = 1/λ and Var[X] = 1/λ². Also,

    ϕ_X(s) = E[e^{−sX}] = ∫_0^∞ e^{−su} λ e^{−λu} du = λ/(λ + s),    (1.51)

and hence

    E[X] = − d/ds λ/(λ + s) |_{s=0} = 1/λ,    (1.52)

    E[X²] = d²/ds² λ/(λ + s) |_{s=0} = 2/λ²,    (1.53)

    E[X^k] = (−1)^k d^k/ds^k λ/(λ + s) |_{s=0} = k!/λ^k.    (1.54)

There are several alternative ways of deriving (1.54). For instance, using integration by parts,

    E[X^k] = ∫_0^∞ t^k λ e^{−λt} dt = [ −t^k e^{−λt} ]_0^∞ + ∫_0^∞ k t^{k−1} e^{−λt} dt = (k/λ) ∫_0^∞ t^{k−1} λ e^{−λt} dt,    (1.55)

so that E[X^k] = (k/λ) E[X^{k−1}]. Iterating then yields

    E[X^k] = (k/λ) E[X^{k−1}] = (k/λ) ((k−1)/λ) E[X^{k−2}] = ··· = (k!/λ^k) E[X^0] = k!/λ^k.    (1.56)
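As a quick numerical sanity check of (1.54), the following Python sketch (numpy assumed available; λ = 2 is an illustrative choice) compares Monte Carlo estimates of E[X^k] with k!/λ^k:

    import numpy as np
    from math import factorial

    # Monte Carlo check of E[X^k] = k!/lambda^k for X ~ Exp(lambda).
    rng = np.random.default_rng(1)
    lam = 2.0
    x = rng.exponential(scale=1 / lam, size=10**6)
    for k in (1, 2, 3):
        print(k, np.mean(x ** k), factorial(k) / lam ** k)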

Or,

    E[X^k] = ∫_0^∞ t^k λ e^{−λt} dt = (−1)^k λ ∫_0^∞ d^k/dλ^k ( e^{−λt} ) dt
           = (−1)^k λ d^k/dλ^k ∫_0^∞ e^{−λt} dt = (−1)^k λ d^k/dλ^k (1/λ) = k!/λ^k.    (1.57)

The exponential distribution is somewhat of a special distribution. For instance, it is the only continuous distribution that possesses the memoryless property. That is, when X ~ Exp(λ),

    P(X > t + u | X > u) = P(X > t + u and X > u) / P(X > u) = P(X > t + u) / P(X > u)
                         = e^{−λ(t+u)} / e^{−λu} = e^{−λt} = P(X > t),   t, u ≥ 0.    (1.58)

The hazard rate of a cdf F(·) with density f(·) on [0, ∞) is defined as r(t) = f(t)/(1 − F(t)), and has the interpretation of an intensity or rate, because, if X has cdf F(·),

    P(X ∈ (t, t + dt) | X > t) = P(X ∈ (t, t + dt)) / P(X > t)    (1.59)
                               = f(t) dt / (1 − F(t)) = r(t) dt.    (1.60)

For instance, if F(·) is the distribution of the time it takes for the next claim to arrive, r(·) is the intensity with which this happens. Now, if X ~ Exp(λ), then

    r(t) = λ e^{−λt} / e^{−λt} = λ.    (1.61)

The constant hazard rate matches perfectly with the memoryless property. Here are some further properties of exponential random variables.

Property 1.3.1. Let X_i ~ Exp(λ_i), i = 1, 2, and X_1, X_2 independent. Then,

    min(X_1, X_2) ~ Exp(λ_1 + λ_2).    (1.62)

Proof.

    P(min(X_1, X_2) < x) = P(X_1 < x or X_2 < x)
                         = P(X_1 < x) + P(X_2 < x) − P(X_1 < x) P(X_2 < x)
                         = 1 − e^{−λ_1 x} + 1 − e^{−λ_2 x} − (1 − e^{−λ_1 x})(1 − e^{−λ_2 x}) = 1 − e^{−(λ_1+λ_2)x}.    (1.63)

Alternatively,

    P(min(X_1, X_2) > x) = P(X_1 > x and X_2 > x) = e^{−λ_1 x} e^{−λ_2 x} = e^{−(λ_1+λ_2)x}.    (1.64)

Property 1.3.2. Let X_i ~ Exp(λ_i), i = 1, 2, and X_1, X_2 independent. Then,

    P(X_1 < X_2) = λ_1 / (λ_1 + λ_2).    (1.65)

Proof.

    P(X_1 < X_2) = ∫_0^∞ ∫_0^y λ_1 e^{−λ_1 x} λ_2 e^{−λ_2 y} dx dy
                 = ∫_0^∞ (1 − e^{−λ_1 y}) λ_2 e^{−λ_2 y} dy
                 = ∫_0^∞ λ_2 e^{−λ_2 y} dy − ∫_0^∞ λ_2 e^{−(λ_1+λ_2) y} dy
                 = 1 − λ_2/(λ_1 + λ_2) = λ_1/(λ_1 + λ_2).    (1.66)
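Both properties are easy to check by simulation. The following Python sketch (numpy assumed available; the rates are illustrative) estimates the mean of min(X_1, X_2) and the probability P(X_1 < X_2):

    import numpy as np

    # Simulation check of Properties 1.3.1 and 1.3.2.
    rng = np.random.default_rng(2)
    lam1, lam2, n = 1.0, 3.0, 10**6
    x1 = rng.exponential(1 / lam1, n)
    x2 = rng.exponential(1 / lam2, n)
    print(np.mean(np.minimum(x1, x2)), 1 / (lam1 + lam2))   # mean of an Exp(lam1 + lam2)
    print(np.mean(x1 < x2), lam1 / (lam1 + lam2))           # P(X_1 < X_2)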


Chapter 2

Sums of random variables

In Chapter 1 we have acquainted ourselves with using transforms for describing the distributions of random variables. One of the main advantages of using transform methods is that it becomes relatively easy to describe the distributions of sums of random variables. Our key example will be the total claim amount, denoted by S_n. That is,

    S_n = X_1 + X_2 + ··· + X_n,    (2.1)

where X_i denotes the payment on policy i (claim i). In this chapter, these claims X_1, ..., X_n are assumed to be independent random variables.

2.1 Convolutions and transforms

For the mean of S_n, we always get that

    E[S_n] = E[X_1 + X_2 + ··· + X_n] = E[X_1] + E[X_2] + ··· + E[X_n].    (2.2)

For the variance, we have similarly that

    Var[S_n] = Var[X_1 + X_2 + ··· + X_n] = Var[X_1] + Var[X_2] + ··· + Var[X_n],    (2.3)

but this is only true if the random variables X_1, ..., X_n are independent. If not, we get the expression

    Var[S_n] = ∑_{i=1}^n ∑_{j=1}^n Cov[X_i, X_j],    (2.4)

where we note that Cov[X_i, X_i] = Var[X_i]. Hence, for the first two moments of S_n we have nice expressions available, and we now turn to the distribution of S_n.

Let F_X(s) = P(X ≤ s) denote the cdf of a random variable X. With convolution we mean the mathematical operation that calculates the cdf of X_1 + X_2, given the individual cdfs of the two independent continuous random variables X_1 and X_2:

    F_{X_1} * F_{X_2}(s) := F_{X_1+X_2}(s) = P(X_1 + X_2 ≤ s) = ∫_{−∞}^∞ F_{X_2}(s − x) dF_{X_1}(x).    (2.5)

If X_1 and X_2 are discrete random variables, we get

    F_{X_1} * F_{X_2}(s) = P(X_1 + X_2 ≤ s) = ∑_x F_{X_2}(s − x) P(X_1 = x),    (2.6)

or, perhaps easier, when X_1 and X_2 are nonnegative,

    P(X_1 + X_2 = k) = ∑_{j=0}^k P(X_1 = j) P(X_2 = k − j).    (2.7)

Performing the convolution operation can, in many cases, become rather cumbersome. Finding the mgf of the sum of independent random variables, however, is straightforward. If X_1 and X_2 are independent, then

    m_{X_1+X_2}(t) = E[e^{t(X_1+X_2)}] = E[e^{tX_1}] E[e^{tX_2}] = m_{X_1}(t) m_{X_2}(t).    (2.8)

So determining the convolution of the cdfs simply corresponds to a multiplication of the mgfs. More generally, for the mgf of the total claim amount S_n, we find that

    m_{S_n}(t) = m_{X_1+X_2+···+X_n}(t) = E[e^{t(X_1+X_2+···+X_n)}] = ∏_{i=1}^n m_{X_i}(t),    (2.9)

and the same properties hold for pgfs (in case of discrete random variables) and lsts (in case of nonnegative continuous random variables).

Example 2.1.1. Let X_1 ~ Pois(λ_1), X_2 ~ Pois(λ_2), and assume that X_1 and X_2 are independent. Then

    g_{X_1+X_2}(z) = E[z^{X_1+X_2}] = E[z^{X_1}] E[z^{X_2}] = e^{λ_1(z−1)} e^{λ_2(z−1)} = e^{(λ_1+λ_2)(z−1)}.    (2.10)

Example 2.1.2. Remember that we can interpret a binomial random variable as the sum of n independent Bernoulli trials. That is, if S_n ~ Bin(n, p), then S_n = X_1 + X_2 + ··· + X_n where X_i has range {0, 1} with P(X_i = 1) = p = 1 − P(X_i = 0). Thus

    g_{X_i}(z) = 1 − p + pz    (2.11)

and hence

    g_{S_n}(z) = ∏_{i=1}^n g_{X_i}(z) = (1 − p + pz)^n.    (2.12)

Once you have determined the transform of a random variable, there is always the problem of inverting the transform and thus recognizing the underlying distribution. This is often hard, but for Example 2.1.1 we can see that

    e^{(λ_1+λ_2)(z−1)} = e^{−(λ_1+λ_2)} ∑_{k=0}^∞ (λ_1 + λ_2)^k z^k / k!,    (2.13)

and hence

    P(X_1 + X_2 = k) = e^{−(λ_1+λ_2)} (λ_1 + λ_2)^k / k!,    (2.14)

so that we can conclude that X_1 + X_2 ~ Pois(λ_1 + λ_2). Indeed, this is the known property that the sum of two independent Poisson random variables is again Poisson distributed. For most distributions, however, such a nice property does not exist.

Exercise 2.1.3. Show that (2.14) is true without using transforms, but using the convolution formula (2.7).

Example 2.1.4. Let X_i ~ Exp(λ_i), for i = 1, ..., n, and assume that X_1, X_2, ..., X_n are independent. Then

    ϕ_{X_1+X_2+···+X_n}(s) = E[e^{−s(X_1+X_2+···+X_n)}] = ∏_{i=1}^n ϕ_{X_i}(s) = ∏_{i=1}^n λ_i/(λ_i + s).    (2.15)

Example 2.1.5. Let X_i ~ Exp(λ), for i = 1, ..., n, and assume that X_1, X_2, ..., X_n are independent. Then

    ϕ_{X_1+X_2+···+X_n}(s) = ( λ/(λ + s) )^n.    (2.16)

Because S_n = X_1 + X_2 + ··· + X_n is then a Gamma(n, λ) random variable, see (1.46), we know that

    f_{S_n}(u) = λ^n u^{n−1}/(n − 1)! e^{−λu},   u ≥ 0.    (2.17)

The distribution with density (2.17) is called the Erlang(n, λ) distribution.

Exercise 2.1.6. Let X_i ~ Geo(p_i), for i = 1, 2, and assume that X_1, X_2 are independent. Show, by either using a direct method or pgfs, that

    P(X_1 + X_2 = k) = (1 − p_1)(1 − p_2)/(p_2 − p_1) [ p_2^{k+1} − p_1^{k+1} ],   k = 0, 1, ....    (2.18)

Theorem 2.1.7 (Feller's convergence theorem). Let F^{(n)}(t) denote a sequence of distribution functions of nonnegative random variables with density f^{(n)}(t). If

    ϕ_n(s) = ∫_0^∞ e^{−su} f^{(n)}(u) du → ϕ(s),   as n → ∞,    (2.19)

then ϕ(s) is the lst of the density f(u), with f^{(n)}(u) → f(u).

Example 2.1.8. Let

    f^{(n)}(u) = (nλ)^n u^{n−1} e^{−nλu}/(n − 1)!,   u > 0,    (2.20)
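Exercise 2.1.3 can be previewed numerically. The following Python sketch (numpy and scipy assumed available; the rates and truncation level are illustrative) convolves two Poisson pmfs as in (2.7) and compares the result with the Pois(λ_1 + λ_2) pmf of (2.14):

    import numpy as np
    from scipy.stats import poisson

    # Numerical check that the convolution of Pois(lam1) and Pois(lam2) is Pois(lam1 + lam2).
    lam1, lam2, K = 1.5, 2.5, 40
    k = np.arange(K)
    conv = np.convolve(poisson.pmf(k, lam1), poisson.pmf(k, lam2))[:K]
    print(np.max(np.abs(conv - poisson.pmf(k, lam1 + lam2))))   # essentially zero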

be the Erlang(n, nλ) density. Remember that this corresponds to the density of S_n = X_1 + ··· + X_n with X_i ~ Exp(nλ). Then,

    ϕ_{S_n}(s) = ϕ_n(s) = ( nλ/(nλ + s) )^n = ( 1 + s/(nλ) )^{−n} → e^{−s/λ},   as n → ∞.    (2.21)

Now e^{−s/λ} is the lst that belongs to the degenerate distribution

    P(X < x) = 0 for x < 1/λ, and 1 for x > 1/λ.    (2.22)

To understand the result in (2.21), note that E[S_n] = n/(nλ) = 1/λ and

    Var[S_n] = n Var[X_1] = n ( 1/(nλ) )² = 1/(nλ²) → 0,   as n → ∞.    (2.23)

From this, we can conclude that a positive degenerate random variable, a constant c say, can be approximated by an Erlang(n, n/c) random variable with n large.

There is also a counterpart of Theorem 2.1.7 for discrete random variables. Let X^{(n)} ~ Bin(n, λ/n), so that

    g_{X^{(n)}}(z) = ( 1 − λ/n + (λ/n) z )^n.    (2.24)

Now, since

    ( 1 − λ/n + (λ/n) z )^n → e^{λ(z−1)},   as n → ∞,    (2.25)

we conclude that Bin(n, λ/n) → Pois(λ), and hence that

    Bin(n, λ/n) ≈ Pois(λ),   for n large.    (2.26)

Exercise 2.1.9. Let X_1 ~ N(µ_1, σ_1²) and X_2 ~ N(µ_2, σ_2²) be independent random variables. Show that X_1 + X_2 ~ N(µ_1 + µ_2, σ_1² + σ_2²).

Exercise 2.1.10. Let X ~ N(µ, σ²) and let a and b be real numbers. Show that aX + b ~ N(aµ + b, (aσ)²).

Exercise 2.1.11. Let X ~ N(µ, σ²). Show that Z = (X − µ)/σ ~ N(0, 1).

Exercise 2.1.12. Let Z ~ N(0, 1). Show that X = σZ + µ ~ N(µ, σ²).

2.2 Central limit theorem

Theorem 2.2.1 (Markov inequality). Let X be a nonnegative random variable with E[X] < ∞ and let a be a positive number. Then

    P(X ≥ a) ≤ E[X]/a.    (2.27)
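The approximation (2.26) can be made visible numerically. The following Python sketch (numpy and scipy assumed available; λ = 3 is an illustrative choice) shows the maximal difference between the two pmfs shrinking as n grows:

    import numpy as np
    from scipy.stats import binom, poisson

    # Illustration of (2.26): Bin(n, lambda/n) is close to Pois(lambda) for large n.
    lam = 3.0
    k = np.arange(15)
    for n in (10, 100, 1000):
        err = np.max(np.abs(binom.pmf(k, n, lam / n) - poisson.pmf(k, lam)))
        print(n, err)    # the maximal pmf difference decreases in n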

Proof. The bound in (2.27) follows from

    a P(X ≥ a) ≤ E[X 1_{X ≥ a}] ≤ E[X].    (2.28)

Theorem 2.2.2 (Chebyshev inequality). Let X have Var[X] = σ². Then

    P(|X − E[X]| ≥ a) ≤ σ²/a².    (2.29)

Proof. Since |X − E[X]| is non-negative, it follows that |X − E[X]| ≥ a if and only if (X − E[X])² ≥ a². From the Markov inequality, it follows that

    P(|X − E[X]| ≥ a) = P( (X − E[X])² ≥ a² ) ≤ E[(X − E[X])²]/a² = Var[X]/a².    (2.30)

Some equivalent formulations of the Chebyshev inequality are

    P(|X − µ| < kσ) ≥ 1 − 1/k²,   P(|X − µ| < a) ≥ 1 − σ²/a²,   P(|X − µ| ≥ kσ) ≤ 1/k².    (2.31)

When you repeat an experiment many times, you expect the sample mean to be close to the actual mean. For example, when you throw a coin many times, you would expect that the fraction of times heads appears would be close to the probability of obtaining heads, which is 50% if you use a fair coin. This intuition is formalized in the following theorem.

Theorem 2.2.3 (Weak law of large numbers). Let X_1, X_2, ... be an i.i.d. sequence of nonnegative random variables with E[X_1] < ∞ and Var[X_1] = σ² < ∞. Then, for any ɛ > 0,

    lim_{n→∞} P( | (1/n) ∑_{i=1}^n X_i − E[X_1] | > ɛ ) = 0,    (2.32)

so that (1/n) ∑_{i=1}^n X_i converges in probability to E[X_1].

Proof. Note that E[(1/n) ∑_{i=1}^n X_i] = E[X_1] and

    Var[ (1/n) ∑_{i=1}^n X_i ] = (1/n)² Var[ ∑_{i=1}^n X_i ] = (1/n)² ∑_{i=1}^n Var[X_i] = σ²/n.    (2.33)

Choose any ɛ > 0. By the Chebyshev inequality, we find

    P( | (1/n) ∑_{i=1}^n X_i − E[X_1] | > ɛ ) ≤ σ²/(nɛ²) → 0    (2.34)

as n → ∞. This proves the theorem.

The law of large numbers is often called the first fundamental theorem of probability. The following theorem, known as the central limit theorem (CLT), is sometimes called the second fundamental theorem of probability. It states that for a large number of repetitions, the sample mean is approximately normally distributed. It therefore describes the behavior of the sample mean around the true mean of the distribution.
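The following Python sketch (numpy assumed available; Exp(1) samples, so E[X_1] = σ² = 1, and ɛ = 0.1 are illustrative choices) estimates the probability in (2.32) by simulation and compares it with the Chebyshev bound σ²/(nɛ²) used in the proof:

    import numpy as np

    # Illustration of the weak law of large numbers (2.32).
    rng = np.random.default_rng(3)
    eps, reps = 0.1, 10**4
    for n in (10, 100, 1000):
        means = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
        est = np.mean(np.abs(means - 1.0) > eps)     # estimated P(|sample mean - E[X]| > eps)
        print(n, est, 1.0 / (n * eps**2))            # simulation estimate versus Chebyshev bound

The Chebyshev bound is crude (it even exceeds 1 for small n), but both columns tend to 0 as n grows, which is all the proof of Theorem 2.2.3 needs.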

Theorem 2.2.4 (CLT). Let X_1, X_2, X_3, ... be an i.i.d. sequence of random variables, each having expected value µ and finite variance σ² > 0. Let S_n := ∑_{i=1}^n X_i and let X̄_n := S_n/n be the sample mean. Then the random variables Z_n, defined by

    Z_n := (S_n − nµ)/(σ√n) = √n (X̄_n − µ)/σ,    (2.35)

converge in distribution to a standard normal distribution, that is,

    lim_{n→∞} P(Z_n ≤ z) = Φ(z).    (2.36)

Here, Φ(z) is the standard normal cdf,

    Φ(z) = ∫_{x=−∞}^z 1/√(2π) e^{−x²/2} dx.    (2.37)

Proof. We will sketch the proof of the CLT, under the stronger assumption that the common mgf of the X_i − µ, say m(t) := m_{X_1−µ}(t), is finite in a neighborhood of 0. Note that m(0) = 1, m'(0) = 0 and m''(0) = σ². Using a Taylor series expansion for m, we find

    m(t) = m(0) + m'(0) t + (1/2) m''(0) t² + (1/6) m'''(0) t³ + ···    (2.38)

The moment generating function of Z_n is given by

    m_{Z_n}(t) = m_{∑(X_i−µ)}( t/√(nσ²) ) = ( m( t/√(nσ²) ) )^n,    (2.39)

since the moment generating function of a sum of independent random variables is just the product of the individual terms. Now using the Taylor series expansion, we may write

    ( m( t/√(nσ²) ) )^n = ( 1 + t²/(2n) + m'''(0) t³/(6 σ³ n^{3/2}) + ··· )^n,    (2.40)

and therefore

    m_{Z_n}(t) = ( 1 + t²/(2n) + ··· )^n → e^{t²/2},   as n → ∞.    (2.41)

The latter is the moment generating function of the standard normal distribution. This shows that Z_n converges in distribution to the standard normal distribution.

Example 2.2.5. Consider an insurance company with n = 100 policy holders. Each policy holder will file a claim with probability p = 1/2. Use the CLT to estimate the probability that the total number of claims lies between 40 and 60.

The total number of claims S can be described by S = X_1 + ··· + X_{100}, where the X_i are i.i.d. random variables with P(X_i = 1) = 1/2 and P(X_i = 0) = 1/2. Note that E[S] = np = 50 and Var[S] = np(1 − p) = 25, and thus

    Z = (S − 50)/√25 ≈ N(0, 1).    (2.42)

Then,

    P(40 ≤ S ≤ 60) = P( (40 − 50)/5 ≤ Z ≤ (60 − 50)/5 ) = P(−2 ≤ Z ≤ 2) ≈ Φ(2) − Φ(−2).    (2.43)

This is a rather good approximation, since by observing that S ~ Bin(100, 1/2), the true value can be calculated:

    P(40 ≤ S ≤ 60) = ∑_{k=40}^{60} (100 choose k) (1/2)^k (1/2)^{100−k}.    (2.44)

Exercise 2.2.6. Claims X_i, i = 1, ..., 25, are independent with mean 4 and standard deviation 2. Estimate, using the CLT, the probability that the 25 claims together exceed an amount of 110.

Exercise 2.2.7. Let X_i, i = 1, ..., 12, be independent claims, where each claim is uniformly distributed over (0, 1). Estimate the probability P( ∑_{i=1}^{12} X_i > 7 ), using the CLT.

Exercise 2.2.8. An insurance company has 10,000 automobile policyholders. The expected yearly claim per policyholder is 240 with a standard deviation of 800. Approximate, using the CLT, the probability that the total yearly claim exceeds 2.7 million.
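The following Python sketch (scipy assumed available) evaluates both sides of Example 2.2.5 numerically: the CLT approximation (2.43) and the exact binomial probability (2.44):

    from scipy.stats import binom, norm

    # CLT approximation versus exact probability that the number of claims is in [40, 60].
    approx = norm.cdf(2) - norm.cdf(-2)
    exact = binom.cdf(60, 100, 0.5) - binom.cdf(39, 100, 0.5)
    print(approx, exact)    # approximately 0.954 and 0.965

A continuity correction (evaluating Φ at ±2.1 instead of ±2) would bring the approximation even closer to the exact value.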


Chapter 3

Total claim amount

As in Chapter 2 we shall again be interested in the total claim amount, except this time we interpret the total claim amount as the sum of all claims that were filed in the interval (0, t]. If we denote the number of claims in (0, t] by N(t), we thus need to study

    S_{N(t)} = X_1 + X_2 + ··· + X_{N(t)},    (3.1)

where X_i denotes claim i. We shall again assume that the claims X_1, ..., X_n are independent random variables. We further assume that the claims arrive at times σ_1, σ_2, ..., and we denote the interarrival times of claims by T_1 = σ_1 and T_i = σ_i − σ_{i−1} for i ≥ 2.

3.1 Stochastic sums of random variables

Let us first consider the random variable

    S_N = X_1 + X_2 + ··· + X_N,    (3.2)

with N a discrete random variable with pgf g_N(z), and X_1, X_2, ... i.i.d. random variables with mgf m_X(t) and independent of N. Think of X_1, X_2, ... as the claim sizes, N as the (uncertain) number of claims, and S_N then as the total claim amount. There exists a nice expression for the mgf of S_N:

    m_{S_N}(t) = E[e^{t S_N}] = ∑_{n=0}^∞ P(N = n) E[e^{t(X_1+X_2+···+X_n)}]
               = ∑_{n=0}^∞ P(N = n) ( E[e^{t X_1}] )^n
               = ∑_{n=0}^∞ P(N = n) ( m_X(t) )^n = g_N(m_X(t)).    (3.3)

To see the latter step, recall the definition of a pgf:

    g_N(z) = ∑_{n=0}^∞ P(N = n) z^n.    (3.4)

Hence, despite the fact that the random variable S_N might have a rather complex distribution, its mgf follows easily from the mgf of X_1 and the pgf of N. With (3.3) it is now straightforward to derive expressions for the moments of S_N:

Property 3.1.1. Let S_N be defined as in (3.2). Then,

    E[S_N] = E[X_1] E[N],    (3.5)

    Var[S_N] = Var[N] (E[X_1])² + E[N] Var[X_1].    (3.6)

Exercise 3.1.2. Let X_i ~ Exp(λ) and N ~ Geo(p). Determine the mgf of S_N.

Exercise 3.1.3. An insurance company has ... policy holders, and each policy holder i will file a claim with probability ..., of size X_i ~ Gamma(2, 2). Determine the mean and mgf of the total claim size.

Exercise 3.1.4. An insurance company has n policy holders with an insurance against storm damage. After a storm, depending on the location of the house, a policy holder will have damage or not. Policy holder i will file a claim after a storm with probability p_i, of size Y_i, and the houses of the policy holders are so far apart that the occurrence of claims and the claim sizes Y_i can be considered independent among all policy holders. Assume that p_1 = p_2 = ... = p_n and that Y_1, Y_2, ... are i.i.d. Determine the mean and mgf of the total claim size after a storm.

Exercise 3.1.5. An insurance company has n policy holders with an insurance against storm damage. The houses of the policy holders are in the same area, so that after a storm, either nobody or everybody has damage. After a storm, with probability p all policy holders file a claim, where policy holder i files a claim of size Y_i. Assume that Y_1, Y_2, ... are i.i.d. Determine the mean and mgf of the total claim size after a storm.

Exercise 3.1.6. A touring car is involved in a severe accident, caused by the driver, and all passengers can file a claim to one and the same insurance company. There are 50 passengers, and each passenger, independently of the other passengers, will choose to file a claim with probability 9/10. Passenger i will file a claim of size X_i ~ Exp(1/10), and all claim sizes are assumed to be i.i.d. Show that the mgf of the total claim size is given by

    m_{S_N}(t) = ( 1/10 + (9/10) (1/10)/((1/10) − t) )^{50}.    (3.7)

Exercise 3.1.7. From the expression for the mgf in (3.7), we obtain that E[S_N] = m'_{S_N}(0) = 450 and Var[S_N] = m''_{S_N}(0) − (m'_{S_N}(0))² = 4950. Verify these results using (3.5) and (3.6).

Exercise 3.1.8. A touring car is involved in a severe accident, caused by the driver, and all passengers can file a claim to one and the same insurance company. There are 50 passengers, and each passenger, independently of the other passengers, will choose to file a claim with probability 9/10. All claim sizes are equally large and drawn from a single Exp(1/10) distribution. Determine the mean and the variance of the total claim size.
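The moment formulas (3.5)–(3.6) are easy to check by simulation. The following Python sketch (numpy assumed available) uses the touring-car setting of Exercises 3.1.6–3.1.7, i.e. N ~ Bin(50, 9/10) claims of size X_i ~ Exp(1/10) (mean 10), and should reproduce E[S_N] = 450 and Var[S_N] = 4950 up to Monte Carlo error:

    import numpy as np

    # Simulation check of E[S_N] = E[N] E[X] and Var[S_N] = Var[N] (E[X])^2 + E[N] Var[X].
    rng = np.random.default_rng(4)
    reps = 10**5
    n_claims = rng.binomial(50, 0.9, size=reps)                         # number of claims N
    totals = np.array([rng.exponential(10.0, k).sum() for k in n_claims])
    print(totals.mean(), totals.var())                                  # approximately 450 and 4950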

3.2 Poisson process

A stochastic process {N(t), t ≥ 0} is called a counting process when N(t) represents the number of events that occurred in the time interval (0, t]. All counting processes have to satisfy the following properties:

(i) N(0) = 0.
(ii) N(t) is an integer, for all t.
(iii) N(s) ≤ N(t) if s < t.
(iv) For s < t, the quantity N(t) − N(s) represents the number of events in (s, t].

For example, the number of births in The Netherlands forms a counting process, while the number of inhabitants of The Netherlands clearly does not.

Here are certain properties that would make a counting process special:

Independent increments. This means that N(b) − N(a) and N(d) − N(c) are independent if the intervals (a, b] and (c, d] do not overlap.

Stationary increments. This means that N(t) − N(s) and N(t + u) − N(s + u) are identically distributed, for s, t, u > 0 and s < t.

The process that counts the number of arriving customers at a supermarket has roughly independent increments, but probably no stationary increments (think of rush hours). The Poisson process is a counting process with independent and stationary increments, and moreover, with a third property that can be described in multiple ways. Here is our first definition of a Poisson process:

Definition 3.2.1. The counting process {N(t), t ≥ 0} is a Poisson process with rate λ if N(0) = 0, the process has stationary and independent increments, and

    P(N(t) = n) = e^{−λt} (λt)^n / n!,   n = 0, 1, ...,  t ≥ 0.    (3.8)

Hence, N(t) is for all t > 0 Poisson distributed with mean λt, i.e. λt is the expected number of events in (0, t].

Definition 3.2.2. The counting process {N(t), t ≥ 0} is a Poisson process with rate λ if N(0) = 0, the process has stationary and independent increments, and

    P(N(h) = 1) = λh + o(h),   h → 0,    (3.9)

    P(N(h) ≥ 2) = o(h),   h → 0.    (3.10)

We note that a function f(·) is o(h) if lim_{h→0} f(h)/h = 0. Since we now have two definitions of a Poisson process, we should be able to prove that these definitions are equivalent.

Let us first prove that (3.8) implies (3.9) and (3.10):

    P(N(h) = 1) = e^{−λh} (λh)^1 / 1! = (1 − λh + o(h)) λh = λh + o(h),   h → 0.    (3.11)

    P(N(h) ≥ 2) = 1 − P(N(h) = 0) − P(N(h) = 1) = 1 − e^{−λh} − e^{−λh} λh
                = 1 − (1 − λh + o(h)) − (λh + o(h)) = o(h),   h → 0.    (3.12)

To see that (3.9) and (3.10) imply (3.8), we give the following sketch of a proof: Divide (0, t] into n smaller intervals of length h = t/n, and think of n as a large number. Then,

    P(N(t) = j) ≈ (n choose j) ( λt/n + o(t/n) )^j ( 1 − λt/n + o(t/n) )^{n−j} → e^{−λt} (λt)^j / j!,    (3.13)

where we recall that the binomial distribution can be approximated by a Poisson distribution, see (2.26).

From the above sketch, it becomes clear that Poisson processes might be good descriptions of the number of claims that are filed at an insurance company. That is, for these settings, there is typically a large number n of policy holders, who all, more or less independently from each other, file a claim with probability p. Moreover, it seems reasonable that p is roughly proportional to the length of the interval, which gives p = λt/n for a fixed t. From this reasoning, we would then again arrive at

    P(N(t) = j) = (n choose j) ( λt/n )^j ( 1 − λt/n )^{n−j} ≈ e^{−λt} (λt)^j / j!.    (3.14)

Let us now investigate some further properties of the Poisson process. Let T_1, T_2, ... denote the times in between consecutive events of a Poisson process. These times are also called interarrival times, and in case of a claim arrival process they represent the times between two consecutive claims. It is clear that

    P(T_1 > t) = P(N(t) = 0) = e^{−λt},    (3.15)

and so T_1 ~ Exp(λ). Due to the independent and stationary increments, we know that T_2 is independent of T_1, and again Exp(λ) distributed. In fact, all T_i, i = 1, 2, ... are independent and Exp(λ) distributed. Apparently, under the assumptions of independence and stationarity, the counting process starts at any point t all over again. In other words, the counting process is memoryless, and we could have expected that indeed T_i ~ Exp(λ). This brings us to the third definition of a Poisson process:

Definition 3.2.3. The counting process {N(t), t ≥ 0} is a Poisson process with rate λ if N(0) = 0, the process has stationary and independent increments, and the times between consecutive events are independent and Exp(λ) distributed.

We can define the time V_n of the nth event as

    V_n = T_1 + T_2 + ··· + T_n,    (3.16)

which is a sum of independent Exp(λ) random variables. Such sums were treated in Chapter 2. Since we know that the mgf of V_n is then given by

    m_{V_n}(t) = ( λ/(λ − t) )^n,    (3.17)

we can immediately conclude that V_n ~ Gamma(n, λ) with pdf

    f(u) = λ^n/(n − 1)! u^{n−1} e^{−λu},   u ≥ 0.    (3.18)

This distribution is also called the Erlang distribution. The fact that V_n ~ Gamma(n, λ) can also be obtained directly from the definition of the Poisson process, by observing that

    V_n ≤ t  ⟺  N(t) ≥ n.    (3.19)

Hence,

    P(V_n ≤ t) = P(N(t) ≥ n) = ∑_{j=n}^∞ e^{−λt} (λt)^j / j! = 1 − ∑_{j=0}^{n−1} e^{−λt} (λt)^j / j!.    (3.20)

Differentiation of (3.20) then yields (3.18). Further intuition follows from

    P(V_n ∈ (u, u + h)) = P(N(u) = n − 1, nth event in (u, u + h))    (3.21)
                        = e^{−λu} (λu)^{n−1}/(n − 1)! [λh + o(h)],    (3.22)

and so

    f(u) = d/du P(V_n ≤ u) = lim_{h→0} P(V_n ∈ (u, u + h))/h = λ^n/(n − 1)! u^{n−1} e^{−λu},   u ≥ 0.    (3.23)

Theorem 3.2.4. Let {N_1(t), t ≥ 0} and {N_2(t), t ≥ 0} be two independent Poisson processes with rates λ_1 and λ_2. Then {N_1(t) + N_2(t), t ≥ 0} is a Poisson process with rate λ_1 + λ_2.

Theorem 3.2.5. If a Poisson process {N(t), t ≥ 0} with rate λ is randomly split into two subprocesses with probabilities p and 1 − p, then the resulting processes are independent Poisson processes with rates pλ and (1 − p)λ. This result allows a straightforward generalization to a split into more than two subprocesses.
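Definition 3.2.3 suggests a direct way to simulate a Poisson process: generate i.i.d. Exp(λ) interarrival times and count how many arrival epochs fall in (0, t]. The following Python sketch (numpy assumed available; λ = 2 and t = 5 are illustrative) checks that N(t) then indeed has mean and variance λt, in line with Definition 3.2.1:

    import numpy as np

    # Build N(t) from Exp(lambda) interarrival times and check E[N(t)] = Var[N(t)] = lambda*t.
    rng = np.random.default_rng(5)
    lam, t, reps = 2.0, 5.0, 10**5
    counts = np.empty(reps, dtype=int)
    for r in range(reps):
        arrivals = np.cumsum(rng.exponential(1 / lam, size=40))   # 40 >> lam*t events suffice here
        counts[r] = np.searchsorted(arrivals, t, side='right')    # N(t) = #{arrival epochs <= t}
    print(counts.mean(), counts.var(), lam * t)                   # all approximately 10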

3.3 Compound Poisson process

A stochastic process {S(t), t ≥ 0} is said to be a compound Poisson process if it can be represented as

    S(t) = S_{N(t)} = X_1 + X_2 + ··· + X_{N(t)},    (3.24)

where {N(t), t ≥ 0} is a Poisson process with rate λ, and X_1, X_2, ... are independent and identically distributed random variables that are also independent of {N(t), t ≥ 0}. Notice that when X_i ≡ 1, then S(t) = N(t). If N(t) counts the number of claims in the time interval [0, t], and X_i is the size of claim i, then S(t) is the total claim size up to time t. From (3.5) and (3.6) we get

    E[S_{N(t)}] = λt E[X_1],    (3.25)

    Var[S_{N(t)}] = λt E[X_1²].    (3.26)

Moreover, since g_{N(t)}(z) = e^{λt(z−1)}, it follows from (3.3) that

    m_{S_{N(t)}}(u) = e^{λt(m_X(u) − 1)}.    (3.27)

We call in this section the parameter of the mgf u instead of t, because t is already used as the time parameter.

Theorem 3.3.1. Let S^{(1)}(t), S^{(2)}(t), ..., S^{(n)}(t) be independent compound Poisson processes with Poisson rates λ^{(1)}, λ^{(2)}, ..., λ^{(n)} and generic claim sizes X^{(1)}, X^{(2)}, ..., X^{(n)}. Then S(t) = S^{(1)}(t) + S^{(2)}(t) + ··· + S^{(n)}(t) is again a compound Poisson process with rate λ = λ^{(1)} + λ^{(2)} + ··· + λ^{(n)} and generic claim size

    X = X^{(i)}  with probability  λ^{(i)}/λ,   i = 1, ..., n.    (3.28)

Proof.

    m_{S(t)}(u) = ∏_{i=1}^n m_{S^{(i)}(t)}(u) = ∏_{i=1}^n exp( λ^{(i)} t (m_{X^{(i)}}(u) − 1) )
                = exp( ∑_{i=1}^n λ^{(i)} t (m_{X^{(i)}}(u) − 1) )
                = exp( λt ( ∑_{i=1}^n (λ^{(i)}/λ) m_{X^{(i)}}(u) − 1 ) ).    (3.29)

The proof is concluded by recognizing ∑_{i=1}^n (λ^{(i)}/λ) m_{X^{(i)}}(u) as the mgf of X.

Theorem 3.3.1 shows that the sum of compound Poisson processes is again a compound Poisson process. This was proved via transform methods. A more direct proof can be obtained using the properties of the Poisson process. Take two independent compound Poisson processes S^{(1)}(t), S^{(2)}(t) with Poisson rates λ^{(1)}, λ^{(2)} and generic claim sizes X^{(1)}, X^{(2)}.

Since the sum of two Poisson processes is again a Poisson process, events in the new process S(t) = S^{(1)}(t) + S^{(2)}(t) will occur according to a Poisson process with rate λ^{(1)} + λ^{(2)} (Theorem 3.2.4), and each event, independently, will be from the first compound Poisson process with probability λ^{(1)}/(λ^{(1)} + λ^{(2)}) (see Property 1.3.2), yielding the generic claim size

    X = X^{(1)}  with probability  λ^{(1)}/(λ^{(1)} + λ^{(2)}),
        X^{(2)}  with probability  λ^{(2)}/(λ^{(1)} + λ^{(2)}).    (3.30)
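To close, here is a small Python sketch (numpy assumed available; λ = 2, t = 3 and Exp(1) claim sizes are illustrative, so E[X] = 1 and E[X²] = 2) that simulates S(t) at a fixed time t and checks the compound Poisson moment formulas (3.25)–(3.26):

    import numpy as np

    # Simulate S(t) = X_1 + ... + X_{N(t)} with N(t) ~ Pois(lambda*t) and Exp(1) claims,
    # and check E[S(t)] = lambda*t*E[X] and Var[S(t)] = lambda*t*E[X^2].
    rng = np.random.default_rng(6)
    lam, t, reps = 2.0, 3.0, 10**5
    n_claims = rng.poisson(lam * t, size=reps)
    totals = np.array([rng.exponential(1.0, k).sum() for k in n_claims])
    print(totals.mean(), lam * t * 1.0)    # approximately 6
    print(totals.var(), lam * t * 2.0)     # approximately 12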


More information

BMIR Lecture Series on Probability and Statistics Fall 2015 Discrete RVs

BMIR Lecture Series on Probability and Statistics Fall 2015 Discrete RVs Lecture #7 BMIR Lecture Series on Probability and Statistics Fall 2015 Department of Biomedical Engineering and Environmental Sciences National Tsing Hua University 7.1 Function of Single Variable Theorem

More information

STAT/MATH 395 PROBABILITY II

STAT/MATH 395 PROBABILITY II STAT/MATH 395 PROBABILITY II Chapter 6 : Moment Functions Néhémy Lim 1 1 Department of Statistics, University of Washington, USA Winter Quarter 2016 of Common Distributions Outline 1 2 3 of Common Distributions

More information

The story of the film so far... Mathematics for Informatics 4a. Continuous-time Markov processes. Counting processes

The story of the film so far... Mathematics for Informatics 4a. Continuous-time Markov processes. Counting processes The story of the film so far... Mathematics for Informatics 4a José Figueroa-O Farrill Lecture 19 28 March 2012 We have been studying stochastic processes; i.e., systems whose time evolution has an element

More information

Figure 10.1: Recording when the event E occurs

Figure 10.1: Recording when the event E occurs 10 Poisson Processes Let T R be an interval. A family of random variables {X(t) ; t T} is called a continuous time stochastic process. We often consider T = [0, 1] and T = [0, ). As X(t) is a random variable

More information

Lecture Notes 2 Random Variables. Discrete Random Variables: Probability mass function (pmf)

Lecture Notes 2 Random Variables. Discrete Random Variables: Probability mass function (pmf) Lecture Notes 2 Random Variables Definition Discrete Random Variables: Probability mass function (pmf) Continuous Random Variables: Probability density function (pdf) Mean and Variance Cumulative Distribution

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Slides 8: Statistical Models in Simulation

Slides 8: Statistical Models in Simulation Slides 8: Statistical Models in Simulation Purpose and Overview The world the model-builder sees is probabilistic rather than deterministic: Some statistical model might well describe the variations. An

More information

MA 519 Probability: Review

MA 519 Probability: Review MA 519 : Review Yingwei Wang Department of Mathematics, Purdue University, West Lafayette, IN, USA Contents 1 How to compute the expectation? 1.1 Tail........................................... 1. Index..........................................

More information

Lecture 5: Moment generating functions

Lecture 5: Moment generating functions Lecture 5: Moment generating functions Definition 2.3.6. The moment generating function (mgf) of a random variable X is { x e tx f M X (t) = E(e tx X (x) if X has a pmf ) = etx f X (x)dx if X has a pdf

More information

STA 256: Statistics and Probability I

STA 256: Statistics and Probability I Al Nosedal. University of Toronto. Fall 2017 My momma always said: Life was like a box of chocolates. You never know what you re gonna get. Forrest Gump. Exercise 4.1 Let X be a random variable with p(x)

More information