HUB RESEARCH PAPER

A Markov Binomial Distribution

E. Omey, J. Santos and S. Van Gulck

HUB RESEARCH PAPER 2007/5, SEPTEMBER 2007

Hogeschool Universiteit Brussel
Stormstraat 2, 1000 Brussel, Belgium

A MARKOV-BINOMIAL DISTRIBUTION

E. Omey, J. Santos and S. Van Gulck

Abstract

Let $\{X_i, i \geq 1\}$ denote a sequence of $\{0,1\}$-variables and suppose that the sequence forms a Markov chain. In the paper we study the number of successes $S_n = X_1 + X_2 + \cdots + X_n$ and the number of experiments $Y(r)$ up to the $r$-th success. In the i.i.d. case $S_n$ has a binomial distribution and $Y(r)$ has a negative binomial distribution, and the asymptotic behaviour is well known. In the more general Markov chain case, we prove a central limit theorem for $S_n$ and provide conditions under which the distribution of $S_n$ can be approximated by a Poisson-type distribution. We also completely characterize $Y(r)$ and show that $Y(r)$ can be interpreted as the sum of $r$ independent r.v. related to a geometric distribution.

Keywords: Markov chain, generalized binomial distribution, central limit theorem

MSC 2000 Classification: 60J10, 60F05, 60K99

1 Introduction

Many papers are devoted to sequences of Bernoulli trials, and they form the basis of many (known) distributions. Applications are numerous. To mention only a few:

- the one-sample runs test can be used to test the hypothesis that the order in a sample is random;
- the number of successes can be used for testing for trends in the weather or in the stock market;
- Bernoulli trials are important in matching DNA-sequences;
- the number of (consecutive) failures can be used in quality control.

For further use we suppose that each $X_i$ takes values in the set $\{0,1\}$ and, for $n \geq 1$, we let $S_n = \sum_{i=1}^n X_i$ denote the number of successes in the sequence $(X_1, X_2, \ldots, X_n)$. If the $X_i$ are i.i.d. with $P(X_i = 1) = p$ and $P(X_i = 0) = q = 1 - p$, it is well known that $S_n$ has a binomial distribution, $S_n \sim BIN(n, p)$. In the classical theory one calculates probabilities concerning $S_n$ either by using the binomial distribution or by using a normal or a Poisson approximation. A related variable of interest is $Y(r)$, where for $r \geq 1$ the variable $Y(r)$ counts the number of experiments until the $r$-th success. In the i.i.d. case it is well known that $Y(r)$ has a negative binomial distribution and that $Y(r)$ can be interpreted as the sum of i.i.d. geometrically distributed r.v.

In Section 2 below we first list the Markov chain properties that we need and then study $S_n$ (Section 2.1) and $Y(r)$ (Section 2.2). We finish the paper with some concluding remarks.

2 Markov chains

Let the initial distribution be given by $P(X_1 = 1) = p$ and $P(X_1 = 0) = q = 1 - p$, and for $i, j = 0, 1$, let $p_{i,j} = P(X_2 = j \mid X_1 = i)$ denote the transition probabilities. To avoid trivialities we suppose that $0 < p_{i,j} < 1$. The one-step transition matrix of the Markov chain is given by

$$P = \begin{pmatrix} p_{0,0} & p_{0,1} \\ p_{1,0} & p_{1,1} \end{pmatrix}.$$
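To make the setting concrete, here is a minimal Python sketch that simulates one path of this two-state chain and records the number of successes; the values of $p$, $p_{0,1}$ and $p_{1,0}$ are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_chain(n, p, p01, p10):
    """Simulate (X_1, ..., X_n) with P(X_1 = 1) = p,
    p01 = P(X_{k+1} = 1 | X_k = 0) and p10 = P(X_{k+1} = 0 | X_k = 1)."""
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p
    for i in range(1, n):
        prob_one = p01 if x[i - 1] == 0 else 1.0 - p10
        x[i] = rng.random() < prob_one
    return x

path = simulate_chain(20, p=0.3, p01=0.2, p10=0.5)
print(path, "S_n =", path.sum())   # the path and its number of successes
```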

We list some elementary properties of this Markov chain, cf. [3, Chapter XVI.3]. First note that the Markov chain has a unique stationary vector given by $(x, y) = (p_{1,0}, p_{0,1})/(p_{0,1} + p_{1,0})$. The eigenvalues of $P$ are $\lambda_1 = 1$ and $\lambda_2 = 1 - p_{0,1} - p_{1,0} = p_{0,0} + p_{1,1} - 1$. Note that $|\lambda_2| < 1$. By induction it is easy to show that the $n$-step transition matrix is given by $P^n = A + \lambda_2^n B$, where

$$A = \begin{pmatrix} x & y \\ x & y \end{pmatrix}, \qquad B = \begin{pmatrix} y & -y \\ -x & x \end{pmatrix}.$$

It follows that $p^{(n)}_{0,0} = x + \lambda_2^n y$ and $p^{(n)}_{1,0} = x - \lambda_2^n x$. Using these relations, for $n \geq 1$ we have

$$P(X_n = 1) = y + \lambda_2^{n-1}(px - qy) = y - \lambda_2^{n-1}(y - p),$$
$$P(X_n = 0) = x + \lambda_2^{n-1}(y - p).$$

Information about moments is given in the following result.

Lemma 1. For $n \geq 1$ we have
(i) $E(X_n) = P(X_n = 1) = y - \lambda_2^{n-1}(y - p)$.
(ii) $Var(X_n) = (y - \lambda_2^{n-1}(y - p))(x + \lambda_2^{n-1}(y - p))$.
(iii) For $n \geq m$ we have $Cov(X_n, X_m) = \lambda_2^{n-m} Var(X_m)$.

Proof. The first and the second part are easy to prove. To prove (iii), note that $E(X_n X_m) = P(X_n = 1, X_m = 1) = p^{(n-m)}_{1,1} P(X_m = 1)$. It follows that $Cov(X_n, X_m) = (p^{(n-m)}_{1,1} - P(X_n = 1)) P(X_m = 1)$. Using the expressions obtained before we obtain

$$Cov(X_n, X_m) = (y + \lambda_2^{n-m} x - y + \lambda_2^{n-1}(y - p)) P(X_m = 1) = \lambda_2^{n-m}(x + \lambda_2^{m-1}(y - p)) P(X_m = 1),$$

and the result follows.

As a special case we consider the type of correlated Bernoulli trials studied in [2] and [9]. In this model we assume that $P(X_n = 1) = p$, $P(X_n = 0) = q = 1 - p$ and $\rho = \rho(X_n, X_{n+1}) \neq 0$ for all $n$. From this it follows that $Cov(X_n, X_{n+1}) = \rho pq$ and that $P(X_n = 1, X_{n+1} = 1) = p(\rho q + p)$. Since $P(X_n = 1) = p$ we also find that $P(X_n = 0, X_{n+1} = 1) = P(X_n = 1, X_{n+1} = 0) = pq(1 - \rho)$. It turns out that $P(X_{n+1} = j \mid X_n = i) = p_{i,j}$ $(i, j = 0, 1)$, where the $p_{i,j}$ are given by

$$P(p, \rho) = \begin{pmatrix} q + \rho p & p(1 - \rho) \\ q(1 - \rho) & p + \rho q \end{pmatrix}.$$

In this case we have $(x, y) = (q, p)$ and $\lambda_2 = \rho$. For $n \geq m$ it follows from Lemma 1 that $\rho(X_n, X_m) = \rho^{n-m}$.
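The decomposition $P^n = A + \lambda_2^n B$ is easy to verify numerically. A short sketch, again with illustrative transition probabilities:

```python
import numpy as np

p01, p10 = 0.2, 0.5
P = np.array([[1 - p01, p01],
              [p10, 1 - p10]])

x, y = p10 / (p01 + p10), p01 / (p01 + p10)   # stationary vector (x, y)
lam2 = 1 - p01 - p10                          # second eigenvalue of P

A = np.array([[x, y], [x, y]])
B = np.array([[y, -y], [-x, x]])

for n in range(1, 8):
    assert np.allclose(np.linalg.matrix_power(P, n), A + lam2**n * B)
print("P^n = A + lambda_2^n B verified for n = 1, ..., 7")
```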

2.1 The number of successes $S_n$

2.1.1 Moments

In this section we study the number of successes $S_n = \sum_{i=1}^n X_i$. In our first result we study moments of $S_n$ and extend the known i.i.d. results.

Proposition 2. (i) We have $E(S_n) = ny - (y - p)(1 - \lambda_2^n)/(1 - \lambda_2)$.
(ii) We have

$$Var(S_n) = nxy\,\frac{1 + \lambda_2}{1 - \lambda_2} + \sum_{k=0}^{n-1}\left(A\lambda_2^k + B\lambda_2^{2k} + Ck\lambda_2^k\right),$$

where $A$, $B$, $C$ will be determined in the proof of the result.
(iii) If $P = P(p, \rho)$ we have $E(S_n) = np$ and

$$Var(S_n) = \frac{pq}{1 - \rho}\left(n(1 + \rho) - 2\rho\,\frac{1 - \rho^n}{1 - \rho}\right).$$

Proof. Part (i) follows from Lemma 1(i). To prove (ii) we start from $Var(S_{k+1}) = Var(S_k + X_{k+1})$. Using Lemma 1, we see that

$$Var(S_{k+1}) - Var(S_k) = Var(X_{k+1}) + 2\sum_{i=1}^k \lambda_2^{k+1-i}\,Var(X_i).$$

Again from Lemma 1 we see that

$$Var(X_i) = xy + a\lambda_2^{i-1} - b\lambda_2^{2i-2},$$

where $a = (y - p)(y - x)$ and $b = (y - p)^2$. Straightforward calculations show that

$$Var(S_{k+1}) - Var(S_k) = xy\,\frac{1 + \lambda_2}{1 - \lambda_2} + A\lambda_2^k + B\lambda_2^{2k} + Ck\lambda_2^k,$$

where $C = 2a$ and

$$A = \frac{a(1 - \lambda_2) - 2xy\lambda_2 - 2b}{1 - \lambda_2}, \qquad B = \frac{b(1 + \lambda_2)}{1 - \lambda_2}.$$

If we define $S_0 = 0$, this result holds for all $k \geq 0$. Using $Var(S_n) = \sum_{k=0}^{n-1}(Var(S_{k+1}) - Var(S_k))$, result (ii) follows. Result (iii) is easier to prove. Using Lemma 1 we have

$$Var(S_{k+1}) - Var(S_k) = xy\left(1 + 2\sum_{i=1}^k \rho^{k+1-i}\right) = xy\left(1 + 2\,\frac{\rho - \rho^{k+1}}{1 - \rho}\right).$$

The result follows by taking sums as before.
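As a quick numerical sanity check of Proposition 2(i), one can compare the closed form for $E(S_n)$ with the direct sum of the expectations $E(X_i)$ given by Lemma 1(i); the parameter values in this sketch are illustrative assumptions.

```python
import numpy as np

p, p01, p10 = 0.3, 0.2, 0.5
x, y = p10 / (p01 + p10), p01 / (p01 + p10)
lam2 = 1 - p01 - p10

n = 25
E_Xi = y - lam2 ** np.arange(n) * (y - p)      # E(X_i), i = 1, ..., n (Lemma 1)
ESn_direct = E_Xi.sum()                        # sum of the E(X_i)
ESn_closed = n * y - (y - p) * (1 - lam2**n) / (1 - lam2)  # Proposition 2(i)
print(ESn_direct, ESn_closed)                  # the two values agree
```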

The expression for the variance can be simplified asymptotically. We use the notation $u(n) \sim c\,v(n)$ to indicate that $u(n)/v(n) \to c$.

Corollary 3. As $n \to \infty$ we have
(i) $E(S_n) \sim ny$ and $Var(S_n) \sim nxy(1 + \lambda_2)/(1 - \lambda_2)$.
(ii) $E(S_n) - ny \to (p - y)/(1 - \lambda_2)$ and $Var(S_n) - nxy(1 + \lambda_2)/(1 - \lambda_2) \to c$, where

$$c = \frac{A}{1 - \lambda_2} + \frac{B}{1 - \lambda_2^2} + \frac{C\lambda_2}{(1 - \lambda_2)^2}.$$

2.1.2 Distribution of $S_n$

In this section we determine $p_n(k) = P(S_n = k)$. It is convenient to condition on $X_n$ and to this end we define $p^i_n(k) = P(S_n = k, X_n = i)$, $i = 0, 1$. Obviously $p_n(k) = p^0_n(k) + p^1_n(k)$. Note that $p^1_1(1) = p$, $p^0_1(0) = q$ and that $p^1_1(0) = p^0_1(1) = 0$. In the next result we show how to calculate $p^i_n(k)$ recursively.

Lemma 4. For $n \geq 1$ we have
(i) $p^0_{n+1}(k) = p_{0,0}\,p^0_n(k) + p_{1,0}\,p^1_n(k)$;
(ii) $p^1_{n+1}(k) = p_{0,1}\,p^0_n(k - 1) + p_{1,1}\,p^1_n(k - 1)$.

Proof. We have $p^0_{n+1}(k) = I + II$, where

$$I = P(S_{n+1} = k, X_{n+1} = 0, X_n = 0), \qquad II = P(S_{n+1} = k, X_{n+1} = 0, X_n = 1).$$

Clearly $I = P(S_n = k, X_{n+1} = 0, X_n = 0)$. Now note that, by the Markov property,

$$I = P(X_{n+1} = 0 \mid S_n = k, X_n = 0)\,p^0_n(k) = p_{0,0}\,p^0_n(k).$$

In a similar way we find that $II = p_{1,0}\,p^1_n(k)$ and the first result follows. The second result can be proved in a similar way.

For small values of $n$ we can use Lemma 4 to obtain the p.d. of $S_n$. It does not seem to be easy to obtain an explicit expression for $p_n(k)$. For an alternative approach we refer to the end of Section 2.2 below.
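Lemma 4 translates directly into a dynamic-programming computation of the p.d. of $S_n$. A sketch of this recursion (the transition probabilities are illustrative assumptions):

```python
import numpy as np

def pmf_Sn(n, p, p00, p01, p10, p11):
    """Return (p_n(0), ..., p_n(n)) via the recursion of Lemma 4."""
    q = 1.0 - p
    p0 = np.zeros(n + 1)   # p0[k] = P(S_m = k, X_m = 0)
    p1 = np.zeros(n + 1)   # p1[k] = P(S_m = k, X_m = 1)
    p0[0], p1[1] = q, p    # starting values for m = 1
    for _ in range(n - 1):
        new0 = p00 * p0 + p10 * p1                  # Lemma 4(i)
        new1 = np.zeros(n + 1)
        new1[1:] = p01 * p0[:-1] + p11 * p1[:-1]    # Lemma 4(ii): shift k -> k-1
        p0, p1 = new0, new1
    return p0 + p1

pmf = pmf_Sn(10, p=0.3, p00=0.8, p01=0.2, p10=0.5, p11=0.5)
print(pmf.round(4), "total =", pmf.sum())   # probabilities sum to 1
```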

2.1.3 Central limit theorem

For fixed $P$ and $(q, p)$ and large values of $n$ we can approximate the p.d. of $S_n$ by a normal distribution. We prove the following central limit theorem.

Theorem 5. As $n \to \infty$ we have

$$\frac{S_n - E(S_n)}{\sqrt{Var(S_n)}} \stackrel{d}{\Rightarrow} Z \sim N(0, 1).$$

Proof. We prove the theorem using generating functions. For $|z| \leq 1$ and $i = 0, 1$, let $\phi^i_n(z) = \sum_{k=0}^n p^i_n(k)z^k$ and let $\phi_n(z) = \phi^0_n(z) + \phi^1_n(z)$. Furthermore, let $\Phi_n(z) = (\phi^0_n(z), \phi^1_n(z))$ and note that $\Phi_1(z) = (q, pz)$. Using Lemma 4 we have

$$\phi^0_{n+1}(z) = p_{0,0}\phi^0_n(z) + p_{1,0}\phi^1_n(z),$$
$$\phi^1_{n+1}(z) = p_{0,1}z\phi^0_n(z) + p_{1,1}z\phi^1_n(z).$$

Switching to matrix notation, we find that $\Phi_{n+1}(z) = \Phi_n(z)A(z)$, where

$$A(z) = \begin{pmatrix} p_{0,0} & p_{0,1}z \\ p_{1,0} & p_{1,1}z \end{pmatrix}.$$

It follows that $\Phi_{n+1}(z) = \Phi_1(z)A^n(z)$. We can find $A^n(z)$ by using the eigenvalues of $A(z)$, and to this end we use the characteristic equation

$$|A(z) - \lambda I| = \lambda^2 - \lambda(p_{0,0} + p_{1,1}z) + (p_{0,0}p_{1,1} - p_{1,0}p_{0,1})z = 0.$$

Solving this equation gives

$$\lambda_1(z) = \frac{p_{0,0} + p_{1,1}z + \sqrt{D}}{2}, \qquad \lambda_2(z) = \frac{p_{0,0} + p_{1,1}z - \sqrt{D}}{2},$$

where $D = (p_{0,0} - p_{1,1}z)^2 + 4p_{0,1}p_{1,0}z$. Now define $U = U(z)$ and $W = W(z)$ by the equations $I = A^0(z) = U + W$ and $A(z) = \lambda_1(z)U + \lambda_2(z)W$. We find that

$$U(z) = \frac{A(z) - \lambda_2(z)I}{\sqrt{D}} \quad \text{and} \quad W(z) = \frac{\lambda_1(z)I - A(z)}{\sqrt{D}}.$$

Using the theorem of Cayley and Hamilton we obtain for $n \geq 0$ that $A^n(z) = \lambda_1^n(z)U + \lambda_2^n(z)W$. Note that as $z \to 1$ we have $\lambda_1(z) \to 1$ and $\lambda_2(z) \to 1 - p_{0,1} - p_{1,0} = \lambda$, where by assumption we have $|\lambda| < 1$. Since $|\lambda_2(z)/\lambda_1(z)| \to |\lambda|$ as $z \to 1$, for all $\varepsilon$ sufficiently small we can find $\delta = \delta(\varepsilon)$ such that $0 < \delta < 1$ and

such that $|\lambda_2(z)/\lambda_1(z)| \leq |\lambda| + \varepsilon < 1$ for all $z$ such that $|z - 1| \leq \delta$. We conclude that for any sequence $z_n \to 1$ we have $|\lambda_2(z_n)/\lambda_1(z_n)|^n \to 0$. From this it follows that $A^n(z_n)/\lambda_1^n(z_n) \to U(1)$, where both rows of $U(1)$ are equal to $(x, y)$. Using $\phi_{n+1}(z) = \Phi_1(z)A^n(z)(1, 1)^t$, it follows that $\phi_{n+1}(z_n)/\lambda_1^n(z_n) \to 1$.

Now we discuss the asymptotic behaviour of $\lambda_1(z)$ as $z \to 1$. For convenience we write $\lambda(z) = \lambda_1(z)$. Note that $\lambda(z)$ satisfies $\lambda(1) = 1$ and the characteristic equation

$$\lambda^2(z) - \lambda(z)(p_{0,0} + p_{1,1}z) + (p_{0,0}p_{1,1} - p_{1,0}p_{0,1})z = 0.$$

Taking derivatives with respect to $z$ we find that

$$2\lambda(z)\lambda'(z) - \lambda'(z)(p_{0,0} + p_{1,1}z) - \lambda(z)p_{1,1} + p_{0,0}p_{1,1} - p_{1,0}p_{0,1} = 0$$

and

$$2\lambda(z)\lambda''(z) + 2(\lambda'(z))^2 - 2\lambda'(z)p_{1,1} - \lambda''(z)(p_{0,0} + p_{1,1}z) = 0.$$

Replacing $z$ by $z = 1$ we find that

$$2\lambda(1)\lambda'(1) - \lambda'(1)(p_{0,0} + p_{1,1}) - \lambda(1)p_{1,1} + p_{0,0}p_{1,1} - p_{1,0}p_{0,1} = 0,$$
$$2\lambda(1)\lambda''(1) + 2(\lambda'(1))^2 - 2\lambda'(1)p_{1,1} - \lambda''(1)(p_{0,0} + p_{1,1}) = 0.$$

Since $\lambda(1) = 1$, straightforward calculations show that

$$\lambda'(1) = \frac{p_{0,1}}{p_{0,1} + p_{1,0}} = y, \qquad \lambda''(1) = \frac{2xy(p_{1,1} - p_{0,1})}{p_{0,1} + p_{1,0}}.$$

Using the first terms of a Taylor expansion, it follows that

$$\lambda(z) = 1 + y(z - 1) + \tfrac{1}{2}\lambda''(1)(z - 1)^2(1 + o(1)).$$

Using twice the expansion $\log(x) = -(1 - x) - (1 - x)^2(1 + o(1))/2$, we obtain

$$\log(\lambda(z)) - y\log(z) \sim \frac{(1 - z)^2}{2}\left(\lambda''(1) + y - y^2\right).$$

It follows that

$$\frac{\log(\lambda(z)z^{-y})}{(1 - z)^2} \to \frac{xy(p_{1,1} - p_{0,1})}{p_{0,1} + p_{1,0}} + \frac{xy}{2} = \frac{\sigma^2}{2},$$

where, after simplifying,

$$\sigma^2 = xy\,\frac{p_{1,1} + p_{0,0}}{p_{0,1} + p_{1,0}} = xy\,\frac{1 + \lambda}{1 - \lambda}.$$

To complete the proof of Theorem 5 we replace $z$ by $z_n = z^{1/\sqrt{n}}$. In this case we find that $\log(\lambda(z_n)z_n^{-y}) \sim \frac{\sigma^2}{2n}(\log(z))^2$ and then that $z_n^{-yn}\phi_{n+1}(z_n) \to \exp(\sigma^2(\log(z))^2/2)$. It follows that $(S_{n+1} - ny)/\sqrt{n} \stackrel{d}{\Rightarrow} W$, where $W \sim N(0, \sigma^2)$. Using Corollary 3, the proof of the theorem is finished.
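A Monte Carlo sketch illustrating this central limit theorem: simulate many paths, standardize $S_n$ by $ny$ and $n\sigma^2$, and check that the first two empirical moments are close to those of $N(0, 1)$. Parameter values and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
p, p01, p10 = 0.3, 0.2, 0.5
x, y = p10 / (p01 + p10), p01 / (p01 + p10)
lam = 1 - p01 - p10
sigma2 = x * y * (1 + lam) / (1 - lam)     # asymptotic variance per step

n, reps = 1000, 2000
z = np.empty(reps)
for r in range(reps):
    X = np.empty(n, dtype=int)
    X[0] = rng.random() < p
    u = rng.random(n)
    for i in range(1, n):
        X[i] = u[i] < (p01 if X[i - 1] == 0 else 1 - p10)
    z[r] = (X.sum() - n * y) / np.sqrt(n * sigma2)

print(z.mean(), z.var())   # should be close to 0 and 1
```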

From Theorem 5 we obtain the following result.

Corollary 6. Let $Z \sim N(0, 1)$. As $n \to \infty$ we have the following results:
(i) $(S_n - ny)/(\sigma\sqrt{n}) \stackrel{d}{\Rightarrow} Z$, where $\sigma^2 = xy(1 + \lambda_2)/(1 - \lambda_2)$;
(ii) if $p_{0,1} = p_{1,1} = p$, we have $(S_n - pn)/\sqrt{npq} \stackrel{d}{\Rightarrow} Z$;
(iii) if $P = P(p, \rho)$, we have $(S_n - pn)/(\sigma\sqrt{n}) \stackrel{d}{\Rightarrow} Z$, where $\sigma^2 = pq(1 + \rho)/(1 - \rho)$.

2.1.4 Poisson approximation

In the i.i.d. case it is well known how we can approximate a binomial distribution by a suitable Poisson distribution. The same can be done in the more general Markov setting. In what follows and also in Section 2.2 we shall use the following notations. We use $U(a)$ to denote a Bernoulli r.v. with $P(U(a) = 1) = a$ $(0 < a < 1)$, $V(a)$ is a Poisson($a$)-variable $(a > 0)$ and $G(a)$ is a geometric r.v. with $P(G(a) = k) = (1 - a)a^{k-1}$ $(k \geq 1$, $0 < a < 1)$. Note that

$$E(z^{U(a)}) = (1 - a) + az, \qquad E(z^{V(a)}) = \exp(-a + az), \qquad E(z^{G(a)}) = \frac{(1 - a)z}{1 - az}.$$

A compound Poisson distribution with generator $G(a)$ is defined as follows. Let $G_0(a) = 0$ and for $i \geq 1$ let $G_i(a)$ denote i.i.d. copies of $G(a)$. Independent of the $G_i(a)$, let $V(b)$ denote a Poisson($b$)-variable. Now consider the new r.v. $B(V(b)) = \sum_{i=0}^{V(b)} G_i(a)$. Clearly we have

$$E(z^{B(V(b))}) = \exp(-b + bE(z^{G(a)}))$$

and we say that $B(V(b))$ has a compound Poisson distribution with generator $G(a)$.

In the next result all r.v. involved in the limiting distribution are independent, and we use the notations introduced above. In each case we take limits as $n \to \infty$.

Theorem 7. Suppose that $np_{0,1} \to a > 0$ and $p_{1,1} \to c$ $(0 \leq c < 1)$.
(i) If $c = 0$, then $S_n \stackrel{d}{\Rightarrow} U(p) + V(a)$.
(ii) If $0 < c < 1$, then $S_n \stackrel{d}{\Rightarrow} U(p)G(c) + B(V(a))$.

Proof. Using the notations as in the proof of Theorem 5, we have $\phi_{n+1}(z) = \Phi_1(z)A^n(z)(1, 1)^t$, where $\Phi_1(z) = (q, pz)$ and $A^n(z) = \lambda_1^n(z)U(z) + \lambda_2^n(z)W(z)$. Recall that $U(z) = (A(z) - \lambda_2(z)I)/\sqrt{D}$ and $W(z) = (\lambda_1(z)I - A(z))/\sqrt{D}$, and that

$$\lambda_1(z) = \frac{p_{0,0} + p_{1,1}z + \sqrt{D}}{2}, \qquad \lambda_2(z) = \frac{p_{0,0} + p_{1,1}z - \sqrt{D}}{2},$$

where $D = (p_{0,0} - p_{1,1}z)^2 + 4p_{0,1}p_{1,0}z$. Some straightforward calculations show that

$$1 - \lambda_1(z) = \frac{2p_{0,1}(1 - z)}{p_{0,1} + p_{1,0} + p_{1,1}(1 - z) + \sqrt{D}}.$$

Since by assumption $np_{0,1} \to a$, we have $p_{0,1} \to 0$ and $p_{0,0} \to 1$. By assumption we have $p_{1,1} \to c$ and hence also $p_{1,0} \to 1 - c$. It follows that $D \to (1 - cz)^2$. Using $1 - cz > 0$ we readily see that $n(1 - \lambda_1(z)) \to a(1 - z)/(1 - cz)$. From this it follows that

$$\lambda_1^n(z) \to \beta(z) := \exp\left(-a\,\frac{1 - z}{1 - cz}\right).$$

Next we consider $\lambda_2^n(z)$. Clearly we have $\lambda_2(z) = \lambda_2(1)z/\lambda_1(z)$. Our assumptions imply that $\lambda_2(1) = p_{0,0} + p_{1,1} - 1 \to c < 1$. It follows that $(\lambda_2(1)z)^n \to 0$ and hence also that $\lambda_2^n(z) \to 0$. Using $\lambda_2(z) \to cz$, we obtain that

$$U(z) \to U = \begin{pmatrix} 1 & 0 \\ (1 - c)/(1 - cz) & 0 \end{pmatrix}$$

and hence also that $A^n(z) \to \beta(z)U$. It follows that

$$\phi_{n+1}(z) \to (q, pz)\,\beta(z)\,U\,(1, 1)^t,$$

so that

$$\phi_{n+1}(z) \to L(z) = \beta(z)\left(q + pz\,\frac{1 - c}{1 - cz}\right).$$

It remains to identify the limit $L(z)$. If $c = 0$, we have $L(z) = (q + pz)\exp(-a(1 - z))$. Now the interpretation is clear. Using the notations introduced before, let $U(p)$ and $V(a)$ denote independent r.v. Clearly $L(z) = E(z^{V(a)}z^{U(p)})$. If $c = 0$, we conclude that $S_n \stackrel{d}{\Rightarrow} V(a) + U(p)$.

If $0 < c < 1$, we find $L(z) = (q + pK(z))\exp(-a(1 - K(z)))$, where $K(z) = (1 - c)z/(1 - cz)$. Using the notations introduced before, we see that $K(z) = E(z^{G(c)})$. Let $G_0(c) = 0$, let $G_i(c)$ denote independent copies of $G(c)$ and, independent of the other variables, let $V(a)$ denote a Poisson variable with parameter $a$. The random sum $B(V(a)) = \sum_{i=0}^{V(a)} G_i(c)$ is well-defined and has generating function $E(z^{B(V(a))}) = \exp(-a(1 - K(z)))$. Finally, let $U(p)$ denote a Bernoulli variable, independent of all other variables. We conclude that $L(z) = E(z^{U(p)G(c) + B(V(a))})$ and as a consequence we find that $S_n \stackrel{d}{\Rightarrow} U(p)G(c) + B(V(a))$.

Remark. If $c = 1$, then $L(z) = q\exp(-a)$, which is the generating function of a degenerate variable.

As a special case we have the following corollary.

Corollary 8 (Wang [9]). Suppose that $P = P(p, \rho)$.
(i) If $np \to u > 0$ and $\rho \to 0$, then $S_n \stackrel{d}{\Rightarrow} V(u)$.
(ii) If $np \to u > 0$ and $\rho \to v$ $(0 < v < 1)$, then $S_n \stackrel{d}{\Rightarrow} B(V(u(1 - v)))$.
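The limit law of Theorem 7(ii) is straightforward to sample from its representation $U(p)G(c) + B(V(a))$. In the sketch below, the values of $p$, $a$ and $c$ are illustrative assumptions; note that numpy's geometric distribution with success probability $1 - c$ has exactly the pmf $P(G(c) = k) = (1 - c)c^{k-1}$, $k \geq 1$.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_limit(p, a, c, reps=100_000):
    """Sample U(p) G(c) + B(V(a)) as in Theorem 7(ii)."""
    # U(p) * G(c): a Bernoulli gate times one geometric variable
    out = rng.binomial(1, p, reps) * rng.geometric(1 - c, reps)
    # B(V(a)): a Poisson number of i.i.d. geometric summands
    N = rng.poisson(a, reps)
    for i in np.nonzero(N)[0]:
        out[i] += rng.geometric(1 - c, N[i]).sum()
    return out

s = sample_limit(p=0.3, a=2.0, c=0.4)
print(np.bincount(s, minlength=8)[:8] / len(s))   # empirical pmf of the limit
```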

2.2 Generalized negative binomial distribution

In this section we invert $S_n$ and for each $r \geq 1$ we define $Y(r)$ as the number of experiments until the $r$-th success. Since $Y(r) = \min\{n : S_n = r\}$, we have $P(S_n \leq r) = P(Y(r + 1) \geq n + 1)$ and $P(Y(r) = n) = P(S_n = r, X_n = 1)$. The last quantity has been studied before. Adapting the notations, for $n \geq r$ and $i = 0, 1$ we set $p^i_n(r) = P(S_n = r, X_n = i)$ and $p_n(r) = P(S_n = r)$. The corresponding generating functions will be denoted by $\phi^i_r(z)$ and $\phi_r(z)$. Using similar methods as before, for $n \geq r$ we obtain that

$$p^1_n(r) = p_{1,1}\,p^1_{n-1}(r - 1) + p_{0,1}\,p^0_{n-1}(r - 1),$$
$$p^0_n(r) = p_{0,0}\,p^0_{n-1}(r) + p_{1,0}\,p^1_{n-1}(r).$$

Using generating functions, this leads to

$$\phi^1_r(z) = zp_{1,1}\phi^1_{r-1}(z) + zp_{0,1}\phi^0_{r-1}(z), \qquad \phi^0_r(z) = zp_{0,0}\phi^0_r(z) + zp_{1,0}\phi^1_r(z).$$

We find that

$$\phi^0_r(z) = \frac{zp_{1,0}}{1 - p_{0,0}z}\,\phi^1_r(z), \qquad \phi^1_r(z) = zk(z)\phi^1_{r-1}(z),$$

where $k(z) = p_{1,1} + zp_{0,1}p_{1,0}/(1 - p_{0,0}z)$. It follows that

$$\phi^1_r(z) = (zk(z))^{r-1}\phi^1_1(z). \tag{1}$$

Using $\phi_r(z) = \phi^0_r(z) + \phi^1_r(z)$ we also find that

$$\phi_r(z) = \left(1 + \frac{zp_{1,0}}{1 - p_{0,0}z}\right)\phi^1_r(z) = u(z)(zk(z))^{r-1}\phi^1_1(z), \tag{2}$$

where $u(z) = 1 + zp_{1,0}/(1 - p_{0,0}z)$. It remains to determine $\phi^1_1(z) = \sum_{n=1}^\infty P(S_n = 1, X_n = 1)z^n$. Clearly we have

$$P(S_n = 1, X_n = 1) = p \ \text{ if } n = 1, \qquad P(S_n = 1, X_n = 1) = qp^{n-2}_{0,0}p_{0,1} \ \text{ if } n \geq 2.$$

It follows that

$$\phi^1_1(z) = z\left(p + q\,\frac{p_{0,1}z}{1 - p_{0,0}z}\right).$$

This result can be interpreted in the following way. We use the notations introduced at the beginning of Section 2.1.4. Assuming that $U(s)$ and $G(t)$ are independent r.v., we have

$$E(z^{G(t)}) = \frac{z(1 - t)}{1 - tz} \quad \text{and} \quad E(z^{U(s)G(t)}) = (1 - s) + s\,\frac{z(1 - t)}{1 - tz}. \tag{3}$$

Using these notations we can identify $k(z)$ and $\phi^1_1(z)$. Using (3) we obtain that

$$k(z) = E(z^{U(p_{1,0})G(p_{0,0})}), \tag{4}$$
$$\phi^1_1(z) = zE(z^{U(q)G(p_{0,0})}). \tag{5}$$

Using (1), (4) and (5) we obtain the following result.

Theorem 9. (i) We have $Y(1) \stackrel{d}{=} 1 + U(q)G(p_{0,0})$.
(ii) For $r \geq 2$, we have

$$Y(r) \stackrel{d}{=} \sum_{i=1}^{r-1}\left(1 + U_i(p_{1,0})G_i(p_{0,0})\right) + 1 + U(q)G(p_{0,0}),$$

where all r.v. $U(q)$, $U_i(p_{1,0})$, $G(p_{0,0})$ and $G_i(p_{0,0})$ involved are independent.

Using $\phi^1_r(z) = zk(z)\phi^1_{r-1}(z)$ it also follows for $r \geq 2$ that $Y(r) \stackrel{d}{=} Y(r - 1) + 1 + U(p_{1,0})G(p_{0,0})$ and that $Y(r) - Y(r - 1) \stackrel{d}{=} 1 + U(p_{1,0})G(p_{0,0})$, where $Y(r - 1)$, $U(p_{1,0})$ and $G(p_{0,0})$ are independent r.v. Together with $Y(1) \stackrel{d}{=} 1 + U(q)G(p_{0,0})$ we obtain the following probabilistic interpretation of the formulas. At the start, the first value is either a success or a failure. If we start with a failure (which happens with probability $q$), we have to wait geometrically long until we have a first success. Given a success position in the sequence, either the next result is a success, or the next result is a failure (which happens with probability $p_{1,0}$) and then we have to wait geometrically long until we have a new success. Although we start from a sequence of (Markov-)dependent variables, it turns out that the times between consecutive successes are independent variables!
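A simulation sketch that checks the representation of Theorem 9: draw $Y(r)$ once from the sum of independent terms and once by running the chain until the $r$-th success. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
p, p00, p01, p10, p11 = 0.3, 0.8, 0.2, 0.5, 0.5
r, reps = 4, 50_000

# Theorem 9: Y(r) = sum_{i=1}^{r-1} (1 + U_i(p10) G_i(p00)) + 1 + U(q) G(p00)
U = rng.binomial(1, p10, (reps, r - 1))
G = rng.geometric(1 - p00, (reps, r - 1))   # P(G = k) = (1 - p00) p00^(k-1)
first = 1 + rng.binomial(1, 1 - p, reps) * rng.geometric(1 - p00, reps)
Y_rep = (1 + U * G).sum(axis=1) + first

def chain_Y(r):
    """Run the chain until the r-th success; return the number of trials."""
    xi = int(rng.random() < p)      # X_1
    n, s = 1, xi
    while s < r:
        n += 1
        xi = int(rng.random() < (p01 if xi == 0 else p11))
        s += xi
    return n

Y_dir = np.array([chain_Y(r) for _ in range(reps)])
print(Y_rep.mean(), Y_dir.mean())   # means agree (similarly for variances)
```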

Now we take a closer look at $p_n(r) = P(S_n = r)$ and use (2). Note that $\phi_r(1) = u(1) = 1/y$ and observe that

$$yu(z) = y + \frac{yzp_{1,0}}{1 - p_{0,0}z} = y + \frac{xzp_{0,1}}{1 - p_{0,0}z} = E(z^{U(x)G(p_{0,0})}).$$

It follows that $y\phi_r(z) = yu(z)(zk(z))^{r-1}\phi^1_1(z)$ and hence that

$$yP(S_n = r) = P(U(x)G(p_{0,0}) + Y(r) = n),$$

where $U(x)$, $G(p_{0,0})$ and $Y(r)$ are independent.

In the next result we formulate the central limit theorem for $Y(r)$ and a Poisson-type approximation. The results easily follow from the representation obtained in Theorem 9.

Corollary 10. (i) As $r \to \infty$, we have

$$\frac{Y(r) - r/y}{\sqrt{r\,p_{1,0}(p_{0,0} + p_{1,1})/p_{0,1}^2}} \stackrel{d}{\Rightarrow} Z \sim N(0, 1).$$

(ii) If $rp_{1,0} \to a$ and $p_{0,0} \to c$ $(0 \leq c < 1)$, then $Y(r) - r \stackrel{d}{\Rightarrow} B(V(a))$.

Remark. For $r \geq 0$, let $M(r) = \max\{n : S_n \leq r\}$ and $K(r) = \max\{n : Y(n) \leq r\}$. Clearly $M(r) = n$ if and only if $Y(r + 1) = n + 1$, so that $Y(r + 1) = M(r) + 1$. Using $\{M(r) \geq n\} = \{S_n \leq r\} = \{Y(r + 1) \geq n + 1\}$ and $\{K(r) \geq n\} = \{Y(n) \leq r\}$, it follows that $\{K(n) \leq r\} = \{Y(r + 1) \geq n + 1\} = \{S_n \leq r\}$. As a consequence, we find that $K(n) \stackrel{d}{=} S_n$. Now note that $K(r)$ corresponds to the renewal counting process associated with the sequence $A, A_1, A_2, \ldots$, where $A \stackrel{d}{=} 1 + U(q)G(p_{0,0})$ and where the $A_i$ are i.i.d. random variables with $A_i \stackrel{d}{=} 1 + U(p_{1,0})G(p_{0,0})$. Standard (delayed) renewal theory could now be used to prove the central limit theorem for $S_n$.

3 Concluding remarks

1. Earlier we have observed that for $n \geq 1$, $P(X_n = 1) = y - \lambda_2^{n-1}(y - p)$ and $P(X_n = 0) = x + \lambda_2^{n-1}(y - p)$. From this it is easy to see that for $n, m \geq 1$ we have

$$P(X_{n+m} = 1) = y(1 - \lambda_2^n) + \lambda_2^n P(X_m = 1),$$
$$P(X_{n+m} = 0) = x(1 - \lambda_2^n) + \lambda_2^n P(X_m = 0).$$

Recall that $(x, y)$ was the stationary vector. In what follows we assume that $\lambda_2 > 0$. Now let $B_n \sim U(\lambda_2^n)$ and $B \sim U(y)$ denote Bernoulli r.v. The formulas obtained above show that $X_{n+m} \stackrel{d}{=} (1 - B_n)B + B_n X_m$, where $B_n$

is independent of $B$ and $X_m$. In particular it follows that $\{X_n\}$ satisfies the stochastic difference equation

$$X_{n+1} \stackrel{d}{=} (1 - B_1)B + B_1 X_n.$$

2. A correlated binomial distribution has been introduced and studied in [1], [4], [6], [7]. Examples and applications can be found e.g. in quality control, cf. [5]. One of the models can be described as follows. Let $X, X_1, X_2, \ldots, X_n$ denote i.i.d. Bernoulli variables with $X \sim U(p)$ and let $U(\alpha)$ and $U(\beta)$ denote independent Bernoulli variables, independent of the $X_i$. Now define $Y_i = U(\alpha)X_i + (1 - U(\alpha))U(\beta)$ and the corresponding sum

$$T_n = \sum_{i=1}^n Y_i = U(\alpha)S_n + n(1 - U(\alpha))U(\beta).$$

The following interpretation can be given. In quality control we can decide to check all produced units. The alternative is to check just one unit and then accept or reject all produced units. In the first scenario $S_n$ counts the number of defects or successes. In the second scenario we conclude that we have either $0$ or $n$ defects or successes. Clearly $S_n \sim BIN(n, p)$ and for $T_n$ we have $P(T_n = k) = \alpha P(S_n = k) + (1 - \alpha)P(nU(\beta) = k)$. It follows that

$$P(T_n = k) = \alpha P(S_n = k) \ \text{ for } 1 \leq k \leq n - 1,$$
$$P(T_n = 0) = \alpha P(S_n = 0) + (1 - \alpha)(1 - \beta),$$
$$P(T_n = n) = \alpha P(S_n = n) + (1 - \alpha)\beta.$$

As to $Y_i$ we have $Y_i \stackrel{d}{=} U(\pi)$, where $\pi = \alpha p + (1 - \alpha)\beta$. For the variance $Var(Y_i) = \sigma^2$ we find that

$$\sigma^2 = \alpha(1 - \alpha)(p - \beta)^2 + \alpha p(1 - p) + (1 - \alpha)\beta(1 - \beta).$$

For $i \neq j$ we obtain that

$$Cov(Y_i, Y_j) = \alpha(1 - \alpha)(p - \beta)^2 + (1 - \alpha)\beta(1 - \beta).$$

It follows that $\rho(Y_i, Y_j) = \rho = 1 - \alpha$ if $\beta = p$. If $\beta \neq p$ we find that $\rho(Y_i, Y_j) = \rho = 1/(c + 1)$, where

$$c = \frac{\alpha p(1 - p)}{\alpha(1 - \alpha)(p - \beta)^2 + (1 - \alpha)\beta(1 - \beta)}.$$

We see that $\rho(Y_i, Y_j)$ is the same for all $i \neq j$. Clearly this implies that $Var(T_n) = n(1 + (n - 1)\rho)\sigma^2$. In the next result we formulate two asymptotic results.

Proposition 11. (i) As $n \to \infty$,

$$\frac{T_n - n(1 - U(\alpha))U(\beta) - npU(\alpha)}{\sqrt{np(1 - p)}} \stackrel{d}{\Rightarrow} U(\alpha)Z,$$

where $Z \sim N(0, 1)$.
(ii) If $np \to a > 0$ and $\beta \to 0$, then $T_n \stackrel{d}{\Rightarrow} Y$, where $Y \stackrel{d}{=} U(\alpha)V(a)$ and $U(\alpha)$ and $V(a)$ are independent.

3. From the physical point of view it seems reasonable to study random vectors $(X_1, X_2, \ldots, X_n)$ with a joint probability distribution of the following form: for $x_i = 0, 1$ we have

$$P(X_i = x_i,\ i = 1, 2, \ldots, n) = C\exp\left(\sum_{i=1}^n a_i x_i + \theta\sum_{i=1}^n\sum_{j=1}^n a_{i,j} x_i x_j\right).$$

The second term represents the interaction between particles. If $\theta = 0$, the $X_i$ are independent and no interaction appears.

4. In our next paper [8] we study runs, singles and the waiting time until the first run of $r$ consecutive successes for Bernoulli-sequences that are generated by a Markov chain.

4 References

[1] P. Altham, Two generalizations of the binomial distribution. Applied Statistics 27 (1978).
[2] A.W.F. Edwards, The meaning of binomial distribution. Nature, London 186 (1960), 1074.
[3] W. Feller, "An Introduction to Probability Theory and its Applications", Vol. I (3rd edition). John Wiley and Sons, New York, 1968.
[4] L.L. Kupper and J.K. Haseman, The use of the correlated binomial model for the analysis of certain toxicological experiments. Biometrics 34 (1978).
[5] C.D. Lai, K. Govindaraju and M. Xie, Effects of correlation on fraction non-conforming statistical process control procedures. Journal of Applied Statistics 25(4) (1998).
[6] R.W. Madsen, Generalized binomial distributions. Comm. in Statistics: Theory and Methods 22(1) (1993).
[7] S.A. Mingoti, A note on sample size required in sequential tests for the generalized binomial distribution. Journal of Applied Statistics 30(8) (2003).
[8] E. Omey and S. Van Gulck, Runs, singles and waiting times for Bernoulli trials governed by a Markov chain. Forthcoming (2008).

[9] Y.H. Wang, On the limit of the Markov binomial distribution. J. Appl. Prob. 18 (1981).

Acknowledgement. The authors are grateful to the referees for their detailed comments and useful suggestions to improve the paper.

E. Omey and S. Van Gulck
EHSAL
Stormstraat 2, 1000 Brussels, Belgium
{edward.omey;stefan.vangulck}@ehsal.be

J. Santos
Public University of Navarre
31006 Pamplona, Spain
santos@si.unavarra.es
