EXTREME VALUE DISTRIBUTIONS FOR RANDOM COUPON COLLECTOR AND BIRTHDAY PROBLEMS


Lars Holst
Royal Institute of Technology
September 6, 2000

Abstract

Take n independent copies of a strictly positive random variable X and divide each copy by the sum of the copies, thus obtaining n random probabilities summing to one. These probabilities are used in independent multinomial trials with n outcomes. Let N_n and N_n^* be the numbers of trials needed until each outcome, respectively some outcome, has occurred at least c times. By embedding the sampling procedure in a Poisson point process, the distributions of N_n and N_n^* can be expressed using extremes of independent identically distributed random variables. Using this, asymptotic distributions as n \to \infty are obtained from classical extreme value theory. The limits are determined by the behaviour of the Laplace transform of X close to the origin or at infinity. Some examples are studied in detail.

Keywords: Poisson embedding; point process; Polya urn; inverse Gaussian; lognormal; gamma distribution; repeat time

AMS 1991 subject classification: primary 60G70, secondary 60C05

1 Introduction

Consider a random experiment with n outcomes having probabilities p_1, ..., p_n. Independent trials are performed until each outcome has occurred at least c times. Let N_n be the number of trials needed, and let N_n^* < N_n be the number of trials when some unspecified outcome has occurred c times.

Dept. Mathematics, KTH, SE-100 44 Stockholm, Sweden. lholst@math.kth.se

To find the distribution of N_n for c = 1 and p_1 = ... = p_n = 1/n is usually called the coupon collector's problem. The approach by embedding in Poisson point processes given in Section 2 below gives the relation

    \sum_{i=1}^{N_n} Z_i = n \max(Y_1, ..., Y_n),

where the random variables N_n, Z_1, Z_2, ... are independent, the Z's being Exp(1) (density e^{-z} for z > 0), and the Y's are independent and Exp(1). This implies

    E N_n = n E \max(Y_1, ..., Y_n) = n \sum_{j=1}^{n} \frac{1}{j} \sim n \log n,  n \to \infty,

and the limit distribution

    \lim_{n \to \infty} P(N_n/n - \log n \le x) = e^{-e^{-x}},

see Section 4.1 below.

To find the distribution of N_n^* for c = 2 and equal p's is the birthday problem. In this case the embedding approach gives

    \sum_{i=1}^{N_n^*} Z_i = n \min(Y_1, ..., Y_n),

where N_n^*, Z_1, Z_2, ... are independent, the Z's Exp(1), and the Y's independent and \Gamma(2,1) (we denote by \Gamma(c,1) a gamma distribution with density y^{c-1} e^{-y}/(c-1)! for y > 0). We have

    E N_n^* = n E \min(Y_1, ..., Y_n) = n \int_0^\infty (1+y)^n e^{-ny} dy \sim \sqrt{\pi n/2},  n \to \infty,

and the limit distribution

    \lim_{n \to \infty} P(N_n^*/\sqrt{2n} \le x) = 1 - e^{-x^2},  x > 0,

see Section 5.1 below.

Combinatorial approaches to the coupon collector's problem and the birthday problem have a long history and can be found in many texts, see e.g. Feller (1968) or Blom, Holst and Sandell (1994). In Holst (1995) it is proved that the distribution functions of N_n^* can be partially ordered in the p's by Schur-convexity and that N_n^* is stochastically largest in the symmetric case. A slight modification of the argument shows partial ordering in the collector problem and that N_n is stochastically smallest in the symmetric case.
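These two classical limits are easy to check by direct simulation. The following sketch (plain Python; function names, sample sizes and tolerances are our own choices, not from the paper) draws equal-probability coupons and birthdays and compares the normalized sample means with the Gumbel mean (Euler's constant \gamma \approx 0.5772) and the Weibull mean \sqrt{\pi}/2 \approx 0.886:

```python
import math
import random

random.seed(1)

def coupon_time(n):
    """Draws until every one of n equally likely outcomes has appeared (c = 1)."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

def birthday_time(n):
    """Draws until some outcome appears for the second time (c = 2)."""
    seen, draws = set(), 0
    while True:
        draws += 1
        b = random.randrange(n)
        if b in seen:
            return draws
        seen.add(b)

n, reps = 500, 400
# Coupon collector: N_n/n - log n should fluctuate around Euler's constant.
gumbel_mean = sum(coupon_time(n) / n - math.log(n) for _ in range(reps)) / reps

m, reps2 = 1000, 3000
# Birthday problem: N_n^*/sqrt(2n) should have mean close to sqrt(pi)/2.
weibull_mean = sum(birthday_time(m) for _ in range(reps2)) / reps2 / math.sqrt(2 * m)

print(round(gumbel_mean, 3), round(weibull_mean, 3))
```

With a few hundred replications the Monte Carlo error is well below the gap to any other plausible constant, so the check is informative despite its simplicity.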

Papanicolaou, Kokolakis and Boneh (1998) studied a random coupon collector problem for the case c = 1 by letting the p's be random and given by

    \frac{X_1}{X_1 + ... + X_n}, ..., \frac{X_n}{X_1 + ... + X_n},

where X_1, X_2, ... are independent identically distributed positive random variables. Applications of the model were given and asymptotic results for E N_n as n \to \infty were derived. Note that X_1 = ... = X_n = 1 gives the classical case.

In our paper asymptotic results are obtained for the random coupon collector problem, both for the distribution and for the mean of N_n for c \ge 1, generalizing those of Papanicolaou et al (1998) for the mean. We prove our results by embedding in Poisson point processes. A similar approach is used in Holst (1995) to study birthday problems. By this device distributional problems on N_n are transformed so that classical extreme value theory for independent identically distributed random variables can be applied, cf. Resnick (1987). In a similar way we study N_n^* for random p's given as above. Other recent papers using Poisson embedding on problems of a similar flavour as ours are Steinsaltz (1999) and Camarri and Pitman (2000).

In the following X_1, X_2, ... denote independent copies of a strictly positive random variable X with mean \mu = E X < \infty. We will see that the limit behaviour of N_n as n \to \infty is determined by the behaviour of the distribution function F_X(x) = P(X \le x) for small x, or equivalently by the behaviour of the Laplace transform g_X(s) = E e^{-sX} for large s. The limit behaviour of N_n^* is determined by the behaviour of g_X(s) for small s.

The organization of the paper is as follows. In Section 2 the embedding of N_n in a Poisson point process is constructed. Using this, an expression for E N_n is derived. In Section 3 extreme value distributions of Fréchet type \exp(-y^{-\alpha}) occur as limiting distributions of N_n, and an example with the gamma distribution is analyzed. In Section 4 extremes of Gumbel type \exp(-e^{-y}) are considered; examples discussed involve X having one-point, inverse Gaussian and lognormal distributions. In Section 5 birthday problems are studied and limit distributions of Weibull type 1 - \exp(-y^{\alpha}) are obtained.

2 Embedding and E N_n

Let \Pi be a Poisson point process with intensity one in the first quadrant of the plane. Independent of \Pi let X_1, X_2, ... be independent identically distributed strictly positive random variables with finite mean \mu. Introduce random strips and set

    I_i(t) = \Pi \cap \{(x, s) : X_1 + ... + X_{i-1} < x \le X_1 + ... + X_i,  0 < s \le t\},

for i = 1, 2, ... and t > 0. Here I_i(t) is the set of points of \Pi in the i:th strip up to time t. Let |I_i(t)| denote the number of these points. As \Pi is a Poisson process with intensity one we have

    \min\{t : |I_i(t)| = c\} = Y_i/X_i,

where the independent random variables Y_1, Y_2, ... are \Gamma(c,1) and independent of X_1, X_2, .... The first time the first n strips all contain at least c points can be written

    M_n = \max(Y_1/X_1, ..., Y_n/X_n).

Given X_1, ..., X_n, the projection on the s-axis of the points in these strips is a Poisson process with intensity \sum_1^n X_j. The total number of points in the n strips up to time M_n can be identified with N_n, because the probability that a point occurs in the i:th strip is X_i/\sum_1^n X_j and the points are independent of each other. Thus with independent Z_1, Z_2, ... all being Exp(1) and independent of N_n, we have the basic relation:

    \sum_{i=1}^{N_n} Z_i = M_n \sum_{j=1}^{n} X_j.

Using this, different quantities of the distribution of N_n can be expressed in the random variables X_1, X_2, ... and Y_1, Y_2, ....

Theorem 2.1 With notation as above:

    E N_n/n = \mu E M_{n-1} + \sum_{j=0}^{c-1} \frac{c-j}{j!} E[(X_n M_{n-1})^j e^{-X_n M_{n-1}}],

    E N_n/n = \mu E M_{n-1} + o(1),  n \to \infty,

    E N_n < \infty  if and only if  E(1/X) < \infty,

and for c = 1

    E N_n = n\mu E M_{n-1} + 1 = n\mu \int_0^\infty [1 - (1 - g_X(s))^{n-1}] ds + 1.
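The basic relation can be checked numerically: taking expectations and using E Z_i = 1 (the Z's are independent of N_n) gives E N_n = E[M_n \sum_1^n X_j], so a direct simulation of the sampling scheme and an independent simulation of \max_i(Y_i/X_i) \sum_j X_j should produce the same mean. A seeded sketch under assumed parameters of our own (X uniform on [0.5, 1.5], c = 1, n = 20):

```python
import random
from bisect import bisect_left
from itertools import accumulate

random.seed(2)
n, reps, c = 20, 3000, 1

direct, embedded = 0.0, 0.0
for _ in range(reps):
    xs = [random.uniform(0.5, 1.5) for _ in range(n)]
    cum = list(accumulate(xs))
    total = cum[-1]
    # Direct side: multinomial trials with probabilities x_i / total until
    # every outcome has occurred at least c = 1 times.
    counts, remaining, draws = [0] * n, n, 0
    while remaining > 0:
        i = bisect_left(cum, random.uniform(0.0, total))
        counts[i] += 1
        if counts[i] == c:
            remaining -= 1
        draws += 1
    direct += draws
    # Embedded side: M_n * sum(X_j) with M_n = max(Y_i / X_i), Y_i ~ Gamma(1,1) = Exp(1).
    m_n = max(random.expovariate(1.0) / x for x in xs)
    embedded += m_n * total

direct /= reps
embedded /= reps
print(round(direct, 2), round(embedded, 2))
```

Both averages estimate the same quantity, so their relative difference should shrink like the Monte Carlo error.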

Proof. The embedding implies E N_n = E[M_n \sum_1^n X_j]. Thus symmetry and independence give

    E N_n = n E(M_n X_n)
          = n E\Big[X_n \int_0^\infty \big(1 - P(M_{n-1} \le s) P(Y_n/X_n \le s \mid X_n)\big) ds\Big]
          = n E\Big[X_n \int_0^\infty [1 - P(M_{n-1} \le s)] ds\Big] + n E\Big[X_n \int_0^\infty P(M_{n-1} \le s) P(Y_n/X_n > s \mid X_n) ds\Big]
          = n\mu E M_{n-1} + n \int_0^\infty P(M_{n-1} \le s) E[X_n P(Y_n > X_n s \mid X_n)] ds.

As Y_n is \Gamma(c,1) and independent of X_n we have

    P(Y_n > X_n s \mid X_n) = \sum_{l=0}^{c-1} \frac{(X_n s)^l}{l!} e^{-X_n s}.

Thus

    \int_0^\infty P(M_{n-1} \le s) E[X_n P(Y_n > X_n s \mid X_n)] ds
        = \sum_{l=0}^{c-1} E \int_0^\infty P(M_{n-1} \le s) X_n \frac{(X_n s)^l}{l!} e^{-X_n s} ds
        = \sum_{l=0}^{c-1} \int_0^\infty P(X_n M_{n-1} \le s) \frac{s^l e^{-s}}{l!} ds
        = \sum_{l=0}^{c-1} P(X_n M_{n-1} \le V_l),

where V_l is \Gamma(l+1, 1). Hence

    \sum_{l=0}^{c-1} P(X_n M_{n-1} \le V_l)
        = \sum_{l=0}^{c-1} E\Big[\sum_{j=0}^{l} \frac{(X_n M_{n-1})^j}{j!} e^{-X_n M_{n-1}}\Big]
        = \sum_{j=0}^{c-1} \frac{c-j}{j!} E[(X_n M_{n-1})^j e^{-X_n M_{n-1}}].

Combining the results above proves the first assertion. As M_n \to \infty a.s. as n \to \infty, the second assertion follows by dominated convergence. It is readily seen that the third assertion holds for any c if it holds for c = 1.

Let c = 1. Then the Y's are Exp(1) and we have

    P(Y/X > s) = E[P(Y > sX \mid X)] = E e^{-sX} = g_X(s).

Thus P(M_{n-1} \le s) = (1 - g_X(s))^{n-1}, and therefore

    E e^{-X_n M_{n-1}} = E g_X(M_{n-1}) = \int_0^\infty g_X(s)\, (n-1)(1 - g_X(s))^{n-2} (-g_X'(s)) ds = \frac{1}{n},

proving the last formula in the assertion. Furthermore,

    E M_n = \int_0^\infty [1 - (1 - g_X(s))^n] ds = \int_0^\infty g_X(s) \sum_{k=0}^{n-1} (1 - g_X(s))^k ds.

Therefore E M_n < \infty if and only if

    \int_0^\infty g_X(s) ds = \int_0^\infty E e^{-sX} ds = E \int_0^\infty e^{-sX} ds = E\Big(\frac{1}{X}\Big) < \infty,

proving the third assertion for c = 1 and therefore for all positive integers c.

The distribution function F_c(s) = P(Y/X \le s) is important for studying N_n. The following result will be useful later on.

Proposition 2.1 Let X and Y be independent positive random variables, X with distribution function F_X and Laplace transform g_X, and Y being \Gamma(c,1). Then for s > 0:

    g_X^{(k)}(s) = (-1)^k E[X^k e^{-sX}],

    1 - F_c(s) = P(Y/X > s) = \sum_{k=0}^{c-1} (-1)^k \frac{s^k}{k!} g_X^{(k)}(s) = \int_0^\infty F_X(x/s) \frac{x^{c-1} e^{-x}}{(c-1)!} dx,

    F_c'(s) = \frac{(-1)^c s^{c-1}}{(c-1)!} g_X^{(c)}(s) = \frac{c}{s} \big(F_c(s) - F_{c+1}(s)\big) = \frac{1}{s} \int_0^\infty F_X(x/s) (x - c) \frac{x^{c-1} e^{-x}}{(c-1)!} dx,

    F_c''(s) = \frac{1}{s^2} \Big[\frac{(-1)^c s^c}{(c-2)!} g_X^{(c)}(s) - \frac{(-1)^{c+1} s^{c+1}}{(c-1)!} g_X^{(c+1)}(s)\Big] = -\frac{1}{s^2} \int_0^\infty F_X(x/s) \big((x-c)^2 - c\big) \frac{x^{c-1} e^{-x}}{(c-1)!} dx.
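For X itself Exp(1) the quantities in Proposition 2.1 are fully explicit: E[X^k e^{-sX}] = k!/(1+s)^{k+1}, so the first sum is geometric and collapses to 1 - F_c(s) = 1 - (s/(1+s))^c. A quick Monte Carlo sanity check of this closed form (a derivation of our own, offered only as an illustration of the proposition):

```python
import random

random.seed(3)

def one_minus_F_c(s, c):
    """1 - F_c(s) = sum_{k<c} (s^k/k!) E[X^k e^{-sX}] with E[X^k e^{-sX}] = k!/(1+s)^{k+1}
    for X ~ Exp(1); the geometric sum collapses to 1 - (s/(1+s))^c."""
    return 1.0 - (s / (1.0 + s)) ** c

c, s, reps = 2, 1.0, 20000
hits = 0
for _ in range(reps):
    y = sum(random.expovariate(1.0) for _ in range(c))   # Y ~ Gamma(c,1)
    x = random.expovariate(1.0)                          # X ~ Exp(1)
    if y / x > s:
        hits += 1
empirical = hits / reps
print(round(empirical, 3), round(one_minus_F_c(s, c), 3))
```

For c = 2 and s = 1 the closed form gives 1 - (1/2)^2 = 0.75, which the empirical frequency should match to within sampling error.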

Proof. As Y is \Gamma(c,1) we have

    P(Y/X > s) = E[P(Y > sX \mid X)] = \sum_{k=0}^{c-1} \frac{s^k}{k!} E[X^k e^{-sX}],

and also

    P(Y/X > s) = E[P(X < Y/s \mid Y)] = \int_0^\infty F_X(x/s) \frac{x^{c-1} e^{-x}}{(c-1)!} dx.

By differentiation the other formulas follow by straightforward calculations.

3 Extremes of Fréchet type for N_n

In this section we consider X such that for some \alpha > 0 and some slowly varying function L

    P(X \le x) = x^{\alpha} L(x),  x \to 0.

Recall that L is slowly varying at 0 if L(tx)/L(x) \to 1 as x \to 0 for every fixed t > 0. A special case is the gamma distribution. The limiting distributions of N_n are extreme value distributions of Fréchet or \Phi_\alpha type, cf. Resnick (1987).

Theorem 3.1 Let a_n \to \infty be such that

    n a_n^{-\alpha} L(1/a_n) \Gamma(\alpha + c)/(c-1)! \to 1.

Then

    P(N_n/(n a_n \mu) \le y) \to e^{-y^{-\alpha}},  y > 0,

    E N_n/(n a_n \mu) \to \Gamma(1 - 1/\alpha),  \alpha > 1,

and E N_n = +\infty for \alpha < 1.

Proof. Using Proposition 2.1 we get as s \to \infty

    P(Y/X > s) = \int_0^\infty (x/s)^{\alpha} L(x/s) \frac{x^{c-1} e^{-x}}{(c-1)!} dx \sim s^{-\alpha} L(1/s) \int_0^\infty \frac{x^{\alpha+c-1} e^{-x}}{(c-1)!} dx = s^{-\alpha} L(1/s) \frac{\Gamma(\alpha + c)}{(c-1)!}.

Hence for y > 0

    n P(Y/X > a_n y) \sim n y^{-\alpha} a_n^{-\alpha} L(1/a_n) \Gamma(\alpha + c)/(c-1)! \to y^{-\alpha}.

Poisson convergence gives

    \sum_{j=1}^{n} I(Y_j/X_j > a_n y) \to Poisson(y^{-\alpha})  in distribution.

Thus for y > 0

    P(M_n/a_n \le y) = P\Big(\sum_{j=1}^{n} I(Y_j/X_j > a_n y) = 0\Big) \to e^{-y^{-\alpha}}.

From the behaviour of P(Y/X > s) as s \to \infty we have for any integer 0 < k < \alpha that E(Y/X)^k < \infty. Hence by Resnick (1987, p. 77)

    E(M_n/a_n)^k \to \Gamma(1 - k/\alpha),

and Theorem 2.1 gives for \alpha > 1

    E N_n/(n a_n \mu) = E M_{n-1}/a_n + o(1/a_n) \to \Gamma(1 - 1/\alpha),  n \to \infty.

If \alpha < 1 then E(1/X) = +\infty, implying E N_n = +\infty. Thus the second and third assertions are proved.

By the embedding we have, for t > 0,

    E[(1 + t)^{-N_n}] = E\Big[e^{-t \sum_{i=1}^{N_n} Z_i}\Big] = E\Big[e^{-t M_n \sum_1^n X_j}\Big].

Therefore for s \ge 0 and t = e^{s/(n a_n \mu)} - 1 we get

    E\big[e^{-s N_n/(n a_n \mu)}\big] = E \exp\Big(-s \cdot \frac{e^{s/(n a_n \mu)} - 1}{s/(n a_n \mu)} \cdot \frac{M_n}{a_n} \cdot \frac{\sum_1^n X_j}{n \mu}\Big).

As

    \frac{e^{s/(n a_n \mu)} - 1}{s/(n a_n \mu)} \to 1,  P(M_n/a_n \le y) \to e^{-y^{-\alpha}},  \frac{\sum_1^n X_j}{n \mu} \to 1 in probability,

it follows that

    \frac{e^{s/(n a_n \mu)} - 1}{s/(n a_n \mu)} \cdot \frac{M_n}{a_n} \cdot \frac{\sum_1^n X_j}{n \mu}

converges in distribution to the Fréchet limit. Thus, by the continuity theorem for Laplace transforms we have for s \ge 0 that

    E\big[e^{-s N_n/(n a_n \mu)}\big] \to \int_0^\infty e^{-sy}\, d\big(e^{-y^{-\alpha}}\big),

from which the first assertion of the theorem follows.
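Theorem 3.1 can be illustrated by simulation in the exponential case (\alpha = 1, c = 1, \mu = 1, a_n = n, treated in detail in Subsection 3.1 below), where drawing with D(1, ..., 1) random probabilities is exactly a Polya urn. The mean of N_n is infinite here, so this seeded sketch (parameters and the runtime cap are our own choices) compares the sample median of N_n/n^2 with the median 1/\log 2 \approx 1.44 of the Fréchet limit e^{-1/y}:

```python
import random
import statistics

random.seed(4)

def polya_collect(n, cap):
    """Polya urn: start with one ball of each of n colours; each draw is replaced
    together with one new ball of the same colour.  Return the draw at which
    every colour has been seen at least once (capped to bound the runtime)."""
    urn = list(range(n))
    seen = set()
    for draws in range(1, cap + 1):
        colour = urn[random.randrange(len(urn))]
        urn.append(colour)
        seen.add(colour)
        if len(seen) == n:
            return draws
    return cap

n, reps = 30, 300
cap = 200 * n * n          # the limit has a heavy 1/y tail, so cap rare long runs
med = statistics.median(polya_collect(n, cap) for _ in range(reps)) / n**2
print(round(med, 2))       # Frechet(1) median is 1/log 2 ~ 1.44
```

The median is used instead of the mean precisely because the Fréchet(1) limit is not integrable; the cap affects only the far tail and leaves the sample median essentially untouched.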

3.1 Example: gamma distribution

Let X be \Gamma(\alpha,1). Then

    g_X(s) = E e^{-sX} = (1 + s)^{-\alpha},  s > -1,

    P(X \le x) \sim x^{\alpha}/\Gamma(\alpha + 1),  x \to 0.

In Theorem 3.1 we have \mu = \alpha and take

    a_n = \Big[\frac{n \Gamma(\alpha + c)}{\Gamma(\alpha + 1)(c-1)!}\Big]^{1/\alpha},

where a_n = n^{1/\alpha} for c = 1. For X_1, ..., X_n independent and \Gamma(\alpha,1) the sum X_1 + ... + X_n is \Gamma(n\alpha,1) and independent of (X_1, ..., X_n)/(X_1 + ... + X_n), which has the symmetric Dirichlet distribution D(\alpha, ..., \alpha). Hence for \alpha > 1 it follows by the embedding that

    E N_n = E\Big[M_n \sum_1^n X_j\Big] = (n\alpha - 1) E M_n \sim n^{1 + 1/\alpha}\, \alpha \Big[\frac{\Gamma(\alpha + c)}{\Gamma(\alpha + 1)(c-1)!}\Big]^{1/\alpha} \Gamma(1 - 1/\alpha),

because M_n \sum_1^n X_j = \max_i(Y_i/p_i) is independent of \sum_1^n X_j and E[1/\sum_1^n X_j] = 1/(n\alpha - 1). The mean is infinite for \alpha \le 1. Note that (Y/c)/(X/\alpha) has an F-distribution. For the exponential case \alpha = 1, we take a_n = cn and get the limit

    P(N_n/(c n^2) \le y) \to e^{-1/y},  n \to \infty.

The probabilities X_k/(X_1 + ... + X_n) for k = 1, ..., n can be interpreted as the spacings of a random sample of size n-1 from a uniform distribution on the unit interval. This corresponds to a D(1, ..., 1) prior distribution on the drawing probabilities. Unconditionally the drawing procedure is a Polya urn scheme with n balls of different colours at the start, replacing each drawn ball together with one new one of the same colour. A general Polya scheme corresponds to having some \alpha > 0.

4 Extremes of Gumbel type for N_n

In this section we consider distributions such that P(X \le x) \to 0 faster than any power of x as x \to 0. Extreme value distributions will be of Gumbel or \Lambda type, see Resnick (1987). Assume for the Laplace transform g_X(s) = E e^{-sX} and its derivatives that for k = 0, 1, 2, ... and as s \to \infty

    h_k(s) := \frac{s E[X^{k+1} e^{-sX}]}{E[X^k e^{-sX}]} \to \infty,  \frac{h_{k+1}(s)}{h_k(s)} = \frac{E[X^{k+2} e^{-sX}]\, E[X^k e^{-sX}]}{(E[X^{k+1} e^{-sX}])^2} \to 1.

Using Proposition 2.1 this implies for Y being \Gamma(c,1) that

    P(Y/X > s) = 1 - F_c(s) \sim \frac{s^{c-1} E[X^{c-1} e^{-sX}]}{(c-1)!},

    F_c'(s) = \frac{s^{c-1} E[X^c e^{-sX}]}{(c-1)!},

    F_c''(s) \sim -\frac{s^{c-1} E[X^{c+1} e^{-sX}]}{(c-1)!}.

Thus

    \frac{(1 - F_c(s))\, F_c''(s)}{(F_c'(s))^2} \to -1,

and F_c''(s) < 0 for s sufficiently large. Then from classical extreme value theory, see Resnick (1987, Prop. 1.1 and 2.1),

    P((M_n - b_n)/a_n \le y) \to e^{-e^{-y}},  E[(M_n - b_n)/a_n] \to \gamma,  n \to \infty,

where M_n = \max(Y_1/X_1, ..., Y_n/X_n) and \gamma is Euler's constant, and with the norming constants given from

    \frac{1}{n} = 1 - F_c(b_n),  a_n = \frac{1 - F_c(b_n)}{F_c'(b_n)}.

The limit behaviour of N_n will now be obtained by the embedding.

Theorem 4.1 Let X satisfy the conditions above and E X^2 < \infty. Then with a_n and b_n as above

    P\big((N_n/(n\mu) - b_n)/a_n \le y\big) \to e^{-e^{-y}},  E\big[(N_n/(n\mu) - b_n)/a_n\big] \to \gamma.

Proof. By the embedding we have

    \sum_{i=1}^{N_n} Z_i = M_n \sum_{j=1}^{n} X_j.

Using the estimates above it follows that b_n \to \infty and

    \frac{b_n^2}{n a_n^2} = \frac{b_n^2 (1 - F_c(b_n)) (F_c'(b_n))^2}{(1 - F_c(b_n))^2} = \frac{b_n^2 (F_c'(b_n))^2}{1 - F_c(b_n)} \sim -b_n^2 F_c''(b_n) \sim \frac{1}{(c-1)!} E[(b_n X)^{c+1} e^{-b_n X}] \to 0.

Thus

    Var\Big(\frac{b_n}{a_n} \cdot \frac{\sum_1^n X_j}{n\mu}\Big) = \frac{b_n^2}{a_n^2 n} \cdot \frac{Var(X)}{\mu^2} \to 0,

and we get that

    \frac{M_n \sum_1^n X_j/(n\mu) - b_n}{a_n} = \frac{M_n - b_n}{a_n} \cdot \frac{\sum_1^n X_j}{n\mu} + \frac{b_n}{a_n} \Big(\frac{\sum_1^n X_j}{n\mu} - 1\Big)

has the same asymptotic behaviour as (M_n - b_n)/a_n. Furthermore, by Theorem 2.1 and the estimates above,

    Var\Big(\frac{\sum_{i=1}^{N_n} Z_i - N_n}{n a_n}\Big) = \frac{E N_n}{(n a_n)^2} = \frac{\mu E M_{n-1} (1 + o(1))}{n a_n^2} \to 0.

Hence

    \frac{M_n \sum_1^n X_j/(n\mu) - b_n}{a_n} = \frac{\sum_{i=1}^{N_n} Z_i/(n\mu) - b_n}{a_n}

has the same asymptotic distribution as

    \frac{N_n/(n\mu) - b_n}{a_n},

that is, the same as that of (M_n - b_n)/a_n. The convergence of the mean also follows from Resnick (1987).

4.1 Example: constant probabilities

For X \equiv \mu we have E e^{-sX} = e^{-s\mu}, h_k(s) = \mu s and h_{k+1}(s)/h_k(s) = 1. Furthermore,

    \frac{1}{n} = \sum_{k=0}^{c-1} \frac{(b_n \mu)^k}{k!} e^{-b_n \mu},  \frac{1}{a_n} = \frac{n\mu (b_n \mu)^{c-1}}{(c-1)!} e^{-b_n \mu}

implies

    b_n \mu = \log n + (c-1) \log \log n - \log (c-1)! + o(1),  a_n \mu = 1 + o(1).

Hence

    P\big(N_n/n - \log n - (c-1) \log \log n + \log (c-1)! \le y\big) \to e^{-e^{-y}},

    E N_n/n = \log n + (c-1) \log \log n - \log (c-1)! + \gamma + o(1).

For c = 1 this is the result for the classical coupon collector's problem given in the Introduction. Recall that N_n is stochastically smallest among all positive distributions of X when X is constant.
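The norming constants for constant probabilities are easy to compute exactly and compare with the asymptotic expansion. The sketch below (\mu = 1; the sample size n = 10^6, c = 3 and the tolerances are our own test values) solves 1 - F_c(b_n) = 1/n by bisection, using that 1 - F_c(b) = P(Poisson(b) < c) here, and checks the result against \log n + (c-1) \log \log n - \log(c-1)!:

```python
import math

def survival(b, c):
    """1 - F_c(b) for X = 1: P(Poisson(b) < c) = sum_{k<c} b^k e^{-b} / k!."""
    return sum(b**k / math.factorial(k) for k in range(c)) * math.exp(-b)

def solve_b(n, c, lo=1e-9, hi=200.0):
    """Bisection for b_n with survival(b_n, c) = 1/n (survival is decreasing in b)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if survival(mid, c) > 1.0 / n:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n, c = 10**6, 3
b_exact = solve_b(n, c)
b_asym = math.log(n) + (c - 1) * math.log(math.log(n)) - math.log(math.factorial(c - 1))
print(round(b_exact, 3), round(b_asym, 3))
```

The o(1) term decays slowly, so at n = 10^6 the exact and asymptotic values still differ by a few percent; this is the expected behaviour, not a bug.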

4.2 Example: inverse gaussian distribution

Let X be inverse Gaussian with mean \mu = 1 and variance \sigma^2 = 1/(2\psi), that is

    E e^{-sX} = e^{2\psi - 2\psi\sqrt{1 + s/\psi}},

    P(X \le x) = \Phi\Big(\sqrt{2\psi/x}\,(x - 1)\Big) + e^{4\psi}\, \Phi\Big(-\sqrt{2\psi/x}\,(x + 1)\Big),

where \Phi is the standard normal distribution function. For s \to \infty we have

    s^k E[X^k e^{-sX}] \sim (\psi s)^{k/2}\, E e^{-sX},

    1 - F_c(s) = \frac{(\psi s)^{(c-1)/2}}{(c-1)!} \Big(1 + O\Big(\frac{1}{\sqrt{s}}\Big)\Big) E e^{-sX},

    F_c'(s) = \frac{(\psi s)^{c/2}}{s\,(c-1)!} \Big(1 + O\Big(\frac{1}{\sqrt{s}}\Big)\Big) E e^{-sX}.

Thus the assumptions of Theorem 4.1 are satisfied. From 1 - F_c(b_n) = 1/n we get

    2\sqrt{\psi b_n} = \log n + (c-1) \log \log n - (c-1) \log 2 - \log (c-1)! + 2\psi + o(1),

and

    a_n = \frac{1 - F_c(b_n)}{F_c'(b_n)} \sim \frac{\log n}{2\psi},  \frac{b_n}{a_n} = \frac{1}{2} \big[\log n + (c-1) \log \log n - \log\big((c-1)!\, 2^{c-1}\big) + 2\psi\big] + o(1).

Recalling \sigma^2 = 1/(2\psi) we obtain

    P\big(N_n/(\sigma^2 n \log n) - b_n/a_n \le y\big) \to e^{-e^{-y}},  E N_n/(n \log n) = \sigma^2 (b_n/a_n + \gamma) + o(1).

4.3 Example: lognormal distribution

Let X have a lognormal distribution. Without loss of generality let X = e^{\sigma Z}, where Z is standard normal and \mu = E X = e^{\sigma^2/2}. For s > 0 we have

    E[X^k e^{-sX}] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{k\sigma z - s e^{\sigma z} - z^2/2} dz.

With z_s such that

    \frac{z_s}{\sigma} e^{\sigma z_s} = s,

we find after some calculations of saddlepoint type that

    E[X^k e^{-sX}] \sim \frac{1}{\sqrt{1 + \sigma z_s}}\, e^{-k\sigma z_s - z_s/\sigma - z_s^2/2},  s \to \infty.

With y_n such that

    \frac{y_n^{c-3/2}}{\sigma^{c-1/2} (c-1)!}\, e^{-y_n^2/2 - y_n/\sigma} = \frac{1}{n},

that is roughly y_n \approx \sqrt{2 \log n}, and

    b_n = \frac{y_n}{\sigma} e^{\sigma y_n},

we obtain 1 - F_c(b_n) \sim 1/n and

    a_n = \frac{1 - F_c(b_n)}{F_c'(b_n)} \sim e^{\sigma y_n}.

This gives the limit

    P\big(N_n/(e^{\sigma^2/2}\, n\, e^{\sigma y_n}) - y_n/\sigma \le y\big) \to e^{-e^{-y}},  n \to \infty.

4.4 Example: strictly positive support

Let X \ge d > 0 where d = \inf\{x : P(X \le x) > 0\}. Set X_d = X - d. Then

    E[X^k e^{-sX}] = e^{-sd}\, E[(d + X_d)^k e^{-sX_d}] \sim d^k e^{-sd}\, E e^{-sX_d},  s \to \infty.

Hence h_k(s) \sim sd and h_{k+1}(s)/h_k(s) \to 1, implying

    1 - F_c(s) \sim \frac{(sd)^{c-1}}{(c-1)!} e^{-sd}\, E e^{-sX_d},  F_c'(s) \sim \frac{d (sd)^{c-1}}{(c-1)!} e^{-sd}\, E e^{-sX_d}.

The assumptions of Theorem 4.1 are fulfilled and the norming constants can be determined from

    \frac{1}{n} = \sum_{k=0}^{c-1} \frac{(b_n d)^k}{k!} e^{-b_n d}\, E e^{-b_n X_d},  a_n d = 1.

For X \equiv d we get the example with constant probabilities. Other cases are modifications of it. For example, let X_d be \Gamma(\alpha,1); then \mu = d + \alpha, E e^{-sX_d} = (1 + s)^{-\alpha}, and the norming constants can be chosen as

    b_n d = \log n + (c - 1 - \alpha) \log \log n + \log\big(d^{\alpha}/(c-1)!\big),  a_n d = 1,

giving the limit

    P\Big(\frac{d\, N_n}{(d + \alpha) n} - b_n d \le y\Big) \to e^{-e^{-y}},  n \to \infty.
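The saddlepoint approximation of Subsection 4.3 can be checked against direct numerical integration of the defining integral. The sketch below uses test values of our own (\sigma = 1, z_s = 8, hence s = 8 e^8, a crude midpoint Riemann sum, and generous tolerances for the finite-s error of the approximation):

```python
import math

sigma = 1.0
z_s = 8.0
s = (z_s / sigma) * math.exp(sigma * z_s)   # definition of z_s: (z_s/sigma) e^{sigma z_s} = s

def moment_numeric(k, lo=-30.0, hi=3.0, step=1e-3):
    """E[X^k e^{-sX}] for X = e^{sigma Z} via a midpoint Riemann sum over z."""
    total = 0.0
    steps = int((hi - lo) / step)
    for i in range(steps):
        z = lo + (i + 0.5) * step
        expo = k * sigma * z - s * math.exp(sigma * z) - z * z / 2.0
        if expo > -700.0:                    # skip underflowing tails
            total += math.exp(expo)
    return total * step / math.sqrt(2.0 * math.pi)

def moment_saddle(k):
    """Saddlepoint formula: (1 + sigma z_s)^{-1/2} exp(-k sigma z_s - z_s/sigma - z_s^2/2)."""
    return math.exp(-k * sigma * z_s - z_s / sigma - z_s**2 / 2.0) / math.sqrt(1.0 + sigma * z_s)

ratios = [moment_numeric(k) / moment_saddle(k) for k in (0, 1)]
print([round(r, 3) for r in ratios])
```

At this moderate value of z_s the two sides already agree to within a few percent; the relative error shrinks as z_s (equivalently s) grows.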

5 Extremes of Weibull type for N_n^*

In this section we consider N_n^*, the number of trials until some unspecified outcome has occurred c \ge 2 times. As in Section 2 we get by embedding

    \sum_{i=1}^{N_n^*} Z_i = M_n^* \sum_{j=1}^{n} X_j,

where

    M_n^* = \min(Y_1/X_1, ..., Y_n/X_n).

With a proof similar to that of Theorem 2.1 we obtain:

Theorem 5.1 We have

    E N_n^*/n = \mu E M_{n-1}^* - \sum_{j=c+1}^{\infty} \frac{j-c}{j!}\, E[(X_n M_{n-1}^*)^j e^{-X_n M_{n-1}^*}].

In a similar way as before we get asymptotic results for N_n^* from extreme value theory. A crucial quantity is

    F_c(s) = P(Y/X \le s) = \sum_{k=c}^{\infty} \frac{s^k}{k!}\, E[X^k e^{-sX}].

If

    \frac{s F_c'(s)}{F_c(s)} = \frac{s^c E[X^c e^{-sX}]/(c-1)!}{\sum_{k=c}^{\infty} (s^k/k!)\, E[X^k e^{-sX}]} \to c,  s \to 0,

and a_n is such that n F_c(a_n) \to 1, then for y > 0

    P(M_n^*/a_n \le y) \to 1 - e^{-y^c},  n \to \infty,

see Resnick (1987, Prop. 1.13). Now small modifications of the proof of Theorem 3.1 give limits of Weibull type.

Theorem 5.2 If s F_c'(s)/F_c(s) \to c as s \to 0, n F_c(a_n) \to 1 and n a_n \to \infty as n \to \infty, then

    P(N_n^*/(n a_n \mu) \le y) \to 1 - e^{-y^c},  y > 0,

and E N_n^*/(n a_n \mu) \to \Gamma(1 + 1/c).

5.1 Example: exponential moments

Suppose that the Laplace transform g_X(s) = E e^{-sX} is finite in a neighbourhood of the origin. Then

    \sum_{k=c+1}^{\infty} \frac{s^k}{k!}\, E[X^k e^{-sX}] = O(s^{c+1}),  s \to 0.

Hence

    F_c(s) = \frac{s^c}{c!}\, E[X^c e^{-sX}] + O(s^{c+1}) \sim \frac{s^c}{c!}\, E X^c,  s \to 0,

and therefore we can take

    a_n = \big(c!/(n\, E X^c)\big)^{1/c}.

X \equiv \mu and c = 2 give the limit in the Introduction for the birthday problem

    P(N_n^*/\sqrt{2n} \le y) \to 1 - e^{-y^2}.

Recall that N_n^* is stochastically largest when X is constant. If X is Exp(1), then \mu = 1, a_n = n^{-1/c} and we get the limit

    P(N_n^*/n^{1-1/c} \le y) \to 1 - e^{-y^c},

cf. Subsection 3.1 and the Polya urn scheme.

5.2 Example: lognormal distribution

Let X = e^{\sigma Z} where Z is standard normal. Then E X^k = e^{k^2 \sigma^2/2} and, using E[X^k e^{-sX}] = e^{k^2\sigma^2/2}\, E[e^{-s e^{k\sigma^2} X}], we have

    \sum_{k=c+1}^{\infty} \frac{s^{k-c}}{k!}\, E[X^k e^{-sX}] = \sum_{k=c+1}^{\infty} \frac{e^{k^2\sigma^2/2}\, e^{-k(k-c)\sigma^2}}{k!} \big(s e^{k\sigma^2}\big)^{k-c} E[e^{-s e^{k\sigma^2} X}] \to 0,  s \to 0.

Hence

    F_c(s) = \frac{s^c}{c!}\, E[X^c e^{-sX}] + o(s^c) \sim \frac{s^c}{c!}\, E X^c = \frac{s^c}{c!}\, e^{c^2\sigma^2/2},  s \to 0,

which gives

    a_n \sim \big(c!\, e^{-c^2\sigma^2/2}/n\big)^{1/c},

and the limit

    P\big(N_n^*/(n^{1-1/c} (c!)^{1/c} e^{(1-c)\sigma^2/2}) \le y\big) \to 1 - e^{-y^c}.
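The exponential case can be simulated directly: with X Exp(1) the drawing probabilities are uniform spacings, and for c = 2 Theorem 5.2 predicts E N_n^*/\sqrt{n} \to \Gamma(3/2) = \sqrt{\pi}/2 \approx 0.886. A seeded sketch under our own parameter choices:

```python
import math
import random
from bisect import bisect_left
from itertools import accumulate

random.seed(7)

def random_birthday(n):
    """Draws from random probabilities X_i/sum(X), X_i ~ Exp(1), until the
    first repeat of an outcome (c = 2)."""
    cum = list(accumulate(random.expovariate(1.0) for _ in range(n)))
    total = cum[-1]
    seen, draws = set(), 0
    while True:
        draws += 1
        i = bisect_left(cum, random.uniform(0.0, total))
        if i in seen:
            return draws
        seen.add(i)

n, reps = 2000, 1500
mean_ratio = sum(random_birthday(n) for _ in range(reps)) / reps / math.sqrt(n)
print(round(mean_ratio, 3))   # limit is Gamma(3/2) = sqrt(pi)/2 ~ 0.886
```

Note that the norming is \sqrt{n} here, not \sqrt{2n} as in the equal-probability birthday problem: the random weights make early repeats more likely.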

References

[1] Blom, G., Holst, L. and Sandell, D. (1994). Problems and Snapshots from the World of Probability. Springer, New York.

[2] Camarri, M. and Pitman, J. (2000). Limit distributions and random trees derived from the birthday problem with unequal probabilities. Electronic Journal of Probability 5, Paper no. 1.

[3] Feller, W. (1968). An Introduction to Probability Theory and Its Applications, Vol. I, 3rd edn. John Wiley, New York.

[4] Holst, L. (1995). The general birthday problem. Random Structures and Algorithms 6.

[5] Papanicolaou, V.G., Kokolakis, G.E. and Boneh, S. (1998). Asymptotics for the random coupon collector problem. Journal of Computational and Applied Mathematics 93.

[6] Resnick, S. (1987). Extreme Values, Regular Variation, and Point Processes. Springer, New York.

[7] Steinsaltz, D. (1999). Random time changes for sock-sorting and other stochastic process limit theorems. Electronic Journal of Probability 4, Paper no. 14.

More information

ECON 5350 Class Notes Review of Probability and Distribution Theory

ECON 5350 Class Notes Review of Probability and Distribution Theory ECON 535 Class Notes Review of Probability and Distribution Theory 1 Random Variables Definition. Let c represent an element of the sample space C of a random eperiment, c C. A random variable is a one-to-one

More information

1 Review of di erential calculus

1 Review of di erential calculus Review of di erential calculus This chapter presents the main elements of di erential calculus needed in probability theory. Often, students taking a course on probability theory have problems with concepts

More information

Exponential Distribution and Poisson Process

Exponential Distribution and Poisson Process Exponential Distribution and Poisson Process Stochastic Processes - Lecture Notes Fatih Cavdur to accompany Introduction to Probability Models by Sheldon M. Ross Fall 215 Outline Introduction Exponential

More information

Formulas for probability theory and linear models SF2941

Formulas for probability theory and linear models SF2941 Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms

More information

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013 Multivariate Gaussian Distribution Auxiliary notes for Time Series Analysis SF2943 Spring 203 Timo Koski Department of Mathematics KTH Royal Institute of Technology, Stockholm 2 Chapter Gaussian Vectors.

More information

Section 9.1. Expected Values of Sums

Section 9.1. Expected Values of Sums Section 9.1 Expected Values of Sums Theorem 9.1 For any set of random variables X 1,..., X n, the sum W n = X 1 + + X n has expected value E [W n ] = E [X 1 ] + E [X 2 ] + + E [X n ]. Proof: Theorem 9.1

More information

Midterm #1. Lecture 10: Joint Distributions and the Law of Large Numbers. Joint Distributions - Example, cont. Joint Distributions - Example

Midterm #1. Lecture 10: Joint Distributions and the Law of Large Numbers. Joint Distributions - Example, cont. Joint Distributions - Example Midterm #1 Midterm 1 Lecture 10: and the Law of Large Numbers Statistics 104 Colin Rundel February 0, 01 Exam will be passed back at the end of class Exam was hard, on the whole the class did well: Mean:

More information

Experimental Design and Statistics - AGA47A

Experimental Design and Statistics - AGA47A Experimental Design and Statistics - AGA47A Czech University of Life Sciences in Prague Department of Genetics and Breeding Fall/Winter 2014/2015 Matúš Maciak (@ A 211) Office Hours: M 14:00 15:30 W 15:30

More information

LIST OF FORMULAS FOR STK1100 AND STK1110

LIST OF FORMULAS FOR STK1100 AND STK1110 LIST OF FORMULAS FOR STK1100 AND STK1110 (Version of 11. November 2015) 1. Probability Let A, B, A 1, A 2,..., B 1, B 2,... be events, that is, subsets of a sample space Ω. a) Axioms: A probability function

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

Probability distributions. Probability Distribution Functions. Probability distributions (contd.) Binomial distribution

Probability distributions. Probability Distribution Functions. Probability distributions (contd.) Binomial distribution Probability distributions Probability Distribution Functions G. Jogesh Babu Department of Statistics Penn State University September 27, 2011 http://en.wikipedia.org/wiki/probability_distribution We discuss

More information

Introducing the Normal Distribution

Introducing the Normal Distribution Department of Mathematics Ma 3/13 KC Border Introduction to Probability and Statistics Winter 219 Lecture 1: Introducing the Normal Distribution Relevant textbook passages: Pitman [5]: Sections 1.2, 2.2,

More information

CS Lecture 19. Exponential Families & Expectation Propagation

CS Lecture 19. Exponential Families & Expectation Propagation CS 6347 Lecture 19 Exponential Families & Expectation Propagation Discrete State Spaces We have been focusing on the case of MRFs over discrete state spaces Probability distributions over discrete spaces

More information

Stationary independent increments. 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process.

Stationary independent increments. 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process. Stationary independent increments 1. Random changes of the form X t+h X t fixed h > 0 are called increments of the process. 2. If each set of increments, corresponding to non-overlapping collection of

More information

1 Solution to Problem 2.1

1 Solution to Problem 2.1 Solution to Problem 2. I incorrectly worked this exercise instead of 2.2, so I decided to include the solution anyway. a) We have X Y /3, which is a - function. It maps the interval, ) where X lives) onto

More information

MAS113 Introduction to Probability and Statistics

MAS113 Introduction to Probability and Statistics MAS113 Introduction to Probability and Statistics School of Mathematics and Statistics, University of Sheffield 2018 19 Identically distributed Suppose we have n random variables X 1, X 2,..., X n. Identically

More information

SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM

SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM Solutions to Question A1 a) The marginal cdfs of F X,Y (x, y) = [1 + exp( x) + exp( y) + (1 α) exp( x y)] 1 are F X (x) = F X,Y (x, ) = [1

More information

The chromatic number of random regular graphs

The chromatic number of random regular graphs The chromatic number of random regular graphs Xavier Pérez Giménez Stochastics Numerics Seminar Models of random graphs Erdős & Rényi classic models (G n,p, G n,m ) Fixed degree sequence Regular graphs

More information

Spring 2012 Math 541B Exam 1

Spring 2012 Math 541B Exam 1 Spring 2012 Math 541B Exam 1 1. A sample of size n is drawn without replacement from an urn containing N balls, m of which are red and N m are black; the balls are otherwise indistinguishable. Let X denote

More information

Stat410 Probability and Statistics II (F16)

Stat410 Probability and Statistics II (F16) Stat4 Probability and Statistics II (F6 Exponential, Poisson and Gamma Suppose on average every /λ hours, a Stochastic train arrives at the Random station. Further we assume the waiting time between two

More information

Chap 2.1 : Random Variables

Chap 2.1 : Random Variables Chap 2.1 : Random Variables Let Ω be sample space of a probability model, and X a function that maps every ξ Ω, toa unique point x R, the set of real numbers. Since the outcome ξ is not certain, so is

More information

1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 =

1 + lim. n n+1. f(x) = x + 1, x 1. and we check that f is increasing, instead. Using the quotient rule, we easily find that. 1 (x + 1) 1 x (x + 1) 2 = Chapter 5 Sequences and series 5. Sequences Definition 5. (Sequence). A sequence is a function which is defined on the set N of natural numbers. Since such a function is uniquely determined by its values

More information

1.1 Review of Probability Theory

1.1 Review of Probability Theory 1.1 Review of Probability Theory Angela Peace Biomathemtics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

hal , version 1-12 Jun 2014

hal , version 1-12 Jun 2014 Applied Probability Trust (6 June 2014) NEW RESULTS ON A GENERALIZED COUPON COLLECTOR PROBLEM USING MARKOV CHAINS EMMANUELLE ANCEAUME, CNRS YANN BUSNEL, Univ. of Nantes BRUNO SERICOLA, INRIA Abstract We

More information

PAS04 - Important discrete and continuous distributions

PAS04 - Important discrete and continuous distributions PAS04 - Important discrete and continuous distributions Jan Březina Technical University of Liberec 30. října 2014 Bernoulli trials Experiment with two possible outcomes: yes/no questions throwing coin

More information

Sampling Distributions

Sampling Distributions In statistics, a random sample is a collection of independent and identically distributed (iid) random variables, and a sampling distribution is the distribution of a function of random sample. For example,

More information

Randomized Load Balancing:The Power of 2 Choices

Randomized Load Balancing:The Power of 2 Choices Randomized Load Balancing: The Power of 2 Choices June 3, 2010 Balls and Bins Problem We have m balls that are thrown into n bins, the location of each ball chosen independently and uniformly at random

More information

Exercises in Extreme value theory

Exercises in Extreme value theory Exercises in Extreme value theory 2016 spring semester 1. Show that L(t) = logt is a slowly varying function but t ǫ is not if ǫ 0. 2. If the random variable X has distribution F with finite variance,

More information

Erdős-Renyi random graphs basics

Erdős-Renyi random graphs basics Erdős-Renyi random graphs basics Nathanaël Berestycki U.B.C. - class on percolation We take n vertices and a number p = p(n) with < p < 1. Let G(n, p(n)) be the graph such that there is an edge between

More information

Probability and Measure

Probability and Measure Chapter 4 Probability and Measure 4.1 Introduction In this chapter we will examine probability theory from the measure theoretic perspective. The realisation that measure theory is the foundation of probability

More information

CS145: Probability & Computing

CS145: Probability & Computing CS45: Probability & Computing Lecture 5: Concentration Inequalities, Law of Large Numbers, Central Limit Theorem Instructor: Eli Upfal Brown University Computer Science Figure credits: Bertsekas & Tsitsiklis,

More information

An Asymptotically Optimal Algorithm for the Max k-armed Bandit Problem

An Asymptotically Optimal Algorithm for the Max k-armed Bandit Problem An Asymptotically Optimal Algorithm for the Max k-armed Bandit Problem Matthew J. Streeter February 27, 2006 CMU-CS-06-110 Stephen F. Smith School of Computer Science Carnegie Mellon University Pittsburgh,

More information

MATH2715: Statistical Methods

MATH2715: Statistical Methods MATH2715: Statistical Methods Exercises V (based on lectures 9-10, work week 6, hand in lecture Mon 7 Nov) ALL questions count towards the continuous assessment for this module. Q1. If X gamma(α,λ), write

More information

STAT 512 sp 2018 Summary Sheet

STAT 512 sp 2018 Summary Sheet STAT 5 sp 08 Summary Sheet Karl B. Gregory Spring 08. Transformations of a random variable Let X be a rv with support X and let g be a function mapping X to Y with inverse mapping g (A = {x X : g(x A}

More information

Discrete Distributions

Discrete Distributions Chapter 2 Discrete Distributions 2.1 Random Variables of the Discrete Type An outcome space S is difficult to study if the elements of S are not numbers. However, we can associate each element/outcome

More information

04. Random Variables: Concepts

04. Random Variables: Concepts University of Rhode Island DigitalCommons@URI Nonequilibrium Statistical Physics Physics Course Materials 215 4. Random Variables: Concepts Gerhard Müller University of Rhode Island, gmuller@uri.edu Creative

More information

WEIBULL RENEWAL PROCESSES

WEIBULL RENEWAL PROCESSES Ann. Inst. Statist. Math. Vol. 46, No. 4, 64-648 (994) WEIBULL RENEWAL PROCESSES NIKOS YANNAROS Department of Statistics, University of Stockholm, S-06 9 Stockholm, Sweden (Received April 9, 993; revised

More information

Example continued. Math 425 Intro to Probability Lecture 37. Example continued. Example

Example continued. Math 425 Intro to Probability Lecture 37. Example continued. Example continued : Coin tossing Math 425 Intro to Probability Lecture 37 Kenneth Harris kaharri@umich.edu Department of Mathematics University of Michigan April 8, 2009 Consider a Bernoulli trials process with

More information

Stat 5101 Notes: Brand Name Distributions

Stat 5101 Notes: Brand Name Distributions Stat 5101 Notes: Brand Name Distributions Charles J. Geyer September 5, 2012 Contents 1 Discrete Uniform Distribution 2 2 General Discrete Uniform Distribution 2 3 Uniform Distribution 3 4 General Uniform

More information

1 Presessional Probability

1 Presessional Probability 1 Presessional Probability Probability theory is essential for the development of mathematical models in finance, because of the randomness nature of price fluctuations in the markets. This presessional

More information

The Laplace Transform. Background: Improper Integrals

The Laplace Transform. Background: Improper Integrals The Laplace Transform Background: Improper Integrals Recall: Definite Integral: a, b real numbers, a b; f continuous on [a, b] b a f(x) dx 1 Improper integrals: Type I Infinite interval of integration

More information

Chapter 5 continued. Chapter 5 sections

Chapter 5 continued. Chapter 5 sections Chapter 5 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

Some Aspects of Universal Portfolio

Some Aspects of Universal Portfolio 1 Some Aspects of Universal Portfolio Tomoyuki Ichiba (UC Santa Barbara) joint work with Marcel Brod (ETH Zurich) Conference on Stochastic Asymptotics & Applications Sixth Western Conference on Mathematical

More information

18.175: Lecture 8 Weak laws and moment-generating/characteristic functions

18.175: Lecture 8 Weak laws and moment-generating/characteristic functions 18.175: Lecture 8 Weak laws and moment-generating/characteristic functions Scott Sheffield MIT 18.175 Lecture 8 1 Outline Moment generating functions Weak law of large numbers: Markov/Chebyshev approach

More information

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E,

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E, Tel Aviv University, 26 Analysis-III 9 9 Improper integral 9a Introduction....................... 9 9b Positive integrands................... 9c Special functions gamma and beta......... 4 9d Change of

More information

the convolution of f and g) given by

the convolution of f and g) given by 09:53 /5/2000 TOPIC Characteristic functions, cont d This lecture develops an inversion formula for recovering the density of a smooth random variable X from its characteristic function, and uses that

More information

Sampling Distributions

Sampling Distributions Sampling Distributions In statistics, a random sample is a collection of independent and identically distributed (iid) random variables, and a sampling distribution is the distribution of a function of

More information

18.440: Lecture 28 Lectures Review

18.440: Lecture 28 Lectures Review 18.440: Lecture 28 Lectures 18-27 Review Scott Sheffield MIT Outline Outline It s the coins, stupid Much of what we have done in this course can be motivated by the i.i.d. sequence X i where each X i is

More information

Lycka till!

Lycka till! Avd. Matematisk statistik TENTAMEN I SF294 SANNOLIKHETSTEORI/EXAM IN SF294 PROBABILITY THE- ORY MONDAY THE 14 T H OF JANUARY 28 14. p.m. 19. p.m. Examinator : Timo Koski, tel. 79 71 34, e-post: timo@math.kth.se

More information

P (A G) dp G P (A G)

P (A G) dp G P (A G) First homework assignment. Due at 12:15 on 22 September 2016. Homework 1. We roll two dices. X is the result of one of them and Z the sum of the results. Find E [X Z. Homework 2. Let X be a r.v.. Assume

More information

Frontier estimation based on extreme risk measures

Frontier estimation based on extreme risk measures Frontier estimation based on extreme risk measures by Jonathan EL METHNI in collaboration with Ste phane GIRARD & Laurent GARDES CMStatistics 2016 University of Seville December 2016 1 Risk measures 2

More information

LARGE DEVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILED DEPENDENT RANDOM VECTORS*

LARGE DEVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILED DEPENDENT RANDOM VECTORS* LARGE EVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILE EPENENT RANOM VECTORS* Adam Jakubowski Alexander V. Nagaev Alexander Zaigraev Nicholas Copernicus University Faculty of Mathematics and Computer Science

More information

2 Random Variable Generation

2 Random Variable Generation 2 Random Variable Generation Most Monte Carlo computations require, as a starting point, a sequence of i.i.d. random variables with given marginal distribution. We describe here some of the basic methods

More information