Simulation, 3rd Edition
Solution of Exercise Problems
Yan Zeng
September 3, 2008

Contents

1 Introduction
2 Elements of Probability
3 Random Numbers
4 Generating Discrete Random Variables
5 Generating Continuous Random Variables
6 The Discrete Event Simulation Approach
7 Statistical Analysis of Simulated Data
8 Variance Reduction Techniques
9 Statistical Validation Techniques
10 Markov Chain Monte Carlo Methods
11 Some Additional Topics

This is a solution manual for the textbook Simulation, 3rd Edition, by Sheldon M. Ross (2002, Academic Press). This version omits Problems 8, 9, and 18 of Chapter 4.

Chapter 1 Introduction

1. (a) Proof. The Matlab code for the problem solution is given below.

function departuretimes = ross_1_1a(arrivaltimes, servicetimes)
% ROSS_1_1A solves Exercise Problem 1(a) of Chapter 1, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
if (numel(arrivaltimes) ~= numel(servicetimes))
    error('Array of arrival times and array of service times must have same size');
end
numcustomers = numel(arrivaltimes);
serveremptytime = 0;                        % The time when the server is empty.
departuretimes = zeros(1, numcustomers);
for i = 1:numcustomers
    if (serveremptytime < arrivaltimes(i))
        departuretimes(i) = arrivaltimes(i) + servicetimes(i);
    else
        departuretimes(i) = serveremptytime + servicetimes(i);
    end
    serveremptytime = departuretimes(i);
end

(b) Proof. The Matlab code for the problem solution is given below.

function departuretimes = ross_1_1b(arrivaltimes, servicetimes)
% ROSS_1_1B solves Exercise Problem 1(b) of Chapter 1, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
if (numel(arrivaltimes) ~= numel(servicetimes))
    error('Array of arrival times and array of service times must have same size');
end
numcustomers = numel(arrivaltimes);
numservers = 2;
serveremptytimes = zeros(1, numservers);    % serveremptytimes(i) is the time when the i-th server is empty.
departuretimes = zeros(1, numcustomers);
for i = 1:numcustomers
    % servertime is the earliest time when a server is empty;
    % serverno is this server's number (1 or 2).
    [servertime, serverno] = min(serveremptytimes);
    if (servertime < arrivaltimes(i))
        departuretimes(i) = arrivaltimes(i) + servicetimes(i);
    else
        departuretimes(i) = servertime + servicetimes(i);
    end
    serveremptytimes(serverno) = departuretimes(i);
end

(c) Proof. The Matlab code for the problem solution is given below.

function departuretimes = ross_1_1c(arrivaltimes, servicetimes)
% ROSS_1_1C solves Exercise Problem 1(c) of Chapter 1, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
if (numel(arrivaltimes) ~= numel(servicetimes))
    error('Array of arrival times and array of service times must have same size');
end
numcustomers = numel(arrivaltimes);
serveremptytime = 0;                        % The time when the server is empty.
departuretimes = zeros(1, numcustomers);
unserved = ones(1, numcustomers);           % unserved(i) is 1 if the i-th customer has not been served, 0 otherwise.
while (sum(unserved) > 0)                   % sum(unserved) > 0 iff there are customers unserved.
    % Find the customer who has been waiting the least time.
    % If no customer is waiting, get the next customer to arrive.
    waitingcustomers = ((arrivaltimes <= serveremptytime) & unserved);
    if (sum(waitingcustomers) == 0)         % There are no customers waiting.
        [temp, next] = max(arrivaltimes > serveremptytime);
        departuretimes(next) = arrivaltimes(next) + servicetimes(next);
    else                                    % There are customers waiting.
        [temp, next] = max(cumsum(waitingcustomers));
        departuretimes(next) = serveremptytime + servicetimes(next);
    end
    serveremptytime = departuretimes(next);
    unserved(next) = 0;
end

2. Proof. When there are k servers, the relation is given by

D_n − S_n = max{A_n, min{D_{n−1}, D_{n−2}, …, D_{n−k}}},

where D_0 = D_{−1} = … = D_{−(k−1)} = 0. The corresponding Matlab code is given below.

function departuretimes = ross_1_2(arrivaltimes, servicetimes, numservers)
% ROSS_1_2 solves Exercise Problem 2 of Chapter 1, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
if (numel(arrivaltimes) ~= numel(servicetimes))
    error('Array of arrival times and array of service times must have same size');
end
numcustomers = numel(arrivaltimes);
serveremptytimes = zeros(1, numservers);    % serveremptytimes(i) is the time when the i-th server is empty.
departuretimes = zeros(1, numcustomers);
for i = 1:numcustomers
    % servertime is the earliest time when a server is empty;
    % serverno is this server's number (1, 2, ..., k).
    [servertime, serverno] = min(serveremptytimes);
    if (servertime < arrivaltimes(i))
        departuretimes(i) = arrivaltimes(i) + servicetimes(i);
    else
        departuretimes(i) = servertime + servicetimes(i);
    end
    serveremptytimes(serverno) = departuretimes(i);
end
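As a quick illustration (the arrival times, service times, and number of servers below are made-up values for this example, not data from the textbook), the function can be called as

ross_1_2([1 2 3 4 5], [2.5 1.0 3.0 0.5 2.0], 2)

which returns the vector of departure times for five customers served by two servers.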

Chapter 2 Elements of Probability

1. (a) Proof. B = (A ∪ A^c)B = AB ∪ A^cB. A ∪ B = A ∪ (AB ∪ A^cB) = (A ∪ AB) ∪ A^cB = A ∪ A^cB.

(b) Proof. P(A ∪ B) = P(A ∪ A^cB) = P(A) + P(A^cB) = P(A) + P(B − AB) = P(A) + P(B) − P(AB).

2. Proof. A ∪ B can be divided into the disjoint union of three sets. The first set consists of all the 6-tuples whose second entry is 2. This set has 5! = 120 elements. The second set consists of all the 6-tuples whose second entry is 1. So it also has 5! = 120 elements. The third set consists of all the 6-tuples where 1 occupies either the first or the third entry and 2 does not occupy the second entry. It has 2 · 4 · 4! = 192 elements. So #(A ∪ B) = 120 + 120 + 192 = 432.

AB = {(1, 2, i, j, k, l), (i, 2, 1, j, k, l) : {i, j, k, l} = {3, 4, 5, 6}}. So #AB = 2 · 4! = 48. ABC = {(1, 2, 3, i, j, k) : {i, j, k} = {4, 5, 6}}. So #ABC = 3! = 6. Finally, #(A ∪ BC) = #A + #BC − #ABC = 3 · 5! + 4! − 3! = 360 + 24 − 6 = 378.

3. Proof. By assumption, P(bb) = P(bg) = P(gb) = P(gg) = 1/4. So

P(gg | gg or gb) = P(gg)/(P(gg) + P(gb)) = 1/2.

Remark: The problem is essentially the same as the coin-flipping problem at the beginning of Section 2.3.

4. Proof. We assume P(bb) = P(bg) = P(gb) = P(gg) = 1/4. Then the desired probability equals

P(bb | bb or bg or gb) = 1/3.

Remark: For some subtle issues related to conditional probability, see Feller [2], V.1, Example (c).

5. Proof. Since 1 = Σ_i P(X = i) = (1 + 2 + 3 + 4)c = 10c, we conclude c = 1/10. So P(2 ≤ X ≤ 3) = (2 + 3)/10 = 1/2.

6. Proof. Since 1 = ∫_0^1 cx dx = cx²/2 |_0^1 = c/2, we conclude c = 2. So P(X > 1/2) = ∫_{1/2}^1 2x dx = x² |_{1/2}^1 = 3/4.

7. Proof.

P(X < Y) = ∫∫_{x<y} 2e^{−(x+2y)} dx dy = ∫_0^∞ 2e^{−2y}(1 − e^{−y}) dy = ∫_0^∞ 2e^{−2y} dy − ∫_0^∞ 2e^{−3y} dy = 1 − 2/3 = 1/3.

8. Proof. E[X] = Σ_{i=1}^4 i² · (1/10) = 3.

9. Proof. E[X] = ∫_0^1 2x² dx = 2/3.

10. Proof. Following the hint, we have E[X] = Σ_{i=1}^{10} E[X_i]. Note that for each i,

P(X_i = 1) = 1 − P(type i coupon is not among the N coupons) = 1 − (9/10)^N = 1 − 0.9^N.

So E[X] = 10(1 − 0.9^N).

11. Proof. P(X = i) = 1/6, i = 1, …, 6. So E[X] = (1/6)Σ_{i=1}^6 i = 3.5 and E[X²] = (1/6)Σ_{i=1}^6 i² = (6+1)(2·6+1)/6 = 91/6. Therefore Var(X) = 91/6 − 3.5² = 35/12 ≈ 2.92.

12. Proof. Since 1 = ∫_0^1 f(x) dx = c(e − 1), we conclude c = 1/(e − 1). We have

E[X] = c ∫_0^1 xe^x dx = c(xe^x |_0^1 − ∫_0^1 e^x dx) = c,

and

E[X²] = c ∫_0^1 x²e^x dx = c(x²e^x |_0^1 − 2∫_0^1 xe^x dx) = ce − 2E[X] = ce − 2c.

So Var(X) = E[X²] − (E[X])² = ce − 2c − c² = (1/(e−1))(e − 2 − 1/(e−1)) = ((e−2)(e−1) − 1)/(e−1)² ≈ 0.079.

13. Proof. Var(aX + b) = E(aX + b − E(aX + b))² = E(aX + b − aEX − b)² = a²E(X − EX)² = a²Var(X).
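As a quick hedged Monte Carlo check of the formula in Problem 10 above (the function name and the suggested number of replications are our own choices, not part of the textbook), one can simulate the number of distinct coupon types directly:

function [est, exact] = ross_2_10_check(N, numSim)
% Draw N coupons, each equally likely to be one of 10 types, and compare the
% average number of distinct types observed with the formula 10*(1 - 0.9^N).
count = zeros(1, numSim);
for i = 1:numSim
    coupons = floor(10*rand(1, N)) + 1;   % N coupon types in {1,...,10}
    count(i) = numel(unique(coupons));
end
est = mean(count);
exact = 10 * (1 - 0.9^N);

For example, ross_2_10_check(10, 10000) should return an estimate close to 10(1 − 0.9^10) ≈ 6.51.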

14. Proof. The answer to the problem depends on what distribution the amount of liquid apple juice follows. For example, if we assume it follows a normal distribution, then the problem's conditions imply mean μ = 4 and variance σ² = 4. Denoting by X the amount of liquid apple juice, we have

P(X ≥ 6) = P((X − 4)/2 ≥ 1) ≈ 0.1587,  P(3 ≤ X ≤ 5) = P(−0.5 ≤ (X − 4)/2 ≤ 0.5) ≈ 0.3829.

15. Proof. The probability that a three-engine plane functions is p₁ = C(3,2)p²(1−p) + p³ = 3p² − 2p³. The probability that a five-engine plane functions is p₂ = C(5,3)p³(1−p)² + C(5,4)p⁴(1−p) + p⁵ = p³(6p² − 15p + 10). So p₁ ≥ p₂ is reduced to (p − 1)²(2p − 1) ≤ 0, i.e. p ≤ 1/2.

16. Proof. For i = 1, …, n,

P(X = i)/P(X = i−1) = [C(n,i)p^i(1−p)^{n−i}] / [C(n,i−1)p^{i−1}(1−p)^{n−i+1}] = [(n − i + 1)/i] · [p/(1 − p)].

So P(X = i) ≥ P(X = i−1) if and only if (n − i + 1)p ≥ i(1 − p), which is equivalent to (n + 1)p ≥ i. This shows P(X = i) first increases and then decreases, reaching its maximum value when i is the largest integer less than or equal to (n + 1)p.

17. Proof. X is the sum of n i.i.d. Bernoulli random variables; Y is the sum of m i.i.d. Bernoulli random variables. Independence of X and Y implies these two families of Bernoulli random variables are also independent of each other. So X + Y, as the sum of n + m i.i.d. Bernoulli random variables, is binomial with parameters (n + m, p).

18. Proof. Because Poisson random variables may be used to approximate the distribution of the number of successes in a large number of trials when each trial has a small probability of being a success. See the discussion of the Poisson approximation in the text for details.

19. Proof. We calculate the moment generating function of X:

F(u) = E[e^{Xu}] = Σ_{k=0}^∞ e^{uk} (λ^k/k!) e^{−λ} = e^{−λ} e^{λe^u} = e^{λ(e^u − 1)}.

Then it is straightforward to calculate F′(u) = λe^{λ(e^u−1)+u} and F″(u) = λe^{λ(e^u−1)+u}(λe^u + 1). So E[X] = F′(0) = λ, E[X²] = F″(0) = λ² + λ, and Var(X) = E[X²] − (E[X])² = λ.
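A small hedged numerical check of the conclusion of Problem 16 above (the parameter values n = 10 and p = 0.3 are our own illustrative choices): the binomial probabilities should peak at Int[(n + 1)p].

n = 10; p = 0.3;
probs = zeros(1, n+1);
for i = 0:n
    probs(i+1) = nchoosek(n, i) * p^i * (1-p)^(n-i);   % P(X = i)
end
[temp, imax] = max(probs);
disp([imax-1, floor((n+1)*p)])   % the location of the maximum and Int[(n+1)p] should agree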

20. Proof. X can be seen as the limit of a sequence of binomial random variables (X_i)_{i=1}^∞ such that X_i has parameters (n_i, p_i) and n_i p_i → λ₁ as i → ∞. Similarly, Y can be seen as the limit of a sequence of binomial random variables (Y_i)_{i=1}^∞ such that Y_i has parameters (m_i, p_i) and m_i p_i → λ₂ as i → ∞. By the result of Exercise 17, X_i + Y_i is binomial with parameters (m_i + n_i, p_i), where (m_i + n_i)p_i → λ₁ + λ₂. Therefore, X + Y = lim_i (X_i + Y_i) is Poisson with parameter λ₁ + λ₂.

For an analytic proof, we note

P(X + Y = k) = Σ_{i=0}^k P(X = i)P(Y = k − i) = Σ_{i=0}^k [e^{−λ₁} λ₁^i/i!][e^{−λ₂} λ₂^{k−i}/(k−i)!]
 = (e^{−(λ₁+λ₂)}/k!) Σ_{i=0}^k [k!/(i!(k−i)!)] λ₁^i λ₂^{k−i} = e^{−(λ₁+λ₂)} (λ₁ + λ₂)^k / k!.

21. Proof. We can use the recursion P(X = n + 1) = [λ/(n + 1)] · P(X = n), starting from P(X = 0) = e^{−λ}, as follows:

function p = ross_2_21(lambda, n)
% ROSS_2_21 solves Exercise Problem 21 of Chapter 2, [1]. The function
% calculates the probability of a Poisson random variable with rate LAMBDA
% being equal to N.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
if (lambda <= 0) || (n < 0)
    error('Invalid argument');
end
p = exp(-lambda);
if (n == 0)
    return;
else
    for i = 0:(n-1)
        p = p * lambda / (i+1);
    end
end

22. Proof. P(X > n) = Σ_{k=n+1}^∞ p(1−p)^{k−1} = p(1−p)^n Σ_{k=0}^∞ (1−p)^k = p(1−p)^n · (1/p) = (1−p)^n.

23. Proof. Let (X_i)_{i=1}^∞ be i.i.d. random variables with parameter p = 0.6, such that X_i = 1 if A wins the i-th game and X_i = −1 otherwise. If we define S_n = Σ_{i=1}^n X_i, then A wins the match if and only if the random walk S_n reaches 5 before it reaches −5. This probability is (0 − (−5))/(5 − (−5)) = 0.5 by Wald's equation. For details, see the discussion of simple random walk in Durrett [1].¹

24. Proof. X = Σ_{i=1}^n Y_i is self-evident. By symmetry, each Y_i has the same distribution, with P(Y_i = 1) = N/(N + M). So E[X] = nN/(N + M).

25. Proof. Since U is uniformly distributed, P(5 ≤ U ≤ 15) equals the length of the interval [5, 15] divided by the length of the range of U.

26. Proof. The moment generating function is

F(u) = E[e^{uX}] = ∫_{−∞}^∞ e^{ux} (1/(√(2π)σ)) e^{−(x−μ)²/(2σ²)} dx = (1/(√(2π)σ)) ∫_{−∞}^∞ e^{−[(x−μ−uσ²)² − u²σ⁴ − 2μuσ²]/(2σ²)} dx = e^{u²σ²/2 + μu}.

So E[X] = F′(0) = μ, E[X²] = F″(0) = σ² + μ², and Var(X) = E[X²] − (E[X])² = σ².

27. Proof. If X is a binomial random variable with parameters (n, p), X can be written as X = Σ_{i=1}^n X_i, where (X_i)_{i=1}^n is a sequence of i.i.d. Bernoulli random variables with parameter p. So by the central limit theorem (note E[X_i] = p, Var(X_i) = p(1−p)),

(X − np)/√(np(1−p)) = (Σ_{i=1}^n X_i − np)/√(np(1−p)) → N(0, 1), as n → ∞.

This is the rationale behind the approximation

P((X − np)/√(np(1−p)) ≤ x) ≈ (1/√(2π)) ∫_{−∞}^x e^{−t²/2} dt

when n is large.

Remark: This provides an approximate way to compute binomial distributions while avoiding the calculation of factorials. Moreover, since the Poisson distribution can be approximated by binomial distributions, this exercise problem shows that, when λ is large, the Poisson distribution can be approximated by N(λ, λ). See Feller [2], VII.5 for details.

28. Proof. The Laplace transform is (u ≥ 0)

F(u) = E[e^{−uX}] = ∫_0^∞ e^{−ux} λe^{−λx} dx = λ/(λ + u) = Σ_{n=0}^∞ (−1/λ)^n u^n, for u sufficiently small.

So E[X] = −F′(0) = 1/λ, E[X²] = F″(0) = 2/λ², and Var(X) = 2/λ² − (1/λ)² = 1/λ².

¹ I didn't find a simple, elementary solution, except by calculating one by one the probability that A wins the match at the n-th individual game (n = 5, 6, 7, 8, 9).
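Because the printed value of p in Problem 23 is hard to read in this copy, the following hedged Monte Carlo sketch (the function name is ours) estimates the probability that A wins a first-to-five-wins match for any per-game probability p, and can be used to check the answer above:

function prob = ross_2_23_sim(p, numSim)
% Estimate the probability that A wins a match that ends as soon as either
% player has won 5 games, when A wins each game independently with probability p.
wins = 0;
for i = 1:numSim
    a = 0; b = 0;
    while (a < 5 && b < 5)
        if (rand(1,1) < p)
            a = a + 1;
        else
            b = b + 1;
        end
    end
    wins = wins + (a == 5);
end
prob = wins/numSim;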

29. Proof. We first give a proof without computation. By symmetry,

P(C is the last to leave) = 2P(A leaves before B and B leaves before C) = 2P(B leaves before C | A leaves before B) P(A leaves before B).

Since exponential random variables are memoryless, after A leaves before B, C and B start anew, as if they had started being served simultaneously. So P(B leaves before C | A leaves before B) = 1/2 and we can conclude P(C is the last to leave) = 2 · (1/2) · (1/2) = 1/2.

A direct computation is carried out as follows. Denoting the service times of A, B and C by X, Y and Z, respectively, we have

P(C is the last to leave) = 1 − P(X ≥ Y + Z) − P(Y ≥ X + Z),

where

P(X ≥ Y + Z) = ∫_0^∞ P(X ≥ t | Y + Z = t) λe^{−λt} (λt/1!) dt = ∫_0^∞ e^{−λt} λe^{−λt} λt dt = 1/4,

and likewise P(Y ≥ X + Z) = 1/4, so P(C is the last to leave) = 1 − 1/4 − 1/4 = 1/2.

30. Proof. We note that for any t,

P(max(X, Y) ≤ t) = P(X ≤ t and Y ≤ t) = P(X ≤ t)P(Y ≤ t) = (1 − e^{−λt})(1 − e^{−μt}) = 1 − e^{−λt} − e^{−μt} + e^{−(λ+μ)t}.

So max(X, Y) is not an exponential random variable.

31. Proof. The probability is equal to P(N_t = 0) = e^{−λt} = e^{−0.3·4} ≈ 0.30.

32. Proof.

P(N(s) = k | N(t) = n) = P(N(s) = k, N(t) = n)/P(N(t) = n) = P(N(s) = k, N(t) − N(s) = n − k)/P(N(t) = n)
 = P(N(s) = k)P(N(t − s) = n − k)/P(N(t) = n)
 = [ (λs)^k e^{−λs}/k! · (λ(t−s))^{n−k} e^{−λ(t−s)}/(n−k)! ] / [ (λt)^n e^{−λt}/n! ]
 = [n!/(k!(n−k)!)] (s/t)^k (1 − s/t)^{n−k}.

Remark: This gives an interesting observation: when N(t) is known, the number of events by time s (s < t) follows a binomial distribution with parameters (N(t), s/t). Equivalently, we can say that given N(t) = n, the timings of the first n events are i.i.d. uniformly distributed over (0, t).

33. Proof.

P(N(s) = k | N(t) = n) = P(N(s) − N(t) = k − n | N(t) = n) = P(N(s − t) = k − n) = (λ(s−t))^{k−n} e^{−λ(s−t)}/(k−n)!.

Remark: Intuitively, the formula should be clear: given N(t) = n, the conditional distribution of N(s) (s > t) behaves like N(s − t).

34. Proof. The Laplace transform is (u ≥ 0)

F(u) = E[e^{−uX}] = ∫_0^∞ e^{−ux} λe^{−λx} (λx)^{n−1}/(n−1)! dx = [λ^n/(λ+u)^n] ∫_0^∞ (λ+u)e^{−(λ+u)x} ((λ+u)x)^{n−1}/(n−1)! dx = λ^n/(λ + u)^n.

So E[X] = −F′(0) = n/λ, E[X²] = F″(0) = n(n+1)/λ², and Var(X) = E[X²] − (E[X])² = n/λ².

35. (a) Proof. E[Y | X = 2] = P(Y = 1 | X = 2) = 2/6 = 1/3.

(b) Proof. We first compute P(Y = 1):

P(Y = 1) = Σ_{j=0}^4 P(Y = 1 | X = j)P(X = j) = Σ_{j=0}^4 [(4−j)/6] · C(4,j)C(6,4−j)/C(10,4) = 2/5.

So for j = 0, 1, 2, 3, 4,

P(X = j | Y = 1) = P(Y = 1 | X = j)P(X = j)/P(Y = 1) = [(4−j)/6] · C(4,j)C(6,4−j) / [C(10,4) · (2/5)].

That gives E[X | Y = 1] = Σ_{j=0}^4 j · P(X = j | Y = 1) = 4/3.

(c) Proof. Var(Y | X = 0) = E[Y² | X = 0] − (E[Y | X = 0])² = P(Y = 1 | X = 0) − (P(Y = 1 | X = 0))² = 2/3 − (2/3)² = 2/9.

(d) Proof.

Var(X | Y = 1) = E[X² | Y = 1] − (E[X | Y = 1])² = Σ_{j=0}^4 j² P(X = j | Y = 1) − (4/3)² = 7/3 − 16/9 = 5/9.

The routine computations above are carried out by the following Matlab code.

function ross_2_35
% ROSS_2_35 works out some routine computations for Exercise Problem 35 of
% Chapter 2, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.

% Compute the probability P(Y=1).
py = 0;
denom = nchoosek(10,4);
for j = 0:4
    py = py + (4-j)/6 * nchoosek(4,j) * nchoosek(6,4-j) / denom;
end
disp(['P(Y=1) = ', num2str(py)]);

% Compute the conditional probability P(X=j | Y=1).
p = zeros(5,1);
for j = 0:4
    p(j+1) = (4-j)/6 * nchoosek(4,j) * nchoosek(6,4-j) / (denom * 0.4);
    disp(['P(X=', num2str(j), ' | Y=1) = ', num2str(p(j+1))]);
end

% Compute the conditional expectation E[X | Y=1] and the conditional
% variance Var(X | Y=1).
expec = 0;
var = 0;
for j = 0:4
    expec = expec + j * p(j+1);
    var = var + j^2 * p(j+1);
end
disp(['E[X | Y=1] = ', num2str(expec)]);
disp(['Var(X | Y=1) = ', num2str(var - expec^2)]);

36. Proof. X + Y is a gamma random variable with parameters (2, λ). So for x ∈ [0, t],

P(X = x | X + Y = t) = P(X = x, Y = t − x)/P(X + Y = t) = λe^{−λx} λe^{−λ(t−x)} / [λe^{−λt} (λt)/1!] = 1/t.

Remark: See the text for an application of this fact.
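A hedged Monte Carlo illustration of Problem 36 (the value λ = 1.5 and the sample size are our own choices): if X and Y are independent exponentials with the same rate, then X/(X+Y) should be uniform on (0, 1), which is an equivalent way of stating that the conditional density of X given X + Y = t is 1/t.

lambda = 1.5; numSim = 10000;
x = -log(rand(1, numSim))/lambda;   % exponential(lambda) samples
y = -log(rand(1, numSim))/lambda;
hist(x./(x+y), 20)                  % the histogram should look flat on (0,1)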

Chapter 3 Random Numbers

The Matlab code for the mixed congruential generator is as follows:

function ross_mcg(x0, a, b, m, num)
% ROSS_MCG generates NUM pseudorandom numbers through the mixed congruential
% formula x(n) = (a * x(n-1) + b) mod m.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
numbers = zeros(1, num);
numbers(1) = mod(a * x0 + b, m);
for i = 2:num
    numbers(i) = mod(a * numbers(i-1) + b, m);
end
disp(numbers);

1. Proof. 15, 45, 135, 105, 15, 45, 135, 105, 15, 45.

2. Proof. 22, 117, 192, 167, 42, 17, 92, 67, 142, 117.

The Matlab code for Problems 3-9 goes as follows:

function ross_3_3_9
% ROSS_3_3_9 works out Exercise Problems 3-9 of Chapter 3, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
numsim = 10000;
points = rand(numsim,1);
r3 = mean(feval('func3', points));
r4 = mean(feval('func4', points));
r5 = mean(feval('func5', points));
r7 = mean(feval('func7', points));

points1 = rand(numsim,1);
r8 = mean(feval('func8', points, points1));
r9 = mean(feval('func9', points, points1));
disp([r3, r4, r5, r7, r8, r9]);
end % ross_3_3_9

% Nested functions
function y = func3(x)
y = exp(exp(x));
end % func3

function y = func4(x)
y = (1-x.^2).^(1.5);
end % func4

function y = func5(x)
y = 4 * exp(16*x.^2 - 12*x + 2);
end % func5

function y = func7(x)
y = 2 * exp(-x.^2 ./ (1-x).^2) ./ (1-x).^2;
end % func7

function z = func8(x,y)
z = exp((x+y).^2);
end % func8

function z = func9(x,y)
z = (y <= x) .* exp(-x./(1-x)) .* exp(-y./(1-y)) ./ ((1-x).^2 .* (1-y).^2);
end % func9

3. Proof. By the change of variable formula, we have ∫_0^1 exp{e^x} dx = ∫_1^e e^u u^{−1} du. Using the exponential integral function Ei(z) = ∫_{−∞}^z (e^t/t) dt, the formula can be further written as Ei(e) − Ei(1). Mathematica evaluates the result as approximately 6.32; Monte Carlo simulation gives estimates around 6.3.

4. Proof. Mathematica gives the exact answer 3π/16 ≈ 0.589; Monte Carlo simulation gives estimates around 0.589.

5. Proof. By the substitution x = 4t − 2, ∫_{−2}^2 e^{x+x²} dx is transformed to ∫_0^1 4e^{16t²−12t+2} dt. Mathematica evaluates this integral as approximately 93; Monte Carlo simulation gives estimates around that value.

6. Proof. The exact answer is 1/2. There is no need to do Monte Carlo simulation, since when the formula is transformed into an integral over (0, 1), the integrand is identically equal to 1/2.

7. Proof. ∫_{−∞}^∞ e^{−x²} dx = √π ≈ 1.77. To prepare for Monte Carlo simulation, we use symmetry and the substitution x = t/(1 − t):

∫_{−∞}^∞ e^{−x²} dx = 2∫_0^∞ e^{−x²} dx = 2∫_0^1 e^{−t²/(1−t)²} (1−t)^{−2} dt.

The Monte Carlo simulation gives estimates around 1.77.

8. Proof. Mathematica evaluates the integral as approximately 4.9; Monte Carlo simulation gives estimates around that value.

9. Proof.

∫_0^∞ ∫_0^x e^{−(x+y)} dy dx = ∫_0^∞ e^{−x}(1 − e^{−x}) dx = 1 − 1/2 = 1/2.

To facilitate Monte Carlo simulation, we use the substitution x = u/(1−u), y = v/(1−v) and conclude

∫_0^∞ ∫_0^x e^{−(x+y)} dy dx = ∫_0^1 ∫_0^1 1_{{v ≤ u}} e^{−(u/(1−u) + v/(1−v))} (1−u)^{−2}(1−v)^{−2} du dv.

Monte Carlo simulation gives estimates around 0.5.

10. Proof. Cov(X, Y) = E[XY] − E[X]E[Y]. So

Cov(U, e^U) = ∫_0^1 xe^x dx − ∫_0^1 x dx ∫_0^1 e^x dx = 1 − (e−1)/2 = (3−e)/2 ≈ 0.141.

Monte Carlo simulation gives estimates around 0.14. The Matlab code is the following:

function result = ross_3_10
% ROSS_3_10 works out Exercise Problem 10 of Chapter 3, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
numsim = 10000;
U = rand(numsim,1);
result = mean(U.*exp(U)) - mean(U) * mean(exp(U));

11. Proof.

Cov(U, √(1−U²)) = ∫_0^1 x√(1−x²) dx − ∫_0^1 x dx ∫_0^1 √(1−x²) dx = 1/3 − (1/2)(π/4) = 1/3 − π/8 ≈ −0.0594.

Similarly,

Cov(U², √(1−U²)) = ∫_0^1 x²√(1−x²) dx − ∫_0^1 x² dx ∫_0^1 √(1−x²) dx = π/16 − (1/3)(π/4) = −π/48 ≈ −0.0655.

Monte Carlo simulation gives estimates around −0.0594 and −0.0655, respectively. The corresponding Matlab code goes as follows:

function [result1, result2] = ross_3_11
% ROSS_3_11 works out Exercise Problem 11 of Chapter 3, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
numsim = 10000;
U = rand(numsim,1);
result1 = mean(U.*sqrt(1-U.^2)) - mean(U)*mean(sqrt(1-U.^2));
result2 = mean(U.^2.*sqrt(1-U.^2)) - mean(U.^2)*mean(sqrt(1-U.^2));

12. Proof. We would guess E[N] = e. The Matlab code for the Monte Carlo simulation goes as follows:

function result = ross_3_12(numSim)
% ROSS_3_12 works out Exercise Problem 12 of Chapter 3, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
total = 0;
for i = 1:numSim
    total = total + ross_fpt_uniform;
end
result = total/numSim;
end % ross_3_12

% Nested function
function counter = ross_fpt_uniform
% ROSS_FPT_UNIFORM generates the first passage time of the sum of uniform
% random variables exceeding level 1.
s = 0;
counter = 0;
while (s <= 1)
    s = s + rand(1,1);
    counter = counter + 1;
end
end % ross_fpt_uniform

13. Proof. Since S_n = Π_{i=1}^n U_i (with S₀ := 1) is strictly monotone decreasing, N = max{n : Π_{i=1}^n U_i ≥ e^{−3}} = min{n : Π_{i=1}^n U_i < e^{−3}} − 1. Monte Carlo simulation finds that E[N] is about 3, and that P(N = i) for i = 0, 1, …, 6 are, respectively, 0.05, 0.15, 0.224, 0.224, 0.17, 0.10, 0.05. The corresponding Matlab code goes as follows:

function [expec, prob] = ross_3_13(numSim)
% ROSS_3_13 works out Exercise Problem 13 of Chapter 3, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
total = 0;
test_vec = [0, 1, 2, 3, 4, 5, 6];
count = zeros(1,7);
for i = 1:numSim
    temp = ross_fpt_loguniform;
    count = count + (test_vec == temp);
    total = total + temp;
end
expec = total/numSim;    % E[N]
prob = count/numSim;     % prob(i) = P(N = i-1), i = 1:7
end % ross_3_13

% Nested function
function counter = ross_fpt_loguniform
% ROSS_FPT_LOGUNIFORM generates the first passage time of the sum of
% log-uniform random variables going below level -3.
s = 0;
counter = 0;
while (s >= -3)
    s = s + log(rand(1,1));
    counter = counter + 1;
end
counter = counter - 1;
end % ross_fpt_loguniform

Remark: By the remark on page 65, N is actually a Poisson random variable with parameter λ = 3.
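To check the remark numerically (a hedged sketch; the number of replications is our own choice), one can compare the simulated probabilities returned by ross_3_13 with the Poisson(3) probability mass function:

[expec, prob] = ross_3_13(10000);
i = 0:6;
disp([prob; exp(-3) * 3.^i ./ factorial(i)])   % simulated vs. exact P(N = i), i = 0,...,6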

Chapter 4 Generating Discrete Random Variables

1. Proof. The Matlab code goes as follows:

function result = ross_4_1(numSim)
% ROSS_4_1 works out Exercise Problem 1 of Chapter 4, [1]. We consider the
% possible values of the random variable in decreasing order of their
% probabilities. Since x1 and x2 are not specified, we assume
% P(X = 1) = 1/3 and P(X = 2) = 2/3.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
result = 1 - sum(rand(1,numSim) <= 2/3)/numSim;

2. Proof. The Matlab code goes as follows:

function result = ross_4_2(probs, values)
% ROSS_4_2 works out Exercise Problem 2 of Chapter 4, [1]. PROBS is a column
% vector that stores the probability masses, and VALUES is a column vector
% that stores the corresponding values of the random variable. We first sort
% PROBS in descending order (VALUES is sorted accordingly), and then apply
% the basic procedure described in Example 4a on page 46, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.

% Sort the probability masses in descending order; the values of the random
% variable are also sorted accordingly.

sortedmat = sortrows([probs, values], -1);
cumprobs = cumsum(sortedmat(:,1));
values = sortedmat(:,2);

% Generate an incidence of the random variable.
u = rand(1,1);
for i = 1:numel(probs)
    if u < cumprobs(i)
        result = values(i);
        return;
    end
end

3. Proof. The solution to Exercise Problem 2 has given the desired algorithm. The Matlab command to input is

ross_4_2([.3; .2; .35; .15], [1; 2; 3; 4])

4. Proof. The exact answer can be found in Feller [2], IV.4, the matching problem. A good approximation is that the number of hits follows a Poisson distribution with parameter 1, so the expectation and the variance are both about 1. The simulation program relies on the generation of a random permutation, as described in Example 4b, page 47. We first produce the code for generating a random permutation:

function randomized = ross_rperm(standard)
% ROSS_RPERM generates a random permutation of the entries of the input
% vector STANDARD.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
numelements = numel(standard);
randomized = standard;
for k = numelements:-1:2
    index = floor(k*rand(1,1)) + 1;
    temp = randomized(k);
    randomized(k) = randomized(index);
    randomized(index) = temp;
end

The corresponding Matlab code for the problem itself is as follows:

function [e, v] = ross_4_4(numSim, numCards)
% ROSS_4_4 works out Exercise Problem 4 of Chapter 4, [1]. NUMSIM is the
% number of simulations to be carried out, and NUMCARDS is the number of cards.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.

mysum = 0;
sqrtsum = 0;
for i = 1:numSim
    temp = sum(ross_rperm(1:numCards) == (1:numCards));
    mysum = mysum + temp;
    sqrtsum = sqrtsum + temp^2;
end
e = mysum/numSim;
v = sqrtsum/numSim - e * e;

5. (a) Proof. The Matlab code goes as follows:

function randomized = ross_rperm2(standard)
% ROSS_RPERM2 generates a random permutation of the entries of the input
% vector STANDARD. The algorithm follows Exercise Problem 5 of Chapter 4, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
numelements = numel(standard);
if (numelements == 1)
    randomized = standard;
else
    randomized = zeros(1, numelements);
    randomized(1:(numelements-1)) = ross_rperm2(standard(1:(numelements-1)));
    index = floor(rand(1,1)*numelements) + 1;
    randomized(numelements) = randomized(index);
    randomized(index) = standard(numelements);
end

(b) Proof. Denote by P((1, …, n) → (P₁, …, Pₙ)) the probability that the algorithm transforms (1, …, n) into (P₁, …, Pₙ). For any given (P₁, …, Pₙ), if P_i = n, we must have

P((1, …, n) → (P₁, …, Pₙ))
 = P((1, …, n−1) → (P₁, …, P_{i−1}, Pₙ, P_{i+1}, …, P_{n−1}), then append n to the end and interchange Pₙ and n)
 = P((1, …, n−1) → (P₁, …, P_{i−1}, Pₙ, P_{i+1}, …, P_{n−1})) · (1/n).

Clearly P((1) → (1)) = 1. If we assume P((1, …, n−1) → (P₁, …, P_{n−1})) = 1/(n−1)! for every permutation (P₁, …, P_{n−1}) of (1, …, n−1), then the above sequence of equalities shows

P((1, …, n) → (P₁, …, Pₙ)) = 1/(n−1)! · 1/n = 1/n!.

By induction, we can conclude that the algorithm works.
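Part (b) can also be checked empirically (a hedged sketch; the sample size is our own choice): tabulating the outputs of ross_rperm2 on (1, 2, 3) should give each of the 3! = 6 orderings with frequency close to 1/6.

numSim = 6000;
allperms = perms(1:3);              % the 6 possible orderings
counts = zeros(1, 6);
for i = 1:numSim
    p = ross_rperm2(1:3);
    [tf, idx] = ismember(p, allperms, 'rows');
    counts(idx) = counts(idx) + 1;
end
disp(counts/numSim)                 % each entry should be close to 1/6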

6. Proof. We note

(1/N) Σ_{i=1}^N e^{i/N} ≈ ∫_0^1 e^x dx = E[e^U],

where U is a uniform random variable distributed over (0, 1). So with 100 uniformly distributed random numbers, we can approximate Σ_{i=1}^N e^{i/N} by N · (1/100) Σ_{i=1}^{100} e^{U_i}. The approximation and comparison are carried out in the following Matlab code:

function [exact, approx] = ross_4_6(N, numrn)
% ROSS_4_6 works out Exercise Problem 6 of Chapter 4, [1].
% In the problem, N = 10000 and numrn = 100.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
exact = sum(exp((1:N)/N));
approx = N * mean(exp(rand(1,numrn)));

7. Proof. The expected number of dice rolls is approximately 61. The Matlab code is given below:

function result = ross_4_7(numSim)
% ROSS_4_7 works out Exercise Problem 7 of Chapter 4, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
result = 0;
for i = 1:numSim
    result = result + countrolls;
end
result = result/numSim;
end % ross_4_7

% Nested function
function numrolls = countrolls
% COUNTROLLS counts the number of dice rolls that are needed for all the
% possible outcomes to occur.
marks = ones(1,11);    % marks(i) = 1 iff outcome (i+1) has not occurred.
numrolls = 0;
while (sum(marks) > 0)
    numrolls = numrolls + 1;
    outcome = sum(floor(6*rand(1,2)) + 1);
    marks(outcome-1) = 0;
end
end % countrolls

8. Proof. Omitted.

9. Proof. Omitted.

10. (a) Proof. If X is a negative binomial random variable with parameters (r, p), it can be represented as X = Σ_{i=1}^r X_i, where (X_i)_{i=1}^r is an i.i.d. sequence of geometric random variables with parameter p. Since a geometric random variable with parameter p can be simulated by Int[log(U)/log(1−p)] + 1 (U is a uniform random variable over (0, 1)), we can use the following program to simulate X:

function result = ross_4_10a(r, p)
% ROSS_4_10A works out Exercise Problem 10(a) of Chapter 4, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
q = 1 - p;
result = sum(floor(log(rand(1,r))/log(q)) + 1);

(b) Proof. We note

p_{j+1}/p_j = [j!/((j+1−r)!(r−1)!)] p^r (1−p)^{j+1−r} / ([(j−1)!/((j−r)!(r−1)!)] p^r (1−p)^{j−r}) = j(1−p)/(j + 1 − r).

(c) Proof. We note p_{j+1}/p_j ≥ 1 if and only if j ≤ (r−1)/p. So (p_j)_{j=r}^∞ achieves its maximum at I = Int[(r−1)/p] if I ≥ r. To make the algorithm efficient, we should start the search from I: if X ≤ I, search downward from I; if X > I, search upward from I + 1. Note that if we represent X as F_X^{−1}(U), where F_X is the distribution function of X and U is a uniform random variable over (0, 1), then X ≤ I if and only if U ≤ F(I). Therefore, we have the following program:

function result = ross_4_10c(r, p)
% ROSS_4_10C works out Exercise Problem 10(c) of Chapter 4, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
u = rand(1,1);
i = floor((r-1)/p);    % i is the index at which the probability achieves its maximum.
if (i <= r)
    prob = p^r;
    cumprob = prob;
    j = r;

    while (u > cumprob)
        prob = j*(1-p)/(j+1-r) * prob;
        cumprob = cumprob + prob;
        j = j + 1;
    end
    result = j;
    return;
else
    % Generate the cumulative probability up to i.
    prob = p^r;
    cumprob = prob;
    for j = r:(i-1)
        prob = j*(1-p)/(j+1-r) * prob;
        cumprob = cumprob + prob;
    end
    % At the end of the loop, j = i-1, prob = p(i), and cumprob = F(i).
    if (u > cumprob)
        % Loop for upward search. At the end of the loop, if j = n, then
        % prob = p(n+1) and cumprob = F(n+1). u satisfies F(n) < u <= F(n+1).
        % RESULT should be set as n+1 = j+1.
        while (u > cumprob)
            j = j + 1;
            prob = j*(1-p)/(j+1-r) * prob;
            cumprob = cumprob + prob;
        end
        result = j+1;
    else
        % Loop for downward search. At the end of the loop, if j = n, then
        % prob = p(n+1) and cumprob = F(n+1). u satisfies F(n+1) < u <= F(n+2).
        % RESULT should be set as n+2 = j+2.
        while (u <= cumprob && j >= (r-1))
            prob = prob * (j+1-r)/(j*(1-p));
            cumprob = cumprob - prob;
            j = j-1;
        end
        result = j+2;
    end
end

(d) Proof.

function result = ross_4_10d(r, p)
% ROSS_4_10D works out Exercise Problem 10(d) of Chapter 4, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
result = 0;
numsuccess = 0;
while (numsuccess < r)
    if (rand(1,1) < p)
        numsuccess = numsuccess + 1;
    end

    result = result + 1;
end

11. Proof. By the symmetry of N(0, 1), we have

E[|Z|] = E[Z·1_{Z>0}] + E[−Z·1_{Z<0}] = 2E[Z·1_{Z>0}] = 2∫_0^∞ x e^{−x²/2}/√(2π) dx = (2/√(2π)) ∫_0^∞ e^{−u} du = √(2/π).

12. Proof. The first method is the acceptance-rejection method. We define

p_i = (e^{−λ} λ^i/i!) / Σ_{j=0}^k (e^{−λ} λ^j/j!)  if i ≤ k,  p_i = 0 if i > k,  and  q_i = e^{−λ} λ^i/i!.

Define c = 1/Σ_{j=0}^k (e^{−λ} λ^j/j!); then p_i/q_i ≤ c. So we can first generate a Poisson random variable Y with parameter λ and then accept or reject it according to the value of an independent uniform random variable. First, we produce the Matlab code for generating a Poisson random variable:

function result = ross_rpoisson(lambda)
% ROSS_RPOISSON generates a Poisson random variable with parameter LAMBDA.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
u = rand(1,1);
i = floor(lambda-1);    % i is the index at which the probability achieves its maximum.
if (i <= 0)
    prob = exp(-lambda);
    cumprob = prob;
    j = 0;
    while (u > cumprob)
        prob = lambda/(j+1) * prob;
        cumprob = cumprob + prob;
        j = j + 1;
    end
    result = j;
    return;
else
    % Generate the cumulative probability up to i.
    prob = exp(-lambda);
    cumprob = prob;
    for j = 0:(i-1)
        prob = lambda/(j+1) * prob;
        cumprob = cumprob + prob;
    end
    % At the end of the loop, j = i-1, prob = p(i), and cumprob = F(i).

    if (u > cumprob)
        % Loop for upward search. At the end of the loop, if j = n, then
        % prob = p(n+1) and cumprob = F(n+1). u satisfies F(n) < u <= F(n+1).
        % RESULT should be set as n+1 = j+1.
        while (u > cumprob)
            j = j + 1;
            prob = lambda/(j+1) * prob;
            cumprob = cumprob + prob;
        end
        result = j+1;
        return;
    else
        % Loop for downward search. At the end of the loop, if j = n, then
        % prob = p(n+1) and cumprob = F(n+1). u satisfies F(n+1) < u <= F(n+2).
        % RESULT should be set as n+2 = j+2.
        while (u <= cumprob && j >= -1)
            prob = prob * (1+j)/lambda;
            cumprob = cumprob - prob;
            j = j-1;
        end
        result = j+2;
        return;
    end
end

Then, we note

p_i/(c q_i) = 1 if i ≤ k, and 0 if i > k.

So the simulation program based on the acceptance-rejection method can be written as follows:

function result = ross_4_12_aj(k, lambda)
% ROSS_4_12_AJ works out Exercise Problem 12 of Chapter 4, [1], by the
% acceptance-rejection method.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.

% Acceptance-rejection method.
y = ross_rpoisson(lambda);
while (y > k)
    y = ross_rpoisson(lambda);
end
result = y;
end % ross_4_12_aj

For the second method, we note p_i/p_{i−1} = λ/i (i = 1, …, k). So a Matlab program based on the inverse transform method (without any optimization of the efficiency) is as follows:

function result = ross_4_12(k, lambda)
% ROSS_4_12 works out Exercise Problem 12 of Chapter 4, [1].

% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
probs = zeros(1,k+1);
probs(1) = 1;
for i = 2:(k+1)
    probs(i) = probs(i-1) * lambda / (i-1);
end
probs = probs/sum(probs);
cumprob = cumsum(probs);
u = rand(1,1);
for i = 1:(k+1)
    if (u <= cumprob(i))
        result = i-1;
        return;
    end
end
end % ross_4_12

13. (a) Proof. We note

P(Y = i) = P(X = i)/P(X ≥ k) = C(n,i) p^i (1−p)^{n−i} / α,  i = k, …, n,

where α = P(X ≥ k), and for i = k, …, n−1,

P(Y = i+1)/P(Y = i) = [(n − i)/(i + 1)] · [p/(1 − p)].

So the Matlab code based on the inverse transform method is as follows:

function result = ross_4_13a(n, k, p)
% ROSS_4_13A works out Exercise Problem 13(a) of Chapter 4, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.

% Calculate P(X >= k) and P(X = i), i = k, ..., n.
probs = zeros(1, n-k+1);
probs(1) = nchoosek(n,k) * p^k * (1-p)^(n-k);
for i = (k+1):n
    probs(i-k+1) = probs(i-k) * (n-i+1)/i * p/(1-p);
end
alpha = sum(probs);
probs = probs/alpha;
cumprob = cumsum(probs);

% Generate the random number via the inverse transform method.
u = rand(1,1);
j = 1;
while (u > cumprob(j))
    j = j + 1;
end

result = j;
end % ross_4_13a

(b) Proof. We can generate Y based on the acceptance-rejection method. The solution is similar to that of Problem 12, so we omit the details. Also, we use Matlab's random number generator to generate a binomial random variable, instead of implementing our own.

function result = ross_4_13b(n, k, p)
% ROSS_4_13B works out Exercise Problem 13(b) of Chapter 4, [1], via the
% acceptance-rejection method.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
y = random('bino', n, p);
while (y < k)
    y = random('bino', n, p);
end
result = y;
end % ross_4_13b

(c) Proof. Since in (b) we used the acceptance-rejection method, on average we need 1/α binomial random variables to generate one Y, where α = P(X ≥ k). So when α is small, the algorithm in (b) is inefficient.

14. Proof. We note p_j can be represented as p_j = 0.55 p_j^{(1)} + 0.45 p_j^{(2)}, where p^{(1)} is the uniform distribution over {5, 7, 9, 11, 13} and p^{(2)} is the uniform distribution over {6, 8, 10, 12, 14}. Therefore, the Matlab code based on the composition approach is as follows:

function result = ross_4_14
% ROSS_4_14 works out Exercise Problem 14 of Chapter 4, [1], via the
% composition approach.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
u = rand(1,2);
if u(1) < .55
    samples = [5, 7, 9, 11, 13];
    result = samples(floor(5*u(2)) + 1);
else
    samples = [6, 8, 10, 12, 14];
    result = samples(floor(5*u(2)) + 1);
end

end % ross_4_14

15. Proof. We note the probability function (p_i)_{i=1}^{10} can be represented as

p_i = 0.6 p_i^{(1)} + 0.35 p_i^{(2)} + 0.02 p_i^{(3)} + 0.01 p_i^{(4)} + 0.02 p_i^{(5)},

where p^{(1)} is the uniform distribution over {1, …, 10}, p^{(2)} is the uniform distribution over {6, …, 10}, p^{(3)} is the Dirac distribution on 6, p^{(4)} is the Dirac distribution on 8, and p^{(5)} is the Dirac distribution on 9. So we can have the following simulation program:

function result = ross_4_15
% ROSS_4_15 works out Exercise Problem 15 of Chapter 4, [1], via the
% composition approach.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
u = rand(1,2);
if (u(1) <= .6)
    result = floor(10*u(2)) + 1;
elseif (u(1) <= .95)
    result = floor(5*u(2)) + 6;
elseif (u(1) <= .97)
    result = 6;
elseif (u(1) <= .98)
    result = 8;
else
    result = 9;
end
end % ross_4_15

16. Proof. Let p^{(1)} be the probability mass function with p_j^{(1)} = (1/2)^j (j = 1, 2, …) and p^{(2)} the probability mass function with p_j^{(2)} = 2^{j−1}/3^j (j = 1, 2, …). Then p_j = P(X = j) = (1/2)(p_j^{(1)} + p_j^{(2)}). This gives us a simulation program based on the composition approach:

function result = ross_4_16
% ROSS_4_16 works out Exercise Problem 16 of Chapter 4, [1], via the
% composition approach.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
u = rand(1,2);
if (u(1) <= .5)
    prob = .5;

    cumprob = prob;
    result = 1;
    while (u(2) > cumprob)
        prob = prob * .5;
        cumprob = cumprob + prob;
        result = result + 1;
    end
else
    prob = 1/3;
    cumprob = prob;
    result = 1;
    while (u(2) > cumprob)
        prob = prob * 2/3;
        cumprob = cumprob + prob;
        result = result + 1;
    end
end
end % ross_4_16

17. (a) Proof. λ_n = P(X = n)/P(X > n−1), so 1 − λ_n = P(X > n)/P(X > n−1), and

(1−λ₁)⋯(1−λ_{n−1})λ_n = [P(X > 1)/P(X > 0)]⋯[P(X > n−1)/P(X > n−2)] · [P(X = n)/P(X > n−1)] = P(X = n) = p_n.

(b) Proof. P(X = 1) = P(U₁ < λ₁) = λ₁ = P(X = 1)/P(X > 0) = p₁. For n > 1, we have by part (a)

P(X = n) = P(U₁ ≥ λ₁, …, U_{n−1} ≥ λ_{n−1}, U_n < λ_n) = (1−λ₁)⋯(1−λ_{n−1})λ_n = p_n.

(c) Proof. If X is a geometric random variable with parameter p, then p_n = (1−p)^{n−1}p and

λ_n = p_n / (1 − Σ_{j=1}^{n−1} p_j) = (1−p)^{n−1}p / [1 − p·(1 − (1−p)^{n−1})/(1 − (1−p))] = (1−p)^{n−1}p/(1−p)^{n−1} = p.

In this case, each iteration of steps 2-3 is an independent trial which examines whether the trial's outcome is a success. This essentially uses the definition of X as the index of the first trial that is a success.
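For the geometric case in part (c), the algorithm of Problem 17 reduces to the following hedged two-line sketch (the value p = 0.3 is our own choice): keep generating uniforms until one falls below p, and return the index of that trial.

p = 0.3;
x = 1;
while (rand(1,1) >= p)   % steps 2-3: U >= lambda_n = p means "failure", move to the next trial
    x = x + 1;
end
% x is now a geometric(p) random variable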

18. (a) Proof. Y is a geometric random variable with parameter p.

(b) Proof. Omitted.

(c) Proof. Omitted.

19. Proof. The first method is to generate a random subset of size m, each element of which contributes to color type i if and only if it falls in the interval [Σ_{j=1}^{i−1} n_j + 1, Σ_{j=1}^{i−1} n_j + n_i]. This method involves the generation of m uniform random variables (Example 4b) and m searches. It is efficient if m is small relative to n.

The second method is similar to Example 4g, where we have

P(X₁ = k₁) = C(n₁, k₁) C(n − n₁, m − k₁) / C(n, m)

and

P(X_i = k_i | X₁ = k₁, …, X_{i−1} = k_{i−1}) = C(n_i, k_i) C(n − Σ_{j=1}^i n_j, m − Σ_{j=1}^i k_j) / C(n − Σ_{j=1}^{i−1} n_j, m − Σ_{j=1}^{i−1} k_j).

We note each conditional distribution is a hypergeometric distribution with parameters (n_i, n − Σ_{j=1}^i n_j, m − Σ_{j=1}^{i−1} k_j). So once the generation of a hypergeometric random variable is known, we can sequentially generate the random vector (X₁, …, X_r).
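A hedged sketch of the second method follows (the function name is our own; HYGERND is the hypergeometric generator from Matlab's Statistics Toolbox, which is assumed to be available):

function x = ross_4_19_sketch(n, m)
% Sequentially generate (X_1,...,X_r), the numbers of balls of each color in a
% random subset of size m, using the conditional hypergeometric distributions.
% n is the vector (n_1,...,n_r) of color counts and m is the subset size.
r = numel(n);
x = zeros(1, r);
remainingpop = sum(n);       % balls whose color has not been processed yet
remainingdraws = m;          % sample slots not yet filled
for i = 1:(r-1)
    % X_i given the earlier counts is hypergeometric: remainingpop balls,
    % n(i) of color i, remainingdraws drawn.
    x(i) = hygernd(remainingpop, n(i), remainingdraws);
    remainingpop = remainingpop - n(i);
    remainingdraws = remainingdraws - x(i);
end
x(r) = remainingdraws;       % the last count is whatever remains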

Chapter 5 Generating Continuous Random Variables

1. Proof. We note F(x) = ∫_0^x f(t) dt = (e^x − 1)/(e − 1). So F^{−1}(y) = ln[y(e−1) + 1], and a random variable with the given density function can be simulated by ln[U(e−1) + 1], where U is a uniform random variable over (0, 1).

2. Proof. For 2 ≤ x ≤ 3,

F(x) = ∫_2^x f(t) dt = ∫_2^x [(t − 2)/2] dt = (x/2 − 1)².

For 3 ≤ x ≤ 6,

F(x) = ∫_2^x f(t) dt = 1/4 + ∫_3^x [(2 − t/3)/2] dt = −x²/12 + x − 2.

So

F^{−1}(y) = 2(√y + 1) for 0 ≤ y ≤ 1/4,  and  F^{−1}(y) = 6 − √(12 − 12y) for 1/4 < y ≤ 1.

In order to generate a random variable with the given density function, we first generate a uniform random variable U over (0, 1). If U ≤ 1/4, we set X = 2(√U + 1); if U > 1/4, we set X = 6 − √(12 − 12U).

3. Proof. Solving the equation y = (x² + x)/2, we get F^{−1}(y) = −1/2 + √(1/4 + 2y). So a random variable having the given distribution function can be generated by −1/2 + √(1/4 + 2U), where U is a uniform random variable over (0, 1).

4. Proof. Solving the equation y = 1 − e^{−αx^β}, we get F^{−1}(y) = [−ln(1−y)/α]^{1/β}. So to generate a Weibull random variable, we can either use [−ln U/α]^{1/β}, where U is a uniform random variable over (0, 1), or use Y^{1/β}, where Y is an exponential random variable with rate α.

5. Proof. We note f is symmetric. So we first generate the absolute value of the random variable and then choose its sign according to an independent uniform random variable. More precisely, denote by X the random variable to be generated. Then the distribution function of |X| is F_{|X|}(x) = P(|X| ≤ x) = e^{2x} − 1. So we generate Y = |X| by (1/2)ln(U₁ + 1), where U₁ is uniformly distributed over (0, 1). Then we generate another uniform random variable U₂, which is independent of U₁, and determine X by

X = Y if U₂ ≤ 1/2,  X = −Y if U₂ > 1/2.

6. Proof. F(x) = ∫_0^x f(t) dt = (1 − e^{−x})/(1 − e^{−0.5}), 0 < x < 0.5. So F^{−1}(y) = −ln[1 − (1 − e^{−0.5})y], and a random variable with the given density function can be generated by −ln[1 − (1 − e^{−0.5})U], where U is a uniform random variable over (0, 1). The Matlab code to estimate E[X | X < 0.5] is as follows:

function result = ross_5_6
% ROSS_5_6 works out Exercise Problem 6 of Chapter 5, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
numsim = 10000;
result = mean(-log(1 - rand(1,numsim)*(1-exp(-.5))));
end % ross_5_6

To calculate the exact value of E[X | X < 0.5], we note

E[X | X < 0.5] = ∫_0^{0.5} x e^{−x} dx / (1 − e^{−0.5}) = (1 − 1.5e^{−0.5})/(1 − e^{−0.5}) ≈ 0.229.

7. Proof. Use a random variable with probability mass function {p₁, …, p_n} to determine which of the n distribution functions {F₁, …, F_n} will be used for the generation of the random variable, then generate the random variable according to the chosen distribution. (A short sketch of this scheme is given after Problem 8(a) below.)

8. (a) Proof. If we define F₁(x) = x, F₂(x) = x³, and F₃(x) = x⁵ (0 ≤ x ≤ 1), then F(x) = (1/3)(F₁(x) + F₂(x) + F₃(x)). The corresponding Matlab code is as follows:

function result = ross_5_8a
% ROSS_5_8A works out Exercise Problem 8(a) of Chapter 5, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
u = rand(1,2);
if (u(1) <= 1/3)
    result = u(2);
    return;
elseif (u(1) <= 2/3)
    result = u(2)^(1/3);
    return;
else
    result = u(2)^(1/5);
    return;
end
end % ross_5_8a
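A minimal hedged sketch of the composition scheme described in Problem 7 (the function name and the cell-array interface for the inverse distribution functions are our own illustrative choices):

function x = ross_5_7_sketch(p, Finv)
% P is the vector of mixture weights p_1,...,p_n and FINV is a cell array of
% function handles, FINV{i} being the inverse of the distribution function F_i.
i = find(rand(1,1) <= cumsum(p), 1);   % pick which F_i to sample from
x = Finv{i}(rand(1,1));                % inverse-transform sample from the chosen F_i

For instance, ross_5_7_sketch([1/3 1/3 1/3], {@(u) u, @(u) u.^(1/3), @(u) u.^(1/5)}) reproduces the generator of Problem 8(a).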

(b) Proof. It is easy to see the density function corresponding to F(x) is

f(x) = (2/3)(1 + e^{−2x}) if 0 < x < 1,  and  f(x) = (2/3)e^{−2x} if 1 < x < ∞.

If we define p₁ = 2/3, p₂ = (1 − e^{−2})/3, p₃ = e^{−2}/3, f₁(x) = 1_{(0,1)}(x), f₂(x) = [2e^{−2x}/(1 − e^{−2})] 1_{(0,1)}(x), and f₃(x) = 2e^{−2(x−1)} 1_{(1,∞)}(x), then f(x) = p₁f₁(x) + p₂f₂(x) + p₃f₃(x). The corresponding distribution functions are

F₁(x) = x·1_{(0,1]}(x) + 1_{(1,∞)}(x),  F₂(x) = [(1 − e^{−2x})/(1 − e^{−2})]·1_{(0,1]}(x) + 1_{(1,∞)}(x),  F₃(x) = (1 − e^{−2(x−1)})·1_{(1,∞)}(x).

The Matlab code is as follows:

function result = ross_5_8b
% ROSS_5_8B works out Exercise Problem 8(b) of Chapter 5, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
u = rand(1,2);
if (u(1) <= 2/3)
    result = u(2);
    return;
elseif (u(1) < (3-exp(-2))/3)
    result = -.5 * log(1 - u(2) * (1-exp(-2)));
    return;
else
    result = 1 - .5 * log(1-u(2));
    return;
end
end % ross_5_8b

(c) Proof. The Matlab code is as follows:

function result = ross_5_8c(alpha)
% ROSS_5_8C works out Exercise Problem 8(c) of Chapter 5, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
u = rand(1,2);
n = numel(alpha);
cumalpha = cumsum(alpha);
for i = 1:n
    if (u(1) <= cumalpha(i))
        result = (u(2)*(1+i))^(1/(1+i));
        return;
    end
end

end % ross_5_8c

9. Proof. Following the hint, we design the algorithm as follows:

Step 1: Generate an exponential random variable Y with rate 1.
Step 2: Based on the value of Y, say y, generate X according to the distribution function F(x) = x^y·1_{(0,1]}(x) + 1_{(1,∞)}(x).

It is easy to see the random variable X thus generated has the desired distribution function, since

P(X ≤ x) = ∫_0^∞ P(X ≤ x | Y = y) e^{−y} dy = ∫_0^∞ x^y e^{−y} dy.

The corresponding Matlab code is as follows:

function result = ross_5_9
% ROSS_5_9 works out Exercise Problem 9 of Chapter 5, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
y = random('exp', 1);
result = (rand(1,1))^(1/y);
end % ross_5_9

10. Proof. Denote by N the number of claims in the next month, and let (X_i)_{i=1}^∞ be an i.i.d. sequence of exponential random variables with mean 800. Then N is a binomial random variable with parameters (1000, 0.05). The sum S of claims made in the next month can be represented as S = Σ_{i=1}^N X_i. The algorithm for the simulation program can therefore be designed as follows:

Step 1: Set C = 0.
Step 2: Generate a binomial random variable N with parameters (1000, 0.05).
Step 3: Assuming N = n, generate n exponential random variables with mean 800 and add them up.
Step 4: If the sum obtained in Step 3 is greater than $50,000, count 1; otherwise, count 0. Add this counting result to C.
Step 5: Go back to Step 2 until the desired number of simulations has been achieved.

The corresponding Matlab code is as follows:

function result = ross_5_10(numSim)
% ROSS_5_10 works out Exercise Problem 10 of Chapter 5, [1].
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
count = 0;

for i = 1:numSim
    n = random('bino', 1000, .05);
    count = count + (sum(random('exp', 800, 1, n)) > 50000);
end
result = count/numSim;
end % ross_5_10

11. Proof. Following the method outlined at the end of Section 5.1, we have the following Matlab code:

function result = ross_5_11(mu, k)
% ROSS_5_11 works out Exercise Problem 11 of Chapter 5, [1]. ROSS_5_11
% generates a set of K i.i.d. exponential random variables with mean MU.
% Reference: [1] Ross: Simulation, Third Edition, Academic Press, 2002.
if (k == 1)
    result = random('exp', mu, 1, 1);
    return;
else
    t = -mu * log(prod(rand(1,k)));
    U = rand(1,k-1);
    result = diff([0, t * sort(U, 'ascend'), t]);
end
end % ross_5_11

12. Proof. We can take X = max{X₁, …, X_n} for part (a) and X = min{X₁, …, X_n} for part (b), as illustrated in the sketch below.
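A hedged illustration of Problem 12 with exponential components (the values n = 5 and λ = 2 are our own choices):

n = 5; lambda = 2;
xi = -log(rand(1, n))/lambda;   % X_1,...,X_n, each exponential with rate lambda
xa = max(xi);   % part (a): distribution function (1 - exp(-lambda*x))^n
xb = min(xi);   % part (b): 1 - exp(-n*lambda*x), i.e. exponential with rate n*lambda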

Chapter 6 The Discrete Event Simulation Approach

Chapter 7 Statistical Analysis of Simulated Data

Chapter 8 Variance Reduction Techniques

Chapter 9 Statistical Validation Techniques

Chapter 10 Markov Chain Monte Carlo Methods

Chapter 11 Some Additional Topics

Bibliography

[1] R. Durrett. Probability: Theory and Examples, 2nd ed. Duxbury Press, 1996.

[2] W. Feller. An Introduction to Probability Theory and Its Applications, 3rd ed. Wiley, New York, 1968.


More information

Probability and Statistics Concepts

Probability and Statistics Concepts University of Central Florida Computer Science Division COT 5611 - Operating Systems. Spring 014 - dcm Probability and Statistics Concepts Random Variable: a rule that assigns a numerical value to each

More information

Northwestern University Department of Electrical Engineering and Computer Science

Northwestern University Department of Electrical Engineering and Computer Science Northwestern University Department of Electrical Engineering and Computer Science EECS 454: Modeling and Analysis of Communication Networks Spring 2008 Probability Review As discussed in Lecture 1, probability

More information

Math Spring Practice for the final Exam.

Math Spring Practice for the final Exam. Math 4 - Spring 8 - Practice for the final Exam.. Let X, Y, Z be three independnet random variables uniformly distributed on [, ]. Let W := X + Y. Compute P(W t) for t. Honors: Compute the CDF function

More information

1 Review of Probability and Distributions

1 Review of Probability and Distributions Random variables. A numerically valued function X of an outcome ω from a sample space Ω X : Ω R : ω X(ω) is called a random variable (r.v.), and usually determined by an experiment. We conventionally denote

More information

Chapter 4. Continuous Random Variables

Chapter 4. Continuous Random Variables Chapter 4. Continuous Random Variables Review Continuous random variable: A random variable that can take any value on an interval of R. Distribution: A density function f : R R + such that 1. non-negative,

More information

Random Variables (Continuous Case)

Random Variables (Continuous Case) Chapter 6 Random Variables (Continuous Case) Thus far, we have purposely limited our consideration to random variables whose ranges are countable, or discrete. The reason for that is that distributions

More information

Chapter 5. Chapter 5 sections

Chapter 5. Chapter 5 sections 1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

THE QUEEN S UNIVERSITY OF BELFAST

THE QUEEN S UNIVERSITY OF BELFAST THE QUEEN S UNIVERSITY OF BELFAST 0SOR20 Level 2 Examination Statistics and Operational Research 20 Probability and Distribution Theory Wednesday 4 August 2002 2.30 pm 5.30 pm Examiners { Professor R M

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

Brief Review of Probability

Brief Review of Probability Maura Department of Economics and Finance Università Tor Vergata Outline 1 Distribution Functions Quantiles and Modes of a Distribution 2 Example 3 Example 4 Distributions Outline Distribution Functions

More information

Quick Tour of Basic Probability Theory and Linear Algebra

Quick Tour of Basic Probability Theory and Linear Algebra Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra CS224w: Social and Information Network Analysis Fall 2011 Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra Outline Definitions

More information

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable

Lecture Notes 1 Probability and Random Variables. Conditional Probability and Independence. Functions of a Random Variable Lecture Notes 1 Probability and Random Variables Probability Spaces Conditional Probability and Independence Random Variables Functions of a Random Variable Generation of a Random Variable Jointly Distributed

More information

Common Discrete Distributions

Common Discrete Distributions Common Discrete Distributions Statistics 104 Autumn 2004 Taken from Statistics 110 Lecture Notes Copyright c 2004 by Mark E. Irwin Common Discrete Distributions There are a wide range of popular discrete

More information

Qualifying Exam CS 661: System Simulation Summer 2013 Prof. Marvin K. Nakayama

Qualifying Exam CS 661: System Simulation Summer 2013 Prof. Marvin K. Nakayama Qualifying Exam CS 661: System Simulation Summer 2013 Prof. Marvin K. Nakayama Instructions This exam has 7 pages in total, numbered 1 to 7. Make sure your exam has all the pages. This exam will be 2 hours

More information

Multivariate distributions

Multivariate distributions CHAPTER Multivariate distributions.. Introduction We want to discuss collections of random variables (X, X,..., X n ), which are known as random vectors. In the discrete case, we can define the density

More information

Probability reminders

Probability reminders CS246 Winter 204 Mining Massive Data Sets Probability reminders Sammy El Ghazzal selghazz@stanfordedu Disclaimer These notes may contain typos, mistakes or confusing points Please contact the author so

More information

Continuous Random Variables and Continuous Distributions

Continuous Random Variables and Continuous Distributions Continuous Random Variables and Continuous Distributions Continuous Random Variables and Continuous Distributions Expectation & Variance of Continuous Random Variables ( 5.2) The Uniform Random Variable

More information

Week 12-13: Discrete Probability

Week 12-13: Discrete Probability Week 12-13: Discrete Probability November 21, 2018 1 Probability Space There are many problems about chances or possibilities, called probability in mathematics. When we roll two dice there are possible

More information

Homework 4 Solution, due July 23

Homework 4 Solution, due July 23 Homework 4 Solution, due July 23 Random Variables Problem 1. Let X be the random number on a die: from 1 to. (i) What is the distribution of X? (ii) Calculate EX. (iii) Calculate EX 2. (iv) Calculate Var

More information

Stat410 Probability and Statistics II (F16)

Stat410 Probability and Statistics II (F16) Stat4 Probability and Statistics II (F6 Exponential, Poisson and Gamma Suppose on average every /λ hours, a Stochastic train arrives at the Random station. Further we assume the waiting time between two

More information

The Binomial distribution. Probability theory 2. Example. The Binomial distribution

The Binomial distribution. Probability theory 2. Example. The Binomial distribution Probability theory Tron Anders Moger September th 7 The Binomial distribution Bernoulli distribution: One experiment X i with two possible outcomes, probability of success P. If the experiment is repeated

More information

Math Stochastic Processes & Simulation. Davar Khoshnevisan University of Utah

Math Stochastic Processes & Simulation. Davar Khoshnevisan University of Utah Math 5040 1 Stochastic Processes & Simulation Davar Khoshnevisan University of Utah Module 1 Generation of Discrete Random Variables Just about every programming language and environment has a randomnumber

More information

Lecture 13 (Part 2): Deviation from mean: Markov s inequality, variance and its properties, Chebyshev s inequality

Lecture 13 (Part 2): Deviation from mean: Markov s inequality, variance and its properties, Chebyshev s inequality Lecture 13 (Part 2): Deviation from mean: Markov s inequality, variance and its properties, Chebyshev s inequality Discrete Structures II (Summer 2018) Rutgers University Instructor: Abhishek Bhrushundi

More information

Lecture 2: Repetition of probability theory and statistics

Lecture 2: Repetition of probability theory and statistics Algorithms for Uncertainty Quantification SS8, IN2345 Tobias Neckel Scientific Computing in Computer Science TUM Lecture 2: Repetition of probability theory and statistics Concept of Building Block: Prerequisites:

More information

Basic concepts of probability theory

Basic concepts of probability theory Basic concepts of probability theory Random variable discrete/continuous random variable Transform Z transform, Laplace transform Distribution Geometric, mixed-geometric, Binomial, Poisson, exponential,

More information

Lectures on Elementary Probability. William G. Faris

Lectures on Elementary Probability. William G. Faris Lectures on Elementary Probability William G. Faris February 22, 2002 2 Contents 1 Combinatorics 5 1.1 Factorials and binomial coefficients................. 5 1.2 Sampling with replacement.....................

More information

Problem 1. Problem 2. Problem 3. Problem 4

Problem 1. Problem 2. Problem 3. Problem 4 Problem Let A be the event that the fungus is present, and B the event that the staph-bacteria is present. We have P A = 4, P B = 9, P B A =. We wish to find P AB, to do this we use the multiplication

More information

Lecture 4a: Continuous-Time Markov Chain Models

Lecture 4a: Continuous-Time Markov Chain Models Lecture 4a: Continuous-Time Markov Chain Models Continuous-time Markov chains are stochastic processes whose time is continuous, t [0, ), but the random variables are discrete. Prominent examples of continuous-time

More information

CME 106: Review Probability theory

CME 106: Review Probability theory : Probability theory Sven Schmit April 3, 2015 1 Overview In the first half of the course, we covered topics from probability theory. The difference between statistics and probability theory is the following:

More information

Chapter 3: Random Variables 1

Chapter 3: Random Variables 1 Chapter 3: Random Variables 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

n(1 p i ) n 1 p i = 1 3 i=1 E(X i p = p i )P(p = p i ) = 1 3 p i = n 3 (p 1 + p 2 + p 3 ). p i i=1 P(X i = 1 p = p i )P(p = p i ) = p1+p2+p3

n(1 p i ) n 1 p i = 1 3 i=1 E(X i p = p i )P(p = p i ) = 1 3 p i = n 3 (p 1 + p 2 + p 3 ). p i i=1 P(X i = 1 p = p i )P(p = p i ) = p1+p2+p3 Introduction to Probability Due:August 8th, 211 Solutions of Final Exam Solve all the problems 1. (15 points) You have three coins, showing Head with probabilities p 1, p 2 and p 3. You perform two different

More information

Exercises in Probability Theory Paul Jung MA 485/585-1C Fall 2015 based on material of Nikolai Chernov

Exercises in Probability Theory Paul Jung MA 485/585-1C Fall 2015 based on material of Nikolai Chernov Exercises in Probability Theory Paul Jung MA 485/585-1C Fall 2015 based on material of Nikolai Chernov Many of the exercises are taken from two books: R. Durrett, The Essentials of Probability, Duxbury

More information

Sampling Distributions

Sampling Distributions In statistics, a random sample is a collection of independent and identically distributed (iid) random variables, and a sampling distribution is the distribution of a function of random sample. For example,

More information

Introduction to Statistical Data Analysis Lecture 3: Probability Distributions

Introduction to Statistical Data Analysis Lecture 3: Probability Distributions Introduction to Statistical Data Analysis Lecture 3: Probability Distributions James V. Lambers Department of Mathematics The University of Southern Mississippi James V. Lambers Statistical Data Analysis

More information

Guidelines for Solving Probability Problems

Guidelines for Solving Probability Problems Guidelines for Solving Probability Problems CS 1538: Introduction to Simulation 1 Steps for Problem Solving Suggested steps for approaching a problem: 1. Identify the distribution What distribution does

More information

Probability Notes. Compiled by Paul J. Hurtado. Last Compiled: September 6, 2017

Probability Notes. Compiled by Paul J. Hurtado. Last Compiled: September 6, 2017 Probability Notes Compiled by Paul J. Hurtado Last Compiled: September 6, 2017 About These Notes These are course notes from a Probability course taught using An Introduction to Mathematical Statistics

More information

Sampling Distributions

Sampling Distributions Sampling Distributions In statistics, a random sample is a collection of independent and identically distributed (iid) random variables, and a sampling distribution is the distribution of a function of

More information

MATH/STAT 3360, Probability Sample Final Examination Model Solutions

MATH/STAT 3360, Probability Sample Final Examination Model Solutions MATH/STAT 3360, Probability Sample Final Examination Model Solutions This Sample examination has more questions than the actual final, in order to cover a wider range of questions. Estimated times are

More information

Actuarial Science Exam 1/P

Actuarial Science Exam 1/P Actuarial Science Exam /P Ville A. Satopää December 5, 2009 Contents Review of Algebra and Calculus 2 2 Basic Probability Concepts 3 3 Conditional Probability and Independence 4 4 Combinatorial Principles,

More information

HW7 Solutions. f(x) = 0 otherwise. 0 otherwise. The density function looks like this: = 20 if x [10, 90) if x [90, 100]

HW7 Solutions. f(x) = 0 otherwise. 0 otherwise. The density function looks like this: = 20 if x [10, 90) if x [90, 100] HW7 Solutions. 5 pts.) James Bond James Bond, my favorite hero, has again jumped off a plane. The plane is traveling from from base A to base B, distance km apart. Now suppose the plane takes off from

More information

Random Variables. Cumulative Distribution Function (CDF) Amappingthattransformstheeventstotherealline.

Random Variables. Cumulative Distribution Function (CDF) Amappingthattransformstheeventstotherealline. Random Variables Amappingthattransformstheeventstotherealline. Example 1. Toss a fair coin. Define a random variable X where X is 1 if head appears and X is if tail appears. P (X =)=1/2 P (X =1)=1/2 Example

More information

Basic concepts of probability theory

Basic concepts of probability theory Basic concepts of probability theory Random variable discrete/continuous random variable Transform Z transform, Laplace transform Distribution Geometric, mixed-geometric, Binomial, Poisson, exponential,

More information

Nonparametric hypothesis tests and permutation tests

Nonparametric hypothesis tests and permutation tests Nonparametric hypothesis tests and permutation tests 1.7 & 2.3. Probability Generating Functions 3.8.3. Wilcoxon Signed Rank Test 3.8.2. Mann-Whitney Test Prof. Tesler Math 283 Fall 2018 Prof. Tesler Wilcoxon

More information

Chapter 1 Statistical Reasoning Why statistics? Section 1.1 Basics of Probability Theory

Chapter 1 Statistical Reasoning Why statistics? Section 1.1 Basics of Probability Theory Chapter 1 Statistical Reasoning Why statistics? Uncertainty of nature (weather, earth movement, etc. ) Uncertainty in observation/sampling/measurement Variability of human operation/error imperfection

More information

LIST OF FORMULAS FOR STK1100 AND STK1110

LIST OF FORMULAS FOR STK1100 AND STK1110 LIST OF FORMULAS FOR STK1100 AND STK1110 (Version of 11. November 2015) 1. Probability Let A, B, A 1, A 2,..., B 1, B 2,... be events, that is, subsets of a sample space Ω. a) Axioms: A probability function

More information

Math 510 midterm 3 answers

Math 510 midterm 3 answers Math 51 midterm 3 answers Problem 1 (1 pts) Suppose X and Y are independent exponential random variables both with parameter λ 1. Find the probability that Y < 7X. P (Y < 7X) 7x 7x f(x, y) dy dx e x e

More information

IEOR 3106: Introduction to Operations Research: Stochastic Models. Professor Whitt. SOLUTIONS to Homework Assignment 2

IEOR 3106: Introduction to Operations Research: Stochastic Models. Professor Whitt. SOLUTIONS to Homework Assignment 2 IEOR 316: Introduction to Operations Research: Stochastic Models Professor Whitt SOLUTIONS to Homework Assignment 2 More Probability Review: In the Ross textbook, Introduction to Probability Models, read

More information

STAT/MA 416 Answers Homework 6 November 15, 2007 Solutions by Mark Daniel Ward PROBLEMS

STAT/MA 416 Answers Homework 6 November 15, 2007 Solutions by Mark Daniel Ward PROBLEMS STAT/MA 4 Answers Homework November 5, 27 Solutions by Mark Daniel Ward PROBLEMS Chapter Problems 2a. The mass p, corresponds to neither of the first two balls being white, so p, 8 7 4/39. The mass p,

More information

Math Bootcamp 2012 Miscellaneous

Math Bootcamp 2012 Miscellaneous Math Bootcamp 202 Miscellaneous Factorial, combination and permutation The factorial of a positive integer n denoted by n!, is the product of all positive integers less than or equal to n. Define 0! =.

More information

Applied Statistics and Probability for Engineers. Sixth Edition. Chapter 4 Continuous Random Variables and Probability Distributions.

Applied Statistics and Probability for Engineers. Sixth Edition. Chapter 4 Continuous Random Variables and Probability Distributions. Applied Statistics and Probability for Engineers Sixth Edition Douglas C. Montgomery George C. Runger Chapter 4 Continuous Random Variables and Probability Distributions 4 Continuous CHAPTER OUTLINE Random

More information