MATH2715: Statistical Methods


Exercises V (based on lectures 9-10, work week 6, hand in lecture Mon 7 Nov)

ALL questions count towards the continuous assessment for this module.

Q1. If X ~ gamma(α, λ), write down the moment generating function of X. Using moment generating functions, deduce the distribution of Y = bX where b is a constant.

Q2. If X_1, X_2, ..., X_n are independent random variables satisfying X_i ~ gamma(α_i, λ), use moment generating functions to obtain the distribution of S_n = Σ X_i. Use moment generating functions to obtain the distribution of X̄_n where X̄_n = S_n/n.

Q3. Suppose U is a non-negative random variable with probability density function f_U(u) and b is a positive constant. Prove the Markov inequality, E[U] ≥ b pr{U ≥ b}.
Hint: Write E[U] and pr{U ≥ b} as integrals in terms of u and f_U(u). Look at the proof of Chebyshev's inequality. Where would you split the region of integration?

Q4. Suppose X has a uniform distribution over the interval (0,1) with mean µ and variance σ². Calculate pr{|X − µ| ≥ kσ} for a number of different values of k. Compare the exact numerical results with the bound given by Chebyshev's inequality.
Hint: Recall that pr{|X − µ| ≥ c} = pr{X ≤ (µ − c)} + pr{X ≥ (µ + c)}. You could use R to evaluate the probabilities. For example

k=c(0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0)  # Assign some k values.
chebyshev=1/k^2                       # Chebyshev's bound.
prob=punif(k,0,1)                     # If X is uniform(0,1), prob = pr(X < k).
                                      # prob gives the cdf F(x) evaluated at x=k.
                                      # prob does NOT give pr(|X-mu| > k*sigma).
                                      # What SHOULD you type here?

Q5. For the discrete random variable X satisfying

pr{X = 0} = 1 − 1/k²,  pr{X = −k} = 1/(2k²),  pr{X = +k} = 1/(2k²),  k ≥ 1,

show that Chebyshev's inequality becomes an equality for this particular value of k.
Hint: First obtain the mean µ and the variance σ² of X.
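For Q4 it can help to sanity-check the algebra numerically. The sketch below (in Python rather than the module's R, and not the hand-in answer) computes the exact tail probability for the uniform(0,1) case and compares it with Chebyshev's bound:

```python
# Sketch only: exact pr{|X - mu| >= k sigma} for X ~ uniform(0,1),
# where mu = 1/2 and sigma^2 = 1/12, versus Chebyshev's bound 1/k^2.
import math

mu = 0.5
sigma = math.sqrt(1.0 / 12.0)

def exact_tail(k):
    # pr{X <= mu - k sigma} + pr{X >= mu + k sigma}; for uniform(0,1)
    # each tail of the interval has length max(0, 0.5 - k sigma).
    return 2.0 * max(0.0, 0.5 - k * sigma)

ks = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
exact = [exact_tail(k) for k in ks]
chebyshev = [min(1.0, 1.0 / k**2) for k in ks]
```

Note that for k = 2 the exact probability is already zero (2σ ≈ 0.577 exceeds the half-width 0.5), while Chebyshev's bound is still 0.25.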

Background Notes: Lecture 10. Weak law of large numbers

Example. Let Z_n = (X̄_n − µ)/(σ/√n). Figure 26 shows the cumulative distribution function of Z_1 and Z_4 together with the standard normal cumulative distribution function in the two cases X_i ~ind exponential(λ = 1) and X_i ~ind uniform(0,1). As n → ∞ it can be seen that in both cases the cumulative distribution function of Z_n tends to the standard normal cumulative distribution function.

Figure 26: cumulative distribution function of Z_1 and Z_4 (n = 1, n = 4, against the N(0,1) cdf) for (left) X_i ~ind exponential(λ = 1), (right) X_i ~ind uniform(0,1).

Chebyshev's Inequality

For a random variable X with mean µ and finite variance σ² < ∞, Chebyshev's [58] inequality states

pr{|X − µ| ≥ kσ} ≤ 1/k².

Further reading for lectures 9 and 10

Rice, J.A. (1995) Mathematical Statistics and Data Analysis (2nd edition), sections 4.2, 4.5, 5.2.
Hogg, R.V., McKean, J.W. and Craig, A.T. (2005) Introduction to Mathematical Statistics (6th

[58] Pafnuti Chebyshev was a Russian mathematician. Many variants of the spelling of his name exist, Pafnut-i/y (T/Ts)-cheb-i/y-sh/ch/sch-e-v/ff/w. Uses of Chebyshev's (1867) inequality include constructing confidence intervals, so that pr{|X̄ − µ| < 4.472 σ_X̄} > 0.95. If X̄ is known to be a normal random variable, we have the tighter bound pr{|X̄ − µ| < 1.96 σ_X̄} = 0.95. The general application of Chebyshev's inequality has led to its use in many fields including digital communication, gene matching, and detecting computer security attacks.
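The two 95% coverage figures quoted in the footnote can be checked directly. The short Python sketch below (an illustration, not part of the lecture notes) evaluates Chebyshev's bound 1/k² at k = 4.472 and the exact two-sided normal tail at 1.96, using Φ(z) = (1 + erf(z/√2))/2:

```python
import math

def Phi(z):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

cheb_tail = 1.0 / 4.472**2              # Chebyshev tail bound at k = 4.472
normal_tail = 2.0 * (1.0 - Phi(1.96))   # exact two-sided N(0,1) tail at 1.96
```

Both tails come out as approximately 0.05, i.e. 95% coverage, but Chebyshev needs k = 4.472 where the normal distribution needs only k = 1.96.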

edition), sections 1.9, 1.10, 4.2.
Larsen, R.J. and Marx, M.L. (2010) An Introduction to Mathematical Statistics and its Applications (5th edition), sections 3.12, 5.7.
Miller, I. and Miller, M. (2004) John E. Freund's Mathematical Statistics with Applications, sections 4.4, 4.5, 5.4.

MATH2715: Statistical Methods Worked Examples V

Worked Example: If X ~ N(µ, σ²), use the moment generating function to obtain the distribution of Y = θX where θ is a constant.

Answer: If X ~ N(µ, σ²), then m_X(t) = exp(µt + σ²t²/2). The mgf of Y is

m_Y(t) = E[e^{tY}] = E[e^{t(θX)}] = E[e^{(θt)X}] = m_X(θt).

Thus

m_Y(t) = exp(µ(θt) + σ²(θt)²/2) = exp((µθ)t + (σ²θ²)t²/2) = exp((µθ)t + (σθ)²t²/2).

By inspection of the mgf it can be seen that Y ~ N(µθ, σ²θ²).

Worked Example: If X has an exponential(λ) distribution, use moment generating functions to obtain the distribution of U = aX where a (a > 0) is a known constant.

Answer: X ~ exponential(λ) has mgf m_X(t) = λ/(λ − t) for t < λ. Thus

m_U(t) = E[e^{tU}] = E[e^{t(aX)}] = E[e^{(at)X}] = m_X(at) = λ/(λ − at) = (λ/a)/((λ/a) − t).

By inspection of the mgf it can be seen that U ~ exponential(λ/a).

Worked Example: If X and Y are independent exponential(λ) random variables, use moment generating functions to obtain the distribution of U = X + Y.

Answer: X has mgf m_X(t) = λ/(λ − t) for t < λ. Similarly m_Y(t) = λ/(λ − t) for t < λ. Hence

m_U(t) = E[e^{tU}] = E[e^{t(X+Y)}] = E[e^{tX} e^{tY}] = E[e^{tX}] E[e^{tY}]

by independence of X and Y. Thus

m_U(t) = (λ/(λ − t)) (λ/(λ − t)) = (λ/(λ − t))².

By inspection of the mgf we deduce that U ~ gamma(2, λ).

Worked Example: If X_1, X_2 and X_3 are independent gamma(α, λ) random variables, use moment generating functions to obtain the distribution of U = X_1 + X_2 + X_3.

Answer: The X_i have mgf m_{X_i}(t) = (λ/(λ − t))^α. Hence

m_U(t) = E[e^{tU}] = E[e^{t(X_1+X_2+X_3)}] = E[e^{tX_1+tX_2+tX_3}] = E[e^{tX_1} e^{tX_2} e^{tX_3}] = E[e^{tX_1}] E[e^{tX_2}] E[e^{tX_3}]

by independence of the X_i. Thus

m_U(t) = (λ/(λ − t))^α (λ/(λ − t))^α (λ/(λ − t))^α = (λ/(λ − t))^{3α}.

By inspection of the mgf, U ~ gamma(3α, λ).
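The mgf arguments above can be cross-checked by simulation. The Python sketch below (an illustration only; the value of λ and the sample size are arbitrary) simulates U = X + Y for independent exponential(λ) variables and compares the sample mean and variance with the gamma(2, λ) values 2/λ and 2/λ²:

```python
# Simulation cross-check (not a proof): X + Y for independent
# exponential(lam) X, Y should behave like gamma(2, lam).
import random
random.seed(1)

lam = 2.0
n = 200_000
u = [random.expovariate(lam) + random.expovariate(lam) for _ in range(n)]

mean_u = sum(u) / n
var_u = sum((x - mean_u) ** 2 for x in u) / n
# gamma(2, lam) has mean 2/lam = 1.0 and variance 2/lam^2 = 0.5.
```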

Worked Example: Suppose X ~ N(µ, σ²). Calculate pr{|X − µ| ≥ kσ} for a number of different values of k and compare the exact results with the bound given by Chebyshev's inequality.

Answer:

pr{|X − µ| ≥ kσ} = pr{X − µ ≥ kσ} + pr{X − µ ≤ −kσ} = pr{X ≥ µ + kσ} + pr{X ≤ µ − kσ}.

But for the normal distribution we evaluate cumulative probabilities by standardising. Thus

pr{|X − µ| ≥ kσ} = pr{(X − µ)/σ ≥ k} + pr{(X − µ)/σ ≤ −k} = pr{Z ≥ k} + pr{Z ≤ −k}

where Z = (X − µ)/σ ~ N(0,1) with Φ(z) = pr{Z ≤ z}. Thus

pr{|X − µ| ≥ kσ} = (1 − pr{Z ≤ k}) + pr{Z ≤ −k} = (1 − Φ(k)) + Φ(−k) = (1 − Φ(k)) + (1 − Φ(k)) = 2(1 − Φ(k)).

The probabilities can be generated for different k using the R code below. The probabilities and Chebyshev bound are shown in table 1.

k=c(0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0)  # Assign some k values.
prob=2*(1-pnorm(k))                   # Exact probabilities.
chebyshev=1/k^2                       # Chebyshev's bound.

Figure 27(left) shows pr{|X − µ| ≥ 1.5σ} for X ~ N(µ = 10, σ² = 4). Figure 27(right) gives the exact probability and Chebyshev's bound for a range of values k.

Figure 27: (left) pr{|X − µ| ≥ 1.5σ} for X ~ N(µ = 10, σ² = 4); (right) Chebyshev's bound and the exact probability pr{|X − µ| ≥ kσ} for X ~ N(µ, σ²).

Worked Example: Suppose X ~ exponential(λ) with mean µ and variance σ². Obtain pr{|X − µ| ≥ kσ}. Calculate pr{|X − µ| ≥ kσ} for a number of different values of k and compare the exact result with the bound given by Chebyshev's inequality.
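For readers without R to hand, the same exact probabilities 2(1 − Φ(k)) can be computed in Python via the error function, since Φ(z) = (1 + erf(z/√2))/2 (a sketch equivalent to the R snippet above):

```python
import math

def Phi(z):
    # Standard normal cdf.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

ks = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
prob = [2.0 * (1.0 - Phi(k)) for k in ks]   # exact pr{|X - mu| >= k sigma}
chebyshev = [1.0 / k**2 for k in ks]        # Chebyshev's bound
```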

k     pr{|X − µ| ≥ kσ}   Chebyshev bound (*)
0.5   0.6171             1.0000
1.0   0.3173             1.0000
1.5   0.1336             0.4444
2.0   0.0455             0.2500
2.5   0.0124             0.1600
3.0   0.0027             0.1111
3.5   0.0005             0.0816
4.0   0.0001             0.0625

Table 1: Chebyshev's inequality and the exact probability for X ~ N(µ, σ²). (*) For k ≤ 1 the bound is replaced by unity.

Answer: f_X(x) = λe^{−λx} for x > 0 and F_X(x) = pr{X ≤ x} = 1 − e^{−λx} for x > 0. Also µ = E[X] = 1/λ and σ² = Var[X] = 1/λ². Hence

pr{|X − µ| ≥ kσ} = pr{X ≤ µ − kσ} + pr{X ≥ µ + kσ}
                 = pr{X ≤ (1 − k)/λ} + pr{X ≥ (1 + k)/λ}
                 = F_X((1 − k)/λ) + (1 − F_X((1 + k)/λ))
                 = { 1 − e^{−(1−k)} + e^{−(1+k)}   if k < 1,
                   { e^{−(1+k)}                    if k ≥ 1.

This can be evaluated for different values of k.

Figure 28 shows pr{|X − µ| ≥ 0.6σ} and pr{|X − µ| ≥ 1.5σ} for X ~ exponential(λ = 2). [59]

Figure 28: for X ~ exponential(λ = 2), (left) pr{|X − µ| ≥ 0.6σ} and (right) pr{|X − µ| ≥ 1.5σ}.

Suitable R commands for plotting the exact probability pr{|X − µ| ≥ kσ} and the Chebyshev bound 1/k² [60] are given below for the case X ~ exponential(λ = 1). The result is plotted in figure 29.

k=c(1:30)*0.1   # Set k=0.1, 0.2, ..., 3.0.

[59] Thus here µ = 1/2, σ = 1/2, and pr{|X − µ| ≥ 0.6σ} = pr{X ≥ 0.8} + pr{X ≤ 0.2} while pr{|X − µ| ≥ 1.5σ} = pr{X ≥ 1.25} + pr{X ≤ −0.25} where here pr{X ≤ −0.25} = 0.
[60] Chebyshev's inequality says that pr{|X − µ| ≥ kσ} ≤ 1/k² for k > 1. If k < 1 the right-hand side of Chebyshev's inequality should be set equal to one!
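As a numerical aside (a Python sketch, not part of the original handout), the piecewise formula just derived can be evaluated and checked against Chebyshev's bound on a grid of k values; note the formula is free of λ because µ = σ = 1/λ:

```python
import math

def exact_tail(k):
    # pr{|X - mu| >= k sigma} for X ~ exponential(lam), any lam > 0.
    if k < 1.0:
        return 1.0 - math.exp(-(1.0 - k)) + math.exp(-(1.0 + k))
    return math.exp(-(1.0 + k))

ks = [0.1 * i for i in range(1, 31)]         # k = 0.1, 0.2, ..., 3.0
prob = [exact_tail(k) for k in ks]
cheb = [min(1.0, 1.0 / k**2) for k in ks]    # bound capped at one for k < 1
```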

prob=pexp(1-k)+(1-pexp(k+1))  # Exact probability.
cheb=1/k^2                    # Chebyshev upper bound.
cheb[cheb>1]=1                # Make all values of cheb>1 equal to one.
plot(k,prob,type="l",ylim=c(0,1))
points(k,cheb,type="l",lty=2)

Figure 29: Chebyshev's bound and the exact probability pr{|X − µ| ≥ kσ} in the case where X ~ exponential(λ).

Worked Example: In the Markov [61] inequality, a non-negative random variable U satisfies E[U] ≥ b pr{U ≥ b}. Thus if a random variable X has mean µ and variance σ² and U = g(X) is a non-negative function of X, then it follows that E[g(X)] ≥ b pr{g(X) ≥ b}. By considering the special case g(X) = (X − µ)², derive Chebyshev's inequality pr{|X − µ| ≥ kσ} ≤ 1/k².

Answer: The Markov inequality [62] can be written as pr{g(X) ≥ b} ≤ E[g(X)]/b. Now write g(X) = (X − µ)². Thus

pr{(X − µ)² ≥ b} ≤ E[(X − µ)²]/b = Var[X]/b = σ²/b.

Since √((X − µ)²) = |X − µ|, then pr{(X − µ)² ≥ b} = pr{|X − µ| ≥ √b}. Write √b = kσ, b = k²σ², so pr{(X − µ)² ≥ b} ≤ σ²/b gives pr{|X − µ| ≥ kσ} ≤ 1/k² as required.

[61] Andrei Andreyevich Markov was a Russian mathematician and a student of Chebyshev.
[62] The Markov inequality is still useful for leading to new results. For example, consider Herman Chernoff's (1952) inequality pr{X ≥ c} ≤ e^{−tc} m_X(t), where X has moment generating function m_X(t). This result follows because pr{X ≥ c} = pr{e^{tX} ≥ e^{tc}} and putting g(X) = e^{tX} and b = e^{tc} in the Markov inequality gives pr{e^{tX} ≥ e^{tc}} ≤ E[e^{tX}]/e^{tc}.
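The Markov inequality itself is easy to check numerically. The Python sketch below (with an arbitrary choice of λ and of thresholds b) compares pr{X ≥ b} = e^{−λb} for an exponential(λ) variable against the Markov bound E[X]/b = 1/(λb):

```python
import math

lam = 1.0
bs = [0.5, 1.0, 2.0, 5.0]
exact = [math.exp(-lam * b) for b in bs]   # pr{X >= b} for exponential(lam)
markov = [(1.0 / lam) / b for b in bs]     # Markov bound E[X]/b
```

As expected, the bound is crude (for b = 5 it gives 0.2 against an exact tail of e^{−5} ≈ 0.007), which is precisely why refinements such as the Chernoff bound in footnote [62] are worth having.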

Worked Example: Does there exist a random variable X for which Chebyshev's inequality becomes an equality for all k ≥ 1?

Answer: There is no distribution which attains the Chebyshev bound for all k ≥ 1. We use a proof by contradiction and consider the continuous case. Suppose there exists a random variable X with finite variance σ² satisfying

pr{|X − µ| ≥ kσ} = 1/k² for all k ≥ 1.   (*)

Let Z = (X − µ)/σ, with E[Z] = 0 and Var[Z] = 1, so that by (*), pr{|Z| ≥ k} = 1/k² for all k ≥ 1. Hence

pr{|Z| ≥ z} = pr{Z ≤ −z} + pr{Z ≥ z} = 1/z² for z ≥ 1.

If Z has cumulative distribution function F_Z(z) = pr{Z ≤ z} and probability density function f_Z(z), then

F_Z(−z) + {1 − F_Z(z)} = 1/z².

Differentiating this with respect to z gives −f_Z(−z) − f_Z(z) = −2/z³. Clearly f_Z(−z) + f_Z(z) = 2/z³ for z ≥ 1. Since the area under the probability density function integrates to unity it follows that

1 = ∫_{−∞}^{∞} f_Z(z) dz = ∫_{−1}^{1} f_Z(z) dz + ∫_{1}^{∞} {f_Z(−z) + f_Z(z)} dz = ∫_{−1}^{1} f_Z(z) dz + ∫_{1}^{∞} (2/z³) dz

and then, since ∫_{1}^{∞} (2/z³) dz = [−1/z²]_{1}^{∞} = 1,

∫_{−1}^{1} f_Z(z) dz = 1 − 1 = 0.

As probability density functions are non-negative this can only hold if f_Z(z) = 0 for −1 < z < 1. Since by assumption E[Z] = 0, the variance of Z satisfies Var[Z] = E[Z²] so that

Var[Z] = ∫_{−∞}^{∞} z² f_Z(z) dz = ∫_{1}^{∞} z² {f_Z(−z) + f_Z(z)} dz = ∫_{1}^{∞} (2/z) dz = ∞.

Thus Var[Z] ≠ 1 as assumed at (*). Clearly X cannot have finite variance σ². There is thus no random variable X which satisfies (*).

Worked Example: Let X_1, X_2, X_3, ... be a sequence of random variables with a common mean µ, common variance σ², and with covariances, for i ≠ j, given by

cov(X_i, X_j) = { ρσ²  if j = i ± 1,
                { 0    otherwise.

Consider now the sequence of means {X̄_n}, n = 1, 2, 3, ..., where X̄_n = (1/n) Σ_{i=1}^{n} X_i. Show that

Var[X̄_n] = σ²/n + (2(n − 1)/n²) ρσ².

Deduce that Var[X̄_n] ≤ ((3n − 2)/n²) σ² and so use Chebyshev's inequality to show that a weak law of large numbers applies to X̄_n.

Answer: Let S_n = Σ_{i=1}^{n} X_i. Then

Var[S_n] = Var[Σ_{i=1}^{n} X_i] = Σ_{i=1}^{n} Σ_{j=1}^{n} cov(X_i, X_j).

The double sum has n² terms as shown in the table.

         j = 1                 j = 2                 j = 3                 ...  j = n
i = 1    cov(X_1,X_1) = σ²     cov(X_1,X_2) = ρσ²    cov(X_1,X_3) = 0      ...  cov(X_1,X_n) = 0
i = 2    cov(X_2,X_1) = ρσ²    cov(X_2,X_2) = σ²     cov(X_2,X_3) = ρσ²    ...  cov(X_2,X_n) = 0
...
i = n    cov(X_n,X_1) = 0      cov(X_n,X_2) = 0      cov(X_n,X_3) = 0      ...  cov(X_n,X_n) = σ²

Hence

Var[S_n] = Σ_i cov(X_i, X_i) + ΣΣ_{i≠j} cov(X_i, X_j) = Σ_i Var[X_i] + 2 ΣΣ_{i<j} cov(X_i, X_j)
         = nσ² + 2{cov(X_1,X_2) + cov(X_2,X_3) + ... + cov(X_{n−1},X_n)} = nσ² + 2(n − 1)ρσ².

This gives

Var[X̄_n] = Var[S_n/n] = (1/n²) Var[S_n] = σ²/n + (2(n − 1)/n²) ρσ².

Since −1 ≤ ρ ≤ 1 and cov(X_i, X_j) = ρσ², then −σ² ≤ cov(X_i, X_j) ≤ σ². Hence

Var[X̄_n] ≤ σ²/n + (2(n − 1)/n²) σ² = ((3n − 2)/n²) σ².

In Chebyshev's inequality, pr{|Y − µ_Y| ≥ kσ_Y} ≤ 1/k². Put Y = X̄_n with µ_Y = µ and put ε = kσ_Y so that

pr{|X̄_n − µ| ≥ ε} ≤ Var[X̄_n]/ε² ≤ (3n − 2)σ²/(n²ε²).

For given η, choose n_0 such that (3n_0 − 2)σ²/(n_0² ε²) < η. Then, for all n > n_0, (3n − 2)σ²/(n²ε²) < η and so, as required,

pr{|X̄_n − µ| ≥ ε} < η if n > n_0.

Thus X̄_n →p µ as n → ∞.
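The variance formula can be verified by brute force: sum all n² entries of the tridiagonal covariance matrix and divide by n². A Python sketch follows (the values of n, σ² and ρ are arbitrary example choices):

```python
def var_xbar(n, sigma2, rho):
    # Sum the n x n covariance matrix: sigma2 on the diagonal,
    # rho*sigma2 on the two off-diagonals, zero elsewhere.
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                total += sigma2
            elif abs(i - j) == 1:
                total += rho * sigma2
    return total / n**2   # Var[Xbar_n] = Var[S_n] / n^2

n, sigma2, rho = 10, 2.0, 0.3
direct = var_xbar(n, sigma2, rho)
formula = sigma2 / n + 2 * (n - 1) * rho * sigma2 / n**2
bound = (3 * n - 2) * sigma2 / n**2
```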

MATH2715: Solutions to exercises II

Q1. E[(X − µ)⁴] = E[X⁴ − 4µX³ + 6µ²X² − 4µ³X + µ⁴] = E[X⁴] − 4µE[X³] + 6µ²E[X²] − 4µ³E[X] + µ⁴ as µ is a constant. Thus

E[(X − µ)⁴] = µ₄' − 4µµ₃' + 6µ²µ₂' − 4µ⁴ + µ⁴ = µ₄' − 4µµ₃' + 6µ²µ₂' − 3µ⁴,

where µ_r' = E[X^r] and µ₁' = E[X] = µ.

Q2.

E[X³] = ∫_x x³ f_X(x) dx = ∫_{x=0}^{∞} x³ (λ^α x^{α−1} e^{−λx} / Γ(α)) dx = (λ^α/Γ(α)) ∫_{x=0}^{∞} x^{α+2} e^{−λx} dx.

Since f_X(x) is a probability density function it integrates to one, so that

∫_x f_X(x) dx = 1  ⟹  (λ^α/Γ(α)) ∫_{x=0}^{∞} x^{α−1} e^{−λx} dx = 1  ⟹  ∫_{x=0}^{∞} x^{α−1} e^{−λx} dx = Γ(α)/λ^α.

This must hold for all values of α. Putting A = α + 3 we have

∫_{x=0}^{∞} x^{A−1} e^{−λx} dx = Γ(A)/λ^A  ⟹  ∫_{x=0}^{∞} x^{α+2} e^{−λx} dx = Γ(α + 3)/λ^{α+3}.

Thus, as required,

E[X³] = (λ^α/Γ(α)) ∫_{x=0}^{∞} x^{α+2} e^{−λx} dx = (λ^α/Γ(α)) (Γ(α + 3)/λ^{α+3}) = α(α + 1)(α + 2)/λ³

since Γ(α + 3) = (α + 2)Γ(α + 2) = (α + 2)(α + 1)Γ(α + 1) = (α + 2)(α + 1)αΓ(α).

Q3. Since X ~ Bin(n, θ), E[X] = nθ and Var[X] = nθ(1 − θ). For method of moments estimators equate [63]: x̄ = E[X] and m₂ = Var[X]. Thus x̄ = nθ and m₂ = nθ(1 − θ) gives

1 − θ = m₂/x̄  ⟹  θ̃ = 1 − m₂/x̄

[63] You could alternatively equate s² = Var[X], where s² = (1/(m − 1)) Σ_{i=1}^{m} (x_i − x̄)² is the sample variance. This would give a slightly different estimate for θ and n, namely θ̃ = 1 − s²/x̄ and ñ = x̄²/(x̄ − s²).
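The closed form E[X³] = α(α + 1)(α + 2)/λ³ in Q2 can be cross-checked in Python via the gamma-function identity Γ(α + 3)/(Γ(α)λ³) and by direct numerical integration (a sketch with arbitrary α and λ):

```python
import math

alpha, lam = 2.5, 1.5
# Closed form derived above.
closed = alpha * (alpha + 1) * (alpha + 2) / lam**3
# Same quantity via the gamma-function identity.
via_gamma = math.gamma(alpha + 3) / (math.gamma(alpha) * lam**3)
# Direct Riemann-sum integration of x^3 f_X(x) over (0, 40);
# the tail beyond x = 40 is negligible for these parameters.
dx = 0.001
numeric = sum(
    x**3 * lam**alpha * x**(alpha - 1) * math.exp(-lam * x) / math.gamma(alpha) * dx
    for x in (i * dx for i in range(1, 40000))
)
```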

and so estimate n by ñ = x̄/θ̃ = x̄²/(x̄ − m₂).

Problem: Do we expect our estimate for n to be an integer? A bigger problem: Is ñ always non-negative? [64]

Q4.

E[X] = ∫_x x f_X(x) dx = ∫_{x=1}^{∞} x (θ/x^{θ+1}) dx = ∫_{x=1}^{∞} θ x^{−θ} dx = [θ x^{−θ+1}/(−θ + 1)]_{x=1}^{∞} = θ/(θ − 1)

since lim_{x→∞} x^{−θ+1} = 0 because θ > 1. For method of moments estimates equate x̄ = E[X] so that

x̄ = θ/(θ − 1)  ⟹  θ̃ = x̄/(x̄ − 1).

Q5. For these data x̄ = (1/10) Σ x_i = 4.54, m₂' = (1/10) Σ x_i² = 25.716, m₂ = m₂' − x̄² = 5.1044. For a gamma(α, λ) distribution E[X] = α/λ, Var[X] = α/λ². For method of moments estimates equate x̄ = E[X] and m₂ = Var[X] so that

α̃ = x̄²/m₂,  λ̃ = x̄/m₂.

These give α̃ = 4.038 and λ̃ = 0.889. The best fitting gamma distribution is thus a gamma(α = 4.038, λ = 0.889) distribution. Notice that α̃ ≈ 4. Using R to do the calculations:

x=c(8.7,3.3,5.5,5.8,6.1,3.9,1.2,0.7,4.7,5.5)  # Set values into x.
xbar=mean(x)          # Mean is 4.54.
mean(x^2)             # Mean of x^2 is 25.716.
m2=mean(x^2)-xbar^2   # m2 is 5.1044.
alpha=xbar^2/m2       # alpha is 4.038.
lambda=xbar/m2        # lambda is 0.889.

[64] A simple R simulation shows that ñ can be negative:

nn=numeric(10000)               # Initialise nn to be a vector 10000 long.
m=20; n=10; theta=0.3           # Initialise required constants m, n and theta.
for (k in 1:10000){             # Do simulations.
  x=rbinom(m,n,theta)           # Simulate m values from Bin(n,theta) and store in x.
  m2=mean((x-mean(x))^2)        # Obtain m2 value.
  nn[k]=mean(x)^2/(mean(x)-m2)  # Obtain n estimate.
}                               # End k loop.
hist(nn,100)                    # Histogram of n estimates with about 100 breaks.
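The Q5 method-of-moments arithmetic can be reproduced outside R; the Python sketch below repeats the calculation for the ten data values:

```python
# Method-of-moments fit of a gamma(alpha, lam) distribution to the data.
x = [8.7, 3.3, 5.5, 5.8, 6.1, 3.9, 1.2, 0.7, 4.7, 5.5]
n = len(x)
xbar = sum(x) / n                          # sample mean
m2 = sum(v * v for v in x) / n - xbar**2   # m2 = mean(x^2) - xbar^2
alpha = xbar**2 / m2                       # moment estimate of alpha
lam = xbar / m2                            # moment estimate of lambda
```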

MATH2715: Statistical Methods Examples Class V, week 6

The questions below will be looked at in Examples Class V held in week 6.

Q1. Suppose that g(x) is a non-negative function [65] which is symmetric about zero and which is a non-decreasing function of x for positive x. If X is a continuous random variable prove that, for any ε > 0,

pr{|X| ≥ ε} ≤ E[g(X)]/g(ε).

(Hint: Split the region of integration (−∞, +∞) for E[g(X)] into suitable parts.)
By writing X = Y − µ, deduce Chebyshev's inequality as a special case.

Q2. Suppose that Y_1, Y_2, ... is a sequence of random variables which satisfies

Y_n = { −2^n  with probability 3^{−n},
      { 0     with probability 1 − 2(3^{−n}),
      { +2^n  with probability 3^{−n},

for n = 1, 2, ... Prove that plim_{n→∞} Y_n = 0. What is the mean and variance of Y_n? What happens as n → ∞?

[65] Examples include g(x) = x² and g(x) = 1/(1 + e^{−x²}).
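For Q2, a numerical sketch (Python, an illustration only, assuming the three-point distribution stated above) makes the punchline visible: the tail probability pr{|Y_n| ≥ ε} shrinks geometrically while the variance grows without bound.

```python
# Sketch for the three-point variable Y_n = -2^n, 0, +2^n
# with probabilities 3^{-n}, 1 - 2(3^{-n}), 3^{-n}.
def tail_prob(n):
    # pr{|Y_n| >= eps} for any 0 < eps <= 2^n: the two nonzero atoms.
    return 2.0 * 3.0**(-n)

def mean_var(n):
    p = 3.0**(-n)
    mean = (2.0**n) * p + (-(2.0**n)) * p   # zero by symmetry
    var = 2.0 * (4.0**n) * p - mean**2      # equals 2 (4/3)^n
    return mean, var

tails = [tail_prob(n) for n in range(1, 11)]
m1, v1 = mean_var(1)
```

So Y_n →p 0 even though Var[Y_n] = 2(4/3)^n → ∞: convergence in probability does not require the variances to converge.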


More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction

More information

ECE Lecture #9 Part 2 Overview

ECE Lecture #9 Part 2 Overview ECE 450 - Lecture #9 Part Overview Bivariate Moments Mean or Expected Value of Z = g(x, Y) Correlation and Covariance of RV s Functions of RV s: Z = g(x, Y); finding f Z (z) Method : First find F(z), by

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4. (*. Let independent variables X,..., X n have U(0, distribution. Show that for every x (0,, we have P ( X ( < x and P ( X (n > x as n. Ex. 4.2 (**. By using induction or otherwise,

More information

Chapter 4 : Expectation and Moments

Chapter 4 : Expectation and Moments ECE5: Analysis of Random Signals Fall 06 Chapter 4 : Expectation and Moments Dr. Salim El Rouayheb Scribe: Serge Kas Hanna, Lu Liu Expected Value of a Random Variable Definition. The expected or average

More information

Lecture 14: Multivariate mgf s and chf s

Lecture 14: Multivariate mgf s and chf s Lecture 14: Multivariate mgf s and chf s Multivariate mgf and chf For an n-dimensional random vector X, its mgf is defined as M X (t) = E(e t X ), t R n and its chf is defined as φ X (t) = E(e ıt X ),

More information

THE QUEEN S UNIVERSITY OF BELFAST

THE QUEEN S UNIVERSITY OF BELFAST THE QUEEN S UNIVERSITY OF BELFAST 0SOR20 Level 2 Examination Statistics and Operational Research 20 Probability and Distribution Theory Wednesday 4 August 2002 2.30 pm 5.30 pm Examiners { Professor R M

More information

Limiting Distributions

Limiting Distributions Limiting Distributions We introduce the mode of convergence for a sequence of random variables, and discuss the convergence in probability and in distribution. The concept of convergence leads us to the

More information

Exercises and Answers to Chapter 1

Exercises and Answers to Chapter 1 Exercises and Answers to Chapter The continuous type of random variable X has the following density function: a x, if < x < a, f (x), otherwise. Answer the following questions. () Find a. () Obtain mean

More information

This does not cover everything on the final. Look at the posted practice problems for other topics.

This does not cover everything on the final. Look at the posted practice problems for other topics. Class 7: Review Problems for Final Exam 8.5 Spring 7 This does not cover everything on the final. Look at the posted practice problems for other topics. To save time in class: set up, but do not carry

More information

Continuous Random Variables

Continuous Random Variables Continuous Random Variables Recall: For discrete random variables, only a finite or countably infinite number of possible values with positive probability. Often, there is interest in random variables

More information

Statistics 1B. Statistics 1B 1 (1 1)

Statistics 1B. Statistics 1B 1 (1 1) 0. Statistics 1B Statistics 1B 1 (1 1) 0. Lecture 1. Introduction and probability review Lecture 1. Introduction and probability review 2 (1 1) 1. Introduction and probability review 1.1. What is Statistics?

More information

The Multivariate Normal Distribution. In this case according to our theorem

The Multivariate Normal Distribution. In this case according to our theorem The Multivariate Normal Distribution Defn: Z R 1 N(0, 1) iff f Z (z) = 1 2π e z2 /2. Defn: Z R p MV N p (0, I) if and only if Z = (Z 1,..., Z p ) T with the Z i independent and each Z i N(0, 1). In this

More information

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real

More information

Chapter 7. Confidence Sets Lecture 30: Pivotal quantities and confidence sets

Chapter 7. Confidence Sets Lecture 30: Pivotal quantities and confidence sets Chapter 7. Confidence Sets Lecture 30: Pivotal quantities and confidence sets Confidence sets X: a sample from a population P P. θ = θ(p): a functional from P to Θ R k for a fixed integer k. C(X): a confidence

More information

MATH2715: Statistical Methods

MATH2715: Statistical Methods MATH275: Statistical Methods Exercises III (based on lectres 5-6, work week 4, hand in lectre Mon 23 Oct) ALL qestions cont towards the continos assessment for this modle. Q. If X has a niform distribtion

More information

Math Spring Practice for the final Exam.

Math Spring Practice for the final Exam. Math 4 - Spring 8 - Practice for the final Exam.. Let X, Y, Z be three independnet random variables uniformly distributed on [, ]. Let W := X + Y. Compute P(W t) for t. Honors: Compute the CDF function

More information

Limiting Distributions

Limiting Distributions We introduce the mode of convergence for a sequence of random variables, and discuss the convergence in probability and in distribution. The concept of convergence leads us to the two fundamental results

More information

Review of Probability Theory II

Review of Probability Theory II Review of Probability Theory II January 9-3, 008 Exectation If the samle sace Ω = {ω, ω,...} is countable and g is a real-valued function, then we define the exected value or the exectation of a function

More information

ECE353: Probability and Random Processes. Lecture 18 - Stochastic Processes

ECE353: Probability and Random Processes. Lecture 18 - Stochastic Processes ECE353: Probability and Random Processes Lecture 18 - Stochastic Processes Xiao Fu School of Electrical Engineering and Computer Science Oregon State University E-mail: xiao.fu@oregonstate.edu From RV

More information

Stat 5101 Notes: Brand Name Distributions

Stat 5101 Notes: Brand Name Distributions Stat 5101 Notes: Brand Name Distributions Charles J. Geyer September 5, 2012 Contents 1 Discrete Uniform Distribution 2 2 General Discrete Uniform Distribution 2 3 Uniform Distribution 3 4 General Uniform

More information

Moment Generating Functions

Moment Generating Functions MATH 382 Moment Generating Functions Dr. Neal, WKU Definition. Let X be a random variable. The moment generating function (mgf) of X is the function M X : R R given by M X (t ) = E[e X t ], defined for

More information

Lecture 1: August 28

Lecture 1: August 28 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random

More information

Probability- the good parts version. I. Random variables and their distributions; continuous random variables.

Probability- the good parts version. I. Random variables and their distributions; continuous random variables. Probability- the good arts version I. Random variables and their distributions; continuous random variables. A random variable (r.v) X is continuous if its distribution is given by a robability density

More information

Stat410 Probability and Statistics II (F16)

Stat410 Probability and Statistics II (F16) Stat4 Probability and Statistics II (F6 Exponential, Poisson and Gamma Suppose on average every /λ hours, a Stochastic train arrives at the Random station. Further we assume the waiting time between two

More information

Mathematical statistics

Mathematical statistics October 4 th, 2018 Lecture 12: Information Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation Chapter

More information

Partial Solutions for h4/2014s: Sampling Distributions

Partial Solutions for h4/2014s: Sampling Distributions 27 Partial Solutions for h4/24s: Sampling Distributions ( Let X and X 2 be two independent random variables, each with the same probability distribution given as follows. f(x 2 e x/2, x (a Compute the

More information

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Distributions of Functions of Random Variables. 5.1 Functions of One Random Variable

Distributions of Functions of Random Variables. 5.1 Functions of One Random Variable Distributions of Functions of Random Variables 5.1 Functions of One Random Variable 5.2 Transformations of Two Random Variables 5.3 Several Random Variables 5.4 The Moment-Generating Function Technique

More information

P (x). all other X j =x j. If X is a continuous random vector (see p.172), then the marginal distributions of X i are: f(x)dx 1 dx n

P (x). all other X j =x j. If X is a continuous random vector (see p.172), then the marginal distributions of X i are: f(x)dx 1 dx n JOINT DENSITIES - RANDOM VECTORS - REVIEW Joint densities describe probability distributions of a random vector X: an n-dimensional vector of random variables, ie, X = (X 1,, X n ), where all X is are

More information

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Data from one or a series of random experiments are collected. Planning experiments and collecting data (not discussed here). Analysis:

More information

Probability 2 - Notes 10. Lemma. If X is a random variable and g(x) 0 for all x in the support of f X, then P(g(X) 1) E[g(X)].

Probability 2 - Notes 10. Lemma. If X is a random variable and g(x) 0 for all x in the support of f X, then P(g(X) 1) E[g(X)]. Probability 2 - Notes 0 Some Useful Iequalities. Lemma. If X is a radom variable ad g(x 0 for all x i the support of f X, the P(g(X E[g(X]. Proof. (cotiuous case P(g(X Corollaries x:g(x f X (xdx x:g(x

More information

Two hours. Statistical Tables to be provided THE UNIVERSITY OF MANCHESTER. 14 January :45 11:45

Two hours. Statistical Tables to be provided THE UNIVERSITY OF MANCHESTER. 14 January :45 11:45 Two hours Statistical Tables to be provided THE UNIVERSITY OF MANCHESTER PROBABILITY 2 14 January 2015 09:45 11:45 Answer ALL four questions in Section A (40 marks in total) and TWO of the THREE questions

More information

t x 1 e t dt, and simplify the answer when possible (for example, when r is a positive even number). In particular, confirm that EX 4 = 3.

t x 1 e t dt, and simplify the answer when possible (for example, when r is a positive even number). In particular, confirm that EX 4 = 3. Mathematical Statistics: Homewor problems General guideline. While woring outside the classroom, use any help you want, including people, computer algebra systems, Internet, and solution manuals, but mae

More information

MAS113 Introduction to Probability and Statistics. Proofs of theorems

MAS113 Introduction to Probability and Statistics. Proofs of theorems MAS113 Introduction to Probability and Statistics Proofs of theorems Theorem 1 De Morgan s Laws) See MAS110 Theorem 2 M1 By definition, B and A \ B are disjoint, and their union is A So, because m is a

More information

3 Continuous Random Variables

3 Continuous Random Variables Jinguo Lian Math437 Notes January 15, 016 3 Continuous Random Variables Remember that discrete random variables can take only a countable number of possible values. On the other hand, a continuous random

More information

1 Solution to Problem 2.1

1 Solution to Problem 2.1 Solution to Problem 2. I incorrectly worked this exercise instead of 2.2, so I decided to include the solution anyway. a) We have X Y /3, which is a - function. It maps the interval, ) where X lives) onto

More information

Stochastic Processes and Monte-Carlo Methods. University of Massachusetts: Spring 2018 version. Luc Rey-Bellet

Stochastic Processes and Monte-Carlo Methods. University of Massachusetts: Spring 2018 version. Luc Rey-Bellet Stochastic Processes and Monte-Carlo Methods University of Massachusetts: Spring 2018 version Luc Rey-Bellet April 5, 2018 Contents 1 Simulation and the Monte-Carlo method 3 1.1 Review of probability............................

More information

Math 362, Problem set 1

Math 362, Problem set 1 Math 6, roblem set Due //. (4..8) Determine the mean variance of the mean X of a rom sample of size 9 from a distribution having pdf f(x) = 4x, < x

More information

Notes on Random Vectors and Multivariate Normal

Notes on Random Vectors and Multivariate Normal MATH 590 Spring 06 Notes on Random Vectors and Multivariate Normal Properties of Random Vectors If X,, X n are random variables, then X = X,, X n ) is a random vector, with the cumulative distribution

More information

Stochastic Processes and Monte-Carlo Methods. University of Massachusetts: Spring Luc Rey-Bellet

Stochastic Processes and Monte-Carlo Methods. University of Massachusetts: Spring Luc Rey-Bellet Stochastic Processes and Monte-Carlo Methods University of Massachusetts: Spring 2010 Luc Rey-Bellet Contents 1 Random variables and Monte-Carlo method 3 1.1 Review of probability............................

More information

Actuarial models. Proof. We know that. which is. Furthermore, S X (x) = Edward Furman Risk theory / 72

Actuarial models. Proof. We know that. which is. Furthermore, S X (x) = Edward Furman Risk theory / 72 Proof. We know that which is S X Λ (x λ) = exp S X Λ (x λ) = exp Furthermore, S X (x) = { x } h X Λ (t λ)dt, 0 { x } λ a(t)dt = exp { λa(x)}. 0 Edward Furman Risk theory 4280 21 / 72 Proof. We know that

More information

x i p X (x i ) if X is discrete provided that the relevant sum or integral is absolutely convergent, i.e., i

x i p X (x i ) if X is discrete provided that the relevant sum or integral is absolutely convergent, i.e., i Chapter 5 Expectation 5. Introduction Def The expectation (mean), E[X] or µ X, of a random variable X is defined by: x i p X (x i ) if X is discrete E[X] µ X i xf(x)dx if X is continuous provided that

More information

E[X n ]= dn dt n M X(t). ). What is the mgf? Solution. Found this the other day in the Kernel matching exercise: 1 M X (t) =

E[X n ]= dn dt n M X(t). ). What is the mgf? Solution. Found this the other day in the Kernel matching exercise: 1 M X (t) = Chapter 7 Generating functions Definition 7.. Let X be a random variable. The moment generating function is given by M X (t) =E[e tx ], provided that the expectation exists for t in some neighborhood of

More information

Probability Background

Probability Background Probability Background Namrata Vaswani, Iowa State University August 24, 2015 Probability recap 1: EE 322 notes Quick test of concepts: Given random variables X 1, X 2,... X n. Compute the PDF of the second

More information