STAT:5100 (22S:193) Statistical Inference I


1 STAT:5100 (22S:193) Statistical Inference I
Week 10
Luke Tierney
University of Iowa
Fall 2015

2 Monday, October 26, 2015
Recap
Multivariate normal distribution
Linear combinations of random variables
Copula models
Conditional distributions
Conditional PMFs and PDFs
Conditional expectation
Law of total expectation

3 Monday, October 26, 2015
Conditional Expectations
Some useful properties of conditional expectation:
E[g(Y) + X | Y] = g(Y) + E[X | Y]
E[g(Y) | Y] = g(Y)
E[g(Y)X | Y] = g(Y) E[X | Y]
This shows that E[g(Y)X] = E[g(Y) E[X | Y]], since
E[g(Y)X] = E[E[g(Y)X | Y]] = E[g(Y) E[X | Y]].
This property can be taken as the definition of E[X | Y].

4 Monday, October 26, 2015
Conditional Expectations
A formal definition of conditional expectation:
Definition. Let X, Y be random variables, and assume that E[|X|] < ∞. A conditional expectation E[X | Y] of X given Y is any random variable Z such that
Z = h(Y) for some measurable function h, and
E[Xg(Y)] = E[Zg(Y)] for all bounded measurable functions g.
Theorem. A version of the conditional expectation E[X | Y] exists, and any two versions are almost surely equal.

5 Monday, October 26, 2015
Conditional Expectations
A useful consequence of the properties of conditional expectation:
Theorem. Suppose X and Y are random variables and g is a function such that E[X²] < ∞ and E[g(Y)²] < ∞. Then
E[(X - g(Y))²] = E[(X - E[X | Y])²] + E[(E[X | Y] - g(Y))²].
Notes:
E[(X - g(Y))²] is the mean squared error for using g(Y) to predict X.
The function of Y with the smallest mean squared prediction error is E[X | Y].

6 Monday, October 26, 2015
Conditional Expectations
Proof.
E[(X - g(Y))²] = E[(X - E[X | Y] + E[X | Y] - g(Y))²]
= E[(X - E[X | Y])²] + E[(E[X | Y] - g(Y))²] + 2E[(X - E[X | Y])(E[X | Y] - g(Y))]
To complete the proof we need to show that the cross product term is zero.

7 Monday, October 26, 2015
Conditional Expectations
Proof (continued).
For any function h with E[h(Y)²] < ∞,
E[(X - E[X | Y]) h(Y)] = E[E[X - E[X | Y] | Y] h(Y)],
and
E[X - E[X | Y] | Y] = E[X | Y] - E[X | Y] = 0.
So
E[(X - E[X | Y]) h(Y)] = 0.
The cross product expectation corresponds to h(Y) = E[X | Y] - g(Y) and is therefore zero.

8 Monday, October 26, 2015
Conditional Expectations
Corollary. If X, Y are random variables with E[X²] < ∞ then
Var(X) = E[Var(X | Y)] + Var(E[X | Y]).
Proof. Take g(Y) = E[X]:
Var(X) = E[(X - E[X])²]
= E[(X - E[X | Y])²] + E[(E[X | Y] - E[X])²]
= E[E[(X - E[X | Y])² | Y]] + E[(E[X | Y] - E[E[X | Y]])²]
= E[Var(X | Y)] + Var(E[X | Y])

9 Monday, October 26, 2015
Conditional Expectations
Example. Consider again the example of N customers making total purchases S = Σ_{i=1}^N X_i. Suppose the independent purchase amounts have finite variance σ². The conditional variance of S given N = n is
Var(S | N = n) = Var(Σ_{i=1}^N X_i | N = n)
= Var(Σ_{i=1}^n X_i | N = n)   (fix upper limit)
= Var(Σ_{i=1}^n X_i)   (independence of the X_i and N)
= nσ²   (mutual independence of the X_i)

10 Monday, October 26, 2015
Conditional Expectations
Example (continued). The conditional variance of S given N is the random variable Var(S | N) = Nσ². With N ~ Poisson(λ) as before, so that E[N] = Var(N) = λ, the variance of S is therefore
Var(S) = E[Var(S | N)] + Var(E[S | N])
= E[Nσ²] + Var(Nμ)
= E[N]σ² + Var(N)μ²
= λσ² + λμ²
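A quick Monte Carlo check of this variance formula in R, the language used later in these slides; the parameter values below are illustrative assumptions, not from the example:

    set.seed(42)
    lambda <- 5; mu <- 10; sigma <- 2
    S <- replicate(1e5, {
        N <- rpois(1, lambda)                  # number of customers
        sum(rnorm(N, mean = mu, sd = sigma))   # total purchases
    })
    var(S)                             # simulated variance
    lambda * sigma^2 + lambda * mu^2   # theoretical value: 520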

11 Monday, October 26, 2015
Conditional Expectations
Corollary. Let X, Y be random variables with E[X²] < ∞. Then for any g such that E[g(Y)²] < ∞,
E[(X - g(Y))²] ≥ E[(X - E[X | Y])²].
Proof. From the theorem,
E[(X - g(Y))²] = E[(X - E[X | Y])²] + E[(E[X | Y] - g(Y))²] ≥ E[(X - E[X | Y])²].
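A small simulation illustrating this inequality; the model for X and Y and the competing predictor are illustrative assumptions:

    set.seed(8)
    Y <- rnorm(1e5)
    X <- Y^2 + rnorm(1e5)    # here E[X | Y] = Y^2
    mean((X - Y^2)^2)        # MSE of E[X | Y]; about 1
    mean((X - (1 + Y))^2)    # MSE of another function of Y; noticeably larger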

12 Monday, October 26, 2015
Conditional Expectations
Conditional Expectation and Orthogonality
A distance between random variables can be defined as ‖X - Y‖ = √(E[(X - Y)²]). In terms of this distance, E[X | Y] is the closest function of Y to X. Put another way: among all functions of Y, the function with the lowest mean squared error for predicting X is E[X | Y].
The result that E[(X - E[X | Y]) h(Y)] = 0 for all h can be interpreted as: the prediction error X - E[X | Y] is orthogonal to the set of all functions of Y.
E[X | Y] can be viewed as the orthogonal projection of X onto the set of all random variables that are functions of Y.
There are strong parallels to least squares fitting in regression analysis.

13 Monday, October 26, 2015
Hierarchical Models
Often it is useful to build up models from conditional distributions.
Example. Each customer visiting a store buys something with probability p. Customers make their purchasing decisions independently. Let
X = number who buy something
N = number who come to the store
Then X | N ~ Binomial(N, p).

14 Monday, October 26, 2015
Hierarchical Models
Example (continued). Suppose the number of customers who arrive in a given period has a Poisson(λ) distribution. This is a two-stage hierarchical model:
X | N ~ Binomial(N, p)
N ~ Poisson(λ)
If the store is chosen at random, then λ would vary from store to store. This produces a three-stage hierarchical model:
X | N, λ ~ Binomial(N, p)
N | λ ~ Poisson(λ)
λ ~ f
p might also vary from store to store.

15 Monday, October 26, 2015
Hierarchical Models
Example (continued). The marginal distribution of X is a mixture distribution. In the two-stage model, X is a Poisson mixture of binomials:
f_X(x) = P(X = x) = Σ_n P(X = x, N = n) = Σ_n P(X = x | N = n) P(N = n)
= Σ_{n=x}^∞ (n choose x) p^x (1 - p)^{n-x} (λ^n/n!) e^{-λ}
= ((pλ)^x/x!) e^{-λ} Σ_{n=x}^∞ [(1 - p)λ]^{n-x}/(n - x)!
= ((pλ)^x/x!) e^{-λ+(1-p)λ}
= ((pλ)^x/x!) e^{-pλ}
So X ~ Poisson(pλ).
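A simulation check of this thinning result; the values of λ and p are illustrative assumptions:

    set.seed(1)
    lambda <- 4; p <- 0.3
    N <- rpois(1e5, lambda)
    X <- rbinom(1e5, size = N, prob = p)   # binomial draw given each N
    mean(X == 2)                           # simulated P(X = 2)
    dpois(2, p * lambda)                   # Poisson(pλ) probability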

16 Monday, October 26, 2015
Hierarchical Models
Example (continued). With a third stage,
P(X = x) = ∫_0^∞ P(X = x | λ) f(λ) dλ = ∫_0^∞ ((pλ)^x/x!) e^{-pλ} f(λ) dλ
What forms of f are both flexible and convenient? A Gamma distribution is a natural choice:
f(λ) = (1/(Γ(α) β^α)) λ^{α-1} e^{-λ/β} 1_{(0,∞)}(λ)

17 Monday, October 26, 2015
Hierarchical Models
Example (continued). Then the marginal PMF of X is
P(X = x) = ∫_0^∞ ((pλ)^x/x!) e^{-pλ} (1/(Γ(α) β^α)) λ^{α-1} e^{-λ/β} dλ
= (p^x/(x! Γ(α) β^α)) ∫_0^∞ λ^{x+α-1} e^{-λ(p+1/β)} dλ
= (p^x/(x! Γ(α) β^α)) Γ(x + α)/(p + 1/β)^{x+α}
= (Γ(x + α)/(Γ(α) x!)) (p/(p + 1/β))^x ((1/β)/(p + 1/β))^α
For α a positive integer this is a negative binomial distribution. For non-integer α this is also called a negative binomial distribution; this is a definition.
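A check of the Gamma mixture of Poissons against R's negative binomial PMF, using the success probability (1/β)/(p + 1/β) read off from the last line above; the parameter values are illustrative assumptions:

    set.seed(2)
    alpha <- 2; beta <- 3; p <- 0.4
    lambda <- rgamma(1e5, shape = alpha, scale = beta)
    X <- rpois(1e5, p * lambda)
    mean(X == 3)                                               # simulated P(X = 3)
    dnbinom(3, size = alpha, prob = (1/beta) / (p + 1/beta))   # negative binomial PMF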

18 Wednesday, October 28, 2015
Recap
Properties of conditional expectation
Variance decomposition
Geometry of conditional expectation
Hierarchical models
Poisson mixture of binomial distributions
Gamma mixture of Poisson distributions

19 Wednesday, October 28, 2015
Hierarchical Models
Example. Suppose we observe N = n customers visiting the store. What is the conditional distribution of the store's λ value? The joint density/PMF of λ and N is
f(λ, n) = f_{N|λ}(n | λ) f(λ) = (λ^n/n!) e^{-λ} (1/(Γ(α) β^α)) λ^{α-1} e^{-λ/β}
= (λ^{n+α-1}/(n! Γ(α) β^α)) e^{-λ(1+1/β)}.
The conditional density of λ given N = n is therefore
f_{λ|N}(λ | n) = f(λ, n)/f_N(n) ∝ λ^{n+α-1} e^{-λ(1+1/β)} = λ^{n+α-1} e^{-λ(β+1)/β}.
This corresponds to a Gamma(n + α, β/(1 + β)) distribution.
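A simulation check of this posterior: draw λ from the Gamma prior, N given λ from the Poisson, and condition on N = n by filtering. The parameter values are illustrative assumptions:

    set.seed(3)
    alpha <- 2; beta <- 1.5; n <- 4
    lambda <- rgamma(2e5, shape = alpha, scale = beta)
    N <- rpois(2e5, lambda)
    mean(lambda[N == n])              # simulated posterior mean of λ given N = n
    (n + alpha) * beta / (1 + beta)   # mean of Gamma(n + α, β/(1 + β))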

20 Wednesday, October 28, 2015
Hierarchical Models
Example. A poll of size n is to be taken to see whether voters prefer Candidate A or Candidate B. Assuming sampling with replacement, the number X in the sample who favor A is Binomial(n, p), with p the population proportion who favor A.
The race is close and we are fairly confident that p is between 0.4 and 0.6. We can capture this by thinking of p as a random variable with a distribution that puts about 90% probability between 0.4 and 0.6. This is our prior probability distribution on p.
Once we have collected our data we can compute a posterior probability like P(p > 0.5 | data). This is an example of Bayesian inference.

21 Wednesday, October 28, 2015
Hierarchical Models
Example (continued). A convenient form for the prior distribution is a Beta(α, β) distribution. The joint density/PMF of X, p is then of the form
f(x, p) = f_{X|p}(x | p) f(p) = (n choose x) p^x (1 - p)^{n-x} (1/B(α, β)) p^{α-1} (1 - p)^{β-1}.
The posterior density of p given X = x is
f(p | x) = f(x, p)/f_X(x) ∝ p^{x+α-1} (1 - p)^{n-x+β-1}.
This is the density of a Beta(x + α, n - x + β) distribution.

22 Wednesday, October 28, 2015
Hierarchical Models
Example (continued). A Beta distribution with α = β = 33 is symmetric about 0.5 and assigns approximately 0.9 probability to the interval between 0.4 and 0.6.
Results for three possible sample sizes n and observed counts x (the largest scenario: x = 1,046 out of n = 2,000) show that all three scenarios produce approximately the same p-value for testing H0: p ≤ 0.5 against H1: p > 0.5; the corresponding posterior probabilities P(p > 0.5 | data) come from the Beta(x + 33, n - x + 33) posteriors.
Some plots:
p <- seq(0, 1, len = 101)
plot(p, dbeta(p, 33, 33), type = "l", ylim = c(0, 40))   # the Beta(33, 33) prior
abline(v = 0.5, lty = 2)
## the posterior for each scenario is Beta(x + 33, n - x + 33);
## shown here for the x = 1,046, n = 2,000 scenario:
lines(p, dbeta(p, 1046 + 33, 2000 - 1046 + 33), col = "blue")

23 Wednesday, October 28, 2015
Hierarchical Models
Example. Suppose X, Y are bivariate normal with standard normal marginals and correlation ρ. The joint density is
f(x, y) = (1/(2π√(1 - ρ²))) exp{ -(x² + y² - 2ρxy)/(2(1 - ρ²)) }.
The conditional density of Y given X = x is
f_{Y|X}(y | x) ∝ exp{ -(y² - 2ρxy)/(2(1 - ρ²)) }.
This is a N(ρx, 1 - ρ²) density. For a general bivariate normal distribution,
Y | X = x ~ N( μ_Y + ρσ_Y (x - μ_X)/σ_X , σ_Y²(1 - ρ²) ).

24 Wednesday, October 28, 2015
Hierarchical Models
Example. Suppose X_1, X_2, given M = m, are independent N(m, 1). The marginal distribution of M is N(0, δ²). Then X_1, X_2 are bivariate normal with
E[X_1] = E[X_2] = E[E[X_1 | M]] = E[M] = 0
Var(X_1) = Var(X_2) = E[Var(X_1 | M)] + Var(E[X_1 | M]) = 1 + Var(M) = 1 + δ².
The covariance of X_1, X_2 is
Cov(X_1, X_2) = E[X_1 X_2] = E[E[X_1 X_2 | M]] = E[E[X_1 | M] E[X_2 | M]] = E[M²] = δ².
So the correlation is
ρ = δ²/(1 + δ²).
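A quick simulation of this Gaussian hierarchical model; the value of δ is an illustrative assumption:

    set.seed(4)
    delta <- 2
    M  <- rnorm(1e5, 0, delta)
    X1 <- rnorm(1e5, M, 1)
    X2 <- rnorm(1e5, M, 1)
    cor(X1, X2)               # simulated correlation
    delta^2 / (1 + delta^2)   # theoretical value: 0.8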

25 Wednesday, October 28, 2015
Change of Variables
Suppose X, Y are jointly continuous on a set A. Let U, V be defined as
U = g_1(X, Y)
V = g_2(X, Y)
The image of A under the transformation g is
B = {(u, v) : u = g_1(x, y) and v = g_2(x, y) for some (x, y) ∈ A}.
Assume g = (g_1, g_2) is one-to-one on A. Then g has an inverse h = (h_1, h_2) defined on B, and
X = h_1(U, V)
Y = h_2(U, V)

26 Wednesday, October 28, 2015
Change of Variables
A small rectangle [u, u + du] × [v, v + dv] has area du dv. The image of this rectangle under h is (approximately) a parallelogram.
[Figure: a rectangle in the (u, v) plane and its approximate parallelogram image in the (x, y) plane.]

27 Wednesday, October 28, 2015
Change of Variables
The area of this parallelogram is approximately |J(u, v)| du dv, where
J(u, v) = dx dy/du dv = det [ ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v ] = (∂x/∂u)(∂y/∂v) - (∂x/∂v)(∂y/∂u)
J is called the Jacobian determinant of the transformation. This generalizes to three or more variables in the obvious way.

28 Wednesday, October 28, 2015
Change of Variables
The density of U, V for (u, v) ∈ B can be derived as
f_{U,V}(u, v) du dv = f_{X,Y}(x, y) dx dy.
Then
f_{U,V}(u, v) = f_{X,Y}(x, y) |dx dy/du dv| = f_{X,Y}(x, y) |J(u, v)|
Alternatively,
f_{U,V}(u, v) = f_{X,Y}(h_1(u, v), h_2(u, v)) |J(u, v)|
Here
J(u, v) = det [ ∂h_1(u, v)/∂u  ∂h_1(u, v)/∂v ; ∂h_2(u, v)/∂u  ∂h_2(u, v)/∂v ]

29 Friday, October 30, 2015
Recap
Simple Bayesian inference examples
Gaussian hierarchical models
Change of variables for jointly continuous random variables

30 Friday, October 30, 2015
Change of Variables
Example. Let X, Y be independent with
X ~ Gamma(α, 1)
Y ~ Gamma(β, 1)
Define
U = X/(X + Y)
V = X + Y
Then
A = (0, ∞) × (0, ∞)
B = (0, 1) × (0, ∞).

31 Friday, October 30, 2015
Change of Variables
Example (continued). The inverse transformation is
x = h_1(u, v) = uv
y = h_2(u, v) = v - uv = (1 - u)v.
The Jacobian is
J(u, v) = det [ v  u ; -v  1 - u ] = v(1 - u) + vu = v.

32 Friday, October 30, 2015
Change of Variables
Example (continued). The joint density of U and V for 0 < u < 1 and v > 0 is therefore
f_{U,V}(u, v) = f_X(uv) f_Y((1 - u)v) v
= (1/Γ(α)) (uv)^{α-1} e^{-uv} (1/Γ(β)) ((1 - u)v)^{β-1} e^{-(1-u)v} v
= (1/(Γ(α)Γ(β))) u^{α-1} (1 - u)^{β-1} v^{α+β-1} e^{-v}
Incorporating indicator functions for B gives
f_{U,V}(u, v) = (1/(Γ(α)Γ(β))) [u^{α-1} (1 - u)^{β-1} 1_{(0,1)}(u)] [v^{α+β-1} e^{-v} 1_{(0,∞)}(v)].
So U, V are independent with
U ~ Beta(α, β)
V ~ Gamma(α + β, 1)
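A simulation check of this Beta/Gamma factorization; α and β are illustrative assumptions:

    set.seed(5)
    alpha <- 2; beta <- 5
    X <- rgamma(1e5, shape = alpha)
    Y <- rgamma(1e5, shape = beta)
    U <- X / (X + Y); V <- X + Y
    cor(U, V)                          # near 0, consistent with independence
    ks.test(U, "pbeta", alpha, beta)   # compare U with Beta(α, β)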

33 Friday, October 30, 2015
Change of Variables
Example. Suppose X, Y are uniformly distributed on the region
A = {(x, y) : -1 ≤ x ≤ 1, -1 ≤ y ≤ 1}.
The joint density of X, Y is
f_{X,Y}(x, y) = 1/4 if -1 ≤ x ≤ 1 and -1 ≤ y ≤ 1, and 0 otherwise.
Let
U = X + Y
V = X - Y.

34 Friday, October 30, 2015
Change of Variables
Example (continued). The inverse transformation is
x = (u + v)/2
y = (u - v)/2.
The range of the transformation is
B = {(u, v) : -2 ≤ u + v ≤ 2, -2 ≤ u - v ≤ 2}.
The Jacobian determinant of the inverse transformation is
J(u, v) = det [ 1/2  1/2 ; 1/2  -1/2 ] = -1/4 - 1/4 = -1/2.

35 Friday, October 30, 2015
Change of Variables
Example (continued). The joint density of U, V is therefore
f_{U,V}(u, v) = f_{X,Y}((u + v)/2, (u - v)/2) |J(u, v)|
= 1/8 if -2 ≤ u + v ≤ 2 and -2 ≤ u - v ≤ 2, and 0 otherwise.
This is a uniform distribution on the square B.
[Figure: the square B in the (u, v) plane, bounded by the lines u - v = 2, u + v = 2, u + v = -2, and u - v = -2.]

36 Friday, October 30, 2015
Change of Variables
Example (continued). We can compute the marginal density of U by integrating out v from the joint density:
f_U(u) = ∫ f_{U,V}(u, v) dv
= ∫_{-(2+u)}^{2+u} (1/8) dv = (1/8) 2(2 + u) = (2 + u)/4 if -2 ≤ u ≤ 0
= ∫_{-(2-u)}^{2-u} (1/8) dv = (1/8) 2(2 - u) = (2 - u)/4 if 0 ≤ u ≤ 2
= 0 otherwise
That is,
f_U(u) = (2 - |u|)/4 if |u| ≤ 2, and 0 otherwise.
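A plot comparing a simulated sample of U = X + Y with this triangular density:

    set.seed(6)
    X <- runif(1e5, -1, 1); Y <- runif(1e5, -1, 1)
    U <- X + Y
    hist(U, breaks = 50, freq = FALSE)
    u <- seq(-2, 2, len = 201)
    lines(u, (2 - abs(u)) / 4, col = "red")   # the derived triangular density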

37 Friday, October 30, 2015
Change of Variables
Example. Suppose X, Y are uniformly distributed on the unit disk
A = {(x, y) : x² + y² ≤ 1}.
The joint density of X and Y is
f_{X,Y}(x, y) = (1/π) 1_A(x, y)
Let
U = X/Y
V = Y

38 Friday, October 30, 2015
Change of Variables
Example (continued). The inverse transformation is
x = uv
y = v.
The range of the transformation is
B = {(u, v) : u²v² + v² ≤ 1} = {(u, v) : |v| ≤ 1/√(1 + u²)}.
The Jacobian determinant of the inverse transformation is
J(u, v) = det [ v  u ; 0  1 ] = v

39 Friday, October 30, 2015
Change of Variables
Example (continued). The joint density of U, V is therefore
f_{U,V}(u, v) = f_{X,Y}(uv, v) |J(u, v)|
= |v|/π if |v| ≤ 1/√(1 + u²), and 0 otherwise.
The marginal density of U is
f_U(u) = ∫ f_{U,V}(u, v) dv = ∫_{-1/√(1+u²)}^{1/√(1+u²)} (|v|/π) dv = 2 ∫_0^{1/√(1+u²)} (v/π) dv = (1/π) 1/(1 + u²)
This is a Cauchy density.
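A simulation check: sample uniformly on the disk by rejection from the enclosing square, form U = X/Y, and compare a tail probability with the standard Cauchy distribution:

    set.seed(7)
    X <- runif(3e5, -1, 1); Y <- runif(3e5, -1, 1)
    keep <- X^2 + Y^2 <= 1        # rejection sampling: keep points inside the disk
    U <- X[keep] / Y[keep]
    mean(abs(U) <= 1)             # simulated P(|U| <= 1)
    pcauchy(1) - pcauchy(-1)      # Cauchy value: 0.5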

40 Friday, October 30, 2015
Change of Variables
Two problems:
1. X, Y are continuous, independent. Find the density of U = X + Y.
2. X ~ N(0, 1), Y ~ χ²_p, independent. Find the density of U = X/√(Y/p) ~ t_p.
Two possible approaches:
a. Identify the region {U ≤ u} in the (x, y) plane, integrate the joint density over this region, and differentiate the result with respect to u.
b. Add another variable V so that (X, Y) → (U, V) is one-to-one, find the joint density of U, V, and integrate out V.
The second approach is often easier since it only requires a one-dimensional integral, and V can be chosen to make this integral easier. Often taking V = X or V = Y is sufficient.

41 Friday, October 30, 2015
Change of Variables
Example. For the first problem with U = X + Y take V = X. The inverse transformation is
X = V
Y = U - V
The Jacobian determinant of the inverse is
J(u, v) = det [ 0  1 ; 1  -1 ] = -1.
So
f_{U,V}(u, v) = f_X(v) f_Y(u - v)
The marginal density of U is therefore
f_U(u) = ∫ f_{U,V}(u, v) dv = ∫ f_X(v) f_Y(u - v) dv
This is called the convolution of f_X and f_Y. Transforms (e.g. moment generating functions) turn convolutions into products.
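A numerical sketch of a convolution: the density of the sum of two independent Exponential(1) variables, evaluated by the integral above and compared with the Gamma(2, 1) density it should equal (the exponential example is an illustrative assumption):

    f_U <- function(u) {
        # convolution integral over the support of the exponential densities
        integrate(function(v) dexp(v) * dexp(u - v), lower = 0, upper = u)$value
    }
    f_U(1.5)          # convolution value at u = 1.5
    dgamma(1.5, 2)    # Gamma(2, 1) density at 1.5; should agree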

42 Friday, October 30, 2015
Change of Variables
Example. For the second problem with U = X/√(Y/p) set V = Y. Then the inverse transformation is
X = U√(V/p)
Y = V
The Jacobian determinant of the inverse transformation is
J(u, v) = det [ √(v/p)  u/(2p√(v/p)) ; 0  1 ] = √(v/p).
The range of the transformation is
B = {(u, v) : -∞ < u < ∞, 0 < v < ∞}.

43 Friday, October 30, 2015
Change of Variables
Example (continued). So the joint density of U and V is
f_{U,V}(u, v) = f_X(u√(v/p)) f_Y(v) √(v/p)
= (1/√(2π)) exp{-(1/2)u²v/p} (1/(Γ(p/2)2^{p/2})) v^{p/2-1} e^{-v/2} √(v/p) for v > 0,
and 0 for v ≤ 0.

44 Friday, October 30, 2015
Change of Variables
Example (continued). The marginal density of U is therefore
f_U(u) = ∫_0^∞ (1/√(2π)) exp{-(1/2)u²v/p} (1/(Γ(p/2)2^{p/2})) v^{p/2-1} e^{-v/2} √(v/p) dv
= (1/(√(2πp) Γ(p/2) 2^{p/2})) ∫_0^∞ v^{(p+1)/2-1} e^{-(1/2)v(1+u²/p)} dv
= Γ((p + 1)/2) 2^{(p+1)/2}/(√(2πp) Γ(p/2) 2^{p/2} (1 + u²/p)^{(p+1)/2})
= Γ((p + 1)/2)/(Γ(p/2) (pπ)^{1/2} (1 + u²/p)^{(p+1)/2})
This is the density of Student's t distribution with p degrees of freedom. For p = 1 this is a Cauchy distribution.
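A quick check of the final formula against R's built-in t density; the values of p and u are illustrative assumptions:

    p <- 5; u <- 1.3
    gamma((p + 1)/2) / (gamma(p/2) * sqrt(p * pi) * (1 + u^2/p)^((p + 1)/2))
    dt(u, df = p)     # should match the line above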
