Introduction to Normal Distribution
Nathaniel E. Helwig
Assistant Professor of Psychology and Statistics
University of Minnesota (Twin Cities)
Updated 17-Jan-2017

Nathaniel E. Helwig (U of Minnesota), Introduction to Normal Distribution, Updated 17-Jan-2017: Slide 1
Copyright © 2017 by Nathaniel E. Helwig
Outline of Notes

1) Univariate Normal: distribution form, standard normal, probability calculations, affine transformations, parameter estimation
2) Bivariate Normal: distribution form, probability calculations, affine transformations, conditional distributions
3) Multivariate Normal: distribution form, probability calculations, affine transformations, conditional distributions, parameter estimation
4) Sampling Distributions: univariate case, multivariate case
Univariate Normal
Normal Density Function (Univariate)

Given a variable $x \in \mathbb{R}$, the normal probability density function (pdf) is

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\} \qquad (1)$$

where
$\mu \in \mathbb{R}$ is the mean
$\sigma > 0$ is the standard deviation ($\sigma^2$ is the variance)
$e$ is the base of the natural logarithm

Write $X \sim N(\mu, \sigma^2)$ to denote that $X$ follows a normal distribution.
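As a quick sanity check on Equation (1) — a sketch in Python with NumPy/SciPy (the slides' own examples use R), with illustrative parameter values chosen here — the density can be coded directly, compared against a library implementation, and integrated to confirm it is a proper density:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def normal_pdf(x, mu, sigma):
    """Density from Equation (1): exp(-(x-mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

mu, sigma = 1.0, 2.0  # arbitrary illustrative values

# Agrees with SciPy's implementation at several points...
xs = np.array([-3.0, 0.0, 1.0, 4.5])
max_err = float(np.max(np.abs(normal_pdf(xs, mu, sigma) - norm.pdf(xs, loc=mu, scale=sigma))))

# ...and integrates to 1 over the real line, as any pdf must.
total, _ = quad(normal_pdf, -np.inf, np.inf, args=(mu, sigma))
```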
Standard Normal Distribution

If $X \sim N(0, 1)$, then $X$ follows a standard normal distribution:

$$f(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \qquad (2)$$

[Figure: standard normal pdf $f(x)$ versus $x$.]
Probabilities and Distribution Functions

Probabilities relate to the area under the pdf:

$$P(a \le X \le b) = \int_a^b f(x)\,dx = F(b) - F(a) \qquad (3)$$

where

$$F(x) = \int_{-\infty}^{x} f(u)\,du \qquad (4)$$

is the cumulative distribution function (cdf).

Note: $F(x) = P(X \le x)$ and $0 \le F(x) \le 1$.
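The equality in Equation (3) can be checked numerically — a Python sketch with illustrative endpoints, comparing direct integration of the pdf against the cdf difference:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma, a, b = 0.0, 1.0, -1.0, 2.0  # illustrative values

# Left side of Equation (3): area under the pdf between a and b.
area, _ = quad(lambda x: norm.pdf(x, mu, sigma), a, b)

# Right side: difference of cdf values, F(b) - F(a).
cdf_diff = norm.cdf(b, mu, sigma) - norm.cdf(a, mu, sigma)
gap = abs(area - cdf_diff)
```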
Normal Probabilities

[Figure: standard normal curve divided at $\mu \pm 1\sigma$, $\mu \pm 2\sigma$, $\mu \pm 3\sigma$; moving outward from the center, the bands contain 34.1%, 13.6%, 2.1%, and 0.1% of the area on each side.]
Normal Distribution Functions (Univariate)

[Figure: normal pdfs $\varphi_{\mu,\sigma^2}(x)$ and cdfs $\Phi_{\mu,\sigma^2}(x)$ for $(\mu, \sigma^2) \in \{(0, 0.2), (0, 1.0), (0, 5.0), (-2, 0.5)\}$.]

Note that the cdf has an elongated S shape, referred to as an ogive.
Affine Transformations of Normal (Univariate)

Suppose that $X \sim N(\mu, \sigma^2)$ and $a, b \in \mathbb{R}$ with $a \neq 0$. If we define $Y = aX + b$, then $Y \sim N(a\mu + b, a^2\sigma^2)$.

Suppose that $X \sim N(1, 2)$. Determine the distributions of...
$Y = X + 3$
$Y = 2X + 3$
$Y = 3X + 2$
Affine Transformations of Normal (Univariate)

Suppose that $X \sim N(\mu, \sigma^2)$ and $a, b \in \mathbb{R}$ with $a \neq 0$. If we define $Y = aX + b$, then $Y \sim N(a\mu + b, a^2\sigma^2)$.

Suppose that $X \sim N(1, 2)$. Determine the distributions of...
$Y = X + 3 \implies Y \sim N(1(1) + 3,\ 1^2(2)) = N(4, 2)$
$Y = 2X + 3 \implies Y \sim N(2(1) + 3,\ 2^2(2)) = N(5, 8)$
$Y = 3X + 2 \implies Y \sim N(3(1) + 2,\ 3^2(2)) = N(5, 18)$
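The second case above, $Y = 2X + 3 \sim N(5, 8)$, can be verified by simulation — a Python sketch using a large Monte Carlo sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ N(1, 2): note scale is the standard deviation, sqrt(2).
x = rng.normal(loc=1.0, scale=np.sqrt(2.0), size=1_000_000)
y = 2.0 * x + 3.0  # Y = 2X + 3

# Theory: Y ~ N(2*1 + 3, 2^2 * 2) = N(5, 8).
mean_err = abs(y.mean() - 5.0)
var_err = abs(y.var() - 8.0)
```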
Likelihood Function

Suppose that $x = (x_1, \ldots, x_n)$ is an iid sample of data from a normal distribution with mean $\mu$ and variance $\sigma^2$, i.e., $x_i \overset{iid}{\sim} N(\mu, \sigma^2)$.

The likelihood function for the parameters (given the data) has the form

$$L(\mu, \sigma^2 \mid x) = \prod_{i=1}^{n} f(x_i) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{(x_i - \mu)^2}{2\sigma^2}\right\}$$

and the log-likelihood function is given by

$$LL(\mu, \sigma^2 \mid x) = \sum_{i=1}^{n} \log(f(x_i)) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2$$
Maximum Likelihood Estimate of the Mean

The MLE of the mean is the value of $\mu$ that minimizes

$$\sum_{i=1}^{n}(x_i - \mu)^2 = \sum_{i=1}^{n} x_i^2 - 2n\bar{x}\mu + n\mu^2$$

where $\bar{x} = (1/n)\sum_{i=1}^{n} x_i$ is the sample mean.

Taking the derivative with respect to $\mu$ we find that

$$\frac{\partial \sum_{i=1}^{n}(x_i - \mu)^2}{\partial \mu} = -2n\bar{x} + 2n\mu \qquad \Longrightarrow \qquad \hat{\mu} = \bar{x}$$

i.e., the sample mean $\bar{x}$ is the MLE of the population mean $\mu$.
Maximum Likelihood Estimate of the Variance

The MLE of the variance is the value of $\sigma^2$ that minimizes

$$\frac{n}{2}\log(\sigma^2) + \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \hat{\mu})^2 = \frac{n}{2}\log(\sigma^2) + \frac{\sum_{i=1}^{n} x_i^2}{2\sigma^2} - \frac{n\bar{x}^2}{2\sigma^2}$$

where $\bar{x} = (1/n)\sum_{i=1}^{n} x_i$ is the sample mean.

Taking the derivative with respect to $\sigma^2$ we find that

$$\frac{\partial}{\partial \sigma^2}\left[\frac{n}{2}\log(\sigma^2) + \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \hat{\mu})^2\right] = \frac{n}{2\sigma^2} - \frac{1}{2\sigma^4}\sum_{i=1}^{n}(x_i - \hat{\mu})^2$$

which implies that the sample variance $\hat{\sigma}^2 = (1/n)\sum_{i=1}^{n}(x_i - \bar{x})^2$ is the MLE of the population variance $\sigma^2$.
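The two closed-form MLEs can be cross-checked against a direct numerical maximization of the log-likelihood — a Python sketch on simulated data with illustrative parameters (the log-variance parameterization is a convenience to keep $\sigma^2 > 0$, not something from the slides):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=500)  # illustrative sample
n = x.size

def neg_log_lik(theta):
    # Negative of the log-likelihood given above; optimize over (mu, log sigma^2).
    mu, log_var = theta
    var = np.exp(log_var)
    return 0.5 * n * np.log(2 * np.pi * var) + np.sum((x - mu) ** 2) / (2 * var)

res = minimize(neg_log_lik, x0=[0.0, 0.0])
mu_hat_num, var_hat_num = res.x[0], np.exp(res.x[1])

# Closed-form MLEs: sample mean and the 1/n (not 1/(n-1)) sample variance.
mu_hat, var_hat = x.mean(), x.var()

mu_gap = abs(mu_hat_num - mu_hat)
var_gap = abs(var_hat_num - var_hat)
```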
Bivariate Normal
Normal Density Function (Bivariate)

Given two variables $x, y \in \mathbb{R}$, the bivariate normal pdf is

$$f(x, y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_x)^2}{\sigma_x^2} + \frac{(y-\mu_y)^2}{\sigma_y^2} - \frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y}\right]\right\} \qquad (5)$$

where
$\mu_x \in \mathbb{R}$ and $\mu_y \in \mathbb{R}$ are the marginal means
$\sigma_x \in \mathbb{R}^+$ and $\sigma_y \in \mathbb{R}^+$ are the marginal standard deviations
$-1 < \rho < 1$ is the correlation coefficient

$X$ and $Y$ are marginally normal: $X \sim N(\mu_x, \sigma_x^2)$ and $Y \sim N(\mu_y, \sigma_y^2)$.
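Equation (5) can be checked against SciPy's general multivariate normal density — a Python sketch using the same parameters as the example on the next slide ($\mu_x = \mu_y = 0$, $\sigma_x^2 = 1$, $\sigma_y^2 = 2$, $\rho = 0.6/\sqrt{2}$); the evaluation points are arbitrary:

```python
import numpy as np
from scipy.stats import multivariate_normal

def bvn_pdf(x, y, mx, my, sx, sy, rho):
    """Bivariate normal density, coded term by term from Equation (5)."""
    q = ((x - mx) / sx) ** 2 + ((y - my) / sy) ** 2 \
        - 2 * rho * (x - mx) * (y - my) / (sx * sy)
    return np.exp(-q / (2 * (1 - rho ** 2))) / (2 * np.pi * sx * sy * np.sqrt(1 - rho ** 2))

mx, my, sx, sy = 0.0, 0.0, 1.0, np.sqrt(2.0)
rho = 0.6 / np.sqrt(2.0)
cov = [[sx ** 2, rho * sx * sy], [rho * sx * sy, sy ** 2]]
rv = multivariate_normal(mean=[mx, my], cov=cov)

pts = [(0.0, 0.0), (1.0, -0.5), (-2.0, 1.5)]
max_err = max(abs(bvn_pdf(x, y, mx, my, sx, sy, rho) - rv.pdf([x, y])) for x, y in pts)
```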
Example: $\mu_x = \mu_y = 0$, $\sigma_x^2 = 1$, $\sigma_y^2 = 2$, $\rho = 0.6/\sqrt{2}$

[Figure: surface and contour plots of the bivariate normal density $f(x, y)$.]
Example: Different Means

[Figure: bivariate normal density surfaces for $(\mu_x, \mu_y) = (0, 0)$, $(1, 2)$, and $(1, 1)$.]

Note: for all three plots $\sigma_x^2 = 1$, $\sigma_y^2 = 2$, and $\rho = 0.6/\sqrt{2}$.
Example: Different Correlations

[Figure: bivariate normal density surfaces for $\rho = 0.6/\sqrt{2}$, $\rho = 0$, and $\rho = 1.2/\sqrt{2}$.]

Note: for all three plots $\mu_x = \mu_y = 0$, $\sigma_x^2 = 1$, and $\sigma_y^2 = 2$.
Example: Different Variances

[Figure: bivariate normal density surfaces for $\sigma_y = 1$, $\sigma_y = \sqrt{2}$, and $\sigma_y = 2$.]

Note: for all three plots $\mu_x = \mu_y = 0$, $\sigma_x^2 = 1$, and $\rho = 0.6/(\sigma_x\sigma_y)$.
Probabilities and Multiple Integration

Probabilities still relate to the area under the pdf:

$$P(a_x \le X \le b_x \text{ and } a_y \le Y \le b_y) = \int_{a_x}^{b_x}\int_{a_y}^{b_y} f(x, y)\,dy\,dx \qquad (6)$$

where $\int\!\int f(x, y)\,dy\,dx$ denotes the multiple integral of the pdf $f(x, y)$.

Defining $z = (x, y)$, we can still define the cdf:

$$F(z) = P(X \le x \text{ and } Y \le y) = \int_{-\infty}^{x}\int_{-\infty}^{y} f(u, v)\,dv\,du \qquad (7)$$
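Equations (6) and (7) connect: the rectangle probability in (6) equals an inclusion-exclusion combination of four cdf values. A Python sketch with illustrative parameters and bounds checks this numerically:

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.integrate import dblquad

# Illustrative bivariate normal and rectangle (not from the slides).
rv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.6], [0.6, 2.0]])
ax, bx, ay, by = -1.0, 1.0, -0.5, 1.5

# Equation (6): double integral of the pdf over the rectangle.
area, _ = dblquad(lambda y, x: rv.pdf([x, y]), ax, bx, ay, by)

# Same probability from the cdf (7) by inclusion-exclusion over the corners.
p = (rv.cdf([bx, by]) - rv.cdf([ax, by])
     - rv.cdf([bx, ay]) + rv.cdf([ax, ay]))
gap = abs(area - p)
```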
Normal Distribution Functions (Bivariate)

[Figure: surface plots of the bivariate normal pdf $f(x, y)$ and cdf $F(x, y)$ with $\mu_x = \mu_y = 0$, $\sigma_x^2 = 1$, $\sigma_y^2 = 2$, and $\rho = 0.6/\sqrt{2}$.]

Note that the cdf still has an ogive shape (now in two dimensions).
Affine Transformations of Normal (Bivariate)

Given $z = (x, y)'$, suppose that $z \sim N(\mu, \Sigma)$ where

$\mu = (\mu_x, \mu_y)'$ is the $2 \times 1$ mean vector

$\Sigma = \begin{pmatrix} \sigma_x^2 & \rho\sigma_x\sigma_y \\ \rho\sigma_x\sigma_y & \sigma_y^2 \end{pmatrix}$ is the $2 \times 2$ covariance matrix

Let $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ and $b = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}$ with $A \neq 0_{2 \times 2}$.

If we define $w = Az + b$, then $w \sim N(A\mu + b, A\Sigma A')$.
Conditional Normal (Bivariate)

The conditional distribution of a variable $Y$ given $X = x$ is

$$f_{Y|X}(y \mid X = x) = \frac{f_{XY}(x, y)}{f_X(x)} \qquad (8)$$

where
$f_{XY}(x, y)$ is the joint pdf of $X$ and $Y$
$f_X(x)$ is the marginal pdf of $X$

In the bivariate normal case, we have that

$$Y \mid X \sim N(\mu^*, \sigma_*^2) \qquad (9)$$

where $\mu^* = \mu_y + \rho\dfrac{\sigma_y}{\sigma_x}(x - \mu_x)$ and $\sigma_*^2 = \sigma_y^2(1 - \rho^2)$.
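Equation (9) can be checked by simulation — a Python sketch using the exam-score parameters from Example #1 below ($\mu_x = 70$, $\mu_y = 60$, $\sigma_x = 10$, $\sigma_y = 15$, $\rho = 0.6$), conditioning approximately by keeping draws with $X$ in a narrow window around 80:

```python
import numpy as np

rng = np.random.default_rng(2)
mx, my, sx, sy, rho = 70.0, 60.0, 10.0, 15.0, 0.6
cov = [[sx ** 2, rho * sx * sy], [rho * sx * sy, sy ** 2]]

xy = rng.multivariate_normal([mx, my], cov, size=2_000_000)

# Approximate conditioning: Y values whose X landed near x0 = 80.
x0 = 80.0
y_given_x = xy[np.abs(xy[:, 0] - x0) < 0.5, 1]

# Theory from Equation (9): mu* = 60 + 0.6*(15/10)*(80-70) = 69, sigma*^2 = 225*(1-0.36) = 144.
mu_star = my + rho * (sy / sx) * (x0 - mx)
var_star = sy ** 2 * (1 - rho ** 2)

mean_gap = abs(y_given_x.mean() - mu_star)
var_gap = abs(y_given_x.var() - var_star)
```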
Derivation of Conditional Normal

To prove Equation (9), simply write out the definition and simplify:

$$\begin{aligned}
f_{Y|X}(y \mid X = x) &= \frac{f_{XY}(x, y)}{f_X(x)} \\
&= \frac{\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_x)^2}{\sigma_x^2} + \frac{(y-\mu_y)^2}{\sigma_y^2} - \frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y}\right]\right\} \Big/ \left(2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}\right)}{\exp\left\{-\frac{(x-\mu_x)^2}{2\sigma_x^2}\right\} \Big/ \left(\sigma_x\sqrt{2\pi}\right)} \\
&= \frac{\exp\left\{-\frac{1}{2(1-\rho^2)}\left[\frac{\rho^2(x-\mu_x)^2}{\sigma_x^2} + \frac{(y-\mu_y)^2}{\sigma_y^2} - \frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y}\right]\right\}}{\sqrt{2\pi}\,\sigma_y\sqrt{1-\rho^2}} \\
&= \frac{\exp\left\{-\frac{1}{2\sigma_y^2(1-\rho^2)}\left[y - \mu_y - \rho\frac{\sigma_y}{\sigma_x}(x-\mu_x)\right]^2\right\}}{\sqrt{2\pi}\,\sigma_y\sqrt{1-\rho^2}}
\end{aligned}$$

which completes the proof.
Statistical Independence for Bivariate Normal

Two variables $X$ and $Y$ are statistically independent if

$$f_{XY}(x, y) = f_X(x)f_Y(y) \qquad (10)$$

where $f_{XY}(x, y)$ is the joint pdf, and $f_X(x)$ and $f_Y(y)$ are the marginal pdfs.

Note that if $X$ and $Y$ are independent, then

$$f_{Y|X}(y \mid X = x) = \frac{f_{XY}(x, y)}{f_X(x)} = \frac{f_X(x)f_Y(y)}{f_X(x)} = f_Y(y) \qquad (11)$$

so conditioning on $X = x$ does not change the distribution of $Y$.

If $X$ and $Y$ are bivariate normal, what is the necessary and sufficient condition for $X$ and $Y$ to be independent? Hint: see Equation (9).
Example #1

A statistics class takes two exams, $X$ (Exam 1) and $Y$ (Exam 2), where the scores follow a bivariate normal distribution with parameters:
$\mu_x = 70$ and $\mu_y = 60$ are the marginal means
$\sigma_x = 10$ and $\sigma_y = 15$ are the marginal standard deviations
$\rho = 0.6$ is the correlation coefficient

Suppose we select a student at random. What is the probability that...
(a) the student scores over 75 on Exam 2?
(b) the student scores over 75 on Exam 2, given that the student scored $X = 80$ on Exam 1?
(c) the sum of the Exam 1 and Exam 2 scores is over 150?
(d) the student did better on Exam 1 than Exam 2?
(e) $P(5X - 4Y > 150)$?
Example #1: Part (a)

Answer for 1(a): Note that $Y \sim N(60, 15^2)$, so the probability that the student scores over 75 on Exam 2 is

$$P(Y > 75) = P\left(Z > \frac{75 - 60}{15}\right) = P(Z > 1) = 1 - P(Z < 1) = 1 - \Phi(1) \approx 1 - 0.8413 = 0.1587$$

where $\Phi(x) = \int_{-\infty}^{x} f(z)\,dz$ with $f(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ denoting the standard normal pdf (see the R code for the use of pnorm to calculate this quantity).
Example #1: Part (b)

Answer for 1(b): Note that $(Y \mid X = 80) \sim N(\mu^*, \sigma_*^2)$ where

$\mu^* = \mu_Y + \rho\frac{\sigma_Y}{\sigma_X}(x - \mu_X) = 60 + (0.6)(15/10)(80 - 70) = 69$
$\sigma_*^2 = \sigma_Y^2(1 - \rho^2) = 15^2(1 - 0.6^2) = 144$

If a student scored $X = 80$ on Exam 1, the probability that the student scores over 75 on Exam 2 is

$$P(Y > 75 \mid X = 80) = P\left(Z > \frac{75 - 69}{12}\right) = P(Z > 0.5) = 1 - \Phi(0.5) \approx 1 - 0.6915 = 0.3085$$
Example #1: Part (c)

Answer for 1(c): Note that $(X + Y) \sim N(\mu, \sigma^2)$ where

$\mu = \mu_X + \mu_Y = 70 + 60 = 130$
$\sigma^2 = \sigma_X^2 + \sigma_Y^2 + 2\rho\sigma_X\sigma_Y = 100 + 225 + 2(0.6)(10)(15) = 505$

The probability that the sum of Exam 1 and Exam 2 is above 150 is

$$P(X + Y > 150) = P\left(Z > \frac{150 - 130}{\sqrt{505}}\right) = P(Z > 0.89) = 1 - \Phi(0.89) \approx 0.1867$$
Example #1: Part (d)

Answer for 1(d): Note that $(X - Y) \sim N(\mu, \sigma^2)$ where

$\mu = \mu_X - \mu_Y = 70 - 60 = 10$
$\sigma^2 = \sigma_X^2 + \sigma_Y^2 - 2\rho\sigma_X\sigma_Y = 100 + 225 - 2(0.6)(10)(15) = 145$

The probability that the student did better on Exam 1 than Exam 2 is

$$P(X > Y) = P(X - Y > 0) = P\left(Z > \frac{0 - 10}{\sqrt{145}}\right) = P(Z > -0.8305) = 1 - \Phi(-0.8305) \approx 0.7969$$
Example #1: Part (e)

Answer for 1(e): Note that $(5X - 4Y) \sim N(\mu, \sigma^2)$ where

$\mu = 5\mu_X - 4\mu_Y = 5(70) - 4(60) = 110$
$\sigma^2 = 5^2\sigma_X^2 + (-4)^2\sigma_Y^2 + 2(5)(-4)\rho\sigma_X\sigma_Y = 25(10^2) + 16(15^2) - 2(20)(0.6)(10)(15) = 2500$

Thus, the needed probability can be obtained using

$$P(5X - 4Y > 150) = P\left(Z > \frac{150 - 110}{50}\right) = P(Z > 0.8) = 1 - \Phi(0.8) \approx 1 - 0.7881 = 0.2119$$
Example #1: R Code

# (outputs shown rounded to 4 decimal places)
# Example 1a
> pnorm(1, lower.tail = FALSE)
[1] 0.1587
> pnorm(75, mean = 60, sd = 15, lower.tail = FALSE)
[1] 0.1587

# Example 1b
> pnorm(0.5, lower.tail = FALSE)
[1] 0.3085
> pnorm(75, mean = 69, sd = 12, lower.tail = FALSE)
[1] 0.3085

# Example 1c
> pnorm(20/sqrt(505), lower.tail = FALSE)
[1] 0.1867
> pnorm(150, mean = 130, sd = sqrt(505), lower.tail = FALSE)
[1] 0.1867

# Example 1d
> pnorm(-10/sqrt(145), lower.tail = FALSE)
[1] 0.7969
> pnorm(0, mean = 10, sd = sqrt(145), lower.tail = FALSE)
[1] 0.7969

# Example 1e
> pnorm(0.8, lower.tail = FALSE)
[1] 0.2119
> pnorm(150, mean = 110, sd = 50, lower.tail = FALSE)
[1] 0.2119
Multivariate Normal
Normal Density Function (Multivariate)

Given $x = (x_1, \ldots, x_p)'$ with $x_j \in \mathbb{R}$ for all $j$, the multivariate normal pdf is

$$f(x) = \frac{1}{(2\pi)^{p/2}|\Sigma|^{1/2}} \exp\left\{-\frac{1}{2}(x - \mu)'\Sigma^{-1}(x - \mu)\right\} \qquad (12)$$

where
$\mu = (\mu_1, \ldots, \mu_p)'$ is the $p \times 1$ mean vector

$\Sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1p} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{p1} & \sigma_{p2} & \cdots & \sigma_{pp} \end{pmatrix}$ is the $p \times p$ covariance matrix.

Write $x \sim N(\mu, \Sigma)$ or $x \sim N_p(\mu, \Sigma)$ to denote that $x$ is multivariate normal.
Some Multivariate Normal Properties

The mean and covariance parameters have the following restrictions:
$\mu_j \in \mathbb{R}$ for all $j$
$\sigma_{jj} > 0$ for all $j$
$\sigma_{ij} = \rho_{ij}\sqrt{\sigma_{ii}\sigma_{jj}}$ where $\rho_{ij}$ is the correlation between $X_i$ and $X_j$
$\sigma_{ij}^2 \le \sigma_{ii}\sigma_{jj}$ for any $i, j \in \{1, \ldots, p\}$ (Cauchy-Schwarz)

$\Sigma$ is assumed to be positive definite so that $\Sigma^{-1}$ exists.

Marginals are normal: $X_j \sim N(\mu_j, \sigma_{jj})$ for all $j \in \{1, \ldots, p\}$.
Multivariate Normal Probabilities

Probabilities still relate to the area under the pdf:

$$P(a_j \le X_j \le b_j \ \forall j) = \int_{a_1}^{b_1} \cdots \int_{a_p}^{b_p} f(x)\,dx_p \cdots dx_1 \qquad (13)$$

where $\int \cdots \int f(x)\,dx_p \cdots dx_1$ denotes the multiple integral of $f(x)$.

We can still define the cdf of $x = (x_1, \ldots, x_p)'$:

$$F(x) = P(X_j \le x_j \ \forall j) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_p} f(u)\,du_p \cdots du_1 \qquad (14)$$
Affine Transformations of Normal (Multivariate)

Suppose that $x = (x_1, \ldots, x_p)'$ and that $x \sim N(\mu, \Sigma)$ where
$\mu = \{\mu_j\}_{p \times 1}$ is the mean vector
$\Sigma = \{\sigma_{ij}\}_{p \times p}$ is the covariance matrix

Let $A = \{a_{ij}\}_{n \times p}$ and $b = \{b_i\}_{n \times 1}$ with $A \neq 0_{n \times p}$.

If we define $w = Ax + b$, then $w \sim N(A\mu + b, A\Sigma A')$.

Note: linear combinations of normal variables are normally distributed.
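The $w \sim N(A\mu + b, A\Sigma A')$ result can be checked by simulation — a Python sketch with illustrative $\mu$, $\Sigma$, $A$, and $b$ (not taken from the slides), comparing the theoretical mean and covariance of $w$ with empirical estimates:

```python
import numpy as np

rng = np.random.default_rng(3)

mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
A = np.array([[1.0, -1.0, 0.0],
              [0.5,  0.5, 1.0]])   # n = 2, p = 3
b = np.array([1.0, -2.0])

# Theory: w = Ax + b ~ N(A mu + b, A Sigma A').
mean_w = A @ mu + b
cov_w = A @ Sigma @ A.T

x = rng.multivariate_normal(mu, Sigma, size=500_000)
w = x @ A.T + b
mean_gap = float(np.max(np.abs(w.mean(axis=0) - mean_w)))
cov_gap = float(np.max(np.abs(np.cov(w.T, bias=True) - cov_w)))
```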
Multivariate Conditional Distributions

Given variables $x = (x_1, \ldots, x_p)'$ and $y = (y_1, \ldots, y_q)'$, we have

$$f_{Y|X}(y \mid X = x) = \frac{f_{XY}(x, y)}{f_X(x)} \qquad (15)$$

where
$f_{Y|X}(y \mid X = x)$ is the conditional distribution of $y$ given $x$
$f_{XY}(x, y)$ is the joint pdf of $x$ and $y$
$f_X(x)$ is the marginal pdf of $x$
Conditional Normal (Multivariate)

Suppose that $z \sim N(\mu, \Sigma)$ where

$z = (x', y')' = (x_1, \ldots, x_p, y_1, \ldots, y_q)'$

$\mu = (\mu_x', \mu_y')' = (\mu_{1x}, \ldots, \mu_{px}, \mu_{1y}, \ldots, \mu_{qy})'$
Note: $\mu_x$ is the mean vector of $x$, and $\mu_y$ is the mean vector of $y$

$\Sigma = \begin{pmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{xy}' & \Sigma_{yy} \end{pmatrix}$ where $(\Sigma_{xx})_{p \times p}$, $(\Sigma_{yy})_{q \times q}$, and $(\Sigma_{xy})_{p \times q}$
Note: $\Sigma_{xx}$ is the covariance matrix of $x$, $\Sigma_{yy}$ is the covariance matrix of $y$, and $\Sigma_{xy}$ is the covariance matrix of $x$ and $y$

In the multivariate normal case, we have that

$$y \mid x \sim N(\mu^*, \Sigma^*) \qquad (16)$$

where $\mu^* = \mu_y + \Sigma_{xy}'\Sigma_{xx}^{-1}(x - \mu_x)$ and $\Sigma^* = \Sigma_{yy} - \Sigma_{xy}'\Sigma_{xx}^{-1}\Sigma_{xy}$.
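As a consistency check, the general formulas in Equation (16) should reduce to the bivariate formulas in Equation (9) when $p = q = 1$. A Python sketch using the Example #1 parameters, where Equation (9) gives $\mu^* = 69$ and $\sigma_*^2 = 144$:

```python
import numpy as np

# Bivariate special case: p = q = 1 with the exam-score parameters.
mx, my, sx, sy, rho, x = 70.0, 60.0, 10.0, 15.0, 0.6, 80.0

Sxx = np.array([[sx ** 2]])            # 1 x 1 covariance of x
Syy = np.array([[sy ** 2]])            # 1 x 1 covariance of y
Sxy = np.array([[rho * sx * sy]])      # 1 x 1 cross-covariance

# Equation (16): mu* = mu_y + Sxy' Sxx^{-1} (x - mu_x), Sigma* = Syy - Sxy' Sxx^{-1} Sxy.
Sxx_inv = np.linalg.inv(Sxx)
mu_star = my + float(Sxy.T @ Sxx_inv @ np.array([x - mx]))
Sigma_star = float((Syy - Sxy.T @ Sxx_inv @ Sxy)[0, 0])
```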
Statistical Independence for Multivariate Normal

Using Equation (16), we have that

$$y \mid x \sim N(\mu^*, \Sigma^*) \equiv N(\mu_y, \Sigma_{yy}) \qquad (17)$$

if and only if $\Sigma_{xy} = 0_{p \times q}$ (a matrix of zeros).

Note that $\Sigma_{xy} = 0_{p \times q}$ implies that the $p$ elements of $x$ are uncorrelated with the $q$ elements of $y$.

For multivariate normal variables: uncorrelated $\iff$ independent.
For non-normal variables: uncorrelated $\not\Rightarrow$ independent.
Example #2

Each Delicious Candy Company store makes 3 sizes of candy bar: regular ($X_1$), fun size ($X_2$), and big size ($X_3$). Assume the weights (in ounces) of the candy bars $(X_1, X_2, X_3)$ follow a multivariate normal distribution with mean vector $\mu = (5, 3, 7)'$ and a $3 \times 3$ covariance matrix $\Sigma$ with $\sigma_{11} = 4$.

Suppose we select a store at random. What is the probability that...
(a) the weight of a regular candy bar is greater than 8 oz?
(b) the weight of a regular candy bar is greater than 8 oz, given that the fun size bar weighs 1 oz and the big size bar weighs 10 oz?
(c) $P(4X_1 - 3X_2 + 5X_3 < 63)$?
Example #2: Part (a)

Answer for 2(a): Note that $X_1 \sim N(5, 4)$. So, the probability that the regular bar is more than 8 oz is

$$P(X_1 > 8) = P\left(Z > \frac{8 - 5}{2}\right) = P(Z > 1.5) = 1 - \Phi(1.5) \approx 1 - 0.9332 = 0.0668$$
Example #2: Part (b)

Answer for 2(b): $(X_1 \mid X_2 = 1, X_3 = 10)$ is normally distributed; see Equation (16). With $\Sigma_{12}$ denoting the covariance of $X_1$ with $(X_2, X_3)$ and $\Sigma_{22}$ the covariance matrix of $(X_2, X_3)$, the conditional mean of $(X_1 \mid X_2 = 1, X_3 = 10)$ is given by

$$\mu_* = \mu_{X_1} + \Sigma_{12}\Sigma_{22}^{-1}\begin{pmatrix} 1 - 3 \\ 10 - 7 \end{pmatrix} = 5 + \frac{24}{32} = 5.75$$
Example #2: Part (b) continued

Answer for 2(b) continued: The conditional variance of $(X_1 \mid X_2 = 1, X_3 = 10)$ is given by

$$\sigma_*^2 = \sigma_{X_1}^2 - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{12}' = 4 - \frac{9}{32} = \frac{119}{32}$$
Example #2: Part (b) continued

Answer for 2(b) continued: So, if the fun size bar weighs 1 oz and the big size bar weighs 10 oz, the probability that the regular bar is more than 8 oz is

$$P(X_1 > 8 \mid X_2 = 1, X_3 = 10) = P\left(Z > \frac{8 - 5.75}{\sqrt{119/32}}\right) = P(Z > 1.1668) = 1 - \Phi(1.1668) \approx 0.1217$$
Example #2: Part (c)

Answer for 2(c): $(4X_1 - 3X_2 + 5X_3)$ is normally distributed. The expectation of $(4X_1 - 3X_2 + 5X_3)$ is given by

$$\mu = 4\mu_{X_1} - 3\mu_{X_2} + 5\mu_{X_3} = 4(5) - 3(3) + 5(7) = 46$$
Example #2: Part (c) continued

Answer for 2(c) continued: The variance of $(4X_1 - 3X_2 + 5X_3)$ is given by

$$\sigma^2 = \begin{pmatrix} 4 & -3 & 5 \end{pmatrix} \Sigma \begin{pmatrix} 4 \\ -3 \\ 5 \end{pmatrix} = 289$$
Example #2: Part (c) continued

Answer for 2(c) continued: So, the needed probability can be obtained as

$$P(4X_1 - 3X_2 + 5X_3 < 63) = P\left(Z < \frac{63 - 46}{17}\right) = P(Z < 1) = \Phi(1) \approx 0.8413$$
Example #2: R Code

# (outputs shown rounded to 4 decimal places)
# Example 2a
> pnorm(1.5, lower.tail = FALSE)
[1] 0.0668
> pnorm(8, mean = 5, sd = 2, lower.tail = FALSE)
[1] 0.0668

# Example 2b
> pnorm(2.25/sqrt(119/32), lower.tail = FALSE)
[1] 0.1217
> pnorm(8, mean = 5.75, sd = sqrt(119/32), lower.tail = FALSE)
[1] 0.1217

# Example 2c
> pnorm(1)
[1] 0.8413
> pnorm(63, mean = 46, sd = 17)
[1] 0.8413
Likelihood Function

Suppose that $x_i = (x_{i1}, \ldots, x_{ip})'$ is a sample from a normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$, i.e., $x_i \overset{iid}{\sim} N(\mu, \Sigma)$.

The likelihood function for the parameters (given the data) has the form

$$L(\mu, \Sigma \mid X) = \prod_{i=1}^{n} f(x_i) = \prod_{i=1}^{n} \frac{1}{(2\pi)^{p/2}|\Sigma|^{1/2}} \exp\left\{-\frac{1}{2}(x_i - \mu)'\Sigma^{-1}(x_i - \mu)\right\}$$

and the log-likelihood function is given by

$$LL(\mu, \Sigma \mid X) = -\frac{np}{2}\log(2\pi) - \frac{n}{2}\log(|\Sigma|) - \frac{1}{2}\sum_{i=1}^{n}(x_i - \mu)'\Sigma^{-1}(x_i - \mu)$$
Maximum Likelihood Estimate of Mean Vector

The MLE of the mean vector is the value of $\mu$ that minimizes

$$\sum_{i=1}^{n}(x_i - \mu)'\Sigma^{-1}(x_i - \mu) = \sum_{i=1}^{n} x_i'\Sigma^{-1}x_i - 2n\bar{x}'\Sigma^{-1}\mu + n\mu'\Sigma^{-1}\mu$$

where $\bar{x} = (1/n)\sum_{i=1}^{n} x_i$ is the sample mean vector.

Taking the derivative with respect to $\mu$ we find that

$$\frac{\partial \sum_{i=1}^{n}(x_i - \mu)'\Sigma^{-1}(x_i - \mu)}{\partial \mu} = -2n\Sigma^{-1}\bar{x} + 2n\Sigma^{-1}\mu \qquad \Longrightarrow \qquad \hat{\mu} = \bar{x}$$

The sample mean vector $\bar{x}$ is the MLE of the population mean vector $\mu$.
Maximum Likelihood Estimate of Covariance Matrix

The MLE of the covariance matrix is the value of $\Sigma$ that minimizes

$$-n\log(|\Sigma^{-1}|) + \sum_{i=1}^{n}\mathrm{tr}\{\Sigma^{-1}(x_i - \hat{\mu})(x_i - \hat{\mu})'\}$$

where $\hat{\mu} = \bar{x} = (1/n)\sum_{i=1}^{n} x_i$ is the sample mean.

Taking the derivative with respect to $\Sigma^{-1}$ we find that

$$\frac{\partial LL(\mu, \Sigma \mid X)}{\partial \Sigma^{-1}} = \frac{n}{2}\Sigma - \frac{1}{2}\sum_{i=1}^{n}(x_i - \hat{\mu})(x_i - \hat{\mu})'$$

i.e., the sample covariance matrix $\hat{\Sigma} = (1/n)\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})'$ is the MLE of the population covariance matrix $\Sigma$.
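The closed-form MLEs $\hat{\mu} = \bar{x}$ and $\hat{\Sigma} = (1/n)\sum_i(x_i - \bar{x})(x_i - \bar{x})'$ are easy to compute directly — a Python sketch on simulated data with illustrative parameters, noting the $1/n$ (not $1/(n-1)$) divisor:

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
X = rng.multivariate_normal(mu, Sigma, size=1000)
n = X.shape[0]

mu_hat = X.mean(axis=0)                       # MLE of mu: the sample mean vector
centered = X - mu_hat
Sigma_hat = centered.T @ centered / n         # MLE of Sigma: 1/n, the biased estimator

# Agrees with NumPy's biased covariance, and is close to the truth for n = 1000.
cov_gap = float(np.max(np.abs(Sigma_hat - np.cov(X.T, bias=True))))
mu_err = float(np.max(np.abs(mu_hat - mu)))
sig_err = float(np.max(np.abs(Sigma_hat - Sigma)))
```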
Sampling Distributions
Univariate Sampling Distributions: $\bar{x}$ and $s^2$

In the univariate normal case, we have that

$\bar{x} = (1/n)\sum_{i=1}^{n} x_i \sim N(\mu, \sigma^2/n)$

$(n-1)s^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2 \sim \sigma^2\chi_{n-1}^2$

$\chi_k^2$ denotes a chi-square variable with $k$ degrees of freedom: $\sigma^2\chi_k^2 = \sum_{i=1}^{k} z_i^2$ where $z_i \overset{iid}{\sim} N(0, \sigma^2)$.
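Both facts can be checked by simulating many samples — a Python sketch with illustrative $\mu$, $\sigma$, and $n$, comparing the moments of $\bar{x}$ and of $(n-1)s^2/\sigma^2$ (a $\chi_{n-1}^2$ variable has mean $n-1$ and variance $2(n-1)$):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, n, reps = 10.0, 3.0, 25, 200_000  # illustrative values

x = rng.normal(mu, sigma, size=(reps, n))

# Sample means: xbar ~ N(mu, sigma^2 / n).
xbar = x.mean(axis=1)
mean_gap = abs(xbar.mean() - mu)
var_gap = abs(xbar.var() - sigma ** 2 / n)

# Scaled sample variances: (n-1) s^2 / sigma^2 ~ chi-square with n-1 df.
q = (n - 1) * x.var(axis=1, ddof=1) / sigma ** 2
chi_mean_gap = abs(q.mean() - (n - 1))
chi_var_gap = abs(q.var() - 2 * (n - 1))
```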
Multivariate Sampling Distributions: $\bar{x}$ and $S$

In the multivariate normal case, we have that

$\bar{x} = (1/n)\sum_{i=1}^{n} x_i \sim N(\mu, \Sigma/n)$

$(n-1)S = \sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})' \sim W_{n-1}(\Sigma)$

$W_k(\Sigma)$ denotes a Wishart variable with $k$ degrees of freedom: $W_k(\Sigma) = \sum_{i=1}^{k} z_i z_i'$ where $z_i \overset{iid}{\sim} N(0_p, \Sigma)$.
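The constructive definition of the Wishart can be simulated directly — a Python sketch with an illustrative $\Sigma$ and $k$, checking that the average of many $W_k(\Sigma)$ draws is close to its expectation $k\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(6)
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])   # illustrative 2 x 2 covariance
k, reps = 10, 100_000

# W_k(Sigma) = sum of k outer products z_i z_i' with z_i ~ N(0, Sigma).
z = rng.multivariate_normal(np.zeros(2), Sigma, size=(reps, k))
w = np.einsum('rki,rkj->rij', z, z)   # one Wishart draw per replicate r

# E[W_k(Sigma)] = k * Sigma.
mean_gap = float(np.max(np.abs(w.mean(axis=0) - k * Sigma)))
```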
FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important
More informationMAS223 Statistical Inference and Modelling Exercises
MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,
More informationA Probability Review
A Probability Review Outline: A probability review Shorthand notation: RV stands for random variable EE 527, Detection and Estimation Theory, # 0b 1 A Probability Review Reading: Go over handouts 2 5 in
More informationProbability and Distributions
Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated
More informationx. Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ 2 ).
.8.6 µ =, σ = 1 µ = 1, σ = 1 / µ =, σ =.. 3 1 1 3 x Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ ). The Gaussian distribution Probably the most-important distribution in all of statistics
More informationQuick Tour of Basic Probability Theory and Linear Algebra
Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra CS224w: Social and Information Network Analysis Fall 2011 Quick Tour of and Linear Algebra Quick Tour of and Linear Algebra Outline Definitions
More informationMultivariate Random Variable
Multivariate Random Variable Author: Author: Andrés Hincapié and Linyi Cao This Version: August 7, 2016 Multivariate Random Variable 3 Now we consider models with more than one r.v. These are called multivariate
More informationBASICS OF PROBABILITY
October 10, 2018 BASICS OF PROBABILITY Randomness, sample space and probability Probability is concerned with random experiments. That is, an experiment, the outcome of which cannot be predicted with certainty,
More information5 Operations on Multiple Random Variables
EE360 Random Signal analysis Chapter 5: Operations on Multiple Random Variables 5 Operations on Multiple Random Variables Expected value of a function of r.v. s Two r.v. s: ḡ = E[g(X, Y )] = g(x, y)f X,Y
More informationBivariate Distributions
Bivariate Distributions EGR 260 R. Van Til Industrial & Systems Engineering Dept. Copyright 2013. Robert P. Van Til. All rights reserved. 1 What s It All About? Many random processes produce Examples.»
More informationCovariance. Lecture 20: Covariance / Correlation & General Bivariate Normal. Covariance, cont. Properties of Covariance
Covariance Lecture 0: Covariance / Correlation & General Bivariate Normal Sta30 / Mth 30 We have previously discussed Covariance in relation to the variance of the sum of two random variables Review Lecture
More informationThe Multivariate Normal Distribution 1
The Multivariate Normal Distribution 1 STA 302 Fall 2017 1 See last slide for copyright information. 1 / 40 Overview 1 Moment-generating Functions 2 Definition 3 Properties 4 χ 2 and t distributions 2
More informationLecture 15. Hypothesis testing in the linear model
14. Lecture 15. Hypothesis testing in the linear model Lecture 15. Hypothesis testing in the linear model 1 (1 1) Preliminary lemma 15. Hypothesis testing in the linear model 15.1. Preliminary lemma Lemma
More informationUCSD ECE153 Handout #34 Prof. Young-Han Kim Tuesday, May 27, Solutions to Homework Set #6 (Prepared by TA Fatemeh Arbabjolfaei)
UCSD ECE53 Handout #34 Prof Young-Han Kim Tuesday, May 7, 04 Solutions to Homework Set #6 (Prepared by TA Fatemeh Arbabjolfaei) Linear estimator Consider a channel with the observation Y XZ, where the
More information5.1 Consistency of least squares estimates. We begin with a few consistency results that stand on their own and do not depend on normality.
88 Chapter 5 Distribution Theory In this chapter, we summarize the distributions related to the normal distribution that occur in linear models. Before turning to this general problem that assumes normal
More informationSTAT 730 Chapter 4: Estimation
STAT 730 Chapter 4: Estimation Timothy Hanson Department of Statistics, University of South Carolina Stat 730: Multivariate Analysis 1 / 23 The likelihood We have iid data, at least initially. Each datum
More informationMATH5745 Multivariate Methods Lecture 07
MATH5745 Multivariate Methods Lecture 07 Tests of hypothesis on covariance matrix March 16, 2018 MATH5745 Multivariate Methods Lecture 07 March 16, 2018 1 / 39 Test on covariance matrices: Introduction
More informationNonparametric Independence Tests
Nonparametric Independence Tests Nathaniel E. Helwig Assistant Professor of Psychology and Statistics University of Minnesota (Twin Cities) Updated 04-Jan-2017 Nathaniel E. Helwig (U of Minnesota) Nonparametric
More informationLecture Note 1: Probability Theory and Statistics
Univ. of Michigan - NAME 568/EECS 568/ROB 530 Winter 2018 Lecture Note 1: Probability Theory and Statistics Lecturer: Maani Ghaffari Jadidi Date: April 6, 2018 For this and all future notes, if you would
More informationThe Normal Distribuions
The Normal Distribuions Sections 5.4 & 5.5 Cathy Poliak, Ph.D. cathy@math.uh.edu Office in Fleming 11c Department of Mathematics University of Houston Lecture 15-3339 Cathy Poliak, Ph.D. cathy@math.uh.edu
More informationSTAT Chapter 5 Continuous Distributions
STAT 270 - Chapter 5 Continuous Distributions June 27, 2012 Shirin Golchi () STAT270 June 27, 2012 1 / 59 Continuous rv s Definition: X is a continuous rv if it takes values in an interval, i.e., range
More information1: PROBABILITY REVIEW
1: PROBABILITY REVIEW Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2016 M. Rutkowski (USydney) Slides 1: Probability Review 1 / 56 Outline We will review the following
More informationMAS223 Statistical Modelling and Inference Examples
Chapter MAS3 Statistical Modelling and Inference Examples Example : Sample spaces and random variables. Let S be the sample space for the experiment of tossing two coins; i.e. Define the random variables
More informationMultivariate Statistics
Multivariate Statistics Chapter 2: Multivariate distributions and inference Pedro Galeano Departamento de Estadística Universidad Carlos III de Madrid pedro.galeano@uc3m.es Course 2016/2017 Master in Mathematical
More informationSTT 843 Key to Homework 1 Spring 2018
STT 843 Key to Homework Spring 208 Due date: Feb 4, 208 42 (a Because σ = 2, σ 22 = and ρ 2 = 05, we have σ 2 = ρ 2 σ σ22 = 2/2 Then, the mean and covariance of the bivariate normal is µ = ( 0 2 and Σ
More informationJoint Distributions. (a) Scalar multiplication: k = c d. (b) Product of two matrices: c d. (c) The transpose of a matrix:
Joint Distributions Joint Distributions A bivariate normal distribution generalizes the concept of normal distribution to bivariate random variables It requires a matrix formulation of quadratic forms,
More informationThis does not cover everything on the final. Look at the posted practice problems for other topics.
Class 7: Review Problems for Final Exam 8.5 Spring 7 This does not cover everything on the final. Look at the posted practice problems for other topics. To save time in class: set up, but do not carry
More informationStat 704 Data Analysis I Probability Review
1 / 39 Stat 704 Data Analysis I Probability Review Dr. Yen-Yi Ho Department of Statistics, University of South Carolina A.3 Random Variables 2 / 39 def n: A random variable is defined as a function that
More informationPCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities
PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets
More informationSTA 2201/442 Assignment 2
STA 2201/442 Assignment 2 1. This is about how to simulate from a continuous univariate distribution. Let the random variable X have a continuous distribution with density f X (x) and cumulative distribution
More informationUniversity of Cambridge Engineering Part IIB Module 4F10: Statistical Pattern Processing Handout 2: Multivariate Gaussians
Engineering Part IIB: Module F Statistical Pattern Processing University of Cambridge Engineering Part IIB Module F: Statistical Pattern Processing Handout : Multivariate Gaussians. Generative Model Decision
More information4. CONTINUOUS RANDOM VARIABLES
IA Probability Lent Term 4 CONTINUOUS RANDOM VARIABLES 4 Introduction Up to now we have restricted consideration to sample spaces Ω which are finite, or countable; we will now relax that assumption We
More informationChapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued
Chapter 3 sections Chapter 3 - continued 3.1 Random Variables and Discrete Distributions 3.2 Continuous Distributions 3.3 The Cumulative Distribution Function 3.4 Bivariate Distributions 3.5 Marginal Distributions
More informationAnalysis of Experimental Designs
Analysis of Experimental Designs p. 1/? Analysis of Experimental Designs Gilles Lamothe Mathematics and Statistics University of Ottawa Analysis of Experimental Designs p. 2/? Review of Probability A short
More informationLecture 1: August 28
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random
More informationJoint Probability Distributions and Random Samples (Devore Chapter Five)
Joint Probability Distributions and Random Samples (Devore Chapter Five) 1016-345-01: Probability and Statistics for Engineers Spring 2013 Contents 1 Joint Probability Distributions 2 1.1 Two Discrete
More informationInf2b Learning and Data
Inf2b Learning and Data Lecture 13: Review (Credit: Hiroshi Shimodaira Iain Murray and Steve Renals) Centre for Speech Technology Research (CSTR) School of Informatics University of Edinburgh http://www.inf.ed.ac.uk/teaching/courses/inf2b/
More informationChapter 4. Multivariate Distributions. Obviously, the marginal distributions may be obtained easily from the joint distribution:
4.1 Bivariate Distributions. Chapter 4. Multivariate Distributions For a pair r.v.s (X,Y ), the Joint CDF is defined as F X,Y (x, y ) = P (X x,y y ). Obviously, the marginal distributions may be obtained
More informationAlgorithms for Uncertainty Quantification
Algorithms for Uncertainty Quantification Tobias Neckel, Ionuț-Gabriel Farcaș Lehrstuhl Informatik V Summer Semester 2017 Lecture 2: Repetition of probability theory and statistics Example: coin flip Example
More informationPrincipal Components Analysis
Principal Components Analysis Nathaniel E. Helwig Assistant Professor of Psychology and Statistics University of Minnesota (Twin Cities) Updated 16-Mar-2017 Nathaniel E. Helwig (U of Minnesota) Principal
More informationJoint p.d.f. and Independent Random Variables
1 Joint p.d.f. and Independent Random Variables Let X and Y be two discrete r.v. s and let R be the corresponding space of X and Y. The joint p.d.f. of X = x and Y = y, denoted by f(x, y) = P(X = x, Y
More informationDensity and Distribution Estimation
Density and Distribution Estimation Nathaniel E. Helwig Assistant Professor of Psychology and Statistics University of Minnesota (Twin Cities) Updated 04-Jan-2017 Nathaniel E. Helwig (U of Minnesota) Density
More information2 (Statistics) Random variables
2 (Statistics) Random variables References: DeGroot and Schervish, chapters 3, 4 and 5; Stirzaker, chapters 4, 5 and 6 We will now study the main tools use for modeling experiments with unknown outcomes
More information1 Joint and marginal distributions
DECEMBER 7, 204 LECTURE 2 JOINT (BIVARIATE) DISTRIBUTIONS, MARGINAL DISTRIBUTIONS, INDEPENDENCE So far we have considered one random variable at a time. However, in economics we are typically interested
More informationLet X and Y denote two random variables. The joint distribution of these random
EE385 Class Notes 9/7/0 John Stensby Chapter 3: Multiple Random Variables Let X and Y denote two random variables. The joint distribution of these random variables is defined as F XY(x,y) = [X x,y y] P.
More informationFirst Year Examination Department of Statistics, University of Florida
First Year Examination Department of Statistics, University of Florida August 19, 010, 8:00 am - 1:00 noon Instructions: 1. You have four hours to answer questions in this examination.. You must show your
More informationStatistics 3657 : Moment Approximations
Statistics 3657 : Moment Approximations Preliminaries Suppose that we have a r.v. and that we wish to calculate the expectation of g) for some function g. Of course we could calculate it as Eg)) by the
More informationApplied Econometrics - QEM Theme 1: Introduction to Econometrics Chapter 1 + Probability Primer + Appendix B in PoE
Applied Econometrics - QEM Theme 1: Introduction to Econometrics Chapter 1 + Probability Primer + Appendix B in PoE Warsaw School of Economics Outline 1. Introduction to econometrics 2. Denition of econometrics
More informationNonparametric Location Tests: k-sample
Nonparametric Location Tests: k-sample Nathaniel E. Helwig Assistant Professor of Psychology and Statistics University of Minnesota (Twin Cities) Updated 04-Jan-2017 Nathaniel E. Helwig (U of Minnesota)
More informationBasics on Probability. Jingrui He 09/11/2007
Basics on Probability Jingrui He 09/11/2007 Coin Flips You flip a coin Head with probability 0.5 You flip 100 coins How many heads would you expect Coin Flips cont. You flip a coin Head with probability
More informationMultivariate Distributions
IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate
More informationMore than one variable
Chapter More than one variable.1 Bivariate discrete distributions Suppose that the r.v. s X and Y are discrete and take on the values x j and y j, j 1, respectively. Then the joint p.d.f. of X and Y, to
More informationHT Introduction. P(X i = x i ) = e λ λ x i
MODS STATISTICS Introduction. HT 2012 Simon Myers, Department of Statistics (and The Wellcome Trust Centre for Human Genetics) myers@stats.ox.ac.uk We will be concerned with the mathematical framework
More informationPerhaps the simplest way of modeling two (discrete) random variables is by means of a joint PMF, defined as follows.
Chapter 5 Two Random Variables In a practical engineering problem, there is almost always causal relationship between different events. Some relationships are determined by physical laws, e.g., voltage
More informationCourse: ESO-209 Home Work: 1 Instructor: Debasis Kundu
Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear
More informationLecture 21: Convergence of transformations and generating a random variable
Lecture 21: Convergence of transformations and generating a random variable If Z n converges to Z in some sense, we often need to check whether h(z n ) converges to h(z ) in the same sense. Continuous
More informationLecture 2: Repetition of probability theory and statistics
Algorithms for Uncertainty Quantification SS8, IN2345 Tobias Neckel Scientific Computing in Computer Science TUM Lecture 2: Repetition of probability theory and statistics Concept of Building Block: Prerequisites:
More informationProbability- the good parts version. I. Random variables and their distributions; continuous random variables.
Probability- the good arts version I. Random variables and their distributions; continuous random variables. A random variable (r.v) X is continuous if its distribution is given by a robability density
More informationLecture 11. Probability Theory: an Overveiw
Math 408 - Mathematical Statistics Lecture 11. Probability Theory: an Overveiw February 11, 2013 Konstantin Zuev (USC) Math 408, Lecture 11 February 11, 2013 1 / 24 The starting point in developing the
More informationEEL 5544 Noise in Linear Systems Lecture 30. X (s) = E [ e sx] f X (x)e sx dx. Moments can be found from the Laplace transform as
L30-1 EEL 5544 Noise in Linear Systems Lecture 30 OTHER TRANSFORMS For a continuous, nonnegative RV X, the Laplace transform of X is X (s) = E [ e sx] = 0 f X (x)e sx dx. For a nonnegative RV, the Laplace
More informationwhere r n = dn+1 x(t)
Random Variables Overview Probability Random variables Transforms of pdfs Moments and cumulants Useful distributions Random vectors Linear transformations of random vectors The multivariate normal distribution
More informationMaster s Written Examination
Master s Written Examination Option: Statistics and Probability Spring 05 Full points may be obtained for correct answers to eight questions Each numbered question (which may have several parts) is worth
More informationStatistics, Data Analysis, and Simulation SS 2015
Statistics, Data Analysis, and Simulation SS 2015 08.128.730 Statistik, Datenanalyse und Simulation Dr. Michael O. Distler Mainz, 27. April 2015 Dr. Michael O. Distler
More informationChapter 4 : Expectation and Moments
ECE5: Analysis of Random Signals Fall 06 Chapter 4 : Expectation and Moments Dr. Salim El Rouayheb Scribe: Serge Kas Hanna, Lu Liu Expected Value of a Random Variable Definition. The expected or average
More informationLecture 3. Probability - Part 2. Luigi Freda. ALCOR Lab DIAG University of Rome La Sapienza. October 19, 2016
Lecture 3 Probability - Part 2 Luigi Freda ALCOR Lab DIAG University of Rome La Sapienza October 19, 2016 Luigi Freda ( La Sapienza University) Lecture 3 October 19, 2016 1 / 46 Outline 1 Common Continuous
More information