The Multivariate Normal Distribution. Copyright © 2012 Dan Nettleton (Iowa State University), Statistics 611, 36 slides.
2. The moment generating function (MGF) of a random vector $X$ is given by
$$M_X(t) = E(e^{t'X}),$$
provided there exists $h > 0$ such that $E(e^{t'X})$ exists for all $t = (t_1, \ldots, t_n)'$ with $t_i \in (-h, h)$, $i = 1, \ldots, n$.

3. Result 5.1: If the MGFs of two random vectors $X_1$ and $X_2$ exist in an open rectangle $R$ that includes the origin, then the cumulative distribution functions (CDFs) of $X_1$ and $X_2$ are identical iff $M_{X_1}(t) = M_{X_2}(t)$ for all $t \in R$.

4. A random variable $Z$ with MGF $M_Z(t) = E(e^{tZ}) = e^{t^2/2}$ is said to have a standard normal distribution.

5. Show that a random variable $Z$ with density $f_Z(z) = \frac{1}{\sqrt{2\pi}}e^{-z^2/2}$ has a standard normal distribution.

6. $$E(e^{tZ}) = \int_{-\infty}^{\infty} e^{tz}\,\frac{1}{\sqrt{2\pi}}e^{-z^2/2}\,dz = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(z^2 - 2tz)}\,dz = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(z^2 - 2tz + t^2 - t^2)}\,dz = e^{t^2/2}\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(z - t)^2}\,dz = e^{t^2/2},$$
where the final integral equals 1 because the integrand is the $N(t, 1)$ density.

7. Suppose $Z$ has a standard normal distribution. Then
$$E(Z) = \frac{\partial M_Z(t)}{\partial t}\bigg|_{t=0} = \frac{\partial e^{t^2/2}}{\partial t}\bigg|_{t=0} = t\,e^{t^2/2}\bigg|_{t=0} = 0,$$
$$E(Z^2) = \frac{\partial^2 M_Z(t)}{\partial t^2}\bigg|_{t=0} = \left(e^{t^2/2} + t^2 e^{t^2/2}\right)\bigg|_{t=0} = 1.$$
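As a quick numerical check (a sketch, not part of the slides), the two derivatives of $M_Z(t) = e^{t^2/2}$ at $t = 0$ can be approximated by finite differences and compared against $E(Z) = 0$ and $E(Z^2) = 1$:

```python
import numpy as np

# Finite-difference check that the derivatives of the standard normal MGF
# at t = 0 recover the first two moments E(Z) = 0 and E(Z^2) = 1.
M = lambda t: np.exp(t**2 / 2)  # standard normal MGF

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)            # central difference for M'(0)
m2 = (M(h) - 2 * M(0.0) + M(-h)) / h**2  # central difference for M''(0)
```

The first difference is exactly 0 because $M_Z$ is even; the second is 1 up to floating-point error.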
8. Thus, $E(Z) = 0$ and $\mathrm{Var}(Z) = E(Z^2) - [E(Z)]^2 = 1$. If $Z$ is standard normal, then $Y = \mu + \sigma Z$ has mean
$$E(Y) = E(\mu + \sigma Z) = \mu + \sigma E(Z) = \mu$$
and variance
$$\mathrm{Var}(Y) = \mathrm{Var}(\mu + \sigma Z) = \mathrm{Var}(\sigma Z) = \sigma^2\,\mathrm{Var}(Z) = \sigma^2.$$

9. Furthermore, the MGF of $Y$ is
$$M_Y(t) = E(e^{tY}) = E(e^{t(\mu + \sigma Z)}) = e^{t\mu}E(e^{t\sigma Z}) = e^{t\mu}M_Z(t\sigma) = e^{t\mu}e^{t^2\sigma^2/2} = e^{t\mu + t^2\sigma^2/2}.$$

10. A random variable $Y$ with MGF $M_Y(t) = e^{t\mu + t^2\sigma^2/2}$ is said to have a normal distribution with mean $\mu$ and variance $\sigma^2$. We denote the distribution of $Y$ by $N(\mu, \sigma^2)$.
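The location-scale MGF algebra above can be sanity-checked numerically: $e^{t\mu}M_Z(t\sigma)$ and the closed form $e^{t\mu + t^2\sigma^2/2}$ agree for every $t$. A sketch with illustrative values $\mu = 2$, $\sigma = 3$ (not from the slides):

```python
import numpy as np

# Check that M_Y(t) = e^{t mu} * M_Z(t sigma) matches e^{t mu + t^2 sigma^2 / 2}.
mu, sigma = 2.0, 3.0
M_Z = lambda t: np.exp(t**2 / 2)                      # standard normal MGF
M_Y = lambda t: np.exp(t * mu + t**2 * sigma**2 / 2)  # N(mu, sigma^2) MGF

t = np.linspace(-0.5, 0.5, 11)
assert np.allclose(np.exp(t * mu) * M_Z(t * sigma), M_Y(t))
```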
11. If $Y \sim N(\mu, \sigma^2)$, then
$$P(Y \le y) = P(\mu + \sigma Z \le y) = P\left(Z \le \tfrac{y-\mu}{\sigma}\right).$$
Thus, the density of $Y$ is
$$f_Y(y) = \frac{\partial}{\partial y}P(Y \le y) = \frac{\partial}{\partial y}P\left(Z \le \tfrac{y-\mu}{\sigma}\right) = f_Z\left(\tfrac{y-\mu}{\sigma}\right)\frac{1}{\sigma} = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{1}{2}\left(\frac{y-\mu}{\sigma}\right)^2}.$$

12. That is,
$$f_Y(y) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{1}{2}\left(\frac{y-\mu}{\sigma}\right)^2} = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{1}{2\sigma^2}(y-\mu)^2}.$$

13. Suppose $Z_1, \ldots, Z_p \stackrel{\text{i.i.d.}}{\sim} N(0, 1)$. Then $Z = (Z_1, \ldots, Z_p)'$ is said to have a standard multivariate normal distribution.

14. $E(Z) = 0$ and $\mathrm{Var}(Z) = I$.

15. Find the moment generating function of a standard multivariate normal random vector $Z$ ($p \times 1$).

16. $$E(e^{t'Z}) = E\left(e^{\sum_{i=1}^p t_i Z_i}\right) = E\left(\prod_{i=1}^p e^{t_i Z_i}\right) = \prod_{i=1}^p E(e^{t_i Z_i}) = \prod_{i=1}^p M_{Z_i}(t_i) = \prod_{i=1}^p e^{t_i^2/2} = e^{\sum_{i=1}^p t_i^2/2} = e^{t't/2},$$
where the third equality uses the independence of $Z_1, \ldots, Z_p$.
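The closed form $M_Z(t) = e^{t't/2}$ can be verified by Monte Carlo: the empirical MGF $\frac{1}{n}\sum e^{t'Z_i}$ converges to it. A sketch with an arbitrarily chosen $t$ (not from the slides):

```python
import numpy as np

# Monte Carlo check that the empirical MGF of a standard MVN vector
# matches the closed form e^{t't/2}.
rng = np.random.default_rng(0)
Z = rng.standard_normal((1_000_000, 3))  # rows are draws of Z (p = 3)
t = np.array([0.3, -0.5, 0.4])

empirical = np.exp(Z @ t).mean()
theory = np.exp(t @ t / 2)
# empirical and theory agree up to Monte Carlo error
```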
17. A $p$-dimensional random vector $Y$ has the multivariate normal distribution with mean $\mu$ and variance $\Sigma$ (written $Y \sim N(\mu, \Sigma)$) iff the MGF of $Y$ is
$$M_Y(t) = e^{t'\mu + t'\Sigma t/2}.$$

18. Suppose $Z \sim N(0, I)$. Show that $Y = \mu + AZ$ has a multivariate normal distribution.

19. If $Z$ is standard multivariate normal, then $Y = \mu + AZ$ has mean
$$E(Y) = E(\mu + AZ) = \mu + AE(Z) = \mu$$
and variance
$$\mathrm{Var}(Y) = \mathrm{Var}(\mu + AZ) = \mathrm{Var}(AZ) = A\,\mathrm{Var}(Z)\,A' = AA'.$$

20. Furthermore,
$$M_Y(t) = E(e^{t'Y}) = E(e^{t'(\mu + AZ)}) = e^{t'\mu}E(e^{t'AZ}) = e^{t'\mu}M_Z(A't) = e^{t'\mu + t'AA't/2}.$$
Thus, $Y \sim N(\mu, AA')$.
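A simulation check of slides 19–20 (illustrative $\mu$ and $A$, not from the slides): drawing $Z \sim N(0, I)$ and forming $Y = \mu + AZ$ should give sample mean near $\mu$ and sample covariance near $AA'$:

```python
import numpy as np

# Simulate Y = mu + A Z and compare sample moments with mu and A A'.
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

Z = rng.standard_normal((500_000, 2))  # draws of Z ~ N(0, I)
Y = mu + Z @ A.T                       # draws of Y = mu + A Z

# sample mean ~ mu, sample covariance ~ A A' = [[4, 2], [2, 10]]
```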
21. Note that if $\mathrm{rank}(A) < q$ for $A$ of dimension $q \times p$, then
$$\mathrm{Var}(Y) = \mathrm{Var}(\mu + AZ) = AA'$$
will be singular. In this case, the support of the $q \times 1$ random vector $Y$ will lie within a $\mathrm{rank}(A)$ ($< q$)-dimensional vector space.

22. Give a specific example of a singular multivariate normal distribution ($Y \sim N(\mu, \Sigma)$ with $\Sigma$ singular).

23. Suppose $Z \sim N(0, 1)$ and $A = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, so that $\mathrm{rank}(A) = 1 < 2$. Let $Y = AZ$. Then
$$Y \sim N\left(0,\; AA' = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\right),$$
and $Y$ lies in the 1-dimensional vector space $C = \{Y = (Y_1, Y_2)' \in \mathbb{R}^2 : Y_1 = Y_2\}$ with probability 1.
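The singular example above is easy to simulate: with $A = (1, 1)'$, every draw of $Y = AZ$ satisfies $Y_1 = Y_2$ exactly, and $AA'$ has rank 1:

```python
import numpy as np

# Simulate the singular MVN example: Y = A Z with A = (1, 1)'.
rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)
A = np.array([[1.0],
              [1.0]])

Y = A @ z[None, :]  # shape (2, 10_000); row 0 is Y1, row 1 is Y2

assert np.all(Y[0] == Y[1])                 # support lies on the line Y1 = Y2
assert np.linalg.matrix_rank(A @ A.T) == 1  # Var(Y) = A A' is singular
```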
24. Result 5.3: If $X \sim N(\mu, \Sigma)$ and $Y = a + BX$, then $Y \sim N(a + B\mu,\; B\Sigma B')$.

25. Proof of Result 5.3:
$$E(e^{t'Y}) = E(e^{t'a + t'BX}) = e^{t'a}E(e^{t'BX}) = e^{t'a}M_X(B't) = e^{t'a}e^{t'B\mu + t'B\Sigma B't/2} = e^{t'(a + B\mu) + t'B\Sigma B't/2}.$$
Thus, $Y \sim N(a + B\mu,\; B\Sigma B')$.
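The key step of the proof, $e^{t'a}M_X(B't) = e^{t'(a+B\mu) + t'B\Sigma B't/2}$, can be checked numerically at any $t$. A sketch with illustrative $\mu$, $\Sigma$, $a$, $B$ (not from the slides):

```python
import numpy as np

# Verify e^{t'a} M_X(B't) equals the N(a + B mu, B Sigma B') MGF at t.
mu = np.array([1.0, 2.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
a = np.array([0.5, -1.0])
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])

M_X = lambda s: np.exp(s @ mu + s @ Sigma @ s / 2)  # MGF of X ~ N(mu, Sigma)

t = np.array([0.2, -0.3])
lhs = np.exp(t @ a) * M_X(B.T @ t)
rhs = np.exp(t @ (a + B @ mu) + t @ B @ Sigma @ B.T @ t / 2)
assert np.isclose(lhs, rhs)
```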
26. Corollary 5.1: If $X$ is multivariate normal (MVN), then the joint distribution of any subvector of $X$ is MVN.

27. Proof of Corollary 5.1: Suppose $\{i_1, \ldots, i_q\} \subseteq \{1, \ldots, p\}$. Then
$$\begin{bmatrix} X_{i_1} \\ \vdots \\ X_{i_q} \end{bmatrix} = \begin{bmatrix} e_{i_1}' \\ \vdots \\ e_{i_q}' \end{bmatrix} X,$$
where $e_i'$ denotes the $i$th row of the $p \times p$ identity matrix. This is MVN by Result 5.3.

28. Corollary 5.2: If $X \sim N(\mu, \Sigma)$ ($p \times 1$) and $\Sigma$ is nonsingular, then
(a) there exists a nonsingular matrix $A$ such that $\Sigma = AA'$,
(b) $A^{-1}(X - \mu) \sim N(0, I)$, and
(c) the probability density function of $X$ is
$$f_X(t) = (2\pi)^{-p/2}|\Sigma|^{-1/2}e^{-\frac{1}{2}(t - \mu)'\Sigma^{-1}(t - \mu)}.$$

29. Proof of Corollary 5.2:
(a) We can take $A = \Sigma^{1/2}$ because $\Sigma$ is symmetric and positive definite. Because $\Sigma$ is positive definite, $(\Sigma^{1/2})^{-1} = \Sigma^{-1/2}$ exists.
(b) By Result 5.3, $A^{-1}(X - \mu) \sim N(A^{-1}\mu - A^{-1}\mu,\; A^{-1}\Sigma(A^{-1})')$, with $A^{-1}\mu - A^{-1}\mu = 0$ and $A^{-1}\Sigma(A^{-1})' = \Sigma^{-1/2}\Sigma\Sigma^{-1/2} = I$.
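The symmetric square root used in parts (a) and (b) can be computed from the spectral decomposition of $\Sigma$. A sketch with an arbitrary positive definite $\Sigma$ (not from the slides), verifying $AA' = \Sigma$ and $A^{-1}\Sigma(A^{-1})' = I$:

```python
import numpy as np

# Compute A = Sigma^{1/2} via the spectral decomposition of Sigma.
Sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])  # symmetric positive definite

vals, vecs = np.linalg.eigh(Sigma)
A = vecs @ np.diag(np.sqrt(vals)) @ vecs.T  # symmetric square root of Sigma

assert np.allclose(A @ A.T, Sigma)          # part (a): Sigma = A A'
A_inv = np.linalg.inv(A)
assert np.allclose(A_inv @ Sigma @ A_inv.T, np.eye(2))  # part (b): standardization
```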
30. (c) Homework problem. You may wish to use the multivariate change-of-variables result on page 185 of Casella and Berger.
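As a consistency check on the density in part (c) (a sketch with illustrative values, not the homework solution): for a diagonal $\Sigma$ the components are independent, so the multivariate density should equal the product of the univariate $N(\mu_i, \sigma_i^2)$ densities from slide 12:

```python
import numpy as np

mu = np.array([1.0, -2.0])
Sigma = np.diag([4.0, 9.0])  # diagonal => independent components
p = len(mu)

def f_mvn(x):
    """Multivariate normal density from Corollary 5.2(c)."""
    d = x - mu
    return ((2 * np.pi) ** (-p / 2) * np.linalg.det(Sigma) ** (-0.5)
            * np.exp(-0.5 * d @ np.linalg.inv(Sigma) @ d))

def f_uni(y, m, s2):
    """Univariate N(m, s2) density from slide 12."""
    return np.exp(-(y - m) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

x = np.array([0.7, -1.3])
assert np.isclose(f_mvn(x), f_uni(x[0], 1.0, 4.0) * f_uni(x[1], -2.0, 9.0))
```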
31. Result 5.2: Suppose the MGF of $X_i$ is $M_{X_i}(t_i)$ for $i = 1, \ldots, p$. Let $X = [X_1', X_2', \ldots, X_p']'$ and $t = [t_1', t_2', \ldots, t_p']'$. Suppose $X$ has MGF $M_X(t)$. Then $X_1, \ldots, X_p$ are mutually independent iff
$$M_X(t) = \prod_{i=1}^p M_{X_i}(t_i)$$
for all $t$ in an open rectangle that includes $0$.

32. Result 5.4: If $X \sim N(\mu, \Sigma)$ and we partition
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_p \end{bmatrix}, \quad \mu = \begin{bmatrix} \mu_1 \\ \vdots \\ \mu_p \end{bmatrix}, \quad \Sigma = \begin{bmatrix} \Sigma_{11} & \cdots & \Sigma_{1p} \\ \vdots & \ddots & \vdots \\ \Sigma_{p1} & \cdots & \Sigma_{pp} \end{bmatrix},$$
then $X_1, \ldots, X_p$ are mutually independent iff $\Sigma_{ij} = 0$ for all $i \ne j$.

33. Proof of Result 5.4: ($\Rightarrow$) If $X_1, \ldots, X_p$ are mutually independent, then
$$\mathrm{Cov}(X_i, X_j) = E[(X_i - \mu_i)(X_j - \mu_j)'] = [E(X_i - \mu_i)][E(X_j - \mu_j)'] = [\mu_i - \mu_i][(\mu_j - \mu_j)'] = 0 \quad \forall\, i \ne j.$$
Thus, $\Sigma_{ij} = 0$ for all $i \ne j$.

34. ($\Leftarrow$) Partition $t$ as $[t_1', \ldots, t_p']'$. Then
$$t'\Sigma t = \sum_{i=1}^p \sum_{j=1}^p t_i'\Sigma_{ij}t_j.$$
If $\Sigma_{ij} = 0$ for all $i \ne j$, then $t'\Sigma t = \sum_{i=1}^p t_i'\Sigma_{ii}t_i$. Thus,
$$M_X(t) = e^{t'\mu + t'\Sigma t/2} = e^{\sum_{i=1}^p (t_i'\mu_i + t_i'\Sigma_{ii}t_i/2)} = \prod_{i=1}^p e^{t_i'\mu_i + t_i'\Sigma_{ii}t_i/2} = \prod_{i=1}^p M_{X_i}(t_i).$$
By Result 5.2, $X_1, \ldots, X_p$ are mutually independent.
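The key step in the ($\Leftarrow$) direction, that a block-diagonal $\Sigma$ makes the joint MGF factor into the blockwise MGFs, can be checked numerically. A sketch with illustrative blocks (not from the slides):

```python
import numpy as np

# Verify that for block-diagonal Sigma the joint MVN MGF factors.
mu1, mu2 = np.array([1.0]), np.array([0.0, 2.0])
S11 = np.array([[2.0]])
S22 = np.array([[1.0, 0.3],
                [0.3, 1.5]])

mu = np.concatenate([mu1, mu2])
Sigma = np.block([[S11, np.zeros((1, 2))],
                  [np.zeros((2, 1)), S22]])  # off-diagonal blocks are 0

M = lambda t, m, S: np.exp(t @ m + t @ S @ t / 2)  # MVN MGF

t1, t2 = np.array([0.4]), np.array([-0.2, 0.1])
t = np.concatenate([t1, t2])
assert np.isclose(M(t, mu, Sigma), M(t1, mu1, S11) * M(t2, mu2, S22))
```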
35. Corollary 5.3: Suppose $X \sim N(\mu, \Sigma)$, $Y_1 = a_1 + B_1X$, and $Y_2 = a_2 + B_2X$. Then $Y_1$ and $Y_2$ are independent iff $B_1\Sigma B_2' = 0$.

36. Proof of Corollary 5.3: Let
$$Y = \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} X.$$
Then $Y$ is MVN with
$$\mathrm{Var}(Y) = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} \Sigma \begin{bmatrix} B_1' & B_2' \end{bmatrix} = \begin{bmatrix} B_1\Sigma B_1' & B_1\Sigma B_2' \\ B_2\Sigma B_1' & B_2\Sigma B_2' \end{bmatrix}.$$
By Result 5.4, $Y_1$ and $Y_2$ are independent iff $B_1\Sigma B_2' = 0$.
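A simulation illustration of Corollary 5.3 (illustrative $B_1$, $B_2$ with $\Sigma = I$, not from the slides): since $B_1\Sigma B_2' = 0$, the linear combinations $Y_1 = X_1 + X_2$ and $Y_2 = X_1 - X_2$ are uncorrelated, hence independent because they are jointly normal:

```python
import numpy as np

# Check that B1 Sigma B2' = 0 yields (empirically) uncorrelated Y1, Y2.
rng = np.random.default_rng(0)
Sigma = np.eye(2)
B1 = np.array([1.0, 1.0])
B2 = np.array([1.0, -1.0])
assert B1 @ Sigma @ B2 == 0.0  # the condition of Corollary 5.3

X = rng.standard_normal((500_000, 2))  # draws of X ~ N(0, I)
Y1, Y2 = X @ B1, X @ B2
sample_cov = np.mean((Y1 - Y1.mean()) * (Y2 - Y2.mean()))
# sample_cov is near 0, up to Monte Carlo error
```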
More informationTopic 4: Continuous random variables
Topic 4: Continuous random variables Course 3, 216 Page Continuous random variables Definition (Continuous random variable): An r.v. X has a continuous distribution if there exists a non-negative function
More informationStat 704 Data Analysis I Probability Review
1 / 39 Stat 704 Data Analysis I Probability Review Dr. Yen-Yi Ho Department of Statistics, University of South Carolina A.3 Random Variables 2 / 39 def n: A random variable is defined as a function that
More informationScheffé s Method. opyright c 2012 Dan Nettleton (Iowa State University) Statistics / 37
Scheffé s Method opyright c 2012 Dan Nettleton (Iowa State University) Statistics 611 1 / 37 Scheffé s Method: Suppose where Let y = Xβ + ε, ε N(0, σ 2 I). c 1β,..., c qβ be q estimable functions, where
More informationLecture 1: August 28
36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random
More informationLecture 2: Repetition of probability theory and statistics
Algorithms for Uncertainty Quantification SS8, IN2345 Tobias Neckel Scientific Computing in Computer Science TUM Lecture 2: Repetition of probability theory and statistics Concept of Building Block: Prerequisites:
More informationSTAT Homework 2 - Solutions
STAT-36700 Homework - Solutios Fall 08 September 4, 08 This cotais solutios for Homework. Please ote that we have icluded several additioal commets ad approaches to the problems to give you better isight.
More informationStat 366 A1 (Fall 2006) Midterm Solutions (October 23) page 1
Stat 366 A1 Fall 6) Midterm Solutions October 3) page 1 1. The opening prices per share Y 1 and Y measured in dollars) of two similar stocks are independent random variables, each with a density function
More informationSTAT 350. Assignment 6 Solutions
STAT 350 Assignment 6 Solutions 1. For the Nitrogen Output in Wallabies data set from Assignment 3 do forward, backward, stepwise all subsets regression. Here is code for all the methods with all subsets
More informationIntroduction to Probability and Stocastic Processes - Part I
Introduction to Probability and Stocastic Processes - Part I Lecture 2 Henrik Vie Christensen vie@control.auc.dk Department of Control Engineering Institute of Electronic Systems Aalborg University Denmark
More informationProbability Distributions
Probability Distributions Seungjin Choi Department of Computer Science Pohang University of Science and Technology, Korea seungjin@postech.ac.kr 1 / 25 Outline Summarize the main properties of some of
More informationDistributions of Functions of Random Variables. 5.1 Functions of One Random Variable
Distributions of Functions of Random Variables 5.1 Functions of One Random Variable 5.2 Transformations of Two Random Variables 5.3 Several Random Variables 5.4 The Moment-Generating Function Technique
More informationMA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems
MA/ST 810 Mathematical-Statistical Modeling and Analysis of Complex Systems Review of Basic Probability The fundamentals, random variables, probability distributions Probability mass/density functions
More information