The Multivariate Normal Distribution 1
Slide 1: The Multivariate Normal Distribution (STA 302, Fall). See the last slide for copyright information. 1 / 37
Slide 2: Overview
1. Moment-generating functions
2. Definition
3. Properties
4. The chi-squared and t distributions
Slide 3: Joint moment-generating function. For a p-dimensional random vector X,
M_X(t) = E(e^{t'X}).
For example,
M_{(X1,X2,X3)}(t1, t2, t3) = E(e^{X1 t1 + X2 t2 + X3 t3}).
Section 4.3 of Linear Models in Statistics has some material on moment-generating functions (optional).
Slide 4: Two big theorems (proofs omitted).
1. Joint moment-generating functions correspond uniquely to joint probability distributions.
2. Two random vectors X1 and X2 are independent if and only if the moment-generating function of their joint distribution is the product of their moment-generating functions.
These results assume only that the moment-generating functions exist in a neighborhood of t = 0. Nothing else is required.
Slide 5: A helpful distinction.
If X1 and X2 are independent, then M_{X1+X2}(t) = M_{X1}(t) M_{X2}(t).
X1 and X2 are independent if and only if M_{X1,X2}(t1, t2) = M_{X1}(t1) M_{X2}(t2).
Slide 6: Theorem: functions of independent random vectors are independent.
Show that X1 and X2 independent implies that Y1 = g1(X1) and Y2 = g2(X2) are independent.
Let Y = (Y1, Y2)' = (g1(X1), g2(X2))' and t = (t1, t2)'. Then
M_Y(t) = E(e^{t'Y}) = E(e^{t1'Y1 + t2'Y2}) = E(e^{t1'Y1} e^{t2'Y2})
       = E(e^{t1'g1(X1)} e^{t2'g2(X2)})
       = ∫∫ e^{t1'g1(x1)} e^{t2'g2(x2)} f_{X1}(x1) f_{X2}(x2) dx1 dx2
       = M_{g1(X1)}(t1) M_{g2(X2)}(t2).
Slide 7: M_{AX}(t) = M_X(A't), the analogue of M_{aX}(t) = M_X(at).
M_{AX}(t) = E(e^{t'(AX)}) = E(e^{(A't)'X}) = M_X(A't).
Note that t is the same length as Y = AX: the number of rows in A.
Slide 8: M_{X+c}(t) = e^{t'c} M_X(t), the analogue of M_{X+c}(t) = e^{ct} M_X(t).
M_{X+c}(t) = E(e^{t'(X+c)}) = E(e^{t'X + t'c}) = e^{t'c} E(e^{t'X}) = e^{t'c} M_X(t).
Slide 9: Distributions may be defined in terms of moment-generating functions.
Build up the multivariate normal from univariate normals. If Y ~ N(µ, σ²), then M_Y(t) = e^{µt + (1/2)σ²t²}. Moment-generating functions correspond uniquely to probability distributions, so define a normal random variable with expected value µ and variance σ² as a random variable with moment-generating function e^{µt + (1/2)σ²t²}. This has one surprising consequence...
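As a sanity check on this definition, one can approximate E(e^{tY}) by simulation and compare it with the closed form. A minimal sketch in Python with NumPy; the values µ = 1, σ = 2 and t = 0.5 are arbitrary choices for illustration, not from the slides.

```python
import numpy as np

# Monte Carlo check of the univariate normal MGF, M_Y(t) = exp(mu*t + 0.5*sigma^2*t^2).
rng = np.random.default_rng(2014)
mu, sigma, t = 1.0, 2.0, 0.5

y = rng.normal(mu, sigma, size=500_000)
mgf_mc = np.exp(t * y).mean()                            # sample average of e^{tY}
mgf_formula = np.exp(mu * t + 0.5 * sigma**2 * t**2)     # closed-form MGF

print(mgf_mc, mgf_formula)   # the two values agree closely
```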
Slide 10: Degenerate random variables.
A degenerate random variable has all its probability concentrated at a single value, say Pr{Y = y0} = 1. Then
M_Y(t) = E(e^{Yt}) = Σ_y e^{yt} p(y) = e^{y0 t} p(y0) = e^{y0 t} · 1 = e^{y0 t}.
Slide 11: If Pr{Y = y0} = 1, then M_Y(t) = e^{y0 t}.
This is of the form e^{µt + (1/2)σ²t²} with µ = y0 and σ² = 0. So Y ~ N(y0, 0). That is, degenerate random variables are normal with variance zero. Call them singular normals. This will be surprisingly handy later.
Slide 12: Independent standard normals.
Let Z1, ..., Zp be i.i.d. N(0, 1), and Z = (Z1, ..., Zp)'. Then E(Z) = 0 and cov(Z) = I_p.
Slide 13: Moment-generating function of Z.
Using M_{Zj}(tj) = e^{(1/2) tj²},
M_Z(t) = ∏_{j=1}^p M_{Zj}(tj) = ∏_{j=1}^p e^{(1/2) tj²} = e^{(1/2) Σ_{j=1}^p tj²} = e^{(1/2) t't}.
Slide 14: Transform Z to get a general multivariate normal.
(Remember: a symmetric matrix A is non-negative definite means v'Av ≥ 0 for all v.) Let Σ be a p × p symmetric non-negative definite matrix, let µ ∈ R^p, and let Y = Σ^{1/2} Z + µ. The elements of Y are linear combinations of independent standard normals, and linear combinations of normals should be normal, so Y has a multivariate distribution that we would like to call multivariate normal.
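The construction Y = Σ^{1/2} Z + µ can be sketched numerically, taking Σ^{1/2} to be the symmetric square root built from an eigendecomposition. The particular µ and Σ below are made-up example values.

```python
import numpy as np

# Construct Y = Sigma^{1/2} Z + mu and check its sample mean and covariance.
rng = np.random.default_rng(302)
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

# Symmetric non-negative definite square root: Sigma^{1/2} = V diag(sqrt(lambda)) V'
lam, V = np.linalg.eigh(Sigma)
Sigma_half = V @ np.diag(np.sqrt(lam)) @ V.T

Z = rng.standard_normal(size=(200_000, 2))   # rows are independent N(0, I) vectors
Y = Z @ Sigma_half + mu                      # Sigma_half is symmetric, so this applies it to each row

print(Y.mean(axis=0))           # close to mu
print(np.cov(Y, rowvar=False))  # close to Sigma
```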
Slide 15: Moment-generating function of Y = Σ^{1/2} Z + µ.
Remember: M_{AX}(t) = M_X(A't), M_{X+c}(t) = e^{t'c} M_X(t), and M_Z(t) = e^{(1/2) t't}.
M_Y(t) = M_{Σ^{1/2}Z + µ}(t) = e^{t'µ} M_{Σ^{1/2}Z}(t) = e^{t'µ} M_Z((Σ^{1/2})'t) = e^{t'µ} M_Z(Σ^{1/2}t)
       = e^{t'µ} e^{(1/2) (Σ^{1/2}t)'(Σ^{1/2}t)} = e^{t'µ} e^{(1/2) t'Σ^{1/2}Σ^{1/2}t} = e^{t'µ} e^{(1/2) t'Σt}.
So define a multivariate normal random variable Y as one with moment-generating function M_Y(t) = e^{t'µ} e^{(1/2) t'Σt}.
Slide 16: Compare univariate and multivariate normal moment-generating functions.
Univariate: M_Y(t) = e^{µt + (1/2)σ²t²}. Multivariate: M_Y(t) = e^{t'µ} e^{(1/2) t'Σt}.
So the univariate normal is a special case of the multivariate normal with p = 1.
Slide 17: Mean and covariance matrix.
For a univariate normal, E(Y) = µ and Var(Y) = σ². Recall Y = Σ^{1/2} Z + µ. Then E(Y) = µ and
cov(Y) = Σ^{1/2} cov(Z) (Σ^{1/2})' = Σ^{1/2} I Σ^{1/2} = Σ.
We will say Y is multivariate normal with expected value µ and variance-covariance matrix Σ, and write Y ~ N_p(µ, Σ).
Slide 18: Probability density function of Y ~ N_p(µ, Σ).
Remember, Σ is only positive semi-definite in general. It is easy to write down the density of Z ~ N_p(0, I) as a product of standard normals. If Σ is strictly positive definite (and not otherwise), the density of Y = Σ^{1/2} Z + µ can be obtained using the Jacobian theorem as
f(y) = (1 / (|Σ|^{1/2} (2π)^{p/2})) exp{-(1/2) (y - µ)' Σ^{-1} (y - µ)}.
This is usually how the multivariate normal is defined.
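One way to check a hand-coded version of this density: when Σ is diagonal the components are independent, so the joint density must equal the product of univariate normal densities. The µ, Σ and evaluation point below are arbitrary example values.

```python
import numpy as np

# Direct implementation of the N_p(mu, Sigma) density.
def mvn_density(y, mu, Sigma):
    p = len(mu)
    dev = y - mu
    quad = dev @ np.linalg.solve(Sigma, dev)   # (y - mu)' Sigma^{-1} (y - mu)
    return np.exp(-0.5 * quad) / (np.sqrt(np.linalg.det(Sigma)) * (2 * np.pi) ** (p / 2))

mu = np.array([0.0, 1.0, 2.0])
Sigma = np.diag([1.0, 4.0, 9.0])               # diagonal: independent components
y = np.array([0.5, 0.0, 3.0])

f_joint = mvn_density(y, mu, Sigma)
f_product = np.prod(                           # product of univariate normal densities
    np.exp(-0.5 * (y - mu) ** 2 / np.diag(Sigma))
    / np.sqrt(2 * np.pi * np.diag(Sigma))
)
print(f_joint, f_product)   # identical up to floating-point rounding
```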
Slide 19: Σ positive definite?
Positive definite means that a'Σa > 0 for any non-zero p × 1 vector a. Since the one-dimensional random variable W = Σ_{i=1}^p a_i Y_i may be written as W = a'Y, and Var(W) = cov(a'Y) = a'Σa, it is natural to require that Σ be positive definite. All it means is that every non-zero linear combination of the elements of Y has a positive variance. Often, this is what you want.
Slide 20: Singular normal: Σ is positive semi-definite.
Suppose there is an a ≠ 0 with a'Σa = 0. Let W = a'Y. Then Var(W) = Var(a'Y) = a'Σa = 0; that is, W has a degenerate distribution (but it's still normal). In this case we describe the distribution of Y as a singular multivariate normal. Excluding the singular case creates a lot of extra work in later proofs, so we will insist that a singular multivariate normal is still multivariate normal, even though it has no density.
Slide 21: Distribution of AY.
Recall M_Y(t) = e^{t'µ + (1/2) t'Σt}. Let A be an r × p matrix of constants, and W = AY.
M_W(t) = M_{AY}(t) = M_Y(A't) = e^{(A't)'µ} e^{(1/2) (A't)'Σ(A't)} = e^{t'(Aµ)} e^{(1/2) t'(AΣA')t} = e^{t'(Aµ) + (1/2) t'(AΣA')t}.
Recognize the moment-generating function and conclude W ~ N_r(Aµ, AΣA').
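The conclusion W = AY ~ N_r(Aµ, AΣA') can be checked by simulation; the µ, Σ and A below are arbitrary example values.

```python
import numpy as np

# Empirical check that W = AY has mean A mu and covariance A Sigma A'.
rng = np.random.default_rng(21)
mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, -1.0]])

Y = rng.multivariate_normal(mu, Sigma, size=300_000)
W = Y @ A.T                      # apply A to each row

print(W.mean(axis=0))            # close to A @ mu
print(np.cov(W, rowvar=False))   # close to A @ Sigma @ A.T
```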
Slide 22: Exercise (use moment-generating functions, of course).
Let Y ~ N_p(µ, Σ). Show Y + c ~ N_p(µ + c, Σ).
Slide 23: Zero covariance implies independence for the multivariate normal.
Independence always implies zero covariance. For the multivariate normal, zero covariance also implies independence. The multivariate normal is the only continuous distribution with this property.
Slide 24: Show zero covariance implies independence, by showing M_Y(t) = M_{Y1}(t1) M_{Y2}(t2).
Let Y ~ N(µ, Σ), with
Y = (Y1, Y2)', µ = (µ1, µ2)', Σ = [Σ1 0; 0 Σ2], t = (t1, t2)'.
M_Y(t) = E(e^{t'Y}) = E(e^{(t1', t2') Y}) = M_Y((t1', t2')') = ...
Slide 25: Continuing the calculation, with M_Y(t) = e^{t'µ + (1/2) t'Σt}:
M_Y(t) = exp{(t1', t2')(µ1, µ2)'} exp{(1/2) (t1', t2') [Σ1 0; 0 Σ2] (t1, t2)'}
       = e^{t1'µ1 + t2'µ2} exp{(1/2) (t1'Σ1 t1 + t2'Σ2 t2)}
       = e^{t1'µ1} e^{t2'µ2} e^{(1/2) t1'Σ1 t1} e^{(1/2) t2'Σ2 t2}
       = M_{Y1}(t1) M_{Y2}(t2).
So Y1 and Y2 are independent.
Slide 26: An easy example (if you do it the easy way).
Let Y1 ~ N(1, 2), Y2 ~ N(2, 4) and Y3 ~ N(6, 3) be independent, with W1 = Y1 + Y2 and W2 = Y2 + Y3. Find the joint distribution of W1 and W2. In matrix form,
(W1, W2)' = [1 1 0; 0 1 1] (Y1, Y2, Y3)', so W = AY ~ N(Aµ, AΣA').
Slide 27: W = AY ~ N(Aµ, AΣA'), with Y1 ~ N(1, 2), Y2 ~ N(2, 4) and Y3 ~ N(6, 3) independent.
Aµ = [1 1 0; 0 1 1] (1, 2, 6)' = (3, 8)'
AΣA' = [1 1 0; 0 1 1] diag(2, 4, 3) [1 1 0; 0 1 1]' = [6 4; 4 7].
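The matrix arithmetic on this slide can be reproduced in a few lines; nothing new here, just Aµ and AΣA' computed numerically from the stated distributions.

```python
import numpy as np

# The slide's example, done "the easy way" with W = AY ~ N(A mu, A Sigma A').
mu = np.array([1.0, 2.0, 6.0])        # E(Y1), E(Y2), E(Y3)
Sigma = np.diag([2.0, 4.0, 3.0])      # independent, so Sigma is diagonal
A = np.array([[1.0, 1.0, 0.0],        # W1 = Y1 + Y2
              [0.0, 1.0, 1.0]])       # W2 = Y2 + Y3

print(A @ mu)            # A mu = (3, 8)'
print(A @ Sigma @ A.T)   # A Sigma A' = [6 4; 4 7]
```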
Slide 28: Marginal distributions are multivariate normal.
Y ~ N_p(µ, Σ), so W = AY ~ N(Aµ, AΣA'). Find the distribution of
[0 1 0 0; 0 0 0 1] (Y1, Y2, Y3, Y4)' = (Y2, Y4)'.
It is bivariate normal, and the expected value is easy.
Slide 29: Covariance matrix.
cov(AY) = AΣA' = [0 1 0 0; 0 0 0 1] [σ1² σ12 σ13 σ14; σ12 σ2² σ23 σ24; σ13 σ23 σ3² σ34; σ14 σ24 σ34 σ4²] A'
        = [σ12 σ2² σ23 σ24; σ14 σ24 σ34 σ4²] A'
        = [σ2² σ24; σ24 σ4²].
Marginal distributions of a multivariate normal are multivariate normal, with the original means, variances and covariances.
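Multiplying by this selection matrix A is the same as extracting the relevant rows and columns of Σ, which is how marginal covariances are usually obtained in code. The numeric Σ below is a made-up symmetric example.

```python
import numpy as np

# Marginal covariance of (Y2, Y4): via the selection matrix A, and via indexing.
Sigma = np.array([[4.0, 1.0, 0.5, 0.2],
                  [1.0, 3.0, 0.8, 0.6],
                  [0.5, 0.8, 2.0, 0.4],
                  [0.2, 0.6, 0.4, 1.0]])
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

via_A = A @ Sigma @ A.T
via_indexing = Sigma[np.ix_([1, 3], [1, 3])]   # rows/cols for Y2 and Y4 (0-based)

print(via_A)           # the 2 x 2 submatrix [sigma_2^2 sigma_24; sigma_24 sigma_4^2]
print(via_indexing)    # the same submatrix
```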
Slide 30: Summary.
- If c is a vector of constants, X + c ~ N(c + µ, Σ).
- If A is a matrix of constants, AX ~ N(Aµ, AΣA').
- Linear combinations of multivariate normals are multivariate normal.
- All the marginals (of dimension less than p) of X are (multivariate) normal, but it is possible in theory to have a collection of univariate normals whose joint distribution is not multivariate normal.
- For the multivariate normal, zero covariance implies independence. The multivariate normal is the only continuous distribution with this property.
Slide 31: Multivariate normal likelihood (for reference).
L(µ, Σ) = ∏_{i=1}^n (1 / (|Σ|^{1/2} (2π)^{p/2})) exp{-(1/2) (x_i - µ)' Σ^{-1} (x_i - µ)}
        = |Σ|^{-n/2} (2π)^{-np/2} exp{-(1/2) Σ_{i=1}^n (x_i - µ)' Σ^{-1} (x_i - µ)}
        = |Σ|^{-n/2} (2π)^{-np/2} exp{-(n/2) [tr(Σ̂ Σ^{-1}) + (x̄ - µ)' Σ^{-1} (x̄ - µ)]},
where Σ̂ = (1/n) Σ_{i=1}^n (x_i - x̄)(x_i - x̄)' is the sample variance-covariance matrix.
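The equality of the product form and the trace form of the likelihood can be verified numerically; the data, µ and Σ below are arbitrary, since the identity holds for any of them.

```python
import numpy as np

# Check: sum-over-observations log-likelihood equals the trace form.
rng = np.random.default_rng(31)
n, p = 50, 3
X = rng.normal(size=(n, p))           # any data set will do for this identity
mu = np.array([0.1, -0.2, 0.3])
Sigma = np.array([[1.0, 0.2, 0.0],
                  [0.2, 1.5, 0.3],
                  [0.0, 0.3, 2.0]])
Sigma_inv = np.linalg.inv(Sigma)

# Form 1: sum of log densities
dev = X - mu
loglik_sum = (-0.5 * n * np.log(np.linalg.det(Sigma))
              - 0.5 * n * p * np.log(2 * np.pi)
              - 0.5 * np.einsum('ij,jk,ik->', dev, Sigma_inv, dev))

# Form 2: trace form, with Sigma_hat the sample variance-covariance matrix (divisor n)
xbar = X.mean(axis=0)
centered = X - xbar
Sigma_hat = centered.T @ centered / n
loglik_trace = (-0.5 * n * np.log(np.linalg.det(Sigma))
                - 0.5 * n * p * np.log(2 * np.pi)
                - 0.5 * n * (np.trace(Sigma_hat @ Sigma_inv)
                             + (xbar - mu) @ Sigma_inv @ (xbar - mu)))

print(loglik_sum, loglik_trace)   # equal up to floating-point rounding
```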
Slide 32: Showing (X - µ)' Σ^{-1} (X - µ) ~ χ²(p). (Σ has to be positive definite this time.)
X ~ N(µ, Σ)
Y = X - µ ~ N(0, Σ)
Z = Σ^{-1/2} Y ~ N(0, Σ^{-1/2} Σ Σ^{-1/2}) = N(0, Σ^{-1/2} Σ^{1/2} Σ^{1/2} Σ^{-1/2}) = N(0, I).
So Z is a vector of p independent standard normals, and
Y' Σ^{-1} Y = (Σ^{-1/2} Y)' (Σ^{-1/2} Y) = Z'Z = Σ_{j=1}^p Zj² ~ χ²(p).
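A quick simulation check: the quadratic form should have the χ²(p) mean p and variance 2p. The µ and Σ below are arbitrary example values.

```python
import numpy as np

# Monte Carlo check that (X - mu)' Sigma^{-1} (X - mu) behaves like chi-squared(p).
rng = np.random.default_rng(32)
mu = np.array([1.0, 0.0, -2.0])
Sigma = np.array([[2.0, 0.4, 0.0],
                  [0.4, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
p = len(mu)

X = rng.multivariate_normal(mu, Sigma, size=400_000)
dev = X - mu
Sigma_inv = np.linalg.inv(Sigma)
q = np.einsum('ij,jk,ik->i', dev, Sigma_inv, dev)   # one quadratic form per row

print(q.mean(), q.var())   # close to p = 3 and 2p = 6
```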
Slide 33: X̄ and S² are independent.
Let X1, ..., Xn be i.i.d. N(µ, σ²), so that
X = (X1, ..., Xn)' ~ N(µ1, σ²I).
Let Y = (X1 - X̄, ..., Xn - X̄, X̄)' = AX.
Note A is (n+1) × n, so cov(AX) = σ²AA' is (n+1) × (n+1) and singular.
Slide 34: The argument.
Y = AX = (X1 - X̄, ..., Xn - X̄, X̄)' = (Y2', X̄)', where Y2 = (X1 - X̄, ..., Xn - X̄)'.
Y is multivariate normal. Cov(X̄, Xj - X̄) = 0 (exercise). So X̄ and Y2 are independent, and therefore X̄ and S² = g(Y2) are independent.
Slide 35: This leads to the t distribution.
If Z ~ N(0, 1) and Y ~ χ²(ν), and Z and Y are independent, then
T = Z / sqrt(Y/ν) ~ t(ν).
Slide 36: Random sample from a normal distribution.
Let X1, ..., Xn be i.i.d. N(µ, σ²). Then
sqrt(n) (X̄ - µ) / σ ~ N(0, 1), and
(n-1) S² / σ² ~ χ²(n-1), and
these quantities are independent, so
T = [sqrt(n) (X̄ - µ) / σ] / sqrt[ ((n-1) S² / σ²) / (n-1) ] = sqrt(n) (X̄ - µ) / S ~ t(n-1).
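The cancellation of σ in the last step can be made concrete: computing T as the ratio Z / sqrt(χ²/ν) and computing it directly as sqrt(n)(X̄ - µ)/S give the same number for any data set. The sample below is arbitrary.

```python
import numpy as np

# The two expressions for T on the slide are algebraically identical: sigma cancels.
rng = np.random.default_rng(36)
mu, sigma, n = 10.0, 2.0, 12
x = rng.normal(mu, sigma, size=n)

xbar = x.mean()
S = x.std(ddof=1)                             # sample standard deviation, divisor n-1

z_part = np.sqrt(n) * (xbar - mu) / sigma     # numerator: N(0, 1)
chi2_part = (n - 1) * S**2 / sigma**2         # (n-1)S^2/sigma^2 ~ chi-squared(n-1)
T_ratio = z_part / np.sqrt(chi2_part / (n - 1))
T_direct = np.sqrt(n) * (xbar - mu) / S

print(T_ratio, T_direct)   # identical up to floating-point rounding
```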
Slide 37: Copyright information.
This slide show was prepared by Jerry Brunner, Department of Statistical Sciences, University of Toronto. It is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Use any part of it as you like and share the result freely. The LaTeX source code is available from the course website: brunner/oldclass/302f14
Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms
More informationSTAT 730 Chapter 4: Estimation
STAT 730 Chapter 4: Estimation Timothy Hanson Department of Statistics, University of South Carolina Stat 730: Multivariate Analysis 1 / 23 The likelihood We have iid data, at least initially. Each datum
More informationChapter 5. Random Variables (Continuous Case) 5.1 Basic definitions
Chapter 5 andom Variables (Continuous Case) So far, we have purposely limited our consideration to random variables whose ranges are countable, or discrete. The reason for that is that distributions on
More informationE[X n ]= dn dt n M X(t). ). What is the mgf? Solution. Found this the other day in the Kernel matching exercise: 1 M X (t) =
Chapter 7 Generating functions Definition 7.. Let X be a random variable. The moment generating function is given by M X (t) =E[e tx ], provided that the expectation exists for t in some neighborhood of
More information2.3. The Gaussian Distribution
78 2. PROBABILITY DISTRIBUTIONS Figure 2.5 Plots of the Dirichlet distribution over three variables, where the two horizontal axes are coordinates in the plane of the simplex and the vertical axis corresponds
More informationBMIR Lecture Series on Probability and Statistics Fall 2015 Discrete RVs
Lecture #7 BMIR Lecture Series on Probability and Statistics Fall 2015 Department of Biomedical Engineering and Environmental Sciences National Tsing Hua University 7.1 Function of Single Variable Theorem
More informationAdvanced Econometrics I
Lecture Notes Autumn 2010 Dr. Getinet Haile, University of Mannheim 1. Introduction Introduction & CLRM, Autumn Term 2010 1 What is econometrics? Econometrics = economic statistics economic theory mathematics
More informationOptimization. The value x is called a maximizer of f and is written argmax X f. g(λx + (1 λ)y) < λg(x) + (1 λ)g(y) 0 < λ < 1; x, y X.
Optimization Background: Problem: given a function f(x) defined on X, find x such that f(x ) f(x) for all x X. The value x is called a maximizer of f and is written argmax X f. In general, argmax X f may
More informationProperties of Summation Operator
Econ 325 Section 003/004 Notes on Variance, Covariance, and Summation Operator By Hiro Kasahara Properties of Summation Operator For a sequence of the values {x 1, x 2,..., x n, we write the sum of x 1,
More informationFall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.
1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n
More informationStat 5101 Lecture Slides: Deck 8 Dirichlet Distribution. Charles J. Geyer School of Statistics University of Minnesota
Stat 5101 Lecture Slides: Deck 8 Dirichlet Distribution Charles J. Geyer School of Statistics University of Minnesota 1 The Dirichlet Distribution The Dirichlet Distribution is to the beta distribution
More informationPower and Sample Size for Log-linear Models 1
Power and Sample Size for Log-linear Models 1 STA 312: Fall 2012 1 See last slide for copyright information. 1 / 1 Overview 2 / 1 Power of a Statistical test The power of a test is the probability of rejecting
More information