Notes on Random Vectors and Multivariate Normal
- Britton Harper
MATH 590, Spring 06

Properties of Random Vectors

If $X_1, \dots, X_n$ are random variables, then $X = (X_1, \dots, X_n)'$ is a random vector, with cumulative distribution function (cdf)
\[ F(x) = F(x_1, \dots, x_n) = P(X_1 \le x_1, X_2 \le x_2, \dots, X_n \le x_n) \]
and probability density function (pdf)
\[ f(x) = f(x_1, \dots, x_n) = \frac{\partial^n}{\partial x_1 \cdots \partial x_n} F(x) \]
(provided it exists). The pdf must satisfy $f(x) \ge 0$ and
\[ \int f(x)\,dx = \int \cdots \int f(x_1, \dots, x_n)\,dx_1 \cdots dx_n = 1. \]
Note that the marginal density is $f_i(x_i) = \int \cdots \int f(x)\,dx_1 \cdots dx_{i-1}\,dx_{i+1} \cdots dx_n$, so
\[ E(X_i) = \int x_i f_i(x_i)\,dx_i = \int \cdots \int x_i f(x)\,dx_1 \cdots dx_n. \]
If $E(X_i) = \mu_i$, then
\[ E(X) = \begin{pmatrix} E(X_1) \\ \vdots \\ E(X_n) \end{pmatrix} = \begin{pmatrix} \mu_1 \\ \vdots \\ \mu_n \end{pmatrix} = \mu_x \]
and
\[ \mathrm{Cov}(X) = E\big[(X - \mu_x)(X - \mu_x)'\big] = E(XX') - E(X)E(X)' = \Sigma_{xx}. \]
If $Y = (Y_1, \dots, Y_m)'$ is another random vector with $E(Y) = \mu_y$, then
\[ \mathrm{Cov}(X, Y) = E\big[(X - \mu_x)(Y - \mu_y)'\big] = \Sigma_{xy}, \]
with the $(i, j)$th element of $\Sigma_{xy}$ being the scalar covariance $\mathrm{Cov}(X_i, Y_j) = E\big[(X_i - E(X_i))(Y_j - E(Y_j))\big]$.

If $Z = AX + c$, where $A$ is a $p \times n$ matrix and $c$ is a $p$-vector (both non-random), then
\[ E(Z) = A\,E(X) + c \]
(i.e., $E(\cdot)$ is a linear operator) and
\[ \mathrm{Cov}(Z) = A\,\mathrm{Cov}(X)\,A' = A \Sigma_{xx} A'. \]
More generally, if $W = BY + d$ where $B$ is a $q \times m$ matrix and $d$ is a $q$-vector, then
\[ \mathrm{Cov}(Z, W) = A\,\mathrm{Cov}(X, Y)\,B' = A \Sigma_{xy} B'. \]

Claim: The covariance matrix $\Sigma_{xx}$ is symmetric and positive semidefinite (psd), so we can write $\Sigma_{xx} = P \Lambda P'$ and all the eigenvalues are nonnegative.
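As a quick numerical sanity check (my addition, not part of the original notes), the identities $E(Z) = A E(X) + c$ and $\mathrm{Cov}(Z) = A \Sigma_{xx} A'$ can be verified empirically with NumPy; the particular $\mu$, $\Sigma$, $A$, $c$ below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary mean vector and covariance matrix for X (3-dimensional).
mu = np.array([1.0, -2.0, 0.5])
L = np.array([[2.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [-0.3, 0.2, 0.8]])
Sigma = L @ L.T                      # symmetric pd by construction

# Arbitrary linear map Z = A X + c (p = 2, n = 3).
A = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0, 3.0]])
c = np.array([10.0, -5.0])

# Draw a large sample of X ~ N(mu, Sigma) and transform it.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Z = X @ A.T + c

# Empirical moments should match the theoretical formulas up to Monte Carlo noise.
mean_err = np.abs(Z.mean(axis=0) - (A @ mu + c)).max()
cov_err = np.abs(np.cov(Z.T) - A @ Sigma @ A.T).max()
print(mean_err, cov_err)
```

Both errors shrink at the usual $1/\sqrt{N}$ Monte Carlo rate as the sample size grows.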
Proof: For any fixed vector $b$, $\mathrm{Var}(b'X) = \mathrm{Cov}(b'X) = b' \Sigma_{xx} b \ge 0$, and symmetry is immediate from the definition; the spectral decomposition $\Sigma_{xx} = P \Lambda P'$ with nonnegative eigenvalues follows. $\square$

Change of Variables: Suppose we have $X = (X_1, \dots, X_n)'$ with pdf $f_X(x)$. Let $Y = T(X - c)$, where $T$ is $n \times n$, nonsingular and non-random, and $c$ is a non-random $n$-vector. Then we get $X = T^{-1}Y + c$, and the pdf of $Y$ is given by
\[ f_Y(y) = f_X(T^{-1}y + c)\,|J|, \]
where $J$ is the Jacobian determinant
\[ J = \det \begin{pmatrix} \dfrac{\partial X_1}{\partial Y_1} & \cdots & \dfrac{\partial X_1}{\partial Y_n} \\ \vdots & & \vdots \\ \dfrac{\partial X_n}{\partial Y_1} & \cdots & \dfrac{\partial X_n}{\partial Y_n} \end{pmatrix}. \]
Now, write $T^{-1} = (t_{ij})$. Then from the relationship $X = T^{-1}Y + c$,
\[ \begin{aligned} X_1 &= t_{11} Y_1 + t_{12} Y_2 + \cdots + t_{1n} Y_n + c_1 \\ X_2 &= t_{21} Y_1 + t_{22} Y_2 + \cdots + t_{2n} Y_n + c_2 \\ &\;\;\vdots \\ X_n &= t_{n1} Y_1 + t_{n2} Y_2 + \cdots + t_{nn} Y_n + c_n, \end{aligned} \]
so $\partial X_i / \partial Y_j = t_{ij}$ and
\[ J = \det(t_{ij}) = |T^{-1}| = 1/\det(T), \]
and so $|J| = 1/|\det(T)|$. For example, if $T = I$ (i.e., $Y = X - c$), then $J = 1$.

Basics of Multivariate Normal Distribution

Definition: Let $\Sigma = (\sigma_{ij})$ denote an $n \times n$ real symmetric positive definite (pd) matrix, and let $\mu = (\mu_1, \dots, \mu_n)'$ be a real $n$-vector. Then a random vector $X = (X_1, \dots, X_n)'$ is said to have a (nonsingular) multivariate normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$ if it has the pdf
\[ f(x) = (2\pi)^{-n/2} |\Sigma|^{-1/2} \exp\!\Big( -\tfrac{1}{2} (x - \mu)' \Sigma^{-1} (x - \mu) \Big). \]
We write $X \sim N_n(\mu, \Sigma)$.

First, we show that this is a valid pdf (i.e., $f(x) \ge 0$ and $\int f(x)\,dx = 1$).

Result: Let $A$ be an $n \times n$ symmetric pd matrix. Then, integrating over $x \in \mathbb{R}^n$,
\[ \int \exp\!\Big( -\tfrac{1}{2} x' A x \Big)\,dx = (2\pi)^{n/2} |A|^{-1/2}. \]
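To illustrate the change-of-variables formula (a sketch I am adding; the matrices and the evaluation point are arbitrary choices), take $X$ multivariate normal, so the transformed density is also available in closed form: if $Y = T(X - c)$, then $Y \sim N(T(\mu - c), T\Sigma T')$, and this pdf must agree with $f_Y(y) = f_X(T^{-1}y + c)/|\det T|$.

```python
import numpy as np
from scipy.stats import multivariate_normal

# X ~ N(mu, Sigma) in R^2; T nonsingular, c fixed (all arbitrary choices).
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
T = np.array([[1.0, 2.0],
              [0.0, 3.0]])
c = np.array([0.5, -1.0])

fX = multivariate_normal(mean=mu, cov=Sigma)
# Y = T(X - c) is again normal, with mean T(mu - c) and covariance T Sigma T'.
fY = multivariate_normal(mean=T @ (mu - c), cov=T @ Sigma @ T.T)

Tinv = np.linalg.inv(T)
y = np.array([0.7, 4.2])             # arbitrary evaluation point

# Change-of-variables: f_Y(y) = f_X(T^{-1} y + c) * |det T^{-1}| = f_X(...) / |det T|.
lhs = fY.pdf(y)
rhs = fX.pdf(Tinv @ y + c) / abs(np.linalg.det(T))
print(lhs, rhs)
```

The two values agree to machine precision, which is exactly the content of the formula above.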
Consequently, if $\Sigma$ is an $n \times n$ symmetric pd matrix and $\mu$ is an $n$-vector, then
\[ \int \exp\!\Big( -\tfrac{1}{2} (x - \mu)' \Sigma^{-1} (x - \mu) \Big)\,dx = (2\pi)^{n/2} |\Sigma|^{1/2}. \]
Alternatively, we can show that it is a valid pdf by the moment generating function.

Moment Generating Function: Define the $n$-dimensional (multivariate) moment generating function (mgf) by
\[ M(t) = E(e^{t'X}), \qquad t \in \mathbb{R}^n. \]
For the normal case,
\[ M(t) = E(e^{t'X}) = \int e^{t'x} f(x)\,dx = (2\pi)^{-n/2} |\Sigma|^{-1/2} \int \exp\!\Big( t'x - \tfrac{1}{2}(x - \mu)' \Sigma^{-1} (x - \mu) \Big)\,dx. \]
Since $\Sigma$ is assumed to be pd, we have $\Sigma^{-1} = P \Lambda^{-1} P'$ where $\Lambda^{-1} = \mathrm{diag}(1/\lambda_1, \dots, 1/\lambda_n)$. Now, consider the linear transformation $Y = P'(X - \mu)$. Then $X = PY + \mu$ (remember that $P^{-1} = P'$). Hence $J = \det(P)$ and $|J| = 1$ (why? $P$ is orthogonal, so $\det(P) = \pm 1$), and then
\[ \begin{aligned} M(t) &= (2\pi)^{-n/2} |\Sigma|^{-1/2} \int \exp\!\Big( t'x - \tfrac{1}{2}(x - \mu)' \Sigma^{-1} (x - \mu) \Big)\,dx \\ &= (2\pi)^{-n/2} |\Sigma|^{-1/2} \int \exp\!\Big( t'(Py + \mu) - \tfrac{1}{2}(Py)' \Sigma^{-1} (Py) \Big)\,dy \\ &= e^{t'\mu} (2\pi)^{-n/2} |\Sigma|^{-1/2} \int \exp\!\Big( t'Py - \tfrac{1}{2} y' P' \Sigma^{-1} P y \Big)\,dy \\ &= e^{t'\mu} (2\pi)^{-n/2} |\Sigma|^{-1/2} \int \exp\!\Big( (P't)'y - \tfrac{1}{2} y' \Lambda^{-1} y \Big)\,dy \\ &= e^{t'\mu} (2\pi)^{-n/2} |\Sigma|^{-1/2} \int \exp\!\Big( \sum_{j=1}^n w_j y_j - \tfrac{1}{2} \sum_{j=1}^n y_j^2/\lambda_j \Big)\,dy, \end{aligned} \]
where $w = P't = (w_1, \dots, w_n)'$. Also, we know that $|\Sigma| = |\Lambda| = \prod_{j=1}^n \lambda_j$, so
\[ M(t) = e^{t'\mu} \prod_{j=1}^n (2\pi)^{-1/2} \lambda_j^{-1/2} \int \exp\!\Big( w_j y_j - \tfrac{1}{2} y_j^2/\lambda_j \Big)\,dy_j, \]
and since
\[ M_j(w_j) = E(e^{w_j Y_j}) = (2\pi)^{-1/2} \lambda_j^{-1/2} \int \exp\!\Big( w_j y_j - \tfrac{1}{2} y_j^2/\lambda_j \Big)\,dy_j \]
if $Y_j \sim N(0, \lambda_j)$, and we know
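The closed form the derivation is heading toward, $M(t) = \exp(t'\mu + t'\Sigma t/2)$, can be checked against a Monte Carlo estimate of $E(e^{t'X})$ (my own illustration; the numbers below are arbitrary, and $t$ is kept small so the estimator has moderate variance).

```python
import numpy as np

rng = np.random.default_rng(42)

# Arbitrary bivariate normal parameters and a smallish mgf argument t.
mu = np.array([0.5, -1.0])
Sigma = np.array([[1.0, 0.4],
                  [0.4, 0.5]])
t = np.array([0.3, -0.2])

# Closed form: M(t) = exp(t'mu + t' Sigma t / 2).
mgf_exact = np.exp(t @ mu + 0.5 * t @ Sigma @ t)

# Monte Carlo estimate of E[exp(t'X)].
X = rng.multivariate_normal(mu, Sigma, size=500_000)
mgf_mc = np.exp(X @ t).mean()
print(mgf_exact, mgf_mc)
```

With half a million draws the two values agree to a few decimal places.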
that $M_j(w_j) = \exp(\lambda_j w_j^2/2)$, we get
\[ M(t) = e^{t'\mu} \prod_{j=1}^n \exp(\lambda_j w_j^2/2) = \exp\!\Big( t'\mu + \tfrac{1}{2} \sum_{j=1}^n \lambda_j w_j^2 \Big) = \exp\!\Big( t'\mu + \tfrac{1}{2} w' \Lambda w \Big) = \exp\!\Big( t'\mu + \tfrac{1}{2} t' P \Lambda P' t \Big) = \exp\!\Big( t'\mu + \tfrac{1}{2} t' \Sigma t \Big). \]
This is the mgf of $X \sim N_n(\mu, \Sigma)$. We can easily show that $\int f(x)\,dx = 1$ by setting $t = 0$ in the mgf. The mgf of each $X_j$ is obtained from $M(t)$ by setting all $t_i = 0$ except $t_j$, and we get
\[ M_{X_j}(t_j) = E(e^{t_j X_j}) = \exp(t_j \mu_j + \sigma_{jj} t_j^2/2). \]
So $X_j \sim N(\mu_j, \sigma_{jj})$ for $j = 1, \dots, n$, and the mgf gives us $E(X) = \mu$ and $\mathrm{Cov}(X) = \Sigma$. Alternatively, if we consider $Y = P'(X - \mu)$, we saw that each $Y_j \sim N(0, \lambda_j)$ and that $E(Y) = 0$ and $\mathrm{Cov}(Y) = \Lambda$, and thus $Y \sim N(0, \Lambda)$. Since $X = PY + \mu$, it follows that $E(X) = P\,E(Y) + \mu = \mu$ and $\mathrm{Cov}(X) = P\,\mathrm{Cov}(Y)\,P' = P \Lambda P' = \Sigma$.

Multivariate Standard Normal: A random vector $Z = (Z_1, \dots, Z_n)'$ is said to have the multivariate standard normal distribution if and only if
\[ f(z) = (2\pi)^{-n/2} \exp\!\Big( -\tfrac{1}{2} z'z \Big) = \prod_{j=1}^n (2\pi)^{-1/2} \exp\!\Big( -\tfrac{1}{2} z_j^2 \Big), \]
and this is denoted $Z \sim N_n(0, I)$.

Linear Combinations: Suppose that $X \sim N_n(\mu_x, \Sigma_{xx})$, and let $c = (c_1, \dots, c_n)'$. Then $Y = c'X = \sum_{j=1}^n c_j X_j$ has a normal distribution with $E(Y) = c'\mu_x = \sum_{j=1}^n c_j \mu_j$ and $\mathrm{Var}(Y) = c' \Sigma_{xx} c = \sum_i \sum_j c_i c_j \sigma_{ij}$. Quick proof, by mgf:
\[ M_Y(t) = E(e^{tY}) = E(e^{(tc)'X}) = \exp\!\Big( t c'\mu_x + \tfrac{1}{2} t^2 c' \Sigma_{xx} c \Big), \]
which is the mgf of $N(c'\mu_x, c'\Sigma_{xx}c)$. One can also show that $X \sim N_n(\mu_x, \Sigma_{xx})$ if and only if $c'X \sim N(c'\mu_x, c'\Sigma_{xx}c)$ for every $c \ne 0$ (Cramér–Wold). More generally, if $Y = BX + d$ where $Y = (Y_1, \dots, Y_p)'$, then $Y \sim N_p(B\mu_x + d, B\Sigma_{xx}B')$.

Properties of Multivariate Normal Distribution

Independence: This is a very nice property of the multivariate normal distribution. It says that if $X = (X_1, \dots, X_k)'$ and $X \sim N_k(\mu_x, \Sigma_{xx})$, then the random variables $X_1, \dots, X_k$ are independent if and only if $\Sigma_{xx}$ is diagonal, or equivalently, the $X_j$ are uncorrelated. This is easily proved by inspecting the mgf of the multivariate normal. Also, if $X \sim N_k(\mu_x, \Sigma_{xx})$ and $U = AX$ and $V = BX$, then $U$ and $V$ are independent if and only if $\mathrm{Cov}(U, V) = A \Sigma_{xx} B' = 0$. See p. 5 of the text for the proof.
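The independence criterion $\mathrm{Cov}(U, V) = A\Sigma_{xx}B' = 0$ can be illustrated numerically (a sketch of mine, not from the notes): with $\Sigma = I_2$, $A = (1, 1)$ and $B = (1, -1)$ give $A\Sigma B' = 0$ exactly, so $U = X_1 + X_2$ and $V = X_1 - X_2$ are independent, and their sample covariance is near zero.

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.zeros(2)
Sigma = np.eye(2)
A = np.array([[1.0, 1.0]])
B = np.array([[1.0, -1.0]])

# Cov(U, V) = A Sigma B' -- exactly zero for this choice of A, B, Sigma.
cross = A @ Sigma @ B.T
print(cross)

X = rng.multivariate_normal(mu, Sigma, size=200_000)
U = X @ A.T
V = X @ B.T

# For jointly normal U and V, zero covariance implies independence;
# the sample covariance should be near zero (Monte Carlo noise only).
sample_cross = np.cov(U.ravel(), V.ravel())[0, 1]
print(sample_cross)
```

Note that zero covariance implies independence only because $(U, V)$ is jointly normal; for general random variables it does not.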
Marginal Distributions: Let $X \sim N_k(\mu, \Sigma)$, and partition $X = (x_1', x_2')'$, where $x_1$ is a $k_1$-vector and $x_2$ is a $k_2$-vector, with $k_1 + k_2 = k$. Then $\mu = (\mu_1', \mu_2')'$ where $\mu_i = E(x_i)$, and
\[ \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}, \]
where $\Sigma_{ij} = \mathrm{Cov}(x_i, x_j)$. It follows that $x_i \sim N_{k_i}(\mu_i, \Sigma_{ii})$, and this can be verified by letting $x_1 = (I_{k_1}\;\; 0)\,X$ and $x_2 = (0\;\; I_{k_2})\,X$ and deriving their distributions. In addition, $x_1$ and $x_2$ are independent if and only if $\Sigma_{12} = \mathrm{Cov}(x_1, x_2) = 0$ (which is equivalent to $\Sigma_{21} = \Sigma_{12}' = 0$). For the proof, one direction is obvious. As for the other direction, if $\Sigma_{12} = 0$, then $|\Sigma| = |\Sigma_{11}|\,|\Sigma_{22}|$ and $\Sigma = \mathrm{diag}(\Sigma_{11}, \Sigma_{22})$ (why? both off-diagonal blocks vanish), so $\Sigma^{-1} = \mathrm{diag}(\Sigma_{11}^{-1}, \Sigma_{22}^{-1})$. Then it follows that
\[ \begin{aligned} f(x) &= (2\pi)^{-k/2} |\Sigma|^{-1/2} \exp\!\Big( -\tfrac{1}{2} (x - \mu)' \Sigma^{-1} (x - \mu) \Big) \\ &= (2\pi)^{-k/2} |\Sigma_{11}|^{-1/2} |\Sigma_{22}|^{-1/2} \exp\!\Big( -\tfrac{1}{2} (x_1 - \mu_1)' \Sigma_{11}^{-1} (x_1 - \mu_1) - \tfrac{1}{2} (x_2 - \mu_2)' \Sigma_{22}^{-1} (x_2 - \mu_2) \Big) \\ &= f_1(x_1)\,f_2(x_2). \end{aligned} \]

Conditional Distribution: Under the same partition as above, the conditional distribution of $x_1$ given $x_2 = c_2$ is still normal, with conditional mean vector
\[ E(x_1 \mid x_2 = c_2) = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (c_2 - \mu_2) \]
and conditional covariance matrix
\[ \mathrm{Cov}(x_1 \mid x_2 = c_2) = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21} = \Sigma_{11 \cdot 2}, \]
where we write $\Sigma_{11 \cdot 2} = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}$. One proof goes as follows. Let $y_2 = x_2$ and $y_1 = x_1 + M x_2$, where $M$ is chosen such that $y_1$ and $y_2$ are independent. Then we must have
\[ 0 = \mathrm{Cov}(y_1, y_2) = \mathrm{Cov}(x_1 + Mx_2, x_2) = \Sigma_{12} + M\Sigma_{22}, \]
and then we can choose $M = -\Sigma_{12}\Sigma_{22}^{-1}$. Then $x_2 = y_2$ and $y_1 = x_1 - \Sigma_{12}\Sigma_{22}^{-1}x_2$ are independent, with $x_2 \sim N_{k_2}(\mu_2, \Sigma_{22})$ and $y_1 \sim N_{k_1}(\mu_1 - \Sigma_{12}\Sigma_{22}^{-1}\mu_2, \Sigma_{11 \cdot 2})$, since $\mathrm{Cov}(y_1) = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21} = \Sigma_{11 \cdot 2}$. Hence, the distribution of $y_1 \mid x_2 = c_2$ is the same as the unconditional distribution of $y_1$, so that
\[ (y_1 \mid x_2 = c_2) \sim N_{k_1}(\mu_1 - \Sigma_{12}\Sigma_{22}^{-1}\mu_2, \Sigma_{11 \cdot 2}). \]
Since $x_1 = y_1 + \Sigma_{12}\Sigma_{22}^{-1}x_2$, given $x_2 = c_2$, $x_1$ has the same distribution as $y_1 + \Sigma_{12}\Sigma_{22}^{-1}c_2$, so
\[ (x_1 \mid x_2 = c_2) \sim N_{k_1}\big( \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(c_2 - \mu_2),\; \Sigma_{11 \cdot 2} \big). \]
A few more facts: The matrix $B = \Sigma_{12}\Sigma_{22}^{-1}$ is called the matrix of regression coefficients (or regression matrix) of $x_1$ on $x_2$. $E(x_1 \mid x_2 = c_2) = \mu_1 + B(c_2 - \mu_2)$ is called the (linear) regression function of $x_1$ on $x_2 = c_2$. The elements of $\Sigma_{11 \cdot 2}$ are called partial covariances (adjusted for $x_2$), and the correlations associated with $\Sigma_{11 \cdot 2}$ are called partial correlations (between elements of $x_1$, adjusted for $x_2$).

Samples From Multivariate Normal Distribution

Let $X_1, \dots, X_N$ be an iid sample from the $N_k(\mu, \Sigma)$ distribution. The sample mean is
\[ \bar{X}_N = \frac{1}{N} \sum_{i=1}^N X_i. \]
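In the bivariate case the block formulas reduce to the familiar scalar results $E(x_1 \mid x_2 = c_2) = \mu_1 + \rho(\sigma_1/\sigma_2)(c_2 - \mu_2)$ and $\mathrm{Var}(x_1 \mid x_2 = c_2) = \sigma_1^2(1 - \rho^2)$. The sketch below (my addition; the parameter values are arbitrary) computes the general block formulas and checks them against these known scalar answers.

```python
import numpy as np

# Bivariate normal with correlation rho; the known scalar answers are
#   E(x1 | x2 = c2)   = mu1 + rho*(s1/s2)*(c2 - mu2)
#   Var(x1 | x2 = c2) = s1^2 * (1 - rho^2)
mu = np.array([1.0, -2.0])
s1, s2, rho = 2.0, 3.0, 0.6
Sigma = np.array([[s1**2,         rho * s1 * s2],
                  [rho * s1 * s2, s2**2        ]])
c2 = np.array([0.5])

# General block formulas: B = Sigma12 Sigma22^{-1}, Sigma_{11.2} = Sigma11 - B Sigma21.
S11 = Sigma[:1, :1]; S12 = Sigma[:1, 1:]
S21 = Sigma[1:, :1]; S22 = Sigma[1:, 1:]
B = S12 @ np.linalg.inv(S22)
cond_mean = mu[:1] + B @ (c2 - mu[1:])
cond_cov = S11 - B @ S21

print(cond_mean, cond_cov)
```

Here $B = \rho s_1/s_2 = 0.4$, so the conditional mean is $1 + 0.4(0.5 + 2) = 2.0$ and the conditional variance is $4(1 - 0.36) = 2.56$.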
The distribution of $\bar{X}_N$ is easily derived from the earlier results about multivariate normals: $\bar{X}_N \sim N_k(\mu, \Sigma/N)$. The multivariate analog of the sample variance is the sample covariance matrix $S_N/(N-1)$, where
\[ S_N = \sum_{i=1}^N (X_i - \bar{X}_N)(X_i - \bar{X}_N)' = \sum_{i=1}^N X_i X_i' - N \bar{X}_N \bar{X}_N'. \]
What is the distribution of $S_N$?

Wishart Distribution: Let $X_j \sim$ iid $N_k(0, \Sigma)$, $j = 1, \dots, m$. Then $S = \sum_{j=1}^m X_j X_j'$ is distributed as the (central) $k$-dimensional Wishart distribution with scale parameter $\Sigma$ and $m$ degrees of freedom, denoted $W_k(\Sigma, m)$. When we have $W_k(I, m)$, the Wishart distribution is said to have the standard form. Here are some elementary properties of a matrix $S$ that has a $k$-dimensional Wishart distribution with $m$ degrees of freedom and scale matrix $\Sigma$:

- For each constant vector $a \in \mathbb{R}^k$ with $a \ne 0$, $a'Sa / a'\Sigma a \sim \chi^2_m$.
- For each $p \times k$ matrix $B$, $BSB' \sim W_p(B\Sigma B', m)$.
- $E(S) = m\Sigma$.
- If $T \sim W_k(\Sigma, n)$ and is independent of $S$, then $S + T \sim W_k(\Sigma, m + n)$.

It can be shown that $S_N \sim W_k(\Sigma, N-1)$. Also, we can prove that $\bar{X}_N$ and $S_N$ are independent.

Note: There is one important non-elementary fact about Wishart matrices. If $S \sim W_k(I_k, m)$ with $m \ge k$, then the reciprocals of the diagonal entries of $S^{-1}$, i.e., $1/(S^{-1})_{jj}$, have $\chi^2_{m-k+1}$ distributions. It is related to the more general fact that, if we partition $S$ so that the diagonal blocks are $k_1 \times k_1$ and $k_2 \times k_2$ (with $k = k_1 + k_2$), then
\[ S_{11 \cdot 2} = S_{11} - S_{12} S_{22}^{-1} S_{21} \sim W_{k_1}(\Sigma_{11 \cdot 2}, m - k_2). \]
Also, $S_{11 \cdot 2}$ is independent of both $S_{12}$ and $S_{22}$.

Sufficient Statistics: A statistic $T$ is sufficient for a parameter $\theta$ (for a distribution of $X$) if the distribution of $X$ given $T$ does not depend on $\theta$. We know the factorization theorem: a statistic $T(X)$ is sufficient for $\theta$ if and only if there exist functions $g(t; \theta)$ and $h(x)$ such that $f(x; \theta) = g(T(x); \theta)\,h(x)$. We can show that if $X_1, \dots, X_N$ are independent multivariate normal $N_k(\mu, \Sigma)$, then $(\bar{X}_N, S_N)$ is jointly sufficient for $(\mu, \Sigma)$. To prove this, write the joint density of the $N$ (independent) normal random vectors $X_1, \dots, X_N$ as
\[ \begin{aligned} f(x_1, \dots, x_N; \mu, \Sigma) &= \prod_{i=1}^N (2\pi)^{-k/2} |\Sigma|^{-1/2} \exp\!\Big( -\tfrac{1}{2} (x_i - \mu)' \Sigma^{-1} (x_i - \mu) \Big) \\ &= (2\pi)^{-Nk/2} |\Sigma|^{-N/2} \exp\!\Big( -\tfrac{1}{2} \sum_{i=1}^N (x_i - \mu)' \Sigma^{-1} (x_i - \mu) \Big) \\ &= (2\pi)^{-Nk/2} |\Sigma|^{-N/2} \exp\!\Big( -\tfrac{1}{2} \mathrm{tr}(\Sigma^{-1} S_N) - \tfrac{N}{2} (\bar{x}_N - \mu)' \Sigma^{-1} (\bar{x}_N - \mu) \Big); \end{aligned} \]
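The two algebraic forms of $S_N$, and the relation between $S_N/(N-1)$ and the usual sample covariance, can be checked directly (a small verification I am adding; the sample is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# A small iid sample from N_k(mu, Sigma), k = 3, N = 50; rows are the X_i'.
mu = np.array([1.0, 0.0, -1.0])
Sigma = np.diag([1.0, 2.0, 0.5])
X = rng.multivariate_normal(mu, Sigma, size=50)
N = X.shape[0]

xbar = X.mean(axis=0)

# S_N via the centered sum of outer products ...
centered = X - xbar
S_N = centered.T @ centered
# ... and via the algebraically equivalent form sum X_i X_i' - N xbar xbar'.
S_N_alt = X.T @ X - N * np.outer(xbar, xbar)

# The sample covariance matrix is S_N / (N - 1), which is what np.cov computes.
max_diff = max(np.abs(S_N - S_N_alt).max(),
               np.abs(S_N / (N - 1) - np.cov(X.T)).max())
print(max_diff)
```

Both comparisons agree up to floating-point roundoff; the centered form is the numerically preferable one in practice.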
note that we can do this by observing that
\[ \begin{aligned} \sum_{i=1}^N (x_i - \mu)' \Sigma^{-1} (x_i - \mu) &= \sum_{i=1}^N \mathrm{tr}\big( (x_i - \mu)' \Sigma^{-1} (x_i - \mu) \big) \\ &= \sum_{i=1}^N \mathrm{tr}\big( \Sigma^{-1} (x_i - \mu)(x_i - \mu)' \big) \\ &= \mathrm{tr}\Big( \Sigma^{-1} \sum_{i=1}^N (x_i - \mu)(x_i - \mu)' \Big) \\ &= \mathrm{tr}\Big( \Sigma^{-1} \sum_{i=1}^N (x_i - \bar{x}_N + \bar{x}_N - \mu)(x_i - \bar{x}_N + \bar{x}_N - \mu)' \Big) \\ &= \mathrm{tr}\Big( \Sigma^{-1} \sum_{i=1}^N (x_i - \bar{x}_N)(x_i - \bar{x}_N)' \Big) + \mathrm{tr}\Big( \Sigma^{-1} N (\bar{x}_N - \mu)(\bar{x}_N - \mu)' \Big) \\ &= \mathrm{tr}(\Sigma^{-1} S_N) + N (\bar{x}_N - \mu)' \Sigma^{-1} (\bar{x}_N - \mu), \end{aligned} \]
since $\sum_{i=1}^N (x_i - \bar{x}_N) = 0$, so the cross terms vanish. By the method analogous to the univariate case, we see that $\bar{X}_N$ and $S_N$ are sufficient statistics. Note that we can also show the independence of $\bar{X}_N$ and $S_N$ by using Basu's Theorem.

Maximum likelihood estimates: The likelihood function based on a sample of $N$ iid $N_k(\mu, \Sigma)$ vectors (which is actually the same as the joint pdf above) is
\[ L(\mu, \Sigma) = (2\pi)^{-Nk/2} |\Sigma|^{-N/2} \exp\!\Big( -\tfrac{1}{2} \sum_{i=1}^N (x_i - \mu)' \Sigma^{-1} (x_i - \mu) \Big). \]
Again, the exponent can be rewritten as
\[ -\tfrac{N}{2} (\bar{x}_N - \mu)' \Sigma^{-1} (\bar{x}_N - \mu) - \tfrac{1}{2} \mathrm{tr}(\Sigma^{-1} S_N). \]
The first term is the only place that $\mu$ appears in the likelihood; hence we can maximize the likelihood over $\mu$ for each fixed $\Sigma$ by choosing $\mu = \bar{x}_N$. The MLE of $\mu$ is then
\[ \hat{\mu} = \bar{X}_N. \]
Next,
\[ \log L(\hat{\mu}, \Sigma) = -\tfrac{Nk}{2} \log(2\pi) - \tfrac{N}{2} \log|\Sigma| - \tfrac{1}{2} \mathrm{tr}(\Sigma^{-1} S_N). \]
One could try to maximize this by taking its derivative with respect to each element of $\Sigma$ and setting them all to 0. This procedure is facilitated by a couple of results about derivatives with respect to matrices:
\[ \frac{\partial}{\partial A} \mathrm{tr}(AB) = B' \qquad \text{and} \qquad \frac{\partial}{\partial A} \log|A| = (A^{-1})'. \]
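The claim that $\hat{\mu} = \bar{X}_N$ and $\hat{\Sigma} = S_N/N$ jointly maximize the likelihood can be spot-checked numerically (my illustration; the data set and the two perturbations are arbitrary): the log-likelihood at the MLEs should exceed its value at any perturbed parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed data set; the MLEs are mu_hat = xbar and Sigma_hat = S_N / N.
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 2.0]], size=100)
N, k = X.shape
xbar = X.mean(axis=0)
S_N = (X - xbar).T @ (X - xbar)
Sigma_hat = S_N / N

def loglik(mu, Sigma):
    """Multivariate normal log-likelihood of the sample X at (mu, Sigma)."""
    d = X - mu
    Sinv = np.linalg.inv(Sigma)
    return (-0.5 * N * k * np.log(2 * np.pi)
            - 0.5 * N * np.log(np.linalg.det(Sigma))
            - 0.5 * np.trace(Sinv @ d.T @ d))

best = loglik(xbar, Sigma_hat)
# Any perturbation of (mu, Sigma) should not improve the log-likelihood.
worse_mu = loglik(xbar + 0.1, Sigma_hat)
worse_Sigma = loglik(xbar, Sigma_hat + 0.2 * np.eye(k))
print(best, worse_mu, worse_Sigma)
```

This is only a spot check at two perturbations, of course, not a proof; the proof is the derivative argument in the text.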
Now, let $A = \Sigma^{-1}$. We get
\[ \frac{\partial \log L(\hat{\mu}, \Sigma)}{\partial A} = \frac{N}{2} \frac{\partial \log|A|}{\partial A} - \frac{1}{2} \frac{\partial\,\mathrm{tr}(A S_N)}{\partial A} = \frac{N}{2} A^{-1} - \frac{1}{2} S_N, \]
using the symmetry of $A$ and $S_N$. The derivative is $0_{k \times k}$ if and only if $A^{-1} = S_N/N$. The MLE of $\Sigma$ is then
\[ \hat{\Sigma} = S_N/N. \]
Since the MLE has the invariance property, we can show that the MLE of $\mu' \Sigma^{-1} \mu$ is $\hat{\mu}' \hat{\Sigma}^{-1} \hat{\mu}$.

Large Sample Properties

First, some preliminaries. For $x \in \mathbb{R}^k$, we define the Euclidean norm (distance) as
\[ \|x\| = \sqrt{x'x} = \Big( \sum_{i=1}^k x_i^2 \Big)^{1/2}. \]
If $X, X_1, X_2, \dots$ are random $k$-vectors, then we say that, as $n \to \infty$:

- $X_n \xrightarrow{a.s.} X$, if $P(\lim_{n \to \infty} X_n = X) = 1$ (almost sure convergence);
- $X_n \xrightarrow{L_p} X$, if $E\|X_n - X\|^p \to 0$ (convergence in $L_p$);
- $X_n \xrightarrow{p} X$, if for any $\epsilon > 0$, $P(\|X_n - X\| > \epsilon) \to 0$ (convergence in probability);
- $X_n \xrightarrow{d} X$, if, with $F_{X_n}$ the cdf of $X_n$ and $F_X$ the cdf of $X$, $F_{X_n}(x) \to F_X(x)$ at the points $x \in \mathbb{R}^k$ where $F_X$ is continuous (convergence in distribution).

Consistency: It can be shown that, if $X_1, X_2, \dots$ are iid random $k$-vectors with $E(X_i) = \mu$ and $\mathrm{Cov}(X_i) = \Sigma$, then $\bar{X}_n \to \mu$ and $S_n/(n-1) \to \Sigma$ (or $S_n/n \to \Sigma$) almost surely, as $n \to \infty$. Actually, we mostly need convergence in probability (consistency).

Multivariate Central Limit Theorem: If $X_1, X_2, \dots$ are iid random $k$-vectors with $E(X_i) = \mu$ and $\mathrm{Cov}(X_i) = \Sigma$, then
\[ \frac{1}{\sqrt{n}} \sum_{i=1}^n (X_i - \mu) = \sqrt{n}\,(\bar{X}_n - \mu) \xrightarrow{d} N_k(0, \Sigma) \]
as $n \to \infty$. An important consequence of this result is the so-called Multivariate Delta Method. Under the same assumptions as above, if there exists a real-valued function $g$ whose first partial derivatives exist at $\mu$ and are not all zero, then
\[ \sqrt{n}\,\big( g(\bar{X}_n) - g(\mu) \big) \xrightarrow{d} N\big( 0,\; \nabla g(\mu)'\, \Sigma\, \nabla g(\mu) \big), \]
where $\nabla g(x)$ is the gradient of $g$.
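The delta method can be seen at work in a small simulation (my own example, not from the notes): take $g(x) = x_1 x_2$, so $\nabla g(\mu) = (\mu_2, \mu_1)'$, and compare the empirical variance of $\sqrt{n}\,(g(\bar{X}_n) - g(\mu))$ over many replications to the asymptotic variance $\nabla g(\mu)' \Sigma \nabla g(\mu)$. All parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

# g(x) = x1 * x2, with gradient (x2, x1); evaluate at mu = (1, 2).
mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.2],
                  [0.2, 0.5]])
grad = np.array([mu[1], mu[0]])      # gradient of g at mu
asym_var = grad @ Sigma @ grad       # delta-method variance: grad' Sigma grad

# Monte Carlo: distribution of sqrt(n) * (g(xbar_n) - g(mu)) over many replications.
n, reps = 400, 5000
X = rng.multivariate_normal(mu, Sigma, size=(reps, n))
xbar = X.mean(axis=1)                # one sample mean per replication
stat = np.sqrt(n) * (xbar[:, 0] * xbar[:, 1] - mu[0] * mu[1])
print(asym_var, stat.var())
```

Here $\nabla g(\mu)' \Sigma \nabla g(\mu) = 5.3$, and the simulated variance is close to it; the small remaining gap is finite-$n$ and Monte Carlo error.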
More informationLecture 15: Multivariate normal distributions
Lecture 15: Multivariate normal distributions Normal distributions with singular covariance matrices Consider an n-dimensional X N(µ,Σ) with a positive definite Σ and a fixed k n matrix A that is not of
More informationANOVA: Analysis of Variance - Part I
ANOVA: Analysis of Variance - Part I The purpose of these notes is to discuss the theory behind the analysis of variance. It is a summary of the definitions and results presented in class with a few exercises.
More informationUnbiased Estimation. Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others.
Unbiased Estimation Binomial problem shows general phenomenon. An estimator can be good for some values of θ and bad for others. To compare ˆθ and θ, two estimators of θ: Say ˆθ is better than θ if it
More informationBayesian Inference. Chapter 9. Linear models and regression
Bayesian Inference Chapter 9. Linear models and regression M. Concepcion Ausin Universidad Carlos III de Madrid Master in Business Administration and Quantitative Methods Master in Mathematical Engineering
More information1. Point Estimators, Review
AMS571 Prof. Wei Zhu 1. Point Estimators, Review Example 1. Let be a random sample from. Please find a good point estimator for Solutions. There are the typical estimators for and. Both are unbiased estimators.
More informationMULTIVARIATE DISTRIBUTIONS
Chapter 9 MULTIVARIATE DISTRIBUTIONS John Wishart (1898-1956) British statistician. Wishart was an assistant to Pearson at University College and to Fisher at Rothamsted. In 1928 he derived the distribution
More informationChapter 17: Undirected Graphical Models
Chapter 17: Undirected Graphical Models The Elements of Statistical Learning Biaobin Jiang Department of Biological Sciences Purdue University bjiang@purdue.edu October 30, 2014 Biaobin Jiang (Purdue)
More informationMath 3215 Intro. Probability & Statistics Summer 14. Homework 5: Due 7/3/14
Math 325 Intro. Probability & Statistics Summer Homework 5: Due 7/3/. Let X and Y be continuous random variables with joint/marginal p.d.f. s f(x, y) 2, x y, f (x) 2( x), x, f 2 (y) 2y, y. Find the conditional
More informationx. Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ 2 ).
.8.6 µ =, σ = 1 µ = 1, σ = 1 / µ =, σ =.. 3 1 1 3 x Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ ). The Gaussian distribution Probably the most-important distribution in all of statistics
More informationAPPM/MATH 4/5520 Solutions to Exam I Review Problems. f X 1,X 2. 2e x 1 x 2. = x 2
APPM/MATH 4/5520 Solutions to Exam I Review Problems. (a) f X (x ) f X,X 2 (x,x 2 )dx 2 x 2e x x 2 dx 2 2e 2x x was below x 2, but when marginalizing out x 2, we ran it over all values from 0 to and so
More informationMultivariate Statistics
Multivariate Statistics Chapter 2: Multivariate distributions and inference Pedro Galeano Departamento de Estadística Universidad Carlos III de Madrid pedro.galeano@uc3m.es Course 2016/2017 Master in Mathematical
More informationSTA 2101/442 Assignment 3 1
STA 2101/442 Assignment 3 1 These questions are practice for the midterm and final exam, and are not to be handed in. 1. Suppose X 1,..., X n are a random sample from a distribution with mean µ and variance
More information01 Probability Theory and Statistics Review
NAVARCH/EECS 568, ROB 530 - Winter 2018 01 Probability Theory and Statistics Review Maani Ghaffari January 08, 2018 Last Time: Bayes Filters Given: Stream of observations z 1:t and action data u 1:t Sensor/measurement
More informationECON 5350 Class Notes Review of Probability and Distribution Theory
ECON 535 Class Notes Review of Probability and Distribution Theory 1 Random Variables Definition. Let c represent an element of the sample space C of a random eperiment, c C. A random variable is a one-to-one
More informationIII - MULTIVARIATE RANDOM VARIABLES
Computational Methods and advanced Statistics Tools III - MULTIVARIATE RANDOM VARIABLES A random vector, or multivariate random variable, is a vector of n scalar random variables. The random vector is
More informationLet X and Y denote two random variables. The joint distribution of these random
EE385 Class Notes 9/7/0 John Stensby Chapter 3: Multiple Random Variables Let X and Y denote two random variables. The joint distribution of these random variables is defined as F XY(x,y) = [X x,y y] P.
More informationSUFFICIENT STATISTICS
SUFFICIENT STATISTICS. Introduction Let X (X,..., X n ) be a random sample from f θ, where θ Θ is unknown. We are interested using X to estimate θ. In the simple case where X i Bern(p), we found that the
More informationEC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix)
1 EC212: Introduction to Econometrics Review Materials (Wooldridge, Appendix) Taisuke Otsu London School of Economics Summer 2018 A.1. Summation operator (Wooldridge, App. A.1) 2 3 Summation operator For
More informationStatistics for scientists and engineers
Statistics for scientists and engineers February 0, 006 Contents Introduction. Motivation - why study statistics?................................... Examples..................................................3
More informationTest Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics
Test Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics The candidates for the research course in Statistics will have to take two shortanswer type tests
More informationECON Fundamentals of Probability
ECON 351 - Fundamentals of Probability Maggie Jones 1 / 32 Random Variables A random variable is one that takes on numerical values, i.e. numerical summary of a random outcome e.g., prices, total GDP,
More informationGaussian Models (9/9/13)
STA561: Probabilistic machine learning Gaussian Models (9/9/13) Lecturer: Barbara Engelhardt Scribes: Xi He, Jiangwei Pan, Ali Razeen, Animesh Srivastava 1 Multivariate Normal Distribution The multivariate
More information