Random vectors
1 Random vectors

Recall that a random vector $X = (X_1, X_2, \ldots, X_k)'$ is made up of, say, $k$ random variables. A random vector has a joint distribution, e.g. a density $f(x)$, that gives probabilities $P(X \in A) = \int_A f(x)\,dx$. Just as a random variable $X$ has a mean $E(X)$ and variance $\mathrm{var}(X)$, a random vector also has a mean vector $E(X)$ and a covariance matrix $\mathrm{cov}(X)$.
2 Let $X = (X_1, \ldots, X_k)'$ be a random vector with density $f(x_1, \ldots, x_k)$ or pmf $p(x_1, \ldots, x_k)$. The mean of $X$ is the vector of marginal means:

$$E(X) = E\begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_k \end{pmatrix} = \begin{pmatrix} E(X_1) \\ E(X_2) \\ \vdots \\ E(X_k) \end{pmatrix}$$

The covariance matrix of $X$ is given by

$$\mathrm{cov}(X) = \begin{pmatrix} \mathrm{cov}(X_1, X_1) & \mathrm{cov}(X_1, X_2) & \cdots & \mathrm{cov}(X_1, X_k) \\ \mathrm{cov}(X_2, X_1) & \mathrm{cov}(X_2, X_2) & \cdots & \mathrm{cov}(X_2, X_k) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{cov}(X_k, X_1) & \mathrm{cov}(X_k, X_2) & \cdots & \mathrm{cov}(X_k, X_k) \end{pmatrix}$$
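As a quick numerical illustration of these two definitions (in Python with NumPy rather than the R used later in these notes, and with a made-up dependence structure), we can simulate draws of a $k = 3$ random vector and compute the elementwise mean vector and the $k \times k$ covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n draws of a k = 3 random vector; the mixing matrix below is
# arbitrary and only serves to induce correlation across components.
n = 100_000
z = rng.standard_normal((n, 3))
x = z @ np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 0.3],
                  [0.0, 0.0, 1.0]])

# Mean vector E(X): the vector of marginal means.
mean_vec = x.mean(axis=0)
# Covariance matrix cov(X): entry (i, j) estimates cov(X_i, X_j).
cov_mat = np.cov(x, rowvar=False)

print(mean_vec.shape, cov_mat.shape)
```

Note that the estimated covariance matrix is symmetric, exactly as the definition requires, since $\mathrm{cov}(X_i, X_j) = \mathrm{cov}(X_j, X_i)$.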
3 Multivariate normal distribution

The normal distribution generalizes to multiple dimensions. We'll first look at two jointly distributed normal random variables, then discuss three or more. The bivariate normal density for $(X_1, X_2)$ is given by

$$f(x_1, x_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \left(\frac{x_1-\mu_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x_1-\mu_1}{\sigma_1}\right)\left(\frac{x_2-\mu_2}{\sigma_2}\right) + \left(\frac{x_2-\mu_2}{\sigma_2}\right)^2 \right] \right\}$$

There are 5 parameters: $(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$. Read: pp. 8-84, 45-46, 48.
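The five-parameter density above can be sanity-checked by coding it directly and comparing against a library implementation. This is a sketch in Python with SciPy (the notes themselves use R), and the parameter values are arbitrary illustrative choices, not from the notes:

```python
import numpy as np
from scipy.stats import multivariate_normal

def bvn_pdf(x1, x2, mu1, mu2, s1, s2, rho):
    """Bivariate normal density, written exactly as on the slide."""
    z1 = (x1 - mu1) / s1
    z2 = (x2 - mu2) / s2
    q = (z1**2 - 2 * rho * z1 * z2 + z2**2) / (1 - rho**2)
    return np.exp(-0.5 * q) / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho**2))

# Arbitrary illustrative parameters.
mu1, mu2, s1, s2, rho = 0.0, 2.0, 1.0, 1.5, 0.6
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
ref = multivariate_normal(mean=[mu1, mu2], cov=cov)

pt = (0.3, 1.1)
val = bvn_pdf(*pt, mu1, mu2, s1, s2, rho)
```

The hand-coded density and SciPy's agree at any test point, confirming that the covariance matrix corresponding to the five parameters has off-diagonal entry $\rho\sigma_1\sigma_2$.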
4 This density jointly defines $X_1$ and $X_2$, which live in $\mathbb{R}^2 = (-\infty, \infty) \times (-\infty, \infty)$. Marginally, $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$. The correlation between $X_1$ and $X_2$ is given by $\mathrm{corr}(X_1, X_2) = \rho$. For jointly normal random variables, if the correlation is zero then they are independent. This is not true in general for jointly defined random variables (e.g. homework 5 problem).

$$E(X) = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \qquad \mathrm{cov}(X) = \begin{pmatrix} \sigma_1^2 & \sigma_1\sigma_2\rho \\ \sigma_1\sigma_2\rho & \sigma_2^2 \end{pmatrix}$$

Next slide: $\mu_1 = 0, 1$; $\mu_2 = 0, 2$; $\sigma_1^2 = \sigma_2^2 = 1$; $\rho = 0, 0.9, 0.6$.
5 Figure 1: Bivariate normal PDF level curves
6 Proof that $X_1$ is independent of $X_2$ when $\rho = 0$

When $\rho = 0$ the joint density for $(X_1, X_2)$ simplifies to

$$f(x_1, x_2) = \frac{1}{2\pi\sigma_1\sigma_2} \exp\left\{ -\frac{1}{2}\left[ \left(\frac{x_1-\mu_1}{\sigma_1}\right)^2 + \left(\frac{x_2-\mu_2}{\sigma_2}\right)^2 \right] \right\} = \left[\frac{1}{\sqrt{2\pi}\,\sigma_1} e^{-0.5\frac{(x_1-\mu_1)^2}{\sigma_1^2}}\right] \left[\frac{1}{\sqrt{2\pi}\,\sigma_2} e^{-0.5\frac{(x_2-\mu_2)^2}{\sigma_2^2}}\right]$$

Since these factors are respectively functions of $x_1$ and $x_2$ only, and the range of $(X_1, X_2)$ factors into the product of two sets, $X_1$ and $X_2$ are independent, and in fact $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$.
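The factorization can be verified numerically: with $\rho = 0$, the joint density evaluated at any point equals the product of the two marginal normal densities. A minimal Python/SciPy sketch with illustrative parameter values (the notes use R):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# With rho = 0 the joint bivariate normal density factors into the
# product of the two marginal normal densities.
mu1, mu2, s1, s2 = 1.0, -2.0, 0.7, 1.3   # illustrative values only
joint = multivariate_normal(mean=[mu1, mu2],
                            cov=[[s1**2, 0.0], [0.0, s2**2]])

x1, x2 = 1.4, -1.1
lhs = joint.pdf([x1, x2])                      # joint density
rhs = norm.pdf(x1, mu1, s1) * norm.pdf(x2, mu2, s2)  # product of marginals
```

The two quantities agree at every test point, which is exactly the factorization argument on the slide.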
7 Conditional distributions $[X_1 \mid X_2 = x_2]$ and $[X_2 \mid X_1 = x_1]$

The conditional distribution of $X_1$ given $X_2 = x_2$ is

$$[X_1 \mid X_2 = x_2] \sim N\left( \mu_1 + \frac{\sigma_1}{\sigma_2}\rho(x_2 - \mu_2),\; \sigma_1^2(1-\rho^2) \right)$$

Similarly,

$$[X_2 \mid X_1 = x_1] \sim N\left( \mu_2 + \frac{\sigma_2}{\sigma_1}\rho(x_1 - \mu_1),\; \sigma_2^2(1-\rho^2) \right)$$

This ties directly to linear regression: to predict $X_2$ from $X_1 = x_1$, we have

$$E(X_2 \mid X_1 = x_1) = \left[\mu_2 - \frac{\sigma_2}{\sigma_1}\rho\mu_1\right] + \left[\frac{\sigma_2}{\sigma_1}\rho\right] x_1 = \beta_0 + \beta_1 x_1$$
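The conditional-distribution formula can be checked against the definition of a conditional density, $f_{X_2|X_1}(x_2 \mid x_1) = f(x_1, x_2)/f_{X_1}(x_1)$. A Python/SciPy sketch with arbitrary illustrative parameters (not from the notes):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

mu1, mu2, s1, s2, rho = 0.0, 10.0, 2.0, 3.0, 0.5   # illustrative values
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
joint = multivariate_normal([mu1, mu2], cov)

x1, x2 = 1.0, 12.0
# Conditional density of X2 given X1 = x1, by definition: joint / marginal.
cond_by_def = joint.pdf([x1, x2]) / norm.pdf(x1, mu1, s1)

# Conditional density from the slide's formula:
# N( mu2 + (s2/s1) rho (x1 - mu1), s2^2 (1 - rho^2) ).
m = mu2 + (s2 / s1) * rho * (x1 - mu1)
v = s2**2 * (1 - rho**2)
cond_by_formula = norm.pdf(x2, m, np.sqrt(v))
```

The two computations agree, confirming both the conditional mean (the regression line) and the reduced conditional variance $\sigma_2^2(1-\rho^2)$.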
8 Bivariate normal distribution as data model

Here we assume

$$\begin{pmatrix} X_{i1} \\ X_{i2} \end{pmatrix} \stackrel{iid}{\sim} N_2\left( \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{12} & \sigma_{22} \end{pmatrix} \right),$$

or succinctly, $X_i \stackrel{iid}{\sim} N_2(\mu, \Sigma)$. If the bivariate normal model is appropriate for paired outcomes, it provides a convenient probability model with some nice properties. The sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ is the MLE of $\mu$, and the sample covariance matrix $\hat{\Sigma} = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})(X_i - \bar{X})'$ is the MLE for $\Sigma$. It can be shown that $\bar{X} \sim N_2(\mu, \frac{1}{n}\Sigma)$, and that the matrix $n\hat{\Sigma}$ has a Wishart distribution.
9 Say $n$ outcome pairs are to be recorded: $\{(X_{11}, X_{12}), (X_{21}, X_{22}), \ldots, (X_{n1}, X_{n2})\}$. The $i$th pair is $(X_{i1}, X_{i2})$. The sample mean vector is given elementwise by

$$\bar{X} = \begin{pmatrix} \bar{X}_1 \\ \bar{X}_2 \end{pmatrix} = \begin{pmatrix} \frac{1}{n}\sum_{i=1}^n X_{i1} \\ \frac{1}{n}\sum_{i=1}^n X_{i2} \end{pmatrix},$$

and the sample covariance matrix is given elementwise by

$$S = \begin{pmatrix} \frac{1}{n}\sum_{i=1}^n (X_{i1} - \bar{X}_1)^2 & \frac{1}{n}\sum_{i=1}^n (X_{i1} - \bar{X}_1)(X_{i2} - \bar{X}_2) \\ \frac{1}{n}\sum_{i=1}^n (X_{i1} - \bar{X}_1)(X_{i2} - \bar{X}_2) & \frac{1}{n}\sum_{i=1}^n (X_{i2} - \bar{X}_2)^2 \end{pmatrix}$$
10 The sample mean vector $\bar{X}$ estimates $\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}$ and the sample covariance matrix $S$ estimates

$$\Sigma = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix} = \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{12} & \sigma_{22} \end{pmatrix}$$

We will place hats on parameter estimators based on the data. So

$$\hat{\mu}_1 = \bar{X}_1, \quad \hat{\mu}_2 = \bar{X}_2, \quad \hat{\sigma}_1^2 = \frac{1}{n}\sum_{i=1}^n (X_{i1} - \bar{X}_1)^2, \quad \hat{\sigma}_2^2 = \frac{1}{n}\sum_{i=1}^n (X_{i2} - \bar{X}_2)^2$$

Also,

$$\widehat{\mathrm{cov}}(X_1, X_2) = \frac{1}{n}\sum_{i=1}^n (X_{i1} - \bar{X}_1)(X_{i2} - \bar{X}_2)$$
11 So a natural estimate of $\rho$ is then

$$\hat{\rho} = \frac{\widehat{\mathrm{cov}}(X_1, X_2)}{\hat{\sigma}_1 \hat{\sigma}_2} = \frac{\sum_{i=1}^n (X_{i1} - \bar{X}_1)(X_{i2} - \bar{X}_2)}{\sqrt{\sum_{i=1}^n (X_{i1} - \bar{X}_1)^2}\,\sqrt{\sum_{i=1}^n (X_{i2} - \bar{X}_2)^2}}$$

This is in fact the MLE based on the bivariate normal model; it is also a plug-in estimator based on the method of moments. It is commonly referred to as the Pearson correlation coefficient. You can get it as, e.g., cor(age,Gesell) in R. This estimate of correlation can be unduly influenced by outliers in the sample. An alternative measure of linear association is the Spearman correlation, based on ranks. This correlation estimates something a bit different than the Pearson correlation.

> cor(age,Gesell)
> cor(age,Gesell,method="spearman")
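The outlier sensitivity mentioned above is easy to demonstrate. Below is a Python/SciPy sketch on illustrative made-up data (not the Gesell values): the bulk of the points have almost no association, but a single extreme point drags Pearson's $r$ close to 1, while the rank-based Spearman correlation barely reacts:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Illustrative data: nine nearly patternless points plus one big outlier.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 30.0])
y = np.array([5, 1, 4, 2, 3, 6, 2.5, 4.5, 1.5, 40.0])

r_pearson, _ = pearsonr(x, y)    # dominated by the outlier
r_spearman, _ = spearmanr(x, y)  # based on ranks, so the outlier only
                                 # counts as "largest", not "how large"
print(round(r_pearson, 3), round(r_spearman, 3))
```

Pearson here is large while Spearman is modest; this is the kind of discrepancy that signals an influential observation worth inspecting.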
12 Gesell data

Recall: $X$ is the age in months at which a child speaks his/her first word, and $Y$ is the Gesell adaptive score, a measure of a child's aptitude. Question: how does the child's aptitude change with how long it takes them to speak? Here, $n = 21$. In R we compute the sample mean vector $\hat{E}(X)$ and sample covariance matrix $\widehat{\mathrm{cov}}(X)$. Assuming a bivariate normal model, we plug in the MLEs and obtain the estimated PDF $\hat{f}(x, y)$ for $(X, Y)$: the bivariate normal density with $\hat{\mu}$ and $\hat{\Sigma}$ in place of $\mu$ and $\Sigma$. We can further find that marginally $Y \sim N(93.67, 186.32)$, so

$$f_Y(y) = \frac{1}{\sqrt{2\pi \cdot 186.32}} \exp\left\{ -\frac{(y - 93.67)^2}{2 \cdot 186.32} \right\}$$
13 Figure 2: 3D plot of $\hat{f}(x, y)$ for $(X, Y)$ based on MLEs
14 Figure 3: Solid is $f_Y(y)$; left dashed is $f_{Y|X}(y \mid 25)$; right dashed is $f_{Y|X}(y \mid 10)$. As the age in months of first words $X = x$ increases, the distribution of Gesell adaptive scores $Y$ shifts toward lower values.
15 Figure 4: MLE estimate of density with actual data (axes: age, Gesell)
16 R code to get estimates $\hat{\mu}$ and $\hat{\Sigma}$, plot level curves of the PDF, and the data:

age<-c(15,26,10,9,15,20,18,11,8,20,7,9,10,11,11,10,12,42,17,11,10)
Gesell<-c(95,71,83,91,102,87,93,100,104,94,113,96,83,84,102,100,105,57,121,86,100)
data=matrix(c(age,Gesell),ncol=2)  # make data matrix
m=c(mean(age),mean(Gesell))        # mean vector
s=cov(data)                        # covariance matrix
x1=seq(min(age)-sd(age),max(age)+sd(age),length=200)              # grid of representative age values
x2=seq(min(Gesell)-sd(Gesell),max(Gesell)+sd(Gesell),length=200)  # grid of Gesell values
f=function(x,y){
  r=s[1,2]/sqrt(s[1,1]*s[2,2])
  term1=(x-m[1])/sqrt(s[1,1]); term2=(y-m[2])/sqrt(s[2,2]); term3=-2*r*term1*term2
  exp(-0.5*(term1^2+term2^2+term3)/(1-r^2))/(2*pi*sqrt(s[1,1]*s[2,2]*(1-r^2)))
}  # function that gives the best-fitting bivariate normal density to the data
z=outer(x1,x2,f)                                     # compute the joint pdf over the grid
contour(x1,x2,z,nlevels=7,xlab="age",ylab="Gesell")  # make contour plot
points(age,Gesell,pch=19)                            # superimpose filled points
17 Checking bivariate normality

There are no multivariate boxplots. There are multivariate histograms (and smoothed histograms), but they're typically not that useful for checking normality. The easiest thing to do is to check that the marginal boxplots/histograms of $X_{11}, \ldots, X_{n1}$ and $X_{12}, \ldots, X_{n2}$ are approximately normal. This does not guarantee joint normality, though. One can additionally look at a simple scatterplot of the data, checking to see that it shows an approximately elliptical cloud of points with no glaringly obvious outlying values.
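One further informal check, beyond the marginal plots and the scatterplot mentioned on the slide, is based on squared Mahalanobis distances: under bivariate normality, $d_i^2 = (x_i - \bar{x})' S^{-1} (x_i - \bar{x})$ behaves approximately like a chi-square variable with 2 degrees of freedom. A Python/NumPy sketch on simulated bivariate normal data:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Simulated bivariate normal data (illustrative parameters).
n = 2000
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Squared Mahalanobis distances d_i^2 = (x_i - xbar)' S^{-1} (x_i - xbar).
xbar = x.mean(axis=0)
s_inv = np.linalg.inv(np.cov(x, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', x - xbar, s_inv, x - xbar)

# Under bivariate normality, about 95% of the d_i^2 should fall below
# the chi-square(2) 0.95 quantile.
frac = np.mean(d2 < chi2.ppf(0.95, df=2))
print(round(frac, 3))
```

A fraction far from 0.95, or a handful of very large $d_i^2$, points to outliers or non-normality, complementing the visual checks above.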
18 Multivariate normal distribution

In general, a $k$-variate normal is defined through the mean vector and covariance matrix:

$$\begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_k \end{pmatrix} \sim N_k\left( \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_k \end{pmatrix}, \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1k} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{k1} & \sigma_{k2} & \cdots & \sigma_{kk} \end{pmatrix} \right)$$

Succinctly, $X \sim N_k(\mu, \Sigma)$. Recall that if $Z \sim N(0, 1)$, then $X = \mu + \sigma Z \sim N(\mu, \sigma^2)$. The definition of the multivariate normal distribution just extends this idea.
19 Instead of one standard normal, we have a list of $k$ independent standard normals $Z = (Z_1, \ldots, Z_k)'$, and we consider the same sort of transformation in the multivariate case using matrices and vectors. Let $Z_1, \ldots, Z_k \stackrel{iid}{\sim} N(0, 1)$. The joint pdf of $(Z_1, \ldots, Z_k)$ is given by

$$f(z_1, \ldots, z_k) = \prod_{i=1}^k \frac{1}{\sqrt{2\pi}} \exp(-0.5 z_i^2)$$

Let

$$\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_k \end{pmatrix} \quad \text{and} \quad \Sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1k} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{k1} & \sigma_{k2} & \cdots & \sigma_{kk} \end{pmatrix},$$

where $\Sigma$ is symmetric (i.e. $\Sigma^T = \Sigma$, which implies $\sigma_{ij} = \sigma_{ji}$ for all $1 \le i, j \le k$).
20 Let $\Sigma^{1/2}$ be any matrix such that $\Sigma^{1/2}\Sigma^{1/2} = \Sigma$. Then $X = \mu + \Sigma^{1/2} Z$ is said to have a multivariate normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$, written $X \sim N_k(\mu, \Sigma)$. Written in terms of matrices,

$$X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_k \end{pmatrix} = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_k \end{pmatrix} + \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1k} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{k1} & \sigma_{k2} & \cdots & \sigma_{kk} \end{pmatrix}^{1/2} \begin{pmatrix} Z_1 \\ Z_2 \\ \vdots \\ Z_k \end{pmatrix}$$
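The construction $X = \mu + \Sigma^{1/2} Z$ is exactly how multivariate normal draws are generated in practice. A Python/NumPy sketch with made-up $\mu$ and $\Sigma$; here the square-root factor is taken to be the lower Cholesky factor $L$ with $L L' = \Sigma$, one common (non-symmetric) choice of matrix square root:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative mean vector and (positive definite) covariance matrix.
mu = np.array([1.0, -2.0, 0.5])
sigma = np.array([[2.0, 0.8, 0.2],
                  [0.8, 1.5, 0.4],
                  [0.2, 0.4, 1.0]])
L = np.linalg.cholesky(sigma)   # L @ L.T == sigma

# X = mu + L Z from iid standard normals Z.
n = 200_000
z = rng.standard_normal((3, n))   # k x n matrix of iid N(0,1) draws
x = mu[:, None] + L @ z           # each column is one draw of X

# Sample moments should recover mu and sigma.
print(np.round(x.mean(axis=1), 2))
print(np.round(np.cov(x), 2))
```

With a large number of draws, the sample mean vector and sample covariance matrix land very close to the $\mu$ and $\Sigma$ used to build the transformation.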
21 Using some math, it can be shown that the pdf of the new vector $X = (X_1, \ldots, X_k)'$ is given by

$$f(x_1, \ldots, x_k \mid \mu, \Sigma) = |2\pi\Sigma|^{-1/2} \exp\{-0.5\,(x - \mu)' \Sigma^{-1} (x - \mu)\}$$

In the one-dimensional case, this simplifies to our old friend

$$f(x \mid \mu, \sigma^2) = (2\pi\sigma^2)^{-1/2} \exp\{-0.5\,(x - \mu)(\sigma^2)^{-1}(x - \mu)\},$$

the pdf of a $N(\mu, \sigma^2)$ random variable $X$. There are several important properties of multivariate normal vectors.
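The $k$-dimensional density formula can be coded directly from the slide and checked against SciPy. A Python sketch with an arbitrary illustrative 3-dimensional $\mu$ and $\Sigma$ (not from the notes):

```python
import numpy as np
from scipy.stats import multivariate_normal

def mvn_pdf(x, mu, sigma):
    """Density |2 pi Sigma|^(-1/2) exp{-0.5 (x-mu)' Sigma^(-1) (x-mu)}."""
    diff = x - mu
    quad = diff @ np.linalg.solve(sigma, diff)   # (x-mu)' Sigma^(-1) (x-mu)
    det = np.linalg.det(2 * np.pi * sigma)       # |2 pi Sigma|
    return det ** -0.5 * np.exp(-0.5 * quad)

# Illustrative parameters.
mu = np.array([0.0, 1.0, -1.0])
sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
x = np.array([0.2, 0.8, -0.5])

val = mvn_pdf(x, mu, sigma)
```

With $k = 1$ the same code reduces to the familiar univariate normal pdf, since $|2\pi\sigma^2|^{-1/2} = (2\pi\sigma^2)^{-1/2}$.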
22 Let $X \sim N_k(\mu, \Sigma)$. Then:

1. For each $X_i$ in $X = (X_1, \ldots, X_k)'$, $E(X_i) = \mu_i$ and $\mathrm{var}(X_i) = \sigma_{ii}$. That is, marginally, $X_i \sim N(\mu_i, \sigma_{ii})$.
2. For any $r \times k$ matrix $M$, $MX \sim N_r(M\mu, M\Sigma M')$.
3. For any two $(X_i, X_j)$ with $1 \le i < j \le k$, $\mathrm{cov}(X_i, X_j) = \sigma_{ij}$. The off-diagonal elements of $\Sigma$ give the covariances between elements of $(X_1, \ldots, X_k)$. Note then $\rho(X_i, X_j) = \sigma_{ij} / \sqrt{\sigma_{ii}\sigma_{jj}}$.
4. For any $k$-vector $m = (m_1, \ldots, m_k)'$ and $Y \sim N_k(\mu, \Sigma)$, $m + Y \sim N_k(m + \mu, \Sigma)$.
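Property 2 in particular is worth seeing in action. A Python/NumPy sketch with an illustrative $k = 3$, $r = 2$ example of my own choosing (not the slide's numbers): simulate $X \sim N_3(\mu, \Sigma)$, form $Y = MX$, and check that the sample moments of $Y$ match $M\mu$ and $M\Sigma M'$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative trivariate normal parameters.
mu = np.array([0.0, 5.0, 1.0])
sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
# A 2 x 3 matrix M: Y1 = X1 - X2, Y2 = X2 + X3.
M = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, 1.0]])

x = rng.multivariate_normal(mu, sigma, size=100_000)
y = x @ M.T   # each row is one draw of Y = MX

print(np.round(y.mean(axis=0), 2))           # near M @ mu
print(np.round(np.cov(y, rowvar=False), 2))  # near M @ sigma @ M.T
```

Property 2 says $Y \sim N_2(M\mu, M\Sigma M')$ exactly, so the simulated moments converge to those matrices as the number of draws grows.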
23 As a simple example, let $X = (X_1, X_2, X_3)' \sim N_3(\mu, \Sigma)$ for the $3 \times 1$ mean vector $\mu$ and $3 \times 3$ covariance matrix $\Sigma$ given on the slide. With $\mu_2 = 5$ and $\sigma_{22} = 3$, marginally $X_2 \sim N(5, 3)$, and $\mathrm{cov}(X_2, X_3) = \sigma_{23}$. Define a $2 \times 3$ matrix $M$ and

$$Y = \begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix} = MX.$$

Then by property 2, $Y \sim N_2(M\mu, M\Sigma M')$.
24 Or, simplifying, $Y = (Y_1, Y_2)' = MX \sim N_2(M\mu, M\Sigma M')$, with the numerical mean vector and covariance matrix given on the slide. Note that for the transformed vector $Y = (Y_1, Y_2)$, $\mathrm{cov}(Y_1, Y_2) = 0$ and therefore $Y_1$ and $Y_2$ are uncorrelated, i.e. $\rho(Y_1, Y_2) = 0$ (and, being jointly normal, independent). We will use these properties later on, when we discuss the large-sample distribution of MLE vectors:

$$\hat{\theta} = \begin{pmatrix} \hat{\theta}_1 \\ \hat{\theta}_2 \\ \vdots \\ \hat{\theta}_p \end{pmatrix} \sim N_p\left( \begin{pmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_p \end{pmatrix}, \widehat{\mathrm{cov}}(\hat{\theta}) \right)$$
25 For the linear model (e.g. simple linear regression or the two-sample model) $Y = X\beta + e$, the MLE of the mean parameters has exactly a multivariate normal distribution (p. 574):

$$\hat{\beta} \sim N_p(\beta, (X'X)^{-1}\sigma^2)$$

Here $p = 2$ is the number of mean parameters. The MLE $\hat{\sigma}^2$ has a gamma distribution. Let's try an experiment where we know $\beta$. We will generate $m = 200$ independent experiments, each with a sample size of $n = 20$. The model is $Y_i = \beta_0 + \beta_1 x_i + e_i$, $i = 1, \ldots, 20$, where $\beta_0 = 5$, $\beta_1 = 3$, and $\sigma = 0.2$. Take the $x_i$ to be equally spaced from $x_1 = 0$ to $x_{20} = 1$. The errors satisfy $e_1, \ldots, e_{20} \stackrel{iid}{\sim} N(0, 0.2^2)$.
26 Here's R code to sample the $m = 200$ $\hat{\beta}$'s, each obtained from a sample of size $n = 20$:

sigma=0.2
m=200                  # number of betahat's sampled
n=20                   # sample size going into each experiment
y=rep(0,n)
beta0hat=rep(0,m)
beta1hat=rep(0,m)
x=seq(0,1,length=n)    # predictor values evenly spaced over [0,1]
beta=c(5,3)            # true (but usually unknown) beta vector
mean=beta[1]+beta[2]*x # true mean function on grid of x values
for(i in 1:m){
  y=mean+rnorm(n,0,sigma)
  fit=lm(y~x)
  beta0hat[i]=fit$coefficients[1]
  beta1hat[i]=fit$coefficients[2]
}
Xdesign=matrix(c(rep(1,n),x),ncol=2)
m=beta                                  # exact mean vector
s=solve(t(Xdesign)%*%Xdesign)*sigma^2   # exact covariance
x1=seq(4.7,5.3,length=200)  # grid of representative beta0hat values
x2=seq(2.5,3.4,length=200)  # grid of representative beta1hat values
27 f=function(x,y){
  r=s[1,2]/sqrt(s[1,1]*s[2,2])
  term1=(x-m[1])/sqrt(s[1,1]); term2=(y-m[2])/sqrt(s[2,2]); term3=-2*r*term1*term2
  exp(-0.5*(term1^2+term2^2+term3)/(1-r^2))/(2*pi*sqrt(s[1,1]*s[2,2]*(1-r^2)))
}  # exact bivariate normal density according to theory
g0=rep(beta[1],200); g1=rep(beta[2],200)
z=outer(x1,x2,f)  # compute the joint pdf over the grid
contour(x1,x2,z,nlevels=5,xlab="beta0hat",ylab="beta1hat")  # make contour plot
points(beta0hat,beta1hat)  # superimpose betahats from m experiments of sample size n
lines(x1,g1,lty=2); lines(g0,x2,lty=2)  # dotted lines crossing at true beta

Exactly, the theory tells us

$$\begin{pmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \end{pmatrix} \sim N_2\left( \begin{pmatrix} 5 \\ 3 \end{pmatrix}, (X'X)^{-1}\sigma^2 \right)$$

Let's see how a plot of the 200 $\hat{\beta}$'s looks superimposed on the exact sampling distribution.
28 Figure 5: Theoretical distribution (pdf) with 200 sampled MLE's (axes: beta0hat, beta1hat)
More informationMathematical statistics
October 4 th, 2018 Lecture 12: Information Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation Chapter
More information3. Probability and Statistics
FE661 - Statistical Methods for Financial Engineering 3. Probability and Statistics Jitkomut Songsiri definitions, probability measures conditional expectations correlation and covariance some important
More informationBivariate Distributions. Discrete Bivariate Distribution Example
Spring 7 Geog C: Phaedon C. Kyriakidis Bivariate Distributions Definition: class of multivariate probability distributions describing joint variation of outcomes of two random variables (discrete or continuous),
More informationMathematical statistics
October 1 st, 2018 Lecture 11: Sufficient statistic Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation
More informationHT Introduction. P(X i = x i ) = e λ λ x i
MODS STATISTICS Introduction. HT 2012 Simon Myers, Department of Statistics (and The Wellcome Trust Centre for Human Genetics) myers@stats.ox.ac.uk We will be concerned with the mathematical framework
More informationContinuous Random Variables
1 / 24 Continuous Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay February 27, 2013 2 / 24 Continuous Random Variables
More informationSOLUTION FOR HOMEWORK 12, STAT 4351
SOLUTION FOR HOMEWORK 2, STAT 435 Welcome to your 2th homework. It looks like this is the last one! As usual, try to find mistakes and get extra points! Now let us look at your problems.. Problem 7.22.
More informationSolutions to Homework Set #6 (Prepared by Lele Wang)
Solutions to Homework Set #6 (Prepared by Lele Wang) Gaussian random vector Given a Gaussian random vector X N (µ, Σ), where µ ( 5 ) T and 0 Σ 4 0 0 0 9 (a) Find the pdfs of i X, ii X + X 3, iii X + X
More informationStatistics STAT:5100 (22S:193), Fall Sample Final Exam B
Statistics STAT:5 (22S:93), Fall 25 Sample Final Exam B Please write your answers in the exam books provided.. Let X, Y, and Y 2 be independent random variables with X N(µ X, σ 2 X ) and Y i N(µ Y, σ 2
More informationSimple Linear Regression
Simple Linear Regression In simple linear regression we are concerned about the relationship between two variables, X and Y. There are two components to such a relationship. 1. The strength of the relationship.
More informationStat 5101 Notes: Brand Name Distributions
Stat 5101 Notes: Brand Name Distributions Charles J. Geyer September 5, 2012 Contents 1 Discrete Uniform Distribution 2 2 General Discrete Uniform Distribution 2 3 Uniform Distribution 3 4 General Uniform
More informationCovariance and Correlation
Covariance and Correlation ST 370 The probability distribution of a random variable gives complete information about its behavior, but its mean and variance are useful summaries. Similarly, the joint probability
More informationPart IA Probability. Theorems. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015
Part IA Probability Theorems Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.
More informationUNIVERSITY OF MASSACHUSETTS. Department of Mathematics and Statistics. Basic Exam - Applied Statistics. Tuesday, January 17, 2017
UNIVERSITY OF MASSACHUSETTS Department of Mathematics and Statistics Basic Exam - Applied Statistics Tuesday, January 17, 2017 Work all problems 60 points are needed to pass at the Masters Level and 75
More informationMultivariate Distributions (Hogg Chapter Two)
Multivariate Distributions (Hogg Chapter Two) STAT 45-1: Mathematical Statistics I Fall Semester 15 Contents 1 Multivariate Distributions 1 11 Random Vectors 111 Two Discrete Random Variables 11 Two Continuous
More informationThe Multivariate Normal Distribution. In this case according to our theorem
The Multivariate Normal Distribution Defn: Z R 1 N(0, 1) iff f Z (z) = 1 2π e z2 /2. Defn: Z R p MV N p (0, I) if and only if Z = (Z 1,..., Z p ) T with the Z i independent and each Z i N(0, 1). In this
More information7. The Multivariate Normal Distribution
of 5 7/6/2009 5:56 AM Virtual Laboratories > 5. Special Distributions > 2 3 4 5 6 7 8 9 0 2 3 4 5 7. The Multivariate Normal Distribution The Bivariate Normal Distribution Definition Suppose that U and
More informationMaster s Written Examination - Solution
Master s Written Examination - Solution Spring 204 Problem Stat 40 Suppose X and X 2 have the joint pdf f X,X 2 (x, x 2 ) = 2e (x +x 2 ), 0 < x < x 2
More informationf X, Y (x, y)dx (x), where f(x,y) is the joint pdf of X and Y. (x) dx
INDEPENDENCE, COVARIANCE AND CORRELATION Independence: Intuitive idea of "Y is independent of X": The distribution of Y doesn't depend on the value of X. In terms of the conditional pdf's: "f(y x doesn't
More informationLecture 3. Probability - Part 2. Luigi Freda. ALCOR Lab DIAG University of Rome La Sapienza. October 19, 2016
Lecture 3 Probability - Part 2 Luigi Freda ALCOR Lab DIAG University of Rome La Sapienza October 19, 2016 Luigi Freda ( La Sapienza University) Lecture 3 October 19, 2016 1 / 46 Outline 1 Common Continuous
More information