BIOS 2083 Linear Models. Abdus S. Wahed.


Chapter 3: Random Vectors and Multivariate Normal Distributions

3.1 Random vectors

Definition 3.1.1. Random vector. Random vectors are vectors of random variables. For instance,
\[
X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix},
\]
where each element represents a random variable, is a random vector.

Definition 3.1.2. Mean and covariance matrix of a random vector. The mean (expectation) and covariance matrix of a random vector X are defined as follows:

\[
E[X] = \begin{pmatrix} E[X_1] \\ E[X_2] \\ \vdots \\ E[X_n] \end{pmatrix},
\qquad
\mathrm{cov}(X) = E\left[\{X - E(X)\}\{X - E(X)\}^T\right]
= \begin{pmatrix}
\sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1n} \\
\sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{n1} & \sigma_{n2} & \cdots & \sigma_n^2
\end{pmatrix},
\tag{3.1.1}
\]
where σ_j^2 = var(X_j) and σ_{jk} = cov(X_j, X_k) for j, k = 1, 2, ..., n.

Properties of Mean and Covariance.

1. If X and Y are random vectors and A, B, C and D are constant matrices, then
\[
E[AXB + CY + D] = A\,E[X]\,B + C\,E[Y] + D.
\tag{3.1.2}
\]
Proof. Left as an exercise.

2. For any random vector X, the covariance matrix cov(X) is symmetric. Proof. Left as an exercise.

3. If X_j, j = 1, 2, ..., n, are independent random variables, then cov(X) = diag(σ_j^2, j = 1, 2, ..., n). Proof. Left as an exercise.

4. cov(X + a) = cov(X) for a constant vector a. Proof. Left as an exercise.

5. cov(AX) = A cov(X) A^T for a constant matrix A. Proof. Left as an exercise.

6. cov(X) is positive semi-definite. Proof. Left as an exercise.

7. cov(X) = E[XX^T] − E[X]{E[X]}^T. Proof. Left as an exercise.
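As a quick numerical illustration of Properties 4 and 5 (a Python/NumPy sketch added here; the matrix A, the shift a, the covariance Σ, and the sample size are arbitrary choices), the code below estimates cov(AX + a) from simulated draws and compares it with A cov(X) A^T.

```python
import numpy as np

rng = np.random.default_rng(2083)

# Any symmetric positive definite matrix works as the true covariance of X.
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
A = np.array([[1.0, -1.0, 0.0],
              [0.5,  0.5, 1.0]])        # constant 2x3 matrix
a = np.array([3.0, -2.0])               # constant shift vector

# Simulate X ~ N_3(0, Sigma) and form Y = AX + a for each draw.
X = rng.multivariate_normal(mean=np.zeros(3), cov=Sigma, size=200_000)
Y = X @ A.T + a

print("Empirical cov(AX + a):\n", np.cov(Y, rowvar=False).round(3))
print("A Sigma A^T (Properties 4 and 5):\n", (A @ Sigma @ A.T).round(3))
```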

Definition 3.1.3. Correlation matrix. The correlation matrix of a random vector X is defined as the matrix of pairwise correlations between the elements of X. Explicitly,
\[
\mathrm{corr}(X) = \begin{pmatrix}
1 & \rho_{12} & \cdots & \rho_{1n} \\
\rho_{21} & 1 & \cdots & \rho_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{n1} & \rho_{n2} & \cdots & 1
\end{pmatrix},
\tag{3.1.3}
\]
where ρ_{jk} = corr(X_j, X_k) = σ_{jk}/(σ_j σ_k), j, k = 1, 2, ..., n.

Example 3.1.1. If only successive random variables in the random vector X are correlated and have the same correlation ρ, then the correlation matrix corr(X) is given by
\[
\mathrm{corr}(X) = \begin{pmatrix}
1 & \rho & 0 & \cdots & 0 \\
\rho & 1 & \rho & \cdots & 0 \\
0 & \rho & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{pmatrix}.
\tag{3.1.4}
\]
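For illustration of Definition 3.1.3 and Example 3.1.1 (a Python/NumPy sketch added here; the particular Σ, n, and ρ are arbitrary), the first function converts a covariance matrix into a correlation matrix and the second builds the "successive correlation" matrix (3.1.4).

```python
import numpy as np

def cov_to_corr(Sigma):
    """Correlation matrix from a covariance matrix: rho_jk = sigma_jk / (sigma_j * sigma_k)."""
    sd = np.sqrt(np.diag(Sigma))
    return Sigma / np.outer(sd, sd)

def successive_corr(n, rho):
    """Correlation matrix (3.1.4): only successive variables are correlated, all with the same rho."""
    R = np.eye(n)
    idx = np.arange(n - 1)
    R[idx, idx + 1] = rho
    R[idx + 1, idx] = rho
    return R

Sigma = np.array([[4.0, 1.0, 0.0],
                  [1.0, 1.0, 0.5],
                  [0.0, 0.5, 2.0]])
print(cov_to_corr(Sigma).round(3))
print(successive_corr(5, 0.3))
```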

Example 3.1.2. If every pair of random variables in the random vector X have the same correlation ρ, then the correlation matrix corr(X) is given by
\[
\mathrm{corr}(X) = \begin{pmatrix}
1 & \rho & \rho & \cdots & \rho \\
\rho & 1 & \rho & \cdots & \rho \\
\rho & \rho & 1 & \cdots & \rho \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\rho & \rho & \rho & \cdots & 1
\end{pmatrix},
\tag{3.1.5}
\]
and the random variables are said to be exchangeable.

3.2 Multivariate Normal Distribution

Definition 3.2.1. Multivariate normal distribution. A random vector X = (X_1, X_2, ..., X_n)^T is said to follow a multivariate normal distribution with mean μ and covariance matrix Σ if X can be expressed as
\[
X = AZ + \mu,
\]
where Σ = AA^T and Z = (Z_1, Z_2, ..., Z_n)^T with Z_i, i = 1, 2, ..., n, iid N(0, 1) variables.

Definition 3.2.2. Multivariate normal distribution. A random vector X = (X_1, X_2, ..., X_n)^T is said to follow a multivariate normal distribution with mean μ and a positive definite covariance matrix Σ if X has the density
\[
f_X(x) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left[-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right].
\tag{3.2.1}
\]
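Both definitions are easy to illustrate numerically. In the sketch below (Python/NumPy/SciPy, added here; the particular μ and Σ are arbitrary), X = AZ + μ is generated with A a Cholesky factor of Σ as in Definition 3.2.1, and the density (3.2.1) is evaluated directly and checked against scipy.stats.multivariate_normal.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

mu = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.4]])

# Definition 3.2.1: X = A Z + mu with Sigma = A A^T and Z ~ N(0, I).
A = np.linalg.cholesky(Sigma)
Z = rng.standard_normal((100_000, 2))
X = Z @ A.T + mu
print("sample mean:", X.mean(axis=0).round(3))              # approximately mu
print("sample cov:\n", np.cov(X, rowvar=False).round(3))    # approximately Sigma

# Definition 3.2.2: evaluate the density (3.2.1) at a point x0.
def mvn_pdf(x, mu, Sigma):
    n = len(mu)
    d = x - mu
    quad = d @ np.linalg.solve(Sigma, d)
    return np.exp(-0.5 * quad) / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma)))

x0 = np.array([0.5, -0.5])
print(mvn_pdf(x0, mu, Sigma))
print(multivariate_normal(mean=mu, cov=Sigma).pdf(x0))  # should agree
```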

[Figure: probability density surface of a bivariate normal distribution with mean (0, 0)^T, plotted over (x1, x2).]

Properties

1. The moment generating function of a N(μ, Σ) random variable X is given by
\[
M_X(t) = \exp\left\{\mu^T t + \frac{1}{2} t^T \Sigma t\right\}.
\tag{3.2.2}
\]

2. E(X) = μ and cov(X) = Σ.

3. If X_1, X_2, ..., X_n are i.i.d. N(0, 1) random variables, then their joint distribution can be characterized by X = (X_1, X_2, ..., X_n)^T ~ N(0, I_n).

4. X ~ N_n(μ, Σ) if and only if all non-zero linear combinations of the components of X are normally distributed.

Linear transformation

5. If X ~ N_n(μ, Σ) and A_{m×n} is a constant matrix of rank m, then Y = AX ~ N_m(Aμ, AΣA^T).

Proof. Use Definition 3.2.1 or Property 1 above.

Orthogonal linear transformation

6. If X ~ N_n(μ, I_n) and A_{n×n} is an orthogonal matrix, then Y = AX ~ N_n(Aμ, I_n).

Marginal and conditional distributions

Suppose X ~ N_n(μ, Σ) and X is partitioned as follows,
\[
X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix},
\]

where X_1 is of dimension p × 1 and X_2 is of dimension (n − p) × 1. Suppose the corresponding partitions for μ and Σ are given by
\[
\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}
\qquad \text{and} \qquad
\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix},
\]
respectively. Then,

7. Marginal distribution. X_1 is multivariate normal, N_p(μ_1, Σ_{11}).

Proof. Use the result from Property 5 above.

8. Conditional distribution. The distribution of X_1 | X_2 is p-variate normal, N_p(μ_{1·2}, Σ_{1·2}), where
\[
\mu_{1\cdot 2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(X_2 - \mu_2),
\qquad
\Sigma_{1\cdot 2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21},
\]
provided Σ is positive definite.

Proof. See Result 5.2.10, page 156 (Ravishanker and Dey).
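The marginal and conditional formulas in Properties 7 and 8 translate directly into code. The sketch below (Python/NumPy, added here; the particular μ, Σ, and conditioning values are arbitrary) computes μ_{1·2} and Σ_{1·2} for a partitioned normal vector.

```python
import numpy as np

def conditional_normal(mu, Sigma, p, x2):
    """Parameters of X1 | X2 = x2 when X = (X1, X2) ~ N_n(mu, Sigma) and X1 has length p.

    Implements Property 8:
      mu_{1.2}    = mu1 + Sigma12 Sigma22^{-1} (x2 - mu2)
      Sigma_{1.2} = Sigma11 - Sigma12 Sigma22^{-1} Sigma21
    """
    mu1, mu2 = mu[:p], mu[p:]
    S11, S12 = Sigma[:p, :p], Sigma[:p, p:]
    S21, S22 = Sigma[p:, :p], Sigma[p:, p:]
    w = np.linalg.solve(S22, x2 - mu2)            # Sigma22^{-1} (x2 - mu2)
    mu_cond = mu1 + S12 @ w
    Sigma_cond = S11 - S12 @ np.linalg.solve(S22, S21)
    return mu_cond, Sigma_cond

mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

# Distribution of X_1 given (X_2, X_3) = (1.5, 1.0).
m, S = conditional_normal(mu, Sigma, p=1, x2=np.array([1.5, 1.0]))
print("conditional mean:", m, " conditional variance:", S)
```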

Uncorrelated implies independence for multivariate normal random variables

9. If X, μ, and Σ are partitioned as above, then X_1 and X_2 are independent if and only if Σ_{12} = 0 = Σ_{21}^T.

Proof. We will use the m.g.f. to prove this result. Two random vectors X_1 and X_2 are independent iff M_{(X_1, X_2)}(t_1, t_2) = M_{X_1}(t_1) M_{X_2}(t_2). Writing t = (t_1^T, t_2^T)^T and using (3.2.2),
\[
M_X(t) = \exp\left\{\mu_1^T t_1 + \mu_2^T t_2 + \tfrac{1}{2}\left(t_1^T \Sigma_{11} t_1 + 2\,t_1^T \Sigma_{12} t_2 + t_2^T \Sigma_{22} t_2\right)\right\},
\]
which factors into M_{X_1}(t_1) M_{X_2}(t_2) for all t_1 and t_2 if and only if t_1^T Σ_{12} t_2 = 0 for all t_1, t_2, that is, if and only if Σ_{12} = 0.

3.3 Non-central distributions

We will start with the standard chi-square distribution.

Definition 3.3.1. Chi-square distribution. If X_1, X_2, ..., X_n are n independent N(0, 1) variables, then the distribution of Σ_{i=1}^n X_i^2 is χ²_n (chi-square with n degrees of freedom).

The χ²_n distribution is a special case of the gamma distribution with shape parameter n/2 and rate parameter 1/2 (equivalently, scale 2). That is, the density of χ²_n is given by
\[
f_{\chi^2_n}(x) = \frac{(1/2)^{n/2}}{\Gamma(n/2)}\, e^{-x/2}\, x^{n/2 - 1}, \qquad x \ge 0;\ n = 1, 2, \dots.
\tag{3.3.1}
\]

Example 3.3.1. The distribution of (n − 1)S²/σ², where S² = Σ_{i=1}^n (X_i − X̄)²/(n − 1) is the sample variance of a random sample of size n from a normal distribution with mean μ and variance σ², is χ²_{n−1}.
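Example 3.3.1 is easy to confirm by simulation. The sketch below (Python/NumPy/SciPy, added here; the sample size, normal parameters, and number of replications are arbitrary) compares moments of simulated values of (n − 1)S²/σ² with those of χ²_{n−1}, and runs a Kolmogorov-Smirnov check.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n, mu, sigma = 8, 5.0, 2.0
reps = 50_000

# Draw `reps` samples of size n and compute (n - 1) S^2 / sigma^2 for each.
X = rng.normal(mu, sigma, size=(reps, n))
S2 = X.var(axis=1, ddof=1)
T = (n - 1) * S2 / sigma**2

print("mean (should be about n - 1 = 7):", T.mean().round(3))
print("var  (should be about 2(n - 1) = 14):", T.var().round(3))
print("KS test against chi^2_{n-1}:", stats.kstest(T, "chi2", args=(n - 1,)))
```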

The moment generating function of a chi-square distribution with n d.f. is given by
\[
M_{\chi^2_n}(t) = (1 - 2t)^{-n/2}, \qquad t < 1/2.
\tag{3.3.2}
\]
The m.g.f. (3.3.2) shows that the sum of two independent chi-square random variables is also a chi-square. Therefore, differences of sequential sums of squares of independent normal random variables will be distributed independently as chi-squares.

Theorem 3.3.2. If X ~ N_n(μ, Σ) and Σ is positive definite, then
\[
(X - \mu)^T \Sigma^{-1} (X - \mu) \sim \chi^2_n.
\tag{3.3.3}
\]
Proof. Since Σ is positive definite, there exists a non-singular A_{n×n} such that Σ = AA^T (Cholesky decomposition). Then, by the definition of the multivariate normal distribution, X = AZ + μ, where Z is a random sample from a N(0, 1) distribution. Now, Z = A^{-1}(X − μ), so
\[
(X - \mu)^T \Sigma^{-1} (X - \mu) = (X - \mu)^T (A^T)^{-1} A^{-1} (X - \mu) = Z^T Z \sim \chi^2_n.
\]
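Theorem 3.3.2 can also be checked numerically; the sketch below (Python, added here; μ, Σ, and the number of draws are arbitrary choices) simulates the quadratic form (X − μ)^T Σ^{-1}(X − μ) and compares it with a χ²_n distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

mu = np.array([1.0, 0.0, -2.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
n = len(mu)

X = rng.multivariate_normal(mu, Sigma, size=100_000)
D = X - mu
# Quadratic form (x - mu)^T Sigma^{-1} (x - mu) for each simulated row.
Q = np.einsum("ij,ij->i", D, np.linalg.solve(Sigma, D.T).T)

print("mean (should be about n = 3):", Q.mean().round(3))
print("var  (should be about 2n = 6):", Q.var().round(3))
print(stats.kstest(Q, "chi2", args=(n,)))
```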

Definition 3.3.2. Non-central chi-square distribution. Suppose the X_i's are as in Definition 3.3.1 except that each X_i has mean μ_i, i = 1, 2, ..., n. Equivalently, suppose X = (X_1, ..., X_n)^T is a random vector distributed as N_n(μ, I_n), where μ = (μ_1, ..., μ_n)^T. Then the distribution of Σ_{i=1}^n X_i^2 = X^T X is referred to as non-central chi-square with d.f. n and non-centrality parameter λ = Σ_{i=1}^n μ_i^2 / 2 = μ^T μ / 2.

The density of such a non-central chi-square variable χ²_n(λ) can be written as an infinite Poisson mixture of central chi-square densities as follows:
\[
f_{\chi^2_n(\lambda)}(x) = \sum_{j=0}^{\infty} \frac{e^{-\lambda} \lambda^j}{j!}\, \frac{(1/2)^{(n+2j)/2}}{\Gamma((n+2j)/2)}\, e^{-x/2}\, x^{(n+2j)/2 - 1}.
\tag{3.3.4}
\]

[Figure 3.1: Non-central chi-square densities with df 5 and non-centrality parameter λ = 0, 2, 4, 6, 8, 10.]
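The Poisson-mixture representation (3.3.4) can be verified against SciPy's non-central chi-square implementation; the sketch below (Python, added here) truncates the infinite sum. Note that SciPy parameterizes the non-centrality as nc = μ^Tμ, which is 2λ in the notation of these notes.

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

def ncx2_pdf_series(x, n, lam, terms=200):
    """Density (3.3.4): Poisson(lam) mixture of central chi-square(n + 2j) densities."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for j in range(terms):
        log_w = -lam + j * np.log(lam) - gammaln(j + 1)   # log Poisson weight
        out += np.exp(log_w) * stats.chi2.pdf(x, n + 2 * j)
    return out

n, lam = 5, 3.0
x = np.linspace(0.5, 25, 6)
print(ncx2_pdf_series(x, n, lam).round(5))
# SciPy's non-centrality is nc = mu^T mu = 2 * lambda in the notation of these notes.
print(stats.ncx2.pdf(x, df=n, nc=2 * lam).round(5))
```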

Properties

1. The moment generating function of a non-central chi-square variable χ²_n(λ) is given by
\[
M_{\chi^2_n(\lambda)}(t) = (1 - 2t)^{-n/2} \exp\left\{\frac{2\lambda t}{1 - 2t}\right\}, \qquad t < 1/2.
\tag{3.3.5}
\]

2. E[χ²_n(λ)] = n + 2λ.

3. Var[χ²_n(λ)] = 2(n + 4λ).

4. χ²_n(0) ≡ χ²_n.

5. For a given constant c,
(a) P(χ²_n(λ) > c) is an increasing function of λ.
(b) P(χ²_n(λ) > c) ≥ P(χ²_n > c).

Theorem 3.3.3. If X ~ N_n(μ, Σ) and Σ is positive definite, then
\[
X^T \Sigma^{-1} X \sim \chi^2_n(\lambda = \mu^T \Sigma^{-1} \mu / 2).
\tag{3.3.6}
\]
Proof. Since Σ is positive definite, there exists a non-singular matrix A_{n×n} such that Σ = AA^T (Cholesky decomposition). Define Y = A^{-1}X. Then Y ~ N_n(A^{-1}μ, I_n) and
\[
X^T \Sigma^{-1} X = X^T (A^T)^{-1} A^{-1} X = Y^T Y \sim \chi^2_n(\lambda),
\]
where λ = (A^{-1}μ)^T(A^{-1}μ)/2 = μ^T Σ^{-1} μ / 2.

Definition 3.3.3. Non-central F-distribution. If U_1 ~ χ²_{n_1}(λ) and U_2 ~ χ²_{n_2}, and U_1 and U_2 are independent, then the distribution of
\[
F = \frac{U_1/n_1}{U_2/n_2}
\tag{3.3.7}
\]
is referred to as the non-central F-distribution with df n_1 and n_2, and non-centrality parameter λ.

[Figure 3.2: Non-central F-densities with df 5 and 15 and non-centrality parameter λ = 0, 2, 4, 6, 8, 10.]

[Figure 3.3: Non-central t-densities with df 5 and non-centrality parameter λ = 0, 2, 4, 6, 8, 10.]

Definition 3.3.4. Non-central t-distribution. If U_1 ~ N(λ, 1) and U_2 ~ χ²_n, and U_1 and U_2 are independent, then the distribution of
\[
T = \frac{U_1}{\sqrt{U_2/n}}
\tag{3.3.8}
\]
is referred to as the non-central t-distribution with df n and non-centrality parameter λ.
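Definition 3.3.4 suggests a direct way to simulate from the non-central t distribution. The sketch below (Python, added here; the df, non-centrality, and number of draws are arbitrary) draws U_1 ~ N(λ, 1) and U_2 ~ χ²_n independently and compares U_1/√(U_2/n) with scipy.stats.nct.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n, lam = 5, 2.0          # degrees of freedom and non-centrality parameter
reps = 200_000

U1 = rng.normal(lam, 1.0, size=reps)       # U1 ~ N(lambda, 1)
U2 = rng.chisquare(n, size=reps)           # U2 ~ chi^2_n, independent of U1
T = U1 / np.sqrt(U2 / n)                   # Definition 3.3.4

print("simulated mean:", T.mean().round(3))
print("nct mean from SciPy:", stats.nct.mean(df=n, nc=lam).round(3))
print(stats.kstest(T, "nct", args=(n, lam)))
```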

3.4 Distribution of quadratic forms

Caution: We assume that the matrix of a quadratic form is symmetric.

Lemma 3.4.1. If A_{n×n} is symmetric and idempotent with rank r, then r of its eigenvalues are exactly equal to 1 and n − r are equal to zero.

Proof. Use the spectral decomposition theorem. (See Result 2.3.10 on page 51 of Ravishanker and Dey.)

Theorem 3.4.2. Let X ~ N_n(0, I_n). The quadratic form X^T A X ~ χ²_r iff A is idempotent with rank(A) = r.

Proof. Let A be a (symmetric) idempotent matrix of rank r. Then, by the spectral decomposition theorem, there exists an orthogonal matrix P such that
\[
P^T A P = \Lambda = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}.
\tag{3.4.1}
\]
Define
\[
Y = P^T X = \begin{pmatrix} P_1^T X \\ P_2^T X \end{pmatrix} = \begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix},
\]
so that P_1^T P_1 = I_r.

Thus, X = PY and Y_1 ~ N_r(0, I_r). Now,
\[
X^T A X = (PY)^T A (PY)
= Y^T \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} Y
= Y_1^T Y_1 \sim \chi^2_r.
\tag{3.4.2}
\]
Now suppose X^T A X ~ χ²_r. This means that the moment generating function of X^T A X is given by
\[
M_{X^T A X}(t) = (1 - 2t)^{-r/2}.
\tag{3.4.3}
\]
But one can calculate the m.g.f. of X^T A X directly using the multivariate normal density as
\[
\begin{aligned}
M_{X^T A X}(t) &= E\left[\exp\{(X^T A X)t\}\right]
= \int \exp\{(x^T A x)t\}\, f_X(x)\, dx \\
&= \int \exp\{(x^T A x)t\}\, \frac{1}{(2\pi)^{n/2}} \exp\left[-\frac{1}{2} x^T x\right] dx
= \int \frac{1}{(2\pi)^{n/2}} \exp\left[-\frac{1}{2} x^T (I_n - 2tA) x\right] dx \\
&= |I_n - 2tA|^{-1/2}
= \prod_{i=1}^{n} (1 - 2t\lambda_i)^{-1/2}.
\end{aligned}
\tag{3.4.4}
\]

Equate (3.4.3) and (3.4.4) to obtain the desired result.

Theorem 3.4.3. Let X ~ N_n(μ, Σ), where Σ is positive definite. The quadratic form X^T A X ~ χ²_r(λ), where λ = μ^T A μ / 2, iff AΣ is idempotent with rank(AΣ) = r.

Proof. Omitted.

Theorem 3.4.4. Independence of two quadratic forms. Let X ~ N_n(μ, Σ), where Σ is positive definite. The two quadratic forms X^T A X and X^T B X are independent if and only if
\[
A\Sigma B = 0 = B\Sigma A.
\tag{3.4.5}
\]
Proof. Omitted.

Remark 3.4.1. Note that in the above theorem, the two quadratic forms need not have a chi-square distribution. When they do, the theorem is referred to as Craig's theorem.

Theorem 3.4.5. Independence of linear and quadratic forms. Let X ~ N_n(μ, Σ), where Σ is positive definite. The quadratic form X^T A X and the linear form BX are independently distributed if and only if
\[
B\Sigma A = 0.
\tag{3.4.6}
\]
Proof. Omitted.

Remark 3.4.2. Note that in the above theorem, the quadratic form need not have a chi-square distribution.
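The conditions in Theorems 3.4.2 and 3.4.5 are simple matrix checks. The sketch below (Python, added here) uses the usual centering matrix A = I − J/n and the averaging row B = 1^T/n as illustrative choices with Σ = I_n: it verifies that A is idempotent of rank n − 1, simulates X^T A X for X ~ N_n(0, I_n), and checks the independence condition BΣA = BA = 0.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n = 6
J = np.ones((n, n)) / n          # averaging matrix (each entry 1/n)
A = np.eye(n) - J                # centering matrix: symmetric, idempotent, rank n - 1
B = np.ones((1, n)) / n          # BX = sample mean

print("A idempotent:", np.allclose(A @ A, A))
print("rank(A):", np.linalg.matrix_rank(A))            # n - 1
print("B Sigma A = 0 (Sigma = I):", np.allclose(B @ A, 0))

# Theorem 3.4.2: X^T A X ~ chi^2_{n-1} when X ~ N_n(0, I_n).
X = rng.standard_normal((100_000, n))
Q = np.einsum("ij,jk,ik->i", X, A, X)
print(stats.kstest(Q, "chi2", args=(n - 1,)))
```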

Example 3.4.6. Independence of sample mean and sample variance. Suppose X ~ N_n(0, I_n). Then X̄ = Σ_{i=1}^n X_i / n = 1^T X / n and S_X^2 = Σ_{i=1}^n (X_i − X̄)² / (n − 1) are independently distributed.

Proof. Write X̄ = BX with B = 1^T/n and (n − 1)S_X^2 = X^T A X with A = I_n − 11^T/n. Since BΣA = BA = (1^T − 1^T)/n = 0, the result follows from Theorem 3.4.5.

Theorem 3.4.7. Let X ~ N_n(μ, Σ). Then
\[
E[X^T A X] = \mu^T A \mu + \mathrm{trace}(A\Sigma).
\tag{3.4.7}
\]
Remark 3.4.3. Note that in the above theorem, the quadratic form need not have a chi-square distribution.

Proof. Since X^T A X = trace(A X X^T), taking expectations gives E[X^T A X] = trace(A E[X X^T]) = trace(A(Σ + μμ^T)) = trace(AΣ) + μ^T A μ.
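Theorem 3.4.7 can be checked quickly by Monte Carlo; the sketch below (Python, added here; μ, Σ, and the symmetric matrix A are arbitrary choices) compares the simulated mean of X^T A X with μ^T A μ + trace(AΣ).

```python
import numpy as np

rng = np.random.default_rng(11)

mu = np.array([1.0, -1.0, 2.0])
Sigma = np.array([[1.5, 0.2, 0.0],
                  [0.2, 1.0, 0.4],
                  [0.0, 0.4, 2.0]])
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])          # symmetric matrix of the quadratic form

X = rng.multivariate_normal(mu, Sigma, size=200_000)
Q = np.einsum("ij,jk,ik->i", X, A, X)    # x_i^T A x_i for each simulated row

print("Monte Carlo E[X^T A X]:", Q.mean().round(3))
print("mu^T A mu + trace(A Sigma):", (mu @ A @ mu + np.trace(A @ Sigma)).round(3))
```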

Theorem 3.4.8. Fisher-Cochran theorem. Suppose X ~ N_n(μ, I_n). Let Q_j = X^T A_j X, j = 1, 2, ..., k, be k quadratic forms with rank(A_j) = r_j such that X^T X = Σ_{j=1}^k Q_j. Then the Q_j's are independently distributed as χ²_{r_j}(λ_j), where λ_j = μ^T A_j μ / 2, if and only if Σ_{j=1}^k r_j = n.

Proof. Omitted.

Theorem 3.4.9. Generalization of the Fisher-Cochran theorem. Suppose X ~ N_n(μ, Σ). Let A_j, j = 1, 2, ..., k, be k n×n symmetric matrices with rank(A_j) = r_j such that A = Σ_{j=1}^k A_j with rank(A) = r. Then

1. the X^T A_j X's are independently distributed as χ²_{r_j}(λ_j), where λ_j = μ^T A_j μ / 2, and

2. X^T A X ~ χ²_r(λ), where λ = Σ_{j=1}^k λ_j,

if and only if any one of the following conditions is satisfied.

C1. A_jΣ is idempotent for all j and A_jΣA_k = 0 for all j < k.
C2. A_jΣ is idempotent for all j and AΣ is idempotent.
C3. A_jΣA_k = 0 for all j < k and AΣ is idempotent.
C4. r = Σ_{j=1}^k r_j and AΣ is idempotent.
C5. The matrices AΣ and A_jΣ, j = 1, 2, ..., k − 1, are idempotent and A_kΣ is non-negative definite.

3.5 Problems

1. Consider the matrix
\[
A = \begin{pmatrix}
8 & 4 & 4 & 2 & 2 & 2 & 2 \\
4 & 4 & 0 & 2 & 2 & 0 & 0 \\
4 & 0 & 4 & 0 & 0 & 2 & 2 \\
2 & 2 & 0 & 2 & 0 & 0 & 0 \\
2 & 2 & 0 & 0 & 2 & 0 & 0 \\
2 & 0 & 2 & 0 & 0 & 2 & 0 \\
2 & 0 & 2 & 0 & 0 & 0 & 2
\end{pmatrix}.
\]

(a) Find the rank of this matrix.
(b) Find a basis for the null space of A.
(c) Find a basis for the column space of A.

2. Let X_i, i = 1, 2, 3, be independent standard normal random variables. Show that the variance-covariance matrix of the 3-dimensional vector Y, defined as
\[
Y = \begin{pmatrix} 5X_1 \\ 1.6X_1 - 1.2X_2 \\ 2X_1 - X_2 \end{pmatrix},
\]
is not positive definite.

3. Let
\[
X = \begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix}
\sim N_3\left(
\begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix},
\begin{pmatrix} 1 & \rho & 0 \\ \rho & 1 & \rho \\ 0 & \rho & 1 \end{pmatrix}
\right).
\]
(a) Find the marginal distribution of X_2.
(b) What is the conditional distribution of X_2 given X_1 = x_1 and X_3 = x_3? Under what condition does this distribution coincide with the marginal distribution of X_2?

4. If X ~ N_n(μ, Σ), then show that (X − μ)^T Σ^{-1} (X − μ) ~ χ²_n.

5. Suppose Y = (Y_1, Y_2, Y_3)^T is distributed as N_3(0, σ² I_3).

(a) Consider the quadratic form
\[
Q = \frac{(Y_1 - Y_2)^2 + (Y_2 - Y_3)^2 + (Y_3 - Y_1)^2}{3}.
\tag{3.5.1}
\]

Write Q as Y^T A Y, where A is symmetric. Is A idempotent? What is the distribution of Q/σ²? Find E(Q).

(b) What is the distribution of L = Y_1 + Y_2 + Y_3? Find E(L) and var(L).

(c) Are Q and L independent? Find E(Q/L²).

6. Write each of the following quadratic forms in X^T A X form:

(a) (1/6)X_1² + (2/3)X_2² + (1/6)X_3² − (2/3)X_1X_2 + (1/3)X_1X_3 − (2/3)X_2X_3

(b) n X̄², where X̄ = (X_1 + X_2 + ... + X_n)/n.

(c) Σ_{i=1}^n X_i²

(d) Σ_{i=1}^n (X_i − X̄)²

(e) Σ_{i=1}^2 Σ_{j=1}^2 (X_{ij} − X̄_{i·})², where X̄_{i·} = (X_{i1} + X_{i2})/2.

(f) 2 Σ_{i=1}^2 (X̄_{i·} − X̄_{··})², where X̄_{··} = (X_{11} + X_{12} + X_{21} + X_{22})/4.

(g) 2(X̄_{1·} − X̄_{··})² + 3(X̄_{2·} − X̄_{··})², where X̄_{1·} = (X_{11} + X_{12})/2, X̄_{2·} = (X_{21} + X_{22} + X_{23})/3, and X̄_{··} = (2X̄_{1·} + 3X̄_{2·})/5.

In each case, determine if A is idempotent. If A is idempotent, find rank(A).

7. Let X ~ N_2(μ, Σ), where
\[
\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}
\qquad \text{and} \qquad
\Sigma = \begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix}.
\]
Show that Q_1 = (X_1 − X_2)² and Q_2 = (X_1 + X_2)² are independently distributed. Find the distributions of Q_1, Q_2, and Q_2/(3Q_1).

8. Assume that Y ~ N_3(0, I_3). Define Q_1 = Y^T A Y and Q_2 = Y^T B Y, where
\[
A = \begin{pmatrix} 1 & -1 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\qquad \text{and} \qquad
B = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\tag{3.5.2}
\]

Are Q_1 and Q_2 independent? Do Q_1 and Q_2 follow χ² distributions?

9. Let Y ~ N_3(0, I_3). Let U_1 = Y^T A_1 Y, U_2 = Y^T A_2 Y, and V = BY, where
\[
A_1 = \begin{pmatrix} 1/2 & -1/2 & 0 \\ -1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{pmatrix},
\qquad
A_2 = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 0 \end{pmatrix},
\qquad
B = \begin{pmatrix} 1 & 1 & 0 \end{pmatrix}.
\]
(a) Are U_1 and U_2 independent?
(b) Are U_1 and V independent?
(c) Are U_2 and V independent?
(d) Find the distribution of V.
(e) Find the distribution of U_2/U_1. (Include specific values for any parameters of the distribution.)

10. Suppose X = (X_1, X_2, X_3)^T ~ N_3(μ, σ²V), where μ = (1, 1, 0)^T and
\[
V = \begin{pmatrix} 1 & 0.5 & 0 \\ 0.5 & 1 & 0.5 \\ 0 & 0.5 & 1 \end{pmatrix}.
\]
(a) What is the joint distribution of (X_1, X_3)?
(b) What is the conditional distribution of (X_1, X_3) | X_2?
(c) For what value or values of a does L = (aX_1² + (1/2)X_3² + 2aX_1X_3)/σ² follow a chi-square distribution?
(d) Find the value of b for which L in (c) and M = X_1 + bX_3 are independently distributed.

11. Suppose X ~ N_3(μ, Σ), where
\[
\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix}
\qquad \text{and} \qquad
\Sigma = \begin{pmatrix} \sigma_1^2 & 0 & 0 \\ 0 & \sigma_2^2 & 0 \\ 0 & 0 & \sigma_3^2 \end{pmatrix}.
\]
Find the distribution of Q = Σ_{i=1}^3 X_i²/σ_i². Express the parameters of its distribution in terms of μ_i and σ_i², i = 1, 2, 3. What is the variance of Q?

12. Suppose X ~ N(0, 1) and Y = UX, where U follows a uniform distribution on the discrete space {−1, 1} independently of X.
(a) Find E(Y) and cov(X, Y).
(b) Show that Y and X are not independent.

13. Suppose X ~ N_4(μ, I_4), where
\[
X = \begin{pmatrix} X_{11} \\ X_{12} \\ X_{21} \\ X_{22} \end{pmatrix},
\qquad
\mu = \begin{pmatrix} \alpha + a_1 \\ \alpha + a_1 \\ \alpha + a_2 \\ \alpha + a_2 \end{pmatrix}.
\]
(a) Find the distribution of E = Σ_{i=1}^2 Σ_{j=1}^2 (X_{ij} − X̄_{i·})², where X̄_{i·} = (X_{i1} + X_{i2})/2.
(b) Find the distribution of Q = 2 Σ_{i=1}^2 (X̄_{i·} − X̄_{··})², where X̄_{··} = (X_{11} + X_{12} + X_{21} + X_{22})/4.
(c) Use the Fisher-Cochran theorem to prove that E and Q are independently distributed.
(d) What is the distribution of Q/E?