BIOS 2083 Linear Models (Abdus S. Wahed)


Chapter 3
Random Vectors and Multivariate Normal Distributions

3.1 Random vectors

Definition 3.1.1. Random vector. Random vectors are vectors of random variables. For instance,

    X = (X_1, X_2, ..., X_n)^T,

where each element represents a random variable, is a random vector.

Definition. Mean and covariance matrix of a random vector. The mean (expectation) and covariance matrix of a random vector X are defined as follows:

    E[X] = (E[X_1], E[X_2], ..., E[X_n])^T,

and

    cov(X) = E[{X - E(X)}{X - E(X)}^T]
           = ( σ_1²   σ_12  ...  σ_1n
               σ_21   σ_2²  ...  σ_2n
               ...
               σ_n1   σ_n2  ...  σ_n² ),          (3.1.1)

where σ_j² = var(X_j) and σ_jk = cov(X_j, X_k) for j, k = 1, 2, ..., n.

Properties of Mean and Covariance.

1. If X and Y are random vectors and A, B, C, and D are constant matrices, then

       E[AXB + CY + D] = A E[X] B + C E[Y] + D.          (3.1.2)

   Proof. Left as an exercise.

2. For any random vector X, the covariance matrix cov(X) is symmetric.

   Proof. Left as an exercise.

3. If X_j, j = 1, 2, ..., n are independent random variables, then cov(X) = diag(σ_j², j = 1, 2, ..., n).

   Proof. Left as an exercise.

4. cov(X + a) = cov(X) for a constant vector a.

   Proof. Left as an exercise.

Properties of Mean and Covariance (cont.)

5. cov(AX) = A cov(X) A^T for a constant matrix A.

   Proof. Left as an exercise.

6. cov(X) is positive semi-definite.

   Proof. Left as an exercise.

7. cov(X) = E[XX^T] - E[X] {E[X]}^T.

   Proof. Left as an exercise.
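Properties 5 and 7 above are easy to check numerically. The sketch below does so by Monte Carlo; the matrices A and L and the sample size are illustrative choices, not values from the notes.

```python
# Monte Carlo check of covariance properties 5 and 7.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000

# Build a 3-dimensional X with dependent components: rows of X are draws of L Z,
# so cov(X) = L L^T.
L = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
Z = rng.standard_normal((n_samples, 3))
X = Z @ L.T

cov_X = np.cov(X, rowvar=False)

# Property 5: cov(AX) = A cov(X) A^T.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])
err5 = np.abs(np.cov(X @ A.T, rowvar=False) - A @ cov_X @ A.T).max()

# Property 7: cov(X) = E[X X^T] - E[X] E[X]^T, using sample moments.
second_moment = X.T @ X / n_samples
mu_hat = X.mean(axis=0)
err7 = np.abs(second_moment - np.outer(mu_hat, mu_hat) - cov_X).max()
```

Property 5 holds exactly for sample covariances (both sides use the same data), so err5 is at machine-precision level; property 7 holds up to the small denominator difference between the two sample estimators.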

Definition. Correlation matrix. The correlation matrix of a vector of random variables X is defined as the matrix of pairwise correlations between the elements of X. Explicitly,

    corr(X) = ( 1     ρ_12  ...  ρ_1n
                ρ_21  1     ...  ρ_2n
                ...
                ρ_n1  ρ_n2  ...  1 ),          (3.1.3)

where ρ_jk = corr(X_j, X_k) = σ_jk/(σ_j σ_k), j, k = 1, 2, ..., n.

Example. If only successive random variables in the random vector X are correlated and have the same correlation ρ, then the correlation matrix corr(X) is given by

    corr(X) = ( 1  ρ  0  ...  0
                ρ  1  ρ  ...  0
                0  ρ  1  ...  0
                ...
                0  0  0  ...  1 ).          (3.1.4)

Example. If every pair of random variables in the random vector X have the same correlation ρ, then the correlation matrix corr(X) is given by

    corr(X) = ( 1  ρ  ρ  ...  ρ
                ρ  1  ρ  ...  ρ
                ρ  ρ  1  ...  ρ
                ...
                ρ  ρ  ρ  ...  1 ),          (3.1.5)

and the random variables are said to be exchangeable.

3.2 Multivariate Normal Distribution

Definition. Multivariate normal distribution. A random vector X = (X_1, X_2, ..., X_n)^T is said to follow a multivariate normal distribution with mean μ and covariance matrix Σ if X can be expressed as

    X = AZ + μ,

where Σ = AA^T and Z = (Z_1, Z_2, ..., Z_n)^T with Z_i, i = 1, 2, ..., n iid N(0, 1) variables.

Definition. Multivariate normal distribution. A random vector X = (X_1, X_2, ..., X_n)^T is said to follow a multivariate normal distribution with mean μ and a positive definite covariance matrix Σ if X has the density

    f_X(x) = (2π)^{-n/2} |Σ|^{-1/2} exp[ -(1/2) (x - μ)^T Σ^{-1} (x - μ) ].          (3.2.1)
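The first definition above (X = AZ + μ with Σ = AA^T) is directly usable for simulation: factor Σ with a Cholesky decomposition and transform iid standard normals. The mean vector, Σ, and sample size below are illustrative values.

```python
# Definition-based sampling of N(mu, Sigma): X = A Z + mu with Sigma = A A^T.
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

A = np.linalg.cholesky(Sigma)            # lower triangular, Sigma = A A^T
Z = rng.standard_normal((100_000, 2))    # rows of iid N(0, 1) variables
X = Z @ A.T + mu                         # each row is one draw of X = A Z + mu

mean_err = np.abs(X.mean(axis=0) - mu).max()
cov_err = np.abs(np.cov(X, rowvar=False) - Sigma).max()
```

The sample mean and sample covariance of the generated rows should be close to μ and Σ, which is what the two error terms measure.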

[Figure: Bivariate normal density with mean (0, 0)^T and a covariance matrix not recovered in the transcription.]

Properties

1. The moment generating function of a N(μ, Σ) random variable X is given by

       M_X(t) = exp{ μ^T t + (1/2) t^T Σ t }.          (3.2.2)
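The mgf formula (3.2.2) can be spot-checked by comparing the empirical mean of exp(t^T X) over simulated draws against the closed form at a fixed t. All numerical values here are arbitrary illustrative choices.

```python
# Spot check of the mgf formula (3.2.2) at one value of t.
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([0.5, -1.0])
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
A = np.linalg.cholesky(Sigma)
X = rng.standard_normal((500_000, 2)) @ A.T + mu

t = np.array([0.3, -0.2])
empirical = np.exp(X @ t).mean()                      # sample E[exp(t^T X)]
theoretical = np.exp(mu @ t + 0.5 * t @ Sigma @ t)    # formula (3.2.2)
mgf_err = abs(empirical - theoretical)
```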

2. E(X) = μ and cov(X) = Σ.

3. If X_1, X_2, ..., X_n are i.i.d. N(0, 1) random variables, then their joint distribution can be characterized by X = (X_1, X_2, ..., X_n)^T ~ N(0, I_n).

4. X ~ N_n(μ, Σ) if and only if all non-zero linear combinations of the components of X are normally distributed.

Linear transformation

5. If X ~ N_n(μ, Σ) and A is an m × n constant matrix of rank m, then Y = AX ~ N_m(Aμ, AΣA^T).

   Proof. Use the definition or property 1 above.

Orthogonal linear transformation

6. If X ~ N_n(μ, I_n) and A is an n × n orthogonal matrix, then Y = AX ~ N_n(Aμ, I_n).

Marginal and conditional distributions

Suppose X is N_n(μ, Σ) and X is partitioned as X = (X_1^T, X_2^T)^T, where X_1 is of dimension p × 1 and X_2 is of dimension (n - p) × 1. Suppose the corresponding partitions for μ and Σ are given by

    μ = (μ_1^T, μ_2^T)^T,    Σ = ( Σ_11  Σ_12
                                   Σ_21  Σ_22 ),

respectively. Then:

7. Marginal distribution. X_1 is multivariate normal, N_p(μ_1, Σ_11).

   Proof. Use the result from property 5 above.

8. Conditional distribution. The distribution of X_1 | X_2 is p-variate normal, N_p(μ_{1·2}, Σ_{1·2}), where

       μ_{1·2} = μ_1 + Σ_12 Σ_22^{-1} (X_2 - μ_2),

   and

       Σ_{1·2} = Σ_11 - Σ_12 Σ_22^{-1} Σ_21,

   provided Σ is positive definite.

   Proof. See Result, page 156 (Ravishanker and Dey).

Uncorrelated implies independence for multivariate normal random variables

9. If X, μ, and Σ are partitioned as above, then X_1 and X_2 are independent if and only if Σ_12 = 0 = Σ_21^T.
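The conditional-distribution formulas in property 8 translate directly into a few lines of linear algebra. In this sketch X_1 is the first coordinate and X_2 the last two; μ, Σ, and the conditioning value x_2 are illustrative.

```python
# Conditional mean and covariance of X1 | X2 = x2 (property 8).
import numpy as np

mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

# Partition: X1 = first coordinate (p = 1), X2 = remaining two coordinates.
S11, S12 = Sigma[:1, :1], Sigma[:1, 1:]
S21, S22 = Sigma[1:, :1], Sigma[1:, 1:]

x2 = np.array([2.0, 0.0])
S22_inv = np.linalg.inv(S22)
mu_cond = mu[:1] + S12 @ S22_inv @ (x2 - mu[1:])        # mu_{1.2}
Sigma_cond = S11 - S12 @ S22_inv @ S21                  # Sigma_{1.2}
```

Note that Σ_{1·2} does not depend on the observed x_2, and conditioning can only shrink the variance: Σ_{1·2} ≤ Σ_11 elementwise on the diagonal, with strict inequality whenever Σ_12 ≠ 0.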

Proof. We will use m.g.f.s to prove this result. Two random vectors X_1 and X_2 are independent iff

    M_{(X_1, X_2)}(t_1, t_2) = M_{X_1}(t_1) M_{X_2}(t_2).

3.3 Non-central distributions

We will start with the standard chi-square distribution.

Definition. Chi-square distribution. If X_1, X_2, ..., X_n are n independent N(0, 1) variables, then the distribution of Σ_{i=1}^n X_i² is χ²_n (chi-square with n degrees of freedom).

The χ²_n distribution is a special case of the gamma distribution when the scale parameter is set to 1/2 and the shape parameter is set to n/2. That is, the density of χ²_n is given by

    f_{χ²_n}(x) = [(1/2)^{n/2} / Γ(n/2)] e^{-x/2} x^{n/2 - 1},  x ≥ 0;  n = 1, 2, ....          (3.3.1)

Example. The distribution of (n - 1)S²/σ², where S² = Σ_{i=1}^n (X_i - X̄)²/(n - 1) is the sample variance of a random sample of size n from a normal distribution with mean μ and variance σ², follows a χ²_{n-1}.
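The sample-variance example is easy to verify by simulation: generate many normal samples of size n, compute (n-1)S²/σ² for each, and compare the empirical mean and variance to those of χ²_{n-1} (mean n-1, variance 2(n-1)). The sample size, mean, and variance below are arbitrary.

```python
# Simulation of the example: (n-1) S^2 / sigma^2 should behave like chi^2_{n-1}.
import numpy as np

rng = np.random.default_rng(4)
n, mu, sigma2 = 8, 3.0, 4.0
data = mu + np.sqrt(sigma2) * rng.standard_normal((50_000, n))

# One chi^2_{n-1} draw per row of data.
stat = (n - 1) * data.var(axis=1, ddof=1) / sigma2
mean_err = abs(stat.mean() - (n - 1))        # chi^2_{n-1} mean is n-1
var_err = abs(stat.var() - 2 * (n - 1))      # chi^2_{n-1} variance is 2(n-1)
```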

The moment generating function of a chi-square distribution with n d.f. is given by

    M_{χ²_n}(t) = (1 - 2t)^{-n/2},  t < 1/2.          (3.3.2)

The m.g.f. (3.3.2) shows that the sum of two independent chi-square random variables is also a chi-square. Therefore, differences of sequential sums of squares of independent normal random variables will be distributed independently as chi-squares.

Theorem. If X ~ N_n(μ, Σ) and Σ is positive definite, then

    (X - μ)^T Σ^{-1} (X - μ) ~ χ²_n.          (3.3.3)

Proof. Since Σ is positive definite, there exists a non-singular A_{n×n} such that Σ = AA^T (Cholesky decomposition). Then, by the definition of the multivariate normal distribution, X = AZ + μ, where Z is a random sample from a N(0, 1) distribution. Now,

    (X - μ)^T Σ^{-1} (X - μ) = Z^T A^T (AA^T)^{-1} A Z = Z^T Z = Σ_{i=1}^n Z_i² ~ χ²_n.

Definition. Non-central chi-square distribution. Suppose the X_i's are as in the chi-square definition above except that each X_i has mean μ_i, i = 1, 2, ..., n. Equivalently, suppose X = (X_1, ..., X_n)^T is a random vector distributed as N_n(μ, I_n), where μ = (μ_1, ..., μ_n)^T. Then the distribution of Σ_{i=1}^n X_i² = X^T X is referred to as non-central chi-square with d.f. n and non-centrality parameter

    λ = Σ_{i=1}^n μ_i²/2 = (1/2) μ^T μ.

The density of such a non-central chi-square variable χ²_n(λ) can be written as an infinite Poisson mixture of central chi-square densities as follows:

    f_{χ²_n(λ)}(x) = Σ_{j=0}^∞ [e^{-λ} λ^j / j!] [(1/2)^{(n+2j)/2} / Γ((n+2j)/2)] e^{-x/2} x^{(n+2j)/2 - 1}.          (3.3.4)

[Figure 3.1: Non-central chi-square densities with df 5 and non-centrality parameter λ.]

Properties

1. The moment generating function of a non-central chi-square variable χ²_n(λ) is given by

       M_{χ²_n(λ)}(t) = (1 - 2t)^{-n/2} exp{ 2λt/(1 - 2t) },  t < 1/2.          (3.3.5)
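Theorem (3.3.3) above says that the Mahalanobis-type quadratic form (X - μ)^T Σ^{-1} (X - μ) is exactly χ²_n. A quick simulation check, with illustrative μ and Σ:

```python
# Simulation check of (3.3.3): the quadratic form has mean n and variance 2n.
import numpy as np

rng = np.random.default_rng(5)
mu = np.array([1.0, 2.0, -1.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
n = len(mu)

A = np.linalg.cholesky(Sigma)
X = rng.standard_normal((100_000, n)) @ A.T + mu    # draws of N_n(mu, Sigma)

Sinv = np.linalg.inv(Sigma)
D = X - mu
d = np.einsum('ij,jk,ik->i', D, Sinv, D)   # one quadratic form per row
mean_err = abs(d.mean() - n)               # chi^2_n has mean n
var_err = abs(d.var() - 2 * n)             # and variance 2n
```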

2. E[χ²_n(λ)] = n + 2λ.

3. Var[χ²_n(λ)] = 2(n + 4λ).

4. χ²_n(0) ≡ χ²_n.

5. For a given constant c,

   (a) P(χ²_n(λ) > c) is an increasing function of λ.

   (b) P(χ²_n(λ) > c) ≥ P(χ²_n > c).

Theorem. If X ~ N_n(μ, Σ) and Σ is positive definite, then

    X^T Σ^{-1} X ~ χ²_n(λ = μ^T Σ^{-1} μ/2).          (3.3.6)

Proof. Since Σ is positive definite, there exists a non-singular matrix A_{n×n} such that Σ = AA^T (Cholesky decomposition). Define Y = A^{-1} X. Then Y ~ N_n(A^{-1}μ, I_n), and

    X^T Σ^{-1} X = Y^T Y ~ χ²_n(λ),  λ = (A^{-1}μ)^T (A^{-1}μ)/2 = μ^T Σ^{-1} μ/2.

Definition. Non-central F-distribution. If U_1 ~ χ²_{n_1}(λ) and U_2 ~ χ²_{n_2}, and U_1 and U_2 are independent, then the distribution of

    F = (U_1/n_1) / (U_2/n_2)          (3.3.7)

is referred to as the non-central F-distribution with df n_1 and n_2, and non-centrality parameter λ.
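Properties 2 and 3 can be checked directly from the definition: sum the squares of independent N(μ_i, 1) draws and compare the empirical mean and variance with n + 2λ and 2(n + 4λ). The mean vector below is an arbitrary example.

```python
# Simulation check of the non-central chi-square mean and variance.
import numpy as np

rng = np.random.default_rng(6)
mu = np.array([1.0, 0.5, -1.0, 2.0, 0.0])
n = len(mu)
lam = 0.5 * mu @ mu                     # non-centrality parameter mu^T mu / 2

X = rng.standard_normal((200_000, n)) + mu
Q = (X ** 2).sum(axis=1)                # draws of chi^2_n(lam)

mean_err = abs(Q.mean() - (n + 2 * lam))
var_err = abs(Q.var() - 2 * (n + 4 * lam))
```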

[Figure 3.2: Non-central F-densities with df 5 and 15 and non-centrality parameter λ.]
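Definition (3.3.7) also yields a simulation recipe for the non-central F. Under the λ = μ^T μ/2 convention used here, E[U_1] = n_1 + 2λ and E[1/χ²_{n_2}] = 1/(n_2 - 2) for n_2 > 2, so by independence E[F] = n_2(n_1 + 2λ)/(n_1(n_2 - 2)). The degrees of freedom and λ below are arbitrary.

```python
# Simulating the non-central F of (3.3.7) and checking its mean.
import numpy as np

rng = np.random.default_rng(7)
n1, n2, lam, N = 4, 30, 2.0, 200_000

shift = np.sqrt(2 * lam / n1)           # equal means so that sum mu_i^2 / 2 = lam
U1 = ((rng.standard_normal((N, n1)) + shift) ** 2).sum(axis=1)  # chi^2_{n1}(lam)
U2 = (rng.standard_normal((N, n2)) ** 2).sum(axis=1)            # central chi^2_{n2}
F = (U1 / n1) / (U2 / n2)

expected_mean = n2 * (n1 + 2 * lam) / (n1 * (n2 - 2))
mean_err = abs(F.mean() - expected_mean)
```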

[Figure 3.3: Non-central t-densities with df 5 and non-centrality parameter λ.]

Definition. Non-central t-distribution. If U_1 ~ N(λ, 1) and U_2 ~ χ²_n, and U_1 and U_2 are independent, then the distribution of

    T = U_1 / √(U_2/n)          (3.3.8)

is referred to as the non-central t-distribution with df n and non-centrality parameter λ.

3.4 Distribution of quadratic forms

Caution: We assume that our matrix of quadratic form is symmetric.

Lemma. If A_{n×n} is symmetric and idempotent with rank r, then r of its eigenvalues are exactly equal to 1 and n - r are equal to zero.

Proof. Use the spectral decomposition theorem. (See Result on page 51 of Ravishanker and Dey.)

Theorem. Let X ~ N_n(0, I_n). The quadratic form X^T A X ~ χ²_r iff A is idempotent with rank(A) = r.

Proof. Let A be a (symmetric) idempotent matrix of rank r. Then, by the spectral decomposition theorem, there exists an orthogonal matrix P such that

    P^T A P = Λ = ( I_r  0
                    0    0 ).          (3.4.1)

Define Y = P^T X = (P_1^T X, P_2^T X)^T = (Y_1^T, Y_2^T)^T, so that P_1^T P_1 = I_r. Thus, X = PY and Y_1 ~ N_r(0, I_r). Now,

    X^T A X = (PY)^T A (PY)
            = Y^T ( I_r  0
                    0    0 ) Y
            = Y_1^T Y_1 ~ χ²_r.          (3.4.2)

Now suppose X^T A X ~ χ²_r. This means that the moment generating function of X^T A X is given by

    M_{X^T A X}(t) = (1 - 2t)^{-r/2}.          (3.4.3)

But one can calculate the m.g.f. of X^T A X directly using the multivariate normal density as

    M_{X^T A X}(t) = E[ exp{ (X^T A X) t } ]
                   = ∫ exp{ (x^T A x) t } f_X(x) dx
                   = ∫ exp{ (x^T A x) t } (2π)^{-n/2} exp[ -(1/2) x^T x ] dx
                   = ∫ (2π)^{-n/2} exp[ -(1/2) x^T (I_n - 2tA) x ] dx
                   = |I_n - 2tA|^{-1/2}
                   = Π_{i=1}^n (1 - 2tλ_i)^{-1/2}.          (3.4.4)
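The theorem can be illustrated with the centering matrix A = I - J/n, which is symmetric and idempotent of rank n - 1 (for an idempotent matrix the rank equals the trace); n and the simulation size here are arbitrary.

```python
# X^T A X ~ chi^2_r for A symmetric idempotent of rank r, X ~ N_n(0, I_n).
import numpy as np

rng = np.random.default_rng(8)
n = 6
A = np.eye(n) - np.ones((n, n)) / n     # centering matrix: symmetric, idempotent
r = int(round(np.trace(A)))             # rank of an idempotent matrix = trace

X = rng.standard_normal((100_000, n))
Q = np.einsum('ij,jk,ik->i', X, A, X)   # one quadratic form per row

idem_err = np.abs(A @ A - A).max()      # confirms A is idempotent
mean_err = abs(Q.mean() - r)            # chi^2_r has mean r
var_err = abs(Q.var() - 2 * r)          # and variance 2r
```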

Equate (3.4.3) and (3.4.4) to obtain the desired result.

Theorem. Let X ~ N_n(μ, Σ) where Σ is positive definite. The quadratic form X^T A X ~ χ²_r(λ), where λ = μ^T A μ/2, iff AΣ is idempotent with rank(AΣ) = r.

Proof. Omitted.

Theorem. Independence of two quadratic forms. Let X ~ N_n(μ, Σ) where Σ is positive definite. The two quadratic forms X^T A X and X^T B X are independent if and only if

    AΣB = 0 = BΣA.          (3.4.5)

Proof. Omitted.

Remark. Note that in the above theorem, the two quadratic forms need not have a chi-square distribution. When they do, the theorem is referred to as Craig's theorem.

Theorem. Independence of linear and quadratic forms. Let X ~ N_n(μ, Σ) where Σ is positive definite. The quadratic form X^T A X and the linear form BX are independently distributed if and only if

    BΣA = 0.          (3.4.6)

Proof. Omitted.
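Condition (3.4.5) is easy to verify for the classic pair behind the sample mean/variance example: with Σ = I_n, A = I - J/n gives the sum of squared deviations and B = J/n gives n X̄², and AΣB = 0 exactly. The simulation below only demonstrates that the two forms are uncorrelated; the theorem gives the stronger conclusion of full independence.

```python
# Checking condition (3.4.5) and the resulting lack of correlation.
import numpy as np

rng = np.random.default_rng(9)
n = 5
Sigma = np.eye(n)
A = np.eye(n) - np.ones((n, n)) / n     # X^T A X = sum of squared deviations
B = np.ones((n, n)) / n                 # X^T B X = n * Xbar^2

cond_err = np.abs(A @ Sigma @ B).max()  # condition (3.4.5): should be exactly 0

X = rng.standard_normal((100_000, n))
QA = np.einsum('ij,jk,ik->i', X, A, X)
QB = np.einsum('ij,jk,ik->i', X, B, X)
corr = np.corrcoef(QA, QB)[0, 1]        # empirical correlation, near 0
```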

Remark. Note that in the above theorem, the quadratic form need not have a chi-square distribution.

Example. Independence of sample mean and sample variance. Suppose X ~ N_n(0, I_n). Then X̄ = Σ_{i=1}^n X_i/n = 1^T X/n and S²_X = Σ_{i=1}^n (X_i - X̄)²/(n - 1) are independently distributed.

Proof.

Theorem. Let X ~ N_n(μ, Σ). Then

    E[X^T A X] = μ^T A μ + trace(AΣ).          (3.4.7)

Remark. Note that in the above theorem, the quadratic form need not have a chi-square distribution.

Proof.

Theorem. Fisher-Cochran theorem. Suppose X ~ N_n(μ, I_n). Let Q_j = X^T A_j X, j = 1, 2, ..., k, be k quadratic forms with rank(A_j) = r_j such that X^T X = Σ_{j=1}^k Q_j. Then the Q_j's are independently distributed as χ²_{r_j}(λ_j), where λ_j = μ^T A_j μ/2, if and only if Σ_{j=1}^k r_j = n.

Proof. Omitted.

Theorem. Generalization of the Fisher-Cochran theorem. Suppose X ~ N_n(μ, Σ). Let A_j, j = 1, 2, ..., k, be k n × n symmetric matrices with rank(A_j) = r_j such that A = Σ_{j=1}^k A_j with rank(A) = r. Then,

1. the X^T A_j X's are independently distributed as χ²_{r_j}(λ_j), where λ_j = μ^T A_j μ/2, and

2. X^T A X ~ χ²_r(λ), where λ = Σ_{j=1}^k λ_j,

if and only if any one of the following conditions is satisfied.

C1. A_j Σ is idempotent for all j, and A_j Σ A_k = 0 for all j < k.

C2. A_j Σ is idempotent for all j, and AΣ is idempotent.

C3. A_j Σ A_k = 0 for all j < k, and AΣ is idempotent.

C4. r = Σ_{j=1}^k r_j and AΣ is idempotent.

C5. The matrices AΣ and A_j Σ, j = 1, 2, ..., k - 1, are idempotent and A_k Σ is non-negative definite.

3.5 Problems

1. Consider the matrix

       A = [entries not recovered in the transcription].

   (a) Find the rank of this matrix.

   (b) Find a basis for the null space of A.

   (c) Find a basis for the column space of A.

2. Let X_i, i = 1, 2, 3 be independent standard normal random variables. Show that the variance-covariance matrix of the 3-dimensional vector Y, defined as

       Y = ( 5X_1
             1.6X_1 - 1.2X_2
             2X_1 - X_2 ),

   is not positive definite.

3. Let X = (X_1, X_2, X_3)^T ~ N_3(μ, Σ), where μ = (μ_1, μ_2, μ_3)^T and

       Σ = ( 1  ρ  0
             ρ  1  ρ
             0  ρ  1 ).

   (a) Find the marginal distribution of X_2.

   (b) What is the conditional distribution of X_2 given X_1 = x_1 and X_3 = x_3? Under what condition does this distribution coincide with the marginal distribution of X_2?

4. If X ~ N_n(μ, Σ), then show that (X - μ)^T Σ^{-1} (X - μ) ~ χ²_n.

5. Suppose Y = (Y_1, Y_2, Y_3)^T is distributed as N_3(0, σ² I_3).

   (a) Consider the quadratic form

           Q = [(Y_1 - Y_2)² + (Y_2 - Y_3)² + (Y_3 - Y_1)²] / 3.          (3.5.1)

       Write Q as Y^T A Y where A is symmetric. Is A idempotent? What is the distribution of Q/σ²? Find E(Q).

   (b) What is the distribution of L = Y_1 + Y_2 + Y_3? Find E(L) and var(L).

   (c) Are Q and L independent? Find E(Q/L²).

6. Write each of the following quadratic forms in X^T A X form:

   (a) (1/6)X_1² + (1/3)X_2² + (1/2)X_3² + (2/3)X_1X_2 - (2/3)X_1X_3 - (1/3)X_2X_3

   (b) n X̄², where X̄ = (X_1 + X_2 + ... + X_n)/n

   (c) Σ_{i=1}^n X_i²

   (d) Σ_{i=1}^n (X_i - X̄)²

   (e) Σ_{i=1}^2 Σ_{j=1}^2 (X_ij - X̄_i.)², where X̄_i. = (X_i1 + X_i2)/2

   (f) 2 Σ_{i=1}^2 (X̄_i. - X̄..)², where X̄.. = (X_11 + X_12 + X_21 + X_22)/4

   (g) 2(X̄_1. - X̄..)² + 3(X̄_2. - X̄..)², where X̄_1. = (X_11 + X_12)/2, X̄_2. = (X_21 + X_22 + X_23)/3, and X̄.. = (2X̄_1. + 3X̄_2.)/5

   In each case, determine if A is idempotent. If A is idempotent, find rank(A).

7. Let X ~ N_2(μ, Σ), where μ = (μ_1, μ_2)^T and Σ is a given 2 × 2 matrix (entries not recovered in the transcription). Show that Q_1 = (X_1 - X_2)² and Q_2 = (X_1 + X_2)² are independently distributed. Find the distributions of Q_1, Q_2, and Q_2/(3Q_1).

8. Assume that Y ~ N_3(0, I_3). Define Q_1 = Y^T A Y and Q_2 = Y^T B Y, where A and B are the 3 × 3 matrices given in (3.5.2) (entries not recovered in the transcription).

   Are Q_1 and Q_2 independent? Do Q_1 and Q_2 follow χ² distributions?

9. Let Y ~ N_3(0, I_3). Let U_1 = Y^T A_1 Y, U_2 = Y^T A_2 Y, and V = BY, where A_1 and A_2 are 3 × 3 matrices and B is a constant matrix; their entries were only partially recovered in the transcription (the visible rows of A_1 and A_2 are (1/2, 1/2, 0)).

   (a) Are U_1 and U_2 independent?

   (b) Are U_1 and V independent?

   (c) Are U_2 and V independent?

   (d) Find the distribution of V.

   (e) Find the distribution of U_2 - U_1. (Include specific values for any parameters of the distribution.)

10. Suppose X = (X_1, X_2, X_3)^T ~ N_3(μ, σ²V), where μ = (1, 1, 0)^T and V is a given 3 × 3 matrix (entries not recovered in the transcription).

    (a) What is the joint distribution of (X_1, X_3)?

    (b) What is the conditional distribution of (X_1, X_3) | X_2?

    (c) For what value or values of a does L = (aX_1² + ... + aX_1X_3)/σ² (middle terms not recovered in the transcription) follow a chi-square distribution?

    (d) Find the value of b for which L in (c) and M = X_1 + bX_3 are independently distributed.

11. Suppose X ~ N_3(μ, Σ), where μ = (μ_1, μ_2, μ_3)^T and

        Σ = ( σ_1²  0     0
              0     σ_2²  0
              0     0     σ_3² ).

    Find the distribution of Q = Σ_{i=1}^3 X_i²/σ_i². Express the parameters of its distribution in terms of μ_i and σ_i², i = 1, 2, 3. What is the variance of Q?

12. Suppose X ~ N(0, 1) and Y = UX, where U follows a uniform distribution on the discrete space {-1, 1}, independently of X.

    (a) Find E(Y) and cov(X, Y).

    (b) Show that Y and X are not independent.

13. Suppose X = (X_11, X_12, X_21, X_22)^T ~ N_4(μ, I_4), where μ = (α + a_1, α + a_1, α + a_2, α + a_2)^T.

    (a) Find the distribution of E = Σ_{i=1}^2 Σ_{j=1}^2 (X_ij - X̄_i.)², where X̄_i. = (X_i1 + X_i2)/2.

    (b) Find the distribution of Q = 2 Σ_{i=1}^2 (X̄_i. - X̄..)², where X̄.. = (X_11 + X_12 + X_21 + X_22)/4.

    (c) Use the Fisher-Cochran theorem to prove that E and Q are independently distributed.

    (d) What is the distribution of Q/E?


x. Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ 2 ). .8.6 µ =, σ = 1 µ = 1, σ = 1 / µ =, σ =.. 3 1 1 3 x Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ ). The Gaussian distribution Probably the most-important distribution in all of statistics

More information

III - MULTIVARIATE RANDOM VARIABLES

III - MULTIVARIATE RANDOM VARIABLES Computational Methods and advanced Statistics Tools III - MULTIVARIATE RANDOM VARIABLES A random vector, or multivariate random variable, is a vector of n scalar random variables. The random vector is

More information

Stat 5101 Notes: Brand Name Distributions

Stat 5101 Notes: Brand Name Distributions Stat 5101 Notes: Brand Name Distributions Charles J. Geyer September 5, 2012 Contents 1 Discrete Uniform Distribution 2 2 General Discrete Uniform Distribution 2 3 Uniform Distribution 3 4 General Uniform

More information

Def. The euclidian distance between two points x = (x 1,...,x p ) t and y = (y 1,...,y p ) t in the p-dimensional space R p is defined as

Def. The euclidian distance between two points x = (x 1,...,x p ) t and y = (y 1,...,y p ) t in the p-dimensional space R p is defined as MAHALANOBIS DISTANCE Def. The euclidian distance between two points x = (x 1,...,x p ) t and y = (y 1,...,y p ) t in the p-dimensional space R p is defined as d E (x, y) = (x 1 y 1 ) 2 + +(x p y p ) 2

More information

1. Density and properties Brief outline 2. Sampling from multivariate normal and MLE 3. Sampling distribution and large sample behavior of X and S 4.

1. Density and properties Brief outline 2. Sampling from multivariate normal and MLE 3. Sampling distribution and large sample behavior of X and S 4. Multivariate normal distribution Reading: AMSA: pages 149-200 Multivariate Analysis, Spring 2016 Institute of Statistics, National Chiao Tung University March 1, 2016 1. Density and properties Brief outline

More information

Chapter 5. Chapter 5 sections

Chapter 5. Chapter 5 sections 1 / 43 sections Discrete univariate distributions: 5.2 Bernoulli and Binomial distributions Just skim 5.3 Hypergeometric distributions 5.4 Poisson distributions Just skim 5.5 Negative Binomial distributions

More information

Next tool is Partial ACF; mathematical tools first. The Multivariate Normal Distribution. e z2 /2. f Z (z) = 1 2π. e z2 i /2

Next tool is Partial ACF; mathematical tools first. The Multivariate Normal Distribution. e z2 /2. f Z (z) = 1 2π. e z2 i /2 Next tool is Partial ACF; mathematical tools first. The Multivariate Normal Distribution Defn: Z R 1 N(0,1) iff f Z (z) = 1 2π e z2 /2 Defn: Z R p MV N p (0, I) if and only if Z = (Z 1,..., Z p ) (a column

More information

EEL 5544 Noise in Linear Systems Lecture 30. X (s) = E [ e sx] f X (x)e sx dx. Moments can be found from the Laplace transform as

EEL 5544 Noise in Linear Systems Lecture 30. X (s) = E [ e sx] f X (x)e sx dx. Moments can be found from the Laplace transform as L30-1 EEL 5544 Noise in Linear Systems Lecture 30 OTHER TRANSFORMS For a continuous, nonnegative RV X, the Laplace transform of X is X (s) = E [ e sx] = 0 f X (x)e sx dx. For a nonnegative RV, the Laplace

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables This Version: July 30, 2015 Multiple Random Variables 2 Now we consider models with more than one r.v. These are called multivariate models For instance: height and weight An

More information

P (x). all other X j =x j. If X is a continuous random vector (see p.172), then the marginal distributions of X i are: f(x)dx 1 dx n

P (x). all other X j =x j. If X is a continuous random vector (see p.172), then the marginal distributions of X i are: f(x)dx 1 dx n JOINT DENSITIES - RANDOM VECTORS - REVIEW Joint densities describe probability distributions of a random vector X: an n-dimensional vector of random variables, ie, X = (X 1,, X n ), where all X is are

More information

MLES & Multivariate Normal Theory

MLES & Multivariate Normal Theory Merlise Clyde September 6, 2016 Outline Expectations of Quadratic Forms Distribution Linear Transformations Distribution of estimates under normality Properties of MLE s Recap Ŷ = ˆµ is an unbiased estimate

More information

Chp 4. Expectation and Variance

Chp 4. Expectation and Variance Chp 4. Expectation and Variance 1 Expectation In this chapter, we will introduce two objectives to directly reflect the properties of a random variable or vector, which are the Expectation and Variance.

More information

Covariance. Lecture 20: Covariance / Correlation & General Bivariate Normal. Covariance, cont. Properties of Covariance

Covariance. Lecture 20: Covariance / Correlation & General Bivariate Normal. Covariance, cont. Properties of Covariance Covariance Lecture 0: Covariance / Correlation & General Bivariate Normal Sta30 / Mth 30 We have previously discussed Covariance in relation to the variance of the sum of two random variables Review Lecture

More information

On the Distribution of a Quadratic Form in Normal Variates

On the Distribution of a Quadratic Form in Normal Variates On the Distribution of a Quadratic Form in Normal Variates Jin Zhang School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, 650091, China E-mail: jinzhang@ynu.edu.cn Abstract It is a

More information

2. Matrix Algebra and Random Vectors

2. Matrix Algebra and Random Vectors 2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns

More information

Gaussian vectors and central limit theorem

Gaussian vectors and central limit theorem Gaussian vectors and central limit theorem Samy Tindel Purdue University Probability Theory 2 - MA 539 Samy T. Gaussian vectors & CLT Probability Theory 1 / 86 Outline 1 Real Gaussian random variables

More information

STT 843 Key to Homework 1 Spring 2018

STT 843 Key to Homework 1 Spring 2018 STT 843 Key to Homework Spring 208 Due date: Feb 4, 208 42 (a Because σ = 2, σ 22 = and ρ 2 = 05, we have σ 2 = ρ 2 σ σ22 = 2/2 Then, the mean and covariance of the bivariate normal is µ = ( 0 2 and Σ

More information

Chapter 7: Special Distributions

Chapter 7: Special Distributions This chater first resents some imortant distributions, and then develos the largesamle distribution theory which is crucial in estimation and statistical inference Discrete distributions The Bernoulli

More information

Recall that if X 1,...,X n are random variables with finite expectations, then. The X i can be continuous or discrete or of any other type.

Recall that if X 1,...,X n are random variables with finite expectations, then. The X i can be continuous or discrete or of any other type. Expectations of Sums of Random Variables STAT/MTHE 353: 4 - More on Expectations and Variances T. Linder Queen s University Winter 017 Recall that if X 1,...,X n are random variables with finite expectations,

More information

Lecture 3. Probability - Part 2. Luigi Freda. ALCOR Lab DIAG University of Rome La Sapienza. October 19, 2016

Lecture 3. Probability - Part 2. Luigi Freda. ALCOR Lab DIAG University of Rome La Sapienza. October 19, 2016 Lecture 3 Probability - Part 2 Luigi Freda ALCOR Lab DIAG University of Rome La Sapienza October 19, 2016 Luigi Freda ( La Sapienza University) Lecture 3 October 19, 2016 1 / 46 Outline 1 Common Continuous

More information

Joint Distributions. (a) Scalar multiplication: k = c d. (b) Product of two matrices: c d. (c) The transpose of a matrix:

Joint Distributions. (a) Scalar multiplication: k = c d. (b) Product of two matrices: c d. (c) The transpose of a matrix: Joint Distributions Joint Distributions A bivariate normal distribution generalizes the concept of normal distribution to bivariate random variables It requires a matrix formulation of quadratic forms,

More information

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015

Part IA Probability. Definitions. Based on lectures by R. Weber Notes taken by Dexter Chua. Lent 2015 Part IA Probability Definitions Based on lectures by R. Weber Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures.

More information

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013 Multivariate Gaussian Distribution Auxiliary notes for Time Series Analysis SF2943 Spring 203 Timo Koski Department of Mathematics KTH Royal Institute of Technology, Stockholm 2 Chapter Gaussian Vectors.

More information

Distributions of Functions of Random Variables. 5.1 Functions of One Random Variable

Distributions of Functions of Random Variables. 5.1 Functions of One Random Variable Distributions of Functions of Random Variables 5.1 Functions of One Random Variable 5.2 Transformations of Two Random Variables 5.3 Several Random Variables 5.4 The Moment-Generating Function Technique

More information

Regression #5: Confidence Intervals and Hypothesis Testing (Part 1)

Regression #5: Confidence Intervals and Hypothesis Testing (Part 1) Regression #5: Confidence Intervals and Hypothesis Testing (Part 1) Econ 671 Purdue University Justin L. Tobias (Purdue) Regression #5 1 / 24 Introduction What is a confidence interval? To fix ideas, suppose

More information

VAR Model. (k-variate) VAR(p) model (in the Reduced Form): Y t-2. Y t-1 = A + B 1. Y t + B 2. Y t-p. + ε t. + + B p. where:

VAR Model. (k-variate) VAR(p) model (in the Reduced Form): Y t-2. Y t-1 = A + B 1. Y t + B 2. Y t-p. + ε t. + + B p. where: VAR Model (k-variate VAR(p model (in the Reduced Form: where: Y t = A + B 1 Y t-1 + B 2 Y t-2 + + B p Y t-p + ε t Y t = (y 1t, y 2t,, y kt : a (k x 1 vector of time series variables A: a (k x 1 vector

More information

A Probability Review

A Probability Review A Probability Review Outline: A probability review Shorthand notation: RV stands for random variable EE 527, Detection and Estimation Theory, # 0b 1 A Probability Review Reading: Go over handouts 2 5 in

More information

STAT5044: Regression and Anova. Inyoung Kim

STAT5044: Regression and Anova. Inyoung Kim STAT5044: Regression and Anova Inyoung Kim 2 / 51 Outline 1 Matrix Expression 2 Linear and quadratic forms 3 Properties of quadratic form 4 Properties of estimates 5 Distributional properties 3 / 51 Matrix

More information

2 Functions of random variables

2 Functions of random variables 2 Functions of random variables A basic statistical model for sample data is a collection of random variables X 1,..., X n. The data are summarised in terms of certain sample statistics, calculated as

More information

Probability Theory and Statistics. Peter Jochumzen

Probability Theory and Statistics. Peter Jochumzen Probability Theory and Statistics Peter Jochumzen April 18, 2016 Contents 1 Probability Theory And Statistics 3 1.1 Experiment, Outcome and Event................................ 3 1.2 Probability............................................

More information

Introduction to Probability and Stocastic Processes - Part I

Introduction to Probability and Stocastic Processes - Part I Introduction to Probability and Stocastic Processes - Part I Lecture 2 Henrik Vie Christensen vie@control.auc.dk Department of Control Engineering Institute of Electronic Systems Aalborg University Denmark

More information

Two hours. Statistical Tables to be provided THE UNIVERSITY OF MANCHESTER. 14 January :45 11:45

Two hours. Statistical Tables to be provided THE UNIVERSITY OF MANCHESTER. 14 January :45 11:45 Two hours Statistical Tables to be provided THE UNIVERSITY OF MANCHESTER PROBABILITY 2 14 January 2015 09:45 11:45 Answer ALL four questions in Section A (40 marks in total) and TWO of the THREE questions

More information

Lecture 5: Moment generating functions

Lecture 5: Moment generating functions Lecture 5: Moment generating functions Definition 2.3.6. The moment generating function (mgf) of a random variable X is { x e tx f M X (t) = E(e tx X (x) if X has a pmf ) = etx f X (x)dx if X has a pdf

More information

Chapter 4 Multiple Random Variables

Chapter 4 Multiple Random Variables Review for the previous lecture Theorems and Examples: How to obtain the pmf (pdf) of U = g ( X Y 1 ) and V = g ( X Y) Chapter 4 Multiple Random Variables Chapter 43 Bivariate Transformations Continuous

More information

1 Probability theory. 2 Random variables and probability theory.

1 Probability theory. 2 Random variables and probability theory. Probability theory Here we summarize some of the probability theory we need. If this is totally unfamiliar to you, you should look at one of the sources given in the readings. In essence, for the major

More information

Introduction to Computational Finance and Financial Econometrics Probability Review - Part 2

Introduction to Computational Finance and Financial Econometrics Probability Review - Part 2 You can t see this text! Introduction to Computational Finance and Financial Econometrics Probability Review - Part 2 Eric Zivot Spring 2015 Eric Zivot (Copyright 2015) Probability Review - Part 2 1 /

More information

Chapter 4. Multivariate Distributions. Obviously, the marginal distributions may be obtained easily from the joint distribution:

Chapter 4. Multivariate Distributions. Obviously, the marginal distributions may be obtained easily from the joint distribution: 4.1 Bivariate Distributions. Chapter 4. Multivariate Distributions For a pair r.v.s (X,Y ), the Joint CDF is defined as F X,Y (x, y ) = P (X x,y y ). Obviously, the marginal distributions may be obtained

More information

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu

Course: ESO-209 Home Work: 1 Instructor: Debasis Kundu Home Work: 1 1. Describe the sample space when a coin is tossed (a) once, (b) three times, (c) n times, (d) an infinite number of times. 2. A coin is tossed until for the first time the same result appear

More information

STA 4322 Exam I Name: Introduction to Statistics Theory

STA 4322 Exam I Name: Introduction to Statistics Theory STA 4322 Exam I Name: Introduction to Statistics Theory Fall 2013 UF-ID: Instructions: There are 100 total points. You must show your work to receive credit. Read each part of each question carefully.

More information

18 Bivariate normal distribution I

18 Bivariate normal distribution I 8 Bivariate normal distribution I 8 Example Imagine firing arrows at a target Hopefully they will fall close to the target centre As we fire more arrows we find a high density near the centre and fewer

More information

Sampling Distributions

Sampling Distributions Merlise Clyde Duke University September 8, 2016 Outline Topics Normal Theory Chi-squared Distributions Student t Distributions Readings: Christensen Apendix C, Chapter 1-2 Prostate Example > library(lasso2);

More information

Chapter 1. Matrix Algebra

Chapter 1. Matrix Algebra ST4233, Linear Models, Semester 1 2008-2009 Chapter 1. Matrix Algebra 1 Matrix and vector notation Definition 1.1 A matrix is a rectangular or square array of numbers of variables. We use uppercase boldface

More information

Stat 206: Sampling theory, sample moments, mahalanobis

Stat 206: Sampling theory, sample moments, mahalanobis Stat 206: Sampling theory, sample moments, mahalanobis topology James Johndrow (adapted from Iain Johnstone s notes) 2016-11-02 Notation My notation is different from the book s. This is partly because

More information

18.440: Lecture 28 Lectures Review

18.440: Lecture 28 Lectures Review 18.440: Lecture 28 Lectures 18-27 Review Scott Sheffield MIT Outline Outline It s the coins, stupid Much of what we have done in this course can be motivated by the i.i.d. sequence X i where each X i is

More information

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics

Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Chapter 2: Fundamentals of Statistics Lecture 15: Models and statistics Data from one or a series of random experiments are collected. Planning experiments and collecting data (not discussed here). Analysis:

More information