Estimation of Variances and Covariances


1 Variables and Distributions

Random variables are samples from a population with a given set of population parameters. Random variables can be discrete, having a limited number of distinct possible values, or continuous.

2 Continuous Random Variables

The cumulative distribution function of a random variable is

$$ F(y) = \Pr(Y \le y), \quad -\infty < y < \infty. $$

As $y$ approaches $-\infty$, $F(y)$ approaches 0. As $y$ approaches $+\infty$, $F(y)$ approaches 1. $F(y)$ is a nondecreasing function of $y$: if $a < b$, then $F(a) \le F(b)$.

The probability density function is

$$ p(y) = \frac{\partial F(y)}{\partial y} = F'(y), $$

wherever the derivative exists, and

$$ \int_{-\infty}^{\infty} p(y)\,dy = 1, \qquad F(t) = \int_{-\infty}^{t} p(y)\,dy. $$

The expected value of $y$, of a function $g(y)$, and the variance of $y$ are

$$ E(y) = \int_{-\infty}^{\infty} y\,p(y)\,dy, \qquad E(g(y)) = \int_{-\infty}^{\infty} g(y)\,p(y)\,dy, \qquad Var(y) = E(y^2) - [E(y)]^2. $$

2.1 Normal Random Variables

Random variables in animal breeding problems are typically assumed to be samples from normal distributions, with density

$$ p(y) = (2\pi)^{-.5}\sigma^{-1}\exp(-.5(y-\mu)^2/\sigma^2), \quad -\infty < y < +\infty, $$

where $\sigma^2$ is the variance of $y$ and $\mu$ is the expected value of $y$.
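As a quick numerical check of the density formula, the following R sketch (with arbitrary values of $\mu$ and $\sigma$) compares it with R's built-in dnorm() and verifies the mean and variance definitions by simulation.

# Sketch: compare the normal density formula with dnorm(), and check
# E(y) and Var(y) by simulation; mu and sigma are arbitrary choices.
mu <- 10; sigma <- 2
y <- seq(mu - 4 * sigma, mu + 4 * sigma, length.out = 101)
p <- (2 * pi)^(-.5) / sigma * exp(-.5 * (y - mu)^2 / sigma^2)
max(abs(p - dnorm(y, mu, sigma)))   # ~ 0, the two agree
w <- rnorm(1e6, mu, sigma)
c(mean(w), var(w))                  # close to mu = 10 and sigma^2 = 4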

For a random vector variable, $y$, the multivariate normal density function is

$$ p(y) = (2\pi)^{-.5n}\,|V|^{-.5}\exp(-.5(y-\mu)'V^{-1}(y-\mu)), $$

denoted as $y \sim N(\mu, V)$, where $V$ is the variance-covariance matrix of $y$. The determinant of $V$ must be positive, otherwise the density function is undefined.

2.2 Chi-Square Random Variables

If $y \sim N(0, I)$, then $y'y \sim \chi^2_n$, where $\chi^2_n$ is a central chi-square distribution with $n$ degrees of freedom and $n$ is the length of the random vector variable $y$. The mean is $n$ and the variance is $2n$. If $s = y'y > 0$, then

$$ p(s \mid n) = s^{(n/2)-1}\exp(-.5s)\,/\,[2^{.5n}\,\Gamma(.5n)]. $$

If $y \sim N(\mu, I)$, then $y'y \sim \chi^2_{n,\lambda}$, where $\lambda$ is the noncentrality parameter, which is equal to $.5\mu'\mu$. The mean of a noncentral chi-square distribution is $n + 2\lambda$ and the variance is $2n + 8\lambda$.

If $y \sim N(\mu, V)$, then $y'Qy$ has a noncentral chi-square distribution only if $QV$ is idempotent, i.e. $QVQV = QV$. The noncentrality parameter is $\lambda = .5\mu'QVQ\mu$, and the mean and variance of the distribution are $tr(QV) + 2\lambda$ and $2\,tr(QV) + 8\lambda$, respectively.

If there are two quadratic forms of $y$, say $y'Qy$ and $y'Py$, and both quadratic forms have chi-square distributions, then the two quadratic forms are independent if $QVP = 0$.

2.3 The Wishart Distribution

The Wishart distribution is similar to a multivariate chi-square distribution. An entire matrix is envisioned, of which the diagonals have a chi-square distribution and the off-diagonals have a built-in correlation structure. The resulting matrix is positive definite. This distribution is needed when estimating covariance matrices, such as in multiple-trait models, maternal genetic effect models, or random regression models.

2.4 The F-distribution

The F-distribution is used for hypothesis testing and is built upon two independent chi-square random variables. Let $s \sim \chi^2_n$ and $v \sim \chi^2_m$, with $s$ and $v$ being independent; then

$$ \frac{s/n}{v/m} \sim F_{n,m}. $$

The mean of the F-distribution is $m/(m-2)$, and the variance is

$$ \frac{2m^2(n+m-2)}{n(m-2)^2(m-4)}. $$
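These distributional results are easy to confirm by simulation. The R sketch below (degrees of freedom chosen arbitrarily) checks the central chi-square mean and variance and the construction of an F variate from two independent chi-squares.

# Sketch: simulate chi-square moments and an F ratio; n and m are
# arbitrary degrees of freedom used only for illustration.
n <- 5; m <- 12
ss <- replicate(50000, sum(rnorm(n)^2))   # y'y with y ~ N(0, I)
c(mean(ss), var(ss))                      # near n and 2n
s <- rchisq(50000, n)
v <- rchisq(50000, m)
f <- (s / n) / (v / m)                    # distributed as F(n, m)
c(mean(f), m / (m - 2))                   # simulated vs theoretical mean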

3 Expectations of Random Vectors

Let $y$ be a random vector variable of length $n$; then

$$ E(y) = \mu = \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_n \end{pmatrix}. $$

If $c$ is a scalar constant, then $E(cy) = c\mu$. Similarly, if $C$ is a matrix of constants, then $E(Cy) = C\mu$. Let $y_2$ be another random vector variable of the same length as $y_1$; then

$$ E(y_1 + y_2) = E(y_1) + E(y_2) = \mu_1 + \mu_2. $$

4 Variance-Covariance Matrices

Let $y$ be a random vector variable of length $n$; then the variance-covariance matrix of $y$ is

$$ Var(y) = E(yy') - E(y)E(y') = \begin{pmatrix} \sigma^2_{y_1} & \sigma_{y_1y_2} & \cdots & \sigma_{y_1y_n} \\ \sigma_{y_1y_2} & \sigma^2_{y_2} & \cdots & \sigma_{y_2y_n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{y_1y_n} & \sigma_{y_2y_n} & \cdots & \sigma^2_{y_n} \end{pmatrix} = V. $$

A variance-covariance (VCV) matrix is square and symmetric, and should always be positive definite, i.e. all of its eigenvalues must be positive. Other names for a VCV matrix are dispersion matrix and (co)variance matrix.
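The positive definiteness requirement can be checked directly from the eigenvalues, as in this short R sketch (the matrix V below is an arbitrary illustration, not data from the notes).

# Sketch: a VCV matrix must be symmetric with all eigenvalues positive;
# V is an arbitrary illustrative 3x3 matrix.
V <- matrix(c(4, 1, 1,
              1, 9, 2,
              1, 2, 16), 3, 3)
eigen(V, symmetric = TRUE)$values   # all positive, so V is positive definite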

Let $C$ be a matrix of constants conformable for multiplication with the vector $y$; then

$$ Var(Cy) = E(Cyy'C') - E(Cy)E(y'C') = C\,E(yy')\,C' - C\,E(y)E(y')\,C' = C\left(E(yy') - E(y)E(y')\right)C' = C\,Var(y)\,C' = CVC'. $$

If there are two sets of functions of $y$, say $C_1y$ and $C_2y$, then

$$ Cov(C_1y, C_2y) = C_1VC_2'. $$

If $y$ and $z$ represent two different random vectors, possibly of different orders, and if the (co)variance matrix between these two vectors is $W$, then

$$ Cov(C_1y, C_2z) = C_1WC_2'. $$

5 Quadratic Forms

Variances are estimated using sums of squares of various normally distributed variables, and these are known as quadratic forms. The general quadratic form is $y'Qy$, where $y$ is a random vector variable and $Q$ is a regulator matrix. Usually $Q$ is a symmetric matrix, but not necessarily positive definite. Examples of different $Q$ matrices are as follows:

1. $Q = I$, then $y'Qy = y'y$, which is the total sum of squares of the elements in $y$.

2. $Q = J(1/n)$, then $y'Qy = y'Jy(1/n)$, where $n$ is the length of $y$. Note that $J = 11'$, so that $y'Jy = (y'1)(1'y)$, and $(1'y)$ is the sum of the elements in $y$.

3. $Q = (I - J(1/n))/(n-1)$, then $y'Qy$ gives the variance of the elements in $y$, $\sigma^2_y$.

Each of these is checked numerically in the sketch that follows this list.
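# Sketch: the three example Q matrices applied to an arbitrary y.
y <- c(3, 8, 5, 6)
n <- length(y)
J <- matrix(1, n, n)               # J = 11'
c(t(y) %*% diag(n) %*% y)          # Q = I: total sum of squares, 134
c(t(y) %*% (J / n) %*% y)          # Q = J(1/n): (sum of y)^2 / n, 121
Q <- (diag(n) - J / n) / (n - 1)
c(t(y) %*% Q %*% y, var(y))        # Q = (I - J/n)/(n - 1): both 4.3333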

The expected value of a quadratic form is

$$ E(y'Qy) = E(tr(y'Qy)) = E(tr(Qyy')) = tr(Q\,E(yy')), $$

and because the covariance matrix is $Var(y) = E(yy') - E(y)E(y')$, so that $E(yy') = Var(y) + E(y)E(y')$, then

$$ E(y'Qy) = tr\left(Q(Var(y) + E(y)E(y'))\right). $$

Let $Var(y) = V$ and $E(y) = \mu$; then

$$ E(y'Qy) = tr(Q(V + \mu\mu')) = tr(QV) + tr(Q\mu\mu') = tr(QV) + \mu'Q\mu. $$

The expectation of a quadratic form was derived without knowing the distribution of $y$. However, the variance of a quadratic form requires that $y$ follow a multivariate normal distribution. Without showing the derivation, the variance of a quadratic form, assuming $y$ has a multivariate normal distribution, is

$$ Var(y'Qy) = 2\,tr(QVQV) + 4\mu'QVQ\mu. $$

The quadratic form $y'Qy$ has a chi-square distribution if

$$ tr(QVQV) = tr(QV) \quad \text{and} \quad \mu'QVQ\mu = \mu'Q\mu, $$

or under the single condition that $QV$ is idempotent. Then if $m = tr(QV)$ and $\lambda = .5\mu'Q\mu$, the expected value of $y'Qy$ is $m + 2\lambda$ and the variance is $2m + 8\lambda$, which are the usual results for a noncentral chi-square variable.

The covariance between two quadratic forms, say $y'Qy$ and $y'Py$, is

$$ Cov(y'Qy, y'Py) = 2\,tr(QVPV) + 4\mu'QVP\mu. $$

The covariance is zero if $QVP = 0$, in which case the two quadratic forms are said to be independent.
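Both moments of a quadratic form can be confirmed by Monte Carlo, as in this R sketch (Q, mu, and V below are made up purely for illustration).

# Sketch: Monte Carlo check of E(y'Qy) = tr(QV) + mu'Q mu and
# Var(y'Qy) = 2 tr(QVQV) + 4 mu'QVQ mu; Q, mu, V are arbitrary.
library(MASS)                            # mvrnorm()
set.seed(41)
n <- 4
Q <- crossprod(matrix(rnorm(n * n), n))  # an arbitrary symmetric Q
mu <- c(1, 2, 3, 4)
V <- diag(n) * 2
y <- mvrnorm(400000, mu, V)
qf <- rowSums((y %*% Q) * y)             # y'Qy for every sample
QV <- Q %*% V
c(mean(qf), sum(diag(QV)) + c(t(mu) %*% Q %*% mu))
c(var(qf), 2 * sum(diag(QV %*% QV)) + 4 * c(t(mu) %*% QV %*% Q %*% mu))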

6 Basic Model for Variance Components

The general linear model is described as

$$ y = Xb + Zu + e, $$

where $E(y) = Xb$, $E(u) = 0$, and $E(e) = 0$. Often $u$ is partitioned into $s$ factors as

$$ u' = (u_1' \ \ u_2' \ \ \cdots \ \ u_s'). $$

The (co)variance matrices are defined as

$$ G = Var(u) = Var\begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_s \end{pmatrix} = \begin{pmatrix} G_1\sigma_1^2 & 0 & \cdots & 0 \\ 0 & G_2\sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & G_s\sigma_s^2 \end{pmatrix} $$

and

$$ R = Var(e) = I\sigma_0^2. $$

Then

$$ Var(y) = V = ZGZ' + R, $$

and if $Z$ is partitioned corresponding to $u$, as $Z = [Z_1 \ \ Z_2 \ \ \cdots \ \ Z_s]$, then

$$ ZGZ' = \sum_{i=1}^{s} Z_iG_iZ_i'\sigma_i^2. $$

Let $V_i = Z_iG_iZ_i'$ and $V_0 = I$; then

$$ V = \sum_{i=0}^{s} V_i\sigma_i^2. $$

6.1 Mixed Model Equations

Henderson's mixed model equations (MME) are written as

$$ \begin{pmatrix} X'R^{-1}X & X'R^{-1}Z_1 & X'R^{-1}Z_2 & \cdots & X'R^{-1}Z_s \\ Z_1'R^{-1}X & Z_1'R^{-1}Z_1 + G_1^{-1}\sigma_1^{-2} & Z_1'R^{-1}Z_2 & \cdots & Z_1'R^{-1}Z_s \\ Z_2'R^{-1}X & Z_2'R^{-1}Z_1 & Z_2'R^{-1}Z_2 + G_2^{-1}\sigma_2^{-2} & \cdots & Z_2'R^{-1}Z_s \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ Z_s'R^{-1}X & Z_s'R^{-1}Z_1 & Z_s'R^{-1}Z_2 & \cdots & Z_s'R^{-1}Z_s + G_s^{-1}\sigma_s^{-2} \end{pmatrix} \begin{pmatrix} \hat b \\ \hat u_1 \\ \hat u_2 \\ \vdots \\ \hat u_s \end{pmatrix} = \begin{pmatrix} X'R^{-1}y \\ Z_1'R^{-1}y \\ Z_2'R^{-1}y \\ \vdots \\ Z_s'R^{-1}y \end{pmatrix}. $$
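The MME are simple to set up directly. A minimal R sketch for a single random factor follows; all numbers are illustrative assumptions, not data from the notes.

# Sketch: Henderson's MME with s = 1, G1 = I, and made-up X, Z, y,
# and assumed variances.
X <- matrix(1, 6, 1)
Z <- matrix(c(1,1,1,0,0,0,
              0,0,0,1,1,1), 6, 2)
y <- c(29, 31, 28, 34, 36, 33)
s2e <- 40; s2u <- 10
Rinv <- diag(6) / s2e
C <- rbind(cbind(t(X) %*% Rinv %*% X, t(X) %*% Rinv %*% Z),
           cbind(t(Z) %*% Rinv %*% X, t(Z) %*% Rinv %*% Z + diag(2) / s2u))
rhs <- rbind(t(X) %*% Rinv %*% y, t(Z) %*% Rinv %*% y)
solve(C, rhs)   # first element is b-hat, the remaining two are u-hat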

7 Unbiased Estimation of Variances

Assume that all $G_i$ are equal to $I$ for this example, so that $Z_iG_iZ_i'$ simplifies to $Z_iZ_i'$. Let $X = 1$, a vector of ones, and let $Z_1$ and $Z_2$ be the incidence matrices of the two random factors for a small vector of records, $y$. Then

$$ V_0 = I, \qquad V_1 = Z_1Z_1', \qquad V_2 = Z_2Z_2'. $$

7.1 Define the Necessary Quadratic Forms

At least three quadratic forms are needed in order to estimate the three variances. Choose three arbitrary symmetric matrices, $Q_1$, $Q_2$, and $Q_3$, such that $Q_kX = 0$. For the example data, the numeric values of the quadratic forms are

$$ y'Q_1y = 657, \qquad y'Q_2y = 306, \qquad y'Q_3y = 88. $$

For example, $y'Q_1y$ is obtained by direct multiplication of the row vector $y'$, the matrix $Q_1$, and the column vector $y$.

7.2 The Expectations of the Quadratic Forms

The expectations of the quadratic forms are

$$ E(y'Q_1y) = tr(Q_1V_0)\sigma_0^2 + tr(Q_1V_1)\sigma_1^2 + tr(Q_1V_2)\sigma_2^2 = 4\sigma_0^2 + \sigma_1^2 + \sigma_2^2, $$
$$ E(y'Q_2y) = 4\sigma_0^2 + 4\sigma_1^2 + \sigma_2^2, $$
$$ E(y'Q_3y) = 6\sigma_0^2 + 4\sigma_1^2 + 4\sigma_2^2. $$

7.3 Equate Expected Values to Numerical Values

Equate the numeric values of the quadratic forms to their corresponding expected values, which gives a system of equations to be solved, such as $F\sigma = w$. In this case, the equations would be

$$ \begin{pmatrix} 4 & 1 & 1 \\ 4 & 4 & 1 \\ 6 & 4 & 4 \end{pmatrix} \begin{pmatrix} \sigma_0^2 \\ \sigma_1^2 \\ \sigma_2^2 \end{pmatrix} = \begin{pmatrix} 657 \\ 306 \\ 88 \end{pmatrix}, $$

which gives the solution as $\hat\sigma = F^{-1}w$. The resulting estimates are unbiased.
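The whole procedure fits in a few lines of R. The sketch below uses a made-up data set, not the example in the text, purely to show the mechanics.

# Sketch: unbiased estimation end-to-end on made-up data; the numbers
# are NOT those of the example above.
n  <- 6
X  <- matrix(1, n, 1)
Z1 <- matrix(c(1,1,1,0,0,0, 0,0,0,1,1,1), n, 2)
Z2 <- matrix(c(1,1,0,0,1,0, 0,0,1,1,0,1), n, 2)
y  <- c(32, 35, 41, 38, 30, 37)
V0 <- diag(n); V1 <- Z1 %*% t(Z1); V2 <- Z2 %*% t(Z2)
M  <- X %*% solve(crossprod(X)) %*% t(X)
A1 <- Z1 %*% solve(crossprod(Z1)) %*% t(Z1)
A2 <- Z2 %*% solve(crossprod(Z2)) %*% t(Z2)
Qs <- list(diag(n) - M, A1 - M, A2 - M)   # each satisfies Qk X = 0
w  <- sapply(Qs, function(Q) c(t(y) %*% Q %*% y))
Fm <- t(sapply(Qs, function(Q)
         sapply(list(V0, V1, V2), function(Vi) sum(diag(Q %*% Vi)))))
solve(Fm, w)   # unbiased estimates of (sigma0^2, sigma1^2, sigma2^2)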

7.4 Variances of Quadratic Forms

The variance of a quadratic form is

$$ Var(y'Qy) = 2\,tr(QVQV) + 4b'X'QVQXb. $$

Only translation-invariant quadratic forms, those with $QX = 0$, are typically considered in variance component estimation, which means $b'X'QVQXb = 0$. Thus, only $tr(QVQV)$ needs to be calculated. Remember that $V$ can be written as the sum of $s + 1$ matrices, $V_i\sigma_i^2$; then

$$ tr(QVQV) = tr\left(Q\sum_{i=0}^{s}V_i\sigma_i^2\;Q\sum_{j=0}^{s}V_j\sigma_j^2\right) = \sum_{i=0}^{s}\sum_{j=0}^{s} tr(QV_iQV_j)\,\sigma_i^2\sigma_j^2. $$

The sampling variances depend on

1. the true magnitude of the individual components,
2. the matrices $Q_k$, which depend on the method of estimation and the model, and
3. the structure and amount of the data, through $X$ and $Z$.

For small examples, the calculation of sampling variances can be easily demonstrated. In this case,

$$ Var(F^{-1}w) = F^{-1}\,Var(w)\,(F^{-1})', $$

a function of the variance-covariance matrix of the quadratic forms. Using the small example of the previous section, with $s = 2$, $Var(w)$ is a 3x3 matrix. The (1,1) element is the variance of $y'Q_1y$, which is

$$ Var(y'Q_1y) = 2\,tr(Q_1VQ_1V) = 2\,tr(Q_1V_0Q_1V_0)\sigma_0^4 + 4\,tr(Q_1V_0Q_1V_1)\sigma_0^2\sigma_1^2 + 4\,tr(Q_1V_0Q_1V_2)\sigma_0^2\sigma_2^2 $$
$$ \qquad + \ 2\,tr(Q_1V_1Q_1V_1)\sigma_1^4 + 4\,tr(Q_1V_1Q_1V_2)\sigma_1^2\sigma_2^2 + 2\,tr(Q_1V_2Q_1V_2)\sigma_2^4. $$

The (1,2) element is the covariance between the first and second quadratic forms, $Cov(y'Q_1y, y'Q_2y) = 2\,tr(Q_1VQ_2V)$, and similarly for the other terms. All of the results can be summarized in a table whose rows are $Var(w_1)$, $Cov(w_1,w_2)$, $Cov(w_1,w_3)$, $Var(w_2)$, $Cov(w_2,w_3)$, and $Var(w_3)$, and whose columns give the coefficients of $\sigma_0^4$, $\sigma_0^2\sigma_1^2$, $\sigma_0^2\sigma_2^2$, $\sigma_1^4$, $\sigma_1^2\sigma_2^2$, and $\sigma_2^4$.
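Continuing the R sketch above, $Var(w)$ and then $Var(\hat\sigma)$ can be filled in mechanically once true values are assumed for the components; the values below are arbitrary.

# Sketch (continuing the toy example): Var(w) and C = Var(sigma-hat)
# under assumed true components; the 4 b'X'QVQXb terms vanish because
# every Q satisfies QX = 0.
s2true <- c(40, 10, 25)        # assumed (sigma0^2, sigma1^2, sigma2^2)
V <- s2true[1] * V0 + s2true[2] * V1 + s2true[3] * V2
varw <- matrix(0, 3, 3)
for (k in 1:3) for (l in 1:3)
  varw[k, l] <- 2 * sum(diag(Qs[[k]] %*% V %*% Qs[[l]] %*% V))
Finv <- solve(Fm)
Cmat <- Finv %*% varw %*% t(Finv)   # Var(sigma-hat)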

To get numeric values for these variances, the true components need to be known, so assumed true values of $\sigma_0^2$, $\sigma_1^2$, and $\sigma_2^2$ are substituted into the expression above. Evaluating all six distinct elements in this way gives the complete variance-covariance matrix of the quadratic forms, $Var(w)$. The variance-covariance matrix of the estimated variances (assuming the same true values) is then

$$ Var(\hat\sigma) = F^{-1}\,Var(w)\,(F^{-1})' = C. $$

7.5 Variance of A Ratio of Variance Estimates

Often estimates of ratios of functions of the variances are needed for animal breeding work, such as heritabilities, repeatabilities, and variance ratios. Let such a ratio be denoted as $a/c$, where, for the example,

$$ a = \hat\sigma_1^2 = (0 \ \ 1 \ \ 0)\,\hat\sigma = 72 $$

and

$$ c = \hat\sigma_0^2 + \hat\sigma_1^2 + \hat\sigma_2^2 = (1 \ \ 1 \ \ 1)\,\hat\sigma = 288. $$

(NOTE: the negative estimate of $\hat\sigma_2^2$ was set to zero before calculating $c$.) From Osborne and Patterson (1952) and Rao (1968), an approximation to the variance of a ratio is given by

$$ Var(a/c) = \left(c^2\,Var(a) + a^2\,Var(c) - 2ac\,Cov(a,c)\right)/c^4. $$

Now note that

$$ Var(a) = (0 \ \ 1 \ \ 0)\,C\,(0 \ \ 1 \ \ 0)' = 93{,}500, $$
$$ Var(c) = (1 \ \ 1 \ \ 1)\,C\,(1 \ \ 1 \ \ 1)', $$
$$ Cov(a,c) = (0 \ \ 1 \ \ 0)\,C\,(1 \ \ 1 \ \ 1)'. $$

Then

$$ Var(a/c) = [(288)^2\,Var(a) + (72)^2\,Var(c) - 2(72)(288)\,Cov(a,c)]/(288)^4. $$

This result is very large, but could be expected from so few observations; the estimated ratio $a/c = 0.25$ carries a standard deviation several times larger than the ratio itself.

Another approximation method assumes that the denominator has been estimated accurately, so that it is considered to be a constant, such as the estimate of $\sigma_e^2$. Then

$$ Var(a/c) = Var(a)/c^2. $$

For the example problem, this gives

$$ Var(a/c) = 93{,}500/(288)^2 = 1.13, $$

which is slightly larger than the previous approximation.

The second approximation would not be suitable for a ratio of the residual variance to the variance of one of the other components. Suppose $a = \hat\sigma_0^2 = 216$ and $c = \hat\sigma_1^2 = 72$, so that $a/c = 3.0$. Then, with the first method,

$$ Var(a/c) = [(72)^2\,Var(\hat\sigma_0^2) + (216)^2\,Var(\hat\sigma_1^2) - 2(72)(216)\,Cov(\hat\sigma_0^2, \hat\sigma_1^2)]/(72)^4, $$

with $Var(\hat\sigma_0^2) = 405{,}700$ and $Var(\hat\sigma_1^2) = 93{,}500$ taken from $C$, and, with the second method,

$$ Var(a/c) = 405{,}700/(72)^2 = 78.26. $$

The first method is probably more realistic in this situation, but both variances are very large.
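The ratio approximation is easily wrapped as an R function; continuing the toy sketch, a and c are built as linear functions $k'\hat\sigma$ of the estimates.

# Sketch: the Osborne-Patterson / Rao approximation to Var(a/c),
# using Fm, w, and Cmat from the toy example above.
var_ratio <- function(a, cc, va, vc, vac) {
  (cc^2 * va + a^2 * vc - 2 * a * cc * vac) / cc^4
}
k1 <- c(0, 1, 0)                # a = sigma1-hat^2
k2 <- c(1, 1, 1)                # c = total variance
sig <- solve(Fm, w)             # (a negative component would be set to
a  <- sum(k1 * sig)             #  zero first, as in the text)
cc <- sum(k2 * sig)
var_ratio(a, cc, t(k1) %*% Cmat %*% k1, t(k2) %*% Cmat %*% k2,
          t(k1) %*% Cmat %*% k2)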

8 Useful Derivatives of Quantities

The following results are necessary for the derivation of methods of variance component estimation based on the multivariate normal distribution.

1. The (co)variance matrix of $y$ is

$$ V = \sum_{i=1}^{s} Z_iG_iZ_i'\sigma_i^2 + R\sigma_0^2 = ZGZ' + R. $$

Usually, each $G_i$ is assumed to be $I$ for most random factors, but for animal models $G_i$ might be equal to $A$, the additive genetic relationship matrix. Thus, $G_i$ does not always have to be diagonal, and will not be an identity in animal model analyses.

2. The inverse of $V$ is

$$ V^{-1} = R^{-1} - R^{-1}Z(Z'R^{-1}Z + G^{-1})^{-1}Z'R^{-1}. $$

To prove, show that $VV^{-1} = I$. Let $T = Z'R^{-1}Z + G^{-1}$; then

$$ VV^{-1} = (ZGZ' + R)[R^{-1} - R^{-1}ZT^{-1}Z'R^{-1}] $$
$$ = ZGZ'R^{-1} - ZGZ'R^{-1}ZT^{-1}Z'R^{-1} + I - ZT^{-1}Z'R^{-1} $$
$$ = I + [ZGT - ZGZ'R^{-1}Z - Z](T^{-1}Z'R^{-1}) $$
$$ = I + [ZG(Z'R^{-1}Z + G^{-1}) - ZGZ'R^{-1}Z - Z](T^{-1}Z'R^{-1}) $$
$$ = I + [ZGZ'R^{-1}Z + Z - ZGZ'R^{-1}Z - Z](T^{-1}Z'R^{-1}) $$
$$ = I + [0](T^{-1}Z'R^{-1}) = I. $$

3. If $k$ is a scalar constant and $A$ is any square matrix of order $m$, then $|Ak| = k^m|A|$.

4. For general square matrices of the same order, say $M$ and $U$, $|MU| = |M|\,|U|$.

5. For the general matrix below, with $A$ and $D$ being square and nonsingular (i.e. the inverse of each exists),

$$ \begin{vmatrix} A & B \\ -Q & D \end{vmatrix} = |A|\,|D + QA^{-1}B| = |D|\,|A + BD^{-1}Q|. $$

Then if $A = I$ and $D = I$,

$$ |I + QB| = |I + BQ| = |I + B'Q'| = |I + Q'B'|. $$

6. Using the results in (4) and (5),

$$ |V| = |R + ZGZ'| = |R|\,|I + R^{-1}ZGZ'| = |R|\,|I + Z'R^{-1}ZG| = |R|\,|(G^{-1} + Z'R^{-1}Z)G| = |R|\,|G^{-1} + Z'R^{-1}Z|\,|G|. $$

7. The mixed model coefficient matrix of Henderson can be denoted by

$$ C = \begin{pmatrix} X'R^{-1}X & X'R^{-1}Z \\ Z'R^{-1}X & Z'R^{-1}Z + G^{-1} \end{pmatrix}; $$

then the determinant of $C$ can be derived as

$$ |C| = |X'R^{-1}X|\,|G^{-1} + Z'(R^{-1} - R^{-1}X(X'R^{-1}X)^{-1}X'R^{-1})Z| $$
$$ = |Z'R^{-1}Z + G^{-1}|\,|X'(R^{-1} - R^{-1}Z(Z'R^{-1}Z + G^{-1})^{-1}Z'R^{-1})X|. $$

Now let

$$ S = R^{-1} - R^{-1}X(X'R^{-1}X)^{-1}X'R^{-1}; $$

then

$$ |C| = |X'R^{-1}X|\,|G^{-1} + Z'SZ| = |Z'R^{-1}Z + G^{-1}|\,|X'V^{-1}X|. $$

8. A projection matrix, $P$, is defined as

$$ P = V^{-1} - V^{-1}X(X'V^{-1}X)^{-}X'V^{-1}. $$

Properties of $P$:

$$ PX = 0, \qquad Py = V^{-1}(y - X\hat b), \quad \text{where} \quad \hat b = (X'V^{-1}X)^{-}X'V^{-1}y. $$

Therefore,

$$ y'PZ_iG_iZ_i'Py = (y - X\hat b)'V^{-1}Z_iG_iZ_i'V^{-1}(y - X\hat b). $$

9. The derivative of $V^{-1}$ is

$$ \frac{\partial V^{-1}}{\partial \sigma_i^2} = -V^{-1}\,\frac{\partial V}{\partial \sigma_i^2}\,V^{-1}. $$

10. The derivative of $\ln|V|$ is

$$ \frac{\partial \ln|V|}{\partial \sigma_i^2} = tr\left(V^{-1}\,\frac{\partial V}{\partial \sigma_i^2}\right). $$

11. The derivative of $P$ is

$$ \frac{\partial P}{\partial \sigma_i^2} = -P\,\frac{\partial V}{\partial \sigma_i^2}\,P. $$

12. The derivative of $V$ is

$$ \frac{\partial V}{\partial \sigma_i^2} = Z_iG_iZ_i'. $$

13. The derivative of $\ln|X'V^{-1}X|$ is

$$ \frac{\partial \ln|X'V^{-1}X|}{\partial \sigma_i^2} = -tr\left((X'V^{-1}X)^{-}\,X'V^{-1}\,\frac{\partial V}{\partial \sigma_i^2}\,V^{-1}X\right). $$
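These identities are straightforward to verify numerically; the R sketch below (with toy Z, G, and R) checks result 2, the PX = 0 property of result 8, and result 10 by a central finite difference.

# Sketch: numeric checks of results 2, 8, and 10 with toy matrices.
set.seed(83)
n <- 8; q <- 3
Z <- matrix(sample(0:1, n * q, replace = TRUE), n, q)
X <- matrix(1, n, 1)
s2u <- 15; s2e <- 45
G <- diag(q) * s2u; R <- diag(n) * s2e
V <- Z %*% G %*% t(Z) + R
Rinv <- solve(R)
Tm <- t(Z) %*% Rinv %*% Z + solve(G)
Vinv <- Rinv - Rinv %*% Z %*% solve(Tm) %*% t(Z) %*% Rinv   # result 2
max(abs(Vinv %*% V - diag(n)))                              # ~ 0
P <- Vinv - Vinv %*% X %*% solve(t(X) %*% Vinv %*% X) %*% t(X) %*% Vinv
max(abs(P %*% X))                                           # result 8: PX = 0
# Result 10: d ln|V| / d sigma-u^2 = tr(Vinv dV/d sigma-u^2), where
# dV/d sigma-u^2 = ZZ' here; compare with a finite difference.
Vf <- function(s) Z %*% t(Z) * s + diag(n) * s2e
h <- 1e-6
num <- (determinant(Vf(s2u + h))$modulus -
        determinant(Vf(s2u - h))$modulus) / (2 * h)
ana <- sum(diag(Vinv %*% Z %*% t(Z)))
c(num, ana)                                                 # should agree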

9 Random Number Generators and R

R has some very good random number generators built into it. These functions are very useful for the application of Gibbs sampling methods in Bayesian estimation. Generators for the uniform, normal, chi-square, and Wishart distributions are necessary to have. Below are some examples of the various functions in R. The package called MCMCpack should be obtained from the CRAN website.

require(MCMCpack)

# Uniform distribution generator
#   num = number of variates to generate
#   min = minimum number in range
#   max = maximum number in range
x = runif(100, min = 5, max = 10)

# Normal distribution generator
#   num   = number of deviates to generate
#   xmean = mean of the distribution you want
#   xsd   = standard deviation of deviates you want
w = rnorm(100, -1, 6.3)

# Chi-square generator
#   num = number of deviates to generate
#   df  = degrees of freedom
#   ncp = noncentrality parameter, usually 0
w = rchisq(5, 4, 0)

# Inverted Wishart matrix generator
#   df = degrees of freedom
#   SS = matrix of sums of squares and crossproducts
U = riwish(df, SS)

# New covariance matrix is the inverse of U
require(MASS)   # ginv() is in MASS
V = ginv(U)

A chi-square variate with m degrees of freedom is the sum of squares of m random normal deviates. The random number generator, however, makes use of a gamma distribution, which with the appropriate parameters is a chi-square distribution. The uniform distribution is the key distribution underlying all of the other distribution generators. R uses the Mersenne Twister (Matsumoto and Nishimura, 1997), which has a period of 2^19937 - 1; the Twister is based on a Mersenne prime number.
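As a usage sketch, the inverted Wishart generator can be checked against its known mean, E(U) = SS/(df - p - 1) for a p x p scale matrix; SS and df below are arbitrary illustrative values.

# Sketch: check riwish() draws against the inverted Wishart mean,
# with p = 2 and made-up SS and df.
require(MCMCpack)
SS <- matrix(c(90, 30,
               30, 60), 2, 2)
df <- 12
draws <- replicate(5000, riwish(df, SS))   # a 2 x 2 x 5000 array
apply(draws, c(1, 2), mean)                # compare with:
SS / (df - 2 - 1)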

10 Positive Definite Matrices

A covariance matrix should be positive definite. To check a matrix, compute the eigenvalues and eigenvectors of the matrix. All eigenvalues should be positive. If they are not positive, then they can be modified, and a new covariance matrix constructed from the eigenvectors and the modified set of eigenvalues. The procedure is shown in the following R statements.

# Compute eigenvalues and eigenvectors
GE = eigen(G)
nre = length(GE$values)
for(i in 1:nre) {
   qp = GE$values[i]
   # replace a negative eigenvalue with a small positive value
   if(qp < 0) qp = (qp * qp) / 10000
   GE$values[i] = qp
}
# Re-form the new matrix
Gh = GE$vectors
GG = Gh %*% diag(GE$values) %*% t(Gh)

If the eigenvalues are all positive, then the new matrix, GG, will be the same as the input matrix, G.

EXERCISES

1. This is an example of Henderson's method of unbiased estimation of variance components. Let

$$ y = \mu 1 + Z_1u_1 + Z_2u_2 + e, $$

for a small set of records, where $u_1' = (u_{11} \ \ u_{12})$ and $u_2' = (u_{21} \ \ u_{22} \ \ u_{23} \ \ u_{24})$. Also,

$$ V = Z_1Z_1'\sigma_1^2 + Z_2Z_2'\sigma_2^2 + I\sigma_0^2. $$

Calculate the following:

(a) $M = 1(1'1)^{-1}1'$

(b) $A = Z_1(Z_1'Z_1)^{-1}Z_1'$

(c) $B = Z_2(Z_2'Z_2)^{-1}Z_2'$

(d) $Q_0 = I - M$

(e) $Q_1 = A - M$

(f) $Q_2 = B - M$

(g) $y'Q_0y$

(h) $y'Q_1y$

(i) $y'Q_2y$

(j) $E(y'Q_0y) = tr(Q_0V_0)\sigma_0^2 + tr(Q_0V_1)\sigma_1^2 + tr(Q_0V_2)\sigma_2^2$

(k) $E(y'Q_1y)$

(l) $E(y'Q_2y)$

(m) Estimate the variances.

(n) Compute the variances of the estimated variances.

2. Check the following matrix, $R$, for positive definiteness, and create a new, modified matrix from it that is positive definite (if it is not already positive definite).
