ESTIMATING VARIANCE COMPONENTS FOR THE NESTED MODEL, MINQUE OR RESTRICTED MAXIMUM LIKELIHOOD


Estimating Variance Components for the Nested Model, MINQUE or Restricted Maximum Likelihood

by Francis Giesbrecht and Peter Burrows

Institute of Statistics Mimeograph Series No.

Raleigh - June 1976

Estimating Variance Components for the Nested Model, MINQUE or Restricted Maximum Likelihood

by Francis Giesbrecht and Peter Burrows

ABSTRACT

This paper outlines an efficient method for computing MINQUE or restricted maximum likelihood estimates of variance components for the nested or hierarchical classification model. The method does not require storage of any large matrices and consequently does not impose limits on the size of the data set. Numerical examples indicate that the computations converge very rapidly.

I. Introduction

In a series of papers Rao (1970, 1971a, 1971b, 1972) has developed the MINQUE (MInimum Norm Quadratic Unbiased Estimation) theory of estimating variance components. These papers provide a complete solution to the problem in the sense that if normality holds then the method gives locally best (minimum variance) quadratic unbiased estimates of the variance components. Unfortunately, the estimation equations are given in a form that taxes even a very large computer for a reasonably large data set, since they involve inversion of an n x n matrix, where n is the number of data points. The procedure also requires prior information about the relative magnitudes of the variance components. An iterative scheme seems a reasonable remedy, but places an even larger premium on efficient computation. Hemmerle and Hartley (1973) develop a method for computing maximum likelihood estimates of the variance components. Corbeil and Searle (1976) modify the Hemmerle and Hartley algorithm to give restricted maximum likelihood estimates by partitioning the likelihood into two parts and then maximizing the part which is free of the fixed effects. However, both algorithms require the repeated inversion of a c x c matrix, where c is the total number of random levels in the model. Typically c will be much smaller than n, though for large data sets even that rapidly becomes unreasonable. Both algorithms are iterative and as such require only reasonable initial starting values for the unknown variance components rather than prior estimates.

The object of this paper is to show that for the hierarchical model the

MINQUE formulas can be rewritten in a form that requires only that a p x p matrix be inverted, where p is the number of variance components or levels of nesting in the model. It is shown that all necessary totals can be obtained by one pass through the data set. It is also shown that if the MINQUE process is iterated (with some provision to replace negative components with small positive values) and converges, then the final answers will also satisfy the equations obtained by applying the restricted maximum likelihood method.

II. MINQUE With Invariance and Normality

Following Rao (1971a, 1971b) let Y be an n-vector of observations with structure

Y = X\beta + U_1 \xi_1 + \cdots + U_p \xi_p,   (1)

where X is a known n x s matrix of rank s, \beta is an s-vector of unknown parameters, U_i is a known n x c_i matrix and \xi_i is a c_i-vector of normal random variables with zero means and dispersion matrix I\sigma_i^2 of appropriate dimension. It is also assumed that \xi_i and \xi_j are uncorrelated for all i \neq j. Clearly

E[Y] = X\beta  and  D[Y] = \sigma_1^2 V_1 + \cdots + \sigma_p^2 V_p,

where V_i = U_i U_i' for i = 1, ..., p. The quadratic unbiased and invariant (invariant in the sense that the estimates are unchanged if we replace the vector of observations Y by Y + Xb for any s-vector b) estimate of p_1\sigma_1^2 + \cdots + p_p\sigma_p^2 given by the MINQUE principle is \sum_{i=1}^p \lambda_i Q_i,

where

Q_i = Y' R V_i R Y,  R = V^{*-1}(I - P),  P = X(X' V^{*-1} X)^{-1} X' V^{*-1},

and the {\lambda_i} are suitable (derivable) constants for arbitrary {p_i}. Replacing V^* by \alpha_1 V_1 + \cdots + \alpha_p V_p amounts to selecting a different norm and disturbs neither the unbiasedness nor the invariance. Further, if the {\alpha_i} happen to be proportional to the true but unknown {\sigma_i^2}, then the estimates obtained will also be the minimum variance quadratic unbiased invariant estimates at that particular point in the parameter space.

If all variance components are to be estimated simultaneously, rather than only a particular linear function, then the {\lambda_i} need never be computed. The procedure is to compute the {Q_i}, equate them to their expectations and solve the resulting system of equations. This can be written as

S \hat{\sigma} = Q,   (2)

where S_{ij} = tr(R V_i R V_j). It seems reasonable that if prior knowledge of the relative magnitudes of

the unknown variance components is available then this information should be used to define a set of {\alpha_i}, and the matrix V^* in (2) is then replaced by V = \alpha_1 V_1 + \cdots + \alpha_p V_p. Since (2) is a linear system and the right hand sides are quadratic forms, the variance-covariance matrix of the estimated variance components follows immediately (in terms of the true variance components and the {\alpha_i}). Formally this provides a complete solution for the variance component estimation problem. However, the computational aspects of obtaining the elements of (2) directly are formidable: they involve inverting an n x n matrix. Consequently the examination of special cases is in order. Of particular interest here is the nested analysis of variance model, since LaMotte (1972) has obtained V^{-1} in a manageable form.

III. Connection with Restricted Maximum Likelihood

In order to obtain the maximum likelihood estimates of the variance components we proceed in two steps. The first is to assume \sigma_i^2 = \alpha_i for i = 1, ..., p and estimate \beta. This yields \hat{\beta} = (X' V^{-1} X)^{-1} X' V^{-1} Y. The second step is to compute residuals

Z = Y - X\hat{\beta} = Q_v Y

and apply maximum likelihood to obtain estimates of {\sigma_i^2}. Define Z^* = T_v Y, where T_v consists of n - s linearly independent rows of Q_v. The likelihood can now be written as

L = K |T_v V T_v'|^{-1/2} exp{ -(1/2) Y' T_v' (T_v V T_v')^{-1} T_v Y }.

Treating T_v as a constant (the {\alpha_i} have been replaced by the {\sigma_i^2}) gives the

likelihood equations,

\partial ln L / \partial \sigma_i^2 = -(1/2) tr[(T_v V T_v')^{-1} T_v V_i T_v'] + (1/2) Y' T_v' (T_v V T_v')^{-1} T_v V_i T_v' (T_v V T_v')^{-1} T_v Y

for i = 1, 2, ..., p. Equating the partial derivatives to zero and rearranging terms leads to

tr[T_v' (T_v V T_v')^{-1} T_v V_i] = Y' T_v' (T_v V T_v')^{-1} T_v V_i T_v' (T_v V T_v')^{-1} T_v Y   (3)

for i = 1, 2, ..., p. Khatri (1966) has shown that for any symmetric positive definite matrix C,

T_v' (T_v C T_v')^{-1} T_v = C^{-1} - C^{-1} X (X' C^{-1} X)^{-1} X' C^{-1}.

Direct substitution into (3) gives

tr(R V_i) = Y' R V_i R Y

for i = 1, 2, ..., p. Also we can replace V by \hat{V} = \hat{\sigma}_1^2 V_1 + \cdots + \hat{\sigma}_p^2 V_p to obtain

tr(\hat{R} V_i) = Y' \hat{R} V_i \hat{R} Y   (4)

for i = 1, 2, ..., p. The quadratic forms in (4) are similar to those required by MINQUE, the only difference being the nature of V or \hat{V}. In MINQUE theory V is a function of the {\alpha_i}, while in maximum likelihood estimation \hat{V} is a function of the {\hat{\sigma}_i^2}. Also, if the {\alpha_i} are chosen correctly, i.e. agree with the {\hat{\sigma}_i^2}, then the expected values of the quadratic forms obtained under the MINQUE principle can be written as

E[Q_i] = \sum_j tr(R V_i R V_j) \sigma_j^2 = tr(R V_i).

It follows immediately that if one uses an iterative scheme for solving the maximum likelihood equations in which the {\alpha_i} are updated to agree with the {\hat{\sigma}_i^2} (with some provision to replace negative values with small positive constants), the solution will agree exactly with that obtained from the MINQUE equations using the same rules for iteration.

IV. Estimation in the Nested Model

The development in Sections II and III has been in terms of a relatively general linear model. For the remainder of this paper we specialize to the 3-level nested model with only the one fixed parameter \mu. The 3-level nested model was selected because it is sufficiently complex to display the general pattern of terms that arise when more levels are introduced, without the notation becoming too complex. The extension to the case where the random effects are nested within levels of a fixed effect is obvious and will not be discussed. The 3-level nested model is given by

Y_{hijk} = \mu + a_h + b_{hi} + c_{hij} + e_{hijk}   (5)

for k = 1, 2, ..., n_{hij}; j = 1, 2, ..., m_{hi}; i = 1, 2, ..., m_h; h = 1, 2, ..., m, where {Y_{hijk}} are the observations, \mu is an unknown constant, {a_h} are independent N(0, \sigma_1^2), {b_{hi}} are independent N(0, \sigma_2^2), {c_{hij}} are independent N(0, \sigma_3^2), {e_{hijk}} are independent N(0, \sigma_4^2), and

the latter four sets are also mutually independent.

Since we will have occasion to refer to the value obtained by summation over certain subscripts, we will use the convention of indicating such a sum by replacing the subscript with the + symbol; for example, n_{hi+} = \sum_j n_{hij}, and so on. To establish the connection between (1) and (5), note that Y corresponds to the n_{+++} elements {Y_{hijk}} in dictionary order, X is a column of 1's and \beta the one element \mu. U_1 corresponds to an n_{+++} x m matrix of 0's and 1's, U_2 to an n_{+++} x m_+ matrix of 0's and 1's, U_3 to an n_{+++} x m_{++} matrix, and U_4 to the n_{+++} x n_{+++} identity matrix. The covariance between Y_{hijk} and Y_{h'i'j'k'} can be written as

\delta_{hh'}(\sigma_1^2 + \delta_{ii'}(\sigma_2^2 + \delta_{jj'}(\sigma_3^2 + \delta_{kk'}\sigma_4^2))),

where \delta_{ll'} = 1 if l = l' and \delta_{ll'} = 0 if l \neq l'. The corresponding element of V can be written as

\delta_{hh'}(\alpha_1 + \delta_{ii'}(\alpha_2 + \delta_{jj'}(\alpha_3 + \delta_{kk'}\alpha_4))),

where \alpha_i represents the prior estimate for \sigma_i^2. Since only the relative magnitudes of the {\alpha_i} are important they can be scaled so that one of them is 1. Also, if one wishes the more primitive form of the MINQUE, Rao (1972), all can be set equal to 1. The extension to more levels of nesting is immediate. The corresponding element of V^{-1} can be written as

where

w_{hij} = \alpha_3(\alpha_3 n_{hij} + \alpha_4)^{-1},
w_{hi} = \alpha_2(\alpha_2 \tilde{n}_{hi} + \alpha_3)^{-1},
w_h = \alpha_1(\alpha_1 \tilde{n}_h + \alpha_2)^{-1},
w = (\sum_h \tilde{n}_h w_h)^{-1},

with \tilde{n}_{hi} = \sum_j n_{hij} w_{hij} and \tilde{n}_h = \sum_i \tilde{n}_{hi} w_{hi}. Summation over h', i', j' and k' yields \alpha_1^{-1} w_{hij} w_{hi} w_h as the element of the vector V^{-1}X corresponding to Y_{hijk} in Y. Also note that the scalar X'V^{-1}X is \alpha_1^{-1} w^{-1}.

The general element of R = V^{-1} - V^{-1}X(X'V^{-1}X)^{-1}X'V^{-1} now follows: it is the corresponding element of V^{-1} less

\alpha_1^{-1} w\, w_{hij} w_{hi} w_h w_{h'i'j'} w_{h'i'} w_{h'}.

The pattern for the case with more levels is obvious. The typical element of Z = RY is

z_{hijk} = \alpha_4^{-1} Y_{hijk} - \alpha_4^{-1} w_{hij} Y_{hij+}
         - \alpha_3^{-1} w_{hij} w_{hi} (\sum_{j'} w_{hij'} Y_{hij'+})
         - \alpha_2^{-1} w_{hij} w_{hi} w_h (\sum_{i'} \sum_{j'} w_{hi'} w_{hi'j'} Y_{hi'j'+})
         - \alpha_1^{-1} w_{hij} w_{hi} w_h w (\sum_{h'} \sum_{i'} \sum_{j'} w_{h'} w_{h'i'} w_{h'i'j'} Y_{h'i'j'+}).

The quadratic forms in (2) are now obtained as

Q_4 = \sum_h \sum_i \sum_j \sum_k z_{hijk}^2,
Q_3 = \sum_h \sum_i \sum_j z_{hij+}^2,
Q_2 = \sum_h \sum_i z_{hi++}^2,  and
Q_1 = \sum_h z_{h+++}^2,

where the + symbol indicates summation over the subscript before squaring. We note at this point that these quadratic forms are obtained by two passes through the data file and are not affected by the size of the data set.

In order to obtain expressions for the elements in RV_1, RV_2, RV_3, RV_4 and the various products in a systematic pattern it is convenient to define

t_h = \alpha_2^{-1} + w_h w \alpha_1^{-1},
t_{hi} = \alpha_3^{-1} + w_{hi} w_h t_h,  and
t_{hij} = \alpha_4^{-1} + w_{hij} w_{hi} t_{hi}.

The elements in RV_1, RV_2, RV_3 and RV_4 are given in Table I in a convenient form. Expressions for tr(RV_i RV_j) for 4 \geq i \geq j \geq 1 are given in Appendix A. These expressions can (and should) be condensed before being used for computation, although when this is done the pattern is not as clear. We also

notice that if one arrives at a point where the estimates obtained agree with the prior values, then the variance-covariance matrix of the estimates can be approximated by twice the inverse of the left side of (2).
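The weight recursion of Section IV lends itself to a single bottom-up sweep. A sketch, assuming the definitions w_hij = \alpha_3(\alpha_3 n_{hij} + \alpha_4)^{-1}, w_hi = \alpha_2(\alpha_2 \tilde{n}_{hi} + \alpha_3)^{-1}, w_h = \alpha_1(\alpha_1 \tilde{n}_h + \alpha_2)^{-1}, with \tilde{n}_{hi} = \sum_j n_{hij} w_{hij} and \tilde{n}_h = \sum_i \tilde{n}_{hi} w_{hi}; the cell-size pattern below is invented for illustration:

```python
# Bottom-up computation of the weights of Section IV.  Each level's
# weight uses the effective cell size accumulated from the level below,
# so everything is computable in one sweep over the nesting pattern.
def nested_weights(pattern, alphas):
    """pattern: list over h of lists over i of lists of cell sizes n_hij;
    alphas = (alpha_1, alpha_2, alpha_3, alpha_4)."""
    a1, a2, a3, a4 = alphas
    w3, w2, w1 = {}, {}, {}          # w_hij, w_hi, w_h
    for h, cells_h in enumerate(pattern):
        nt_h = 0.0                   # accumulates n~_h
        for i, cells_hi in enumerate(cells_h):
            nt_hi = 0.0              # accumulates n~_hi
            for j, n in enumerate(cells_hi):
                w3[(h, i, j)] = a3 / (a3 * n + a4)
                nt_hi += n * w3[(h, i, j)]
            w2[(h, i)] = a2 / (a2 * nt_hi + a3)
            nt_h += nt_hi * w2[(h, i)]
        w1[h] = a1 / (a1 * nt_h + a2)
    return w1, w2, w3

w1, w2, w3 = nested_weights([[[2]]], alphas=(1.0, 1.0, 1.0, 1.0))
```

With a single cell of size 2 and all priors equal to 1 this gives w_hij = 1/3, w_hi = 3/5 and w_h = 5/7, which can be checked by hand from the definitions.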

Table I. Elements in the RV_1, RV_2, RV_3, and RV_4 Matrices

[The table gives the (hijk, h'i'j'k') element of each product for the five cases \delta_{hh'}\delta_{ii'}\delta_{jj'}\delta_{kk'}, \delta_{hh'}\delta_{ii'}\delta_{jj'}(1-\delta_{kk'}), \delta_{hh'}\delta_{ii'}(1-\delta_{jj'}), \delta_{hh'}(1-\delta_{ii'}) and (1-\delta_{hh'}); the entries, products of the w's and t's defined in the text, are largely illegible in this copy.]
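The four quadratic forms of Section IV can be accumulated in one pass over (h, i, j, z) records: square each z for Q_4, and square the partial totals z_{hij+}, z_{hi++}, z_{h+++} for Q_3, Q_2 and Q_1. Dictionary grouping here stands in for the sorted-file pass described in the text (a convenience, not the paper's routine):

```python
# One pass over (h, i, j, z) records accumulates all four quadratic
# forms: q4 sums the squared elements, while the dictionaries build the
# partial totals whose squares give q3, q2 and q1.
def nested_quadratic_forms(records):
    """records: iterable of (h, i, j, z) tuples, z an element of Z = RY."""
    q4 = 0.0
    t3, t2, t1 = {}, {}, {}          # totals z_hij+, z_hi++, z_h+++
    for h, i, j, z in records:
        q4 += z * z
        t3[(h, i, j)] = t3.get((h, i, j), 0.0) + z
        t2[(h, i)] = t2.get((h, i), 0.0) + z
        t1[h] = t1.get(h, 0.0) + z
    q3 = sum(v * v for v in t3.values())
    q2 = sum(v * v for v in t2.values())
    q1 = sum(v * v for v in t1.values())
    return q1, q2, q3, q4

# A tiny hand-checkable example (values invented).
recs = [(1, 1, 1, 2.0), (1, 1, 1, -1.0), (1, 1, 2, 3.0), (2, 1, 1, 1.0)]
q1, q2, q3, q4 = nested_quadratic_forms(recs)
```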

V. Computational Aspects

An examination of the formulas developed in the previous section shows that a very efficient computing routine is possible. If the data are sorted or arranged according to the nesting structure then all totals required in equation (2) can be obtained in one pass. Alternatively, if one wishes to minimize the programming effort, all the accumulations can be obtained rather easily with any one of a number of standard programs for computing means and totals of subsets of observations in a data set. Summaries need to be obtained at each level of nesting; these summaries are then merged and a final set of accumulations obtained.

In order to examine the behavior of the suggested iterative scheme, several analyses were performed on synthetic, unbalanced data sets. The results of three typical cases are given in Table II. The striking feature of these computations is the very rapid convergence; in fact it appears that 2 or at most 3 cycles are sufficient. The extent of the unbalance in the data can be judged from Table III.

VI. Acknowledgement

The authors wish to thank T. M. Gerig for his valuable comments and suggestions in the preparation of this paper.

TABLE II. Summary of Three Analyses

A. True parameter values; starting values; estimates, first through fifth cycle; A.O.V. estimates.
B. True parameter values; starting values; estimates, first through fourth cycle.
C. True parameter values; starting values; estimates, first through fifth cycle; A.O.V. estimates.

[Numerical entries illegible in this copy.]
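The iterative scheme whose convergence Table II summarizes can be sketched as follows: solve the MINQUE system, replace any negative component by a small positive value, feed the estimates back in as the new priors, and stop once they settle (at which point, by Section III, the restricted maximum likelihood equations are also satisfied). The solve_minque below is a dense O(n^3) stand-in for the paper's one-pass routine, and the toy design and data are invented:

```python
import numpy as np

# Dense stand-in for one MINQUE solve: build R = V^{-1}(I - P), form
# Q_i = Y'RV_iRY and S_ij = tr(RV_iRV_j), and solve S s = Q.
def solve_minque(Y, X, Vs, alphas):
    Vinv = np.linalg.inv(sum(a * V for a, V in zip(alphas, Vs)))
    P = X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)
    R = Vinv @ (np.eye(len(Y)) - P)
    Q = np.array([Y @ R @ V @ R @ Y for V in Vs])
    S = np.array([[np.trace(R @ Vi @ R @ Vj) for Vj in Vs] for Vi in Vs])
    return np.linalg.solve(S, Q)

def iterated_minque(Y, X, Vs, start, tol=1e-8, max_cycles=20, floor=1e-6):
    """Iterate MINQUE, clamping negative components to a small floor."""
    alphas = np.asarray(start, dtype=float)
    for cycle in range(1, max_cycles + 1):
        est = np.maximum(solve_minque(Y, X, Vs, alphas), floor)
        if np.max(np.abs(est - alphas)) < tol:   # fixed point reached
            return est, cycle
        alphas = est
    return alphas, max_cycles

# Toy two-component design: two groups of sizes 2 and 3, one fixed mean.
X = np.ones((5, 1))
U1 = np.zeros((5, 2)); U1[:2, 0] = 1.0; U1[2:, 1] = 1.0
Vs = [U1 @ U1.T, np.eye(5)]
Y = np.array([3.1, 2.9, 5.0, 5.2, 4.8])
est, cycles = iterated_minque(Y, X, Vs, start=[1.0, 1.0])
```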

TABLE III. Pattern of n_{hij}'s in Simulated Data Sets

[For each h and i the table lists the cell sizes n_{hij} used in sets A and B and in set C; the entries are largely illegible in this copy.]
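Unbalanced patterns like those of Table III, and data from model (5), can be generated along these lines; the pattern and parameter values here are hypothetical, not the paper's simulated sets:

```python
import numpy as np

# Simulate Y_hijk = mu + a_h + b_hi + c_hij + e_hijk with independent
# normal effects drawn at each level of nesting.
rng = np.random.default_rng(0)

def simulate(mu, sig2, pattern):
    """pattern: list over h of lists over i of lists of cell sizes n_hij;
    sig2 = (sigma_1^2, sigma_2^2, sigma_3^2, sigma_4^2)."""
    s1, s2, s3, s4 = np.sqrt(sig2)
    data = []
    for cells_h in pattern:                 # level h
        a = rng.normal(0.0, s1)
        for cells_hi in cells_h:            # level i within h
            b = rng.normal(0.0, s2)
            for n_hij in cells_hi:          # level j within (h, i)
                c = rng.normal(0.0, s3)
                e = rng.normal(0.0, s4, size=n_hij)
                data.append(mu + a + b + c + e)
    return np.concatenate(data)

pattern = [[[2, 3], [1]], [[2], [3, 1, 2]]]     # hypothetical unbalance
Y = simulate(10.0, (1.0, 0.5, 0.25, 1.0), pattern)
```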

REFERENCES

Corbeil, R. R. and Searle, S. R. (1976). Restricted maximum likelihood (REML) estimation of variance components in the mixed model. Technometrics 18.

Hemmerle, W. J. and Hartley, H. O. (1973). Computing maximum likelihood estimates for the mixed A.O.V. model using the W-transformation. Technometrics 15.

Khatri, C. G. (1966). A note on a MANOVA model applied to problems in growth curve. Ann. Inst. Stat. Math. 18.

LaMotte, L. R. (1972). Notes on the covariance matrix of a random nested ANOVA model. Ann. Math. Statist. 43.

Rao, C. R. (1970). Estimation of heteroscedastic variances in linear models. J. Amer. Statist. Assoc. 65.

Rao, C. R. (1971a). Estimation of variance and covariance components - MINQUE theory. J. Mult. Analysis 1.

Rao, C. R. (1971b). Minimum variance quadratic unbiased estimation of variance components. J. Mult. Analysis 1.

Rao, C. R. (1972). Estimation of variance and covariance components in linear models. J. Amer. Statist. Assoc. 67.

APPENDIX A: Expressions for tr(RV_i RV_j), 4 \geq i \geq j \geq 1

[The appendix lists, in terms of the cell sizes n_{hij}, the weights w_{hij}, w_{hi}, w_h, w and the quantities t_h, t_{hi}, t_{hij} defined in Section IV, the explicit sums giving tr(RV_4 RV_4), tr(RV_4 RV_3), ..., tr(RV_1 RV_1); the expressions are largely illegible in this copy.]


More information

Notation. Pattern Recognition II. Michal Haindl. Outline - PR Basic Concepts. Pattern Recognition Notions

Notation. Pattern Recognition II. Michal Haindl. Outline - PR Basic Concepts. Pattern Recognition Notions Notation S pattern space X feature vector X = [x 1,...,x l ] l = dim{x} number of features X feature space K number of classes ω i class indicator Ω = {ω 1,...,ω K } g(x) discriminant function H decision

More information

1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation

1 Outline. 1. Motivation. 2. SUR model. 3. Simultaneous equations. 4. Estimation 1 Outline. 1. Motivation 2. SUR model 3. Simultaneous equations 4. Estimation 2 Motivation. In this chapter, we will study simultaneous systems of econometric equations. Systems of simultaneous equations

More information

Data Mining and Matrices

Data Mining and Matrices Data Mining and Matrices 05 Semi-Discrete Decomposition Rainer Gemulla, Pauli Miettinen May 16, 2013 Outline 1 Hunting the Bump 2 Semi-Discrete Decomposition 3 The Algorithm 4 Applications SDD alone SVD

More information

5. Discriminant analysis

5. Discriminant analysis 5. Discriminant analysis We continue from Bayes s rule presented in Section 3 on p. 85 (5.1) where c i is a class, x isap-dimensional vector (data case) and we use class conditional probability (density

More information

Matrix & Linear Algebra

Matrix & Linear Algebra Matrix & Linear Algebra Jamie Monogan University of Georgia For more information: http://monogan.myweb.uga.edu/teaching/mm/ Jamie Monogan (UGA) Matrix & Linear Algebra 1 / 84 Vectors Vectors Vector: A

More information

Numerical Methods. Elena loli Piccolomini. Civil Engeneering. piccolom. Metodi Numerici M p. 1/??

Numerical Methods. Elena loli Piccolomini. Civil Engeneering.  piccolom. Metodi Numerici M p. 1/?? Metodi Numerici M p. 1/?? Numerical Methods Elena loli Piccolomini Civil Engeneering http://www.dm.unibo.it/ piccolom elena.loli@unibo.it Metodi Numerici M p. 2/?? Least Squares Data Fitting Measurement

More information

LINEAR MODELS FOR CLASSIFICATION. J. Elder CSE 6390/PSYC 6225 Computational Modeling of Visual Perception

LINEAR MODELS FOR CLASSIFICATION. J. Elder CSE 6390/PSYC 6225 Computational Modeling of Visual Perception LINEAR MODELS FOR CLASSIFICATION Classification: Problem Statement 2 In regression, we are modeling the relationship between a continuous input variable x and a continuous target variable t. In classification,

More information

Course Summary Math 211

Course Summary Math 211 Course Summary Math 211 table of contents I. Functions of several variables. II. R n. III. Derivatives. IV. Taylor s Theorem. V. Differential Geometry. VI. Applications. 1. Best affine approximations.

More information

A. R. Manson, R. J. Hader, M. J. Karson. Institute of Statistics Mimeograph Series No. 826 Raleigh

A. R. Manson, R. J. Hader, M. J. Karson. Institute of Statistics Mimeograph Series No. 826 Raleigh F 'F MINIMJM BIAS ESTIMATION.AlfD EXPERIMENTAL DESIGN APPLIED TO UNIVARIATE rolynomial M)DELS by A. R. Manson, R. J. Hader, M. J. Karson Institute of Statistics Mimeograph Series No. 826 Raleigh - 1972

More information

SPACES OF MATRICES WITH SEVERAL ZERO EIGENVALUES

SPACES OF MATRICES WITH SEVERAL ZERO EIGENVALUES SPACES OF MATRICES WITH SEVERAL ZERO EIGENVALUES M. D. ATKINSON Let V be an w-dimensional vector space over some field F, \F\ ^ n, and let SC be a space of linear mappings from V into itself {SC ^ Horn

More information

14 Multiple Linear Regression

14 Multiple Linear Regression B.Sc./Cert./M.Sc. Qualif. - Statistics: Theory and Practice 14 Multiple Linear Regression 14.1 The multiple linear regression model In simple linear regression, the response variable y is expressed in

More information

Matrix Algebra Review

Matrix Algebra Review APPENDIX A Matrix Algebra Review This appendix presents some of the basic definitions and properties of matrices. Many of the matrices in the appendix are named the same as the matrices that appear in

More information

I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN Introduction Edps/Psych/Stat/ 584 Applied Multivariate Statistics Carolyn J Anderson Department of Educational Psychology I L L I N O I S UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN c Board of Trustees,

More information

Chapter 4: Factor Analysis

Chapter 4: Factor Analysis Chapter 4: Factor Analysis In many studies, we may not be able to measure directly the variables of interest. We can merely collect data on other variables which may be related to the variables of interest.

More information

MATH 112 QUADRATIC AND BILINEAR FORMS NOVEMBER 24, Bilinear forms

MATH 112 QUADRATIC AND BILINEAR FORMS NOVEMBER 24, Bilinear forms MATH 112 QUADRATIC AND BILINEAR FORMS NOVEMBER 24,2015 M. J. HOPKINS 1.1. Bilinear forms and matrices. 1. Bilinear forms Definition 1.1. Suppose that F is a field and V is a vector space over F. bilinear

More information

Lecture 2: Linear Models. Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011

Lecture 2: Linear Models. Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011 Lecture 2: Linear Models Bruce Walsh lecture notes Seattle SISG -Mixed Model Course version 23 June 2011 1 Quick Review of the Major Points The general linear model can be written as y = X! + e y = vector

More information

Marcia Gumpertz and Sastry G. Pantula Department of Statistics North Carolina State University Raleigh, NC

Marcia Gumpertz and Sastry G. Pantula Department of Statistics North Carolina State University Raleigh, NC A Simple Approach to Inference in Random Coefficient Models March 8, 1988 Marcia Gumpertz and Sastry G. Pantula Department of Statistics North Carolina State University Raleigh, NC 27695-8203 Key Words

More information

MIVQUE and Maximum Likelihood Estimation for Multivariate Linear Models with Incomplete Observations

MIVQUE and Maximum Likelihood Estimation for Multivariate Linear Models with Incomplete Observations Sankhyā : The Indian Journal of Statistics 2006, Volume 68, Part 3, pp. 409-435 c 2006, Indian Statistical Institute MIVQUE and Maximum Likelihood Estimation for Multivariate Linear Models with Incomplete

More information

Coskewness and Cokurtosis John W. Fowler July 9, 2005

Coskewness and Cokurtosis John W. Fowler July 9, 2005 Coskewness and Cokurtosis John W. Fowler July 9, 2005 The concept of a covariance matrix can be extended to higher moments; of particular interest are the third and fourth moments. A very common application

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Some Remarks on the Discrete Uncertainty Principle

Some Remarks on the Discrete Uncertainty Principle Highly Composite: Papers in Number Theory, RMS-Lecture Notes Series No. 23, 2016, pp. 77 85. Some Remarks on the Discrete Uncertainty Principle M. Ram Murty Department of Mathematics, Queen s University,

More information

COMPARISON OF FIVE TESTS FOR THE COMMON MEAN OF SEVERAL MULTIVARIATE NORMAL POPULATIONS

COMPARISON OF FIVE TESTS FOR THE COMMON MEAN OF SEVERAL MULTIVARIATE NORMAL POPULATIONS Communications in Statistics - Simulation and Computation 33 (2004) 431-446 COMPARISON OF FIVE TESTS FOR THE COMMON MEAN OF SEVERAL MULTIVARIATE NORMAL POPULATIONS K. Krishnamoorthy and Yong Lu Department

More information

The Solution of Linear Systems AX = B

The Solution of Linear Systems AX = B Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has

More information

A note on the equality of the BLUPs for new observations under two linear models

A note on the equality of the BLUPs for new observations under two linear models ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 14, 2010 A note on the equality of the BLUPs for new observations under two linear models Stephen J Haslett and Simo Puntanen Abstract

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016 Linear Algebra Notes Lecture Notes, University of Toronto, Fall 2016 (Ctd ) 11 Isomorphisms 1 Linear maps Definition 11 An invertible linear map T : V W is called a linear isomorphism from V to W Etymology:

More information

Lecture 3: Linear Models. Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012

Lecture 3: Linear Models. Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012 Lecture 3: Linear Models Bruce Walsh lecture notes Uppsala EQG course version 28 Jan 2012 1 Quick Review of the Major Points The general linear model can be written as y = X! + e y = vector of observed

More information

Lecture 6. Notes on Linear Algebra. Perceptron

Lecture 6. Notes on Linear Algebra. Perceptron Lecture 6. Notes on Linear Algebra. Perceptron COMP90051 Statistical Machine Learning Semester 2, 2017 Lecturer: Andrey Kan Copyright: University of Melbourne This lecture Notes on linear algebra Vectors

More information

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers

ALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers ALGEBRA CHRISTIAN REMLING 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers by Z = {..., 2, 1, 0, 1,...}. Given a, b Z, we write a b if b = ac for some

More information

ON EXACT INFERENCE IN LINEAR MODELS WITH TWO VARIANCE-COVARIANCE COMPONENTS

ON EXACT INFERENCE IN LINEAR MODELS WITH TWO VARIANCE-COVARIANCE COMPONENTS Ø Ñ Å Ø Ñ Ø Ð ÈÙ Ð Ø ÓÒ DOI: 10.2478/v10127-012-0017-9 Tatra Mt. Math. Publ. 51 (2012), 173 181 ON EXACT INFERENCE IN LINEAR MODELS WITH TWO VARIANCE-COVARIANCE COMPONENTS Júlia Volaufová Viktor Witkovský

More information

WEAKER MSE CRITERIA AND TESTS FOR LINEAR RESTRICTIONS IN REGRESSION MODELS WITH NON-SPHERICAL DISTURBANCES. Marjorie B. MCELROY *

WEAKER MSE CRITERIA AND TESTS FOR LINEAR RESTRICTIONS IN REGRESSION MODELS WITH NON-SPHERICAL DISTURBANCES. Marjorie B. MCELROY * Journal of Econometrics 6 (1977) 389-394. 0 North-Holland Publishing Company WEAKER MSE CRITERIA AND TESTS FOR LINEAR RESTRICTIONS IN REGRESSION MODELS WITH NON-SPHERICAL DISTURBANCES Marjorie B. MCELROY

More information

Matrices. In this chapter: matrices, determinants. inverse matrix

Matrices. In this chapter: matrices, determinants. inverse matrix Matrices In this chapter: matrices, determinants inverse matrix 1 1.1 Matrices A matrix is a retangular array of numbers. Rows: horizontal lines. A = a 11 a 12 a 13 a 21 a 22 a 23 a 31 a 32 a 33 a 41 a

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a beginning course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc I will review some of these terms here,

More information

A NOTE ON CONFIDENCE BOUNDS CONNECTED WITH ANOVA AND MANOVA FOR BALANCED AND PARTIALLY BALANCED INCOMPLETE BLOCK DESIGNS. v. P.

A NOTE ON CONFIDENCE BOUNDS CONNECTED WITH ANOVA AND MANOVA FOR BALANCED AND PARTIALLY BALANCED INCOMPLETE BLOCK DESIGNS. v. P. ... --... I. A NOTE ON CONFIDENCE BOUNDS CONNECTED WITH ANOVA AND MANOVA FOR BALANCED AND PARTIALLY BALANCED INCOMPLETE BLOCK DESIGNS by :}I v. P. Bhapkar University of North Carolina. '".". This research

More information

A RESOLUTION RANK CRITERION FOR SUPERSATURATED DESIGNS

A RESOLUTION RANK CRITERION FOR SUPERSATURATED DESIGNS Statistica Sinica 9(1999), 605-610 A RESOLUTION RANK CRITERION FOR SUPERSATURATED DESIGNS Lih-Yuan Deng, Dennis K. J. Lin and Jiannong Wang University of Memphis, Pennsylvania State University and Covance

More information

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved

Fundamentals of Linear Algebra. Marcel B. Finan Arkansas Tech University c All Rights Reserved Fundamentals of Linear Algebra Marcel B. Finan Arkansas Tech University c All Rights Reserved 2 PREFACE Linear algebra has evolved as a branch of mathematics with wide range of applications to the natural

More information

On Eigenvalues of Row-Inverted Sylvester Hadamard Matrices

On Eigenvalues of Row-Inverted Sylvester Hadamard Matrices Result.Math. 54 (009), 117 16 c 009 Birkhäuser Verlag Basel/Switzerland 14-68/010117-10, published online January 1, 009 DOI 10.1007/s0005-008-0-4 Results in Mathematics On Eigenvalues of Row-Inverted

More information

The dual simplex method with bounds

The dual simplex method with bounds The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the

More information

Algebraic Properties of Solutions of Linear Systems

Algebraic Properties of Solutions of Linear Systems Algebraic Properties of Solutions of Linear Systems In this chapter we will consider simultaneous first-order differential equations in several variables, that is, equations of the form f 1t,,,x n d f

More information

Intelligent Data Analysis. Principal Component Analysis. School of Computer Science University of Birmingham

Intelligent Data Analysis. Principal Component Analysis. School of Computer Science University of Birmingham Intelligent Data Analysis Principal Component Analysis Peter Tiňo School of Computer Science University of Birmingham Discovering low-dimensional spatial layout in higher dimensional spaces - 1-D/3-D example

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

NONLINEAR REGRESSION FOR SPLIT PLOT EXPERIMENTS

NONLINEAR REGRESSION FOR SPLIT PLOT EXPERIMENTS Kansas State University Libraries pplied Statistics in Agriculture 1990-2nd Annual Conference Proceedings NONLINEAR REGRESSION FOR SPLIT PLOT EXPERIMENTS Marcia L. Gumpertz John O. Rawlings Follow this

More information

Computational methods for mixed models

Computational methods for mixed models Computational methods for mixed models Douglas Bates Department of Statistics University of Wisconsin Madison March 27, 2018 Abstract The lme4 package provides R functions to fit and analyze several different

More information

STAT 100C: Linear models

STAT 100C: Linear models STAT 100C: Linear models Arash A. Amini June 9, 2018 1 / 56 Table of Contents Multiple linear regression Linear model setup Estimation of β Geometric interpretation Estimation of σ 2 Hat matrix Gram matrix

More information

CS 195-5: Machine Learning Problem Set 1

CS 195-5: Machine Learning Problem Set 1 CS 95-5: Machine Learning Problem Set Douglas Lanman dlanman@brown.edu 7 September Regression Problem Show that the prediction errors y f(x; ŵ) are necessarily uncorrelated with any linear function of

More information

Heteroskedasticity ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD

Heteroskedasticity ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD Heteroskedasticity ECONOMETRICS (ECON 360) BEN VAN KAMMEN, PHD Introduction For pedagogical reasons, OLS is presented initially under strong simplifying assumptions. One of these is homoskedastic errors,

More information

Mixed-Model Estimation of genetic variances. Bruce Walsh lecture notes Uppsala EQG 2012 course version 28 Jan 2012

Mixed-Model Estimation of genetic variances. Bruce Walsh lecture notes Uppsala EQG 2012 course version 28 Jan 2012 Mixed-Model Estimation of genetic variances Bruce Walsh lecture notes Uppsala EQG 01 course version 8 Jan 01 Estimation of Var(A) and Breeding Values in General Pedigrees The above designs (ANOVA, P-O

More information

A matrix over a field F is a rectangular array of elements from F. The symbol

A matrix over a field F is a rectangular array of elements from F. The symbol Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F ) denotes the collection of all m n matrices over F Matrices will usually be denoted

More information

Mixed-Models. version 30 October 2011

Mixed-Models. version 30 October 2011 Mixed-Models version 30 October 2011 Mixed models Mixed models estimate a vector! of fixed effects and one (or more) vectors u of random effects Both fixed and random effects models always include a vector

More information

An Overview of Variance Component Estimation

An Overview of Variance Component Estimation An Overview of Variance Component Estimation by Shayle R. Searle Biometrics Unit, Cornell University, Ithaca, N.Y., U.S.A., 14853 BU-1231-M April 1994 AN OVERVIEW OF VARIANCE COMPONENT ESTIMATION Shayle

More information

Review of Basic Concepts in Linear Algebra

Review of Basic Concepts in Linear Algebra Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra

More information

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture Week9 Vector Spaces 9. Opening Remarks 9.. Solvable or not solvable, that s the question Consider the picture (,) (,) p(χ) = γ + γ χ + γ χ (, ) depicting three points in R and a quadratic polynomial (polynomial

More information