Orthogonal decompositions in growth curve models


ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA
Volume 14, 2010

Orthogonal decompositions in growth curve models

Daniel Klein and Ivan Žežula

Dedicated to Professor L. Kubáček on the occasion of his eightieth birthday

Abstract. The article shows the advantage of orthogonal decompositions in the standard and extended growth curve models. Using this, the distribution of the estimators of $\rho$ and $\sigma^2$ in the standard GCM with uniform correlation structure is derived. Also, the equivalence of Hu's and von Rosen's conditions in the extended GCM is shown under mild conditions.

1. The standard growth curve model with uniform correlation structure

The basic model that we consider is the following one:
$$Y = XBZ' + e, \qquad \operatorname{Vec}(e) \sim N_{np}(0,\, \Sigma \otimes I_n), \qquad \Sigma = \theta_1 G + \theta_2 ww', \qquad (1)$$
where $Y_{n\times p}$ is a matrix of independent $p$-variate observations, $X_{n\times m}$ is an ANOVA design matrix, $Z_{p\times r}$ is a regression variables matrix, and $e$ is a matrix of random errors. As for the unknown parameters, $B_{m\times r}$ is a location parameters matrix, and $\theta_1$, $\theta_2$ are (scalar) variance parameters. The matrix $G_{p\times p} > 0$ and the vector $w \in \mathbb{R}^p$ are known. The $\operatorname{Vec}$ operator stacks the elements of a matrix into a vector column-wise.

The assumed correlation structure is called the generalized uniform correlation structure. It was studied in the context of the growth curve model (GCM) in [6], and recently in [4]. A special case was studied also in [3].

Received March 5, 2010.
2010 Mathematics Subject Classification. Primary 62H12; Secondary 62J05.
Key words and phrases. Growth curve model, extended growth curve model, orthogonal decomposition, uniform correlation structure.
The research was supported by grants VEGA MŠ SR 1/0131/09, VEGA MŠ SR 1/0325/10, and by the Agency of the Slovak Ministry of Education for the Structural Funds of the EU under project ITMS:26220120007.

As for the estimation of the unknown parameters, Žežula in [6] used directly model (1), whereas Ye and Wang [4] used a modified model with orthogonal decomposition:
$$Y G^{-1/2} = Y_1 + Y_2, \qquad Y_1 = Y G^{-1/2} P_F = XBZ'G^{-1/2}P_F + e_1, \qquad Y_2 = Y G^{-1/2} M_F = XBZ'G^{-1/2}M_F + e_2, \qquad (2)$$
where $F = G^{-1/2}w$, $P_F$ is the orthogonal projection matrix onto the column space $\mathcal{R}(F)$ of $F$, and $M_F = I - P_F$ onto its orthogonal complement. Let us denote
$$S = \frac{1}{n - r(X)}\, Y'M_X Y, \qquad W_1 = P_F G^{-1/2} S\, G^{-1/2} P_F, \qquad W_2 = M_F G^{-1/2} S\, G^{-1/2} M_F.$$
The estimators of Žežula are
$$\hat\theta_1 = \frac{(w'w)\operatorname{Tr}(S) - w'Sw}{(w'w)\operatorname{Tr}(G) - w'Gw}, \qquad \hat\theta_2 = \frac{w'Sw\,\operatorname{Tr}(G) - w'Gw\,\operatorname{Tr}(S)}{(w'w)\left[(w'w)\operatorname{Tr}(G) - w'Gw\right]}, \qquad (3)$$
and the estimators of Ye and Wang are
$$\hat\theta_1 = \frac{\operatorname{Tr}(W_2)}{p-1} = \frac{w'G^{-1}w\cdot\operatorname{Tr}(G^{-1}S) - w'G^{-1}SG^{-1}w}{(p-1)\,w'G^{-1}w},$$
$$\hat\theta_2 = \frac{(p-1)\operatorname{Tr}(W_1) - \operatorname{Tr}(W_2)}{(p-1)\,w'G^{-1}w} = \frac{p\cdot w'G^{-1}SG^{-1}w - w'G^{-1}w\cdot\operatorname{Tr}(G^{-1}S)}{(p-1)\left(w'G^{-1}w\right)^2}. \qquad (4)$$
These pairs of estimators are both unbiased, but different. Naturally, we would like to know their variances. Since $S \sim W_p\!\left(n - r(X),\, \frac{1}{n-r(X)}\Sigma\right)$, it is easy to establish that
$$\operatorname{Var}(\operatorname{Vec} S) = \frac{1}{n-r(X)}\left(I_{p^2} + K_{pp}\right)(\Sigma \otimes \Sigma), \qquad (5)$$
where $K_{pp}$ is the commutation matrix, see e.g. [5]. This immediately implies
$$\operatorname{Var}\!\left[\operatorname{Tr}(G^{-1}S)\right] = \frac{2}{n-r(X)}\operatorname{Tr}\!\left(G^{-1}\Sigma G^{-1}\Sigma\right), \qquad \operatorname{Var}\!\left[w'G^{-1}SG^{-1}w\right] = \frac{2}{n-r(X)}\left(w'G^{-1}\Sigma G^{-1}w\right)^2,$$
$$\operatorname{Cov}\!\left[\operatorname{Tr}(G^{-1}S),\, w'G^{-1}SG^{-1}w\right] = \frac{2}{n-r(X)}\, w'G^{-1}\Sigma G^{-1}\Sigma G^{-1}w.$$
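As a quick numerical illustration (not part of the original paper), the following Python sketch simulates data from model (1) with hypothetical dimensions and parameter values of our choosing, and evaluates both pairs of estimators (3) and (4); all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate model (1): one group (m = 1), linear profile (r = 2), hypothetical sizes ---
n, p = 40, 4
X = np.ones((n, 1))                           # ANOVA design matrix
t = np.linspace(0.0, 1.0, p)
Z = np.column_stack([np.ones(p), t])          # p x r regression variables matrix
B = np.array([[1.0, 0.5]])                    # m x r location parameters
G = np.diag(np.arange(1.0, p + 1))            # known G > 0 (chosen non-trivial)
w = np.linspace(1.0, 2.0, p)                  # known w (chosen non-trivial)
theta1, theta2 = 1.0, 0.5
Sigma = theta1 * G + theta2 * np.outer(w, w)  # Sigma = theta1*G + theta2*ww'
e = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
Y = X @ B @ Z.T + e

# --- S = Y'M_X Y / (n - r(X)) ---
M_X = np.eye(n) - X @ np.linalg.pinv(X)
nu = n - np.linalg.matrix_rank(X)
S = Y.T @ M_X @ Y / nu

# --- Zezula's estimators (3) ---
ww, wGw, wSw = w @ w, w @ G @ w, w @ S @ w
d = ww * np.trace(G) - wGw
th1_Z = (ww * np.trace(S) - wSw) / d
th2_Z = (wSw * np.trace(G) - wGw * np.trace(S)) / (ww * d)

# --- Ye and Wang's estimators (4) ---
Gi = np.linalg.inv(G)
a = w @ Gi @ w                                # w'G^{-1}w
b = w @ Gi @ S @ Gi @ w                       # w'G^{-1}SG^{-1}w
tGS = np.trace(Gi @ S)
th1_YW = (a * tGS - b) / ((p - 1) * a)
th2_YW = (p * b - a * tGS) / ((p - 1) * a**2)

print(th1_Z, th2_Z)    # both pairs are unbiased for (theta1, theta2) = (1.0, 0.5),
print(th1_YW, th2_YW)  # but the two pairs generally differ for G != I or w != 1
```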

The necessary formulas for $\operatorname{Var}[\operatorname{Tr}(S)]$, $\operatorname{Var}(w'Sw)$, and $\operatorname{Cov}[\operatorname{Tr}(S), w'Sw]$ are special cases. A short computation gives
$$\operatorname{Var}\hat\theta_1 = \frac{2}{n-r(X)} \cdot \frac{(w'w)^2\operatorname{Tr}(\Sigma^2) - 2(w'w)\,w'\Sigma^2 w + (w'\Sigma w)^2}{\left[(w'w)\operatorname{Tr}(G) - w'Gw\right]^2}, \qquad (6)$$
$$\operatorname{Var}\hat\theta_2 = \frac{2}{n-r(X)} \cdot \frac{\left[\operatorname{Tr}(G)\,w'\Sigma w\right]^2 - 2\operatorname{Tr}(G)\,w'Gw\cdot w'\Sigma^2 w + (w'Gw)^2\operatorname{Tr}(\Sigma^2)}{(w'w)^2\left[(w'w)\operatorname{Tr}(G) - w'Gw\right]^2} \qquad (7)$$
for the estimators (3), and
$$\operatorname{Var}\hat\theta_1 = \frac{2\left[(w'G^{-1}w)^2\operatorname{Tr}(G^{-1}\Sigma G^{-1}\Sigma) + (w'G^{-1}\Sigma G^{-1}w)^2 - 2(w'G^{-1}w)(w'G^{-1}\Sigma G^{-1}\Sigma G^{-1}w)\right]}{(n-r(X))\,(p-1)^2\,(w'G^{-1}w)^2}, \qquad (8)$$
$$\operatorname{Var}\hat\theta_2 = \frac{2\left[(w'G^{-1}w)^2\operatorname{Tr}(G^{-1}\Sigma G^{-1}\Sigma) + p^2(w'G^{-1}\Sigma G^{-1}w)^2 - 2p\,(w'G^{-1}w)(w'G^{-1}\Sigma G^{-1}\Sigma G^{-1}w)\right]}{(n-r(X))\,(p-1)^2\,(w'G^{-1}w)^4} \qquad (9)$$
for the estimators (4). Analytical comparison of these quantities is quite difficult. The few simulations performed suggest that in general Ye and Wang's estimators tend to have smaller variance.

A very important special case of the previous model is the model with
$$\Sigma = \sigma^2\left[(1-\rho)I_p + \rho\,\mathbf{1}\mathbf{1}'\right]. \qquad (10)$$
This correlation structure is called the uniform correlation structure or the intraclass correlation structure. It is the case $G = I_p$ and $w = \mathbf{1}$, slightly reparametrized; it must hold $-\frac{1}{p-1} < \rho < 1$. As a special case of (1), estimators of $\sigma^2$ and $\rho$ can then be obtained by a simple transformation of $\hat\theta_1$ and $\hat\theta_2$:
$$\hat\sigma^2 = \hat\theta_1 + \hat\theta_2 \qquad\text{and}\qquad \hat\rho = \frac{\hat\theta_2}{\hat\theta_1 + \hat\theta_2}. \qquad (11)$$
This implies the following form of the estimators due to Žežula:
$$\hat\sigma^2_Z = \frac{\operatorname{Tr}(S)}{p}, \qquad \hat\rho_Z = \frac{\mathbf{1}'S\mathbf{1} - \operatorname{Tr}(S)}{(p-1)\operatorname{Tr}(S)}, \qquad (12)$$
and due to Ye and Wang:
$$\hat\sigma^2_{YW} = \frac{\operatorname{Tr}(V_1) + \operatorname{Tr}(V_2)}{p}, \qquad \hat\rho_{YW} = \frac{(p-1)\operatorname{Tr}(V_1) - \operatorname{Tr}(V_2)}{(p-1)\left(\operatorname{Tr}(V_1) + \operatorname{Tr}(V_2)\right)}, \qquad (13)$$
where $V_1 = P_1 S P_1$ and $V_2 = M_1 S M_1$, with $P_1 = \frac{1}{p}\mathbf{1}\mathbf{1}'$ the orthogonal projection onto $\mathcal{R}(\mathbf{1})$ and $M_1 = I_p - P_1$.
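A minimal sketch of the uniform-structure estimators (12), again not from the paper; the helper name sigma2_rho_uniform is ours, and S is the residual matrix defined in the text.

```python
import numpy as np

def sigma2_rho_uniform(S):
    """Estimators (12) under the uniform correlation structure (10)."""
    p = S.shape[0]
    ones = np.ones(p)
    tr_S = np.trace(S)
    sigma2_hat = tr_S / p                                    # Tr(S)/p
    rho_hat = (ones @ S @ ones - tr_S) / ((p - 1) * tr_S)    # (1'S1 - Tr(S)) / ((p-1)Tr(S))
    return sigma2_hat, rho_hat
```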

Ye and Wang recognized that $\hat\sigma^2_Z = \hat\sigma^2_{YW}$, but they failed to recognize that also $\hat\rho_Z = \hat\rho_{YW}$.

Lemma 1. $\hat\sigma^2_Z = \hat\sigma^2_{YW}$ and $\hat\rho_Z = \hat\rho_{YW}$ for any $Y$.

Proof. Trivially,
$$\operatorname{Tr}(V_1) = \operatorname{Tr}(P_1 S P_1) = \operatorname{Tr}(S P_1) = \frac{1}{p}\operatorname{Tr}(S\mathbf{1}\mathbf{1}') = \frac{1}{p}\,\mathbf{1}'S\mathbf{1},$$
$$\operatorname{Tr}(V_2) = \operatorname{Tr}(M_1 S M_1) = \operatorname{Tr}(S M_1) = \operatorname{Tr}(S) - \operatorname{Tr}(S P_1) = \operatorname{Tr}(S) - \frac{1}{p}\,\mathbf{1}'S\mathbf{1}.$$
Substituting these values into (13), we easily get (12). $\Box$

Thus, in the following we can write only $\hat\sigma^2$ and $\hat\rho$.

This orthogonal decomposition is very useful for the derivation of the distribution of the estimators.

Lemma 2. Let $H \sim W_p(l, \Xi)$, $\Xi > 0$, and let $T_{k\times p}$ be an arbitrary matrix. Then
$$\operatorname{Tr}(THT') \sim \sum_{i=1}^{r(T)} \lambda_i\, \chi^2_{l,i},$$
where $\lambda_1, \dots, \lambda_{r(T)}$ are all positive eigenvalues of $T\Xi T'$, $r(T)$ is the rank of $T$, and the $\chi^2_{l,i}$ are independent $\chi^2_l$-distributed random variables. In particular,
$$\operatorname{E}\operatorname{Tr}(THT') = l\sum_{i=1}^{r(T)} \lambda_i, \qquad \operatorname{Var}\operatorname{Tr}(THT') = 2l\sum_{i=1}^{r(T)} \lambda_i^2.$$

Proof. There must exist independent random vectors $X_1, \dots, X_l$ distributed as $N_p(0, \Xi)$ such that $H = \sum_{i=1}^l X_i X_i' = G'G$, where $G = (X_1, \dots, X_l)'$. Then $TX_i \sim N_k(0, T\Xi T')$ $\forall i$ (which may be singular). According to the Theorem in [1] it holds
$$\operatorname{Tr}(THT') = \operatorname{Tr}\!\left((GT')'(GT')\right) \sim \sum_{i=1}^{r(T)} \lambda_i\, \chi^2_{l,i},$$
where $\lambda_1, \dots, \lambda_{r(T)}$ are all positive eigenvalues of $T\Xi T'$. Since the $\chi^2$-statistics are independent, the claims about the mean and variance are trivial. $\Box$

The results concerning the distributions of $\operatorname{Tr}(V_1)$ and $\operatorname{Tr}(V_2)$ can be found in [4], but without proof. We extend these results to the parameters of interest.
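The identity of Lemma 1 is easy to confirm numerically; the following sketch (ours, with an arbitrarily generated positive definite matrix standing in for S) checks that the Ye-Wang forms built from V1 and V2 coincide with (12).

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5
A = rng.standard_normal((p, 2 * p))
S = A @ A.T / (2 * p)                      # an arbitrary positive definite "S"

ones = np.ones(p)
P1 = np.outer(ones, ones) / p              # projection onto span(1)
M1 = np.eye(p) - P1
T1 = np.trace(P1 @ S @ P1)                 # Tr(V1) = 1'S1/p
T2 = np.trace(M1 @ S @ M1)                 # Tr(V2) = Tr(S) - 1'S1/p

sigma2_yw = (T1 + T2) / p
rho_yw = ((p - 1) * T1 - T2) / ((p - 1) * (T1 + T2))
sigma2_z = np.trace(S) / p
rho_z = (ones @ S @ ones - np.trace(S)) / ((p - 1) * np.trace(S))

assert np.isclose(sigma2_yw, sigma2_z)     # Lemma 1, first claim
assert np.isclose(rho_yw, rho_z)           # Lemma 1, second claim
```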

Theorem 3. The distributions of $\operatorname{Tr}(V_1)$ and $\operatorname{Tr}(V_2)$ are independent, and
$$\operatorname{Tr}(V_1) \sim \frac{\sigma^2\left[1+(p-1)\rho\right]}{n-r(X)}\,\chi^2_{n-r(X)}, \qquad \operatorname{Tr}(V_2) \sim \frac{\sigma^2(1-\rho)}{n-r(X)}\,\chi^2_{(p-1)(n-r(X))},$$
so that
$$\hat\sigma^2 \sim \frac{\sigma^2}{p\,(n-r(X))}\left[\left(1+(p-1)\rho\right)\chi^2_{n-r(X)} + (1-\rho)\,\chi^2_{(p-1)(n-r(X))}\right],$$
$$\frac{1-\rho}{1+(p-1)\rho} \cdot \frac{1+(p-1)\hat\rho}{1-\hat\rho} \sim F_{n-r(X),\,(p-1)(n-r(X))}.$$

Proof. It is well known that under normality
$$S \sim W_p\!\left(n-r(X),\, \frac{1}{n-r(X)}\Sigma\right),$$
see e.g. Theorem 3.8 in [5]. We want to make use of Lemma 2 with $l = n-r(X)$, $\Xi = \frac{1}{n-r(X)}\Sigma$, $T = P_1$, and also with $T = M_1$. Since
$$P_1\left(\frac{1}{n-r(X)}\Sigma\right)P_1 = \frac{\sigma^2\left[1+(p-1)\rho\right]}{n-r(X)}\,P_1$$
is a multiple of an idempotent matrix of rank 1, its only positive eigenvalue is equal to $\sigma^2\left[1+(p-1)\rho\right]/(n-r(X))$. Similarly, since $M_1$ is idempotent with rank $p-1$ and
$$M_1\left(\frac{1}{n-r(X)}\Sigma\right)M_1 = \frac{\sigma^2(1-\rho)}{n-r(X)}\,M_1,$$
it has $p-1$ positive eigenvalues which are all equal to $\sigma^2(1-\rho)/(n-r(X))$. Now the results for $\operatorname{Tr}(V_1)$ and $\operatorname{Tr}(V_2)$ follow from Lemma 2, the perpendicularity of $M_1$ and $P_1$, and the properties of the $\chi^2$-distribution. This, together with (13), immediately implies the result for $\hat\sigma^2$. The second formula in (13) can be transformed to
$$\frac{1+(p-1)\hat\rho}{1-\hat\rho} = \frac{(p-1)\operatorname{Tr}(V_1)}{\operatorname{Tr}(V_2)}.$$
Because the distributions of $\operatorname{Tr}(V_1)$ and $\operatorname{Tr}(V_2)$ are independent, clearly
$$\frac{1-\rho}{1+(p-1)\rho} \cdot \frac{(p-1)\operatorname{Tr}(V_1)}{\operatorname{Tr}(V_2)} \sim F_{n-r(X),\,(p-1)(n-r(X))}. \qquad\Box$$
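A Monte Carlo sketch of the F-pivot of Theorem 3 (our addition, under an assumed one-group design so that r(X) = 1): the pivot computed from simulated data is compared to the claimed F-distribution with a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p, rho, sigma2 = 25, 4, 0.5, 2.0
nu = n - 1                                   # n - r(X) for a single-mean design
Sigma = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))
ones = np.ones(p)

pivots = []
for _ in range(5000):
    Y = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    S = np.cov(Y, rowvar=False)              # equals Y'M_X Y / (n-1) here
    rho_hat = (ones @ S @ ones - np.trace(S)) / ((p - 1) * np.trace(S))
    pivots.append((1 - rho) / (1 + (p - 1) * rho)
                  * (1 + (p - 1) * rho_hat) / (1 - rho_hat))

# compare with F_{nu,(p-1)nu}; a non-small p-value is expected under Theorem 3
print(stats.kstest(pivots, stats.f(nu, (p - 1) * nu).cdf))
```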

This result is not very useful with respect to $\hat\sigma^2$, since its distribution depends on both $\sigma^2$ and $\rho$, but it enables us to test for any specific value of $\rho$. Using a simple transformation, we can even derive directly the probability density function of $\hat\rho$:
$$f(x) = \frac{\Gamma\!\left(\frac{p(n-r(X))}{2}\right)}{\Gamma\!\left(\frac{n-r(X)}{2}\right)\Gamma\!\left(\frac{(p-1)(n-r(X))}{2}\right)}\, (1-\rho)^{\frac{n-r(X)}{2}} \left[1+(p-1)\rho\right]^{\frac{(p-1)(n-r(X))}{2}} (p-1)^{\frac{(p-1)(n-r(X))}{2}}\, p^{\,1-\frac{p(n-r(X))}{2}} \cdot \frac{\left(1+(p-1)x\right)^{\frac{n-r(X)}{2}-1}\, (1-x)^{\frac{(p-1)(n-r(X))}{2}-1}}{\left[1+(p-2)\rho-(p-1)\rho x\right]^{\frac{p(n-r(X))}{2}}}$$
for $x \in \left(-\frac{1}{p-1}, 1\right)$. Also, the $(1-\alpha)$ confidence interval for $\rho$ is given by
$$\left(\frac{1-c_2}{1+(p-1)c_2}\,;\ \frac{1-c_1}{1+(p-1)c_1}\right), \qquad (14)$$
where
$$c_1 = \frac{1-\hat\rho}{1+(p-1)\hat\rho}\, F_{n-r(X),\,(p-1)(n-r(X))}\!\left(\frac{\alpha}{2}\right) \qquad\text{and}\qquad c_2 = \frac{1-\hat\rho}{1+(p-1)\hat\rho}\, F_{n-r(X),\,(p-1)(n-r(X))}\!\left(1-\frac{\alpha}{2}\right).$$
Figures 1-4 below show histograms and theoretical densities of $\hat\rho$ for a special case of the model (quadratic growth in three groups, 5000 simulations) for various true values of the unknown parameter.

Example 4. Let us consider a random sample from the bivariate normal distribution with the same variances in both dimensions. It can be formally written as a GCM with the uniform correlation structure:
$$Y = \begin{pmatrix} Y_{11} & Y_{12} \\ \vdots & \vdots \\ Y_{n1} & Y_{n2} \end{pmatrix} = \mathbf{1}_n (\mu_1, \mu_2)\, I_2 + e, \qquad e \sim N_{n\times 2}\!\left(0,\ \sigma^2\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \otimes I_n\right).$$
Using the above-mentioned estimator we get
$$\hat\rho = \frac{2 s_{12}}{s_1^2 + s_2^2},$$
where $s_{12}$ is the sample covariance of the two variables, and $s_1^2$ and $s_2^2$ are the sample variances. This estimator is slightly more effective than the standard sample correlation coefficient (in the sense of MSE).
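The confidence interval (14) is straightforward to compute; the following sketch (ours, with hypothetical inputs and the function name rho_confidence_interval of our choosing) evaluates it using scipy's F-quantiles.

```python
from scipy import stats

def rho_confidence_interval(rho_hat, nu, p, alpha=0.05):
    """The (1 - alpha) interval (14) for rho; nu = n - r(X)."""
    Fdist = stats.f(nu, (p - 1) * nu)
    q = (1 - rho_hat) / (1 + (p - 1) * rho_hat)
    c1 = q * Fdist.ppf(alpha / 2)            # lower F-quantile -> upper bound
    c2 = q * Fdist.ppf(1 - alpha / 2)        # upper F-quantile -> lower bound
    lower = (1 - c2) / (1 + (p - 1) * c2)
    upper = (1 - c1) / (1 + (p - 1) * c1)
    return lower, upper

print(rho_confidence_interval(rho_hat=0.4, nu=24, p=4))
```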

[Figures 1-4: histograms and theoretical densities of $\hat\rho$; in Figures 3 and 4 the true values are $\rho = 0.5$ and $\rho = 0.95$.]

2. The extended growth curve model

The extended growth curve model (EGCM) with fixed effects, also called the sum-of-profiles model, is
$$Y = \sum_{i=1}^k X_i B_i Z_i' + e, \qquad e \sim N_{n\times p}(0,\, \Sigma \otimes I_n). \qquad (15)$$
The dimensions of the matrices $X_i$, $B_i$, and $Z_i$ are $n\times m_i$, $m_i\times r_i$, and $p\times r_i$, respectively. Usually it is supposed that the column spaces of the $X_i$'s are ordered,
$$\mathcal{R}(X_k) \subseteq \dots \subseteq \mathcal{R}(X_1), \qquad (16)$$
while nothing is said about the different matrices $Z_i$. Recently, Hu (see [2]) came up with a modification of the model, assuming
$$X_i'X_j = 0 \quad \forall\, i \neq j. \qquad (17)$$
His idea is to separate groups rather than models. We will show that the two models are under certain conditions equivalent.

Example 5. Let us consider an EGCM with two groups with different growth patterns, linear and quadratic:
$$Y_{ij} = \begin{cases} \beta_1 + \beta_2 t_j + e_{ij}, & i = 1,\dots,n_1,\ j = 1,\dots,p, \\ \beta_3 + \beta_4 t_j + \beta_5 t_j^2 + e_{ij}, & i = n_1+1,\dots,n_1+n_2,\ j = 1,\dots,p. \end{cases}$$

This model can be written as
$$Y = \begin{pmatrix} \mathbf{1}_{n_1} & 0 \\ 0 & \mathbf{1}_{n_2} \end{pmatrix} \begin{pmatrix} \beta_1 & \beta_2 \\ \beta_3 & \beta_4 \end{pmatrix} \begin{pmatrix} 1 & \dots & 1 \\ t_1 & \dots & t_p \end{pmatrix} + \begin{pmatrix} 0 \\ \mathbf{1}_{n_2} \end{pmatrix} \beta_5 \left(t_1^2, \dots, t_p^2\right) + e,$$
or, in the new way, as
$$Y = \begin{pmatrix} \mathbf{1}_{n_1} \\ 0 \end{pmatrix} (\beta_1, \beta_2) \begin{pmatrix} 1 & \dots & 1 \\ t_1 & \dots & t_p \end{pmatrix} + \begin{pmatrix} 0 \\ \mathbf{1}_{n_2} \end{pmatrix} (\beta_3, \beta_4, \beta_5) \begin{pmatrix} 1 & \dots & 1 \\ t_1 & \dots & t_p \\ t_1^2 & \dots & t_p^2 \end{pmatrix} + e.$$
Note that in the second form in the previous example $\mathcal{R}(Z_1) \subseteq \mathcal{R}(Z_2)$. This leads to the idea that we can consider a model in which the column spaces of all matrices $Z_i$ are nested, which naturally arises in situations when different groups use polynomial regression functions of different order. A sketch of this construction for Example 5 is given after this section's derivation below.

Let us consider model (15) with condition (16), such that
$$n - p \geq r(X_1). \qquad (18)$$
Since the $X_i$ are 0-1 matrices whose columns are indicators of different groups, without loss of generality we can assume that all columns of $X_1$ are mutually perpendicular, and the columns of every $X_{i+1}$ are a subset of the columns of $X_i$. Let us define $X_k^* = X_k$ and $X_i^* = X_i - X_{i+1}$, $i = 1,\dots,k-1$, where the symbol $X_i - X_{i+1}$ denotes the matrix consisting of those columns of $X_i$ which are not in $X_{i+1}$. It is easy to see that $X_i = (X_i^*,\dots,X_k^*)$ and $P_{X_i} - P_{X_{i+1}} = P_{X_i - X_{i+1}} = P_{X_i^*}$. Then we can reformulate the model (15) with von Rosen's condition (16) in the following way:
$$\operatorname{E}Y = \sum_{i=1}^k X_i B_i Z_i' = \sum_{i=1}^k (X_i^*,\dots,X_k^*) \begin{pmatrix} B_{ii} \\ \vdots \\ B_{ik} \end{pmatrix} Z_i' = \sum_{i=1}^k \sum_{j=i}^k X_j^* B_{ij} Z_i' = \sum_{j=1}^k X_j^* \left(B_{1j},\dots,B_{jj}\right) \begin{pmatrix} Z_1' \\ \vdots \\ Z_j' \end{pmatrix} \stackrel{\mathrm{df}}{=} \sum_{j=1}^k X_j^* B_j^* Z_j^{*\prime} \qquad (19)$$
(the matrices $X_j^*$ have dimensions $n\times m_j^*$ and $B_{ij}$ is $m_j^*\times r_i$, where $m_i = \sum_{j=i}^k m_j^*$). It is now easy to see that model (19) satisfies Hu's condition:
$$X_i^{*\prime} X_j^* = 0 \quad \forall\, i \neq j. \qquad (20)$$
Moreover, now we have
$$Z_i^* = (Z_1,\dots,Z_i), \qquad i = 1,\dots,k,$$
which implies $\mathcal{R}(Z_1^*) \subseteq \dots \subseteq \mathcal{R}(Z_k^*)$.
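The following sketch (ours, with hypothetical group sizes) builds the starred design matrices of (19) for Example 5 and checks Hu's condition (20); all variable names are our own.

```python
import numpy as np

n1, n2, p = 10, 10, 5
t = np.arange(1.0, p + 1)

X1_star = np.vstack([np.ones((n1, 1)), np.zeros((n2, 1))])   # X1* (first group only)
X2_star = np.vstack([np.zeros((n1, 1)), np.ones((n2, 1))])   # X2* = X2 (second group)
Z1_star = np.column_stack([np.ones(p), t])                   # Z1* = Z1
Z2_star = np.column_stack([np.ones(p), t, t ** 2])           # Z2* = (Z1, t^2), nested

# Hu's condition (20): the starred design matrices are mutually orthogonal
assert np.allclose(X1_star.T @ X2_star, 0.0)

# von Rosen's X1 = (X1*, X2*) indicates both groups, so r(X1) = r(X1*) + r(X2*)
X1 = np.hstack([X1_star, X2_star])
print(np.linalg.matrix_rank(X1))   # prints 2
```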

An EGCM with Hu's condition is much easier to handle. If all matrices $X_i$ and $Z_i$ are of full rank, then all $B_i$ are estimable, and the unbiased LSE $\hat B_i$ depends only on $X_i$ and $Z_i$:
$$\hat B_i = (X_i'X_i)^{-1} X_i' Y \Sigma^{-1} Z_i (Z_i'\Sigma^{-1}Z_i)^{-1}, \qquad (21)$$
see [2]. Such a closed form was difficult to obtain in the von Rosen model. Even for two components the estimators are rather complicated:
$$\hat B_1 = (X_1'X_1)^{-1}X_1'Y\Sigma^{-1}Z_1(Z_1'\Sigma^{-1}Z_1)^{-1} - (X_1'X_1)^{-1}X_1'P_{X_2}Y \left(P^{\Sigma}_{M^{\Sigma}_{Z_1}Z_2}\right)'\Sigma^{-1}Z_1(Z_1'\Sigma^{-1}Z_1)^{-1},$$
$$\hat B_2 = (X_2'X_2)^{-1}X_2'Y\Sigma^{-1}M^{\Sigma}_{Z_1}Z_2\left(Z_2'\Sigma^{-1}M^{\Sigma}_{Z_1}Z_2\right)^{-1}, \qquad (22)$$
see [7]; here $P^{\Sigma}_{Z} = Z(Z'\Sigma^{-1}Z)^{-1}Z'\Sigma^{-1}$ and $M^{\Sigma}_{Z} = I - P^{\Sigma}_{Z}$. Each of $\hat B_1$ and $\hat B_2$ depends on both $Z_1$ and $Z_2$, and $\hat B_1$ even on $X_2$. The estimator of the common variance matrix can be split into perpendicular pieces:
$$\hat\Sigma = \frac{1}{n - r(X_1)}\,Y'M_{X_1}Y = \frac{1}{n - \sum_{i=1}^k r(X_i^*)}\,Y'\left(I - \sum_{i=1}^k P_{X_i^*}\right)Y. \qquad (23)$$
In the last expression, the left-hand side term is the estimator using von Rosen's model and the right-hand side one using Hu's model. It is easy to see that the estimators are equivalent, since $X_1 = (X_1^*,\dots,X_k^*)$.

The situation in Hu's model is much easier also for a special correlation structure $\Sigma = \sigma^2 R$ with $R$ known. The unbiased estimator of the residual variance $\sigma^2$ is
$$\hat\sigma^2 = \frac{\operatorname{Tr}\!\left[(Y-\hat Y)\,R^{-1}(Y-\hat Y)'\right]}{np - \sum_{i=1}^k r(X_i)\operatorname{Tr}\!\left(P^R_{Z_i}\right)},$$
where $\hat Y = \sum_{i=1}^k P_{X_i} Y \left(P^R_{Z_i}\right)'$ is the unbiased estimator of $\operatorname{E}Y$. For comparison, in the von Rosen model the unbiased estimator of the residual variance is
$$\hat\sigma^2 = \frac{1}{m}\operatorname{Tr}\!\left[(Y-\hat Y)(Y-\hat Y)'\right],$$
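A minimal sketch of Hu's closed form (21), assuming a known Sigma and reusing the two-group design of Example 5; the data, parameter values, and the helper name B_hat_hu are hypothetical.

```python
import numpy as np

def B_hat_hu(Xi, Y, Zi, Sigma):
    """Closed form (21): (Xi'Xi)^{-1} Xi'Y Sigma^{-1} Zi (Zi'Sigma^{-1}Zi)^{-1}."""
    Si = np.linalg.inv(Sigma)
    return np.linalg.solve(Xi.T @ Xi, Xi.T @ Y) @ Si @ Zi @ np.linalg.inv(Zi.T @ Si @ Zi)

rng = np.random.default_rng(3)
n1, n2, p = 10, 10, 5
t = np.arange(1.0, p + 1)
X1 = np.vstack([np.ones((n1, 1)), np.zeros((n2, 1))])        # orthogonal group indicators
X2 = np.vstack([np.zeros((n1, 1)), np.ones((n2, 1))])
Z1 = np.column_stack([np.ones(p), t])                        # linear profile
Z2 = np.column_stack([np.ones(p), t, t ** 2])                # quadratic profile
B1, B2 = np.array([[1.0, 0.5]]), np.array([[0.2, -0.3, 0.1]])

Sigma = np.eye(p)                                            # treated as known here
Y = (X1 @ B1 @ Z1.T + X2 @ B2 @ Z2.T
     + rng.multivariate_normal(np.zeros(p), Sigma, size=n1 + n2))

print(B_hat_hu(X1, Y, Z1, Sigma))   # estimates B1, using only X1 and Z1
print(B_hat_hu(X2, Y, Z2, Sigma))   # estimates B2, using only X2 and Z2
```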

where
$$m = \left(n - r(X_1)\right)\operatorname{Tr}(R) + \sum_{i=1}^{k-1}\left(r(X_i) - r(X_{i+1})\right)\operatorname{Tr}\!\left(M^R_{(Z_1,\dots,Z_i)}R\right) + r(X_k)\operatorname{Tr}\!\left(M^R_{(Z_1,\dots,Z_k)}R\right)$$
$$= n\operatorname{Tr}(R) - \sum_{i=1}^{k-1}\left(r(X_i) - r(X_{i+1})\right)\operatorname{Tr}\!\left(P^R_{(Z_1,\dots,Z_i)}R\right) - r(X_k)\operatorname{Tr}\!\left(P^R_{(Z_1,\dots,Z_k)}R\right),$$
see [7].

3. Conclusion

The method of orthogonal decomposition is very promising in complex models. Many tasks which are very difficult or impossible to handle in basic models can be done with ease in models consisting of mutually orthogonal components. As shown above, a simple transformation can change a model into an equivalent one which allows us to determine explicit forms of estimators and/or their distributions. We hope the method will prove even more useful in the future, either in the models investigated here or in some others.

References

[1] Glueck, D.H. and Muller, K.E. (1998), On the trace of a Wishart, Comm. Statist. Theory Methods 27(9).
[2] Hu, J. (2009), Properties of the explicit estimators in the extended growth curve model, Statistics 44(5).
[3] Klein, D. and Žežula, I. (2007), On uniform correlation structure, in: Mathematical Methods in Economics and Industry, Herľany, Slovakia, pp. 94.
[4] Ye, R.D. and Wang, S.G. (2009), Estimating parameters in extended growth curve models with special covariance structures, J. Statist. Plann. Inference 139.
[5] Žežula, I. (1993), Covariance components estimation in the growth curve model, Statistics 24.
[6] Žežula, I. (2006), Special variance structures in the growth curve model, J. Multivariate Anal. 97.
[7] Žežula, I. (2008), Remarks on unbiased estimation of the sum-of-profiles model parameters, Tatra Mt. Math. Publ. 39.

Institute of Mathematics, P. J. Šafárik University, Jesenná 5, SK-041 54 Košice, Slovakia
E-mail address: daniel.klein@upjs.sk
E-mail address: ivan.zezula@upjs.sk
