Linear Models for Multivariate Repeated Measures Data
THE UNIVERSITY OF TEXAS AT SAN ANTONIO, COLLEGE OF BUSINESS
Working Paper Series

Date: December 3, 200
WP # 007MSS

Linear Models for Multivariate Repeated Measures Data

Anuradha Roy
Management Science and Statistics
University of Texas at San Antonio
San Antonio, Texas, United States

Copyright 200, by the author(s). Please do not quote, cite, or reproduce without permission from the author(s).

ONE UTSA CIRCLE, SAN ANTONIO, TEXAS
business.utsa.edu
Linear Models for Multivariate Repeated Measures Data

Anuradha Roy
Department of Management Science and Statistics
The University of Texas at San Antonio
San Antonio, Texas 78249, USA

Abstract

We study the general linear model (GLM) with doubly exchangeably distributed errors for $m$ observed random variables. The doubly exchangeable general linear model (DEGLM) arises when the $m$-dimensional error vectors are doubly exchangeable (defined later) and jointly normally distributed, which is a much weaker assumption than the independent and identically distributed error vectors of the GLM, or classical GLM (CGLM). We estimate the parameters in the model and also find their distributions.

Key Words: Multivariate repeated measures; Linear model; Replicated observations
JEL Classification: C0, C3

1 Introduction

A generalization of the general linear model (GLM), or classical general linear model (CGLM), was considered by Arnold in 1979 for the case where the error vectors are unobserved and exchangeable, jointly normally distributed, rather than independent and identically distributed (iid) as in the CGLM. He named his new model the exchangeable linear model (EGLM). The EGLM is especially appropriate for doubly multivariate, or two-level multivariate, data. The partitioned variance-covariance matrix $\Sigma$ with exchangeably distributed errors is of the form

$$\Sigma = \begin{pmatrix} U_0 & U_1 & \cdots & U_1 \\ U_1 & U_0 & \cdots & U_1 \\ \vdots & \vdots & \ddots & \vdots \\ U_1 & U_1 & \cdots & U_0 \end{pmatrix} = I_u \otimes (U_0 - U_1) + J_u \otimes U_1,$$

where $I_u$ is the $u \times u$ identity matrix, $1_u$ is a $u \times 1$ vector with all elements equal to unity, $J_u = 1_u 1_u'$, $U_0$ is an $m \times m$ positive definite symmetric matrix, and $U_1$ is a symmetric $m \times m$ matrix. Leiva (2007) also used this variance-covariance matrix $\Sigma$ for classification problems, naming it an equicorrelated partitioned matrix with equicorrelation parameters $U_0$ and $U_1$. The $m \times m$ diagonal blocks $U_0$ represent the variance-covariance matrix of the $m$ response variables at any given site, whereas the $m \times m$ off-diagonal blocks $U_1$ represent the covariance matrix of the $m$ response variables between any pair of sites. We assume $U_0$ is constant for all sites, and $U_1$ is constant between any pair of sites.

In this article we extend Arnold's (1979) EGLM to the case where the error vectors are unobserved and doubly exchangeable (defined in Section 2). Doubly exchangeable data are common in repeated measures designs in biomedical, medical, engineering, and many other research areas. In repeated measures designs, in particular those employed in clinical trial studies of skin care products, the data are collected on a vector of measurements ($m$) at different body positions ($u$) and at different points ($v$) in time. For example, consider a clinical trial study where measurements are taken on the characteristics of wrinkling, pigmentation, inflammation, and hydration on hands, face, neck, and arms once every month for four consecutive months. Occasionally, biomedical researchers measure levels of fat byproducts at different parts of the body (sites) in an eight-week clinical trial for their research. In other words, these data are multivariate at three levels. In these examples the variables at different sites and at different time points are not independent, but are stochastically dependent in nature. Different sites and different time points may be interchangeable, or exchangeable (equicorrelated), among themselves; in other words, it is reasonable to assume that the variables have a doubly exchangeable structure. The doubly exchangeable linear model (DEGLM) is suitable for data that have this structure. In this article we develop the DEGLM for three-level multivariate data by using the doubly exchangeable, or jointly equicorrelated, covariance structure (Leiva, 2007; Roy and Leiva, 2007). The jointly equicorrelated covariance structure (defined in Section 2) assumes a block circulant covariance structure consisting of three unstructured covariance matrices, one for each of the three multivariate levels. This structure can capture double exchangeability in a longitudinal study, both in time and in space. Another advantage of this covariance structure is that the measurements over time need not be equally spaced.

Let $y$ be the $muv$-variate vector of all measurements. We partition this vector as $y = (y_1', \ldots, y_v')'$, where $y_t = (y_{t1}', \ldots, y_{tu}')'$ for $t = 1, \ldots, v$, with $y_{ts} = (y_{ts,1}, \ldots, y_{ts,m})'$ for $s = 1, \ldots, u$. The $m$-dimensional vector of measurements $y_{ts}$ represents the replicate at the $s$th location and the $t$th time point.
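The three-level dependence pattern described above can be assembled directly from Kronecker products and checked block by block. The following is a minimal numpy sketch; the sizes $m, u, v$ and the matrices $U_0$, $U_1$, $W$ are hypothetical illustrative choices, not values from the paper:

```python
import numpy as np

# Hypothetical sizes: m variables, u sites, v time points
m, u, v = 2, 3, 4
# Hypothetical equicorrelation parameters (U0 positive definite, U1 and W symmetric)
U0 = np.array([[2.0, 0.5], [0.5, 2.0]])   # same site, same time
U1 = np.array([[0.8, 0.2], [0.2, 0.8]])   # different sites, same time
W  = np.array([[0.3, 0.1], [0.1, 0.3]])   # different time points

I = np.eye
J = lambda k: np.ones((k, k))

# Jointly equicorrelated structure:
# Gamma_y = I_vu x (U0 - U1) + I_v x J_u x (U1 - W) + J_vu x W
Gamma_y = (np.kron(I(v * u), U0 - U1)
           + np.kron(I(v), np.kron(J(u), U1 - W))
           + np.kron(J(v * u), W))

def block(t, s, t2, s2):
    """m x m covariance block Cov[y_ts, y_t2s2] (0-based t, s)."""
    r, c = (t * u + s) * m, (t2 * u + s2) * m
    return Gamma_y[r:r + m, c:c + m]

assert np.allclose(block(1, 2, 1, 2), U0)   # same site and time point
assert np.allclose(block(1, 0, 1, 2), U1)   # different sites, same time point
assert np.allclose(block(0, 1, 3, 1), W)    # different time points
```

Any choice of parameter matrices that keeps the implied block eigenmatrices positive definite (see Lemma 1 in Section 3) yields a valid covariance matrix of this form.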
2 Basic results

2.1 Jointly equicorrelated vectors

Definition 1. Let $y$ be an $muv$-variate partitioned real-valued random vector $y = (y_1', \ldots, y_v')'$, where $y_t = (y_{t1}', \ldots, y_{tu}')'$ for $t = 1, \ldots, v$, and $y_{ts} = (y_{ts,1}, \ldots, y_{ts,m})'$ for $s = 1, \ldots, u$. Let $\mathrm{E}[y] = \mu_y$, and let $\Gamma_y$ be the $(muv \times muv)$-dimensional partitioned covariance matrix $\mathrm{Cov}[y] = (\Gamma_{y_t, y_{t'}}) = (\Gamma_{tt'})$, where $\Gamma_{tt'} = \mathrm{Cov}[y_t, y_{t'}]$ for $t, t' = 1, \ldots, v$. The $m$-variate vectors $y_{11}, \ldots, y_{1u}, \ldots, y_{v1}, \ldots, y_{vu}$ are said to be jointly equicorrelated if $\Gamma_y$ is given by

$$\Gamma_y = I_{vu} \otimes (U_0 - U_1) + I_v \otimes J_u \otimes (U_1 - W) + J_{vu} \otimes W, \qquad (1)$$

where $U_0$ is a positive definite symmetric $m \times m$ matrix, and $U_1$ and $W$ are symmetric $m \times m$ matrices. The variance-covariance matrix $\Gamma_y$ is then said to have a jointly equicorrelated covariance structure with equicorrelation parameters $U_0$, $U_1$ and $W$. The matrices $U_0$, $U_1$ and $W$ are all unstructured. Thus, the vectors $y_{11}, \ldots, y_{1u}, \ldots, y_{v1}, \ldots, y_{vu}$ are jointly equicorrelated if they have the following jointly equicorrelated covariance matrix (illustrated here for $u = 3$, $v = 3$):

$$\Gamma_y = \begin{pmatrix}
U_0 & U_1 & U_1 & W & W & W & W & W & W \\
U_1 & U_0 & U_1 & W & W & W & W & W & W \\
U_1 & U_1 & U_0 & W & W & W & W & W & W \\
W & W & W & U_0 & U_1 & U_1 & W & W & W \\
W & W & W & U_1 & U_0 & U_1 & W & W & W \\
W & W & W & U_1 & U_1 & U_0 & W & W & W \\
W & W & W & W & W & W & U_0 & U_1 & U_1 \\
W & W & W & W & W & W & U_1 & U_0 & U_1 \\
W & W & W & W & W & W & U_1 & U_1 & U_0
\end{pmatrix}, \qquad (2)$$

that is,

$$\mathrm{Cov}[y_{ts}, y_{t's'}] = \begin{cases} U_0 & \text{if } t = t' \text{ and } s = s', \\ U_1 & \text{if } t = t' \text{ and } s \neq s', \\ W & \text{if } t \neq t'. \end{cases}$$

The $m \times m$ diagonal blocks $U_0$ in (2) represent the variance-covariance matrix of the $m$ response variables at any given site and at any given time point, whereas the $m \times m$ off-diagonal blocks $U_1$ in (2) represent the covariance matrix of the $m$ response variables between any two sites at any given time point. We assume $U_0$ is constant for all sites and time points, and $U_1$ is the same for all site pairs and for all time points. The $m \times m$ off-diagonal blocks $W$ represent the covariance matrix of the $m$ response variables between any two time points; it is assumed to be the same for any pair of time points, whether at the same site or between any two sites.

2.2 Matrix-variate normal distribution

The random matrix $X\,(p \times n)$ is said to have a matrix-variate normal distribution with mean matrix $M\,(p \times n)$ and covariance matrix $\Sigma \otimes \Psi$, where $\Sigma > 0$ and $\Psi > 0$ are $p \times p$ and $n \times n$ matrices respectively, if and only if $\mathrm{Vec}(X) \sim N_{pn}(\mathrm{Vec}(M), \Sigma \otimes \Psi)$. We will use the notation $X \sim N_{p,n}(M, \Sigma \otimes \Psi)$ or $X \sim N_{p,n}(M, \Sigma, \Psi)$. Note that if $n = 1$ (thus $\Psi$ is a scalar), then $X$ follows a $p$-variate normal distribution with mean vector $M$ and variance-covariance matrix $\Psi\Sigma$.

The matrix-variate normal distribution arises when sampling from a multivariate normal population. Let $x_1, x_2, \ldots, x_N$ be a random sample of size $N$ from $N_p(\mu, \Sigma)$. Define the observation matrix

$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1N} \\ x_{21} & x_{22} & \cdots & x_{2N} \\ \vdots & & & \vdots \\ x_{p1} & x_{p2} & \cdots & x_{pN} \end{pmatrix};$$

then $X' \sim N_{N,p}(1_N \mu', I_N, \Sigma)$. We will use the following results on the matrix-variate normal distribution (Pan and Fang, 2002; Gupta and Nagar, 2000) in this article.

Result 1: If $X \sim N_{p,n}(M, \Sigma, \Psi)$, then $X' \sim N_{n,p}(M', \Psi, \Sigma)$.

Result 2: If $X \sim N_{p,n}(M, \Sigma, \Psi)$, $D\,(m_1 \times p)$ is of rank $m_1 \leq p$, $C\,(n \times t)$ is of rank $t \leq n$, and $A$ is $m_1 \times t$, then $DXC + A \sim N_{m_1,t}(DMC + A, D\Sigma D', C'\Psi C)$.

Result 3: If $X_1 \sim N_{p,n}(M_1, \Sigma_1, \Psi_1)$ and $X_2 \sim N_{p,n}(M_2, \Sigma_2, \Psi_2)$ are independent, then $X_1 + X_2 \sim N_{p,n}\big(M_1 + M_2, (\Sigma_1 \otimes \Psi_1) + (\Sigma_2 \otimes \Psi_2)\big)$.

2.3 Matrix results

The jointly equicorrelated variance-covariance matrix $\Gamma_y$ (Roy and Leiva, 2007) in (1) can be written as

$$\Gamma_y = I_v \otimes (V_0 - V_1) + J_v \otimes V_1, \quad \text{with} \quad V_0 = I_u \otimes (U_0 - U_1) + J_u \otimes U_1 \quad \text{and} \quad V_1 = J_u \otimes W,$$

where $V_0$ is a positive definite $(mu \times mu)$-dimensional symmetric matrix, $V_1$ is a $(mu \times mu)$-dimensional symmetric matrix, and the $m \times m$ matrices $U_0$, $U_1$ and $W$ are defined as in Section 2.1.

3 The Model

We study the doubly exchangeable general linear model (DEGLM) for three-level multivariate data by considering an $m \times u(v)$-dimensional random matrix $Y$. What do we mean by this notation? The matrix has $m$ rows, $u$ columns and $v$ depths; in other words, $v$ matrices of dimension $m \times u$ are stacked one after another, so $Y$ may be treated as an $m \times uv$ matrix. We now write the model as

$$Y_{m \times uv} = \alpha_{m \times 1}\, 1'_{uv} + \gamma'_{m \times r}\, T_{r \times uv} + e_{m \times uv},$$

or

$$Y'_{uv \times m} = 1_{uv}\, \alpha'_{1 \times m} + T'_{uv \times r}\, \gamma_{r \times m} + e'_{uv \times m}, \qquad (3)$$

where $Y$ is an $m \times uv$-dimensional random matrix, $\alpha$ is an $m$-dimensional vector, $\gamma$ is an $r \times m$ matrix, and $T$ is an $r \times uv$ matrix such that the design matrix $X = [1_{uv}, T']$ has rank $r + 1$. We assume $uv > r + 1$. The error matrix $e$ is such that the $m$-dimensional components of $\mathrm{Vec}(e)$, namely $e_{11}, \ldots, e_{1u}, \ldots, e_{v1}, \ldots, e_{vu}$, are doubly exchangeable, i.e., $\mathrm{E}(e_{ts}) = 0$ for $s = 1, \ldots, u$, $t = 1, \ldots, v$, and

$$\mathrm{Cov}[e_{ts}, e_{t's'}] = \begin{cases} U_0 & \text{if } t = t' \text{ and } s = s', \\ U_1 & \text{if } t = t' \text{ and } s \neq s', \\ W & \text{if } t \neq t'. \end{cases}$$

Arnold (1979) showed that the usual methods for making inferences about $\alpha$ in the CGLM are not valid for the EGLM. He also mentioned that it is difficult to extract much information about $U_0$ from the data; thus there is no sensible way to test hypotheses about $\alpha$ in the EGLM. Our model (3) is an improvement over Arnold's model, since with $W = 0$ and $\min(u, v) > m + r$ one can test $\alpha$ in the EGLM.

To compute the model parameters and their distributions we first need to prove some lemmas.

Lemma 1. Let $\Gamma_1 = C_v' \otimes I_{mu}$ and $\Gamma_2 = I_v \otimes (C_u' \otimes I_m)$, where $C_v$ and $C_u$ are orthogonal matrices whose first columns are proportional to $1_v$ and $1_u$ respectively. Let $\Gamma_y$ be a jointly equicorrelated covariance matrix as in equation (2) of Definition 1. Then $\Gamma_2\Gamma_1 \Gamma_y \Gamma_1'\Gamma_2'$ is a block diagonal matrix:

$$\Gamma_2\Gamma_1\, \Gamma_y\, \Gamma_1'\Gamma_2' = \mathrm{blockdiag}\Big(\Delta_3,\ I_{u-1} \otimes \Delta_1,\ I_{v-1} \otimes \mathrm{blockdiag}\big(\Delta_2,\ I_{u-1} \otimes \Delta_1\big)\Big),$$

where

$$\Delta_1 = U_0 - U_1, \qquad \Delta_2 = U_0 + (u-1)U_1 - uW = (U_0 - U_1) + u(U_1 - W),$$

and

$$\Delta_3 = U_0 + (u-1)U_1 + u(v-1)W = (U_0 - U_1) + u(U_1 - W) + uvW.$$

Proof: It can easily be shown that $\Gamma_1$ and $\Gamma_2$ are orthogonal. Since the first column of $C_v$ is $1_v/\sqrt{v}$, we have $C_v' J_v C_v = v\, e_1 e_1'$, and therefore

$$\Gamma_1 \Gamma_y \Gamma_1' = (C_v' \otimes I_{mu})\big(I_v \otimes (V_0 - V_1) + J_v \otimes V_1\big)(C_v \otimes I_{mu}) = \begin{pmatrix} V_0 + (v-1)V_1 & 0 \\ 0 & I_{v-1} \otimes (V_0 - V_1) \end{pmatrix}. \qquad (4)$$

The determinant of $\Gamma_y$ is given by

$$|\Gamma_y| = |V_0 + (v-1)V_1| \cdot |V_0 - V_1|^{v-1}.$$

Therefore the matrix $\Gamma_y$ is non-singular if both $V_0 + (v-1)V_1$ and $V_0 - V_1$ are non-singular. Now, from (4), we have

$$\Gamma_2\Gamma_1 \Gamma_y \Gamma_1'\Gamma_2' = \begin{pmatrix} (C_u' \otimes I_m)\big(V_0 + (v-1)V_1\big)(C_u \otimes I_m) & 0 \\ 0 & I_{v-1} \otimes (C_u' \otimes I_m)\big(V_0 - V_1\big)(C_u \otimes I_m) \end{pmatrix}. \qquad (5)$$

Now,

$$(C_u' \otimes I_m)\big(V_0 + (v-1)V_1\big)(C_u \otimes I_m) = (C_u' \otimes I_m)\Big(I_u \otimes (U_0 - U_1) + J_u \otimes \big[(U_1 - W) + vW\big]\Big)(C_u \otimes I_m) = \begin{pmatrix} \Delta_3 & 0 \\ 0 & I_{u-1} \otimes \Delta_1 \end{pmatrix}.$$

Similarly,

$$(C_u' \otimes I_m)\big(V_0 - V_1\big)(C_u \otimes I_m) = (C_u' \otimes I_m)\Big(I_u \otimes (U_0 - U_1) + J_u \otimes (U_1 - W)\Big)(C_u \otimes I_m) = \begin{pmatrix} \Delta_2 & 0 \\ 0 & I_{u-1} \otimes \Delta_1 \end{pmatrix}.$$

Therefore, from (5),

$$\Gamma_2\Gamma_1\, \Gamma_y\, \Gamma_1'\Gamma_2' = \mathrm{blockdiag}\Big(\Delta_3,\ I_{u-1} \otimes \Delta_1,\ I_{v-1} \otimes \mathrm{blockdiag}\big(\Delta_2,\ I_{u-1} \otimes \Delta_1\big)\Big).$$

It follows that if $\Delta_1$, $\Delta_2$ and $\Delta_3$ are non-singular, then $\Gamma_y$ is non-singular.

Corollary 1. If $W = 0$, then $\Delta_1 = U_0 - U_1$ and $\Delta_2 = \Delta_3 = U_0 + (u-1)U_1 = (U_0 - U_1) + uU_1$. Thus, we see that $\Delta_2 = \Delta_3$.

Lemma 2. Let $Y$ be the $m \times uv$-dimensional random matrix of Section 3. Then $\Gamma_2\Gamma_1 \mathrm{Vec}(Y)$ can be expressed as

$$\Gamma_2\Gamma_1\, \mathrm{Vec}(Y) = \mathrm{Vec}\big(Z_{11} : Z_{12} : Z_{21} : Z_{22} : \cdots : Z_{v1} : Z_{v2}\big),$$

where $Z_{i1}$ is $m \times 1$ and $Z_{i2}$ is $m \times (u-1)$ for $i = 1, \ldots, v$.

Proof: With the partition of Section 1, $\mathrm{Vec}(Y) = y$, and using the identity $\mathrm{Vec}(AXB) = (B' \otimes A)\mathrm{Vec}(X)$,

$$\Gamma_2\Gamma_1\, \mathrm{Vec}(Y) = \big((I_v \otimes C_u')(C_v' \otimes I_u) \otimes I_m\big)\mathrm{Vec}(Y) = \mathrm{Vec}\big(Y (C_v \otimes I_u)(I_v \otimes C_u)\big).$$

Writing $Y = [Y_1 : Y_2 : \cdots : Y_v]$ with $m \times u$ slabs $Y_t$, the transformation first forms the mixed slabs $\tilde{Y}_j = \sum_{t=1}^{v} c_{tj} Y_t$, where $c_{tj}$ are the elements of $C_v$, and then splits each $\tilde{Y}_j C_u$ into its first column $Z_{j1}$ ($m \times 1$) and its remaining columns $Z_{j2}$ ($m \times (u-1)$).

Lemma 3. $\Gamma_2\Gamma_1 \mathrm{Vec}(\gamma' T)$ can be expressed as

$$\Gamma_2\Gamma_1\, \mathrm{Vec}(\gamma' T) = \mathrm{Vec}\big(\gamma' U_{11} : \gamma' U_{12} : \cdots : \gamma' U_{v1} : \gamma' U_{v2}\big),$$

where $[U_{11} : U_{12} : \cdots : U_{v1} : U_{v2}] = T (C_v \otimes I_u)(I_v \otimes C_u)$, with $U_{i1}$ of dimension $r \times 1$ and $U_{i2}$ of dimension $r \times (u-1)$.

Proof: Exactly as in Lemma 2,

$$\Gamma_2\Gamma_1\, \mathrm{Vec}(\gamma' T) = \mathrm{Vec}\big(\gamma' T (C_v \otimes I_u)(I_v \otimes C_u)\big) = \mathrm{Vec}\big(\gamma' U_{11} : \gamma' U_{12} : \cdots : \gamma' U_{v1} : \gamma' U_{v2}\big).$$
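The two-stage orthogonal transformation of Lemma 1 can be checked numerically. The sketch below (sizes and parameter matrices are hypothetical) builds Helmert-type matrices $C_v$ and $C_u$, whose first columns are proportional to vectors of ones, and confirms that $\Gamma_2\Gamma_1\Gamma_y\Gamma_1'\Gamma_2'$ is block diagonal with the blocks $\Delta_1$, $\Delta_2$, $\Delta_3$:

```python
import numpy as np

# Hypothetical sizes and equicorrelation parameters
m, u, v = 2, 3, 4
U0 = np.array([[2.0, 0.5], [0.5, 2.0]])
U1 = np.array([[0.8, 0.2], [0.2, 0.8]])
W  = np.array([[0.3, 0.1], [0.1, 0.3]])

I, J = np.eye, lambda k: np.ones((k, k))
Gamma_y = (np.kron(I(v * u), U0 - U1)
           + np.kron(I(v), np.kron(J(u), U1 - W))
           + np.kron(J(v * u), W))

def helmert(k):
    """Orthogonal k x k matrix whose first column is 1_k / sqrt(k)."""
    H = np.zeros((k, k))
    H[:, 0] = 1.0 / np.sqrt(k)
    for j in range(1, k):
        H[:j, j] = 1.0
        H[j, j] = -float(j)
        H[:, j] /= np.sqrt(j * (j + 1.0))
    return H

Cv, Cu = helmert(v), helmert(u)
G1 = np.kron(Cv.T, np.eye(m * u))                    # Gamma_1 = C_v' x I_mu
G2 = np.kron(np.eye(v), np.kron(Cu.T, np.eye(m)))    # Gamma_2 = I_v x (C_u' x I_m)
D = G2 @ G1 @ Gamma_y @ G1.T @ G2.T

D1 = U0 - U1                   # Delta_1
D2 = D1 + u * (U1 - W)         # Delta_2
D3 = D2 + u * v * W            # Delta_3

# Expected block-diagonal pattern: (D3, D1 x (u-1)), then (v-1) copies of (D2, D1 x (u-1))
expected = [D3] + [D1] * (u - 1) + ([D2] + [D1] * (u - 1)) * (v - 1)
target = np.zeros_like(D)
for i, blk in enumerate(expected):
    target[i * m:(i + 1) * m, i * m:(i + 1) * m] = blk
assert np.allclose(D, target)
```

The same construction works for any $u, v \geq 2$; the Helmert matrix is only one convenient choice of orthogonal matrix with first column proportional to a vector of ones.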
Lemma 4. $\Gamma_2\Gamma_1 \mathrm{Vec}(\alpha 1'_{uv})$ can be expressed as

$$\Gamma_2\Gamma_1\, \mathrm{Vec}(\alpha 1'_{uv}) = \mathrm{Vec}\big(\sqrt{uv}\,\alpha : 0 : \cdots : 0\big).$$

Proof: As in Lemma 2, $\Gamma_2\Gamma_1 \mathrm{Vec}(\alpha 1'_{uv}) = \Gamma_2\, \mathrm{Vec}\big(\alpha 1'_{uv}(C_v \otimes I_u)\big)$. Since the first column of $C_v$ is $1_v/\sqrt{v}$,

$$1'_{uv}(C_v \otimes I_u) = (1_v' C_v) \otimes 1_u' = (\sqrt{v}, 0, \ldots, 0) \otimes 1_u' = \big(\sqrt{v}\, 1_u' : 0 : \cdots : 0\big),$$

so that $\alpha 1'_{uv}(C_v \otimes I_u) = \big(\sqrt{v}\, \alpha 1_u' : 0 : \cdots : 0\big)$. Applying $I_v \otimes C_u$ and using $1_u' C_u = (\sqrt{u}, 0, \ldots, 0)$, we get $\mathrm{Vec}\big(\sqrt{uv}\,\alpha : 0 : \cdots : 0\big)$, and the lemma is proved.

Lemma 5.

$$(I_v \otimes C_u')(C_v' \otimes I_u)\, 1_{uv} = \sqrt{uv}\,(1, 0, \ldots, 0)' = \sqrt{uv}\, e_1.$$

Proof: Since the first columns of $C_v$ and $C_u$ are $1_v/\sqrt{v}$ and $1_u/\sqrt{u}$ respectively, $C_v' 1_v = \sqrt{v}\, e_1$ and $C_u' 1_u = \sqrt{u}\, e_1$. Hence

$$(I_v \otimes C_u')(C_v' \otimes I_u)\, 1_{uv} = (I_v \otimes C_u')\big((C_v' 1_v) \otimes 1_u\big) = (I_v \otimes C_u')\big(\sqrt{v}\, e_1 \otimes 1_u\big) = \sqrt{v}\, e_1 \otimes (C_u' 1_u) = \sqrt{uv}\, e_1.$$
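Lemmas 4 and 5 are simple enough to verify directly. The following sketch (hypothetical sizes and an arbitrary illustrative $\alpha$) confirms both identities:

```python
import numpy as np

u, v, m = 3, 4, 2   # hypothetical sizes

def helmert(k):
    """Orthogonal k x k matrix whose first column is 1_k / sqrt(k)."""
    H = np.zeros((k, k))
    H[:, 0] = 1.0 / np.sqrt(k)
    for j in range(1, k):
        H[:j, j] = 1.0
        H[j, j] = -float(j)
        H[:, j] /= np.sqrt(j * (j + 1.0))
    return H

Cv, Cu = helmert(v), helmert(u)
B = np.kron(Cv, np.eye(u)) @ np.kron(np.eye(v), Cu)  # combined column transform

# Lemma 5: (I_v x C_u')(C_v' x I_u) 1_uv = sqrt(uv) e_1
b = B.T @ np.ones(u * v)
e1 = np.zeros(u * v)
e1[0] = np.sqrt(u * v)
assert np.allclose(b, e1)

# Lemma 4: the intercept term alpha 1'_uv transforms to (sqrt(uv) alpha : 0 : ... : 0)
alpha = np.array([1.5, -0.7])               # hypothetical intercept vector
M = np.outer(alpha, np.ones(u * v)) @ B     # m x uv transformed mean contribution
assert np.allclose(M[:, 0], np.sqrt(u * v) * alpha)
assert np.allclose(M[:, 1:], 0.0)
```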
Theorem 1. Let $Y$ represent the three-level multivariate data. If

$$\Gamma_2\Gamma_1\, \mathrm{Vec}(Y) = \mathrm{Vec}\big(Z_{11} : Z_{12} : \cdots : Z_{v1} : Z_{v2}\big),$$

then all the components $Z_{11}, Z_{12}, Z_{21}, \ldots, Z_{v2}$ are independently normally distributed, with

$$Z_{11} \sim N_{m,1}\big(\sqrt{uv}\,\alpha + \gamma' U_{11},\ \Delta_3,\ 1\big), \qquad (6a)$$
$$Z_{i1} \sim N_{m,1}\big(\gamma' U_{i1},\ \Delta_2,\ 1\big), \quad i = 2, 3, \ldots, v, \qquad (6b)$$
$$Z_{i2} \sim N_{m,u-1}\big(\gamma' U_{i2},\ \Delta_1,\ I_{u-1}\big), \quad i = 1, 2, \ldots, v. \qquad (6c)$$

Proof: Follows from Lemmas 1 and 2.

Comments: Thus, we see that CGLMs are nested in one DEGLM. The model involving $Z_{11}$ has covariance matrix $\Delta_3$, but only one sample, so estimation of $\Delta_3$ is not possible; testing of $\alpha$ is therefore not possible. This case is similar to Arnold's (1979) EGLM, where the intercept $\alpha$ could not be tested. Each of the models involving $Z_{i1}$, $i = 2, 3, \ldots, v$, has no intercept and only one sample; however, they have the common variance-covariance matrix $\Delta_2$. Thus $\gamma$ and $\Delta_2$ can be estimated, and $\gamma$ can be tested. Similarly, the models involving $Z_{i2}$, $i = 1, 2, \ldots, v$, are all independent, have the common covariance matrix $\Delta_1$, and have no intercept term. Thus one can calculate estimates of $\gamma$ and $\Delta_1$ from each of these models, and test any hypothesis about $\gamma$.

Corollary 2. If $W = 0$, then

$$Z_{11} \sim N_{m,1}\big(\sqrt{uv}\,\alpha + \gamma' U_{11},\ \Delta_2,\ 1\big), \qquad Z_{i1} \sim N_{m,1}\big(\gamma' U_{i1},\ \Delta_2,\ 1\big), \quad i = 2, 3, \ldots, v,$$

and

$$Z_{i2} \sim N_{m,u-1}\big(\gamma' U_{i2},\ \Delta_1,\ I_{u-1}\big), \quad i = 1, 2, \ldots, v.$$

Comments: In this case there are $v$ independent samples available to estimate $\Delta_2$. Thus, testing any hypothesis about $\alpha$ is possible in Arnold's EGLM when one uses the DEGLM with $W = 0$ and $\min(u, v) > m + r$.

Corollary 3. If $v = 1$, the DEGLM reduces to the EGLM. For $v = 1$ we have

$$Z_{11} \sim N_{m,1}\big(\sqrt{u}\,\alpha + \gamma' U_{11},\ U_0 + (u-1)U_1,\ 1\big) \quad \text{and} \quad Z_{12} \sim N_{m,u-1}\big(\gamma' U_{12},\ U_0 - U_1,\ I_{u-1}\big).$$

We see that the above model is exactly the same as Arnold's (1979) EGLM.

Theorem 2. The estimates of $\alpha$ and $\gamma$ in model (3) are given by

$$\hat{\alpha} = (uv)^{-1/2}\big(Z_{11} - \hat{\gamma}' U_{11}\big), \qquad (7)$$

and

$$\hat{\gamma} = \Big(\sum_{i=1}^{v}\sum_{j=1}^{2} U_{ij}U_{ij}' - U_{11}U_{11}'\Big)^{-1}\Big(\sum_{i=1}^{v}\sum_{j=1}^{2} U_{ij}Z_{ij}' - U_{11}Z_{11}'\Big). \qquad (8)$$

Proof: Model (3) can be written as

$$Y' = 1_{uv}\,\alpha' + T'\gamma + e' = [1_{uv} : T']\begin{pmatrix} \alpha' \\ \gamma \end{pmatrix} + e' = XB + e', \qquad (9)$$

where $X = [1_{uv} : T']$ and $B$ is the $(r+1) \times m$ matrix of coefficients. The least squares estimate of $B$ is given by $\hat{B} = (X'X)^{-1}X'Y'$. Now,

$$X'X = \begin{pmatrix} 1'_{uv}1_{uv} & 1'_{uv}T' \\ T 1_{uv} & TT' \end{pmatrix} \quad \text{and} \quad X'Y' = \begin{pmatrix} 1'_{uv}Y' \\ TY' \end{pmatrix}.$$

We now compute each of the partitioned matrices, with $[U_{11} : U_{12} : \cdots : U_{v1} : U_{v2}] = T(C_v \otimes I_u)(I_v \otimes C_u)$ as in Lemma 3 and $[Z_{11} : Z_{12} : \cdots : Z_{v1} : Z_{v2}] = Y(C_v \otimes I_u)(I_v \otimes C_u)$ as in Lemma 2. Using Lemma 5 and the orthogonality of $C_v$ and $C_u$,

$$T 1_{uv} = T(C_v \otimes I_u)(I_v \otimes C_u)\,(I_v \otimes C_u')(C_v' \otimes I_u)\, 1_{uv} = [U_{11} : U_{12} : \cdots : U_{v2}]\,\sqrt{uv}\, e_1 = \sqrt{uv}\, U_{11},$$

and, using Lemmas 2 and 3,

$$TT' = \sum_{i=1}^{v}\sum_{j=1}^{2} U_{ij}U_{ij}', \qquad 1'_{uv}Y' = \sqrt{uv}\, Z_{11}', \qquad TY' = \sum_{i=1}^{v}\sum_{j=1}^{2} U_{ij}Z_{ij}'.$$

Therefore the normal equations $X'X\hat{B} = X'Y'$ become

$$uv\,\hat{\alpha}' + \sqrt{uv}\, U_{11}'\hat{\gamma} = \sqrt{uv}\, Z_{11}', \qquad (10a)$$
$$\sqrt{uv}\, U_{11}\hat{\alpha}' + \Big(\sum_{i=1}^{v}\sum_{j=1}^{2} U_{ij}U_{ij}'\Big)\hat{\gamma} = \sum_{i=1}^{v}\sum_{j=1}^{2} U_{ij}Z_{ij}'. \qquad (10b)$$

From (10a) we have $\hat{\alpha}' = (uv)^{-1/2}(Z_{11}' - U_{11}'\hat{\gamma})$, which proves (7). Substituting this value of $\hat{\alpha}'$ in (10b),

$$U_{11}\big(Z_{11}' - U_{11}'\hat{\gamma}\big) + \Big(\sum_{i}\sum_{j} U_{ij}U_{ij}'\Big)\hat{\gamma} = \sum_{i}\sum_{j} U_{ij}Z_{ij}',$$

so that

$$\Big(\sum_{i}\sum_{j} U_{ij}U_{ij}' - U_{11}U_{11}'\Big)\hat{\gamma} = \sum_{i}\sum_{j} U_{ij}Z_{ij}' - U_{11}Z_{11}',$$

which proves (8). If $v = 1$, the model reduces to Arnold's (1979) EGLM, and we see that

$$\hat{\alpha} = u^{-1/2}\big(Z_{11} - \hat{\gamma}' U_{11}\big), \qquad \hat{\gamma} = \Big(\sum_{j=1}^{2} U_{1j}U_{1j}' - U_{11}U_{11}'\Big)^{-1}\Big(\sum_{j=1}^{2} U_{1j}Z_{1j}' - U_{11}Z_{11}'\Big) = \big(U_{12}U_{12}'\big)^{-1} U_{12}Z_{12}',$$

and these estimates are exactly the same as those obtained by Arnold (1979).

Theorem 3. Let

$$A = \sum_{i=1}^{v}\sum_{j=1}^{2} U_{ij}U_{ij}' - U_{11}U_{11}'.$$

The distributions of $\hat{\alpha}$ and $\hat{\gamma}$ are as follows:

$$\hat{\alpha}' \sim N_{1,m}\Big(\alpha',\ \frac{1}{uv}\Big[\Delta_3 + \Big(U_{11}'A^{-1}\Big(\sum_{i=2}^{v} U_{i1}U_{i1}'\Big)A^{-1}U_{11}\Big)\Delta_2 + \Big(U_{11}'A^{-1}\Big(\sum_{i=1}^{v} U_{i2}U_{i2}'\Big)A^{-1}U_{11}\Big)\Delta_1\Big]\Big),$$

and

$$\hat{\gamma} = A^{-1}\Big(\sum_{i=2}^{v} U_{i1}Z_{i1}' + \sum_{i=1}^{v} U_{i2}Z_{i2}'\Big) \sim N_{r,m}\Big(\gamma,\ A^{-1}\Big(\sum_{i=2}^{v} U_{i1}U_{i1}'\Big)A^{-1} \otimes \Delta_2 + A^{-1}\Big(\sum_{i=1}^{v} U_{i2}U_{i2}'\Big)A^{-1} \otimes \Delta_1\Big).$$

Proof: We first find the distribution of $\hat{\gamma}$, and then the distribution of $\hat{\alpha}$. We note that $A' = A$, so $A$ is symmetric. Now, from (8),

$$\hat{\gamma} = A^{-1}\Big(\sum_{i=1}^{v}\sum_{j=1}^{2} U_{ij}Z_{ij}' - U_{11}Z_{11}'\Big) = A^{-1}\big[U_{21}Z_{21}' + \cdots + U_{v1}Z_{v1}' + U_{12}Z_{12}' + U_{22}Z_{22}' + \cdots + U_{v2}Z_{v2}'\big] = A^{-1}\sum_{i=2}^{v} U_{i1}Z_{i1}' + A^{-1}\sum_{i=1}^{v} U_{i2}Z_{i2}'.$$

To get the distribution of $\hat{\gamma}$ we find the distributions of the two terms, $\hat{\gamma}_1 = A^{-1}\sum_{i=2}^{v} U_{i1}Z_{i1}'$ and $\hat{\gamma}_2 = A^{-1}\sum_{i=1}^{v} U_{i2}Z_{i2}'$, separately. Using (6b) and (6c) together with Results 1 and 2,

$$\hat{\gamma}_1 \sim N_{r,m}\Big(A^{-1}\Big(\sum_{i=2}^{v} U_{i1}U_{i1}'\Big)\gamma,\ A^{-1}\Big(\sum_{i=2}^{v} U_{i1}U_{i1}'\Big)A^{-1},\ \Delta_2\Big),$$

and

$$\hat{\gamma}_2 \sim N_{r,m}\Big(A^{-1}\Big(\sum_{i=1}^{v} U_{i2}U_{i2}'\Big)\gamma,\ A^{-1}\Big(\sum_{i=1}^{v} U_{i2}U_{i2}'\Big)A^{-1},\ \Delta_1\Big).$$

Therefore, by Result 3 and the independence of the components,

$$\hat{\gamma} = \hat{\gamma}_1 + \hat{\gamma}_2 \sim N_{r,m}\Big(A^{-1}\Big[\sum_{i=2}^{v} U_{i1}U_{i1}' + \sum_{i=1}^{v} U_{i2}U_{i2}'\Big]\gamma,\ A^{-1}\Big(\sum_{i=2}^{v} U_{i1}U_{i1}'\Big)A^{-1} \otimes \Delta_2 + A^{-1}\Big(\sum_{i=1}^{v} U_{i2}U_{i2}'\Big)A^{-1} \otimes \Delta_1\Big).$$

Since $A^{-1}\big[\sum_{i=2}^{v} U_{i1}U_{i1}' + \sum_{i=1}^{v} U_{i2}U_{i2}'\big]\gamma = A^{-1}A\gamma = \gamma$, we see that $\hat{\gamma}$ is an unbiased estimate of $\gamma$.

We will now find the distribution of $\hat{\alpha}$. From (7) we have

$$\hat{\alpha}' = (uv)^{-1/2}\big(Z_{11}' - U_{11}'\hat{\gamma}\big) = (uv)^{-1/2}\big(Z_{11}' - U_{11}'\hat{\gamma}_1 - U_{11}'\hat{\gamma}_2\big). \qquad (11)$$

Now, from (6a), $Z_{11}' \sim N_{1,m}\big(\sqrt{uv}\,\alpha' + U_{11}'\gamma,\ 1,\ \Delta_3\big)$, and

$$U_{11}'\hat{\gamma}_1 \sim N_{1,m}\Big(U_{11}'A^{-1}\Big(\sum_{i=2}^{v} U_{i1}U_{i1}'\Big)\gamma,\ U_{11}'A^{-1}\Big(\sum_{i=2}^{v} U_{i1}U_{i1}'\Big)A^{-1}U_{11},\ \Delta_2\Big),$$
$$U_{11}'\hat{\gamma}_2 \sim N_{1,m}\Big(U_{11}'A^{-1}\Big(\sum_{i=1}^{v} U_{i2}U_{i2}'\Big)\gamma,\ U_{11}'A^{-1}\Big(\sum_{i=1}^{v} U_{i2}U_{i2}'\Big)A^{-1}U_{11},\ \Delta_1\Big).$$

Now,

$$\sqrt{uv}\,\alpha' + U_{11}'\gamma - U_{11}'A^{-1}\Big[\sum_{i=2}^{v} U_{i1}U_{i1}' + \sum_{i=1}^{v} U_{i2}U_{i2}'\Big]\gamma = \sqrt{uv}\,\alpha' + U_{11}'\gamma - U_{11}'\gamma = \sqrt{uv}\,\alpha'.$$

Therefore, from (11) and the independence of $Z_{11}$, the $Z_{i1}$'s and the $Z_{i2}$'s,

$$\hat{\alpha}' \sim N_{1,m}\Big(\alpha',\ \frac{1}{uv}\Big[\Delta_3 + \Big(U_{11}'A^{-1}\Big(\sum_{i=2}^{v} U_{i1}U_{i1}'\Big)A^{-1}U_{11}\Big)\Delta_2 + \Big(U_{11}'A^{-1}\Big(\sum_{i=1}^{v} U_{i2}U_{i2}'\Big)A^{-1}U_{11}\Big)\Delta_1\Big]\Big).$$

Here also we see that $\hat{\alpha}$ is an unbiased estimate of $\alpha$.

Corollary 4. If $W = 0$, the distributions of $\hat{\alpha}$ and $\hat{\gamma}$ are as follows:

$$\hat{\alpha}' \sim N_{1,m}\Big(\alpha',\ \frac{1}{uv}\Big[\Delta_2 + \Big(U_{11}'A^{-1}\Big(\sum_{i=2}^{v} U_{i1}U_{i1}'\Big)A^{-1}U_{11}\Big)\Delta_2 + \Big(U_{11}'A^{-1}\Big(\sum_{i=1}^{v} U_{i2}U_{i2}'\Big)A^{-1}U_{11}\Big)\Delta_1\Big]\Big),$$

and

$$\hat{\gamma} = A^{-1}\Big(\sum_{i=2}^{v} U_{i1}Z_{i1}' + \sum_{i=1}^{v} U_{i2}Z_{i2}'\Big) \sim N_{r,m}\Big(\gamma,\ A^{-1}\Big(\sum_{i=2}^{v} U_{i1}U_{i1}'\Big)A^{-1} \otimes \Delta_2 + A^{-1}\Big(\sum_{i=1}^{v} U_{i2}U_{i2}'\Big)A^{-1} \otimes \Delta_1\Big),$$

where $A = \sum_{i=1}^{v}\sum_{j=1}^{2} U_{ij}U_{ij}' - U_{11}U_{11}'$.

Corollary 5. If $v = 1$, then

$$A = \sum_{j=1}^{2} U_{1j}U_{1j}' - U_{11}U_{11}' = U_{11}U_{11}' + U_{12}U_{12}' - U_{11}U_{11}' = U_{12}U_{12}',$$

and the distributions of $\hat{\alpha}$ and $\hat{\gamma}$ reduce to

$$\hat{\alpha}' \sim N_{1,m}\Big(\alpha',\ \frac{1}{u}\Big[\big((U_0 - U_1) + uU_1\big) + U_{11}'\big(U_{12}U_{12}'\big)^{-1}U_{11}\,(U_0 - U_1)\Big]\Big) = N_{1,m}\Big(\alpha',\ \frac{1}{u}\Big[uU_1 + \big(1 + U_{11}'\big(U_{12}U_{12}'\big)^{-1}U_{11}\big)(U_0 - U_1)\Big]\Big),$$

and

$$\hat{\gamma} \sim N_{r,m}\Big(\gamma,\ \big(U_{12}U_{12}'\big)^{-1}U_{12}U_{12}'\big(U_{12}U_{12}'\big)^{-1} \otimes \Delta_1\Big) = N_{r,m}\Big(\gamma,\ \big(U_{12}U_{12}'\big)^{-1} \otimes (U_0 - U_1)\Big).$$

These estimates are exactly the same as those of Arnold (1979).

Acknowledgements

The author would like to acknowledge the generous support of a summer grant from the College of Business at the University of Texas at San Antonio.
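As a numerical check, the closed-form estimates (7) and (8) of Theorem 2 can be compared with a direct least-squares fit of model (3). In the sketch below all sizes, the design $T$, and the parameter values are simulated, hypothetical choices; since the estimates are purely algebraic, any error realization works:

```python
import numpy as np

rng = np.random.default_rng(1)
m, u, v, r = 2, 3, 4, 2   # hypothetical sizes
n = u * v

def helmert(k):
    """Orthogonal k x k matrix whose first column is 1_k / sqrt(k)."""
    H = np.zeros((k, k))
    H[:, 0] = 1.0 / np.sqrt(k)
    for j in range(1, k):
        H[:j, j] = 1.0
        H[j, j] = -float(j)
        H[:, j] /= np.sqrt(j * (j + 1.0))
    return H

Cv, Cu = helmert(v), helmert(u)
B = np.kron(Cv, np.eye(u)) @ np.kron(np.eye(v), Cu)  # combined orthogonal transform

# Simulated (hypothetical) parameters, design, and data
alpha = rng.standard_normal(m)
gamma = rng.standard_normal((r, m))
T = rng.standard_normal((r, n))
E = rng.standard_normal((m, n))   # any errors; the LS algebra is distribution-free
Y = np.outer(alpha, np.ones(n)) + gamma.T @ T + E

Z = Y @ B   # [Z_11 : Z_12 : ... : Z_v1 : Z_v2]
U = T @ B   # [U_11 : U_12 : ... : U_v1 : U_v2]

# Theorem 2: A = sum_ij U_ij U_ij' - U_11 U_11' ; here U_11 is the first column of U
A = U @ U.T - np.outer(U[:, 0], U[:, 0])
gamma_hat = np.linalg.solve(A, U @ Z.T - np.outer(U[:, 0], Z[:, 0]))   # eq. (8)
alpha_hat = (Z[:, 0] - gamma_hat.T @ U[:, 0]) / np.sqrt(n)             # eq. (7)

# Agreement with the direct least-squares fit of Y' on X = [1_uv : T']
X = np.column_stack([np.ones(n), T.T])
Bhat = np.linalg.lstsq(X, Y.T, rcond=None)[0]
assert np.allclose(alpha_hat, Bhat[0])
assert np.allclose(gamma_hat, Bhat[1:])
```

The agreement holds because the combined transform is orthogonal, so the normal equations in the transformed coordinates are identical to those of the untransformed least-squares problem.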
References

[1] Anderson, T.W., An Introduction to Multivariate Statistical Analysis, Third Ed., Wiley, New Jersey, 2003.
[2] Arnold, S.F., Linear models with exchangeably distributed errors, Journal of the American Statistical Association, 74(365), 1979.
[3] Gupta, A.K., Nagar, D.K., Matrix Variate Distributions, Chapman & Hall/CRC, Florida, 2000.
[4] Leiva, R., Linear discrimination with equicorrelated training vectors, Journal of Multivariate Analysis, 98(2), 2007.
[5] Pan, J.-X., Fang, K.-T., Growth Curve Models and Statistical Diagnostics, Springer-Verlag, New York, 2002.
[6] Roy, A., Leiva, R., Discrimination with jointly equicorrelated multi-level multivariate data, Advances in Data Analysis and Classification, 1(3), 2007.
More informationGeneralized Linear Models (GLZ)
Generalized Linear Models (GLZ) Generalized Linear Models (GLZ) are an extension of the linear modeling process that allows models to be fit to data that follow probability distributions other than the
More informationEcon 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms. 1 Diagonalization and Change of Basis
Econ 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms De La Fuente notes that, if an n n matrix has n distinct eigenvalues, it can be diagonalized. In this supplement, we will provide
More informationConcentration Ellipsoids
Concentration Ellipsoids ECE275A Lecture Supplement Fall 2008 Kenneth Kreutz Delgado Electrical and Computer Engineering Jacobs School of Engineering University of California, San Diego VERSION LSECE275CE
More informationThe LIML Estimator Has Finite Moments! T. W. Anderson. Department of Economics and Department of Statistics. Stanford University, Stanford, CA 94305
The LIML Estimator Has Finite Moments! T. W. Anderson Department of Economics and Department of Statistics Stanford University, Stanford, CA 9435 March 25, 2 Abstract The Limited Information Maximum Likelihood
More informationA Test for Order Restriction of Several Multivariate Normal Mean Vectors against all Alternatives when the Covariance Matrices are Unknown but Common
Journal of Statistical Theory and Applications Volume 11, Number 1, 2012, pp. 23-45 ISSN 1538-7887 A Test for Order Restriction of Several Multivariate Normal Mean Vectors against all Alternatives when
More informationOn Properties of QIC in Generalized. Estimating Equations. Shinpei Imori
On Properties of QIC in Generalized Estimating Equations Shinpei Imori Graduate School of Engineering Science, Osaka University 1-3 Machikaneyama-cho, Toyonaka, Osaka 560-8531, Japan E-mail: imori.stat@gmail.com
More informationUNIVERSITY OF ILLINOIS LIBRARY AT URBANA-CHAMPAIGN BOOKSTACKS
UNIVERSITY OF ILLINOIS LIBRARY AT URBANA-CHAMPAIGN BOOKSTACKS i CENTRAL CIRCULATION BOOKSTACKS The person charging this material is responsible for its renewal or its return to the library from which it
More informationInferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data
Journal of Multivariate Analysis 78, 6282 (2001) doi:10.1006jmva.2000.1939, available online at http:www.idealibrary.com on Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone
More informationMatrices: 2.1 Operations with Matrices
Goals In this chapter and section we study matrix operations: Define matrix addition Define multiplication of matrix by a scalar, to be called scalar multiplication. Define multiplication of two matrices,
More informationJournal of Multivariate Analysis. Sphericity test in a GMANOVA MANOVA model with normal error
Journal of Multivariate Analysis 00 (009) 305 3 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva Sphericity test in a GMANOVA MANOVA
More informationA Significance Test for the Lasso
A Significance Test for the Lasso Lockhart R, Taylor J, Tibshirani R, and Tibshirani R Ashley Petersen May 14, 2013 1 Last time Problem: Many clinical covariates which are important to a certain medical
More informationEPMC Estimation in Discriminant Analysis when the Dimension and Sample Sizes are Large
EPMC Estimation in Discriminant Analysis when the Dimension and Sample Sizes are Large Tetsuji Tonda 1 Tomoyuki Nakagawa and Hirofumi Wakaki Last modified: March 30 016 1 Faculty of Management and Information
More informationDS-GA 1002 Lecture notes 10 November 23, Linear models
DS-GA 2 Lecture notes November 23, 2 Linear functions Linear models A linear model encodes the assumption that two quantities are linearly related. Mathematically, this is characterized using linear functions.
More informationSHOTA KATAYAMA AND YUTAKA KANO. Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka , Japan
A New Test on High-Dimensional Mean Vector Without Any Assumption on Population Covariance Matrix SHOTA KATAYAMA AND YUTAKA KANO Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama,
More informationBootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator
Bootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator by Emmanuel Flachaire Eurequa, University Paris I Panthéon-Sorbonne December 2001 Abstract Recent results of Cribari-Neto and Zarkos
More informationMinimax design criterion for fractional factorial designs
Ann Inst Stat Math 205 67:673 685 DOI 0.007/s0463-04-0470-0 Minimax design criterion for fractional factorial designs Yue Yin Julie Zhou Received: 2 November 203 / Revised: 5 March 204 / Published online:
More informationOptimisation Problems for the Determinant of a Sum of 3 3 Matrices
Irish Math. Soc. Bulletin 62 (2008), 31 36 31 Optimisation Problems for the Determinant of a Sum of 3 3 Matrices FINBARR HOLLAND Abstract. Given a pair of positive definite 3 3 matrices A, B, the maximum
More informationOrthogonal decompositions in growth curve models
ACTA ET COMMENTATIONES UNIVERSITATIS TARTUENSIS DE MATHEMATICA Volume 4, Orthogonal decompositions in growth curve models Daniel Klein and Ivan Žežula Dedicated to Professor L. Kubáček on the occasion
More informationLecture Note 1: Probability Theory and Statistics
Univ. of Michigan - NAME 568/EECS 568/ROB 530 Winter 2018 Lecture Note 1: Probability Theory and Statistics Lecturer: Maani Ghaffari Jadidi Date: April 6, 2018 For this and all future notes, if you would
More informationMultiple Linear Regression
Multiple Linear Regression University of California, San Diego Instructor: Ery Arias-Castro http://math.ucsd.edu/~eariasca/teaching.html 1 / 42 Passenger car mileage Consider the carmpg dataset taken from
More information3 Multiple Linear Regression
3 Multiple Linear Regression 3.1 The Model Essentially, all models are wrong, but some are useful. Quote by George E.P. Box. Models are supposed to be exact descriptions of the population, but that is
More informationSTAT 151A: Lab 1. 1 Logistics. 2 Reference. 3 Playing with R: graphics and lm() 4 Random vectors. Billy Fang. 2 September 2017
STAT 151A: Lab 1 Billy Fang 2 September 2017 1 Logistics Billy Fang (blfang@berkeley.edu) Office hours: Monday 9am-11am, Wednesday 10am-12pm, Evans 428 (room changes will be written on the chalkboard)
More informationInference For High Dimensional M-estimates. Fixed Design Results
: Fixed Design Results Lihua Lei Advisors: Peter J. Bickel, Michael I. Jordan joint work with Peter J. Bickel and Noureddine El Karoui Dec. 8, 2016 1/57 Table of Contents 1 Background 2 Main Results and
More informationFor more information about how to cite these materials visit
Author(s): Kerby Shedden, Ph.D., 2010 License: Unless otherwise noted, this material is made available under the terms of the Creative Commons Attribution Share Alike 3.0 License: http://creativecommons.org/licenses/by-sa/3.0/
More informationKey words. Strongly eventually nonnegative matrix, eventually nonnegative matrix, eventually r-cyclic matrix, Perron-Frobenius.
May 7, DETERMINING WHETHER A MATRIX IS STRONGLY EVENTUALLY NONNEGATIVE LESLIE HOGBEN 3 5 6 7 8 9 Abstract. A matrix A can be tested to determine whether it is eventually positive by examination of its
More informationSTAT 331. Martingale Central Limit Theorem and Related Results
STAT 331 Martingale Central Limit Theorem and Related Results In this unit we discuss a version of the martingale central limit theorem, which states that under certain conditions, a sum of orthogonal
More informationMultivariate Distributions
IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate
More informationWiley. Methods and Applications of Linear Models. Regression and the Analysis. of Variance. Third Edition. Ishpeming, Michigan RONALD R.
Methods and Applications of Linear Models Regression and the Analysis of Variance Third Edition RONALD R. HOCKING PenHock Statistical Consultants Ishpeming, Michigan Wiley Contents Preface to the Third
More informationTutorial 2: Power and Sample Size for the Paired Sample t-test
Tutorial 2: Power and Sample Size for the Paired Sample t-test Preface Power is the probability that a study will reject the null hypothesis. The estimated probability is a function of sample size, variability,
More informationFinancial Econometrics
Material : solution Class : Teacher(s) : zacharias psaradakis, marian vavra Example 1.1: Consider the linear regression model y Xβ + u, (1) where y is a (n 1) vector of observations on the dependent variable,
More informationLarge scale correlation detection
Large scale correlation detection Francesca Bassi 1,2 1 LSS CNRS SUELEC Univ aris Sud Gif-sur-Yvette, France Alfred O. Hero III 2 2 University of Michigan, EECS Department Ann Arbor, MI Abstract This work
More informationarxiv: v1 [math.co] 10 Aug 2016
POLYTOPES OF STOCHASTIC TENSORS HAIXIA CHANG 1, VEHBI E. PAKSOY 2 AND FUZHEN ZHANG 2 arxiv:1608.03203v1 [math.co] 10 Aug 2016 Abstract. Considering n n n stochastic tensors (a ijk ) (i.e., nonnegative
More information4. Comparison of Two (K) Samples
4. Comparison of Two (K) Samples K=2 Problem: compare the survival distributions between two groups. E: comparing treatments on patients with a particular disease. Z: Treatment indicator, i.e. Z = 1 for
More informationThe Jordan canonical form
The Jordan canonical form Attila Máté Brooklyn College of the City University of New York November 14, 214 Contents 1 Preliminaries 2 1.1 Rank-nullity theorem.................................... 2 1.2
More informationEconometrics of Panel Data
Econometrics of Panel Data Jakub Mućk Meeting # 4 Jakub Mućk Econometrics of Panel Data Meeting # 4 1 / 30 Outline 1 Two-way Error Component Model Fixed effects model Random effects model 2 Non-spherical
More informationHigh-dimensional asymptotic expansions for the distributions of canonical correlations
Journal of Multivariate Analysis 100 2009) 231 242 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva High-dimensional asymptotic
More informationLecture 11. Multivariate Normal theory
10. Lecture 11. Multivariate Normal theory Lecture 11. Multivariate Normal theory 1 (1 1) 11. Multivariate Normal theory 11.1. Properties of means and covariances of vectors Properties of means and covariances
More information7 Bilinear forms and inner products
7 Bilinear forms and inner products Definition 7.1 A bilinear form θ on a vector space V over a field F is a function θ : V V F such that θ(λu+µv,w) = λθ(u,w)+µθ(v,w) θ(u,λv +µw) = λθ(u,v)+µθ(u,w) for
More informationMulticollinearity and A Ridge Parameter Estimation Approach
Journal of Modern Applied Statistical Methods Volume 15 Issue Article 5 11-1-016 Multicollinearity and A Ridge Parameter Estimation Approach Ghadban Khalaf King Khalid University, albadran50@yahoo.com
More informationProbability and Stochastic Processes
Probability and Stochastic Processes A Friendly Introduction Electrical and Computer Engineers Third Edition Roy D. Yates Rutgers, The State University of New Jersey David J. Goodman New York University
More informationAnalysis of Longitudinal Data: Comparison Between PROC GLM and PROC MIXED. Maribeth Johnson Medical College of Georgia Augusta, GA
Analysis of Longitudinal Data: Comparison Between PROC GLM and PROC MIXED Maribeth Johnson Medical College of Georgia Augusta, GA Overview Introduction to longitudinal data Describe the data for examples
More informationPerformance assessment of MIMO systems under partial information
Performance assessment of MIMO systems under partial information H Xia P Majecki A Ordys M Grimble Abstract Minimum variance (MV) can characterize the most fundamental performance limitation of a system,
More informationSome Formal Analysis of Rocchio s Similarity-Based Relevance Feedback Algorithm
Some Formal Analysis of Rocchio s Similarity-Based Relevance Feedback Algorithm Zhixiang Chen (chen@cs.panam.edu) Department of Computer Science, University of Texas-Pan American, 1201 West University
More informationTHE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay
THE UNIVERSITY OF CHICAGO Booth School of Business Business 41912, Spring Quarter 2016, Mr. Ruey S. Tsay Lecture 5: Multivariate Multiple Linear Regression The model is Y n m = Z n (r+1) β (r+1) m + ɛ
More informationRobustness of Simultaneous Estimation Methods to Varying Degrees of Correlation Between Pairs of Random Deviates
Global Journal of Mathematical Sciences: Theory and Practical. Volume, Number 3 (00), pp. 5--3 International Research Publication House http://www.irphouse.com Robustness of Simultaneous Estimation Methods
More informationTopic 7 - Matrix Approach to Simple Linear Regression. Outline. Matrix. Matrix. Review of Matrices. Regression model in matrix form
Topic 7 - Matrix Approach to Simple Linear Regression Review of Matrices Outline Regression model in matrix form - Fall 03 Calculations using matrices Topic 7 Matrix Collection of elements arranged in
More informationROI ANALYSIS OF PHARMAFMRI DATA:
ROI ANALYSIS OF PHARMAFMRI DATA: AN ADAPTIVE APPROACH FOR GLOBAL TESTING Giorgos Minas, John A.D. Aston, Thomas E. Nichols and Nigel Stallard Department of Statistics and Warwick Centre of Analytical Sciences,
More informationSTATS306B STATS306B. Discriminant Analysis. Jonathan Taylor Department of Statistics Stanford University. June 3, 2010
STATS306B Discriminant Analysis Jonathan Taylor Department of Statistics Stanford University June 3, 2010 Spring 2010 Classification Given K classes in R p, represented as densities f i (x), 1 i K classify
More informationBare minimum on matrix algebra. Psychology 588: Covariance structure and factor models
Bare minimum on matrix algebra Psychology 588: Covariance structure and factor models Matrix multiplication 2 Consider three notations for linear combinations y11 y1 m x11 x 1p b11 b 1m y y x x b b n1
More information