
Department of Mathematics

Small area estimation with missing data using a multivariate linear random effects model

Innocent Ngaruye, Dietrich von Rosen and Martin Singull

LiTH-MAT-R--2017/07--SE

Department of Mathematics
Linköping University
S-581 83 Linköping

Small area estimation with missing data using a multivariate linear random effects model

Innocent Ngaruye^{1,2}, Dietrich von Rosen^{1,3} and Martin Singull^{1}

1 Department of Mathematics, Linköping University, SE-581 83 Linköping, Sweden.
E-mail: {innocent.ngaruye, martin.singull}@liu.se

2 Department of Mathematics, College of Science and Technology, University of Rwanda, P.O. Box 3900, Kigali, Rwanda.
E-mail: i.ngaruye@ur.ac.rw

3 Department of Energy and Technology, Swedish University of Agricultural Sciences, SE-750 07 Uppsala, Sweden.
E-mail: dietrich.von.rosen@slu.se

Abstract

In this article, small area estimation with multivariate data that follow a monotonic missing sample pattern is addressed. Random effects growth curve models with covariates are formulated, and a likelihood-based approach is proposed for estimation of the unknown parameters. Moreover, the prediction of random effects and the prediction of small area means are also discussed.

Keywords: Multivariate linear model, Monotone sample, Repeated measures data.

1 Introduction

In survey analysis, the estimation of characteristics of interest for subpopulations (also called domains or small areas) for which sample sizes are small is challenging. We adopt an approach where

the survey estimates are improved via covariate information. Producing reliable estimates for small areas in surveys by utilizing covariates is known as the small area estimation (SAE) problem (Pfeffermann, 2002). Rao (2003) has given a comprehensive overview of the theory and methods of model-based SAE.

Most surveys are conducted continuously in time and are based on cross-sectional repeated measures data. There are also some works on time series and longitudinal surveys in small area estimation; see, for example, Consortium (2004); Ferrante and Pacei (2004); Nissinen (2009); Singh and Sisodia (2011); Ngaruye et al. (2016). In Ngaruye et al. (2016), the authors proposed a multivariate linear model for repeated measures data in a SAE context. The model is a combination of the classical growth curve model (Potthoff and Roy, 1964) with a random effects model. It accounts for longitudinal surveys, i.e., units are sampled once and then followed over time, for grouped response units and for time-correlated random effects.

In practice the repeated measures data obtained are often incomplete. In this article we extend the above-mentioned model to include a monotonic missing observation structure. In particular, drop-outs from the survey can be handled, i.e., the situation where it is planned to follow units over time but some units disappear before the end-point. Missing data may be due to a number of limitations, such as unexpected budget constraints, but it may also happen that, for various reasons, units whose measurements were expected to be sampled over time disappear from the survey.

The statistical analysis of data with missing values emerged in the early 1970s with the advancement of modern computer-based technology (Little and Rubin, 1987). Since then, several methods for the analysis of missing data have been developed, depending on whether the missing data mechanism is ignorable for inference, which includes data missing at random and missing completely at random, or nonignorable. Many authors have dealt with the problem of missing data; we refer, for example, to Little and Rubin (1987); Carriere (1999); Srivastava (2002); Kim and Timm (2006); Longford (2006). In particular, incomplete data in the classical growth curve model and in random effects growth curve models have been considered, for example, by Kleinbaum (1973); Woolson and Leeper (1980); Srivastava (1985); Liski (1985); Liski and Nummi (1990); Nummi (1997). The missing values are assumed to be distributed independently of the observed values.

In Section 2, we present the formulation of a multivariate linear model for repeated measures data. In Section 3 this model is extended to handle missing data and a canonical form of the extended model is derived. In Section 4, the estimation of parameters and the prediction of random

effects and small area means are derived.

2 Multivariate linear model for repeated measures data

In this section we consider the multivariate linear regression model for repeated measurements with covariates at p time points, suitable for discussing the SAE problem, which was defined by Ngaruye et al. (2016) for complete data. It is supposed that the target population of size N, whose characteristic of interest is y, is divided into m subpopulations called small areas of sizes N_d, d = 1, ..., m, and that the units in all small areas are grouped into k different categories. Furthermore, we assume the mean growth of each unit in area d, for each one of the k groups, to be, for example, a polynomial in time of degree q − 1, and we also suppose that we have covariate variables, related to the characteristic of interest, whose values are available for all units in the population. Out of the whole population of size N and the small areas of sizes N_d, n and n_d units are sampled according to some sampling scheme which, however, is technically of no interest in the present work.

The model at small area level for the sampled units is written

Y_d = ABC_d + 1_p γ' X_d + u_d z_d' + E_d,    u_d ~ N_p(0, Σ_u),    E_d ~ N_{p,n_d}(0, Σ_e, I_{n_d}),

and combining all m disjoint small areas and all n sampled units, divided into k non-overlapping groups, yields

Y = ABHC + 1_p γ' X + UZ + E,    U ~ N_{p,m}(0, Σ_u, I_m),  p ≤ m,    E ~ N_{p,n}(0, Σ_e, I_n),    (1)

where Σ_u is an unknown arbitrary positive definite matrix and Σ_e = σ_e² I_p is assumed to be known. In practice σ_e² is estimated from the survey and only depends on how many units are sampled from the total population N. In model (1), Y: p × n is the data matrix, A: p × q, q ≤ p, is the within-individuals design matrix indicating the time dependency within individuals, B: q × k is an unknown parameter matrix, C: mk × n, with rank(C) + p ≤ n and p ≤ m, is the between-individuals design matrix accounting for group effects, γ is an r-vector of fixed regression coefficients representing the effects of auxiliary variables, X: r × n is a known matrix holding the values of the covariates, the matrix U: p × m is a matrix of random effects whose columns are assumed to be independently distributed as a multivariate normal distribution with mean zero and a positive definite dispersion matrix Σ_u, i.e., U ~ N_{p,m}(0, Σ_u, I_m), Z: m × n is a design matrix for the random effects, and the columns of the error matrix E are assumed to be independently distributed as a p-variate normal distribution with mean zero and known covariance matrix Σ_e, i.e., E ~ N_{p,n}(0, Σ_e, I_n). More details about the model formulation and the estimation of the model parameters can be found in Ngaruye et al. (2016).
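To make the ingredients of model (1) concrete, the following minimal numpy sketch simulates one data set from it. All sizes, the polynomial within-individual design, and the way C and Z are built from area-by-group counts are illustrative assumptions made for the sketch, not the survey design of the paper.

```python
# Illustrative simulation of model (1): Y = A B H C + 1_p gamma' X + U Z + E.
import numpy as np

rng = np.random.default_rng(0)

p, q, k, m, r = 4, 2, 3, 5, 2            # time points, polynomial terms, groups, small areas, covariates
n_dg = rng.integers(3, 6, size=(m, k))   # sampled units per area d and group g (assumed)
n = int(n_dg.sum())

A = np.vander(np.arange(1, p + 1), q, increasing=True)  # within-individual design (linear growth)
B = rng.normal(size=(q, k))                              # unknown parameter matrix (true values for simulation)
H = np.hstack([np.eye(k)] * m)                           # k x km
gamma = rng.normal(size=r)                               # fixed regression coefficients
X = rng.normal(size=(r, n))                              # covariate values for the sampled units

# C: mk x n between-individuals design (area-by-group indicators),
# Z: m x n random-effects design with z_d = n_d^{-1/2} 1'_{n_d} so that Z Z' = I_m.
C = np.zeros((m * k, n))
Z = np.zeros((m, n))
col = 0
for d in range(m):
    start_d = col
    for g in range(k):
        C[d * k + g, col:col + n_dg[d, g]] = 1.0
        col += n_dg[d, g]
    Z[d, start_d:col] = 1.0 / np.sqrt(col - start_d)

Sigma_u = 0.5 * np.eye(p) + 0.5          # positive definite random-effects covariance (assumed)
sigma2_e = 1.0                           # known error variance sigma_e^2
U = rng.multivariate_normal(np.zeros(p), Sigma_u, size=m).T   # p x m, columns ~ N_p(0, Sigma_u)
E = rng.normal(scale=np.sqrt(sigma2_e), size=(p, n))

Y = A @ B @ H @ C + np.outer(np.ones(p), gamma @ X) + U @ Z + E   # p x n data matrix
```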

3 Incomplete data

Consider model (1) and suppose that there are missing values in such a way that the measurements taken at time t, t = 1, ..., p, on each unit are not all complete, and that the numbers of observations at the p different time points are n_1, ..., n_p, with n_1 ≥ n_2 ≥ ... ≥ n_p > p. Such a pattern of missing observations follows a so-called monotone sample. Let the sample observations be composed of h mutually disjoint sets according to the monotonic pattern of missing data, where the i-th set, i = 1, ..., h, is the sample data matrix Y_i: p_i × n_i whose units in the sample have completed i − 1 periods and failed to complete the i-th period, with p_i ≤ p and Σ_{i=1}^{h} p_i = p. For technical simplicity, in this paper we only study a three-step monotone missing structure, with complete sample data for a given number of time points and incomplete sample data for the other time points.

3.1 The model which handles missing data

In this article we will only present details for a three-step monotonic pattern. We assume that the model defined in (1) holds together with a monotonic missing structure. This extended model can be presented by three equations:

Y_1 = A_1 B H C_1 + 1_{p_1} γ' X_1 + U_1 Z_1 + E_1,    (2)
Y_2 = A_2 B H C_2 + 1_{p_2} γ' X_2 + U_2 Z_2 + E_2,    (3)
Y_3 = A_3 B H C_3 + 1_{p_3} γ' X_3 + U_3 Z_3 + E_3,    (4)

where A' = (A_1' : A_2' : A_3'), A_i: p_i × q, q < p, Σ_{i=1}^{3} p_i = p, H = (I_k : I_k : ... : I_k): k × km,

C_i = diag(C_i1, ..., C_im),    C_id = diag(1'_{n_id1}, ..., 1'_{n_idk}),

n_idg equals the number of observations for the response Y_i in the d-th small area and the g-th group, X_i represents all covariates for the response Y_i,

Z_i = diag(z_i1, ..., z_im),    z_id = (1/√n_id) 1'_{n_id},    i = 1, 2, 3,  d = 1, 2, ..., m,

U_1 = (I_{p_1} : 0 : 0)U,    U_2 = (0 : I_{p_2} : 0)U,    U_3 = (0 : 0 : I_{p_3})U,    U ~ N_{p,m}(0, Σ_u, I_m),

E_i ~ N_{p_i, n_i}(0, I_{p_i}, σ_i² I_{n_i}),

the {E_i} are mutually independent, and E_i is independent of U_i. In particular, the construction of Z_i helps to derive a number of mathematical results, including

C(Z_i') ⊆ C(C_i'),    Z_i Z_i' = I_m,    (5)

where C(Q) stands for the column vector space generated by the columns of the matrix Q.
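Below is a small sketch of how a three-step monotone pattern of the kind just described can be produced from the complete simulated data of the previous sketch. The split of the p time points and the per-cell retention fractions are assumptions for illustration; keeping an initial segment of every area-by-group cell gives nested (monotone) drop-out and keeps every C_i of full row rank.

```python
# Three-step monotone split of the simulated complete data (illustrative assumptions).
import numpy as np

p1, p2, p3 = 2, 1, 1                       # p_1 + p_2 + p_3 = p (assumed split of time points)
row_blocks = np.split(np.arange(p), np.cumsum([p1, p2]))   # time points belonging to Y_1, Y_2, Y_3
fracs = [1.0, 0.8, 0.6]                    # fraction of each area-by-group cell still observed (monotone)

Y_parts, C_parts, X_parts, Z_parts = [], [], [], []
for rows, f in zip(row_blocks, fracs):
    keep, col = [], 0
    for d in range(m):
        for g in range(k):
            n_cell = int(n_dg[d, g])
            keep.extend(range(col, col + max(1, int(np.ceil(f * n_cell)))))   # initial segment => nested sets
            col += n_cell
    keep = np.array(keep)
    Y_parts.append(Y[np.ix_(rows, keep)])   # Y_i : p_i x n_i
    C_parts.append(C[:, keep])              # C_i : mk x n_i
    X_parts.append(X[:, keep])              # X_i : r  x n_i
    ind = (Z[:, keep] > 0).astype(float)    # area membership of the retained units
    Z_parts.append(ind / np.sqrt(ind.sum(axis=1, keepdims=True)))   # rows z_id = n_id^{-1/2} 1'_{n_id}

n_i = [Yp.shape[1] for Yp in Y_parts]       # n_1 >= n_2 >= n_3
```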

3.2 A canonical version of the model

The model defined through (2), (3) and (4) will now be transformed into a simpler model which will be utilized when estimating the unknown parameters. A couple of definitions need to be introduced, but first it is noted that, because C(Z_i') ⊆ C(C_i'), the matrices

(C_iC_i')^{-1/2} C_i Z_i' Z_i C_i' (C_iC_i')^{-1/2},    i = 1, 2, 3,

are idempotent. It is supposed that we have so many observations that the inverses exist. Therefore there exists an orthogonal matrix Γ_i = (Γ_i1 : Γ_i2), with Γ_i1: km × m and Γ_i2: km × (k − 1)m, such that

(C_iC_i')^{-1/2} C_i Z_i' Z_i C_i' (C_iC_i')^{-1/2} = Γ_i ( I_m  0 ) Γ_i' = Γ_i1 Γ_i1',    i = 1, 2, 3.
                                                          ( 0    0 )

Moreover, Γ_i1'Γ_i1 = I_m. Put

K_ij = H (C_iC_i')^{1/2} Γ_ij,    i = 1, 2, 3,  j = 1, 2,
R_ij = C_i' (C_iC_i')^{-1/2} Γ_ij,    i = 1, 2, 3,  j = 1, 2,    (6)

and let Q^o be any matrix of full rank spanning C(Q)^⊥, the orthogonal complement to C(Q). The following transformations of Y_i, i = 1, 2, 3, are made:

V_i0 = Y_i (C_i')^o = 1_{p_i} γ' X_i (C_i')^o + E_i (C_i')^o,    i = 1, 2, 3,    (7)
V_i1 = Y_i R_i1 = A_i B K_i1 + 1_{p_i} γ' X_i R_i1 + (U_i Z_i + E_i) R_i1,    i = 1, 2, 3,    (8)
V_i2 = Y_i R_i2 = A_i B K_i2 + 1_{p_i} γ' X_i R_i2 + E_i R_i2,    i = 1, 2, 3.    (9)
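The canonical transformation (6)-(9) can be computed directly from the matrices above. The following sketch is an illustration under the assumption that every C_i has full row rank (every area-by-group cell is observed in each block); the idempotent matrix is diagonalized numerically to obtain Γ_i1 and Γ_i2, and the identities R_i1'R_i1 = I_m and Z_iR_i1R_i1'Z_i' = I_m are verified as sanity checks of the construction.

```python
# Canonical transformation (6)-(9), computed numerically (illustrative sketch).
import numpy as np
from scipy.linalg import null_space

def canonical_parts(Y_i, C_i, Z_i, H):
    """Return V_i0, V_i1, V_i2 and the matrices R_i1, R_i2, K_i1, K_i2 of (6)-(9)."""
    w, Q = np.linalg.eigh(C_i @ C_i.T)                  # C_i C_i' assumed nonsingular
    CC_mhalf = Q @ np.diag(w ** -0.5) @ Q.T             # (C_i C_i')^{-1/2}
    CC_phalf = Q @ np.diag(w ** 0.5) @ Q.T              # (C_i C_i')^{ 1/2}
    P = CC_mhalf @ C_i @ Z_i.T @ Z_i @ C_i.T @ CC_mhalf # idempotent with rank m
    eigval, eigvec = np.linalg.eigh(P)
    G1 = eigvec[:, eigval > 0.5]                        # Gamma_i1 (eigenvalue 1)
    G2 = eigvec[:, eigval <= 0.5]                       # Gamma_i2 (eigenvalue 0)
    R1, R2 = C_i.T @ CC_mhalf @ G1, C_i.T @ CC_mhalf @ G2
    K1, K2 = H @ CC_phalf @ G1, H @ CC_phalf @ G2
    Co = null_space(C_i)                                # (C_i')^o : orthonormal basis of the null space of C_i
    return Y_i @ Co, Y_i @ R1, Y_i @ R2, R1, R2, K1, K2

for Y_i, C_i, Z_i in zip(Y_parts, C_parts, Z_parts):
    V0, V1, V2, R1, R2, K1, K2 = canonical_parts(Y_i, C_i, Z_i, H)
    assert np.allclose(R1.T @ R1, np.eye(m), atol=1e-8)                  # R_i1' R_i1 = I_m
    assert np.allclose(Z_i @ R1 @ R1.T @ Z_i.T, np.eye(m), atol=1e-8)    # Z_i R_i1 R_i1' Z_i' = I_m
```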

3.3 The likelihood

The transformation carried out in the previous section is one-to-one. Based on {V_ij}, i = 1, 2, 3, j = 0, 1, 2, we will set up the likelihood for all observations. However, we first present the marginal densities (likelihood functions) for {V_ij}, which of course are normally distributed. Thus, to determine the distributions it is enough to present the means and dispersion matrices:

E[V_i0] = 1_{p_i} γ' X_i (C_i')^o,    D[V_i0] = σ_i² (C_i')^{o'} (C_i')^o ⊗ I_{p_i},    i = 1, 2, 3,    (10)
E[V_i1] = A_i B K_i1 + 1_{p_i} γ' X_i R_i1,    D[V_i1] = R_i1'Z_i'Z_iR_i1 ⊗ Σ_{u,ii} + σ_i² R_i1'R_i1 ⊗ I_{p_i},    i = 1, 2, 3,    (11)
E[V_i2] = A_i B K_i2 + 1_{p_i} γ' X_i R_i2,    D[V_i2] = σ_i² R_i2'R_i2 ⊗ I_{p_i},    i = 1, 2, 3.    (12)

Concerning the simultaneous distribution of {V_ij}, i = 1, 2, 3, j = 0, 1, 2, the variables V_i0 and V_i2, i = 1, 2, 3, are independently distributed, and these variables are also independent of {V_i1}. However, the elements in {V_i1} are not independently distributed. We have to pay attention to the likelihood of these variables, and {vecV_i1}, i = 1, 2, 3, will be considered. Let L(V; Θ) denote the likelihood function for the random variable V with parameter Θ. We are going to discuss

L(vecV_31, vecV_21, vecV_11; ·) = L(vecV_31 | vecV_21, vecV_11; ·) L(vecV_21 | vecV_11; ·) L(vecV_11; ·),    (13)

where · in (13) indicates that no parameters have been specified. Before obtaining some useful results we need a few technical relations concerning Z_i, i = 1, 2, 3. To some extent the next lemma is our main contribution, because without it the mathematics would become very difficult to carry out. Note that the result depends on the definition of Z_i, i = 1, 2, 3.

Lemma 3.1. Let Z_i, i = 1, 2, 3, be as in (2), (3) and (4), and let R_i1, i = 1, 2, 3, be defined in (6). Then

(i) Z_i R_i1 R_i1' Z_i' = I_m;
(ii) R_i1' Z_i' Z_i R_i1 = I_m.

Proof. Using (6), (5) and the definition of Γ_i1 it follows that

Z_i R_i1 R_i1' Z_i' = Z_i C_i'(C_iC_i')^{-1/2} Γ_i1 Γ_i1' (C_iC_i')^{-1/2} C_i Z_i' = Z_i P_{C_i'} Z_i' Z_i P_{C_i'} Z_i' = Z_i Z_i' Z_i Z_i' = I_m,

where P_{C_i'} = C_i'(C_iC_i')^{-1}C_i is the unique orthogonal projection on C(C_i'), and thus statement (i) is established. Moreover, once again using (6) and the definition of Γ_i1,

R_i1' Z_i' Z_i R_i1 = Γ_i1'(C_iC_i')^{-1/2} C_i Z_i' Z_i C_i'(C_iC_i')^{-1/2} Γ_i1 = Γ_i1'Γ_i1 Γ_i1'Γ_i1 = I_m,

and statement (ii) is verified.

The next result will be used in the forthcoming presentation:

D[(vecV_11', vecV_21', vecV_31')'] = (R_i1'Z_i'Z_jR_j1 ⊗ Σ_{u,ij})_{i,j=1,2,3} + diag(σ_i² R_i1'R_i1 ⊗ I_{p_i}),    (14)

where (·)_{i,j=1,2,3} denotes a block partitioned matrix and diag(·) operates as follows:

diag(Q_ii) = ( Q_11  0     0    )
             ( 0     Q_22  0    ),
             ( 0     0     Q_33 )

which is obtained by straightforward calculations. Note that Lemma 3.1, together with the fact that R_i1'R_i1 = I, yields that the variances in (14) are of the form I ⊗ (Σ_{u,ii} + σ_i² I_{p_i}), i = 1, 2, 3, which is an important result.

From the factorization of the likelihood in (13) it follows that we have to investigate L(vecV_31 | vecV_21, vecV_11; ·). Thus we are interested in the conditional expectation and the conditional dispersion. The conditional mean equals

E[vecV_31 | vecV_11, vecV_21] = E[vecV_31] + (C[V_31, V_11], C[V_31, V_21]) D[(vecV_11', vecV_21')']^{-1} ((vecV_11', vecV_21')' − (E[vecV_11]', E[vecV_21]')'),

where the expectations for vecV_i1, i = 1, 2, 3, can be obtained from (11). Moreover, the conditional dispersion is given by

D[vecV_31 | vecV_11, vecV_21] = D[V_31] − (C[V_31, V_11], C[V_31, V_21]) D[(vecV_11', vecV_21')']^{-1} (C[V_31, V_11], C[V_31, V_21])'.

The next lemma fills in the details of this relation and of the conditional mean, and indeed shows that relatively complicated expressions can be dramatically simplified using Lemma 3.1.

Lemma 3.2. Let V_i1, i = 1, 2, 3, be defined in (8). Then

(i) D[V_31] = I ⊗ (Σ_{u,33} + σ_3² I_{p_3});

(ii) C[V_31, V_11] = R_31'Z_3'Z_1R_11 ⊗ Σ_{u,31};

(iii) C[V_31, V_21] = R_31'Z_3'Z_2R_21 ⊗ Σ_{u,32};

(iv) D[(vecV_11', vecV_21')'] = ( I ⊗ (Σ_{u,11} + σ_1² I_{p_1})      R_11'Z_1'Z_2R_21 ⊗ Σ_{u,12}   )
                                ( R_21'Z_2'Z_1R_11 ⊗ Σ_{u,21}        I ⊗ (Σ_{u,22} + σ_2² I_{p_2}) );

(v) D[(vecV_11', vecV_21')']^{-1} = ( Q_11^{-1}  0 ) + ( −Q_11^{-1}Q_12 ) (Q_22 − Q_21Q_11^{-1}Q_12)^{-1} ( −Q_21Q_11^{-1} : I ),
                                    ( 0          0 )   (  I             )

where

Q_11^{-1} = I ⊗ (Σ_{u,11} + σ_1² I_{p_1})^{-1},
Q_11^{-1}Q_12 = R_11'Z_1'Z_2R_21 ⊗ (Σ_{u,11} + σ_1² I_{p_1})^{-1} Σ_{u,12},
Q_22 − Q_21Q_11^{-1}Q_12 = I ⊗ (Σ_{u,22} + σ_2² I_{p_2} − Σ_{u,21}(Σ_{u,11} + σ_1² I_{p_1})^{-1} Σ_{u,12});

(vi) (C[V_31, V_11], C[V_31, V_21]) D[(vecV_11', vecV_21')']^{-1} (C[V_31, V_11], C[V_31, V_21])' = I ⊗ (Σ_{u,31}(Σ_{u,11} + σ_1² I_{p_1})^{-1} Σ_{u,13} + Ψ_32 Ψ_22^{-1} Ψ_23),

where

Ψ_32 = Ψ_23' = Σ_{u,32} − Σ_{u,31}(Σ_{u,11} + σ_1² I_{p_1})^{-1} Σ_{u,12},
Ψ_22 = Σ_{u,22} + σ_2² I_{p_2} − Σ_{u,21}(Σ_{u,11} + σ_1² I_{p_1})^{-1} Σ_{u,12}.

Proof. Statements (i), (ii), (iii) and (iv) follow directly from (14). In (v) the inverse of a partitioned matrix is utilized, and (vi) is obtained by straightforward matrix manipulations and application of Lemma 3.1.

Put

B_1 = Σ_{u,31}(Σ_{u,11} + σ_1² I_{p_1})^{-1},    (15)
B_2 = Σ_{u,32} Ψ_22^{-1},    (16)
Ψ_33 = Σ_{u,33} − Σ_{u,31}(Σ_{u,11} + σ_1² I_{p_1})^{-1} Σ_{u,13},    (17)

where Ψ_22 is given in Lemma 3.2. The next theorem is then directly established using Lemma 3.2.

Theorem 3.1. Let V_i1, i = 1, 2, 3, be defined in (8), let Ψ_ij, i, j = 2, 3, be defined in Lemma 3.2 and (17), and let B_1 and B_2 be given by (15) and (16), respectively. Then

vecV_31 | vecV_11, vecV_21 ~ N_{p_3 m}(M_31, D_31),

where

M_31 = E[vecV_31 | vecV_11, vecV_21]
     = E[vecV_31]
       + (R_31'Z_3'Z_1R_11 ⊗ B_1(I + Σ_{u,12} Ψ_22^{-1} Σ_{u,21}(Σ_{u,11} + σ_1² I_{p_1})^{-1})) vec(V_11 − E[V_11])
       − (R_31'Z_3'Z_2R_21 ⊗ B_1 Σ_{u,12}) vec(V_21 − E[V_21])
       + (R_31'Z_3'Z_2R_21 ⊗ B_2) vec(V_21 − E[V_21])
       − (R_31'Z_3'Z_1R_11 ⊗ B_2 Σ_{u,21}(Σ_{u,11} + σ_1² I_{p_1})^{-1}) vec(V_11 − E[V_11]),

and where

D_31 = D[vecV_31 | vecV_11, vecV_21] = I_m ⊗ Ψ_{3·2},    Ψ_{3·2} = Ψ_33 − Ψ_32 Ψ_22^{-1} Ψ_23.
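As a numerical illustration of Theorem 3.1, the sketch below assembles the joint dispersion (14) of (vecV_11', vecV_21', vecV_31')' for the simulated data, conditions on the first two blocks via a Schur complement, and checks that the resulting conditional dispersion indeed has the Kronecker form I_m ⊗ Ψ_{3·2}. The quantities σ_i² and Σ_u are taken as the values used in the simulation sketches above, which is an assumption since in practice they are unknown.

```python
# Numeric check that D[vec V_31 | vec V_11, vec V_21] = I_m kron Psi_{3.2} (Theorem 3.1).
import numpy as np

sig2 = [sigma2_e] * 3                                   # sigma_i^2, all equal here by assumption
R1s = [canonical_parts(Y_i, C_i, Z_i, H)[3]
       for Y_i, C_i, Z_i in zip(Y_parts, C_parts, Z_parts)]

def cov_block(i, j):
    """Covariance block Cov(vec V_i1, vec V_j1) from (14)."""
    G = R1s[i].T @ Z_parts[i].T @ Z_parts[j] @ R1s[j]
    D = np.kron(G, Sigma_u[np.ix_(row_blocks[i], row_blocks[j])])
    if i == j:
        D += sig2[i] * np.kron(R1s[i].T @ R1s[i], np.eye(len(row_blocks[i])))
    return D

D12 = np.block([[cov_block(i, j) for j in (0, 1)] for i in (0, 1)])
C3 = np.hstack([cov_block(2, 0), cov_block(2, 1)])
D_cond = cov_block(2, 2) - C3 @ np.linalg.solve(D12, C3.T)   # conditional dispersion of vec V_31

Psi_3_2 = D_cond[:p3, :p3]                              # the repeated p_3 x p_3 block
assert np.allclose(D_cond, np.kron(np.eye(m), Psi_3_2), atol=1e-6)
```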

The result of the theorem shows that, conditionally on vecV_11 and vecV_21, and if E[vecV_11], E[vecV_21], Σ_{u,21}, Σ_{u,11} and Ψ_22 do not depend on unknown parameters, the model for vecV_31 with unknown mean parameters B_1 and B_2 and unknown dispersion Ψ_{3·2} is the same as a vectorized MANOVA model (e.g., see Srivastava, 2002, for information about MANOVA). Moreover, it follows from (13) that L(vecV_21 | vecV_11; ·) is needed. However, the calculations are the same as above and we only present the final result.

Theorem 3.2. Let V_i1, i = 1, 2, be defined in (8) and Ψ_22 in Lemma 3.2. Put B_0 = Σ_{u,21}(Σ_{u,11} + σ_1² I_{p_1})^{-1}. Then

vecV_21 | vecV_11 ~ N_{p_2 m}(M_21, I_m ⊗ Ψ_22),

where

M_21 = E[vecV_21 | vecV_11] = E[vecV_21] + (R_21'Z_2'Z_1R_11 ⊗ B_0) vec(V_11 − E[V_11]).

Hence, it has been established that vecV_21 | vecV_11 follows a vectorized MANOVA model.

Theorem 3.3. The likelihood for {V_ij}, i = 1, 2, 3, j = 0, 1, 2, given in (7), (8) and (9) equals

L({V_ij}, i = 1, 2, 3, j = 0, 1, 2; γ, B, Σ_u)
  = ∏_{i=1}^{3} L(V_i0; γ) ∏_{i=1}^{3} L(V_i2; γ, B)
    × L(vecV_31 | vecV_11, vecV_21; γ, B, Σ_{u,33}, Σ_{u,22}, Σ_{u,12}, Σ_{u,11}, B_1, B_2)
    × L(V_21 | V_11; γ, B, B_0, Σ_{u,22}, Σ_{u,11}) L(V_11; γ, B, Σ_{u,11}),

where all parameters mentioned in the likelihoods have been defined earlier in Section 3.

4 Estimation of parameters and prediction of small area means

For the monotone missing value problem treated in the previous sections, it was shown that it is possible to present a model which is easy to utilize. The remaining part of the report consists of a relatively straightforward approach for predicting the small area means, which is the concern of this article.

4.1 Estimation

In order to estimate the parameters, a restricted likelihood approach is proposed, which is described in the next proposition.

Proposition 4.1. For the likelihood given in Theorem 3.3, B and γ are estimated by maximizing

∏_{i=1}^{3} L(V_i0; γ) ∏_{i=1}^{3} L(V_i2; γ, B).

Inserting these estimators in

L(V_21 | V_11; γ, B, B_0, Σ_{u,22}, Σ_{u,11}) L(V_11; γ, B, Σ_{u,11}),

and thereafter maximizing the likelihoods with respect to the remaining unknown parameters produces estimators for Σ_{u,11}, Σ_{u,12} and Σ_{u,22}. Inserting all the obtained estimators in

L(vecV_31 | vecV_11, vecV_21; γ, B, Σ_{u,33}, Σ_{u,22}, Σ_{u,12}, Σ_{u,11}, B_1, B_2)

and then maximizing the likelihood with respect to B_1, B_2 and Ψ_33 − Ψ_32 Ψ_22^{-1} Ψ_23 yields estimators for Σ_{u,31}, Σ_{u,32} and Σ_{u,33}.

4.2 Prediction

In order to perform predictions of small area means we first have to predict U_1, U_2 and U_3 in the model given by (2), (3) and (4). Put

y = (vecY_1', vecY_2', vecY_3')'  and  v = (vecU_1', vecU_2', vecU_3')'.

Following Henderson's prediction approach for the linear mixed model (Henderson, 1975), the prediction of v can be derived in two stages, where at the first stage Σ_u is supposed to be known. Thus the plan is to maximize the joint density

f(y, v) = f(y | v) f(v) = c exp{ −(1/2) [ (y − μ)' Σ^{-1} (y − μ) + v' Ω^{-1} v ] },    (18)

with respect to vecB and γ, which are included in μ, and v, which is included in μ but also appears in the term v' Ω^{-1} v. Moreover, in (18) c is a known constant and Ω is given by

Ω = ( I ⊗ Σ_{u,11}   I ⊗ Σ_{u,12}   I ⊗ Σ_{u,13} )
    ( I ⊗ Σ_{u,21}   I ⊗ Σ_{u,22}   I ⊗ Σ_{u,23} ).
    ( I ⊗ Σ_{u,31}   I ⊗ Σ_{u,32}   I ⊗ Σ_{u,33} )

The vector μ and the matrix Σ are the expectation and the dispersion of y | v, respectively, and are given by

E[y | v] = μ = H_1 vecB + H_2 γ + H_3 v,

where

H_1 = ( C_1'H' ⊗ A_1 )
      ( C_2'H' ⊗ A_2 ),
      ( C_3'H' ⊗ A_3 )

H_2 = ( X_1' ⊗ 1_{p_1} )
      ( X_2' ⊗ 1_{p_2} ),
      ( X_3' ⊗ 1_{p_3} )

H_3 = diag( Z_1' ⊗ I_{p_1}, Z_2' ⊗ I_{p_2}, Z_3' ⊗ I_{p_3} ),

and

D[y | v] = Σ = diag( σ_1² I_{p_1 n_1}, σ_2² I_{p_2 n_2}, σ_3² I_{p_3 n_3} ).

Supposing Σ_u to be known, and then using (18) together with standard results from linear models theory, we find estimators of the unknown parameters and of v as functions of Σ_u; thereafter, replacing Σ_u by its estimator, which is obtained as described in Section 4.1, yields a predictor v̂ of v, among other estimators.
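As a sketch of the first stage of this prediction step, the code below treats Σ_u (and σ_i²) as known, stacks the fixed effects as β = (vecB', γ')', and solves the normal equations obtained from maximizing (18) jointly in β and v (Henderson-type mixed-model equations). All matrices are built from the simulated three-step data of the earlier sketches, so the numbers are illustrative only; in the paper Σ_u would instead be replaced by the estimator from Section 4.1.

```python
# Joint maximisation of (18) with Sigma_u known: Henderson-type equations (illustrative sketch).
import numpy as np
from scipy.linalg import block_diag

ps = [len(rb) for rb in row_blocks]                     # (p_1, p_2, p_3)
y = np.concatenate([Y_i.reshape(-1, order="F") for Y_i in Y_parts])   # (vec Y_1', vec Y_2', vec Y_3')'

H1 = np.vstack([np.kron(C_i.T @ H.T, A[row_blocks[i], :]) for i, C_i in enumerate(C_parts)])
H2 = np.vstack([np.kron(X_i.T, np.ones((ps[i], 1))) for i, X_i in enumerate(X_parts)])
H3 = block_diag(*[np.kron(Z_i.T, np.eye(ps[i])) for i, Z_i in enumerate(Z_parts)])

Sigma = block_diag(*[sig2[i] * np.eye(Y_i.size) for i, Y_i in enumerate(Y_parts)])   # D[y | v]
Omega = np.block([[np.kron(np.eye(m), Sigma_u[np.ix_(row_blocks[i], row_blocks[j])])
                   for j in range(3)] for i in range(3)])                            # D[v]

W = np.hstack([H1, H2])                                 # fixed-effects design for beta = (vec B', gamma')'
Si = np.linalg.inv(Sigma)                               # diagonal, cheap to invert
lhs = np.block([[W.T @ Si @ W,  W.T @ Si @ H3],
                [H3.T @ Si @ W, H3.T @ Si @ H3 + np.linalg.inv(Omega)]])
rhs = np.concatenate([W.T @ Si @ y, H3.T @ Si @ y])
sol, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)

B_hat = sol[:q * k].reshape((q, k), order="F")
gamma_hat = sol[q * k:q * k + r]
v_hat = sol[W.shape[1]:]                                # predicted (vec U_1', vec U_2', vec U_3')'
U_hats = [u.reshape((ps[i], m), order="F")
          for i, u in enumerate(np.split(v_hat, np.cumsum([ps[i] * m for i in range(3)])[:-1]))]
```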

The prediction of small area means is performed under the superpopulation model approach to finite populations, in the sense that estimating the small area means is equivalent to predicting the small area means of the non-sampled values, given the sample data and auxiliary data. To this end, for each d-th area and each g-th group of units, we consider the means for the sample observations of the data matrices Y_1, Y_2 and Y_3 and predict the means of the non-sampled values. We use the superscripts (s) and (r) to indicate the corresponding partitions for the observed sample data and the non-observed units in the target population, respectively. Therefore, we denote by X_id^(r): r × (N_d − n_id), C_id^(r): mk × (N_d − n_id) and z_id^(r): (N_d − n_id) × 1 the corresponding matrix of covariates, design matrix and design vector for the non-sampled units in the population, respectively. Then, the prediction of the small area means at each time point and for the different groups of units is presented in the next proposition.

Proposition 4.2. Consider repeated measures data with missing values on the variable of interest, for three-step monotone sample data described by models (2)-(4). Then the target small area means at each time point are the elements of the vectors

μ_d = (1/N_d) ( μ_d^(s) + μ_d^(r) ),    d = 1, ..., m,

where

μ_d^(s) = ( Y_1d^(s) 1_{n_1d} )
          ( Y_2d^(s) 1_{n_2d} )
          ( Y_3d^(s) 1_{n_3d} )

and

μ_d^(r) = ( (A_1 B̂ H C_1d^(r) + 1_{p_1} γ̂' X_1d^(r) + û_1d z_1d^(r)') 1_{N_d − n_1d} )
          ( (A_2 B̂ H C_2d^(r) + 1_{p_2} γ̂' X_2d^(r) + û_2d z_2d^(r)') 1_{N_d − n_2d} ),    d = 1, ..., m.
          ( (A_3 B̂ H C_3d^(r) + 1_{p_3} γ̂' X_3d^(r) + û_3d z_3d^(r)') 1_{N_d − n_3d} )

The small area means at each time point, for each group of units and for the complete and incomplete data sets, are given by

μ_dg = (1/N_dg) ( μ_dg^(s) + μ_dg^(r) ),    d = 1, ..., m,  g = 1, ..., k,

where

μ_dg^(s) = ( Y_1d^(s) 1_{n_1dg} )
           ( Y_2d^(s) 1_{n_2dg} )
           ( Y_3d^(s) 1_{n_3dg} )

and

μ_dg^(r) = ( (A_1 B̂ H C_1dg^(r) + 1_{p_1} γ̂' X_1dg^(r) + û_1d z_1dg^(r)') 1_{N_dg − n_1dg} )
           ( (A_2 B̂ H C_2dg^(r) + 1_{p_2} γ̂' X_2dg^(r) + û_2d z_2dg^(r)') 1_{N_dg − n_2dg} ),    d = 1, ..., m,  g = 1, ..., k.
           ( (A_3 B̂ H C_3dg^(r) + 1_{p_3} γ̂' X_3dg^(r) + û_3d z_3dg^(r)') 1_{N_dg − n_3dg} )

Note that the predicted vector û_id is the d-th column of the predicted matrix Û_i, i = 1, ..., 3, and β̂_g is the g-th column of the estimated parameter matrix B̂ corresponding to group g. A direct application of Proposition 4.2 is to find the target small area means for each group across all time points, obtained as a linear combination of μ_dg depending on the type of the characteristic of interest.
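The following sketch illustrates how Proposition 4.2 combines observed sample totals with model predictions for the non-sampled units, using the estimates B̂, γ̂ and Û_i from the Henderson-type sketch above. The population sizes N_d, the covariates of the non-sampled units, their group assignment, and the scaling of z_id^(r) are all hypothetical quantities introduced only for the illustration.

```python
# Predicted small area means (in the spirit of Proposition 4.2), with hypothetical non-sampled-unit data.
import numpy as np

N_d = np.full(m, 40)                                 # assumed area population sizes
ps = [len(rb) for rb in row_blocks]
mu_d = []
for d in range(m):
    parts = []
    for i in range(3):
        in_area = Z_parts[i][d] > 0                  # sampled units of area d observed in block i
        n_id = int(in_area.sum())
        sample_total = Y_parts[i][:, in_area].sum(axis=1)   # Y_id^(s) 1_{n_id}
        n_r = N_d[d] - n_id                          # number of non-sampled units in area d
        X_r = rng.normal(size=(r, n_r))              # hypothetical covariates X_id^(r)
        G_r = np.tile(np.eye(k)[:, [0]], (1, n_r))   # group indicators; H C_id^(r) reduces to this form
                                                     # (all non-sampled units put in group 1 for illustration)
        z_r = np.full(n_r, 1.0 / np.sqrt(n_id))      # assumed scaling of z_id^(r)
        pred = (A[row_blocks[i], :] @ B_hat @ G_r
                + np.outer(np.ones(ps[i]), gamma_hat @ X_r)
                + np.outer(U_hats[i][:, d], z_r))
        parts.append((sample_total + pred.sum(axis=1)) / N_d[d])
    mu_d.append(np.concatenate(parts))               # predicted mean of area d at each of the p time points
```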

References

Carriere, K. (1999). Methods for repeated measures data analysis with missing values. Journal of Statistical Planning and Inference, 77(2):221–236.

Consortium, T. E. (2004). Enhancing small area estimation techniques to meet European needs. Technical report, Office for National Statistics, London.

Ferrante, M. R. and Pacei, S. (2004). Small area estimation for longitudinal surveys. Statistical Methods & Applications, 13(3):327–340.

Henderson, C. R. (1975). Best linear unbiased estimation and prediction under a selection model. Biometrics, 31(2):423–447.

Kim, K. and Timm, N. (2006). Univariate and Multivariate General Linear Models: Theory and Applications with SAS. CRC Press.

Kleinbaum, D. G. (1973). A generalization of the growth curve model which allows missing data. Journal of Multivariate Analysis, 3(1):117–124.

Liski, E. P. (1985). Estimation from incomplete data in growth curves models. Communications in Statistics - Simulation and Computation, 14(1):13–27.

Liski, E. P. and Nummi, T. (1990). Prediction in growth curve models using the EM algorithm. Computational Statistics and Data Analysis, 10(2):99–108.

Little, R. J. and Rubin, D. B. (1987). Statistical analysis with missing data. John Wiley & Sons, New York.

Longford, N. T. (2006). Missing data and small-area estimation: Modern analytical equipment for the survey statistician. Springer Science & Business Media, New York.

Ngaruye, I., Nzabanita, J., von Rosen, D., and Singull, M. (2016). Small area estimation under a multivariate linear model for repeated measures data. Communications in Statistics - Theory and Methods. http://dx.doi.org/10.1080/03610926.2016.1248784.

Nissinen, K. (2009). Small Area Estimation with Linear Mixed Models from Unit-level Panel and Rotating Panel Data. PhD thesis, University of Jyväskylä.

Nummi, T. (1997). Estimation in a random effects growth curve model. Journal of Applied Statistics, 24(2):157–168.

Pfeffermann, D. (2002). Small area estimation - new developments and directions. International Statistical Review, 70(1):125–143.

Potthoff, R. F. and Roy, S. N. (1964). A generalized multivariate analysis of variance model useful especially for growth curve problems. Biometrika, 51(3/4):313–326.

Rao, J. N. K. (2003). Small Area Estimation. John Wiley and Sons, New York.

Singh, B. and Sisodia, B. S. V. (2011). Small area estimation in longitudinal surveys. Journal of Reliability and Statistical Studies, 4(2):83–91.

Srivastava, M. (1985). Multivariate data with missing observations. Communications in Statistics - Theory and Methods, 14(4):775–792.

Srivastava, M. S. (2002). Methods of Multivariate Statistics. Wiley-Interscience, New York.

Woolson, R. F. and Leeper, J. D. (1980). Growth curve analysis of complete and incomplete longitudinal data. Communications in Statistics - Theory and Methods, 9(14):1491–1513.