Marcia Gumpertz and Sastry G. Pantula Department of Statistics North Carolina State University Raleigh, NC
A Simple Approach to Inference in Random Coefficient Models

March 8, 1988

Marcia Gumpertz and Sastry G. Pantula
Department of Statistics
North Carolina State University
Raleigh, NC

Key Words and Phrases: Repeated measures regression; Asymptotic inference; Estimated generalized least squares; Growth curve.

ABSTRACT

Random coefficient regression models have been used to analyze cross-sectional and longitudinal data in economics and growth curve data from biological and agricultural experiments. In the literature several estimators, including the ordinary least squares (OLS) and the estimated generalized least squares (EGLS) estimators, have been considered for estimating the parameters of the mean model. Based on the asymptotic properties of the EGLS estimators, test statistics have been proposed for testing linear hypotheses involving the parameters of the mean model. An alternative estimator, the simple mean of the individual regression coefficients, provides estimation and hypothesis testing procedures that are simple to compute and simple to teach. The large sample properties of this simple estimator are shown to be similar to those of the EGLS estimator. The performance of the proposed estimator is compared with that of the existing estimators by Monte Carlo simulation.
1. INTRODUCTION

Frequently in biological, medical, agricultural and clinical studies several measurements are taken on the same experimental unit over time, with the objective of fitting a response curve to the data. Such studies are called growth curve, repeated measures or longitudinal studies. In many biomedical and agricultural experiments the number of experimental units is large and the number of repeated measurements on each unit is small. On the other hand, some economic investigations and meteorological experiments involve a small number of units observed over a long period of time. Several models for analyzing such data exist in the literature; the models usually differ in their covariance structures. See Harville (1977) and Jennrich and Schluchter (1986) for a review of the models and of approaches for estimating parameters. Recently, there seems to be a renewed interest in analyzing repeated measures data using Random Coefficient Regression (RCR) models. In this article we present a brief description of the RCR models and some of the existing results for these models. We present a simple approach that is not difficult to discuss in a course on linear models. We compare our estimation procedure with two of the existing methods. Section 2 contains the assumptions of the model and a brief review of the literature. In Section 3 we present the properties of the simple estimator. A Monte Carlo comparison of the estimators is presented in Section 4. Finally, we conclude with a summary which includes some possible extensions.
2. RCR MODEL

Suppose that the t observations on the ith of n experimental units are described by the model

    y_i = X_i β_i + e_i,   i = 1, 2, ..., n,   (2.1)

where y_i = (y_{i1}, y_{i2}, ..., y_{it})' is a t×1 vector of observations on the response variable, X_i is a t×k matrix of observations on k explanatory variables, β_i is a k×1 vector of coefficients unique to the ith experimental unit, and e_i is a t×1 vector of errors. Each experimental unit and its response curve is considered to be selected from a larger population of response curves; thus the regression coefficient vectors β_i, i = 1, 2, ..., n, may be viewed as random drawings from some k-variate population, and hence (2.1) is called an RCR model. In this paper we discuss the estimation and testing of such models under the following assumptions:

(i) the e_i vectors are independent multivariate normal variables with mean zero and covariance matrix σ² I_t;
(ii) the β_i vectors are independent multivariate normal variables with mean β and nonsingular covariance matrix Σ_ββ;
(iii) the vectors β_i and e_j are independent for all i and j;
(iv) the X_i matrices are fixed and of full column rank for each i;
(v) min(n, t) > k; and
(vi) there exists an M < ∞ such that the elements of t(X_i'X_i)^{-1} are less than M in absolute value for all i and t.

Assumption (vi) is not very restrictive. It is satisfied for models that include polynomials in time and stationary exogenous variables. Several authors, including Rao (1965), Swamy (1971), Hsiao (1975), Harville (1977), Laird and Ware (1982), Jennrich and Schluchter (1986) and Carter and Yang (1986), have considered estimation and testing for RCR models. We summarize the results of Carter and Yang (1986), since they consider the large sample distribution of the estimated generalized least squares (EGLS) estimator
as n and/or t tend to infinity. For the sake of simplicity, we have assumed that an equal number of repeated measurements is taken on all experimental units and that the variance of the error vector e_i does not depend on i. However, similar results exist for more general cases and will be discussed in the summary.

Consider the least squares estimators

    b_i = (X_i'X_i)^{-1} X_i'y_i,   i = 1, 2, ..., n,   (2.2)

of β computed for each individual experimental unit. Note that the b_i's are independent and normally distributed with mean β and variance

    W_i = Σ_ββ + σ²(X_i'X_i)^{-1}.

Therefore, the best (linear) unbiased estimator of β is the generalized least squares (GLS) estimator. Swamy (1971) showed that

    β̂_GLS = (Σ_{i=1}^n W_i^{-1})^{-1} (Σ_{i=1}^n W_i^{-1} b_i);   (2.3)

that is, the GLS estimator is the "weighted" least squares (average) estimator of the b_i, where the weights are the inverse variance-covariance matrices of the b_i. Under the normality assumption, β̂_GLS is also the maximum likelihood estimator of β (provided Σ_ββ and σ² are known). The elements of Σ_ββ and σ² are seldom known, and hence we consider the estimated GLS (EGLS) estimator

    β̂_EGLS = (Σ_{i=1}^n Ŵ_i^{-1})^{-1} (Σ_{i=1}^n Ŵ_i^{-1} b_i),   (2.4)

where
    Ŵ_i = Σ̂_ββ + σ̂²(X_i'X_i)^{-1},

    Σ̂_ββ = S_bb − n^{-1} σ̂² Σ_{i=1}^n (X_i'X_i)^{-1},

    S_bb = (n−1)^{-1} Σ_{i=1}^n (b_i − b̄)(b_i − b̄)',

    σ̂² = [n(t−k)]^{-1} Σ_{i=1}^n [y_i'y_i − b_i'X_i'y_i],

and

    b̄ = n^{-1} Σ_{i=1}^n b_i.

Carter and Yang (1986) suggested inference procedures based on the large sample distribution of the estimator β̂_EGLS. Their results are summarized below. (They suggested a slightly different estimator of Σ_ββ in the case where Σ̂_ββ is not nonnegative definite.)

Result 2.1: Consider the model given in (2.1) with the assumptions (i) through (vi). Consider the statistic

    T² = [L β̂_EGLS − ψ₀]' [L (Σ_{i=1}^n Ŵ_i^{-1})^{-1} L']^{-1} [L β̂_EGLS − ψ₀]   (2.5)

for testing H₀: Lβ = ψ₀, where L is a q×k matrix of q linearly independent rows. Then,

(a) for a fixed n and t tending to infinity: (n−q) q^{-1} (n−1)^{-1} T² is (asymptotically) distributed as F(q, n−q);

(b) for a fixed t and n tending to infinity: T² is (asymptotically) distributed as chi-square with q degrees of freedom; and
(c) for the case where nt is large and q = 1: T² is approximately distributed as F(1, ν), where ν is a Satterthwaite approximation to the degrees of freedom and L = ℓ'.

Proof: See Carter and Yang (1986). □

Carter and Yang (1986) proved part (b) of the above result by observing that the distribution of β̂_EGLS is asymptotically (as n → ∞) equivalent to that of β̂_GLS. To prove part (a), they observed that the distribution of β̂_EGLS is asymptotically (as t → ∞) equivalent to that of

    β̃ = n^{-1} Σ_{i=1}^n β_i,   (2.6)

which is also asymptotically equivalent to β̂_GLS as t → ∞. Finally, when nt is large, Satterthwaite's approximation was used to approximate the distribution of T². In the next section we present inference procedures based on the large sample distribution of the simple estimator b̄.

3. A SIMPLE APPROACH

It is well known that the GLS estimator β̂_GLS is the best (linear) unbiased estimator of β and that (under some regularity conditions) the EGLS estimator β̂_EGLS is asymptotically (as n → ∞) equivalent to the GLS estimator. However, in small samples, the distribution of β̂_EGLS may be far from being
normal. It is also argued that the estimator β̂_EGLS may even be worse than the ordinary least squares (OLS) estimator,

    β̂_OLS = (Σ_{i=1}^n X_i'X_i)^{-1} (Σ_{i=1}^n X_i'X_i b_i),   (3.1)

because β̂_EGLS depends on the estimated variance-covariance matrix, which may introduce additional variability. It is easy to see that the OLS estimator β̂_OLS is normally distributed with mean β and variance

    (Σ_{i=1}^n X_i'X_i)^{-1} [Σ_{i=1}^n X_i'X_i Σ_ββ X_i'X_i + σ² Σ_{i=1}^n X_i'X_i] (Σ_{i=1}^n X_i'X_i)^{-1}.

Thus, to compute the EGLS estimate β̂_EGLS, or to compute the variance-covariance matrices of β̂_EGLS and β̂_OLS, it is necessary to estimate the elements of Σ_ββ and σ². We now present the properties of the simple estimator b̄, which does not require the estimation of Σ_ββ and σ².

Note that the GLS, EGLS and OLS estimators are weighted averages of the individual least squares estimators b_i. The estimator

    b̄ = n^{-1} Σ_{i=1}^n b_i   (3.2)

is the simple average of the individual least squares estimators. In the special case where the model matrix X_i is the same (= A, say) for all individuals, the GLS, EGLS and OLS estimates coincide with the estimator b̄. The estimator b̄ is normally distributed with mean β and variance

    Var(b̄) = n^{-1} Σ_ββ + n^{-2} σ² Σ_{i=1}^n (X_i'X_i)^{-1}.   (3.3)
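To make the comparison concrete, the estimators reviewed so far can be assembled in a few lines. The following numpy sketch simulates model (2.1) and computes the individual estimates of (2.2), the EGLS estimator of (2.4), the OLS estimator of (3.1) and the simple average of (3.2); all dimensions, parameter values and the seed are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)                 # illustrative seed
n, t, k = 20, 15, 2                            # units, repeated measures, coefficients
sigma2 = 4.0                                   # error variance (illustrative)
Sigma_bb = np.array([[4.0, 1.0], [1.0, 4.0]])  # coefficient covariance (illustrative)
beta = np.array([1.0, 0.5])                    # mean coefficient vector (illustrative)

# Model (2.1): y_i = X_i beta_i + e_i, with X_i an intercept plus one regressor.
X = [np.column_stack([np.ones(t), rng.normal(0, 2, t)]) for _ in range(n)]
beta_i = rng.multivariate_normal(beta, Sigma_bb, size=n)
y = [X[i] @ beta_i[i] + rng.normal(0, np.sqrt(sigma2), t) for i in range(n)]

# Individual least squares estimates (2.2): b_i = (X_i'X_i)^{-1} X_i'y_i.
b = np.array([np.linalg.solve(X[i].T @ X[i], X[i].T @ y[i]) for i in range(n)])

# Variance component estimates of Section 2.
bbar = b.mean(axis=0)                                  # simple estimator (3.2)
S_bb = np.cov(b, rowvar=False, ddof=1)                 # sample covariance of the b_i
sigma2_hat = sum(y[i] @ y[i] - b[i] @ X[i].T @ y[i]
                 for i in range(n)) / (n * (t - k))    # pooled residual mean square
XtXinv = [np.linalg.inv(X[i].T @ X[i]) for i in range(n)]
Sigma_bb_hat = S_bb - sigma2_hat * sum(XtXinv) / n

# EGLS (2.4): weights are inverses of W_i_hat = Sigma_bb_hat + sigma2_hat (X_i'X_i)^{-1}.
Winv = [np.linalg.inv(Sigma_bb_hat + sigma2_hat * XtXinv[i]) for i in range(n)]
beta_egls = np.linalg.solve(sum(Winv), sum(Winv[i] @ b[i] for i in range(n)))

# OLS (3.1): weights X_i'X_i.
XtX = [X[i].T @ X[i] for i in range(n)]
beta_ols = np.linalg.solve(sum(XtX), sum(XtX[i] @ b[i] for i in range(n)))

print(bbar, beta_egls, beta_ols)               # three estimates of beta
```

With X_i varying across units, the three estimates differ slightly; when every X_i equals the same matrix A they coincide, as noted above.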
Note that

    E[S_bb] = E[(n−1)^{-1} Σ_{i=1}^n (b_i − b̄)(b_i − b̄)']
            = (n−1)^{-1} E[Σ_{i=1}^n (b_i − β)(b_i − β)' − n(b̄ − β)(b̄ − β)']
            = (n−1)^{-1} [Σ_{i=1}^n Var(b_i) − n Var(b̄)]
            = (n−1)^{-1} [n² Var(b̄) − n Var(b̄)]
            = n Var(b̄).

Therefore, a simple unbiased estimator for Var(b̄) is n^{-1} S_bb. That is, the sample variance (covariance matrix) divided by n is an unbiased estimator for the variance of the sample mean, even though the variances (of the b_i) are not homogeneous.

Consider the statistic

    T*² = n [L b̄ − ψ₀]' [L S_bb L']^{-1} [L b̄ − ψ₀]   (3.4)

for testing H₀: Lβ = ψ₀, where L is a q×k matrix of linearly independent rows. Notice that T*² is the Hotelling's T² statistic one would compute if the variances of the b_i's were equal (i.e., if the X_i's were the same for all individuals). Before we establish that the statistic T*² has asymptotic properties similar to those of the statistic T², we will make a few remarks.

Remark 3.1: Recall that the estimators b_i are independently and normally distributed with mean β and variance W_i = Σ_ββ + σ²(X_i'X_i)^{-1}. Under assumption (vi), the elements of the matrices t(X_i'X_i)^{-1} are uniformly (over i)
bounded. Therefore, the matrices (X_i'X_i)^{-1}, i = 1, ..., n, converge uniformly (over i) to zero as t tends to infinity. Also, note that

    b_i = β_i + Z_i,   (3.5)

where Z_i = (X_i'X_i)^{-1} X_i'e_i. Since Var(Z_i) = σ²(X_i'X_i)^{-1} converges to zero (uniformly in i) as t tends to infinity, the difference between b_i and β_i, i = 1, ..., n, tends to zero in probability. Therefore, for n fixed and t tending to infinity, b̄ = n^{-1} Σ_{i=1}^n b_i and β̃ = n^{-1} Σ_{i=1}^n β_i are asymptotically equivalent. In fact, since Var(Z̄) = n^{-2} σ² Σ_{i=1}^n (X_i'X_i)^{-1} tends to zero as t tends to infinity, we have

    b̄ = β̃ + O_p(t^{-1/2}).   (3.6)

Hence b̄ is also asymptotically (as t → ∞) equivalent to β̂_GLS and β̂_EGLS. (See also Hsiao (1975) for similar comments.) It is important to note here that the OLS estimator β̂_OLS, however, is not necessarily asymptotically equivalent to b̄. For example, suppose X_i'X_i = itB, where B is a fixed k×k positive definite matrix. Then assumption (vi) is satisfied. In this example, the OLS estimator is β̂_OLS = (Σ_{i=1}^n i)^{-1} Σ_{i=1}^n i b_i, and hence β̂_OLS is not asymptotically equivalent to b̄ = n^{-1} Σ_{i=1}^n b_i.
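The statistic T*² of (3.4) requires nothing beyond b̄ and S_bb. The sketch below (simulated data; the parameter values, seed, and choice of L and ψ₀ are illustrative assumptions of ours) computes T*² and refers its scaled version to the F distribution of the fixed-n, large-t approximation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)                 # illustrative seed
n, t = 20, 15
beta = np.array([1.0, 0.5])                    # illustrative parameter values
Sigma_bb = np.array([[4.0, 1.0], [1.0, 4.0]])
X = [np.column_stack([np.ones(t), rng.normal(0, 2, t)]) for _ in range(n)]
beta_i = rng.multivariate_normal(beta, Sigma_bb, size=n)
y = [X[i] @ beta_i[i] + rng.normal(0, 2.0, t) for i in range(n)]
b = np.array([np.linalg.solve(X[i].T @ X[i], X[i].T @ y[i]) for i in range(n)])

bbar = b.mean(axis=0)                          # simple estimator (3.2)
S_bb = np.cov(b, rowvar=False, ddof=1)         # S_bb / n is unbiased for Var(bbar)

# T*^2 of (3.4) for H0: L beta = psi0, here with L = I_2 (q = 2).
L = np.eye(2)
psi0 = np.array([1.0, 0.5])                    # the true value, so H0 holds here
d = L @ bbar - psi0
T_star2 = n * d @ np.linalg.solve(L @ S_bb @ L.T, d)

# For n fixed and t large, (n-q)/(q(n-1)) T*^2 is referred to F(q, n-q).
q = L.shape[0]
F_stat = (n - q) / (q * (n - 1)) * T_star2
p_value = float(1 - stats.f.cdf(F_stat, q, n - q))
print(F_stat, p_value)
```

Since H₀ is true in this simulation, the p-value should be large most of the time; replacing ψ₀ with a wrong value turns the same code into a power calculation.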
Remark 3.2: For a fixed t and n tending to infinity, the estimator b̄ may not be asymptotically equivalent to β̂_GLS and hence may not be an efficient estimator. However, we know that the exact distribution of b̄ is normal, and hence the (exact) distribution of

    [L b̄ − Lβ]' [L Var(b̄) L']^{-1} [L b̄ − Lβ]

is chi-square with q degrees of freedom, where L is a q×k matrix of rank q.

We now present the asymptotic distribution of the T*² statistic as n and/or t tends to infinity.

Result 3.1: Consider the model given in (2.1) with the assumptions (i) through (vi). Consider the test statistic T*² defined in (3.4) based on the estimator b̄. Then,

(a) for a fixed n and t tending to infinity: (n−q) q^{-1} (n−1)^{-1} T*² is (asymptotically) distributed as F(q, n−q);

(b) for a fixed t and n tending to infinity: T*² is (asymptotically) distributed as chi-square with q degrees of freedom; and

(c) for the case where nt is large and q = 1: T*² is approximately distributed as F(1, ν*), where

    ν* = [ℓ'Σ_ββ ℓ + n^{-1} σ² Σ_{i=1}^n ℓ'(X_i'X_i)^{-1} ℓ]² / [(n−1)^{-1} (ℓ'Σ_ββ ℓ)² + (nt−nk)^{-1} (n^{-1} σ² Σ_{i=1}^n ℓ'(X_i'X_i)^{-1} ℓ)²]

and L = ℓ'.

Proof: See Appendix. □
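Part (b) of Result 3.1 is easy to check numerically. The following sketch (a Monte Carlo exercise of our own, with illustrative settings rather than the paper's design) repeatedly simulates the model with t small and n large, computes T*² for a true hypothesis, and records how often it exceeds the 0.95 quantile of the chi-square reference distribution.

```python
import numpy as np

rng = np.random.default_rng(4)                 # illustrative seed
n, t, reps = 200, 5, 500                       # t fixed and small, n large
beta = np.zeros(2)                             # H0: beta = (0, 0)' is true
Sigma_bb = np.array([[4.0, 1.0], [1.0, 4.0]])  # illustrative coefficient covariance
X = [np.column_stack([np.ones(t), rng.normal(0, 2, t)]) for _ in range(n)]

crit = 5.991                                   # 0.95 quantile of chi-square(2)
rej = 0
for _ in range(reps):
    beta_i = rng.multivariate_normal(beta, Sigma_bb, size=n)
    y = [X[i] @ beta_i[i] + rng.normal(0, 2.0, t) for i in range(n)]
    b = np.array([np.linalg.solve(X[i].T @ X[i], X[i].T @ y[i]) for i in range(n)])
    d = b.mean(axis=0)                         # L = I_2, psi_0 = 0, so d = L bbar - psi_0
    T_star2 = n * d @ np.linalg.solve(np.cov(b, rowvar=False, ddof=1), d)
    rej += T_star2 > crit
rate = rej / reps
print(rate)                                    # empirical level under H0
```

By Result 3.1(b) the empirical rejection rate should settle near the nominal 5% level as n and the number of replications grow.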
With the exception of ν*, the Satterthwaite approximation for the degrees of freedom, the asymptotic distributions of T² and T*² are identical. The advantage of T*² over T² is that it is simple to compute and simple to explain. Note that, as in the case of T², the degrees of freedom ν* (a) tends to (n−1) as t tends to infinity and (b) tends to infinity as n tends to infinity. Also, the degrees of freedom ν* is always greater than or equal to (n−1), and hence the approximation in (c) serves as a compromise between the F and chi-square approximations.

To summarize, we have seen that asymptotically (as t → ∞) the estimators β̂_GLS, β̂_EGLS and b̄ are equivalent and are efficient. Also, asymptotically (as n → ∞) the estimators β̂_EGLS and β̂_GLS are equivalent and are efficient. However, for a fixed t and n large, b̄ may not be as efficient as β̂_GLS, and hence the tests based on b̄ may not be as powerful as the tests based on β̂_GLS. The distribution of b̄ is exactly normal for all n and t, whereas the exact distribution of β̂_EGLS is unknown. A small Monte Carlo study was conducted to compare the performance of the test statistics based on b̄ and β̂_EGLS. (In the study, the test statistics based on β̂_OLS were also included.) The results of the study are summarized in the next section.

4. MONTE CARLO SIMULATION

Consider the model

    y_ij = β_{0i} + β_{1i} x_ij + e_ij,   i = 1, ..., n;  j = 1, ..., t,

where β_i = (β_{0i}, β_{1i})' are NID(0, Σ_ββ); the x_ij's are independent N(0, 9) random variables if i is even and N(0, 4) if i is odd; the e_ij's are NID(0, 4); {β_i}, {x_ij} and {e_ij} are independent; and
Σ_ββ is a 2×2 positive definite matrix with both diagonal elements equal to 4.

The values for n and t are taken to be 5, 10 and 50 to represent small, moderate and large samples. A set of 250 x_ij values was generated once for all, and the same values of x_ij, i = 1, ..., n; j = 1, ..., t, were used in all of the replications. For each pair of values of n and t, 100 Monte Carlo replications were used. In each replication, independent β_i's and e_ij's were generated. Test statistics (T², T*² and T²_OLS) based on β̂_EGLS, b̄ and β̂_OLS for testing the hypotheses (i) H₀: β₁ = 0, (ii) H₀: β₀ = β₁ = 0, (iii) H₀: β₁ = 1 and (iv) H₀: β₀ = β₁ = 1 were computed. The number of times the test statistics rejected the hypotheses are summarized in Tables 1 and 2.

From the asymptotic results in Section 3 we would expect the EGLS estimator β̂_EGLS and the simple estimator b̄ to perform equally well when t is large. However, we do not expect the ordinary least squares estimator to do as well as b̄ when t is large. For t = 50 this expectation was borne out. At all values of n the probability of rejecting a true hypothesis (using the F-approximation) was 9% or less for all three statistics, but the power for rejecting either of the false hypotheses was always greater for β̂_EGLS and b̄ than for β̂_OLS. Furthermore, the rejection rates for β̂_EGLS and b̄ were identical. A look at the true variances of β̂_GLS, b̄ and β̂_OLS revealed that the relative efficiency of b̄ was almost 100% for both the intercept and the slope parameters, whereas for β̂_OLS it was only 67% for the intercept parameter and 89% for the slope parameter in the case when n = 5 and t = 50. For smaller t, the efficiency of β̂_OLS was even worse. However, the efficiency of b̄ was always close to 100%. Similar values for the relative efficiencies of the estimators were observed when n = 5 and n = 50.
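The data-generating scheme of this section can be sketched directly. In the code below the off-diagonal element of Σ_ββ is a placeholder assumption (2.0), since only the diagonal elements (4) are available here; the seed is likewise ours, and unit numbering follows the paper's 1-based convention for the even/odd variance rule.

```python
import numpy as np

rng = np.random.default_rng(5)                 # illustrative seed
n, t = 10, 10
# Diagonal elements of Sigma_bb are 4 as in the text; the off-diagonal
# element is a placeholder assumption, not the value used in the paper.
Sigma_bb = np.array([[4.0, 2.0], [2.0, 4.0]])
beta = np.zeros(2)                             # beta_i = (beta_0i, beta_1i)' ~ NID(0, Sigma_bb)

# x_ij ~ N(0, 9) for even-numbered units, N(0, 4) for odd-numbered units
# (units numbered 1..n); generated once and reused in every replication.
x = np.array([rng.normal(0, 3.0 if (i + 1) % 2 == 0 else 2.0, t)
              for i in range(n)])

def one_replication(rng):
    """Generate y_ij = beta_0i + beta_1i * x_ij + e_ij for one replication."""
    beta_i = rng.multivariate_normal(beta, Sigma_bb, size=n)
    e = rng.normal(0, 2.0, (n, t))             # e_ij ~ NID(0, 4)
    return beta_i[:, [0]] + beta_i[:, [1]] * x + e

y = one_replication(rng)
print(y.shape)                                 # one (n, t) response matrix
```

Calling `one_replication` repeatedly, with the fixed `x`, reproduces the structure of the study: new β_i's and e_ij's each replication, the same design throughout.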
Table 1. Comparison of the Levels of Test Criteria: The Number of Times a 0.05 Level Test Criterion Rejects the Hypothesis (out of 100 replications). [Entries, by estimator (EGLS, BBAR, OLS) and reference distribution (F_{1,n−1}, χ²₁ and F_{1,ν*} for (i) H₀: β₁ = 0; F_{2,n−2} and χ²₂ for (ii) H₀: β₀ = β₁ = 0) for each (n, t) combination, are not legible in this copy.]
Table 2. Comparison of the Powers of Test Criteria: The Number of Times a 0.05 Level Test Criterion Rejects the Hypothesis (out of 100 replications). [Entries, by estimator (EGLS, BBAR, OLS) and reference distribution (F_{1,n−1}, χ²₁ and F_{1,ν*} for (i) H₀: β₁ = 1; F_{2,n−2} and χ²₂ for (ii) H₀: β₀ = β₁ = 1) for each (n, t) combination, are not legible in this copy.]
As n approaches infinity for fixed t, we would expect β̂_EGLS to be more powerful than b̄. As it turned out, for n = 50 the rejection rates for β̂_EGLS and b̄ (using the χ² approximation) were nearly indistinguishable. The rejection rate for β̂_OLS ranged from 14 to 39 percent lower than that of the other two estimators. For small sample sizes none of the estimators was very powerful. However, contrary to our expectation, the performance of β̂_EGLS was reasonable. The estimator b̄ may have been more powerful than β̂_EGLS in rejecting H₀: β₀ = β₁ = 1, but by the same token, b̄ rejected the true hypothesis, H₀: β₀ = β₁ = 0, more often than β̂_EGLS.

One problem that other authors (e.g., Jennrich and Schluchter (1986), Carter and Yang (1986)) have noted is that, with small sample sizes, Σ̂_ββ is often not a positive definite matrix. In our simulation this occurred 34% of the time for n = t = 5, but for moderate sample sizes (n = t = 10) this was no longer a problem. (If Σ̂_ββ was not positive definite, the modified estimator suggested by Carter and Yang (1986) was used.) In our simulation, even though the X_i matrices were different for different individuals, the weight matrices Ŵ_i turned out to be close to one another. This may be one of the reasons why the tests based on b̄ and β̂_EGLS had very similar power for all sample sizes.

5. SUMMARY

In random coefficient regression models several estimators for β exist in the literature. Carter and Yang (1986) derived the asymptotic distribution of the estimated generalized least squares estimator as either n, the number of experimental units, tends to infinity and/or as t, the number of repeated measurements on each unit, tends to infinity. They proposed test statistics based on the EGLS estimator. The simple average b̄ = n^{-1} Σ_{i=1}^n b_i of the
regression estimates from each unit has not received much attention in the literature. The main contribution of this paper is to show that inferences can be made, without much difficulty, using the simple estimator b̄. Asymptotic results for the estimator b̄, similar to those derived by Carter and Yang (1986) for β̂_EGLS, are derived. Also, the results of a small Monte Carlo study indicate that it is reasonable to use b̄ for inferences on β. It is important to emphasize the simplicity of the estimator b̄, the test statistics based on b̄ and their asymptotic properties. The estimator β̂_EGLS is not as simple to compute. Also, the estimator Σ̂_ββ that enters the computation of β̂_EGLS may need to be adjusted so that Σ̂_ββ is positive definite. We are, however, not suggesting that β̂_EGLS be ignored. The estimator β̂_EGLS may perform very well for several model matrices (especially when n is large).

Our results extend to the case where unequal numbers (r_i, say) of measurements are made on different individuals. In this case, part (a) of Result 3.1 should be modified to say "for a fixed n and min(r_i) tending to infinity." Also, when min(r_i) is large, Result 3.1(a) holds even if σ_i² = Var(e_ij) is not the same for different experimental units (provided one uses s_i², the regression mean square error for the regression of the ith individual, to estimate σ_i²). When n is large, Result 3.1(b) holds even if σ_i² ≠ σ² for all i, provided we assume that σ_i² ≤ σ₀² for all i, for some finite σ₀². Our results can also be extended to the case where the errors e_ij are correlated over time. For example, suppose for each i that {e_ij : j = 1, ..., t} is a stationary time series, with the variance-covariance matrix of e_i given by Σ_ee. It is easy to see that n^{-1} S_bb is still an unbiased estimator of Var(b̄). Under
some regularity conditions (similar to those given in Section 9.1 of Fuller (1976)) on X_i, Σ_ββ and Σ_ee, one can obtain the asymptotic results for the test statistic based on b̄ and S_bb. The proofs, however, are not included, for the sake of brevity.

ACKNOWLEDGEMENTS

The work of S. G. Pantula was partially supported by the National Science Foundation.

REFERENCES

Carter, R. L. and Yang, M. C. K. (1986). "Large Sample Inference in Random Coefficient Regression Models," Communications in Statistics - Theory and Methods, 15(8).

Fuller, W. A. (1976). Introduction to Statistical Time Series. New York: John Wiley and Sons.

Harville, D. A. (1977). "Maximum Likelihood Approaches to Variance Component Estimation and to Related Problems," Journal of the American Statistical Association, 72.

Hsiao, C. (1975). "Some Estimation Methods for a Random Coefficient Model," Econometrica, 43.

Jennrich, R. I. and Schluchter, M. D. (1986). "Unbalanced Repeated-Measures Models with Structured Covariance Matrices," Biometrics, 42.

Laird, N. M. and Ware, J. H. (1982). "Random-Effects Models for Longitudinal Data," Biometrics, 38.

Rao, C. R. (1965). "The Theory of Least Squares When the Parameters are Stochastic and its Applications to the Analysis of Growth Curves," Biometrika, 52.

Swamy, P. A. V. B. (1971). Statistical Inference in Random Coefficient Regression Models. Berlin: Springer-Verlag.
APPENDIX

In the appendix, we outline the proof of Result 3.1.

(a) n fixed and t tends to infinity: From Remark 3.1, we know that

    b̄ = β̃ + O_p(t^{-1/2}),   (A.1)

and hence the statistic

    T*² = n [L β̃ − ψ₀]' [L S_bb L']^{-1} [L β̃ − ψ₀] + O_p(t^{-1/2}).

Also, recall that

    S_bb = (n−1)^{-1} Σ_{i=1}^n (b_i − b̄)(b_i − b̄)'
         = (n−1)^{-1} Σ_{i=1}^n [β_i + Z_i − β̃ − Z̄][β_i + Z_i − β̃ − Z̄]'
         = S_ββ + S_ZZ + S_βZ + S'_βZ,   (A.2)

where

    S_cd = (n−1)^{-1} Σ_{i=1}^n (c_i − c̄)(d_i − d̄)'

and Z_i is defined in Remark 3.1. Since the β_i and Z_i are independent normal random variables with means β and 0, respectively, E[S_βZ] = 0, and the variance of the (ℓ, m)th element of S_βZ,

    Var[(n−1)^{-1} Σ_{i=1}^n (β_{i,ℓ} − β̄_ℓ) Z_{i,m}],

is of order n^{-1} t^{-1}.
19 Page 18 Therefore, S~z (A. 3) Now, Szz = (n_1)-1[~ Z.Z.'- nil'] = i =1 ', since from Remark 1 we know that Z., = 0 (t- 1 / 2 ) and Z p Therefore, from (A.2) and (A.3), we have = (t- 1 / 2 ) (A.4) ~~ P Combining (A.1) and (A.4), we get under H O : L~ = ~o' A*2 2 T - T m = 0 (t-1/2) p where -1 n(..~- ~)' L' [L S~~L'] L(m~ -~). Now, the result (a) follows because T 2 has the Hotelling's T 2 distribution m with (n-1) degrees of freedom. (b) t fixed and n tends to infinity: From Remark 3.2, we know that the exact distribution of T*2 is chisquare with q degrees of freedom. *2 A*2 The difference between T and T is that the matrix n var(b) is replaced by it's unbiased estimator Sbb. If we can show that Sbb is consistent (as n ~ m), then the result (b) will follow from Slutsky's Theorem.
From (A.2) and (A.3) we have

    S_bb = S_ββ + S_ZZ + O_p(n^{-1/2} t^{-1/2}).   (A.5)

Now,

    S_ZZ = (n−1)^{-1} [Σ_{i=1}^n Z_i Z_i' − n Z̄ Z̄']
         = n^{-1} Σ_{i=1}^n Z_i Z_i' + O_p(n^{-1} t^{-1}).   (A.6)

Since the β_i's are iid N(β, Σ_ββ) variables, we have

    S_ββ = Σ_ββ + O_p(n^{-1/2}).   (A.7)

Also, since the Z_i's are independent N(0, σ²(X_i'X_i)^{-1}) variables, we have

    E[n^{-1} Σ_{i=1}^n Z_i Z_i'] = σ² n^{-1} Σ_{i=1}^n (X_i'X_i)^{-1}   (A.8)

and

    Var[n^{-1} ℓ' Σ_{i=1}^n Z_i Z_i' ℓ] = O(n^{-1} t^{-2})   (A.9)

for any arbitrary vector ℓ. Therefore,

    S_bb = Σ_ββ + n^{-1} σ² Σ_{i=1}^n (X_i'X_i)^{-1} + O_p(n^{-1/2})
         = n Var(b̄) + O_p(n^{-1/2}),
and result (b) follows.

(c) nt large and q = 1: Consider the t-statistic for testing the hypothesis H₀: ℓ'β = λ₀. We know that the variable

    T̃* = [ℓ'b̄ − λ₀] [ℓ' Var(b̄) ℓ]^{-1/2}

has a standard normal distribution, and that

    T* = T̃* [ℓ' n Var(b̄) ℓ]^{1/2} [ℓ' S_bb ℓ]^{-1/2}.

To show that T* is (approximately) distributed as Student's t with ν* degrees of freedom, we need to show that ν* [ℓ' n Var(b̄) ℓ]^{-1} ℓ' S_bb ℓ is (approximately) a chi-square random variable with ν* degrees of freedom and is (asymptotically, when nt is large) independent of ℓ'b̄. From (A.6), (A.8) and (A.9) we have

    ℓ' S_bb ℓ = ℓ' S_ββ ℓ + σ̂² n^{-1} Σ_{i=1}^n ℓ'(X_i'X_i)^{-1} ℓ + O_p((nt)^{-1/2}),

where σ̂² is defined in Section 2. Note that (n−1)(ℓ' S_ββ ℓ)(ℓ' Σ_ββ ℓ)^{-1} is a χ²(n−1) random variable and (nt−nk) σ̂²/σ² is a χ²(nt−nk) random variable. Therefore, ℓ' S_bb ℓ is approximately the sum of independent scalar multiples of chi-square random variables. Ignoring the terms of order (nt)^{-1/2} and using Satterthwaite's approximation, we have that ν* [ℓ' n Var(b̄) ℓ]^{-1} ℓ' S_bb ℓ is approximately distributed as chi-square with ν* degrees of freedom.
Now, to show the (asymptotic) independence of T* and S_bb, note that b̄ = β̃ + Z̄ is independent of S_ββ, since the β_i's are NID(β, Σ_ββ) and are independent of {Z_i}. Also, for each i, the least squares estimator b_i is independent of the residual sum of squares y_i'y_i − b_i'X_i'y_i, and hence b̄ and σ̂² are independent. Therefore, for nt large, the distribution of T* can be approximated by Student's t-distribution with ν* degrees of freedom.
More informationEconomics 573 Problem Set 5 Fall 2002 Due: 4 October b. The sample mean converges in probability to the population mean.
Economics 573 Problem Set 5 Fall 00 Due: 4 October 00 1. In random sampling from any population with E(X) = and Var(X) =, show (using Chebyshev's inequality) that sample mean converges in probability to..
More informationEmpirical Power of Four Statistical Tests in One Way Layout
International Mathematical Forum, Vol. 9, 2014, no. 28, 1347-1356 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/imf.2014.47128 Empirical Power of Four Statistical Tests in One Way Layout Lorenzo
More informationBiostatistics Workshop Longitudinal Data Analysis. Session 4 GARRETT FITZMAURICE
Biostatistics Workshop 2008 Longitudinal Data Analysis Session 4 GARRETT FITZMAURICE Harvard University 1 LINEAR MIXED EFFECTS MODELS Motivating Example: Influence of Menarche on Changes in Body Fat Prospective
More informationGeneralized, Linear, and Mixed Models
Generalized, Linear, and Mixed Models CHARLES E. McCULLOCH SHAYLER.SEARLE Departments of Statistical Science and Biometrics Cornell University A WILEY-INTERSCIENCE PUBLICATION JOHN WILEY & SONS, INC. New
More information[y i α βx i ] 2 (2) Q = i=1
Least squares fits This section has no probability in it. There are no random variables. We are given n points (x i, y i ) and want to find the equation of the line that best fits them. We take the equation
More informationSociedad de Estadística e Investigación Operativa
Sociedad de Estadística e Investigación Operativa Test Volume 14, Number 2. December 2005 Estimation of Regression Coefficients Subject to Exact Linear Restrictions when Some Observations are Missing and
More informationSensitivity of GLS estimators in random effects models
of GLS estimators in random effects models Andrey L. Vasnev (University of Sydney) Tokyo, August 4, 2009 1 / 19 Plan Plan Simulation studies and estimators 2 / 19 Simulation studies Plan Simulation studies
More informationStatistics 910, #5 1. Regression Methods
Statistics 910, #5 1 Overview Regression Methods 1. Idea: effects of dependence 2. Examples of estimation (in R) 3. Review of regression 4. Comparisons and relative efficiencies Idea Decomposition Well-known
More informationCOMPARISON OF FIVE TESTS FOR THE COMMON MEAN OF SEVERAL MULTIVARIATE NORMAL POPULATIONS
Communications in Statistics - Simulation and Computation 33 (2004) 431-446 COMPARISON OF FIVE TESTS FOR THE COMMON MEAN OF SEVERAL MULTIVARIATE NORMAL POPULATIONS K. Krishnamoorthy and Yong Lu Department
More informationUnit roots in vector time series. Scalar autoregression True model: y t 1 y t1 2 y t2 p y tp t Estimated model: y t c y t1 1 y t1 2 y t2
Unit roots in vector time series A. Vector autoregressions with unit roots Scalar autoregression True model: y t y t y t p y tp t Estimated model: y t c y t y t y t p y tp t Results: T j j is asymptotically
More informationUnit 12: Analysis of Single Factor Experiments
Unit 12: Analysis of Single Factor Experiments Statistics 571: Statistical Methods Ramón V. León 7/16/2004 Unit 12 - Stat 571 - Ramón V. León 1 Introduction Chapter 8: How to compare two treatments. Chapter
More informationLinear Models 1. Isfahan University of Technology Fall Semester, 2014
Linear Models 1 Isfahan University of Technology Fall Semester, 2014 References: [1] G. A. F., Seber and A. J. Lee (2003). Linear Regression Analysis (2nd ed.). Hoboken, NJ: Wiley. [2] A. C. Rencher and
More informationStatistics for Engineers Lecture 9 Linear Regression
Statistics for Engineers Lecture 9 Linear Regression Chong Ma Department of Statistics University of South Carolina chongm@email.sc.edu April 17, 2017 Chong Ma (Statistics, USC) STAT 509 Spring 2017 April
More informationA MODEL-BASED EVALUATION OF SEVERAL WELL-KNOWN VARIANCE ESTIMATORS FOR THE COMBINED RATIO ESTIMATOR
Statistica Sinica 8(1998), 1165-1173 A MODEL-BASED EVALUATION OF SEVERAL WELL-KNOWN VARIANCE ESTIMATORS FOR THE COMBINED RATIO ESTIMATOR Phillip S. Kott National Agricultural Statistics Service Abstract:
More informationCanonical Correlation Analysis of Longitudinal Data
Biometrics Section JSM 2008 Canonical Correlation Analysis of Longitudinal Data Jayesh Srivastava Dayanand N Naik Abstract Studying the relationship between two sets of variables is an important multivariate
More informationApplied Econometrics (QEM)
Applied Econometrics (QEM) based on Prinicples of Econometrics Jakub Mućk Department of Quantitative Economics Jakub Mućk Applied Econometrics (QEM) Meeting #3 1 / 42 Outline 1 2 3 t-test P-value Linear
More informationstatistical sense, from the distributions of the xs. The model may now be generalized to the case of k regressors:
Wooldridge, Introductory Econometrics, d ed. Chapter 3: Multiple regression analysis: Estimation In multiple regression analysis, we extend the simple (two-variable) regression model to consider the possibility
More informationIntroduction to Eco n o m et rics
2008 AGI-Information Management Consultants May be used for personal purporses only or by libraries associated to dandelon.com network. Introduction to Eco n o m et rics Third Edition G.S. Maddala Formerly
More informationIntroduction to Estimation Methods for Time Series models. Lecture 1
Introduction to Estimation Methods for Time Series models Lecture 1 Fulvio Corsi SNS Pisa Fulvio Corsi Introduction to Estimation () Methods for Time Series models Lecture 1 SNS Pisa 1 / 19 Estimation
More information14 Multiple Linear Regression
B.Sc./Cert./M.Sc. Qualif. - Statistics: Theory and Practice 14 Multiple Linear Regression 14.1 The multiple linear regression model In simple linear regression, the response variable y is expressed in
More informationRatio of Linear Function of Parameters and Testing Hypothesis of the Combination Two Split Plot Designs
Middle-East Journal of Scientific Research 13 (Mathematical Applications in Engineering): 109-115 2013 ISSN 1990-9233 IDOSI Publications 2013 DOI: 10.5829/idosi.mejsr.2013.13.mae.10002 Ratio of Linear
More informationLinear models. Linear models are computationally convenient and remain widely used in. applied econometric research
Linear models Linear models are computationally convenient and remain widely used in applied econometric research Our main focus in these lectures will be on single equation linear models of the form y
More informationPANEL DATA RANDOM AND FIXED EFFECTS MODEL. Professor Menelaos Karanasos. December Panel Data (Institute) PANEL DATA December / 1
PANEL DATA RANDOM AND FIXED EFFECTS MODEL Professor Menelaos Karanasos December 2011 PANEL DATA Notation y it is the value of the dependent variable for cross-section unit i at time t where i = 1,...,
More informationLinear Model Under General Variance
Linear Model Under General Variance We have a sample of T random variables y 1, y 2,, y T, satisfying the linear model Y = X β + e, where Y = (y 1,, y T )' is a (T 1) vector of random variables, X = (T
More informationBootstrapping the Grainger Causality Test With Integrated Data
Bootstrapping the Grainger Causality Test With Integrated Data Richard Ti n University of Reading July 26, 2006 Abstract A Monte-carlo experiment is conducted to investigate the small sample performance
More informationA Multivariate Two-Sample Mean Test for Small Sample Size and Missing Data
A Multivariate Two-Sample Mean Test for Small Sample Size and Missing Data Yujun Wu, Marc G. Genton, 1 and Leonard A. Stefanski 2 Department of Biostatistics, School of Public Health, University of Medicine
More informationBootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator
Bootstrapping Heteroskedasticity Consistent Covariance Matrix Estimator by Emmanuel Flachaire Eurequa, University Paris I Panthéon-Sorbonne December 2001 Abstract Recent results of Cribari-Neto and Zarkos
More informationAn Approximate Test for Homogeneity of Correlated Correlation Coefficients
Quality & Quantity 37: 99 110, 2003. 2003 Kluwer Academic Publishers. Printed in the Netherlands. 99 Research Note An Approximate Test for Homogeneity of Correlated Correlation Coefficients TRIVELLORE
More informationSample size calculations for logistic and Poisson regression models
Biometrika (2), 88, 4, pp. 93 99 2 Biometrika Trust Printed in Great Britain Sample size calculations for logistic and Poisson regression models BY GWOWEN SHIEH Department of Management Science, National
More informationEconomics 240A, Section 3: Short and Long Regression (Ch. 17) and the Multivariate Normal Distribution (Ch. 18)
Economics 240A, Section 3: Short and Long Regression (Ch. 17) and the Multivariate Normal Distribution (Ch. 18) MichaelR.Roberts Department of Economics and Department of Statistics University of California
More informationTesting for Unit Roots in Autoregressive Moving Average Models: An Instrumental Variable Approach. Sastry G. Pantula* and Alastair Hall
July 22, 1988 esting for Unit Roots in Autoregressive Moving Average Models: An Instrumental Variable Approach Sastry G. Pantula* and Alastair Hall ~ * Department of Statistics North Carolina State University
More informationThe LIML Estimator Has Finite Moments! T. W. Anderson. Department of Economics and Department of Statistics. Stanford University, Stanford, CA 94305
The LIML Estimator Has Finite Moments! T. W. Anderson Department of Economics and Department of Statistics Stanford University, Stanford, CA 9435 March 25, 2 Abstract The Limited Information Maximum Likelihood
More informationWorking Paper No Maximum score type estimators
Warsaw School of Economics Institute of Econometrics Department of Applied Econometrics Department of Applied Econometrics Working Papers Warsaw School of Economics Al. iepodleglosci 64 02-554 Warszawa,
More informationConsistency of test based method for selection of variables in high dimensional two group discriminant analysis
https://doi.org/10.1007/s42081-019-00032-4 ORIGINAL PAPER Consistency of test based method for selection of variables in high dimensional two group discriminant analysis Yasunori Fujikoshi 1 Tetsuro Sakurai
More informationOrdinary Least Squares Regression
Ordinary Least Squares Regression Goals for this unit More on notation and terminology OLS scalar versus matrix derivation Some Preliminaries In this class we will be learning to analyze Cross Section
More informationMA 575 Linear Models: Cedric E. Ginestet, Boston University Non-parametric Inference, Polynomial Regression Week 9, Lecture 2
MA 575 Linear Models: Cedric E. Ginestet, Boston University Non-parametric Inference, Polynomial Regression Week 9, Lecture 2 1 Bootstrapped Bias and CIs Given a multiple regression model with mean and
More informationthe error term could vary over the observations, in ways that are related
Heteroskedasticity We now consider the implications of relaxing the assumption that the conditional variance Var(u i x i ) = σ 2 is common to all observations i = 1,..., n In many applications, we may
More informationHeteroskedasticity. Part VII. Heteroskedasticity
Part VII Heteroskedasticity As of Oct 15, 2015 1 Heteroskedasticity Consequences Heteroskedasticity-robust inference Testing for Heteroskedasticity Weighted Least Squares (WLS) Feasible generalized Least
More informationCointegration Lecture I: Introduction
1 Cointegration Lecture I: Introduction Julia Giese Nuffield College julia.giese@economics.ox.ac.uk Hilary Term 2008 2 Outline Introduction Estimation of unrestricted VAR Non-stationarity Deterministic
More informationInferences on a Normal Covariance Matrix and Generalized Variance with Monotone Missing Data
Journal of Multivariate Analysis 78, 6282 (2001) doi:10.1006jmva.2000.1939, available online at http:www.idealibrary.com on Inferences on a Normal Covariance Matrix and Generalized Variance with Monotone
More information11. Bootstrap Methods
11. Bootstrap Methods c A. Colin Cameron & Pravin K. Trivedi 2006 These transparencies were prepared in 20043. They can be used as an adjunct to Chapter 11 of our subsequent book Microeconometrics: Methods
More informationVector Auto-Regressive Models
Vector Auto-Regressive Models Laurent Ferrara 1 1 University of Paris Nanterre M2 Oct. 2018 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions
More informationTest Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics
Test Code: STA/STB (Short Answer Type) 2013 Junior Research Fellowship for Research Course in Statistics The candidates for the research course in Statistics will have to take two shortanswer type tests
More informationTesting Homogeneity Of A Large Data Set By Bootstrapping
Testing Homogeneity Of A Large Data Set By Bootstrapping 1 Morimune, K and 2 Hoshino, Y 1 Graduate School of Economics, Kyoto University Yoshida Honcho Sakyo Kyoto 606-8501, Japan. E-Mail: morimune@econ.kyoto-u.ac.jp
More informationHANDBOOK OF APPLICABLE MATHEMATICS
HANDBOOK OF APPLICABLE MATHEMATICS Chief Editor: Walter Ledermann Volume VI: Statistics PART A Edited by Emlyn Lloyd University of Lancaster A Wiley-Interscience Publication JOHN WILEY & SONS Chichester
More informationANALYSIS OF PANEL DATA MODELS WITH GROUPED OBSERVATIONS. 1. Introduction
Tatra Mt Math Publ 39 (2008), 183 191 t m Mathematical Publications ANALYSIS OF PANEL DATA MODELS WITH GROUPED OBSERVATIONS Carlos Rivero Teófilo Valdés ABSTRACT We present an iterative estimation procedure
More informationVAR Models and Applications
VAR Models and Applications Laurent Ferrara 1 1 University of Paris West M2 EIPMC Oct. 2016 Overview of the presentation 1. Vector Auto-Regressions Definition Estimation Testing 2. Impulse responses functions
More informationEcon 583 Homework 7 Suggested Solutions: Wald, LM and LR based on GMM and MLE
Econ 583 Homework 7 Suggested Solutions: Wald, LM and LR based on GMM and MLE Eric Zivot Winter 013 1 Wald, LR and LM statistics based on generalized method of moments estimation Let 1 be an iid sample
More informationTHE EFFECTS OF MULTICOLLINEARITY IN ORDINARY LEAST SQUARES (OLS) ESTIMATION
THE EFFECTS OF MULTICOLLINEARITY IN ORDINARY LEAST SQUARES (OLS) ESTIMATION Weeraratne N.C. Department of Economics & Statistics SUSL, BelihulOya, Sri Lanka ABSTRACT The explanatory variables are not perfectly
More informationMean squared error matrix comparison of least aquares and Stein-rule estimators for regression coefficients under non-normal disturbances
METRON - International Journal of Statistics 2008, vol. LXVI, n. 3, pp. 285-298 SHALABH HELGE TOUTENBURG CHRISTIAN HEUMANN Mean squared error matrix comparison of least aquares and Stein-rule estimators
More informationTIME SERIES DATA ANALYSIS USING EVIEWS
TIME SERIES DATA ANALYSIS USING EVIEWS I Gusti Ngurah Agung Graduate School Of Management Faculty Of Economics University Of Indonesia Ph.D. in Biostatistics and MSc. in Mathematical Statistics from University
More informationGMM estimation of spatial panels
MRA Munich ersonal ReEc Archive GMM estimation of spatial panels Francesco Moscone and Elisa Tosetti Brunel University 7. April 009 Online at http://mpra.ub.uni-muenchen.de/637/ MRA aper No. 637, posted
More informationA Practical Guide for Creating Monte Carlo Simulation Studies Using R
International Journal of Mathematics and Computational Science Vol. 4, No. 1, 2018, pp. 18-33 http://www.aiscience.org/journal/ijmcs ISSN: 2381-7011 (Print); ISSN: 2381-702X (Online) A Practical Guide
More informationANALYSING BINARY DATA IN A REPEATED MEASUREMENTS SETTING USING SAS
Libraries 1997-9th Annual Conference Proceedings ANALYSING BINARY DATA IN A REPEATED MEASUREMENTS SETTING USING SAS Eleanor F. Allan Follow this and additional works at: http://newprairiepress.org/agstatconference
More informationNONLINEAR REGRESSION FOR SPLIT PLOT EXPERIMENTS
Kansas State University Libraries pplied Statistics in Agriculture 1990-2nd Annual Conference Proceedings NONLINEAR REGRESSION FOR SPLIT PLOT EXPERIMENTS Marcia L. Gumpertz John O. Rawlings Follow this
More informationMultivariate Regression
Multivariate Regression The so-called supervised learning problem is the following: we want to approximate the random variable Y with an appropriate function of the random variables X 1,..., X p with the
More informationTesting Random Effects in Two-Way Spatial Panel Data Models
Testing Random Effects in Two-Way Spatial Panel Data Models Nicolas Debarsy May 27, 2010 Abstract This paper proposes an alternative testing procedure to the Hausman test statistic to help the applied
More informationMissing dependent variables in panel data models
Missing dependent variables in panel data models Jason Abrevaya Abstract This paper considers estimation of a fixed-effects model in which the dependent variable may be missing. For cross-sectional units
More informationSo far our focus has been on estimation of the parameter vector β in the. y = Xβ + u
Interval estimation and hypothesis tests So far our focus has been on estimation of the parameter vector β in the linear model y i = β 1 x 1i + β 2 x 2i +... + β K x Ki + u i = x iβ + u i for i = 1, 2,...,
More informationApplied Multivariate and Longitudinal Data Analysis
Applied Multivariate and Longitudinal Data Analysis Chapter 2: Inference about the mean vector(s) Ana-Maria Staicu SAS Hall 5220; 919-515-0644; astaicu@ncsu.edu 1 In this chapter we will discuss inference
More informationChapter 14 Stein-Rule Estimation
Chapter 14 Stein-Rule Estimation The ordinary least squares estimation of regression coefficients in linear regression model provides the estimators having minimum variance in the class of linear and unbiased
More informationUNIVERSITY OF TORONTO Faculty of Arts and Science
UNIVERSITY OF TORONTO Faculty of Arts and Science December 2013 Final Examination STA442H1F/2101HF Methods of Applied Statistics Jerry Brunner Duration - 3 hours Aids: Calculator Model(s): Any calculator
More informationRegression Analysis. y t = β 1 x t1 + β 2 x t2 + β k x tk + ϵ t, t = 1,..., T,
Regression Analysis The multiple linear regression model with k explanatory variables assumes that the tth observation of the dependent or endogenous variable y t is described by the linear relationship
More informationPanel Data Model (January 9, 2018)
Ch 11 Panel Data Model (January 9, 2018) 1 Introduction Data sets that combine time series and cross sections are common in econometrics For example, the published statistics of the OECD contain numerous
More informationStein-Rule Estimation under an Extended Balanced Loss Function
Shalabh & Helge Toutenburg & Christian Heumann Stein-Rule Estimation under an Extended Balanced Loss Function Technical Report Number 7, 7 Department of Statistics University of Munich http://www.stat.uni-muenchen.de
More informationMATH5745 Multivariate Methods Lecture 07
MATH5745 Multivariate Methods Lecture 07 Tests of hypothesis on covariance matrix March 16, 2018 MATH5745 Multivariate Methods Lecture 07 March 16, 2018 1 / 39 Test on covariance matrices: Introduction
More informationBootstrap Approach to Comparison of Alternative Methods of Parameter Estimation of a Simultaneous Equation Model
Bootstrap Approach to Comparison of Alternative Methods of Parameter Estimation of a Simultaneous Equation Model Olubusoye, O. E., J. O. Olaomi, and O. O. Odetunde Abstract A bootstrap simulation approach
More informationKeywords: One-Way ANOVA, GLM procedure, MIXED procedure, Kenward-Roger method, Restricted maximum likelihood (REML).
A Simulation JKAU: Study Sci., on Vol. Tests 20 of No. Hypotheses 1, pp: 57-68 for (2008 Fixed Effects A.D. / 1429 in Mixed A.H.) Models... 57 A Simulation Study on Tests of Hypotheses for Fixed Effects
More information