
Chinese Journal of Applied Probability and Statistics, Vol. 31, No. 2, Apr. 2015

Optimal Estimator of Regression Coefficient in a General Gauss-Markov Model under a Balanced Loss Function

Hu Guikai, Peng Ping
(School of Science, East China Institute of Technology, Nanchang)

Abstract: In this paper, we investigate the optimal estimator of the regression coefficient in a general Gauss-Markov model under a balanced loss function. Firstly, necessary and sufficient conditions for a linear estimator to be the best linear unbiased estimator (BLUE) are provided. Secondly, we prove that the best linear unbiased estimator is unique in the sense of almost everywhere, and that it is a balance between the least squares estimator and the optimal estimator under quadratic loss. Thirdly, loss robustness of the optimal estimator is discussed in terms of relative losses and relative saving losses. Finally, we give some conditions for the BLUE to be robust against mis-specification of the covariance matrix.

Keywords: Optimal estimator, robustness, balanced loss function, general Gauss-Markov model.

AMS Subject Classification: 62J12, 62H.

The project was supported by the Natural Science Foundation of Jiangxi Province (20144BAB), the Humanities and Social Science Planning Foundation of Colleges of Jiangxi Province (TJ1401) and the National Social Science Foundation of China (12BTJ014). Received August 14.

1. Introduction

We open this section with some notation. Given a matrix $A$, the symbols $\mathcal{M}(A)$, $A'$, $A^+$ and $\mathrm{tr}(A)$ stand for the range space, the transpose, the Moore-Penrose inverse and the trace of $A$, respectively. The $n \times n$ identity matrix is denoted by $I_n$. For an $n \times n$ matrix $A$, $A > 0$ means that $A$ is symmetric positive definite, $A \geq 0$ means that $A$ is symmetric nonnegative definite, and $A \geq B$ ($A \leq B$) means that $A - B \geq 0$ ($B - A \geq 0$). $\mathbb{R}^{m \times n}$ stands for the set of all $m \times n$ real matrices.

Consider the following linear model:
$$y = X\beta + \varepsilon, \qquad E(\varepsilon) = 0, \quad \mathrm{Cov}(\varepsilon) = \sigma^2 V, \tag{1.1}$$
where $y$ is an $n \times 1$ random vector of observations, $X$ is a known $n \times p$ matrix of full column rank, $\varepsilon$ is an $n \times 1$ unobservable random vector, and $V \in \mathbb{R}^{n \times n}$ is a known nonnegative definite matrix, whereas $\beta \in \mathbb{R}^p$ and $\sigma^2 > 0$ are unknown parameters. This model is usually called a general Gauss-Markov model.
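As an illustrative aside (not from the paper), the following NumPy sketch draws one realization from model (1.1) with a singular covariance matrix $V$; the dimensions, seed, and the factor $A$ with $V = AA'$ are all our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma2 = 8, 3, 1.5
X = rng.standard_normal((n, p))          # known n x p design, full column rank a.s.
A = rng.standard_normal((n, n - 2))
V = A @ A.T                              # known nonnegative definite (here singular) V
beta = rng.standard_normal(p)

eps = np.sqrt(sigma2) * A @ rng.standard_normal(n - 2)   # Cov(eps) = sigma2 * V
y = X @ beta + eps                       # one draw from the general Gauss-Markov model
print(np.linalg.matrix_rank(V), y.shape) # V is rank-deficient: rank n - 2
```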

For a comprehensive overview of this model, see Rao (1973).

Considerable attention has been given in the past several decades to the problem of best linear unbiased estimation of the regression coefficient. For example, Rao (1972) proposed the unified theory of least squares, and Albert (1973) obtained two kinds of expressions for the BLUE by using generalized inverses of matrices. In their methods, they considered the goodness of fit of the model and the precision of estimation separately, not together; generally, only one of the two criteria is used to judge the performance of an estimator. Taking both the goodness of fit of the model and the precision of estimation into account, Zellner (1994) proposed the following balanced loss function (BLF):
$$L(d;\beta,\sigma^2) = \theta(y - Xd)'(y - Xd) + (1-\theta)(d-\beta)'X'X(d-\beta), \tag{1.2}$$
where $\theta$ is a scalar lying between 0 and 1 which provides the weight assigned to the goodness of fit of the model, and $d$ is any estimator of $\beta$.

The balanced loss function has received considerable attention in the literature under different setups. For example, Rodrigues and Zellner (1994) used the balanced loss function in the estimation of mean time to failure. Preliminary test estimation and Stein-rule estimation under the BLF were discussed by Giles et al. (1996), Ohtani et al. (1997) and Ohtani (1998, 1999). Gruber (2004) obtained empirical Bayes and approximate minimum mean square error estimators under a general balanced loss function and $r$ uncorrelated linear models with random regression coefficients, and evaluated the efficiency of these estimators by averaging over Zellner's balanced loss function. Jozani et al. (2006) motivated a weighted balanced-type loss function and considered issues of admissibility, dominance, Bayesianity and minimaxity. Bansal and Aggarwal (2007, 2009, 2010) employed the BLF to examine loss robustness of Bayes predictors of some finite population quantities. Since the best linear estimator does not exist in the class of homogeneous linear estimators under the balanced loss risk, Hu and Peng (2010, 2011) studied linear admissible estimators of the regression coefficient. It is natural to ask whether the best linear unbiased estimator exists and, if it does, what properties it has.

We now define the risk function as $R(d;\beta,\sigma^2) = E[L(d;\beta,\sigma^2)]$.
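To make the loss (1.2) and its risk concrete, here is a minimal numerical sketch (not from the paper) that evaluates the balanced loss and approximates $R(d;\beta,\sigma^2)$ by Monte Carlo for the least squares estimator; all concrete values ($n$, $p$, $\theta$, $\sigma^2$, the seed) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, theta, sigma2 = 8, 3, 0.4, 1.0
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n - 2)); V = A @ A.T
beta = rng.standard_normal(p)

def balanced_loss(d, y):
    fit = y - X @ d                            # goodness of fit of the model
    est = d - beta                             # precision of estimation
    return theta * fit @ fit + (1 - theta) * est @ (X.T @ X) @ est

L = np.linalg.solve(X.T @ X, X.T)              # least squares: d = (X'X)^{-1} X' y
losses = []
for _ in range(5000):
    y = X @ beta + np.sqrt(sigma2) * A @ rng.standard_normal(n - 2)
    losses.append(balanced_loss(L @ y, y))
print(np.mean(losses))                         # Monte-Carlo estimate of R(LSE; beta, sigma2)
```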

Consider the following class of linear estimators: $\mathcal{L} = \{Ly : L \in \mathbb{R}^{p \times n}\}$.

Definition 1.1  $Ly$ is called an unbiased estimator of $\beta$ if $E(Ly) = \beta$ for all $\beta \in \mathbb{R}^p$.

Obviously, if $Ly$ is an unbiased estimator of $\beta$, then $LX = I_p$. If there exists a linear estimator $Ly$ such that $Ly$ is an unbiased estimator of $\beta$, then $\beta$ is called linearly estimable. We concern ourselves with the minimal risk properties of linear unbiased estimators of $\beta$; therefore, we assume throughout that $\beta$ in model (1.1) is linearly estimable.

Definition 1.2  A linear unbiased estimator $Ly$ is called a best linear unbiased estimator of $\beta$ if, for any linear unbiased estimator $My$ in $\mathcal{L}$, $R(Ly;\beta,\sigma^2) \leq R(My;\beta,\sigma^2)$ holds for all $\beta \in \mathbb{R}^p$ and $\sigma^2 > 0$.

In this paper, we mainly discuss the best linear unbiased estimator of the regression coefficient under model (1.1) and loss function (1.2), and obtain necessary and sufficient conditions for a linear estimator to be the BLUE. Furthermore, loss robustness of the BLUE in terms of relative losses and relative saving losses, and robustness of the BLUE against mis-specification of the covariance matrix, are studied respectively.

The rest of this paper is organized as follows. In Section 2, we give some necessary and sufficient conditions for homogeneous linear estimators to be the BLUE and obtain the unique BLUE of $\beta$ in the sense of almost everywhere. Loss robustness of the BLUE is placed in Section 3. The robustness of the BLUE against mis-specification of the covariance matrix is given in Section 4. Concluding remarks are placed in Section 5.

2. Best Linear Unbiased Estimator

In this section, we provide some necessary and sufficient conditions for a linear estimator to be a BLUE of $\beta$, and give an explicit expression for the BLUE.

Theorem 2.1  In model (1.1), $Ly$ is a best linear unbiased estimator of $\beta$ under the balanced loss (1.2) if and only if $L$ satisfies the following conditions:
$$LX = I_p, \tag{2.1}$$
$$LVN_X = \theta(X'X)^{-1}X'VN_X, \tag{2.2}$$
where $N_X = I_n - X(X'X)^{-1}X'$.
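Before the proof, a small numerical sketch (our own illustration, not from the paper) checks conditions (2.1)-(2.2) for the least squares matrix $L = (X'X)^{-1}X'$: condition (2.1) always holds, while (2.2) generally fails for a generic $V$ unless $X'VN_X = 0$ (for instance, when $V = I_n$). The setup values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, theta = 8, 3, 0.4
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n - 2)); V = A @ A.T
NX = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)     # N_X = I - X(X'X)^{-1}X'
L = np.linalg.solve(X.T @ X, X.T)                      # least squares matrix

rhs = theta * np.linalg.solve(X.T @ X, X.T @ V @ NX)   # right side of (2.2)
print(np.allclose(L @ X, np.eye(p)))                   # (2.1): True
print(np.allclose(L @ V @ NX, rhs))                    # (2.2): False for a generic V
# With V = I_n we have X'V N_X = 0, so both sides of (2.2) vanish and the
# least squares estimator is BLUE for every theta:
I = np.eye(n)
print(np.allclose(L @ I @ NX, theta * np.linalg.solve(X.T @ X, X.T @ I @ NX)))
```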

Proof  Sufficiency: Assume that $My$ is a linear unbiased estimator of $\beta$. Since $L$ satisfies condition (2.1), we have $MX = LX$, i.e., $(M - L)X = 0$, so there exists a matrix $Z \in \mathbb{R}^{p \times n}$ such that $M = L + ZN_X$. By direct computation, we have
$$\begin{aligned} R(My;\beta,\sigma^2) &= E[\theta(y - XMy)'(y - XMy) + (1-\theta)(My-\beta)'X'X(My-\beta)] \\ &= R(Ly;\beta,\sigma^2) + \sigma^2\mathrm{tr}[N_XZ'X'XZN_XV - 2\theta N_XZ'X'(I - XL)V + 2(1-\theta)N_XZ'X'XLV] \\ &= R(Ly;\beta,\sigma^2) + \sigma^2\mathrm{tr}(N_XZ'X'XZN_XV) + 2\sigma^2\mathrm{tr}(X'XLVN_XZ' - \theta X'VN_XZ'), \end{aligned}$$
which combined with condition (2.2) yields
$$R(My;\beta,\sigma^2) = R(Ly;\beta,\sigma^2) + \sigma^2\mathrm{tr}(N_XZ'X'XZN_XV) \geq R(Ly;\beta,\sigma^2)$$
for all $\beta \in \mathbb{R}^p$ and $\sigma^2 > 0$. Thus $Ly$ is the best linear unbiased estimator of $\beta$ under the balanced loss function.

Necessity: Suppose that $Ly$ is the BLUE of $\beta$; then $L$ satisfies condition (2.1). Assume by contradiction that $LVN_X \neq \theta(X'X)^{-1}X'VN_X$. Take $Z_0 = X'XLVN_X - \theta X'VN_X$; then $Z_0Z_0' = (X'XLV - \theta X'V)N_XZ_0' \neq 0$, and thus $\mathrm{tr}[(X'XLV - \theta X'V)N_XZ_0'] > 0$. Hence there exists a real number $t < 0$ such that
$$\sigma^2\mathrm{tr}[t(X'XLV - \theta X'V)N_XZ_0' + tZ_0N_X(X'XLV - \theta X'V)' + t^2N_XZ_0'X'XZ_0N_XV] < 0.$$
Denote $M = L + tZ_0N_X$; then $My$ is a linear unbiased estimator of $\beta$, and
$$\begin{aligned} R(My;\beta,\sigma^2) &= E[\theta(y - XLy - tXZ_0N_Xy)'(y - XLy - tXZ_0N_Xy) \\ &\quad + (1-\theta)(Ly + tZ_0N_Xy - \beta)'X'X(Ly + tZ_0N_Xy - \beta)] \\ &= R(Ly;\beta,\sigma^2) + \sigma^2\mathrm{tr}[t(X'XLV - \theta X'V)N_XZ_0' + tZ_0N_X(X'XLV - \theta X'V)' + t^2N_XZ_0'X'XZ_0N_XV] \\ &< R(Ly;\beta,\sigma^2), \end{aligned}$$
which contradicts the fact that $Ly$ is the BLUE of $\beta$. Thus $L$ satisfies condition (2.2). The proof is completed.

Theorem 2.2  In model (1.1), denote
$$L_0 = (X'T^+X)^{-1}X'T^+ + \theta(X'X)^{-1}X'VN_X(N_XVN_X)^+N_X,$$
where $T = V + XX'$. Then the following statements hold:

(1) $L_0y$ is a BLUE of $\beta$ under the balanced loss function (1.2).

(2) $\mathbb{L} = \{Ly : L = L_0 + Z[N_X - N_XVN_X(N_XVN_X)^+N_X],\ Z \in \mathbb{R}^{p\times n}\}$ is an estimator class that contains all the best linear unbiased estimators.

(3) If $Ly \in \mathbb{L}$, then $Ly = L_0y$ holds almost everywhere; that is, $L_0y$ is the unique BLUE of $\beta$ under the balanced loss function (1.2).
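Before turning to the proof, here is a numerical sketch (an illustration of ours, not from the paper) that builds $L_0$ with Moore-Penrose inverses and confirms that it satisfies conditions (2.1)-(2.2) of Theorem 2.1; the dimensions and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, theta = 8, 3, 0.4
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n - 2)); V = A @ A.T
T = V + X @ X.T                                        # T = V + XX'
Tp = np.linalg.pinv(T)
NX = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
W = NX @ V @ NX
L0 = (np.linalg.solve(X.T @ Tp @ X, X.T @ Tp)          # (X'T^+X)^{-1} X'T^+
      + theta * np.linalg.solve(X.T @ X, X.T) @ V @ NX @ np.linalg.pinv(W) @ NX)

print(np.allclose(L0 @ X, np.eye(p)))                  # condition (2.1)
rhs = theta * np.linalg.solve(X.T @ X, X.T @ V @ NX)
print(np.allclose(L0 @ V @ NX, rhs))                   # condition (2.2)
```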

Proof  (1) It is easy to verify that $L_0$ satisfies conditions (2.1) and (2.2), so (1) holds.

(2) If $L$ satisfies conditions (2.1) and (2.2), then
$$(L - L_0)X = 0, \qquad (L - L_0)VN_X = 0, \tag{2.3}$$
and hence
$$L - L_0 = (L - L_0) - (L - L_0)X(X'X)^{-1}X' = (L - L_0)N_X. \tag{2.4}$$
It follows from equations (2.3) and (2.4) that
$$0 = (L - L_0)VN_X = (L - L_0)N_XVN_X = (L - L_0)N_XVN_X(N_XVN_X)^+N_X,$$
which together with equation (2.4) gives
$$L = L_0 + (L - L_0) = L_0 + (L - L_0)N_X = L_0 + (L - L_0)[N_X - N_XVN_X(N_XVN_X)^+N_X].$$
Setting $Z = L - L_0$, we have $L = L_0 + Z[N_X - N_XVN_X(N_XVN_X)^+N_X]$, that is, $Ly = [L_0 + Z(N_X - N_XVN_X(N_XVN_X)^+N_X)]y \in \mathbb{L}$. On the other hand, suppose that $Ly \in \mathbb{L}$, i.e., there exists a matrix $Z$ such that $L = L_0 + Z[N_X - N_XVN_X(N_XVN_X)^+N_X]$. It is easy to verify that $L$ satisfies conditions (2.1) and (2.2), i.e., $Ly$ is a BLUE of $\beta$. Therefore $\mathbb{L}$ contains all the best linear unbiased estimators.

(3) Suppose that $Ly \in \mathbb{L}$, i.e., there exists a matrix $Z \in \mathbb{R}^{p \times n}$ such that $L = L_0 + Z[N_X - N_XVN_X(N_XVN_X)^+N_X]$. Then we have
$$E[(L - L_0)y] = E[Z(N_X - N_XVN_X(N_XVN_X)^+N_X)y] = 0,$$
$$D[(L - L_0)y] = D[Z(N_X - N_XVN_X(N_XVN_X)^+N_X)y] = 0.$$
Thus $P\{(L - L_0)y = 0\} = 1$, i.e., $Ly = L_0y$ holds almost everywhere, and hence $L_0y$ is the unique best linear unbiased estimator of $\beta$. The proof is completed.
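A quick numerical sketch of part (3) (our illustration, under the same kind of random setup as before): the free direction $K = N_X - N_XVN_X(N_XVN_X)^+N_X$ annihilates every $y \in \mathcal{M}(T)$, so any member of $\mathbb{L}$ produces the same estimate almost everywhere.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 8, 3
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n - 2)); V = A @ A.T
T = V + X @ X.T
NX = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
W = NX @ V @ NX
K = NX - W @ np.linalg.pinv(W) @ NX     # N_X - N_X V N_X (N_X V N_X)^+ N_X

Z = rng.standard_normal((p, n))
y = T @ rng.standard_normal(n)          # Lemma 2.1: y lies in M(T) a.e.
print(np.allclose(Z @ K @ y, 0))        # (L - L_0) y = Z K y = 0: prints True
```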

Theorem 2.2 gives an expression for the BLUE, but it is not easy to analyze its properties on that basis. We therefore derive another form that is easier to analyze, and first give two lemmas.

Lemma 2.1 (Wang, 1987)  In model (1.1), $y \in \mathcal{M}(T)$ holds almost everywhere.

Lemma 2.2 (Yu and He, 1997)  Let $C$ and $D$ be two real matrices. If $D \geq 0$ and $\mathcal{M}(C) \subseteq \mathcal{M}(D)$, then
$$(C'D^+C)^+ = C^+DC'^+ - C^+DN_C(N_CDN_C)^+N_CDC'^+,$$
where $N_C = I - CC^+$.

Theorem 2.3  In model (1.1), denote
$$L_1 = (X'T^+X)^{-1}X'T^+ + \theta(X'X)^{-1}X'V(T^+ - T^+X(X'T^+X)^{-1}X'T^+).$$
Then $L_1y$ is the BLUE of $\beta$.

Proof  According to Lemma 2.2 (with $C = X$ and $D = T$), we have
$$(X'T^+X)^{-1} = X^+TX'^+ - X^+TN_X(N_XTN_X)^+N_XTX'^+,$$
which together with Lemma 2.1 yields
$$\begin{aligned} L_1y &= (X'T^+X)^{-1}X'T^+y + \theta(X'X)^{-1}X'TT^+y - \theta(X'X)^{-1}X'T(I - N_X)T^+y \\ &\quad + \theta(X'X)^{-1}X'TN_X(N_XTN_X)^+N_XT(I - N_X)T^+y \\ &= (X'T^+X)^{-1}X'T^+y + \theta(X'X)^{-1}X'TN_X(N_XTN_X)^+N_XTT^+y \\ &= (X'T^+X)^{-1}X'T^+y + \theta(X'X)^{-1}X'VN_X(N_XVN_X)^+N_Xy \\ &= L_0y. \end{aligned}$$

Theorem 2.4  In model (1.1), the best linear unbiased estimator $L_1y$ can be represented as $\theta\hat{\beta} + (1-\theta)\tilde{\beta}_T$, where $\hat{\beta} = (X'X)^{-1}X'y$ and $\tilde{\beta}_T = (X'T^+X)^{-1}X'T^+y$. Furthermore, the minimal risk is
$$R(L_1y;\beta,\sigma^2) = \sigma^2\mathrm{tr}[(1-\theta)^2(X'T^+X)^{-1}X'T^+VX - \theta^2(X'X)^{-1}X'VX + \theta V].$$

Proof  According to Lemma 2.1, by direct computation we have
$$\begin{aligned} L_1y &= (X'T^+X)^{-1}X'T^+y + \theta(X'X)^{-1}X'V(T^+ - T^+X(X'T^+X)^{-1}X'T^+)y \\ &= (X'T^+X)^{-1}X'T^+y + \theta(X'X)^{-1}X'TT^+y - \theta(X'X)^{-1}X'X(X'T^+X)^{-1}X'T^+y \\ &= \theta(X'X)^{-1}X'y + (1-\theta)(X'T^+X)^{-1}X'T^+y. \end{aligned}$$
By the definition of $R(d;\beta,\sigma^2)$, we have
$$\begin{aligned} R(L_1y;\beta,\sigma^2) &= \sigma^2\mathrm{tr}[L_1'X'XL_1V - 2\theta XL_1V + \theta V] \\ &= \sigma^2\mathrm{tr}\{[\theta(X'X)^{-1}X' + (1-\theta)(X'T^+X)^{-1}X'T^+]'X'X[\theta(X'X)^{-1}X' + (1-\theta)(X'T^+X)^{-1}X'T^+]V \\ &\quad - 2\theta X[\theta(X'X)^{-1}X' + (1-\theta)(X'T^+X)^{-1}X'T^+]V + \theta V\} \\ &= \sigma^2\mathrm{tr}[(1-\theta)^2(X'T^+X)^{-1}X'T^+VX - \theta^2(X'X)^{-1}X'VX + \theta V]. \end{aligned}$$
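The following sketch (our illustration, with $\sigma^2 = 1$ and all other values assumed) numerically confirms Theorems 2.3 and 2.4: for $y \in \mathcal{M}(T)$, the matrix $L_1$ reproduces $L_0y$, its estimate is the convex combination $\theta\hat{\beta} + (1-\theta)\tilde{\beta}_T$, and the closed-form risk can be evaluated directly.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, theta = 8, 3, 0.4
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n - 2)); V = A @ A.T
T = V + X @ X.T; Tp = np.linalg.pinv(T)
NX = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
y = T @ rng.standard_normal(n)                         # y in M(T), Lemma 2.1

G = np.linalg.solve(X.T @ Tp @ X, X.T @ Tp)            # (X'T^+X)^{-1} X'T^+
L1 = G + theta * np.linalg.solve(X.T @ X, X.T) @ V @ (Tp - Tp @ X @ G)
W = NX @ V @ NX
L0 = G + theta * np.linalg.solve(X.T @ X, X.T) @ V @ NX @ np.linalg.pinv(W) @ NX

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)           # least squares estimate
beta_T = G @ y                                         # quadratic-loss BLUE
print(np.allclose(L1 @ y, L0 @ y))                     # Theorem 2.3
print(np.allclose(L1 @ y, theta * beta_hat + (1 - theta) * beta_T))  # Theorem 2.4

risk = (np.trace((1 - theta)**2 * np.linalg.solve(X.T @ Tp @ X, X.T @ Tp @ V @ X)
                 - theta**2 * np.linalg.solve(X.T @ X, X.T @ V @ X))
        + theta * np.trace(V))
print(risk)                                            # R(L1 y; beta, sigma2) / sigma2
```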

This theorem shows that the best linear unbiased estimator of $\beta$ under the balanced loss function (1.2) is a balance between the least squares estimator and the best linear unbiased estimator under quadratic loss. Moreover, the weights assigned to the goodness of fit of the model and the precision of estimation coincide with the weights assigned to their corresponding optimal estimators. This clearly illustrates the effect of the balanced loss function (1.2).

Corollary 2.1  Under model (1.1), if $V > 0$, then the BLUE of $\beta$ can be represented as $\theta\hat{\beta} + (1-\theta)\tilde{\beta}_V$, where $\hat{\beta} = (X'X)^{-1}X'y$ and $\tilde{\beta}_V = (X'V^{-1}X)^{-1}X'V^{-1}y$.

Since $(X'T^+X)^{-1}X'T^+ = (X'V^{-1}X)^{-1}X'V^{-1}$ when $V > 0$, the corollary is easy to verify from Theorem 2.4, so its proof is omitted here.

3. Loss Robustness

In this section, we examine the loss robustness of these estimators. It is of interest to compare the risk of the BLUE of $\beta$ with that of the least squares estimator $\hat{\beta}$ and also with that of $\tilde{\beta}_T$. By the definition of $R(d;\beta,\sigma^2)$, we have
$$\Delta_1 = R(\hat{\beta};\beta,\sigma^2) - R(L_1y;\beta,\sigma^2) = (1-\theta)^2\sigma^2\mathrm{tr}[(X'X)^{-1}X'VX - (X'T^+X)^{-1}X'T^+VX], \tag{3.1}$$
$$\Delta_2 = R(\tilde{\beta}_T;\beta,\sigma^2) - R(L_1y;\beta,\sigma^2) = \theta^2\sigma^2\mathrm{tr}[(X'X)^{-1}X'VX - (X'T^+X)^{-1}X'T^+VX]. \tag{3.2}$$
It is observed that $\Delta_1$ and $\Delta_2$ are nonnegative. Furthermore, $\Delta_1$ is a decreasing function of $\theta$, whereas $\Delta_2$ is an increasing function of $\theta$; in particular, for $\theta = 1/2$, $\Delta_1 = \Delta_2$. The relative loss $RL_1$ in using $\hat{\beta}$ relative to $L_1y$ is
$$RL_1 = \frac{\Delta_1}{R(L_1y;\beta,\sigma^2)} = \frac{(1-\theta)^2u}{1 + \theta(1-\theta)u}, \tag{3.3}$$
and that in using $\tilde{\beta}_T$ relative to $L_1y$ is
$$RL_2 = \frac{\Delta_2}{R(L_1y;\beta,\sigma^2)} = \frac{\theta^2u}{1 + \theta(1-\theta)u}, \tag{3.4}$$
where
$$u = \frac{\mathrm{tr}[(X'X)^{-1}X'VX - (X'T^+X)^{-1}X'T^+VX]}{\mathrm{tr}[\theta V + (1-\theta)(X'T^+X)^{-1}X'T^+VX - \theta(X'X)^{-1}X'VX]}. \tag{3.5}$$
It is observed that $RL_1$ is maximized at $\theta = 0$, whereas $RL_2$ is maximized at $\theta = 1$. However, when $\theta = 1/2$, $RL_1 = RL_2$ for all values of $u$. As $u$ increases, $RL_1$ approaches $(1-\theta)/\theta$ and $RL_2$ approaches $\theta/(1-\theta)$. Zellner (1994) and Bansal and Aggarwal (2007, 2009, 2010) observed similar behavior of relative losses.
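These quantities are straightforward to compute; the following sketch (an illustration of ours, with $\sigma^2 = 1$ and all other values assumed) evaluates $\Delta_1$, $\Delta_2$, $u$, $RL_1$ and $RL_2$, and checks that $\Delta_1/\Delta_2 = ((1-\theta)/\theta)^2$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, theta = 8, 3, 0.4
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n - 2)); V = A @ A.T
T = V + X @ X.T; Tp = np.linalg.pinv(T)

a = np.trace(np.linalg.solve(X.T @ X, X.T @ V @ X))           # tr[(X'X)^{-1}X'VX]
b = np.trace(np.linalg.solve(X.T @ Tp @ X, X.T @ Tp @ V @ X)) # tr[(X'T^+X)^{-1}X'T^+VX]
delta1 = (1 - theta)**2 * (a - b)                             # (3.1) with sigma2 = 1
delta2 = theta**2 * (a - b)                                   # (3.2)
u = (a - b) / (theta * np.trace(V) + (1 - theta) * b - theta * a)   # (3.5)
RL1 = (1 - theta)**2 * u / (1 + theta * (1 - theta) * u)      # (3.3)
RL2 = theta**2 * u / (1 + theta * (1 - theta) * u)            # (3.4)
print(delta1 >= 0, delta2 >= 0, RL1, RL2)
print(delta1 / delta2, ((1 - theta) / theta)**2)              # both equal the RSL below
```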

In order to measure the relative increase in balanced loss risk from using the least squares estimator $\hat{\beta}$ instead of the BLUE, with respect to the same from using $\tilde{\beta}_T$, let us follow Bansal and Aggarwal (2009, 2010) and define the relative savings loss (RSL) as
$$RSL = \frac{\Delta_1}{\Delta_2} = \Big(\frac{1-\theta}{\theta}\Big)^2,$$
which also equals $RL_1/RL_2$. The relative savings loss measures the balanced loss risk improvement over $\hat{\beta}$ that is sacrificed by using $\tilde{\beta}_T$ rather than the best linear unbiased estimator. It is interesting to observe that the RSL depends only on the weight parameter $\theta$ of the balanced loss function. Note that $\theta \leq 1/2$ implies $RSL \geq 1$, which shows that $\tilde{\beta}_T$ is better than the least squares estimator $\hat{\beta}$ in the sense of smaller balanced loss risk.

4. The Mis-specification of the Covariance Matrix

In this section, we consider the robustness of the best linear unbiased estimator against possible mis-specification of the covariance matrix $V$. Assume that the adopted model is
$$M_1:\quad y = X\beta + \varepsilon_1, \qquad E(\varepsilon_1) = 0, \quad \mathrm{Cov}(\varepsilon_1) = \sigma^2V_1,$$
while the true model specifies that
$$M_2:\quad y = X\beta + \varepsilon_2, \qquad E(\varepsilon_2) = 0, \quad \mathrm{Cov}(\varepsilon_2) = \sigma^2V_2.$$
Assume that $V_1$ and $V_2$ are known nonnegative definite matrices. Let $\hat{\beta}_{\mathrm{BLUE}_i}$ be the BLUE of $\beta$ under model $M_i$, $i = 1, 2$. Then
$$\hat{\beta}_{\mathrm{BLUE}_i} = (X'T_i^+X)^{-1}X'T_i^+y + \theta(X'X)^{-1}X'V_i(T_i^+ - T_i^+X(X'T_i^+X)^{-1}X'T_i^+)y, \tag{4.1}$$
where $T_i = V_i + XX'$, $i = 1, 2$.

Lemma 4.1  For linear model $M_1$ (or $M_2$), let $U$ be the set of all linear unbiased estimators of the zero vector:
$$U = \{Dy : D \in \mathbb{R}^{p \times n} \text{ and } E(Dy) = 0 \text{ for all } \beta \in \mathbb{R}^p\}.$$
Then a linear unbiased estimator $Ly$ is the BLUE of $\beta$ under balanced loss (1.2) if and only if
$$E(y'L'X'XDy - \theta y'XDy) = 0 \quad \text{for all } Dy \in U.$$

Proof  Sufficiency: Since $Ly$ is a linear unbiased estimator of $\beta$, any linear unbiased estimator $Fy$ satisfies $F = L + MN_X$ for some $M \in \mathbb{R}^{p \times n}$. Denote $D = MN_X$; then $F = L + D$, and it is obvious that $Dy$ is a linear unbiased estimator of the zero vector. On the other hand, if $Dy$ is an arbitrary linear unbiased estimator of the zero vector, then $DX = 0$, that is, $D = MN_X$ for some $M \in \mathbb{R}^{p \times n}$. By $E(y'L'X'XDy - \theta y'XDy) = 0$, we have
$$\begin{aligned} R(Fy;\beta,\sigma^2) &= E[\theta(y - XFy)'(y - XFy) + (1-\theta)(Fy - \beta)'X'X(Fy - \beta)] \\ &= R(Ly;\beta,\sigma^2) + E(y'D'X'XDy) + 2E[y'L'X'XDy - \theta y'XDy] \\ &\geq R(Ly;\beta,\sigma^2). \end{aligned}$$
Thus $Ly$ is the BLUE of $\beta$ under the balanced loss (1.2).

Necessity: If there exists a matrix $D_0$ such that $E(D_0y) = 0$ and $E(y'L'X'XD_0y - \theta y'XD_0y) \neq 0$, denote $k = E(y'L'X'XD_0y - \theta y'XD_0y)$ and assume $k < 0$ (if $k > 0$, we can use $-D_0$ in place of $D_0$). Setting $G = L + tD_0$, $t \in \mathbb{R}$, we have
$$\begin{aligned} R(Gy;\beta,\sigma^2) &= E[\theta(y - XGy)'(y - XGy) + (1-\theta)(Gy - \beta)'X'X(Gy - \beta)] \\ &= R(Ly;\beta,\sigma^2) + t^2E(y'D_0'X'XD_0y) + 2tE[y'L'X'XD_0y - \theta y'XD_0y]. \end{aligned}$$
Note that $t^2E(y'D_0'X'XD_0y) + 2tE[y'L'X'XD_0y - \theta y'XD_0y]$ is a quadratic polynomial in $t$ whose first-order coefficient is negative, so there exists a number $t_0$ such that $t_0^2E(y'D_0'X'XD_0y) + 2t_0E[y'L'X'XD_0y - \theta y'XD_0y] < 0$. Therefore, denoting $G_0 = L + t_0D_0$, $G_0y$ is a linear unbiased estimator of $\beta$ with $R(G_0y;\beta,\sigma^2) < R(Ly;\beta,\sigma^2)$. This contradicts the fact that $Ly$ is the BLUE of $\beta$.
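A numerical sketch of the characterization (our illustration, under assumed values with $\sigma^2 = 1$): for the BLUE in the convex-combination form of Theorem 2.4 and any unbiased estimator of zero $Dy$ with $D = MN_X$, the expectation $E(y'L'X'XDy - \theta y'XDy)$, evaluated analytically as $\sigma^2\mathrm{tr}(QV) + \beta'X'QX\beta$ with $Q = L'X'XD - \theta XD$, vanishes.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, theta = 8, 3, 0.4
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n - 2)); V = A @ A.T
T = V + X @ X.T; Tp = np.linalg.pinv(T)
NX = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
# BLUE matrix in the convex-combination form of Theorem 2.4
L1 = (theta * np.linalg.solve(X.T @ X, X.T)
      + (1 - theta) * np.linalg.solve(X.T @ Tp @ X, X.T @ Tp))

M = rng.standard_normal((p, n))
D = M @ NX                                   # DX = 0, so E(Dy) = 0 for every beta
beta = rng.standard_normal(p)
Q = L1.T @ X.T @ X @ D - theta * X @ D       # E(y'Qy) = sigma2 tr(QV) + beta'X'Q X beta
print(np.isclose(np.trace(Q @ V) + beta @ (X.T @ Q @ X) @ beta, 0))   # prints True
```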

Theorem 4.1  Under the balanced loss function (1.2) and model $M_2$, $\hat{\beta}_{\mathrm{BLUE}_1} = \hat{\beta}_{\mathrm{BLUE}_2}$ if and only if $N_XV_2T_1^+X = 0$.

Proof  Necessity: According to Lemma 4.1, $\hat{\beta}_{\mathrm{BLUE}_1} = \hat{\beta}_{\mathrm{BLUE}_2}$ if and only if $E(\hat{\beta}_{\mathrm{BLUE}_1}'X'XDy - \theta y'XDy) = 0$ holds for all $Dy \in U$. This is equivalent to
$$E[y'(\theta(X'X)^{-1}X' + (1-\theta)(X'T_1^+X)^{-1}X'T_1^+)'X'XMN_Xy - \theta y'XMN_Xy] = 0$$
holding for all $M \in \mathbb{R}^{p \times n}$, that is,
$$\mathrm{tr}(N_XV_2T_1^+X(X'T_1^+X)^{-1}X'XM) = 0$$
for all $M \in \mathbb{R}^{p \times n}$. Hence $N_XV_2T_1^+X = 0$.

Sufficiency: Let $\hat{\beta}_{\mathrm{BLUE}_1}$ be the BLUE of $\beta$ under model $M_1$. If $N_XV_2T_1^+X = 0$, then $N_XV_2T_1^+X(X'T_1^+X)^{-1}X'XM = 0$ for all $M \in \mathbb{R}^{p \times n}$, that is, $\mathrm{tr}[T_1^+X(X'T_1^+X)^{-1}X'XMN_XV_2] = 0$ holds for all $M \in \mathbb{R}^{p \times n}$, which implies that
$$E[y'(\theta(X'X)^{-1}X' + (1-\theta)(X'T_1^+X)^{-1}X'T_1^+)'X'XDy - \theta y'XDy] = 0$$
holds for all $Dy \in U$. Therefore $E(\hat{\beta}_{\mathrm{BLUE}_1}'X'XDy - \theta y'XDy) = 0$ holds for all $Dy \in U$. Thus $\hat{\beta}_{\mathrm{BLUE}_1}$ is also the BLUE of $\beta$ under model $M_2$, i.e., $\hat{\beta}_{\mathrm{BLUE}_1} = \hat{\beta}_{\mathrm{BLUE}_2}$.

In the following, we give an example to illustrate the use of the above result.

Example 1  Let us follow Xu et al. (2011) and suppose that the adopted model is
$$M_3:\quad y = 1_n\beta + \varepsilon_1, \qquad E(\varepsilon_1) = 0, \quad \mathrm{Cov}(\varepsilon_1) = \sigma^2I_n,$$
while the true model specifies that
$$M_4:\quad y = 1_n\beta + \varepsilon_2, \qquad E(\varepsilon_2) = 0, \quad \mathrm{Cov}(\varepsilon_2) = \sigma^2[(1-\rho)I_n + \rho 1_n1_n'].$$
It is obvious from Corollary 2.1 that $\hat{\beta} = (1/n)1_n'y$ is the BLUE of $\beta$ under model $M_3$. It is easy to verify that $N_XV_2T_1^+X = 0$ for $X = 1_n$, $V_1 = I_n$ and $V_2 = (1-\rho)I_n + \rho 1_n1_n'$. Therefore, $\hat{\beta}$ is also the BLUE of $\beta$ under model $M_4$ according to Theorem 4.1.
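The verification in Example 1 can also be done numerically; the sketch below (our illustration, with $n = 6$ and $\rho = 0.3$ as assumed values) confirms that the criterion matrix of Theorem 4.1 vanishes for the equicorrelated covariance.

```python
import numpy as np

n, rho = 6, 0.3
X = np.ones((n, 1))                                  # X = 1_n
V1 = np.eye(n)                                       # adopted covariance (model M_3)
V2 = (1 - rho) * np.eye(n) + rho * np.ones((n, n))   # true covariance (model M_4)
NX = np.eye(n) - np.ones((n, n)) / n                 # N_X for X = 1_n
T1p = np.linalg.pinv(V1 + X @ X.T)                   # T_1^+ with T_1 = V_1 + XX'
print(np.allclose(NX @ V2 @ T1p @ X, 0))             # criterion holds: prints True
```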

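By contrast, for a generic pair of covariances the criterion typically fails and the two BLUEs of (4.1) disagree, as the following companion sketch (again our own illustration, with assumed random instances) shows.

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, theta = 8, 3, 0.4
X = rng.standard_normal((n, p))
NX = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)

def blue_matrix(V):                              # the BLUE matrix of equation (4.1)
    T = V + X @ X.T
    Tp = np.linalg.pinv(T)
    G = np.linalg.solve(X.T @ Tp @ X, X.T @ Tp)  # (X'T^+X)^{-1} X'T^+
    return G + theta * np.linalg.solve(X.T @ X, X.T) @ V @ (Tp - Tp @ X @ G)

A1 = rng.standard_normal((n, n - 2)); V1 = A1 @ A1.T     # adopted covariance
A2 = rng.standard_normal((n, n - 1)); V2 = A2 @ A2.T     # true covariance
T1p = np.linalg.pinv(V1 + X @ X.T)
crit = NX @ V2 @ T1p @ X                         # Theorem 4.1 criterion matrix
y = (V2 + X @ X.T) @ rng.standard_normal(n)      # y in M(T_2), as under model M_2
print(np.allclose(crit, 0),                                   # False: criterion violated
      np.allclose(blue_matrix(V1) @ y, blue_matrix(V2) @ y))  # False: the BLUEs differ
```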
5. Concluding Remarks

In this paper, we provide necessary and sufficient conditions for a linear estimator to be the best linear unbiased estimator under Zellner's balanced loss function and the general Gauss-Markov model, and obtain the unique best linear unbiased estimator, which is a balance between the least squares estimator and the optimal estimator under quadratic loss. The loss robustness of the BLUE with respect to relative losses and relative saving losses is discussed, and the robustness of the BLUE against mis-specification of the covariance matrix is also considered. It is interesting to observe that the best linear estimator is a balance between the least squares estimator and the best linear unbiased estimator under quadratic loss; moreover, the weights assigned to the goodness of fit of the model and the precision of estimation coincide with the weights assigned to their corresponding optimal estimators. How to choose a proper weight when using the balanced loss function is a problem for further study.

References

[1] Rao, C.R., Linear Statistical Inference and Its Applications, Wiley, New York, 1973.
[2] Rao, C.R., Some recent results in linear estimation, Sankhyā: The Indian Journal of Statistics, Series B, 34(4)(1972).
[3] Albert, A., The Gauss-Markov theorem for regression models with possibly singular covariances, SIAM Journal on Applied Mathematics, 24(2)(1973).
[4] Zellner, A., Bayesian and non-Bayesian estimation using balanced loss functions, In: Gupta, S.S. and Berger, J.O. (Editors), Statistical Decision Theory and Related Topics V, Springer, New York, 1994.
[5] Rodrigues, J. and Zellner, A., Weighted balanced loss function and estimation of the mean time to failure, Communications in Statistics - Theory and Methods, 23(12)(1994).
[6] Giles, J.A., Giles, D.E.A. and Ohtani, K., The exact risks of some pre-test and Stein-type regression estimators under balanced loss, Communications in Statistics - Theory and Methods, 25(12)(1996).
[7] Ohtani, K., Giles, D.E.A. and Giles, J.A., The exact risk performance of a pre-test estimator in a heteroskedastic linear regression model under the balanced loss function, Econometric Reviews, 16(1)(1997).
[8] Ohtani, K., The exact risk of a weighted average estimator of the OLS and Stein-rule estimators in regression under balanced loss, Statistics and Risk Modeling, 16(1)(1998).
[9] Ohtani, K., Inadmissibility of the Stein-rule estimator under the balanced loss function, Journal of Econometrics, 88(1)(1999).
[10] Gruber, M.H.J., The efficiency of shrinkage estimators with respect to Zellner's balanced loss function, Communications in Statistics - Theory and Methods, 33(2)(2004).
[11] Jozani, M.J., Marchand, E. and Parsian, A., On estimation with weighted balanced-type loss function, Statistics and Probability Letters, 76(8)(2006).
[12] Bansal, A.K. and Aggarwal, P., Bayes prediction for a heteroscedastic regression superpopulation model using balanced loss function, Communications in Statistics - Theory and Methods, 36(8)(2007).
[13] Bansal, A.K. and Aggarwal, P., Bayes prediction of the regression coefficient in a finite population using balanced loss function, Metron - International Journal of Statistics, 67(1)(2009).
[14] Bansal, A.K. and Aggarwal, P., Bayes prediction for a stratified regression superpopulation model using balanced loss function, Communications in Statistics - Theory and Methods, 39(15)(2010).
[15] Hu, G.K. and Peng, P., Admissibility for linear estimators of regression coefficient in a general Gauss-Markoff model under balanced loss function, Journal of Statistical Planning and Inference, 140(11)(2010).

[16] Hu, G.K. and Peng, P., All admissible linear estimators of a regression coefficient under a balanced loss function, Journal of Multivariate Analysis, 102(8)(2011).
[17] Wang, S.G., The Theory of Linear Model and Its Application, Anhui Education Press, 1987. (in Chinese)
[18] Yu, S.H. and He, C.Z., Comparison of general Gauss-Markoff models in estimable subspace, Acta Mathematicae Applicatae Sinica, 20(4)(1997). (in Chinese)
[19] Xu, L.W., Lu, M. and Jiang, C.F., Optimal prediction in finite populations under matrix loss, Journal of Statistical Planning and Inference, 141(8)(2011).
