EFFICIENCY of the PRINCIPAL COMPONENT LIU-TYPE ESTIMATOR in LOGISTIC REGRESSION

Authors: Jibo Wu, School of Mathematics and Finance, Chongqing University of Arts and Sciences, Chongqing, China; Yasin Asar, Department of Mathematics and Computer Sciences, Necmettin Erbakan University, Konya, 42090, Turkey.

Abstract: In this paper we propose a principal component Liu-type logistic estimator, obtained by combining the principal component logistic regression estimator and the Liu-type logistic estimator, to overcome the multicollinearity problem. The superiority of the new estimator over some related estimators is studied under the asymptotic mean squared error matrix criterion. A Monte Carlo simulation experiment is designed to compare the performances of the estimators using the mean squared error criterion. Finally, a conclusion section is presented.

Key-Words: Liu-type estimator; Logistic regression; Mean squared error matrix; Maximum likelihood estimator; Multicollinearity.

AMS Subject Classification: 62J07; 62J12.


1. INTRODUCTION

Consider the following binary logistic regression model:

(1.1)   $\pi_i = \dfrac{\exp(x_i'\beta)}{1+\exp(x_i'\beta)}, \quad i = 1,\dots,n,$

where $x_i' = (1, x_{i1}, \dots, x_{iq})$ denotes the $i$th row of $X$, an $n \times p$ ($p = q+1$) data matrix with $q$ known covariate vectors; $y_i$ is the response variable taking the value 0 or 1 with $y_i \sim \text{Bernoulli}(\pi_i)$, the $y_i$'s being independent of one another; and $\beta = (\beta_0, \beta_1, \dots, \beta_q)'$ is the $p \times 1$ vector of parameters. Usually the maximum likelihood (ML) method is used to estimate $\beta$. The log-likelihood of model (1.1) is

(1.2)   $L = \sum_{i=1}^{n} \big[\, y_i \log \pi_i + (1-y_i)\log(1-\pi_i) \,\big],$

where $\pi_i$ is the $i$th element of the vector $\pi$, $i = 1,2,\dots,n$. The ML estimator is obtained by maximizing (1.2). Since Equation (1.2) is non-linear in $\beta$, one uses an iterative algorithm, the iteratively re-weighted least squares (IRLS) algorithm (Saleh and Kibria, 2013):

(1.3)   $\hat\beta_{t+1} = \hat\beta_t + (X'V_tX)^{-1}X'(y - \hat\pi_t),$

where $\hat\pi_t$ is the vector of estimated probabilities based on $\hat\beta_t$ and $V_t = \text{diag}\{\hat\pi_i^{(t)}(1-\hat\pi_i^{(t)})\}$, $\hat\pi_i^{(t)}$ being the $i$th element of $\hat\pi_t$. After some algebra, Equation (1.3) can be written as

(1.4)   $\hat\beta_{ML} = (X'VX)^{-1}X'Vz,$

where $z = (z_1,\dots,z_n)'$ with $\eta_i = x_i'\beta$ and $z_i = \eta_i + (y_i - \pi_i)\,\partial\eta_i/\partial\pi_i$.

In linear regression analysis, multicollinearity has long been regarded as a problem in estimation, and many methods have been introduced to deal with it. One approach is to use biased estimators such as the ridge estimator (Hoerl and Kennard, 1970), the Liu estimator (Liu, 1993), the Liu-type estimator (Huang et al., 2009), the modified Liu-type estimator (Alheety and Kibria, 2013) and improved ridge estimators (Yüzbaşı et al., 2017). Alternatively, many authors, such as Xu and Yang (2011) and Li and Yang (2011), have studied the estimation of linear models under additional restrictions.
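As an illustration of the IRLS iteration (1.3)-(1.4), the following minimal Python/NumPy sketch fits a binary logit model by maximum likelihood. The function name `irls_logistic` and the synthetic data are our own illustrative choices, not part of the original study.

```python
import numpy as np

def irls_logistic(X, y, tol=1e-6, max_iter=100):
    """ML estimation for the binary logit model via IRLS, cf. eqs. (1.3)-(1.4)."""
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        eta = X @ beta
        pi = 1.0 / (1.0 + np.exp(-eta))      # success probabilities, eq. (1.1)
        V = pi * (1.0 - pi)                  # diagonal of the weight matrix V_t
        step = np.linalg.solve(X.T * V @ X, X.T @ (y - pi))
        beta = beta + step
        if np.max(np.abs(step)) < tol:       # stop when the update is negligible
            break
    return beta

# illustrative synthetic data (not from the paper)
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta_true = np.array([0.5, 1.0, -1.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta_true))))
beta_ml = irls_logistic(X, y)
```

The stopping rule mirrors the convergence tolerance of $10^{-6}$ used in the simulation section.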
As in linear regression, estimation in logistic regression is also sensitive to multicollinearity. When there is multicollinearity, the columns of the data matrix become nearly linearly dependent, which implies that some of the eigenvalues of $X'VX$

become close to zero. Thus the mean squared error of the ML estimator is inflated, and stable estimates cannot be obtained. Many authors have therefore studied how to deal with multicollinearity: Lesaffre and Marx (1993) discussed multicollinearity in logistic regression, Schaefer et al. (1984) proposed the ridge logistic (RL) estimator, Aguilera et al. (2006) proposed the principal component logistic regression (PCLR) estimator, and Månsson et al. (2012) introduced the Liu logistic estimator. Moreover, Inan and Erdoğan (2013) proposed the Liu-type logistic (LL) estimator and Asar (2017) studied some of its properties. In this study, by combining the principal component logistic regression estimator and the Liu-type logistic estimator, the principal component Liu-type logistic estimator is introduced as an alternative to the PCLR, ML and LL estimators for dealing with multicollinearity.

The rest of the paper is organized as follows. In Section 2, the new estimator is proposed. Some properties of the new estimator are presented in Section 3. A Monte Carlo simulation is given in Section 4 and some concluding remarks are given in Section 5.

2. THE NEW ESTIMATOR

The logistic regression model is expressed by Aguilera et al. (2006) in matrix form in terms of the logit transformation as $L = X\beta = XTT'\beta = Z\alpha$, where $T = [t_1,\dots,t_p]$ is an orthogonal matrix such that $Z'VZ = T'X'VXT = \Lambda$, with $\Lambda = \text{diag}(\lambda_1,\dots,\lambda_p)$ and $\lambda_1 \ge \dots \ge \lambda_p$ the ordered eigenvalues of $X'VX$. Then $\Lambda$ and $T$ may be partitioned as

$\Lambda = \begin{pmatrix} \Lambda_r & O \\ O & \Lambda_{p-r} \end{pmatrix}$ and $T = (T_r, T_{p-r})$,

where $Z_r'VZ_r = T_r'X'VXT_r = \Lambda_r$ and $Z_{p-r}'VZ_{p-r} = T_{p-r}'X'VXT_{p-r} = \Lambda_{p-r}$. The matrix $Z$ and the vector $\alpha$ are partitioned accordingly as $Z = (Z_r, Z_{p-r})$ and $\alpha = (\alpha_r', \alpha_{p-r}')'$. The handling of multicollinearity by means of PCLR corresponds to the transition from the model $L = X\beta = XT_rT_r'\beta + XT_{p-r}T_{p-r}'\beta = Z_r\alpha_r + Z_{p-r}\alpha_{p-r}$ to the reduced model $L = Z_r\alpha_r$.
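The transformation $Z = XT$ and the split into leading and trailing components can be sketched numerically as follows (a Python/NumPy illustration on assumed data; note that `numpy.linalg.eigh` returns eigenvalues in ascending order, so they are re-sorted to match the convention $\lambda_1 \ge \dots \ge \lambda_p$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r = 200, 4, 2
X = rng.standard_normal((n, p))
X[:, 3] = X[:, 2] + 0.01 * rng.standard_normal(n)   # two nearly dependent columns
pi = np.full(n, 0.5)                                # placeholder fitted probabilities
V = pi * (1.0 - pi)                                 # weights pi_i (1 - pi_i)

G = X.T * V @ X                                     # X'VX
lam, T = np.linalg.eigh(G)                          # eigh gives ascending order
order = np.argsort(lam)[::-1]
lam, T = lam[order], T[:, order]                    # enforce lam_1 >= ... >= lam_p
T_r = T[:, :r]                                      # leading eigenvectors T_r
Z_r = X @ T_r                                       # reduced design Z_r = X T_r
```

On this synthetic design the near-dependence between the last two columns produces a near-zero eigenvalue, which is exactly the situation PCLR is meant to handle.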
Then, by Equation (1.1) and the PCLR method, we obtain the PCLR estimator. Inan and Erdoğan (2013) proposed the Liu-type logistic (LL) estimator as

(2.1)   $\hat\beta(k,d) = (X'VX + kI)^{-1}\,(X'Vz - d\,\hat\beta_{ML}),$

where $-\infty < d < \infty$ and $k > 0$ are biasing parameters. The principal component logistic regression estimator (Aguilera et al., 2006) is defined as

(2.2)   $\hat\beta_r = T_r\,(T_r'X'VXT_r)^{-1}\,T_r'X'Vz.$
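A hedged Python/NumPy sketch of the two building blocks (2.1) and (2.2), assuming the weights $V$ and the working response $z$ have already been computed from a fitted model; the helper names are ours:

```python
import numpy as np

def ml_estimator(X, V, z):
    """beta_ML = (X'VX)^{-1} X'Vz, eq. (1.4)."""
    return np.linalg.solve(X.T * V @ X, X.T * V @ z)

def ll_estimator(X, V, z, k, d):
    """Liu-type logistic estimator of eq. (2.1)."""
    G = X.T * V @ X
    rhs = X.T * V @ z - d * ml_estimator(X, V, z)
    return np.linalg.solve(G + k * np.eye(X.shape[1]), rhs)

def pclr_estimator(X, V, z, r):
    """Principal component logistic estimator of eq. (2.2)."""
    G = X.T * V @ X
    lam, T = np.linalg.eigh(G)
    T_r = T[:, np.argsort(lam)[::-1][:r]]           # r leading eigenvectors
    return T_r @ np.linalg.solve(T_r.T @ G @ T_r, T_r.T @ (X.T * V @ z))

# illustrative inputs: V and z would normally come from the converged IRLS fit
rng = np.random.default_rng(3)
n, p = 100, 4
X = rng.standard_normal((n, p))
V = np.full(n, 0.25)
z = rng.standard_normal(n)
beta_ml = ml_estimator(X, V, z)
```

With the degenerate choice $k = 0$, $d = 0$, (2.1) collapses to (1.4), and with $r = p$ the PCLR estimator also reproduces $\hat\beta_{ML}$; these identities make convenient sanity checks.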

We can write (2.2) as follows:

(2.3)   $\hat\beta_r = T_r\,(T_r'X'VXT_r)^{-1}\,T_r'X'Vz = T_rT_r'\,\hat\beta_{ML}.$

Then we can introduce a new estimator by replacing $\hat\beta_{ML}$ in (2.3) with $\hat\beta(k,d)$:

(2.4)   $\hat\beta_r(k,d) = T_rT_r'\,\hat\beta(k,d) = T_r\,(T_r'X'VXT_r + kI_r)^{-1}\,(T_r'X'VXT_r - dI_r)\,(T_r'X'VXT_r)^{-1}\,T_r'X'Vz,$

where $-\infty < d < \infty$ and $k > 0$ are biasing parameters. We call this estimator the principal component Liu-type logistic regression (PCLL) estimator.

Remark 2.1. It is obvious that $\hat\beta_r(k,d) = T_r\,(T_r'X'VXT_r + kI_r)^{-1}\,(T_r'X'VXT_r - dI_r)\,T_r'\,\hat\beta_r$. Thus the PCLL estimator can be seen as a linear transformation of the PCLR estimator.

Remark 2.2. It is easy to obtain the following special cases:

(a) $\hat\beta_r(0,0) = \hat\beta_r = T_r(T_r'X'VXT_r)^{-1}T_r'X'Vz$, the PCLR estimator;

(b) $\hat\beta_p(0,0) = \hat\beta_{ML} = (X'VX)^{-1}X'Vz$, the ML estimator;

(c) $\hat\beta_p(k,d) = \hat\beta(k,d) = (X'VX + kI)^{-1}(X'Vz - d\,\hat\beta_{ML})$, the LL estimator.

Thus the new estimator (2.4) includes the PCLR, ML and LL estimators as special cases. In the next section, we study the properties of the new estimator.

3. THE PROPERTIES OF THE NEW ESTIMATOR

For convenience, we first present some lemmas needed in the following discussion.

Lemma 3.1 (Farebrother, 1976; Rao and Toutenburg, 1999). Suppose that $M$ is a positive definite matrix, i.e. $M > 0$, and $\alpha$ is a vector. Then $M - \alpha\alpha' \ge 0$ if and only if $\alpha'M^{-1}\alpha \le 1$.
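Lemma 3.1 can be checked numerically. The following sketch (our own illustration, not from the paper) scales a vector so that $\alpha'M^{-1}\alpha$ is below or above 1 and inspects the smallest eigenvalue of $M - \alpha\alpha'$:

```python
import numpy as np

# Numerical check of Lemma 3.1: for positive definite M,
# M - a a' is nonnegative definite exactly when a' M^{-1} a <= 1.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
M = B @ B.T + 4.0 * np.eye(4)                # positive definite M
a = rng.standard_normal(4)
q = a @ np.linalg.solve(M, a)                # a' M^{-1} a
a_in = a * np.sqrt(0.81 / q)                 # scaled so a' M^{-1} a = 0.81 < 1
a_out = 2.0 * a_in                           # quadratic form becomes 3.24 > 1

def min_eig(A):
    """Smallest eigenvalue of a symmetric matrix."""
    return np.linalg.eigvalsh(A).min()
```

Here `min_eig(M - np.outer(a_in, a_in))` is nonnegative, while `min_eig(M - np.outer(a_out, a_out))` is negative, in agreement with the lemma.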

Lemma 3.2 (Baksalary and Trenkler, 1991). Let $\mathbb{C}^{n\times p}$ denote the set of $n \times p$ complex matrices and $\mathbb{H}^{n\times n}$ the set of $n \times n$ Hermitian matrices. For $L \in \mathbb{C}^{n\times p}$, let $L^*$, $\mathcal{R}(L)$ and $\kappa(L)$ denote the conjugate transpose, the range and the set of all generalized inverses of $L$, respectively. Let $A \in \mathbb{H}^{n\times n}$, let $a_1, a_2 \in \mathbb{C}^{n\times 1}$ be linearly independent, and write $f_{ij} = a_i^*A^-a_j$, $i,j = 1,2$, for $A^- \in \kappa(A)$. When $a_1 \notin \mathcal{R}(A)$, define

$s = a_2^*(I - AA^-)^*(I - AA^-)a_1 \,\big/\, a_1^*(I - AA^-)^*(I - AA^-)a_1.$

Then $A + a_1a_1^* - a_2a_2^* \ge 0$ holds if and only if one of the following sets of conditions holds:

(a) $A \ge 0$, $a_i \in \mathcal{R}(A)$, $i = 1,2$, $(1 + f_{11})(1 - f_{22}) + |f_{12}|^2 \ge 0$;

(b) $A \ge 0$, $a_1 \notin \mathcal{R}(A)$, $a_2 \in \mathcal{R}(A : a_1)$, $(a_2 - \bar s a_1)^*A^-(a_2 - \bar s a_1) \le 1 - |s|^2$;

(c) $A = U\Delta U^* - \lambda vv^*$, $a_i \in \mathcal{R}(A)$, $i = 1,2$, $v^*a_1 \ne 0$, $f_{11} + 1 \le 0$, $f_{22} - 1 \le 0$, $(f_{11}+1)(f_{22}-1) \ge |f_{12}|^2$,

where $(U : v)$ is a sub-unitary matrix, $\lambda$ is a positive scalar and $\Delta$ is a positive definite diagonal matrix. Further, conditions (a), (b) and (c) do not depend on the choice of the generalized inverse $A^-$.

To compare the estimators, we use the mean squared error matrix (MSEM) criterion, defined for an estimator $\check\beta$ as

$\text{MSEM}(\check\beta) = \text{Cov}(\check\beta) + \text{Bias}(\check\beta)\,\text{Bias}(\check\beta)',$

where $\text{Cov}(\check\beta)$ is the covariance matrix and $\text{Bias}(\check\beta)$ is the bias vector of $\check\beta$. The scalar mean squared error (SMSE) of $\check\beta$ is $\text{SMSE}(\check\beta) = \text{tr}\{\text{MSEM}(\check\beta)\}$.

3.1. Comparison of the new estimator (PCLL) to the ML estimator

From (2.4), the asymptotic covariance matrix of the new estimator is

(3.1)   $\text{Cov}(\hat\beta_r(k,d)) = T_r\,S_r(k)^{-1}S_r(d)\,\Lambda_r^{-1}\,S_r(d)S_r(k)^{-1}\,T_r',$

where $S_r(k) = \Lambda_r + kI_r$ and $S_r(d) = \Lambda_r - dI_r$. Using (2.4), we also get

(3.2)   $E[\hat\beta_r(k,d)] = T_r\,S_r(k)^{-1}S_r(d)\,T_r'\beta.$

By the identity

(3.3)   $T_rS_r(k)^{-1}S_r(d)T_r' - I_p = -\big[T_{p-r}T_{p-r}' + (d+k)\,T_rS_r(k)^{-1}T_r'\big],$

the asymptotic bias of the new estimator is

$\text{Bias}(\hat\beta_r(k,d)) = -\big[T_{p-r}T_{p-r}' + (d+k)\,T_rS_r(k)^{-1}T_r'\big]\beta.$

Therefore, the asymptotic mean squared error matrix of the new estimator is

(3.4)   $\text{MSEM}(\hat\beta_r(k,d)) = T_rS_r(k)^{-1}S_r(d)\Lambda_r^{-1}S_r(d)S_r(k)^{-1}T_r' + \big[T_{p-r}T_{p-r}' + (d+k)T_rS_r(k)^{-1}T_r'\big]\beta\beta'\big[T_{p-r}T_{p-r}' + (d+k)T_rS_r(k)^{-1}T_r'\big].$

Theorem 3.1. Assume that $d < k$ and $d + k > 0$. Then the new estimator is superior to the ML estimator under the asymptotic mean squared error matrix criterion if and only if

$\beta'\Big\{(k+d)^2\,T_r\big[2(k+d)I_r + (k^2-d^2)\Lambda_r^{-1}\big]^{-1}T_r' + T_{p-r}\Lambda_{p-r}T_{p-r}'\Big\}\beta \le 1.$

Proof: The asymptotic mean squared error matrix of the ML estimator is

(3.5)   $\text{MSEM}(\hat\beta_{ML}) = (X'VX)^{-1}.$

By $\Lambda = \begin{pmatrix}\Lambda_r & O\\ O & \Lambda_{p-r}\end{pmatrix}$ and $T = (T_r, T_{p-r})$, we obtain $(X'VX)^{-1} = T\Lambda^{-1}T' = T_r\Lambda_r^{-1}T_r' + T_{p-r}\Lambda_{p-r}^{-1}T_{p-r}'$. Consider the difference $\Delta_1 = \text{MSEM}(\hat\beta_{ML}) - \text{MSEM}(\hat\beta_r(k,d))$:

(3.6)   $\Delta_1 = T_rS_r(k)^{-1}\big[2(k+d)I_r + (k^2-d^2)\Lambda_r^{-1}\big]S_r(k)^{-1}T_r' + T_{p-r}\Lambda_{p-r}^{-1}T_{p-r}' - \big[T_{p-r}T_{p-r}' + (k+d)T_rS_r(k)^{-1}T_r'\big]\beta\beta'\big[T_{p-r}T_{p-r}' + (k+d)T_rS_r(k)^{-1}T_r'\big].$

Let

(3.7)   $\tilde S^{-1} = (k+d)\,T_rS_r(k)^{-1}T_r' + T_{p-r}T_{p-r}'$ and $\tilde\Lambda = T_r\,\dfrac{2(k+d)I_r + (k^2-d^2)\Lambda_r^{-1}}{(k+d)^2}\,T_r' + T_{p-r}\Lambda_{p-r}^{-1}T_{p-r}'.$

With these definitions, (3.6) can be written as $\Delta_1 = \tilde S^{-1}\big[\tilde\Lambda - \beta\beta'\big]\tilde S^{-1}$. Thus $\Delta_1$ is a nonnegative definite matrix if and only if $\tilde\Lambda - \beta\beta'$ is nonnegative definite. By Lemma 3.1, $\tilde\Lambda - \beta\beta' \ge 0$ if and only if $\beta'\tilde\Lambda^{-1}\beta \le 1$. Since $\tilde\Lambda^{-1} = (k+d)^2\,T_r\big[2(k+d)I_r + (k^2-d^2)\Lambda_r^{-1}\big]^{-1}T_r' + T_{p-r}\Lambda_{p-r}T_{p-r}'$, Theorem 3.1 is proved.

3.2. Comparison of the new estimator (PCLL) to the PCLR estimator

Theorem 3.2. Suppose that $d < k$ and $d + k > 0$. Then the new estimator is better than the PCLR estimator under the asymptotic mean squared error matrix criterion if and only if $T_r'\beta = 0$.

Proof: Setting $k = d = 0$ in Equation (3.4) gives

(3.9)   $\text{MSEM}(\hat\beta_r) = T_r\Lambda_r^{-1}T_r' + (I_p - T_rT_r')\beta\beta'(I_p - T_rT_r').$

Now consider the difference $\Delta_2 = \text{MSEM}(\hat\beta_r) - \text{MSEM}(\hat\beta_r(k,d))$:

$\Delta_2 = T_rS_r(k)^{-1}\big[2(k+d)I_r + (k^2-d^2)\Lambda_r^{-1}\big]S_r(k)^{-1}T_r' + (I_p - T_rT_r')\beta\beta'(I_p - T_rT_r') - \big[T_{p-r}T_{p-r}' + (d+k)T_rS_r(k)^{-1}T_r'\big]\beta\beta'\big[T_{p-r}T_{p-r}' + (d+k)T_rS_r(k)^{-1}T_r'\big].$

To apply Lemma 3.2, let $A = T_rBT_r'$, where $B = S_r(k)^{-1}\big[2(k+d)I_r + (k^2-d^2)\Lambda_r^{-1}\big]S_r(k)^{-1}$, and set $a_1 = (I_p - T_rT_r')\beta$ and $a_2 = \big[T_{p-r}T_{p-r}' + (d+k)T_rS_r(k)^{-1}T_r'\big]\beta$. When $d < k$ and $d + k > 0$, $B$ is a positive definite matrix. The Moore-Penrose inverse of $A$ is $A^+ = T_rB^{-1}T_r'$, with $AA^+ = T_rT_r'$. Thus $a_1 \in \mathcal{R}(A)$ if and only if $a_1 = 0$. Since $a_1 \ne 0$, parts (a) and (c) of Lemma 3.2 cannot be used, and we can only apply part (b). Using the definition of $s$, we obtain $s = 1$. On the other hand, $a_2 - a_1 = A\eta$, where

$\eta = (d+k)\,T_rS_r(k)\big[2(k+d)I_r + (k^2-d^2)\Lambda_r^{-1}\big]^{-1}T_r'\beta.$

Thus we easily obtain $a_2 \in \mathcal{R}(A : a_1)$. Then, using Lemma 3.2(b) with $s = 1$, the new estimator is superior to the PCLR estimator under the asymptotic mean squared error matrix criterion if and only if $(a_2 - a_1)'A^-(a_2 - a_1) \le 0$, that is, $\eta'A\eta \le 0$. Since $A$ is nonnegative definite, in fact $(a_2 - a_1)'A^-(a_2 - a_1) \ge 0$, so the new estimator is better than the PCLR estimator if and only if $\eta'A\eta = 0$, that is,

$(d+k)^2\,\beta'T_r\big[2(k+d)I_r + (k^2-d^2)\Lambda_r^{-1}\big]^{-1}T_r'\beta = 0,$

which, the bracketed matrix being positive definite, holds if and only if $T_r'\beta = 0$. Thus the proof is finished.

3.3. Comparison of the new estimator (PCLL) to the Liu-type logistic estimator

Theorem 3.3. The new estimator is superior to the Liu-type logistic estimator under the asymptotic mean squared error matrix criterion if and only if $T_{p-r}'\beta = 0$.

Proof: Putting $r = p$ into (3.4), we get

$\text{MSEM}(\hat\beta(k,d)) = T\,S(k)^{-1}S(d)\Lambda^{-1}S(d)S(k)^{-1}\,T' + (k+d)^2\,TS(k)^{-1}T'\beta\beta'TS(k)^{-1}T',$

where $S(k) = \Lambda + kI_p$ and $S(d) = \Lambda - dI_p$. Now we study the difference $\Delta_3 = \text{MSEM}(\hat\beta(k,d)) - \text{MSEM}(\hat\beta_r(k,d))$:

$\Delta_3 = T\,S(k)^{-1}S(d)\Lambda^{-1}S(d)S(k)^{-1}\,T' - T_rS_r(k)^{-1}S_r(d)\Lambda_r^{-1}S_r(d)S_r(k)^{-1}T_r' + (k+d)^2\,TS(k)^{-1}T'\beta\beta'TS(k)^{-1}T' - \big[T_{p-r}T_{p-r}' + (d+k)T_rS_r(k)^{-1}T_r'\big]\beta\beta'\big[T_{p-r}T_{p-r}' + (d+k)T_rS_r(k)^{-1}T_r'\big].$

Let $C = T_{p-r}DT_{p-r}'$, where $D = S_{p-r}(k)^{-1}S_{p-r}(d)\Lambda_{p-r}^{-1}S_{p-r}(d)S_{p-r}(k)^{-1}$ with $S_{p-r}(k) = \Lambda_{p-r} + kI_{p-r}$ and $S_{p-r}(d) = \Lambda_{p-r} - dI_{p-r}$, and set $a_3 = (d+k)\,TS(k)^{-1}T'\beta$ and $a_2 = \big[T_{p-r}T_{p-r}' + (d+k)T_rS_r(k)^{-1}T_r'\big]\beta$, so that $\Delta_3 = C + a_3a_3' - a_2a_2'$. We apply part (b) of Lemma 3.2. The Moore-Penrose inverse of $C$ is $C^+ = T_{p-r}D^{-1}T_{p-r}'$, with $CC^+ = T_{p-r}T_{p-r}'$. So $a_3 \notin \mathcal{R}(C)$, $a_2 \in \mathcal{R}(C : a_3)$, $s = 1$, and $a_2 - a_3 = C\eta_1$, where

$\eta_1 = T_{p-r}\,S_{p-r}(k)S_{p-r}(d)^{-1}\Lambda_{p-r}\,T_{p-r}'\beta.$

Then, by Lemma 3.2, the new estimator is superior to the Liu-type logistic estimator under the asymptotic mean squared error matrix criterion if and only if $(a_2 - a_3)'C^-(a_2 - a_3) \le 0$, that is, $\eta_1'C\eta_1 \le 0$. Since in fact $(a_2 - a_3)'C^-(a_2 - a_3) \ge 0$, the new estimator is better than the Liu-type logistic estimator if and only if $\eta_1'C\eta_1 = 0$, that is, $\beta'T_{p-r}\Lambda_{p-r}T_{p-r}'\beta = 0$, which holds if and only if $T_{p-r}'\beta = 0$.

4. A MONTE CARLO SIMULATION STUDY

In this section, we present the details and the results of a Monte Carlo simulation conducted to evaluate the performances of the ML, PCLR, LL and PCLL estimators in the binary logistic regression model. There are several papers studying the performance of different estimators in binary logistic regression; following Lee and Silvapulle (1988), Månsson et al. (2012), Asar (2017) and Asar and Genç (2016), the explanatory variables are generated as

(4.1)   $x_{ij} = (1-\rho^2)^{1/2}\,z_{ij} + \rho\,z_{iq},$

where $i = 1,2,\dots,n$, $j = 1,2,\dots,q$, and the $z_{ij}$'s are random numbers generated from the standard normal distribution. The effective factors in the design of the experiment are the number of explanatory variables $q$, the degree of correlation $\rho^2$ among the independent variables and the sample size $n$. Four different values of the correlation $\rho$ are considered, including 0.8, 0.9 and 0.99. Moreover, three different values of the number of explanatory variables, $q = 6$, 8 and 12, are considered in the design of the experiment. The sample size varies as 50, 100, 200, 500 and one larger value. Moreover, we choose the number of principal components using the method of percentage of total variability, defined as

$PV = \dfrac{\sum_{j=1}^{r}\lambda_j}{\sum_{j=1}^{p}\lambda_j} \times 100.$
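The data-generating scheme (4.1) and the PV criterion can be sketched as follows (Python/NumPy; the function names are ours, and the intercept handling is an assumption consistent with $p = q + 1$):

```python
import numpy as np

def gen_design(n, q, rho, rng):
    """Regressors via eq. (4.1); pairwise correlation approx rho^2, intercept added."""
    Z = rng.standard_normal((n, q))
    X = np.sqrt(1.0 - rho**2) * Z + rho * Z[:, [q - 1]]   # shared component z_iq
    return np.column_stack([np.ones(n), X])               # p = q + 1 columns

def percent_variability(lam, r):
    """PV criterion: 100 * (sum of r largest eigenvalues) / (sum of all)."""
    lam = np.sort(np.asarray(lam, dtype=float))[::-1]
    return 100.0 * lam[:r].sum() / lam.sum()

rng = np.random.default_rng(4)
X = gen_design(2000, 6, 0.99, rng)
corr_12 = np.corrcoef(X[:, 1], X[:, 2])[0, 1]             # close to 0.99**2
```

Because every regressor shares the component $z_{iq}$, the pairwise correlation between distinct regressors is approximately $\rho^2$, which is what makes the design multicollinear for $\rho$ near 1.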
In the simulation, PV is chosen as 0.75 for $q = 8$ and $q = 12$, and as 0.83 for $q = 6$ (see Aguilera et al., 2006). The coefficient vector $\beta$ is chosen, following Newhouse and Oman (1971), such that $\beta'\beta = 1$, which is a commonly used restriction (see, for example, Kibria, 2003). The $n$ observations of the dependent variable are generated from the Bernoulli distribution $\text{Be}(\pi_i)$, where $\pi_i = \exp(x_i'\beta)/(1+\exp(x_i'\beta))$ and $x_i'$ is the $i$th row of the data matrix $X$. The experiment is replicated $N$ times for each factor combination. To compute the simulated MSEs of the estimators, the following equation is used:

(4.2)   $\text{MSE}(\check\beta) = \dfrac{1}{N}\sum_{c=1}^{N}\,(\check\beta_c - \beta)'(\check\beta_c - \beta),$

where $\check\beta_c$ is the MLE, PCLR, LL or PCLL estimate in the $c$th replication. The convergence tolerance in the IRLS algorithm is taken to be $10^{-6}$. The biasing parameters are chosen as follows:

1. LL: Following Asar (2017), we choose $d_{LL} = \frac{1}{2}\min_{1\le j\le p}\big\{\lambda_j/(\lambda_j+1)\big\}$, where $\min$ is the minimum function, and $k_{AM} = \frac{1}{p}\sum_{j=1}^{p}\dfrac{\lambda_j - d_{LL}(1+\lambda_j\hat\alpha_j^2)}{\lambda_j\hat\alpha_j^2}$.

2. PCLL: We propose the analogous modifications $d_{PCLL} = \frac{1}{2}\min_{1\le j\le r}\big\{\lambda_j/(\lambda_j+1)\big\}$ and $k_{PCLL} = \frac{1}{r}\sum_{j=1}^{r}\dfrac{\lambda_j - d_{PCLL}(1+\lambda_j\hat\alpha_j^2)}{\lambda_j\hat\alpha_j^2}$.

[Table 1: Simulated MSE values of the estimators when q = 6, reported by sample size n and correlation ρ for the MLE, LL, PCLR and PCLL estimators.]

According to Tables 1-3, the following results are obtained:

1. The MSE of the MLE is inflated when the degree of correlation is increased

and the sample size is low. On the other hand, the performance of the MLE becomes quite good when the sample size is large enough.

[Table 2: Simulated MSE values of the estimators when q = 8, reported by sample size n and correlation ρ for the MLE, LL, PCLR and PCLL estimators.]

2. Similarly, considering PCLR and LL, the MSE values are also inflated for increasing values of the degree of correlation, especially when n = 50.

3. MLE, PCLR and LL produce high MSE values when the sample size is low and the degree of correlation is high. However, PCLL appears robust to this situation in most of the cases.

4. Increasing the sample size has a positive effect on the estimators in most of the situations, although the improvement is not uniform across all settings.

5. When the degree of correlation is low, there is no estimator beating all the others.

6. Overall, the new estimator PCLL has the lowest MSE value in most of the situations considered in the simulation.
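The biasing-parameter rules for LL and PCLL given in Section 4 can be sketched in one helper (our own illustration; passing all $p$ eigenvalue/coefficient pairs yields $(d_{LL}, k_{AM})$, while passing only the leading $r$ pairs yields $(d_{PCLL}, k_{PCLL})$):

```python
import numpy as np

def liu_type_parameters(lam, alpha):
    """d and k rules of Section 4. lam: eigenvalues used; alpha: the
    corresponding components of T' beta_ML."""
    lam = np.asarray(lam, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    d = 0.5 * np.min(lam / (lam + 1.0))          # d = (1/2) min lam_j/(lam_j + 1)
    k = np.mean((lam - d * (1.0 + lam * alpha**2)) / (lam * alpha**2))
    return d, k

# worked example: eigenvalues 4 and 1, both alpha_j = 1
d, k = liu_type_parameters([4.0, 1.0], [1.0, 1.0])   # d = 0.25, k = 0.59375
```

In the worked example, $d = \frac{1}{2}\min\{4/5,\,1/2\} = 0.25$ and $k$ is the mean of $(4 - 0.25\cdot 5)/4 = 0.6875$ and $(1 - 0.25\cdot 2)/1 = 0.5$.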

[Table 3: Simulated MSE values of the estimators when q = 12, reported by sample size n and correlation ρ for the MLE, LL, PCLR and PCLL estimators.]

5. CONCLUSION

In this paper, we develop a new principal component Liu-type logistic estimator as a combination of the principal component logistic regression estimator and the Liu-type logistic estimator to overcome the multicollinearity problem. We prove theorems establishing conditions for the superiority of the new estimator over the other estimators under the asymptotic mean squared error matrix criterion. Finally, a Monte Carlo simulation study is presented in order to show the performance of the new estimator. According to the results, PCLL is a good alternative in multicollinear situations in the binary logistic regression model.

REFERENCES

[1] Aguilera, A. M., Escabias, M. and Valderrama, M. J. (2006). Using principal components for estimating logistic regression with high-dimensional multicollinear data, Computational Statistics and Data Analysis, 50, 8.

[2] Alheety, M. I. and Kibria, B. M. G. (2013). Modified Liu-Type Estimator

Based on the r-k Class Estimator, Communications in Statistics - Theory and Methods.

[3] Asar, Y. (2017). Some new methods to solve multicollinearity in logistic regression, Communications in Statistics - Simulation and Computation, 46, 4.

[4] Asar, Y. and Genç, A. (2016). New Shrinkage Parameters for the Liu-Type Logistic Estimators, Communications in Statistics - Simulation and Computation, 45, 3.

[5] Baksalary, J. K. and Trenkler, G. (1991). Nonnegative and positive definiteness of matrices modified by two matrices of rank one, Linear Algebra and its Applications, 151.

[6] Farebrother, R. W. (1976). Further Results on the Mean Square Error of Ridge Regression, Journal of the Royal Statistical Society B, 38.

[7] Hoerl, A. E. and Kennard, R. W. (1970). Ridge regression: biased estimation for nonorthogonal problems, Technometrics, 12.

[8] Huang, W. H., Qi, J. J. and Huang, N. (2009). Liu-type estimator for a linear model with linear restrictions, Journal of Systems Science and Mathematical Sciences, 29.

[9] Inan, D. and Erdoğan, B. E. (2013). Liu-type logistic estimator, Communications in Statistics - Simulation and Computation, 42, 7.

[10] Kibria, B. M. G. (2003). Performance of some new ridge regression estimators, Communications in Statistics - Simulation and Computation, 32, 2.

[11] Lee, A. H. and Silvapulle, M. J. (1988). Ridge estimation in logistic regression, Communications in Statistics - Simulation and Computation, 17, 4.

[12] Lesaffre, E. and Marx, B. D. (1993). Collinearity in Generalized Linear Regression, Communications in Statistics - Theory and Methods, 22, 7.

[13] Liu, K. (1993). A new class of biased estimate in linear regression, Communications in Statistics - Theory and Methods, 22.

[14] Li, Y. L. and Yang, H. (2011). Two kinds of restricted modified estimators in linear regression model, Journal of Applied Statistics, 38.

[15] Månsson, K., Kibria, B. M. G. and Shukur, G. (2012). On Liu estimators for the logit regression model, Journal of Applied Statistics, 29, 4.

[16] Newhouse, J. P. and Oman, S.
D. (1971). An evaluation of ridge estimators, Rand Corporation, P-716-PR.

[17] Rao, C. R. and Toutenburg, H. (1999). Linear Models: Least Squares and Alternatives, Springer-Verlag, New York.

[18] Saleh, A. M. E. and Kibria, B. M. G. (2013). Improved ridge regression estimators for the logistic regression model, Computational Statistics, 28, 6.

[19] Schaefer, R. L., Roi, L. D. and Wolfe, R. A. (1984). A ridge logistic estimator, Communications in Statistics - Theory and Methods, 13, 1.

[20] Xu, J. W. and Yang, H. (2011). On the restricted almost unbiased estimators in linear regression, Communications in Statistics - Theory and Methods, 38.

[21] Yüzbaşı, B., Ahmed, S. E. and Güngör, M. (2017). Improved penalty strategies in linear regression models, REVSTAT Statistical Journal, 15, 2.


More information

Generalized Ridge Regression Estimator in Semiparametric Regression Models

Generalized Ridge Regression Estimator in Semiparametric Regression Models JIRSS (2015) Vol. 14, No. 1, pp 25-62 Generalized Ridge Regression Estimator in Semiparametric Regression Models M. Roozbeh 1, M. Arashi 2, B. M. Golam Kibria 3 1 Department of Statistics, Faculty of Mathematics,

More information

Confidence Intervals in Ridge Regression using Jackknife and Bootstrap Methods

Confidence Intervals in Ridge Regression using Jackknife and Bootstrap Methods Chapter 4 Confidence Intervals in Ridge Regression using Jackknife and Bootstrap Methods 4.1 Introduction It is now explicable that ridge regression estimator (here we take ordinary ridge estimator (ORE)

More information

Statistics 910, #5 1. Regression Methods

Statistics 910, #5 1. Regression Methods Statistics 910, #5 1 Overview Regression Methods 1. Idea: effects of dependence 2. Examples of estimation (in R) 3. Review of regression 4. Comparisons and relative efficiencies Idea Decomposition Well-known

More information

Using Ridge Least Median Squares to Estimate the Parameter by Solving Multicollinearity and Outliers Problems

Using Ridge Least Median Squares to Estimate the Parameter by Solving Multicollinearity and Outliers Problems Modern Applied Science; Vol. 9, No. ; 05 ISSN 9-844 E-ISSN 9-85 Published by Canadian Center of Science and Education Using Ridge Least Median Squares to Estimate the Parameter by Solving Multicollinearity

More information

Issues of multicollinearity and conditional heteroscedasticy in time series econometrics

Issues of multicollinearity and conditional heteroscedasticy in time series econometrics KRISTOFER MÅNSSON DS DS Issues of multicollinearity and conditional heteroscedasticy in time series econometrics JIBS Dissertation Series No. 075 JIBS Issues of multicollinearity and conditional heteroscedasticy

More information

Sociedad de Estadística e Investigación Operativa

Sociedad de Estadística e Investigación Operativa Sociedad de Estadística e Investigación Operativa Test Volume 14, Number 2. December 2005 Estimation of Regression Coefficients Subject to Exact Linear Restrictions when Some Observations are Missing and

More information

APPLICATION OF RIDGE REGRESSION TO MULTICOLLINEAR DATA

APPLICATION OF RIDGE REGRESSION TO MULTICOLLINEAR DATA Journal of Research (Science), Bahauddin Zakariya University, Multan, Pakistan. Vol.15, No.1, June 2004, pp. 97-106 ISSN 1021-1012 APPLICATION OF RIDGE REGRESSION TO MULTICOLLINEAR DATA G. R. Pasha 1 and

More information

A Modern Look at Classical Multivariate Techniques

A Modern Look at Classical Multivariate Techniques A Modern Look at Classical Multivariate Techniques Yoonkyung Lee Department of Statistics The Ohio State University March 16-20, 2015 The 13th School of Probability and Statistics CIMAT, Guanajuato, Mexico

More information

Remedial Measures for Multiple Linear Regression Models

Remedial Measures for Multiple Linear Regression Models Remedial Measures for Multiple Linear Regression Models Yang Feng http://www.stat.columbia.edu/~yangfeng Yang Feng (Columbia University) Remedial Measures for Multiple Linear Regression Models 1 / 25 Outline

More information

On V-orthogonal projectors associated with a semi-norm

On V-orthogonal projectors associated with a semi-norm On V-orthogonal projectors associated with a semi-norm Short Title: V-orthogonal projectors Yongge Tian a, Yoshio Takane b a School of Economics, Shanghai University of Finance and Economics, Shanghai

More information

. a m1 a mn. a 1 a 2 a = a n

. a m1 a mn. a 1 a 2 a = a n Biostat 140655, 2008: Matrix Algebra Review 1 Definition: An m n matrix, A m n, is a rectangular array of real numbers with m rows and n columns Element in the i th row and the j th column is denoted by

More information

STA Homework 1. Due back at the beginning of class on Oct 20, 2008

STA Homework 1. Due back at the beginning of class on Oct 20, 2008 STA 410 - Homework 1 Due back at the beginning of class on Oct 20, 2008 The following is split into two parts. Part I should be solved by all. Part II is mandatory only for the graduate students in Statistics.

More information

Alternative Biased Estimator Based on Least. Trimmed Squares for Handling Collinear. Leverage Data Points

Alternative Biased Estimator Based on Least. Trimmed Squares for Handling Collinear. Leverage Data Points International Journal of Contemporary Mathematical Sciences Vol. 13, 018, no. 4, 177-189 HIKARI Ltd, www.m-hikari.com https://doi.org/10.1988/ijcms.018.8616 Alternative Biased Estimator Based on Least

More information

IMPROVED PENALTY STRATEGIES in LINEAR REGRESSION MODELS

IMPROVED PENALTY STRATEGIES in LINEAR REGRESSION MODELS REVSTAT Statistical Journal Volume 15, Number, April 017, 51-76 IMPROVED PENALTY STRATEGIES in LINEAR REGRESSION MODELS Authors: Bahadır Yüzbaşı Department of Econometrics, Inonu University, Turkey b.yzb@hotmail.com

More information

Testing Some Covariance Structures under a Growth Curve Model in High Dimension

Testing Some Covariance Structures under a Growth Curve Model in High Dimension Department of Mathematics Testing Some Covariance Structures under a Growth Curve Model in High Dimension Muni S. Srivastava and Martin Singull LiTH-MAT-R--2015/03--SE Department of Mathematics Linköping

More information

Stat 5101 Lecture Notes

Stat 5101 Lecture Notes Stat 5101 Lecture Notes Charles J. Geyer Copyright 1998, 1999, 2000, 2001 by Charles J. Geyer May 7, 2001 ii Stat 5101 (Geyer) Course Notes Contents 1 Random Variables and Change of Variables 1 1.1 Random

More information

Journal of Multivariate Analysis. Use of prior information in the consistent estimation of regression coefficients in measurement error models

Journal of Multivariate Analysis. Use of prior information in the consistent estimation of regression coefficients in measurement error models Journal of Multivariate Analysis 00 (2009) 498 520 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva Use of prior information in

More information

Basic Concepts in Matrix Algebra

Basic Concepts in Matrix Algebra Basic Concepts in Matrix Algebra An column array of p elements is called a vector of dimension p and is written as x p 1 = x 1 x 2. x p. The transpose of the column vector x p 1 is row vector x = [x 1

More information

Journal of Asian Scientific Research COMBINED PARAMETERS ESTIMATION METHODS OF LINEAR REGRESSION MODEL WITH MULTICOLLINEARITY AND AUTOCORRELATION

Journal of Asian Scientific Research COMBINED PARAMETERS ESTIMATION METHODS OF LINEAR REGRESSION MODEL WITH MULTICOLLINEARITY AND AUTOCORRELATION Journal of Asian Scientific Research ISSN(e): 3-1331/ISSN(p): 6-574 journal homepage: http://www.aessweb.com/journals/5003 COMBINED PARAMETERS ESTIMATION METHODS OF LINEAR REGRESSION MODEL WITH MULTICOLLINEARITY

More information

Evaluation of a New Estimator

Evaluation of a New Estimator Pertanika J. Sci. & Technol. 16 (2): 107-117 (2008) ISSN: 0128-7680 Universiti Putra Malaysia Press Evaluation of a New Estimator Ng Set Foong 1 *, Low Heng Chin 2 and Quah Soon Hoe 3 1 Department of Information

More information

Solutions of a constrained Hermitian matrix-valued function optimization problem with applications

Solutions of a constrained Hermitian matrix-valued function optimization problem with applications Solutions of a constrained Hermitian matrix-valued function optimization problem with applications Yongge Tian CEMA, Central University of Finance and Economics, Beijing 181, China Abstract. Let f(x) =

More information

Chapter 14 Stein-Rule Estimation

Chapter 14 Stein-Rule Estimation Chapter 14 Stein-Rule Estimation The ordinary least squares estimation of regression coefficients in linear regression model provides the estimators having minimum variance in the class of linear and unbiased

More information

REGRESSION WITH SPATIALLY MISALIGNED DATA. Lisa Madsen Oregon State University David Ruppert Cornell University

REGRESSION WITH SPATIALLY MISALIGNED DATA. Lisa Madsen Oregon State University David Ruppert Cornell University REGRESSION ITH SPATIALL MISALIGNED DATA Lisa Madsen Oregon State University David Ruppert Cornell University SPATIALL MISALIGNED DATA 10 X X X X X X X X 5 X X X X X 0 X 0 5 10 OUTLINE 1. Introduction 2.

More information

304 A^VÇÚO 1n ò while the commonly employed loss function for the precision of estimation is squared error loss function ( β β) ( β β) (1.3) or weight

304 A^VÇÚO 1n ò while the commonly employed loss function for the precision of estimation is squared error loss function ( β β) ( β β) (1.3) or weight A^VÇÚO 1n ò 1nÏ 2014c6 Chinese Journal of Applied Probability and Statistics Vol.30 No.3 Jun. 2014 The Best Linear Unbiased Estimation of Regression Coefficient under Weighted Balanced Loss Function Fang

More information

PENALIZING YOUR MODELS

PENALIZING YOUR MODELS PENALIZING YOUR MODELS AN OVERVIEW OF THE GENERALIZED REGRESSION PLATFORM Michael Crotty & Clay Barker Research Statisticians JMP Division, SAS Institute Copyr i g ht 2012, SAS Ins titut e Inc. All rights

More information

A property of orthogonal projectors

A property of orthogonal projectors Linear Algebra and its Applications 354 (2002) 35 39 www.elsevier.com/locate/laa A property of orthogonal projectors Jerzy K. Baksalary a,, Oskar Maria Baksalary b,tomaszszulc c a Department of Mathematics,

More information

ECE 275A Homework 6 Solutions

ECE 275A Homework 6 Solutions ECE 275A Homework 6 Solutions. The notation used in the solutions for the concentration (hyper) ellipsoid problems is defined in the lecture supplement on concentration ellipsoids. Note that θ T Σ θ =

More information

Infinitely Imbalanced Logistic Regression

Infinitely Imbalanced Logistic Regression p. 1/1 Infinitely Imbalanced Logistic Regression Art B. Owen Journal of Machine Learning Research, April 2007 Presenter: Ivo D. Shterev p. 2/1 Outline Motivation Introduction Numerical Examples Notation

More information

ECE521 week 3: 23/26 January 2017

ECE521 week 3: 23/26 January 2017 ECE521 week 3: 23/26 January 2017 Outline Probabilistic interpretation of linear regression - Maximum likelihood estimation (MLE) - Maximum a posteriori (MAP) estimation Bias-variance trade-off Linear

More information

a Λ q 1. Introduction

a Λ q 1. Introduction International Journal of Pure and Applied Mathematics Volume 9 No 26, 959-97 ISSN: -88 (printed version); ISSN: -95 (on-line version) url: http://wwwijpameu doi: 272/ijpamv9i7 PAijpameu EXPLICI MOORE-PENROSE

More information

The Algebra of the Kronecker Product. Consider the matrix equation Y = AXB where

The Algebra of the Kronecker Product. Consider the matrix equation Y = AXB where 21 : CHAPTER Seemingly-Unrelated Regressions The Algebra of the Kronecker Product Consider the matrix equation Y = AXB where Y =[y kl ]; k =1,,r,l =1,,s, (1) X =[x ij ]; i =1,,m,j =1,,n, A=[a ki ]; k =1,,r,i=1,,m,

More information

2. Matrix Algebra and Random Vectors

2. Matrix Algebra and Random Vectors 2. Matrix Algebra and Random Vectors 2.1 Introduction Multivariate data can be conveniently display as array of numbers. In general, a rectangular array of numbers with, for instance, n rows and p columns

More information

Binary choice 3.3 Maximum likelihood estimation

Binary choice 3.3 Maximum likelihood estimation Binary choice 3.3 Maximum likelihood estimation Michel Bierlaire Output of the estimation We explain here the various outputs from the maximum likelihood estimation procedure. Solution of the maximum likelihood

More information

Online Appendix. j=1. φ T (ω j ) vec (EI T (ω j ) f θ0 (ω j )). vec (EI T (ω) f θ0 (ω)) = O T β+1/2) = o(1), M 1. M T (s) exp ( isω)

Online Appendix. j=1. φ T (ω j ) vec (EI T (ω j ) f θ0 (ω j )). vec (EI T (ω) f θ0 (ω)) = O T β+1/2) = o(1), M 1. M T (s) exp ( isω) Online Appendix Proof of Lemma A.. he proof uses similar arguments as in Dunsmuir 979), but allowing for weak identification and selecting a subset of frequencies using W ω). It consists of two steps.

More information

Ridge Estimation and its Modifications for Linear Regression with Deterministic or Stochastic Predictors

Ridge Estimation and its Modifications for Linear Regression with Deterministic or Stochastic Predictors Ridge Estimation and its Modifications for Linear Regression with Deterministic or Stochastic Predictors James Younker Thesis submitted to the Faculty of Graduate and Postdoctoral Studies in partial fulfillment

More information

High-dimensional regression modeling

High-dimensional regression modeling High-dimensional regression modeling David Causeur Department of Statistics and Computer Science Agrocampus Ouest IRMAR CNRS UMR 6625 http://www.agrocampus-ouest.fr/math/causeur/ Course objectives Making

More information

Matrices and Multivariate Statistics - II

Matrices and Multivariate Statistics - II Matrices and Multivariate Statistics - II Richard Mott November 2011 Multivariate Random Variables Consider a set of dependent random variables z = (z 1,..., z n ) E(z i ) = µ i cov(z i, z j ) = σ ij =

More information

Summer School in Statistics for Astronomers V June 1 - June 6, Regression. Mosuk Chow Statistics Department Penn State University.

Summer School in Statistics for Astronomers V June 1 - June 6, Regression. Mosuk Chow Statistics Department Penn State University. Summer School in Statistics for Astronomers V June 1 - June 6, 2009 Regression Mosuk Chow Statistics Department Penn State University. Adapted from notes prepared by RL Karandikar Mean and variance Recall

More information

Lasso Maximum Likelihood Estimation of Parametric Models with Singular Information Matrices

Lasso Maximum Likelihood Estimation of Parametric Models with Singular Information Matrices Article Lasso Maximum Likelihood Estimation of Parametric Models with Singular Information Matrices Fei Jin 1,2 and Lung-fei Lee 3, * 1 School of Economics, Shanghai University of Finance and Economics,

More information

Linear Models in Machine Learning

Linear Models in Machine Learning CS540 Intro to AI Linear Models in Machine Learning Lecturer: Xiaojin Zhu jerryzhu@cs.wisc.edu We briefly go over two linear models frequently used in machine learning: linear regression for, well, regression,

More information

This model of the conditional expectation is linear in the parameters. A more practical and relaxed attitude towards linear regression is to say that

This model of the conditional expectation is linear in the parameters. A more practical and relaxed attitude towards linear regression is to say that Linear Regression For (X, Y ) a pair of random variables with values in R p R we assume that E(Y X) = β 0 + with β R p+1. p X j β j = (1, X T )β j=1 This model of the conditional expectation is linear

More information

MS&E 226: Small Data. Lecture 11: Maximum likelihood (v2) Ramesh Johari

MS&E 226: Small Data. Lecture 11: Maximum likelihood (v2) Ramesh Johari MS&E 226: Small Data Lecture 11: Maximum likelihood (v2) Ramesh Johari ramesh.johari@stanford.edu 1 / 18 The likelihood function 2 / 18 Estimating the parameter This lecture develops the methodology behind

More information

PRINCIPAL COMPONENTS ANALYSIS

PRINCIPAL COMPONENTS ANALYSIS 121 CHAPTER 11 PRINCIPAL COMPONENTS ANALYSIS We now have the tools necessary to discuss one of the most important concepts in mathematical statistics: Principal Components Analysis (PCA). PCA involves

More information

The Statistical Property of Ordinary Least Squares

The Statistical Property of Ordinary Least Squares The Statistical Property of Ordinary Least Squares The linear equation, on which we apply the OLS is y t = X t β + u t Then, as we have derived, the OLS estimator is ˆβ = [ X T X] 1 X T y Then, substituting

More information

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction

ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES. 1. Introduction Acta Math. Univ. Comenianae Vol. LXV, 1(1996), pp. 129 139 129 ON VARIANCE COVARIANCE COMPONENTS ESTIMATION IN LINEAR MODELS WITH AR(1) DISTURBANCES V. WITKOVSKÝ Abstract. Estimation of the autoregressive

More information

Bayesian Estimation of Regression Coefficients Under Extended Balanced Loss Function

Bayesian Estimation of Regression Coefficients Under Extended Balanced Loss Function Communications in Statistics Theory and Methods, 43: 4253 4264, 2014 Copyright Taylor & Francis Group, LLC ISSN: 0361-0926 print / 1532-415X online DOI: 10.1080/03610926.2012.725498 Bayesian Estimation

More information

The Precise Effect of Multicollinearity on Classification Prediction

The Precise Effect of Multicollinearity on Classification Prediction Multicollinearity and Classification Prediction The Precise Effect of Multicollinearity on Classification Prediction Mary G. Lieberman John D. Morris Florida Atlantic University The results of Morris and

More information

Research Article On the Hermitian R-Conjugate Solution of a System of Matrix Equations

Research Article On the Hermitian R-Conjugate Solution of a System of Matrix Equations Applied Mathematics Volume 01, Article ID 398085, 14 pages doi:10.1155/01/398085 Research Article On the Hermitian R-Conjugate Solution of a System of Matrix Equations Chang-Zhou Dong, 1 Qing-Wen Wang,

More information

Working Paper No Maximum score type estimators

Working Paper No Maximum score type estimators Warsaw School of Economics Institute of Econometrics Department of Applied Econometrics Department of Applied Econometrics Working Papers Warsaw School of Economics Al. iepodleglosci 64 02-554 Warszawa,

More information

Problem Set #6: OLS. Economics 835: Econometrics. Fall 2012

Problem Set #6: OLS. Economics 835: Econometrics. Fall 2012 Problem Set #6: OLS Economics 835: Econometrics Fall 202 A preliminary result Suppose we have a random sample of size n on the scalar random variables (x, y) with finite means, variances, and covariance.

More information

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES olume 10 2009, Issue 2, Article 41, 10 pp. ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG, AND HUA SHAO COLLEGE OF MATHEMATICS AND PHYSICS CHONGQING UNIERSITY

More information

Efficient Choice of Biasing Constant. for Ridge Regression

Efficient Choice of Biasing Constant. for Ridge Regression Int. J. Contemp. Math. Sciences, Vol. 3, 008, no., 57-536 Efficient Choice of Biasing Constant for Ridge Regression Sona Mardikyan* and Eyüp Çetin Department of Management Information Systems, School of

More information

Regression. Oscar García

Regression. Oscar García Regression Oscar García Regression methods are fundamental in Forest Mensuration For a more concise and general presentation, we shall first review some matrix concepts 1 Matrices An order n m matrix is

More information

A new algebraic analysis to linear mixed models

A new algebraic analysis to linear mixed models A new algebraic analysis to linear mixed models Yongge Tian China Economics and Management Academy, Central University of Finance and Economics, Beijing 100081, China Abstract. This article presents a

More information

CALCULATION METHOD FOR NONLINEAR DYNAMIC LEAST-ABSOLUTE DEVIATIONS ESTIMATOR

CALCULATION METHOD FOR NONLINEAR DYNAMIC LEAST-ABSOLUTE DEVIATIONS ESTIMATOR J. Japan Statist. Soc. Vol. 3 No. 200 39 5 CALCULAION MEHOD FOR NONLINEAR DYNAMIC LEAS-ABSOLUE DEVIAIONS ESIMAOR Kohtaro Hitomi * and Masato Kagihara ** In a nonlinear dynamic model, the consistency and

More information

Parameter Estimation

Parameter Estimation Parameter Estimation Consider a sample of observations on a random variable Y. his generates random variables: (y 1, y 2,, y ). A random sample is a sample (y 1, y 2,, y ) where the random variables y

More information

FULL LIKELIHOOD INFERENCES IN THE COX MODEL

FULL LIKELIHOOD INFERENCES IN THE COX MODEL October 20, 2007 FULL LIKELIHOOD INFERENCES IN THE COX MODEL BY JIAN-JIAN REN 1 AND MAI ZHOU 2 University of Central Florida and University of Kentucky Abstract We use the empirical likelihood approach

More information

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1

Inverse of a Square Matrix. For an N N square matrix A, the inverse of A, 1 Inverse of a Square Matrix For an N N square matrix A, the inverse of A, 1 A, exists if and only if A is of full rank, i.e., if and only if no column of A is a linear combination 1 of the others. A is

More information

Lecture 1: OLS derivations and inference

Lecture 1: OLS derivations and inference Lecture 1: OLS derivations and inference Econometric Methods Warsaw School of Economics (1) OLS 1 / 43 Outline 1 Introduction Course information Econometrics: a reminder Preliminary data exploration 2

More information

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications

Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Optimization problems on the rank and inertia of the Hermitian matrix expression A BX (BX) with applications Yongge Tian China Economics and Management Academy, Central University of Finance and Economics,

More information

Classification 2: Linear discriminant analysis (continued); logistic regression

Classification 2: Linear discriminant analysis (continued); logistic regression Classification 2: Linear discriminant analysis (continued); logistic regression Ryan Tibshirani Data Mining: 36-462/36-662 April 4 2013 Optional reading: ISL 4.4, ESL 4.3; ISL 4.3, ESL 4.4 1 Reminder:

More information

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015 Part IB Statistics Theorems with proof Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Analytical formulas for calculating the extremal ranks and inertias of A + BXB when X is a fixed-rank Hermitian matrix

Analytical formulas for calculating the extremal ranks and inertias of A + BXB when X is a fixed-rank Hermitian matrix Analytical formulas for calculating the extremal ranks and inertias of A + BXB when X is a fixed-rank Hermitian matrix Yongge Tian CEMA, Central University of Finance and Economics, Beijing 100081, China

More information

More on generalized inverses of partitioned matrices with Banachiewicz-Schur forms

More on generalized inverses of partitioned matrices with Banachiewicz-Schur forms More on generalized inverses of partitioned matrices wit anaciewicz-scur forms Yongge Tian a,, Yosio Takane b a Cina Economics and Management cademy, Central University of Finance and Economics, eijing,

More information