PARTIAL RIDGE REGRESSION [1]

by

D. Raghavarao [2] and K. J. C. Smith
Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 863
February, 1973

[1] This work was supported by NSF Grants GU-2059 and GU and by U.S. Air Force Grant No. AFOSR-68-1415. Reproduction in whole or in part is permitted for any purpose of the United States Government.
[2] On leave from Punjab Agricultural University (India).

ABSTRACT

A partial ridge estimator is proposed as a modification of the Hoerl and Kennard ridge regression estimator. It is shown that the proposed estimator has certain advantages over the ridge estimator. The problem of taking an additional observation to meet certain optimality criteria is also discussed.
1. Introduction. Consider the problem of fitting the linear model $y = X\beta + \epsilon$, where $y' = (y_1, y_2, \ldots, y_n)$ is a vector of $n$ observations on the dependent variable; $X = (x_{ij})$ is an $n \times p$ matrix of rank $p$, $x_i' = (x_{i1}, x_{i2}, \ldots, x_{ip})$ being the vector of $i$-th observations on the independent variables ($i = 1, 2, \ldots, n$); $\beta' = (\beta_1, \beta_2, \ldots, \beta_p)$ is the vector of parameters to be estimated; and $\epsilon$ is an $n$-dimensional vector of random errors assumed to be distributed with mean vector $0$ and dispersion matrix $\sigma^2 I_n$, $0$ being a zero vector and $I_n$ the identity matrix of order $n$. Without loss of generality we assume that the dependent and independent variables are standardized so that $X'X$ is a correlation matrix. Let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p$ be the eigenvalues of $X'X$ and let $\xi_1, \xi_2, \ldots, \xi_p$ be a set of orthonormal eigenvectors associated with the eigenvalues $\lambda_i$ ($i = 1, 2, \ldots, p$). Let $\alpha_i = \xi_i'\beta$ for $i = 1, 2, \ldots, p$.

The usual least squares estimator of $\beta$ is given by

(1.1)   $\hat{\beta} = (X'X)^{-1} X'y$

and has the unsatisfactory property, when $X'X$ differs substantially from an identity matrix, that the mean squared error, or expected squared distance from $\hat{\beta}$ to $\beta$, tends to be large compared to that of an orthogonal system. Often an investigator is interested in obtaining a variance balanced design in which each parameter $\beta_i$ is estimated with equal precision. The departure of a design from variance balancedness increases the more $X'X$ differs from an identity matrix.

The ridge regression method proposed by Hoerl and Kennard (1970) estimates $\beta$ by the ridge estimator given by

(1.2)   $\beta^* = (X'X + k I_p)^{-1} X'y$,

where $k$ is a positive real number satisfying

(1.3)   $k < \sigma^2 / \alpha_{\max}^2$,

$\alpha_{\max}$ being the maximum of $\alpha_i$ ($i = 1, 2, \ldots, p$). The estimator $\beta^*$ is a biased estimator of $\beta$ but has a smaller mean squared error than the least squares estimator $\hat{\beta}$.
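A computational aside (not part of the original paper): the estimators (1.1) and (1.2) can be sketched in a few lines. The example below assumes a standardized design, so that $X'X$ is a correlation matrix, and the ridge constant $k$ is simply supplied by the user.

```python
import numpy as np

def least_squares(X, y):
    """Least squares estimator (1.1): (X'X)^{-1} X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    """Hoerl-Kennard ridge estimator (1.2): (X'X + k I_p)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Small illustration with a nearly collinear design.
rng = np.random.default_rng(0)
n, p = 50, 4
Z = rng.standard_normal((n, p))
Z[:, 3] = Z[:, 2] + 0.05 * rng.standard_normal(n)         # near-duplicate column
X = (Z - Z.mean(axis=0)) / (Z.std(axis=0) * np.sqrt(n))   # standardized: X'X is a correlation matrix
y = X @ np.array([1.0, -1.0, 0.5, 0.5]) + 0.1 * rng.standard_normal(n)

print("eigenvalues of X'X:", np.linalg.eigvalsh(X.T @ X))
print("least squares:", least_squares(X, y))
print("ridge (k=0.1):", ridge(X, y, k=0.1))
```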
We propose here, as an alternative to the ridge estimator $\beta^*$, the estimator

(1.4)   $\beta_p = (X'X + k_p \xi_p \xi_p')^{-1} X'y$,

where

(1.5)   $k_p = \sigma^2 / \alpha_p^2$.

This estimator may be called the partial ridge estimator of $\beta$. We show in Section 2 that the partial ridge estimator estimates $\xi_p'\beta$ with minimum mean squared error and estimates $\xi_i'\beta$ ($i = 1, 2, \ldots, p-1$) unbiasedly. In Section 3 we consider the problem of taking an additional observation so as to remove the bias of the partial ridge estimator and to attain certain optimality criteria.
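As a small illustration (my sketch, not from the paper), the partial ridge estimator (1.4)-(1.5) adds the ridge penalty only along the eigenvector $\xi_p$ belonging to the smallest eigenvalue. Here $\sigma^2$ and $\alpha_p$ are treated as known, whereas in practice $k_p$ would be chosen graphically or iteratively as noted in Section 2.

```python
import numpy as np

def partial_ridge(X, y, sigma2, alpha_p):
    """Partial ridge estimator (1.4): (X'X + k_p xi_p xi_p')^{-1} X'y,
    with k_p = sigma^2 / alpha_p^2 as in (1.5)."""
    S = X.T @ X
    lam, Q = np.linalg.eigh(S)     # eigenvalues in ascending order
    xi_p = Q[:, 0]                 # eigenvector of the smallest eigenvalue, lambda_p
    k_p = sigma2 / alpha_p**2
    return np.linalg.solve(S + k_p * np.outer(xi_p, xi_p), X.T @ y)
```

Only the direction $\xi_p$ is shrunk; the remaining $p - 1$ eigen-directions are handled exactly as in least squares, which is the content of Theorem 2.1 below.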
2. Partial Ridge Estimator. To control the mean squared error of the estimator of the coefficient vector $\beta$ in the model $y = X\beta + \epsilon$, Hoerl and Kennard (1970) proposed the ridge estimator $\beta^*$ defined by (1.2) and showed that the mean squared error of $\beta^*$ is less than that of the least squares estimator $\hat{\beta}$ of $\beta$. Specifically, the mean squared error of $\beta^*$ is

(2.1)   $E[(\beta^* - \beta)'(\beta^* - \beta)] = \sigma^2 \sum_{i=1}^{p} \frac{\lambda_i}{(\lambda_i + k)^2} + k^2 \sum_{i=1}^{p} \frac{\alpha_i^2}{(\lambda_i + k)^2} = \gamma_1(k) + \gamma_2(k)$, say,

where $E[\cdot]$ denotes the expected value of the term in brackets. The term $\gamma_1(k)$ is the sum of the variances of the components of $\beta^*$ and the term $\gamma_2(k)$ is the bias component of the mean squared error. When $k = 0$, the ridge estimator coincides with the least squares estimator.

We propose as an alternative to the ridge estimator of $\beta$ a partial ridge estimator of $\beta$, denoted by $\beta_p$ and defined by (1.4). The partial ridge estimator has the following property:

Theorem 2.1. The partial ridge estimator $\beta_p = (X'X + k_p \xi_p \xi_p')^{-1} X'y$, where $k_p = \sigma^2/\alpha_p^2$, is such that $\xi_p'\beta_p$ is the linear estimator of $\xi_p'\beta$ with minimum mean squared error and $\xi_i'\beta_p$ is the best linear unbiased estimator of $\xi_i'\beta$ ($i = 1, 2, \ldots, p-1$).

Proof. Since $\xi_1, \xi_2, \ldots, \xi_p$ are a set of orthonormal eigenvectors associated with the eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p$ of $X'X$, the vectors $X\xi_i$ will be eigenvectors associated with the eigenvalues $\lambda_i$ of $XX'$ ($i = 1, 2, \ldots, p$). Let $\eta_1, \eta_2, \ldots, \eta_{n-p}$ be a set of orthogonal eigenvectors associated with the zero eigenvalue of multiplicity $n - p$ of $XX'$. The vectors $X\xi_i$ ($i = 1, 2, \ldots, p$) and $\eta_j$ ($j = 1, 2, \ldots, n-p$) form a basis of an $n$-dimensional vector space. Without loss of generality any linear estimator of $t = \xi_p'\beta$ can be taken to be

(2.2)   $\hat{t} = \sum_{i=1}^{p} c_i \xi_i' X'y + \sum_{j=1}^{n-p} d_j \eta_j' y$,

where the $c_i$ ($i = 1, \ldots, p$) and $d_j$ ($j = 1, \ldots, n-p$) are scalars. The mean squared error of $\hat{t}$ as an estimator of $t$ can be shown to be

(2.3)   $E[(\hat{t} - t)^2] = \sum_{i=1}^{p-1} c_i^2 (\lambda_i^2 \alpha_i^2 + \sigma^2 \lambda_i) + (c_p \lambda_p - 1)^2 \alpha_p^2 + \sigma^2 c_p^2 \lambda_p + \sigma^2 \sum_{j=1}^{n-p} d_j^2$.

Minimizing (2.3) with respect to the coefficients $c_i$ and $d_j$ we have
(2.4)   $c_1 = \cdots = c_{p-1} = d_1 = \cdots = d_{n-p} = 0$,   $c_p = \dfrac{1}{\lambda_p + \sigma^2/\alpha_p^2}$.

Choosing $k_p = \sigma^2/\alpha_p^2$, the linear estimator of $\xi_p'\beta$ with least mean squared error is therefore $(\lambda_p + k_p)^{-1} \xi_p' X'y$. The best linear unbiased estimators of $\xi_i'\beta$ are the least squares estimators $\lambda_i^{-1} \xi_i' X'y$ ($i = 1, 2, \ldots, p-1$). Making a one-to-one correspondence of estimators of $\xi_i'\beta$ with estimators of $\beta_i$, the required estimator $\beta_p$ of $\beta$ is given by

$\beta_p = \Big( \sum_{i=1}^{p-1} \lambda_i^{-1} \xi_i \xi_i' + (\lambda_p + k_p)^{-1} \xi_p \xi_p' \Big) X'y = (X'X + k_p \xi_p \xi_p')^{-1} X'y$.

This completes the proof of Theorem 2.1.

The problem of estimating $k_p$ can be solved by either graphical or iterative procedures, as described by Hoerl and Kennard (1970). From (2.1) and (2.3) we note that the bias component in the mean squared error of the partial ridge estimator is smaller than that of the ridge estimator.
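As a quick numerical check of the identity used at the end of the proof (my own sketch, with an arbitrary full-rank design standing in for $X$), the inverse of $X'X + k_p \xi_p \xi_p'$ agrees with the spectral form above, so that $\xi_i'\beta_p$ coincides with the least squares estimate of $\xi_i'\beta$ for every $i < p$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 5
X = rng.standard_normal((n, p))
S = X.T @ X
lam, Q = np.linalg.eigh(S)               # ascending, so lam[0] is lambda_p (smallest)
xi_p, lam_p = Q[:, 0], lam[0]
k_p = 0.7                                # plays the role of sigma^2 / alpha_p^2

# (X'X + k_p xi_p xi_p')^{-1} equals the spectral form used in the proof:
lhs = np.linalg.inv(S + k_p * np.outer(xi_p, xi_p))
rhs = sum(np.outer(Q[:, i], Q[:, i]) / lam[i] for i in range(1, p)) \
      + np.outer(xi_p, xi_p) / (lam_p + k_p)
print(np.allclose(lhs, rhs))             # True

# Hence xi_i' beta_p matches least squares in every eigen-direction except xi_p:
y = rng.standard_normal(n)
beta_ls = np.linalg.solve(S, X.T @ y)
beta_p = lhs @ (X.T @ y)
print(np.allclose(Q[:, 1:].T @ beta_p, Q[:, 1:].T @ beta_ls))   # True
```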
3. Optimum choice of an additional observation. The equation (1.4) defining the partial ridge estimator suggests taking an additional observation $y_{n+1}$ on the dependent variable corresponding to some choice of values of the independent variables. Let us assume without loss of generality that the design matrix with an additional observation is

(3.1)   $X_1 = \begin{pmatrix} X \\ \sqrt{w}\, x_{n+1}' \end{pmatrix}$,

where $x_{n+1}' x_{n+1} = 1$ and $w$ is a non-zero scalar, so that $X_1'X_1 = X'X + w\, x_{n+1} x_{n+1}'$. The least squares estimator of $\beta$ using the additional observation is

(3.2)   $\hat{\beta}_1 = (X'X + w\, x_{n+1} x_{n+1}')^{-1} X_1' y_1$,

which is an unbiased estimator of $\beta$. Before discussing the optimal choice of the additional observation, we shall introduce the following:

Definition 3.1. Let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p$ be the eigenvalues of $X'X$, where $X$ is an $n \times p$ design matrix. The departure from variance balancedness of the design $X$ is measured by

(3.3)   $Q(X) = \sum_{i=1}^{p} (\lambda_i - \bar{\lambda})^2$,   where $\bar{\lambda} = \frac{1}{p}\sum_{i=1}^{p} \lambda_i$.

An equivalent expression for $Q(X)$ is

(3.4)   $Q(X) = \mathrm{tr}[(X'X)^2] - \frac{1}{p}\,(\mathrm{tr}[X'X])^2$,

where $\mathrm{tr}[A]$ denotes the trace of the matrix $A$.

Definition 3.2. [Kiefer (1959)] Of the class of all $n \times p$ design matrices $X$, the design $X$ is A-optimal if $\mathrm{tr}[(X'X)^{-1}]$ is minimum.

Definition 3.3. [Kiefer (1959)] Of the class of all $n \times p$ design matrices $X$, the design $X$ is D-optimal if $\det[(X'X)^{-1}]$ is minimum, where $\det[\cdot]$ denotes the determinant of the matrix in brackets.
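These three criteria are easy to compute directly; the sketch below (mine, not part of the paper) evaluates $Q(X)$ through both (3.3) and (3.4), together with the A- and D-criteria of Definitions 3.2 and 3.3.

```python
import numpy as np

def balance_Q(X):
    """Departure from variance balancedness, Definition 3.1."""
    S = X.T @ X
    lam = np.linalg.eigvalsh(S)
    via_3_3 = np.sum((lam - lam.mean())**2)                      # (3.3)
    via_3_4 = np.trace(S @ S) - np.trace(S)**2 / X.shape[1]      # (3.4)
    assert np.isclose(via_3_3, via_3_4)
    return via_3_3

def A_criterion(X):
    """A-optimality criterion (Definition 3.2): tr[(X'X)^{-1}], smaller is better."""
    return np.trace(np.linalg.inv(X.T @ X))

def D_criterion(X):
    """D-optimality criterion (Definition 3.3): det[(X'X)^{-1}], smaller is better."""
    return 1.0 / np.linalg.det(X.T @ X)
```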
The following theorem gives the optimum choice of $w$ and $x_{n+1}$ for an additional observation:

Theorem 3.1. Given the $n \times p$ design matrix $X$, among possible choices of $w$ and $x_{n+1}$ in (3.1), the design

(3.5)   $X^* = \begin{pmatrix} X \\ \sqrt{w}\, \xi_p' \end{pmatrix}$,   with $w$ given by (3.7) below,

has the following properties:

(i) $Q(X^*) < Q(X)$;
(ii) among the class of designs $X_1$ in (3.1), $Q(X^*) \le Q(X_1)$;
(iii) among the class of designs $X_1$ in (3.1) and subject to $Q(X_1)$ minimum, $X^*$ is A- and D-optimal.

Proof. For the design $X_1$ of (3.1),

(3.6)   $Q(X_1) = Q(X) + 2w\, x_{n+1}' X'X\, x_{n+1} + w^2 (1 - p^{-1}) - 2w \bar{\lambda}$.

The quadratic form $x_{n+1}'(X'X)x_{n+1}$ is minimized when $x_{n+1} = \xi_p$, and the minimum value is $\lambda_p$. Substituting this least value of $x_{n+1}' X'X\, x_{n+1}$ in (3.6) and minimizing with respect to $w$, we obtain the stationary value of $w$ to be

(3.7)   $w = \dfrac{\bar{\lambda} - \lambda_p}{1 - p^{-1}}$.
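The expansion (3.6) and the effect of the stationary value (3.7) can be checked numerically; the sketch below (my own, using a random design and writing $S = X'X$) compares $Q$ computed directly from the augmented matrix $S + w\, x_{n+1} x_{n+1}'$ with the right-hand side of (3.6).

```python
import numpy as np

def Q_of(S):
    """Q from Definition 3.1, computed from the eigenvalues of S = X'X."""
    lam = np.linalg.eigvalsh(S)
    return np.sum((lam - lam.mean())**2)

rng = np.random.default_rng(2)
p = 4
X = rng.standard_normal((20, p))
S = X.T @ X
lam, V = np.linalg.eigh(S)                          # ascending order
lam_bar, lam_p, xi_p = lam.mean(), lam[0], V[:, 0]

w = 1.3                                             # an arbitrary non-zero scalar
x_new = rng.standard_normal(p)
x_new /= np.linalg.norm(x_new)                      # unit-length x_{n+1}

lhs = Q_of(S + w * np.outer(x_new, x_new))                                   # Q(X_1) directly
rhs = Q_of(S) + 2*w*(x_new @ S @ x_new) + w**2*(1 - 1/p) - 2*w*lam_bar       # (3.6)
print(np.isclose(lhs, rhs))                         # True

w_star = (lam_bar - lam_p) / (1 - 1/p)              # (3.7), with x_{n+1} = xi_p
print(Q_of(S + w_star * np.outer(xi_p, xi_p)) < Q_of(S))                     # True: Q is reduced
```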
Substituting this value into (3.6), the minimum value of $Q(X_1)$ is

(3.8)   $Q_{\min}(X_1) = Q(X^*) = Q(X) - \dfrac{(\bar{\lambda} - \lambda_p)^2}{1 - p^{-1}}$.

Thus $Q(X^*) < Q(X)$. Moreover, $Q(X^*)$ is the minimum value of $Q(X_1)$.

Now

(3.9)   $\det[X_1'X_1] = \det[X'X]\,\big(1 + w\, x_{n+1}'(X'X)^{-1} x_{n+1}\big)$.

The maximum value of $x_{n+1}'(X'X)^{-1} x_{n+1}$ is $1/\lambda_p$, attained when $x_{n+1} = \xi_p$. In order that $Q(X_1)$ be least, $w$ must be given by (3.7). Hence $X^*$ is D-optimal among the class of designs $X_1$ with minimum $Q(X_1)$.

To prove the A-optimality of $X^*$ among the class of designs $X_1$ with minimum $Q(X_1)$, we observe that

(3.10)   $\mathrm{tr}[(X_1'X_1)^{-1}] = \mathrm{tr}[(X'X)^{-1}] - \dfrac{w\, x_{n+1}'(X'X)^{-2} x_{n+1}}{1 + w\, x_{n+1}'(X'X)^{-1} x_{n+1}}$.

The maximum value of the second term on the right-hand side of (3.10) is the maximum of

(3.11)   $\mu = \dfrac{1}{\lambda\,(\lambda/w + 1)}$,

where the $\lambda$'s are the eigenvalues of $X'X$. In order that $Q(X_1)$ be least, $w$ is given by (3.7), and the maximum $\mu$ is attained when $\lambda = \lambda_p$ and $x_{n+1} = \xi_p$. Thus $X^*$ is A-optimal among the class of $X_1$ matrices with minimum $Q(X_1)$.
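The rank-one update identities (3.9) and (3.10) and the minimum value (3.8) are easy to confirm numerically; the sketch below (mine, not the authors') does so for a random design, using the Sherman-Morrison form of the updated inverse for (3.10).

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 25, 4
X = rng.standard_normal((n, p))
S = X.T @ X
S_inv = np.linalg.inv(S)
lam, V = np.linalg.eigh(S)                     # ascending order
lam_bar, lam_p, xi_p = lam.mean(), lam[0], V[:, 0]

def Q_of(S):
    lam = np.linalg.eigvalsh(S)
    return np.sum((lam - lam.mean())**2)

# (3.8): minimum of Q(X_1), attained at x_{n+1} = xi_p with w from (3.7)
w = (lam_bar - lam_p) / (1 - 1/p)
S_star = S + w * np.outer(xi_p, xi_p)
print(np.isclose(Q_of(S_star), Q_of(S) - (lam_bar - lam_p)**2 / (1 - 1/p)))       # True

# (3.9): determinant identity for the rank-one update
x = rng.standard_normal(p)
x /= np.linalg.norm(x)
S1 = S + w * np.outer(x, x)
print(np.isclose(np.linalg.det(S1), np.linalg.det(S) * (1 + w * x @ S_inv @ x)))  # True

# (3.10): trace of the inverse after the rank-one update
lhs = np.trace(np.linalg.inv(S1))
rhs = np.trace(S_inv) - w * (x @ S_inv @ S_inv @ x) / (1 + w * x @ S_inv @ x)
print(np.isclose(lhs, rhs))                                                       # True
```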
References

Hoerl, Arthur E. and Kennard, Robert W. (1970). "Ridge Regression: Biased Estimation for Nonorthogonal Problems." Technometrics, 12.

Kiefer, J. (1959). "Optimum Experimental Designs." J. Roy. Statist. Soc. B, 21.