Estimation of Unique Variances Using G-inverse Matrix in Factor Analysis
International Mathematical Forum, 3, 2008, no. 14

Seval Süzülmüş
Osmaniye Korkut Ata University
Vocational High School of Osmaniye
Osmaniye, Turkey

Sadullah Sakallıoğlu
Çukurova University
Faculty of Science and Letters, Department of Statistics
Adana, Turkey

Abstract

The estimation of parameters is an important phase of factor analysis and has attracted several researchers. All the standard methods assume, when identifying the parameters, that the variance-covariance matrix Σ is positive definite. A few studies use a generalized inverse (g-inverse) when Σ is not positive definite [3, 6]. Accordingly, we define estimators of the unique variances in factor analysis for the case where the population variance-covariance (correlation) matrix is non-negative definite.

Mathematics Subject Classification: 15A09, 62H25

Keywords: Factor analysis, unique variance, g-inverse

1 Introduction

The observable random vector x, with p components, has mean μ and covariance matrix Σ. Under the factor analysis model, x can be written in the form

    x = μ + Λf + e
where Λ = (λ_ij) is a p × k matrix of factor loadings, and f = (f_1, f_2, ..., f_k)′ and e = (e_1, e_2, ..., e_p)′ are unobservable random vectors. The elements of f and e are called the common factors and the unique factors, respectively. We assume that the means of the elements of f and e are zero, that E(ff′) = I_k and E(ee′) = Ψ, where I_k is the identity matrix of order k and Ψ is a diagonal matrix whose diagonal elements Ψ_j (> 0) are called the unique variances. It will furthermore be assumed that E(fe′) = 0. From these assumptions we have

    Σ = ΛΛ′ + Ψ                                            (1)

where the matrix Σ = (σ_ij) denotes the variance-covariance matrix of x [3].

Albert [1] has given a theorem that leads to a direct procedure for determining whether Σ − Ψ is of rank k. This procedure does not verify whether Σ − Ψ is positive definite. Suppose that the matrix Σ is partitioned as follows:

    Σ = | Σ_11  Σ_12  Σ_13 |
        | Σ_21  Σ_22  Σ_23 |
        | Σ_31  Σ_32  Σ_33 |

Let k be the maximum rank of the submatrices of Σ that do not include diagonal elements, and let Σ_11, Σ_12 = Σ_21′ and Σ_22 be square submatrices of order k with Σ_12 nonsingular. Then Σ − Ψ is of rank k if

    Σ_12 = (Σ_11 − Ψ_1) Σ_21⁻¹ (Σ_22 − Ψ_2)
    Σ_13 = (Σ_11 − Ψ_1) Σ_21⁻¹ Σ_23
    Σ_32 = Σ_31 Σ_21⁻¹ (Σ_22 − Ψ_2)
    Σ_33 − Ψ_3 = Σ_31 Σ_21⁻¹ Σ_23.

Albert [2] has further shown that if Σ_31 and Σ_32 are also of rank k, then there is a uniquely determined Ψ such that Σ − Ψ is of rank k. Anderson and Rubin [5] gave Theorem 5.1: a sufficient condition for identification of Ψ, and of Λ up to multiplication on the right by an orthogonal matrix, is that if any row of Λ is deleted there remain two disjoint submatrices of rank k. Ihara and Kano [3] assumed that the matrix Λ in (1) satisfies the condition for identification of Ψ in Theorem 5.1 of [5]. Then, partitioning the matrices Σ, Λ and Ψ, they defined an estimator of Ψ_p by

    Ψ̂_p = s_pp − s_2p′ S_12⁻¹ s_1p,

provided that the submatrix S_12 is nonsingular.
Here S is the sample covariance matrix, partitioned in the same fashion as Σ. Kano [6] proposed a non-iterative estimator using a g-inverse matrix in factor analysis, which is a generalization of Ihara and Kano's estimator [3].
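Before turning to the g-inverse version, the Ihara–Kano construction can be checked numerically at the population level. The sketch below is illustrative only: the loading matrix `Lam`, the unique variances `psi`, and the block choice (variables 1–2 as block 1, variables 3–4 as block 2, variable 5 as the target) are hypothetical values, not taken from the paper. It builds Σ = ΛΛ′ + Ψ for p = 5 and k = 2, confirms Albert's rank condition, and recovers ψ_5 via Ψ̂_p = s_pp − s_2p′ S_12⁻¹ s_1p.

```python
import numpy as np

# Hypothetical p = 5, k = 2 factor model; any loadings whose 2x2 blocks
# are nonsingular would do just as well.
Lam = np.array([[1.0, 0.5],
                [0.3, 0.8],
                [0.8, 0.3],
                [0.2, 0.9],
                [0.5, 0.7]])
psi = np.array([0.3, 0.4, 0.5, 0.6, 0.7])   # true unique variances
Sigma = Lam @ Lam.T + np.diag(psi)          # population covariance, eq. (1)

# Sigma - Psi = Lam Lam' has rank k = 2 (Albert's rank condition).
assert np.linalg.matrix_rank(Sigma - np.diag(psi)) == 2

# Ihara-Kano formula for the last variable: the off-diagonal block S_12
# (rows 1-2, columns 3-4) contains no diagonal elements of Sigma.
S12  = Sigma[0:2, 2:4]
s_1p = Sigma[0:2, 4]
s_2p = Sigma[2:4, 4]
psi5_hat = Sigma[4, 4] - s_2p @ np.linalg.inv(S12) @ s_1p
print(psi5_hat)   # close to the true unique variance 0.7
```

With the population Σ the formula is exact; with a sample covariance matrix S in its place it gives the non-iterative estimate of [3].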
If there exists an explicit function g(Σ) of Σ such that Ψ = g(Σ), then g(S) will be a good estimator of Ψ, and Λ can easily be estimated from S − g(S) [6]. Ihara and Kano [3] found such a function g and showed, using two real data sets, that the estimate Ψ̂ = g(S) comes rather close to the maximum likelihood estimator (MLE). Kano [6] partitions Λ as

    Λ = | λ_1′ | k_1
        | Λ_2  | k_2
        | Λ_3  | k_3

with row blocks of k_1, k_2 and k_3 rows (k_1 = 1, since λ_1′ is a single row), and Σ and S are partitioned according to this partition of Λ. Since Λ_2 and Λ_3 are of full column rank under Anderson and Rubin's condition, there exist a k_2-vector a_2 and a k_3-vector a_3 such that

    λ_1′ = a_2′Λ_2 = a_3′Λ_3.                              (2)

Let A⁻ denote any generalized inverse (g-inverse) of the matrix A, that is, any matrix satisfying AA⁻A = A. From equation (2), Kano [6] derived the following relation:

    λ_1′λ_1 = a_3′Λ_3Λ_2′a_2
            = a_3′Λ_3Λ_2′(Λ_3Λ_2′)⁻Λ_3Λ_2′a_2
            = λ_1′Λ_2′(Λ_3Λ_2′)⁻Λ_3λ_1
            = σ_12 Σ_32⁻ σ_31.

From this relation, ψ_1 is obtained as in equation (3):

    ψ_1 = σ_11 − σ_12 Σ_32⁻ σ_31.                          (3)

2 Theoretical Aspects

In this paper, using the generalized inverse matrix, we give the theorem below, which defines estimators of the unique variances in factor analysis and is a generalization of Albert's theorems [1, 2].

Theorem 2.1 Let Σ = ΛΛ′ + Ψ be the covariance matrix of the observable vector x, and let the matrices Σ, Λ and Ψ be partitioned as follows:

    Σ = | Σ_11  Σ_12  Σ_13 | m     Λ = | Λ_1 | m     Ψ = | Ψ_1  0    0   |
        | Σ_21  Σ_22  Σ_23 | n         | Λ_2 | n         | 0    Ψ_2  0   |
        | Σ_31  Σ_32  Σ_33 | t         | Λ_3 | t         | 0    0    Ψ_3 |

where the row (and column) blocks are of sizes m, n and t.
Suppose that rank Σ_12 = m. Then Σ − Ψ is of rank m if

    Σ_21 = (Σ_22 − Ψ_2) Σ_12⁻ (Σ_11 − Ψ_1)
    Σ_31 = Σ_32 Σ_12⁻ (Σ_11 − Ψ_1)
    Σ_23 = (Σ_22 − Ψ_2) Σ_12⁻ Σ_13
    Σ_33 − Ψ_3 = Σ_32 Σ_12⁻ Σ_13
    (Σ_22 − Ψ_2)(I − Σ_12⁻Σ_12) = 0_{n×n}
    Σ_32 (I − Σ_12⁻Σ_12) = 0_{t×n}.

Furthermore, if m = n and Σ_13 and Σ_23 are also of full row rank, then there is a uniquely determined Ψ such that Σ − Ψ is of rank m.

Proof 2.1 Premultiplying Σ − Ψ by the p × p matrix

    P = |  I_m                   0     0   |
        | −(Σ_22 − Ψ_2) Σ_12⁻    I_n   0   |
        | −Σ_32 Σ_12⁻            0     I_t |

and postmultiplying by the p × p matrix

    Θ = |  I_m                   0     0            |
        | −Σ_12⁻ (Σ_11 − Ψ_1)    I_n   −Σ_12⁻ Σ_13  |
        |  0                     0     I_t          |

we get

    P (Σ − Ψ) Θ = | 0  Σ_12  0 |
                  | A  B     C |
                  | D  E     F |

where

    A = −(Σ_22 − Ψ_2) Σ_12⁻ (Σ_11 − Ψ_1) + Σ_21,
    B = (Σ_22 − Ψ_2)(I − Σ_12⁻Σ_12),
    C = −(Σ_22 − Ψ_2) Σ_12⁻ Σ_13 + Σ_23,
    D = −Σ_32 Σ_12⁻ (Σ_11 − Ψ_1) + Σ_31,
    E = −Σ_32 Σ_12⁻ Σ_12 + Σ_32,
    F = −Σ_32 Σ_12⁻ Σ_13 + Σ_33 − Ψ_3.

Since the matrices P and Θ are nonsingular, rank(P (Σ − Ψ) Θ) = rank(Σ − Ψ).
So the ranks of the matrices P (Σ − Ψ) Θ and Σ − Ψ are equal to m if

    Σ_21 = (Σ_22 − Ψ_2) Σ_12⁻ (Σ_11 − Ψ_1)                 (4)
    Σ_31 = Σ_32 Σ_12⁻ (Σ_11 − Ψ_1)                         (5)
    Σ_23 = (Σ_22 − Ψ_2) Σ_12⁻ Σ_13                         (6)
    Σ_33 − Ψ_3 = Σ_32 Σ_12⁻ Σ_13                           (7)
    (Σ_22 − Ψ_2)(I − Σ_12⁻Σ_12) = 0_{n×n}                  (8)
    Σ_32 (I − Σ_12⁻Σ_12) = 0_{t×n}.                        (9)

Since Σ_12 and Σ_23 are of full row rank, premultiplying equation (5) by Σ_32⁻ and then by Σ_12 gives

    Σ_12 Σ_32⁻ Σ_31 = Σ_11 − Ψ_1.

From this equation we have

    Ψ_1 = Σ_11 − Σ_12 Σ_32⁻ Σ_31.                          (10)

Postmultiplying equation (6) by Σ_13⁻ and then by Σ_12, since Σ_13 is of full row rank, we get

    Σ_23 Σ_13⁻ Σ_12 = (Σ_22 − Ψ_2) Σ_12⁻ Σ_12.             (11)

By equation (8), the right-hand side of equation (11) equals Σ_22 − Ψ_2, so we have Σ_23 Σ_13⁻ Σ_12 = Σ_22 − Ψ_2, that is,

    Ψ_2 = Σ_22 − Σ_23 Σ_13⁻ Σ_12.                          (12)

From equation (7), we have

    Ψ_3 = Σ_33 − Σ_32 Σ_12⁻ Σ_13.                          (13)

So Ψ can be uniquely determined, and the proof is completed.

Estimators of Ψ_1, Ψ_2 and Ψ_3 are obtained by using the corresponding sample covariance matrices in (10), (12) and (13) in place of the population covariance matrices:

    Ψ̂_1 = S_11 − S_12 S_32⁻ S_31                           (14)
    Ψ̂_2 = S_22 − S_23 S_13⁻ S_12                           (15)
    Ψ̂_3 = S_33 − S_32 S_12⁻ S_13.                          (16)

As a result, setting m = 1 in equation (10) gives equation (3), and setting m = n = t = k in equations (10), (12) and (13) gives Albert's theorem [2].
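The three formulas can be sketched numerically at the population level. Everything below is a hypothetical example of my own construction: the loading blocks `L1`–`L3`, the unique variances, and the equal block sizes m = n = t = k = 2 are assumptions, and the Moore–Penrose pseudoinverse is used as one concrete choice of g-inverse. Applied to the true Σ, each formula should return the corresponding diagonal block of Ψ.

```python
import numpy as np

# Hypothetical p = 6, k = 2 model with block sizes m = n = t = 2;
# each 2x2 loading block is nonsingular, as the uniqueness condition requires.
L1 = np.array([[1.0, 0.5], [0.3, 0.8]])
L2 = np.array([[0.8, 0.3], [0.2, 0.9]])
L3 = np.array([[0.5, 0.7], [0.6, 0.1]])
Lam = np.vstack([L1, L2, L3])
psi = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8])   # true unique variances
Sigma = Lam @ Lam.T + np.diag(psi)               # population covariance, eq. (1)

def blk(i, j):
    """2x2 block Sigma_ij of the partitioned covariance matrix."""
    return Sigma[2*(i-1):2*i, 2*(j-1):2*j]

g = np.linalg.pinv   # Moore-Penrose pseudoinverse, one valid g-inverse

Psi1_hat = blk(1, 1) - blk(1, 2) @ g(blk(3, 2)) @ blk(3, 1)   # eq. (10)/(14)
Psi2_hat = blk(2, 2) - blk(2, 3) @ g(blk(1, 3)) @ blk(1, 2)   # eq. (12)/(15)
Psi3_hat = blk(3, 3) - blk(3, 2) @ g(blk(1, 2)) @ blk(1, 3)   # eq. (13)/(16)

# Each result is (up to floating-point error) the diagonal block of Psi.
print(np.diag(Psi1_hat), np.diag(Psi2_hat), np.diag(Psi3_hat))
```

Replacing Σ by a sample covariance matrix S partitioned the same way turns these population identities into the estimators (14)–(16).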
References

[1] A.A. Albert, The matrices of factor analysis, Proc. Nat. Acad. Sci. USA, 30 (1944a).

[2] A.A. Albert, The minimum rank of a correlation matrix, Proc. Nat. Acad. Sci. USA, 30 (1944b).

[3] M. Ihara, Y. Kano, A new estimator of the uniqueness in factor analysis, Psychometrika, 51 (1986).

[4] R.A. Johnson, D.W. Wichern, Applied Multivariate Statistical Analysis, Wiley, United States of America.

[5] T.W. Anderson, H. Rubin, Statistical inference in factor analysis, Proc. 3rd Berkeley Symp., 5 (1956).

[6] Y. Kano, A new estimation procedure using g-inverse matrix in factor analysis, Math. Japonica, 34:1 (1989).

Received: October 29, 2007
More information6 Pattern Mixture Models
6 Pattern Mixture Models A common theme underlying the methods we have discussed so far is that interest focuses on making inference on parameters in a parametric or semiparametric model for the full data
More informationLecture Notes Part 2: Matrix Algebra
17.874 Lecture Notes Part 2: Matrix Algebra 2. Matrix Algebra 2.1. Introduction: Design Matrices and Data Matrices Matrices are arrays of numbers. We encounter them in statistics in at least three di erent
More informationFixed Effects, Invariance, and Spatial Variation in Intergenerational Mobility
American Economic Review: Papers & Proceedings 2016, 106(5): 400 404 http://dx.doi.org/10.1257/aer.p20161082 Fixed Effects, Invariance, and Spatial Variation in Intergenerational Mobility By Gary Chamberlain*
More informationSummary of factor model analysis by Fan at al.
Summary of factor model analysis by Fan at al. W. Evan Durno July 11, 2015 W. Evan Durno Factoring Porfolios July 11, 2015 1 / 15 Brief summary Factor models for high-dimensional data are analysed [Fan
More informationFACTOR ANALYSIS AND MULTIDIMENSIONAL SCALING
FACTOR ANALYSIS AND MULTIDIMENSIONAL SCALING Vishwanath Mantha Department for Electrical and Computer Engineering Mississippi State University, Mississippi State, MS 39762 mantha@isip.msstate.edu ABSTRACT
More information9.1 Orthogonal factor model.
36 Chapter 9 Factor Analysis Factor analysis may be viewed as a refinement of the principal component analysis The objective is, like the PC analysis, to describe the relevant variables in study in terms
More informationData Mining and Matrices
Data Mining and Matrices 05 Semi-Discrete Decomposition Rainer Gemulla, Pauli Miettinen May 16, 2013 Outline 1 Hunting the Bump 2 Semi-Discrete Decomposition 3 The Algorithm 4 Applications SDD alone SVD
More informationFactor Analysis Edpsy/Soc 584 & Psych 594
Factor Analysis Edpsy/Soc 584 & Psych 594 Carolyn J. Anderson University of Illinois, Urbana-Champaign April 29, 2009 1 / 52 Rotation Assessing Fit to Data (one common factor model) common factors Assessment
More informationPrincipal Component Analysis (PCA) Our starting point consists of T observations from N variables, which will be arranged in an T N matrix R,
Principal Component Analysis (PCA) PCA is a widely used statistical tool for dimension reduction. The objective of PCA is to find common factors, the so called principal components, in form of linear combinations
More information