Principal Component Analysis (PCA)
PCA is a widely used statistical tool for dimension reduction. The objective of PCA is to find common factors, the so-called principal components, in the form of linear combinations of the variables under investigation, and to rank them according to their importance.

Our starting point consists of $T$ observations on $N$ variables, which will be arranged in a $T \times N$ matrix $R$,
\[
R = \begin{bmatrix}
r_{11} & r_{21} & \cdots & r_{N1} \\
r_{12} & r_{22} & \cdots & r_{N2} \\
\vdots & \vdots & & \vdots \\
r_{1T} & r_{2T} & \cdots & r_{NT}
\end{bmatrix}.
\]
That is, $r_{it}$ is the return of asset $i$ at time $t$. Usually centered data are used, so that $R'R/(T-1)$ is the sample covariance matrix (or correlation matrix) of the returns under study.
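As a concrete illustration of this setup, here is a minimal numpy sketch (hypothetical data; the variable names are ours, not from the notes) that builds a centered return matrix and its sample covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 60, 24                        # e.g., 60 monthly returns on 24 assets

raw = rng.normal(size=(T, N))        # placeholder returns; real data would go here
R = raw - raw.mean(axis=0)           # center each column (demeaned returns)

S = R.T @ R / (T - 1)                # sample covariance matrix, N x N
assert np.allclose(S, np.cov(raw, rowvar=False))
```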
The First Principal Component

Let us start with one variable, say $p$. Variable $p$ takes $T$ values, to be arranged in a column vector $p = [p_1, \ldots, p_T]'$. $p$ is not yet determined, but let us proceed as if it were. Then our approximation takes the form $R \approx pa'$, where $a$ is an $N$-dimensional column vector, i.e.,
\[
\begin{bmatrix}
r_{11} & r_{21} & \cdots & r_{N1} \\
r_{12} & r_{22} & \cdots & r_{N2} \\
\vdots & \vdots & & \vdots \\
r_{1T} & r_{2T} & \cdots & r_{NT}
\end{bmatrix}
\approx
\begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_T \end{bmatrix}
\begin{bmatrix} a_1 & \cdots & a_N \end{bmatrix}
=
\begin{bmatrix}
p_1 a_1 & p_1 a_2 & \cdots & p_1 a_N \\
p_2 a_1 & p_2 a_2 & \cdots & p_2 a_N \\
\vdots & \vdots & & \vdots \\
p_T a_1 & p_T a_2 & \cdots & p_T a_N
\end{bmatrix}.
\]
Thus, $r_{it}$ is approximated by $p_t a_i$. The matrix of discrepancies is $R - pa'$. Our criterion for choosing $p$ and $a$ will be to select these vectors such that the sum of squares of all $T \cdot N$ discrepancies is minimized, i.e.,
\[
\sum_{i=1}^{N} \sum_{t=1}^{T} (r_{it} - p_t a_i)^2 = \operatorname{tr}[(R - pa')'(R - pa')], \tag{1}
\]
using property (14) of the trace (see Appendix). Note that the product $pa'$ remains unchanged when $p$ is multiplied by some scalar $c \neq 0$ and $a$ by $1/c$. By imposing
\[
\sum_{t=1}^{T} p_t^2 = p'p = 1, \tag{2}
\]
we obtain uniqueness except for sign.
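Continuing the sketch above (reusing `rng`, `R`, `T`, and `N`), a quick numerical check of identity (1):

```python
p = rng.normal(size=(T, 1))
a = rng.normal(size=(N, 1))
D = R - p @ a.T                      # matrix of discrepancies R - pa'

lhs = np.sum(D**2)                   # sum of squares of all T*N discrepancies
rhs = np.trace(D.T @ D)              # trace form, using property (14)
assert np.isclose(lhs, rhs)
```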
Then our objective function (1) becomes
\[
S = \operatorname{tr}[(R - pa')'(R - pa')]
= \operatorname{tr}(R'R) - \operatorname{tr}(ap'R) - \operatorname{tr}(R'pa') + \operatorname{tr}(a\,\underbrace{p'p}_{=1}\,a')
= \operatorname{tr}(R'R) - 2p'Ra + a'a, \tag{3}
\]
using that, from (13), $\operatorname{tr}(ap'R) = \operatorname{tr}(p'Ra) = p'Ra$, $\operatorname{tr}(R'pa') = \operatorname{tr}(pa'R') = \operatorname{tr}(a'R'p) = a'R'p = p'Ra$, and $\operatorname{tr}(aa') = \operatorname{tr}(a'a) = a'a$.
Differentiating (3) with respect to $a$ (for given $p$) and setting the derivative equal to zero,
\[
\frac{\partial S}{\partial a} = -2R'p + 2a = 0,
\]
gives
\[
a = R'p. \tag{4}
\]
Now substitute (4) into the objective function (3) to obtain
\[
S = \operatorname{tr}(R'R) - p'RR'p,
\]
showing that our new task is to maximize $p'RR'p$ with respect to $p$, subject to (2). The Lagrangian is
\[
L = p'RR'p - \lambda(p'p - 1).
\]
The first-order condition requires that
\[
\frac{\partial L}{\partial p} = 2RR'p - 2\lambda p = 0
\quad\Longleftrightarrow\quad
(RR' - \lambda I)p = 0, \tag{5}
\]
where $I$ is the identity matrix. For (5) to have a nontrivial solution ($p \neq 0$), we must have that
\[
\det(RR' - \lambda I) = 0, \tag{6}
\]
which means that $p$ is an eigenvector of the $T \times T$ positive semidefinite matrix $RR'$ corresponding to the eigenvalue (or root) $\lambda$. As $RR'$ has, in general, $N$ nonzero eigenvalues (if the sample covariance matrix is of full rank), we have to determine which eigenvalue is to be taken.
To do so, multiply (5) by $p'$, resulting in
\[
p'RR'p = \lambda p'p = \lambda, \tag{7}
\]
which, as we want to maximize $p'RR'p$, means that we should take the largest root of $RR'$. Note that all roots of $RR'$ are nonnegative, and the positive roots are those of $R'R$, which is $T-1$ times the sample covariance matrix of the returns under consideration. Note that by multiplying (5) by $R'$ we also obtain
\[
(R'R - \lambda I)\underbrace{R'p}_{=a} = (R'R - \lambda I)a = 0, \tag{8}
\]
which means that $a$ is an eigenvector of $R'R$ corresponding to the largest root of $R'R$ (note that $R'R$ and $RR'$ have the same nonzero eigenvalues). Furthermore, (4) and (5) imply
\[
\lambda p \overset{(5)}{=} RR'p \overset{(4)}{=} Ra
\quad\Longrightarrow\quad
p = \frac{1}{\lambda} Ra. \tag{9}
\]
Vector $p$ given by (9), which is a linear combination of the original variables in $R$, is the first principal component of the $N$ variables in $R$.
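Putting the derivation together, a minimal sketch (continuing the numpy example above) computes the first principal component from the largest eigenpair of $R'R$ and checks (2) and (5):

```python
# Eigendecomposition of the symmetric matrix R'R (eigh sorts eigenvalues ascending).
eigvals, eigvecs = np.linalg.eigh(R.T @ R)
lam1 = eigvals[-1]                   # largest root
u1 = eigvecs[:, -1]                  # unit-length eigenvector of R'R

# The notes' a_1 = sqrt(lam1) * u1 (so that a_1'a_1 = lam1), and p_1 = R a_1 / lam1.
p1 = R @ u1 / np.sqrt(lam1)          # first principal component, unit length
assert np.isclose(p1 @ p1, 1.0)                   # normalization (2)
assert np.allclose(R @ (R.T @ p1), lam1 * p1)     # eigenvalue equation (5)
```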
Other Principal Components

Let us use subscripts for the first principal component, i.e., $p_1$, $a_1$, $\lambda_1$, and similarly for the second, third, ... principal component. Currently, our matrix is approximated by $p_1 a_1'$. The residual matrix is $R - p_1 a_1'$, which in turn will be approximated by another principal component, $p_2$, with corresponding coefficient vector $a_2$. As before, for identification, put $p_2'p_2 = 1$. Then we want to minimize
\[
S_2 = \operatorname{tr}[(R - p_1 a_1' - p_2 a_2')'(R - p_1 a_1' - p_2 a_2')].
\]
It turns out that the second principal component $p_2$ is equal to the unit-length eigenvector of $RR'$ corresponding to the second largest eigenvalue, $\lambda_2$, of $RR'$, or, equivalently, of $R'R$.
Moreover, $a_2$ is the corresponding eigenvector of $R'R$, and
\[
p_2 = \frac{1}{\lambda_2} R a_2.
\]
We can go on in this way, deriving further principal components. The $i$th such component minimizes the sum of squares of the discrepancies that are left after the earlier components have done their work. The result is that $p_i$ is the unit-length characteristic vector of $RR'$ corresponding to the $i$th largest eigenvalue, $\lambda_i$. To find the length of vector $a_i$, use $p_i = Ra_i/\lambda_i$, which gives
\[
p_i'p_i = 1 = a_i'R'Ra_i/\lambda_i^2 = a_i'a_i\,\lambda_i/\lambda_i^2
\quad\Longrightarrow\quad
a_i'a_i = \lambda_i.
\]
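All components can thus be read off a single eigendecomposition. A sketch (continuing the numpy example) that also verifies $a_i'a_i = \lambda_i$ and the unit length of each $p_i$:

```python
eigvals, eigvecs = np.linalg.eigh(R.T @ R)
order = np.argsort(eigvals)[::-1]    # descending: lam_1 >= lam_2 >= ...
lams = eigvals[order]
U = eigvecs[:, order]                # unit-length eigenvectors of R'R

A = U * np.sqrt(lams)                # columns a_i, so that a_i'a_i = lam_i
P = R @ A / lams                     # columns p_i = R a_i / lam_i, unit length

assert np.allclose(A.T @ A, np.diag(lams))   # a_i'a_i = lam_i, a_i'a_j = 0
assert np.allclose(P.T @ P, np.eye(N))       # components are orthonormal
```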
As $R'R$ and $RR'$ have the same nonzero eigenvalues, one may also work in terms of the sample covariance matrix $\frac{1}{T-1}R'R$, which is of primary interest in our context. This means that we perform a PCA on the variables $R/\sqrt{T-1}$, where $R$ contains the centered (demeaned) returns. In general, if we use $r$ principal components to approximate the variables under study, the approximation is given by
\[
R/\sqrt{T-1} \approx \sum_{i=1}^{r} p_i a_i' = PA',
\]
where $P = [p_1, \ldots, p_r]$, $A = [a_1, \ldots, a_r]$, and an approximation for the covariance matrix is
\[
R'R/(T-1) \approx AP'PA' = AA', \quad \text{as } P'P = I. \tag{10}
\]
$P'P = I$ follows from our normalization $p_i'p_i = 1$ and the fact that eigenvectors corresponding to different eigenvalues of symmetric matrices are orthogonal (see Appendix). Note that this means that the principal components are uncorrelated. Note also that this approximation will be singular as long as $r < N$. A full-rank covariance matrix can be obtained, however, quite similarly to the Single Index Model, by adding a diagonal matrix of asset-specific error variance terms (which are assumed to be uncorrelated). The easiest way to do so is just to replace the diagonal elements of (10) with the sample variances of the individual assets.
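A sketch of this covariance estimator, assembled from the recipe just described; the function name and the choice $r = 6$ are ours, not from the notes:

```python
def pca_cov(R, r):
    """Rank-r PCA covariance estimate, diagonal repaired to the sample variances.

    R is a T x N matrix of centered returns; r is the number of components kept.
    """
    T, N = R.shape
    S = R.T @ R / (T - 1)                        # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)
    idx = np.argsort(eigvals)[::-1][:r]          # indices of the r largest roots
    A = eigvecs[:, idx] * np.sqrt(eigvals[idx])  # loadings, scaled as in (10)
    cov = A @ A.T                                # rank-r approximation of S
    np.fill_diagonal(cov, np.diag(S))            # full rank via asset-specific variances
    return cov

cov_hat = pca_cov(R, r=6)
```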
The rationale behind this procedure is that we want to reduce the number of risk factors to a lower dimension. That is, we hope to capture the systematic part of asset covariation using just a few principal components, while the covariation in the sample covariance matrix that is not captured by these first few principal components is due to random noise, i.e., it will not improve, and may even considerably deteriorate, forecasts of future asset covariance. As this is a statistical factor model, the factors need not have an economic or financial interpretation.

The discussion of principal component analysis given here closely follows Henri Theil (1971), Principles of Econometrics, John Wiley & Sons.
Choosing the Number of Principal Components

The eigenvalues may be used to measure the relative importance of the corresponding components. The argument is based on the criterion used: the sum of squares of all $T \cdot N$ discrepancies. Before any component is used, the discrepancies are the elements of $R$, and their sum of squares is
\[
\sum_{i=1}^{N} \sum_{t=1}^{T} r_{it}^2 = \operatorname{tr}(R'R).
\]
The residual sum of squared discrepancies with $r$ principal components is given by
\[
\begin{aligned}
S &= \operatorname{tr}\Big[\Big(R - \sum_{i=1}^{r} p_i a_i'\Big)'\Big(R - \sum_{i=1}^{r} p_i a_i'\Big)\Big] \\
&= \operatorname{tr}(R'R) - 2\sum_{i=1}^{r} \operatorname{tr}(R'p_i a_i') + \sum_{i}\sum_{j} \operatorname{tr}(a_i p_i' p_j a_j') \\
&= \operatorname{tr}(R'R) - 2\sum_{i=1}^{r} \operatorname{tr}(R'p_i a_i') + \sum_{i=1}^{r} a_i'a_i \\
&= \operatorname{tr}(R'R) - 2\sum_{i} p_i'RR'p_i + \sum_{i} p_i'RR'p_i \\
&= \operatorname{tr}(R'R) - \sum_{i} p_i'RR'p_i
= \operatorname{tr}(R'R) - \sum_{i} p_i'p_i\,\lambda_i
= \operatorname{tr}(R'R) - \sum_{i=1}^{r} \lambda_i,
\end{aligned}
\]
where the reduction of the double sum uses $p_i'p_j = 0$ for $i \neq j$ and $p_i'p_i = 1$.
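Reusing `P`, `A`, and `lams` from the sketch above, a quick check of this reduction for, say, $r = 3$:

```python
r = 3
resid = R - P[:, :r] @ A[:, :r].T    # residual after r components
assert np.isclose(np.sum(resid**2), np.trace(R.T @ R) - lams[:r].sum())
```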
Thus, component $i$ accounts for a reduction of the sum of squared discrepancies equal to $\lambda_i$. That is, component $i$ accounts for a fraction
\[
\frac{\lambda_i}{\operatorname{tr}(R'R)} = \frac{\lambda_i}{\sum_{j=1}^{N} \lambda_j}
\]
of the total variation, and the first $r$ principal components account for a fraction
\[
\frac{\sum_{j=1}^{r} \lambda_j}{\operatorname{tr}(R'R)} = \frac{\sum_{j=1}^{r} \lambda_j}{\sum_{j=1}^{N} \lambda_j}
\]
of the total variation.
The following selection methods are frequently used in practical work:

Percent of variance: For a fixed fraction $\delta$, choose the smallest $r$ for which
\[
\frac{\sum_{j=1}^{r} \lambda_j}{\operatorname{tr}(R'R)} \geq \delta.
\]

Average eigenvalue: Keep all principal components whose eigenvalues exceed the average eigenvalue, $N^{-1}\sum_j \lambda_j$.

Scree graphs: This method is named after the geological term scree (Geröllfeld), referring to the debris at the foot of a rocky cliff. Here, the relevant eigenvalues form the cliff, and the unimportant components are represented by the smaller eigenvalues forming the scree.

Clearly these methods do not represent formal statistical tests but rather rules of thumb; the first two are sketched in code below.
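A sketch of the percent-of-variance and average-eigenvalue rules, with our own helper names, operating on an eigenvalue array sorted in descending order (such as `lams` above):

```python
def r_percent_of_variance(lams, delta):
    """Smallest r whose leading eigenvalues explain at least a fraction delta."""
    frac = np.cumsum(lams) / lams.sum()
    return int(np.argmax(frac >= delta)) + 1

def r_average_eigenvalue(lams):
    """Number of eigenvalues exceeding the average eigenvalue."""
    return int(np.sum(lams > lams.mean()))

print(r_percent_of_variance(lams, 0.75), r_average_eigenvalue(lams))
```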
Example

Consider our 24 stocks from the DAX, with monthly returns, 60 observations for each stock. When we use the Average Eigenvalue rule to determine the number of components, we use the first 6 principal components. (The eigenvalues are shown in the table on the next page.) When we want to employ the Percent of Variance rule with, for example, $\delta = 0.75$, we use the first 7 principal components. The Scree Graph also suggests something in this direction.
(Table: for each component $i = 1, \ldots, 24$, the eigenvalue $\lambda_i$, the fraction $\lambda_i/\sum_{j=1}^{24}\lambda_j$, and the cumulative fraction $\sum_{j=1}^{i}\lambda_j/\sum_{j=1}^{24}\lambda_j$.)
(Figure: scree graph of the eigenvalues of the sample covariance matrix.)
Economic Interpretation of the Components

Compared to approaches using financial or macroeconomic variables as factors, the factors extracted using a purely statistical procedure such as PCA are more difficult to interpret (at least for equity portfolios). An exception is the first factor, which is usually highly correlated with an appropriate market index. That is, the first principal component captures the common trend. For our example, suppose we use the first 6 principal components. Then the correlations between these 6 components and the DAX index are reported in a table whose first row indicates the component and whose second row gives the correlation with the DAX.
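One could check this by correlating each component with index returns. A sketch, with a placeholder series standing in for the actual DAX returns (which are not contained in these notes):

```python
index_returns = rng.normal(size=T)   # placeholder for the actual DAX returns
for i in range(6):
    corr = np.corrcoef(P[:, i], index_returns)[0, 1]
    print(f"component {i + 1}: correlation with index = {corr:+.2f}")
```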
Appendix

The Trace of a Square Matrix

The trace of an $n \times n$ matrix $A$ is the sum of its diagonal elements:
\[
\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii}. \tag{11}
\]
Clearly $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$. Moreover, for $A$ of order $m \times n$ and $B$ of order $n \times m$,
\[
\operatorname{tr}(AB) = \operatorname{tr}(BA) = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} b_{ji}. \tag{12}
\]
It follows from (12) that, for conformable matrices $A$, $B$ and $C$ (permutation rule),
\[
\operatorname{tr}(ABC) = \operatorname{tr}(BCA) = \operatorname{tr}(CAB). \tag{13}
\]
The sum of squares of all elements $a_{ij}$ of an $m \times n$ matrix $A$ can be written as the trace of $A'A$:
\[
\operatorname{tr}(A'A) = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2. \tag{14}
\]
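A quick numerical illustration of properties (12)-(14):

```python
M = rng.normal(size=(3, 5))
B = rng.normal(size=(5, 3))
assert np.isclose(np.trace(M @ B), np.trace(B @ M))    # property (12)
assert np.isclose(np.trace(M.T @ M), np.sum(M**2))     # property (14)

C = rng.normal(size=(3, 3))
t = np.trace(M @ B @ C)                                # permutation rule (13)
assert np.isclose(t, np.trace(B @ C @ M)) and np.isclose(t, np.trace(C @ M @ B))
```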
Eigenvalues and Eigenvectors

An eigenvalue (or root) of an $n \times n$ matrix $A$ is a real or complex scalar $\lambda$ satisfying the equation
\[
Ax = \lambda x \tag{15}
\]
for some nonzero vector $x$, which is an eigenvector corresponding to $\lambda$. Note that an eigenvector is only determined up to a scalar multiple. Equation (15) can be written as $(A - \lambda I)x = 0$, which requires that the matrix $A - \lambda I$ is singular, or, equivalently,
\[
\det(A - \lambda I) = 0. \tag{16}
\]
25 As det(a λi), which is known as the characteristic polynomial of matrix A, is a polynomial of degree n in λ, an n n matrix has n eigenvalues (counting multiplicities). For illustration, consider the 2 2 matrix [ ] a11 a A = 12. a 21 a 22 Matrix A s characteristic equation is [ ] λ a11 a P (λ) = det(λi 2 A) = det 12 a 21 λ a 22 = (λ a 11 )(λ a 22 ) a 12 a 21 = λ 2 (a 11 + a 22 )λ + a 11 a 22 a 12 a 21 = λ 2 tr(a)λ + det A = 0, which is polynomial of degree 2 in λ, i.e., a quadratic. Thus, A has eigenvalues λ 1 2 = tr(a) ± tr(a) 2 4 det A. (17) 2 25
A general property is that the sum $\lambda_1 + \cdots + \lambda_n$ of the eigenvalues of an $n \times n$ matrix $A$ is equal to its trace, i.e.,
\[
\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii} = \sum_{i=1}^{n} \lambda_i. \tag{18}
\]
For our example, from (17), it is directly observable that $\lambda_1 + \lambda_2 = a_{11} + a_{22} = \operatorname{tr}(A)$. In general, the eigenvalues of a matrix may be real or complex. However, for positive definite symmetric matrices (e.g., covariance matrices), we have the following results:

i) The eigenvalues of a positive definite matrix are positive. To see this, recall that, for such a matrix, $x'Ax > 0$ for all $x \neq 0$.
Then, using the definition of an eigenvalue,
\[
0 < x'Ax = \lambda x'x
\]
for a positive definite matrix; since $x'x > 0$ for $x \neq 0$, it follows that $\lambda > 0$.

ii) The eigenvectors of any symmetric matrix are orthogonal if they correspond to different roots. Write $\lambda_1$ and $\lambda_2$ ($\lambda_1 \neq \lambda_2$) for the two roots, and $x$ and $y$ for the corresponding vectors:
\[
Ax = \lambda_1 x \tag{19}
\]
\[
Ay = \lambda_2 y. \tag{20}
\]
Multiply (19) by $y'$ and (20) by $x'$. Since $A = A'$ for a symmetric matrix $A$, we have $x'Ay = y'Ax$, and it follows that
\[
0 = y'Ax - x'Ay = (\lambda_1 - \lambda_2)x'y.
\]
Hence $x'y = 0$.
iii) For any $n \times m$ matrix $A$, $A'A$ and $AA'$ have the same nonzero eigenvalues. (The number of nonzero eigenvalues is equal to the rank of $A$.) To see this, note that premultiplication by $A'$ shows that $(AA' - \lambda I)x = 0$ implies $(A'A - \lambda I)A'x = 0$; and if $\lambda \neq 0$, then $A'x \neq 0$, so $\lambda$ is also an eigenvalue of $A'A$.
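A numerical illustration of (iii):

```python
M = rng.normal(size=(6, 4))
small = np.sort(np.linalg.eigvalsh(M.T @ M))   # 4 nonzero roots (rank 4)
big = np.sort(np.linalg.eigvalsh(M @ M.T))     # same 4 roots plus 2 zeros
assert np.allclose(big[:2], 0.0, atol=1e-10)
assert np.allclose(big[2:], small)
```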