Principal Components Analysis (PCA): Relationship Between a Linear Combination of Variables and Axes Rotation for PCA


Principal Components Analysis: Uses one group of variables (we will call this X). In PCA, we want to construct p independent random variables from the original p variables. We can do this via an axis rotation, obtained by using linear combinations of the original p variables.

Example of a rotation, for p = 2: [Graph in class]

Point B has x1 = 8 and x2 = 5 based on the axes x1 and x2 used to define this point, and x1 is orthogonal to x2.
o Say we rotate x1 by θ = 20 degrees, and call this new axis Z1.
o Z1 is a linear combination of X1 and X2:
     z1 = cos(θ11) x1 + cos(θ12) x2, where θ11 = θ and θ12 = 90 - θ
o A second axis, Z2, is then orthogonal (90 degrees) to Z1:
     z2 = cos(θ21) x1 + cos(θ22) x2, where θ21 = 90 + θ and θ22 = θ

Therefore, a rotation of the axes creates a new set of axes which are linear combinations of the original ones. The point can be redefined using these new axes.
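
For illustration, here is a minimal sketch (not from the notes; numpy is assumed) that rotates the point B = (8, 5) by θ = 20 degrees using the two linear combinations above:

```python
import numpy as np

theta = np.deg2rad(20)          # rotation angle for the new axis Z1
x1, x2 = 8.0, 5.0               # coordinates of point B on the original axes

# z1 and z2 are linear combinations of x1 and x2 (direction cosines)
z1 = np.cos(theta) * x1 + np.cos(np.pi / 2 - theta) * x2
z2 = np.cos(np.pi / 2 + theta) * x1 + np.cos(theta) * x2

print(z1, z2)                   # coordinates of the same point B on the rotated axes
# The length of the vector is unchanged (rigid rotation):
print(np.hypot(x1, x2), np.hypot(z1, z2))
```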

Using matrices instead of angles,

[z1 z2] = [x1 x2] [ cos(θ11)  cos(θ21) ]
                  [ cos(θ12)  cos(θ22) ]

[z1 z2] = [x1 x2] [ a11  a12 ]
                  [ a21  a22 ]

Z = X A

If A'A = I, then a rotation occurs, and A is an orthogonal matrix. For a proper rotation, |A| = 1. For an improper rotation (rotation and reflection), |A| = -1. Therefore, a linear transformation Z = XA is a rigid rotation if A is an orthogonal matrix and |A| = 1.

Looking at the transformation matrix A, written in terms of its columns:

A = [ a11  a12 ] = [ a1  a2 ]
    [ a21  a22 ]

1. aj'aj = 1 for every column, from 1 to p columns (only 2 columns for this example); i.e., multiply any column by itself and you will get 1. This means that the transformation matrix, A, is normal. NOTE: This does not have anything to do with the normal distribution!

2. aj'ak = 0 for any pair of columns (j and k, j ≠ k). This means that A is orthogonal (90 degrees); i.e., multiply any two columns and you will get zero.

For the example, using a 20-degree orthogonal rotation:

A = [ cos(θ11)  cos(θ21) ] = [ cos(20)   cos(110) ]
    [ cos(θ12)  cos(θ22) ]   [ cos(70)   cos(20)  ]

a1'a1 = cos²(20) + cos²(70) = 1
a2'a2 = cos²(110) + cos²(20) = 1
a1'a2 = cos(20)cos(110) + cos(70)cos(20) = 0

Using linear transformations for p variables and n observations

PCA uses principal axis rotation, based on a linear combination of the p variables, to create new variables, Zj. There will be p new Z variables from the p original X variables. The Zj:
o are known as the principal components.
o are created by obtaining a linear combination of the p original variables.
o will be orthogonal to each other, even though the original X variables were not.

For principal component one:

zi1 = v11 xi1 + v21 xi2 + v31 xi3 + ... + vp1 xip

The vjk are the coefficients (multipliers) used in the linear transformation of the X's to the Z's. zi1 represents the principal component score for the ith observation on principal component 1 (sometimes called PC1 in textbooks). For n observations, there will be n values for each X variable.
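
A small check of the matrix version (a sketch, not from the notes; numpy assumed): build A from the direction cosines and confirm that its columns are orthonormal and that |A| = 1, so Z = XA is a rigid rotation.

```python
import numpy as np

deg = np.deg2rad
# Transformation matrix for the 20-degree rotation (direction cosines)
A = np.array([[np.cos(deg(20)), np.cos(deg(110))],
              [np.cos(deg(70)), np.cos(deg(20))]])

print(np.round(A.T @ A, 10))    # A'A = I  -> columns are orthonormal
print(np.linalg.det(A))         # |A| = 1  -> proper (rigid) rotation, no reflection

X = np.array([[8.0, 5.0]])      # one observation (point B) as a 1 x 2 row
Z = X @ A                       # Z = XA gives the coordinates on the rotated axes
print(Z)
```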

Likewise, there will be n values (principal component scores) for each principal component. There will be p of these principal components, each one having a different set of multipliers; the vjk differ for each principal component. In matrix form:

   Z    =    X       V
(n x p)   (n x p) (p x p)

o each column of X is a variable
o each column of Z is a principal component
o each row of X holds the measures of the X variables for one observation
o each row of Z holds the principal component scores for one observation
o each column of V represents the multipliers for a particular PC (one multiplier for each of the p original X variables)

V will be an orthogonal matrix, and therefore the transformation of X represents a rotation of X.

Centroid: This is a vector of the average values. For the p X's, there will be p averages. If the average X's are put into the equations for the principal components, you will get the average principal component scores:

mean(z1) = v11 mean(x1) + v21 mean(x2) + v31 mean(x3) + ... + vp1 mean(xp)

Choosing a Rotation: What rotation should be used? What, then, are the vjk values?

Objective of PCA (a numerical sketch follows the steps below):
Step 1. Obtain PC1, which accounts for the maximum variance.
Step 2. PC2 accounts for the next largest variance, but is orthogonal to PC1 (remember the Gram-Schmidt process?).
Step 3. PC3 accounts for the next largest variance, but is orthogonal to both PC1 and PC2.
Step 4. Continue until there are p principal components from the p original X variables.
[graph in class]
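
A sketch of the matrix form and the centroid property (illustrative data, not from the notes; numpy assumed): V is taken as the eigenvectors of the sample covariance matrix, Z = XV gives the scores, and the mean vector of X, pushed through V, equals the mean vector of Z.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3)) @ np.array([[2.0, 0.5, 0.0],
                                         [0.0, 1.0, 0.3],
                                         [0.0, 0.0, 0.5]])   # 50 obs, p = 3 correlated variables

S = np.cov(X, rowvar=False)          # covariance matrix of the X's
eigvals, V = np.linalg.eigh(S)       # V: one eigenvector (column of multipliers) per PC
order = np.argsort(eigvals)[::-1]    # order PCs from largest to smallest variance
eigvals, V = eigvals[order], V[:, order]

Z = X @ V                            # principal component scores, one row per observation

# Centroid property: the average X's give the average PC scores
print(np.allclose(X.mean(axis=0) @ V, Z.mean(axis=0)))

# PC1 has the largest variance of any unit-length linear combination
print(np.var(Z[:, 0], ddof=1), eigvals[0])
```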

Variances of the X's versus Variances of the Z's:

There will be a direct relationship between the covariance matrix for the Z's and the covariance matrix for the X's, and between the correlation matrix for the Z's and the correlation matrix for the X's. Why? And what is this relationship?

   Z    =    X       V
(n x p)   (n x p) (p x p)

Let Sz be the corrected sums of squares and cross products matrix for Z. With 1 denoting an n x 1 vector of ones,

Sz = Z'Z - (1/n)(Z'1)(1'Z)

Substitute Z = XV and simplify:

Sz = (XV)'(XV) - (1/n)((XV)'1)(1'(XV))
   = V'X'XV - (1/n)(V'X'1)(1'XV)
   = V'[X'X - (1/n)(X'1)(1'X)]V
   = V' Sx V

where Sx is the corrected sums of squares and cross products matrix for X. Since the covariance of X is Sx/(n-1) and the covariance of Z is Sz/(n-1),

Cov(Z) = V' Cov(X) V

In many textbooks, the symbol Σx is used for the covariance of X and the symbol Λ for the covariance of Z; then:

Λ = V' Σx V
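
A quick numerical confirmation of the identity above (a sketch with illustrative data; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3)) @ np.array([[1.0, 0.4, 0.2],
                                         [0.0, 1.0, 0.5],
                                         [0.0, 0.0, 1.0]])

ones = np.ones((X.shape[0], 1))
Sx = X.T @ X - (X.T @ ones) @ (ones.T @ X) / X.shape[0]   # corrected SSCP for X

eigvals, V = np.linalg.eigh(np.cov(X, rowvar=False))
V = V[:, np.argsort(eigvals)[::-1]]                       # eigenvectors of Cov(X)

Z = X @ V
Sz = Z.T @ Z - (Z.T @ ones) @ (ones.T @ Z) / Z.shape[0]   # corrected SSCP for Z

print(np.allclose(Sz, V.T @ Sx @ V))                      # Sz = V' Sx V
print(np.round(np.cov(Z, rowvar=False), 6))               # Cov(Z) = V' Cov(X) V is diagonal
```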

Eigenvalues and eigenvectors:

The principal components will be independent (orthogonal). Therefore, all covariances between them will be equal to zero. The covariance matrix for the Z's will be a diagonal matrix, with each element representing the variance of a principal component:

Λ = [ λ1   0   ...  0  ]
    [ 0    λ2  ...  0  ]
    [ ...              ]
    [ 0    0   ...  λp ]

For example, λ1 is the variance for principal component 1. These are also called eigenvalues, or latent roots, or characteristic roots of the covariance matrix for the X variables, Σx.

The transformation matrix used to make a linear transformation of the X variables by rotating the axes (V, the multipliers) holds the eigenvectors, or latent vectors, or characteristic vectors (one column, a vector, for each principal component).

Therefore: the eigenvector matrix, V, is used to transform the covariance matrix of X to get the simpler covariance matrix for Z. This is the same as transforming the X variables using a linear combination of these multipliers (eigenvector values) to get the Z variables (principal components). Getting the eigenvectors and eigenvalues is also called diagonalizing the matrix.

Since V is an orthogonal matrix (rigid rotation), the Z's will be uncorrelated (orthogonal); it may be easier to interpret the Z variables than to interpret the X variables.

Also:

Σx V = V Λ    and    x' Σx x = z' Λ z

SO there is no loss of information. The same information is just represented on new basis vectors; the data are simply transformed.
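
A minimal sketch (numpy assumed, data illustrative) of what "diagonalizing" means here: the eigenvectors V and eigenvalues Λ of the covariance matrix satisfy Σx V = V Λ, and V' Σx V recovers the diagonal Λ, with the total variance preserved.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4)).cumsum(axis=1)   # four correlated variables

Sigma_x = np.cov(X, rowvar=False)              # covariance matrix of the X's
lam, V = np.linalg.eigh(Sigma_x)               # eigenvalues (lam) and eigenvectors (columns of V)
lam, V = lam[::-1], V[:, ::-1]                 # order from largest to smallest variance

Lambda = np.diag(lam)
print(np.allclose(Sigma_x @ V, V @ Lambda))    # Sigma_x V = V Lambda
print(np.allclose(V.T @ Sigma_x @ V, Lambda))  # V' Sigma_x V = Lambda (diagonal)
print(np.trace(Sigma_x), lam.sum())            # total variance is preserved
```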

If the X variables are linearly dependent, then |Σx| = 0 and the inverse of Σx does not exist (the matrix is singular, not of full rank). THEN at least one of the eigenvalues (the diagonal elements of Λ) will be zero (a numerical sketch of this follows the list of uses below). Since:

sum over j = 1..p of var(Zj) = sum over j = 1..p of λj = sum over j = 1..p of var(Xj) = trace(Σx)

If the matrix is not of full rank, we can reduce the number of principal components, as some of them will have 0 variance; the remaining components will still represent all of the variance of the X variables. We can further reduce by eliminating those principal components that have very low variance; the result will still represent most of the variance in the X variables.

Uses of PCA:

PCA, then:
o Can be used to create independent variables, Z, from a set of correlated variables, X.
o Can be used to reduce the number of dimensions, from p to some smaller value, for ease of interpretation.
o Can be used to reduce the number of X variables, as not all variables will have a high impact on (correlation with) the new Z variables.

These principal components can be used:
o As indices in further analyses, like factor analysis.
o As indices to replace a number of variables; e.g., PC1 may represent soil productivity, as measured by many variables. These can then be graphed, used in further analyses, etc., and these new variables will be uncorrelated.
o In graphs, for simpler views, since the new Z variables are uncorrelated.
o As a first step in an analysis, to aid in reducing the number of variables. (More on this later.)
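
A sketch of the rank-deficient case (illustrative data, numpy assumed): when one X variable is an exact linear combination of the others, the covariance matrix is singular and one eigenvalue is (numerically) zero, while the eigenvalues still sum to the total variance.

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = 2.0 * x1 - 0.5 * x2          # exactly linearly dependent on x1 and x2
X = np.column_stack([x1, x2, x3])

Sigma_x = np.cov(X, rowvar=False)
lam = np.linalg.eigvalsh(Sigma_x)[::-1]

print(np.linalg.det(Sigma_x))         # ~0: singular, not of full rank
print(lam)                            # smallest eigenvalue ~0: that PC has no variance
print(lam.sum(), np.trace(Sigma_x))   # eigenvalues still sum to the total variance
```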

How do we find the eigenvectors and eigenvalues?

Each eigenvector is a (p x 1) column of multipliers, which we will label v1 (any column of V); then:
o v1 will be a (p x 1) transformation matrix (a column vector, actually).
o We want to use this eigenvector (the multipliers) to rotate the axes (transform the X variables), but we do not want a stretch. Constrain v1'v1 = 1.
o We want this rotation to maximize the variance of the first principal component.

The problem, then, is to maximize v1' Sx v1 subject to preventing a stretch by using the constraint v1'v1 = 1. This means the eigenvectors will be normal, with a length of 1 (no stretch).

NOTE: Instead of the corrected sums of squares and cross products matrix, Sx, we can use the covariance matrix for X, Σx, or we can use the correlation matrix for X.

We can use Lagrangian multipliers to get the solution (the eigenvector):

F = v1' Sx v1 - λ (v1'v1 - 1)

To maximize this function subject to the constraint, we first take partial derivatives with respect to v1:

∂F/∂v1 = 2 Sx v1 - 2 λ v1

Then, set this equal to a zero matrix (a (p x 1) matrix of zeros) and divide through by 2 to get:

  Sx      v1   =  λ  v1        i.e.    (Sx - λ Ip) v1 = 0
(p x p) (p x 1)    (p x 1)

The solution for this will give the maximum of v1' Sx v1 subject to preventing a stretch through the constraint v1'v1 = 1.
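
A numerical illustration of the maximization (a sketch, not part of the notes; numpy assumed): compare v1' Sx v1 for the leading eigenvector of the 2 x 2 example introduced below against the same quadratic form evaluated at many random unit-length vectors.

```python
import numpy as np

rng = np.random.default_rng(4)
Sx = np.array([[10.0, 12.0],
               [12.0, 20.0]])            # the 2 x 2 example matrix used in the next section

lam, V = np.linalg.eigh(Sx)
v1 = V[:, np.argmax(lam)]                # eigenvector with the largest eigenvalue

best_random = 0.0
for _ in range(10_000):                  # random unit vectors v with v'v = 1
    v = rng.normal(size=2)
    v /= np.linalg.norm(v)
    best_random = max(best_random, v @ Sx @ v)

print(v1 @ Sx @ v1)     # equals the largest eigenvalue (28 for this matrix)
print(best_random)      # never exceeds it
```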

The solution could be to let v1 be a column of zeros, but that is not very useful (it is called the trivial solution). For a better solution, let:

|Sx - λ Ip| = 0

This will give us the characteristic equation for the matrix Sx. Then we can solve for λ. There will be more than one possible value; these will be the eigenvalues. Then, substitute each eigenvalue into:

(Sx - λ Ip) v = 0

and solve to obtain the eigenvector associated with that eigenvalue.

Example: p = 2 (easy to get the determinant of a matrix of this size)

Sx = [ 10  12 ]        λ Ip = [ λ  0 ]
     [ 12  20 ]               [ 0  λ ]

Sx - λ Ip = [ 10 - λ   12     ]
            [ 12       20 - λ ]

Then:

|Sx - λ Ip| = (10 - λ)(20 - λ) - (12)(12)

(since the rule is (ad - bc) for the determinant of a 2 x 2 matrix), which is

200 - 10λ - 20λ + λ² - 144 = λ² - 30λ + 56

We can solve this using the formula for any quadratic equation: for ax² + bx + c = 0,

x = [-b ± sqrt(b² - 4ac)] / (2a)

For this problem the x is λ, with a = 1, b = -30 and c = 56. We then get λ1 = 28 and λ2 = 2:

λ1 = [-(-30) + sqrt((-30)² - 4(1)(56))] / (2(1)) = (30 + 26)/2 = 28
λ2 = [-(-30) - sqrt((-30)² - 4(1)(56))] / (2(1)) = (30 - 26)/2 = 2

We have two non-zero eigenvalues since there were two X-variables, and the matrix was of full rank, with a non-zero determinant. Proof:

|Sx| = (10)(20) - (12)(12) = 56

The eigenvalue matrix is then:

Λ = [ λ1   0  ] = [ 28   0 ]
    [ 0   λ2  ]   [ 0    2 ]

We need the eigenvectors. Using the first eigenvalue, and letting H = (Sx - λ1 Ip), the system

(Sx - λ1 Ip) v1 = 0    becomes    H v1 = 0
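
A check of this worked example (numpy assumed): the roots of λ² - 30λ + 56 match what np.linalg.eigvalsh returns for the same matrix.

```python
import numpy as np

Sx = np.array([[10.0, 12.0],
               [12.0, 20.0]])

# Characteristic polynomial |Sx - lambda*I| = lambda^2 - 30*lambda + 56
coeffs = [1.0, -np.trace(Sx), np.linalg.det(Sx)]   # [1, -30, 56] for a 2x2 matrix
print(np.roots(coeffs))                            # 28 and 2 (quadratic-formula roots)

print(np.linalg.eigvalsh(Sx))                      # same eigenvalues, ascending order: [2, 28]
```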

We can work from the cofactor form of the inverse of the H matrix, H^(-1) = (cof H)' / |H|:

Step 1: Find the cofactor (also called adjunct) matrix for H, using the formula given earlier:

cij = (-1)^(i+j) |Mij|

for each element of the cofactor of H; then get the transpose of this. Note: all the columns of H will be proportional to each other.

Step 2: Calculate the length of any column of cof H, the length of that vector (that column):

length = sqrt( sum over i = 1..p of ci² )

This prevents the stretch: the length of the eigenvector will be equal to 1.

Step 3: Divide the selected column of cof H by the length of that vector, and this will be v1, the first eigenvector. Since the columns of cof H are proportional, any column will give the same result.

Step 4: Repeat the steps with each of the eigenvalues.

For the example:

H = [ 10 - λ   12     ]
    [ 12       20 - λ ]

For λ1 = 28,

H = [ 10 - 28   12      ] = [ -18   12 ]
    [ 12        20 - 28 ]   [ 12    -8 ]

cof H = [ -8   -12 ]        (cof H)' = [ -8   -12 ]
        [ -12  -18 ]                   [ -12  -18 ]

e.g., for i = 1, j = 1: c11 = (-1)^(1+1) (-8) = -8. For a 2 x 2 matrix, the submatrices Mij are a single value (1 x 1).

Then, we can use either column 1 or column 2. Using column 1, the length of this vector is:

sqrt( (-8)² + (-12)² ) = sqrt(208) = 14.42

Then, we can calculate the eigenvector as:

v1 = [ -8/14.42  ] = [ -0.5547 ]
     [ -12/14.42 ]   [ -0.8321 ]

Double check this: Sx v1 = [10(-0.5547) + 12(-0.8321), 12(-0.5547) + 20(-0.8321)]' = [-15.53, -23.30]' = 28 v1 = λ1 v1. So this works!
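
A sketch of the same cofactor calculation in code (numpy assumed), confirming that the normalized column of cof H matches the eigenvector numpy returns for λ = 28 (up to sign):

```python
import numpy as np

Sx = np.array([[10.0, 12.0],
               [12.0, 20.0]])
lam1 = 28.0

H = Sx - lam1 * np.eye(2)                      # H = Sx - lambda1 * I = [[-18, 12], [12, -8]]
cofH = np.array([[ H[1, 1], -H[1, 0]],
                 [-H[0, 1],  H[0, 0]]])        # cofactor matrix of a 2x2: [[-8, -12], [-12, -18]]

v1 = cofH[:, 0] / np.linalg.norm(cofH[:, 0])   # normalize column 1 -> [-0.5547, -0.8321]
print(v1)

print(np.allclose(Sx @ v1, lam1 * v1))         # double check: Sx v1 = lambda1 v1
print(np.linalg.eigh(Sx)[1][:, 1])             # numpy's eigenvector for 28 (same direction, possibly opposite sign)
```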

Try column 2 instead. The length of column 2 is:

sqrt( (-12)² + (-18)² ) = sqrt(468) = 21.63

v1 = [ -12/21.63 ] = [ -0.5547 ]
     [ -18/21.63 ]   [ -0.8321 ]

Same result; we can use either column.

We must then repeat this for the other eigenvalue. For λ2 = 2,

H = [ 10 - 2   12     ] = [ 8    12 ]
    [ 12       20 - 2 ]   [ 12   18 ]

Then, solve for v2. [not shown]

Properties of the PCA:

1. Eigenvectors will be orthonormal: each column has a length of 1, and if we multiply any two columns, we will get 0.
2. If the covariance of the X variables is used, the sum of the eigenvalues will be equal to the sum of the diagonal elements (the trace) of the covariance matrix for the X-variables, Σx, so the variance is preserved.
3. The eigenvalues are the variances of the principal components, and are ordered from largest to smallest, from PC1 to the last principal component.
4. The first axis, defined as principal component 1, will explain the most variation of the original X-variables; the next axis will explain the next most variance, etc. In graphical form, the first axis is the major axis of the ellipse, etc.
5. The rotation will be a rigid rotation (just a rotation), produced by taking a linear transformation of the X-variables. The transformation matrix used (the eigenvectors) is orthonormal.
6. All principal components are independent of each other. The covariance for any pair of principal component scores (the new variables created) will be 0.
7. If you multiply the eigenvalues together, you get the determinant of the matrix for the X variables (the Sx matrix, Σx, or the correlation matrix, depending on which was used in the PCA).
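
A quick check of properties 1, 2, and 7 on the 2 x 2 example above (numpy assumed):

```python
import numpy as np

Sx = np.array([[10.0, 12.0],
               [12.0, 20.0]])

lam, V = np.linalg.eigh(Sx)

print(np.allclose(V.T @ V, np.eye(2)))   # property 1: eigenvectors are orthonormal
print(lam.sum(), np.trace(Sx))           # property 2: eigenvalues sum to the trace (28 + 2 = 30)
print(lam.prod(), np.linalg.det(Sx))     # property 7: product of eigenvalues = determinant (28 * 2 = 56)
```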

Using the correlation matrix instead of the covariance matrix for the X-variables:

1. The correlation matrix of the X-variables is the same as the covariance matrix of the normalized X-variables (subtract the means and divide by the standard deviations).
2. Therefore, we could use 1) the normalized X-variables and do a PCA using the covariance matrix of the normalized X-variables, OR 2) the correlation matrix of the original X-variables in a PCA; both will give the same result.
3. When a correlation matrix (or a covariance matrix for the standardized X-variables) is used instead of a covariance matrix (for the original X-variables):
   a. The eigenvalues will sum to the number of X-variables, p, since the diagonal values of the correlation matrix are all equal to 1.
   b. Using the correlation matrix removes the differences in measurement scales among the X-variables.
4. Commonly, the correlation matrix is used in PCA.
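
A sketch of points 1-3 above (illustrative data, numpy assumed): the correlation matrix equals the covariance matrix of the standardized variables, and its eigenvalues sum to p.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 4)) * np.array([1.0, 10.0, 100.0, 0.1])   # very different measurement scales

R = np.corrcoef(X, rowvar=False)                    # correlation matrix of the original X's
Xstd = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
C_std = np.cov(Xstd, rowvar=False)                  # covariance matrix of the standardized X's

print(np.allclose(R, C_std))                        # same matrix, so both give the same PCA
print(np.linalg.eigvalsh(R).sum())                  # eigenvalues sum to p = 4
```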

Application of PCA

The idea behind the application of PCA to a problem is to transform the original set of variables into another set (the principal components, Zj). Principal components are such that:
1. They are linear functions of the original variables.
2. They are orthogonal.
3. There are as many principal components as original variables.
4. The total variation of the principal components is equal to that originally present.
5. The variation associated with each component decreases in order.

Distinctions between PCA and Factor Analysis:
1. In factor analysis, the "p" variables are transformed into m < p factors.
2. Factor analysis has the potential for rotating the axes to an oblique position.
3. In factor analysis, only a portion of the variation is attributable to the m factors (the communalities); the remainder is considered error.

Uses of PCA:

1. Dimension reduction: e.g., represent the original variables by 3 principal components. Use the principal components in further analyses. For example, the objective of one study was to predict site index (growing potential) from many soil and vegetation variables. Regression analysis will be used to create a prediction equation. However, before the regression analysis, PCA is used on the soil and vegetation variables. The new principal component variables (linear combinations of the original soil and vegetation variables) are used to predict the site index, instead of the original variables. Since the principal component variables are orthogonal (the variables are independent), the coefficients of the multiple regression analysis can be interpreted directly. However, interpretation may still be difficult since the principal components are linear combinations of the original variables. (A sketch of this PCA-then-regression workflow follows this list.) Another objective was to see if underlying factors could be identified for the soil and vegetation data. PCA was used as the first step, to reduce the dimensionality to a smaller number of principal components, followed by factor analysis (more on this with factor analysis).

2. Variable reduction: Look for redundancies in the data, and in further analysis use the reduced set of variables. For the site index example, the PCA could be used to identify which soil and vegetation variables contribute most to the variance of these variables. Using this knowledge, some of the variables are dropped. The remaining set is used to predict site index. Since the variables are not likely orthogonal (called multicollinearity in regression textbooks), the regression coefficients cannot be interpreted directly. However, interpretation may be easier since a reduced set of the original variables is used in the analysis.
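
A minimal sketch of the PCA-then-regression idea described in item 1 (all variable names and data here are hypothetical; numpy only):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 80
soil_veg = rng.normal(size=(n, 6)) @ rng.normal(size=(6, 6))     # hypothetical correlated soil/vegetation variables
site_index = soil_veg @ rng.normal(size=6) + rng.normal(size=n)  # hypothetical response

# Step 1: PCA on the standardized predictor variables (correlation matrix)
Xs = (soil_veg - soil_veg.mean(axis=0)) / soil_veg.std(axis=0, ddof=1)
lam, V = np.linalg.eigh(np.corrcoef(soil_veg, rowvar=False))
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

k = 3                                   # keep the first k components
Z = Xs @ V[:, :k]                       # orthogonal principal component scores

# Step 2: ordinary least squares of site index on the PC scores
Zc = np.column_stack([np.ones(n), Z])   # intercept + scores
beta, *_ = np.linalg.lstsq(Zc, site_index, rcond=None)
print(beta)                             # coefficients can be read one PC at a time (scores are uncorrelated)
```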

3. Ordination: Plot principal components against each other to look for trends. Used particularly with multiple discriminant analysis and cluster analysis. The principal components for the soil and vegetation data could be plotted; using these bivariate plots, groups of similar soil/vegetation data points could be identified.

Steps in PCA:
1. Selection of preliminary variables; these should be interval or ratio scale, otherwise the C or R matrices have no real meaning.
2. Transform the data so that the data are multivariate normal. This step is not strictly required. It is needed in order to use tests of significance of the eigenvalues, but may make interpretation difficult.
3. Calculate either the variance/covariance matrix (C) or the correlation matrix (R). Solutions will differ depending on which matrix you select. The variance/covariance matrix is best if the units are on the same scale. If the variables have units of different scales, the correlation matrix is preferred (the trace of R sums to p). It is easier to interpret the results if the variance/covariance matrix is used. REMEMBER: The correlation matrix (R) is simply the covariance matrix for z-scores (variables scaled by subtracting the mean and dividing by the standard deviation).
4. Determine the eigenvectors and eigenvalues of the selected matrix.
5. Interpret the derived components.

(1) Relate principal components to observed variables. Calculate component loadings (the correlation between the original variables and the principal components).

(2) Decide how many principal components are required (see paper on this) by:
   a. Tests of significance. Only applicable if the data are multivariate normal.
   b. Setting the value for eigenvalues to include to some arbitrary value. If R was used, a value of > 1 is commonly used. If the C matrix was used, set the number as a percent of total variation (i.e., to a cumulative percent of 95).
   c. The scree test: Plot the eigenvalue versus the principal component number. Look for a trend where the additional eigenvalues are small. Problems: there may be no obvious break, or there may be many breaks.

(3) Reducing the number of variables. Choose variables with the highest eigenvector values if the variables are on the same scale. For variables not on the same scale, base the selection on the component loadings (simple correlations). There may be more than one variable for each principal component.

[See handout on different methods to select the number of principal components to retain.]
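
A sketch of these interpretation aids (illustrative data, numpy assumed): component loadings as correlations between the original variables and the scores, plus the eigenvalue-greater-than-1 and cumulative-percent rules for deciding how many components to keep (the printed eigenvalues are also what a scree plot would chart against the component number).

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))   # illustrative correlated data

R = np.corrcoef(X, rowvar=False)
lam, V = np.linalg.eigh(R)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
Z = Xs @ V                                                # principal component scores

# Component loadings: correlation of each original variable with each PC
loadings = np.array([[np.corrcoef(X[:, j], Z[:, k])[0, 1] for k in range(5)] for j in range(5)])
print(np.round(loadings, 3))

# How many components to keep?
print(lam)                              # eigenvalue > 1 rule (R matrix was used); scree plot would chart these
print(np.cumsum(lam) / lam.sum())       # cumulative percent of total variation (e.g., stop at 0.95)
```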
