Eigenvalues, Eigenvectors, and an Intro to PCA


Changing Basis We've talked so far about re-writing our data using a new set of variables, or a new basis. How do we choose this new basis?


PCA One (very popular) method: start by choosing the basis vectors as the directions in which the variance of the data is maximal. [Figure: scatter plot of weight (w) vs. height (h) with the first such direction, v1, drawn through the point cloud.]

PCA Then, choose subsequent directions that are orthogonal to the first and have the next largest variance. [Figure: the same scatter plot with a second direction, v2, drawn orthogonal to v1.]


PCA As it turns out, these directions are given by the eigenvectors of either: the Covariance Matrix (if the data is only centered) or the Correlation Matrix (if the data is completely standardized). Great. What the heck is an eigenvector?

Eigenvalues and Eigenvectors


Effect of Matrix Multiplication When we multiply a matrix by a vector, what do we get? Ax = b: another vector. When the matrix A is square (n × n), then x and b are both vectors in R^n. We can draw (or imagine) them in the same space.


Linear Transformation In general, multiplying a vector by a matrix changes both its magnitude and direction. A = [1 1; 6 0], x = (1, -5), Ax = (-4, 6). [Figure: x and Ax plotted against span(e1) and span(e2); they point in different directions and have different lengths.]
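To make this concrete, here is a quick NumPy check (a sketch; the vector follows the reconstruction above and is purely illustrative):

    import numpy as np

    A = np.array([[1, 1],
                  [6, 0]])
    x = np.array([1, -5])

    print(A @ x)   # [-4  6]: a new vector with a different magnitude and direction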


Eigenvectors However, a matrix may act on certain vectors by changing only their magnitude, not their direction (or possibly reversing it). A = [1 1; 6 0], x = (1, -3), Ax = (-2, 6). [Figure: x and Ax lie along the same line through the origin, plotted against span(e1) and span(e2).] The matrix A acts like a scalar!


Eigenvalues and Eigenvectors For a square matrix A, a non-zero vector x is called an eigenvector of A if multiplying x by A results in a scalar multiple of x: Ax = λx. The scalar λ is called the eigenvalue associated with the eigenvector.


Previous Example A = [1 1; 6 0], x = (1, -3). Ax = (-2, 6) = -2 (1, -3) = -2x, so λ = -2 is the eigenvalue. [Figure: Ax and x lie along the same line, pointing in opposite directions.]
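A quick numerical check of this eigenpair (a NumPy sketch using the values as reconstructed above):

    import numpy as np

    A = np.array([[1, 1],
                  [6, 0]])
    x = np.array([1, -3])

    print(A @ x)                        # [-2  6], which is -2 times x
    print(np.allclose(A @ x, -2 * x))   # True: x is an eigenvector with eigenvalue -2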


Example 2 Show that x is an eigenvector of A and find the corresponding eigenvalue. A = [1 1; 6 0], x = (1, 2). Ax = (3, 6) = 3 (1, 2) = 3x, so λ = 3 is the eigenvalue corresponding to the eigenvector x. [Figure: x and Ax plotted against span(e1) and span(e2); Ax points in the same direction as x, three times as long.]



Let's Practice 1 Show that v is an eigenvector of A and find the corresponding eigenvalue: (a) A = [1 2; 2 1], v = (3, 3); (b) A = [1 1; 6 0], v = (1, 2). 2 Can a rectangular matrix have eigenvalues/eigenvectors?


Eigenspaces Fact: Any scalar multiple of an eigenvector of A is also an eigenvector of A with the same eigenvalue. A = [1 1; 6 0], x = (1, -3), λ = -2. Try v = (2, -6) or u = (-1, 3) or z = (3, -9). In general (proof): Let Ax = λx. If c is some constant, then: A(cx) = c(Ax) = c(λx) = λ(cx).
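The same check passes for any scalar multiple (a NumPy sketch; the constants c below are arbitrary):

    import numpy as np

    A = np.array([[1, 1],
                  [6, 0]])
    x = np.array([1, -3])   # eigenvector of A with eigenvalue -2
    lam = -2

    for c in [2.0, -1.0, 3.0]:
        v = c * x                             # any nonzero scalar multiple of x
        print(np.allclose(A @ v, lam * v))    # True each time: cv is also an eigenvector for lam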


Eigenspaces For a given matrix, there are infinitely many eigenvectors associated with any one eigenvalue. Any scalar multiple (positive or negative) can be used. The collection is referred to as the eigenspace associated with the eigenvalue. In the previous example, the eigenspace of λ = -2 was span{x = (1, -3)}. With this in mind, what should you expect from software?


Zero Eigenvalues What if λ = 0 is an eigenvalue for some matrix A? Ax = 0x = 0, where x ≠ 0 is an eigenvector. This means that some linear combination of the columns of A equals zero! ⇒ Columns of A are linearly dependent. ⇒ A is not full rank. ⇒ Perfect multicollinearity.
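A small NumPy illustration of this fact (the matrix below is made up: its third column is the sum of the first two, so the columns are linearly dependent):

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [4., 5., 9.],
                  [7., 8., 15.]])   # third column = first column + second column

    vals, vecs = np.linalg.eig(A)
    print(np.round(vals, 10))            # one eigenvalue is (numerically) zero
    print(np.linalg.matrix_rank(A))      # 2, not 3: A is not full rank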


Eigenpairs For a square (n × n) matrix A, eigenvalues and eigenvectors come in pairs. Normally, the pairs are ordered by magnitude of eigenvalue: λ1 ≥ λ2 ≥ λ3 ≥ ... ≥ λn, so λ1 is the largest eigenvalue. Eigenvector vi corresponds to eigenvalue λi.
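One practical consequence: numerical routines do not promise any particular order, so the eigenpairs are usually sorted after the fact. A minimal NumPy sketch (not from the slides):

    import numpy as np

    A = np.array([[1., 1.],
                  [6., 0.]])
    vals, vecs = np.linalg.eig(A)            # no guaranteed ordering

    order = np.argsort(np.abs(vals))[::-1]   # largest magnitude first
    vals = vals[order]
    vecs = vecs[:, order]                    # column i of vecs pairs with vals[i]
    print(vals)                              # [ 3. -2.]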


Let's Practice 1 For the following matrix, determine the eigenvalue associated with the given eigenvector. A = [1 -1 0; 0 1 -1; -1 0 1], v = (1, 1, 1). From this eigenvalue, what can you conclude about the matrix A? 2 The matrix M has eigenvectors u and v. What is λ1, the first eigenvalue for the matrix M? M = [1 1; 6 0], u = (1, -3), v = (1, 2). 3 For the previous problem, is the specific eigenvector (u or v) the only eigenvector associated with λ1?


Diagonalization What happens if I put all the eigenvectors of A into a matrix as columns, V = [v1 v2 v3 ... vn], and then compute AV? AV = A[v1 v2 v3 ... vn] = [Av1 Av2 Av3 ... Avn] = [λ1v1 λ2v2 λ3v3 ... λnvn]. The effect is that the columns are all scaled by the corresponding eigenvalue.


Diagonalization AV = [λ1v1 λ2v2 λ3v3 ... λnvn]. The same thing would happen if we multiplied V by a diagonal matrix of eigenvalues on the right, D = diag(λ1, λ2, ..., λn): VD = [v1 v2 v3 ... vn] diag(λ1, λ2, ..., λn) = [λ1v1 λ2v2 λ3v3 ... λnvn]. ⇒ AV = VD


Diagonalization AV = VD. If the matrix A has n linearly independent eigenvectors, then the matrix V has an inverse, so V⁻¹AV = D. This is called the diagonalization of A. It is only possible when A has n linearly independent eigenvectors (so that the matrix V has an inverse).
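A short NumPy check of the diagonalization for the running example (a sketch, not part of the slides):

    import numpy as np

    A = np.array([[1., 1.],
                  [6., 0.]])
    vals, V = np.linalg.eig(A)          # columns of V are eigenvectors of A
    D = np.diag(vals)

    print(np.allclose(A @ V, V @ D))                 # True: AV = VD
    print(np.round(np.linalg.inv(V) @ A @ V, 10))    # the diagonal matrix D of eigenvalues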


Let's Practice For the matrix A = [0 4; -1 5]: a. The pairs λ1 = 4, v1 = (1, 1) and λ2 = 1, v2 = (4, 1) are eigenpairs of A. For each pair, provide another eigenvector associated with that eigenvalue. b. Based on the eigenvectors listed in part a, can the matrix A be diagonalized? Why or why not? If diagonalization is possible, explain how it would be done.


Symmetric Matrices - Properties Symmetric matrices (like the covariance matrix, the correlation matrix, and XᵀX) have some very nice properties: 1 They are always diagonalizable (they always have n linearly independent eigenvectors). 2 Their eigenvectors are all mutually orthogonal. 3 Thus, if you normalize the eigenvectors (software does this automatically) they will form an orthogonal matrix V. ⇒ AV = VD and VᵀAV = D.
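These properties are easy to verify numerically with NumPy's symmetric-matrix routine (the matrix S below is just a made-up example):

    import numpy as np

    S = np.array([[4., 2.],
                  [2., 3.]])            # a symmetric matrix

    vals, V = np.linalg.eigh(S)         # eigh is intended for symmetric matrices; V has orthonormal columns

    print(np.allclose(V.T @ V, np.eye(2)))   # True: V is an orthogonal matrix
    print(np.round(V.T @ S @ V, 10))         # diagonal: V^T S V = D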

Eigenvectors of Symmetric Matrices [Figure: the two eigenvectors of a symmetric 2 × 2 matrix plotted on axes running from -5 to 5; they are perpendicular to each other.]

Introduction to Principal Components Analysis (PCA)


Eigenvectors of the Covariance/Correlation Matrices Covariance/correlation matrices are symmetric. ⇒ Eigenvectors are orthogonal. Eigenvectors are ordered by the magnitude of their eigenvalues: λ1 ≥ λ2 ≥ ... ≥ λp, with corresponding eigenvectors {v1, v2, ..., vp}.


Covariance vs. Correlation We'll get into more detail later, but: For the covariance matrix, we want to think of our data as centered to begin with. For the correlation matrix, we want to think of our data as standardized to begin with (i.e. centered and divided by standard deviation).
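In code the distinction is just whether you only center the columns or fully standardize them. A hedged NumPy sketch with simulated data:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2)) * [1.0, 10.0]   # two variables on very different scales
    n = len(X)

    Xc = X - X.mean(axis=0)                 # centered data
    Xs = Xc / X.std(axis=0, ddof=1)         # standardized data

    cov = (Xc.T @ Xc) / (n - 1)             # covariance matrix
    corr = (Xs.T @ Xs) / (n - 1)            # correlation matrix

    print(np.allclose(cov, np.cov(X, rowvar=False)))        # True
    print(np.allclose(corr, np.corrcoef(X, rowvar=False)))  # True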

Direction of Maximal Variance The first eigenvector of a covariance/correlation matrix points in the direction of maximum variance in the data. This eigenvector is the first principal component. [Figure: scatter plot of weight (w) vs. height (h) with the first principal direction, v1, drawn through the point cloud.]
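A minimal sketch of finding this direction (NumPy; the height/weight data is simulated, since the slide's data set isn't available):

    import numpy as np

    rng = np.random.default_rng(1)
    height = rng.normal(170, 10, size=200)
    weight = 0.9 * height + rng.normal(0, 5, size=200)   # correlated with height
    X = np.column_stack([height, weight])

    C = np.cov(X, rowvar=False)          # 2 x 2 covariance matrix
    vals, vecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    v1 = vecs[:, -1]                     # eigenvector with the largest eigenvalue
    print(v1)                            # the first principal component direction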


Not a Regression Line The first principal component is not the same direction as the regression line. It minimizes orthogonal distances, not vertical distances. [Figure: the weight/height scatter plot with v1; distances from the points to the line are measured perpendicular to v1.]

Not a Regression Line It may be close in some cases, but it's not the same. [Figure: a scatter plot of x1 vs. x2 showing both the principal component direction and the regression line.]
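To see the difference numerically, the two slopes can be computed side by side (a sketch with simulated data; the exact numbers are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    x1 = rng.normal(size=300)
    x2 = 0.5 * x1 + rng.normal(scale=0.3, size=300)
    X = np.column_stack([x1, x2])

    # Slope of the first principal component (minimizes orthogonal distances)
    v1 = np.linalg.eigh(np.cov(X, rowvar=False))[1][:, -1]
    pc_slope = v1[1] / v1[0]

    # Slope of the regression line of x2 on x1 (minimizes vertical distances)
    reg_slope = np.cov(x1, x2)[0, 1] / np.var(x1, ddof=1)

    print(pc_slope, reg_slope)   # similar here, but not equal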

Secondary Directions The second eigenvector of a covariance matrix points in the direction, orthogonal to the first, that has the maximum remaining variance. [Figure: the weight/height scatter plot with the second direction, v2, drawn orthogonal to v1.]

A New Basis Principal components provide us with a new orthogonal basis where the new coordinates of the data points are uncorrelated.

Variable Loadings Each principal component is a linear combination of the original variables: v1 = (0.7, 0.7), i.e. 0.7h + 0.7w, and v2 = (-0.7, 0.7), i.e. -0.7h + 0.7w. These coefficients are called loadings (factor loadings). [Figure: the weight/height scatter plot with v2 drawn.]

Combining PCs Likewise, we can think of our original variables as linear combinations of the principal components, with coordinates in the new basis: h = (0.7, -0.7), i.e. h = 0.7v1 - 0.7v2, and w = (0.7, 0.7), i.e. w = 0.7v1 + 0.7v2. [Figure: the weight/height scatter plot.]

The Biplot Uncorrelated data points and variable vectors plotted on the same set of axes! Points in the top right have the largest weight values. Points in the top left have the smallest height values. [Figure: biplot with the variable arrows for w and h overlaid on the transformed data points.]


Variable Loadings The variable loadings give us a formula for finding the coordinates of our data in the new basis: v1 = (0.7, 0.7), i.e. PC1 = 0.7 height + 0.7 weight, and v2 = (-0.7, 0.7), i.e. PC2 = -0.7 height + 0.7 weight. From these loadings, we can get a sense of how important each original variable is to our new variables (the components).
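In matrix form, the coordinates in the new basis (the scores) come from multiplying the centered data by the matrix of loadings. A hedged NumPy sketch with simulated height/weight data:

    import numpy as np

    rng = np.random.default_rng(3)
    height = rng.normal(170, 10, size=200)
    weight = 0.9 * height + rng.normal(0, 5, size=200)
    X = np.column_stack([height, weight])

    Xc = X - X.mean(axis=0)                   # PCA on the covariance matrix assumes centered data
    vals, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    V = V[:, ::-1]                            # reorder so that PC1 (largest eigenvalue) comes first

    scores = Xc @ V                           # coordinates of each observation in the new basis
    print(np.round(np.cov(scores, rowvar=False), 8))   # essentially diagonal: the new coordinates are uncorrelated

The variances of the score columns are the eigenvalues, which is why the first component captures the most variance. For two standardized variables, the eigenvectors of the correlation matrix are (1, 1)/√2 and (-1, 1)/√2 (about ±0.71), which is presumably where the slide's 0.7 loadings come from.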

Let's Practice 1 For the following data plot, take your best guess and draw the directions of the first and second principal components. 2 Is there more than one answer to this question?

Let's Practice Suppose your data contained the variables VO2_max, mile pace, and weight, in that order. The first principal component for this data is the eigenvector of the covariance matrix (0.92, 0.82, 0.51). 1 What variable is most important/influential to the first principal component?