Eigenvalues, Eigenvectors, and an Intro to PCA

1 Eigenvalues, Eigenvectors, and an Intro to PCA

2 Changing Basis We've talked so far about re-writing our data using a new set of variables, or a new basis. How to choose this new basis?

3 PCA One (very popular) method: start by choosing the basis vectors as the directions in which the variance of the data is maximal. [Scatterplot of the data: weight vs. height]

4 PCA One (very popular) method: start by choosing the basis vectors as the directions in which the variance of the data is maximal. [Scatterplot of w vs. h with the first direction v1 drawn in]

5 PCA Then, choose subsequent directions that are orthogonal to the first and have the next largest variance. [Scatterplot of w vs. h with the second direction v2 drawn in]

6 PCA As it turns out, these directions are given by the eigenvectors of either: The Covariance Matrix (data is only centered) The Correlation Matrix (data is completely standardized)

7 PCA As it turns out, these directions are given by the eigenvectors of either: The Covariance Matrix (data is only centered) The Correlation Matrix (data is completely standardized) Great. What the heck is an eigenvector?

8 Eigenvalues and Eigenvectors

9 Effect of Matrix Multiplication When we multiply a matrix by a vector, what do we get? Ax = ?

10 Effect of Matrix Multiplication When we multiply a matrix by a vector, what do we get? Ax = b. Another vector.

11 Effect of Matrix Multiplication When we multiply a matrix by a vector, what do we get? Ax = b. Another vector. When the matrix A is square (n × n), then x and b are both vectors in R^n. We can draw (or imagine) them in the same space.
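For instance, here is a quick NumPy check of this idea (the matrix and vector below are made up for illustration; they are not from the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # a square (2 x 2) matrix
x = np.array([1.0, 4.0])     # a vector in R^2

b = A @ x                    # matrix-vector product: another vector in R^2
print(b)                     # [ 6. 12.]
```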

12 Linear Transformation In general, multiplying a vector by a matrix changes both its magnitude and direction. A = [-1 1; 6 0], x = (1, 5), Ax = (4, 6)

13 Linear Transformation In general, multiplying a vector by a matrix changes both its magnitude and direction. A = [-1 1; 6 0], x = (1, 5), Ax = (4, 6) [Plot: x and Ax drawn against span(e1) and span(e2)]

14 Eigenvectors However, a matrix may act on certain vectors by changing only their magnitude, not their direction (or possibly reversing it): A = [-1 1; 6 0], x = (1, 3), Ax = (2, 6)

15 Eigenvectors However, a matrix may act on certain vectors by changing only their magnitude, not their direction (or possibly reversing it): A = [-1 1; 6 0], x = (1, 3), Ax = (2, 6) [Plot: x and Ax drawn against span(e1) and span(e2)] The matrix A acts like a scalar!

16 Eigenvalues and Eigenvectors For a square matrix, A, a non-zero vector x is called an eigenvector of A if multiplying x by A results in a scalar multiple of x. Ax = λx

17 Eigenvalues and Eigenvectors For a square matrix, A, a non-zero vector x is called an eigenvector of A if multiplying x by A results in a scalar multiple of x. Ax = λx The scalar λ is called the eigenvalue associated with the eigenvector.
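As a concrete numerical check of the definition Ax = λx (again with an illustrative matrix, not one from the slides):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
x = np.array([1.0, 1.0])          # candidate eigenvector

Ax = A @ x                        # [5. 5.] -- a scalar multiple of x
lam = Ax[0] / x[0]                # the scalar lambda = 5
print(np.allclose(Ax, lam * x))   # True: x is an eigenvector with eigenvalue 5
```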

18 Previous Example A = [-1 1; 6 0], x = (1, 3), Ax = (2, 6)

19 Previous Example A = [-1 1; 6 0], x = (1, 3), Ax = (2, 6) = 2(1, 3) = 2x ⇒ λ = 2 is the eigenvalue [Plot: x and Ax drawn against span(e1) and span(e2)]

20 Example 2 Show that x is an eigenvector of A and find the corresponding eigenvalue. A = [1 1; 6 0], x = (1, 2)

21 Example 2 Show that x is an eigenvector of A and find the corresponding eigenvalue. A = [1 1; 6 0], x = (1, 2), Ax = (3, 6) = 3(1, 2) = 3x

22 Example 2 Show that x is an eigenvector of A and find the corresponding eigenvalue. A = [1 1; 6 0], x = (1, 2), Ax = (3, 6) = 3(1, 2) = 3x ⇒ λ = 3 is the eigenvalue corresponding to the eigenvector x

23 Example 2 A = [1 1; 6 0], x = (1, 2), Ax = 3x [Plot: x and Ax drawn against span(e1) and span(e2)]
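These by-hand checks can also be done numerically. The sketch below uses the Example 2 values as reconstructed above (treat the specific matrix entries as illustrative):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [6.0, 0.0]])        # entries as reconstructed above (illustrative)
x = np.array([1.0, 2.0])

print(A @ x)                       # [3. 6.] = 3 * x, so lambda = 3

# np.linalg.eig finds all eigenvalues/eigenvectors at once
vals, vecs = np.linalg.eig(A)
print(vals)                        # eigenvalues of A (order may vary), here 3 and -2
```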

24 Let's Practice 1 Show that v is an eigenvector of A and find the corresponding eigenvalue: A = ( ), v = (3, 3)

25 Let's Practice 1 Show that v is an eigenvector of A and find the corresponding eigenvalue: A = ( ), v = (3, 3); A = ( ), v = (1, 2)

26 Let's Practice 1 Show that v is an eigenvector of A and find the corresponding eigenvalue: A = ( ), v = (3, 3); A = ( ), v = (1, 2) 2 Can a rectangular matrix have eigenvalues/eigenvectors?

27 Eigenspaces Fact: Any scalar multiple of an eigenvector of A is also an eigenvector of A with the same eigenvalue: A = [-1 1; 6 0], x = (1, 3), λ = 2. Try v = (2, 6) or u = (-1, -3) or z = (3, 9).

28 Eigenspaces Fact: Any scalar multiple of an eigenvector of A is also an eigenvector of A with the same eigenvalue: A = [-1 1; 6 0], x = (1, 3), λ = 2. Try v = (2, 6) or u = (-1, -3) or z = (3, 9). In general (#proof): Let Ax = λx. If c is some constant, then: A(cx) = c(Ax) = c(λx) = λ(cx)

29 Eigenspaces For a given matrix, there are infinitely many eigenvectors associated with any one eigenvalue.

30 Eigenspaces For a given matrix, there are infinitely many eigenvectors associated with any one eigenvalue. Any scalar multiple (positive or negative) can be used.

31 Eigenspaces For a given matrix, there are infinitely many eigenvectors associated with any one eigenvalue. Any scalar multiple (positive or negative) can be used. The collection is referred to as the eigenspace associated with the eigenvalue.

32 Eigenspaces For a given matrix, there are infinitely many eigenvectors associated with any one eigenvalue. Any scalar multiple (positive or negative) can be used. The collection is referred to as the eigenspace associated with the eigenvalue. In the previous example, the eigenspace of λ = 2 was span{x = (1, 3)}.

33 Eigenspaces For a given matrix, there are infinitely many eigenvectors associated with any one eigenvalue. Any scalar multiple (positive or negative) can be used. The collection is referred to as the eigenspace associated with the eigenvalue. In the previous example, the eigenspace of λ = 2 was span{x = (1, 3)}. With this in mind, what should you expect from software?
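In practice, that question matters: numerical routines return one representative eigenvector from each eigenspace, usually scaled to unit length and with an arbitrary sign. A small NumPy illustration (matrix chosen arbitrarily):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 5.0]])
vals, vecs = np.linalg.eig(A)

# Columns of `vecs` are unit-length eigenvectors. Any nonzero multiple of a
# column (including a sign flip) is an equally valid eigenvector.
print(vals)                            # [2. 5.]
print(vecs)                            # here, the identity matrix
print(np.linalg.norm(vecs, axis=0))    # [1. 1.] -- columns are normalized
```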

34 Zero Eigenvalues What if λ = 0 is an eigenvalue for some matrix A?

35 Zero Eigenvalues What if λ = 0 is an eigenvalue for some matrix A? Ax = 0x = 0, where x ≠ 0 is an eigenvector.

36 Zero Eigenvalues What if λ = 0 is an eigenvalue for some matrix A? Ax = 0x = 0, where x ≠ 0 is an eigenvector. This means that some linear combination of the columns of A equals zero! ⇒ Columns of A are linearly dependent.

37 Zero Eigenvalues What if λ = 0 is an eigenvalue for some matrix A? Ax = 0x = 0, where x ≠ 0 is an eigenvector. This means that some linear combination of the columns of A equals zero! ⇒ Columns of A are linearly dependent. ⇒ A is not full rank.

38 Zero Eigenvalues What if λ = 0 is an eigenvalue for some matrix A? Ax = 0x = 0, where x ≠ 0 is an eigenvector. This means that some linear combination of the columns of A equals zero! ⇒ Columns of A are linearly dependent. ⇒ A is not full rank. ⇒ Perfect Multicollinearity.

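A hedged sketch of this connection, using a deliberately rank-deficient matrix (its third column is the sum of the first two):

```python
import numpy as np

A = np.array([[1.0, 2.0,  3.0],
              [4.0, 5.0,  9.0],
              [7.0, 8.0, 15.0]])   # column 3 = column 1 + column 2

vals = np.linalg.eigvals(A)
print(vals)                         # one eigenvalue is (numerically) zero
print(np.linalg.matrix_rank(A))     # 2 -- not full rank: perfect multicollinearity
```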

40 Eigenpairs For a square (n × n) matrix A, eigenvalues and eigenvectors come in pairs. Normally, the pairs are ordered by magnitude of eigenvalue: λ1 ≥ λ2 ≥ λ3 ≥ ... ≥ λn. λ1 is the largest eigenvalue, and eigenvector vi corresponds to eigenvalue λi.
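NumPy's eig does not promise this ordering, so in practice you sort the eigenpairs yourself; a minimal sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
vals, vecs = np.linalg.eig(A)

order = np.argsort(-np.abs(vals))    # indices that sort |lambda| in decreasing order
vals, vecs = vals[order], vecs[:, order]

print(vals)          # lambda_1 (largest magnitude) first
print(vecs[:, 0])    # v_1, the eigenvector paired with lambda_1
```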

41 Let's Practice 1 For the following matrix, determine the eigenvalue associated with the given eigenvector: A = ( ), v = ( ). From this eigenvalue, what can you conclude about the matrix A?

42 Let's Practice 1 For the following matrix, determine the eigenvalue associated with the given eigenvector: A = ( ), v = ( ). From this eigenvalue, what can you conclude about the matrix A? 2 The matrix M has eigenvectors u and v. What is λ1, the first eigenvalue for the matrix M? M = ( ), u = (1, 3), v = (1, 2)

43 Let's Practice 1 For the following matrix, determine the eigenvalue associated with the given eigenvector: A = ( ), v = ( ). From this eigenvalue, what can you conclude about the matrix A? 2 The matrix M has eigenvectors u and v. What is λ1, the first eigenvalue for the matrix M? M = ( ), u = (1, 3), v = (1, 2) 3 For the previous problem, is the specific eigenvector (u or v) the only eigenvector associated with λ1?

44 Diagonalization What happens if I put all the eigenvectors of A into a matrix as columns: V = [v1 v2 v3 ... vn] and then compute AV?

45 Diagonalization What happens if I put all the eigenvectors of A into a matrix as columns: V = [v1 v2 v3 ... vn] and then compute AV? AV = A[v1 v2 v3 ... vn]

46 Diagonalization What happens if I put all the eigenvectors of A into a matrix as columns: V = [v1 v2 v3 ... vn] and then compute AV? AV = A[v1 v2 v3 ... vn] = [Av1 Av2 Av3 ... Avn]

47 Diagonalization What happens if I put all the eigenvectors of A into a matrix as columns: V = [v1 v2 v3 ... vn] and then compute AV? AV = A[v1 v2 v3 ... vn] = [Av1 Av2 Av3 ... Avn] = [λ1v1 λ2v2 λ3v3 ... λnvn]

48 Diagonalization What happens if I put all the eigenvectors of A into a matrix as columns: V = [v1 v2 v3 ... vn] and then compute AV? AV = A[v1 v2 v3 ... vn] = [Av1 Av2 Av3 ... Avn] = [λ1v1 λ2v2 λ3v3 ... λnvn] The effect is that the columns are all scaled by the corresponding eigenvalue.

49 Diagonalization AV = [λ1v1 λ2v2 λ3v3 ... λnvn] The same thing would happen if we multiplied V by a diagonal matrix of eigenvalues on the right: VD = [v1 v2 v3 ... vn] diag(λ1, λ2, ..., λn) = [λ1v1 λ2v2 λ3v3 ... λnvn]

50 Diagonalization AV = [λ1v1 λ2v2 λ3v3 ... λnvn] The same thing would happen if we multiplied V by a diagonal matrix of eigenvalues on the right: VD = [v1 v2 v3 ... vn] diag(λ1, λ2, ..., λn) = [λ1v1 λ2v2 λ3v3 ... λnvn] ⇒ AV = VD

51 Diagonalization AV = VD

52 Diagonalization AV = VD If the matrix A has n linearly independent eigenvectors, then the matrix V has an inverse.

53 Diagonalization AV = VD If the matrix A has n linearly independent eigenvectors, then the matrix V has an inverse. ⇒ V⁻¹AV = D

54 Diagonalization AV = VD If the matrix A has n linearly independent eigenvectors, then the matrix V has an inverse. ⇒ V⁻¹AV = D This is called the diagonalization of A.

55 Diagonalization AV = VD If the matrix A has n linearly independent eigenvectors, then the matrix V has an inverse. ⇒ V⁻¹AV = D This is called the diagonalization of A. It is only possible when A has n linearly independent eigenvectors (so that the matrix V has an inverse).
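A quick numerical sanity check of the factorization (any matrix with n independent eigenvectors works; this one is illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
vals, V = np.linalg.eig(A)      # columns of V are eigenvectors of A
D = np.diag(vals)               # diagonal matrix of eigenvalues

print(np.allclose(A @ V, V @ D))                  # AV = VD
print(np.allclose(np.linalg.inv(V) @ A @ V, D))   # V^{-1} A V = D
```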

56 Let's Practice For the matrix A = [0 4; -1 5]: a. The pairs λ1 = 4, v1 = (1, 1) and λ2 = 1, v2 = (4, 1) are eigenpairs of A. For each pair, provide another eigenvector associated with that eigenvalue.

57 Let's Practice For the matrix A = [0 4; -1 5]: a. The pairs λ1 = 4, v1 = (1, 1) and λ2 = 1, v2 = (4, 1) are eigenpairs of A. For each pair, provide another eigenvector associated with that eigenvalue. b. Based on the eigenvectors listed in part a, can the matrix A be diagonalized? Why or why not? If diagonalization is possible, explain how it would be done.

58 Symmetric Matrices - Properties Symmetric matrices (like the covariance matrix, the correlation matrix, and XᵀX) have some very nice properties:

59 Symmetric Matrices - Properties Symmetric matrices (like the covariance matrix, the correlation matrix, and XᵀX) have some very nice properties: 1 They are always diagonalizable. (Always have n linearly independent eigenvectors.)

60 Symmetric Matrices - Properties Symmetric matrices (like the covariance matrix, the correlation matrix, and XᵀX) have some very nice properties: 1 They are always diagonalizable. (Always have n linearly independent eigenvectors.) 2 Their eigenvectors are all mutually orthogonal.

61 Symmetric Matrices - Properties Symmetric matrices (like the covariance matrix, the correlation matrix, and XᵀX) have some very nice properties: 1 They are always diagonalizable. (Always have n linearly independent eigenvectors.) 2 Their eigenvectors are all mutually orthogonal. 3 Thus, if you normalize the eigenvectors (software does this automatically) they will form an orthogonal matrix, V.

62 Symmetric Matrices - Properties Symmetric matrices (like the covariance matrix, the correlation matrix, and XᵀX) have some very nice properties: 1 They are always diagonalizable. (Always have n linearly independent eigenvectors.) 2 Their eigenvectors are all mutually orthogonal. 3 Thus, if you normalize the eigenvectors (software does this automatically) they will form an orthogonal matrix, V. ⇒ AV = VD

63 Symmetric Matrices - Properties Symmetric matrices (like the covariance matrix, the correlation matrix, and XᵀX) have some very nice properties: 1 They are always diagonalizable. (Always have n linearly independent eigenvectors.) 2 Their eigenvectors are all mutually orthogonal. 3 Thus, if you normalize the eigenvectors (software does this automatically) they will form an orthogonal matrix, V. ⇒ AV = VD ⇒ VᵀAV = D
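For symmetric matrices, NumPy provides eigh, which is built around exactly these properties; a sketch using a matrix of the form XᵀX (random data, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
S = X.T @ X                          # X^T X is symmetric

vals, V = np.linalg.eigh(S)          # eigh is for symmetric/Hermitian matrices
print(np.allclose(V.T @ V, np.eye(3)))          # V is orthogonal: V^T V = I
print(np.allclose(V.T @ S @ V, np.diag(vals)))  # V^T S V = D
```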

64 Eigenvectors of Symmetric Matrices

65 Introduction to Principal Components Analysis (PCA)

66 Eigenvectors of the Covariance/Correlation Matrices

67 Eigenvectors of the Covariance/Correlation Matrices Covariance/Correlation matrices are symmetric.

68 Eigenvectors of the Covariance/Correlation Matrices Covariance/Correlation matrices are symmetric. ⇒ Eigenvectors are orthogonal

69 Eigenvectors of the Covariance/Correlation Matrices Covariance/Correlation matrices are symmetric. ⇒ Eigenvectors are orthogonal. Eigenvectors are ordered by the magnitude of eigenvalues: λ1 ≥ λ2 ≥ ... ≥ λp, with corresponding eigenvectors {v1, v2, ..., vp}

70 Covariance vs. Correlation We'll get into more detail later, but...

71 Covariance vs. Correlation We'll get into more detail later, but... For the covariance matrix, we want to think of our data as centered to begin with.

72 Covariance vs. Correlation We'll get into more detail later, but... For the covariance matrix, we want to think of our data as centered to begin with. For the correlation matrix, we want to think of our data as standardized to begin with (i.e., centered and divided by the standard deviation).
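In code, the difference is simply whether the columns are centered or fully standardized before the matrix is formed; a sketch on made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) * [2.0, 10.0]   # two variables on very different scales
n = X.shape[0]

Xc = X - X.mean(axis=0)                       # centered data
Xs = Xc / X.std(axis=0, ddof=1)               # standardized data

cov  = Xc.T @ Xc / (n - 1)                    # covariance matrix
corr = Xs.T @ Xs / (n - 1)                    # correlation matrix
print(np.allclose(cov,  np.cov(X, rowvar=False)))        # True
print(np.allclose(corr, np.corrcoef(X, rowvar=False)))   # True
```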

73 Direction of Maximal Variance The first eigenvector of a covariance/correlation matrix points in the direction of maximum variance in the data. This eigenvector is the first principal component. [Scatterplot of weight vs. height]

74 Direction of Maximal Variance The first eigenvector of a covariance matrix points in the direction of maximum variance in the data. This eigenvector is the first principal component. [Scatterplot of w vs. h with the first principal component v1 drawn in]
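Putting the pieces together, here is a minimal PCA-by-eigendecomposition sketch on simulated height/weight-style data (variable names and numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.normal(size=300)                       # "height"-like variable
w = 0.8 * h + 0.3 * rng.normal(size=300)       # "weight"-like variable, correlated with h
X = np.column_stack([h, w])

Xc = X - X.mean(axis=0)                        # center the data
S = np.cov(Xc, rowvar=False)                   # 2 x 2 covariance matrix

vals, vecs = np.linalg.eigh(S)                 # symmetric, so use eigh
order = np.argsort(vals)[::-1]                 # largest eigenvalue first
vals, vecs = vals[order], vecs[:, order]

v1 = vecs[:, 0]                                # first principal component direction
print(v1, vals[0])                             # direction of maximal variance and its variance
```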

75 Not a Regression Line The first principal component is not the same direction as the regression line. It minimizes orthogonal distances, not vertical distances. [Scatterplot of w vs. h with v1 drawn in]

76 Not a Regression Line It may be close in some cases, but it's not the same. [Plot in the (x1, x2) plane comparing the principal component direction with the regression line]

77 Secondary Directions The second eigenvector of a covariance matrix points in the direction, orthogonal to the first, that has the maximum variance. [Scatterplot of w vs. h with the second principal component v2 drawn in]

78 A New Basis Principal components provide us with a new orthogonal basis where the new coordinates of the data points are uncorrelated. [Scatterplot of the data in the new principal-component coordinates]
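Repeating the small simulation from the sketch above: projecting the centered data onto the eigenvector basis gives the new coordinates (the scores), which come out uncorrelated, with variances equal to the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.normal(size=300)
w = 0.8 * h + 0.3 * rng.normal(size=300)
Xc = np.column_stack([h, w])
Xc = Xc - Xc.mean(axis=0)

vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

scores = Xc @ vecs                            # coordinates of each point in the PC basis
print(np.corrcoef(scores, rowvar=False))      # off-diagonal entries ~ 0: uncorrelated
print(np.var(scores, axis=0, ddof=1), vals)   # score variances match the eigenvalues
```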

79 Variable Loadings Each principal component is a linear combination of the original variables. v1 = (0.7, 0.7) = 0.7h + 0.7w; v2 = (-0.7, 0.7) = -0.7h + 0.7w. These coefficients are called loadings (factor loadings). [Scatterplot of w vs. h with v1 and v2 drawn in]

80 Combining PCs Likewise, we can think of our original variables as linear combinations of the principal components, with coordinates in the new basis: h = (0.7, -0.7) = 0.7v1 - 0.7v2; w = (0.7, 0.7) = 0.7v1 + 0.7v2. [Scatterplot of w vs. h]

81 The Biplot Uncorrelated data points and variable vectors plotted on the same set of axes! Points in the top right have the largest weight values; points in the top left have the smallest height values. [Biplot with the h and w variable vectors overlaid]

82 [Figure: weight vs. height]

83 Variable Loadings The variable loadings give us a formula for finding the coordinates of our data in the new basis. v1 = (0.7, 0.7) ⇒ PC1 = 0.7·height + 0.7·weight; v2 = (-0.7, 0.7) ⇒ PC2 = -0.7·height + 0.7·weight. From these loadings, we can get a sense of how important each original variable is to our new variables (the components).

84 Let's Practice 1 For the following data plot, take your best guess and draw the directions of the first and second principal components. 2 Is there more than one answer to this question?

85 Let's Practice Suppose your data contained the variables VO2_max, mile pace, and weight, in that order. The first principal component for this data is the following eigenvector of the covariance matrix: ( ). What variable is most important/influential to the first principal component?
