Linear Algebra Review. Fei-Fei Li


Vectors

Vectors and matrices are just collections of ordered numbers that represent something: movements in space, scaling factors, pixel brightnesses, etc. A vector is a quantity that involves both magnitude and direction.

A column vector $v \in \mathbb{R}^{n \times 1}$:

$$v = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \tag{1}$$

A row vector $v^T \in \mathbb{R}^{1 \times n}$:

$$v^T = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} \tag{2}$$
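In MATLAB (values made up):

```matlab
v = [1; 2; 3];     % column vector in R^{3x1}
vT = v';           % row vector v^T; the ' operator is the transpose

size(v)            % [3 1]
size(vT)           % [1 3]
```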

Vectors

Vectors can represent an offset in 2D or 3D space. Points are just vectors from the origin. Data (pixels, gradients at an image keypoint, etc.) can also be treated as a vector. Such vectors don't have a geometric interpretation, but calculations like distance can still be meaningful.

Matrices

A matrix $A \in \mathbb{R}^{m \times n}$ is an array of $m \times n$ numbers arranged in $m$ rows and $n$ columns:

$$A = \begin{bmatrix} x_{11} & x_{12} & x_{13} & \cdots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \cdots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \cdots & x_{mn} \end{bmatrix}$$

If $m = n$, we say that $A$ is a square matrix. MATLAB represents an image as a matrix of pixel brightnesses.

Images

Note that matrix coordinates are NOT Cartesian coordinates: the row index comes first, and the upper-left corner is [y, x] = (1, 1). Grayscale images have one number per pixel and are stored as an m × n matrix.

Images

Color images have 3 numbers per pixel (red, green, and blue brightnesses) and are stored as an m × n × 3 matrix.
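A short MATLAB sketch of both storage layouts (image sizes made up):

```matlab
% Grayscale: one brightness per pixel, an m-by-n matrix
gray = uint8(zeros(480, 640));
gray(1, 1) = 255;            % upper-left pixel is (row, col) = (1, 1)

% Color: red, green, blue planes, an m-by-n-by-3 array
rgb = uint8(zeros(480, 640, 3));
rgb(1, 1, 1) = 255;          % make the upper-left pixel pure red
size(rgb)                    % [480 640 3]
```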

Matrix Operations

Addition: you can only add a matrix with matching dimensions, or a scalar (which is added to every entry).

Scaling: multiplying by a scalar scales every entry of the matrix.
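In MATLAB (matrices made up):

```matlab
A = [1 2; 3 4];
B = [10 20; 30 40];

A + B       % elementwise addition: [11 22; 33 44]
A + 1       % adding a scalar adds it to every entry
2 * A       % scaling multiplies every entry: [2 4; 6 8]
```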

Matrix Operations

Inner product (dot product) of vectors: multiply corresponding entries of two vectors and add up the results:

$$x \cdot y = \sum_i x_i y_i = \|x\|\,\|y\|\cos(\text{the angle between } x \text{ and } y)$$

If B is a unit vector, then A · B gives the length of the component of A which lies in the direction of B.
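In MATLAB (vectors made up):

```matlab
x = [1; 2; 3];
y = [4; 5; 6];

d = dot(x, y);    % 1*4 + 2*5 + 3*6 = 32
d2 = x' * y;      % the same value, as a row-times-column product

% Recover the angle from x . y = |x| |y| cos(theta)
theta = acos(dot(x, y) / (norm(x) * norm(y)));
```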

Matrix Operations

Multiplication: for the product AB, each entry in the result is the dot product of (that row of A) with (that column of B).
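In MATLAB (matrices made up):

```matlab
A = [1 2 3; 4 5 6];      % 2x3
B = [1 0; 0 1; 1 1];     % 3x2

C = A * B;               % 2x2 result
C11 = A(1,:) * B(:,1);   % row 1 of A times column 1 of B equals C(1,1)
```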

Matrix Operations

Transpose: row r becomes column r, i.e. $(A^T)_{ij} = A_{ji}$.

A useful identity: $(ABC)^T = C^T B^T A^T$ (transposition reverses the order of a product).

Matrix Operations

Determinant: det(A) returns a scalar value for a square matrix A. For a 2 × 2 matrix,

$$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc$$

Properties: $\det(AB) = \det(A)\det(B)$ and $\det(A^T) = \det(A)$.
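In MATLAB (matrices made up):

```matlab
A = [2 1; 3 4];
B = [1 2; 0 1];

det(A)          % 2*4 - 1*3 = 5
det(A * B)      % equals det(A) * det(B)
det(A')         % equals det(A)
```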

Special Matrices

Identity matrix I: a square matrix with 1s along the diagonal and 0s elsewhere; $IA = A$.

Diagonal matrix: a square matrix with numbers along the diagonal and 0s elsewhere. (A diagonal matrix) · (another matrix B) scales the rows of matrix B.
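In MATLAB (B made up):

```matlab
I = eye(3);              % 3x3 identity
D = diag([2 3 4]);       % diagonal matrix with 2, 3, 4 on the diagonal
B = ones(3);

I * B                    % the identity leaves B unchanged
D * B                    % row i of B is scaled by D(i,i)
```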

Special Matrices

Symmetric matrix: $A^T = A$.

Skew-symmetric matrix: $A^T = -A$.

Transformations

Matrices can be used to transform vectors in useful ways, through multiplication: $x' = Ax$. The simplest is scaling:

$$\begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} s_x x \\ s_y y \end{bmatrix}$$

Rotation

How can you convert a vector represented in frame 0 to a new, rotated coordinate frame 1? Remember what a vector is: [component in the direction of the frame's x-axis, component in the direction of the y-axis].

Rotation

So to rotate it we must produce this vector: [component in direction of new x-axis, component in direction of new y-axis]. We can do this easily with dot products! The new x coordinate is [original vector] · [the new x-axis]; the new y coordinate is [original vector] · [the new y-axis].

2D Rotation Matrix Formula

Counter-clockwise rotation by an angle θ:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$
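In MATLAB (rotating a made-up vector by 90 degrees):

```matlab
theta = pi / 2;
R = [cos(theta) -sin(theta);
     sin(theta)  cos(theta)];

p = [1; 0];           % unit vector along x
R * p                 % approximately [0; 1]
```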

Transformation Matrices

Multiple transformation matrices can be used to transform a point: $p' = R_2 R_1 S p$. The effect of this is to apply their transformations one after the other, from right to left. In the example above, the result is $R_2(R_1(Sp))$. The result is exactly the same if we multiply the matrices first, to form a single transformation matrix: $p' = (R_2 R_1 S)p$.

Homogeneous System

In general, a matrix multiplication lets us linearly combine the components of a vector. This is sufficient for scale, rotate, and skew transformations. But notice that we can't add a constant! The (somewhat hacky) solution? Stick a 1 at the end of every vector.

Homogeneous System

Now we can rotate, scale, and skew like before, AND translate (note how the multiplication works out, above). This is called homogeneous coordinates. In homogeneous coordinates, the multiplication works out so that the rightmost column of the matrix is a vector that gets added.
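For example, a 2D translation by (tx, ty) as a single matrix multiplication (a minimal MATLAB sketch with made-up values):

```matlab
tx = 5; ty = 2;
T = [1 0 tx;
     0 1 ty;
     0 0 1];          % the rightmost column is the vector that gets added

p = [3; 4; 1];        % the point (3, 4) with a 1 stuck on the end
T * p                 % [8; 6; 1], i.e. the point (8, 6)
```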

(The following slides, 2D Translation; 2D Translation with Homogeneous Coordinates; Scaling; Scaling & Translating; and Rotation, are worked examples given as figures.)
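As a stand-in for those figures, here is a minimal MATLAB sketch (values made up) that composes scaling and translation in homogeneous coordinates:

```matlab
S = [2 0 0;
     0 3 0;
     0 0 1];          % scale x by 2 and y by 3

T = [1 0 1;
     0 1 2;
     0 0 1];          % then translate by (1, 2)

p = [1; 1; 1];        % the point (1, 1) in homogeneous coordinates
T * (S * p)           % [3; 5; 1]
(T * S) * p           % same result: compose first, then apply
```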

Rotation Matrix Properties

The transpose of a rotation matrix produces a rotation in the opposite direction:

$$RR^T = R^T R = I, \qquad \det(R) = 1$$

Consider the rotation matrix from the previous slide.
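These properties are easy to check numerically (arbitrary angle):

```matlab
theta = 0.3;
R = [cos(theta) -sin(theta);
     sin(theta)  cos(theta)];

R * R'               % the identity, up to floating-point error
det(R)               % 1
R' * (R * [1; 0])    % R' undoes R: returns [1; 0]
```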

Inverse of Matrix

Given a matrix A, its inverse $A^{-1}$ is a matrix such that $AA^{-1} = A^{-1}A = I$. For example,

$$\begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}^{-1} = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/3 \end{bmatrix}$$

The inverse does not always exist. If $A^{-1}$ exists, A is invertible or non-singular; otherwise, it's singular. Useful identities, for matrices that are invertible:

$$(A^{-1})^T = (A^T)^{-1}, \qquad (AB)^{-1} = B^{-1}A^{-1}$$

Matrix Operations

Pseudoinverse: say you have the matrix equation AX = B, where A and B are known, and you want to solve for X. You could use MATLAB to calculate the inverse and premultiply by it: $A^{-1}AX = A^{-1}B$, so $X = A^{-1}B$. The MATLAB command would be inv(A)*B. But calculating the inverse for large matrices often brings problems with computer floating-point resolution (because it involves working with very small and very large numbers together). Or, your matrix might not even have an inverse. Fortunately, there are workarounds to solve AX = B in these situations. And MATLAB can do them!

Matrix Operations

Pseudoinverse: instead of taking an inverse, directly ask MATLAB to solve for X in AX = B by typing A\B. MATLAB will try several appropriate numerical methods (including the pseudoinverse if the inverse doesn't exist) and will return the value of X which solves the equation. If there is no exact solution, it will return the closest one; if there are many solutions, it will return the smallest one.
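A small MATLAB sketch of both approaches (systems made up):

```matlab
A = [2 0; 0 3];
B = [4; 9];

X1 = inv(A) * B;     % works here, but numerically fragile in general
X2 = A \ B;          % preferred: backslash picks a suitable solver

% Overdetermined system: no exact solution, so A\B returns the
% least-squares fit; pinv gives the same answer via the pseudoinverse
A2 = [1 1; 1 2; 1 3];
b2 = [1; 2; 2];
x_ls   = A2 \ b2;
x_pinv = pinv(A2) * b2;
```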

Rank of a Matrix

Linear independence: suppose we have a set of vectors $v_1, \dots, v_n$. If we can express $v_1$ as a linear combination of the other vectors $v_2, \dots, v_n$, then $v_1$ is linearly dependent on the other vectors: the direction $v_1$ can be expressed as a combination of the directions $v_2, \dots, v_n$ (e.g. $v_1 = 0.7\,v_2 - 0.7\,v_4$). If no vector is linearly dependent on the rest of the set, the set is linearly independent. Common case: a set of vectors $v_1, \dots, v_n$ is always linearly independent if each vector is perpendicular to every other vector (and non-zero).

Rank of a Matrix

Column/row rank: col-rank(A) is the maximum number of linearly independent column vectors of A; row-rank(A) is the maximum number of linearly independent row vectors of A. Column rank always equals row rank, so we define the matrix rank: rank(A) := col-rank(A) = row-rank(A). For transformation matrices, the rank tells you the dimensions of the output. E.g. if the rank of A is 1, then the transformation $p' = Ap$ maps points onto a line.

Rank of a Matrix

If an m × m matrix has rank m, we say it's full rank: it maps an m × 1 vector uniquely to another m × 1 vector, and an inverse matrix can be found. If rank < m, we say it's singular: at least one dimension is getting collapsed, there is no way to look at the result and tell what the input was, and the inverse does not exist. The inverse also doesn't exist for non-square matrices.
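In MATLAB (matrices made up):

```matlab
A = [1 2; 3 4];      % full rank (rank 2): invertible
rank(A)              % 2

B = [1 2; 2 4];      % second row is twice the first: rank 1
rank(B)              % 1; B is singular, so inv(B) warns and returns Inf entries
```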

Eigenvalues and Eigenvectors

Suppose we have a square matrix A. We can solve for a vector x and scalar λ such that $Ax = \lambda x$. In other words, find vectors where, if we transform them with A, the only effect is to scale them, with no change in direction. These vectors are called eigenvectors ("eigen" is German for "self": the matrix's own vectors), and the scaling factors λ are called eigenvalues. An m × m matrix has up to m distinct eigenvalue/eigenvector pairs. To find the eigenvalues and eigenvectors, rewrite the equation:

$$(A - \lambda I)x = 0$$

x = 0 is always a solution, but we have to find $x \neq 0$. This means $A - \lambda I$ should not be invertible, so that nonzero solutions can exist:

$$\det(A - \lambda I) = 0$$

Eigenvalues and Eigenvectors

For example,

$$A = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix}, \qquad \det(A - \lambda I) = \begin{vmatrix} 3-\lambda & 1 \\ 1 & 3-\lambda \end{vmatrix} = 0$$

gives $\lambda_1 = 4$, $x_1^T = [1 \;\; 1]$ and $\lambda_2 = 2$, $x_2^T = [-1 \;\; 1]$.

Another example:

$$B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

gives $\lambda_1 = 1$, $x_1^T = [1 \;\; 1]$ and $\lambda_2 = -1$, $x_2^T = [-1 \;\; 1]$.

What is the relation between these two matrices?
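These values are easy to check in MATLAB (eig may list the eigenvalues in a different order):

```matlab
A = [3 1; 1 3];
[V, D] = eig(A);     % columns of V are eigenvectors, diag(D) the eigenvalues
diag(D)              % 2 and 4

B = [0 1; 1 0];
eig(B)               % -1 and 1
```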

Singular Value Decomposition (SVD)

There are several computer algorithms that can factor a matrix, representing it as the product of some other matrices. The most useful of these is the Singular Value Decomposition, which represents any matrix A as a product of three matrices:

$$A = U\Sigma V^T$$

where U and V are rotation matrices and Σ is a scaling matrix. MATLAB command: [U,S,V] = svd(A).

Singular Value Decomposition (SVD)

Eigenvectors are for square matrices, but the SVD is for all matrices. To calculate U, take the eigenvectors of $AA^T$; the square roots of the eigenvalues are the singular values (the entries of Σ). To calculate V, take the eigenvectors of $A^TA$. In general, if A is m × n, then U will be m × m, Σ will be m × n, and $V^T$ will be n × n.
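A quick MATLAB check of the shapes (matrix made up):

```matlab
A = [3 2 2; 2 3 -2];     % a 2x3 matrix
[U, S, V] = svd(A);

size(U)                  % [2 2]
size(S)                  % [2 3], singular values on the diagonal
size(V)                  % [3 3]

U * S * V'               % reconstructs A, up to floating-point error
```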

Singular Value Decomposition (SVD)

U and V are always rotation matrices. Geometric rotation may not be an applicable concept, depending on the matrix, so we call them unitary matrices (each column is a unit vector). Σ is a diagonal matrix: the number of nonzero entries equals the rank of A, and the algorithm always sorts the entries from high to low.

Singular Value Decomposition (SVD)

Look at how the multiplication works out, left to right: column 1 of U gets scaled by the first value from Σ. The resulting vector then gets scaled by row 1 of $V^T$ to produce a contribution to the columns of A.

Singular Value Decomposition (SVD)

Each product of (column i of U) · (value i from Σ) · (row i of $V^T$) produces a rank-1 component of the final A.
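The sum-of-rank-1-terms view is easy to verify in MATLAB (using an arbitrary test matrix):

```matlab
A = magic(4);
[U, S, V] = svd(A);

A_rebuilt = zeros(size(A));
for i = 1:min(size(A))
    % (column i of U) * (value i from Sigma) * (row i of V')
    A_rebuilt = A_rebuilt + U(:,i) * S(i,i) * V(:,i)';
end
norm(A - A_rebuilt)      % ~0: the rank-1 components sum back to A
```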

Singular Value Decomposition (SVD)

We're building A as a linear combination of the columns of U. Using all the columns of U, we'll rebuild the original matrix perfectly. But with real-world data, often we can use just the first few columns of U and get something close (e.g. the first A_partial, above).

Singular Value Decomposition (SVD)

We can call those first few columns of U the principal components of the data. They show the major patterns that can be added to produce the columns of the original matrix. The rows of $V^T$ show how the principal components are mixed to produce the columns of the matrix.

SVD Applications

For this image, using only the first 10 of 300 principal components produces a recognizable reconstruction. So, SVD can be used for image compression.
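A minimal MATLAB sketch of the idea; the random matrix here is only a stand-in for a real 300 × 300 grayscale image (real images have strong low-rank structure, so the approximation is far better on them):

```matlab
X = rand(300, 300);                  % stand-in for a grayscale image
[U, S, V] = svd(X);

k = 10;                              % keep the first 10 components
X_approx = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';

% Storage drops from 300*300 numbers to k*(300 + 300 + 1)
relative_error = norm(X - X_approx, 'fro') / norm(X, 'fro')
```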

Principal Component Analysis

Remember, the columns of U are the principal components of the data: the major patterns that can be added to produce the columns of the original matrix. One use of this is to construct a matrix where each column is a separate data sample. Run SVD on that matrix, and look at the first few columns of U to see patterns that are common among the columns. This is called Principal Component Analysis (or PCA) of the data samples.
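A closing MATLAB sketch of PCA via SVD (the data are made-up random samples; the mean-subtraction step is standard PCA preprocessing, not mentioned on the slide):

```matlab
X = randn(64, 200);          % each column is one data sample
X = X - mean(X, 2);          % subtract the mean sample

[U, S, V] = svd(X, 'econ');  % economy-size SVD
PCs = U(:, 1:5);             % the first 5 principal components
```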