Linear Algebra & Geometry: why is linear algebra useful in computer vision?


Linear Algebra & Geometry: why is linear algebra useful in computer vision? References: any book on linear algebra; [HZ] chapters 2 and 4. Some of the slides in this lecture are courtesy of Prof. Octavia I. Camps, Penn State University.

Why is linear algebra useful in computer vision? Representation: 3D points in the scene, 2D points in the image. Coordinates will be used to perform geometrical transformations and to associate 3D points with 2D points. Images are matrices of numbers, and we want to find properties of these numbers.

Agenda: 1. Basic definitions and properties 2. Geometrical transformations 3. SVD and its applications

Vectors (i.e., 2D or 3D vectors): P = [x, y, z] is a point in the 3D world; p = [x, y] is a point in the image.

Vectors (i.e., 2D vectors): v = (x_1, x_2) is the vector from the origin to the point P. Magnitude: ||v|| = sqrt(x_1^2 + x_2^2). If ||v|| = 1, v is a UNIT vector; dividing any non-zero vector by its magnitude yields a unit vector. Orientation: θ = atan(x_2 / x_1).
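
A quick NumPy check of the magnitude and orientation definitions above (the vector values are an arbitrary example, not taken from the slides):

```python
import numpy as np

v = np.array([3.0, 4.0])

magnitude = np.linalg.norm(v)      # sqrt(3^2 + 4^2) = 5
theta = np.arctan2(v[1], v[0])     # orientation in radians
unit = v / magnitude               # unit vector: same direction, magnitude 1

print(magnitude, np.degrees(theta), np.linalg.norm(unit))
```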

Vector Addition: v + w = (x_1 + y_1, x_2 + y_2), obtained geometrically with the parallelogram rule.

Vector Subtraction: v − w = (x_1 − y_1, x_2 − y_2).

Scalar Product: av = (a x_1, a x_2); multiplying by a scalar a changes the magnitude of v but not its direction (the direction flips if a < 0).

Inner (dot) Product: v · w = x_1 y_1 + x_2 y_2 = ||v|| ||w|| cos α. The inner product is a SCALAR!

Orthonormal Basis: the unit vectors i = (1, 0) and j = (0, 1) are orthogonal and of unit length, so any vector v can be written as v = x_1 i + x_2 j.

Vector (cross) Product: u = v × w. The cross product is a VECTOR! Magnitude: ||u|| = ||v × w|| = ||v|| ||w|| sin α. Orientation: u is orthogonal to both v and w (right-hand rule).

Vector Product Computation: with basis vectors i = (1,0,0), j = (0,1,0), k = (0,0,1), we have ||i|| = ||j|| = ||k|| = 1 and i = j × k, j = k × i, k = i × j. For v = (x_1, x_2, x_3) and w = (y_1, y_2, y_3): v × w = (x_2 y_3 − x_3 y_2) i + (x_3 y_1 − x_1 y_3) j + (x_1 y_2 − x_2 y_1) k.
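
A minimal sketch of these vector operations with NumPy (the vectors are arbitrary examples):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

dot = v @ w                    # inner product: a scalar
cross = np.cross(v, w)         # cross product: a vector orthogonal to v and w
magnitude = np.linalg.norm(v)  # ||v||

# The cross product is orthogonal to both inputs, so these are (numerically) zero.
print(dot, cross @ v, cross @ w)
```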

Matrices: an n×m matrix A has entries a_ij, arranged in n rows (a_11 ... a_1m through a_n1 ... a_nm). In an image, each entry is a pixel's intensity value. Sum: C = A + B with c_ij = a_ij + b_ij; A and B must have the same dimensions!

Matrices: Product: C = AB, where A is n×m, B is m×p, and c_ij = Σ_k a_ik b_kj. A and B must have compatible dimensions (the number of columns of A must equal the number of rows of B)!

Matrix Transpose. Definition: C (m×n) = A^T, where A is n×m and c_ij = a_ji. Identities: (A + B)^T = A^T + B^T; (AB)^T = B^T A^T. If A = A^T, then A is symmetric.

Matrix Determinant: a useful value computed from the elements of a square matrix A. det[a_11] = a_11. For a 2×2 matrix: det = a_11 a_22 − a_12 a_21. For a 3×3 matrix: det = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 − a_13 a_22 a_31 − a_23 a_32 a_11 − a_33 a_12 a_21.

Matrix Inverse: does not exist for all matrices; it is necessary (but not sufficient) that the matrix be square. A A^-1 = A^-1 A = I. For a 2×2 matrix, A^-1 = (1 / det A) [a_22 −a_12; −a_21 a_11], provided det A ≠ 0. If det A = 0, A does not have an inverse.
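
A small NumPy sketch of the determinant and inverse (the matrix A below is an arbitrary invertible example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

d = np.linalg.det(A)      # a_11*a_22 - a_12*a_21 = 5 here
A_inv = np.linalg.inv(A)  # raises LinAlgError if the matrix is singular (det = 0)

# Check A A^-1 = I (up to floating-point error).
print(d, np.allclose(A @ A_inv, np.eye(2)))
```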

2D Geometrical Transformations

2D Translation: the point P is moved to P' by adding the translation vector t: P' = P + t.

2D Translation Equation: P = (x, y), t = (t_x, t_y); P' = P + t = (x + t_x, y + t_y).

2D Translation using Matrices: P' = [1 0 t_x; 0 1 t_y] · [x; y; 1], i.e., the translation becomes a matrix-vector product once P is augmented with a 1.

Homogeneous Coordinates: multiply the coordinates by a non-zero scalar and add an extra coordinate equal to that scalar. For example, (x, y) → (kx, ky, k) with k ≠ 0; taking k = 1 gives (x, y, 1).

Back to Cartesian Coordinates: divide by the last coordinate and eliminate it. For example, (kx, ky, k) → (kx/k, ky/k) = (x, y). NOTE: in our example the scalar was 1.
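
A sketch of the two conversions in Python; the function names are my own, not from the lecture:

```python
import numpy as np

def to_homogeneous(p, k=1.0):
    """(x, y) -> (k*x, k*y, k) for any non-zero scalar k."""
    return np.append(k * np.asarray(p, dtype=float), k)

def to_cartesian(ph):
    """(a, b, c) -> (a/c, b/c): divide by the last coordinate and drop it."""
    ph = np.asarray(ph, dtype=float)
    return ph[:-1] / ph[-1]

p = np.array([2.0, 5.0])
print(to_homogeneous(p))               # [2. 5. 1.]
print(to_cartesian([4.0, 10.0, 2.0]))  # [2. 5.] -- same point, scalar was 2
```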

2D Translation using Homogeneous Coordinates: P' = T · P, where P = [x; y; 1] and T = [1 0 t_x; 0 1 t_y; 0 0 1], i.e., T = [I t; 0 1].

Scaling: the point P is mapped to P' by scaling its coordinates.

Scaling Equation: P = (x, y) is mapped to P' = (s_x x, s_y y), i.e., P' = S · P with S = [s_x 0; 0 s_y].

Scaling & Translating: P' = S · P, P'' = T · P' = T · (S · P) = (T · S) · P = A · P.
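
A sketch of this composition using 3×3 homogeneous matrices; the scale and translation values are arbitrary examples:

```python
import numpy as np

sx, sy = 2.0, 3.0
tx, ty = 1.0, -1.0

S = np.array([[sx, 0.0, 0.0],
              [0.0, sy, 0.0],
              [0.0, 0.0, 1.0]])   # scaling
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])   # translation

A = T @ S                          # scale first, then translate
P = np.array([1.0, 1.0, 1.0])      # point (1, 1) in homogeneous coordinates

print(A @ P)                       # [3. 2. 1.] -> Cartesian (3, 2)
print(np.allclose(T @ S, S @ T))   # False: the order of the transformations matters
```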

Scaling & Translating: A = T · S = [s_x 0 t_x; 0 s_y t_y; 0 0 1].

Translating & Scaling = Scaling & Translating? No: T · S ≠ S · T in general, because matrix multiplication does not commute.

Rotation: the point P is rotated about the origin by an angle θ to P'.

Rotation Equations: counter-clockwise rotation by an angle θ: x' = x cos θ − y sin θ, y' = x sin θ + y cos θ, i.e., P' = R · P with R = [cos θ −sin θ; sin θ cos θ].
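
A sketch of the rotation equations in NumPy, with θ = 90° as an arbitrary example:

```python
import numpy as np

theta = np.pi / 2                        # 90 degrees, counter-clockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

P = np.array([1.0, 0.0])
print(R @ P)                             # ~[0, 1]: the x-axis rotates onto the y-axis
print(np.allclose(R @ R.T, np.eye(2)))   # True: R is orthogonal
```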

Degrees of Freedom: R is 2×2, so it has 4 elements, but a 2D rotation has only 1 degree of freedom (the angle θ). Note: R belongs to the category of normal matrices and satisfies many interesting properties, e.g. R · R^T = R^T · R = I and det(R) = 1.

Rotation + Scaling + Translation: P' = (T R S) P. If s_x = s_y, this is a similarity transformation!

Transformation in 2D: - Isometries - Similarities - Affinities - Projective

Transformation in 2D. Isometries [Euclidean]: - Preserve distances (and areas) - 3 DOF - Describe the motion of rigid objects

Transformation in 2D. Similarities: - Preserve: ratios of lengths, angles - 4 DOF

Transformation in 2D. Affinities:

Transformation in 2D. Affinities: - Preserve: parallel lines, ratios of areas, ratios of lengths on collinear lines, and others - 6 DOF

Transformation in 2D. Projective: - 8 DOF - Preserve: cross-ratio of 4 collinear points, collinearity, and a few others

Eigenvalues and Eigenvectors: an eigenvalue λ and eigenvector u satisfy Au = λu, where A is a square matrix. Multiplying u by A scales u by λ.

Eigenvalues and Eigenvectors: rearranging the previous equation gives the system Au − λu = (A − λI)u = 0, which has a non-zero solution u if and only if det(A − λI) = 0. The eigenvalues are the roots of this determinant, which is a polynomial in λ. Substitute each resulting eigenvalue back into Au = λu and solve to obtain the corresponding eigenvector.
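
A minimal NumPy check of the definition Au = λu (the symmetric matrix below is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)      # columns of eigvecs are the eigenvectors

# Verify A u = lambda u for each eigenpair.
for lam, u in zip(eigvals, eigvecs.T):
    print(lam, np.allclose(A @ u, lam * u))   # True for each pair
```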

Singular Value Decomposition: A = UΣV^T, where U and V are orthogonal matrices and Σ is a diagonal matrix of singular values.
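
A quick NumPy sketch of computing an SVD and checking the factorization (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s: singular values, largest first

Sigma = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))             # True: A = U Sigma V^T
```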

A Numerical Example: look at how the multiplication works out, left to right: column 1 of U gets scaled by the first singular value from Σ; the resulting vector is then multiplied by row 1 of V^T to produce a contribution to the columns of A.

A Numerical Example: each product of (column i of U) · (value i from Σ) · (row i of V^T) produces a rank-one matrix, and these contributions sum to A.

A Numerical Example: we can look at Σ to see that the first singular value (and hence the first column of U) has a large effect, while the second has a much smaller effect in this example.

SVD Applications: for this image, using only the first 10 of 300 singular values produces a recognizable reconstruction. So, SVD can be used for image compression.
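
A sketch of rank-k reconstruction for image compression; `image` here is a random placeholder for a real grayscale array, and k = 10 mirrors the slide's example:

```python
import numpy as np

def compress(image, k):
    """Keep only the first k singular values/vectors of a 2D array."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

image = np.random.rand(300, 300)   # placeholder for a real grayscale image
approx = compress(image, k=10)     # low-rank reconstruction from 10 singular values
print(approx.shape)
```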

Principal Component Analysis: remember, the columns of U are the principal components of the data: the major patterns that can be added together to produce the columns of the original matrix. One use of this is to construct a matrix where each column is a separate data sample. Run SVD on that matrix, and look at the first few columns of U to see patterns that are common among the columns. This is called Principal Component Analysis (or PCA) of the data samples.

Principal Component Analysis: often, raw data samples have a lot of redundancy and shared patterns. PCA allows you to represent data samples as weights on the principal components, rather than using the original raw form of the data. By representing each sample as just those weights, you capture the essence of what is different between samples. This minimal representation makes machine learning and other algorithms much more efficient.
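
A minimal PCA-via-SVD sketch under the slide's convention that each column of the data matrix is one sample; the random data and the mean-subtraction step are illustrative additions, not part of the lecture:

```python
import numpy as np

X = np.random.rand(64, 200)              # 200 samples, each a 64-dimensional column
Xc = X - X.mean(axis=1, keepdims=True)   # center the data (assumed preprocessing step)

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 5
components = U[:, :k]                    # first k principal components (major patterns)
weights = components.T @ Xc              # each sample as k weights instead of 64 numbers
reconstruction = components @ weights    # approximate the original samples

print(weights.shape)                     # (5, 200)
```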