Linear Algebra & Geometry: why is linear algebra useful in computer vision?

Linear Algebra & Geometry: why is linear algebra useful in computer vision? References: any book on linear algebra; [HZ] chapters 2 and 4. Some of the slides in this lecture are courtesy of Prof. Octavia I. Camps, Penn State University.

Why is linear algebra useful in computer vision? Representation: 3D points in the scene and 2D points in the image. Coordinates will be used to perform geometrical transformations and to associate 3D points with 2D points. Images are matrices of numbers, and we want to find properties of these numbers.

Agenda 1. Basic definitions and properties 2. Geometrical transformations 3. SVD and its applications

Vectors (2D or 3D). A point in the 3D world: P = [x, y, z]; its corresponding 2D point in the image: p = [x, y].

Vectors (2D). A vector v = (x1, x2) pointing to P. Magnitude: ||v|| = sqrt(x1^2 + x2^2). If ||v|| = 1, v is a UNIT vector. Orientation: θ = arctan(x2 / x1).
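A minimal NumPy sketch of these quantities (the vector values are arbitrary examples):

```python
import numpy as np

# Magnitude and orientation of a 2D vector v = (x1, x2).
v = np.array([3.0, 4.0])

magnitude = np.linalg.norm(v)      # ||v|| = sqrt(x1^2 + x2^2) = 5.0
unit = v / magnitude               # unit vector: ||unit|| == 1
theta = np.arctan2(v[1], v[0])     # orientation in radians

print(magnitude, unit, np.degrees(theta))
```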

Vector Addition: for v = (x1, x2) and w = (y1, y2), v + w = (x1 + y1, x2 + y2).

Vector Subtraction: v - w = (x1 - y1, x2 - y2).

Scalar Product: a·v = (a·x1, a·x2).

Inner (dot) Product: v · w = x1·y1 + x2·y2 = ||v|| ||w|| cos α, where α is the angle between v and w. The inner product is a SCALAR!

Orthonormal Basis: i = (1, 0) and j = (0, 1) are unit vectors along the x1 and x2 axes; any vector can be written as v = x1·i + x2·j.

Vector (cross) Product: u = v × w. The cross product is a VECTOR! Magnitude: ||u|| = ||v × w|| = ||v|| ||w|| sin α. Orientation: u is orthogonal to both v and w (right-hand rule).

Vector Product Computation: with the standard basis i = (1,0,0), j = (0,1,0), k = (0,0,1), where ||i|| = ||j|| = ||k|| = 1 and i = j × k, j = k × i, k = i × j, the cross product of x and y is x × y = (x2·y3 - x3·y2) i + (x3·y1 - x1·y3) j + (x1·y2 - x2·y1) k.
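A short NumPy sketch of the dot and cross products for two example 3D vectors (values chosen arbitrarily):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

dot = np.dot(x, y)        # scalar: x1*y1 + x2*y2 + x3*y3
cross = np.cross(x, y)    # vector: (x2*y3 - x3*y2, x3*y1 - x1*y3, x1*y2 - x2*y1)

# The cross product is orthogonal to both inputs.
assert np.isclose(np.dot(cross, x), 0.0)
assert np.isclose(np.dot(cross, y), 0.0)
print(dot, cross)
```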

Matrices: an n×m matrix A has entries a_ij, with row index i = 1..n and column index j = 1..m. An image is such a matrix, where each entry is a pixel's intensity value. Sum: (A + B)_ij = a_ij + b_ij; A and B must have the same dimensions!

Matrices. Product: C = A·B with c_ij = Σ_k a_ik·b_kj; A and B must have compatible dimensions (A is n×m, B is m×p, so C is n×p).

Matrices. Transpose: (A^T)_ij = a_ji. If A^T = A, then A is symmetric.

Matrices. Determinant: det(A) is defined only when A is square. Example: for a 2×2 matrix, det [a b; c d] = a·d - b·c.

Matrices. Inverse: A^-1 satisfies A·A^-1 = A^-1·A = I; A must be square and det(A) ≠ 0.
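A small NumPy sketch of the determinant and inverse for an arbitrary 2×2 example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

det_A = np.linalg.det(A)      # must be non-zero for A to be invertible
A_inv = np.linalg.inv(A)

# A @ A_inv should be the identity (up to floating-point error).
assert np.allclose(A @ A_inv, np.eye(2))
print(det_A)
```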

2D Geometrical Transformations

2D Translation: a point P is moved to P' by adding a translation vector t.

2D Translation Equation: with P = (x, y) and t = (tx, ty), P' = P + t = (x + tx, y + ty).

2D Translation using Matrices: P' = [1 0 tx; 0 1 ty] · (x, y, 1)^T, a 2×3 matrix acting on an extended point vector.

Homogeneous Coordinates: multiply the coordinates by a non-zero scalar and add an extra coordinate equal to that scalar. For example, (x, y) → (λx, λy, λ); with λ = 1, (x, y) → (x, y, 1).

Back to Cartesian Coordinates: divide by the last coordinate and eliminate it. For example, (x, y, w) → (x/w, y/w). NOTE: in our example the scalar was 1.

2D Translation using Homogeneous Coordinates: P' = T·P, where T = [1 0 tx; 0 1 ty; 0 0 1] and P = (x, y, 1)^T.
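A brief NumPy sketch of translation in homogeneous coordinates; the translation (tx, ty) and the point are arbitrary example values:

```python
import numpy as np

tx, ty = 2.0, -1.0
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])

P = np.array([4.0, 5.0, 1.0])    # point (4, 5) in homogeneous coordinates
P_prime = T @ P                  # -> (6, 4, 1)
print(P_prime[:2] / P_prime[2])  # back to Cartesian: divide by the last coordinate
```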

Scaling

Scaling Equation: x' = sx·x, y' = sy·y; P' = S·P with S = [sx 0; 0 sy].

Scaling & Translating: P' = S·P, then P'' = T·P' = T·(S·P) = (T·S)·P = A·P.

Scaling & Translating: A = T·S = [sx 0 tx; 0 sy ty; 0 0 1].

Translating & Scaling = Scaling & Translating? In general, no: S·T = [sx 0 sx·tx; 0 sy sy·ty; 0 0 1] ≠ T·S, since matrix multiplication does not commute.
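A short NumPy sketch suggesting why the order matters; the scale and translation values are arbitrary examples:

```python
import numpy as np

# Scaling and translation as 3x3 homogeneous matrices.
sx, sy = 2.0, 3.0
tx, ty = 1.0, 1.0

S = np.diag([sx, sy, 1.0])
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])

P = np.array([1.0, 1.0, 1.0])
print(T @ S @ P)   # scale first, then translate: (3, 4, 1)
print(S @ T @ P)   # translate first, then scale: (4, 6, 1)
```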

Rotation

Rotation Equations: counter-clockwise rotation by an angle θ: x' = x·cos θ - y·sin θ, y' = x·sin θ + y·cos θ; P' = R·P with R = [cos θ -sin θ; sin θ cos θ].

Degrees of Freedom: R is 2×2, so it has 4 elements, but only 1 degree of freedom (the angle θ). Note: R is an orthogonal (hence normal) matrix and satisfies many interesting properties: R·R^T = R^T·R = I and det(R) = 1.
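A minimal NumPy check of these properties for an example angle:

```python
import numpy as np

theta = np.radians(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonality and unit determinant pin down the remaining entries,
# leaving only the angle as a free parameter.
assert np.allclose(R @ R.T, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)
```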

Rotation + Scaling + Translation: P' = (T·R·S)·P. If sx = sy, this is a similarity transformation!

Transformations in 2D: - Isometries - Similarities - Affinities - Projective

Transformations in 2D. Isometries (Euclidean): - Preserve distances (and areas) - 3 DOF - Describe the motion of rigid objects

Transformations in 2D. Similarities: - Preserve ratios of lengths and angles - 4 DOF

Transformations in 2D. Affinities:

Transformations in 2D. Affinities: - Preserve parallel lines, ratios of areas, ratios of lengths on collinear lines, and others - 6 DOF

Transformations in 2D. Projective: - 8 DOF - Preserve the cross-ratio of 4 collinear points, collinearity, and a few others

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors: A·x = λ·x. The eigenvalues of A are the roots of the characteristic equation det(A - λI) = 0. Diagonal form of the matrix: A = S·Λ·S^-1, where Λ is diagonal with the eigenvalues on its diagonal, and the eigenvectors of A are the columns of S.
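A small NumPy sketch of this diagonalization for an arbitrary example matrix (np.linalg.eig returns the eigenvalues and the matrix S of eigenvectors):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, S = np.linalg.eig(A)   # columns of S are the eigenvectors
Lambda = np.diag(eigvals)

# Diagonal form: A = S @ Lambda @ inv(S).
assert np.allclose(A, S @ Lambda @ np.linalg.inv(S))
print(eigvals)
```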

Singular Value Decomposition: A = U·Σ·V^T, where U and V are orthogonal matrices and Σ is a diagonal matrix whose entries (the singular values) are non-negative and sorted in decreasing order.
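A minimal NumPy sketch of the decomposition for an arbitrary example matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Sigma = np.diag(s)              # singular values, largest first

assert np.allclose(A, U @ Sigma @ Vt)
print(s)
```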

A Numerical Example: look at how the multiplication works out, left to right: column 1 of U gets scaled by the first singular value from Σ, and the resulting vector is then multiplied by row 1 of V^T to produce a contribution to the columns of A.

A Numerical Example: each product (column i of U) · (value i from Σ) · (row i of V^T) produces a rank-one matrix; summing these rank-one terms reconstructs A.
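A short NumPy sketch of this rank-one reconstruction, using a random matrix as a stand-in for the slide's numerical example:

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(4, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Sum of rank-one terms sigma_i * u_i * v_i^T rebuilds A.
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(A, A_rebuilt)
```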

A Numerical Example: we can look at Σ to see that the first singular value (and hence the first rank-one term) has a large effect, while the second has a much smaller effect in this example.

SVD Applications: for this image, using only the first 10 of 300 singular values produces a recognizable reconstruction, so SVD can be used for image compression.
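A brief sketch of this idea; a random matrix stands in for the grayscale image here, so it only illustrates the mechanics of keeping the first k singular values:

```python
import numpy as np

image = np.random.default_rng(1).normal(size=(300, 300))
U, s, Vt = np.linalg.svd(image, full_matrices=False)

k = 10
# Keep only the first k singular values/vectors: the best rank-k approximation.
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(approx.shape)
```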

Principal Component Analysis: remember, the columns of U are the principal components of the data: the major patterns that can be added together to produce the columns of the original matrix. One use of this is to construct a matrix where each column is a separate data sample. Run SVD on that matrix and look at the first few columns of U to see patterns that are common among the columns. This is called Principal Component Analysis (or PCA) of the data samples.

Principal Component Analysis: often, raw data samples have a lot of redundancy and repeated patterns. PCA allows you to represent data samples as weights on the principal components, rather than using the original raw form of the data. By representing each sample as just those weights, you capture only what is actually different between samples. This minimal representation makes machine learning and other algorithms much more efficient.
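A compact sketch of PCA via SVD under the slide's convention that each column of the data matrix is one sample; the data here is random and the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 20))                    # 20 samples, 64 values each

X_centered = X - X.mean(axis=1, keepdims=True)   # subtract the mean sample
U, s, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 3
components = U[:, :k]                            # first k principal components
weights = components.T @ X_centered              # each sample as k weights
reconstruction = components @ weights + X.mean(axis=1, keepdims=True)
print(weights.shape)                             # (3, 20): a much smaller representation
```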

HW 0.1: Compute eigenvalues and eigenvectors of the following transformations