CS 682 Finding Corner Points


CS 682 1 Finding Corner Points

CS 682 2 Normalized Cross-correlation

Let w_1 = I_1(x_1 + i, y_1 + j) and w_2 = I_2(x_2 + i, y_2 + j), i = -W, ..., W, j = -W, ..., W, be two square image windows centered at locations (x_1, y_1) and (x_2, y_2) of images I_1 and I_2, respectively. The normalized cross-correlation of w_1 and w_2 is given by

NCC(w_1, w_2) = \frac{(w_1 - \bar{w}_1) \cdot (w_2 - \bar{w}_2)}{\| w_1 - \bar{w}_1 \| \, \| w_2 - \bar{w}_2 \|},

where w_1 and w_2 are treated as vectors. (a \cdot b stands for the inner product of vectors a and b, \bar{a} for the mean value of the elements of a, and \| a \| for the 2-norm of vector a.)
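
A minimal sketch of this definition, assuming NumPy; the function name ncc and the example windows are illustrative, not part of the slides:

    import numpy as np

    def ncc(w1, w2):
        """Normalized cross-correlation of two equal-size, non-constant image windows."""
        v1 = np.asarray(w1, dtype=float).ravel()
        v2 = np.asarray(w2, dtype=float).ravel()
        v1 -= v1.mean()                      # subtract the window means
        v2 -= v2.mean()
        return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

    # Two windows whose values differ only by a scale factor give NCC = 1.
    w = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(ncc(w, 3.0 * w))                   # -> 1.0 (up to rounding)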

CS 682 3

For two windows whose pixel values differ only by a scale factor, NCC equals 1; if the windows are different, NCC is lower than 1. For two non-zero binary patterns which differ in all pixels, NCC is -1. Normalized cross-correlation corresponds to the cosine of the angle between the (mean-subtracted) vectors w_1 and w_2; this angle varies between 0° and 180°, and the corresponding cosine varies between 1 and -1. Corner points differ from other points and therefore have low NCC with all other points.

CS 682 4 Sum of Squared Differences

Let w_1 = I_1(x_1 + i, y_1 + j) and w_2 = I_2(x_2 + i, y_2 + j), i = -W, ..., W, j = -W, ..., W, be two square image windows centered at locations (x_1, y_1) and (x_2, y_2) of images I_1 and I_2, respectively. The sum of squared differences of w_1 and w_2 is given by

SSD(w_1, w_2) = \sum_{i=-W}^{W} \sum_{j=-W}^{W} \bigl( I_1(x_1 + i, y_1 + j) - I_2(x_2 + i, y_2 + j) \bigr)^2.
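
A minimal sketch of this definition, again assuming NumPy; the function name ssd and the example windows are illustrative:

    import numpy as np

    def ssd(w1, w2):
        """Sum of squared differences of two equal-size image windows."""
        d = np.asarray(w1, dtype=float) - np.asarray(w2, dtype=float)
        return float(np.sum(d * d))

    # Two binary patterns differing in every pixel give SSD = (2W + 1)^2.
    W = 4
    print(ssd(np.zeros((2*W + 1, 2*W + 1)), np.ones((2*W + 1, 2*W + 1))))   # -> 81.0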

CS 682 5

For two windows whose pixel values differ only by a scale factor, SSD can be very large; if the windows are the same, SSD is 0. For two non-zero binary patterns which differ in all pixels, SSD is (2W + 1)^2. Corner points differ from other points and therefore have high SSD with all other points.

CS 682 6 Corner Point Candidates

CS 682 7 Corner Point Selection

Cross-correlation results for 9×9 neighborhoods of the three image points on the previous slide. The corresponding similarity functions are shown from an 80° viewing angle. The two outside points are good feature point candidates, while the center point is not.

CS 682 8 Selected Corner Points

Selected feature points for 1/2 resolution: each point is shown with its 9×9 neighborhood.

CS 682 9 Selected Corner Points

Selected feature points for 1/4 and 1/8 resolutions: each point is shown with its 9×9 neighborhood.

CS 682 10 Problems with NCC

We need something more efficient for a practical corner detection algorithm.
An NCC-based algorithm detects points that are different from their neighborhood; we need a measure of cornerness.
Alternative: use a measure based on local structure.

CS 682 11 Local Structure Matrix

Consider the spatial image gradient [E_x, E_y]^T, computed for all points (x, y) of an image area (neighborhood). The matrix M, defined as

M = \begin{pmatrix} \sum E_x^2 & \sum E_x E_y \\ \sum E_x E_y & \sum E_y^2 \end{pmatrix},

where the sums are taken over the image neighborhood, captures the geometric structure of the gray level pattern. Note that the sums can be replaced by Gaussian/binomial smoothing. Binomial smoothing will improve the localization of corner points. WHY?
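
A minimal sketch of how the entries of M might be assembled at every pixel, assuming NumPy and SciPy; the gradient operator (np.gradient), the Gaussian weighting (scipy.ndimage.gaussian_filter), and sigma = 1.0 are illustrative choices standing in for the window sums / smoothing mentioned above:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def structure_matrix(image, sigma=1.0):
        """Per-pixel entries Sxx, Sxy, Syy of the local structure matrix M."""
        Ey, Ex = np.gradient(np.asarray(image, dtype=float))  # gradients along y (rows) and x (cols)
        # Gaussian-weighted sums over each neighborhood replace the plain window sums.
        Sxx = gaussian_filter(Ex * Ex, sigma)
        Sxy = gaussian_filter(Ex * Ey, sigma)
        Syy = gaussian_filter(Ey * Ey, sigma)
        return Sxx, Sxy, Syy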

CS 682 12

M is a symmetric matrix and can therefore be diagonalized by a rotation of the coordinate axes, so with no loss of generality we can think of M as a diagonal matrix:

M = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix},

where λ_1 and λ_2 are the eigenvalues of M. We can choose λ_1 as the larger eigenvalue, so that λ_1 ≥ λ_2 ≥ 0. If the neighborhood contains a corner, then we expect λ_1 > λ_2 > 0, and the larger the eigenvalues, the stronger (higher contrast) their corresponding edges. A corner is identified by two strong edges; therefore, since λ_1 ≥ λ_2, a corner is a location where λ_2 is sufficiently large.

CS 682 13

Solve det(M - λI) = 0 to obtain λ_1 and λ_2. There are three cases:

No structure (smooth variation): λ_1 ≈ λ_2 ≈ 0
1D structure (edge): λ_2 ≈ 0 (direction of edge), λ_1 large (normal to edge)
2D structure (corner): λ_1 and λ_2 both large and distinct

The eigenvectors n of M (M n = λ n) correspond to the directions of smallest and largest change at (x, y).
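
For a 2×2 symmetric matrix the eigenvalues have a simple closed form, which avoids calling a general eigensolver at every pixel. A sketch, reusing the per-pixel sums Sxx, Sxy, Syy from the previous sketch (all names are illustrative):

    import numpy as np

    def structure_eigenvalues(Sxx, Sxy, Syy):
        """Closed-form eigenvalues of [[Sxx, Sxy], [Sxy, Syy]], returned with lam1 >= lam2."""
        half_trace = 0.5 * (Sxx + Syy)
        root = np.sqrt((0.5 * (Sxx - Syy)) ** 2 + Sxy ** 2)
        return half_trace + root, half_trace - root

    # A strong vertical edge (all gradient energy along x) is the 1D-structure case:
    lam1, lam2 = structure_eigenvalues(100.0, 0.0, 0.0)
    print(lam1, lam2)   # -> 100.0 0.0 (edge: lam1 large, lam2 ~ 0)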

CS 682 14 Computing Cornerness

METHOD 1: Identify the corner points using the measure described earlier. Consider all image points for which λ_1 ≥ τ and λ_1/λ_2 ≤ κ. (For example, use τ equal to one-twentieth of the maximum λ_1 in the entire image, and κ = 2.5.) Decreasing τ or increasing κ will typically result in a larger number of candidate points.

METHOD 2: Consider all image points for which λ_1 ≥ τ.
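
A minimal sketch of METHOD 1 under the reconstruction above (λ_1 ≥ τ and λ_1/λ_2 ≤ κ); the function name, the eps guard against division by zero, and the default thresholds are illustrative assumptions:

    import numpy as np

    def select_corners(lam1, lam2, kappa=2.5, tau_fraction=1.0 / 20.0, eps=1e-12):
        """Boolean mask of corner candidates over per-pixel eigenvalue arrays."""
        tau = tau_fraction * lam1.max()            # tau = 1/20 of the maximum lam1 in the image
        ratio_ok = lam1 <= kappa * (lam2 + eps)    # lam1 / lam2 <= kappa, rewritten to avoid dividing by zero
        return (lam1 >= tau) & ratio_ok

    # Usage, building on the earlier sketches:
    # Sxx, Sxy, Syy = structure_matrix(image)
    # lam1, lam2 = structure_eigenvalues(Sxx, Sxy, Syy)
    # corner_mask = select_corners(lam1, lam2)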

CS 682 15 Example: selected corner points

CS 682 16 Selected corner points for full resolution.

CS 682 17 Selected corner points for 1/2 and 1/4 resolutions.