Lecture 05 Point Feature Detection and Matching


Institute of Informatics, Institute of Neuroinformatics. Lecture 05 Point Feature Detection and Matching. Davide Scaramuzza.

Lab Exercise 3 - Today afternoon, Room ETH HG E 1.1, from 13:15 to 15:00. Work description: implement the Harris corner detector and tracker.

Outline: Filters for feature detection. Point-feature extraction: today and next lecture.

Filters for Feature Detection. In the last lecture we used filters to reduce noise or enhance contours. However, filters can also be used to detect features. Goal: reduce the amount of data to process in later stages and discard redundancy to preserve only what is useful; this leads to lower bandwidth and memory storage. Edge detection (we have seen this already; edges can enable line or shape detection). Template matching. Keypoint detection.

Filters for Template Matching. Find locations in an image that are similar to a template. If we look at filters as templates, we can use correlation (like convolution, but without flipping the filter) to detect these locations. Template.

Filters for Template Matching. Find locations in an image that are similar to a template. If we look at filters as templates, we can use correlation (like convolution, but without flipping the filter) to detect these locations. Detected template. Correlation map.
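To make the "filters as templates" idea concrete, here is a minimal Python/NumPy sketch of template matching by correlation (grayscale float arrays assumed; the function and variable names are illustrative, not from the lecture):

```python
import numpy as np

def correlation_map(image, template):
    """Correlate a template with an image (like convolution, but without
    flipping the filter). The maximum of the returned response map is the
    detected template location. Minimal sketch: no padding, no normalization
    (the normalized variants NCC/ZNCC appear a few slides later)."""
    H, W = image.shape
    h, w = template.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + h, c:c + w] * template)
    return out

# usage with hypothetical arrays `scene` and `waldo_template`:
# scores = correlation_map(scene, waldo_template)
# row, col = np.unravel_index(np.argmax(scores), scores.shape)
```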

Where's Waldo? Template. Scene.

Where's Waldo? Template. Scene.

Where's Waldo? Template. Scene.

Summary of filters. Smoothing filter: has positive values, sums to 1 (preserves brightness of constant regions), removes high-frequency components: low-pass filter. Derivative filter: has opposite signs (used to get a high response in regions of high contrast), sums to 0 (no response in constant regions), highlights high-frequency components: high-pass filter. Filters as templates: highest response for regions that look similar to the filter.

Template Matching. What if the template is not identical to the object we want to detect? Template matching will only work if scale, orientation, illumination, and, in general, the appearance of the template and of the object to detect are very similar. What about the pixels in the template background (object-background problem)? Scene. Template. Scene. Template.

Correlation as Scalar Product. Consider images H and F as vectors; their correlation is $\langle H, F \rangle = |H|\,|F| \cos\theta$. In Normalized Cross-Correlation (NCC) we consider the unit vectors of H and F, hence we measure their similarity based on the angle. If H and F are identical, then NCC = 1: $NCC = \cos\theta = \frac{\langle H, F\rangle}{|H|\,|F|} = \frac{\sum_{u,v} H(u,v)\,F(u,v)}{\sqrt{\sum_{u,v} H(u,v)^2}\,\sqrt{\sum_{u,v} F(u,v)^2}}$.

Other Similarity Measures. Sum of Absolute Differences (SAD), used in optical mice: $SAD = \sum_{u,v} |H(u,v) - F(u,v)|$. Sum of Squared Differences (SSD): $SSD = \sum_{u,v} \big(H(u,v) - F(u,v)\big)^2$. Normalized Cross-Correlation (NCC), takes values between -1 and +1 (+1 = identical): $NCC = \frac{\sum_{u,v} H(u,v)\,F(u,v)}{\sqrt{\sum_{u,v} H(u,v)^2}\,\sqrt{\sum_{u,v} F(u,v)^2}}$.

Zero-mean SAD, SSD, NCC. To account for the difference in mean of the two images (typically caused by illumination changes), we subtract the mean value of each image, $\mu_H = \frac{1}{N}\sum_{u,v} H(u,v)$ and $\mu_F = \frac{1}{N}\sum_{u,v} F(u,v)$: Zero-mean Sum of Absolute Differences (ZSAD), used in optical mice: $ZSAD = \sum_{u,v} |(H(u,v)-\mu_H) - (F(u,v)-\mu_F)|$. Zero-mean Sum of Squared Differences (ZSSD): $ZSSD = \sum_{u,v} \big((H(u,v)-\mu_H) - (F(u,v)-\mu_F)\big)^2$. Zero-mean Normalized Cross-Correlation (ZNCC): $ZNCC = \frac{\sum_{u,v} (H(u,v)-\mu_H)(F(u,v)-\mu_F)}{\sqrt{\sum_{u,v} (H(u,v)-\mu_H)^2}\,\sqrt{\sum_{u,v} (F(u,v)-\mu_F)^2}}$. ZNCC is invariant to affine intensity changes $I' = \alpha I + \beta$. Are the others invariant to affine illumination changes?
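The similarity measures above translate directly into code. A minimal NumPy sketch, assuming H and F are two same-sized grayscale patches stored as float arrays (the helper names are mine, not from the slides):

```python
import numpy as np

def sad(H, F):  return np.abs(H - F).sum()
def ssd(H, F):  return ((H - F) ** 2).sum()
def ncc(H, F):
    return (H * F).sum() / (np.sqrt((H ** 2).sum()) * np.sqrt((F ** 2).sum()))

# zero-mean variants: subtract each patch's mean first
def zsad(H, F): return sad(H - H.mean(), F - F.mean())
def zssd(H, F): return ssd(H - H.mean(), F - F.mean())
def zncc(H, F): return ncc(H - H.mean(), F - F.mean())

# ZNCC is invariant to affine intensity changes I' = alpha * I + beta:
# patch = np.random.rand(7, 7)
# print(zncc(patch, 3.0 * patch + 10.0))   # ~= 1.0
```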

Census Transform. Maps an image patch to a bit string: if a pixel is greater than the center pixel, its corresponding bit is set to 1, else to 0. For a w x w window the string will be $w^2 - 1$ bits long. The two bit strings are compared using the Hamming distance, which is the number of bits that are different. This can be computed by counting the number of 1s in the Exclusive-OR (XOR) of the two bit strings. Advantages: more robust to the object-background problem; no square roots or divisions are required, thus very efficient to implement, especially on FPGA; intensities are considered relative to the center pixel of the patch, making it invariant to monotonic intensity changes.
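A small sketch of the census transform and of the Hamming-distance comparison (assuming odd-sized square patches as NumPy arrays; the function names are illustrative):

```python
import numpy as np

def census(patch):
    """Census transform of a w x w patch: one bit per non-center pixel,
    set to 1 where that pixel is greater than the center (w*w - 1 bits)."""
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    bits = (patch > center).flatten()
    keep = np.ones(patch.size, dtype=bool)
    keep[patch.size // 2] = False          # drop the center pixel itself
    return bits[keep]

def hamming(a, b):
    """Number of differing bits = number of 1s in the XOR of the bit strings."""
    return int(np.count_nonzero(np.logical_xor(a, b)))

# usage with two hypothetical 5x5 patches p1 and p2:
# distance = hamming(census(p1), census(p2))   # 0 means identical bit strings
```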

Outline: Filters for feature extraction. Point-feature (or keypoint) extraction: today and next lecture.

Point-feature extraction and matching - Example. Video from Forster, Pizzoli, Scaramuzza, "SVO: Semi-Direct Visual Odometry," ICRA'14.

What do we need point features for? Recall the Visual Odometry flow chart: Image sequence -> Feature detection -> Feature matching (tracking) -> Motion estimation (2D-2D, 3D-3D, 3D-2D) -> Local optimization. Example of feature tracks.

What do we need point features for? Keypoint extraction is the key ingredient of motion estimation! Image sequence -> Feature detection -> Feature matching (tracking) -> Motion estimation (2D-2D, 3D-3D, 3D-2D) -> Local optimization. (Figure: the unknown transformation between frames is estimated from matched image points $u_i$, $u_i'$ and 3D points $p_i$.)

Point features are also used for: Panorama stitching. Object recognition. 3D reconstruction. Place recognition. Indexing and database retrieval (e.g., Google Images or http://tineye.com).

Image matching: why is it challenging?

Image matching: why is it challenging? NASA Mars Rover images.

Image matching: why is it challenging? Answer below: NASA Mars Rover images with SIFT feature matches.

Example: panorama stitching. This panorama was generated using AUTOSTITCH: http://matthewalunbrown.com/autostitch/autostitch.html

Local features and alignment. We need to align the images. How would you do it?

Local features and alignment. Detect point features in both images.

Local features and alignment. Detect point features in both images. Find corresponding pairs.

Local features and alignment. Detect point features in both images. Find corresponding pairs. Use these pairs to align the images.

Matching with Features. Problem 1: Detect the same points independently in both images; otherwise there is no chance to match! We need a repeatable feature detector.

Matching with Features. Problem 2: For each point, identify its correct correspondence in the other image(s). We need a reliable and distinctive feature descriptor that is robust to geometric and illumination changes.

Geometric changes: Rotation. Scale (i.e., zoom). Viewpoint (i.e., perspective) changes.

Illumination changes. Typically, small illumination changes are modeled with an affine transformation, the so-called affine illumination changes: $I'(x,y) = \alpha I(x,y) + \beta$.

Invariant local features. Subset of local feature types designed to be invariant to common geometric and photometric transformations. Basic steps: 1. Detect distinctive interest points. 2. Extract invariant descriptors.

Main questions. What points are distinctive (i.e., features, keypoints, salient points) such that they are repeatable, i.e., can be re-detected from other views? How to describe a local region? How to establish correspondences, i.e., compute matches?

What is a distinctive feature? Consider the image pair below with extracted patches. Notice how some patches can be localized (or matched) with higher accuracy than others. Image 1. Image 2.

A corner is defined as the intersection of one or more edges. A corner has high localization accuracy. Corner detectors are good for VO. It is less distinctive than a blob. E.g., Harris, Shi-Tomasi, SUSAN, FAST. A blob is any other image pattern, which is not a corner, that differs significantly from its neighbors in intensity and texture (e.g., a connected region of pixels with similar color, a circle, etc.). It has less localization accuracy than a corner. Blob detectors are better for place recognition. It is more distinctive than a corner. E.g., MSER, LoG, DoG (SIFT), SURF, CenSurE. Davide Scaramuzza - University of Zurich - Robotics and Perception Group - rpg.ifi.uzh.ch

The Moravec Corner Detector (1980). How do we identify corners? We can easily recognize the point by looking through a small window. Shifting the window in any direction should give a large change in intensity (e.g., in SSD) in at least 2 directions. Flat region: no intensity change (i.e., SSD ~ 0 in all directions). Edge: no change along the edge direction (i.e., SSD ~ 0 along the edge, but large in the other directions). Corner: significant change in at least 2 directions (i.e., SSD large in all directions). H. Moravec, Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover, PhD thesis, Chapter 5, Stanford University Computer Science Department, 1980.

Corner detection. Key observation: in the region around a corner, the image gradient has two or more dominant directions. Corners are repeatable and distinctive. C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Proceedings of the 4th Alvey Vision Conference, 1988, pages 147-151.

The Moravec Corner Detector (1980). "Sums of squares of differences of pixels adjacent in each of four directions (horizontal, vertical and two diagonals) over each window are calculated, and the window's interest measure is the minimum of these four sums." [Moravec'80, Ch. 5] H. Moravec, Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover, PhD thesis, Chapter 5, Stanford University Computer Science Department, 1980.
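A sketch of Moravec's interest measure under those definitions (not the original 1980 implementation): compute the SSD between the window around a pixel and the window shifted by one pixel along each of the four directions, and keep the minimum. Grayscale float image assumed, border handling omitted:

```python
import numpy as np

def moravec_interest(I, x, y, half=3):
    """Minimum over four directional SSDs (horizontal, vertical, two diagonals)
    for the window centered at (x, y). Large at corners, ~0 on flat regions
    and along edges."""
    w = I[y - half:y + half + 1, x - half:x + half + 1]
    ssds = []
    for dx, dy in [(1, 0), (0, 1), (1, 1), (1, -1)]:
        shifted = I[y + dy - half:y + dy + half + 1,
                    x + dx - half:x + dx + half + 1]
        ssds.append(((w - shifted) ** 2).sum())
    return min(ssds)
```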

The Harris Corner Detector (1988). It implements the Moravec corner detector without having to physically shift the window, but rather by just looking at the patch itself, using differential calculus. C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Proceedings of the 4th Alvey Vision Conference, 1988, pages 147-151.

How do we implement this? Consider the reference patch centered at $(x, y)$ and the shifted window centered at $(x + \Delta x, y + \Delta y)$. The patch has size $P$. The Sum of Squared Differences between them is $SSD(\Delta x, \Delta y) = \sum_{x,y \in P} \big(I(x,y) - I(x+\Delta x, y+\Delta y)\big)^2$. Let $I_x = \frac{\partial I(x,y)}{\partial x}$ and $I_y = \frac{\partial I(x,y)}{\partial y}$. Approximating with a 1st-order Taylor expansion: $I(x+\Delta x, y+\Delta y) \approx I(x,y) + I_x(x,y)\,\Delta x + I_y(x,y)\,\Delta y$. This produces the approximation $SSD(\Delta x, \Delta y) \approx \sum_{x,y \in P} \big(I_x(x,y)\,\Delta x + I_y(x,y)\,\Delta y\big)^2$. This is a simple quadratic function in the two variables $(\Delta x, \Delta y)$.

How do we implement this? This can be written in matrix form as $SSD(\Delta x, \Delta y) \approx \begin{pmatrix} \Delta x & \Delta y \end{pmatrix} M \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}$, with $M = \sum_{x,y \in P} \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}$.

How do we implement this? This can be written in matrix form as $SSD(\Delta x, \Delta y) \approx \begin{pmatrix} \Delta x & \Delta y \end{pmatrix} M \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}$. Alternative way to write $M$ (the 2nd moment matrix): $M = \begin{pmatrix} \sum_P I_x^2 & \sum_P I_x I_y \\ \sum_P I_x I_y & \sum_P I_y^2 \end{pmatrix}$. Notice that these are NOT matrix products but pixel-wise products!
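A tiny sketch of building M for one patch from its derivative values, to emphasize that the entries are sums of pixel-wise products (the names are illustrative):

```python
import numpy as np

def second_moment_matrix(Ix_patch, Iy_patch):
    """M for a patch P, given its x- and y-derivatives over that patch."""
    sIxx = np.sum(Ix_patch * Ix_patch)   # pixel-wise products, then summed
    sIxy = np.sum(Ix_patch * Iy_patch)
    sIyy = np.sum(Iy_patch * Iy_patch)
    return np.array([[sIxx, sIxy],
                     [sIxy, sIyy]])

# quadratic SSD approximation for a small shift (dx, dy):
# M = second_moment_matrix(Ix_patch, Iy_patch)
# d = np.array([dx, dy]);  ssd_approx = d @ M @ d
```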

What does this matrix reveal? First consider an edge or a flat region. Edge: $M = \begin{pmatrix} \lambda_1 & 0 \\ 0 & 0 \end{pmatrix}$. Flat region: $M = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$. We can conclude that if either $\lambda$ is close to 0, then this is not a corner. Now let's consider an axis-aligned corner: $M = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$ (Corner). What if we have a corner that is not aligned with the image axes, e.g., with the dominant gradient directions at 45 degrees to the x and y axes? Then $M = \begin{pmatrix} \cos 45° & -\sin 45° \\ \sin 45° & \cos 45° \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \begin{pmatrix} \cos 45° & \sin 45° \\ -\sin 45° & \cos 45° \end{pmatrix}$.

General Case. Since $M$ is symmetric, it can always be decomposed into $M = R^{-1} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} R$. We can visualize $SSD(\Delta x, \Delta y) = \text{const}$ as an ellipse with axis lengths determined by the eigenvalues and the orientation of the two axes determined by $R$, i.e., by the eigenvectors of $M$. The two eigenvectors identify the directions of largest and smallest change of SSD: the direction of the fastest change of SSD (axis length proportional to $\lambda_{max}^{-1/2}$) and the direction of the slowest change of SSD (axis length proportional to $\lambda_{min}^{-1/2}$).

How to compute $\lambda_1$, $\lambda_2$, $R$ from $M$: eigenvalue/eigenvector review. You can easily prove that $\lambda_1$, $\lambda_2$ are the eigenvalues of $M$. The eigenvectors $x$ and eigenvalues $\lambda$ of a square matrix $A$ are the vectors and scalars that satisfy $A x = \lambda x$; the scalar $\lambda$ is the eigenvalue corresponding to $x$. The eigenvalues are found by solving $\det(A - \lambda I) = 0$. In our case, $A = M$ is a $2 \times 2$ matrix, so we have $\det \begin{pmatrix} m_{11} - \lambda & m_{12} \\ m_{21} & m_{22} - \lambda \end{pmatrix} = 0$. The solution is $\lambda_{1,2} = \frac{1}{2}\Big[ m_{11} + m_{22} \pm \sqrt{4\,m_{12}\,m_{21} + (m_{11} - m_{22})^2} \Big]$. Once you know $\lambda$, you find the two eigenvectors (i.e., the two columns of $R$) by solving $\begin{pmatrix} m_{11} - \lambda & m_{12} \\ m_{21} & m_{22} - \lambda \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 0$.
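The closed-form eigenvalues of the symmetric 2x2 matrix are easy to verify numerically; a small sketch (with $m_{21} = m_{12}$):

```python
import numpy as np

def eigenvalues_2x2(m11, m12, m22):
    """Closed-form eigenvalues of [[m11, m12], [m12, m22]],
    matching the formula on the slide."""
    root = np.sqrt(4.0 * m12 * m12 + (m11 - m22) ** 2)
    return 0.5 * (m11 + m22 + root), 0.5 * (m11 + m22 - root)

# sanity check against NumPy:
# print(eigenvalues_2x2(4.0, 1.0, 3.0))
# print(np.linalg.eigvalsh(np.array([[4.0, 1.0], [1.0, 3.0]])))
```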

Visualization of 2nd moment matrices.

Visualization of 2nd moment matrices. NB: the ellipses here are plotted proportionally to the eigenvalues, and not as iso-SSD ellipses as explained before. So small ellipses here denote a flat region and big ones a corner.

Interpreting the eigenvalues. Classification of image points using the eigenvalues of $M$: Corner: $\lambda_1$ and $\lambda_2$ are large ($R$ > threshold); SSD increases in all directions. Edge: $\lambda_2 \gg \lambda_1$ or $\lambda_1 \gg \lambda_2$. Flat region: $\lambda_1$ and $\lambda_2$ are small; SSD is almost constant in all directions. A corner can then be identified by checking whether the minimum of the two eigenvalues of $M$ is larger than a certain user-defined threshold: $R = \min(\lambda_1, \lambda_2) >$ threshold. $R$ is called the cornerness function. The corner detector using this criterion is called the «Shi-Tomasi» detector. J. Shi and C. Tomasi, "Good Features to Track," 9th IEEE Conference on Computer Vision and Pattern Recognition, June 1994.

Interpreting the eigenvalues. The computation of $\lambda_1$ and $\lambda_2$ is expensive, so Harris & Stephens suggested using a different cornerness function: $R = \lambda_1 \lambda_2 - k(\lambda_1 + \lambda_2)^2 = \det(M) - k\,\text{trace}^2(M)$, where $k$ is a magic number in the range 0.04 to 0.15. Corner: $\lambda_1$ and $\lambda_2$ are large ($R$ > threshold); SSD increases in all directions. Edge: $\lambda_2 \gg \lambda_1$ or $\lambda_1 \gg \lambda_2$. Flat region: $\lambda_1$ and $\lambda_2$ are small.

Harris Corner Detector. Algorithm:
1. Compute derivatives in the x and y directions ($I_x$, $I_y$), e.g., with the Sobel filter.
2. Compute $I_x^2$, $I_y^2$, $I_x I_y$.
3. Convolve with a box filter to get $\sum I_x^2$, $\sum I_y^2$, $\sum I_x I_y$, which are the entries of the matrix $M$ (optionally use a Gaussian filter instead of a box filter to avoid aliasing and to give more weight to the central pixels).
4. Compute the Harris corner measure $R$ (according to Shi-Tomasi or Harris).
5. Find points with large corner response ($R$ > threshold).
6. Take the points of local maxima of $R$.
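Steps 1-4 in a compact, illustrative Python/SciPy sketch (not the reference solution of the lab exercise); thresholding and non-maxima suppression, steps 5-6, are shown in the workflow slides that follow:

```python
import numpy as np
from scipy.signal import convolve2d

def cornerness(I, patch_size=9, kappa=0.06, use_shi_tomasi=False):
    """Cornerness map R for a grayscale float image I (steps 1-4)."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    Ix = convolve2d(I, sobel_x, mode='same')          # step 1: derivatives
    Iy = convolve2d(I, sobel_x.T, mode='same')
    box = np.ones((patch_size, patch_size))           # step 3: box filter
    sIxx = convolve2d(Ix * Ix, box, mode='same')      # step 2: pixel-wise products
    sIyy = convolve2d(Iy * Iy, box, mode='same')
    sIxy = convolve2d(Ix * Iy, box, mode='same')
    if use_shi_tomasi:                                # step 4: cornerness
        root = np.sqrt(4.0 * sIxy ** 2 + (sIxx - sIyy) ** 2)
        return 0.5 * (sIxx + sIyy - root)             # min eigenvalue of M
    return (sIxx * sIyy - sIxy ** 2) - kappa * (sIxx + sIyy) ** 2  # det - k*trace^2
```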

Harris Corner Detector. Image. Cornerness response R.

Harris vs. Shi-Tomasi. Shi-Tomasi operator. Harris operator.

Harris Detector: Workflow.

Harris Detector: Workflow. Compute the corner response R.

Harris Detector: Workflow. Find points with large corner response: R > threshold.

Harris Detector: Workflow. Take only the points of local maxima of the thresholded R (non-maxima suppression).
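A simple sketch of this thresholding and non-maxima suppression step (the nms_radius parameter and the function name are illustrative):

```python
import numpy as np

def select_keypoints(R, threshold, nms_radius=4):
    """Keep pixels whose response exceeds the threshold and is the maximum
    within their (2*nms_radius + 1)^2 neighborhood (borders skipped)."""
    keypoints = []
    H, W = R.shape
    for y in range(nms_radius, H - nms_radius):
        for x in range(nms_radius, W - nms_radius):
            if R[y, x] <= threshold:
                continue
            window = R[y - nms_radius:y + nms_radius + 1,
                       x - nms_radius:x + nms_radius + 1]
            if R[y, x] >= window.max():
                keypoints.append((x, y))
    return keypoints
```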

Harris Detector: Workflow.

Harris Detector: Some Properties. How does the size of the Harris detector (i.e., the patch size) affect the performance? Repeatability: how does the Harris detector behave under common image transformations? Can it re-detect the same image patches (Harris corners) when the image exhibits changes in rotation, viewpoint, scale (zoom), or illumination? Solution: identify the properties of the detector and adapt accordingly.

Harris Detector: Some Properties. Rotation invariance. Image 1. Image 2. The ellipse rotates, but its shape (i.e., the eigenvalues) remains the same. The corner response R is invariant to image rotation.

Harris Detector: Some Properties. But: non-invariant to image scale! Image 1. Image 2. All points will be classified as edges. Corner!

Harris Detector: Some Properties. Quality of the Harris detector for different scale changes. Repeatability = (# correspondences detected) / (# correspondences present). Scaling the image by a factor of 2: only ~18% of the correspondences get matched.

Summary (things to remember). Filters as templates; correlation as a scalar product. Similarity metrics: NCC, ZNCC, SSD, ZSSD, SAD, ZSAD, Census Transform. Point-feature detection: properties and invariance to transformations; challenges: rotation, scale, viewpoint and illumination changes; extraction: Moravec, Harris and Shi-Tomasi; rotation invariance. Reading: Ch. 4.1 and Ch. 8.1 of the Szeliski book; Ch. 4 of the Autonomous Mobile Robots book; Ch. 13.3 of the Peter Corke book.

Understanding Check. Are you able to:
- Explain what template matching is and how it is implemented?
- Explain what the limitations of template matching are? Can you use it to recognize cars?
- Illustrate the similarity metrics SSD, SAD, NCC, and the Census transform? What is the intuitive explanation behind SSD and NCC?
- Explain what good features to track are? In particular, can you explain what corners and blobs are, together with their pros and cons?
- Explain the Harris corner detector? In particular:
  - Use the Moravec definition of corner, edge, and flat region.
  - Show how to get the second moment matrix from the definition of SSD and the first-order approximation (show that this is a quadratic expression). What is the intrinsic interpretation of the second moment matrix using an ellipse?
  - What is the M matrix like for an edge, for a flat region, for an axis-aligned 90-degree corner, and for a non-axis-aligned 90-degree corner?
  - What do the eigenvalues of M reveal?
- Can you compare Harris detection with Shi-Tomasi detection?
- Can you explain whether the Harris detector is invariant to illumination or scale changes? What is the repeatability of the Harris detector after rescaling by a factor of 2?