Recap: edge detection Source: D. Lowe, L. Fei-Fei

Canny edge detector
1. Filter image with x, y derivatives of Gaussian
2. Find magnitude and orientation of gradient
3. Non-maximum suppression: thin multi-pixel-wide ridges down to single-pixel width
4. Thresholding and linking (hysteresis): define two thresholds, low and high; use the high threshold to start edge curves and the low threshold to continue them
MATLAB: edge(image, 'canny')
Source: D. Lowe, L. Fei-Fei

Hysteresis thresholding
Check that the maximum gradient value is sufficiently large. For drop-outs, use hysteresis: a high threshold to start edge curves and a low threshold to continue them.
Source: S. Seitz
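As a concrete illustration of these steps and the two hysteresis thresholds (a minimal sketch, not the course's MATLAB; the file name and threshold values are assumptions), OpenCV's Canny can be used like this:

```python
import cv2

# Minimal sketch; "input.png" and the threshold values are illustrative assumptions.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Pre-smoothing plays the role of the Gaussian in step 1; cv2.Canny then
# computes the gradient, applies non-maximum suppression, and links edges
# with hysteresis: threshold2 (high) starts edge curves, threshold1 (low)
# continues them.
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("edges.png", edges)
```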

To be continued: how do we know that one edge detector is better than another?

Interest Points and Corners
Computer Vision, James Hays. Read Szeliski 4.1.
Slides from Rick Szeliski, Svetlana Lazebnik, Derek Hoiem, and Grauman & Leibe 2008 AAAI Tutorial

Correspondence across views Correspondence: matching points, patches, edges, or regions across images

Example: estimating the fundamental matrix that relates two views
Slide from Silvio Savarese

Example: structure from motion

Applications
Feature points are used for:
Image alignment
3D reconstruction
Motion tracking
Robot navigation
Indexing and database retrieval
Object recognition

This class: interest points and local features Note: interest points = keypoints, also sometimes called features

This class: interest points
Suppose you have to click on some point, go away, and come back after I deform the image, and click on the same points again. Which points would you choose? (original vs. deformed)

Overview of Keypoint Matching
1. Find a set of distinctive keypoints (A1, A2, A3)
2. Define a region around each keypoint
3. Compute a local descriptor from the normalized region (f_A, f_B)
4. Match local descriptors: d(f_A, f_B) < T
K. Grauman, B. Leibe

Goals for Keypoints Detect points that are repeatable and distinctive

Invariant Local Features
Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters.
(figure: detected features and their descriptors)

Why extract features?
Motivation: panorama stitching. We have two images; how do we combine them?

Local features: main components
1) Detection: identify the interest points
2) Description: extract a vector feature descriptor surrounding each interest point, x^{(1)} = [x_1^{(1)}, \ldots, x_d^{(1)}]
3) Matching: determine correspondence between descriptors in two views, x^{(2)} = [x_1^{(2)}, \ldots, x_d^{(2)}]
Kristen Grauman
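One concrete way to run these three components end to end (illustrative only; the slides do not prescribe a library, and the file names are assumptions) is OpenCV's SIFT plus brute-force descriptor matching:

```python
import cv2

# Requires OpenCV >= 4.4 (SIFT included); "view1.png"/"view2.png" are assumptions.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                      # 1) detection + 2) description
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 3) matching: nearest neighbours in descriptor space with Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```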

Characteristics of good features
Repeatability: the same feature can be found in several images despite geometric and photometric transformations
Saliency: each feature is distinctive
Compactness and efficiency: many fewer features than image pixels
Locality: a feature occupies a relatively small area of the image; robust to clutter and occlusion

Goal: interest operator repeatability
We want to detect (at least some of) the same points in both images; otherwise there is no chance to find true matches. Yet we have to be able to run the detection procedure independently per image.
Kristen Grauman

Goal: descriptor distinctiveness
We want to be able to reliably determine which point goes with which. The descriptor must provide some invariance to geometric and photometric differences between the two views.
Kristen Grauman

Local features: main components
1) Detection: identify the interest points
2) Description: extract a vector feature descriptor surrounding each interest point
3) Matching: determine correspondence between descriptors in two views

Many Existing Detectors Available
Hessian & Harris [Beaudet 78], [Harris 88]
Laplacian, DoG [Lindeberg 98], [Lowe 1999]
Harris-/Hessian-Laplace [Mikolajczyk & Schmid 01]
Harris-/Hessian-Affine [Mikolajczyk & Schmid 04]
EBR and IBR [Tuytelaars & Van Gool 04]
MSER [Matas 02]
Salient Regions [Kadir & Brady 01]
Others
K. Grauman, B. Leibe

Corner Detection: Basic Idea
We should easily recognize the point by looking through a small window. Shifting the window in any direction should give a large change in intensity.
Flat region: no change in all directions
Edge: no change along the edge direction
Corner: significant change in all directions
Source: A. Efros

Corner Detection: Mathematics
Change in appearance of window w(x,y) for the shift [u,v]:
E(u,v) = \sum_{x,y} w(x,y) \, [I(x+u, y+v) - I(x,y)]^2
(figure: the window w(x,y) around a point (x,y) and the error surface E(u,v), shown at the shift E(3,2))

Corner Detection: Mathematics
Change in appearance of window w(x,y) for the shift [u,v]:
E(u,v) = \sum_{x,y} w(x,y) \, [I(x+u, y+v) - I(x,y)]^2
(figure: the window w(x,y) around a point (x,y) and the error surface E(u,v), shown at the shift E(0,0))

Corner Detection: Mathematics
Change in appearance of window w(x,y) for the shift [u,v]:
E(u,v) = \sum_{x,y} w(x,y) \, [I(x+u, y+v) - I(x,y)]^2
where w(x,y) is the window function, I(x+u, y+v) the shifted intensity, and I(x,y) the intensity. The window function w(x,y) is either 1 inside the window and 0 outside, or a Gaussian.
Source: R. Szeliski

Corner Detection: Mathematics
Change in appearance of window w(x,y) for the shift [u,v]:
E(u,v) = \sum_{x,y} w(x,y) \, [I(x+u, y+v) - I(x,y)]^2
We want to find out how this function behaves for small shifts.
(figure: the surface E(u,v))

Corner Detection: Mathematics
Change in appearance of window w(x,y) for the shift [u,v]:
E(u,v) = \sum_{x,y} w(x,y) \, [I(x+u, y+v) - I(x,y)]^2
We want to find out how this function behaves for small shifts. But this is very slow to compute naively: O(window_width^2 * shift_range^2 * image_width^2). For example, O(11^2 * 11^2 * 600^2) = 5.2 billion of these computations, i.e. 14.6 thousand per pixel in your image.
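For intuition, here is a deliberately naive sketch of this sum, assuming a grayscale NumPy array img and a point far enough from the image border; it makes the window-area times shift-area cost per pixel explicit:

```python
import numpy as np

def ssd_surface(img, x0, y0, half=5, max_shift=5):
    """Brute-force E(u, v) = sum over the window of [I(x+u, y+v) - I(x, y)]^2
    for a (2*half+1)^2 window centred at (x0, y0), with uniform w(x, y) = 1.
    (x0, y0) must be at least half + max_shift pixels from every border."""
    img = img.astype(np.float64)
    E = np.zeros((2 * max_shift + 1, 2 * max_shift + 1))
    for u in range(-max_shift, max_shift + 1):
        for v in range(-max_shift, max_shift + 1):
            for x in range(x0 - half, x0 + half + 1):
                for y in range(y0 - half, y0 + half + 1):
                    d = img[y + v, x + u] - img[y, x]   # shifted minus original intensity
                    E[v + max_shift, u + max_shift] += d * d
    return E
```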

Corner Detection: Mathematics
Change in appearance of window w(x,y) for the shift [u,v]:
E(u,v) = \sum_{x,y} w(x,y) \, [I(x+u, y+v) - I(x,y)]^2
We want to find out how this function behaves for small shifts. Recall the Taylor series expansion: a function f can be approximated at a point a as
f(x) \approx f(a) + f'(a)(x-a) + \frac{1}{2} f''(a)(x-a)^2

Recall: Taylor series expansion
A function f can be approximated around a point a as
f(x) \approx \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x-a)^n
(figure: approximations of f(x) = e^x centered at x = 0, truncated at increasing orders)

Corner Detection: Mathematics
Change in appearance of window w(x,y) for the shift [u,v]:
E(u,v) = \sum_{x,y} w(x,y) \, [I(x+u, y+v) - I(x,y)]^2
We want to find out how this function behaves for small shifts. The local quadratic approximation of E(u,v) in the neighborhood of (0,0) is given by the second-order Taylor expansion:
E(u,v) \approx E(0,0) + [u \ v] \begin{bmatrix} E_u(0,0) \\ E_v(0,0) \end{bmatrix} + \frac{1}{2} [u \ v] \begin{bmatrix} E_{uu}(0,0) & E_{uv}(0,0) \\ E_{uv}(0,0) & E_{vv}(0,0) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}

Corner Detection: Mathematics
Local quadratic approximation of E(u,v) in the neighborhood of (0,0) is given by the second-order Taylor expansion:
E(u,v) \approx E(0,0) + [u \ v] \begin{bmatrix} E_u(0,0) \\ E_v(0,0) \end{bmatrix} + \frac{1}{2} [u \ v] \begin{bmatrix} E_{uu}(0,0) & E_{uv}(0,0) \\ E_{uv}(0,0) & E_{vv}(0,0) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}
Here E(0,0) is always 0, and the first derivatives E_u(0,0), E_v(0,0) are also 0, so only the second-order term survives.

Corner Detection: Mathematics
The quadratic approximation simplifies to
E(u,v) \approx [u \ v] \, M \begin{bmatrix} u \\ v \end{bmatrix}
where M is a second moment matrix computed from image derivatives:
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
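Spelling out the step between the previous two slides (the standard derivation, written out here rather than transcribed): approximate the shifted intensity to first order, I(x+u, y+v) \approx I(x,y) + I_x u + I_y v, and substitute into E(u,v):

```latex
\begin{aligned}
E(u,v) &= \sum_{x,y} w(x,y)\,\bigl[I(x+u,\,y+v) - I(x,y)\bigr]^2 \\
       &\approx \sum_{x,y} w(x,y)\,\bigl(I_x u + I_y v\bigr)^2 \\
       &= \begin{bmatrix} u & v \end{bmatrix}
          \left( \sum_{x,y} w(x,y)
          \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \right)
          \begin{bmatrix} u \\ v \end{bmatrix}
        = \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}.
\end{aligned}
```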

Corners as distinctive interest points
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x I_x & I_x I_y \\ I_x I_y & I_y I_y \end{bmatrix}
a 2 x 2 matrix of image derivatives (averaged in a neighborhood of a point).
Notation: I_x = \partial I / \partial x, I_y = \partial I / \partial y, I_x I_y = (\partial I / \partial x)(\partial I / \partial y)

Interpreting the second moment matrix
The surface E(u,v) is locally approximated by a quadratic form. Let's try to understand its shape.
E(u,v) \approx [u \ v] \, M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}

Interpreting the second moment matrix
Consider a horizontal slice of E(u,v):
[u \ v] \, M \begin{bmatrix} u \\ v \end{bmatrix} = \mathrm{const}
This is the equation of an ellipse.

Interpreting the second moment matrix
Consider a horizontal slice of E(u,v):
[u \ v] \, M \begin{bmatrix} u \\ v \end{bmatrix} = \mathrm{const}
This is the equation of an ellipse. Diagonalization of M:
M = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R
The axis lengths of the ellipse are determined by the eigenvalues and the orientation is determined by R: the semi-axis along the direction of the fastest change has length (\lambda_{max})^{-1/2}, and the one along the direction of the slowest change has length (\lambda_{min})^{-1/2}.
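A tiny numeric check of this picture (the matrix values are made up for illustration): eigendecompose a 2 x 2 second moment matrix and read off the ellipse axes and orientation.

```python
import numpy as np

# Illustrative second moment matrix (made-up values, not from the lecture).
M = np.array([[9.0, 2.0],
              [2.0, 3.0]])

# Symmetric eigendecomposition: M = R^{-1} diag(lambda_1, lambda_2) R.
eigvals, eigvecs = np.linalg.eigh(M)          # eigenvalues in ascending order
lam_min, lam_max = eigvals

# For the ellipse [u v] M [u v]^T = 1, the semi-axis lengths are lambda^{-1/2};
# the fastest-change direction (largest eigenvalue) has the shortest axis.
print("axis along fastest change:", lam_max ** -0.5)
print("axis along slowest change:", lam_min ** -0.5)
print("orientation (columns are eigenvectors):\n", eigvecs)
```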

Interpreting the eigenvalues
Classification of image points using the eigenvalues of M:
Corner: λ1 and λ2 are large, λ1 ~ λ2; E increases in all directions
Edge: λ1 >> λ2 (or λ2 >> λ1)
Flat region: λ1 and λ2 are small; E is almost constant in all directions

Corner response function
R = \det(M) - \alpha \, \mathrm{trace}(M)^2 = \lambda_1 \lambda_2 - \alpha (\lambda_1 + \lambda_2)^2
α: constant (0.04 to 0.06)
Corner: R > 0
Edge: R < 0
Flat region: |R| small

Harris corner detector
1) Compute the M matrix for each image window to get their cornerness scores.
2) Find points whose surrounding window gave a large corner response (R > threshold).
3) Take the points of local maxima, i.e., perform non-maximum suppression.
C. Harris and M. Stephens. A Combined Corner and Edge Detector. Proceedings of the 4th Alvey Vision Conference, pages 147-151, 1988.

Harris Detector [Harris88]
Second moment matrix:
\mu(\sigma_I, \sigma_D) = g(\sigma_I) * \begin{bmatrix} I_x^2(\sigma_D) & I_x I_y(\sigma_D) \\ I_x I_y(\sigma_D) & I_y^2(\sigma_D) \end{bmatrix}
(recall \det M = \lambda_1 \lambda_2 and \mathrm{trace}\, M = \lambda_1 + \lambda_2)
1. Image derivatives I_x, I_y (optionally, blur first)
2. Square of derivatives: I_x^2, I_y^2, I_x I_y
3. Gaussian filter g(\sigma_I): g(I_x^2), g(I_y^2), g(I_x I_y)
4. Cornerness function (both eigenvalues are strong):
har = \det[\mu(\sigma_I, \sigma_D)] - \alpha [\mathrm{trace}(\mu(\sigma_I, \sigma_D))]^2 = g(I_x^2)\, g(I_y^2) - [g(I_x I_y)]^2 - \alpha [g(I_x^2) + g(I_y^2)]^2
5. Non-maxima suppression of har
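These five steps translate almost line for line into NumPy/OpenCV. The following is an illustrative sketch (not the original course code; the sigma, alpha, and threshold values are assumptions):

```python
import cv2
import numpy as np

def harris_response(gray, sigma_d=1.0, sigma_i=2.0, alpha=0.05):
    """Steps 1-4: derivatives, their squares, Gaussian weighting, cornerness."""
    g = cv2.GaussianBlur(gray.astype(np.float64), (0, 0), sigma_d)   # optional pre-blur
    Ix = cv2.Sobel(g, cv2.CV_64F, 1, 0, ksize=3)                     # 1. image derivatives
    Iy = cv2.Sobel(g, cv2.CV_64F, 0, 1, ksize=3)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy                        # 2. squares of derivatives
    Sxx = cv2.GaussianBlur(Ixx, (0, 0), sigma_i)                     # 3. Gaussian window g(sigma_I)
    Syy = cv2.GaussianBlur(Iyy, (0, 0), sigma_i)
    Sxy = cv2.GaussianBlur(Ixy, (0, 0), sigma_i)
    det_m = Sxx * Syy - Sxy * Sxy                                    # 4. har = det - alpha * trace^2
    trace_m = Sxx + Syy
    return det_m - alpha * trace_m ** 2

def harris_corners(gray, thresh_rel=0.01, nms_size=5):
    """Step 5: threshold the response and keep only local maxima (non-maximum suppression)."""
    R = harris_response(gray)
    local_max = cv2.dilate(R, np.ones((nms_size, nms_size), np.uint8))
    mask = (R == local_max) & (R > thresh_rel * R.max())
    ys, xs = np.nonzero(mask)
    return list(zip(xs, ys))                                         # (x, y) corner locations
```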

Harris Detector: Steps

Harris Detector: Steps Compute corner response R

Harris Detector: Steps Find points with large corner response: R>threshold

Harris Detector: Steps Take only the points of local maxima of R

Harris Detector: Steps

Invariance and covariance
We want corner locations to be invariant to photometric transformations and covariant to geometric transformations.
Invariance: image is transformed and corner locations do not change
Covariance: if we have two transformed versions of the same image, features should be detected in corresponding locations

Affine intensity change: I → a I + b
Only derivatives are used => invariance to intensity shift I → I + b
Intensity scaling I → a I scales the response R, so which points clear the threshold can change (figure: R vs. x (image coordinate), with threshold)
Partially invariant to affine intensity change

Image translation
Derivatives and window function are shift-invariant. Corner location is covariant w.r.t. translation.

Image rotation
Second moment ellipse rotates but its shape (i.e., eigenvalues) remains the same. Corner location is covariant w.r.t. rotation.

Scaling
Zoom in on a corner and every local window sees only an edge, so all points will be classified as edges. Corner location is not covariant to scaling!