Feature extraction: Corners and blobs


Review: Linear filtering and edge detection Name two different kinds of image noise Name a non-linear smoothing filter What advantages does median filtering have over Gaussian smoothing? What is aliasing? How do we find edges? Why do we need to smooth before computing image derivatives? What are some characteristics of an optimal edge detector? What is nonmaximum suppression? What is hysteresis thresholding?

Why extract features? Motivation: panorama stitching We have two images; how do we combine them?

Why extract features? Motivation: panorama stitching We have two images; how do we combine them? Step 1: extract features Step 2: match features

Why extract features? Motivation: panorama stitching We have two images; how do we combine them? Step 1: extract features Step 2: match features Step 3: align images

Characteristics of good features Repeatability The same feature can be found in several images despite geometric and photometric transformations Saliency Each feature has a distinctive description Compactness and efficiency Many fewer features than image pixels Locality A feature occupies a relatively small area of the image; robust to clutter and occlusion

Applications Feature points are used for: Motion tracking Image alignment 3D reconstruction Object recognition Indexing and database retrieval Robot navigation

Finding Corners Key property: in the region around a corner, the image gradient has two or more dominant directions. Corners are repeatable and distinctive. C. Harris and M. Stephens, "A Combined Corner and Edge Detector," Proceedings of the 4th Alvey Vision Conference, pages 147-151, 1988.

The Basic Idea We should easily recognize the point by looking through a small window. Shifting a window in any direction should give a large change in intensity. Flat region: no change in all directions. Edge: no change along the edge direction. Corner: significant change in all directions. Source: A. Efros

Harris Detector: Mathematics Change of intensity for the shift [u,v]: $E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u, y+v) - I(x,y)]^2$, where w(x,y) is the window function (1 inside the window and 0 outside, or a Gaussian), I(x+u, y+v) the shifted intensity, and I(x,y) the original intensity. Source: R. Szeliski

Harris Detector: Mathematics Change of intensity for the shift [u,v]: $E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u, y+v) - I(x,y)]^2$. Second-order Taylor expansion of E(u,v) about (0,0) (bilinear approximation for small shifts): $E(u,v) \approx E(0,0) + [u\ v] \begin{bmatrix} E_u(0,0) \\ E_v(0,0) \end{bmatrix} + \frac{1}{2} [u\ v] \begin{bmatrix} E_{uu}(0,0) & E_{uv}(0,0) \\ E_{uv}(0,0) & E_{vv}(0,0) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}$

Harris Detector: Mathematics The bilinear approximation simplifies to $E(u,v) \approx [u\ v]\,M \begin{bmatrix} u \\ v \end{bmatrix}$, where M is a second moment matrix computed from image derivatives: $M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}$

Interpreting the second moment matrix First, consider an axis-aligned corner:

Interpreting the second moment matrix First, consider an axis-aligned corner: $M = \sum \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$ This means the dominant gradient directions align with the x or y axis. If either λ is close to 0, then this is not a corner, so look for locations where both are large. Slide credit: David Jacobs

General Case Since M is symmetric, we have $M = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R$. We can visualize M as an ellipse with axis lengths determined by the eigenvalues and orientation determined by R. Ellipse equation: $[u\ v]\,M \begin{bmatrix} u \\ v \end{bmatrix} = \text{const}$. The axis along the direction of the fastest change has length $(\lambda_{\max})^{-1/2}$; the axis along the direction of the slowest change has length $(\lambda_{\min})^{-1/2}$.

Visualization of second moment matrices

Visualization of second moment matrices

Interpreting the eigenvalues Classification of image points using eigenvalues of M: Corner: λ1 and λ2 are large, λ1 ~ λ2; E increases in all directions. Edge: λ1 >> λ2 (or λ2 >> λ1). Flat region: λ1 and λ2 are small; E is almost constant in all directions.

Corner response function $R = \det(M) - \alpha\,\operatorname{trace}(M)^2 = \lambda_1 \lambda_2 - \alpha(\lambda_1 + \lambda_2)^2$, where α is a constant (0.04 to 0.06). Corner: R > 0. Edge: R < 0. Flat region: |R| small.
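The sign pattern above is easy to verify from the eigenvalues alone. A minimal sketch (the function name, α = 0.05, and the sample eigenvalues are illustrative choices, not from the slides):

```python
def corner_response(lam1, lam2, alpha=0.05):
    """R = det(M) - alpha * trace(M)^2, written in terms of the eigenvalues of M."""
    return lam1 * lam2 - alpha * (lam1 + lam2) ** 2

# Corner: both eigenvalues large and comparable -> R > 0
# Edge:   one eigenvalue dominates              -> R < 0
# Flat:   both eigenvalues small                -> |R| small
```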

Harris Detector: Steps

Harris Detector: Steps Compute corner response R

Harris Detector: Steps Find points with large corner response: R>threshold

Harris Detector: Steps Take only the points of local maxima of R

Harris Detector: Steps

Harris detector: Summary of steps 1. Compute Gaussian derivatives at each pixel 2. Compute second moment matrix M in a Gaussian window around each pixel 3. Compute corner response function R 4. Threshold R 5. Find local maxima of response function (nonmaximum suppression)
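The five steps can be sketched with SciPy. This is an illustrative implementation, not the reference one: the function name harris_corners, the scales sigma_d and sigma_i, alpha, and the threshold are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_corners(img, sigma_d=1.0, sigma_i=2.0, alpha=0.05, thresh=1e-5):
    """Sketch of the five steps above; returns (row, col) corner coordinates."""
    img = img.astype(float)
    # 1. Gaussian derivatives at each pixel
    Ix = gaussian_filter(img, sigma_d, order=(0, 1))
    Iy = gaussian_filter(img, sigma_d, order=(1, 0))
    # 2. Second moment matrix entries, accumulated in a Gaussian window
    A = gaussian_filter(Ix * Ix, sigma_i)
    B = gaussian_filter(Ix * Iy, sigma_i)
    C = gaussian_filter(Iy * Iy, sigma_i)
    # 3. Corner response R = det(M) - alpha * trace(M)^2
    R = A * C - B * B - alpha * (A + C) ** 2
    # 4.-5. Threshold, then keep only local maxima (nonmaximum suppression)
    mask = (R > thresh) & (R == maximum_filter(R, size=3))
    return np.argwhere(mask)
```

On a white square against a black background, this should fire near the square's four corners and nowhere along its edges (where R < 0) or in flat regions (where R ≈ 0).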

Invariance We want features to be detected despite geometric or photometric changes in the image: if we have two transformed versions of the same image, features should be detected in corresponding locations

Models of Image Change Geometric Rotation Scale Affine valid for: orthographic camera, locally planar object Photometric Affine intensity change (I → a I + b)

Harris Detector: Invariance Properties Rotation Ellipse rotates but its shape (i.e. eigenvalues) remains the same Corner response R is invariant to image rotation

Harris Detector: Invariance Properties Affine intensity change Only derivatives are used => invariant to intensity shift I → I + b. Intensity scaling I → a I rescales the response R, so corners can move above or below a fixed threshold. Partially invariant to affine intensity change

Harris Detector: Invariance Properties Scaling When the image is enlarged, a corner can be spread over several windows and all points classified as edges. Not invariant to scaling

Scale-invariant feature detection Goal: independently detect corresponding regions in scaled versions of the same image Need scale selection mechanism for finding characteristic region size that is covariant with the image transformation

Scale-invariant features: Blobs

Recall: Edge detection Signal f; derivative of Gaussian $\frac{d}{dx}g$; response $f * \frac{d}{dx}g$. Edge = maximum of derivative. Source: S. Seitz

Edge detection, Take 2 Signal f; second derivative of Gaussian (Laplacian) $\frac{d^2}{dx^2}g$; response $f * \frac{d^2}{dx^2}g$. Edge = zero crossing of second derivative. Source: S. Seitz
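Both 1D pictures can be reproduced numerically with SciPy's Gaussian derivative filters. A sketch, assuming an arbitrary step location and σ = 3:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

f = np.zeros(100)
f[50:] = 1.0                                  # step edge between indices 49 and 50

d1 = gaussian_filter1d(f, sigma=3, order=1)   # f * d/dx g
d2 = gaussian_filter1d(f, sigma=3, order=2)   # f * d^2/dx^2 g

# Take 1: edge = maximum (in magnitude) of the first derivative
edge_from_max = int(np.argmax(np.abs(d1)))
# Take 2: edge = zero crossing of the second derivative (d2 changes sign there)
```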

From edges to blobs Edge = ripple Blob = superposition of two ripples Spatial selection: the magnitude of the Laplacian response will achieve a maximum at the center of the blob, provided the scale of the Laplacian is matched to the scale of the blob

Scale selection We want to find the characteristic scale of the blob by convolving it with Laplacians at several scales and looking for the maximum response. However, the Laplacian response decays as scale increases (shown for an original signal of radius 8 and increasing σ). Why does this happen?

Scale normalization The response of a derivative of Gaussian filter to a perfect step edge decreases as σ increases; the peak response is $\frac{1}{\sigma\sqrt{2\pi}}$.

Scale normalization The response of a derivative of Gaussian filter to a perfect step edge decreases as σ increases. To keep the response the same (scale-invariant), we must multiply the Gaussian derivative by σ. The Laplacian is the second Gaussian derivative, so it must be multiplied by σ².
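The σ² factor can be checked on a 1D box of radius 8 (a sketch; the signal size and scale grid are arbitrary choices): the unnormalized second-derivative response decays with σ, while the σ²-normalized response peaks near the box radius.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

signal = np.zeros(200)
signal[92:108] = 1.0          # box of radius 8 centered near index 100

sigmas = np.arange(1.0, 20.0)
center = 100
# Unnormalized second-derivative-of-Gaussian response at the box center
unnorm = [abs(gaussian_filter1d(signal, s, order=2)[center]) for s in sigmas]
# Scale-normalized response: multiply by sigma**2
norm = [s**2 * u for s, u in zip(sigmas, unnorm)]

best = sigmas[int(np.argmax(norm))]   # scale of peak normalized response
```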

Effect of scale normalization (Panels: original signal; unnormalized Laplacian response; scale-normalized Laplacian response, with its maximum marked.)

Blob detection in 2D Laplacian of Gaussian: circularly symmetric operator for blob detection in 2D: $\nabla^2 g = \frac{\partial^2 g}{\partial x^2} + \frac{\partial^2 g}{\partial y^2}$

Blob detection in 2D Laplacian of Gaussian: circularly symmetric operator for blob detection in 2D. Scale-normalized: $\nabla^2_{\text{norm}} g = \sigma^2 \left( \frac{\partial^2 g}{\partial x^2} + \frac{\partial^2 g}{\partial y^2} \right)$

Scale selection The 2D Laplacian is given by $(x^2 + y^2 - 2\sigma^2)\,e^{-(x^2 + y^2)/(2\sigma^2)}$ (up to scale). Therefore, for a binary circle of radius r, the Laplacian achieves a maximum at $\sigma = r/\sqrt{2}$. (Plot: Laplacian response at the circle's center vs. scale σ, peaking at $r/\sqrt{2}$.)

Characteristic scale We define the characteristic scale as the scale that produces the peak of the Laplacian response. T. Lindeberg (1998). "Feature detection with automatic scale selection." International Journal of Computer Vision 30 (2): pp. 77-116.

Scale-space blob detector 1. Convolve image with scale-normalized Laplacian at several scales 2. Find maxima of squared Laplacian response in scale-space
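The two steps can be sketched directly with SciPy's gaussian_laplace. The function name log_blobs and the squared-response threshold are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_blobs(img, sigmas, sq_thresh=0.1):
    """Return (row, col, sigma) triples at scale-space maxima of the squared NLoG."""
    # 1. Scale-normalized Laplacian at each scale (note the sigma**2 factor)
    stack = np.array([s**2 * gaussian_laplace(img.astype(float), s) for s in sigmas])
    # 2. Maxima of the squared response over a 3x3x3 scale-space neighborhood
    sq = stack ** 2
    mask = (sq == maximum_filter(sq, size=3)) & (sq > sq_thresh)
    return [(r, c, sigmas[i]) for i, r, c in np.argwhere(mask)]
```

For a binary disk of radius r, the strongest response should appear at the disk center at a scale near σ = r/√2, matching the scale-selection analysis above.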

Scale-space blob detector: Example

Scale-space blob detector: Example

Scale-space blob detector: Example

Efficient implementation Approximating the Laplacian with a difference of Gaussians: $L = \sigma^2 \left( G_{xx}(x, y, \sigma) + G_{yy}(x, y, \sigma) \right)$ (Laplacian); $DoG = G(x, y, k\sigma) - G(x, y, \sigma)$ (Difference of Gaussians)
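Since DoG ≈ (k − 1) σ² ∇²G, the two filtered images should be nearly proportional, which is easy to check numerically. A sketch on random-noise input, with arbitrary choices σ = 2 and k = 1.6:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))

sigma, k = 2.0, 1.6
# Difference of Gaussians at scales sigma and k*sigma
dog = gaussian_filter(img, k * sigma) - gaussian_filter(img, sigma)
# Scale-normalized Laplacian, rescaled by (k - 1) to match the DoG amplitude
nlog = (k - 1) * sigma**2 * gaussian_laplace(img, sigma)

# Correlation between the two responses (close to 1 if the approximation holds)
corr = np.corrcoef(dog.ravel(), nlog.ravel())[0, 1]
```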

Efficient implementation David G. Lowe. "Distinctive image features from scale-invariant keypoints." IJCV 60 (2), pp. 91-110, 2004.

From scale invariance to affine invariance

Affine adaptation Recall: $M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R$. We can visualize M as an ellipse with axis lengths determined by the eigenvalues ($(\lambda_{\max})^{-1/2}$ along the direction of the fastest change, $(\lambda_{\min})^{-1/2}$ along the direction of the slowest change) and orientation determined by R. Ellipse equation: $[u\ v]\,M \begin{bmatrix} u \\ v \end{bmatrix} = \text{const}$

Affine adaptation The second moment ellipse can be viewed as the characteristic shape of a region We can normalize the region by transforming the ellipse into a unit circle
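The normalizing transform can be written explicitly from the eigendecomposition of M. A sketch (the function name is illustrative): mapping $x \mapsto M^{1/2} x$ sends the ellipse $x^\top M x = \text{const}$ to a circle.

```python
import numpy as np

def whitening_transform(M):
    """A = M^(1/2) via eigendecomposition; y = A x maps x^T M x = 1 to the unit circle."""
    vals, vecs = np.linalg.eigh(M)          # M symmetric positive definite
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T
```

Composing the result with any rotation or flip gives an equally valid normalization, which is exactly the orientation ambiguity of the ellipse-to-circle mapping.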

Orientation ambiguity There is no unique transformation from an ellipse to a unit circle We can rotate or flip a unit circle, and it still stays a unit circle

Orientation ambiguity There is no unique transformation from an ellipse to a unit circle: we can rotate or flip a unit circle, and it still stays a unit circle. So, to assign a unique orientation to keypoints: Create histogram of local gradient directions (0 to 2π) in the patch. Assign canonical orientation at peak of smoothed histogram.

Affine adaptation Problem: the second moment window determined by weights w(x,y) must match the characteristic shape of the region Solution: iterative approach Use a circular window to compute second moment matrix Perform affine adaptation to find an ellipse-shaped window Recompute second moment matrix using new window and iterate

Iterative affine adaptation K. Mikolajczyk and C. Schmid, "Scale and Affine invariant interest point detectors," IJCV 60(1):63-86, 2004. http://www.robots.ox.ac.uk/~vgg/research/affine/

Affine adaptation example Scale-invariant regions (blobs)

Affine adaptation example Affine-adapted blobs

Summary: Feature extraction Extract affine regions Normalize regions Eliminate rotational ambiguity Compute appearance descriptors, e.g. SIFT (Lowe 2004)

Invariance vs. covariance Invariance: features(transform(image)) = features(image) Covariance: features(transform(image)) = transform(features(image)) Covariant detection => invariant description

Assignment 1 due February 14 Implement the Laplacian blob detector: http://www.cs.unc.edu/~lazebnik/spring08/assignment1.html

Next time: Fitting