Harris Corner Detector


Multimedia Computing: Algorithms, Systems, and Applications: Feature Extraction. By Dr. Yu Cao, Department of Computer Science, The University of Massachusetts Lowell, Lowell, MA 01854, USA. Part of the slides were adopted from Dr. A. Hanson, Dr. Z.G. Zhu, Dr. S. Thrun, Dr. M. Pollefeys, A. Baraniecki, Wikipedia, etc. The copyright of those parts belongs to the original authors.

Harris Corner Detector

Finding Corners. Edge detectors perform poorly at corners. Corners provide repeatable points for matching, so they are worth detecting. Idea: exactly at a corner the gradient is ill defined, but in the region around a corner the gradient takes two or more distinct values.

The Harris corner detector. Form the second-moment matrix by summing over a small region around the hypothetical corner:

    C = [ Σ Ix²     Σ Ix·Iy ]
        [ Σ Ix·Iy   Σ Iy²   ]

where Ix and Iy are the image gradients with respect to x and y (so Ix·Iy is the gradient with respect to x times the gradient with respect to y). The matrix is symmetric. Slide credit: David Jacobs
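The second-moment matrix above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the slides' own code: np.gradient stands in for the derivative masks used later, and the synthetic corner patch is made up for the demo.

```python
import numpy as np

def second_moment_matrix(patch):
    """Second-moment (structure) matrix C summed over a grayscale patch."""
    # np.gradient returns derivatives along rows (Iy) then columns (Ix).
    Iy, Ix = np.gradient(patch.astype(float))
    return np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                     [np.sum(Ix * Iy), np.sum(Iy * Iy)]])

# A synthetic corner: a bright quadrant inside a dark 11x11 patch.
patch = np.zeros((11, 11))
patch[5:, 5:] = 1.0

C = second_moment_matrix(patch)
lam = np.linalg.eigvalsh(C)   # both eigenvalues are large at a corner
```

On a flat patch both eigenvalues would be near zero, and on a straight edge only one would be large; the corner is the case where both are large.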

Simple Case. First, consider the case where

    C = [ Σ Ix²     Σ Ix·Iy ]  =  [ λ1  0  ]
        [ Σ Ix·Iy   Σ Iy²   ]     [ 0   λ2 ]

This means the dominant gradient directions align with the x or y axis. If either λ is close to 0, then this is not a corner, so look for locations where both are large. Slide credit: David Jacobs

General Case. It can be shown that since C is symmetric:

    C = R⁻¹ [ λ1  0  ] R
            [ 0   λ2 ]

So every case is like a rotated version of the one on the last slide. Slide credit: David Jacobs

So, To Detect Corners:
- Filter the image with a Gaussian to reduce noise.
- Compute the magnitude of the x and y gradients at each pixel.
- Construct C in a window around each pixel (Harris uses a Gaussian window, i.e. just blur).
- Use linear algebra to find λ1 and λ2, or equivalently solve for the product of the λs: the determinant of C.
- If both λs are big (the product reaches a local maximum and is above a threshold), we have a corner. (Harris also checks that the ratio of the λs is not too high.)

Gradient Orientation Closeup

Corner Detection. Corners are detected where the product of the ellipse axis lengths reaches a local maximum.

Harris Corners. Originally developed as features for motion tracking; greatly reduces the amount of computation compared to tracking every pixel. Translation and rotation invariant (but not scale invariant).

Harris Corner in Matlab

% Harris Corner detector - by Kashif Shahzad
sigma = 2; thresh = 0.1; sze = 11; disp = 0;

% Derivative masks
dy = [-1 0 1; -1 0 1; -1 0 1];
dx = dy';   % dx is the transpose matrix of dy

% Ix and Iy are the horizontal and vertical gradients of the image
Ix = conv2(bw, dx, 'same');
Iy = conv2(bw, dy, 'same');

% Smoothed squared image derivatives
g = fspecial('gaussian', max(1, fix(6*sigma)), sigma);
Ix2 = conv2(Ix.^2, g, 'same');
Iy2 = conv2(Iy.^2, g, 'same');
Ixy = conv2(Ix.*Iy, g, 'same');

% My preferred measure according to research paper
cornerness = (Ix2.*Iy2 - Ixy.^2) ./ (Ix2 + Iy2 + eps);

% Nonmaximal suppression and thresholding
mx = ordfilt2(cornerness, sze^2, ones(sze));      % grey-scale dilate
cornerness = (cornerness == mx) & (cornerness > thresh);   % find maxima

% Find row, col coordinates and plot
[rws, cols] = find(cornerness);
clf; imshow(bw); hold on;
p = [cols rws];
plot(p(:,1), p(:,2), 'or');
title('\bf Harris Corners')

Example (σ=0.1)

Example (σ=0.01)

Example (σ=0.001)

Harris: OpenCV Implementation

Template Matching

Problem: Features for Recognition. Want to find ... in here.

Templates. Find an object in an image! Want invariance to: scaling, rotation, illumination, deformation.

Convolution with Templates

% read image
im = imread('bridge.jpg');
bw = double(im(:,:,1)) ./ 256;
imshow(bw)

% apply FFT
FFTim = fft2(bw);
bw2 = real(ifft2(FFTim));
imshow(bw2)

% select an image patch and subtract its mean
patch = bw(221:240, 351:370);
imshow(patch)
patch = patch - (sum(sum(patch)) / size(patch,1) / size(patch,2));
kernel = zeros(size(bw));
kernel(1:size(patch,1), 1:size(patch,2)) = patch;
FFTkernel = fft2(kernel);

% define a kernel
kernel = zeros(size(bw));
kernel(1, 1) = 1;
kernel(1, 2) = -1;
FFTkernel = fft2(kernel);

% apply the kernel and check out the result
FFTresult = FFTim .* FFTkernel;
result = real(ifft2(FFTresult));
imshow(result)

% apply the kernel, rectify, normalize and threshold
FFTresult = FFTim .* FFTkernel;
result = max(0, real(ifft2(FFTresult)));
result = result ./ max(max(result));
result = (result > 0.5);
imshow(result)

% alternative: direct convolution
imshow(conv2(bw, patch, 'same'))

Template Convolution

Template Convolution

Recap: Convolution Theorem. The Fourier transform turns convolution into multiplication:

    F(I * g) = F(I) · F(g)

Fourier transform of g:

    F(g(x, y))(u, v) = ∫∫_R² g(x, y) exp{-i 2π (ux + vy)} dx dy

F is invertible.
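The convolution theorem is easy to check numerically. A NumPy sketch (hypothetical demo data; note the FFT version computes circular convolution, which is what the slides' Matlab code does as well):

```python
import numpy as np

# Convolution theorem check: F(I * g) = F(I) . F(g), where * is
# circular convolution on an 8x8 grid.
rng = np.random.default_rng(0)
I = rng.random((8, 8))
g = np.zeros((8, 8))
g[0, 0], g[0, 1] = 1.0, -1.0   # simple horizontal-difference kernel

# Convolution via pointwise multiplication in the frequency domain.
fft_result = np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(g)))

# Direct circular convolution for comparison: (I*g)[n] = sum_k g[k] I[n-k].
direct = np.zeros_like(I)
for dy in range(8):
    for dx in range(8):
        if g[dy, dx] != 0.0:
            direct += g[dy, dx] * np.roll(np.roll(I, dy, axis=0), dx, axis=1)
```

The two results agree to floating-point precision, which is why large-template matching is done in the frequency domain.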

Convolution with Templates. Invariances:
  Scaling: No
  Rotation: No
  Illumination: No
  Deformation: Maybe
  Provides good localization: No

Scale Invariance: Image Pyramid

Templates with Image Pyramid. Invariances:
  Scaling: Yes
  Rotation: No
  Illumination: No
  Deformation: Maybe
  Provides good localization: No
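A minimal sketch of the image pyramid idea in NumPy (assumptions: a small binomial [1 2 1]/4 filter stands in for a proper Gaussian, and boundaries are handled by wraparound for brevity):

```python
import numpy as np

def blur_1d(a, axis):
    """Blur with a binomial (Gaussian-like) filter [1 2 1]/4 along one axis."""
    return (0.25 * np.roll(a, 1, axis=axis)
            + 0.5 * a
            + 0.25 * np.roll(a, -1, axis=axis))

def pyramid(img, levels):
    """Blur-and-subsample pyramid: match the template at every level
    to get (coarse) scale invariance."""
    out = [img]
    for _ in range(levels - 1):
        blurred = blur_1d(blur_1d(out[-1], 0), 1)
        out.append(blurred[::2, ::2])   # subsample by a factor of 2
    return out

img = np.random.default_rng(1).random((64, 64))
levels = pyramid(img, 4)   # 64x64, 32x32, 16x16, 8x8
```

Template matching is then run at each level, so an object twice as large in the image matches one level further up the pyramid.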

Template Matching, Commercial: http://www.seeingmachines.com/facelab.htm

Templates?

Scale-invariant feature transform (SIFT)

Let's Return to this Problem. Want to find ... in here.

SIFT. Invariances:
  Scaling: Yes
  Rotation: Yes
  Illumination: Yes
  Deformation: Maybe
  Provides good localization: Yes

SIFT Reference: David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, 60, 2 (2004), pp. 91-110. SIFT = Scale Invariant Feature Transform

Invariant Local Features. Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters. (SIFT Features)

Advantages of invariant local features:
- Locality: features are local, so robust to occlusion and clutter (no prior segmentation)
- Distinctiveness: individual features can be matched to a large database of objects
- Quantity: many features can be generated for even small objects
- Efficiency: close to real-time performance
- Extensibility: can easily be extended to a wide range of differing feature types, with each adding robustness

SIFT On-A-Slide
1. Enforce invariance to scale: Compute difference-of-Gaussian maxima for many different scales; apply non-maximum suppression and find local maxima: the keypoint candidates.
2. Localizable corner: For each maximum, fit a quadratic function. Compute the center with sub-pixel accuracy by setting the first derivative to zero.
3. Eliminate edges: Compute the ratio of eigenvalues; drop keypoints for which this ratio is larger than a threshold.
4. Enforce invariance to orientation: Compute the orientation, to achieve rotation invariance, by finding the dominant gradient direction in the smoothed image (possibly multiple orientations). Rotate the patch so that the orientation points up.
5. Compute feature signature: Compute a "gradient histogram" over each 4x4-pixel subregion of the local image region; do this for a 4x4 array of such subregions. Orient so that the largest gradient points up (possibly multiple solutions). Result: a feature vector with 128 values (16 histograms, 8 gradient orientations each).
6. Enforce invariance to illumination change and camera saturation: Normalize to unit length to increase invariance to illumination. Then threshold all gradients, to become invariant to camera saturation.

SIFT On-A-Slide, Step 1: Scale-space extrema detection (item 1 above).

Find Invariant Corners. (Step 1: enforce invariance to scale.) Keypoints are detected using scale-space extrema of the difference-of-Gaussian function D, defined as:

    D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y)
               = L(x, y, kσ) - L(x, y, σ)

which is efficient to compute.

Finding Keypoints (Corners). Idea: find corners, but with scale invariance. Approach: run a linear filter (difference of Gaussians) at different resolutions of an image pyramid.

Difference of Gaussians: one Gaussian minus another (wider) Gaussian equals the DoG kernel.

Difference of Gaussians in Matlab:

surf(fspecial('gaussian',40,4))
surf(fspecial('gaussian',40,8))
surf(fspecial('gaussian',40,8) - fspecial('gaussian',40,4))

Find Corners with DiffOfGauss

im = imread('bridge.jpg');
bw = double(im(:,:,1)) / 256;
for i = 1 : 10
    gaussd = fspecial('gaussian',40,2*i) - fspecial('gaussian',40,i);
    res = abs(conv2(bw, gaussd, 'same'));
    res = res / max(max(res));
    imshow(res); title(['\bf i = ' num2str(i)]); drawnow
end

Scale-space extrema detection: scale space construction. The image is convolved with Gaussians of increasing scale, giving the stack σ*I, kσ*I, k²σ*I, k³σ*I, k⁴σ*I (one level per interval), with k = 2^(1/interval).
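The scale schedule k = 2^(1/interval) can be computed directly; the number of intervals and the base sigma below are hypothetical choices for illustration (1.6 is the value Lowe suggests):

```python
# Scale-space schedule: with s intervals per octave, k = 2**(1/s),
# so after s blur steps the scale sigma has exactly doubled (one octave).
s = 3             # intervals per octave (hypothetical choice)
sigma0 = 1.6      # base sigma (Lowe's suggested value)
k = 2.0 ** (1.0 / s)
sigmas = [sigma0 * k ** i for i in range(s + 1)]
```

Adjacent levels in this list are the pairs subtracted to form the difference-of-Gaussian images.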

Gaussian Kernel Size: example responses for i = 1 through i = 10.

Key point localization (pyramid: blur, resample, subtract). Detect maxima and minima of the difference-of-Gaussian in scale space: a sample point is selected only if it is a minimum or a maximum of its neighboring points (its 8 neighbors at the same scale, plus 9 at the scale above and 9 below).

Example of keypoint detection: (a) 233x189 image; (b) 832 DoG extrema; (c) 729 above threshold.
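The extremum test above is a direct comparison against the 26 neighbors in the DoG stack. A NumPy sketch (the tiny synthetic DoG stack is made up for the demo):

```python
import numpy as np

def is_extremum(dog, s, y, x):
    """True if dog[s, y, x] is a strict minimum or maximum among its 26
    neighbors: 8 in its own DoG level, 9 in the level above, 9 below."""
    cube = dog[s-1:s+2, y-1:y+2, x-1:x+2]   # 3x3x3 neighborhood
    v = dog[s, y, x]
    others = np.delete(cube.ravel(), 13)    # drop the center value
    return bool(np.all(v > others) or np.all(v < others))

# Tiny synthetic DoG stack with a single bump in the middle level.
dog = np.zeros((3, 5, 5))
dog[1, 2, 2] = 1.0
```

Here (s=1, y=2, x=2) is selected as a keypoint candidate, while its flat neighbors are not.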

SIFT On-A-Slide, Step 2: Key point localization (item 2 above).

Localization. A 3D quadratic function is fit to the local sample points. Start with the Taylor expansion with the sample point as the origin:

    D(X) = D + (∂D/∂X)ᵀ X + ½ Xᵀ (∂²D/∂X²) X

where X = (x, y, σ)ᵀ is the offset from the keypoint location. Take the derivative with respect to X and set it to 0, giving

    (∂²D/∂X²) X̂ = -(∂D/∂X)

This is a 3x3 linear system.
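Solving that 3x3 system with finite differences can be sketched as follows (a hypothetical implementation; the synthetic quadratic DoG stack is constructed so the true sub-pixel offset is known):

```python
import numpy as np

def localize(D, s, y, x):
    """Sub-pixel/sub-scale offset at (s, y, x): solve the 3x3 system
    (d2D/dX2) Xhat = -(dD/dX) for Xhat = (dx, dy, ds)."""
    # First derivatives by central differences.
    g = 0.5 * np.array([D[s, y, x+1] - D[s, y, x-1],
                        D[s, y+1, x] - D[s, y-1, x],
                        D[s+1, y, x] - D[s-1, y, x]])
    # Second derivatives (entries of the 3x3 Hessian).
    Dxx = D[s, y, x+1] - 2*D[s, y, x] + D[s, y, x-1]
    Dyy = D[s, y+1, x] - 2*D[s, y, x] + D[s, y-1, x]
    Dss = D[s+1, y, x] - 2*D[s, y, x] + D[s-1, y, x]
    Dxy = 0.25 * (D[s, y+1, x+1] - D[s, y+1, x-1]
                  - D[s, y-1, x+1] + D[s, y-1, x-1])
    Dxs = 0.25 * (D[s+1, y, x+1] - D[s+1, y, x-1]
                  - D[s-1, y, x+1] + D[s-1, y, x-1])
    Dys = 0.25 * (D[s+1, y+1, x] - D[s+1, y-1, x]
                  - D[s-1, y+1, x] + D[s-1, y-1, x])
    H = np.array([[Dxx, Dxy, Dxs],
                  [Dxy, Dyy, Dys],
                  [Dxs, Dys, Dss]])
    return np.linalg.solve(H, -g)

# Synthetic quadratic DoG with its true peak at x=2.3, y=1.8, s=1.1.
ss, yy, xx = np.meshgrid(np.arange(3), np.arange(4), np.arange(5),
                         indexing='ij')
dog = -((xx - 2.3)**2 + (yy - 1.8)**2 + (ss - 1.1)**2)
offset = localize(dog, 1, 2, 2)   # offset from the sample point (1, 2, 2)
```

Because the synthetic function is exactly quadratic, the recovered offset (0.3, -0.2, 0.1) is exact.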

Filtering.

Contrast (use the previous equation): the value at the extremum is

    D(X̂) = D + ½ (∂D/∂X)ᵀ X̂

If |D(X̂)| < 0.03, throw the keypoint out.

Edginess: use the ratio of principal curvatures to throw out poorly defined peaks. The curvatures come from the Hessian

    H = [ Dxx  Dxy ]
        [ Dxy  Dyy ]

via the ratio of Tr(H)² and Det(H), where

    Tr(H) = Dxx + Dyy
    Det(H) = Dxx·Dyy - (Dxy)²

If Tr(H)²/Det(H) > (r+1)²/r, throw it out (SIFT uses r = 10).

Example of keypoint detection. Threshold on value at DoG peak and on ratio of principal curvatures (Harris approach): (c) 729 left after peak value threshold (from 832); (d) 536 left after testing ratio of principal curvatures.
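The edge test is a one-liner once the Hessian entries are known. A sketch in Python (the example curvature values are hypothetical):

```python
def passes_edge_test(Dxx, Dyy, Dxy, r=10.0):
    """Keep a keypoint only if Tr(H)^2 / Det(H) < (r+1)^2 / r,
    i.e. the two principal curvatures are of similar magnitude."""
    tr = Dxx + Dyy
    det = Dxx * Dyy - Dxy ** 2
    if det <= 0.0:          # curvatures of opposite sign: reject
        return False
    return tr * tr / det < (r + 1.0) ** 2 / r

# An isotropic peak (equal curvatures) passes; a ridge-like point,
# with one curvature much larger than the other, is rejected.
```

Note this avoids computing the eigenvalues explicitly: only their sum (the trace) and product (the determinant) are needed.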

SIFT On-A-Slide, Step 3: Orientation assignment (item 4 above).

Select canonical orientation. Create a histogram of local gradient directions (over 0 to 2π) computed at the selected scale. Assign the canonical orientation at the peak of the smoothed histogram. Each key then specifies stable 2D coordinates (x, y, scale, orientation).
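The orientation histogram can be sketched in NumPy (a hypothetical, simplified version: 36 bins, no Gaussian weighting or histogram smoothing, and np.gradient for the derivatives):

```python
import numpy as np

def dominant_orientation(patch, nbins=36):
    """Magnitude-weighted histogram of gradient directions over a patch;
    return the orientation at the histogram peak (bin center)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)   # directions in [0, 2*pi)
    hist, edges = np.histogram(ang, bins=nbins, range=(0, 2 * np.pi),
                               weights=mag)
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])

# A horizontal ramp has its gradient along +x, so the peak is near 0.
patch = np.tile(np.arange(8.0), (8, 1))
theta = dominant_orientation(patch)
```

Rotating the patch by -theta before building the descriptor is what makes the final feature rotation invariant.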

SIFT On-A-Slide, Step 4: Generation of key point descriptors (items 5 and 6 above).

SIFT vector formation. Thresholded image gradients are sampled over a 16x16 array of locations in scale space. Create an array of orientation histograms: 8 orientations × a 4x4 histogram array = 128 dimensions.
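Assembling the 128-dimensional vector can be sketched as follows. This is a simplified, hypothetical stand-in for the real descriptor (no Gaussian weighting, no trilinear interpolation between bins), but it shows the 4x4-cells-times-8-orientations layout and the normalize / clip / renormalize step:

```python
import numpy as np

def sift_like_descriptor(patch):
    """128-d descriptor from a 16x16 patch: a 4x4 grid of cells, each an
    8-bin histogram of gradient orientations weighted by magnitude."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    desc = np.zeros((4, 4, 8))
    for cy in range(4):
        for cx in range(4):
            sl = (slice(4 * cy, 4 * cy + 4), slice(4 * cx, 4 * cx + 4))
            hist, _ = np.histogram(ang[sl], bins=8, range=(0, 2 * np.pi),
                                   weights=mag[sl])
            desc[cy, cx] = hist
    v = desc.ravel()                        # 4 * 4 * 8 = 128 values
    v = v / (np.linalg.norm(v) + 1e-12)     # illumination: unit length
    v = np.minimum(v, 0.2)                  # saturation: clip big gradients
    return v / (np.linalg.norm(v) + 1e-12)  # renormalize

patch = np.random.default_rng(2).random((16, 16))
d = sift_like_descriptor(patch)
```

The clip-at-0.2 step is what the slide calls thresholding the gradients for camera-saturation invariance.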

Nearest-neighbor matching to feature database. Hypotheses are generated by approximate nearest-neighbor matching of each feature to vectors in the database. SIFT uses the best-bin-first modification to the k-d tree algorithm (Beis & Lowe, 97): a heap data structure identifies bins in order by their distance from the query point. Result: a speedup by a factor of 1000 while still finding the nearest neighbor (of interest) 95% of the time.

3D Object Recognition. Extract outlines with background subtraction.
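The matching criterion, comparing the nearest and second-nearest neighbor distances, can be sketched with brute-force search (a hypothetical example; Lowe replaces the brute-force scan with the best-bin-first k-d tree for speed, and the 0.8 ratio is Lowe's suggested threshold):

```python
import numpy as np

def match_features(query, database, ratio=0.8):
    """For each query descriptor, find its nearest and second-nearest
    database descriptors; accept the match only if the nearest one is
    clearly closer than the runner-up (distance ratio test)."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(database - q, axis=1)
        nn, second = np.argsort(d)[:2]
        if d[nn] < ratio * d[second]:
            matches.append((i, int(nn)))
    return matches

db = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
q = np.array([[0.1, 0.0],     # close to db[0]: unambiguous match
              [5.0, 0.0]])    # halfway between db[0] and db[1]: rejected
matches = match_features(q, db)
```

The ratio test discards ambiguous matches, which matters more than raw distance when matching against a large database.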

3D Object Recognition. Only 3 keys are needed for recognition, so extra keys provide robustness. The affine model is no longer as accurate.

Recognition under occlusion.

Test of illumination invariance. Same image under differing illumination; 273 keys verified in final match.

Examples of view interpolation.

Location recognition

SIFT. Invariances:
  Scaling: Yes
  Rotation: Yes
  Illumination: Yes
  Deformation: Maybe
  Provides good localization: Yes

Questions?